{"text":"\\section{Assigning Types to \\textsc{Houdini}\\xspace Programs}\\label{app:types}\n\nA program $e$ in \\textsc{Houdini}\\xspace is assigned a type using the following rules:\n\\begin{itemize}\n\\item $e = e' \\circ e''$ is assigned a type iff $e'$ has type $\\tau \\rightarrow \\tau'$ and $e''$ has type\n $\\tau' \\rightarrow \\tau''$. In this case, $e$ has type $\\tau \\rightarrow\n \\tau''$. \n\\item $e = \\mathbf{map}_{\\alpha\\zug{\\tau}}~e'$ is assigned a type\n iff $e'$ has the type $\\tau \\rightarrow \\tau'$. In this case, the type of $e$\n is $\\alpha\\zug{\\tau} \\rightarrow \\alpha\\zug{\\tau'}$.\n\\item $e = \\mathbf{fold}_{\\alpha\\zug{\\tau}}~e'~z$ is assigned a type\n iff $e'$ has the type $\\tau' \\rightarrow (\\tau \\rightarrow \\tau')$ and $z$ has the type $\\tau'$. In this case,\n $e$ has type\n $\\alpha\\zug{\\tau} \\rightarrow \\tau'$.\n\\item $e = \\mathbf{conv}_{\\alpha\\zug{\\tau}}~e'$ is assigned a type\n iff $e'$ has the type $\\mathtt{list}\\zug{\\tau} \\rightarrow \\tau'$. In this case, $e$\n has type $\\alpha\\zug{\\tau} \\rightarrow \\alpha\\zug{\\tau'}$.\n\\end{itemize}\nIf it is not possible to assign a type to the program $e$,\nthen it is considered {\\em type-inconsistent} and excluded from the\nscope of synthesis.\n\n\\section{Symbolic Program Synthesis}\\label{app:synthesis}\n\n\\input{synth-appendix}\n\n\n\\section{Details of Experimental Setup}\n\n\nThe initial library models, which have trainable weights, have the following architecture. MLP modules have one hidden layer of size 1024, followed by batch normalization and dropout, followed by an output layer. CNNs have two convolutional layers with 32 and 64 output channels respectively, each with a 5x5 kernel, stride 1 and 0 padding, and each followed by max pooling, followed by spatial dropout. RNN modules are long short-term memory (LSTM) networks with a hidden dimension of 100, followed by an output layer, which transforms the last hidden state. For a given task, we use the input and output types of the new function to decide between MLP, CNN, or RNN, and also deduce the output activation function.\n\n\nThe standalone baseline for counting uses an architecture of the form $\\lambda x. \\mathrm{RNN}(\\mathbf{map}~(\\mathrm{MLP} \\circ \\mathrm{CNN}(x)))$, which is intuitively appropriate for the task, and also matches the shape of some programs commonly returned by \\textsc{Houdini}\\xspace. \n \nAs for the shortest path sequences, \nthe first task for GS1 and GS2 is regression, which we train using a network with architecture $\\mathrm{MLP} \\circ \\mathrm{CNN}$, in which the last layer is linear.\nIn the RNN baseline for the other tasks in the graph sequences, we map a learned $\\mathrm{MLP} \\circ \\mathrm{CNN}$ regression module to each image in the grid. Afterwards, we linearize the grid row-wise, converting it into a list, and then we process it using an LSTM (RNN) with hidden state of size 100. The number was chosen so that both our implementation and the baseline have almost the same number of parameters.\n\n\nFor multi-class classification (Sequence SS - Task 1) and regression (GS1 - Task1, GS2 - Task 1), we used all training images available. For the rest of the tasks in GS1, GS2, GS3 and SS, we use 12000 data points for training, with 2100 for testing. The list lengths for training are [2, 3, 4, 5], and [6, 7, 8] for testing in order to evaluate the generalization to longer sequences. We train for 20 epochs on all list-related tasks and for 1000 epochs for the regression tasks. 
The training dataset for the graph shortest path tasks (GS1 - Task 2, GS2 - Task 2, GS2 - Task 3) consists of 70,000 3x3 grids and 1,000,000 4x4 grids, while the testing dataset consists of 10,000 5x5 grids. The number of epochs for these tasks is 5.\nIn GS2 - Task 3, the \\emph{low-level transfer} baseline reuses the regression function learned in GS2 - Task 1; thus, the image dimensions from MNIST and the colored GTSRB need to match. Therefore, we expanded the MNIST digit images, used for the graph sequences GS1 and GS2, to 28x28x3 dimensionality and resized the images from GTSRB from 32x32x3 to 28x28x3.\n\nFor all experiments, we use early stopping, reporting the test error\nat the epoch where the validation error was minimized.\n\n\n\\section{Programs Discovered in Experiments}\\label{app:programs}\n\n\nTables \\ref{tab:CS1Progs}-\\ref{tab:LS5ProgsEvo} list the top 3 programs and the corresponding classification errors\/RMSEs, on a test dataset, for most of our task sequences. The programs are ordered by their performance on a validation dataset. Finally, the presented programs are the ones trained on all (100\\%) of the training dataset.\nHere we use the syntax $\\mathbf{compose}$ to denote function composition. Program terms with prefix ``nn\\_'' denote neural modules trained during the corresponding tasks, whereas terms with prefix ``lib.'' denote already trained neural modules in the library.\nFor example, in Counting Sequence 1 (Table \\ref{tab:CS1Progs}), ``nn\\_cs1\\_1'' is the neural module trained during Task 1 (\\recogdigit{d_1}).\nAfter completion of this task, the neural module is added to the library and is available for use during the subsequent tasks.\nFor example, the top performing program for Task 3 (\\countdigit{d_1})\nuses the neural module ``lib.nn\\_cs1\\_1'' from the library (and a\nfreshly trained neural module ``nn\\_cs1\\_5'') to construct a program\nfor the counting task.\n\n\n\n\n\n\n\n\n\\section{Summing Experiment}\\label{app:summing}\n\nIn this section we present the results\nfrom task sequence SS in Figure 3 of the main paper.\nThis sequence was designed to demonstrate\nlow-level transfer of a multi-class classifier as well as the advantage of functional methods like foldl in specific situations.\nThe first task of the sequence is a simple MNIST\nclassifier, on which all competing methods do equally\nwell. The second task is a regression task, to learn\nto sum all of the digits in the sequence. The standalone method,\nthe low-level transfer method, and the progressive neural networks all perform equally poorly (note that their lines are overplotted in the Figure),\nbut the synthesized program from \\textsc{Houdini}\\xspace\nis able to learn this function easily\nbecause it is able to use a foldl operation. We also add a new baseline ``standalone\\_with\\_fold'', which reuses the program found by \\textsc{Houdini}\\xspace, but trains the parameters from a random initialization.\n\n\\begin{figure}[h!]\n \\centering\n \\begin{subfigure}[t]{3in}\n \\centering\n \\includegraphics[width=3in]{s4t2.png}\n \\caption{Task 2: Sum digits}\\label{fig:4t2} \\label{fig:sst2}\n \\end{subfigure}\n \\caption{Lifelong learning for ``learning to sum'' (Sequence SS).}\\label{fig:ss}\n\\end{figure}\n\n\\input{programs_td}\n\\input{programs_evo}\n\n\\section{Full Experimental Results on Counting Tasks}\n\nIn the paper, we present results\nfor the counting sequences only for the\nlater tasks, in which transfer learning\nis possible. 
For completeness, in this section\nwe present results on all of the tasks in the sequences. See Figures \\ref{fig:cs1appx}--\\ref{fig:cs3appx}.\nWe note that for the early tasks in each task sequence (e.g., CS1 tasks 1 and 2),\nthere is little relevant information that can be transferred from previous tasks,\nso, as expected, all methods perform similarly; e.g., the output of \\textsc{Houdini}\\xspace is a single library function.\n\n\n\\begin{figure*}[t!]\n \\centering\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s1t1}\n \\caption{Task 1: \\isdigit{d_1}}\\label{fig:cs1t1appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s1t2}\n \\caption{Task 2: \\isdigit{d_2}}\\label{fig:cs1t2appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s1t3}\n \\caption{Task 3: \\countdigit{d_1}}\\label{fig:cs1t3appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s1t4}\n \\caption{Task 4: \\countdigit{d_2}}\\label{fig:cs1t4appx}\n \\end{subfigure}\n \\caption{Lifelong learning for ``learning to count'' (Sequence CS1), demonstrating low-level transfer\n of perceptual recognizers.}\\label{fig:cs1appx}\n\\end{figure*}\n\\begin{figure*}[t!]\n \\centering\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s2t1}\n \\caption{Task 1: \\isdigit{d_1}}\\label{fig:cs2t1appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s2t2}\n \\caption{Task 2: \\countdigit{d_1}}\\label{fig:cs2t2appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s2t3}\n \\caption{Task 3: \\countdigit{d_2}}\\label{fig:cs2t3appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s2t4}\n \\caption{Task 4: \\isdigit{d_2}}\\label{fig:cs2t4appx}\n \\end{subfigure}\n \\caption{Lifelong learning for ``learning to count'' (Sequence CS2), demonstrating high-level transfer\n of a counting network across categories.}\\label{fig:cs2appx}\n\\end{figure*}\n\\begin{figure*}[t!]\n \\centering\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s3t1}\n \\caption{Task 1: \\isdigit{d_1}}\\label{fig:3t1appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s3t2}\n \\caption{Task 2: \\countdigit{d_1}}\\label{fig:3t2appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s3t3}\n \\caption{Task 3: \\counttoy{t_1}}\\label{fig:3t3appx}\n \\end{subfigure}\n \\begin{subfigure}[t]{2.4in}\n \\centering\n \\includegraphics[width=2.3in]{s3t4}\n \\caption{Task 4: \\istoy{t_1}}\\label{fig:3t4appx}\n \\end{subfigure}\n \\caption{Lifelong learning for ``learning to count'' (Sequence CS3), demonstrating high-level transfer across different\n types of images. After learning to count MNIST digits,\n the same network can be used to count images\n of toys.}\\label{fig:cs3appx}\n\\end{figure*}\n\n\\section{Results on Longer Task Sequence LS}\n\nWe report the performance of all methods on the longer task sequence LS\nin Figure~\\ref{fig:ls}.\nTo save space, we report the performance of all methods when trained\non 10\\% of the data. The full learning curves follow similar\npatterns as the other task sequences. 
We report the classification\nand regression tasks from LS separately, because the error functions\nfor the two tasks have different dynamic ranges. \nPlease note that in the Figure, the tasks are labelled starting from $0$.\nOn the classification tasks, we note that all methods have similar\nperformance. Examining the task sequence LS from Figure \\ref{fig:tasks}, we see that these\ntasks have no opportunity to transfer from earlier tasks.\nOn the regression tasks, however, there is opportunity\nto transfer, and we see that \\textsc{Houdini}\\xspace shows much better performance\nthan the other methods.\n\n\n\\begin{figure}[h!]\n\\includegraphics[width=0.45\\textwidth]{longer_tasks1.png}\n\\hspace{1em}\n\\includegraphics[width=0.45\\textwidth]{longer_tasks2.png}\n\\caption{Performance of transfer learning systems on task sequence LS. On the left:\nregression tasks. On the right: classification tasks.}\\label{fig:ls}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\n \\input{intro}\n\n\n \\section{The \\textsc{Houdini}\\xspace Programming Language}\\seclabel{lang}\n\n \\input{lang}\n\n \\section{Learning Algorithm}\\seclabel{synth}\n\n \\input{synth}\n\n \\section{Evaluation}\\seclabel{eval}\n\n \\input{eval}\n\n \\section{Conclusion}\\seclabel{conc}\n\n \\input{conc}\n\n\n \\bibliographystyle{plain}\n\n \n\n\n\n\n\\subsection{Synthesis Using Top-down Iterative Refinement}\n\nNow we give more details on the implementation of \\textsc{Generate}\\xspace based on\niterative refinement. To explain this algorithm, we need to define a\nnotion of a {\\em partial program}. The grammar for partial programs\n$e$ is obtained by augmenting the \\textsc{Houdini}\\xspace grammar (\\figref{language})\nwith an additional rule: $e ::= \\hole{\\tau}$. The form $\\hole{\\tau}$\nrepresents a {\\em hole}, standing for missing code. A program with\nholes has no operational meaning; however, we do have a type system\nfor such programs. This type system follows the rules in\nAppendix~\\ref{app:types}, but in addition, axiomatically assumes any subterm $\\hole{\\tau}$\nto be of type $\\tau$. A partial program that cannot be assigned a type\nis automatically excluded from the scope of synthesis.\n\nNow, the initial input to the algorithm is the type $\\tau$ of the function\nwe want to learn. The procedure proceeds iteratively, maintaining a\npriority queue $Q$ of {\\em synthesis subtasks} of the form $(e, f)$,\nwhere $e$ is a type-safe partial or complete program of type $\\tau$,\nand $f$ is either a hole of type $\\tau'$ in $e$, or a special symbol\n$\\perp$ indicating that $e$ is complete (i.e., free of holes). The interpretation of such a\ntask is to find a replacement $e'$ of type $\\tau'$ for the hole $f$\nsuch that the program $e''$ obtained by substituting $f$ by $e'$ is\ncomplete. (Because $e$ is type-safe by construction, $e''$ is of type\n$\\tau$.) The queue is sorted according to a heuristic cost function\nthat prioritizes simpler programs.\n\nInitially, $Q$ has a single element $(e, f)$, where $e$ is an ``empty''\nprogram of form $\\hole{\\tau}$, and $f$ is\na reference to the hole in $e$. The procedure iteratively processes subtasks in the queue $Q$,\nselecting a task $(e, f)$ at the beginning of each iteration. If the\nprogram $e$ is complete, it is sent to the neural module for parameter\nlearning. Otherwise, the algorithm expands the program $e$ by\nproposing a partial program that fills the hole $f$. 
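\n\nIn outline, the refinement loop can be sketched as follows. This is our own Python sketch, not the paper's implementation: the \\texttt{cost}, \\texttt{expand}, \\texttt{holes}, and \\texttt{tune} procedures are left abstract, and the expansion step performed by \\texttt{expand} is precisely the one described next.\n\\begin{verbatim}\nimport heapq, itertools\n\nclass Hole:\n    # A missing subterm of a given type (the form []_tau).\n    def __init__(self, tau):\n        self.tau = tau\n\ndef generate(tau, cost, expand, holes, tune):\n    # Top-down iterative refinement over a queue of subtasks (e, f).\n    # expand(e, f) yields type-safe programs with hole f filled;\n    # holes(e) lists the remaining holes of e; tune(e) fits parameters.\n    tick = itertools.count()   # tie-breaker: never compare programs\n    root = Hole(tau)           # the 'empty' program\n    queue = [(cost(root), next(tick), root, root)]\n    while queue:\n        _, _, e, f = heapq.heappop(queue)\n        if f is None:          # e is complete: parameter learning\n            yield tune(e)\n            continue\n        for e2 in expand(e, f):\n            fs = holes(e2)\n            for f2 in (fs or [None]):\n                heapq.heappush(queue, (cost(e2), next(tick), e2, f2))\n\\end{verbatim}\n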
To do this, the\nalgorithm selects a production rule from the\ngrammar for partial programs. Suppose the right-hand side of this rule\nis $\\alpha$. The algorithm constructs an expression $e'$ from $\\alpha$\nby replacing each nonterminal in $\\alpha$ by a hole with the same type\nas the nonterminal. If $e'$ is not of the same type as $f$, it is\nautomatically rejected. Otherwise, the algorithm constructs the\nprogram $e'' = e[f \\mapsto e']$. For each hole $f'$ in $e''$, the\nalgorithm adds to $Q$ a new task $(e'', f')$. If $e''$ has no hole, it\nadds to $Q$ a task $(e'', \\perp)$.\n\n\\subsection{Evolutionary Synthesis}\n\nThe evolutionary synthesis algorithm is an iterative procedure that maintains a population of\nprograms. The population is initialized with a set of randomly\ngenerated type-safe parameterized programs. Each iteration of the algorithm performs the following steps. \n\\begin{enumerate}\n\\item Each program in the population is sent to the neural module\n \\textsc{Tune}\\xspace, which computes a {\\em fitness score} (derived from the loss under optimal\n parameters) for the program.\n\n\\item We perform {\\em random proportional selection}, in which a subset\n of the (parameterized) programs are retained, while the other programs are filtered\n out. Programs with higher fitness are more likely to remain in the\n population. \n\n\\item We perform a series of {\\em crossover} operations, each of which\n draws a random pair of programs from the population and swaps a pair\n of randomly drawn subterms of the same type in these programs.\n\n\\item We perform a series of {\\em mutation} operations, each of which randomly\n chooses a program and replaces a random\n subterm of the program with a new subterm of the same type.\n\\end{enumerate}\n\nBecause the crossover and mutation operations only replace terms with\nother terms of the same type, the programs in the population are\nalways guaranteed to be type-consistent. This fact is key to the\nperformance of the algorithm.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\nTypes are an important part of modern programming languages, as one of the prominent abstraction mechanisms over data\\footnote{Even in ``\\emph{untyped}'' languages (Python, say), types are present and relevant.}. This is so obvious that we seldom realise that the concept of type we understand nowadays is not the same as it was perceived in the sixties, and that it was largely absent (as such) in the programming languages of the fifties. Moreover, we now conflate the concept of ``type'' in programming languages with the concept of the same name in mathematical logic---an identification which may be (is it?) good for today, but which is the result of a (slow) convergence of two different paths that started quite apart with different aims. Tracing and recounting this story in detail, with the painstaking accuracy it merits, is well beyond the limits of this paper---it could be the subject of a doctoral thesis. We will instead make several remarks (some historical, some of a more conceptual character) that we hope will be useful as a basis for further investigation. We will argue that there are three different characters at play in programming languages, all of them now called \\emph{types}: the \\emph{technical concept} used in language design to guide implementation; the \\emph{general abstraction mechanism} used as a modelling tool; the \\emph{classifying tool} inherited from mathematical logic. 
We will suggest three possible dates \\emph{ad quem} for their presence in the programming language literature, suggesting that the emergence of the concept of type in computer science is relatively independent from the logical tradition, until the Curry-Howard isomorphism makes an explicit bridge between them.\nAs we will see, the investigation of the arrival on the scene of these three characters will bring us to the (early) seventies.\n\n\\section{From types to ``types''}\nOne of the first questions to be settled is when the very word ``\\emph{type}'' stably entered the technical jargon of programming languages\\footnote{Which is not to say when it was first used in that context. To our knowledge, the very first technical use of the term ``type'' in programming is H.B. Curry's~\\cite{Curry1949}, to distinguish between memory words containing instructions (``\\emph{orders}'') and those containing data (``\\emph{quantities}''). These reports by Curry, as reconstructed by~\\cite{DeMol2013}, contain a surprising and non-trivial mathematical theory of programs, up to a theorem of the style ``well-typed expressions do not go wrong''! Despite G.W. Patterson's review in JSL 22(01), 1957, 102-103, we do not know of any influence of this theory on subsequent developments of programming languages.\n}. Contrary to folklore, early documentation on FORTRAN does not use the word, at least in the technical sense we mean today. In one of the early manuals, dating from 1956~\\cite{FORTAN56}, we read, for instance\n\\begin{quotation}\\noindent\nTwo types of constants are permissible: fixed points (restricted to integers) and\nfloating points {\\tiny (page 9)}\n\\end{quotation}\nor\n\\begin{quotation}\\noindent\nTwo types of variables are also permissible (with the usual distinction based\non the initial letter) {\\tiny (page 10)}\n\\end{quotation}\nbut also\n\\begin{quotation}\\noindent\n32 types of statement {\\tiny (page 8)}\n\\end{quotation}\nThese are generic uses of the term ``type''---``kind'' or ``class'' could be used instead. This is especially clear because on page 14 there is a detailed discussion of what happens\nwhen one mixes integers and floats, and ``type'' is never used. The noun ``mode''\nis used instead\\footnote{\nOf course the distinction between integers and floating points---that is, a type-based distinction, in today's terminology---was present and used, to decide the memory layout of the different kinds of variables, and to compile into the correct arithmetic operations.\n}:\n\\begin{quotation}\\noindent\nA FORTRAN expression may be either a fixed or a floating point expression, but it must not be a mixed expression. This does not mean that a floating point quantity can not appear in a fixed point expression, or vice versa, but rather that a quantity of one mode can appear in an expression of the other mode only in certain ways. (\\ldots) Any fixed point (floating point) constant, variable, or subscripted variable is an expression of the same mode. (\\ldots) If SOMEF is some function of n variables, and if E, F, \\ldots ,H are a set of n expressions of the correct modes for SOMEF, then SOMEF (E, F,\n\\ldots , H) is an expression of the same mode as SOMEF. {\\tiny (page 14)}\n\\end{quotation}\n\nWhen, then, do we find a precise occurrence of our technical term? For sure in the report on Algol 58~\\cite{Algol58}\npublished in December 1958. There, ``type'' is used as a collective representative for ``special types, e.g., \\emph{integral}, or \\emph{Boolean}'' (page 12). 
Declarations (needed for non real-valued identifiers) are called ``type declarations'':\n\\begin{quotation}\\noindent\nType declarations serve to declare certain variables, or functions, to represent quantities of a\ngiven class, such as the class of integers or class of Boolean values.\\\\\nForm: $\\Delta \\sim$ \\emph{type} (I,I,\\ldots I) where \\emph{type} is a symbolic\nrepresentative of some type declarator such as \\emph{integer} or \\emph{boolean}, and the I are identifiers. Throughout the program, the variables, or functions named by the identifiers I, are constrained to refer only to quantities of the type indicated by the declarator {\\tiny (page 16)}.\n\\end{quotation}\nAlgol 58 is the result of a meeting held in Zurich at the end of May 1958, between an ACM group (including Backus and Perlis) and a European group.\nEach group had its own preparatory paper~\\cite{ACMadhocAlgol1958,Bauer1958}, and neither of those papers uses ``type''. Of the two, only the ACM paper discusses the issue of declarations for non real-valued identifiers, using the general term ``class'': \n\\begin{quotation}\\noindent\nA data symbol falls in one of the following classes:\na) Integer b) Boolean c) General {\\tiny (page 4)}\n\\end{quotation}\nDeclarations are called ``Symbol Classification Statements''; their concrete syntax is the same \nas in the final Algol 58 report\\footnote{Recall that \\emph{type} is not a reserved word in Algol 58---it is used in the report for the ``symbolic\nrepresentative of some type declarator such as'' INTEGER, BOOLEAN, etc.}:\n\\begin{quotation}\\noindent\nThe symbol classification statements are:\\\\\nINTEGER ($s_1,\\ldots,s_n$) \\\\\nBOOLEAN ($s_1,\\ldots,s_n$)\n\\end{quotation}\nbut it is striking how, during the Zurich meeting, the committee realised that the different ``classes'' could be grouped together and given a name as a collective---types were born. It is also remarkable that, at least from these references, the technical term appears to be just a semantic shift from the generic one; in particular, there is no clue that in this process the technical term ``type'' from mathematical logic had any role\\footnote{Alan Perlis summarises in 1978, referring to Algol 58: ``The use of `type,' as in `x is of type {\\bf real},' was analogous to that employed in logic. Both programming language design and logic dipped into the English language and came up with the same word for roughly the same purpose''~\\cite{Perlis1981}.\n}. This process will come to maturity in Algol 60~\\cite{Algol60}:\n\\begin{quotation}\\noindent\nIntegers are of type {\\bf integer}. All other numbers are of type {\\bf real}.\n\\end{quotation}\nor\n\\begin{quotation}\\noindent\nThe various ``types''\n({\\bf integer}, \n{\\bf real}, \n{\\bf Boolean}) \nbasically denote properties of values. \n\\end{quotation}\nObserve the word ``types'' in quotes, as if to stress that it is no longer the ordinary word, but the technical one.\n\nWhat this term means is simple---data values are partitioned into disjoint classes; each class is mapped to a specific memory representation. Type information collected from the source program guides the compiler for memory allocation and the choice of machine operations to be used during translation. Moreover, these types provide a certain level of abstraction over such implementation details, preventing the manipulation of a value by operations of a different type. 
However, besides the availability of indexed values (arrays), there is no linguistic provision for dealing with more structured data, or for data ``naturally'' belonging to classes not available as primitive types. \n\n\\section{Data types and abstractions}\nAlgol 58 treats arrays separately from types. One first declares the type of an identifier (unless it is a real-valued one, for which no such declaration is needed), then declares the identifier to be an array, fixing the number of dimensions (and assigning lower and upper bounds for the indexes). With all its maturity with respect to ``types'', Algol 60 makes no change in the treatment of arrays---types denote properties of just ``simple'' values. \n\nThat Algol's provision for primitive data was too restrictive was clear even to its designers\\footnote{E.g., ``ALGOL (\\ldots) lacks the ability to describe different kind of data''~\\cite{McCarthy1961} (note that once again the generic ``kind'' is used, and not ``type''). Cf. also~\\cite{Priestley2011}, page 244.}. To address this ``weakness,''\nJohn McCarthy advocates a\n\\begin{quotation}\\noindent\nway of defining new data spaces in terms of given base spaces and of defining functions on the new spaces in terms of functions on the base spaces.~\\cite{McCarthy1961}. {\\tiny (page 226)}\n\\end{quotation}\nThe new data space constructors are the Cartesian product, the disjoint union, and the power set, each of them equipped with its canonical (universal) maps, which are used to define functions on the new spaces in terms of functions on the base spaces. McCarthy's paper treats the question at a general meta-level: it does not propose a specific language, and it does not use the term ``type''; but it sets a clear roadmap on how to introduce new types in programming languages---instead of inventing an arbitrary new palette of primitive types, provide general, abstract\\footnote{Category theory and Bourbaki are clearly at an arm's length, but there is no explicit reference to them in the paper.} \nmechanisms for the construction of new types from the base ones. Base types could be taken as frugal as the single ``null set'', since natural numbers could be defined from it. Although McCarthy's paper has no explicit reference to any type-theoretic, mathematical logic paper (it cites Church's logic manual, though), we think this is one of the first contacts of the two concepts we are concerned with in this essay, albeit still in an anonymous form.\n\nThe challenge to amend the ``weakness of Algol'' was taken up in more concrete forms, and in similar ways, by Tony Hoare~\\cite{Hoare1965}, and by Ole-Johan Dahl and Kristen Nygaard~\\cite{Dahl:1966}, around 1965. Hoare's paper, with an explicit reference to McCarthy's project, introduces at the same time the concepts of (dynamically allocated) \\emph{record} and \\emph{typed reference}.\nA record\nis an ordered collection of named \\emph{fields}\\footnote{It is a ``structure,'' in C's terminology.}; the life of a record does not follow the life of the block in which the record is created. \nTyped references may be seen as pointers, but no operations are allowed on them, besides creation and dereferencing (that is, access to the ``pointed'', or referenced, object). Moreover, when such a reference is created, the type (or class, in the paper's terminology) of the referenced record is fixed and cannot be dynamically modified. 
\nRecords are not a new idea---the concept was introduced in ``business oriented languages'', FLOWMATIC first, then COBOL (see, e.g.,~\\cite{Hopper1959}), where the field of a record may be a record itself (nested records), thus permitting static hierarchical structures (i.e., trees). \nAlso dynamically allocated structures\\footnote{More precisely: dynamically allocated structures which do not follow a stack-based life policy.}\nwere already available in ``list processing languages'', of which LISP is the main representative. Lisp's \\emph{S-expressions}~\\cite{McCarthy:1960} may indeed be seen as dynamic records composed of two unnamed fields. Moreover, since S-expressions may be nested, they may be used to simulate more complex structures. \nWhat is new in Hoare's proposal, however, is on one side the flexibility in modelling provided by arbitrary named fields; on the other, and crucially, the fact that Hoare's records may contain references to other records, thus allowing for the explicit representation of graph-like structures. \n\nIn Simula~\\cite{Dahl:1966}, Dahl and Nygaard had already implemented analogous ideas, with the aim of designing an extension to Algol for discrete event simulation: a record class is an \\emph{activity}; a record is a \\emph{process}; a field of a record is a local variable of a process (see also~\\cite{Hoare1966}). References are not a prime construct of the language; instead, there are \\emph{sets}, which are bidirectional lists of \\emph{elements}, each of them being (a pointer to) a process. What is really new in Simula I is that a (dynamically created) ``process'' encapsulates both data objects and their associated operators, a concept that will be called \\emph{object} in Simula 67 (see, e.g.,~\\cite{Dahl2001}) and which will be popularised by Alan Kay in the context of Smalltalk~\\cite{Smalltalk1976,Kay:1993}. \n\nOf the two papers we are discussing, it is Hoare's that will have the major, immediate impact. Although the proposal is for an extension to Algol 60, it will not materialise into the ``official'' Algol family---Algol W, which we shall discuss later,\nis not an official product of the Algol committee\\footnote{Hoare's paper will have significant impact also on Algol 68---the legitimate child of the Algol committee---which contains references and structured types. \nTracing the genealogy of Algol 68's \\emph{modes} (Algol 68's terminology for types) is however a task that should be left for the future.\n}.\nThe paper is fundamental because types change their ontology---from an implementation issue, they programmatically become a general abstraction mechanism\\footnote{\nIn John Reynolds's words from 1983, ``Type structure is a syntactic discipline for enforcing levels of abstraction''~\\cite{Reynolds1983}. Or in those of Luca Cardelli and Peter Wegner from their seminal 1985 paper, ``The objective of a language for talking about types is to allow the programmer to name those types that correspond to interesting kinds of behavior''~\\cite{CardelliWegner85}.\n}:\n\\begin{quotation}\\noindent\nthe proposal is no arbitrary extension to an existing language, but represents a genuine abstraction\nof some feature which is fundamental to the art or science of computation. {\\tiny (page 39)}\n\\end{quotation}\nThis happens on (at least) three levels. 
First, it implements McCarthy's project in a specific programming language, extending the concept of type from simple to structured values\\footnote{From the terminological point of view, the paper uses ``classes'' when referring to records, and ``types'' for simple types (integer, real, boolean \\emph{and} references, which are typed: the type of a reference includes the name of the record class to which it refers). On page 48, however, discussing the relations with McCarthy's proposal, we find crystal-clear awareness: ``The current proposal represents part of the cartesian suggestion made by Prof. J. McCarthy as a means of introducing new types of quantity into a language.'' \nFrom Hoare's expression ``record class'', Dahl and Nygaard derive the term ``object class'' in Simula 67~\\cite{Dahl2001}, then simply ``class'' in the object oriented jargon.\n}. Starting from this paper, ``structured values'' are organised in types in the same way as ``simple values'',\nthus opening the way to the modern view of \\emph{data types}. \n\nSecond, types are a linguistic modelling tool:\n\\begin{quotation}\\noindent\nIn the simulation of complex situations in the real world, it is necessary to construct in the computer analogues of the objects of the real world, so that procedures representing types of event may operate upon them in a realistic fashion. {\\tiny (page 46)}\n\\end{quotation}\nThe availability of a flexible way of data structuring (contrasted with the rigid structure provided by arrays) is seen as the linguistic mechanism that provides the classification of ``the objects of the real world''. Moreover, the possibility to embed references into records allows for the construction of complex relational structures. Data are no longer ``coded'' into integers or reals---a record type naturally represents a class of complex and articulated values. Even more importantly, following McCarthy, the language only provides general means of construction---the definition of new classes of data being left to the programmer. \n\nFinally, the combination of record types and typed references provides a robust abstraction over the memory layout used to represent them. By insisting that references be typed, the type checker may statically verify that the field of a record obtained by dereferencing is of the correct type required by the context---primitive types are true abstractions over their representation. \nIn retrospect,\n\\begin{quotation}\\noindent\nI realised that [types] were essential not only for determining memory requirements, but also for avoiding machine-dependent error in a running object program. It was a firm principle of our implementation that the results of any program, even erroneous, should be comprehensible without knowing anything about the machine or its storage layout.~\\cite{Hoare2014}\n\\end{quotation}\nHoare's proposal, including the terminology (``record classes''), will find its context in the joint paper~\\cite{HoareWirth:1966}, and finally will be implemented in Algol W~\\cite{AlgolW}, which will have a significant impact on subsequent languages, being an important precursor of Pascal. \nIn Algol W the picture and the terminology are complete:\n\\begin{quotation}\\noindent\nEvery value is said to be of a certain type.\n(\\ldots) \nThe following types of structured values are distinguished: array: (\\ldots), record: (\\ldots). 
{\\tiny (pages 16-17)}\n\\end{quotation}\n\nThe last step yet to be done was the unification of two aspects that were still distinct in Hoare's proposal---classification (i.e., modelling) and abstraction. In Algol W, primitive types and user-defined record types do not enjoy the same level of abstraction.\nOn one hand, primitive types (integers or floats, say) are an opaque capsule over their implementation-dependent representation, and the type system ensures that on a primitive type only operations of that type are allowed. On the other hand, the user may well define a record class for modelling ``the objects of the real world'', but there is no way of fixing which operations are allowed on such a class, besides the general ones manipulating records and references. The user will probably define functions taking as arguments values of these record classes, but the type system cannot enforce that \\emph{only} such operations are allowed to manipulate those values. \nIn the literature of the early seventies there are several proposals for allowing (and enforcing) stricter checks.\nMorris~\\cite{Morris:1973} advocates that the type system (including user-defined types) guarantee that only the prescribed operations on a type can operate on its values (thus forbidding the manipulation of the representations of those values). \nThis thesis will be further elaborated and formulated in modern terminology\\footnote{Morris talks about ``protection,'' ``authentication'', ``secrecy''.} by Reynolds in his seminal~\\cite{Reynolds:1974}, which also extends it to polymorphic situations:\n\\begin{quotation}\\noindent\nThe meaning of a syntactically-valid program in a ``type-correct'' language should never depend upon the particular representation used to implement its primitive types. (\\ldots) The main thesis of [Morris \\cite{Morris:1973}] is that this property of representation independence should hold for user-defined types as well as primitive types.\n\\end{quotation}\nFrom now on, types will be the central feature of programming languages as we understand them \ntoday\\footnote{The story of abstract data types, their relation to polymorphism, and how their parabola gives way to object oriented programming, is something to be told in a different paper, see~\\cite{MartiniCiE2016}.\n}.\n\\section{Classifying values}\nTypes have inhabited mathematical logic since the early days, with the role of restricting the formation of formulas, in order to avoid paradoxes\\footnote{This is not the place to discuss the emergence and the evolution of the concept of type in logic---we will limit ourselves to a single glimpse at the view of Russell and Whitehead, which will be the dominant one in the twentieth century. Stratification, or classification, in types, orders, or similar ways was already present in the nineteenth century, see, for instance, Frege's \\emph{Stufe} (in the \\emph{Grundgesetze}; before he also used \\emph{Ordnung}), usually translated as ``level'' or ``degree''.\n}. They are a discipline for (statically---as we would say today) separating formulas ``denoting'' something from formulas that ``do not denote''. 
In the words of the Preface to \\emph{Principia Mathematica}~\\cite{RussellWhiteheadPM}:\n\\begin{quotation}\\noindent\nIt should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid.\n\\end{quotation}\nThe opposition ``denoting'' vs.\\ ``non denoting'' becomes, in programming languages, ``non producing errors'' vs.\\ ``producing errors''\\footnote{``Well-typed expressions do not go wrong.''~\\cite{Milner1978}\n}.\nTypes as a classifying discipline for programs---and with the same emphasis on the fact that some valid formulas will be necessarily forbidden, for decidability's sake---are found in the programming languages literature as early as in the PhD thesis of Morris~\\cite{Morris1968}:\n\n\\begin{quotation}\\noindent\nWe shall now introduce a type system which, in effect,\nsingles out a decidable subset of those wfes that are safe; i.e., cannot give rise to ERRORs. This will disqualify certain wfes which do not, in fact, cause ERRORS and thus reduce the expressive power of the language.~{\\tiny(page 89)}\n\\end{quotation}\n\nMorris performs his ``analysis'' by first taking the type-free $\\lambda$-calculus, and then imposing the constraints of the ``simple'' functional types, formulated as a type-assignment system. More specifically, Morris says that ``the type system is inspired by Curry's theory of functionality'', quoting~\\cite{CurryFeys1958}, whereas there is no reference to~\\cite{Church1940}, which apparently would have been a more precise reference. The reason could be that Church formulates his theory directly with typed terms, instead of seeing types as predicates on type-free terms. Were this the reason, Morris' thesis would be the first reference to the now common distinction between typing ``\\`a la Curry'' and ``\\`a la Church''. \n\nAre these the types of mathematical logic? They share the same aims, but the connection is implicit, even unacknowledged. The fact that Church's~\\cite{Church1940} is not cited by Morris could certainly be explained as we argued above, but it is nonetheless revealing of the lack of awareness of the mathematical logic development of the concept. The first explicit connection we know of, albeit made in a non-technical way, is~\\cite{Hoare1972}, but the lack of acknowledgement is going to persist---neither Morris'~\\cite{Morris:1973} nor Reynolds'~\\cite{Reynolds:1974} cites any work using types in logic. Certainly the \\emph{Zeitgeist} was ripe for the convergence of the two concepts, and there was a formidable middleman---$\\lambda$-calculus. Used first by Landin as a tool for the analysis of Algol (and then by Scott, Strachey, Morris, Reynolds, and all the rest), at the dawn of the seventies $\\lambda$-calculus was the \\emph{lingua franca} of conscious programming language theorists, both in the type-free and the typed version. Programming languages and proof-theory were talking the same language, but the conflation was always anonymous. In Reynolds's \\cite{Reynolds:1974} a second order (``polymorphic'') typed lambda-calculus is independently introduced and studied, almost at the same time as Girard \\cite{Girard1971} uses it as a tool to prove cut-elimination for second order logic; \nMilner \\cite{Milner1978} presents a type-reconstruction algorithm for simple types, independently of Hindley~\\cite{Hindley1969} (which will be cited in the final version). 
The Curry-Howard isomorphism~\\cite{Howard1980} (the original manuscript dates from 1969 and was widely circulated, at least in the proof-theory and lambda-calculus communities) will be the catalyst for the actual recognition\\footnote{For a lucid account of the interplay between types, constructive mathematics, and lambda-calculus in the seventies, see~\\cite{Cardone2009}, Section 8.1.\n}, which comes only in Martin-L\\\"of's~\\cite{MartinLof1982}, written and circulated in 1979, which presents a complete, explicit correspondence between proof-theory and functional languages. The paper will have a significant impact on subsequent research (and not only on programming languages).\n\nThis slow mutual recognition of the two fields tells a lot about their essential differences. \nFor most of the ``types-as-a-foundation-of-mathematics'' authors,\ntypes were never supposed to be actually used by the working mathematician\n(with the debatable exception of Russell himself). It was sufficient that\n\\emph{in principle} most of the mathematics could be done in typed languages, so that paradoxes could be avoided.\n\nTypes in programming languages, on the contrary, while being restrictive in the\nsame sense, are used every day by the working computer programmer. \nAnd hence, from the very beginning in Algol, computer science had\nto face the problem of making types more ``expressive'' and ``flexible''\\footnote{\nSee, for instance, the Introduction to~\\cite{Milner1978}, which calls for polymorphism to ensure flexibility. Damas-Milner~\\cite{LCF1979} type inference provides a powerful\nmechanism for enforcing type restrictions while allowing more liberal (but principled)\nreasoning.}. If in proof-theory ``typed'' means first of all ``normalizing'', in computer science there are---since the beginning---well-typed programs which diverge. While the types of mathematical logic are perceived as constraints (they ``forbid'' something, as in Russell's quote above), types in programming languages are experienced as an enabling feature, allowing simpler writing of programs, and, especially, better verification of their correctness\\footnote{\nThis emphasis on the moral need for a programming language to assist (or even guide) the programmer in avoiding bugs or, worse, unintended behaviour in a program, is the core of what Mark Priestley~\\cite{Priestley2011} identifies as the ``Algol research program'', a way of thinking about the design of programming languages which still today informs most work in programming language research.\n}. \n\nThe crucial point, here and in most computer science applications of\nmathematical logic concepts and techniques, is that computer science never used \nideological glasses (types per se; constructive mathematics per se; linear logic per se; etc.),\nbut exploited what it found useful for the design of more elegant,\neconomical, usable artefacts. This eclecticism (or even anarchism, in the sense of epistemological theory) is one of the distinctive traits of the discipline, and one of the reasons for its success.\n\nBut this is the subject of an entirely different paper.\n\n\\section*{Acknowledgments}\nI am happy to thank Gianfranco Prini for helpful discussions (and for his---alas, remote in time---teaching on the subject).\n\n\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\\IEEEPARstart{V}{arious} recent works have built Radio Frequency (RF) sensing systems to perceive and understand the activities of humans. 
Compared with alternative sensing methods, RF sensing has improved usability due to the characteristics of RF signals; for example, RF signals work in all-day and all-weather scenarios, and the sensing is contactless. Existing RF-based human sensing works mainly include human position tracking~\\cite{adib2013see, kotaru2015spotfi, li2017indotrack, ghazalian2020energy, qian2017widar, chen2019residual, zhang2018multitarget, zhang2020mtrack, liu2021multi}, human speed estimation~\\cite{chen2020speednet, qian2018enabling, wu2015non}, human keypoint prediction~\\cite{zhao2018through, zhao2018rf, wang2019person, jiang2020towards}, and gesture recognition~\\cite{niu2021understanding}. However, these works can only perceive humans roughly, and the sensing results usually lack fine details and are not as intuitive as optical sensing results.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=\\columnwidth]{fig\/intro.pdf}\n\t\\end{center}\n\t\\caption{With a source frame as reference, the RFGAN model can synthesize the target human activities based on the RF signals.}\n\t\\label{fig:intror}\n\\end{figure}\n\nIn recent years, Generative Adversarial Networks (GAN)~\\cite{goodfellow2014generative} have achieved promising results in modeling complex multimodal data and synthesizing realistic images. \nFurthermore, to generate meaningful images that meet actual requirements, many conditional GAN models have been proposed to control the generated results. Researchers have explored various kinds of conditions, e.g., category labels~\\cite{ConditionalarXiv14}, text descriptions~\\cite{GenerativeICML16, StackGANICCV17, li2020manigan}, and images~\\cite{Image-to-ImageCVPR17, CartoonGANCVPR18, tang2019cycle, park2019semantic, PersonalizedFashionICCV19, chen2019quality, zhang2020supervised, emami2020spa, richardson2021encoding}. From a technology perspective, most existing GAN models require the conditions either to guide the GAN model explicitly~\\cite{ConditionalarXiv14, Image-to-ImageCVPR17, tang2019cycle, park2019semantic, richardson2021encoding} or to be transformable into conditional variables for GAN using an existing pre-trained model~\\cite{GenerativeICML16, StackGANICCV17, li2020manigan}.\n\nIn this paper, we propose a solution to overcome the limitation of existing RF-based human activity sensing by making the results more visually intuitive, which is valuable in practice. We leverage the power of GAN models to generate photo-realistic sensing results from RF signals. Specifically, a photograph of the people in the scene is provided so that the GAN model has sufficient information about the visual appearance of the people and the environment of the scene. \nWe use millimeter wave (mmWave) radars to build our radio system, which is equipped with two antenna arrays, horizontal and vertical ones, to obtain the RF signals that reflect off the human body. We process the horizontal and vertical RF signal reflections into horizontal and vertical RF heatmaps, which record the activity information of the human. \nThe RF signal is a new kind of conditional data for GAN models. Due to the characteristics of RF signals, the resolution of the horizontal and vertical RF heatmaps is relatively low. Besides, their spatial structures are quite different from those of optical images. 
Therefore, to utilize the RF signals as the conditional data to guide the GAN model, some challenges need to be addressed: firstly, we need to train the RF conditioning encoding network without supervision labels to obtain the desired human activity information; secondly, the information from the horizontal and vertical RF heatmaps needs to be fused to characterize the overall human activity; thirdly, the fused information needs to be injected into the GAN model properly.\n\nTo tackle the above challenges, we design dual RF-Extractors and RNNs in the RFGAN model, one in the generative part and the other in the discriminative part, and we train them by adversarial learning. \nTwo CNN encoders in the RF-Extractor are used to extract features from the horizontal and vertical RF heatmaps, respectively. Then a novel fusion operation is designed to fuse the information by building relationships between the extracted features. \nTo inject the fused information into the GAN model, inspired by~\\cite{huang2017arbitrary, Karras_2019_CVPR, park2019semantic}, we propose to modify the distributions of the latent features in GAN by using a RF-based adaptive normalization. \nFurthermore, we create two cross-modal datasets (\\textit{RF-Walk} \\& \\textit{RF-Activity}) that consist of optical human activity frames and corresponding RF signals to train and test our proposed RFGAN model. The experimental results show that RFGAN can generate better human images than alternative methods. \n\nSince the RF signals do not rely on visible light and can traverse occlusions, our proposed RFGAN model can also work when the lighting is dim or the human is occluded by barriers. For example, when the environment is favorable, we capture a human frame as the source. Our radio system can record the RF signal reflections when the illumination becomes poor or the human is occluded. The proposed RFGAN model can then synthesize human activities based on these collected multimodal data.\n\nTherefore, the main contributions of this paper can be summarized as follows:\n\\begin{itemize}\n\t\\item [1.] We propose a novel RFGAN model to enable RF-based human synthesis. To the best of our knowledge, this is the first work to generate human images from the mmWave radar signals. There are many potential applications that can be derived from this task, e.g., fine-grained human perception and all-day monitoring systems in the smart home.\n\t\\item [2.] Technically, for the new kind of conditional data, i.e., the RF signals, we propose to train the RF conditioning encoding network, i.e., the RF-Extractor and RNN, by adversarial learning. Then we design a novel fusion operation to fuse the horizontal and vertical RF information, which is an effective approach for overall human activity sensing from the two-dimensional RF heatmaps. Due to the spatial structure difference, we propose to use the RF-based adaptive normalizations to inject the fused information into the GAN model.\n\t\\item [3.] We create two cross-modal datasets, i.e., \\textit{RF-Walk} and \\textit{RF-Activity}, which contain thousands of optical human activity frames and corresponding RF signals. The datasets will be released to the public.\n\\end{itemize}\n\n\n\\section{Related Work}\n\\noindent\\textbf{Conditional GAN}\nMany research works have shown that GAN~\\cite{goodfellow2014generative} has the capability of generating realistic images based on the given conditional data. For example, \\cite{ConditionalarXiv14} utilize category labels to generate target digit images. 
Some works~\\cite{Image-to-ImageCVPR17, CartoonGANCVPR18, tang2019cycle, park2019semantic, PersonalizedFashionICCV19, chen2019quality, zhang2020supervised, emami2020spa, richardson2021encoding} introduce GAN-based image-to-image translation frameworks. For some more complex conditional data, e.g., text data, \\cite{GenerativeICML16, StackGANICCV17, li2020manigan} use existing pre-trained models to transform the text into conditioning variables for GAN.\nTo employ these conditions in the networks, some works~\\cite{huang2017arbitrary, Karras_2019_CVPR, park2019semantic} find that utilizing conditional normalization in the hidden layers can contribute to generating target images. \nIn our case, we take RF signals as the condition to guide the image synthesis; this is a new kind of cross-modal conditional data that provides only implicit guidance for GAN and has no existing pre-trained model for conditioning encoding.\n\n\\noindent\\textbf{RF-Based Human Perception}\nRecent years have witnessed much interest in using RF signals to enable various human perception tasks~\\cite{he2020wifi}, including indoor localization and tracking~\\cite{adib2013see, kotaru2015spotfi, ghazalian2020energy, li2017indotrack, qian2017widar, zhang2018multitarget, zhang2020mtrack, liu2021multi}, human speed estimation or human movement detection~\\cite{chen2020speednet, qian2018enabling, wu2015non, niu2021understanding}, human identification~\\cite{zeng2016wiwho, fan2020learning, hsu2019enabling}, and human vital signs inference~\\cite{zhang2019breathtrack, yue2018extracting, rahman2015dopplesleep, zhang2019sj, hsu2017zero}.\nBesides the above signal-processing-based methods, approaches based on deep learning are also utilized to handle radio human perception. For example, \\cite{zhao2017learning} combines convolutional and recurrent neural networks to learn sleep stages from radio signals. \\cite{zhao2018through, zhao2018rf} propose to predict the 2D\/3D human keypoints based on RF signals by building a teacher-student network model. \nIn this paper, we propose to use RF signals for human image synthesis by combining them with conditional GAN models. \n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=0.96\\columnwidth]{fig\/model.pdf}\n\t\\end{center}\n\t\\caption{The architecture of the RFGAN model for generating sequential human activity frames.}\n\t\\label{fig:model}\n\\end{figure}\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[width=0.95\\textwidth]{fig\/rf-gan.pdf}\n\t\\end{center}\n\t\\caption{The training framework of RFGAN at one moment. It consists of a generative part and a discriminative part. The whole model is trained by adversarial learning in an end-to-end manner.}\n\t\\label{fig:rfgan}\n\\end{figure*}\n\n\\noindent\\textbf{Sequence Modeling} \nRecurrent neural networks, such as vanilla RNN, GRU and LSTM, have been widely used for processing sequential data, such as text and speech. They have also been successfully applied to model the temporal dependencies in videos for various vision problems, such as video classification~\\cite{yue2015beyond}, action recognition~\\cite{song2017end, zhang2018fusing, si2018skeleton, agethen2019deep, si2019attention}, object segmentation~\\cite{ventura2019rvos}, and video prediction~\\cite{lu2017flexible}. 
In this work, considering that the RF signals are sequential data and the RF heatmaps are samples at different moments, we utilize recurrent neural networks as the backbone of our model to perceive human activities from RF signals and synthesize the corresponding optical images.\n\n\n\\section{Preliminary}\nOur method relies on transmitting RF signals and receiving the reflections. We adopt Frequency Modulated Continuous Wave (FMCW) and linear antenna arrays for signal transceiving. Inspired by~\\cite{zhao2018through}, our radio system is equipped with two antenna arrays: horizontal and vertical ones, which are utilized to acquire the signal projections on the plane parallel to the ground and the plane perpendicular to the ground, respectively. \nHence, the RF data is composed of both horizontal and vertical heatmaps.\n\nCompared with camera-based visual data, RF signals have some different characteristics. Firstly, RF signals have much lower resolution. The resolution is determined by the bandwidth of the signal and the aperture of the antenna array~\\cite{richards2014fundamentals}. In our system, the depth resolution is about 7.5cm, and the angular resolution is about 1.3 degrees. Secondly, the RF signals suffer from severe multi-path propagation in an indoor environment~\\cite{zhang2020mtrack}, which introduces significant interference in the received signals. Thirdly, the RF signals have different representations of the scene compared with the camera, i.e., horizontal and vertical projections.\n\n\\section{RFGAN}\n\nThe RFGAN model aims to generate sequential human activity frames using a sequence of RF heatmaps (horizontal \\& vertical) and a source frame. \nTo extract and combine the human activity information from the horizontal and vertical RF heatmaps, we design a RF-Extractor, which is built with a sequence model, i.e., a Recurrent Neural Network (RNN), to process the RF sequence.\nTo generate optical human activity frames, we utilize the Generative Adversarial Network (GAN) as the main technological approach in our model, where the source frame is fed to the input layer and the information extracted from RF heatmaps serves as the condition of GAN. \n\nThe architecture of the RFGAN model is shown in Figure~\\ref{fig:model}. The RNN is the backbone of the model, which is designed for sequence data processing and generation. The RF-Extractor and the Generator are plugged into both sides of the RNN to process RF heatmaps and generate human frames. \nIn the following subsections, we first introduce the training framework of the model and then discuss the network structures of the RF-Extractor, the RNN, and the RF-based Generator and Discriminators in detail. Finally, we describe the loss functions used to train the whole model. \n\n\\subsection{Training Framework}\nThe proposed human synthesis model aims to generate sequential human frames from a source frame and the corresponding sequential RF heatmaps. Figure~\\ref{fig:rfgan} shows the adversarial training framework of the human synthesis model at one moment, which consists of a generative part and a discriminative part. The generative part contains a RF-Extractor, a RNN, and a Generator. The RF-Extractor and RNN extract the human position and posture information from the corresponding RF heatmaps and represent it as a RF fused representation. For the Generator, the source frame is fed to the input layer, and the extracted RF fused representation controls the network through normalization at the convolution layers. 
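\n\nA minimal sketch of this conditioning mechanism is given below. It is our own illustration (PyTorch), not the exact implementation: the layer sizes and names are assumptions, and we assume the fused representation has been flattened to a vector, in the spirit of adaptive-instance-normalization-style modulation.\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass RFAdaptiveNorm(nn.Module):\n    # Normalize a feature map, then scale and shift it with\n    # parameters predicted from the RF fused representation.\n    def __init__(self, num_channels, rf_dim):\n        super().__init__()\n        self.norm = nn.InstanceNorm2d(num_channels, affine=False)\n        self.to_gamma = nn.Linear(rf_dim, num_channels)\n        self.to_beta = nn.Linear(rf_dim, num_channels)\n\n    def forward(self, x, rf):\n        # x: (B, C, H, W) generator features, rf: (B, rf_dim)\n        gamma = self.to_gamma(rf).unsqueeze(-1).unsqueeze(-1)\n        beta = self.to_beta(rf).unsqueeze(-1).unsqueeze(-1)\n        return (1 + gamma) * self.norm(x) + beta\n\\end{verbatim}\n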
The output is the generated human frame. There are two Discriminators in the discriminative part. The Activity-Discriminator is designed to ensure that the human position and posture in the generated frame are consistent with the RF signal. It takes the generated frame as the input layer. The RF fused representation extracted by the RF-Extractor and RNN in the discriminative part is used as the condition of this Discriminator. The Appearance-Discriminator ensures that the generated frame maintains the same visual information, such as human appearance, as the source frame; thus the generated frame is concatenated with the source frame at the input layer.\n\nNote that the RF-Extractors and RNNs in the generative part and the discriminative part have the same network structures and RF inputs, but they do not share the network parameters. \nIn the previous GAN literature, the conditional variables that are fed into the Generator and the Discriminator are obtained by the same network model, mainly due to the existence of pre-trained models that enable the desirable condition encoding. However, in our task, there is no existing pre-trained model for RF encoding. Thus, we propose dual RF-Extractors and RNNs under an adversarial training framework to learn to transform the RF heatmaps, one for the generation task and the other for the discrimination task. \nSpecifically, the RF-Extractor and RNN in the generative part update with the Generator, whereas the RF-Extractor and RNN in the discriminative part update with the Activity-Discriminator. The update process is adversarial training and the whole model is trained in an end-to-end manner.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=0.93\\columnwidth]{fig\/rf-extractor.pdf}\n\t\\end{center}\n\t\\caption{The structure of RF-Extractor. It consists of two CNN encoders, a fusion operation, and an RNN.}\n\t\\label{fig:rfextractor}\n\\end{figure}\n\n\\subsection{RF-Extractor \\& RNN}\nThe horizontal RF heatmaps and the vertical RF heatmaps record human activities from different viewpoints and each of them only contains partial human activity information, i.e., the horizontal RF heatmap is a projection of the signal reflections on a plane parallel to the ground, which leads to the loss of the human height information, whereas the vertical heatmap is a projection of the reflected signals on a plane perpendicular to the ground, where the human width information is lost. Thus, it is a challenge to extract and combine the horizontal and vertical RF information so as to recover the whole human activity information.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=0.93\\columnwidth]{fig\/encode.pdf}\n\t\\end{center}\n\t\\caption{The RF heatmap and the corresponding feature maps.}\n\t\\label{fig:encode}\n\\end{figure}\n\nIn our proposed network structure, as shown in Figure~\\ref{fig:rfextractor}, we first use two standard CNN encoders to transform the horizontal and vertical RF heatmaps into feature maps, respectively. \nThe original RF heatmaps record the reflected signals throughout the whole room. After a differential operation along the time axis, only the signals introduced by the moving human are retained. As shown in the left part of Figure~\\ref{fig:encode}, the signal reflections from the moving human (bright area) occupy only a very small area of the RF heatmap. Therefore, we use an encoder that consists of several convolution layers to reduce the RF heatmap size and focus on the bright area. 
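\n\nA minimal sketch of such an encoder (PyTorch; the number of layers and the channel widths are illustrative assumptions rather than the values used in our implementation):\n\\begin{verbatim}\nimport torch.nn as nn\n\n# a stack of strided convolutions that shrinks the heatmap so that the\n# small bright (human) area dominates the resulting feature maps\nrf_encoder = nn.Sequential(\n    nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),\n    nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),\n    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),\n)  # input (B, 1, H, W) heatmap -> output (B, N=64, H\/8, W\/8)\n\\end{verbatim}\n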
Since the values of signal reflections from areas without humans (dark areas) are very small and close to $0$, the convolution results, which are denoted as feature maps (shown in the right part of Figure~\\ref{fig:encode}), can capture the human posture information from the bright area and the human position information from the location of the bright area.\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.96\\columnwidth]{fig\/related-pixel.pdf}\n\t\\end{center}\n\t\\caption{The horizontal and vertical feature maps contain the human activity information on the horizontal and vertical plane, respectively. The red cuboids are the related-feature-vectors, which contain the activity information introduced by the same human body part.}\n\t\\label{fig:relatedpixel}\n\\end{figure}\n\nAfter encoding RF heatmaps, a fusion operation (RF-Fusion) is proposed to combine the horizontal and vertical feature maps into a fused representation.\nAs shown in Figure~\\ref{fig:relatedpixel}, the horizontal feature maps can be represented as an $H_{hor} \\times W_{hor} \\times N$ tensor, which uses $H_{hor} \\times W_{hor}$ feature vectors on a horizontal plane to record the human activity information, and each feature vector is $N$ dimensional. For the vertical feature maps, $H_{ver} \\times W_{ver}$ feature vectors are used on a vertical plane to record the human activity information, and each feature vector is also $N$ dimensional. We refer to the feature vectors in the horizontal and vertical feature maps as related-feature-vectors if they record the activity information introduced by the same human body part (see Figure~\\ref{fig:relatedpixel}). Combining these related-feature-vectors can help characterize the overall human activity. However, it is difficult to find the one-to-one correspondence between them directly.\n\nTo address this problem and bridge these related-feature-vectors, we define RF-Fusion as follows: \nfor each feature vector in the horizontal feature maps, the dot product is applied with every feature vector in the vertical feature maps, and the results are denoted as an RF fused representation. For example, as shown in Figure~\\ref{fig:rffusion}, the dot products between the first horizontal feature vector and every vertical feature vector generate $H_{ver} \\times W_{ver}$ values, which are the first row of the RF fused representation. In such a way, we can obtain the RF fused representation as follows:\n\\begin{equation}\n\t\\label{eqn:fusion}\n\tR(i, j)=\\frac{H(i)V(j)^T}{\\sqrt{N}}, \\quad i \\in [0, H_{hor} \\times W_{hor}),\\ j \\in [0, H_{ver} \\times W_{ver}),\n\\end{equation}\nwhere $R(i, j)$ is the value at the point $(i, j)$ in the RF fused representation, and $H(i)$ and $V(j)$ refer to the $i$-th feature vector of the horizontal feature maps and the $j$-th feature vector of the vertical feature maps, respectively. The denominator $\\sqrt{N}$ scales the values.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=\\columnwidth]{fig\/rf-fusion.pdf}\n\t\\end{center}\n\t\\caption{The RF-Fusion operation.}\n\t\\label{fig:rffusion}\n\\end{figure}\n\n\\noindent\\textbf{Why does RF-Fusion work?} The traditional feature map fusion approach is to concatenate feature maps along the channel directly, which is effective when feature maps have the same spatial structure, i.e., the feature vectors in the different feature maps are aligned and can be combined by concatenation. 
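\n\nComputationally, the RF-Fusion of Eq.~(\\ref{eqn:fusion}) is just a batched matrix product between the flattened horizontal and vertical feature maps; a minimal sketch (PyTorch; the shapes follow the text, the function name is ours):\n\\begin{verbatim}\nimport torch\n\ndef rf_fusion(feat_hor, feat_ver):\n    # feat_hor: (B, N, Hh, Wh), feat_ver: (B, N, Hv, Wv)\n    n = feat_hor.shape[1]\n    h = feat_hor.flatten(2).transpose(1, 2)  # (B, Hh*Wh, N)\n    v = feat_ver.flatten(2)                  # (B, N, Hv*Wv)\n    # entry (i, j) is the dot product H(i) . V(j), scaled by sqrt(N)\n    return torch.bmm(h, v) \/ n ** 0.5        # (B, Hh*Wh, Hv*Wv)\n\\end{verbatim}\n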
Such channel-wise concatenation is not applicable here: in the RF fusion step, for a given feature vector in the horizontal feature maps, we do not know which feature vector in the vertical feature maps is related to it. Thus our proposed RF-Fusion builds relationships between every feature vector in the horizontal feature maps and every feature vector in the vertical feature maps. \nFor the learned RF-Extractor, the related-feature-vectors in the horizontal and vertical feature maps are expected to be highly correlated and thus to produce large values in the RF fused representation, which show up as bright points. Therefore, the distribution and values of these bright points can characterize the overall human activity.\n\nFinally, the RF fused representations are fed into the RNN to be adjusted. We propose this procedure based on the following fact: human activities, such as arm swing, leg raising, etc., are generally continuous; thus the RF fused representation that contains the human position and posture information at a certain moment is interrelated with several preceding and subsequent RF fused representations. The RNN model can adjust the current representation by considering its neighbors, and the adjusted results contain smoother and more accurate human activity information. In our model, we use a three-layer BiLSTM as the proposed RNN, and the hidden states in the last layer are adopted as the adjusted RF fused representations.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=\\columnwidth]{fig\/generator.pdf}\n\t\\end{center}\n\t\\caption{The structure of Generator.}\n\t\\label{fig:generator}\n\\end{figure}\n\n\\subsection{RF-Based Generator \\& Discriminators}\nThe Generator in our model consists of an encoder, several residual blocks, and a decoder (see Figure~\\ref{fig:generator}). The encoder and the decoder contain the same numbers of convolutional and deconvolutional layers. The residual blocks are divided into several groups and each group has the same number of blocks. The feature extracted from the source frame is concatenated with several feature maps in the residual blocks to maintain the appearance information. For the Activity-Discriminator and the Appearance-Discriminator, we use network structures inspired by PatchGAN~\\cite{Image-to-ImageCVPR17}. They both consist of convolutional layers, where the first layer does not use normalization and the last layer is a plain convolution that produces a 1-dimensional output.\n\nSpecifically, to enable the RF-based condition setting in RFGAN, we propose an RF-based adaptive instance normalization (RF-InNorm) in the hidden layers of the Generator and the Activity-Discriminator, which injects the RF fused representation by modifying the feature distribution. 
The RF-InNorm is defined as \n\\begin{equation}\n\t\\label{eqn:RFInNorm}\n\t\\text{RF-InNorm}(\\bm{f}^{n})=F_{\\gamma}^n(\\bm{h})\\cdot\\frac{\\bm{f}^{n}-\\bm{\\mu}^n}{\\bm{\\sigma}^n}+F_{\\beta}^n(\\bm{h}),\n\\end{equation}\nwhere $\\bm{f}^n$ is the feature map of the $n$-th layer in the Generator or Activity-Discriminator, and $\\bm{\\mu}^n$ and $\\bm{\\sigma}^n$ are the mean and standard deviation of the feature map.\n$\\bm{h}$ refers to the RF fused representation, and $F_{\\gamma}^n(\\cdot)$ and $F_{\\beta}^n(\\cdot)$ are the learned nonlinear functions, which specialize $\\bm{h}$ to RF-based modulation parameters.\nTherefore, the feature map $\\bm{f}^n$ is first normalized and then scaled and biased by $F_{\\gamma}^n(\\bm{h})$ and $F_{\\beta}^n(\\bm{h})$ to incorporate the RF fused representation condition. \n\nFor the Appearance-Discriminator, the source frame condition is concatenated with the input and fed into the network. \n\n\\subsection{Loss Functions}\nThe training process of the RFGAN is a two-player minimax game between the generative part and the discriminative part. For the discriminative part, we set the loss for the Activity-Discriminator and the RF-Extractor and RNN as:\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:d_act_loss}\n\t\t\\mathcal{L}^{act} = \\mathcal{L}^{act}_{LSD}+\\lambda\\mathcal{L}^{act}_{GP},\n\t\\end{split}\n\\end{equation}\nwhere $\\mathcal{L}^{act}_{LSD}$ is the adversarial loss inspired by LSGAN~\\cite{mao2017least}, and $\\mathcal{L}^{act}_{GP}$ is the gradient regularization term that penalizes the discriminator gradients only on the true data to stabilize the training process~\\cite{mescheder2018training}. These two terms are calculated by\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:d_act_lsd}\n\t\t\\mathcal{L}^{act}_{LSD}=&\\mathbb{E}_{\\bm{x}_r \\sim \\mathbb{P}}\\left [(D_{act}(\\bm{x}_r|E_{dis}(\\bm{r}_h,\\bm{r}_v))-1)^2\\right ] + \\\\ &\\mathbb{E}_{\\bm{x}_f \\sim \\mathbb{Q}}\\left [(D_{act}(\\bm{x}_f|E_{dis}(\\bm{r}_h,\\bm{r}_v))-0)^2\\right ],\n\t\\end{split}\n\\end{equation}\nand\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:d_act_gp}\n\t\t\\mathcal{L}^{act}_{GP}= \\mathbb{E}_{\\bm{x}_r \\sim \\mathbb{P}} [\\left\\| \\nabla D_{act}(\\bm{x}_r|E_{dis}(\\bm{r}_h,\\bm{r}_v))\\right\\|_2^2],\n\t\\end{split}\n\\end{equation}\nwhere $D_{act}$ is the Activity-Discriminator, $E_{dis}$ is the RF-Extractor and RNN in the discriminative part, $\\bm{x}_r$ and $\\bm{x}_f$ are the ground-truth and the generated human frame, respectively, and $\\bm{r}_h$ and $\\bm{r}_v$ refer to the horizontal RF heatmap and the vertical RF heatmap, respectively.\n\nFor the Appearance-Discriminator, the loss function is similar to the loss for the Activity-Discriminator and the RF-Extractor and RNN:\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:d_app_loss}\n\t\t\\mathcal{L}^{app} = \\mathcal{L}^{app}_{LSD}+\\lambda\\mathcal{L}^{app}_{GP},\n\t\\end{split}\n\\end{equation}\nwhere $\\mathcal{L}^{app}_{LSD}$ and $\\mathcal{L}^{app}_{GP}$ are calculated by\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:d_app_lsd}\n\t\t\\mathcal{L}^{app}_{LSD}=&\\mathbb{E}_{\\bm{x}_r \\sim \\mathbb{P}}\\left [(D_{app}(\\bm{x}_r|\\bm{x}_s)-1)^2\\right ] + \\\\ &\\mathbb{E}_{\\bm{x}_f \\sim \\mathbb{Q}}\\left [(D_{app}(\\bm{x}_f|\\bm{x}_s)-0)^2\\right ],\n\t\\end{split}\n\\end{equation}\nand\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:d_app_gp}\n\t\t\\mathcal{L}^{app}_{GP}= \\mathbb{E}_{\\bm{x}_r \\sim \\mathbb{P}} [\\left\\| \\nabla 
D_{app}(\\bm{x}_r|\\bm{x}_s)\\right\\|_2^2],\n\t\\end{split}\n\\end{equation}\nwhere $\\bm{x}_s$ is the source frame.\n\nTherefore, the final loss function of the discriminative part is\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:d_loss}\n\t\t\\mathcal{L}_{D} = \\mathcal{L}^{act} + \\mathcal{L}^{app}.\n\t\\end{split}\n\\end{equation}\n\nFor the generative part, the loss function is\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:g_loss}\n\t\t\\mathcal{L}_{G} = \\mathcal{L}_{LSG}+\\alpha\\mathcal{L}_{IMG}+\\beta\\mathcal{L}_{FEA},\n\t\\end{split}\n\\end{equation}\nwhere $\\mathcal{L}_{LSG}$ is the corresponding adversarial loss, while $\\mathcal{L}_{IMG}$ and $\\mathcal{L}_{FEA}$ are designed for synthesizing images with better visual quality; they push the generated images towards the ground-truth images in the image space and the feature space, respectively. They are calculated by:\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:g_lsg}\n\t\t\\mathcal{L}_{LSG}=&\\mathbb{E}_{\\bm{x}_f \\sim \\mathbb{Q}}\\left [(D_{act}(\\bm{x}_f|E_{gen}(\\bm{r}_h,\\bm{r}_v))-1)^2\\right ] + \\\\ &\\mathbb{E}_{\\bm{x}_f \\sim \\mathbb{Q}}\\left [(D_{app}(\\bm{x}_f|\\bm{x}_s)-1)^2\\right ],\n\t\\end{split}\n\\end{equation}\nand\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:g_img}\n\t\t\\mathcal{L}_{IMG}=\\mathbb{E}_{\\bm{x}_f \\sim \\mathbb{Q}, \\bm{x}_r \\sim \\mathbb{P}}\\left \\|\\bm{x}_f - \\bm{x}_r\\right \\|_1,\n\t\\end{split}\n\\end{equation}\n\\begin{equation}\n\t\\begin{split}\n\t\t\\label{eqn:g_feature}\n\t\t\\mathcal{L}_{FEA}=&\\sum_{i}^{K}\\mathbb{E}_{\\bm{x}_f \\sim \\mathbb{Q}, \\bm{x}_r \\sim \\mathbb{P}}\\left \\|\\bm{f}_{\\bm{x}_f}^{i,act} - \\bm{f}_{\\bm{x}_r}^{i,act} \\right \\|_1 + \\\\ &\\sum_{i}^{K}\\mathbb{E}_{\\bm{x}_f \\sim \\mathbb{Q}, \\bm{x}_r \\sim \\mathbb{P}}\\left \\|\\bm{f}_{\\bm{x}_f}^{i,app} - \\bm{f}_{\\bm{x}_r}^{i,app} \\right \\|_1,\n\t\\end{split}\n\\end{equation}\nwhere $E_{gen}$ is the RF-Extractor and RNN in the generative part, $\\bm{f}_{\\bm{x}}^{i,act}$ refers to the feature map of $\\bm{x}$ at layer $i$ in the Activity-Discriminator, $\\bm{f}_{\\bm{x}}^{i,app}$ refers to the feature map of $\\bm{x}$ at layer $i$ in the Appearance-Discriminator, and $K$ is the total number of layers.\n\nThe whole training procedure is described in Algorithm~\\ref{alg:algo}.\n\n\\begin{algorithm}[t]\n\t\\caption{Training algorithm for RFGAN.}\n\t\\label{alg:algo}\n\t\\begin{flushleft}\n\t\t\\textbf{Set:} The batch size $m$ is 2, the hyperparameters $\\lambda=\\alpha=\\beta=10.0$, the learning rate $\\eta$ is 0.0002.\\\\\n\t\t\\textbf{Initialize:} \n\t\tInitial $\\Phi_{E_{gen}}$ for the RF-Extractor and RNN in the generative part,\n\t\tinitial $\\Phi_{E_{dis}}$ for the RF-Extractor and RNN in the discriminative part,\n\t\tinitial $\\Phi_{G}$ for the Generator,\n\t\tinitial $\\Phi_{D_{act}}$ for the Activity-Discriminator,\n\t\tand initial $\\Phi_{D_{app}}$ for the Appearance-Discriminator.\n\t\\end{flushleft}\n\t\\begin{algorithmic}[1]\n\t\t\\WHILE{$\\Phi_{E_{gen}}, \\Phi_{G}$ have not converged}\n\t\t\\STATE Sample a batch of $\\{\\bm{r}_h, \\bm{r}_v, \\bm{x}_s, \\bm{x}_r\\}$ from the dataset\n\t\t\\STATE Update $\\Phi_{E_{dis}}, \\Phi_{D_{act}}, \\Phi_{D_{app}}$ using Adam with:\n\t\t\\STATE \\quad\n\t\t$\\Phi_{E_{dis}} \\leftarrow \\Phi_{E_{dis}} - \\eta\\frac{1}{m}\\nabla_{\\Phi_{E_{dis}}}\\sum_{i=1}^{m} \\mathcal{L}^{act}$\n\t\t\\STATE \\quad\n\t\t$\\Phi_{D_{act}} \\leftarrow \\Phi_{D_{act}} - \\eta\\frac{1}{m}\\nabla_{\\Phi_{D_{act}}}\\sum_{i=1}^{m} 
\\mathcal{L}^{act}$\n\t\t\\STATE \\quad\n\t\t$\\Phi_{D_{app}} \\leftarrow \\Phi_{D_{app}} - \\eta\\frac{1}{m}\\nabla_{\\Phi_{D_{app}}}\\sum_{i=1}^{m} \\mathcal{L}^{app}$\n\t\t\\STATE Update $\\Phi_{E_{gen}}, \\Phi_{G}$ using Adam with:\n\t\t\\STATE \\quad\n\t\t$\\Phi_{E_{gen}} \\leftarrow \\Phi_{E_{gen}} - \\eta\\frac{1}{m}\\nabla_{\\Phi_{E_{gen}}}\\sum_{i=1}^{m} \\mathcal{L}_{G}$\n\t\t\\STATE \\quad\n\t\t$\\Phi_{G} \\leftarrow \\Phi_{G} - \\eta\\frac{1}{m}\\nabla_{\\Phi_{G}}\\sum_{i=1}^{m} \\mathcal{L}_{G}$\n\t\t\\ENDWHILE\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Experiments}\n\n\\subsection{Implementation}\n\\noindent\\textbf{Data} \nWe collected the RF signal reflections at 20Hz from our mmWave radar system, i.e., the horizontal and vertical antenna arrays generate 20 pairs of heatmaps per second. To obtain the optical human images, we attach an RGB camera to the mmWave radar system to record videos at 10 FPS. In order to reduce the coupling between the human and the environment, we collected the data in 9 indoor scenes. There were 6 volunteers involved in the data collection, each wearing multiple outfits. \n\nIn total, we create two types of RF-Vision datasets, i.e., \\textit{RF-Walk} and \\textit{RF-Activity}. \n\\textit{RF-Walk} contains 67,860 human random walking frames and 135,720 pairs of corresponding RF heatmaps. We use 54,525 frames of human walking images and 109,050 pairs of RF heatmaps for training and the rest for testing. \n\\textit{RF-Activity} contains 68,680 human daily activity (e.g., stand, walk, squat, sit, etc.) frames and 137,360 pairs of corresponding RF heatmaps. We use 55,225 frames of human activity images and 110,450 pairs of RF heatmaps for training and the rest for testing. \nEach human activity frame is resized to $320\\times180$, and the shape of each RF heatmap is $201\\times160$. \n\n\\noindent\\textbf{Training details}\nThe proposed model is trained using the Adam solver. The learning rate is set to 0.0002 for both the generative part and the discriminative part. The number of epochs is 80 and the batch size is 2. The hyperparameters $\\lambda$, $\\alpha$, and $\\beta$ are equal and set to 10.0. We implement our method using PyTorch and all experiments can be run on a commodity workstation with a single GTX-1080 graphics card.\n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[width=\\textwidth]{fig\/baselines.pdf}\n\t\\end{center}\n\t\\caption{Qualitative comparison of different methods. The 1st row shows the source frames. The 2nd row shows the horizontal and vertical RF heatmaps. The 3rd row shows the ground-truth human activity frames captured by the optical camera. 
The 4th to the 6th rows show the generated results by Img\\&RF, RF-Concat, and RFGAN.}\n\t\\label{fig:baselines}\n\\end{figure*}\n\n\\begin{table*}\n\t\\begin{center}\n\t\t\\renewcommand{\\arraystretch}{1.6}\n\t\t\\scalebox{1.0}{\n\t\t\t\\setlength{\\tabcolsep}{4.2mm}{\n\t\t\t\t\\begin{tabular}{l|ccc|ccc}\n\t\t\t\t\t\\hline\n\t\t\t\t\t& \\multicolumn{3}{c|}{\\textit{RF-Walk}} & \\multicolumn{3}{c}{\\textit{RF-Activity}} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tMethods & FID $\\downarrow$ & SSIM $\\uparrow$ & User study $\\uparrow$ & FID $\\downarrow$ & SSIM $\\uparrow$ & User study $\\uparrow$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\tImg\\&RF & 27.84 & 0.9622 & 42.11\\% & 22.03 & 0.9643 & 35.89\\%\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tRF-Concat & 21.08 & 0.9689 & 69.23\\% & 19.19 & 0.9707 & 68.42\\% \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tRFGAN & \\textbf{15.75} & \\textbf{0.9695} & \\textbf{80.76\\%} & \\textbf{15.05} & \\textbf{0.9708} & \\textbf{78.12\\%} \\\\\n\t\t\t\t\t\\hline\n\t\t\\end{tabular}}}\n\t\\end{center}\n\t\\caption{Quantitative comparison of different methods.}\n\t\\label{tab:baselines}\n\\end{table*}\n\n\\subsection{Evaluation Metric}\nWe evaluate our proposed model from the following aspects:\n\n\\noindent\\textbf{- Image Quality (FID):} \nWe use the popular FID metric~\\cite{GANsTrainedByarXiv17} to evaluate the quality of the generated images. It computes the Fr\\'echet Inception Distance between the sets of generated images and real images. The smaller the distance, the better the quality.\n\n\\noindent\\textbf{- Image Similarity (SSIM):} \nFor each test sample, we calculate the visual structural similarity (SSIM)~\\cite{ImageQualityAssessmentTIP04} to measure the similarity between the generated and the ground-truth human frames. A higher value means that the model can generate a human frame more similar to the ground truth.\n\n\\noindent\\textbf{- User Study:} \nWe conduct user surveys to evaluate whether our model can synthesize human frames with correct positions and postures. We first show our subjects some generated human frames and the corresponding ground-truth human frames; then each subject is asked to assess (yes\/no) the generated results based on the human positions and postures. There are 10 subjects involved in the study. \n\n\\begin{figure*}\n\t\\begin{center}\n\t\t\\includegraphics[width=\\textwidth]{fig\/ablation.pdf}\n\t\\end{center}\n\t\\caption{Qualitative comparison of the ablation study. The 1st row shows the horizontal and vertical RF heatmaps. The 2nd row shows the ground-truth human activity frames captured by the optical camera. The 3rd to 5th rows show the generated results by RFGAN-w\/o-RFExtD, RFGAN-w\/o-RNN, and RFGAN (full).}\n\t\\label{fig:ablation}\n\\end{figure*}\n\n\\subsection{Baselines}\nTo our knowledge, this work is the first attempt to utilize RF signals to generate realistic human activity frames, and there is no existing suitable baseline method to compare with. Therefore, we modify our model with some classic approaches that are widely used in GANs or related works, and the modified models are set as the baselines:\n\n\\noindent\\textbf{- Img\\&RF:} \nTo enable the RF-based condition setting, we propose an RF-Extractor with an RNN to encode RF heatmaps and use RF-InNorm to inject the extracted information. 
An alternative approach is to concatenate the RF condition with the input image directly, which is effective when the conditions provide explicit guidance for the GAN, e.g., pose-guided human synthesis~\\cite{tang2019cycle}. However, the RF conditions are obscure data and have totally different spatial structures from optical images.\n\n\\noindent\\textbf{- RF-Concat:} \nIn our model, we propose a novel RF-Fusion operation to combine the horizontal and the vertical RF information, whereas the state-of-the-art approach for fusing RF information is to concatenate the features from RF signals along the channel directly, as in~\\cite{zhao2018through, sengupta2020mm}. Most existing learning-based RF sensing works just follow this common approach from the computer vision literature to combine the two-dimensional RF information. In this paper, we design a specialized operation for RF signal data.\n\n\\noindent\\textbf{- Ground Truth:} \nAnother baseline is the ground-truth human activity frames captured by the optical camera. \n\n\\begin{table*}\n\t\\begin{center}\n\t\t\\renewcommand{\\arraystretch}{1.6}\n\t\t\\scalebox{1.0}{\n\t\t\t\\setlength{\\tabcolsep}{3.6mm}{\n\t\t\t\t\\begin{tabular}{l|ccc|ccc}\n\t\t\t\t\t\\hline\n\t\t\t\t\t& \\multicolumn{3}{c|}{\\textit{RF-Walk}} & \\multicolumn{3}{c}{\\textit{RF-Activity}} \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tMethods & FID $\\downarrow$ & SSIM $\\uparrow$ & User study $\\uparrow$ & FID $\\downarrow$ & SSIM $\\uparrow$ & User study $\\uparrow$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\tRFGAN-w\/o-RFExtD & 58.36 & 0.9618 & 0.00\\% & 45.71 & 0.9630 & 0.00\\% \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tRFGAN-w\/o-RNN & 16.41 & 0.9691 & 71.79\\% & 18.11 & 0.9705 & 72.00\\% \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tRFGAN (full) & \\textbf{15.75} & \\textbf{0.9695} & \\textbf{80.76\\%} & \\textbf{15.05} & \\textbf{0.9708} & \\textbf{78.12\\%} \\\\\n\t\t\t\t\t\\hline\n\t\t\\end{tabular}}}\n\t\\end{center}\n\t\\caption{Quantitative comparison for the ablation study.}\n\t\\label{tab:ablation}\n\\end{table*}\n\nThe qualitative and quantitative comparisons are shown in Figure~\\ref{fig:baselines} and Table~\\ref{tab:baselines}. From the visual results, we can see that our proposed RFGAN model can capture the human position and posture information from RF signals and generate desirable activity frames. Although the baselines, i.e., RF-Concat and Img\\&RF, can capture the human position information, people in the generated frames are quite blurred. The user survey results also confirm the higher position and posture accuracy of the human activity frames generated by RFGAN. According to the FID and SSIM measurements, we find that the human activity frames generated by the proposed RFGAN have better quality and are more similar to the ground truth. The experimental results demonstrate the effectiveness of our proposed RF-Fusion and the RF conditioning encoding network.\n\n\\subsection{Ablation Study}\nIn this subsection, we conduct ablation studies to evaluate some important components in our proposed RFGAN model:\n\n\\noindent\\textbf{- RFGAN-w\/o-RFExtD:} In our full RFGAN model, there are two RF-Extractors and RNNs, one in the generative part and the other in the discriminative part. 
In RFGAN-w\/o-RFExtD, we remove the RF-Extractor and RNN in the discriminative part and use the RF fused representation extracted in the generative part as the condition for the Activity-Discriminator.\n\n\\noindent\\textbf{- RFGAN-w\/o-RNN:} We remove the RNN module from our full RFGAN model in this setting, which means RFGAN-w\/o-RNN generates human activity frames based only on the current RF signal inputs.\n\nThe qualitative and quantitative results are shown in Figure~\\ref{fig:ablation} and Table~\\ref{tab:ablation}. We find that the dual RF-Extractors and RNNs setting under the adversarial learning framework, i.e., RFGAN (full), can synthesize the target human activity frames, mainly because the RF-Extractors and RNNs in the generative and the discriminative parts have different focuses. The RF-Extractor and RNN in the generative part pay more attention to guiding the Generator for better synthesis, whereas the RF-Extractor and RNN in the discriminative part aim to help the Activity-Discriminator distinguish different human poses. They are trained by adversarial learning.\nFor RFGAN-w\/o-RFExtD, only one RF-Extractor and RNN are used for RF conditioning encoding. Lacking supervision labels for training guidance or another RF-Extractor and RNN for adversarial learning, a single RF-Extractor and RNN cannot extract the desired human activity information from the RF signals, which leads to the human object being ignored in the generated frames (see the 3rd row in Figure~\\ref{fig:ablation}).\nRFGAN-w\/o-RNN, lacking the information from the neighboring RF inputs that can be used to adjust the current RF fused representation, performs worse than the full RFGAN model. From the visual results, we can see that the people in the frames generated by RFGAN-w\/o-RNN are full of artifacts (4th row in Figure~\\ref{fig:ablation}).\n\n\\subsection{Deployment in New Scene}\nTo deploy our model in a new scene, we do not need to retrain the whole model from the start. We can fine-tune the pre-trained RFGAN using very little data (about 40 seconds of data) to get similar results (see Table~\\ref{tab:newscene}). Specifically, the learning rate during the fine-tuning process is 0.0002 for both the generative part and the discriminative part. The number of epochs and the batch size are set to 40 and 2, respectively. The loss functions and hyperparameters are the same as in the training stage. 
From the quantitative results, we find that the pre-trained RFGAN model can generate desirable human activity frames in the new scene after fine-tuning with only a little data, which means our proposed model has the potential to be widely deployed.\n\n\\begin{table}[h]\n\t\\begin{center}\n\t\t\\renewcommand{\\arraystretch}{1.6}\n\t\t\\scalebox{1.0}{\n\t\t\t\\setlength{\\tabcolsep}{3.9mm}{\n\t\t\t\t\\begin{tabular}{l|ccc}\n\t\t\t\t\t\\hline\n\t\t\t\t\t& FID $\\downarrow$ & SSIM $\\uparrow$ & User study $\\uparrow$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\hline\n\t\t\t\t\tNew Scene & 20.64 & 0.9739 & 73.33\\% \\\\\n\t\t\t\t\t\\hline\n\t\t\\end{tabular}}}\n\t\\end{center}\n\t\\caption{Quantitative results in the new scene.}\n\t\\label{tab:newscene}\n\\end{table}\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=\\columnwidth]{fig\/occ_light.pdf}\n\t\\end{center}\n\t\\caption{Performance under occlusions and bad lighting.}\n\t\\label{fig:occ_light}\n\\end{figure}\n\n\\subsection{Occlusions and Bad Lighting}\nRF signals can traverse occlusions and do not rely on lighting; thus our model can work in occluded or poorly lit environments (see Figure~\\ref{fig:occ_light}).\n\n\\section{Limitations}\nSince our method relies on the natural characteristics of RF signals, the solution that we present in this paper has some limitations. Firstly, in our mmWave radar system, the depth resolution of the RF signals is about 7.5cm, and the angular resolution is about 1.3 degrees. Thus, some micro-motion behaviors that are smaller than the resolution thresholds may be missed by our model. Secondly, the operating distance of our radar system depends on the transmission power, which in our case works up to 20m. Finally, the datasets we use in this paper mainly contain the data of human daily activities under indoor scenes. Exploring more RF-based sensing models and synthesizing people in the wild is left for future work.\n\n\\section{Conclusion}\nIn this paper, we aim to use RF signals to guide human synthesis. To tackle the challenge of using this new kind of driving signal, we propose a novel RFGAN model, which introduces an RF-Extractor with an RNN to obtain the human activity information from the horizontal and vertical RF heatmaps and utilizes RF-InNorm to inject the information into the GAN networks. Furthermore, we propose to train the RF-Extractor and RNN under an adversarial learning framework to enable the encoding of this new kind of conditional data.\nTo evaluate our proposed model, we create two cross-modal datasets, and the experimental results show that RFGAN achieves promising performance. We believe this work opens up a research opportunity to use a new form of conditional data, i.e., RF signals, to guide the GAN model, and the performance of RFGAN confirms that RF signals have great potential in imaging applications.\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n\\newpage\n\\fi\n\n\n\\bibliographystyle{IEEEtran}\n\n\n\\section{Introduction}\n\nInvestigating the thermodynamic properties of black holes and their origins has been\nan active industry for over 40 years, since Bekenstein found the analogue of the principle of increasing\nentropy in the black hole context in 1973 \\cite{bekenstein:1973}. Hawking confirmed\nthat black holes do have a temperature by applying quantum mechanics to a classical gravitational\nbackground \\cite{hawking:1975}. 
It has been a consensus since then that understanding the\nthermodynamic nature of black holes needs a more fundamental theory of quantum gravity\nwhich treats gravitation itself as a dynamical quantum object instead of just a fixed\nbackground. String theory is a promising quantum gravity theory and in\nthis theory $p$-branes\nemerge as a natural extension of point-like and string-like objects.\nBlack $p$-branes\n\\cite{horowitz:1991}, therefore, naturally extend black holes and thermodynamic properties\nthereof to high dimensions. There have been attempts to understand the origin of\nthe Bekenstein-Hawking entropy of black branes from a statistical point of view\nwith the knowledge gained in string theory \\cite{gubser:1996}.\n\nApart from the efforts to find a microscopic\nexplanation of black hole entropies, more thorough inspections of the black\nhole thermodynamic laws \\cite{bardeen:1973} and the associated phase structures\nhave been carried out. A pioneering method that was put forward in\n\\cite{gibbons:1977} is to evaluate the Euclideanized Einstein-Hilbert\naction with the classical black hole solution as a zeroth order\napproximation to the partition function and has been successfully\napplied to specific gravitational solutions such as AdS black holes\n\\cite{hawking:1983}, Schwarzschild black\nholes \\cite{york:1986}, charged (AdS or RN) black holes\n\\cite{whiting:1988,Braden:1990hw,chamblin:1999,chamblin:1999-2,carlip:2003,lundgren:2008,banerjee:2011,banerjee:2012}\nand even Kerr-Newman-AdS black holes \\cite{caldarelli:1999} and\nGauss-Bonnet black holes \\cite{cai:2002, cai:2007}. Among these\nefforts, asymptotically flat black holes, due to their negative\nspecific heat and Hawking radiation, may not have a well-defined\ncanonical ensemble description.\nHence, in \\cite{york:1986,whiting:1988}, the authors proposed to\nhypothetically put the Schwarzschild black hole in a cavity with a\nconstant temperature to form a canonical ensemble, whereupon thermally\nstable black holes with positive specific heat can also be found.\nThis analysis has also been extended to asymptotically flat black holes\nwith charges \\cite{Braden:1990hw,carlip:2003}. For charged black\nholes, there can be two kinds of boundary (i.e. the cavity)\nconditions, either fixing the charge within the cavity, which\ncorresponds to the canonical ensemble, or fixing the electric\npotential at the boundary, which corresponds to the grand canonical\nensemble. Depending on which ensemble we examine, the resulting phase\nstructure of the black hole can be quite different. In the grand canonical\nensemble, since the charge is not fixed, there could be, in principle,\na black hole phase or a hot flat space phase. If the temperature and\nelectric potential at the boundary are carefully chosen, there could\nbe a first order phase transition between these two phases because\nthey have the same free energy (classical action). This first order\nphase transition is the analogue of the Hawking-Page phase transition\nin an AdS black hole system \\cite{hawking:1983}. In a canonical\nensemble, since the charge is fixed and no space can be both flat and\ncharged, this Hawking-Page-like phase transition cannot happen in this\ncase. 
However, there may be a van der Waals-like phase transition\nwhich occurs between two black hole phases of different size and ends\nat a critical point where a second order phase transition takes place.\n\nAll the above analysis can also be applied to black brane\/bubble \\cite{witten:1982}\nsystems\\footnote{We are here considering only the thermodynamic stability of\nthe brane system in a cavity. The dynamical stability, or the so-called\n``Gregory-Laflamme instability''~\\cite{Gregory:1993vy}, of the\nchargeless black branes in a cavity \nwas also studied in \\cite{Emparan:2012be} and was found to be\ncorrelated \nwith the thermodynamic stability. } which embody richer structure \\cite{lu:2011,lu:2012-2,lu:2011-2,wu:2012}\nsince the spacetime dimension in superstring theory is ten and there could be branes\nof $p+1$ dimensions where $p=0,1,2,\\cdots$. In almost all these studies, Hawking-Page-like\nor van der Waals-like phase transitions are found (except for some special $p$).\nThere can also be combinations of black branes with different dimensions, i.e.\nD$p$-D$q$-branes ($q>p$) where D$p$-branes are uniformly smeared on D$q$-branes. In\nour last paper \\cite{zhou:2015}, we performed a thorough scan of the thermal structures of\nD$p$-D$(p+4)$-brane systems ($p=0,1,2$)\nin various ensembles, which is a natural extension of\nthe work done by Lu et al. \\cite{lu:2012}. In Lu's paper, the phase structure and\ncritical behavior of the D$p$-D$(p+4)$-brane system, especially the D1-D5 system, in the canonical\nensemble are elaborately studied. Then a rather direct question is what their phase\nstructures are in other ensembles. Now that D$p$- and D$(p+4)$-branes coexist, each of\nthem can have its own canonical ensemble and grand canonical ensemble. So there are\nthree additional ensembles:\nD$p$\\ in canonical ensemble,\nD$(p+4)$\\ in grand canonical ensemble (CG ensemble); D$p$\\ in grand canonical ensemble, D$(p+4)$\\ in\ncanonical ensemble (GC ensemble); and both D$p$\\ and D$(p+4)$\\ in grand canonical ensemble\n(GG ensemble). In the GG ensemble, the Hawking-Page-like phase transition is found in\nall D0-D4, D1-D5 and D2-D6 systems while no critical behavior would happen in these\nsystems, which is consistent with those studies on black holes or black $p$-branes.\nIn the CG or GC ensemble, the van der Waals-like phase transition and critical behavior\ncan only happen in the D0-D4 system, whereas in the CC ensemble this feature can appear\nin both D0-D4 and D1-D5 systems but not in the D2-D6 system. We also noticed an interesting\nsymmetry of interchanging the roles between D$p$- and D$(p+4)$-branes. More precisely,\nthe phase structure remains unchanged under the following simultaneous transformations,\n\\[ {\\rm D}p\\ {\\rm charge} \\leftrightarrow {\\rm D}(p+4)\\ {\\rm charge},\\qquad\n{\\rm D}p\\ {\\rm potential} \\leftrightarrow {\\rm D}(p+4)\\ {\\rm potential}. \\]\n\nAlthough all results obtained from studies on all kinds of black holes and branes\nseem to match very well, we have to point out that almost all these analyses only\nconcern the thermal stability conditions of the corresponding systems,\nwhich require the stable phases to \nminimize the free energy; this is equivalent to the positive specific\nheat condition. It is true that for chargeless systems, when there is\nonly one independent thermodynamic variable, i.e. temperature or entropy, thermal\nstability is the same thing as thermodynamic stability. However, for a charged\nsystem with a second independent thermodynamic variable, i.e. 
charge\nor electric potential, the thermodynamic stability also involves electrical stability which\nin general should not be omitted, although in some special cases it can be shown\nthat being thermally stable implies being electrically stable \\cite{lu:2011-2}.\nIn general, the stability analysis follows from the second law of\nthermodynamics, which is related to the second order variations of the\nthermodynamic potentials. Studying the full thermodynamic stability of a system\nwith more than one independent variable involves computations on\nthe positivity property of the Hessian matrix of the thermodynamic\npotential evaluated at the stationary point in\nthe parameter space (e.g. the\n``temperature-potential'' space for grand canonical ensembles\n\\cite{Braden:1990hw,lu:2011-2}), which may be complicated. Nonetheless,\nthese conditions can be reduced to the positivity of some generalized response\nfunctions as was done in \\cite{chamblin:1999-2}, where \nthe stability conditions of the Einstein-Maxwell-anti-de-Sitter (EMadS) black hole reduce to the positivity of\nthe specific heat and the ``isothermal permittivity''. While it is well-known\nthat the\npositivity of the specific heat of the black hole indicates the stability of the system\nunder fluctuations of the horizon size, the isothermal permittivity\ncharacterizes the stability of the system under electrical fluctuations, such\nas potential or charge fluctuations.\nIn the present paper, we will mainly adopt the method in \\cite{chamblin:1999-2} and\ntry to perform a systematic investigation of the more general thermodynamic\nstability of D$p$-D$(p+4)$\\ systems by including the electrical stability. Since we have three independent\nvariables here, we would expect that there are three response\nfunctions for each ensemble. \nAs a result of our investigation, we find that electrical perturbations prohibit\nthe small thermally stable black branes from being electrically stable. That means\nthe black brane with the larger horizon is now the only thermodynamically stable phase,\nso the van der Waals-like first order phase transition cannot occur any more,\nand neither can the second order phase transition.\n\nThe paper is organized as follows. In section \\ref{sect:setup}, we describe the basic setup\nof the problem and discuss the meaning of the thermal and\nelectrical stability \nin the canonical and grand canonical ensembles. The main results of the thermal\nstability discussion in \\cite{zhou:2015} are reviewed and the\nformulae obtained previously and needed for the calculation in this\npaper are also collected at the end of section \\ref{sect:setup}. We\nthen derive the thermodynamic stability criteria by incorporating the\nelectrical stability conditions and reduce them in section\n\\ref{sect:criteria}. In section \\ref{sect:electrical} we find out the\nelectrical stability constraints on the parameters by using the\nreduced criteria and then, by combining them with the thermal stability\nresults, we accomplish the whole thermodynamic analysis in section\n\\ref{sect:thermo}. 
Finally, section \\ref{sect:conclude} is devoted to the\nconclusion and discussion.\n\n\\section{The D$p$-D$(p+4)$-brane system \\label{sect:setup}}\n\\label{sec:background}\n\n\\subsection{The brane system}\n\nA D$p$-D$(p+4)$-brane system can be described by the following Euclideanized metric,\ndilaton and form fields (see section 2 of \\cite{zhou:2015} for more details),\n\\begin{eqnarray}\n ds^2 & = & \\Delta_- ^{\\frac{1-p}{4}} \\Delta_* ^{\\frac{p-7}{8}}\n \\left( \\Delta_+ dt^2 + \\Delta_- \\sum^{p}_{i=1} dx_i^2\n + \\Delta_* \\sum^{p+4}_{j=p+1} dx_j^2 \\right) \\nonumber\\\\\n & & + \\Delta_- ^{\\frac{p^2-1}{4(3-p)}} \\Delta_* ^{\\frac{p+1}{8}}\n \\left( \\frac{d\\rho^2}{ \\Delta_+ \\Delta_- } + \\rho^2 d\\Omega_{4-p}^2 \\right) ,\\nonumber\\\\\n e^\\phi &=& \\Delta_- ^{\\frac{p-1}{2}} \\Delta_* ^{\\frac{3-p}{4}} ,\\nonumber\\\\\n A_{[p+1]} &=& -i \\frac{ \\Delta_+ }{ \\Delta_* } \\left( \\frac{ \\Delta_* - \\Delta_- }{ \\Delta_* - \\Delta_+ } \\right)^{1\/2}\n dt \\wedge dx_1 \\wedge \\cdots \\wedge dx_p ,\\nonumber\\\\\n A_{[p+5]} &=& -i \\Delta_+ \\left( \\frac{1- \\Delta_- }{1- \\Delta_+ } \\right)^{1\/2}\n dt \\wedge dx_1 \\wedge \\cdots \\wedge dx_{p+4} ,\n \\label{eq:solution}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n \\Delta_+ &=& 1-\\frac{x}{\\tilde \\rho} ,\\qquad\\quad \\Delta_- = 1-\n\\frac{q^2}{x\\tilde \\rho} ,\\nonumber\\\\\n \\Delta_* &=& \\frac{-2Q^2(1-1\/\\tilde\n\\rho)+ \\Delta_+ + \\Delta_- +\\frac 1{\\tilde\n\\rho}\\sqrt{(\\bar \\Delta_--\\bar\\Delta_+)^2+4Q^2\\bar\n\\Delta_+\\bar\\Delta_-}}{2(1-Q^2)}\\, ,\\nonumber\\\\\n\\bar\\Delta_+ &=&1-{x}\\,,\\quad \\bar\\Delta_- = 1-\n\\frac{q^2}{x}\\,.\n \\label{eq:deltas}\n\\end{eqnarray}\nThe functions defined in \\eqref{eq:deltas} are expressed using the parameters of the CC\nensemble, such as $Q$ and $q$, which are the reduced D$p$\\ and D$(p+4)$\\ charge (densities).\nFor the grand canonical ensemble of D$p$-branes, we should use $\\Phi$ instead of\n$Q$, while for the grand canonical ensemble of D$(p+4)$-branes, we use $\\varphi$ rather\nthan $q$. $\\Phi$ and $\\varphi$ are the corresponding conjugate\npotentials for $Q$ and $q$ defined in \\cite{zhou:2015}, which are\nproportional to the form fields at the boundary. The relations between these parameters are summarized in section \\ref{sec:collection}.\nThe other parameter $x\\in (0,1)$ is the reduced size of the horizon, which is defined as\n$x\\equiv \\rho_+^{3-p}\/\\rho_b^{3-p}$ where $\\rho_+$ is the coordinate of the\nouter horizon and $\\rho_b$ the coordinate of the boundary (cavity).\nSimilarly, $\\tilde\\rho\\equiv(\\rho\/\\rho_b)^{3-p}$, and $Q$,\n$q$, $\\Phi$ and $\\varphi$ are all rescaled to the same range $(0,1)$ to simplify\nthe analysis. The Euclideanized metric in \\eqref{eq:solution} possesses another\nproperty that the Euclidean time direction is periodic in order to\navoid the conical\nsingularity. 
The reduced time defined as $t\/(4\\pi\\rho_b)$ has a period\nseen at the boundary\n\\begin{eqnarray}\n b = \\frac{x}{3-p} \\left( \\frac{\\bar\\Delta_+}{\\bar\\Delta_-} \\right)^{1\/2}\n \\left( 1- \\frac{\\bar\\Delta_+}{\\bar\\Delta_-} \\right)^{\\frac{p-2}{3-p}}\n \\left( 1- \\frac{\\bar\\Delta_+}{\\bar\\Delta_*} \\right)^{1\/2}\n \\label{eq:temperature}\n\\end{eqnarray}\nwhich turns out to be proportional to the inverse temperature of the black\nbranes measured at the boundary.\n\nBy using quantities in \\eqref{eq:solution}, we can compute in each ensemble\nthe thermodynamic potential, which is a function of variables such as $x$,\n$b$, $Q$ or $\\Phi$ and $q$\nor $ \\varphi$, e.g. the free energy in the canonical\nensemble in terms of $x$, $q$ and $Q$. \nMinimizing the thermodynamic potential as a univariate\nfunction of $x$ by fixing the other parameters, which is equivalent to only considering the thermal\nstability condition of the equilibrium, can provide us with information\nabout phase diagrams of the system, which has already been presented in\n\\cite{zhou:2015}. \nHowever, we need to emphasize that we are now dealing with a system with three independent\nthermodynamic variables, and in principle there should be three\nstability conditions: one thermal stability condition and two electrical\nstability conditions, one for each charge. \nThe phase diagram, unlike most planar phase diagrams we have seen in\ntextbooks, should also be 3-dimensional. Though we can draw 3D phase\ndiagrams on a 2D paper, it may look neater to present them in planar\ndiagrams with respect to two variables and to use algebraic inequalities\nfor the third one, which in our convention would always be the $b$\ndirection, to describe the region for the stable phases. Now we\nsummarize the basic results obtained in \\cite{zhou:2015} below as a reference\nfor later analysis. However, we will only describe the main traits in those\nresults and will not be rigorous about specific values.\n\nIn the GG ensemble, there exists a region in the $\\varphi$-$\\Phi$ plane\nwhere one of the black\nbrane phase and the hot flat space phase is globally stable while the other is\nlocally stable. Which one is globally stable depends on the value of $b$, and\nthere is a specific value of $b$ such that these two phases have equal Gibbs free\nenergy, which indicates a Hawking-Page-like phase transition happening at this\n$b$. This first order phase transition could occur for all $p=0,1,2$.\nIn the GC or CG ensemble, for $p=1,2$ the system either is unstable or has\na stable black brane \nphase, but neither the Hawking-Page-like phase transition nor the van der Waals-like\nphase transition could happen in the phase diagram. However, for the $p=0$ case it is\npossible that in some region of the $Q$-$\\varphi$ or $q$-$\\Phi$ plane and at some\nspecific $b$ which depends on the value of the other two parameters, a van der\nWaals-like first order phase transition would arise, and as $(\\varphi,Q)$ or $(\\Phi,q)$\nevolves this first order phase transition will eventually end up at a second order\nphase transition point. In the CC ensemble, it is only in the $p=2$ case that the van der\nWaals-like phase transition cannot happen. For both $p=0,1$, this liquid-gas-like phase transition\nis found in a certain region of the $q$-$Q$ plane at some\nspecific $b$,\nand terminates at a second order phase transition point. 
What we will\nshow later in this paper is that, \nafter the electrical stability is considered, the small\nblack brane phase in the van der Waals-like phase transition is not stable any more,\nwhich actually rules out the possibility of this phase transition. Thus in that\ncase the only stable phase is the large black brane phase.\n\n\n\\subsection{Thermal stability and electrical stability}\nTo gain further understanding of the electrical stability, let us\nreview the physical interpretation of the thermal\nstability\/instability. We have mentioned the fact that black holes\/branes have\nnegative specific heat and they radiate, which makes it impossible for them to be\nself-perpetuating in asymptotically flat spacetimes. Due to this innate instability,\nwe have to stabilize the black system by placing it inside a homeothermal reservoir\nwhich may compensate for the thermal loss of the black system. So we are dealing\nwith a system in which the black brane keeps emitting energy to the outside while\nabsorbing energy from the reservoir at the same time. When the amount it emits equals\nthe amount it absorbs, the system can possibly be in equilibrium. However, the system can be\ntruly stable or meta-stable only when this equilibrium can be preserved under small fluctuations.\nWhat our result (and most literature) reveals is that the existence of the reservoir\ndoes not necessarily guarantee the thermal stability of the system.\nWhen the system has negative specific heat, it may be in an\ninstantaneous balance but will still collapse under arbitrarily\nsmall fluctuations of the temperature or the energy and cannot form\nan equilibrium with the reservoir. In this case, a tiny fluctuation of the system state would\nend up either with an explosion to the hot flat thermal gas (the horizon always emits\nmore than it swallows) or with the reservoir inevitably being engulfed by the black brane\nhorizon (the horizon always absorbs more than it ejects) if there is not a truly\nstable state lying between these two fates. Nevertheless, by putting\nthe black holes\/branes inside a reservoir, unlike putting them in the\ninfinite flat space, there may indeed exist some stable phases with positive specific heat.\n \n\n\nFor black hole systems with charges, as Hawking radiation always exists, there is no reason to forbid the\nhorizon from emitting charged objects. Thus, we have to impose\nanother property on the cavity such that the idea of a canonical ensemble makes sense.\nThat is, the cavity should constantly trade charged objects with the horizon in such\na way that the total charge of the system is conserved when fluctuations\nare not considered. In our case, since there exist both D$p$\\ and D$(p+4)$\\ charges,\nthe cavity should be able to supply both charges. \nGiven this imposition, now we can\ndiscuss the stability under small fluctuations. In this context, the electrical\ninstability of the system means that, however small the fluctuation of\nthe charges in the system is, it will cause the\nhorizon to keep swallowing more or fewer charges than it receives from\nthe reservoir, and thus to move farther away from the equilibrium.\nSimilarly, to realize a grand canonical ensemble for one of the brane\ncharges or both, we should assume that the reservoir has a mechanism to fix\nthe electrical potential at the boundary by exchanging charged objects\nwith the inside. 
\nSince the fluctuations of charges or potentials are always there, the\nelectrical stability condition must be considered in discussing the\nphase structure of the black brane system in various ensembles.\n\n\n\\subsection{Collections of previous results}\n\\label{sec:collection}\n\nHere we collect some results obtained in \\cite{zhou:2015} for later reference.\n\\begin{itemize}\n \\item D$p$\\ charge:\n \\begin{eqnarray}\n Q(x,\\Phi } %{ \\bar{\\Phi} ,q) &=& \\frac{\\Phi } %{ \\bar{\\Phi} (x^2-q^2)}{(1-\\Phi } %{ \\bar{\\Phi} ^2)\\sqrt{x(1-x)(x-q^2)}} ,\\nonumber\\\\\n Q(x,\\Phi } %{ \\bar{\\Phi} ,\\varphi } %{ \\bar{\\varphi} ) &=& \\frac{z\\Phi } %{ \\bar{\\Phi} }{(1-\\Phi } %{ \\bar{\\Phi} ^2)\\sqrt{1-z}} .\n \\label{eq:Q-charge}\n \\end{eqnarray}\n where $z\\equiv z(x,\\varphi } %{ \\bar{\\varphi} )=x(1-\\varphi } %{ \\bar{\\varphi} ^2)>0$.\n \\item D$(p+4)$\\ charges:\n \\begin{eqnarray}\n q(x,\\varphi } %{ \\bar{\\varphi} ) &=& \\frac{x\\varphi } %{ \\bar{\\varphi} }{\\sqrt{1-z}} .\n \\label{eq:q-charge}\n \\end{eqnarray}\n \\item D$p$\\ potential:\n \\begin{eqnarray}\n \\Phi(x,Q,q) &=& \\frac{\\lambda+q^2-x^2}{2\\sqrt{Q^2x(1-x)(x-q^2)}} ,\\nonumber\\\\\n \\Phi(x,Q,\\varphi } %{ \\bar{\\varphi} ) &=& \\frac{\\eta-z}{2Q\\sqrt{1-z}} .\n \\label{eq:Phi-potential}\n \\end{eqnarray}\n where \n $\\lambda\\equiv\\lambda(x,Q,q)=\\sqrt{4Q^2x(1-x)(x-q^2)+(x^2-q^2)^2}$ and\n $\\eta\\equiv\\eta(x,Q,\\varphi } %{ \\bar{\\varphi} )=\\sqrt{z^2+4Q^2(1-z)}$ .\n \\item D$(p+4)$\\ potential:\n \\begin{eqnarray}\n \\varphi(x,q) &=& q \\sqrt{\\frac{1-x}{x(x-q^2)}} .\n \\label{eq:phi-potential}\n \\end{eqnarray}\n \\item Reciprocal of temperature $(4\\pi\\bar{\\rho}_b)b=1\/T\\,$:\n \\begin{eqnarray}\n b_{CC} \\equiv b(x,Q,q) &=& \\left(\\frac{x^2-q^2}{x-q^2}\\right)^{\\frac{p-2}{3-p}}\n \\frac{x\\sqrt{(1-x)(x^2-q^2+\\lambda)}}{\\sqrt{2}(3-p)(x-q^2)} ,\\nonumber\\\\\n b_{CG} \\equiv b(x,Q,\\varphi } %{ \\bar{\\varphi} ) &=& \\frac{z^{\\frac{p-2}{3-p}}}{\\sqrt{2}(3-p)}\n \\sqrt{x(1-z)\\big(z+\\eta\\big)} ,\\nonumber\\\\\n b_{GC} \\equiv b(x,\\Phi } %{ \\bar{\\Phi} ,q) &=& \\frac{x}{(3-p)(x-q^2)} \\left(\\frac{x^2-q^2}{x-q^2}\\right)^{\\frac{p-2}{3-p}}\n \\sqrt{\\frac{(1-x)(x^2-q^2)}{1-\\Phi } %{ \\bar{\\Phi} ^2}} ,\\nonumber\\\\\n b_{GG} \\equiv b(x,\\Phi } %{ \\bar{\\Phi} ,\\varphi } %{ \\bar{\\varphi} ) &=& \\frac{xz^{\\frac{p-2}{3-p}}}{3-p}\n \\sqrt{\\frac{(1-\\varphi } %{ \\bar{\\varphi} ^2)(1-z)}{1-\\Phi } %{ \\bar{\\Phi} ^2}} .\n \\label{eq:b-temperature}\n \\end{eqnarray}\n \\item Entropy:\n \\begin{eqnarray}\n S(x,Q,q) &=& \\left(\\frac{x^2-q^2}{x-q^2}\\right)^{\\frac{1}{3-p}}\n \\sqrt{\\frac{x(x^2-q^2+\\lambda)}{2(x-q^2)}} ,\\nonumber\\\\\n S(x,\\Phi } %{ \\bar{\\Phi} ,q) &=& \\left(\\frac{x^2-q^2}{x-q^2}\\right)^{\\frac{1}{3-p}}\n \\sqrt{\\frac{x(x^2-q^2)}{(x-q^2)(1-\\Phi } %{ \\bar{\\Phi} ^2)}} ,\\nonumber\\\\\n S(x,Q,\\varphi } %{ \\bar{\\varphi} ) &=& z^{\\frac{1}{3-p}} \\sqrt{\\frac{x(z+\\eta)}{2}} ,\\nonumber\\\\\n S(x,\\Phi } %{ \\bar{\\Phi} ,\\varphi } %{ \\bar{\\varphi} ) &=& xz^{\\frac{1}{3-p}} \\sqrt{\\frac{1-\\varphi } %{ \\bar{\\varphi} ^2}{1-\\Phi } %{ \\bar{\\Phi} ^2}} .\n \\label{eq:S-entropy}\n \\end{eqnarray}\n\\end{itemize}\nThe domain of each independent variable ($x$, $Q$, $q$, $\\Phi } %{ \\bar{\\Phi} $ or $\\varphi } %{ \\bar{\\varphi} $) has been\nnormalized to the unit interval $(0,1)$ except in some cases the upper bound of $x$\nis restricted to a variable $x_{max}$($\\leq 1$) which depends on $(\\Phi } %{ \\bar{\\Phi} ,\\varphi } %{ \\bar{\\varphi} )$ \nor $(\\Phi } %{ 
\n\n\\section{Stability criteria for thermodynamic equilibrium\\label{sect:criteria}}\n\nIn thermodynamics, the stability condition for an equilibrium is independent of the\nensemble, and can be obtained either by the maximization of the\nentropy with fixed $E$, $Q$, $q$, or by minimization of the energy\nwith fixed $S$, $Q$, $q$. \nFor example, we use the minimization of the energy condition\n\\begin{eqnarray}\n \\delta E - \\bar T \\delta S - \\bar\\Phi \\delta Q - \\bar\\varphi \\delta q &=& 0 ,\\nonumber\\\\\n \\delta^2 E - \\bar T \\delta^2 S - \\bar\\Phi \\delta^2 Q - \\bar\\varphi \\delta^2 q &>& 0 ,\n \\label{eq:equi-condi}\n\\end{eqnarray}\nwhere $E$ is the internal energy of the brane system, and $S$ is the\nentropy. $\\bar T$, $\\bar \\Phi$ and $\\bar \\varphi$ are the\ncorresponding temperature and potentials at the boundary fixed by the\nreservoir, which play\nthe roles of the Lagrange multipliers. Using the first law of\nthermodynamics for the black brane, $\\delta E= T \\delta S + \\Phi\n\\delta Q + \\varphi \\delta q$, one can obtain the equilibrium\ncondition $T=\\bar T$, $\\Phi=\\bar \\Phi$, $\\varphi=\\bar \\varphi$. \nVarying the first law once more then gives the second\norder variation of the internal energy,\n\\begin{eqnarray}\n \\delta^2 E = \\delta \\bar T \\delta S +\\bar T \\delta^2 S + \\delta \\bar\\Phi \\delta Q\n + \\bar\\Phi \\delta^2 Q + \\delta\\bar \\varphi \\delta q + \\bar\\varphi \\delta^2 q .\n \\label{eq:E-2nd-var}\n\\end{eqnarray}\nInserting this into the inequality of \\eqref{eq:equi-condi} gives the\nstability condition at the equilibrium\n\\begin{eqnarray}\n \\delta\\bar T \\delta S + \\delta\\bar\\Phi \\delta Q + \\delta \\bar\\varphi \\delta q > 0 .\n \\label{eq:min-condi}\n\\end{eqnarray}\nSince at the equilibrium the barred quantities equal the unbarred\nquantities, one does not need to distinguish the barred and unbarred\nquantities there. \nHowever, in other contexts, we need to keep the bars to\ndistinguish the quantities fixed on the boundary from the quantities of\nthe black branes regarded as functions of the other variables. 
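\n\nIn practice, condition (\\ref{eq:min-condi}) states that a $3\\times 3$ quadratic form in the fluctuations is positive definite, which can be checked numerically, e.g.\\ via Sylvester's criterion; a minimal sketch of ours (numpy assumed):\n\\begin{verbatim}\nimport numpy as np\n\ndef is_positive_definite(M, tol=1e-12):\n    # only the symmetric part of M contributes to the quadratic form\n    Ms = 0.5*(M + M.T)\n    return all(np.linalg.det(Ms[:k, :k]) > tol for k in (1, 2, 3))\n\\end{verbatim}\n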
\nEquation (\\ref{eq:min-condi}) will be the starting point of our\nstability analyses in the various ensembles.\n\n\nIn the CC ensemble, we use $T$, $Q$ and $q$ as independent variables; then\n\\eqref{eq:min-condi} becomes\n\\begin{align}\n &\\left(\n \\begin{array}{ccc}\n \\delta T & \\delta q & \\delta Q\n \\end{array} \\right) \\left(\n \\begin{array}{ccc}\n \\big(\\PP{S}{T}\\big)_{Qq} & \\big(\\PP{S}{q}\\big)_{TQ} & \\big(\\PP{S}{Q}\\big)_{Tq} \\\\\n \\big(\\PP{\\varphi}{T}\\big)_{Qq} & \\big(\\PP{\\varphi}{q}\\big)_{TQ} &\n\\big(\\PP{\\varphi}{Q}\\big)_{Tq} \\\\\n \\big(\\PP{\\Phi}{T}\\big)_{Qq} & \\big(\\PP{\\Phi}{q}\\big)_{TQ} & \\big(\\PP{\\Phi}{Q}\\big)_{Tq}\n \\end{array} \\right) \\left(\n \\begin{array}{c}\n \\delta T \\\\ \\delta q \\\\ \\delta Q\n \\end{array} \\right)\n> 0\\, .\n \\label{eq:CC-min-condi}\n\\end{align}\nBy using the Maxwell relations $\\big(\\PP{S}{q}\\big)_{TQ}=-\n \\big(\\PP{\\varphi}{T}\\big)_{Qq}$ and $\n\\big(\\PP{S}{Q}\\big)_{Tq}=-\\big(\\PP{\\Phi}{T}\\big)_{Qq}$, the positivity\nof the above quadratic form is equivalent to\n the positivity conditions\n\\begin{equation}\n \\left( \\PP{S}{T} \\right)_{Qq} > 0 ,\\ \n\\left(\\PP{\\varphi}{q}\\right)_{TQ}\n> 0 ,\\ \\left|\n \\begin{array}{cc}\n \\big(\\PP{\\varphi}{q}\\big)_{TQ} &\n\\big(\\PP{\\varphi}{Q}\\big)_{Tq} \\\\\n \\big(\\PP{\\Phi}{q}\\big)_{TQ} & \\big(\\PP{\\Phi}{Q}\\big)_{Tq}\n \\end{array} \\right| > 0 .\n \\label{eq:CC-positivity}\n\\end{equation}\nThese conditions can then be further reduced to\n\\begin{eqnarray}\n \\left( \\PP{S}{T} \\right)_{Qq} > 0, \\quad\n\\left(\\PP{\\Phi}{Q}\\right)_{T\\varphi} > 0\n ,\\quad \\left(\\PP{\\varphi}{q}\\right)_{TQ} > 0 .\n \\label{eq:CC-criteria}\n\\end{eqnarray}\nSince the specific heat capacity is defined as $C_{Qq}=T\\left( \\PP{S}{T}\n\\right)_{Qq}$, the first condition is just the positivity of the\nspecific heat capacity. The other two conditions mean the positivity\nof two other response functions, the so-called ``permittivities'' of the two\nkinds of charges, similar to\nthe definitions in~\\cite{chamblin:1999-2}.\n\n\nBy the same token, we can obtain criteria for equilibria in the other\nensembles. The main results are collected in Table~\\ref{tab:criteria}.\nStudying the thermodynamic equilibrium criteria listed in Table~\\ref{tab:criteria}\ndirectly is a little complicated; for example, partial derivatives at fixed $S$\nare not easy to handle directly. Since the expressions listed in section \\ref{sec:collection}\ndepend explicitly on $x$, we will reduce these conditions to\nconvenient forms in terms of $x$, to be used in the\nfollowing subsections.
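\n\nAs an illustration of such reductions, the step from \\eqref{eq:CC-positivity} to \\eqref{eq:CC-criteria} follows from a standard Jacobian identity:\n\\begin{eqnarray}\n \\left(\\PP{\\Phi}{Q}\\right)_{T\\varphi} = \\PP{(\\Phi,\\varphi,T)}{(Q,\\varphi,T)}\n = \\PP{(\\Phi,\\varphi,T)}{(Q,q,T)} \\Bigg\\/ \\PP{(Q,\\varphi,T)}{(Q,q,T)}\n = \\left|\n \\begin{array}{cc}\n \\big(\\PP{\\varphi}{q}\\big)_{TQ} & \\big(\\PP{\\varphi}{Q}\\big)_{Tq} \\\\\n \\big(\\PP{\\Phi}{q}\\big)_{TQ} & \\big(\\PP{\\Phi}{Q}\\big)_{Tq}\n \\end{array} \\right| \\Bigg\\/ \\left(\\PP{\\varphi}{q}\\right)_{TQ} ,\\nonumber\n\\end{eqnarray}\nso, given $(\\PP{\\varphi}{q})_{TQ}>0$, the positivity of the determinant in \\eqref{eq:CC-positivity} is equivalent to the second condition in \\eqref{eq:CC-criteria}.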
\n\\begin{table}[!t]\n \\centering\n \\begin{tabular}{c|c|c}\n \\hline\n Ensemble & Independent variables & Criterion \\\\\n \\hline\n CC & $T$, $Q$, $q$ & $ \\big(\\PP{S}{T}\\big)_{Qq} > 0, \\quad\n \\big(\\PP{\\Phi}{Q}\\big)_{T\\varphi} > 0 ,\\quad \\big(\\PP{\\varphi}{q}\\big)_{TQ} > 0 $ \\\\\n \\hline\n GC & $T$, $\\Phi$, $q$ & $ \\big(\\PP{S}{T}\\big)_{\\Phi q} > 0, \\quad\n \\big(\\PP{Q}{\\Phi}\\big)_{Sq} > 0 ,\\quad \\big(\\PP{\\varphi}{q}\\big)_{T\\Phi} > 0 $ \\\\\n \\hline\n CG & $T$, $Q$, $\\varphi$ & $ \\big(\\PP{S}{T}\\big)_{Q\\varphi} > 0, \\quad\n \\big(\\PP{\\Phi}{Q}\\big)_{T\\varphi} > 0 ,\\quad \\big(\\PP{q}{\\varphi}\\big)_{SQ} > 0 $ \\\\\n \\hline\n GG & $T$, $\\Phi$, $\\varphi$ & $ \\big(\\PP{S}{T}\\big)_{\\Phi\\varphi} > 0, \\quad\n \\big(\\PP{Q}{\\Phi}\\big)_{Sq} > 0 ,\\quad \\big(\\PP{q}{\\varphi}\\big)_{S\\Phi} > 0 $ \\\\\n \\hline\n \\end{tabular}\n \\caption{Criteria for equilibrium in the various ensembles.}\n \\label{tab:criteria}\n\\end{table}\n\n\n\n\\subsection{Reduction for the CC ensemble}\n\nFirst, to keep our computation under control, we prefer to use the variable $b$ instead of $T$.\nSince $T\\sim 1\\/b$, the condition $\\PP{S}{T}>0$ is the same as $\\PP{S}{b}<0$. In the CC\nensemble, we have\n\\begin{eqnarray}\n \\left(\\PP{S}{b}\\right)_{Qq} = \\PP{(S,Q,q)}{(b,Q,q)} = \\PP{(S,Q,q)}{(x,Q,q)} \\PP{(x,Q,q)}{(b,Q,q)}\n = \\left(\\PP{S}{x}\\right)_{Qq} \\Bigg\\/ \\left(\\PP{b}{x}\\right)_{Qq} .\n \\label{eq:CC-pS-pb}\n\\end{eqnarray}\nThe numerator in the above result is actually the product of $b(x,Q,q)$ and $f(x,Q,q)$, where the\nlatter is the function obtained in the last equation of\nEq.~(2.35) in our previous paper~\\cite{zhou:2015}.\nIt can be shown that $f(x,Q,q)>0$, and $b(x,Q,q)$ is obviously positive, so we have proved\n\\begin{eqnarray}\n \\left(\\PP{S}{b}\\right)_{Qq} < 0 &\\Leftrightarrow& \\left(\\PP{b}{x}\\right)_{Qq} < 0 .\n \\label{eq:CC-pb-px}\n\\end{eqnarray}\nHere we can see that the positive specific heat condition is\nequivalent to the stability condition used in our previous paper.\n\nNext we reduce the third condition in Table~\\ref{tab:criteria}.
We can rewrite the partial derivative as\n\\begin{eqnarray}\n \\left( \\PP{\\varphi}{q} \\right)_{bQ} = \\PP{(\\varphi,b,Q)}{(\\varphi,x,Q)}\n \\PP{(\\varphi,x,Q)}{(q,x,Q)} \\PP{(q,x,Q)}{(q,b,Q)} = \\left( \\PP{b}{x} \\right)_{Q\\varphi}\n \\left( \\PP{\\varphi}{q} \\right)_x \\Bigg\\/ \\left( \\PP{b}{x} \\right)_{qQ} > 0 .\n \\label{eq:CC-dphi-dq}\n\\end{eqnarray}\nAccording to \\eqref{eq:CC-pb-px}, the denominator is negative, and by using\n\\eqref{eq:phi-potential} we can easily prove that the second factor in the numerator is positive; therefore, for\n\\eqref{eq:CC-dphi-dq} to hold true, the other factor in the numerator has to be negative,\n\\begin{eqnarray}\n \\left( \\PP{b}{x} \\right)_{Q\\varphi} < 0 .\n \\label{eq:CC-3rd-condition}\n\\end{eqnarray}\n\nSimilarly, we can reduce the second condition in Table~\\ref{tab:criteria} through the\nsame procedure,\n\\begin{eqnarray}\n \\left( \\PP{\\Phi}{Q} \\right)_{b\\varphi} = \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi}\n \\left( \\PP{\\Phi}{Q} \\right)_{x\\varphi} \\Bigg\\/ \\left( \\PP{b}{x} \\right)_{Q\\varphi} > 0 .\n \\label{eq:CC-dPhi-dQ}\n\\end{eqnarray}\nOne can show by explicit computation that the second factor in the numerator is positive, while the denominator is negative\ndue to \\eqref{eq:CC-3rd-condition}. So finally we get the last reduced stability condition,\n\\begin{eqnarray}\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} < 0 .\n \\label{eq:CC-2nd-condition}\n\\end{eqnarray}\n\n\n\\subsection{Reduction for the GC ensemble}\n\nFirst we reduce the second condition using the following identity,\n\\begin{eqnarray}\n \\left( \\PP{Q}{\\Phi} \\right)_{Sq} = \\PP{(Q,S,q)}{(Q,x,q)} \\PP{(Q,x,q)}{(\\Phi,x,q)}\n \\PP{(\\Phi,x,q)}{(\\Phi,S,q)} = \\left( \\PP{S}{x} \\right)_{Qq}\n \\left( \\PP{Q}{\\Phi} \\right)_{xq} \\Bigg\\/ \\left( \\PP{S}{x} \\right)_{\\Phi q} .\n \\label{eq:GC-dQ-dPhib}\n\\end{eqnarray}\nThe first factor in the numerator is positive, as discussed in the previous subsection,\nand the second factor can also be shown to be positive by explicitly using the first\nexpression in \\eqref{eq:Q-charge}.
Thus we have the following equivalence,\n\\begin{eqnarray}\n \\left( \\PP{Q}{\\Phi} \\right)_{Sq} > 0 \\quad \\Leftrightarrow \\quad \\left( \\PP{S}{x} \\right)_{\\Phi q} > 0 .\n \\label{eq:GC-2nd-condition}\n\\end{eqnarray}\n\nNext we reduce the first condition using the same trick,\n\\begin{eqnarray}\n \\left( \\PP{S}{b} \\right)_{\\Phi q} = \\PP{(S,\\Phi,q)}{(x,\\Phi,q)} \\PP{(x,\\Phi,q)}{(b,\\Phi,q)}\n = \\left( \\PP{S}{x} \\right)_{\\Phi q} \\Bigg\\/ \\left( \\PP{b}{x} \\right)_{\\Phi q} .\n \\label{eq:GC-dS-db}\n\\end{eqnarray}\nNow if \\eqref{eq:GC-2nd-condition} is already satisfied, the above relation implies\n\\begin{eqnarray}\n \\left( \\PP{S}{b} \\right)_{\\Phi q} < 0 \\quad \\Leftrightarrow \\quad \\left( \\PP{b}{x} \\right)_{\\Phi q} < 0 .\n \\label{eq:GC-1st-condition}\n\\end{eqnarray}\nThe right-hand side of \\eqref{eq:GC-1st-condition} is indeed the condition we examined\nin our previous paper; it represents the thermal stability condition when the first\nelectrical stability condition is satisfied.\n\nThe other electrical stability condition can be recast in the same\nfashion,\n\\begin{eqnarray}\n \\left( \\PP{\\varphi}{q} \\right)_{b\\Phi} = \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi}\n \\left( \\PP{\\varphi}{q} \\right)_x \\Bigg\\/ \\left( \\PP{b}{x} \\right)_{\\Phi q} > 0 .\n \\label{eq:GC-dphi-dq}\n\\end{eqnarray}\nIf the other two conditions are already satisfied, then, since it can readily be seen that\n$\\left(\\PP{\\varphi}{q}\\right)_x>0$, we are left with the condition\n\\begin{eqnarray}\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} < 0 .\n \\label{eq:GC-3rd-condition}\n\\end{eqnarray}\nWe still need to bear in mind that, after solving this inequality, we have to restate the result in terms of $x$,\n$\\Phi$ and $q$.\n\n\n\n\n\\subsection{Reduction for the CG ensemble}\n\nIn the CG ensemble, we first calculate $(\\PP{q}{\\varphi})_{SQ}$,\n\\begin{eqnarray}\n \\left( \\PP{q}{\\varphi} \\right)_{SQ} = \\left( \\PP{S}{x} \\right)_{qQ}\n \\left( \\PP{q}{\\varphi} \\right)_x \\Bigg\\/ \\left( \\PP{S}{x} \\right)_{\\varphi Q} > 0 .\n \\label{eq:CG-dq-dphi}\n\\end{eqnarray}\nAgain the first factor in the numerator is positive as argued before, and the second factor\nis also positive by \\eqref{eq:q-charge}. So the denominator must be positive in order for\n\\eqref{eq:CG-dq-dphi} to hold, $\\left( \\PP{S}{x} \\right)_{\\varphi Q} >\n0 $. We then calculate the thermal condition $(\\PP{S}{b})_{Q\\varphi}<0$,\n\\begin{eqnarray}\n \\left( \\PP{S}{b} \\right)_{Q\\varphi} = \\left( \\PP{S}{x} \\right)_{Q\\varphi}\n \\Bigg\\/ \\left( \\PP{b}{x} \\right)_{Q\\varphi} < 0 .\n \\label{eq:CG-dS-db}\n\\end{eqnarray}\nFrom the condition analysed above, we see that the numerator is positive, so the denominator\nhas to be negative, $\\left( \\PP{b}{x} \\right)_{Q\\varphi} < 0$.
Finally, for the last condition\n\\begin{eqnarray}\n \\left( \\PP{\\Phi}{Q} \\right)_{b\\varphi} = \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi}\n \\left( \\PP{\\Phi}{Q} \\right)_{x\\varphi} \\Bigg\\/ \\left( \\PP{b}{x} \\right)_{Q\\varphi} > 0\n \\label{eq:CG-dPhi-dQ}\n\\end{eqnarray}\nto hold, we need the first factor in the numerator to be negative, i.e.\n$\\left( \\PP{b}{x} \\right)_{\\Phi\\varphi}<0$, because the second factor\nis positive by \\eqref{eq:Phi-potential} and the denominator is negative by\n\\eqref{eq:CG-dS-db}.\n\n\n\\subsection{Reduction for the GG ensemble}\n\nIn the GG ensemble, for the D$p$ electrical stability condition to hold,\n\\begin{eqnarray}\n \\left( \\PP{Q}{\\Phi} \\right)_{Sq} = \\left( \\PP{S}{x} \\right)_{Qq}\n \\left( \\PP{Q}{\\Phi} \\right)_{xq} \\Bigg\\/ \\left( \\PP{S}{x} \\right)_{\\Phi q} > 0 ,\n \\label{eq:GG-dQ-dPhi}\n\\end{eqnarray}\nwe need the denominator to be positive, because both factors in the numerator are positive.\nFor the other electrical stability condition to be true, i.e.,\n\\begin{eqnarray}\n \\left( \\PP{q}{\\varphi} \\right)_{S\\Phi} = \\left( \\PP{S}{x} \\right)_{q\\Phi}\n \\left( \\PP{q}{\\varphi} \\right)_x \\Bigg\\/ \\left( \\PP{S}{x} \\right)_{\\Phi\\varphi} > 0 ,\n \\label{eq:GG-dq-dphi}\n\\end{eqnarray}\nthe denominator also has to be positive, as a consequence of \\eqref{eq:GG-dQ-dPhi}.\nThen, because the thermal condition can be rewritten as\n\\begin{eqnarray}\n \\left( \\PP{S}{b} \\right)_{\\Phi\\varphi} = \\left( \\PP{S}{x} \\right)_{\\Phi\\varphi}\n \\Bigg\\/ \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} < 0 ,\n \\label{eq:GG-dS-db}\n\\end{eqnarray}\nwe find that the denominator needs to be negative by \\eqref{eq:GG-dq-dphi}.\n\n\\subsection{Summary of the reduced stability conditions}\nIn summary, we list all the reduced stability conditions for the four\nensembles below:\n\\begin{eqnarray}\n \\left(\\PP{b}{x}\\right)_{Qq} < 0 ,\\,\n \\left( \\PP{b}{x} \\right)_{Q\\varphi} \\Bigg|_{\\varphi\\to\\varphi(x,q)} <\n0 ,\\,\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi}\n\\Bigg|_{\\Phi\\to\\Phi(x,Q,q),\\,\\varphi\\to\\varphi(x,q)} < 0 \\,,\\,\n\\text{(CC)}\\,;\n\\label{eq:cc-reduced}\\\\\n \\left( \\PP{b}{x} \\right)_{\\Phi q} < 0 ,\\quad\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} \\Bigg|_{\\varphi\\to\\varphi(x,q)} < 0 ,\\quad\n \\left( \\PP{S}{x} \\right)_{\\Phi q} > 0 \\,,\\quad\n\\text{(GC)}\\,;\\label{eq:GC-reduced}\n\\\\\n \\left( \\PP{b}{x} \\right)_{Q\\varphi} < 0 ,\\quad\n \\left( \\PP{S}{x} \\right)_{\\varphi Q} > 0 ,\\quad\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi}\n\\Bigg|_{\\Phi\\to\\Phi(x,Q,\\varphi)} < 0 \\,,\\quad\n\\text{(CG)}\\,;\\label{eq:CG-reduced}\n\\\\\n
 \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} < 0 ,\\quad\n \\left( \\PP{S}{x} \\right)_{\\Phi\\varphi} > 0 ,\\quad\n \\left( \\PP{S}{x} \\right)_{\\Phi q} \\Bigg|_{q\\to q(x,\\varphi)} > 0 \\,,\\quad\n\\text{(GG)}\\,.\\label{eq:GG-reduced}\n\\end{eqnarray}\nIn each set of conditions, the first one is the thermal stability\ncondition, while the second and third come from the electrical\nstability conditions for the D$(p+4)$ and D$p$ charges, respectively.\nIn fact, all these different sets of stability conditions are\nequivalent, since there are only three independent conditions and the\nstability of an equilibrium state should be independent of the ensemble.\nBy the derivation, these conditions all come from\n\\eqref{eq:min-condi}; we are only\nchoosing a different convenient set of conditions for each ensemble.\nWe also see that some conditions are shared by different\nensembles and are merely expressed in different independent variables.\n\n\\section{Electrical stability analyses\\label{sect:electrical}}\n\nIn the following sections, we delve into the reduced criteria obtained in the last section\nand derive more explicit electrical stability conditions in terms of variables such as\n$x$, $\\Phi$, $\\varphi$, etc. Readers not interested in the details of the derivations can skip the\nfollowing four subsections and jump directly to section~\\ref{sec:elec-sum}, where\nthe final results are summarized.\n\n\\subsection{GG ensemble\\label{subsec:elect-GG}}\n\nWe begin with the electrical stability analysis in the GG ensemble. By using the last equation\nof \\eqref{eq:S-entropy}, it is easy to check that the stability condition for the D$(p+4)$-brane charge,\n\\begin{eqnarray}\n \\left( \\PP{S}{x} \\right)_{\\Phi\\varphi} = \\frac{4-p}{3-p}\n \\sqrt{\\frac{1-\\varphi^2}{1-\\Phi^2}} \\, z^{\\frac{1}{3-p}} > 0 ,\n \\label{eq:GG-dppf-condition}\n\\end{eqnarray}\nis always true. The stability condition for the D$p$-brane charge, albeit requiring some tricky\nrearrangement, can be proved to hold as well,\n\\begin{eqnarray}\n \\left( \\PP{S}{x} \\right)_{\\Phi q} = z^{\\frac{1}{3-p}}\n \\sqrt{\\frac{1-\\varphi^2}{1-\\Phi^2}}\n \\left[ \\frac{(5-p)\\varphi^2(1+\\varphi^2)}{2(3-p)(1-x)(1-\\varphi^2)}\n + \\frac{2(4-p)+(5-p)\\varphi^2}{2(3-p)} \\right] > 0 .\n \\label{eq:GG-dp-condition}\n\\end{eqnarray}\nThese two conditions are actually consistent with our intuition: as the horizon radius\ngrows, the entropy (which is essentially the horizon area) must grow. This\nresult also confirms the conclusion made in \\cite{lu:2011-2}: in the GG ensemble,\nthermodynamic stability is implied by thermal stability alone.
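\n\nFor completeness, \\eqref{eq:GG-dppf-condition} can be obtained in one line: writing the last expression of \\eqref{eq:S-entropy} as $S = x z^{\\frac{1}{3-p}}\\sqrt{\\frac{1-\\varphi^2}{1-\\Phi^2}}$ with $z=x(1-\\varphi^2)$, we have $S\\propto x^{\\frac{4-p}{3-p}}$ at fixed $\\Phi$ and $\\varphi$, so that\n\\begin{eqnarray}\n \\left(\\PP{S}{x}\\right)_{\\Phi\\varphi} = \\frac{4-p}{3-p}\\,\\frac{S}{x}\n = \\frac{4-p}{3-p} \\sqrt{\\frac{1-\\varphi^2}{1-\\Phi^2}}\\, z^{\\frac{1}{3-p}} .\\nonumber\n\\end{eqnarray}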
\n\n\\subsection{GC ensemble}\n\nLet us look at the stability condition for the D$(p+4)$-brane charges first,\n\\begin{eqnarray}\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} = \\frac{1}{2x(3-p)^2}\n \\left( \\frac{x^2-q^2}{x-q^2} \\right)^{\\frac{1}{3-p}}\n \\frac{(3-p)q^2 + 2x - (5-p)x^2}{\\sqrt{(1-x)(1-\\Phi^2)(x^2-q^2)}} < 0 .\n \\label{eq:GC-dppf-condition}\n\\end{eqnarray}\nThis implies\n\\begin{eqnarray}\n (5-p)x^2 -2x - (3-p)q^2 > 0 ,\n \\label{eq:GC-dppf-imply}\n\\end{eqnarray}\nwhich is solved by $x<x^-$ or $x>x^+$, where\n\\begin{eqnarray}\n x^{\\pm} = \\frac{1 \\pm \\sqrt{1+(5-p)(3-p)q^2}}{5-p} .\n \\label{eq:GC-dppf-x}\n\\end{eqnarray}\nObviously $x^-<0$, so we only need to consider the case $x>x^+$. We also have\nthe condition \\cite{zhou:2015}\n\\begin{eqnarray}\n 0< q < x < x_{max} = \\frac{1-\\Phi^2 + \\sqrt{(1-\\Phi^2)^2 + 4q^2\\Phi^2}}{2} < 1.\n \\label{eq:GC-q-xmax}\n\\end{eqnarray}\nSince we have\n\\begin{eqnarray}\n x^+ = \\frac{1 + \\sqrt{1-q^2 + (4-p)^2q^2}}{5-p} > \\frac{q+\\sqrt{(4-p)^2q^2}}{5-p} = q ,\n \\label{eq:GC-x-lower-bound}\n\\end{eqnarray}\nthe lower bound of $x$ for \\eqref{eq:GC-dppf-condition} to be true is $x^+$.\nHowever, for $x^+<x<x_{max}$ to be possible, we need $x_{max}>x^+$, which holds only when $\\Phi<\\sqrt{\\frac{3-p}{5-p}}$.\nSo \\eqref{eq:GC-dppf-condition} finally gives\n\\begin{eqnarray}\n x^+ < x < x_{max} \\qquad \\textrm{and} \\qquad 0 < \\Phi < \\sqrt{\\frac{3-p}{5-p}} .\n \\label{eq:GC-dppf-final}\n\\end{eqnarray}\n\nNext, we deal with the stability condition for the D$p$-brane charges,\n$\\left( \\PP{S}{x} \\right)_{\\Phi q}>0$. This inequality has already\nbeen proved in \\eqref{eq:GG-dp-condition}, where \\eqref{eq:q-charge}\nis used to substitute for $q$. Since in \\eqref{eq:q-charge}, for arbitrary\n$x\\in (0,1)$ and $\\varphi\\in (0,1)$, the range of $q$ is $0<q<x$, this condition\nholds in the whole physical domain \\eqref{eq:GC-q-xmax} and imposes no further constraint.\n\n\n\\subsection{CG ensemble}\n\nIn the CG ensemble, the stability condition for the D$p$-brane charges reads, from \\eqref{eq:CG-reduced},\n\\begin{eqnarray}\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} \\Bigg|_{\\Phi\\to\\Phi(x,Q,\\varphi)} < 0 ,\n \\label{eq:CG-dp-condition}\n\\end{eqnarray}\nwhich, written in terms of $z$, implies\n\\begin{eqnarray}\n z > \\frac{2}{5-p} \\qquad \\textrm{or} \\qquad x > x_0 \\equiv \\frac{2}{(5-p)(1-\\varphi^2)} .\n \\label{eq:CG-dp-zx}\n\\end{eqnarray}\nIn fact, the conditions \\eqref{eq:CG-dp-condition} and\n\\eqref{eq:GC-dppf-condition} are the same condition expressed in\ndifferent variables, so the result \\eqref{eq:CG-dp-zx}\n is consistent with\n\\eqref{eq:GC-x-lower-bound}, which can be checked by using \\eqref{eq:q-charge}.\nOn the other hand, since $x<1$, the second relation above also indicates\n\\begin{eqnarray}\n 1-\\varphi^2 > \\frac{2}{5-p} \\qquad \\textrm{or} \\qquad \\varphi < \\sqrt{\\frac{3-p}{5-p}} ,\n \\label{eq:CG-dp-phi}\n\\end{eqnarray}\notherwise there is no $x$ for which \\eqref{eq:CG-dp-condition} holds. Next we prove\nthat the stability condition for the D$(p+4)$-brane charge is again satisfied automatically, which\ncan be na\\\"ively argued for the same reason as stated in subsection \\ref{subsec:elect-GG}.
First we\nwrite this condition explicitly,\n\\begin{eqnarray}\n \\left( \\PP{S}{x} \\right)_{Q\\varphi} =\n \\frac{Q^2 z^{\\frac{1}{3-p}}}{(3-p)\\eta\\sqrt{2x(z+\\eta)}}\n \\left[ 2(5-p)-(13-3p)z+\\frac{(4-p)z(z+\\eta)}{Q^2} \\right] > 0 .\n \\label{eq:CG-dppf-condition}\n\\end{eqnarray}\nThe validity of this inequality relies on the sign of the term inside\nthe square brackets,\nwhich we rewrite as a function $h$ of $z$ and $Q$,\n\\begin{eqnarray}\n h(z,Q) = 2(5-p)-(13-3p)z + (4-p)z \\frac{z+\\sqrt{z^2+4Q^2(1-z)}}{Q^2} .\n \\label{eq:CG-h-zQ}\n\\end{eqnarray}\nThen we have\n\\begin{eqnarray}\n \\PP{h}{Q} = - \\frac{2(4-p)z}{\\eta Q^3} \\big[ 2Q^2(1-z) + z^2 + z\\eta \\big] < 0 ,\n \\label{eq:cg-dh-dQ}\n\\end{eqnarray}\nwhich, since $0<Q<1$, means\n\\begin{eqnarray}\n h(z,Q) > h(z,1) = (5-p)(2-z) > 0 .\n \\label{eq:CG-h-sign}\n\\end{eqnarray}\nThis proves \\eqref{eq:CG-dppf-condition} to be true.\n\n\n\\subsection{CC ensemble}\n\nBy using \\eqref{eq:Phi-potential}, \\eqref{eq:phi-potential} and \\eqref{eq:b-temperature},\nwe can write down the explicit expression of the stability condition\nfor the D$p$-brane charges,\n\\begin{eqnarray}\n \\left( \\PP{b}{x} \\right)_{\\Phi\\varphi} = \\frac{Q}{(3-p)^2}\n \\left( \\frac{x^2-q^2}{x-q^2} \\right)^{\\frac{p-2}{3-p}}\n \\frac{(3-p)q^2 + 2x - (5-p)x^2}{\\sqrt{2x(x-q^2)(q^2-x^2+\\lambda)}} <\n0 \\,,\n \\label{eq:CC-dp-condition}\n\\end{eqnarray}\nwhich is just \\eqref{eq:GC-dppf-condition} or \\eqref{eq:CG-dp-condition} in terms of $Q$ and $q$. Similar to the GC ensemble, this condition gives $x>x^+$, where $x^+$ is defined\nin \\eqref{eq:GC-dppf-x}. So the D$p$-brane stability condition restricts $x$ to the region\n$x^+<x<1$.\n\nWe then turn to the stability condition for the D$(p+4)$-brane charges,\n\\begin{eqnarray}\n \\left( \\PP{b}{x} \\right)_{Q\\varphi} \\Bigg|_{\\varphi\\to\\varphi(x,q)} < 0 .\n \\label{eq:CC-dppf-condition}\n\\end{eqnarray}\nAs in the CG ensemble, the validity of this inequality hinges on the sign of an auxiliary\nfunction $h$, and showing $\\PP{h}{Q}<0$ reduces to proving the positivity of a function\n$h_1(x,q)$ on the above region, whose derivative can be bounded as\n\\begin{eqnarray}\n \\PP{h_1}{x} > 2(7-p)\\left(x^+ - \\frac{p-1}{7-p}\\right) .\n \\label{eq:CC-dh1-dx}\n\\end{eqnarray}\nFor $p=0,1$, the above expression is obviously always positive, and for $p=2$,\n\\begin{eqnarray}\n \\PP{h_1}{x} > 10 \\left( \\frac{1+\\sqrt{1+3q^2}}{3} - \\frac{1}{7} \\right) > 110\\/21 .\n \\label{eq:CC-dh1-dx-p2}\n\\end{eqnarray}\nSo we have $\\PP{h_1}{x}>0$ and hence\n\\begin{eqnarray}\n h_1(x,q) > h_1(x^+,q) = 2(4-p)x^+(1-x^+) > 0\\,.\n \\label{eq:CC-h1>0}\n\\end{eqnarray}\nThus we have proved $\\PP{h}{Q}<0$, and therefore \\eqref{eq:CC-dppf-condition} always\nholds true as long as condition \\eqref{eq:CC-dp-condition} is true.\n\n\n\\subsection{Electrical stability summary}\n\\label{sec:elec-sum}\n\nTo summarize, we collect all the new constraints from the electrical\nstability analysis as follows.\n\\begin{itemize}\n \\item GG ensemble: no further constraints.\n \\item CG ensemble:\n \\begin{eqnarray}\n x_0 < x < 1 \\qquad \\textrm{and} \\qquad 0 < \\varphi < \\sqrt{\\frac{3-p}{5-p}} ,\n \\label{eq:CG-electrical}\n \\end{eqnarray}\n where $x_0$ is defined in \\eqref{eq:CG-dp-zx}.\n \\item GC ensemble:\n \\begin{eqnarray}\n x^+ < x < x_{max} \\qquad \\textrm{and} \\qquad 0 < \\Phi < \\sqrt{\\frac{3-p}{5-p}} ,\n \\label{eq:GC-electrical}\n \\end{eqnarray}\n where $x^+$ is defined in \\eqref{eq:GC-dppf-x}.\n \\item CC ensemble:\n \\begin{eqnarray}\n x^+ < x < 1 .\n \\label{eq:CC-electrical}\n \\end{eqnarray}\n\\end{itemize}\nNotice that all three sets of new conditions, in the CG, GC and CC ensembles,\nare derived from $\\left(\\PP{b}{x}\\right)_{\\Phi\\varphi}<0$, which is just the thermal stability\ncondition of the GG ensemble.
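\n\nAs a quick numerical sanity check of these windows (our own sketch with sample values, not part of the original analysis), one can verify that the GC window $x^+<x<x_{max}$ is nonempty precisely when $\\Phi<\\sqrt{(3-p)\\/(5-p)}$:\n\\begin{verbatim}\nimport numpy as np\n\ndef x_plus(p, q):\n    # lower bound from Eq. (eq:GC-dppf-x)\n    return (1 + np.sqrt(1 + (5 - p)*(3 - p)*q**2))/(5 - p)\n\ndef x_max(Phi, q):\n    # upper bound from Eq. (eq:GC-q-xmax)\n    a = 1 - Phi**2\n    return (a + np.sqrt(a**2 + 4*q**2*Phi**2))/2\n\np, q = 0, 0.2\nfor Phi in (0.5, np.sqrt(0.6) - 1e-6, np.sqrt(0.6) + 1e-6):\n    print(Phi, x_plus(p, q) < x_max(Phi, q))\n# prints True, True, False: the window closes exactly at\n# Phi = sqrt((3-p)/(5-p)) = sqrt(3/5) for p = 0\n\\end{verbatim}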
\n\n\\section{Thermodynamic stability\\label{sect:thermo}}\n\nIn this section, we combine the results of the electrical stability analysis\nwith the thermal stability results, to find the full thermodynamic stability conditions\nin each ensemble.\n\n\n\\subsection{GG ensemble}\n\\label{sec:thermo-GG}\n\nSince in the GG ensemble thermal stability always guarantees electrical stability, we\ncan infer its phase structure from the thermal stability conditions directly; the\nresults are listed in table \\ref{tab:GG-phase-structure}, where $b_1$, $b_2$ and\n$b_{unstable}$ are the characteristic values of $b$ introduced in the thermal\nstability analysis of our previous paper \\cite{zhou:2015}.\n\\begin{table}[!t]\n \\centering\n \\begin{tabular}{c|c|c}\n \\hline\n Globally & (Only) locally & \\\\\n stable phase & stable phase & \\Raise{7pt}{Conditions} \\\\\n \\hline\n \\rule[-9pt]{0pt}{24pt} Black brane & Hot flat space & $(\\Phi,\\varphi)\\in\\textrm{C}$ , $b_2< \\bar{b} <b_{unstable}$ \\\\\n \\cline{3-3}\n \\rule[-9pt]{0pt}{24pt} Hot flat space & N\\/A & $(\\Phi,\\varphi)\\in\\textrm{B}\\cup\\textrm{C}$, $ \\bar{b} >b_1$ \\\\\n \\cline{3-3}\n \\rule[-9pt]{0pt}{24pt} & & $(\\Phi,\\varphi)\\in\\textrm{B}$, $b_{unstable}< \\bar{b} <b_1$ \\\\\n \\hline\n \\end{tabular}\n \\caption{Phase structure in the GG ensemble.}\n \\label{tab:GG-phase-structure}\n\\end{table}\n\n\\subsection{GC ensemble}\n\n\\subsubsection*{D2-D6-branes}\n\nFor the D2-D6 system, besides thermal stability, the electrical stability conditions\n\\eqref{eq:GC-electrical} require $x>x^+$. It can be proved that at $x^+$, $db_{GC}\\/dx<0$, and $x^+$ lies on\nthe same decreasing branch of $b_{GC}(x)$ as $ \\bar{x} $, where the large\nlocally stable black brane lies.\nOne can then numerically show that $b_{GC}(x^+)>b_{unstable}$,\nwhich implies $X>x^+$, with $X$ the horizon parameter of the large black brane\nat $ \\bar{b} =b_{unstable}$. Therefore, if $ \\bar{b} $ lies between $b_2$ and $b_{unstable}$,\nwe will have $ \\bar{x} >x^+$ (see the second diagram in figure \\ref{fig:GC-q-Phi-p2}).\nThus we have proved that the thermally stable black brane phase is also electrically\nstable and is the only stable phase. The stability condition is\n\\begin{eqnarray}\n (\\Phi, q) \\in \\textrm{A} \\quad \\textrm{and} \\quad b_2< \\bar{b} <b_{unstable} .\n \\label{eq:GC-final-p2}\n\\end{eqnarray}\n\n\\subsubsection*{D1-D5-branes}\n\nIn region A ($q>1\\/3$) of figure~\\ref{fig:GC-q-Phi-p1}, $b_{GC}(x)$ is monotonically decreasing,\nso the system has a thermally and electrically stable black brane if $b_2< \\bar{b} <b_{GC}(x^+)$ (or $ \\bar{x} >x^+$).\nComparing $X$ with $x^+$, we find that when $q=q_0=\\frac{5}{3}-\\sqrt{2}$, we have\n$X=x^+$; otherwise, when $q>q_0$ (respectively, $q<q_0$), we have $X<x^+$ (respectively, $X>x^+$),\nas shown in the second (respectively, the third) diagram in figure~\\ref{fig:GC-b-p1}.\n\nIn summary, the thermodynamic stability condition for the D1-D5 black\nbrane system is\n\\begin{eqnarray}\n (\\Phi,q) \\in \\textrm{A} \\cup \\textrm{B} \\cup \\textrm{C} \\quad \\textrm{and} \\quad\n b_2 < \\bar{b} < \\left\\{\n \\begin{array}[]{ll}\n b_{GC} (x^+) , & (\\Phi,q) \\in \\textrm{A} \\cup \\textrm{B} ,\\\\\n b_{unstable} , & (\\Phi,q) \\in \\textrm{C}\n \\end{array} \\right. .\n \\label{eq:GC-final-p1}\n\\end{eqnarray}\n\n\n\\subsubsection*{D0-D4-branes}\n\nFor the D0-D4 system, the electrical stability condition requires\n$0<\\Phi<\\sqrt{\\frac{3}{5}}$, which further constrains the stability\nregions resulting from the thermal stability analysis.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=.4\\textwidth]{GC_q_Phi_p0.eps}\n \\caption{Regions in which there may exist a stable phase.
The value of $q_c\\approx 0.1416$\n was obtained in the previous paper, and $q_0\\approx 0.1227$ is obtained numerically.}\n \\label{fig:GC-q-Phi-p0}\n\\end{figure}\nAs a result, in figure~\\ref{fig:GC-q-Phi-p0}, the region $0<\\Phi<\\sqrt{\\frac{3}{5}}$ is further divided into four subregions, A, B, C and D, according to\nthe shapes of the function $b_{GC}(x)$\nin each subregion, as shown in figure~\\ref{fig:GC-b-p0}.\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=.35\\textwidth]{GC_b_A_p0.eps} \\quad\n \\includegraphics[width=.35\\textwidth]{GC_b_B_p0.eps} \\quad\n \\includegraphics[width=.35\\textwidth]{GC_b_C_p0.eps} \\quad\n \\includegraphics[width=.35\\textwidth]{GC_b_D_p0.eps}\n \\caption{Four typical shapes of the function $b_{GC}(x)$ in the D0-D4 system. Diagrams A\\/B\\/C\\/D\n correspond to regions A\\/B\\/C\\/D in figure~\\ref{fig:GC-q-Phi-p0}, respectively.}\n \\label{fig:GC-b-p0}\n\\end{figure}\nThe first diagram of figure~\\ref{fig:GC-b-p0} corresponds to region A in the previous figure,\nwhere $b_{GC}$ is shown to be monotonically decreasing. So, at any $ \\bar{b} >b_2$, the brane system\nis thermally stable. In addition, the electrical stability condition\nrequires $ \\bar{x} >x^+$\nor, equivalently, $ \\bar{b} <b_{GC}(x^+)$. In regions B, C and D, electrical stability\nlikewise requires $ \\bar{x} >x^+$,\nand\nthe numerical computation shows that $x^+>x_2$ (with $x_2$ the largest horizon parameter on the\nsmaller black brane branch), as shown in the first\ndiagram in figure~\\ref{fig:GC-x+x<-p0}, which means that the smaller black brane phase is by\nno means electrically stable. So in the following analysis, we only need to find out whether\nthe larger black brane phases are stable.\n\nThe difference between the $ b_{GC} $ curves in regions\nB, C and D is that in region B or C there exists a temperature $T_t=1\\/b_t$ at which the large\nblack brane and the small one have the same thermodynamic potential, whereas in region D there is no such\n$b_t$, because the large black brane, if it exists, always has a higher\nthermodynamic potential than the smaller\none at the same temperature. So in region D the larger black brane is globally unstable, which means there exist no\nthermodynamically stable black brane phases in D; this is why we used\na dotted line as the right\n boundary of region D in figure~\\ref{fig:GC-q-Phi-p0}. The distinction between regions\nB and C will be dealt with as follows.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=.48\\textwidth]{GC_x+xM_p0.eps}\n \\includegraphics[width=.48\\textwidth]{GC_b+bt_p0.eps}\n \\caption{The first diagram shows the numerical result $x^+>x_2$ in regions B, C and D.\n The second diagram shows the general relation between $b_t$ and $ b_{GC} (x^+)$ in regions B and C,\n though the special value $\\Phi=0.5$ is chosen to plot this diagram.}\n \\label{fig:GC-x+x<-p0}\n\\end{figure}\nOn the one hand, since in both regions, for $\\bar b\\in (b_2,b_t)$,\nthe larger black brane always has lower free energy than the\ncorresponding smaller black\nbrane (if there is one), the thermal stability condition would be $b_t> \\bar{b} >b_2$. On the other\nhand, since $x^+>x_2$, the electrical stability condition now becomes $ b_{GC} (x^+)> \\bar{b} >b_2$.\nCombining these two conditions, we get $b_2< \\bar{b} <\\textrm{min}\\{b_t, b_{GC} (x^+)\\}$. So we need to\ndetermine which of $b_t$ and $ b_{GC} (x^+)$ is smaller. Again by\nnumerical calculations, we\nfind that there exists a charge $q_0\\approx 0.1227$, above which (in region B) we have\n$ b_{GC} (x^+)<b_t$, and below which (in region C) $ b_{GC} (x^+)>b_t$,\nas illustrated in the second diagram in figure~\\ref{fig:GC-x+x<-p0}.
\nOne may further notice that in figure~\\ref{fig:GC-x+x<-p0}, near (but less than)\n$q_c$, we still have $ b_{GC} (x^+)<b_t$.\n\n\\subsection{CG ensemble}\n\n\\subsubsection*{D2-D6-branes}\n\nThe analysis in the CG ensemble parallels that of the GC ensemble, with the electrical\nstability conditions \\eqref{eq:CG-electrical} now requiring $x>x_0$. For the D2-D6 system,\none can show that $x_0<X$; since the thermally stable black brane has $ \\bar{x} >X$, the black brane is automatically\nelectrically stable. Therefore, the thermodynamic stability condition is\n\\begin{eqnarray}\n (\\varphi, Q) \\in \\textrm{A} \\quad \\textrm{and} \\quad b_2< \\bar{b} <b_{unstable} .\n \\label{eq:CG-final-p2}\n\\end{eqnarray}\n\n\\subsubsection*{D1-D5-branes}\n\nFor the D1-D5 system, we again need to compare $X$ with $x_0$; the condition $X<x_0$\nreduces to the requirement that a certain function of $Q$ exceed $1\\/\\sqrt{2}$.\nThis can be readily solved and gives $1\\/3>Q>5\\/3-\\sqrt{2}$, which corresponds to region B in\nfigure~\\ref{fig:CG-Q-phi-p1}. This also proves that in region C we have $X>x_0$.\n\nCombining all cases together, we obtain the final condition for the D1-D5-branes to be stable\nin the CG ensemble,\n\\begin{eqnarray}\n (\\varphi,Q) \\in \\textrm{A} \\cup \\textrm{B} \\cup \\textrm{C} \\quad \\textrm{and} \\quad\n b_2 < \\bar{b} < \\left\\{\n \\begin{array}[]{ll}\n b_{CG} (x_0) , & (\\varphi,Q) \\in \\textrm{A} \\cup \\textrm{B}\\, ,\\\\\n b_{unstable} , & (\\varphi,Q) \\in \\textrm{C}\\,.\n \\end{array} \\right.\n \\label{eq:CG-final-p1}\n\\end{eqnarray}\n\n\\subsubsection*{D0-D4-branes}\n\nWe have seen enough evidence that the GC and CG ensembles\nare related to each other by the exchange $\\Phi\\leftrightarrow\\varphi$,\n$Q\\leftrightarrow q$, and it is not too hard to convince oneself\nthat this is also true for the D0-D4 branes, by the same calculations as in the\nprevious subsections. So\nwe shall omit most of the rigorous deduction details and simply\npresent the final results below. The regions that can have both electrically and thermally stable\nblack brane phases in the $Q$-$\\varphi$ plane are shown in figure~\\ref{fig:CG-p0-Q-phi}.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=.4\\textwidth]{CG_Q_phi_p0.eps}\n \\caption{Regions in the $Q$-$\\varphi$ plane where stable phases can exist. $Q_c$ and $Q_0$\n respectively have the same values as $q_c$ and $q_0$ in the GC ensemble, i.e.\n $Q_c\\approx 0.1416$ and $Q_0\\approx 0.1227$.}\n \\label{fig:CG-p0-Q-phi}\n\\end{figure}\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=.32\\textwidth]{CG_b_A_p0.eps}\n \\includegraphics[width=.32\\textwidth]{CG_b_B_p0.eps}\n \\includegraphics[width=.32\\textwidth]{CG_b_C_p0.eps}\n \\caption{Typical shapes of $ b_{CG} $ in the D0-D4 system. Diagrams A\\/B\\/C correspond to regions A\\/B\\/C\n in figure~\\ref{fig:CG-p0-Q-phi}, respectively.}\n \\label{fig:CG-p0-b}\n\\end{figure}\nAs to the constraint on $ \\bar{b} $, in region A, $ b_{CG} $ looks like the curve in the first\ndiagram in figure~\\ref{fig:CG-p0-b}, so in order for the system to be stable, we need\n$b_2< \\bar{b} < b_{CG} (x_0)$. In regions B and C, $ b_{CG} $ has the shape shown in the second and\nthird diagrams, respectively, and in both regions there exists a $b_t$ which has the\nsame meaning as the $b_t$ in figure~\\ref{fig:GC-b-p0}. The difference between regions\nB and C is that in the former we have $b_t> b_{CG} (x_0)$ while in the latter otherwise.\nHence in region B we need $b_2< \\bar{b} < b_{CG} (x_0)$, while in region C we need $b_2< \\bar{b} <b_t$.\n\n\\subsection{CC ensemble}\n\n\\subsubsection*{D2-D6-branes}\n\nIn the CC ensemble, the electrical stability condition \\eqref{eq:CC-electrical} requires $x>x^+$.\nFor the D2-D6 system, the $Q$-$q$ plane can be divided into regions A, B and C according to\nthe shape of $ b_{CC} (x)$; the difference is that in region B we have $b_{unstable}> b_{CC} (x^+)$\nwhile in region C it is otherwise, and this difference obviously leads to different constraints\non $ \\bar{b} $.
The line which divides regions B and C, and on which $b_{unstable}= b_{CC} (x^+)$, is described by the\nfollowing relation,\n\\begin{eqnarray}\n (\\sqrt{1+3q^2} - \\sqrt{3}q) + (\\sqrt{1+3Q^2} - \\sqrt{3}Q) = 8 - 4\\sqrt{3} .\n \\label{eq:CC-BC-boundary}\n\\end{eqnarray}\nThere is also an interesting feature: the symmetry between D2 and D6 can be explicitly seen\nvia the symmetry between $Q$ and $q$ in the explicit expression for $ b_{CC} (x^+)$ and in \\eqref{eq:CC-BC-boundary}.\n\nTo sum up, the thermodynamic stability condition for the D2-D6 system in the CC ensemble can be expressed\nby the following simple constraint,\n\\begin{eqnarray}\n (0 <)\\ \\bar{b} < \\left\\{\n \\begin{array}[]{ll}\n b_{CC} (x^+) , & (Q,q) \\in \\textrm{A} \\cup \\textrm{B} ,\\\\\n b_{unstable} , & (Q,q) \\in \\textrm{C}\n \\end{array} \\right. .\n \\label{eq:CC-p2-final}\n\\end{eqnarray}\n\n\\subsubsection*{D1-D5-branes}\n\nFor the D1-D5 system, the $Q$-$q$ plane can also be divided into three regions (see\nfigure~\\ref{fig:CC-p1-Q-q}), and in each region the shape of $ b_{CC} $ is shown\nin figure~\\ref{fig:CC-p1-b}.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=.4\\textwidth]{CC_Q_q_p1.eps}\n \\caption{Regions in the $Q$-$q$ plane where there exist different types of $ b_{CC} $ curves.}\n \\label{fig:CC-p1-Q-q}\n\\end{figure}\nThe two lines dividing regions A and B, and regions B and C, cannot be solved for analytically\nand are both obtained numerically.\nWith the experience of the previous analyses, we can easily read off the final stability\ncondition from figure~\\ref{fig:CC-p1-b},\n\\begin{eqnarray}\n (0 <)\\ \\bar{b} < \\left\\{\n \\begin{array}[]{ll}\n b_{CC} (x^+) , & (Q,q) \\in \\textrm{A} \\cup \\textrm{B} ,\\\\\n b_t , & (Q,q) \\in \\textrm{C}\n \\end{array} \\right. .\n \\label{eq:CC-p1-final}\n\\end{eqnarray}\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=.32\\textwidth]{CC_b_A_p1.eps}\n \\includegraphics[width=.32\\textwidth]{CC_b_B_p1.eps}\n \\includegraphics[width=.32\\textwidth]{CC_b_C_p1.eps}\n \\caption{Shapes of $ b_{CC} $ for the D1-D5 (or D0-D4) system in the CC ensemble. Diagrams A\\/B\\/C\n correspond to regions A\\/B\\/C in figure~\\ref{fig:CC-p1-Q-q}, respectively.}\n \\label{fig:CC-p1-b}\n\\end{figure}\n\n\\subsubsection*{D0-D4-branes}\n\nThe D0-D4 system has very similar features to the D1-D5 system, except that the division\ninto regions A, B and C shown in figure~\\ref{fig:CC-p0-Q-q} is slightly different from\nfigure~\\ref{fig:CC-p1-Q-q}.\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=.4\\textwidth]{CC_Q_q_p0.eps}\n \\caption{Regions in the $Q$-$q$ plane where there exist different types of $ b_{CC} $\n curves. The constants are $q_c=Q_c\\approx 0.1416$ and $q_0=Q_0\\approx 0.1227$.}\n \\label{fig:CC-p0-Q-q}\n\\end{figure}\nThe shapes of $ b_{CC} $ in each region resemble those in figure~\\ref{fig:CC-p1-b}. Therefore,\nthe final stability condition in terms of $ \\bar{b} $ is exactly the same as the condition in\n\\eqref{eq:CC-p1-final}, though the specific values of $ b_{CC} (x^+)$ and $b_t$ are in general different.\nOne final remark: in both the D0-D4 and D1-D5 systems, the first-order\nand second-order phase transitions no longer arise, because of the instability of\nthe small black brane phase in the first-order phase transition, and of\nthe state at the critical\npoint in the second-order phase transition.
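\n\nThe shapes of the $ b_{CC} $ curves in these diagrams can be reproduced numerically from \\eqref{eq:b-temperature}. Reusing the \\texttt{b\\_CC} sketch given after section~\\ref{sec:collection} (sample values chosen purely for illustration), the extrema separating the branches can be located as follows:\n\\begin{verbatim}\nimport numpy as np\n# b_CC(x, Q, q, p) as defined in the earlier sketch\nQ = q = 0.05\nxs = np.linspace(q + 1e-4, 1 - 1e-4, 4000)\nbs = b_CC(xs, Q, q, p=0)\nturning = np.where(np.diff(np.sign(np.diff(bs))) != 0)[0]\nprint(xs[turning])   # extrema of b_CC(x): branch boundaries, if any\n\\end{verbatim}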
\n\n\n\\subsection{Compatibilities}\n\n\nFinally, we should check the compatibility of results obtained from different ensembles.\nFor example, if we take the limit $Q\\to 0$ in CC ensemble, or we take the limit $\\Phi } %{ \\bar{\\Phi} \\to 0$\nin GC ensemble, in both limits we would end up with a canonical ensemble for D$(p+4)$-branes.\nHence compatibility requires the stability conditions in these two ensembles should\ndegenerate into the same condition at the aforementioned limits. This can be easily\nseen by noticing the following two facts. First, from \\eqref{eq:b-temperature} we can\nsee that $b(x,Q=0,q)=b(x,\\Phi } %{ \\bar{\\Phi} =0,q)$ which means the thermal stability conditions coincides\nin these two limits. Second, by setting $\\Phi } %{ \\bar{\\Phi} \\to 0$, one realizes that in GC ensemble\n$x_{max}\\to 1$, so the two electrical stability conditions in \\eqref{eq:GC-electrical}\nand \\eqref{eq:CC-electrical} degenerate. Therefore, we proved the thermodynamic stability\nconditions in CC and GC ensembles are compatible. Similarly, one can also easily\ncheck the other\ncompatibilities such as the degeneracy of CC and CG ensembles in the limit of $q\\to 0$ and\n$\\varphi } %{ \\bar{\\varphi} \\to 0$, the degeneracy of GG and GC in the same limit and so on. So, the conclusion is, the stability conditions in all ensembles are compatible with each other.\n\n\n\\section{Conclusions\\label{sect:conclude}}\n\nIn this paper and our previous paper \\cite{zhou:2015}, we have\nconducted an exhaustive scan over the thermal and electrical\nstabilities of D$p$-D$(p+4)$-brane systems in all possible ensembles, and\nthis paper is especially focusing on the electrical stabilities.\n\nWe have confirmed in GG ensemble the thermal stability alone already\nguarantees the thermodynamic stability which was found in\n\\cite{lu:2011-2} for D$p$-brane systems. We find that in CG,GC and CC\nensembles, the electrical stability conditions generate extra\nconstraints on the horizon sizes and also on electric potentials only\nin GC and CG ensembles, besides the constraints that we already got\nfrom thermal stability conditions. These new conditions rule out the\nsmall brane phases from the old phase diagrams gained from pure\nthermal stability considerations. Consequently, for all possible $p$,\nthere is no van der Waals-like phase structure any more, i.e. neither\nthe first order small-large black brane transitions nor the second\norder phase transition. In the $Q\\to0$ limit, the electrical stability\nconditions also modify the one-charge brane system discussed in\n\\cite{lu:2011} such that the van der Waals phase structures are\ninvisible. Although we find this result in D$p$-D$(p+4)$-brane systems, we\nexpect that it may be a common feature in other charged black hole\nsystems. In fact, this electrical instability has already been noticed\nin EMadS black hole system studied in \\cite{chamblin:1999-2} in which\nthe black holes with smaller horizons near the van der Waals phase\ntransition points were found to be unstable and hence the transition\ndoes not exist after the electrical instability is considered. Whether\nthis is also the case for other systems could be left for future\nresearch works.\n\nFrom the new phase diagrams, we find that the symmetry of\ninterchanging $Q\\leftrightarrow q$ and $\\Phi } %{ \\bar{\\Phi} \\leftrightarrow\\varphi } %{ \\bar{\\varphi} $\nis still kept after electrical stability conditions are considered. 
This\nsymmetry could in fact be a result of T-duality\\footnote{We would\nlike to thank\nJun-Bao Wu for pointing out this possibility.}: if we perform T-duality along\nthe $p+1,\\dots,p+4$ directions of the D$(p+4)$-brane, in which the\nD$p$-charges are smeared, these two kinds\nof branes interchange with each other, which is equivalent to making the above\ninterchange of charges. This provides a natural interpretation of this symmetry\nof the phase structure.\n\n\nSo far, we have included the thermal stability and the electrical\nstability in the discussion of the thermodynamic structure of the black brane system.\nHowever, there are two more independent parameters, the volume\nof the cavity and the volume of the brane. The corresponding\ngeneralized forces are the pressures on the cavity or in the brane\ndirections. These parameters may also introduce new stability\nconditions, such as the compressibilities in different directions.\nThis is analogous to the discussions of AdS black holes or\nasymptotically AdS black holes, in which the cosmological constant\nplays the role of the pressure \\cite{Kastor:2009wy} and the positivity\nof the compressibility also determines the stability of the\nsystem \\cite{Dolan:2014lea}. Discussing the effects of this kind of\ninstability on the phase structure of the black brane system could also\nbe a future research direction.\n\n\\section*{Acknowledgments}\n\nThis work is supported by the National Natural Science Foundation of\nChina under grants No.~11105138 and No.~11235010. Z.X. is also partly\nsupported by the Fundamental Research Funds for the Central\nUniversities under grant No.~WK2030040020. He also thanks Liang-Zhu Mu\nand Jun-Bao Wu for helpful discussions. D.Z. is indebted to the\nChinese Scholarship Council (CSC). He would also like to thank Prof.\nYang-Hui He for providing him with a one-year visiting studentship at\nCity University London. Finally, the authors are especially grateful to\nthe referee of our last paper for the insightful suggestions that\ninitiated this work.\n\n\n\\section{Introduction}\n\nWhile off-the-shelf deep models keep finding promising applications, it has gradually been\nrecognized that the problem structure should be incorporated into the design of deep architectures. Such customized deep architectures can benefit from their problem-specific regularizations, and improve performance. In particular, there has been a blooming interest in bridging sparse coding \\cite{wang2015sparse} and deep models. Starting from \\cite{LISTA}, many works \\cite{AAAI16}, \\cite{SDM}, \\cite{D3}, \\cite{ijcai16} leveraged similar ideas on fast trainable regressors, and constructed feed-forward network approximations to solve variants of sparse coding models. Lately, \\cite{xin2016maximal} demonstrated both theoretically and empirically that a trained deep network is potentially able to recover $\\ell_0$-based sparse representations under milder conditions.\n\nThis paper proceeds along this direction to embed sparsity regularization into the target deep model, and simultaneously exploits the \\textbf{structure of model parameters} in the design of the model architecture. To the best of our knowledge, it is the first principled and unified framework that jointly sparsifies both the learned features and the model parameters.
The resulting deep feed-forward network, called the \\textit{deep double sparsity encoder} \\textbf{(DDSE)}, enjoys a compact structure, a clear interpretation, an efficient implementation, and competitive performance, as verified by various comparison experiments.\n\n\\section{Related Work}\n\n\\subsection{Network Implementation of Sparse Coding}\n\n \\begin{figure}[htbp]\n\\centering\n\\begin{minipage}{0.33\\textwidth}\n\\centering \\subfigure[] {\n\\includegraphics[width=\\textwidth]{GeneralR.pdf}\n}\\end{minipage}\n\\\\\n\\begin{minipage}{0.45\\textwidth}\n\\centering \\subfigure[] {\n\\includegraphics[width=\\textwidth]{GeneralUnfold.pdf}\n}\\end{minipage}\n\\caption{(a) The recursive system diagram for Eqn. (\\ref{iterative}); (b) a 3-layer neural network, unfolded and truncated to $k$ = 2 iterations from (a).}\n\\label{structure}\n\\end{figure}\n\n\\begin{figure*}[htbp]\n\\centering\n\\begin{minipage}{0.99\\textwidth}\n\\centering{\n\\includegraphics[width=\\textwidth]{doublesparse.pdf}\n}\\end{minipage}\n\\caption{The proposed deep double sparsity encoder, unfolded and truncated to $k$ = 2 iterations. The parameters $\\mathbf{W}_l$ ($l$ = 1, 2, 3) are subject to the constraints in Eqn. (\\ref{overall}).}\n\\label{dss}\n\\end{figure*}\n\n\nWe start from the classical sparse coding model \\cite{wang2015sparse} ($||\\mathbf{D}||_2$ = 1 by default hereinafter):\n\\begin{equation}\n\\begin{array}{l}\\label{rr}\n\\mathbf{z} = \\arg \\min_{\\mathbf{z}} \\frac{1}{2}||\\mathbf{x} - \\mathbf{D}\\mathbf{z}||_2^2 + \\lambda ||\\mathbf{z}||_1.\n\\end{array}\n\\end{equation}\n$\\mathbf{x} \\in R^n$ denotes the input data, $\\mathbf{z} \\in R^m$ is the sparse code feature, $\\mathbf{D} \\in R^{n \\times m}$ is the dictionary, and $\\lambda$ is the sparsity regularization coefficient. $\\mathbf{D}$ is usually chosen to be \\textit{overcomplete}, i.e. $m > n$. Eqn. (\\ref{rr}) can be solved by the iterative shrinkage and thresholding algorithm (ISTA) \\cite{IST} ($\\mathbf{u}$ is a vector and $u_i$ is its $i$-th element):\n \\begin{equation}\n\\begin{array}{l}\\label{iterative}\n\\mathbf{z}^{k+1} = \\mathcal{N}(\\mathcal{L}_1(\\mathbf{x}) + \\mathcal{L}_2(\\mathbf{z}^k)),\\, \\text{where}:\\\\\n\\mathcal{L}_1(\\mathbf{x}) = \\mathbf{D}^T \\mathbf{x}, ~\\mathcal{L}_2(\\mathbf{z}^k) = (\\mathbf{I} - \\mathbf{D}^T\\mathbf{D}) \\mathbf{z}^k, \\\\ \\mathcal{N}(\\mathbf{u})_i = \\text{sign}(u_i)(|u_i| - \\lambda)_{+},\n\\end{array}\n\\end{equation}\nwhere $\\mathbf{z}^{k} \\in R^m$ denotes the intermediate output of the $k$-th iteration, $k$ = 0, 1, $\\cdots$. $\\mathcal{L}_1$ and $\\mathcal{L}_2$ are linear operators that both hinge on $\\mathbf{D}$, while $\\mathcal{N}$ is the element-wise soft shrinkage.\n\nEqn. (\\ref{iterative}) can be equivalently expressed by the recursive system in Figure \\ref{structure} (a), whose fixed point is expected to be the solution $\\mathbf{z}$ of (\\ref{rr}). Furthermore, Figure \\ref{structure} (a) can be \\textit{unfolded} and \\textit{truncated} to $k$ iterations, to construct a ($k$+1)-layer feed-forward network \\cite{LISTA}, as in Figure \\ref{structure} (b). Without any further tuning, the resulting \\textit{learned ISTA} (LISTA) architecture will output a $k$-iteration approximation of the exact solution $\\mathbf{z}$. Moreover, Figure \\ref{structure} (b) can be viewed as a \\textit{trainable regressor} to fit the data, as a function of $\\mathbf{D}$. It can be jointly tuned with a task-specific loss function $\\mathcal{F}_\\theta(\\mathbf{z}^k)$ (e.g., the softmax loss for classification; $\\theta$ denotes the parameters of the loss function), as an end-to-end network \\cite{AAAI16}.
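\n\nFor concreteness, the ISTA iteration in Eqn. (\\ref{iterative}) can be sketched in a few lines of NumPy. This is a minimal illustration written by us (with hypothetical dimensions), not the trained LISTA network itself:\n\\begin{verbatim}\nimport numpy as np\n\ndef soft_shrink(u, lam):\n    # N(u)_i = sign(u_i) * (|u_i| - lambda)_+\n    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)\n\ndef ista(x, D, lam=0.1, n_iter=100):\n    # x: (n,) input; D: (n, m) dictionary with ||D||_2 = 1\n    n, m = D.shape\n    W1 = D.T                    # L1(x) = D^T x\n    W2 = np.eye(m) - D.T @ D    # L2(z) = (I - D^T D) z\n    z = np.zeros(m)\n    for _ in range(n_iter):     # truncating to k iterations gives LISTA's shape\n        z = soft_shrink(W1 @ x + W2 @ z, lam)\n    return z\n\\end{verbatim}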
\n\n\\subsection{Double Sparsity Model for Dictionary Learning}\n\nA crucial consideration in employing the sparse coding model (\\ref{rr}) is the choice of the dictionary $\\mathbf{D}$. It has been observed that the learned dictionary atoms are highly structured, with noticeably regular patterns \\cite{peng2015connections}. This gives rise to the hypothesis that the dictionary atoms\nthemselves may have some underlying sparse structure over a more fundamental dictionary. \\cite{double} proposed a double sparsity model, suggesting that:\n \\begin{equation}\n\\begin{array}{l}\\label{double}\n\\mathbf{D} = \\mathbf{D}_0 \\mathbf{S}, ||\\mathbf{S}(:, i)||_0 \\le s, \\forall i,\n\\end{array}\n\\end{equation}\nwhere $\\mathbf{S}$ is the sparse atom representation matrix, which has no more than $s$ nonzero elements per column ($s \\ll n, m$). We also assume $\\mathbf{D}_0 \\in R^{n \\times n}$ and $\\mathbf{S} \\in R^{n \\times m}$. Note that in \\cite{double}, $\\mathbf{D}_0$ is chosen in $R^{n \\times m}$, and $\\mathbf{S} \\in R^{m \\times m}$. We make slightly different choices in order to obtain an orthogonal $\\mathbf{D}_0$, whose benefits will be shown next. The base dictionary $\\mathbf{D}_0$ spans the signal space, and will generally be chosen to have a quick implementation. The new parametric structure of $\\mathbf{D}$ leads to a simple and flexible dictionary representation which is both adaptive and efficient. Advantages of the double sparsity model (\\ref{double}) also include compact representation, stability under noise and reduced overfitting, among others.
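\n\nAs a minimal sketch (hypothetical sizes, random entries), a double-sparse dictionary of the form (\\ref{double}) can be instantiated as follows:\n\\begin{verbatim}\nimport numpy as np\n\ndef random_double_sparse_dict(n, m, s, rng=np.random):\n    # D0: an orthonormal base dictionary (a random orthogonal matrix here;\n    # in practice D0 would be a fast transform or, as in Section 3, PCA)\n    D0, _ = np.linalg.qr(rng.randn(n, n))\n    # S: atom representation matrix, at most s nonzeros per column\n    S = np.zeros((n, m))\n    for j in range(m):\n        rows = rng.choice(n, size=s, replace=False)\n        S[rows, j] = rng.randn(s)\n    D = D0 @ S\n    return D0, S, D / np.linalg.norm(D, 2)   # enforce ||D||_2 = 1\n\\end{verbatim}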
\n\\section{Deep Double Sparsity Encoder}\n\\subsection{The Proposed Model}\n\nGiven $\\mathbf{D}_0$ and $\\mathbf{S}$, we substitute (\\ref{double}) into (\\ref{iterative}) to obtain:\n\\begin{equation}\n\\begin{array}{l}\\label{ds}\n\\mathcal{L}_1(\\mathbf{x}) = \\mathbf{S}^T \\mathbf{D}_0^T \\mathbf{x}, ~\\mathcal{L}_2(\\mathbf{z}^k) = (\\mathbf{I} - \\mathbf{S}^T \\mathbf{D}_0^T\\mathbf{D}_0 \\mathbf{S}) \\mathbf{z}^k,\n\\end{array}\n\\end{equation}\nwith the iterative formula for $\\mathbf{z}^k$ and the form of $\\mathcal{N}$ remaining the same.\nCompared to (\\ref{iterative}), $\\mathbf{S}$ now becomes the trainable parameter in place of $\\mathbf{D}$.\n\nTo simplify (\\ref{ds}), we first eliminate $\\mathbf{D}_0^T\\mathbf{D}_0$ from $\\mathcal{L}_2(\\mathbf{z}^k)$. Given the training data $\\mathbf{X}_{\\Sigma} \\in R^{n \\times t} = \\{\\mathbf{x}_i\\}, i = 1, 2, ..., t$, and assuming $\\mathbf{X}_\\Sigma$ to have zero mean, we choose $\\mathbf{D}_0$ as the (full) eigenvector matrix of $\\mathbf{X}_{\\Sigma}\\mathbf{X}_{\\Sigma}^T$ (i.e., the eigenvectors of the covariance matrix of $\\mathbf{X}_{\\Sigma}$). The obtained $\\mathbf{D}_0$ constitutes an orthonormal basis for $R^n$. Further, $\\mathbf{D}_0^T \\mathbf{x}$ performs the PCA projection of $\\mathbf{x}$, denoted as $\\mathbf{x}_{\\text{PCA}}= \\mathbf{D}_0^T \\mathbf{x}$. The formula (\\ref{ds}) is then reduced to:\n\\begin{equation}\n\\begin{array}{l}\\label{ds1}\n\\mathcal{L}_1(\\mathbf{x}) = \\mathbf{W}_1 \\mathbf{x}_{\\text{PCA}}, ~\\mathcal{L}_2(\\mathbf{z}^k) = (\\mathbf{I} - \\mathbf{W}_3 \\mathbf{W}_2) \\mathbf{z}^k,\\, \\text{where} \\\\\n\\,\\,\\,\\,\\, \\mathbf{W}_1 = \\mathbf{S}^T, \\mathbf{W}_2 = \\mathbf{S}, \\mathbf{W}_3 = \\mathbf{S}^T.\n\\end{array}\n\\end{equation}\nWe introduce three new variables in (\\ref{ds1}): $\\mathbf{W}_1 \\in R^{m \\times n}$, $\\mathbf{W}_2 \\in R^{n \\times m}$, and $\\mathbf{W}_3 \\in R^{m \\times n}$. Both $\\mathbf{W}_1$ and $\\mathbf{W}_3$ have no more than $s$ nonzero elements per \\textit{row}, while $\\mathbf{W}_2$ has no more than $s$ nonzero elements per \\textit{column}. Figure \\ref{dss} depicts the resulting \\textit{deep double sparsity encoder} \\textbf{(DDSE)}, unfolded and truncated from (\\ref{ds1}) (up to $k$ = 2). We purposely model $\\mathbf{W}_2$ and $\\mathbf{W}_3$ as two separate layers (with no nonlinearity in between), so that we can specify the proper row- or column-wise sparsity constraint on each.\n\nFurthermore, under the loss function $\\mathcal{F}_\\theta$, $\\mathbf{W}_1$, $\\mathbf{W}_2$ and $\\mathbf{W}_3$ can again be learned via end-to-end learning, instead of being constructed from any pre-computed $\\mathbf{S}$\\footnote{Another parameter to be learned jointly is the threshold $\\lambda$ in $\\mathcal{N}$. It is handled in the same way as in \\cite{AAAI16}.}. In this way, the DDSE network is solved over $\\mathbf{X}_{\\Sigma}$ by back-propagation, where the $\\mathbf{W}_l$s ($l$ = 1, 2, 3) are treated as fully-connected layers. Different from \\cite{AAAI16}, $\\mathbf{W}_2$ and $\\mathbf{W}_3$ are \\textbf{untied} across iterations, in order to enlarge the learning capacity. We also relax the formulation (\\ref{ds1}) by \\textbf{decoupling} the $\\mathbf{W}_l$s ($l$ = 1, 2, 3) from each other, e.g., it is no longer required that $\\mathbf{W}_1$ = $\\mathbf{W}_3$ or $\\mathbf{W}^T_2$ = $\\mathbf{W}_3$ during training. For simplicity, we use the same $s$ for all $\\mathbf{W}_l$s.
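\n\nPutting the pieces together, a minimal forward-pass sketch of DDSE (with dense NumPy arrays standing in for the sparse $\\mathbf{W}_l$s, written only to make the wiring explicit) could read:\n\\begin{verbatim}\nimport numpy as np\n\ndef ddse_forward(x_pca, W1, W23_list, lam):\n    # x_pca: (n,) PCA-projected input; W1: (m, n)\n    # W23_list: k pairs (W2, W3), untied across iterations,\n    #           with W2: (n, m) and W3: (m, n)\n    def shrink(u):\n        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)\n    z = shrink(W1 @ x_pca)                       # z^0\n    for W2, W3 in W23_list:\n        # z^{k+1} = N(W1 x_pca + (I - W3 W2) z^k)\n        z = shrink(W1 @ x_pca + z - W3 @ (W2 @ z))\n    return z\n\\end{verbatim}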
\n\n\\subsection{The Projected Gradient Descent Algorithm}\n\nLet $\\mathcal{G}$ denote the nonlinear mapping from the data to the last hidden feature before the loss;\nthe optimization problem of training DDSE can then be written as:\n\\begin{equation}\n\\begin{array}{l}\\label{overall}\n\\min_{\\{\\mathbf{W}_1, \\mathbf{W}_2, \\mathbf{W}_3, \\theta\\}} \\mathcal{F}_\\theta(\\mathcal{G}(\\mathbf{X}_{\\Sigma}|\\mathbf{W}_1, \\mathbf{W}_2, \\mathbf{W}_3)),\\\\ \ns.t. ||\\mathbf{W}_1(i,:)||_0 \\le s, ||\\mathbf{W}_2(:, j)||_0 \\le s, \\\\\n\\quad\\,\\, ||\\mathbf{W}_3(k,:)||_0 \\le s, \\forall i, j, k.\n\\end{array}\n\\end{equation} \nApart from the constraints, the objective in (\\ref{overall}) is usually minimized by the stochastic gradient descent (SGD) algorithm ($\\gamma$ is the learning rate):\n\\begin{equation}\n\\begin{array}{l}\\label{sgd}\n\\mathbf{W}_l = \\mathbf{W}_l - \\gamma \\frac{\\partial \\mathcal{F}}{\\partial \\mathbf{W}_l}, l= 1, 2, 3. \n\\end{array}\n\\end{equation} \nIt is guaranteed to converge to a stationary point under assumptions somewhat stricter than the ones satisfied here \\cite{bottou2010large}\\footnote{As is typical in deep learning, SGD is widely used in settings where it is not guaranteed to converge in theory, but behaves well in practice.}.\nWith the constraints in (\\ref{overall}) specifying the feasible sets, we move forward to the Projected Gradient Descent (PGD) algorithm:\n\\begin{equation}\n\\begin{array}{l}\\label{pgd}\n\\mathbf{W}_l = \\mathcal{P}_l(\\mathbf{W}_l - \\gamma \\frac{\\partial \\mathcal{F}}{\\partial \\mathbf{W}_l}), l= 1, 2, 3, \n\\end{array}\n\\end{equation} \nwhere $\\mathcal{P}_l$ is the projection onto the feasible set for $\\mathbf{W}_l$. When $l$ = 1, 3, $\\mathcal{P}_l$ keeps the $s$ largest-magnitude elements in each row of $\\mathbf{W}_l$, and zeros out the others. For $l$ = 2, $\\mathcal{P}_l$ is the same hard thresholding operator, but applied column-wise.\n\nSince both the objective and the feasible sets of (\\ref{overall}) are non-convex, there is no convergence guarantee for PGD in (\\ref{pgd}). However, a large body of literature, e.g., \\cite{IST}, has demonstrated that solving such problems with PGD works well in practice. The stochastic implementation of PGD is also straightforward.
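\n\nThe projections $\\mathcal{P}_l$ amount to row- or column-wise hard thresholding; a minimal NumPy sketch of one PGD step (ours for illustration, not the CUDA implementation used in the experiments) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef hard_threshold_rows(W, s):\n    # keep the s largest-magnitude entries in each row, zero out the rest\n    P = np.zeros_like(W)\n    idx = np.argsort(-np.abs(W), axis=1)[:, :s]\n    rows = np.arange(W.shape[0])[:, None]\n    P[rows, idx] = W[rows, idx]\n    return P\n\ndef project(W, l, s):\n    # P_l from Eqn. (pgd): rows for l = 1, 3; columns for l = 2\n    if l in (1, 3):\n        return hard_threshold_rows(W, s)\n    return hard_threshold_rows(W.T, s).T\n\n# one PGD step, given grad = dF/dW_l and learning rate gamma:\n# W_l = project(W_l - gamma * grad, l, s)\n\\end{verbatim}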
The proposed DDSE model implies an adaptive regime for dropconnect, where the selection of ``dropped'' weights is decided not randomly, but by data-driven hard thresholding. Moreover, both dropout and dropconnect are applied only during training, and cannot reduce the actual model size.

DDSE can alternatively be viewed as carrying a \textit{weight decay}-type penalty, enforced here through hard $\ell_0$ constraints. The skip connections (a.k.a. \textit{shortcuts}) in DDSE are also reminiscent of the \textit{residual learning} strategy \cite{Res}.



\section{Experiments}

\subsection{Implementation}

The proposed DDSE is implemented with the CUDA ConvNet package \cite{imagenet}. We use a constant learning rate of 0.01, with the momentum parameter fixed at 0.9, and a batch size of 128. Neither dropout nor dropconnect is applied unless specified otherwise. As in \cite{imagenet}, we manually decrease the learning rate when the network stops improving, according to a schedule determined on a validation set.

As suggested by (\ref{ds1}), we first subtract the mean and conduct PCA over the training data $\mathbf{X}_{\Sigma}$. We adopt the multi-step update strategy of \cite{jiashi}, namely, updating $\mathbf{W}_l$ by SGD without the cardinality constraints for several (15 by default) iterations before applying the projection $\mathcal{P}_l$ ($l$ = 1, 2, 3). This both accelerates training, by reducing the time spent on hard thresholding, and encourages DDSE to learn more informative parameters, which makes the pruning more reliable.

While many neural networks train well with random initializations, it has been discovered that poor initializations can still hamper the effectiveness of first-order methods \cite{sutskever2013importance}. In contrast, it is easy to initialize DDSE in the right regime. We first initialize $\mathbf{S}$ by setting $s$ randomly selected elements in each column to one, and the rest to zero. Based on the correspondences in (\ref{ds1}), the $\mathbf{W}_l$ ($l$ = 1, 2, 3) are all trivially initialized from $\mathbf{S}$. This helps DDSE achieve a steadily decreasing training error curve, without common tricks such as annealing the learning rate, which may be indispensable when random initialization is applied.



\subsection{Simulation and Comparison}

\begin{figure}[htbp]
\centering
\begin{minipage}{0.40\textwidth}
\centering{
\includegraphics[width=\textwidth]{s.png}
}\end{minipage}
\caption{The error rate (\%) comparison between baselines and DDSE on MNIST, with the sparsity ratio $s/n$ varied.}
\label{varys}
\end{figure}

\begin{figure}[htbp]
\centering
\begin{minipage}{0.40\textwidth}
\centering{
\includegraphics[width=\textwidth]{m.png}
}\end{minipage}
\caption{The error rate (\%) comparison between baselines and DDSE on MNIST, with the feature dimension $m$ varied.}
\label{varym}
\end{figure}


In the simulation experiments, we use the first 60,000 samples of the MNIST dataset for training and the last 10,000 for testing. An MNIST sample is a $28 \times 28$ gray-scale image, i.e., $n$ = 784. Common data augmentations (noise, blur, flipping, rotation, and scaling) are applied. In addition to a $k$-iteration DDSE, we design five baselines for comparison:
\begin{itemize}
\item \textbf{Baseline I:} a ($k$+1)-layer fully-connected network, whose first layer $\in R^{m \times n}$ and remaining $k$ layers $\in R^{m \times m}$.
\item \textbf{Baseline II:} Baseline I regularized by \textit{dropout}, with a ratio of 0.5 (as in \cite{imagenet}) for each layer.
\item \textbf{Baseline III:} Baseline I regularized by \textit{dropconnect}, with a ratio of 0.5 (as in \cite{wan2013regularization}) for each layer.
\item \textbf{Baseline IV:} a LISTA network, unfolded and truncated to $k$ iterations from (\ref{rr}). We also apply dropout to regularize its fully-connected layers.
\item \textbf{Baseline V:} a network inspired by \cite{jiashi}, obtained by removing all ``shortcuts'' from DDSE while leaving all else unchanged.
\end{itemize}
All comparison methods are ensured to have identical layer dimensions. They are jointly tuned with the softmax loss for the classification task. The default configuration is $s$ = $\frac{1}{4}n$, $m$ = 1,024, $t$ = 60,000, and $k$ = 2. In the controlled experiments below, we vary each of the four parameters while keeping the others unchanged.


\subsubsection{Sparsity level $s$}

Figure \ref{varys} varies the sparsity ratio $s/n$ from 0.1 to 0.6, and plots the corresponding error rates for all methods. Baselines I - IV are not parameterized by $s$ and thus not affected. Comparing Baselines II and III with Baseline I confirms that applying (even random) regularization reduces overfitting and improves generalization. Baseline V and DDSE both benefit further from their more sophisticated regularization of the parameters. DDSE outperforms Baseline V noticeably at all $s/n$ ratios, and reaches the best overall performance at $s/n$ = 0.25.

As displayed in Figure \ref{varys}, the performance of both Baseline V and DDSE degrades when the $s/n$ ratio is either too small or too large. Whereas a large $s/n$ weakens the regularization effect, a small $s/n$ implies over-regularization, limiting the representational power of the free parameters. In the random dropout/dropconnect cases, the popular practice is to choose $s/n$ around 0.5. \cite{jiashi} also observed the best $s/n$ to lie between 0.4 and 0.5. DDSE seems to admit a lower ``optimal'' $s/n$ (around 0.25). This implies that DDSE can attain competitive performance with fewer parameters (i.e., a lower $s/n$), by ``smartly'' selecting nonzero elements in a data-driven way.


\subsubsection{Feature dimension $m$}

In (\ref{rr}), the choice of $m$ corresponds to the dimensionality of the learned sparse code feature, and determines the hidden layer dimensions of DDSE and the baselines. As illustrated in Figure \ref{varym}, we start from $m$ = 800 and raise it up to 2,000. Not surprisingly, the performance of Baseline I degrades as $m$ grows larger, evidently due to overfitting. All other methods, regularized in various ways, seem to benefit from larger $m$ values. Among them, DDSE consistently outperforms the others, with a 0.2\% error rate margin over Baseline IV (the second best). DDSE thus proves effective at handling a highly over-complete and redundant basis, and hence at learning sparser hidden features.



\subsubsection{Training sample size $t$ ($t_s$)}

DDSE is meant to seek a trade-off between ``data-driven'' and ``model-based'' methods.
By confining the parameters' degrees of freedom and permitting only certain sparse combinations over a pre-specified base dictionary, the parameter structure model (\ref{double}) enables us to reduce, sometimes significantly, the amount of training data required to reliably approximate and recover the underlying nonlinear mapping of the deep model.

\begin{figure}[htbp]
\centering
\begin{minipage}{0.40\textwidth}
\centering{
\includegraphics[width=\textwidth]{t.png}
}\end{minipage}
\caption{The error rate (\%) comparison between baselines and DDSE on MNIST, with $t_s/t$ varied.}
\label{varyt}
\end{figure}

We verify this conjecture empirically with the following comparison experiment. A small subset of size $t_s$ is drawn from $\mathbf{X}_\Sigma$ (the MNIST dataset with $t$ = 60,000 samples), where each class is sampled proportionally, and the ratio $t_s/t$ is ranged from 0.05 to 1. Figure \ref{varyt} shows that DDSE leads to dramatically more robust learning and generalization under insufficient training data. Even when $t_s/t$ is as low as 0.05, DDSE only bears a slight performance loss of 2.46\%, while Baselines IV and V are degraded by more than 6\% and 4\%, respectively. It is also noteworthy that, to match the performance of DDSE at $t_s/t$ = 0.05, Baselines IV and V require approximately $t_s/t$ = 0.4, Baselines II and III require $t_s/t$ = 0.5, and Baseline I even needs $t_s/t$ > 0.8. These observations strongly support our hypothesis that DDSE greatly alleviates the need for large training data by exploiting prior knowledge of the parameter structure. In addition, we note that Baseline V slightly outperforms Baseline IV in Figure \ref{varyt}. Recall that, similarly to DDSE, the regularization of the Baseline V parameters is also enforced by data-driven adaptive sparsity; under small training data, this appears more effective than random dropout.


\subsubsection{Number of layers $k$ + 1}

The last simulation investigates how well DDSE and the other methods scale to deeper cases. We grow $k$ from 1 to 6, resulting in 2- to 7-layer networks\footnote{We find it necessary to apply layer-wise pre-training \cite{erhan2010does} to Baseline I when $k$ > 2; otherwise it converges very slowly or even diverges.}. The comparison in Figure \ref{varyk} clearly demonstrates the superiority of DDSE for all $k$ values. It is also interesting to see from Figure \ref{varyk} that Baseline IV gains a significant performance advantage over Baseline V as $k$ grows, the opposite of the observation in Figure \ref{varyt}. On the one hand, this might be attributed to the utility of ``shortcuts'', as analyzed in \cite{Res}. On the other hand, we believe that incorporating the original problem structure (\ref{rr}) also places deep models in a favorable regime: increasing $k$ resembles running (\ref{iterative}) for more iterations, and thus solving (\ref{rr}) more precisely.
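Relatedly, the depths considered here allow a direct numerical check of the parameter counts from the complexity analysis; a quick Python sketch under the default configuration (the printed counts merely instantiate the formulas $(2k+1)sm$ and $mn + km^2$):
\begin{verbatim}
# Parameter counts from the complexity analysis, under the
# default configuration: n = 784, m = 1024, s = n / 4.
n, m = 784, 1024
s = n // 4
for k in range(1, 7):
    ddse = (2 * k + 1) * s * m    # (2k+1) s m
    lista = m * n + k * m * m     # m n + k m^2
    print("k=%d  DDSE %d  LISTA %d  ratio %.2f"
          % (k, ddse, lista, ddse / lista))
\end{verbatim}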
\begin{figure}[htbp]
\centering
\begin{minipage}{0.40\textwidth}
\centering{
\includegraphics[width=\textwidth]{k.png}
}\end{minipage}
\caption{The error rate (\%) comparison between baselines and DDSE on MNIST, with $k$ varied.}
\label{varyk}
\end{figure}


\subsubsection{Concluding remarks} Although the simulations are intended only as proofs of concept, the results of the default-configured DDSE are already comparable to the 6-layer network in \cite{ciresan3deep}, and to the committee of 25 networks trained with elastic distortions in \cite{meier2011better}. We conclude from these experiments that both the problem structure (``sparsifying features'') and the parameter structure (``sparsifying parameters'') have contributed to the superior performance of DDSE.

The comparison with Baselines II and III shows that the sophisticated regularization of DDSE is more powerful than random dropout/dropconnect. Compared to Baseline IV, DDSE further utilizes the double sparsity structure of the dictionary (\ref{double}) as a prior, which accounts for its improved performance in all aspects. Meanwhile, exploiting the structure of the original problem (\ref{rr}), which encourages sparse and more discriminative features, also helps DDSE consistently outperform Baseline V.



\section{Summary}

The study of DDSE showcases how jointly exploiting the problem structure and the parameter structure improves deep modeling. Our simulations have verified its consistently superior performance, as well as its robustness to highly insufficient training data. In future work, a wide variety of parameter structures will be exploited as priors for different models, such as the subspace structure \cite{peng2016deep} and the tree structure \cite{baraniuk2010model}.

\bibliographystyle{IEEEbib}