\section{Introduction}\n\nBig Data is becoming one of the most (ab)used buzzwords of our times. In companies, industry, and academia, interest is increasing dramatically, and everyone wants to ``do Big Data\\xspace'', even though its definition and its role in analytics are not completely clear.\nFrom a high-level perspective, Big Data\\xspace is about extracting knowledge from both structured and unstructured data.\nThis is a useful process for large organizations such as banks, insurance companies, telecommunication providers, and public institutions, as well as for business in general.\nExtracting knowledge from Big Data\\xspace requires tools satisfying strong requirements with respect to programmability --- that is, allowing users to easily write programs and algorithms that analyze data --- and performance, ensuring scalability when running analyses and queries on multicores or clusters of multicore nodes. \nFurthermore, such tools need to cope with input data in different formats, e.g. 
batches from data marts, live streams from the Internet, or very high-frequency sources.\nIn the last decade, a large number of frameworks for Big Data\\xspace processing have been implemented to address these issues.\n\nTheir common aim is to ensure ease of programming by providing a single framework addressing both batch and stream processing.\nEven when they accomplish this task, they often lack a clear semantics for their programming and execution models.\nFor instance, users may be provided with two different data models for representing collections and streams, both supporting the same operations but often having different semantics.\n\nWe advocate a new Domain Specific Language (DSL), called \\textbf{{Pi}}peline \\textbf{{Co}}m\\-po\\-si\\-tion (PiCo\\xspace), designed over the layered Dataflow\\xspace conceptual framework presented in~\\cite{17:bigdatasurvey:PPL}. The PiCo\\xspace programming model aims at \\emph{easing the programming} of analytics applications along two design routes: 1)~unifying the data access model, and 2)~decoupling processing from data layout.\n\nBoth design routes pursue the same goal: raising the level of abstraction in the programming and execution models with respect to mainstream tools for Big Data\\xspace analytics (Spark~\\cite{zaharia:resilient:2012}, Storm~\\cite{Anis:CoRR:storm:15}, Flink~\\cite{flink-web} and Google Dataflow~\\cite{Dataflow:Akidau:2015}), which typically force the specialization of the algorithm to match the data access and layout. Specifically, data transformation functions (called \\emph{operators} in PiCo\\xspace) exhibit different functional types when accessing data in different ways. \n\nFor this reason, the source code must be revised when switching from one data model to another. This happens in all the above-mentioned frameworks and also in abstract Big Data\\xspace architectures, such as the Lambda~\\cite{15:lambda:kiran} and Kappa~\\cite{kappa-web} architectures. 
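To make this coupling concrete, the following toy Python sketch (our own illustration, not tied to any of the above frameworks' APIs; all names are hypothetical) shows how even a simple word count must be written twice when the data model changes: the batch version folds over a whole bounded collection, while the stream version has a different functional type, consuming an unbounded iterator and emitting updated partial counts after each micro-batch.

```python
from collections import Counter
from typing import Iterable, Iterator

# Batch version: the whole (bounded) collection is available at once,
# so the result is a single final value.
def count_words_batch(lines: Iterable[str]) -> Counter:
    c = Counter()
    for line in lines:
        c.update(line.split())
    return c

# Stream version: same logic, but a different functional type --
# it consumes a (possibly unbounded) iterator and yields the updated
# cumulative counts after every micro-batch of `batch_size` lines.
def count_words_stream(lines: Iterator[str],
                       batch_size: int) -> Iterator[Counter]:
    c = Counter()
    n = 0
    for line in lines:
        c.update(line.split())
        n += 1
        if n == batch_size:
            yield c.copy()
            n = 0
    if n:  # flush a trailing incomplete micro-batch
        yield c

batch = count_words_batch(["a b a", "b c"])
partials = list(count_words_stream(iter(["a b a", "b c"]), batch_size=1))
```

Although both functions embody the same per-key counting logic, neither can be substituted for the other: their signatures differ, which is exactly the specialization the text refers to.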
\nSome frameworks, such as Spark, provide the runtime with a module that converts streams into micro-batches (Spark Streaming, a library running on the Spark core), but different code must still be written at the user level. The Kappa architecture advocates the opposite approach, i.e., ``streamizing'' batch processing, but the streamizing proxy has to be coded. The Lambda architecture requires the implementation of both a batch-oriented and a stream-oriented algorithm, which means coding and maintaining two codebases per algorithm. \n\nPiCo\\xspace fully decouples algorithm design from the data model and layout. Code is written in a fully functional style by composing stateless \\emph{operators} (i.e., transformations in Spark terminology). As we discuss in this report, all PiCo\\xspace operators are polymorphic with respect to data types. This makes it possible to 1)~reuse the same algorithms and pipelines on different data models (e.g., streams, lists, sets, etc.); 2)~reuse the same operators in different contexts; and 3)~update operators without affecting the calling context, i.e., the previous and following stages in the pipeline. 
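As a loose analogy (again in plain Python, not PiCo syntax, with all names hypothetical), writing operators as stateless transformations over an abstract iterable is what buys this kind of decoupling: the same composed pipeline runs unchanged over a bounded list or an unbounded generator, and one operator can be replaced without touching its neighbors.

```python
from itertools import islice
from typing import Callable, Iterable, Iterator

# Each "operator" is a stateless transformation over an abstract iterable;
# it never assumes whether its input is a bounded list or an endless stream.
def tokenize(lines: Iterable[str]) -> Iterator[str]:
    for line in lines:
        yield from line.split()

def to_upper(words: Iterable[str]) -> Iterator[str]:
    return (w.upper() for w in words)

def compose(*ops: Callable[[Iterable], Iterator]) -> Callable[[Iterable], Iterator]:
    # Chain the operators left to right into a single lazy pipeline.
    def pipeline(data: Iterable) -> Iterator:
        for op in ops:
            data = op(data)
        return data
    return pipeline

pipe = compose(tokenize, to_upper)

# Same pipeline, two data models: a bounded list ...
bounded = list(pipe(["hello world"]))

# ... and an unbounded generator (truncated here only for inspection).
def endless() -> Iterator[str]:
    while True:
        yield "tick tock"

unbounded_prefix = list(islice(pipe(endless()), 4))
```

Replacing `to_upper` with any other iterator-to-iterator transformation leaves `tokenize` and the composition untouched, which is the property the PiCo operators are designed to guarantee by construction.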
Notice that in other mainstream frameworks, such as Spark, updating a pipeline by replacing one transformation with another is not necessarily trivial, since it may require the development of input and output proxies to adapt the new transformation to its calling context.\n\nThis report proceeds as follows.\nWe formally define the syntax of a program, which is based on Pipeline s and operator s while hiding the data structures consumed and produced by the program.\nThen we formalize a minimal type system defining the legal compositions of operator s into Pipeline s.\nFinally, we provide a semantic interpretation that maps any PiCo\\xspace program to a functional Dataflow\\xspace graph, representing the transformation flow followed by the processed collections.\n\n\n\n\n\\section{Syntax}\n\\label{ch:pm:design}\n\nWe propose a programming model for processing data collections, based on the Dataflow\\xspace model. \nThe building blocks of a PiCo\\xspace program are \\emph{Pipeline s} and \\emph{Operator s}, which we investigate in this section. 
Conversely, \\emph{Collections} are not included in the syntax; they are introduced in Section~\\ref{ch:pm:coll} since they contribute to defining the type system and the semantic interpretation of PiCo\\xspace programs.\n\n\n\\subsection{Pipeline s}\n\\label{ch:pm:pipe}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\input{pipelines}\n\t\\caption{Graphical representation of PiCo\\xspace Pipeline s\\label{fig:pipes}}\n\\end{figure}\n\n\\begin{table}\n\t\\centering\n\t\\footnotesize\n\t\\begin{tabularx}{\\textwidth}{lXp{0.3\\linewidth}}\n\t\t\\toprule\n\t\tPipeline & Structural\\newline properties & Behavior \\\\\n\t\t\n\t\t\\midrule\n\t\t$\\pname{new}\\xspace\\; op$ & - &\n\t\tdata is processed by operator\\ $op$ (i.e., \\emph{unary} Pipeline{})\\\\\n\t\t\n\t\t\\midrule\n\t\t$\\pname{to}\\xspace\\; p\\; p_1\\; \\ldots\\; p_n$\n\t\t& associativity for linear Pipeline s:\\newline\n\t\t$\n\t\t\\begin{array}{l}\n\t\t\\pname{to}\\xspace\\;(\\pname{to}\\xspace\\; p_A\\; p_B)\\; p_C\\equiv\\\\\n\t\t\\pname{to}\\xspace\\; p_A\\; (\\pname{to}\\xspace\\; p_B\\; p_C)\\equiv\\\\\n\t\tp_A \\;\\vert\\; p_B \\;\\vert\\; p_C\n\t\t\\end{array}\n\t\t$\n\t\t\\smallskip\n\t\t\\newline\n\t\tdestination commutativity:\\newline\n\t\t$\n\t\t\\begin{array}{l}\n\t\t\\pname{to}\\xspace\\; p\\; p_1 \\ldots p_n\\equiv\\\\\n\t\t\\pname{to}\\xspace\\; p\\; p_{\\pi(1)} \\ldots p_{\\pi(n)}\n\t\t\\end{array}\n\t\t$\n\t\t\\newline\n\t\tfor any permutation $\\pi$ of $1..n$ &\n\t\tdata from Pipeline\\ $p$ is sent to all Pipeline s $p_i$ (i.e., broadcast)\\\\\n\t\t\n\t\t\\midrule\n\t\t$\\pname{pair}\\xspace\\; p_1\\; p_2\\; op$\t& - &\n\t\tdata from Pipeline s $p_1$ and $p_2$ are pair-wise processed by operator\\ $op$\\\\\n\t\t\n\t\t\\midrule\n\t\t$\\pname{merge}\\xspace\\; p_1\\; p_2$ & associativity:\\newline\n\t\t$\n\t\t\\begin{array}{l}\n\t\t\\pname{merge}\\xspace\\;(\\pname{merge}\\xspace\\; p_1\\; p_2)\\; p_3\\equiv\\\\\n\t\t\\pname{merge}\\xspace\\; p_1\\;(\\pname{merge}\\xspace\\; p_2\\; 
p_3)\\equiv\\\\\n\t\tp_1\\;+\\; p_2\\;+\\; p_3\n\t\t\\end{array}\n\t\t$\n\t\t\\smallskip\n\t\t\\newline\n\t\tcommutativity:\\newline\n\t\t$\n\t\t\\begin{array}{l}\n\t\t\\pname{merge}\\xspace\\; p_1\\; p_2\\equiv\\\\\n\t\t\\pname{merge}\\xspace\\; p_2\\; p_1\n\t\t\\end{array}\n\t\t$\n\t\t& \tdata from Pipeline s $p_1$ and $p_2$ are merged, respecting the ordering in the case of ordered collections\\\\\n\t\t\n\t\t\\bottomrule\n\t\\end{tabularx}\n\t\\caption{Pipeline s} \n\t\\label{tab:pipelines}\n\\end{table}\n\nThe cornerstone concept in the programming model is the \\emph{Pipeline}, basically a DAG-composition of processing \\emph{operator{s}}.\nPipeline s are built according to the following grammar\\footnote{For simplicity, here we introduce the non-terminal $\\unaryoperator{}$ (resp.\\ $\\binaryoperator$) that includes core and partitioning unary (resp.\\ binary) operator s.}:\n\\begin{grammar}\n\t<pipeline> ::= $\\pname{new}\\xspace\\;<\\unaryoperator>$\n\t\\alt $\\pname{to}\\xspace\\;<pipeline>\\;<pipeline>\\;\\ldots\\;<pipeline>$\n\t\\alt $\\pname{pair}\\xspace\\;<pipeline>\\;<pipeline>\\;<\\binaryoperator>$\n\t\\alt $\\pname{merge}\\xspace\\;<pipeline>\\;<pipeline>$\n\\end{grammar}\n\nWe categorize Pipeline s according to the number of collections they take as input and output:\n\\begin{itemize}\n\t\\item A source Pipeline{} takes no input and produces one output collection\n\t\\item A sink Pipeline{} consumes one input collection and produces no output\n\t\\item A processing Pipeline{} consumes one input collection and produces one output collection\n\\end{itemize}\n\nA pictorial representation of Pipeline s is reported in Figure~\\ref{fig:pipes}.\nWe refer to Figs.~\\ref{fig:pipe-source}, \\ref{fig:pipe-sink} and \\ref{fig:pipe-unary} as \\emph{unary} Pipeline s, since they are composed of a single operator.\nFigs.~\\ref{fig:pipe-linear} and \\ref{fig:pipe-non-linear} represent, respectively, linear (i.e., one-to-one) and branching (i.e., one-to-$n$) \\pname{to}\\xspace{} composition.\nFigs.~\\ref{fig:pipe-pair} and \\ref{fig:pipe-merge} 
represent composition of Pipeline s by, respectively, pairing and merging.\nA dotted line means the respective path may be void (e.g., a source Pipeline{} has void input path).\nMoreover, as we show in Section~\\ref{ch:pm:type system}, Pipeline s are not allowed to consume more than one input collection, thus both \\pname{pair}\\xspace{} and \\pname{merge}\\xspace{} Pipeline s must have at least one void input path.\n\nThe meaning of each Pipeline{} is summarized in Table~\\ref{tab:pipelines}.\n\n\n\n\n\\subsection{Operator{s}}\n\\label{ch:pm:actors}\n\nOperator{s} are the building blocks composing a Pipeline.\nThey are categorized according to the following grammar of core operator\\ families:\n\\begin{grammar}\n\t<$\\coreoperator$> ::= $<\\amodifier{core}\\unaryoperator>$ | $<\\amodifier{core}\\binaryoperator>$\n\t\n\t<$\\amodifier{core}\\unaryoperator$> ::= $<\\aname{map}\\xspace>$ | $<\\aname{combine}>$ | $<\\aname{emit}>$ | $<\\aname{collect}>$\n\t\n\t<\\amodifier{core}\\binaryoperator> ::= $<\\aname{b-\\map}>$ | $<\\aname{b-\\combine}>$\n\\end{grammar}\n\nThe intuitive meanings of the core operator s are summarized in Table~\\ref{tab:actors}.\n\\begin{table}[H]\n\t\\centering\n\t\\footnotesize\n\t\\begin{tabularx}{\\textwidth}{lp{0.16\\linewidth}lX}\n\t\t\\toprule\n\t\t{Operator\\ family} & Categorization & Decomposition & Behavior \\\\\n\t\t\n\t\t\\midrule\n\t\t\\aname{map}\\xspace & unary, \\newline element-wise & no &\n\t\tapplies a user function to each element in the input collection\\\\\n\t\t\n\t\t\\midrule\n\t\t\\aname{combine} & unary, \\newline collective& yes &\n\t\tsynthesizes all the elements in the input collection\n\t\tinto an atomic value, according to a user-defined policy\\\\\n\t\t\n\t\t\\midrule\n\t\t\\aname{b-\\map} & binary, \\newline pair-wise & yes &\n\t\tthe binary counterpart of \\aname{map}\\xspace: applies a (binary) user function to \n\t\teach pair generated by pairing (i.e. 
zipping\/joining) two input collections\\\\\n\t\t\n\t\t\\midrule\n\t\t\\aname{b-\\combine} & binary, \\newline collective & yes &\n\t\tthe binary counterpart of \\aname{combine}: synthesizes all pairs generated by pairing (i.e. zipping\/joining) two input collections \\\\\n\t\t\n\t\t\\midrule\n\t\t\\aname{emit} & produce-only & no &\n\t\treads data from a source, e.g., regular collection, text file, tweet feed, etc.\\\\\n\t\t\n\t\t\\midrule\n\t\t\\aname{collect} & consume-only & no &\n\t\twrites data to some destination, e.g., regular collection, text file, screen, etc.\\\\\n\t\t\\bottomrule\n\t\\end{tabularx}\n\t\\caption{Core operator\\ families.\\label{tab:actors}}\n\\end{table}\n\n\nIn addition to core operator s, generalized operator s can decompose their input collections by:\n\\begin{itemize}\n\t\\item partitioning the input collection according to a user-defined grouping policy (e.g., group by key)\n\t\\item windowing the \\emph{ordered} input collection according to a user-defined windowing policy (e.g., sliding windows)\n\\end{itemize}\nThe complete grammar of operator s follows:\n\\begin{grammar}\n\t<operator> ::= $<\\coreoperator>$\n\t\\alt $<\\windowingoperator>$ | $<\\partitioningoperator>$ | $<\\winparoperator>$\n\\end{grammar}\nwhere \\amodifier{w}{} and \\amodifier{p}{} denote decomposition by windowing and partitioning, respectively.\n\nFor those operator s $op$ not supporting decomposition (cf.\\ Table~\\ref{tab:actors}), the following structural equivalence holds:\n$op\\equiv \\amodifier{w} op \\equiv \\amodifier{p} op \\equiv \\amodifier{w}\\amodifier{p} op$.\n\n\\subsubsection{Data-Parallel Operators}\nOperator s in the \\aname{map}\\xspace family are defined according to the following grammar:\n\\begin{grammar}\n\t<\\aname{map}\\xspace> ::= $\\aname{map}\\xspace\\; f$ | $\\aname{flatmap}\\; f$\n\\end{grammar}\nwhere $f$ is a user-defined function (i.e., the \\emph{kernel} function) from a host language.\\footnote{Note that we treat kernels as terminal symbols, thus we do not define the language in which kernel functions are written; we rather defer this aspect to a specific implementation of the model.}\nThe former produces exactly one output element from each input element (one-to-one user function), whereas the latter produces a (possibly empty) bounded sequence of output elements for each input element (one-to-many user function); the output collection is the merging of the output sequences.\n\n\nOperator s in the \\aname{combine}\\ family synthesize all the elements of an input collection into a single value, according to a user-defined kernel.\nThey are defined according to the following grammar:\n\\begin{grammar}\n\t<\\aname{combine}> ::= $\\aname{reduce}\\; \\oplus$\n\t| $\\aname{fold+reduce}\\; \\oplus_1\\; z\\; \\oplus_2$\n\\end{grammar}\nThe former corresponds to the classical reduction, whereas the latter is a two-phase aggregation that reduces partial accumulative states (i.e., partitioned folding with an explicit initial value).\nThe parameters of the \\aname{fold+reduce}\\ operator\\ specify the initial value of each partial accumulator ($z \\in S$, the initial value for the folding), how each input item affects the aggregative state ($\\oplus_1:S\\times T \\to S$, the folding function), and how aggregative states are combined into a final accumulator ($\\oplus_2:S\\times S \\to S$, the reduce function).\n\n\n\\subsubsection{Pairing}\nOperator{s} in the \\aname{b-\\map}\\ family are intended to be the binary counterparts of \\aname{map}\\xspace\\ operator 
s:\n\\begin{grammar}\n\t<\\aname{b-\\map}> ::= $\\zipp\\map\\; f$ | $\\joinp\\map\\; f$\n\t\\alt $\\zipp\\flatmap\\; f$ | $\\joinp\\flatmap\\; f$\n\\end{grammar}\nThe binary user function $f$ takes as input pairs of elements, one from each of the input collections.\nThe variants $\\aname{zip-}$ and $\\aname{join-}$ correspond to the following pairing policies, respectively:\n\\begin{itemize}\n\t\\item zipping of ordered collections produces the pairs of elements at the same position within the order of the respective collections\n\t\\item joining of bounded collections produces the Cartesian product of the input collections\n\\end{itemize}\n\nAnalogously, operator s in the \\aname{b-\\combine}\\ family are the binary counterparts of \\aname{combine}\\ operator s.\n\n\n\\subsubsection{Sources and Sinks}\nOperator s in the \\aname{emit}{} and \\aname{collect}{} families model data collection sources and sinks, respectively:\n\\begin{grammar}\n\t<\\aname{emit}> ::= $\\aname{from-file}\\;$file | $\\aname{from-socket}\\;$socket | $\\ldots$\n\t\n\t<\\aname{collect}> ::= $\\aname{to-file}\\;$file | $\\aname{to-socket}\\;$socket | $\\ldots$\n\\end{grammar}\n\n\n\n\\subsubsection{Windowing}\n\\label{ch:pm:windowing}\nWindowing is a well-known approach for overcoming the difficulties stemming from the unbounded nature of stream processing.\nThe basic idea is to process parts of some recent stream history upon the arrival of new stream items, rather than store and process the whole stream each time.\n\nA windowing operator{} takes an ordered collection, produces a collection (with the same structure type\\xspace as the input one) of windows (i.e., lists), and applies the subsequent operation to each window.\nWindowing operator s are defined according to the following grammar, where $\\omega$ is the windowing policy:\n\\begin{grammar}\n\t<$\\windowingoperator$> ::= $\\amodifier{w}<\\coreoperator>\\;\\omega$\n\\end{grammar}\n\nAmong the various definitions from the literature, for the sake of simplicity we only consider policies producing \\emph{sliding windows}, characterized by two parameters, namely, a window size $|W|$---specifying which elements fall into a window---and a sliding factor $\\delta$---specifying how the window slides over the stream items.\nBoth parameters can be expressed either in time units (i.e., time-based windowing) or in number of items (i.e., count-based windowing).\nIn this setting, a windowing policy $\\omega$ is a term $(|W|,\\delta,b)$ where $b$ is either {\\tt time} or {\\tt count}.\nA typical case is when $|W| = \\delta$, referred to as a \\emph{tumbling} policy.\n\nThe meaning of the supported windowing policies will be detailed in semantic terms (Section~\\ref{ch:pm:semantic collections}).\nAlthough the PiCo\\xspace syntax only supports a limited class of windowing policies, the semantics we provide is general enough to express other policies, such as session windows~\\cite{googlecloud:2015}.\n\nAs we will show in Section~\\ref{ch:pm:type system}, we rely on tumbling windowing to extend bounded operator s\\footnote{We say an operator\\ is \\emph{bounded} if it can only deal with bounded collections.} and have them deal with unbounded collections; for instance, \\aname{combine}{} operator s are bounded and 
require windowing to extend them to unbounded collections.\n\n\n\n\\subsubsection{Partitioning}\n\\label{ch:pm:partitioning}\nLogically, partitioning operator s take a collection, produce a set (one per group) of sub-collections (with the same type as the input one), and apply the subsequent operation to each sub-collection.\nPartitioning operator s are defined according to the following grammar, where $\\pi$ is a user-defined partitioning policy that maps each item to the respective sub-collection:\n\\begin{grammar}\n\t<$\\partitioningoperator$> ::= $\\amodifier{p}<\\coreoperator>\\; \\pi$\n\\end{grammar}\n\nOperator s in the \\aname{combine}, \\aname{b-\\map}{} and \\aname{b-\\combine}{} families support partitioning; for instance, a \\amodifier{p}\\aname{combine}\\ produces a \\text{bag}{} of values, each being the synthesis of one group. Also the natural join operator from relational algebra is a particular case of per-group joining.\n\nThe decomposition by both partitioning and windowing considers the former as the external decomposition, thus it logically produces a set (one per group) of collections of windows:\n\\begin{grammar}\n\t<$\\winparoperator$> ::= $\\windowing\\partitioning<\\coreoperator>\\;\\pi\\;\\omega$\n\\end{grammar}\n\n\\subsection{Running Example: The \\pname{word-count} Pipeline}\n\\begin{algorithm}\n\t\\caption{A \\pname{word-count} Pipeline}\n\t\\label{alg:wc model}\n\t\\begin{algorithmic}[0]\n\t\t\\State $f = \\lambda l.\\text{list-map}\\;(\\lambda w.\\pair w 1)\\;(\\text{split}\\; l)$\n\t\t\\State $\\aname{tokenize} = \\aname{flatmap}\\; f$\n\t\t\\newline\n\t\t\\State $\\oplus = \\lambda x y.\\pair {\\pi_1(x)} {\\pi_2(x) + \\pi_2(y)}$\n\t\t\\State $\\aname{keyed-sum} = \\amodifier{p}(\\aname{reduce}\\;\\oplus)\\;\\pi_1$\n\t\t\\newline\n\t\t\\State $\\aname{file-read} = \\aname{from-file}\\;$input-file\n\t\t\\State $\\aname{file-write} = \\aname{to-file}\\;$output-file\n\t\t\\newline\n\t\t\\State 
$\\pname{word-count} = \\pname{new}\\xspace\\;\\aname{tokenize} \\;\\vert\\; \\pname{new}\\xspace\\;\\aname{keyed-sum}$\n\t\t\\State $\\pname{file-word-count} = \\pname{new}\\xspace\\ \\aname{file-read} \\;\\vert\\; \\pname{word-count} \\;\\vert\\; \\pname{new}\\xspace\\;\\aname{file-write}$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nWe illustrate a simple \\pname{word-count} Pipeline{} in Algorithm~\\ref{alg:wc model}.\nWe assume a hypothetical PiCo\\xspace implementation where the host language provides some common functions over basic types---such as strings and lists---and a syntax for defining and naming functional transformations.\nIn this setting, the functions $f$ and $\\oplus$ in the example are user-defined kernels (i.e., functional transformations), and:\n\\begin{itemize}\n\t\\item split is a host function mapping a text line (i.e., a string) into the list of words occurring in the line\n\t\n\t\\item list-map is a classical host map over lists\n\t\\item $\\pi_1$ is the left-projection partitioning policy (cf.\\ example below, Section~\\ref{ch:pm:semantic collections}, Definition~\\ref{def:selection})\n\\end{itemize}\n\nThe operator s have the following meaning:\n\\begin{itemize}\n\t\\item \\aname{tokenize} is a \\aname{flatmap}\\ operator\\ that receives lines $l$ of text and produces, for each word $w$ in each line, a pair $\\pair w 1$;\n\t\\item \\aname{keyed-sum} is a \\amodifier{p}\\aname{reduce}\\ operator\\ that partitions the pairs based on $w$ (obtained with $\\pi_1$, i.e., group-by-word) and then sums up each group to $\\pair w {n_w}$, where $w$ occurs $n_w$ times in the input text;\n\t\\item \\aname{file-read} is an \\aname{emit}\\ operator\\ that reads from a text file and generates a list of lines;\n\t\n\t\\item 
\\aname{file-write} is a \\aname{collect}\\ operator\\ that writes a bag of \n\tpairs \n\t$\\pair w {n_w}$ to a text file.\n\\end{itemize}\n\n\n\n\n\n\\section{Type System}\n\\label{ch:pm:type system}\nLegal Pipeline{s} are defined according to typing rules, described below.\nWe denote the typing relation as $a:\\tau$, if and only if there exists a legal inference assigning type $\\tau$ to the term $a$.\n\n\\subsection{Collection Types}\n\\label{ch:pm:coll}\nWe mentioned earlier (Section~\\ref{ch:pm:design}) that collections are \\emph{implicit} entities that flow across Pipeline{s} through the DAG edges.\nA collection is either \\emph{bounded} or \\emph{unbounded}; moreover, it is also either \\emph{ordered} or \\emph{unordered}.\nA combination of the mentioned characteristics defines the \\emph{structure type\\xspace} of a collection.\nWe refer to each structure type\\xspace with a mnemonic name:\n\\begin{itemize}\n\t\\item a bounded, ordered collection is a \\emph{\\text{list}}\n\t\\item a bounded, unordered collection is a (bounded) \\emph{\\text{bag}}\n\t\\item an unbounded, ordered collection is a \\emph{\\text{stream}}\n\n\\end{itemize}\n\n\nA collection type is characterized by its structure type\\xspace and its \\emph{data type\\xspace}, namely the type of the collection elements.\nFormally, a collection type has form $\\ctype{T}{\\sigma}$ where \n$\\sigma\\in\\Sigma$ is the structure type\\xspace, $T$ is the data type\\xspace---and where $\\Sigma=\\{\\text{bag},\\text{list},\\text{stream}\\}$ is the set of all structure type\\xspace{s}.\nWe also partition $\\Sigma$ into $\\Sigma_b$ and $\\Sigma_u$, defined as the \nsets of bounded and unbounded structure type\\xspace{s}, respectively.\nMoreover, we define $\\Sigma_o$ as the set of ordered structure type\\xspace{s}, thus \n$\\Sigma_b \\cap \\Sigma_o = \\{\\text{list}\\}$ and $\\Sigma_u \\cap \\Sigma_o = \\{\\text{stream}\\}$.\nFinally, we allow the void type $\\emptyset$.\n\n\n\n\\subsection{Operator{} 
Types}\n\\begin{table}\n\t\\centering\n\t\\footnotesize\n\t\\begin{tabular}{lc}\n\t\t\\toprule\n\t\tOperator & Type \\\\\n\t\t\n\t\t\\midrule\n\t\t\\multicolumn{2}{c}{Unary}\\\\\n\t\t\\midrule\n\t\t$\\aname{map}\\xspace$ & $\\tpairio {\\ctype T \\sigma} {\\ctype U \\sigma}, \\forall\\sigma \n\t\t\\in \n\t\t\\Sigma$\\\\\n\t\t\n\t\t$\\aname{combine}$, $\\amodifier{p}\\aname{combine}$ & $\\tpairio {\\ctype T \\sigma} \n\t\t{\\ctype U \n\t\t\t\\sigma}, \\forall\\sigma \\in \\Sigma_b$\\\\\n\t\t\n\t\t$\\amodifier{w}\\aname{combine}$, $\\windowing\\partitioning\\aname{combine}$ & $\\tpairio \n\t\t{\\ctype \n\t\t\tT {\\sigma}} {\\ctype U {\\sigma}}, \\forall\\sigma \\in \\Sigma_o$\\\\\n\t\t\n\t\t$\\aname{emit}$ & $\\tpairio \\emptyset {\\ctype U \\sigma}$ \\\\\n\t\t$\\aname{collect}$ & $\\tpairio {\\ctype T \\sigma} \\emptyset$ \\\\\n\t\t\n\t\t\\midrule\n\t\t\\multicolumn{2}{c}{Binary}\\\\\n\t\t\\midrule\n\t\t$\\aname{b-\\map}$, $\\amodifier{p}\\aname{b-\\map}$ & $\\tpairio {\\ctype {T} {\\sigma} \\times \n\t\t\t\\ctype {T'} {\\sigma}} {\\ctype U {\\sigma}}, \n\t\t\\forall\\sigma\\in\\Sigma_b$ \\\\\n\t\t\n\t\t$\\amodifier{w}\\aname{b-\\map}$, $\\windowing\\partitioning\\aname{b-\\map}$ & $\\tpairio {\\ctype \n\t\t\t{T} \n\t\t\t{\\sigma} \\times \\ctype {T'} {\\sigma}} {\\ctype U \n\t\t\t{\\sigma}},\\forall\\sigma\\in\\Sigma_o$ \\\\\n\t\t\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption{Operator\\ types.\\label{tab:actor types}}\n\\end{table}\nOperator{} types are defined in terms of input\/output signatures.\nThe typing of operator s is reported in Table~\\ref{tab:actor types}.\nWe do not show the type inference rules since they are straightforward.\n\nFrom the type specification, we say each operator{} is characterized by its \ninput \nand output degrees (i.e., the cardinality of left and right-hand side of \nthe \n$\\to$ symbol, respectively). All operator s but \\aname{collect}\\ have output degree~1, \nwhile \\aname{collect}\\ has output degree 0. 
All binary operator s have input degree~2, \n\\aname{emit}\\ has input degree 0 and all the other operator s have input degree~1.\n\nAll operator s are polymorphic with respect to data type\\xspace{s}.\nMoreover, all operator s but \\aname{emit}\\ and \\aname{collect}\\ are polymorphic with \nrespect \nto structure type\\xspace{s}.\nConversely, each \\aname{emit}\\ and \\aname{collect}\\ operator\\ deals with one specific\nstructure type\\xspace.%\n\\footnote{For example, an emitter for a finite text file would\n\tgenerate a bounded collection of strings, whereas an emitter for stream of \n\ttweets\n\twould generate an unbounded collection of tweet objects.}\n\n\\begin{figure}\n\t\\centering\n\t$\\infer[\\amodifier{w}]\n\t{\\amodifier{w} op\\; \\omega:\\tpairio{\\ctype{T}{\\sigma'}}{\\ctype{U}{\\sigma'}}, \\sigma' \\in \\Sigma_o}\n\t{op:\\tpairio{\\ctype{T}{\\sigma}}{\\ctype{U}{\\sigma}}, \\sigma \\in \\Sigma_o}$\n\t\\caption{Unbounded extension provided by windowing\\label{fig:win-typing}}\n\\end{figure}\nAs we mentioned in Section~\\ref{ch:pm:coll}, a windowing operator\\ may behave as the unbounded extension of the respective bounded operator.\nThis is formalized by the inference rule $\\amodifier{w}$ that is reported in Figure~\\ref{fig:win-typing}: given an operator\\ $op$ dealing with ordered structure type\\xspace{s} (bounded or unbounded), its windowing counterpart $\\amodifier{w} op$ can operate on \\emph{any} ordered structure type\\xspace, including \\text{stream}.\nThe analogous principle underlies the inference rules for all the $\\amodifier{w}$ operator s.\n\n\n\\subsection{Pipeline\\ Types}\n\n\\begin{figure}\n\n\t\\begin{subfigure}[b]{\\textwidth}\n\t\t\\centering\n\t\t$\\infer[\\pname{new}\\xspace]{\\pname{new}\\xspace\\; op:\\tau}{op:\\tau}$\n\t\\end{subfigure}\n\t\n\n\t\\bigskip\n\t\\begin{subfigure}[b]{\\textwidth}\n\t\t\\centering\n\t\t$\\infer[\\pname{to}\\xspace]\n\t\t{\\pname{to}\\xspace\\; p\\; p_1\\; \\ldots\\; 
p_n:\\tpairio{\\ctypeopt{T}{\\sigma}} {\\ctype V \\sigma}}\n\t\t{p:\\tpairio{\\ctypeopt T \\sigma}{\\ctype U \\sigma}&\n\t\t\tp_i:\\tpairio {\\ctype {U} \\sigma} {(\\ctypeopt {V} \\sigma)_i}&\n\t\t\t\\exists i : (\\ctypeopt {V} \\sigma)_i = \\ctype {V} \\sigma\n\t\t}$\n\t\\end{subfigure}\n\t\n\t\\bigskip\n\t\\begin{subfigure}[b]{\\textwidth}\n\t\t\\centering\n\t\t$\\infer[\\pname{to}\\xspace_\\emptyset]\n\t\t{\\pname{to}\\xspace\\; p\\; p_1\\;\\ldots\\; p_n:\\tpairio{\\ctypeopt{T}{\\sigma}} \\emptyset}\n\t\t{p:\\tpairio{\\ctypeopt T \\sigma}{\\ctype U \\sigma}&\n\t\t\tp_i:\\tpairio {\\ctype {U} \\sigma} \\emptyset\n\t\t}$\n\t\\end{subfigure}\n\t\n\n\t\\bigskip\n\t\\begin{subfigure}[b]{\\textwidth}\n\t\t\\centering\n\t\t$\\infer[\\pname{pair}\\xspace]\n\t\t{\\pname{pair}\\xspace\\; p\\; p'\\; a:\\tpairio{\\ctypeopt{T}{\\sigma}} {\\ctypeopt V \\sigma}}\n\t\t{p:\\tpairio{\\ctypeopt T \\sigma} {\\ctype U \\sigma}&\n\t\t\tp':\\tpairio \\emptyset {\\ctype {U'} \\sigma}&\n\t\t\ta:\\tpairio {\\ctype {U} \\sigma \\times \\ctype {U'} \\sigma} {\\ctypeopt {V} \n\t\t\t\t\\sigma}\n\t\t}$\n\t\\end{subfigure}\n\t\n\t\\bigskip\n\t\\begin{subfigure}[b]{\\textwidth}\n\t\t\\centering\n\t\t$\\infer[\\pname{pair}\\xspace']\n\t\t{\\pname{pair}\\xspace\\; p\\; p'\\; a:\\tpairio{\\ctypeopt{T}{\\sigma}} {\\ctypeopt V \\sigma}}\n\t\t{p:\\tpairio \\emptyset {\\ctype U \\sigma}&\n\t\t\tp':\\tpairio{\\ctypeopt T \\sigma} {\\ctype {U'} \\sigma}&\n\t\t\ta:\\tpairio {\\ctype {U} \\sigma \\times \\ctype {U'} \\sigma} {\\ctypeopt {V} \n\t\t\t\t\\sigma}\n\t\t}$\n\t\\end{subfigure}\n\t\n\t\\bigskip\n\t\\begin{subfigure}[b]{\\textwidth}\n\t\t\\centering\n\t\t$\\infer[\\pname{merge}\\xspace]\n\t\t{\\pname{merge}\\xspace\\; p\\; p':\\tpairio{\\ctypeopt T \\sigma} {\\ctype U \\sigma}}\n\t\t{\n\t\t\t{p:\\tpairio{\\ctypeopt T \\sigma}{\\ctype U \\sigma}}&\n\t\t\tp':\\tpairio \\emptyset {\\ctype {U} \\sigma}\n\t\t}$\n\t\\end{subfigure}\n\t\n\t\\caption{Pipeline{} 
typing\\label{fig:pipe-typing}}\n\\end{figure}\n\nPipeline\\ types are defined according to the inference rules in \nFigure~\\ref{fig:pipe-typing}.\nFor simplicity, we use the meta-variable $\\ctypeopt{T}{\\sigma}$, which can be rewritten as either $\\ctype{T}{\\sigma}$ or~$\\emptyset$, to represent the optional collection type\\footnote{We remark the optional collection type is a mere syntactic rewriting, thus it does not represent any additional feature of the typing system.}.\nThe awkward rule $\\pname{to}\\xspace$ covers the case in which, in a \\pname{to}\\xspace\\ Pipeline, at least one destination Pipeline\\ $p_i$ has non-void output type $\\ctype V \\sigma$; in such case, all the destination Pipeline s with non-void output type must have the same output type $\\ctype V \\sigma$, which is also the output type of the resulting Pipeline.\n\n\nFinally, we define the notion of top-level Pipeline{s}, representing \nPipeline{s} that may be executed.\n\\begin{mydef}\n\t\\label{def:top-level}\n\tA \\emph{top-level} Pipeline\\ is a \\emph{non-empty} Pipeline\\ of type\n\t$\\tpairio \\emptyset \\emptyset$.\n\\end{mydef}\n\n\\subsection*{Running Example: Typing of \\pname{word-count}}\nWe present the types of the \\pname{word-count} components, defined in \nSection~\\ref{ch:pm:design}.\nWe omit full type derivations since they are straightforward applications \nof \nthe typing rules.\n\nThe operator s are all unary and have the following types:\n$$\n\\begin{array}{ll}\n\\aname{tokenize} &: \\tpairio\n{\\ctype {\\text{String}} \\sigma}\n{\\ctype {({\\text{String}} \\times {\\mathbb N})} \\sigma},\n\\forall \\sigma \\in \\Sigma\\\\\n\n\\aname{keyed-sum} &: \\tpairio\n{\\ctype {({\\text{String}}\\times{\\mathbb N})} \\sigma}\n{\\ctype {({\\text{String}}\\times{\\mathbb N})} \\sigma},\n\\forall \\sigma \\in \\Sigma\\\\\n\n\\aname{file-read} &: \\tpairio\n{\\ctype \\emptyset \\text{bag}}\n{\\ctype {\\text{String}} \\text{bag}}\\\\\n\n\\aname{file-write} &: 
\\tpairio\n{\\ctype {({\\text{String}} \\times {\\mathbb N})} \\text{bag}}\n{\\ctype \\emptyset \\text{bag}}\n\\end{array}\n$$\nPipeline s have the following types:\n$$\n\\begin{array}{ll}\n\\pname{word-count} &: \\tpairio\n{\\ctype {\\text{String}} \\sigma}\n{\\ctype {({\\text{String}} \\times {\\mathbb N})} \\sigma},\n\\forall \\sigma \\in \\Sigma\\\\\n{\\pname{file-word-count}} &: \\tpairio \\emptyset \\emptyset\n\\end{array}\n$$\n\nWe remark that \\pname{word-count} is polymorphic whereas \n\\pname{file-word-count} is a top-level Pipeline{}.\n\n\\section{Semantics}\n\\label{ch:pm:semantics}\n\nWe propose an interpretation of Pipeline s in terms of semantic Dataflow\\xspace graphs, as defined in~\\cite{16:bigdatasurvey:hlpp}.\nNamely, we propose the following mapping:\n\\begin{itemize}\n\t\\item Collections $\\Rightarrow$ Dataflow\\xspace tokens\n\t\\item Operator s $\\Rightarrow$ Dataflow\\xspace vertexes\n\t\\item Pipeline s $\\Rightarrow$ Dataflow\\xspace graphs\n\\end{itemize}\nNote that collections in semantic Dataflow\\xspace graphs are treated as a whole, thus they are mapped to single Dataflow\\xspace tokens that flow through the graph of transformations.\nIn this setting, semantic operator s (i.e., Dataflow\\xspace vertexes) map an input collection to the respective output collection upon a single firing.\n\n\n\n\\subsection{Semantic Collections}\n\\label{ch:pm:semantic collections}\nDataflow\\xspace tokens are data collections of $T$-typed elements, where $T$ is the \ndata type\\xspace \nof the collection.\nUnordered collections are semantically mapped to multi-sets, whereas \nordered collections are mapped to sequences.\n\nWe denote an unordered data collection of data type\\xspace $T$ with the following,\n``\\{\\;\\ldots\\;\\}'' being interpreted as a multi-set (i.e., unordered \ncollection \nwith possible multiple\noccurrences of elements): \n\\begin{equation}\nm = \\Set{m_0,m_1,\\ldots,m_{\\card{m}-1}}\n\\end{equation}\n\n\n\nA sequence (i.e., 
semantic ordered collection) associates a numeric \\emph{timestamp} to each item, representing its temporal coordinate, in time units, with respect to time zero.\nTherefore, we denote the generic item of a sequence having data type\\xspace $T$ as\n$\n(t_i, s_i)\n$\nwhere $i \\in {\\mathbb N}$ is the position of the item in the sequence, $t_i\\in{\\mathbb N}$ is the timestamp, and $s_i \\in T$ is the item value.\nWe denote an ordered data collection of data type\\xspace $T$ with the following, where $\\markeq{b}$ holds only for bounded sequences (i.e., lists):\n\\begin{equation}\n\\begin{array}{rl}\ns &= \\Seq{(t_0, s_0), (t_1, s_1), (t_2, s_2), \\ldots\\bullet t_i \\in {\\mathbb N}, s_i \\in T}\\\\\n&= \\Seq{(t_0,s_0)}+\\!+\\Seq{(t_1,s_1), (t_2,s_2), \\ldots}\\\\\n&= \\Cons{(t_0,s_0)}{[(t_1,s_1), (t_2,s_2), \\ldots]}\\\\\n&\\markeq{b} \\Seq{(t_0,s_0),(t_1,s_1),\\ldots,(t_{\\card{s}-1}, s_{\\card{s}-1})}\n\\end{array}\n\\end{equation}\nThe symbol $++$ represents the concatenation of sequence $\\Seq{(t_0,s_0)}$ (head sequence) with the sequence $\\Seq{(t_1,s_1), (t_2,s_2), \\ldots}$ (tail sequence).\nThe symbol $::$ represents the concatenation of element $(t_0,s_0)$ (head element) with the sequence $\\Seq{(t_1,s_1), (t_2,s_2), \\ldots}$ (tail sequence).\n\nWe define the notion of \\emph{time-ordered sequences}.\n\\begin{mydef}\n\t\\label{def:time-ordered}\n\tA sequence $s = \\Seq{(t_0, s_0), (t_1, s_1), (t_2, s_2), \\ldots}$ is\n\ttime-ordered when the following condition is satisfied for any $i,j\\in{\\mathbb N}$: $$i\\leq j \\Rightarrow t_i \\leq t_j$$\n\\end{mydef}\nWe denote as $\\ocol s$ any time-ordered permutation of $s$.\nThe ability to deal with non-time-ordered sequences, which is provided by PiCo\\xspace, is sometimes referred to as \\emph{out-of-order} data processing~\\cite{googlecloud:2015}.\n\nBefore proceeding to semantic operator s and Pipeline s, we define some preliminary notions about the effect of partitioning and windowing over semantic 
collections.\n\n\n\\subsubsection{Partitioned Collections}\nIn Section~\\ref{ch:pm:actors}, we introduced partitioning policies.\nIn semantic terms, a partitioning policy $\\pi$ defines how to group collection elements.\n\\begin{mydef}\n\t\\label{def:selection}\n\tGiven a multi-set $m$ of data type\\xspace $T$, a function $\\pi:T\\to K$ and a key \n\t$k \\in K$, we define the $k$-selection $\\select{\\pi}{k}(m)$ as follows:\n\t\\begin{equation}\n\t\\select{\\pi}{k}(m) = \\{ m_i \\bullet m_i \\in m \\wedge \\pi(m_i) = k \\}\n\t\\end{equation}\n\tSimilarly, the $k$-selection $\\select{\\pi}{k}(s)$ of a sequence $s$ is the sub-sequence of $s$ such that the following holds:\n\t\\begin{equation}\n\t\\forall(t_i,s_i)\\in s, (t_i, s_i)\\in\\select{\\pi}{k}(s) \\iff \\pi(s_i) = k\n\t\\end{equation}\n\\end{mydef}\n\nWe define the partitioned collection as the set of all groups generated according to a partitioning policy.\n\\begin{mydef}\n\t\\label{def:partitioned collection}\n\tGiven a collection $c$ and a partitioning policy $\\pi$, the partitioned collection $c$ according to $\\pi$, denoted $c^{(\\pi)}$, is defined as follows:\n\t\\begin{equation}\n\tc^{(\\pi)} = \\Set{\\select{\\pi}{k}(c) \\bullet {k \\in K}\\wedge \\card{\\select{\\pi}{k}(c)} > 0}\n\t\\end{equation}\n\\end{mydef}\n\nWe remark that partitioning has no effect with respect to time-ordering.\n\n\\noindent{\\bf Example:} The group-by-key decomposition, with $\\pi_1$\nbeing the left projection,\\footnote{$\\pi_1 (x, y) = x$} uses a special\ncase of selection where:\n\\begin{itemize}\n\t\\item the collection has data type\\xspace $K \\times V$\n\t\\item $\\pi = \\pi_1$\n\\end{itemize}\n\n\n\\subsubsection{Windowed Collections}\nBefore proceeding further, we provide the preliminary notion of \\emph{sequence splitting}.\nA splitting function $f$ defines how to split a sequence into two possibly overlapping sub-sequences, namely the \\emph{head} and the \\emph{tail}.\n\\begin{mydef}\n\t\\label{def:collection 
windowing}\n\tGiven a sequence $s$ and a splitting function $f$, the splitting of $s$ according to $f$ is: \n\t\\begin{equation}\n\tf(s) = \\left(h(s), t(s)\\right)\n\t\\end{equation}\n\twhere $h(s)$ is a bounded prefix of $s$, $t(s)$ is a proper suffix of $s$, and there is a prefix $p$ of $h(s)$ and a suffix $u$ of $t(s)$ such that $s = p+\\!+ u$.\n\\end{mydef}\n\nIn Section~\\ref{ch:pm:windowing}, we introduced windowing policies.\nIn semantic terms, a windowing policy $\\omega$ identifies a splitting function $f^{(\\omega)}$.\nConsidering a split sequence $f_\\omega(s)$, the head $h_\\omega(s)$ represents the elements falling into the window, whereas the tail $t_\\omega(s)$ represents the remainder of the sequence.\n\nWe define the windowed sequence as the result of repeated applications of windowing with time-reordering of the heads.\n\\begin{mydef}\n\t\\label{def:windowed collection}\n\tGiven a sequence $s$ and a windowing policy $w$, the windowed view of $s$ according to $w$ is:\n\t\\begin{equation}\n\ts^{(\\omega)} = \\Seq{\\ocol{s_0}, \\ocol{s_1}, \\ldots, \\ocol{s_i}, \\ldots}\n\t\\end{equation}\n\twhere $s_i = h_\\omega(\\underbrace{t_\\omega(t_\\omega(\\ldots t_\\omega}_i(s)\\ldots)))$\n\\end{mydef}\n\n\\noindent{\\bf Example:} The count-based policy $\\omega = (5,2,\\texttt{count})$ extracts the first 5~items from the sequence at hand and discards the first 2 items of the sequence upon sliding, whereas the tumbling policy $\\omega = (5,5,\\texttt{count})$ yields non-overlapping contiguous windows spanning 5 items.\n\n\n\n\n\\subsection{Semantic Operator s}\n\\label{ch:pm:semantic actors}\nWe define the semantics of each operator{} in terms of its behavior with respect to token processing by following the structure of Table~\\ref{tab:actor types}.\nWe start from bounded operator s and then we show how they can be extended to their unbounded counterparts by considering windowed streams.\n\nDataflow\\xspace vertexes with one input edge and one output edge 
(i.e., unary operator{}s \nwith both input and output degrees equal to 1) take as input a token (i.e., \na \ndata collection), apply a transformation, and emit the resulting \ntransformed \ntoken.\nVertexes with no input edges (i.e., \\aname{emit})\/no output edges (i.e., \n\\aname{collect}) execute a routine to produce\/consume an output\/input token, \nrespectively.\n\n\n\\subsubsection{Semantic Core Operator s}\nThe bounded \\aname{map}\\xspace\\ operator\\ has the following semantics:\n\\begin{equation}\n\\label{eq:strict semantic map}\n\\begin{array}{ll}\n\\aname{map}\\xspace\\; f\\; m &= \\Set{f(m_i)\\bullet m_i\\in m}\\\\\n\\aname{map}\\xspace\\; f\\; s &= \\Seq{(t_0, f(s_0)), \\ldots, (t_{\\card{s}-1}, f(s_{\\card{s}-1}))}\n\\end{array}\n\\end{equation}\nwhere $m$ and $s$ are input tokens (multi-set and list, \nrespectively) whereas right-hand side terms are output tokens.\nIn the ordered case, we refer to the above definition as \\emph{strict} semantic \\aname{map}\\xspace, since it respects the global time-ordering of the input collection.\n\nThe bounded \\aname{flatmap}\\ operator\\ has the following semantics:\n\\begin{equation}\n\\begin{array}{lcl}\n\\aname{flatmap}\\; f\\; m &= &{\\displaystyle\\bigcup\\Set{f(m_i)\\bullet m_i\\in m}}\\\\\n\\aname{flatmap}\\; f\\; s &= &\\Seq{(t_0, f(s_0)_0),(t_0, f(s_0)_1),\\ldots,(t_0, f(s_0)_{n_0})}+\\!+\\\\\n&&\\Seq{(t_1, f(s_1)_0),\\ldots,(t_1, f(s_1)_{n_1})}+\\!+ \\ldots+\\!+ \\\\\n&&\\Seq{(t_{\\card{s}-1}, f(s_{\\card{s}-1})_0)\\ldots, (t_{\\card{s}-1}, f(s_{\\card{s}-1})_{n_{\\card{s}-1}})}\n\\end{array}\n\\end{equation}\nwhere $f(s_i)_j$ is the $j$-th item of the list $f(s_i)$, that is, the output of the kernel function $f$ over the input $s_i$.\nNotice that the timestamp of each output item is the same as the respective input item.\n\nThe bounded \\aname{reduce}\\ operator has the following semantics, where $\\oplus$ is both associative and commutative and, in the ordered variant, $t' = 
{\\displaystyle\\max_{(t_i,s_i) \\in s}t_i}$:\n\\begin{equation}\n\\label{eq:semantic reduce}\n\\begin{array}{ll}\n\\aname{reduce} \\; \\oplus \\; m &= \\Set{\\bigoplus\\Set{m_i \\in m}}\\\\\n\\aname{reduce} \\; \\oplus \\; s &=\\Seq{(t',(\\ldots(s_0\\oplus s_1)\\oplus\\ldots)\\oplus s_{\\card s - 1})}\\\\\n&\\markeq{a}\\Seq{(t', s_0\\oplus s_1 \\oplus \\ldots \\oplus s_{\\card s - 1})}\\\\\n&\\markeq{c}\\Seq{(t', \\bigoplus\\Pi_2(s))}\n\\end{array}\n\\end{equation}\nmeaning that, in the ordered variant, the timestamp of the resulting value is the same as the input item having the maximum timestamp.\nEquation~$\\markeq{a}$ holds since $\\oplus$ is associative and equation $\\markeq{c}$ holds since it is commutative.\n\nThe \\aname{fold+reduce}\\ operator has a more complex semantics, defined with respect to an \\emph{arbitrary} partitioning of the input data. 
Informally, given a partition $P$ of the input collection, each subset $P_i\\in P$ is mapped to a local accumulator $a_i$, initialized with value $z$;\nthen:\n\\begin{enumerate}\n\t\\item Each subset $P_i$ is folded into its local accumulator $a_i$, using $\\oplus_1$;\n\t\\item The local accumulators $a_i$ are combined using $\\oplus_2$, producing a reduced value $r$;\n\\end{enumerate}\nThe formal definition---that we omit for the sake of simplicity---is similar to the semantics of \\aname{reduce}, with the same distinction between ordered and unordered processing and similar considerations about associativity and commutativity of user functions.\nWe assume, without loss of generality, that the user parameters $z$ and $\\oplus_1$ are always defined such that the resulting \\aname{fold+reduce}{} operator{} is partition-independent, meaning that the result is independent of the choice of the partition $P$.\n\n\\subsubsection{Semantic Decomposition}\n\nGiven a bounded \\aname{combine}{} operator{} $op$ and a selection function $\\pi:T\\to K$, the partitioning operator{} $\\amodifier{p} op$ has the following semantics over a generic collection $c$:\n$$\n\\amodifier{p} op \\; \\pi \\; c = \\Set{op \\; c' \\bullet c' \\in c^{(\\pi)}}\n$$\nFor instance, the group-by-key processing is obtained by using the by-key \npartitioning policy (cf.\\ example below Definition~\\ref{def:selection}).\n\nSimilarly, given a bounded \\aname{combine}{} operator{} $op$ and a windowing policy $\\omega$, the windowing operator{} $\\amodifier{w} op$ has the following semantics:\n\\begin{equation}\n\\label{eq:windowing operator}\n\\amodifier{w} op \\; \\omega \\; s = op\\; s^{(\\omega)}_0 +\\!+ \\ldots +\\!+\\ op\\; s^{(\\omega)}_{\\card{s^{(\\omega)}}-1}\n\\end{equation}\nwhere $s^{(\\omega)}_i$ is the $i$-th list in $s^{(\\omega)}$ (cf. 
Definition~\\ref{def:windowed collection}).\n\nAs for the combination of the two partitioning mechanisms, \n\\amodifier{w}\\amodifier{p}$op$, it has the following semantics:\n$$\n\\windowing\\partitioning op \\; \\pi\\; \\omega \\; s =\n\\Set{\\amodifier{w} op \\; \\omega \\; s' \\bullet s' \\in s^{(\\pi)}}\n$$\nThus, as mentioned in Section~\\ref{ch:pm:actors}, partitioning first\nperforms the decomposition, and then processes each group on a\nper-window basis.\n\n\\subsubsection{Unbounded Operator s}\n\\label{ch:pm:ubops}\nWe remark that none of the semantic operator s defined so far can deal with unbounded collections.\nAs mentioned in Section~\\ref{ch:pm:actors}, we rely on windowing for extending them to the unbounded case.\n\nGiven a (bounded) windowing \\aname{combine}{} operator{} $op$, the semantics of its unbounded variant is a trivial extension of the bounded case:\n\\begin{equation}\n\\label{eq:unbounded reduce}\n\\amodifier{w} op \\; \\omega \\; s = op\\; s^{(\\omega)}_0 +\\!+ \\ldots +\\!+\\ op\\; s^{(\\omega)}_i +\\!+ \\ldots\n\\end{equation}\nThe above incidentally also defines the semantics of unbounded windowing and partitioning \\aname{combine}{} operator s.\n\nWe rely on an analogous approach to define the semantics of unbounded operator s in the \\aname{map}\\xspace{} family, but in this case the windowing policy is introduced at the semantic rather than syntactic level, since \\aname{map}\\xspace{} operator s do not support decomposition.\nMoreover, the windowing policy is forced to be batching (cf.\\ Example below Definition~\\ref{def:collection windowing}).\nWe illustrate this concept on \\aname{map}\\xspace{} operator s, but the same holds for \\aname{flatmap}{} ones. 
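Purely as an illustration (not part of the PiCo formalization), the windowed view of a sequence and the per-window application of a bounded combine operator under a count-based policy can be sketched as follows; the function names and the encoding of the policy as a (size, slide) pair are our own.

```python
# Illustrative sketch only: the windowed view of a bounded sequence of
# (timestamp, value) pairs under a count-based policy, and the windowing
# operator that applies a bounded combine to each window and concatenates
# the per-window results.

def windowed_view(s, size, slide):
    """Windows of `s`, each head time-reordered, under (size, slide, count)."""
    windows = []
    i = 0
    while i < len(s):
        windows.append(sorted(s[i:i + size]))  # time-reorder the window head
        if i + size >= len(s):                 # remainder consumed: stop
            break
        i += slide
    return windows

def windowed_op(op, omega, s):
    """Apply the combine `op` to each window and concatenate the results."""
    size, slide = omega
    out = []
    for w in windowed_view(s, size, slide):
        out.extend(op(w))
    return out

# A reduce-like combine: sum the values, keep the maximum timestamp.
combine = lambda w: [(max(t for t, _ in w), sum(v for _, v in w))]

# Tumbling policy omega = (2, 2, count) over a 4-item sequence.
s = [(0, 1), (1, 2), (2, 3), (3, 4)]
result = windowed_op(combine, (2, 2), s)  # -> [(1, 3), (3, 7)]
```

A sliding policy such as (5, 2) would instead produce overlapping windows, matching the count-based example given earlier.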
\nGiven a bounded \\aname{map}\\xspace{} operator, the semantics of its unbounded extension is as follows, where $\\omega$ is a tumbling windowing policy:\n\\begin{equation}\n\\label{eq:weak semantic map}\n\\llbracket \\aname{map}\\xspace\\; f\\; s \\rrbracket_\\omega =\n\\aname{map}\\xspace\\; f\\; s^{(\\omega)}_0 +\\!+ \\ldots +\\!+\\ \n\\aname{map}\\xspace\\; f\\; s^{(\\omega)}_i +\\!+ \\ldots \n\\end{equation}\nWe refer to the above definition as \\emph{weak} semantic \\aname{map}\\xspace (cf.\\ strict semantic \\aname{map}\\xspace{} in Equation~\\ref{eq:strict semantic map}), since the time-ordering of the input collection is partially dropped.\nIn the following chapters, we provide a PiCo\\xspace implementation based on weak semantic operators for both bounded and unbounded processing.\n\n\n\\subsubsection{Semantic Sources and Sinks}\nFinally, \\aname{emit}\/\\aname{collect}\\ operator s do not have a functional semantics, \nsince they produce\/consume collections by interacting with the system state \n(e.g., read\/write from\/to a text file, read\/write from\/to a network socket).\nFrom the semantic perspective, we consider each \\aname{emit}\/\\aname{collect}\\\noperator\\ as a Dataflow\\xspace node able to produce\/consume as output\/input a collection of a given type, as shown in Table~\\ref{tab:actor types}.\nMoreover, \\aname{emit}{} operator s of ordered type have the responsibility of tagging each emitted item with a timestamp.\n\n\\subsection{Semantic Pipeline s}\n\\label{ch:pm:semantic pipes}\nThe semantics of a Pipeline{} maps it to a semantic Dataflow\\xspace graph.\nWe define such mapping by induction on the Pipeline{} grammar defined in Section~\\ref{ch:pm:design}.\nThe following definitions are basically a formalization of the pictorial representation in Figure~\\ref{fig:pipes}.\n\nWe also define the notion of \\emph{input}, resp. 
\\emph{output}, vertex of \na Dataflow\\xspace graph~$G$, denoted as $\\inode G$ and $\\onode G$, respectively.\nConceptually, an input node represents a Pipeline{} source, whereas an output \nnode represents a Pipeline{} sink.\n\nThe following formalization provides the semantics of any PiCo\\xspace program.\n\\begin{itemize}\n\t\\item ($\\pname{new}\\xspace\\; op$) is mapped to the graph $G=\\pair {\\Set{ op}} \\emptyset$;\n\tmoreover, one of the following three cases holds:\n\t\\begin{itemize}\n\t\t\\item $op$ is an \\aname{emit}\\ operator, then $\\onode G = op$, while $\\inode \n\t\tG$ \n\t\tis undefined\n\t\t\\item $op$ is a \\aname{collect}\\ operator, then $\\inode G = op$, while \n\t\t$\\onode G$ \n\t\tis undefined \n\t\t\\item $op$ is a unary operator\\ with both input and output degree \n\t\tequal \n\t\tto~1, then $\\inode G = \\onode G = op$\n\t\\end{itemize}\n\t\n\t\n\t\n\t\\item ($\\pname{to}\\xspace\\; p\\; p_1\\; \\ldots\\; p_n$) is mapped to the graph $G = \\pair \n\tV \n\tE$ with:\n\t$$\n\t\\begin{array}{ll}\n\tV =&{V(G_p)\\cup V(G_{p_1}) \\cup \\ldots \\cup V(G_{p_n}) \\cup \\Set \n\t\t\\mu}\\\\\n\tE =&E(G_p)\\cup \\bigcup_{i=1}^n E(G_{p_i}) \\cup \n\t\\bigcup_{i=1}^n\\Set{\\pair {\\onode{G_{p}}} {\\inode{G_{p_i}}}} \\cup\\\\\n\t&\\bigcup_{i=1}^{\\card {G'}}\\Set{\\pair {\\onode{G'_{i}}} {\\mu}}\n\t\\end{array}\n\t$$\n\twhere $\\mu$ is a non-determinate merging node as defined in~\\cite{Lee:IEEE:P95} and\n\t$G'=\\Set{G_{p_i} \\bullet d_O(G_{p_i}) = 1}$;\n\tmoreover,\n\t$\\inode G = \\inode{G_p}$ if $d_I(G_p)=1$ and undefined otherwise, while \n\t$\\onode{G} = \\mu$ if $\\card{G'} > 0$ and undefined otherwise.\n\t\n\t\\item ($\\pname{pair}\\xspace\\; p\\; p'\\; op$) is mapped to the graph $G = \\pair V E$ with:\n\t$$\n\t\\begin{array}{l}\n\tV = {V(G_p)\\cup V(G_{p'})\\cup \\Set op}\\\\\n\tE = {E(G_p)\\cup E(G_{p'})\\cup \n\t\t\\Set{\\pair{\\onode{G_{p}}}{op},\\pair{\\onode{G_{p'}}}{op}}}\n\t\\end{array}\n\t$$\n\tmoreover, $\\onode{G} = op$, while 
one of the following cases holds:\n\t\\begin{itemize}\n\t\t\\item $\\inode G = \\inode{G_{p}}$ if the input degree of $p$ is 1\n\t\t\\item $\\inode G = \\inode{G_{p'}}$ if the input degree of $p'$ is 1\n\t\t\\item $\\inode G$ is undefined if both $p$ and $p'$ have input \n\t\tdegree \n\t\tequal to 0\n\t\\end{itemize}\n\t\n\t\\item ($\\pname{merge}\\xspace\\; p\\; p'$) is mapped to the graph $G = \\pair V E$ with:\n\t$$\n\t\\begin{array}{l}\n\tV = {V(G_p)\\cup V(G_{p'})\\cup \\Set \\mu}\\\\\n\tE = {E(G_p)\\cup E(G_{p'})\\cup \n\t\t\\Set{\\pair{\\onode{G_{p}}}{\\mu},\\pair{\\onode{G_{p'}}}{\\mu}}}\n\t\\end{array}\n\t$$\n\twhere $\\mu$ is a non-determinate merging node; moreover, $\\onode{G} = \\mu$, while one of \n\tthe \n\tfollowing cases holds:\n\t\\begin{itemize}\n\t\t\\item $\\inode{G} = \\inode{G_{p}}$ if the input degree of $p$ is 1\n\t\t\\item $\\inode{G} = \\inode{G_{p'}}$ if the input degree of $p'$ is 1\n\t\t\\item $\\inode{G}$ is undefined if both $p$ and $p'$ have input \n\t\tdegree \n\t\tequal to 0\n\t\\end{itemize}\n\t\n\\end{itemize}\n\n\\subsection*{Running Example: Semantics of \\pname{word-count}}\nThe tokens (i.e., data collections) flowing through the semantic Dataflow\\xspace graph \nresulting from the \\pname{word-count} Pipeline\\ are bags of strings \n(e.g., \nlines produced by \\aname{file-read} and consumed by \\aname{tokenize}) or \nbags \nof \\mbox{string-${\\mathbb N}$} pairs (e.g., counts produced by \\aname{tokenize} and \nconsumed by \\aname{keyed-sum}).\nIn this example, as usual, string-${\\mathbb N}$ pairs are treated as key-value \npairs, where keys are strings (i.e., words) and values are numbers (i.e., \ncounts).\n\nBy applying the semantics of \\aname{flatmap}, \\aname{reduce}\\ and \n$\\amodifier{p}(\\aname{reduce}\\;\\oplus)$ to Algorithm~\\ref{alg:wc model}, the result obtained is that the token being emitted by the \\aname{combine}\\ operator\\ is a bag of pairs $\\pair w {n_w}$ for each word $w$ in the input token of the 
\\aname{flatmap}\\ operator.\n\nThe Dataflow\\xspace graph resulting from the semantic interpretation of the \n\\pname{word-count} Pipeline{} defined in Section~\\ref{ch:pm:design} is $G = \n\\pair \nV E$, where:\n$$\n\\begin{array}{lll}\nV&=&\\Set{\\text{\\aname{tokenize}},\\text{\\aname{keyed-sum}}}\\\\\nE&=&\\Set{\\pair{\\text{\\aname{tokenize}}}{\\text{\\aname{keyed-sum}}}}\n\\end{array}\n$$\nFinally, the \\pname{file-word-count} Pipeline{} results in the graph $G = \n\\pair V \nE$ where:\n$$\n\\begin{array}{lll}\nV&=&\\Set{\\aname{file-read}, \\aname{tokenize}, \\aname{keyed-sum}, \\aname{file-write}}\\\\\nE&=&\\{\\pair{\\aname{file-read}}{\\aname{tokenize}},\\\\\n&&\\pair{\\aname{tokenize}}{\\aname{keyed-sum}},\\\\\n&&\\ \\pair{\\aname{keyed-sum}}{\\aname{file-write}}\\}\n\\end{array}\n$$\n\n\n\\section{Programming Model Expressiveness}\n\\label{ch:pmexpr}\n\nIn this section, we provide a set of use cases adapted from examples in Flink's user guide~\\cite{online:flink-examples}.\nAlthough they are very simple examples, they exploit grouping, partitioning, windowing, and the merging of Pipeline s.\nWe aim to show the expressiveness of our model without using any concrete API, to demonstrate that the model is independent of its implementation.\n\n\n\\subsection{Use Cases: Stock Market}\nThe first use case is about analyzing stock market data streams.\nIn this use case, we:\n\\begin{enumerate}\n\t\\item read and merge two stock market data streams from two sockets (Algorithm~\\ref{alg:stock-read})\n\t\\item compute statistics on this market data stream, like rolling aggregations per stock (Algorithm~\\ref{alg:stock-stats})\n\t\\item emit price warning alerts when the prices change (Algorithm~\\ref{alg:price-warnings})\n\t\\item compute correlations between the market data streams and a Twitter stream with stock mentions (Algorithm~\\ref{alg:correlate-stocks-tweets})\n\\end{enumerate}\n\n\n\\paragraph{Read from multiple 
sources}\n\\newcommand{\\text{StockName}}{\\text{StockName}}\n\\newcommand{\\text{Price}}{\\text{Price}}\n\\begin{algorithm}[H]\n\t\\caption{The \\pname{read-prices} Pipeline}\n\t\\label{alg:stock-read}\n\t\\begin{algorithmic}[0]\n\t\t\\State $\\pname{read-prices} = \\pname{new}\\xspace\\;\\aname{from-socket}\\; s_1 + \\pname{new}\\xspace\\;\\aname{from-socket}\\; s_2$\n\t\\end{algorithmic}\n\\end{algorithm}\nAlgorithm~\\ref{alg:stock-read} shows the \\pname{read-prices} Pipeline, which reads and merges two stock market data streams from sockets $s_1$ and $s_2$.\nAssuming $\\text{StockName}$ and $\\text{Price}$ are types representing stock names and prices, respectively, the type of each \\aname{emit}\\ operator\\ is the following (since \\aname{emit}\\ operator s are polymorphic with respect to data type\\xspace):\n$$\n\\tpairio \\emptyset {\\ctype{(\\text{StockName} \\times \\text{Price})}{\\{\\text{stream}\\}}}\n$$\n\nTherefore, it is also the type of \\pname{read-prices}, since it is a \\pname{merge}\\xspace\\ of two \\aname{emit}\\ operator s of such type.\n\n\n\\paragraph{Statistics on market data stream}\n\\begin{algorithm}[H]\n\t\\caption{The \\pname{stock-stats} Pipeline}\n\t\\label{alg:stock-stats}\n\t\\begin{algorithmic}[0]\n\t\t\\State $\\aname{min} = \\aname{reduce}\\;(\\lambda x y.\\text{min}(x,y))$\n\t\t\\State $\\aname{max} = \\aname{reduce}\\;(\\lambda x y.\\text{max}(x,y))$\n\t\t\n\t\t\\State $\\aname{sum-count} = \\aname{fold+reduce}\\;$\n\t\t\\parbox[t]{.6\\linewidth}{%\n\t\t\t$(\\lambda a x.((\\pi_1(a)) + 1, (\\pi_2(a)) + x)) \\; (0,0)$ \\\\\n\t\t\t$(\\lambda a_1 a_2.(\\pi_1(a_1) + \\pi_1(a_2), \\pi_2(a_1) + \\pi_2(a_2)))$}\n\t\t\n\t\t\\State $\\aname{normalize} = \\aname{map}\\xspace\\;(\\lambda x.\\pi_2(x) \/ \\pi_1(x))$\n\t\t\\State $\\omega = (10,5,\\texttt{count})$\n\t\t\\newline\n\t\t\n\t\t\\State $\\pname{stock-stats} = 
\\pname{to}\\xspace\\;$\n\t\t\\parbox[t]{.6\\linewidth}{%\n\t\t\t$\\pname{read-prices}$\\\\\n\t\t\t$\\pname{new}\\xspace\\;\\windowing\\partitioning(\\aname{min})\\;\\pi_1\\;\\omega$\\\\\n\t\t\t$\\pname{new}\\xspace\\;\\windowing\\partitioning(\\aname{max})\\;\\pi_1\\;\\omega$\\\\\n\t\t\t$(\\pname{new}\\xspace\\;\\windowing\\partitioning(\\aname{sum-count})\\;\\pi_1\\;\\omega \\;\\vert\\; \\pname{new}\\xspace\\;\\aname{normalize})$}\n\t\\end{algorithmic}\n\\end{algorithm}\nAlgorithm~\\ref{alg:stock-stats} shows the \\pname{stock-stats} Pipeline, which computes three different statistics---minimum, maximum and mean---for each stock name, over the prices coming from the \\pname{read-prices} Pipeline.\nThese statistics are windowing-based, since the processed data belong to a possibly unbounded stream.\nThe specified windowing policy $\\omega = (10,5,\\texttt{count})$ creates windows of 10 elements with sliding factor 5.\n\nThe type of \\pname{stock-stats} is\n$\n\\tpairio \\emptyset {\\ctype{(\\text{StockName} \\times \\text{Price})}{\\{\\text{stream}\\}}}\n$,\nthe same as \\pname{read-prices}.\n\n\n\\paragraph{Generate price fluctuation warnings}\n\\begin{algorithm}[H]\n\t\\caption{The \\pname{price-warnings} Pipeline}\n\t\\label{alg:price-warnings}\n\t\\begin{algorithmic}[0]\n\t\t\\State $\\aname{collect} = \\aname{fold+reduce}\\;$\n\t\t\\parbox[t]{.6\\linewidth}{%\n\t\t\t$(\\lambda s x.s \\cup \\{x\\}) \\; \\emptyset$\\\\\n\t\t\t$(\\lambda s_1 s_2.s_1 \\cup s_2)\\;$}\n\t\t\n\t\t\\State $\\aname{fluctuation} = \\aname{map}\\xspace\\;(\\lambda s.\\text{set-fluctuation}(s)) $\n\t\t\n\t\t\\State $\\aname{high-pass} = \\aname{flatmap}\\;\n\t\t(\\lambda \\delta.\\text{if }\\delta \\geq 0.05 \\text{ then yield }\\delta)$\n\t\t\\State $\\omega = (10,5,\\texttt{count})$\n\t\t\\newline\n\t\t\n\t\t\\State $\\pname{price-warnings} =$\n\t\t\\parbox[t]{.6\\linewidth}{%\n\t\t\t$\\pname{read-prices} 
\\;\\vert\\;$\\\\\n\t\t\t$\\pname{new}\\xspace\\;\\windowing\\partitioning(\\aname{collect})\\;\\pi_1\\;\\omega \\;\\vert\\;\n\t\t\t\\pname{new}\\xspace\\;\\aname{fluctuation}$\\\\\n\t\t\t$\\pname{new}\\xspace\\;\\aname{high-pass}$\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\nAlgorithm~\\ref{alg:price-warnings} shows the Pipeline{} \\pname{price-warnings}, which generates a warning each time the stock market data within a window exhibits high price fluctuation for a certain stock name---\\texttt{yield} is a host-language method that produces an element.\n\nIn the example, the \\aname{fold+reduce}{} operator{} \\aname{collect} just builds the sets, one per window, of all items falling within the window, whereas the downstream \\aname{map}\\xspace{} operator{} \\aname{fluctuation} computes the fluctuation over each set.\nThis is a generic pattern that allows combining collection items by re-using available user functions defined over collective data structures.\n\nThe type of \\pname{price-warnings} is again\n$\n\\tpairio \\emptyset {\\ctype{(\\text{StockName} \\times \\text{Price})}{\\{\\text{stream}\\}}}\n$.\n\n\\paragraph{Correlate warnings with tweets}\n\\begin{algorithm}\n\t\\caption{The \\pname{correlate-stocks-tweets} Pipeline}\n\t\\label{alg:correlate-stocks-tweets}\n\t\n\t\\begin{algorithmic}[0]\n\t\t\\State $\\pname{read-tweets} = \\pname{new}\\xspace\\;\\aname{from-twitter} \\;\\vert\\; \\pname{new}\\xspace\\;\\aname{tokenize-tweets}$\n\t\t\\State $\\omega = (10,10,\\texttt{count})$\n\t\t\\newline\n\t\t\n\t\t\\State $\\pname{correlate-stocks-tweets} = \\pname{pair}\\xspace$\n\t\t\\parbox[t]{0.45\\linewidth}{%\n\t\t\t$\\pname{price-warnings} \\; \\pname{read-tweets}$\\\\\n\t\t\t$\\windowing\\partitioning(\\aname{correlate})\\;\\pi_1\\;\\omega$\\\\\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\nAlgorithm~\\ref{alg:correlate-stocks-tweets} shows \\pname{correlate-stocks-tweets}, a Pipeline{} that generates a correlation between warnings generated by \\pname{price-warnings} and tweets 
coming from a Twitter feed.\nThe \\pname{read-tweets} Pipeline\\ generates a stream of $(\\text{StockName} \\times \\text{String})$ items, representing tweets each mentioning a stock name.\nStocks and tweets are paired according to a join-by-key policy (cf.\\ Definition~\\ref{def:selection}), where the key is the stock name.\n\nIn the example, \\aname{correlate} is a \\joinp\\aggregate\\ operator\\ that computes the correlation between two joined collections.\nAs we mentioned in Section~\\ref{ch:pm:actors}, we rely on windowing to apply the (bounded) \\joinp\\aggregate{} operator\\ to unbounded streams.\nIn the example, we use a simple tumbling policy $\\omega = (10,10,\\texttt{count})$ in order to correlate items from the two collections in a 10-by-10 fashion.\n\n\n\\section{Conclusion}\nWe proposed a new programming model based on Pipeline s and operator s, which are the building blocks of PiCo\\xspace programs, first defining the syntax of programs, then providing a formalization of the type system and semantics.\n\nThe contribution of PiCo\\xspace with respect to the state-of-the-art in tools for Big Data Analytics is also in the definition and formalization of a programming model that is independent of the actual API and runtime implementation.\nIn state-of-the-art tools for Analytics, this aspect is typically not considered, and users are in some cases left to their own interpretation of the documentation. This happens particularly when the implementation of operators in state-of-the-art tools is partly or entirely conditioned by the runtime implementation itself.\n\n\n\\section*{Acknowledgements}\nThis work was partly supported by the EU-funded project TOREADOR\n(contract no.\\ H2020-688797), the EU-funded project Rephrase (contract no.\\ H2020-644235),\nand the 2015--2016 IBM Ph.D.\\ Scholarship program. We gratefully\nacknowledge Prof. 
Luca Padovani for his comments on an early version\nof the manuscript.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nSpeech recognition using non-invasive neural electroencephalography (EEG) signals is an emerging area of research. In recent years, works from different research groups have demonstrated the feasibility of using EEG features for performing isolated, classification-based speech recognition with high accuracy. Some of these works are described in references \\cite{krishna2019speech,yoshimura2016decoding,kumar2018envisioned,sun2016neural,kim2014eeg,rosinova2017voice}. Even though EEG signals offer poor signal-to-noise ratio (SNR) and spatial resolution compared to invasive electrophysiological monitoring techniques like electrocorticography (ECoG) and local field potentials, the non-invasive nature of EEG makes it safe and easy to deploy and study. The EEG signals can be easily recorded by placing EEG sensors on the scalp of the subject, and the signals offer high temporal resolution. \nPreviously published studies on speech recognition using EEG used EEG signals recorded with wet EEG electrodes. Even though EEG signals recorded using wet electrodes demonstrate relatively higher SNR compared to EEG recordings obtained using dry electrodes, the time required to set up a wet EEG based recording system is significantly longer than for a dry EEG system. \nBefore starting a wet EEG recording session, a conductive gel needs to be applied on the scalp of the subjects. After the EEG recording is completed, the subjects then need to remove the gel from their scalp. In this paper we demonstrate speech recognition using EEG signals recorded using dry EEG electrodes. 
Dry EEG electrodes do not require the application of conductive gel on the scalp of the subject; moreover, for our experiments we used a dry EEG amplifier and wireless transmitter integrated into an easy-to-use, self-contained headset, making it extremely convenient for the subjects to wear. \n\nSpeech recognition using EEG signals might help people with speaking disabilities to restore their normal speech. Current state-of-the-art automatic speech recognition (ASR) systems used in virtual personal assistants like Siri, Alexa and Bixby can recognize only acoustic features, and this limits technology accessibility for people with speaking disabilities. Thus speech recognition using EEG can improve technology accessibility. \nReferences \\cite{krishna20,krishna2020synthesis} studied the problems of continuous speech recognition and speech synthesis using EEG signals recorded using wet EEG electrodes. In this paper we limit our focus to the problem of isolated speech recognition. \n\nIn this paper we propose deep learning models inspired by \\cite{krishna2019speech,krishna2019improving} to perform isolated speech recognition using EEG signals recorded using dry electrodes. We were able to achieve a test accuracy as high as \\textbf{79.07 \\%}. We demonstrate our results on a limited English vocabulary consisting of three vowels and one word. \nOur results demonstrate the feasibility of using EEG signals recorded using dry electrodes for performing the task of isolated speech recognition using deep learning models. The experiments and models proposed in this paper can be extended to study the problems of continuous speech recognition and speech synthesis using EEG signals recorded using dry electrodes. \n\n\n\\section{Speech Recognition Model}\n\\label{sec:format}\nThe architecture of the speech recognition model used in this work is explained in Figure 1. The model takes dry EEG features as input and predicts the label of the text. 
The model consists of a single layer of gated recurrent unit (GRU) \\cite{chung2014empirical} with 256 hidden units connected to a dropout regularization \\cite{srivastava2014dropout} with dropout rate 0.1 followed by another layer of GRU with 128 hidden units followed by a single layer of temporal convolutional network (TCN) \\cite{bai2018empirical} with 32 filters. The last time step output of the TCN layer is passed to a dense layer with softmax activation function. The number of units in the dense layer can be 2 or 3 or 4 depending on the number of labels used in that experiment. The labels were one hot vector encoded. \n\nThe model was trained for 1000 epochs with batch size 100. We used categorical cross entropy as the loss function and adam \\cite{kingma2014adam} as the optimizer. Motivated by the results demonstrated by the authors in \\cite{krishna2019improving}, we initialized the weights of the GRU layers of the speech recognition model using GRU layer weights derived from the regression model used to predict acoustic features from dry EEG features. The GRU layers were frozen and set to non-trainable in the speech recognition model. The intuition here is that the two GRU layers will learn the mapping from EEG to acoustic features and the trainable TCN layer will learn the mapping of these acoustic features to text. In \\cite{krishna2019improving} authors used similar idea for the task of continuous speech recognition using EEG signals. \n\nThe architecture of the regression model is described in Figure 2. The model consists of two layers of GRU with 256, 128 hidden units respectively with a dropout regularization of dropout rate 0.1 applied between the GRU layers, followed by a time distributed dense layer with linear activation function. The number of hidden units in the time distributed dense layer depends on the dimension of the target acoustic features. 
The model was trained to predict either mel-frequency cepstral coefficients (MFCCs) of dimension 13 or gammatone frequency cepstral coefficients (GFCCs) of dimension 13 or concatenation of GFCC and MFCC of dimension 26. In \\cite{krishna2019improving} authors trained their regression model to predict MFCC and articulatory features. In this work we didn't include articulatory features as we were working with only clean data set whereas in \\cite{krishna2019improving} authors used data sets recorded in presence and absence of background noise, hence articulatory features were helpful to them in providing noise robustness. \n\nThe regression model was trained for 2000 epochs with batch size 100. We used mean squared error as the loss function and adam as the optimizer. The objective of training the regression model was to derive weights to initialize the recurrent layers in the speech recognition model. \n\nFor both speech recognition and regression model we used 70\\% of the total data as training set, 10\\% as validation set and remaining 20\\% as test set. The Figure 3 shows the training and validation loss of the regression model. 
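The two models and the weight-transfer step described above can be sketched roughly as follows. This is a minimal illustrative sketch assuming a Keras-style API, not the authors' code: the layer sizes, dropout rate, loss functions and optimizer are taken from the text, while the TCN layer of the paper is approximated here by a plain causal Conv1D, and the input sequence length `T` is an arbitrary assumption.

```python
# Sketch of the regression model (EEG -> acoustic features) and the speech
# recognition model whose GRU layers are initialized from it and frozen.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

T, F_EEG, F_ACOUSTIC, N_LABELS = 50, 85, 26, 4  # assumed time steps; 85 EEG dims, MFCC+GFCC dims, labels

def build_regression_model():
    # Trained with MSE + Adam to predict MFCC/GFCC frames from EEG frames.
    m = models.Sequential([
        layers.GRU(256, return_sequences=True, input_shape=(T, F_EEG)),
        layers.Dropout(0.1),
        layers.GRU(128, return_sequences=True),
        layers.TimeDistributed(layers.Dense(F_ACOUSTIC, activation='linear')),
    ])
    m.compile(loss='mse', optimizer='adam')
    return m

def build_recognition_model(reg):
    # Same GRU stack, initialized from the regression model and frozen;
    # only the convolutional head and the softmax layer remain trainable.
    gru1 = layers.GRU(256, return_sequences=True, trainable=False,
                      input_shape=(T, F_EEG))
    gru2 = layers.GRU(128, return_sequences=True, trainable=False)
    m = models.Sequential([
        gru1,
        layers.Dropout(0.1),
        gru2,
        layers.Conv1D(32, kernel_size=3, padding='causal', activation='relu'),
        layers.Lambda(lambda x: x[:, -1, :]),   # last time step only
        layers.Dense(N_LABELS, activation='softmax'),
    ])
    m.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
    # transfer the EEG-to-acoustic mapping learned by the regression model
    gru1.set_weights(reg.layers[0].get_weights())
    gru2.set_weights(reg.layers[2].get_weights())
    return m
```

The intuition from the text is visible in the code: the frozen GRUs carry the EEG-to-acoustic mapping, and only the head learns acoustics-to-text.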
\n\n\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=7cm,width=0.3\\textwidth,trim={0.1cm 0.1cm 0.1cm 0.1cm},clip]{model.png}\n\\caption{Speech Recognition Model} \n\\label{1vsall}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=7cm,width=0.3\\textwidth,trim={0.1cm 0.1cm 0.1cm 0.1cm},clip]{reg.png}\n\\caption{Regression Model} \n\\label{1vsall}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=5cm,width=0.3\\textwidth,trim={0.1cm 0.1cm 0.1cm 0.1cm},clip]{egm.png}\n\\caption{Training and Validation loss for model used to predict MFCC and GFCC from EEG features} \n\\label{1vsall}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Design of experiments for building the data set}\n\\label{sec:typestyle}\n\nTwo male and two female subjects took part in the Speech-EEG recording experiments to collect data. Out of the four subjects, three were native American English speakers. All the four subjects were UT Austin undergraduate students in their early twenties.\nEach subject was asked to speak English vowels a,e,i and English word left and their simultaneous speech and EEG signals were recorded. The number of times the subjects were asked to repeat the experiment and number of samples recorded were similar to the study protocol explained in \\cite{krishna2019speech}. The data was recorded in absence of background noise. \n\nWe used QUICK-20 DRY EEG HEADSET for performing EEG recording experiments. The headset was manufactured by Cognionics. We used the data from the following 17 EEG sensors from the headset fp1, fp2, f8, f3, fz, f4,\nc3, cz, p8, p7, pz, t3\n,p3, o1, o2, c4 and t4. The sensors were placed on the headset following the standard international 10-20 layout. 
\n\n\n\\section{EEG feature extraction details}\n\\label{sec:majhead}\n\nThe EEG signals were sampled at 1000Hz and a fourth order IIR band pass filter with cut off frequencies 0.1Hz and 70Hz was applied. A notch filter with cut off frequency 60 Hz was used to remove the power line noise.\nThe EEGlab's \\cite{delorme2004eeglab} Independent component analysis (ICA) toolbox was used to remove other biological signal artifacts like electrocardiography (ECG), electromyography (EMG), electrooculography (EOG) etc from the EEG signals.\n\nThen we extracted five statistical features for EEG, namely root mean square, zero crossing rate, moving window average, kurtosis and power spectral entropy \\cite{krishna2019speech,krishna20}. In total there were 85 features (17(channels) X 5) for EEG signals. The EEG features were extracted at a sampling frequency of 500Hz for each EEG channel.\n\nEven though in \\cite{krishna20,krishna2019speech} authors performed non-linear dimension reduction after extracting EEG features, for this work we didn't perform dimension reduction as we were working with fewer number of EEG channels compared to the number of sensors used by authors in \\cite{krishna20,krishna2019speech}. \n\n\n\\section{Acoustic Feature Extraction}\n\\label{sec:print}\n\nFor training the regression model we extracted MFCC and GFCC features each of dimension 13 from the recorded speech signal as acoustic features. The recorded speech signal was sampled at a sampling frequency of 16KHz. The acoustic features were also extracted at the same sampling frequency of 500Hz like that of EEG features to avoid sequence-sequence mismatch. \n\n\n\\section{Results}\n\\label{sec:page}\nWe used test accuracy as the performance metric of the speech recognition model during test time. Test accuracy is defined as the ratio of number of correct predictions given by the model to total number of predictions in the test set. The obtained test time results are summarized in Table 1. 
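The five per-channel statistical features named in the feature-extraction section above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code: the window length and hop are assumptions (the text only specifies the 500 Hz feature rate), and ``moving window average'' is taken here to be the mean of the frame.

```python
import numpy as np

def eeg_frame_features(frame):
    """Five statistical features for one frame of one EEG channel:
    root mean square, zero crossing rate, moving window average,
    kurtosis and power spectral entropy."""
    rms = np.sqrt(np.mean(frame ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)   # fraction of sign changes
    mwa = np.mean(frame)                                 # moving window average
    centered = frame - np.mean(frame)
    std = np.std(frame) + 1e-12
    kurt = np.mean((centered / std) ** 4) - 3.0          # excess kurtosis
    psd = np.abs(np.fft.rfft(frame)) ** 2
    p = psd / (psd.sum() + 1e-12)                        # normalized power spectrum
    pse = -np.sum(p * np.log2(p + 1e-12))                # power spectral entropy
    return np.array([rms, zcr, mwa, kurt, pse])

def eeg_features(signal, win=20, hop=2):
    """Slide a window over one channel; with 17 channels this gives
    17 x 5 = 85 features per time step, as in the text."""
    steps = range(0, len(signal) - win + 1, hop)
    return np.stack([eeg_frame_features(signal[s:s + win]) for s in steps])
```

Concatenating the per-channel outputs over the 17 channels yields the 85-dimensional feature vector used as model input.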
The results demonstrate that initializing the GRU layers of the speech recognition model with weights derived from the EEG to MFCC + GFCC regression model resulted in the highest test time accuracy for all the experiments. We observed the highest test accuracy value of \\textbf{79.07 \\%} for the experiment involving predictions over the first two labels, as seen from Table 1. Figure 4 shows the corresponding training and validation accuracy plot for the experiment. The training and validation accuracy values were comparable, indicating that our model did not overfit on the data. \n \nWhen we performed the speech recognition experiment using EEG features from frontal and temporal lobe sensors only (8 channels in total) over the first two labels, with the GRU layer weights initialized with MFCC + GFCC regression weights, we observed a test accuracy of \\textbf{69.77\\%}. \n \nOur results were poorer than those for isolated speech recognition using wet EEG demonstrated by the authors in \\cite{krishna2019speech}, who were able to achieve an average test accuracy of more than 90\\%. This indicates that wet EEG offers a better SNR than dry EEG for the task of speech recognition, even though a wireless dry EEG headset is more convenient for a subject to wear than the wired wet EEG cap used by the authors in \\cite{krishna2019speech}. 
\n \n \n \n \n \n\\begin{table}[!ht]\n\\centering\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\\textbf{\\begin{tabular}[c]{@{}l@{}}Number\\\\ of\\\\ Labels\\\\ Used\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}\\% Test\\\\ Accuracy\\\\ Baseline\\\\ (GRU\\\\ layers\\\\ Random\\\\ Weights)\\end{tabular}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}\\% Test\\\\ Accuracy\\\\ (GRU layers\\\\ MFCC\\\\ Weights)\\end{tabular}}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}\\% Test\\\\ Accuracy\\\\ (GRU layers\\\\ GFCC\\\\ Weights)\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}\\% Test\\\\ Accuracy\\\\ (GRU layers\\\\ MFCC \\\\ +\\\\ GFCC\\\\ Weights)\\end{tabular}} \\\\ \\hline\n2 & 74.42 & 77.91 & 73.26 & \\textbf{79.07} \\\\ \\hline\n3 & 62.90 & 70.16 & 70.16 & \\textbf{72.58} \\\\ \\hline\n4 & 50.65 & 56.49 & 55.19 & \\textbf{61.04} \\\\ \\hline\n\\end{tabular}\n\\caption{Speech Recognition Test Time Results}\n\\end{table}\n \n \n \n \n \n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=6.5cm,width=0.5\\textwidth,trim={0.1cm 0.1cm 0.1cm 0.1cm},clip]{e.png}\n\\caption{Training and Validation accuracy plot} \n\\label{1vsall}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\section{Conclusions and Future work}\n\\label{sec:illust}\nIn this paper we demonstrated the feasibility of using dry EEG features for performing isolated speech recognition on a limited English vocabulary using deep learning model. To the best of our knowledge this is the first work which explored speech recognition using dry EEG features. We were able to achieve highest test accuracy of \\textbf{79.07\\%}. \n\nFuture work will focus on exploring the use of dry EEG for other speech technologies and improving current performance. 
\n\n\n\\section{Acknowledgements}\n\\label{sec:foot}\nWe would like to thank Kerry Loader and Rezwanul Kabir from Dell, Austin, TX for donating the GPU used to train the models in this work.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Domany-Kinzel cellular automaton\\cite{Domany} (DK herein) has been extensively studied in recent years. Its combined simplicity and rich dynamical behavior make it a prototype for studying and testing different theoretical aspects of complex systems, such as dynamical critical properties\\cite{Martins2}, directed percolation\\cite{Domany,Grassberger} and spreading of damage\\cite{Martins,Hinrichsen}.\n\nThe DK model consists of a one-dimensional lattice with binary variables associated with each site $i$ ($S_i=0$: ``dry'' or ``death'' state and $S_i=1$: ``active'' state). The system evolves in discrete time, with the state of each site depending stochastically on the configuration of its two nearest neighbor sites at the previous time, independently of the other sites at the same time. In other words, the interactions in the DK model are local or short ranged both in space and time. The dynamics of the model is thus defined by four independent parameters, namely, the transition probabilities corresponding to the four different configurations of the nearest neighbors of any site.\n\nA simple version of the model is obtained by setting to zero the probability of any site being in the active state given that both its neighbors were in the death state (which defines a so-called ``legal'' rule of evolution), and considering only one value for the probability of the site being in the active state given that only one of its neighbors was in the active state, independently of which one. This leads to a two-parameter model which retains most of the complex behavior of the full model. 
Domany and Kinzel showed that this model presents a continuous transition between a ``frozen'' phase where all the sites are in the death state (this is an absorbing state from which the system cannot escape once it is reached) and an ``active'' phase with a constant average fraction of sites in the active state. Some years later, Martins {\\it et al} showed\\cite{Martins}, using the spreading of damage technique, that the active phase is divided into two regions: one in which the system presents sensitivity to the initial conditions and one in which it does not. Such regions were interpreted as dynamical phases and the transition between them was shown to be a continuous one. The sensitivity to initial conditions is studied by looking at the time evolution of the Hamming distance between two replicas of the system ({\\it i.e.}, two identical systems with different initial configurations) under correlated noise\\cite{Martins}. Recently, Hinrichsen {\\it et al} showed that the active phase is actually subdivided into three dynamical phases which depend on the correlations between the different types of stochastic noise used for each replica. The phases are the following: ({\\it i}) one in which the damage spreads for all possible types of correlations, which coincides with the so called chaotic phase found by Martins {\\it et al}; ({\\it ii}) one in which the damage spreads for some types of noise but not for others, and ({\\it iii}) one where the damage always heals.\n\nThe phase diagram of this model was calculated using different techniques, such as mean field\\cite{Tome,Kohring,Bagnoli} and Monte Carlo\\cite{Martins2,Hinrichsen,Zebende}. \n\nIn this work we introduce a generalization of the above model, in which the state of any site depends on the state of {\\bf all} the rest of the sites at the previous time through a distance-dependent transition probability. 
In this case the interaction between a pair of sites separated by a distance $r$ decays with a power law of the type $1\/r^\\alpha$ ($\\alpha\\geq 0$). In other words, the interactions are long ranged in space. Microscopic long-range interactions appear in different physical contexts (see Ref.\\cite{Cannas} and references therein) but, until now, most of the theoretical analysis has been concentrated into the equilibrium properties of such kind of systems. Hence, it is of interest to analize systematically the influence of long range interactions in a dynamical model, such as a cellular automaton.\n\n\n\\section{The model: long-range cellular automaton}\n\nLet us consider a stochastic one-dimensional lattice with $N+1$ sites. The configurations of the system are described in terms of a set of occupation variables $S_i=0,1$ with $i=0,1,\\ldots,N$. Let $S^t_i$ the state of the site $i$ at the discrete time $t$. We denote by $S^t\\equiv \\left\\{S^t_i\\right\\}$ the state of the complete system at time $t$. We consider periodic boundary conditions, so that\n\n\\begin{equation}\nS^t_{i+N+1} = S^t_i \\;\\;\\;\\;\\;\\;\\;\\; i=0,1,\\ldots,N\n\\label{pbc}\n\\end{equation}\n\nThe system evolves in time according to a spatially {\\bf global} rule, but without memory effects, that means, the state of the site $i$ at time $t$ depends on the state of {\\bf all} the rest of the sites $S^{t-1}$ at the previous time $t-1$. However, the influence of a site $j$ at time $t-1$ over the site $i$ at time $t$ decays with the distance between sites with a power law $\\left| i-j \\right|^\\alpha$ with $\\alpha\\geq 0$. 
Before establishing explicitly the evolution rule we introduce an auxiliary function $p'(S_3|S_1,S_2)$ of three binary variables $S_1,S_2$ and $S_3$, such that\n\n\\begin{equation}\np'(0|S_1,S_2)+p'(1|S_1,S_2)=1\\;\\;\\;\\;\\;\\;\\;\\forall S_1,S_2\n\\label{eq2}\n\\end{equation}\n\n\\noindent and\n\n\\begin{eqnarray}\np'(1|0,0) &=& 0 \\nonumber\\\\\np'(1|1,0) &=& p'(1|0,1) = p_1 \\label{eq2'}\\\\\np'(1|1,1) &=& p_2 \\nonumber\n\\end{eqnarray}\n\n\\noindent with\n\n\\begin{equation}\n0 \\leq p_1,p_2 \\leq 1.\n\\label{eq4}\n\\end{equation}\n\n\\noindent Notice that $p'(S_3|S_1,S_2)$ has the properties of a conditional probability for the variable $S_3$ given the values of $S_1$ and $S_2$. Now, we consider a parallel dynamics, so that all the sites are updated simultaneously and independent of the rest of the sites at the same time. Then, the transition probability $W(S^t|S^{t-1})$ from state $S^{t-1}$ to $S^t$ is given by\n\n\\begin{equation}\nW(S^t|S^{t-1})= \\prod_{i=0}^N w(S^t_i|S^{t-1})\n\\label{eq5}\n\\end{equation}\n\n\\noindent where $w(S^t_i|S^{t-1})$ is the probability that the site $i$, at time $t$, has the value $S^t_i$ given that, at time $t-1$, the system was at $S^{t-1}$.\n\nThe model is then defined by the expression:\n\n\\begin{equation}\nw(S^t_i|S^{t-1})=\\frac{\\sum_{n=1}^{N\/2} \\frac{1}{n^\\alpha}\\; \n p'(S^t_i|S^{t-1}_{i-n},S^{t-1}_{i+n})}\n {\\sum_{n=1}^{N\/2} \\frac{1}{n^\\alpha}}\n\\label{modelo}\n\\end{equation}\n\n\\noindent where $N$ is assumed an even number for simplicity and $\\alpha\\geq 0$. Note that the interactions are symmetric on both sides of site $i$ and that, together with the periodic boundary conditions (\\ref{pbc}) every interaction is counted just once on a ring of length $N+1$, as schematized in Fig.1. The sum $\\sum_{n=1}^{N\/2} \\frac{1}{n^\\alpha}$ that appears in the denominator of Eq.(\\ref{modelo}) ensures the correct normalization of the conditional probability $w(S^t_i|S^{t-1})$. 
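A single parallel update implementing the transition probability $w(S^t_i|S^{t-1})$ defined above can be sketched as follows. This is an illustrative sketch, not the authors' code; the table for $p'(1|\cdot,\cdot)$ follows Eqs.(\ref{eq2'}), and the cost per sweep is $O(N^2)$, as each site sums over all distances $n$.

```python
import numpy as np

def step(S, p1, p2, alpha, rng):
    """One parallel update of the long-range automaton.
    S: array of N+1 binary states on a periodic ring (N even)."""
    L = len(S)                      # L = N + 1 sites
    N = L - 1
    n = np.arange(1, N // 2 + 1)
    w_n = 1.0 / n ** alpha          # power-law weight of distance n
    norm = w_n.sum()                # normalizing sum in the denominator
    # conditional probability p'(1 | S_left, S_right): 0, p1, p1, p2
    p_table = np.array([[0.0, p1], [p1, p2]])
    prob1 = np.empty(L)
    for i in range(L):
        left = S[(i - n) % L]       # partners at distance n on each side
        right = S[(i + n) % L]
        prob1[i] = np.dot(w_n, p_table[left, right]) / norm
    # all sites updated simultaneously and independently
    return (rng.random(L) < prob1).astype(int)
```

Two checks follow directly from the definitions: the empty lattice is absorbing for any parameters (since $p'(1|0,0)=0$), and for $p_2=1$ the fully occupied lattice is a fixed point.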
It is also related\\cite{Cannas1} to some function $N^*(\\alpha)$ that provides the correct scaling of the thermodynamic functions of magnetic models\\cite{Tsallis} for $\\alpha\\leq 1$, where these kind of interactions are non integrable.\n\nFrom Eqs.(\\ref{eq2})-(\\ref{eq4}) is easy to verify the following properties:\n\n\\begin{equation}\n0\\leq w(S^t_i|S^{t-1})\\leq 1\n\\end{equation}\n\n\\begin{equation}\n\\sum_{S_i=0,1} w(S_i|S^{t-1})=1 \\;\\;\\;\\;\\;\\forall S^{t-1}\n\\label{normalization}\n\\end{equation}\n\n\\noindent and from Eq.(\\ref{eq5}):\n\n\\begin{equation}\n\\sum_{S^t} W(S^t|S^{t-1})=1 \\;\\;\\;\\;\\;\\forall S^{t-1}\n\\end{equation}\n\nIn the limit $\\alpha\\rightarrow\\infty$ we have that\n\n\\begin{equation}\n\\lim_{\\alpha\\rightarrow\\infty} w(S^t_i|S^{t-1})=\n p'(S^t_i|S^{t-1}_{i-1},S^{t-1}_{i+1})\n\\end{equation}\n\n\\noindent and the model reduces to the DK cellular automaton \\cite{Domany}. For finite values of $\\alpha$ the transition probability for the site $i$ depends on all the rest of the sites. For small values of $\\alpha$ the interaction strength tends to become independent of $\\alpha$. This is exactly the case for $\\alpha=0$ where the system lost its dimensionality. In the limit $N\\rightarrow\\infty$ this model ($\\alpha=0$) can also be thought as an infinite dimensional one. In magnetic systems the mean field approximation becomes exact in the limit of infinite dimensionality. Moreover, in a recent study of the one-dimensional Ising model with long-range interactions of the type $1\/r^\\alpha$ it has been shown that the mean field behaviour is exact, not only in the infinite dimensional case $\\alpha=0$ (which corresponds to the Curie-Weiss model) but in the full range of interactions\\cite{Cannas1} $0\\leq \\alpha\\leq 1$. This result suggests that mean field behaviour could be an universal property of systems with long-range interactions, from which the infinite dimensional case $\\alpha=0$ is just a particular case. 
Therefore, it is interesting to investigate whether or not this property is shared by dynamical models with long-range interactions, like the present one.\n\nIn a very broad sense (and perhaps a rather vague one) a mean field approximation is one in which fluctuations are neglected. Different systematic mean field approaches have been proposed for the DK cellular automaton\\cite{Tome,Bagnoli,Kohring}. They all coincide (at least qualitatively) in the prediction for the shape of the frozen-active transition line, but they differ in the prediction of the damage spreading transition line. Hence, it is of interest to analyze the behaviour of the long-range cellular automaton for small values of $\\alpha$, where some kind of mean-field type behaviour can be expected, by analogy with the magnetic models.\n\nLet $P_t(S)$ be the probability of state $S$ at time $t$. The time evolution of this function is given by the equation:\n\n\\begin{equation}\nP_{t+1}(S)=\\sum_{S'} W(S|S') P_t(S').\n\\label{Pt}\n\\end{equation}\n\nWe are mainly interested in the asymptotic behaviour of this model for $t\\rightarrow\\infty$. 
Since this behaviour is not expected to depend on the initial state (except for some very special cases, for instance, $S^t_i=0$ $\\forall i$, where the solution is trivial), we consider, without loss of generality, the following initial state:\n\n\\begin{equation}\nP_0(S) = \\prod_{i=0}^N \\frac{1}{2} \\left(\\delta_{S_i,0}+\\delta_{S_i,1}\\right)\n\\end{equation}\n\n\\noindent where $\\delta_{S_i,S_j}$ is a Kronecker delta function.\n\nWe denote by $\\left< f(S) \\right>_t$ the mean value of a state function $f(S)$ at time $t$:\n\n\\begin{equation}\n\\left< f(S) \\right>_t\\equiv \\sum_S f(S) P_t(S)\n\\end{equation}\n\nThe mean activity is then defined as\n\n\\begin{equation}\na_N(t) \\equiv \\frac{1}{N+1} \\left< \\sum_{i=0}^N S_i \\right>_t =\n \\frac{1}{N+1} \\sum_{i=0}^N S_i P_t(S_i)\n\\end{equation}\n\n\\noindent where $P_t(S_i)$ is the marginal probability for $S_i$:\n\n\\begin{equation}\nP_t(S_i) \\equiv \\sum_{\\left\\{ S_{j\\neq i} \\right\\}} P_t(S).\n\\label{marginal}\n\\end{equation}\n\nSince we are considering an homogeneous system $P_t(S_i)$ is independent of the site $i$. Then, defining $P_t(1)\\equiv P_t(S_i=1)$ we have that\n\n\\begin{equation}\na_N(t) = P_t(1)\n\\label{at}\n\\end{equation}\n\nThis kind of system is expected to present two type of homogeneous stationary states for $t\\rightarrow\\infty$: one with $S_i=0$ $\\forall i$, which defines the so-called ``frozen'' or ``dry'' phase, and one with a finite fraction of active sites $S_i=1$, which defines the ``active'' phase. The order parameter for this phase transition is given by\n\n\\begin{equation}\na_N \\equiv \\lim_{t\\rightarrow\\infty} a_N(t).\n\\end{equation}\n\nFrom Eqs.(\\ref{eq5}), (\\ref{Pt}) and (\\ref{at}) the time evolution of $a_N(t)$ is given by the equation:\n\n\\begin{equation}\na_N(t+1) = \\left< w(S_i=1|S')\\right>'_t\n\\label{at2}\n\\end{equation}\n\n\\noindent for an arbitrary site $i$. Let $P_t(S_i,S_j)$ the joint marginal probability for $S_i$ and $S_j$. 
From Eqs.(\\ref{eq2'}), (\\ref{modelo}) and (\\ref{at2}) we have that\n\n\\begin{equation}\na_N(t+1) = 2\\, p_1\\, a_N(t) + \\frac{p_2-2\\, p_1}{\\sum_{n=1}^{N\/2} \\frac{1}{n^\\alpha}}\\;\n \\sum_{n=1}^{N\/2} \\frac{1}{n^\\alpha} P_t^n(1,1)\n\\label{at3}\n\\end{equation}\n\n\\noindent where \n\n\\[ P_t^n(1,1) \\equiv P_t(S_{i-n}=1,S_{i+n}=1) \\]\n\n\\noindent and we have used that \n\n\\[ P_t(1,0) = P_t(0,1) = P_t(1)-P_t(1,1) \\]\n\n\\noindent for all pair of sites $i\\neq j$.\n\n\\section{The frozen-active phase transition}\n\nBefore proceeding with the mean-field and Monte Carlo calculations we can obtain a few analytical properties of the phase diagram in the $(p_1,p_2)$ space, regarding the frozen-active phase transition.\n\nFirst of all, notice that, from Eqs.(\\ref{eq2'}) and (\\ref{modelo}), the absorbing state $a_N=0$ ($S_i=0\\;\\forall i$) is always a fixed point of the dynamics for all values of $\\alpha$ and $N$. That means, $a_N(t)=0$ implies that $a_N(t+1)=0$. This state is expected to be an attractor of the dynamics for low values of $p_1$, as occurs in the DK automaton ($\\alpha=\\infty$).\n\nOn the other hand, for the particular case $p_2=1$ the fully occupied state $a_N=1$ ($S_i=1\\;\\forall i$) is also a fixed point of the dynamics for all values of $\\alpha$ and $N$. If we now perform a ``particle-hole'' transformation, by defining a new set of variables $S'_i=1-S_i$ $\\forall i$, we obtain, from Eqs.(\\ref{eq2})-(\\ref{modelo}) and Eq.(\\ref{normalization}), an equivalent system with parameters $p'_2=1$ and $p'_1=1-p_1$. We also see that $a'_N=1-a_N$. Therefore, if the system presents a frozen-active phase transition for $p_2=1$ then the critical value of $p_1$ has to be $p_1=p'_1=1\/2$. Moreover, in the active phase $a_N=1$ and the phase transition will be a discontinuous one. 
These results will be verified by both the mean-field and Monte Carlo calculations.\n\n\\subsection{Mean-field solution}\n\nFrom Eq.(\\ref{at3}) we see that the time evolution of $a_N(t)$ depends on higher order correlation functions between sites. A simple dynamical mean-field approximation can be obtained by neglecting such correlations \\cite{Tome}, that is, \n\n\\begin{equation}\nP_t(S_i,S_j)=P_t(S_i)P_t(S_j)\\;\\;\\;\\;\\;\\;\\forall\\; i,j\n\\end{equation}\n\n\\noindent which implies that\n\n\\begin{equation}\nP^n_t(1,1) = \\left[ P_t(1) \\right]^2 = a_N^2(t)\n\\end{equation}\n\nReplacing into Eq.(\\ref{at3}) we find that\n\n\n\\begin{equation}\na_N(t+1) = 2\\, p_1\\, a_N(t) + (p_2-2\\, p_1) a_N^2(t)\n\\label{at4}\n\\end{equation}\n\nThis equation is independent of $\\alpha$ and $N$, and it is the same already encountered (with the same approximation) for the DK automaton\\cite{Tome}. The fixed point equation $a_N(t+1)=a_N(t)=a_N$ derived from Eq.(\\ref{at4}) gives the critical line $p_1^c=1\/2$ $\\forall p_2$ with\n\n\\begin{equation}\na_N= \\frac{2\\, p_1-1}{2\\, p_1-p_2}\n\\label{MF}\n\\end{equation}\n\n\\noindent in the active phase $p_1\\geq p_1^c$.\n\nWe can now develop a systematic improvement of this mean-field approximation, by neglecting the pair correlations only between sites located at a distance greater than $K$ from site $i$, for some integer value $K$, that is, \n\n\\begin{equation}\nP^n_t(1,1) = \\left[ P_t(1) \\right]^2 \\text{ for } n>K\n\\end{equation}\n\n\\noindent with $1\\leq K < N\/2$, which yields\n\n\\begin{equation}\na_N(t+1) = 2\\, p_1\\, a_N(t) + \\frac{p_2-2\\, p_1}{\\sum_{n=1}^{N\/2} \\frac{1}{n^\\alpha}} \\left[ \\sum_{n=1}^{K} \\frac{1}{n^\\alpha}\\, P_t^n(1,1) + a_N^2(t) \\sum_{n=K+1}^{N\/2} \\frac{1}{n^\\alpha} \\right]\n\\label{at5}\n\\end{equation}\n\n\\noindent In the limit $N\\rightarrow\\infty$ the normalization sum converges,\n\n\\begin{equation}\n\\sum_{n=1}^{N\/2} \\frac{1}{n^\\alpha} \\rightarrow \\zeta(\\alpha) \\;\\;\\;\\;\\;\\; \\text{for } \\alpha>1,\n\\end{equation}\n\n\\noindent where $\\zeta(\\alpha)$ is the Riemann Zeta function, while it diverges for $0\\leq\\alpha\\leq 1$. Hence, for any {\\it finite} value of $K$, Eq.(\\ref{at5}) reduces to Eq.(\\ref{at4}) in the limit $N\\rightarrow\\infty$, for $0\\leq\\alpha\\leq 1$. 
This fact suggests that the mean field solution becomes exact for an infinite lattice when $0\\leq\\alpha\\leq 1$, while a departure from such a solution should be expected for $\\alpha>1$, a behaviour that has already been observed in the ferromagnetic Ising model with long-range interactions, as was mentioned in the preceding section. In fact, the Monte Carlo simulations of the next subsection confirm this conjecture.\n\n\\subsection{Monte Carlo results}\n\nWe performed a Monte Carlo simulation for several system sizes, $N=100$, $200$, $400$ and $800$, and for different values of $\\alpha$. It is worth mentioning that the computational complexity of the algorithm implied by the long-ranged transition probability (\\ref{modelo}) is $O(N^2)$.\n\nWe calculated the transition line between the frozen and active phases, finding a continuous transition for all values of $\\alpha$ in the full plane $(p_1,p_2)$, except for $p_2=1$, where the numerical results verify the prediction made at the beginning of this section. That is, for $p_2=1$, the transition point always occurs at $p_1^c=1\/2$ for all values of $N$ and $\\alpha$, and the transition is a discontinuous one at that point. The typical shapes of the transition lines between the frozen (left part of the diagram) and active (right part of the diagram) phases are shown in Fig.2 for $N=800$ and different values of $\\alpha$. The curves all show the same qualitative shape as the corresponding one for the short-range model $\\alpha=\\infty$ (DK model). We can observe that, for $\\alpha$ running from $\\alpha=\\infty$ to $\\alpha=1$, the curves move smoothly to the left of the diagram. All the curves with $\\alpha$ between $0$ and $1$ are indistinguishable inside the error intervals of the calculation (the error bars in the $p_1$ direction are approximately of the order of the symbol size; there are no error bars in the $p_2$ direction). 
This behaviour was verified for all the values of $N$ used in the simulation.\n\nFor each checked value of $\\alpha>1$ the corresponding curves show a fast convergence to an asymptotic one for increasingly higher values of $N$. On the other hand, for $0\\leq\\alpha\\leq 1$ the curves show a slow convergence towards the $p_1=1\/2$ vertical line, as depicted in Fig.3 for $\\alpha=0$. In fact, we verified for several values of $p_2$ the asymptotic behaviour $p_1^c-1\/2 \\sim A(p_2)\\, N^{-1\/2}$ for high values of $N$, where $A(p_2)$ is some continuous function of $p_2$ with $A(1)=0$. Fig.4 illustrates the typical behaviour of $p_1^c$ as a function of $\\alpha$, for several values of $N$ and a particular value of $p_2$. The circles in that figure correspond to a numerical extrapolation for $N\\rightarrow\\infty$.\n\nIn Fig.5 we show a numerical extrapolation for $N\\rightarrow\\infty$ of the full critical lines for different values of $\\alpha$.\n\nFinally, in Fig.6 we show a numerical extrapolation for $N\\rightarrow\\infty$ of the mean activity $a = \\lim_{N\\rightarrow\\infty} a_N$ {\\it vs} $p_1$ curves (symbols) for three different values of $p_2$ and $0\\leq\\alpha\\leq 1$ (the activity curves for different values of $\\alpha$ in this range are indistinguishable). The solid lines are the mean field prediction of Eq.(\\ref{MF}). The excellent agreement between these results confirms the conjecture of the preceding subsection.\n\n\\section{Spreading of Damage}\n\nTo study the spreading of damage in this model we have to create a replica of the system with an initial ``damage'', by flipping randomly a fraction $p$ of sites\\cite{Martins}. Then we let both replicas evolve under correlated noise and analyze the long-time evolution of the normalized Hamming distance between replicas. 
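The replica procedure just described, with both copies evolving under identical realizations of the noise, can be sketched on top of the parallel update rule defined earlier. This is an illustrative sketch, not the authors' code; the system size, parameters and damage fraction below are arbitrary assumptions.

```python
import numpy as np

def step_pair(A, B, p1, p2, alpha, rng):
    """Update two replicas with the SAME random numbers (identical noise)."""
    L = len(A); N = L - 1
    n = np.arange(1, N // 2 + 1)
    w = 1.0 / n ** alpha
    w /= w.sum()                              # normalized distance weights
    p_table = np.array([[0.0, p1], [p1, p2]])  # p'(1 | S_left, S_right)
    u = rng.random(L)                          # one shared draw per site
    out = []
    for S in (A, B):
        prob1 = np.array([np.dot(w, p_table[S[(i - n) % L], S[(i + n) % L]])
                          for i in range(L)])
        out.append((u < prob1).astype(int))
    return out

def hamming(A, B):
    return np.mean(A != B)    # normalized Hamming distance

# damage a fraction of sites in the replica and follow the distance in time
rng = np.random.default_rng(1)
A = rng.integers(0, 2, 101)
B = A.copy()
B[rng.random(101) < 0.1] ^= 1                 # initial damage, fraction ~0.1
for _ in range(50):
    A, B = step_pair(A, B, p1=0.8, p2=0.6, alpha=0.5, rng=rng)
d = hamming(A, B)
```

Note that under identical noise two identical replicas remain identical forever, so any surviving Hamming distance is attributable to the initial damage.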
If the Hamming distance evolves to zero it is said that the damage ``heals''; otherwise, the damage ``spreads''.\n\nIn a recent work, Hinrichsen\\cite{Hinrichsen} {\\it et al} have shown that the damage can spread in the DK automaton for different kinds of correlated noise. In this work we let the replicas evolve under {\\bf identical realizations} of the stochastic noise, which was the most studied case in the DK automaton\\cite{Martins,Tome,Zebende,Bagnoli,Grassberger}. This corresponds, according to the definition of Hinrichsen {\\it et al}, to maximally correlated noise, at least for the DK automaton\\cite{Hinrichsen} ($\\alpha=\\infty$). This property is expected to be valid for all $\\alpha$, since in this case a single random number is generated for every site in both replicas. Other types of correlated noise are very difficult to simulate in the long-range model, because they involve, in principle, the generation of $O(2^N)$ random numbers for every site at every time step.\n\nWe performed this Monte Carlo simulation for $N=100$, $200$ and $400$, finding that, for $0\\leq\\alpha\\leq 1$, the damage spreading is suppressed for all values of $(p_1,p_2)$. For $\\alpha>1$ a small region where the damage spreads appears in the lower right corner of the diagram. The transition line moves continuously towards that of the DK model ($\\alpha=\\infty$).\n\nIn Fig.7 we show the transition lines between the healed and the ``chaotic'' regions, for different values of $\\alpha>1$ and $N=400$.\n\n\\section{Conclusions}\n\nWe introduced a one-dimensional stochastic cellular automaton which allows one to analyze different ranges of interactions through a single parameter $\\alpha$. 
We verified that this model reproduces, for high values of $\alpha$, most of the known results of the DK model ($\alpha=\infty$).\n\nWe found two different regimes, according to the range of values of $\alpha$.\n\nFor $\alpha>1$ the dynamical phase diagram shows the same qualitative aspects as the one corresponding to the DK model, that is, three different regions: a frozen and an active phase, with the latter divided into two regions, one where the damage spreads (``chaotic'' phase) and one where the damage heals, when the replicas evolve under the same noise. The transition lines between these regions vary continuously when $\alpha$ decreases from $\alpha=\infty$ to $\alpha=1$.\n\nFor $0\leq\alpha\leq 1$ the phase diagram becomes independent of $\alpha$, even for finite $N$. No damage spreading was found in this case, at least for replicas evolving under the same noise. Regarding the frozen-active phase transition, we presented strong evidence that the mean field theory is {\bf exact} in the limit $N\rightarrow\infty$. The fact that the same behaviour appears in the thermodynamics of the one-dimensional Ising model\cite{Cannas1} with similar interactions\cite{nota2} may suggest the existence of some {\bf universal} relationship between mean field theories and very long-ranged interactions ($0\leq\alpha\leq d$, where $d$ is the space dimension). \n\nIt is worth noting that mean field calculations of the Hamming distance dynamics between replicas evolving under two different types of correlated noise\cite{Tome}-\cite{Bagnoli} predict a spreading of damage transition in the DK model. It is a general heuristic argument (based on the equilibrium behaviour of magnetic systems) that mean field approximations of short-range models ($\alpha=\infty$) show the same behaviour as the corresponding fully connected models ($\alpha=0$). We have shown that this is not true for the spreading of damage transition in the DK model.
This discrepancy between the two phenomena in the cellular automata indicates that the spreading of damage transition is of a different nature, at least in some of its properties, than other cooperative phenomena like the frozen-active or the ferromagnetic phase transitions. \n\nFinally, there is a recent conjecture by Grassberger\cite{Grassberger} which states that all continuous transitions from an ``absorbing'' state to an ``active'' one with a single order parameter are in the universality class of directed percolation, provided that some conditions are fulfilled, among others, {\bf short-range interactions} in space and time. Both transitions in the present model fall into this category, except for the range of the interactions for small values of $\alpha$. Hence, a model which allows one to interpolate between long- and short-range regimes, and for which the long-range behaviour is known exactly, may be a good candidate for testing the above conjecture.\n\n\n\nI am indebted to Constantino Tsallis for many fruitful\nsuggestions and discussions about this problem. Useful discussions with M. L. Martins, T. J. Penna and M. de Oliveira are acknowledged. I also \nacknowledge the warm hospitality received at the Centro Brasileiro\nde Pesquisas F\'\i sicas (Brazil), where this work was partly\ncarried out.
This work was partially supported by grants\nfrom Funda\c{c}\~ao Vitae (Brazil), \nConsejo Nacional de Investigaciones Cient\'\i ficas y T\'ecnicas\nCONICET (Argentina), Consejo Provincial de\nInvestigaciones Cient\'\i ficas y Tecnol\'ogicas CONICOR (C\'ordoba,\nArgentina) and Secretar\'\i a de Ciencia y\nTecnolog\'\i a de la Universidad Nacional de C\'ordoba\n(Argentina).\n\n\section{Introduction}\n\label{sec:introduction}\nDeep neural networks (DNN) are applied in an increasing number of sensitive domains.\nYet, they exhibit vulnerabilities against a broad range of attacks threatening privacy and security \cite{Shokri.2017Membership,Fredrikson.2015Model,Szegedy.2014Intriguing}. \nTo protect DNNs from such risks, numerous defense strategies have been proposed in recent years.\n\nAs for privacy protection, \emph{Differential Privacy} (DP) \cite{Dwork.2006Differential} has become a standard framework.\nBy providing strong mathematical guarantees on privacy disclosure, DP makes it possible to conduct data analyses on a population while protecting individual privacy.\nWith the introduction of the differentially private stochastic gradient descent algorithm (DP-SGD) \cite{Abadi.10242016Deep}, DP has been made available for numerous machine learning (ML) algorithms, such as DNNs.\nIt has been shown that training with DP can mitigate the risks of privacy-threatening attacks \cite{Shokri.2017Membership}.\nAt the same time, a large body of research has evolved around protecting DNNs against \emph{adversarial examples} \cite{Goodfellow.2014Explaining,Huang.2011Adversarial,Madry.2017Towards}.\nSuch data points contain small, human-imperceptible perturbations that force the attacked DNNs into misclassifications.\nThis poses a security threat for all neural-network-based architectures and, depending on the use case, may lead to severe incidents.\nCurrently, adversarial retraining is considered the most
effective method to protect against such attacks \cite{Ren.2020Adversarial}.\nDuring this computationally expensive process, adversarial examples are used to retrain the DNNs using their original labels \cite{Madry.2017Towards}.\n\nEven though the creation of models that are both private and secure at the same time is a desirable goal, the majority of previous work has focused on only one of the two tasks \cite{Song.2019Membership}.\nOnly recently has the intersection of the different research branches received more attention \cite{Pinot.2019unified, Phan.2019Preserving, Phan.2019Heterogeneous, Phan.2019Scalable, Ibitoye.2021DiPSeN}.\nThe interrelation between privacy and security can be approached from two sides: either by evaluating the privacy implications of making a model more \emph{robust}, or by studying the influence of private training on model robustness.\nSo far, the former, namely the question of the influence of adversarial retraining on model privacy, has been investigated more thoroughly \cite{Giraldo.2020Adversarial,He.2020Robustness}.\nIt was shown that adversarial retraining decreases the membership privacy of a model's training data points. \Ie, \nfor an attacker, it becomes easier to determine whether a specific data point was used during training \cite{Song.2019Membership,Mejia.2019Robust,Hayes.2020Provable}.\nRegarding the latter perspective, first results suggest that applying DP-SGD to train a DNN has negative impacts on the model's robustness, even though there might potentially be privacy parameter combinations that are beneficial for both goals \cite{Tursynbek.2020Robustness}.\nHowever, the experiments conducted to evaluate robustness in \cite{Tursynbek.2020Robustness} are limited to gradient-based adversarial attacks. \nHence, it remains unclear whether these findings can be generalized and whether the robustness can be confirmed for other adversarial attack methods.
\n\nTo further shed light on the intersection between models' robustness and privacy, this work presents a comprehensive study building upon current findings.\nIn more detail, in addition to the previously evaluated robustness against gradient-based attacks \cite{Goodfellow.2014Explaining, Madry.2017Towards}, the experiments include gradient-free \cite{Brendel.2017Decision} and optimization-based methods \cite{Carlini.2017Towards}. \nFurthermore, to the best of the authors' knowledge, this work is the first to study adversarial transferability \cite{Goodfellow.2014Explaining} among models with different privacy settings, and between private and non-private models.\nFinally, it is also the first to analyze DP models with respect to gradient masking \cite{Papernot.2017Practical}, which can create a false sense of robustness.\n\nIn summary, the presented work makes the following contributions:\n\begin{itemize}\n \item It experimentally evaluates the impact of DP-SGD training on adversarial robustness within a broad range of different privacy setups and parameters, using various types of adversarial attacks.
\n Results suggest that DP models may be more prone to adversarial attacks than non-private models, and that increasing privacy through higher noise levels (especially in combination with high clip norms) decreases adversarial robustness.\n \item To exclude the effect of gradient masking from the robustness analyses, transferability attacks between private and non-private models are conducted.\n Their results suggest that adversarial examples transfer better from non-private models to private ones than the other way round.\n Additionally, there seems to be a higher transferability among DP models than between private and non-private ones.\n \n \item To investigate why DP models might be vulnerable to adversarial attacks, the gradients of the models are studied and compared with the gradients of non-private models.\n \item The work uncovers that in some settings, DP-SGD unintentionally causes gradient masking, which might explain the apparently increased robustness of certain settings reported in previous work~\cite{Tursynbek.2020Robustness}.\n Furthermore, it presents parameter combinations which intentionally provoke this effect.\n The findings are confirmed using a catalogue of criteria previously presented in the literature \cite{Athalye.2018Obfuscated,Carlini.2019On}.\n\end{itemize}\n\n\section{Background and Related Work}\n\label{sec:related}\nThis section presents the theoretical background of deep learning classification, DP, adversarial machine learning (ML), and gradient masking.\n\n\subsection{Deep Learning Classification}\n\label{ssec:notation}\nLet $f : \mathbb{R}^m \rightarrow \{1 \dots k\}$ be a deep learning model for classification. \nThe model $f$ maps the input $x$ to a discrete label $l \in \{1 \dots k \}$.\nInternally, $f$ consists of $n$ parametric functions $f_i$ with $i \in \{1 \dots n \}$.
\nEach function is represented by a layer of neurons that apply an activation function to the weighted output of the previous layer in order to create a new intermediate representation.\nLayer $i$ is parameterized by a weight vector $\theta_i$ that incorporates the knowledge of $f$ learned during the training procedure \cite{Papernot.2017Practical}.\n\nDuring training, $f$ is given a large number of input-label pairs $(x, y)$ based on which the weight vectors $\theta_i$ are adapted.\nAdaptation is usually performed based on backpropagation \cite{Chauvin.1995Backpropagation}.\nTo this end, in a \emph{forward pass}, an input is propagated through the network by computing $f(x) = f_n (\theta_n, f_{n-1} (\theta_{n-1}, \dots f_2 (\theta_2, f_1 (\theta_1, x))))$. \nA loss function $\mathcal{L}$ then quantifies the difference between $f(x)$ and the original label $y$. \nTo reduce the cost during training, in the \emph{backward pass}, the weight vectors $\theta_i$ are adjusted based on the \emph{gradients}, i.e. the first-order partial derivatives of the cost w.r.t. each weight. \n\n\subsection{Differential Privacy}\n\label{ssec:rel_dp}\nDP \cite{Dwork.2006Differential} is a mathematical framework that was introduced to provide privacy guarantees for algorithms analyzing databases.\nIts definitions are described for neighboring databases, i.e.
databases that differ in exactly one entry.\nThe $(\epsilon, \delta)$-variant \cite{Dwork.2013Algorithmic} is defined as follows.\n\n\begin{definition}[$(\epsilon,\delta)$-Differential Privacy]\n\label{def:edDP}\nA randomized algorithm $\mathcal{K}$ with domain $\mathbb{N}^{|\mathcal{X}|}$ provides $(\epsilon,\delta)$-DP, if for all neighboring databases $D_1, D_2 \in \mathbb{N}^{|\mathcal{X}|}$ and all $S \subseteq\Ima(\mathcal{K})$\n\begin{align}\n\Pr[\mathcal{K}(D_1)\in S] \leq e^\epsilon \cdot \Pr[\mathcal{K}(D_2)\in S] + \delta \text{.}\n\end{align}\n\end{definition}\n\nThe parameters $\epsilon, \delta > 0$ can be interpreted as the privacy budget and the probability of violating this particular level of privacy, respectively.\nThe budget parameter $\epsilon$ sets an upper bound on possible privacy leaks. \nWith a smaller privacy budget, higher levels of privacy are achieved.\n\n\subsection{Differentially Private Stochastic Gradient Descent}\n\label{ssec:rel_dpsgd}\nIn deep learning, the randomized algorithm from the definition of DP usually refers to the network, while $D_1$ and $D_2$ are training data sets that differ in exactly one data point.\nIn order to integrate privacy as an optimization goal during the training process of deep learning models, DP-SGD \cite{Abadi.10242016Deep} was proposed.\nThe algorithm is an adaptation of the original stochastic gradient descent algorithm \cite{Amari.1993Backpropagation}. \nPrivacy is mainly achieved by two parameters, a noise value $\sigma$ and a gradient clipping bound $C$. \nAfter the gradients are computed with respect to the loss $\mathcal{L}$, each per-example gradient is clipped to an $l_2$ norm of at most $C$ to reduce the influence of single samples.
\nThen, Gaussian noise with zero mean and a variance of $\sigma^2C^2\mathbb{I}$ is added to the clipped gradients in order to protect privacy.\nFor a detailed depiction of the algorithm, see \cite{Abadi.10242016Deep}.\n\n\subsection{Adversarial Examples}\n\label{ssec:rel_adv}\nAccording to Szegedy \emph{et al}\onedot \cite{Szegedy.2014Intriguing}, adversarial examples can be characterized as follows:\nGiven an original sample $x$, the data point $x'$ is an adversarial example if (1) $x' = x + \beta$ for a small value $\beta$, and (2) $f(x) \neq f(x')$.\nHence, while $x'$ differs only slightly from $x$, the attacked model wrongly predicts the class of $x'$.\nTo measure the distance between the benign sample $x$ and its adversarial counterpart $x'$, usually an $l_p$-norm is used.\nIn the literature, several methods have been proposed to generate such adversarial examples, which can be divided into gradient-based, optimization-based, and decision-based methods \cite{Zhang.2021Detecting}.\nA summary of current attack methods can be found in the survey by Ren \emph{et al}\onedot~\cite{Ren.2020Adversarial}.\nIn the following, four widely used methods that were included in the experiments of this work are introduced.\nOther important methods worth mentioning are DeepFool~\cite{MoosaviDezfooli.2016DeepFool} and the One-Pixel-Attack \cite{Su.2019One}.\nThe former explores and approximates the decision boundary of the attacked model by iteratively perturbing the inputs, while the latter changes only a single pixel of the benign image.\n\n\subsubsection{Fast Gradient Sign Method}\n\label{ssec:rel_fgsm}\nThe Fast Gradient Sign Method (FGSM) \cite{Goodfellow.2014Explaining} is a single-step, gradient-based method that relies on backpropagation to find adversarial examples.
\nFor the model parameters $\theta$, input $x$, target $y$, and loss function $\mathcal{L}(\theta,x,y)$, $x'$ is calculated as follows\n\begin{align}\nx' = x + \epsilon \sign(\nabla_x \mathcal{L}(\theta,x,y)) \text{.}\n\end{align}\n\n\subsubsection{Projected Gradient Descent}\n\label{ssec:rel_pgd}\nThe Projected Gradient Descent (PGD) method \cite{Madry.2017Towards} represents a multi-step variant of FGSM.\nGiven a distance norm $p$, $x'_0$ is initialized in an $L_p$ ball around the original sample, iteratively adapted as in FGSM, and if necessary, the perturbation is projected back into the $L_p$ ball.\nStarting with $x'_0$, a step size $\alpha$, and projection $\Pi_{L_p}$, the adversarial example $x'_t$ at iteration $t$ is constructed as follows\n\begin{align}\nx'_t = \Pi_{L_p}\left(x'_{t-1} + \alpha \sign(\nabla_x \mathcal{L}(\theta,x'_{t-1},y))\right) \text{.}\n\end{align}\n\n\subsubsection{Carlini and Wagner}\n\label{ssec:rel_cuw}\nThe optimization-based Carlini and Wagner (CW) method \cite{Carlini.2017Towards} uses a loss function based on the difference between the logit of a target class $t$ and the largest remaining logit.\nGiven a model $f$ with logits $Z$, the CW$_2$ attack can be formulated as follows:\n\begin{align}\n\min \parallel x' - x \parallel_2^2 + c \cdot \mathcal{L}(\theta,x',y)\n\end{align}\nwith the loss function being\n\begin{align}\n\mathcal{L}(\theta,x',y) = \max (\max\{Z(x')_i : i \neq t \} -Z(x')_t, -k ) \text{.}\n\end{align}\nThe constant $c$ is determined by binary search. \n\n\subsubsection{Boundary Attack}\n\label{ssec:rel_boundary}\nThe Boundary Attack (BA$_2$) \cite{Brendel.2017Decision} is a multi-step method that relies solely on the model's decision to generate adversarial examples.\nTo this end, $x'$ is initialized as an adversarial point, i.e\onedot, $f(x)\neq f(x')$.
\nAfterwards, a random walk on the boundary between the adversarial and the non-adversarial region is performed in order to minimize the distance between $x$ and $x'$, while keeping $x'$ adversarial. \nWith the BA$_2$ method, adversaries are able to craft adversarial examples even if the gradients of the attacked NN are not available.\n\n\subsection{Adversarial Transferability}\n\label{ssec:rel_transfer}\nIt has been shown that adversarial examples transfer between models \cite{Goodfellow.2014Explaining}, i.e\onedot, adversarial examples that are generated on one model are often successful in fooling other similar models.\nThe degree of success depends on several factors, among them the complexity of the model on which the adversarial examples are crafted, and the similarity between both models \cite{Demontis.2019Why}.\n\n\subsection{Privacy and Robustness}\n\label{ssec:priv_rob}\nResearch suggests that the goals of achieving privacy and robustness in ML models are not always in line.\nIt was found that adversarial retraining, i.e\onedot, retraining a model with adversarial examples, increases membership privacy risks \cite{Song.2019Membership, Mejia.2019Robust}. \nWith decreased membership privacy, it might be easier for an attacker to determine whether a specific data point has been used for model training \cite{Shokri.2017Membership}.\nThis negative impact of adversarial retraining on models' privacy can be explained by overfitting.
\nThe retraining might tie the training data points more strongly to the model \cite{Hayes.2020Provable}, leading to a decrease of membership privacy \cite{Yeom.2018Privacy}.\n\nExamined from the opposite perspective, it was shown that the noise of DP training can be exploited to craft adversarial examples more successfully \cite{Giraldo.2020Adversarial}.\nIn the work closest to the present one, Tursynbek \emph{et al}\onedot \cite{Tursynbek.2020Robustness} observed that DP training can have a negative influence on adversarial robustness. \nYet, the authors identified some DP parameters that give the impression of improved robustness against gradient-based adversarial example crafting methods.\n\nSome research has already been dedicated to aligning model privacy and robustness.\nPinot \emph{et al}\onedot \cite{Pinot.2019unified} showed that model robustness and DP share similarities in their goals.\nFurthermore, several mechanisms to integrate (provable) adversarial robustness into DP training have been proposed \cite{Phan.2019Heterogeneous, Phan.2019Preserving, Phan.2019Scalable, Ibitoye.2021DiPSeN}.\n\n\subsection{Gradient Masking}\n\label{ssec:rel_grad}\nThe term \emph{gradient masking} \cite{Papernot.2017Practical} refers to adversarial defence methods intentionally or unintentionally reducing the usefulness of the protected model's gradients.\nThereby, gradient-based adversarial attacks are rendered less successful.\nAs the models are often still vulnerable to non-gradient-based attacks, gradient masking results in a false sense of robustness.\nThe term \emph{gradient obfuscation} \cite{Athalye.2018Obfuscated} describes a specific case of gradient masking, in which defenses against adversarial attacks are designed such that they cause gradient masking.\nHence, the resulting level of robustness provided by such methods is often overestimated.\nAthalye \emph{et al}\onedot \cite{Athalye.2018Obfuscated} identified three types of gradients
with reduced utility:\n\begin{enumerate}\n \item \emph{Shattered gradients} are non-existent or incorrect gradients caused either intentionally through the use of operations that are non-differentiable or unintentionally through numerical instabilities.\n \item \emph{Stochastic gradients} are gradients that depend on test-time randomness.\n \item \emph{Vanishing or exploding gradients} are gradients that are too small or too large to be useful for the computations performed during attacks.\n\end{enumerate}\n\nFor models, potentially in combination with some defense method, the following properties indicate sane and unmasked gradients \cite{Athalye.2018Obfuscated, Carlini.2019On}:\n\begin{enumerate}\n \item Iterative attacks perform better than one-step attacks.\n \item White-box attacks perform better than black-box attacks.\n \item Gradient-based attacks perform better than gradient-free attacks.\n \item Unbounded attacks should reach a 100\% adversarial success rate.\n \item Increasing the iterations within an attack should increase the adversarial success rate.\n \item Increasing the distortion bound on the adversarial examples should increase the adversarial success rate.\n \item White-box attacks perform better than transferability attacks using a similar substitute model.\n\end{enumerate}\nViolation of these criteria can be an indicator of masked gradients, either within the model itself or due to the introduced defense method.\n\n\section{Method and Experimental Setup}\n\label{sec:setup}\nIn this paper, the intersection of privacy and security in DNNs is investigated.\nMore specifically, the impact of using a state-of-the-art training method to increase ML models' privacy levels, namely the DP-SGD optimizer, on model robustness is examined.
\nPrevious work suggests that using this optimizer with certain parameters might have positive effects on the robustness of the trained DNNs \cite{Tursynbek.2020Robustness}.\nThe experiments of this work show that this first indication of increased robustness may be due to masked gradients.\nTo validate this hypothesis, extensive experiments using two model architectures and four attack methods were conducted.\nAs can be seen in previous guidelines on the evaluation of adversarial robustness \cite{Athalye.2018Obfuscated, Carlini.2019On}, the selection of attack methods heavily influences the perception of robustness.\nTherefore, in the following, the experimental setup used to shed more light on the robustness of privately trained DNNs is summarized.\n\nThe experiments in this paper were conducted using the MNIST dataset \cite{LeCun.2010MNIST}, consisting of \num{60000} training and \num{10000} test gray-scale images of size $28 \times 28$ pixels.\n\nTwo different model architectures were evaluated: an adaptation of the \emph{LeNet} architecture \cite{LeCun.1989Backpropagation} (1), and a \emph{custom} conv-net architecture (2), both depicted in Table \ref{tab:architectures}.\nThe networks were implemented using TensorFlow \cite{Abadi.2016TensorFlow} version 2.4.1 and trained for 50 epochs using a learning rate of 0.05 and a batch size of 250.\nStochastic gradient descent and TensorFlow Privacy's \cite{Googleetal..2018TensorFlow} implementation of DP-SGD were used as optimizers to train the models without and with privacy, respectively.\nThe privacy parameters $\sigma$ and $C$ of the DP-SGD were specified during the experiments.
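The role these two parameters play in each update step can be illustrated with a schematic pure-Python version of the clip-and-noise computation (a sketch of the mechanism only, not the TensorFlow Privacy implementation; all names are hypothetical):

```python
import math
import random

def dp_sgd_gradient(per_example_grads, C, sigma, rng):
    """Schematic DP-SGD step: clip each per-example gradient to l2 norm
    at most C, sum the clipped gradients, add Gaussian noise with standard
    deviation sigma * C, and average over the batch."""
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for grad in per_example_grads:
        norm = math.sqrt(sum(g * g for g in grad))
        scale = min(1.0, C / norm) if norm > 0 else 1.0   # clipping factor
        for j in range(dim):
            summed[j] += grad[j] * scale
    noisy = [v + rng.gauss(0.0, sigma * C) for v in summed]  # privacy noise
    return [v / len(per_example_grads) for v in noisy]

rng = random.Random(0)
grads = [[3.0, 4.0], [0.0, 0.0]]  # one gradient of l2 norm 5, one zero gradient
update = dp_sgd_gradient(grads, C=1.0, sigma=0.0, rng=rng)  # noise disabled for illustration
```

A larger $C$ lets individual samples contribute more to the update but, since the noise scales with $\sigma C$, also injects more noise in absolute terms.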
\nIn accordance to \\cite{Tursynbek.2020Robustness} the value ranges were $\\sigma \\in \\{0,1.3,2,3\\}$ and $C\\in \\{1,3,5,10\\}$.\n\nThe adversarial examples were generated using the Foolbox framework \\cite{Rauber.2017Foolbox}.\nAs shown in \\Cref{ssec:rel_adv} PGD$_{\\infty}$, CW$_{2}$, and BA$_{2}$ were used as examples of a multi-step gradient-based, multi-step optimization-based, and a multi-step gradient-free attack, respectively. \nFor every experiment, 1000 adversarial examples were generated based on 1000 random test data points that were predicted correctly by the model under attack.\n\n\\begin{table*}[t]\n \\centering\n \\begin{tabular}{cc}\n \\toprule\n LeNet architecture & Custom architecture \\\\\n \\midrule \n Conv(f=6, k=(3,3), s=1, p=valid, act=relu) & Conv(f=16, k=(8,8), s=2, p=same, act=relu)\\\\\n 2D Average Pooling(pool size=(2,2), s=1, p=valid) & 2D Max Pooling(pool size=(2,2), s=1, p=valid)\\\\\n Conv(f=16, k=(3,3), s=1, p=valid, act=relu) & Conv(f=32, k=(4,4), s=2, p=valid, act=relu)\\\\\n 2D Average Pooling(pool size=(2,2), s=1, p=valid) & 2D Max Pooling(pool size=(2,2), s=1, p=valid)\\\\\n Flatten & Flatten\\\\\n Dense(n=120, act=relu) & Dense(n=32, act=relu)\\\\\n Dense(n=84, act=relu) & Dense(n=10, act=None)\\\\\n Dense(n=10, act=None) & \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Architectures of the models used in the experiments. f: number of filters, k: kernel size, s: stride, p: padding act: activation function, n: number of neurons.}\n \\label{tab:architectures}\n\\end{table*}\n\n\\section{Experimental Robustness Evaluation of DP models}\n\\label{sec:robustness}\nThis section describes the results of the robustness evaluation on different DP models for the respective attacks.\n\n\\subsection{Robustness Evaluation with the PGD$_{\\infty}$ Attack}\n\\label{ssec:results_linfpgd}\n\n\\paragraph{Perturbation-based Analysis}\nIn the first experiment, both the custom and the LeNet model were attacked using PGD$_{\\infty}$. 
\nAs this method creates bounded adversarial examples, the robustness was quantified using the adversarial success rate as a function of the maximal adversarial perturbation budget $\epsilon$.\nThe perturbation budget was increased successively from $0.0$ to $0.5$ in steps of size $0.025$ for a fixed number of 40 attack iterations.\nAt every perturbation value, the adversarial success rate was measured.\nHigher success rates for the same perturbation budget indicate lower model robustness.\nSee Figure \ref{fig:exp01} for the results. \nFor both model architectures and all privacy parameter combinations, an increase of the perturbation budget $\epsilon$ resulted in an increase of the success rate. \n\nFor the custom architecture, the DP models with noise $\sigma=1.3$ and clip norm $C=1$ or $C=3$ achieve higher or similar levels of robustness compared to the non-private baseline model.\nIn contrast, models with higher clip norms can be attacked more successfully (see Figure \ref{fig:exp01_custom}).\nThe observation that some DP parameter combinations might be beneficial for robustness is in line with the findings by Tursynbek \emph{et al}\onedot~\cite{Tursynbek.2020Robustness} on their custom architecture.\nYet, this behavior cannot be observed for the LeNet model (see Figure \ref{fig:exp01_lenet}).\nIn this experiment, the adversarial success rate when attacking the non-private baseline model is lower than the success rate observed for every DP parameter combination. \n\nSimilar to Tursynbek \emph{et al}\onedot~\cite{Tursynbek.2020Robustness}, a plateau-like stagnation of the success rate can be observed for the custom model for combinations of higher noise, $\sigma=2$ or $\sigma=3$, with a high clip norm of $C=10$.\nThis might be interpreted as a robustness improvement using these settings.
\nHowever, in the LeNet model, no similar behavior can be observed.\nFigure \ref{fig:exp01_lenet} (right) instead suggests that for $C=10$, the higher the amount of noise, the lower the model robustness.\n\nFor both model architectures, the success rates of different noise values in combination with a small clip norm of $C=1$ are very similar.\nWhile for the custom architecture, the attacks achieve a higher success rate for the non-DP model, for the LeNet models, the opposite applies.\n\n\paragraph{Step-based Analysis}\nIn the second experiment, the success rate of PGD$_{\infty}$ when attacking both architectures was evaluated depending on the number of iterations, or so-called \emph{steps}, for a fixed $\epsilon$ of \num{0.3}. \nThe results suggest that, in general, the success rate increases with increasing numbers of iterations and finally reaches 100\% (see Figure \ref{fig:exp03}). \nHowever, in the custom model with $\sigma=2$ or $\sigma=3$ and $C=10$, the success rate reaches a plateau after ${\sim}10$ iterations and does not reach 100\%.\nIn contrast, for the LeNet architecture, no plateaus are reached, and the non-DP model exhibits the highest robustness when a sufficient number of attack steps is performed.\n\nIn summary, the experiments presented above confirm the findings by Tursynbek \emph{et al}\onedot \cite{Tursynbek.2020Robustness}.\nAdversaries using PGD$_{\infty}$ cannot successfully attack their custom and privately trained model with a success rate of 100\%.\nBased on this finding, Tursynbek~\emph{et al}\onedot concluded that using the DP-SGD optimizer during training simultaneously improves privacy as well as robustness.\nYet, the experiments presented in this section using the LeNet model already suggest that this finding does not generalize to other architectures.\nFurthermore, in the next section, additional evidence will be presented showing that the evaluated private models do not show an increased level of
robustness compared to their normally trained counterparts.\n\n\\begin{figure*}\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\centering\n \\input{imgs\/exp01-epsilon_custom.pgf}\n \\caption{Custom architecture.}\n \\label{fig:exp01_custom}\n \\end{subfigure}\n \\vfill\n \\begin{subfigure}[b]{\\textwidth}\n \\centering\n \\input{imgs\/exp01-epsilon_lenet.pgf}\n \\caption{LeNet architecture.}\n \\label{fig:exp01_lenet}\n \\end{subfigure}\n \\caption{PGD$_{\\infty}$: adversarial success rate plotted against adversarial perturbation $\\epsilon$.}\n \\label{fig:exp01}\n\\end{figure*}\n\n\n\\begin{figure*}\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\centering\n \\input{imgs\/exp03-epsilon_custom.pgf}\n \\caption{Custom architecture.}\n \\label{fig:exp03_custom}\n \\end{subfigure}\n \\vfill\n \\begin{subfigure}[b]{\\textwidth}\n \\centering\n \\input{imgs\/exp03-epsilon_lenet.pgf}\n \\caption{LeNet architecture.}\n \\label{fig:exp03_lenet}\n \\end{subfigure}\n \\caption{PGD$_{\\infty}$: adversarial success rate plotted against number of iterations.}\n \\label{fig:exp03}\n\\end{figure*}\n\n\n\\subsection{Robustness Evaluation with the BA$_2$ Attack}\n\\label{ssec:results_boundary}\nTo further investigate the impact of DP-SGD training on model robustness, the gradient-free BA$_{2}$ was used to attack the DP models. \nTable \\ref{tab:success_boundary} depicts the results for adversarial perturbations of $\\epsilon=1$, and $\\epsilon=2$ against the custom and LeNet architectures.\nThe attack was executed with \\num{25000} iterations. 
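Conceptually, each of these iterations performs a decision-based random-walk step, which can be sketched as follows (a toy pure-Python illustration with a hypothetical linear decision function, not the actual BA$_2$ implementation):

```python
import math
import random

def boundary_walk(x, x_start, is_adversarial, steps, rng, step_size=0.05):
    """Schematic decision-based attack: propose small random perturbations
    biased towards the original x and keep them only while the candidate
    remains adversarial. Only the model's decision is used, no gradients."""
    current = list(x_start)
    for _ in range(steps):
        proposal = [c + step_size * (rng.gauss(0.0, 1.0) + (xi - c))
                    for c, xi in zip(current, x)]
        if is_adversarial(proposal):
            current = proposal
    return current

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# Toy "model": a point is adversarial iff its first coordinate exceeds 0.5,
# so the decision boundary is the plane x[0] = 0.5.
rng = random.Random(0)
x = [0.0, 0.0]          # benign original
x_start = [2.0, 0.0]    # initial adversarial point
adv = boundary_walk(x, x_start, lambda p: p[0] > 0.5, steps=500, rng=rng)
```

The walk drifts towards $x$ until it hovers near the decision boundary, shrinking the distance while the candidate stays adversarial.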
\nFor more iterations, no increase in success rates could be observed.\n\nFor both model architectures, the results suggest that increasing the clip value, or increasing the amount of noise at a high clip value, leads to increased adversarial vulnerability.\nThe models with $\sigma=2$ or $\sigma=3$ and $C=10$, which reached a plateau in the adversarial success rate for PGD$_{\infty}$, are the most vulnerable to BA$_{2}$ among all settings, for both model architectures. \nOnly for the parameter combinations with $C=1$ or $\sigma=0$ does the BA$_{2}$ attack with perturbation $\epsilon=2$ reach a lower success rate than for the non-DP baselines.\nHowever, in general, the success rates on the DP models are higher than on the non-DP ones.\n\nIn accordance with the findings of the previous section, the experiments here again show that DP models are generally not more robust than their normally trained counterparts.\nThe experiments even show that for certain parameter combinations, the DP models are attacked even more easily using the BA$_{2}$ method.\n\n\subsection{Robustness Evaluation with CW$_{2}$ Attack}\n\label{ssec:results_cw}\n\nIn the final experiment, the optimization-based CW$_{2}$ attack was used.\nThis method generates adversarial examples in an unbounded manner.\nHence, to determine the robustness towards CW$_{2}$, the adversarial perturbation needed to achieve a 100\% adversarial success rate is measured.\nTable~\ref{tab:cw_epsilon_rates} depicts the results of the experiments after \num{10000} attack iterations.\n\nThe values suggest that with increasing noise, or with increasing clip norm, the amount of perturbation needed to achieve a 100\% success rate decreases. \nThis indicates decreased adversarial robustness.
\nFor the LeNet architecture, the CW$_{2}$ attack requires smaller perturbations on all DP models than on the non-private baseline models.\nThe required perturbation budget decreases monotonically when increasing the clip value or the noise.\nInterestingly, the custom architecture with $C=1$, or with $\sigma=0, C=10$, requires a higher perturbation than the non-private setting. \nAlso, when setting the clip value to $C=10$ and varying the amount of noise, $\sigma=3$ requires a higher perturbation than $\sigma=2$ or $\sigma=1.3$; hence, no monotonic decrease could be observed.\n\nThis experiment again underlines that DP models do not generally exhibit a higher level of robustness.\nFirst, the CW$_{2}$ attack is capable of generating adversarial examples with a success rate of 100\%.\nSecond, the perturbation budget required to generate the adversarial examples is in the majority of cases smaller for the DP models than for the normally trained ones.\n\n\begin{table}[t]\n    \centering\n    \begin{tabular}{lcc}\n    \toprule\n    Parameters & LeNet & Custom \\\n    \midrule \n    SGD & $\epsilon=1.21$ & $\epsilon=1.20$\\\n    \midrule \n    $\sigma=1.3, C=1$ & $\epsilon=1.06$ & $\epsilon=1.37$\\ \n    $\sigma=1.3, C=3$ & $\epsilon=1.03$ & $\epsilon=1.17$\\ \n    $\sigma=1.3, C=5$ & $\epsilon=0.93$ & $\epsilon=0.80$\\ \n    $\sigma=1.3, C=10$ & $\epsilon=0.83$ & $\epsilon=0.45$ \\ \n    \midrule \n    $\sigma=0, C=1$ & $\epsilon=1.11$ & $\epsilon=1.41$\\ \n    $\sigma=1.3, C=1$ & $\epsilon=1.06$ & $\epsilon=1.37$\\ \n    $\sigma=2, C=1$ & $\epsilon=1.04$ & $\epsilon=1.31$ \\ \n    $\sigma=3, C=1$ & $\epsilon=1.01$ & $\epsilon=1.24$\\ \n    \midrule \n    $\sigma=0, C=10$ & $\epsilon=1.12$ & $\epsilon=1.38$\\ \n    $\sigma=1.3, C=10$ & $\epsilon=0.83$ & $\epsilon=0.45$\\ \n    $\sigma=2, C=10$ & $\epsilon=0.63$ & $\epsilon=0.29$\\ \n    $\sigma=3, C=10$ & $\epsilon=0.15$ & $\epsilon=0.50$ \\ \n    \bottomrule\n    \end{tabular}\n    \caption{Adversarial 
perturbation $\epsilon$ required to achieve a 100\% adversarial success rate within \num{10000} iterations of the Carlini and Wagner attack, rounded to two decimal places.}\n    \label{tab:cw_epsilon_rates}\n\end{table}\n\n\begin{table}[t]\n    \centering\n    \begin{tabular}{lcccc}\n    \toprule\n    Parameters & \multicolumn{2}{c}{LeNet} & \multicolumn{2}{c}{Custom}\\\n     & $\epsilon=1$ & $\epsilon=2$ & $\epsilon=1$ & $\epsilon=2$\\\n    \midrule \n    SGD & 27.5\% & 79.4\% & 19.1\% & 83.2\%\\\n    \midrule \n    $\sigma=1.3, C=1$ & 38.5\% & 75.7\% & 21.6\% & 70.9\%\\ \n    $\sigma=1.3, C=3$ & 41.3\% & 81.3\% & 26.6\% & 77.6\%\\ \n    $\sigma=1.3, C=5$ & 49.7\% & 82.1\% & 51.3\% & 94.2\%\\ \n    $\sigma=1.3, C=10$ & 57.2\% & 89.3\% & 86.9\% & 99.9\%\\ \n    \midrule \n    $\sigma=0, C=1$ & 38.0\% & 71.3\% & 19.5\% & 65.2\%\\ \n    $\sigma=1.3, C=1$ & 38.5\% & 75.7\% & 21.6\% & 70.9\%\\ \n    $\sigma=2, C=1$ & 36.3\% & 76.7\% & 22.6\% & 72.4\%\\ \n    $\sigma=3, C=1$ & 40.9\% & 76.3\% & 26.0\% & 74.1\%\\ \n    \midrule \n    $\sigma=0, C=10$ & 41.2\% & 77.9\% & 17.0\% & 71.7\%\\ \n    $\sigma=1.3, C=10$ & 57.2\% & 89.3\% & 86.9\% & 99.9\%\\ \n    $\sigma=2, C=10$ & 65.2\% & 94.7\% & 98.3\% & 100.0\%\\ \n    $\sigma=3, C=10$ & 91.5\% & 91.8\% & 93.6\% & 100.0\%\\ \n    \bottomrule\n    \end{tabular}\n    \caption{Success rates of BA$_2$ with different perturbation values on both model architectures.}\n    \label{tab:success_boundary}\n\end{table}\n\n\n\section{DP and Transferability}\n\label{sec:transferability}\n\begin{figure}\n    \n    \n    \import{imgs\/}{advs.pgf}\n    \caption{Adversarial samples generated with CW$_{2}$, BA$_2$, and PGD$_{\infty}$ on the custom architecture models with different privacy settings (from left to right): SGD; $\sigma=1.3, C=1$; $\sigma=3, C=1$; $\sigma=3, C=10$.}\n    \label{fig:adv_samples}\n\end{figure}\n\n\begin{figure*}\n    \centering\n    \begin{subfigure}[b]{0.49\textwidth}\n         \centering\n         
\input{imgs\/transfer_cw.pgf}\n         \caption{CW$_{2}$ attack. For perturbation values per model see Table \ref{tab:cw_epsilon_rates}.}\n         \label{fig:adv_samples_trans_1}\n     \end{subfigure}\n     \begin{subfigure}[b]{0.49\textwidth}\n         \centering\n         \input{imgs\/transfer_linf.pgf}\n         \caption{PGD$_{\infty}$ attack. Perturbation $\epsilon=0.3$ and 40 attack iterations.}\n         \label{fig:adv_samples_trans_2}\n     \end{subfigure}\n        \caption{Transferability of generated adversarial examples between custom models. \n        Adversarial examples were generated on models with the settings depicted in the rows and evaluated against models with the settings depicted in the columns.\n        High success rates indicate that the adversarial examples transfer well; low success rates indicate low transferability.}\n        \label{fig:adv_samples_trans}\n\end{figure*}\n\nIn addition to evaluating the success of directly attacking the models in a white-box setting, transferability attacks were conducted.\nIn this way, the possibility of transferring adversarial examples between DP models with different parameters, or between private and non-private models, was quantified (see Figure \ref{fig:adv_samples_trans}).\n\n\paragraph{CW$_{2}$-Transferability}\nIn the first part of the experiment, the transferability of adversarial examples generated with the CW$_{2}$ attack was evaluated.\nFor this purpose, for each of the models under attack, 1000 correctly classified test samples were first chosen randomly.\nThen, the CW$_2$ attack was used to craft adversarial examples with a 100\% success rate on the respective surrogate models.\nFinally, the original model's accuracy on the generated adversarial examples was measured to determine the success rate of the attack.\nFigure~\ref{fig:adv_samples_trans_1} depicts the results for the custom models.\nResults for the LeNet models look similar and are, therefore, not shown here.\n\nThe transferability of adversarial examples created with the CW$_2$ attack might be influenced by the 
different levels of applied perturbations:\nThe perturbation budgets that lead to a 100\% success rate in the CW$_2$ attack vary between models and tend to be lower for DP models (see Table~\ref{tab:cw_epsilon_rates}).\n\n\paragraph{PGD$_{\infty}$-Transferability}\nTherefore, in the second experiment, the transferability of adversarial examples generated with PGD$_{\infty}$ and a constant perturbation of $\epsilon=0.3$ was assessed.\nThe procedure for determining the success rates was the same as for the CW$_2$ attack.\nFigure~\ref{fig:adv_samples_trans_2} summarizes the results of this test on the custom models.\nAgain, results on the LeNet models were similar and are, therefore, not shown here.\n\nThe experiments suggest that for PGD$_{\infty}$, the adversarial examples transfer significantly better than for CW$_2$.\nStill, the same trends can be observed in both scenarios:\nAdversarial examples seem to transfer less well from DP to non-DP models than the other way round.\nAdditionally, adversarial examples generated on models with higher clip norms seem to transfer less well to other models than adversarial examples generated on models with lower clip norms.\nThe adversarial examples generated on DP models with smaller clip norms ($C=1$ and $C=3$) transfer better to other DP models than the adversarial examples generated on the non-private baseline models.\nAlso, models trained with higher clip norms exhibit an increased vulnerability to transferability attacks in comparison to models with lower clip norms.\n\nThese results suggest that transfer attacks between models with different privacy settings are indeed successful.\nAgain, the models that caused plateaus in the success rate when PGD$_{\infty}$ was executed directly against them seem to be the most vulnerable ones, according to this experiment.\nThis is another indication that their apparent robustness is just a consequence of low-utility gradients. 
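The transfer success rates reported in Figure~\ref{fig:adv_samples_trans} reduce to a simple counting procedure. A schematic sketch follows (`target_predict` is a hypothetical stand-in for the target model's label function, and the sample-selection step may differ in detail from the one described above):

```python
import numpy as np

def transfer_success_rate(target_predict, x_clean, x_adv, y_true):
    """Success rate of adversarial examples crafted on a surrogate model
    when evaluated against a target model. Only samples that the target
    classifies correctly in their clean version are counted."""
    clean_correct = target_predict(x_clean) == y_true
    fooled = target_predict(x_adv) != y_true
    return (clean_correct & fooled).sum() / max(clean_correct.sum(), 1)
```

High values indicate that adversarial examples transfer well from the surrogate to the target; low values indicate low transferability.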
\nAnother interesting observation is the fact that adversarial examples tend to transfer better between DP models than from non-DP to DP models.\n\nTo investigate this effect further, adversarial examples created with the three considered attack methods on models with different privacy settings were visually examined (see Figure~\ref{fig:adv_samples}). \nThe first column of the figures shows adversarial examples generated for the non-private baseline model.\nThe results of the previous section and a visual inspection of the samples generated by CW$_2$ and BA$_2$ suggest that the more private the models are, the smaller the perturbation budget required to fool the model with a 100\% success rate.\n\nThe adversarial examples displayed for the PGD$_{\infty}$ method all have the same perturbation budget of $\epsilon=0.3$. \nInterestingly, the generated samples look significantly different between DP and non-DP models. \nWhereas for the non-DP model the artifacts introduced into the images are grouped into regions, with increasing privacy they increasingly resemble random noise and are no longer grouped into regions. \nThe findings of this visual inspection are also reflected in the evaluation of the transferability experiments.\n\n\n\section{DP and Gradient Masking}\n\label{sec:obfuscation}\n\n\begin{figure}\n    \centering\n    \n    \input{imgs\/exp09-all_layer6.pgf}\n\n    \caption{Distribution of the gradient magnitudes at the models' first dense layer. 
Upper row: baseline models without privacy; lower rows: custom models with DP.}\n    \label{fig:gradients}\n\end{figure}\n\n\begin{figure}\n    \centering\n    \begin{subfigure}[b]{0.45\textwidth}\n         \centering\n         \n         \input{imgs\/exp09-gradientszero_dprob_n1.3.pgf}\n         \caption{Custom architecture.}\n         \label{fig:zero_gradients_1}\n     \end{subfigure}\n     \begin{subfigure}[b]{0.45\textwidth}\n         \centering\n         \input{imgs\/exp09-gradientszero_lenet_n1.3.pgf}\n         \caption{LeNet architecture.}\n         \label{fig:zero_gradients_2}\n     \end{subfigure}\n     \n        \caption{Percentage of zero gradients over all model layers.}\n        \label{fig:zero_gradients}\n\end{figure}\n\nThe results presented in \Cref{sec:robustness} and \Cref{sec:transferability} suggest that DP models may not be generally more robust than non-private ones.\nEven though the PGD$_{\infty}$ attack did not yield a 100\% success rate, adversaries using CW$_{2}$ and BA$_2$ were able to fool the evaluated models.\nThe fact that the gradient-based PGD$_{\infty}$ attack did not achieve a 100\% success rate for certain settings, whereas the other attacks did, suggests instabilities of the models' gradients.\nTherefore, two properties of the gradients, namely their magnitudes and the percentage of zero gradients, were investigated.\n\nFigure~\ref{fig:gradients} displays the gradient magnitudes for both model architectures after the first dense layer for a test data batch.\nWhile the magnitudes of the non-DP models' gradients range from -10 to 10, the range for the custom-architecture DP models spans from -200 to 200.\nWith an increasing clip value, the magnitude of the gradients increases.\nThe same trend, though less prominent, can be observed for the LeNet models.\n\nThe percentage of zero gradients per model layer is depicted in Figure~\ref{fig:zero_gradients}.\nFor DP models, the proportion of gradients with zero magnitude is higher than for non-private models.\nFor the custom architecture, an increase of 
this proportion with increasing clip norms can be observed.\nIn the LeNet models, there are no such significant differences between the privacy parameters.\n\nThe observed properties of the gradients in the DP models might explain why PGD$_{\infty}$ did not achieve a 100\% success rate on the models with high noise and high clip norms. \nDue to the masked gradients and potential numerical instabilities during the calculations, no useful information required for a successful attack was available.\n\n\begin{table}[t]\n    \centering\n    \begin{tabular}{lcc}\n    \toprule\n    Model & Parameters & Adv. Perturbation \\\n    \midrule \n    bs1 (Custom) & SGD, $epochs=50$ & $\epsilon=1.20$\\ \n    bs2 (LeNet) & SGD, $epochs=50$ & $\epsilon=1.21$\\ \n    m1 (Custom) & $\sigma=2, C=5, epochs=50$ & $\epsilon=1.34$\\ \n    m2 (Custom) & $\sigma=2, C=6, epochs=40$ & $\epsilon=1.64$ \\ \n    m3 (Custom) & $\sigma=2, C=7, epochs=20$ & $\epsilon=1.82$ \\\n    \bottomrule\n    \end{tabular}\n    \caption{Models with masked gradients and the adversarial perturbation required to achieve a 100\% success rate with the CW$_2$ attack and \num{20000} attack iterations.}\n    \label{tab:settings_obfuscated_models}\n\end{table}\n\n\n\begin{table}[t]\n    \centering\n    \begin{tabular}{lccc}\n    \toprule\n    Model & Parameters & \multicolumn{2}{c}{Success Rate} \\\n     & & $\epsilon=1$ & $\epsilon=2$\\\n    \midrule \n    bs1 (Custom) & SGD & 19.1\% & 83.2\%\\ \n    bs2 (LeNet) & SGD & 27.5\% & 79.4\%\\ \n    m1 (Custom) & $\sigma=2, C=5, epochs=50$ & 95.3\% & 99.7\%\\ \n    m2 (Custom) & $\sigma=2, C=6, epochs=40$ & 80.7\% & 95.2\%\\ \n    m3 (Custom) & $\sigma=2, C=7, epochs=20$ & 82.7\% & 99.8\%\\\n    \bottomrule\n    \end{tabular}\n    \caption{Success rates of BA$_2$ with \num{25000} attack iterations and different perturbation values on models with masked gradients.}\n    \label{tab:boundary_obfuscated_models}\n\end{table}\n\nFurther investigations even suggest that it is possible to 
intentionally choose the DP settings in a way that the models still achieve a relatively high accuracy but cause gradient masking according to the criteria presented in Section~\ref{ssec:rel_grad}.\nIn general, gradient masking in DP seems to be due to an unfavorable combination of noise, high clip norms, and the model architecture. \nThe higher the clip norm, the less noise or the fewer training epochs are needed to cause masked gradients in certain architectures. \nIn the following, three such exemplary models are depicted and analyzed according to the aforementioned criteria. \nSee Table~\ref{tab:settings_obfuscated_models} for their privacy parameter settings and training epochs.\n\n\n\begin{figure*}\n    \centering\n    \begin{subfigure}[b]{0.45\textwidth}\n         \centering\n         \n         \input{imgs\/exp01-obfuscated.pgf}\n         \caption{PGD$_{\infty}$: adversarial success rate plotted against adversarial perturbation $\epsilon$.}\n         \label{fig:dpobf_exp_1}\n     \end{subfigure}\n     \begin{subfigure}[b]{0.45\textwidth}\n         \centering\n         \input{imgs\/exp03-obfuscated.pgf}\n         \caption{PGD$_{\infty}$: adversarial success rate plotted against number of iterations.}\n         \label{fig:dpobf_exp_2}\n     \end{subfigure}\n        \caption{PGD$_{\infty}$ and BA$_2$ applied to the three DP models depicted in Table~\ref{tab:settings_obfuscated_models}.}\n        \label{fig:dpobf_exp}\n\end{figure*}\n\n\nFor these models, increasing the number of attack steps or increasing the adversarial perturbation does not yield an increase in the success rate after a certain plateau is reached. \nSee Figure~\ref{fig:dpobf_exp} for the experimental results. \nThis can be an indication of masked gradients according to criteria (5) and (6).\n\nFurthermore, the black-box and gradient-free BA$_2$ achieves similar or even higher success rates than the white-box and gradient-based PGD$_{\infty}$ method (see Table~\ref{tab:boundary_obfuscated_models}). 
\nAccording to criteria (2) and (3), this might be another indicator of masked gradients.\n\nFinally, transferability attacks using CW$_{2}$ with \num{20000} steps and the adversarial perturbation values depicted in Table~\ref{tab:settings_obfuscated_models} were conducted.\nSee Figure~\ref{fig:dpobf_trans} for the results.\nThe transferability among the models is significantly higher than the transferability of the models studied in Section~\ref{sec:transferability} (see Figure~\ref{fig:adv_samples_trans_1}).\nTherefore, following criterion (7), the high robustness implied by the low success rate of the PGD$_{\infty}$ attack (see Figures~\ref{fig:dpobf_exp_1} and \ref{fig:dpobf_exp_2}) might be due to gradient masking.\n\n\nTo investigate whether unbounded attacks reach a 100\% adversarial success rate, the CW$_{2}$ attack was conducted.\nAfter \num{20000} iterations and with the perturbation budgets depicted in Table~\ref{tab:settings_obfuscated_models}, the attack achieved a 100\% success rate for each of the three models.\nTherefore, criterion (4) is met.\nInterestingly, several of the generated samples for model m3 are entirely black.\nCarlini \emph{et al}\onedot~\cite{Carlini.2019On} suggest that this kind of behavior might occur when the only possibility to successfully fool a model is to actually turn an instance into one of another class.\n\n\n\n\begin{figure}\n    \centering\n    \input{imgs\/transfer_cw_obf.pgf}\n    \caption{CW$_{2}$ transferability between the baseline models and the models with masked gradients.}\n    \label{fig:dpobf_trans}\n\end{figure}\n\nLooking into the gradients of the three models reveals that the majority of them are zero (see Figure~\ref{fig:dpobf_zero_gradients_perc}), but that the magnitudes of the remaining gradients are extremely high (see Figure \ref{fig:dpobf_zero_gradients}).\nThis is another indicator of reduced gradient usefulness due to gradient masking and might explain why crafting adversarial examples based on the model gradients yields 
little success.\n\n\begin{figure}\n    \centering\n    \n    \input{imgs\/exp09-gradientszero_obfuscated.pgf}\n    \caption{Zero gradients over all model layers.}\n    \label{fig:dpobf_zero_gradients_perc}\n\end{figure}\n\n\begin{figure*}\n    \centering\n    \n    \input{imgs\/exp09-obfuscated_layer6.pgf}\n    \caption{Gradient magnitudes at the models' first dense layer.}\n    \label{fig:dpobf_zero_gradients}\n\end{figure*}\n\n\n\n\section{Discussion and Outlook}\n\label{sec:discussion}\n\nThe experiments in this paper have shown that DP models exhibit an increased adversarial vulnerability in comparison to non-DP models.\nFurthermore, the observations by Tursynbek \emph{et al}\onedot~\cite{Tursynbek.2020Robustness}, namely that some DP parameter combinations with high clip norms yield high robustness, are likely due to gradient masking. \n\nThere are several possible explanations for the differences in robustness between the DP and the baseline models.\nAccording to Demontis \emph{et al}\onedot~\cite{Demontis.2019Why}, the larger the gradients in a target model, the larger the impact of an adversarial attack.\nAs the experiments have shown, DP models' gradients are much larger than normal models' gradients, potentially explaining their increased vulnerability. \nTo counteract this factor, it might be helpful to regularize the gradients in DP model training.\n\nAnother reason for the increased vulnerability of DP models might be their decision boundaries.\nTursynbek \emph{et al}\onedot~\cite{Tursynbek.2020Robustness} show that training with DP-SGD affects the underlying geometry of the decision boundaries.\nTheir example shows that DP-SGD training results in more fragmented and smaller decision regions. 
This increases the chances of generating an adversarial example with less perturbation.\n\nIn a similar vein, Demontis \emph{et al}\onedot~\cite{Demontis.2019Why} suggest that the loss surface of a model has an influence on the robustness against adversarial examples.\nThey state that if the landscape of a model is very variable, it is much more likely that slight changes to data points will encourage a change in the local optima of the corresponding optimization problem.\nAs a consequence, the authors conclude that attack points might not transfer correctly to another model with a potentially more stable landscape.\nThe experiments of this work show that adversarial examples generated on DP models transfer less well to normal models than the other way round, and that adversarial examples crafted on models with higher clip norms transfer less well than those from models with lower clip norms.\nThis might also be due to the models' loss landscapes.\nFuture work could, therefore, investigate the loss surface of the DP models more thoroughly.\n\nPrevious results by Papernot \emph{et al}\onedot~\cite{Papernot.2020Tempered} suggest that applying DP in combination with standard ReLU functions might lead to exploding activations in the resulting models.\nThe authors suggest using sigmoid activation functions to counteract this effect and thereby improve the training process and achieve higher accuracy scores.\nIn future work, it would be interesting to investigate whether this replacement of the activation functions might also be beneficial for model robustness.\nThe authors also conclude that the models' activation functions have the largest influence on the success of DP training.\nHowever, the experiments in this work suggest that the LeNet architecture might be more robust, with fewer zero gradients and lower magnitudes of the remaining gradients.\nHence, when considering privacy in combination with security, the model architecture might be an important factor to 
consider as well \cite{Su.2018Is}.\n\n\n\n\n\nThe results of this work raise the question of whether training DNNs with DP necessarily causes an increase in model vulnerability against adversarial examples.\nAt the current state of research, the experiments suggest that achieving privacy has a negative impact on model robustness.\nAt the same time, this work also highlights a direction of research that might be worth pursuing in the future, namely controlling the gradients in DP model training.\nPinot \emph{et al}\onedot~\cite{Pinot.2019unified} show that, in principle, Renyi-DP~\cite{Mironov.2017Renyi}, which is used in DP-SGD, and adversarial robustness share equivalent goals.\nTherefore, future work could investigate how DP training can be adapted to simultaneously improve robustness, and which factors, apart from the gradients, cause current DP models' vulnerability.\n\n\section{Conclusion}\n\label{sec:conclusion}\nMaking DNNs more private and more robust are important tasks that have long been considered separately.\nHowever, to solve both problems at the same time, it is beneficial to understand what impact they have on each other.\nThis work addressed this question from a privacy perspective, evaluating how training DNNs with DP-SGD affects the robustness of the models.\nThe experiments demonstrated that DP-SGD training causes a decrease in model robustness.\nBy conducting a broad range of adversarial attacks against DP models, it was shown that the positive effects of DP-SGD observed in previous work might be largely due to gradient masking and, therefore, provide a false sense of security.\nAnalyzing and comparing the gradients of DP and non-DP models demonstrated that DP training leads to larger gradient magnitudes and more zero gradients, which might be a reason for the DP models' higher vulnerability.\nFinally, this work is the first to show that certain DP parameter combinations in DP-SGD can intentionally cause gradient masking.\nAs a 
consequence, future work may further investigate the influence of DP training on the models.\nThis could serve as a basis during parameter and architecture selection such that private training does not oppose the goal of security and can, hence, be applied also in critical scenarios.\n\n\n\\printbibliography\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nRadio-loud active galactic nuclei (AGNs) are known to be powerful emitters of non-thermal radiation. In the simplest leptonic models, this emission is most commonly attributed to the presence of highly relativistic leptons accelerated in a relativistically moving magnetized jet, emitting synchrotron radiation and inverse-Compton photons on various sources of soft photons. \nHowever several features are still unclear, such as the composition of the jet (leptonic $\\rm e^+ \/ e^-$ or baryonic $\\rm p^+ \/ e^-$ ), the bulk acceleration mechanism, the heating mechanism of the relativistic non-thermal particles, \\corr{and} the precise size and location of the emitting zones. One-zone models are the most simple and widespread models for reproducing the AGN jet emission. They assume a spherical, homogeneous emission zone with a minimal number of free parameters: the radius of the zone, the magnetic field, the bulk Lorentz factor, and parameters describing the particle distribution. They present the advantage of being simple and give a good first approximation of the physical conditions in jets. However, they encounter several limitations. Among others, they have difficulties in reproducing the low-energy (radio) part of the spectral energy distribution (SED; most probably emitted by the farthest part of the jet). They assume the very stringent condition that the whole non-thermal emission must be produced in a single zone, which can be an issue for explaining the multi-wavelength variability of the sources. 
Furthermore, they do not offer any clue about the jet formation mechanism and its parameters outside this zone. \corr{Moreover, the strong Doppler boosting associated with highly relativistic jets is incompatible with the detection of high-energy emission from unbeamed radio galaxies seen at large angles, since their emission should be strongly attenuated by a Doppler factor smaller than one.} \\ \n\nFacing these weaknesses, models \corr{considering more complex structures} have been proposed involving stratified inhomogeneous jets. \corr{For instance, the b}lob-in-jet models \citep{Katarzyski:2001iga,Hervet:2015wo} propose a structure where blobs move at highly relativistic speeds in the jet. Those blobs can be responsible for some of the emission (especially at high energies), whereas the ambient jet can explain the rest of the spectrum, for example in the radio band. Spine\/sheath models, implying the existence of two flows at different velocities, can provide a picture in agreement with both theoretical and observational constraints. \cite{Ghisellini:2005bc} developed a model based on the same idea and showed that this kind of structure could help reduce the necessity for very high bulk Lorentz factor values, as the two emitting zones interact radiatively, enhancing each other's emission.\\\n\nHowever, the original idea of a double jet structure stemmed from \cite{1989MNRAS.237..411S}, who coined the name ``two-flow model'' (more details in Sect. \ref{sec:twoflow}).\nIn their original paper, the authors proposed for the first time a double jet structure for AGN jets. \corr{In this model, an outer jet or collimated wind is ejected from an accretion disc, with a mildly relativistic velocity ($\upsilon \sim 0.5c$)}. {In the empty funnel of this jet, a fast inner electron-positron beam is formed and moves at much higher relativistic speeds (bulk Lorentz factor $\Gamma_b \approx 10$). 
The pairs are supposed to be continuously heated by the surrounding baryonic jet through MHD turbulence.}\\\n\n\corr{The model has several advantages compared to models assuming a single fluid. The first advantage is that the problem of the energy budget of relativistic jets is reduced, since only a minor component of leptons needs to be accelerated to high bulk Lorentz factors. The protons of the surrounding jet are not assumed to be highly relativistic}. The second is that this model provides a simple way to solve the discrepancy between the high Lorentz factors required to produce the observed gamma-ray emission and the slower motions observed in jets at large scales (e.g. \citealt{2006ApJ...640..185H} and references therein). \corr{Furthermore, as the power is carried mainly by the non-relativistic jet, the model escapes the Compton drag issue. As explained below, the pair beam is only gradually accelerating thanks to the anisotropic inverse-Compton effect (or ``Compton rocket'' effect), and its velocity never exceeds the characteristic velocity above which the aberrated photon field starts to cause a drag (it actually remains at this characteristic velocity). Its density increases all along the jet due to the gamma-gamma pair production process, so the dominant emission region can be at large distances from the central black hole, avoiding the problem of gamma-gamma absorption by the accretion disk photon field. This model therefore offers a natural explanation of the main characteristics of the high-energy source deduced from observations.} \\\n\nNotably, the dynamical effects of radiation on relativistic particles (which are intrinsically strongly dissipative) are very difficult to incorporate both in analytical and in numerical (M)HD simulations. 
The picture we present here is thus markedly different from most models available in the literature, since the pair component dynamics is mainly governed (at least at distances relevant for high-energy emission) by these radiative effects. However, we argue that these effects are unavoidable, since the cooling time of relativistic leptons is indeed very short at these distances, and they must be taken into account in any physically relevant model involving a relativistic pair plasma, which is in turn likely to exist given the high density of gamma-ray photons observed from radio-loud gamma-ray-emitting AGNs.\\\n\nThe first numerical model of non-thermal emission based on these ideas was proposed by \cite{1995MNRAS.277..681M}, \corr{who considered only the inverse-Compton process on accretion-disk photons.} Assuming a power-law particle distribution and a stratified jet, the authors could derive the inverse-Compton emission from a plasma of relativistic leptons illuminated by a standard accretion disc, \corr{as well as the opacity to pair production and the pair production rate}. They showed that the spontaneous generation of a dense $e^+ \/ e^-$ pair beam continuously heated by the baryonic jet was indeed possible and was able to reproduce \corr{the gamma-ray emission of EGRET blazars}.\\\n\nBased on this work, \cite{1996A&AS..120C.563M} further studied the possibility of accelerating the $e^+ \/ e^-$ pair beam in the framework of the two-flow model through the Compton rocket mechanism introduced by \cite{1981ApJ...243L.147O}. In this mechanism, the motion of the relativistic pair plasma is entirely controlled by the local anisotropy of the soft photon field, which produces an anisotropic inverse-Compton emission transferring momentum to the plasma. The acceleration saturates when the photon field, aberrated by the relativistic motion, appears nearly isotropic (vanishing flux) in the comoving frame. 
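This saturation condition can be stated compactly (the formulation below is an illustrative restatement in our notation, not an equation quoted from the cited papers). With $\mu = \cos\theta$ the direction cosine of an incoming photon and $\beta$ the bulk velocity of the pair plasma, relativistic aberration gives the comoving direction cosine, and the plasma settles at the equilibrium velocity $\beta_{\rm eq}$ for which the first angular moment (the net flux) of the comoving radiation field vanishes:

```latex
\mu' = \frac{\mu - \beta}{1 - \beta \mu} \, , \qquad
H'(\beta_{\rm eq}) \propto \oint I'(\mu',\phi') \, \mu' \, \mathrm{d}\Omega' = 0 \, ,
```

where $I'$ is the comoving specific intensity. For $\beta < \beta_{\rm eq}$ the net comoving flux points forward and pushes the plasma ahead, while for $\beta > \beta_{\rm eq}$ the aberrated field brakes it; the Compton rocket thus acts as a restoring force towards $\beta_{\rm eq}$.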
\n\cite{1998MNRAS.300.1047R} continued this work on the acceleration via the Compton rocket effect. Considering a relativistic pair plasma following a power-law energy distribution, coupled with the photon field from a standard accretion disc and extended sources (BLR or dusty torus), they computed the value of the terminal bulk Lorentz factor and showed that values up to \corr{$\Gamma_b = 20$} are achievable for extragalactic jets, the most probable values being of the order of 10, in good agreement with VLBI motion studies (e.g. \citealt{Lister:2009hn}).\nSubsequent works studied the possibility of explaining the spectra by a pile-up (relativistic Maxwellian) distribution \citep{Sauge:2004ep}, which better reproduces the high-energy spectra of BL Lacs \corr{that peak in the TeV range. A quasi-Maxwellian distribution is close to a monoenergetic one; however, the spatial convolution of such a distribution whose parameters vary along the jet can mimic a power-law over a limited range of frequencies.} These authors also proposed a time-dependent version of the model that was \corr{later shown to be} able to successfully reproduce the rapid flares observed in some objects, such as PKS 2155-304 \citep{Boutelier:2008bga}.\\\n\nIn a recent study, \cite{Vuillaume:2015jv} (hereafter Vu15) further studied the acceleration through the Compton rocket effect. The complex photon field of an AGN was considered, carefully taking into account the spatial distribution of extended sources (standard accretion disc, dusty torus, and broad line region). The evolution of the resulting bulk Lorentz factor along the jet is then computed self-consistently, and it appears that, due to the complexity of the surrounding photon field, it can display a relatively tangled relation with the distance to the base of the jet. 
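Since the observed flux of a relativistically moving emission zone scales with a positive power of the Doppler factor, even modest variations of $\Gamma_b$ translate into large flux variations. A one-line helper implementing the standard formula $\delta = [\Gamma(1-\beta\cos\theta)]^{-1}$ (illustrative only):

```python
import numpy as np

def doppler_factor(gamma, theta):
    """Relativistic Doppler factor delta = 1 / (Gamma * (1 - beta cos(theta)))
    for bulk Lorentz factor `gamma` and viewing angle `theta` (radians)."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))
```

For $\Gamma_b = 10$, $\delta$ ranges from $\simeq 19.9$ on-axis down to $0.1$ at $\theta = 90^{\circ}$, which also illustrates the strong de-boosting of sources seen at large angles mentioned above.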
Moreover, variations of the bulk Lorentz factor (and thus of the Doppler factor) can induce complex variability of the observed emission in space and in time.\\\\\n\nThe goal of this paper is to present the complete calculation of the pair-beam non-thermal spectra {\\bf assuming} the above configuration, that is, 1) the pair plasma is assumed to be described by a pile-up distribution and is generated in situ by $\\gamma-\\gamma$ interactions, 2) its motion is controlled at short and intermediate distances (up to $\\sim 10^3 r_g$) by the anisotropic Compton rocket effect in the complex photon field generated by an accretion disc, a broad line region, and a dusty torus, and 3) particles emit non-thermal radiation through synchrotron, synchrotron self-Compton (SSC), and external Compton (EC) in the various photon fields. Part of the high-energy $\\gamma$-ray photons can also be absorbed to produce new pairs. We assume that these new pairs are continuously accelerated along the jet. In the two-flow paradigm \\citep{1989MNRAS.237..411S}, such acceleration is expected to be provided by the MHD turbulence generated by the surrounding baryonic flow. Subsequently, the density and internal energy of the pair plasma are computed self-consistently, and the emitted photon spectrum can be evaluated and compared to observations. \\\\\n\nThe overall layout is as follows. In Section \\ref{sec:twoflow} we present the main theoretical interest of the two-flow paradigm. Then in Section \\ref{sec:num_model} we detail our model and the numerical methods we use. 
In Section \\ref{sec:3c273} we apply this model to the \\corr{bright} quasar 3C273 and show that the model can reproduce its jet emission from radio to gamma-rays.\n\n\\section{The two-flow paradigm: the hypothesis and the reasoning behind it\\label{sec:twoflow}}\n\nThe two-flow paradigm is based on an original idea from \\citet{1989MNRAS.237..411S} (see Section \\ref{intro}). The model has evolved since then, but the core hypothesis remains the same: an AGN ``jet'' is actually made of two interacting flows.\nThe outer one is an MHD jet, or wind, fuelled by the accretion disc. It originates from and is self-collimated by the Blandford \\& Payne (BP) process \\citep{Blandford:1982vy} and is much like the jets found in other objects such as young stars or neutron stars. This baryonic jet is therefore mass loaded by the disc and mildly relativistic ($\\beta \\approx 0.5$). On the rotation axis, where the angular momentum tends to zero, no MHD force counteracts the gravity from the central object, and it is expected that the density strongly decreases near the axis, thus leaving an empty funnel. A lighter jet made of relativistic leptons $\\rm (e^-\/e^+)$ can then be created there through $\\gamma-\\gamma$ interaction between the surrounding soft and high-energy photons. This leptonic plasma is accelerated through the Compton rocket effect (as explained below) and travels at highly relativistic speeds. It is assumed to be confined, collimated, and continuously reheated by the surrounding MHD jet.\n\n\\begin{figure}[ht]\n\\begin{center}\n \\includegraphics[width=\\hsize]{figures\/2flow_scheme_3.pdf}\n\\caption{Schematic view of the model developed in the two-flow paradigm.}\n\\label{fig:two-flow}\n\\end{center}\n\\end{figure}\n\n\n\\subsection{Interaction of highly relativistic flows}\n\nThe first benefit of the two-flow hypothesis is to alleviate the problem of the confinement of a highly relativistic flow. 
Self-confinement of a jet can take place due to the magnetic field exerting a magnetic pressure (from the Lorentz force) balancing internal pressure and centrifugal force. However, the self-confinement of a highly relativistic jet through this process is quite inefficient. This was first shown in numerical simulations by \\citet{Bogovalov:2001jk} and \\citet{Bogovalov:2001ii} and then demonstrated on theoretical grounds by \\citet{Pelletier:2004vj}.\nTherefore, the collimation of relativistic flows necessarily requires an external process, such as external pressure from the ambient (interstellar) medium. In the two-flow paradigm, the outer self-confined, massive, and powerful MHD sheath confines the spine by ram pressure, providing an easy solution to the important problem of confinement of highly relativistic flows.\n\\newline\n\nMoreover, the interface of the two flows can be subject to Kelvin-Helmholtz instabilities producing turbulence. This turbulence can then accelerate particles in the spine through second-order Fermi processes. In that picture, because the MHD sheath carries most of the power, it can be seen as an energy reservoir continuously energizing the particles through turbulence. \nThis continuous source of energy gives rise to two very interesting phenomena.\n\nThe first one is the most important feature of the pair creation process. As new pairs are created through $\\gamma-\\gamma$ absorption, and immediately accelerated through turbulence to reach high energies, they can emit $\\gamma$-rays that will create more pairs. The pair-creation process being very efficient and highly non-linear, a copious amount of new pairs can be created even from an initial, very low density, therefore constituting the spine. Moreover, above a certain threshold of energy, the process can run away and give rise to episodes of rapid flares. 
This has been demonstrated by \\cite{Renaud:1999vz}, \\cite{Sauge:2004ep}, and \\cite{Boutelier:2008bga}.\\\\\n\nThe second phenomenon is the possibility of accelerating the spine jet to relativistic motion through the anisotropic Compton rocket effect, as discussed below.\n\n\\subsection{Bulk Lorentz factor of the spine\\label{sec:jet_velocity}}\n\\corr{\nThe question of the actual speed, or bulk Lorentz factor $\\Gamma_b$ of jets, as well as their acceleration, is central to the understanding of their physics. This is a long-standing and debated issue in the community \\citep{2006ApJ...640..185H} with conflicts between theoretical arguments (\\citealt{Aharonian:2007ep}, \\citealt{Tavecchio:2010hc}, \\citealt{2008MNRAS.384L..19B}) and observations \\citep{Piner:2004cj, 2004ApJ...600..127G, Piner:2014io, Lister:2013gp}.\nSome of the attempts to solve this issue come in the form of structured jets \\citep{2000A&A...358..104C, Ghisellini:2005bc, 2003ApJ...594L..27G}.\n}\n\nIn the two-flow paradigm, the question of the acceleration of the spine is solved by the Compton rocket effect as proposed by \\cite{1981ApJ...243L.147O}. In this process, inverse-Compton scattering in a strongly anisotropic photon field induces a motion of the emitting plasma in the opposite direction. \\cite{1981ApJ...243L.147O} already showed that a purely leptonic hot plasma must be dynamically driven by this process, up to relativistic speeds.\n\\cite{Phinney:1982wt} objected that this process is actually quite inefficient at accelerating a pair blob, as the pairs cool down very quickly through inverse-Compton scattering, quenching the acceleration before it can become effective.\n\nHowever, in the two-flow paradigm, as the pairs are continuously re-accelerated by the turbulence all along the jet, this argument does not hold and the Compton rocket process becomes an efficient source of plasma thrust (see Section \\ref{sec:gamma_b}). 
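The saturation of this thrust can be illustrated with a minimal numerical sketch (assumptions here: Thomson scattering, a uniform-brightness disc seen on-axis, and a normalised intensity; the computation for the full photon field is presented in Section \\ref{sec:gamma_b}). The plasma settles at the velocity for which the comoving flux $H' = \\Gamma_b^2 \\left[ (1+\\beta^2) H - \\beta (J+K) \\right]$ vanishes, with $J$, $H$, and $K$ the lab-frame Eddington moments of the radiation field:

```python
import math

def beta_eq(z, r_disc=1.0):
    """Equilibrium (saturation) velocity of the Compton rocket above a
    uniform-brightness disc seen on-axis from altitude z (illustrative).

    With mu_m the cosine subtended by the disc rim, the lab-frame
    Eddington moments of a unit-intensity disc are
        J = (1 - mu_m)/2,  H = (1 - mu_m^2)/4,  K = (1 - mu_m^3)/6,
    and the comoving flux Gamma^2 [(1 + beta^2) H - beta (J + K)]
    vanishes at the smaller root of H b^2 - (J + K) b + H = 0.
    """
    mu_m = z / math.hypot(z, r_disc)
    J = (1 - mu_m) / 2
    H = (1 - mu_m**2) / 4
    K = (1 - mu_m**3) / 6
    s = J + K
    return (s - math.sqrt(s * s - 4 * H * H)) / (2 * H)

for z in (0.5, 2.0, 10.0):  # altitudes in units of the disc radius
    b = beta_eq(z)
    print(f"z/R_disc = {z:4.1f}: beta_eq = {b:.3f}, "
          f"Gamma_eq = {1 / math.sqrt(1 - b * b):.2f}")
```

In this sketch the equilibrium Lorentz factor grows with altitude as the disc radiation becomes increasingly beamed from behind, while near the disc, where the field is more isotropic, only mildly relativistic speeds are reached.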
This effect is likely to be very efficient in type II AGNs (FSRQ and FR II galaxies) where an accretion disc with high luminosity is present. The ambient photon field in the first hundreds of $r_g$ will therefore be highly anisotropic, and a pair plasma will be rapidly accelerated to a characteristic velocity for which the net aberrated flux, evaluated in the comoving frame, vanishes. As the cooling timescale for near-Eddington luminosities is much shorter than the other dynamical timescales, this effect will dominate over all other terms such as pressure gradients and magnetic effects. Hence the velocity of the plasma will be very close to this characteristic equilibrium velocity. If the photon field is mainly external (accretion disc and secondary reemission processes), and the scattering occurs in the Thomson regime, the characteristic velocity depends only on the angular distribution of photons and not on the plasma particle distribution, as explained in Section \\ref{sec:gamma_b}. We note, however, that the situation may be different for type I objects (FR I and BL Lacs) where the radiation field is dominated by the jet itself; in this case, there is no simple calculation of the equilibrium velocity. In the following, we only consider FSRQ with an intense external photon field. This does not mean that the two-flow model is invalid for type I objects, but only that the velocity of the pair spine is not easily computed for these objects. In the same way, the application to other kinds of objects such as microquasars and GRBs is probably different given the very different physical conditions of the photon source.\n\n\n\\section{Numerical modelling \\label{sec:num_model}}\n\nThe numerical model developed here is based on the one described in \\cite{Boutelier:2008bga}. The jet is stratified along the axis coordinate $z$ (see Fig. \\ref{fig:two-flow}). The jet axis is viewed at an angle $\\theta_i$ with the line of sight, with $\\mu_i = \\cos \\theta_i$. 
Numerically, we define a slicing with an adaptive grid step as described below, and physical conditions are computed at each slice, starting from initial conditions given at the base of the jet.\nEach slice then acts as a one-zone region and emits synchrotron radiation, synchrotron self-Compton (SSC), and external Compton (EC) radiation, computed as in a one-zone approximation with a spherical blob of radius $R(z)$.\nThe photon field from external sources (the accretion disc, the dusty torus, and the broad line region) is computed all along the jet. This determines the external inverse-Compton emission as well as the induced Compton rocket force on the plasma as described below. The opacity to high-energy photons inside and outside the jet is computed numerically, and is used to compute a pair creation term\nat each step.\n\n\\subsection{Geometry of the jet \\label{sec:jet_geometry}}\n\nTo compute the total jet emission, one needs to know the physical conditions all along the jet. Some of these conditions are derived from the computation of the spatial evolution equation along $z$ (see Sect. \\ref{sec:distr_evol}) and only three physical parameters, that is, the inner radius of the jet $R(z)$ (acting as an outer radius for the pair beam), the magnetic field $B(z),$ and the heating term $Q(z)$, are imposed on a global scale. Here we assume their evolution to be described by power-laws:\n\n\\begin{equation}\n\\label{eq:Rz}\n R(z) = R_0 \\left [ \\frac{z}{Z_0} + \\left( \\frac{R_i}{R_0}\\right)^{1\/\\omega} \\right ]^\\omega \\qquad \\text{with} \\quad \\omega < 2\n.\\end{equation}\n\n\\corr{This law describes a paraboloid with a radius close to $R_{0}$ at a distance $Z_{0}$ from the black-hole. \nThe constant $(R_i\/R_0)^{1\/\\omega}$ allows one to avoid divergence issues at $z=0$ by setting a minimal jet radius $R(z=0) = R_i$ corresponding to the disc inner radius. 
The starting and ending altitudes of the jet are free parameters.\n}\nThe index $\\omega$ defines the jet opening. One must have $\\omega < 1$ to keep the jet collimated.\n\n\nThe magnetic field is assumed to be homogeneous and isotropic at every altitude $z$ in the plasma rest frame. Its evolution is described by:\n\\begin{equation}\n\\label{eq:Bz}\n B(z) = B_0 \\left( \\frac{R(z)}{R_0}\\right)^{-\\lambda} \\qquad \\text{with} \\quad 1< \\lambda < 2\n.\\end{equation}\n\nThe index $\\lambda$ gives the structure of the magnetic field and is bounded by two extreme cases:\n\\begin{itemize}\n\\item\n$\\lambda = 1$ corresponds to a pure toroidal magnetic field, as the conservation of the magnetic field circulation gives $B \\sim 1\/R.$\n\\item\n$\\lambda = 2$ corresponds to a pure poloidal magnetic field, as the conservation of the magnetic flux in this case gives $B \\sim 1\/R^{2}$.\n\\end{itemize}\n\n\nThe particle acceleration is a central part of the two-flow model, as it is assumed that the particles are continuously heated by the outer MHD structure (see Sect. \\ref{sec:twoflow}), which acts as an energy reservoir, compensating for radiation losses. Due to the lack of a precise expression for the acceleration rate per particle, $Q_{acc}$, we use the following expression:\n\n\\begin{equation}\n\\label{eq:Qz}\n Q_{acc}(z) = Q_0 \\left [ \\frac{z}{Z_0} + \\left(\\frac{R_i}{R_0} \\right)^{1\/\\omega}\\right ]^{-\\zeta} \\exp \\left( -\\frac{z}{Z_c} \\right)\n.\\end{equation}\n\nThe particle acceleration decreases as a power law of index $\\zeta$ up to an altitude $z=Z_{c}$ where it drops exponentially. This altitude $Z_{c}$ physically corresponds to the end of the particle acceleration (through turbulence) in the jet.\nBecause of this exponential cut-off, whatever the index $\\zeta$ is, the total amount of energy provided to accelerated particles remains finite. 
However, as $Z_{c}$ could be as large as desired (even as large as the jet), we consider $\\zeta > 1$ to be physically more satisfactory. This way, even an integration to infinity of $Q_{acc}$ would converge. Similarly to the jet radius expression (equation \\ref{eq:Rz}), the constant $R_i\/R_0$ avoids numerical issues for very small $z$.\n\nThe jet is then sliced along $z$. As we see below, the particle density can be subject to abrupt changes in case of \\corr{short and intense events of pair creation, which is a non-linear process}. It is therefore essential to have an adaptive slicing as the physical conditions are computed in the jet. \\corr{The conditions have been chosen to ensure variation rates of the particle mean energy and of the particle flux of less than $1\\permil$ between two computation steps}.\n\n\nFrequencies follow a logarithmic sampling between $\\nu_{min}$ and $\\nu_{max}$ in the observer frame. The sampling is therefore different at each altitude in the jet depending on the Lorentz factor. This ensures that local emissivities are computed for the same sampling (that of the observer) when transferred to the observer frame. \n\n\n\\subsection{Geometry of the external sources of photons \\label{sec:ext_sources}}\n\nThere are several possible sources of soft photons in an AGN, but three have an actual influence on the external Compton emission and are taken into account in our model: the accretion disc, the dusty torus, and the broad line region (BLR). In order to correctly compute the corresponding inverse-Compton radiation, the anisotropy of the sources is taken into account as detailed below. \n\n\\subsubsection{The accretion disc \\label{sec:accrection_disc}}\n\nThe geometry of the disc \\corr{(see Fig. \\ref{fig:sketch_disc})} is described by its inner radius $R_{in}$ and its outer radius $R_{out}$. It is then sliced along its radius $r$ (with a logarithmic discretization) and its azimuthal angle $\\varphi$ (with a linear discretization). 
Therefore, each slice has a surface $\\displaystyle \\d S = \\d \\varphi \\left(r\\d r + \\frac{\\d r^2}{2}\\right)$.\nFrom the jet axis at an altitude $z$, each slice is seen under a solid angle \n\\begin{equation}\n\\d \\Omega = \\d S \\frac{z}{\\left( r^2 + z^2\\right)^{3\/2}}\n,\\end{equation}\n\\corr{at an angle $\\theta_s = \\arccos(z\/\\sqrt{r^2 + z^2})$ with the $z$ axis.}\nWe consider here a standard accretion disc. \\corr{Then the radial distribution of the temperature is given by $T_{disc}(r)$ \\citep{1976MNRAS.175..613S}:\n\\begin{equation}\n\\label{eq:T_disc}\nT_{disc}(r) = \\left[ \\frac{3 G M \\dot{M}} { 8 \\pi \\sigma} \\frac{1}{r^3} \\left(1 - \\sqrt{\\frac{R_{isco}}{r}} \\right) \\right] ^{1\/4}\n,\\end{equation}\nwith $G$ the gravitational constant, $\\sigma$ the Stefan-Boltzmann constant, $M$ the black-hole mass, $\\dot{M}$ the accretion rate, and $\\displaystyle R_{isco} = \\frac{6MG}{c^2}$ the innermost stable circular orbit. Its emissivity is equal to one unless specified otherwise.\n\nThe luminosity of one face of the disc is then given by\n\\begin{equation}\n\\label{eq:disc_luminosity}\nL_{disc} = \\int_{R_{in}}^{R_{out}} \\sigma T^4_{disc}(r) 2 \\pi r \\mathrm{d} r\n,\\end{equation}\nwhich converges to $\\displaystyle L_{disc} = \\frac{\\dot{M}c^2}{24}$ for $R_{in} = R_{isco}$ and $R_{out} \\gg R_{in}$.}\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=\\hsize]{figures\/disc.pdf}\n \\caption{\\label{fig:sketch_disc} Sketch of the disc radial and azimuthal splitting. A slice at $(r,\\varphi) \\in \\left( [R_{in},R_{out}],[0,2\\pi]\\right)$ is seen under a solid angle $\\d \\Omega$ from an altitude $z$ in the jet.}\n\\end{figure}\n\n\n\\subsubsection{The dusty torus}\n\nThe dusty torus (see Fig. \\ref{fig:torus}) is assumed to be in radiative equilibrium with the luminosity received from the accretion disc. 
The torus is sliced according to $\\theta_t \\in \\left[\\theta_{t_{min}},\\theta_{t_{max}}\\right]$ and $\\varphi \\in [0,2\\pi]$. Each slice is illuminated by the disc and is in radiative equilibrium. It emits as a grey-body with a temperature $T_t(\\theta_t,\\varphi)$ and an emissivity $\\varepsilon(\\theta_t, \\varphi)$. As most of the disc luminosity comes from its inner parts and considering $R_{in} \\ll (D_t - R_t)$, the disc appears point-like from a point on the torus surface. Therefore, each slice of the torus of surface $\\d S_t$ and at an angle $\\theta_t$ from the $X$ axis satisfies the relation:\n\\begin{equation}\n\\label{eq:torus_eq}\nI_d \\: S_d \\: \\d \\Omega_d \\: \\cos \\omega_d = \\varepsilon(\\theta_t) \\sigma T_{torus}^4(\\theta_t) \\d S_t (\\theta_t)\n,\\end{equation}\nwith $\\omega_d$ the angle between the Z-axis and the emission direction from the disc: $\\displaystyle \\cos \\omega_d = a \\sin \\theta_t \/ \\sqrt{1 - 2 a \\cos \\theta_t + a^2}$ and $a = R_t \/ D_t$.\n\n\\begin{figure}[ht]\n \\includegraphics[width=\\hsize]{figures\/torus.pdf}\n\\caption{\\label{fig:torus}Dusty torus sliced as described in the text. Each slice is seen from an altitude $z$ in the jet under a solid angle $\\d \\Omega$.}\n\\end{figure}\n\n\\corr{The user can then either fix a constant temperature or a constant emissivity and compute the other parameters as a function of $\\theta_t$ to preserve the radiative equilibrium. Unless specified otherwise, $\\varepsilon(\\theta_t) = 1$ and the temperature is kept free.}\n\n\\subsubsection{The broad line region}\n\nThe broad line region is modelled by an isotropic, optically and geometrically thin spherical shell of clouds situated at a distance $R_{blr}$ from the central black-hole. Like other sources, it is sliced angularly into different parts in order to perform the numerical integration. 
We choose a linear discretization along $\\omega \\in \\left[\\omega_{max},\\omega_{min} \\right]$ and along $\\varphi \\in [0,2\\pi]$.\n\n\\begin{figure}[ht]\n \\includegraphics[width=\\hsize]{figures\/blr.pdf}\n\\caption{\\label{fig:blr}The BLR, an optically and geometrically thin shell of clouds seen under a solid angle $\\d \\Omega$ from an altitude $z$ in the jet. The BLR is sliced according to $\\omega \\in \\left[\\omega_{max},\\pi\/2\\right]$ and $\\varphi \\in [0,2\\pi]$. The BLR absorbs and re-emits part of the disc luminosity.}\n\\end{figure}\n\nObserved BLRs display a complex emission with a continuum and broad emission lines, but \\cite{Tavecchio:2008fu} showed that modelling the spectrum of the BLR with a grey-body spectrum at $T=10^5\\,$K provides a good approximation to the resulting inverse-Compton spectrum. \\corr{We followed this idea using a temperature of $T_{blr} = 10^5\\,$K and an overall luminosity $L_{blr}$, which is a fraction $\\alpha_{blr}$ of the disc luminosity: $\\displaystyle L_{blr} = \\alpha_{blr} L_{disc} = \\sigma T_{blr}^4 R^2 \\int_{0}^{2\\pi}\\mathrm{d} \\phi \\int_{\\omega_{max}}^{\\pi\/2}\\sin(\\omega) \\mathrm{d} \\omega$.}\\\\\nThe emissivity of the BLR is then given by:\n\\begin{equation}\n \\varepsilon_{blr} = \\frac{\\alpha_{blr} L_{disc}}{\\sigma T^4_{blr} 2\\pi R^2_{blr}\\cos \\omega_{max}}\n \\label{eq:iso_emissivity}\n.\\end{equation}\n\n\n\\subsection{Emission processes}\n\nAs the spine is assumed to be filled with electrons\/positrons, only leptonic processes need to be considered here: synchrotron and inverse-Compton radiation are computed all along the jet. The radiation is computed in the plasma rest frame and then boosted by the local Doppler factor $\\delta(z)=\\left[\\Gamma_b(z)(1-\\beta_b(z)\\cos i_{obs})\\right]^{-1}$ into the observer's rest frame, with $\\beta_b = \\sqrt{1 - 1\/\\Gamma_b^2}$ and $i_{obs}$ the observer angle relative to the jet axis as defined in Fig. 
\\ref{fig:absorption_sketch}.\\\\\n\nIn the two-flow paradigm, particles are assumed to be accelerated by a second-order Fermi process due to turbulence inside the jet. Thus, a Fokker-Planck equation governs the evolution of the particle distribution, which evolves through diffusive acceleration and radiative losses. \\citet{Schlickeiser:1984ua} showed that generic solutions of this equation were pile-up (or quasi-Maxwellian) distributions. Most of the particles are then concentrated around some characteristic energy $\\bar{\\gamma}$ where the acceleration and cooling timescales are comparable. We adopt the following simplified form of such a distribution:\n\\begin{equation}\n n_{e}(\\gamma, z) = n_{0}\\left(z\\right) \\frac{\\gamma^{2}}{2 \\bar{\\gamma}^{3}(z)} \\exp \\left(-\\gamma\/\\bar{\\gamma}(z)\\right)\n\\label{eq:pileup}\n,\\end{equation}\n\nwhere $\\displaystyle n_{0}(z) = \\int n_{e}(\\gamma) \\mathrm{d} \\gamma$.\\\\\n\n\\corr{As this acceleration mechanism does not produce power laws for the particle energy distributions (contrary to the first-order Fermi process at work in shocks), the reproduction of large-scale power laws in the spectra of the sources is not straightforward. Power-law shapes can, however, be reproduced by a summation of a number of pile-up distributions (from different emission zones) with different parameters.\nHowever, the summation of the emission coming from each slice of the jet can be demanding in terms of computing time.} A certain level of approximation is thus required to achieve computation of the model in a reasonable amount of time. 
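As a quick numerical illustration of the summation argument above, one can superpose pile-up distributions of the form of Eq. (\\ref{eq:pileup}) with $\\bar{\\gamma}$ spanning several decades and measure the effective slope of the sum (a sketch with a hypothetical weighting $n_0 \\propto 1\/\\bar{\\gamma}$, chosen only for the demonstration; in the model the weights follow from the physical conditions along the jet):

```python
import numpy as np

def pileup(gamma, n0, gbar):
    # pile-up (quasi-Maxwellian) distribution; integrates to n0 over gamma
    return n0 * gamma**2 / (2 * gbar**3) * np.exp(-gamma / gbar)

gamma = np.logspace(1, 6, 400)
gbars = np.logspace(2, 5, 40)      # characteristic energies along the jet
total = sum(pileup(gamma, 1.0 / gb, gb) for gb in gbars)  # assumed weights

# effective power-law index, measured far from both ends of the gbar range
mask = (gamma > 1e3) & (gamma < 1e4)
slope = np.polyfit(np.log10(gamma[mask]), np.log10(total[mask]), 1)[0]
print(f"effective index of the summed distribution: {slope:.2f}")
```

Each individual pile-up is nearly monoenergetic, yet over the intermediate range the sum behaves like a power law (close to $\\gamma^{-2}$ for this particular weighting).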
The following subsections present the calculation of each type of radiation for a pile-up distribution.\n\n\\subsubsection{Synchrotron radiation}\n\n\nUsing the expression of the synchrotron-emitted power per unit of frequency and solid angle \\citep{BLUMENTHAL:1970gb}, \\citet{Sauge:2004tc} showed that the synchrotron emissivity of a pile-up distribution can be written as:\n\\begin{equation}\n J_{syn} = \\frac{\\sqrt{3}}{16 \\pi} \\frac{e^3 B}{m_e c^2} N_e y \\: \\Lambda(y)\n \\label{eq:Jsyn}\n,\\end{equation}\nwith $\\displaystyle \\epsilon_c = \\frac{3}{4 \\pi} \\frac{h e B \\bar{\\gamma}^2}{m_e^2 c^3}\\: $, $\\displaystyle y = \\epsilon\/\\epsilon_c$\nand\n\\begin{equation}\n \\Lambda (y) = \\frac{1}{y} \\int_0^\\infty x^2 \\exp(-x) F_{syn}\\left(\\frac{y}{x^2}\\right) \\d x\n.\\end{equation}\n\nTo speed up the numerical computation, analytical approximations of the function $\\Lambda(y)$ can be made in different regimes. These approximations are presented in the appendix (equations \\ref{eq:lambda1}, \\ref{eq:lambda2} and \\ref{eq:lambda3}).\n\n\n\nFinally, one obtains the synchrotron spectrum in the optically thin regime $(\\epsilon > \\epsilon_{abs})$:\n\\begin{equation}\n\\label{eq:sync_thin}\n\\frac{\\d n_s^{thin}(\\epsilon)}{\\d \\epsilon \\d t} = \\frac{4 \\pi}{\\epsilon m_e c^2} J_{syn}=\n\\frac{\\sqrt{3}}{4} \\frac{e^3 B}{m_e^2 c^4} N_e \\frac{1}{\\epsilon_c} \\: \\Lambda(y)\n.\\end{equation}\n\nThe transition between the optically thin and the optically thick regime occurs at the energy $\\epsilon_{abs}$ defined by \n\\begin{equation}\n\\label{eq:tau_syn_def}\n\\tau_\\epsilon(\\epsilon_{abs}) \\approx \\frac{I_\\epsilon^{thin}(\\epsilon_{abs}) h^2}{2 \\bar{\\gamma} m_e^3 c^4 \\epsilon_{abs}^2} = 1\n.\\end{equation}\n\nFor a pile-up distribution of particles, the distribution temperature $k_B T_e = \\langle \\gamma \\rangle m_e c^2 \/3 = \\bar{\\gamma} m_e c^2$ does not depend on $\\epsilon$. 
In this case, the optically thick regime at low frequency $(\\epsilon < \\epsilon_{abs})$ is described by the Rayleigh-Jeans law:\n\\begin{equation}\n\\label{eq:sync_thick}\n\\frac{\\d n_s^{thick}(\\epsilon)}{\\d \\epsilon \\d t} = \\frac{8 \\pi}{R} \\frac{m_e^2 c^2}{h^2} \\bar{\\gamma} \\epsilon\n.\\end{equation}\n\nFinally, the synchrotron spectrum resulting from a pile-up distribution has three main parts:\n\\begin{itemize}\n\\item $\\epsilon < \\epsilon_{abs}$ is the optically thick part of the spectrum, described by a power-law of index 1.\n\\item $\\epsilon_{abs} < \\epsilon < \\epsilon_c$ is the optically thin part of the spectrum, described by a power-law of index $-2\/3$.\n\\item $\\epsilon_{c} < \\epsilon $ is where the spectrum falls exponentially.\n\\end{itemize}\n\n\n\n\\subsubsection{Synchrotron self-Compton radiation}\n\nWe assume the synchrotron self-Compton (SSC) emission to be co-spatial with the synchrotron emission, and therefore neglect the synchrotron emission coming from other parts of the jet. We treat the Thomson and the Klein-Nishina (KN) regimes separately, following the analytical approximations proposed by \\citet{Sauge:2004tc}. The results of these approximations are recalled here, while the details are given in the appendix. We first consider the distinction between the Thomson and the KN regimes relative to the synchrotron peak $\\epsilon_c$.\\\\\n\nIf $\\epsilon_c \\ll 1\/\\epsilon_1$, all synchrotron photons producing SSC photons of energy $\\epsilon_1$ are scattered in the Thomson regime. 
In this case, one can show that the SSC photon production rate is given by (see Appendix \\ref{sec:appendix_ssc}):\n\\begin{equation}\n\\label{eq:SSC_rate_th}\n\\frac{\\d n_{ssc}^{th}}{\\d \\epsilon_1 \\d t} = \\frac{3 \\sigma_{th}}{4} \\frac{n_0}{2\\bar{\\gamma}} \\tilde{G}\\left(\\frac{\\epsilon_1}{\\bar{\\gamma}^2} \\right)\n,\\end{equation}\nwith\n\\begin{equation}\n\\label{eq:ssc_G}\n\\begin{aligned}\n\\tilde{G}(x) & = \\int \\frac{\\d \\epsilon}{\\epsilon} \\frac{\\d n_{ph}}{\\d \\epsilon} \\tilde{g}\\left( \\frac{x}{4\\epsilon} \\right) \\\\\n\\tilde{g}(x) & = \\frac{2}{3} e^{-\\sqrt{x}}\\left( 1 - \\sqrt{x} + xe^{\\sqrt{x}} E_i(\\sqrt{x}) \\right)\n\\end{aligned}\n,\\end{equation}\nwith $E_i(x) = \\int_x^\\infty \\d t \\, e^{-t}\/t $ being the exponential integral function. Interestingly, $\\tilde{G}$ is a function of a single variable and can therefore be tabulated to speed up calculation in the Thomson regime.\\\\\n\nFor $\\epsilon_c > 1\/\\epsilon_1$, we must take into account KN corrections. However, synchrotron photons satisfying $\\epsilon < 1 \/ \\epsilon_1$ are still in the Thomson regime, while photons satisfying $ \\epsilon > 1\/\\epsilon_1 $ are in the KN regime. 
In order to include KN corrections only when necessary, the emissivity in this regime is divided into two contributions, Thomson and Klein-Nishina, respectively given by $J_{ssc}^{th}$ and $J_{ssc}^{kn}$:\n\\begin{equation}\n\\label{eq:ssc_kn_th}\nJ_{ssc}^{th}(\\epsilon_1) = \\epsilon_1^{-s} \\bar{\\gamma}^{2s+1} \\left(\\Gamma(2s+1, u_{min}) - \\Gamma(2s+1, u_{max}) \\right)\n,\\end{equation}\n\\corr{with $\\Gamma$ being the incomplete gamma function,}\nand with $\\displaystyle u_{min} = \\sqrt{\\frac{\\epsilon_1}{\\epsilon_{max}\\bar{\\gamma}^2}}$ and $\\displaystyle u_{max} = \\sqrt{\\frac{\\epsilon_1}{\\epsilon_{min}\\bar{\\gamma}^2}}, $\n\n\n\\begin{equation}\n\\label{eq:ssc_kn_kn}\n\\begin{aligned}\nJ_{ssc}^{kn}(\\epsilon_1) & = \\frac{3}{8} \\epsilon_1 \\exp\\left(-\\frac{\\epsilon_1}{\\bar{\\gamma}}\\right) \\\\\n& \\times \\left\\{ \\left(\\ln(2\\epsilon_1) +\\frac{1}{2} \\right) K_1^{(s)}(1\/\\epsilon_1, \\epsilon_0) + K_2^{(s)}(1\/\\epsilon_1, \\epsilon_0) \\right\\}. \n\\end{aligned}\n\\end{equation}\n\nThe SSC photon production rate in the Klein-Nishina regime is then given by\n\\begin{equation}\n\\label{eq:ssc_rate_kn}\n\\frac{\\d n_{ssc}^{kn} (\\epsilon_1)}{\\d \\epsilon_1 \\d t} = n_1 n_0 c \\sigma_{Th} \\left( J_{ssc}^{th}(\\epsilon_1) + J_{ssc}^{kn}(\\epsilon_1) \\right)\n.\\end{equation}\n\n\nThe continuity between the two regimes given by Eqs. \\ref{eq:SSC_rate_th} and \\ref{eq:ssc_rate_kn} is ensured by \\corr{an interpolation formula} that gives the complete expression of the SSC radiation:\n\n\\begin{equation}\n\\label{eq:ssc_continuity}\n\\cfrac{\\d n_{ssc}(\\epsilon_1)}{\\d \\epsilon_1 \\d t} = \\cfrac{ \\cfrac{\\d n_{ssc}^{th}(\\epsilon_1)}{\\d \\epsilon_1 \\d t} + x^n \\cfrac{\\d n_{ssc}^{kn}(\\epsilon_1)}{\\d \\epsilon_1 \\d t} }{1 + x^n}\n\\qquad \\text{with} \\quad x = \\epsilon_{1}\\epsilon_c\n.\\end{equation}\n\\corr{We used some examples to verify that the value $n = 6$ gives a correct approximation of the full cross section. 
However, the final results are relatively insensitive to the choice of $n$ since the various contributions are smoothed by the spatial convolution of the different parts of the jet.} \n\n\\subsubsection{External Compton radiation}\n\nThe calculation of the inverse-Compton emission on a thermal distribution of photons (a.k.a. external Compton) is the most demanding in terms of computation time, since it requires an integration over the energy and spatial distributions of the incoming photons, which are not produced locally. We note that the anisotropy of the incoming radiation is important and has to be properly taken into account for the computation of the emissivity and the bulk Lorentz factor of the plasma through the Compton rocket effect (see Section \\ref{sec:gamma_b}).\n\nSince all our external emission sources (disc, BLR, and torus) can be approximated by a blackbody energy distribution (or the sum of several blackbodies), some approximations are possible and have been proposed by \\cite{Dubus:2010ez}, \\cite{2013arXiv1310.7971K}, and \\cite{Zdziarski:2013ed} (hereafter ZP13). The method we developed and used in this paper is less precise than the one proposed by ZP13 but is at least twice as fast, and up to ten times faster in some cases. It approximates Planck's law for blackbody emission by a Wien-like spectrum. Details of the calculation and comparison with ZP13 are given in Appendix B.\\\\\n\nThe number of inverse-Compton photons per energy unit and time unit produced by the scattering of a thermal photon field on a single particle of energy $\\displaystyle \\gamma m_e c^2$ is then given by:\n\\begin{equation}\n\\label{eq:ic_wien_final}\n \\begin{aligned}\n \\frac{\\d N_{ec} (\\epsilon_1)}{\\d t \\d \\epsilon_1} = & \\frac{m^2_e c^4 r^2_e A \\epsilon' \\pi}{h^3 \\gamma^4 (1-\\beta\\cos\\theta_0)} \\frac{1}{(1-x)} \\left\\lbrace \\left( \\frac{2}{\\mathcal{H}} + 2 + x\\bar{\\epsilon}'\\right) e^{-\\mathcal{H}\/2} \\right.\\\\\n & \\left. 
- \\frac{}{} \\left(2+\\mathcal{H}\\right) \\left(E_1(\\mathcal{H}\/2)-E_1(2\\gamma^2 \\mathcal{H})\\right) \\right\\rbrace\n \\end{aligned}\n,\\end{equation}\nwith $\\displaystyle x = \\frac{\\epsilon_1}{\\gamma}$, $\\theta_0$ being the angle between the incoming photon and the particle direction of motion,\n$\\displaystyle \\mathcal{H}=\\frac{x}{\\bar{\\epsilon}'(1-x)}$, and $\\displaystyle E_1(x) = \\int_x^\\infty \\frac{\\exp(-t)}{t} \\d t$ being the exponential integral.\n\\corr{This relation takes into account the anisotropy of the emission through the angle $\\theta_0$ and the full Klein-Nishina regime.}\\\\\n\nEquation (\\ref{eq:ic_wien_final}) needs to be integrated over the pile-up distribution to get the complete spectrum. This can be time-consuming in the general case. However, \\corr{when applicable (we chose the conservative limit of $\\epsilon' < 0.01$)}, the expression greatly simplifies in the Thomson regime (see Appendix \\ref{sec:appendix_ec}) and one obtains:\n\\begin{equation}\n\\label{eq:ec_pileup_thomson}\n\\frac{\\d n_{ec}^{Th}}{\\d t \\d \\epsilon_{1}} = \\frac{\\pi m_{e}^{2}c^{4}r_{e}^{2}A\\epsilon_{1} }{h^{3}} \\frac{N_e} {2\\bar{\\gamma}^4 (1-\\mu_0) } \\: \\chi(s)\n,\\end{equation}\nwith \n\\begin{equation}\n\\label{eq:chi}\n\\begin{aligned}\n\\chi(s) = \\int_0^\\infty & \\frac{e^{-u}}{u^{2}} \n \\left\\lbrace \\left( \\frac{2u^2}{s} + 2 \\right) \\exp\\left(-\\frac{s}{2u^2}\\right) \\right. \\\\\n & \\left. - \\left( 2+ \\frac{s}{u^2} \\right) E_1 \\left(\\frac{s}{2u^2}\\right) \\right\\rbrace\n\\d u \n\\end{aligned}\n,\\end{equation}\nwith $\\chi(s)$ being a single-variable function. As such, it can be computed once and then tabulated and interpolated over when required. 
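The tabulate-then-interpolate strategy for $\\chi(s)$ can be sketched as follows (an illustrative snippet assuming SciPy; the integration bounds, grid extent, and sampling are arbitrary choices, not those of the actual code):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1   # E_1, the exponential integral

def chi_integrand(u, s):
    # integrand of the chi(s) expression, writing a = s / (2 u^2)
    a = s / (2.0 * u * u)
    return np.exp(-u) / u**2 * ((1.0 / a + 2.0) * np.exp(-a)
                                - (2.0 + 2.0 * a) * exp1(a))

# tabulate chi(s) once on a logarithmic grid ...
s_grid = np.logspace(-4, 2, 120)
chi_tab = np.array([quad(chi_integrand, 1e-6, 50.0, args=(s,), limit=200)[0]
                    for s in s_grid])

def chi(s):
    # ... then interpolate in log-log space whenever chi is needed
    return 10 ** np.interp(np.log10(s), np.log10(s_grid), np.log10(chi_tab))

print(chi(0.1), chi(1.0), chi(10.0))
```

The table is built once at start-up, so each later evaluation of $\\chi$ reduces to a cheap interpolation instead of a full numerical integration.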
As a result, \\corr{when the emission occurs in the Thomson regime,} the computation of the inverse-Compton spectra from the scattering of a thermal soft photon field on a pile-up distribution of electrons can be done much faster than with the usual complete numerical integration.\n\n\\subsection{Photon-photon absorption in the jet and induced pair creation}\n\nHigh-energy photons produced by inverse Compton will locally interact with low-energy photons produced by the synchrotron process (external radiation can be neglected locally). This photon-photon interaction induces an absorption that needs to be taken into account to compute the emitted flux, and a pair creation process loading the jet with leptons.\n\n\\subsubsection{Escape probability in the jet \\label{sec:abs_jet}}\n\nFor a photon of dimensionless energy $\\displaystyle \\epsilon_1= h \\nu_1\/m_e c^2$ in a photon field of density $n_{ph}(\\epsilon_2,\\Omega)$ per solid angle and per dimensionless energy, the probability of interacting with a soft photon of energy $\\epsilon_2$ over a length $\\d l$ is given by:\n\\begin{equation}\n\\label{eq:dTaugg}\n \\frac{\\d}{\\d l} \\tau_{\\gamma\\gamma}(\\epsilon_1) = \\int \\frac{1-\\mu}{2} \\, \\sigma(\\beta) \\, n_{ph}(\\epsilon_2, \\Omega) \\: \\d \\epsilon_2 \\: \\d \\Omega\n,\\end{equation}\nwith $\\sigma(\\beta)$ being the interaction cross-section \\citep{Gould:1967bj}:\n\\begin{equation}\n\\label{eq:sigma_gould}\n\\begin{aligned}\n& \\sigma(\\beta) = \\frac{3 \\sigma_{Th}}{16} \\left( 1 - \\beta^2 \\right)\n\\left[ (3 - \\beta^2) \\ln \\left( \\frac{1 + \\beta}{1-\\beta}\\right) - 2 \\beta (2 - \\beta^2) \\right] \\\\\n& \\beta(\\epsilon_1, \\epsilon_2, \\mu) = \\sqrt{1 - \\frac{2}{\\epsilon_1 \\epsilon_2 (1- \\mu)}}\n\\end{aligned}\n,\\end{equation}\nwith $\\mu$ being the cosine of the incident angle between the two interacting photons in their reference frame.\\\\\n\nIf we make the approximation that the synchrotron emission is locally 
isotropic in the plasma rest frame, the optical depth per unit length $\\d l$ simplifies and is then given by \\citep{Coppi:1990tn}:\n\\begin{equation}\n\\frac{\\d}{\\d l} \\tau_{\\gamma\\gamma}^{jet}(\\epsilon_{1}) = \\frac{1}{c} \\int \\d \\epsilon_2 n_{ph}(\\epsilon_2) R_{pp}(\\epsilon_1 \\epsilon_2)\n,\\end{equation}\nwith $R_{pp}$ being the angle-averaged pair production rate ($\\mathrm{cm^{3}\\,s^{-1}}$).\n\nAnalytical approximations of $R_{pp}$ have been proposed by \\cite{Coppi:1990tn} and \\cite{Sauge:2004ep} but they still require a time-consuming integration over all energies. In order to simplify numerical calculations, as the function $R_{pp}$ is peaked around its maximum $R_{pp}^{max}$ occurring at $x = x_{max}$, we make the approximation:\n\\begin{equation}\nR_{pp}(x) = R_{pp}^{max} \\, \\delta(x - x_{max})\n\\end{equation}\nwith\n\\begin{equation}\n\\begin{aligned}\nR_{pp}^{max} & = 0.283 \\, \\frac{3}{4} c \\sigma_{Th} \\\\\nx_{max} & = 3.5563 ,\n\\end{aligned}\n\\end{equation}\nwith $\\sigma_{Th}$ being the Thomson cross-section.\\\\\n\nThe optical depth in the jet finally simplifies to\n\\begin{equation}\n\\label{eq:dtau_approx}\n \\frac{\\d}{\\d l} \\tau_{\\gamma\\gamma}^{jet}(\\epsilon_{1}) = \\frac{1}{c} \\frac{R_{pp}^{max}}{\\epsilon_{1}} n_{ph} \\left( \\frac{x_{max}}{\\epsilon_{1}} \\right)\n.\\end{equation}\n\nAssuming that the absorption coefficient is constant at a given altitude $z$ in the jet of radius $R(z)$, the opacity can be calculated as\n\\begin{equation}\n\\label{eq:tau_jet}\n\\tau_{jet}(\\epsilon_1, z) = R(z) \\frac{\\d \\tau^{jet}_{\\gamma \\gamma}}{\\d l} \\left(\\epsilon_{1},z \\right)\n.\\end{equation}\n\nSolving the transfer equation in the plane-parallel approximation for photons absorbed in-situ gives their escape probability $\\displaystyle \\mathscr{P}_{esc}^{jet}$:\n\\begin{equation}\n \\label{eq:escape_jet}\n \\mathscr{P}_{esc}^{jet}(\\epsilon_1, z) = \\left( \\frac{1-\\exp \\left(-\\tau_{jet}(\\epsilon_1, z) 
\\right)}{\\tau_{jet}(\\epsilon_1, z)} \\right)\n.\\end{equation}\nThis escape probability is useful for computing the pair production rate inside the jet. However, another absorption factor must be considered in the vicinity of the jet as discussed in Sect. \\ref{sec:abs_out}, since the photon density does not vanish abruptly outside the jet.\n\n\\subsubsection{Pair creation}\n\nPair production is a direct consequence of the $\\gamma-\\gamma$ absorption in the jet. A photon of high dimensionless energy $\\epsilon > 1$ interacts preferentially with a soft photon of energy $\\epsilon_s \\approx 1\/\\epsilon$ to form a leptonic pair $e^+\/e^-$. Both created particles have the same energy $\\gamma m_e c^2$ (with $m_e$ the electron mass) and one can write the energy conservation as $\\epsilon + \\epsilon_s \\approx \\epsilon = 2\\gamma$.\n\nTherefore, at a given altitude $z$, the pair production rate $\\dot{n}_{prod}$ is given by\n\\begin{equation}\n \\label{eq:nprod}\n \\dot{n}_{prod}(z) = 2 \\int \\d \\epsilon_{1} \\left. \\frac{\\d n(\\epsilon_{1})}{\\d \\epsilon_{1} \\d t}\\right|_z (1-\\mathscr{P}_{esc}^{jet}(\\epsilon_1, z))\n.\\end{equation}\n\nThis pair creation is then taken into account in the evolution of the particle density. The created particles are not assumed to cool freely; rather, they are constantly reaccelerated by the turbulence. We assume that the acceleration process is fast enough to maintain a pile-up distribution, without considering the perturbation to the energy distribution introduced by the pair creation.\n\n\\subsection{\\label{sec:distr_evol}Evolution of the particle distribution}\n\nThe pile-up distribution has two parameters, the \\emph{particle density} $n_{0}(z)$ and the \\emph{particles' characteristic energy} $\\bar{\\gamma}(z) m_{e}c^{2}$ (the particles' mean energy being given by $3 \\bar{\\gamma} m_{e} c^{2}$). 
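As a quick numerical check of the last statement, the mean Lorentz factor of a pile-up distribution $n(\gamma) \propto \gamma^2 \exp(-\gamma/\bar{\gamma})$ is indeed $3\bar{\gamma}$; a minimal sketch (the normalization convention used below is an assumption of this illustration, chosen so the density integrates to $n_0$):

```python
import math

def pileup(gamma, n0, gbar):
    """Pile-up (quasi-Maxwellian) distribution n(gamma); the normalization
    n0 * gamma^2 / (2 gbar^3) * exp(-gamma/gbar) is assumed here."""
    return n0 * gamma * gamma / (2.0 * gbar**3) * math.exp(-gamma / gbar)

def density_and_mean(n0, gbar, npts=20000, gmax=50.0):
    """Trapezoidal integration of the density and mean Lorentz factor."""
    h = gmax * gbar / npts
    dens = first = 0.0
    for i in range(npts + 1):
        g = i * h
        w = 0.5 if i in (0, npts) else 1.0
        f = w * pileup(g, n0, gbar)
        dens += f
        first += g * f
    return dens * h, first / dens  # mean = (first*h)/(dens*h)
```

For any $\bar{\gamma}$, the recovered mean is $3\bar{\gamma}$, consistent with the mean energy $3\bar{\gamma} m_e c^2$ quoted above.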
Both parameters are not imposed but are computed consistently all along the jet, as detailed below.\n\n\\subsubsection{Evolution of the particles' characteristic energy}\n\nThe particles' characteristic energy $\\bar{\\gamma}(z) m_{e} c^{2}$ results from the balance between heating by the turbulence and radiative cooling. The energy equilibrium for each particle in the comoving frame can be written as follows.\n\\begin{equation}\n\\label{eq:energy_eq_particle}\n\\frac{\\partial}{\\partial t} \\left( \\bar{\\gamma} m_e c^2\\right) = \\frac{1}{\\Gamma_b} \\left( Q_{acc} m_e c^2 - P_{cool}\\right)\n,\\end{equation}\n\nwith $Q_{acc}$ ($\\mathrm{s^{-1}}$) being the acceleration parameter (see Eq. \\ref{eq:Qz}) and $P_{cool}$ being the emitted power derived from the synchrotron, SSC, and EC emission. We consider here that the cooling is efficient and occurs mainly in the Thomson regime, neglecting the cooling in the Klein-Nishina regime. In the case of an isotropic distribution of photons, \\cite{1986rpa..book.....R} showed that the emitted power at the particles' characteristic energy is $\\displaystyle P = \\frac{4}{3} c \\sigma_{T} U \\left({\\bar{\\gamma}}^{2}-1\\right)$, with $U$ being the total energy density of the photon field. One must compute the contribution of each energy density corresponding to each emission process: $\\displaystyle U = U_B + U_{syn} + U_{ext}$.\n\n\nFor the synchrotron emission, we consider an isotropic magnetic field. \\cite{1986rpa..book.....R} gives\n\\begin{equation}\n U_{B} = \\frac{B^{2}}{8\\pi}\n\\label{eq:ub}\n.\\end{equation}\n\n\nThe power emitted through SSC is computed considering the effective energy density of the synchrotron photon field corresponding to the Thomson regime (using a cut-off frequency $\\nu_{kn}$). 
As the synchrotron emission is isotropic and co-spatial with the SSC emission, one obtains\n\\begin{equation}\n U_{syn} = 4\\pi \\frac{m_e c}{h} \\int_{0}^{\\epsilon_{kn}} I_{\\epsilon}^{syn}(\\epsilon) \\mathrm{d} \\epsilon \\qquad \\text{with} \\quad \\epsilon_{kn} = \\frac{1}{\\bar{\\gamma}}\n \\label{eq:usyn}\n,\\end{equation}\nwith $I_{\\epsilon}^{syn}$ being the local synchrotron specific intensity.\\\\\n\nThe external photon field is not isotropic and one should integrate over all directions to derive the power emitted through external inverse Compton. To ease numerical computation, we assume that the total emitted power at each altitude $z$ can be approximated by the power emitted in the direction perpendicular to the jet axis and integrated over 4$\\pi$: \n\\begin{equation}\nU_{ext} = \\frac{3}{4 c \\sigma_{T} \\left(\\bar{\\gamma}^{2}-1\\right) } \\int_0^{\\epsilon_{kn}} \\epsilon \\left. \\frac{\\d N}{\\d t \\d \\epsilon} \\right|_{\\mu = 0} \\: \\d \\epsilon\n,\\end{equation}\n\nwith $\\displaystyle \\frac{\\d N}{\\d t \\d \\epsilon} \\left(t, \\epsilon \\right)$ being the emitted external Compton spectrum in the comoving frame.\nFinally, we compute the characteristic particle Lorentz factor by numerically solving the following energy equation in the comoving frame:\n\\begin{equation}\n\\label{eq:balance}\n\\begin{aligned}\n \\frac{\\partial\\bar{\\gamma}}{\\partial t}(z,t) & = \\delta_{b}(z) \\left[Q_{acc}(z) \\right.\\\\\n & - \\left. \\frac{4}{3} \\frac{\\sigma_{T}}{m_{e}c} \\left( U_{B} + U_{syn}(\\bar{\\gamma}) + U_{ext}(\\bar{\\gamma}) \\right)\\left( \\bar{\\gamma}^{2} -1 \\right) \\right]\n\\end{aligned}\n.\\end{equation}\n\nThis equation is solved in the stationary regime in each slice of the jet using a Runge-Kutta method of order 4. \n\n\\subsubsection{Evolution of the particle density}\n\nThe particle density evolves with pair production and annihilation. 
As the particles move forward in the jet, one can apply flux conservation to compute the evolution of the density along the jet.\n\nIn the absence of pair production, the particle flux $\\displaystyle \\Phi_e(z,t) = \\int n_{e}(\\gamma,z,t) \\: \\pi R^{2}(z) \\: \\Gamma_{b}(z) \\, \\beta_{b}(z) \\, c \\: \\d \\gamma$ is conserved.\n\nBy generalizing the standard continuity equation, \\cite{Boutelier:2009uh} showed that for a stationary jet structure, the flux conservation can be written as follows.\n\\begin{equation}\n\\label{eq:flux_conservation}\n D_{\\beta_b}\\Phi_e = \\frac{\\partial}{\\partial t} \\Phi_e + c \\beta_b \\frac{\\partial}{\\partial z} \\Phi_e = \\beta_b S \\dot{n}_{prod}\n,\\end{equation}\n\nwith $S(z) = \\pi R^{2}(z)$ being the cross-section of the jet and $\\dot{n}_{prod}$ the pair production rate given by equation \\ref{eq:nprod}.\nIn the stationary case, one simply solves $\\displaystyle \\frac{\\partial}{\\partial z}\\Phi_e = \\frac{1}{c} S \\dot{n}_{prod}$.\n\nAs new particles are created in a jet filled with turbulence, they are accelerated and emit more radiation, creating yet more particles. This process is highly non-linear and can lead to explosive events. In the two-flow paradigm, this can lead to fast and powerful flares, as the particle density will increase as long as the energy reservoir is not emptied.\n\n\n\\subsection{Evolution of the bulk Lorentz factor of the jet\\label{sec:gamma_b}}\n\n\\corrtwo{As shown by \\citealt{1981ApJ...243L.147O}, radiative forces act on the dynamics of a hot plasma through the Compton rocket effect. 
In the present case, the timescale of the Compton rocket effect is equal to the inverse-Compton scattering time of a photon field of energy density $\\displaystyle U_{ph}$ on a particle of energy $\\displaystyle \\gamma m_e c^2$ and can thus be evaluated as:}\n\\begin{equation}\nt_{IC} = \\frac{3 m_e c^2}{4 c \\sigma_T \\gamma U_{ph}}\n.\\end{equation}\nIn the inner regions of powerful AGNs, the photon field energy density at a distance $z$ in the jet can be evaluated as\n\\begin{equation}\nU_{ph} \\approx \\frac{L_{disc}}{4\\pi z^2 c} \\approx \\frac{L_{edd}}{4 \\pi z^2 c} \\approx \\frac{R_g c^2 m_p}{z^2 \\sigma_T}\n,\\end{equation}\nwith $\\displaystyle L_{edd} = 4 \\pi G M m_p c \/\\sigma_T$ being the Eddington luminosity and $R_g = GM \/ c^2$ the gravitational radius.\n\nThis allows us to compare the inverse-Compton scattering time with the dynamical time of the system $t_{dyn} = z\/c$ as\n\\begin{equation}\n\\label{eq:tic_tdyn}\n\\frac{t_{IC}}{t_{dyn}} = \\frac{m_e}{m_p} \\frac{z}{R_g} \\frac{1}{\\gamma}\n.\\end{equation}\n\n\\corrtwo{The bulk Lorentz factor of the plasma results from a balance between radiative and dynamical forces. However, as shown here, in the inner parts of an AGN, the inverse-Compton scattering time is several orders of magnitude shorter than the dynamical time. 
In this case, a purely leptonic plasma is strongly tied to the photon field and its dynamics must be imposed by the inverse-Compton scattering, as other forces (such as MHD forces or the interaction with the external flow), acting on timescales of the order of the dynamical time, are too slow to counteract this force.}\n\n\nTherefore, the bulk Lorentz factor of the jet $\\Gamma_{b}(z)$ is not a free parameter in the model but is imposed by the Compton rocket force and thus by the external photon fields.\n\nAs shown in \\cite{Vuillaume:2015jv} (hereafter Vu15), as long as most of the emission happens in the Thomson regime, the computation of the resulting $\\Gamma_{b}(z)$ depends exclusively on the distribution of the external photon field in the inner parts of the jet, whereas its final value $\\Gamma_{\\infty}$, reached at parsec scales \\corr{when the bulk acceleration stops or becomes ineffective}, depends on the jet energetics. Indeed, acceleration through the Compton rocket effect ceases when the scattering time in the rest frame of the plasma becomes greater than the dynamical time.\n \nWe use the method described in Vu15 to determine the equilibrium bulk Lorentz factor $\\Gamma_{eq}(z)$ imposed by the Compton rocket effect, taking into account only the geometry of the external photon fields in the Thomson regime. 
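As an order-of-magnitude illustration of Eq. (\ref{eq:tic_tdyn}), with purely illustrative numbers:

```python
M_E_OVER_M_P = 1.0 / 1836.15267343  # electron-to-proton mass ratio

def tic_over_tdyn(z_over_rg, gamma):
    """Ratio t_IC / t_dyn = (m_e/m_p) * (z/R_g) / gamma of Eq. (eq:tic_tdyn)."""
    return M_E_OVER_M_P * z_over_rg / gamma

# Inner jet (z ~ 100 R_g, gamma ~ 1e3): the ratio is a few 1e-5, so the
# Compton rocket effect dominates the plasma dynamics there.
ratio_inner = tic_over_tdyn(100.0, 1.0e3)
```

With $z \sim 100\,R_g$ and $\gamma \sim 10^3$ the ratio is a few $10^{-5}$; at large distances or low particle energies it exceeds unity and the coupling to the photon field becomes ineffective.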
We make the assumption that most of the inverse-Compton scattering always happens in the Thomson regime, an assumption that can be verified a posteriori for each modelled object.\n\n\nFrom there, the evolution of $\\Gamma_b(z)$ is determined by solving the differential equation (Vu15)\n\\begin{equation}\n\\frac{\\partial \\Gamma_b(z,\\bar{\\gamma})}{\\partial z} = - \\frac{1}{l(z,\\bar{\\gamma})} \\left(\\Gamma_b(z,\\bar{\\gamma}) - \\Gamma_{eq}(z)\\right)\n,\\end{equation}\nwith $l(z, \\bar{\\gamma})$ being the relaxation length to equilibrium, which can be written as\n\\begin{equation}\nl(z,\\bar{\\gamma}) = \\frac{3 m_e c^3}{8 \\pi \\sigma_T} \\frac{\\beta_{eq}^3\\Gamma_{eq}^3}{\\bar{\\gamma} H}\\left(1+\\frac{1}{3\\Gamma_{eq}^2}\\right)\n,\\end{equation}\nwith $\\displaystyle H = \\frac{1}{4\\pi}\\int I_{\\nu_s} (\\Omega_s) \\mu_s \\mathrm{d} \\Omega_s \\mathrm{d} \\nu_s$ being the Eddington parameter, proportional to the net energy flux of the external photon fields on the jet axis.\n\nIn order to ease numerical computation, we do not solve the complete differential equation. The relaxation length is determined at each step and, as long as $l(z,\\bar{\\gamma}) \\ll z$ (equivalent to $t_{IC} \\ll t_{dyn}$), one can consider that $\\displaystyle \\Gamma_b(z) = \\Gamma_{eq}(z)$. At high $z$ in the jet, or when $\\bar{\\gamma}$ decreases enough, one has $l(z,\\bar{\\gamma}) \\gg z$. In this case, the jet moves ballistically and its bulk Lorentz factor reaches a constant value $\\Gamma_\\infty$. Numerical tests showed that $\\Gamma_{b}(z)$ reaches $\\Gamma_\\infty$ when $l(z)\/z \\approx 0.6$. At this point we fix $\\Gamma_b(z) = \\Gamma_\\infty$.\n\n\\corrtwo{The bulk Lorentz factor then reaches an asymptotic value corresponding to ballistic motion of the plasma. 
In the present model, we assume that the jet follows such a ballistic motion and is not affected by external factors (such as the interstellar medium or two-flow interactions).}\n\n\\subsection{Absorption outside the jet \\label{sec:abs_out}}\n\nBetween the jet and the observer, a photon encounters several radiation fields causing absorption through photon-photon interaction. Two main sources of external radiation should be considered here: the immediate vicinity of the jet and the extragalactic background light (EBL), the absorption from the host galaxy being negligible.\n\n\\subsubsection{Extragalactic background light}\nThe extragalactic background light (EBL) is composed of several sources: the CMB, the diffuse UV-optical background produced by the integrated emission from luminous AGNs and stellar nucleosynthesis, and the diffuse infrared background composed of the emission from stellar photospheres and the thermal emission from dust. Here we use the absorption tables from \\cite{Franceschini:2008em}. This introduces another term, $\\exp(-\\tau_{ebl})$, to the escape probability, which depends on the source redshift $z_s$.\n\n\n\\subsubsection{Absorption in the jet vicinity}\n\nThe immediate vicinity of the jet is composed of several sources of soft photons.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.55\\hsize]{figures\/absorption.png}\n\\caption{\\label{fig:absorption_sketch}\nSketch of the computation of the absorption from external sources. Photons from all external sources (accretion disc, BLR and torus) interact with photons emitted by the jet. 
For every emitting zone at $z_0$, one needs to integrate the absorption over the path of the gamma-ray photons to the observer and over the incoming directions of the thermal photons.}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.96\\hsize]{figures\/tau_sources2.pdf}\n \\caption{Integrated $\\tau_{ext}$ created by the three external sources (disc, torus, BLR) and experienced by a photon of frequency $\\nu$ leaving the jet from an altitude $z_0$ and travelling along the jet axis to infinity. Parameters of the sources: \\textit{Disc:} $L_d = 0.2 L_{edd}$, $R_{in} = 3R_S$ and $R_{out} = 5\\times10^{4} R_S$. \\textit{Torus:} $R_t = 5\\times10^{4} R_S$ and $D_t = 10^{5} R_S$. \\textit{BLR:} $R_{blr} = 8\\times10^{3} R_S$, $\\omega \\in [0:\\pi\/2]$, $\\alpha_{blr}=0.01.$}\n\\label{fig:tau_sources}\n\\end{figure}\n\n\n\nFirst, we consider the synchrotron photons from the jet itself. \\cite{1995MNRAS.277..681M} showed that these photons interact over a typical distance corresponding to the jet radius $R$, which introduces an absorption term $\\exp(-\\tau_{jet})$ with $\\tau_{jet}$ given by equation \\ref{eq:tau_jet}. \n\n\n\n\nOnce a photon has escaped the jet photon field, it enters the external photon field generated by the external sources described in Sect. \\ref{sec:ext_sources}. The induced opacity is generally greater than one in FSRQs and therefore cannot be neglected.\n\nAs the external photon field resulting from these sources is highly anisotropic, one cannot use the approximation used in Sect. \\ref{sec:abs_jet}. 
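The absorption terms introduced in this section enter the observed flux through escape probabilities (Eqs. \ref{eq:escape_jet} and \ref{eq:escape_tot}); evaluating them numerically only requires care with the slab factor as $\tau \to 0$. A minimal sketch (the opacity values in the example are illustrative, not fitted ones):

```python
import math

def slab_escape(tau):
    """In-situ slab escape factor (1 - exp(-tau))/tau of Eq. (eq:escape_jet),
    with a Taylor expansion to avoid 0/0 as tau -> 0."""
    if tau < 1e-6:
        return 1.0 - tau / 2.0 + tau * tau / 6.0
    return (1.0 - math.exp(-tau)) / tau

def total_escape(tau_jet, tau_ext, tau_ebl):
    """Total escape probability of Eq. (eq:escape_tot): in-situ slab factor
    times line-of-sight attenuation by the jet vicinity, the external
    fields, and the EBL."""
    return slab_escape(tau_jet) * math.exp(-(tau_jet + tau_ext + tau_ebl))

# Illustrative opacities only:
p_esc = total_escape(0.3, 1.5, 0.1)
```

The observed specific intensity is then simply the emitted one multiplied by this probability.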
\\corr{To compute the opacity $\\tau_{ext}$ experienced by a photon emitted from the jet at an altitude $z_0$, one needs to integrate the complete $\\gamma\\gamma$ absorption given by equation \\ref{eq:dTaugg} over the path followed by the photons from $z_0$ to infinity (here $M$ denotes the position along that path):\n\\begin{equation}\n\\label{eq:abs_ext_particle_path}\n\\tau_{ext}(z_0, \\epsilon_1) = \\int \\d l(M) \\int \\d \\Omega(M) \\int \\d \\epsilon_2 \\frac{1-\\cos\\theta}{2} \\, \\sigma(\\beta) \\, n_{ph}(\\epsilon_2, \\Omega)\n.\\end{equation}\n\nThe terms in equation \\ref{eq:abs_ext_particle_path} are detailed in equation \\ref{eq:dTaugg}. The angle $\\theta(M)$ between the direction of the photon from the jet and the photons from the thermal sources is illustrated in Fig. \\ref{fig:absorption_sketch}. \n}\n\nAs an example, the result of this \\corr{numerical integration, carried out for a range of energies and photon-emitted altitudes, is given in Fig. \\ref{fig:tau_sources}. 
The figure represents isolines of photon-emitted altitude $z_0$.\nIn order to validate our approach, we chose an observing angle $i_{obs}=0$ and the same source parameters as in \\cite{2007ApJ...665.1023R}, and found consistent results.}\\\\\n\n\n\\corr{Considering the different sources of absorption described above,} the total escape probability of photons produced in the jet is then given by\n\\begin{equation}\n \\label{eq:escape_tot}\n \\mathscr{P}_{esc}^{tot} = \\left( \\frac{1-\\exp \\left(-\\tau_{jet} \\right)}{\\tau_{jet}} \\right) \\exp \\left( -(\\tau_{jet}+\\tau_{ext}+\\tau_{ebl}) \\right)\n,\\end{equation}\n\nand the specific intensity reaching the observer is thus given by $\\displaystyle I_{obs}(\\epsilon) = I_{jet}(\\epsilon) \\mathscr{P}_{esc}^{tot}(\\epsilon, z_s).$\n\n\n\n\n\n\n\n\n\\section{Example of application: the quasar 3C 273 \\label{sec:3c273}}\n\n3C 273 was the first quasar to be identified, thanks to its relatively close distance (redshift $z = 0.158$, \\citealt{Schmidt:1963kd}), and has been extensively observed and studied. The broadband SED data used here come from \\cite{1999A&AS..134...89T}; they are averaged over 30 years of observations and are thus likely to represent the average state of the AGN. This quasar represents a good test for the model, as this average state can be associated with a quiescent state that we can model. Moreover, it is an FSRQ with relatively well-known external sources, allowing us to test the complete model with good constraints on these sources while imposing the values of $\\Gamma_b(z)$ through the Compton rocket effect (see section \\ref{sec:gamma_b}). We detail our modelling of this source in the following section.\n\n\n\\subsection{Modelling}\n\n\n\\corr{ The accretion disc is clearly visible in the SED in Fig. \\ref{fig:3C273_SED} as the big `blue bump' in the optical band. 
For our modelling, we make a best fit of this bump (between $2.4 \\times 10^{14}$ Hz and $2.4 \\times 10^{15}$ Hz) considering the temperature given by a standard accretion disc (see equation \\ref{eq:T_disc}), with two free parameters: $R_S = \\frac{1}{3} R_{isco}$ and the total luminosity of the disc, $L_{disc}$. We obtain $R_S = 5.1 \\times 10^{14}\\,\\mathrm{cm}$ and $L_{disc} = 1.7 \\times 10^{46}\\,\\mathrm{erg\\,s^{-1}}$ ($\\bar{\\chi}^2 = 0.2$).\nThe modelling of the disc is shown in orange in Fig. \\ref{fig:3C273_SED}.}\n\nThe signature of the dusty torus is perceptible as a bump in the infrared band. The parameters of the torus are chosen to be in agreement with the position and luminosity of this bump. The resulting size of the torus is $R_T = 10^4 R_{S}$ and $D_T = 1.5 \\times 10^4 R_{S}$.\n\nStrong emission lines are observed, indicating the presence of a BLR. To model the BLR, we chose its radius from the value of \\cite{Paltani:2005gk}, who derived a size of $R_{blr}\/c = 986$ days by studying the lag of the Balmer lines. The BLR opening is set to an ad hoc value $\\omega_{max} = 35 \\degree$ \\corr{(there are no strong constraints on $\\omega_{max}$; however, it must be large enough for the BLR to lie outside the MHD jet)}. Its luminosity was set with $\\alpha_{blr} = 0.1$, derived from observations \\citep{Celotti:1997fi}.\n\n\\corr{\nEmission attributed to a hot corona is observed in the SED of 3C 273 in the X-ray band. \nFollowing \\cite{Haardt:1998tj}, who used a model of thermal Comptonization to reproduce the hot corona emission, we add an emission component following a power law of photon index $\\Gamma =1.65$ between $5\\times10^{15}$ Hz and $5\\times10^{19}$ Hz (0.02--200 keV) as an extension of the accretion disc, represented in Fig. \\ref{fig:3C273_SED}. 
These photons contribute as well to the inverse-Compton emission and to the Compton rocket process.}\n\n\\corr{\nSuperluminal motion has been observed in 3C 273 by \\cite{Pearson:1981gh}, proving the existence of relativistic motion in this jet. The deduced apparent velocity is $v_{app} \\approx 7.5 c$\\footnote{The authors computed an apparent velocity of $\\sim 9.5c$ assuming a value of the Hubble constant $H_0 = 55\\,\\mathrm{km\\,s^{-1}\\,Mpc^{-1}}$, which is now known to be closer to $70\\,\\mathrm{km\\,s^{-1}\\,Mpc^{-1}}$.}. This imposes constraints on the observation angle, and one can determine that $i_{obs} < 15\\degree$ (or $\\cos i_{obs} > 0.96$). Here we fixed $i_{obs} = 13\\degree$, in agreement with these constraints.}\n\n\n\\corr{A best fit of the model on the average data from \\cite{1999A&AS..134...89T} is performed. Such a fit is a difficult task in this case, as the model is computationally expensive and has many parameters. We developed a method using a combination of genetic algorithms and gradient methods. 
As the source parameters have been fixed by observations, the free parameters of the model used for the fit are $R_0$, $n_0$, $B_0$, $Q_0$, $\\lambda$, $\\omega$ and $\\zeta$.\n \n\n}\n\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{c c | c c | c c }\n\\toprule\n$R_{S}$ (cm) & $5.3\\times10^{14}$ & $Z_c\/R_S$ & $10^9$ & $L_{disc}$ ($\\mathrm{erg\\,s^{-1}}$) & $ 1.7 \\times 10^{46}$\\\\\n$ R_{in}\/R_{S} $ & 3 & $R_0\/R_S$ & $7.5$ & $n_0$ ($\\mathrm{cm^{-3}}$) & $4.5 \\times 10^3$ \\\\\n$R_{out}\/R_{S}$ & $5 \\times 10^3$ & $\\omega_{max} $ & $35\\degree$ & $B_0$ & 12 G\\\\\n$D_T\/R_{S}$ & $1.5 \\times 10^4$ & $\\alpha_{blr} $ & 0.1 & $Q_0$ & $0.03\\,\\mathrm{s^{-1}}$ \\\\\n$R_T\/R_{S}$ & $10^4$ & $i_{obs}$ & 13\\degree & $\\lambda$ & 1.4 \\\\\n$R_{blr}\/R_{S}$ & $4.8 \\times 10^3$ & $z$ & 0.158 & $\\omega$ & 0.5 \\\\\n$Z_0\/R_{S}$ & $2 \\times 10^3$ & & & $\\zeta$ & 1.52 \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\caption{\\label{tab:3C273_param}Parameters corresponding to the modelling of 3C 273.\n$R_S = 2GM\/c^2$ is the Schwarzschild radius deduced from a best fit of the optical emission. $R_{in}$ and $R_{out}$ are the inner and outer radii of the disc. $D_T$ and $R_T$ are the geometrical parameters of the torus. $R_{blr}$ is the BLR radius. $Z_0$, $R_0$, and $Z_c$ are geometrical parameters of the jet (see section \\ref{sec:jet_geometry}). $\\omega_{max}$ corresponds to the BLR opening and $\\alpha_{blr}$ to its luminosity fraction with respect to that of the disc, $L_{disc}$. The value $z$ is the redshift of 3C 273. The factors $n_0$, $B_0$, $Q_0$, $\\lambda$, $\\omega$ and $\\zeta$ are free parameters of the jet model, as described in section \\ref{sec:jet_geometry}.}\n\\end{table}\n\n\n\\begin{figure}[ht]\n\\begin{center}\n \\includegraphics[width=\\hsize]{figures\/SED_final.pdf}\n\\caption{SED of 3C 273: modelling compared to data from \\cite{1999A&AS..134...89T}. The synchrotron emission is shown in blue, the SSC in green, and the external Compton in purple. 
The torus emission is shown in red and the multicolour accretion disc in orange (filled orange curve), complete with a power law describing the hot corona emission between 0.02 and 200 keV (dashed orange line). The different emission zones in the jet are represented with different dotted lines. The emission below $10^9$ Hz, not reproduced by the model, is interpreted as the emission from the jet hot spot, which is not modelled here.}\n\\label{fig:3C273_SED}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=1\\hsize, trim = 0cm 0cm 0cm 1cm, clip=true]{figures\/final_params_1.pdf}\n \\includegraphics[width=1\\hsize, trim = 0cm 0cm 0cm 1cm, clip=true]{figures\/3C273_params2.pdf}\n \\includegraphics[width=1\\hsize, trim = 0cm 0cm 0cm 1cm, clip=true]{figures\/3C273_equipartition.pdf}\n\\caption{Evolution of the physical parameters as a function of the distance in the jet for the modelling of 3C 273.}\n\\label{fig:3C273_params}\n\\end{figure}\n\n\\subsection{Results and discussion}\n\n\n\\corr{ With the seven free parameters, the fit gives a reduced $\\bar{\\chi}^2 = 4.92$. The model accurately reproduces the entire SED of 3C 273 from radio to gamma-rays, except for the far radio below $10^8$ Hz. These points show a break in the spectrum and are interpreted as the signature of the jet hot spot at very large scales. This hot spot is not meant to be modelled here, so it is not surprising that these points are not well fitted by the best-fit model; if we do not take them into account, the reduced $\\chi^2$ drops to $\\bar{\\chi}^2 = 1.3$, which is more representative of the quality of the fit obtained.}\\\\\n\n\\corr{\nThe mass and accretion rate of the black hole can be deduced from the radius $R_{S}$ and the luminosity $L_{disc}$ determined previously, depending on the BH spin. 
For a Schwarzschild (non-spinning) black hole, $M_\\bullet = R_{isco} c^2\/ 6G = 1.8 \\times 10^{9} M_\\odot$ and the reduced accretion rate is $\\dot{m} = \\dot{M}\/\\dot{M}_{edd} = 0.08$.\nFor a maximally spinning Kerr black hole, however, $M_\\bullet = R_{isco} c^2\/ 2G = 5.4 \\times 10^9 M_\\odot$ and $\\dot{m} = 0.027$. These mass values are in agreement with those derived from observations: \\cite{Paltani:2005gk} determined $M_\\bullet = (5.69 - 8.27) \\times 10^9 M_\\odot$ using the reverberation method on the Ly-$\\alpha$ and $C_{IV}$ lines and $M_\\bullet = (1.58-3.45) \\times 10^9 M_\\odot$ using the Balmer lines, whereas \\cite{Peterson:2004ig} determined a mass of $M_\\bullet = (8.8 \\pm 1.8) \\times 10^9 M_\\odot$ using a large reverberation-mapping database.}\\\\\n\n\nIn the jet, the low-energy emission (radio to optical) is produced by the synchrotron process. The synchrotron emission of the inner jet (below $10^3 R_S$) falls in the optical and is hidden by the emission from the accretion disc and the dusty torus. Moving further along the jet, the synchrotron peak shifts to progressively lower frequencies. Finally, the whole jet from $10^3 R_S$ to $10^9 R_S$ is necessary to reproduce the power-law-like radio spectrum. Its slope is determined by different factors: the increase of the jet radius, the decrease of the magnetic field and of the particle heating, and the bulk Lorentz factor. The spectrum of 3C 273 shows a break at $\\sim 10^9$ Hz, which is poorly fitted by our model and is thought to be the result of synchrotron emission produced by the extended radio structure (lobe + hot spot). \\corrtwo{It could also reveal the limitations of our assumption of ballistic motion of the jet at very large distances. A deceleration of the jet could result in the observed break in the spectrum.}\\\\\n\nThe high-energy emission is produced by inverse-Compton processes, either on synchrotron photons (SSC) or on thermal photons (EC). 
Similarly to the synchrotron emission, the highest energies are produced close to the central engine, and the further regions emit at lower energies. In particular, the spectrum at $\\nu > 10^{21}$ Hz is produced by regions at $z < 10^3 R_S$ with a combination of SSC and EC. However, the X-rays and soft $\\gamma$-rays are produced further out, at $z > 10^3 R_S$, by SSC.\\\\\n\nFigure \\ref{fig:3C273_params} shows the evolution along the jet of the different model parameters. One can first see that the jet reaches ballistic motion ($\\Gamma_b(z) = \\Gamma_{\\infty}$) at $z = 10^4 R_S$. From this point on, there is no further pair creation, the density decreases only by dilution (due to the increase of the jet radius), and all parameters follow a very smooth evolution, corresponding to the almost featureless spectrum below $10^{13}$ Hz.\\\\\n\nConcerning the geometrical parameters, the spine geometry (see equation \\ref{eq:Rz}) follows a parabola with a radius $R(z) \\propto \\sqrt{z}$. The jet opening can be evaluated by $\\displaystyle \\tan \\theta_{jet} = \\frac{R}{z}$ and goes from $2\\times10^{-3}$ rad below 1 parsec to about $10^{-5}$ rad at 1 kpc, which makes it a very narrow jet (these values concern only the spine jet here, which is collimated by the outer MHD jet in the two-flow paradigm). However, the jet radius is important only to compute the SSC radiation, the other processes depending only linearly on the number of particles: if the jet were to widen after the SSC emitting region, this would not affect the overall spectrum.\\\\\n\nThe particle density (middle curve of the top plot in Fig. \\ref{fig:3C273_params}) evolves along the jet and undergoes important pair creation below $10^3 R_S$. This is also the region where most of the high-energy emission is produced. This is in good agreement with the two-flow paradigm, as flares (which can be extreme at high energies) are explained by intense phases of pair creation. 
Slight changes in the particle acceleration could induce large changes in the pair creation, which is a highly non-linear process, in turn creating flare episodes. A time-dependent model such as the one described in \\cite{Boutelier:2008bga} is currently used to describe these flares.\\\\\n\nThe particles' mean Lorentz factor $\\bar{\\gamma}$ varies between 600 and 1800; it increases and decreases rapidly below $10^3 R_S$, following changes in the bulk Lorentz factor $\\Gamma_b$. This is due to changes in the cooling induced by changes in the Doppler aberration in the plasma rest frame and in the particle density along the jet. Further along the jet, $\\bar{\\gamma}$ slowly increases but, due to the important decrease in the particle density, the particle energy density also decreases (dotted line in the last plot in Fig. \\ref{fig:3C273_params}).\\\\\n\nThe last plot in Fig. \\ref{fig:3C273_params} represents the energy density in the particles and in the magnetic field as well as the equipartition ratio, defined as the ratio of the two energy densities:\n\\begin{equation}\n \\xi = \\frac{n_e \\bar{\\gamma} m_e c^2}{B^2\/8\\pi}\n\\label{eq:xi}\n.\\end{equation}\n\nThis plot shows that the jet is highly magnetised very close to the black hole (below 100 $R_S$), as most of the energy is carried by the magnetic field. Particles then carry much more of the overall energy and bring it very far along the jet, allowing for the far synchrotron emission.\\\\\n\nThe bulk Lorentz factor $\\Gamma_b$ varies as a function of the distance along the jet and is equal to $\\Gamma_{eq}(z)$ below $10^4 R_S$. Changes in $\\Gamma_b(z) = \\Gamma_{eq}(z)$ are due to effects discussed in \\cite{Vuillaume:2015jv} and are the result of the Compton rocket effect. 
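The comparison between $\\Gamma_\\infty$ and the observed superluminal speeds made below relies on standard VLBI kinematics, worth recalling here (a textbook relation, not specific to our model): the apparent transverse speed $\\beta_{app}$ is related to the true speed $\\beta$ and the viewing angle $\\theta$ through
\\begin{equation*}
\\beta_{app} = \\frac{\\beta \\sin\\theta}{1 - \\beta \\cos\\theta}
,\\end{equation*}
which is maximised at $\\cos\\theta = \\beta$, where $\\beta_{app} = \\beta \\Gamma_b$; an observed $\\beta_{app} = 7$ thus requires $\\Gamma_b \\geq \\sqrt{1 + \\beta_{app}^2} \\approx 7.1$.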
The jet reaches ballistic motion at $z = 10^4 R_S$ \\corr{as the acceleration becomes ineffective and $\\Gamma_b(z)$ then reaches its final value $\\Gamma_\\infty$.}\\\\\n\nThe value of $\\Gamma_\\infty \\approx 2.7$ is very questionable in this modelling, as superluminal motion implying $v_{app} > 7c$ has been observed, requiring at least a comparable value of $\\Gamma_b$. Superluminal motion as seen by very long baseline interferometry (VLBI) corresponds to regions very far from the central engine and should thus be produced by components travelling at $\\Gamma_\\infty$. Therefore, $\\Gamma_\\infty$ should be able to reach higher values. In the Compton rocket model, this implies that particles should be coupled to the photon field at larger distances than what is found here (well outside the BLR region). There are two simple ways to explain this lack of energy in the particles:\n\\begin{enumerate}\n\\item\nOur description of the acceleration by a power-law might be too simple. It is quite clear that jets are more complex objects and that some re-acceleration sites are present far along the jet, as shown by complex jet images (see, for example, the HST-1 knot in M87). Such far acceleration would give an energy boost to particles that would turn into jet acceleration on larger scales, pushing $\\Gamma_\\infty$ to the observed values.\n\\item\nOur modelling of 3C 273 corresponds to a quiescent state. During flaring states, it is expected that more energy is transferred to the particles, here again providing a possibility to reach higher values of $\\Gamma_\\infty$. 
In this view, the fastest features observed in jets would correspond to past flaring episodes that accelerated the plasma to high values of $\\Gamma_b$; these components now follow ballistic motion.\n\\end{enumerate}\n\n\nOn the other hand, it is very interesting that the model is able to reproduce the broadband spectrum of 3C 273 with $\\Gamma_b(z) < 3$.\nAs explained in section \\ref{sec:jet_velocity}, there is substantial evidence pointing to low Lorentz factors in AGNs, and high Lorentz factors are very difficult to explain if applied to the entire jet. Here we show that with a stratified jet model, high Lorentz factors are not necessary to reproduce the averaged broadband emission and that the quiescent state of an object could correspond to a very moderate $\\Gamma_b$, as opposed to flaring episodes that could induce higher values of $\\Gamma_\\infty$, as high as the ones observed.\n\n\n\n\n\\section{Conclusion}\n\nEmission modelling is essential to understanding AGN jet physics, and yet, despite its many flaws, the one-zone model is still predominant. However, more complex models are being developed, including structured jet models. These models introduce more physics and improve our understanding of AGNs, as they are often better at explaining the observations. In this work, we present a numerical model of a stratified jet based on the two-flow idea first introduced by \\cite{1989MNRAS.237..411S} and further developed by \\cite{1991ApJ...383L...7H}. \n\nThe numerical model is based on a prescription of the jet geometry providing the evolution of the radius, the magnetic field, and the particle acceleration. External thermal sources such as the accretion disc, the dusty torus, and the broad-line region are also modelled, including their spatial geometry.\nIn the jet, we consider emission from synchrotron and inverse-Compton (on synchrotron and thermal photons) processes. 
The physical conditions, such as the particle energy and density, are self-consistently computed along the jet from initial conditions at a fixed point. In particular, the bulk Lorentz factor is not a free parameter in our model: its evolution is entirely constrained by the external sources (accretion disc, dusty torus, BLR) through the Compton rocket process.\nPhoton-photon absorption is taken into account in the jet (essential for pair production) as well as outside the jet, in its vicinity, and on the photon path to the observer.\n\nDespite being very constrained, the model is able to accurately reproduce the broadband emission of 3C 273 from radio to gamma-rays. As the physical conditions evolve along the jet, different zones contribute to different parts of the observed spectrum. The inner parts of the jet contribute more to the high energies, while the low-frequency radio emission is produced further along the jet, at parsec scales.\n\nIn the future, the model could be extended to incorporate variability (as done in \\citealt{Boutelier:2008bga}) and could be applied to other types of compact objects, such as galactic X-ray binaries. Further work is planned for these applications.\n\n\\section*{Acknowledgements}\nWe acknowledge funding support from the French CNES agency and CNRS PNHE. IPAG is part of Labex OSUG@2020 (ANR10 LABX56).\n\n\n\n\n\n\\bibliographystyle{aa}\n