diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmcle" "b/data_all_eng_slimpj/shuffled/split2/finalzzmcle" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmcle" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nWe know that the autoparallel curves in the Riemann geometry coincide with its geodesics. In this note my aim is to investigate if there is a such result in the symmteric teleparallel geometry (STPG).\n\n\\section{Mathematical Preliminaries}\n\nSpacetime, in general, is denoted by $\\{ M,g,\\nabla\\}$ where $M$\nis orientable and differentiable manifold, $g=g_{\\alpha \\beta}\ne^\\alpha \\otimes e^\\beta$ is the metric tensor written in terms of 1-forms\n$e^\\alpha$ and $\\nabla$ is connection associated with connection\n1-forms ${\\Lambda^\\alpha}_\\beta$. Cartan structure equations\ndefine nonmetricity 1-forms, torsion 2-forms and curvature\n2-forms, respectively\n \\ba\n Q_{\\alpha \\beta} &:=& - \\frac{1}{2} \\D g_{\\alpha \\beta}\n = \\frac{1}{2} (-\\d g_{\\alpha \\beta} + \\Lambda_{\\alpha \\beta}+\\Lambda_{\\alpha \\beta}) \\; , \\label{nonmet}\\\\\n T^\\alpha &:=& \\D e^\\alpha = \\d e^\\alpha + {\\Lambda^\\alpha}_\\beta \\wedge e^\\beta \\; , \\label{tors}\\\\\n {R^\\alpha}_\\beta &:=& \\D {\\Lambda^\\alpha}_\\beta := \\d {\\Lambda^\\alpha}_\\beta\n + {\\Lambda^\\alpha}_\\gamma \\wedge {\\Lambda^\\gamma}_\\beta \\label{curva}\n \\ea\nwhere $ \\d $ and $ \\D$ are exterior derivative and covariant\nexterior derivative. Geometrically, the nonmetricity\ntensor measures the deformation of length and angle standards\nduring parallel transport. For example, after parallel\ntransportation of a vector along a closed curve, the length of the\nfinal vector may be different from that of the initial vector. On\nthe other hand, torsion relates to the translational group.\nMoreover it is sometimes said that closed parallelograms do not\nexist in spacetime with torsion. Finally, curvature is related to\nthe linear group. That is, when a vector is parallel transported\nalong a closed loop, the vector undergos a rotation due to\ncurvature. These quantities satisfy Bianchi identities:\n \\ba\n \\D Q_{\\alpha \\beta} &=& \\frac{1}{2} ( R_{\\alpha \\beta} +R_{\\beta \\alpha}) \\; , \\label{bianc:0} \\\\\n \\D T^\\alpha &=& {R^\\alpha}_\\beta \\wedge e^\\beta \\; , \\label{bianc:1} \\\\\n \\D {R^\\alpha}_\\beta &=& 0 \\; . \\label{bianc:2}\n \\ea\n\n\n\nFull connection 1-forms can be decomposed uniquely as follows \\cite{fhehl1995},\\cite{rtucker1995},\\cite{madak2010}:\n \\ba\n {\\Lambda^\\alpha}_\\beta = \\underbrace{(g^{\\alpha \\gamma}\\d g_{\\gamma_\\beta} + {p^\\alpha}_\\beta)\/2 + {\\omega^\\alpha}_\\beta}_{Metric}\n + \\underbrace{{K^\\alpha}_\\beta}_{Torsion} + \\underbrace{{q^\\alpha}_\\beta + {Q^\\alpha}_\\beta}_{Nonmetricity} \\label{connect:dec}\n \\ea\nwhere ${\\omega^\\alpha}_\\beta$ Levi-Civita connection 1-forms\n \\ba\n {\\omega^\\alpha}_\\beta \\wedge e^\\beta = -\\d e^\\alpha \\; , \\label{LevCiv}\n \\ea\n${K^\\alpha}_\\beta$ contortion tensor 1-forms,\n \\ba\n {K^\\alpha}_\\beta \\wedge e^\\beta = T^\\alpha \\; , \\label{contort}\n \\ea\nand anti-symmetric 1-forms\n \\ba\n & & q_{\\alpha \\beta} = -( \\iota_\\alpha Q_{\\beta \\gamma } ) e^\\gamma + ( \\iota_\\beta Q_{\\alpha \\gamma})\n e^\\gamma \\; , \\label{q:ab} \\\\\n & & p_{\\alpha \\beta} = -( \\iota_\\alpha \\d g_{\\beta \\gamma } ) e^\\gamma + ( \\iota_\\beta \\d g_{\\alpha \\gamma})\n e^\\gamma \\label{p:ab}\\; .\n \\ea\nThis decomposition is self-consistent. 
To see this, it is enough to\nmultiply (\\ref{connect:dec}) from the right by $e^\\beta$ and to use the\ndefinitions above. While moving indices vertically in front of\nboth $\\d$ and $\\D$, special attention is needed because $\\d\ng_{\\alpha \\beta} \\neq 0$ and $\\D g_{\\alpha \\beta} \\neq 0$.\nThe symmetric part of the full connection comes from (\\ref{nonmet}),\n \\ba\n \\Lambda_{(\\alpha \\beta)} = Q_{\\alpha \\beta } + \\frac{1}{2} \\d g_{\\alpha \\beta } \\label{connect:sym}\n \\ea\nand the remainder is the anti-symmetric part\n \\ba\n \\Lambda_{[\\alpha \\beta]} = \\frac{1}{2} p_{\\alpha \\beta} + \\omega_{\\alpha \\beta} + K_{\\alpha \\beta} + q_{\\alpha \\beta} \\; . \\label{connect:ansym}\n \\ea\n\nIt is always possible to choose orthonormal basis 1-forms, which I denote by $\\{ e^a \\}$. Then the metric is\n$g=\\eta_{ab}e^a \\otimes e^b$ where $\\eta_{ab} = \\mbox{diag}(\\pm 1, \\cdots , 1)$. In this case the splitting of the full connection (\\ref{connect:dec}) takes the form\n \\ba\n \\Lambda^a{}_b = \\omega^a{}_b + K^a{}_b + q^a{}_b + Q^a{}_b \\; . \\label{connect:decortonormal}\n \\ea\n\nIf only $Q_{\\alpha \\beta}=0$, the connection is metric compatible;\nthis is the Einstein-Cartan geometry. If both $Q_{\\alpha \\beta}=0$ and\n$T^\\alpha =0$, the connection is Levi-Civita; this is the pseudo-Riemannian\ngeometry. If $R^\\alpha{}_\\beta=0$ and $Q_{\\alpha \\beta}=0$, it is\ncalled teleparallel (Weitzenb\\\"{o}ck) geometry. If $R^\\alpha{}_\\beta=0$ and\n$T^\\alpha =0$, it is called symmetric teleparallel geometry (STPG).\n\n\n\n\n\n \\section{STPG}\n\nIn STPG only the nonmetricity tensor is nonzero:\n \\ba\n Q_{\\alpha \\beta} \\neq 0 \\quad , \\quad T^\\alpha =0 \\quad , \\quad {R^\\alpha}_\\beta\n =0 \\; . \\label{STPG1}\n \\ea\nHere I argue that in a well-chosen coordinate system (or gauge) every metric has its own nonmetricity. This can be seen as a kind of gauge fixing. Let us show this\nargument as follows. First one has to choose the natural frame $e^\\alpha =\n\\d x^\\alpha$ and the connection as $\\Lambda^\\alpha{}_\\beta =0$; then automatically\n$R^\\alpha{}_\\beta =0$ and $T^\\alpha =0$ and $Q_{\\alpha \\beta} = -\n\\frac{1}{2}\\d g_{\\alpha \\beta}$. This sequence of operations corresponds to $\\omega_{ab} + q_{ab}$ in the orthonormal frame. Besides, in this geometry the identities (\\ref{bianc:0})-(\\ref{bianc:2}) give one nontrivial identity, $\\D Q_{\\alpha \\beta} =0$. From now on let Greek indices denote natural (holonomic) ones.\n\n \\subsection{Autoparallel Curves}\n\nIn the Riemann geometry one requires that the tangent vector to the\nautoparallel curve points in the same direction as itself when\nparallel propagated, and demands that it maintain the same length,\n \\ba\n \\D T^\\alpha =0 \\; .\n \\ea\nOn the other hand, since intuitively autoparallel curves are\n\"those as straight as possible\", I do not demand that the\nvector keep the same length during parallel propagation in\nSTPG. It is known that nonmetricity is related to length and angle\nstandards under parallel transportation. Therefore I prescribe the\nparallel propagation of the tangent vector as\n \\ba\n \\D T^\\mu = (a Q^\\mu{}_\\nu +bq^\\mu{}_\\nu ) T^\\nu + c Q T^\\mu\n \\label{eq:DTmu}\n \\ea\nwhere $T^\\mu = \\frac{d x^\\mu}{d \\tau}$ is the tangent vector to the curve\n$x^\\mu (\\tau)$ with an affine parameter $\\tau$ and $a,b,c$ are arbitrary constants. Here, since I\nchoose $\\Lambda^\\alpha{}_\\beta =0$, I obtain $\\D T^\\mu = \\d T^\\mu\n= (\\partial_\\alpha T^\\mu) e^\\alpha$. 
Moreover, I write\n$Q^\\mu{}_\\nu=Q^\\mu{}_{\\nu \\alpha} e^\\alpha$ and $q^\\mu{}_\\nu =\nq^\\mu{}_{\\nu\\alpha} e^\\alpha$ and $Q= Q^\\nu{}_{\\nu\\alpha}\ne^\\alpha$. Then Eqn(\\ref{eq:DTmu}) turns out to be\n \\ba\n \\partial_\\alpha T^\\mu =(a Q^\\mu{}_{\\nu \\alpha} + b\n q^\\mu{}_{\\nu \\alpha}) T^\\nu + c Q^\\nu{}_{\\nu \\alpha} T^\\mu \\, .\n \\ea\nNow, after multiplying this by $T^\\alpha$, I write $T^\\alpha\n\\partial_\\alpha := \\frac{d}{d\\tau}$. Thus I arrive at\n \\ba\n \\frac{d^2 x^\\mu}{d\\tau^2} = (a Q^\\mu{}_{\\nu \\alpha} + b\n q^\\mu{}_{\\nu \\alpha}) \\frac{d x^\\nu}{d \\tau} \\frac{d x^\\alpha}{d \\tau}\n + c Q^\\nu{}_{\\nu \\alpha} \\frac{d x^\\mu}{d \\tau} \\frac{d x^\\alpha}{d \\tau} \\, .\n \\ea\nHere, by using $Q_{\\alpha \\beta} = -\\frac{1}{2} \\d g_{\\alpha \\beta}$, I get $Q^\\mu{}_{\\nu \\alpha} = -\\frac{1}{2}g^{\\mu\n\\beta}(\\partial_\\alpha g_{\\beta \\nu})$ and then $q^\\mu{}_{\\nu \\alpha} = -\\frac{1}{2}g^{\\mu \\beta}(\\partial_\\nu g_{\\alpha \\beta}\n- \\partial_\\beta g_{\\alpha \\nu})$. Consequently, I express\nthe autoparallel curve equation in terms of the metric as follows\n \\ba\n \\frac{d^2 x^\\mu}{d\\tau^2} &=& g^{\\mu \\beta} \\left[ - \\frac{a+b}{4} (\\partial_\\alpha g_{\\beta \\nu} + \\partial_\\nu g_{\\beta \\alpha})\n + \\frac{b}{2} (\\partial_\\beta g_{\\alpha \\nu}) \\right] \\frac{d x^\\nu}{d \\tau} \\frac{d x^\\alpha}{d \\tau} \\nonumber \\\\\n & & - \\frac{c}{2}g^{\\nu \\beta} (\\partial_\\alpha g_{\\beta \\nu}) \\frac{d x^\\mu}{d \\tau} \\frac{d x^\\alpha}{d \\tau}\n \\ea\nwhere I symmetrized the first term of the first line in the $(\\nu \\alpha)= \\frac{1}{2}(\\nu \\alpha + \\alpha\n\\nu)$ indices. If I fix $a=b=1$ and $c=0$, then this equation\nbecomes the same as the autoparallel curve equation of the Riemann geometry. \n\n\n \\subsection{Geodesics}\n\nIntuitively, geodesics are \"curves as short as\npossible\". The interval between infinitesimally close points is given by the\nmetric\n \\ba\n ds^2 = g_{\\mu \\nu} dx^\\mu dx^\\nu\n \\ea\nLet me parameterize the curve between endpoints as $x^\\mu =\nx^\\mu(\\tau)$; then I obtain\n \\ba\n s=\\int_{\\tau_1}^{\\tau_2} \\left( -g_{\\mu \\nu} \\dot{x}^\\mu \\dot{x}^\\nu \\right)^{1\/2}d\\tau\n \\ea\nwhere a dot denotes the $\\tau$-derivative. I inserted a minus sign because of the Lorentz signature. I wish now to derive the\ncondition on a curve which makes it extremize the length between\nits endpoints, i.e., I wish to find those curves whose length does\nnot change to first order under an arbitrary smooth deformation\nwhich keeps the endpoints fixed. This condition gives rise to the\nEuler-Lagrange equations\n \\ba\n \\frac{d}{d \\tau} \\left( \\frac{\\partial L}{\\partial \\dot{x}^\\alpha} \\right) - \\frac{\\partial L}{\\partial\n x^\\alpha}=0\n \\ea\nof the action integral $I=\\int_{\\tau_1}^{\\tau_2} L(x^\\mu ,\n\\dot{x}^\\mu , \\tau)$. Thus in my case the Lagrangian is\n \\ba\n L = \\left[ -g_{\\mu \\nu} (x) \\dot{x}^\\mu \\dot{x}^\\nu \\right]^{1\/2}\n \\ea\nwhere $x$ stands for the coordinate functions $x^\\mu$. Now the\nEuler-Lagrange equations yield\n \\ba\n \\ddot{x}^\\beta + \\frac{1}{2} g^{\\alpha \\beta} (\\partial_\\mu g_{\\alpha \\nu} + \\partial_\\nu g_{\\alpha \\mu}\n - \\partial_\\alpha g_{\\mu \\nu})\\dot{x}^\\mu \\dot{x}^\\nu =\n \\frac{\\dot{x}^\\beta}{2 ( g_{\\mu \\nu} \\dot{x}^\\mu \\dot{x}^\\nu )} \\frac{d(g_{\\mu \\nu} \\dot{x}^\\mu\n \\dot{x}^\\nu)}{d\\tau} \\label{eq:geodesic}\n \\ea\nAttention to the last term! Let me evaluate it. 
First, I write it\nas $ g_{\\mu \\nu} \\dot{x}^\\mu \\dot{x}^\\nu = g_{\\mu \\nu} T^\\mu T^\\nu\n= T_\\mu T^\\mu$. Now,\n \\ba\n \\d (T_\\mu T^\\mu) &=&(\\D g_{\\mu \\nu}) T^\\mu T^\\nu +2g_{\\mu \\nu} T^\\mu (\\D T^\\nu) \\nonumber \\\\\n &=& -2Q_{\\mu \\nu} T^\\mu T^\\nu + 2g_{\\mu \\nu} T^\\mu (\\D T^\\nu)\n \\ea\nHere the use of Eqn(\\ref{eq:DTmu}) gives\n \\ba\n \\d (T_\\mu T^\\mu) = 2(a-1)Q_{\\mu \\nu} T^\\mu T^\\nu + 2c Q T_\\mu T^\\mu\n \\ea\nThis means that if I choose $a=1$ and $c=0$,\nEqn(\\ref{eq:geodesic}) becomes the same as the geodesic equation of the Riemann geometry. \n\n\n\\section{Result}\n\nThus, if in STPG the parallel propagation of the tangent vector $T^\\mu =\n\\frac{dx^\\mu}{d\\tau}$ to a curve $x^\\mu(\\tau)$ is defined as\n \\ba\n \\D T^\\mu = ( Q^\\mu{}_\\nu + q^\\mu{}_\\nu ) T^\\nu\n \\ea\nthen the autoparallel curve equation is obtained as\n \\ba\n \\frac{d^2 x^\\mu}{d\\tau^2} + \\frac{1}{2} g^{\\mu \\beta} \\left( \\partial_\\alpha g_{\\beta \\nu} + \\partial_\\nu g_{\\beta \\alpha}\n - \\partial_\\beta g_{\\alpha \\nu} \\right) \\frac{d x^\\nu}{d \\tau} \\frac{d x^\\alpha}{d \\tau}=0 \\, . \\label{eq:autoparallel}\n \\ea\nThis is precisely the autoparallel curve equation of the Riemann geometry. It is also shown that this curve is at the same time a geodesic of STPG, just as in the Riemann geometry. \n\n\\section*{Acknowledgement}\n\nThe author thanks F W Hehl for stimulating criticisms. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\\label{SEC:IN}\n\\vspace{-2mm}\n\nWith the rise of big data analytics and cloud computing, cluster-based large-scale data processing has become a common paradigm in many applications and services. \nOnline companies of diverse sizes, ranging from technology giants to smaller startups, routinely store and process data generated by their users and applications on the cloud. \nData-parallel computing frameworks, such as Apache Spark~\\cite{zaharia2010spark,spark} and Hadoop~\\cite{hadoop}, are employed to perform such data processing at scale. \nJobs executed over such frameworks comprise hundreds or thousands of identical parallel subtasks, operating over massive datasets, and executed concurrently in a cluster environment.\n\n\n\\vspace{-1mm}\nThe time and resources necessary to process such massive jobs are immense. Nevertheless, jobs executed in such distributed environments often have significant computational overlaps: \ndifferent jobs processing the same data may involve common intermediate computations, as illustrated in Fig.~\\ref{FIG:JOBARRIVALS}.\nSuch computational overlaps arise naturally in practice. \nIndeed, computations performed by companies are often applied to the same data-pipeline: companies collect data generated by their applications and users, and store it in the cloud. \nSubsequent operations operate over the same pool of data, e.g., user data collected within the past few days or weeks. \nMore importantly, a variety of prominent data mining and machine learning operations involve common preprocessing steps. This includes database projection and selection~\\cite{maier1983theory}, preprocessing in supervised learning~\\cite{trevor2001elements}, and dimensionality reduction~\\cite{eldar2012compressed}, to name a few. 
\nRecent data traces from industry have reported $40\\sim60\\%$ recurring jobs in Microsoft production clusters~\\cite{jyothi2016morpheus}, and up to $78\\%$ jobs in Cloudera clusters involve data re-access~\\cite{chen2012interactive}.\n\n\n\\vspace{-1mm}\nExploiting such computational overlaps has a tremendous potential to drastically reduce job computation costs and lead to significant performance improvements. In data-parallel computing frameworks like Spark, computational overlaps inside each job are exploited through caching and \\emph{memoization}: the outcomes of computations are stored with the explicit purpose of significantly reducing the cost of subsequent jobs. \nOn the other hand, introducing caching also gives rise to novel challenges in resource management; \nto that end, the purpose of this paper is to design, implement and evaluate caching algorithms over data-parallel cluster computing environments.\n\n\\vspace{-1mm}\nExisting data-parallel computing frameworks, such as Spark, incorporate caching capabilities in their framework in a non-automated fashion. \nThe decision of which computation results to cache rests on the developer that submits jobs: the developer explicitly states which results are to be cached, while cache eviction is implemented with the simple policy (e.g., LRU or FIFO); neither caching decisions nor evictions are part of an optimized design. Crucially, determining which outcomes to cache is a hard problem when dealing with jobs that consist of operations with complex dependencies. \nIndeed, under the Directed Acyclic Graph (DAG) structures illustrated in Fig.~\\ref{FIG:JOBARRIVALS}, making caching decisions that minimize, e.g., total work is NP-hard~\\cite{ioannidis2016adaptive,shanmugam2013femtocaching}.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.34\\textwidth]{Figures\/JOBARRIVALS.pdf}\\vspace{-3mm}\n\\caption{\\scriptsize {\\bf{Job arrivals with computational overlaps.}} Jobs to be executed over the cluster arrive at different times $t_1,\\ldots,t_5$. Each job is represented by a Directed Acyclic Graph (DAG), whose nodes correspond to operations, e.g., map, reduce, or join, while arrows represent order of precedence.\nCrucially, jobs have \\emph{computational overlaps}: their DAGs comprise common sets of operations executed over the same data, indicated as subgraphs colored identically across different jobs. Caching such results can significantly reduce computation time.\\vspace*{-1em}}\n\\label{FIG:JOBARRIVALS}\n\\end{figure}\n\n\\vspace{-1mm}\nIn this paper, we develop an adaptive algorithm for caching in a massively distributed data-parallel cluster computing environment, handling complex and massive data flows. Specifically, a mathematical model is proposed for determining caching decisions that minimize total work, i.e., the total computation cost of a job. \nUnder this mathematical model,\nwe have developed new {\\em adaptive} caching algorithms to make online caching decisions with optimality guarantees, e.g., minimizing total execution time. \nMoreover, we extensively validate the performance over several different databases, machine learning, and data mining patterns of traffic, both through simulations and through an implementation over Spark, comparing and assessing their performance with respect to existing popular caching and scheduling policies. \n\n\n\n\n\n\nThe remainder of this paper is organized as follows. \nSec.~\\ref{SEC:BM} introduces background and motivation. 
\nSec.~\\ref{SEC:AD} presents our model, problem formulation, and our proposed algorithms. \nTheir performance is evaluated in Sec.~\\ref{SEC:PE}. \nSec.~\\ref{SEC:RW} reviews related work, and we\nconclude in Sec.~\\ref{SEC:CO}.\n\n\n\n\n\\section{Background and Motivation}\n\\label{SEC:BM}\n\\vspace{-2mm}\n\n\\subsection{Resilient Distributed Datasets in Spark}\n\\vspace{-1mm}\n\nApache Spark has recently been gaining ground as an alternative for distributed data processing platforms. In contrast to Hadoop and MapReduce~\\cite{dean2008mapreduce},\nSpark is a memory-based general parallel computing framework. It provides {\\em resilient distributed datasets} (RDDs) as a primary abstraction: RDDs are distributed datasets stored in RAM across multiple nodes in the cluster.\nIn Spark, the decision of which RDDs to store in the RAM-based cache rests with the developer~\\cite{zaharia2012resilient}: the developer explicitly requests for certain results to persist in RAM. Once the RAM cache is full, RDDs are evicted using the LRU policy. Alternatively, developers are further given the option to store evicted RDDs on HDFS, at the additional cost of performing write operations on HDFS. RDDs cached in RAM are stored and retrieved faster; however, cache misses occur either because an RDD is not explicitly cached by the developer, or because it was cached and later evicted. In either case, Spark is resilient to misses at a significant computational overhead: if a requested RDD is neither in RAM nor stored in HDFS, Spark recomputes it from scratch. Overall, cache misses, therefore, incur additional latency, either by reading from HDFS or by fully recomputing the missing RDD.\n\n\n\\vspace{-2mm}\nAn example of a job in a data-parallel computing framework like Spark is given in Fig.~\\ref{FIG:DAG}. A job is represented as a DAG (sometimes referred to as the \\emph{dependency graph}). Each node of the DAG corresponds to a parallel operation, such as reading a text file and distributing it across the cluster, or performing a map, reduce, or join operation. Edges in the DAG indicate the order of precedence: an operation cannot be executed before all operations pointing towards it are completed, because their outputs are used as inputs for this operation. As in existing frameworks like Spark or Hadoop, the inputs and outputs of operations may be distributed across multiple machines: e.g., the input and output of a map would be an RDD in Spark, or a file partitioned across multiple disks in HDFS in Hadoop.\n\n\n\n\n\n\\vspace{-3mm}\n\\subsection{Computational Overlaps}\n\\vspace{-2mm}\n\n\nCaching an RDD resulting from a computation step in a job like the one appearing in Fig.~\\ref{FIG:DAG} can have significant computational benefits when jobs may exhibit \\emph{computational overlaps}: not only are jobs executed over the same data, but also consist of operations that are repeated across multiple jobs. This is illustrated in Fig.~\\ref{FIG:JOBARRIVALS}: jobs may be distinct, as they comprise different sets of operations, but certain subsets of operations (shown as identically colored subgraphs in the DAG of Fig.~\\ref{FIG:JOBARRIVALS}) are (a) the same, i.e., execute the same primitives (maps, joins, etc.) and (b) operate over the same data. \n\n\\vspace{-2mm}\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.34\\textwidth]{Figures\/DAG.pdf}\\vspace{-3mm}\n\\caption{\\scriptsize \\textbf{Job DAG example}. An example of a parallel job represented as a DAG. 
Each node corresponds to an operation resulting RDD that can be executed over a parallel cluster (e.g., a map, reduce, or join operation). DAG edges indicate precedence. \nSimple, crunodes (in\/out) and cross nodes are represented with solid or lined textures.}\\vspace*{-1.3em}\n\\label{FIG:DAG}\n\\end{figure}\n\nComputational overlaps arise in practice for two reasons. The first is that operations performed by companies are often applied to the same data-pipeline: companies collect data generated by their applications and users, which they maintain in the cloud, either directly on a distributed file system like HDFS, or on NoSQL databases (like Google's Datastore~\\cite{barrett2008under} or Apache HBase~\\cite{hbase}). Operations are therefore performed on the same source of information: the latest data collected within a recent period of time.\nThe second reason for computational overlaps is the abundance of commonalities among computational tasks in data-parallel processing. Commonalities occur in several classic data-mining and machine learning operations heavily utilized in inference and prediction tasks (such as predictions of clickthrough rates and user profiling). \nWe give some illustrative examples below:\n\n\n\n\n\\vspace{-2mm}\n\\noindent\\textbf{Projection and Selection.} The simplest common preprocessing steps are \\emph{projection} and \\emph{selection}~\\cite{maier1983theory}. For example, computing the mean of a variable $\\mathtt{age}$ among tuples satisfying the predicate $\\mathtt{gender}=\\mathtt{female}$ and $\\mathtt{gender}=\\mathtt{female}\\land \\mathtt{income}\\geq 50$K might both first reduce a dataset by selecting rows in which $\\mathtt{gender}=\\mathtt{female}$. Even in the absence of a relational database, as in the settings we study here, projection (i.e., maintaining only certain feature columns) and selection (i.e., maintaining only rows that satisfy a predicate) are common. For example, building a classifier that predicts whether a user would click on an advertisement relies upon first restricting a dataset containing all users to the history of the user's past clicking behavior. This is the same irrespective of the advertisement for which the classifier is trained. \n\n\n\\vspace{-2mm}\n\\noindent\\textbf{Supervised Learning.} Supervised learning tasks such as regression and classification~\\cite{trevor2001elements}, i.e., training a model from features for the purpose of predicting a label (e.g., whether a user will click on an advertisement or image) often involve common operations that are label-independent. For example, performing ridge regression first requires computing the co-variance of the features~\\cite{trevor2001elements}, an identical task irrespective of the label to be regressed. Similarly, kernel-based methods like support vector machines require precomputing a kernel function across points, a task that again remains the same irrespective of the labels to be regressed~\\cite{scholkopf2001learning}. Using either method to, e.g., regress the click-through rate of an ad, would involve the same preprocessing steps, irrespectively of the labels (i.e., clicks pertaining to a specific ad) being regressed.\n\n\\vspace{-2mm}\n\\noindent\\textbf{Dimensionality Reduction.} Preprocessing also appears in the form of \\emph{dimensionality reduction}: this is a common preprocessing step in a broad array of machine learning and data mining tasks, including regression, classification, and clustering. 
Prior to any such tasks, data is first projected in a lower dimensional space that preserves, e.g., data distances. There are several approaches to doing this, including principal component analysis~\\cite{jolliffe2002principal}, compressive sensing~\\cite{eldar2012compressed}, and training autoregressive neural networks~\\cite{gregor2014deep}, to name a few. In all these examples, the same projection would be performed on the data prior to subsequent processing, and be reused in the different tasks described above.\n\n\\vspace{-2mm}\nTo sum up, the presence of computational overlaps across jobs gives rise to a tremendous opportunity of reducing computational costs. Such overlaps can be exploited precisely through the caching functionality of a data-parallel framework.\nIf a node in a job is cached (i.e., results are \\emph{memoized}), then neither itself nor any of its predecessors need to be recomputed.\n\n\n\\begin{comment}\n\\begin{figure}[ht]\n\\centering\n\\begin{subfigure}[b]{0.4\\textwidth}\n\\includegraphics[width=\\textwidth]{job1}\n\\caption{Job1}\n\\label{fig:job1}\n\\end{subfigure}\n\\hspace{0.2in}\n\\begin{subfigure}[b]{0.4\\textwidth}\n\\includegraphics[width=\\textwidth]{job2}\n\\caption{Job2}\n\\label{fig:job2}\n\\end{subfigure}\n\\caption{\\small An Example of overlapping computation with Spark DAGs from two SQL queries. {\\bf too much space, may remove this figure}}\n\\end{figure}\n\\end{comment}\n\n\\vspace{-3mm}\n\\subsection{Problems and Challenges}\n\\vspace{-2mm}\n\nDesigning caching schemes poses several significant challenges. \nTo begin with, making caching decisions is an inherently combinatorial problem. Given (a) a storage capacity constraint, (b) a set of jobs to be executed, (c) the size of each computation result, and (d) a simple linear utility on each job, the problem is reduced to a knapsack problem, which is NP-hard. The more general objectives we discussed above also lead to NP-hard optimization problems~\\cite{fleischer2011tight}.\nBeyond this inherent problem complexity, even if jobs are selected from a pool of known jobs (e.g., classification, regression, querying), the sequence \nto submit jobs within a given time interval \\emph{is a priori unknown}. The same may be true about statistics about upcoming jobs, such as the frequency with which they are requested.\nTo that end, a practical caching algorithm must operate in an \\emph{adaptive} fashion: it needs to make online decisions on what to cache as new jobs arrive, and adapt to changes in job frequencies.\n\n\\vspace{-1mm}\nIn Spark, LRU is the default policy for evicting RDDs when the cache is full. There are some other conventional caching algorithms such as LRU variant~\\cite{LRU-K} that maintains the most recent accessed data for future reuse, and ARC~\\cite{NM-ARC} and LRFU~\\cite{LRFU} that consider both frequency and recency in the eviction decisions.\nWhen the objective is to minimize total work, these conventional caching algorithms are woefully inadequate, leading to arbitrarily suboptimal caching decisions~\\cite{ioannidis2016adaptive}. Recently, a heuristic policy~\\cite{geng2017lcs}, named ``Least Cost Strategy'' (LCS), was proposed to make eviction decisions based on the recovery temporal costs of RDDs. 
However, this is a heuristic approach and again comes with no guarantees.\nIn contrast, we intend to leverage Spark's internal caching mechanism to implement our caching algorithms and deploy and evaluate them over the Spark platform, while also attaining formal guarantees.\n\n\n\n\n\\section{Algorithm Design}\n\\label{SEC:AD}\n\\vspace{-2mm}\n\nIn this section, we introduce a formal mathematical model for making caching decisions that minimize the expected total work, i.e., the total expected computational cost for completing all jobs. \nThe corresponding caching problem is NP-hard, even in an offline setting where the popularity of jobs submitted to the cluster is \\emph{a priori known}. \nNevertheless, we show it is possible to pose this optimization problem as a submodular maximization problem subject to knapsack constraints. This allows us to produce a $1-1\/e$ approximation algorithm for its solution. Crucially, when job popularity is \\emph{not known}, we devise an adaptive algorithm that determines caching decisions probabilistically, and whose caching decisions lie within a $1-1\/e$ approximation of the offline optimum, in expectation. \n\n\\vspace{-2mm}\n\\subsection{DAG\/Job Terminology}\n\\vspace{-2mm}\n\nWe first introduce the terminology we use in describing caching algorithms.\nConsider a job represented as a DAG as shown in Fig.~\\ref{FIG:DAG}. Let $G(V,E)$ be the graph representing this DAG, whose nodes are denoted by $V$ and edges are denoted by $E$. Each node is associated with an operation to be performed on its inputs (e.g., map, reduce, join, etc.). \nThese operations come from a well-defined set of operation primitives (e.g., the operations defined in Spark). For each node $v$, we denote as $\\op(v)$ the operation that $v\\in V$ represents. The DAG $G$ as well as the labels $\\{\\op(v),v\\in V\\}$ fully determine the job.\nA node $v\\in V$ is a \\emph{source} if it contains no incoming edges, and a \\emph{sink} if it contains no outgoing edges. Source nodes naturally correspond to operations performed on ``inputs'' of a job (e.g., reading a file from the hard disk), while sinks correspond to ``outputs''.\nGiven two nodes $u,v\\in V$, we say that $u$ is a \\emph{parent} of $v$, and that $v$ is a \\emph{child} of $u$, if $(u,v)\\in E$. We similarly define \\emph{predecessor} and \\emph{successor} as the transitive closures of these relationships.\nFor $v\\in V$, we denote by $\\prd(v)\\subset V$ and $\\scc(v)\\subset V$ the sets of predecessors and successors of $v$, respectively.\nNote that the parent\/child relationship is the opposite of the one usually encountered in trees, where edges are usually thought of as pointing away from the root\/sink towards the leaves\/sources.\nWe call a DAG a \\emph{directed tree} if (a) it contains a unique sink, and (b) its undirected version (i.e., ignoring directions) is acyclic.\n\n\\vspace{-2mm}\n\\subsection{Mathematical Model}\n\\vspace{-2mm}\n\nConsider a setting in which all jobs are applied to the same dataset; this is without loss of generality, as multiple datasets can be represented as a single dataset--namely, their union--by subsequently adding appropriate projection or selection operations as preprocessing to each job. Assume further that each DAG is a directed tree. Under these assumptions, let $\\ensuremath{\\mathcal{G}}$ be the set of all possible jobs that can operate on the dataset. We assume that jobs $G\\in\\ensuremath{\\mathcal{G}}$ arrive according to a stochastic stationary process with rate $\\lambda_G>0$. 
Recall that each job $G(V,E)$ comprises a set of nodes $V$, and that each node $v\\in V$ corresponds to an operation $\\op(v)$. We denote by $c_v\\in \\reals_+$ the time that it takes to execute this operation given the outputs of its parents, and by $s_v\\in \\reals_+$ the size of the output of $\\op(v)$, e.g., in Kbytes. Without caching, the \\emph{total-work} of a job $G$ is then given by\n$W(G(V,E)) = \\sum_{v\\in V} c_v.$\nWe define the \\emph{expected total work} as:\n\\begin{align}\n\\bar{W} =\\sum_{G\\in \\mathcal{G}} \\lambda_G \\cdot W(G) = \\sum_{G(V,E)\\in \\mathcal{G}} \\lambda_{G(V,E)} \\sum_{v\\in V} c_v.\n\\end{align}\nWe say that two nodes $u,u'$ are \\emph{identical}, and write $u=u'$, if both these nodes and all their predecessors involve exactly the same operations.\nWe denote by $\\ensuremath{\\mathcal{V}}$\nthe union of all nodes of DAGs in $\\ensuremath{\\mathcal{G}}$.\nA \\emph{caching strategy} is a vector $x=[x_v]_{v\\in\\ensuremath{\\mathcal{V}}}\\in \\{0,1\\}^{|\\ensuremath{\\mathcal{V}}|}$, where $x_v\\in\\{0,1\\}$ is a binary variable indicating whether we have cached the outcome of node $v$ or not.\nAs jobs in \\ensuremath{\\mathcal{G}}{} are directed trees, when node $v$ is cached, \\emph{there is no need to compute that node or any predecessor of that node}.\nHence, under a caching strategy $x$, the total work of a job $G$ becomes:\n\\begin{align}\n\\textstyle W(G,x) =\\sum_{v\\in V} c_v (1-x_v)\\prod_{u\\in \\scc(v)} (1-x_u).\n\\end{align}\nIntuitively, this states that the cost $c_v$ of computing $\\op(v)$ needs to be paid if and only if \\emph{neither} $v$ \\emph{nor} any of its successors have been cached.\n\n\\vspace{-3mm}\n\\subsection{Maximizing the Caching Gain: Offline Optimization} \n\nGiven a cache of size $K$ Kbytes, we aim to solve the following optimization problem:\n\n\n\\vspace{-2mm}\n\\begin{subequations}\\label{maxcachegain}\n\\small{{\\hspace*{\\stretch{1}} \\textsc{MaxCachingGain}\\hspace{\\stretch{1}} }}\n\\begin{align}\n\\text{Max:}& & & F(x) \\!=\\! \\bar{W} \\!-\\!\\! \\sum_{G\\in\\ensuremath{\\mathcal{G}}}\\!\\!\\lambda_G W(G,x)\\!\\label{obj}\\\\\n\\text{}&&& =\\!\\!\\! \\sum_{G(V,E)\\in\\ensuremath{\\mathcal{G}}}\\!\\!\\!\\!\\!\\! \\lambda_G\\!\\sum_{v\\in V}\\!c_v\\big[1- (1\\!-\\!x_v)\\!\\!\\!\\!\\!\\prod_{u\\in \\scc(v)}\\!\\!\\! (1\\!-\\!x_u)\\big] \\\\\n\\text{Sub.~to:}&&& \\textstyle\\sum_{v\\in \\ensuremath{\\mathcal{V}}} s_vx_v\\leq K, \\quad\nx_v\\in\\{0,1\\}, \\text{ for all } v\\in \\ensuremath{\\mathcal{V}}.\\label{intcont}\n\\end{align}\n\\end{subequations}\n\n\\vspace{-2mm}\nFollowing~\\cite{ioannidis2016adaptive}, we call function $F(x)$ the \\emph{caching gain}: this is the reduction in total work due to caching. This offline problem\nis NP-hard~\\cite{shanmugam2013femtocaching}. \nSeen as an objective over the set of nodes $v\\in \\ensuremath{\\mathcal{V}}$ cached, $F$ is a \\emph{monotone, submodular} function. Hence, \\eqref{maxcachegain} is a submodular maximization problem with a knapsack constraint. When all outputs have the same size, the classic greedy algorithm by Nemhauser et al.~\\cite{nemhauser} yields a $1-1\/e$ approximation. 
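\nFor concreteness, the following is a minimal, illustrative sketch of such a cost-benefit greedy selection for \\textsc{MaxCachingGain}; the function and variable names are placeholders (they are not part of our implementation), the DAGs, rates $\\lambda_G$, costs $c_v$, and sizes $s_v$ are assumed to be given explicitly, and identical nodes across jobs are assumed to share the same identifier. Note also that the plain density-greedy rule below does not by itself attain the constant-factor guarantee; the modified greedy variants cited next do.\n\\begin{verbatim}\n# Illustrative sketch only: greedily cache the node with the largest\n# marginal caching gain per unit size, subject to the capacity K.\ndef total_work(jobs, cached):\n    # jobs: list of (rate, dag); dag maps node -> (cost, successors)\n    w = 0.0\n    for rate, dag in jobs:\n        for v, (cost, succ) in dag.items():\n            if v not in cached and not any(u in cached for u in succ):\n                w += rate * cost\n    return w\n\ndef greedy_cache(jobs, sizes, K):\n    cached, used = set(), 0.0\n    base = total_work(jobs, cached)\n    while True:\n        best, best_score = None, 0.0\n        for v in set(sizes) - cached:\n            if used + sizes[v] > K:\n                continue\n            gain = base - total_work(jobs, cached | {v})\n            if gain \/ sizes[v] > best_score:\n                best, best_score = v, gain \/ sizes[v]\n        if best is None:\n            return cached\n        cached.add(best)\n        used += sizes[best]\n        base = total_work(jobs, cached)\n\\end{verbatim}\n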
In the case of general knapsack constraints, there exist well-known modifications of the greedy algorithm that yield the same approximation ratio~\\cite{sviridenko-submodular,krause-submodular,kulik2009maximizing}.\n\n\n\n\n\\vspace{-2mm}\nBeyond the above generic approximation algorithms for maximizing submodular functions, \\eqref{maxcachegain} can be solved by \\emph{pipage rounding}~\\cite{ageev2004pipage}.\nIn particular, there exists a concave function $L:[0,1]^{|\\ensuremath{\\mathcal{V}}|}\\to\\reals_+$ such that: \n\\begin{align}(1-1\/e)L(x)\\leq F(x) \\leq L(x), \\quad\\text{for all}~x\\in[0,1]^{|\\ensuremath{\\mathcal{V}}|}\\label{sandwitch}.\\end{align}\nThis \\emph{concave relaxation} of $F$ is given by:\n\\begin{align}\\label{relaxation}\nL(x) = \\sum_{G(V,E)\\in\\ensuremath{\\mathcal{G}}} \\lambda_G\\sum_{v\\in V}c_v\\min\\big\\{1, x_v+\\sum_{u\\in \\scc(v)}x_u\\big\\}.\n\\end{align}\nPipage rounding solves \\eqref{maxcachegain} by replacing objective $F(x)$ with its concave approximation $L(x)$ and relaxing the integrality constraints \\eqref{intcont} to the convex constraints $x\\in[0,1]^{|\\ensuremath{\\mathcal{V}}|}$. The resulting optimization problem is convex--in fact, it can be reduced to a linear program, and thus solved in polynomial time. Having solved this convex optimization problem, the resulting fractional solution is subsequently rounded to produce an integral solution. Several polynomial time rounding algorithms exist (see, e.g., \\cite{ageev2004pipage,swaprounding}, and \\cite{kulik2009maximizing} for knapsack constraints). Due to \\eqref{sandwitch} and the specific design of the rounding scheme, the resulting integral solution is guaranteed to be within a constant approximation of the optimum \\cite{ageev2004pipage,kulik2009maximizing}.\n\n\\vspace{-2mm}\n\\subsection{An Adaptive Algorithm with Optimality Guarantees}\n\\label{subsec:adaptive} \n\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=0.84\\textwidth]{Figures\/EXP-DAG.pdf}\\vspace{-3mm}\n\\caption{\\scriptsize {\\bf{An example of RDD dependency in synthetic jobs.}} Denote by $J_x.S_y$ stage $y$ in job $x$; then we have $J_0.S_0 = J_1.S_1=J_2.S_0=J_3.S_1$, \n$J_1.S_{0\\sim 5}=J_3.S_{0\\sim 5}$, and \n$J_0.S_{0\\sim 1}=J_2.S_{0\\sim 1}$. 
\nUnfortunately, even sharing the same computational overlap, by default these subgraphs will be assigned with different stage\/RDD IDs by Spark since they are from different jobs.\n\\vspace*{-1em}\n}\n\\label{FIG:EXP-DAG}\n\\end{figure*}\n\n\n\n\\begin{algorithm}[t]\n\\scriptsize\n\\SetAlFnt{\\footnotesize}\n\\SetKwFunction{FuncIterateJobs}{processJobs}\n\\SetKwProg{ProcIterateJobs}{Procedure}{}{}\n\n\\SetKwFunction{FuncIterate}{processJob}\n\\SetKwProg{ProcIterate}{Procedure}{}{}\n\n\\SetKwFunction{FuncCalCost}{estimateCost}\n\\SetKwProg{ProcCalCost}{Procedure}{}{}\n\n\\SetKwFunction{FuncSaveRDD}{updateCache}\n\\SetKwProg{ProcSaveRDD}{Procedure}{}{}\n\n\\SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif}\n\n\\ProcIterateJobs{\\FuncIterateJobs{\\ensuremath{\\mathcal{G}}}}\n{ \n $C_{\\ensuremath{\\mathcal{G}}}$ = Historical RDD access record\\;\n $C_G$ = Current job RDD access record\\;\n \\For{$G\\in\\ensuremath{\\mathcal{G}}$}\n {\n \n processJob($G(V,E)$, $C_G$)\\;\n updateCache($C_G$, $C_{\\ensuremath{\\mathcal{G}}}$)\\;\n }\n} \n\\ProcIterate{\\FuncIterate{$G(V,E)$, $C$}}\n{ \n $C_G$.clear()\\;\n \\For{v$\\in$V}\n {\n v.accessed=False\\;\n toAccess=set(DAG.sink())\\;\n \\While{toAccess$\\neq \\emptyset $}\n {\n v=toAccess.pop()\\; \n $C_G$[v]=estimateCost(v)\\;\n\n \\If{not v.cached}\n {\n \\For{u $\\in$ v.parents}\n {\n \\If{not u.accessed}\n {\n toAccess.add(u)\\;\n }\n }\n }\n access(v); \/* Iterate RDD $v$. *\/ \\\\\n v.accessed=True\\;\n }\n }\n \\KwRet\\;\n} \n\\ProcCalCost{\\FuncCalCost{v}}\n{\n cost=compCost[v]; \/* If all parents are ready. *\/\\\\\n toCompute=v.parents \/* Check each parent. *\/\\\\\n \\While{toCompute $\\neq \\emptyset$}\n {\n u=toCompute.pop()\\;\n \\If {not (u.cached or u.accessed or u.accessedInEstCost)}\n {\n cost+=compCost[u]\\;\n toCompute.appendList(u.parents)\\; \n u.accessedInEstCost=True\\;\n }\n }\n \\KwRet cost;\n}\n\\ProcSaveRDD{\\FuncSaveRDD{$C_G$, $C_{\\ensuremath{\\mathcal{G}}}$}}\n{\n \\For {$v \\in C_{\\ensuremath{\\mathcal{G}}}$}\n {\n \n \\If{$v \\in C_G$}\n {\n $C_{\\ensuremath{\\mathcal{G}}}[v] =(1-\\beta)\\times C_{\\ensuremath{\\mathcal{G}}}[v] + \\beta \\times C_G[v]$\\;\n }\n \\Else\n {\n $C_{\\ensuremath{\\mathcal{G}}}[v] =(1-\\beta)\\times C_{\\ensuremath{\\mathcal{G}}}[v] $\\;\n }\n updateCacheByScore($C_{\\ensuremath{\\mathcal{G}}}$)\\;\n \n }\n \\KwRet\\;\n}\n\\caption{\\small A Heuristic Caching Algorithm.}\n\\label{ALG:1}\n\\end{algorithm}\n\n\\vspace{-2mm}\nAs discussed above, if the arrival rates $\\lambda_G$, $G\\in\\ensuremath{\\mathcal{G}}$, are known, we can determine a caching policy within a constant approximation from the optimal solution to the (offline) problem \\textsc{MaxCachingGain} by solving a convex optimization problem.\nIn practice, however, the arrival rates $\\lambda_G$ may \\emph{not} be known. To that end, we are interested in an \\emph{adaptive} algorithm, that converges to caching decisions \\emph{without any prior knowledge of job arrival rates} $\\lambda_G$. \nBuilding on \\cite{ioannidis2016adaptive}, we propose an adaptive algorithm for precisely this purpose. We describe the details of this adaptive algorithm in {\\intechreport{the Appendix~\\ref{app:overview}.}{our technical report \\cite{techrep}.}}\nIn short, our adaptive algorithm performs \\emph{projected gradient ascent} over concave function $L$, given by \\eqref{relaxation}. 
\nThat is, our algorithm maintains at each time a fractional vector $y\\in[0,1]^{|\\ensuremath{\\mathcal{V}}|}$, capturing the probability with which each RDD should be placed in the cache. Our algorithm collects information from executed jobs; this information is used to produce an estimate of the gradient $\\nabla L(y)$. In turn, this is used to adapt the probabilities $y$ with which we store different outcomes. Based on these adapted probabilities, we construct a randomized placement $x$ satisfying the capacity constraint \\eqref{intcont}. We can then show that the resulting randomized placement has the following property:\n\\begin{thm} \\label{mainthm}If $x(t)$ is the placement at time $t$, then \n$\\textstyle\\lim_{t\\to \\infty} \\mathbb{E}[F(x(t))] \\geq \\big(1 -{1}\/{e}\\big) F(x^*),$\nwhere $x^*$ is an optimal solution to the offline problem \\textsc{MaxCachingGain} (Eq.~\\eqref{maxcachegain}).\n\\end{thm}\nThe proof of Thm.~\\ref{mainthm} can be found in \\intechreport{Appendix~\\ref{app:proofofmainthm}.}{our technical report~\\cite{techrep}.}\n\n\n\n\n\\subsection{A Heuristic Adaptive Algorithm}\n\nBeyond attaining such guarantees, our adaptive algorithm gives us useful intuition on how to prioritize computational outcomes. Indeed, the algorithm prioritizes nodes $v$ that have a high gradient component $\\partial L\/\\partial x_v$ and a low size $s_v$. Given a present placement, RDDs should enter the cache if they have a high value w.r.t.~the following quantity \\intechreport{(Appendix~\\ref{app:proofofmainthm})}{~\\cite{techrep}}:\n\\begin{align}\\textstyle \\frac{\\partial L}{\\partial x_v}\/s_v \\simeq \\left(\\textstyle\\sum_{G\\in \\ensuremath{\\mathcal{G}}: v\\in G}\\lambda_G \\times \\Delta(w)\\right)\/s_v, \\label{approx}\\end{align}\nwhere $\\Delta(w)$ is the difference in total work incurred if $v$ is not cached.\nThis intuition is invaluable in coming up with useful heuristic algorithms for determining what to place in a cache. In contrast to, e.g., LRU and LFU, which prioritize items with a high request rate, Eq.~\\eqref{approx} suggests that a computation should be cached if (a) it is requested often, (b) caching it can lead to a significant reduction in the total work, and (c) it has small size. Note that (b) is \\emph{dependent on other caching decisions made by our algorithm}. Observations (a), (b), and (c) are intuitive, and the specific product form in \\eqref{approx} is directly motivated and justified by our formal analysis.\nThey give rise to the following simple heuristic adaptive algorithm: for each job submitted, maintain a moving average of (a) the request rate of the individual nodes it comprises, and (b) the cost that one would experience if these nodes are not cached. Then, place in the cache only those nodes that have a high such value when scaled by the size $s_v$. \n
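\nA minimal sketch of this scoring rule is given below; it is illustrative only (the names are placeholders), it omits the DAG traversal and bookkeeping of Alg.~\\ref{ALG:1}, and it simply keeps an EWMA of each node's observed recomputation cost and refills the cache with the highest score-per-byte densities.\n\\begin{verbatim}\n# Simplified sketch of the heuristic scoring and refresh step.\n# scores[v] approximates (request rate of v) x (work saved by caching v),\n# maintained as an EWMA with decay rate beta, then scaled by size s_v.\ndef update_scores(scores, observed_cost, beta=0.6):\n    # observed_cost[v]: recomputation cost of v measured in the current job;\n    # nodes not observed in this job decay towards zero.\n    for v in set(scores) | set(observed_cost):\n        old = scores.get(v, 0.0)\n        new = observed_cost.get(v, 0.0)\n        scores[v] = (1.0 - beta) * old + beta * new\n    return scores\n\ndef select_cache(scores, sizes, capacity):\n    # Keep the RDDs with the highest score-per-byte densities within capacity.\n    chosen, used = [], 0.0\n    for v in sorted(scores, key=lambda u: scores[u] \/ sizes[u], reverse=True):\n        if used + sizes[v] <= capacity:\n            chosen.append(v)\n            used += sizes[v]\n    return chosen\n\\end{verbatim}\n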
\n\\vspace{-2mm}\nAlg.~\\ref{ALG:1} shows the main steps of our heuristic adaptive algorithm. \nIt updates the cache (i.e., the storage memory pool) after the execution of each job (line 5) based on decisions made in the $updateCache$ function (line 6), which considers both the historical (i.e., $C_{\\ensuremath{\\mathcal{G}}}$) and current-job RDD (i.e., $C_G$) cost scores.\nIn particular, when iterating over the RDDs of each job in a recursive fashion, \nan auxiliary function $estimateCost$ is called to calculate and record the temporal and spatial cost of each RDD in that job (see line 14 and lines 22 to 31).\nNotice that $estimateCost$ does not actually access any RDDs, but conducts a DAG-level analysis for cost estimation, which will be used to determine cache contents in the $updateCache$ function.\nIn addition, a hash mapping table is also used to record and detect computational overlaps across jobs (see details of our implementation in Sec.~\\ref{SUBSEC:PE-SI}). \nAfter that, we iterate over each RDD's parent(s) (lines 16 to 18). Once all of its parents are ready, we access (i.e., compute) the RDD (line 19). \nLastly, the $updateCache$ function first updates the cost scores of all accessed RDDs using the quantities collected above, with a moving average window using a decay rate of $\\beta$, i.e., an Exponentially Weighted Moving Average (EWMA).\nNext, $updateCache$ makes cross-job cache decisions based on the sorting results of the moving average window by calling the $updateCacheByScore$ function. \nThe implementation of this function can (1) refresh the entire RAM with the top-score RDDs; or (2) evict lower-score old RDDs to insert higher-score new RDDs. \n\n\n\n\\section{Performance Evaluation}\n\\label{SEC:PE}\n\\vspace{-2mm}\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{Figures\/EXP-SIM.pdf}\\vspace{-3mm}\n\\caption{\\small Hit ratio, access number and total work makespan results of large-scale simulation experiments.\n\\vspace*{-2.3em}\n}\n\\label{FIG:EXP-SIM}\n\\end{figure*}\n\n\nIn this section, we first demonstrate the performance of our adaptive caching algorithm (Alg.~\\ref{ALG:1}) on a simple illustrative example. \nWe then build a simulator to analyze the performance of large-scale synthetic traces with complex DAGs. \nLastly, we validate the effectiveness of our adaptive algorithm by conducting real experiments in Apache Spark with real-world machine learning workloads.\n\n\n\n\n\n\n\\vspace{-3mm}\n\\subsection{Numerical Analysis}\n\\label{SUBSEC:PE-NA}\n\\vspace{-2mm}\n\nWe use a simple example to illustrate how our adaptive algorithm (i.e., Alg.~\\ref{ALG:1}) performs w.r.t.~minimizing total work. \nThis example is specifically designed to illustrate that our algorithm significantly outperforms the default LRU policy used in Spark. \nAssume that we have 5 jobs ($J_0$ to $J_4$) each consisting of 3 RDDs, the first 2 of which are common across jobs. \nThat is, $J_0$'s DAG is $R_0$$\\rightarrow$$R_1$$\\rightarrow$$R_2$, \n$J_1$ is $R_0$$\\rightarrow$$R_1$$\\rightarrow$$R_3$, \n$J_2$ is $R_0$$\\rightarrow$$R_1$$\\rightarrow$$R_4$, etc. \nThe calculation time for $R_1$ is 100 seconds while the calculation time for other RDDs (e.g., $R_2$, $R_3$,...) is 10 seconds. \nWe submit this sequence of jobs twice, with an interarrival time of 10 seconds between jobs. Thus, we have 10 jobs in a sequence of \\{$J_0$, $J_1$, ..., $J_4$, $J_0$, $J_1$, ..., $J_4$\\}. We set the size of each RDD as 500MB and the cache capacity as 500MB as well. Hence, at most one RDD can be cached at any moment. 
\n\n\nTable~\\ref{TAB:sample} shows the experimental results of this simple example under LRU and our algorithm. Obviously, LRU cannot well utilize the cache because the recently cached RDD (e.g., $R_2$) is always evicted by the newly accessed RDD (e.g., $R_3$). As a result, none of the RDDs are hit under the LRU policy. By producing an estimation of the gradient on RDD computation costs, our algorithm instead places $R_1$ in the cache after the second job finishes and thus achieves a higher hit ratio of 36\\%, i.e., 8 out of 22 RDDs are hit. Total work (i.e., the total calculation time for finishing all jobs) is significantly reduced as well under our algorithm. \n\n\n\\begin{table}[h]\n\\scriptsize\n\\vspace{-0.1in}\n\\caption{Caching results of the simple case.}\n\\label{TAB:sample}\n\\centering\n\\begin{tabular}{|p{10mm}|p{2.5mm}|p{2.5mm}|p{2.5mm}|p{2.5mm}|p{2.5mm}|p{2.5mm}|p{9mm}|p{13mm}|}\n\\hline\nPolicy & $J_0$ & $J_1$ & $J_2$ & $J_3$ & ... & $J_4$ & hitRatio & totalWork \\\\ \\hline \nLRU & $R_2$ & $R_3$ & $R_4$ & $R_5$ & ... & $R_6$ &0.0\\% & 1100 \\\\ \\hline\nAdaptive & $R_2$ & $R_1$ & $R_1$ & $R_1$ & ... & $R_1$ &36.4\\% & 300 \\\\ \\hline\n\\end{tabular}\n\\vspace{-0.1in}\n\\end{table}\n\n\n\n\n\n\n\\vspace{-2mm}\n\\subsection{Simulation Analysis}\n\\label{SUBSEC:PE-SA}\n\\vspace{-2mm}\n\nTo further validate the effectiveness of our proposed algorithm, we scale up our synthetic trace by randomly generating a sequence of 1000 jobs to represent real data analysis applications with complex DAGs. Fig.~\\ref{FIG:EXP-DAG} shows an example of some jobs' DAGs from our synthetic trace, where some jobs include stages and RDDs with the same generating logic chain. For example, stage 0 in $J_0$ and stage 1 in $J_1$ are identical, but their RDD IDs are different and will be computed twice. \nOn average, each of these jobs consists of six stages and each stage has six RDDs. \nThe average RDD size is 50MB. \nWe use a decay rate of $\\beta=0.6$. \n\n\n\n\n\n\n\\vspace{-2mm}\nWe implement four caching algorithms for comparison: (1) NoCache: a baseline policy, which forces Spark to ignore all user-defined {\\em cache}\/{\\em persist} demands, and thus provides the lower bound of caching performance; (2) LRU: the default policy used in Spark, which evicts the least recent used RDDs; (3) FIFO: a traditional policy which evicts the earliest RDD in the RAM; and (4) LCS: a recently proposed policy, called ``Least Cost Strategy''~\\cite{geng2017lcs}, which uses a heuristic approach to calculate each RDD's recovery temporal cost to make eviction decisions. \nThe main metrics include \n(a) {\\em RDD hit ratio} that is calculated as the ratio between the number of RDDs hit in the cache and the total number of accessed RDDs, or the ratio between the size of RDDs hit in the cache and the total size of accessed RDDs;\n(b) {\\em Number of accessed RDDs} and {\\em total amount of accessed RDD data size} that need to be accessed through the experiment; \n(c) {\\em Total work} (i.e., makespan) that is the total calculation time for finishing all jobs; \nand \n(d) {\\em Average waiting time} for each job.\n\n\nFig.~\\ref{FIG:EXP-SIM} depicts the performance of the five caching algorithms.\nWe conduct a set of simulation experiments by configuring different cache sizes for storing RDDs. \nClearly, our algorithm (``Adaptive\") significantly improves the hit ratio (up to 70\\%) across different cache sizes, as seen Fig.~\\ref{FIG:EXP-SIM}(a) and (b). 
\nIn contrast, the other algorithms start to hit RDDs (with hit ratio up to 17\\%) only when the cache capacity becomes large. \nConsequently, our proposed algorithm reduces the number of RDDs that need to be accessed and calculated (see Fig.~\\ref{FIG:EXP-SIM}(c) and (d)), which further saves the overall computation costs, i.e., the total work in Fig.~\\ref{FIG:EXP-SIM}(e) and (f). \nWe also notice that such an improvement from ``Adaptive\" becomes more significant when we have a larger cache space for RDDs, which indicates that our adaptive algorithm is able to better detect and utilize those shareable and reusable RDDs across jobs. \n\n\n\n\n\\vspace{-3mm}\n\\subsection{Spark Implementation}\n\\label{SUBSEC:PE-SI}\n\\vspace{-2mm}\n\nWe further evaluate our cache algorithm by integrating our methodology into Apache Spark 2.2.1, hypervised by VMware Workstation 12.5.0. \nTable~\\ref{TAB:EV-SPEC} summarizes the details of our testbed configuration. \nIn Spark, the memory space is divided into four pools: storage memory, execution memory, unmanaged memory and reserved memory. \nOnly storage and execution memory pools (i.e., $UnifiedMemoryManager$) are used to store runtime data of Spark applications. \nOur implementation focuses on storage memory, \nwhich stores cached data (RDDs), internal data propagated through the cluster, and temporarily unrolled serialized data.\nFig.~\\ref{FIG:EXP-SP-ARC} further illustrates the main architecture of modules in our implementation. \nIn detail, different from Spark's built-in caching that responds to {\\em persist} and {\\em unpersist} APIs, we build an {\\em RDDCacheManager} module in the {\\em Spark Application Layer} to communicate with cache modules in the {\\em Worker Layer}. \nOur proposed module maintains statistical records (e.g., historical access, computation overhead, DAG dependency, etc.), and automatically decides which new RDDs to be cached and which existing RDDs to be evicted when the cache space is full. 
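\nAs an illustration of how such decisions can be enacted from the application layer, the following is a minimal PySpark-style sketch; the class, method, and variable names are purely illustrative (they are not the actual interfaces of our module), and it relies only on the standard {\\em persist}\/{\\em unpersist} RDD API and a score-per-byte ranking supplied by the caching algorithm.\n\\begin{verbatim}\n# Illustrative sketch only: pin \/ unpin RDDs according to score densities.\nfrom pyspark import StorageLevel\n\nclass CacheManagerSketch(object):\n    def __init__(self, capacity_bytes):\n        self.capacity = capacity_bytes\n        self.pinned = {}   # rdd id -> rdd handle currently persisted\n\n    def apply_decisions(self, candidates):\n        # candidates: list of (rdd, score_density, size_bytes), where\n        # score_density = EWMA(cost saved by caching rdd) \/ size_bytes,\n        # sorted from highest to lowest density.\n        chosen, used = {}, 0\n        for rdd, density, size in candidates:\n            if used + size <= self.capacity:\n                chosen[rdd.id()] = rdd\n                used += size\n        # Unpersist RDDs that are no longer selected, then pin new ones.\n        for rid, rdd in list(self.pinned.items()):\n            if rid not in chosen:\n                rdd.unpersist()\n        for rid, rdd in chosen.items():\n            if rid not in self.pinned:\n                rdd.persist(StorageLevel.MEMORY_ONLY)\n        self.pinned = chosen\n\\end{verbatim}\n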
\n\n\\vspace{-1mm}\n\\small\n\\begin{table}[th]\n \\center\n \\caption{Testbed configuration.} \n \\label{TAB:EV-SPEC}\n \n \\begin{tabular}{|c|c|}\n \\hline\n \\textbf{Component} & \\textbf{Specs} \\\\ \\hline \n Host Server & Dell PowerEdge T310 \\\\ \\hline\n Host Processor & Intel Xeon CPU X3470 \\\\ \\hline\n Host Processor Speed & 2.93GHz \\\\ \\hline\n Host Processor Cores & 8 Cores \\\\ \\hline\n \n Host Memory Capacity & 16GB DIMM DDR3 \\\\ \\hline\n Host Memory Data Rate & 1333 MHz \\\\ \\hline\n \n Host Hypervisor & VMware Workstation 12.5.0 \\\\ \\hline\n Big Data Platform & Apache Spark 2.2.1 \\\\ \\hline\n Storage Device & Western Digital WD20EURS \\\\ \\hline\n Disk Size & 2TB \\\\ \\hline\n Disk Bandwidth & SATA 3.0Gbps \\\\ \\hline\n \n \n Memory Size Per Node & 1 GB \\\\ \\hline\n Disk Size Per Node & 50 GB\\\\ \\hline\n\n \\end{tabular} \n \\end{table}\n\\normalsize\n\\vspace{-1mm}\n\n\\begin{comment}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.31\\textwidth]{Figures\/EXP-SP-MEM.pdf}\\vspace{-3mm}\n\\caption{\\small Architecture of Apache Spark Unified Memory Manager.\\vspace*{-1.3em}}\\vspace*{-1.3em}\n\\label{FIG:EXP-SP-MEM}\n\\end{figure}\n\\end{comment}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.31\\textwidth]{Figures\/EXP-SP-ARC.pdf}\\vspace{-3mm}\n\\caption{\\small Module structure view of our Spark implementation, where our proposed {\\em RDDCacheManager} module cooperates with {\\em cache} module inside each worker node\n}\n\\label{FIG:EXP-SP-ARC}\n\\end{figure}\n\n\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.29\\textwidth]{Figures\/EXP-SP-RES.pdf}\\vspace{-3mm}\n\\caption{\\small Hit ratio and normalized makespan results of a stress testing on cache-unfriendly {\\em Ridge Regression} benchmark with different cache sizes under four cache algorithms.\\vspace*{-1.3em}}\n\\label{FIG:EXP-SP-REC}\n\\end{figure}\n\nWe select {\\em Ridge Regression}~\\cite{hoerl1970ridge} as a benchmark because it is a ubiquitous technique, widely applied in machine learning and data mining applications~\\cite{li2013enhanced, huang2012extreme}.\nThe input database we use is a huge table containing thousands of entries (i.e., rows), and each entry has more than ten features (i.e., columns). \nMore than hundred Spark jobs are repeatedly generated with an exponential arrival rate. \nEach job's DAG contains at least one {\\em Ridge Regression}-related subgraph, which regresses a randomly selected feature column (i.e., target) by a randomly selected subset of the remaining feature columns (i.e., source), i.e., $f_t=\\Re (\\vec {f_s})$, where $f_t$ is the target feature, and $\\Re(\\vec {f_s})$ is the regressed correlation function with an input of source feature vector $\\vec {f_s}$.\nMoreover, different jobs may share the same selections of target and source features, and thus they may have some RDDs with exactly the same generating logic chain (i.e., a subset of DAGs). 
\nUnfortunately, the default Spark cannot identify RDDs with the same generating logic chain if they are in {\\em different} jobs.\nIn order to identify these reusable and identical RDDs, our proposed {\\em RDDCacheManager} uses a mapping table to record each RDD's generating logic chain {\\em across} jobs (by importing our customized header files into the benchmark), i.e., we denote $RDD_x$ by a hashing function $key\\leftarrow hash(G_x(V,E))$, where $G_x(V,E)$ is the subgraph of $RDD_x$ ($V$ is the set of all ancestor RDDs and $E$ is the set of all operations along the subgraph).\nSince not all operations are deterministic~\\cite{determ} (e.g., a {\\em shuffle} operation on the same input data may result in different RDDs), we only monitor those deterministic operations which guarantee the same output under the same input. \n\n\n\n\n\n\\vspace{-3mm}\nRather than scrutinizing the cache-friendly case, where our adaptive algorithm works well as shown in Sec.~\\ref{SUBSEC:PE-SA}, \nit is more interesting to study the performance under the cache-unfriendly case (also called a ``stress test''~\\cite{wagner2005stresstest}), where the space of different combinations of source and target features is comparatively large, which causes a large number of different RDDs to be produced across jobs. \nMoreover, the probability of RDD reaccess is low (e.g., in the trace we generated, less than 26\\% of RDDs are repeated across all jobs), and the temporal distances between RDD reaccesses are also relatively long~\\cite{vcacheshare}. Thus, it becomes more challenging for a caching algorithm to make good caching decisions to reduce the total work under such a cache-unfriendly case.\n\n\\vspace{-3mm}\nFig.~\\ref{FIG:EXP-SP-REC} shows the real experimental results under four different caching algorithms, i.e., FIFO, LRU, LCS, and Adaptive. To investigate the impact of cache size, we also change the size of the storage memory pool to allow different numbers of RDDs to be cached in that pool. \nCompared to the other three algorithms, our adaptive algorithm achieves non-negligible improvements in both hit ratio (see Fig.~\\ref{FIG:EXP-SP-REC}(a)) and makespan (see Fig.~\\ref{FIG:EXP-SP-REC}(b)), especially when the cache size increases. \nSpecifically, the hit ratio can be improved by up to 13\\% and the makespan can be reduced by up to 12\\%, which are decent achievements for such a cache-unfriendly stress test with little room to improve performance. Furthermore, we observe that Adaptive significantly increases the hit ratio and reduces the makespan when we have more storage memory space, which again indicates that our caching algorithm is able to make good use of the memory space. \nIn contrast, the other algorithms show less improvement in hit ratio and makespan, since they cannot conduct cross-job computational overlap detection, while our adaptive algorithm, with a global overview of all accessed RDDs, can effectively select the proper RDDs from all jobs to be cached in the limited storage memory pool. \n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\\label{SEC:RW}\n\\vspace{-3mm}\n\n\n\n\n\\intechreport{In the era of big data, a large amount of data needs to be analyzed and processed in a small amount of time. \nTo meet this requirement, two types of in-memory processing systems have been proposed~\\cite{zhang2015memory}. 
%\nThe first type is data analytics systems focusing on batch processing, such as SINGA~\cite{singa}, Giraph~\cite{giraph}, and GridGain~\cite{gridgain}.\nThe second type is real-time data processing systems, such as Storm~\cite{storm}, Spark Streaming~\cite{sparkstream}, and MapReduce Online~\cite{condie2010mapreduce}. \n}{}\n\n\nMemory management is a well-studied topic across in-memory processing systems. \nMemcached~\cite{Memcache} and Redis~\cite{redis} are highly available distributed key-value stores.\nMegastore~\cite{megastore} offers a distributed storage system with strong consistency guarantees and high availability for interactive online applications. \nEAD~\cite{yang2018ead} and MemTune~\cite{xu2016memtune} are dynamic memory managers based on workload memory demand and in-memory data cache needs. \n\intechreport{\n\nA number of studies have also been conducted on modeling multi-stage frameworks. \nStudy~\cite{gu2013memory} compares the performance of Hadoop and Spark in terms of both time and memory cost, and observes that Spark is, in general, faster than Hadoop in iterative operations, but at the price of higher memory consumption. \nStudy~\cite{wang2015performance} proposes a simulation-driven prediction model that can predict job performance with high accuracy for Spark. \nStudy~\cite{wang2016modeling} designs a novel analytical model that can estimate the effect of interference among multiple Apache Spark jobs running concurrently on job execution time.\n\n}{}\nThere are several heuristic approaches to evicting intermediate data in big data platforms. \nLeast Cost Strategy (LCS)~\cite{geng2017lcs} evicts the data that leads to the minimum recovery cost in the future. \nLeast Reference Count (LRC)~\cite{yu2017lrc} evicts the cached data blocks whose reference count is the smallest, where the reference count of a block is the number of its dependent child blocks that have not been computed yet. \nWeight Replacement (WR)~\cite{duan2016selection} is another heuristic approach that considers the computation cost, dependency, and sizes of RDDs. \nASRW~\cite{wang2015new} uses the RDD reference value to improve the memory cache resource utilization rate and the running efficiency of the program.\nStudy~\cite{kathpal2012analyzing} develops cost metrics to compare storage vs. compute costs and suggests when a transcoding on-the-fly solution can be cost-effective. \nWeighted-Rank Cache Replacement Policy (WRCP)~\cite{ponnusamy2013cache} uses parameters such as access frequency, aging, and mean access gap ratio, and functions such as the size and cost of retrieval. \nThese heuristic approaches do not formulate the problem within an optimization framework; moreover, they focus only on a single job and ignore cross-job intermediate dataset reuse.\n\n\n\\section{Conclusion}\n\\label{SEC:CO}\n\\vspace{-3mm}\n\n\nBig data multi-stage parallel computing frameworks, such as Apache Spark, have been widely used to perform data processing at scale. \nTo speed up the execution, Spark strives to absorb as much intermediate data as possible into memory to avoid repeated computation. \nHowever, the default in-memory storage mechanism, LRU, does not choose reasonable RDDs to cache their partitions in memory, which leads to arbitrarily sub-optimal caching decisions. 
\nIn this paper, we formulated the problem by proposing an optimization framework, and then developed an adaptive cache algorithm to store the most valuable intermediate datasets in the memory.\nAccording to our real implementation on Apache Spark, the proposed algorithm can improve the performance by reducing 12\\% of the total work to recompute RDDs.\nIn the future, we plan to extend our methodology to support more big data platforms.\n\\vspace{-5mm}\n\n\\subsection{Online Algorithm Overview}\\label{app:overview}\n\n We describe here our adaptive algorithm for solving {\\textsc{MaxCachingGain}} without a prior knowledge of the demands $\\lambda_G$, $G\\in \\mathcal{G}$. The algorithm is based on \\cite{ioannidis2016adaptive}, which solves a problem with a similar objective, but with matroid (rather than knapsack) constraints. We depart from \\cite{ioannidis2016adaptive} in both the objective studied--namely, \\eqref{obj}-- as well as in the rounding scheme used: the presence of knapsack constraints implies that a different methodology needs to be applied to round the fractional solution produced by the algorithm in each step. \n \n We partition time into periods of equal length $T>0$, during which we collect access statistics for different RDDs. In addition, we maintain as state information the\\emph{ marginals} $y_v\\in[0,1]^{|\\mathcal{V}|}$: intuitively each $y_{v}$ captures the probability that node $v\\in \\mathcal{V}$ is cached. \nWhen the period ends, we (a) adapt the state vector $y=[y_v]_{v\\in\\mathcal{V}}\\in [0,1]^{|\\mathcal{V}|}$, and (b) reshuffle the contents of the cache, in a manner we describe below. \n\n\\fussy\n\\noindent\\textbf{State Adaptation.} We use RDD access and cost measurements collected during a period to produce a random vector $z=[z_v]_{v\\in \\mathcal{V}}\\in \\reals_+^{|\\mathcal{V}|}$ that is an unbiased estimator of a subgradient of $L$ w.r.t.~to $y$. That is, if $y^{(k)}\\in[0,1]^{|\\mathcal{V}|}$ is the vector of marginals at the $k$-th measurement period, $z=z(y^{(k)})$ is a random variable satisfying:\n \\begin{align}\\label{estimateprop}\n\\mathbb{E}\\big[z(y^{(k)})\\big] \\in \\partial L(y^{(k)}\n\\end{align}\nwhere $\\partial L(y)$ is the set of subgradients of $L$ w.r.t $y$. \n We specify how to produce such estimates below, in Appendix~\\ref{app:distributedsub}. 
\n \n Having these estimates, we adapt the state vector $y$ as follows: at the conclusion of the $k$-th period, the new state is computed as\n \\begin{align}y^{(k+1)} \\leftarrow \\mathcal{P}_{\\ensuremath{\\mathcal{D}}} \\left( y^{(k)} + \\gamma^{(k)}\\cdot z(y^{(k)}) \\right),\\label{adapt}\\end{align}\nwhere $\\gamma^{(k)}>0$ is a gain factor and $\\mathcal{P}_{\\ensuremath{\\mathcal{D}}}$ is the projection to the set of relaxed constraints: \n$$\\ensuremath{\\mathcal{D}} = \\left\\{y\\in [0,1]^{|\\mathcal{V}|} : \\sum_{v\\in \\mathcal{V}}s_vy_{v}=K \\right\\}.$$\nNote that $\\mathcal{P}_{\\mathcal{D}}$ is a projection to a convex polytope, and can thus be computed in polynomial time.\n\n\n\\noindent\\textbf{State Smoothening.}\nUpon performing the state adaptation \\eqref{adapt}, each node $v\\in V$ computes the following\n``sliding average'' of its current and past states:\n \\begin{align}\\label{slide}\\bar{y}^{(k)} = \\textstyle \\sum_{\\ell = \\lfloor\\frac{k}{2} \\rfloor}^{k} \\gamma^{(\\ell)} y^{(\\ell)} \/\\left[\\sum_{\\ell=\\lfloor \\frac{k}{2}\\rfloor}^{k}\\gamma^{(\\ell)}\\right] .\\end{align}\n This ``state smoothening'' is necessary precisely because of the non-differentiability of $L$ \\cite{nemirovski2005efficient}. Note that $\\bar{y}^{(k)} \\in \\ensuremath{\\mathcal{D}}$, as a convex combination of points in $\\ensuremath{\\mathcal{D}}$.\n\n\\noindent\\textbf{Cache Placement.} Finally, at the conclusion of a timeslot, the smoothened marginals $\\bar{y}^{(k)}\\in [0,1]^{|\\mathcal{V}|}$ are \\emph{rounded}, to produce a new integral placement $x^{(k)}\\in\\{0,1\\}^{|\\ensuremath{\\mathcal{V}}|}$ that satisfies the knapsack constraint \\eqref{intcont}. There are several ways of producing such a rounding \\cite{ageev2004pipage,swaprounding,kulik2009maximizing}. We follow the probabilistic rounding of \\cite{kulik2009maximizing} (see also \\cite{horel2014budget}): starting from a fractional $y$ that maximizes $L$ over $\\ensuremath{\\mathcal{D}}$, the resulting (random) integral $x$ is guaranteed to be within $1-1\/e$ from the optimal, in expectation. \n\n\\begin{comment}\n Pipage rounding uses the following property of $F$: given a fractional solution $\\mathbf{y} \\in \\ensuremath{\\mathcal{D}}$, there are at least two fractional variables $y_{v}$ and $y_{v'}$, such that transferring mass from one to the other,(a) makes at least one of them 0 or 1, (b) the new $y'$ resulting from this mass transfer remains feasible (i.e., in $\\ensuremath{\\mathcal{D}}$), and (b) $F(y') \\geq F(y)$, that is, the caching gain at $y'$ is at least as good as $y$. \n This is a consequence of the fact, for any two $v,v'\\in \\mathcal{V}$, function $F(\\cdot,y_v,y_{v'})$ is convex w.r.t. the two variables $y_v$, $y_{v'}$; hence, maximizing it over the constraint set implied by $\\ensuremath{\\mathcal{D}}$, restricted to all other variables being fixed, will have a maximum at an extreme point (in which one of the two variables is either 0 or 1).\n \n This rounding process is repeated until $\\hat{\\mathbf{y}}$ has at most one fractional element left, at which point pipage rounding terminates, discards this fractional value, and returns $y'$.\n\\end{comment}\n\n\\subsection{Constructing an Unbiased Estimator of $\\partial L(y)$.}\\label{app:distributedsub}\nTo conclude our algorithm description, we outline here how to compute the unbiased estimates $z$ of the subgradients $\\partial L(y{(k)})$ during a measurement period. In the exposition below, drop the superscript $\\cdot^{(k)}$ for brevity. 
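\nBefore turning to the estimator, the per-period state adaptation \\eqref{adapt} and smoothening \\eqref{slide} described above can be summarized by the short sketch below. This is illustrative code rather than our implementation: the bisection-based projection onto $\\ensuremath{\\mathcal{D}}$ is one possible choice, and it assumes nonnegative sizes $s_v$ with $K\\le\\sum_v s_v$.\n\\begin{verbatim}\n# One measurement period of the adaptive scheme (sketch).\nimport numpy as np\n\ndef project_to_D(y0, s, K, iters=60):\n    # Euclidean projection onto {y in [0,1]^V : sum_v s_v y_v = K},\n    # by bisection on the multiplier of the equality constraint.\n    lo, hi = -1e6, 1e6\n    for _ in range(iters):\n        lam = 0.5 * (lo + hi)\n        y = np.clip(y0 - lam * s, 0.0, 1.0)\n        if np.dot(s, y) > K:\n            lo = lam          # need more shrinking\n        else:\n            hi = lam\n    return y\n\ndef adapt_state(y, z, gamma, s, K):\n    # subgradient step followed by projection, as in the state adaptation\n    return project_to_D(y + gamma * z, s, K)\n\ndef smoothed_state(ys, gammas, k):\n    # gamma-weighted sliding average over the last half of the periods\n    lo = k // 2\n    w = np.array(gammas[lo:k + 1])\n    return np.tensordot(w, np.array(ys[lo:k + 1]), axes=1) / w.sum()\n\\end{verbatim}\n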
\n\nThe estimation proceeds as follows.\n\\begin{enumerate}\n\\item Every time a job $G(V,E)$ is submitted for computation, we compute\nthe quantity\n$$t_v=\\textstyle\\sum_{v\\in V}\\!c_v \\ensuremath{\\mathds{1}}\\big( x_v+\\sum_{u\\in \\scc(v)}x_u\\leq 1\\big),$$\nwhere \n$$\n\\ensuremath{\\mathds{1}}(A) =\\begin{cases}\n1, &\\text{if}~A~\\text{is true},\\\\\n0, &\\text{o.w.}\n\\end{cases}\n$$\n\\item Let $\\mathcal{T}_{v}$ be the set of quantities collected in this way at node $v$ regarding item $v\\in \\mathcal{V}$ during a measurement period of duration $T$. At the end of the measurement period, we produce the following estimates: \\begin{align}z_{v}= \\textstyle\\sum_{t\\in \\mathcal{T}_{v} }t\/T,\\quad v\\in\\mathcal{V}.\\label{estimation}\\end{align} \n\\end{enumerate}\nNote that, in practice, $z_v$ needs to be computed only for RDDs $v\\in \\mathcal{V}$ that have been involved in the computation of some job in the duration of the measurement period. \n\nIt is easy to show that the above estimate is an unbiased estimator of the subgradient:\n\\begin{lemma}\\label{subgradientlemma}\nFor $z=[z_{v}]_{v\\in \\mathcal{V}}\\in \\reals_+^{|\\mathcal{V}|}$ the vector constructed through coordinates \\eqref{estimation},\n$$\\mathbb{E}[z(y)] \\in \\partial L(y)\\text{ and }\\mathbb{E}[\\|z\\|_2^2] th_2\\\\\n uncertain \\quad \\quad \\quad \\hspace*{-1mm} \\textrm{otherwise}\n \\end{array}\n \\right.\n\\label{FixedTh}\n\\end{equation}\nThe two thresholds $th_1$ and $th_2$ have been fixed to $0.3$ and $0.7$, respectively. \nIf $prob(x, y) \\in (th_1,th_2)$, then $(x,y)$ is labeled as uncertain. To provide a significant pixel--level supervision, bounding--boxes that are not labeled as legible, machine printed and written in English have been added to the uncertainty region. This procedure has been used to extract the COCO\\_TS dataset.\nSome examples of the obtained supervisions are reported in Figure \\ref{supervision_example}.\n\\begin{figure*}[ht]\n\\begin{center}\n\\centerline{\n\\includegraphics[height=3.2cm,keepaspectratio]{generated_supervision\/COCO_train2014_000000005483.jpg}\n\\includegraphics[height=3.2cm,keepaspectratio]{generated_supervision\/COCO_train2014_000000011422.jpg}\n\\includegraphics[height=3.2cm,keepaspectratio]{generated_supervision\/COCO_train2014_000000014886.jpg}}\n\\vskip 0.08in\n\\centerline{\n\\includegraphics[height=3.2cm,keepaspectratio]{generated_supervision\/COCO_train2014_000000005483.png}\n\\includegraphics[height=3.2cm,keepaspectratio]{generated_supervision\/COCO_train2014_000000011422.png}\n\\includegraphics[height=3.2cm,keepaspectratio]{generated_supervision\/COCO_train2014_000000014886.png}}\n\\caption{The original images and the generated supervisions, on the top and at the bottom, respectively. The background is colored in black, the foreground in red, and the uncertainty region in yellow.}\n\\label{supervision_example}\n\\end{center}\n\\end{figure*}\n\n\\subsection{Scene Text Segmentation} \\label{Scene_Text_Segmentation}\nThe COCO\\_TS dataset is used to train a deep segmentation network (bottom of Figure \\ref{training_scheme}) for scene text segmentation of both the ICDAR--2013 and Total--Text datasets. The effects obtained by the use of the COCO\\_TS dataset, as an alternative to synthetic data, will be described in the next section. \n\n\n\\section{Experiments} \\label{Experiments}\nIn the following, our experimental setup is shown. 
In particular, Section \\ref{PSP} and Section \\ref{training_details} introduce the segmentation network and define the implementation details used in our experimental setup. In Section \\ref{coco_eval}, the generated annotations for the COCO\\_TS dataset are evaluated, whereas Section \\ref{sts_eval} assesses the insertion of the COCO\\_TS dataset during the training of a scene text segmentation network.\n\n\\subsection{PSPNet}\\label{PSP}\nAll the experiments are carried out with the PSPNet architecture \\cite{PSP}, originally designed for semantic segmentation of natural images. This model, like most of the other semantic segmentation networks, takes an image as input and produces a per--pixel prediction. The PSPNet is a deep convolutional neural network, built on the ResNet model for image classification. To enlarge the receptive field of the neural network, a set of dilated convolutions replaces standard convolutions in the ResNet part of the network. The ResNet encoder produces a set of feature maps and a pyramid pooling module is used to gather context information. Finally, an upsample layer transforms, by bilinear interpolation, the low--dimension feature maps to the resolution of the original image. A convolutional layer produces the final per--pixel prediction. In this work, to better handle the presence of thin text and similarly to \\cite{tang2017scene}, we modified the network structure adding a two level convolutional decoder.\n\n\\subsection{Implementation Details}\\label{training_details}\nThe PSPNet architectures, used both for the background--foreground network and for scene text segmentation, are implemented in TensorFlow. Due to computational issues, in this work, the PSPNet based on the ResNet50 model is used as the CNN encoder. The experiments are realized based on the training procedure explained in the following.\nAs far as the background--foreground network is considered, the image crops are resized so that the min side dimension is equal to 185, while maintaining the original aspect--ratio. Random crops of $185\\times185$ are used during training. Instead, for the scene text segmentation network, the input images have not been resized, and random crops of $281\\times281$ are extracted for training. A multi--scale approach is employed during training and test. In the evaluation phase, a sliding window strategy is used for both the networks. The Adam optimizer \\cite{adam}, with a learning rate of $10^{-4}$, has been used to train the network. The experimentation was carried out in a Debian environment, with a single NVIDIA GeForce GTX 1080 Ti GPU.\n\n\\subsection{Evaluation of the Supervision Generation Procedure}\n\\label{coco_eval}\nThe quality of the generation procedure cannot be assessed on COCO--Text, \ndue to the absence of pixel--level targets. Therefore, we used the ICDAR--2013 dataset for which ground--truth labels are available.\nFollowing the procedure described in Section \\ref{sup_generation}, the segmentation annotations for the ICDAR--2013 test set have been extracted and compared to the original ground--truth. The results, measured using the pixel--level precision, recall and F1 score, are reported in Table \\ref{annotation_res}. For this analysis, the uncertainty region has been considered as text. 
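\nAs a concrete illustration of this evaluation protocol (and of the fixed--threshold labeling of Section \\ref{sup_generation}), a minimal sketch is given below; the label encoding, array shapes, and random example data are assumptions for illustration, not the original evaluation code.\n\\begin{verbatim}\n# Threshold labeling and pixel-level scoring (sketch).\nimport numpy as np\n\nTH1, TH2 = 0.3, 0.7\n\ndef label_pixels(prob):\n    lab = np.full(prob.shape, 2, dtype=np.uint8)   # 2 = uncertain\n    lab[prob < TH1] = 0                            # 0 = background\n    lab[prob > TH2] = 1                            # 1 = text\n    return lab\n\ndef precision_recall_f1(pred_lab, gt_text):\n    pred_text = pred_lab != 0          # uncertainty counted as text\n    tp = np.sum(pred_text & gt_text)\n    prec = tp / max(np.sum(pred_text), 1)\n    rec = tp / max(np.sum(gt_text), 1)\n    f1 = 2 * prec * rec / max(prec + rec, 1e-12)\n    return prec, rec, f1\n\np = np.random.rand(64, 64)             # stand-in probability map\ngt = p > 0.5                           # stand-in ground truth\nprint(precision_recall_f1(label_pixels(p), gt))\n\\end{verbatim}\n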
\n\\begin{table*}[ht]\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\n & Precision & Recall & F1 Score \\\\\n\\midrule\nProposed approach & 89.10\\% & 70.74\\% & 78.87\\% \\\\\n\\midrule\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\caption{Results of the annotation generation approach on the ICDAR--2013 test set.}\n\\label{annotation_res}\n\\end{table*}\n\\noindent\nA qualitative evaluation of the generated supervision for the COCO\\_TS dataset is reported in Figure \\ref{supervision_example}.\n\n\\subsection{Scene Text Segmentation evaluation}\n\\label{sts_eval}\nDue to the inherent difficulties in collecting large sets of pixel--level supervised images, only few public datasets are available for scene text segmentation. To face this problem, in \\cite{tang2017scene}, synthetic data generation has been employed. Nevertheless, due to the domain--shift, there is no guarantee that a network trained on synthetic data would generalize well also to real images. \nThe COCO\\_TS dataset actually contains real images and, therefore, we expect that, when used for network training, the domain--shift can be reduced. To test this hypothesis, the PSPNet is used for scene text segmentation and evaluated on the ICDAR--2013 and Total--Text test sets, that provides pixel--level annotations. In particular, the following experimental setups have been compared:\n \\begin{itemize} \n\\item \\textbf{Synth:} The training relies only on the synthetically generated images;\n\\item \\textbf{Synth + COCO\\_TS:} The network is pre--trained on the synthetic dataset and fine--tuned on the COCO\\_TS images;\n\\item \\textbf{COCO\\_TS:} The network is trained only on the COCO\\_TS dataset. \n \\end{itemize} \n\\noindent The influence of fine--tuning on the ICDAR--2013 and Total--Text datasets was also evaluated. 
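\nThe three regimes differ only in which data the network sees and in whether the weights are re-initialized; a purely schematic sketch is shown below, where a tiny stand-in model replaces the modified PSPNet of Section \\ref{PSP} and the arrays are random placeholders rather than the real data pipelines.\n\\begin{verbatim}\n# Schematic of the Synth / Synth + COCO_TS / COCO_TS regimes.\nimport numpy as np, tensorflow as tf\n\ndef toy_segmenter():\n    return tf.keras.Sequential([\n        tf.keras.layers.Conv2D(8, 3, padding='same', activation='relu',\n                               input_shape=(64, 64, 3)),\n        tf.keras.layers.Conv2D(1, 1, activation='sigmoid')])\n\nsynth_x = np.random.rand(16, 64, 64, 3); synth_y = np.random.rand(16, 64, 64, 1)\ncoco_x  = np.random.rand(16, 64, 64, 3); coco_y  = np.random.rand(16, 64, 64, 1)\n\nmodel = toy_segmenter()\nmodel.compile(optimizer=tf.keras.optimizers.Adam(1e-4),\n              loss='binary_crossentropy')\nmodel.fit(synth_x, synth_y, epochs=1, verbose=0)   # Synth pre-training\nmodel.fit(coco_x, coco_y, epochs=1, verbose=0)     # fine-tune on COCO_TS\n# 'COCO_TS only' would instead train a freshly initialized model on coco_x.\n\\end{verbatim}\n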
The results, measured using the pixel--level precision, recall and F1 score, are reported in Table \\ref{ICDAR2013_Results} and Table \\ref{TotalText_Results}, respectively.\n\\begin{table*}[ht]\n\\label{icdar_results}\n\\begin{center}\n\\begin{small}\n\\subfloat[\\small{Results on the ICDAR--2013 test set} \\label{ICDAR2013_Results}]{%\n\\begin{tabular}{|lcccc|}\n\\hline\n & Precision & Recall & F1 Score & \\\\\n\\hline\nSynth & 73.19\\%& 55.67\\% & 63.23\\% & --\\\\\nSynth + COCO\\_TS & \\textbf{77.80\\%}& \\textbf{70.14\\%} & \\textbf{73.77\\%} & \\textbf{+10.54\\%} \\\\\nCOCO\\_TS & 78.86\\% & 68.66\\% & 73.40\\% & +10.17\\%\\\\\n\\hline\nSynth + ICDAR--2013 & 81.12\\%& 78.33\\% & 79.70\\% & --\\\\\nSynth + COCO\\_TS + ICDAR--2013 & 80.08\\%& 79.53\\% & 80.15\\% & +0.45\\%\\\\\nCOCO\\_TS + ICDAR--2013 & \\textbf{81.68\\%} & \\textbf{79.16\\%} & \\textbf{80.40\\%} & \\textbf{+0.70\\%}\\\\\n\\hline\n\\end{tabular}}\n\\end{small}\n\\begin{small}\n\\subfloat[\\small{Results on the Total--Text test set} \\label{TotalText_Results}]{%\n\\begin{tabular}{|lcccc|}\n\\hline\n & Precision & Recall & F1 Score & \\\\\n\\hline\nSynth & 55.76\\%& 22.87\\% & 32.43\\% & --\\\\\nSynth + COCO\\_TS & 72.71\\%& 54.49\\% & 62.29\\% & +29.86\\% \\\\\nCOCO\\_TS & \\textbf{72.83\\%} & \\textbf{56.81\\%} & \\textbf{63.83\\%} & \\textbf{+31.40\\%}\\\\\n\\hline\nSynth + Total Text & 84.97\\%& 65.52\\% & 73.98\\% & --\\\\\nSynth + COCO\\_TS + Total Text & 84.65\\%& 66.93\\% & 74.75\\% & +0.77\\%\\\\\nCOCO\\_TS + Total Text & \\textbf{84.31\\%} & \\textbf{68.03\\%} & \\textbf{75.30\\%} & \\textbf{+1.32\\%}\\\\\n\\hline\n\\end{tabular}}\n\\end{small}\n\\end{center}\n\\caption{Scene text segmentation performances using synthetic data and\/or the proposed COCO\\_TS dataset. The notation \"$+$ Dataset\" means that a fine--tune procedure has been carried out on \"Dataset\". The last column reports the relative increment, with and without fine--tuning, compared to the use of synthetic data only.}\n\\end{table*}\n\\noindent It is worth noting that training the network using the COCO\\_TS dataset is more effective than using synthetic images. Specifically, employing the proposed dataset, the F1 Score is improved of 10.17\\% and 31.40\\% on ICDAR--2013 and Total--Text, respectively. \nThese results are quite surprising and prove that the proposed dataset substantially increases the network performance, reducing the domain--shift from synthetic to real images. If the network is fine--tuned on ICDAR--2013 or Total--Text, the relative difference between the use of synthetic images and the COCO\\_TS dataset is reduced, but still remains significant. Specifically, the F1 Score is improved by 0.70\\% on ICDAR--2013 and 1.32\\% on Total--Text. \n\\noindent Furthermore, it can be observed that using only COCO\\_TS provides comparable results than training the network with both the synthetic and the proposed dataset. Therefore, the two datasets are not complementary and, in fact, the proposed COCO\\_TS is a valid alternative to synthetic data generation for scene text segmentation. Indeed, the use of real images increases the sample efficiency, allowing to substantially reduce the number of samples needed for training. In particular, the COCO\\_TS dataset contains 14690 samples that are less than 1\/50 of the synthetic dataset cardinality. 
\nSome qualitative output results of the scene text segmentation network are shown in Figure \\ref{segmentation_results_ICDAR} and Figure \\ref{segmentation_results_TotalText}.\n\\begin{figure}[!ht]\n\\begin{center}\n\\centerline{\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/src\/img_7.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/synth\/img_7.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/synth_coco\/img_7.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/coco\/img_7.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/tag\/img_7.png}}\n\\vskip 0.08in\n\\centerline{\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/src\/img_11.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/synth\/img_11.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/synth_coco\/img_11.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/coco\/img_11.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/tag\/img_11.png}}\n\\vskip -0.05in\n\\centerline{\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/src\/img_33.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/synth\/img_33.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/synth_coco\/img_33.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/coco\/img_33.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/icdar\/tag\/img_33.png}}}\n\\caption{Results on the ICDAR--2013 test set. In (a) the original image, in (b), (c) and (d) the segmentation obtained with Synth, Synth+COCO\\_TS and COCO\\_TS setups, respectively. 
The ground--truth supervision is reported in (e).}\n\\label{segmentation_results_ICDAR}\n\\end{center}\n\\begin{center}\n\\centerline{\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/src\/img2.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/synth\/img2.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/synth_coco\/img2.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/coco\/img2.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/tag\/img2.png}}\n\\vskip 0.08in\n\\centerline{\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/src\/img4.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/synth\/img4.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/synth_coco\/img4.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/coco\/img4.png}\n\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/tag\/img4.png}}\n\\vskip -0.05in\n\\centerline{\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/src\/img8.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/synth\/img8.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/synth_coco\/img8.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/coco\/img8.png}\\hskip 0.045in}\n\\subfloat[]{\\includegraphics[width=2.25cm,keepaspectratio]{img_segmentation\/total_text\/tag\/img8.png}}}\n\\caption{Results on the Total--Text test set. In (a) the original image, in (b), (c) and (d) the segmentation obtained with Synth, Synth+COCO\\_TS and COCO\\_TS setup, respectively. The ground--truth supervision is reported in (e).}\n\\label{segmentation_results_TotalText}\n\\end{center}\n\\end{figure}\n\n\\section{Conclusions} \\label{Conclusions}\nIn this paper, a weakly supervised learning approach has been used to generate pixel--level supervisions for scene text segmentation. Exploiting the proposed approach, the COCO\\_TS dataset, which contains the segmentation ground--truth for a subset of the COCO--Text dataset, has been automatically generated. Unlike previous approaches based on synthetic images, a convolutional neural network is trained on real images from the COCO\\_TS dataset for scene text segmentation, showing a very significant improvement in the generalization on both the ICDAR--2013 and Total--Text datasets, although with only a fraction of the samples. To foster further research on scene text segmentation, the COCO\\_TS dataset has been released.\nInterestingly, our procedure for pixel--level supervision generation from bounding--box annotations is general and not limited to the COCO--Text dataset. 
It is a matter of future work to employ the same method to extract pixel--level supervisions for different text localization problems (f.i., on multilingual scene text datasets, such as MLT \\cite{MLT}).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the past few years, deep learning algorithms have successfully deployed in many areas of research, such as computer vision and pattern recognition, speech processing, text-to-speech and many other machine learning related areas \\cite{krizhevsky2012imagenet,bahdanau2014neural,amodei2016deep}. This has led the gradual performance improvement of many deep learning based statistical systems applied to various problems. These fundamental improvements also motivated researchers to explore problems of human-computer interaction (HCI), which have long been studied. One such problem involves understanding human emotions and reflecting them through machines, such as emotional dialogue models \\cite{zhou2018emotional,huang2018automatic}.\n\nIts natural for humans to identify the emotions and react accordingly. However, perception of emotion and affective response generation are still challenging problems for machines. In this study, we target to estimate facial animation representations from affective speech. For this purpose, we construct a two stage process, where the first stage uncovers the emotion in the affective speech signal, and the second stage defines a mapping from the affective speech signal to facial movements conditioned on the emotion. \n\n\n\nEmotion recognition from speech has been an active research area in HCI for various applications, specially for humanoid robots, avatars, chat bots, virtual agents etc. Recently, various deep learning approaches have been utilized to improve emotion recognition performance, which includes significant challenges in realistic HCI settings.\nBefore the deep learning era, classical machine learning models, such as hidden Markov\nmodels (HMMs), support vector machines (SVMs), and decision tree-based methods, have been used in speech emotion recognition \\cite{pan2012speech,schuller2003hidden,lee2011emotion,sadiq2017affect}.\nAs deep neural networks become widely available and computationally feasible, wide range of studies adapted complex deep neural network models for the speech emotion recognition problem. Among these models, \nConvolutional neural network (CNN) based models are successfully utilized for improved speech emotion recognition \\cite{bertero2017first,badshah2017speech}. In another setup, a recurrent neural network model based on text and speech inputs has been successfully used for emotion recognition \\cite{yoon2018multimodal}. \nEarly research on acoustic to visual mappings investigated different methods including Hidden Markov Models (HMM), Gaussian Mixture Models (GMM) and Dynamic Bayesian Networks (DBN). \nOne of the pioneers to produce speech driven facial animation used classical HMM \\cite{yamamoto1997speech}, where they successfully mapped states of the HMM to the lip parameters; moreover they also proposed to utilize visemes in mapping HMM states to visual parameters. Their idea of using visemes was later used by many researchers for synthesizing facial animations via speech signals \\cite{bozkurt2007comparison,verma2003using}. \n\nAnother early work on facial animation used classical machine learning approaches, where the 3-D facial movements were predicted from the LPC and RASTA-PLP acoustic features \\cite{brand1999voice}. 
Later, \cite{kakumanu2001speech} proposed to consider context by tagging video frames to audio frames from the past and the future. \nIn most studies, the mapping from speech to visual representations is performed over offline data. As a real-time solution, \cite{vougioukas2018end} proposed a generative adversarial network (GAN) based audio-to-visual mapping system that can synthesize the visual sequence at 150~frames\/sec. \n\nRecently, \cite{taylor2017deep} focused on generating facial animations solely from the phoneme sequence. They use a DNN-based sliding window predictor that learns arbitrary nonlinear mappings from phoneme label input sequences to facial lip movements. Contrary to the conventional methods of mapping the phoneme sequence to a fixed number of visemes in \cite{verma2003using,bozkurt2007comparison}, they generate a sequence of output video frames for each input speech frame and take the mean of that sequence to map the active speech frame to a single video frame.\n\nIn the literature, research on facial animation from affective speech is limited, mostly due to the scarcity of labeled affective audio-visual data. \nIn a recent study, the problem of embedding emotions in speech driven facial animations has been modeled through an LSTM model \cite{pham2017speech}. In their study, they employed the RAVDESS dataset \cite{livingstone2012ravdess}, which only includes a two-sentence setup with limited phonetic variability.\nIn our earlier work \cite{asadiabadi2018multimodal}, we proposed a deep multi-modal framework that combines the phoneme sequence input of \cite{taylor2017deep} with spectral speech features to generate facial animations. Our work demonstrated the effective use of CNN models for capturing emotional variability in mapping affective speech to facial representations.\n\nThe IEMOCAP dataset \cite{busso2008iemocap} has been widely used in the literature for the audio-visual emotion recognition task. Although IEMOCAP delivers a rich set of affective audio-visual data, it mostly lacks frontal face videos, and its emotion categories are not all balanced. In this study, we choose to use the SAVEE \cite{cooke2006audio} dataset. It delivers a balanced set of affective data as well as clear frontal face videos, which help significantly in training better facial representation models.\n\nOur contributions in this paper are twofold. First, we present a speech emotion recognition system that is trained on the SAVEE dataset to understand the emotional content of the underlying speech signal. Second, we present an emotion dependent speech driven facial animation system that maps the speech signal to the facial domain according to its emotional content.\n\nThe remainder of this paper is organized as follows. In Section~\ref{sec:Method}, we describe the proposed methodology for emotion based speech driven facial shape animation. We give experimental evaluations in Section~\ref{sec:Res}. Finally, conclusions are discussed in Section~\ref{sec:conc}.\n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=90mm]{block2.png}\n \\caption{Block diagram of the proposed emotion dependent facial animation system.}\n \\label{fig:1}\n \n\\end{figure}\n\\section{Methodology}\n\\label{sec:Method}\nWe model the affective speech animation problem as a cascade of emotion classification followed by facial parameter regression as depicted in Figure~\ref{fig:1}. 
The affective content of the input speech is first classified into one of the 7 emotion categories, then the corresponding emotion dependent deep model maps the spectral speech representation to the visual shape parameters. \n\n\\subsection{Dataset}\nIn this study, we use the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset to evaluate the affective speech animation models \\cite{cooke2006audio}. The dataset consists of video clips from 4 British male actors with six basic emotions (disgust, anger, happy, sad, fear surprise) and the neutral state. A total of 480 phonetically balanced sentences are selected from the standard TIMIT corpus \\cite{garofolo1993darpa} for every emotional state. Audio is sampled at 44.1~kHz with video being recorded at a rate of 60~fps. Phonetic transcriptions are provided with the dataset. A total of approximately 102~K frames are available to train and validate models. Face recordings are all frontal and faces are painted with blue markers for tracking of facial movements.\n\n\\subsection{Feature extraction}\n\n\\subsubsection{Acoustic features}\nWe use the mel-frequency spectral coefficients, aka, MFSC to represent the speech acoustic features. For each speech frame, 40 dimensional MFSC features are extracted to define the acoustic energy distribution over 40 mel-frequency bands. Python's speech feature library is used for the feature extraction. The MFSC features are extracted from pre-emphasized overlapping Hamming windowed frames of speech at 100~Hz. The extracted feature set is z-score normalized to have zero mean and unit variance in each feature dimension. We represent the set of acoustic feature vectors as $\\left\\{f^a_j\\right\\}^N_{j=1}$, where $f^a_{j} \\in \\mathbb{R}^{40 \\times 1}$ and $N$ is the total number of frames. Figure~\\ref{fig:spectros} presents sample MFSC spectrograms from three different emotions.\n\\begin{figure}[tb]\n\\centering\n \\includegraphics[width=90mm]{speech_mfsc.png}\n \\caption{MFSC spectrograms of one sample sentence, \"she had your dark suit in greasy wash water all year,\" with the fear, neutral and surprise affective states.}\n \\label{fig:spectros}\n\\end{figure}\n\nA temporal sliding window of spectral image is defined to capture the spatio-temporal characterization of the input speech as $F^a_j = [f^a_{j-\\Delta_{a}},...,f^a_j,...,f^a_{j+\\Delta_{a}}]$, where $F^a_j$ is a $40 \\times K_{a}$ image, $K_{a}=2\\Delta_{a}+1$ is the temporal length of the spectral image at time frame $j$ and the sliding window moves with stride~$1$ for $j=1, \\dots, N$. \n\n\n\\subsubsection{Facial shape features}\nAs defined in our previous work \\cite{asadiabadi2018multimodal}, the facial shapes are described with a set of $M=36$ landmark points on the lower face region, along the jaw line, nose, inner and outer lips and represented as $S_{j} = \\left\\{(x^{j}_{i},y^{j}_{i})\\right\\}^M_{i=1}$, where $i$ is the landmark index and $j$ is the sample index. The landmarks used in our experiments are extracted using the Dlib face detector \\cite{king2009dlib} shown as red dots in Figure~\\ref{fig:marks}. To obtain a one-to-one correspondence between the acoustic and visual features, the videos in the dataset are re-sampled to 25~fps. 
From the re-sampled videos, the landmark points are extracted at a 25~Hz rate and later up-sampled to 100~Hz using cubic interpolation, thus yielding a one-to-one match to the acoustic feature sequence.\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=60mm]{markers.png}\n \\caption{Facial markers on the face: Blue markers are from the SAVEE dataset, red markers are the extracted landmarks using the Dlib face detector.}\n \\label{fig:marks}\n\\end{figure}\n\nThe extracted facial shape set \n$S=\\left\\{S_j\\right\\}^N_{j=1}$\nare aligned using the Generalized Procrustes Analysis (GPA) \\cite{ref:procrutes} to remove the possible rotation, scale and position differences across speakers. Then a statistical face shape model is generated utilizing Principal Component Analysis (PCA) algorithm. PCA projects the shapes in the facial space $S$ to a lower dimensional uncorrelated parameter space \n$P=\\left\\{P_j\\right\\}^N_{j=1}$.\nThe projection between the two spaces is carried out using the mean and truncated eigenvector matrix of the covariance of $S$ as defined in detail in \\cite{asadiabadi2018multimodal}. In this study, $18$ PCA parameters are used, which are covering around $99 \\%$ of the variation in the dataset. \n\nThe target output sequence of the DNN is obtained utilizing a sliding window of size $K_{v}$ and stride $1$ over the shape parameter space. The temporal shape feature sequence is represented as $\\left\\{F^v_j\\right\\}^N_{j=1}$ where $F^v_j = [f^v_{j-\\Delta_{v}},...,f^v_j,...,f^v_{j+\\Delta_{v}}] \\in \\mathbb{R}^{18K_{v} \\times 1}$ with $K_{v}=2\\Delta_v + 1$.\n\n\\subsection{Speech emotion recognition}\nEmotion recognition from speech has been widely studied in the literature and the speech spectrogram is known to discriminate emotions well. As Figure~\\ref{fig:spectros} demonstrates sample variations of the MFSC spectrograms on three different emotions, we set the $F^a_{j}$ spectral image as the acoustic feature to train and predict the emotion of the underlying speech. To this effect, we use a deep emotion recognition network (DERN), which includes 3 convolutional layers followed by a fully connected layer and a softmax layer at the output. The details of the DERN network are given in Table~\\ref{tab:emotion}. \n\nIn the emotion recognition phase, the DERN outputs an emotion label for each frame in a given test utterance as, $\\tilde{e}_j=DERN(F^a_{j})$, where $\\tilde{e}_j$ is the estimated emotion label for frame $j$. For an utterance consisting of $T$ frames, the estimated emotion sequence can be given as $\\{\\tilde{e}_j\\}_{j=1}^T$. Let's index the 7 emotions used in this study as $e^1, e^2, \\dots, e^7$. Utterance level emotion probability for the $i$-th emotion can be defined as\n\\begin{equation}\n p_i = \\frac{1}{T} \\sum_{j=1}^T 1(e^i = \\tilde{e}_j),\n\\end{equation}\nwhere ones function, $1()$, returns 1 when condition is true else a zero. Then the top two emotions for the utterance can be identified as\n\\begin{equation}\n i^* = \\arg\\max_i p_i \\;\\; \\text{and}\\;\\; i^{**} = \\arg\\max_{i - \\{i^*\\}} p_i.\n\\end{equation}\n\nThe top second emotion is utilized when the confidence to the top emotion is weak. 
Hence, probability of the top second emotion is updated as\n\\begin{equation}\n p_{i^{**}} =\n \\begin{cases}\n 0 & \\text{if $p_{i^{*}} > 0.65$} \\\\\n p_{i^{**}} & \\text{otherwise}.\n \\end{cases}\n\\end{equation}\nThen the normalized probabilities for the utterance level top two emotions are set as\n\\begin{equation}\n p^* = \\frac{p_{i^{*}}}{p_{i^{*}}+p_{i^{**}}} \\;\\; \\text{and}\\;\\; p^{**} = \\frac{p_{i^{**}}}{p_{i^{*}}+p_{i^{**}}},\n\\end{equation}\ncorresponding to the top two emotions $e^*$ and $e^{**}$, respectively.\n\\begin{table}[bht]\n\\caption{Network architecture for the DERN}\n\\label{tab:emotion}\n\\centering\n\\scalebox{0.95}{\n\\begin{tabular}{l c c c c c c}\n\\toprule[1pt]\\midrule[0.3pt]\n\\textbf{Layer} & \\textbf{Type} & \\textbf{Depth} & \\textbf{Filter Size} & \\textbf{Stride}\\\\\n\\midrule\n\\hline\n1 & CONV+ReLu & 32 & 5x5 & - \\\\\n\\hline\n2 & MaxPooling & 32 & 3x3 & 2 \\\\\n\\hline\n3 & CONV+ReLu & 64 & 5x5 & - \\\\\n\\hline\n4 & MaxPooling & 64 & 3x3 & 2 \\\\\n\\hline\n5 & CONV+ReLu & 128 & 5x5 & - \\\\\n\\hline\n6 & MaxPooling & 128 & 3x3 & 2 \\\\\n\\hline\n7 & FC+ReLu & 256 & - & - \\\\\n\\hline\n8 & Dropuout & - & - & - \\\\\n\\hline\n9 & FC+Softmax & 7 & - & - \\\\\n\\hline\n\\end{tabular}}\n\\end{table}\n\n\n\n\\subsection{Emotion dependent facial shape regression}\n\\label{ref: av2}\nWe train a deep shape regression network (DSRN) to estimate the facial shape features $F^v_j$ from the acoustic MFSC spectrogram images $F^a_j$ for each emotion category, separately. Note that each DSRN model is trained for each emotion category, hence the estimated facial shape feature can be defined as the fusion of two estimates extracted with the top two emotions,\n\\begin{equation}\n \\label{equ:two label}\n \\tilde{F}^v_j = p^* DSRN(F^a_j | {e}^*) + p^{**} DSRN(F^a_j | {e}^{**}).\n\\end{equation}\n\nIn this formalism, we have $K_v$ number of facial shape estimates for the frame $j$ that are extracted from the neighboring estimates. The final facial shape estimation is defined as the average of $K_v$ estimates,\n\\begin{equation}\n \\hat{f}^v_j = \\frac{1}{K_v} \\sum_{i=j-\\Delta_v}^{j+\\Delta_v} \\tilde{f}^v_i . \n\\end{equation}\n\n\nThe DSRN is constructed with 4 convolutional layers, which are followed by 2 fully connected layers to estimate the shape features. We use dropout regularization method with probability 50\\% in the fully connected layers only, to overcome the over-fitting. ReLu activation function is used at each layer with Adam optimizer for hyper learning rate optimizations. Mean square error (MSE) is chosen as the objective function to be minimized. During the training, we only apply convolutions and pooling over the frequency axis, in order to further prevent any over-fitting, and also to preserve the temporal nature of speech. 
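\nBefore giving the layer specification, the overall inference step, combining the utterance-level emotion selection above with the fusion of Eq.~(\\ref{equ:two label}), can be sketched as follows; this is illustrative code in which the callables stand for the trained DERN and emotion dependent DSRN models.\n\\begin{verbatim}\n# Emotion dependent inference for one utterance (sketch).\nimport numpy as np\n\ndef animate_utterance(frames, dern, dsrn_by_emotion, n_emotions=7):\n    # frames: list of 40 x K_a MFSC images; dern(f) returns a label.\n    labels = np.array([dern(f) for f in frames])\n    p = np.bincount(labels, minlength=n_emotions) / len(frames)\n    top = np.argsort(p)[::-1][:2]            # top two emotions\n    p1, p2 = p[top[0]], p[top[1]]\n    if p1 > 0.65:                            # weak-confidence rule\n        p2 = 0.0\n    w1, w2 = p1 / (p1 + p2), p2 / (p1 + p2)\n    shape_windows = [w1 * dsrn_by_emotion[top[0]](f) +\n                     w2 * dsrn_by_emotion[top[1]](f) for f in frames]\n    return shape_windows   # overlapping K_v-frame estimates, averaged next\n\\end{verbatim}\n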
Detailed specification of the DSRN network is given in Table~\\ref{tab:2}.\n\\begin{table}[bht]\n\\caption{Network architecture for the DSRN}\n\\label{tab:2}\n\\centering\n\\scalebox{0.95}{\n\\begin{tabular}{l c c c c c c}\n\\toprule[1pt]\\midrule[0.3pt]\n\\textbf{Layer} & \\textbf{Type} & \\textbf{Depth} & \\textbf{Filter Size} & \\textbf{Stride}\\\\\n\\midrule\n\\hline\n1 & CONV+ReLu & 32 & 5x1 & - \\\\\n\\hline\n2 & MaxPooling & 32 & 3x1 & 2x1 \\\\\n\\hline\n3 & CONV+ReLu & 64 & 5x1 & - \\\\\n\\hline\n4 & MaxPooling & 64 & 3x1 & 2x1 \\\\\n\\hline\n5 & CONV+ReLu & 128 & 5x1 & - \\\\\n\\hline\n6 & MaxPooling & 128 & 2x1 & 2x1 \\\\\n\\hline\n7 & CONV+ReLu & 128 & 3x1 & - \\\\\n\\hline\n8 & MaxPooling & 128 & 2x1 & 2x1 \\\\\n\\hline\n9 & FC+ReLu & 1024 & - & - \\\\\n\\hline\n10 & Dropuout & - & - & - \\\\\n\\hline\n11 & FC+ReLu & 500 & - & - \\\\\n\\hline\n12 & Dropuout & - & - & - \\\\\n\\hline\n13 & FC+Multi-Reg & $18 \\times K_v$ & - & - \\\\\n\n\\hline\n\\end{tabular}}\n\\end{table}\n\n\n\\section{Experimental results}\n\\label{sec:Res}\n\nWe deploy 5-fold cross-validation training and testing scheme, where 10\\% of the dataset is hold for testing the trained networks. From the remain of the dataset 80\\% is used for training and the remaining 20\\% is used for validation in each fold. Both of the deep models, DERN and DSRN, are trained using the Keras\\footnote{\\url{https:\/\/keras.io\/}} with Tensorflow \\cite{abadi2016tensorflow} backend on a NVIDIA TITAN XP GPU. The temporal window size for acoustic and visual shape features is set as $K_a = 15$ and $K_v = 5$, respectively. \n\n\\subsection{Objective evaluations}\n\n\\subsubsection{Emotion recognition results}\n\nThe DERN model is trained over 200 epochs using the categorical cross-entropy loss function. Utterance level emotion recognition performances over the validation set are reported for each emotion category in Table~\\ref{tab:3}. All models sustain high average recall rates. The surprise emotion category model is observed to suffer the most, compared to other emotions.\n\n\\begin{table}[bht]\n\\caption{Utterance level accuracy (\\%) for the speech emotion recognition in each emotion category}\n\\label{tab:3}\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{ccccccc}\n\\toprule[1pt]\\midrule[0.3pt]\nAngry & Disgust & Fear & Happy & Neutral & Sad & Surprise \\\\\n\\midrule\n\\hline\n 72.15 & 75.38 & 73.21 & 71.71 & 91.93 & 75.19 & 67.98 \\\\ \\hline\n\n\\end{tabular}}\n\\end{table}\n\n\n\n\\subsubsection{Facial shape regression results}\nThe mean squared error between predicted and original PCA coefficient values is used as the loss function for the training.\n\nThe DSRN model is trained for each emotion category separately, which defines the emotion dependent models. We as well train an emotion independent model using all combined training data, which sets a baseline for evaluations. Figure~\\ref{fig:loss} presents the MSE loss curve over the validation data through the learning process for emotion dependent and independent models. Note that the proposed emotion dependent regression attains significantly lower MSE loss values, which are more than 65\\% reduction in MSE compared to the emotion independent combined model. \n\nThe MSE loss performance of the cascaded trained DERN and DSRN models over the test set is given in Table \\ref{tab:4}. As obvious from the table, the proposed cascaded DERN and DSRN scheme performs better than the baseline model in terms of MSE in shape domain. 
In should be noted that given the true emotion labels of the utterances, the test loss of the DSRN is remarkably lower than the all combined model, which makes room to improve the performance of the DERN module in future work.\n\n\\begin{table}[b]\n\\caption{MSE loss on the test set for cascaded DERN and DSRN, true emotions and DSRN vs all combined model}\n\\label{tab:4}\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{l c c c}\n\\toprule[1pt]\\midrule[0.3pt]\n\\textbf{Method} & \\textbf{DERN+DSRN}& \\textbf{Actual Emo+DSRN} & \\textbf{ALL Combined}\\\\\n\\midrule\n\\hline\nMSE & {5.57} & {3.23} & {6.73} \\\\\n\\hline\n\\end{tabular}}\n\\end{table}\n\n\\begin{figure}[bht]\n\\centering\n \\includegraphics[width=90mm]{all_loss_small.png}\n \\caption{The MSE loss over the validation data along the epochs for the emotion dependent (separate for each emotion) and independent (all combined) models.}\n \\label{fig:loss}\n\\end{figure}\n\n\n\n\n\\subsection{Subjective evaluations}\nThe resulting facial shape animations are also evaluated subjectively through visual user preference study. We use a mean opinion score (MOS) test to subjectively evaluate animations of the emotion dependent and independent models. The test is run with 15 participants using 7 conditions, which are the animations of utterances from the 7 emotion categories. Each test session for a participant contains 14 clips, where each condition is tested with the emotion dependent and independent models. During the test, all clips are shown in random order. In the test, each participant is asked to evaluate clips based on synchronization and emotional content of the animations using a five-point preference scale (1: Bad, 2: Poor, 3: Fair, 4: Good, and 5: Very Good).\n\n\n\\begin{table}[t]\n\\caption{Average preference scores for the emotion dependent and independent (all combined) model animation evaluations}\n\\label{tab:5}\n\\centering\n\\scalebox{0.95}{\n\\begin{tabular}{l c c}\n\\toprule[1pt]\\midrule[0.3pt]\n\\textbf{Method} & \\textbf{Mean} & \\textbf{Std} \\\\\n\\midrule\n\\hline\nEmotion Dependent & {3.03} & 0.96 \\\\\n\\hline\nAll Combined & 2.74 & 0.88\\\\\n\\hline\n\\end{tabular}}\n\\end{table}\n\n\\begin{figure}[bht]\n\\centering\n \\includegraphics[width=90mm]{subjective_emos1.png}\n \\caption{Emotion category based average preference scores for the emotion dependent and independent (all combined) model animation evaluations. }\n \\label{fig:subjec}\n\\end{figure}\n\nThe average preference scores are listed in Table~\\ref{tab:5}. The proposed emotion dependent facial shape animation scheme is preferred over the baseline emotion independent scheme. Furthermore, in each emotional category a similar preference tendency, except for happy and sad categories, is observed as presented in Figure~\\ref{fig:subjec}. \n\n\n\\section{Conclusions}\n\\label{sec:conc}\nIn this study we propose an emotion dependent speech driven facial animation system. A statistical shape model (PCA based) is trained to project the shape data into an uncorrelated lower dimensional parameter space, capturing 99\\% of variation in the training data. We observe that training separate models to map acoustic spectral features to visual shape parameters performs better than training a universal network with all the emotions combined. 
Our proposed emotion dependent facial shape animation model outperforms the emotion independent universal model in terms of the MSE loss as well as in the subjective evaluations, where the facial animations of the proposed method are preferred more often.\nAs future work, we will investigate ways to improve the accuracy of the speech emotion recognition system.\n\n\\section{Acknowledgements}\nThis work was supported in part by the Scientific and Technological Research Council of Turkey (T\\"{U}B\\.{I}TAK) under grant number 217E107.\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOne of the longstanding problems in the strong interaction is the\nunderstanding within QCD of the mechanism that is responsible for\nthe large single-spin asymmetries (SSA) observed in numerous\nhigh energy reactions with hadrons. Many different approaches were\nsuggested to solve this problem (see recent papers and review\n\cite{Anselmino:2013rya,Anselmino:2012rq,reviews} and references\ntherein). Most of them are based on the assumption of the\nso-called transverse-momentum-dependent (TMD) factorization\n\cite{Ji:2004xq,Ji:2004wu,Bacchetta:2008xw,collins}. The validity\nof this assumption is not clear so far \cite{Rogers:2010dm}.\nFurthermore, in this paper we will show the existence of a\nnonperturbative QCD mechanism which explicitly violates TMD\nfactorization for SSA.\n\nIt is well known that SSA arises from the interference of different\ndiagrams and requires at least two ingredients. First of\nall, there should be a helicity flip in the scattering amplitude, and\nsecondly, the amplitude should have a nonzero imaginary part. The\nsmall current quark masses are the only source of helicity flip in\nperturbative QCD (pQCD). Furthermore, the imaginary part of\nthe scattering amplitude, which comes from loop diagrams, is\nexpected to be suppressed by an extra power of the strong coupling\nconstant $\alpha_s$. As a result, pQCD fails to describe the large\nobserved SSA. On the other hand, it is known that QCD has a\ncomplicated vacuum structure which leads to the phenomenon of\nspontaneous chiral symmetry breaking (SCSB) in the strong interaction.\nTherefore, even in the case of a very small current mass of the\nquarks, their dynamical masses arising from SCSB can be large.\nThe instanton liquid model of the QCD vacuum \cite{shuryak,diakonov}\nis one of the models in which the SCSB phenomenon arises in a\nvery natural way, due to quark chirality flip in the field of a strong\nfluctuation of the vacuum gluon field called the\ninstanton \cite{Belavin:1975fg,'tHooft:1976fv}.\n The instanton is a well-known solution of the QCD equations of motion in Euclidean space-time which\nhas nonzero topological charge. 
In many papers (see reviews\n\\cite{shuryak,diakonov,Kochelev:2005xn}), it was shown that\ninstantons play a very important role in hadron physics.\nFurthermore, instantons lead to the anomalous quark-gluon\nchromomagnetic vertex with a large\n quark helicity-flip \\cite{kochelev1,diakonov}.\nTherefore, they can give the important contribution to SSA\n\\cite{kochelev1,Kochelev:1999nd,Cherednikov:2006zn,Dorokhov:2009mg,Ostrovsky:2004pd,diakonov,Qian:2011ya}.\n\n\nIn this paper, we will present the first consistent calculation\nof SSA in the quark-quark scattering based on the existence of the\nanomalous quark chromomagnetic moment (AQCM) induced by instantons\n\\cite{kochelev1} \\footnote{ The semi-classical mechanism for SSA\nbased on large AQCM has recently been discussed in papers\n\\cite{Abramov:2011zz,Abramov:2009tm}.}.\n\n\n\n\n\n\n\n\n\\section{ Quark-gluon interaction in non-perturbative QCD }\n\n\n\n\nIn the general case, the interaction vertex of a massive quark\nwith a gluon, Fig.1, can be written in the following form:\n\\begin{equation}\nV_\\mu(p_1^2,{p_1^\\prime}^2,q^2)t^a = -g_st^a[F_1(p_1^2,{p_1^\\prime}^2,q^2) \\gamma_\\mu\n +\n\\frac{\\sigma_{\\mu\\nu}q_\\nu}{2M_q}F_2(p_1^2,{p_1^\\prime}^2,q^2)],\n \\label{vertex}\n \\end{equation}\nwhere the first term is the conventional perturbative QCD\nquark-gluon vertex and the second term comes from the\nnonperturbative sector of QCD.\n\\begin{figure}[htb]\n\\vspace*{2.0cm} \\centering\n\\centerline{\\epsfig{file=vertices.eps,width=12cm,height=3.0cm,\nangle=0}} \\vskip 1cm \\caption{a) Perturbative helicity non-flip\nand b) nonperturbative helicity-flip quark-gluon vertices}\n\\label{vertexpicture}\n\\end{figure}\n\n\n\n\n\n\nIn Eq.\\ref{vertex} the form factors $F_{1,2}$ describe\nnonlocality of the interaction, $p_{1}, p_1^\\prime$ are\nthe momenta of incoming and outgoing quarks, respectively, $ q=p_1^\\prime-p_1$,\n $M_q$ is the quark mass, and $\\sigma_{\\mu\\nu}=(\\gamma_\\mu \\gamma_\\nu-\\gamma_\\nu \\gamma_\\mu)\/2$.\n\n\n\nThe form factor $F_2(p_1^2,{p_1^\\prime}^2,q^2)$\n suppresses the AQCM vertex\nat short distances when the respective virtualities are large. Within the\ninstanton model it is explicitly related to the Fourier-transformed quark\nzero-mode and instanton fields and reads\n\\begin{equation}\n F_2(p_1^2,{p_1^\\prime}^2,q^2) =\\mu_a\n\\Phi_q(\\mid p_1\\mid\\rho\/2)\\Phi_q(\\mid p_1^\\prime\\mid\\rho\/2)F_g(\\mid\nq\\mid\\rho) \\ , \\nonumber\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\Phi_q(z)&=&-z\\frac{d}{dz}(I_0(z)K_0(z)-I_1(z)K_1(z)), \\label{ffq}\\\\\nF_g(z)&=&\\frac{4}{z^2}-2K_2(z), \\label{ffg}\n\\end{eqnarray}\n$I_{\\nu}(z)$, $K_{\\nu}(z)$ are the modified Bessel functions and\n$\\rho$ is the instanton size.\n\nWe assume $F_1\\approx 1$ and $F_2(p_1^2,{p_1^\\prime}^2,q^2)\n \\approx \\mu_a F_g(q^2)$ since valence quarks in hadrons have small virtuality.\n\nWithin the instanton liquid model \\cite{shuryak,diakonov}, where\nall instantons have the same size $\\rho_c$, AQCM is\n\\cite{kochelev2}\n\\begin{equation}\n\\mu_a=-\\frac{3\\pi (M_q\\rho_c)^2}{4\\alpha_s}.\n\\label{AQCM}\n\\end{equation}\n In Eq.(\\ref{AQCM}), $M_q$ is the so-called dynamical quark mass.\nWe would like to point out two specific features of the formula\nfor AQCM. First, the strong coupling constant enters into the\ndenominator showing a clear nonperturbative origin of AQCM. The\nsecond feature is the negative sign of AQCM. As we will see below,\nthe sign of AQCM leads to the definite sign of SSA in the\nquark-quark scattering. 
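\nFor orientation, the AQCM formula above is easy to evaluate numerically; the short snippet below does so for the two dynamical quark masses discussed next, assuming the typical instanton-liquid parameters $\\rho_c\\approx 1\/3$ fm and $\\alpha_s\\approx 0.5$.\n\\begin{verbatim}\n# Numerical check of mu_a = -3*pi*(M_q*rho_c)^2/(4*alpha_s).\nimport math\n\nrho_c = (1.0 / 3.0) * 5.068     # 1/3 fm expressed in GeV^-1\nalpha_s = 0.5                   # assumed instanton-model value\nfor M_q in (0.170, 0.350):      # GeV: mean-field and Diakonov-Petrov\n    mu_a = -3.0 * math.pi * (M_q * rho_c)**2 / (4.0 * alpha_s)\n    print(M_q, round(mu_a, 2))  # gives -0.39 and -1.65\n\\end{verbatim}\n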
The value of AQCM strongly depends on the\ndynamical quark mass which is $M_q=170$ MeV in the mean field\napproximation (MFA)\\cite{shuryak}\n and $M_q=350$ MeV in the Diakonov-Petrov model (DP) \\cite{diakonov}.\nTherefore, for fixed value of the strong coupling constant in\nthe instanton model, $\\alpha_s\\approx \\pi\/3\\approx 0.5$\n\\cite{diakonov}, we get\n\n\\begin{equation}\n{\\mu_a}^{MFA}=-0.4 \\ \\ \\ \\mu_a^{DP}= -1.6\n\\label{mu}\n\\end{equation}\n\nWe would like to mention that the Schwinger-type of the pQCD\ncontribution to AQCM\n\\begin{equation}\n \\mu_a^{pQCD}=-\\frac{\\alpha_s}{12\\pi}\\approx 1.3\\cdot 10^{-2}\n\\label{pQCD}\n\\end{equation}\nis by several orders of magnitude smaller in comparison with the\nnonperturbative contribution induced by instantons, Eq.\\ref{mu},\nand, therefore, it can give only a tiny contribution to\nspin-dependent cross sections\n \\cite{Ryskin:1987ya}\\footnote{Recently, a rather large AQCM has been obtained within the approach based on the\n Dyson-Schwinger equations (see review \\cite{Roberts:2012sv} and references therein).}.\n\n\n\n\n\n\n\\section{Single-spin asymmetry in high energy quark-quark scattering induced by AQCM}\n\nThe SSA for the process of transversely polarized quark scattering off unpolarized quark,\n $q^{\\uparrow}(p_1)+q(p_2) \\to q (p_1^\\prime)\n+ q(p_2^\\prime)$, is defined as\n\\begin{equation}\nA_{N}=\\frac{d \\sigma^{\\uparrow} - d \\sigma^{\\downarrow}}\n{d \\sigma^{\\uparrow}+d \\sigma^{\\downarrow}},\n\\end{equation}\nwhere ${\\uparrow} {\\downarrow}$ denote the initial quark spin\norientation perpendicular to the scattering plane and\n\n\\begin{equation}\nd \\sigma^{\\uparrow\\downarrow}=\\frac{|M(\\uparrow\\downarrow)|^2}{\\mathrm{2I}} d\\mathrm{PS_2(S,q_t)},\n\\end{equation}\nwhere $I$ is the initial flux, $S=(p_1+p_2)^2$,\n$M(\\uparrow\\downarrow)$ is the matrix element for the different\ninitial spin directions, $d\\mathrm{PS_2(S,q_t)}$ is the\ntwo-particle phase space and $q_t={{p_1}^\\prime}_t-{p_1}_t$ is the\ntransverse momentum transfer. In the high energy limit $S\\gg\nq_t^2,M_q^2$, we have $I\\approx S$ and\n$d\\mathrm{PS_2(S,q_t)}\\approx d^2q_t\/(8\\pi^2S)$.\n\n\n\nIn terms of the helicity amplitudes \\cite{Goldberger:1960md},\n\\cite{Buttimore:1978ry}\n\\begin{equation}\n\\Phi_1=M_{++;++},\\ \\ \\Phi_2=M_{++;--},\\ \\ \\Phi_3=M_{+-;+-},\\ \\ \\Phi_4=M_{+-;-+} ,\\ \\ \\Phi_5=M_{++;+-},\n\\nonumber\n\\end{equation}\nwhere the symbols $+$ or $-$ denote the helicity of quark in the\nc.m. frame, SSA is given by\n\\begin{equation}\nA_N=-\\frac{2Im[( \\Phi_1+\\Phi_2+\\Phi_3-\\Phi_4)\\Phi_5^*]} {|\\Phi_1|^2+|\\Phi_2|^2+|\\Phi_3|^2+|\\Phi_4|^2+4|\\Phi_5|^2)}.\n\\label{helicity}\n\\end{equation}\nIn Fig.2, we present the set of diagrams which give a\nsignificant contribution to $A_N$. 
Higher order terms in $\\mu_a$\nand $\\alpha_s$ are expected to be suppressed by a small instanton\ndensity in QCD vacuum \\cite{shuryak} and by a large power of the\nsmall strong coupling constant.\n\n\n\n\\begin{figure}[htb]\n\n \\vspace*{-0.0cm} \\centering\n \\centerline{\\epsfig{file=AN1grey.eps,width=14cm, angle=0}}\n \\caption{Contribution to SSA arising from different diagrams.}\n\n \\end{figure}\n\n\n\nFor estimation, we take the simple form for the gluon propagator in the Feynman gauge\n \\begin{equation}\n P_{\\mu\\nu}(k^2)=\\frac{g_{\\mu\\nu}}{k^2-m_g^2},\n \\nonumber\n \\end{equation} where $m_g$ can be treated as the infrared cut-off related to confinement \\cite{nikolaev},\nor as the dynamical gluon mass \\cite{RuizArriola:2004en},\n\\cite{aguilar}. Within the instanton model this parameter can be\nconsiderated as the effect of\n multiinstanton contribution to the gluon propagator.\n\n\n\nBy using in the high energy limit the Gribov decomposition for the\nmetric tensor into the transverse and longitudinal parts\n\\begin{equation}\n g_{\\mu\\nu}=g_{\\mu\\nu}^t+ \\frac{2(p_{2\\mu}p_{1\\nu}+p_{2\\nu}p_{1\\mu})}{S}\n \\approx \\frac{2(p_{2\\mu}p_{1\\nu}+p_{2\\nu}p_{1\\mu})}{S}\n \\nonumber\n\\end{equation}\nand the Sudakov parametrization of the four-momenta of particles \\cite{Arbuzov:2010zza},\n\\cite{Baier:1980kx}\n\\begin{equation}\nq_i=\\alpha_ip_2+\\beta_ip_1+q_{i,t}, \\ \\ q_{i,t}p_{1,2}=0, \\ \\ q_{i,t}^2=-\\vec{q_i^2}<0,\n\\nonumber\n\\end{equation}\nwe finally obtain\n \\begin{equation}\n A_N=-\\frac{5 \\alpha_s\\mu_a q_t(q_t^2+m_g^2)}{12 \\pi M_q} \\frac{ F_g(\\rho\n |q_t|)N(q_t)}{D(q_t)},\n \\label{SSA}\n \\end{equation}\n where\n\n \\begin{equation}\nN(q_t)=\\int \\!\\!d^2k_t \\frac{(1+\\mu_a^2(q_t \\cdotp \\! k_t + k_t^2)\n F_g(\\rho |k_t|) F_g(\\rho |q_t\\!+\\!k_t|)\/(2M_q^2) }\n {(k_t^2+m_g^2)((k_t+q_t)^2+m_g^2)}\\big)\n\\nonumber\n\\end{equation}\nand\n\\begin{equation}\n D(q_t)= \\Big(1+ (\\frac{\\mu_a q_t}{2M_q} F_g(\\rho |q_t|))^2\\Big)^2\n + \\frac{\\alpha_s^2 (q_t^2+m_g^2)^2}{12 \\pi^2}\n \\left( \\int \\!\\! \\frac{d^2k_t}{(k_t^2+m_g^2)((k_t+q_t)^2+m_g^2)}\\right)^2 \\nonumber\n\\end{equation}\n\n\\section{Results and discussion}\n\\begin{figure}[h]\n\\begin{minipage}[c]{8cm}\n\\vskip -0.5cm \\hspace*{-1.0cm}\n\\centerline{\\epsfig{file=Plot1grey.eps,width=10cm,height=6cm,angle=0}}\\\n\\end{minipage}\n\\begin{minipage}[c]{8cm}\n\\centerline{\\epsfig{file=Plot2grey.eps,width=10cm,height=6cm,angle=0}}\\\n\\hspace*{1.0cm} \\vskip -1cm\n\\end{minipage}\n\\caption{ Left panel: the $q_t$ dependence of SSA for different\nvalues of the infrared cut-off in the gluon propagator\n\\cite{nikolaev}, \\cite{RuizArriola:2004en}, \\cite{aguilar}. Right\npanel: the $q_t$ dependence of SSA for the different values of\nthe dynamical quark mass \\cite{shuryak}, \\cite{diakonov},\n\\cite{kochelev2}.}\n\\end{figure}\n In Fig.3, the result for $A_N$ as the function of transfer momentum transfer is presented\n for different values of the dynamical quark mass $M_q$ and the parameter infrared cut-off\n $m_g$.\nOur results show that SSA $A_N$ induced by AQCM is very large and\npractically independent of particular values of $M_q$ and $m_g$.\nWe would like to stress also that $A_N$ in our approach does not\ndepend on c.m. energy. 
Our results show that the SSA $A_N$ induced by the AQCM is very large and practically independent of the particular values of $M_q$ and $m_g$. We would like to stress also that $A_N$ in our approach does not depend on the c.m.\ energy. The energy independence of the SSA is in agreement with the experimental data and contradicts the naive expectation that spin effects in the strong interaction should vanish at high energies \cite{Krisch:2010hr}. One can show that this property is directly related to the spin-one $t$-channel gluon exchange. Another remarkable feature of our approach is the flat dependence of the SSA on the transverse momentum of the final particle, Fig.3. It comes from the rather soft, power-like form factor in the gluon-quark vertex, Eq.~(\ref{ffg}), and the small average instanton size, $\rho_c\approx 1/3$ fm, in the QCD vacuum \cite{shuryak}. Such a flat dependence has recently been observed by the STAR collaboration in inclusive $\pi^0$ production in high energy proton-proton collisions \cite{STAR} and was not expected in the models based on TMD factorization and {\it ad hoc} parametrizations of the Sivers and Collins functions \cite{reviews}. Finally, the sign of the SSA is determined by the sign of the AQCM and should be positive, Eq.~(\ref{SSA}). This sign is very important for explaining the signs of the SSA observed in the inclusive production of $\pi^+$, $\pi^-$ and $\pi^0$ mesons in proton-proton and proton-antiproton high energy collisions (see the discussion and references in \cite{reviews,Krisch:2010hr}).

It is evident that the instanton-induced helicity flip should also contribute to the SSA in meson production in semi-inclusive deep inelastic scattering (SIDIS), where large SSA in $\pi$- and $K$-meson production was observed by the HERMES \cite{Airapetian:2010ds} and COMPASS \cite{Martin:2013eja} Collaborations. In the leading order in the instanton density, the nonzero contribution to the SSA in SIDIS is expected to come from the interference of the diagrams presented in Fig.4. Here, the imaginary part arises from perturbative and nonperturbative final state interactions of the current quark with the spectator system. The real part of the amplitude, represented by the first two diagrams, includes the perturbative helicity-conserving photon-quark vertex and the instanton-induced helicity-flip vertex. The Pauli form factor corresponding to the latter vertex was calculated in \cite{Kochelev:2003cp}.

\begin{figure}[htb]
\centering
 \centerline{\epsfig{file=SIDIS.eps,width=16cm,height=3cm,angle=0}}
 \caption{The leading contributions to SSA in SIDIS.}

 \end{figure}

We should emphasize the significant difference between our approach to the SSA in SIDIS and the perturbative final-state interaction model presented in \cite{Brodsky:2002cx}. In particular, one can expect that the main contribution comes from the kinematical region where the virtuality of the gluon in Fig.4 is small. Therefore, the soft gluon interaction with quarks should be highly nonperturbative. Furthermore, the helicity flip in \cite{Brodsky:2002cx} is related to the wave function of the nucleon. Due to that, the SSA coming from this mechanism might be significant only in the region of small transverse momentum of the final meson, $k_t\approx \Lambda_{QCD}\approx 250$ MeV. In our approach, we expect a large SSA at higher transverse momenta because the average instanton size is much smaller than the confinement size, $\rho_c\approx R_{conf}/3$.
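For orientation, a rough estimate of the corresponding scale (assuming $\rho_c\approx 0.3$ fm and using $\hbar c\approx 0.197$ GeV$\,$fm) is
\begin{equation}
k_t \sim \frac{1}{\rho_c} \approx \frac{0.197\ \mathrm{GeV\,fm}}{0.3\ \mathrm{fm}} \approx 0.6\ \mathrm{GeV},
\nonumber
\end{equation}
well above the $k_t\approx \Lambda_{QCD}$ region characteristic of the mechanism of \cite{Brodsky:2002cx}.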
This qualitative observation agrees with the experimental data presented by HERMES and COMPASS, where a large SSA was observed only at rather large $k_t$. Additionally, the significant $Q^2$ dependence of the SSA found by the COMPASS Collaboration \cite{Martin:2013eja} might be related to the strong $Q^2$ dependence of the nonperturbative photon-quark vertex represented by the second diagram in Fig.4.

An additional contribution to the SSA induced by instantons was suggested in \cite{Ostrovsky:2004pd} and \cite{Qian:2011ya}. It is based on the results of \cite{Moch:1996bs}, where the effects of instantons in unpolarized deep inelastic scattering were calculated in a careful way\footnote{This approach was applied to the Drell-Yan process \cite{Brandenburg:2006xu} as well.}. In that case, the effect arises from the phase shift of the quark propagator in the instanton field. This contribution might be considered as complementary to the AQCM effect.

Although our estimate is based mainly on the single-instanton approximation (SIA) for the AQCM \cite{kochelev1}, multi-instanton effects, which are hidden in the value of the dynamical quark mass in Eq.~(\ref{AQCM}), are also taken into account in an effective way. The accuracy of the SIA was discussed in various aspects in \cite{Faccioli:2001ug}. By analyzing several correlation functions, the authors argued that the dynamical quark mass can differ from the MFA value $M_q=170$ MeV. However, as discussed above, the SSA induced by the AQCM depends rather weakly on the value of the dynamical mass, Fig.3. Therefore, we believe that effects beyond the SIA cannot change our results significantly. Furthermore, we would like to mention that the SSA mechanism based on the AQCM is quite general and might arise in any nonperturbative QCD model with spontaneous chiral symmetry breaking. The attractive feature of the instanton liquid model is that within this model the phenomenon originates from rather small distances, $\rho_c\approx 0.3$ fm. As a result, it allows one to understand the origin of the large SSA observed at large transverse momenta.

In summary, we calculated the SSA in quark-quark scattering induced by the AQCM and found it to be large. This phenomenon is related to the strong helicity-flip quark-gluon interaction induced by the topologically nontrivial configurations of vacuum gluon fields called instantons. Our estimate shows that the suggested mechanism can be responsible for the anomalously large SSA observed in different reactions at high energies. We would like to stress that the nonperturbative quark-gluon and quark-photon interactions violate TMD factorization in inclusive meson production in both hadron-hadron and deep inelastic scattering. Therefore, this effect cannot be treated as an additional contribution to the Sivers distribution function or to the Collins fragmentation function. It is evident that the nonfactorizable mechanism for the SSA based on the AQCM can be extended to other spin-dependent observables, including double-spin asymmetries in inclusive and exclusive reactions.

\section*{Acknowledgments}
The authors are very grateful to I.O. Cherednikov, A.E. Dorokhov, A.V. Efremov and E.~A.~Kuraev for discussions. The work of N.K. was supported in part by a Belarus-JINR grant, a visiting scientist grant from the University of Valencia, and by the MEST of the Korean Government (Brain Pool Program No. 121S-1-3-0318).
We also acknowledge that this work was initiated through the series of APCTP-BLTP JINR Joint Workshops.