diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkvak" "b/data_all_eng_slimpj/shuffled/split2/finalzzkvak" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkvak" @@ -0,0 +1,5 @@ +{"text":"\\subsection*{Bibliographic Remarks}}\n\\newcommand{\\etaF}{\\overrightarrow{\\eta}}\n\\newcommand{\\etaB}{\\overleftarrow{\\eta}}\n\\newcommand{\\FF}{\\overrightarrow{F}}\n\\newcommand{\\Fh}{\\widehat{F}}\n\\newcommand{\\FN}{F\n\\newcommand{\\FV}{\\mathcal{F}}\n\\newcommand{\\Ft}{\\widetilde{F}}\n\\newcommand{\\K}{\\mathbb{K}}\n\\newcommand{\\MC}{\\overline{M}}\n\\newcommand{\\MF}{\\overrightarrow{M}}\n\\newcommand{\\MB}{\\overleftarrow{M}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\NC}{\\textup{NC}}\n\\newcommand{\\nF}{\\overrightarrow{n}}\n\\newcommand{\\nB}{\\overleftarrow{n}}\n\\renewcommand{\\P}[1]{{\\cal P}\\left(#1\\right)}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\rank}{\\ensuremath{\\textup{rank}}}\n\\newcommand{\\sem}[1]{\\llbracket #1 \\rrbracket}\n\\newcommand{\\spa}[1]{\\langle #1 \\rangle}\n\\newcommand{\\X}{\\mathcal{X}}\n\\newcommand{\\Y}{\\mathcal{Y}}\n\n\n\\begin{abstract}\nThis set of notes re-proves known results on weighted automata (over a field, also known as multiplicity automata).\nThe text offers a unified view on theorems and proofs that have appeared in the literature over decades and were written in different styles and contexts.\n\\emph{None of the results reported here are claimed to be new.}\n\nThe content centres around fundamentals of equivalence and minimization, with an emphasis on algorithmic aspects.\n\nThe presentation is minimalistic.\nNo attempt has been made to motivate the material.\nWeighted automata are viewed from a linear-algebra angle.\nAs a consequence, the proofs, which are meant to be succinct, but complete and almost self-contained, rely mainly on elementary linear algebra.\n\\end{abstract}\n\n\\section{Preliminaries}\n\nLet $\\K$ be a field.\nWhen speaking about algorithms and computational complexity, we will implicitly take as~$\\K$ the field~$\\Q$ of rational numbers (where we assume that rational numbers are encoded as quotients of integers encoded in binary).\nFor a finite alphabet~$\\Sigma$ we call a map $s : \\Sigma^* \\to \\K$ a \\emph{series}.\n\nAn \\emph{automaton} $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ consists of a natural number~$n$ (to which we refer as the \\emph{number of states}),\n a finite alphabet~$\\Sigma$,\n a map $M : \\Sigma \\to \\K^{n \\times n}$, an initial (row) vector $\\alpha \\in \\K^n$, and a final (column) vector $\\eta \\in \\K^n$.\nExtend $M$ to a monoid homomorphism $M : \\Sigma^* \\to \\K^{n \\times n}$ by setting $M(a_1 \\cdots a_k) := M(a_1) \\cdots M(a_k)$ and $M(\\varepsilon) := I_n$, where $\\varepsilon$ is the empty word and $I_n \\in \\{0,1\\}^{n \\times n}$ the $n \\times n$ identity matrix.\nThe \\emph{semantics} of an automaton~$\\A$ is the series $\\sem{\\A} : \\Sigma^* \\to \\K$ with $\\sem{\\A}(w) = \\alpha M(w) \\eta$.\nAutomata $\\A_1, \\A_2$ over the same alphabet~$\\Sigma$ are said to be \\emph{equivalent} if $\\sem{\\A_1} = \\sem{\\A_2}$.\nAn automaton~$\\A$ is \\emph{minimal} if there is no equivalent automaton~$\\A'$ with fewer states.\nIf $n = 0$, it is natural to put $\\sem{\\A}(w) = 0$ for all $w \\in \\Sigma^*$.\n\nWe have the following closure properties:\n\\begin{proposition} \\label{prop:closure}\nLet $\\A_i = (n_i, \\Sigma, M_i, \\alpha_i, \\eta_i)$ for $i \\in \\{1,2\\}$ be automata.\nOne can compute 
in logarithmic space (hence, in polynomial time) automata $\\A_+, \\A_-, \\A_\\otimes$ with\n$\\sem{\\A_+}(w) = \\sem{\\A_1}(w) + \\sem{\\A_2}(w)$ and \n$\\sem{\\A_-}(w) = \\sem{\\A_1}(w) - \\sem{\\A_2}(w)$ and \n$\\sem{\\A_\\otimes}(w) = \\sem{\\A_1}(w) \\cdot \\sem{\\A_2}(w)$ for all $w \\in \\Sigma^*$.\nOne can compute~$\\A_+, \\A_-$ with $O(|\\Sigma| (n_1+n_2)^2)$ arithmetic operations.\nOne can compute~$\\A_\\otimes$ with $O(|\\Sigma| n_1^2 n_2^2)$ arithmetic operations.\n\\end{proposition}\n\\begin{proof}\nIt is straightforward to check that the automaton $\\A_+ = (n_1+n_2, \\Sigma, M_+, (\\alpha_1, \\alpha_2), \\eta_+)$ with\n\\[\nM_+(a) \\ = \\ \\begin{pmatrix} M_1(a) & 0_{n_1, n_2} \\\\ 0_{n_2,n_1} & M_2(a) \\end{pmatrix} \\quad \\text{for all $a \\in \\Sigma$}\n \\quad \\text{and} \\quad \n \\eta_+ \\ = \\ \\begin{pmatrix} \\eta_1 \\\\ \\eta_2 \\end{pmatrix}\n\\]\nis the desired automaton, where $0_{m,n}$ stands for the $m \\times n$ zero matrix.\n\nThe automaton $\\A_{-}$ can be constructed similarly to~$\\A_+$, but $(\\alpha_1, \\alpha_2)$ is replaced with $(\\alpha_1, -\\alpha_2)$.\n\nLet $\\mathord{\\otimes}$ denote the Kronecker product.\nDefine $\\A_\\otimes = (n_1 n_2, \\Sigma, M_\\otimes, (\\alpha_1 \\otimes \\alpha_2), (\\eta_1 \\otimes \\eta_2))$, where $M_\\otimes(a) = M_1(a) \\otimes M_2(a)$ for all $a \\in \\Sigma$.\nUsing the mixed-product property of~$\\mathord{\\otimes}$ (i.e., $(A B) \\otimes (C D) = (A \\otimes C) (B \\otimes D)$),\nwe have for all $w = a_1 \\cdots a_k \\in \\Sigma^*$:\n\\begin{align*}\n\\sem{\\A_\\otimes}(w) \\ &= \\ (\\alpha_1 \\otimes \\alpha_2) (M_1(a_1) \\otimes M_2(a_1)) \\cdots (M_1(a_k) \\otimes M_2(a_k)) (\\eta_1 \\otimes \\eta_2) \\\\\n&= \\ (\\alpha_1 M_1(a_1) \\cdots M_1(a_k) \\eta_1) \\otimes (\\alpha_2 M_2(a_1) \\cdots M_2(a_k) \\eta_2) \\\\\n&= \\ \\sem{\\A_1}(a_1 \\cdots a_k) \\cdot \\sem{\\A_2}(a_1 \\cdots a_k) \\qedhere\n\\end{align*}\n\\end{proof}\n\n\nFor a set~$V$ of vectors we use the notation $\\spa{v \\mid v \\in V}$ to denote the vector space spanned by~$V$.\nFor an automaton~$\\A$, define its \\emph{forward space} as the (row) vector space $\\spa{\\alpha M(w) \\mid w \\in \\Sigma^*}$.\nSimilarly, the \\emph{backward space} of~$\\A$ is the (column) vector space $\\spa{ M(w) \\eta \\mid w \\in \\Sigma^* }$.\n\nLet $s : \\Sigma^* \\to \\K$.\nThe \\emph{Hankel matrix} of~$s$ is the (infinite) matrix $H \\in \\K^{\\Sigma^* \\times \\Sigma^*}$ with $H[x,y] = s(x y)$ for all $x,y \\in \\Sigma^*$.\nDefine $\\rank(s) := \\rank(H)$.\n\n\\section{Equivalence Checking} \\label{sec:equivalence}\n\nFirst we discuss how to efficiently compute a basis of the forward space $\\FV := \\spa{\\alpha M(w) \\mid w \\in \\Sigma^*}$ of an automaton $\\A = (n, \\Sigma, M, \\alpha, \\eta)$.\nIt is a matter of basic linear algebra to check that $\\FV$ is the smallest vector space that contains~$\\alpha$ and is closed under post-multiplication by~$M(a)$ (i.e., $\\FV M(a) \\subseteq \\FV$) for all $a \\in \\Sigma$.\nHence \\cref{alg:forward-space-basic} computes a basis of~$\\FV$.\n\n\\begin{algorithm}\n\\DontPrintSemicolon\n\\If{$\\alpha = 0$}{\\Return{$\\emptyset$}}\n$W := \\{\\varepsilon\\}$\\;\n\\While{$\\exists\\, w \\in W\\; \\exists\\, a \\in \\Sigma : \\alpha M(w a) \\not\\in \\spa{\\alpha M(u) \\mid u \\in W}$}{\n\t$W := W \\cup \\{w a\\}$\n }\n\\Return{$\\{\\alpha M(w) \\mid w \\in W\\}$}\n\\caption{Computing a basis of the forward space of an automaton $\\A = (n, \\Sigma, M, \\alpha, 
\\eta)$.}\n\\label{alg:forward-space-basic}\n\\end{algorithm}\n\n\\Cref{alg:forward-space-basic} actually computes a set~$W$ of words such that $\\{\\alpha M(w) \\mid w \\in W\\}$ is a basis of the forward space~$\\FV$.\nThese words will be of interest, e.g., to compute a word~$w$ that ``witnesses'' the inequivalence of two automata.\nSince $\\FV$ is a subspace of $\\K^{n}$, its dimension, say $\\nF$, is at most~$n$.\nIt follows that $|W| = \\nF \\le n$ and $|w| \\le \\nF-1$ holds for all $w \\in W$.\n\n\nWe want to make \\cref{alg:forward-space-basic} efficient.\nFirst, in addition to the words~$w$ we save the vectors $\\alpha M(w)$ to avoid unnecessary vector-matrix computations.\nSecond, we use a worklist, implemented as a queue, to keep track of which vectors are new in the basis of~$\\FV$ computed so far.\nThese refinements result in \\cref{alg:forward-space-intermediate}.\n\n\\begin{algorithm}\n\\DontPrintSemicolon\n\\If{$\\alpha = 0$}{\\Return{$\\emptyset$}}\n$P := \\{(\\varepsilon,\\alpha)\\}$\\;\n$Q := [(\\varepsilon,\\alpha)]$\\;\n\\Repeat{$\\mathit{isEmpty}(Q)$}{\n $(w,v) := \\mathit{dequeue}(Q)$\\;\n \\ForAll{$a \\in \\Sigma$}{\n $w' := w a$\\;\n $v' := v M(a)$\\;\n \\If{$v' \\not\\in \\spa{u \\mid (x,u) \\in P}$ \\label{algline:forward-space-intermediate}}{\n $P := P \\cup \\{(w', v')\\}$\\;\n $Q := \\mathit{enqueue}(Q,(w',v'))$\\;\n }\n }\n}\n\\Return{$P$}\n\\caption{Computing\n$\\{(w_1,v_1), \\ldots, (w_{\\protect\\nF}, v_{\\protect\\nF})\\} \\subseteq \\Sigma^{\\le \\protect\\nF-1} \\times \\K^{n}$\nsuch that $\\{v_1, \\ldots, v_{\\protect\\nF}\\}$\nis a basis of the forward space of an automaton $\\A = (n, \\Sigma, M, \\alpha, \\eta)$.}\n\\label{alg:forward-space-intermediate}\n\\end{algorithm}\n\n\\Cref{algline:forward-space-intermediate} of \\Cref{alg:forward-space-intermediate} requires a check for linear independence.\nUsing Gaussian elimination, such a check can be carried out with $O(n^3)$ arithmetic operations.\nTo make this more efficient, one can keep a basis of the vector space $\\spa{u \\mid (x,u) \\in P}$ in echelon form.\nWith such a basis at hand, the check for linear independence amounts to performing one iteration of Gaussian elimination, which takes $O(n^2)$ operations, and checking if the resulting vector is non-zero.\nIf it is indeed non-zero, it can be added to the basis, thus preserving its echelon form.\\footnote{For improved numerical stability of the computation, instead of using a basis in echelon form, one may keep an orthonormal basis, against which the new vector is orthogonalized using one iteration ($O(n^2)$ operations) of the modified Gram-Schmidt process.}\n\nSince $\\nF \\le n$, it follows that \\cref{algline:forward-space-intermediate} is executed $O(n |\\Sigma|)$ times.\nHence we have:\n\\begin{proposition} \\label{prop:compute-forward-intermediate}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nOne can compute in polynomial time (with $O(|\\Sigma| n^3)$ arithmetic operations) a set $\\{(w_1,v_1), \\ldots, (w_{\\nF}, v_{\\nF})\\} \\subseteq \\Sigma^{\\le \\nF-1} \\times \\K^{n}$ such that $\\{v_1, \\ldots, v_{\\nF}\\}$ is a basis of~$\\FV$ and $v_i = \\alpha M(w_i)$ holds for all $1 \\le i \\le \\nF$.\n\\end{proposition}\n\nAn automaton~$\\A$ is called \\emph{zero} if $\\sem{\\A}(w) = 0$ for all $w \\in \\Sigma^*$.\nWe show:\n\\begin{proposition} \\label{prop:zeroness-in-P}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nOne can check in polynomial time (with $O(|\\Sigma| n^3)$ arithmetic operations) 
whether $\\A$ is zero, and if it is not, output $w \\in \\Sigma^*$ with $|w| \\le n-1$ such that $\\sem{\\A}(w) \\ne 0$.\n\\end{proposition}\n\\begin{proof}\nAutomaton~$\\A = (n, \\Sigma, M, \\alpha, \\eta)$ is zero if and only if its forward space $\\FV := \\spa{\\alpha M(w) \\mid w \\in \\Sigma^*}$ is orthogonal to $\\eta$, i.e., $v \\eta = 0$ for all $v \\in \\FV$.\nLet $S \\subseteq \\K^n$ (with $|S| \\le n$) be a basis of~$\\FV$.\nThen $\\A$ is zero if and only if $v \\eta = 0$ holds for all $v \\in S$.\nBut by \\cref{prop:compute-forward-intermediate} one can compute such~$S$.\nSimilarly, one can compute, if it exists, the ``counterexample''~$w$.\n\\end{proof}\n\n\\begin{theorem} \\label{thm:equivalence-in-P}\nLet $\\A_i = (n_i, \\Sigma, M_i, \\alpha_i, \\eta_i)$ for $i \\in \\{1,2\\}$ be automata.\nOne can check in polynomial time (with $O(|\\Sigma| (n_1+n_2)^3)$ arithmetic operations) whether $\\A_1, \\A_2$ are equivalent, and if they are not, output $w \\in \\Sigma^*$ with $|w| \\le n_1 + n_2 - 1$ such that $\\sem{\\A_1}(w) \\ne \\sem{\\A_2}(w)$.\n\\end{theorem}\n\\begin{proof}\nCompute automaton~$\\A_-$ from \\cref{prop:closure}.\nThen the theorem follows from \\cref{prop:zeroness-in-P}.\n\\end{proof}\n\n\\br\n\nEquivalence checking goes back to the seminal paper by Sch\\\"utzenberger from 1961~\\cite{IC::Schutzenberger1961}.\nA polynomial-time algorithm could be derived from there but was not made explicit.\nThe books by Paz~\\cite{Paz71} and Eilenberg~\\cite{Eilenberg74} from 1971 and 1974, respectively, describe an exponential-time algorithm based on the fact that shortest ``counterexamples'' have length at most $n_1+n_2-1$.\nAn $O(|\\Sigma| (n_1 + n_2)^4)$ (in terms of arithmetic operations) algorithm was explicitly provided in 1992 by Tzeng~\\cite{Tzeng92}.\nImprovements to $O(|\\Sigma| (n_1 + n_2)^3)$ were then \\mbox{(re-)}discovered, e.g., in \\cite{CortesMohriRastogi07,KieferMOWW11,BerlinkovFS18}. 
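To make the algorithmic content of this section concrete, here is a compact Python sketch (ours, not taken from any of the cited papers) of the worklist computation of \cref{prop:compute-forward-intermediate} together with the zeroness and equivalence checks of \cref{prop:zeroness-in-P} and \cref{thm:equivalence-in-P}. An automaton is assumed to be a triple \texttt{(alpha, M, eta)} of lists and a dict with \texttt{Fraction} entries, so arithmetic is exact over $\Q$; the basis is kept keyed by pivot column, so each membership test is one Gaussian-elimination pass, as in the $O(n^2)$ refinement described above.

\begin{verbatim}
from fractions import Fraction

def _residual(v, basis):
    # Reduce v against basis rows (dict: pivot column -> row); the
    # residual is zero iff v lies in the span of the basis rows.
    v = v[:]
    for p in sorted(basis):
        if v[p] != 0:
            c = v[p] / basis[p][p]
            v = [x - c * y for x, y in zip(v, basis[p])]
    return v

def forward_pairs(alpha, M):
    # Worklist computation of pairs (w, alpha M(w)) whose vectors form
    # a basis of the forward space; M maps letters to n x n matrices.
    if all(x == 0 for x in alpha):
        return []
    basis = {next(i for i, x in enumerate(alpha) if x != 0): alpha}
    pairs, queue = [("", alpha)], [("", alpha)]
    while queue:
        w, v = queue.pop(0)
        for a, Ma in M.items():
            u = [sum(v[i] * Ma[i][j] for i in range(len(v)))
                 for j in range(len(v))]
            r = _residual(u, basis)
            if any(x != 0 for x in r):
                basis[next(i for i, x in enumerate(r) if x != 0)] = r
                pairs.append((w + a, u))
                queue.append((w + a, u))
    return pairs

def zero_counterexample(alpha, M, eta):
    # Returns a word w with semantics(w) != 0, or None if A is zero.
    for w, v in forward_pairs(alpha, M):
        if sum(x * y for x, y in zip(v, eta)) != 0:
            return w
    return None

def inequivalence_witness(aut1, aut2):
    # Difference automaton: block-diagonal M, initial (alpha1, -alpha2).
    (a1, M1, e1), (a2, M2, e2) = aut1, aut2
    z1, z2 = [Fraction(0)] * len(a1), [Fraction(0)] * len(a2)
    alpha = list(a1) + [-x for x in a2]
    M = {c: [row + z2 for row in M1[c]] + [z1 + row for row in M2[c]]
         for c in M1}
    return zero_counterexample(alpha, M, list(e1) + list(e2))
\end{verbatim}

Any witness returned by \texttt{inequivalence\_witness} has length at most $n_1 + n_2 - 1$, since the words produced by the worklist have length at most $\nF - 1$.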
These improvements are all based on the idea described before \\cref{prop:compute-forward-intermediate}.\nThe abstract of the 2002 paper~\\cite{Archangelsky02} indicates that this improvement was already known to some.\nIncidentally, a different algorithm, also cubic in~$n$, was proposed in~\\cite{Archangelsky02}.\n\n\\section{Minimization} \\label{sec:minimization}\n\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nLet $F \\in \\K^{\\nF \\times n}$ with $\\nF \\le n$ be a matrix whose rows form a basis of the forward space~$\\FV$.\nSimilarly, let $B \\in \\K^{n \\times \\nB}$ with $\\nB \\le n$ be a matrix whose columns form a basis of the backward space~$\\BV$.\nSince $\\FV M(a) \\subseteq \\FV$ and $M(a) \\BV \\subseteq \\BV$ for all $a \\in \\Sigma$,\n there exist maps $\\MF : \\Sigma \\to \\K^{\\nF \\times \\nF}$ and $\\MB : \\Sigma \\to \\K^{\\nB \\times \\nB}$ such that\n \\begin{equation*}\n F M(a) \\ = \\ \\MF(a) F \\quad \\text{and} \\quad M(a) B \\ = \\ B \\MB(a) \\quad \\text{for all $a \\in \\Sigma$.}\n \\end{equation*}\nThese maps $\\MF, \\MB$ are unique, as $F, B$ have full rank.\nThe above equalities extend inductively to words:\n \\begin{equation}\n F M(w) \\ = \\ \\MF(w) F \\quad \\text{and} \\quad M(w) B \\ = \\ B \\MB(w) \\quad \\text{for all $w \\in \\Sigma^*$}\n \\label{eq:commutativity}\n \\end{equation}\nLet $\\alphaF \\in \\K^{\\nF}$ be the unique row vector with $\\alphaF F = \\alpha$, and $\\etaB \\in \\K^{\\nB}$ be the unique column vector with $B \\etaB = \\eta$.\nCall $\\AF := (\\nF, \\Sigma, \\MF, \\alphaF, F \\eta)$ the \\emph{forward conjugate} of~$\\A$ with base~$F$,\n and $\\AB := (\\nB, \\Sigma, \\MB, \\alpha B, \\etaB)$ the \\emph{backward conjugate} of~$\\A$ with base~$B$.\n\n\\begin{proposition} \\label{prop:equivalence}\n Let $\\A$ be an automaton. Then $\\sem{\\A} = \\sem{\\AF} = \\sem{\\AB}$.\n\\end{proposition}\n\\begin{proof}\n By symmetry, it suffices to show the first equality.\n Indeed, we have for all $w \\in \\Sigma^*$:\n \\begin{align*}\n \\sem{\\AF}(w)\n \\ & =\\ \\alphaF \\MF(w) F \\eta \\\\\n \\ & =\\ \\alphaF F M(w) \\eta && \\text{by~\\cref{eq:commutativity}} \\\\\n \\ & =\\ \\alpha M(w) \\eta && \\text{definition of~$\\alphaF$} \\\\\n \\ & =\\ \\sem\\A(w) \\tag*{\\qedhere}\n \\end{align*}\n\\end{proof}\n\n\\begin{proposition} \\label{prop:compute-conjugate}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nOne can compute in polynomial time (with $O(|\\Sigma| n^3)$ arithmetic operations)\n\\begin{itemize}\n\\item a matrix~$F$ whose rows form a basis of the forward space of~$\\A$, and\n\\item the forward conjugate of~$\\A$ with base~$F$.\n\\end{itemize}\nThe statement holds analogously for backward conjugates.\n\\end{proposition}\n\\begin{proof}\nBy \\cref{prop:compute-forward-intermediate} one can compute a basis of the forward space, and hence~$F$, in the required time.\nHaving computed~$F$ it is straightforward to compute $\\AF$ in the required time.\nThe same holds analogously for~$\\AB$.\n\\end{proof}\n\n\\begin{proposition} \\label{prop:direction-1}\n Let $\\A$ be an automaton. 
Then $\\rank(\\sem\\A) \\le n$.\n\\end{proposition}\n\\begin{proof}\n Consider the matrices $\\Fh: \\K^{\\Sigma^* \\times n}$ and $\\Bh: \\K^{n \\times \\Sigma^*}$ with\n $\\Fh[w,\\cdot] = \\alpha M(w)$ and $\\Bh[\\cdot,w] = M(w) \\eta$ for all $w \\in \\Sigma^*$.\n Note that $\\rank(\\Fh) \\le n$ (and similarly $\\rank(\\Bh) \\le n$).\n Let $x,y \\in \\Sigma^*$.\n Then $(\\Fh \\Bh)[x,y] = \\alpha M(x) M(y) \\eta = \\sem\\A(x y)$, so $\\Fh \\Bh$ is the Hankel matrix of~$\\sem\\A$.\n Hence $\\rank(\\sem\\A) = \\rank(\\Fh \\Bh) \\le \\rank(\\Fh) \\le n$.\n\\end{proof}\n\nCall an automaton with $n$ states \\emph{forward-minimal} (resp., \\emph{backward-minimal}) if its forward (resp., backward) space has dimension~$n$.\n\n\\begin{proposition} \\label{prop:forw-conj-is-forw-min}\nA forward conjugate is forward-minimal.\nA backward conjugate is backward-minimal.\n\\end{proposition}\n\\begin{proof}\nBy symmetry, it suffices to prove the statement about forward conjugates.\nLet $\\AF = (\\nF, \\Sigma, \\MF, \\alphaF, F \\eta)$ be the forward conjugate of $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ with base~$F \\in \\K^{\\nF \\times n}$.\nWe have:\n\\begin{align*}\n & \\dim\\, \\spa{\\alphaF \\MF(w) \\mid w \\in \\Sigma^*} \\\\\n=\\ & \\dim\\, \\spa{\\alphaF \\MF(w) F \\mid w \\in \\Sigma^*} && \\text{the rows of~$F$ are linearly independent}\\\\\n=\\ & \\dim\\, \\spa{\\alphaF F M(w) \\mid w \\in \\Sigma^*} && \\text{by \\cref{eq:commutativity}}\\\\\n=\\ & \\dim\\, \\spa{\\alpha M(w) \\mid w \\in \\Sigma^*} && \\text{definition of~$\\alphaF$}\\\\\n=\\ & \\dim\\, \\FV && \\text{definition of~$\\FV$} \\\\\n=\\ & \\nF && \\text{definition of~$\\nF$} &&\\qedhere\n\\end{align*}\n\\end{proof}\n\n\\begin{proposition} \\label{prop:new-minimal}\nA backward conjugate of a forward-minimal automaton is minimal.\nA forward conjugate of a backward-minimal automaton is minimal.\n\\end{proposition}\n\\begin{proof}\nBy symmetry, it suffices to show the first statement.\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be forward-minimal.\nLet $B \\in \\K^{n \\times \\nB}$ be a matrix whose columns form a basis of the backward space of~$\\A$.\nBy \\cref{prop:direction-1} it suffices to show that $\\nB = \\rank(H)$, where $H$ is the Hankel matrix of~$\\sem{\\A}$.\nLet $\\Fh$ and $\\Bh$ be the matrices from the proof of \\cref{prop:direction-1}.\nSince $\\A$ is forward-minimal, the columns of~$\\Fh$ are linearly independent.\nWe have:\n\\begin{align*}\n&\\ \\nB \\\\ \n= &\\ \\rank(B) && \\text{definition of~$B$} \\\\\n= &\\ \\rank(\\Fh B) && \\text{the columns of~$\\Fh$ are linearly independent} \\\\\n= &\\, \\dim\\, \\spa{\\Fh M(w) \\eta \\mid w \\in \\Sigma^*} && \\text{definition of~$B$} \\\\\n= &\\ \\rank(\\Fh \\Bh) && \\text{definition of~$\\Bh$} \\\\\n= &\\ \\rank(H) && \\text{proof of \\cref{prop:direction-1}} \\qedhere\n\\end{align*}\n\\end{proof}\n\n\n\\begin{theorem} \\label{thm:minimization}\nLet $\\A$ be an automaton.\nLet $\\A'$ be a backward conjugate of a forward conjugate of~$\\A$ (or a forward conjugate of a backward conjugate of~$\\A$).\nThen $\\A'$ is minimal and equivalent to~$\\A$.\nIt can be computed in polynomial time (with $O(|\\Sigma| n^3)$ arithmetic operations).\n\\end{theorem}\n\\begin{proof}\nMinimality follows from \\cref{prop:forw-conj-is-forw-min,prop:new-minimal}.\nEquivalence follows from \\cref{prop:equivalence}.\nPolynomial-time computability follows by invoking \\cref{prop:compute-conjugate} twice.\n\\end{proof}\n\nLet $\\A_i = (n, \\Sigma, M_i, \\alpha_i, \\eta_i)$ for $i \\in \\{1,2\\}$ 
be minimal, where $\\A_2$ is the forward conjugate of~$\\A_1$ with some base $Q \\in \\K^{n \\times n}$.\nBy minimality and \\cref{prop:equivalence}, matrix~$Q$ is invertible.\nSince\n\\[\n \\alpha_2 Q = \\alpha_1, \\quad \\eta_2 = Q \\eta_1, \\quad Q M_1(a) = M_2(a) Q \\quad\\text{for all } a \\in \\Sigma\\,,\n\\]\nautomaton~$\\A_1$ is the backward conjugate of~$\\A_2$ with base~$Q$.\nSince\n\\[\n\\alpha_2 = \\alpha_1 Q^{-1}, \\quad Q^{-1} \\eta_2 = \\eta_1, \\quad M_1(a) Q^{-1} = Q^{-1} M_2(a) \\quad\\text{for all } a \\in \\Sigma\\,,\n\\]\nautomaton~$\\A_1$ is the forward conjugate of~$\\A_2$ with base~$Q^{-1}$, and $\\A_2$ is the backward conjugate of~$\\A_1$ with base~$Q^{-1}$.\n\nThis motivates the following definition.\nCall minimal automata $\\A_1, \\A_2$ \\emph{conjugate} if one is a forward (equivalently, backward) conjugate of the other.\n\n\\begin{theorem} \\label{thm:conjugate-new}\nTwo minimal automata are conjugate if and only if they are equivalent.\n\\end{theorem}\n\\begin{proof}\nThe forward direction follows from \\cref{prop:equivalence}.\n\nTowards the backward direction, let $\\A_i = (n, \\Sigma, M_i, \\alpha_i, \\eta_i)$ for $i \\in \\{1,2\\}$ be minimal equivalent automata.\nFor $i \\in \\{1, 2\\}$, consider the matrices $\\Fh_i: \\K^{\\Sigma^* \\times n}$ and $\\Bh_i: \\K^{n \\times \\Sigma^*}$ with $\\Fh_i[w,\\cdot] = \\alpha_i M_i(w)$ and $\\Bh_i[\\cdot,w] = M_i(w) \\eta_i$ for all $w \\in \\Sigma^*$.\nIt follows from minimality and \\cref{prop:equivalence} that $\\Fh_i$ and~$\\Bh_i$ have full rank~$n$.\nMoreover, $\\Fh_1$ and~$\\Fh_2$ have the same column space: since $\\Bh_i$ has full row rank, the column space of~$\\Fh_i$ equals the column space of~$\\Fh_i \\Bh_i$, which in both cases is the Hankel matrix of $\\sem{\\A_1} = \\sem{\\A_2}$.\nThus, there is an invertible matrix~$Q \\in \\K^{n \\times n}$ with $\\Fh_1 = \\Fh_2 Q$.\nWe show that $\\A_2$ is the forward conjugate of~$\\A_1$ with base~$Q$.\n\nWe have $\\alpha_1 = \\Fh_1[\\varepsilon,\\cdot] = \\Fh_2[\\varepsilon,\\cdot] Q = \\alpha_2 Q$.\nLetting $H$ denote the Hankel matrix of $\\sem{\\A_1} = \\sem{\\A_2}$, we have $\\Fh_2 Q \\Bh_1 = \\Fh_1 \\Bh_1 = H = \\Fh_2 \\Bh_2$.\nSince $\\Fh_2$ has full rank, it follows that $Q \\Bh_1 = \\Bh_2$.\nIn particular, $Q \\eta_1 = Q \\Bh_1[\\cdot, \\varepsilon] = \\Bh_2[\\cdot, \\varepsilon] = \\eta_2$.\n\nFor any $a \\in \\Sigma$ let $H_a \\in \\K^{\\Sigma^* \\times \\Sigma^*}$ be the matrix with $H_a[x,y] = \\sem{\\A_i}(x a y) $ for all $x, y \\in \\Sigma^*$.\nWe have $\\Fh_i M_i(a) \\Bh_i = H_a$ for all $a \\in \\Sigma$.\nThus, for all $a \\in \\Sigma$ we have:\n\\[ \\Fh_2 Q M_1(a) \\Bh_1 \\ = \\ \\Fh_1 M_1(a) \\Bh_1 \\ = \\ H_a \\ = \\ \\Fh_2 M_2(a) \\Bh_2 \\ = \\ \\Fh_2 M_2(a) Q \\Bh_1\n\\]\nSince $\\Fh_2$ and~$\\Bh_1$ have full rank, we have $Q M_1(a) = M_2(a) Q$ for all $a \\in \\Sigma$.\n\\end{proof}\n\n\\br\n\nMinimization is closely related to equivalence and also goes back to~\\cite{IC::Schutzenberger1961}.\nThe book~\\cite[Chapter~II]{BerstelReutenauer88} describes a minimization procedure.\nAn $O(|\\Sigma| n^4)$ minimization algorithm (for a related probabilistic model) was suggested in~\\cite{GillmanS94}.\nThe algorithm given in this note is reminiscent of Brzozowski's algorithm for minimizing DFAs~\\cite{Brzozowski62}.\nThe succinct formulation in this note is essentially from~\\cite{14KW-ICALP}.\nFurther generalizations of Brzozowski's algorithm are discussed in~\\cite{BonchiBrzozowski}.\n\n\\Cref{thm:conjugate-new} also goes back to \\cite{IC::Schutzenberger1961}.\nSee also \\cite{fliess} and \\cite[Chapter~II]{BerstelReutenauer88}.\n\n\n\\section{The Hankel Automaton} \\label{sec-Hankel-aut}\n\nLet $s : \\Sigma^* \\to \\K$ be a series of rank~$n$, with Hankel matrix~$H$.\nCall a set $C = \\{c_1, 
\\ldots, c_n\\} \\subseteq \\Sigma^*$ \\emph{complete} if the columns of~$H[\\cdot, C]$ form a basis of the column space of~$H$.\n\nNote that for any $G \\subseteq \\Sigma^*$ and $w \\in \\Sigma^*$ we have $H[G w, C] = H[G, w C]$, where $G w := \\{g w \\mid g \\in G\\}$ and $w C := \\{w c \\mid c \\in C\\}$.\n\nLet $C = \\{c_1, \\ldots, c_n\\} \\subseteq \\Sigma^*$ be complete.\nThen, for any $w \\in \\Sigma^*$ there is a unique column vector $\\eta_w \\in \\K^{n}$ with $H[\\cdot,w] = H[\\cdot,C] \\eta_w$, and for all $a \\in \\Sigma$ a unique matrix~$\\MC(a) \\in \\K^{n \\times n}$ with $H[\\cdot,C] \\MC(a) = H[\\cdot, a C]$.\nWe define the \\emph{Hankel automaton} for $s,C$ as $\\AC = (n, \\Sigma, \\MC, H[\\varepsilon,C], \\eta_\\varepsilon)$.\n\n\\begin{proposition} \\label{prop:canonical-aut-monoid}\nLet $\\AC = (n, \\Sigma, \\MC, H[\\varepsilon,C], \\eta_\\varepsilon)$ be the Hankel automaton for~$s,C$.\nThen for all $w \\in \\Sigma^*$ we have $H[\\cdot,C] \\MC(w) = H[\\cdot, w C]$.\nHence, if $G = \\{g_1, \\ldots, g_n\\} \\subseteq \\Sigma^*$ is such that $H[G,C]$ has full rank, we have $\\MC(w) = H[G,C]^{-1} H[G, w C]$ for all $w \\in \\Sigma^*$.\n\\end{proposition}\n\\begin{proof}\nWe proceed by induction on the length of~$w$.\nThe induction base ($w = \\varepsilon$) is trivial.\nFor the step, let $w \\in \\Sigma^*$ and $a \\in \\Sigma$.\nWe have:\n\\begin{align*}\nH[\\cdot,C] \\MC(w a)\n\\ &=\\ H[\\cdot,w C] \\MC(a) && \\text{induction hypothesis} \\\\\n\\ &=\\ H[\\cdot\\, w, C] \\MC(a) \\\\\n\\ &=\\ H[\\cdot\\, w, a C] && \\text{definition of~$\\MC(a)$} \\\\\n\\ &=\\ H[\\cdot, w a C] && \\qedhere\n\\end{align*}\n\\end{proof}\n\n\\begin{proposition} \\label{prop:canonical-main}\nLet $\\AC = (n, \\Sigma, \\MC, H[\\varepsilon,C], \\eta_\\varepsilon)$ be the Hankel automaton for~$s,C$.\nThen $\\sem{\\AC} = s$ and $\\AC$ is minimal.\n\\end{proposition}\n\\begin{proof}\nWe have for all $w \\in \\Sigma^*$:\n\\begin{align*}\n\\sem{\\AC}(w)\n\\ &=\\ H[\\varepsilon,C] \\MC(w) \\eta_\\varepsilon \\\\\n\\ &=\\ H[\\varepsilon, w C] \\eta_\\varepsilon && \\text{\\cref{prop:canonical-aut-monoid}} \\\\\n\\ &=\\ H[w,C] \\eta_\\varepsilon \\\\\n\\ &=\\ H[w,\\varepsilon] && \\text{definition of~$\\eta_\\varepsilon$} \\\\\n\\ &=\\ s(w)\n\\end{align*}\nMinimality follows from \\cref{prop:direction-1}.\n\\end{proof}\n\n\\begin{theorem} \\label{thm:WA-are-complete}\nLet $s : \\Sigma^* \\to \\K$ be a series and $n \\in \\N$.\nThen $\\rank(s) \\le n$ if and only if there is an automaton~$\\A$ with $\\sem{\\A} = s$ that has at most $n$ states.\n\\end{theorem}\n\\begin{proof}\nFollows from \\cref{prop:canonical-main,prop:direction-1}.\n\\end{proof}\n\n\nThe following proposition uses some notions from \\cref{sec:minimization}.\n\n\\begin{proposition} \\label{prop:canonical-conjugate}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be forward-minimal.\nLet $C = \\{c_1, \\ldots, c_r\\} \\subseteq \\Sigma^*$ be such that the columns of the matrix $B := (M(c_1) \\eta, \\ldots, M(c_r) \\eta) \\in \\K^{n \\times r}$ form a basis of the backward space of~$\\A$.\nThen $C$ is complete, and the backward conjugate of~$\\A$ with base~$B$ is the Hankel automaton for~$\\sem{\\A},C$.\n\\end{proposition}\n\\begin{proof}\nLet $H$ be the Hankel matrix of~$\\sem{\\A}$.\nLet $\\Fh: \\K^{\\Sigma^* \\times n}$ and $\\Bh: \\K^{n \\times \\Sigma^*}$ be the matrices with $\\Fh[w,\\cdot] = \\alpha M(w)$ and $\\Bh[\\cdot,w] = M(w) \\eta$ for all $w \\in \\Sigma^*$.\nWe have $\\Fh \\Bh = H$ and $\\Bh[\\cdot,C] = B$, hence $\\Fh B = 
H[\\cdot,C]$.\nSince the column spaces of $\\Bh$ and~$B$ are equal, it follows that the column spaces of $H$ and~$H[\\cdot,C]$ are equal.\nSince $\\A$ is forward-minimal, $\\Fh$ has full rank.\nThus, $r = \\rank(B) = \\rank(\\Fh B) = \\rank(H[\\cdot,C])$, so the columns of $H[\\cdot,C]$ are linearly independent.\nHence, $C$ is complete.\n\nLet $\\AC = (r, \\Sigma, \\MC, H[\\varepsilon,C], \\eta_\\varepsilon)$ be the Hankel automaton for~$\\sem{\\A},C$.\nFor all $x, w \\in \\Sigma^*$ and all $c \\in C$ we have $\\Fh[x,\\cdot] M(w) B[\\cdot,c] = \\alpha M(x) M(w) M(c) \\eta = \\sem{\\A}(x w c) = H[x, w c]$. \nThus, we have for all $w \\in \\Sigma^*$:\n\\begin{align*}\n \\Fh M(w) B\n \\ &=\\ H[\\cdot, w C] \\\\\n \\ &=\\ H[\\cdot, C] \\MC(w) && \\text{\\cref{prop:canonical-aut-monoid}} \\\\\n \\ &=\\ \\Fh B \\MC(w)\n\\end{align*}\nSince $\\Fh$ has full rank, it follows that $M(w) B = B \\MC(w)$ for all $w \\in \\Sigma^*$.\nSimilarly, we have $\\Fh B \\eta_\\varepsilon = H[\\cdot, C] \\eta_\\varepsilon = H[\\cdot, \\varepsilon] = \\Fh \\eta$, and since $\\Fh$ has full rank, it follows that $B \\eta_\\varepsilon = \\eta$.\nFinally, we have $\\alpha B = \\Fh[\\varepsilon,\\cdot] B = H[\\varepsilon,C]$.\nWe conclude that $\\AC$ is the backward conjugate of~$\\A$ with base~$B$.\n\\end{proof}\n\n\\begin{theorem} \\label{thm:compute-conjugate-Hankel}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nOne can compute in polynomial time (with $O(|\\Sigma| n^3)$ arithmetic operations) a complete (for~$\\sem{\\A}$) set $C = \\{c_1, \\ldots, c_r\\} \\subseteq \\Sigma^{\\le r-1}$ with $r \\le n$, and the Hankel automaton for $\\sem{\\A}, C$.\n\\end{theorem}\n\\begin{proof}\nUsing \\cref{prop:compute-conjugate,prop:equivalence,prop:forw-conj-is-forw-min}, first compute in the required time a forward-minimal automaton~$\\AF = (\\nF, \\Sigma, \\MF, \\alphaF, \\etaF)$ with $\\sem{\\AF} = \\sem{\\A}$.\nBy the backward analogue of \\cref{prop:compute-forward-intermediate} one can compute in the required time a set $C = \\{c_1, \\ldots, c_r\\} \\subseteq \\Sigma^{\\le r-1}$ and the matrix $B = (\\MF(c_1) \\etaF, \\ldots, \\MF(c_r) \\etaF) \\in \\K^{\\nF \\times r}$ such that the columns of~$B$ form a basis of the backward space of~$\\AF$.\nLet $\\A'$ be the backward conjugate of~$\\AF$ with base~$B$.\nBy \\cref{prop:compute-conjugate}, it can be computed in the required time.\nBy \\cref{prop:canonical-conjugate}, $\\A'$ is the Hankel automaton for $\\sem{\\A}, C$.\n\\end{proof}\n\n\\br\n\nThe material in this section, at least up to \\cref{thm:WA-are-complete}, is similar to \\cite[Section~2]{carlyle1971realizations} and \\cite{fliess}.\nSee also \\cite[Theorem~5.3]{MandelSimon77} and \\cite[Section~2]{five} for related treatments.\n\n\n\\section{Computations in \\NC}\n\nWe show that some of the mentioned polynomial-time computations can even be carried out in the complexity class~\\NC, which comprises those languages having L-uniform Boolean circuits of polylogarithmic depth and polynomial size, or, equivalently,\nthose problems solvable in polylogarithmic time on parallel random-access machines with polynomially many processors. 
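As a concrete companion to \cref{prop:compute-conjugate} and \cref{thm:minimization} before we turn to \NC\ bounds, here is a numpy sketch of the two-pass minimization pipeline. It is ours, not from the cited papers: it uses floating-point rank tests where an exact implementation over $\Q$ would reuse Gaussian elimination, and it assumes $\alpha \ne 0$ and $F \eta \ne 0$.

\begin{verbatim}
import numpy as np

def space_basis(seed, maps):
    # Worklist closure of a row-vector space: returns a matrix whose
    # rows form a basis of span{ seed A(w) : w }, assuming seed != 0.
    rows, queue = [seed], [seed]
    while queue:
        v = queue.pop(0)
        for A in maps:
            u = v @ A
            if np.linalg.matrix_rank(np.vstack(rows + [u])) > len(rows):
                rows.append(u)
                queue.append(u)
    return np.vstack(rows)

def forward_conjugate(alpha, M, eta, F):
    # Solve M'(a) F = F M(a) and alpha' F = alpha (F has full row rank).
    Mf = {a: np.linalg.lstsq(F.T, (F @ Ma).T, rcond=None)[0].T
          for a, Ma in M.items()}
    af = np.linalg.lstsq(F.T, alpha, rcond=None)[0]
    return af, Mf, F @ eta

def backward_conjugate(alpha, M, eta, B):
    # Solve M(a) B = B M'(a) and B eta' = eta (B has full column rank).
    Mb = {a: np.linalg.lstsq(B, Ma @ B, rcond=None)[0]
          for a, Ma in M.items()}
    return alpha @ B, Mb, np.linalg.lstsq(B, eta, rcond=None)[0]

def minimize(alpha, M, eta):
    # Backward conjugate of a forward conjugate is minimal.
    F = space_basis(alpha, list(M.values()))
    af, Mf, ef = forward_conjugate(alpha, M, eta, F)
    # Backward space explored via transposed matrices, then transposed.
    Bt = space_basis(ef, [Ma.T for Ma in Mf.values()])
    return backward_conjugate(af, Mf, ef, Bt.T)
\end{verbatim}

Here \texttt{lstsq} solves the uniquely solvable linear systems $\MF(a) F = F M(a)$, $\alphaF F = \alpha$, $M(a) B = B \MB(a)$, and $B \etaB = \eta$ from \cref{sec:minimization}.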
We have $\\text{NL} \\subseteq \\text{\\NC} \\subseteq \\text{P}$.\nIn the following we assume $\\K = \\Q$.\n\n\\begin{lemma} \\label{lem: AtA}\nLet $A \\in \\K^{m \\times n}$.\nThe row spaces of $A$ and $A^T A$ are equal.\n\\end{lemma}\n\\begin{proof}\nIt is clear that the row space of $A^T A$ is included in the row space of~$A$.\nFor the converse, it suffices to show that the null space of $A^T A$ is included in the null space of~$A$.\nLet $x \\in \\K^n$ with $A^T A x = 0_n$, where $0_n$ denotes the zero vector.\nThen $(A x)^T (A x) = x^T A^T A x = x^T 0_n = 0$, and hence $A x = 0_m$, as a sum of squares in~$\\Q$ vanishes only if every term does.\n\\end{proof}\n\n\\begin{proposition}[cf.~\\cref{prop:compute-forward-intermediate}] \\label{prop:compute-forward-intermediate-NC}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nOne can compute in \\NC\\ a basis of the forward space $\\FV := \\spa{\\alpha M(w) \\mid w \\in \\Sigma^*}$.\n\\end{proposition}\n\\begin{proof}\nLet $\\FN \\in \\K^{\\Sigma^{\\le n-1} \\times n}$ be the matrix with $\\FN[w,\\cdot] = \\alpha M(w)$ for $w \\in \\Sigma^{\\le n-1}$.\nWe have shown in \\cref{sec:equivalence} that the rows of~$\\FN$ span~$\\FV$.\nBy \\cref{lem: AtA}, the rows of~$E := \\FN^T \\FN \\in \\K^{n \\times n}$ also span~$\\FV$.\n\nLet $e_i \\in \\{0,1\\}^n$ denote the $i$th coordinate column vector, and let $\\mathord{\\otimes}$ denote the Kronecker product.\nUsing the mixed-product property of~$\\mathord{\\otimes}$ (i.e., $(A B) \\otimes (C D) = (A \\otimes C) (B \\otimes D)$), we have:\n\\begin{align*}\nE[i,j]\n\\ &=\\ \\FN^T[i,\\cdot] \\ \\FN[\\cdot,j] \\\\\n\\ &=\\ \\sum_{w \\in \\Sigma^{\\le n-1}} (\\alpha M(w))[i] \\ (\\alpha M(w))[j] \\\\\n\\ &=\\ \\sum_{w \\in \\Sigma^{\\le n-1}} (\\alpha M(w) e_i) \\otimes (\\alpha M(w) e_j) \\\\\n\\ &=\\ \\sum_{w \\in \\Sigma^{\\le n-1}} (\\alpha \\otimes \\alpha) (M(w) \\otimes M(w)) (e_i \\otimes e_j) \\\\\n\\ &=\\ (\\alpha \\otimes \\alpha) \\left(\\sum_{w \\in \\Sigma^{\\le n-1}} M(w) \\otimes M(w)\\right) (e_i \\otimes e_j) \\\\\n\\ &=\\ (\\alpha \\otimes \\alpha) \\left(\\sum_{k=0}^{n-1} \\sum_{w \\in \\Sigma^k} M(w) \\otimes M(w)\\right) (e_i \\otimes e_j) \\\\\n\\ &=\\ (\\alpha \\otimes \\alpha) \\left(\\sum_{k=0}^{n-1} \\sum_{a_1 \\cdots a_k \\in \\Sigma^k} (M(a_1) \\otimes M(a_1)) \\cdots (M(a_k) \\otimes M(a_k)) \\right) \\\\ & \\qquad (e_i \\otimes e_j) \\\\\n\\ &=\\ (\\alpha \\otimes \\alpha) \\left(\\sum_{k=0}^{n-1} \\Big(\\sum_{a \\in \\Sigma} M(a) \\otimes M(a)\\Big)^k\\right) (e_i \\otimes e_j)\n\\end{align*}\nSince Kronecker products, sums and matrix powers~\\cite{Cook85} can be computed in~\\NC, one can compute~$E$ in~\\NC.\nWe include the $i$th row of~$E$ in the desired basis of~$\\FV$ if and only if $\\rank(E[\\{1, \\ldots, i\\},\\cdot]) > \\rank(E[\\{1, \\ldots, i-1\\},\\cdot])$.\nThis can be done in~\\NC, as the rank of a matrix can be determined in \\NC~\\cite{IbarraMR80}.\n\\end{proof}\n\n\\begin{proposition}[cf.~\\cref{prop:zeroness-in-P}] \\label{prop:zeroness-in-P-NC}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nOne can check in \\NC\\ whether $\\A$ is zero.\n\\end{proposition}\n\\begin{proof}\nAnalogous to the proof of \\cref{prop:zeroness-in-P}.\n\\end{proof}\n\n\\begin{theorem}[cf.~\\cref{thm:equivalence-in-P}] \\label{thm:equivalence-in-P-NC}\nLet $\\A_i = (n_i, \\Sigma, M_i, \\alpha_i, \\eta_i)$ for $i \\in \\{1,2\\}$ be automata.\nOne can check in \\NC\\ whether $\\A_1, \\A_2$ are equivalent.\n\\end{theorem}\n\\begin{proof}\nAnalogous to the proof of 
\\cref{thm:equivalence-in-P}.\n\\end{proof}\n\n\\begin{proposition}[cf.~\\cref{prop:compute-conjugate}] \\label{prop:compute-conjugate-NC}\nLet $\\A = (n, \\Sigma, M, \\alpha, \\eta)$ be an automaton.\nOne can compute in~\\NC:\n\\begin{itemize}\n\\item a matrix~$F$ whose rows form a basis of the forward space of~$\\A$,\n\\item the forward conjugate of~$\\A$ with base~$F$.\n\\end{itemize}\nThe statement holds analogously for backward conjugates.\n\\end{proposition}\n\\begin{proof}\nThe first item follows from \\cref{prop:compute-forward-intermediate-NC}.\nThe second item follows from the fact that linear systems of equations can be solved in \\NC~\\cite{Csanky76}.\n\\end{proof}\n\n\\begin{theorem}[cf.~\\cref{thm:minimization}] \\label{thm:minimization-NC}\nGiven an automaton, one can compute in~\\NC\\ a minimal equivalent automaton.\n\\end{theorem}\n\\begin{proof}\nAnalogous to the proof of \\cref{thm:minimization}.\n\\end{proof}\n\n\\br\n\n\\Cref{thm:equivalence-in-P-NC} about equivalence checking was proved by Tzeng~\\cite{Tzeng96}.\n\\Cref{thm:minimization-NC} about minimization was obtained in~\\cite[Section~4.2]{17KMW-LMCS}.\nIt is not known whether ``counterexample'' words for equivalence can be computed in~\\NC.\nThey can be computed in randomized~\\NC~\\cite{KieferMOWW13}.\n\n\\paragraph*{Acknowledgements.}\nThe author thanks Oscar Darwin, Qiyi Tang, and Cas Widdershoven for comments that helped to improve the text.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\n\\section{Introduction}\n\\input{paper\/1_intro}\n\\input{paper\/2_related}\n\\input{paper\/3_method}\n\\input{paper\/4_application}\n\\input{paper\/5_evaluation}\n\\input{paper\/6_limitations}\n\\input{paper\/7_conclusion}\n\n\n\\section*{Acknowledgment}\nThe authors acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). This work was supported by Engage Nova Scotia and Mitacs through the Mitacs Accelerate program (funding reference number IT28167).\n\n\\section{Background and Related Work}\\label{sec:related}\n\nThe modeling and design of \\textit{Knowledge-Decks} are based on existing knowledge model research. We have implemented ways to collect and transform user data into a retelling of the users' insights and knowledge discovery process through slide decks. Therefore, this section discusses VA knowledge models and their practical usage when applied to knowledge discovery. Then, we discuss several provenance\\footnote{Provenance: tracking and using data collected from a tool's usage~\\cite{ragan2015characterizing}} approaches, including data and analysis provenance, which enable one to collect users' insights and knowledge discovery process. Finally, we discuss existing approaches to recall and retell users' knowledge discovery as storytelling presentations, such as slide decks.\n\n\\subsection{Knowledge Modeling and Provenance in VA}\n\nVA literature in knowledge modeling has shown that users' interactivity with VA tools can be modeled as an iterative workflow of user intentions, behaviors, and insights~\\cite{sacha2014knowledge, sacha2016analytic, federico2017role}. For instance, Federico et al.~\\cite{federico2017role} describe how VA tools and frameworks can be modeled following knowledge modeling methodology and explain how to interpret VA research in light of their model. 
This methodology of describing VA as an iterative workflow of events between Human and Machine actors shows that it is possible to understand a VA problem as a sequence of events. This model also includes automatic processes, such as data mining or machine learning, as part of the sequence of events, even if a Human interaction did not trigger it. Sacha et al.~\\cite{sacha2018vis4ml}, among others, show through their detailed machine-learning ontology that these Computer-generated events permeate much of VA as a whole. \n\nUsually, VA developers and researchers use more practical means to model, collect, store, and utilize their tools' and users' experiences. For instance, some researchers collect and analyze user behavior using provenance~\\cite{da2009towards, ragan2015characterizing}. Landesberger et al.~\\cite{von2014interaction} perform behavior provenance by modeling user behavior as a graph network, which is then used for analysis. Other works also use graph networks as a way to model knowledge~\\cite{fensel2020introduction}, data provenance~\\cite{simmhan2005survey, da2009towards}, insight provenance~\\cite{gomez2014insight}, and analytic provenance~\\cite{xu2015analytic, madanagopal2019analytic}. Each of these tools and techniques aims to collect some part of the user's knowledge discovery process and model it into some structure.\n\nAmong the literature, graph networks, or more specifically \\textit{Knowledge Graphs (KGs)}~\\cite{fensel2020introduction, guan2022event}, are uniquely positioned to model the user's knowledge discovery process due to their specific applicability in collecting and analyzing two kinds of data: temporally-based event data~\\cite{guan2022event} and relationship-centric data~\\cite{auer2007dbpedia}. As VA is modeled based on events~\\cite{sacha2014knowledge, federico2017role} and our goal is to find and analyze the \\textit{relationship} between VA events~\\cite{fujiwara2018concise}, we decided to use KGs as one of the cornerstones of our approach, \\textit{Knowledge-Decks}.\n\nIn order to perform provenance, however, one must first collect information from the user. Existing works have collected changes in datasets~\\cite{da2009towards}, updates in visualizations~\\cite{battle2019characterizing, xu2020survey}, and other similar events in order to recreate and recall user behavior. On the other hand, tracking the user's \\textit{Tacit Knowledge} as defined by Federico et al.~\\cite{federico2017role} is either done by manual feedback systems~\\cite{bernard2017comparing, mathisen2019insideinsights}, by manual annotations over visualizations~\\cite{soares2019vista}, or by inference methods that attempt to extract users' insights by recording the users' screens, video or logs and extracting from them interactivity patterns as a post-mortem task~\\cite{battle2019characterizing, guo2015case}. Among these VA systems, InsideInsights~\\cite{mathisen2019insideinsights} is an approach to recording insights through annotations during the user's analytical process. It has demonstrated that collecting user annotations is a legitimate way to extract and store user insights. Similar to InsideInsights, Knowledge-Decks also requests user annotation through text inputs, but different from other works, Knowledge-Decks calculates the similarity of the text input to inputs from other users in order to coalesce the collective knowledge discovery into a single structure.\n\nAfter collecting data from users' knowledge discovery process, we must impose a structure on it. 
The structural design through knowledge ontologies has been a central goal of KGs~\\cite{fensel2020introduction}. KGs~\\cite{chen2020review, guan2022event} are a widely used technique to store and structure knowledge and have proven to be a successful instrument to interpret, store and query explicit knowledge, such as the information within Wikipedia~\\cite{auer2007dbpedia}. KG methodology posits that the relationships between nodes of a graph network, and not just the nodes themselves, can represent knowledge~\\cite{guan2022event}. Among similar structures, Event KGs~\\cite{guan2022event} expand on this with the concept of event-based data, where paths of the graph's relationships depict sequences of events over time.\n\nIn order to store and analyze a KG, graph databases (e.g. neo4j~\\cite{noauthororeditorneo4j}) have emerged, exhibiting a wide array of techniques. Such specialized databases are needed since KGs do not follow the usual transactional, row-based structure of typical relational databases~\\cite{cashman2020cava}. Applications of KGs are also part of the overall graph network research, which means that graph network techniques such as Graph Neural Networks (GNNs)~\\cite{jin2020gnnvis}, graph visualizations~\\cite{chang2016appgrouper, he2019aloha} and graph operations~\\cite{ilievski2020kgtk}, such as page rank and traveling salesman, can be applied to KGs.\n\nAmong works that utilize KGs to model and store the user's knowledge discovery process, VAKG~\\cite{christino2022data} connects the VA knowledge modeling theory with KGs to structure and store knowledge provenance. Indeed, as is later discussed, our solution is built around the VAKG methodology. We have modeled two VA tools, one of which is described here, in light of existing knowledge models. From this methodology, we developed a KG ontology (or schema) to store users' knowledge discovery process. With VAKG we can identify what should be collected from the user. However, VAKG by itself neither aids in performing data collection, nor in defining how to analyze the resulting KG or how to extract event paths from it (e.g. as slide decks).\n\n\nCertain limitations from these works show that tracking automatically-generated insights~\\cite{spinner2019explainer} or accounting for automatic computer processes~\\cite{federico2017role} is still a challenging and largely unsolved problem. Additionally, since few existing frameworks or systems attempt to recall and retell a user's knowledge discovery process, no existing work solves how to link user-related and computer-related provenance for retelling the collective knowledge discovery of all users as far as the authors are aware. To solve that, we have opted to follow the argumentation of Xu et al.~\\cite{xu2015analytic} and focus on manual means to collect user insights and intentions while automatically collecting computer processes~\\cite{federico2017role, battle2019characterizing}, such as user interaction. We also automatically link all user-related events to their respective computer-related events as described by VAKG~\\cite{christino2022data}.\n\n\nOutside the realm of provenance, other researchers have attempted to analyze KGs to extract information. CAVA~\\cite{cashman2020cava}, for instance, enables the exploration and analysis of KGs through visual interaction. KG4Vis~\\cite{li2021kg4vis}, on the other hand, uses the advantages of the KG structure to provide visualization recommendations. 
ExeKG~\\cite{zheng2022exekg} uses KGs to record paths of execution within data mining and machine learning pipelines. Indeed, there is an ever-increasing number of works using KGs in VA, all of which agree that KGs are suitable for knowledge analysis and extraction. Our novelty among such works is the usage of KGs to structure the knowledge discovery process as paths within the KG, which are then reformatted as slide decks for presentation purposes.\n\nHardly any research has attempted to model user behavior or the associated knowledge discovery as we aim to do. Info-graphs are one of the most used formats to encode storytelling visually~\\cite{knaflic2015storytelling}. Indeed, Zhu et al.~\\cite{zhu2020survey} detail how info-graphs can be automatically generated through machine learning~\\cite{chen2019towards} or pre-defined rules~\\cite{gao2014newsviews}. When applied to storytelling, displaying insights as visual or textual annotations on top of visualizations has been the aim of several works~\\cite{gao2014newsviews, chen2020augmenting, chen2019towards}. Although their results are very relevant for storytelling, they only tell what insights were found, not the process of how a user might have reached them. Instead, slide decks are better suited to narrate the events which lead to the insight~\\cite{schoeneborn2013pervasive}.\n\nRegarding slide decks, StoryFacets~\\cite{park2022storyfacets} is a system that displays visualizations both in dashboard and slide-deck formats. Although slide decks have well-researched limitations~\\cite{knaflic2015storytelling}, the authors of StoryFacets also argue how and why slide decks are advantageous when one wishes to narrate a sequence of events. Indeed, slide decks are yet to be dethroned as the best way to give presentations~\\cite{schoeneborn2013pervasive}. Of course, other works have succeeded in recalling and retelling the knowledge discovery process by other means~\\cite{xu2015analytic}. For instance, Ragan et al.~\\cite{ragan2015characterizing} list many tools that collect and structure user behavior and insights in a queryable format. Nevertheless, they have not proposed a means to extract slide decks from user behavior to narrate the knowledge discovery process to third parties.\n\n\n\\section{Methodology}\n\nThis section describes the goals that motivated \\textit{Knowledge-Decks}, its design, and its implementation.\n\n\\subsection{Research Question and Goals}\\label{sec:goals}\n\nIn this paper, our research question is: \\textit{\\textbf{how to automate the process of creating a draft slide deck presentation that tells the story of which and how insight(s) were achieved using a VA tool?}} In other words, how to automatically create linear storytelling based on VA's non-linear knowledge discovery process? To answer this, other sub-questions emerged, such as: what would cause a user to reach such an insight? Which questions have the users had to ask throughout their interaction with the tool to reach such an insight? Which other insights were found along the way? And how to automatically generate a draft slide deck presentation that can be used to show these answers to others? As discussed in Sec.~\\ref{sec:related}, current solutions do not solve this generically, in a way that the same process could be applied to different VA tools. 
A generalized solution that addresses these primary and sub-questions should not just retell a user's insights, but also create or propose new optimized knowledge discovery paths extracted from the experiences of many users.\n\nThis led us to develop an approach that attaches itself to existing VA tools to (1) collect users' knowledge discovery processes and, from it, (2) extract knowledge discovery paths to (3) retell the underlying story using a slide deck. Based on existing methods of modeling, collecting, and analyzing user behavior and knowledge discovery, we identified our goals as: \n\n\\squishlist\n\\item[G1:] Collect user's intentions and insights while using a VA tool;\n\\item[G2:] Collect the user behavior and assign which intentions\/insights were related to them;\n\\item[G3:] Provide a way to extract knowledge discovery paths as narrations of user's experience;\n\\item[G4:] Format the knowledge discovery paths as slide deck presentations;\n\\squishend\n\n\nTo extract knowledge discovery paths from a collection of user experiences, we followed existing works by utilizing \\textit{Knowledge Graphs (KGs)}~\\cite{guan2022event} to structure and store user intentions, behavior, and insights. We also utilized the methodology of VAKG~\\cite{christino2022data} to model the expected user experience based on the tools they would be using to derive what pieces of data should be stored within the KG given our goals. In order to apply VAKG, however, we first need to describe what kind of VA tools are targeted by our system.\n\nAfter analyzing existing VA tools, we decided to focus on a specific subset of tools that provides visualizations to better understand and explore the available data through exploratory data analysis (EDA). More specifically:\n\n\\squishlist\n\\item The tool must be web-based and be comprised of a set of visualizations that displays data but does not allow any modification or addition of new data;\n\\item The tool may also have means to filter the data through visual selection, aggregations, or other manual means;\n\\item The web-based tool must allow our system to recall any previously visited state through its URL;\n\\squishend\n\nWe can see from the tool's requirements that there are no domain-specific or data-type requirements. However, the tools should aim to use visualizations to allow users to explore and better understand the available data. Indeed, this common point provides us with the means by which we can apply the same VAKG modeling to the tools to extract and retell users' knowledge discovery paths of any such tool, promoting reproducibility and generalization of \\textit{Knowledge-Decks}.\n\n\n\\subsection{Modeling and Schema Design}\\label{sec:modeling}\n\n\nThe first step of VAKG~\\cite{christino2022data} is to formally define the expected knowledge model as a flow graph. Thus, we took the existing flow graph example from VAKG and removed the aspects of VA not present within the proposed target tools. For instance, one aspect of the general model not present in the target tools is the ability for users to modify the data. The resulting model is presented in Fig.~\\ref{fig:flow}.\n\n\\begin{figure\n \\centering\n \\includegraphics[width=\\columnwidth]{pictures\/flow.png}\n \\caption{Simplified knowledge-assisted VA model. In red from $V$ to $E$ is the \\textbf{insightful flow} where the user has a new insight, while in green is the \\textbf{insight-less flow} when the user performs an action without any new insight. 
The two parallel red and green links between $E$ and $S$ represent the user's intent when a new specification, such as an interaction with some visualization, is made. This causes the execution of some algorithm $A$ and\\\/or the generation of a new visualization $V$. Nodes in orange identify which data will be tracked in our system. }\n \\label{fig:flow}\n\\end{figure} \n\nThis flow graph shows four connections between the \\textsc{Machine} and \\textsc{Human} sides, two in each direction. Within the \\textsc{Human} side, we see the possibility that the user gained new tacit knowledge due to a new perception ($P$) and interacted with the tool ($E$). That said, the user may also interact with the tool without any new perception, usually when multiple interactions are done without any new knowledge being gained. On the \\textsc{Machine} side, we see how explicit knowledge $K^\\epsilon$, data $D$, and schema $S$ generate new visualizations $V$ for the user. As further detailed by others~\\cite{federico2017role, christino2022data}, this flow graph indeed models the knowledge flow while users are using a VA tool. Next, VAKG requires the creation of the Set-Theory equations. From these equations, we can formally define the schema of our KG and its property maps, i.e., the expected content of each of the KG's node types. The equations are listed below:\n\n\\begin{align}\nU^h_t =& \\{P_t, E_t\\}, & U^h_t \\equiv U_t &\\cap H_t \\\\\nU^m_t =& \\{V_t, A_t\\}, & U^m_t \\equiv U_t &\\cap M_t \\\\\nC^h_{t+1} =& \\{K^T_{t+1}\\} \\equiv U^h_t(C^m_t), & C^h_t \\equiv C_t &\\cap H_t \\\\\nC^m_{t+1} =& \\{D_{t+1}, S_{t+1}, K^{\\epsilon}_{t+1}, I_{t+1}\\} \\equiv& \\\\\n& U^m_t(C^m_t) + U^h_t(C^h_t), & C^m_t \\equiv C_t &\\cap M_t \\nonumber\n\\label{eq:VAKG}\n\\end{align}\n\nVAKG ontology refers to classes of events and their relationships within a KG. By following the same definition of~\\cite{christino2022data}, we have four classes: the \\textsc{Human} temporal sequence, which holds the user's temporally-dependent events, such as insights, perceptions, and explorations; \\textsc{Human} state-space, which holds the state-space of all user events; \\textsc{Machine} temporal sequence, which holds all computer-side events like changes to the dataset, filters applied to the data, automated processes, and changes to the visualization; and \\textsc{Machine} state-space, which holds the state-space of all computer-side events. Note that the temporal aspect is the difference between a ``temporal sequence'' and a ``state space''. If many users perform the same exact interaction, each user's interaction will create a new ``temporal sequence'' event. However, only one state-space event would be created. Our model also uses the same $9$ relationships of VAKG~\\cite{christino2022data}.\n\nThe property maps define the content of each event class and relationship. For that, we follow a design based on the equations above, where the relationships hold no content, and each event class holds information collected from the tool, as will be discussed later. Note that this design choice aims to be simple to explain and reproduce but could be constructed differently depending on other goals or requirements. The full ontology schematic of our KG is included as supplemental material.\n\nFrom the description of the VA tools being targeted, the tool consists of one or multiple interlinked visualizations of the raw data; the sketch below summarizes the four classes and their property maps in code. 
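For illustration only (the field names are ours, the nine relationship types are omitted, and the drawn annotations are detailed in Sec.~\ref{sec:system}), the four classes might be rendered as Python dataclasses:

\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class MachineStateSpace:        # one node per distinct tool state
    url: str                    # the URL recalls filters, selections, etc.

@dataclass
class MachineTemporalSequence:  # one node per interaction event
    user: str                   # anonymous user id
    timestamp: float
    url: str

@dataclass
class HumanStateSpace:          # one node per distinct intention/insight
    keywords: List[str]         # lemmas standing in for the typed text

@dataclass
class HumanTemporalSequence:    # one node per intention/insight event
    user: str
    timestamp: float
    text: str                   # the sentence typed by the user
    kind: str                   # "intention" or "insight"
    drawings: List[str] = field(default_factory=list)  # annotations
\end{verbatim}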
As per the requirement, the only interaction events available in the tool are the filters, selections, and aggregations being applied at any given time and the state of each visualization, such as map position, zoom, or the selection of which part of the data is to be visualized. Therefore, the data $D$ and explicit knowledge $K^\\epsilon$ in Fig.~\\ref{fig:flow} do not change over time. Only filtering ($A$), aggregation operations ($A$), and selections ($S$) are expected. As we discussed previously, new \\textsc{Machine} temporal sequence events are created at every new user interaction. Therefore we consider the above-described information as our KG's property map definition, which also includes a timestamp and an anonymous id of who made any given interaction.\n\nOn the \\textsc{Human} side, the property map is required to contain the events happening on or within the user while using the VA tools. In order to keep the design simple while also being able to address the research question of Sec.~\\ref{sec:goals}, we deviate from related works which capture user interactivity and only process it into a specific format after the fact~\\cite{battle2019characterizing, guo2015case}. In other words, we do not use recording software such as cameras or screen capture to populate said property map. Instead, we use a more active method to extract user events: by requesting user inputs~\\cite{mathisen2019insideinsights, bernard2017comparing}. Namely, by pairing the flow of Fig.~\\ref{fig:flow} and the research question of Sec.~\\ref{sec:goals}, we identified two main user events to track: user intentions and user insights. In short, user intentions relate to anything the user wishes to do, which is represented by all arrows from $E$ to $S$. User insights relate to any new information perceived by the user, which is represented by all arrow sequences from $I$ to $E$. These two events represent the possible property maps of the \\textsc{Human} state-space and \\textsc{Human} temporal sequence classes. Also, similar to before, the \\textsc{Human} temporal sequence property map has a timestamp of the event and an id of the user.\n\nFinally, after performing preemptive tests with the KG defined so far, we identified one issue: when two users were to type the same insight or intention, it was very uncommon for the text to match exactly. In order to better allow for matching texts of similar intentions or insights, we modified the property map of the \\textsc{Human} state-space class. Instead of holding the text inputted by the user, the property map holds a set of keywords that represents said text. The exact process of calculating the keywords, how all property maps fit within a KG, and how the user interaction can be collected and stored for analysis will be discussed in the next section.\n\n\\subsection{Knowledge-Decks}\n\\label{sec:system}\n\nWith the tools modeled following VAKG, we now discuss how we implemented our approach and attached it to the VA tools. Our implementation of the VAKG method has two components: a Python web server with a graph database for storage and a library which tools can use to send the expected events to the web server.\n\n\n\\noindent\\textbf{Knowledge Graph Database.} First, to collect and store the VAKG data, our implementation must receive and store events in a database while following the design of Sec.~\\ref{sec:modeling}. We chose Python for its wide usage within the community and Neo4J~\\cite{noauthororeditorneo4j} as the graph database. A minimal sketch of such an event-collection endpoint is shown below. 
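The sketch assumes Flask and the official \textit{neo4j} Python driver; the endpoint path, node labels, and the \texttt{INSTANCE\_OF} relationship are our stand-ins for illustration, not the actual nine-relationship schema:

\begin{verbatim}
from flask import Flask, request, jsonify
from neo4j import GraphDatabase

app = Flask(__name__)
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

@app.route("/machine-event", methods=["POST"])
def machine_event():
    # One interaction: MERGE the shared state-space node (keyed by the
    # tool's URL) and CREATE a fresh temporal-sequence event linked to it.
    e = request.get_json()  # {"user": ..., "timestamp": ..., "url": ...}
    with driver.session() as session:
        session.run(
            "MERGE (s:MachineStateSpace {url: $url}) "
            "CREATE (t:MachineTemporalSequence "
            "        {user: $user, timestamp: $timestamp, url: $url}) "
            "CREATE (t)-[:INSTANCE_OF]->(s)",
            **e)
    return jsonify(status="ok")
\end{verbatim}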
We use an API-based design where our code creates a server to which any tool can send data through the web~\cite{leonardo_christino_2021_5750019}. With this, any number of VA tools following the same knowledge model can connect and send their events to the same VAKG implementation. It is important to note that the implementation provided is specific to the modeling process above but can be modified to satisfy other models.


\noindent\textbf{Collecting \textsc{Machine} Events.} Next, to populate the machine-side events, we are required to keep track of the VA tool's state at all times. For this, we developed a library through which the VA tools can report any new events. The VA tools being used are required to be web-based tools implemented in JavaScript. Hence, the library that collects and sends all events from the VA tool to the web server is implemented as a JavaScript library. The library is also publicly available~\cite{leonardo_christino_2021_5750019}.

In short, the library implements two methods with which the VA tool informs the web server of any new interaction or change within itself. This requires the tool to report its current state as a hashable dictionary (or hashmap), which is transmitted to the web server as JSON and then stored within our KG. This information will also be used later to recall the state of the VA tool during the analysis of the resulting KG. Since the most common manner to recall the state of a website is through its URL, we extract the current state of the VA tool in this way. Therefore, this implementation leaves it to the VA tool to control what is included in VAKG's property maps.

\noindent\textbf{Collecting \textsc{Human} Events.} Similar to the \textsc{Machine} events, the library also provides ways for the VA tool to send \textsc{Human} events to the web server. One core difference, however, is that we expect users to type their insights or intentions as a sentence. We extract keywords from the text, and the resulting combination of keywords plus text is sent to the web server. In order to generate such keywords, we use the natural language processing library \textit{Spacy}~\cite{vasiliev2020natural} to retrieve the list of lemmas from a given natural language text.

From preliminary tests, we found that a more reliable way to compare previous texts (e.g., insights or intentions) with the new text being typed by a user is to request confirmation from the user. We also noticed that short, concise texts are more reliable than longer ones. Therefore, we limited the number of characters allowed per text to $75$. Once the text is typed, we gather similar texts and display them to the user, asking whether any existing text is similar to theirs. To calculate this text similarity, we use the text's keywords (lemmas) and scan the database for similar keywords among the \textsc{Human} state-space nodes. The similarity measure is a cosine similarity calculated from a word2vec representation of the keyword list and is also implemented by \textit{Spacy}~\cite{vasiliev2020natural}. If there is any match with a score greater than $0.5$, we collect the texts that generated these keywords, which can be accessed from the neighboring \textsc{Human} temporal sequence nodes, and display up to $5$ of these texts to the user, ranked by similarity score. The user can then say whether their intention or insight is equivalent to any of the existing ones suggested or whether it is new.
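A minimal sketch of this matching step is shown below. It assumes spaCy's \texttt{en\_core\_web\_md} model (which ships with word vectors); the $0.5$ threshold and the top-$5$ ranking follow the description above, while the in-memory \texttt{stored} dictionary stands in for the database scan.

\begin{verbatim}
# Minimal sketch (assumption: spaCy model en_core_web_md).
import spacy

nlp = spacy.load("en_core_web_md")  # includes word vectors

def keywords(text):
    # Keep content-word lemmas; drop stop words/punctuation.
    doc = nlp(text)
    return [t.lemma_ for t in doc
            if not (t.is_stop or t.is_punct)]

def similar_texts(new_text, stored):
    # stored: dict mapping previous texts to their keywords.
    # Doc.similarity is the cosine of averaged word vectors.
    new_doc = nlp(" ".join(keywords(new_text)))
    scored = [(new_doc.similarity(nlp(" ".join(kws))), txt)
              for txt, kws in stored.items()]
    # Keep matches above the 0.5 threshold, best first,
    # and at most five of them.
    matches = [(s, t) for s, t in scored if s > 0.5]
    return sorted(matches, reverse=True)[:5]
\end{verbatim}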
This user-feedback step provided a reasonably reliable way for our system to better match new intentions or insights with previous ones compared to purely automatic processes.

Finally, \textit{Knowledge-Decks} asks whether an existing element in the VA tool, such as a visualization, text, or color, caused the user's intention or insight. In this step, the user is given the ability to draw circles or arrows on the screen. Users can also say that nothing in the interface caused the new intention or insight. The drawn elements and the current URL of the website, similar to the \textsc{Machine} events, are also saved as part of the user's \textsc{Human} temporal sequence to recall the elements of interest.

With this, the flow of a new insight or intention from the perspective of the user is as follows: the user types a new entry within a text field of the VA tool and selects whether it is a new intention or a new insight; the VA tool searches for and displays similar intentions or insights from previous users and asks whether the entry is new or equivalent to a previous one; and the user is asked to indicate what element in the VA tool, if any, caused this new intention or insight. After this flow, the KG will store the user's typed text, keywords, the website URL, the drawn elements, and whether this was a new intention or insight by following the structure defined in Sec.~\ref{sec:modeling}.

\noindent\textbf{Knowledge Discovery Recall as Narrative Slide Decks.}
Once the VA tools and our approach can communicate the required data and structure it as a VAKG knowledge graph, we can analyze the extracted data. For that, \textit{Knowledge-Decks} accepts Neo4J's Cypher queries~\cite{francis2018cypher} as the means to extract information from its KG.

Cypher queries are one of the main ways to query for raw or aggregated data from graph databases~\cite{francis2018cypher}. Their format follows strict specifications, where the nodes and/or relationships of interest are defined within the query to specify either aggregation values or graph paths of interest. For instance, we can verify how many users have used the system by aggregating all \textsc{Computer} temporal sequence nodes with different users and retrieving their count. We can also extract the path within the graph that a particular user took from a particular intention to a specific insight, which returns an ordered list of the \textsc{Human} temporal sequence nodes the user visited from beginning to end. This flexibility provides countless ways to analyze and extract data from the KG. However, we will focus on one goal: extracting a narration of the user's knowledge discovery process as a slide deck.

Given a KG collected from several users utilizing a VA tool, we aim to construct a set of Cypher queries that narrate the tool's knowledge discovery process. The first question is: how do we extract the insights that users reached starting from a known intention?
Given an $\\$intention$, the cypher query below solves this question, discovering all insights \\textit{the same user had} which follows the intention in question.\n\\begin{verbatim}\nMATCH ((n:H_UPDATE {text:$intention, label:'intention'})\n -[:FOLLOWS_INSIGHT*1..20]-(n1:H_UPDATE {label:'insight'})) \nRETURN *\n\\end{verbatim}\n\nOptional parameters could be included, such as $RETURN * LIMIT 1$ to only return the \\textit{first} intention the user had or $RETURN * ORDER BY n1.created DESC LIMIT 1$ to only return the \\textit{last} intention the user had. Now, in order to find \\textit{all insights} from \\textit{all users} which originated in a given intention, we use the query below. The result of running this query is exemplified in Fig.~\\ref{fig:findinsights}.\n\n\\begin{verbatim}\nMATCH intention_path=\n ((n:H_UPDATE {text:$intention, label:'intention'})\n -[:LEADS_TO]->(i)-[:INSIGHT*0..20]-(j)-[:LEADS_TO]\n -(n1:H_UPDATE {label:'insight'}))\nreturn nodes(intention_path)\n\\end{verbatim}\n\\label{cy:intentionpath}\n\n\\begin{figure\n \\centering\n \\includegraphics[width=\\columnwidth]{pictures\/findinsights.png}\n \\includegraphics[width=0.3\\columnwidth]{pictures\/allinsights\/Slide1.png}\n \\includegraphics[width=0.3\\columnwidth]{pictures\/allinsights\/Slide2.png}\n \\includegraphics[width=0.3\\columnwidth]{pictures\/allinsights\/Slide3.png}\n \\includegraphics[width=0.4\\columnwidth]{pictures\/allinsights\/Slide4.png}\n \\includegraphics[width=0.4\\columnwidth]{pictures\/allinsights\/Slide5.png}\n \\caption{Graph visualization from Neo4j~\\cite{noauthororeditorneo4j} when collecting all insights from all users given a starting intention. In this example, four insights were found from said intention, two of which were from the same user who wrote the intention (two to the left) and two from other users (two to the right). The slide deck generated on the bottom describes the four insights found given the original intention. }\n \\label{fig:findinsights}\n\\end{figure} \n\n\nSo far, we have not collected nor identified the user's behavior between intention and insight. Following VAKG's structure, we can discover the user's behavior between any pair of intention ($\\$intention$) and insight ($\\$insight$) by finding the user's behavior path ($behavior\\_path$). The $behavior\\_path$ output of the query below describes the complete path starting from a user's insight (\\textsc{Human} temporal sequence), into an insight space state (\\textsc{Human} space state), and runs through all \\textsc{Computer} space states until the path can connect to the intention which started it all, as shown in Fig.~\\ref{fig:slidedecksampleflow}. Note that within this query, the $intention\\_path$ retrieves the \\textit{shortest path} of interaction events from intention to insight calculated from all users' paths combined. 
Therefore, this path may be, and usually is, shorter than any of the individual users' paths.

\begin{verbatim}
MATCH intention_path=
 ((n:H_UPDATE {text:$insight, label:'insight'})
 -[:LEADS_TO]-(i)<-[:INSIGHT*0..20]-(j)-[:LEADS_TO]
 -(n1:H_UPDATE {text:$intention, label:'intention'}))
MATCH behavior_path=
 ((n1)-[:LEADS_TO]-()-[:FEEDBACK]-(c)-[:UPDATE*0..20]
 -(m:C_STATE)-[:FEEDBACK]-()-[:LEADS_TO]-(n))
WITH nodes(behavior_path) AS px 
 ORDER BY length(behavior_path) LIMIT 1
UNWIND px AS mainPath
OPTIONAL MATCH (mainPath)-[:LEADS_TO]-(interactions:C_UPDATE)
RETURN mainPath, interactions
\end{verbatim}
\label{cy:behaviorpath}




\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{pictures/sampleflow.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide1.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide2.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide3.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide4.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide5.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide6.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide7.png}
 \includegraphics[width=0.22\columnwidth]{pictures/educpptpic/4/Slide8.png}
 \caption{Above is a portion of the KG extracted from \textit{Knowledge-Decks} when generating a slide deck, which contains all interactions done in the shortest path between an intention and an insight. Visualization created with the Neo4J browser~\cite{noauthororeditorneo4j}. The path highlighted in black in the graph is used to generate the slide deck below, which is to be read from left to right and top to bottom. }
 \label{fig:slidedecksampleflow}
\end{figure}


Other Cypher queries were also developed, such as: which insight was the one \textit{most found} among different users, what is the \textit{longest} acyclic path recorded between an intention and a given insight, and what are \textit{all intentions} that directly culminated in a given insight (with their paths). These and other queries are available in a public repository~\cite{leonardo_christino_2021_5750019}. \textit{Knowledge-Decks} also allows custom Cypher queries to be run, providing extensive flexibility for other use cases.

With the collected paths of each of these queries, we can retrace the user's behavior using the information from the \textsc{Computer} temporal sequence nodes. For this, we sample one \textsc{Computer} temporal sequence node (orange nodes in Fig.~\ref{fig:slidedecksampleflow}) to obtain the actual user's interactions, including URLs, to recall each step of the process. The line with ``OPTIONAL MATCH'' in the query above collects the $interactions$ nodes for this purpose. Finally, we obtain the shapes drawn by the user from the final insight node (red ``insight'' nodes in Fig.~\ref{fig:slidedecksampleflow}).

To build a slide deck with the extracted data, \textit{Knowledge-Decks} navigates through the path, generating one slide per node visited, as sketched below. For instance, the slide deck from the graph in Fig.~\ref{fig:slidedecksampleflow} would start with a slide stating the user's intention, several slides with screenshots of the website, and a final slide with the insight reached.
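A minimal sketch of this generation step is given below, assuming \textit{Selenium} and python-pptx as in our implementation; the node property names (\texttt{text}, \texttt{event}, \texttt{url}) are simplified placeholders rather than our exact schema.

\begin{verbatim}
# Minimal sketch (assumptions: Selenium with a Chrome
# driver, python-pptx; simplified node properties).
from pptx import Presentation
from pptx.util import Inches
from selenium import webdriver

def deck_from_path(path_nodes, out_file="story.pptx"):
    browser = webdriver.Chrome()
    prs = Presentation()
    blank = prs.slide_layouts[6]  # blank slide layout
    for node in path_nodes:  # e.g. mainPath of the query
        slide = prs.slides.add_slide(blank)
        box = slide.shapes.add_textbox(
            Inches(0.5), Inches(0.2), Inches(9), Inches(1))
        # Intention/insight nodes carry text; interaction
        # nodes carry an event name.
        box.text_frame.text = (node.get("text")
                               or node.get("event", ""))
        if node.get("url"):  # recall the tool's state
            browser.get(node["url"])
            browser.save_screenshot("state.png")
            slide.shapes.add_picture(
                "state.png", Inches(0.5), Inches(1.3),
                width=Inches(9))
    browser.quit()
    prs.save(out_file)
\end{verbatim}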
The slide deck of the graph in Fig.~\ref{fig:findinsights}, on the other hand, would have one slide per insight. Each slide can optionally include any collected information, such as the insight/intention text, text keywords, event name, URL, etc. For exemplification purposes, we have only included the insight or intention text and/or event name and a screenshot of the VA tool at that specific point in time.

To generate screenshots of the VA tools at a certain state, \textit{Knowledge-Decks} uses \textit{Selenium}~\cite{gojare2015analysis} to visit and take screenshots of the website and python-pptx~\cite{pptxpython} to generate the slides in PowerPoint format. Finally, any shapes drawn by users, such as circles or arrows, are added to their respective slides as PowerPoint shapes. The two slide decks generated from the graphs of Figs.~\ref{fig:findinsights} and~\ref{fig:slidedecksampleflow} are shown below each respective figure.

The resulting slide deck can, of course, be edited by anyone and used in presentations or in any other way PowerPoint files are used to retell the knowledge discovery story of the specific Cypher query. Indeed, as part of our evaluation, we found that once the slide decks were generated and read by an expert, she was able to aggregate all insights throughout the slide deck and manually create a new \textit{conclusion slide}, which concludes the knowledge discovery story with a description of what new knowledge was found.




\section{Application and Evaluation}

To evaluate our system, we have attached it to two VA tools that fit our pre-determined requirements (see Sec.~\ref{sec:goals}), one of which is discussed here. \textit{Engage Nova Scotia}~\cite{graham} is an independent not-for-profit organization that started in June 2012. Its Quality of Life initiative, born in 2017, led to a survey in 2019 with 230 questions about the quality of life and well-being of residents across the province. Almost 13,000 people responded. From this data, the well-being mapping tool (WMT)~\cite{christino} was built to allow the population to access and explore the survey results. In their own words: the tool allows for visualization of the entire survey through maps and charts, which has already led to many actionable insights by the population, companies, and government entities. We can see that WMT matches our requirements due to its use of visualizations to better understand and explore the available data.

\subsection{Configuring VA tools for data collection}

In order to attach \textit{Knowledge-Decks} to WMT, we first need to define what data we are required to harness from the tool and its users to populate our KG. From Sec.~\ref{sec:modeling}, we see that in Fig.~\ref{fig:flow} there are five orange shapes: insight $P$, intention $E$, specification $S$, visualization $V$, and analysis $A$. We also see that $A$, $S$, and $V$ are \textsc{Machine}-related while $P$ and $E$ are \textsc{Human}-related (Fig.~\ref{fig:flow}). By following the definitions of these taxonomies~\cite{christino2022data, federico2017role}, we defined what data was to be collected from each of the tools in Tab.~\ref{tab:propertymapdata}.

\begin{table}
\caption{Data stored in the Well-being Mapping Tool (WMT) KG nodes' contents (property maps).
}\n\\begin{tabular}{p{16mm} | p{60mm}} \n\n \\textbf{Node Type} & \\textbf{Data collected from the WMT}\\\\ [0.5ex] \n \\hline\\hline\n \\textsc{Human} temporal sequence & label (insight $P$ or intention $E$), created time, URL, screen size, text, keywords, shapes drawn, user id, and analysis id \\\\ \n \\hline\n \\textsc{Human} state-space & label (insight $P$ or intention $E$), created time, last updated time, and keywords time \\\\\n \\hline\n \\textsc{Computer} state-space & label (specification $S$, visualization $V$ or analysis $A$), created time, last updated time, the status of the hierarchical structure to the left, bar chart's visualization schema (stacked, grouped, maximized, etc.), map position, map zoom, map areas selected, math operation used, and question visualized in map \\\\\n \\hline\n \\textsc{Computer} temporal sequence & event name, created time, URL, user id, analysis id, and all the same data from the related Computer state-space \\\\\n\n\\end{tabular}\n\\end{table}\n\n\\subsection{Evaluation through User Interaction}\n\n\n\nTo collect the required information, we asked $9$ users to perform pre-defined tasks with the tool to collect data and build the KGs. In short, after the participants were shown an introductory video, the survey asked participants to fill out a demographics questionnaire with $7$ questions, perform $5$ tasks, and fill out a set of Likert-style questions related to the participants' experience. The complete questionnaire can be found as supplemental material. The tasks were used to provide a common starting point and a common goal to all participants. During the tasks, participants were asked to write any intentions and insights they had, being part of or not related to the task at hand. Indeed, all tasks were exploratory in nature, and participants were asked and encouraged to freely roam the tool if they so wished. Among the participants, there were three experts on WMT, two non-experts, and two of unknown expertise level.\n\n\n\n\n\n\nThe data was fed to our system, and the resulting knowledge graph was vastly complex, with more than 900 nodes. Nevertheless, a preliminary analysis of the results showed that users had a total of 21 unique intentions and 26 unique insights. On the computer side, there were a total of 252 events or 143 unique events.\nWe also looked through all insights related to Q1 (first task) and found that the $9$ participants reached insights with an average of 20.9 interactions (machine-side events), and on average, 1.44 insights were reached by each participant within Q1. Similarly, across all tasks, users had an average of 1.13 intentions and 1.29 insights per task. Overall, users had more insights than intentions. Also, on average, each intention took 6.11 interactions (machine-side events) to reach some insight.\n\n\\subsection{Slide Deck Generation}\n\nWe generated several slide decks from the collected data. Each slide deck type aims to tell a story from the collective experience of the users. As discussed in Sec.~\\ref{sec:system}, our approach has different pre-defined queries, each of which tells a specific type of story. 
After generating the slide decks below, we discussed with an expert, who is proficient with the tool and its domain, whether and how she would use the generated slide decks and asked her whether the slide decks provided new insights she was unaware of before the experiment.

\noindent\textbf{Collection of Insights given an Intent}: By using the $intention\_path$ query of Sec.~\ref{cy:intentionpath} to collect all insights users had given a known intention, we generated a total of 15 unique slide decks. One such slide deck was generated from the intention ``what area has concerns regarding access to education'' and had four related insights, as seen in Fig.~\ref{fig:findinsights}. The resulting slide deck, shown at the bottom of Fig.~\ref{fig:findinsights}, discloses a list of four independent insights from three different users who had the same intention. From the generated slide decks, we noticed that some displayed insights that are not directly related to the intention of origin. 
In part, this is expected since, naturally, users cannot be expected to only have insights that specifically answer their intention. Since analyzing the text content of insights and intentions and their correlation was not a goal of \textit{Knowledge-Decks}, we opted to focus on telling the users' stories until their last insight. For this, another pre-defined query generates the slide deck of the path taken from the intention in question to the \textbf{last} or furthermost insight within the KG. This new slide deck may include other insights the users had during their analysis, but if the user found an answer to their intention, this slide deck will tell the story of how the insight was reached, including all other insights along the way. As one can expect, these slide decks are larger, but they recall much more of the users' experiences.

\noindent\textbf{Behavior between intention and insight}: To generate a slide deck of how the VA tool was used to reach a given insight, we utilize the $behavior\_path$ query of Sec.~\ref{cy:behaviorpath}. Here, an important distinction has to be made: the story generated by these slide decks does not necessarily follow any single user's behavior but instead uses the collection of all behaviors and extracts the shortest path from them. That is, in these slide decks, each slide may have originated from a different user. However, the deck ultimately follows the optimal path of behavior (interaction events) between the intention and the insight. That said, a total of $61$ possible paths from intention to insight were found, each of which has \textit{one shortest behavior path} and many larger ones. For instance, the slide deck of Fig.~\ref{fig:slidedecksampleflow} was generated from the $behavior\_path$ query considering ``B0C lack opportunities to take formal education'' as the insight of interest and ``Find the concern with educational opportunities in the FSAs'' as the intention of interest. Note that the insight originally came from a different user than the one who wrote the intention.

\noindent\textbf{Behavior to reach an insight}: We may also ask \textit{Knowledge-Decks} to automatically detect which intention led to some insight of interest. We extracted three slide decks from three different user insights from Q1, as shown in Fig.~\ref{fig:slidedeckeduc}. From top to bottom, the slides follow the user's intention (first slide), behavior, and insight (last slide).
We see that the left and center slide decks were detected to have the same \textit{closest} intention within the graph. Also, notice that although the left and right paths had $6$ interaction events each and the center path $5$, all slide decks show, when comparing the screenshots of each slide, that \textit{the first interaction event was the same}, diverging from the second interaction onward. On that note, all interaction events of the left and right slide decks were the same, only diverging in the last slide, which displays the insight itself. Of course, a similar Cypher query can also be used to generate the slide deck of a \textit{single} user's insights and their own intentions or to automatically find the closest \textit{insight} given an intention of interest.


Several other types of slide decks were extracted, but due to space constraints, they were omitted. Also, since further analyses or slide decks could be done to compare the tools, \textit{Knowledge-Decks} provides a direct Cypher API to perform custom analyses and generate custom slide decks.




\noindent\textbf{Expert opinion and evaluation}: We showed the slide decks described so far to an expert with the tool who works at Engage Nova Scotia and has extensive experience with the objective of WMT, including having created many presentations about the tool, its usage, and the insights collected from the tool. She was first shown the slide deck of all insights from an intention (Fig.~\ref{fig:findinsights}) and was given an explanation of how the slide deck was generated. We asked for her opinion regarding how well-suited the slide deck was to be used as part of a presentation she may need to give. 

She showed significant interest in the slide deck. She said that a large amount of time would be saved by using \textit{Knowledge-Decks} to ask company employees to search for insights and, with the collected data, create a draft presentation containing all insights from the employees. She also noted that the overall slide deck layout is already very well suited to presenting a list of insights and praised the advantage of using PowerPoint as the export format since she can simply change the design of the whole slide deck with one button to fit the company's presentation template. She also provided several points of feedback: the image could be even larger; the terminology of ``insight'' and ``intention'' may not be easily understood by some employees; and the insight texts were not suited as-is for a presentation and would need editing to refactor them into more formal and complete prose. Additionally, she said that since the insights may not be directly related to the intention, editing the slide deck after the fact is required. She also noted that it would be ideal to have a prelude slide that explains how to read the slide decks and a conclusion slide summarizing the collection of insights to conclude the presentation. However, she agreed that these two slides are better suited to be written manually by herself or whoever will be giving the presentation.

She was also shown the interactivity slide deck (Fig.~\ref{fig:slidedecksampleflow}) and was once again impressed with its potential usage. She noted that these slides would be great for onboarding new employees to the company or for use in workshops or tutorials in technical-focused meetings.
We then explained that the path shown by the slides is the optimal path between insight and intention, to which she noted that it is great that \textit{Knowledge-Decks} replaces the manual labor involved in a pre-meeting of stakeholders to discuss and coalesce experiences in order to optimize the slides into their most efficient story. She said that, with \textit{Knowledge-Decks}, this pre-meeting would only need to focus on defining the slide deck style, such as font and background type/color. However, she also noted that the current slide descriptions are only well understood by experienced users, such as herself. Ideally, \textit{Knowledge-Decks} should consider the tool's context and usage to generate a better-tailored description of each slide with less technical language. Nevertheless, she also noted that this slide deck could be used to describe how any of the insights of the previous slide deck was reached; in other words, this slide deck would work well as a deep dive into a slide deck that lists all insights which originated from a given intention, such as the one described earlier. Overall, she said that \textit{Knowledge-Decks} would indeed be of significant help in creating drafts of presentations, which would then be modified and edited into their final form within PowerPoint, including the removal of any potentially redundant or unwanted slides.



\begin{figure}
 \centering
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide1.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide1.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide1.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide2.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide2.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide2.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide3.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide3.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide3.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide4.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide4.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide4.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide5.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide5.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide5.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide6.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide6.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide6.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide7.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide7.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide7.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/1/Slide8.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/2/Slide8.png}
 \includegraphics[width=0.28\columnwidth]{pictures/educpptpic/3/Slide8.png}
 \caption{Three slide decks extracted from the insights of WMT task 1.
}\n \\vspace{-4mm}\n \\label{fig:slidedeckeduc}\n\\end{figure} \n\nThe evaluation of \\textit{Knowledge-Decks} with a second VA tool called \\textit{Bay-Of-Fundy Marine Life Tool}, which allows exploration of Marine Life, is included as supplemental material.\n\n\\section{Discussions, Limitations and Future Work}\n\\label{sec:future}\n\nOur system has shown how to generate diverse knowledge discovery narratives from a KG. However, the modeling of our system is largely based on VAKG's methodology, which is unique in its applicability potential. This means we could not find another equivalent methodology to compare against. KGs have been very sparsely used for this kind of application, and no similar process would allow us to build a system based on VA's theoretical knowledge models. We propose \\textit{Knowledge-Decks}' modeling strategy with the hope that other modeling strategies will emerge to be compared against ours.\n\nOn the other hand, many works use graph networks and natural language processing. Although \\textit{Knowledge-Decks} has reached its goal, we recognize that more advanced processes could be applied. For instance, our keyword generation is done by lemma extraction but could also be performed by topic modeling~\\cite{dahir2021query}, KeyBERT~\\cite{keybertgithub}, or other machine learning models. Similarly, our KG could be used in many more potential graph network analyses, including advanced procedures of graph completion and recommendations. We judged it better to use simpler techniques in our work to allow for easier reproducibility and broader applicability of our approach. However, we intend to investigate the usage of more advanced algorithms like the ones listed in Sec.~\\ref{sec:related} when we model new systems for knowledge discovery storytelling in the future.\n\nOne of our approach's core complexity is the need to retool the existing VA tools with the connection to implementation. Although this decision was by design, we recognize that not all tools are so easily modifiable to be attached to external libraries like ours. Some related works have shown to be better than ours in that regard, but they usually require the complete development of a new tool or the need for external user monitoring. Future works may investigate other ways to collect user intention, insight, and behavior without any modification on the VA tool's part, such as through browser extensions or by recording and processing the video\/audio feed of users while they use tools and automatically processing them into a usable KG. On another note, our methodology can also be applicable to different use cases and goals, such as providing recommendations retelling previous users' experiences within the VA tool itself. Indeed, some participants provided us with valuable feedback on other potential uses of our methodology other than slide decks. Of course, these have their own goals and limitations different from ours, so we digress.\n\nAnother key issue when handling user data is the privacy concern, especially on the users' part, regarding how user data is being utilized. In our system and throughout our study, we maintained complete anonymity, but privacy and accountability is a concern usually raised by users and researchers alike~\\cite{xu2015analytic}. Although no participant raised this issue during our tests, we see the attempts of companies and research institutions to restrict or limit the collection of user data of any kind. 
Further evaluation is required before we can say that \textit{Knowledge-Decks} can be used in commercial scenarios. Nevertheless, \textit{Knowledge-Decks} does not explicitly attempt to solve this issue other than by collecting only anonymous information.

Finally, certain design choices may be of concern. For instance, using slide decks to narrate the user's knowledge discovery process has its issues~\cite{park2022storyfacets}, such as being seen as useful only for presentations to novice users. As noted by the expert, the slide decks may also contain slides that are unrelated to the rest when, for instance, a user had an insight unrelated to their intention. However, the ubiquity of PowerPoint slides for presentation purposes, including infographics and storytelling, must be considered. PowerPoint files are also easily editable, allowing the inclusion or exclusion of information, such as a summary slide or the removal of an unrelated insight slide, to be handled on a case-by-case basis. As StoryFacets~\cite{park2022storyfacets} notes, adding animations, filtering, and other features of PowerPoint or similar software would also potentially enhance the generated slide decks, but these modifications are already beyond our original goal. Also, the design used to implement our system was based on a specific type of VA tool, a limitation we recognize. Yet, we believe that many tools fit our requirements, therefore providing a good starting point for further research. \textit{Knowledge-Decks} also allows the use of custom queries to generate other types of slide decks. Therefore, although our approach can be considered limited in the type of tools it can be attached to and in how the slide decks are generated, it provides a reasonable basis for other works to expand upon.













\section{Conclusion}\label{sec:conclusion}

We have presented \textit{Knowledge-Decks}, a novel approach that generates slide decks of user knowledge discovery processes with VA applications and tools. \textit{Knowledge-Decks} collects user intentions, behavior, and insights during users' knowledge discovery sessions and automatically structures the data into a KG modeled after the VA tools. By executing specific pre-defined Cypher queries, \textit{Knowledge-Decks} can extract paths from the KG and format them as PowerPoint slide decks that tell a knowledge discovery story. \textit{Knowledge-Decks} was evaluated by being attached to two existing VA tools where users were asked to perform $5$ pre-defined exploratory tasks. By collecting user intentions, behavior (interactions), and insights, three main types of stories were told: what insights were reached given a particular intention of interest; which intentions and behavior led to a particular insight; and how insights were reached given a specific user intention. 
The slide decks of these stories were shown to experts, who validated their usefulness as a resource for investigating the users' stories or as a first draft of an actual presentation. 




\section{Other Tool}

\noindent\textbf{Bay-Of-Fundy Marine Life Tool (MLT)}: The \textit{Ocean Tracking Network} (OTN)~\cite{iverson2019ocean} is a global aquatic animal tracking, technology, data management, and partnership platform headquartered at Dalhousie University in Canada.
OTN and its partners are using electronic tags to track more than 200 keystone, commercially important, and endangered species worldwide. In order to better understand the existing data on striped bass and shark detections throughout the Bay of Fundy in Nova Scotia, researchers came together and developed a visual analytics tool where the data can be displayed and explored. In their own words: by displaying raw and aggregated data in different interlinked visualizations, domain experts could understand both the marine life and the limitations of their own sensors when collecting data for decision-making.

From the two tools' descriptions (above and in the actual paper), we can see that although they belong to completely different domains (social sciences and marine biology) and focus on different types of data (a multi-choice survey of QoL and marine life tag detections over time), both are similar in nature due to their aim of providing visualizations to better understand and explore the available data. Both are within the requirements to be used with our system. Here is the full property map table, including the two tools:

\begin{table}
\caption{Data stored in each tool's respective knowledge graph nodes' contents (property maps). }
\label{tab:propertymapdata}
\begin{tabular}{||p{12mm} | p{32mm} p{25mm} ||} 
 \hline
 Node Type & Well-being Tool & Marine Life Tool \\ [0.5ex] 
 \hline\hline
 Human temporal sequence & label (insight $P$ or intention $E$), created time, URL, screen size, text, keywords, shapes drawn, user id, and analysis id & same as Well-being tool \\ 
 \hline
 Human state-space & label (insight $P$ or intention $E$), created time, last updated time, and keywords & same as Well-being tool \\
 \hline
 Computer state-space & label (specification $S$, visualization $V$ or analysis $A$), created time, last updated time, the status of the hierarchical structure to the left, bar chart's visualization schema (stacked, grouped, maximized, etc.), map position, map zoom, map areas selected, math operation used, and question visualized in the map & label (specification $S$, visualization $V$ or analysis $A$), created time, last updated time, map position, map zoom, selected time-frame if any, and selected filters if any \\
 \hline
 Computer temporal sequence & event name, created time, URL, user id, analysis id, and all the same data from the related Computer state-space & event name, created time, URL, user id, analysis id, and all the same data from the related Computer state-space \\
 \hline
\end{tabular}
\end{table}

The data was fed to our system, and the resulting knowledge graph of both tools combined was highly complex. Nevertheless, a preliminary analysis of the results showed that users had a total of 70 unique intentions (36 for WMT and 34 for MLT) and 54 unique insights (27 for each). From user input, there were 40 unique intentions (21 for WMT and 19 for MLT) and 53 unique insights (26 for WMT and 27 for MLT). On the machine side, there were a total of 458 events (252 for WMT and 206 for MLT) or 230 unique events (143 and 87, respectively). We can see from these numbers that although there were fewer interactions with MLT, it offered the same number of insights. This could either mean that the tasks were simpler on MLT or that the users had to interact less with this tool to reach insights.
\n\n\n\n\\section{Screenshot of the Tools}\nSee Figs.~\\ref{fig:wmt} and ~\\ref{fig:mlt}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=2\\columnwidth]{pictures\/WMT.png}\n \\caption{Screenshot of the Well-being Mapping Tool. Knowledge-Decks options, including the text input areas, are on the top right and the bottom part of the website. }\n \\label{fig:wmt}\n\\end{figure*} \n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=2\\columnwidth]{pictures\/MLT.png}\n \\caption{Screenshot of the Marine Life Tool. Knowledge-Decks options, including the text input areas, are on the top right and the bottom part of the website. }\n \\label{fig:mlt}\n\\end{figure*} \n\n\\section{Questionnaire}\n\nAppendix C\u2013Demographic Questionnaire\n\\begin{itemize}\n \\item How long have you used a computer?\n \\begin{itemize}\n \\item Less than one week\n \\item 1 week to less than 1 month\n \\item 1 month to less than 1 year\n \\item 1 year to less than 2 years\n \\item 2 years to less than 4 years\n \\item 4 or more years\n \\item I prefer not to answer\n \\end{itemize}\n \\item On average, how much time do you spend per week on a computer?\n \\begin{itemize}\n \\item Less than one hour\n \\item One to less than 4 hours\n \\item to less than 10 hours\n \\item 10 to less than 20 hours\n \\item 20 to less than 40 hours\n \\item Over 40 hours\n \\item I prefer not to answer\n \\end{itemize}\n \\item How comfortable are you at using interactive user interface?\\begin{itemize}\n \\item Extremely comfortable\n \\item Very comfortable\n \\item Comfortable\n \\item Uncomfortable\n \\item Very uncomfortable\n \\item Extremely uncomfortable\n \\item I prefer not to answer\n \\end{itemize}\n \\item How familiar are you with Data Analytics Tools, such as Microsoft Excel or Tableau?\n \\begin{itemize}\n \\item Very well\n \\item Well\n \\item Neutral\n \\item Not well\n \\item Not well at all\n \\item I prefer not to answer\n \\end{itemize}\n \\item At what level do you think your understanding of written English is?\n \\begin{itemize}\n \\item Excellent\n \\item Very good\n \\item Good\n \\item Acceptable\n \\item Bad\n \\item Very bad\n \\item None\n \\item I prefer not to answer\n \\end{itemize}\n \\item What is the highest level of education you have completed?\n \\begin{itemize}\n \\item Little or no formal education\n \\item High school or equivalent\n \\item College or university\n \\item Master\n \\item Doctoral\n \\item Post-Doctoral\n \\item I prefer not to answer\n \\end{itemize}\n \\item In case you have or are pursuing a degree, what is your primary area of study?\n \\begin{itemize}\n \\item Computer Science\n \\item Information technology\n \\item Internetworking\n \\item Social Science\n \\item Health Science\n \\item Other\n \\item I have no primary area of study\n \\item I prefer not to answer\n \\end{itemize}\n\\end{itemize}\n\nPrelude - Video tutorial\nPlease view this video tutorial (link) only once and respond to the following statements about the visualization-based interface, using the given scale:\n\nQuestionnaire 1.1 - Pre-defined execution Questionnaire of the Quality of Life Tool\nNow open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and\/or insights you discover in the process. If you find the answer, type as a final insight and then respond to the following statements about your experience using the given scale. 
Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: Which FSA shows the most concern with access to educational opportunities?
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 1.2 - Pre-defined execution Questionnaire of the Quality of Life Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale. Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: Is there a "happiest" community that shows the highest life satisfaction score?
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 1.3 - Pre-defined execution Questionnaire of the Quality of Life Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale.
Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: Where do parents experience barriers to access recreation in terms of facilities not offering childcare?

Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 1.4 - Pre-defined execution Questionnaire of the Quality of Life Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale. Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: Where do people tend to buy local food? (or, where is the lowest amount of those who "never" buy local food?)
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 1.5 - Pre-defined execution Questionnaire of the Quality of Life Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale. Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: Where do people report the highest overall work-life balance?
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree


Questionnaire 2 -- Quality of Life Tool's Interface Features Questionnaire
Please respond to the following statements about the visualization-based interface, using the given scale:

Question	Answers
The tool was easy to use	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
The questions were simple to answer	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I answered the questions confidently	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
It was intuitive to write down my intentions during the process	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
It was intuitive to write down my insights during the process	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe if I were to see others' intentions, I would be able to better understand the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe if I were to see others' insights, I would be able to answer the question faster	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe if I were to see others' insights, I would be able to answer the question with more precision	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree


Questionnaire 3.1 - Pre-defined execution Questionnaire of the Ocean Data Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale.
Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: Considering the cardinal directions (north, northeast, east, etc), where are most of the sensors (also called stations) located within the Bay of Fundy?
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 3.2 - Pre-defined execution Questionnaire of the Ocean Data Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale. Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: In which year and month were fish most detected on the passage?
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 3.3 - Pre-defined execution Questionnaire of the Ocean Data Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale.
Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: How much did the tide affect the presence and number of sharks in the Avon River in 2019?

Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 3.4 - Pre-defined execution Questionnaire of the Ocean Data Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale. Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: On what day of the week were sharks most and least active in 2018?
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Questionnaire 3.5 - Pre-defined execution Questionnaire of the Ocean Data Tool
Now open the website at this link and attempt to answer the following question. As the video has shown, write the individual intentions and/or insights you discover in the process. If you find the answer, type it as a final insight and then respond to the following statements about your experience using the given scale.
Note that the process of finding the answer is what is important to us, so don't worry about how well you answer the question; just attempt to answer it within the allotted time of 5 minutes.

Question: How does the number of detected striped bass compare between the sensors running across the north shore of the main basin, the southern shore sensors, and the Avon River sensors?
	
Question	Answers
I was able to follow the steps without any problems	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I was able to quickly understand what I needed to do to perform the required steps in the webapp	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
While performing the steps, I totally ignored other information not relevant to the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe I was able to answer the requested question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree


Questionnaire 4 -- Ocean Tool's Interface Features Questionnaire
Please respond to the following statements about the visualization-based interface, using the given scale:

Question	Answers
The tool was easy to use	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
The questions were simple to answer	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I answered the questions confidently	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
It was intuitive to write down my intentions during the process	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
It was intuitive to write down my insights during the process	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe if I were to see others' intentions, I would be able to better understand the question	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe if I were to see others' insights, I would be able to answer the question faster	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree
I believe if I were to see others' insights, I would be able to answer the question with more precision	Strongly Disagree	Somewhat Disagree	Neutral	Somewhat Agree	Strongly Agree

Please give us more comments about your experience
Is there any way you expect that the intentions and insights could be used to empower you while you were answering the questions?

\section{Knowledge-Decks Ontology}
See Fig.~\ref{fig:ontology}.

\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{pictures/ontology.png}
 \caption{Ontology structure of the Knowledge Graph used in Knowledge-Decks. There are four classes of nodes, each containing property maps as depicted above. The relationships between the classes are in capital letters and are used within the queries to specify which relationship must be followed to extract the knowledge discovery paths.
}\n \\label{fig:ontology}\n\\end{figure} \n\n\\section{Slide Decks Generated}\n\n\\begin{figure*\n \\centering\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide1.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide1.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide1.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide2.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide2.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide2.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide3.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide3.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide3.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide4.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide4.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide4.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide5.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide5.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide5.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide6.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide6.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide6.png}\n \\caption{Three slide decks extracted from the insight WMT task 1 (part 1). }\n \\label{fig:slidedeckeduc}\n\\end{figure*} \n\n\\begin{figure*\n \\centering\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide7.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide7.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide7.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/1\/Slide8.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/2\/Slide8.png}\n \\includegraphics[width=0.3\\textwidth]{pictures\/educpptpic\/3\/Slide8.png}\n \\caption{Three slide decks extracted from the insight WMT task 1 (part 2). }\n \\label{fig:slidedeckeduc}\n\\end{figure*} \n\n\n\\begin{figure*\n \\centering\n \\includegraphics[width=\\textwidth]{pictures\/findinsights.png}\n \\includegraphics[width=0.33\\textwidth]{pictures\/allinsights\/Slide1.png}\n \\includegraphics[width=0.33\\textwidth]{pictures\/allinsights\/Slide2.png}\n \\includegraphics[width=0.33\\textwidth]{pictures\/allinsights\/Slide3.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/allinsights\/Slide4.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/allinsights\/Slide5.png}\n \\caption{Graph visualization from Neo4j~\\cite{noauthororeditorneo4j} when collecting all insights from all users given a starting intention. In this example, four insights were found from said intention, two of which were from the same user who wrote the intention (two to the left) and two from other users (two to the right). The slide deck generated on the bottom describes the four insights found given the original intention. 
}\n \\label{fig:findinsights}\n\\end{figure*} \n\n\\begin{figure*\n \\centering\n \\includegraphics[width=\\textwidth]{pictures\/sampleflow.png}\n \\caption{Above is a portion of KG extracted from \\textit{Knowledge-Decks} when generating a slide deck, which contains all interactions done in the shortest between an intention and an insight. Visualization created with Neo4J browser~\\cite{noauthororeditorneo4j}. The graph's path highlighted in black is used to generate the slide deck below, which is to be read from left to right and top to bottom (part 1). }\n \\label{fig:slidedecksampleflow}\n\\end{figure*}\n\n\\begin{figure*\n \\centering\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide1.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide2.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide3.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide4.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide5.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide6.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide7.png}\n \\includegraphics[width=0.4\\textwidth]{pictures\/educpptpic\/4\/Slide8.png}\n \\caption{Above is a portion of KG extracted from \\textit{Knowledge-Decks} when generating a slide deck, which contains all interactions done in the shortest between an intention and an insight. Visualization created with Neo4J browser~\\cite{noauthororeditorneo4j}. The graph's path highlighted in black is used to generate the slide deck below, which is to be read from left to right and top to bottom (part 2). }\n \\label{fig:slidedecksampleflow}\n\\end{figure*}\n\n\\section{Methodology}\\label{sec:methodology}\n\nTo arrive at VAKG's goals and ontology, we first formalize and describe the Visual Analytics (VA) knowledge model framework~\\cite{sacha2014knowledge, federico2017role} and its role in VA ontologies~\\cite{xu2020survey, sacha2018vis4ml}. Then, following the example of related works, we use this foothold to unfold the Human-Computer loop into a temporal sequence of states, as is shown in Fig.~\\ref{fig:teaser}(B). We then discuss VAKG goals in light of related work limitations, and, to achieve these goals, we expand the ontology and use a \\textit{Temporal Knowledge Graph (TKG)} structure to allow VAKG to store and analyze the user's knowledge gain process.\n\n\\subsection{Background and Goals}\\label{sec:formal}\n\nThe theoretical background of VA's knowledge model is a foundation work for most, if not all, research within VA. The rather simplistic representation of this model shown in Fig.~\\ref{fig:teaser}(A) characterizes its two main actors: \\textit{Humans} and \\textit{Computers}. From \\citet{sacha2014knowledge}, VA's cyclical process of human-computer interactions and feedback loops describes that knowledge is generated over time. \\citet{federico2017role} expand these concepts by describing the inner taxonomies of both Human and Computer sides and further describing that depending on the system and use-case, different styles of interaction loops can be described through these inner taxonomies. They also propose a mathematical interpretation called ``Conceptual Model of Knowledge-Assisted VA'' to describe a base knowledge model, which is then used t construct many derived models, such as knowledge generation, conversion, internalization, externalization, and exploitation. 
For our purposes, the essential information extracted from these works is that VA workflows have many well-defined inner taxonomies, which may vary depending on the application and use case; still, there always exists an overarching human-computer interactivity loop.

As discussed previously, such theoretical models are also applied in practice, either by using the theory as an inspiration for design guidelines or through formal ontologies. For instance, \citet{sacha2018vis4ml} describe an ontology of the knowledge model as what they call a ``diamond feedback loop'', and \citet{battle2019characterizing} use the knowledge model as an inspiration to record user behavior as a graph network for analysis. In all cases, the interactivity loop between \textit{Human} and \textit{Computer} is always present.

To better understand VAKG's goals, we extract the underlying ontology of the knowledge model from these existing works following the Web Ontology Language (OWL)~\cite{chen2017pathways}. By pairing the temporal aspect of the human-computer interactivity loop from knowledge models~\cite{sacha2014knowledge, federico2017role} with the ``diamond rule'' of \citet{sacha2018vis4ml}, we define sequences of \textit{State} classes (green circles) with two main sub-types, \textit{Human States} (red outline) and \textit{Computer States} (black outline), as seen in Fig.~\ref{fig:teaser}(B).

Next, we incorporate the temporal aspect into the ontology. For this, we follow the current conceptual work on defining and characterizing VA workflows (see Sec.~\ref{sec:related}). Although many specific ontologies of relationships have been designed recently, we first return to the roots of VA and focus our macro design on the temporal relationship between the two aforementioned classes as the design core of VAKG. Formally speaking, we define two relationships that connect the two aforementioned \textit{State} types, \textit{interact-Human-Computer} and \textit{feedback-Computer-Human}, similar to~\cite{sacha2018vis4ml, shu2006investigating}, as seen in the red-dashed and black-dashed lines of Fig.~\ref{fig:teaser}(B). These relationships are analogous to \textit{has-IO-Entity-Successor} and \textit{has-Process-Successor} of \citet{sacha2018vis4ml}, where, just as they argue, ``directed connections explicitly define the predecessor-successor and action-actor relationships within any workflow''.
Therefore, by extracting the descriptors from existing works~\cite{sacha2018vis4ml, federico2017role}, the VAKG ontology should include:

\squishlist

\item \textbf{\textit{Human} Class (H):} All human-related information and changes, such as tacit knowledge, newly gained insights and findings, perception/cognition of the user, the user's will or wishes while performing any task, any questions or goals the user may have, and all demographic information on the user.

\item \textbf{\textit{Computer} Class (C):} All computer-related information and changes, such as datasets, metadata, visual and interface state at a given time, machine learning state, automated processes, system specifications, system configuration, and available explicit knowledge.

\item \textbf{Relationship \textit{interact-Human-Computer}:} \textit{What} and \textit{how} the user interacted with or changed within the interface (Computer), and \textit{why} the user interacted with the interface (Computer) at a given time.

\item \textbf{Relationship \textit{feedback-Computer-Human}:} \textit{What} the state of the Human was prior to the changes in the interface at a given time.

\squishend

So far, we have defined what already exists in the literature~\cite{sacha2018vis4ml, federico2017role}. However, as we have discussed in Sec.~\ref{sec:related}, the related works are ambiguous about the two classes and two relationships: whether the \textit{knowledge gathering} process is to be considered temporal or not. A temporal process is a provenance workflow that considers the sequences of interact-feedback loops as sequences of events over time, which can then be modeled through taxonomies~\cite{sacha2014knowledge, federico2017role, polowinski2013viso, von2014interaction} or systematically used for tracking or analysis~\cite{battle2019characterizing, xu2020survey, fujiwara2018concise, bernard2017comparing}. An atemporal process, on the other hand, would instead consider the \textit{state space} of the VA workflow.

We consider a \textit{state space} to be a set of some or all possible configurations of a system or entity, together with all the immediate adjacency relationships between such configurations. For example, a state space can be a graph network with a node for every possible state a VA tool can be in and a link between every two nodes that are a single event or interaction away from each other. Interpreting the ontology described above as a state space has been considered in some theoretical works~\cite{sacha2018vis4ml, von2019informed} and analysis-focused works~\cite{battle2019characterizing, clifton2012advanced}. However, even when both temporal and atemporal aspects are used, it is ambiguous or uncertain how the two are differentiated and whether they are connected in any way, as we have discussed in Sec.~\ref{sec:related}. Furthermore, the inner elements of the Human side versus the Computer side of VA are not always separated within these works, which creates further ambiguity.
For instance, while theoretical works~\cite{sacha2014knowledge, federico2017role} clearly separate the two in their models, works describing VA ontologies~\cite{sacha2018vis4ml, brehmer2013multi}, user-tracking~\cite{bernard2017comparing}, and behavior analysis~\cite{battle2019characterizing, xu2020survey} usually merge the two sides or ignore one side completely.

With this, we can state our core goal: \textit{VAKG is an architectural framework designed with ontologies that formally model the temporal sequences and the state space of both the Computer and Human sides of a VA workflow, and when used to record VA workflow sessions, it directly allows for provenance and state-space analysis}. Note that VAKG should be understood as an ontology \textit{architecture}, in the sense that it is not a static ontology but is expected to be expanded and specialized for different use-cases. With that in mind, our objectives while designing VAKG are to:

\begin{itemize}
 \setlength\itemsep{0em}
 \item[\textbf{G1:}] Create an ontology that describes and links all four VA workflow aspects:
 \begin{itemize}
	 \item[\textbf{G1.1:}] Temporal sequences of knowledge gathering per user (Human State Provenance).
 \item[\textbf{G1.2:}] The user feedback and insights which occur within a VA workflow (Human State Space).
 \item[\textbf{G1.3:}] The Computer state space within a VA workflow (Computer State Space).
 \item[\textbf{G1.4:}] Temporal sequences of computer events/tasks which are executed during VA workflow sessions (Computer State Provenance).
 \end{itemize}
 \item[\textbf{G2:}] Architect the ontology so that it can be used to record users executing a VA workflow and to allow its analysis.
 \item[\textbf{G3:}] Make the ontology able to optionally incorporate lower-level ontologies and models for specific analytical use-cases.
\end{itemize}

\subsection{VAKG Expanded Ontology}\label{sec:vakgkg}
\subsubsection{Temporal Disambiguation}

So far, we have extracted an ontology from the literature~\cite{federico2017role, sacha2018vis4ml} and discussed its ambiguities regarding whether the \textit{knowledge gathering} process is temporal (e.g., provenance) or not (e.g., state space). To resolve them, VAKG expands the ontology described above by explicitly separating the concepts of temporal and atemporal relationships. This is done by interpreting the two relationships, namely \textit{feedback-Computer-Human} (\textbf{G1.2}) and \textit{interact-Human-Computer} (\textbf{G1.3}), as atemporal relationships and adding two more relationships for temporal connections:

\squishlist

\item \textbf{Relationship \textit{update-Computer-Computer}:} Time-stamped indication of \textit{what} changed in the interface and \textit{why} this change happened (\textbf{G1.4}).

\item \textbf{Relationship \textit{insight-Human-Human}:} Time-stamped indication of \textit{what}, \textit{how}, and \textit{why} the user aggregated new insights/findings from the new information (\textbf{G1.1}).

\squishend

Our expanded version of the VA workflow ontology shown in Fig.~\ref{fig:teaser}(B) attempts to reduce, if not eliminate, this ambiguity. The figure shows the two temporal relationships as the light-red and gray diamond arrows, while the atemporal relationships are the red and black dotted arrows.
A parallel mathematical formalization of the same ontology is discussed in the supplementary material.

\subsubsection{The Update Sequence}\label{sec:updateseq}

Although the basic architecture of VAKG follows existing research, it also attempts to resolve the existing temporal-versus-atemporal issues. Even though analysis of the temporal aspect of VA, such as knowledge gathering or data provenance, is at the center of VAKG's goals, the ontology so far does not fully depict a timeline of events separate from a timeless ontology of possible events (\textbf{G1}). For this, and for VAKG to also allow for analytical tasks to be done (\textbf{G2}), we propose that the design of VAKG follow existing research on \textit{Knowledge Graphs (KGs)}.

A KG is a graph structure where knowledge reasoning is modeled as connections between classes or properties, such as ``George Washington \textit{is a} human'' and ``Canada \textit{is a} country''. A \textit{Temporal Knowledge Graph (TKG)}, however, models these connections as the temporal relationship between the classes or properties. Many different types of TKGs exist, and for each, this temporal connection has a different meaning. For instance, the most used version of TKGs uses time as the connection, where two connected nodes represent two events that co-occurred. An example of such a KG would be all purchases done between different businesses within a supply chain, where the product ``Mayonnaise'' may have been bought by the store ``Walmart'' from the seller ``Hellmann's'' on ``25/06''. In this TKG, the connection between the three nodes Walmart, Hellmann's, and Mayonnaise would be ``25/06''. To design such structures, one must use the Web Ontology Language (OWL) or a similar ontology language~\cite{chen2017pathways}. By designing VAKG's ontology with KGs and TKGs in mind, we enable easier ways for both temporal and atemporal visualization and analysis to be done with VAKG through existing works~\cite{chen2020review, wang2018ripplenet, han2014chronos, gottschalk2018eventkg, cashman2020cava}.

VAKG is arguably more complex than the KG examples given so far. Considering the four different aspects listed within \textbf{G1}, we describe VAKG as a 4-lane KG, with one lane for each of the four aspects of \textbf{G1}. However, our ontology of Fig.~\ref{fig:teaser}(B) does not yet match this expectation. For that, we create a new ontology by promoting each of the two temporal relationships to its own \textit{class}, which represents an \textit{Update} process (Fig.~\ref{fig:teaser}(C)). We therefore modify the relationship between two \textit{States} of the same type (e.g., \textit{Human-Human}) to become a new \textit{class} of type \textit{Update}, and, from this, three new relationships are created: \textit{does-State-Update}, \textit{leads-Update-State}, and \textit{follows-Update-Update}, which together form the 4-lane TKG architecture of Fig.~\ref{fig:teaser}(C). With this, the \textit{Human Update} inherits the definition from \textit{insight-Human-Human}, and the \textit{Computer Update} inherits the one from \textit{update-Computer-Computer}. A representation of how the \textit{classes} relate to each other can be seen in Fig.~\ref{fig:entiredesign}, where it is shown that there are four \textit{classes}: \textit{Human State}, \textit{Human Update}, \textit{Computer State}, and \textit{Computer Update}.
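To make this structure concrete, the following is a minimal sketch of how one step of the 4-lane graph could be written to a Neo4j database using the official Python driver. This is an illustrative sketch, not a prescribed implementation: the node labels mirror the four \textit{classes} above, the relationship types are shortened forms of the ontology relationships, and all property values are invented for illustration.

\begin{verbatim}
# Sketch: one step of the 4-lane VAKG in Neo4j. Labels mirror the
# four classes; relationship types are shortened forms of the
# ontology relationships above; property values are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

ONE_STEP = """
MERGE (c0:ComputerState {key: $c0})   // Computer State (state space)
MERGE (c1:ComputerState {key: $c1})
MERGE (h0:HumanState {key: $h0})      // Human State (state space)
MERGE (h1:HumanState {key: $h1})
CREATE (cu:ComputerUpdate {at: $ts})  // Computer Update (provenance)
CREATE (hu:HumanUpdate {at: $ts})     // Human Update (provenance)
// temporal lanes: does-State-Update and leads-Update-State
// (successive Updates would be chained via follows-Update-Update)
CREATE (c0)-[:DOES]->(cu)-[:LEADS]->(c1)
CREATE (h0)-[:DOES]->(hu)-[:LEADS]->(h1)
// atemporal lanes: interact-Human-Computer, feedback-Computer-Human
CREATE (h0)-[:INTERACT]->(c0)
CREATE (c1)-[:FEEDBACK]->(h1)
"""

with driver.session() as session:
    session.run(ONE_STEP, c0="map-view", c1="line-chart-view",
                h0="no-insight", h1="saw-trend",
                ts="2021-06-25T10:00:00")
\end{verbatim}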
This 4-lane design achieves \textbf{G1}: the \textit{Human Update TKG} identifies \textit{Human State Provenance}, the \textit{Human State KG} identifies the \textit{Human State Space}, the \textit{Computer State KG} identifies the \textit{Computer State Space}, and the \textit{Computer Update TKG} identifies \textit{Computer State Provenance}, all of which describe \textit{one overarching TKG with 4 sub-KG lanes}. The VAKG representation of Fig.~\ref{fig:teaser}(C) simplifies this definition as: the \textit{Feedback} lane for Human States, the \textit{Interactivity} lane for Computer States, the \textit{Update Interface} lane for Computer Updates, and the \textit{Knowledge} lane for Human Updates.

\begin{figure}
 \centering
 \includegraphics[width=.75\columnwidth]{pictures/fullVAKGontology.PNG}
 \caption{VAKG ontological design. The four different lanes of VAKG are represented. Two are KGs that describe the possible states of the inner property-maps or sub-graphs. Two are TKGs describing the sequences of updates, such as the sequence of insights and knowledge gained by users in a VA workflow or the sequence of computer events.}
 \label{fig:entiredesign}
\end{figure}

\subsubsection{VAKG Property Map and Data Collection}\label{sec:propertymap}

So far, we have focused on the VAKG structure. Still, an integral part of our proposal is to record users executing a VA workflow and allow its usage for analysis (\textbf{G2}). While the usual way of thinking about ontology design is to focus on OWL classes and their relationships, with VAKG we also utilize OWL \textit{class properties}, sometimes called \textit{data properties} or, when used within the context of KGs, \textit{property-maps}. This design pattern uses the idea that every class in OWL can contain inner properties with attached data. The \textit{property-map} design pattern is, however, interchangeable with the design pattern of pure OWL classes and relationships~\cite{myroshnichenko2009mapping}, which removes any potential limitation on interconnecting our design with other ontologies.

As per our design, each node within VAKG holds a property map of all the node information. Fig.~\ref{fig:vakgexpatt} shows a simple example of what this property map is when using VAKG to model an \textit{Exploratory Data Analysis (EDA)} task.
However, how much of the recorded information should be stored in the property map? Although, theoretically, one could store all information related to a given state, down to the exact bits of a Computer State or a brain scan of a Human State, it is not reasonable to expect that every VA workflow requires such an amount of information. Therefore, we define as part of VAKG that the property map of a State should, at the very least, uniquely identify that specific State within the state space of VAKG.

Through this definition, we can see that a given Computer or Human \textit{State} can repeat if the same conditions occur multiple times. For instance, if the user within an EDA task performs a sequence of interactions that returns all the visualizations to an already explored state, VAKG's \textit{Computer States} would describe the state transitions that the user took until returning to that state. Though less common, this property is shared by the \textit{Human States}: the user could have had an insight previously and, after further EDA interactions, simply have the same insight once again.
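As a rough illustration of this uniqueness requirement (our own sketch; VAKG does not prescribe any particular encoding), the property map of a \textit{Computer State} in a hypothetical EDA dashboard could be reduced to a canonical key, so that revisiting the same dashboard configuration maps back to the same \textit{State} node:

\begin{verbatim}
# Sketch: reducing a Computer State's property map to a key that
# uniquely identifies the State within the state space. The fields
# (dataset, chart, filters) are hypothetical choices for an EDA tool.
import hashlib, json

def state_key(property_map: dict) -> str:
    # Canonical form: the same configuration -> the same State node.
    canonical = json.dumps(property_map, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

s1 = {"dataset": "gapminder", "chart": "line",
      "filters": {"country": ["USA"]}}
s2 = {"chart": "line", "filters": {"country": ["USA"]},
      "dataset": "gapminder"}
assert state_key(s1) == state_key(s2)  # a revisited state reuses its node
\end{verbatim}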
Such repetition, however, is not a property shared by the Computer or Human \textit{Update} sequences. By applying the same definition to the two \textit{Update} classes, their expected property-map would uniquely identify the changes between the two Computer or two Human \textit{States}, which, for VAKG, includes the timestamp of when the change occurred.

Therefore, it is important to note that the Feedback and Interactivity lanes of Fig.~\ref{fig:teaser}(C) are not \textit{temporal}: their connections are not temporally dependent but only dependent on the adjacency of their inner property-maps. Nevertheless, we still describe VAKG as a TKG as a whole due to the other two lanes, which are themselves \textit{temporal} sequences of events. With this, the example of an EDA task shown in Fig.~\ref{fig:vakgexpatt} can be better understood through this 4-lane interpretation of VAKG.

\begin{figure}
 \centering
 \includegraphics[width=\columnwidth]{pictures/VAKG_abstract.png}
 \caption{A VA tool called ExPatt used VAKG to track its interface states and the user's knowledge gain. The tool's state is stored as a property map, and on each update operation (e.g., an interaction with the tool) a new update node records all pertinent information. Similarly, the user's ongoing insights and tacit knowledge follow in a separate timeline.}
 \label{fig:vakgexpatt}
\end{figure}

VAKG itself does not strictly design the property-map typology because each use case will have different requirements (\textbf{G3}). For instance, Vis4ml~\cite{sacha2018vis4ml} focuses on designing an ontology that describes the workflow of VA-assisted ML. As we discussed before, although Vis4ml achieves its own goals very well, its ambiguity regarding temporal and atemporal aspects limits its usage when compared to VAKG. However, our design also expects that works like Vis4ml may extend VAKG's ontology by, for instance, defining the content of all property-maps within each \textit{State}. As an example, Vis4ml could specify a property-map that describes the exact state of ``Prepare-Data'' at a given time, including all statistical profiles, data processing, and annotations. Another possibility is, instead of defining property-maps, to upgrade the \textit{Computer States} into their own Vis4ml ontology (\textbf{G3}).

Using an existing ontology or taxonomy map, such as Vis4ml, as a relationship model of the property-map within a \textit{Class} is interesting in some use cases. For instance, following the previous example, we know that Vis4ml~\cite{sacha2018vis4ml} has a relationship model within all ``Prepare-Data'' nodes. Therefore, if a single \textit{Computer State} can define the state space of an entire ``Prepare-Data'' process, the user of VAKG can choose to expand the property-map into a sub-KG containing a full ontology within. This same concept can be applied to Human or Update \textit{States}, where complex knowledge models from \citet{federico2017role} can be used. Further examples and applications are discussed in Sec.~\ref{sec:comparison}.

\subsubsection{Temporal Misalignment}\label{sec:summarization}

The examples of VAKG given so far through Fig.~\ref{fig:teaser} show a new Human update for every new Computer update. In other words, these examples expect users to have a new insight/finding at every single interaction, forcing these Update states to align perfectly.
This, however, does not reflect the current literature, as is shown in manual and automatic annotation research~\cite{yu2019flowsense, kim2020answering, mogadala2019trends, mathisen2019insideinsights}. In reality, users may have one intention or insight that causes multiple interactions within the VA tool, and only after all these interactions does the user finally generate a new piece of knowledge. Users may also have multiple insights or intentions after performing a single interaction. With this in mind, VAKG does not require exact parity between the Computer and the Human timeline. Instead, VAKG interprets the Update states as the summary of all changes between the two states. These Updates may be linked to one or more Updates from the parallel Update lane. For instance, Fig.~\ref{fig:vakgsummary} exemplifies two users performing multiple actions only to harness a single piece of new information at the end. A single insight Update is linked to three interface Updates in this example.

\begin{figure}[tb]
 \centering
 \includegraphics[width=.68\columnwidth]{pictures/vakgsummary.png}
 \caption{VAKG summary extension. Although the two users performed three actions (black border), they had only one intention from the beginning and learned something new only at the very end (red border). Relationships between the Computer lane (left) and Human lane (right) occur between nodes of the same inner color.}
 \label{fig:vakgsummary}
\end{figure}

This loosens the definition of the relationships between the Update classes. More specifically, we extend the definition of the \textit{knowledge lane} so that a single insight can be related to any number of updates in the tool. Similarly, a single interaction can also be related to any number of new insights in the \textit{knowledge lane}. This forces VAKG to handle new updates separately between the Computer and Human classes. Still, the resulting KG may be closer to how the users' knowledge gain actually happened. We define this aspect of VAKG as the \textit{summarization} or \textit{interpolation} of Update nodes. Arguably, this also reduces the amount of information within VAKG: when multiple interactivity Update classes link to a single knowledge Update class, the ontology will not retain how each of the interactions uniquely impacted the property-map of the knowledge Update. Although this limits certain aspects of VAKG, as discussed before, specific use-cases can upgrade the property-map of a \textit{summary Update} into an inner ontology that enables further information to be modeled within the TKG.
\section{Background and Related Work}\label{sec:related}

For the design of our system, we used a modeling process based on existing knowledge model research, with which we implemented ways to collect user data and transform it into a retelling of the users' insights and knowledge discovery process. Therefore, this section discusses VA knowledge models and their practical usage when applied to knowledge discovery. Then, we discuss several provenance\footnote{Provenance: tracking and using data collected from a tool's usage~\cite{ragan2015characterizing}.} approaches, including data and analysis provenance, which allow us to collect users' insights and knowledge discovery process.
Finally, we discuss existing approaches to recall and retell users' knowledge discovery as storytelling presentations.

\subsection{Knowledge Modeling and Provenance in VA}

VA literature in knowledge modeling has shown that users' interactivity with VA tools can be modeled as an iterative workflow of user intentions, behaviors, and insights~\cite{sacha2014knowledge, sacha2016analytic, federico2017role}. For instance, Federico et al.~\cite{federico2017role} describe how VA tools and frameworks can be modeled following said knowledge modeling methodology. They also describe how to interpret VA research in light of their model. This methodology of describing VA as an iterative workflow of events between Human and Machine actors shows that it is possible to understand a VA problem as a sequence of events. Additionally, this model can also represent automatic processes, such as data mining or machine learning, as part of the sequence of events, even if they were not triggered by a Human interaction. Sacha et al.~\cite{sacha2018vis4ml}, among others, show through their detailed machine-learning ontology that these Computer-generated events permeate much of VA as a whole.

Usually, VA developers and researchers use more practical means to model, collect, store, and utilize their tools or users' experiences. For instance, some researchers collect and analyze user behavior using provenance~\cite{da2009towards, ragan2015characterizing}. For example, von Landesberger et al.~\cite{von2014interaction} perform behavior provenance by modeling user behavior as a graph network, which is then used for analysis. Other works also use graph networks as a way to model knowledge~\cite{fensel2020introduction}, data provenance~\cite{simmhan2005survey, da2009towards}, insight provenance~\cite{gomez2014insight}, and analytic provenance~\cite{xu2015analytic, madanagopal2019analytic}. All these tools and techniques aim to collect some part of the user's knowledge discovery process and model it into some structure.

Among these, graph networks, or more specifically knowledge graphs~\cite{fensel2020introduction, guan2022event}, are uniquely positioned as a means to model the user's knowledge discovery process due to their specific applicability in collecting and analyzing two kinds of data: temporally-based event data~\cite{guan2022event} and relationship-centric data~\cite{auer2007dbpedia}. Since VA is modeled based on events~\cite{sacha2014knowledge, federico2017role} and our goal is to find and analyze the \textit{relationship} between VA events~\cite{fujiwara2018concise}, we decided to use knowledge graphs as one of the cornerstones of our system.

In order to perform provenance, however, one must first collect information from the user. Existing works have collected changes in datasets~\cite{da2009towards}, updates in visualizations~\cite{battle2019characterizing, xu2020survey}, and other similar events in order to recreate and recall user behavior. On the other hand, tracking the user's \textit{Tacit Knowledge} as defined by Federico et al.~\cite{federico2017role}
is either done by manual feedback systems~\cite{bernard2017comparing, mathisen2019insideinsights}, by manual annotations over visualizations~\cite{soares2019vista}, or by inference methods that attempt to extract users' insights by recording the users' screens, video, or logs and extracting interactivity patterns from them as a post-mortem task~\cite{battle2019characterizing, guo2015case}. Among these VA systems, InsideInsights~\cite{mathisen2019insideinsights} is an approach to recording insights through annotations during the user's analytical process. Its authors demonstrated that collecting user annotations is a legitimate way to extract and store user insights.

After collecting data from the users' knowledge discovery process, one must impose a structure on it. For that, the design of knowledge ontologies has been a central goal of \textit{Knowledge Graphs} (KGs)~\cite{fensel2020introduction}. KGs~\cite{chen2020review, guan2022event} are a widely used technique to store knowledge in a structured format and have yielded many successful methods to interpret, store, and query explicit knowledge, such as the information within Wikipedia~\cite{auer2007dbpedia}. KG methodology defines that the relationships between nodes of a graph network, and not just the nodes themselves, can represent knowledge~\cite{guan2022event}. Among similar structures, Event Knowledge Graphs~\cite{guan2022event} expand on this concept by including event-based data, defining that paths of the graph's relationships represent sequences of events over time.

In order to store and analyze a KG, graph databases, such as Neo4j~\cite{noauthororeditorneo4j}, have emerged with a wide array of available techniques. Such specialized databases are needed since the structure of KGs differs from the usual transactional, row-based structure of typical relational databases~\cite{cashman2020cava}. Applications of KGs are also part of overall graph network research, which means that graph network techniques such as Graph Neural Networks (GNNs)~\cite{jin2020gnnvis}, graph visualizations~\cite{chang2016appgrouper, he2019aloha}, and graph operations~\cite{ilievski2020kgtk}, such as PageRank and traveling salesman, can be applied to KGs.

Among works that utilize KGs to model and store the user's knowledge discovery process, we must highlight VAKG~\cite{christino2022data}. This conceptual framework connects VA knowledge modeling theory with KGs to structure and store knowledge provenance. Indeed, as later discussed, our solution is built around the VAKG methodology. We modeled two existing VA tools in light of existing knowledge models and, from this, developed a KG ontology (or schema) to store users' knowledge discovery process. With VAKG, we can identify what should be collected from the user. However, it by itself does not aid in performing data collection, nor in defining how to analyze and extract slide decks from the generated KG.

Certain limitations of these works show that tracking automatically-generated insights~\cite{spinner2019explainer} or accounting for automatic computer processes~\cite{federico2017role} is still a challenging and largely unsolved problem.
Therefore, we have opted in our system to follow the argumentation of Xu et al.~\cite{xu2015analytic} and focus on manual means to collect user insights and intentions while providing automatic collection of computer processes~\cite{federico2017role} and automatic linking of all user-related events to their respective computer-related events~\cite{christino2022data}.

Others have attempted to analyze KGs in order to extract information. CAVA~\cite{cashman2020cava}, for instance, allows for the exploration and analysis of KGs through visual interaction. KG4Vis~\cite{li2021kg4vis}, on the other hand, uses the advantages of the KG structure to provide visualization recommendations. ExeKG~\cite{zheng2022exekg} uses KGs to record paths of execution within data mining and machine learning pipelines. Indeed, there is an ever-increasing number of works using KGs in VA, all of which agree that KGs are suitable for knowledge analysis and extraction. Our novelty among such works is using KGs to model the user's knowledge discovery process as paths within the KG, which are then extracted as slide decks.

Only some works have attempted to model user behavior or knowledge discovery in a way that generates or extracts slide decks. StoryFacets~\cite{park2022storyfacets} is unique in this aspect: it is a single system showing visualizations in both dashboard and slide-deck formats. Although slide decks have well-researched limitations~\cite{knaflic2015storytelling}, the authors also argue how and why slide decks are advantageous when one wishes to narrate a sequence of events. Indeed, slide decks are yet to be dethroned as the predominant way to give presentations~\cite{schoeneborn2013pervasive}. Of course, other works have succeeded in recalling and retelling the knowledge discovery process by other means~\cite{xu2015analytic}. For instance, Ragan et al.~\cite{ragan2015characterizing} list many tools that collect and structure user behavior and insights in a queryable format. Nevertheless, none of them proposes a means to extract slide decks from their collected data to narrate the knowledge discovery process to third parties.

\section{Illustration of VAKG Usage}\label{sec:extensions}

Our first discussion of potential applications of VAKG focuses on Exploratory Data Analysis (EDA), which aims at providing users with easy access to data through visualizations and analytical capabilities~\cite{keim2010visual}. The existing Knowledge Model framework~\cite{federico2017role} describes why such EDA tools perform well: they provide a platform for users to utilize their \textit{tacit knowledge} and visualizations to harness and gather new insights over time. As was already noted, most such tools do not record the interactions of users, and even fewer record users' new insights and knowledge.
However, by using VAKG, this gap can be filled.

Currently, for EDA tools to track their usage, developers heavily utilize services such as Google Analytics~\cite{clifton2012advanced}, Datadog~\cite{datadog}, Elasticsearch~\cite{Kononenko2014MiningElasticsearch}, or custom metrics recorders from the major cloud providers. These services focus on recording logs, metrics, or transactions which occur while web-based systems are used. VAKG can be used similarly; however, VAKG utilizes a relationship-first approach due to its focus on KGs. To test how well VAKG couples with current tools being developed, the system ExPatt~\cite{christino2021explainable}, which was already using Google Analytics to track usage metrics, was used to populate a VAKG.

ExPatt is an EDA tool developed for the analysis of heterogeneous linked data, or more specifically, the concomitant analysis of GapMinder data and Wikipedia documents. Fig.~\ref{fig:expatt} shows the interface of ExPatt, where users can use line charts, maps, and curated lists of recommendations to explore and analyze world demographic datasets. For our example, we follow the ExPatt paper's proposed scenario where a user investigates the USA's and Russia's life expectancy indicators. Namely, the user first investigates the USA's life expectancy values, then adds Russia's onto the line chart for comparison, and finally verifies whether the same pattern seen so far is also present in the mortality rate indicator of both countries. VAKG was populated with this usage information, which can be visualized in the representation of Fig.~\ref{fig:vakgexpatt}.

\begin{figure}
 \centering
 \includegraphics[width=0.9\columnwidth]{pictures/expatt.png}
 \caption{ExPatt interface setup to explore world demographic indicators. Using a line chart (A), users can select findings (B) and analyze the indicators through a world map (C).
See ExPatt~\cite{christino2021explainable} for more in-depth information.}
 \label{fig:expatt}
 \vspace{-0.5cm}
\end{figure}

In this example, we see in Fig.~\ref{fig:vakgexpatt} the same $4$ sequences described in Sec.~\ref{sec:formal}. At first, the user does not have any insight and the VA tool shows some visualizations; as the user's interest is in comparing Russia and the USA, the update sequence $U^{va}_t$ shows which interaction the user performed, while the state sequence $C^{va}_t$ only describes what is being displayed by the VA tool at each point in time. Similarly, the two knowledge sequences at the bottom of Fig.~\ref{fig:vakgexpatt} show the user's knowledge gain as the state sequence $C^u_t$, while the user interests that cause interactions with the VA tool are shown as the update sequence $U^u_t$.

In addition to the example seen so far, we also investigated the user evaluation process of ExPatt, where multiple users performed certain interactions with a specified goal.
The full description of this evaluation can be seen in ExPatt's publication~\cite{christino2021explainable}. By analyzing the patterns used by users, we confirmed several of the examples of Fig.~\ref{fig:vakgexamples}. Namely, pattern (A) was always observed when the VA tool was used by multiple users, because they always took slightly different interaction sequences with the tool; patterns (C) and (D) were observed multiple times while the users attempted to answer the survey's questions; and pattern (B) also always happened, since the users took different paths while reaching the same goal with the same final answer to the survey. Furthermore, the summarization extension of Sec.~\ref{sec:summarization} was used since, for the entire sequence of actions, each participant only reached one new insight in order to answer the survey. Unfortunately, if participants gained more insights during this process, ExPatt did not collect that information.

Once the graph data structure is created, graph analysis techniques can be used on top of VAKG to identify patterns in the user's analysis, suggest next-step recommendations, or suggest knowledge generated from previous users who took the same analysis paths as the current user. For instance, many users in ExPatt did not reach the survey's answer using the minimum path, but now this can be verified by using the graph-theoretic method of minimum path analysis. This process is exemplified in Fig.~\ref{fig:vakgshortestpath}.

\begin{figure}
 \centering
 \includegraphics[width=0.8\columnwidth]{pictures/vakgshortestpath.png}
 \caption{VAKG provides graph analysis capabilities. One example is shortest path analysis: if two users attempt to reach the same goal through different interactivity paths, VAKG will display the minimum interactivity path (left). Similarly, if two users progressively gain new insights to reach a common new piece of knowledge, VAKG will display the minimum path of knowledge gain (right).}
 \label{fig:vakgshortestpath}
 \vspace{-0.65cm}
\end{figure}

\section{Hypothetical VAKG Usage}\label{sec:future}

Just like ExPatt, VAKG is a good fit for VA systems where the user has access to a dashboard of linked visualizations to perform simple data analysis, such as GapMinder~\cite{rosling2012gapminderorg}, Covid-19 dashboards~\cite{bccovid}, national census websites~\cite{canadadashboard}, or other similar websites such as the Datatool project~\cite{engagenovascotia}. If VAKG is set up within one of these systems, the resulting graph will contain the complete state of the dashboard at a given time, $C^{va}_t$, including information on the data and filters selected. When the user interacts with the visual representations, the system sends the interaction information as a dictionary structure to VAKG, which generates a new update node $U^{va}_t$ following $C^{va}_{t+1} = U^{va}_t(C^{va}_t)$, creating once again many of the patterns seen in Fig.~\ref{fig:vakgexamples}. In this simple setup, the system will successfully record all interactions over time within VAKG. However, the user has no intuitive way to convey whether they have acquired any new knowledge $U^u_t$. In order to know when users identify any new insights or knowledge within their analysis, manual feedback (e.g., annotations) is required from the user at any or every interaction; in this example, we expect the user to provide all their tacit knowledge manually.
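As a rough sketch of how such a dashboard might report these dictionaries to a VAKG service (the endpoint names and payload fields below are hypothetical, in the spirit of the VAKGI implementation presented later):

\begin{verbatim}
# Sketch: reporting one interaction and one annotation to a VAKG
# service as dictionaries. Endpoint names and payload fields are
# hypothetical.
import requests

interaction = {                  # becomes the update node U^va_t
    "timestamp": "2021-06-25T10:00:00",
    "action": "add_series",
    "params": {"indicator": "life_expectancy",
               "country": "Russia"},
}
new_state = {                    # property map of C^va_{t+1}
    "dataset": "gapminder", "chart": "line",
    "filters": {"country": ["USA", "Russia"]},
}
requests.post("https://vakg.example.org/api/computer/update",
              json={"update": interaction, "state": new_state})

# A manual annotation is reported the same way, creating a Human
# Update U^u_t and, if new, a Human State C^u_t:
requests.post("https://vakg.example.org/api/human/update",
              json={"insight": "Russia's life expectancy dips in the 1990s"})
\end{verbatim}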
If users perform multiple interactions before any new tacit-knowledge feedback, however, the system will be required to use the \textit{update summary extension} of Sec.~\ref{sec:summarization}.

Another similar example is the usage of VAKG within statistical analysis tools~\cite{abbasnasab2021comparing}, where datasets are loaded and processed and, through statistical analysis, the user can determine whether a particular hypothesis is valid or not. In order to fit this scenario to VAKG, the tool can use a similar setup, where the tool state nodes $C^{va}_t$ and update nodes $U^{va}_t$ capture the actions taken by the user, such as ``inclusion of dataset X'', and where any statistical analysis method that generates new knowledge yields new user nodes $C^{u}_t$ and $U^{u}_t$. This way, we expect that this VAKG would be able to explain how different users utilize a certain statistical analysis tool (Fig.~\ref{fig:vakgexamples}) and whether the users are taking the optimal paths to reach the needed conclusions (Fig.~\ref{fig:vakgshortestpath}); of course, VAKG would also record all new knowledge generated by the users for further usage.

So far, we have exemplified VAKG only containing the KG itself. However, VAKG can also be interfaced with external data to track updates of specific external resources; due to limited space in this paper, such extensions and examples were omitted. Nevertheless, in summary, in addition to tracking the states and updates of the tool $va$ and the user's tacit knowledge $u$, VAKG can track updates in a dataset $D$ that changes over time by using the update node's property map $U^{va}_t$ to also record the whole dataset state and any descriptors, such as averages and outliers, and the filters within the visualizations. Another approach is to track an external dataset using hyperlinks; in case multiple datasets $D_n$ are being used within a single VA tool $va$, VAKG can track hyperlinks to all datasets, including which partition of each dataset is being shown by the VA tool.

In the given examples, the underlying dataset does not change during analysis. However, if the dataset is running stock market information, both the stock market price over time and the numeric differences between each of the prices over time can also be part of VAKG's state nodes $C^{va}_t$ and update nodes $U^{va}_t$, respectively. Further applications, such as weather data or user-tracking data, can be considered VA tool updates $U^{va}_t$, and information on the user's emotions from sensor data, such as 3D scanners or biometric sensors, can be considered user updates $U^u_t$. VAKG can even be applied to software analysis if the developer uses git commits, issue trackers, and user feedback to update a VA tool's code over time.

Though we envision many examples, all follow the same process. First, we model the user interactivity and user knowledge gain of each example and then fit this model into the VAKG structure of a VA tool and user knowledge of Fig.~\ref{fig:teaser}. Although each example poses its own challenges in how to best perform this fit, once it is done, we have shown that VAKG provides valuable capabilities: recording the user interaction information over time, recording the user's knowledge gain process, and analyzing usage patterns.

\vspace{-0.25cm}
\subsection{VAKGI}

As a final contribution, we also created a sample implementation of the proposed VAKG, which we call VAKGI.
Though useful for testing, it is important to note that VAKGI is not the same as VAKG, but only an implementation of some aspects of VAKG within a specified domain. VAKGI uses a multi-tenant, web-accessible REST API architecture, allowing any tool or website to easily attach to and try out VAKG. The Python implementation of the VAKGI server~\cite{leonardo_christino_2021_5750019} connects to a Neo4j~\cite{noauthororeditorneo4j} server as the graph database back-end. The expected architecture can be seen in Fig.~\ref{fig:arch}. Using this, we have connected some existing tools created within our group for tests and analysis, such as ExPatt~\cite{christino2021explainable} and Datatool~\cite{engagenovascotia}.

\begin{figure}
 \centering
 \includegraphics[width=0.9\columnwidth]{pictures/VAKG arch.png}
 \caption{VAKGI: a sample implementation architecture of the VAKG framework. With VAKGI, any VA tool can report its $va$ updates and the user's $u$ updates and states as web API calls, similar to how one would report to Google Analytics. Once some usage data is populated, the VA tool using VAKGI also has access to the graph structure through API calls and the ability to fetch information, analysis results, or visualizations directly from Neo4j.}
 \label{fig:arch}
 \vspace{-0.65cm}
\end{figure}

\section{VAKG in Practice: Domain Expert Interviews}\label{sec:interview}

During the development of VAKG, Janio was kind enough to give input on the usefulness of tracking user interactivity for knowledge gain. Janio is an experienced entrepreneur and business owner in the international food supply chain, with over 30 years of experience as an international food importer and wholesaler and in regional restaurant chain ownership and management. The hour-long interview focused on his methods to store information and extract knowledge from Enterprise Resource Planning (ERP) and accounting systems. In summary, he said the usual process depends heavily on training employees to manually record sales and restocking data in a system. Once the data is stored and structured into a timeline of events, he generates reports and uses his own business experience to discover and explain insights from the data. We include a small excerpt of the interview below.

\textit{Q: Can you give examples of when and how you use provenance and analysis?}
One example is how I analyze the product purchase behavior of select customers on a month-to-month basis and the overall monthly product-based sales compared to previous years. I do this by extracting all reports from my accounting software and using MS Excel to merge them and analyze the result, sometimes on a month-by-month basis or a year-over-year basis, depending on what I am analyzing. (...) Among the main difficulties within the process are the required expectation of correct data entry by the employees and the amount of external knowledge I need to bring to the report so that the analysis can be meaningful. In our restaurants, employees would input the payment type of a purchase, such as cash or credit card. However, once the customer makes the payment, sometimes the type would swap, but the system is not updated, causing many discrepancies in the monthly accounting process.

\textit{Q: How was external knowledge used, and was it included in the reports?}
(...) managers typed in manually certain events, such as heavy rain or a large soccer match, while inputting the weekly restocking information. (...)
however, most of the time, the external knowledge I use is either only manually typed in during the analysis itself, after the report is generated, or not included at all, coming just from my head, my experience, and the internet.

We also interviewed Thiago, a Data Science General Manager of Credit Recovery at a large bank. Thiago has a Ph.D. in Machine Learning applied to Natural Language Processing and over a decade of industry experience in applied data science. In his interview, we discussed the past and present use of provenance analysis within decision-making processes.

\textit{Q: Can you give examples of when and how you use provenance and analysis?}
Bank transaction provenance, such as payments and money transfers, is extensively used for flagging fraudulent transactions. (...) My area then uses machine learning to predict the tendency of credit repayment. The treatment of temporal relationships in the data, such as the order of transactions, is sometimes part of an embedding step or forwarded to a machine learning model which can handle temporal data. A new project in my area uses provenance to better design a data science workflow. Currently, our bank does not have a well-optimized data analysis workflow. Still, by tracking the data analysis, we plan to record the work effort of each task and, in the future, analyze the results to improve the overall workflow or help us plan the future by predicting how long data analysis tasks will take.

\textit{Q: Would a temporal knowledge graph structure, such as the one from VAKG, aid in such applications? How?}
A co-worker and I have been investigating using a knowledge graph representation of customers that could describe behavioral patterns for better predictions during fraud analysis. (...) Another large intersection I see is using such a structure to record and analyze our Data Science pipeline. With such recording, we envision predicting a more optimized sequence of tasks required for a new data science project given a description of the goals or problems we want to solve.

We use these interviews to check the significance of our motivation and goals in the wild and to gather insights into future work for VAKG when applied to real business analytical processes. For instance, Janio confirmed that the ERP and accounting software he has experience with essentially had data provenance but rarely tracked any external information at the time the data was changed, such as the reasoning for why an employee changed the data (\textbf{G1}). Janio said he manually had to track down these extra pieces of information through an involved and time-consuming cross-check process and use external software to merge and analyze them (\textbf{G2} \& \textbf{G3}).

Similarly, Thiago's usage of provenance requires significant work by data scientists to process the data for analysis (\textbf{G2} \& \textbf{G3}), and his wish to better understand this data science pipeline through provenance and knowledge graphs indicates both the lack and the potential of using VAKG (\textbf{G1}).
These examples show that VAKG's focus on tracking the user's knowledge gain in tandem with user interactivity aims to aid in exactly this problem: the analysis of data-oriented workflows that require or would benefit from the connection between data provenance, external data (e.g., supply chain or bank transactions), and user knowledge gain (e.g., intentions and behavior).
\section{Introduction}

One of the main purposes of any Visual Analytics (VA) tool is to allow user-guided data analysis. This process is known to be guided by user interactivity towards knowledge generation~\cite{sacha2014knowledge}. The understanding of what knowledge is, how it is formed, and how one can model such knowledge generation into a shareable format has been much investigated within the Visual Analytics (VA)~\cite{collins2018guidance} and Knowledge Graph (KG)~\cite{chen2020review} fields. For instance, \cite{collins2018guidance} describe in detail what intelligent guidance is and how it provides digestible and interpretable information to users from visualizations; a tool applies intelligent guidance, for example, by reducing the cognitive load and visual bias that users interacting with a visual analytics tool may experience. Since the analysis of users' behavior depends on their pre-conceived tacit knowledge~\cite{federico2017role} and their own interest when starting an analysis process, \cite{collins2018guidance} also describe a knowledge model where VA is the means for users to generate and amass knowledge through visual interactivity. In this context, the authors describe KGs as a representation of user-generated knowledge.

Explicit knowledge, however, is treated differently. Since this type of knowledge is defined as what is explicitly described within a dataset~\cite{federico2017role}, visualizations employ visual artifacts, such as annotations or bookmarks, to encode an \textit{explicit representation of knowledge} as part of a tool~\cite{federico2017role}, or encode it as a data structure, such as a Knowledge Graph (KG). In order for a KG to provide value for users, the literature classifies the usage of KGs into three distinct reasoning types~\cite{chen2020review}: rule-based reasoning, distributed representation-based reasoning, and neural network-based reasoning. Although KGs are used within applications such as recommender systems, health analysis systems, fintech, and question-answering systems~\cite{chen2020review}, each individual application uses KGs in a different way. In short, we see KGs being interpreted both as an abstract representation of user-generated knowledge~\cite{collins2018guidance} and as a graph-structured data model to store interlinked concepts~\cite{chen2020review}.

While KGs provide a single source in which to store and structure knowledge, VA thrives in situations where users are required to use interactivity for the gradual aggregation of insights, findings, and knowledge~\cite{sacha2014knowledge, federico2017role}. As part of the gradual process of VA, data analysts are required to source multiple datasets and perform well-known data mining steps, such as pre-processing and cross-analysis, before the creation of a visualization solution. The area of DataSpaces (DSs), however, makes the understanding and analysis of multiple heterogeneous datasets an integral part of the analytic process~\cite{collins2018guidance, golshan2017data} instead of delegating it to a mere pre-processing step.
\n\nIn this context, KGs have been shown to pair well with DSs~\\cite{beheshti2018corekg} in order to allow a multi-level loose data linkage pattern. Also, the DS \\textit{pay-as-you-go} mindset can be considered similar to VA's gradual aggregation of knowledge through user interactivity. \nHowever, a combined solution has yet to be seen within visualization or exploratory data analysis contexts, which are core components of VA.\nTherefore, in this state-of-the-art report, we provide a structured, systematic, and accessible mapping of the usage of KGs and DSs for structuring and analyzing knowledge in the VA literature, by analyzing the most relevant VA research that involves these concepts. \nSome of the questions we answer are: \n\\squishlist\n \\item What strategies have been employed to utilize KG and DS techniques to develop VA tools?\n \\item How does one visualize KG and\/or DS concepts\/data within an exploratory visual data analysis context, and with what goals?\n \\item How does the user's gradual accumulation of knowledge within VA compare to KGs and DSs, and how do they differ?\n \\item What are the state-of-the-art methods to structure and analyze explicit and tacit knowledge?\n \\item Which challenges remain unsolved?\n\\squishend\nWe believe our survey will prove useful to researchers in all three fields (VA, KGs, and DSs) as an overview of the intersection of the fields and as a reference list of challenges and gaps in the literature. \n\n\n\\section{Related Surveys}\n\nThe application of DSs is discussed by several surveys~\\cite{kasamani2013survey, curry2020real} and can be partitioned into a catalog and browser of datasets, a unified queryable repository of datasets, and a stream funnel of real-time datasets. For example, \\cite{curry2020real} discusses DSs for real-time Internet of Things (IoT)-based sensors. Further usages of DSs are personal information management, personal health management, science, and, arguably, the web itself~\\cite{franklin2008first}. The concept of linking data is also found in Data Integration (DI), a separate, vast research field that includes surveys in the context of VA, such as entity matching~\\cite{kopcke2010frameworks}, entity resolution~\\cite{kang2008interactive}, entity deduplication~\\cite{morton2016view, muelder2016visual, kang2008interactive, bilgic2006d}, and semantic resolution~\\cite{savelyev2014interactive, li2021kg4vis, moreauvisual}. 
Although we see some surveys and research articles that discuss KGs and DSs, the literature has yet to bridge the gap of how visualization and VA can benefit from these areas. Therefore, it is advantageous for us to analyze the intersections of these areas with VA and expose the benefits they may bring.\n\nAmong the many applications of KGs, we find surveys originating from the Semantic Web~\\cite{ferrara2011data} that discuss loosely linking many heterogeneous datasets~\\cite{zou2020survey, nguyen2020knowledge, auer2007dbpedia}. Of course, KGs have many other parallels and connections to graph research at large, including Graph Neural Networks (GNNs)~\\cite{jin2020gnnvis}, graph visualizations~\\cite{chang2016appgrouper, he2019aloha}, and graph network operations~\\cite{viegas2004social}. However, the concept of using KGs to store and analyze the user's knowledge-gathering process is still somewhat unknown within VA: although many publications tackle the broader \\textit{Knowledge Generation} problem~\\cite{wang2016survey, federico2017role}, and others attempt to perform a visual analysis of existing KGs~\\cite{chang2009defining, xu2020survey}, little effort has gone into utilizing KGs within VA itself. We intend to discuss this limitation and provide a clearer distinction and definition of KGs, their core concepts, and their potential use cases within VA. \n\nThere are many other related concepts, such as databases, networking, multi-dataset machine learning, time-series machine learning, and graph neural networks; however, we will limit our survey to the intersection of the three areas while also exemplifying under-explored avenues of research. \n\n\\section{Relevance to the Community}\n\nSimilar to how the term ``Visual Analytics'' may sometimes be misinterpreted or misused in other fields of study, we found that many usages of terms like ``graphs'', ``knowledge modeling'', and ``multiple heterogeneous datasets'' within the VA literature have been disparate from their respective research fields. This has left much potential research utilizing said concepts unexplored. A simple search for the term DS among TVCG publications shows that it is only used as a ``virtual reality space'' in some VA articles and nothing more; similarly, the term ``Knowledge Graph'' is used loosely as a ``visualization of knowledge'' rather than understanding ``graph'' as in graph networks. Not only is it hard to find connections between the fields, but the few connections found reveal a lack of activity that deserves more attention.\n\nThere are many possible stakeholders that may be interested in our survey, among which the most prominent are VA researchers and developers themselves. During VA development, many limitations related to DSs are brought up, such as limitations on data accessibility, data quality, and data linkage. \nTherefore, future developments of graph-based visualizations of knowledge~\\cite{he2019aloha}, heterogeneous linked data~\\cite{de2020quedi}, and graph network applications in VA~\\cite{wu2016survey} would be aided by our survey. By referring to our survey, these researchers would have an overview of the three areas of interest and of the potentially under-researched overlaps between them. 
Furthermore, many of these authors either are not core visualization researchers or are specialized solely in visualization; our survey thus attempts to bridge this gap through a list of applications of KGs and DSs within a VA context.\n\n\\section{Review methodology}\n\nRelevant and recent surveys and books from the fields of KGs~\\cite{chen2020review} and DSs~\\cite{singh2011survey, kasamani2013survey, curry2020real} were used to identify both an overview of each area and the current state-of-the-art research. In addition, surveys in knowledge modeling, theoretical discussions of knowledge, heterogeneous dataset data-mining, and data linkage within VA journals and conferences (VIS, EuroVis, TVCG, and CGF) were also used to bring into perspective the current usage of KG and DS concepts within VA. We follow approaches similar to those presented by~\\cite{chatzimparmpas2020state}. Examples of papers selected are~\\cite{cashman2020cava, collins2018guidance, li2021kg4vis}.\n\nThe proposed structure of the survey will be a systematic one. After presenting a succinct overview of each area, we will collect, from a preliminary analysis of the current research, the concepts that intersect KGs, DSs, and\/or VA, such as \\textit{heterogeneous data processing} or \\textit{recording user-generated knowledge}, and then categorize the relevant literature according to whether or not it uses these concepts. In addition to complexity and novelty, the structure will account for the conceptualization of knowledge~\\cite{wang2016survey, federico2017role, wang2018ripplenet, ferrara2011data, golshan2017data} to link the three core areas of the survey. The results of this categorization will provide a clear map of the current research landscape and of the opportunities that deserve more attention. 
Further discussion of challenges and potential ideas will be given in order to inspire and prepare researchers to fill this gap.\n\n\n\\section{Planned Structure}\n\nThe envisioned structure of the paper is as follows: (1) an introduction providing the usual overview, which accounts for the purpose of the paper, its motivation, and its research contributions; (2) a background with an in-depth description of the context, i.e.,\nKnowledge Bases, KGs, DSs, Data Integration, and other related concepts; (3) related work describing similar surveys and how ours differs from them; (4) a review methodology explaining how the literature review has been carried out; (5) a description of the categorization developed and used for the systematic review; (6) the state-of-the-art as seen through the lens of the categorization; (7) challenges and discussions; and (8) opportunities for future research.\n\n\n\\nocite{*}\n\\bibliographystyle{eurovisDefinitions\/eg-alpha-doi} \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\begin{figure}[h]\n\\captionsetup[subfigure]{labelformat=empty}\n\\begin{tabular}{ccccccc}\n\\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/style.jpg} \\\\\nBN Matching & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/norm\/vgg19-bn_0layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/norm\/vgg19-bn_1layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/norm\/vgg19-bn_2layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/norm\/vgg19-bn_3layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/norm\/vgg19-bn_4layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/norm\/vgg19-bn_5layers_pretrained.jpg} \\\\\nGatys et al. & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/gram\/vgg19_0layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/gram\/vgg19_1layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/gram\/vgg19_2layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/gram\/vgg19_3layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/gram\/vgg19_4layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/gram\/vgg19_5layers_pretrained.jpg} \\\\\nOurs & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/wass\/vgg19_0layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/wass\/vgg19_1layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/wass\/vgg19_2layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/wass\/vgg19_3layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/wass\/vgg19_4layers_pretrained.jpg} & \\includegraphics[width=0.10\\textwidth]{figures\/style_reps\/starry_night\/wass\/vgg19_5layers_pretrained.jpg}\\\\\n& Raw Pixels & Layer 1 & Layer 2 & Layer 3 & Layer 4 & Layer 5\n\\end{tabular}\n\\caption{\\small{Style representation of different methods. 
We compare 1\\textsuperscript{st} and 2\\textsuperscript{nd} order statistics (BN Matching and \\cite{gatys2015neural}) to our method, which uses the Wasserstein metric. Starting from layer 2, 1\\textsuperscript{st} and 2\\textsuperscript{nd} order statistics fail to capture significantly higher-level textures, unlike the Wasserstein metric.}}\n\\label{fig:starry-night}\n\\end{figure}\n\n\\subsection{Neural Style Transfer}\nIn 2015, \\cite{gatys2015neural} introduced neural style transfer (NST), a powerful image generation technique that uses a convolutional neural network (CNN) to merge the content of one image with the style of another. There are many methods of style transfer nowadays, but since they were the first to introduce it, we refer to their method as traditional style transfer. Content is defined as the semantic information of an image (e.g., the image is of a dog sitting on a porch), while style is defined as the textural information of an image (e.g., rough, smooth, or colorful).\n\nAt a high level, the procedure for neural style transfer is to optimize the image we are generating (the generated image) with respect to the content loss and style loss of the neural network. Content loss is defined simply as the element-wise difference between the corresponding feature maps of the content image and the generated image, and this difference is usually measured with the mean squared error (MSE). Style loss, on the other hand, has many different interpretations. \\cite{gatys2015neural} defined style loss as the difference between the Gramian matrices of the feature maps of the style image and the generated image, but it was unclear how this formula was derived.\n\nFor a while, much of neural style transfer, specifically regarding the question ``what is style?'', remained a mystery. This led to the work of \\cite{li2017demystifying}, which showed that style can be interpreted simply as the \\textit{distribution of features}, and that style loss is the distance between the feature distributions of the style image and the generated image. However, contemporary methods still continue to use distribution matching based on 1\\textsuperscript{st} or 2\\textsuperscript{nd} order statistics. While fast and cheap, these methods are insufficient because they cannot discriminate between arbitrary pairs of distinct probability distributions. Thus, they cannot fully extract the style from the style image and transfer it to the generated image.\n\n\\subsection{Our contributions to NST}\nBuilding off of the idea that style is the \\textit{distribution} of features, we hypothesized that redefining style loss under the popular distribution distance metric, the Wasserstein distance, would be a superior alternative. The Wasserstein distance can always discriminate between two nonidentical probability distributions. Our experiments show that Wasserstein style transfer achieves significantly more appealing results. The disadvantage is that it requires more computation than contemporary methods.\n\nIn the spirit of distribution matching, we also connect neural style transfer to the class of generative adversarial networks.\n\n\\section{Related work}\n\\subsection{Fast approximations}\nTraditional style transfer is a slow and iterative optimization process, taking up to a few minutes to complete. 
Thus, there have been several works focusing on improving the speed of style transfer by approximating the style loss with cheaper statistical metrics.\n\n\\vspace{1em}\n\\noindent\n\\textbf{1\\textsuperscript{st} order statistics} \nA popular method is to match the mean and standard deviation of the features. Examples include batch normalization statistics matching \\cite{li2016revisiting, li2017demystifying}, instance normalization \\cite{ulyanov2016instance, ulyanov2017improved}, conditional instance normalization \\cite{dumoulin2016learned}, and adaptive instance normalization \\cite{huang2017arbitrary, karras2018stylebased}.\nAll these methods can be classified as distribution matching using 1\\textsuperscript{st} order statistics.\n\nDue to their speed, style transfer methods based on 1\\textsuperscript{st} order statistics are among the most popular for commercial use. They have also found their way into other fields of machine learning. One notable example is \\cite{karras2018stylebased}'s StyleGAN, a generative adversarial network that uses the adaptive instance normalization formulation of style to produce higher quality fake data.\n\n\\vspace{1em}\n\\noindent\n\\textbf{2\\textsuperscript{nd} order statistics} Traditional style transfer (\\cite{gatys2015neural}) uses 2\\textsuperscript{nd} order statistics matching (see Section~\\ref{section:background} for more details).\n\\cite{li2017universal} build on this idea by matching the covariance of the generated features with the style features. Similarly to \\cite{huang2017arbitrary}, they take a pretrained autoencoder and perform linear transformations to match the features at various layers of the autoencoder. However, instead of matching the mean and standard deviation of the features, \\cite{li2017universal} match the \\textit{covariance} of the features.\nCompared to \\cite{gatys2015neural}, their algorithm produces similar results and is significantly faster. However, it is not as fast as the 1\\textsuperscript{st} order statistics methods.\n\n\\subsection{Deep generative modeling}\nOther works that aim to improve the speed of style transfer use neural networks that can perform style transfer in one forward pass: \\cite{johnson2016perceptual, ulyanov2016texture, li2016precomputed, dumoulin2016learned}. Such methods are even faster than the 1\\textsuperscript{st} order statistics methods mentioned earlier, but the drawbacks are: \n(a) they are qualitatively worse and produce less diverse style transfers, and\n(b) they specialize in a finite number of styles and cannot be applied to styles that they have not been trained on. \\cite{ulyanov2017improved} addressed the issue of quality and diversity by integrating the 1\\textsuperscript{st} order statistics methods, namely instance normalization, into their neural network architecture.\n\n\\subsection{Quality control}\nThere have also been works that aim to improve the quality of and control over the style transfers. \\cite{gatys2017controlling} extend their traditional style transfer algorithm to allow fine-grained control over spatial location, color, and spatial scale. \\cite{ulyanov2017improved} provide a method to improve the diversity of style transfers.\n\n\\cite{risser2017stable} stabilize traditional style transfer by adding histogram losses. 
They also provide methods for localized control of style transfer using multi-resolution (pyramidal) image techniques.\n\nWhile these methods do indeed improve style transfer quality, they do not solve the issue of fully extracting the style from the style image, which we address. \\cite{jing2019neural} recently conducted a comprehensive review of style transfer methods, and they still regard \\cite{gatys2015neural}'s traditional method as the gold standard today.\n\n\\section{Background}\\label{section:background}\n\\subsection{Traditional style transfer uses 2\\textsuperscript{nd} order statistics}\nTraditional style transfer uses the mean squared error (MSE) of the Gramian matrix of the feature maps, which \\cite{li2017demystifying} showed is mathematically equivalent to the Maximum Mean Discrepancy (MMD) of the features using the quadratic kernel $\\kq(x,y) = (x^{\\T}y)^2$. In other words, traditional style transfer matches the features using 2\\textsuperscript{nd} order statistics.\n\nRecall the following equivalent definitions of the MMD:\n\\begin{align}\n \\MMD(p, q; k) &= \\sup_{\\norm{f}_\\Hilbert \\le 1}\\left(\\EE_x[f(x)] - \\EE_y[f(y)]\\right)\\\\\n &= \\norm{\\mu_p - \\mu_q}\n\\end{align}\nwhere $p,q$ are two arbitrary probability distributions, $f(x) = \\inner{f, k(x, \\cdot)}$ is an RKHS function for the kernel $k$, and $\\mu_p = \\EE_x[k(x,\\cdot)], \\mu_q = \\EE_y[k(y,\\cdot)]$ are the mean embeddings of $p,q$ respectively under the feature map $k(x, \\cdot)$.\n\nFurthermore, consider the definition of the quadratic kernel, $\\kq(x,y) = (x^\\T y)^2 = \\inner{x, y}^2$. One can show that the corresponding feature map $\\kq(x, \\cdot)$ is equal to \n\\begin{align}\n \\kq(x, \\cdot) = [x_n^2, \\ldots, x_1^2, \\sqrt{2} x_n x_{n-1}, \\ldots, \\sqrt{2} x_n x_1, \\sqrt{2} x_{n-1} x_{n-2}, \\ldots, \\sqrt{2} x_{n-1} x_{1}, \\ldots, \\sqrt{2} x_{2} x_{1}] \\label{eq:quad-phi}\n\\end{align}\nIntuitively, the feature map represents the individual elements and all pairwise combinations of the elements. Thus, it captures second order statistics. \n\nNevertheless, this second order matching is limited, and cannot discriminate conditional probabilities involving two or more other dimensions. One limitation specific to the $\\MMD$ under the quadratic kernel is that it is invariant to negation (i.e., it cannot distinguish $p$ from $q$ when $p \\buildrel d \\over = -q$). Consider this simple example of two probability distributions $p, q$ in $n$ dimensions. Let $p$ be concentrated at $\\{1\\}^n$ and $q$ be concentrated at $\\{-1\\}^n$. Then $\\MMD(p, q;\\kq) = 0$. Note that the MSE of the Gramian matrix between $p$ and $q$ would also be $0$, since the two are mathematically equivalent. These two distributions are clearly different, yet the MMD cannot discriminate between them. For reference, the Wasserstein metric under the Euclidean distance, which we use in our method, would output $2\\sqrt{n}$.
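\n\nAs a quick numerical illustration of this negation invariance (our own sketch, not part of any of the cited implementations), note that the Gramian matrix, and hence the traditional style loss, cannot tell a set of features apart from its negation:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nF = rng.standard_normal((64, 8))  # 64 features of dimension 8\n\ndef gram(feats):\n    # Gramian matrix used by the traditional style loss.\n    return feats.T @ feats / feats.shape[0]\n\n# Negating every feature leaves the Gramian unchanged:\nprint(np.allclose(gram(F), gram(-F)))  # True\n\\end{verbatim}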
\n\nFurthermore, if one runs \\cite{gatys2015neural}'s style transfer on VGG19-BN (VGG19 with Batch Normalization), one finds that it performs significantly worse. We believe a major reason is that batch normalization standardizes features to have zero mean, which makes it more likely that the style features and the generated features lie on opposite sides of the origin. Since the MMD with the quadratic kernel cannot distinguish between negated probability distributions, \\cite{gatys2015neural}'s method fails.\n\n\\subsection{Wasserstein distance}\nIntuitively, the Wasserstein loss can be interpreted as the minimum amount of work needed to move one distribution of mass to a target location. This is why it is sometimes known as the Earth-Mover distance. Let $p,q$ be two probability distributions.\nWe define the Wasserstein distance between them as\n\\begin{align}\n \\W(p, q) = \\inf_{\\gamma\\in \\Gamma(p,q)}\\EE_{x,y\\sim\\gamma}[d(x,y)]\n\\end{align}\nwhere $\\Gamma(p,q)$ is the set of all joint probability distributions with marginals $p$ and $q$, and $d$ is some distance function (e.g., the Euclidean distance).\n\nUnlike 1\\textsuperscript{st} or 2\\textsuperscript{nd} order statistics, the Wasserstein metric can always discriminate between two nonidentical probability distributions (i.e., $\\forall p \\ndeq q:\\quad \\W(p, q) > 0$). An easy way to see this is that when $p \\ndeq q$, any $\\gamma\\in \\Gamma(p, q)$ must place mass off the diagonal, i.e., there exist $x,y$ such that $x\\neq y$ and $\\gamma(x, y) > 0$. Since $x\\neq y$, we have $d(x,y) > 0$, which means $\\inf_{\\gamma\\in \\Gamma(p,q)}\\EE_{x,y\\sim\\gamma}[d(x,y)] > 0$.\n\nThe most notable use of the Wasserstein distance in deep learning is by \\cite{arjovsky2017wasserstein}, who used the Wasserstein distance as the generator loss in generative adversarial networks (GANs). In their work, which is now known as the Wasserstein-GAN (W-GAN), the discriminator is a neural network that is trained to approximate the Wasserstein distance between the fake data distribution from the generator and the real data distribution. \\cite{arjovsky2017wasserstein} also provide an apt argument for why the Wasserstein distance is superior in practice to other distribution metrics, including the Jensen-Shannon divergence, the Kullback--Leibler divergence, and Total Variation.\n\nThis approximation of the Wasserstein distance using neural networks was later improved by \\cite{gulrajani2017improved}, and is now known as the Wasserstein Gradient Penalty (Wasserstein-GP).\nWe employ \\cite{gulrajani2017improved}'s method to approximate the Wasserstein distance between the features of the style image and the generated image.\n\n\\section{Methods}\nThe source code is available at: \\url{https:\/\/github.com\/aigagror\/wasserstein-style-transfer}\n\n\\subsection{Training}\n\nLet $\\x^*, \\x_c, \\x_s$ represent the generated image, content image, and style image respectively.\nLet the feature maps of $\\x^*, \\x_c, \\x_s$ in layer $l$ of our CNN be denoted by $\\Fl^l \\in \\R^{N_l\\times M_l}, \\Pl^l \\in \\R^{N_l\\times M_l}, \\Sl^l \\in \\R^{N_l\\times M_l}$ respectively, where $N_l$ is the number of feature maps in layer $l$ and $M_l$ is the height times the width of the feature map. For this paper, let us call the columns of $\\Fl^l,\\Pl^l, \\Sl^l$ \\textbf{features}. Intuitively, a feature can be interpreted as a ``pixel'' of the feature map.\n\nIn neural style transfer, we optimize the generated image with respect to the following loss\n\n\\begin{align}\n \\Ll = \\alpha \\Ls + (1 - \\alpha) \\Lc \\label{eq:style-transfer} \n\\end{align}\nwhere the content loss $\\Lc$ is defined as the mean squared error between the spatially corresponding features:\n\\begin{align}\n \\Lc = \\frac{1}{2}\\sum_{i=1}^{N_l}\\sum_{j=1}^{M_l}(\\Fl^l_{ij} - \\Pl^l_{ij})^2 \n\\end{align}\nand the style loss $\\Ls$ is defined as a weighted sum of the distribution distances between the style's features and the generated image's features:\n\n\\begin{align}\n \\Ls^l &= \\D[\\Fl^l, \\Sl^l] & \\textit{style layer loss}\\\\\n \\Ls &= \\sum_l w_l \\Ls^l & \\textit{weighted sum of style layer losses}\n\\end{align}\nwhere $\\D$ is some distribution distance metric; in our case, it is the Wasserstein metric.
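\n\nThe following minimal PyTorch-style sketch (our own illustration; the function and variable names are assumptions, not the notation of the released implementation) shows how these pieces combine, with the distribution distance left pluggable:\n\\begin{verbatim}\nimport torch\n\ndef content_loss(F, P):\n    # Half the squared error between spatially corresponding features.\n    return 0.5 * ((F - P) ** 2).sum()\n\ndef style_loss(layer_pairs, dist, weights):\n    # Weighted sum over layers l of the distances D[F^l, S^l].\n    return sum(w * dist(F, S)\n               for (F, S), w in zip(layer_pairs, weights))\n\ndef total_loss(alpha, layer_pairs, F_c, P, weights, dist):\n    # L = alpha * L_s + (1 - alpha) * L_c\n    return (alpha * style_loss(layer_pairs, dist, weights)\n            + (1 - alpha) * content_loss(F_c, P))\n\\end{verbatim}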
\n\nNote that in traditional NST, the distribution distance metric is the Maximum Mean Discrepancy (MMD) with the quadratic kernel.\n\n\\subsection{Architecture}\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures\/model_architecture.pdf}\n \\caption{\\small{Model architecture of the Wasserstein style transfer}}\n \\label{fig:model}\n\\end{figure}\n\n\\subsection{Experimental Setup}\nWe test Wasserstein style transfer on VGG19 with batch normalization (VGG19-BN). We found that using VGG19 without batch normalization, which is the model other methods use, achieves the same results but requires more training steps to converge. We use 5 distinct layers for the style loss (conv1\\_1, conv2\\_1, conv3\\_1, conv4\\_1, conv5\\_1) and 1 layer for the content loss (conv4\\_1). These layers are also commonly used in other NST methods, and we purposefully chose them for fair comparisons. To calculate the style loss using the Wasserstein distance, we attach a discriminator network at each corresponding CNN layer. These discriminators approximate the Wasserstein distance between the style features and the generated features under Wasserstein-GP. The discriminators each have 3 hidden layers of dimension 256 with ReLU activations. Before the features are fed into the discriminators, they are first passed through an element-wise hyperbolic tangent (tanh) activation. We found this beneficial for keeping the training losses regularized and bounded. Figure \\ref{fig:model} illustrates our model.
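\n\nAs a sketch of how each such discriminator (critic) can be trained under the gradient-penalty formulation of \\cite{gulrajani2017improved} (again a minimal PyTorch-style illustration of ours; the penalty weight and shapes are assumptions, not the released code):\n\\begin{verbatim}\nimport torch\n\ndef gradient_penalty(critic, real, fake):\n    # Interpolate between style (real) and generated (fake) features.\n    eps = torch.rand(real.size(0), 1)\n    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)\n    grad, = torch.autograd.grad(critic(x_hat).sum(), x_hat,\n                                create_graph=True)\n    return ((grad.norm(2, dim=1) - 1) ** 2).mean()\n\ndef critic_loss(critic, style_feats, gen_feats, lam=10.0):\n    # Maximizing E[critic(style)] - E[critic(gen)] approximates the\n    # Wasserstein distance; we minimize its negation plus the penalty.\n    w_hat = critic(style_feats).mean() - critic(gen_feats).mean()\n    return -w_hat + lam * gradient_penalty(critic, style_feats,\n                                           gen_feats)\n\\end{verbatim}\nIn our setting, the critic's estimated gap serves as the style layer loss $\\Ls^l$, which the generated image is optimized to decrease.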
\n\nWe compare our results to \\cite{gatys2015neural}, \\cite{huang2017arbitrary}, and \\cite{ulyanov2017improved}, using fine-tuned $\\alpha$ values to achieve the same balance of content and style in their results. See the supplementary materials for more training details.\n\n\\section{Results} \\label{section:results}\n\n\\subsection{Comparison to other NST methods}\nWe run our Wasserstein style transfer on the same style-content pairs as other contemporary methods for comparison. See Figure~\\ref{fig:comparisons}. We think our transfers are of the best quality. Moreover, we think our method especially excels at capturing the color scheme and the higher level textures. For the sketch-sailboat pair (1\\textsuperscript{st} row), we used the first 3 layers of VGG19, and for the picasso-brad-pitt pair (4\\textsuperscript{th} row), we used the first 4 layers. This was done in order to prevent the semantic information of the style image from leaking into the generated image. \n\\begin{figure}[ht]\n\\captionsetup[subfigure]{labelformat=empty}\n\\begin{tabular}{ccccc}\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/sketch-sailboat-style-content.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--005.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--007.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--008.jpg} & \\includegraphics[width=0.16\\textwidth]{figures\/transfers\/sketch-sailboat-wass-vgg19-3layers-pretrained-fixed-0-25alpha.jpg}\\\\\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/picasso-lenna-style-content.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--017.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--019.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--020.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/picasso-lenna-wass-vgg19-5layers-pretrained-fixed-0-25alpha.jpg}\\\\\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/woman-with-hat-cornell-style-content.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--011.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--013.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--014.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/woman-with-hat-cornell-vgg19-bn-4layers-pretrained-0-125alpha}\\\\\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/picasso-selfportrait-brad-pitt-style-content.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--023.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--025.jpg} & \\includegraphics[width=0.16\\textwidth]{figures\/others\/image--026.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/picasso-selfportrait-brad-pitt-wass-vgg19-5layers-pretrained-fixed-0-1alpha.jpg}\\\\\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/la-muse-golden-gate-style-content.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--029.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--031.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/others\/image--032.jpg} &\n\\includegraphics[width=0.16\\textwidth]{figures\/transfers\/la-muse-golden-gate-wass-vgg19-5layers-pretrained-fixed-0-25alpha.jpg}\\\\\nStyle-Content & Huang \\textit{et al.} & Ulyanov \\textit{et al.} & Gatys \\textit{et al.} & Ours\n\\end{tabular}\n\\caption{\\small{Example style transfer results. Our method was most preferred overall in an Amazon Mechanical Turk survey.}}\n\\label{fig:comparisons}\n\\end{figure}\n\nWe conducted a human preference survey on Amazon Mechanical Turk using the images from Figure~\\ref{fig:comparisons}. For each survey question, we presented the style and content image pair and titled them ``style'' and ``content'' respectively. To clarify, we presented the style and content images separately, unlike the style-content overlay in our figures, so that users could see the style image fully. We also presented the transfers from all four methods with no labels, for anonymity. Our survey question was the following:\n\\begin{quote}\n \\textit{We merged the ``style'' and ``content'' of two images together using different methods. Select the merged image that best captures both the style and content.}\n\\end{quote}\n\nWe gathered 160 responses (32 responses for each of the 5 style-content pairs) from 47 unique users. 
Overall, our Wasserstein method was most preferred, with 59 votes (36.9\\% of all votes). \\cite{ulyanov2017improved} was the next highest with 48 votes (30.0\\%), followed by \\cite{gatys2015neural} with 31 votes (19.4\\%), and \\cite{huang2017arbitrary} with 22 votes (13.8\\%).\n\nFor each style-content pair, our method was most preferred, with the exception of the sketch-sailboat pair (first row of Figure~\\ref{fig:comparisons}), in which \\cite{ulyanov2017improved}'s method was most preferred.\n\n\\subsection{Style representation}\nWe compare style representations between 1\\textsuperscript{st} order statistics, 2\\textsuperscript{nd} order statistics, and the Wasserstein metric. See Figure~\\ref{fig:starry-night}. For the 1\\textsuperscript{st} order statistics, we use \\cite{li2017demystifying}'s batch-normalization statistics matching (BN Matching) to align the mean and standard deviation of the features under the style layer loss $\\Ls^l = (\\EE[\\Fl^l] - \\EE[\\Sl^l])^2 + (\\sigma(\\Fl^l) - \\sigma(\\Sl^l))^2$. For the 2\\textsuperscript{nd} order statistics, we use \\cite{gatys2015neural}'s definition of the MSE of the Gramian matrix. We also apply these style representations directly on the raw pixels themselves. The results are noisy images, but with the same color scheme as the original image. This is expected, because optimizing over the raw pixels involves no spatial dependencies whatsoever.
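\n\nFor concreteness, the BN Matching layer loss above admits a direct implementation (a sketch of ours, with per-channel statistics taken over the $M_l$ spatial positions):\n\\begin{verbatim}\nimport torch\n\ndef bn_matching_style_loss(F, S):\n    # F, S: (N_l, M_l) generated and style features at one layer.\n    # 1st order statistics: match per-channel mean and std.\n    return (((F.mean(dim=1) - S.mean(dim=1)) ** 2).sum()\n            + ((F.std(dim=1) - S.std(dim=1)) ** 2).sum())\n\\end{verbatim}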
\n\nWhat is most interesting is how our method's style representation compares to the others. Starting from layer 3, it is clear that our method is better at capturing the higher level textures, including the paint strokes, swirls, and even semantic features like the stars and the dark spires. In fact, at layer 5, our method is so good at discriminating such high level features that it nearly recreates the global structure of the image.\n\n\\subsection{Semantic Information Leaking}\nFor the first time in style transfer, we encounter the issue of leaking semantic information from the style into the content. As stated previously, our method is exceptional at capturing higher level textures from the deeper layers of the CNN model. However, higher level textures have a larger spatial footprint, which ultimately leads to capturing semantic information as well. See Figure~\\ref{fig:failure-cases-a} for examples. To avoid this issue, one can simply use shallower CNN layers or decrease the $\\alpha$ value. While most methods use the first 5 layers of VGG, in some of our transfers we had to restrict ourselves to the first 3 or 4 layers because our method would otherwise add the semantic information of the style image to the generated image.\n\n\\begin{figure}[H]\n \\centering\n \\begin{tabular}{cccc}\n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/parish_delaunaystyle_content.jpg} &\n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/parish_delaunay_vgg19-bn_5layers_pretrained_0-909alpha-8.jpg} &\n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/sailboat_sketch_style_content.jpg} &\n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/sailboat_sketch-vgg19-bn_5layers_pretrained_0-909alpha-10.jpg} \\\\\n \n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/goldengate_la_muse_style_content.jpg} &\n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/goldengate_la_muse-vgg19-bn_5layers_pretrained_0-909alpha-3.jpg} &\n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/brad_pitt_picasso_selfportrait_style_content.jpg} &\n \\includegraphics[width=0.15\\textwidth]{figures\/transfer-fails-a\/brad_pitt_picasso_selfportrait-vgg19-bn_5layers_pretrained_0-909alpha.jpg} \\\\\n \n \\end{tabular}\n \\caption{\\small{Our Wasserstein-distance-based style transfer method is prone to capturing the style ``too well''. This happens when we use layers that are too deep in the CNN model or $\\alpha$ is too high. In these cases, we begin to blur the line between high level textures and semantic content.}}\n \\label{fig:failure-cases-a}\n\\end{figure}\n\n\\section{Discussion}\n\n\\subsection{Style is a distribution of features}\nIn 2017, in an attempt to demystify NST, \\cite{li2017demystifying} hypothesized that style is simply the distribution of features and that style transfer is nothing more than feature distribution matching.\n\nConsider a feature in the first convolutional layer. Its value depends on the input image pixels in its receptive field. As we move to deeper layers, a feature's receptive field with respect to the input image grows linearly. Hence features in low level layers capture small spatial dependencies, and features in deeper layers capture larger spatial dependencies. In layman's terms, features in low layers capture low-level textures like color, while features in higher layers capture higher-level textures like roughness or strokes. Since we define style as the \\textit{distribution} of features, style can be informally described as the distribution of colors, roughness, smoothness, sharp edges, etc. Furthermore, since style is the distribution of features, it is globally spatially invariant. We find this idea that style is a distribution of features so appealing that we found it necessary to re-emphasize it here, because we believe it is not discussed enough in the community. We hope you find this idea equally appealing.\n\nAs a consequence, if we were to use just the raw pixels of the image as the features, the style representation should converge to an arbitrary image that has the same distribution of \\textit{colors} as the style image. 
Figure \\ref{fig:starry-night} supports this hypothesis.\n\n\\subsection{Style transfer is a special type of GAN}\n\\begin{figure}[t!]\n\\centering\n\\begin{subfigure}{0.4\\textwidth}\n\\includegraphics[width=0.8\\linewidth]{figures\/GAN.pdf} \n\\caption{GAN Setting}\n\\label{fig:gan-setting}\n\\end{subfigure}\n\\hspace{25mm}\n\\begin{subfigure}{0.4\\textwidth}\n\\includegraphics[width=0.8\\linewidth]{figures\/NST-GAN.pdf}\n\\caption{NST Setting}\n\\label{fig:style-transfer-setting}\n\\end{subfigure}\n\n\\caption{\\small{We argue that NST can be classified as a special type of GAN problem}}\n\\label{fig:gan-connection}\n\\end{figure}\nIn the spirit of defining style as a distribution of features, we argue that NST can be classified as a type of GAN framework via the following connections. See Figure~\\ref{fig:gan-connection} for an illustration.\n\n\\begin{enumerate}[leftmargin=*]\n \\item\n \\begin{enumerate}[leftmargin=*]\n \\item Under the traditional GAN setting, the discriminator and generator discriminate and align probability distributions in some physical space. The distribution of the real physical data is the result of some physical phenomenon that maps a distribution from some latent space to the physical space. The generator attempts to mimic this physical phenomenon with respect to the discriminator.\n \n \\item Under the NST setting, the discriminator and generator discriminate and align probability distributions of the \\textit{features} that are mapped by the CNN from either the style image (real latent distribution) or the generated image (fake latent distribution). More specifically, an element in the latent space is a patch of the image whose size is equal to the receptive field of the CNN layer. As a consequence, changing one pixel changes multiple latent elements, since multiple patches share the same pixel. In this setting, the CNN is both the generator and the real physical phenomenon, mapping the latent space to the physical space.\n \\end{enumerate}\n \n \\item\n \\begin{enumerate}[leftmargin=*]\n \\item Under the traditional GAN setting, the generator is usually fed some fixed latent distribution (e.g., a Gaussian distribution), and we optimize the generator network so that its mapped output distribution aligns with the real data distribution. In essence, we make the generator try to mimic the physical phenomenon.\n \n \\item Under the NST setting, this latent distribution is a distribution over image patches, and instead of fixing this distribution, we fix the CNN and optimize over the generated image patches. In essence, we try to align the distribution of image patches in the generated image with that of the style image. This aligns nicely with the idea that style transfer is a texture alignment process, because image patches are locally spatial features.\n \\end{enumerate}\n\\end{enumerate}\n\nSince the Wasserstein metric has had great success in improving GANs, we can expect similar benefits from its application to style transfer.\n\n\n\\section{Conclusion}\nWe provide a new alternative to style transfer that calculates the style loss using the Wasserstein distance. If one considers style as the distribution of features, then the Wasserstein metric is more effective than contemporary methods at discriminating between the style features and the generated features, which leads to higher quality transfers. 
Thus we set a new benchmark for style transfer quality.\n\n\n\\begin{ack}\n\\begin{itemize}\n \\item Thanks to Professor Rayadurgam Srikant for providing consultation on how to compare the Wasserstein distance against the MMD.\n \\item Thanks to Hossein Talebi for general advice on the writing of this paper.\n \\item Thanks to Bennett Ip for helping us with the initial developments of this project.\n\\end{itemize}\n\\end{ack}\n\n\\medskip\n\n\\newpage\n\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{} Cellular resolutions\n\n\\noindent Fix a subset $\\setdef{\\enma{\\mathbf{a}}_j}{j\\in I} \\subset\n\\enma{\\mathbb{Z}}^n\\!$, for an index set $I$ which need not be finite,\nand let $M $ be the monomial module\ngenerated by the monomials $m_j = \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_j}$, $ j \\in I$.\nLet $X$ be a {\\it regular cell complex\\\/} having $I$ as its set\nof vertices, and equipped with a choice of an\n{\\it incidence function} $\\varepsilon(F,F')$ on pairs of faces.\nWe recall from [BH, Section 6.2] that $\\,\\varepsilon\\,$\ntakes values in \\set{0,1,-1}, that\n$\\varepsilon(F,F') = 0$ unless $F'$ is a facet of $F$, that\n$\\varepsilon(\\{j\\},\\emptyset) =1$ for all vertices $j\\in I$, and\nthat for any codimension 2 face $F'$ of $F$,\n$$\\varepsilon(F,F_1) \\varepsilon(F_1,F') \\,\\, + \\,\\,\n\\varepsilon(F,F_2) \\varepsilon(F_2,F') \\quad = \\quad 0$$\nwhere $F_1$, $F_2$ are the two facets of $F$ containing $F'$.\nThe prototype of a regular cell complex is the set of faces of a\nconvex polytope. The incidence function $\\varepsilon$ defines a differential\n$\\partial$ which can be used to compute the homology of $X$. Define\nthe {\\it augmented oriented chain complex} $\\widetilde{C}(X;k) =\n\\mathop{\\hbox{$\\bigoplus$}}_{F\\in X}\\; kF$, with differential\n $$ \\partial F \\quad = \\quad \\sum_{F'\\in X} \\, \\varepsilon(F,F') \\, F'.$$\n The {\\it reduced cellular homology group} $\\widetilde{H}_i(X;k)$ is\nthe $i$th homology of $\\widetilde{C}(X;k)$, where faces of $X$\nare indexed by their dimension. The {\\it oriented\nchain complex} $C(X;k) = \\mathop{\\hbox{$\\bigoplus$}}_{F\\in X,\\, F\\ne\\emptyset}\\; kF$ is\nobtained from $\\widetilde{C}(X;k)$ by dropping the\ncontribution of the empty face.\nIt computes the ordinary homology groups $H_i(X;k)$ of $X$.\n\nThe cell complex $X$ inherits a $\\enma{\\mathbb{Z}}^n$-grading from the\ngenerators of $M$ as follows. Let $F$ be a nonempty face of $X$.\nWe identify $F$ with its set of vertices, a finite subset of $I$.\nSet $m_F := \\mathop{\\rm lcm}\\nolimits \\setdef{m_j}{j\\in F}$. The exponent vector of\nthe monomial $m_F$ is the {\\it join}\n$ \\, \\enma{\\mathbf{a}}_F := \\bigvee\\setdef{\\enma{\\mathbf{a}}_j}{j\\in F}$\nin $\\enma{\\mathbb{Z}}^n $. We call $\\enma{\\mathbf{a}}_F $\nthe {\\it degree} of the face $F$.\n\nHomogenizing the differential $\\partial$ of $C(X;k)$\nyields a $\\enma{\\mathbb{Z}}^n$-graded chain complex of $S$-modules. Let $SF$ be the free\n$S$-module with one generator $F$ in degree $\\enma{\\mathbf{a}}_F$. 
The {\\it\ncellular complex} $\\, \\enma{\\mathbf{F}}_X \\,$ is the $\\enma{\\mathbb{Z}}^n$-graded\n$S$-module $\\mathop{\\hbox{$\\bigoplus$}}_{F\\in X,\\, F\\ne\\emptyset} SF$ with differential\n $$ \\partial \\, F \\quad = \\quad \\sum_{F'\\in X,\\, F'\\ne\\emptyset} \\;\n\\varepsilon(F,F') \\; {m_F \\over m_{F'}} \\; F'.$$\n The homological degree of each face $F$ of $X$ is its dimension.\n\nFor each degree $\\enma{\\mathbf{b}}\\in\\enma{\\mathbb{Z}}^n$, let $X_{\\preceq\\enma{\\mathbf{b}}}$ be the\nsubcomplex of $X$ on the vertices of degree $\\preceq\\enma{\\mathbf{b}}$, and let\n$X_{\\prec\\enma{\\mathbf{b}}}$ be the subcomplex of $X_{\\preceq\\enma{\\mathbf{b}}}$ obtained by\ndeleting the faces of degree \\enma{\\mathbf{b}}.\nFor example, if there is a unique vertex $j$ of degree $\\preceq\\enma{\\mathbf{b}}$, and\n$\\enma{\\mathbf{a}}_j=\\enma{\\mathbf{b}}$, then $X_{\\preceq\\enma{\\mathbf{b}}} = \\set{\\set{j},\\emptyset}$ and\n$X_{\\prec\\enma{\\mathbf{b}}}=\\set{\\emptyset}$. A full subcomplex on no vertices is the\nacyclic complex \\set{}, so if there are no vertices of degree\n$\\preceq\\enma{\\mathbf{b}}$, then $X_{\\preceq\\enma{\\mathbf{b}}} = X_{\\prec\\enma{\\mathbf{b}}} = \\set{}$.\n\n The following proposition generalizes [BPS,~Lemma~2.1] to cell\ncomplexes:\n\n\\proposition{propD}\n The complex $\\enma{\\mathbf{F}}_X$ is a free resolution of $M$ if and only if\n$X_{\\preceq\\enma{\\mathbf{b}}}$ is acyclic over $k$ for all degrees $\\enma{\\mathbf{b}}$.\nIn this case we call $\\enma{\\mathbf{F}}_X$ a {\\it cellular resolution} of $M$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nThe complex $\\enma{\\mathbf{F}}_X$ is $\\enma{\\mathbb{Z}}^n$-graded. The degree \\enma{\\mathbf{b}}\\ part of $\\enma{\\mathbf{F}}_X$ is\nprecisely the oriented chain complex $C(X_{\\preceq\\enma{\\mathbf{b}}};k)$. Hence $\\enma{\\mathbf{F}}_X$ is a\nfree resolution of $M$ if and only if $H_0(X_{\\preceq\\enma{\\mathbf{b}}};k) \\cong k$ for\n$\\enma{\\mathbf{x}}^\\enma{\\mathbf{b}} \\in M$, and otherwise $H_i(X_{\\preceq\\enma{\\mathbf{b}}};k) = 0$ for all $i$ and\nall $\\enma{\\mathbf{b}}$. This condition is equivalent to\n$\\widetilde{H}_i(X_{\\preceq\\enma{\\mathbf{b}}};k) = 0$ for all $i$ (since $\\enma{\\mathbf{x}}^\\enma{\\mathbf{b}} \\in M\n\\,$ if and only if $ \\emptyset \\in X_{\\preceq\\enma{\\mathbf{b}}}$) and thus to\n$X_{\\preceq\\enma{\\mathbf{b}}}$ being acyclic. \\Box\n\n\\remark{remU}\nFix $\\enma{\\mathbf{b}}\\in\\enma{\\mathbb{Z}}^n$. The set of generators of $M$ of degree\n$\\preceq\\enma{\\mathbf{b}}$ is finite. It generates a monomial module $M_{\\preceq \\enma{\\mathbf{b}}}$\nisomorphic to an ideal (up to a degree shift).\n\n\\corollary{makefinite}\nThe cellular complex $\\enma{\\mathbf{F}}_X$ is a resolution of $M$ if and only if the\ncellular complex $\\enma{\\mathbf{F}}_{X_{\\preceq\\enma{\\mathbf{b}}}}$ is a resolution of the monomial\nideal $M_{\\preceq\\enma{\\mathbf{b}}}$ for all $\\enma{\\mathbf{b}} \\in \\enma{\\mathbb{Z}}^n$.\n\n\\stdbreak\\noindent{\\bf Proof. 
}\n This follows from \\ref{propD} and the identity\n$\\,(X_{\\preceq\\enma{\\mathbf{b}}})_{\\preceq\\enma{\\mathbf{c}}} = X_{\\preceq \\, \\enma{\\mathbf{b}} \\wedge\\enma{\\mathbf{c}}}$.\n \\Box\n\n\\remark{distinct}\nA cellular resolution $\\enma{\\mathbf{F}}_X$ is a minimal resolution if and only\nif any two comparable faces $F' \\subset F$ of the complex $X$\nhave distinct degrees $\\,\\enma{\\mathbf{a}}_F \\not= \\enma{\\mathbf{a}}_{F'}$.\n\n\\vskip .2cm\n\nThe simplest example of a cellular resolution is\nthe {\\it Taylor resolution\\\/} for monomial ideals [Tay].\nThe Taylor resolution is easily generalized to\narbitrary monomial modules $M$ as follows.\nLet $\\setdef{m_j}{j \\in I}$ be the\nminimal generating set of $M$. Define $\\Delta$ to be\nthe simplicial complex consisting of all finite subsets\nof $I$, equipped with the standard incidence function $\\varepsilon(F,F') =\n(-1)^j$ if $F\\setminus F'$ consists of the $j$th element of $F$. The Taylor\ncomplex of $M$ is the cellular complex $\\enma{\\mathbf{F}}_\\Delta$.\n\n\\proposition{propT}\n The Taylor complex $\\enma{\\mathbf{F}}_\\Delta$ is a resolution of $M$.\n\n\\stdbreak\\noindent{\\bf Proof. }\n By \\ref{propD} we need to show that each subcomplex $\\Delta_{\\preceq\\enma{\\mathbf{b}}}$\nof $\\Delta$ is acyclic. $\\Delta_{\\preceq\\enma{\\mathbf{b}}}$ is the full simplex on the\nset of vertices \\setdef{j\\in I}{\\enma{\\mathbf{a}}_j \\preceq \\enma{\\mathbf{b}}}.\nThis set is finite by \\ref{remU}.\nHence $\\Delta_{\\preceq\\enma{\\mathbf{b}}}$ is a finite simplex, which is acyclic.\n \\Box\n\n\nThe Taylor resolution $\\enma{\\mathbf{F}}_\\Delta$\n is typically far from minimal. If $M$ is\ninfinitely generated, then $\\Delta$ has faces of arbitrary dimension\nand $\\enma{\\mathbf{F}}_\\Delta$ has infinite length.\nFollowing [BPS, \\S 2] we note that\nevery simplicial complex $X\\subset\\Delta$\ndefines a submodule $ \\enma{\\mathbf{F}}_X \\subset \\enma{\\mathbf{F}}_\\Delta$ which is closed under the\ndifferential $\\partial$. We call $\\enma{\\mathbf{F}}_X$ the {\\it restricted Taylor\ncomplex\\\/} supported on $X$. 
$\\enma{\\mathbf{F}}_X$ is a resolution of $M$ if and only\nif the hypothesis of \\ref{propD} holds, with cellular homology\nspecializing to simplicial homology.\n\n\n\\example{exA}\nConsider the monomial ideal\n$M=\\idealfour{a^2b}{ac}{b^2\\!}{bc^2}$ in $S=k[a,b,c]$.\n\\nextfigtoks Figure~\\numtoks\\ shows a truncated ``staircase diagram'' of\nunit cubes representing the monomials in $S \\backslash M$, and shows\ntwo simplicial complexes $X$ and $Y$ on the generators of $M$.\nBoth are two triangles sharing an edge.\nEach vertex, edge or triangle is labeled by its degree.\nThe notation {\\bf 210}, for example,\nrepresents the degree $(2,1,0)$ of $a^2b$.\n\n\\draw{70}{diagonals}{\n \\setext{.0685}{.475}{\\sepad{3pt}{3pt}{$ac$}}\n \\nwtext{.107}{.1}{\\nwpad{1pt}{1pt}{$a^2b$}}\n \\nwtext{.248}{.352}{\\wpad{2pt}{$b^2$}}\n \\swtext{.185}{.81}{\\swpad{3pt}{3pt}{$bc^2$}}\n %\n \\setext{.353}{.94}{\\sepad{2pt}{2pt}{\\stack{4pt}{$ac$}{\\sevenbf 101}}}\n \\etext{.358}{.518}{\\epad{3pt}{\\sevenbf 211}}\n \\netext{.353}{.105}{\\nepad{2pt}{2pt}{\\stack{4pt}{\\sevenbf 210}{$a^2b$}}}\n \\stext{.48}{.919}{\\spad{3pt}{\\sevenbf 112}}\n \\ctext{.48}{.518}{\\sevenbf 121}\n \\ntext{.48}{.122}{\\npad{3pt}{\\sevenbf 220}}\n \\swtext{.605}{.94}{\\swpad{2pt}{2pt}{\\stack{4pt}{$bc\\hexp 2$}{\\sevenbf\n 012}}}\n \\wtext{.602}{.518}{\\wpad{3pt}{\\sevenbf 022}}\n \\nwtext{.605}{.105}{\\nwpad{2pt}{2pt}{\\stack{4pt}{\\sevenbf 020}{$b\\hexp\n 2$}}}\n %\n \\ctext{.438}{.38}{\\sevenbf 221}\n \\ctext{.522}{.659}{\\sevenbf 122}\n %\n \\setext{.742}{.94}{\\sepad{2pt}{2pt}{\\stack{4pt}{$ac$}{\\sevenbf 101}}}\n \\etext{.747}{.518}{\\epad{3pt}{\\sevenbf 211}}\n \\netext{.742}{.105}{\\nepad{2pt}{2pt}{\\stack{4pt}{\\sevenbf 210}{$a^2b$}}}\n \\stext{.869}{.919}{\\spad{3pt}{\\sevenbf 112}}\n \\ctext{.869}{.518}{\\sevenbf 212}\n \\ntext{.869}{.122}{\\npad{3pt}{\\sevenbf 220}}\n \\swtext{.994}{.94}{\\swpad{2pt}{2pt}{\\stack{4pt}{$bc\\hexp 2$}{\\sevenbf\n 012}}}\n \\wtext{.991}{.518}{\\wpad{3pt}{\\sevenbf 022}}\n \\nwtext{.994}{.105}{\\nwpad{2pt}{2pt}{\\stack{4pt}{\\sevenbf 020}{$b\\hexp\n 2$}}}\n %\n \\ctext{.827}{.659}{\\sevenbf 212}\n \\ctext{.911}{.38}{\\sevenbf 222}\n %\n \\ntext{.13}{0}{\\npad{10pt}{$M$}}\n \\ntext{.48}{0}{\\npad{10pt}{$X$}}\n \\ntext{.869}{0}{\\npad{10pt}{$Y$}}\n }\n\n\\noindent\nBy \\ref{propD},\nthe complex $X$ supports the minimal free resolution $\\enma{\\mathbf{F}}_X =$\n $$ 0 \\rightarrow\n S^2 \\rightarrowmat{2pt}{4pt}{\\!-b&0\\cr c&0\\cr 0&-b\\cr \\!-a&c\\cr\n 0&-a&\\cr}\n S^5 \\rightarrowmat{2pt}{4pt}{c&b&0&0&0\\cr\n \\!-ab&0&bc&b\\sexp2&0\\cr\n 0&0& \\,-a&0&b\\cr\n 0&-a\\sexp2&0&-ac&\\,-c\\sexp2&\\,\\cr}\n S^4 \\rightarrowmat{3pt}{6pt}{a^2b & ac & bc^2 & b^2 \\cr}\n M \\rightarrow 0.\n $$\nThe complex $Y$ fails the criterion of \\ref{propD},\nand hence $\\enma{\\mathbf{F}}_Y$ is not exact: if $\\enma{\\mathbf{b}}=(1,2,1)$ then\n$Y_{\\preceq\\enma{\\mathbf{b}}}$ consists of the two vertices $ac$ and $b^2$, and is not\nacyclic. 
\\Box\n\nWe next present four examples which are not restricted Taylor complexes.\n\n\\example{exBH} \nLet $M$ be a {\\it Gorenstein ideal of height $3$}\ngenerated by $m$ monomials.\nIt is shown in [BH1, \\S 6] that\n the minimal free resolution\nof $M$ is the cellular resolution $\\, \\enma{\\mathbf{F}}_X \\,:\\,\n 0 \\rightarrow S \\rightarrow S^m \\rightarrow S^m \\rightarrow S\n\\rightarrow 0 \\,$ supported on a {\\it convex $m$-gon}.\n\n\\example{excogen} %\nA monomial ideal $M$ is {\\it co-generic} if no\nvariable occurs to the same power in two distinct\nirreducible components\n$\\,\\langle x_{i_1}^{r_1},\n x_{i_2}^{r_2}, \\ldots, x_{i_s}^{r_s} \\rangle \\,$ of $M$. It is shown in\n[Stu2] that the minimal resolution of a co-generic\nmonomial ideal is a cellular resolution\n$\\enma{\\mathbf{F}}_X$ where $X$ is the complex of bounded faces\nof a {\\it simple polyhedron}.\n\n\\example{exD}\nLet $u_1,\\ldots,u_n$ be distinct integers and $M$ the module generated by\nthe $\\, n\\, !\\, $ monomials $\\,\nx_{\\pi(1)}^{u_1} x_{\\pi(2)}^{u_2} \\cdots\nx_{\\pi(n)}^{u_n} \\,$ where $\\pi$ runs over all permutations of\n$\\,\\{1,2,\\ldots,n \\} $. Let $X$ be the complex of all faces of the\n{\\it permutohedron} [Zie, Example 0.10], which is the convex hull\nof the $\\,n \\,! \\,$ vectors $\\bigl(\\pi(1),\\ldots,\\pi(n)\\bigr)$ in $\\enma{\\mathbb{R}}^n$.\nIt is known [BLSWZ, Exercise~2.9] that the\n$i$-faces $F$ of $X$ are indexed by chains\n$$ \\emptyset \\;=\\; A_0 \\subset A_1 \\subset \\;\\;\\ldots\\;\\; \\subset\nA_{n-i-1} \\subset A_{n-i} \\;=\\; \\{u_1,u_2,\\ldots,u_n\\} . $$\nWe assign the following\nmonomial degree to the\n$i$-face $F$ indexed by this chain:\n$$ \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_F} \\quad = \\quad\n\\prod_{j=1}^{n-i} \\; \\prod_{r \\in A_j \\backslash A_{j-1}} \\!\\!\nx_r^{\\max\\{ A_j \\backslash A_{j-1}\\}}. $$\nIt can be checked (using our results in \\S 2)\nthat the conditions in \\ref{propD} and \\ref{distinct}\nare satisfied.\nHence $\\enma{\\mathbf{F}}_X$ is the minimal free resolution of $M$. \\Box\n\n\\example{exR}\n \\def\\enma{\\RR\\bbPP^2}{\\enma{\\enma{\\mathbb{R}}\\enma{\\mathbb{P}}^2}}\nLet $S=k[a,b,c,d,e,f]$. Following [BH, page 228] we consider the\nStanley-Reisner ideal of the minimal triangulation\nof the {\\it real projective plane} \\enma{\\RR\\bbPP^2},\n$$M \\;=\\; \\ideal{abc,\\; abf,\\; ace,\\; ade,\\; adf,\\; bcd,\\; bde,\\;\nbef,\\; cdf,\\; cef}.$$\nThe dual in \\enma{\\RR\\bbPP^2}\\ of this triangulation\nis a cell complex $X$ consisting of six pentagons.\nThe ten vertices of $X$ are labeled by the\ngenerators of $M$. We illustrate $ X \\simeq $ \\enma{\\RR\\bbPP^2} \\ as the disk shown\non the left in \\nextfigtoks Figure~\\numtoks; antipodal points on the boundary are to be\nidentified. 
The small pictures on the right will be\ndiscussed in Example 2.14.\n\n\n\\draw{70}{projplane}{\n \\ntext{.161}{.475}{\\npad{2pt}{$abc$}}\n \\stext{.0915}{.406}{\\swpad{4pt}{3pt}{$ace$}}\n \\stext{.233}{.406}{\\sepad{4pt}{3pt}{$abf$}}\n \\nwtext{.169}{.728}{\\npad{1pt}{$bcd$}}\n %\n \\stext{.12}{1}{\\swpad{2pt}{2pt}{$bef$}}\n \\stext{.202}{1}{\\swpad{2pt}{2pt}{$cef$}}\n \\swtext{.28}{.866}{\\swpad{2pt}{1pt}{$cdf$}}\n \\wtext{.322}{.626}{\\wpad{2pt}{\\mimic{$a$}{$adf$}}}\n \\wtext{.322}{.373}{\\wpad{2pt}{\\mimic{$a$}{$ade$}}}\n \\nwtext{.278}{.132}{\\nwpad{2pt}{1pt}{$bde$}}\n \\ntext{.202}{.006}{\\nwpad{1pt}{2pt}{$bef$}}\n \\ntext{.12}{.006}{\\nwpad{1pt}{2pt}{$cef$}}\n \\netext{.043}{.14}{\\nepad{2pt}{1pt}{$cdf$}}\n \\etext{.0015}{.373}{\\epad{2pt}{\\mimic{$a$}{$adf$}}}\n \\etext{.0015}{.626}{\\epad{2pt}{\\mimic{$a$}{$ade$}}}\n \\setext{.04}{.85}{\\sepad{2pt}{1pt}{$bde$}}\n %\n \\ctext{.161}{.88}{$\\widehat{a}$}\n \\ctext{.061}{.31}{$\\widehat{b}$}\n \\ctext{.261}{.31}{\\mimic{$\\widehat{b}$}{$\\widehat{c}$}}\n \\ctext{.161}{.25}{$\\widehat{d}$}\n \\ctext{.241}{.63}{\\mimic{$\\widehat{f}$}{$\\widehat{e}$}}\n \\ctext{.081}{.63}{$\\widehat{f}$}\n %\n \\ntext{.51}{.264}{\\npad{10pt}{\\stack{6pt}{$a\\,=\\,0$}{6 cycles}}}\n \\ntext{.714}{.264}{\\npad{10pt}{\\stack{6pt}{$a\\,=\\,1$}{6 cycles}}}\n \\ntext{.918}{.264}{\\nwpad{10pt}{2pt}{\\stack{6pt}{$b+c+d\\,=\\,1$}{10\ncycles}}}\n }\n\n\\noindent\nIf $\\mathop{\\rm char }\\nolimits k \\ne 2$ then $X$ is acyclic over $k$ and the cellular complex\n$\\enma{\\mathbf{F}}_X$ coincides with the minimal free resolution\n$\\, 0 \\rightarrow S^6 \\rightarrow S^{15} \\rightarrow S^{10} \\rightarrow M $.\nIf $\\mathop{\\rm char }\\nolimits k = 2$ then $X$ is not acyclic over $k$, and the cellular\ncomplex $\\enma{\\mathbf{F}}_X$ is not a resolution of $M$. \\Box\n\nReturning to the general theory, we next present a formula for\nthe {\\it Betti number} $\\beta_{i,\\enma{\\mathbf{b}}} = \\dim\\mathop{\\rm Tor}\\nolimits_i(M,k)_\\enma{\\mathbf{b}} \\,$ which\nis the number of minimal $i$th syzygies in degree $\\enma{\\mathbf{b}}$.\nThe degree $\\enma{\\mathbf{b}} \\in \\enma{\\mathbb{Z}}^n$ is called a {\\it Betti degree} of $M$\nif $\\beta_{i,\\enma{\\mathbf{b}}}\\not= 0$ for some $i$.\n\n\\theorem{propA}\n If $\\, \\enma{\\mathbf{F}}_X$ is a cellular resolution of a monomial module $M$ then\n $$\\beta_{i,\\enma{\\mathbf{b}}} \\quad = \\quad \\dim H_i(X_{\\preceq\\enma{\\mathbf{b}}}, X_{\\prec\\enma{\\mathbf{b}}};k)\n\\quad = \\quad \\dim \\widetilde{H}_{i-1} (X_{\\prec\\enma{\\mathbf{b}}};k),$$\n where $H_*$ denotes relative homology and $\\widetilde{H}_*$ denotes\nreduced homology.\n\n\\stdbreak\\noindent{\\bf Proof. }\nWe compute $\\mathop{\\rm Tor}\\nolimits_i(M,k)_\\enma{\\mathbf{b}}$ as the $i$th homology of the complex\nof vector spaces $(\\enma{\\mathbf{F}}_X\\otimes_S k)_\\enma{\\mathbf{b}}$. This complex equals\nthe chain complex\n$\\widetilde{C}(X_{\\preceq\\enma{\\mathbf{b}}},X_{\\prec\\enma{\\mathbf{b}}};k)$\nwhich computes the relative homology with coefficients in $k$ of\nthe pair $(X_{\\preceq\\enma{\\mathbf{b}}},X_{\\prec\\enma{\\mathbf{b}}})$. 
Thus\n $$\\mathop{\\rm Tor}\\nolimits_i(M,k)_\\enma{\\mathbf{b}} \\quad = \\quad\nH_i(X_{\\preceq\\enma{\\mathbf{b}}},X_{\\prec\\enma{\\mathbf{b}}};k).$$\nSince $X_{\\preceq\\enma{\\mathbf{b}}}$ is acyclic,\nthe long exact sequence of homology groups looks like\n $$ 0 \\,=\\, \\widetilde{H}_i(X_{\\preceq\\enma{\\mathbf{b}}}; k) \\,\\rightarrow\n\\, H_i(X_{\\preceq\\enma{\\mathbf{b}}},X_{\\prec\\enma{\\mathbf{b}}}; k)\n\\, \\rightarrow \\, \\widetilde{H}_{i-1}(X_{\\prec\\enma{\\mathbf{b}}}; k)\n\\, \\rightarrow \\, \\widetilde{H}_{i-1}(X_{\\preceq\\enma{\\mathbf{b}}}; k) \\,=\\, 0 .$$\nWe conclude that the two vector spaces in the middle are isomorphic.\n \\Box\n\n\nA subset $Q\\subset\\enma{\\mathbb{Z}}^n$ is an {\\it order ideal} if $\\enma{\\mathbf{b}}\\in Q$\nand $\\enma{\\mathbf{c}} \\in \\enma{\\mathbb{N}}^n$ implies $\\enma{\\mathbf{b}}-\\enma{\\mathbf{c}}\\in Q$. For a\n$\\enma{\\mathbb{Z}}^n$-graded cell complex $X$ and an order\nideal $Q$ we define the {\\it order ideal complex\\\/}\n$\\,X_Q \\, = \\, \\bigsetdef{F\\in X}{\\enma{\\mathbf{a}}_F\\in Q}$.\nNote that $\\,X_{\\prec \\enma{\\mathbf{b}}}$ and $\\,X_{\\preceq \\enma{\\mathbf{b}}}$ are\nspecial cases of this.\n\n\\corollary{corK}\n If $\\enma{\\mathbf{F}}_X$ is a cellular resolution of $M$ and\n $Q\\subset\\enma{\\mathbb{Z}}^n$ an order ideal which contains the\n Betti degrees of $M$,\nthen $\\enma{\\mathbf{F}}_{X_Q}$ is also a cellular resolution of $M$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nBy \\ref{makefinite} and the identity\n$\\,(X_Q)_{\\preceq \\enma{\\mathbf{b}}} = (X_{\\preceq \\enma{\\mathbf{b}}})_Q$, it suffices\nto prove this for the case where $M$ is a monomial ideal and $X$ is finite.\nWe proceed by induction on the number of faces in $X \\backslash X_Q$.\nIf $X_Q = X$ there is nothing to prove. Otherwise\npick $\\enma{\\mathbf{c}} \\in \\enma{\\mathbb{Z}}^n \\backslash Q$ such that $X_{\\preceq \\enma{\\mathbf{c}}} = X$ and\n$X_{\\prec \\enma{\\mathbf{c}}} \\not= X$. Since $\\enma{\\mathbf{c}}$ is not a\nBetti degree, \\ref{propA} implies that the complex $X_{\\prec \\enma{\\mathbf{c}}}$ is acyclic.\nFor any $\\enma{\\mathbf{b}} \\in \\enma{\\mathbb{Z}}^n $, the complex\n$\\,(X_{\\prec \\enma{\\mathbf{c}}})_{\\preceq \\enma{\\mathbf{b}}}\\,$ equals either\n$X_{\\prec \\enma{\\mathbf{c}}}$ or $X_{\\preceq\\, \\enma{\\mathbf{b}} \\wedge \\enma{\\mathbf{c}}}$\nand is hence acyclic. At this point we replace $X$ by the\nproper subcomplex $X_{\\prec \\enma{\\mathbf{c}}}$, and we are done by induction.\n\\Box\n\nBy \\ref{propT} and \\ref{propA}, the Betti numbers $\\beta_{i,\\enma{\\mathbf{b}}}$ of $M$\nare given by the reduced homology of $\\Delta_{\\prec\\enma{\\mathbf{b}}}$. Let us compare\nthat formula for $\\beta_{i,\\enma{\\mathbf{b}}}$ with the following formula which is due\nindependently to Hochster [Ho] and Rosenknop [Ros].\n\n\\corollary{propH}\n The Betti numbers of $M$ satisfy\n $ \\,\\beta_{i,\\enma{\\mathbf{b}}} \\, = \\,\\dim \\widetilde{H}_i(K_\\enma{\\mathbf{b}};k)$\nwhere $K_\\enma{\\mathbf{b}}$ is the simplicial complex\n$\\{\\, \\sigma \\subseteq\\set{1,\\ldots,n}\\,|\\, M \\! \\mtext{has a generator of\ndegree} \\! \\!\\preceq\\enma{\\mathbf{b}}- \\sigma\\}$. Here each face $\\sigma$\nof $K_\\enma{\\mathbf{b}}$ is identified with its\ncharacteristic vector in $\\{0,1\\}^n$.\n\n\\stdbreak\\noindent{\\bf Proof. 
}\n For $i \\in \\{1,\\ldots,n\\}$ consider the subcomplex of\n$\\Delta_{\\prec \\enma{\\mathbf{b}}}$ consisting of all faces $F$ with degree $\\enma{\\mathbf{a}}_F\n\\preceq \\enma{\\mathbf{b}} - \\{i\\}$. This subcomplex is a full simplex. Clearly, these $n$\nsimplices cover $\\Delta_{\\prec \\enma{\\mathbf{b}}}$. The nerve of this cover by\ncontractible subsets is the simplicial complex $K_\\enma{\\mathbf{b}}$. Therefore, $K_\\enma{\\mathbf{b}}$\nhas the same reduced homology as $\\Delta_{\\prec \\enma{\\mathbf{b}}}$.\n \\Box\n\n\\section{} The hull resolution\n\n\\noindent\nLet $M$ be a monomial module in $T = k[x_1^{\\pm 1}\\!\\!,\\, \\dots ,\\,\nx_n^{\\pm 1}]$. In this section we apply convexity methods to construct a\ncanonical cellular resolution of $M$. For $\\enma{\\mathbf{a}} \\in \\enma{\\mathbb{Z}}^n$ and $t \\in \\enma{\\mathbb{R}}$\nwe abbreviate $t^\\enma{\\mathbf{a}} = (t^{a_1}, \\dots, t^{a_n})$. Fix\nany real number $t $ larger than $ (n+1) \\,!\\,\n= 2 \\cdot 3 \\cdot \\cdots \\cdot (n+1)$. We define\n$P_t$ to be the convex hull of the point set\n$$\\setdef{ t^\\enma{\\mathbf{a}} }{\\enma{\\mathbf{a}} \\mtext{is the exponent of a monomial}\n\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}}\\in M}\n\\quad \\subset \\quad \\enma{\\mathbb{R}}^n.$$\nThe set $P_t$ is a closed, unbounded $n$-dimensional convex polyhedron.\n\n\\proposition{propO}\nThe vertices of the polyhedron $P_t$ are precisely the points\n$ t^\\enma{\\mathbf{a}} = (t^{a_1},\\ldots,t^{a_n}) $\nfor which the monomial $ \\enma{\\mathbf{x}}^\\enma{\\mathbf{a}}\n= x_1^{a_1}\\! \\cdots x_n^{a_n} $ is a minimal generator of $M$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nSuppose $\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \\in M$ is not a minimal generator of $M$.\nThen $M$ contains both $\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}+\\enma{\\mathbf{e}}_i} = \\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} x_i$\nand $\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}-\\enma{\\mathbf{e}}_i} = \\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \/x_i$ for some $i$.\nThe line segment $\\mathop{\\rm conv}\\nolimits \\{t^{\\enma{\\mathbf{a}}-\\enma{\\mathbf{e}}_i},t^{\\enma{\\mathbf{a}}+\\enma{\\mathbf{e}}_i} \\}$ lies in $P_t$\nand contains $t^\\enma{\\mathbf{a}}$ in its relative interior. Therefore $t^\\enma{\\mathbf{a}}$\nis not a vertex of $P_t$.\n\nNext, suppose $\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \\in M$ is a minimal generator of $M$.\nLet $\\enma{\\mathbf{v}} = t^{-\\enma{\\mathbf{a}}}$, so $\\enma{\\mathbf{v}}\\cdot t^{\\enma{\\mathbf{a}}} = n$. For any other\nexponent $\\enma{\\mathbf{b}}$ of a monomial in $M$, we have $b_i \\ge a_{i}+1$ for some\n$i$, so $$ \\enma{\\mathbf{v}} \\cdot t^\\enma{\\mathbf{b}} \\quad = \\quad\n\\sum_{j=1}^n t^{b_j - a_j} \\quad \\geq\n\\quad t^{b_i-a_{i}} \\quad \\ge \\quad t \\quad > \\quad (n+1) \\,!\n\\quad > \\quad n .$$\n Thus, the inner normal vector $\\enma{\\mathbf{v}}$ supports\n$t^{\\enma{\\mathbf{a}}}$ as a vertex of $P_t$.\n\\Box\n\n\\corollary{}\n$\\, P_t \\,\\,\\, = \\,\\,\\, \\enma{\\mathbb{R}}_+^n \\,\\, + \\,\\,\\mathop{\\rm conv}\\nolimits\\,\n\\setdef{ t^\\enma{\\mathbf{a}} }{\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \\mtext{is a minimal generator of} M}$.\n\nOur first goal is to establish the following combinatorial result.\n\n\\theorem{indepthm} The face poset of the\npolyhedron $P_t$ is independent of $t$ for $\\,t > (n+1) \\,!$.\nThe same holds for the subposet of all bounded faces of $P_t$.\n\n\\stdbreak\\noindent{\\bf Proof. 
}\nThe face poset of $P_t$ can be computed\nas follows. Let $C_t \\subset \\enma{\\mathbb{R}}^{n+1}$ be the cone spanned\nby the vectors $\\,(t^\\enma{\\mathbf{a}} ,1 ) \\,$ for all minimal generators\n$\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}}$ of $M$ together with the unit vectors $\\,(\\enma{\\mathbf{e}}_i, 0)\\,$\nfor $i=1,\\ldots,n$. The faces of $P_t$ are in order-preserving\nbijection with the faces of $C_t$ which do not lie in the hyperplane ``at\ninfinity'' $\\,x_{n+1} = 0$. A face of $P_t$ is bounded if and only if the\ncorresponding face of $C_t$ contains none of the vectors $(\\enma{\\mathbf{e}}_i,0)$.\nIt suffices to prove that the\nface poset of $C_t$ is independent of $t$.\n\nFor any $(n+1)$-tuple of generators of $C_t$ consider\nthe sign of their determinant\n$$\n\\mathop{\\rm sign}\\nolimits \\,\\,\n\\mathop{\\rm det}\\nolimits \\bmatrix{2pt}{\n\\enma{\\mathbf{e}}\\ssub{i_0} & \\;\\;\\cdots & \\enma{\\mathbf{e}}\\ssub{i_r} & \\;\\;\\;\nt^{\\enma{\\mathbf{a}}\\ssub{j_1}} &\n\\;\\;\\cdots & t^{\\enma{\\mathbf{a}}\\ssub{j_{n-r}}} & \\;\\quad\\cr\n 0 & \\cdots & 0 & 1 & \\cdots & 1 \\cr}\n\\quad \\in \\quad \\{-1,0,+1 \\}. \\eqno (1) $$\nThe list of\nthese signs forms the (possibly infinite) {\\it oriented matroid}\nassociated with the cone $C_t$. It is known\n(see e.g.~[BLSWZ]) that the face poset of $C_t$\nis determined by its oriented matroid.\nIt therefore suffices to show that the sign\nof the determinant in (1) is independent of $t$ for $t > (n+1) \\, !$.\nThis follows from the next lemma.\n\\Box\n\n\\lemma{factoriallem}\nLet $a_{ij}$ be integers for $\\,1 \\leq i,j \\leq r$.\nThen the Laurent polynomial\n$ f(t) = \\mathop{\\rm det}\\nolimits \\bigl( (t^{a_{ij}})_{1 \\leq i,j \\leq r} \\bigr) \\,$\neither vanishes identically or\nhas no real roots for $t > r \\,!$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nSuppose that $f$ is not zero and write\n$\\,f(t) = c_\\alpha t^\\alpha + \\sum_\\beta c_\\beta t^\\beta$,\nwhere the first term has the highest degree in $t$.\nNote that $\\sum_\\beta |c_\\beta| < r\\,!\\,$, since $f$ is a signed sum of\n$r\\,!$ powers of $t$ and at least one of them contributes to the leading\ncoefficient $c_\\alpha$. For $t > r!$\nwe have the chain of inequalities\n$$ | \\sum_\\beta c_\\beta \\cdot t^\\beta |\n \\, \\leq \\,\n \\sum_\\beta |c_\\beta | \\cdot t^\\beta\n \\, \\leq \\,\n (\\sum_\\beta |c_\\beta | ) \\cdot t^{\\alpha-1}\n \\, < \\, r \\, ! \\cdot t^{\\alpha-1}\n \\, < \\, t^\\alpha\n \\, \\le \\, |c_\\alpha \\cdot t^\\alpha| .$$\nTherefore $f(t)$ is nonzero, and\n$\\, \\mathop{\\rm sign}\\nolimits\\bigl(f(t)\\bigr) = \\mathop{\\rm sign}\\nolimits(c_\\alpha)$. \\Box\n\nIn the proof of \\ref{indepthm} we are using \\ref{factoriallem} for $r=n+1$.\nLev Borisov and Sorin Popescu constructed examples\nof matrices which show\nthat the exponential lower bound for $t$ is necessary in \\ref{factoriallem},\nand also in \\ref{indepthm}.\n\nWe are now ready to define the hull resolution\nand state our main result.\nThe {\\it hull complex} of a monomial module $M$,\ndenoted $\\mathop{\\rm hull}\\nolimits(M)$, is the complex of bounded\nfaces of the polyhedron $P_t$ for large $t$.\n\\ref{indepthm} ensures that $\\mathop{\\rm hull}\\nolimits(M)$ is well-defined\nand depends only on $M$. 
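\n\nTo illustrate the definition, let $n=2$ and $M = \\ideal{x_1^2,\\; x_1x_2,\\; x_2^2}$.\nFor $t \\gg 0$ the polyhedron $P_t$ is the sum of $\\enma{\\mathbb{R}}_+^2$ and the convex hull\nof the three points $(t^2,1)$, $(t,t)$ and $(1,t^2)$.\nIts bounded faces are these three vertices and the two edges\n$\\,\\mathop{\\rm conv}\\nolimits \\{(t^2,1),(t,t)\\}\\,$ and $\\,\\mathop{\\rm conv}\\nolimits \\{(t,t),(1,t^2)\\}$;\nthe segment $\\,\\mathop{\\rm conv}\\nolimits \\{(t^2,1),(1,t^2)\\}\\,$ is not a face of $P_t$\nsince its midpoint lies in $(t,t) + \\enma{\\mathbb{R}}_+^2$.\nHence $\\mathop{\\rm hull}\\nolimits(M)$ is a path with three vertices, and the cellular complex\n$\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M)}$ is the resolution\n$\\, 0 \\rightarrow S^2 \\rightarrow S^3 \\rightarrow M$,\nwhich in this small case is the minimal free resolution of $M$.\n\n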
The vertices of $\\mathop{\\rm hull}\\nolimits(M)$\nare labeled by the generators of $M$, by \\ref{propO},\nand hence the complex $\\,\\mathop{\\rm hull}\\nolimits(M) \\,$ is $\\enma{\\mathbb{Z}}^n$-graded.\nLet $\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M)}$ be the complex of free $S$-modules\nderived from $\\mathop{\\rm hull}\\nolimits(M)$ as in Section~1.\n\n\\theorem{thmH}\n The cellular complex $\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M)}$ is a free resolution of $M$.\n\n\\stdbreak\\noindent{\\bf Proof. }\n Let $X = (\\mathop{\\rm hull}\\nolimits(M))_{\\preceq\\enma{\\mathbf{b}}}$ for some degree \\enma{\\mathbf{b}}; by \\ref{propD} we\nneed to show that $X$ is acyclic. This is immediate if $X$ is empty or a\nsingle vertex. Otherwise choose $\\,t > (n+1)\\,! \\,$\n and let $\\enma{\\mathbf{v}}=t^{-\\enma{\\mathbf{b}}}$. If $t^\\enma{\\mathbf{a}}$ is a vertex of $X$ then $\\enma{\\mathbf{a}} \\prec\\enma{\\mathbf{b}}$,\nso\n $$\\enma{\\mathbf{v}} \\cdot t^{\\enma{\\mathbf{a}}} \\quad = \\quad t^{-\\enma{\\mathbf{b}}} \\cdot t^{\\enma{\\mathbf{a}}} \\quad < \\quad\nt^{-\\enma{\\mathbf{b}}} \\cdot t^\\enma{\\mathbf{b}} \\quad = \\quad n,$$\n while for any other $\\enma{\\mathbf{x}}^\\enma{\\mathbf{c}} \\in M$ we have $c_{i}\\ge b_i+1$ for some $i$,\nso\n $$\\enma{\\mathbf{v}} \\cdot t^{\\enma{\\mathbf{c}}} \\quad = \\quad t^{-\\enma{\\mathbf{b}}} \\cdot t^{\\enma{\\mathbf{c}}} \\quad \\ge\n\\quad t^{c_i-b_i} \\quad \\ge \\quad\n t \\quad > \\quad n.$$\nThus, the hyperplane $H$ defined by $\\enma{\\mathbf{v}}\\cdot\\enma{\\mathbf{x}} = n$ separates the\nvertices of $X$ from the remaining vertices of $P_t$.\nMake a projective transformation which moves $H$ to infinity. This\nexpresses $X$ as the complex of bounded faces of a convex polyhedron, a\ncomplex which is known to be contractible, e.g.~[BLSWZ,\nExercise 4.27 (a)].\n\\Box\n\n\nWe call $\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M)}$ the {\\it hull resolution} of $M$.\nLet us see that the hull resolution generalizes the {\\it Scarf\ncomplex} introduced in [BPS]. This is the\nsimplicial complex\n $$\\Delta_M \\quad = \\quad \\setdef{F \\subseteq I}{m_F \\ne m_G\n\\mtext{for all} G \\subseteq I \\mtext{other than} F}.$$\nThe Scarf complex $\\Delta_M$ defines a subcomplex\n$\\enma{\\mathbf{F}}_{\\Delta_M}$ of the Taylor resolution $\\enma{\\mathbf{F}}_\\Delta$.\n\n\\proposition{scarfinhull} For any monomial module $M$, the\nScarf complex ${\\Delta_M}$ is a subcomplex of the hull\ncomplex $\\mathop{\\rm hull}\\nolimits(M)$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nLet $F = \\{ \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_1},\\! \\ldots,\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_p}\\}$\nbe a face of $\\Delta_M$ with $\\,m_F = \\mathop{\\rm lcm}\\nolimits(F) = \\enma{\\mathbf{x}}^\\enma{\\mathbf{u}}$.\nConsider any injective map $\\sigma : \\{1,\\ldots,p\\}\n\\rightarrow \\{1,\\ldots,n\\}$ such that $a_{i,\\sigma(i)} = u_{\\sigma(i)}$\nfor all $i$. Compute the inverse of the $p \\times p$-matrix\n$(t^{a_{i,\\sigma(j)}})$, and let $\\enma{\\mathbf{v}}^\\sigma(t)'$ be the sum\nof the column vectors of that inverse matrix. 
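(For instance, if $n=3$, $p=2$, $\\enma{\\mathbf{a}}_1 = (2,0,0)$ and\n$\\enma{\\mathbf{a}}_2 = (0,2,0)$, with $\\sigma(i) = i$, then this matrix equals\n$\\pmatrix{t^2 & 1 \\cr 1 & t^2}$ and\n$\\enma{\\mathbf{v}}^\\sigma(t)' = \\bigl( {1 \\over t^2+1} , {1 \\over t^2+1} \\bigr)$.)\n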
By augmenting\nthe $p$-vector $\\enma{\\mathbf{v}}^\\sigma(t)'$ with additional zero coordinates,\nwe obtain an $n$-vector $\\enma{\\mathbf{v}}^\\sigma(t)$ with the following properties:\n\\item{(i)} $\\, \\enma{\\mathbf{v}}^\\sigma(t) \\cdot t^{\\enma{\\mathbf{a}}_1} =\n \\enma{\\mathbf{v}}^\\sigma(t) \\cdot t^{\\enma{\\mathbf{a}}_2} = \\cdots =\n\\enma{\\mathbf{v}}^\\sigma(t) \\cdot t^{\\enma{\\mathbf{a}}_p} \\,=\\, 1$;\n\\item{(ii)}\n$\\, v^\\sigma_j(t) \\, = \\, 0 \\,$, for all $j \\not\\in \\mathop{\\rm image}\\nolimits(\\sigma)$;\n\\item{(iii)}\n$\\, v^\\sigma_j(t) \\, = \\, t^{-u_j} \\,+ \\,$\n{\\sl lower order terms in $t$}, for all $j \\in \\mathop{\\rm image}\\nolimits(\\sigma)$.\n\nBy taking a convex combination of the vectors $\\enma{\\mathbf{v}}^\\sigma(t)$\nfor all possible injective maps $\\sigma$ as above, we obtain a vector\n$\\enma{\\mathbf{v}}(t)$ with the following properties:\n\\item{(iv)} $\\, \\enma{\\mathbf{v}}(t) \\cdot t^{\\enma{\\mathbf{a}}_1} =\n \\enma{\\mathbf{v}}(t) \\cdot t^{\\enma{\\mathbf{a}}_2} = \\cdots =\n\\enma{\\mathbf{v}}(t) \\cdot t^{\\enma{\\mathbf{a}}_p} \\,=\\, 1$;\n\\item{(v)}\n$\\, v_j(t) \\, = \\, c_j \\cdot t^{-u_j} \\,+ \\, $\n{\\sl lower order terms in $t\\,$}\nwith $c_j > 0 $, for all $j \\in \\{1,\\ldots,n\\}$.\n\nFor any $\\enma{\\mathbf{x}}^\\enma{\\mathbf{b}} \\in M$ which is not in $F$\nthere exists an index $\\ell$ such that $b_\\ell \\geq u_\\ell + 1$.\nThis implies $\\, \\enma{\\mathbf{v}}(t) \\cdot t^\\enma{\\mathbf{b}}\n\\geq c_\\ell \\cdot t^{b_\\ell-u_\\ell} \\,+ \\, $ {\\sl lower order terms in} $t$,\nand therefore $\\, \\enma{\\mathbf{v}}(t) \\cdot t^\\enma{\\mathbf{b}} > 1\\,$ for $t \\gg 0$.\nWe conclude that $F$ defines a face of $P_t$ with inner normal vector\n$\\enma{\\mathbf{v}}(t)$. \\Box\n\nA binomial first syzygy of $M$ is\ncalled {\\it generic} if it has full support, i.e., if no\nvariable $x_i$ appears with the same exponent in the\ncorresponding pair of monomial generators. We call\n $M$ {\\it generic} if it has a basis of generic binomial\nfirst syzygies. This is a translation-invariant generalization of the\ndefinition of genericity in [BPS].\n\n\n\\lemma{genlem}\n If $M$ is generic, then for any pair of generators $m_i$, $m_j$\neither the corresponding binomial first syzygy is generic, or there\nexists a third generator $m$ which strictly divides\nthe least common\nmultiple of $m_i$ and $m_j$ in all coordinates.\n\n\\stdbreak\\noindent{\\bf Proof. }\n Suppose that the syzygy formed by $m_i$ and $m_j$ is not generic, and\ninduct on the length of a chain of generic syzygies needed to express it.\nIf the chain has length two, then the middle monomial $m$\ndivides $\\,\\mathop{\\rm lcm}\\nolimits(m_i,m_j)$.\nMoreover, because the two syzygies involving $m$ are generic, this\ndivision is strict in each variable.\nIf the chain is longer, then divide it into two\nsteps. Either each step represents a generic syzygy, and we use the\nabove argument, or by induction there exists a generator $m$ strictly\ndividing the degree of one of these syzygies in all\ncoordinates, and we are again done.\n \\Box\n\n\\lemma{notunder}\nLet $M$ be a monomial module and $F$ a face of $\\mathop{\\rm hull}\\nolimits(M)$.\nFor every monomial $m \\in M$ there exists a variable\n$x_j$ such that $\\mathop{\\rm deg}\\nolimits_{x_j}(m) \\geq \\mathop{\\rm deg}\\nolimits_{x_j}(m_F)$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nSuppose that $m = \\enma{\\mathbf{x}}^\\enma{\\mathbf{u}}$ strictly divides $m_F$ in each\ncoordinate. 
Let $t^{\\enma{\\mathbf{a}}_1},\\ldots,t^{\\enma{\\mathbf{a}}_p}$ be the vertices\nof $F$ and consider their barycenter\n$\\,\\enma{\\mathbf{v}}(t) = {1 \\over p} \\cdot (t^{\\enma{\\mathbf{a}}_1}+ \\cdots + t^{\\enma{\\mathbf{a}}_p})\\,\\in \\, F$.\nThe $j$th coordinate of $\\enma{\\mathbf{v}}(t)$ is a polynomial in $t$\nof degree equal to $\\,\\mathop{\\rm deg}\\nolimits_{x_j}(m_F)$. The $j$th coordinate\nof $t^\\enma{\\mathbf{u}}$ is a monomial of strictly lower degree.\nHence $\\,t^{\\enma{\\mathbf{u}}} < \\enma{\\mathbf{v}}(t)\\,$ coordinatewise for $t \\gg 0$.\nLet $\\enma{\\mathbf{w}}$ be a nonzero linear functional which\nis nonnegative on $\\enma{\\mathbb{R}}_+^n $ and whose minimum over\n$P_t$ is attained at the face $F$.\nThen $\\enma{\\mathbf{w}} \\cdot \\enma{\\mathbf{v}}(t) = \\enma{\\mathbf{w}} \\cdot\nt^{\\enma{\\mathbf{a}}_1} = \\cdots = \\enma{\\mathbf{w}} \\cdot t^{\\enma{\\mathbf{a}}_p} $, but\nour discussion implies $\\enma{\\mathbf{w}} \\cdot t^\\enma{\\mathbf{u}} < \\enma{\\mathbf{w}} \\cdot \\enma{\\mathbf{v}}(t) $,\na contradiction. \\Box\n\n\\theorem{genmin}\n If $M$ is a generic monomial module then $\\mathop{\\rm hull}\\nolimits(M)$ coincides with\nthe Scarf complex $\\Delta_M$ of $M$, and the hull resolution\n$\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M)} = \\enma{\\mathbf{F}}_{\\Delta_M}$ is minimal.\n\n\\stdbreak\\noindent{\\bf Proof. }\nLet $F$ be any face of $\\mathop{\\rm hull}\\nolimits(M)$ and\n$\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_1}, \\ldots,\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_p}$ the generators\nof $M$ corresponding to the vertices of $F$.\nSuppose that $F$ is not a face of $\\Delta_M$. Then either\n\\item{(i)} $\\,\n\\mathop{\\rm lcm}\\nolimits(\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_1}, \\ldots,\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_{i-1}},\n\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_{i+1}},\\ldots,\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_p}) = m_F \\,$ for some $i\n\\in \\{1,\\ldots,p\\}$, or\n\\item{(ii)} there exists another generator $\\enma{\\mathbf{x}}^\\enma{\\mathbf{u}}$ of $ M$ which\ndivides $m_F$ and such that $t^\\enma{\\mathbf{u}} \\not\\in F$.\n\nConsider first case (i). By \\ref{notunder}\napplied to $m = \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_i} $ there exists\n$x_j$ such that $\\mathop{\\rm deg}\\nolimits_{x_j}(\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_i}) = \\mathop{\\rm deg}\\nolimits_{x_j}(m_F)$,\nand hence $\\mathop{\\rm deg}\\nolimits_{x_j}(\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_i}) =\n\\mathop{\\rm deg}\\nolimits_{x_j}(\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_k})$ for some\n$k \\not= i$. The first\nsyzygy between $\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_i}$ and $\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_k}$ is not generic,\nand, by \\ref{genlem}, there exists a generator $m$ of $M$\nwhich strictly divides $\\mathop{\\rm lcm}\\nolimits(\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_i},\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_k})$ in\nall coordinates. Since $\\mathop{\\rm lcm}\\nolimits(\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_i},\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_k})$\n divides $m_F$, we get a contradiction to \\ref{notunder}.\n\nConsider now case (ii).\nFor any variable $x_j$ there exists $i \\in \\{1,\\ldots,p\\}$ such that\n$\\mathop{\\rm deg}\\nolimits_{x_j}(\\enma{\\mathbf{x}}^{ \\enma{\\mathbf{a}}_i}) = \\mathop{\\rm deg}\\nolimits_{x_j}(m_F)\n\\geq \\mathop{\\rm deg}\\nolimits_{x_j}(\\enma{\\mathbf{x}}^\\enma{\\mathbf{u}})$. 
If the inequality $\\geq$ is an\nequality $=$, then the first syzygy between $\\enma{\\mathbf{x}}^\\enma{\\mathbf{u}}$\nand $\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_i}$ is not generic, and \\ref{genlem} gives\na new monomial generator $m$ which strictly divides $m_F$ in all\ncoordinates, a contradiction to \\ref{notunder}.\nTherefore $\\geq$ is a strict inequality $>$\nfor all variables $x_j$. This means that $\\enma{\\mathbf{x}}^\\enma{\\mathbf{u}}$\nstrictly divides $m_F$ in all coordinates,\nagain a contradiction to \\ref{notunder}.\n\nHence both (i) and (ii) lead to a contradiction,\nand we conclude that every face of $\\mathop{\\rm hull}\\nolimits(M)$ is a face\nof $\\Delta_M$. This implies $\\mathop{\\rm hull}\\nolimits(M) = \\Delta_M$ by \\ref{scarfinhull}.\nThe resolution $\\enma{\\mathbf{F}}_{\\Delta_M}$ is minimal\nbecause no two faces in $\\Delta_M$ have the same degree. \\Box\n\nIn this paper we are mainly interested in nongeneric\nmonomial modules for which the hull complex\nis typically not simplicial. Nevertheless\nthe possible combinatorial types of facets\nseem to be rather limited. Experimental evidence suggests:\n\n\\conjecture{}\nEvery face of $\\mathop{\\rm hull}\\nolimits(M)$ is affinely isomorphic to a subpolytope\nof the $(n-1)$-dimensional permutohedron and hence has\nat most $\\, n \\, ! \\,$ vertices.\n\n\nBy \\ref{exD} it is easy to see that\nany subpolytope of the $(n-1)$-dimensional permutohedron\ncan be realized as the hull complex of a suitable monomial ideal.\n\nThe following example, found in discussions\nwith Lev Borisov, shows that the hull complex\nof a monomial module need not be locally finite:\n\n\\example{lev}\nLet $n=3$ and $M$ the monomial module\ngenerated by $x_1^{-1} x_2$ and $\\setdef{x_2^i\nx_3^{-i}}{i\\in\\enma{\\mathbb{Z}}}$. Then every triangle of the form\n$\\setthree{x_1^{-1} x_2}{ x_2^i x_3^{-i}}{x_2^{i+1} x_3^{-i-1}}$\nis a facet of $\\mathop{\\rm hull}\\nolimits(M)$.\nIn particular, the vertex $x_1^{-1} x_2$ of $\\mathop{\\rm hull}\\nolimits(M)$ has\ninfinite valence.\n \\Box\n\nFor a generic monomial module $M$ we have the\nfollowing important identity\n $$ \\, \\mathop{\\rm hull}\\nolimits(M_{\\preceq \\enma{\\mathbf{b}}}) \\quad = \\quad \\mathop{\\rm hull}\\nolimits(M)_{\\preceq \\enma{\\mathbf{b}}}.$$\nSee equation (5.1) in [BPS]. This identity can fail if $M$ is not generic:\n\n\\example{}\n Consider the monomial ideal $M=\\idealfour{a^2b}{ac}{b^2\\!}{bc^2}$ studied\nin \\ref{exA} and let $\\enma{\\mathbf{b}}=(2,1,2)$. Then\n$\\mathop{\\rm hull}\\nolimits(M_{\\preceq \\enma{\\mathbf{b}}})$ is a triangle, while $\\mathop{\\rm hull}\\nolimits(M)_{\\preceq \\enma{\\mathbf{b}}}$\nconsists of two edges. The vertex $b^2$ of $\\mathop{\\rm hull}\\nolimits(M)$\n``eclipses'' the facet of $\\mathop{\\rm hull}\\nolimits(M_{\\preceq \\enma{\\mathbf{b}}})$.\n \\Box\n\nThe hull complex $\\mathop{\\rm hull}\\nolimits(M)$ is particularly easy to compute\nif $M$ is a squarefree monomial ideal. In this case the face poset of $P_t$\nis independent of $t$ for all $t > 1$: since $t^\\enma{\\mathbf{a}} = \\enma{\\mathbf{1}} + (t-1)\\,\\enma{\\mathbf{a}}$\nfor every $0$-$1$-vector $\\enma{\\mathbf{a}}$, the polyhedron $P_t$ is the image of\n$\\,\\mathop{\\rm conv}\\nolimits \\setdef{\\enma{\\mathbf{a}}}{\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \\mtext{is a minimal generator of} M} + \\enma{\\mathbb{R}}_+^n \\,$\nunder an invertible affine map. Moreover, if all square-free generators of $M$ have the\nsame total degree, then the faces of their convex hull are precisely\nthe bounded faces of $P_t$.\n\\ref{thmH} implies the following corollary.\n\n\n\\corollary{squarefree}\nLet $\\enma{\\mathbf{a}}_1,\\ldots,\\enma{\\mathbf{a}}_p$ be $0$-$1$-vectors having the same\ncoordinate sum. 
Then their boundary complex, consisting of\nall faces of the convex polytope $\\,P = \\mathop{\\rm conv}\\nolimits \\{\\enma{\\mathbf{a}}_1,\\ldots,\\enma{\\mathbf{a}}_p\\}$,\ndefines a cellular resolution of the ideal\n$\\,M = \\langle \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_1},\\ldots, \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}_p} \\rangle$.\n\n\\example{exRb} \\ref{squarefree} applies\nto the Stanley-Reisner ideal of the real projective plane\nin \\ref{exR}. Here $P$ is a 5-dimensional polytope with 22\nfacets, corresponding to the 22 cycles on the $2$-complex\n$X$ of length $\\le 6$.\nRepresentatives of these three cycle types, and supporting hyperplanes of\nthe corresponding facets of $P$, are shown on the right in \\ref{rpfig}. This\nexample illustrates how the hull resolution encodes combinatorial information\nwithout making arbitrary choices.\n \\Box\n\n\\section{} Lattice ideals\n\n\\noindent Let $L\\subset\\enma{\\mathbb{Z}}^n$ be a lattice. In this section\nwe study (cellular) resolutions of the lattice module $M_L$ and of the\nlattice ideal $I_L$. Let $S[L]$ be the group algebra of $L$\nover $S$. We realize $S[L]$ as the subalgebra of\n$\\,k[x_1,\\ldots,x_n, z_1^{\\pm 1}\\!\\!,\\, \\dots ,\\, z_n^{\\pm 1}]\n\\,$ spanned by all monomials\n$\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \\enma{\\mathbf{z}}^\\enma{\\mathbf{b}}$ where $\\enma{\\mathbf{a}} \\in \\enma{\\mathbb{N}}^n$ and $\\enma{\\mathbf{b}} \\in L$. Note that\n$\\,S \\,= \\,S[L]\/ \\langle \\,\\enma{\\mathbf{z}}^\\enma{\\mathbf{a}} - 1 \\,\\,| \\,\\, \\enma{\\mathbf{a}} \\in L\\, \\rangle $.\n\n\\lemma{lem1}\nThe lattice module $M_L$ is an $S[L]$-module,\nand $\\,M_L \\otimes_{S[L]} S \\,= \\, S\/I_L$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nThe $k$-linear map\n$\\,\\phi : S[L] \\rightarrow M_L , \\, \\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \\enma{\\mathbf{z}}^\\enma{\\mathbf{b}}\n\\mapsto \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}} + \\enma{\\mathbf{b}}} \\,$\ndefines the structure of an $S[L]$-module on $M_L$.\nIts kernel $\\,\\mathop{\\rm ker}\\nolimits(\\phi) \\,$ is the ideal in $S[L]$ generated by all\nbinomials $\\,\\enma{\\mathbf{x}}^\\enma{\\mathbf{u}} - \\enma{\\mathbf{x}}^\\enma{\\mathbf{v}} \\enma{\\mathbf{z}}^{\\enma{\\mathbf{u}}-\\enma{\\mathbf{v}}}$\nwhere $\\enma{\\mathbf{u}},\\enma{\\mathbf{v}} \\in \\enma{\\mathbb{N}}^n$ and $\\enma{\\mathbf{u}} - \\enma{\\mathbf{v}} \\in L$.\nClearly, we obtain $I_L$ from $\\mathop{\\rm ker}\\nolimits(\\phi)$\nby setting all $\\enma{\\mathbf{z}}$-variables to $1$, and hence\n$\\,(S[L]\/\\mathop{\\rm ker}\\nolimits(\\phi) )\\otimes_{S[L]} S \\,= \\, S\/I_L$. \\Box\n\nWe define a $\\enma{\\mathbb{Z}}^n$-grading on $S[L]$ via\n$\\,\\mathop{\\rm deg}\\nolimits(\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}} \\enma{\\mathbf{z}}^\\enma{\\mathbf{b}}) = \\enma{\\mathbf{a}} + \\enma{\\mathbf{b}}$.\nLet ${\\cal A}$ be the category of $\\enma{\\mathbb{Z}}^n$-graded $S[L]$-modules,\nwhere the morphisms are $\\enma{\\mathbb{Z}}^n$-graded $S[L]$-module homomorphisms\nof degree $\\enma{\\mathbf{0}}$. The polynomial ring\n$S = k[x_1,\\ldots,x_n]$ is graded by the quotient group\n$\\enma{\\mathbb{Z}}^n\/L$ via $\\,\\mathop{\\rm deg}\\nolimits(\\enma{\\mathbf{x}}^\\enma{\\mathbf{a}}) = \\enma{\\mathbf{a}} + L$. Let ${\\cal B}$\nbe the category of $\\enma{\\mathbb{Z}}^n\/L$-graded $S$-modules, where the\nmorphisms are $\\enma{\\mathbb{Z}}^n\/L$-graded $S$-module homomorphisms\nof degree $\\enma{\\mathbf{0}}$. 
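\n\nFor instance, for the lattice $\\,L = \\mathop{\\rm ker}\\nolimits_\\enma{\\mathbb{Z}} \\pmatrix{ 1 & 1 & 1 } \\subset \\enma{\\mathbb{Z}}^3\\,$\nof \\ref{exU} below, the quotient group $\\enma{\\mathbb{Z}}^3\/L$ is identified with $\\enma{\\mathbb{Z}}$\nvia $\\enma{\\mathbf{a}} \\mapsto a_1 + a_2 + a_3$, and the $\\enma{\\mathbb{Z}}^3\/L$-grading of\n$S = k[x_1,x_2,x_3]$ is the familiar grading by total degree.\n\n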
Clearly, $M_L$ is an object in ${\\cal A}$,\nand $\\,M_L \\otimes_{S[L]} S \\,= \\, S\/I_L \\,$ is an object in ${\\cal B}$.\n\n\\theorem{thmU}\nThe categories ${\\cal A}$ and ${\\cal B}$ are equivalent.\n\n\\stdbreak\\noindent{\\bf Proof. }\n Define a functor $\\pi:{\\cal A} \\rightarrow {\\cal B}$ by the rule\n$\\,\\pi(M) := M \\otimes_{S[L]} S$. This functor weakens\nthe $\\enma{\\mathbb{Z}}^n$-grading of objects in ${\\cal A}$ to a $\\enma{\\mathbb{Z}}^n\/L$-grading.\nThe properties of $\\pi$ cannot be deduced from the\ntensor product alone, which is poorly behaved when applied to\narbitrary $S[L]$-modules; e.g., $S$ is not a flat $S[L]$-module.\nFurther, the categories\n${\\cal A}$ and ${\\cal B}$ are {\\sl not isomorphic};\nwe are only claiming that they are {\\sl equivalent}.\n\nWe apply condition iii) of [Mac, \\S IV.4, Theorem 1]: It is enough to prove\nthat $\\pi$ is full and faithful, and that each object $N\\in{\\cal B}$ is\nisomorphic to $\\pi(M)$ for some object $M\\in{\\cal A}$. To prove that $\\pi$\nis full and faithful, we show that for any two modules $M$, $M'\\in {\\cal\nA}$ it induces an identification $\\mathop{\\rm Hom}\\nolimits_{\\cal\nA}(M,M') = \\mathop{\\rm Hom}\\nolimits_{\\cal B}(\\pi(M),\\pi(M'))$.\n\nBecause each module $M\\in{\\cal A}$ is $\\enma{\\mathbb{Z}}^n$-graded, the lattice\n$L\\subset S[L]$ acts on $M$ as a group of automorphisms, i.e. the\nmultiplication maps $\\enma{\\mathbf{z}}^\\enma{\\mathbf{b}}: M_\\enma{\\mathbf{a}} \\rightarrow M_{\\enma{\\mathbf{a}}+\\enma{\\mathbf{b}}}$ are\nisomorphisms of $k$-vector spaces for each $\\enma{\\mathbf{b}} \\in L$, compatible with\nmultiplication by each $x_i$. For each $\\alpha\\in\\enma{\\mathbb{Z}}^n\/L$, the functor\n$\\pi$ identifies the spaces $M_\\enma{\\mathbf{a}}$ for $\\enma{\\mathbf{a}}\\in\\alpha$ as the single space\n$\\pi(M)_\\alpha$.\nA morphism $f:M\\rightarrow M'$ in ${\\cal A}$ is a collection of $k$-linear\nmaps $f_\\enma{\\mathbf{a}}: M_\\enma{\\mathbf{a}}\\rightarrow M'_\\enma{\\mathbf{a}}$, compatible with the action by $L$\nand with multiplication by each $x_i$. A morphism $g:\\pi(M)\\rightarrow\n\\pi(M')$ in ${\\cal B}$ is a collection of $k$-linear maps\n$g_\\alpha:\\pi(M)_\\alpha\\rightarrow \\pi(M')_\\alpha$, compatible with\nmultiplication by each $x_i$. For each $\\alpha\\in\\enma{\\mathbb{Z}}^n\/L$, the functor\n$\\pi$ identifies the maps $f_\\enma{\\mathbf{a}}$ for $\\enma{\\mathbf{a}}\\in\\alpha$ as the single map\n$\\pi(f)_\\alpha$.\n\nIt is clear from this discussion that $\\pi$ takes\ndistinct morphisms to distinct morphisms. Given a morphism $g\\in\\mathop{\\rm Hom}\\nolimits_{\\cal\nB}(\\pi(M),\\pi(M'))$, define\na morphism $f\\in\\mathop{\\rm Hom}\\nolimits_{\\cal A}(M,M')$ by the rule $f_\\enma{\\mathbf{a}}=g_\\alpha$ for\n$\\enma{\\mathbf{a}}\\in\\alpha$. We have $\\pi(f)=g$, establishing the desired identification\nof Hom-sets. Hence $\\pi$ is full and faithful.\n\nFinally, let $N = \\mathop{\\hbox{$\\bigoplus$}}_{ \\alpha \\in \\enma{\\mathbb{Z}}^n\/L} N_\\alpha$ be any object in\n${\\cal B}$. 
We define an object $M = \\oplus_{\\enma{\\mathbf{a}} \\in \\enma{\\mathbb{Z}}^n} M_\\enma{\\mathbf{a}}$ in\n${\\cal A}$ by setting $M_\\enma{\\mathbf{a}} := N_\\alpha$ for each $\\enma{\\mathbf{a}} \\in \\alpha$, by\nlifting each multiplication map $x_i:N_\\alpha \\rightarrow N_{\\alpha+\\enma{\\mathbf{e}}_i}$\nto maps $x_i:M_\\enma{\\mathbf{a}} \\rightarrow M_{\\enma{\\mathbf{a}}+\\enma{\\mathbf{e}}_i}$ for $\\enma{\\mathbf{a}} \\in \\alpha$, and\nby letting $\\enma{\\mathbf{z}}^\\enma{\\mathbf{b}}$ act on $M$ as the identity map from $M_\\enma{\\mathbf{a}} $ to\n$M_{\\enma{\\mathbf{a}}+\\enma{\\mathbf{b}}}$ for $\\enma{\\mathbf{b}} \\in L$. The module $M$ satisfies $\\pi(M) = N$,\nshowing that $\\pi$ is an equivalence of categories.\n \\Box\n\n\\ref{thmU} allows us to resolve the lattice module $M_L\\in\n{\\cal A}$ in order\nto resolve the quotient ring $\\pi(M_L) = S\/I_L\\in {\\cal B}$,\nand conversely.\n\n\\corollary{corU}\nA $\\enma{\\mathbb{Z}}^n$-graded complex of free $S[L]$-modules,\n $$ C : \\qquad \\;\\cdots \\;\n\\rightarrowbox{8pt}{$f_2$}\\; S[L]^{\\beta_1}\n\\rightarrowbox{8pt}{$f_1$}\\; S[L]^{\\beta_0}\n\\;\\rightarrowbox{8pt}{$f_0$}\\; S[L] \\;\\rightarrow \\; M_L \\;\\rightarrow \\; 0, $$\n is a (minimal) free resolution of $M_L$ if and only if its image\n $$ \\pi(C)\\,: \\, \\;\\cdots \\;\n\\rightarrowbox{8pt}{$\\pi(f_2)$}\\; S^{\\beta_1}\n\\rightarrowbox{8pt}{$\\pi(f_1)$}\\; S^{\\beta_0}\n\\;\\rightarrowbox{8pt}{$\\pi(f_0)$}\\; S\n\\;\\rightarrow \\; S\/I_L\n\\;\\rightarrow \\; 0 ,$$\nis a (minimal) $\\enma{\\mathbb{Z}}^n\/L$-graded resolution of $S\/I_L$\nby free $S$-modules.\n\n\\stdbreak\\noindent{\\bf Proof. }\n This follows immediately from \\ref{lem1} and \\ref{thmU}.\n \\Box\n\nSince $S[L]$ is a free $S$-module, every resolution $C$\nas in the previous corollary gives rise to\na resolution of $M_L$ as a $\\enma{\\mathbb{Z}}^n$-graded $S$-module. We demonstrate\nin an example how resolutions of $M_L$ over $S$\nare derived from resolutions of $S\/I_L$ over $S$.\n\n\\example{exU}\n Let $S=k[x_1,x_2,x_3]$ and $L=\\mathop{\\rm ker}\\nolimits \\bmatrix{2pt}{1 & 1 & 1 \\cr}\n \\subset \\enma{\\mathbb{Z}}^3$.\nThen $\\enma{\\mathbb{Z}}^3 \/L \\simeq \\enma{\\mathbb{Z}}$, $I_L = \\ideal{x_1-x_2,x_2-x_3}$,\nand $M_L$ is the module generated by all\nmonomials of the form $\\ x_1^i x_2^j x_3^{-i-j}$.\nThe ring $S\/I_L$ is resolved by the Koszul complex\n $$ 0 \\longrightarrow\n S(-2) \\rightarrowmat{4pt}{4pt}{\\!\\!x_2 - x_3 \\cr x_2 - x_1 \\cr}\n S(-1)^2 \\rightarrowmat{4pt}{6pt}{x_1 - x_2 & x_2 - x_3 \\cr}\n S \\longrightarrow S\/I_L . $$\nThis is a $\\enma{\\mathbb{Z}}^3\/L$-graded complex of free $S$-modules.\nAn inverse image under $\\pi$ equals\n $$ \\eqalign{\n 0 \\longrightarrow\n S[L]\\bigl(-(1,1,0)\\bigr)\n& \\rightarrowmat{4pt}{4pt}{\\!\\!x_2 - x_3 z_2 z_3^{-1}\\cr\n x_2- x_1 z_2 z_1^{-1} \\cr}\n S[L]\\bigl(-(1,0,0)\\bigr) \\oplus\n S[L]\\bigl(-(0,1,0)\\bigr) \\cr\n & \\qquad \\qquad\n\\rightarrowmat{4pt}{6pt}{x_1 - x_2 z_1 z_2^{-1} & x_2 - x_3 z_2\nz_3^{-1} \\cr} S[L] \\longrightarrow M_L .\\cr} $$\nWriting each term as a direct sum of free $S$-modules,\nfor instance, $\\, S[L]\\bigl(-(1,1,0)\\bigr) =\n\\oplus_{i+j+k=2} S \\bigl(-(i,j,k) \\bigr)$, we\nget a $\\enma{\\mathbb{Z}}^3$-graded minimal free resolution of $M_L$ over $S$:\n$$\n0 \\,\\, \\rightarrow\n\\mathop{\\hbox{$\\bigoplus$}}_{i+j+k=2}\\! S \\bigl(-(i,j,k) \\bigr)\n \\,\\, \\rightarrow\n\\mathop{\\hbox{$\\bigoplus$}}_{i+j+k=1}\\! 
S \\bigl(-(i,j,k) \\bigr)^2\n \\,\\, \\rightarrow\n\\mathop{\\hbox{$\\bigoplus$}}_{i+j+k=0}\\! S \\bigl(-(i,j,k) \\bigr)\n \\rightarrow\nM_L. \\Box\n$$\n\n\\vskip .2cm\n\nOur goal is to define and study\ncellular resolutions of the lattice ideal $I_L$.\nLet $X$ be a $\\enma{\\mathbb{Z}}^n$-graded cell complex whose vertices\nare the generators of $M_L$.\nEach cell $F \\in X$ is identified with its set of vertices,\nregarded as a subset of $L$.\nThe cell complex $X$ is called {\\it equivariant}\n\\ if $\\,F \\in X$ and $\\enma{\\mathbf{b}} \\in L$ implies that $F + \\enma{\\mathbf{b}} \\,\\in \\, X$,\nand if the incidence function satisfies\n$\\,\\varepsilon(F,F') = \\varepsilon(F+\\enma{\\mathbf{b}},F'+\\enma{\\mathbf{b}})$\nfor all $\\enma{\\mathbf{b}} \\in L$.\n\n\\lemma{ }\nIf $X$ is an equivariant $\\enma{\\mathbb{Z}}^n$-graded cell complex on $M_L$\nthen the cellular complex $\\enma{\\mathbf{F}}_X$ has the structure of a\n$\\enma{\\mathbb{Z}}^n$-graded complex of free $S[L]$-modules.\n\n\\stdbreak\\noindent{\\bf Proof. }\nThe group $L$ acts on the faces of $X$.\nLet $X\/L$ denote the set of orbits. For each orbit ${\\cal F}\n\\in X\/L $ we select a distinguished representative $F \\in {\\cal F}$,\nand we write $\\mathop{\\rm Rep}\\nolimits(X\/L)$ for the set of representatives.\nThe following map is an\nisomorphism of $\\enma{\\mathbb{Z}}^n$-graded $S$-modules, which\ndefines the structure of a free $S[L]$-module on $\\enma{\\mathbf{F}}_X$:\n$$ \\mathop{\\hbox{$\\bigoplus$}}_{F \\in \\mathop{\\rm Rep}\\nolimits(X\/L)} \\!\\!\\!\\! S[L] \\cdot e_F \\quad \\simeq \\quad\n \\mathop{\\hbox{$\\bigoplus$}}_{F \\in X }S \\cdot e_F \\, \\,\n\\,\\,\\, = \\,\\,\\, \\enma{\\mathbf{F}}_X , \\quad \\,\n\\enma{\\mathbf{z}}^\\enma{\\mathbf{b}} \\cdot e_F \\,\\, \\mapsto \\,\\, e_{F + \\enma{\\mathbf{b}}}\\ .$$\nThe differential $\\partial $ on $\\enma{\\mathbf{F}}_X$\nis compatible with the $S[L]$-action\non $\\enma{\\mathbf{F}}_X$ because the incidence\nfunction is $L$-invariant. For each $F \\in \\mathop{\\rm Rep}\\nolimits(X\/L)$\nand $\\enma{\\mathbf{b}} \\in L$ we have\n$$ \\eqalign{\n \\partial(\\enma{\\mathbf{z}}^\\enma{\\mathbf{b}} \\cdot e_F ) \\quad &= \\quad\n \\partial ( e_{F+\\enma{\\mathbf{b}}}) \\quad\n= \\quad \\sum_{F'\\in X,\\, F'\\ne\\emptyset} \\;\n \\varepsilon(F \\! + \\! \\enma{\\mathbf{b}},F' \\! + \\!\\enma{\\mathbf{b}}) \\; {m_{F \\!+\\! \\enma{\\mathbf{b}}}\n\\over m_{F'\\! +\\! \\enma{\\mathbf{b}}}} \\; e_{F'+\\enma{\\mathbf{b}}} \\cr\n& = \\,\\, \\sum_{F'\\in X,\\, F'\\ne\\emptyset} \\; \\! \\!\n\\varepsilon(F,F') \\; {m_{F} \\over m_{F'}} \\; \\enma{\\mathbf{z}}^\\enma{\\mathbf{b}} \\cdot e_{F'}\n\\quad = \\quad\n\\enma{\\mathbf{z}}^\\enma{\\mathbf{b}} \\cdot \\partial( e_F ) . \\cr} $$\nClearly, the differential $\\partial$ is homogeneous of degree $0$,\nwhich proves the claim. \\Box\n\n\\corollary{}\nIf $X$ is an equivariant $\\enma{\\mathbb{Z}}^n$-graded cell complex on $M_L$\nthen the cellular complex $\\enma{\\mathbf{F}}_X$ is exact over $S$ if and only if\nit is exact over $S[L]$.\n\n\\stdbreak\\noindent{\\bf Proof. } The $\\enma{\\mathbb{Z}}^n$-graded components of $\\enma{\\mathbf{F}}_X$ are\ncomplexes of $k$-vector spaces which are independent of our\ninterpretation of $\\enma{\\mathbf{F}}_X$ as an $S$-module or $S[L]$-module. 
\\Box\n\nIf $X$ is an equivariant $\\enma{\\mathbb{Z}}^n$-graded cell complex on $M_L$\nsuch that $\\enma{\\mathbf{F}}_X$ is exact, then\nwe call $\\enma{\\mathbf{F}}_X$ an {\\it equivariant cellular resolution} of $M_L$.\n\n\\corollary{cor17}\nIf $\\enma{\\mathbf{F}}_X$ is an equivariant cellular (minimal) resolution of $M_L$\nthen $\\, \\pi(\\enma{\\mathbf{F}}_X)\\,$ is a (minimal) resolution of $S\/I_L$\nby $\\enma{\\mathbb{Z}}^n\/L$-graded free $S$-modules.\n \\Box\n\nWe call $\\,\\pi(\\enma{\\mathbf{F}}_X)\\,$ a\n{\\it cellular resolution} of the lattice ideal $I_L$.\nLet $Q$ be an order ideal in the quotient poset $\\enma{\\mathbb{N}}^n\/L$.\nThen $Q + L$ is an order ideal in $\\enma{\\mathbb{N}}^n +L$, and\nthe restriction $\\,\\enma{\\mathbf{F}}_{X_{Q+L}}\\,$ is a complex\nof $\\enma{\\mathbb{Z}}^n$-graded free $S[L]$-modules. We set\n$\\,\\pi(\\enma{\\mathbf{F}}_X)_Q \\,:= \\,\\pi(\\enma{\\mathbf{F}}_{X_{Q+L}})$. This\nis a complex of $\\enma{\\mathbb{Z}}^n\/L$-graded free $S$-modules.\n\\ref{corK} implies\n\n\\proposition{restr}\nIf $\\pi(\\enma{\\mathbf{F}}_X)$ is a cellular resolution of $I_L$ and\n$Q$ is an order ideal in $\\enma{\\mathbb{N}}^n\/L$ which\ncontains all Betti degrees then\n$\\,\\pi(\\enma{\\mathbf{F}}_{X})_Q\\,$ is a cellular resolution of $I_L$.\n\nIn what follows we shall study two particular\ncellular resolutions of $I_L$.\n\n\\theorem{nicethm} The Taylor complex $\\Delta$ on $M_L$\nand the hull complex $\\mathop{\\rm hull}\\nolimits(M_L)$ are equivariant.\nThey define cellular resolutions\n$\\,\\pi(\\enma{\\mathbf{F}}_\\Delta)\\,$ and $\\,\\pi(\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M_L)}) \\,$ of $I_L$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nThe Taylor complex $\\Delta$\nconsists of all finite subsets of generators of $M_L$.\nIt has an obvious $L$-action. The hull complex also has an $L$-action: if\n$\\,F = \\mathop{\\rm conv}\\nolimits \\bigl( \\set{ t^{\\enma{\\mathbf{a}}_1},\\ldots,t^{\\enma{\\mathbf{a}}_s} } \\bigr) \\,$\nis a face of $\\mathop{\\rm hull}\\nolimits(M_L)$ then\n $\\,\\enma{\\mathbf{z}}^b \\cdot F =\n\\mathop{\\rm conv}\\nolimits \\bigl( \\set{ t^{\\enma{\\mathbf{a}}_1+\\enma{\\mathbf{b}}},\\ldots,t^{\\enma{\\mathbf{a}}_s+\\enma{\\mathbf{b}}} } \\bigr) \\,$\nis also a face of $\\mathop{\\rm hull}\\nolimits(M_L)$ for all $\\enma{\\mathbf{b}} \\in L$.\nIn both cases the incidence function $\\varepsilon$ is defined uniquely\nby the ordering of the elements in $L$. 
To ensure that\n$\\varepsilon$ is $L$-invariant, we\nfix an ordering which is $L$-invariant; for instance,\norder the elements of $L$ by the value of an\n$\\enma{\\mathbb{R}}$-linear functional whose coordinates are $\\enma{\\mathbb{Q}}$-linearly independent.\n\nBoth $\\pi(\\enma{\\mathbf{F}}_\\Delta)$ and $\\pi(\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M_L)})$\nare cellular resolutions of $I_L$\nby \\ref{cor17}.\n\\Box\n\nThe {\\it Taylor resolution} $\\pi(\\enma{\\mathbf{F}}_\\Delta)$ of $I_L$ has\nthe following explicit description.\nFor $\\alpha \\in \\enma{\\mathbb{N}}^n\/L$ let $\\mathop{\\rm fiber}\\nolimits(\\alpha)$ denote the\n(finite) set of all monomials $\\enma{\\mathbf{x}}^\\enma{\\mathbf{b}}$ with $\\enma{\\mathbf{b}} \\in \\alpha$.\nThus $S_\\alpha = k \\cdot \\mathop{\\rm fiber}\\nolimits(\\alpha)$.\nLet $E_i(\\alpha)$ be the collection of\nall $i$-element subsets $I$ of $\\mathop{\\rm fiber}\\nolimits(\\alpha)$\nwhose greatest common divisor $\\mathop{\\rm gcd}\\nolimits(I)$ equals $1$.\nFor $I \\in E_i(\\alpha)$ set $\\mathop{\\rm deg}\\nolimits(I) := \\alpha$.\n\n\\proposition{explicit}\nThe Taylor resolution $\\pi(\\enma{\\mathbf{F}}_\\Delta)$ of\na lattice ideal $I_L$ is isomorphic to the\n$\\enma{\\mathbb{Z}}^n\/L$-graded free $S$-module\n$\\,\\, \\mathop{\\hbox{$\\bigoplus$}}_{\\alpha \\in \\enma{\\mathbb{N}}^n\/L} S \\cdot E_i(\\alpha)\\,$\nwith the differential\n$$ \\partial(I)\\quad = \\quad \\sum_{m \\in I} \\mathop{\\rm sign}\\nolimits(m,I) \\cdot\n\\mathop{\\rm gcd}\\nolimits(I \\backslash \\set{m})\\cdot [I \\backslash \\set{m}]. \\eqno (3.1) $$\nIn this formula, $\\,[I \\backslash \\set{m}] $ denotes the\nelement of\n$ \\,E_{i-1}\\bigl( \\alpha - \\mathop{\\rm deg}\\nolimits(\\mathop{\\rm gcd}\\nolimits(I \\backslash \\set{m}))\\bigr)$\nwhich is obtained from $I \\backslash \\set{m}$\n by removing the common factor $\\,\\mathop{\\rm gcd}\\nolimits(I \\backslash \\set{m})$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nFor $\\enma{\\mathbf{b}} \\in \\enma{\\mathbb{Z}}^n$ let $F_i(\\enma{\\mathbf{b}})$ denote the\ncollection of $i$-element subsets of generators of $M_L$\nwhose least common multiple equals $\\enma{\\mathbf{b}}$.\nFor $J \\in F_i(\\enma{\\mathbf{b}})$ we have $\\mathop{\\rm lcm}\\nolimits(J) = \\enma{\\mathbf{x}}^\\enma{\\mathbf{b}}$. The\nTaylor resolution $\\enma{\\mathbf{F}}_\\Delta$ of $M_L$ equals\n$\\, \\mathop{\\hbox{$\\bigoplus$}}_{\\enma{\\mathbf{b}} \\in \\enma{\\mathbb{N}}^n + L} S \\cdot F_i(\\enma{\\mathbf{b}})\\,$\nwith differential\n$$ \\partial(J) \\quad = \\quad \\sum_{m \\in J} \\mathop{\\rm sign}\\nolimits(m,J) \\cdot\n{\\mathop{\\rm lcm}\\nolimits(J) \\over \\mathop{\\rm lcm}\\nolimits( J \\backslash \\set{m})} \\cdot J \\backslash \\set{m}.\n\\eqno (3.2) $$\nThere is a natural bijection between $F_i(\\enma{\\mathbf{b}})$ and $E_i(\\enma{\\mathbf{b}}+L)$,\nnamely, $\\,J \\, \\mapsto \\, \\set{ \\enma{\\mathbf{x}}^\\enma{\\mathbf{b}} \/ \\enma{\\mathbf{x}}^\\enma{\\mathbf{c}}\n\\,\\,|\\,\\,\\enma{\\mathbf{x}}^\\enma{\\mathbf{c}} \\in J} \\, = \\, I $.\nUnder this bijection we have\n$\\,{\\enma{\\mathbf{x}}^\\enma{\\mathbf{b}} \\over \\mathop{\\rm lcm}\\nolimits( J \\backslash \\set{m})}\n= \\mathop{\\rm gcd}\\nolimits(I \\backslash \\set{m})$.\nThe functor $\\pi$ identifies each $F_i(\\enma{\\mathbf{b}})$ with\n$E_i(\\enma{\\mathbf{b}}+L)$ and it takes (3.2) to (3.1). 
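For instance, for $L = \\mathop{\\rm ker}\\nolimits_\\enma{\\mathbb{Z}} \\pmatrix{ 1 & 1 & 1 } \\subset \\enma{\\mathbb{Z}}^3$\nand $\\enma{\\mathbf{b}} = (1,1,0)$, the pair $J = \\{ 1 ,\\, x_1 x_2 x_3^{-2} \\} \\in F_2(\\enma{\\mathbf{b}})$\ncorresponds to the pair $I = \\{ x_1 x_2 ,\\, x_3^2 \\} \\in E_2(\\enma{\\mathbf{b}}+L)$.\n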
\\Box\n\n\\corollary{}\nLet $Q$ be an order ideal in $\\enma{\\mathbb{N}}^n\/L$ which\ncontains all Betti degrees.\nThen $\\,\\pi(\\enma{\\mathbf{F}}_\\Delta)_Q\n= \\mathop{\\hbox{$\\bigoplus$}}_{\\alpha \\in Q} S E_i(\\alpha)\\,$\nwith differential (3.1) is a cellular resolution of $I_L$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nThis follows from \\ref{restr}, \\ref{nicethm}\nand \\ref{explicit}.\n\\Box\n\n\\example{} {\\sl (Generic lattice ideals) }\nThe lattice module $M_L$ is generic (in the sense of \\S 2)\nif and only if the ideal $I_L$ is generated by binomials with full support.\nSuppose that this holds. It was shown in [PS] that\nthe Betti degrees of $I_L$ form an order ideal $Q$\nin $\\enma{\\mathbb{N}}^n\/L$. \\ref{genmin} and \\ref{restr} imply that\nthe resolution $\\, \\pi(\\enma{\\mathbf{F}}_\\Delta)_Q $ is minimal and coincides with\nthe hull resolution $\\pi(\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M_L)})$.\n\\Box\n\n\n\\vskip .2cm\n\nThe remainder of this section is devoted to the hull resolution\nof $I_L$. We next show that the hull complex\n$\\mathop{\\rm hull}\\nolimits(M_L)$ is locally finite. This fact is nontrivial,\nin view of \\ref{lev}. It will imply that the hull\nresolution has finite rank over $S$.\n\nWrite each vector $\\enma{\\mathbf{a}} \\in L \\subset \\enma{\\mathbb{Z}}^n$ as a difference\n$\\enma{\\mathbf{a}} = \\enma{\\mathbf{a}}^+ - \\enma{\\mathbf{a}}^-$ of two nonnegative vectors\nwith disjoint support. A nonzero vector $\\enma{\\mathbf{a}} \\in L$\nis called {\\it primitive} if there is no vector\n$\\enma{\\mathbf{b}} \\in L \\backslash \\set{ \\enma{\\mathbf{a}}, \\enma{\\mathbf{0}} }$ such that\n$\\enma{\\mathbf{b}}^+ \\leq \\enma{\\mathbf{a}}^+$ and $\\enma{\\mathbf{b}}^- \\leq \\enma{\\mathbf{a}}^-$.\nThe set of primitive vectors is known to be finite\n[Stu, Theorem 4.7]. The set of binomials\n $\\,\\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}^+} - \\enma{\\mathbf{x}}^{\\enma{\\mathbf{a}}^-}\\,$ where $\\enma{\\mathbf{a}}$ runs over\nall primitive vectors in $L$ is called\nthe {\\it Graver basis} of the ideal $I_L$.\nThe Graver basis contains the universal\nGr\\\"obner basis of $I_L$ [Stu, Lemma 4.6].\n\n\\lemma{graver}\nIf $\\set{\\enma{\\mathbf{0}},\\enma{\\mathbf{a}}}$ is an edge of $\\mathop{\\rm hull}\\nolimits(M_L)$ then\n$\\enma{\\mathbf{a}}$ is a primitive vector in $L$.\n\n\\stdbreak\\noindent{\\bf Proof. }\nSuppose that $\\enma{\\mathbf{a}}= (a_1,\\ldots,a_n) $\nis a vector in $L$ which is not primitive,\nand choose $\\,\\enma{\\mathbf{b}} = (b_1,\\ldots,b_n)\n\\in L \\backslash \\set{ \\enma{\\mathbf{a}}, \\enma{\\mathbf{0}}}$ such that\n$\\enma{\\mathbf{b}}^+ \\leq \\enma{\\mathbf{a}}^+$ and $\\enma{\\mathbf{b}}^- \\leq \\enma{\\mathbf{a}}^-$.\nThis implies $\\,t^{b_i} + t^{a_i-b_i} \\le 1 + t^{a_i} \\,$ for\n$t \\gg 0$ and $i \\in \\set{1,\\ldots,n}$.\nIn other words, the vector $\\,t^{\\enma{\\mathbf{b}}} + t^{\\enma{\\mathbf{a}}-\\enma{\\mathbf{b}}}\\,$\nis componentwise smaller or equal to the vector\n$\\, t^\\enma{\\mathbf{0}} + t^\\enma{\\mathbf{a}} $. 
We conclude that the\nmidpoint of the segment $ \\, \\mathop{\\rm conv}\\nolimits \\set{ t^\\enma{\\mathbf{0}} , t^\\enma{\\mathbf{a}}}\\, $\nlies in $\\,\\mathop{\\rm conv}\\nolimits \\set{t^{\\enma{\\mathbf{b}}} , t^{\\enma{\\mathbf{a}}-\\enma{\\mathbf{b}}}} + \\enma{\\mathbb{R}}_+^n$,\nand hence $\\,\\mathop{\\rm conv}\\nolimits \\set{ t^\\enma{\\mathbf{0}} , t^\\enma{\\mathbf{a}}}$ is not an edge of\nthe polyhedron\n$\\,P_t \\, = \\, \\mathop{\\rm conv}\\nolimits \\set{\\, t^\\enma{\\mathbf{c}} \\,: \\, \\enma{\\mathbf{c}} \\in L} \\,+ \\, \\enma{\\mathbb{R}}_+^n$.\n \\Box\n\n\\theorem{ }\nThe hull resolution $\\pi(\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M_L)})$ is finite as an $S$-module.\n\n\\stdbreak\\noindent{\\bf Proof. }\nBy \\ref{graver} the vertex $\\enma{\\mathbf{0}}$ of $\\mathop{\\rm hull}\\nolimits(M_L)$ lies in only\nfinitely many edges. It follows that $\\enma{\\mathbf{0}}$ lies in only finitely\nmany faces of $\\mathop{\\rm hull}\\nolimits(M_L)$. The lattice $L$ acts transitively\non the vertices of $\\mathop{\\rm hull}\\nolimits(M_L)$, and hence every face of $\\mathop{\\rm hull}\\nolimits(M_L)$\nis $L$-equivalent to a face containing $\\enma{\\mathbf{0}}$.\nThe faces containing $\\enma{\\mathbf{0}}$ generate\n$\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M_L)}$ as an $S[L]$-module, and hence they\ngenerate $\\pi(\\enma{\\mathbf{F}}_{\\mathop{\\rm hull}\\nolimits(M_L)})$ as an $S$-module. \\Box\n\n\nA minimal free resolution of a lattice ideal $I_L$\ngenerally does not respect symmetries, but\nthe hull resolution does.\nThe following example illustrates this point.\n\n \\example{exE}\n {\\sl (The hypersimplicial complex as a hull resolution)}\n \\hfill \\break The lattice $\\,L \\,=\\, \\mathop{\\rm ker}\\nolimits_\\enma{\\mathbb{Z}} \\pmatrix{ 1 \\! & \\!1 &\n\\cdots & 1 }\\,$ in\n$\\enma{\\mathbb{Z}}^n$ defines the toric ideal\n$$ I_L \\quad = \\quad\n\\langle \\, x_i - x_j \\,\\,: \\,\\, 1 \\leq i < j \\leq n \\, \\rangle .$$\nThe minimal free resolution of $I_L$ is\nthe Koszul complex on $n-1$ of the generators $x_i-x_j$.\nSuch a minimal resolution does not respect the\naction of the symmetric group $S_n$ on $I_L$.\nThe hull resolution is the\nEagon-Northcott complex of the matrix\n{\\smallmath $\\,\\bmatrix{2pt}{1\\;\\, & 1\\;\\, & \\cdots & 1\\;\\, \\cr\n x_1 & x_2 & \\cdots & x_n \\cr}$}.\nThis resolution is not minimal but it retains the\n$S_n$-symmetry of $I_L$.\nIt coincides with the {\\sl hypersimplicial complex} studied\nby Gel'fand and MacPherson in [GM, \\S 2.1.3].\nThe basis vectors\nof the hypersimplicial complex are denoted $\\Delta_{\\ell}^I$\nwhere $I$ is a subset of $\\set{1,2,\\ldots,n}$ with $|I| \\geq 2$\nand $\\ell$ is an integer with $ 1 \\leq \\ell \\leq |I| - 1$.\nWe have $\\,\\Delta_1^{\\set{i,j}} \\mapsto x_i - x_j\\,$ and the\nhigher differentials act as\n$$ \\Delta_\\ell^I \\quad \\mapsto \\quad\n\\sum_{i \\in I} \\mathop{\\rm sign}\\nolimits(i,I) \\cdot x_i \\cdot \\Delta^{I \\setminus \\!\n\\set{i}}_{\\ell-1} \\, - \\, \\sum_{i \\in I} \\mathop{\\rm sign}\\nolimits(i,I) \\cdot \\Delta^{I \\setminus\n\\! 
\\set{i}}_{\\ell}, $$ where the first sum is zero if $\\ell=1$\nand the second sum is zero if $\\ell = \\abs{I}-1$.\n\\Box\n\n\\remark{curious}\nOur study suggests a {\\sl curious duality} of toric\nvarieties, under which the coordinate ring of the primal variety is\nresolved by a discrete subgroup of the dual variety.\nMore precisely, the hull resolution of $I_L$ is gotten by taking the\nconvex hull in $\\enma{\\mathbb{R}}^n$ of the points $t^\\enma{\\mathbf{a}}$ for $\\enma{\\mathbf{a}} \\in L$.\nThe Zariski closure of these points (as $t$ varies)\nis itself an affine toric variety, namely, it is the\nvariety defined by the lattice ideal\n$\\,I_{L^\\perp}$ where\n$L^\\perp$ is the lattice dual to $L$\nunder the standard inner product on $\\enma{\\mathbb{Z}}^n$.\n\nFor instance, in \\ref{exE} the primal toric\nvariety is the line $\\,(t , t , \\ldots , t)$\nand the dual toric variety is the\nhypersurface $x_1 x_2 \\cdots x_n = 1$.\nThat hypersurface forms a group under\ncoordinatewise multiplication,\nand we are taking the convex hull of a\ndiscrete subgroup to resolve the coordinate ring\nof the line $\\,(t , t , \\ldots , t)$. \\Box\n\n\\example{exV} {\\sl (The rational normal quartic curve in $P^4$)}\n\n\\vskip .1cm\n\\noindent\nLet $L = \\mathop{\\rm ker}\\nolimits_\\enma{\\mathbb{Z}} {\\smallmath\\bmatrix{2pt}{\n0 & 1 & 2 & 3 & 4 \\cr\n4 & 3 & 2 & 1 & 0 \\cr}}$.\nThe minimal free resolution of the lattice ideal $I_L$ looks like\n$ 0 \\rightarrow S^3 \\rightarrow S^8 \\rightarrow S^6 \\rightarrow I_L $.\nThe primal toric variety in the sense of \\ref{curious}\nis a curve in $P^4$ and the dual toric variety\nis the embedding of the $3$-torus into affine $5$-space\ngiven by the equations $\n\\, x_2 x_3^2 x_4^3 x_5^4 = x_1^4 x_2^3 x_3^2 x_4 = 1$.\nHere the hull complex $\\mathop{\\rm hull}\\nolimits(M_L)$ is simplicial,\nand the hull resolution of $I_L$ has the format\n$ 0 \\rightarrow S^4 \\rightarrow S^{16}\n \\rightarrow S^{20} \\rightarrow S^9\n \\rightarrow I_L $.\nThe nine classes of edges in $\\mathop{\\rm hull}\\nolimits(M_L)$\nare the seven quadratic binomials in $I_L$\nand the two cubic binomials\n$\\,x_3 x_4^2 - x_1 x_5^2 , \\, x_2^2 x_3 - x_1^2 x_5 $.\n\n\\vskip 1.2cm\n\n\\noindent\n{\\bf Acknowledgements. }\nWe thank Lev Borisov, David Eisenbud, Irena Peeva, Sorin\nPopescu, and Herb Scarf for helpful conversations. 
Dave Bayer and Bernd\nSturmfels are partially supported by the National Science Foundation.\nBernd Sturmfels is also supported by the David and Lucille Packard\nFoundation and a 1997\/98 visiting position at\nthe Research Institute for Mathematical Sciences\nof Kyoto University.\n\n\\bigskip \\bigskip\n\n\\references\n\n\n\\itemitem{[AB]} R.~Adin and D.~Blanc,\nResolutions of associative and Lie algebras,\npreprint, 1997.\n\n\\itemitem{[BHS]} I.~Barany, R.~Howe, H.~Scarf,\nThe complex of maximal lattice free simplices,\n{\\sl Mathematical Programming} {\\bf 66} (1994) Ser.~A, 273--281.\n\n\\itemitem{[BPS]} D.~Bayer, I.~Peeva and B.~Sturmfels,\nMonomial resolutions, preprint, 1996.\n\n\\itemitem{[BLSWZ]} A.~Bj\\"orner, M.~Las~Vergnas,\nB.~Sturmfels, N.~White and G.~Ziegler,\n{\\sl Oriented Matroids}, Cambridge University Press, 1993.\n\n\\itemitem{[BH]} W.~Bruns and J.~Herzog, {\\sl Cohen-Macaulay Rings},\nCambridge University Press, 1993.\n\n\\itemitem{[BH1]} W.~Bruns and J.~Herzog,\nOn multigraded resolutions, {\\sl Math.~Proc.~Cambridge Philos.~Soc.}\n{\\bf 118} (1995) 245--257.\n\n\\itemitem{[GM]} I.~M.~Gel'fand and R.~D.~MacPherson,\nGeometry in Grassmannians and a generalization of\nthe dilogarithm, {\\sl Advances in Math.}\n{\\bf 44} (1982), 279--312.\n\n\\itemitem{[Ho]} M.~Hochster,\nCohen-Macaulay rings, combinatorics and simplicial complexes,\nin {\\sl Ring Theory II}, eds.~B.R.~McDonald and R.~Morris,\nLecture Notes in Pure and Appl.~Math.~{\\bf 26},\nDekker, New York, (1977), 171--223.\n\n\\itemitem{[KS]} M.~Kapranov and M.~Saito,\nHidden Stasheff polytopes in algebraic K-theory and the space of\nMorse functions, preprint, 1997,\npaper \\# 192 in {\\tt http:\/\/www.math.uiuc.edu\/K-theory\/}.\n\n\\itemitem{[Mac]} S.~MacLane,\n{\\sl Categories for the Working Mathematician},\nGraduate Texts in Mathematics, No.~5,\nSpringer-Verlag, New York, 1971.\n\n\\itemitem{[PS]} I.~Peeva and B.~Sturmfels,\nGeneric lattice ideals, to appear in {\\sl Journal of the American\nMath.~Soc.}\n\n\\itemitem{[Ros]} I.~Z.~Rosenknop, Polynomial ideals that are generated by\nmonomials (Russian), {\\sl Moskov. Oblast. Ped. Inst. U\\v{c}en. Zap.} {\\bf 282}\n(1970), 151--159.\n\n\\itemitem{[Stu]} B.~Sturmfels,\n{\\sl Gr\\"obner Bases and Convex Polytopes},\nAMS University Lecture Series, Vol. 8, Providence RI, 1995.\n\n\\itemitem{[Stu2]} B.~Sturmfels,\nThe co-Scarf resolution, to appear in\n{\\sl Commutative Algebra and Algebraic Geometry},\nProceedings Hanoi 1996, eds.~D.~Eisenbud and N.V.~Trung,\nSpringer-Verlag.\n\n\\itemitem{[Tay]} D.~Taylor,\n{\\sl Ideals Generated by Monomials in an $R$-Sequence},\nPh.~D.~thesis, University of Chicago, 1966.\n\n\\itemitem{[Zie]} G.~Ziegler, {\\sl Lectures on Polytopes},\nSpringer, New York, 1995.\n\n\n\\vskip 1.8cm\n\n\\noindent\n Dave Bayer, Department of Mathematics,\nBarnard College, Columbia University,\nNew York, NY 10027, USA,\n{\\tt bayer@math.columbia.edu}.\n\n\\vskip .4cm\n\n\\noindent\nBernd Sturmfels,\nDepartment of Mathematics,\nUniversity of California,\nBerkeley, CA 94720, USA,\n{\\tt bernd@math.berkeley.edu}.\n\\bye\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\\label{sec:intro}\n\nIn recent years, the number of mobile subscribers and their traffic have increased rapidly. Mobile subscribers currently run multiple applications simultaneously on their smart phones; these applications require higher bandwidth and leave users increasingly constrained by the carrier's resources. 
Network providers are now offering multiple services such as multimedia telephony and mobile-TV \\cite{QoS_3GPP}. More spectrum is required to meet these demands \\cite{Carrier_Agg_1}. However, it is difficult to provide the required resources with a single frequency band due to the scarcity of the available radio spectrum. Therefore, aggregating different carriers' frequency bands is needed to utilize the radio resources across multiple carriers and allow a scalable expansion of the effective bandwidth delivered to the user terminal, leading to interband non-contiguous carrier aggregation \\cite{Carrier_Agg_2}.\n\nCarrier aggregation (CA) is one of the most distinct features of 4G systems including Long Term Evolution Advanced (LTE Advanced). Given that LTE requires wide carrier bandwidths, such as $10$ and $20$ MHz, CA needs to be taken into consideration when designing the system to overcome the spectrum scarcity challenges. With CA, as defined in \\cite{work-item}, two or more component carriers (CCs) of the same or different bandwidths can be aggregated to achieve wider transmission bandwidths between the evolved node B (eNodeB) and the UE. An overview of the CA framework and cases is presented in \\cite{CA-framework}. Many operators are willing to add the CA feature to their plans across a mixture of macro cells and small cells. This will provide capacity and performance benefits in areas where small cell coverage is available while enabling network operators to provide robust mobility management on their macro cell networks.\n\nIncreasing the utilization of the existing spectrum can significantly improve network capacity, data rates and user experience. Some spectrum holders such as government users do not use their entire allocated spectrum in every part of their geographic boundaries most of the time. Therefore, the National Broadband Plan (NBP) and the findings of the President's Council of Advisors on Science and Technology (PCAST) spectrum study have recommended making the under-utilized federal spectrum available for secondary use \\cite{PCAST}. Spectrum sharing enables wireless systems to harvest underutilized swathes of spectrum, which would vastly increase the efficiency of spectrum usage. Making more spectrum available can provide significant gain in mobile broadband capacity only if those resources can be aggregated efficiently with the existing commercial mobile system resources.\n\nThis non-contiguous carrier aggregation task is challenging. The challenges lie both in the hardware implementation and in the joint optimal allocation of resources. Hardware implementation challenges include the need for multiple oscillators, multiple RF chains, more powerful signal processing, and longer battery life \\cite{RebeccaThesis}. In order to allocate different carriers' resources optimally among mobile users in their coverage areas, a distributed resource allocation algorithm between the UEs and the eNodeBs is needed.\n\nMulti-stage resource allocation (RA) with carrier aggregation algorithms are presented in \\cite{Haya_Utility1,Haya_Utility3,Haya_Utility6}. The algorithm in \\cite{Haya_Utility1} uses a utility proportional fairness approach to allocate the primary and the secondary carriers' resources optimally among mobile users in their coverage area. The primary carrier first allocates its resources optimally among users in its coverage area. 
The secondary carrier then allocates optimal rates to users in its coverage area based on the users' applications and the rates allocated to them by the primary carrier. A RA with CA optimization problem is presented in \cite{Haya_Utility3} to allocate resources from the LTE Advanced carrier and the MIMO radar carrier to each UE in an LTE Advanced cell, based on the application running on the UE. A price selective centralized RA with CA algorithm is presented in \cite{Haya_Utility6} to allocate multiple carriers' resources optimally among users while giving each user the ability to select one of the carriers to be its primary carrier and the others to be its secondary carriers. The UE's decision is based on the carrier price per unit bandwidth. However, the multi-stage RA with CA algorithms presented in \cite{Haya_Utility1,Haya_Utility3,Haya_Utility6} guarantee optimal rate allocation but not optimal pricing.

In this paper, we focus on solving the problem of utility proportional fairness optimal RA with joint CA for multi-carrier cellular networks. The RA with joint CA algorithm presented in \cite{Ahmed_Utility4} fails to converge in high-traffic situations due to the fluctuation in the RA process. In this paper, we present a robust algorithm that solves the drawbacks in \cite{Ahmed_Utility4} and allocates multiple carriers' resources optimally among UEs in their coverage area in both high-traffic and low-traffic situations. Additionally, our proposed distributed algorithm outperforms the multi-stage RA with CA algorithms presented in \cite{Haya_Utility1,Haya_Utility3,Haya_Utility6}, as it guarantees that mobile users are assigned the optimal (minimum) price for resources. We formulate the multi-carrier RA with CA optimization problem in a convex optimization framework. We use logarithmic and sigmoidal-like utility functions to represent delay-tolerant and real-time applications, respectively, running on the mobile users' smart phones \cite{Ahmed_Utility1}. Our model supports both contiguous and non-contiguous carrier aggregation from one or more network providers. During the resource allocation process, our distributed algorithm allocates optimal resources from one or more carriers to provide the lowest resource price for the mobile users. In addition, we use a utility proportional fairness approach that ensures non-zero resource allocation for all users and gives real-time applications priority over delay-tolerant applications, since real-time applications require minimum encoding rates.

\subsection{Related Work}\label{sec:related}

There have been several works in the area of resource allocation optimization aiming to utilize the scarce radio spectrum efficiently.
The authors in \cite{kelly98ratecontrol,Internet_Congestion,Optimization_flow,Fair_endtoend} have used a strictly concave utility function to represent each user's elastic traffic and proposed distributed algorithms at the sources and the links to interpret the congestion control of communication networks. Their work focused only on elastic traffic and did not consider real-time applications, as these have non-concave utility functions, as shown in \cite{fundamental_design}. The authors in \cite{Utility_max-min} and \cite{Fair_allocation} have argued that the utility function, which represents the user application performance, is the one that needs to be shared fairly rather than the bandwidth.
In this paper, we use resource allocation to achieve a utility proportional fairness that maximizes user satisfaction. If bandwidth proportional fairness is applied through a max-min bandwidth allocation, users running delay-tolerant applications receive larger utilities than users running real-time applications, since real-time applications require minimum encoding rates and their utilities are equal to zero if they do not receive them.

The proportional fairness framework of Kelly introduced in \cite{kelly98ratecontrol} does not guarantee a minimum QoS for each user application. To overcome this issue, a resource allocation algorithm that uses a utility proportional fairness policy is introduced in \cite{Ahmed_Utility1}. We believe that this approach is more appropriate, as it respects the inelastic behavior of real-time applications. The utility proportional fairness approach in \cite{Ahmed_Utility1} gives real-time applications priority over delay-tolerant applications when allocating resources and guarantees that no user is allocated zero rate. In \cite{Ahmed_Utility1, Ahmed_Utility2} and \cite{Ahmed_Utility3}, the authors have presented optimal resource allocation algorithms to allocate single carrier resources optimally among mobile users. However, their algorithms do not support multi-carrier resource allocation. To incorporate the carrier aggregation feature, we have introduced a multi-stage resource allocation using carrier aggregation in \cite{Haya_Utility1}. In \cite{Haya_Utility2} and \cite{Haya_Utility4}, we present resource allocation with user discrimination algorithms to allocate the eNodeB resources optimally among mobile users with elastic and inelastic traffic. In \cite{Mo_ResourceBlock}, the authors have presented a radio resource block allocation optimization problem using a utility proportional fairness approach. The authors in \cite{Tugba_ApplicationAware} have presented an application-aware resource block scheduling approach for elastic and inelastic adaptive real-time traffic where users are assigned to resource blocks.

On the other hand, resource allocation for single cell multi-carrier systems has been given extensive attention in recent years \cite{Dual-Decomposition, Resource_allocation, Rate_Balancing}. In \cite{Fair_resource,Design_of_Fair,Fast_Algorithms,Optimal_and_near-optimal}, the authors have cast this challenge as optimization problems. Their objective is to maximize the overall cell throughput under constraints such as fairness and transmission power. However, transforming the problem into a utility maximization framework can achieve better user satisfaction rather than merely better system-centric throughput. Also, in practical systems, the challenge is to perform multi-carrier radio resource allocation for multiple cells. The authors in \cite{Downlink_dynamic,Centralized_vs_Distributed} suggested using a distributed resource allocation rather than a centralized one to reduce the implementation complexity. In \cite{Cooperative_Fair_Scheduling}, the authors propose a collaborative scheme in a multiple base stations (BSs) environment, where each user is served by the BS that has the best channel gain with that user.
The authors in \\cite{DownlinkRadio} have addressed the problem of spectrum resource allocation in carrier aggregation based LTE Advanced systems, with the consideration of UEs' MIMO capability and the modulation and coding schemes (MCSs) selection.\n\n\\subsection{Our Contributions}\\label{sec:contributions}\nOur contributions in this paper are summarized as:\n\\begin{itemize}\n\\item We consider the RA optimization problem with joint CA presented in \\cite{Ahmed_Utility4} that uses utility proportional fairness approach and solves for logarithmic and sigmoidal-like utility functions representing delay-tolerant and real-time applications, respectively.\n\\item We prove that the optimization problem is convex and therefore the global optimal solution is tractable. In addition, we present a robust distributed resource allocation algorithm to solve the optimization problem and provide optimal rates in high-traffic and low-traffic situations.\n\\item Our proposed algorithm outperforms that presented in \\cite{Ahmed_Utility4} by preventing the fluctuations in the RA process when the resources are scarce with respect to the number of users. It also outperforms the algorithms presented in \\cite{Haya_Utility1,Haya_Utility3,Haya_Utility6} as it guarantees that mobile users receive optimal price for resources.\n\\item We present simulation results for the performance of our RA algorithm and compare it with the performance of the multi-stage RA algorithm presented in \\cite{Haya_Utility1,Haya_Utility3,Haya_Utility6}\n\\end{itemize}\n\nThe remainder of this paper is organized as follows. Section \\ref{sec:Problem_formulation} presents the problem formulation. Section \\ref{sec:Proof} proves that the global optimal solution exists and is tractable. In Section \\ref{sec:Dual}, we discuss the conversion of the primal optimization problem into a dual problem. Section \\ref{sec:Algorithm} presents our distributed resource allocation algorithm with joint carrier aggregation for the utility proportional fairness optimization problem. In Section \\ref{sec:conv_analy}, we present convergence analysis for the allocation algorithm and a modification for robustness at peak-traffic hours. In section \\ref{sec:sim}, we discuss simulation setup, provide quantitative results along with discussion and compare the performance of the proposed algorithm with the one presented in \\cite{Haya_Utility1,Haya_Utility3,Haya_Utility6}. Section \\ref{sec:conclude} concludes the paper.\n\n\\section{Problem Formulation}\\label{sec:Problem_formulation}\n\nWe consider LTE mobile system consisting of $K$ carriers eNodeBs with $K$ cells and $M$ UEs distributed in these cells. The rate allocated by the $l^{th}$ carrier eNodeB to $i^{th}$ UE is given by $r_{li}$ where $l =\\{1,2, ..., K\\}$ and $i = \\{1,2, ...,M\\}$. Each UE has its own utility function $U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ that corresponds to the type of traffic being handled by the $i^{th}$ UE. Our objective is to determine the optimal rates that the $l^{th}$ carrier eNodeB should allocate to the nearby UEs. We express the user satisfaction with its provided service using utility functions that represent the degree of satisfaction of the user function with the rate allocated by the cellular network \\cite{DL_PowerAllocation} \\cite{fundamental_design} \\cite{UtilityFairness}. We assume the utility functions $U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ to be a strictly concave or a sigmoidal-like functions. 
The utility functions have the following properties:

\begin{itemize}
\item $U_i(0) = 0$ and $U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ is an increasing function of $r_{li}$ for all $l$.
\item $U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ is twice continuously differentiable in $r_{li}$ for all $l$.
\end{itemize}
In our model, we use the normalized sigmoidal-like utility function, as in \cite{DL_PowerAllocation}, that can be expressed as
\begin{equation}\label{eqn:sigmoid}
U_i(r_{1i}+r_{2i}+ ...+r_{Ki}) = c_i\Big(\frac{1}{1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}}-d_i\Big)
\end{equation}
where $c_i = \frac{1+e^{a_ib_i}}{e^{a_ib_i}}$ and $d_i = \frac{1}{1+e^{a_ib_i}}$, so that $U_i(0)=0$ and $U_i(\infty)=1$. We also use the normalized logarithmic utility function, as in \cite{UtilityFairness}, that can be expressed as
\begin{equation}\label{eqn:log}
U_i(r_{1i}+r_{2i}+ ...+r_{Ki}) = \frac{\log(1+k_i\sum_{l=1}^{K}r_{li})}{\log(1+k_ir_{max})}
\end{equation}
where $r_{max}$ is the rate required for the user to achieve 100\% utility and $k_i$ is the rate of increase of the utility with the allocated rate, so that $U_i(0)=0$ and $U_i(r_{max})=1$.
We consider the utility proportional fairness objective function given by
\begin{equation}\label{eqn:utility_fairness}
\underset{\textbf{r}}{\text{max}} \prod_{i=1}^{M}U_i(r_{1i} + r_{2i} + ... + r_{Ki})
\end{equation}
where $\textbf{r} =\{\textbf{r}_1, \textbf{r}_2,..., \textbf{r}_M\}$ and $\textbf{r}_i =\{r_{1i}, r_{2i},..., r_{Ki}\}$. The goal of this resource allocation objective function is to maximize the total system utility while ensuring proportional fairness between utilities (i.e., the product of the utilities of all UEs). This objective function inherently guarantees
\begin{itemize}
 \item non-zero resource allocation for all users; therefore, the corresponding resource allocation optimization problem provides a minimum QoS for all users;
 \item priority to users with real-time applications; therefore, the corresponding resource allocation optimization problem improves the overall QoS of the LTE system.
\end{itemize}

The basic formulation of the utility proportional fairness resource allocation problem is given by the following optimization problem:
\begin{equation}\label{eqn:opt_prob_fairness}
\begin{aligned}
& \underset{\textbf{r}}{\text{max}} & & \prod_{i=1}^{M}U_i(r_{1i} + r_{2i} + ...
+ r_{Ki}) \\
& \text{subject to} & & \sum_{i=1}^{M}r_{li} \leq R_l, \;\;\;l = 1,2, ...,K,\\
& & & r_{li} \geq 0, \;\;\;l = 1,2, ...,K,\;\; i = 1,2, ...,M
\end{aligned}
\end{equation}
where $R_l$ is the total available rate at the $l^{th}$ carrier eNodeB.

We prove in Section \ref{sec:Proof} that a unique tractable global optimal solution of the optimization problem (\ref{eqn:opt_prob_fairness}) exists.
\section{The Global Optimal Solution}\label{sec:Proof}

In the optimization problem (\ref{eqn:opt_prob_fairness}), since $\arg \underset{\textbf{r}} \max \prod_{i=1}^{M}U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ is equivalent to $\arg \underset{\textbf{r}} \max \sum_{i=1}^{M}\log(U_i(r_{1i}+r_{2i}+ ...+r_{Ki}))$ (the logarithm is strictly increasing), the optimization problem (\ref{eqn:opt_prob_fairness}) can be written as:

\begin{equation}\label{eqn:opt_prob_fairness_mod}
\begin{aligned}
& \underset{\textbf{r}}{\text{max}} & & \sum_{i=1}^{M}\log \Big(U_i(r_{1i} + r_{2i} + ... + r_{Ki})\Big) \\
& \text{subject to} & & \sum_{i=1}^{M}r_{li} \leq R_l, \;\;\;l = 1,2, ...,K,\\
& & & r_{li} \geq 0, \;\;\;l = 1,2, ...,K,\;\; i = 1,2, ...,M.
\end{aligned}
\end{equation}

\begin{lem}\label{lem:concavity}
The functions $\log(U_i(r_{1i} + ... + r_{Ki}))$ in the optimization problem (\ref{eqn:opt_prob_fairness_mod}) are strictly concave.
\end{lem}
\begin{proof}
In Section \ref{sec:Problem_formulation}, we assume that all the utility functions of the UEs are either strictly concave or sigmoidal-like.

In the strictly concave case, recall from the utility function properties in Section \ref{sec:Problem_formulation} that the utility function is positive, $ U_i(r_{1i} + ... + r_{Ki}) > 0$, increasing and twice differentiable with respect to $r_{li}$. It follows that $\frac{ \partial U_i(r_{1i} + ... + r_{Ki})}{\partial r_{li}} > 0$ and $\frac{\partial^2 U_i(r_{1i}+ ... + r_{Ki})}{\partial r_{li}^2} < 0$. Therefore, the function $\log(U_i(r_{1i} + r_{2i} + ... + r_{Ki}))$ in the optimization problem (\ref{eqn:opt_prob_fairness_mod}) satisfies
\begin{equation}\label{eqn:log_first_derivative}
\frac{\partial \log(U_i(r_{1i} + ... + r_{Ki}))}{\partial r_{li}} = \frac{\frac{\partial U_i}{\partial r_{li}}}{U_i} > 0
\end{equation}
and
\begin{equation}\label{eqn:log_second_derivative}
\frac{\partial ^2\log(U_i(r_{1i} + ... + r_{Ki}))}{\partial r_{li}^2} = \frac{\frac{\partial^2 U_i}{\partial r_{li}^2}U_i-(\frac{\partial U_i}{\partial r_{li}})^2}{U^2_i} < 0.
\end{equation}
Therefore, the natural logarithm $\log(U_i(r_{1i} + r_{2i} + ... + r_{Ki}))$ of a strictly concave utility function is also strictly concave. In particular, the natural logarithm of the logarithmic utility function in equation (\ref{eqn:log}) is strictly concave.

In the sigmoidal-like case, the normalized utility function is given by equation (\ref{eqn:sigmoid}) as $U_i(r_{1i} + r_{2i} + ... + r_{Ki}) = c_i\Big(\frac{1}{1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}}-d_i\Big)$.
For $0<\\sum_{l=1}^{K}r_{li}<\\sum_{l=1}^{K}R_l$, we have\n\\begin{equation*}\\label{eqn:sigmoid_bound}\n\\begin{aligned}\n0&{1+e^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)}}>\\frac{c_i}{1+c_id_i}\\\\\n0&<1-d_i({1+e^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)}})<\\frac{1}{1+c_id_i}\\\\\n\\end{aligned}\n\\end{equation*}\nIt follows that for $0<\\sum_{l=1}^{K}r_{li}<\\sum_{l=1}^{K}R_l$, we have the first and second derivative as\n\\begin{equation*}\\label{eqn:sigmoid_derivative}\n\\begin{aligned}\n\\frac{\\partial}{ \\partial r_{li}}\\log U_i(r_{1i} + ... + r_{Ki}) =& \\frac{a_i d_i e^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)}}{1-d_i(1+e^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)})} \\\\\n\\;\\;\\;& + \\frac{a_ie^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)}}{(1+e^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)})}>0\\\\\n\\frac{\\partial^2}{\\partial r_{li}^2}\\log U_i(r_{1i} + ... + r_{Ki}) =& \\frac{-a_i^2d_ie^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)}}{c_i\\Big(1-d_i(1+e^{-a(\\sum_{l=1}^{K}r_{li}-b_i)})\\Big)^2} \\\\\n\\;\\;\\;& + \\frac{-a_i^2e^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)}}{(1+e^{-a_i(\\sum_{l=1}^{K}r_{li}-b_i)})^2} < 0\\\\\n\\end{aligned}\n\\end{equation*}\nTherefore, the sigmoidal-like utility function $U_i(r_{1i}+...+r_{Ki})$ natural logarithm $\\log(U_i(r_{1i}+...+r_{Ki}))$ is strictly concave function. Therefore, all the utility functions in our model have strictly concave natural logarithm.\n\\end{proof}\n\\begin{thm}\\label{thm:global_soln}\nThe optimization problem (\\ref{eqn:opt_prob_fairness}) is a convex optimization problem and there exists a unique tractable global optimal solution.\n\\end{thm}\n\\begin{proof}\nIt follows from Lemma \\ref{lem:concavity} that for all UEs utility functions are strictly concave. Therefore, the optimization problem (\\ref{eqn:opt_prob_fairness_mod}) is a convex optimization problem \\cite{Boyd2004}. The optimization problem (\\ref{eqn:opt_prob_fairness_mod}) is equivalent to optimization problem (\\ref{eqn:opt_prob_fairness}), therefore it is a convex optimization problem. For a convex optimization problem, there exists a unique tractable global optimal solution \\cite{Boyd2004}.\n\\end{proof}\n\n\\section{The Dual Problem}\\label{sec:Dual}\n\nThe key to a distributed and decentralized optimal solution of the primal problem in (\\ref{eqn:opt_prob_fairness_mod}) is to convert it to the dual problem similar to \\cite{Ahmed_Utility1}, \\cite{kelly98ratecontrol} and \\cite{Low99optimizationflow}. The optimization problem (\\ref{eqn:opt_prob_fairness_mod}) can be divided into two simpler problems by using the dual problem. We define the Lagrangian\n\\begin{equation}\\label{eqn:lagrangian}\n\\begin{aligned}\nL(\\textbf{r},\\textbf{p}) = & \\sum_{i=1}^{M}\\log \\Big(U_i(r_{1i} + r_{2i} + ... + r_{Ki})\\Big)\\\\\n\t\t & -p_1(\\sum_{i=1}^{M}r_{1i} + z_1 - R_1) - ...\\\\\n\t\t & - p_K(\\sum_{i=1}^{M}r_{Ki} + z_K - R_K)\\\\\n = & \\sum_{i=1}^{M}\\Big({\\log(U_i(r_{1i} + r_{2i} + ... + r_{Ki}))-\\sum_{l=1}^{K}p_lr_{li}\\Big)}\\\\\n & + \\sum_{l=1}^{K} p_l(R_l-z_l)\\\\\n = & \\sum_{i=1}^{M}L_i(\\textbf{r}_i,\\textbf{p}) + \\sum_{l=1}^{K} p_l(R_l-z_l)\\\\\n\\end{aligned}\n\\end{equation}\nwhere $z_l\\geq 0$ is the $l^{th}$ slack variable and $p_l$ is Lagrange multiplier or the shadow price of the $l^{th}$ carrier eNodeB (i.e. the total price per unit rate for all the users in the coverage area of the $l^{th}$ carrier eNodeB) and $\\textbf{p}=\\{p_1,p_2,...,p_K\\}$. 
Therefore, the $i^{th}$ UE bid for rate from the $l^{th}$ carrier eNodeB can be written as $w_{li} = p_l r_{li}$, and we have $\sum_{i=1}^{M}w_{li} = p_l \sum_{i=1}^{M}r_{li}$. The first term in equation (\ref{eqn:lagrangian}) is separable in $\textbf{r}_i$, so we have $\underset{\textbf{r}}\max \sum_{i=1}^{M}({\log(U_i(r_{1i}+r_{2i}+...+r_{Ki}))-\sum_{l=1}^{K}p_lr_{li})} = \sum_{i=1}^{M}\underset{{\textbf{r}_i}}\max\big({\log(U_i(r_{1i}+r_{2i}+...+r_{Ki}))-\sum_{l=1}^{K}p_lr_{li}\big)}$.
The dual problem objective function can be written as
\begin{equation}\label{eqn:dual_obj_fn}
\begin{aligned}
D(\textbf{p}) = & \underset{{\textbf{r}}}\max \:L(\textbf{r},\textbf{p}) \\
= &\sum_{i=1}^{M}\underset{{\textbf{r}_i}}\max (L_i(\textbf{r}_i,\textbf{p})) + \sum_{l=1}^{K} p_l(R_l-z_l).
\end{aligned}
\end{equation}
The dual problem is given by
\begin{equation}\label{eqn:dual_problem}
\begin{aligned}
& \underset{{\textbf{p}}}{\text{min}}
& & D(\textbf{p}) \\
& \text{subject to}
& & p_l \geq 0, \;\;\;\;\;l = 1,2, ...,K.
\end{aligned}
\end{equation}
Setting the derivative of the dual objective with respect to $p_l$ to zero, we have
\begin{equation}\label{eqn:dual_max}
\frac{\partial D(\textbf{p})}{\partial p_l} = R_l-\sum_{i=1}^{M}r_{li} -z_l = 0,
\end{equation}
and substituting $\sum_{i=1}^{M}w_{li} = p_l \sum_{i=1}^{M}r_{li}$ gives
\begin{equation}\label{eqn:dual_new_obj}
p_l = \frac{\sum_{i=1}^{M}w_{li}}{R_l-z_l}.
\end{equation}
Now, we divide the primal problem (\ref{eqn:opt_prob_fairness_mod}) into two simpler optimization problems, one at the UEs and one at the eNodeBs. The $i^{th}$ UE optimization problem is given by:
\begin{equation}\label{eqn:opt_prob_fairness_UE}
\begin{aligned}
& \underset{{\textbf{r}_i}}{\text{max}}
& & \log(U_i(r_{1i} + r_{2i} + ... + r_{Ki}))-\sum_{l=1}^{K}p_lr_{li} \\
& \text{subject to}
& & p_l \geq 0\\
& & & r_{li} \geq 0, \;\;\;\;\; i = 1,2, ...,M,\; l = 1,2, ...,K.
\end{aligned}
\end{equation}

The second problem is the $l^{th}$ eNodeB optimization problem for rate proportional fairness, given by:
\begin{equation}\label{eqn:opt_prob_fairness_eNodeB}
\begin{aligned}
& \underset{p_l}{\text{min}}
& & D(\textbf{p}) \\
& \text{subject to}
& & p_l \geq 0.\\
\end{aligned}
\end{equation}
The minimization of the shadow price $p_l$ is achieved by the minimization of the slack variable $z_l \geq 0$ in equation (\ref{eqn:dual_new_obj}). Therefore, the maximum utilization of the $l^{th}$ eNodeB rate $R_l$ is achieved by setting the slack variable $z_l = 0$; in this case, we replace the inequality constraints of the primal problem (\ref{eqn:opt_prob_fairness_mod}) by equality constraints, and so we have $\sum_{i=1}^{M}w_{li} = p_l R_l$. Therefore, we have $p_l = \frac{\sum_{i=1}^{M}w_{li}}{R_l}$, where $w_{li} = p_l r_{li}$ is transmitted by the $i^{th}$ UE to the $l^{th}$ eNodeB.
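To make the decomposition concrete, the following Python sketch (an illustration under our own naming, not the paper's implementation) solves the $i^{th}$ UE subproblem (\ref{eqn:opt_prob_fairness_UE}) numerically for a single carrier with price $p$, and applies the $l^{th}$ eNodeB price update $p_l = \sum_{i=1}^{M}w_{li}/R_l$ derived above; the utility parameters and the numerical tolerances are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def log_u(r, a=3.0, b=20.0):
    # Natural logarithm of the normalized sigmoidal-like utility; the small
    # epsilon keeps log() finite near r = 0, where U vanishes.
    c = (1 + np.exp(a * b)) / np.exp(a * b)
    d = 1 / (1 + np.exp(a * b))
    return np.log(c * (1 / (1 + np.exp(-a * (r - b))) - d) + 1e-12)

def ue_rate(p, r_max=200.0):
    # UE subproblem: maximize log U_i(r) - p * r over r >= 0.
    res = minimize_scalar(lambda r: -(log_u(r) - p * r),
                          bounds=(1e-9, r_max), method='bounded')
    return res.x

def enodeb_price(bids, R):
    # eNodeB update with z_l = 0: shadow price p_l = sum_i w_li / R_l.
    return sum(bids) / R

r_star = ue_rate(p=0.05)   # rate the UE would request at price 0.05
w = 0.05 * r_star          # bid w_li = p_l * r_li sent to the eNodeB
\end{verbatim}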
The utility proportional fairness in the objective function of the optimization problem (\ref{eqn:opt_prob_fairness}) is guaranteed in the solution of the optimization problems (\ref{eqn:opt_prob_fairness_UE}) and (\ref{eqn:opt_prob_fairness_eNodeB}).


\section{Distributed Optimization Algorithm}\label{sec:Algorithm}

The distributed resource allocation algorithm in \cite{Ahmed_Utility4} for the optimization problems (\ref{eqn:opt_prob_fairness_UE}) and (\ref{eqn:opt_prob_fairness_eNodeB}) is a modified version of the distributed algorithms in \cite{Ahmed_Utility1, Ahmed_Utility2,Ahmed_Utility3}, \cite{kelly98ratecontrol} and \cite{Low99optimizationflow}, which are iterative solutions for allocating the network resources of a single carrier. The algorithm in \cite{Ahmed_Utility4} allocates resources from multiple carriers simultaneously under a utility proportional fairness policy. It is divided into the $i^{th}$ UE algorithm, shown in Algorithm 1 of \cite{Ahmed_Utility4}, and the $l^{th}$ carrier eNodeB algorithm, shown in Algorithm 2 of \cite{Ahmed_Utility4}. There, the $i^{th}$ UE starts with an initial bid $w_{li}(1)$, which is transmitted to the $l^{th}$ carrier eNodeB. The $l^{th}$ eNodeB calculates the difference between the received bid $w_{li}(n)$ and the previously received bid $w_{li}(n-1)$ and exits if it is less than a pre-specified threshold $\delta$, where we set $w_{li}(0) = 0$. If the difference is greater than the threshold, the $l^{th}$ eNodeB calculates the shadow price $p_l(n) = \frac{\sum_{i=1}^{M}w_{li}(n)}{R_l}$ and sends it to all UEs in its coverage area. The $i^{th}$ UE receives the shadow prices $p_{l}$ from all in-range carrier eNodeBs and compares them to find the first minimum shadow price $p_{\min}^{1}(n)$ and the corresponding carrier index $l_1 \in L$, where $L = \{1, 2, ..., K\}$. The $i^{th}$ UE solves for the $l_1$ carrier rate $r_{l_1i}(n)$ that maximizes $\log U_i(r_{1i}+...+r_{Ki}) - \sum_{l=1}^{K}p_l(n)r_{li}$ with respect to $r_{l_1i}$. The rate $r_{i}^{1}(n) = r_{l_1i}(n)$ is used to calculate the new bid $w_{l_1i}(n)=p_{\min}^{1}(n) r_{i}^{1}(n)$, and the $i^{th}$ UE sends this new bid to the $l_1$ carrier eNodeB. Then, the $i^{th}$ UE selects the second minimum shadow price $p_{\min}^{2}(n)$ and the corresponding carrier index $l_2 \in L$, and solves for the $l_2$ carrier rate $r_{l_2i}(n)$ that maximizes the same objective with respect to $r_{l_2i}$. The rate $r_{l_2i}(n)$ minus the rate from the $l_1$ carrier, $r_{i}^{2}(n) = r_{l_2i}(n) - r_{i}^{1}(n)$, is used to calculate the new bid $w_{l_2i}(n)=p_{\min}^{2}(n) r_{i}^{2}(n)$, which is sent to the $l_2$ carrier eNodeB. In general, the $i^{th}$ UE selects the $m^{th}$ minimum shadow price $p_{\min}^{m}(n)$ with carrier index $l_m \in L$ and solves for the $l_m$ carrier rate $r_{l_mi}(n)$ that maximizes $\log U_i(r_{1i}+...+r_{Ki}) - \sum_{l=1}^{K}p_l(n)r_{li}$ with respect to $r_{l_mi}$. The rate $r_{l_mi}(n)$ minus the rates of the $l_1, l_2, ..., l_{m-1}$ carriers, $r_{i}^{m}(n) = r_{l_mi}(n) - (r_{i}^{1}(n)+r_{i}^{2}(n)+...+r_{i}^{m-1}(n))$, is used to calculate the new bid $w_{l_mi}(n)=p_{\min}^{m}(n) r_{i}^{m}(n)$, which is sent to the $l_m$ carrier eNodeB.
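The per-UE carrier-selection step just described can be sketched as follows (a simplified Python illustration assuming a numerical solver for the rate subproblem; function and variable names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def ue_bids(log_u, prices, r_max=200.0):
    # One UE iteration: scan carriers from the cheapest shadow price upward
    # and ask each carrier only for the rate increment r_i^m that the
    # cheaper carriers did not already cover (clamped at zero).
    prices = np.asarray(prices)
    order = np.argsort(prices)        # carrier indexes l_1, l_2, ...
    bids, claimed = {}, 0.0
    for l in order:
        res = minimize_scalar(lambda r: -(log_u(r) - prices[l] * r),
                              bounds=(1e-9, r_max), method='bounded')
        inc = max(res.x - claimed, 0.0)
        claimed += inc
        bids[l] = prices[l] * inc     # bid w_li = p_min^m * r_i^m
    return bids                       # one bid per in-range carrier
\end{verbatim}
At each price the solver returns the total rate the UE would buy, so more expensive carriers typically contribute only once the cheaper carriers become congested and their prices rise across iterations.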
This process is repeated until $|w_{li}(n) -w_{li}(n-1)|$ is less than the threshold $\delta$ for all carriers $l$.

The distributed algorithm in \cite{Ahmed_Utility4} is designed to avoid allocating zero rate to any user (i.e. no user is dropped), a property inherited from the utility proportional fairness policy of the optimization problem, similar to \cite{Ahmed_Utility1}, \cite{Ahmed_Utility2} and \cite{Ahmed_Utility3}. In addition, the UE chooses from the nearby carrier eNodeBs the one with the lowest shadow price and starts requesting bandwidth from that carrier eNodeB. If the allocated rate is not enough, or the price of the resources increases due to high demand on that carrier eNodeB's resources from other UEs, the UE switches to another nearby carrier eNodeB with a lower resource price to be allocated the rest of the required resources. This is done iteratively until an equilibrium between demand and supply of resources is achieved and the optimal rates are allocated in the LTE mobile network. Figure \ref{fig:multiple_app_flow_centralized} shows a block diagram of the distributed RA algorithm.
\begin{figure}[t!]
\centering
 \includegraphics[width=0.8\plotwidth]{flow_diagram_carrier_aggergation}
 \caption{Flow diagram, under the assumption that the shadow price $p_1$ of the first carrier eNodeB is lower before the $n_1^{th}$ iteration, so the rate $r_{1i}$ of the $i^{th}$ user is allocated from it; after the $n_1^{th}$ iteration, the shadow price $p_2$ of the second carrier eNodeB is lower, so the rate $r_{2i}$ is allocated.}
 \label{fig:multiple_app_flow_centralized}
\end{figure}
\section{Convergence Analysis}\label{sec:conv_analy}
In this section, we present the convergence analysis of Algorithm 1 and 2 in \cite{Ahmed_Utility4} for different values of the carrier eNodeB rates $R_l$. This analysis is equivalent to a low-traffic and high-traffic hours analysis in cellular systems (e.g. a change in the number of active users $M$ and their traffic in the cellular system \cite{Ahmed_Utility2}).
\subsection{Drawback of Algorithm 1 and 2 in \cite{Ahmed_Utility4}}\label{sec:conv_drawbacks}
\begin{lem}\label{lem:slope_curve}
For a sigmoidal-like utility function $U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$, the slope curvature function $\frac{\partial \log U_i(r_{1i}+r_{2i}+ ...+r_{Ki})}{\partial r_{li}}$ has an inflection point at $\sum_{l=1}^{K}r_{li} = r_i^{s} \approx b_i$ and is convex for $\sum_{l=1}^{K}r_{li} > r_i^{s}$.
\end{lem}
\begin{proof}
For the sigmoidal-like function $U_i(r_{1i}+r_{2i}+ ...+r_{Ki}) = c_i\Big(\frac{1}{1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}}-d_i\Big)$, let $S_i(r_{li}) = \frac{\partial \log U_i(r_{1i}+r_{2i}+ ...+r_{Ki})}{\partial r_{li}}$ be the slope curvature function.
Then, we have that
\begin{equation}
\begin{aligned}\label{eqn:diff_slope}
\frac{\partial S_i}{\partial r_{li}} &= \frac{-a_i^2d_ie^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}}{c_i\Big(1-d_i(1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)})\Big)^2} \\
&- \frac{a_i^2e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}}{\Big(1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}\Big)^2}\\
\text{and}\\
\frac{\partial^2 S_i}{\partial r_{li}^2}& = \frac{a_i^3d_ie^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}(1-d_i(1-e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}))}{c_i\Big(1-d_i(1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)})\Big)^3} \\
+& \frac{a_i^3e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}(1-e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)})}{\Big(1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}\Big)^3}.\\
\end{aligned}
\end{equation}
We analyze the curvature of the slope of the natural logarithm of the sigmoidal-like utility function. For the first derivative, we have $\frac{\partial S_i}{\partial r_{li}}<0 \:\:\:\forall\: r_{li}$. The first term $S^1_i$ of $\frac{\partial^2 S_i}{\partial r_{li}^2}$ in equation (\ref{eqn:diff_slope}) can be written as
\begin{equation}\label{eqn:slope_fn}
S^1_i = \frac{a_i^3e^{a_ib_i}e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}\big(e^{a_ib_i}+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}\big)}{(e^{a_ib_i}-e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)})^3}
\end{equation}
and we have the following properties:
\begin{equation}\label{eqn:slope_fn_term1}
\left\{
\begin{array}{l l}
 \lim_{\sum_{l=1}^{K}r_{li} \rightarrow 0} S^1_i = \infty,\\
 \lim_{\sum_{l=1}^{K}r_{li} \rightarrow b_i} S^1_i = 0 \:\:\text{for} \:\:b_i 	\gg \frac{1}{a_i}.
\end{array} \right.
\end{equation}
For the second term $S^2_i$ of $\frac{\partial^2 S_i}{\partial r_{li}^2}$ in equation (\ref{eqn:diff_slope}), we have the following properties:
\begin{equation}\label{eqn:slope_fn_term2}
\left\{
\begin{array}{l l}
 S^2_i (r_{li}=b_{i}-\sum_{j\neq l} r_{ji}) = 0,\\
 S^2_i (r_{li}>b_{i}-\sum_{j\neq l} r_{ji}) > 0,\\
 S^2_i (r_{li}<b_{i}-\sum_{j\neq l} r_{ji}) < 0.
\end{array} \right.
\end{equation}
Therefore, for $b_i \gg \frac{1}{a_i}$, the second derivative $\frac{\partial^2 S_i}{\partial r_{li}^2}$ changes sign about $\sum_{l=1}^{K}r_{li} = r_i^{s} \approx b_i$, so the slope curvature function $S_i$ has an inflection point at $\sum_{l=1}^{K}r_{li} = r_i^{s} \approx b_i$ and is convex for $\sum_{l=1}^{K}r_{li} > r_i^{s} \approx b_i$.
\end{proof}
\begin{cor}\label{cor:sig_convergence}
For $\sum_{i \in \mathcal{M}^{l}}r_i^{\text{inf}} \ll R_l$ for all $l \in L$, where $r_i^{\text{inf}}$ is the inflection rate of the $i^{th}$ user utility and $\mathcal{M}^{l}$ is the set of active users in the coverage area of the $l^{th}$ carrier eNodeB, Algorithm 1 and 2 in \cite{Ahmed_Utility4} converge to the global optimal rates.
\end{cor}
\begin{proof}
At the optimal rates, the first-order optimality condition of the $i^{th}$ UE problem (\ref{eqn:opt_prob_fairness_UE}) gives
\begin{equation}\label{eqn:slope_equation}
S_i(r_{li}) - p_l = 0,
\end{equation}
i.e. the steady state shadow price $p_{ss}$ equals the slope of $\log U_i$ at the allocated rates. The algorithm in \cite{Ahmed_Utility4} is guaranteed to converge to the global optimal solution when the slopes $S_i(r_{li})$ of the natural logarithms $\log U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ of all the utility functions are in the convex region of these functions, similarly to the analysis of logarithmic functions in \cite{kelly98ratecontrol} and \cite{Low99optimizationflow}. Therefore, the natural logarithms of the sigmoidal-like functions $\log U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ converge to the global optimal solution for $\sum_{l=1}^{K}r_{li} > r_i^s \approx b_i$. The inflection point of the sigmoidal-like function $U_i(r_{1i}+r_{2i}+ ...+r_{Ki})$ is at $r_i^{\text{inf}} = b_i$. For $\sum_{i \in \mathcal{M}^{l}}r_i^{\text{inf}} \ll R_l$, the algorithm in \cite{Ahmed_Utility4} allocates rates $\sum_{l=1}^{K}r_{li}>b_i$ to all users.
Since $S_i(r_{li})$ is convex for $\sum_{l=1}^{K}r_{li}>r_i^s \approx b_i$, the optimal solution can be achieved by Algorithm 1 and 2 in \cite{Ahmed_Utility4}.
We have from equation (\ref{eqn:slope_equation}), and since $S_i(r_{li})$ is convex for $\sum_{l=1}^{K}r_{li} > r_i^s \approx b_i$, that $p_{ss}< S_i(\sum_{l=1}^{K}r_{li} =\max_{i \in \mathcal{M}^{l}} b_i)$, where $S_i(\sum_{l=1}^{K}r_{li} =\max_{i \in \mathcal{M}^{l}} b_i) = \frac{a_{i_{\max}} d_{i_{\max}} }{1-2d_{i_{\max}} }+\frac{a_{i_{\max} }}{2}$ and $i_{\max} = \arg \max_{i \in \mathcal{M}^{l}} b_i$.
\end{proof}
We define the set $\mathcal{M}^{\mathcal{L}}:=\{i:r_{li} \neq 0 \: \forall \: l\in \mathcal{L}, r_{li} = 0 \: \forall \: l\notin \mathcal{L} \}$ of active users covered exclusively by the set of carrier eNodeBs $\mathcal{L} \subseteq L$. Then, we have the following corollary.
\begin{cor}\label{cor:sig_fluctuate}
If $\sum_{i \in \mathcal{M}^{\mathcal{L}}}r_i^{\text{inf}}> \sum_{l \in \mathcal{L}}R_l$ and the global optimal shadow price satisfies $p_{ss} \approx \frac{a_id_i e^{\frac{a_ib_i}{2}}}{1-d_i(1+e^{\frac{a_ib_i}{2}})} + \frac{a_ie^{\frac{a_ib_i}{2}}}{(1+e^{\frac{a_ib_i}{2}})}$ for some $i \in \mathcal{M}^{\mathcal{L}}$, then the solution given by Algorithm 1 and 2 in \cite{Ahmed_Utility4} fluctuates about the global optimal rates.
\end{cor}
\begin{proof}
For the sigmoidal-like function $U_i(r_{1i}+r_{2i}+ ...+r_{Ki}) = c_i\Big(\frac{1}{1+e^{-a_i(\sum_{l=1}^{K}r_{li}-b_i)}}-d_i\Big)$, it follows from Lemma \ref{lem:slope_curve} that for $\sum_{i \in \mathcal{M}^{\mathcal{L}}}r_i^{\text{inf}}> \sum_{l \in \mathcal{L}}R_l$ there exists ${i \in \mathcal{M}^{\mathcal{L}}}$ such that the optimal rates satisfy $\sum_{l=1}^{K}r_{li}^{\text{opt}} < b_i$. Therefore, if $p_{ss} \approx \frac{a_id_i e^{\frac{a_ib_i}{2}}}{1-d_i(1+e^{\frac{a_ib_i}{2}})} + \frac{a_ie^{\frac{a_ib_i}{2}}}{(1+e^{\frac{a_ib_i}{2}})}$ is the optimal shadow price for the optimization problem (\ref{eqn:opt_prob_fairness_mod}), then a small change in the shadow price $p_l(n)$ in the $n^{th}$ iteration can lead the rate $r_{li}(n)$ (the root of $S_i(r_{li}) - p_l(n) =0$) to fluctuate between the concave and convex regions of the slope curve $S_i(r_{li})$ of the $i^{th}$ user.
This causes fluctuation in the bid $w_{li}(n)$ sent to the eNodeB and, in turn, fluctuation in the shadow price $p_l(n)$ set by the eNodeB. Therefore, the iterative solution of Algorithm 1 and 2 in \cite{Ahmed_Utility4} fluctuates about the global optimal rates $\sum_{l=1}^{K}r_{li}^{\text{opt}}$.
\end{proof}
\begin{thm}\label{thm:sig_not_conv}
Algorithm 1 and 2 in \cite{Ahmed_Utility4} do not converge to the global optimal rates for all values of $R_l$.
\end{thm}
\begin{proof}
This follows directly from Corollary \ref{cor:sig_convergence} and Corollary \ref{cor:sig_fluctuate}.
\end{proof}

\begin{algorithm}[tb]
\caption{The $i^{th}$ UE Algorithm}\label{alg:UE_FK}
\begin{algorithmic}
\STATE {Send initial bid $w_{li}(1)$ to the $l^{th}$ carrier eNodeB (where $l \in L = \{1, 2, ..., K\}$)}
\LOOP
	\STATE {Receive shadow prices $p_{l\in L}(n)$ from all in-range carrier eNodeBs}
	\IF {STOP from all in-range carrier eNodeBs} %
	 
		\STATE {Calculate allocated rates $r_{li} ^{\text{opt}}=\frac{w_{li}(n)}{p_l(n)}$}
		\STATE {STOP}
	\ELSE
		\STATE{Set $p_{\min}^{0} = \{\}$ and $r_{i}^{0}=0$}
		\FOR{$m = 1 \to K$}
		 \STATE{$p_{\min}^{m}(n) = \min (\textbf{p} \setminus \{p_{\min}^{0},p_{\min}^{1},...,p_{\min}^{m-1}\})$}
		 \STATE{$l_m = \{l \in L : p_{l}=\min (\textbf{p} \setminus \{p_{\min}^{0}, p_{\min}^{1}, ..., p_{\min}^{m-1}\}) \}$}
		 \COMMENT{$l_m$ is the index of the corresponding carrier}
		 \STATE {Solve $r_{l_mi}(n) = \arg \underset{r_{l_mi}}\max \Big(\log U_i(r_{1i}+ ... + r_{Ki}) - \sum_{l=1}^{K}p_l(n)r_{li}\Big)$ for the $l_m$ carrier eNodeB}
		 \STATE {$r_{i}^{m}(n) = r_{l_mi}(n)-\sum_{j=0}^{m-1}r_{i}^{j}(n)$}
		 \IF{$r_{i}^{m}(n)<0$}
			 \STATE{Set $r_{i}^{m}(n)=0$}
		 \ENDIF
		 
		 \STATE {Calculate new bid $w_{l_mi} (n)= p_{\min}^{m}(n) r_{i}^{m}(n)$}
 
		 \IF {$|w_{l_mi}(n) -w_{l_mi}(n-1)| >\Delta w(n)$} %
			 \STATE {$w_{l_mi}(n) =w_{l_mi}(n-1) + \text{sign}(w_{l_mi}(n) -w_{l_mi}(n-1))\Delta w(n)$}
			 \COMMENT {$\Delta w = h_1 e^{-\frac{n}{h_2}}$ or $\Delta w = \frac{h_3}{n}$}
		 \ENDIF
 
		 \STATE {Send new bid $w_{l_mi} (n)$ to the $l_m$ carrier eNodeB}
		\ENDFOR
	\ENDIF
\ENDLOOP
\end{algorithmic}
\end{algorithm}
\subsection{Solution using Algorithm \ref{alg:UE_FK} and \ref{alg:eNodeB_FK}}\label{sec:conv_solution}
For a robust algorithm, we add a fluctuation decay function to the algorithm presented in \cite{Ahmed_Utility4}, as shown in Algorithm \ref{alg:UE_FK}. Our robust algorithm ensures convergence for all values of the carrier eNodeB maximum rates $R_l$. The rates allocated by Algorithm \ref{alg:UE_FK} and \ref{alg:eNodeB_FK} coincide with those of Algorithm 1 and 2 in \cite{Ahmed_Utility4} for $\sum_{i \in \mathcal{M}^{l}}r_i^{\text{inf}} \ll R_l \:\: \forall \:\: l \in L$. For $\sum_{i \in \mathcal{M}^{\mathcal{L}}}r_i^{\text{inf}}> \sum_{l \in \mathcal{L}}R_l$, the robust algorithm avoids the fluctuation in the non-convergent region discussed in the previous subsection. This is achieved by adding a convergence measure $\Delta w(n)$ that senses the fluctuation in the bids $w_{li}$. In case of fluctuation, it decreases the step size between the current and the previous bid, $w_{li}(n) -w_{li}(n-1)$, for every user $i$ using a \textit{fluctuation decay function}.
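A minimal Python sketch of this damping step (using the exponential decay form listed next; $h_1$ and $h_2$ are illustrative constants):
\begin{verbatim}
import numpy as np

def damped_bid(w_new, w_prev, n, h1=10.0, h2=50.0):
    # Clamp the bid step to Delta_w(n) when the new bid moves too far from
    # the previous one, as in Algorithm 1; Delta_w(n) = h1 * exp(-n / h2).
    delta_w = h1 * np.exp(-n / h2)
    if abs(w_new - w_prev) > delta_w:
        return w_prev + np.sign(w_new - w_prev) * delta_w
    return w_new
\end{verbatim}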
The fluctuation decay function can take one of the following forms:
\begin{itemize}
\item \textit{Exponential function}: $\Delta w(n) = h_1 e^{-\frac{n}{h_2}}$.
\item \textit{Rational function}: $\Delta w(n) = \frac{h_3}{n}$.
\end{itemize}
Here, $h_1, h_2, h_3$ can be adjusted to change the rate of decay of the bids $w_{li}$.

\begin{rem}
The fluctuation decay function can be included in either the UE or the eNodeB algorithm.
\end{rem}
In our model, we add the decay part to the UE algorithm, as shown in Algorithm \ref{alg:UE_FK}.

\begin{algorithm}[tb]
\caption{The $l^{th}$ eNodeB Algorithm}\label{alg:eNodeB_FK}
\begin{algorithmic}
\LOOP
	\STATE {Receive bids $w_{li}(n)$ from the UEs}
	\COMMENT{Let $w_{li}(0) = 0\:\:\forall i$}
			\IF {$|w_{li}(n) -w_{li}(n-1)|<\delta \:\:\forall i$} %
	 		\STATE {Allocate rates $r_{li}^{\text{opt}}=\frac{w_{li}(n)}{p_l(n)}$ to the $i^{th}$ UE}
	 		\STATE {STOP}
		\ELSE
	\STATE {Calculate $p_l(n) = \frac{\sum_{i=1}^{M}w_{li}(n)}{R_l}$}
	\STATE {Send new shadow price $p_l(n)$ to all UEs}
	\ENDIF
\ENDLOOP
\end{algorithmic}
\end{algorithm}

\section{Simulation Results}\label{sec:sim}

Algorithm \ref{alg:UE_FK} and \ref{alg:eNodeB_FK} were applied in MATLAB to various logarithmic and sigmoidal-like utility functions with different parameters. The simulation results showed convergence to the global optimal rates. In this section, we present the simulation results for two carriers in a heterogeneous network (HetNet) that consists of one macro cell, one small cell and $12$ active UEs, as shown in Figure \ref{fig:RA_system_model}. The UEs are divided into two groups. The $1^{st}$ group of UEs (indexes $i=\{1,2,3,4,5,6\}$) is located in the macro cell under the coverage area of the $1^{st}$ carrier (C1) eNodeB only, whereas the $2^{nd}$ group of UEs (indexes $i=\{7,8,9,10,11,12\}$) is located in the small cell under the coverage area of both the $1^{st}$ carrier (C1) and the $2^{nd}$ carrier (C2) eNodeBs. We use three normalized sigmoidal-like functions, expressed by equation (\ref{eqn:sigmoid}) with different parameters: $a = 5$, $b=10$, which approximates a step function at rate $r =10$ (e.g. VoIP) and is the utility of the UEs with indexes $i=\{1,7\}$; $a = 3$, $b=20$, which approximates an adaptive real-time application with inflection point at rate $r=20$ (e.g. standard definition video streaming) and is the utility of the UEs with indexes $i=\{2,8\}$; and $a = 1$, $b=30$, which also approximates an adaptive real-time application, with inflection point at rate $r=30$ (e.g. high definition video streaming), and is the utility of the UEs with indexes $i=\{3,9\}$, as shown in Figure \ref{fig:utility}. We use three logarithmic functions, expressed by equation (\ref{eqn:log}) with $r_{max} =100$ and different $k_i$ parameters, as approximations for delay-tolerant applications (e.g. FTP): $k =15$ for the UEs with indexes $i=\{4,10\}$, $k =3$ for the UEs with indexes $i=\{5,11\}$, and $k = 0.5$ for the UEs with indexes $i=\{6,12\}$, as shown in Figure \ref{fig:utility}. A summary is given in Table \ref{table:parameters}. A three-dimensional view of the sigmoidal-like utility function $U_i(r_{1i} +r_{2i})$ is shown in Figure \ref{fig:Utility3D}.
\begin{figure}[tb]
\centering
\includegraphics[width=3.5in]{RA_system_model}
\caption{System model with two groups of users.
The $1^{st}$ group with UE indexes $i =\{ 1,2,3,4,5,6\}$, and the $2^{nd}$ group with UE indexes $i = \{7,8,9,10,11,12\}$.}
\label{fig:RA_system_model}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=3.5in]{utility}
\caption{The users' utility functions $U_i(r_{1i}+r_{2i})$ used in the simulation (three sigmoidal-like functions and three logarithmic functions).}
\label{fig:utility}
\end{figure}

\begin{figure}[tb]
\centering
\includegraphics[width=3.5in]{haya}
\caption{The sigmoidal-like utility $U_i(r_{1i} + r_{2i}) = c_i(\frac{1}{1+e^{-a_i(r_{1i} + r_{2i}-b_i)}}-d_i)$ of the $i^{th}$ user, where $r_{1i}$ is the rate allocated by the $1^{st}$ carrier eNodeB and $r_{2i}$ is the rate allocated by the $2^{nd}$ carrier eNodeB.}
\label{fig:Utility3D}
\end{figure}
\begin{table}[tb]
\caption{Users and their application utilities}
\label{table:parameters}
\begin{center}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{| l | l | l | }
 \hline
 \multicolumn{2}{|c|}{Application Utility Parameters} & \multicolumn{1}{|c|}{User Indexes} \\ \hline
 Sig1 & Sig $a=5,\:\: b=10$ & $i=\{1,7\}$ \\ \hline
 Sig2 & Sig $a=3,\:\: b=20$ & $i=\{2,8\}$ \\ \hline
 Sig3 & Sig $a=1,\:\: b=30$ & $i=\{3,9\}$ \\ \hline
 Log1 & Log $k=15,\:\: r_{max}=100$ & $i=\{4,10\}$ \\ \hline
 Log2 & Log $k=3,\:\: r_{max}=100$ & $i=\{5,11\}$ \\ \hline
 Log3 & Log $k=0.5,\:\: r_{max}=100$ & $i=\{6,12\}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Allocated Rates for $30\le R_1\le200$ and $R_2=70$}
In the following simulations, we set $\delta =10^{-3}$; the $1^{st}$ carrier eNodeB rate $R_1$ takes values between $30$ and $200$ with a step of $10$, and the $2^{nd}$ carrier eNodeB rate is fixed at $R_2 = 70$. In Figure \ref{fig:ri_versus_R1}, we show the final allocated optimal rates $r_i=r_{1i}+r_{2i}$ of the different users for different values of the $1^{st}$ carrier eNodeB total rate $R_1$, and observe how the proposed rate allocation algorithm converges when the eNodeBs' available resources are abundant or scarce. In Figure \ref{fig:r1i_versus_R1}, we show the rates allocated to the $1^{st}$ group of UEs by the C1 eNodeB only, since the C2 eNodeB is not within these users' range; we observe that the rates allocated to these users increase with $R_1$. Figure \ref{fig:r1i+r2i_versus_R1} shows the final rates allocated to the $2^{nd}$ group of UEs by both the C1 and C2 eNodeBs. Since these users are located under the coverage area of both the macro cell and the small cell, they are allocated rates jointly using the proposed RA with joint CA approach. Figures \ref{fig:r1i_versus_R1} and \ref{fig:r1i+r2i_versus_R1} show that, by using the RA with joint CA algorithm, no user is allocated zero rate (i.e. no user is dropped). The majority of the eNodeBs' resources are allocated to the UEs running adaptive real-time applications until these reach their inflection rates; the eNodeBs then allocate more resources to the UEs with delay-tolerant applications, since under the utility proportional fairness policy real-time application users bid higher than delay-tolerant application users.

In Figure \ref{fig:ri_versus_R1_SmallCell}, we show the rates allocated to the $2^{nd}$ group of users, located under the coverage area of both the macro cell and the small cell eNodeBs, by each of the two carriers' eNodeBs as the $1^{st}$ carrier eNodeB resources increase.
In Figure \\ref{fig:r1i_versus_R1_SmallCellUsers} and \\ref{fig:r2i_versus_R1_SmallCellUsers}, when the resources available at C2 eNodeB (i.e. $R_2$) is more than that at C1 eNodeB, we observe that most of the $2^{nd}$ group rates are allocated by C2 eNodeB. However, the delay tolerant applications are not allocated much resources since most of $R_2$ is allocated to the real-time applications. With the increase in C1 eNodeB resources $R_1$, we observe a gradual increase in the $2^{nd}$ group rates allocated to real-time applications from C1 eNodeB and a gradual decrease from C2 eNodeB resources allocated to real-time-applications. This shift in the resource allocation increases the available resources in C2 eNodeB to be allocated to $2^{nd}$ group delay tolerant applications by C2 eNodeB.\n\\begin{figure}[tb]\n \\centering\n \\subfigure[The rates allocated $r_{1i}$ from the $1^{st}$ carrier eNodeB (i.e. the macro cell eNodeB) to users of the $1^{st}$ group (i.e. $i = 1, 2, 3, 4, 5, 6$).]{%\n \\label{fig:r1i_versus_R1}\n \\includegraphics[width=3.5in]{r1i_versus_R1}\n }\\\\%\n\\subfigure[The rates $r_{1i} + r_{2i}$ allocated from $1^{st}$ and $2^{nd}$ carriers eNodeBs (i.e. the macro cell and the small cell eNodeBs) to users of the $2^{nd}$ group (i.e. $i = 7, 8, 9, 10, 11, 12$).]{%\n \\label{fig:r1i+r2i_versus_R1}\n \\includegraphics[width=3.5in]{r1i+r2i_versus_R1}\n }%\n\\caption{The allocated rates $\\sum_{l=1}^{K}r_{li}$ of the two groups of users verses $1^{st}$ carrier rate $30