diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhcuu" "b/data_all_eng_slimpj/shuffled/split2/finalzzhcuu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhcuu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLet $\\{ (\\xi _t ,\\eta _t) , t \\geqslant 0 \\}$ \nbe a L\\'evy process on $\\mathbb R ^{2}$.\nThe generalized Ornstein-Uhlenbeck process $\\{V_t , t \\geqslant 0 \\}$ on $\\mathbb R$ \nbased on $\\{ (\\xi _t ,\\eta _t) , t \\geqslant 0 \\}$ with initial condition $V_0$\nis defined as\n\\begin{equation}\\label{1.1}\nV_t = e^{-\\xi _t }\\left( V_0 + \\int _{0}^{t} e^{\\xi _{s-}} {d}\\eta _s \\right) , \n\\quad t\\geqslant 0,\n\\end{equation}\nwhere $V_0$ is a random variable independent of \n$\\{(\\xi _t ,\\eta _t) , t\\geqslant 0\\}$.\nThis process has recently been well-studied by \nCarmona, Petit, and Yor \\cite{CPY97}, \\cite{CPY01}, Erickson and Maller \\cite{EM05}, and \nLindner and Maller \\cite{LM05}.\n\nLindner and Maller \\cite{LM05} find that the generalized Ornstein-Uhlenbeck \nprocess $\\{V_t , t \\geqslant 0 \\}$ based on $\\{ (\\xi _t ,\\eta _t) , t \\geqslant 0 \\}$ \n turns out to be\n a stationary process with a suitable choice of $V_0$ if and only if \n\\begin{equation}\\label{1.2}\nP\\left(\n\\int _{0}^{\\infty -} e^{-\\xi_{s-}}dL_s \\text{ exists and is finite}\\right)=1,\n\\end{equation}\nwhere \n\\begin{equation}\\label{1.2a}\n\\int _{0}^{\\infty -} e^{-\\xi_{s-}}dL_s =\n\\lim _{t \\rightarrow \\infty} \\int _{0}^{t} e^{-\\xi_{s-}}dL_s \n\\end{equation}\nand $\\{ (\\xi _t ,L_t) , t \\geqslant 0 \\}$ \nis a L\\'evy process on $\\mathbb R^2$ defined by\n\\begin{equation}\\label{1.3}\nL_t=\\eta_t+\\sum_{00$ such that $h_X(x)>0$ for all $x\\geqslant c$ and that\n$\\{Y_t\\}$ is not the zero process. Then\n\\begin{equation}\\label{2.1}\nP\\left(\n\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s \\text{ exists and is finite}\\right)=1\n\\end{equation}\nif and only if\n\\begin{equation}\\label{2.2}\n\\lim_{t\\to\\infty}X_t=+\\infty\\text{ a.\\,s.\\ and }\\int_{|y|\\geqslant e^c}\n\\frac{\\log|y|}{h_X(\\log|y|)} \\nu_Y (dy)<\\infty,\n\\end{equation}\nwhere $|y|$ is the Euclidean norm of $y \\in \\mathbb R^d$.\n\\end{thm}\n\n\\begin{proof}\nFirst, for $d=1$, this theorem is established in \\cite{EM05}.\nSecond, for $j=1,\\ldots,d$, the $j$th coordinate process $\\{Y_t^{(j)}, t\\ges0\\}$\nis a L\\'evy process on $\\mathbb R$ with L\\'evy measure \n$\\nu_{Y^{(j)}}(B)=\\int_{\\mathbb R^d} 1_B(y_j)\\nu_Y (dy)$ for any Borel set $B$ in \n$\\mathbb R$\nsatisfying $0\\not\\in B$, where $y=(y_1,\\ldots,y_d)$. Third,\nthe property \\eqref{2.1} is equivalent to\n\\begin{equation}\\label{2.1a}\nP\\left(\n\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s^{(j)} \\text{ exists and is finite}\\right)=1\n\\quad\\text{for }j=1,\\ldots,d.\n\\end{equation}\n\nNext, we claim that the following \\eqref{2.3} and \\eqref{2.4}\nare equivalent:\n\\begin{gather}\n\\int_{|y|>M}\\frac{\\log |y|}{h_X(\\log |y|)} \\nu_Y(dy)<\\infty\\quad\n\\text{ for some }M\\geqslant e^c,\\label{2.3}\\\\\n\\int_{\\{y\\colon |y_j|>M\\}}\n\\frac{\\log |y_j|}{h_X(\\log |y_j|)} \\nu_Y(dy)<\\infty,\\quad\nj=1,\\ldots,d,\\quad\\text{for some }M\\geqslant e^c.\\label{2.4}\n\\end{gather}\nPut $f(u)=\\log u\/h_X(\\log u)$ for $u\\geqslant e^c$. This $f(u)$ is not necessarily\nincreasing for all $u\\geqslant e^c$. We use the words {\\em increasing} and \n{\\em decreasing} in the wide sense allowing flatness. 
But\n$f(u)$ is increasing for sufficiently large $u$ ( $>M_0$, say), because, for $x>c$,\n\\[\n\\frac{h_X(x)}{x}=\\frac{h_X(c)}{x}+\\frac{1}{x} \\int_c^x n(y)dy\n\\]\nwith $n(y)=\\nu_X(\\,(y,\\infty)\\,)$ and, with \n$d\/dx$ meaning the right derivative, we have\n\\begin{align*}\n&\\frac{d}{dx}\\left( \\frac{1}{x} \\int_c^x n(y)\ndy\\right)=\\frac{1}{x^2}\\left(-\\int_c^x n(y)\ndy+xn(x)\\right)\\\\\n&\\qquad=\\frac{1}{x^2}\\left(\\int_c^x (n(x)-n(y))\ndy+cn(x)\\right)<0\n\\end{align*}\nfor sufficiently large $x$\nif $n(c)>0$ (note that $\\int_c^x (n(x)-n(y))dy$\nis nonpositive and decreasing).\nThus we see that \\eqref{2.3} implies \\eqref{2.4}.\nIndeed, letting $M_1 = M\\lor M_0$, we have\n\\[\n\\int_{\\{y\\colon |y_j|>M_1\\}} f(|y_j|)\\nu_Y(dy)\\leqslant \\int_{\\{y\\colon \n|y_j|>M_1\\}} f(|y|)\\nu_Y(dy)\\leqslant \n\\int_{|y|>M_1} f(|y|)\\nu_Y(dy)<\\infty.\n\\]\nIn order to show that \\eqref{2.4} implies \\eqref{2.3}, let $g(x)=h_X(x)$ for \n$x\\geqslant c$ and $=h_X(c)$ for $-\\infty< xM_1} f(|y|)\\nu_Y(dy)\\leqslant\\int_{|y|>M_1} f(|y_1|+\\cdots+|y_d|)\n\\nu_Y(dy)\\\\\n&\\qquad\\leqslant\n\\int_{|y|>M_1}\\frac{\\log(|y_1|+\\cdots+|y_d|+1)}{h_X(\\log(|y_1|+\\cdots+|y_d|))}\n\\nu_Y(dy)\\\\\n&\\qquad\\leqslant\n\\sum_{j=1}^d \\int_{|y|>M_1}\\frac{\\log(|y_j|+1)}{h_X(\\log(|y_1|+\\cdots+|y_d|))}\n\\nu_Y(dy),\\\\\n&\\qquad=\\sum_{j=1}^d \\int_{|y|>M_1}\\frac{\\log(|y_j|+1)}{g(\\log(|y_1|+\\cdots+|y_d|))}\n\\nu_Y(dy)\\\\\n&\\qquad\\leqslant\\sum_{j=1}^d \\int_{|y|>M_1}\\frac{\\log(|y_j|+1)}{g(\\log(|y_j|))}\n\\nu_{\\eta}(dy)\\\\\n&\\qquad\\leqslant\\sum_{j=1}^d \\left(\\int_{|y_j|>M_1}\n\\frac{\\log(|y_j|+1)}{g(\\log(|y_j|))}\n\\nu_Y(dy)+\\int_{|y_j|\\leqslant M_1,\\,|y|>M_1}\\frac{\\log(|y_j|+1)}{g(\\log(|y_j|))}\n\\nu_Y(dy)\\right).\n\\end{align*}\nThe first integral in each summand is finite due to} \\eqref{2.4} \n and the second integral is also finite because the integrand is bounded.\nThis finishes the proof of equivalence of \\eqref{2.3} and \\eqref{2.4}.\n\nNow assume that \\eqref{2.2} holds. Then \\eqref{2.4} holds. Hence, by the theorem\nfor $d=1$, $\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s^{(j)}$ exists and is finite \na.\\,s.\\ for all $j$ such that $\\{Y_t^{(j)}\\}$ is not the zero process.\nFor $j$ such that $\\{Y_t^{(j)}\\}$ is the zero process, we have\n$\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s^{(j)}=0$. Hence \\eqref{2.1a} holds, that is,\n\\eqref{2.1} holds.\n\nConversely, assume that \\eqref{2.1} holds.\nLet\n\\[\nI_j=\\int_{\\{y\\colon |y_j|\\geqslant e^c\\}}\n\\frac{\\log |y_j|}{h_X(\\log |y_j|)} \\nu_Y(dy).\n\\]\nSince $\\{Y_t\\}$ is not the zero process, $\\{Y_t^{(j)}\\}$ is not the zero process\nfor some $j$. Hence, by the theorem for $d=1$, $\\lim_{t\\to\\infty}X_t=+\\infty$ \na.\\,s.\\ and $I_j<\\infty$ for such $j$. For $j$ such that $\\{Y_t^{(j)}\\}$ is the \nzero process, $\\nu_{Y^{(j)}}=0$ and $I_j=0$. 
Hence we have \\eqref{2.4} and thus\n\\eqref{2.2} holds due to the equivalence of \\eqref{2.3} and \\eqref{2.4}.\n\\end{proof}\n\n\\begin{rem}\\label{r2.1}\n(i) Suppose that $\\{X_t\\}$ satisfies $0-\\infty$ and $00$;\\\\\n(b$'$) $\\int_1^{\\infty} \\nu_X(\\,(y,\\infty)\\,)dy=\\infty$ and \\eqref{2.6} holds.\\\\\nSee also Doney and Maller \\cite{DM02}.\n\n(iii) If $\\lim_{t\\to\\infty}X_t=+\\infty$ a.\\,s., then $h_X(x)>0$ for all large\n$x$, as is explained in \\cite{EM05} after their Theorem 2.\n\\end{rem}\n\nWhen $\\{X_t\\}$ and $\\{Y_t\\}$ are\nindependent, the result in Remark \\ref{r2.1} (i) can be extended to more general\nexponential integrals of L\\'evy processes.\n\n\\begin{thm}\\label{t2.2}\nSuppose that $\\{X_t\\}$ and $\\{Y_t\\}$ are\nindependent and that $00$. Then\n\\begin{equation}\\label{2.7}\nP\\left(\n\\int _{0}^{\\infty -} e^{-(X_{s-})^{\\alpha}}dY_s \\text{ exists and is finite}\\right)=1\n\\end{equation}\nif and only if\n\\begin{equation}\\label{2.8}\n\\int_{\\mathbb R^d} (\\log^+ |y|)^{1\/\\alpha}\\nu_Y(dy)<\\infty.\n\\end{equation}\n\\end{thm}\n\nWe use the following result, which is a part of Proposition 4.3 of \n\\cite{S05b}.\n\n\\begin{prop}\\label{p2.1}\nLet $f$ be a locally square-integrable nonrandom function on $[0,\\infty)$\nsuch that there are positive constants $\\alpha$, $c_1$, and $c_2$ satisfying\n\\begin{equation*}\ne^{-c_2 s^{\\alpha}}\\leqslant f(s)\\leqslant e^{-c_1 s^{\\alpha}}\\quad\\text{for all large $s$.}\n\\end{equation*}\nThen \n\\[\nP\\left(\n\\int _{0}^{\\infty -} f(s)dY_s \\text{ exists and is finite}\\right)=1\n\\]\nif and only if \\eqref{2.8} holds.\n\\end{prop}\n\n\\begin{proof}[Proof of Theorem \\ref{t2.2}] \nLet $E[X_1]=b$. By assumption, $00$, and define \n$$\nT_c = \\inf \\{ t\\colon X_t =c\\}.\n$$\nSince we are assuming that $X_{t}$ does not have positive jumps and that\n$00$. This is Samorodnitsky's remark mentioned in \\cite{KLM06}.\nThe integral $\\int_0^{\\infty-} \\exp(-N_{s-}) ds$ is a special case of (ii) with\n$\\alpha=1$.\n\n\\begin{proof}[Proof of Theorem \\ref{t3.2}]\n(i) Let $Z=\\int_0^{\\infty-} e^{-N_{s-}} dY_s$.\nIf $\\{Y_t\\}$ is the zero process, then $Z=0$. \nIf $\\{Y_t\\}$ is not the zero process, then existence and finiteness of $Z$\nfollows from Theorem \\ref{t2.1}. Let $T_n=\\inf\\{s\\ges0\\colon N_s=n\\}$. \nClearly $T_n$ is finite and tends to infinity as $n\\to\\infty$ a.\\,s. We have\n\\begin{equation*}\nZ=\\sum_{n=0}^{\\infty}\\int_{T_n}^{T_{n+1}} e^{-N_{s-}} dY_s=\\sum_{n=0}^{\\infty}\ne^{-n}(Y(T_{n+1})-Y(T_n)).\n\\end{equation*}\nFor each $n$, $T_n$ is a stopping time for $\\{(N_s,Y_s)\\colon s\\ges0\\}$. Hence\n$\\{(N(T_n +s)-N(T_n), Y(T_n +s)-Y(T_n)), s\\ges0\\}$ and $\\{(N_s,Y_s), 0\\leqslant s\\leqslant\nT_n\\}$ are independent and the former process is identical in law with \n$\\{(N_s,Y_s), s\\ges0\\}$. It follows that the family\n$\\{Y(T_{n+1})-Y(T_n), n=0,1,2,\\ldots\\}$ is independent and identically distributed.\nThus, denoting $W_n= Y(T_{n+1})-Y(T_n)$, we have representation\n\\begin{equation}\\label{3.4}\nZ=\\sum_{n=0}^{\\infty} e^{-n} W_n,\n\\end{equation}\nwhere $W_0,W_1.\\ldots$ are independent and identically distributed\n and $W_n\\overset{\\mathrm d}{=} Y(T_1)$ ( $\\overset{\\mathrm d}{=}$ stands for\n\\lq\\lq has the same law as\"). 
Consequently we have\n\\begin{equation}\\label{3.5}\nZ=W_0+e^{-1}Z',\n\\end{equation}\nwhere $W_0$ and $Z'$ are independent and $Z'\\overset{\\mathrm d}{=} Z$.\nThe distribution of $W_0$ is infinitely divisible, since $W_0=Y(T_1)\\overset{\\mathrm d}{=} U_1$,\nwhere $\\{U_s\\}$ is a L\\'evy process given by subordination of $\\{Y_s\\}$ by a \ngamma process.\nHere we use our assumption of independence of $\\{N_t\\}$ and $\\{Y_t\\}$. Thus\n$\\mu$ is\n$e^{-1}$-semi-selfdecomposable and hence infinitely divisible.\nAn alternative proof of the infinite divisibility of $\\mu$ is to look at \nthe representation \\eqref{3.4} and to use that $\\mathcal L(Y(T_1))$\nis infinitely divisible. \n\n(ii) \nUse the representation \\eqref{3.4} with $W_n\\overset{\\mathrm d}{=} U_1$,\nwhere we obtain a L\\'evy process $\\{U_s\\}$ by subordination of $\\{Y_s\\}$ \nby a gamma process.\nSince gamma distributions are selfdecomposable, the results of Sato \\cite{S01b}\non inheritance of selfdecomposability in subordination\nguarantee that $\\mathcal L(U_1)$ is selfdecomposable under our assumption on $\\{Y_s\\}$.\nHence $\\mu$ is selfdecomposable, as selfdecomposability is preserved under \nconvolution and convergence.\nFurther, since selfdecomposability implies $b$-semi-selfdecomposability for each\n$b$, \\eqref{3.5} shows that $\\mu$ is of class $L_1(e^{-1},\\mathbb R^d)$.\n\n(iii) \nThe process $\\{Y_t\\}$ is a compound Poisson process on $\\mathbb R$ with $\\nu_Y$ \nconcentrated on the integers (see Corollary 24.6 of \\cite{S}). Let us consider\nthe L\\'evy measure $\\nu^{(0)}$ of $Y(T_1)$. \nLet $a>0$ be the parameter of the Poisson process $\\{N_t\\}$.\nAs in the proofs of (i)\nand (ii), $Y(T_1)\\overset{\\mathrm d}{=} U_1$, where \n$\\{U_s\\}$ is given by subordination of $\\{Y_s\\}$, by a gamma process\nwhich has L\\'evy measure $x^{-1}e^{-ax}dx$. \nHence, using Theorem 30.1 of \\cite{S}, we see that\n\\begin{equation*}\n\\nu^{(0)}(B)=\\int_0^{\\infty}P(Y_s\\in B)s^{-1}e^{-as}ds\n\\end{equation*}\nfor any Borel set $B$ in $\\mathbb R$. Thus $\\nu^{(0)}(\\mathbb R\\setminus\\mathbb Z)=0$.\n\nSuppose that $\\{Y_t\\}$ is not a decreasing process. Then some positive integer\nhas positive $\\nu^{(0)}$-measure. Denote by $p$ the minimum of such positive integers.\nSince $\\{Y_t\\}$ is compound Poisson, $P(Y_s=kp)>0$ for any $s>0$ for \n$k=1,2,\\ldots$. Hence $\\nu^{(0)}(\\{kp\\})>0$ for $k=1,2,\\ldots$. Therefore,\nfor each nonnegative\ninteger $n$, the L\\'evy measure $\\nu^{(n)}$ of $e^{-n}Y(T_1)$ satisfies\n$\\nu^{(n)}(\\{e^{-n}kp\\})>0$ for $k=1,2,\\ldots$. \nClearly, $\\nu^{(n)}$ is also discrete.\nThe representation \\eqref{3.4} shows that\n\\[\n\\nu_{\\mu}=\\sum_{n=0}^{\\infty} \\nu^{(n)}.\n\\]\nHence, $\\nu_{\\mu}$ is discrete and\n\\[\n\\nu_{\\mu}(\\{e^{-n}kp\\})>0\\quad\\text{for all $n=0,1,2,\\ldots$ and $k=1,2,\\ldots$\\;\\,.} \n\\]\nThus the points in $(0,\\infty)$ of positive $\\nu_{\\mu}$-measure are dense in\n$(0,\\infty)$.\n\nSimilarly, if $\\{Y_t\\}$ is not an increasing process, then the points in \n$(-\\infty,0)$ of positive $\\nu_{\\mu}$-measure are dense in $(-\\infty,0)$.\n\\end{proof}\n\nThe following remarks give information on continuity properties of the law $\\mu$.\nA distribution on $\\mathbb R^d$ is called\nnondegenerate if its support is not contained in any \naffine subspace of dimension $d-1$. 
\n\n\\begin{rem}\\label{r3.1}\n(i) Any nondegenerate selfdecomposable distribution on $\\mathbb R^d$ for $d\\ges1$\nis absolutely continuous (with respect to Lebesgue measure on $\\mathbb R^d$)\n although, for $d\\ges2$, its L\\'evy measure is not necessarily\nabsolutely continuous. This is proved by Sato \\cite{S82} (see also Theorem 27.13\nof \\cite{S}). \n\n(ii) Nondegenerate semi-selfdecomposable distributions on $\\mathbb R^d$ for $d\\ges1$\nare absolutely continuous or continuous singular, as Wolfe \\cite{W83} proves\n(see also Theorem 27.15 of \\cite{S}).\n\\end{rem}\n\n\\vskip 5mm\n\\section{An example of type $G$ random variable}\n\nIn Maejima and Niiyama \\cite{MNi05}, an improper integral \n\\begin{equation}\\label{4.1}\nZ= \\int_0^{\\infty -} e^{-(B_s+\\lambda s)}dS_s\n\\end{equation}\nwas studied,\nin relation to a stationary solution of the stochastic differential equation\n\\begin{equation*}\ndZ_t = - \\lambda Z_{t} dt + Z_{t-} dB_t + dS_t, \\quad t\\geqslant 0,\n\\end{equation*} \nwhere $\\{B_t, t\\geqslant 0\\}$ is a standard Brownian motion on $\\mathbb R$, $\\lambda >0$, and\n$\\{S _t, t\\geqslant 0\\}$ is a symmetric $\\alpha$-stable L\\'evy process with $0<\\alpha \\leqslant 2$\non $\\mathbb R$, independent of $\\{B_t\\}$.\nThey showed that $Z$ is {\\it of type $G$} in the sense that $Z$ is a variance mixture\nof a standard normal random variable by some infinitely divisible distribution.\nNamely, $Z$ is of type $G$ if \n\\begin{equation*}\nZ\\overset{\\mathrm d}{=} V^{1\/2}W\n\\end{equation*}\nfor some nonnegative infinitely divisible random variable\n$V$ and a standard normal random variable $W$ independent of each other.\nEquivalently, $Z$ is of type $G$ if and only if $Z\\overset{\\mathrm d}{=} U_1$, where $\\{U_t, t\\ges0\\}$ \nis given by subordination of a standard Brownian motion.\nIf $Z$ is of type $G$, then $\\mathcal L(V)$ is uniquely \ndetermined by $\\mathcal L(Z)$ (Lemma 3.1 of \\cite{S01b}).\n\nThe $Z$ in \\eqref{4.1} is a special case of those exponential integrals of L\\'evy\nprocesses which we are dealing with. Thus Theorem \\ref{t3.1} says that \nthe law of $Z$ is selfdecomposable. But the class of type $G$ distributions\n(the laws of type $G$ random variables) is neither larger nor smaller \nthan the class of symmetric selfdecomposable distributions. \nAlthough the proof that $Z$ is of type $G$ is found in \\cite{MNi05},\nthe research report is not well distributed.\nHence we give their proof below for readers.\nWe will show that the law of $Z$ belongs to a special \nsubclass of selfdecomposable distributions.\n\n\\begin{thm}\\label{t4.1}\nUnder the assumptions on $\\{B_t\\}$ and $\\{S_t\\}$ stated above, \n$Z$ in \\eqref{4.1} is of type $G$ and furthermore the mixing distribution for \nvariance, $\\mathcal L(V)$, is not only\ninfinitely divisible but also selfdecomposable.\n\\end{thm}\n\n\\begin{proof}\nIt is known (Proposition 4.4.4 of \nDufresne \\cite{D90}) that for any $a\\in\\mathbb R\\setminus \\{0\\}$, $b>0$,\n$$\n\\int_0^{\\infty} e^{aB_s-bs} ds \\overset{\\mathrm d}{=} {2}\\left (a^2 \\Gamma _{2ba^{-2}}\\right )^{-1},\n$$\nwhere $\\Gamma _\\gamma$ is the gamma random variable with parameter $\\gamma >0$,\nnamely, $P(\\Gamma _\\gamma \\in B) = \\Gamma (\\gamma) ^{-1}\\int_{B\\cap (0,\\infty)}\nx^{\\gamma -1}e^{-x}dx$.\nThe law of the reciprocal of gamma random variable is\ninfinitely divisible and, furthermore, selfdecomposable (Halgreen \\cite{H79}). 
\nWe have\n\\begin{align*}\nE \\left[ e^{iz Z } \\right] \n& = E \\left[ \\exp \\left(iz \\int_0^{\\infty -} e^{-(B_s+\\lambda s)} dS_s \n\\right) \\right] \\\\\n& = E \\left[ E \\left[ \\left. \\exp \\left( iz \\int_0^{\\infty -} \ne^{-(B_s+\\lambda s)} dS_s\\right)\\,\\right|\\,\\{B_s\\} \\right]\\right] ,\n\\end{align*}\nWe have $Ee^{izS_t}=\\exp(-ct|z|^{\\alpha})$ with some $c>0$. \nFor any nonrandom measurable function $f(s)$ satisfying $\\int_0^{\\infty}\n|f(s)|^{\\alpha}ds<\\infty$, we have\n$$\nE\\left [\\exp \\left( iz \\int_0^{\\infty -} f(s)dS_s\\right)\\right ]\n= \\exp \\left( -c|z |^{\\alpha}\\int_0^{\\infty} |f(s)|^{\\alpha}ds\\right)\n$$\n(see, e.\\,g.\\ Samorodnitsky and Taqqu \\cite{ST}). Hence\n\\begin{align*}\nE\\left [e^{iz Z}\\right ]\n& = E\\left [\\exp \\left( -c|z |^{\\alpha}\\int_0^{\\infty}\ne^{ -\\alpha B_s -\\alpha \\lambda s} ds \\right)\\right]\\\\\n& = E\\left [\\exp\\left(-c|z |^{\\alpha} 2 \n\\left (\\alpha ^2 \\Gamma_{2\\alpha ^{-1}\\lambda }\\right )^{-1}\\right)\\right ] .\n\\end{align*}\nIf we put\n$$\nH(dx) = P\\left ( 2 c\n\\left (\\alpha ^2 \\Gamma_{2\\alpha ^{-1}\\lambda }\\right )^{-1}\n\\in dx\\right ),\n$$\nthen \n$$\nE[e^{iz Z}] = \\int_0^{\\infty} e^{-u|z |^{\\alpha}} H(du).\n$$\nThis $H$ is the distribution of a positive\ninfinitely divisible (actually selfdecomposable) random variable.\nThis shows that $Z$ is a mixture of a symmetric\n$\\alpha$-stable random variable $S$ with $Ee^{izS}=e^{-|z|^{\\alpha}}$ \nin the sense that\n\\begin{equation}\\label{4.2}\nZ \\overset{\\mathrm d}{=} \\Gamma ^{-1\/\\alpha}S,\n\\end{equation}\nwhere $\\Gamma$ and $S$ are independent and $\\Gamma$ is a gamma random variable with\n$\\mathcal L(\\Gamma^{-1})=H$, that is, \n$\\Gamma=(2c)^{-1}\\alpha^2 \\Gamma_{2\\alpha^{-1}\\lambda}$.\nTo see that $Z$ is of type $G$, we need to rewrite \\eqref{4.2} as\n$$\nZ \\overset{\\mathrm d}{=} \\Gamma ^{-1\/\\alpha} S \\overset{\\mathrm d}{=} V^{1\/2}W,\n$$\nfor some infinitely divisible random variable $V>0$ independent of a standard \nnormal random variable $W$.\nLet $S^+_{\\alpha\/2}$ be a positive strictly $(\\alpha\/2)$-stable random variable\nsuch that\n$$\nE\\left[\\exp(-uS_{\\alpha\/2}^+)\\right]=\\exp\\left( -(2u)^{\\alpha\/2}\\right),\\quad u\\geqslant 0\n$$\nand $\\Gamma$, $W$, and $S_{\\alpha\/2}^+$ are independent. Then\n$$\nS\\overset{\\mathrm d}{=} (S_{\\alpha\/2}^+)^{1\/2} W,\n$$\nand hence $S$ is of type $G$. Let\n$$\nV=\\Gamma^{-2\/\\alpha}S_{\\alpha\/2}^+.\n$$\nThen\n$$\nV^{1\/2} W=(\\Gamma^{-2\/\\alpha}S_{\\alpha\/2}^+)^{1\/2} W\n=\\Gamma^{-1\/\\alpha}(S_{\\alpha\/2}^+)^{1\/2}W\n\\overset{\\mathrm d}{=}\\Gamma^{-1\/\\alpha} S\\overset{\\mathrm d}{=} Z.\n$$\nUsing a positive strictly $(\\alpha\/2)$-stable L\\'evy process $\\{S_{\\alpha\/2}^+(t),\nt\\ges0\\}$ independent of $\\Gamma$ with $\\mathcal L(S_{\\alpha\/2}^+(1))=S_{\\alpha\/2}^+$, \nwe \nsee that \n$$\nV\\overset{\\mathrm d}{=} S_{\\alpha\/2}^+(\\Gamma^{-1}).\n$$\nSince $\\Gamma^{-1}$ is selfdecomposable, $V$ is also selfdecomposable due to the\ninheritance of selfdecomposability in subordination of strictly stable L\\'evy\nprocesses (see \\cite{S01b}).\nTherefore $Z$ is of type $G$ with $\\mathcal L(V)$ being selfdecomposable. 
Also,\nthe selfdecomposability of $Z$ again follows.\n\\end{proof}\n\nIn their recent paper \\cite{AMR06}, Aoyama, Maejima, and Rosi\\'nski\n have introduced a new strict subclass \n(called $M(\\mathbb R^d)$) of\nthe intersection of the class of type $G$ distributions and the class of selfdecomposable\ndistributions on $\\mathbb R^d$ (see Maejima and Rosi\\'nski \\cite{MR} for the \ndefinition of \ntype $G$ distributions on $\\mathbb R^d$ for general $d$).\nIf we write the polar decomposition of the L\\'evy measure $\\nu$ by\n\\begin{equation*}\n\\nu (B) = \\int_K \\lambda (d\\xi)\\int_0^{\\infty} 1_B(r\\xi)\\nu_{\\xi}(dr),\n\\end{equation*}\nwhere $K$ is the unit sphere $\\{\\xi\\in\\mathbb R^d\\colon |\\xi|=1\\}$ \nand $\\lambda$ is a probability measure on $K$,\nthen the element of $M(\\mathbb R^d)$ is characterized as a symmetric infinitely\ndivisible distribution such that\n\\begin{equation*}\n\\nu_{\\xi}(dr) = g_{\\xi}(r^2)r^{-1}dr\n\\end{equation*}\nwith $g_{\\xi}(u)$ being completely monotone as a function of \n$u\\in(0,\\infty)$ and measurable with respect to $\\xi$.\nRecall that if we write $\\nu_{\\xi}(dr) = g_{\\xi}(r^2)dr$ instead, this gives a \ncharacterization of type $G$ distributions on $\\mathbb R^d$ (\\cite{MR}).\nIn \\cite{AMR06} it is shown that\n$$\n\\{ \\text{type $G$ distributions on $\\mathbb R$ with selfdecomposable mixing \ndistributions}\\} \\subsetneqq M(\\mathbb R).\n$$\n\nNow, by Theorem \\ref{t4.1} combined with the observation above, we see that\n$\\mathcal L (Z)$ in \\eqref{4.1} belongs to $M(\\mathbb R)$.\nIt is of interest as a\nconcrete example of random variable whose distribution belongs to $M(\\mathbb R)$.\n\nWe end the paper with a remark that, by Preposition 3.2 of \\cite{CPY01}, \nif $\\alpha =2$, our $\\mathcal L (Z)$ is also \nPearson type IV distribution of parameters $\\lambda$ and $0$.\n\n\\bigskip\n{\\bf Acknowledgments.} The authors would like to thank Alexander Lindner and \nJan Rosi\\'nski for their helpful comments while this\npaper was written.\n\n\\bigskip \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTransducers compiled from simple replace expressions {\\tt UPPER} {\\tt\n->} {\\tt LOWER} (Karttunen 1995, Kempe and Karttunen 1996) are\ngenerally nondeterministic in the sense that they may yield multiple\nresults even if the lower language consists of a single string. For\nexample, let us consider the transducer in Figure \\ref{net1},\nrepresenting {\\tt a b | b | b a | a b a -> x}.\\footnote{The regular\nexpression formalism and other notational conventions used in\nthe paper are explained in the Appendix at the end.}\n\n\\begin{figure}\n\\begin{center}\n \\centerline{\\psfig{file=acl1.eps}}\n\\caption{\\label{net1}\\verb+ a b | b | b a | a b a -> x +. The four\npaths with ``aba'' on the upper side are: $<$0~{\\tt a}~0~{\\tt\nb:x}~2~{\\tt a}~0$>$, $<$0~{\\tt a}~0~{\\tt b:x}~2~{\\tt a:0}~0$>$,\n$<$0~{\\tt a:x}~1~{\\tt b:0}~2~{\\tt a}~0$>$, and $<$0~{\\tt a:x}~1~{\\tt\nb:0}~2~{\\tt a:0}~0$>$.}\n\\end{center}\n\\vspace*{-8mm}\n\\end{figure}\n\nThe application of this transducer to the input ``aba'' produces\nfour alternate results, ``axa'', ``ax'', ``xa'', and ``x'', as shown\nin Figure \\ref{net1}, since there are four paths in the network that\ncontain ``aba'' on the upper side with different strings on the lower\nside.\n\nThis nondeterminism arises in two ways. First of all, a replacement\ncan start at any point. 
Thus we get different results for ``aba''\ndepending on whether we start at the beginning of the string or in the\nmiddle at the ``b''. Secondly, there may be alternative replacements\nwith the same starting point. In the beginning of ``aba'', we can\nreplace either ``ab'' or ``aba''. Starting in the middle, we can\nreplace either ``b'' or ``ba''. The underlining in Figure \\ref{tab1}\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n a b a a b a a b a a b a\n - --- --- -----\n a x a a x x a x\n\\end{verbatim}\n\n\\caption{\\label{tab1}Four factorizations of ``aba''.}\n\\vspace*{-2mm}\n\\end{figure}\nshows the four alternate factorizations of the input string, that is,\nthe four alternate ways to partition the string ``aba'' with respect\nto the upper language of the replacement expression. The corresponding\npaths in the transducer are listed in Figure \\ref{net1}.\n\nFor many applications, it is useful to define another version of\nreplacement that produces a unique outcome whenever the lower language\nof the relation consists of a single string. To limit the number of\nalternative results to one in such cases, we must impose a unique\nfactorization on every input.\n\nThe desired effect can be obtained by constraining the directionality\nand the length of the replacement. Directionality means that the\nreplacement sites in the input string are selected starting from the\nleft or from the right, not allowing any overlaps. The length\nconstraint forces us always to choose the longest or the shortest\nreplacement whenever there are multiple candidate strings starting at\na given location. We use the term {\\bf directed replacement} to\ndescribe a replacement relation that is constrained by directionality\nand length of match. (See the end of Section 2 for a discussion about\nthe choice of the term.)\n\nWith these two kinds of constraints we can define four types of\ndirected replacement, listed in Figure \\ref{tab2}.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n longest shortest\n match match\n left-to-right @-> @>\n right-to-left ->@ >@\n\\end{verbatim}\n\\caption{\\label{tab2}Directed replacement operators}\n\\vspace*{-2mm}\n\\end{figure}\n\nFor reasons of space, we discuss here only the left-to-right,\nlongest-match version. The other cases are similar.\n\nThe effect of the directionality and length constraints is that some\npossible replacements are ignored. For example, {\\tt a b | b | b a |\na b a @-> x } maps ``aba'' uniquely into ``x'', Figure \\ref{net2}.\n\n\\begin{figure}[here]\n\\begin{center}\n \\centerline{\\psfig{file=acl2.eps}}\n\\caption{\\label{net2}\\verb+a b | b | b a | a b a @-> x+. The single\npath with ``aba'' on the upper side is: $<$0~{\\tt a:x}~1~{\\tt b:0}~2\n{\\tt a:0}~0$>$.}\n\\end{center}\n\\vspace*{-6mm}\n\\end{figure}\n\nBecause we must start from the left and have to choose the longest\nmatch, ``aba'' must be replaced, ignoring the possible replacements\nfor ``b'', ``ba'', and ``ab''. The {\\tt @->} operator allows only the\nlast factorization of ``aba'' in Figure \\ref{tab1}.\n\nLeft-to-right, longest-match replacement can be thought of as a\nprocedure that rewrites an input string sequentially from left to\nright. It copies the input until it finds an instance of {\\tt\nUPPER}. At that point it selects the longest matching substring, which\nis rewritten as {\\tt LOWER}, and proceeds from the end of that\nsubstring without considering any other alternatives. 
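\nAs an illustration only, the following minimal Python sketch (not part of the transducer construction) emulates this procedure, assuming that {\\tt UPPER} is given as a finite set of literal strings and that {\\tt LOWER} is a single string; the compiled transducers of course handle arbitrary regular languages and need no such restriction.\n\\begin{verbatim}\n def directed_replace(text, upper, lower):\n     # greedy left-to-right, longest-match rewriting\n     out, i = [], 0\n     while i < len(text):\n         m = max((u for u in upper if text.startswith(u, i)),\n                 key=len, default=None)\n         if m is None:\n             out.append(text[i]); i += 1     # copy one symbol\n         else:\n             out.append(lower); i += len(m)  # rewrite, skip the match\n     return \"\".join(out)\n\n # directed_replace(\"aba\", {\"ab\",\"b\",\"ba\",\"aba\"}, \"x\") == \"x\"\n\\end{verbatim}\n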
Figure\n\\ref{pict1} illustrates the idea.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{center}\n \\centerline{\\psfig{file=acl3.eps}}\n\\caption{\\label{pict1}Left-to-right, longest-match replacement}\n\\end{center}\n\\vspace*{-6mm}\n\\end{figure}\n\nIt is not obvious at the outset that the operation can in fact be\nencoded as a finite-state transducer for arbitrary regular patterns.\nAlthough a unique substring is selected for replacement at each point,\nin general the transduction is not unambiguous because {\\tt LOWER} is\nnot required to be a single string; it can be any regular language.\n\nThe idea of treating phonological rewrite rules in this way was the\nstarting point of Kaplan and Kay (1994). Their notion of obligatory\nrewrite rule incorporates a directionality constraint. They observe\n(p. 358), however, that this constraint does not by itself guarantee a\nsingle output. Kaplan and Kay suggest that additional restrictions,\nsuch as longest-match, could be imposed to further constrain rule\napplication.\\footnote{\\label{foot1}The tentative formulation of the\nlongest-match constraint in \\cite[p. 358]{Kaplan+Kay:regmod} is too\nweak. It does not cover all the cases.} We consider this issue in more\ndetail.\n\nThe crucial observation is that the two constraints, left-to-right and\nlongest-match, force a unique factorization on the input string thus\nmaking the transduction unambiguous if the {\\tt LOWER} language\nconsists of a single string. In effect, the input string is\nunambiguously {\\bf parsed} with respect to the {\\tt UPPER} language.\nThis property turns out to be important for a number of applications.\nThus it is useful to provide a replacement operator that implements\nthese constraints directly.\n\nThe definition of the \\verb.UPPER @-> LOWER. relation is presented in\nthe next section. Section 3 introduces a novel type of replace\nexpression for constructing transducers that unambiguously recognize\nand mark instances of a regular language without actually replacing\nthem. Section 4 identifies some useful applications of the new\nreplacement expressions.\n\n\\section{Directed Replacement}\nWe define directed replacement by means of a composition of regular\nrelations. As in Kaplan and Kay (1994), Karttunen (1995), and other\nprevious works on related topics, the intermediate levels of the\ncomposition introduce auxiliary symbols to express and enforce\nconstraints on the replacement relation. Figure \\ref{tab3} shows the\ncomponent relations and how they are composed with the input.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n Input string\n .o.\n Initial match\n .o.\n Left-to-right constraint\n .o.\n Longest-match constraint\n .o.\n Replacement\n\\end{verbatim}\n\\caption{\\label{tab3}Composition of directed replacement}\n\\vspace*{-2mm}\n\\end{figure}\n\nIf the four relations on the bottom of Figure \\ref{tab3} are composed in\nadvance, as our compiler does, the application of the replacement to\nan input string takes place in one step without any intervening levels\nand with no auxiliary symbols. But it helps to understand the logic to\nsee where the auxiliary marks would be in the hypothetical\nintermediate results.\n\nLet us consider the case of {\\tt a b | b | b a | a b a} {\\tt @-> x }\napplying to the string ``aba'' and see in detail how the mapping\nimplemented by the transducer in Figure \\ref{net2} is composed from\nthe four component relations. 
We use three auxiliary symbols, caret\n(\\verb+^+), left bracket (\\verb+<+) and right bracket (\\verb+>+),\nassuming here that they do not occur in any input. The first step,\nshown in Figure \\ref{tab4}, composes the input string with a\ntransducer that inserts a caret, in the beginning of every substring\nthat belongs to the upper language.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n a b a \n ^ a ^ b a\n\\end{verbatim}\n\\caption{\\label{tab4}Initial match. Each caret marks the beginning of\na substring that matches ``ab'', ``b'', ``ba'', or ``aba''.}\n\\vspace*{-2mm}\n\\end{figure}\n\nNote that only one \\verb+^+ is inserted even if there are several\ncandidate strings starting at the same location.\n\nIn the left-to-right step, we enclose in angle brackets all the\nsubstrings starting at a location marked by a caret that are instances\nof the upper language. The initial caret is replaced by a {\\tt <}, and\na closing {\\tt >} is inserted to mark the end of the match. We permit\ncarets to appear freely while matching. No carets are permitted\noutside the matched substrings and the ignored internal carets are\neliminated. In this case, there are four possible outcomes, shown in\nFigure \\ref{tab5}, but only two of them are allowed under the\nconstraint that there can be no carets outside the brackets.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n ALLOWED\n\n ^ a ^ b a ^ a ^ b a\n < a b > a < a b a >\n\n NOT ALLOWED\n\n ^ a ^ b a ^ a ^ b a\n ^ a < b > a ^ a < b a > \n\\end{verbatim}\n\\caption{\\label{tab5}Left-to-right constraint. {\\it No caret\noutside a bracketed region.}}\n\\vspace*{-2mm}\n\\end{figure}\n\nIn effect, no starting location for a replacement can be skipped over\nexcept in the context of another replacement starting further left in\nthe input string. (Roche and Schabes (1995) introduce a similar\ntechnique for imposing the left-to-right order on the transduction.)\nNote that the four alternatives in Figure \\ref{tab5} represent the four\nfactorizations in Figure \\ref{tab1}.\n\nThe longest-match constraint is the identity relation on a certain set\nof strings. It forbids any replacement that starts at the same\nlocation as another, longer replacement. In the case at hand, it means\nthat the internal {\\tt >} is disallowed in the context \\verb+< a b >\na+. Because ``aba'' is in the upper language, there is a longer, and\ntherefore preferred, \\verb+< a b a >+ alternative at the same starting\nlocation, Figure \\ref{tab5a}.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n ALLOWED NOT ALLOWED\n\n < a b a > < a b > a \n\\end{verbatim}\n\\caption{\\label{tab5a}Longest match constraint. {\\it No upper language\nstring with an initial }{\\tt < }{\\it and a nonfinal }{\\tt > }{\\it in\nthe middle}.}\n\\vspace*{-2mm}\n\\end{figure}\n\nIn the final replacement step, the bracketed regions of the input\nstring, in the case at hand, just \\verb+< a b a >+ , are replaced by\nthe strings of the lower language, yielding ``x'' as the result for\nour example.\n\nNote that longest match constraint ignores any internal brackets. For\nexample, the bracketing {\\tt < a > < a >} is not allowed if the upper\nlanguage contains ``aa'' as well as ``a''. Similarly, the\nleft-to-right constraint ignores any internal carets.\n\nAs the first step towards a formal definition of {\\tt UPPER @-> LOWER}\nit is useful to make the notion of ``ignoring internal brackets'' more\nprecise. Figure \\ref{tab5b} contains the auxiliary definitions. 
For\nthe details of the formalism (briefly explained in the Appendix),\nplease consult Karttunen (1995), Kempe and Karttunen\n(1996).\\footnote{\\label{foot2}{\\tt UPPER'} is the same language as\n{\\tt UPPER} except that carets may appear freely in all nonfinal\npositions. Similarly, {\\tt UPPER''} accepts any nonfinal brackets.}\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n UPPER' = UPPER\/\n UPPER'' = UPPER\/\n\\end{verbatim}\n\\caption{\\label{tab5b}Versions of {\\tt UPPER} that freely allow\nnonfinal diacritics.}\n\\vspace*{-2mm}\n\\end{figure} \n\nThe precise definition of the \\verb+UPPER @-> LOWER+ relation is given\nin Figure \\ref{tab6}. It is a composition of many auxiliary relations.\nWe label the major components in accordance with the outline in Figure\n\\ref{tab3}. The formulation of the longest-match constraint is based\non a suggestion by Ronald M. Kaplan (p.c.).\n\\begin{figure}[here]\n\\vspace*{-2mm}\n{\\it Initial match}\n\\begin{verbatim}\n ~$[\n .o.\n [. .] ->\n .o.\n\\end{verbatim}\n{\\it Left to right}\n\\begin{verbatim}\n [~$\n .o.\n \n .o.\n\\end{verbatim}\n{\\it Longest match}\n\\begin{verbatim}\n ~$\n .o.\n\\end{verbatim}\n{\\it Replacement}\n\\begin{verbatim}\n \n\\end{verbatim}\n\\caption{\\label{tab6}Definition of {\\tt UPPER @-> LOWER}}\n\\vspace*{-2mm}\n\\end{figure}\n\nThe logic of {\\tt @->} replacement could be encoded in many other\nways, for example, by using the three pairs of auxiliary brackets,\n{\\tt i}, {\\tt c}, and {\\tt a},\nintroduced in Kaplan and Kay (1994). We take here a more minimalist\napproach. One reason is that we prefer to think of the simple\nunconditional (uncontexted) replacement as the basic case, as in\nKarttunen (1995). Without the additional complexities introduced by\ncontexts, the directionality and length-of-match constraints can be\nencoded with fewer diacritics. (We believe that the conditional case\ncan also be handled in a simpler way than in Kaplan and Kay (1994).)\nThe number of auxiliary markers is an important consideration for some\nof the applications discussed below.\n\nIn a phonological or morphological rewrite rule, the center part of\nthe rule is typically very small: a modification, deletion or\ninsertion of a single segment. On the other hand, in our text\nprocessing applications, the upper language may involve a large\nnetwork representing, for example, a lexicon of multiword\ntokens. Practical experience shows that the presence of many auxiliary\ndiacritics makes it difficult or impossible to compute the\nleft-to-right and longest-match constraints in such cases. The size of\nintermediate states of the computation becomes a critical issue, while\nit is irrelevant for simple phonological rules. We will return to\nthis issue in the discussion of tokenizing transducers in Section 4.\n\nThe transducers derived from the definition in Figure \\ref{tab6} have\nthe property that they unambiguously parse the input string into a\nsequence of substrings that are either copied to the output unchanged or\nreplaced by some other strings. However they do not fall neatly into\nany standard class of transducers discussed in the literature\n(Eilenberg 1974, Sch\\\"{u}tzenberger 1977, Berstel 1979). 
If the {\\tt\nLOWER} language consists of a single string, then the relation encoded\nby the transducer is in Berstel's terms a {\\bf rational function}, and\nthe network is an {\\bf unambigous} transducer, even though it may\ncontain states with outgoing transitions to two or more destinations\nfor the same input symbol. An unambiguous transducer may also be {\\bf\nsequentiable}, in which case it can be turned into an equivalent {\\bf\nsequential} transducer \\cite{Mohri:fsa+nlp}, which can in turn be\nminimized. A transducer is sequential just in case there are no states\nwith more than one transition for the same input symbol. Roche and\nSchabes (1995) call such transducers {\\bf deterministic}.\n\nOur replacement transducers in general are not unambiguous because we\nallow {\\tt LOWER} to be any regular language. It may well turn out\nthat, in all cases that are of practical interest, the lower language\nis in fact a singleton, or at least some finite set, but it is not so\nby definition. Even if the replacement transducer is unambiguous, it\nmay well be unsequentiable if {\\tt UPPER} is an infinite language.\nFor example, the simple transducer for {\\tt a+ b @-> x} in Figure\n\\ref{net3} cannot be sequentialized. It has to replace any string of\n``a''s by ``x'' or copy it to the output unchanged depending on\nwhether the string eventually terminates at ``b''. It is obviously\nimpossible for any finite-state device to accumulate an unbounded\namount of delayed output. On the other hand, the transducer in Figure\n\\ref{net2} is sequentiable because there the choice between {\\tt a}\nand {\\tt a:x} just depends on the next input symbol.\n\\begin{figure}\n\\begin{center}\n \\centerline{\\psfig{file=acl4.eps}}\n\\caption{\\label{net3}\\verb| a+ b @-> x|. This transducer is\nunambiguous but cannot be sequentialized.}\n\\end{center}\n\\vspace*{-8mm}\n\\end{figure}\n\nBecause none of the classical terms fits exactly, we have chosen a\nnovel term, {\\bf directed transduction}, to describe a relation\ninduced by the definition in Figure \\ref{tab6}. It is meant to suggest\nthat the mapping from the input into the output strings is guided by the\ndirectionality and length-of-match constraints. Depending on the\ncharacteristics of the {\\tt UPPER} and {\\tt LOWER} languages, the\nresulting transducers may be unambiguous and even sequential, but\nthat is not guaranteed in the general case.\n\n\\section{Insertion}\nThe effect of the left-to-right and longest-match constraint is to\nfactor any input string uniquely with respect to the upper language of\nthe replace expression, to parse it into a sequence of substrings that\neither belong or do not belong to the language. Instead of replacing\nthe instances of the upper language in the input by other strings, we\ncan also take advantage of the unique factorization in other ways. For\nexample, we may insert a string before and after each substring that\nis an instance of the language in question simply to mark it as such.\n\nTo implement this idea, we introduce the special symbol ... on the\nright-hand side of the replacement expression to mark the place around\nwhich the insertions are to be made. Thus we allow replacement\nexpressions of the form {\\tt UPPER @-> PREFIX ... SUFFIX}. The\ncorresponding transducer locates the instances of {\\tt UPPER} in the\ninput string under the left-to-right, longest-match regimen just\ndescribed. 
But instead of replacing the matched strings, the\ntransducer just copies them, inserting the specified prefix and\nsuffix. For the sake of generality, we allow {\\tt PREFIX} and {\\tt\nSUFFIX} to denote any regular language.\n\nThe definition of {\\tt UPPER @-> PREFIX ...} {\\tt SUFFIX} is just as\nin Figure \\ref{tab6} except that the Replacement expression is\nreplaced by the Insertion formula in Figure \\ref{tab7}, a simple\nparallel replacement of the two auxiliary brackets that mark the\nselected regions. Because the placement of \\verb+<+ and \\verb+>+ is\nstrictly controlled, they do not occur anywhere else.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n{\\it Insertion}\n\\begin{verbatim}\n \n\\end{verbatim}\n\\caption{\\label{tab7}Insertion expression in the definition of\n{\\tt UPPER @-> PREFIX ... SUFFIX}.}\n\\vspace*{-2mm}\n\\end{figure}\n\nWith the ... expressions we can construct transducers that mark\nmaximal instances of a regular language. For example, let us assume\nthat noun phrases consist of an optional determiner, {\\tt (d)}, any\nnumber of adjectives, {\\tt a*}, and one or more nouns, {\\tt n+}. The\nexpression \\verb| (d) a* n+ @->\nthat inserts brackets around maximal instances of the noun phrase\npattern. For example, it maps {\\tt \"dannvaan\"} into {\\tt\n\"[dann]v[aan]\"}, as shown in Figure \\ref{tab8}.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n d a n n v a a n\n ------- -----\n [ d a n n ] v [ a a n ]\n\\end{verbatim}\n\\caption{\\label{tab8}Application of\\verb| (d) a* n+ @-> \\%[...\\%] |to\n{\\tt \"dannvaan\"}}\n\\vspace*{-2mm}\n\\end{figure}\n\nAlthough the input string \\verb+\"dannvaan\"+ contains many other\ninstances of the noun phrase pattern, \\verb+\"n\"+, \\verb+\"an\"+,\n\\verb+\"nn\"+, etc., the left-to-right and longest-match constraints\npick out just the two maximal ones. The transducer is displayed\nin Figure \\ref{net4}. Note that {\\tt ?} here matches symbols, such as\n{\\tt v}, that are not included in the alphabet of the network.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{center}\n \\centerline{\\psfig{file=acl5.eps}}\n\\caption{\\label{net4}\\verb|(d) a* n+ @-> \\%[...\\%]|. The\none path with ``dannvaan'' on the upper side is: $<$0 {\\tt 0:[} 7\n{\\tt d} 3 {\\tt a} 3 {\\tt n} 4 {\\tt n} 4 {\\tt 0:]} 5 {\\tt v} 0 {\\tt\n0:[} 7 {\\tt a} 3 {\\tt a} 3 {\\tt n} 4 {\\tt 0:]} 5$>$.}\n\\end{center}\n\\vspace*{-6mm}\n\\end{figure}\n\n\\section{Applications}\nThe directed replacement operators have many useful applications. We\ndescribe some of them. Although the same results could often be\nachieved by using lex and yacc, sed, awk, perl, and other Unix\nutilities, there is an advantage in using finite-state transducers for\nthese tasks because they can then be smoothly integrated with other\nfinite-state processes, such as morphological analysis by lexical\ntransducers (Karttunen 1994) and rule-based part-of-speech\ndisambiguation (Chanod and Tapanainen 1995, Roche and Schabes 1995).\n\n\\subsection{Tokenization}\nA tokenizer is a device that segments an input string into a sequence\nof tokens. The insertion of end-of-token marks can be accomplished by\na finite-state transducer that is compiled from tokenization\nrules. The tokenization rules may be of several types. For example,\n\\verb|[WHIT\nthat reduces any sequence of tabs, spaces, and newlines to a single\nspace. \\verb|[LETTER+| \\verb|@-> ... EN\nspecial mark, e.g. 
a newline, at the end of a letter sequence.\n\nAlthough a space generally counts as a token boundary, it can also be\npart of a multiword token, as in expressions like ``at least'', ``head\nover heels'', ``in spite of'', etc. Thus the rule that introduces the\n\\verb+END_OF_TOKEN+ symbol needs to combine the \\verb|LETTER+| pattern\nwith a list of multiword tokens which may include spaces, periods and\nother delimiters.\n\nFigure \\ref{tab9} outlines the construction of a simple tokenizing transducer\nfor English.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n WHIT\n .o.\n [ LETTER+ | \n a t\n h e a d\n i n\n @-> ... EN\n .o.\n SPACE -> [] || .#. | EN\n\\end{verbatim}\n\\caption{\\label{tab9}A simple tokenizer}\n\\vspace{-2mm}\n\\end{figure}\n\nThe tokenizer in Figure \\ref{tab9} is composed of three\ntransducers. The first reduces strings of whitespace characters to a\nsingle space. The second transducer inserts an \\verb+END_OF_TOKEN+\nmark after simple words and the listed multiword expressions. The\nthird removes the spaces that are not part of some multiword\ntoken. The percent sign here means that the following blank is to be\ntaken literally, that is, parsed as a symbol.\n\nWithout the left-to-right, longest-match constraints, the tokenizing\ntransducer would not produce deterministic output. Note that it must\nintroduce an \\verb+END_OF_TOKEN+ mark after a sequence of letters just\nin case the word is not part of some longer multiword token. This\nproblem is complicated by the fact that the list of multiword tokens\nmay contain overlapping expressions. A tokenizer for French, for\nexample, needs to recognize ``de plus'' (moreover), ``en plus''\n(more), ``en plus de'' (in addition to), and ``de plus en plus'' (more\nand more) as single tokens. Thus there is a token boundary after ``de\nplus'' in {\\it de plus on ne le fait plus} (moreover one doesn't do it\nanymore) but not in {\\it on le fait de plus en plus} (one does it more\nand more) where ``de plus en plus'' is a single token.\n\nIf the list of multiword tokens contains hundreds of expressions, it\nmay require a lot of time and space to compile the tokenizer even if\nthe final result is not too large. The number of auxiliary symbols\nused to encode the constraints has a critical effect on the efficiency\nof that computation. We first observed this phenomenon in the course\nof building a tokenizer for the British National Corpus according to\nthe specifications of the {\\sc bnc} Users Guide \\cite{Leech:bnc},\nwhich lists around 300 multiword tokens and 260 foreign phrases. With\nthe current definition of the directed replacement we have now been\nable to compute similar tokenizers for several other languages\n(French, Spanish, Italian, Portuguese, Dutch, German).\n\n\\subsection{Filtering}\nSome text processing applications involve a preliminary stage in which\nthe input stream is divided into regions that are passed on to the\ncalling process and regions that are ignored. For example, in\nprocessing an {\\sc sgml}-coded document, we may wish to delete all the\nmaterial that appears or does not appear in a region bounded by\ncertain {\\sc sgml} tags, say {\\tt } and {\\tt <\/A>}.\n\nBoth types of filters can easily be constructed using the\ndirected replace operator. 
A negative filter that deletes all the\nmaterial between the two {\\sc sgml} codes, including the codes themselves,\nis expressed as in Figure \\ref{tab10}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n \"\" ~$[\"\"|\"<\/A>\"] \"<\/A>\" @-> [] ;\n\\end{verbatim}\n\\caption{\\label{tab10} A negative filter}\n\\vspace{-2mm}\n\\end{figure}\n\nA positive filter that excludes everything else can be expressed as in\nFigure \\ref{tab11}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n ~$\"<\/A>\" \"\" @-> \"\"\n .o.\n \"<\/A>\" ~$\"\" @-> \"<\/A>\" ;\n\\end{verbatim}\n\\caption{\\label{tab11}A positive filter}\n\\vspace{-2mm}\n\\end{figure}\n\nThe positive filter is composed of two transducers. The first reduces\nto {\\tt } any string that ends with it and does not contain the\n{\\tt <\/A>} tag. The second transducer does a similar transduction on\nstrings that begin with {\\tt <\/A>}. Figure 12 illustrates the effect\nof the positive filter.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\none<\/B>two<\/A>three<\/C>four<\/A>\n------------- ----------------\n two <\/A> four<\/A>\n\\end{verbatim}\n\\caption{\\label{tab12}Application of a positive filter}\n\\vspace{-2mm}\n\\end{figure}\n\nThe idea of filtering by finite-state transduction of course does not\ndepend on {\\sc sgml} codes. It can be applied to texts where the\ninteresting and uninteresting regions are defined by any kind of\nregular pattern.\n\n\\subsection{Marking}\n\nAs we observed in section 3, by using the ... symbol on the lower side\nof the replacement expression, we can construct transducers that mark\ninstances of a regular language without changing the text in any other\nway. Such transducers have a wide range of applications. They can be\nused to locate all kinds of expressions that can be described by a\nregular pattern, such as proper names, dates, addresses, social\nsecurity and phone numbers, and the like. Such a marking transducer\ncan be viewed as a deterministic parser for a ``local grammar'' in the\nsense of Gross (1989), Roche (1993), Silberztein (1993) and others.\n \nBy composing two or more marking transducers, we can also construct a\nsingle transducer that builds nested syntactic structures, up to any\ndesired depth. To make the construction simpler, we can start by\ndefining auxiliary symbols for the basic regular patterns. For example,\nwe may define {\\tt NP} as \\verb|[(d) a* n+]|. With that abbreviatory\nconvention, a composition of a simple {\\tt NP} and {\\tt VP} spotter\ncan be defined as in Figure \\ref{tab13}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n NP @->\n .o.\n v\n\\end{verbatim}\n\\caption{\\label{tab13}Composition of an {\\tt NP} and a {\\tt VP} spotter}\n\\vspace{-2mm}\n\\end{figure}\n\nFigure \\ref{tab14} shows the effect of applying this composite transducer to\nthe string {\\tt \"dannvaan\"}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n d a n n v a a n\n ------- - -----\n [NP d a n n ] [VP v [NP a a n ] ]\n\\end{verbatim}\n\\caption{\\label{tab14}Application of an {\\tt NP-VP} parser}\n\\vspace{-2mm}\n\\end{figure}\n\nBy means of this simple ``bottom-up'' technique, it is possible to\ncompile finite-state transducers that approximate a context-free\nparser up to a chosen depth of embedding. Of course, the\nleft-to-right, longest-match regimen implies that some possible\nanalyses are ignored. To produce all possible parses, we may introduce\nthe ... 
notation to the simple replace expressions in Karttunen\n(1995).\n\n\n\\section{Extensions}\n\nThe definition of the left-to-right, longest-match replacement can\neasily be modified for the three other directed replace operators\nmentioned in Figure \\ref{tab2}. Another extension, already\nimplemented, is a directed version of parallel replacement (Kempe and\nKarttunen 1996), which allows any number of replacements to be done\nsimultaneously without interfering with each other. Figure \\ref{tab15}\nis an example of a directed parallel replacement. It yields a\ntransducer that maps a string of ``a''s into a single ``b'' and \na string of ``b''s into a single ``a''.\n\n\\begin{figure}[here]\n\\begin{verbatim}\n a+ @-> b, b+ @-> a ;\n\\end{verbatim}\n\\caption{\\label{tab15}Directed, parallel replacement}\n\\vspace{-2mm}\n\\end{figure}\n\nThe definition of directed parallel replacement requires no\nadditions to the techniques already presented. In the near future we\nalso plan to allow directional and length-of-match constraints in the\nmore complicated case of conditional context-constrained replacement.\n\n\\section{Acknowledgements}\n\nI would like to thank Ronald M. Kaplan, Martin Kay, Andr\\'{e} Kempe,\nJohn Maxwell, and Annie Zaenen for helpful discussions at the\nbeginning of the project, as well as Paula Newman and Kenneth\nR. Beesley for editorial advice on the first draft of the paper. The\nwork on tokenizers and phrasal analyzers by Anne Schiller and Gregory\nGrefenstette revealed the need for a more efficient implementation of\nthe idea. The final version of the paper has benefited from detailed\ncomments by Ronald M. Kaplan and two anonymous reviewers, who\nconvinced me to discard the ill-chosen original title (``Deterministic\nReplacement'') in favor of the present one.\n\n\\section{Appendix: Notational conventions}\n\nThe regular expression formalism used in this paper is essentially the\nsame as in Kaplan and Kay (1994), in Karttunen (1995), and in Kempe\nand Karttunen (1996). Upper-case strings, such as {\\tt UPPER},\nrepresent regular languages, and lower-case letters, such as {\\tt x},\nrepresent symbols. We recognize two types of symbols: unary symbols\n({\\tt a}, {\\tt b}, {\\tt c}, etc) and symbol pairs ({\\tt a:x}, {\\tt\nb:0}, etc. ).\n\nA symbol pair {\\tt a:x} may be thought of as the crossproduct of {\\tt\na} and {\\tt x}, the minimal relation consisting of {\\tt a} (the upper\nsymbol) and {\\tt x} (the lower symbol). To make the notation less\ncumbersome, we systematically ignore the distinction between the\nlanguage {\\tt A} and the identity relation that maps every string\nof {\\tt A} into itself. Consequently, we also write {\\tt a:a} as\njust {\\tt a}.\n\nThree special symbols are used in regular expressions: {\\tt 0} (zero)\nrepresents the empty string (often denoted by $\\epsilon$); {\\tt ?}\nstands for any symbol in the known alphabet and its extensions; in\nreplacement expressions, {\\tt .\\#.} marks the start (left context) or\nthe end (right context) of a string. The percent sign, {\\tt \\%}, is\nused as an escape character. It allows letters that have a special\nmeaning in the calculus to be used as ordinary symbols. 
Thus {\\tt \\%[}\ndenotes the literal square bracket as opposed to {\\tt [}, which has a\nspecial meaning as a grouping symbol; \\%0 is the ordinary zero symbol.\nDouble quotes around a symbol have the same effect as the percent\nsign.\n\nThe following simple expressions appear freqently in the formulas:\n{\\tt []} the empty string language, {\\tt ?*} the universal (``sigma\nstar'') language.\n\nThe regular expression operators used in the paper are: {\\tt *} zero\nor more (Kleene star), {\\tt +} one or more (Kleene plus), \\verb+~+ not\n(complement), {\\tt \\$} contains, {\\tt \/} ignore, {\\tt |} or (union),\n{\\tt \\&} and (intersection), {\\tt -} minus (relative complement), {\\tt\n.x.} crossproduct, {\\tt .o.} composition, \\verb+->+ simple replace.\n\nIn the transducer diagrams (Figures \\ref{net1}, \\ref{net2}, etc.),\nthe nonfinal states are represented by single circles, final states\nby double circles. State 0 is the initial state. The symbol {\\tt ?}\nrepresents any symbols that are not explicitly present in the\nnetwork. Transitions that differ only with respect to the label\nare collapsed into a single multiply labelled arc.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{section_intro}\n\nTime synchronization between nodes of a communication network is a common assumption made to\nanalyze and design such networks. However, in practice, it is very difficult to exactly\nsynchronize separate nodes either in time or frequency. As an example, in systems with different\ntransmitters, the transmitters must use their own locally generated clock. However, the\ninitialization might be different for each clock and the frequencies at the local signal generators\nmay not be perfectly matched \\cite{Hui_Humblet:85}. Indeed, achieving time, phase or frequency synchronization in practical communication systems has been a major engineering issue and still remains an active area of research (see e.g., \\cite{Wornell_asynchronism:2009}). Thus, fundamental limits of communication in the presence of time asynchronism should be explicitly addressed as a tool to better understand and tackle\nreal-world challenges in the context of multiuser information theory.\n\nThe problem of finding the capacity region of multiuser channels with no time synchronization\nbetween the encoders is considered in \\cite{Cover_McEliece:81}, \\cite{Hui_Humblet:85},\n\\cite{Farkas_Koi:2012}, and \\cite{Grant_Rimoldi_Urbanke_Whiting:2001} from a channel coding\nperspective only for the specific case of multiple access channels (MAC). In \\cite{Verdu_memory:1989}, a frame asynchronous MAC with memory is considered and it is shown that the capacity region can be drastically reduced in the presence of frame asynchronism.\nIn \\cite{Verdu:1989}, an asynchronous MAC is also considered, but with symbol asynchronism. All of\nthese works constrain themselves to the study of channel coding only and disregard the source-channel communication\nof correlated sources over asynchronous channels. In this paper, we are interested in the problem of joint source-channel coding (JSCC) of a set of correlated sources over time-asynchronous multiuser channels which can include relaying as well. In particular, we focus on the analysis of JSCC for a MAC with the presence of a relay, also known as a multiple access relay channel (MARC).\n\nThe problem of JSCC for multiuser networks is open in general. 
However, numerous results have been published on different aspects of the problem for specific channels and under specific assumptions such as phase or time asynchronism between the nodes. In \\cite{Cover_ElGamal_Salehi:1980}, a sufficient condition for lossless communication of correlated sources over a discrete memoryless MAC is given. Although not always optimal, as shown in \\cite{Dueck:1981}, the achievable scheme of \\cite{Cover_ElGamal_Salehi:1980} outperforms separate source-channel coding. In \\cite{FadiAbdallah_Caire:2008}, however, the authors show that under phase fading, separation is optimal for the important case of a Gaussian MAC. Also, \\cite{Saffar:phase}, \\cite{Saffar_Globecom:2012} show the optimality of separate source-channel coding for several Gaussian networks with phase uncertainty among the nodes. Other authors have derived JSCC coding results for the broadcast channels \\cite{Coleman:2006}, \\cite{Tian_Diggavi_Shamai_BCfull:2011}, interference relay channels \\cite{Saffar_ISIT:2012}, and other multiuser channels \\cite{Gunduz:2009}. Furthermore, for lossy source-channel coding, a separation approach is shown in \\cite{Tian_Diggavi_Shamai_full_version:2012} to be optimal or approximately optimal for certain classes of sources and networks.\n\n\nIn \\cite{Saffar_Mitran_ISIT2013}, we have considered a two user time asynchronous Gaussian MAC with a pair of correlated sources. There, we have derived necessary and sufficient conditions for reliable communication and consequently derived a separation theorem for the problem. This paper extends the work of \\cite{Saffar_Mitran_ISIT2013} to a more general setup with $K$ nodes and a relay. Also, the recent work \\cite{Yemini_Asynchronous_Side:2014} considers the point-to-point state-dependent and cognitive multiple access channels with time asynchronous side information.\n\nIn \\cite{Cover_McEliece:81}, the authors have considered a MAC with no common time base between encoders.\nThere, the encoders transmit with an unknown offset with respect to each other, and the offset is\nbounded by a maximum value $\\dm(n)$ that is a function of coding block length $n$. Using a\ntime-sharing argument, it is shown that the capacity region is the same as the capacity of the\nordinary MAC as long as $\\dm(n)\/n \\rightarrow 0$. On the other hand, \\cite{Hui_Humblet:85}\nconsiders a {\\em totally asynchronous} MAC in which the coding blocks of different users can\npotentially have no overlap at all, and thus potentially have several block lengths of shifts between\nthemselves (denoted by random variables $\\Delta_i$). Moreover, the encoders have different clocks\nthat are referenced with respect to a standard clock, and the offsets between the start of code\nblocks for the standard clock and the clock at transmitter $i$ are denoted by random variables\n$D_i$. For such a scenario, in \\cite{Hui_Humblet:85}, it is shown that the capacity region differs\nfrom that of the synchronous MAC only by the lack of the convex hull operation. In\n\\cite{Poltyrev:83}, Poltyrev also considers a model with arbitrary delays, known to the receiver\n(as opposed to \\cite{Hui_Humblet:85}). 
Among other related works is the recent paper\n\\cite{Farkas_Koi:2012} that finds a single letter capacity region for the case of a $3$ sender MAC,\n$2$ of which are synchronized with each other and both asynchronous with respect to the third one.\n\n\nIn this paper, we study the communication of $K$ correlated sources over a $K$-user Gaussian time-asynchronous MARC\n(TA-MARC) where the encoders cannot synchronize the starting times of their codewords. Rather, they\ntransmit with unknown positive time delays $d_1,d_2,\\cdots,d_{K+1}\\geq 0$ with respect to a time reference, where the index $K+1$ indicates the relay transmitter. The time shifts are also bounded by $d_{\\ell} \\leq \\dm(n),$ $\\ell=1,\\cdots,K+1$, where $n$ is the codeword block length. Moreover, we assume that the offsets $d_1,d_2,\\cdots,d_{K+1}$ are unknown to the transmitters as a practical assumption since they are not controlled by the transmitters. We further assume that the\nmaximum possible offset\n$\\dm(n) \\rightarrow \\infty$ as $n\\rightarrow \\infty$ while ${\\dm(n) \/ n} \\rightarrow 0$.\n\nThe rest of this paper is organized as follows. In Section \\ref{section_preliminaries}, we present\nthe problem statement and preliminaries along with a key lemma that is useful in the derivation of\nthe converse. In Section \\ref{section_converse}, as our main result, the converse part of the capacity theorem (i.e., a theorem stating coinciding necessary and sufficient conditions for reliable source-channel communication) is proved. Then, under specific gain conditions, using separate source and channel coding and the results of \\cite{Cover_McEliece:81} combined with block Markov coding, it is shown in Section\n\\ref{section_achievability} that the thus achievable region matches the outer bound. Section \\ref{results_statements} then states a separation theorem under specific gain conditions for the TA-MARC as the combination of converse and achievability parts along with a corollary that results for the interference channel. Finally, Section \\ref{section_conclusion} concludes the paper. \\vspace{-.2cm}\n\n\n\n\n\n\\section{Problem Statement and a Key Lemma} \\label{section_preliminaries}\n\n{\\em Notation}: In what follows, we denote random variables by upper case letters, e.g., $X$, their realizations by lower case\nletters, e.g., $x$, and their alphabet by calligraphic letters, e.g., $\\mathcal{X}$. For integers $0 \\leq a \\leq b$, $Y_{a}^{b}$ denotes\nthe $b-a+1$-tuple $(Y[a],\\cdots,Y[b])$, and $Y^{b}$ is a shorthand for $\\Y{0}{b-1}$. Without confusion, $X_{\\ell}^{n}$ denotes the\nlength-$n$ MARC input codeword $(X_{\\ell}[0],\\cdots,X_{\\ell}[n-1])$ of the $\\ell$th transmitter, and based on\nthis, we also denote $(X_{\\ell}[a],\\cdots,X_{\\ell}[b])$ by $X_{\\ell,a}^{b}$. The $n$-length\ndiscrete Fourier transforms (DFT) of the $n$-length codeword $X_{\\ell}^{n}$ is denoted by $\\hat{X}_{\\ell}^{n} =\n{\\rm{DFT}}(\\X{\\ell}{n})$. Furthermore, let $[1,K] \\triangleq \\{1,\\cdots,K\\}$, for $\\forall K \\in \\mathbb{N}$.\n\n\\begin{figure}\n\\centering\n{\\includegraphics[keepaspectratio=true, width=12.68cm]{system_model_channel2.eps}}\n\\caption{{Gaussian time asynchronous multiple access relay channel (TA-MARC), with delays} $d_{1},\\cdots,d_{K+1}$.}\n\\label{fig:TA-MAC}\n\\end{figure}\n\n\nConsider $K$ finite alphabet sources $\\{(U_{1}[i],U_{2}[i],\\cdots,U_{K}[i])\\}_{i=0}^{\\infty}$ as correlated random variables drawn according to a distribution $p(u_1,u_2,\\cdots,u_K)$. 
The sources are memoryless, i.e.,\n$(U_{1}[i],U_{2}[i],\\cdots,U_{K}[i])$'s are independent and identically distributed (i.i.d) for $i=1,2,\\cdots$. The indices $1,\\cdots,K$, represent the transmitter nodes and the index $K+1$ represents the relay transmitter. All of the sources are to\nbe transmitted to a destination by the help of a relay through a continuous alphabet, discrete-time memoryless\nmultiple-access relay channel (MARC)\nwith time asynchronism between different transmitters and the relay. Specifically, as depicted in Fig. \\ref{fig:TA-MAC}, the encoders use different time references and thus we assume that the encoders start transmitting with offsets of\n\\begin{align}\n0 \\leq d_{\\ell} \\leq \\dm(n), \\quad \\ell=1,\\cdots,K+1,\n\\end{align}\nsymbols with respect to a fixed time reference, where $d_{K+1}$ is the offset for the relay transmitter with respect to the time reference.\n\nHence, the probabilistic characterization of the time-asynchronous Gaussian MARC, referred to as a Gaussian TA-MARC and denoted by $\\mathcal{M}([1,K+1])$ throughout the paper, is\ndescribed by the relationships\n\\begin{align}\\label{channel-model-1}\nY_{\\mathsf{D}}[i] & = \\sum_{\\ell=1}^{K+1} g_{\\ell\\mathsf{D}}X_{\\ell}[i-d_{\\ell}] + Z_{\\mathsf{D}}[i], \\quad i=0,1,\\cdots,n+\\dm(n)-1,\n\\end{align}\n\\noindent as the $i$th entry of the received vector $Y_{\\sf{D}}^{n+\\dm(n)}$ at the destination ($\\sf{D}$), and\n\\begin{align}\\label{channel-model-2}\nY_{\\mathsf{R}}[i] & = \\sum_{\\ell=1}^{K} g_{\\ell \\mathsf{R}}X_{\\ell}[i-d_{\\ell}] + Z_{\\mathsf{R}}[i], \\quad i=0,1,\\cdots,n+\\dm(n) - 1,\n\\end{align}\n\\noindent as the $i$th entry of the received vector $Y_{\\mathsf{R}}^{n+\\dm(n)}$ at the relay ($\\mathsf{R}$), where\n\\begin{itemize}\n\\item $g_{\\ell \\mathsf{D}},\\ell=1,\\cdots,K+1,$ are complex gains from transmission nodes as well as the relay (when $\\ell=K+1$) to the destination, and $g_{\\ell\\mathsf{R}}, \\ell = 1,\\cdots,K,$ are complex gains from the transmission nodes to the relay,\n\\item $X_{\\ell}[i-d_{\\ell}], \\ell=1,\\cdots,K+1$, are the delayed channel inputs such that $X_{\\ell}[i-d_{\\ell}] = 0$ if $(i-d_{\\ell})$$ \\notin \\{0,1,\\cdots,n-1\\}$ and $X_{\\ell}[i-d_{\\ell}]\\in \\mathbb{C}$ otherwise,\n\\item $Z_{\\mathsf{D}}[i],Z_{\\mathsf{R}}[i] \\sim {\\mathcal{C}\\mathcal{N}(0,N)}$ are circularly symmetric complex Gaussian noises at the destination and relay, respectively.\n\\end{itemize}\nFig. \\ref{fig:TA-MAC} depicts the delayed codewords of the encoders, and the formation of the received codeword for the TA-MARC.\n\n\n\n\nWe now define a joint source-channel code and the notion of reliable communication for a Gaussian\nTA-MARC in the sequel.\n\n\\begin{definition}\nA block joint source-channel code of length $n$ for the Gaussian TA-MARC with the block of correlated\nsource outputs $$\\{(U_1[i],U_2[i],\\cdots,U_K[i])\\}_{i=0}^{n-1}$$ is defined by\n\\begin{enumerate}\n\\item {A set of encoding functions with the bandwidth mismatch factor of unity\\footnote{The assumption of unity mismatch factor is without loss of generality and for simplicity of exposition. Extension to the more general setting with different mismatch factors can be achieved by a simple modification (cf. 
Remark \\ref{Remark:about_mismatch}).}, i.e.,\n\\begin{align*}\nf_{\\ell}^{n}&: \\mathcal{U}_{\\ell}^n \\rightarrow \\mathbb{C}^n, \\quad \\ell=1,2,\\cdots,K,\n\\end{align*}\n\\noindent that map the source outputs to the codewords, and the relay encoding function\n\\begin{align}\\label{Eq:relay_encoding_function}\nx_{(K+1)}^{i+1} = f_{(K+1)}^{i+1}(y_{\\mathsf{R}}[0],y_{\\mathsf{R}}[1],\\cdots,y_{\\mathsf{R}}[i]), \\quad i=0,2,\\cdots,n-2.\n\\end{align}\n\\noindent The sets of encoding functions are denoted by the {\\em codebook} $\\mathcal{C}^{n} = \\Big\\{f_{1}^{n},\\cdots,f_{K}^{n},\\{f_{(K+1)}^{i+1}\\}_{i=0}^{n-2}\\Big\\}$}.\n\n\\item{Power constraints $P_\\ell$, $\\ell=1,\\cdots,K+1,$ on the codeword vectors $X^{n}_{\\ell}$, i.e.,\n\\begin{align}\\label{Power_constraint}\n\\mathbb{E}\\left[{1 \\over n} \\sum_{i=0}^{n-1}\\vert X_{\\ell }[i]\\vert^2\\right] =\n\\mathbb{E}\\left[{1 \\over n} \\sum_{i=0}^{n-1}\\vert \\hat{X}_{\\ell }[i]\\vert^2\\right] \\leq P_\\ell, \\ \\\n\\end{align}\n\\noindent for $\\ell=1,\\cdots,K+1$ where we recall that $\\hat{X}^{n}_{\\ell}=\\text{DFT}\\{X_{\\ell}^{n}\\}$, and $\\mathbb{E}[\\cdot]$ represents the expectation operator.}\n\n\\item{A decoding function $g^n(y_{\\sf{D}}^{n+\\dm} \\vert d_{1}^{K+1}) : \\mathbb{C}^{n+\\dm} \\times [0,\\dm]^{K+1} \\rightarrow \\mathcal{U}_{1}^n \\times\\cdots \\times \\mathcal{U}_{K}^n. $ }\n\\end{enumerate}\n\\end{definition}\n\\begin{definition} \\label{reliability_definition} We say the source $\\{(U_{1}[i],U_{2}[i],\\cdots,U_{K}[i])\\}_{i=0}^{n-1}$ of i.i.d. discrete random variables with joint probability mass function $p(u_1,u_2,\\cdots,u_K)$ {\\em can be reliably sent} over a Gaussian TA-MARC, if there exists a sequence of codebooks $\\mathcal{C}^{n}$ and decoders $g^n$ in $n$ such that the output sequences $\\Uo,\\Ut,\\cdots,U_{K}^{n}$ of the source can be estimated from $Y_{\\mathsf{D}}^{n+\\dm(n)}$ with arbitrarily asymptotically small probability of error uniformly over {\\em all} choices of delays $0 \\leq $$ d_{\\ell} $$\\leq \\dm(n),$ $\\ell=1,\\cdots,K+1$, i.e.,\n\\begin{align}\\label{main_error_probability}\n\\sup_{0 \\leq d_{1}, \\cdots, d_{K+1} \\leq \\dm(n)} P_e^n(d_{1}^{K+1}) \\longrightarrow 0, \\ \\ {\\rm as} \\ \\ n \\rightarrow \\infty,\n\\end{align}\n\\noindent where\n\\begin{align}\nP_e^n(d_{1}^{K+1}) \\triangleq P[g(Y_{\\sf{D}}^{n+\\dm(n)} \\vert d_{1}^{K+1}) \\neq (\\Uo,\\Ut,\\cdots,U_{K}^{n}) \\vert d_{1}^{K+1}],\n\\end{align}\nis the error probability for a given set of offsets $d_{1}^{K+1}$. \\thmend\n\\end{definition}\n\nWe now present a key lemma that plays an important role in the derivation of our results. In order\nto state the lemma, we first need to define the notions of a {\\em sliced} MARC and a {\\em sliced cyclic} MARC as follows:\n\n\\begin{definition}\nLet $\\mS \\subseteq [1,K+1]$ be a subset of transmitter node indices. 
A Gaussian sliced TA-MARC ${\\mathcal{M}}(\\mS)$ corresponding to the Gaussian TA-MARC ${\\mathcal{M}}([1,K+1])$ defined by \\eqref{channel-model-1}-\\eqref{channel-model-2}, is a MARC in which only the codewords of the encoders with indices in $\\mS$ contribute to the destination's received signal, while the received signal at the relay is the same as that of the original Gaussian TA-MARC $\\mathcal{M}([1,K+1])$.\n\nIn particular, for the Gaussian sliced MARC ${\\mathcal{M}}(\\mS)$, the received signals at the destination and the relay at the $i$th time index, denoted by ${Y}_{\\mathsf{D}(\\mS)}[i]$ and ${Y}_{\\mathsf{R}(\\mS)}[i]$ respectively, are given by\n\\begin{align}\\label{sliced_destination}\n{Y}_{\\mathsf{D}(\\mS)}[i] = \\sum_{\\ell \\in \\mS} g_{\\ell\\mathsf{D}}X_{{{\\ell}}}[i-d_{\\ell}] + {Z}_{\\mathsf{D}}[i], \\quad {i=0,\\cdots,n+\\dm-1},\n\\end{align}\n\\noindent and\n\\begin{align}\\label{sliced_relay}\n{Y}_{\\mathsf{R}(\\mS)}[i] = Y_{\\mathsf{R}}[i], \\quad {i=0,\\cdots,n+\\dm-1}.\n\\end{align}\n\\end{definition}\n\n\\begin{figure}\n\\hspace{-1.5cm}\\centering{\\includegraphics[keepaspectratio = true, height=14.2cm]{Partial_AMARC3.eps}}\n\\caption{Codewords of a Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$ (top) and the corresponding sliced cyclic MARC $\\tilde{\\mathcal{M}}(\\mS)$ (bottom).}\n\\label{fig:partial-TA-MAC}\n\\end{figure}\n\n\\begin{definition}\nA sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$, corresponding to the sliced TA-MARC $\\mathcal{M}(\\mS)$ defined by \\eqref{sliced_destination}-\\eqref{sliced_relay}, is a sliced TA-MARC in which the codewords are cyclicly shifted around the $n$th time index to form new received signals at the destination \\textit{only}. Specifically, the corresponding outputs of the sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$ at the destination and the relay at the $i$th time index, denoted by $\\tilde{Y}_{\\mathsf{D}(\\mS)}[i]$ and $\\tilde{Y}_{\\mathsf{R}(\\mS)}[i]$ respectively, can be written as\n\\begin{align}\n\\tilde{Y}_{\\mathsf{D}(\\mS)}[i] = \\sum_{\\ell \\in \\mS} g_{\\ell \\mathsf{D}}X_{{\\ell}}[(i-d_{\\ell})\\hspace{0mm}\\mod n] + Z_{\\mathsf{D}}[i], \\quad i = 0,\\cdots,n-1,\n\\end{align}\n\\noindent and\n\\begin{align}\n\\tilde{Y}_{\\mathsf{R}(\\mS)}[i] &= \\sum_{\\ell = 1}^{K} g_{\\ell \\mathsf{R}}X_{{{\\ell}}}[i-d_{\\ell}] + Z_{\\mathsf{R}}[i], \\quad i = 0,\\cdots,n-1,\n\\nonumber\\\\\n&=Y_{\\mathsf{R}}[i].\n\\end{align}\n\nIn particular, as shown in Fig. \\ref{fig:partial-TA-MAC}, the tail of the codewords are cyclicly shifted to the beginning of the block, where the start point of the block is aligned with the first time instant. The destination's output $\\tilde{Y}_{\\mathsf{D}(\\mS)}^{n}$ of the sliced cyclic MARC is the $n$-tuple that results by adding the shifted versions of the codewords $X_{\\ell}^{n},{\\ell \\in \\mS}$. As indicated in Fig. \\ref{fig:partial-TA-MAC}, we divide the entire time interval $[0,n+\\dm-1]$ into three subintervals $\\mA, \\mB$, and $\\mC$ where\n\\begin{itemize}\n\\item $\\mA$ is the sub-interval representing the left tail of the received codeword, {i.e.}, $[0,\\dm-1]$,\n\\item $\\mB$ represents the right tail, {i.e.}, $[n,n+\\dm-1]$,\n\\item $\\mC$ represents a common part between the sliced TA-MARC and sliced cyclic MARC, {i.e.}, $[\\dm,n-1]$.\n\\end{itemize}\n\\end{definition}\n\n\\begin{remark}\n\\label{Remark_01}\nIn both sliced TA-MARC and sliced cyclic MARC, the observation $Y^{n+\\dm}_{\\mathsf{R}}$ of the relay remains unchanged. 
Therefore, the generated channel input at the relay $X_{K+1}^{n}$ is the same as the original TA-MARC due to \\eqref{Eq:relay_encoding_function} when the same relay encoding functions are used.\n\\end{remark}\n\nThe following lemma implies that, for every choice of $\\mS \\subseteq [1,K+1]$, the mutual information rate between the inputs and the destination's output in the Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$ and the sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$ are asymptotically the same, i.e., their difference asymptotically vanishes. This fact will be useful in the analysis of the problem in Section\n\\ref{section_converse}, where we can replace a sliced TA-MARC with the corresponding sliced cyclic MARC.\n\nBefore stating and proving the key lemma, we define the following notations:\n\\begin{align}\nY_{\\mathsf{D}{(\\mS)}}[\\mA] & \\triangleq \\{Y_{\\mathsf{D}{(\\mS)}}[i]: i \\in \\mA \\}, \\\\\n\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]& \\triangleq \\{\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[i]: i \\in \\mA \\}, \\\\ \\label{Eq:definitionofXs}\nX_{\\mathcal{S}}^{n} & \\triangleq \\{X_{\\ell}^{n}: \\ell \\in \\mS\\}, \\\\\n\\vec{X}_{\\mathcal{S}}[{\\mA}] & \\triangleq \\{X_{\\ell}[i-d_{\\ell}]: \\ell\\in \\mS, i \\in {\\mA}\\},\\\\\n\\tilde{\\vec{X}}_{\\mathcal{S}}[{\\mA}] & \\triangleq \\{X_{\\ell}[i-d_{\\ell}\\ \\text{mod}\\ n]: \\ell\\in \\mS, i \\in {\\mA}\\},\n\\end{align}\n\\noindent where $\\mS \\subseteq [1,K+1]$ is an arbitrary subset of transmitter nodes indices, and recall that $X_{\\ell}[i-d_{\\ell}] = 0$, for $i-d_{\\ell} \\not\\in \\{0,1,\\cdots,n-1\\}$. Similarly, we can define $Y_{\\mathsf{D}{(\\mS)}}[\\mB]$, $Y_{\\mathsf{D}{(\\mS)}}[\\mC]$, $\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mB],\\cdots$, by replacing $\\mA$ with $\\mB$ or $\\mC$ in the above definitions.\n\n\n\\begin{lemma} \\label{Key_lemma} For a Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$, and the corresponding sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$,\n\\begin{align}\n{1 \\over n} \\left| I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) -\nI(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}}) \\right| & \\leq \\ep_{n}, \\quad \\forall \\ d^{K+1}_{1} \\in [0,\\dm(n)]^{K+1}, \\label{lemma_main_expression}\n\\end{align}\n\\noindent for all $\\mS \\subseteq [1,K+1]$, where $\\ep_{n}$ does not depend on $d_{1}^{K+1}$ and $\\ep_{n} \\rightarrow\n0$, as $n\\rightarrow \\infty$. \\thmend \n\\end{lemma}\n\\begin{IEEEproof}\n\nNoting that the mutual information between subsets of two random vectors is a lower bound on the mutual information between the original random vectors, we first lower bound the original mutual information $I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}})$:\n\\begin{align}\n& I(\\vec{X}_{\\mS}[\\mC]; Y_{{\\mathsf{D}(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}}) \\leq I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}). \\label{first_MI_lower}\n\\end{align}\n\\noindent Then, by splitting the entropy terms over the intervals $\\mA, \\mB$, and $\\mC$ as depicted in Fig. 
\\ref{fig:partial-TA-MAC}, we upper bound the same mutual information term $I(\\X{\\mS}{n}$$;{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm}$$ \\vert {d_{1}^{K+1}})$ as follows:\n\\begin{align}\n I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) & = h({Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) - h({Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert \\X{\\mS}{n}, {d_{1}^{K+1}}) \\nonumber \\\\\n& \\leq h(Y_{{\\mathsf{D}(\\mS)}}[\\mA] \\vert {d_{1}^{K+1}}) + h(Y_{{\\mathsf{D}(\\mS)}}[\\mB] \\vert {d_{1}^{K+1}}) + h(Y_{{\\mathsf{D}(\\mS)}}[\\mC] \\vert {d_{1}^{K+1}}) - \\sum_{i=0}^{n+\\dm-1} h(Z_{\\mathsf{D}}[i]) \\nonumber \\\\\n& = I(\\vec{X}_{\\mS}[\\mA];Y_{\\mathsf{D}{(\\mS)}}[{\\mA}] \\vert {d_{1}^{K+1}}) + I(\\vec{X}_{\\mS}[\\mB];Y_{\\mathsf{D}{(\\mS)}}[{\\mB}] \\vert {d_{1}^{K+1}})\n+ I(\\vec{X}_{\\mS}[\\mC];Y_{\\mathsf{D}{(\\mS)}}[{\\mC}] \\vert {d_{1}^{K+1}}). \\label{first_MI_upper}\n\\end{align}\n\n\nAlso, the mutual information term $I(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}})$ which is associated to the sliced cyclic MARC can be similarly lower bounded as\n\\begin{align}\n& I(\\tilde{\\vec{X}}_{\\mS}[\\mC]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}}) \\leq I(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}}), \\label{second_MI_lower}\n\\end{align}\n\\noindent and upper bounded as\n\\begin{align}\nI(X^{n}_{\\mS}; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}\\vert {d_{1}^{K+1}}) & = h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}} \\vert {d_{1}^{K+1}}) - h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}} \\vert X^{n}_{\\mS}, {d_{1}^{K+1}}) \\nonumber \\\\\n& \\leq h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA] \\vert {d_{1}^{K+1}}) + h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mC] \\vert {d_{1}^{K+1}})- {\\sum_{i=0}^{n-1}} h(Z_{\\mathsf{D}}[i]) \\nonumber \\\\ \\nonumber\n& = I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})+ I(\\tilde{\\vec{X}}_{\\mS}[\\mC]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}})\\\\\n& = I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})+ I(\\vec{X}_{\\mS}[\\mC]; Y_{\\mathsf{D}{(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}}),\n\\label{second_MI_upper}\n\\end{align}\n\\noindent where in the last step, we used the fact that for any $\\mathcal{S}\\subseteq [1,K+1]$, ${\\tilde{Y}}_{\\mathsf{D}{(\\mS)}}[{\\mC}] = Y_{{\\mathsf{D}(\\mS)}}[{\\mC}]$ and $\\tilde{\\vec{X}}_{\\mS}[\\mC]=\\vec{X}_{\\mS}[\\mC]$, as there is no cyclic foldover for $i\\in {\\mC}$\n\nHence, combining \\eqref{first_MI_lower}-\\eqref{first_MI_upper}, and \\eqref{second_MI_lower}-\\eqref{second_MI_upper}, we can\nnow bound the difference between the mutual information terms as\n\\begin{align}\n&{1 \\over n} \\left| I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) -\nI(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}}) \\right| \\nonumber \\\\\n& \\quad \\leq {1\\over n}I(\\vec{X}_{\\mS}[\\mA]; Y_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}}) + {1 \\over n} I(\\vec{X}_{\\mS}[\\mB]; Y_{\\mathsf{D}{(\\mS)}}[\\mB]\\vert {d_{1}^{K+1}}) +{1 \\over n} I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}}). \\label{three_terms_expansion}\n\\end{align}\n\\noindent But all of the terms in the right hand side of \\eqref{three_terms_expansion} can also be\nbounded as follows. 
Consider the first term:\n\\begin{align}\n\\nonumber\n{1\\over n}I(\\vec{X}_{\\mS}[\\mA]; Y_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})& = {1\\over n} \\left[ h(Y_{\\mathsf{D}{(\\mS)}}[\\mA] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[{\\mA}]) \\right] \\\\ \\nonumber\n& \\leq {1\\over n} \\sum_{i \\in \\mA} \\left[ h(Y_{{\\mathsf{D}(\\mS)}}[i] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& = {1 \\over n} \\sum_{i \\in \\mA} \\left[ h\\left(\\sum_{\\ell \\in \\mS} g_{{\\ell}\\mathsf{D}}X_{\\ell}[i-d_\\ell] + Z_{\\mathsf{D}}[i] \\right)\n- h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& \\stackrel{\\rm (a)}{\\leq} {1 \\over n} \\sum_{i \\in \\mA} \\log\\left(1 +{ {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} g_{{\\ell}\\mathsf{D}}X_{\\ell}[i-d_\\ell] \\right\\vert}^{2} \\over {N} } \\right)\\\\ \\nonumber\n& \\stackrel{\\rm (b)}{\\leq} {1 \\over n} \\sum_{i \\in \\mA} \\log\\left(1 +{ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\vert X_{\\ell}[i-d_\\ell]\\vert^{2}} \\over {N} } \\right)\\\\ \\nonumber\n& \\stackrel{\\rm (c)} \\leq {\\vert \\mA \\vert \\over n} \\log\\left(1 +{ { \\sum_{i \\in \\mA} \\left[ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\vert X_{\\ell}[i-d_\\ell]\\vert^{2}} \\right] } \\over {\\vert \\mA \\vert N} } \\right)\\\\ \\nonumber\n& \\stackrel{\\rm (d)}{=} {\\dm \\over n} \\log\\left(1 +{ { {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\left[\\sum_{i \\in \\mA}\\vert X_{\\ell}[i-d_\\ell]\\vert^{2}\\right]}} \\over {\\dm N} } \\right)\\\\ \\nonumber\n& \\leq {\\dm \\over n} \\log\\left(1 +{ { {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\sum_{i=0}^{n-1}\\vert X_{\\ell i}\\vert^2}} \\over {\\dm N} } \\right)\\\\ \\nonumber\n&\\stackrel{\\rm (e)}{\\leq} {\\dm \\over n} \\log\\left(1 +{n\\over \\dm}{ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}P_{\\ell}} \\over { N} } \\right) \\\\\n& \\triangleq \\gamma\\left(\\dfrac{\\dm}{n}\\right), \\label{lemma_inequality_1}\n\\end{align}\n\\noindent where $\\rm{(a)}$ follows by the fact that Gaussian distribution maximizes the differential entropy \\cite[Thm. 8.4.1]{Cover:2006}, $\\rm{(b)}$ follows from the Cauchy-Schwartz inequality:\n\\begin{align}\n\\left \\vert \\sum_{\\ell\\in \\mathcal{S}} g_{\\ell\\mathsf{D}}{X}_{\\ell}[i-d_{\\ell}] \\right\\vert^{2} & \\leq {\\left(\\sum_{\\ell\\in \\mathcal{S}}\\vert g_{\\ell\\mathsf{D}} \\vert ^{2} \\right)} \\left(\\sum_{\\ell\\in \\mathcal{S}} \\vert X_{\\ell}[i-d_{\\ell}] \\vert^{2}\\right), \\label{eq_Cauchy}\n\\end{align}\n\\noindent $(\\rm c)$ follows from concavity of the $\\log$ function, $(\\rm d)$ follows from the fact that $\\vert \\mA \\vert = \\dm$, and $(\\rm e)$ follows from the power constraint in \\eqref{Power_constraint}.\n\nSimilarly, for the second term in the right hand side of \\eqref{three_terms_expansion}, it can be shown that\n\\begin{align}\n{1 \\over n} I(\\vec{X}_{\\mS}[\\mB]; Y_{\\mathsf{D}{(\\mS)}}[\\mB]\\vert {d_{1}^{K+1}}) \\leq \\gamma\\left({\\dm\\over n}\\right). 
\\label{lemma_inequality_2}\n\\end{align}\n\n\nFollowing similar steps that resulted in \\eqref{lemma_inequality_1}, we now upper bound the third term in the right hand side of \\eqref{three_terms_expansion} as follows\n\\begin{align}\n\\nonumber\n{1 \\over n} I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})& = {1\\over n} \\left[ h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[{\\mA}]) \\right] \\\\ \\nonumber\n& \\leq {1\\over n} \\sum_{i \\in \\mA} \\left[ h(\\tilde{Y}_{\\mathsf{D}}[i] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& = {1 \\over n} \\sum_{i \\in \\mA} \\left[ h\\left(\\sum_{\\ell \\in \\mS} g_{\\ell\\mathsf{D}}X_{{{\\ell}}}[(i-{d_{{\\ell}}})\\hspace{-2mm} \\mod n] + Z_{\\mathsf{D}}[i] \\Big\\vert d_{1}^{K+1}\\right) - h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& \\leq {1 \\over n} \\sum_{i \\in \\mA} \\log\\left(1 +{ {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} g_{{\\ell}\\mathsf{D}}X_{\\ell}[(i-{d_{{\\ell}}})\\hspace{-2mm} \\mod n] \\right\\vert}^{2} \\over {N} } \\right)\n\\\\ \\nonumber\n& \\leq {\\dm \\over n} \\log\\left(1 +{n\\over \\dm}{ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}P_{\\ell}} \\over { N} } \\right)\\\\\n& = \\gamma\\left(\\dfrac{\\dm}{n}\\right). \\label{lemma_inequality_3}\n\\end{align}\n\nBased on \\eqref{lemma_inequality_1}, \\eqref{lemma_inequality_2}, and \\eqref{lemma_inequality_3}, the absolute difference between the mutual informations in \\eqref{lemma_main_expression} is upper bounded by $3\\gamma(\\dm\/n)$.\nOne can see that $3\\gamma\\left(\\dm(n)\/n\\right) \\rightarrow 0$ as $n \\rightarrow \\infty$, since for any $a>0$, $z_{n}\\log(1 + a\/{z_{n}}) \\rightarrow 0$ as\n$z_{n} \\rightarrow 0$, and the lemma is proved by taking $z_{n}=\\dm(n)\/n$ and $a={\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2}\\sum_{\\ell\\in \\mathcal{S}}P_{\\ell}}\/N$.\n\\end{IEEEproof}\n\n\\vspace{-.4cm}\n\\section{Converse}\\label{section_converse}\n\n\\begin{lemma}\n\\label{Lemma:Reliable_Communication}\nConsider a Gaussian TA-MARC with power constraints\n$P_1,P_2,\\cdots,P_K$ on the transmitters, and the power constraint $P_{K+1}$ on the relay, and the set of encoders' offsets $d_{1}^{K+1}$. Moreover, assume that the set of offsets $d_{1}^{K+1}$ are known to the receiver, $\\dm(n) \\rightarrow \\infty$, and ${\\dm(n)\n\/ n} \\rightarrow 0$ as $n\\rightarrow \\infty$. Then, a necessary condition for reliably\ncommunicating a source tuple $(U^{n}_1,U^{n}_2,\\cdots,U^{n}_{K}) \\sim {\\prod_{i=0}^{n-1}}p(u_{1}[i],u_{2}[i],\\cdots,u_{K}[i])$, over such a Gaussian\nTA-MARC, in the sense of Definition \\ref{reliability_definition}, is given by\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq \\log\\left(1+{\\sum_{\\ell \\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert ^{2}P_{\\ell} \\over N}\\right),\\quad \\forall \\mS \\subseteq [1,K+1] \\label{separation_TAMAC_1}\n\\end{align}\n\\noindent where $\\mS$ includes the relay, {i.e.}, $\\{K+1\\}\\in \\mS$, where by definition $U_{K+1}\\triangleq \\emptyset$, and $\\mathcal{S}^{c}\\triangleq[1,K+1]\/\\{\\mathcal{S}\\}$.\n\\thmend\n\\end{lemma}\n\\begin{remark}\n\\label{Remark:about_mismatch}\nThe result of \\eqref{separation_TAMAC_1} can be readily extended to the case of mapping blocks of source outputs of the length $m_{n}$ to channel inputs of the length $n$. 
In particular, for the bandwidth mismatch factor $\\kappa \\triangleq \\lim_{n \\rightarrow \\infty} {n \\over {m_{n}}}$, the converse result in \\eqref{separation_TAMAC_1}, to be proved as an achievability result in Section \\ref{section_achievability} as well, can be generalized to\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq \\kappa \\log\\left(1+{\\sum_{\\ell \\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert ^{2}P_{\\ell} \\over N}\\right), \\quad \\forall \\mS \\subseteq [1,K+1].\n\\end{align}\nSince considering a general mismatch factor $\\kappa>0$ obscures the proof, in the following, without essential loss of generality, we present the proof for the case of $\\kappa=1$.\n\\end{remark}\n\\begin{IEEEproof}\n\nFirst, fix a TA-MARC with given offset vector $d_{1}^{K+1}$, a codebook $\\mathcal{C}^{n}$, and\ninduced {\\em empirical} distribution\n\\[p(\\uo,\\cdots,u_{K}^{n},\\x{1}{n},\\cdots,\\x{K+1}{n},y_{\\mathsf{R}}^{n+\\dm},y_{\\mathsf{D}}^{n+\\dm} \\vert d_{1}^{K+1}).\\]\nSince for this fixed choice of the offset vector $d_{1}^{K+1}$, $P^n_e(d_{1}^{K+1}) \\rightarrow 0$, from Fano's inequality, we have\n\\begin{align}\n{1 \\over n}H(\\Uo,\\Ut,\\cdots,U_{K}^{n} \\vert Y_{\\sf{D}}^{n+\\dm},d_{1}^{K+1}) \\leq {1 \\over n}{P_e^n(d_{1}^{K+1})} \\log \\|{{\\mathcal{U}}^{n}_{1}}\\times{{\\mathcal{U}}^{n}_{2}}\\times\\cdots\\times{{\\mathcal{U}}^{n}_{K}}\\| + {1 \\over n}\n\\triangleq \\de_n, \\label{Fano_inequality}\n\\end{align}\nand $\\de_n \\rightarrow 0$, where convergence is uniform in $d_{1}^{K+1}$ by \\eqref{main_error_probability}.\n\nNow, we can upper bound $H(U_{\\mS}\\vert U_{\\mSc})$ as follows:\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & = {1 \\over n} H(\\Us \\vert \\Usc, d_{1}^{K+1}) \\nonumber \\\\\n& \\stackrel{(\\rm a)}{=} {1 \\over n} H(\\Us \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) \\nonumber \\\\\n& = {1 \\over n} I(\\Us; Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) + {1 \\over n} H(\\Us \\vert Y_{\\mathsf{D}}^{n+\\dm}, \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) \\nonumber \\\\\n& \\stackrel{(\\rm b)}{\\leq} {1 \\over n} I(\\X{\\mS}{n}; Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) + \\de_{n} \\nonumber \\\\ \\nonumber\n& \\stackrel{(\\rm c)}{=} {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) - {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc,X^{n}_{[1,K+1]}, d_{1}^{K+1}) + \\de_{n} \\nonumber \\\\\n& \\stackrel{(\\rm d)}{\\leq} {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) - {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc,X_{[1,K+1]}^{n}, d_{1}^{K+1}) + \\de_{n} \\nonumber \\\\ \\nonumber\n&={1\\over n} h(\\big\\{\\sum_{\\ell=1}^{K+1}g_{\\ell\\mathsf{D}}X_{\\ell}[i-d_{\\ell}]+Z_{\\mathsf{D}}[i]\\big\\}_{i=0}^{n+\\dm-1}\\vert \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1})-{1\\over n}h(Z_{\\mathsf{D}}^{n+\\dm})+\\delta_{n}\\\\ \\nonumber\n&={1\\over n} h(\\big\\{\\sum_{\\ell\\in \\mathcal{S}}g_{\\ell\\mathsf{D}}X_{\\ell}[i-d_{\\ell}]+Z_{\\mathsf{D}}[i]\\big\\}_{i=0}^{n+\\dm-1}\\vert \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1})-{1\\over n}h(Z_{\\mathsf{D}}^{n+\\dm})+\\delta_{n} \\\\\n& \\leq {1 \\over n} h(Y_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert d_{1}^{K+1}) - {1 \\over n}h(Z_{\\mathsf{D}}^{n+\\dm}) + \\de_{n} \\nonumber \\\\\n& = {1 \\over n} I(\\X{\\mS}{n}; Y_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert d_{1}^{K+1}) + \\de_{n} \\label{new_MARC_equation}\n\\end{align}\n\\noindent where in $(\\rm a)$ we used the fact 
that $\\X{\\mSc}{n}$ is a function of only ${U}_{\\mSc}^{n}$, in $(\\rm b)$ we used the data processing inequality and \\eqref{Fano_inequality}, in $(\\rm c)$ we used $X^{n}_{[1,K+1]}$ based on the definition in \\eqref{Eq:definitionofXs}, and lastly in $(\\rm d)$ we made use of the fact that conditioning does not increase the entropy.\n\nBut \\eqref{new_MARC_equation} represents the mutual information at the destination's output of the Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$ corresponding to the original Gaussian TA-MARC. Thus, using Lemma \\ref{Key_lemma}, we can now further upper bound the mutual information term in \\eqref{new_MARC_equation} by the corresponding mutual information term in the corresponding sliced cyclic MARC and derive\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) \\leq {1 \\over n} I(\\X{\\mS}{n}; {\\tilde{Y}_{\\mathsf{D}(\\mS)}}^{n} \\vert d_{1}^{K+1}) + \\ep_{n} + \\de_{n}. \\label{after_lemma}\n\\end{align}\n\nNow, let $D_{\\ell}, \\ell=1,\\cdots,K+1,$ be a sequence of independent random variables that are each uniformly distributed on the set $\\{0,1,\\cdots,\\dm(n)\\}$ and also independent of $\\{U^{n}_{\\ell}\\}_{\\ell=1}^{K+1}$, $\\{Z_{\\mathsf{D}}[i]\\}_{i=0}^{n-1}$, and $\\{Z_{\\mathsf{R}}[i]\\}_{i=0}^{n-1}$. Since\n\\eqref{after_lemma} is true for every choice of $d_{1}^{K+1} \\in \\{0,1,\\cdots,\\dm(n)\\}^{K+1}$, $H(U_{\\mS}\\vert U_{\\mSc})$ can\nalso be upper bounded by the average over $d_{1}^{K+1}$ of $I(\\X{\\mS}{n}; {\\tilde{Y}_{\\mathsf{D}(\\mS)}}^{n} \\vert d_{1}^{K+1})$. Hence,\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq I(\\X{\\mS}{n}; {\\tilde{Y}_{\\mathsf{D}(\\mS)}}^{n} \\vert D_{1}^{K+1})+\\epsilon_{n}+\\delta_{n} \\nonumber \\\\\n& \\stackrel{\\rm(a)}{=} I(\\X{\\mS}{n}; \\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} \\vert D_{1}^{K+1}) +\\epsilon_{n}+\\delta_{n}, \\label{eq:middle_step}\n\\end{align}\n\\noindent where $\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} = {\\rm{DFT}}({\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n})$, and $\\rm(a)$ follows from the\nfact that the DFT is a bijection.\n\nExpanding $I(\\X{\\mS}{n}; \\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} \\vert D_{1}^{K+1})$ in the right hand side of \\eqref{eq:middle_step},\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq {1 \\over n} [h(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} \\vert D_{1}^{K+1}) - h(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mathcal{S})}^{n} \\vert X_{\\mS}^{n}, D_{1}^{K+1})] + \\ep_{n} + \\de_{n} \\nonumber \\\\\n& \\leq {1 \\over n} [h(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n}) - h(\\hat{Z}_{\\mathsf{D}}^{n})] + \\ep_{n} + \\de_{n}, \\nonumber\n\\end{align}\n\\noindent where $\\hat{Z}_{\\mathsf{D}}^{n}={\\rm{DFT}}(Z_{\\mathsf{D}}^{n})$ has {i.i.d.} entries with $\\hat{Z}_{\\mathsf{D}}[i] \\sim\n\\mathcal{C}\\mathcal{N}(0,N)$. Recall $\\hat{X}_{{\\ell}}^{n} = {\\rm{DFT}}(X_{{\\ell}}^{n})$. Then,\n\\begin{align}\nh(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n}) & = h\\left(\\sum_{\\ell \\in \\mS} {e^{-j\\Bt(D_{{\\ell}})}} \\odot g_{{\\ell}\\mathsf{D}}\\hat{X}_{{\\ell}}^{n} + \\hat{Z}_{\\mathsf{D}}^{n} \\right) \\nonumber \\\\\n& \\leq \\sum_{i=0}^{n-1} h\\left(\\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{{\\ell}\\mathsf{D}} \\hat{X}_{{\\ell}}[i] + \\hat{Z}_{\\mathsf{D}}[i]\\right), \\nonumber\n\\end{align}\n\\noindent where ${e^{-j\\Bt(D)}} \\triangleq (e^{-j2\\pi i D \\over n})_{i=0}^{n-1}$ is an $n$-length vector, and $\\odot$ denotes\nelement-wise vector multiplication. 
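Here we have used the circular-shift property of the DFT: since the codeword of transmitter $\\ell$ enters $\\tilde{Y}^{n}_{\\mathsf{D}(\\mS)}$ only through the cyclic shift $X_{{\\ell}}[(i-D_{{\\ell}})\\hspace{-2mm} \\mod n]$, its $n$-point DFT is the original spectrum modulated by a unit-modulus phase ramp,\n\\begin{align}\n{\\rm{DFT}}\\Big(\\big(X_{{\\ell}}[(i-D_{{\\ell}})\\hspace{-2mm} \\mod n]\\big)_{i=0}^{n-1}\\Big)[i] = {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} \\hat{X}_{{\\ell}}[i], \\quad i=0,\\cdots,n-1. \\nonumber\n\\end{align}\n\\noindent Hence, a time offset appears at the destination only as a deterministic unit-modulus phase factor in the frequency domain, leaving the per-user energies $\\mathbb{E}\\vert \\hat{X}_{{\\ell}}[i]\\vert^{2}$ in the bounds below unchanged.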
Thus,\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc})\n&\\leq {1 \\over n} \\sum_{i=0}^{n-1} \\left[h\\left(\\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{\\ell}}[i] + \\hat{Z}_{\\mathsf{D}}[i]\\right)- h(\\hat{Z}_{\\mathsf{D}}[i])\\right] + \\ep_{n} + \\de_{n} \\nonumber\\\\\n& \\leq {1 \\over n} \\sum_{i=0}^{n-1} \\log\\left(1 + { { {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}} \\over {N}}\\right)\n+ \\ep_{n} + \\de_{n}. \\label{before_split_into_three}\n\\end{align}\n\nWe now divide the sum in \\eqref{before_split_into_three} into three terms for $0 \\leq i \\leq \\al(n)-1$, $\\al(n) \\leq i \\leq n-\\al(n)-1$, and $n-\\al(n) \\leq i \\leq n-1$, where $\\al(n): \\mathbb{N} \\rightarrow \\mathbb{N}$ is a function such that\n\\begin{align}\n{\\al(n) \\over n} \\rightarrow 0, \\ \\ {\\al(n)\\dm(n) \\over n} \\rightarrow \\infty. \\label{condition_on_alpha}\n\\end{align}\n\\noindent An example of such an $\\al(n)$ is the function $\\alpha(n) = \\lceil {n \\over\n\\dm(n)}{\\log\\dm(n)} \\rceil $. Consequently, we first upper bound the tail terms and afterwards the main term in the sequel.\n\nFor the terms in $0\\leq i \\leq \\al(n)-1$, we have\n\n\\begin{align}\n\\nonumber\n{1 \\over n} \\sum_{i=0}^{\\al(n)-1} \\log\\left(1 + { { {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}} \\over {N}}\\right) & \\stackrel{\\rm(a)}{\\leq} {1 \\over n} \\sum_{i=0}^{\\al(n)-1} \\log\\left(1 + { {\\sum_{\\ell\\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert^{2} \\cdot \\sum_{\\ell \\in \\mS}\\mathbb{E} \\vert \\hat{X}_{\\ell}[i]\\vert^{2} } \\over {N}}\\right) \\\\ \\nonumber\n& \\stackrel{\\rm(b)}{\\leq} {\\al(n) \\over n} \\log\\left(1 + { {\\sum_{i=0}^{\\al(n)-1} \\left[\\sum_{\\ell\\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert^{2}\\cdot \\sum_{\\ell \\in \\mS}\\mathbb{E} \\vert \\hat{X}_{\\ell}[i]\\vert^{2} \\right]} \\over {\\alpha(n)N}}\\right)\\\\ \\nonumber\n\\nonumber\n& \\stackrel{\\rm(c)}{\\leq}{\\al(n) \\over n} \\log\\left(1 +{n\\over \\alpha(n)} {{ \\sum_{\\ell\\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert^{2} \\cdot \\sum_{\\ell \\in \\mS}P_{\\ell}} \\over {N}}\\right) \\\\\n& \\triangleq \\lambda_{n}, \\label{1st_sum}\n\\end{align}\n\\noindent where $(\\rm a)$ follows by the Cauchy-Schwartz inequality (cf. \\eqref{eq_Cauchy}), $(\\rm b)$ follows by the concavity of the $\\log$ function and $(\\rm c)$ follows by the power constraints \\eqref{Power_constraint}.\nAlso, for $n-\\al(n) \\leq i \\leq n-1$, a similar upper bound can be derived by the symmetry of the problem as follows\n\n\\begin{align}\n{1 \\over n} \\sum_{i=n-\\al(n)}^{n-1} \\log\\left(1 + { { {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}} \\over {N}}\\right) \\leq \\lambda_{n}. 
\\label{2nd_sum}\n\\end{align}\n\nTo bound the third component of \\eqref{before_split_into_three} for $\\alpha(n)\\leq i\\leq n-\\alpha(n)-1$, we first obtain that\n\\begin{align}\n{ {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}}\n = \\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert^{2} \\mathbb{E}\\vert \\hat{X}_{\\ell}[i] \\vert^{2} + {\\sum_{ \\substack {(\\ell,\\ell^{'}) \\in \\mS^{2} \\\\ \\label{Eq:dependent_terms} \\ell < \\ell^{'}} }} 2\\Re\\mathbb{E}\\left\\{{e^{-j2\\pi i ({D_{{\\ell}} - D_{{\\ell^{'}}}})\\over n}} g_{\\ell\\mathsf{D}}g^{*}_{\\ell'\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i]\\hat{X}^{*}_{{{\\ell^{'}}}}[i] \\right\\},\n\\end{align}\n\\noindent where $\\Re(z)$ is the real part of $z\\in \\mathbb{C}$. Now, the following two cases can occur\n\n$i$) $\\ell< \\ell'0$ is part $r_j$'s constant density.\n\t\\end{assumption}\n\t\\noindent Critically, \\Cref{assum:Homogeneous} can be used to simplify the identification problem by exploiting measurements of the shape of a manipulated object.\n\tThis is accomplished by noting that, once a cobot has determined the shape of an object, the object's inertial parameters depend solely on the mass of each of its homogeneous parts. \n\tWe refer to our overall approach as \\emph{Homogeneous Part Segmentation} (HPS).\\footnote{We provide code, our complete simulation dataset, and a video showcasing our algorithm at \\url{https:\/\/papers.starslab.ca\/part-segmentation-for-inertial-identification\/}.} \n\tOur main contributions are:\n\t\\begin{itemize}\n\t\t\\item a formulation of inertial parameter identification that incorporates \\Cref{assum:Homogeneous};\n\t\t\\item a method combining the algorithms proposed in \\cite{attene_hierarchical_2008} and \\cite{lin_toward_2018} to improve part segmentation speed;\n\t\t\\item a dataset of models of 20 common workshop tools with 3D meshes, point clouds, ground truth inertial parameters, and ground truth part-level segmentation for each object; and\n\t\t\\item experiments highlighting the benefits of HPS when compared to two benchmark algorithms.\n\t\\end{itemize}\n\tWe undertake a series of simulation studies with our novel dataset and carry out a real-world experiment to assess the performance of the proposed algorithm on a real cobot. We show that, under noisy conditions, our approach produces more accurate inertial parameter estimates than competing algorithms that do not utilize shape information. 
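\n\t\n\t\\noindent As a concrete illustration of \\Cref{assum:Homogeneous}, the short sketch below (Python with NumPy; the names and the uniform interior sampling are illustrative and not taken from our implementation) assembles the mass, centre of mass, and rotational inertia of an object from sampled interior points of its homogeneous parts, so that the only remaining unknowns are the per-part masses:\n\t\\begin{verbatim}\nimport numpy as np\n\ndef point_mass_inertia(points, mass):\n    # Inertia of `mass` spread evenly over `points` (N x 3),\n    # taken about the origin of the given frame.\n    m_i = mass / len(points)\n    I = np.zeros((3, 3))\n    for p in points:\n        I += m_i * (p @ p * np.eye(3) - np.outer(p, p))\n    return I\n\ndef object_parameters(part_points, part_masses):\n    # Lump homogeneous parts into one rigid body.\n    m = sum(part_masses)\n    com = sum(mj * pts.mean(axis=0)\n              for mj, pts in zip(part_masses, part_points)) / m\n    inertia = sum(point_mass_inertia(pts, mj)\n                  for mj, pts in zip(part_masses, part_points))\n    return m, com, inertia\n\\end{verbatim}\n\t\\noindent Every sampled point carries a non-negative mass, so any non-negative choice of part masses yields a physically consistent parameter set; expressing the result in the force-torque sensor frame only requires the measured pose of the object, as formalized later in the paper.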
\n\t\n\t\\vspace{2mm}\n\t\\section{Related Work}\n\t\\label{sec:RelatedWork}\n\t\n\tThis section provides a brief overview of work related to the two main algorithmic components of our system: inertial parameter or load identification and part-level object segmentation.\n\tA more thorough survey of inertial parameter identification for manipulation is provided by Golluccio et al.\\ in \\cite{golluccio_robot_2020}.\n\t\n\t\\subsection{Load Identification}\n\t\n\tIn \\cite{atkeson_estimation_1986}, a least squares method is applied to determine the inertial parameters of a load manipulated by a robot arm.\n\tThe authors of \\cite{atkeson_estimation_1986} underline that poor performance can be caused by the low signal-to-noise ratios in the data used for the regression.\n\tA recursive total least squares (RTLS) approach is proposed in \\cite{kubus_-line_2008} to account for noise in the regressor matrix.\n\tHowever, an experimental evaluation in \\cite{farsoni_real-time_2018} of RTLS on multiple datasets reports large estimation errors, demonstrating that RTLS has limited utility with noisy sensors.\n\tIn this work, we also minimize a least squares cost, but we employ a unique formulation that is well-suited to data gathered by a slower-moving cobot, for example. \n\t\n\tWithout appropriate constraints on regression variables, many identification algorithms find unphysical solutions~\\cite{sousa_physical_2014, traversaro_identification_2016}.\n\tSufficient conditions for the physical consistency of the identified parameters are stated in \\cite{traversaro_identification_2016}, where a constrained optimization problem is formulated that guarantees the validity of the solution.\n\tSimilarly, physical consistency is enforced through linear matrix inequalities as part of the method proposed in \\cite{wensing_linear_2017}. Geodesic distance approximations from a prior solution are used in \\cite{lee_geometric_2019} to regularize the optimization problem without introducing very long runtimes.\n\tIn this work, we \n\t\\emph{implicitly} enforce the physical consistency of estimated inertial parameters by discretizing the manipulated object with point masses \\cite{ayusawa_identification_2010, nadeau_fast_2022}. \n\tFixed-sized voxels can also be used to achieve the same effect~\\cite{song_probabilistic_2020}.\n\t\n\tFinally, the authors of \\cite{sundaralingam_-hand_2021} augment the traditional force-torque sensing system with tactile sensors to estimate the inertial parameters and the friction coefficient of a manipulated object. %\n\tOur contribution similarly makes use of an additional sensing modality in the form of a camera. 
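\n\t\n\t\\noindent As a minimal illustration of the constrained least-squares estimation discussed above (the regressor used in this paper is derived in the next section; the matrix below is a synthetic stand-in), non-negativity of the estimated masses can be enforced with an off-the-shelf solver:\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import nnls\n\nrng = np.random.default_rng(0)\nA = rng.standard_normal((60, 4))         # synthetic stand-in for the\n                                         # stacked wrench regressor\nm_true = np.array([0.5, 0.0, 1.2, 0.3])  # part masses in kg\nb = A @ m_true + 0.01 * rng.standard_normal(60)\n\nm_hat, _ = nnls(A, b)   # solves min ||A m - b||_2  s.t.  m >= 0\n\\end{verbatim}\n\t\\noindent Keeping the part masses non-negative ties the estimate to a realizable mass distribution once the part shapes are fixed, in the same spirit as the point-mass discretization mentioned above.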
\n\t\n\t\\subsection{Part-Level Object Segmentation}\n\t\n\tAlthough there is no formal definition of a ``part\" of a manipulated item, humans tend to follow the so-called \\emph{minima rule}: the decomposition of objects into approximately convex contiguous parts bounded by negative curvature minima \\cite{hoffman_parts_1984}.\n\tThis rule has provided inspiration for many part segmentation methods reviewed in \\cite{rodrigues_part-based_2018}, which benchmarks several techniques on the Princeton dataset \\cite{chen_benchmark_2009} and divides approaches into surface-based, volume-based, and skeleton-based algorithms.\n\tSurface-based and volume-based mesh segmentation algorithms are reviewed in \\cite{shamir_survey_2008}, highlighting a tradeoff between segmentation quality and the number of parts produced by the algorithm.\n\tSkeleton-based segmentation methods for 3D shapes, which capture structural information as a lower-dimensional graph (i.e., the shape's \\emph{skeleton}), are reviewed in \\cite{tagliasacchi_3d_2016}.\n\tThe approach in \\cite{lin_seg-mat_2020}, which is based on the medial axis transform~\\cite{li_q-mat_2015}, exploits prior knowledge of an object's skeleton to produce a \\emph{top-down} segmentation about 10 times faster than \\emph{bottom-up} methods that use local information only. \n\t\n\tThe method of Kaick et al. \\cite{kaick_shape_2014} also segments incomplete point clouds into approximately convex shapes, but is prohibitively slow for manipulation tasks, with an average computation time of 127 seconds on the Princeton benchmark \\cite{chen_benchmark_2009} as reported in \\cite{lin_seg-mat_2020}.\n\tIn contrast, learning-based methods for part segmentation can struggle to generalize to out-of-distribution shapes but are usually faster than geometric techniques~\\cite{dou_coverage_2021, lin_point2skeleton_2021, rodrigues_part-based_2018}.\n\tOur approach to segmentation does not apply any learned components but is fast enough for real-time use by a collaborative robot.\n\t\n\tA hierarchical volume-based segmentation technique (HTC) proposed in \\cite{attene_hierarchical_2008}, enabled by fast tetrahedralization \\cite{si_tetgen_2015} and quick convex hull computation \\cite{barber_quickhull_1996}, can perform well if a watertight mesh can be reconstructed from the input point cloud.\n\tThis technique, which we describe in detail in Section \\ref{sec:visual_part_segmentation}, is an important element of our part segmentation pipeline.\n\tAnother key component of our approach is the surface-based algorithm proposed in \\cite{lin_toward_2018}, which uses a dissimilarity metric to iteratively merge nearby mesh patches, taking special care to preserve part boundaries.\n\n\t\\section{Inertial Parameters of Homogeneous Parts}\n\t\n\tIn this section, we describe our inertial parameter identification technique, which assumes that an object has been segmented into its constituent parts.\n\tBy determining the mass of each segment, we are able to identify the full set of inertial parameters, or to provide an approximate solution when \\Cref{assum:Homogeneous} is not respected.%\n\t\n\t\\subsection{Notation}\n\tReference frames $\\ObjectFrame$ and $\\SensorFrame$ are attached to the object and to the force-torque (FT) sensor, respectively.\n\tThe reference frame $\\WorldFrame$ is fixed to the base of the robot and is assumed to be an inertial frame, such that the gravity vector expressed in the sensor frame is given by $\\Vector{g}_{s} = \\Rot{w}{s} 
[0,0,-9.81]^\\Transpose$.\n\tThe orientation of $\\ObjectFrame$ relative to $\\SensorFrame$ is given by $\\Rot{b}{s}$ and the origin of $\\ObjectFrame$ relative to $\\SensorFrame$, expressed in $\\WorldFrame$, is given by $\\Pos{b}{s}{w}$.\n\tThe skew-symmetric operator $\\Skew{\\cdot}$ transforms a vector $\\Vector{u} \\in\\Real^3$ into a $\\Real^{3\\times3}$ matrix such that $\\Skew{\\Vector{u}} \\Vector{v} = \\Vector{u} \\times \\Vector{v}$.\n\t\n\t\\subsection{Formulation of the Optimization Problem}\n\tFor a part $\\Part{j}$, under \\Cref{assum:Homogeneous}, the $k$-th moment of a mass distribution discretized into $n$ point masses is given by\n\t\\begin{equation}\n\t\t\\label{eqn:Moments}\n\t\t\\int_{V_j} \\Vector{\\Position}^k \\rho_j(\\Vector{\\Position}) dV_j \\approx \\frac{\\Mass}{n}\\sum_{i}^{n} (\\Vector{\\Position}_i)^k ~,\n\t\\end{equation}\n\twhere the position of the $i$-th point mass relative to $\\ObjectFrame$ is given by $\\Vector{\\Position}_i$.\n\tFor a homogeneous mass density, the centre of mass corresponds to the centroid\n\t\\begin{equation}\n\t\t\\Pos{\\Part{j}}{b}{b} = \\frac{1}{n}\\sum_{i}^{n}\\Vector{\\Position}_i~.\n\t\\end{equation}\n\tThe inertia tensor of the $i$-th point mass relative to $\\ObjectFrame$ is\n\t\\begin{align}\n\t\t\\InertiaMatrix(\\Vector{\\Position}_i) \n\t\t&= - \\Mass_i \\Skew{\\Vector{\\Position}_i} \\Skew{\\Vector{\\Position}_i}\\\\[1mm]\n\t\t&= \\Mass_i \\begin{bmatrix}\n\t\t\ty^2 + z^2 & -xy & -xz\\\\\n\t\t\t-yx & x^2+z^2 & -yz\\\\\n\t\t\t-zx & -zy & x^2+y^2\n\t\t\\end{bmatrix},\n\t\\end{align}\n\twhere $\\Vector{\\Position}^{\\Transpose}_i = [x, y, z]$ and $\\Mass_i$ is the mass of the point.\n\t\n\tThe part's inertial parameters with respect to the sensor frame $\\SensorFrame$ are\n\t\\begin{align}\n\t\t\\label{eqn:ParamsFromPointMasses}\n\t\t&{}^s\\Vector{\\phi}^{\\Part{j}} = \\bbm m,\\!\\! & {}^s\\Vector{c}^{\\Part{j}},\\!\\! & {}^s\\InertiaMatrix^{\\Part{j}} \\ebm^\\Transpose = \\\\[1mm]\n\t\t&\\Mass\\! \\bbm \n\t\t1\\\\ \n\t\t\\Pos{\\Part{j}}{s}{s}\\\\ \n\t\t\\Vech\\!\\left(\\Rot{b}{s} \\frac{1}{n}\\sum_{i}^{n}\\left(-\\Skew{\\Vector{\\Position}_i}\\!\\Skew{\\Vector{\\Position}_i}\\!\\right) \\Rot{s}{b} - \\Skew{\\Pos{\\Part{j}}{s}{s}}\\! \\Skew{\\Pos{\\Part{j}}{s}{s}}\\! 
\\right) \n\t\t\\ebm , \\notag\n\t\\end{align}\n\twhere $\\Rot{b}{s}$ is the rotation that aligns $\\ObjectFrame$ to $\\SensorFrame$, $\\Pos{\\Part{j}}{s}{s}$ is the translation that brings the centroid of $\\Part{j}$ to $\\SensorFrame$, and $\\Vech(\\cdot)$ is the \\emph{vector-half} operator defined in \\cite{henderson_vec_1979} that extracts the elements on and above the main diagonal.\n\t\n\tBy keeping $\\Vector{\\Position}_i$ fixed and by measuring $\\Pos{b}{s}{s}$ and $\\Rot{b}{s}$ with the robot's perception system, it becomes clear that only the part's mass $\\Mass$ needs to be inferred in \\Cref{eqn:ParamsFromPointMasses} as $\\Pos{\\Part{j}}{s}{s} = \\Rot{b}{s}\\Pos{\\Part{j}}{b}{b}+\\Pos{b}{s}{s}$.\n\tHence, assuming that the robot's perception system can provide $\\Vector{\\Position}_i$ and that $\\Pos{b}{s}{s}$ and $\\Rot{b}{s}$ are either known or measured, the inertial parameters of a homogeneous part depend solely on its mass.\n\tSimilarly, the inertial parameters of a rigid object can be expressed as a function of the masses of its constituent parts.\n\t\n\tFor ``stop-and-go\" motions, where force measurements are taken while the robot is immobile, only the mass and \\TextCOM are identifiable \\cite{nadeau_fast_2022}.\n\tNonetheless, a stop-and-go trajectory greatly reduces noise in the data matrix, because accurate estimates of the end-effector kinematics are not needed \\cite{nadeau_fast_2022}.\n\tAssuming that the manipulated object is built from up to four homogeneous parts\\footnote{For stop-and-go trajectories, the rank of the data matrix is four when using non-degenerate poses as described in the appendix of \\cite{nadeau_fast_2022}. Dynamic trajectories can increase the rank of $\\DataMatrix$ to 10, enabling mass identification of up to 10 unique homogeneous parts.}, finding the mass of each part enables the identification of the complete set of inertial parameters even when stop-and-go trajectories are performed.\n\tThis identification requires measuring the wrench $\\Vector{b}_j$ at time step $j$ and relating the wrench to the masses $\\Vector{\\Mass}$ via\n\t\\begin{equation}\n\t\t\\underset{\\Vector{b}_j}{\\underbrace{\n\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\\Vector{\\Force}_s\\\\\n\t\t\t\t\t\\Vector{\\Torque}_s\n\t\t\t\t\\end{bmatrix}\n\t\t}}\n\t\t=\n\t\t\\underset{\\RedModel_j}{\\underbrace{\n\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\\Vector{\\Gravity}_s\\lbrack1&1&1&1\\rbrack\\\\\n\t\t\t\t\t-\\lbrack\\Vector{\\Gravity}_s\\rbrack_\\times \\lbrack\\Pos{\\Part{1}}{s}{s}&{}\\Pos{\\Part{2}}{s}{s}&\\Pos{\\Part{3}}{s}{s}&\\Pos{\\Part{4}}{s}{s}\\rbrack\n\t\t\t\t\\end{bmatrix}\n\t\t}}\n\t\t\\underset{\\Vector{\\Mass}}{\\underbrace{\n\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\tm_{\\Part{1}}\\\\m_{\\Part{2}}\\\\m_{\\Part{3}}\\\\m_{\\Part{4}}\n\t\t\t\t\\end{bmatrix}\n\t\t}},\n\t\\end{equation}\n\tstacking $K$ matrices such that $\\DataMatrix = [\\RedModel_1^\\Transpose, ..., \\RedModel_K^\\Transpose]^\\Transpose$ and $\\Vector{b} = [\\Vector{b}_1^\\Transpose, ..., \\Vector{b}_K^\\Transpose]^\\Transpose$.\n\tMinimizing the Euclidean norm leads to the convex optimization problem\n\t\\begin{align}\n\t\t\\label{eqn:ObjFunc}\n\t\t&\\min_{\\Vector{\\Mass} \\in\\Real^4} \\quad\\vert\\vert \\DataMatrix \\Vector{\\Mass} - \\Vector{b} \\vert\\vert_2 \\\\\n\t\t&\\quad\\text{\\emph{s.t.}} \\quad \\Mass_{\\Part{j}} \\ge 0 \\enspace \\forall j \\in \\{1, \\ldots, 4\\}, \\notag\n\t\\end{align}\n\twhich can be efficiently solved with standard methods.\n\t\n\t\\section{Visual Part 
Segmentation}\n\t\\label{sec:visual_part_segmentation}\n\tIn this section, we combine a part segmentation method that uses local information (e.g., surface normals) with a second method that relies on structural information (e.g., shape convexity).\n\tThe Python implementation of our part segmentation algorithm, which is described by \\Cref{algo:WarmStartedHTC}, includes an open-source version of \\cite{attene_hierarchical_2008} as well as our variant of \\cite{lin_toward_2018} that makes use of colour information.\n\t\n\tDefining the shape of an object from a point cloud involves reconstructing the object surface (i.e., shape reconstruction).\n\tIn this work, the ball-pivoting algorithm \\cite{bernardini_ball-pivoting_1999} is used owing to its speed and relative effectiveness on point clouds of low density.\n\tShape reconstruction can be challenging for objects with very thin parts (e.g., a saw) and a thickening of the object through voxelization is performed beforehand. \n\t\n\tAs stated by \\Cref{eqn:Moments}, the moments of a mass distribution are computed by integrating over the shape of the distribution. \n\tHence, volumetric part segmentation is sufficient for identification of the true inertial parameters of an object.\n\tTo obtain such a representation of the object from its surface mesh, tetrahedralization is performed via TetGen \\cite{si_tetgen_2015} and the resulting tetrahedral mesh is supplied as an input to the part segmentation algorithm.\n\t\n\tOur method makes use of the Hierarchical Tetrahedra Clustering (HTC) algorithm \\cite{attene_hierarchical_2008}, which iteratively merges clusters of tetrahedra such that the result is as convex as possible while also prioritizing smaller clusters.\n\tHTC maintains a graph with nodes representing clusters and edges representing adjacency, and with an edge cost based on the concavity and size of the connected nodes. \n\tThe concavity of a cluster is computed by subtracting its volume from its convex hull and an edge cost is defined by\n\t\\begin{align}\n\t\t\\label{eqn:htc_cost}\n\t\tc_{ij} &= \\text{CVXHULL}\\left(C_i \\cup C_j\\right) - \\text{VOL}(C_i \\cup C_j) \\notag\\\\\n\t\t\\text{Cost}(i,j) &= \\begin{cases}\n\t\t\t\t\t\t\t\tc_{ij}+1, &\\text{if }c_{ij}>0\\\\\n\t\t\t\t\t\t\t\t\\frac{\\vert C_i\\vert^2+\\vert C_j\\vert^2}{N^2}, &\\text{otherwise}\n\t\t\t\t\t\t\t\\end{cases},\n\t\\end{align}\n\twhere $\\vert C_i\\vert$ is the number of elements in cluster $C_i$ and $N$ is the total number of elements.\n\tThe edge associated with the lowest cost is selected iteratively and the connected nodes are merged into a single cluster, resulting in the hierarchical segmentation.\n\t\n\tTo make part segmentation faster, we perform an initial clustering such that HTC \n\trequires fewer \n\tconvex hull computations, which is by far the most expensive operation.\n\tThe initial clustering is provided through a bottom-up point cloud segmentation algorithm \\cite{lin_toward_2018} that clusters points together based on heuristic features and chooses a representative point for each cluster.\n\tTwo adjacent clusters are merged if the dissimilarity between their representative points is lower than some threshold $\\beta$ (n.b.\\ the $\\lambda$ symbol is used in \\cite{lin_toward_2018}). 
\n\tThe value of $\\beta$ increases with each iteration; the process halts when the desired number of clusters is obtained.\n\tIn our implementation, the dissimilarity metric is defined as:\n\t\\begin{equation}\n\t\t\\label{eqn:similarity_metric}\n\t\t\\text{D}(C_i,C_j) = \\lambda_p\\Norm{\\Vector{p}_i-\\Vector{p}_j} + \\lambda_l\\Norm{\\Vector{l}_i-\\Vector{l}_j} + \\lambda_n \\left(1-\\vert \\Vector{n}_i\\cdot \\Vector{n}_j\\vert\\right)\n\t\\end{equation}\n\twhere each $\\lambda$ is a tunable weight, $\\Vector{p}_i$ is the position of the representative point of cluster $C_i$, $\\Vector{l}_i \\in \\Natural^{3}$ is the RGB representation of its colour, and $\\Vector{n}_i$ is its local surface normal. \n\t\n\tTo enable accurate part segmentation, the initial clustering should not cross the boundaries of the parts that will subsequently be defined by the HTC algorithm.\n\tTherefore, initial clustering is stopped when the number of clusters (e.g., 50 in our case) is much larger than the desired number of parts.\n\tThe desired number of clusters does not need to be tuned on a per-object basis as long as it is large enough.\n\n\t\\SetAlgoSkip{SkipBeforeAndAfter}\n\t\\begin{algorithm}[h]\n\t\t\\DontPrintSemicolon\n\t\t\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\t\t\\Input{A point cloud}\n\t\t\\Output{A segmented tetrahedral mesh}\n\t\t\\nlset{1}Initialize similarity threshold $\\beta$ as done in \\cite{lin_toward_2018}\\;\n\t\tInitialize each point as a cluster\\;\n\t\t\\While{$number~of~clusters > desired~number$}{\n\t\t\t\\ForEach{existing cluster $C_i$}{\n\t\t\t\t\\ForEach{neighboring cluster $C_j$}{\n\t\t\t\t\t\\If{$\\text{D}(C_i,C_j)<\\beta$}{\n\t\t\t\t\t\tMerge $C_j$ into $C_i$\\;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t$\\beta \\leftarrow 2\\beta$\n\t\t}\n\t\t\\ForEach{point $\\Vector{p}$ at the border of two clusters}{\n\t\t\tMerge $\\Vector{p}$ into the most similar cluster\\;\n\t\t}\n\t\t\\nlset{2}Perform surface reconstruction and tetrahedralization\\;\n\t\tAssociate each tet. to its nearest cluster\\;\n\t\tInitialize a node in the graph for each cluster\\;\n\t\t\\ForEach{node}{\n\t\t\t\\If{two tet. from two nodes share a face}{\n\t\t\t\tCreate an edge between the nodes and compute edge-cost with \\Cref{eqn:htc_cost}\n\t\t\t}\n\t\t}\n\t\t\\While{$number~of~edges > 0$}{\n\t\t\tFind edge with lowest cost and merge its nodes\\;\n\t\t\tCreate a parent node that contains merged nodes\\;\t\n\t\t}\n\t\tLabel each tet. with its associated cluster number\\;\n\t\t\\caption{HTC (\\textbf{2}) with initial clustering (\\textbf{1})}\n\t\t\\label{algo:WarmStartedHTC}\n\t\\end{algorithm}\n\t\n\t\\section{Experiments}\n\tIn this section, each component of the proposed method is evaluated on 20 objects with standard metrics, and our entire identification pipeline is benchmarked in 80 scenarios.\n\tThe practicality of the proposed approach is also demonstrated through a real `hammer balancing act' experiment using relatively inexpensive robot hardware (see \\Cref{fig:demo}). \n\t\n\tTo conduct simulation experiments with realistic objects and evaluate the performance of part segmentation and identification, a dataset with ground truth inertial parameters and segments is needed.\n\tTo the best of the authors' knowledge, no such dataset is freely available.\n\tFrom CAD files contributed by the community, we built a dataset containing 20 commonly-used workshop tools. 
\n\tFor each object, our dataset contains a watertight mesh, a coloured surface point cloud with labelled parts, the object's inertial parameters, and a reference frame specifying where the object is usually grasped.\n\tThis dataset of realistic objects enables the evaluation of shape reconstruction, part segmentation, and inertial parameter identification.\n\n\t\\subsection{Experiments on Objects from Our Dataset} \\label{sec:experiments_on_dataset_objects}\n\tThe quality of a shape reconstructed from point cloud data is evaluated by computing the Hausdorff distance between the ground truth mesh and the reconstructed mesh, both scaled such that the diagonal of their respective bounding box is one metre in length.\n\tPart segmentation performed with our proposed variation of HTC is evaluated via the undersegmentation error (USE) \\cite{levinshtein_turbopixels_2009}, and with the global consistency error (GCE) \\cite{martin_database_2001, chen_benchmark_2009}, with results summarized in \\cref{tab:dataset_partseg_eval}.\n\tThe USE measures the proportion of points that are crossing segmentation boundaries, or \\emph{bleeding out} from one segment to another.\n\tThe GCE measures the discrepancy between two segmentations, taking into account the fact that one segmentation can be more refined than the other.\n\tBoth evaluation metrics represent dimensionless error ratios and are insensitive to over-segmentation, \n\tas discussed in Section \\ref{sec:Discussion}.\n\t\n\t\\Cref{tab:AverageComputeTime} summarizes the runtime performance obtained by augmenting HTC with the initial clustering in \\Cref{algo:WarmStartedHTC}.\n\tWhile the average segmentation error is almost identical in both cases, \\Cref{algo:WarmStartedHTC} executes in about a third of the time, owing to the smaller number of convex hull computations performed. \n\t\\begin{table}[t]\n\t\t\\centering\n\t\t\\caption{Average computation time and segmentation error per object from our dataset with standard deviations in parentheses. Initial clustering significantly reduces the runtime with little impact on the part segmentation.}\n\t\t\\label{tab:AverageComputeTime}\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\toprule\n\t\t\tAlgorithm & USE & GCE & Time (s)\\\\\n\t\t\t\\midrule\n\t\t\tHTC & 0.1 (0.13) & 0.05 (0.10) & 9.73 (5.56)\\\\\n\t\t\tHTC with Initial Clustering & 0.1 (0.11) & 0.07 (0.11) & 3.48 (1.00)\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-1mm}\n\t\\end{table}\n\t\\begin{table}[h!]\n\t\t\\centering\n\t\t\\caption{Evaluation of part segmentation (USE and GCE) and shape reconstruction (Hausdorff). 
As expected, objects with a single part do not have GCE and USE errors while objects with many parts have larger segmentation errors.}\n\t\t\\label{tab:dataset_partseg_eval}\n\t\t\\begin{tabular}{cccccc}\n\t\t\t\\toprule\n\t\t\tObject & USE & GCE & Hausdorff & Mass & \\#Parts\\\\\n\t\t\t\t & & & (mm) & (g) & \\\\\n\t\t\t\\midrule\n\t\t\tAllen Key\t\t\t&0\t\t&0\t\t&8.2\t&128 &1\\\\\n\t\t\tBox Wrench\t\t\t&0\t\t&0\t\t&10.9\t&206 &1\\\\\n\t\t\tMeasuring Tape\t\t&0\t\t&0\t\t&11.4\t&136 &1\\\\\n\t\t\tRuler\t\t\t\t&0\t\t&0\t\t&14.8\t&9 \t &1\\\\\n\t\t\tScrewdriver\t\t\t&0.013\t&0.001\t&12.1\t&30 &2\\\\\n\t\t\tNut Screwdriver\t\t&0.002\t&0.002\t&16\t\t&81 &2\\\\\n\t\t\tRubber Mallet\t\t&0.011\t&0.009\t&15.8\t&237 &2\\\\\n\t\t\tBent Jaw Pliers\t\t&0.176\t&0.01\t&16.5\t&255 &3\\\\\n\t\t\tMachinist Hammer\t&0.018\t&0.012\t&13.6\t&133 &2\\\\\n\t\t\tPliers\t\t\t\t&0.123\t&0.041\t&10.7\t&633 &3\\\\\n\t\t\tC Clamp\t\t\t\t&0.103\t&0.077\t&9.6\t&598 &5\\\\\n\t\t\tAdjustable Wrench\t&0.117\t&0.098\t&14.2\t&719 &4\\\\\n\t\t\tHammer\t\t\t\t&0.08\t&0.098\t&14.7\t&690 &3\\\\\n\t\t\tFile\t\t\t\t&0.057\t&0.102\t&17.9\t&20 &3\\\\\n\t\t\tSocket Wrench\t\t&0.141\t&0.123\t&7.4\t&356 &5\\\\\n\t\t\tHacksaw\t\t\t\t&0.129\t&0.128\t&11.1\t&658 &7\\\\\n\t\t\tClamp\t\t\t\t&0.259\t&0.181\t&17.6\t&340 &7\\\\\n\t\t\tVise Grip\t\t\t&0.158\t&0.287\t&17.2\t&387 &8\\\\\n\t\t\tElectronic Caliper\t&0.42\t&0.316\t&9.1\t&174 &14\\\\\n\t\t\tVise Clamp\t\t\t&0.296\t&0.373\t&15.1\t&225 &9\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-5mm}\n\t\\end{table}\n\t\\begin{figure}\n\t\t\\vspace{-3mm}\n\t\t\\centering\n\t\t\\begin{overpic}[width=1\\linewidth]{figs\/Real_Seg_vs_Pictures_Smaller}\n\t\t\t\\put(12,-1){(a)}\n\t\t\t\\put(43,-1){(b)}\n\t\t\t\\put(82.5,-1){(c)}\n\t\t\\end{overpic}\n\t\t\\vspace{-3mm}\n\t\t\\caption{Pictures of objects scanned by the RGB-D camera on the manipulator, next to their segmented meshes; points are located at the tetrahedra's centroids (screwdriver is rotated).}\n\t\t\\label{fig:realsegvspictures}\n\t\\end{figure}\n\tThe reconstruction and part segmentation can also be qualitatively evaluated on point clouds obtained from RGB-D images taken by a Realsense D435 camera at the wrist of the manipulator.\n\tTo complete a shape that is partly occluded by the support table, the point cloud is projected onto the table plane, producing a flat bottom that might not correspond to the true shape of the object. For instance, the left side of the rotated screwdriver point cloud in \\Cref{fig:realsegvspictures} is the result of such a projection.\n\t\n\t\\begin{table}[t]\n\t\t\\centering\n\t\t\\caption{Standard deviations of the zero-mean Gaussian noise added to accelerations and force signals in simulation.}\n\t\t\\label{tab:noise_std_dev}\n\t\t\\begin{tabular}{ccccc}\n\t\t\t\\toprule\n\t\t\tScenario \t\t& Ang. Acc. & Lin. Acc. & Force & Torque\\\\\n\t\t\t\\midrule\n\t\t\tLow Noise \t\t& 0.25 & 0.025 & 0.05 & 0.0025\\\\\n\t\t\tModerate Noise \t& 0.5 & 0.05 & 0.1 & 0.005\\\\\n\t\t\tHigh Noise \t\t& 1\t\t& 0.1 & 0.33 & 0.0067\\\\\n\t\t\t\\bottomrule \n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\\begin{figure}\n\t\t\\vspace{-1mm}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\linewidth]{figs\/Kinematics_Rubber_Mallet_v2}\n\t\t\\caption{Kinematics of the identification trajectory performed with the Rubber Mallet. 
The norm of the velocity (black) goes slightly above the standard maximum speed for a cobot.}\n\t\t\\label{fig:kinematicsrubbermallet}\n\t\t\\vspace{-5mm}\n\t\\end{figure}\n\t\n\tThe proposed identification algorithm is tested in simulation under four noise scenarios where zero-mean Gaussian noise is added to the signals. The standard deviations of the noise values are based on the specifications of the Robotiq FT-300 sensor as described in \\Cref{tab:noise_std_dev}.\n\tThe identification trajectory used to generate the regressor matrix $\\DataMatrix$ in \\Cref{eqn:ObjFunc} has the robot lift the object and successively orient it such that gravity is aligned with each axis of $\\ObjectFrame$, stopping for a moment at each pose and moving relatively quickly between poses, as shown in \\Cref{fig:kinematicsrubbermallet}.\n\tThe average condition number~\\cite{gautier_exciting_1992} of the scaled regressor matrix for all simulated trajectories is 74, 92, and 99 for the low, moderate, and high noise scenarios, respectively.\n\tThese relatively low condition numbers confirm that our identification trajectory is non-degenerate.\n\n\tThe performance of our proposed algorithm (HPS) is compared against the classical algorithm proposed in \\cite{atkeson_estimation_1986} (OLS) and to the more modern algorithm proposed in \\cite{lee_geometric_2019} (GEO). \n\tThe latter was provided with a prior solution consisting of the true mass of the object, and the \\TextCOM and inertia tensor resulting from a homogeneous mass distribution for the object ($\\alpha=10^{-5}$ was used).\n\tHPS only uses measurement data when\n\tboth the linear and angular accelerations are below 1 unit$\/s^2$,\n\tcorresponding to the \\textit{stalled} timesteps of the stop-and-go motion, whereas OLS and GEO use all data available.\n\t\n\tThe accuracy of inertial parameter identification is measured via the Riemannian geodesic distance ${e_{\\text{Rie}} = \\sqrt{\\left(\\frac{1}{2}\\sum\\Vector{\\lambda}\\right)}}$~\\cite{lang_fundamentals_1999}, where $\\Vector{\\lambda}$ is the vector of eigenvalues for $P({}^s\\phi^1)^{-1}P({}^s\\phi^2)$ and $P(\\phi)$ is the \\textit{pseudoinertia} matrix that is required to be symmetric positive-definite (SPD) \\cite{wensing_linear_2017}.\n\tThe metric $e_{\\text{Rie}}$ is the distance between the estimated and ground truth inertial parameters in the space of SPD matrices.\n\tTo assist interpretation of the identification error, we also compute the size-based error metrics proposed in \\cite{nadeau_fast_2022}, which use the bounding box and mass of the object to produce average \\textit{percentage} errors $\\bar{\\Vector{e}}_m$, $\\bar{\\Vector{e}}_C$, and $\\bar{\\Vector{e}}_J$ associated, respectively, with the mass, centre of mass, and inertia tensor estimates.\n\t\\Cref{tab:PerfoComparison} reports the mean of the entries of the error vector $\\bar{\\Vector{e}}$ for each quantity of interest. \n\t\n\t\\begin{table}[t]\n\t\t\\centering\n\t\t\\caption{Comparison between HPS (ours), OLS, and GEO with various levels of noise, indicating the percentage of solutions that were physically consistent (Cons).}\n\t\t\\label{tab:PerfoComparison}\n\t\t\\begin{tabular}{ccccccc}\n\t\t\t\\toprule\n\t\t\tNoise \t& Algo. & Cons. 
(\\%) & $\\bar{\\Vector{e}}_m$(\\%) & $\\bar{\\Vector{e}}_C$(\\%) & $\\bar{\\Vector{e}}_J$(\\%) & $e_{\\text{Rie}}$\\\\\n\t\t\t\\midrule\n\t\t\tNo\t & OLS & 100 & \\textbf{$<$0.1} & \\textbf{$<$0.1} & \\textbf{0.09} & \\textbf{0.03}\\\\\n\t\t\t\t & GEO & 100 & $<$0.1 & 1.35 & 53.22\t & 1.14\\\\\n\t\t\t\t & HPS & 100 & 0.27 & 0.1 & 10.28 & 0.72\\\\\n\t\t\tLow & OLS & 14 & 0.19 & 2.13 & $>$500 & N\/A\\\\\n\t\t\t\t & GEO & 100 & \\textbf{0.18}\t & 1.34 & 52.79 & 1.14\\\\\n\t\t\t\t & HPS & 100 & 0.40 & \\textbf{0.32} & \\textbf{11.12} & \\textbf{0.74}\\\\\n\t\t\tMod. & OLS & 4.5 & 0.36 & 5.33 & $>$500 & N\/A\\\\\n\t\t\t\t & GEO & 100 & \\textbf{0.35} & 1.37 & 51.73 & 1.14\\\\\n\t\t\t\t & HPS & 100 & 0.74 & \\textbf{0.48} & \\textbf{11.81} & \\textbf{0.77}\\\\\n\t\t\tHigh & OLS & 2 & 0.69 & 10.11 & $>$500 & N\/A\\\\\n\t\t\t \t & GEO & 100 & \\textbf{0.64} & 1.58 & 48.49 & 1.13\\\\\n\t\t\t\t & HPS & 100 & 2.79 & \\textbf{1.07} & \\textbf{15.00} & \\textbf{0.87}\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{0mm}\n\t\\end{table}\n\n\t\\subsection{Demonstration in Real Settings}\n\tTo test our proposed method in a realistic setting, we used a uFactory xArm 7 manipulator equipped with a RealSense D435 camera and a Robotiq FT-300 force-torque sensor, as shown in \\Cref{fig:gladyswieldingobject}.\n\tFirst, the hammer in \\Cref{fig:realsegvspictures} was scanned with the camera, producing 127 RGB-D images of the scene in about 30 seconds.\n\tThe object was then picked up at a predetermined grasping pose where the Robotiq 2F-85 gripper could fit into plastic holders attached to the object (enabling a stable grasp of the handle).\n\tA short trajectory that took about 10 seconds to execute was performed while the dynamics of the object were recorded at approximately 100 Hz.\n\tPoint cloud stitching, mesh reconstruction, and part segmentation can be performed concurrently while the robot executes the trajectory, since these operations take 2.87, 0.24, and 2.82 seconds, respectively.\n\tFinally, our proposed method identified the inertial parameters in about 0.5 seconds with MOSEK \\cite{andersen2000mosek}. 
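\n\tFor reference, the listing below is a minimal sketch of how the constrained least-squares problem of \\Cref{eqn:ObjFunc} can be posed with CVXPY (which can delegate to MOSEK when a licence is available); the stacked regressor $\\DataMatrix$ and wrench vector $\\Vector{b}$ are assumed to be given, and all names are illustrative rather than taken from our code.\n\t\\begin{verbatim}\nimport cvxpy as cp\nimport numpy as np\n\ndef identify_part_masses(A, b):\n    # min ||A m - b||_2  subject to  m >= 0  (one mass per homogeneous part)\n    m = cp.Variable(A.shape[1], nonneg=True)\n    cp.Problem(cp.Minimize(cp.norm(A @ m - b, 2))).solve()  # or solver=cp.MOSEK\n    return m.value\n\n# toy example: four stop-and-go poses (24 wrench rows), four candidate parts\nA = np.random.randn(24, 4)\nb = A @ np.array([0.20, 0.45, 0.10, 0.0])\nprint(identify_part_masses(A, b))\n\t\\end{verbatim}\n\tSince the problem is a small second-order cone program, the solve time is negligible compared with data collection.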
\n\tUsing only its estimate of the \\TextCOMMA, the robot autonomously balanced the hammer on a cylindrical target with a radius of 17.5 mm.\n\tThe entire process is shown in our accompanying video, with summary snapshots provided in \\Cref{fig:demo}.\n\tIn contrast, both OLS and GEO returned inaccurate parameter estimates, causing hammer balancing to fail.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\begin{overpic}[width=1\\linewidth]{figs\/Balancing_Hammer_Demo}\n\t\t\t\\put(12,-4){(a)}\n\t\t\t\\put(47,-4){(b)}\n\t\t\t\\put(82.5,-4){(c)}\n\t\t\\end{overpic}\n\t\t\\vspace{0.1mm}\n\t\t\\caption{The hammer is (a) scanned, (b) picked up for inertial identification, and (c) balanced onto the small green target.}\n\t\t\\label{fig:demo}\n\t\t\\vspace{-4mm}\n\t\\end{figure}\n\t\n\t\\section{Discussion}\n\t\\label{sec:Discussion}\n\tThis work exploits object shape information to produce fast and accurate inertial parameter estimates.\n\tThe object reconstructions used by our method to determine object shape are often already computed as components of planning and manipulation algorithms, limiting the computational burden introduced by our approach.\n\tThe decomposition of objects into parts also enables fast inertial parameter identification of bodies that have joints (e.g., hedge scissors), since the parameters can be trivially updated following a change in object configuration.\n\t\n\tWrong or inaccurate part segmentation may lead to erroneous parameter estimates.\n\tHowever, if \\Cref{assum:Homogeneous} holds, \\emph{over}-segmenting an object does not affect the result of the identification as for a given part, \\Cref{eqn:Moments} can be decomposed into\n\t\\begin{equation}\n\t\t\\label{eqn:Oversegmentation}\n\t\t \\int_{V_1} \\Vector{\\Position}^k \\rho_1(\\Vector{\\Position}) dV_1 + \\int_{V_2} \\Vector{\\Position}^k \\rho_2(\\Vector{\\Position}) dV_2 = \\int_{V} \\Vector{\\Position}^k \\rho(\\Vector{\\Position}) dV,\n\t\\end{equation}\n\twhich is true since $V=V_1 \\cup V_2$ and $\\rho_1(\\Vector{\\Position}) = \\rho_2(\\Vector{\\Position})$.\n\tSimilarly, if two conceptually distinct parts with the \\emph{same} mass density are erroneously combined into one, the result of the identification will remain unaffected.\n\tHowever, if parts with \\emph{different} mass densities are considered to be a single part by the algorithm, the identification will fail since \\Cref{eqn:Oversegmentation} does not hold when $\\rho_1(\\Vector{\\Position}) \\neq \\rho_2(\\Vector{\\Position})$.\n\t\n\tThe comparison in \\Cref{tab:PerfoComparison} suggests that OLS outperforms other algorithms in the noiseless scenario, which is expected since it is not biased by any prior information.\n\tHowever, OLS nearly always converges to physically inconsistent solutions and becomes inaccurate in the presence of even a small amount of sensor noise.\n\tThe GEO algorithm performs similarly across noise levels, possibly due to the very good prior solution (i.e., correct mass, homogeneous density) provided, corresponding to the exact solution for objects that have a single part. 
\n\tOn average, HPS outperforms OLS and GEO for the identification of \\TextCOM and $\\InertiaMatrix$ when the signals are noisy.\n\tThe slightly higher $\\bar{\\Vector{e}}_m$ for HPS may be caused by the approximation made when using stalling motions~\\cite{nadeau_fast_2022}.\n\t\n\tExperiments with objects from our dataset do not reveal any obvious trends relating the quality of the shape reconstruction (measured via the Hausdorff distance), the quality of the part segmentation (measured via USE and GCE), and the quality of the inertial parameter identification (measured via $e_{\\text{Rie}}$).\n\tThis can be explained by the fact that an object's mass and shape largely determine the signal-to-noise ratios that any identification algorithm has to deal with.\n\t\n\tAs demonstrated by experiments with objects that are mostly symmetrical (e.g., the screwdriver), if the shape of the object is such that the parts' centroids are coplanar, the optimizer will `lazily' zero out the mass of some parts since they are not required to minimize \\Cref{eqn:ObjFunc}.\n\tAn improved version of HPS could use the hierarchy from HTC to intelligently define segments whose centroids are not coplanar.\n\t\n\t\\section{Conclusion}\n\t\\label{sec:Conclusion}\n\t\n\tIn this paper, we leveraged the observation that man-made objects are often built from a few parts with homogeneous densities. \n\tWe proposed a method to quickly perform part segmentation and inertial parameter identification.\n\tWe ran 80 simulations in which our approach outperformed two benchmark methods, and we demonstrated real-world applicability by autonomously balancing a hammer on a small target. \n\tOn average, our proposed algorithm performs well in noisy conditions and can estimate the full set of inertial parameters from`stop-and-go' trajectories that can be safely executed by collaborative robots.\n\tPromising lines of future work include formulating the optimization problem as a mixed-integer program where mass densities are chosen from a list of known materials, and improving the segmentation algorithm such that parts' centroids are never coplanar.\n\t\n\t\\bibliographystyle{ieeetr}\n\t\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Intro}\n\nAs the field of Artificial Intelligence matures and becomes ubiquitous, there is a growing emergence of systems where people and agents work together. These systems, often called Human-Agent Systems or Human-Agent Cooperatives, have moved from theory to reality in the many forms, including digital personal assistants, recommendation systems, training and tutoring systems, service robots, chat bots, planning systems and self-driving cars \\cite{amir2013plan,azaria2015strategic,barrett2017making,biran2017explanation,XAIP2017,jennings2014human,kleinerman2018providing,langley2017explainable,richardson2008coach,rosenfeld2017intelligent,rosenfeld2012learning,rosenfeld2015learning,salem2015would,sheh2017did,sierhuis2003human,traum2003negotiation,vanlehn2011relative,xiao2007commerce}. One key question surrounding these systems is the type and quality of the information that must be shared between the agents and the human-users during their interactions. \n\nThis paper focuses on one aspect of this human-agent interaction \u2014 the internal level of explainability that agents using machine learning must have regarding the decisions they make. The overall goal of this paper is to provide an extensive study of this issue in Human-Agent Systems. 
Towards this goal, our first step is to formally and clearly define explainability in Section \\ref{sec:definitions}, as well as the concepts of interpretability, transparency, explicitness, and faithfulness that make a system explainable. Through using these definitions, we provide a clear taxonomy regarding the \\emph{Why, Who, What, When, and How} about explainability and stress the relationship of interpretability, transparency, explicitness, and faithfulness to each of these issues. \n\nOverall, we believe that the solutions presented to all of these issues need to be considered in tandem as they are intertwined. The type of explainability needed directly depends on the motivation for the type of human-agent system being implemented and thus directly stems from the first question about the overall reason, or reasons, for why the system must be explainable. Assuming that the system is human-centric, as is the case in recommendation \\cite{kleinerman2018providing,xiao2007commerce}, training \\cite{traum2003negotiation}, and tutoring systems \\cite{amir2013plan,vanlehn2011relative}, then the information will likely need to persuade the person to choose a certain action, for example through arguments about the agent's decision \\cite{rosenfeld2016providing}, its policy \\cite{rosenfeld2017intelligent} or presentation \\cite{azaria2014agent} . If the system is agent-centric, such as in knowledge discovery or self-driving cars, the agent might need to provide information about its decision to help convince the human participant of the correctness of their solution, aiding in the adoption of these agent based technologies \\cite{Ribeiro2016}. In both cases, the information the agent provides should build trust to ensure its decisions are accepted \\cite{abs-1806-00069,Guidotti2018,jennings2014human,lee2004trust}. Furthermore, these explanations might be necessary for legal considerations \\cite{doshi2017towards,vlek2016method}. In all cases we need to consider and then evaluate \\emph{how} these explanations were generated, presented, and if their level of detail correctly matches the system's need, something we address in Section \\ref{How}. \n\nThis paper is structured as follows. First, in Section \\ref{sec:definitions}, we provide definitions for the terms of explainability, interpretability, transparency, fairness, explicitness and faithfulness and discuss the relationship between these terms. Based on these definitions, in Section \\ref{Why} we present a taxonomy of three possibilities for \\emph{why} explainability might be needed, ranging from not helpful, beneficial and critical. In Section \\ref{Who}, we suggest three possible targets for \\emph{who} the explanation is geared for: ``regular users\", ``expert users\", or entities external to the users of the system. In Section \\ref{What}, we address \\emph{what} mechanism is used to create explanations. We consider six possibilities: directly from the machine learning algorithm, using feature selection and analysis, through a tool separate from the learning algorithm to model all definitions, a tool to explain a specific outcome, visualization tools and prototype analysis. In Section \\ref{when} we address \\emph{when} the generated explanations should be presented: before, after and\/or during the task execution. In Section \\ref{How} we introduce a general framework to evaluate explanations. 
Section \\ref{discuss} includes a discussion about the taxonomy presented in the paper, including a table summarizing previous works and how they relate. Section \\ref{conclusion} concludes.\n\n\n\\section{Definitions of Explainable Systems and Related Terms}\n\\label{sec:definitions}\n Several works have focused on the definitions of a system's explainability and also the related definitions of interpretability, transparency, fairness, explicitness and faithfulness. As we demonstrate in this section, of all of these terms, we believe that the objective of making a system explainable is the most central and important for three reasons. Chronologically, this term was introduced first and thus has largest research history. Second, and possibly due to the first factor, this is the most general term. As we explain in this section, a system's level of explainability is created through the interpretations that the agent provides. These interpretable elements can be transparent, fair, explicit, and\/or faithful. Last, and most importantly, this term connotes the key objective for the system: facilitating the human user's understanding of the agent's logic. \n\n\\subsection{Theoretical Foundations for Explainability}\nIt has been noted that a thorough study of the term explanation would need to start with Aristotle as since his time it has been noted that explanations and causal reasoning are intrinsically intertwined \\cite{hoffman2017explaining}. Specific to computer systems, as early as 1982, expert systems such as MYCIN and NEOMYCIN were developed for encoding the logical process within complex systems \\cite{clancey1983epistemology,clancey1982neomycin}. The objective of these systems, as is still the case, was to provide a set of clear explanations for a complex process. However, no clear definitions for the nature of what constituted an explanation was provided.\n\nWork by Gregor and Benbasat in 1999 defined the nature of explainability within ``intelligent\" or ``knowledge-based\" systems as a ``declaration of the meaning of words spoken, actions, motives, etc., with a view to adjusting a misunderstanding or\nreconciling differences\" \\cite{gregor1999explanations}. As they point out in their paper, this definition assumes that the explanation is provided by the provider of the information, in our case the intelligent agent, and that the explanation is geared to resolve some type of misunderstanding or disagreement. This definition is in line with other work that assumed that explanations were needed to help understand a system malfunction, an anomaly or to resolve conflict between the system and the user \\cite{gilbert1989explanation,ortony1987surprisingness,schank1986explanation}. Given this definition, it is not surprising that the first agent explanations were basic reasoning traces that assume the user will understand the technical information provided, without taking a user other than the system designer into account. As these explanations are not typically processed beyond the raw logic of the system, they are referred to as ``na\\\"ive explanations\" by previous work \\cite{sormo2004explanation}. In our opinion, explainability of this type is more appropriate for system debugging than for other uses.\n\n\nPossibly more generally, the Philosophy of Science community also provided several definitions of explainability. 
Most similar to the previous definition, work by Schank \\cite{schank1986explanation} specifies that explanations address anomalies where a person is faced with a situation that does not fit her internalized model of the world. This type of definition can be thought of as goal-based, as the goal of the explanation is to address a specific need (e.g. disharmony within a user's internalized model) \\cite{sormo2004explanation}. Thus, explanations focus on an operational goal of addressing why the system isn't functioning as expected.\n\nA second theory by van Fraassen \\cite{van198511} claims that an explanation is always an answer to an implicit or explicit why-question comparing two or more possibilities. As such, an explanation provides information about why possibility $S_0$ was chosen and not options $S_1 \\dots S_n$ \\cite{sormo2004explanation,van198511}. This definition suggests a minimum criteria any explanation must fulfill, namely that it facilitates a user choosing a specific option $S_0$, as well as a framework for understanding explanations as answers to why-questions contrasting two or more states \\cite{sormo2004explanation}. One limitation of this approach is that the provided explanation has no use beyond helping the user understand why possibility $S_0$ was preferable relative to other possibilities. \n\nMost generally, a third theory by Achinstein \\cite{achinstein1983nature} focuses on explanations as a process of communication between people. Here, the goal of an explanation is to provide the knowledge a recipient requests from a designated sender. Accordingly, this theory does not necessarily require a complete explanation if the system's user does not require it. Consider a previously described example \\cite{sormo2005explanation} that a neural network is trained to compare two pictures of a certain type and can give a similarity measure, e.g. from 0 to 1, and most people cannot understand how it came up with this score. Presenting the pictures to the user so she can validate the similarity for herself can itself serve as an explanation. As the very definition of a proper explanation is dependent on the interaction between the sender and the receiver, such an explanation is sufficient. Similarly, explanations can be motivated by many situations and not exclusively van Fraassen's why-questions. Conversely, a proper definition can and should be limited only to the information needed to address the receiver's request. \n\n\\subsection{The Need for Precisely Defining Explainability in Human-Agent Systems}\nRecently, questions have arose as to the definition of explainability of machine learning and agent systems. An explosive growth of interest has been registered within various research communities as is evident by workshops on: Explanation-aware Computing (ExaCt), Fairness, Accountability, and Transparency (FAT-ML), Workshop on Human Interpretability in Machine Learning (WHI), Interpretable ML for Complex Systems, Workshop on Explainable AI, Human-Centred Machine Learning, and Explainable Smart Systems \\cite{CHI2018}. However, no consensus exists about the meaning of various terms related to explainability including interpretability, transparency, explicitness, and faithfulness. It has been pointed out that the Oxford English dictionary does not have a definition for the term ``explainable\" \\cite{DoranSB17}. 
One definition for an explanation that has been suggested as a, ``statement or account that makes something clear; a reason or justification given for an action or belief\" is not always true for systems that claim to be explainable \\cite{DoranSB17}. Thus, providing an accepted and unified definition of explainability and other related terms is of great importance.\n\nPart of the confusion is likely complicated by the fact that the terms, ``explainability, interpretability and transparency\" are often used synonymously while other researchers implicitly define these terms differently \\cite{DoranSB17,doshi2017towards,abs-1806-00069,Guidotti2018,Lipton16a,samek2017explainable}. Artificial intelligence researchers tend to use the term Explainable AI (XAI) \\cite{CHI2018,gunning2017explainable}, and focus on how explainable an artificial intelligence (XAI) system is without necessarily directly addressing the machine learning algorithms. For example, work on explainable planning, which they coin XAIP, takes a system view of planning without considering any machine learning algorithms. They distance themselves from machine learning and deep learning systems which they claim are still far from being explainable \\cite{XAIP2017}. \n\nIn contrast, the machine learning community often focuses on the ``interpretability\" of a machine learning system by focusing on how a machine learning algorithm makes its decisions and how interpretations can be derived either directly or secondarily from the machine learning component \\cite{doshi2017towards,letham2015interpretable,rudin2014algorithms,vellido2012making,wang2017bayesian}. However, this term is equally poorly defined. In fact, one paper has gone so far as to recently write that, ``at present, interpretability has no formal technical meaning\" and that, ``the term interpretability holds no agreed upon meaning, and yet machine\nlearning conferences frequently publish papers which wield the term in a quasimathematical way\" \\cite{Lipton16a}. In these papers, there is no syntactical technical difference between interpretable and explainable systems, as both terms refer to aspects of providing information to a human actor about the agent's decision-making process. Previous work generally defined interpretability as the ability to explain or present the decisions of a machine learning system using understandable terms \\cite{doshi2017towards}. More technically, Montanavon et al. propose that ``an interpretation is the mapping of an abstract concept (e.g. a predicted class) into a domain that the human can make sense of\" which in turn forms explanations \\cite{post}. Similarly, Doran et al. define interpretability as ``a system where a user cannot only see, but also study and understand how inputs are mathematically mapped to outputs.\" To them, the opposite of interpretable systems are ``opaque\" or ``black box\" systems which yield no insight about the mapping between a decision and the inputs that yielded that decision \\cite{DoranSB17}. \n\nWithin the Machine Learning \/ Agent community, transparency has been informally defined to be the opposite of opacity or ``blackbox-ness\" \\cite{Lipton16a}. In order to clarify the difference between interpretability and transparency, we build upon the definition of transparency as an explanation about how the system reached its conclusion \\cite{sormo2005explanation}. 
More formally, transparency has been defined as a decision model where the decision-making process can be directly understood without any additional information \\cite{Guidotti2018}. It is generally accepted that certain decision models are inherently transparent and others are not. For example, decision trees, and especially relatively small decision trees, are transparent, while deep neural networks cannot be understood without the aid of a explanation tool outside that of the decision process \\cite{Guidotti2018}. We consider this difference in the next section and again in Section \\ref{What}.\n\n\\label{inter_subsection}\n\n\n\n\n\\subsection{Formal Definitions for Explainability, Interpretability and Transparency in Human-Agent Systems}\n\\label{definitions}\nThis paper's first contribution is a clear definition for explainability and for the related terms: interpretability and transparency. In defining these terms we also define how explicitness and faithfulness are used within the context of Human-Agent Systems. A summary of these definitions is found in Table \\ref{table2}. \n\nIn defining these terms, we focus on the features and records that are used as training input in the system, the supervised targets that need to be identified, and the machine learning algorithm used by the agent. We define $L$ as the machine learning algorithm that is created from a set of training records, $R$.\nEach record $r \\in R$ contains values for a tuple of ordered features, $F$.\nEach feature is defined as $f \\in F$. Thus, the entire training set consists of $R \\times F$. For example: Assume that the features are: $f1 = age$ (years), $f2 = height$ (cm), $f3 = weight$ (kg), so $F = \\{age, height, weight\\}$. A possible record $r \\in R$ might be $r=\\{35,160,70\\}$. While this model naturally lends itself to tabular data, it can as easily be applied to other forms of input such as texts, whereby $f$ are strings, or images whereby $f$ are pixels. The objective of $L$ is to properly fit $R \\times F$ with regard to the labeled targets $t \\in T$. \n\n\n\\begin{table}[]\n\\begin{tabular}{|l|c|l|}\n\\hline\nTerm & \\begin{tabular}[c]{@{}c@{}}Notation\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Short Description\\end{tabular} \\\\ \\hline\nFeature & $F$ & \\begin{tabular}[c]{@{}l@{}}One field within the input.\\end{tabular} \\\\ \\hline\nRecord & $R$ & \\begin{tabular}[c]{@{}l@{}}A collection of one item of information (e.g. picture, row in datasheet).\\end{tabular} \\\\ \\hline\nTarget & $T$ & \\begin{tabular}[c]{@{}l@{}}The labelled category to be learned. 
Can be categorical or numeric.\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}l@{}}Algorithm\\end{tabular} & $L$ & \\begin{tabular}[c]{@{}l@{}}The algorithm used to predict the value of $T$ from the collection of data \\\\(all features and records).\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}l@{}}Interpretation\\end{tabular} & $\\varmathbb I$ & \\begin{tabular}[c]{@{}l@{}}A function that takes as its input $F,R,T,$ and $L$ \\\\and returns a representation of $L$'s logic.\\end{tabular} \\\\ \\hline\nExplanation & \\multicolumn{1}{c|}{$\\varmathbb E$} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}} The human-centric objective for the user to understand $L$ using $\\varmathbb I$.\\end{tabular}} \\\\ \\hline\nExplicitness & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}The extent to which $\\varmathbb I$ is understandable to the intended user.\\end{tabular}} \\\\ \\hline\nFairness & & \\begin{tabular}[c]{@{}l@{}}The lack of bias in $L$ for a field of importance (e.g. gender, age, ethnicity).\\end{tabular} \\\\ \\hline\nFaithfulness & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}The extent to which the logic within $\\varmathbb I$ is similar to that of $L$.\\end{tabular}} \\\\ \\hline\nJustification & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}Why the user should accept $L$'s decision. \\\\Not necessarily faithful as no connection assumed between $L$ and $\\varmathbb I$.\\end{tabular}} \\\\ \\hline\nTransparency & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}The connection between $\\varmathbb I$ and $L$ is both explicit and faithful.\\end{tabular}} \\\\ \\hline\n\\end{tabular}\n\\caption{Notation and short definition of key concepts of explainability, interpretability, transparency, fairness, and explicitness in this paper. Concepts of features, records, targets and machine learning algorithms and explanations are also included as they define the key concepts.}\n\\label{table2}\n\\end{table}\n\nWe define explainability as the ability for the human user to understand the agent's logic. This definition is consistent with several papers that considered the difference between explainability and interpretability within Human-Agent Systems. For example, Doran et al. define explainable systems as those that explains the decision-making process of a model using reasoning about the most human-understandable features of the input data \\cite{DoranSB17}. Following their logic, interpretability and transparency can help form explanations, but are only part of the process. Guidotti et al. state that ``an interpretable model is required to provide an explanation\" \\cite{Guidotti2018}, thus an explanation is obtained by the means of an interpretable model. \nSimilarly, Montanavon et al., define explanations as ``a collection of features of the interpretable domain, that have contributed for a given example to produce a decision\" \\cite{post}. \n\nThus, the objective of any system is explainability, meaning it has an explanation $\\varmathbb E$, which is the human-centric aim to understand $L$. An explanation is derived based on the human user's understanding about the connection between $T$ and $R \\times F$. The user will create $\\varmathbb E$ based on her understanding of an interpretation function, $\\varmathbb I$ that takes as its inputs $L$, $R \\times F$ and $T$ and returns a representation of the logic within $L$ that can be understood. 
Consequently, in this paper we refer to explainability of systems as the understanding the human user has achieved from the explanation and do not use this term interchangeably with ``interpretability\" and ``transparency\". We reserve use of terms ``interpretability\" and ``transparency\" as descriptions of the agent's logic. Specifically, we define $\\varmathbb E$ as:\n\\begin{equation} \n\\varmathbb E = \\varmathbb I(L(R \\times F,T))\n\\label{equ:explain}\n\\end{equation}\n\nWe claim that the connection between $\\varmathbb I$ and $L$, $R$, $F$ and $T$ will also determine the type of explanation that is generated. A globally explainable model provides an explanation for all outcomes within $T$ taking into consideration $R \\times F$, thus using all information in: ${L,R,F,T}$. A locally explainable model provides explanations for a specific outcome, $t \\in T$ (and by extension for specific records $r \\in R$), using ${L,r,F,t}$ as input. \n\nWe use three additional terms: explicitness, faithfulness and justification to quantify the relationship of $\\varmathbb I$ to $\\varmathbb E$ and $L$ respectively. Following recent work \\cite{abs-1806-07538}, we refer to \\textit{explicitness} as the level to which the output of $\\varmathbb I$ is immediate and understandable. As we further explore in the next section, the level of explicitness depends on \\emph{who} the target of the explanation is and what is the level of her expertise at understanding $\\varmathbb I$. It is likely that two users will obtain different values for $\\varmathbb E$ even given the same value for $\\varmathbb I$, making quantifying $\\varmathbb I$'s explicitness difficult due to this level of subjectivity. We define \\textit{faithfulness}, also previously defined as \\textit{fidelity} \\cite{4938655,Ribeiro2016}, as the degree to which the logic within $\\varmathbb I$ is similar to that of $L$. Especially within less faithful models, a concept of \\textit{completeness} was recently suggested to refer to the ability of $\\varmathbb I$ to provide an accurate description for all possible actions of $L$ \\cite{abs-1806-00069}. Given the similarity of these terms, we only use the term faithful due to its general connotation. Justification was previously defined as an explanation about why a decision is correct without any information about the logic about how it was made \\cite{biran2017explanation}. According to this definition, justifications can be generated even within non-interpretable systems. Consequently, justification requires no connection between $\\varmathbb I$ and $L$ and no faithfulness. Instead, justification methods are likely to provide implicit or explicit arguments about the correctness of the agent's decision, such as through persuasive argumentation \\cite{yetim2008framework}.\n\nIn order for a model to be transparent, two elements are needed: the decision-making model must be readily understood by the user, and that explanation must map directly to how the decision is made. More precisely, a transparent explanation is one where the connection between $\\varmathbb E$, $\\varmathbb I$ and $L$ is explicit and faithful as the logic within $\\varmathbb I$ is readily understandable and identical to $L$, e.g. $\\varmathbb I \\simeq L$. When a tool or model is used to provide information about the decision-making process secondary to $L$, the system contains elements of interpretability, but not transparency. 
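\n\nTo ground these definitions, the following minimal sketch (using scikit-learn; all names are illustrative) shows a case where the interpretation function $\\varmathbb I$ of Equation \\ref{equ:explain} is simply a readable rendering of $L$ itself: because a shallow decision tree is transparent, $\\varmathbb I \\simeq L$ and the explanation formed from it is both faithful and explicit.\n\\begin{verbatim}\nfrom sklearn.datasets import load_iris\nfrom sklearn.tree import DecisionTreeClassifier, export_text\n\ndata = load_iris()                     # records R x features F, targets T\nL = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)\n\ndef interpretation(model, feature_names):\n    # I(L(R x F, T)): a human-readable representation of L's decision logic\n    return export_text(model, feature_names=feature_names)\n\nE = interpretation(L, data.feature_names)  # the user forms E by reading I\nprint(E)\n\\end{verbatim}\nReplacing the tree with a deep network would leave $L$ opaque: $\\varmathbb I$ would then have to be produced by a tool separate from the decision process (for example a surrogate model), yielding interpretability without transparency.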
\n\nSection \\ref{What} discusses the different types of interpretations that can be generated, including transparent ones. Non-transparent interpretations will lack faithfulness, explicitness, or both. Examples include tools to create model and outcome interpretations, feature analysis, visualization methods and prototype analysis. Each of these methods will focus on different parameters within the input, ${R,F,T}$ and their relationship to $L$. Model and outcome interpretation tools create $\\varmathbb I$ without a direct connection to the logic in $L$. Feature Analysis is a method of providing interpretations via analyzing a subset of features $f \\in F$. Prototype selection is a method of providing interpretations via analyzing a subset of records $r \\in R$. Visualization tools are used to understand the connection between $L$ and $T$ and thus $\\varmathbb I$ takes this interpretable form. \n\nTo help visualize the relationship between explainability, interpretability and transparency, please note Figure \\ref{Figure1}. Note that interpretability includes six methods, including transparent models, and also the non-transparent possibilities of model and outcome tools, feature analysis, visualization methods, and prototype analysis. In the figure, interpretability points to the objective of explainability to signify that interpretability is a means for providing explainability, as per these terms' definitions in Table \\ref{table2}. Note the overlaps within the figure. Feature analysis can serve as a basis for creating transparent models, on its own as a method of interpretability, or as a interpretable component within model, outcome and visualization tools. Similarly, visualization tools can help explain the entire model as a global solution or as a localized interpretable element for specific outcomes of $t \\in T$. Prototype analysis uses $R$ as the basis for interpretability, and not $F$, and can be used for visualization and\/or outcome analysis of $r \\in R$. We explore these points further in Section \\ref{What}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=4.5in]{fig1v2.png}\n\\caption{A Venn Diagram of the relationship between Explainability, Interpretability and Transparency. Notice the centrality of Feature Analysis to 4 of the 5 interpretable elements.}\n\\label{Figure1} \\label{fig::example0}\n\\end{figure}\n\nThe level of interpretability and transparency needed within an explanation will be connected to either hard or soft-constraints defined by the user's requirements. At times, there may be a hard-constraint based on a legal requirement for transparency, or a soft-constraint that transparency exist in cases where one suspects that the agent made a mistake or does not understand why the agent chose one possibility over others \\cite{gregor1999explanations,Guidotti2018,schank1986explanation,van198511,vlek2016method}. Explainability can be important for other reasons, including building trust between the user and system even when mistakes were not made \\cite{chen2014situation}-- something we now explore\n\n\\section{\\emph{Why} a Human-Agent System should be Explainable? }\\label{Why}\n\nWe believe that the single most important question one must ask about this topic is \\emph{Why} we need an explanation, and how important it is for the user to understand the agent's logic. In answering this question one must establish whether a system truly needs to be explainable. 
We posit that one can generalize the need for explainability with a taxonomy of three levels: \n\\begin{enumerate}\n\\item Not helpful\n\\item Beneficial\n\\item Critical\n\\end{enumerate}\n\nAdjustable autonomy is a well-established concept within human-agent and human-robot groups that refers to the amount of control an agent\/robot has compared to the human-user \\cite{goodrich2001experiments,scerri2001adjustable,yanco2004classifying}. Under this approach, the need for explainability can be viewed as a function of the degree of cooperation between the agent to the human user. Assuming the agent is fully controlled by the human operator (e.g. teleoperated), then no explainability is needed as the agent is fully an extension of the human participant. Conversely, if the robot is given full control, particularly if the reason for the decision is obvious (a recommendation agent gives advice based on a well-established collaborative filtering algorithm), it again serves to reason that no explainability is needed. Additionally, Doshi-Velez and Kim pointed out that an explanation at times is not needed if there are no significant consequences for unacceptable results or the agent's decision is universally accepted and trusted \\cite{doshi2017towards}.\n\nAt the other extreme, many Human-Agent Systems are built whereby the agent's role is to support a human's task. In many of these cases, we argue that the agent's explanation is a critical element within the system. The need for an agent to be transparent or to explicitly and faithfully explain its actions is tied directly to task execution. For example, Intelligent Tutoring Systems (ITS) typically use step-based granularities of interaction whereby the agent confirms one skill has been learned or uses hints to guide the human participant \\cite{vanlehn2011relative}. The system must provide concrete explanations for its guidance (called \\textit{hints} in ITS terminology) to better guide the user. Similarly, explanations form a critical component of many negotiation, training, and argumentation systems \\cite{rahwan2003towards,rosenfeld2016providing,rosenfeld2016negochat,sierhuis2003human,traum2003negotiation}. For example, effective explanations might be critical to aid a person in making the final life-or-death decision within Human-Agent Systems \\cite{sierhuis2003human}. Rosenfeld's et al.'s NegoChat-A negotiation agent uses arguments to present the logic behind its position \\cite{rosenfeld2016negochat}. Traum et al. explained the justification within choices of their training agent to better convince the trainee, as well as to teach the factors to\nlook at in making decisions \\cite{traum2003negotiation}. Rosenfeld and Kraus created agents that use argumentation to better persuade people to engage in positive behaviors, such as choosing healthier foods to eat \\cite{rosenfeld2016providing}. Azaria et al. demonstrate how an agent that learns the best presentation method for proposals given to a user improves their acceptance rate \\cite{azaria2014agent}. Many of these systems can be generally described as Decision Support Systems (DSS). A DSS is typically defined as helping people make semi-structured decisions requiring some human judgment and at the same time with some agreement on the solution method \\cite{Adam2008}. 
An agent's effective explanation is critical within a DSS as the system's goal is providing the information to help facilitate improved user decisions.\n\nA middle category in our taxonomy exists when an explanation is beneficial, but not critical. The Merrian-Webster dictionary defines beneficial as something that ``produces good or helpful results\"\\footnote{https:\/\/www.merriam-webster.com\/dictionary\/benefit}. In general, the defining characteristic of explanations within this category is that they are not needed in order for the system to behave optimally or with peak efficiency. \n\nTo date, many reasons have been suggested for making systems explainable \\cite{CHI2018,DoranSB17,doshi2017towards,gregor1999explanations,Guidotti2018,Lipton16a,sormo2005explanation}:\n\n\\begin{enumerate}\n\\item To justify its decisions so the human participant can decide to accept them (provide control)\n\\item To explain the agent's choices to guarantee safety concerns are met\n\\item To build trust in the agent's choices, especially if a mistake is suspected or the human operator does not have experience with the system\n\\item To explain the agent's choices to ensure fair, ethical, and\/or legal decisions are made\n\\item Knowledge \/ scientific discovery \n\\item To explain the agent's choices to better evaluate or debug the system in previously unconsidered situations\n\\end{enumerate}\n\nThe importance of these types of explanations will likely vary greatly across systems. If the user will not accept the system without this explanation, then a critical need for explainability exists. This can particularly be the case in Human-Agent Systems where the agent supports a life-or-death task, such as search and rescue or medical diagnostic systems, where ultimately the person is tasked with the final decisions \\cite{jennings2014human}. In these types of tasks the explanation is critical to facilitate a person's decision whether to accept the agent's suggestion and\/or to allow that person to decide if safety concerns are met, such as a patient's health or that of a person at-risk in a rescue situation. In other situations, explanations are beneficial for the overall function of the human-agent system, but are not critical. \n\nOne key and common example where explanations can range in significance from critical to beneficial are situations where explanations help instill trust. Previous work on trust, within people in a work situation, identified two types of trust that develop over time, ``knowledge-based\" and ``identification-based\" \\cite{inbook}. Of the two types of trust, they claim that knowledge-based trust requires less time, interactions and information to develop as it is grounded primarily in the other party's predictability. Identification-based trust requires a mutual understanding about the other's desires and intention and requires more information, interactions and time to develop. \n\nWe posit that previous work has focused on elements of this trust model in identifying what types of explanations are necessary to foster this type of trust within Human-Agent Systems. Following our previous definitions of interpretability and transparency, it seems that the former type of interpretable elements may be sufficient for knowledge-based definitions of trust, while transparent elements are required for identification-based models. 
When a person has not yet developed enough positive experience with the agent she interacts with, both knowledge-based and identification based trust are missing. As it has been previously noted that people are slow to adopt systems that they do not understand and trust and ``if the users do not trust a model or a prediction they will not use it.\" \\cite{Ribeiro2016}, even providing a non-transparent interpretable explanation will likely help instill confidence about the system's predictability, thus facilitating the user's knowledge-based trust in the system. Ribeiro et al. demonstrate how interpretability of this type is important for identifying models that have high accuracy for the wrong reasons \\cite{Ribeiro2016}. For example, they show that text classification often are wrongly based on the heading rather than the content. In contrast, image classifiers that capture the main part of the image in a similar manner to the human eye, install a feeling that the model is functioning correctly even if accuracy is not particularly high. \n\nHowever, it has been claimed that when the person suspects the agent has made a mistake and\/or is unreliable then the agent should act with transparency, and not merely be interpretable, as explanations generated from transparent methods will aid the user to trust the agent in the future \\cite{XAIP2017}. In extreme cases, if the user completely disregards the agent, then the human-agent system breaks down, making transparent explanations critical to help restore trust. Furthermore, explanations based on $L$'s transparency may be needed to help facilitate the higher level of identification-based trust. Only transparent interpretations directly link $L$ and $\\varmathbb I$ thus providing full information about the agent's intention. We suggest that designers of systems that require this higher level of trust, such as health-care \\cite{crockett2016data}, recommender systems \\cite{knijnenburg2012explaining}, planning \\cite{XAIP2017} and human-robot rescue systems \\cite{rosenfeld2017intelligent,salem2015would} should be transparent, and not merely interpretable. \n\nOther types of explanations are geared towards people beyond the immediate users of the system. Examples of these types of explanations include those designed for legal and policy experts to confirm that the decisions \/ actions of agent fulfill legal requirements such as being fair and ethical \\cite{DoranSB17,doshi2017towards,dwork2012fairness,Garfinkel2017,Guidotti2018}. Both the EU and UK governments have adopted guidelines requiring agent designers to provide users information about agents' decisions. In the words of the EU's ``General Data Protection Regulation\" (GDPR), users are legally entitled to obtain ``meaningful explanation of the logic involved\" of these decisions. Additional legislation exists to ensure that agents are not biased against any ethnic or gender groups \\cite{doshi2017towards,Guidotti2018} such that they demonstrate fairness \\cite{dwork2012fairness}. Similarly, the ACM has published guidelines for algorithmic accountability and transparency \\cite{Garfinkel2017}. \nThe system's explanation is not here critical for effective performance of the agent, but instead to confirm that a secondary legal requirement is being met. 

Explanations geared beyond the immediate user can also be directed at researchers to help facilitate scientific knowledge discovery \cite{doshi2017towards,Guidotti2018} or at system designers to evaluate or test a system \cite{doshi2017towards,sormo2004explanation,sormo2005explanation}. For example, a medical diagnostic system may work with peak efficiency exclusively as a black box, and users may be willing to rely on this black box as the agent is trusted due to an exemplary historical record. Nonetheless, explanations can still be helpful for knowledge discovery, helping researchers gain understanding of various medical phenomena. Explainability has also been suggested as being necessary for properly evaluating a system or for the agent's designer to confirm that the system is functioning properly, even within situations that were not considered when the agent was built. For example, Doshi-Velez and Kim claimed that due to the inherent inability to quantify all possible situations, it is impossible for a system designer to evaluate an agent in all possible situations \cite{doshi2017towards}. Explanations can be useful in these situations to help make evident any possible gaps between an agent's formulation and implementation and its performance. In all cases, the explanation is not geared to the end-user of the system, but rather to an expert user who requires the explanation for a reason beyond the day-to-day operation of the system.

As we have shown in this section, the question of explainability can be divided according to its necessity, e.g. not necessary, beneficial or critical, which is directly connected to the objective of that explanation. From a user perspective, the primary objective of the explanation is related to factors that help her use the system, and particularly elements that help foster trust. In these cases, a system may need to be transparent, even if this level of explanation entails a sacrifice of the system's performance. We further explore this possibility and relationship in Section \ref{What}. At times, explanations are needed or beneficial for entities beyond the typical end-user, such as the designer, researcher or legal expert. As the objective of explanations of this type is different, it stands to reason that the type of explanation may be fundamentally different depending on \emph{who} the target of this information is, something we address in the next section. This in turn may impact the type of interpretation the agent must present, something we explore in Section \ref{What}.

\section{\emph{Who} is the Target of the Explanation?}
\label{Who}
The type of interpretable element needed to form the basis of the explanation is highly dependent on the question of \emph{who} the explanation is for. We suggest three possibilities:
\begin{enumerate}
\item Regular user
\item Expert user
\item External entity
\end{enumerate}

The level of explanation detail needed depends on \emph{why} the Human-Agent System needs the user to understand the agent's logic (Section \ref{Why}) and how the explanation has been generated (Section \ref{What}). If the need for explanation is for legal purposes, then it follows that legal experts need the explanation, and not the regular user. Similarly, it stands to reason that the type of explanation that is given should be directed specifically to this population.
If the purpose of the explanation is to support experts' knowledge discovery, then it stands to reason that the explanation should be directed towards researchers with knowledge of a specific problem. In these cases, the system might not even need to present its explanations to regular users and may thus focus only on presenting information to these experts. Most systems will still likely benefit from directing explanations to regular users to help them better understand the system's decisions, thus aiding in their acceptance and/or trust. In these cases, the system should be focused on providing justifications in addition to providing the logic behind its decisions, through arguments \cite{rosenfeld2016providing,yetim2008framework} and/or through Case Based Reasoning \cite{corchado2003constructing,kim2014bayesian,kwon2004applying} that help reassure the user about the correctness of the agent's decision.

The same explanation might be considered extremely helpful by a system developer, but useless by a regular user. Thus, the expertise level of the target will play a large part in defining an explanation and how explicit $\varmathbb I$ is. Deciding on how to generate and present $\varmathbb I$ will be covered in later sections.

Similarly, what level of detail constitutes an adequate explanation likely depends on precisely how long the user will study the explanation. If the goal is knowledge discovery and/or complying with legal requirements, then an expert will likely need to spend large amounts of time meticulously studying the inner workings of the decision-making process. In these cases, it seems likely that great amounts of detail regarding the justification of the explanations will be necessary. If a regular user is assumed, and the goal is to build user trust and understanding, then shorter, very directed explanations are likely more beneficial. This touches upon a larger concern that additional information may overload a given user \cite{shrot2014crisp}.

At times, the recipient of the explanation is not the user directly interacting with the system. This is true in cases where explanations are mandated by an external regulative entity, such as is proposed by the EU's GDPR regulation. In this case, the system must follow explanation guidelines provided by the external entity. In contrast, developers providing explanations to users will typically follow different guidelines, such as usability studies. As these two types of explanations are not exclusive, it is possible that the agent will generate multiple types of explanations for the different targets (e.g. the user and the regulatory entity). In certain types of systems, such as security systems, multiple potential targets of the explanation also exist. Vigano and Magazzeni explain that security systems have many possible targets, such as the designer, the attacker, and the analyst \cite{vigano2018explainable}. Obviously, an explanation provided for the designer can be very dangerous in the hands of an attacker. Thus, aside from the question of how ``helpful'' an explanation is for a certain type of user, one must consider the implications of providing an unsuitable explanation. In these cases, the explanation must be provided for a given user while also considering the implications for the system's security goals.
\n\n\\section{\\emph{What} Interpretation can be Generated?}\\label{What}\nOnce we have established the \\emph{why} and \\emph{who} about explanations, a key related question one must address is \\emph{what} interpretation can be generated as the basis for the required explanation. Different users will need different types of explanations, and the interpretations required for effective explanations will differ accordingly \\cite{vigano2018explainable}. We posit that six basic approaches exist as to how interpretations can be generated:\n\\begin{enumerate}\n\\item Directly from a transparent machine learning algorithm \n\\item Feature selection and\/or analysis of the inputs\n\\item Using an algorithm to create a post-hoc model tool\n\\item Using an algorithm to create a post-hoc outcome tool\n\\item Using an interpretation algorithm to create a post-hoc visualization of the agent's logic\n\\item Using an interpretation algorithm to provide post-hoc support for the agent's logic via prototypes\n\\end{enumerate}\n\nIn Figure \\ref{fig::Explicit-Faithful} we describe how these various methods for generating interpretations have different degrees of faithfulness and explicitness. Each of these methods contains some level of trade-off between their explicitness and faithfulness. For example, as described in Section \\ref{definitions}, transparent models are inherently more explicit and faithful than other possibilities. Nonetheless, we present this figure only as a guideline, as many implementations and possibilities exists within each of these six basic approaches. These differences will impact the levels of both faithfulness and explicitness, something we indicate via the arrows pointing to both higher levels of faithfulness and explicitness for a specific implementation. \n\\begin{figure}\n\\label{Figure2}\n\\centering\n\\includegraphics[width=3.3in]{Explicit-Faithful.pdf}\n\\caption{Faithfulness versus explicitness within the six basic approaches for generating interpretations}\n\\label{fig::Explicit-Faithful}\n\\end{figure}\n\n\\subsection{Generating Transparent Interpretations Directly from Machine Learning Algorithms}\n\\label{What-transparent}\nThe first approach, and the most explicit and faithful method, is to generate $\\varmathbb I$ directly from the output of the machine learning algorithm, $L$. These types of interpretations can be considered ante-hoc, or ``before this\" (e.g. an explanation is needed), as the this type of connection between $\\varmathbb I$ and $L$ facilitates providing interpretations at any point, including as the task is being performed \\cite{ante-2018,holzinger2017we}.\nThese transparent algorithms, often called white box algorithms, include decision trees, rule-based methods, k-nn (k-nearest neighbor), Bayesian and logistic regression \\cite{dreiseitl2002logistic}. As per our definitions in Section \\ref{sec:definitions}, these algorithms have not been designed for generating interpretations, but can be readily derived from the understandable logic inherent in the algorithms. As we explain in this section, all of these algorithms are faithful, and are explicit to varying degrees. A clear downside to these approaches is that one is then limited to these machine learning algorithms, and\/or a specific algorithmic implementation. It has been previously noted that an inverse relationship often exists between machine learning algorithms' accuracy and their explainability \\cite{Guidotti2018,gunning2017explainable}. 
Black box algorithms, especially deep neural networks but also other less explainable algorithms such as ensemble methods and support vector machines, are often used due to their exceptional accuracy on some problems. However, it is difficult to glean explicit interpretations from these types of algorithms, and they are typically not transparent \cite{dreiseitl2002logistic}. Figure \ref{fig::Explicit-Predict} is based on previous work \cite{Explain2018,gunning2017explainable} and depicts the general relationship between algorithms' explicitness and accuracy. This figure describes the relationships as they stand at the time of writing, and may change as algorithmic solutions develop and evolve. Additionally, this figure may be somewhat over-simplified, as we now describe.

\begin{figure}
\centering
\includegraphics[width=3.3in]{Explicit-Predict.pdf}
\caption{Typical trade-off between prediction accuracy and explicitness}
\label{Figure3}
\label{fig::Explicit-Predict}
\end{figure}

Decision trees are often cited as being the most understandable (e.g. explicit) \cite{Explain2018,doshi2017towards,Freitas2014,gunning2017explainable,quinlan1986induction}. The hierarchical structure inherent in decision trees lends itself to understanding which attributes are most important, of second-most importance, etc. \cite{Freitas2014}. Furthermore, assuming the size of the tree is relatively small due to Occam's Razor \cite{murphy1993exploring}, the if-then rules that can be derived directly from decision trees are both particularly explicit and faithful \cite{Freitas2014,RosenfeldSGBHL14}.

However, in practice not all decision trees are easily understood. Large decision trees with hundreds of nodes and leaves are often more accurate than smaller ones, despite the assumption inherent within Occam's Razor \cite{murphy1993exploring}. Such trees are less explicit, especially if they contain many attributes and/or multiple instances of nodes using the same attribute for different conditions. If the decision tree is too large to fully understand (e.g. thousands of rules) \cite{hara2016making} and/or overfitted due to noise in the training data \cite{Freitas2014}, it will lose its explicitness. One approach to address this issue is suggested by Last and Maimon \cite{last2004compact}, who reason about the added value of additional attributes versus the complexity they introduce, facilitating more explicit models.

Classification rules \cite{clark1989cn2,michalski1999learning} have also been suggested as a highly explicit machine learning model \cite{Explain2018,Freitas2014,gunning2017explainable}. As is the case with decision trees, the if-then rules within such models provide faithful interpretations and are potentially explicit. The flat, non-hierarchical structure of such models allows the user to focus on individual rules separately, which at times has been shown to be advantageous \cite{clancey1983epistemology,Freitas2014}. However, in contrast to decision trees, this structure does not inherently give a person insight into the relative importance of the rules within the system. Furthermore, conflicts between rules need to be handled, often through an ordered rule-list, adding to the model's complexity and reducing its level of explicitness.
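
To make the explicitness of these if-then representations concrete, the following minimal sketch (using scikit-learn on a standard benchmark dataset purely for illustration) fits a deliberately shallow decision tree and prints its learned logic as nested if-then conditions; the depth limit is what keeps the resulting rule set small enough to remain explicit:

\begin{verbatim}
# A minimal, illustrative sketch: a deliberately shallow decision tree
# whose learned logic can be printed directly as if-then rules.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
# Limiting the depth keeps the number of extracted rules small and explicit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the tree as nested if-then conditions over the features.
print(export_text(tree, feature_names=list(data.feature_names)))
\end{verbatim}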

Nearest neighbor algorithms, such as k-nn, can potentially be transparent machine learning models, as they can provide interpretations based on the similarity between an item needing interpretation and other similar items. This is reminiscent of the picture classification example in Section \ref{sec:definitions}, as the person is actually performing an analysis similar to k-nn in understanding why certain pictures are similar. This process is also similar to the logic within certain Case Based Reasoning algorithms, which often also use logic akin to k-nn algorithms to provide an interpretation for why two items are similar \cite{sormo2005explanation}. However, as has been previously pointed out, these interpretations are typically only explicit if $k$ is kept small, e.g. $k=1$ or close to 1 \cite{sormo2005explanation}. Furthermore, k-nn is a lazy model that classifies each new instance separately. As such, every instance could potentially have a different ``interpretation'', making this a local interpretation. In contrast, both decision trees and rule-based systems construct general rules that are applied across all instances \cite{Freitas2014}. In addition, if the number of attributes in the dataset is very large, it might be difficult for a person to appreciate the similarities and differences between different instances, again reducing the explicitness of the model.

Bayesian network classifiers have also been suggested as another transparent machine learning model. Knowing the probability of a successful outcome is needed in many applications, something that probabilistic models, including Bayesian models, excel at \cite{bellazzi2008predictive}. Bayesian models have been previously suggested to be the most transparent of these types of models, as each attribute can be independently analyzed and the relative strength of each attribute understood \cite{Freitas2014}. This approach is favored in many medical applications for this reason \cite{zupan2000machine,kononenko1993inductive,lavravc1999selected}. More complex, non-na\"{i}ve Bayesian models can be constructed \cite{cheng1999comparing}, although one may then potentially lose both model accuracy and transparency.

Similar to Bayesian models, logistic regression also outputs outcome probabilities by fitting the output of its regression model to values between 0 and 1. The logit function inherent in this model is also constructed from probabilities -- here in the form of log-odds / odds-ratios. This makes the model popular in medical applications \cite{bagley2001logistic,katafigiotis2018stone}. At times, the interpretations that can be generated from these relationships are explicit \cite{dreiseitl2002logistic}.

Support Vector Machines (SVM) are based on finding a hyperplane that separates different instances and are potentially explicit, particularly if a linear kernel is used \cite{bellazzi2008predictive}. Once again, if many attributes exist in the model, the explicitness of the model might be limited even if a linear kernel is used. An SVM becomes even less explicit if more complex kernels are used, including RBF and polynomial kernels. As is the case with the last three of these algorithms (SVM, k-nn and Bayesian), feature selection / reduction could significantly improve the explicitness of the model, something we explore in the next section.
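
Before turning to feature selection, the following minimal sketch illustrates the kind of explicit, faithful relationship a white-box model such as logistic regression exposes directly: after standardizing the features (so that coefficients are comparable), each coefficient can be read as an odds-ratio for the positive class. The dataset and pipeline here are purely illustrative.

\begin{verbatim}
# A minimal, illustrative sketch: reading a logistic regression model
# through the odds-ratios implied by its coefficients.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

coefs = model.named_steps["logisticregression"].coef_[0]
# exp(coefficient) is the multiplicative change in the odds of the positive
# class per one-standard-deviation increase in the (scaled) feature.
odds_ratios = np.exp(coefs)
for name, ratio in sorted(zip(data.feature_names, odds_ratios),
                          key=lambda pair: abs(np.log(pair[1])),
                          reverse=True)[:5]:
    print(f"{name}: odds ratio of about {ratio:.2f}")
\end{verbatim}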

As no one algorithm provides both high accuracy and explicitness, it is important to consider new machine learning algorithms that include explainability as a consideration within the learning algorithm itself. One example of this approach is work by Kim, Rudin and Shah, who have suggested a Bayesian Case Model for case-based reasoning \cite{kim2014bayesian}. Another example is introduced by Lou et al. \cite{LouCG12}. Their generalized additive models (GAMs) combine univariate models called shape functions through a linear function. On one hand, the shape functions can be arbitrarily complex, making GAMs more accurate than simple linear models. On the other hand, GAMs do not contain any interactions between features, making them more explicit than black box models. Lou et al. also suggested adding selected terms of interacting pairs of features to standard GAMs \cite{lou2013accurate}. This method increases the accuracy of the models, while maintaining better explicitness than black box methods. Caruana et al. propose an extension of the GAM, GA$^2$M, which considers pairwise interactions between features, and provide a case study showing its success in accurately and transparently explaining a health-care dataset \cite{Caruana2015}.

We believe these approaches are worthy of further consideration and provide an important future research area, as new combinations of machine learning algorithms that provide both high accuracy and explainability could potentially be developed. Several of these methods use an element of feature analysis as the basis of their transparency \cite{Caruana2015,last2004compact,LouCG12,lou2013accurate}. In general, feature selection can be a critical element in creating transparent and non-transparent interpretations, as we now detail.

\subsection{Generating Interpretations from Feature Selection / Analysis}
\label{What-feature analysis}
A second approach to create the interpretation, $\varmathbb I$, is through performing feature selection and/or feature analysis of all features, $F_1 \ldots F_i$, before or after a model has been built. Theoretically, this approach can be used on its own to generate interpretations for non-transparent ``black box'' algorithms, or in conjunction with the above ``white box'' algorithms to help further increase their explicitness. Feature selection has long been established as an effective way of building potentially better models which are simpler and thus better overcome the curse of dimensionality \cite{guyon2003introduction}. Additionally, models with fewer attributes are potentially more explicit, as the true causal relationship between the dependent and independent variables is clearer and thus easier to present to the user \cite{Kononenko99}. The strong advantage of this approach is that the information presented to the user is generated directly from the mathematical relationship between a small set of features and the target being learned.

Three basic types of feature selection approaches exist: filters, wrappers, and embedded methods. We believe that filter methods are typically best suited for generating explicit interpretations, as the analysis is derived directly from the data without any connection to a specific machine learning model \cite{guyon2003introduction,Saeys2007survey}. Univariate scores such as information gain or $\chi^2$ can be used to evaluate each of the attributes independently.
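
As a minimal sketch of such univariate filtering (using scikit-learn and a benchmark dataset purely as a stand-in for a real application), each attribute below is scored independently of any downstream model:

\begin{verbatim}
# A minimal, illustrative sketch: scoring every attribute independently
# with a chi-squared test and with mutual information (an information-gain
# style score), without reference to any downstream model.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import chi2, mutual_info_classif

data = load_breast_cancer()
X, y = data.data, data.target   # all features here are non-negative,
chi2_scores, _ = chi2(X, y)     # as the chi-squared test requires
mi_scores = mutual_info_classif(X, y, random_state=0)

for name, c, mi in zip(data.feature_names, chi2_scores, mi_scores):
    print(f"{name}: chi2 = {c:.1f}, mutual information = {mi:.3f}")
\end{verbatim}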
The top $n$ features could then be selected, or only those with a score above a previously defined threshold. The user's attention could then be focused on relationships between these attributes, facilitating explicitness. Multivariate filters, such as CFS \cite{hall1999correlation}, allow us to potentially discover interconnections between attributes. The user's attention could again be focused on this small subset of features, with the assumption that interrelationships between features have become more explicit. Previous work by Vellido et al. \cite{vellido2012making} recommends using principal component analysis (PCA) to generate interpretations. Not only does PCA reduce the number of attributes needing to be considered, but the new features generated by PCA are linear combinations of the original ones. As such, the user could understand an explanation based on these interrelationships, especially if both the number and size of these derived features are small.

As filter methods are independent of the machine learning algorithm used, it has been suggested that this approach can be used in conjunction with black box algorithms to make them more explicit \cite{vellido2012robust}. One example is previous work that used feature selection to reduce the number of features from nearly 200 to 3 before using a neural network for classification \cite{vellido2012robust}. As neural networks are becoming increasingly popular due to their superior accuracy on many datasets, we believe this is a general approach that is worth considering to help make neural networks more explicit.

\subsection{Tools to Generate Model Interpretations Independently from L}
\label{What-Model outcome}
The above methods are faithful in that the transparent algorithms and feature analysis are done in conjunction with $L$. However, other approaches exist that create $\varmathbb I$ as a process independent of the logic within $L$. In the best case, $\varmathbb I$ does faithfully approximate the actual and complete logic within $L$, albeit found differently, and thus represents a form of reverse-engineering of the logic within $L$ \cite{Augasta2012}. Even when $\varmathbb I$ is not 100\% faithful, the goal is to be as faithful and explicit as possible, making these approaches a type of metacognition process, or reasoning about the reasoning process (e.g. $L$) \cite{cox2011metareasoning}. A key difference within the remaining approaches in this section is that $\varmathbb I$ is created through an analysis after $L$'s learning has been done, something referred to as postprocessing \cite{Strumbelj2010} or post-hoc analysis \cite{Lipton16a,post}. Examples of post-hoc approaches that we consider in the remainder of this section include: model and outcome interpretations, visualization, and prototyping similar records.

While disconnecting $L$ and $\varmathbb I$ can lead to a loss of faithfulness, it can also bring other benefits and challenges. Designing tools that focus on $\varmathbb I$ could potentially lead to very explicit models, something we represent in Figure \ref{fig::Explicit-Faithful}. Additionally, interpretations that are derived directly from the machine learning algorithm or the features are strongly restricted by the nature of the algorithm / features. In contrast, interpretations that are created in addition to the decision-making algorithm can be made to comply with various standards.
For example, Miller demonstrates how interpretations are often created by the same people who develop the system. They tend to generate explanations that are understandable to software designers, but are not explicit for the system's users \cite{miller2017explanation}. He suggests using insights from the social sciences when discussing explainability in AI. Other factors, such as legal and practical considerations, might limit researchers as to what constitutes a sufficient explanation. For example, as these tools disconnect the logic in $\varmathbb I$ from $L$, they cannot guarantee the fairness of the agent's decision, which may be a critical need and may even require transparency (see Section \ref{Why}).

The first possibility creates a ``model interpretation tool'' that is used to explain the logic behind $L$'s predictions for all values of $T$ given all records, $R$. One group of these approaches creates simpler, transparent decision trees or rules secondary to $L$. While these approaches will have the highest level of explicitness, they will generally lack faithfulness. For example, Frosst \cite{frosst2017distilling} presents a specific interpretation model for neural networks in an attempt to resolve the tension between the generalization of neural networks and the explicitness of decision trees. This work shows how to use a deep neural network to train a decision tree. The new model does not perform as well as a neural network, but is explicit. Many other approaches have used decision trees to provide explanations for neural networks \cite{Boz2002,Craven1995,KRISHNAN1999}, as well as decision rules \cite{arbatli1997rule,Augasta2012,craven1994using,Kahramanli,zhou2003extracting} and a combination of genetic algorithms with decision trees or rules \cite{arbatli1997rule,4938655,Mohamed2011}. Similarly, decision trees \cite{Chipman_makingsense,Domingos1998,zhou2016interpreting} and decision rules \cite{deng2014interpreting,hara2016making,4167900,tan2016tree} have been suggested to explain tree ensembles.

Some explanations secondary to $L$ are generated by using feature analysis and thus are most similar to the approaches in the previous section. One example of these algorithms is SP-LIME, which provides explanations that are independent of the type of machine learning algorithm used \cite{Ribeiro2016}. It is noteworthy that SP-LIME includes feature engineering as part of its analysis, showing the potential connection between the second and third approaches. The feature engineering in SP-LIME tweaks examples that are tagged as positive and observes how changing them affects the classification. A similar method has been used to show how Random Forests can be made explainable \cite{tolomei2017interpretable,whitmore2018explicating}. The Random Forest can be considered a black box that determines the class of a given feature set. $L$'s interpretability is obtained by determining how the different features contribute to the classification of a feature set \cite{tolomei2017interpretable}, or even which features should be changed, and how, in order to obtain a different classification \cite{whitmore2018explicating}. This type of interpretation is extremely valuable. For example, consider a set of medical features, such as weight, blood pressure, age, etc., and a model to determine heart attack risk. Assume that for a specific feature set the model classifies the patient as high risk.
The model's interpretation facilitates knowing what parameters need to change in order to change the prediction to low risk.

\subsection{Tools to Generate Outcome Interpretations Independently from L}
\label{What-outcome explanation}
The second possibility for creating interpretations independently from $L$ creates an ``outcome explanation'' that is localized and explains the prediction for a given instance $r \in R$ and its prediction, $t \in T$. It has been claimed that feature selection approaches are useful for obtaining a general, global understanding of the model, but not for the specific classification of an instance, $t$; consequently, local interpretations have been advocated \cite{Baehrens2010}. One example is an approach that uses vectors which are constructed independently of the learning algorithm for generating localized interpretations \cite{Baehrens2010}. Another example advocates using coalition game theory to evaluate the effect of combinations of features for predicting $t$ \cite{Strumbelj2010}. Work by Lundberg and Lee presents a unified framework for interpreting predictions using Shapley values from game theory \cite{lundberg2017unified}. Certain algorithms have both localized and global versions. One example is the local algorithm LIME and its global variant, SP-LIME \cite{Ribeiro2016}.

\subsection{Algorithms to Visualize the Algorithm's Decision}
\label{What-visualization}
While the explanations in the previous sections focused on ways a person could better understand the logic within $L$, visualization techniques typically focus on explaining how a subset of features within $F$ is connected to $L$. However, the level of explicitness within visualization is lower than that of feature selection and model and outcome interpretations. This is because feature selection and model and outcome interpretations all aim to understand the logic within $L$, thus giving them a relatively higher level of faithfulness and explicitness. As visualization tools do not focus on understanding the logic within $L$, they are less faithful than feature analysis methods that do, and at times the level of understanding they provide is not high, especially for regular users.

Overall, many of these approaches seem to have justification of a specific outcome of $L$ as their primary goal, and do not focus on even localized interpretations of $L$'s logic. As justification is more concerned with persuading a user that a decision is correct than with providing information about $L$'s logic \cite{biran2017explanation}, it seems that justification methods likely have the least amount of faithfulness, as there is no need to make any direct connection between $\varmathbb I$ and $L$. Consistent with this aim, work by Lei, Barzilay and Jaakkola generated rationales, which they define as justifications for an agent's local decision, by creating a visualization tool that highlights which sections of text, e.g. $f \in F$, were responsible for a specific classification \cite{lei2016rationalizing}.

Consider explanations that can potentially be generated within image classification, a task many visualization tools address \cite{fong2017interpretable,guo2010novel,Simonyan2013DeepIC,xu2015show,zhou2016learning}. A visualization tool will typically identify the portion of the picture (a subset of $F$) that was most responsible for yielding a prediction, $T_k$.
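
One widely used instance of this idea is a gradient-based saliency map. The following minimal sketch (written with PyTorch; the classifier \texttt{model} is a hypothetical placeholder) computes, for a single image, how sensitive the score of a chosen class is to each input pixel, and the resulting map can then be displayed as a heat map over the image:

\begin{verbatim}
# A minimal, illustrative sketch of gradient-based saliency. `model' is a
# hypothetical differentiable classifier (a torch.nn.Module) mapping a
# (1, C, H, W) image tensor to class scores; the map highlights the pixels
# the chosen class score is most sensitive to, not the model's full logic.
import torch

def saliency_map(model, image, target_class):
    model.eval()
    x = image.clone().detach().requires_grad_(True)
    scores = model(x)                      # shape: (1, num_classes)
    scores[0, target_class].backward()     # gradient of one class score
    # Absolute gradient, reduced over colour channels: one value per pixel.
    return x.grad.abs().max(dim=1).values  # shape: (1, H, W)
\end{verbatim}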
However, typical visualizations, such as those generated by saliency masks, class activation mapping, sensitivity analysis and partial dependency plots, focus only on highlighting important portions of the input, without explaining the logic within the model, and the output is often hard for regular users to understand. Nonetheless, these approaches are useful in explaining high-accuracy, low-explicitness machine learning algorithms, particularly neural networks, often within image classification tasks.

Saliency maps are visualizations that identify important, i.e. salient, objects, which are groups of features \cite{xu2015show}. In general, saliency can be defined as identifying the region of an image $r \in R$ that $L(r\times F,T_k)$ will identify \cite{fong2017interpretable}. For example, a picture may include several items, such as a person, a house and a car. $r$ can represent the car and $L(r\times F,T_k)$ is used to properly identify it ($T_k$). Somewhat similar to the previous types of explanations, these salient features could then generate a textual explanation of an image. For example, Xu et al. focused on identifying objects within a deep convolutional neural network (CNN) for picture identification in order to automatically create text descriptions for a given picture (outcome description) \cite{xu2015show}. Kim et al. created textual explanations for the neural networks of self-driving cars \cite{kim2018textual}. More generally, saliency masks can be used to identify the $n$ areas $r_1, \ldots, r_n$ that represent the $n$ targets $t_1, \ldots, t_n$ identified in the picture \cite{fong2017interpretable,guo2010novel,Simonyan2013DeepIC,xu2015show,zhou2016learning}. They generally use the gradient of the output corresponding to each of the targets with respect to the input features \cite{Lipton16a}. While earlier works constrained the neural network to provide this level of explicitness \cite{zhou2016learning}, recent works provide visual explanations without altering the structure of the neural network \cite{fong2017interpretable,hu2018explainable,selvaraju2017grad}. Still, serious concerns exist that many of these visualizations are too complex for regular users and thus reserved for experts, as some of these explanations are only appropriate for people researching the inner workings of the algorithm to diagnose and understand mistakes \cite{selvaraju2017grad}.

Neural activation is a visualization for the inspection of neural networks that helps focus a person on which neurons are activated with respect to particular input records. As opposed to the previous visualizations that focus on $F$ and $R$, this visualization helps provide an understanding of neural networks' decisions, making them less of a black box. Consequently, these approaches provide interpretation rather than justification and are more faithful. For example, work by Yosinski et al. \cite{yosinski2015understanding} proposes two tools for visualizing and understanding what computations and neuron activations occur in the intermediate layers of deep neural networks (DNNs). Work by Shwartz-Ziv and Tishby suggests using an Information Plane visualization, which captures the mutual information values that each layer preserves regarding the input and output variables of DNNs \cite{shwartz2017opening}.

Other visualizations exist for other machine learning algorithms and learning tasks. Similar to saliency maps, sensitivity analysis provides a visualization that connects the inputs and outputs of $L$ \cite{saltelli2002sensitivity}.
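
A minimal sketch of the simplest, one-at-a-time variant of this idea is given below; it assumes a fitted classifier exposing a scikit-learn style \texttt{predict\_proba} method (the model and data are placeholders), sweeps a single feature over its observed range while holding the remaining features at their mean values, and records the predicted probability at each point so that the resulting curve can be plotted:

\begin{verbatim}
# A minimal, illustrative sketch of one-at-a-time sensitivity analysis for
# a fitted classifier with a predict_proba method (model and data are
# placeholders). One feature is swept over its observed range while all
# other features are held at their mean values; plotting `grid' against
# `responses' yields the input-output visualization described above.
import numpy as np

def sensitivity_curve(model, X, feature_idx, num_steps=20):
    baseline = X.mean(axis=0)
    grid = np.linspace(X[:, feature_idx].min(),
                       X[:, feature_idx].max(), num_steps)
    responses = []
    for value in grid:
        probe = baseline.copy()
        probe[feature_idx] = value
        responses.append(model.predict_proba(probe.reshape(1, -1))[0, 1])
    return grid, np.array(responses)
\end{verbatim}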
Moreover, sensitivity analysis maps have been applied to tasks beyond image classification and to other black box machine learning algorithms such as ensemble trees \cite{cortez2013using}. For example, Cortez and Embrechts present five sensitivity analysis methods appropriate for both classification and regression tasks \cite{cortez2013using}. Zhang and Wallace present a sensitivity analysis for convolutional neural networks used in text classification \cite{zhang2015sensitivity}.

Partial Dependency Plots (PDP) help visualize the average partial relationship between the predicted response of $L$ and one or more features within $F$ \cite{friedman2001greedy,goldstein2015peeking}. PDPs use feature analysis as a critical part of their interpretation, and are much more faithful and explicit than many of the other visualization approaches in this section. However, as their primary output and interpretation tool is visual \cite{friedman2001greedy}, we have categorized them in this section. Examples include work by Hooker that uses ANOVA decomposition to create a Variable Interaction Network (VIN) visualization \cite{hooker2004discovering} and work by Goldstein et al. that extends the classic PDP model by graphing the functional relationship between the predicted response and the feature for individual observations, thus making this a localized visualization \cite{goldstein2015peeking}. Similarly, Krause et al. provide a localized visualization to create partial dependence bars, a color bar representation of a PDP \cite{krause2016interacting}.

\subsection{Generating Explanations from Prototyping the Dataset's Input as Examples}
\label{What-protyping}
Similar to visualization tools, prototype selection also seeks to clarify the link between $L$'s input and output. However, while visualization tools focus on the input from $F$, prototyping focuses on $R$, seeking a subset of records similar to the record, $r \in R$, being classified. This subset is meant to serve as an implicit explanation of the correctness of the model, as prototyping aims to find the minimal subset of input records that can serve as a distillation or condensed view of the dataset \cite{bien2011prototype}.

Prototypes have been shown to help people better understand $L$'s decisions. For example, work by Hendricks et al. focuses on providing visual explanations for images that include class-discriminative information about other images that share common characteristics with the image being classified \cite{HendricksARDSD16}. The assumption here is that the information about similar pictures in the same class helps people better understand the decision of the algorithm. Bien and Tibshirani propose two methods for generating prototypes -- an LP relaxation with randomized rounding and a greedy approach \cite{bien2011prototype}. Work by Kim et al. suggested using maximum mean discrepancy to generate prototypes \cite{kim2016examples}. In other work by Kim et al., they suggest using a Bayesian Case Model (BCM) to generate prototypes \cite{kim2014bayesian}.

\subsection{Comparing the Six Basic Approaches for Generating Interpretations}
Referring back to Figure \ref{fig::Explicit-Faithful}, each of these approaches differs along the axes of explicitness and faithfulness.
It has been previously noted that many of the visualization approaches produce interpretations that are not easily understood by people without an expert-level understanding of the problem being solved \cite{post}, making them not very explicit. As they often provide justification rather than a direct interpretation of the logic in $L$, they are also not very faithful. As prototypes provide examples of similar classifications, they are often more explicit than visualizations, since regular users can more easily understand their meaning. However, as they also do not attempt to directly explain $L$'s logic, they are not more faithful. Other approaches, such as transparent ones, have high levels of both explicitness and faithfulness, but are typically limited to white box methods that facilitate these types of interpretability. Model and outcome tool approaches can potentially be geared to any user, making them very explicit, but are less faithful, as the logic generated in $\varmathbb I$ is not necessarily the same as that in $L$. When taken in combination with a white box algorithm, feature analysis methods can be very explicit and faithful. At times, they are used independently of $L$, potentially making them less faithful.

Referring back to Figure \ref{Figure1}, each of the approaches described in this section is labeled with the corresponding term from the explainability model described in Section \ref{definitions}. However, note the overlaps within the Venn diagram, as some of the approaches described in this section do overlap. While transparent approaches do link $\varmathbb I$ and $L$, sometimes the link between these two elements is strengthened and/or described through an analysis of $F$, as is commonly seen in feature analysis approaches. For example, the GAM and GA$^2$M approaches \cite{Caruana2015,LouCG12} use univariate and pairwise feature analysis methods, respectively, in their transparent models. While model interpretation tools such as SP-LIME pride themselves on being model-agnostic, i.e. no direct connection is assumed between $\varmathbb I$ and $L$, they do use elements of feature analysis and visualization in creating their global interpretation of $L$ \cite{Ribeiro2016}. Similarly, the outcome explanation model, LIME, also uses feature analysis and visualization in creating its local interpretations of $L$ \cite{Ribeiro2016} for an instance $r \in R$. Saliency maps are visualizations based on identifying the features used for classifying a given picture \cite{fong2017interpretable}, showing the potential overlap between visualization methods and feature analysis. However, at times, the identified salient features are used to create an outcome interpretation, as is the case in other work \cite{xu2015show}. Similarly, work by Lei, Barzilay and Jaakkola generated visualizations of outcomes through analyzing which features were most useful to the model, again showing the intersection of these three approaches. Last, some prototype analysis tools, such as the work by Hendricks et al., use visual methods \cite{HendricksARDSD16}. Thus, we stress that the different types of interpretation approaches are often complementary and not mutually exclusive.

Given these differences in the explicitness and faithfulness of each of these approaches, it seems logical that the type of interface used for disseminating the system's interpretation will likely depend upon the level of the user's expertise and the type of interpretation that was generated.
The idea of adaptable interfaces based on people's expertise was previously noted \cite{grudin1989case,shneiderman2002promoting,SteinGNRJ17}. In these systems, the type of information presented in the interface depends on the user's level of expertise. Accordingly, an interface might consider different types of interpretation or interpretation algorithms based on \emph{who} the end-user will be. Even among experts, it is reasonable to assume that different users will need different types of information. The different backgrounds of legal experts, scientists, safety engineers, and researchers may necessitate different types of interfaces \cite{doshi2017towards}.

\section{\emph{When} Should Information be Presented?}
\label{when}

Explanations can be categorized based on \emph{when} the interpretation is presented:
\begin{enumerate}
\item Before the task
\item Continuing explanations throughout the task
\item After the task
\end{enumerate}

Some agents may present their interpretation before the task is executed as either justification \cite{biran2017explanation}, conceptualization or proof of fairness of an agent's intended action \cite{dwork2012fairness}. Other agents may present their explanation during task execution, especially if this information is important for explaining when the agent fails, so that it will be trusted to correct the error \cite{jennings2014human,abs-1806-00069,Guidotti2018}. Other agents provide explanations after actions are carried out \cite{langley2017explainable}, to be used for retrospective reports \cite{Lipton16a}.

It is important to note that not all of the approaches for \emph{what} can be generated (Section \ref{What}) support all of these possibilities. While all methods can be used for analysis after the task, many of these methods use post-hoc analysis that separates $L$ from $\varmathbb I$. Thus, if fairness needs to be checked before task execution, the lack of connection between $L$ and $\varmathbb I$ in model and outcome explanations, visualizations, and prototypes makes this difficult to check accurately. Transparent methods could fulfill this requirement due to their inherent faithfulness. Feature analysis methods, including but not limited to GAM, GA$^2$M, and PDP \cite{Caruana2015,friedman2001greedy,goldstein2015peeking,lou2013accurate}, can check the connection between inputs and outputs, thus confirming that fairness or other legal requirements are met even before task execution.

The choice of \emph{when} to present the explanation is not exclusive. Agents might supply various explanations at various times, before, during and after the task is carried out. Building on the taxonomy in Section \ref{Why}, if explainability is critical for the system to begin functioning, then it stands to reason that this knowledge must be presented at the beginning of the task, thus enabling the user to determine whether to accept the agent's recommendation \cite{sheh2017did}. However, if it is beneficial for building trust / user acceptance, then it might be presented during the task, especially if the agent erred. If the purpose of the explanation is to justify the agent's choice from a legal perspective, then we may need to certify that decision before the agent acts (preventative) or after the act (accusatory). But if the goal is conceptualization, especially in the form of knowledge discovery and/or to support future decisions, then the need for explanation after task execution is equally critical.
These possibilities are not inherently mutually exclusive. For example, work by Vigano and Magazzeni \cite{vigano2018explainable} claims that, within security systems, explanations should be provided throughout all stages of the system's lifecycle. They describe how explanations should begin as the system is designed and implemented, continue through its use, analysis and change, and perhaps even continue when it is replaced. One may argue whether this is crucial for all systems or only for the security systems discussed in their work, but it is surely a point to consider.

\section{\emph{How} can Explanations be Evaluated?} \label{How}
It was previously noted that little agreement currently exists about how to define explainability and interpretability, which may add to the difficulty of properly evaluating them \cite{doshi2017towards}. In order to address this point, we first clearly defined these terms in Section \ref{definitions}, and then proceeded to consider questions of \emph{why}, \emph{who}, \emph{what}, \emph{when} and \emph{how} based on these definitions.

As we discuss in this section, creating a general evaluation framework is still an open challenge, as these issues are often intrinsically connected. For example, the level of detail of an explanation is often dependent on \emph{why} that explanation is needed. An expert will likely differ from a regular user regarding \emph{why} an explanation is needed, will often need these explanations at different times, e.g. before or after the task (\emph{when}), and may require different types of explanations and interfaces (\emph{what} and \emph{how}). At other times, multiple facets of explanation exist even within one category. A DSS is built to support a user's decision, thus making explainability a critical issue. However, these systems will still likely benefit from better explanations, so that the user trusts those explanations. Similarly, a scientist pursuing knowledge discovery may need to analyze and interact with information presented before, during and after a task's completion (\emph{when}). Thus, multiple goals must often be considered and evaluated.

To date, there is little consensus about how to quantify these interconnections. Many works evaluated explainability as a binary value -- either it works or it does not. Within these papers, an explanation is inspected to ensure that it provides the necessary information in the context of a specific application \cite{lei2016rationalizing,Ribeiro2016}. If it does so, it is judged a success, even if other approaches may have been more successful.
To help quantify evaluations, Doshi-Velez and Kim suggested a taxonomy of three types of tasks that can be used for evaluation: application-grounded, human-grounded, and functionally grounded \cite{doshi2017towards}. In their model, application-grounded tasks are meant for experts attempting to execute a task, and the evaluation focuses on how well the task was completed. Human-grounded tasks are simplified and can be performed with regular users. They concede that it is not clear what the evaluation goal of this task needs to be, but recommend simplified evaluation metrics, such as reporting which explanation users preferred. Work by Mohseni and Ragan suggested creating canonical datasets of this type that could quantify differences between interpretable algorithms \cite{mohseni2018human}. They proposed annotating regions in images, and words in texts, that provide an explanation.
The output of any new interpretation algorithm, $\varmathbb I$, could be compared to the user annotations that provide a ground truth. This approach is still goal-oriented, and thus they classify their task as a human-grounded task (e.g. having $\varmathbb I$ match the human explanation). Doshi-Velez and Kim's last type of evaluation task is functionally grounded, where some objective criteria for evaluation are predefined (e.g. the ratio of decision tree size to model accuracy). The main advantage of this type of evaluation is that the evaluation of $\varmathbb I$ can be quantified without any need for user studies.

This taxonomy provides three important types of tasks that can be used to evaluate explainability and interpretability, but these researchers do not propose how to quantify the effectiveness of the key components within $\varmathbb E$ and $\varmathbb I$. This paper's main point is that questions surrounding the system's need for the \emph{why}, \emph{what}, \emph{when} and \emph{how} of explainability must be addressed. These elements can and should be quantified, while also considering trade-offs between these categories as well as between elements within $\varmathbb I$. Issues about algorithms' explicitness, faithfulness and transparency must be explicitly evaluated while balancing the agent's and user's performance requirements, including the agent's fairness and prediction accuracy, and the user's performance and acceptance of $\varmathbb I$.

Given this, we suggest explicitly evaluating the following three elements in Human-Agent Systems: the quantifiable performance of the agent's learning, $L$; its level of interpretation, $\varmathbb I$; and the human's understanding, $\varmathbb E$. For example, in a movie recommendation system the three scores would be described as follows. The score for $L$ is based on standard metrics for evaluating recommendation predictions (i.e. accuracy, precision and/or recall). A score can also be given to $\varmathbb I$ that reflects how much explicitness, faithfulness and transparency exist in $\varmathbb I$ according to objective criteria described below. The score for $\varmathbb E$ should be quantified based on the user's performance. As the goal of the system is to yield predictions that the user understands so they will be accepted, we should quantify the impact of $\varmathbb I$ on the user's behavior. Thus, we suggest an evaluation score that quantifies:

\begin{enumerate}
\item A score for $L$, the performance of the agent's prediction
\item A score for $\varmathbb I$, the interpretation given to the user
\item A score for $\varmathbb E$, the user's acceptance of $\varmathbb I$
\end{enumerate}

As described in previous sections, a complex interplay exists between these three elements. There is often a trade-off between the performance of $L$ and the explicitness of $\varmathbb I$ that can be produced from $L$ (see Figure \ref{fig::Explicit-Predict}). White-box algorithms are more explicit and can even be transparent, but typically have lower performance. Higher-accuracy algorithms, such as neural networks, are typically less explicit and faithful (see Figure \ref{fig::Explicit-Faithful}). Thus, agents with lower performance scores for $L$ will likely have higher scores for $\varmathbb E$, especially if explicitness and faithfulness are important and quantifiable within the system.
Furthermore, different user types and interfaces will be affected by the type of agent design, and a total measure is needed that takes all parameters required by a system into account. For example, an agent that was designed to support an expert user is different from one provided to a regular user.

Another equally important element of the system is how well the person executed her system task(s) given $\varmathbb I$. In theory, multiple goals for $\varmathbb E$ may exist for the human user, such as immediate performance vs. long-term knowledge acquisition. These may be complementary or in conflict. For example, assume the explanation goal of a system is to support a person's ability to purchase items in a time-constrained environment (e.g. online stock purchasing). Greater detail within the agent's explanations will, on the one hand, instill improved confidence in the user, but will also take more time to read and process, which may prevent the user from capitalizing on certain quickly passing market fluctuations. Thus, some measure should likely be introduced to reason about different goals for the explanation and the relative strengths of various explanations, their interfaces, and the algorithms that generate those explanations.

To capture these properties, we propose an overall utility to quantify the complementary and contradictory goals for $L$, $\varmathbb I$, and $\varmathbb E$ as the weighted product:

\begin{equation}
Utility = \prod_{n=1}^{NumGoals} Imp_n \cdot Grade_n
\label{equation-utility}
\end{equation}
\begin{equation}
\sum_{n=1}^{NumGoals} Imp_n = 1
\end{equation}

We define $NumGoals$ as the number of goals in the system. $L$, $\varmathbb I$, and $\varmathbb E$ each have an overall objective, and the system meets this objective through all of these goals. The objective for $L$ is to provide predictions for $T$ using $R \times F$. The ability of the system to meet this objective is measured through machine learning performance metrics that quantify the goals of high accuracy, recall, precision, F-measure, mean average precision, and mean squared error. A goal for $L$ can also be that $L$ exhibits fairness, which is often a hard constraint due to legal considerations. The objective for $\varmathbb I$ is to provide a representation of $L$'s logic that is understandable to the user. The success of this objective can be measured by goals for $\varmathbb I$ to have the highest levels of explicitness, faithfulness, and transparency. Other papers have suggested additional goals for $\varmathbb I$, including justification \cite{kofod2008explanatory} and completeness \cite{abs-1806-00069}, which we argue are subsumed by the goals of explicitness and faithfulness, respectively. The objective for $\varmathbb E$ is that the person will understand $L$ using $\varmathbb I$. This can be measured through the goal that the user's performance be improved given $\varmathbb I$. Additional goals include those specified in Section \ref{Why}, including guaranteeing that safety concerns are met, building trust, and knowledge / scientific discovery. Goals relating to the timing of when interpretations are presented (e.g. ``presenting during the task as required'') are likely hard constraints (e.g. either it was done at the correct time or not). $Imp_n$ is the importance weight we give to the $n^{th}$ goal such that $0