diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfjdr" "b/data_all_eng_slimpj/shuffled/split2/finalzzfjdr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfjdr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe chromatic number $\\chi(G)$ of a graph $G=(V,E)$ is the minimal cardinal $\\varkappa$ for which there exists a vertex coloring with $\\varkappa$ colors. There is a long history of structure theorems deriving from large chromatic number assumptions, see e.g., \\cite{komjath}. The main topic of this paper will be the following conjecture proposed by Erd\\\"os-Hajnal-Shelah \\cite[Problem 2]{EHS} and Taylor \\cite[Problem 43, page 508]{Taylorprob43}.\n\\begin{conjecture}[Strong Taylor's Conjecture]\nFor any graph $G$ with $\\chi(G)>\\aleph_0$ there exists an $n\\in\\mathbb{N}$ such that $G$ contains all finite subgraphs of $\\text{Sh}_n(\\omega)$.\n\\end{conjecture}\nWhere, for a caridnal $\\kappa$, the shift graph $\\text{Sh}_n(\\kappa)$ is the graph whose vertices are increasing $n$-tuples of ordinals less than $\\kappa$, and we put an edge between $s$ and $t$ if for every $1\\leq i\\leq n-1$, $s(i)=t(i-1)$ or vice-versa. The shift graphs $\\text{Sh}_n(\\kappa)$ have large chromatic numbers depending on $\\kappa$, see Fact \\ref{F:Sh-high chrom} below. Consequently, if the strong Taylor's conjecture holds for a graph $G$, it has elementary extensions of unbounded chromatic number (having the same family of finite subgraphs). \n\nThe strong Taylor's conjecture was refuted in \\cite[Theorem 4]{HK}. See \\cite{komjath} and the introduction of \\cite{1196} for more historical information.\n\nIn \\cite{1196} we initiated the study of variants of the strong Taylor's conjecture for some classes of graphs with \\emph{stable} first order theory (stable graphs). Stablility theory, which is the study of stable theories and originated in the works of the third author in the 60s and 70s, is one of the most influential and important subjects in modern model theory. Examples of stable theories include abelian groups, modules, algebraically closed fields, graph theoretic trees, or more generally superflat graphs \\cite{PZ}. Stablility also had an impact in combinatorics, e.g. \\cite{MS} and \\cite{CPT} to name a few.\n\nMore precisely, in \\cite{1196} we proved the strong Taylor's conjecture for $\\omega$-stable graphs and variants of the conjecture for superstable graphs (replacing $\\aleph_0$ by $2^{\\aleph_0}$) and for stable graphs which are interpretable in a stationary stable theory (replacing $\\aleph_0$ by $\\beth_2(\\aleph_0)$). As there exist stable graphs that are not interpretable in a stationary stable structure, see \\cite[Proposition 5.22, Remark 5.23]{1196}, we asked what is the situation in general stable graphs and in this paper we answer it with the following theorem.\n\n\\begin{theorem*}[Corollary \\ref{C:main corollary}]\nLet $G=(V,E)$ be a stable graph. If $\\chi(G)>\\beth_2(\\aleph_0)$ then $G$ contains all finite subgraphs of $\\text{Sh}_n(\\omega)$ for some $n\\in \\mathbb{N}$.\n\\end{theorem*}\n\nThe key tool in proving the results for $\\omega$-stable graphs and superstable graphs is that every large enough saturated model is an Ehrenfeucht\u2013Mostowski model (EM-model) in some bounded expansion of the language. \n\nAn EM-model is a model which is the definable closure of an indiscernible sequence and was originally used by Ehrenfeucht\u2013Mostowski in order to find models with many automorphisms \\cite{EM}. 
It was shown by Lascar \\cite[Section 5.1]{lascar} that every saturated model of cardinality $\\aleph_1$ in an $\\omega$-stable theory is an EM-model in some countable expansion of the language; this was later generalized to any cardinality by Mariou \\cite[Theorem C]{mariou}. This was generalized for superstable theories by Mariou \\cite[Theorem 3.B]{mariouthesis} and in an unpublished preprint by the third author \\cite{Sh:1151}.\n\nIt was shown by Mariou \\cite[Theorem 3.A]{mariouthesis} that in a certain sense the existence of such saturated EM-models for a stable theory necessarily implies that the theory is superstable. Consequently, a different tool is needed in order to prove the theorem for general stable theories. \n\nIn the stationary stable case, we use a variant of representations of structures in the sense of \\cite{919}. However, this method did not seem to adjust easily to the general stable case.\n\nIn this paper we resolve this problem by generalizing the notion of EM-models to \\emph{infinitary EM-models} and showing that such saturated models exist for any stable theory in Theorem \\ref{T:existence of gen em model in stable}. The definition is a bit technical, so here we settle for an informal description: \n\nIn an EM-model every element is given by a term and a finite sequence of elements from the generating indiscernible sequence. Analogously, in an infinitary EM-model every element is given by some ``term'' with infinite (but bounded) arity and a suitable sequence of elements from an indiscernible sequence. \n\nWe prove that the existence of saturated infinitary EM-models characterizes stability.\n\n\\begin{theorem*}[Theorem \\ref{T:existence of gen em model in stable}]\nThe following are equivalent for a complete $\\mathcal{L}$-theory $T$:\n\\begin{enumerate}\n\\item $T$ is stable.\n\n\\item Let $\\kappa,\\mu$ and $\\lambda$ be cardinals satisfying $\\kappa=\\cf(\\kappa)\\geq \\kappa(T)+\\aleph_1$, $\\mu^{<\\kappa}=\\mu\\geq 2^{\\kappa+|T|}$ and $\\lambda=\\lambda^{<\\kappa}\\geq \\mu$ and let $T\\subseteq T^{sk}$ be an expansion with definable Skolem functions such that $|T|=|T^{sk}|$ in a language $\\mathcal{L}\\subseteq \\mathcal{L}^{sk}$. Then there exists an infinitary EM-model $M^{sk}\\models T^{sk}$ based on $(\\alpha,\\lambda)$, where $\\alpha\\in \\kappa^U$ for some set $U$ of cardinality at most $\\mu$, such that $M=M^{sk}\\restriction \\mathcal{L}$ is saturated of cardinality $\\lambda$.\n\\end{enumerate}\n\\end{theorem*}\n\nSection \\ref{s:em-models} is the only purely model theoretic section and is the only place where stability is used. The results of this section (more specifically Theorem \\ref{T:existence of gen em model in stable}) are only used in Section \\ref{s:conclusion}. In Section \\ref{S:order type graphs} we study graphs on (perhaps infinite) increasing sequences whose edge relation is determined by the order type. Aiming to prove that if the chromatic number is large, then one can embed shift graphs, we analyze several different cases. The last case we deal with in Section \\ref{S:order type graphs} turns out to be rather complicated, so we devote all of Section \\ref{S:PCF} to it. There, we employ ideas inspired by PCF theory to get a coloring of small cardinality. Section \\ref{s:conclusion} concludes. \n\n\n\\section{Preliminaries}\nWe use lowercase Latin letters $a,b,c$ for tuples and capital letters $A,B,C$ for sets. 
We also employ the standard model theoretic abuse of notation and write $a\\in A$ even for tuples when the length of the tuple is immaterial or understood from context.\n\nFor any two sets $A$ and $J$, let $A^{\\underline{J}}$ be the set of injective functions from $J$ to $A$ (where the notation is taken from the falling factorial notation), and if $(A,<)$ and $(J,<)$ are both linearly ordered sets, let $(A^{\\underline J})_<$ be the subset of $A^{\\underline J}$ consisting of strictly increasing functions. If we want to emphasize the order on $J$ we will write $(A^{\\underline{(J,<)}})_<$. \n\nThroughout this paper, we interchangeably use sequence notation and function notation for elements of $A^{\\underline J}$, e.g. for $f\\in A^{\\underline J}$, $f(i)=f_i$. For any sequence $\\eta$ we denote by $\\Rg(\\eta)$ the underlying set of the sequence (i.e. its image). If $(A,<^A)$ and $(B,<^B)$ are linearly ordered sets, then the most significant coordinate of the lexicographic order on $A\\times B$ is the left one.\n\n\\subsection{Stability}\nWe use fairly standard model theoretic terminology and notation, see for example \\cite{TZ,guidetonip}. We gather some of the needed notions. For stability, the reader can also consult \\cite{classification}.\n\nWe denote by $\\tp(a\/A)$ the complete type of $a$ over $A$. Let $(I,<)$ be a linearly ordered set. A sequence $\\langle a_i: i\\in I\\rangle$ inside a first order structure is \\emph{indiscernible} (over a set $A$) if for any $i_1<\\dots<i_n$ and $j_1<\\dots<j_n$ from $I$, $\\tp(a_{i_1},\\dots,a_{i_n}\/A)=\\tp(a_{j_1},\\dots,a_{j_n}\/A)$.\n\n\nIn stable theories, for any infinite indiscernible sequence $I$ over some set $A$ one may take the limit type defined by \n\\[\\lim(I)=\\{\\varphi(x,c): \\text{$\\varphi(a,c)$ holds for cofinitely many $a\\in I$}\\}.\\]\nIt is a consistent complete type by stability. It is obviously finitely satisfiable in $I$. Moreover, if $\\mathcal{D}$ is a non-principal ultrafilter on $I$, then $p_\\mathcal{D}=\\lim(I)$. We often write $\\lim(I\/A)=\\lim(I)|A$.\n\nThe following is \\cite[Lemma III.3.10]{classification}; we give a proof for completeness.\n\\begin{lemma}\\label{L:saturation}\nLet $T$ be a stable theory and $M\\models T$. If $M$ is $(\\kappa(T)+\\aleph_1)$-saturated and every countable indiscernible sequence over $A\\subseteq M$, with $|A|<\\kappa(T)$, in $M$ can be extended to one of cardinality $\\lambda$ then $M$ is $\\lambda$-saturated.\n\\end{lemma}\n\\begin{proof}\nWe may assume that $\\lambda>\\kappa(T)+\\aleph_1$.\nBy passing to $M^{eq}$ (and $T^{eq}$) there is no harm in assuming that $T$ eliminates imaginaries. \nLet $p\\in S(C)$ with $C\\subseteq M$ and $|C|<\\lambda$. Let $B\\subseteq C$ with $|B|<\\kappa(T)$ such that $p$ does not fork over $B$. Let $q\\supseteq p$ be its non-forking global extension. Since $M$ is $\\kappa(T)$-saturated, we may find a sequence of elements $S=\\langle b_i :i<\\omega \\rangle\\subseteq M$ satisfying $b_i\\models q|B\\langle b_j:j<i\\rangle$ for every $i<\\omega$.\n\nConsequently, $\\lambda^\\xi = \\lambda$ and $\\lambda^{<\\kappa} = \\lambda$.\n\n\nHence, by (3), there is a saturated model of size $\\lambda$. 
On the other hand, since $\\lambda$ is singular (of cofinality $\\kappa<\\lambda$), $\\lambda^{<\\lambda}>\\lambda$ and as a result by \\cite[Theorem VIII.4.7]{classification}, $T$ is $\\lambda$-stable (and hence stable).\n \n\\end{proof}\n\n\\section{Order-Type graphs with large chromatic number} \\label{S:order type graphs}\nIn this section we discuss graphs whose vertices are (possibly infinite) increasing sequences, where the edge relation is determined by the order type. More specifically, our main interest in this section is the following type of graphs.\n\\begin{definition}\nLet $(I,<)$ and $(J,<)$ be linearly ordered sets and $\\bar a\\neq \\bar b\\in (I^{\\underline J})_<$ be increasing sequences. We define a graph $E^J_{\\bar a,\\bar b}$ and a directed graph $D_{\\bar a,\\bar b}^J$ on $(I^{\\underline J})_<$ by:\n\\begin{itemize}\n\\item $\\bar c \\mathrel{E}^J_{\\bar a,\\bar b} \\bar d \\iff \\otp(\\bar c,\\bar d)=\\otp(\\bar a,\\bar b)\\vee \\otp(\\bar d,\\bar c)=\\otp(\\bar a,\\bar b)$\n\\item $\\bar c \\mathrel{D}^J_{\\bar a,\\bar b} \\bar d \\iff \\otp(\\bar c,\\bar d)=\\otp(\\bar a,\\bar b).$\n\\end{itemize}\nWe omit $J$ from $E^J_{\\bar a,\\bar b}$ and $D^J_{\\bar a,\\bar b}$ when it is clear from the context.\n\nWe call these graphs the \\emph{(directed) order-type graphs}.\n\\end{definition}\n\n\\begin{remark}\nAlthough it will not define a graph, we sometimes use the notation $D_{\\bar a,\\bar b}$ and $E_{\\bar a,\\bar b}$ even if $\\bar a=\\bar b$.\n\\end{remark}\n\nIn Section \\ref{ss:embed-shift} we isolate a family of order-type graphs whose members contain all finite graphs of $\\text{Sh}_m(\\omega)$ for a certain integer $m$ (Corollary \\ref{C:k-ord-cov-implies shift}). In Section \\ref{ss:orde-type-large-chrom} we show that order-type graphs with large chromatic number fall into this family (Theorem \\ref{T:shift graphs in order-type-graphs}).\n\n\\subsection{Embedding shift graphs into order-type graphs}\\label{ss:embed-shift}\n\\begin{definition}\nLet $(I,<)$ and $(J,<)$ be linearly ordered sets, $\\bar a,\\bar b\\in (I^{\\underline J})_<$ be increasing sequences and $01$) and $\\eta(0)<\\rho(0)$ (if $k=1$).\n\\end{definition}\n\n\n\n\\begin{lemma}\\label{L:k-orderly}\nLet $01$) or $\\eta(0)<\\rho(0)$ (if $k=1$), $f_{\\eta,\\bar g}\\mathrel{D}_{\\bar a,\\bar b} f_{\\rho,\\bar g}$.\n\\end{claim}\n\\begin{claimproof}\nFor the purpose of this proof, for $1\\leq i\\leq k$ let $\\pi_i:S^*_{k-1}\\times\\dots\\times S_0^*\\to S^*_{k-1}\\times\\dots\\times S^*_{k-i}$ be the projection on the first $i$ coordinates. \nWe choose increasing functions $g_i:S_i\\cup\\{\\infty\\} \\to S_{k-1}^*\\times\\dots S^*_{k-i}\\times \\{0^-\\}\\times\\dots\\times \\{0^-\\}$ by downwards induction on $i\\pi_{i}(g_i(\\beta))^-$.\n\nFor any $0\\leq i\\leq k-1$, $g_i$ is increasing.\n\\end{subclaim}\n\\begin{subclaimproof}\nThis is straightforward and follows, by downwards induction, that for any $\\beta\\in S_i$, if $g_i(\\beta)=(x_0,\\dots,x_{k-1})$ then the maximal $l\\]\\[(\\pi_{i}(g_i(\\beta_1)),0^-,\\dots,0^-)=g_{i}(\\beta_1).\\]\nOtherwise, let $\\beta_1< \\gamma\\in S_i$ be minimal such that $b_{\\beta_2}\\leq a_\\gamma$. If $b_{\\beta_2}=a_\\gamma$ then $g_{i-1}(\\beta_2)=g_i(\\gamma)>g_i(\\beta_1).$\n\nIf $b_{\\beta_2}\\]\\[ (\\pi_{i}(g_i(\\beta_1)),0^-,0^-,\\dots, 0^-)=g_i(\\beta_1).\\]\n\\item Assume $a_{\\beta_1}>b_{\\beta_2}$ and let $\\gamma\\in S_i$ be minimal such that $a_\\gamma\\geq b_{\\beta_2}$, so $\\gamma\\leq \\beta_1$. 
If $a_\\gamma=b_{\\beta_2}$ then $\\gamma<\\beta_1$ and\n$g_{i-1}(\\beta_2)=g_i(\\gamma)b_{\\beta_2}$ then\n\\[g_{i-1}(\\beta_2)=(\\pi_{i}(g_i(\\gamma))^-,\\beta_2,0^-,\\dots, 0^-)<\\]\\[(\\pi_{i}(g_i(\\beta_1)),0^-,0^-,\\dots, 0^-)=g_i(\\beta_1).\\]\n\n\\end{list}\nThis proves $(\\dagger)$. Let $\\eta,\\rho\\in \\text{LSh}_k(\\delta)$ be as in the statement of the lemma. We proceed to prove that $f_{\\eta,\\bar g} \\mathrel{D}_{\\bar a,\\bar b} f_{\\rho,\\bar g}$.\n\nLet $\\beta_1,\\beta_2< \\alpha$ and assume that $\\beta_1\\in S_{n_1}$ and $\\beta_2\\in S_{n_2}$, for some $0\\leq n_1,n_2\\leq k-1$. Note that if $b_{\\beta_2}\\in C_n$, for $01$.\n\\begin{list}{\u2022}{}\n\\item Assume that $0,=\\right\\}$. By $(\\dagger)$, $g_{n_1}(\\beta_1)\\mathrel{\\square} g_{n_2}(\\beta_2)$ and as a result\n\\[f_{\\eta,\\bar g}(\\beta_1)=(\\eta(n_1),g_{n_1}(\\beta_1))=(\\rho(n_1-1),g_{n_1}(\\beta_1))\\mathrel{\\square} (\\rho(n_1-1),g_{n_2}(\\beta_2))=\\]\\[(\\rho(n_2),g_{n_2}(\\beta_2))=f_{\\rho,\\bar g}(\\beta_2).\\]\n\\item If $b_{\\beta_2}\\in C_n$ for some $n_1b_{\\beta_2}$ and $n_2=n-1(\\rho(n_2),g_{n_2}(\\beta_2))=\\]\\[f_{\\rho,\\bar g}(\\beta_2).\\]\n\n\\item If $b_{\\beta_2}\\in C_k$ then necessarily $n_2=k-1$ and $a_{\\beta_1}1$.\n\nFor any $\\varepsilon\\in S$, with $|J_\\varepsilon|>1$, we say that $J_\\varepsilon$ is\n\\begin{list}{\u2022}{}\n\\item of type $A$ if $\\langle \\bar a\\restriction J_\\varepsilon,\\bar b\\restriction J_\\varepsilon\\rangle$ is $k_\\varepsilon$-orderly, and \n\\item of type $B$ if $\\langle \\bar b\\restriction J_\\varepsilon,\\bar a\\restriction J_\\varepsilon\\rangle$ is $k_\\varepsilon$-orderly.\n\\end{list}\n\nLet $N<\\omega$ be some natural number. By replacing $I$ with an isomorphic copy, we may assume that $(\\alpha \\times (N\\times (2\\alpha +1)^k),<_{lex})\\subseteq (I,<)$. Let $\\varepsilon\\in S$ and let $I_\\varepsilon=\\{\\varepsilon\\}\\times (N\\times (2\\alpha+1)^{k})$. \n\nIf $|J_\\varepsilon|=1$ then we let $\\varphi_\\varepsilon:\\text{LSh}_1 (N)\\to ((I_\\varepsilon)^{\\underline{J_\\varepsilon}})_<$ be such that $\\varphi_\\varepsilon(\\eta)$ is the constant function giving $(\\varepsilon,0,\\dots,0)$.\n\nFor any $\\varepsilon\\in S$ let $E^\\varepsilon_{\\bar a,\\bar b}= E_{\\bar a\\restriction J_\\varepsilon, \\bar b\\restriction J_\\varepsilon}$ and $D^\\varepsilon_{\\bar a,\\bar b}= D_{\\bar a\\restriction J_\\varepsilon, \\bar b\\restriction J_\\varepsilon}$ and similarly $E^\\varepsilon_{\\bar b,\\bar a}$ and $D^\\varepsilon_{\\bar b,\\bar a}$.\n\nIf $|J_\\varepsilon|>1$ and $J_\\varepsilon$ is of type $A$ then let $\\varphi_\\varepsilon:\\text{LSh}_{k_\\varepsilon}(N)\\to(((I_\\varepsilon)^{\\underline{J_\\varepsilon}})_<,D^\\varepsilon_{\\bar a,\\bar b})$ be as supplied by Lemma \\ref{L:k-orderly}. I.e., for any $\\eta,\\rho\\in (N^{\\underline{k_\\varepsilon}})_<$, if $\\eta(i)=\\rho(i-1)$ for $01$) and $\\eta(0)<\\rho(0)$ (if $k_\\varepsilon=1$) then $\\otp(\\varphi_\\varepsilon(\\eta),\\varphi_\\varepsilon(\\rho))=\\otp(\\bar a\\restriction J_\\varepsilon,\\bar b\\restriction J_\\varepsilon)$. \n\nIf $|J_\\varepsilon|>1$ and $J_\\varepsilon$ is of type $B$ then let $\\widehat{\\varphi_\\varepsilon}:\\text{LSh}_{k_\\varepsilon}(N)\\to (((I_\\varepsilon)^{\\underline{J_\\varepsilon}})_<,D^\\varepsilon_{\\bar b,\\bar a})$ be as supplied by Lemma \\ref{L:k-orderly}. 
I.e., for any $\\eta,\\rho\\in (N^{\\underline{k_\\varepsilon}})_<$, if $\\eta(i)=\\rho(i-1)$ for $0<i\\leq k_\\varepsilon-1$ (if $k_\\varepsilon>1$) and $\\eta(0)<\\rho(0)$ (if $k_\\varepsilon=1$) then $\\otp(\\widehat{\\varphi_\\varepsilon}(\\eta),\\widehat{\\varphi_\\varepsilon}(\\rho))=\\otp(\\bar b\\restriction J_\\varepsilon,\\bar a\\restriction J_\\varepsilon)$. \n\nBy composing with the isomorphism $\\text{RSh}_{k_\\varepsilon}(N)\\to \\text{LSh}_{k_\\varepsilon}(N)$ mapping $(x_0,\\dots, x_{k_\\varepsilon-1})$ to $(N-1-x_{k_\\varepsilon-1},\\dots,N-1-x_0)$, we arrive at a directed graph homomorphism $\\varphi_\\varepsilon:\\text{RSh}_{k_\\varepsilon}(N)\\to (((I_\\varepsilon)^{\\underline{J_\\varepsilon}})_<,D^\\varepsilon_{\\bar b,\\bar a})$. By definition this map can be seen as a directed graph homomorphism $\\varphi_\\varepsilon:\\text{LSh}_{k_\\varepsilon}(N)\\to (((I_\\varepsilon)^{\\underline{J_\\varepsilon}})_<,D^\\varepsilon_{\\bar a,\\bar b})$.\n\nFor $1\\leq m\\leq k$ let $\\pi_m:(N^{\\underline k})_<\\to (N^{\\underline m})_<$ be the projection on the first $m$ coordinates. Note that it is a directed graph homomorphism $\\text{LSh}_k(N)\\to \\text{LSh}_m(N)$. We now define $\\varphi:\\text{LSh}_k(N)\\to ((I^{\\underline{\\alpha}})_<,D_{\\bar a,\\bar b})$. For any $\\eta\\in (N^{\\underline k})_<$, $\\varepsilon\\in S$ and $\\beta< \\alpha$, let $\\varepsilon(\\beta)\\in S$ be such that $\\beta\\in J_\\varepsilon$. We define\n\\[\\varphi(\\eta)(\\beta)= (\\varepsilon(\\beta),\\varphi_{\\varepsilon(\\beta)}(\\pi_{k_\\varepsilon}(\\eta))(\\beta)).\\]\nSince, for any $\\varepsilon\\in S$ and $\\eta\\in (N^{\\underline k})_<$, $\\varphi_{\\varepsilon}(\\pi_{k_\\varepsilon}(\\eta))$ is increasing, it is clear that $\\varphi(\\eta)$ is increasing as well.\n\nAssume that $\\eta,\\rho\\in \\text{LSh}_k(N)$ are connected, i.e., $\\eta(i)=\\rho(i-1)$ for $0<i\\leq k-1$ (if $k>1$) and $\\eta(0)<\\rho(0)$ (if $k=1$). It is routine to check that $\\otp(\\varphi(\\eta),\\varphi(\\rho))=\\otp(\\bar a,\\bar b)$. As a result, $\\varphi$ is also a graph homomorphism from $\\text{Sh}_k(N)$ to $((I^{\\underline \\alpha})_<,E_{\\bar a,\\bar b})$.\n\nWe have proved that for every $N<\\omega$ there exists a graph homomorphism $\\varphi_N:\\text{Sh}_k(N)\\to ((I^{\\underline \\alpha})_<,E_{\\bar a,\\bar b})$. By compactness, we may find a graph homomorphism $\\text{Sh}_k(\\omega)\\to \\mathcal{H}$ for some elementary extension $((I^{\\underline \\alpha})_<,E_{\\bar a,\\bar b})\\prec (\\mathcal{H},E)$. By Fact \\ref{F:homomorphism is enough}, there exists $m\\leq k$ such that $((I^{\\underline \\alpha})_<,E_{\\bar a,\\bar b})$ contains all finite subgraphs of $\\text{Sh}_m(\\omega)$.\n\\end{proof}\n\n\n\\subsection{Analyzing order-type graphs with large chromatic number}\\label{ss:orde-type-large-chrom}\nThe main goal of this section is to prove that every order-type graph of large enough chromatic number is $k$-orderly covered for some $k$, i.e. we will prove the following.\n\\begin{theorem}\\label{T:shift graphs in order-type-graphs}\nLet $\\alpha$ be an ordinal, $(\\theta,<)$ an infinite ordinal with $|\\alpha|^+ +\\aleph_0<\\theta$. Let $\\bar a\\neq \\bar b\\in (\\theta^{\\underline{\\alpha}})_<$ be some fixed sequences.\n\nLet $G=((\\theta^{\\underline{\\alpha}})_<,E_{\\bar a,\\bar b})$. If $\\chi(G)>\\beth_2(\\aleph_0)$ then $G$ contains all finite subgraphs of $\\text{Sh}_m(\\omega)$ for some $m\\in \\mathbb{N}$.\n\\end{theorem}\n\nIn order to achieve this we will need to analyze the order-type of two infinite sequences. 
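\n\nTo get a feeling for these graphs, consider the simplest possible case (a sanity check rather than part of the proof): let $\\alpha=1$, $\\bar a=\\langle 0\\rangle$ and $\\bar b=\\langle 1\\rangle$. Identifying $(\\theta^{\\underline{1}})_<$ with $\\theta$, for $\\bar c=\\langle c_0\\rangle$ and $\\bar d=\\langle d_0\\rangle$ we have $\\otp(\\bar c,\\bar d)=\\otp(\\bar a,\\bar b)$ if and only if $c_0<d_0$, so $D_{\\bar a,\\bar b}$ is the strict order on $\\theta$ and $E_{\\bar a,\\bar b}$ is the complete graph on $\\theta$, whose chromatic number is $|\\theta|$. The difficulty addressed by Theorem \\ref{T:shift graphs in order-type-graphs} lies in the case of infinite $\\alpha$, where the two sequences may interleave in complicated ways.\n\n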
The tools developed here, we believe, may be useful in their own right.\n\nWe fix some ordinals $\\alpha$ and $\\theta$ with $\\theta$ infinite and $\\bar a\\neq \\bar b\\in (\\theta^{\\underline{\\alpha}})_<$ increasing sequences.\n\nWe partition $\\alpha=J_0\\cup J_+\\cup J_-$, where\n\\[J_0=\\{\\beta< \\alpha: a_\\beta=b_\\beta\\},\\, J_+=\\{\\beta < \\alpha: a_\\betab_x$). By the assumptions, $X$ is a non empty initial segment of $[\\beta]_R$ and $Y=[\\beta]_R\\setminus X$ is non-empty convex.\n\nWe will show that both $X$ and $Y$ are closed under the relations defining $R$ and thus derive a contradiction to the minimality of $R$.\n\nAssume that $a_y=b_z$ with $y\\in X$ and $z\\in Y$. Since $X$ is an initial segment, $y0$ then $b_{\\delta^A_{n-1}}\\leq a_\\beta\\delta_n$ then $a_{\\delta_{n+1}}\\leq a_\\beta=b_\\gamma2^{\\aleph_0}$ then for any $A\\in (J_+\\cup J_-)\/R$, $n_A<\\omega$.\n\\end{lemma}\n\\begin{proof}\nWe assume that $A\\in J_+\/R$, the proof for $A\\in J_-\/R$ is similar. Assume towards a contradiction that $n_A=\\omega$. We will show that $\\chi(G)\\leq 2^{\\aleph_0}$.\n\nLet $S=\\{\\beta\\leq \\theta:\\cf(\\beta)=\\aleph_0\\}$. \nLet $\\langle \\zeta_l=\\zeta^A_l\\in A:l<\\omega\\rangle$ be the sequence supplied by Lemma \\ref{L:special-sequence}. \n\nFor any $\\gamma\\in S$ choose an increasing sequence of ordinals $\\langle \\alpha_{\\gamma,n}:n<\\omega\\rangle\\subseteq \\gamma$ with limit $\\gamma$. We define a coloring map $c:(\\theta^{\\underline{\\alpha}})_<\\to 2^{\\aleph_0\\times\\aleph_0}$.\nFor any $\\bar f\\in (\\theta^{\\underline{\\alpha}})_<$ let $\\gamma(\\bar f)=\\sup\\{f_{\\zeta_l}:l<\\omega\\}\\in S$ and\n\\[c(\\bar f)=\\{(l,n):l,n<\\omega,\\, f_{\\zeta_l}<\\alpha_{\\gamma(\\bar f),n}\\}.\\]\nTo show that it is a legal coloring, let $\\bar f,\\bar g\\in (\\theta^{\\underline{\\alpha}})_<$ such that $\\otp(\\bar f,\\bar g)=\\otp(\\bar a,\\bar b)$.\nBy assumption $f_{\\zeta_l}\\beth_2(\\aleph_0)$ then the set $\\{n_A: A\\in (J_+\\cup J_-)\/R\\}$ is bounded.\n\\end{lemma}\n\\begin{proof}\nBy Lemma \\ref{L:each n_A is finite}, for any $A\\in (J_+\\cup J_-)\/R$, $n_A<\\omega$. We will show that $\\{n_A: A\\in J_+\/R\\}$ and $\\{n_A: A\\in J_-\/R\\}$ are both bounded.\n\nAssume that $\\{n_A: A\\in J_+\/R\\}$ is unbounded. Let $\\{A_\\varepsilon\\in J_+\/R:\\varepsilon<\\omega\\}$ be a family of convex equivalence classes such that $\\varepsilon\\chi\\}\\in \\mathcal{D}$ then either $\\Omega^\\textbf{b}_t$ is trivial or $\\{\\varepsilon<\\kappa: h^\\textbf{b}_t(\\varepsilon) \\chi\\}\\notin \\mathcal{D}$ then $h^\\textbf{b}_t=h^\\textbf{a}_{g(t)}$ and $\\Omega^\\textbf{b}_t=\\Omega^\\textbf{a}_{g(t)}$.\n\\end{enumerate}\n\nLastly, for $t\\in S_\\textbf{b}$, if $\\rho^\\textbf{a}_{g(t)}\\in \\chi^{\\xi}$, with $\\xi<\\sigma$, then $\\rho^\\textbf{b}_t\\in \\chi^{\\xi+1}$.\n\\end{proposition}\n\\begin{proof}\nLet $S_1=\\{s\\in S_\\textbf{a}: \\Omega^\\textbf{a}_s \\text{ is non-trivial and } (\\forall^{\\mathcal{D}} \\varepsilon<\\kappa)(\\cf(h^\\textbf{a}_s(\\varepsilon))>\\chi\\}$ and $S_0=S_\\textbf{a}\\setminus S_1$. Fix any $s\\in S_1$. Let $A_s=\\{\\varepsilon<\\kappa: \\cf(h^\\textbf{a}_s(\\varepsilon))>\\chi\\}$, so $A_s\\in \\mathcal{D}$.\n\nLet $\\mathcal{D}_s=\\{D\\in \\mathcal{D}: D\\subseteq A_s\\}$ be the induced ultrafilter on $A_s$. Consider the ultraproduct $\\prod_{\\varepsilon\\in A_s}h^\\textbf{a}_s(\\varepsilon)\/\\mathcal{D}_s$. 
We may consider it as a linearly ordered set, ordered by $<_{\\mathcal{D}_s}$.\n\n\\begin{claim}\\label{C: sequence in ultraproduct}\nThere exists a sequence $H_s=\\langle h_{s,\\beta}\\in \\mathcal{H}:\\beta<\\beta_s\\rangle$ satisfying\n\\begin{enumerate}\n\\item for all $\\varepsilon\\in A_s$ and $\\beta<\\beta_s$, $h_{s,\\beta}(\\varepsilon)0$. If $Y_{s,f}\\notin \\mathcal{D}$ then $\\{\\varepsilon<\\kappa: |\\{i\\in J_\\varepsilon: f(i)\\geq h_{s,\\beta_s(f)}(\\varepsilon)\\}|\\leq n_s(f)-1\\}\\in \\mathcal{D}$. So $\\beta_{s,n_s(f)-1}\\leq \\beta_s(f)=\\beta_{s,n_s(f)}(f)$, contradiction.\n\\end{claimproof}\n\nFor $s\\in S_1$, $\\beta<\\beta_s$ and $n<\\omega$, let $\\Omega_{(s,\\beta,n)}=\\{f\\in \\Omega^\\textbf{a}_s: \\beta_s(f)=\\beta,\\, n_s(f)=n\\}$.\nLet $S_\\textbf{b}=\\{(s,\\beta,n):s\\in S_1,\\, \\beta< \\beta_s,\\, n<\\omega,\\, \\Omega_{(s,\\beta,n)}\\neq\\emptyset \\}\\cup S_0$ and let $g:S_\\textbf{b}\\to S_\\textbf{a}$ be the function defined by $g(s,\\beta,n)=s$ for $s\\in S_1$ and $g(s)=s$ otherwise. For any $s\\in S_0$ let $\\Omega^\\textbf{b}_s=\\Omega^\\textbf{a}_s$, $h^\\textbf{b}_s=h^\\textbf{a}_s$ and $\\rho^\\textbf{b}_s={\\rho^\\textbf{a}_s}^\\frown\\langle 0\\rangle $. For $s\\in S_1$, $\\beta<\\beta_s$ and $n<\\omega$, we set $\\Omega^\\textbf{b}_{(s,\\beta,n)}=\\Omega_{(s,\\beta,n)}$, $\\rho^\\textbf{b}_{(s,\\beta,n)}={\\rho^\\textbf{a}_s}^\\frown \\langle n\\rangle$ and \n\\[\nh^\\textbf{b}_{(s,\\beta,n)}=\n\\begin{cases}\nh_{s,\\beta} & n=0 \\\\\nh^\\textbf{a}_s & n>0\n\\end{cases}\n.\\]\n\n\\begin{claim}\n$\\textbf{b}$ is an approximation and $\\textbf{a}\\trianglelefteq_g \\textbf{b}$.\n\\end{claim}\n\\begin{claimproof}\nWe check that $\\textbf{b}$ satisfies $(1)$ and $(2)$ from the definition. $(1)$ follows by the choice of $h^\\textbf{b}_{(s,\\beta,n)}$.\n\nWe are left with $(2)$. Let $t\\in S_0$ and $(s,\\beta,n)$ with $s\\in S_1$. If $\\rho^\\textbf{b}_t=\\rho^\\textbf{b}_{(s,\\beta,n)}$ then $\\rho^\\textbf{a}_t=\\rho^\\textbf{a}_s$ so the result follows since $\\textbf{a}$ is an approximation. Let $(s_1,\\beta_1,n_1)\\neq (s_2,\\beta_2,n_2)\\in S_\\textbf{b}$. If $\\rho^\\textbf{b}_{(s_1,\\beta_1,n_1)}=\\rho^\\textbf{b}_{(s_2,\\beta_2,n_2)}$ then $\\rho^\\textbf{a}_{s_1}=\\rho^\\textbf{a}_{s_2}$. If $s_1\\neq s_2$ then the results follows since $\\textbf{a}$ is an approximation. So assume that $s=s_1=s_2$ and $n=n_1=n_2$. Assume that $\\beta_1<\\beta_2<\\beta_s$ and let $f_1\\in \\Omega^\\textbf{b}_{(s,\\beta_1,n)}$ and $f_2\\in \\Omega^\\textbf{b}_{(s_,\\beta_2,n)}$. We need to show that $f_1\\not\\mathrel{R} f_2$ and $f_2\\not\\mathrel{R} f_1$.\n\nBy choice of $\\beta_1=\\beta_s(f_1)$ and $n=n_s(f_1)$, $B_n(f_1,h_{s,\\beta_1})\\in \\mathcal{D}$. On the other hand, since $\\beta_1<\\beta_2=\\beta_s(f_2)\\leq \\beta_{s,n+2}(f_2)$, $B_{n+2}(f_2,h_{s,\\beta_1})\\notin \\mathcal{D}$. I.e. $\\kappa\\setminus B_{n+2}(f_2,h_{s,\\beta_1})\\in \\mathcal{D}$. Let $\\varepsilon\\in B_n(f_1,h_{s,\\beta_1})\\cap (\\kappa\\setminus B_{n+2}(f_2,h_{s,\\beta_1}))$ and let $i_l$ be the $(n+l)$-th element of $J_\\varepsilon$ from the end, for $l=1,2,3$.\nAs a result,\n\\[f_1(i_3)0$, $\\Omega^\\textbf{b}_{(s,\\beta,n)}$ is trivial. Let $f_1,f_2\\in \\Omega^\\textbf{b}_{(s,\\beta,n)}$. 
By Claim \\ref{C:=n_s(f)}, and the assumptions on $\\mathcal{D}$, we may find $\\varepsilon<\\kappa$ such that for $l=1,2$ the following holds\n\\begin{enumerate}\n\\item $|\\{i\\in J_\\varepsilon: f_l(i)\\geq h_{s,\\beta}(\\varepsilon)\\}|=n$ and\n\\item $|J_\\varepsilon|>n$.\n\\end{enumerate}\nSince $n>0$, letting $i$ be the $n+1$-th element from the end of $J_\\varepsilon$ we have that $f_l(i)\\chi\\}\\in \\mathcal{D}$. In particular, since by Proposition \\ref{P:approx-small-cof}(2), $h^\\textbf{a}_{gf(t)}=h^\\textbf{b}_{f(t)}$, $\\{\\varepsilon<\\kappa:\\cf(h^\\textbf{b}_{f(t)}(\\varepsilon))>\\chi\\}\\in \\mathcal{D}$. Assuming that $\\Omega^\\textbf{c}_{t}$ is not trivial, the by Proposition \\ref{P:approx-large-cof}(1), $\\{\\varepsilon<\\kappa: h^\\textbf{c}_t(\\varepsilon)2^{2^{<(\\kappa+\\aleph_1)}}+|T|\\cdot |U|$ be a regular cardinal.\n\nIf $\\chi(G)\\geq \\varkappa$ then $G$ contains all finite subgraphs of $\\text{Sh}_n(\\omega)$ for some $n\\in \\mathbb{N}$.\n\\end{theorem}\n\\begin{proof}\n\nBy Lemma \\ref{L:underlying set of Inf-EM is Inf-EM} there exists some $(\\widehat \\alpha,\\theta)$-indiscernible sequence $b=\\langle b_{i,\\eta}:i\\in \\widehat{U},\\, \\eta\\in (\\theta^{\\underline{\\widehat{\\alpha}_i}})_<\\rangle$ whose underlying set is $V$, where $\\widehat \\alpha\\in \\kappa^{\\widehat U}$ and $\\widehat U$ is a set such that $|\\widehat{U}|\\leq |T|\\cdot |U|\\cdot \\kappa^{<\\kappa}.$ \n\nLet $B=\\{(i,\\eta):i\\in \\widehat U,\\, \\eta\\in (\\theta^{\\underline{\\widehat{\\alpha}_i}})_<\\}$ and $R=\\{((i_1,\\eta_1),(i_2,\\eta_2)): b_{i_1,\\eta_1} \\mathrel{E} b_{i_2,\\eta_2}\\}$. Since $(i,\\eta)\\mapsto b_{i,\\eta}$ is surjective and $((i_1,\\eta_1),(i_2,\\eta_2))\\in R \\iff (b_{i_1,\\eta_1} b_{i_2,\\eta_2})\\in E$, $\\chi(B,R)=\\chi(G)\\geq \\varkappa$ (by Fact \\ref{F:basic prop}(4)). Moreover, by Fact \\ref{F:homomorphism is enough} it is enough to prove the conclusion for the graph $(B,R)$.\n\nFor any $i\\in \\widehat{U}$ let $B_i=\\{(i,\\eta):\\eta\\in (\\theta^{\\underline{\\alpha_i}})_<\\}$. By Fact \\ref{F:basic prop}(1), since $B=\\bigcup_{i\\in \\widehat{U}} B_i$, $\\varkappa\\leq \\chi(B,R)\\leq \\sum_{i\\in \\widehat{U}}\\chi(B_i,R\\restriction B_i)$. By definition\\footnote{As $2^{<\\kappa}=\\sup\\{2^\\mu:\\mu<\\kappa\\}$, if $2^{<\\kappa}<\\kappa$ then $2^{2^{<\\kappa}}\\leq 2^{<\\kappa}$.} $\\kappa\\leq 2^{<\\kappa}$ which implies $\\kappa^{<\\kappa}\\leq \\kappa^{\\kappa}=2^{\\kappa}\\leq 2^{2^{<\\kappa}}$ and thus $\\varkappa>|\\widehat U|$. Since $\\varkappa$ is a regular cardinal there exists $i\\in \\widehat U$ with $\\chi(B_i,R\\restriction B_i)\\geq \\varkappa$. As a result, it is enough to prove the conclusion for the graph $((\\theta^{\\underline{\\widehat{\\alpha}_i}})_<,S)$, where $S=\\{(\\eta_1,\\eta_2):(i,\\eta_1)\\mathrel{R} (i,\\eta_2)\\}$.\n\nFor $P=\\{\\otp(\\bar a,\\bar b):(\\bar a,\\bar b)\\in S\\}$, by $(\\widehat{\\alpha},\\theta)$-indiscernibility $S=\\bigcup_{p\\in P} \\{(\\bar c,\\bar d)\\in ((\\theta^{\\underline{\\widehat{\\alpha}_i}})_<)^2 :\\otp(\\bar c,\\bar d)=p\\vee \\otp(\\bar d,\\bar c)=p\\}$. \n\nBy Fact \\ref{F:basic prop}(2), \n\\[\\varkappa\\leq \\chi((\\theta^{\\underline{\\widehat{\\alpha}_i}})_<,S))\\leq \\prod_{p\\in P}\\chi ((\\theta^{\\underline{\\widehat{\\alpha}_i}})_<,P_p),\\] where $P_p=\\{(\\bar c,\\bar d)\\in ((\\theta^{\\underline{\\widehat{\\alpha}_i}})_<)^2 :\\otp(\\bar c,\\bar d)=p\\vee \\otp(\\bar d,\\bar c)=p\\}$. 
\nAssume towards a contradiction that $\\chi ((\\theta^{\\underline{\\widehat{\\alpha}_i}})_<,P_p)\\leq \\beth_2(\\aleph_0)$ for all $p\\in P$. Hence $\\varkappa\\leq \\beth_2(\\aleph_0)^{2^{|\\alpha_i|+\\aleph_0}}\\leq \\beth_2(|\\alpha_i|+\\aleph_0)$. Since $|\\alpha_i|+\\aleph_0<\\kappa+\\aleph_1$ and $\\varkappa>2^{2^{<(\\kappa+\\aleph_1)}}$, we derive a contradiction.\n\nConsequently, there exists $p\\in P$ with $\\chi ((\\theta^{\\underline{\\widehat{\\alpha}_i}})_<,P_p)>\\beth_2(\\aleph_0)$ and we may conclude by Theorem \\ref{T:shift graphs in order-type-graphs}.\n\\end{proof}\n\n\\begin{corollary}\\label{C:main corollary}\nLet $G=(V,E)$ be a stable graph. If $\\chi(G)>\\beth_2(\\aleph_0)$ then $G$ contains all finite subgraphs of $\\text{Sh}_n(\\omega)$ for some $n\\in \\mathbb{N}$.\n\\end{corollary}\n\\begin{proof}\nLet $G=(V,E)$ be a stable graph, $T=Th(G)$ and let $T^{sk}$ be a complete expansion of $T$ with definable Skolem functions in a language $\\mathcal{L}^{sk}$ containing $E$. \n\nWe apply Theorem \\ref{T:existence of gen em model in stable} with $\\kappa=\\aleph_1$, $\\mu=2^{\\aleph_1}$ and $\\lambda=2^{\\max\\{\\mu,|G|\\}}$. We get an infinitary EM-model $\\mathcal{G}^{sk}\\models T^{sk}$ based on $(\\alpha,\\lambda)$, where $\\alpha \\in \\kappa^U$ for some set $U$ of cardinality at most $\\mu$, such that $\\mathcal{G}=\\mathcal{G}^{sk}\\restriction \\{E\\}$ is saturated of cardinality $\\lambda$. Since $\\mathcal{G}$ is saturated of cardinality $>|G|$, we may embed $G$ as an elementary substructure of $\\mathcal{G}$. Since $\\chi(\\mathcal{G})\\geq \\chi(G)>\\beth_2(\\aleph_0)$ and the conclusion is an elementary property, it is enough to show it for $\\mathcal{G}$. \n\nSince $2^{2^{<(\\kappa+\\aleph_1)}}+|T|+|U|\\leq 2^{2^{\\aleph_0}}+\\aleph_0+\\mu\\leq \\beth_2(\\aleph_0)+2^{\\aleph_1}= \\beth_2(\\aleph_0)$, Theorem \\ref{T:Taylor for infinitary EM-models} applies with $\\theta=\\lambda$ and $\\varkappa=(\\beth_2(\\aleph_0))^+$.\n\\end{proof}\n\n\n\\bibliographystyle{alpha}\n\n\\section{Introduction}\n\tElectroencephalography (EEG) is a widely used tool for the assessment of neural activity in the human brain \\cite{Brette2012}. To estimate the area of the brain responsible for the measured data, one has to simulate the electric potential induced by hypothetical current sources in the brain, i.e. the EEG forward problem has to be solved. While quasi-analytical solutions to the differential equation underlying the forward problem exist, these are only available in simplified geometries such as the multi-layer sphere model \\cite{DeMunck1993}. One thus requires numerical methods to incorporate accurate representations of the head's shape and volume conduction properties. Popular approaches are the boundary element method (BEM) \\cite{Mosher1999,Gramfort2011,Makarov2020} and the finite element method (FEM) \\cite{Medani2015,Vallaghe2010}. Here, we will focus on the FEM due to its flexibility in modeling complex geometries with inhomogeneous and anisotropic compartments \\cite{He2020,Vermaas2020,Beltrachini2018a,VanUitert2004,Schimpf2002,Wolters2007,Nuesing2016}. Efficient solvers and the transfer matrix approach \\cite{Lew2009,Wolters2004} allow for significantly reduced computational costs.\n\t\n\tWhen employing FEM, one usually chooses between a hexahedral or a tetrahedral discretization of the head. Both choices come with their own strengths and limitations. 
The mesh creation requires a classification of the MRI into tissue types. This segmentation data often comes in the form of binary maps with voxels of around 1mm resolution, allowing for quick and simple hexahedral mesh generation. However, as head tissue surfaces are smooth, approximating them with regular hexahedra is bound to be inaccurate. While methods for geometry adaptation exist \\cite{Wolters2007}, the resulting meshes still have a (reduced) angular pattern. Furthermore, when applying a standard continuous Galerkin FE scheme, areas with very thin compartments may suffer from leakage effects where current can bypass the insulating effects of the skull \\cite{Sonntag2013}. To alleviate this, flux based methods, like the discontinuous Galerkin method, offer a robust alternative \\cite{Engwer2017a}. These however severely increase the number of DOF and thus the necessary computational effort.\n\t\n\tSurface-based tetrahedral FEM approaches on the other hand are able to accurately model the curvature of smooth tissue surfaces. Creating high quality tetrahedra, e.g. ones fulfilling a Delaunay criterion, requires tissue surface representations in the form of triangulations first. These triangulations have to be free of self-intersections and are often nested, usually leading to modeling inaccuracies such as neglecting skull holes or an artificial separation of gray matter and skull. Therefore, we will not discuss surface-based tetrahedral FEM approaches throughout this work. \n\n In \\cite{Rice2013}, the impact of prone vs. supine subject positioning on EEG amplitudes was investigated. In the small group study, average differences of up to eighty percent were found. These were accompanied by differences in MRI-based CSF-thickness estimation of up to thirty percent, underlining the importance of correctly modeling CSF thickness and areas of contact between skull and brain surfaces.\n\t\n\tRecently, an unfitted discontinuous Galerkin method (UDG) \\cite{Bastian2009} was introduced to solve the EEG forward problem \\cite{Nuesing2016}. Rather than working with mesh elements that are tailored to the geometry, it uses a background mesh which is cut by level set functions, each representing a tissue surface. It was shown to outperform the accuracy of a discontinuous Galerkin approach on a hexahedral mesh, while not being limited by the assumptions necessary to create tetrahedral meshes.\n\n\tExtending the ideas of the UDG method, this paper introduces a multi-compartment formulation of the CutFEM \\cite{Burman2015} for EEG source analysis. Compared to UDG, it operates on a simpler trial function space and adds a ghost penalty based on \\cite{Burman2010}, stabilizing small mesh elements. \n \n The paper is structured as follows. After introducing the theory behind CutFEM, three successively more realistic scenarios are tested. These scenarios include a multi-layer sphere model, followed by realistic brain tissues embedded in spherical skull and scalp compartments. Finally, a fully realistic five compartment head model is used for source analysis of the P20\/N20 component of measured somatosensory evoked potentials (SEP). Comparison results from different FEM and meshing approaches will be considered throughout the scenarios.\n\t\n\t\\section{Theory}\n\t\\subsection{CutFEM}\n\tDeviating from classical, fitted FEM approaches, where the mesh cells resolve tissue boundaries, CutFEM uses a level set based representation of domain surfaces. 
\n\tLet $\\Omega = \\bigcup_i \\Omega_i$ be the head domain, divided into $m$ disjoint open subdomains, e.g. gray matter, white matter, CSF, skull, skin. The level set function for compartment $i$ is then defined as\n\t$$ \\Phi_i(x) \\left\\{ \\begin{array}{ l l }\n\t< 0, \\text{ if $x \\in \\Omega_i$} \\\\\n\t= 0, \\text{ if $x \\in \\partial \\Omega_i$} \\\\\n\t> 0, \\text{ else}\n\t\\end{array} \\right.\n\t$$\n\tand $\\mathcal{L}_i = \\{x \\in \\Omega : \\Phi_i(x) = 0\\}$ denotes its (zero) level set.\n\tWe proceed by defining a background domain $\\hat{\\Omega} \\subset \\mathbb{R}^3$ covering the head domain $\\Omega$. This background is then tessellated, yielding a regular hexahedral mesh $\\mathcal{T}_h(\\hat{\\Omega})$, the fundamental or background mesh.\n\tTaking on the level set representation, submeshes $\\mathcal{T}_h^i \\subset \\mathcal{T}_h(\\hat{\\Omega})$ are created from the background mesh, containing all cells that have at least partial support within the respective subdomain $\\Omega_i$. This results in an overlap of submeshes at compartment interfaces. For each submesh we define a conforming $\\mathbb{Q}_1$ space $V_h^i$. Thus, up to this point, each submesh is treated the way a conforming Galerkin method would treat the entire mesh.\n\t\n\tThe difference then lies in restricting the trial and test functions to their respective compartment, effectively cutting them off at the boundary and giving rise to the name CutFEM.\n \n A fundamental mesh cell intersected by a level set $\\mathcal{L}_i$ is called a cut cell. Cut cells belong to multiple submeshes and thus carry more DOF. On the other hand, compared to classical conforming discretizations, a coarser mesh resolution can be chosen, as the mesh does not have to follow small geometric features.\n\n\tAs the trial functions are only continuous on their respective compartment and cut off at the boundary, using them to approximate the electric potential requires internal coupling conditions at the tissue interfaces.\n\tWe define the internal skeleton as the union of all subdomain interfaces\n\t\\begin{equation}\n \\begin{split}\t \n\t\\Gamma = \\bigcup \\Big\\{\\bar\\Omega_i \\cap \\bar\\Omega_j :\n ~i \\neq j,~\\mu_{d-1}(\\bar\\Omega_i \\cap \\bar\\Omega_j)>0 \\Big\\}.\\label{eq:1}\n\t\\end{split}\n \\end{equation}\n\tHere, $\\mu_{d-1}$ is the $(d-1)$-dimensional measure in $d$-dimensional space. For two sets $E,F$ sharing a common interface (an element of $\\Gamma$) and a, possibly discontinuous, function $u$ operating on them, we can define a scalar- or vector-valued jump operator as $\\llbracket u \\rrbracket := u|_E \\cdot n_E + u|_F \\cdot n_F$ with $n_E, n_F$ the outer unit normals of the respective sets. Additionally, a (skew-)weighted average can be stated as\n\t\\begin{align}\n\t\\{u \\} &= \\omega_E u|_E + \\omega_F u|_F\\label{eq:2}\\\\\n\t\\{u \\}^* &= \\omega_F u|_E + \\omega_E u|_F,\\label{eq:3}\n\t\\end{align}\n\twith \n\t$\\omega_E = \\frac{\\delta_E}{\\delta_E + \\delta_F}$ and\n\t$\\delta_E = n_E^t\\sigma_E n_E$.\n\tHere, $\\sigma_E$ refers to the symmetric $3 \\times 3$, positive definite electric conductivity tensor on $E$. 
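\n\tAs a brief numerical illustration of these weights (our own example, using representative conductivities from Table 1): at a skull-scalp interface with isotropic $\\sigma_{skull}=0.01$ S\/m and $\\sigma_{scalp}=0.43$ S\/m we get $\\delta_{skull}=0.01$ and $\\delta_{scalp}=0.43$, hence $\\omega_{skull}=0.01\/0.44\\approx 0.023$ and $\\omega_{scalp}\\approx 0.977$, so the average $\\{\\sigma\\nabla u\\}$ is dominated by the scalp-side flux. The associated harmonic-type mean $2\\delta_{skull}\\delta_{scalp}\/(\\delta_{skull}+\\delta_{scalp})\\approx 0.02$ stays on the order of the smaller conductivity, which is what makes such weighted interior penalty formulations robust for strongly heterogeneous $\\sigma$, cf. \\cite{DiPietro2012}.\n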
Note that $\\llbracket uv \\rrbracket = \\llbracket u\\rrbracket \\{v \\}^* + \\{u \\} \\llbracket v \\rrbracket$. The purpose of these definitions will become clear when deriving the weak formulation for our forward model.\n\t\n\tTypically, the EEG forward problem for the electric potential $u$ induced by a neural source term $f$ is derived from the quasi-static formulation of Maxwell's equations \\cite{Brette2012}.\n\n\t\\begin{align}\n\t\\nabla \\cdot \\sigma \\nabla u &= f, \\; \\text{ in } \\bigcup\\limits_i \\Omega_i\\label{eq:4}\\\\\n \\langle \\sigma \\nabla u,n\\rangle &= 0, \\; \\text{ on } \\partial\\bar{\\Omega}\\label{eq:5}\\\\\n \\intertext{In addition, we require continuity of the electric potential and of the electric current:}\n \\llbracket u \\rrbracket &= 0, \\; \\text{ on } \\Gamma\\label{eq:6}\\\\\n\t\\llbracket \\sigma \\nabla u \\rrbracket &= 0, \\; \\text{ on } \\Gamma. \\label{eq:7}\n\t\\end{align} \n\n\tAs trial and test space we employ $V_h$, the direct sum of all $V_h^i$. \n\t\n\tThe weak formulation can be obtained by multiplying with a test function, integrating and applying subdomain-wise integration by parts. This yields\n\t$$\n\t\\sum_{i}(\\int_{ \\Omega_i} \\sigma \\nabla u_h^i \\nabla v_h^i - f v_h^i dx) - \\int_{\\Gamma} \\{\\sigma \\nabla u_h \\}\\llbracket v_h\\rrbracket dS = 0,\n\t$$\n\twhere the jump formula for a product of two functions as well as \\eqref{eq:7} were used.\n\tHere $u_h^i$ is the restriction of $u_h \\in V_h$ to $V_h^i$. A symmetry term $ \\pm \\int_{\\Gamma} \\{\\sigma \\nabla v_h \\}\\llbracket u_h\\rrbracket dS$ is added to end up with either a symmetric or a non-symmetric bilinear form.\n\t\n\tTo incorporate \\eqref{eq:6}, a Nitsche penalty term \\cite{Nitsche1971} is added, which weakly couples the domains and ensures coercivity \\cite{Burman2015}:\n\t\\begin{equation}\n\tP_{\\gamma}(u,v) = \\gamma \\nu_k\\int_{\\Gamma} \\frac{\\hat{\\sigma}}{\\hat{h}} \\llbracket u_h\\rrbracket \\llbracket v_h\\rrbracket dS.\\label{eq:8}\n\t\\end{equation}\n\n Here, $\\nu_k, \\hat{h}, \\hat{\\sigma}$ are scaling parameters based on the ratio of cut cell area on each interface's side, the dimension, the degree of the trial functions used and the conductivity. See \\cite{DiPietro2012} for a further discussion of these. $\\gamma$ is a free parameter to be discussed later. \n\t\n\tA challenge is the shape of the cut cells. Heavily distorted and sliver-like snippets significantly impact the stability and lead to a deterioration of the stiffness matrix condition number. To alleviate this, a ghost penalty \\cite{Burman2010} term is used, which weakly couples the solution on the snippets to their neighbors. Said coupling takes place on the interfaces of all the fundamental mesh cells cut by a level set. Let\n \\begin{equation}\n \\begin{split}\n\t\\hat{\\Gamma} = \\bigcup \\Big\\{\\partial E_i: E_i \\in \\mathcal{T}_h,\\, E_i \\cap \\Gamma \\neq \\emptyset \\Big\\}.\\label{eq:9}\n \\end{split}\n\t\\end{equation}\n\tNote the difference between $\\Gamma$ and $\\hat{\\Gamma}$. 
$\\Gamma$ operates on compartment interfaces, $\\hat{\\Gamma}$ on faces of the fundamental mesh.\n\tThe ghost penalty is then defined as\n\t\\begin{equation}\n\ta^G (u_h,v_h) = \\gamma_G \\int_{\\hat{\\Gamma}} \\hat{h} \\llbracket \\sigma \\nabla u_h \\rrbracket \\llbracket \\nabla v_h \\rrbracket dS,\\label{eq:10}\n\t\\end{equation}\n\twhere\n\t$\\gamma_G$ is again a free parameter, usually a couple orders of magnitude smaller than $\\gamma$.\n\t\n\tThe weak CutFEM EEG-forward problem can now be stated as finding the electric potential $u_h \\in V_h$ such that\n\t\\begin{equation}\n\ta(u_h,v_h) + a_{n\/s}^N(u_h,v_h) + a^G(u_h,v_h) = l(v_h) \\; \\forall v_h \\in V_h,\\label{eq:11}\n\t\\end{equation}\n\twith\n\t\\begin{align*}\n a(u_h,v_h) &= \\sum_{i} \\int_{ \\Omega_i} \\sigma \\nabla u_h^i \\nabla v_h^idx,\\\\\n l(v_h) &= \\sum_{i} \\int_{ \\Omega_i} f v_h^idx\n \\end{align*}\n\tand\n\t\\begin{align*}\n\ta_{n\/s}^{N}(u_h,v_h) := &- \\int_{\\Gamma}\\{\\sigma \\nabla u_h \\}\\llbracket v_h\\rrbracket \\pm \\int_{\\Gamma} \\{\\sigma \\nabla v_h \\} \\llbracket u_h \\rrbracket dS\\\\\n\t &+ \\gamma \\nu_k\\int_{\\Gamma} \\frac{\\hat{\\sigma}}{\\hat{h}} \\llbracket u_h\\rrbracket \\llbracket v_h\\rrbracket dS.\n\t\\end{align*}\n\tIn the following we will refer to these two variants as NWIPG\/SWIPG,\n short for non-symmetric\/symmetric weighted interior penalty Galerkin method.\n\t\n\tIn \\cite{Oden1998,Guzman2009} it was shown that the non-symmetric DG-methods may result in a sub-optimal convergence rate in the L2-norm (full convergence in H1), a result that also extends to CutFEM\\cite{Burman2012}.\n\tHowever, while SWIPG is coercive only if $\\gamma$ is chosen sufficiently large\\cite{Burman2012}, NWIPG does not have such a limitation.\n\tTherefore, we will employ the NWIPG method throughout this paper due to its stability with regard to the selection of $\\gamma$.\n \n\t\\paragraph{Integration over the cut domains}\t\n\tFundamental cells that are cut by level sets, the cut cells, can be integrated over by employing a topology preserving marching cubes algorithm (TPMC) \\cite{Engwer2017}. The initial cell is divided into a set of snippets, each completely contained within one subdomain. These snippets are of a simple geometry and therefore easy to integrate over. Thus, integrals over the fundamental cell or subdomain boundaries are replaced by integrals over the snippets or their boundaries. The trial functions are effectively cut off at the compartment boundaries.\n\t\n\tSee Fig. 1 for an overview of the reconstruction steps. Note that the trial functions are coupled to their respective submesh, not to the TPMC reconstruction of the domain. The latter only determines the area over which the functions are integrated.\n\t\n\t\\begin{figure}[t!]\n\t\t\\centering\n\t\n\t\n\t\n\t\t\\subfloat{\\includegraphics[width=1\\linewidth]{graphics\/PosterTPMC.png}}\n\t\t\\caption[Level sets over fundamental mesh and TPMC reconstruction]{Left: Fundamental mesh with two spherical level sets, topology preserving marching cubes reconstruction. Center: Overlapping submeshes for the two compartments enclosed by the level sets. Right: trial function space for the inner compartment with white dots representing degrees of Freedom, cut area that the DOF are restricted to.}\n\t\\end{figure}\n\t\n\tStarting on the fundamental mesh, the algorithm is applied once per level set. Each following iteration is applied on the cut cells of the previous iteration, i.e. 
first the fundamental mesh is cut, then the resulting snippets are cut. This ensures the correct handling of mesh cells that are cut by multiple level sets.\n\n\t\\paragraph{Source model and transfer matrix}\n\tFollowing the principle of St. Venant, the source term $f$ will be approximated by a set of monopoles. Where fitted FEM use mesh vertices as monopole locations, this is not feasible for CutFEM as fundamental cells may have vertices not belonging to the source compartment. Rather, only gray matter cut cells are used and the locations are based on a Gauss-Legendre quadrature rule. For more information on the Venant source model, see \\cite{Medani2015}, \\cite{Buchner1997}.\n\t\n\tFor an accurate source analysis it is necessary to compute the EEG-forward solution for a large number, i.e. tens of thousands, of possible sources. However, the electric potential induced by a source is only of interest at a set of predetermined points, namely the electrodes at the scalp. So, rather than solving \\eqref{eq:11} for each source individually, a transfer matrix approach \\cite{Gencer2004,Wolters2004} is employed, significantly reducing the amount of computation time needed.\n\t\n\n\n\n\n\n\t\\section{Methods}\n\t\n\n\n\t\\subsection{Head models}\n\tFor numerical evaluations three progressively more realistic scenarios were created, two sphere models, one of which contains realistic brain tissues, and a five compartment model created from anatomical data. For each model, we will compare CutFEM and a geometry-adapted hexahedral CG-FEM approach (\\textit{Hex}) with a node shift for the geometry-adaptation of 0.33 \\cite{Wolters2007}. In the first model, the UDG approach of \\cite{Nuesing2016} will also be added to the comparisons. To balance computational load, \\textit{Hex} will use 1mm meshes whereas for CutFEM and UDG we use a 2mm background mesh.\n\t\\paragraph{Shifted spheres}\n\tThe first scenario contains the four spherical compartments brain, CSF, skull and scalp. The brain sphere will be shifted to one side, simulating a situation where the subject lies down and the brain sinks to the back of the skull. Conductivities were chosen according to \\cite{McCann2019} with the exception that CSF and brain use the same conductivity. In terms of volume conduction the model is thus indistinguishable from a 3-layer concentric sphere model and analytical solutions \\cite{DeMunck1993} are available as benchmark. Conductivity values and radii of the compartments can be found in Table 1.\n\t\\begin{table}[htp]\n\t\t\\begin{minipage}[c]{0.5\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{l|ccc}\n\t\t\t\t& Radius [mm] & Center [mm] & $\\sigma$ [S\/m] \\\\\n \\hline\n\t\t\t\tScalp & 92 & (127 127 127) & 0.43 \\\\\n\t\t\t\tSkull & 86 & (127 127 127) & 0.01 \\\\\n\t\t\t\tCSF & 80 & (127 127 127) & 0.33 \\\\\n\t\t\t\tBrain & 78 & (129 127 127) & 0.33 \\\\[0.5ex]\n\t\t\t\\end{tabular}\n\t\t\\end{minipage}\\hfill\n\t\t\\begin{minipage}[c]{0.5\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\caption[Radii, Center and Conductivities for the Shifted Sphere Model]{Radii, center and conductivities for the shifted sphere model.}\n\t\t\\end{minipage}\n\t\\end{table}\n\t\n\t TPMC was applied twice, once on the fundamental mesh and once on the resulting cut cells. Note that this additional refinement step does not change the number of trial functions of the model. 
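\n\t\n\tTo make the transfer matrix approach described above concrete, the following minimal sketch (our own illustration; the names and the dense linear algebra are ours, whereas in practice sparse matrices and iterative solvers are used) shows why only one linear solve per electrode is needed: with $K$ the $n\\times n$ stiffness matrix, $R$ the $s\\times n$ restriction of the potential to the $s$ electrodes and $T=RK^{-1}$ the transfer matrix, the electrode potentials of any source vector $f$ are $Tf$, and $T$ itself is obtained from the $s$ solves $K^t T^t = R^t$.\n\\begin{verbatim}\nimport numpy as np\n\ndef lead_field(K, R, sources):\n    # Transfer matrix T = R K^{-1}, obtained by solving\n    # K^t T^t = R^t: one linear solve per electrode row.\n    T = np.linalg.solve(K.T, R.T).T\n    # Electrode potentials of a source vector f are T @ f;\n    # no further solves are needed, however many sources there are.\n    return np.column_stack([T @ f for f in sources])\n\\end{verbatim}\n\n\t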
A total of 13,000 evaluation points were distributed evenly throughout the inner sphere and lead fields for both radial and tangential source directions were computed at each point. For CutFEM, a combination of $\\gamma = 16$ and $\\gamma_G = 0.1$ has shown promising results. For UDG, no ghost penalty was implemented and $\\gamma = 4$ was chosen, following \\cite{Nuesing2016}.\n\t\n\t\\paragraph{Spheres containing realistic brain}\n\tIn the previous section, the level set functions could be computed analytically up to an arbitrary accuracy. In a realistic scenario where the segmentation quality is limited by the MRI resolution as well as partial volume effects and MRI artefacts, this is not the case. An easy way to pass level sets to CutFEM lies in using tissue probability maps (TPM), a typical intermediate result \\cite{Ashburner2014} of segmentation which provides, for each voxel, the probability that it is located in a certain compartment. \n\t\n\tTo examine the performance of CutFEM when used together with TPMs, another sphere model is employed, this time containing realistic gray and white matter compartments obtained from MRI scans of a human brain. The subject was a healthy 24 year-old male from whom T1- and T2-weighted MRI scans were acquired using a 3 Tesla MR scanner (MagnetomTrio, Siemens, Munich, Germany) with a 32-channel head coil. For the T1, a fast gradient-echo pulse sequence (TFE) using water selective excitation to avoid fat shift (TR\/TE\/FW = 2300\/3.51 ms\/8\u00b0, inversion pre-pulse with TI = 1.1 s, cubic voxels of 1 mm edge length) was used. For the T2, a turbo spin echo pulse sequence (TR\/TE\/FA = 3200\/408 ms\/90\u00b0, cubic voxels, 1 mm edge length) was used. TPMs were extracted from both T1- and T2-MRI using SPM12 \\cite{Ashburner2014} as integrated into FieldTrip \\cite{Oostenveld2011}. For each voxel, the average of both TPMs was computed and a threshold probability of 0.4 was set as the zero level; a sketch of this construction is given below. \n\t\n\tThe inner skull surface was defined as the minimal sphere containing the entire segmented brain, with CSF filling the gaps. The spherical skull and scalp were chosen to have a thickness of 6mm. The same conductivities as before were used, with CSF, gray and white matter being identical, and again 200 sensors were placed on the scalp surface.\n\t\\paragraph{Realistic 5 compartment head model}\n\tAs an extension of the previous model, realistic 5 compartment head models were created using the same anatomical data, replacing the spherical skin, skull and CSF by realistic segmentations. Again, level sets were created from probability maps. To obtain smooth skull and scalp surfaces in the TPM case, binary maps of skull and skin were created following the procedure in \\cite{Antonakakis2020}. The level sets of skull\/skin were then calculated as an average of the binary map and the T1\/T2 TPM, again with a threshold of 0.4. Following \\cite{Antonakakis2020}, the level sets were cut off below the neck to reduce computational load while maintaining a realistic current flow below the skull.\n\tAgain, lead fields from hexahedral meshes were created for comparison. 
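\n\n\tThe following minimal Python sketch illustrates how a level set of the kind used here could be derived from two tissue probability maps; the averaging and the 0.4 threshold follow the text above, while the signed-distance construction and all names are our own illustrative choices rather than the exact pipeline.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import distance_transform_edt\n\ndef level_set_from_tpms(tpm_t1, tpm_t2, threshold=0.4, voxel_mm=1.0):\n    # Average the T1- and T2-based probability maps and threshold\n    # them, yielding a binary inside\/outside mask of the compartment.\n    inside = 0.5 * (tpm_t1 + tpm_t2) >= threshold\n    # Signed distance to the mask boundary: negative inside the\n    # compartment, positive outside, zero on the surface, matching\n    # the sign convention of the level set functions Phi_i.\n    d_out = distance_transform_edt(~inside) * voxel_mm\n    d_in = distance_transform_edt(inside) * voxel_mm\n    return d_out - d_in\n\\end{verbatim}\n\n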
DOF, number of cut cells\/mesh elements and the resulting number of snippets can be found in Table 2.\n\t\\begin{table}[t!]\n\t\t\\centering\n\t\t\\begin{tabular}{l|lll}\n\t\t\t& DOF & Cut cells\/elements & snippets \\\\ \\hline\n\t\t\tCutFEM & 917,463 & 716,994 & 7,950,120 \\\\\n\t\t\t$\\textit{Hex}$ & 3,909,303 & 3,475,138 & - \n\t\t\\end{tabular}\n\t\t\\centering\n\t\n\t\t\\caption[Number of DOF, Cut cells and snippets]{Number of degrees of freedom\/snippets\/cut cells for CutFEM, number of degrees of freedom\/elements for the hexahedral mesh.}\n\t\\end{table}\n\t\\subsection{Forward and inverse comparisons}\n\tFor the two spherical scenarios, analytical forward solutions were calculated as reference. For the realistic cases, somatosensory evoked potentials were recorded and a dipole scan was performed as described in detail in (b). \n\t\n\tThe two latter scenarios including realistic gray\/white matter use a regular 2mm source grid created using SimBio\\footnote{www.mrt.uni-jena.de\/simbio}. It was ensured that the sources are located inside the gray matter compartment for both approaches (\\textit{Hex} + CutFEM). The resulting source space contains 58,542 different dipole locations with no orientation constraint being applied.\n\t\\paragraph{Error measures}\n\t\n\tTwo different metrics were employed to quantify the observed errors, the relative difference measure (RDM) and the magnitude error (MAG) \\cite{Wolters2007}. \n\t\n\tThe RDM measures the difference in potential distribution at the scalp electrodes.\n\t\\begin{equation}\n\tRDM (\\%)(u^{ana}, u^{num}) = 50\\cdot\\left\\|\\frac{u^{ana}}{\\|u^{ana}\\|_2} -\\frac{u^{num}}{\\|u^{num}\\|_2}\\right\\|_2.\\label{eq:12}\n\t\\end{equation}\n\tIt ranges from $0$ to $100$, the optimal value being $0$.\n\tMAG determines the differences in signal strength at the electrodes.\n\t\\begin{equation}\n\tMAG(u^{ana}, u^{num}) = 100\\cdot\\left(\\frac{\\|u^{num}\\|_2}{\\|u^{ana}\\|_2}-1\\right).\\label{eq:13}\n\t\\end{equation}\n\t\n\tMeasured in percent, its optimal value is $0$. It is unbounded from above and bounded from below by $-100$. $u^{ana}, u^{num} \\in \\mathbb{R}^s$ contain the analytical and numerical potential at the $s$ different sensor locations.\n\t\n\tCutFEM is implemented in the DUNEuro toolbox\\footnote{https:\/\/www.medizin.uni-muenster.de\/duneuro} \\cite{Schrader2021}, where the FEM calculations were performed. Analytical EEG solutions were calculated using the FieldTrip toolbox \\cite{Oostenveld2011}.\n\t\n\tFor a comparison of runtime and memory usage, the forward calculation is split into 5 steps: the time necessary to create a driver, i.e. the time DUNEuro needs to set up the volume conductor, the times needed to assemble the stiffness matrix and the AMG solver, the transfer matrix solving process using Dune-ISTL \\cite{Bastian2021} and the calculation of the final lead field matrix. All computations are performed on a bluechip workstation with an AMD Ryzen Threadripper 3960X and 128 GB RAM. 16 threads are used to calculate the 200 transfer matrix\/lead field columns in parallel.\n\t\n\t\\paragraph{Somatosensory data and dipole scan}\n\tTo investigate CutFEM's influence on source reconstruction, an electric stimulation of the median nerve was performed on the same subject the anatomical data was acquired from. \n\tThe stimuli were monophasic square-wave pulses of 0.5ms width in random intervals between 350-450ms. 
The stimulus strength was adjusted such that the right thumb moved clearly.\n\tEEG data was measured using an 80 channel cap (EASYCAP GmbH, Herrsching, Germany; 74 EEG channels plus 6 additional EOG channels to detect eye artifacts). EEG positions were digitized using a Polhemus device (FASTRAK, Polhemus Incorporated, Colchester, Vermont, U.S.A.).\n\t2200 stimuli were digitally filtered between 20 and 250 Hz (50 Hz notch) and averaged to improve the signal-to-noise ratio. A single dipole scan was conducted over the whole source space using the data at the peak and the CutFEM lead field.\n\t\n\tThe P20\/N20 component typically exhibits a high signal-to-noise ratio and a strongly dipolar topography, making it an ideal candidate for a dipole scan approach, as motivated for example by \\cite{Buchner1994}. \n\t\n \\begin{figure*}[tp]\n\t\t\\centering\n\t\t\\subfloat{\\includegraphics[width=1\\linewidth]{graphics\/stretchedRadSphere.png}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[width=1\\linewidth]{graphics\/stretchedTanSphere.png}}\n\t\t\\caption[EEG forward modeling errors for \\textit{Hex} and unfitted FEM approaches]{EEG forward modeling errors for \\textit{Hex} and unfitted FEM approaches in a shifted sphere scenario. Top: Errors for tangential source directions. Bottom: Errors for radial source directions. Errors are in percent and grouped by eccentricities. The green line marks optimal error values. The grey area indicates the physiologically most realistic eccentricities.}\n\t\\end{figure*}\n\t\n\\section{Results}\n\n The first investigated model is the shifted sphere scenario, where the brain sphere was moved within the CSF sphere until there was exactly one contact point between skull and brain (see \\uproman{3} A). When comparing the number of DOF and the RAM usage, it is clear that CutFEM is by far the most memory efficient approach, using about a fifth of the number of trial functions and about a tenth of the amount of RAM compared to UDG (Table 3). \\textit{Hex} also uses significantly more resources than CutFEM.\n \n Regarding computation time, as UDG has to solve a significantly larger system, each iteration step in the solution phase takes longer than for CutFEM. As most time is spent on solving the system, CutFEM is overall around 16 minutes or 34 percent faster than UDG. The same cannot be said for comparisons to the standard \\textit{Hex} approach. While each iteration of the solver required less time than for \\textit{Hex}, CutFEM required an average of 92 iterations compared to 14 for \\textit{Hex}. The unfitted approaches spend less time calculating the final lead field, as the time needed to locate each dipole within the 2 mm background mesh is lower than for the 1 mm hexahedral mesh. In total, the hexahedral CG was only faster than CutFEM by a negligible 3 percent or 52 seconds.\n \n \\begin{table}[ht]\n\t\t\\begin{minipage}[c]{0.5\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{l|lll}\n\t\t\t\t & CutFEM & UDG & \\textit{Hex} \\\\ \\hline\n\t\t\t\tnumber DOF & 552 985 & 3 601 824 & 3 341 280 \\\\\n\t\t\t\tmax.
RAM used & 6.91 GB & 64.77 GB & 40.2 GB \\\\ \\hline\n\t\t\t\tDriver setup & 44s & 45s & 52s \\\\\n\t\t\t\tMatrix assembly & 319s & 161s & 25s \\\\\n\t\t\t\tSolver setup & 353s & 235s & 45s \\\\\n\t\t\t\tSolving & 1111s & 2367s & 1550s \\\\\n\t\t\t\tLead field & 22s & 20s & 125s \\\\ \\hline\n\t\t\t\tTotal time & 1849s & 2828s & 1797s \\\\\n\n\t\t\t\\end{tabular}\n\t\t\\end{minipage}\\hfill\n\t\t\\begin{minipage}[c]{0.4\\textwidth}\n\t\t\t\\centering\n\t\t\n\t\t\t\\caption[Computation times, RAM\/DoF usage]{Computation times and RAM\/degree of freedom usage in the shifted sphere model.}\n\t\t\\end{minipage}\n\t\\end{table}\n \nError comparisons between CutFEM, UDG and \\textit{Hex} can be found in Fig. 2. CutFEM outperforms \\textit{Hex} in all eccentricity categories and for both radial and tangential source directions. As the pyramidal cells that give rise to the EEG potential are located in layer \\uproman{5} of the gray matter \\cite{Murakami2006}, eccentricities corresponding to a 1--2 mm distance to the skull are physiologically the most relevant. For eccentricities between 0.96 and 0.98 and both source directions, CutFEM has average RDM\/MAG values of 0.18\\% and -0.06\\%, comparable to UDG's 0.17\\% and -0.2\\% and significantly lower than \\textit{Hex}'s 0.94\\% and 1.57\\%.\n \nThe most pronounced differences occur at low eccentricities or when looking at magnitudes. CutFEM performance is similar for both radial and tangential source directions; UDG shows similar or slightly better results at low eccentricities. However, except for radial RDMs, UDG deteriorates faster at high eccentricities above 0.98. As both operate on the same cut mesh, the larger variance in the UDG results can most likely be explained by CutFEM's use of the ghost penalty stabilization. The overall largest absolute error values for CutFEM are 3.08\\% RDM and 8.21\\% MAG, underlining its performance with regard to outliers. Due to the similar numerical accuracy of CutFEM and UDG, we will only compare CutFEM and \\textit{Hex} in the following scenarios.\n\n\\subsection{Sphere containing realistic brain}\nThe results in the previous section were achieved using analytically computed level sets. Deviating from this, we will now use a semi-realistic case where realistic brain compartments are contained within spheres. \nAgain, several different penalty parameters were tried, showing that a combination of $\\gamma = 40$ and a ghost penalty of $\\gamma_G = 0.5$ yields good results for CutFEM.\n\t\nThe results are presented in Fig.~3. Note that the eccentricity is stated with respect to the distance to the skull. As source points are only located inside the gray matter, the number of source points at high eccentricities is much lower. The eccentricity groups 0.98, 0.985, 0.99 and 0.995 were thus combined into one group containing 136 points.\n\n\t \\begin{figure*}[tp]\n\t\t\\centering\n\t\t\\subfloat{\\includegraphics[width=1\\linewidth]{graphics\/stretchedRad.png}}\n\t\t\\hfill\n\t\t\\subfloat{\\includegraphics[width=1\\linewidth]{graphics\/stretchedTan.png}}\n\t\t\\caption[EEG forward model errors for CG- and CutFEM - Realistic brain]{Overview of different EEG errors for 5 layer continuous Galerkin and CutFEM approaches using realistic brain compartments contained in spherical skull and scalp shells. Top: Errors for tangential source directions. Bottom: Errors for radial source directions. Errors are in percent and grouped by eccentricities. The green line marks optimal error values.
The grey area indicates the physiologically most realistic eccentricities.}\n\t\\end{figure*}\n\t\nMuch like before, CutFEM remains well below 1.5 and 2 percent RDM and MAG, respectively, whereas \\textit{Hex} has higher median values for nearly all eccentricities and more outliers, going up to more than 1.5\\% RDM and 4\\% MAG. CutFEM is again more stable with regard to outliers, and especially when looking at magnitudes, the differences between the two methods are in the several percent range.\n\t\nOverall, it can be stated that CutFEM is about as fast as and more accurate than \\textit{Hex}, and about as accurate as and faster than UDG.\n\t\n\\subsection{Realistic 5 compartment head model}\nFor the final part of this paper, two lead fields, one from CutFEM and one from hexahedral CG, were created using realistic 5 compartment head models including gray and white matter, CSF, skull and scalp tissues. Somatosensory evoked potentials were acquired from a medianus stimulation of the right hand.\n\t\n\\paragraph{Lead field differences} Before looking at inverse reconstructions, we will investigate the differences between the forward results. As the same source space and electrodes were used for both models, we can again compute MAG and RDM values. In the absence of an analytical solution, these measurements cannot capture errors but rather differences between the methods, without making a clear statement as to which is the more accurate one. \n\t\nFor visualization purposes, for each gray matter centerpoint of the \\textit{Hex} mesh the closest source point is identified, RDM and MAG are computed for each spatial direction, and averages over the directions are calculated. The result can be seen in Fig. 4. In both measures, the highest differences can be observed in inferior areas near the foramen magnum and the optic canals, or in superior areas. Overall, the difference in potential distribution was 9.40 $\\pm$ 4.15$\\%$ and the difference in magnitude 18.94 $\\pm$ 12.03$\\%$. Interestingly, with a correlation coefficient of only 0.22, high RDM values do not necessarily coincide with high MAG values.\n\n\t\\paragraph{Reconstruction of somatosensory stimulation}\n\tFinally, the CutFEM lead field was used to perform a source reconstruction of the P20 component of an electric wrist stimulation. A dipole scan was conducted over the entire source space, the result of which can be seen in Fig. 4. $93.03\\%$ of the data could be explained by a dipole with a strength of 5.8 nAm. From the literature \\cite{Buchner1994}, one expects the P20 component to be located in Brodmann area 3b, that is, in the anterior wall of the postcentral gyrus (and oriented towards the motor cortex).\n\n\t\\section{Discussion}\n\tThe purpose of this paper is to introduce CutFEM, an unfitted FEM approach for applications in EEG forward modeling. After discussing the mathematical theory behind CutFEM and implementational aspects, three progressively more realistic scenarios are introduced, ranging from a multi-layer sphere model to the reconstruction of somatosensory evoked potentials. \n\n\tAt similar computation times, CutFEM shows preferable results when compared to a geometry-adapted hexahedral CG-FEM \\cite{Wolters2007} in both a shifted sphere scenario and a sphere model with realistic brain tissues. While CutFEM requires significantly less DOF, both methods require similar computation times due to the different number of solver iterations.
Thus, a thorough investigation of different iterative solver techniques such as multigrid methods, and possibly a modification of the ghost penalty, will be part of future work.\n \n Compared to UDG \\cite{Bastian2009}, it is shown that CutFEM combined with a ghost penalty leads to a decrease in outlier values at high eccentricities as well as a significant reduction in memory consumption and computation time. \n\t\n\tUsing a realistic five compartment head model, CutFEM correctly localizes the somatosensory P20 in the expected Brodmann area 3b. Especially in applications such as presurgical epilepsy diagnosis, such accurate reconstructions might contribute significantly to the correct localization of the irritative zone \\cite{Neugebauer2022}. The employed somatosensory experiment featured clear peaks and a high signal-to-noise ratio, making it an ideal candidate for an initial study. Further investigations and a larger sample size are necessary to determine CutFEM's contribution to accurate source reconstructions when used with noisier data and\/or more advanced inverse methods.\n\n In \\cite{Vallaghe2010}, a trilinear immersed FEM approach was introduced that, like CutFEM, employs level sets as tissue surfaces. Rather than using a Nitsche-based coupling, continuity of the electric potential is enforced by modifying the trial function space. Compared to CutFEM, no free parameters such as $\\gamma$ and $\\gamma_G$ are introduced, but the absence of overlapping submeshes means that there is no increased resolution in areas with complex geometries.\n\n In \\cite{Windhoff2013,Nielsen2018}, the process of building a tetrahedral mesh from segmentation data is investigated. Surface triangulations that are free of topological defects, self-intersections or degenerate angles have to be created before volumetric meshing. The authors show that it is possible to create such high quality surfaces and subsequent tetrahedral meshes for realistic head models; however, this may come at the cost of modeling inaccuracies such as the separation of gray matter and skull by a thin layer of CSF.\n \n A main advantage of CutFEM is its flexibility with regard to the anatomical input data. Level sets can be created from a variety of sources, such as tissue probability maps, binary images or surface triangulations. This simplifies the question of how to create a mesh from segmentation data.\n \n \\begin{figure*}[tp]\n\t\t\\centering\n\t\t\\subfloat[][]{\\includegraphics[width=0.27\\linewidth]{graphics\/realRdmHexTpm.png}}\n\t\t\\subfloat[][]{\\includegraphics[width=0.27\\linewidth]{graphics\/realMAGHexTpm.png}}\n\t\t\\subfloat[][]{\\includegraphics[width=0.27\\linewidth]{graphics\/P20CutPaper.png}}\n\t\t\\caption[RDM\/MAG differences interpolated on gray matter + P20 reconstruction]{CutFEM vs.\\ \\textit{Hex} lead field differences in distribution (a) and magnitude (b). Differences are interpolated onto the gray matter. (c): The CutFEM based dipole reconstruction of the P20 medianus stimulation.}\n\t\\end{figure*}\n\t\n \\section{Conclusion}\n\tCutFEM performed well both when the underlying head model was created using analytical level sets and when it was created from realistic segmentation results. Application to an inverse reconstruction of a somatosensory evoked potential yielded findings that are in line with the literature.
The level sets underlying CutFEM impose few restrictions on the compartments, thus allowing for more simplified segmentation routines when compared to other FEM approaches using surface triangulations.\n\n \\section{Author Contributions}\n \\textbf{Conceptualization}: C. Engwer, A. Westhoff, C.H. Wolters.\n \\textbf{Methodology}: C. Engwer, T. Erdbruegger, A. Westhoff, C.H. Wolters.\n \\textbf{Software}: C. Engwer, T. Erdbruegger, A. Westhoff.\n \\textbf{Investigation}: T. Erdbruegger.\n \\textbf{Writing \u2013 original draft}: T. Erdbruegger.\n \\textbf{Writing \u2013 review and editing}: Y. Buschermoehle, C. Engwer, T. Erdbruegger, J. Gross, M. Hoeltershinken, R. Lencer, J.O. Radecke, C.H. Wolters.\n \\textbf{Supervision}: C. Engwer, C.H. Wolters.\n \\textbf{Funding acquisition}: A. Buyx, J. Gross, R. Lencer, S. Pursiainen, F. Wallois, C.H. Wolters.\n\t\\bibliographystyle{ieeetr}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLarge-scale multi-label text classification (\\textsc{lmtc}\\xspace) is the task of assigning to each document all the relevant labels from a large set, typically containing thousands of labels (classes). Applications include building web directories \\citep{Partalas2015LSHTCAB}, labeling scientific publications with concepts from ontologies \\cite{Tsatsaronis2015}, assigning diagnostic and procedure labels to medical records \\cite{Mullenbach2018,Rios2018-2}. We focus on legal text processing, an emerging \\textsc{nlp} field with many applications (e.g., legal judgment \\cite{Nallapati2008,Aletras2016}, contract element extraction \\cite{Chalkidis2017}, obligation extraction \\cite{Chalkidis2018b}), but limited publicly available resources.\n\nOur first contribution is a new publicly available legal \\textsc{lmtc}\\xspace dataset, dubbed \\textsc{eurlex57k}\\xspace, containing 57k English \\textsc{eu}\\xspace legislative documents from the \\textsc{eur-lex}\\xspace portal, tagged with $\\mathtt{\\sim}$4.3k labels (concepts) from the European Vocabulary (\\textsc{eurovoc}\\xspace).\\footnote{See \\url{https:\/\/eur-lex.europa.eu\/} for \\textsc{eur-lex}\\xspace, and \\url{https:\/\/publications.europa.eu\/en\/web\/eu-vocabularies} for \\textsc{eurovoc}\\xspace.} \\textsc{eurovoc}\\xspace contains approx.\\ 7k labels, but most of them are rarely used, hence they are under-represented (or absent) in \\textsc{eurlex57k}\\xspace, making the dataset also appropriate for few- and zero-shot learning. \\textsc{eurlex57k}\\xspace can be viewed as an improved version of the dataset released by \\citet{Mencia2007}, which has been widely used in \\textsc{lmtc}\\xspace research, but is less than half the size of \\textsc{eurlex57k}\\xspace (19.6k documents, 4k \\textsc{eurovoc}\\xspace labels) and more than ten years old. \n\nAs a second contribution, we experiment with several neural classifiers on \\textsc{eurlex57k}\\xspace, including the Label-Wise Attention Network of \\citet{Mullenbach2018}, called \\textsc{cnn-lwan}\\xspace here, which was reported to achieve state-of-the-art performance in \\textsc{lmtc}\\xspace on medical records. We show that a simpler \\textsc{bigru} with self-attention \\cite{Xu2015} outperforms \\textsc{cnn-lwan}\\xspace by a wide margin on \\textsc{eurlex57k}\\xspace. However, by replacing the \\textsc{cnn}\\xspace encoder of \\textsc{cnn-lwan}\\xspace with a \\textsc{bigru}\\xspace, we obtain even better results on \\textsc{eurlex57k}\\xspace.
Domain-specific \\textsc{word2vec}\\xspace \\cite{Mikolov2013} and context-sensitive \\textsc{elmo}\\xspace embeddings \\cite{Peters2018} yield further improvements. We thus establish strong baselines for \\textsc{eurlex57k}\\xspace.\n\nAs a third contribution, we investigate which zones of the documents are more informative on \\textsc{eurlex57k}\\xspace, showing that considering only the title and recitals of each document leads to almost the same performance as considering the full document. This allows us to bypass \\textsc{bert}\\xspace's \\cite{bert} maximum text length limit and fine-tune \\textsc{bert}\\xspace, obtaining the best results for all but zero-shot learning labels. To our knowledge, this is the first application of \\textsc{bert}\\xspace to an \\textsc{lmtc}\\xspace task, which provides further evidence of the superiority of pretrained language models with task-specific fine-tuning, and establishes an even stronger baseline for \\textsc{eurlex57k}\\xspace and \\textsc{lmtc}\\xspace in general.\n\n\\section{Related Work}\n\\label{sec:relatedwork}\n\nYou et al.\\ \\shortcite{You2018} explored \\textsc{rnn}\\xspace-based methods with self-attention on five \\textsc{lmtc}\\xspace datasets that had also been considered by \\citet{Liu2017}, namely \\textsc{rcv1}\\xspace \\cite{Lewis2004}, Amazon-13K \\cite{McAuley2013}, Wiki-30K and Wiki-500K \\cite{Zubiaga2012}, as well as the previous \\textsc{eur-lex}\\xspace dataset \\cite{Mencia2007}, reporting that attention-based \\textsc{rnn}\\xspace{s} produced the best results overall (4 out of 5 datasets). \n\n\\citet{Mullenbach2018} investigated the use of label-wise attention in \\textsc{lmtc}\\xspace for medical code prediction on the \\textsc{mimic-ii}\\xspace and \\textsc{mimic-iii}\\xspace datasets \\cite{Johnson2017}. Their best method, Convolutional Attention for Multi-Label Classification, called \\textsc{cnn-lwan}\\xspace here, employs one attention head per label and was shown to outperform weak baselines, namely logistic regression, plain \\textsc{bigru}\\xspace{s}, and \\textsc{cnn}\\xspace{s} with a single convolution layer. \n\n\\citet{Rios2018-2} consider few- and zero-shot learning on the \\textsc{mimic} datasets. They propose Zero-shot Attentive \\textsc{cnn}\\xspace, called \\textsc{zero-cnn-lwan}\\xspace here, a method similar to \\textsc{cnn-lwan}\\xspace, which also exploits label descriptors.
Although \\textsc{zero-cnn-lwan}\\xspace did not outperform \\textsc{cnn-lwan}\\xspace overall on \\textsc{mimic-ii}\\xspace and \\textsc{mimic-iii}\\xspace, it had much improved results in few-shot and zero-shot learning, as did other variations of \\textsc{zero-cnn-lwan}\\xspace that exploit the hierarchical relations of the labels with graph convolutions.\n\nWe note that the label-wise attention methods of \\citet{Mullenbach2018} and \\citet{Rios2018-2} were not compared to strong generic text classification baselines, such as attention-based \\textsc{rnn}\\xspace{s} \\cite{You2018} or Hierarchical Attention Networks (\\textsc{han}\\xspace{s}) \\cite{Yang2016}, which we investigate below.\n\n\\section{The New Dataset}\n\\label{sec:dataset}\n\nAs already noted, \\textsc{eurlex57k}\\xspace contains 57k legislative documents from \\textsc{eur-lex}\\xspace\\footnote{Our dataset is available at \\url{http:\/\/nlp.cs.aueb.gr\/software_and_datasets\/EURLEX57K}, with permission of reuse under European Union\\copyright, \\url{https:\/\/eur-lex.europa.eu}, 1998--2019.} with an average length of 727 words (Table~\\ref{tab:dataset}).\\footnote{See Appendix~\\ref{app:dataset} for more statistics.} Each document contains four major zones: the \\emph{header}, which includes the title and name of the legal body enforcing the legal act; the \\emph{recitals}, which are legal background references; the \\emph{main body}, usually organized in articles; and the \\emph{attachments} (e.g., appendices, annexes).\n\n\\begin{table}[t]\n\\centering\n\\footnotesize\n\\begin{tabular}{lccc}\n Subset & Documents ($D$) & Words\/$D$ & Labels\/$D$ \\\\\n\\hline\n Train & 45,000 & 729 & 5\\\\\n Dev. & 6,000 & 714 & 5 \\\\\n Test & 6,000 & 725 & 5\\\\\\hline\n Total & 57,000 & 727 & 5\\\\\n \\hline\n\\end{tabular}\n\\caption{Statistics of the \\textsc{eurlex57k}\\xspace dataset.}\n\\label{tab:dataset}\n\\end{table}\n\nSome of the \\textsc{lmtc}\\xspace methods we consider need to be fed with documents split into smaller units. These are often sentences, but in our experiments they are \\emph{sections}, and we preprocessed the raw text accordingly. We treat the header, the recitals zone, each article of the main body, and the attachments as separate sections. \n\nAll the documents of the dataset have been annotated by the Publications Office of the \\textsc{eu}\\xspace\\footnote{See \\url{https:\/\/publications.europa.eu\/en}.} with multiple concepts from \\textsc{eurovoc}\\xspace. \nWhile \\textsc{eurovoc}\\xspace includes approx.\\ 7k concepts (labels), only 4,271 (59.31\\%) are present in \\textsc{eurlex57k}\\xspace, of which only 2,049 (47.97\\%) have been assigned to more than 10 documents. Similar distributions were reported by \\citet{Rios2018-2} for the \\textsc{mimic} datasets.\nWe split \\textsc{eurlex57k}\\xspace into training (45k documents), development (6k), and test subsets (6k).
We also divide the 4,271 labels into \\emph{frequent} (746 labels), \\emph{few-shot} (3,362), and \\emph{zero-shot} (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=1\\textwidth]{methods.png}\n \\caption{Illustration of (a) \\textsc{bigru-att}\\xspace, (b) \\textsc{han}\\xspace, (c) \\textsc{bigru-lwan}\\xspace, and (d) \\textsc{bert}\\xspace.}\n \\vspace*{-4mm}\n \\label{fig:methods}\n\\end{figure*}\n\n\\section{Methods}\n\\label{sec:methods}\n\n\\paragraph{Exact Match, Logistic Regression:} \nA first naive baseline, Exact Match, assigns only labels whose descriptors can be found verbatim in the document. A second one uses Logistic Regression with feature vectors containing \\textsc{tf-idf}\\xspace scores of $n$-grams ($n=1,2,\\dots, 5$).\n\n\\paragraph{\\textsc{bigru-att}\\xspace:}\n\nThe first neural method is a \\textsc{bigru} with self-attention \\cite{Xu2015}. Each document is represented as the sequence of its word embeddings, which go through a stack of \\textsc{bigru}s (Figure~\\ref{fig:methods}a). A document embedding ($h$) is computed as the sum of the resulting context-aware embeddings ($h = \\sum_i a_i h_i$), weighted by the self-attention scores ($a_i$), and goes through a dense layer of $L=4,271$ output units with sigmoids, producing $L$ probabilities, one per label.\n\n\\paragraph{\\textsc{han}\\xspace:}\n\nThe Hierarchical Attention Network \\cite{Yang2016} is a strong baseline for text classification. We use a slightly modified version, where a \\textsc{bigru}\\xspace with self-attention reads the words of each section, as in \\textsc{bigru-att}\\xspace but separately per section, producing section embeddings. A second-level \\textsc{bigru}\\xspace with self-attention reads the section embeddings, producing a single document embedding ($h$) that goes through a similar output layer as in \\textsc{bigru-att}\\xspace (Figure~\\ref{fig:methods}b).\n\n\\paragraph{\\textsc{cnn-lwan}\\xspace, \\textsc{bigru-lwan}\\xspace:}\n\nIn the original Label-Wise Attention Network (\\textsc{lwan}\\xspace) of \\citet{Mullenbach2018}, called \\textsc{cnn-lwan}\\xspace here, the word embeddings of each document are first converted to a sequence of vectors $h_i$ by a \\textsc{cnn}\\xspace encoder. A modified version of \\textsc{cnn-lwan}\\xspace that we developed, called \\textsc{bigru-lwan}\\xspace, replaces the \\textsc{cnn}\\xspace encoder with a \\textsc{bigru}\\xspace (Figure~\\ref{fig:methods}c), which converts the word embeddings into context-sensitive embeddings $h_i$, much as in \\textsc{bigru-att}\\xspace. Unlike \\textsc{bigru-att}\\xspace, however, both \\textsc{cnn-lwan}\\xspace and \\textsc{bigru-lwan}\\xspace use $L$ independent attention heads, one per label, generating $L$ document embeddings ($h^{(l)} = \\sum_i a_{l,i} h_i$, $l=1, \\dots, L$) from the sequence of vectors $h_i$ produced by the \\textsc{cnn}\\xspace or \\textsc{bigru}\\xspace encoder, respectively. 
Each document embedding ($h^{(l)}$) is specialized to predict the corresponding label and goes through a separate dense layer ($L$ dense layers in total) with a sigmoid, to produce the probability of the corresponding label.\n\n\\paragraph{\\textsc{zero-cnn-lwan}\\xspace, \\textsc{zero-bigru-lwan}\\xspace:}\n\\citet{Rios2018-2} designed a model similar to \\textsc{cnn-lwan}\\xspace, called \\textsc{zacnn}\\xspace in their work and \\textsc{zero-cnn-lwan}\\xspace here, to deal with rare labels. In \\textsc{zero-cnn-lwan}\\xspace, the attention scores ($a_{l,i}$) and the label probabilities are produced by comparing the $h_i$ vectors that the \\textsc{cnn}\\xspace encoder produces and the label-specific document embeddings ($h^{(l)}$), respectively, to label embeddings. Each label embedding is the centroid of the pretrained word embeddings of the label's descriptor; consult \\citet{Rios2018-2} for further details. By contrast, \\textsc{cnn-lwan}\\xspace and \\textsc{bigru-lwan}\\xspace do not consider the descriptors of the labels. We also experiment with a variant of \\textsc{zero-cnn-lwan}\\xspace that we developed, dubbed \\textsc{zero-bigru-lwan}\\xspace, where the \\textsc{cnn}\\xspace encoder is replaced by a \\textsc{bigru}\\xspace.\n\n\\paragraph{\\textsc{bert}\\xspace:} \n\\textsc{bert}\\xspace \\citep{bert} is a language model based on Transformers \\citep{Vaswani2017} pretrained on large corpora. \nFor a new target task, a task-specific layer is added on top of \\textsc{bert}. The extra layer is trained jointly with \\textsc{bert} by fine-tuning on task-specific data. We add a dense layer on top of \\textsc{bert}, with sigmoids, that produces a probability per label. Unfortunately, \\textsc{bert}\\xspace can currently process texts up to 512 wordpieces, which is too small for the documents of \\textsc{eurlex57k}\\xspace. Hence, \\textsc{bert}\\xspace can only be applied to truncated versions of our documents (see below). 
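\n\nTo make the label-wise attention mechanism concrete, the following minimal NumPy sketch runs one forward pass of an \\textsc{lwan}\\xspace-style head on top of a generic encoder output (the dimensions, random initialization and variable names are our illustrative assumptions; the actual models are trained end-to-end):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn_words, d, L = 30, 8, 5           # words, hidden size, number of labels\n\nH = rng.normal(size=(n_words, d))  # context-aware embeddings h_i\nU = rng.normal(size=(d, L))        # one attention vector per label\nW = rng.normal(size=(L, d))        # one output weight vector per label\nb = np.zeros(L)\n\ndef softmax(x, axis=0):\n    e = np.exp(x - x.max(axis=axis, keepdims=True))\n    return e \/ e.sum(axis=axis, keepdims=True)\n\nA = softmax(H @ U, axis=0)   # a_{l,i}: attention of label l over words i\nD = A.T @ H                  # label-wise document embeddings h^(l)\np = 1 \/ (1 + np.exp(-(np.sum(W * D, axis=1) + b)))  # one sigmoid per label\nprint(p)                     # L label probabilities\n\\end{verbatim}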
\n\n\\begin{table*}[ht!]\n\\centering\n{\n\\footnotesize\\addtolength{\\tabcolsep}{-2pt}\n\\begin{tabular}{lccccccccc}\n \\hline\n & \\multicolumn{3}{c}{\\textsc{All Labels}} & \\multicolumn{2}{c}{\\textsc{Frequent}} & \\multicolumn{2}{c}{\\textsc{Few}} & \\multicolumn{2}{c}{\\textsc{Zero}} \\\\ \n & $RP@5$ & $nDCG@5$ & Micro-$F1$ & $RP@5$ & $nDCG@5$ & $RP@5$ & $nDCG@5$ & $RP@5$ & $nDCG@5$ \\\\\n \\cline{2-10}\n Exact Match & 0.097 & 0.099 & 0.120 & 0.219 & 0.201 & 0.111 & 0.074 & 0.194 & 0.186 \\\\\n Logistic Regression & 0.710 & 0.741 & 0.539 & 0.767 & 0.781 & 0.508 & 0.470 & 0.011 & 0.011 \\\\\n \\hline\n \\textsc{bigru-att}\\xspace & 0.758 & 0.789 & 0.689 & 0.799 & 0.813 & 0.631 & 0.580 & 0.040 & 0.027\\\\\n \\textsc{han}\\xspace & 0.746 & 0.778 & 0.680 & 0.789 & 0.805 & 0.597 & 0.544 & 0.051 & 0.034\\\\\n \\hline\n \\textsc{cnn-lwan}\\xspace & 0.716 & 0.746 & 0.642 & 0.761 & 0.772 & 0.613 & 0.557 & 0.036 & 0.023 \\\\\n \\textsc{bigru-lwan}\\xspace & \\textbf{0.766} & \\textbf{0.796} & \\textbf{0.698} & \\textbf{0.805} & \\textbf{0.819} & \\textbf{0.662} & \\textbf{0.618} & 0.029 & 0.019\\\\\n \\hline\n \\textsc{zero-cnn-lwan}\\xspace & 0.684 & 0.717 & 0.618 & 0.730 & 0.745 & 0.495 & 0.454 & 0.321 & 0.264 \\\\\n \\textsc{zero-bigru-lwan}\\xspace & 0.718 & 0.752 & 0.652 & 0.764 & 0.780 & 0.561 & 0.510 & \\textbf{0.438} & \\textbf{0.345} \\\\\n \\hline\n \\hline\n \\textsc{bigru-lwan-l2v} & 0.775 & 0.804 & 0.711 & 0.815 & 0.828 & 0.656 & 0.612 & 0.034 & 0.024 \\\\\n\\hline\n\\textsc{bigru-lwan-l2v}* & 0.770 & 0.796 & 0.709 & 0.811 & 0.825 & 0.641 & 0.600 & 0.047 & 0.030\\\\\n\\textsc{bigru-lwan-elmo}* & 0.781 & 0.811 & 0.719 & 0.821 & 0.835 & 0.668 & 0.619 & 0.044 & 0.028\\\\\n\\textsc{bert-base}\\xspace * & \\textbf{0.796} & \\textbf{0.823} & \\textbf{0.732} & \\textbf{0.835} & \\textbf{0.846} & \\textbf{0.686} & \\textbf{0.636} & 0.028 & 0.023\\\\\n\\hline\n\\end{tabular}\n}\n\\caption{Results on \\textsc{eurlex57k}\\xspace for all, frequent, few-shot, zero-shot labels. Starred methods use the first 512 document tokens; all other methods use full documents. Unless otherwise stated, \\textsc{glove}\\xspace embeddings are used.}\n\\vspace*{-4mm}\n\\label{tab:results}\n\\end{table*}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\\paragraph{Evaluation measures:} Common \\textsc{lmtc}\\xspace evaluation measures are precision ($P@K$) and recall ($R@K$) at the top $K$ predicted labels, averaged over test documents, micro-averaged F1 over all labels, and $nDCG@K$ \\cite{Manning2009}. However, $P@K$ and $R@K$ unfairly penalize methods when the gold labels of a document are fewer or more than $K$, respectively. Similar concerns have led to the introduction of $\\mathrm{R}\\text{-}\\mathrm{Precision}$ and $nDCG@K$ in Information Retrieval \\cite{Manning2009}, which we believe are also more appropriate for \\textsc{lmtc}\\xspace. Note, however, that $\\mathrm{R}\\text{-}\\mathrm{Precision}$ requires the number of gold labels per document to be known beforehand, which is unrealistic in practical applications. Therefore we propose using $\\mathrm{R}\\text{-}\\mathrm{Precision}@K$ ($RP@K$), where $K$ is a parameter. This measure is the same as $P@K$ if there are at least $K$ gold labels, otherwise $K$ is reduced to the number of gold labels. \n\nFigure~\\ref{fig:kappa_comparison} shows $RP@K$ for the three best systems, macro-averaged over test documents. 
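\nAs a concrete reference, $RP@K$ for a single document can be computed as follows (a minimal sketch; the function name and the toy labels are ours):\n\\begin{verbatim}\ndef rp_at_k(gold, ranked, k=5):\n    # P@K with K reduced to the number of gold labels\n    # when the document has fewer than K gold labels.\n    k_eff = min(k, len(gold))\n    hits = len(set(ranked[:k_eff]) & set(gold))\n    return hits \/ k_eff\n\n# Document with 3 gold labels; the top-ranked predictions hit 2 of them.\nprint(rp_at_k({'a', 'b', 'c'}, ['a', 'x', 'c', 'y', 'z'], k=5))  # 2\/3\n\\end{verbatim}\nMacro-averaging these per-document scores yields the curves shown in the figure.\n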
Unlike $P@K$, $RP@K$ does not decline sharply as $K$ increases, because it replaces $K$ by the number of gold labels, when the latter is lower than $K$. \nFor $K=1$, $RP@K$ is equivalent to $P@K$, as confirmed by Fig.~\\ref{fig:kappa_comparison}. For large values of $K$ that almost always exceed the number of gold labels, $RP@K$ asymptotically approaches $R@K$, as also confirmed by Fig.~\\ref{fig:kappa_comparison}.\\footnote{See Appendix~\\ref{app:evaluation} for a more detailed discussion on the evaluation measures.} In our dataset, there are 5.07 labels per document, hence $K=5$ is reasonable.\\footnote{Evaluating at other values of $K$ leads to similar conclusions (see Fig.~\\ref{fig:kappa_comparison} and Appendix~\\ref{app:experiments}).}\n\n\\begin{figure}[ht]\n\\centering\n\\begin{tikzpicture}[scale=1.0]\n \\begin{axis}[\n xlabel={$K$ top predictions},\n ylabel={},\n xmin=1, xmax=10,\n ymin=0.2, ymax=1,\n xtick={1,2,3,4,5,6,7,8,9,10},\n ytick={0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0},\n legend style={at={(0.24, 0.03)}, anchor=south west},\n ymajorgrids=true,\n grid style=dashed,\n ]\n \n \\addplot[\n color=black,\n ]\n coordinates {\n (1,0.921)(2,0.878)(3,0.830)(4,0.797)(5,0.781)(6,0.780)(7,0.799)(8,0.818)(9,0.832)(10,0.845)};\n \\addlegendentry{\\textsc{bigru-lwan}\\xspace (\\textsc{elmo}\\xspace)}\n \n \\addplot[\n color=black,\n mark=o,\n ]\n coordinates {\n (1,0.916)(2,0.867)(3,0.821)(4,0.787)(5,0.770)(6,0.771)(7,0.791)(8,0.808)(9,0.823)(10,0.836)};\n \\addlegendentry{\\textsc{bigru-lwan}\\xspace (\\textsc{l2v}\\xspace)}\n \n \\addplot[\n color=black,\n mark=square,\n ]\n coordinates {\n (1,0.922)(2,0.880)(3,0.839)(4,0.810)(5,0.795)(6,0.796)(7,0.814)(8,0.832)(9,0.845)(10,0.857)};\n \\addlegendentry{\\textsc{bert-base}\\xspace}\n \n \\addplot[\n color=red,\n ]\n coordinates {\n (1,0.921)(2,0.875)(3,0.811)(4,0.740)(5,0.674)(6,0.611)(7,0.552)(8,0.501)(9,0.456)(10,0.419)};\n\n \\addplot[\n color=green,\n ]\n coordinates {\n (1,0.209)(2,0.391)(3,0.531)(4,0.632)(5,0.705)(6,0.756)(7,0.789)(8,0.813)(9,0.830)(10,0.844)};\n \n \\addplot[\n color=red,\n mark=o,\n ]\n coordinates {\n (1,0.913)(2,0.863)(3,0.803)(4,0.734)(5,0.669)(6,0.607)(7,0.549)(8,0.498)(9,0.454)(10,0.417)};\n\n \\addplot[\n color=green,\n mark=o,\n ]\n coordinates {\n (1,0.207)(2,0.386)(3,0.527)(4,0.627)(5,0.700)(6,0.752)(7,0.787)(8,0.811)(9,0.828)(10,0.842)};\n \n \\addplot[\n color=red,\n mark=square,\n ]\n coordinates {\n (1,0.922)(2,0.877)(3,0.820)(4,0.753)(5,0.686)(6,0.623)(7,0.562)(8,0.510)(9,0.463)(10,0.424)};\n\n \\addplot[\n color=green,\n mark=square,\n ]\n coordinates {\n (1,0.210)(2,0.393)(3,0.539)(4,0.642)(5,0.718)(6,0.771)(7,0.804)(8,0.829)(9,0.844)(10,0.856)};\n \n\\end{axis}\n\\end{tikzpicture}\n\\caption{$R@K$ (green lines), $P@K$ (red), $RP@K$ (black) of the best methods (\\textsc{bigru-lwan}\\xspace (\\textsc{l2v}\\xspace), \\textsc{bigru-lwan}\\xspace (\\textsc{elmo}\\xspace), \\textsc{bert-base}\\xspace), for $K=1$ to 10. All scores macro-averaged over test documents.}\n\\label{fig:kappa_comparison}\n\\end{figure}\n\n\n\\paragraph{Setup:} \nHyper-parameters are tuned using the \\textsc{hyperopt} library, selecting the values with the best loss on development data.\\footnote{We implemented all methods in \\textsc{keras} (\\url{https:\/\/keras.io\/}). Code available at \\url{https:\/\/github.com\/iliaschalkidis\/lmtc-eurlex57k.git}.
See Appendix~\\ref{app:hyperopt} for details on hyper-parameter tuning.} For the best hyper-parameter values, we perform five runs and report mean scores on test data. For statistical significance tests, we take the run of each method with the best performance on development data, and perform two-tailed approximate randomization tests \\cite{Dror2018} on test data.\\footnote{We perform 10k iterations, randomly swapping in each iteration the responses (sets of returned labels) of the two compared systems for 50\\% of the test documents.} Unless otherwise stated, we used 200-D pretrained \\textsc{glove}\\xspace embeddings \\cite{pennington2014glove}.\n\n\\paragraph{Full documents:}\n\\label{sec:overall}\nThe first five horizontal zones of Table~\\ref{tab:results} report results for full documents. The naive baselines are weak, as expected. Interestingly, for all, frequent, and even few-shot labels, the generic \\textsc{bigru-att}\\xspace performs better than \\textsc{cnn-lwan}\\xspace, which was designed for \\textsc{lmtc}\\xspace. \\textsc{han}\\xspace also performs better than \\textsc{cnn-lwan}\\xspace for all and frequent labels. However, replacing the \\textsc{cnn}\\xspace encoder of \\textsc{cnn-lwan}\\xspace with a \\textsc{bigru}\\xspace (\\textsc{bigru-lwan}\\xspace) leads to the best results, indicating that the main weakness of \\textsc{cnn-lwan}\\xspace is its vanilla \\textsc{cnn}\\xspace encoder.\n\nThe zero-shot versions of \\textsc{cnn-lwan}\\xspace and \\textsc{bigru-lwan}\\xspace outperform all other methods on zero-shot labels (Table~\\ref{tab:results}), in line with the findings of \\citet{Rios2018-2}, because they exploit label descriptors,\nbut more importantly because they have a component that uses prior knowledge as is (i.e., label embeddings are frozen). Exact Match also performs better on zero-shot labels, for the same reason (i.e., the prior knowledge is intact). \\textsc{bigru-lwan}\\xspace, however, is still the best method in few-shot learning. All the differences between the best (bold) and other methods in Table~\\ref{tab:results} are statistically significant ($p < 0.01$).\n\nTable~\\ref{tab:embs} shows that using \\textsc{word2vec}\\xspace embeddings trained on legal texts (\\textsc{l2v}\\xspace) \\cite{Chalkidis2018} or \\textsc{elmo}\\xspace embeddings \\cite{Peters2018} trained on generic texts further improves the performance of \\textsc{bigru-lwan}\\xspace. \n\n\\paragraph{Document zones:} \nTable~\\ref{tab:zones} compares the performance of \\textsc{bigru-lwan}\\xspace on the development set for different combinations of document zones (Section~\\ref{sec:dataset}): \\emph{header} (\\emph{H}), \\emph{recitals} (\\emph{R}), \\emph{main body} (\\emph{MB}), full text.
Surprisingly, \\emph{H+R} leads to almost the same results as full documents,\\footnote{The approximate randomization tests detected no statistically significant difference in this case ($p = 0.20$).} indicating that \\emph{H+R} provides most of the information needed to assign \\textsc{eurovoc}\\xspace labels.\n\n\\begin{table}[ht!]\n\\centering\n\\footnotesize\\addtolength{\\tabcolsep}{-2pt}\n\\begin{tabular}{lccc}\n & $RP@5$ & $nDCG@5$ & Micro-$F1$ \\\\\n\\cline{2-4}\n\\textsc{glove}\\xspace & 0.766 & 0.796 & 0.698 \\\\\n\\textsc{l2v}\\xspace & 0.775 & 0.804 & 0.711 \\\\\n\\textsc{glove}\\xspace + \\textsc{elmo}\\xspace & 0.777 & 0.808 & 0.714\\\\\n\\textsc{l2v}\\xspace + \\textsc{elmo}\\xspace & \\textbf{0.781} & \\textbf{0.811} & \\textbf{0.719}\\\\\n\\end{tabular}\n\\caption{\\textsc{bigru-lwan}\\xspace with \\textsc{glove}\\xspace, \\textsc{l2v}\\xspace, \\textsc{elmo}\\xspace.}\n\\label{tab:embs}\n\\end{table}\n\\vspace{-0.5em}\n\\begin{table}[ht!]\n\\centering\n\\footnotesize\\addtolength{\\tabcolsep}{-2pt}\n\\begin{tabular}{lcccc}\n & $\\mu_{words}$ & $RP@5$ & $nDCG@5$ & Micro-$F1$ \\\\\n\\cline{2-5}\n\\emph{H} & 43 & 0.747 & 0.782 & 0.688 \\\\\n\\emph{R} & 317 & 0.734 & 0.765 & 0.669 \\\\\n\\emph{H+R} & 360 & \\underline{0.765} & \\underline{0.796} & \\underline{0.701} \\\\\n\\emph{MB} & 187 & 0.643 & 0.674 & 0.590 \\\\\n\\emph{Full} & 727 & \\textbf{0.766} & \\textbf{0.797} & \\textbf{0.702} \\\\\n\\end{tabular}\n\\caption{\\textsc{bigru-lwan}\\xspace with different document zones.}\n\\vspace*{-4mm}\n\\label{tab:zones}\n\\end{table}\n\n\n\\paragraph{First 512 tokens:} Given that \\emph{H+R} contains enough information and is shorter than 500 tokens in 83\\% of our dataset's documents, we also apply \\textsc{bert}\\xspace to the first 512 tokens of each document (truncated to \\textsc{bert}\\xspace's max.\\ length), comparing to \\textsc{bigru-lwan}\\xspace also operating on the first 512 tokens. Table~\\ref{tab:results} (bottom zone) shows that \\textsc{bert}\\xspace outperforms all other methods, even though it considers only the first 512 tokens. It fails, however, in zero-shot learning, since it does not have a component that exploits prior knowledge as is (i.e., all the components are fine-tuned on training data).\n\n\\section{Limitations and Future Work}\n\nOne major limitation of the investigated methods is that they are unsuitable for \\emph{Extreme} Multi-Label Text Classification where there are hundreds of thousands of labels \\cite{Liu2017,Zhang2018,Wydmuch2018}, as opposed to the \\textsc{lmtc}\\xspace setting of our work where the labels are in the order of thousands. We leave the investigation of methods for extremely large label sets for future work. Moreover, \\textsc{rnn}\\xspace- (and \\textsc{gru}\\xspace-) based methods have high computational cost, especially for long documents. We plan to investigate more computationally efficient methods, e.g., dilated \\textsc{cnn}\\xspace{s} \\cite{KalchbrennerESO16} and Transformers \\cite{Vaswani2017, Dai2019}. We also plan to experiment with hierarchical flavors of \\textsc{bert}\\xspace to surpass its length limitations. Furthermore, experimenting with more datasets, e.g., \\textsc{rcv1}\\xspace, Amazon-13K, Wiki-30K and \\textsc{mimic-iii}\\xspace, will allow us to confirm our conclusions in different domains. Finally, we plan to investigate Generalized Zero-Shot Learning \\cite{Liu2018}.
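\n\nFor completeness, the two-tailed approximate randomization test used for the significance results in Section~\\ref{sec:experiments} can be sketched as follows (a simplified version that operates on paired per-document scores rather than on the returned label sets; names and toy data are ours):\n\\begin{verbatim}\nimport random\n\ndef approx_randomization(scores_a, scores_b, iters=10000, seed=0):\n    # Two-tailed approximate randomization test on paired\n    # per-document scores of two systems; returns a p-value.\n    rnd = random.Random(seed)\n    observed = abs(sum(scores_a) - sum(scores_b))\n    count = 0\n    for _ in range(iters):\n        swapped_a, swapped_b = [], []\n        for a, b in zip(scores_a, scores_b):\n            if rnd.random() < 0.5:   # swap the two systems' outputs\n                a, b = b, a\n            swapped_a.append(a)\n            swapped_b.append(b)\n        if abs(sum(swapped_a) - sum(swapped_b)) >= observed:\n            count += 1\n    return (count + 1) \/ (iters + 1)\n\nprint(approx_randomization([0.8, 0.7, 0.9, 0.6], [0.6, 0.7, 0.8, 0.5]))\n\\end{verbatim}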
\n\n\\section*{Acknowledgements}\nThis work was partly supported by the Research Center of the Athens University of Economics and Business.\n\n\\vspace{-3mm}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nNuclear resonant scattering (NRS) of synchrotron radiation (SR) has become an established method for the study of nuclear hyperfine interactions during the last two decades \\cite{Gerdau2000,Roehlsberger2004}. The spectrum is conventionally recorded as the time response of the nuclear ensemble following a short resonant synchrotron pulse, which simultaneously excites all resonant transitions between hyperfine-split nuclear sublevels. The observed beating frequencies are characteristic of the hyperfine fields in the specimen. As an alternative to nuclear resonant forward scattering of SR in the time domain, a heterodyne detection scheme was suggested \\cite{Coussement1996,Labbe2000}. The two scatterers, viz.\\ the specimen under investigation and, as a reference sample, a single-line M\\\"{o}ssbauer absorber, are placed in the beam, the latter mounted on a M\\\"{o}ssbauer drive. The heterodyne spectrum is the full time integral of the delayed counts, plotted as a function of the Doppler velocity of the reference sample. An advantage of this experimental setup is the similarity of the spectra to those of conventional energy-domain M\\\"{o}ssbauer spectroscopy \\cite{Coussement1996,Labbe2000}. Although conventional M\\\"{o}ssbauer spectroscopy delivers similar information on hyperfine interactions, the special properties of SR like high collimation, high degree of polarization and high brilliance increase the number of possible applications of NRS of SR.\nFurthermore, the heterodyne setup allows for dense bunch modes of the synchrotron (with bunch separation time much shorter than the nuclear lifetime), which are not suitable for time differential NRS experiments.\n\nUndistorted time integration of the nuclear response can only be performed if the large non-resonant intensity contribution is extinguished. Experimentally, this can be achieved by using radiation from a nuclear monochromator \\cite{Smirnov1997} or by applying a polarizer\/analyzer setup \\cite{Labbe2000}. An alternative approach, namely ``stroboscopic detection'', is based on appropriate time gating \\cite{Callens2002,Callens2003}, i.e., integration of the delayed time response in a periodic time window. The period $t_p$ of the observation time window after the SR pulse is chosen so that $1\/t_p$ falls within the frequency range of the hyperfine interactions in the investigated specimen. This leads to a new type of periodic resonances at certain Doppler velocities that are shifted from the M\\\"{o}ssbauer resonances by $mh\/t_p$, with $h$ being Planck's constant, and $m$ an integer number indicating the stroboscopic order \\cite{Callens2002,Callens2003}. The period $t_p$ should be selected according to the hyperfine spectral range, the synchrotron bunch period and the detector dead time \\cite{Inge2004,Callens2003}.\n\nSo far, the theory of the stroboscopic detection scheme has only been developed and discussed in detail for the forward scattering geometry. The several applications of NRS in surface and thin-film magnetism that make use of the grazing incidence geometry \\cite{Roehlsberger2004,Chumakov1999,Roehlsberger1999,Deak1999a,Roehlsberger2003,Sladecek2002} call for computer programs that also allow fitting data obtained by stroboscopic detection.
\nIn this geometry, the interferences of the SR plane waves, scattered from the surface and interfaces of a stratified sample, provide information on the value, direction and topology of the internal fields in the sample with nanometer depth resolution. \n\nRecently, interesting experiments have been performed using stroboscopic detection in the grazing incidence case \\cite{Roehlsberger2010,Roehlsberger2012}, demonstrating the potential of this method. \n\nGrazing-incident NRS of SR, often called Synchrotron M\\\"{o}ssbauer Reflectometry (SMR) \\cite{Gerdau2000,Deak01,Deak02}, has been established in both the time and the angular regime \\cite{Chumakov1999,Deak02,Nagy1999}, as time differential (TD) and time integral (TI) SMR, respectively. In the forward scattering channel, the prompt electronic scattering contributes homogeneously to the stroboscopic spectrum and does not affect the spectral shape. For other scattering channels, including grazing incidence reflection, the interference of the electronic and nuclear scattering provides further information. The stroboscopic SMR line shape may considerably differ from the forward M\\\"{o}ssbauer spectrum, calling for a specialized computer code.\n\nThe dynamical theory of x-ray scattering gives a self-consistent description of the radiation field in all scattering channels of the system of scatterers, taking all orders of multiple scattering into account. Theories that extend the coherent elastic scattering to the case of sharp nuclear resonances \\cite{Afanasev1963,Kagan1964,Hannon68,Hannon69,Hannon3} have been applied to various scattering geometries. The simplest cases are the one-beam cases, such as forward and off-Bragg scattering, and the two-beam cases, the Bragg--Laue scattering \\cite{Hannon69,Sturhahn1994} and the grazing incidence scattering \\cite{Roehlsberger2003,Hannon69,Hannon85,Andreeva1,Deak96}. In the grazing incidence limit, an optical model was derived from the dynamical theory \\cite{Hannon69,Hannon85}, which has been implemented in numerical calculations \\cite{Roehlsberger2003}. The reflectivity formulae given by \\citeasnoun{Deak96} and \\citeasnoun{Deak01} are suitable for fast numerical calculations in order to actually fit the experimental data \\cite{Spiering00} and, as has been shown \\cite{Deak1999}, this optical method is equivalent to the other approaches in the literature \\cite{Roehlsberger2003,Hannon85}.\n\nThe aim of the present paper is to develop the concept of heterodyne\/stroboscopic detection and to establish a formula that can be applied to any scattering channel, like forward scattering, Bragg, off-Bragg and grazing incidence scattering.\n\nThis paper is organized as follows. In the second section, the heterodyne\/stroboscopic intensity formula for the propagation of $\\gamma $--photons in a medium containing both electronic and resonant nuclear scatterers is derived. The equivalence to the previously discussed calculations for the forward channel \\cite{Callens2002,Callens2003} is shown, and the important specific case of stroboscopic grazing incidence reflection is outlined.
In the third section, features of the grazing incidence case are demonstrated by least-squares fitted experimental stroboscopic SMR spectra of isotope-periodic $\\left[ ^{\\mathrm{nat}}\\mathrm{Fe}\/^{57}\\mathrm{Fe}\\right] $ and antiferromagnetically ordered $\\left[ ^{57}\\mathrm{Fe}\/\\mathrm{Cr}\\right] $ multilayer films.\n\n\\section{Heterodyne\/Stroboscopic detection of Nuclear\\newline Resonance Scattering}\n\n\\subsection{General considerations}\n\nThe setup of a heterodyne\/stroboscopic NRS of SR experiment includes two scatterers, the investigated specimen and an additional reference sample \\cite{Coussement1996,Labbe2000,Callens2002,Callens2003}, the latter being mounted on a M\\\"{o}ssbauer drive (in forward scattering geometry). The M\\\"{o}ssbauer drive provides a Doppler shift $E_v=\\left( v\/c\\right) E_0$ of the nuclear energy levels, with $c$, $v$ and $E_0$ being the velocity of light, the velocity of the drive and the energy of the M\\\"{o}ssbauer transition, respectively.\nThe polarization dependence of the nuclear scatterers is described, adopting the notation of \\citeasnoun{Sturhahn1994}, by $2\\times 2$ transmissivity and reflectivity matrices commonly called \\textit{scattering matrices}.\nThe scattering of the synchrotron photons on the specimen and the reference sample is described by the \\textit{total scattering matrix} $T_\\tau \\left( E,E_v\\right)$,\n\\begin{equation}\nT_\\tau \\left( E,E_v\\right) =T_\\tau ^{\\left( \\mathrm{s}\\right) }\\left(E\\right) T^{\\left( \\mathrm{r}\\right) }\\left( E-E_v\\right) , \\label{tottrans}\n\\end{equation}\na product of the scattering matrices of the reference sample $T^{\\left(\\mathrm{r}\\right)} $ and of the investigated specimen $T^{\\left(\\mathrm{s}\\right)}$ \\cite{Blume68} in the energy domain.\nThe index $\\tau$ specifies the open scattering channel \\cite{Hannon69,Sturhahn1994}.\nThe scattering matrix $T^{\\left( \\mathrm{r}\\right) }\\left( E-E_v\\right) \\,$ of the reference sample depends on the Doppler-shifted energy $E-E_v$; its channel index $\\tau $ is omitted, since the reference is in forward scattering geometry. Note that both the electrons and the resonant M\\\"{o}ssbauer nuclei scatter the $\\gamma $--photons coherently, so that the scattering matrices have a resonant nuclear and a nearly energy-independent electronic contribution. At energies far from the M\\\"{o}ssbauer resonances $\\left( E\\rightarrow \\pm \\infty \\right) $ on a hyperfine scale, the individual scattering matrices $T^{\\left( \\mathrm{s},\\mathrm{r}\\right) }\\left( E\\rightarrow \\pm \\infty \\right) $, and thus their product $T_\\tau \\left( E\\rightarrow \\infty ,E_v\\right) \\equiv T_{\\tau ,\\infty }$ in Eq.~(\\ref{tottrans}), approach the non-resonant electronic contribution: \n\\begin{equation}\nT_{\\tau ,\\infty }=T_{\\tau ,\\mathrm{el}}^{\\left( \\mathrm{s}\\right) }T_{\\mathrm{el}}^{\\left( \\mathrm{r}\\right) }.
\\label{eltottrans}\n\\end{equation}\nSince the reference is mounted in forward scattering geometry, its scattering matrix $T^{\\left( \\mathrm{r}\\right) }\\left( E\\right) $ is the matrix exponential\n\\begin{equation}\nT^{\\left( \\mathrm{r}\\right) }\\left( E\\right) =\\exp \\left[ ikd^{\\left( \\mathrm{r}\\right) }n^{\\left( \\mathrm{r}\\right) }\\left( E\\right) \\right] ,\n\\label{Blumeexp}\n\\end{equation}\nwhere $n^{\\left( \\mathrm{r}\\right) }\\left(E\\right)$ is the index of refraction, $d^{\\left( \\mathrm{r}\\right) }$ the thickness and $k$ the vacuum wave number of the incident radiation \\cite{Blume68,Lax51}. The index of refraction is related to the susceptibility matrix $\\chi$ \\cite{Deak01,Deak96} through \n\\begin{equation}\nn^{\\left( \\mathrm{r}\\right) }\\left( E\\right) \\equiv I+\\frac{\\chi ^{\\left( \\mathrm{r}\\right) }\\left( E\\right) }2, \\label{khidef}\n\\end{equation}\nwhere $I$\\ is the unit matrix and $\\chi ^{\\left( \\mathrm{r}\\right) }=\\frac{4\\pi N^{\\left( \\mathrm{r}\\right) }}{k^2}f^{\\left( \\mathrm{r}\\right) }$, with $N^{\\left( \\mathrm{r}\\right) }$ and $f^{\\left( \\mathrm{r}\\right) }$ being the density of the scattering centers and the $2\\times 2$\\ coherent forward scattering amplitude \\cite{Blume68}, respectively. The susceptibility is the sum of the electronic and the nuclear susceptibilities,\n\\begin{equation}\n\\chi ^{\\left( \\mathrm{r}\\right) }\\left( E\\right) =\\chi _{\\mathrm{el}}^{\\left( \\mathrm{r}\\right) }+\\chi _{\\mathrm{nuc}}^{\\left( \\mathrm{r}\\right) }\\left( E\\right) . \\label{khisum}\n\\end{equation}\nWith Eqs.~(\\ref{Blumeexp}), (\\ref{khidef}) and (\\ref{khisum}), the transmissivity of the reference is expressed as a product of electronic and nuclear transmissivities,\n\\begin{equation}\nT^{\\left( \\mathrm{r}\\right) }\\left( E\\right) =T_{\\mathrm{el}}^{\\left( \\mathrm{r}\\right) }\\tilde{T}_{\\mathrm{nuc\\vphantom{l}}}^{\\left( \\mathrm{r}\\right) }\\left( E\\right) , \\label{sepel}\n\\end{equation}\nwhere \n\\begin{equation}\n\\tilde{T}_{\\mathrm{nuc}}^{\\left( \\mathrm{r}\\right) }\\left( E\\right) =\\exp \\left( ikd^{\\left( \\mathrm{r}\\right) }\\frac{\\chi _{\\mathrm{nuc}}^{\\left( \\mathrm{r}\\right) }\\left( E\\right) }2\\right) . \\label{expnukl}\n\\end{equation}\n\n$T_\\tau^{\\left(\\mathrm{s}\\right)}\\left(E\\right)$ is determined from the theory of wave propagation appropriate for channel $\\tau $ (forward, Bragg--Laue, grazing incidence, etc.), i.e., from the dynamical theory \\cite{Hannon68,Hannon69,Hannon3,Sturhahn1994,Hannon85}.\n\nIn forward scattering, due to the exponential expression in (\\ref{Blumeexp}), the total transmissivity $T\\left( E,E_v\\right) =T_\\infty \\tilde{T}_{\\mathrm{nuc}}^{\\left( \\mathrm{r}\\right) }\\left( E-E_v\\right) \\tilde{T}_{\\mathrm{nuc}}^{\\left( \\mathrm{s}\\right) }\\left( E\\right) $ is proportional to $T_\\infty$. Therefore, in this special case, the electronic scattering is a simple multiplicative factor, which does not affect the spectral shape.
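\n\nAs a numerical illustration of the matrix exponential in Eqs.~(\\ref{Blumeexp})--(\\ref{expnukl}), the following sketch evaluates the $2\\times 2$ transmissivity of a single-line reference on an energy grid (all parameter values and the isotropic Lorentzian form of the nuclear susceptibility are toy assumptions for illustration only):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\nk = 7.3e10                  # vacuum wave number at 14.4 keV, 1\/m\nd = 1.0e-5                  # reference thickness, m\nE0, Gamma = 0.0, 4.66e-9    # resonance position and width, eV\nchi_el = (-1.0e-6 + 1.0e-8j) * np.eye(2)  # electronic susceptibility\n\ndef chi_nuc(E):\n    # isotropic single-line Lorentzian nuclear susceptibility\n    return 3.0e-14 \/ (E0 - E - 1j * Gamma \/ 2) * np.eye(2)\n\ndef transmissivity(E):\n    # T(E) = exp[i k d (I + chi\/2)], with the scalar phase factored out\n    chi = chi_el + chi_nuc(E)\n    return np.exp(1j * k * d) * expm(1j * k * d * chi \/ 2)\n\nfor E in [-5 * Gamma, -Gamma, 0.0, Gamma, 5 * Gamma]:\n    print(abs(transmissivity(E)[0, 0]))   # resonant absorption dip\n\\end{verbatim}\nThe pronounced dip of $|T_{11}(E)|$ around the resonance reflects the nuclear absorption on top of the weak electronic attenuation.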
\n\nThe intensity $I_{\\tau }$, allowing for a general polarization state of the incident beam described by the $2\\times 2$ polarization density matrix $\\rho$ of \\citeasnoun{Blume68}, is given by \n\\begin{equation}\nI_{\\tau }\\left( E,E_{v}\\right) =\\mathrm{Tr}\\left[ T_{\\tau }^{\\dagger }\\left(E,E_{v}\\right) T_{\\tau }\\left( E,E_{v}\\right) \\rho \\right] .\n\\label{Blumeint}\n\\end{equation}\n\nThe beating time response to a single short polychromatic photon bunch of SR is obtained by the Fourier transform of the energy domain scattering matrices \\cite{Gerdau2000,Hannon3},\n\\begin{equation}\nT_{\\tau }\\left( t,E_{v}\\right) =\\frac{1}{\\sqrt{2\\pi }\\hbar }\\int \\mathrm{d}E\\text{\\thinspace }\\left[ T_{\\tau }\\left( E,E_{v}\\right) -T_{\\tau ,\\infty }\\right] \\exp \\left( -i\\frac{E}{\\hbar }t\\right), \\label{timetrans}\n\\end{equation}\nwhere, by subtracting the constant $T_{\\tau ,\\infty }$, the Dirac delta--like prompt $(t=0)$ and the delayed $\\left( t>0\\right) $ time responses are separated \\cite{Sturhahn1994}. We note that Eq.~(\\ref{timetrans}) is valid only for delayed times $t>0$ after the SR bunch $(t=0)$, with $T_{\\tau }\\left( t,E_{v}\\right) = 0$ for $t<0$. In the same way as for Eq.~(\\ref{Blumeint}), the delayed intensity in the time domain becomes\n\\begin{equation}\nI_{\\tau }\\left( t,E_{v}\\right) =\\mathrm{Tr}\\left[ T_{\\tau }^{\\dagger }\\left(t,E_{v}\\right) T_{\\tau }\\left( t,E_{v}\\right) \\rho \\right] .\n\\label{Blumeinttim}\n\\end{equation}\n\nFor a heterodyne\/stroboscopic NRS of SR experiment, a time window function is introduced, which can be described by boxcar functions, namely, $S(t)=1$ for $mt_{\\mathrm{B}}+t_{1}<t<mt_{\\mathrm{B}}+t_{2}$ and $S(t)=0$ otherwise, where the integer $m$ counts the windows, which repeat with the period $t_{\\mathrm{B}}$. The time-integrated delayed count rate is then\n\\begin{equation}\nD_{\\tau }\\left( E_{v}\\right) =\\int_{0}^{\\infty }\\mathrm{d}t\\,S\\left( t\\right) I_{\\tau }\\left( t,E_{v}\\right) . \\label{riet2}\n\\end{equation}\nExpanding the periodic window function $S(t)$ into a Fourier series and inserting Eqs.~(\\ref{timetrans}) and (\\ref{Blumeinttim}), the delayed count rate becomes a sum over stroboscopic orders,\n\\begin{equation}\nD_{\\tau }\\left( E_{v}\\right) =\\sum_{m=-\\infty }^{\\infty }s_{m}\\delta _{m}\\left( E_{v}\\right) , \\label{finstrobo}\n\\end{equation}\nwhere the weights $s_{m}$ are the Fourier coefficients of $S(t)$ and\n\\begin{equation}\n\\delta _{m}\\left( E_{v}\\right) =\\frac{1}{\\hbar }\\int \\mathrm{d}E\\,\\mathrm{Tr}\\left\\{ \\left[ T_{\\tau }^{\\dagger }\\left( E-m\\varepsilon ,E_{v}\\right) -T_{\\tau ,\\infty }^{\\dagger }\\right] \\left[ T_{\\tau }\\left( E,E_{v}\\right) -T_{\\tau ,\\infty }\\right] \\rho \\right\\} , \\label{defalfa}\n\\end{equation}\nwith\n\\begin{equation}\n\\varepsilon =\\frac{h}{t_{\\mathrm{B}}} \\label{epsdef}\n\\end{equation}\nbeing the energy shift between consecutive stroboscopic orders. The $m>0$ terms were called ``stroboscopic resonances'' of order $m$ \\cite{Callens2002}. Nevertheless, the stroboscopic resonances are not restricted to the forward scattering case. They also appear in other experimental geometries, including, as we show below, the grazing incidence scattering geometry.\n\n\\subsection{Grazing incidence geometry}\n\nIn what follows, stroboscopic SMR spectra will be discussed. In terms of the dynamical theory, grazing incidence is a two-beam case. The $\\tau =0^{+}$ transmission and the $\\tau =0^{-}$ reflection channels are open \\cite{Roehlsberger2004,Hannon69,Sturhahn1994}, the latter being observed in SMR. Close to the electronic total reflection, the reflected intensity is high. Therefore, SMR is an experimentally fairly instructive special case. The reflection from the surface of the specimen is a multiple coherent scattering process of the (SR) photons on atomic electrons and resonant M\\\"{o}ssbauer nuclei \\cite{Deak01,Hannon85,Deak96}. As in the forward case, this scattering is independent of the atomic positions in the reflecting medium, so that the medium can be described by its index of refraction $n\\left( E\\right) $ \\cite{Deak96,Lax51}. Henceforth, in compliance with the literature \\cite{Roehlsberger2003,Deak01,Deak96}, in the general theory the scattering matrix $T_\\tau ^{\\left( \\mathrm{s}\\right) }\\left( E\\right) $ will be replaced by the $2\\times 2$ reflectivity matrix $R^{\\left( \\mathrm{s}\\right) }\\left( E,\\theta \\right) $, where $\\theta $ is the angle of incidence. This takes into account the interferences of the reflected radiation from the surfaces and interfaces between the layers with different indices of refraction.
The methods of calculating the reflectivity matrix can be found in the literature \\cite{Roehlsberger2003,Deak01,Deak96}. Accordingly, the total scattering matrix of the specimen and the reference from Eq.~(\\ref{tottrans}) is \n\\begin{equation}\nT\\left( E,E_v,\\theta \\right) =R^{\\left( \\mathrm{s}\\right) }\\left( E,\\theta\\right) T^{\\left( \\mathrm{r}\\right) }\\left( E-E_v\\right) .\n\\label{reftottrans}\n\\end{equation}\nSimilarly, for energies far from the M\\\"{o}ssbauer resonances, Eq.~(\\ref{eltottrans}) reads \n\\begin{equation}\nT_\\infty \\left( \\theta \\right) =R_{\\mathrm{el}}^{\\left( \\mathrm{s}\\right)}\\left( \\theta \\right) T_{\\mathrm{el}}^{\\left( \\mathrm{r}\\right) }.\n\\label{refeltottrans}\n\\end{equation}\nInserting $T\\left( E,E_v\\right) $ and $T_\\infty $ into Eq.~(\\ref{finstrobo}), the delayed count rate $D\\left( E_v,\\theta \\right)$ of the heterodyne\/stroboscopic spectrum for grazing incidence (the stroboscopic SMR intensity) on the specimen is calculated.\n\nCombining Eqs.~(\\ref{sepel}), (\\ref{expnukl}) and (\\ref{defalfa}) yields\n\\begin{eqnarray}\n\\delta _m\\left( E_v,\\theta \\right) & = & \\frac{A^{\\left( \\mathrm{r}\\right) }}\\hbar \\int \\mathrm{d}E\\mathrm{Tr}\\left\\{ \\left[ \\tilde{T}^{\\dagger }\\left(E-E_v-m\\varepsilon \\right) R^{\\dagger }\\left( E-m\\varepsilon \\right) -R_{\\mathrm{el}}^{\\dagger }\\right]\\right. \\notag \\\\\n& & \\qquad \\times \\left.\\left[ R\\left( E\\right) \\tilde{T}\\left( E-E_v\\right) -R_{\\mathrm{el}}\\right] \\rho \\right\\} , \\label{dlform}\n\\end{eqnarray}\nwhere $A^{\\left( \\mathrm{r}\\right) }=\\left| T_{\\mathrm{el}}^{\\left( \\mathrm{r}\\right) }\\right| ^2$ is the electronic absorption factor of the reference sample. For the sake of simplicity, the indices on the right hand side have been omitted, so that $\\tilde{T}_{\\mathrm{nuc}}^{\\left( \\mathrm{r}\\right)}\\rightarrow \\tilde{T}$, $R_{\\mathrm{el}}^{\\left( \\mathrm{s}\\right)}\\left( \\theta \\right) \\rightarrow R_{\\mathrm{el}}$ and $R^{\\left( \\mathrm{s}\\right) }\\left( E,\\theta \\right) \\rightarrow R\\left( E\\right) $. Note that all reflectivities are those of the specimen, and all transmissivities are those of the reference sample. With the relevant angular parameter $\\theta $ for grazing incidence, Eq.~(\\ref{finstrobo}) reads \n\\begin{equation}\nD\\left( E_v,\\theta \\right) =\\sum_{-\\infty }^\\infty s_m\\delta _m\\left(E_v,\\theta \\right) . \\label{simpstrobo}\n\\end{equation}\nThe observed nuclear, as well as stroboscopic, resonances can be interpreted in a straightforward manner using Eq.~(\\ref{dlform}). Indeed, far from the resonances, $R\\left( E\\rightarrow \\infty \\right) =R_{\\mathrm{el}}$ and $\\tilde{T}\\left( E\\rightarrow \\infty \\right) =1$, and the differences in the square brackets in (\\ref{dlform}) vanish. We expect a significant contribution to the energy integral only if at least one energy argument of each bracket is close to resonance, i.e., either \n\\begin{subequations}\n\\label{co1}\n\\begin{align}\nE-E_v-m\\varepsilon &\\simeq 0 \\quad \\text{and} \\label{co1a} \\\\\nE &\\simeq E_i \\label{co1b}\n\\end{align}\n\\end{subequations}\nor \n\\begin{subequations}\n\\label{co2}\n\\begin{align}\nE-m\\varepsilon & \\simeq E_i \\quad \\text{and} \\label{co2a} \\\\\nE-E_v & \\simeq 0 \\label{co2b}\n\\end{align}\n\\end{subequations}\nare fulfilled, where $E_i$ is the energy of the \\textit{i}$^{\\mathrm{th}}$ M\\\"{o}ssbauer resonance of the specimen.
The $m^{\\mathrm{th}}$ term of the sum in\nEq.~(\\ref{simpstrobo}) contributes considerably if the Doppler velocity is\nnear the corresponding shifted M\\\"{o}ssbauer resonance. In this case,\n\\begin{subequations}\n\\label{fincond}\n\\begin{align}\nE_v &=E_i-m\\varepsilon +\\Delta , \\label{finconda} \\\\\nE_v &=E_i+m\\varepsilon +\\Delta . \\label{fincondb}\n\\end{align}\n\\end{subequations}\nHere, $\\Delta $ is a small deviation (of the order of the resonance line width)\nfrom the energy $E_i-m\\varepsilon $ or $E_i+m\\varepsilon $; accordingly,\nstroboscopic resonances appear also in the grazing incidence geometry. In\nthe case of $m=0$, all four conditions of Eqs.~(\\ref{co1}) and (\\ref{co2})\nmay be true simultaneously. This means that, for $m=0$, nuclear scattering in\nboth samples, i.e., ``the radiative coupling of the samples'' \n\\cite{Callens2003}, also contributes. Hence, the dynamical line broadening\n(coherent speed-up) is most effective in the heterodyne spectrum\n(baseline and resonances of stroboscopic order $0$).\n\nIn order to perform computer simulations of stroboscopic spectra, Eqs.~(\\ref{defalfa}) and (\\ref{dlform}) were calculated for the forward scattering and\nSMR cases, respectively. Eqs.~(\\ref{defalfa}), (\\ref{dlform}) and (\\ref{riet2}) were implemented in the evaluation program EFFI \\cite{Deak01,Spiering00}.\nThis program allows for least-squares fitting\nof stroboscopic spectra. Moreover, they can be fitted \\textit{simultaneously}\nwith other types of spectra of the same specimen, such as forward\nscattering, grazing incidence, conventional M\\\"{o}ssbauer and other spectra\nof the implemented theory \\cite{Deak01,Spiering00}. This way, the fit\nconstraints on the common parameters become very general, as already\ndescribed \\cite{Deak01,Spiering00}.
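\n\nBefore turning to the experiments, the following short Python sketch (illustrative only, not part of the evaluation code) lists the Doppler velocities at which Eq.~(\\ref{fincond}) predicts stroboscopic resonances of orders $m=0,\\pm 1$. The $\\alpha $--Fe sextet line positions and the value $\\varepsilon \\approx 10.93~\\mathrm{mm}\/\\mathrm{s}$ (derived in Eq.~(\\ref{stroboshift}) below) are assumed here purely for illustration.\n\\begin{verbatim}\n# Schematic check of Eq. (fincond): stroboscopic resonance positions\n# E_v = E_i -\/+ m*eps for Moessbauer lines E_i and stroboscopic order m.\n# Assumed inputs: alpha-Fe sextet line positions (mm\/s) and the\n# stroboscopic shift eps of the experiments described below.\nE_i = [-5.31, -3.08, -0.84, 0.84, 3.08, 5.31]  # alpha-Fe lines, mm\/s\neps = 10.93                                    # stroboscopic shift, mm\/s\nfor m in (-1, 0, 1):\n    lines = sorted(e + m * eps for e in E_i)\n    print('order m = %+d:' % m, [round(x, 2) for x in lines])\n\\end{verbatim}\nFor $m=\\pm 1$ the predicted lines fall in the wings of the velocity spectrum, already suggesting the partial overlap with the $m=0$ pattern discussed below.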
\\section{Experimental results and discussion}\n\nIn order to test the feasibility of this new reflectometric scheme, we investigated two film\nspecimens, a $^{\\mathrm{nat}}\\mathrm{Fe}\/^{57}\\mathrm{Fe}$ isotopic and a $^{57}$Fe\/Cr antiferromagnetic multilayer, in grazing\nincidence reflection geometry, using the $14.4~\\mathrm{keV}$ M\\\"{o}ssbauer\ntransition of $^{57}$Fe nuclei. The experiments were performed at the BL09XU\nnuclear resonance beam line of SPring-8 \\cite{Yoda2001}. The\nexperimental setup is shown in Fig.~\\ref{expsetup}. The synchrotron was\noperated in the 203-bunch mode, corresponding to a bunch separation time of $23.6\\ \\mathrm{ns}$. The SR was monochromated by a Si(422)\/Si(12~2~2) double\nchannel-cut high resolution monochromator with $6~\\mathrm{meV}$ resolution.\nIt was incident on the K$_4$[$^{57}$Fe(CN)$_6$] single line pelleted\nreference sample of effective thickness 11, and on the multilayer specimen\ndownstream, mounted in grazing incidence geometry (Fig.~\\ref{expsetup}). The \nM\\\"{o}ssbauer drive was operated in constant acceleration mode, with a\nmaximum velocity of $v_{\\mathrm{max}}=20.24~\\mathrm{mm\/s}$. This maximum was\ncalibrated by fitting the velocity separation of the stroboscopic orders in\na forward scattering stroboscopic spectrum of a single line \n$^{57}\\mathrm{Fe}$-enriched stainless steel absorber \n\\cite{Callens2002,Callens2003}. The delayed radiation was detected using \nthree $2~\\mathrm{ns}$ dead time\nHamamatsu avalanche photo diodes (APD) in series. To record the delayed\nintensity, a two-dimensional data acquisition system was used. Each count\nwas indexed according to the time elapsed after the synchrotron pulse (1024\nchannels), as well as to the velocity of the reference (1024 channels).\nThese stroboscopic SMR data were time integrated using appropriate time\nwindows of $t_p=7.87~\\mathrm{ns}$ period and $3.93~\\mathrm{ns}$ length\n\\cite{Callens2002,Callens2003}. Since the energy is measured in mm\/s, the shift of\nthe first stroboscopic order, Eq.~(\\ref{epsdef}), can be rewritten as \n\\begin{equation}\n\\varepsilon ~\\left[ \\mathrm{mm}\/\\mathrm{s}\\right] =1000\\frac{\\lambda ~\\left[ \n\\mathrm{nm}\\right] }{t_p~\\left[ \\mathrm{ns}\\right] }. \\label{stroboshift}\n\\end{equation}\nWith the wavelength $\\lambda \\approx 0.086~\\mathrm{nm}$ for the M\\\"{o}%\nssbauer transition of $^{57}$Fe, the separation between the neighbouring\nstroboscopic orders can be calculated to be $\\varepsilon \\approx 10.93~%\n\\mathrm{mm}\/\\mathrm{s}$. Note that this is the range of the hyperfine\nsplitting in case of $\\alpha -\\mathrm{Fe}$ (outer line separation is $10.62~%\n\\mathrm{mm}\/\\mathrm{s}$ at room temperature), and the stroboscopic orders\nwould only slightly overlap in case of a sample of low effective thickness\nin forward scattering. However, in case of grazing incidence near the\ncritical angle of total external reflection, due to the enhanced nuclear and\nelectronic multiple scattering, the M\\\"{o}ssbauer lines become extremely\nbroad and a strong overlap of the stroboscopic orders is expected. This\ninterference and partial overlap are manifested in rather complex resonance\nline shapes and an intriguing angular dependence of the delayed intensity in\nthe various stroboscopic orders.\n\nBoth multilayers were prepared under ultra-high vacuum conditions by\nmolecular beam epitaxy at the IMBL facility in IKS Leuven. The\\thinspace $[^{\\mathrm{nat}}\\mathrm{Fe}\/^{57}\\mathrm{Fe]}_{10}$ multilayer was prepared at room\ntemperature onto a Zerodur glass substrate. The first layer and all other $^{57}\\mathrm{Fe}$-layers were 95.5\\% isotopically enriched, and were\ngrown from a Knudsen cell. The natural Fe layers, which have a $^{57}\\mathrm{Fe}$-concentration of 2.17\\%, were grown from an electron gun\nsource. The nominal layer thickness was $3.15~\\mathrm{nm}$ throughout the\nmultilayer stack for both $^{\\mathrm{nat}}\\mathrm{Fe}$ and $^{57}\\mathrm{Fe}$.\nConversion electron M\\\"{o}ssbauer spectra showed a pure $\\alpha -\\mathrm{Fe}$ spectrum. This spectrum was compared to a transmission M\\\"{o}ssbauer\nspectrum of a natural iron calibration specimen, which was\nprovided by Amersham. Both hyperfine magnetic fields were fitted to be\nidentical within the experimental error of 0.04\\%, and no sign of any second\nphase contamination was found.\n\nPreparation and characterization of the MgO(001)\/[$^{57}$Fe\/Cr]$_{20}$\nmultilayer sample have been described earlier \n\\cite{BottyanBSF1,Nagy02a,Tancziko2004}. The layering was verified as epitaxial\nand periodic, with thicknesses of $2.6~\\mathrm{nm}$ for the $^{57}$Fe layer,\nand $1.3~\\mathrm{nm}$ for the Cr layer. SQUID magnetometry showed dominantly\nantiferromagnetic coupling between neighboring Fe layers. According to\nprevious studies on this multilayer \\cite{BottyanBSF1,Nagy02a,Tancziko2004},\nthe magnetizations in Fe align to the [100] and [010] perpendicular easy\ndirections in remanence, respectively corresponding to the [110] and \n[$\\overline{1}$10] directions of the MgO substrate. 
The layer magnetizations\nwere aligned antiparallel in the consecutive Fe layers by applying a\nmagnetic field (1.6~T) above the saturation value (0.96~T) in the Fe[010]\neasy direction of magnetization, and then releasing the field to remanence.\nThis alignment is global; the antiferromagnetic domains differed only\nin the layer sequence of the parallel\/antiparallel orientations \n\\cite{Nagy02a}.\n\n\\subsection{Stroboscopic SMR on a $^{\\mathrm{nat}}$Fe\/$^{57}$Fe multilayer}\n\nSince in a $^{\\mathrm{nat}}\\mathrm{Fe}\/^{57}\\mathrm{Fe}$ isotope-periodic\nmultilayer the hyperfine field of $^{57}\\mathrm{Fe}$ is that \nof $\\alpha -\\mathrm{Fe}$ throughout the sample, this multilayer is \nparticularly suitable for studying the modification of the resonance \nline shapes due to interference between nuclear and electronic scattering \n\\cite{Deak1999a,Deak1994,Chumakov1991,Chumakov1993}. Fig.~\\ref{fefe} shows \nresults for the multilayer saturated in a transversal magnetic field of 50~mT. \nPanels \\textit{a} and \\textit{b} give the prompt electronic and delayed TISMR curves,\nrespectively. The stroboscopic SMR spectra at the angles indicated by the\narrows are given in panels \\textit{c} to \\textit{e}. The peak in the delayed\nreflectivity at the total reflection angle in panel \\textit{b} is a special\nfeature of SMR described earlier \\cite{Chumakov1999,Deak1994,Baron1994}. In\npanels \\textit{c} to \\textit{e}, the four resonance lines of the $+1$ and $-1$\nstroboscopic orders (right and left sides, respectively) partially overlap\nwith the $0^{\\mathrm{th}}$ order in the central part of the spectrum.\n\nThe delicate interplay between electronic and nuclear scattering is\ndemonstrated by the considerable difference between the stroboscopic SMR\nspectra \\textit{c} to \\textit{e} in Fig.~\\ref{fefe}, which are taken at only\nslightly different grazing angles. In contrast to the symmetric forward\nscattering spectra \\cite{Callens2002,Callens2003}, the stroboscopic SMR\nspectra are asymmetric due to the interference between the electronic and\nnuclear scattering. They also display both ``absorption-like'' and\n``dispersion-like'' resonance line shape contributions. In the case of decreased\nnuclear scattering strength and the same electronic reflectivity (cf.\npanels \\textit{d} and \\textit{e} in Fig.~\\ref{fefe}), the signal to baseline\nratio of the central part (heterodyne spectrum) decreases as compared to the\nsignal to baseline ratio of stroboscopic orders $\\pm 1$\\ in the spectrum\nwings.\n\nThe full lines are simultaneous least squares fits, using the theory\noutlined above and the computer code EFFI \\cite{Spiering00}. The\ninterference between nuclear and electronic scattering makes it possible to\nfit the layer structure in this isotope-periodic multilayer. The fitted\nvalue of the total thickness of pure $\\alpha -\\mathrm{Fe}$ is $42.5\\ \\mathrm{nm}$,\ncomprising nine times $1.49\\ \\mathrm{nm}$ of $^{\\mathrm{nat}}\\mathrm{Fe}$\nand $3.23\\ \\mathrm{nm}$ of $^{57}\\mathrm{Fe}$, with $0.4\\ \\mathrm{nm}$\ncommon roughness at the interfaces. In order to achieve the simultaneous\nfit, displayed by the full line in Figure \\ref{fefe}, we had to assume that\nhalf a bilayer on top and bottom ($^{\\mathrm{nat}}\\mathrm{Fe}$ and $^{57}%\n\\mathrm{Fe}$, respectively) was modified. 
The transversal hyperfine magnetic\nfield was fixed to $33.08\\ \\mathrm{T}$ in the nine $^{57}\\mathrm{Fe}$\/$^{\\mathrm{nat}}\\mathrm{Fe}$ bilayers in the middle of the multilayer, which\nis the room temperature value for $\\alpha -\\mathrm{Fe}$.\n\n\\subsection{Stroboscopic SMR of an antiferromagnetic $^{57}$Fe\/Cr multilayer}\n\nFig.~\\ref{fecraf} and Fig.~\\ref{fecrfm} display similar sets of spectra of\na $^{57}$Fe\/Cr antiferromagnetically coupled epitaxial multilayer on\nMgO(001). The dots are the experimental data points, while the continuous\nlines are simultaneous fits to a model structure of $\\big[ {}^{57}\\mathrm{Fe}\\left( 2.6~\\mathrm{nm}\\right) \/\\mathrm{Cr}\\left( 1.3~\\mathrm{nm}\\right) \\big]_{20}$, based on the respective theory.\n\nNon-resonant reflectivity, TISMR and stroboscopic SMR spectra were recorded\nfirst with the Fe layer magnetizations parallel\/antiparallel (Fig.~\\ref{fecraf}) to the $k$-vector of the SR beam. The stroboscopic spectra were\ntaken at the angles of total reflection (c), at the antiferromagnetic (d)\nand at the structural Bragg peak (e) positions. After this, a magnetic field\nof 20~mT was applied to the multilayer in longitudinal direction. This is\nknown to flop the magnetizations to the perpendicular Fe(010) easy axis \nof magnetization \\cite{BottyanBSF1,Tancziko2004}. Non-resonant\nreflectivity, TISMR and stroboscopic SMR spectra at the same angular\npositions were again collected (Fig.~\\ref{fecrfm}).\n\nThe major difference between Figs.~\\ref{fecraf} and \\ref{fecrfm} is the\npresence and absence, respectively, of the AF Bragg peak in the delayed\nreflectivity curves \\textit{b}. This antiferromagnetic alignment, i.e., the\nlongitudinal hyperfine field of alternating sign in consecutive Fe layers,\nis justified by the simultaneous fit in Fig.~\\ref{fecraf}. In Fig.~\\ref{fecrfm}, the fitted Fe magnetizations are perpendicular to the wave vector\nof the SR. Indeed, the scattering amplitudes depend on the angle between the wave\nvector and the direction of the hyperfine magnetic field. In the case of\nperpendicular orientation, this angle is 90 degrees for consecutive layer\nmagnetizations and no AF contrast can be observed. In case of\nparallel\/antiparallel orientations, however, the angles with respect to\nthe wave vector of SR are 0 and 180 degrees, respectively. Therefore, the\nhyperfine contrast is present and the AF Bragg peak is visible in panel \\textit{b} of\nFig.~\\ref{fecraf}.\n\nThe count rate at the baseline of a stroboscopic SMR spectrum, measured at a\ncertain grazing angle $\\theta $, is closely related to the TISMR spectrum at\nthis angle. Accordingly, the respective experimental count rates of the\nstroboscopic SMR spectra at the AF Bragg peak position (panels \\textit{d}) differ by\nalmost two orders of magnitude. The spectrum in panel \\textit{d} of Fig.~\\ref{fecraf}\nis also the only one for which no considerable enhanced dynamical broadening\ncan be observed.\n\nNote that, in panels \\textit{d}, the zeroth order resonances are considerably enhanced\nwith respect to the $\\pm 1\\,$order stroboscopic resonances. This can be\nexplained by an enhanced radiative coupling of the samples. 
Since the\nradiative coupling does not contribute to the $\\pm 1\\,$order stroboscopic\nresonances, it only influences the baseline and the central resonances.\n\nAt the multilayer Bragg reflections (panel \\textit{e}), and at the total reflection\npeak (panel \\textit{c}), the suppression of the higher stroboscopic orders is much\nsmaller, which means that the radiative coupling term is not dominant here.\nThese spectra also show a left\/right asymmetry due to the variation of the\nphase of the total scattering amplitude with energy. The latter allows for\nphase determination of the scattering amplitude from a set of stroboscopic\nSMR spectra; this work will be published separately.\n\n\\section{Summary}\n\nIn summary, the concept of heterodyne\/stroboscopic detection of nuclear\nresonance scattering was outlined for a general scattering channel, with\nspecial emphasis on the grazing incidence reflection case. In any\nnon-forward scattering channel, the electronic scattering influences the NRS\nspectral shape, while in forward scattering it is a mere multiplicative\nfactor. The interplay between electronic and nuclear scattering, as a\nfunction of the scattering angle, facilitates the determination of the\nelectronic and nuclear scattering amplitudes. The code of the present theory\nhas been merged into the EFFI program \\cite{Spiering00}, and was used in\nsimultaneous data fitting of x-ray reflectivity, time integral reflectivity\nand stroboscopic SMR spectra. Similar to time differential SMR, stroboscopic\nSMR spectra have been shown to be sensitive to the direction of the\nhyperfine fields of the individual layers. Therefore, it is possible to\napply this method to the study of magnetic multilayers and thin films. The\nexperiments on $\\left[ ^{57}\\mathrm{Fe}\\left( 2.6~\\mathrm{nm}\\right) \/\\mathrm{Cr}\\left( 1.3~\\mathrm{nm}\\right) \\right] _{20}$ and $\\left[ ^{\\mathrm{nat}}\\mathrm{Fe}\/^{57}\\mathrm{Fe}\\right] _{10}$ multilayers\ndemonstrated that stroboscopic detection of synchrotron M\\\"{o}ssbauer\nreflectometry of $^{57}$Fe-containing thin films is feasible in dense bunch\nmodes,\nwhich are not necessarily suitable for time differential nuclear resonance\nscattering experiments on $^{57}$Fe.\n\n\\ac\n{The authors gratefully acknowledge the beam time supplied free of charge by\nthe Japan Synchrotron Radiation Institute (JASRI) for experiment No.\n2002B239-ND3-np. Our gratitude goes to Dr. A.Q. Baron (SPring-8, JASRI) for\nkindly supplying the fast Hamamatsu APD detectors and Dr. Johan Dekoster\n(IKS\\ Leuven) for preparing the multilayer samples for the experiment.\nSupport by the Flemish-Hungarian inter-governmental project BIL14\/2002, the\nDYNASYNC Framework Six project of the European Commission (contract number:\nNMP4-CT-2003-001516), the Fund for Scientific Research--Flanders (G.0224.02),\nthe Inter-University Attraction Pole (IUAP P5\/1) and the Concerted Action of\nthe KULeuven (GOA\/2004\/02) is gratefully acknowledged. 
L.~De\\'{a}k and R.~Callens \nthank the Deutscher Akademischer Austauschdienst (DAAD) and the\nFWO-Flanders, respectively, for financial support.}\n\n\\referencelist[oldstrobo-140807]\n\n\\begin{figure}\n\\includegraphics{Fig-1.eps}\n\\caption{Experimental setup for stroboscopic synchrotron M\\\"{o}ssbauer reflectometry.}\n\\label{expsetup}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics{Fig-2.eps}\n\\caption{Prompt electronic (a) and delayed nuclear reflectivity (b) curves,\nas well as stroboscopic SMR spectra (c) to (e), of a $\\left[ ^{\\mathrm{nat}}\\mathrm{Fe}\/^{57}\\mathrm{Fe}\\right] _{10}$ isotopic multilayer at grazing\nangles indicated by the arrows. Vertical dotted lines in panels (c) to (e)\nindicate the center of the zero and $\\pm 1$ order stroboscopic bands\nseparated by $\\protect\\varepsilon \\approx 10.93~\\mathrm{mm}\/\\mathrm{s}$ for\nthe applied observation window period.}\n\\label{fefe}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics{Fig-3.eps}\n\\caption{Prompt electronic (a) and delayed nuclear (b) reflectivity curves,\nas well as stroboscopic SMR spectra (c) to (e), of a $\\mathrm{MgO(001)}\/\\left[\n^{57}\\mathrm{Fe}\/\\mathrm{Cr}\\right] _{20}$ antiferromagnetic multilayer at\nvarious angles indicated by arrows in (b). The consecutive Fe layer\nmagnetizations are aligned parallel\/antiparallel with respect to the SR beam.\nVertical dotted lines in panels (c) to (e) indicate the center of the zero and \n$\\pm 1$ order stroboscopic bands separated by $\\protect\\varepsilon \\approx\n10.93~\\mathrm{mm}\/\\mathrm{s}$ for the applied observation window period.}\n\\label{fecraf}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics{Fig-4.eps}\n\\caption{Prompt electronic (a) and delayed nuclear (b) reflectivity curves,\nas well as stroboscopic SMR spectra (c) to (e), of a $\\left[ ^{57}\\mathrm{Fe}\\left( 2.6\\,\\mathrm{nm}\\right) \/\\mathrm{Cr}\\left( 1.3\\,\\mathrm{nm}\\right) \\right] _{20}\/\\mathrm{MgO}$ antiferromagnetic multilayer at various angles\nindicated by arrows. The consecutive Fe layer magnetizations are aligned\nperpendicular to the SR beam. Vertical dotted lines in panels (c) to (e)\nindicate the center of the zero and $\\pm 1$ order stroboscopic bands\nseparated by $\\protect\\varepsilon \\approx 10.93~\\mathrm{mm}\/\\mathrm{s}$ for\nthe applied observation window period.}\n\\label{fecrfm}\n\\end{figure}\n\n\\end{document} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\\label{intro}\n\nConsider a probability distribution $p$ over a countable alphabet $\\mathcal{X}$. Let $X^n$ be a sample of $n$ independent observations from $p$. Let $N_u(X^n)$ be the number of appearances of the symbol $u$ in the sample. Unobserved (or unseen) events refer to outcomes which do not appear in the sample, $\\mathcal{X}_0(X^n)=\\{u\\;|\\;N_u(X^n)=0\\}$. \nIn this work we study inference of the unobserved. Specifically, we are interested in simultaneous confidence intervals (SCIs) for the parameters $\\mathcal{M}_p(X^n)=\\{p(u)\\;|\\;N_u(X^n)=0\\}$. This problem is of high interest in a variety of domains. For example, in $2006$ the New Zealand Ministry of Health reported new cancer cases among local populations \\cite{NZ_data}. Specifically, they focused on minorities in different age groups. Their study distinguished between $75$ cancer types. Their report indicated several interesting findings. 
For example, for Maori females under the age of $30$, there was a total of $58$ new cancer cases that year. Importantly, out of the $75$ studied cancer types, only $37$ were observed. Does that mean that the remaining $38$ unobserved types of cancer are unlikely to appear in this population? The answer is obviously no, and we would like to infer the likelihood of these unobserved cancer types. Naturally, this example is just a special case of a broader problem, where multiple outcomes are studied and the sample size is limited.\n\n\nThe classical approach for the studied problem is based on the \\textit{rule-of-three}. The rule-of-three (ROT) suggests that for a sample of $n$ observations from a Bernoulli distribution, an approximate CI of level $1-\\alpha$ is given by $[0,-\\log(\\alpha)\/n]$ (as shown in Section \\ref{Previous_Work}). Extending this result to a multinomial setup requires a simple multiplicity correction. That is, \na simultaneous CI (of level $1-\\alpha$) for the unobserved is given by $[0,-\\log(\\alpha\/k)\/n]$. As we can see, the obtained CI grows with the alphabet size $k$ and may be too conservative. More importantly, it requires the knowledge of $k$, which is not always available. For example, it is well-known that the number of cancer types is much greater than $75$, despite the report above \\cite{SEER_data}. \n\nIn this work we introduce a novel selective inference scheme for unobserved events. That is, we construct CIs only for the events that do not appear in the sample, while refraining from a multiplicity correction over the entire alphabet. To the best of our knowledge, this work is the first to directly address this basic problem. We distinguish between two setups. We first study the case where the alphabet size $k$ is unknown and even unbounded. We obtain a simple closed-form CI that is independent of $k$ and, most importantly, does not grow with it (as opposed to the ROT). Next, we focus on the setup where the alphabet size $k$ is known. Here, we introduce an efficient computational routine which utilizes the alphabet size and further improves our proposed CI. Then, we show that our results are tight. That is, we show that the length of our proposed CI cannot be further reduced (up to a negligible scale). We demonstrate the performance of our proposed scheme in synthetic and real-world experiments, and show that it significantly improves upon the alternative. Finally, we apply our results to large alphabet inference. We introduce a novel simultaneous CI scheme for large alphabet distributions which outperforms currently known methods while maintaining the prescribed coverage rate. \n\n\\section{Previous Work}\n\\label{Previous_Work}\nConsider a set of fixed and unknown parameters $\\theta_1,\\dots,\\theta_k$. Given a confidence level $1-\\alpha$, a simultaneous confidence region for $\\{\\theta_j\\}_{j=1}^k$ is defined as a collection $\\{T_j(X^n)\\}_{j=1}^k$ such that \n\\begin{align}\\label{classical}\n P(\\cup_j\\{\\theta_j\\notin T_j(X^n)\\})\\leq \\alpha.\n\\end{align}\nIn words, the probability that all $\\theta_j$ simultaneously reside within their corresponding CIs $T_j(X^n)$ is not smaller than $1-\\alpha$. Selective inference generalizes this framework and considers a subset of parameters of interest, selected during the experiment. For example, consider a linear regression problem with feature selection. Naturally, the parameters of interest are those selected by the model. 
In other words, we would like to infer on a (random) subset of parameters (the \\textit{selected parameters}) and not the entire collection.\n\n\nThe problem of inference over selected hypotheses, parameters, or models was recognized about seven decades ago (see \\cite{ben2017concentration} for a detailed discussion). One of the first major contributions to the problem is due to Benjamini and Yekutieli, who considered the problem of constructing CIs for selected parameters \\cite{benjamini2005false}. In their work, they showed that conditional coverage, following any selection rule for any set of (unknown) values for the parameters, is impossible to achieve. This means we cannot simply infer on the chosen parameters, given that they were selected. Benjamini and Yekutieli suggested an alternative viewpoint to the problem; instead of controlling the conditional coverage, the obstacle to avoid is that of making a false coverage statement. Specifically, given a selection rule, three outcomes are possible at each experiment; either a covering CI is constructed, a non-covering CI is constructed, or the interval is not constructed at all. Therefore, even though a $1-\\alpha$ CI does not offer selective (conditional) coverage, the probability of constructing a non-covering CI is at most $\\alpha$, \n\\begin{align}\\label{SoS}\n P(\\cup_j\\{\\theta_j\\notin T_j(X^n), \\theta_j\\text{ is selected}\\})\\leq \\alpha.\n\\end{align}\nThis formulation is also known as \\textit{Simultaneous over Selected} (SoS) (Equation (4) in \\cite{benjamini2019confidence}). Selective inference has been extensively studied over the years. In \\cite{benjamini2005false}, Benjamini and Yekutieli relaxed (\\ref{SoS}) and defined the \\textit{false coverage rate} (FCR) as the expected proportion of parameters not covered by their CIs among the selected parameters. They introduced several controlling procedures for this formulation in different setups. The FCR framework was generalized and applied to a variety of applications. Lee et al. \\cite{lee2016exact} and Tibshirani et al. \\cite{tibshirani2016exact} constructed confidence intervals for parameters selected by the Lasso and by forward step-wise selection, respectively. Berk et al. \\cite{berk2013valid} addressed the problem of inference when the model is selected because a pre-specified explanatory variable had the highest statistical significance, which restricts the family over which simultaneous coverage is required. Weinstein and Yekutieli \\cite{weinstein2020selective} designed FCR intervals that try to avoid covering zero. See \\cite{benjamini2019confidence} for additional references and examples. Benjamini et al. revisited the problem of constructing confidence intervals for selected parameters in \\cite{benjamini2019confidence}. They defined four controlling formulations, namely SoS (as appears in (\\ref{SoS})), FCR, \\textit{Simultaneous over all Possible selections} (SoP) and \\textit{Conditional over Selected} (CoS) (see (1)--(4) in \\cite{benjamini2019confidence}). They focused their attention on SoS and studied the problem of SoS-controlling CIs for the $r$ largest parameters (of a given collection of parameters). \nA similar framework was also studied by Katsevich and Ramdas \\cite{katsevich2020simultaneous}, who addressed simultaneous selective inference in testing under SoP. Specifically, they considered making selective inference statements on many selection rules, guaranteeing that these statements hold simultaneously with high probability. 
\n\nIn this work we study interval estimation of the unseen. This task may be viewed as a selective inference problem, as we construct CIs only for the parameters of the events that are missing from the sample. Since the selected parameters are data-dependent, we focus our attention on the SoS framework (\\ref{SoS}). In this sense, our work is an application of SoS-controlling CIs to this important problem.\n\n\nEstimation of the unseen has been extensively studied over the years. Interestingly, most work focuses on point estimation, in a variety of setups and applications. Perhaps the first major contribution to this problem dates back to Laplace in the $18^{th}$ century \\cite{laplace2012pierre}. In his work, Laplace studied the \\textit{sunrise problem}; given that the sun rose every morning until today, what is the probability that it will rise tomorrow? Laplace addressed the problem of unobserved events by adding a single count to all $k$ events in the alphabet (including the unobserved). Then, the desired estimate is simply the empirical frequency, $1\/(n+k)$. This scheme is also known as the \\textit{rule of succession}. The Laplace estimator was later generalized to a family of \\textit{add-constant} estimators. An add-$c$ estimator assigns to a symbol that appeared $t$ times a probability proportional to $t+c$, where $c$ is a pre-defined constant. Add-constant estimators hold many desirable properties, mostly in terms of their simplicity and interpretability \\cite{orlitsky2003always}.\nHowever, when the alphabet size $k$ is large compared to the sample size $n$, add-constant estimators perform quite poorly \\cite{orlitsky2003always}. Furthermore, add-$c$ estimators require the knowledge of the alphabet size $k$, which is not always available. Additional caveats of add-$c$ estimators are discussed in \\cite{gale1994s}.\n\n\nMany years after Laplace, a major milestone was established in the work of Good and Turing \\cite{good1953population}, while trying to break the Enigma cipher during World War II \\cite{orlitsky2003always}. The Good-Turing (GT) framework suggests that unobserved events shall be assigned a probability proportional to the number of events with a single appearance in the sample. This approach introduced a significant improvement compared to known estimators at the time. Furthermore, its promising performance and practical appeal have led many researchers to study and generalize these ideas. \n\nUnseen estimation is highly related to the \\textit{missing mass} problem: the total probability of symbols which do not appear in the sample. Estimating the missing mass is a basic problem in statistics and related fields (see \\cite{battiston2020consistent} and references therein). Further, it corresponds to an important prediction task, namely, estimating the likelihood of encountering a future event which does not appear in the sample. Here too, the most popular approach is the GT estimator (which is, again, proportional to the number of events with a single appearance in the sample). A variety of results were introduced over the years, focusing on the properties of the missing mass and the GT estimator. 
This includes, for example, asymptotic normality and large deviations \\cite{gao2013moderate}, admissibility and concentration properties\n\\cite{ben2017concentration}, expectation, consistency and convergence rates \\cite{mcallester2000convergence,drukh2005concentration,mossel2019impossibility,painsky2021refined,painsky2022convergence,painsky2022data}, Bayesian estimation schemes \n\\cite{lijoi2007bayesian,favaro2016rediscovery,favaro2012new}, and the\nestimation of the missing mass in the context of feature models under minimax risk \\cite{ayed2019good}.\n\nAs mentioned above, there are many results on point and interval estimation of the missing mass. Yet, these results only apply to the total mass of the unseen. Notice that this problem is fundamentally different from ours. Specifically, we are interested in inferring the probability of each unobserved outcome (as required, for example, in the Maori cancer study), and not their sum. To the best of our knowledge, inference of the unobserved is currently considered only in the simple binomial case ($k=2$). The \\textit{rule-of-three} (ROT) considers a sample of $n$ observations from a Bernoulli distribution with a parameter $\\theta$, all of which are zero. The edge of the CI is then obtained by solving $P(X^n=0) = \\alpha$, which leads to $(1-\\theta)^n = \\alpha$. This implies $\\log(\\alpha)=n\\log(1-\\theta)\\approx -n\\theta$, where the approximation follows from $\\log(1-\\theta)\\approx-\\theta$ for $\\theta$ close to zero. Plugging in $\\alpha=0.05$ results in a one-sided CI of length approximately $3\/n$ (hence, rule-of-three). As mentioned in Section \\ref{intro}, the ROT may be generalized to control all the events that do not appear in the sample by applying a simple Bonferroni correction, leading to a CI of $[0,-\\log(\\alpha\/k)\/n]$. The ROT was further extended to different setups. For example, the Vysochanskij-Petunin inequality \\cite{vysochanskij1980justification} shows that the ROT holds for unimodal distributions with finite variance, beyond just the binomial distribution. To the best of our knowledge, our contribution is the first to directly address SCIs for unobserved events in the multinomial setup. \n\n\\section{Problem Statement}\n\\label{defs}\nDenote the collection of \\textit{missing probabilities} as \n\\begin{align}\n\\mathcal{M}_p(X^n)=\\{p(u)\\;|\\; N_u(X^n)=0\\}.\n\\end{align}\nWe are interested in simultaneous one-sided CIs for the parameters in $\\mathcal{M}_p(X^n)$. This corresponds to constructing a CI only for the greatest element in the set. Hence, our statistic of interest follows \n\\begin{align}\\label{M_max}\n M_{max}(X^n)=\\max_{u \\in \\mathcal{X}} \\mathcal{M}_p(X^n)=\\max_{u \\in \\mathcal{X}}\\{p(u)\\mathbbm{1}(N_u(X^n)=0)\\},\n\\end{align}\nwhere $\\mathbbm{1}(\\cdot)$ is the indicator function. Notice that $M_{max}(X^n)$ depends on the (unknown) probability $p$, which is omitted from the notation for brevity. Our goal is to construct a one-sided CI for $M_{max}(X^n)$, at a confidence level of $1-\\alpha$. Specifically, we are interested in $T(X^n)$ such that\n$P(M_{max}(X^n)\\geq T(X^n))\\leq \\alpha.$\nUnfortunately, the maximum operator is an involved, non-smooth functional. Therefore, we begin our analysis by representing $M_{max}(X^n)$ as the limit of an $r$-norm. 
Specifically, given a sample $X^n$ and a fixed parameter $r\\geq 1$, we define \n\\begin{align} \\label{M_r}\n M_r(X^n)\\triangleq\\sum_{u\\in\\mathcal{X}}p^r(u)\\mathbbm{1}(N_u(X^n)=0).\n\\end{align}\nThe $r$-norm of the missing probabilities follows\n\\begin{align}\n ||\\{p(u)\\mathbbm{1}(N_u(X^n)=0)\\}_{u \n \\in \\mathcal{X}}||_r\\triangleq (M_r(X^n))^{1\/r}.\n\\end{align}\nConsequently, we have that \n\\begin{align}\\label{r-norm lim}\n\\lim_{r\\rightarrow \\infty} (M_r(X^n))^{1\/r}=\\max_{u \\in \\mathcal{X}}p(u)\\mathbbm{1}(N_u(X^n)=0)\\triangleq M_{max}(X^n)\n\\end{align}\nand \n$(M_t(X^n))^{1\/t}\\leq (M_{r}(X^n))^{1\/r}$\nfor any $1\\leq r\\leq t$ \\cite{maddox1988elements}. This means that $M_{max}(X^n)\\leq (M_r(X^n))^{1\/r}$ for every $r\\geq 1$.\nTherefore, a $(1-\\alpha)$-level confidence interval for $(M_r(X^n))^{1\/r}$ is also a $(1-\\alpha)$-level confidence interval for $M_{max}(X^n)$,\n\\begin{align}\\label{CI r-norm}\n P\\left(M_{max}(X^n)\\geq T(X^n)\\right)\\leq P\\left((M_r(X^n))^{1\/r}\\geq T(X^n)\\right)\\leq \\alpha.\n\\end{align}\nDenote the expected value of $M_r(X^n)$ as\n$E_{r,n}(p)\\triangleq\\mathbb{E}_{X^n \\sim p}(M_r(X^n)).$\nDefine the worst-case (supremum) of $E_{r,n}(p)$ over $\\mathcal{P}$ as\n\\begin{equation}\\label{E_r(P)}\nE_{r,n}(\\mathcal{P}) \\triangleq \\sup_{p \\in \\mathcal{P}} E_{r,n}(p).\n\\end{equation} \nIn this work we focus on two sets of probability distributions $\\mathcal{P}$. Let $\\Delta_k$ be the set of all distributions over an alphabet of size $k$, and let $\\Delta$ be the set of all distributions over any countable alphabet $\\mathcal{X}$ (that is, $k\\rightarrow \\infty$). Markov's inequality suggests that for every $\\lambda>0$,\n\\begin{align}\\label{Markov}\n P(M_r(X^n) \\geq \\lambda)\\leq \\frac{E_{r,n}(p)}{\\lambda}\\leq \\frac{E_{r,n}(\\mathcal{P})}{\\lambda},\n\\end{align}\nwhere the second inequality follows from (\\ref{E_r(P)}). Setting $\\alpha=E_{r,n}(\\mathcal{P})\/\\lambda$, we have that \n\\begin{align}\\label{CI}\n &P\\left(M_r(X^n)\\geq \\frac{E_{r,n}(\\mathcal{P})}{\\alpha}\\right)=P\\left(\\left(M_r(X^n)\\right)^{1\/r}\\geq \\left(\\frac{E_{r,n}(\\mathcal{P})}{\\alpha}\\right)^{1\/r}\\right)\\leq \\alpha.\n\\end{align}\nPlugging in $\\mathcal{P}=\\Delta$ (alternatively, $\\mathcal{P}=\\Delta_k$), we obtain a one-sided confidence interval for $\\left(M_r(X^n)\\right)^{1\/r}$ (and hence for $M_{max}(X^n)$) which holds for every $p \\in \\Delta$ (alternatively, $\\Delta_k$). Notice that the obtained confidence interval is independent of the sample $X^n$, similarly to the ROT. This makes it a robust, non-random scheme that generalizes the ROT to the multinomial setup. Further, notice that for $r=1$, (\\ref{CI}) is a CI for the missing mass. In that sense, our proposed framework generalizes the missing mass problem, and introduces CIs for any $r$-norm of the missing probabilities, $M_r(X^n)$. Interestingly, point estimation of $M_r(X^n)$ was recently studied by Chandra and Thangaraj in quite a different context \\cite{chandra2021estimation}. In their work, they showed that for $r \\in (1,\\infty)\\setminus\\mathbb{N}$,\n\\begin{align}\\label{chandra}\n \\min_{\\hat{M}_r (X^n)}\\max_{p\\in \\Delta} \\mathbb{E}(M_r(X^n)-\\hat{M}_r(X^n))^2\\leq O\\left(\\frac{1}{n^{2(r-1)}}\\right),\n\\end{align}\nwhere $\\mathbb{N}$ is the set of natural numbers and $O(\\cdot)$ is the standard big O notation \\cite{bachmann1894analytische}. Similarly, for $n\\geq2r$ and $r\\in \\mathbb{N}$, they attained a bound of $O\\left({1}\/{n^{2r-1}}\\right)$. \nWe discuss these bounds and compare them to our results later in Section \\ref{unbounded_alphabet}.
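\n\nTo fix ideas, the following short Python sketch illustrates the quantities defined above; the Zipf-like $p$ and the sample size are arbitrary choices for illustration only.\n\\begin{verbatim}\nimport numpy as np\n\n# Illustrative Monte-Carlo check of the r-norm bound\n# M_max(X^n) <= (M_r(X^n))^(1\/r), Eqs. (M_max) and (M_r).\nrng = np.random.default_rng(0)\nk, n = 100, 50\np = 1.0 \/ np.arange(1, k + 1)\np \/= p.sum()                       # a Zipf-like distribution\n\ncounts = rng.multinomial(n, p)     # N_u(X^n) for all symbols u\nmissing = p[counts == 0]           # the missing probabilities\nM_max = missing.max()\nfor r in (1, 2, 5, 10):\n    M_r = (missing ** r).sum()\n    print('r=%2d: M_r^(1\/r)=%.4f >= M_max=%.4f'\n          % (r, M_r ** (1.0 \/ r), M_max))\n\\end{verbatim}\nAs $r$ grows, $(M_r(X^n))^{1\/r}$ approaches $M_{max}(X^n)$ from above, in accordance with (\\ref{r-norm lim}).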
\n\nTaking a closer look at our derivation steps, one may wonder if the obtained data-independent CI is too conservative. Later, in Section \\ref{tightness}, we show that the proposed CI is indeed tight, in the sense that there exists a distribution $p$ for which (\\ref{CI}) is attained with equality. This is not the typical notion of tightness in the inference literature, where one would expect a CI to be tight for every $p$. Yet, this approach is quite common in the study of the unobserved. For example, notice that the popular ROT is also tight only in this sense, where only $\\theta=-\\log(\\alpha)\/n$ attains it with equality. Additional examples in the missing mass literature are discussed in \\cite{rajaraman2017minimax,ben2017concentration,acharya2018improved,painsky2022generalized}. Let us proceed with our analysis and study $E_{r,n}(\\mathcal{P})$ for both bounded ($\\mathcal{P}=\\Delta_k$) and unbounded ($\\mathcal{P}=\\Delta$) alphabets. \n\n\\section{Unbounded Alphabet Size}\n\\label{unbounded_alphabet}\n\nWe begin our analysis with the unbounded alphabet setup. First, the expected value of $M_r(X^n)$ satisfies\n\\begin{align}\\label{E(p) explicit}\n E_{r,n}(p)=\\sum_{u\\in \\mathcal{X}} p^r(u)\\mathbb{E}_{X^n \\sim p} \\left(\\mathbbm{1}(N_u(X^n)=0) \\right)=\\sum_{u\\in \\mathcal{X}} p^r(u)(1-p(u))^n.\n\\end{align}\nWe would now like to bound (\\ref{E(p) explicit}) from above, for every possible $p\\in \\Delta$. For this purpose, we introduce the following proposition.\n\\begin{proposition} \\label{prop1}\nLet $p$ be a distribution over a countable alphabet $\\mathcal{X}$. Let $\\phi:[0,1] \\rightarrow \\mathbb{R}$. Then, \n $\\sum_{u} p(u)\\phi(p(u))\\leq \\max_{q \\in [0,1]}\\phi(q).$\nFurther, equality is achieved for a uniform $p$ whose common mass is a maximizer of $\\phi$. \n\\end{proposition}\n\\begin{proof}\nLet $Y \\sim p$ and define the random variable $T(Y)=\\phi(p(Y))$. Then,\n $$\\mathbb{E}(T(Y))=\\sum_{u\\in \\mathcal{X}} p(u)\\phi(p(u))\\leq \\max_{q \\in [0,1]}\\phi(q),$$\n where the last inequality bounds the expectation of a random variable from above by its maximal value.\n Notice that equality is achieved if all the $p(u)$'s are equal to a maximizer of $\\phi$.\n\\end{proof}\n\\noindent Applying Proposition \\ref{prop1} to (\\ref{E(p) explicit}) with $\\phi(q)=q^{r-1}(1-q)^n$, we obtain\n\\begin{align}\\label{basic_unbounded}\n E_{r,n}(p)=\\sum_{u\\in \\mathcal{X}} p^r(u)(1-p(u))^n\\leq \\max_{q\\in[0,1]} q^{r-1}(1-q)^n=(q_{r,n}^*)^{r-1}(1-q_{r,n}^*)^n,\n\\end{align}\nwhere $q_{r,n}^*=(r-1)\/(r-1+n)$.\nFurther, equality is obtained for $p(u)=q_{r,n}^*$, which implies an alphabet of size $k=(r-1+n)\/(r-1)$.\nTo conclude, for a given $r\\geq1$ and an unbounded alphabet size, we have that\n\\begin{align}\\label{temp}\n E_{r,n}(\\Delta)=(q_{r,n}^*)^{r-1}(1-q_{r,n}^*)^n,\n\\end{align}\nwhich further implies that $E_{r,n}(\\Delta)=O(1\/n^{r-1})$. \nIn addition, the distribution which attains the above is a uniform distribution over an alphabet of size $k=(r-1+n)\/(r-1)$. Finally, a one-sided confidence interval for $(M_r(X^n))^{1\/r}$ is necessarily a one-sided confidence interval for $M_{max}(X^n)$, for every $r\\geq 1$. This leads to the following theorem. \n\n\\begin{theorem}\\label{theorem_analytical}\nLet $p$ be a probability distribution over a countable alphabet $\\mathcal{X}$. Let $X^n$ be $n$ independent samples from $p$. 
Let $M_{max}(X^n)$ be the maximum over the set of missing probabilities, as defined in (\\ref{M_max}). Then, the following holds, \n\\begin{align}\\label{CI_unbounded}\n P\\left(M_{max}(X^n)\\geq \\min_{r\\geq 1}\\left((q_{r,n}^*)^{r-1}(1-q_{r,n}^*)^n\/\\alpha \\right)^{1\/r}\\right)\\leq\\alpha, \n\\end{align}\nwhere $q_{r,n}^*=(r-1)\/(r-1+n)$. \n\\end{theorem}\n\nNotice that the obtained one-sided CI (\\ref{CI_unbounded}) is independent of the alphabet size $k$. This means that our proposed CI is constant, and does not grow with $k$, as opposed to the Bonferroni-corrected ROT (see Section \\ref{Previous_Work}). As we further examine our results, we observe that for every fixed $r\\geq 1$, the $r$-norm of the missing probabilities $M_r(X^n)$ satisfies \n\\begin{align}\\label{ours}\n P\\left(M_{r}(X^n)\\geq E_{r,n}(\\Delta)\/\\alpha \\right)\\leq\\alpha, \n\\end{align}\nwhere $E_{r,n}(\\Delta)=O(1\/n^{r-1})$. Let us compare this result to \\cite{chandra2021estimation}. Applying Markov's inequality to (\\ref{chandra}) we obtain \n\\begin{align}\n P\\left(|M_{r}(X^n)-\\hat{M}_r(X^n)|\\geq \\lambda\\right)\\leq\\max_{p\\in\\Delta} \\mathbb{E}\\left(M_{r}(X^n)-\\hat{M}_r(X^n)\\right)^2\/\\lambda^2.\n\\end{align}\nInterestingly, for $r>1$ this leads to a one-sided CI of length $O(1\/n^{r-1})$, similarly to (\\ref{ours}). However, we emphasize that (\\ref{chandra}) is obtained by a data-dependent estimator of $M_r(X^n)$, which also depends on $r$. This means that the choice of $r$ which minimizes the CI for $M_{max}(X^n)$ (as in (\\ref{CI_unbounded})) would also depend on the sample, and is hence invalid. However, this analysis emphasizes the tightness of our bound (\\ref{basic_unbounded}) and its resulting CI for $M_{max}(X^n)$, even when compared to a data-dependent scheme. 
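\n\nThe following short Python sketch (an illustration, not the authors' code) evaluates the CI of Theorem \\ref{theorem_analytical} by a simple grid search over $r$, and compares it with the Bonferroni-corrected ROT; the grid range is an arbitrary choice.\n\\begin{verbatim}\nimport numpy as np\n\n# CI of Theorem 1: min over r of (q*^(r-1) (1-q*)^n \/ alpha)^(1\/r),\n# with q* = (r-1)\/(r-1+n), minimized here over a grid of r values.\ndef ci_unbounded(n, alpha, r_grid=np.linspace(1, 50, 4901)):\n    q = (r_grid - 1) \/ (r_grid - 1 + n)\n    log_E = ((r_grid - 1) * np.log(np.where(q > 0, q, 1.0))\n             + n * np.log1p(-q))          # log E_{r,n}(Delta)\n    return np.exp((log_E - np.log(alpha)) \/ r_grid).min()\n\nn, alpha = 1000, 0.05\nprint('proposed CI :', ci_unbounded(n, alpha))\nfor k in (10**3, 10**4, 10**5):           # Bonferroni-corrected ROT\n    print('ROT, k=%d :' % k, -np.log(alpha \/ k) \/ n)\n\\end{verbatim}\nIn accordance with Theorem \\ref{theorem_analytical}, the proposed CI does not depend on $k$, whereas the corrected ROT keeps growing logarithmically with it.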
\n\n\\section{Bounded Alphabet Size}\n\nLet us now study the case where the alphabet size is bounded from above. This is a typical setup, for example, in experimental studies where the number of outcomes is known prior to the experiment. As discussed in Section \\ref{defs}, our proposed CI depends on $E_{r,n}(\\mathcal{P})$, where $\\mathcal{P}=\\Delta_k$ in this setup. Therefore, our goal is to maximize $E_{r,n}(p)$ over $p \\in \\Delta_k$,\n\\begin{align} \\label{bounded_objective}\nE_{r,n}(\\Delta_k)=\\max_{p\\in \\Delta_k} \\sum_{u\\in\\mathcal{X}}p^r(u)(1-p(u))^n.\n\\end{align}\nUnfortunately, this optimization problem does not have a closed-form solution. However, we show that it may be efficiently evaluated from simple optimization considerations. We begin our analysis with the following property.\n\\begin{property}\\label{property1}\nLet $E_{r,n}(p)=\\sum_{u \\in \\mathcal{X}} p^r(u)(1-p(u))^n$. Let $$t^*=\\frac{r}{r+n},\\;\\;t_{1,2}=t^*\\pm \\frac{1}{r+n}\\sqrt{\\frac{rn}{r+n-1}}.$$\nAssume $0\\leq t_1\\leq t^*\\leq t_2\\leq 1$. \nFor $r\\geq 1$, the summand $h(t)=t^r(1-t)^n$ satisfies the following:\n\\begin{enumerate}\n \\item $h(t)$ has a local maximum at $t^*$.\n \\item $h(t)$ is concave in $t$, for $t_1 \\leq t \\leq t_2$.\n \\item $h(t)$ is convex in $t$, for $0\\leq t\\leq t_1$ and $t_2\\leq t \\leq 1$.\n\\end{enumerate}\n\\end{property}\nThe proof of the above directly follows from the derivatives of the summand, $h(t)=t^r(1-t)^n$, and is located in Appendix A. Property \\ref{property1} shows that the function $p^r(u)(1-p(u))^n$ consists of three separate regions, characterized by their concavity and convexity. This allows us to characterize the maximum of our objective.\n\n\\begin{theorem}\\label{theorem2}\nLet $p^* \\in \\Delta_k$ be the maximizer of $E_{r,n}(p)=\\sum_u p^r(u)(1-p(u))^n$ over $\\Delta_k$. Then, for $r\\geq 1$ the following holds.\n\\begin{enumerate}\n\\item $p^*(u)=p^*(v)$ for every $p^*(u),p^*(v)\\in \\left[t_1,t_2\\right]$.\n\\item There exists at most a single $p^*(u)$ such that $p^*(u)\\in \\left(0,t_1\\right)$.\n\\item There exists at most a single $p^*(u)$ such that $p^*(u)\\in \\left(t_2,1\\right]$.\n\\end{enumerate}\n\\end{theorem}\nIn words, all $p^*(u)$ that are located in the concave region are identical, and there exists at most a single $p^*(u)$ in the interior of each convex region. These properties are a direct consequence of the convexity\/concavity regions of the summand. The detailed proof is located in Appendix B. Theorem \\ref{theorem2} shows that the maximizer of $E_{r,n}(p)$ over $\\Delta_k$ depends on not more than four free parameters. Surprisingly, this result holds for every $k$. In other words, we may numerically evaluate $E_{r,n}(\\Delta_k)$ by considering only four free parameters, for every given $k$. This allows us to numerically evaluate the CI at a relatively small computational cost, even when the dimension of the problem increases, for every examined $r\\geq 1$, and choose the value of $r$ which minimizes the CI (similarly to (\\ref{CI_unbounded})).
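\n\nThe following Python sketch exploits this structure numerically. For brevity it keeps only one extra mass besides the $j$ symbols sharing a common mass (the remaining symbols get zero mass), so it is an illustrative approximation of (\\ref{bounded_objective}) rather than the exact four-parameter routine; all numerical values are arbitrary.\n\\begin{verbatim}\nimport numpy as np\n\n# Approximate E_{r,n}(Delta_k), guided by Theorem 2: j symbols share a\n# common mass q, plus (here) at most one extra mass a; the rest get 0.\ndef h(t, r, n):\n    return np.where((t > 0) & (t < 1), t**r * (1 - t)**n, 0.0)\n\ndef E_bounded(r, n, k, grid=10000):\n    a = np.linspace(0.0, 1.0, grid)    # candidate extra mass\n    best = 0.0\n    for j in range(1, k + 1):          # symbols sharing the mass q\n        q = (1.0 - a) \/ j              # enforces a + j*q = 1\n        best = max(best, (h(a, r, n) + j * h(q, r, n)).max())\n    return best\n\nr, n, k = 10, 1000, 500                # illustrative values\nprint('bounded   :', E_bounded(r, n, k))\nqs = (r - 1) \/ (r - 1 + n)             # Eq. (temp), unbounded case\nprint('unbounded :', qs**(r - 1) * (1 - qs)**n)\n\\end{verbatim}\nFor $k$ larger than $(r-1+n)\/(r-1)$ the two printed values coincide, as the unbounded-alphabet maximizer is then feasible in $\\Delta_k$; for smaller $k$ the bounded value is smaller, which is precisely what shortens the CI.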
\n\n\\section{Tightness Analysis}\\label{tightness}\nThe derivation of the proposed CI utilizes several relaxations and inequalities, such as the $r$-norm (\\ref{CI r-norm}) and Markov's inequality (\\ref{Markov}). Therefore, it is a reasonable concern that the obtained CI is over-pessimistic. Here, we show that this is not the case. Specifically, we show that there exists a distribution $p\\in \\Delta$ for which the proposed CI is (almost) tight. \n\nAs we revisit our analysis in the unbounded alphabet setup (\\ref{basic_unbounded}), we observe that $E_{r,n}(p)\\leq (q^*_{r,n})^{r-1}(1-q^*_{r,n})^n$, where equality holds if $p(u)=q^*_{r,n}$. In words, $E_{r,n}(p)$ attains its maximum for a uniform distribution over an alphabet of size $k^*=1\/q^*_{r,n}$. This means that in practice, even if the alphabet size $k$ is known to be greater than $k^*$, the worst-case distribution which attains $E_{r,n}(p)$ with equality is a uniform distribution with $p(u)=q^*_{r,n}$ for $k^*$ symbols, and $p(u)=0$ for the remaining alphabet. Interestingly, this type of distribution also attains Markov's inequality with equality. Specifically, let $\\mathcal{U}_k$ be the set of uniform distributions over an alphabet of size $m\\leq k$. Then, for every $p_m\\in\\mathcal{U}_k$, we have $M_{max}(X^n)\\in\\{0,1\/m\\}$ and $P(M_{max}(X^n)\\geq 1\/m)=m{\\mathbb{E}_{X^n\\sim p}M_{max}(X^n)}$. This motivates exploring $\\mathcal{U}_k$ as a set of distributions for which our proposed CI may be tight. \n\nAs mentioned in Section \\ref{defs}, deriving an exact CI for $M_{max}(X^n)$, even when the underlying distribution $p$ is known, is not an easy task. However, we now show it is possible in several special cases, such as $p_m\\in \\mathcal{U}_k$. We begin with the following proposition. \n\n\\begin{proposition} \\label{prop2}\n Let $X^n$ be a sample of $n$ independent observations from $p_m \\in \\mathcal{U}_k$. Then, \n \\begin{align}\\label{worst-case}\n P\\left(M_{max}(X^n) \\geq \\frac{1}{m}\\right)=1-\\frac{m!S(n,m)}{m^n},\n \\end{align}\nwhere $S(n,m)=\\frac{1}{m!}\\sum_{j=0}^m(-1)^j\\binom{m}{j}(m-j)^n$ is the \\textit{Stirling number of the second kind}.\n\\end{proposition}\n \nThe proof of Proposition \\ref{prop2} utilizes simple combinatorial properties. Specifically, given that $p_m$ is a uniform distribution over an alphabet of size $m$, we have that $M_{max}(X^n)=1\/m$ if and only if there exists at least one symbol that does not appear in the sample, where all symbols are equiprobable. A detailed proof is provided in Appendix C. \n\nNow, define $m_\\alpha$ as the smallest value of $m$ for which $1-\\frac{m!S(n,m)}{m^n}> \\alpha$. This means that for $X^n \\sim p_{m_\\alpha}$, we cannot set any constant $c\\leq 1\/m_\\alpha$ such that $P(M_{max}(X^n)\\geq c)\\leq \\alpha$. In other words, $p_{m_\\alpha}$ requires a CI larger than $1\/m_{\\alpha}$ in order to control $M_{max}(X^n)$ at a confidence level of at least $1-\\alpha$. The implications of this result are fairly simple. In order to control every $p\\in \\Delta$ at a confidence level of $1-\\alpha$, a (constant) confidence interval of size at least $1\/m_\\alpha$ is inevitable. That is, $p_{m_\\alpha}$ is the distribution which forces the longest CI (among all the distributions in $\\mathcal{U}_k$), and it is therefore the most challenging to control. We denote it as the \\textit{worst-case} distribution. In the following section we compare our proposed CIs with $1\/m_\\alpha$ and show that the difference is practically negligible. 
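\n\nThe following Python sketch evaluates Eq.~(\\ref{worst-case}) by inclusion-exclusion and locates the worst-case support size $m_\\alpha$; it is an illustration in plain floating point, assuming $m\\ll n$ so that the alternating sum is numerically benign.\n\\begin{verbatim}\nfrom math import comb\n\n# P(M_max >= 1\/m) = 1 - m! S(n,m) \/ m^n for a uniform p over m symbols,\n# written via inclusion-exclusion (the sum is the all-seen probability).\ndef p_miss(n, m):\n    return 1.0 - sum((-1)**j * comb(m, j) * (1 - j \/ m)**n\n                     for j in range(m + 1))\n\ndef m_alpha(n, alpha):\n    m = 1\n    while p_miss(n, m + 1) <= alpha:  # smallest m with p_miss > alpha\n        m += 1\n    return m + 1\n\nn, alpha = 1000, 0.05\nm = m_alpha(n, alpha)\nprint('m_alpha =', m, ' required CI >=', 1.0 \/ m)\n\\end{verbatim}\nFor $n=1000$ and $\\alpha=0.05$ this yields a lower bound just below $10^{-2}$, which may be compared with the CI of Theorem \\ref{theorem_analytical} in the experiments below.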
\n\n\\section{Experiments}\\label{experiments}\nWe now illustrate the performance of our proposed CIs in synthetic and real-world experiments. First, we study six example distributions, which are common benchmarks for probability estimation and related problems \\cite{orlitsky2015competitive}. The Zipf's law distribution is a typical benchmark in large alphabet probability estimation; it is a commonly used heavy-tailed distribution, mostly for modeling natural (real-world) quantities in physical and social sciences, linguistics, economics and other fields \\cite{saichev2009theory}. The Zipf's law distribution follows $p(u;s,k)={u^{-s}}\/{\\sum_{v=1}^k v^{-s}}$, where $k$ is the alphabet size and $s$ is a skewness parameter. Additional examples of commonly used heavy-tailed distributions are the geometric distribution, $p(u;\\alpha)=(1-\\alpha)^{u-1}\\alpha$, the negative-binomial distribution (specifically, see \\cite{efron1976estimating}), $p(u;l,r)= \\binom{u+l-1}{u} r^u(1-r)^l$, and the beta-binomial distribution, $p(u;k,\\alpha,\\beta)= \\binom{k}{u}{B(u+\\alpha,k-u+\\beta)}\/{B(\\alpha,\\beta)}$. Notice that the support of the geometric and the negative-binomial distributions is infinite. Therefore, for the purpose of our experiments, we truncate them to an alphabet size $k$ and normalize accordingly. Additional example distributions are the uniform, $p(u)=1\/k$, and the worst-case distribution, which is simply a uniform distribution over an alphabet of size $m_\\alpha$, as discussed in Section \\ref{tightness}. \n\nIn each experiment we draw $n=1000$ samples, and compare the lengths of different CIs for an increasing alphabet size. Figure \\ref{simulated_exp} illustrates the results we achieve. The red curve on top \ncorresponds to the Bonferroni-corrected ROT, as discussed in Section \\ref{Previous_Work}. As expected, it grows logarithmically with the alphabet size $k$. The blue curve below it is our proposed CI for a known alphabet size $k$, while the blue dashed curve corresponds to the unbounded alphabet size. As we can see, the bounded $k$ curve is of similar length to the ROT CI for smaller values of $k$. However, as the alphabet size increases, it converges to the unbounded $k$ performance, as expected. It is also evident that while the ROT CI grows with $k$, our proposed schemes are fixed, and demonstrate significantly shorter confidence intervals, while maintaining the desired coverage rate. As we examine the value of $r$ which minimizes our CI, we observe that it increases with $k$ (for the bounded $k$ scheme) and converges to approximately $r=10$. Finally, the black curve at the bottom is an Oracle CI, which knows the underlying distribution $p$. Specifically, the Oracle CI is simply the $\\alpha$-quantile of $M_{max}(X^n)$ in the case where $p$ is known. This serves as a lower bound on the best we can achieve in each experiment. We focus our attention on the worst-case distribution, which we study in detail in Section \\ref{tightness}. As we can see, the Oracle CI is almost identical to our proposed scheme in this setup. This means that our CIs are universally tight, in the sense that there exists a distribution for which they cannot be shortened.\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width =0.8\\textwidth,bb=10 70 750 540,clip]{simulated_six_rotated.pdf}\n\\caption{Confidence interval lengths in six synthetic experiments. We use the following parameters: Zipf's Law: $s=1.01$, Geometric: $\\alpha=0.4$, Negative-Binomial: $l=1, r=0.003$, Beta-Binomial: $\\alpha=\\beta=2$. The red curve on top is the rule-of-three CI, the blue curve below it is our proposed CI for a known alphabet size $k$. The blue dashed curve is our proposed method for unbounded $k$ and the black curve at the bottom is an Oracle CI, which knows the underlying distribution $p$.}\n\\label{simulated_exp}\n\\end{figure}\n\n\nNext, we turn to real-world experiments. Here, we follow \\cite{orlitsky2016optimal} and study three application domains. Notice that in these real-world settings, the true underlying probability is unknown. Hence, the missing probabilities refer to the frequencies of symbols, in the full data-set, that do not appear in the sample. We begin with a corpus linguistics experiment. For this purpose we study a collection of word frequencies in English. \nSpecifically, we consider a list of word frequencies, collected from open source subtitles \\cite{subtitles1,subtitles2}. This list describes the frequency with which each word appears in the text, based on hundreds of millions of samples. We randomly sample $n$ words (with replacement) from the list, and construct a CI for the missing probabilities. The left chart of Figure \\ref{real_world_exp} demonstrates the CI of our proposed scheme, compared to the Bonferroni-corrected ROT. 
Notice that we focus on the unbounded $k$ scheme, as it is more robust and may better describe the alphabet size in this setup (all the words in the English language). Next, we focus on a biota analysis. Gao et al. \\cite{gao2007molecular} considered the forearm skin biota of six subjects. They identified a total of $1{,}221$ clones consisting of $182$ different species-level operational taxonomic units (SLOTUs). As above, we sample $n$ out of the $1{,}221$ clones with replacement, and construct CIs for the missing probabilities of the distinct SLOTUs found. The middle chart of Figure \\ref{real_world_exp} demonstrates the results we achieve. Finally, we study census data. The right chart of Figure \\ref{real_world_exp} considers the $2000$ United States Census \\cite{us2014frequently}, which lists the frequency of the top $1000$ most common last names in the United States. Here too, we sample $n$ names and construct corresponding CIs. Similarly to the synthetic experiments, our proposed scheme demonstrates shorter CIs than the ROT in the three examined setups, where the difference is typically more evident in the small $n$ regime.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width =0.8\\textwidth,bb=10 140 745 470,clip]{real_world_updated_rotated.pdf}\n\\caption{Confidence interval lengths in three real-world experiments.}\n\\label{real_world_exp}\n\\end{figure}\n\n\n\\section{Application to Large Alphabet Inference}\\label{multinomial_intro}\n\nInference and estimation of unseen events is a cornerstone of large alphabet probability modeling. Here, the goal is to address not only the unobserved events, but the entire (very) large collection of possible outcomes. Specifically, the large alphabet regime considers multinomial distributions in cases where $k$ is much larger than $n$ (or at least comparable to it). This problem too is highly important to data-driven science and engineering. Its applications span a variety of disciplines including information retrieval \\cite{song1999general}, spelling correction \\cite{church1991probability}, word-sense disambiguation \\cite{gale1992method}, language modeling \\cite{chen1999empirical}, learning theory \\cite{makur2020estimation} and many others. In this section we demonstrate the favorable properties of our proposed CI, as we apply it to large alphabet inference. \n\nThere exists a large body of work on point estimation of large alphabet distributions. The GT probability estimator (based on the GT scheme described above) is perhaps the most popular estimator for this important task. While the GT estimator performs well in general, it is known to be sub-optimal for outcomes that frequently appear. Consequently, several modifications have been proposed, including the Jelinek-Mercer, Katz, Witten-Bell and Kneser-Ney estimators \\cite{chen1999empirical}. In language modeling, for example, GT is usually used to estimate the probability of infrequent words, whereas the probability of frequent words is estimated by their empirical frequency. Different properties of the GT probability estimator were extensively studied over the years \\cite{mcallester2000convergence,drukh2005concentration,orlitsky2015competitive}.\nDespite this broad body of work, large alphabet inference has not received much attention. Current methods focus on two basic setups. The first considers an asymptotic regime, where $k$ is fixed and the sample size $n$ is very large \\cite{quesenberry1964large,goodman1964simultaneous}. 
\section{Application to Large Alphabet Inference}\label{multinomial_intro}\n\nInference and estimation of unseen events is a cornerstone of large alphabet probability modeling. Here, the goal is to address not only the unobserved events, but the entire (very) large collection of possible outcomes. Specifically, the large alphabet regime considers multinomial distributions in cases where $k$ is much larger than $n$ (or at least comparable to it). This problem too is highly important to data-driven science and engineering. Its applications span a variety of disciplines including information retrieval \cite{song1999general}, spelling correction \cite{church1991probability}, word-sense disambiguation \cite{gale1992method}, language modeling \cite{chen1999empirical}, learning theory \cite{makur2020estimation} and many others. In this section we demonstrate the favorable properties of our proposed CI, as we apply it to large alphabet inference. \n\nThere exists a large body of work on point estimation of large alphabet distributions. The GT probability estimator (based on the GT scheme described above) is perhaps the most popular estimator for this important task. While the GT estimator performs well in general, it is known to be sub-optimal for outcomes that appear frequently. Consequently, several modifications have been proposed, including the Jelinek-Mercer, Katz, Witten-Bell and Kneser-Ney estimators \cite{chen1999empirical}. In language modeling, for example, GT is usually used to estimate the probability of infrequent words, whereas the probability of frequent words is estimated by their empirical frequency. Different properties of the GT probability estimator were extensively studied over the years \cite{mcallester2000convergence,drukh2005concentration,orlitsky2015competitive}.\nDespite this broad body of work, large alphabet inference has not received much attention. Current methods focus on two basic setups. The first considers an asymptotic regime, where $k$ is fixed and the sample size $n$ is very large \cite{quesenberry1964large,goodman1964simultaneous}. The second line of work addresses a fixed $n$, where the alphabet size $k$ is relatively small. Here, the most popular SCI scheme is arguably that of Sison and Glaz \cite{SisonGlaz1995}. In their work, Sison and Glaz (SG) proposed a method which utilizes Edgeworth expansions to approximate the desired distribution. Through extensive simulations, they showed that their method leads to smaller SCIs, while maintaining a coverage rate closer to the desired level, compared to the methods known at the time. Unfortunately, the SG scheme does not perform well in cases where the expected symbol counts are disparate \cite{MayJohnson1997Properties}. Recently, \cite{marton2022good} introduced a bootstrap framework for the case where both $k$ and $n$ are large. Yet, this approach is based on bootstrap sampling and does not provide solid theoretical guarantees. To the best of our knowledge, no method directly addresses the large alphabet regime with provable performance guarantees.\n\n\nAs above, let $\mathcal{X}$ be a countable alphabet. Here, we assume that the alphabet size $k$ is finite and known. Let $p=(p_1,\dots,p_k)$ denote the unknown probability distribution over $\mathcal{X}$, while $X^n$ is a collection of $n$ samples from $\mathcal{X}$. Let $S(X^n)$ be a $(1-\alpha)$-level confidence region (CR) for $p$. That is, \n$P(p \in S(X^n))\geq 1-\alpha.$\nThe most popular form of a CR is the rectangular case. That is, $S(X^n) = T_1(X^n)\times T_2(X^n)\times \dots \times T_k(X^n)$, where $T_j(X^n)=[a_j,b_j]$ for $j=1,\dots,k$ and $0 \leq a_j\leq b_j\leq 1$. This implies SCIs for the collection of the $k$ parameters, similarly to (\ref{classical}). In fact, all the multinomial inference schemes mentioned above are rectangular CRs. \n\n\nThe most basic rectangular CR may be obtained from a binomial viewpoint. That is, one may construct a binomial CI for each symbol independently and correct for multiplicity using a Bonferroni correction. Hence, the obtained SCIs are just a collection of binomial CIs (each of confidence level $1-\alpha\/k$), one for every symbol in the alphabet. Naturally, this approach controls the prescribed confidence level (for every $n$ and $k$), but may be overly pessimistic and result in large-volume SCIs. \nNotice that such an approach also applies to unobserved symbols. Specifically, a binomial CI for symbols with zero counts is obtained by a Bonferroni-corrected ROT, as described in Section \ref{Previous_Work}. \n\nLet us now introduce our proposed large alphabet inference scheme. We distinguish between observed and unobserved symbols. Assume that $n\leq k$ and let $\alpha_1,\alpha_2 \in [0,1]$ be two fixed constants. First, we set a binomial CI of level $1-\alpha_1\/n$ for each of the symbols that appear in the sample, while each unobserved symbol is assigned the naive CI $T_j(X^n)=[0,1]$. We have that\n\begin{align}\n P(p \notin S(X^n))&=P(\cup_{j=1}^k\{p_j \notin T_j(X^n)\})\leq \sum_{j=1}^kP(p_j \notin T_j(X^n))=\\\nonumber\n &\sum_{j|N_j(X^n)=0}P(p_j \notin T_j(X^n))+\sum_{j|N_j(X^n)>0}P(p_j \notin T_j(X^n))=\\\nonumber\n &\sum_{j|N_j(X^n)>0}P(p_j \notin T_j(X^n))\leq n \cdot \frac{\alpha_1}{n}=\alpha_1\n\end{align}\nwhere the first inequality follows from the union bound, the unobserved symbols contribute zero probability since $T_j(X^n)=[0,1]$, and the second inequality is due to $|\{j|\;N_j(X^n)>0\}|\leq n$ (that is, the number of symbols that appear in the sample is not greater than the sample size).
Notice that in the case where $n>k$, we may define a binomial CI of level $1-\alpha_1\/k$ for each of the symbols that appear in the sample, and the above still holds. \n\nNow, let $A_{n}=\min_{r\geq 1}\left((q_{r,n}^*)^{r-1}(1-q_{r,n}^*)^n\/\alpha_2 \right)^{1\/r}$ be our proposed $(1-\alpha_2)$-level CI for unobserved events (Theorem \ref{theorem_analytical}). We would like to simultaneously control the events $M_{max}(X^n)\leq A_{n}$ and $p \in S(X^n)$ at a confidence level of $1-\alpha$. Therefore, we set $\alpha_1=\alpha(1-c)$ and $\alpha_2=\alpha c$ for some $c\in [0,1]$. We have \n\begin{align}\label{eq2}\n &P\left(\left\{p \notin S(X^n)\right\}\cup\left\{M_{max}(X^n)\geq A_n\right\}\right)\leq\\\nonumber\n &P\left(p \notin S(X^n)\right)+P\left(M_{max}(X^n)\geq A_n\right)\leq \alpha(1-c)+\alpha c=\alpha.\n\end{align}\nNotice that by simultaneously controlling both of the terms above, we may replace the naive unit intervals of the unobserved events with $[0,A_n]$. This implies the following scheme (Algorithm \ref{alg:Ours}) for constructing the desired SCIs.\n\n\begin{algorithm}[H]\n\t\begin{algorithmic}[1]\n\t\t\renewcommand{\algorithmicrequire}{\textbf{Input:}}\n\t\t\renewcommand{\algorithmicensure}{\textbf{Output:}}\n\t\t\REQUIRE A sample $X^n$, alphabet size $k$ and a confidence level $1-\alpha$.\n\t\t\STATE Set $c\in[0,1]$\n\t\t\STATE Construct a binomial CI of level $1-\alpha(1-c)\/n$ for all the symbols that appear in $X^n$ \n\t\t\STATE Construct a CI for unobserved events (following Theorem $1$ or $2$) of level $1-\alpha c$, for all the symbols that do not appear in $X^n$\n\t\end{algorithmic}\n\t\caption{Our Proposed Large Alphabet SCIs for Multinomial Proportions}\n\t\label{alg:Ours}\n\end{algorithm} \n\n\n\nThe scheme above introduces a simple analytical framework for constructing SCIs over large alphabets. The parameter $c$ defines an inference trade-off between observed and unobserved events. Specifically, for larger values of $k$ we expect many unobserved events, which corresponds to a larger value of $c$. On the other hand, if $k$ is comparable to $n$, we would probably prefer a lower value of $c$. Therefore, choosing a reasonable value for $c$ is a natural concern. Unfortunately, the choice of $c$ also depends on the unknown underlying distribution $p$. For example, a uniform $p$ results in fewer unobserved events than a degenerate $p$. Therefore, we cannot set a $c$ value that minimizes the SCIs uniformly, for every possible $p$. However, we show it is possible to set a value of $c$ so that our proposed scheme provably improves upon alternative methods.\n\nTypically, the performance of a CR is measured by its expected volume. That is, given two CRs, we say that one outperforms the other if its expected volume is smaller, while maintaining the prescribed confidence level. However, notice that in the large alphabet regime, the volume of a CR rapidly decays with the alphabet size $k$. For example, the volume of a rectangular CR with a fixed length of $L<1$ for each of its parameters demonstrates an exponential decay, $L^k$. Therefore, we focus on the log of the volume in this regime. Specifically, in each of the following experiments we measure the average log-volume, as we cannot directly assess the volume by numerical means. Further, for the same reasons, we focus on the expected log-volume as our analytical figure of merit.
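\nTo make the construction concrete, the following is a minimal Python sketch of Algorithm \ref{alg:Ours}. Two choices in it are assumptions of the sketch rather than part of the algorithm: we take $q_{r,n}^*=(r-1)\/(n+r-1)$, the maximizer of $t^{r-1}(1-t)^n$ (the exact expression is the one given in Theorem \ref{theorem_analytical}), and we use Clopper-Pearson intervals for the observed symbols, although any valid binomial CI may be used in Step 2.\n\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\ndef a_n(n, alpha2, r_grid=np.arange(1.0, 201.0)):\n    # A_n = min_{r >= 1} ((q*)^(r-1) (1-q*)^n \/ alpha2)^(1\/r), with the\n    # assumed closed form q*_{r,n} = (r-1)\/(n+r-1).\n    best = 1.0\n    for r in r_grid:\n        q = (r - 1.0) \/ (n + r - 1.0)\n        val = (q ** (r - 1.0) * (1.0 - q) ** n \/ alpha2) ** (1.0 \/ r)\n        best = min(best, val)\n    return best\n\ndef large_alphabet_sci(counts, n, k, alpha=0.05, c=0.5):\n    # counts: dict mapping each observed symbol to its count in X^n.\n    m = min(n, k)                  # at most min(n, k) observed symbols\n    level = alpha * (1.0 - c) \/ m  # Bonferroni level per observed symbol\n    intervals = {}\n    for sym, x in counts.items():  # Clopper-Pearson CI per observed symbol\n        lo = stats.beta.ppf(level \/ 2, x, n - x + 1)\n        hi = 1.0 if x == n else stats.beta.ppf(1 - level \/ 2, x + 1, n - x)\n        intervals[sym] = (lo, hi)\n    return intervals, (0.0, a_n(n, alpha * c))  # CI for every unseen symbol\n\end{verbatim}\nFor $n=1000$, for instance, the inner minimization in \texttt{a\_n} is attained near $r\approx 10$, which matches the behavior of the $r$-norm reported in the synthetic experiments above.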
\begin{theorem} \label{T3}\n Let $p$ be a probability distribution over an alphabet $\mathcal{X}$ of size $k$, and let $X^n$ be $n$ independent samples from $p$. Denote by $A^{BC}_n=-\log(\alpha\/k)\/n$ the Bonferroni-corrected CI for unobserved events, and let $A_{n,c}=\min_{r\geq 1}\left((q_{r,n}^*)^{r-1}(1-q_{r,n}^*)^n\/\alpha c \right)^{1\/r}$ be our proposed CI, at a confidence level of $1-\alpha c$. Define $z_0=z_{1-{\alpha}\/{2k}}$ and $z_c=z_{1-{\alpha(1-c)}\/{2n}}$, where $z_a$ is the $a$-quantile of a standard normal distribution. Assume there exists $c\in[0,1]$ such that \n \begin{enumerate}[label=(\alph*)]\n \item $k\left(1-\left(1-\frac{1}{k}\right)^n\right)(z_c-z_0)+k\left(1-\frac{1}{k}\right)^n(A_{n,c}-A^{BC}_n)\leq 0$\n \item $(z_c-z_0)+(k-1)(A_{n,c}-A^{BC}_n)\leq 0$\n \end{enumerate}\n Then, for every $p\in \Delta_k$, the following (approximately) holds, $$\mathbb{E}\log V_c\leq \mathbb{E}\log V_0,$$ \n where $V_c$ is the volume of our proposed CR (with a choice of $c$ that satisfies the above), $V_0$ is the volume of the Bonferroni-corrected CR, and the approximation follows from Wald intervals for the binomial proportions \cite{de1820theorie}.\n\end{theorem}\nTheorem \ref{T3} establishes an important property of our proposed CR. Given a sample size $n$ and an alphabet size $k$, we seek a constant $c\in[0,1]$ that satisfies conditions (a) and (b). This requires a simple grid search over the unit interval, as sketched below. If we find such a $c$, we are guaranteed that Algorithm \ref{alg:Ours} outperforms the Bonferroni-corrected CR, for every $p \in \Delta_k$. The proof of Theorem \ref{T3} is located in Appendix C.
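\nThe following is a minimal sketch of this grid search, reusing the \texttt{a\_n} routine from the previous sketch; the quantile terms follow the statement of Theorem \ref{T3}.\n\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\ndef largest_valid_c(n, k, alpha=0.05, grid=np.linspace(0.01, 0.99, 99)):\n    # Largest c in (0,1) satisfying conditions (a) and (b) of Theorem 3,\n    # or None if no grid point qualifies.\n    a_bc = -np.log(alpha \/ k) \/ n        # Bonferroni-corrected ROT length\n    z0 = stats.norm.ppf(1 - alpha \/ (2 * k))\n    w = (1 - 1 \/ k) ** n                 # weight of the zero-count term\n    best = None\n    for c in grid:\n        zc = stats.norm.ppf(1 - alpha * (1 - c) \/ (2 * n))\n        d_a = a_n(n, alpha * c) - a_bc   # unobserved-symbol difference\n        d_z = zc - z0                    # observed-symbol difference\n        if k * (1 - w) * d_z + k * w * d_a <= 0 and d_z + (k - 1) * d_a <= 0:\n            best = c\n    return best\n\end{verbatim}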
\subsection{Large Alphabet Experiments}\nLet us now demonstrate the performance of our suggested inference scheme. We focus on two benchmark distributions which represent two extreme cases. Specifically, we study the heavy-tailed Zipf's Law distribution (with $s=1.01$) and the benchmark uniform distribution. In each experiment we draw $n=1000$ samples, and evaluate the log-volume of different CRs (for $\alpha=0.05$), as $k$ increases. We repeat this process $1000$ times to obtain an averaged log-volume. We focus on the Bonferroni-corrected CR (denoted BC in the figures that follow), the Sison-Glaz (SG) scheme \cite{sison1995simultaneous} and our proposed method (Algorithm \ref{alg:Ours}). To configure our method, we set $c$ as the largest value within the unit interval that satisfies conditions (a) and (b), for every $k$ and $n$. We justify this choice later in this section. Figure \ref{log_volume} demonstrates the results we achieve. \n\begin{figure}[ht]\n\centering\n\includegraphics[width =0.47\textwidth,bb=70 180 530 600,clip]{examine_ab_size_new.pdf}\n\caption{Log-volume of CRs for Zipf's Law and uniform distributions, for $n=1000$.}\n\label{log_volume}\n\end{figure}\n\nFirst, it is evident that our proposed scheme outperforms the Bonferroni-corrected CR as $k$ grows. The SG method is omitted from Figure \ref{log_volume} as it fails to provide the prescribed confidence level (see Appendix D). It may appear from Figure \ref{log_volume} that the difference between the Zipf's Law and the uniform distribution is negligible. The reason for this phenomenon is fairly simple. For $k\gg n$, most symbols do not appear in the sample (regardless of the underlying distribution). In this case, the volume of the CR is dominated by the CI of the unobserved events. This CI is fixed and independent of the sample, for both inference schemes. \nOn the other hand, there is a difference in the log-volume for smaller alphabets. However, it is less visible in Figure \ref{log_volume}, and is demonstrated more clearly in Figure \ref{simulated_exp_c} below. To complete the picture, we examine the coverage rate of the examined inference schemes. The results are reported in Appendix D for brevity. As we can see, both the Bonferroni-corrected scheme and our proposed method obtain the prescribed $0.95$ confidence level as desired, while SG fails to do so. \n\nFinally, we examine the performance (and sensitivity) of our suggested scheme with respect to the choice of $c$. The upper charts of Figure \ref{simulated_exp_c} correspond to a Zipf's Law distribution ($s=1.01$) with $k=1000$ (right) and $k=20000$ (left). The lower charts correspond to a uniform distribution with the same alphabet sizes. We use $n=1000$ samples as above. First, it is evident that for large $k$, the performance of our proposed scheme improves as $c$ grows. This is not quite surprising, as there are more unobserved events in this setup. For a relatively smaller $k$, we still observe a significant improvement over the Bonferroni-corrected scheme for a large span of $c$ values, for both distributions. Despite the above, we emphasize that any choice of $c$ that satisfies conditions (a) and (b) is guaranteed to improve upon the Bonferroni-corrected scheme. Therefore, for simplicity, we choose the largest possible $c$, so that the improvement is more evident for larger alphabets. \n\n\begin{figure}[ht]\n\centering\n\includegraphics[width =0.58\textwidth,bb=30 110 550 680,clip]{examine_c.pdf}\n\caption{The choice of $c$ in Zipf's Law and uniform setups. The sample size is $n=1000$.}\n\label{simulated_exp_c}\n\end{figure}\n\n\n\section{Discussion}\nIn this work we introduce an interval estimation framework for the probability of symbols that do not appear in the sample. Our suggested framework is an SoS inference scheme, designed to simultaneously control the selected parameters. We distinguish between two setups, depending on the alphabet size. First, we consider the case where the alphabet size $k$ is unknown and possibly unbounded. This setup is of special interest in many real-world applications, as described throughout the manuscript. We introduce a closed-form expression for the CI, which is independent of the alphabet size. Second, we study the case where the alphabet size is known (or bounded). Here, we derive an efficient numerical routine which improves upon the unbounded-$k$ solution in cases where $k$ is relatively small. It is important to emphasize that in both setups, the proposed CI is independent of the sample, similarly to the ROT. This makes it a robust framework which is easy to apply. Next, we show that our proposed CIs are (almost) tight, in the sense that there exists a probability distribution $p$ for which we cover the missing probabilities at a confidence level (almost) equal to $1-\alpha$. We compare our proposed scheme to currently known methods, showing a significant improvement in synthetic and real-world experiments. Finally, we apply our proposed CI to large alphabet inference.
Specifically, we introduce a novel scheme that provably improves upon the alternatives, while controlling the desired coverage rate. \n\nTo conclude, we revisit the motivating example in Section \ref{intro}. The $2006$ New Zealand Ministry of Health report indicated $58$ new cancer cases among Maori females under the age of $30$. Specifically, out of the $75$ studied cancer types, only $37$ were observed. Using the Bonferroni-corrected ROT, we obtain a confidence interval of $[0,0.126]$ for the unobserved cancer types. Applying our proposed scheme (under the more robust unbounded-$k$ assumption), we obtain a shorter CI of $[0,0.089]$. Similarly, for Pacific Islands men of the same age group, an additional part of the report indicated only $11$ new cancer cases in $2006$, where each case is of a different type. Here, the Bonferroni-corrected ROT suggests a CI of $[0,0.244]$, while our proposed scheme obtains a CI of $[0,0.15]$. As we can see, our interval estimation scheme demonstrates a significant improvement. This makes it a favorable alternative for this important problem, which applies to many applications.\n\nFinally, our proposed framework may be generalized to consider the collection of probabilities with $i$ appearances in the sample. This would allow us to control more events of interest, and further improve our large alphabet inference scheme. We consider this direction for future work. \n\n\n\n\begin{appendices}\n\section{A Proof for Property \ref{property1}}\n\nLet $r\geq 1$ and $h(t)=t^r(1-t)^n$. Then, the optimum of $h(t)$ satisfies\n\begin{align}\n \frac{dh(t)}{dt}=t^{r-1}(1-t)^{n-1}(r(1-t)-nt)=0.\n\end{align}\nThis implies that $t^*\triangleq r\/(r+n)$ is a local optimum. Further, \n\begin{align}\nonumber\n \frac{d^2h(t)}{dt^2}=&t^{r-2}(1-t)^{n-2}\bigg(r(r-1)(1-t)^2-2nrt(1-t)+n(n-1)t^2\bigg)=\\\nonumber\n &t^{r-2}(1-t)^{n-2}\bigg(t^2\big(r(r-1)+2nr+n(n-1)\big)+t\big(-2r(r-1)-2nr\big)+r(r-1)\bigg).\n\end{align}\nDenote the roots of the quadratic form \n$$z(t)=t^2\big(r(r-1)+2nr+n(n-1)\big)+t\big(-2r(r-1)-2nr\big)+r(r-1)$$\nas $t_1$ and $t_2$. Simple calculus shows that \n$$t_{1,2}=t^*\pm \frac{1}{r+n}\sqrt{\frac{rn}{r+n-1}}.$$\n\nAs we can see, $z(t)$ is quadratic and convex in $t$. This means that $z(t)<0$ for $t_1<t<t_2$ and $z(t)>0$ elsewhere. This implies that $h(t)$ is concave for $t_1<t<t_2$ and convex elsewhere.\n\n\begin{property}\label{property2}\nLet $p^* \in \Delta_k$ be the maximizer of $E_{r,n}(p)=\sum_u h(p(u))$, where $h(p(u))\triangleq p^r(u)(1-p(u))^n$. Then, all the entries $p^*(u)\in \left[t_1,t_2\right]$ attain the same value.\n\end{property}\n\begin{proof}\nBy negation, assume there exist $p^*(u)\neq p^*(v)$ such that $p^*(u),p^*(v)\in \left[t_1,t_2\right]$. Define $\tilde{p}\in \Delta_k$ such that $\tilde{p}(l)=p^*(l)$ for every $l\neq u,v$, and $\tilde{p}(u)=\tilde{p}(v)=\left(p^*(u)+p^*(v)\right)\/2$. Then,\n\begin{align}\nonumber\n \sum_l h(\tilde{p}(l))=&\sum_{l \neq u,v} h(p^*(l))+2h\left(\frac{p^*(u)+p^*(v)}{2}\right)>\\\nonumber\n &\sum_{l \neq u,v} h(p^*(l))+h(p^*(u))+h(p^*(v))=\sum_u h(p^*(u)).\n\end{align}\nwhere the inequality follows from the concavity of $h(p(l))$ for every $p(l) \in \left[t_1,t_2\right]$. Therefore, we found $\tilde{p}\in \Delta_k$ for which $\sum_l h(\tilde{p}(l))>\sum_l h(p^*(l))$, which contradicts the optimality of $p^*$. \n\end{proof}\n\n\n\begin{property}\label{property4}\nLet $p^* \in \Delta_k$ be the maximizer of $E_{r,n}(p)=\sum_u h(p(u))$, where $h(p(u))\triangleq p^r(u)(1-p(u))^n$. Then, there exists at most a single $p^*(u)$ such that $p^*(u)\in \left(0,t_1\right)$.\n\end{property}\n\begin{proof}\nBy negation, assume there exist $p^*(u)$ and $p^*(v)$ such that $p^*(u),p^*(v)\in \left(0,t_1\right)$. Assume, without loss of generality, that $p^*(v)\leq p^*(u)$. Define $\delta= p^*(v)>0$. \n\nLet us first assume that $p^*(u)+\delta<t_1$. The function $h(p(u))$ is convex for $p(u)\in \left[0,t_1\right]$ and strictly convex for $p(u)\in \left(0,t_1\right)$. Therefore,\n\begin{align}\nonumber\n &h\left(p^*(u)+\delta\right)> h\left(p^*(u)\right)+\delta h'\left(p^*(u)\right)\\\nonumber\n &h\left(p^*(v)-\delta\right)\geq h\left(p^*(v)\right)-\delta h'\left(p^*(v)\right).\n\end{align}\nwhere $h'(p(u))={dh(p(u))}\/{dp(u)}$.
Putting together the above, we have \n\\begin{align}\n h\\left(p^*(u)+\\delta\\right)+&h\\left(p^*(v)-\\delta\\right)>h\\left(p^*(u)\\right)+h\\left(p^*(v)\\right)+\\delta\\left(h'\\left(p^*(u)\\right)-h'\\left(p^*(v)\\right)\\right).\n\\end{align}\nWe observe that $h'(p(u))$ is an increasing function in $p(u)$, for $p(u)\\in \\left(0,t_1\\right)$, as its derivative, ${d^2 h(p(u))}\/{dp^2(u)}$ is positive in this range. Therefore, $h'\\left(p^*(u)\\right)\\geq h'\\left(p^*(v)\\right)$ and \n\\begin{align}\n h\\left(p^*(u)+\\delta\\right)+h\\left(p^*(v)-\\delta\\right)> h\\left(p^*(u)\\right)+h\\left(p^*(v)\\right).\n\\end{align}\nTherefore, we found $\\tilde{p}\\in \\Delta_k$ such that \n\\begin{equation}\n \\tilde{p}(l)=\n \\begin{cases}\n p^*(l) & l\\neq u,v\\\\\n 0 & l=v\\\\\n p^*(u)+\\delta & l=u\\\\\n \\end{cases}\n\\end{equation}\nand $\\sum_l h(\\tilde{p}(l))> \\sum_l h(p^*(l)) $, which contradicts the optimality of $p^*$.\n\n\nNow, assume that $p^*(u)+\\delta\\geq t_1$. Then, define $\\tilde{\\delta}=t_1-p^*(u)>0$. We have\n\\begin{align}\n h\\left(p^*(u)+\\tilde{\\delta}\\right)\\geq h\\left(p^*(u)\\right)+\\tilde{\\delta} h'\\left(p^*(u)\\right)\n\\end{align}\n\\begin{align}\n h\\left(p^*(v)-\\tilde{\\delta}\\right)> h\\left(p^*(v)\\right)-\\tilde{\\delta} h'\\left(p^*(v)\\right).\n\\end{align}\nPutting together the above, we have \n\\begin{align}\n h\\left(p^*(u)+\\tilde{\\delta}\\right)+&h\\left(p^*(v)-\\tilde{\\delta}\\right)>h\\left(p^*(u)\\right)+h\\left(p^*(v)\\right)+\\tilde{\\delta}\\left(h'\\left(p^*(u)\\right)-h'\\left(p^*(v)\\right)\\right).\\nonumber\n\\end{align}\nAs above, we observe that $h'(p(u))$ is an increasing function in $p(u)$, for $p(u)\\in \\left(0,t_1\\right)$. Therefore, $h'\\left(p^*(u)\\right)\\geq h'\\left(p^*(v)\\right)$ and \n\\begin{align}\n h\\left(p^*(v)-\\tilde{\\delta}\\right)+h\\left(p^*(u)+\\tilde{\\delta}\\right)> h\\left(p^*(v)\\right)+h\\left(p^*(u)\\right).\n\\end{align}\nTherefore, we found $\\tilde{p}\\in \\Delta_k$ such that \n\\begin{equation}\n \\tilde{p}(l)=\n \\begin{cases}\n p^*(l) & l\\neq u,v\\\\\n p^*(v)-\\tilde{\\delta} & l=v\\\\\n t_1 & l=u\\\\\n \\end{cases}\n\\end{equation}\nand $\\sum_l h(\\tilde{p}(l))> \\sum_l h(p^*(l)) $, which again contradicts the optimality of $p^*$.\n\\end{proof}\n\n\n\\begin{property}\\label{property3}\nLet $p^* \\in \\Delta_k$ be the maximizer of $E_{r,n}(p)=\\sum_u h(p(u))$, where $h(p(u))\\triangleq p^r(u)(1-p(u))^n$. Then, there exists at most a single $p^*(u)$ such that $p^*(u)\\in \\left(t_2,1\\right]$.\n\\end{property}\n\\begin{proof}\nBy negation, assume there exist $p^*(u)$ and $p^*(v)$ such that $p^*(u),p^*(v)\\in \\left(t_2,1\\right]$. Assume, without loss of generality, that $p^*(v)\\leq p^*(u)$. Define $\\delta= p^*(v)-t_2>0$. The function $h(p(u))$ is convex for $p(u) \\in \\left[t_2,1\\right]$ and strictly convex for $p(u) \\in \\left(t_2,1\\right]$. 
Therefore, we have\n\begin{align}\n h\left(t_2\right)\geq h\left(p^*(v)\right)-\delta h'\left(p^*(v)\right)\n\end{align}\n\begin{align}\n h\left(p^*(u)+\delta\right)> h\left(p^*(u)\right)+\delta h'\left(p^*(u)\right)\n\end{align}\n\nPutting together the above, we have \n\begin{align}\n h\left(t_2\right)+&h\left(p^*(u)+\delta\right)> h\left(p^*(v)\right)+h\left(p^*(u)\right)+\delta\left(h'\left(p^*(u)\right)-h'\left(p^*(v)\right)\right).\n\end{align}\nWe observe that $h'(p(u))$ is an increasing function in $p(u)$, for $p(u)\in \left(t_2,1\right]$, as its derivative, ${d^2 h(p(u))}\/{dp^2(u)}$, is positive in this range. Therefore, $h'\left(p^*(u)\right)\geq h'\left(p^*(v)\right)$ and \n\begin{align}\n h\left(t_2\right)+h\left(p^*(u)+\delta\right)> h\left(p^*(v)\right)+h\left(p^*(u)\right).\n\end{align}\nTherefore, we define $\tilde{p}\in \Delta_k$ such that \n\begin{equation}\n \tilde{p}(l)=\n \begin{cases}\n p^*(l) & l\neq u,v\\\n p^*(l)-\delta & l=v\\\n p^*(l)+\delta & l=u\\\n \end{cases}\n\end{equation}\nand $\sum_l h(\tilde{p}(l))> \sum_l h(p^*(l))$, which contradicts the optimality of $p^*$.\end{proof}\n\n\section{A Proof for Proposition \ref{prop2}}\nLet $X^n$ be a sample of $n$ independent observations from $p_m \in \mathcal{U}_k$. This means that $M_{max}(X^n)\in\{0,1\/m\}$, where $M_{max}(X^n)=1\/m$ if and only if there exists at least one symbol that does not appear in the sample, as all symbols are equiprobable. Therefore, the probability that $M_{max}(X^n)=1\/m$ equals the probability that, when $n$ balls are placed uniformly at random into $m$ bins, at least a single bin remains empty. Equivalently, \n\n\begin{align}\label{worst-case_appendix}\n P\left(M_{max}(X^n) = \frac{1}{m}\right)=1-\frac{m!S(n,m)}{m^n},\n \end{align}\nwhere $m!S(n,m)$ is the number of ways of placing $n$ distinguishable balls in $m$ distinguishable bins such that no bin is empty, and $m^n$ is the total number of ways of placing $n$ distinguishable balls into $m$ distinguishable bins \cite{graham1989concrete}. 
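\nAs a quick numerical sanity check of (\ref{worst-case_appendix}), the surjection count can be evaluated by inclusion-exclusion, $m!S(n,m)=\sum_{j=0}^{m}(-1)^j\binom{m}{j}(m-j)^n$, which avoids any floating-point issues for moderate $n$ and $m$. A minimal sketch:\n\begin{verbatim}\nfrom math import comb\n\ndef prob_worst_case(n, m):\n    # P(M_max = 1\/m) = 1 - m! S(n,m) \/ m^n, with the surjection count\n    # computed by inclusion-exclusion in exact integer arithmetic.\n    surjections = sum((-1)**j * comb(m, j) * (m - j)**n\n                      for j in range(m + 1))\n    return 1 - surjections \/ m**n\n\nprint(prob_worst_case(1000, 200))  # probability that some symbol is unseen\n\end{verbatim}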
\section{A Proof for Theorem \ref{T3}}\label{T3_appendix}\n\begin{proof}\n Let $L_i$ be the length of the CI for a symbol that appears $i$ times in the sample. The Bonferroni-corrected CR satisfies\n\begin{align}\n L^{BC}_0=-\log(\alpha\/k)\/n\quad,\quad L^{BC}_i=2z_{1-\frac{\alpha}{2k}}\sqrt{\frac{i\/n(1-i\/n)}{n}}\;\; \forall i>0.\n\end{align}\nNotice that we use a normal approximation for the binomial CI to simplify our derivation. Next, given a fixed $c \in [0,1]$, the CI of our proposed method satisfies \n\begin{align}\nonumber\n L^{c}_0=\min_{r\geq 1}\left((q_{r,n}^*)^{r-1}(1-q_{r,n}^*)^n\/(c\alpha) \right)^{1\/r}\quad,\quad L^{c}_i=2z_{1-\frac{\alpha(1-c)}{2n}}\sqrt{\frac{i\/n(1-i\/n)}{n}}\;\; \forall i>0,\n\end{align}\nwhere $q_{r,n}^*$ is defined in Theorem $1$. Notice that for simplicity, we use the result in Theorem $1$ although the alphabet size $k$ is known. The expected log-volume of a rectangular CR satisfies\n\begin{align}\n \mathbb{E}\log V= \mathbb{E}\log \prod_{i=0}^nL_i^{\sum_u\mathbbm{1}(N_u(X^n)=i)}=\sum_{i=0}^n\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=i)\right)\log L_i.\n\end{align}\nWe would like to find $c\in [0,1]$ such that $\mathbb{E}\log V_c\leq \mathbb{E}\log V_0$. We have\n\begin{align}\n \mathbb{E}\log V_c-\mathbb{E}\log V_0=&\sum_{i=0}^n\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=i)\right)\left(\log L_i^c-\log L_i^{BC}\right)=\\\nonumber\n &\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=0)\right)\left(\log L_0^c-\log L_0^{BC}\right)+\\\nonumber\n &\sum_{i=1}^n\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=i)\right)\left(\log L_i^c-\log L_i^{BC}\right)=\\\nonumber\n &\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=0)\right)\left(\log L_0^c-\log L_0^{BC}\right)+\\\nonumber\n &\left(\log z_{1-\frac{\alpha(1-c)}{2n}}-\log z_{1-\frac{\alpha}{2k}}\right)\sum_{i=1}^n\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=i)\right)=\\\nonumber\n &\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=0)\right)\left(\log L_0^c-\log L_0^{BC}\right)+\\\nonumber\n &\left(\log z_{1-\frac{\alpha(1-c)}{2n}}-\log z_{1-\frac{\alpha}{2k}}\right)\left(k-\sum_u\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=0)\right)\right),\n\end{align}\nwhere the last equality follows from $\sum_u\sum_{i=0}^n \mathbbm{1}(N_u(X^n)=i)=k$. Notice that $\mathbb{E}\left(\mathbbm{1}(N_u(X^n)=0)\right)=(1-p_u)^n$. Therefore, \n\begin{align}\label{eq1a}\n \mathbb{E}\log V_c-\mathbb{E}\log V_0=&\sum_u(1-p_u)^n\left(\log L_0^c-\log L_0^{BC}\right)+\\\nonumber\n &\left(k-\sum_u(1-p_u)^n\right)\left(\log z_{1-\frac{\alpha(1-c)}{2n}}-\log z_{1-\frac{\alpha}{2k}}\right).\n\end{align}\nNotice that (\ref{eq1a}) is linear in $\sum_u(1-p_u)^n$. Further, simple calculus shows that $\sum_u(1-p_u)^n$ attains its minimum for a uniform distribution, while its maximum is attained for a degenerate distribution. Therefore, $k(1-1\/k)^n\leq \sum_u(1-p_u)^n\leq k-1$. This means that \n\begin{align}\n &\mathbb{E}\log V_c-\mathbb{E}\log V_0\leq\\\nonumber\n &\max\bigg\{ k(1-1\/k)^n\left(\log L_0^c-\log L_0^{BC}\right)+k\left(1-(1-1\/k)^n\right)\left(\log z_{1-\frac{\alpha(1-c)}{2n}}-\log z_{1-\frac{\alpha}{2k}}\right),\\\nonumber\n &\quad\quad\quad (k-1)\left(\log L_0^c-\log L_0^{BC}\right)+\left(\log z_{1-\frac{\alpha(1-c)}{2n}}-\log z_{1-\frac{\alpha}{2k}}\right)\bigg\}.\n\end{align}\nWe require that $\mathbb{E}\log V_c-\mathbb{E}\log V_0 \leq 0$. This holds if both arguments of the maximum above are nonpositive, as stated in conditions (a) and (b). \n\end{proof}\n\n\newpage\n\n\section{Coverage Rate for Large Alphabet SCIs}\n\n\begin{figure}[ht]\n\centering\n\includegraphics[width =0.45\textwidth,bb=50 150 530 620,clip]{examine_coverage.pdf}\n\caption{CR coverage for a Zipf's Law distribution. The sample size is $n=1000$.}\n\label{simulated_exp_coverage}\n\end{figure}\n\n\end{appendices}\n\n\section*{Acknowledgements}\nThis research is supported by the Israel Science Foundation, grant number 963\/21. The author thanks Ruth Heller and Yoav Benjamini for helpful discussions. \n\n\n\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}