\section{Introduction}\n\nModelling dynamical systems is a common problem in science and engineering, with many applications, e.g. robot navigation, natural language processing, speech recognition, etc.~\cite{ICML2013_hamilton13}. Many approaches have been devoted to modelling such systems, e.g. Hidden Markov Models~(HMMs) for uncontrolled dynamical systems and Partially Observable Markov Decision Processes~(POMDPs) for controlled systems. However, such latent-state approaches usually suffer from local minima and require strong assumptions~\cite{Singh04predictivestate}. Predictive State Representations~(PSRs) offer an effective approach for modelling partially observable dynamical systems~\cite{Littman01predictiverepresentations}. Unlike the latent-state approaches, PSRs represent the state by a vector of predictions about future events, called tests. The tests can be executed on the system and are fully observable. Compared to the latent-state approaches, PSRs have many advantages, such as the possibility of obtaining a globally optimal model, greater expressive power, and less required prior domain knowledge~\cite{LiuAAMAS15}.\n\nThere are two main problems in PSRs. One is the learning of the PSR model; the other is the application of the learned model, including prediction and planning~\cite{Rosencrantz04learninglow,Liu14inaccuratePSR}. The state-of-the-art technique for addressing the learning problem is the spectral approach~\cite{Boots-2011b}. Spectral methods treat the learning problem as the task of computing a singular value decomposition~(SVD) of a submatrix $H_s$ of a special type of matrix called the Hankel matrix~\cite{Hsu_aspectral}. 
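As a concrete illustration, the entries of such a submatrix can be estimated from sampled trajectories by simple counting. The sketch below is ours, not from the paper: it uses a made-up two-state hidden Markov chain as an uncontrolled toy system, and the particular history/test sets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy uncontrolled dynamical system (hypothetical): a 2-state Markov
# chain emitting observations 0/1.
T = np.array([[0.9, 0.1], [0.2, 0.8]])   # state transitions
O = np.array([[0.8, 0.2], [0.3, 0.7]])   # P(obs | state)
pi = np.array([0.5, 0.5])

def sample_seq(length):
    # sample one observation trajectory from the toy system
    s = rng.choice(2, p=pi)
    out = []
    for _ in range(length):
        out.append(rng.choice(2, p=O[s]))
        s = rng.choice(2, p=T[s])
    return tuple(out)

N = 5000
data = [sample_seq(4) for _ in range(N)]

histories = [(), (0,), (1,)]          # rows H of the submatrix H_s
tests = [(0,), (1,), (0, 0), (1, 1)]  # columns T of the submatrix

def p_hat(prefix):
    # empirical probability that a trajectory starts with `prefix`
    return sum(seq[:len(prefix)] == prefix for seq in data) / N

# entry (h, t) estimates p(ht), the probability of the concatenation
H_s = np.array([[p_hat(h + t) for t in tests] for h in histories])
```

By construction every estimated entry lies in $[0,1]$ and $\hat p(ht)\le\hat p(h)$, mirroring the probabilistic constraints on the true Hankel entries.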
Under the strong assumptions that the histories $H$ and tests $T$ indexing $H_s$ are sufficient and that the entries of the matrix can be estimated accurately, it has been proven that the spectral approach for learning PSRs is statistically consistent and the learned parameters converge to the true parameters~\cite{Boots-2011b}. However, the sufficiency assumption usually requires a very large number of rows or columns of the $H_s$ matrix, which is rarely feasible in practice~\cite{kulesza2015spectral}. At the same time, the computational complexity of the SVD operation on $H_s$ is $O(|T|^2|H|)$~(where $|T|$ and $|H|$ are the numbers of columns and rows of the matrix respectively); for large sets $T$ and $H$, such an operation is prohibitively expensive. Also, to obtain the model parameters, the spectral approach must estimate and manipulate two observable matrices $P_{T,H}$, $P_{H}$, and $|A|\times|O|$ observable matrices $P_{T,ao,H}$~\cite{Liu16IJCAI}. Although Denis {\em et al.}~\cite{DBLP:conf\/icml\/DenisGH14} showed that the concentration of the empirical $P_{T,H}$ around its mean does not depend strongly on its dimension, which gives some hope of alleviating the statistical problem when using large sets $T$ and $H$~(as the accuracy of the learned model is directly connected to this concentration), manipulating these matrices is still too expensive in large systems.\n\nThus, in practice, taking the computational constraints into account, one needs to find a finite set of columns of the Hankel matrix before the spectral methods can be applied. Since different sets of columns usually lead to different accuracies of the learned model, in this paper we first introduce the concept of model entropy and show the high relevance between this entropy and the accuracy of the learned model; then, for the column selection problem, which we call basis selection in spectral learning of PSRs, we propose an approach that uses the model entropy as guidance. 
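The truncated SVD at the heart of the spectral approach can be sketched numerically. The snippet below is our illustration, not the paper's code: the rank, matrix sizes, and noise level are hypothetical, and it only shows how the leading left singular vectors recover the low-rank state subspace from a noisy Hankel estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical low-rank Hankel estimate: rank-k signal plus sampling noise.
k, n_T, n_H = 3, 40, 60
P_TH = rng.random((n_T, k)) @ rng.random((k, n_H))
P_TH_hat = P_TH + 1e-3 * rng.standard_normal((n_T, n_H))

# Truncated SVD keeps the k leading left singular vectors U (a basis for
# the learned state space); the full SVD costs O(n_T * n_H * min(n_T, n_H)).
U, s, Vt = np.linalg.svd(P_TH_hat, full_matrices=False)
U_k = U[:, :k]

# Projecting the Hankel estimate onto the learned subspace loses almost
# nothing when the chosen rank matches the signal rank.
residual = np.linalg.norm(P_TH_hat - U_k @ (U_k.T @ P_TH_hat))
```

The sharp drop between the $k$-th and $(k{+}1)$-th singular values is what makes the rank detectable; with insufficient rows or columns that gap disappears, which is one way the sufficiency assumption bites.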
We finally show the effectiveness of the proposed approach by experiments on {\em PocMan}, a benchmark domain in the literature.\n\nWe organize the remainder of this paper as follows. We briefly review the background and define notation in Section 2. We propose the approach for basis selection in Section 3. We provide comparative results in Section 4. Finally, we conclude the paper in Section 5.\n\n\n\section{Preliminaries}\n\nPredictive state representations~(PSRs) represent state by a vector of predictions of fully observable quantities~(tests) conditioned on past events~(histories), denoted $b(\cdot)$. For discrete systems with a finite set of observations $O=\{o^1,o^2,\cdots,o^{|O|}\}$ and actions $A=\{a^1,a^2,\cdots,a^{|A|}\}$, at time $\tau$, a \emph{test} is a sequence of action-observation pairs that starts from time $\tau + 1$. Similarly, a \emph{history} at $\tau$ is a sequence of action-observation pairs that starts from the beginning of time and ends at time $\tau$, which is used to describe the full sequence of past events. The prediction of a length-$m$ test $t$ at history $h$ is defined as $p(t|h)=p(ht)\/p(h)=\prod^m_{i=1}Pr(o_i|ha_1o_1\cdots a_i)$~\cite{Singh04predictivestate}.\n\nThe underlying dynamical system can be described by a special bi-infinite matrix, called the Hankel matrix~\cite{Balle:2014ML}, where the rows and columns correspond to all possible histories $H$ and tests $T$ respectively; the entries of the matrix are defined as $P_{h,t}=p(ht)$ for any $t \in T$ and $h \in H$, where $ht$ is the concatenation of $h$ and $t$~\cite{Boots-2011b}. The rank of the Hankel matrix is called the linear dimension of the system. When the rank is finite, say $k$, the PSR state of the system at history $h$ can be represented as a prediction vector of $k$ tests conditioned on $h$. 
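The connection between the Hankel matrix and the linear dimension can be checked on a small example. The following sketch is ours (the two-state HMM is made up): it fills an exact Hankel submatrix via the forward algorithm and verifies that the numerical rank equals the number of hidden states.

```python
import numpy as np
from itertools import product

# Hypothetical 2-state HMM used only to populate a Hankel submatrix.
T = np.array([[0.9, 0.1], [0.3, 0.7]])   # state transitions
O = np.array([[0.8, 0.2], [0.4, 0.6]])   # P(obs | state)
pi = np.array([0.6, 0.4])

def p(seq):
    # forward algorithm: exact probability of an observation sequence
    alpha = pi * O[:, seq[0]]
    for o in seq[1:]:
        alpha = (alpha @ T) * O[:, o]
    return alpha.sum()

# all observation strings of length 1 or 2, used as both histories and tests
seqs = [tuple(s) for L in (1, 2) for s in product([0, 1], repeat=L)]

# entry (h, t) is p(ht); the rank of this matrix is the linear dimension
H = np.array([[p(h + t) for t in seqs] for h in seqs])
rank = np.linalg.matrix_rank(H, tol=1e-10)
```

For this toy system the rank comes out as 2, matching the number of hidden states, which is the finite linear dimension discussed above.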
The $k$ tests used as the state representation are called the minimal {\em core tests}: the predictions of these tests contain sufficient information to calculate the predictions for all tests, i.e., they form a \emph{sufficient statistic}~\cite{Singh04predictivestate,LiuAAMAS15}. For linear dynamical systems, the minimal core tests can be taken as the set of tests corresponding to $k$ linearly independent columns of the Hankel matrix~\cite{LiuAAMAS15}.\n\nA PSR of rank $k$ can be parameterized by a reference condition state vector $b_* = b(\epsilon)\in \mathbb{R}^k$, an update matrix $B_{ao} \in \mathbb{R}^{k \times k}$ for each $a \in A$ and $o \in O$, and a normalization vector $b_\infty \in \mathbb{R}^k$, where $\epsilon$ is the empty history and $b_\infty^T B_{ao} = 1^T$~\cite{Hsu_aspectral,Boots-2011b}. In the spectral approach, these parameters can be defined in terms of the matrices $P_\mathcal{H}$, $P_{\mathcal{T,H}}$, $P_{\mathcal{T},ao,\mathcal{H}}$ and an additional matrix $U \in \mathbb{R}^{|\mathcal{T}| \times k}$ as shown in Eq.~\ref{equ:spectral}, where $\mathcal{T}$ and $\mathcal{H}$ are the sets of all possible tests and histories respectively, $U$ is the matrix of left singular vectors of $P_{\mathcal{T,H}}$, $^T$ denotes the transpose, and $\dag$ denotes the pseudo-inverse of a matrix~\cite{Boots-2011b}.\n\begin{equation}\n\begin{split}\n& b_* = U^TP_{\mathcal{T,H}}1_k, \\\n& b_{\infty} = (P^T_{\mathcal{T,H}}U)^{\dag}P_\mathcal{H}, \\\n& B_{ao} = U^TP_{\mathcal{T},ao,\mathcal{H}}(U^TP_{\mathcal{T,H}})^\dag.\n\end{split}\n\label{equ:spectral}\n\end{equation}\n\nUsing these parameters, after taking action $a$ and seeing observation $o$ at history $h$, the PSR state at the next time step, $b(hao)$, is updated from $b(h)$ as follows~\cite{Boots-2011b}:\n\n\begin{equation}\nb(hao) = \frac{B_{ao}b(h)}{b_\infty^TB_{ao}b(h)}.\n\end{equation}\n\nAlso, the probability of observing the sequence $a_1o_1a_2o_2 \cdots a_no_n$ in the next $n$ 
time steps can be predicted by~\cite{kulesza2015spectral}:\n\n\begin{equation}\nPr[o_{1:n}||a_{1:n}] = b_{\infty}^TB_{a_no_n} \cdots B_{a_2o_2}B_{a_1o_1}b_*.\n\end{equation}\n\nUnder the assumption that the columns and rows of the submatrix $H_s$ of the Hankel matrix are sufficient, as more and more data are included, the law of large numbers guarantees that the estimates $\hat P_\mathcal{H}$, $\hat P_\mathcal{T,H}$, and $\hat P_{\mathcal{T},ao,\mathcal{H}}$ converge to the true matrices $P_\mathcal{H}$, $P_\mathcal{T,H}$, and $P_{\mathcal{T},ao,\mathcal{H}}$; consequently, the estimates $\hat b_*$, $\hat b_{\infty}$, and $\hat B_{ao}$ converge to the true parameters $b_*$, $b_{\infty}$, and $B_{ao}$ for each $a \in A$ and $o \in O$, i.e., the learning is consistent~\cite{Boots-2011b}.\n\n\section{Basis Selection via Model Entropy}\nThe assumption that the columns and rows of the submatrix $H_s$ are sufficient is very strong~(it usually means a very large set of tests and histories), and it almost always fails in reality. At the same time, due to computational and statistical constraints, we can only manipulate a limited finite set of tests\/histories. Since different sets of columns~(tests) or rows~(histories) usually yield different model accuracies, we must select the finite sets of tests and histories used for applying the spectral approach. In practice, as all available training data is used to generate the histories, how to select the tests, i.e., the basis selection, is the crucial problem. In this section, we first introduce the concept of model entropy, then show the high relevance between the entropy and the model accuracy, and finally propose a simple search method for selecting the bases.\n\n\noindent{\bf Model Entropy}. 
Given a set of tests $X=\{x_1, x_2, \cdots, x_{|X|}\}$, if it includes the set of core tests and the number of possible $p(X|\cdot)$ is finite, Proposition~\ref{pro:mdp} holds~\cite{Liu16IJCAI}.\n\begin{pro}\nThe Markov decision process~(MDP) model built by using $p(X|\cdot)$ as the state representation and the action-observation pair $\langle ao \rangle$ as the action is deterministic.\n\label{pro:mdp}\n\end{pro}\nIn practice, it is difficult for $X$ to include the set of core tests. At any time step, the prediction vector $p(X|\cdot)$ may actually correspond to several PSR states. In such cases, the transition from $p(X|h)$ to $p(X|hao)$ usually becomes stochastic, and the less information is included in $X$, the more stochastic the transition will usually be. Inspired by the concept of Shannon entropy, which measures information uncertainty~\cite{shannon48}, to quantify this stochasticity, the model entropy of a set of tests $X$ is defined in Eq.~\ref{equ:entropy}~\cite{Liu16IJCAI}:\n\begin{equation}\n\begin{small}\nE(X) = -\sum_{a \in \mathcal{A}_{PP}} \frac{1}{r(T^a)} \sum_{i=1}^{r(T^a)} \sum_{j=1}^{c(T^a)}T(s_i,a,s_j)\log T(s_i,a,s_j),\n\end{small}\n\label{equ:entropy}\n\end{equation}\nwhere $T$ is the state-transition function of the MDP using $p(X|\cdot)$ as the state representation and $\langle ao \rangle$ as the action, $\mathcal{A}_{PP}=A \times O$ is the set of action-observation pairs in the original system, and $r(T^a)$ and $c(T^a)$ are the numbers of rows and columns of the state-transition matrix $T^a$ respectively.\n\n\noindent{\bf Relevance between Model Entropy and Basis Selection}. When selecting the set of tests for applying spectral methods, the tests containing more information should be selected, as the more information is included in the set of tests, the more accurate the learned PSR model will usually be. 
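The model entropy above can be computed directly from the state-transition matrices of the derived MDP. This is a minimal sketch under our own conventions (the transition matrices are made up; the paper's `EntropyLearn` estimates them from data): a deterministic model, as in the proposition, has entropy zero, while spreading transition probability mass raises the entropy.

```python
import numpy as np

def model_entropy(trans):
    # trans: dict mapping each action-observation pair <ao> to a
    # row-stochastic state-transition matrix T^a
    # (rows: current state s_i, columns: next state s_j).
    E = 0.0
    for Ta in trans.values():
        r = Ta.shape[0]                  # r(T^a), the number of rows
        P = Ta[Ta > 0]                   # convention: 0 * log 0 = 0
        E -= (P * np.log(P)).sum() / r
    return E

# A deterministic model (a single 1 in each row) has entropy 0 ...
det = {"a0o0": np.eye(3), "a0o1": np.eye(3)[[1, 2, 0]]}
# ... while maximally stochastic transitions give the largest entropy.
sto = {"a0o0": np.full((3, 3), 1/3), "a0o1": np.full((3, 3), 1/3)}
```

For the uniform example each action-observation pair contributes $\log 3$, so the total is $2\log 3$, illustrating how lost information in $X$ shows up as a larger $E(X)$.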
According to Eq.~\ref{equ:entropy}, the less information is included in the set of tests $X$, the more stochastic the state transition will be, and the higher the model entropy is. Hence, for the basis selection problem, the set of tests with the lowest model entropy should be selected.\n\n\noindent{\bf Basis Selection via Model Entropy}. Using the model entropy as guidance, we propose a simple local search algorithm for selecting the set of tests used for spectral learning of PSRs, shown in Algorithm~\ref{Algo:selecting}. Starting with a default $T$ of the desired size, we iteratively sample a set of new tests and consider using it to replace some tests in $T$. If the replacement reduces the entropy value, we keep it. After a fixed number of rounds, we stop and return the current $T$. In the algorithm, the entropy $E(T)$ for each candidate set of tests $T$ is calculated as follows~(we name this procedure $EntropyLearn(D,T)$): we first generate a random action-observation sequence $D$ as the training data for calculating the entropy. Then the data is translated into the form of $\langle$action-observation$\rangle$-$p(T|\cdot)$ sequences. For example, a sequence $d=\langle a_1o_1a_2o_2 \cdots a_ko_k \rangle$ is converted into $d'=\langle a_1o_1 \rangle p(T|a_1o_1)\langle a_2o_2\rangle p(T|a_1o_1a_2o_2) \cdots \langle a_ko_k\rangle p(T|a_1o_1a_2o_2 \cdots a_ko_k)$, where $p(T|\cdot)$ can be estimated using the training data. Due to sampling error, it is unlikely that any of these estimated $p(\hat{T}|\cdot)$ will be exactly the same, even if the true underlying $p(T|\cdot)$ are identical. Statistical tests or linear-independence techniques can be used to estimate the number of distinct underlying $p(T|\cdot)$ and to cluster the estimated $p(\hat{T}|\cdot)$ corresponding to the same true prediction vector into one group~(state)~\cite{TalvitieS11,Liu14inaccuratePSR}. 
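The clustering step just described can be sketched with a simple greedy threshold rule; this is one option among several (the paper also mentions statistical tests), and the tolerance and toy prediction vectors below are our assumptions.

```python
import numpy as np

def cluster_predictions(vectors, tol=0.1):
    # Greedy clustering: estimated prediction vectors within `tol`
    # (L-infinity distance) of an existing centroid are treated as the
    # same underlying PSR state.
    centroids, labels = [], []
    for v in vectors:
        for i, c in enumerate(centroids):
            if np.max(np.abs(v - c)) < tol:
                labels.append(i)
                break
        else:
            centroids.append(v)
            labels.append(len(centroids) - 1)
    return np.array(labels), centroids

# Two true prediction vectors observed with small sampling noise
rng = np.random.default_rng(2)
true_states = np.array([[0.1, 0.9], [0.7, 0.3]])
noisy = true_states[rng.integers(0, 2, 50)] \
        + 0.01 * rng.standard_normal((50, 2))
labels, centroids = cluster_predictions(noisy)
```

With noise far smaller than the separation between the true vectors, the two underlying states are recovered; the cluster index then plays the role of the MDP state when building the transition matrices.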
Subsequently, we compute the state-transition functions from the transformed data and build the MDP model. Finally, the entropy $E(T)$ of a set of tests $T$ can be calculated according to Eq.~\ref{equ:entropy}.\n\n\begin{algorithm}[tb]\n \caption{Search for a set of $k$ tests $T$ that approximately minimizes $E(T)$}\n \label{Algo:selecting}\n \n\begin{algorithmic}[1]\n \STATE {\bfseries Input:} dataset $D$, initial $T$ of size $k$, replacement size $n < k$, a distribution $p_T$ over candidate tests, number of rounds $r$, entropy value $EV$, an entropy threshold $E_{th}>0$, number of iterations $iterNum$.\n \STATE {\bfseries Initialize:} $EV := EntropyLearn(D,T)$.\n \FOR{$i=1$ {\bfseries to} $r$}\n \FOR{$j=1$ {\bfseries to} $iterNum$}\n \STATE {Sample a set $T_s$ of $n$ tests, $T_s \cap T = \emptyset$, according to $p_T$}\n \STATE {$T^{'}$ $\leftarrow$ a set of $n$ tests in $T$}\n \STATE {$EV_T := EntropyLearn(D,T \setminus T^{'} \cup T_s)$}\n \IF{$EV - EV_T > E_{th}$}\n \STATE $EV := EV_T$\n \STATE $T := T \setminus T^{'} \cup T_s$\n \STATE {\bfseries break}\n \ENDIF\n \ENDFOR\n \ENDFOR\n \STATE {\bfseries Output:} $T$\n\end{algorithmic}\n\end{algorithm}\n\n\n\section{Experimental Results}\nWe evaluated the proposed technique on {\em PocMan}~\cite{Silver10monte-carloplanning,JMLR:hamilton14}, a partially observable version of the video game {\em PacMan}. The environment is extremely large: it has 4 actions, 1024 observations and an extremely large number of states~(up to $10^{56}$).\n\n\subsection{Experimental Setting}\n\noindent{\bf Evaluated Methods}.\nWe compared our approach~(Entropy-based) with the bound-based approach~\cite{kulesza2015spectral}, which is the technique most closely related to ours, as both approaches address the same basis selection problem. Both approaches first select a set of tests, and then the spectral method is applied using these tests. 
Also, iterative procedures are used in both approaches for selecting the sets of tests. The main difference between our approach and the work of~\cite{kulesza2015spectral} is the guidance used for selecting the set of tests. While our approach uses the model entropy as the guidance, the key idea in~\cite{kulesza2015spectral} is that, in the limiting case, the singular values of the learned transformed predictive state representation~(TPSR) parameters are bounded; under the assumption that the smaller the singular values of the learned parameters are, the more accurate the learned PSR model will be, they select the tests that lead to smaller singular values of the learned PSR parameters. However, no formal guarantees are provided for that approach.\n\nA uniformly randomly generated sequence of length $200,000$ was used as the training sequence for both approaches. To calculate the entropy of a candidate set of tests $T_c$, a randomly generated 5,000-length action-observation sequence $d$ was used. For each test $t \in T_c$, $p(t|\cdot)$ was estimated by executing $act(t)$ 100 times, where $act(t)$ is the action sequence of test $t$. The entropy thresholds for our approach are 0.06, 0.04, and 0.02 for different numbers of tests, as the entropy value usually decreases as the number of tests increases~(for 100 and 150 tests, the threshold used is 0.06; for 200 and 250 tests, the threshold is 0.04; and for 300 tests, the threshold is 0.02). The number of rounds for both approaches is 10. In each round, we iteratively sample a set of 20 new tests and use it to replace 20 tests in $T$; the number of iterations in each round is also 10. 
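The replacement loop of Algorithm 1 used in these experiments can be sketched as follows. The entropy oracle here is a made-up stand-in for `EntropyLearn` (it simply rewards a hypothetical "informative" subset of tests), and all sizes are illustrative, not the paper's settings.

```python
import random

def local_search(T, candidates, entropy, rounds=10, iters=10, n=20, thresh=0.04):
    # Local search (sketch of Algorithm 1): swap n tests at a time and
    # keep a swap only if it lowers the entropy by more than `thresh`.
    ev = entropy(T)
    for _ in range(rounds):
        for _ in range(iters):
            pool = [t for t in candidates if t not in T]
            T_s = set(random.sample(pool, n))        # new tests, disjoint from T
            T_out = set(random.sample(sorted(T), n)) # tests to replace
            T_new = (T - T_out) | T_s
            ev_new = entropy(T_new)
            if ev - ev_new > thresh:                 # accept the reduction
                T, ev = T_new, ev_new
                break                                # go to the next round
    return T, ev

# Toy entropy oracle (hypothetical): more "informative" tests -> lower entropy.
random.seed(3)
informative = set(range(30))
def toy_entropy(T):
    return 5.0 - 0.1 * len(T & informative)

T0 = set(range(100, 200))  # initial basis containing no informative tests
T_best, ev_best = local_search(T0, list(range(300)), toy_entropy, n=5, thresh=0.01)
```

The set size is invariant under each swap, and the entropy value can only decrease, mirroring the acceptance rule in the algorithm.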
After a fixed number of rounds, we stop and return the current $T$.\n\n\noindent{\bf Performance Measurements.}\nWe evaluated the learned models in terms of prediction accuracy, which is measured by the difference between the true predictions and the predictions given by the learned model over a test data sequence~(for {\em PocMan}, we cannot obtain the true predictions, so Monte-Carlo rollout predictions were used~\cite{JMLR:hamilton14}).\n\nTwo error functions were used in the measurement. One is the average one-step prediction error per time step on the test sequence, as shown in Eq.~\ref{eqn:f1}:\n\n\begin{equation}\n\frac{1}{L} \sum_{t=1}^L |p(o_{t+1}|h_t,a_{t+1})-\hat{p}(o_{t+1}|h_t,a_{t+1})|.\n\label{eqn:f1}\n\end{equation}\nHere $p(\cdot)$ is the probability calculated from the true POMDP model or the Monte-Carlo rollout prediction, $\hat{p}(\cdot)$ is the probability obtained from the learned model, $|\cdot|$ denotes the absolute value, and $L$ is the length of the test sequence used in the experiments. For the average four-step prediction error, the same equation with the four-step-ahead prediction $\hat{p}(o_{t+1}o_{t+2}o_{t+3}o_{t+4}|h_ta_{t+1}a_{t+2}a_{t+3}a_{t+4})$ was used.\n\n\subsection{Performance Evaluation}\n\nIn this section, for each algorithm, we report the performance results as the mean error over 10 trials. For each trial, a uniformly randomly generated test sequence of length $L=20,000$ was used for testing the accuracy of the learned model.\n\nTwo kinds of experiments were conducted. The first experiment evaluates the prediction performance with the number of tests fixed at 100, running Algorithm~\ref{Algo:selecting} for 10 rounds. 
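The error measure of Eq.~(\ref{eqn:f1}) can be sketched as a short function; the example inputs are made-up probabilities, not experimental values.

```python
import numpy as np

def one_step_error(p_true, p_hat):
    # Average one-step prediction error over a length-L test sequence:
    # the mean absolute difference between the next-observation
    # probabilities under the true and learned models.
    p_true, p_hat = np.asarray(p_true), np.asarray(p_hat)
    return np.abs(p_true - p_hat).mean()
```

The four-step variant is identical except that the per-step quantities are the probabilities of the next four observations given the next four actions.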
For each round, we report the average one-step and four-step prediction errors for both approaches; the results are shown in Fig.~\ref{fig:fix}. The number at each point in Fig.~\ref{fig:fix} is the average entropy~(for our approach) or the average largest singular value~(for the bound-based approach) in the corresponding round, which also shows that for the entropy-based approach a higher entropy value results in a lower prediction accuracy, while there is no relevance between the singular values and the prediction accuracy. The second experiment evaluates the prediction performance with varying numbers of tests. For each number of tests, both approaches were run for 10 rounds, and the final one-step and four-step errors after 10 rounds are reported in Fig.~\ref{fig:vary}. For this experiment, we also report the initial results~(Initial) without the replacement of tests.\n\n\begin{figure*}[!ht]\n\begin{minipage}{6.0in}\n\begin{minipage}{3in}\n\includegraphics[width=0.8\textwidth]{pocmaniterone.pdf}\n\centerline{{\scriptsize $(a)$}}\n\end{minipage}\n\begin{minipage}{3in}\n\includegraphics[width=0.8\textwidth]{pocmaniterfour.pdf}\n\centerline{{\scriptsize $(b)$}}\n\end{minipage}\n\end{minipage}\n\caption{{\small ($a$) One-step; ($b$) four-step prediction error over 10 rounds for a fixed number of tests}}\n\label{fig:fix}\n\end{figure*}\n\n\begin{figure*}[!ht]\n\begin{minipage}{6.0in}\n\begin{minipage}{3in}\n\includegraphics[width=0.8\textwidth]{pocmantestone.pdf}\n\centerline{{\scriptsize $(a)$}}\n\end{minipage}\n\begin{minipage}{3in}\n\includegraphics[width=0.8\textwidth]{pocmantestfour.pdf}\n\centerline{{\scriptsize $(b)$}}\n\end{minipage}\n\end{minipage}\n\caption{{\small ($a$) One-step; ($b$) four-step prediction error for different numbers of tests}}\n\label{fig:vary}\n\end{figure*}\n\nAs can be seen from Figs.~\ref{fig:fix} and~\ref{fig:vary}, both for the one-step and four-step predictions, and both for the 
two kinds of experiments, in all cases our algorithm performs very well and outperforms the bound-based approach. Considering that the error reported is only the prediction error for one time step, the improvement in prediction accuracy over a long sequence is remarkable. Meanwhile, for the first kind of experiment, as the number of rounds increases, our algorithm reduces its prediction error, while the bound-based approach does not improve its performance with increasing rounds and is very unstable. The experimental results also show that as the number of tests increases, the prediction accuracy of the obtained PSR model increases for all approaches, which demonstrates the importance of including more tests in the learning of the models.\n\n\section{Conclusion}\n\nHow to choose the basis is a very important problem in spectral learning of PSRs. However, until now, there has been very little work that addresses this issue successfully. In this paper, by introducing the model entropy for measuring the model accuracy and showing the close relevance between the entropy and the model accuracy, we propose an entropy-based basis selection strategy for spectral learning of PSRs. Several experiments were conducted on the PocMan environment, and the results show that, compared to the state-of-the-art bound-based approach, our technique is more stable and achieves much better performance.\n\n\acks{This work was supported by the National Natural Science Foundation of China (No. 61375077).}\n\n\section{Introduction}\n\label{intro}\nSerre \cite{Serre} developed the theory of $p$-adic and mod $p$ modular\nforms and produced several interesting results.\nIn his theory, the Ramanujan operator\n$\theta :f=\sum a_nq^n \longmapsto \theta (f):=\sum n\,a_nq^n$\nplayed an important role. The notion of this operator was extended\nto the case of Siegel modular forms. 
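The one-variable Ramanujan operator can be sketched on truncated $q$-expansions. The series below is a toy example of ours (not a modular form), chosen so that every exponent with a nonzero coefficient is divisible by $p$; such a series visibly lies in the mod $p$ kernel of $\theta$.

```python
def theta(coeffs):
    # Ramanujan operator on a truncated q-expansion [a_0, a_1, ...]:
    # theta(sum a_n q^n) = sum n * a_n q^n
    return [n * a for n, a in enumerate(coeffs)]

p = 5
# toy series supported on exponents divisible by p (illustration only)
f = [1 if n % p == 0 else 0 for n in range(20)]
tf = [c % p for c in theta(f)]   # every coefficient n * a_n vanishes mod p
```

The same mechanism, with $\det(T)$ in place of $n$, underlies the mod $p$ kernel phenomena for the generalized operator $\varTheta$ studied below.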
The theta operator (generalized\nRamanujan operator) on Siegel modular forms is defined by\n$$\n\varTheta : F=\sum_T a(T)q^T \longmapsto \n\varTheta (F):=\sum_T\text{det}(T)\cdot a(T)q^T,\n$$\nwhere $F=\sum a(T)q^T$ is the Fourier expansion (generalized $q$-expansion)\nof $F$.\n\nFor a prime number $p$, the theta operator acts on the algebra of mod $p$\nSiegel modular forms (cf. B\"{o}cherer-Nagaoka \cite{B-N}). In our study, we found Siegel\nmodular forms $F$ which satisfy the property\n$$\n\varTheta (F) \equiv 0 \pmod{p}.\n$$\nThe space consisting of such Siegel modular forms is called the {\it mod $p$\nkernel of the theta operator}. In this terminology, we can say that the Igusa cusp form\nof weight 35 is an element of the mod 23 kernel of the \ntheta operator (cf. Kikuta-Kodama-Nagaoka \cite{K-K-N}). Moreover, the theta series attached to \nthe Leech lattice is also in the mod 23 kernel of the theta operator \n(cf. Nagaoka-Takemori \cite{N-T}).\n\nThe main purpose of this paper is to extend the notion of the theta operator\nto the case of Hermitian modular forms and to give some examples of Hermitian \nmodular forms which are in the mod $p$ kernel of the theta operator.\nThe first half concerns the Eisenstein series. Let \n$\Gamma^2(\mathcal{O}_{\boldsymbol{K}})$ be the Hermitian modular group\nof degree 2 with respect to an imaginary quadratic number field $\boldsymbol{K}$.\nKrieg \cite{Krieg} constructed a weight $k$ Hermitian modular form \n$F_{k,\boldsymbol{K}}$ which coincides with the weight $k$ Eisenstein series\n$E_{k,\boldsymbol{K}}^{(2)}$ for $\Gamma^2(\mathcal{O}_{\boldsymbol{K}})$.\n\nThe first result says that the Hermitian modular form $F_{p+1,\boldsymbol{K}}$\nis in the mod $p$ kernel of the theta operator under some condition\non $p$, namely\n$$\n\varTheta (F_{p+1,\boldsymbol{K}}) \equiv 0\pmod{p}\n\quad (\text{cf. 
Theorem \\ref{main1}}).\n$$\nAs a corollary, we can show that the weight $p+1$ Hermitian Eisenstein\nseries $E_{p+1,\\boldsymbol{K}}^{(2)}$ satisfies\n$$\n\\varTheta( E_{p+1,\\boldsymbol{K}}^{(2)}) \\equiv 0 \\pmod{p}\n$$\nif the class number of $\\boldsymbol{K}$ equals one.\n\nIn the remaining part, we give various examples which are in the mod $p$\nkernel of the theta operator. The first example we show is related to the\ntheta series attached to positive definite, even unimodular Hermitian lattice \nover the Gaussian field.\nLet $\\mathcal{L}$ be a positive definite, even unimodular Hermitian lattice of rank\n$r$ with the Gram matrix $H$. We denote the corresponding Hermitian theta series\nof degree $n$ by $\\vartheta_{\\mathcal{L}}^{(n)}=\\vartheta_H^{(n)}$. It is known\nthat the rank $r$ is divisible by 4 and $\\vartheta_{\\mathcal{L}}^{(n)}$ becomes a\nHermitian modular form of weight $r$. \nIn the case $r=12$, we have a positive definite, even integral Hermitian lattice\n$\\mathcal{L}_{\\mathbb{C}}$ of rank 12, which does not have any vector of\nlength one. In this paper we call it the Hermitian Leech lattice. \nThe theta series attached to $\\mathcal{L}_{\\mathbb{C}}$ satisfies\n$$\n\\varTheta (\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}) \\equiv\n0 \\pmod{11}\n\\quad (\\text{cf. Theorem \\ref{mod11cong}}).\n$$\n\nThe next example is connected with the Hermitian theta constant. \nLet $\\mathcal{E}$ be a set of mod 2 even characteristics of degree 2 (cf. $\\S$ \\ref{thetaconstant}).\nWe consider the theta constant $\\theta_{\\boldsymbol{m}}$\\,$(\\boldsymbol{m}\\in\\mathcal{E})$.\nIt is known that the function\n$$\n\\psi_{4k}:=\\frac{1}{4}\\sum_{\\boldsymbol{m}\\in\\mathcal{E}}\\theta_{\\boldsymbol{m}}^{4k}\n$$\ndefines a Hermitian modular form of weight $4k$ (cf. Freitag \\cite{Freitag}).\nThe final result can be stated as\n$$\n\\varTheta (\\psi_8) \\equiv 0 \\pmod{7},\\qquad\n\\varTheta (\\psi_{12}) \\equiv 0 \\pmod{11}\n\\quad (\\text{cf. 
Theorem \\ref{thetaconstantth}}).\n$$\n\nOur proof is based on the fact that the image of a weight $k$ modular form of the theta\noperator is congruent to a weight $k+p+1$ cusp form mod $p$ (cf. Theorem 2.7) and then\nwe use the Sturm bound (Corollary 2.6).\n\\section{Hermitian modular forms}\n\\label{Sect.2}\n\\subsection{Notation and definition}\n\\label{Sect.2.1}\nThe Hermitian upper half-space of degree $n$ is defined by\n$$\n\\mathbb{H}_n:=\\{\\,Z\\in Mat_n(\\mathbb{C})\\,\\mid\\, \\frac{1}{2i}(Z-{}^tZ)>0\\,\\},\n$$\nwhere ${}^t\\overline{Z}$ is the transposed complex conjugate of $Z$.\nThe space $\\mathbb{H}_n$ contains the Siegel upper-half space of degree\n$n$\n$$\n\\mathbb{S}_n:=\\mathbb{H}_n\\cap {\\rm Sym}_n(\\mathbb{C}).\n$$\nLet $\\boldsymbol{K}$ be an imaginary quadratic number field with discriminant\n$d_{\\boldsymbol{K}}$ and ring of integers $\\mathcal{O}_{\\boldsymbol{K}}$.\nThe Hermitian modular group\n$$\n\\Gamma^n(\\mathcal{O}_{\\boldsymbol{K}}):=\\left\\{\\, M\\in\n{\\rm Mat}_{2n}(\\mathcal{O}_{\\boldsymbol{K}})\n\\,\\mid\\,\n{}^t\\overline{M}J_nM=J_n:=\n\\begin{pmatrix}0 & -1_n \\\\ 1_n & 0\\end{pmatrix}\\,\\right\\}\n$$\nacts on $\\mathbb{H}_n$ by fractional transformation\n$$\n\\mathbb{H}_n\\ni Z\\longmapsto M\\langle Z\\rangle:=\n(AZ+B)(CZ+D)^{-1},\\;\nM=\\begin{pmatrix}A&B\\\\ C&D\\end{pmatrix}\n\\in \\Gamma^n(\\mathcal{O}_{\\boldsymbol{K}}).\n$$\n\nLet $\\Gamma \\subset \\Gamma^n(\\mathcal{O}_{\\boldsymbol{K}})$ be a\nsubgroup of finite index and $\\nu_k$\\,$(k\\in\\mathbb{Z})$ an abelian character\nof $\\Gamma$ satisfying $\\nu_k\\cdot\\nu_{k'=}\\nu_{k+k'}$.\nWe denote by $M_k(\\Gamma,\\nu_k)$ the space of Hermitian modular forms\nof weight $k$ and character $\\nu_k$ with respect to $\\Gamma$. Namely\nit consists of holomorphic functions $F:\\mathbb{H}_n\\longrightarrow\\mathbb{C}$\nsatisfying\n$$\nF\\mid_kM(Z):={\\rm det}(CZ+D)^{-k}F(M\\langle Z\\rangle)=\\nu_k(M)\\cdot F(Z)\n$$\nfor all $M=\\binom{*\\,*}{CD}\\in \\Gamma$. 
When\n$\\nu_k$ is trivial, we write it by $M_k(\\Gamma)$ simply. The subspace\n$S_k(\\Gamma,\\nu_k)$ of cusp forms is characterized by the condition\n$$\n\\Phi\\left(F\\mid_k\\begin{pmatrix}{}^t\\overline{U}&0\\\\ 0& U\\end{pmatrix}\\right)\n\\equiv 0,\\quad\n(U\\in GL_n(\\mathcal{O}_{\\boldsymbol{K}})),\n$$\nwhere $\\Phi$ is the Siegel operator. A modular form $F\\in M_k(\\Gamma,\\nu_k)$\nis called {\\it symmetric} if\n$$\nF({}^t\\!Z)=F(Z).\n$$\nWe denote by $M_k(\\Gamma,\\nu_k)^{\\rm sym}$ the subspace consisting of symmetric\nmodular forms. Moreover\n$$\nS_k(\\Gamma,\\nu)^{\\rm sym}:=M_k(\\Gamma,\\nu_k)^{\\rm sym}\\cap\nS_k(\\Gamma,\\nu_k).\n$$\nIf $F\\in M_k(\\Gamma,\\nu_k)$ satisfies the condition\n$$\nF(Z+B)=F(Z)\\qquad {\\rm for\\; all} \\;B\\in Her_n(\\mathcal{O}_{\\boldsymbol{K}}),\n$$\nthen $F$ has a Fourier expansion of the form\n\\begin{equation}\n\\label{Fourier}\nF(Z)\n=\\sum_{0\\leq H\\in\\Lambda_n(\\boldsymbol{K})}\na(F;H)\\text{exp}(2\\pi i\\text{tr}(HZ)),\n\\end{equation}\nwhere\n$$\n\\Lambda_n(\\boldsymbol{K}):=\\{\\,H=(h_{jl})\\in Her_n(\\boldsymbol{K})\\,\\mid\\,\nh_{jj}\\in\\mathbb{Z},\\;\\sqrt{d_{\\boldsymbol{K}}}\\,h_{jl}\\in\\mathcal{O}_{\\boldsymbol{K}}\\,\\}.\n$$\n\nWe assume that any $F\\in M_k(\\Gamma,\\nu_k)$ has the Fourier expansion above.\nFor any subring $R\\subset\\mathbb{C}$, we write as\n$$\nM_k(\\Gamma,\\nu_k)_R:=\\{\\,F\\in M_k(\\Gamma,\\nu_k)\\,\\mid\\,\na(F;H)\\in R\\;\\;(\\forall H\\in\\Lambda_n(\\boldsymbol{K}))\\,\\}.\n$$\nIn the Fourier expansion (\\ref{Fourier}), we use the abbreviation\n$$\n\\boldsymbol{q}^H:=\\text{exp}(2\\pi i\\text{tr}(HZ)).\n$$\nThe generalized $\\boldsymbol{q}$-expansion $F=\\sum a(F;H)\\boldsymbol{q}^H$\ncan be considered as an element in a formal power series ring\n$\\mathbb{C}[\\![\\boldsymbol{q}]\\!]$ (cf. 
Munemoto-Nagaoka \cite{M-N}, p.248), from which we have\n$$\nM_k(\Gamma,\nu_k)_R \subset R[\![\boldsymbol{q}]\!].\n$$\n\nLet $p$ be a prime number and $\mathbb{Z}_{(p)}$ the local ring at $p$, namely, the\nring of $p$-integral rational numbers. For \n$F_i\in M_{k_i}(\Gamma,\nu_{k_i})_{\mathbb{Z}_{(p)}}$\,\n($i=1,2$), we write $F_1 \equiv F_2 \pmod{p}$ when\n$$\na(F_1;H) \equiv a(F_2;H) \pmod{p}\n$$\nfor all $H\in\Lambda_n(\boldsymbol{K})$.\n\subsection{Hermitian modular forms of degree 2}\n\label{Sect.2.2}\nIn the rest of this paper, we deal with Hermitian modular forms of degree 2.\n\subsubsection{Eisenstein series}\n\label{Sect.2.2.1}\nWe consider the Hermitian Eisenstein series of degree 2:\n$$\nE_{k,\boldsymbol{K}}^{(2)}(Z)\n:=\sum_{M=\binom{*\,*}{CD}}\text{det}^{k\/2}(M)\,\text{det}(CZ+D)^{-k},\n\quad Z\in \mathbb{H}_2,\n$$\nwhere $k>4$ is even and $M=\binom{*\,*}{CD}$ runs over a set\nof representatives of \n$\left\{\binom{*\,*}{0\,*}\right\}\backslash \Gamma^2(\mathcal{O}_{\boldsymbol{K}})$.\nThen\n$$\nE_{k,\boldsymbol{K}}^{(2)}\in\nM_k(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{-k\/2})_{\mathbb{Q}}^{\text{sym}}.\n$$\nMoreover, $E_{4,\boldsymbol{K}}^{(2)}\in\nM_4(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{-2})_{\mathbb{Q}}^{\text{sym}}$ is constructed by the Maass\nlift (Krieg \cite{Krieg}). For an even integer $k\geq 4$, $E_k^{(1)}=\Phi (E_{k,\boldsymbol{K}}^{(2)})$\nis the normalized Eisenstein series of weight $k$ for $SL_2(\mathbb{Z})$.\n\subsubsection{Structure of the graded ring in the case \n$\boldsymbol{K}=\mathbb{Q}(i)$}\n\label{Sect.2.2.2}\nIn this section, we assume that $\boldsymbol{K}=\mathbb{Q}(i)$. 
In \cite{K-N},\nthe authors defined some Hermitian cusp forms\n$$\n\chi_8\in S_8(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}}^{\text{sym}},\nF_{10}\in S_{10}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^5)_{\mathbb{Z}}^{\text{sym}},\nF_{12}\in S_{12}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}}^{\text{sym}},\n$$\ncharacterized by\n$$\n\chi_8\mid_{\mathbb{S}_2}\equiv 0,\quad\nF_{10}\mid_{\mathbb{S}_2}=6X_{10},\quad\nF_{12}\mid_{\mathbb{S}_2}=X_{12},\n$$\nwhere $X_k$\,($k=10,12$) is Igusa's Siegel cusp form of weight $k$ \nwith integral Fourier coefficients (cf. Kikuta-Nagaoka \cite{K-N}).\n\begin{Thm}\n\label{structure}\nLet $\boldsymbol{K}=\mathbb{Q}(i)$. The graded ring\n$$\n\bigoplus_{k\in\mathbb{Z}}\nM_k(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{k\/2})^{\text{sym}}\n$$\nis generated by\n$$\nE_{4,\boldsymbol{K}}^{(2)},\quad\nE_{6,\boldsymbol{K}}^{(2)},\quad\n\chi_8,\quad\nF_{10}, \;\;\text{and}\;\;\nF_{12}.\n$$\n\end{Thm}\nFor the proof, we should consult, for example, Kikuta-Nagaoka \cite{K-N}.\n\begin{Rem}\n$E_{4,\boldsymbol{K}}^{(2)}\in M_4(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}}^{\text{sym}} $,\n$E_{6,\boldsymbol{K}}^{(2)}\in M_6(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{3})_{\mathbb{Z}}^{\text{sym}} $.\n\end{Rem}\n\subsubsection{Sturm bound}\n\label{Sect.2.2.3}\nSturm gave a condition for the $p$-divisibility of the Fourier coefficients of a modular form. \nLater the bound was studied in the case of modular forms in several variables.\nFor example, the first author \cite{C-C-K} studied the bound for Hermitian modular\nforms of degree 2 with respect to \n$\boldsymbol{K}=\mathbb{Q}(i)$ and $\mathbb{Q}(\sqrt{3}\,i)$. However, the statement\nis incorrect. 
We correct it here.\n\nWe assume that $\boldsymbol{K}=\mathbb{Q}(i)$ and use the abbreviation\n\begin{equation}\n\label{abb}\n[m,a+bi,n]:=\begin{pmatrix}m & \frac{a+bi}{2}\\ \frac{a-bi}{2} & n \end{pmatrix}\n\in\Lambda_2(\boldsymbol{K}).\n\end{equation}\nWe define a lexicographic order for distinct elements\n$$\nH=[m,a+bi,n],\quad H'=[m',a'+b'i,n']\n$$\nof $\Lambda_2(\boldsymbol{K})$ by\n\begin{align*}\nH \succ H'\quad\Longleftrightarrow\quad & (1)\; \text{tr}(H) > \text{tr}(H')\quad \text{or}\\\n & (2)\; \text{tr}(H)=\text{tr}(H'),\;m>m'\quad \text{or}\\\n & (3)\; \text{tr}(H)=\text{tr}(H'),\;m=m',\;a>a'\quad \text{or}\\\n & (4)\; \text{tr}(H)=\text{tr}(H'),\;m=m',\;a=a',\;b>b'.\n\end{align*}\nLet $p$ be a prime number and \n$F\in M_k(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}_{(p)}}$.\nWe define the order of $F$ by\n$$\n\text{ord}_p(F):=\text{min}\{\,H\in\Lambda_2(\boldsymbol{K})\,\mid\,\na(F;H)\not\equiv 0 \pmod{p}\,\},\n$$\nwhere the ``minimum'' is defined in the sense of the above order. If\n$F \equiv 0 \pmod{p}$, then we define $\text{ord}_p(F)=(\infty)$. 
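The four clauses above are just lexicographic comparison of the tuple $(\text{tr}(H),m,a,b)$, so the order can be checked mechanically. The following Python sketch is our own illustration (the tuple encoding of $[m,a+bi,n]$ is an assumption, not notation from the text):

```python
# Encode H = [m, a+bi, n] in Lambda_2(Q(i)) as the tuple (m, a, b, n).
# Clauses (1)-(4) compare tr(H) = m + n, then m, then a, then b; this is
# exactly lexicographic comparison of the key returned below.
def order_key(H):
    m, a, b, n = H
    return (m + n, m, a, b)

def succeeds(H, Hp):
    """True iff H comes strictly after H', i.e. H > H' in this order."""
    return order_key(H) > order_key(Hp)

elems = [(1, 1, 1, 1), (1, 0, 0, 1), (2, 0, 0, 0), (1, 1, 0, 1)]
print(sorted(elems, key=order_key))
# [(1, 0, 0, 1), (1, 1, 0, 1), (1, 1, 1, 1), (2, 0, 0, 0)]
```

With such a key, $\text{ord}_p(F)$ is the minimum, in this order, over the $H$ with $a(F;H)\not\equiv 0 \pmod{p}$.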
In the case of\n$\boldsymbol{K}=\mathbb{Q}(\sqrt{3}\,i)$, we can also define the order in a similar\nway.\n\nIt is easy to check that\n\begin{Lem}\n\label{order}\nThe following equality holds:\n$$\n{\rm ord}_p(FG)={\rm ord}_p(F)+{\rm ord}_p(G).\n$$\n\end{Lem}\nThen we have\n\begin{Thm}\n\label{sturm}\nLet $k$ be an even integer and $p$ a prime number with $p\geq 5$.\nLet $\boldsymbol{K}=\mathbb{Q}(i)$ or $\boldsymbol{K}=\mathbb{Q}(\sqrt{3}\,i)$.\nFor $F\in M_k(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\nu_k)_{\mathbb{Z}_{(p)}}^{\text{sym}}$,\nassume that\n$$\n{\rm ord}_p(F)\succ\n\begin{cases}\n\displaystyle\n\left[ \left[\frac{k}{8}\right],2 \left[\frac{k}{8}\right], \left[\frac{k}{8}\right]\right] & \n\text{if $\boldsymbol{K}=\mathbb{Q}(i)$}\n\vspace{2mm}\n\\\n\displaystyle\n\left[\left[\frac{k}{9}\right],2\left[\frac{k}{9}\right], \left[\frac{k}{9}\right]\right] & \n\text{if $\boldsymbol{K}=\mathbb{Q}(\sqrt{3}\,i)$}\n\end{cases}.\n$$\nThen we have ${\rm ord}_p(F)=(\infty)$, i.e., $F \equiv 0\pmod{p}$.\nHere\n$$\n\nu_k=\n\begin{cases}\n{\rm det}^{k\/2} & \text{if $\boldsymbol{K}=\mathbb{Q}(i)$}\\\n{\rm det}^{k} & \text{if $\boldsymbol{K}=\mathbb{Q}(\sqrt{3}\,i)$}\n\end{cases}\n$$\nand $[x]$ inside the brackets denotes the greatest integer $\leq x$.\n\end{Thm}\n\begin{Rem}\nFor \n$\boldsymbol{K}=\mathbb{Q}(i)$ (resp. $\boldsymbol{K}=\mathbb{Q}(\sqrt{3}\,i)$)\nthe matrix $\left[ [\frac{k}{8}],2[\frac{k}{8}],[\frac{k}{8}] \right]$\;\n$\left(\text{resp.} \left[ [\frac{k}{9}],2[\frac{k}{9}],[\frac{k}{9}] \right]\right)$\nis the maximum of the elements in $\Lambda_2(\boldsymbol{K})$ of the form\n$\left[ [\frac{k}{8}],*,[\frac{k}{8}] \right]\geq 0$\;\n(resp. $\left[ [\frac{k}{9}],*,[\frac{k}{9}] \right]\geq 0$).\n\end{Rem}\n\begin{proof} {\it of Theorem \ref{sturm}}\nLet $\boldsymbol{K}=\mathbb{Q}(i)$. 
We use the induction\non the weight $k$. \nWe can confirm that it is true for small $k$.\nSuppose that the statement is true for any $k$ with $kr$, then\n$F$ is called a {\it mod $p$ singular Hermitian modular form} (e.g. cf. B\\"{o}cherer-Kikuta \cite{B-K}). \nIt is obvious that,\nif $F$ is a mod $p$ singular Hermitian modular form, then $F$ is an element of the\nmod $p$ kernel of the theta operator.\n\nThe main purpose of this paper is to give some examples of Hermitian modular\nforms in the mod $p$ kernel of the theta operator in the case $n=2$.\n\subsubsection{Basic property of theta operator}\n\label{Sect.2.3.1}\nAs we stated above, the image $\varTheta (F)$ is not necessarily a Hermitian\nmodular form. However, the following result holds:\n\begin{Thm}\n\label{thetamodp}\nAssume that $\boldsymbol{K}=\mathbb{Q}(i)$ and $p$ is a prime number such\nthat $p\geq 5$. For any \n$F\in M_k(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),{\rm det}^{k\/2})_{\mathbb{Z}_{(p)}}^{\text{sym}}$, there is a cusp form\n$$\nG\in S_{k+p+1}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),{\rm det}^{(k+p+1)\/2})_{\mathbb{Z}_{(p)}}^{{\rm sym}}\n$$ \nsuch that\n$$\n\varTheta (F) \equiv G \pmod{p}.\n$$\n\end{Thm}\n\begin{proof}\nA corresponding statement in the case of Siegel modular forms can be found\nin B\\"{o}cherer-Nagaoka \cite{B-N}, Theorem 4. The proof here follows the same lines.\nWe consider the normalized Rankin-Cohen bracket $[F_1,F_2]$ for\n$F_i\in M_{k_i}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{k_i\/2})_{\mathbb{Z}_{(p)}}^{\text{sym}}$\\\n$(i=1,2)$\n(e.g. cf. Martin-Senadheera \cite{M-S}). 
We can show that $[F_1,F_2]$ becomes\na Hermitian cusp form\n$$\n[F_1,F_2]\in S_{k_1+k_2+2}\n(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{(k_1+k_2+2)\/2})\n_{\mathbb{Z}_{(p)}}^{\text{sym}}.\n$$\nIf $p$ is a prime number such that $p\geq 5$, then there is a Hermitian\nmodular form $G_{p-1}\in M_{p-1}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{(p-1)\/2})_{\mathbb{Z}_{(p)}}^{\text{sym}}$ \nsuch that\n$$\nG_{p-1} \equiv 1 \pmod{p}\quad (\text{cf. {\rm Kikuta-Nagaoka} \cite{K-N}, Proposition 5}).\n$$\nFor a given $F\in M_k(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{k\/2})_{\mathbb{Z}_{(p)}}^{\text{sym}}$, we have\n$$\n\varTheta (F) \equiv [F,G_{p-1}] \pmod{p}.\n$$\nHence we may put\n$$\nG:=[F,G_{p-1}]\in S_{k+p+1}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}),\text{det}^{(k+p+1)\/2})_{\mathbb{Z}_{(p)}}^{\text{sym}}.\n$$\n\end{proof}\n\begin{Ex}\n\label{ex.1}\n$$\n\varTheta (E_{4,\boldsymbol{K}}^{(2)}) \equiv 5E_{4,\boldsymbol{K}}^{(2)}\cdot \chi_8+2F_{12}\n\pmod{7}.\n$$\n\end{Ex}\n\section{Eisenstein case}\n\label{Sect.3}\nIn this section we deal with the Hermitian modular forms related to the Eisenstein series\nof degree 2.\n\subsection{Krieg's result}\n\label{Sect.3.1}\nWe denote by $h_{\boldsymbol{K}}$ the class number of $\boldsymbol{K}$ and\n$w_{\boldsymbol{K}}$ the order of the unit group of $\boldsymbol{K}$.\n\nGiven a prime $q$ dividing $D_{\boldsymbol{K}}:=-d_{\boldsymbol{K}}$, define the $q$-factor $\chi_q$ of \n$\chi_{\boldsymbol{K}}$ (cf. Miyake \cite{Miyake}, p.80). Then $\chi_{\boldsymbol{K}}$ can be\ndecomposed as\n$$\n\chi_{\boldsymbol{K}}=\prod_{q\mid D_{\boldsymbol{K}}}\chi_q.\n$$\nWe set\n$$\na_{D_{\boldsymbol{K}}}(\ell):=\prod_{q\mid D_{\boldsymbol{K}}}(1+\chi_q(-\ell)).\n$$\nLet $D_{\boldsymbol{K}}=mn$ with coprime $m$,\,$n$. 
We set\n$$\n\psi_m:=\prod_{\substack{q:\text{prime}\\ q\mid m}}\chi_q,\qquad\n\psi_1:=1.\n$$\nFor $H\in\Lambda_2(\boldsymbol{K})$ with $H\ne O_2$, we define\n$$\n\varepsilon (H):=\text{max}\{ \ell\in\mathbb{N}\,\mid\, \ell^{-1}H\in\n\Lambda_2(\boldsymbol{K})\}.\n$$\nKrieg's result is stated as follows:\n\begin{Thm}\n\label{Krieg}\n{\rm (Krieg \cite{Krieg})}\;\nAssume that $k \equiv 0 \pmod{w_{\boldsymbol{K}}}$ and $k>4$.\nThen there exists a modular form\n$F_{k,\boldsymbol{K}}\in M_k(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Q}}^{{\rm sym}}$\nwhose Fourier coefficient $a(F_{k,\boldsymbol{K}};H)$ is given by\n$$\na(F_{k,\boldsymbol{K}};H)=\n\begin{cases}\n\displaystyle\frac{4k(k-1)}{B_k\cdot B_{k-1,\chi_{\boldsymbol{K}}}}\n\sum_{00,\\\n\displaystyle\n-\frac{2k}{B_k}\sum_{03$ is a prime number such that $\chi_{\boldsymbol{K}}(p)=-1$ and\n$h_{\boldsymbol{K}} \not\equiv 0 \pmod{p}$, then\n$$\n\varTheta (F_{p+1,\boldsymbol{K}}) \equiv 0 \pmod{p}.\n$$\n\end{Thm}\n\begin{Cor}\n\label{cor1}\nAssume that $h_{\boldsymbol{K}}=1$ and $p>3$ is a prime number such\nthat $\chi_{\boldsymbol{K}}(p)=-1$.\nThen the weight $p+1$ Hermitian Eisenstein series $E_{p+1,\boldsymbol{K}}^{(2)}$ satisfies\n$$\n\varTheta (E_{p+1,\boldsymbol{K}}^{(2)}) \equiv 0 \pmod{p}.\n$$\n\end{Cor}\n\begin{Rem}\n{\rm (1)}\;\nIn the above theorem, the weight condition $k=p+1 \equiv 0 \pmod{w_{\boldsymbol{K}}}$\nis automatically satisfied.\nIn fact, \nin the case $\boldsymbol{K}=\mathbb{Q}(i)$, the condition $\chi_{\boldsymbol{K}}(p)=-1$\nimplies $p \equiv 3 \pmod{4}$. Then $p+1 \equiv 0 \pmod{4}$. In the case \n$\boldsymbol{K}=\mathbb{Q}(\sqrt{3}\,i)$, it follows from \nthe condition $\chi_{\boldsymbol{K}}(p)=-1$ that $p \equiv -1 \pmod{3}$. Since $p$ is odd,\nwe have $p+1 \equiv 0 \pmod{6}$. 
Since $w_{\boldsymbol{K}}=2$ in the other cases,\n$p+1 \equiv 0 \pmod{w_{\boldsymbol{K}}}$ is obvious.\n\\\n{\rm (2)}\; It is known that there are infinitely many $\boldsymbol{K}$ and $p$ satisfying\n$$\n\chi_{\boldsymbol{K}}(p)=-1\;\; \text{and}\;\;\nh_{\boldsymbol{K}} \not\equiv 0 \pmod{p}\n$$\n{\rm (e.g. cf. Horie-Onishi \cite{H-O})}.\\\n{\rm (3)}\; Our interest is to construct an element of the mod $p$ kernel of the theta operator with the smallest possible weight \n(i.e., the weight is its filtration. For the details on the filtration \nof the mod $p$ modular forms, see Serre \cite{Serre}). \nIf we do not restrict the weight, we can construct some trivial examples in several ways. \nFor example, the power $F^p$ of a modular form $F$ is such a trivial example: \nif $F$ is of weight $k$, then $F^p$ has weight $pk$, which is too large.\nWe expect that the minimal possible weight is $p+1$ in the mod $p$ non-singular case\n(cf. B\\"{o}cherer-Kikuta-Takemori \cite{B-K-T}).\n\end{Rem}\nFor the proof of the theorem, it is sufficient to show the following:\n\begin{Prop}\n\label{prop1}\nAssume that\n$p>3$ is a prime number such that $\chi_{\boldsymbol{K}}(p)=-1$ and\n$h_{\boldsymbol{K}} \not\equiv 0 \pmod{p}$. Let $a(F_{k,\boldsymbol{K}};H)$\nbe the Fourier coefficient of $F_{k,\boldsymbol{K}}$ at $H$. If ${\rm det}(H)\not\equiv 0\pmod{p}$,\nthen\n\begin{equation}\n\label{star}\na(F_{p+1,\boldsymbol{K}};H) \equiv 0 \pmod{p}.\n\end{equation}\n\end{Prop}\n\begin{proof}\nBy Theorem \ref{Krieg}, the Fourier coefficient $a(F_{p+1,\boldsymbol{K}};H)$ is expressed as\n$$\na(F_{p+1,\boldsymbol{K}};H)=\frac{4(p+1)p}{B_{p+1}\cdot B_{p,\chi_{\boldsymbol{K}}}}\n\sum_{00$. 
First we look at the factor\n$$\nA:=\frac{4(p+1)p}{B_{p+1}\cdot B_{p,\chi_{\boldsymbol{K}}}}.\n$$\nBy Kummer's congruence relation, we obtain\n\begin{align*}\n& \displaystyle \bullet\quad\frac{B_{p+1}}{p+1} \equiv \frac{B_2}{2}=\frac{1}{12} \pmod{p}\n\vspace{2mm}\n\\\n& \displaystyle \bullet\quad\frac{B_{p,\chi_{\boldsymbol{K}}}}{p}\n\equiv (1-\chi_{\boldsymbol{K}}(p))B_{1,\chi_{\boldsymbol{K}}} \n = (1-\chi_{\boldsymbol{K}}(p))\frac{-2h_{\boldsymbol{K}}}{w_{\boldsymbol{K}}}\n \pmod{p}.\n\end{align*}\nSince $p>3$, $\chi_{\boldsymbol{K}}(p)=-1$, and $h_{\boldsymbol{K}}\not\equiv 0\pmod{p}$, the factor $A$ is a $p$-adic unit.\n\nNext we shall show that, if $\text{det}(H)\not\equiv 0\pmod{p}$, then the factor\n$$\nB:=\sum_{03$ is a prime number such that\n$p \equiv 3 \pmod{4}$ and $\boldsymbol{K}=\mathbb{Q}(\sqrt{p}\,i)$. Let $F_{k,\boldsymbol{K}}$\nbe the Hermitian modular form introduced in Theorem \ref{Krieg}. Then the modular form\n$F_{\frac{p+1}{2},\boldsymbol{K}}$ is a mod $p$ singular Hermitian modular form.\n\end{Thm}\n\begin{proof}\nFrom Theorem \ref{Krieg}, we have\n$$\na(F_{\frac{p+1}{2},\boldsymbol{K}};H)\n=\frac{(p+1)(p-1)}{B_{\frac{p+1}{2}}\cdot B_{\frac{p-1}{2},\chi_{\boldsymbol{K}}}}\n\sum_{00$. Since the factor of the summand on the right-hand side is a rational integer,\nit is sufficient to show\n\begin{equation}\n\label{bigstar}\n\frac{(p+1)(p-1)}{B_{\frac{p+1}{2}}\cdot B_{\frac{p-1}{2},\chi_{\boldsymbol{K}}}}\n \equiv 0 \pmod{p}.\n\end{equation}\nWe have the following result:\n\begin{Lem}\n\label{lemma3}\nAssume that \n$p>3$, $p \equiv 3 \pmod{4}$, and $\boldsymbol{K}=\mathbb{Q}(\sqrt{p}\,i)$. 
\nThen we have\n$$\n{\rm (i)}\;B_{\frac{p+1}{2}}\not\equiv 0\pmod{p}\n\qquad\qquad\n{\rm (ii)}\; p\cdot B_{\frac{p-1}{2},\chi_{\boldsymbol{K}}} \equiv -1 \pmod{p}.\n$$\n\end{Lem}\n\begin{proof}\n(i) The statement $B_{\frac{p+1}{2}}\not\equiv 0\pmod{p}$ can be found in, for example,\nWashington \cite{Washington}, p.86, Exercise 5.4.\\\n(ii)\; The congruence $p\cdot B_{\frac{p-1}{2},\chi_{\boldsymbol{K}}} \equiv -1 \pmod{p}$\nis a special case of the theorem of von Staudt-Clausen for the generalized Bernoulli\nnumbers. For the proof see Carlitz \cite{Carlitz}, Theorem 3.\n\end{proof}\nWe return to the proof of (\ref{bigstar}). From the above lemma, we have\n$$\nB_{\frac{p+1}{2}}\cdot B_{\frac{p-1}{2},\chi_{\boldsymbol{K}}}\in \frac{1}{p}\mathbb{Z}_{(p)}^\times. \n$$\nThis implies (\ref{bigstar}).\n\end{proof}\n\section{Theta series case}\n\label{Sect.4}\nIn this section we construct Hermitian modular forms in the mod $p$ kernel of the\ntheta operator defined from theta series.\n\nTo a positive Hermitian lattice $\mathcal{L}$ of rank $r$ we associate the\n{\it Hermitian theta series}\n$$\n\vartheta_{\mathcal{L}}^{(n)}(Z)=\vartheta_H^{(n)}(Z)=\n\sum_{X\in\mathcal{O}_{\boldsymbol{K}}^{(r,n)}}\text{exp}(\pi i\text{tr}({}^t\overline{X}HXZ)),\n\quad Z\in\mathbb{H}_n,\n$$\nwhere $H$ is the corresponding Gram matrix of $\mathcal{L}$.\n\nIn the rest of this paper, we assume that\n$$\n\boldsymbol{K}=\mathbb{Q}(i).\n$$\n\subsection{Theta series for unimodular Hermitian lattice of rank 8}\n\label{rank8}\nWe denote by $\mathcal{U}_r(\mathcal{O}_{\boldsymbol{K}})=\mathcal{U}_r(\mathbb{Z}[i])$\nthe set of even integral, positive definite unimodular Hermitian matrices of rank $r$ over\n$\mathcal{O}_{\boldsymbol{K}}=\mathbb{Z}[i]$. It is known that $4\mid r$. We denote by \n$\widetilde{\mathcal{U}}_r(\mathcal{O}_{\boldsymbol{K}})$ the set of unimodular equivalence\nclasses. 
It is also known that \n$|\\widetilde{\\mathcal{U}}_8(\\mathcal{O}_{\\boldsymbol{K}})|=3$.\nWe fix a set of representatives $\\{ H_1,H_2,H_3\\}$, in which $H_i$ have the following\ndata:\n$$\n|\\text{Aut}(H_1)|=2^{15}\\cdot 3^5\\cdot 5^2\\cdot 7,\\quad\n|\\text{Aut}(H_2)|=2^{22}\\cdot 3^2\\cdot 5\\cdot 7,\\quad\n|\\text{Aut}(H_3)|=2^{21}\\cdot 3^4\\cdot 5^2,\n$$\n(cf. http:\/\/www.math.uni-sb.de\/ag\/schulze\/Hermitian-lattices\/).\n\nThe following identity is a special case of Siegel's main formula for Hermitian\nforms:\n\\begin{equation}\n\\label{mainformula}\n\\frac{\\vartheta_{H_1}^{(2)}}{2^{15}\\cdot 3^5\\cdot 5^2\\cdot 7}+\n\\frac{\\vartheta_{H_2}^{(2)}}{2^{22}\\cdot 3^2\\cdot 5\\cdot 7}+\n\\frac{\\vartheta_{H_3}^{(2)}}{2^{21}\\cdot 3^4\\cdot 5^2}\n=\\frac{61}{2^{22}\\cdot 3^5\\cdot 5\\cdot 7}E_{8,\\boldsymbol{K}}^{(2)},\n\\end{equation}\nwhere $\\frac{61}{2^{22}\\cdot 3^5\\cdot 5\\cdot 7}$ is the mass of the genus of the\nunimodular Hermitian lattices in rank 8. The space\n$M_8(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\nu_8)^{\\text{sym}}=\nM_8(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))^{\\text{sym}}$ is spanned by \n$(E_{4,\\boldsymbol{K}}^{(2)})^2$ and $\\chi_8$ (cf. 
Theorem \ref{structure}).\n\begin{Lem}\n\label{span}\nWe have the following identities:\n$$\n\vartheta_{H_1}^{(2)}=(E_{4,\boldsymbol{K}}^{(2)})^2-5760\chi_8,\;\;\n\vartheta_{H_2}^{(2)}=(E_{4,\boldsymbol{K}}^{(2)})^2-3072\chi_8,\;\;\n\vartheta_{H_3}^{(2)}=(E_{4,\boldsymbol{K}}^{(2)})^2.\n$$\n\end{Lem}\n\begin{proof}\nThe above identities come from the following data:\n\begin{align*}\n& \begin{cases}\n a(\vartheta_{H_1}^{(2)};[1,0,1])=120960,\\\n a(\vartheta_{H_1}^{(2)};[1,1+i,1])=0,\n \end{cases}\n\begin{cases}\n a(\vartheta_{H_2}^{(2)};[1,0,1])=131712,\\\n a(\vartheta_{H_2}^{(2)};[1,1+i,1])=2688,\n \end{cases}\n\\\n& \begin{cases}\n a(\vartheta_{H_3}^{(2)};[1,0,1])=144000,\\\n a(\vartheta_{H_3}^{(2)};[1,1+i,1])=5760.\n \end{cases}\n\end{align*}\nThe calculations of the Fourier coefficients were done by Till Dieckmann.\n\end{proof}\n\begin{Thm}\n\label{mod7}\nWe have $\vartheta_{H_i}^{(2)}\in M_8(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}}^{\text{sym}}$ ($i=1,2,3$) and\n$$\n\varTheta(\vartheta_{H_1}^{(2)}) \equiv \varTheta(\vartheta_{H_2}^{(2)})\n\equiv 0 \pmod{7}.\n$$\n\end{Thm}\n\begin{proof}\nThe first statement is a consequence of the unimodularity of $H_i$.\nBy (\ref{mainformula}), we see that\n$$\n4\vartheta_{H_1}^{(2)} \equiv 5\cdot 61E_{8,\boldsymbol{K}}^{(2)} \pmod{7}.\n$$\nMoreover, by Lemma \ref{span}, we have\n$$\n\vartheta_{H_1}^{(2)} \equiv \vartheta_{H_2}^{(2)} \pmod{7}.\n$$\nSince $\varTheta(E_{8,\boldsymbol{K}}^{(2)}) \equiv 0 \pmod{7}$ (cf. 
Corollary \ref{cor1}),\nwe obtain\n$$\n\varTheta(\vartheta_{H_1}^{(2)}) \equiv \varTheta(\vartheta_{H_2}^{(2)})\n\equiv 0 \pmod{7}.\n$$\n\end{proof}\n\subsection{Theta series for unimodular Hermitian lattice of rank 12}\n\label{HermitianLeech}\nIt is known that there is a unimodular Hermitian lattice \n$\mathcal{L}_{\mathbb{C}}\in\mathcal{U}_{12}(\mathbb{Z}[i])$\nwhich does not have any vector of length one.\nThe transfer of this lattice to $\mathbb{Z}$ is the Leech lattice\n(cf. http:\/\/www.math.uni-sb.de\/ag\/schulze\/Hermitian-lattices\/).\nFor this lattice, we have\n$$\n\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}\mid_{\mathbb{S}_2} =\n\vartheta_{\text{Leech}}^{(2)},\n$$\nnamely, the restriction of the Hermitian theta series $\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}$\nto the Siegel upper half-space coincides with the Siegel theta series\n$\vartheta_{\text{Leech}}^{(2)}$ attached to the Leech lattice. \n\n\nWe fix the lattice\n$\mathcal{L}_{\mathbb{C}}$ and call it here the {\it Hermitian Leech lattice}.\n\begin{Thm}\n\label{mod11cong}\nLet $\mathcal{L}_{\mathbb{C}}$ be the Hermitian Leech lattice.\nThe attached Hermitian theta series \n$\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}\n\in M_{12}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}}^{\text{sym}}$\nsatisfies the following congruence relations.\n\vspace{2mm}\n\\\n{\rm (1)}\; $\varTheta(\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}) \equiv 0 \pmod{11}$.\n\vspace{2mm}\n\\\n{\rm (2)}\; $\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)} \equiv 1 \pmod{13}$.\n\end{Thm}\n\begin{proof}\n(1)\, By Theorem \ref{thetamodp}, there is a Hermitian cusp form\n$$\nG\in S_{12+11+1}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}_{(11)}}^{\text{sym}}\n=S_{24}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}_{(11)}}^{\text{sym}}\n$$ \nsuch that\n$$\n\varTheta(\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}) \equiv G \pmod{11}.\n$$\nBy Table 2, we 
see that\n$$\na(\varTheta(\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)});H) \equiv\na(G;H) \pmod{11}\n$$\nfor any $H\in\Lambda_2(\boldsymbol{K})$ with $\text{rank}(H)=2$ and\n$\text{tr}(H)\leq\displaystyle 2\left[\frac{24}{8}\right]=6$. Applying Sturm's\nbound (Corollary \ref{sturmcorollary}), we obtain\n$$\n\varTheta(\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}) \equiv G \equiv 0 \pmod{11}.\n$$\n(2)\, We can confirm\n$$\n\Phi(\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)})\n=(E_4^{(1)})^3-720\Delta \equiv E_{12}^{(1)} \equiv 1 \pmod{13},\n$$\nwhere $\Phi$ is the Siegel operator and \n$\Delta=\frac{1}{1728}((E_4^{(1)})^3-(E_6^{(1)})^2)$ is Ramanujan's weight 12 cusp\nform for $SL_2(\mathbb{Z})$. This shows that\n$$\na(\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)};H) \equiv\na(E_{12,\boldsymbol{K}}^{(2)};H) \pmod{13}\n$$\nfor any $H\in\Lambda_2(\boldsymbol{K})$ with $\text{rank}(H)\leq 1$. Considering\nthis fact and Table 2, we see that this congruence relation holds for any\n $H\in\Lambda_2(\boldsymbol{K})$ with $\text{rank}(H)\leq 2$. Applying Sturm's\nbound again, we obtain\n$$\n\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)} \equiv E_{12,\boldsymbol{K}}^{(2)}\n\equiv 1 \pmod{13}.\n$$\n\end{proof}\nIn this section,\nwe constructed Hermitian modular forms in the mod $p$ kernel of the theta\noperator by theta series attached to unimodular Hermitian lattices. \nBy the results above,\nit is expected that, if $p$ is a prime number such that $p \equiv 3 \pmod{4}$,\nthen there is a unimodular Hermitian lattice $\mathcal{L}$ of rank $p+1$ such that\n$$\n\varTheta(\vartheta_{\mathcal{L}}^{(2)}) \equiv 0 \pmod{p}.\n$$\n\subsection{Theta constants}\n\label{thetaconstant}\nIn the previous sections, we gave examples of Hermitian modular forms in the mod $p$\nkernel of the theta operator. 
In this section we give another example.\n\nThe Hermitian\n{\it theta constant} on $\mathbb{H}_2$ with characteristic $\boldsymbol{m}$\nover $\boldsymbol{K}=\mathbb{Q}(i)$ is defined by\n\begin{align*}\n\theta_{\boldsymbol{m}}(Z) &=\theta(Z;\boldsymbol{a},\boldsymbol{b})\\\n&:=\sum_{\boldsymbol{g}\in\mathbb{Z}[i]^{(2,1)}}\text{exp}\n\left[\n\frac{1}{2}\left(Z\left\{\boldsymbol{g}+\frac{1+i}{2}\boldsymbol{a}\right\}\n+2\text{Re}\frac{1+i}{2}{}^t\boldsymbol{b}\boldsymbol{a}\right)\n\right],\n\quad Z\in \mathbb{H}_2,\n\end{align*}\nwhere $\boldsymbol{m}=\binom{\boldsymbol{a}}{\boldsymbol{b}}$, \n$\boldsymbol{a},\,\boldsymbol{b}\in\mathbb{Z}[i]^{(2,1)}$, $A\{ B\}={}^t\overline{B}AB$.\nDenote by $\mathcal{E}$ the set of even characteristics of degree 2 mod 2\n(cf. \cite{Freitag}), namely, \n$$\n\mathcal{E}=\n\left\{\n\begin{pmatrix}0\\0\\0\\0\end{pmatrix},\n\begin{pmatrix}0\\0\\0\\1\end{pmatrix},\n\begin{pmatrix}0\\0\\1\\0\end{pmatrix},\n\begin{pmatrix}0\\0\\1\\1\end{pmatrix},\n\begin{pmatrix}0\\1\\0\\0\end{pmatrix},\n\begin{pmatrix}0\\1\\1\\0\end{pmatrix},\n\begin{pmatrix}1\\0\\0\\0\end{pmatrix},\n\begin{pmatrix}1\\0\\0\\1\end{pmatrix},\n\begin{pmatrix}1\\1\\0\\0\end{pmatrix},\n\begin{pmatrix}1\\1\\1\\1\end{pmatrix}\n\right\}.\n$$\n\begin{Thm}\n{\rm (Freitag \cite{Freitag})}\; Set\n$$\n\psi_{4k}(Z):=\frac{1}{4}\sum_{\boldsymbol{m}\in\mathcal{E}}\theta_{\boldsymbol{m}}^{4k}(Z),\n\quad (k\in\mathbb{N}).\n$$\nThen\n$$\n\psi_{4k}\in M_{4k}(\Gamma^2(\mathcal{O}_{\boldsymbol{K}}))_{\mathbb{Z}}^{\text{sym}}.\n$$\n\end{Thm}\n\begin{Rem}\n(1)\, $\psi_4=E_{4,\boldsymbol{K}}^{(2)}$.\\\n(2)\,$F_{10}=2^{-12}\displaystyle\prod_{\boldsymbol{m}\in\mathcal{E}}\theta_{\boldsymbol{m}}$,\nwhere $F_{10}$ is a Hermitian modular form given in $\S$ \ref{Sect.2.2.2}.\n\end{Rem}\n\begin{Thm}\n\label{thetaconstantth}\nThe 
following congruence relations hold.\n\vspace{2mm}\n\\\n{\rm (1)}\; $\varTheta (\psi_8) \equiv 0\pmod{7}$.\n\vspace{2mm}\n\\\n{\rm (2)}\; $\varTheta (\psi_{12}) \equiv 0 \pmod{11}$.\n\end{Thm}\n\begin{proof}\n(1)\, The form $\psi_8$ can be expressed as\n$$\n\psi_8=\frac{14}{75}(E_{4,\boldsymbol{K}}^{(2)})^2+\frac{61}{75}E_{8,\boldsymbol{K}}^{(2)}.\n$$\nSince $\varTheta(E_{8,\boldsymbol{K}}^{(2)}) \equiv 0 \pmod{7}$, we obtain\n $\varTheta (\psi_8) \equiv 0\pmod{7}$.\\\n(2)\, We have the following expression.\n\begin{align*}\n& \vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}\n =a_1\psi_{12}+a_2(E_{4,\boldsymbol{K}}^{(2)})^3\n+a_3E_{4,\boldsymbol{K}}^{(2)}E_{8,\boldsymbol{K}}^{(2)}\n+a_4E_{12,\boldsymbol{K}}^{(2)},\\\n&\na_1=\frac{1470105}{8511808},\quad\na_2=\frac{167218051}{638385600},\quad\na_3=-\frac{147340193}{212795200},\\\n&\na_4=\frac{802930253}{638385600},\n\end{align*}\nwhere $\mathcal{L}_{\mathbb{C}}$ is the Hermitian Leech lattice as before.\nIt should be noted that $a_2 \equiv a_3 \equiv 0\pmod{11}$, and $a_1\not\equiv 0\pmod{11}$.\nSince \n$\varTheta (\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}) \equiv \n\varTheta(E_{12,\boldsymbol{K}}^{(2)}) \equiv 0\pmod{11}$, we obtain\n$$\n\varTheta (\psi_{12}) \equiv 0 \pmod{11}.\n$$\n\end{proof}\n\section{Tables}\nIn this section, we summarize tables which are needed in the proofs of our statements\nin the previous sections.\n\subsection{Theta series for rank 8 unimodular Hermitian lattices}\nWe introduced the Hermitian theta series $\vartheta_{H_i}^{(2)}$ in $\S$ \ref{rank8}.\nThe following table gives some examples of Fourier coefficients of $\vartheta_{H_i}^{(2)}$\n$(i=1,2)$ and $\vartheta_{[2,2,4]}^{(2)}$.\n\begin{table*}[hbtp]\n\caption{Fourier coefficients of Hermitian theta series (rank 8)}\n\label{tab.1}\n\begin{center}\n\begin{tabular}{llll} \hline\n$H$ & 4det$(H)$ & $a(\vartheta_{H_1}^{(2)};H)$ & $a(\vartheta_{H_2}^{(2)};H)$ \\ 
\\hline\n$[0,0,0]$ & $0$ & $1$ & $1$ \\\\ \\hline\n$[1,0,0]$ & $0$ & $480$ & $480$ \\\\ \\hline\n$[2,0,0]$ & $0$ & $61920$ & $61920$ \\\\ \\hline\n$[3,0,0]$ & $0$ & $1050240$ & $1050240$ \\\\ \\hline\n$[4,0,0]$ & $0$ & $7926240$ & $7926240$ \\\\ \\hline\n$[1,1+i,1]$ & $2$ & $0$ & $2688=2^7\\cdot 3\\cdot 7$ \\\\ \\hline\n$[1,1,1]$ & $3$ & $26880=2^8\\cdot 3\\cdot 5\\cdot 7$& $21504=2^{10}\\cdot 3\\cdot 7$ \\\\ \\hline\n$[2,2,1]$ & $4$ & $120960=2^7\\cdot 3^3\\cdot 5\\cdot 7$& $131712=2^7\\cdot 3\\cdot 7^3$ \\\\ \\hline\n$[2,1+i,1]$ & $6$ & $1505280=2^{11}\\cdot 3\\cdot 5\\cdot 7^2$ & $1483776=2^{10}\\cdot 3^2\\cdot 7\\cdot 23$ \\\\ \\hline\n$[3,2+i,1]$ & $7$ & $3663360=2^9\\cdot 3^3\\cdot 5\\cdot 53$ & $3717120=2^{11}\\cdot 3\\cdot 5\\cdot 11^2$ \\\\ \\hline\n$[2,0,1]$ & $8$ & $8346240=2^7\\cdot 3^4\\cdot 5\\cdot 7\\cdot 23$ & $8217216=2^7\\cdot 3^2\\cdot 7\\cdot 1019$\\\\ \\hline\n$[2,2+2i,2]$ & $8$ & $8346240=2^7\\cdot 3^4\\cdot 5\\cdot 7\\cdot 23$ & $8561280=2^7\\cdot 3\\cdot 5\\cdot 7^3\\cdot 13$\\\\ \\hline\n$[3,1+i,1]$ & $10$ & $30965760=2^{15}\\cdot 3^3\\cdot 5\\cdot 7$ & $30992640=2^8\\cdot 3\\cdot 5\\cdot 7\\cdot 1153$ \\\\ \\hline\n$[2,2+i,2]$ & $11$& $55883520=2^8\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11$ & $55716864=2^{10}\\cdot 3\\cdot 7\\cdot 2591$ \\\\ \\hline\n$[4,2+i,1]$ & $11$& $55883520=2^8\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11$ & $55716864=2^{10}\\cdot 3\\cdot 7\\cdot 2591$ \\\\ \\hline\n$[3,0,1]$ & $12$ & $67751040=2^7\\cdot 3\\cdot 5\\cdot 7\\cdot 71^2$ & $68353152=2^7\\cdot 3\\cdot 7\\cdot 59\\cdot 431$ \\\\ \\hline\n$[2,2,2]$ & $12$ & $96875520=2^{10}\\cdot 3\\cdot 5\\cdot 7\\cdot 17\\cdot 53$& $96789504=2^{10}\\cdot 3\\cdot 7^2\\cdot 643$ \\\\ \\hline\n$[2,1+i,2]$ & $14$ & $240537600=2^{12}\\cdot 3^4\\cdot 5^2\\cdot 29$& $240752640=2^{11}\\cdot 3\\cdot 5\\cdot 17\\cdot 461$ \\\\ \\hline\n$[4,1+i,1]$ & $14$ & $240537600=2^{12}\\cdot 3^4\\cdot 5^2\\cdot 29$& $240752640=2^{11}\\cdot 3\\cdot 5\\cdot 17\\cdot 461$ \\\\ \\hline\n$[2,1,2]$ & $15$& 
$358095360=2^9\cdot 3\cdot 5\cdot 7\cdot 6661$ & $358041600=2^{11}\cdot 3^3\cdot 5^2\cdot 7\cdot 37$ \\ \hline\n$[4,1,1]$ & $15$&$358095360=2^9\cdot 3\cdot 5\cdot 7\cdot 6661$ & $358041600=2^{11}\cdot 3^3\cdot 5^2\cdot 7\cdot 37$ \\ \hline\n$[2,0,2]$ & $16$ & $544440960=2^7\cdot 3^3\cdot 5\cdot 7^2\cdot 643$& $544612992=2^7\cdot 3\cdot 7\cdot 11\cdot 113\cdot 163$ \\ \hline\n$[4,0,1]$ & $16$ & $528958080=2^7\cdot 3^3\cdot 5\cdot 7\cdot 4373$ & $527753856=2^7\cdot 3\cdot 7\cdot 196337$ \\ \hline \n\end{tabular}\n\end{center}\n\end{table*}\n\newpage\n\subsection{Theta series for rank 12 unimodular Hermitian lattice}\nThe following table gives the Fourier coefficients of the theta series\n$\vartheta_{\mathcal{L}_{\mathbb{C}}}^{(2)}$, where $\mathcal{L}_{\mathbb{C}}$\nis the Hermitian Leech lattice introduced in $\S$ \ref{HermitianLeech}.\n\begin{table*}[hbtp]\n\caption{Non-zero Fourier coefficients $a(\vartheta_{\mathcal{L}}^{(2)};H)$ with rank$(H)=2$ and tr$(H)\leq 6$}\n\label{tab.2}\n\begin{center}\n\begin{tabular}{lll} \hline\n$H$ & 4det$(H)$ & $a(\vartheta_{\mathcal{L}}^{(2)};H)$ \\ \hline\n$[2,0,2]$ & $16$ & $8484315840=2^6\cdot 3^5\cdot 5\cdot 7\cdot 11\cdot 13\cdot 109$ \\ \hline\n$[2,1,2]$ & $15$ & $4428103680=2^{15}\cdot 3^3\cdot 5\cdot 7\cdot 11\cdot 13$ \\ \hline\n$[2,2,2]$ & $12$ & $484323840=2^9\cdot 3^3\cdot 5\cdot 7^2\cdot 11\cdot 13$ \\ \hline\n$[2,1+i,2]$ & $14$ & $2214051840=2^{14}\cdot 3^3\cdot 5\cdot 7\cdot 11\cdot 13$ \\ \hline\n$[2,2+i,2]$ & $11$ & $201277440=2^{14}\cdot 3^3\cdot 5\cdot 7\cdot 13$ \\ \hline\n$[2,2+2i,2]$ & $8$ & $8648640=2^6 \cdot 3^3\cdot 5\cdot 7\cdot 11\cdot 13$ \\ \hline\n$[2,0,3]$ & $24$ & $480449249280=2^{14}\cdot 3^3\cdot 5\cdot 7^2\cdot 11\cdot 13\cdot 31$ \\ \hline\n$[2,1,3]$ & $23$ & $314395361280=2^{15}\cdot 3^3\cdot 5\cdot 7\cdot 11\cdot 13\cdot 71$ \\ \hline\n$[2,2,3]$ & $20$ & 
$77491814400=2^{14}\\cdot 3^3\\cdot 5^2\\cdot 7^2\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,1+i,3]$ & $22$ & $201679994880=2^{15}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 13\\cdot 167$ \\\\ \\hline\n$[2,2+i,3]$ & $19$ & $46495088640=2^{14}\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,2+2i,3]$ & $16$ & $8302694400=2^{12}\\cdot 3^4\\cdot 5^2\\cdot 7\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,0,4]$ & $32$ & $8567040081600=2^6 \\cdot 3^3\\cdot 5^2\\cdot 7\\cdot 11\\cdot 13\\cdot 19\\cdot 10427$ \\\\ \\hline\n$[2,1,4]$ & $31$ & $6230341877760=2^{15}\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 67$ \\\\ \\hline\n$[2,2,4]$ & $28$ & $2254596664320=2^{10}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 5431$ \\\\ \\hline\n$[2,4,4]$ & $16$ & $8484315840=2^6 \\cdot 3^5\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 109$ \\\\ \\hline\n$[2,1+i,4]$ & $30$ & $4487883079680=2^{14}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 2027$ \\\\ \\hline\n$[2,2+i,4]$ & $27$ & $1565334650880=2^{14}\\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 101$ \\\\ \\hline\n$[2,2+2i,4]$ & $24$ & $482870868480=2^9 \\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 997$ \\\\ \\hline\n$[3,0,3]$ & $36$ & $27374536949760=2^{16}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11^2\\cdot 13\\cdot 281$ \\\\ \\hline\n$[3,1,3]$ & $35$ & $20648247459840=2^{15}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 4663$ \\\\ \\hline\n$[3,2,3]$ & $32$ & $8431662919680=2^{12}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 15233$ \\\\ \\hline\n$[3,3,3]$ & $27$ & $1539504046080=2^{15}\\cdot 3^2\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 149$ \\\\ \\hline\n$[3,1+i,3]$ & $34$ & $15436369428480=2^{16}\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 83$ \\\\ \\hline\n$[3,2+i,3]$ & $31$ & $6137351700480=2^{16}\\cdot 3^5\\cdot 5\\cdot 7^2\\cdot 11^2\\cdot 13$ \\\\ \\hline\n$[3,3+i,3]$ & $26$ & $1053888675840=2^{16}\\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 17$ \\\\ \\hline\n$[3,2+2i,3]$ & $28$ & 
$2218479943680=2^{15}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 167$ \\\\ \\hline\n$[3,3+2i,3]$ & $23$ & $309967257600=2^{16}\\cdot 3^3\\cdot 5^2\\cdot 7^2\\cdot 11\\cdot 13$ \\\\ \\hline\n$[3,3+3i,3]$ & $18$ & $26568622080=2^{16}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 11\\cdot 13$ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\nAny non-zero Fourier coefficient $a(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)};H)$ \nwith rank$(H)=2$ and tr$(H)\\leq 6$\ncoincides with one in the above list.\n\nThe calculation is based on the following expression:\n$$\n\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}:=\n\\frac{7}{12}(E_{4,\\boldsymbol{K}}^{(2)})^3\n+\\frac{5}{12}(E_{6,\\boldsymbol{K}}^{(2)})^2\n-10080E_{4,\\boldsymbol{K}}^{(2)}\\chi_8-60480F_{12}.\n$$\n\\newpage\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\subsection{Motivation}\nThere is growing interest in combining observational and experimental data to draw causal conclusions \\citep{hartman2015sate,athey2020combining,YangDing2020,Chen2021,Oberst2022,rosenman2020combining}. Experimental data from randomized controlled trials (RCTs) are considered the gold standard for causal inference and can provide unbiased estimates of average treatment effects. However, the scale of the experimental data is usually limited and the trial participants might not represent those in the target cohort. For example, the recruitment criteria for an RCT may prescribe that participants must be less than 65 years old and satisfy certain health criteria, whereas the target population considered for treatment may cover all age groups. This problem is known as the lack of transportability \\citep{pearl2011transportability,rudolph2017robust}, generalizability \\citep{cole2010generalizing,hernan2011compound,dahabreh2019extending}, representativeness \\citep{campbell1957factors} and external validity \\citep{rothwell2005external,westreich2019target}. 
By contrast, observational data usually has both the scale and the scope desired, but one can never prove that there is no hidden confounding. Any unmeasured confounding in the observational data may lead to a biased estimate of the causal effect. When it comes to estimating the causal effect in the target population, combining observational and experimental data provides an avenue to exploit the benefits of both.\n\n\n\n\nExisting literature has proposed several methods of integrating RCT and observational data to address the issue of the RCT population not being representative of the target cohort. \\cite{kallus2018removing} considered the case where the supports do not fully overlap and proposed a linear correction term to approximate the difference between the causal estimates from observational data and experimental data caused by hidden confounding.\\par\n\n\nSometimes even though the domain of observational data overlaps with the experimental data, sub-populations with certain traits may be over- or under-represented in the RCT compared to the target cohort. This difference can lead to a biased estimate of the average treatment effect, and as a result, the causal conclusion may not be generalizable to the target population. In this case, reweighting the RCT population to make it applicable\nto the target cohort is a common choice of remedy \\citep{hartman2015sate,andrews2017weighting}. In particular, Inverse Probability of Sampling Weighting (IPSW) has been a popular estimator for reweighting \\citep{cole2008constructing,cole2010generalizing,stuart2011use}. In this paper, we base our theoretical results \non the IPSW estimator. \n\n\n\n\n\n\n\n\n\n\n\\subsection{Design Trumps Analysis}\nMost of the existing literature, including the works discussed above, focuses on the analysis stage after RCTs are completed, and proposes methods to analyse the data as given.
This means that the analysis methods, including reweighting through IPSW, passively deal with the RCT data as they are. However, the quality of the causal inference is largely predetermined by the data collected. `Design trumps analysis' \\citep{rubin2008objective}; a carefully designed experiment benefits the causal inference far more than the analysis that follows. Instead of marginally improving through analyses, we focus on developing a strategy for the design phase, specifically the selection of RCT participants with different characteristics, to fundamentally improve the causal inference.\\par\n\n\nWhen designing an RCT sample to draw causal conclusions on the target cohort, a heuristic strategy that practitioners tend to opt for is to construct the RCT sample that looks exactly like a miniature version of the target cohort. For example, suppose that we want to examine the efficacy of a drug on a target population consisting of $30\\%$ women and $70\\%$ men. If the budget allows us to recruit 100 RCT participants in total, then the intuition is to recruit exactly $30$ females and $70$ males. This intuition definitely works, yet is it efficient? We refer to the efficiency of the reweighted causal estimator of the average treatment effect in the target population, and specifically to its variance \\footnote[1]{We note that the efficiency of an unbiased estimator $T$ is formally defined as $e(T) = \\frac{\\mathcal{I}^{-1}(\\theta)}{var(T)}$, that is, the ratio of its lowest possible variance over its actual variance. For our purpose, we do not discuss the behaviour of the Fisher information of the data but rather focus on reducing the variance of the estimators. With slight abuse of terminology, in this paper, when we say that one estimator is more efficient than another, we mean that the variance of the former is lower.
Similarly, we say that an RCT sample is more efficient if it eventually leads to an estimator of lower variance.}.\n\nIn fact, we find that RCTs following the exact covariate distribution of the target cohort do not necessarily lead to the most efficient estimates after reweighting. Instead, our result suggests that the optimal covariate allocation of experiment samples is the target cohort distribution adjusted by the conditional variability of potential outcomes. Intuitively, this means that an optimal strategy is to sample more from the segments where the causal effect is more volatile or uncertain, even if they do not make up a large proportion of the target cohort. \n\n\n\n\n\\subsection{Contributions}\nIn this work, we focus on the common practice of generalizing the causal conclusions from an RCT to a target cohort. We aim to fundamentally improve the estimation efficiency by improving the selection of individuals into the trial, that is, the allocation of a certain number of places in the RCT to individuals of certain characteristics. We derive the optimal covariate allocation that minimizes the variance of the causal estimate for the target cohort. Practitioners can use this optimal allocation as a guide when they decide `who' to recruit for the trial. We also formulate a deviation metric that quantifies how far a given RCT allocation is from optimal, and practitioners can use this metric to choose among several candidate RCT allocations when presented with them.\\par\nWe develop variations of the main results to cater for various practical scenarios, such as where the total number of participants in the trial is fixed, or the total recruitment cost is fixed while unit costs can differ, or with different precision requirements: best overall precision, equal segment precision, or somewhere in between. In this paper, we provide practitioners with a clear strategy and versatile tools to select the most efficient RCT samples.
\n\n\n\n\\subsection{Outline} \nThe remainder of this paper is organized as follows: In Section~\\ref{sec:1setup}, we introduce the problem setting and notation, state the main assumptions, and provide more details on the IPSW estimator that we consider. In Section~\\ref{sec:2res}, we derive the optimal covariate allocation for RCT samples to improve estimation efficiency, propose a deviation metric to assess candidate experimental designs and illustrate how this metric influences estimation efficiency. Section~\\ref{sec:3estimate} provides an estimate of the optimal covariate allocation and the corresponding assumptions to ensure consistency. Section~\\ref{sec:4pras} extends the main results and proposes design strategies under other practical scenarios such as heterogeneous unit costs and equal precision requirements. In Section~\\ref{sec:5numerical}, we use two numerical studies, a synthetic simulation and a semi-synthetic simulation with real-world data, to corroborate our theoretical results.\n\n\n\n\n\\section{Setup, Assumptions and Estimators} \\label{sec:1setup}\n\n\\subsection{Problem Setup and Assumptions}\nIn this paper, we base our notation and assumptions on the potential outcome framework \\citep{rubin1974estimating}. We assume access to two datasets: an RCT dataset and an observational dataset. We also make the assumption that the target cohort of interest is contained in the observational data. \\par\nDefine $S \\in \\{0,1\\} $ as the sample indicator, where $s = 1$ indicates membership of the experimental data and $s = 0$ of the target cohort, and define $T \\in \\{0,1\\}$ as the treatment indicator, where $t = 1$ indicates treatment and $t = 0$ indicates control. Let $Y_{is}^{(t)}$ denote the potential outcome for a unit $i$ assigned to data set $s$ and treatment $t$. We define $X$ as a set of observable pre-treatment variables, which can consist of discrete and\/or continuous variables.
Let $n_0$, $n_1$, $n = n_0 + n_1$ denote the number of units in the target cohort, RCT, and the combined dataset, respectively. We use $f_1(x)$ and $f_0(x)$ to denote the distribution of $X$ in the RCT population and target cohort, respectively. \n\nThe causal quantity of interest here is the average treatment effect (ATE) on the target population, denoted by $\\tau$.\n\\begin{dfn} (ATE on target cohort)\n$$\n \\tau := \\mathbb{E} \\left [Y^{(1)} - Y^{(0)} \\mid S = 0 \\right].\n$$\n\\end{dfn}\nWe also define the conditional average treatment effect (CATE) on the trial population, denoted by $\\tau(x)$.\n\\begin{dfn} (CATE on trial population)\n$$\n \\tau(x) := \\mathbb{E} \\left [Y^{(1)} - Y^{(0)} \\mid X = x, S = 1 \\right].\n$$\n\\end{dfn}\n\n\n\nTo ensure an unbiased estimator of the ATE on the target population after reweighting the estimates from the RCT, we need to make several standard assumptions.\n\\begin{assumption}(Identifiability of CATE in the RCT data)\n\\label{assump::identifiability1}\nFor all the observations in the RCT data, we assume the following conditions hold.\n\\begin{itemize}\n \n \\item[(i)] \n Consistency: $Y_{i} = Y_{i1}^{(t)}$ when $T=t$ and $S=1$;\n \n \\item[(ii)] Ignorability: $Y_{i}^{(t)} \\indep T \\mid (X, S =1)$;\n \\item[(iii)] Positivity: $0 < \\mathbb{P}(T=t \\mid X, S =1) < 1$ for all $t \\in \\{0,1\\}$.\n \n \n\\end{itemize}\n\\end{assumption}\nThe ignorability condition assumes that the experimental data is unconfounded, and the positivity condition is guaranteed to hold in\nconditionally randomized experiments. The ignorability and positivity assumptions combined are also referred to as strong ignorability.
Under Assumption~\\ref{assump::identifiability1}, the causal effect conditioned on $X = x$ in the experimental sample can be estimated without bias using:\n\\begin{eqnarray*}\n\\hat \\tau(x) &=& \\frac{ \\sum_{S_i=1,X_i = x} \\frac{T_i Y_i}{e(x)} - \\frac{(1-T_i) Y_i }{ 1-e(x)}}{\\sum_{S_i=1,X_i = x} 1},\n\\end{eqnarray*}\nwhere $e(x) = \\mathbb{P}(T=1 \\mid X=x, S=1)$ is the probability of treatment assignment in the experimental sample. This estimator is also known as the Horvitz-Thompson estimator \\citep{horvitz1952generalization}, on which we provide more details later in this section.\n\nTo make sure that we can `transport' the effect from the experimental data to the target cohort, we make the following transportability assumption.\n\\begin{assumption}(Transportability) \n\\label{assump::transport}\n$Y^{(t)} \\indep S \\mid (X, T=t)$.\n\\end{assumption}\nAssumption~\\ref{assump::transport} can be interpreted from several perspectives, as elaborated in \\cite{hernan2010causal}. First, it assumes that all the effect modifiers are captured by the set of observable covariates $X$. Second, it also requires that \nthe treatment $T$ stays the same across the two datasets. If the assigned treatment differs between the study population and the target population, then the magnitude of the causal effect of treatment will differ too. Lastly, the transportability assumption prescribes that there is no interference across the two populations. That is, treating one individual in one population does not interfere with the outcome of individuals in the other population.\\par\n\n\nFurthermore, we require that the trial population fully overlaps with the target cohort, so that we can reweight the CATE in the experimental sample to estimate the ATE in the target cohort. That is, for each individual in the target cohort, we want to make sure that we can find a comparable counterpart in the experimental sample with the same characteristics.
\n\n\\begin{assumption}\n(Positivity of trial participation) \n\\label{assump::positivity}\n$0 < \\mathbb{P}(S=1 \\mid T=t, X = x) < 1$ for all $x \\in \\text{supp}(X\\,|\\, S=0) $.\n\\end{assumption}\nIn Assumption~\\ref{assump::positivity}, $\\text{supp}(X\\,|\\, S=0)$ denotes the support of the target cohort, in other words, the set of values that $X$ can take for individuals in the target cohort. Mathematically, $x \\in \\text{supp}(X\\,|\\, S=0)$ is equivalent to $\\mathbb{P}\\left(\\|X-x\\| \\leq \\delta \\mid S = 0\\right)>0$, $\\forall \\delta>0$. Assumption~\\ref{assump::positivity} requires that the support of the experimental sample includes the target cohort of interest.\n\n\\subsection{Estimators and related work}\nInverse Propensity (IP) weighted estimators were proposed by \\cite{horvitz1952generalization} for surveys in which subjects are sampled with unequal probabilities.\n\\begin{dfn}(Horvitz-Thompson estimator)\n\\begin{align*}\n \\widehat Y^{(t)}_{\\text{HT}} &= \\frac{1}{n_1} \\sum_{i=1}^{n_1} \\frac{I(T_i=t) Y_i}{\\mathbb{P}\\left(T_i = t \\,|\\, X = X_i \\right)},\\\\ \n \\hat{\\tau}_{\\text{HT}} &= \n \\widehat Y^{(1)}_{\\text{HT}} - \n \\widehat Y^{(0)}_{\\text{HT}} = \\frac{1}{n_1} \\sum_{i=1}^{n_1} \\frac{T_i Y_i}{e\\left(X_i\\right)} - \\frac{(1- T_i) Y_i}{1- e\\left(X_i\\right)},\n\\end{align*}\n\\end{dfn}\nwhere the probability of treatment $e(X_i)$ is assumed to be known as we focus on the design phase of experiments. 
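As a concrete illustration of the Horvitz--Thompson estimator just defined, the following minimal Python sketch computes $\\hat{\\tau}_{\\text{HT}}$ on simulated RCT rows with a known propensity score. The data-generating numbers are our own toy choices, not from the paper:

```python
import random

def horvitz_thompson(rows, e):
    """Horvitz-Thompson ATE estimate from RCT rows (t, y, x),
    with a known treatment-assignment probability e(x)."""
    n1 = len(rows)
    return sum(t * y / e(x) - (1 - t) * y / (1 - e(x)) for t, y, x in rows) / n1

# Toy RCT (our own numbers): binary covariate, e(x) = 0.5 for all x,
# and a constant treatment effect of 2 in both strata.
random.seed(0)
rows = []
for _ in range(20000):
    x = random.randint(0, 1)
    t = random.randint(0, 1)                      # randomized: e(x) = 0.5
    y = 2.0 * t + random.gauss(0.0, 1.0)          # true ATE = 2
    rows.append((t, y, x))

tau_hat = horvitz_thompson(rows, lambda x: 0.5)   # close to the true ATE of 2
```

Each treated outcome is up-weighted by $1/e(x)$ and each control outcome by $1/(1-e(x))$, so the two weighted averages are unbiased for the respective mean potential outcomes.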
In practice, we can extend the Horvitz-Thompson estimator by replacing $e(x)$ with an estimate $\\hat{e}(x)$, yielding, for example, the Hajek estimator \\citep{hajek1971comment} or the difference-in-means estimator.\n\n\\begin{dfn} (Augmented Inverse Propensity Weighted estimator)\n\\begin{eqnarray*}\n \\widehat Y^{(1)}_{\\text{AIPW}} &=&\\frac{1}{n_1} \\sum_{i=1}^{n_1}\\left[\\hat{m}^{(1)}\\left(X_i\\right)+\\frac{T_i}{\\hat{e}\\left(X_i\\right)}\\left(Y_i-\\hat{m}^{(1)}\\left(X_i\\right)\\right)\\right], \\\\\n \\widehat Y^{(0)}_{\\text{AIPW}}&=&\\frac{1}{n_1} \\sum_{i=1}^{n_1}\\left[\\hat{m}^{(0)}\\left(X_i\\right)+\\frac{(1-T_i)}{1- \\hat{e}\\left(X_i\\right)}\\left(Y_i-\\hat{m}^{(0)}\\left(X_i\\right)\\right)\\right], \\\\\n \\hat{\\tau}_{\\text{AIPW}} &=& \\frac{1}{n_1} \\sum_{i=1}^{n_1} \\frac{T_i(Y_i-\\hat{m}^{(1)}(X_i))}{\\hat{e}\\left(X_i\\right)} - \\frac{(1-T_i)(Y_i-\\hat{m}^{(0)}(X_i))}{1-\\hat{e}\\left(X_i\\right)} + \\hat{m}^{(1)}(X_i) - \\hat{m}^{(0)}(X_i),\n\\end{eqnarray*}\nwhere $m^{(t)}(x)$ denotes the average outcome of treatment $t$ given covariate $X = x$, that is, $m^{(t)}(x)=\\mathbb{E} [Y \\mid T=t, X=x, S =1]$, and $\\hat{m}^{(t)}(x)$ is an estimate of $m^{(t)}(x)$ \\citep{robins1994correcting}.
\n\\end{dfn}\nThe estimator $\\hat{\\tau}_{\\text{AIPW}}$ is doubly robust: $\\hat{\\tau}_{\\text{AIPW}}$ is consistent if either (1) $\\hat{e}\\left(X_i\\right)$ is consistent or (2) $\\hat{m}^{(t)}(x)$ is consistent.\n\n\\begin{dfn}(Inverse Propensity Sample Weighted (IPSW) estimator)\n\\begin{eqnarray*}\n\\hat{\\tau}_{\\text{IPSW}}^* &=& \\frac{1}{n_1} \\sum_{i \\in \\{i:S_i = 1\\}} w\\left(X_i\\right){\\left(\\frac{Y_i T_i}{e(X_i)}-\\frac{Y_i\\left(1-T_i\\right)}{1-e(X_i)}\\right)},\n\\\\\nw(x) &=& \n\\frac{f_0(x)}{f_1(x)}.\n\\end{eqnarray*}\n\\end{dfn}\nWe can see that the IPSW estimator extends the Horvitz-Thompson estimator by adding a weight $w(x)$, which is the ratio between the probability of observing an individual with characteristics $X = x$ in the target population and that in the trial population \\citep{stuart2011use}\\footnote[2]{The definition of the weight $w(x)$ differs slightly from that in \\cite{stuart2011use}, where $w(x)$ is defined as $\\frac{P\\left(S =1 \\mid X=x\\right)}{P\\left(S = 0 \\mid X = x\\right)}$. That is, the ratio of the probability of being selected into the trial over that of being selected into the target cohort. This definition is based on the problem setting where there is a super-population from which the target cohort and trial cohort are sampled. Our definition here agrees with that in \\cite{colnet2022reweighting}.}. We use an asterisk in the notation to denote that it is an oracle definition where we assume both $f_1(X)$ and $f_0(X)$ are known, which is usually unrealistic. The IPSW estimator of the average treatment effect on the target cohort is proven to be unbiased under Assumptions~\\ref{assump::identifiability1}--\\ref{assump::positivity}.\\par \nA concurrent study of high relevance to our work by \\cite{colnet2022reweighting} investigated the performance of IPSW estimators.
In particular, they defined different versions of IPSW estimators, where $f_1(X)$ and $f_0(X)$ are either treated as known or estimated, and derived the expressions of asymptotic variance for each version. They concluded that the semi-oracle estimator, where $f_1(X)$ is estimated and $f_0(X)$ is treated as known, outperforms the other two versions, giving the lowest asymptotic variance.\n\n\\begin{dfn}\\label{dfn:semioracleIPSW}(Semi-oracle IPSW estimator, \\cite{colnet2022reweighting})\n\\begin{eqnarray*}\n\\hat{\\tau}_{\\text{IPSW}} &=& \\frac{1}{n_1} \\sum_{i \\in \\{i:S_i = 1\\}} \\frac{f_0(X_i)}{\\hat{f_1}(X_i)}{\\left(\\frac{Y_i T_i}{e(X_i)}-\\frac{Y_i\\left(1-T_i\\right)}{1-e(X_i)}\\right)}, \\text{ and} \\\\\n\\hat{f_1}(x) &=& \\frac{1}{n_1} \\sum_{S_i=1} \\mathbbm{1}_{X_i = x}.\n\\end{eqnarray*}\n\\end{dfn}\n\n The re-weighted ATE estimator we use in this paper, $\\hat{\\tau}$, coincides with their semi-oracle IPSW estimator defined above, where $f_1(X)$ is estimated from the RCT data.\n\n\\section{Main Results}\n\\label{sec:2res}\nIn this section, we start with the case where the number of possible covariate values is finite and derive the optimal covariate allocation of RCT samples that minimizes the variance of the ATE estimate, $\\hat{\\tau}$. We then develop a deviation metric, $\\mathcal{D}(f_1)$, that quantifies how much a candidate RCT sample composition with covariate distribution $f_1$ deviates from the optimal allocation. We prove that the variance of $\\hat{\\tau}$ grows linearly with this deviation metric, $\\mathcal{D}(f_1)$, so it can be used as a metric for selection. Finally, we derive the above results in the presence of continuous covariates.\n\\label{sec:theoreticalresults}\n\n\\subsection{Variance-Minimizing RCT Covariate Allocation}\nWe first consider the more straightforward case, where the number of possible covariate values is finite.
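For this finite-covariate case, the semi-oracle IPSW estimator of Definition~\\ref{dfn:semioracleIPSW} reduces to a few lines of code: estimate $\\hat{f_1}$ as an empirical frequency in the trial, then reweight the Horvitz--Thompson terms by $f_0/\\hat{f_1}$. A minimal Python sketch on toy data (the stratum effects and proportions are our own illustrative choices):

```python
import random
from collections import Counter

def semi_oracle_ipsw(rct, f0, e):
    """Semi-oracle IPSW: f1 is estimated from the RCT as an empirical
    frequency, while f0 (the target-cohort covariate distribution) and
    the propensity score e(x) are treated as known.
    `rct` holds (x, t, y) tuples from the trial."""
    n1 = len(rct)
    f1_hat = {x: c / n1 for x, c in Counter(x for x, _, _ in rct).items()}
    return sum(
        f0[x] / f1_hat[x] * (t * y / e(x) - (1 - t) * y / (1 - e(x)))
        for x, t, y in rct
    ) / n1

# Toy data (ours): the RCT over-represents x = 1 (70% vs 30% in the
# target cohort), and the CATE differs by stratum: 1 for x = 0, 3 for x = 1.
random.seed(1)
rct = []
for _ in range(40000):
    x = 1 if random.random() < 0.7 else 0
    t = random.randint(0, 1)                      # e(x) = 0.5
    y = (1.0 + 2.0 * x) * t + random.gauss(0.0, 1.0)
    rct.append((x, t, y))

f0 = {0: 0.7, 1: 0.3}
tau_hat = semi_oracle_ipsw(rct, f0, lambda x: 0.5)
```

Without the weight $f_0/\\hat{f_1}$, the same data would estimate the trial ATE ($0.3 \\times 1 + 0.7 \\times 3 = 2.4$) rather than the target ATE ($0.7 \\times 1 + 0.3 \\times 3 = 1.6$).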
\nRecall that $e(x)$ denotes the propensity score. We assume that the exact value of $e(x)$ is known for the RCT, and that the covariate $X$ takes finitely many values $x_1, \\ldots, x_M$.\n\n\nWhen units in the experimental dataset cover all the possible covariate values, for $m=1,\\ldots,M$, recall the Horvitz-Thompson inverse-propensity weighted estimators \\citep{horvitz1952generalization} of the CATE:\n\\begin{eqnarray*}\n\\hat \\tau(x_m) &=& \\frac{ \\sum_{S_i=1,X_i = x_m} \\frac{T_i Y_i}{e(x_m)} - \\frac{(1-T_i) Y_i }{ 1-e(x_m)}}{\\sum_{S_i=1,X_i = x_m} 1}.\n\\end{eqnarray*}\n\n\nDiscrete covariates can be further divided into two types: ordinal, for example, test grade, and categorical, such as blood type. For ordinal covariates, we can construct a smoother estimator by applying kernel-based local averaging:\n\\begin{eqnarray*}\n \\hat \\tau_\\text{K} (x_m) = \\frac{\\frac{1}{n_1 h^k} \\sum_{S_i=1} \\left( \\frac{T_i Y_i}{ e (X_i)} - \\frac{(1-T_i) Y_i}{1- e (X_i)}\\right) K \\left(\\frac{X_i -x_m}{h}\\right)}{\\frac{1}{n_1 h^k} \\sum_{S_i = 1} K\\left(\\frac{X_i -x_m}{h}\\right)},\n\\end{eqnarray*}\nwhere $K(\\cdot)$ is a kernel function and $h$ is the smoothing parameter. Conceptually, the kernel function measures how individuals with covariates in proximity to $x_m$ influence the estimation of $\\hat \\tau_\\text{K} (x_m)$.\nThis kernel-based estimator works even if the observational data does not fully overlap with the experimental data. The estimator $\\hat\\tau_\\text{K}$ is inspired by \\cite{abrevaya2015cate}, who used it to estimate the CATE.
Specifically, if the covariate is ordinal and the sample size of a sub-population with a certain covariate value is small or even zero, we can consider $\\hat \\tau_\\text{K}(x)$, as it applies local averaging so that each CATE estimate is informed by more data.\n\n\nTo study the variance of the CATE estimates $\\hat \\tau(x_m)$ and $\\hat \\tau_\\text{K} (x_m)$, we define the following terms:\n\\begin{eqnarray*}\n \\sigma_\\psi^2(x) &=& \\mathbb{E}\\left[ \\left( \\psi(X,Y,T) - \\tau(x) \\right)^2 \\mid X=x, S=1 \\right],\\\\\n \\psi(x,y,t) &=& \\frac{t(y-m^{(1)}(x))}{e(x)} - \\frac{(1-t)(y-m^{(0)}(x))}{1-e(x)} + m^{(1)}(x) - m^{(0)}(x).\n \n\\end{eqnarray*}\nThe random variable $\\psi(X,Y,T)$ is the influence function of the AIPW estimator \\citep{bang2005doubly}. The term $\\sigma_\\psi^2(x)$ measures the conditional variability of the difference in potential outcomes given covariate $X = x$, and $m^{(t)}(x)$ denotes the average outcome with treatment $t$ given covariate $X = x$.\n\n\\begin{assumption}\\label{con::clt}\nAs $n$ goes to infinity, $n_1\/n$ has a limit in $(0,1)$.\n\\end{assumption}\nAssumption~\\ref{con::clt} ensures that when we consider the asymptotic behavior of our estimators, the sample sizes of both the experimental data and the observational data go to infinity, though usually there are more observational samples than experimental ones.\n\n\\begin{theorem}\\label{thm::fclt}\nUnder Assumption~\\ref{con::clt}, for $m=1,\\ldots, M$, we have\n\\begin{eqnarray*}\n\\sqrt {n_1} (\\hat \\tau(x_m) - \\tau(x_m)) &\\stackrel{d}{\\rightarrow}& \\mathcal{N} \\left( 0, \\frac{\\sigma_\\psi^2(x_m)}{f_1(x_m)} \\right),\\\\\n\\sqrt{n_1 h} (\\hat \\tau_\\text{K} (x_m) - \\tau(x_m)) &\\stackrel{d}{\\rightarrow}&\n\\mathcal{N} \\left( 0, \\frac{\\Vert K \\Vert_2^2 \\sigma_\\psi^2(x_m) }{f_1(x_m)} \\right),\n\\end{eqnarray*}\nwhere $\\Vert K \\Vert_2 = (\\int K(u)^2 du)^{1\/2}$.\n\\end{theorem}\n\n\nTheorem~\\ref{thm::fclt} shows the asymptotic distribution of the two CATE estimators for every
possible covariate value. Complete randomization in experiments ensures that $\\hat \\tau(x)$ is unbiased. Based on the idea of the IPSW estimator, we then construct the following two reweighted estimators for the ATE:\n$$\n \\hat \\tau = \\sum_{m=1}^M f_0(x_m) \\hat \\tau(x_m), \\quad\n \\hat \\tau_\\text{K} = \\sum_{m=1}^M f_0(x_m) \\hat \\tau_\\text{K}(x_m).\n$$\nIt is easy to see that the $\\hat{\\tau}$ above is the same as the semi-oracle IPSW estimator defined in Definition~\\ref{dfn:semioracleIPSW} once we substitute in the expression of $\\hat{\\tau}(x_m)$.\n\n\n\\begin{theorem}\\label{thm::covreal}\nUnder Assumptions~\\ref{assump::identifiability1}--\\ref{con::clt}, we have\n\\begin{eqnarray*}\n n_1 \\text{var}(\\hat \\tau) &=& \\sum_{m=1}^M f_0^2(x_m)\\frac{\\sigma_\\psi^2(x_m)}{f_1(x_m)},\\\\\n n_1 h \\text{var}_\\text{a}(\\hat \\tau_\\text{K}) &=& \\Vert K \\Vert_2^2 \\sum_{m=1}^M f_0^2(x_m)\\frac{\\sigma_\\psi^2(x_m)}{f_1(x_m)},\n\\end{eqnarray*}\nwhere $\\text{var}_\\text{a}(\\cdot)$ denotes the asymptotic variance. For $m=1, \\ldots, M$, the optimal RCT covariate distribution to minimize both $\\text{var}(\\hat \\tau)$ and $\\text{var}_\\text{a}(\\hat \\tau_\\text{K})$ is \n\\begin{eqnarray*}\n f_1^*(x_m) = \\frac{f_0(x_m) \\sigma_\\psi(x_m)}{\\sum_{j=1}^M f_0(x_j) \\sigma_\\psi(x_j) }.\n\\end{eqnarray*}\n\\end{theorem}\nTheorem~\\ref{thm::covreal} indicates that even if the covariate distribution of the experimental data is exactly the same as that of the target cohort, it does not necessarily produce the most efficient estimator. The optimal RCT covariate distribution also depends on the conditional variability of potential outcomes. In fact, $f_1^*$ is essentially the target covariate distribution adjusted by the variability of conditional causal effects. This result suggests that we should sample relatively more individuals from sub-populations where the causal effect is more volatile, even if they do not make up a large proportion of the target cohort.
Moreover, the two estimators, $\\hat{\\tau}$ and $\\hat \\tau_\\text{K}$, share the same optimal covariate weight regardless of whether local averaging is applied. \n\nIn practice, if the total number of samples is fixed, experiment designers can select RCT samples with covariate allocation identical to $f_1^*$ to improve the efficiency of the IPSW estimate.\n \n\\subsection{Deviation Metric}\n\n\\begin{corollary}\\label{cor::Dmetrix}\nUnder Assumptions~\\ref{assump::identifiability1}--\\ref{con::clt}, we have\n\\begin{eqnarray*}\n n_1 \\text{var}(\\hat \\tau) &= & \n \n \\left( \\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m) \\right)^2 \\times\n \\left\\{ \\mathcal{D}(f_1) + 1\\right\\},\n \\\\\n n_1 h \\text{var}_\\text{a}(\\hat \\tau_\\text{K}) &= & \n \n \\Vert K \\Vert_2^2 \\left( \\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m) \\right)^2 \\times\n \\left\\{ \\mathcal{D}(f_1) + 1\\right\\},\n\\end{eqnarray*}\nwhere $\\text{var}_{1}(\\cdot) = \\text{var}_{X \\mid S=1}(\\cdot)$, and we define\n$$\n\\mathcal{D}(f_1) = \\text{var}_{1} \\left( \\frac{f_1^*(X)}{f_1(X)}\\right)\n$$\nas the deviation metric of the experiment samples, as it measures the difference between the optimal covariate distribution $f_1^*$ and the real covariate distribution $f_1$. We have $\\mathcal{D}(f_1) \\geq 0$, and $\\mathcal{D}(f_1) = 0$ if and only if the real covariate distribution of the experiment samples is identical to the optimal one, i.e.
${f_1^*(x)} = {f_1(x)}$ for all $x \\in \\{ x_1, \\ldots, x_M \\}$.\n\\end{corollary}\n\nAccording to Corollary~\\ref{cor::Dmetrix}, the variance of $\\hat \\tau$ depends on two parts: the first part $ \\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m)$ depends on the true distribution of the target population, while the second part $\\mathcal{D}(f_1)$ is a measure of the deviation of the RCT sample allocation $f_1$ from the optimal variability-adjusted allocation $f_1^*$, and can thus reflect the representativeness of our RCT samples.\nAs Corollary~\\ref{cor::Dmetrix} shows, the variance of the IPSW estimator for the population, $\\hat \\tau$, grows linearly with $\\mathcal{D}(f_1)$. \n\nThe deviation metric equips us with a method to compare candidate experiment designs. To be specific, if experiment designers have several potential plans for RCT samples, they can choose the one with the smallest deviation metric to maximize the estimation efficiency.\n \n\\subsection{Including Continuous Covariates}\n\nFor continuous covariates, for instance, body mass index (BMI), we apply stratification based on the propensity score. By considering an appropriate partition of the support $\\{A_1, \\ldots, A_L\\}$ with finite $L \\in \\mathbb{N}$, we can reduce it to the discrete case above. \n\n\\begin{assumption}\\label{con::cont}\nFor $l = 1, \\ldots, L$, and $x, x^\\prime \\in A_l$, we have\n\\begin{itemize}\n \\item[(i)] $\\Pr(T=1 \\mid X=x) = \\Pr (T=1 \\mid X = x^\\prime)$;\n \\item[(ii)] $\\mathbb{E}(Y^{(1)} -Y^{(0)} \\mid X=x, S=1 ) = \\mathbb{E}(Y^{(1)} -Y^{(0)} \\mid X=x^\\prime, S=1)$.\n\\end{itemize}\n\\end{assumption}\n\nAssumption~\\ref{con::cont} assumes that units within each stratum share the same propensity score and CATE. This is a strong but reasonable condition if we make each stratum $A_l$ sufficiently small.
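Whether the covariate is discrete or stratified into sets $A_l$, the optimal allocation of Theorem~\\ref{thm::covreal} and the deviation metric of Corollary~\\ref{cor::Dmetrix} reduce to simple arithmetic over segment probabilities. A minimal Python sketch with two toy segments (the numbers are our own):

```python
def optimal_allocation(f0, sigma):
    """f1*(x) proportional to f0(x) * sigma_psi(x): the target
    distribution adjusted by the conditional variability."""
    z = sum(f0[x] * sigma[x] for x in f0)
    return {x: f0[x] * sigma[x] / z for x in f0}

def deviation(f1, f1_star):
    """D(f1) = var_{X|S=1}(f1*(X) / f1(X)); zero iff f1 equals f1*."""
    mean = sum(f1[x] * f1_star[x] / f1[x] for x in f1)          # equals 1
    return sum(f1[x] * (f1_star[x] / f1[x]) ** 2 for x in f1) - mean ** 2

# Two toy segments (our numbers): equal target shares, but the causal
# effect in segment 1 is four times as volatile as in segment 0.
f0 = {0: 0.5, 1: 0.5}
sigma = {0: 1.0, 1: 4.0}
f1_star = optimal_allocation(f0, sigma)    # {0: 0.2, 1: 0.8}
d_mimic = deviation(f0, f1_star)           # mimicking the cohort: 0.36
d_opt = deviation(f1_star, f1_star)        # optimal allocation: 0.0
```

Plugging these numbers into Corollary~\\ref{cor::Dmetrix}: the cohort-mimicking allocation gives $n_1 \\text{var}(\\hat \\tau) = (0.5 \\cdot 1 + 0.5 \\cdot 4)^2 (0.36 + 1) = 8.5$, which matches the direct formula $0.5 \\cdot 1 + 0.5 \\cdot 16 = 8.5$, whereas the optimal allocation gives $6.25$.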
Under Assumption~\\ref{con::cont}, let $\\hat\\tau(A_l)$, $\\hat\\tau_\\text{K}(A_l)$, $\\sigma_\\psi^2(A_l)$ and $e(A_l)$ denote the causal effect estimate, the causal effect estimate with kernel-based local averaging, the variance of the influence function and the propensity score, respectively, conditioned on $X \\in A_l$. Let $f_0(A_l) = \\Pr(X \\in A_l \\mid S=0)$ and $f_1(A_l) = \\Pr(X \\in A_l \\mid S=1)$. We can then construct two IPSW estimators:\n$$\n \\hat \\tau = \\sum_{l=1}^L f_0(A_l) \\hat \\tau(A_l), \\quad\n \\hat \\tau_\\text{K} = \\sum_{l=1}^L f_0(A_l) \\hat \\tau_\\text{K}(A_l).\n$$\n\nAs shown in Corollary~\\ref{thm::covc}, we have similar results to Theorem~\\ref{thm::covreal}, but instead of the optimal covariate distribution, we derive the optimal probability on each covariate set $A_l$.\n\n\\begin{corollary}\\label{thm::covc}\nUnder Assumptions~\\ref{assump::identifiability1}--\\ref{con::cont}, we have\n\\begin{eqnarray*}\n n_1 \\text{var}(\\hat \\tau) &=& \\sum_{l=1}^L f_0^2(A_l)\\frac{\\sigma_\\psi^2(A_l)}{f_1(A_l)},\\\\\n n_1 h \\text{var}_\\text{a}(\\hat \\tau_\\text{K}) &=& \\Vert K \\Vert_2^2 \\sum_{l=1}^L f_0^2(A_l)\\frac{\\sigma_\\psi^2(A_l)}{f_1(A_l)}.\n\\end{eqnarray*}\nFor $l=1, \\ldots, L$, the optimal distribution on each covariate set to minimize both $\\text{var}(\\hat \\tau)$ and $\\text{var}_\\text{a}(\\hat \\tau_\\text{K})$ is \n\\begin{eqnarray*}\n f_1^*(A_l) = \\frac{f_0(A_l) \\sigma_\\psi(A_l)}{\\sum_{j=1}^L f_0(A_j) \\sigma_\\psi(A_j) }.\n\\end{eqnarray*}\nMoreover,\n\\begin{eqnarray*}\n \\sum_{l=1}^L f_0^2(A_l)\\frac{\\sigma_\\psi^2(A_l)}{f_1(A_l)}\n &=& \n \\left( \\sum_{l=1}^L f_0(A_l) \\sigma_\\psi(A_l) \\right)^2 \\times\n \\left\\{ \\mathcal{D}(f_1) + 1\\right\\}.\n\\end{eqnarray*}\n\\end{corollary}\n\n\n\n\n\n\nIn the sections that follow, for simplicity, we illustrate our method in the scenario where the covariates are all discrete with finite possible values.
The results can easily be extended to include continuous covariates following the same logic as descibed in this section.\n\n\\section{Estimating Conditional Variability}\n\\label{sec:3estimate}\nThe optimal covariate allocation derived above can benefit the planning of the composition of RCT samples. However, it's difficult, or impossible, to estimate the conditional variability of potential outcomes prior to RCTs being carried out. In this section, we provide a practical strategy using information from the observational data to estimate the theoretical optimal covariate distribution, and derive conditions under which our strategy yields consistent results.\n\nIn completely randomized experiments, we can show that\n\\begin{eqnarray*}\n\\sigma_\\psi^2(x) = \\frac{1}{e(x)} \\text{var}(Y^{(1)} \\mid X=x) + \\frac{1}{1-e(x)} \\text{var}(Y^{(0)} \\mid X=x).\n\\end{eqnarray*}\nTo estimate $\\sigma_\\psi^2(x)$ by observational data, $\\forall x$, let\n\n\\begin{align*}\n\\widehat Y^{(0)}(x) &= \\frac{\\sum_{S_i=0,T_i=0} Y_i}{\\sum_{S_i=0,T_i=0} 1 }, \\quad&\n\\widehat Y^{(1)}(x) &= \\frac{\\sum_{S_i=0,T_i=1} Y_i}{\\sum_{S_i=0,T_i=1} 1 },\\\\\n\\widehat S^{(0)}(x) &= \\frac{\\sum_{S_i=0,T_i=0} \\left(Y_i - \\widehat Y^{(0)}(x) \\right)^2}{\\sum_{S_i=0,T_i=0} 1 - 1}, \\quad&\n\\widehat S^{(1)}(x) &= \\frac{\\sum_{S_i=0,T_i=1} \\left(Y_i - \\widehat Y^{(1)}(x) \\right)^2}{\\sum_{S_i=0,T_i=1} 1 - 1}.\n\\end{align*}\n\nWe can then estimate the conditional variability of potential outcomes, the optimal covariate distribution and the deviation metric of RCT samples from observational data as follows\n\\begin{eqnarray*}\n\\hat{\\sigma}_\\psi^2(x) &=& \\frac{1}{e(x)}\\widehat S^{(1)}(x) + \\frac{1}{1-e(x)}\\widehat S^{(0)}(x)\\\\\n\\hat f_1^*(x_m) &= &\\frac{f_0(x_m) \\hat\\sigma_\\psi(x_m)}{\\sum_{j=1}^M f_0(x_j) \\hat\\sigma_\\psi(x_j) },\\\\\n\\widehat{\\mathcal{D}}(f_1) &=& \\text{var}_{1} \\left( \\frac{\\hat 
f_1^*(X)}{f_1(X)}\\right).\n\\end{eqnarray*}\n\n\nAssumption~\\ref{cond::prop} below ensures the consistency of the estimated conditional variance of potential outcomes $\\hat{\\sigma}_\\psi^2(x)$. The main problem with estimating $\\hat{\\sigma}_\\psi^2(x)$ from observational data is the possibility of unobserved confounding. Instead of assuming unconfoundedness, \nour Assumption~\\ref{cond::prop} only requires that the expectation of the estimate $\\hat{\\sigma}_\\psi^2(x)$ \nis proportional to the target conditional variability, which is a weaker condition.\n\\begin{assumption}\\label{cond::prop}\nFor all $x$, suppose\n\\begin{eqnarray*}\n&& \\frac{1}{e(x)}\\text{var}{(Y^{(1)} \\mid X=x, T=1, S=0)} + \\frac{1}{1-e(x)}\\text{var}{(Y^{(0)} \\mid X=x, T=0, S=0)} \\\\\n&=& c \\left[ \\frac{1}{e(x)}\\text{var}{(Y^{(1)} \\mid X=x, S=0)} + \\frac{1}{1-e(x)}\\text{var}{(Y^{(0)} \\mid X=x, S=0)} \\right],\n\\end{eqnarray*}\nwhere $c>0$ is an unknown constant.\n\\end{assumption}\nThe left-hand side of the equation above is the conditional variance of observed outcomes, which can be estimated from the observational data, and the right-hand side is the theoretical conditional variance of potential outcomes that we want to approximate. Assumption~\\ref{cond::prop} requires these two quantities to be proportional, rather than exactly equal. Intuitively, the assumption supposes that the covariate segments in the observational data that exhibit high volatility in observed outcomes also have high variance in their potential outcomes, although the absolute levels of variance need not be the same.\n\n\\begin{theorem}\\label{thm::asyocd}\nUnder Assumption~\\ref{con::clt} and Assumption~\\ref{cond::prop}, for all $x$ we have\n$$\n \\hat f_1^*(x) \\rightarrow f_1^*(x), \\quad \\widehat{\\mathcal{D}}(f_1) \\rightarrow \\mathcal{D}(f_1).\n$$\nThus, $\\hat f_1^*(x)$ and $\\widehat{\\mathcal{D}}(f_1)$ are consistent. 
\n\\end{theorem}\n\nBased on Theorem~\\ref{thm::asyocd}, we propose a novel strategy to select efficient RCT samples. Specifically, we select the candidate experimental design with covariate allocation $f_1$ that minimizes the estimate of the deviation metric $\\widehat{\\mathcal{D}}(f_1)$. By contrast, a naive strategy prefers the candidate experimental design with $f_1 = f_0$, which mimics exactly the covariate distribution in the target cohort. If the conditional variability of potential outcomes $\\sigma_\\psi^2(x)$ varies widely with $x$, our strategy can lead to a much more efficient treatment effect estimator than the naive strategy.\n\n\\section{Practical Scenarios}\n\\label{sec:4pras}\n\\subsection{Heterogeneous Unit Cost}\nWe also consider experimental designs with a cost constraint and heterogeneous costs for different sub-populations. The goal is to find the optimal sample allocation for the RCT that minimizes the variance of the proposed estimator subject to a cost constraint. For $m = 1, \\ldots, M$, let $C_m$ denote the cost of collecting a sample in the sub-population with $X = x_m$. \n\n\\begin{theorem}\\label{thm::cost}\nUnder the cost constraint that\n$$\n \\sum_{m=1}^M C_m \\left( \\sum_{S_i=1, X_i = x_m} 1 \\right) = C,\n$$\nthe optimal sample allocation $f_1(X)$ that minimizes $\\text{var}(\\hat \\tau)$ is\n$$\n f_1^{\\text{c},*} (x_m) = \\frac{f_0(x_m)\\sigma_\\psi(x_m)\/\\sqrt{C_m}} {\\sum_{j=1}^M f_0(x_j)\\sigma_\\psi(x_j)\/\\sqrt{C_j}}\n$$\nfor $m = 1, \\ldots, M$. Here we use the superscript $c$ to denote the cost constraint.\n\\end{theorem}\n\nTheorem~\\ref{thm::cost} suggests that the optimal RCT covariate allocation under the given cost constraint is the covariate allocation of the target cohort adjusted by both the heterogeneous costs of the sub-populations and the conditional variability of potential outcomes. 
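A small sketch of the cost-adjusted allocation in Theorem~\\ref{thm::cost} (the costs and variabilities below are made up for illustration, not taken from the paper); with equal unit costs it reduces to the unconstrained optimum $f_1^*$:

```python
# Sketch: cost-constrained optimal allocation
# f_1^{c,*}(x_m) proportional to f_0(x_m) * sigma_psi(x_m) / sqrt(C_m).
import math

def optimal_allocation(f0, sigma, costs):
    w = [p * s / math.sqrt(c) for p, s, c in zip(f0, sigma, costs)]
    total = sum(w)
    return [wi / total for wi in w]

f0 = [0.3, 0.2, 0.5]        # target-cohort distribution (illustrative)
sigma = [2.0, 22.7, 114.6]  # conditional variability (illustrative)

alloc_cost = optimal_allocation(f0, sigma, [20, 30, 40])  # heterogeneous costs
alloc_flat = optimal_allocation(f0, sigma, [1, 1, 1])     # equals f_1^*
print(alloc_cost)
print(alloc_flat)
```

Relative to the equal-cost optimum, the allocation shifts weight toward the cheaper sub-populations, matching the intuition stated in the text.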
Intuitively, compared to the case without heterogeneous costs, we should include more RCT samples from sub-populations with lower unit cost.\n\n\n\\subsection{Different Precision Requirements}\nTheorem~\\ref{thm::covc} shows the optimal sample allocation that maximizes the efficiency of the average treatment effect estimator for the target cohort. If we instead require the same precision for the estimator in each segment, we need the sample allocation\n$$\n f_1^{\\text{s},*}(x_m) = \\frac{ \\sigma^2_\\psi(x_m)}{\\sum_{j=1}^M \\sigma^2_\\psi(x_j) },\n$$\nwhere we use the superscript s to denote the requirement of the same precision for the CATE estimate in each segment.\n\nTo take both objectives into consideration, we propose a compromise allocation, indexed by $k \\in [0,1]$, that falls between the two optimal allocations:\n$$\n f_1^{k,*}(x_m) = \\frac{f_0^k(x_m) \\sigma^{2-k}_\\psi(x_m)}{\\sum_{j=1}^M f_0^k(x_j) \\sigma^{2-k}_\\psi(x_j) }.\n$$\n\n\n\\begin{corollary}\\label{prop::1}\nIf for $m = 1, \\ldots, M$, \n$$\n f_0(x_m) = \\frac{\\sigma_\\psi(x_m)}{\\sum_{j=1}^M \\sigma_\\psi(x_j)},\n$$\nwe have, for all $k \\in [0,1]$,\n$$\n f_1^*(X) = f_1^{\\text{s},*}(X) = f_1^{k,*}(X).\n$$\n\nThe deviation metrics for the sample allocations under the same-precision strategy and the compromise strategy are\n\\begin{eqnarray*}\n \\mathcal{D}(f_1^{\\text{s},*}) &=& \\text{var}_1\\left( \\frac{f_0(X)}{\\sigma_\\psi(X)} \\right) \\left( \\frac{\\sum_{m=1}^M \\sigma^2_\\psi(x_m)}{\\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m)} \\right)^2,\\\\\n \\mathcal{D}(f_1^{k,*}) &=& \\text{var}_1\\left( \\frac{f_0(X)}{\\sigma_\\psi(X)} \\right)^{1-k} \\left( \\frac{\\sum_{m=1}^M f_0^{k}(x_m) \\sigma^{2-k}_\\psi(x_m)}{\\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m)} \\right)^2,\n\\end{eqnarray*}\nrespectively.\n\\end{corollary}\n\n\\section{Numerical Study}\n\\label{sec:5numerical}\n\\subsection{Simulation}\nIn this section, we conduct a simulation study to illustrate how the representativeness of experiment samples influences the 
estimation efficiency of the average treatment effect for a target cohort, and demonstrate how the representativeness metric $\\mathcal{D}(f_1)$ can facilitate the selection among candidate RCT sample designs. We set the size of the observational dataset to $n_0 = 10000$ and the size of the experimental dataset to $n_1 = 200$. For the units in the observational data, we draw covariates $x$ from $\\{1, 2, 3\\}$ with probabilities 0.3, 0.2 and 0.5, respectively. We then set\n\\begin{align*}\n Y^{(0)} = 2X + X^4\\epsilon, \\quad Y^{(1)} = 1 - X + \\epsilon, \\text{ so that}\\\\\n Y^{(1)} - Y^{(0)} = 1 -3X + (1 - X^4)\\epsilon,\n\\end{align*}\nwhere $\\epsilon \\sim \\mathcal{N}(0,1)$. We can then compute the conditional variability of potential outcomes $\\sigma^2_\\psi(x)$, and thus the optimal covariate distribution $f^*_1$, from the true population. Our model generates markedly different conditional variability $\\sigma^2_\\psi(x)$ across the values of $x$, making the optimal covariate distribution $f_1^*$ very different from the target covariate distribution $f_0$.\n\nFor the experimental data, we simulate 100 different candidate experimental sample designs. In each design, we randomly draw experiment samples from the target cohort with probability\n$$\n \\Pr(S=1 \\mid X=x) = \\frac{e^{p_x}}{e^{p_1} + e^{p_2} + e^{p_3}},\n$$\nwhere $p_1, p_2, p_3$ are i.i.d. samples drawn from a standard normal distribution. We can then compute the realized covariate distribution $f_1(x)$ and the representativeness metric $\\mathcal{D}(f_1)$. To estimate the efficiency of the average treatment effect estimator, we conduct 1000 experiments for each design. In each experiment, the treatment for each unit follows a Bernoulli distribution with probability 0.5. The simulation result is shown in Fig~\\ref{fig:sim}. The relationship between the variance and the representativeness metric can be fitted by a straight line, which is consistent with our result that $\\text{var}(\\hat \\tau) \\propto \\mathcal{D}(f_1)$. 
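Under this design the treatment probability is $e(x)=0.5$, so by the formula of Section~\\ref{sec:3estimate}, $\\sigma_\\psi^2(x) = 2\\,\\text{var}(Y^{(1)}\\mid x) + 2\\,\\text{var}(Y^{(0)}\\mid x) = 2 + 2x^8$, and the optimal allocation is available in closed form. A short sketch reproducing just this calculation (not the full Monte Carlo study):

```python
# sigma_psi^2(x) for the simulation model with e(x) = 0.5:
# var(Y1 | x) = 1, and var(Y0 | x) = x^8 since Y0 = 2x + x^4 * eps.
f0 = {1: 0.3, 2: 0.2, 3: 0.5}
sigma2 = {x: 2 * 1 + 2 * x**8 for x in f0}           # = 2 + 2 x^8
sigma = {x: sigma2[x] ** 0.5 for x in f0}

norm = sum(f0[x] * sigma[x] for x in f0)
f1_star = {x: f0[x] * sigma[x] / norm for x in f0}   # optimal allocation
print(f1_star)
```

The heavy weight the optimum places on $x=3$ (where $\\sigma_\\psi^2$ is largest) is what makes $f_1^*$ so different from $f_0$ in this example.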
The red line shows the value of $\\mathcal{D}(f_1)$ for the naive strategy mimicking exactly the target cohort distribution; it is not zero, and we can see that the naive design is not the optimal RCT sample and does not produce the most efficient causal estimator. \n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.5\\textwidth]{OS1.pdf}\n \\caption{How the deviation metric of experiment samples $\\mathcal{D}$ correlates with the estimated variance of $\\hat{\\tau}$. The red line marks the deviation metric $\\mathcal{D}$ of a trial sample selected following the na\\\"ive strategy.}\n \\label{fig:sim}\n\\end{figure}\n\nFor the case with heterogeneous unit costs, we set the costs for the sub-populations with covariate $X$ equal to 1, 2, 3 to be 20, 30, 40, respectively. The total capital available is 30000. Instead of randomly drawing experiment samples from the target cohort with a fixed total number of subjects, we randomly assign budgets to the different sub-populations with a fixed total cost. Given the budget assigned to each sub-population, we then draw subjects randomly from that sub-population, where the number of subjects is determined by the assigned budget. The simulation result is illustrated in Figure~\\ref{fig:simc}. We can see that under the cost constraint, experiment samples that follow a distribution closer to $f_1^{c,*}$ lead to a more efficient causal estimator.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.5\\textwidth]{OS2.pdf}\n \\caption{How the deviation metric of experiment samples with cost constraint influences the variance of $\\hat{\\tau}$.}\n \\label{fig:simc}\n\\end{figure}\n\n\\subsection{Real Data Illustration}\nWe use the well-cited Tennessee Student\/Teacher Achievement Ratio (STAR) experiment to assess how the covariate distribution of experiment samples influences the estimation efficiency of the average treatment effect for the target cohort. 
STAR is a randomized experiment started in 1985 to measure the effect of class size on student outcomes, measured by standardized test scores. Similar to the exercise in \\cite{kallus2018removing}, we focus on a binary treatment: $T=1$ for small classes (13-17 pupils), and $T=0$ for regular classes (22-25 pupils). Since many students only started the study at first grade, we take as treatment their class type at first grade. The outcome $Y$ is the sum of the listening, reading, and math standardized test scores at the end of first grade. We use the following covariates $X$ for each student: student race ($X_1 \\in \\{ 1, 2 \\}$) and school urbanicity ($X_2 \\in \\{1,2,3,4\\}$). We exclude units with missing covariates. The records of 4584 students remain, with 1733 assigned to treatment (small class, $T=1$), and 2413 to control (regular size class, $T=0$). Before the analysis we impute the missing outcomes by linear regression on the treatment and the two covariates, so that both potential outcomes $Y^{(0)}$ and $Y^{(1)}$ are known for each student.\n\nWe simulate 500 candidate experiment sample allocations. For each allocation, we select $n_1 = 500$ experiment units from the dataset with probability\n$$\n \\Pr(S=1 \\mid X=x) = \\frac{e^{p_{x_1x_2}}}{e^{p_{11}} + e^{p_{12}} + e^{p_{13}}+ e^{p_{14}}+ e^{p_{21}}+ e^{p_{22}}+ e^{p_{23}}+ e^{p_{24}}},\n$$\nwhere $p_{11}, p_{12}, p_{13}, p_{14}, p_{21}, p_{22}, p_{23}, p_{24}$ are i.i.d. samples drawn from a standard normal distribution. We can then compute the realized covariate distribution $f_1(x)$ and the representativeness of the experiment samples $\\mathcal{D}(f_1) = \\text{var}_1 (f_1^*(X) \/ f_1(X))$. To estimate the efficiency of the average treatment effect estimator, we conduct 200 experiments for each design. In each experiment, the treatment follows a Bernoulli distribution with probability $1733\/4584 = 0.378$. The simulation result is shown in Fig~\\ref{fig:real}. 
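For discrete covariate cells, the deviation metric used in these experiments, $\\mathcal{D}(f_1)=\\text{var}_1(f_1^*(X)\/f_1(X))$, simplifies to $\\sum_x f_1^*(x)^2\/f_1(x) - 1$, because $\\mathrm{E}_1[f_1^*(X)\/f_1(X)] = \\sum_x f_1^*(x) = 1$. A small sketch (with illustrative cell probabilities) showing that the metric vanishes exactly at the optimal allocation:

```python
# Deviation metric D(f1) = var_1( f1*(X) / f1(X) ) over discrete cells.
def deviation(f1_star, f1):
    mean = sum(f1_star.values())                    # E_1[f1*/f1] = 1
    second = sum(s**2 / f1[x] for x, s in f1_star.items())
    return second - mean**2

f1_star = {"11": 0.4, "12": 0.35, "21": 0.25}       # illustrative optimum
print(deviation(f1_star, dict(f1_star)))            # approximately 0
print(deviation(f1_star, {"11": 1/3, "12": 1/3, "21": 1/3}) > 0)
```

This is why ranking candidate designs by $\\widehat{\\mathcal{D}}(f_1)$ is equivalent to ranking them by the (asymptotic) variance of the resulting estimator.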
The relationship between the variance and the deviation metric $\\mathcal{D}(f_1)$ can be fitted by a straight line, which is consistent with our result that $\\text{var}(\\hat \\tau) \\propto \\mathcal{D}(f_1)$.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.5\\textwidth]{sim2.pdf}\n \\caption{How the deviation metric of experiment samples $\\mathcal{D}$ correlates with the estimated variance of $\\hat{\\tau}$. The red line marks the deviation metric $\\mathcal{D}$ of a trial selected following the na\\\"ive strategy.}\n \\label{fig:real}\n\\end{figure}\n\n\n\\section{Conclusion}\nIn this paper, we examine the common procedure of generalizing causal inference from an RCT to a target cohort. We approach this as a problem in which we can combine an RCT with observational data. The observational data play two roles in the combination: one is to provide the exact covariate distribution of the target cohort, and the other is to provide a means to estimate the conditional variability of the causal effect across covariate values.\n\nWe give the expression for the variance of the Inverse Propensity Sampling Weights (IPSW) estimator as a function of the covariate distribution in the RCT. We subsequently derive the variance-minimizing optimal covariate allocation in the RCT, under the constraint that the size of the trial population is fixed. Our result indicates that the optimal covariate distribution of the RCT does not necessarily follow the exact distribution of the target cohort, but is instead adjusted by the conditional variability of potential outcomes. Practitioners who are at the design phase of a trial can use the optimal allocation result to plan the group of participants to recruit into the trial. \\par\nWe also formulate a deviation metric quantifying how far a given RCT allocation is from optimal. 
The advantage of this metric is that it is proportional to the variance of the final ATE estimate, so that when presented with several candidate RCT cohorts, practitioners can compare them and choose the most efficient RCT according to this metric.\\par\nThe above results depend on the conditional variability of the causal effect across covariate values, which is unknown. We propose to estimate it using the observational data and outline the mild assumptions that need to be met. \nIn reality, practitioners usually have complex considerations when designing a trial, for instance cost constraints and precision requirements. We develop variants of our main results that apply in such practical scenarios. Finally, we use two numerical studies to corroborate our theoretical results. \n\n\\bibliographystyle{abbrvnat}\n\n\\section{Introduction }\n\nThe Standard Electroweak Model based on the gauge group $ SU(2)\\times U(1)$ gives a good description of electroweak processes.\nOne of the unsolved problems is the origin of electroweak symmetry breaking. \nIn the standard formulation the scalar field (Higgs boson) performs this task via the Higgs mechanism, which generates mass terms for the vector bosons. \nThe rather artificial Higgs mechanism, with its imaginary bare mass, is a naive relativistic analog of the phenomenological description of superconductivity \\cite{Sh-09}.\nHowever, it is not yet \nexperimentally verified whether electroweak symmetry is broken by such a Higgs mechanism, or by something else.\nThe emergence of a large number of Higgsless models \\cite{C-05}--\\cite{MT-08} was stimulated by the difficulties with the Higgs boson. These models are mainly based on extra dimensions of different types or on larger gauge groups. \nA finite electroweak model without a Higgs particle, which uses a regularized quantum field theory \\cite{E-66},\\cite{E-67}, was developed in\n\\cite{MT-08}. 
\n\nOne important ingredient of the Standard Model is the simple group $ SU(2)$. The notion of group contraction \\cite{IW-53}, i.e. a limiting operation which transforms, for example, a simple or semisimple group into a non-semisimple one, has been well known in physics for more than fifty years. From a general point of view,\nfor a better understanding of a physical system it is useful to investigate its properties for limiting values of its physical parameters.\nIn particular, for a gauge model one such limiting case corresponds to a model with a contracted gauge group.\nGauge theories for non-semisimple groups whose Lie algebras admit invariant non-degenerate metrics were considered in \\cite{NW-93},\\cite{T-95}.\n\n\n\nIn the present paper a modified formulation of the Higgsless Electroweak Model and its limiting case for the contracted gauge group \n $ SU(2;j)\\times U(1)$ are considered.\nFirstly we observe that the quadratic form $\\phi^\\dagger\\phi=\\phi_1^*\\phi_1+\\phi_2^*\\phi_2=R^2$\nof the complex matter field $\\phi \\in {\\bf C}_2$ is invariant with respect to the gauge group transformations \n$ SU(2)\\times U(1)$, so we can restrict the fields to this quadratic form without loss of gauge invariance of the model. This quadratic form defines a three-dimensional sphere in the four-dimensional Euclidean space of the real components of $\\phi,$ on which the noneuclidean spherical geometry is realized.\n\nSecondly we introduce the {\\it free} matter field Lagrangian in this spherical field space, which together with the standard gauge field Lagrangian forms the full Higgsless Lagrangian of the model. Its second order terms reproduce the same fields as the Standard Electroweak Model but without the remaining real dynamical Higgs field. 
The vector boson masses are automatically generated and are given by the same formulas as in the Standard Electroweak Model, so there is no need for a special mechanism of spontaneous symmetry breaking.\nThe fermion Lagrangian of the Standard Electroweak Model is modified by replacing the fields $\\phi$ with the fields restricted to the quadratic form, in such a way that its second order terms provide the electron mass while the neutrino remains massless.\n\nWe recall the definition and properties of the contracted group $SU(2;j)$ in Sec. 2. \nIn Sec. 3, we step by step modify\nthe main points of the Electroweak Model for the gauge group $SU(2;j)\\times U(1)$.\nWe find the transformation properties\nof the gauge and matter fields under contraction. After that we obtain the Lagrangian of the contracted model from the noncontracted one by substituting the transformed fields. \nThe limiting case of the modified Higgsless Electroweak Model is regarded in Sec. 4. \nWhen the contraction parameter tends to zero $j \\rightarrow 0$ or takes the nilpotent value $j=\\iota$, the field space is fibered \\cite{Gr-09} in such a way that the electromagnetic, Z-boson and electron fields are in the base, whereas the charged W-boson and neutrino fields are in the fiber.\nWithin the framework of the limit model the base fields can be interpreted as external ones with respect to the fiber fields,\n in the sense that the fiber fields do not affect the base fields. \nThe field interactions are simplified under contraction.\nSec. 5 is devoted to the conclusions.\n\n\n\\section{Contracted Special Unitary group $SU(2;j)$ }\n\nLet us consider the two-dimensional complex fibered vector space $\\Phi_2(j)$ with one-dimensional base $\\left\\{\\phi_1\\right\\}$\nand one-dimensional fiber $\\left\\{\\phi_2\\right\\}$ \\cite{Gr-94}. This space has two hermitian forms: the first in the base, $\\bar{\\phi_1}\\phi_1=|\\phi_1|^2$, and the second in the fiber, $\\bar{\\phi_2}\\phi_2=|\\phi_2|^2,$ where the bar denotes complex conjugation. 
Both forms can be written in one formula\n\\begin{equation}\n\\phi^\\dagger(j)\\phi(j)=|\\phi_1|^2+ j^2|\\phi_2|^2,\n\\label{g1}\n\\end{equation} \nwhere $\\phi^\\dagger(j)=(\\bar{\\phi_1},j\\bar{\\phi_2}), $\nthe parameter $j=1, \\iota$ and $\\iota$ is the nilpotent unit, $\\iota^2=0.$ \nFor the nilpotent unit the following heuristic rules hold: \n1) division of a real or complex number by $\\iota$ is not defined, i.e. for a real or complex $a$ the expression $\\frac{a}{\\iota}$\nis defined only for $a=0$; \n2) however, identical nilpotent units can be cancelled, $\\frac{\\iota}{\\iota}=1.$ \n \n\n The special unitary group $SU(2;j)$ is defined as the transformation group of $\\Phi_2(j)$ which keeps the hermitian form (\\ref{g1}) invariant, i.e.\n$$ \n\\phi'(j)=\n\\left(\\begin{array}{c}\n\\phi'_1 \\\\\nj\\phi'_2\n\\end{array} \\right)\n=\\left(\\begin{array}{cc}\n\t\\alpha & j\\beta \\\\\n-j\\bar{\\beta}\t & \\bar{\\alpha}\n\\end{array} \\right)\n\\left(\\begin{array}{c}\n\\phi_1 \\\\\nj\\phi_2\n\\end{array} \\right)\n=u(j)\\phi(j), \\quad\n$$\n\\begin{equation}\n\\det u(j)=|\\alpha|^2+j^2|\\beta|^2=1, \\quad u(j)u^{\\dagger}(j)=1.\n\\label{g3}\n\\end{equation} \n\n The fundamental representations of the one-parameter subgroups of $SU(2;j)$ are easily obtained:\n\\begin{equation}\nu_1(\\alpha_1;j)=e^{\\alpha_1T_1(j)}=\\left(\\begin{array}{cc}\n\t\\cos \\frac{j\\alpha_1}{2} & i\\sin \\frac{j\\alpha_1}{2} \\\\\ni\\sin \\frac{j\\alpha_1}{2}\t & \\cos \\frac{j\\alpha_1}{2}\n\\end{array} \\right),\n\\label{g4}\n\\end{equation} \n\\begin{equation}\nu_2(\\alpha_2;j)=e^{\\alpha_2T_2(j)}=\\left(\\begin{array}{cc}\n\t\\cos \\frac{j\\alpha_2}{2} & \\sin \\frac{j\\alpha_2}{2} \\\\\n-\\sin \\frac{j\\alpha_2}{2}\t & \\cos \\frac{j\\alpha_2}{2}\n\\end{array} \\right),\n\\label{g5}\n\\end{equation} \n\\begin{equation}\nu_3(\\alpha_3;j)=e^{\\alpha_3T_3(j)}=\\left(\\begin{array}{cc}\n\te^{i\\frac{\\alpha_3}{2}} & 0 \\\\\n0\t & e^{-i\\frac{\\alpha_3}{2}}\n\\end{array} 
\\right).\n\\label{g6}\n\\end{equation} \nThe corresponding generators \n$$ \n T_1(j)= j\\frac{i}{2}\\left(\\begin{array}{cc}\n\t0 & 1 \\\\\n\t1 & 0\n\\end{array} \\right)=j\\frac{i}{2}\\tau_1, \\quad \nT_2(j)= j\\frac{i}{2}\\left(\\begin{array}{cc}\n\t0 & -i \\\\\n\ti & 0\n\\end{array} \\right)=j\\frac{i}{2}\\tau_2,\n$$\n\\begin{equation} \nT_3(j)= \\frac{i}{2}\\left(\\begin{array}{cc}\n\t1 & 0 \\\\\n\t0 & -1\n\\end{array} \\right)=\\frac{i}{2}\\tau_3, \n\\label{g7}\n\\end{equation} \nwith $\\tau_k$ being the Pauli matrices, satisfy the commutation relations\n$$ \n[T_1(j),T_2(j)]=-j^2T_3(j), \\quad [T_3(j),T_1(j)]=-T_2(j), \n$$\n\\begin{equation} \n [T_2(j),T_3(j)]=-T_1(j),\n\\label{g8}\n\\end{equation}\n and form the Lie algebra $su(2;j)$ with the general element\n\\begin{equation} \n T(j)=\\sum_{k=1}^{3}a_kT_k(j)= \\frac{i}{2}\\left(\\begin{array}{cc}\n\ta_3 & j(a_1-ia_2) \\\\\n\tj(a_1+ia_2) & -a_3\n\\end{array} \\right)=-T^{\\dagger}(j).\n\\label{g8-1}\n\\end{equation}\n\n\n \nThere are two more or less equivalent ways of group contraction. We can put the contraction parameter equal to the nilpotent unit $j=\\iota$ or tend it to zero $j\\rightarrow 0$. Sometimes it is convenient to use the first (mathematical) approach, sometimes the second (physical) one. For example, the matrix $u(j)$ (\\ref{g3}) has non-zero nilpotent non-diagonal elements for $j=\\iota$,\nwhereas for $j\\rightarrow 0$ they are formally equal to zero. Nevertheless both approaches lead to the same final results.\n \nLet us describe the contracted group $SU(2;\\iota)$ in detail. For $j=\\iota$ it follows from (\\ref{g3}) that \n$\\det u(\\iota)=|\\alpha|^2=1,$ i.e. $\\alpha=e^{i\\varphi},$ therefore\n\\begin{equation} \nu(\\iota)= \n\\left(\\begin{array}{cc}\ne^{i\\varphi}\t & \\iota\\beta \\\\\n-\\iota\\bar{\\beta}\t & e^{-i\\varphi}\n\\end{array} \\right), \\quad\n\\beta=\\beta_1+i\\beta_2 \\in {\\bf C}. 
\n\\label{g9}\n\\end{equation}\nFunctions of a nilpotent argument are defined by their Taylor expansion, in particular, \n$\\cos\\iota x=1,\\;\\sin\\iota x=\\iota x.$ Then the one-parameter subgroups of $SU(2;\\iota)$ take the form \n\\begin{equation}\nu_1(\\alpha_1;\\iota) \n=\\left(\\begin{array}{cc}\n\t1 & \\iota i\\frac{\\alpha_1}{2} \\\\\n\t\\iota i\\frac{\\alpha_1}{2} & 1\n\\end{array} \\right), \\quad\nu_2(\\alpha_2;\\iota) \n=\\left(\\begin{array}{cc}\n1 & \\iota \\frac{\\alpha_2}{2} \\\\\n-\\iota \\frac{\\alpha_2}{2}\t & 1\n\\end{array} \\right).\n\\label{g11}\n\\end{equation} \nThe third subgroup is unchanged and is given by (\\ref{g6}).\nThe simple group $SU(2)$ is contracted to the non-semisimple group $SU(2;\\iota)$, which is isomorphic to the real Euclidean group $E(2).$\nThe first two generators of the Lie algebra $su(2;\\iota)$ commute, $[T_1(\\iota),T_2(\\iota)]=0$, and the remaining commutators are given by (\\ref{g8}). For the general element (\\ref{g8-1}) \nof $su(2;\\iota)$ the corresponding group element of $SU(2;\\iota)$ is as follows \n\\begin{equation} \nu(\\iota)=e^{T(\\iota)}=\n\\left(\\begin{array}{cc}\ne^{i\\frac{a_3}{2}}\t & \\iota i\\frac{\\bar{a}}{a_3}\\sin \\frac{a_3}{2} \\\\\n\\iota i\\frac{a}{a_3}\\sin \\frac{a_3}{2} & e^{-i\\frac{a_3}{2}}\n\\end{array} \\right), \\quad a=a_1+ia_2\\in C.\n\\label{g12}\n\\end{equation} \n \n \nThe actions of the unitary group $U(1)$ and the electromagnetic subgroup $U(1)_{em}$ \nin the fibered space $\\Phi_2(\\iota)$ are given by the same matrices as in the space $\\Phi_2$, namely\n\\begin{equation}\nu(\\beta)=e^{\\beta Y}=\\left(\\begin{array}{cc}\n\te^{i\\frac{\\beta}{2}} & 0 \\\\\n0\t & e^{i\\frac{\\beta}{2}}\n\\end{array} \\right), \\quad\nu_{em}(\\gamma)=e^{\\gamma Q}=\\left(\\begin{array}{cc}\n\te^{i\\gamma} & 0 \\\\\n0\t & 1\n\\end{array} \\right),\n\\label{g13}\n\\end{equation} \nwhere $Y=\\frac{i}{2}{\\bf 1}, \\; Q=Y+T_3.$\n\nRepresentations of the groups $SU(2;\\iota), U(1), U(1)_{em}$ are linear ones, 
that is, they are realised by linear operators in the fibered space $\\Phi_2(\\iota)$.\n\n\\section{ Electroweak Model for $SU(2;j)\\times U(1)$ gauge group}\n\n\nThe fibered space $\\Phi_2(j)$ can be obtained from $\\Phi_2$ by \nthe substitution $\\phi_2 \\rightarrow j\\phi_2$ in (\\ref{g1}), \nwhich induces the corresponding substitutions for the Lie algebra $su(2)$ generators\n$T_1 \\rightarrow jT_1,\\; T_2 \\rightarrow jT_2,\\;T_3 \\rightarrow T_3. $\nSince the gauge fields take their values in the Lie algebra, we can transform the gauge fields instead of the generators, namely\n\\begin{equation}\nA_{\\mu}^1 \\rightarrow jA_{\\mu}^1, \\;\\; A_{\\mu}^2 \\rightarrow jA_{\\mu}^2,\\; \\;A_{\\mu}^3 \\rightarrow A_{\\mu}^3, \\;\\;\nB_{\\mu} \\rightarrow B_{\\mu}.\n\\label{eq28}\n\\end{equation} \n\nThese substitutions in the Lagrangian $L$ of the\nHiggsless Electroweak Model \\cite{Gr-07}\ngive rise to the Lagrangian $L(j)$\nof the contracted model \nwith the $U(2;j)=SU(2;j)\\times U(1)$ gauge group\n\\begin{equation}\nL(j)=L_A(j) + L_{\\phi}(j),\n\\label{eq1}\n\\end{equation}\nwhere\n$$ \n L_A(j)=\\frac{1}{2g^2}\\mbox{tr}(F_{\\mu\\nu}(j))^2 + \\frac{1}{2g'^2}\\mbox{tr}(\\hat{B}_{\\mu\\nu})^2= \n $$\n \\begin{equation}\n= -\\frac{1}{4}[j^2(F_{\\mu\\nu}^1)^2+j^2(F_{\\mu\\nu}^2)^2+(F_{\\mu\\nu}^3)^2]-\\frac{1}{4}(B_{\\mu\\nu})^2\n\\label{eq2}\n\\end{equation}\nis the gauge field Lagrangian\nand\n\\begin{equation} \n L_{\\phi}(j)= \\frac{1}{2}(D_\\mu \\phi(j))^{\\dagger}D_\\mu \\phi(j) \n\\label{eq3}\n\\end{equation} \nis the {\\it free} (without any potential term) matter field Lagrangian (summation over repeated Greek indices is always understood). 
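The contracted commutation relations (\\ref{g8}) underlying these substitutions are easy to check numerically; a small sketch (treating $j$ as a real parameter rather than the nilpotent unit) verifying $[T_1(j),T_2(j)]=-j^2T_3(j)$ and the vanishing of $[T_1,T_2]$ in the limit $j\\to 0$:

```python
# Check the su(2;j) commutation relations (g8) for numeric j.
import numpy as np

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)

def generators(j):
    # T1 and T2 carry the contraction parameter j; T3 does not.
    return 0.5 * 1j * j * tau1, 0.5 * 1j * j * tau2, 0.5 * 1j * tau3

def comm(a, b):
    return a @ b - b @ a

for j in (1.0, 0.5, 0.0):
    T1, T2, T3 = generators(j)
    assert np.allclose(comm(T1, T2), -j**2 * T3)  # [T1,T2] = -j^2 T3
    assert np.allclose(comm(T3, T1), -T2)         # [T3,T1] = -T2
    assert np.allclose(comm(T2, T3), -T1)         # [T2,T3] = -T1
```

At $j=0$ the first commutator vanishes while the other two are unchanged, which is exactly the abelianization of the $T_1$, $T_2$ sector described in Sec.~2.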
\nIn (\\ref{eq3}), $D_{\\mu}$ are the covariant derivatives\n \\begin{equation}\nD_\\mu\\phi(j)=\\partial_\\mu\\phi(j) + g\\left(\\sum_{k=1}^{3}T_k(j)A^k_\\mu \\right)\\phi(j) + g'YB_\\mu\\phi(j),\n\\label{eq4}\n\\end{equation} \nwhere $T_k(j)$ are given by (\\ref{g7})\n and \n$Y=\\frac{i}{2}{\\bf 1}$ is the generator of $U(1).$ \nTheir actions on the components of $\\phi(j)$ are given by\n$$\nD_\\mu \\phi_1=\\partial_\\mu \\phi_1 + \\frac{i}{2}(gA_\\mu^3+g'B_\\mu)\\phi_1 + j^2\\frac{ig}{2}(A_\\mu^1-iA_\\mu^2)\\phi_2,\n$$\n\\begin{equation}\nD_\\mu \\phi_2=\\partial_\\mu \\phi_2 - \\frac{i}{2}(gA_\\mu^3-g'B_\\mu)\\phi_2 + \\frac{ig}{2}(A_\\mu^1+iA_\\mu^2)\\phi_1.\n\\label{eq5}\n\\end{equation}\n\n\n\nThe gauge fields \n$$ \nA_\\mu (x;j)=g\\sum_{k=1}^{3}T_k(j)A^k_\\mu (x)=g\\frac{i}{2}\\left(\\begin{array}{cc}\n\tA^3_\\mu & j(A^1_\\mu -iA^2_\\mu ) \\\\\nj(A^1_\\mu + iA^2_\\mu ) & -A^3_\\mu \n\\end{array} \\right),\n$$\n\\begin{equation}\n \\hat{B}_\\mu (x)=g'YB_\\mu (x)=g'\\frac{i}{2}\\left(\\begin{array}{cc}\n\tB_{\\mu} & 0 \\\\\n0 & B_{\\mu}\n\\end{array} \\right)\n\\label{eq6-a}\n\\end{equation}\n take their values in the Lie algebras $su(2;j),$ $u(1)$ respectively, and the stress tensors are\n$$ \nF_{\\mu\\nu}(x;j)={\\cal F}_{\\mu\\nu}(x;j)+[A_\\mu(x;j),A_\\nu(x;j)]=\n$$\n$$\n=g\\frac{i}{2}\\left(\\begin{array}{cc}\n\tF^3_{\\mu\\nu} & j(F^1_{\\mu\\nu} -iF^2_{\\mu\\nu} ) \\\\\nj(F^1_{\\mu\\nu} + iF^2_{\\mu\\nu} ) & -F^3_{\\mu\\nu} \n\\end{array} \\right), \n$$\n\\begin{equation} \nB_{\\mu\\nu}=\\partial_{\\mu}B_{\\nu}-\\partial_{\\nu}B_{\\mu}, \n\\label{eq7-a} \n\\end{equation}\nor in components \n$$\nF_{\\mu\\nu}^1={\\cal F}_{\\mu\\nu}^1 + g(A_\\mu^2A_\\nu^3-A_\\mu^3A_\\nu^2), \\quad\nF_{\\mu\\nu}^2={\\cal F}_{\\mu\\nu}^2 +g(A_\\mu^3A_\\nu^1-A_\\mu^1A_\\nu^3),\n$$\n\\begin{equation}\nF_{\\mu\\nu}^3={\\cal F}_{\\mu\\nu}^3 + j^2g(A_\\mu^1A_\\nu^2-A_\\mu^2A_\\nu^1),\n\\label{eq8-a} \n\\end{equation}\nwhere ${\\cal F}_{\\mu\\nu}^k=\\partial_\\mu A_\\nu^k- \\partial_\\nu A_\\mu^k. 
$ \n\n\n\nThe Lagrangian $L(j)$ (\\ref{eq1}) describe massless fields. \nIn a standard approach to generate mass terms for the vector bosons the \"`sombrero\"' potential is added to the matter field Lagrangian \n$ L_{\\phi}(j=1)$ (\\ref{eq3}) and after that the Higgs mechanism is used.\nThe different way \n\\cite{Gr-07} is based on the fact that the quadratic form $\\phi^{\\dagger}\\phi=\\rho^2$\nis invariant with respect to gauge transformations. This quadratic form define the 3-dimensional sphere $S_3$ of the radius $\\rho>0$ in the target space $\\Phi_2$ which is ${\\bf C_2}$ or ${\\bf R_4}$ if real components are counted. In other words the radial coordinates $R_{+}\\times S_3$ are introduced in ${\\bf R_4}.$ \nThe vector boson masses are generated by the transformation of Lagrangian $L(j=1)$ (\\ref{eq1}) to the coordinates on the sphere $S_3$ and are the same as in the standard model. \nHiggs boson field does not appeared if the sphere radius does not depend on the space-time coordinates $\\rho=R=const$ \\cite{Gr-07}. For $\\rho\\neq const$ the real positive massless scalar field --- analogy of dilaton or kind of Goldstone mode --- is presented in the model \\cite{F-08}.\n\n\n\n\nThe complex space $\\Phi_2(j)$ can be regarded as 4-dimensional real space ${\\bf R}_4(j)$. \n Let us introduce the real fields \n\\begin{equation} \n\\phi_1=r(1+i\\psi_3), \\quad \\phi_2=r(\\psi_2+i\\psi_1). \n\\label{eq7}\n\\end{equation} \nThe substitution $\\phi_2 \\rightarrow j\\phi_2$ induces the following substitutions \n \\begin{equation}\n\\psi_1 \\rightarrow j\\psi_1,\\;\\psi_2 \\rightarrow j\\psi_2,\\;\\psi_3 \\rightarrow \\psi_3,\\; r\\rightarrow r\n\\label{eq7-1}\n\\end{equation} \nfor the real fields. \n\n \n For the real fields the form (\\ref{g1}) is written as\n$r^2(1 + \\bar{\\psi}^2(j))=R^2, $ where $\\bar{ \\psi}^2(j)=j^2(\\psi_1^2+ \\psi_2^2)+\\psi_3^2,$ therefore \n\\begin{equation} \nr=\\frac{R}{\\sqrt{1 + \\bar{ \\psi}^2(j)}}. 
\n\\label{eq9}\n\\end{equation} \nHence there are \nthree independent real fields $\\bar{\\psi}(j)=(j\\psi_1,j\\psi_2,\\psi_3).$ These fields belong to the space $ \\Psi_3(j) $\nwith noneuclidean geometry, which is realized on the 3-dimensional ``sphere'' of radius $R$ in the 4-dimensional space \n${\\bf R}_4(j)$. The fields $\\bar{\\psi}(j)$ are intrinsic Beltrami coordinates on $ \\Psi_3(j)$. The space $ \\Psi_3(j=1)\\equiv S_3 $\nhas non-degenerate spherical geometry, but $ \\Psi_3(j=\\iota) $ is a fibered space of constant curvature with 1-dimensional base $\\{\\psi_3\\}$ and 2-dimensional fiber $\\{\\psi_1,\\psi_2\\}$ \\cite{Gr-09}, the so-called semi-spherical space \\cite{P-65}, which can be interpreted as the nonrelativistic $(1+2)$ kinematics with curvature, or Newton kinematics \\cite{Gr-90}.\n\n\nThe {\\it free} Lagrangian (\\ref{eq3}) transforms into \n the {\\it free} gauge invariant matter field Lagrangian $L_\\psi(j) $ on $ \\Psi_3(j) $, which is defined with the help of the metric tensor $g_{kl}(j)$ \\cite{Gr-07} of the space $ \\Psi_3(j) $\n$$ \ng_{11}=\\frac{1+\\psi_3^2+j^2\\psi_2^2}{(1+\\bar{\\psi}^2(j))^2}, \\quad\ng_{22}=\\frac{1+\\psi_3^2+j^2\\psi_1^2}{(1+\\bar{\\psi}^2(j))^2}, \\quad\ng_{33}=\\frac{1+j^2(\\psi_1^2+\\psi_2^2)}{(1+\\bar{\\psi}^2(j))^2}, \n$$\n$$\ng_{12}=g_{21}=\\frac{-j^2\\psi_1 \\psi_2}{(1+\\bar{\\psi}^2(j))^2},\\;\ng_{13}=g_{31}=\\frac{-j\\psi_1 \\psi_3}{(1+\\bar{\\psi}^2(j))^2},\\;\ng_{23}=g_{32}=\\frac{-j\\psi_2 \\psi_3}{(1+\\bar{\\psi}^2(j))^2}\n$$\n in the form \n$$\nL_\\psi(j)=\\frac{R^2}{2}\\sum_{k,l=1}^3g_{kl}(j)D_\\mu\\psi_k(j)D_\\mu\\psi_l(j)=\n$$\n\\begin{equation}\n=\\frac{R^2\\left[(1+\\bar{\\psi}^2(j))(D_{\\mu}\\bar{\\psi}(j))^2-(\\bar{\\psi}(j),D_{\\mu}\\bar{\\psi}(j))^2\\right] }{2(1+\\bar{\\psi}^2(j))^2}.\n\\label{eq10}\n\\end{equation}\n\n\nThe covariant derivatives (\\ref{eq4}) are obtained from the representations of the generators of the algebras $su(2),$ $u(1)$ in the space $\\Psi_3$ \\cite{Gr-07} with the 
help of the substitutions (\\ref{eq7-1}) \n$$\nT_1\\bar{\\psi}(j)=\\frac{i}{2}\\left(\\begin{array}{c}\n-j(1+j^2\\psi_1^2)\t \\\\\n j(\\psi_3-j^2\\psi_1\\psi_2) \\\\\n -j^2(\\psi_2+\\psi_1\\psi_3)\n\\end{array} \\right), \\quad\nT_2\\bar{\\psi}(j)=\\frac{i}{2}\\left(\\begin{array}{c}\n-j(\\psi_3+j^2\\psi_1\\psi_2) \\\\\n -j(1+j^2\\psi_2^2)\\\\\n \tj^2(\\psi_1-\\psi_2\\psi_3)\n\\end{array} \\right),\n$$\n$$ \nT_3\\bar{\\psi}(j) =\\frac{i}{2}\\left(\\begin{array}{c}\nj(-\\psi_2+\\psi_1\\psi_3)\t\\\\\nj(\\psi_1+\\psi_2\\psi_3) \\\\\n 1+\\psi_3^2\n\\end{array} \\right), \\quad\nY\\bar{\\psi}(j) =\\frac{i}{2}\\left(\\begin{array}{c}\n-j(\\psi_2+\\psi_1\\psi_3)\t\\\\\n j(\\psi_1-\\psi_2\\psi_3) \\\\\n -(1+\\psi_3^2)\n\\end{array} \\right)\n$$\nand are as follows: \n$$ \nD_\\mu \\psi_1=\\partial_\\mu \\psi_1 - \\frac{g'}{2}(\\psi_2+\\psi_1\\psi_3)B_\\mu + \n$$\n$$\n+\\frac{g}{2}\\left[-(1+j^2\\psi_1^2)A_\\mu^1 -(\\psi_3+j^2\\psi_1\\psi_2)A_\\mu^2-(\\psi_2-\\psi_1\\psi_3)A_\\mu^3 \\right], \n$$ \n$$ \n D_\\mu \\psi_2=\\partial_\\mu \\psi_2 + \\frac{g'}{2}(\\psi_1-\\psi_2\\psi_3)B_\\mu + \n $$\n $$\n +\\frac{g}{2}\\left[(\\psi_3-j^2\\psi_1\\psi_2)A_\\mu^1 -(1+j^2\\psi_2^2)A_\\mu^2 + (\\psi_1+\\psi_2\\psi_3)A_\\mu^3 \\right], \n$$\n$$\n D_\\mu \\psi_3=\\partial_\\mu \\psi_3 - \\frac{g'}{2}(1+\\psi_3^2)B_\\mu +\n $$\n \\begin{equation} \n +\\frac{g}{2}\\left[-j^2(\\psi_2+\\psi_1\\psi_3)A_\\mu^1 +j^2(\\psi_1-\\psi_2\\psi_3)A_\\mu^2+(1+\\psi_3^2)A_\\mu^3 \\right]. 
\n\\label{eq11}\n\\end{equation} \nThe gauge field Lagrangian (\\ref{eq2}) does not depend on the fields $\\phi$ and therefore remains unchanged.\nSo the full Lagrangian (\\ref{eq1}) is given by the sum of (\\ref{eq2}) and (\\ref{eq10}).\n\nFor small fields, the second order part of the Lagrangian (\\ref{eq10}) is written as\n\\begin{equation} \nL_\\psi^{(2)}(j)=\\frac{R^2}{2}\\left[(D_\\mu \\bar{\\psi}(j))^{(1)}\\right]^2 = \n\\frac{R^2}{2}\\sum_{k=1}^3\\left[(D_\\mu \\psi_k(j))^{(1)}\\right]^2 , \n\\label{eq12}\n\\end{equation} \nwhere the linear terms in the covariant derivatives (\\ref{eq11}) have the form\n$$\n(D_\\mu \\psi_1)^{(1)}= \\partial_\\mu\\psi_1-\\frac{g}{2}A_\\mu^1= -\\frac{g}{2}\\left(A_\\mu^1-\\frac{2}{g}\\partial_\\mu\\psi_1\\right) =-\\frac{g}{2}\\hat{A}_\\mu^1,\n$$\n$$ \n(D_\\mu \\psi_2)^{(1)}= \\partial_\\mu\\psi_2-\\frac{g}{2}A_\\mu^2= \n-\\frac{g}{2}\\left(A_\\mu^2-\\frac{2}{g}\\partial_\\mu\\psi_2\\right)= -\\frac{g}{2}\\hat{A}_\\mu^2,\n$$\n$$ \n(D_\\mu \\psi_3)^{(1)}=\\partial_\\mu\\psi_3+\\frac{g}{2}A_\\mu^3-\\frac{g'}{2}B_\\mu=\n\\frac{1}{2}\\sqrt{g^2+g'^2}Z_\\mu.\n$$\nThe new fields \n$$ \nW^{\\pm}_\\mu = \\frac{1}{\\sqrt{2}}\\left(\\hat{A}^1_\\mu \\mp i \\hat{A}^2_\\mu \\right), \\quad\n Z_\\mu =\\frac{gA^3_\\mu-g'B_\\mu + 2\\partial_\\mu \\psi_3}{\\sqrt{g^2+g'^2}}, \\quad \n A_\\mu =\\frac{g'A^3_\\mu+gB_\\mu}{\\sqrt{g^2+g'^2}}\n$$\nare transformed as \n$$\nW^{\\pm}_\\mu \\rightarrow j W^{\\pm}_\\mu,\\; Z_\\mu \\rightarrow Z_\\mu,\\; A_\\mu \\rightarrow A_\\mu \n$$ \nand the Lagrangian (\\ref{eq12}) is rewritten as follows: \n$$ \n L_{\\psi}^{(2)}=\n j^2 \\frac{R^2g^2}{4}W^{+}_\\mu W^{-}_\\mu +\\frac{R^2(g^2+g'^2)}{8}\\left(Z_\\mu \\right)^2.\n$$\n\nThe quadratic part of the full Lagrangian \n$$ \nL_0(j)= L_A^{(2)}(j) + L_{\\psi}^{(2)}(j)= \n$$\n$$ \n= - \\frac{1}{4}({\\cal F}_{\\mu\\nu})^2 -\\frac{1}{4}({\\cal Z}_{\\mu\\nu})^2 +\\frac{m_Z^2}{2}\\left(Z_\\mu \\right)^2 +\nj^2\\left\\{ -\\frac{1}{2}{\\cal W}^{+}_{\\mu\\nu}{\\cal W}^{-}_{\\mu\\nu} + 
m_{W}^2W^{+}_\\mu W^{-}_\\mu \\right\\} \\equiv\n$$ \n\\begin{equation} \n\\equiv L_b + j^2 L_f,\n\\label{neq1}\n\\end{equation} \nwhere \n\\begin{equation} \nm_W=\\frac{Rg}{2}, \\quad m_Z=\\frac{R}{2}\\sqrt{g^2+g'^2},\n\\label{eq13}\n\\end{equation}\nand \n$\n{\\cal Z}_{\\mu\\nu}=\\partial_\\mu Z_\\nu-\\partial_\\nu Z_\\mu, \\; \n{\\cal F}_{\\mu\\nu}=\\partial_\\mu A_\\nu-\\partial_\\nu A_\\mu, \\; \n{\\cal W^{\\pm}}_{\\mu\\nu}=\\partial_\\mu W^{\\pm}_\\nu-\\partial_\\nu W^{\\pm}_\\mu \\; \n$ \nare abelian stress tensors. \nThe Lagrangian (\\ref{neq1}) describes all the experimentally verified parts of the standard Electroweak Model \n but does not include the scalar Higgs field. \n \nThe interaction part of the full Lagrangian in the first order of approximation is given by\n$$ \nL_{int}^{(1)}(j)=j^2\\left[L_{A}^{(3)} + L_{\\psi}^{(3)}\\right],\n$$\nwhere the third order terms of the gauge field Lagrangian (\\ref{eq2}) are\n\\begin{eqnarray*}\n L_A^{(3)} =-\\frac{g}{\\sqrt{g^2+g'^2}}\\left\\{i\\left( \\mathcal{W}_{\\mu\\nu}^-W_\\mu^+ - \\mathcal{W}_{\\mu\\nu}^+W_\\mu^- \\right)\\left(g'A_\\nu+gZ_\\nu \\right) - \\right. 
\\nonumber\\\\\n-\\frac{\\sqrt{2}}{g}\\left(g'A_\\mu+gZ_\\mu \\right) \n \\left[\\mathcal{W}_{\\mu\\nu}^+(\\partial_{\\nu}\\psi_2-i\\partial_{\\nu}\\psi_1) + \\mathcal{W}_{\\mu\\nu}^- (\\partial_{\\nu}\\psi_2+i\\partial_{\\nu}\\psi_1) \\right]- \\nonumber\\\\\n-\\frac{2ig}{\\sqrt{g^2+g'^2}}\\left(\\mathcal{W}_{\\mu\\nu}^-W_\\mu^+ - \\mathcal{W}_{\\mu\\nu}^+W_\\mu^- \\right)\\partial_{\\nu}\\psi_3 - \\nonumber\\\\\n-\\frac{2\\sqrt{2}}{\\sqrt{g^2+g'^2}}\\left[\\mathcal{W}_{\\mu\\nu}^+(\\partial_{\\nu}\\psi_2-i\\partial_{\\nu}\\psi_1) + \\mathcal{W}_{\\mu\\nu}^-(\\partial_{\\nu}\\psi_2+i\\partial_{\\nu}\\psi_1) \\right]\\partial_{\\nu}\\psi_3 + \\nonumber\\\\\n+\\left(g'\\mathcal{F}_{\\mu\\nu}+ g\\mathcal{Z}_{\\mu\\nu}\\right)\n \\left\\{\\frac{i}{4}\\left[\\left(W_{\\mu}^+ \\right)^2 - \\left(W_{\\mu}^- \\right)^2 \\right] +\n\\frac{4}{g^2}\\partial_{\\mu}\\psi_1\\partial_{\\nu}\\psi_2 + \\right. \\nonumber\\\\\n\\left. \\left. \n+ \\frac{\\sqrt{2}}{g}\\left[W_\\mu^+ (\\partial_{\\nu}\\psi_2-i\\partial_{\\nu}\\psi_1) + W_\\mu^- (\\partial_{\\nu}\\psi_2+i\\partial_{\\nu}\\psi_1) \\right] \\right\\} \\right\\} \\nonumber\\\\\n\\end{eqnarray*}\nand those of the matter field Lagrangian (\\ref{eq10}) are\n\\begin{eqnarray*}\n L_\\psi^{(3)}= \\frac{R^2g}{2\\sqrt{2}}\\left\\{\nW_{\\mu}^+\\left[\\psi_3(\\partial_\\mu \\psi_2 - i\\partial_\\mu \\psi_1) -\n\\frac{g^2-g'^2}{g^2+g'^2}(\\psi_2 -i\\psi_1) \\partial_\\mu \\psi_3 + \\right. \\right. \\nonumber\\\\\n\\left. + \\frac{g'\\left(gA_{\\mu}-g'Z_{\\mu} \\right)}{\\sqrt{g^2+g'^2}}(\\psi_2 -i\\psi_1) \\right] \n+W_{\\mu}^-\\left[\\psi_3(\\partial_\\mu \\psi_2 + i\\partial_\\mu \\psi_1) - \\right.\\nonumber\\\\\n\\left. 
- \\frac{g^2-g'^2}{g^2+g'^2}(\\psi_2 +i\\psi_1) \\partial_\\mu \\psi_3 + \n \\frac{g'\\left(gA_{\\mu}-g'Z_{\\mu} \\right)}{\\sqrt{g^2+g'^2}}(\\psi_2 +i\\psi_1) \\right]+ \\nonumber\\\\\n\\left.+\\frac{1}{g}\\sqrt{g^2+g'^2}Z_{\\mu}\\left(\\psi_1\\partial_\\mu \\psi_2 - \\psi_2\\partial_\\mu \\psi_1 \\right)\n\\right\\}. \n\\end{eqnarray*}\n\nThe fermion Lagrangian of the standard Electroweak Model\nis taken in the form \\cite{R-99}\n\\begin{equation} \nL_F=L_l^{\\dagger}i\\tilde{\\tau}_{\\mu}D_{\\mu}L_l + e_r^{\\dagger}i\\tau_{\\mu}D_{\\mu}e_r -\nh_e[e_r^{\\dagger}(\\phi^{\\dagger}L_l) +(L_l^{\\dagger}\\phi)e_r],\n\\label{eq14}\n\\end{equation}\nwhere\n$\nL_l= \\left(\n\\begin{array}{c}\n\te_l\\\\\n\t\\nu_{e,l}\n\\end{array} \\right)\n$\nis the $SU(2)$-doublet, $e_r$ is the $SU(2)$-singlet, $h_e$ is a constant, and $e_r, e_l, \\nu_e $ are two-component Lorentzian spinors. \nHere $\\tau_{\\mu}$ are the Pauli matrices, \n$\\tau_{0}=\\tilde{\\tau_0}={\\bf 1},$ $\\tilde{\\tau_k}=-\\tau_k. $ \nThe covariant derivatives $D_{\\mu}L_l $ are given by (\\ref{eq4}) with $L_l$ instead of \n$\\phi$, and $D_{\\mu}e_r=(\\partial_\\mu + ig'B_\\mu)e_r. $\n The convolution over the inner indices of the $SU(2)$-doublet is denoted by $(\\phi^{\\dagger}L_l)$.\n\nThe matter field $\\phi$ appears in the Lagrangian (\\ref{eq14}) only in the mass terms. \nWhen the gauge group $SU(2)$ is contracted to $SU(2;j)$ and the matter field is fibered to $\\phi(j)$,\nthe same takes place with the doublet $L_l$: the first component $e_l$ is not changed, but the second component is multiplied by the contraction parameter: $\\nu_{e,l} \\rightarrow j\\nu_{e,l}$. \nWith the use of (\\ref{eq7}), (\\ref{eq9}) and these substitutions, the mass terms are rewritten in the form\n$$\nh_e[e_r^{\\dagger}(\\phi^{\\dagger}(j)L_l(j)) +(L_l^{\\dagger}(j)\\phi(j))e_r]=\n\\frac{h_eR}{\\sqrt{1+\\bar{\\psi}^2(j)}}\\left\\{e_r^{\\dagger}e_l + e_l^{\\dagger}e_r + \\right.\n$$\n\\begin{equation} \n\\left. 
+i\\psi_3\\left(e_l^{\\dagger}e_r - e_r^{\\dagger}e_l \\right) + \nij^2 \\left[\\psi_1\\left(\\nu_{e,l}^{\\dagger}e_r - e_r^{\\dagger}\\nu_{e,l} \\right)+\ni\\psi_2\\left(\\nu_{e,l}^{\\dagger}e_r + e_r^{\\dagger}\\nu_{e,l} \\right)\\right]\n\\right\\},\n\\label{n1-1}\n\\end{equation}\nwhere\nthe $SU(2)$-singlet $e_r $ does not transform under contraction.\n\n\n\\section{Limiting case of Higgsless Electroweak Model}\n\n\nAs was mentioned above, the vector boson masses are generated automatically (without any Higgs mechanism) by the transformation of the free Lagrangian of the standard Electroweak Model to the Lagrangian (\\ref{eq1}),(\\ref{eq2}),(\\ref{eq10}) expressed in some coordinates on the sphere $\\Psi_3(j)$. \nThis statement is true for both values of the contraction parameter $j=1,\\iota$.\nWhen the contraction parameter tends to zero, $j^2\\rightarrow 0$, the contribution of the W-boson fields to the quadratic part of the Lagrangian (\\ref{neq1}) becomes small in comparison with the contribution of the Z-boson and electromagnetic fields. \nIn other words, the limit Lagrangian includes only the Z-boson and electromagnetic fields. Therefore the charged W-boson fields do not affect these fields.\nThe part $L_f$ forms a new Lagrangian for the W-boson fields and their interactions with the other fields. \nThe appearance of two Lagrangians $L_b$ and $L_f$ in the limit model corresponds to the two hermitian forms of the fibered space $\\Phi_2(\\iota)$, which are invariant under the action of the contracted gauge group $SU(2;\\iota)$. \n The electromagnetic and Z-boson fields can be regarded as external fields with respect to the W-boson fields. 
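For orientation we add a standard remark (ours, not part of the original derivation): the mass formulas (\\ref{eq13}) contain no contraction parameter, so the tree-level mass relation of the standard model is kept for both $j=1$ and $j=\\iota$:

```latex
% From (eq13), m_W = Rg/2 and m_Z = (R/2)\\sqrt{g^2+g'^2}, so the ratio
%   m_W / m_Z = g / \\sqrt{g^2+g'^2}
% is fixed by the gauge couplings alone, independently of the radius R,
% and is therefore unchanged under the contraction j^2 -> 0:
\\frac{m_W}{m_Z} \\;=\\; \\frac{g}{\\sqrt{g^2+g'^2}} \\;=\\; \\cos\\theta_W .
```

Here $\\theta_W$ denotes the usual Weinberg angle; this is consistent with the statement that the contraction changes the interactions of the fields but not the masses.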
\n\nIn mathematical language, the field space $\\left\\{ A_{\\mu}, Z_{\\mu}, W_{\\mu}^{\\pm}\\right\\}$\nis fibered after the contraction $j=\\iota$ into the base $\\left\\{ A_{\\mu}, Z_{\\mu}\\right\\}$ and the fiber \n$\\left\\{W_{\\mu}^{\\pm}\\right\\}.$ \n(In order to avoid terminological misunderstanding, let us stress that we mean a locally trivial fibering, which\nis defined by the projection $pr:\\; \\left\\{ A_{\\mu}, Z_{\\mu}, W_{\\mu}^{\\pm}\\right\\} \\rightarrow \\left\\{ A_{\\mu}, Z_{\\mu}\\right\\}$ in the field space. \nThis fibering is understood in the context of semi-Riemannian geometry \\cite{Gr-09} and has nothing to do with the principal fiber bundle.)\nThen $L_b$ in (\\ref{neq1}) is the Lagrangian in the base and $L_f$ is the Lagrangian in the fiber. In general, the properties of a fiber depend on the points of the base, and not conversely.\nIn this sense the fields in the base are external ones with respect to the fields in the fiber. \n\nThe fermion Lagrangian (\\ref{eq14})\nfor the nilpotent value of the contraction parameter $j=\\iota $ is also split into an electron part in the base and a neutrino part in the fiber. \nThis means that in the limit model the electron field is external with respect to the neutrino field.\nThe mass terms (\\ref{n1-1}) for $j=\\iota $ are\n\\begin{equation}\nh_e[e_r^{\\dagger}(\\phi^{\\dagger}(\\iota)L_l(\\iota)) +(L_l^{\\dagger}(\\iota)\\phi(\\iota))e_r]\n=\\frac{h_eR}{\\sqrt{1+\\psi_3^2}} \\left[ e_r^{\\dagger}e_l^{-} + e_l^{- \\dagger}e_r \n +i\\psi_3\\left(e_l^{- \\dagger}e_r - e_r^{\\dagger}e_l^{-} \\right) \\right]. 
\n\\label{n1}\n\\end{equation}\nIts second order terms \n$ h_eR\\left(e_r^{\\dagger}e_l^{-} + e_l^{- \\dagger}e_r \\right)$ \nprovide the electron mass $m_e=h_eR $, while the neutrino remains massless.\n\n\nLet us note that the field interactions in the contracted model are simpler than in the standard Electroweak Model due to the nullification of some terms.\n\n\n\n\\section{Conclusions} \n\n\nA modified formulation of the Electroweak Model with the gauge group $SU(2)\\times U(1)$, based on\nthe 3-dimensional spherical geometry in the target space, is suggested. This model describes all experimentally observed fields and does not include the (up to now unobserved) scalar Higgs field. \nThe {\\it free} Lagrangian in the spherical matter field space is used instead of the Lagrangian with\nthe potential of the special \"`sombrero\"' form.\nThe gauge field Lagrangian is the standard one. \nThere is no need for the Higgs \nmechanism since the vector field masses are generated automatically.\n\n\nWe have discussed the limiting case of the modified Higgsless Electroweak Model, which corresponds to the contracted gauge group $SU(2;j)\\times U(1)$, where $j=\\iota$ or $j \\rightarrow 0$.\nThe masses of all the experimentally verified particles involved in the Electroweak Model remain the same under contraction, but the interactions of the fields are changed in two respects. \nFirst, all field interactions become simpler due to the nullification of some terms in the Lagrangian. \nSecond, the interrelation of the fields becomes more complicated. All fields are divided into two classes: fields in the base\n(Z-boson, electromagnetic and electron) and fields in the fiber (W-bosons and neutrino). \nThe base fields can be interpreted as external ones with respect to the fiber fields, i.e. 
the Z-boson, electromagnetic and electron fields can interact with the W-boson and neutrino fields, but the W-boson and neutrino fields do not affect these fields within the framework of the limit model.\n\n\nThis work has been supported in part \nby the Russian Foundation for Basic Research, grant 08-01-90010-Bel-a,\nand the program \"`Fundamental problems of nonlinear dynamics\"' of the Russian Academy of Sciences. \n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe \\emph{competition graph} $C(D)$ of a digraph $D$ is an undirected graph which\nhas the same vertex set as $D$ and which has an edge $xy$\nbetween two distinct vertices $x$ and $y$\nif and only if, for some vertex $z \\in V$,\nthe arcs $(x,z)$ and $(y,z)$ are in $D$.\n\nLet $d$ be a positive integer.\nFor $x = (x_1,x_2,\\ldots, x_d)$,\n$y = (y_1,y_2,\\ldots, y_d) \\in \\mathbb{R}^d$,\nwe write\n$x \\prec y$\nif $x_i < y_i$ for each $i = 1, \\ldots, d$.\nLet $\\mathcal{H}$ denote the plane $\\{ x \\in \\mathbb{R}^3 \\mid x_1 + x_2 + x_3 = 0 \\}$\nand let $\\mathcal{H}_+ := \\{ x \\in \\mathbb{R}^3 \\mid x_1 + x_2 + x_3 > 0 \\}$.\nFor a point $v=(v_1, v_2, v_3) \\in \\mathcal{H}_+$,\nlet $p_{1}^{(v)}$, $p_{2}^{(v)}$, and $p_{3}^{(v)}$\nbe points in $\\mathbb{R}^3$ defined by\n$p_{1}^{(v)} := (-v_2-v_3, v_2, v_3)$,\n$p_{2}^{(v)} := (v_1, -v_1-v_3, v_3)$, and\n$p_{3}^{(v)} := (v_1, v_2, -v_1-v_2)$,\nand let $\\triangle(v)$ be the convex hull of the points\n$p_{1}^{(v)}$, $p_{2}^{(v)}$, and $p_{3}^{(v)}$, i.e.,\n$\n\\triangle(v) := \\text{{\\rm Conv}}(p_{1}^{(v)},p_{2}^{(v)},p_{3}^{(v)})\n= \\left\\{ \\sum_{i=1}^3 \\lambda_i p_{i}^{(v)}\n\\mid \\sum_{i=1}^3 \\lambda_i=1, \\lambda_i \\geq 0 \\ (i=1,2,3) \\right\\}.\n$\nThen it is easy to check that $\\triangle(v)$ is a closed equilateral triangle\nwhich is contained in the plane $\\mathcal{H}$.\nLet $A(v)$ be the relative interior of the closed triangle $\\triangle(v)$, i.e.,\n$\nA(v) := \\text{rel.int}(\\triangle(v))\n= \\left\\{ \\sum_{i=1}^3 \\lambda_i p_{i}^{(v)}\n\\mid \\sum_{i=1}^3 \\lambda_i=1, \\lambda_i > 0 \\ (i=1,2,3) \\right\\}.\n$\nThen\n$A(v)$ and $A(w)$ are homothetic for any $v,w \\in \\mathcal{H}_+$.\n\nFor $v 
\\in \\mathcal{H}_+$ and $(i,j) \\in \\{(1,2),(2,3),(1,3)\\}$,\nlet $l_{ij}^{(v)}$ denote the line through\nthe two points $p^{(v)}_{i}$ and $p^{(v)}_{j}$, i.e.,\n$\nl_{ij}^{(v)} := \\{ x \\in \\mathbb{R}^3 \\mid\nx = \\alpha p^{(v)}_{i} + (1 - \\alpha) p^{(v)}_{j}, \\alpha \\in \\mathbb{R} \\},\n$\nand let $R_{ij}(v)$ denote the following region:\n\\[\nR_{ij}(v) := \\{ x \\in \\mathbb{R}^3 \\mid\nx = (1-\\alpha - \\beta)p^{(v)}_{k} + \\alpha p^{(v)}_{i} + \\beta p^{(v)}_{j} ,\n0 \\leq \\alpha \\in \\mathbb{R}, 0 \\leq \\beta \\in \\mathbb{R}, \\alpha + \\beta \\geq 1 \\},\n\\]\nwhere $k$ is the element in $\\{1,2,3\\} \\setminus \\{i,j\\}$;\nfor $k \\in \\{1,2,3\\}$, let $R_{k}(v)$ denote the following region:\n\\[\nR_{k}(v) := \\{ x \\in \\mathbb{R}^3 \\mid\nx = (1 + \\alpha + \\beta)p^{(v)}_{k} - \\alpha p^{(v)}_{i} - \\beta p^{(v)}_{j},\n0 \\leq \\alpha \\in \\mathbb{R}, 0 \\leq \\beta \\in \\mathbb{R} \\},\n\\]\nwhere $i$ and $j$ are elements such that\n$\\{i,j,k\\} = \\{1,2,3\\}$.\n(See Figure~\\ref{region} for an illustration.)\n\n\\begin{figure}\n\\psfrag{A}{ $p^{(v)}_{1}$}\n\\psfrag{B}{ $p^{(v)}_{2}$}\n\\psfrag{C}{ $p^{(v)}_{3}$}\n\\psfrag{D}{ $\\triangle(v)$}\n\\psfrag{E}{ $R_{23}(v)$}\n\\psfrag{F}{ $R_3(v)$}\n\\psfrag{G}{ $R_{13}(v)$}\n\\psfrag{H}{ $R_1(v)$}\n\\psfrag{I}{ $R_{12}(v)$}\n\\psfrag{J}{ $R_2(v)$}\n\\psfrag{K}{ $l_{12}^{(v)}$}\n\\psfrag{L}{ $l_{13}^{(v)}$}\n\\psfrag{M}{ $l_{23}^{(v)}$}\n\\begin{center}\n\\includegraphics[height=5cm]{region3.eps}\n\\end{center}\n\\vskip-1em\n\\caption{The regions determined by $v$. 
By our assumption, for any vertex $u$ of a graph considered in this paper, $p_1^{(u)}$, $p_2^{(u)}$, $p_3^{(u)}$ correspond to $p_1^{(v)}$, $p_2^{(v)}$, $p_3^{(v)}$ respectively.\n}\n\\label{region}\n\\end{figure}\n\nIf a graph $G$ satisfies $\\dim_{\\text{{\\rm poc}}}(G) \\leq 3$, then, by Theorem~\\ref{thm:intersectiongeneral}, we may assume that $V(G) \\subseteq \\mathcal{H}_+$ by translating each of the vertices of $G$ in the same direction and by the same amount.\n\n\\begin{Lem}\\label{lem:not-incl1}\nLet $D$ be a $3$-partial order and\nlet $G$ be the competition graph of $D$.\nSuppose that $G$ contains an induced path $uvw$ of length two.\nThen neither $A(u) \\cap A(v) \\subseteq A(w)$ nor $A(v) \\cap A(w) \\subseteq A(u)$.\n\\end{Lem}\n\n\\begin{proof}\nWe proceed by contradiction.\nSuppose that $A(u) \\cap A(v) \\subseteq A(w)$ or $A(v) \\cap A(w) \\subseteq A(u)$.\nBy symmetry, we may assume without loss of generality that $A(u) \\cap A(v) \\subseteq A(w)$.\nSince $u$ and $v$ are adjacent in $G$,\nthere exists a vertex $a \\in V(G)$ such that\n$\\triangle(a) \\subseteq A(u) \\cap A(v)$\nby Theorem~\\ref{thm:intersectiongeneral}.\nTherefore\n$\\triangle(a) \\subseteq A(w)$.\nSince $\\triangle(a) \\subseteq A(u)$ and $\\triangle(a) \\subseteq A(w)$,\nthe vertices $u$ and $w$ are adjacent in $G$ by Theorem~\\ref{thm:intersectiongeneral},\nwhich contradicts the assumption that\n$u$ and $w$ are not adjacent in $G$.\nHence the lemma holds.\n\\end{proof}\n\n\\begin{Defi}\nFor $v,w \\in \\mathcal{H}_+$,\nwe say that $v$ and $w$ are\n\\emph{crossing} if\n$A(v) \\cap A(w) \\neq \\emptyset$,\n$A(v) \\setminus A(w) \\neq \\emptyset$, and\n$A(w) \\setminus A(v) \\neq \\emptyset$.\n\\end{Defi}\n\n\\begin{Lem}\\label{lem:not-incl2}\nLet $D$ be a $3$-partial order and\nlet $G$ be the competition graph of $D$.\nSuppose that $G$ contains an induced path $xuvw$ of length three.\nThen $u$ and $v$ are crossing.\n\\end{Lem}\n\n\\begin{proof}\nSince $u$ and $v$ are adjacent in $G$, there exists a vertex $a \\in 
V(G)$\nsuch that $\\triangle(a) \\subseteq A(u) \\cap A(v)$\nby Theorem~\\ref{thm:intersectiongeneral}.\nTherefore $A(u) \\cap A(v) \\neq \\emptyset$.\nIf $A(v) \\subseteq A(u)$, then $A(v) \\cap A(w) \\subseteq A(u)$,\nwhich contradicts Lemma~\\ref{lem:not-incl1}.\nThus $A(v) \\setminus A(u) \\neq \\emptyset$.\nIf $A(u) \\subseteq A(v)$, then $A(x) \\cap A(u) \\subseteq A(v)$,\nwhich contradicts Lemma~\\ref{lem:not-incl1}.\nThus $A(u) \\setminus A(v) \\neq \\emptyset$.\nHence $u$ and $v$ are crossing.\n\\end{proof}\n\n\n\\begin{Lem}\\label{lem:intersecting3}\nIf $v$ and $w$ in $\\mathcal{H}_+$ are crossing,\nthen $p_k^{(x)} \\in \\triangle(y)$ for some $k \\in \\{1,2,3\\}$\nwhere $\\{x,y\\} = \\{v,w\\}$.\n\\end{Lem}\n\n\\begin{proof}\nSince $v$ and $w$ are crossing,\nwe have\n$A(v) \\cap A(w) \\neq \\emptyset$,\n$A(v) \\setminus A(w) \\neq \\emptyset$, and\n$A(w) \\setminus A(v) \\neq \\emptyset$.\nThen one of the vertices of the triangles $\\triangle(v)$ and $\\triangle(w)$\nis contained in the other triangle, thus the lemma holds.\n\\end{proof}\n\n\n\\begin{Defi}\nFor $k \\in \\{1,2,3\\}$,\nwe define a binary relation $\\stackrel{k}{\\rightarrow}$ on $\\mathcal{H}_+$ by\n\\[\nx \\stackrel{k}{\\rightarrow} y\n\\quad \\Leftrightarrow \\quad\n\\text{ $x$ and $y$ are crossing, and } p_k^{(y)} \\in \\triangle(x)\n\\]\nfor any $x, y \\in \\mathcal{H}_+$.\n\\end{Defi}\n\n\\begin{Lem}\\label{lem:transitive}\nLet $x,y,z \\in \\mathcal{H}_+$.\nSuppose that $x \\stackrel{k}{\\rightarrow} y$\nand $y \\stackrel{k}{\\rightarrow} z$ for some $k \\in \\{1,2,3\\}$\nand that $x$ and $z$ are crossing.\nThen $x \\stackrel{k}{\\rightarrow} z$.\n\\end{Lem}\n\n\\begin{proof}\nSince $x \\stackrel{k}{\\rightarrow} y$, $p_l^{(x)} \\not\\in R_i(y) \\cup R_{ij}(y) \\cup R_j(y)$ for each $l \\in \\{1,2,3\\}$,\nwhere $\\{i,j,k\\} = \\{1,2,3\\}$.\nSince $y \\stackrel{k}{\\rightarrow} z$, $p_l^{(z)} \\in R_i(y) \\cup R_{ij}(y) \\cup R_j(y)$ for each $l \\in \\{i,j\\}$.\nSince $x$ and $z$ are 
crossing, $p_k^{(z)} \\in \\triangle(x)$.\n\\end{proof}\n\n\n\\begin{Defi}\nFor $k \\in \\{1,2,3\\}$,\na sequence $(v_1, \\ldots, v_m)$ of $m$ points in $\\mathcal{H}_+$, where $m \\geq 2$,\nis said to be \\emph{consecutively tail-biting in Type $k$}\nif $v_i \\stackrel{k}{\\rightarrow} v_j$ for any $i < j$\n(see Figure~\\ref{consecutive}).\nA finite set $V$ of points in $\\mathcal{H}_+$\nis said to be \\emph{consecutively tail-biting}\nif there is an ordering $(v_1, \\ldots, v_m)$ of $V$\nsuch that $(v_1, \\ldots, v_m)$ is consecutively tail-biting.\n\\end{Defi}\n\n\\begin{figure}\n\\psfrag{A}{ $A(v_1)$}\n\\psfrag{B}{ $A(v_2)$}\n\\psfrag{C}{ $A(v_3)$}\n\\psfrag{D}{ $A(v_4)$}\n\\psfrag{E}{(a)}\n\\psfrag{F}{(b)}\n\\psfrag{G}{(c)}\n\\begin{center}\n\\includegraphics[height=5cm]{consecutive1.eps} \\hskip2.5em\n\\includegraphics[height=5cm]{consecutive2.eps} \\hskip2.5em\n\\includegraphics[height=5cm]{consecutive3.eps}\n\\end{center}\n\\caption{The sequences $(v_1,v_2,v_3,v_4)$ in (a), (b), (c) are consecutively tail-biting of Type 1, 2, 3, respectively.}\n\\label{consecutive}\n\\end{figure}\n\n\\section{The partial order competition dimensions of diamond-free chordal graphs}\\label{sec:chordal}\n\n\nIn this section, we show that\na chordal graph has partial order competition dimension at most three\nif it is diamond-free.\n\n\nA \\emph{block graph} is a graph such that each of its maximal $2$-connected subgraphs\nis a complete graph.\nThe following is well-known.\n\n\\begin{Lem}[\\hskip-0.0025em {\\cite[Proposition 1]{BM86}}]\\label{lem:blockchara}\nA graph is a block graph\nif and only if the graph is a diamond-free chordal graph.\n\\end{Lem}\n\nNote that\na block graph having no cut vertex is a disjoint union of complete graphs.\nFor block graphs having cut vertices, the following lemma holds.\n\n\\begin{Lem}\\label{lem:blockgraph}\nLet $G$ be a block graph having at least one cut vertex.\nThen $G$ has a maximal clique that contains exactly one cut 
vertex.\n\\end{Lem}\n\n\\begin{proof}\nLet $H$ be the subgraph induced by the cut vertices of $G$.\nBy definition, $H$ is obviously a block graph, so $H$ is chordal and there is a simplicial vertex $v$ in $H$.\nSince $v$ is a cut vertex of $G$, $v$ belongs to at least two maximal cliques of $G$.\nSuppose that each maximal clique containing $v$ contains another cut vertex of $G$.\nTake two maximal cliques $X_1$ and $X_2$ of $G$ containing $v$\nand let $x$ and $y$ be cut vertices of $G$ belonging to $X_1$ and $X_2$, respectively.\nThen both $x$ and $y$ are adjacent to $v$ in $H$.\nSince $G$ is a block graph,\n$X_1 \\setminus \\{v\\}$ and $X_2 \\setminus \\{v\\}$\nare contained in distinct connected components of $G-v$.\nThis implies that $x$ and $y$ are not adjacent in $H$,\nwhich contradicts the choice of $v$.\nTherefore there is a maximal clique $X$ containing $v$\nwithout any other cut vertex of $G$.\n\\end{proof}\n\n\n\\begin{Lem}\\label{lem:blockgraph2}\nEvery block graph $G$ is the intersection graph\nof a family $\\mathcal{F}$ of homothetic closed equilateral triangles\nin which every clique of $G$ is consecutively tail-biting.\n\\end{Lem}\n\n\\begin{proof}\nWe show by induction on the number of cut vertices of $G$.\nIf a block graph has no cut vertex, then it is a disjoint union of complete graphs and the statement\nis trivially true as the vertices of each complete subgraph can be formed as a sequence which is consecutively tail-biting (refer to Figure~\\ref{consecutive}).\n\nAssume that the statement is true for any block graph $G$ with $m$ cut vertices where $m \\geq 0$.\nNow we take a block graph $G$ with $m+1$ cut vertices.\nBy Lemma~\\ref{lem:blockgraph},\nthere is a maximal clique $X$ that contains exactly one cut vertex, say $w$.\nBy definition, the vertices of $X$ other than $w$ are simplicial vertices.\n\nDeleting the vertices of $X$ other than $w$\nand the edges adjacent to them,\nwe obtain a block graph $G^*$ with $m$ cut vertices.\nThen, by the 
induction hypothesis, $G^*$ is the intersection graph of a family ${\\cal F}^*$\nof homothetic closed equilateral triangles satisfying the statement.\nWe consider the triangles corresponding to $w$.\nLet $C$ and $C'$ be two maximal cliques of $G^*$ containing $w$.\nBy the induction hypothesis,\nthe vertices of $C$ and $C'$ can be ordered as\n$v_{1}, v_{2}, \\ldots, v_{l}$ and $v'_{1}, v'_{2}, \\ldots, v'_{l'}$, respectively,\nso that $v_{i} \\stackrel{k}{\\rightarrow} v_{j}$ if $i < j$, for some $k \\in \\{1,2,3\\}$ and\nthat $v'_{i'} \\stackrel{k'}{\\rightarrow} v'_{j'}$ if $i' < j'$, for some $k' \\in \\{1,2,3\\}$.\n\nSuppose that $\\triangle(v_i) \\cap \\triangle(v'_j) \\neq \\emptyset$ for $v_i$ and $v'_j$ which are distinct from $w$.\nThen $v_i$ and $v'_j$ are adjacent in $G^*$, which implies the existence of a diamond in $G$\nsince maximal cliques have size at least two.\nWe have reached a contradiction to Lemma~\\ref{lem:blockchara} and so $\\triangle(v_i) \\cap \\triangle(v'_j) = \\emptyset$ for any $i,j$.\nTherefore there is a segment of a side on $\\triangle(w)$ (with a positive length) that does not intersect with the triangle assigned to any vertex in $G^*$ other than $w$\nsince there are finitely many maximal cliques in $G^*$ that contain $w$.\nIf the side belongs to $l_{ij}^{(w)}$ for $i,j \\in \\{1,2,3\\}$,\nthen we may order the deleted vertices and assign the homothetic closed equilateral triangles\nwith sufficiently small sizes to them\nso that the closed neighborhood of $v$ is consecutively tail-biting in Type $k$ for $k \\in \\{1,2,3\\} \\setminus \\{i,j\\}$\nand none of the triangles intersects with the triangle corresponding to any vertex other than $w$ in $G^*$.\nIt is not difficult to see that the set of the triangles in $\\mathcal{F}^*$\ntogether with the triangles just obtained is the one desired for $\\mathcal{F}$.\n\\end{proof}\n\n\n\\begin{Thm}\nFor any diamond-free chordal graph $G$, $\\dim_{\\text{{\\rm poc}}}(G) \\leq 
3$.\n\\end{Thm}\n\n\\begin{proof}\nThe theorem follows from\nCorollary~\\ref{cor:closed}\nand Lemma~\\ref{lem:blockgraph2}.\n\\end{proof}\n\n\n\n\\section{Chordal graphs having partial order competition dimension greater than three}\\label{sec:dimpocmorethanthree}\n\nIn this section,\nwe present infinitely many chordal graphs $G$\nwith $\\dim_{\\text{{\\rm poc}}}(G) > 3$.\nWe first show two lemmas which will be repeatedly used\nin the proof of the theorem in this section.\n\n\\begin{Lem}\\label{lem:3triangles}\nLet $D$ be a $3$-partial order and\nlet $G$ be the competition graph of $D$.\nSuppose that $G$\ncontains a diamond $K_4-e$ as an induced subgraph,\nwhere $u$, $v$, $w$, $x$ are the vertices of the diamond and $e=vx$.\nIf the sequence $(u, v, w)$ is consecutively tail-biting in Type $k$ for some $k \\in \\{1,2,3\\}$,\nthen $p_i^{(x)} \\in R_i(v)$ and $p_j^{(x)} \\notin R_j(v)$ hold or $p_i^{(x)} \\notin R_i(v)$ and $p_j^{(x)} \\in R_j(v)$ hold where $\\{i,j,k\\} = \\{1,2,3\\}$.\n\\end{Lem}\n\n\\begin{proof}\nWithout loss of generality, we may assume that $k=3$.\nWe first claim that $p_1^{(x)} \\in R_1(v) \\cup R_2(v) \\cup R_{12}(v)$.\nSuppose not.\nThen $p_1^{(x)} \\in R:=\\mathcal{H} \\setminus (R_1(v) \\cup R_2(v) \\cup R_{12}(v))$.\nSince $A(x)$ and $A(v)$ are homothetic, $A(x) \\subseteq R$.\nThus $A(w) \\cap A(x) \\subseteq A(w) \\cap R$.\nSince $(u,v,w)$ is consecutively tail-biting in Type 3, $A(w) \\cap R \\subseteq A(v)$.\nTherefore $A(w) \\cap A(x)\\subseteq A(v)$, which contradicts Lemma~\\ref{lem:not-incl1}.\nThus $p_1^{(x)} \\in R_1(v) \\cup R_2(v) \\cup R_{12}(v)$.\nBy symmetry, $p_2^{(x)} \\in R_1(v) \\cup R_2(v) \\cup R_{12}(v)$.\n\nSuppose that both $p_1^{(x)}$ and $p_2^{(x)}$ are in $R_{12}(v)$.\nSince $A(x)$ and $A(v)$ are homothetic, $A(x) \\cap R \\subseteq A(v)$.\nBy the hypothesis that $(u,v,w)$ is consecutively tail-biting in Type 3,\nwe have $A(u) \\subseteq R$. 
Therefore $A(x) \\cap A(u) \\subseteq A(x) \\cap R$.\nThus $A(x) \\cap A(u) \\subseteq A(v)$, which contradicts Lemma~\\ref{lem:not-incl1}.\nTherefore $p_1^{(x)} \\in R_1(v) \\cup R_2(v)$ or $p_2^{(x)} \\in R_1(v) \\cup R_2(v)$.\nSince $p_1^{(x)} \\in R_2(v)$ (resp. $p_2^{(x)} \\in R_1(v)$) implies\n$p_2^{(x)} \\in R_2(v)$ (resp. $p_1^{(x)} \\in R_1(v)$), which is impossible, we have\n$p_1^{(x)} \\in R_1(v)$ or $p_2^{(x)} \\in R_2(v)$.\n\nSuppose that both $p_1^{(x)} \\in R_1(v)$ and $p_2^{(x)} \\in R_2(v)$ hold.\nThen $A(v) \\subseteq A(x)$ since $A(v)$ and $A(x)$ are homothetic.\nThen $A(u) \\cap A(v) \\subseteq A(x)$, which contradicts Lemma~\\ref{lem:not-incl1}.\nHence $p_1^{(x)} \\in R_1(v)$ and $p_2^{(x)} \\notin R_2(v)$ hold or $p_1^{(x)} \\notin R_1(v)$ and $p_2^{(x)} \\in R_2(v)$ hold.\n\\end{proof}\n\n\\begin{figure}\n\\psfrag{t}{\\small $t$}\n\\psfrag{u}{\\small $u$}\n\\psfrag{v}{\\small $v$}\n\\psfrag{w}{\\small $w$}\n\\psfrag{x}{\\small $x$}\n\\psfrag{y}{\\small $y$}\n\\begin{center}\n\\includegraphics{config.eps}\n\\end{center}\n\\caption{The graph $\\overline{\\mathrm{H}}$}\n\\label{fig:config}\n\\end{figure}\n\nLet $\\overline{\\mathrm{H}}$ be the graph on vertex set $\\{t,u,v,w,x,y\\}$ such that $\\{t,u,v,w\\}$ forms a complete graph $K_4$, $x$ is adjacent to only $t$ and $v$, and $y$ is adjacent to only $u$ and $w$ in $\\overline{\\mathrm{H}}$\n(see Figure~\\ref{fig:config} for an illustration).\n\n\\begin{Lem}\\label{lem:4triangles}\nLet $D$ be a $3$-partial order and let $G$ be the competition graph of $D$.\nSuppose that $G$ contains the graph $\\overline{\\mathrm{H}}$ as an induced subgraph and\n$(t, u, v, w)$ is consecutively tail-biting in Type $k$ for some $k \\in \\{1,2,3\\}$.\nThen, for $i,j$ with $\\{i,j,k\\} = \\{1,2,3\\}$,\n$p_i^{(x)} \\in R_i(u)$ implies $p_j^{(y)} \\in R_j(v)$.\n\\end{Lem}\n\n\\begin{proof}\nWithout loss of generality, we may assume that $k=3$.\nIt is sufficient to show that $p_1^{(x)} \\in R_1(u)$ implies $p_2^{(y)} 
\\in R_2(v)$.\nNow suppose that $p_1^{(x)} \\in R_1(u)$.\nSince $(t,u,v,w)$ is a tail-biting sequence of Type 3,\n$(t,u,v)$ and $(u,v,w)$ are tail-biting sequences of Type 3.\nSince $\\{t,u,v,x\\}$ induces a diamond and\n$(t,u,v)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:3triangles} that\n$p_1^{(x)} \\in R_1(u)$ and $p_2^{(x)} \\not\\in R_2(u)$ hold or $p_1^{(x)} \\notin R_1(u)$ and $p_2^{(x)} \\in R_2(u)$ hold.\nSince $p_1^{(x)} \\in R_1(u)$, it must hold that $p_1^{(x)} \\in R_1(u)$ and $p_2^{(x)} \\not\\in R_2(u)$.\nSince $A(u)$ and $A(x)$ are homothetic and $p_1^{(x)} \\in R_1(u)$,\nwe have $A(u) \\subseteq A(x) \\cup R_{23}(x)$.\n\nSince $\\{u,v,w,y\\}$ induces a diamond and\n$(u,v,w)$ is a consecutively tail-biting sequence of Type 3, it follows from Lemma~\\ref{lem:3triangles} that $p_1^{(y)} \\in R_1(v)$ and $p_2^{(y)} \\not\\in R_2(v)$ hold or $p_1^{(y)} \\not\\in R_1(v)$ and $p_2^{(y)} \\in R_2(v)$ hold.\nWe will claim that the latter is true as it implies $p_2^{(y)} \\in R_2(v)$.\nTo reach a contradiction, suppose the former, that is, $p_1^{(y)} \\in R_1(v)$ and $p_2^{(y)} \\not\\in R_2(v)$.\nSince $A(v)$ and $A(y)$ are homothetic and $p_1^{(y)} \\in R_1(v)$, we have $A(v) \\subseteq A(y) \\cup R_{23}(y)$.\nWe now show that $A(x) \\cap A(v) \\subseteq A(y)$.\nTake any $a \\in A(x) \\cap A(v)$.\nSince $A(v) \\subseteq A(y) \\cup R_{23}(y)$, we have $a \\in A(y) \\cup R_{23}(y)$.\nSuppose that $a \\not\\in A(y)$. 
Then $a \in R_{23}(y)$.
This together with the fact that $a \in A(x)$
implies $A(y) \cap R_{23}(x) = \emptyset$.
Since $A(u) \subseteq A(x) \cup R_{23}(x)$, we have
\begin{align*}
A(u) \cap A(y)
&\subseteq (A(x) \cup R_{23}(x)) \cap A(y) \\
&= (A(x) \cap A(y)) \cup (R_{23}(x) \cap A(y)) \\
&= (A(x) \cap A(y)) \cup \emptyset \\
&= A(x) \cap A(y) \subseteq A(x).
\end{align*}
Therefore $A(u) \cap A(y) \subseteq A(u) \cap A(x)$.
Since $u$ and $y$ are adjacent in $G$,
there exists $b \in V(G)$ such that
$\triangle(b) \subseteq A(u) \cap A(y)$.
Then $\triangle(b) \subseteq A(u) \cap A(x)$,
contradicting the fact that $u$ and $x$ are not adjacent in $G$.
Thus $a \notin R_{23}(y)$ and so $a \in A(y)$.
Hence we have shown that $A(x) \cap A(v) \subseteq A(y)$.
Since $x$ and $v$ are adjacent in $G$,
there exists $c \in V(G)$ such that
$\triangle(c) \subseteq A(x) \cap A(v)$.
Then $\triangle(c) \subseteq A(v) \cap A(y)$,
contradicting the fact that $v$ and $y$ are not adjacent in $G$.
Thus we have $p_1^{(y)} \not\in R_1(v)$ and $p_2^{(y)} \in R_2(v)$.
Hence the lemma holds.
\end{proof}


\begin{Defi}\label{def:expansion}
For a positive integer $n$,
let $G_n$ be the graph obtained from the complete graph $K_n$
by adding a path of length $2$
for each pair of vertices of $K_n$,
i.e.,
$V(G_n) = \{ v_i \mid 1 \leq i \leq n \}
\cup \{ v_{ij} \mid 1 \leq i < j \leq n \}$
and
$E(G_n) = \{ v_i v_j \mid 1 \leq i < j \leq n \}
\cup \{ v_i v_{ij} \mid 1 \leq i < j \leq n \}
\cup \{ v_j v_{ij} \mid 1 \leq i < j \leq n \}$.
\end{Defi}

\begin{Defi}
For a positive integer $m$,
the \emph{Ramsey number} $r(m,m,m)$
is the smallest positive integer $r$
such that any $3$-edge-colored complete graph $K_r$ of order $r$
contains a monochromatic complete graph $K_m$ of order $m$.
\end{Defi}

\begin{Lem}\label{lem:tail}
Let $m$ be a
positive integer at least $3$ and
let $n$ be an integer greater than or equal to
the Ramsey number $r(m,m,m)$.
If $\dim_{{\rm poc}}(G_n) \leq 3$,
then
there exists a sequence
$(x_1, \ldots, x_m)$ of vertices of $G_n$
such that $\{ x_1, \ldots, x_m \}$
is a clique of $G_n$
and that any subsequence
$(x_{i_1}, \ldots, x_{i_l})$ of $(x_1, \ldots, x_m)$
is consecutively tail-biting,
where $2 \leq l \leq m$
and $1 \leq i_1 < \cdots < i_l \leq m$.
\end{Lem}


\begin{proof}
Since the vertices $v_i$ and $v_j$ of $G_n$
are internal vertices of an induced path
of length three by the definition of $G_n$,
it follows from Lemma~\ref{lem:not-incl2} that
the vertices $v_i$ and $v_j$ of $G_n$
are crossing.
By Lemma~\ref{lem:intersecting3}, for any $1 \leq i < j \leq n$,
there exists $k \in \{1,2,3\}$ such that
$v_i \stackrel{k}{\rightarrow} v_j$ or $v_j \stackrel{k}{\rightarrow} v_i$.
Now we define an edge-coloring
$c \colon \{v_iv_j \mid 1 \leq i < j \leq n\} \to \{1,2,3\}$
by letting $c(v_iv_j)$ be a color $k \in \{1,2,3\}$ for which
$v_i \stackrel{k}{\rightarrow} v_j$ or $v_j \stackrel{k}{\rightarrow} v_i$.
Since $n \geq r(m,m,m)$,
the colored complete graph on $\{v_1, \ldots, v_n\}$
contains a monochromatic complete subgraph on $m$ vertices,
say of color $k$.
These $m$ vertices form a clique of $G_n$ any two of which are
related by $\stackrel{k}{\rightarrow}$, and
ordering them appropriately along this relation yields
a sequence $(x_1, \ldots, x_m)$ every subsequence of which
is consecutively tail-biting in Type $k$, as desired.
\end{proof}

\begin{Thm}
For any integer $n \geq r(5,5,5)$, $\dim_{{\rm poc}}(G_n) > 3$.
\end{Thm}

\begin{proof}
We prove the theorem by contradiction.
Suppose that $\dim_{{\rm poc}}(G_n) \leq 3$
for some $n \geq r(5,5,5)$.
By Lemma~\ref{lem:tail},
$G_n$ contains
a consecutively tail-biting sequence $(v_1, \ldots, v_5)$ of five vertices in Type $k$
such that $\{v_1, \ldots, v_5\}$ is a clique of $G_n$,
that
$(v_{i_1}, v_{i_2}, v_{i_3})$
is a consecutively tail-biting sequence for any $1 \leq i_1 < i_2 < i_3 \leq 5$,
and that
$(v_{i_1}, v_{i_2}, v_{i_3}, v_{i_4})$
is a consecutively tail-biting sequence for any $1 \leq i_1 < i_2 < i_3 < i_4 \leq 5$.
Without loss of generality, we may assume that $k=3$.

Since $\{v_1,v_2,v_3,v_{13}\}$ induces a diamond and
$(v_1,v_2,v_3)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:3triangles} that
$p_1^{(v_{13})} \in R_1(v_2)$ and $p_2^{(v_{13})} \not\in R_2(v_2)$ hold, or $p_1^{(v_{13})} \notin R_1(v_2)$ and $p_2^{(v_{13})} \in R_2(v_2)$ hold.

We first suppose that
$p_1^{(v_{13})} \in R_1(v_2)$ and $p_2^{(v_{13})} \not\in R_2(v_2)$.
Since $\{v_1,v_2,v_3,v_4,v_{13},v_{24}\}$ induces an $\overline{\mathrm{H}}$ and
$(v_1,v_2,v_3,v_4)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:4triangles} and $p_1^{(v_{13})} \in R_1(v_2)$ that
$p_2^{(v_{24})} \in R_2(v_3)$.
Since $\{v_1,v_2,v_3,v_5,v_{13},v_{25}\}$ induces an $\overline{\mathrm{H}}$ and
$(v_1,v_2,v_3,v_5)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:4triangles} and $p_1^{(v_{13})} \in R_1(v_2)$ that
\begin{equation}\label{eqn:three}
p_2^{(v_{25})} \in R_2(v_3).
\end{equation}
Since $\{v_2,v_3,v_4,v_5,v_{24},v_{35}\}$ induces an $\overline{\mathrm{H}}$ and
$(v_2,v_3,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:4triangles} and $p_2^{(v_{24})} \in R_2(v_3)$ that
\begin{equation}\label{eqn:four}
p_1^{(v_{35})} \in R_1(v_4).
\end{equation}
Since $\{v_1,v_3,v_4,v_{14}\}$ induces a diamond and
$(v_1,v_3,v_4)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:3triangles} that
$p_1^{(v_{14})} \in R_1(v_3)$ and $p_2^{(v_{14})} \not\in R_2(v_3)$ hold, or $p_1^{(v_{14})} \notin R_1(v_3)$ and $p_2^{(v_{14})} \in R_2(v_3)$ hold.
Suppose that $p_1^{(v_{14})} \in R_1(v_3)$ and $p_2^{(v_{14})} \not\in R_2(v_3)$.
Since $\{v_1,v_3,v_4,v_5,v_{14},v_{35}\}$ induces an $\overline{\mathrm{H}}$ and
$(v_1,v_3,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:4triangles} and $p_1^{(v_{14})} \in R_1(v_3)$ that
\begin{equation}\label{eqn:four-2}
p_2^{(v_{35})} \in R_2(v_4).
\end{equation}
Since $\{v_3,v_4,v_5,v_{35}\}$ induces a diamond and
$(v_3,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:3triangles} that
$p_1^{(v_{35})} \in R_1(v_4)$ and $p_2^{(v_{35})} \notin R_2(v_4)$ hold or
$p_1^{(v_{35})} \notin R_1(v_4)$ and $p_2^{(v_{35})} \in R_2(v_4)$ hold,
which is a contradiction to the fact that both (\ref{eqn:four}) and (\ref{eqn:four-2}) hold.
Thus
\begin{equation}\label{eqn:five}
p_1^{(v_{14})} \not\in R_1(v_3) \text{ and } p_2^{(v_{14})} \in R_2(v_3).
\end{equation}
Since $\{v_1,v_2,v_4,v_{14}\}$ induces a diamond and
$(v_1,v_2,v_4)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:3triangles} that
$p_1^{(v_{14})} \in R_1(v_2)$ and $p_2^{(v_{14})} \notin R_2(v_2)$ hold, or $p_1^{(v_{14})} \notin R_1(v_2)$ and $p_2^{(v_{14})} \in R_2(v_2)$ hold.
Suppose that $p_1^{(v_{14})} \not\in R_1(v_2)$ and $p_2^{(v_{14})} \in R_2(v_2)$.
Since $\{v_1,v_2,v_4,v_5,v_{14},v_{25}\}$ induces an $\overline{\mathrm{H}}$ and
$(v_1,v_2,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:4triangles} and $p_2^{(v_{14})} \in R_2(v_2)$ that
\begin{equation}\label{eqn:six}
p_1^{(v_{25})} \in R_1(v_4).
\end{equation}
By (\ref{eqn:three}) and (\ref{eqn:six}),
since $A(v_4)$ and $A(v_{25})$ are homothetic,
we have
\begin{equation}\label{eqn:six-2}
p_2^{(v_{25})} \in R_2(v_4).
\end{equation}
Since $\{v_2,v_4,v_5,v_{25}\}$ induces a diamond and
$(v_2,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,
it follows from Lemma~\ref{lem:3triangles} that
$p_1^{(v_{25})} \in R_1(v_4)$ and $p_2^{(v_{25})} \notin R_2(v_4)$ hold, or $p_1^{(v_{25})} \notin R_1(v_4)$ and $p_2^{(v_{25})} \in R_2(v_4)$ hold,
which is a contradiction to the fact that both (\ref{eqn:six}) and (\ref{eqn:six-2}) hold.
Thus
$p_1^{(v_{14})} \in R_1(v_2)$ and $p_2^{(v_{14})} \not\in R_2(v_2)$.

Since $A(v_3)$ and $A(v_{14})$ are homothetic,
we have
\begin{equation}\label{eqn:five-2}
p_1^{(v_{14})} \in R_1(v_3),
\end{equation}
contradicting (\ref{eqn:five}).

In the case where $p_1^{(v_{13})} \not\in R_1(v_2)$ and $p_2^{(v_{13})} \in
R_2(v_2)$, we also reach a contradiction by applying a similar argument.

Hence, $\dim_{{\rm poc}}(G_n) > 3$ holds for any $n \geq r(5,5,5)$.
\end{proof}
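To make Definition~\ref{def:expansion} concrete, the following small worked example (our illustration; the choice $n=3$ and the written-out vertex and edge lists are not part of the original text) spells out the smallest nontrivial case and exhibits the induced path used at the beginning of the proof of Lemma~\ref{lem:tail}:

```latex
% Illustration (ours): the graph G_3 of Definition \ref{def:expansion}.
For $n = 3$, the graph $G_3$ has
\[
V(G_3) = \{v_1, v_2, v_3, v_{12}, v_{13}, v_{23}\}
\]
and
\[
E(G_3) = \{v_1v_2, v_1v_3, v_2v_3\}
\cup \{v_1v_{12}, v_2v_{12}, v_1v_{13}, v_3v_{13}, v_2v_{23}, v_3v_{23}\}.
\]
% Each v_{ij} is adjacent only to v_i and v_j, so it has degree 2.
Each vertex $v_{ij}$ has degree $2$, and so, for example,
$v_{13} v_1 v_2 v_{23}$ is an induced path of length three
whose internal vertices are $v_1$ and $v_2$;
this is exactly the configuration invoked in the proof of
Lemma~\ref{lem:tail} to conclude that $v_1$ and $v_2$ are crossing.
```

The same pattern holds for every pair $v_i, v_j$ in $G_n$: the path $v_{ik}\,v_i\,v_j\,v_{jl}$ (with $k \neq j$ and $l \neq i$) is induced because each subdivision vertex is adjacent only to its two endpoints in $K_n$.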