\\section{Introduction}\\label{sec:intro}\nWith the proliferation of big data matrices, dimension reduction has become an important tool in many data analysis applications. There are several methods of dimension reduction for a given problem; however, the approximated data often consists of derived features that are either no longer interpretable or difficult to interpret in the original context. For example, while the singular value decomposition (SVD) provides an optimal approximation and compression of data, it may be difficult for domain experts to directly draw conclusions or interpret the singular vectors. In some applications, it is necessary to find a dimension reduction method that preserves the original properties (such as sparsity, nonnegativity, being integer-valued) of the data and ensures interpretability. In an attempt to solve this difficulty, one possibility for a low-rank representation of a given data matrix is to use a subset of the original columns and rows of the matrix itself: a CUR decomposition; see, e.g., Mahoney and Drineas \\cite{Drineas}. The selected subsets of rows and columns capture the most relevant information of the original matrix.\n\nA CUR decomposition of rank $k$ of a (square or rectangular) $m \\times n$ matrix $A$ is of the form\n\\begin{equation}\n\\label{cur}\nA \\ \\approx \\ CMR \\ := \\ A P \\, \\cdot \\, M \\, \\cdot \\, S^TA.\n\\end{equation}\n Here, $P$ is an $n \\times k$ (where $k < \\min(m,n)$) {\\em index selection matrix} with some columns of the identity matrix $I_n$ that selects certain columns of $A$.\nSimilarly, $S$ is an $m \\times k$ matrix with columns of $I_m$\nthat selects certain rows of $A$; so $C$ is $m \\times k$ and $R$ is $k \\times n$. 
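To make the CUR form concrete, here is a minimal numpy sketch (an illustration with arbitrarily chosen index vectors, not an index selection strategy): it forms $C = AP$ and $R = S^TA$ by slicing, and uses the middle matrix $M = C^+AR^+$, one common choice discussed later in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 test matrix: suitable columns/rows can reproduce it exactly.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))

p = [0, 3]   # hypothetical column indices (the columns P selects)
s = [1, 4]   # hypothetical row indices (the rows S selects)

C = A[:, p]                                    # C = A P,   6 x 2
R = A[s, :]                                    # R = S^T A, 2 x 5
M = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # M = C^+ A R^+

err = np.linalg.norm(A - C @ M @ R, 2)
print(err < 1e-10)   # True: CMR reproduces the rank-2 matrix exactly
```

For a matrix of rank greater than $k$, the same construction yields only an approximation, and its quality depends on which columns and rows are selected; that choice is the subject of this paper.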
We construct the $k \\times k$ matrix $M$ in such a way that the decomposition has some desirable approximation properties that we will discuss in \\cref{sec:GCUR}. (In line with \\cite{Str19}, we will use the letter $M$ rather than $U$.) It is also possible to select a different number of columns and rows; in that case, $M$ will not be of dimension $k\\times k$. It is worth noting that, given $k$, this decomposition is not unique; there are several ways to obtain this form of approximation to $A$ with different techniques of choosing the representative columns and rows. Many algorithms for this decomposition using the truncated SVD (TSVD) as a basis have been proposed \\cite{Boutsidis, Zhang, Mahoney, Drineas}. In \\cite{Sorensen}, Sorensen and Embree present a CUR decomposition inspired by the discrete empirical interpolation method (DEIM) applied to a TSVD. Let the rank-$k$ TSVD be\n\\begin{equation}\n\\label{svd}\nA \\approx W_k \\Psi_k Z_k^T,\n\\end{equation}\nwhere the columns of $W_k$ and $Z_k$ are orthonormal,\nwhile $\\Psi_k$ is diagonal with nonnegative elements. Throughout the paper we assume that the truncated singular value decomposition is unique, i.e., that the $k$th singular value is not equal to the $(k+1)$st singular value.\n\nThere is extensive work on CUR-type decompositions in both numerical linear algebra and theoretical computer science. In this paper, we develop a generalized CUR decomposition (GCUR) for a matrix pair, i.e., for any two matrices $A$ and $B$ with the same number of columns: $A$ is $m\\times n$, and $B$ is $d\\times n$ and of full rank. The intuition behind this generalized CUR decomposition is that we can view it as a {\\em CUR decomposition of $A$ relative to $B$}. As we will see in \\cref{pp3}, when $B$ is square and nonsingular, the GCUR decomposition has a close connection with the CUR of $AB^{-1}$. The GCUR is also applicable to nonsquare matrices $B$; see the examples in \\cref{sec:EXP}. 
We show in \\cref{pp3} that if $B$ is nonsquare but of full rank, we still have a close connection between the CUR decomposition of $AB^+$ (where $B^+$ denotes the pseudoinverse of $B$) and the GCUR decomposition. Another intuition for this GCUR decomposition comes from a footnote remark by Mahoney and Drineas \\cite[p.~700]{Mahoney}: for data sets where a low-dimensional subspace obtained by the SVD cannot capture class or subgroup separation, an SVD-based CUR decomposition performs equally poorly. This is evident in \\cref{exp:3}.\n\nInspired by the work of Sorensen and Embree \\cite{Sorensen}, we present a generalized CUR decomposition using the discrete empirical interpolation method.\nThe DEIM algorithm for interpolation indices, presented in \\cite{Chaturantabut}, is a discrete variant of the empirical interpolation proposed in \\cite{Barrault} as a method for model order reduction of nonlinear dynamical systems. In \\cite{Sorensen}, the authors used DEIM as an index selection technique for constructing the $C$ and $R$ factors of a CUR decomposition. The DEIM algorithm independently selects the column and row indices based on the right and left singular vectors of a data matrix $A$, respectively. Our new GCUR method uses the matrices obtained from the GSVD instead. Besides using DEIM on the GSVD for index selection, we can also use other CUR-type index selection strategies for the GCUR. The proposed method can be used in situations where a low-rank matrix is perturbed with noise whose covariance is not a multiple of the identity matrix. It may also be appropriate for applications where one is interested in extracting the most discriminative information from a data set of interest relative to another data set. We will see examples of these in \\cref{sec:EXP}. 
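Noise with a prescribed, non-identity covariance can be generated directly: white Gaussian noise is multiplied by the Cholesky factor of the desired covariance matrix. A hedged numpy sketch (the covariance entries below match the example that follows; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Desired noise covariance: not a multiple of the identity, hence "colored"
Cov = np.array([[1.0, 0.8, 0.3],
                [0.8, 1.0, 0.8],
                [0.3, 0.8, 1.0]])
Rchol = np.linalg.cholesky(Cov).T        # upper-triangular R with Cov = R^T R

W = rng.standard_normal((100_000, 3))    # white noise, identity covariance
E = W @ Rchol                            # colored noise with covariance Cov

sample_cov = E.T @ E / E.shape[0]
print(np.allclose(sample_cov, Cov, atol=0.05))   # True
```

The rows of `E` have covariance `Rchol.T @ Rchol = Cov`, which is the construction used in the example below.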
\n\n\\begin{example}\nThe following simple example shows that using the matrices obtained from the GSVD instead of the SVD can lead to more accurate results when approximating data with colored noise. As in \\cite[p.~55]{hansen} and \\cite{park1994}, we use the term ``colored noise'' for noise whose covariance matrix is not a multiple of the identity.\n\nWe are dealing with a full-rank matrix $A_E$ representing low-rank data; we want to recover the original low-rank matrix from its perturbation by colored noise. Our test matrix $A_E$ is a rank-2 matrix $A$ of size $3\\times3$ perturbed by additive colored noise $E$ with a given desired covariance structure. We take \n\n\\[A= {\\footnotesize \\begin{bmatrix*}[r]\n 1 & 0 & 1\\\\\n 0 & 2 & 2\\\\\n 1 & 1 & 2\n \\end{bmatrix*}}, \n \\quad E^T\\!E= {\\footnotesize \\begin{bmatrix*}[l]\n 1.0 & 0.8 & 0.3\\\\\n 0.8 & 1.0 & 0.8\\\\\n 0.3 & 0.8 & 1.0\n \\end{bmatrix*}}.\\]\nWe generate the colored noise as additive white Gaussian noise multiplied by the Cholesky factor $R$ of the desired covariance matrix. The matrix $A_E$ is, as a result, a sum of a rank-2 matrix and a correlated Gaussian noise matrix. We compute the SVD of both $A$ and $A_E$. The $k$ dominant left singular vectors of $A$ are denoted by $W_k$, while those of $A_E$ are $\\widetilde{W}_k$. We also compute the GSVD of $(A_E, R)$ and denote the $k$ dominant left generalized singular vectors by $U_k$. Since we are interested in recovering $A$, we examine the angle between the leading $k$-dimensional exact left singular subspace \\text{Range}$(W_k)$ and its approximations \\text{Range}$(\\widetilde{W}_k)$ and \\text{Range}$(U_k)$. We generate 1000 different test cases and take the average of the subspace angles.\n\n\\Cref{tab:0} shows the results for $k=2$ and three different noise levels. We observe that, in terms of subspace angles, the approximations obtained using the GSVD are more accurate than those from the SVD, with a gain in accuracy of roughly 30--40\\%. 
This illustrates the potential advantage of using generalized singular vectors in the presence of colored noise.\n\\begin{table}[htb!]\n\\centering\n{\\caption{The average angle between the leading two-dimensional exact singular subspace \\text{Range}$(W_2)$ (which is the range of $A$) and its approximations \\text{Range}$(\\widetilde{W}_2)$ and \\text{Range}$(U_2)$, for different values of the noise level $\\varepsilon$. The subspaces \\text{Range}$(\\widetilde{W}_2)$ and \\text{Range}$(U_2)$ are from the SVD of $A_E$ and the GSVD of $(A_E, R)$, respectively.}\\label{tab:0}\n{\\footnotesize\n\\begin{tabular}{clc} \\hline \\rule{0pt}{3mm}%\n$\\varepsilon$ & Method & Subspace angle \\\\ \\hline \\rule{0pt}{3.5mm}%\n$5\\cdot 10^{-2}$ & SVD &$1.7\\cdot 10^{-2}$ \\\\\n& GSVD & $1.2\\cdot 10^{-2}$ \\\\[0.5mm] \\hline \\rule{0pt}{3.5mm}%\n$5\\cdot 10^{-3}$ & SVD &$1.7\\cdot 10^{-3}$ \\\\\n& GSVD & $1.1\\cdot 10^{-3}$ \\\\[0.5mm] \\hline \\rule{0pt}{3.5mm}%\n$5\\cdot 10^{-4}$ & SVD & $1.7\\cdot 10^{-4}$ \\\\\n& GSVD & $1.1\\cdot 10^{-4}$ \\\\[0.5mm]\\hline %\n\\end{tabular}}}\n\\end{table}\n\\end{example}\n\nInspired by this example, we expect that the GCUR may produce better approximation results than the CUR in the presence of non-white noise, as it is based on the GSVD instead of the SVD. We show in \\cref{sec:EXP} that the GSVD and the GCUR may provide equally good approximation results even when we use an inexact Cholesky factor. \n\nThroughout the paper, we denote the 2-norm by $\\norm{\\cdot}$ and the $\\infty$-norm by $\\norm{\\cdot}_\\infty$. We use MATLAB notation to index vectors and matrices; thus, $A(:,p)$ denotes the $k$ columns of $A$ whose corresponding indices are in vector $p \\in \\mathbb N^k$.\n\n\\textbf{Outline.}\nWe give a brief introduction to the generalized singular value decomposition in \\cref{sec:GSVD}. We also discuss the truncated GSVD and its approximation error bounds. 
We summarize the DEIM technique we use for index selection in \\cref{sec:DEIM}. \\Cref{sec:GCUR} introduces the new generalized CUR decomposition with an analysis of its error bounds. In \\cref{alg:GCUR-DEIM}, we present a DEIM-type GCUR decomposition algorithm. Results of numerical experiments are presented in \\cref{sec:EXP}, followed by conclusions in \\cref{sec:Con}. \n\n\\section{Generalized singular value decomposition}\\label{sec:GSVD}\nThe GSVD appears throughout this paper since it is a key building block of the proposed algorithm. This section gives a brief overview of this decomposition. The existence of the GSVD was first established by Van Loan \\cite{Van}. Paige and Saunders \\cite{Paige} later presented a more general formulation without any restrictions on the dimensions, except that both matrices must have the same number of columns. Other formulations and contributions to the GSVD have been proposed in \\cite{stewart1982,sun,van1985}. For our applications in this paper, let $A\\in \\mathbb R^{m\\times n}$ and $B\\in \\mathbb R^{d\\times n}$ with both $m\\ge n$ and $d\\ge n$. Following the formulation of the GSVD proposed by Van Loan \\cite{Van}, there exist matrices $U \\in{ \\mathbb R ^{m \\times m}}$, $V \\in{ \\mathbb R^{d \\times d}}$ with orthonormal columns and a nonsingular $X \\in {\\mathbb R ^{n \\times n}}$ such that\n\\begin{equation}\n\\begin{aligned}\n\\label{gsvd}\nU^T\\!AX &= \\Gamma = \\text{diag}(\\gamma_1,\\dots,\\gamma_n), \\qquad &\\gamma_i\\in [0,1],\\\\\nV^T\\!BX &=\\Sigma = \\text{diag}(\\sigma_1,\\dots,\\sigma_n), \\qquad &\\sigma_i\\in [0,1],\n\\end{aligned}\n\\end{equation}\nwhere $\\gamma_i^{2}+\\sigma_i^{2} = 1 $. Although traditionally the ratios $\\gamma_i\/\\sigma_i$ are in a nondecreasing order, for our purpose we will instead maintain a nonincreasing order. 
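As an illustration only, a decomposition of this form can be computed for small dense full-rank pairs from the symmetric generalized eigenproblem $A^T\!A\,x = \lambda\, B^T\!B\,x$; note that this route forms the cross products explicitly, which dedicated GSVD algorithms avoid. A hedged Python sketch:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
m, d, n = 8, 7, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((d, n))

# Solve A^T A x = lam * B^T B x; eigh normalizes X^T (B^T B) X = I.
lam, X = eigh(A.T @ A, B.T @ B)
lam, X = lam[::-1], X[:, ::-1]         # nonincreasing ratios gamma_i/sigma_i

X = X / np.sqrt(1.0 + lam)             # now ||A x_i||^2 + ||B x_i||^2 = 1
gamma = np.linalg.norm(A @ X, axis=0)  # gamma_i = sqrt(lam_i / (1 + lam_i))
sigma = np.linalg.norm(B @ X, axis=0)  # sigma_i = sqrt(1 / (1 + lam_i))
U = (A @ X) / gamma                    # orthonormal columns, U^T A X = Gamma
V = (B @ X) / sigma                    # orthonormal columns, V^T B X = Sigma

Y = np.linalg.inv(X).T                 # A = U Gamma Y^T, B = V Sigma Y^T
print(np.allclose(gamma**2 + sigma**2, 1.0),
      np.allclose(A, U @ np.diag(gamma) @ Y.T),
      np.allclose(B, V @ np.diag(sigma) @ Y.T))
```

The sketch also recovers the matrix $Y = X^{-T}$ used in the text, so the two factorizations $A = U\Gamma Y^T$ and $B = V\Sigma Y^T$ can be checked numerically.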
The matrices $U$ and $V$ contain the left generalized singular vectors of $A$ and $B$, respectively; similarly, $X$ contains the right generalized singular vectors and is identical for both decompositions. While the SVD provides two sets of linearly independent basis vectors, the GSVD of $(A, B)$ gives three new sets of linearly independent basis vectors (the columns of $U, V,$ and $X$) so that the two matrices $A$ and $B$ are diagonal when transformed to these new bases.\nWe note that only the reduced GSVD is needed, so that we can assume that $U \\in \\mathbb R^{m \\times n}$, $V \\in \\mathbb R^{d \\times n}$, and $\\Gamma$ and $\\Sigma$ are $n \\times n$.\n\nOur analysis is based on the following formulation of the GSVD presented in \\cite{van1985}. Let $Y := X^{-T}$ in the GSVD of \\eqref{gsvd}; then $A = U \\Gamma Y^T$ and $B = V \\Sigma Y^T$. Let us characterize the matrix $Y$. (In fact, Matlab's {\\tt gsvd} routine returns $Y$ instead of $X$.) Since\n\\begin{equation}\\label{eq:mgsvd}\n A=U \\, \\Gamma \\, Y^T, \\qquad B=V \\, \\Sigma \\, Y^T \n\\end{equation}\nwe have the following congruence transformations\n\\[A^T\\!A=Y(\\Gamma^T\\Gamma)Y^T, \\qquad B^T\\!B=Y(\\Sigma^T\\Sigma)Y^T.\\]\nFrom the above, it follows that $A^T\\!A$ has the same inertia as $\\Gamma^T\\Gamma$, and the same holds for $B^T\\!B$ and $\\Sigma^T\\Sigma$ (here this mainly gives information on the number of zero eigenvalues).\nWe also see that, provided $A$ and $B$ are of full rank, these similarity transformations hold:\n\\begin{equation}\n\\label{eqn:matrix_Y}\n\\begin{aligned}\n (B^T\\!B)(A^T\\!A)^{-1} &=Y(\\Sigma^T\\Sigma)(\\Gamma^T\\Gamma)^{-1}Y^{-1} = Y\\,\\text{diag}(\\sigma_i^2\/\\gamma_i^2)\\, Y^{-1},\\\\\n (A^T\\!A)(B^T\\!B)^{-1} &=Y(\\Gamma^T\\Gamma)(\\Sigma^T\\Sigma)^{-1}Y^{-1}=Y\\,\\text{diag}(\\gamma_i^2\/\\sigma_i^2)\\, Y^{-1}.\n\\end{aligned}\n\\end{equation}\nThe columns of the matrix $Y$ are therefore the eigenvectors for both $(A^T\\!A)(B^T\\!B)^{-1}$ 
and its inverse $(B^T\\!B)(A^T\\!A)^{-1}$. The GSVD avoids the explicit formation of the cross-product matrices $A^T\\!A$ and $B^T\\!B$ (see also \\cref{exp:3}).\n\n\\textbf{Truncated GSVD.}\nIn some practical applications it could be of interest to approximate the pair $(A, B)$ by a truncated pair of matrices $(A_k, B_k)$ of a specified rank $k$. For use in \\cref{sec:GCUR}, we define the truncated GSVD (TGSVD) for $(A,B)$ as (cf.~\\cite[(2.34)]{hansen})\n\\begin{equation}\n \\label{eq:tgsvd}\n A_k := U_k \\Gamma_k Y_k^T, \\qquad B_k := V_k \\Sigma_k Y_k^T,\n\\end{equation}\nwhere $k < n$. If we partition\n\\begin{equation}\\label{eq:pgsvd}\nU = [U_k \\ \\widehat{U}], \\ V = [V_k \\ \\widehat{V}], \\ Y = [Y_k \\ \\widehat{Y}], \\ \\Gamma = \\text{diag} (\\Gamma_k, \\widehat{\\Gamma}), \\ \\Sigma = \\text{diag} (\\Sigma_k, \\widehat{\\Sigma})\n\\end{equation}\nthen it follows that $A - A_k = \\widehat{U} \\, \\widehat{\\Gamma} \\, \\widehat{Y}^T.$\nThe following proposition is useful for understanding the error bounds for the GCUR. In line with \\cite[p.~495]{pchansen},\nlet $\\psi_i(A)$ and $\\psi_i(Y)$ be the singular values of the matrices $A$ and $Y$, respectively (cf.~also \\eqref{svd}). 
The first and second statements of the following proposition are from \\cite{pchansen}; the third statement, while it may not yet be present in the literature, is straightforward.\n\\begin{proposition}\\label{pp1}\nLet $A = U\\, \\Gamma \\, X^{-1} = U \\, \\Gamma \\, Y^T$ as in \\eqref{gsvd}, with $Y = X^{-T}$. Then for $i=1,\\dots,n$ (see, e.g., \\cite[pp.~495--496]{pchansen})\n\\[\\gamma_i\\cdot\\psi_{\\min}(Y)\\leq \\psi_i(A)=\\psi_i(U \\, \\Gamma \\, Y^T) \\leq \\psi_i(\\Gamma) ~ \\norm{Y}=\\gamma_i\\cdot\\norm{Y}\\]\nso\n\\[ \\frac{\\psi_i(A)}{\\|Y\\|} \\le\n\\gamma_i= \\psi_i(\\Gamma) =\\psi_i(U^T\\!A Y^{-T}) \\leq \\psi_i(A)~\\norm{Y^{-1}}.\\]\nMoreover,\n\\[ {\\gamma_{k+1}}\\cdot\\psi_{\\min}(\\widehat{Y}) \\leq \\norm {A - A_k} \\leq {\\gamma_{k+1}}\\cdot \\norm{\\widehat{Y}}. \\]\n\\end{proposition}\n\\begin{proof}\nThis follows from \\eqref{eq:mgsvd} and the well-known property that for the product of two matrices we have $\\psi_i(A) \\, \\psi_{\\min}{(B)} \\le \\psi_i(AB) \\leq \\psi_i(A) \\, \\norm{B}$ (see, e.g., \\cite[p.~89]{Householder}).\n\\end{proof}\n\nThe results above are relevant tools for the analysis and understanding of the generalized CUR and its error bounds, which we will introduce in \\cref{sec:GCUR}. \n\\section{Discrete empirical interpolation method}\\label{sec:DEIM} We now summarize the tool from the existing literature \\cite{Sorensen, Chaturantabut} that we use to select columns and\/or rows from matrices. Besides the GSVD, the DEIM algorithm plays an important role in the proposed method. The DEIM procedure works through the columns of a specified basis matrix sequentially; these basis vectors must be linearly independent. Assuming we have a full-rank basis matrix $U \\in \\mathbb R^{m\\times k}$ with $k\\le m$, to select $k$ rows from $U$, the DEIM procedure constructs an index vector $s\\in \\mathbb N^k$ with non-repeating values in $\\{1,\\dots,m\\}$. 
Defining the selection matrix $S$ as an $m\\times k$ identity matrix indexed by $s$, i.e., $S=I(:,s)$ and $\\mathbf x(s) = S^T\\mathbf x$ (cf.~\\cite{Sorensen}), we have an {\\em interpolatory projector} defined through the DEIM procedure as\n\\[\\mathbb{S}=U(S^TU)^{-1}S^T.\\]\nWe can show that $S^TU$ is nonsingular (see \\cite[Lemma~3.2]{Sorensen}). The term ``interpolatory projector\" stems from the fact that for any $\\mathbf x\\in \\mathbb R^m$ we have\n\\[(\\mathbb{S}\\mathbf x)(s)=S^T\\mathbb{S}\\mathbf x=S^TU(S^TU)^{-1}S^T\\mathbf x=S^T\\mathbf x=\\mathbf x(s),\\]\nimplying that the projected vector $\\mathbb{S}\\mathbf x$ matches $\\mathbf x$ in the entries indexed by $s$ \\cite{Sorensen}. \n\nTo select the indices contained in $s$, the columns of $U$ are considered successively. The first interpolation index corresponds to the entry with the largest magnitude in the first basis vector. Each subsequent interpolation index is selected by removing from the next basis vector its interpolatory projection onto the previously processed basis vectors and finding the index of the entry with the largest magnitude in the residual vector. The index selection using DEIM is limited by the rank of the basis matrix, i.e., the number of indices selected can be no more than the number of vectors available.\n\nTo form $s$, let $\\mathbf u_j$ denote the $j$th column of $U$ and $U_j$ be the matrix of the first $j$ columns of $U$. Similarly, let $s_j$ contain the first $j$ entries of $s$, and let $S_j=I(:,s_j)$. 
More precisely, we define $s_1$ such that\n$|\\mathbf u_1(s_1)|=\\norm{\\mathbf u_1}_\\infty$ and the $j$th interpolatory projector $\\mathbb{S}_j$ as \n\\[\\mathbb{S}_j=U_j(S_j^TU_j)^{-1}S^T_j.\\]\nTo select $s_j$ for $j\\ge2$, we remove from $\\mathbf u_j$ its interpolatory projection onto the previous basis vectors,\n\\[\\mathbf r_j=\\mathbf u_j-\\mathbb{S}_{j-1}\\mathbf u_j,\\]\nand then take the index of the entry with the largest magnitude in the residual, i.e., $s_j$ such that \n\\[|\\mathbf r_j(s_j)|=\\norm{\\mathbf r_j}_\\infty.\\]\nIf there are multiple options, we take the smallest index. In a nutshell, we find the indices via a non-orthogonal Gram--Schmidt-like process (oblique projections) on the $\\mathbf u$-vectors. Since the input vectors are linearly independent, the residual vector $\\mathbf r$ is guaranteed to be nonzero. This DEIM algorithm forces the selection matrix $S$ to find $k$ linearly independent rows of $U$ such that the local growth of $\\norm{(S^TU)^{-1}}$ is kept modest via a greedy search \\cite[p.~2748]{Chaturantabut}, as implemented in \\cref{alg:CUR-DEIM}.\n\n\\begin{tcbverbatimwrite}{tmp_\\jobname_alg.tex}\n\\begin{algorithm}\n\\caption{DEIM index selection \\cite{Sorensen}}\n\\label{alg:CUR-DEIM}\n {\\bf Input:} $U \\in \\mathbb R^{m \\times k}$, with $k\\le m$ (linearly independent columns)\\\\\n {\\bf Output:} Indices $s \\in \\mathbb N^k$ with distinct entries in $\\{1,\\dots,m\\}$ \n\\begin{algorithmic}[1]\n\t\\STATE{$\\mathbf u = U(:,1)$}\n\t\\STATE{ $s_1 = \\argmax_{1\\le i\\le m}~|(\\mathbf u)_i|$}\n\t\\FOR{ $j = 2, \\dots, k$ }\n\t\\STATE{$\\mathbf u = U(:,j)$ }\n\t\\STATE{$\\mathbf c=U(s,1:j-1)^{-1}\\mathbf u(s)$}\n\t\\STATE{$\\mathbf r=\\mathbf u-U(:,1:j-1)\\,\\mathbf c$}\n\t\\STATE{$s_j$ = $\\argmax_{1\\le i\\le m}~|(\\mathbf r)_i|$}\n\t\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\\end{tcbverbatimwrite}\n\n\\input{tmp_\\jobname_alg.tex}\n\nAlthough the DEIM index selection procedure is basis-dependent, once the 
interpolation indices are determined, the DEIM interpolatory projector is independent of the choice of basis spanning the space \\text{Range}$(U)$.\n\\begin{proposition}\\label{pp2}(\\cite[Def.~3.1, (3.6)]{Chaturantabut}).\nLet $Q$ be an orthonormal basis of \\text{Range}$(U)$, where $Q_i = [\\mathbf q_1,\\dots,\\mathbf q_i]$ for $1 \\leq i \\leq k$. Then\n\\[U(S^T U)^{-1}S^T = Q(S^T Q)^{-1}S^T.\\]\n\\end{proposition}\nThis proposition allows us to take advantage of the special properties of an orthonormal matrix in cases where our input basis matrix is not orthonormal (see \\cref{pp4}).\n\\section{Generalized CUR decomposition and its approximation properties}\\label{sec:GCUR}\nIn this section we describe the proposed generalized CUR decomposition and provide a theoretical analysis of its error bounds.\n\\subsection{Generalized CUR decomposition}\nWe now introduce a new generalized CUR decomposition of matrix pairs $(A, B)$, where $A$ is $m \\times n$ $(m\\ge n)$ and $B$ is $d \\times n$ $(d\\ge n)$, and $B$ is of full rank. This GCUR is inspired by the truncated generalized singular value decomposition for matrix pairs, as reviewed in \\cref{sec:GSVD}. We define it as follows (cf.~\\eqref{cur}).\n\\begin{definition} \\label{Dfn4}\nLet $A$ be $m \\times n$ and $B$ be $d \\times n$ and of full rank, with $m\\ge n$ and $d\\ge n$.\nA generalized CUR decomposition of $(A, B)$ of rank $k$ is a matrix approximation of $A$\nand $B$ expressed as\n\\begin{equation}\n \\label{eq:gcur}\n \\begin{aligned}\n A_k := C_A\\, M_A\\, R_A = AP \\, M_A \\, S_A^TA ~ , \\\\\n B_k := C_B\\, M_B\\, R_B = BP \\, M_B \\, S_B^TB. \n\\end{aligned}\n\\end{equation}\nHere $S_A \\in \\mathbb R^{m \\times k}$, $S_B \\in \\mathbb R^{d \\times k}$, and $P \\in \\mathbb R^{n \\times k}$ are index selection matrices $(k < n)$. 
\n\\end{definition}\nIt is key that {\\em the same} columns of $A$ and $B$ are selected;\nthis gives a coupling between the decompositions of $A$ and $B$.\n\nThe matrices $C_A, C_B$ and $R_A, R_B$ are subsets of the columns and rows, respectively, of the original matrices. In the rest of the paper, we will focus on the matrix $A$; we can perform a similar analysis for the matrix $B$. As in \\cref{sec:DEIM}, we again have the vectors $s_A, p$ as the indices of the selected rows and columns such that $C_A = AP$ and $R_A = S_A^TA$, where $S_A=I(:,s_A)$ and $P=I(:,p)$. The choice of $p$ and $s_A$ is based on the transformation matrices from the rank-$k$ truncated GSVD.\n\nGiven $P$ and $S_A$, the middle matrix $M_A$ can be constructed in different ways to satisfy certain desirable approximation properties. In \\cite{Sorensen}, the authors show how setting $M=A(s,p)^{-1}$ leads to a CUR decomposition corresponding to the $p$ columns and $s$ rows of $A$. Instead, following these authors \\cite{Sorensen} and others \\cite{Mahoney,Stewart}, we choose to construct the middle matrix $M_A$ as $(C_A^T C_A)^{-1}C_A^T A R_A^T(R_A R_A^T )^{-1}$. As shown by Stewart \\cite{Stewart}, this choice minimizes the error $\\norm{A-CMR}_F$ for a given $p$ and $s$. Computing the middle matrix as such yields a decomposition that can be viewed as first projecting the columns of $A$ onto $\\text{Range}(C)$ and then projecting the result onto the row space of $R$, both steps being orthogonal projections. \n\nThe following proposition establishes a connection between the GCUR of $(A, B)$ and the CUR of $AB^{-1}$ and $AB^+$ for a square and nonsingular $B$ and a nonsquare but full-rank $B$, respectively.\n\\begin{proposition}\\label{pp3} If $B$ is a square and nonsingular matrix, then the selected row and column indices from the CUR decomposition of $AB^{-1}$ are the same as the index vectors $s_A$ and $s_B$ obtained from the GCUR decomposition of $(A, B)$, respectively. 
Moreover, in the special case where $B$ is the identity, the GCUR decomposition of $A$ coincides with the CUR decomposition of $A$, in that the factors $C$ and $R$ of $A$ for both methods are the same: the first line of \\eqref{eq:gcur} is equal to \\eqref{cur}. In addition, if $B$ is nonsquare but of full rank, we have this same connection between the indices from the CUR decomposition of $AB^+$ and the index vectors $s_A$ and $s_B$ obtained from the GCUR decomposition of $(A, B)$.\n\\end{proposition}\n\\begin{proof} We start with the GSVD \\eqref{eq:mgsvd}. If $B$ is square and nonsingular, then the SVD of $G=AB^{-1}$ can be expressed in terms of the GSVD of $(A,B)$ and is equal to $G = U (\\Gamma\\Sigma^{-1}) V^T$ \\cite{matrixcomp}.\nTherefore, the row index selection matrix for $G$ is $S_A$, and the column index selection matrix is $S_B$, since they are determined using $U$ and $V$, respectively. \n\nIf $B$ is the identity, then the second statement follows from $G = A$. From the second line of \\eqref{eq:mgsvd} we have that $Y=V\\Sigma^{-1}$. This implies that the indices of the largest entries in the columns of $V$ are the same as those of $Y$. As shown above, the left and right singular vectors of $A$ are equivalent to the $U$ and $V$ matrices from the GSVD. Hence the selection matrix $P$ in \\eqref{eq:gcur} obtained by performing DEIM on $Y$ is the same as the selection matrix $P$ in \\eqref{cur} obtained by applying DEIM to the right singular vectors of $A$.\n\nIf $B$ is nonsquare but of full rank $n$, then we still have a similar connection between the GSVD of $(A,B)$ and the SVD of $AB^+$ because of the following. Since the factors in the reduced GSVD $B = V\\Sigma Y^T$ are of full rank, we have $B^+ = Y^{-T} \\Sigma^{-1} V^T$. 
This means that $AB^+ = U \\Gamma \\Sigma^{-1} V^T$, so the index vectors $s_A$ and $s_B$ from the GCUR of $(A,B)$ are equivalent to the selected row and column indices from the CUR of $AB^+$, respectively.\n\\end{proof}\n\nAlthough we can obtain the indices of a CUR decomposition of $AB^{-1}$ using the GCUR of $(A, B)$, the converse does not hold. We emphasize that we need the GSVD for the GCUR decomposition and cannot use the SVD of $AB^{-1}$ or $AB^+$ instead, since the GCUR decomposition requires the $Y$ matrix from \\eqref{eq:tgsvd} to find the column indices. While we used the generalized singular vectors here, in principle one could use other vectors, e.g., an approximation to the generalized singular vectors.\n\nTo build the decomposition, it is relevant to know the dominant rows and columns of $A$ and $B$ in their rank-$k$ approximations. Given that $A_k$ and $B_k$ are rank-$k$ approximations of $A$ and $B$, respectively, how should the columns and rows be selected? \\Cref{alg:GCUR-DEIM} is a summary of the procedure\\footnote{Note that the backslash operator used in \\cref{alg:GCUR-DEIM} is Matlab-type notation for solving linear systems and least-squares problems.}.\nWe emphasize that we can parallelize the work in \\cref{line3,line4,line5,line6,line7,line8}, since it consists of three independent runs of DEIM. Note that if we are only interested in approximating the matrix $A$ from the pair $(A, B)$, we can omit \\cref{line5,line8} as well as the second part of \\cref{line9}, thus saving computational cost. 
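Each of the three independent DEIM runs mentioned above can be carried out by a routine like the following numpy sketch of the greedy selection loop (a hedged illustration, not the authors' implementation):

```python
import numpy as np

def deim(U):
    """DEIM indices for a basis matrix U with linearly independent columns."""
    m, k = U.shape
    s = np.zeros(k, dtype=int)
    s[0] = np.argmax(np.abs(U[:, 0]))
    for j in range(1, k):
        u = U[:, j]
        # Oblique (interpolatory) projection onto previous columns; residual
        c = np.linalg.solve(U[s[:j], :j], u[s[:j]])
        r = u - U[:, :j] @ c
        s[j] = np.argmax(np.abs(r))
    return s

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((9, 3)))
s = deim(Q)
print(len(set(s.tolist())))               # 3 distinct row indices
print(np.linalg.matrix_rank(Q[s, :]))     # 3: selected rows are independent
```

Because the residual vanishes at previously selected entries, the chosen indices are distinct, and the selected rows form a nonsingular submatrix, as guaranteed by the theory cited above.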
\n\nIn some applications, one might be interested in a generalized interpolative decomposition, of which the column and row versions are of the form \n\\begin{equation}\\label{eq:ID}\nA \\approx C_A\\widetilde M_A, \\ B \\approx C_B\\widetilde M_B\n\\qquad \\text{or} \\qquad\nA\\approx \\widehat M_A R_A, \\ B\\approx \\widehat M_B R_B.\n\\end{equation}\nHere $\\widetilde M_A=C_A^+\\!A$ is $k \\times n$ and $\\widehat M_A=AR^+_A$ is $m \\times k$; similar remarks hold for $\\widetilde M_B$ and $\\widehat M_B$. As noted in \\cite{Sorensen}, since the DEIM index selection algorithm identifies the row and column indices independently, this form of decomposition is relatively straightforward.\n\nIn terms of computational complexity, the dense GSVD method requires $\\mathcal O((m+d)n^2)$ work and the three runs of DEIM together require $\\mathcal O((m+n+d)k^2)$ work, so the overall complexity of the algorithm is dominated by the construction of the GSVD. (This might suggest iterative GSVD approaches; see \\cref{sec:Con}.) 
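Once the index vectors are available, the remaining work is the pair of least-squares solves for the middle matrix; a hedged numpy sketch for the $A$ factor only, with hypothetical index vectors in place of the DEIM output:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 4)) @ rng.standard_normal((4, 6))  # rank 4

p  = np.array([0, 2, 3, 5])   # hypothetical column indices (from DEIM on Y)
sA = np.array([1, 2, 4, 6])   # hypothetical row indices (from DEIM on U)

C = A[:, p]                   # C_A = A(:, p)
R = A[sA, :]                  # R_A = A(sA, :)

# M_A = C \ (A / R): least-squares solves instead of explicit pseudoinverses
ARplus = np.linalg.lstsq(R.T, A.T, rcond=None)[0].T   # A R^+
MA = np.linalg.lstsq(C, ARplus, rcond=None)[0]        # C^+ A R^+

err = np.linalg.norm(A - C @ MA @ R)
print(err < 1e-10)   # True: exact here, since rank(A) = k = 4
```

Solving the two least-squares problems directly, rather than forming pseudoinverses, mirrors the backslash and slash operations in the pseudocode and is the numerically preferable route.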
\n\n\\begin{tcbverbatimwrite}{tmp_\\jobname_alg2a.tex}\n\\begin{algorithm}\n\\caption{DEIM type GCUR decomposition}\n\\label{alg:GCUR-DEIM}\n {\\bf Input:} $A \\in \\mathbb R^{m \\times n}$, $B \\in \\mathbb R^{d \\times n}$, desired rank $k$ \\\\\n{\\bf Output:} rank-$k$ generalized CUR decomposition \\\\\n$A_k = A(:,p) \\, \\cdot \\, M_A \\, \\cdot \\, A(s_A,:)$, \\quad\n$B_k = B(:,p) \\, \\cdot \\, M_B \\, \\cdot \\, B(s_B,:)$\n\\begin{algorithmic}[1]\n\\setcounter{ALC@unique}{0}\n\t\\STATE{$[U, V, Y] = {\\sf gsvd}(A,B)$ \\hfill (according to nonincreasing generalized singular values)}\n\t\\FOR{ $j = 1, \\dots, k$ }\n\t\\STATE\\label{line3}{ $p(j) = \\argmax_{1\\le i\\le n}~|(Y(\\,:,j))_i|$ \\quad \\hfill (Iteratively pick indices)}\n\t\\STATE\\label{line4}{$s_A(j) = \\argmax_{1\\le i\\le m}~|(U(\\,:,j))_i|$}\n\t\\STATE\\label{line5}{$s_B(j) = \\argmax_{1\\le i\\le d}~|(V(\\,:,j))_i|$}\\\\\n\t \\quad \\hfill (Update new columns) \\\\\n\t\\STATE\\label{line6}{$Y(\\,:,~j+1) = Y(\\,:,~j+1)-Y(:,~1:j)\\cdot (Y(p,1:j)\\ \\backslash \\ Y(p,~j+1))$}\n\t\\STATE\\label{line7}{$U(\\,:,~j+1) = U(\\,:,~j+1)-U(:,~1:j)\\cdot (U(s_A,~1:j)\\ \\backslash \\ U(s_A,~j+1))$}\n\t\\STATE\\label{line8}{$V(\\,:,~j+1) = V(\\,:,~j+1)-V(:,~1:j)\\cdot (V(s_B,~1:j)\\ \\backslash \\ V(s_B,~j+1))$} \n\t\\ENDFOR\n\t\\STATE\\label{line9}{ $M_A = A(\\,:,p) \\ \\backslash \\ (A \\ \/ \\ A(s_A,:\\,))$, \\quad $M_B = B(\\,:,p) \\ \\backslash \\ (B \\ \/ \\ B(s_B,:\\,))$}\n\\end{algorithmic}\n\\end{algorithm}\n\\end{tcbverbatimwrite}\n\n\\input{tmp_\\jobname_alg2a.tex}\n\nThe pseudocode in \\cref{alg:GCUR-DEIM} assumes that the matrices from the GSVD (i.e., $U$, $V$, and $Y$) correspond to a nonincreasing order of the generalized singular values. \n\nIn generalizing the DEIM-inspired CUR decomposition, we also look for a generalization of the related theoretical results. 
While the results presented in \\cite{Sorensen} express the error bounds in terms of the optimal rank-$k$ approximation, for our generalized CUR factorization, the most relevant quantity is the rank-$k$ GSVD approximation. In the following subsection, we present the theoretical results for bounding the GCUR approximation error. \n\n\\subsection{Error bounds in terms of the SVD approximation} The error bounds for any rank-$k$ matrix approximation are usually expressed in terms of the rank-$k$ SVD approximation error. We will show this in the following proposition and also discuss its limitations. \nFor this proposition, we introduce the following notation.\nLet $A = W \\Psi Z^T = W_k \\Psi_k Z_k^T + W_{\\perp} \\Psi_{\\perp} Z_{\\perp}^T$ be the SVD of $A$ (see \\eqref{svd}), where $Z_k$ contains the largest $k$ right singular vectors.\nLet $Q_k$ be an $n \\times k$ matrix with orthonormal columns. It turns out in both \\cite{Sorensen} and this section that $\\norm{A(I-Q_kQ_k^T)}$ is a central quantity in the analysis. In the DEIM-induced CUR decomposition work \\cite{Sorensen}, the right singular vectors are taken as $Q_k$; here we study this quantity for a general $Q_k$. In our context, we are particularly interested in $Q_k$ being an orthonormal basis for the range of the matrix $Y_k$ in \\eqref{eq:tgsvd}.\nDenote $\\mathcal Q_k = \\text{span}(Q_k)$ and $\\mathcal Z_k = \\text{span}(Z_k)$. Recall that $\\psi_i(A)$ are the singular values of $A$.\n\n\\begin{proposition}\nLet $Q_k$ be an $n \\times k$ matrix with orthonormal columns, and let $Z_k$ contain the largest $k$ right singular vectors of $A$. 
Then\n\\[\n\\psi_{k+1}^2(A) \\le \\|A(I-Q_kQ_k^T)\\|^2 \\le\n\\psi_{k+1}^2(A) + \\|A\\|^2 \\cdot \\sin^2(\\mathcal Z_k, \\mathcal Q_k).\n\\]\nMore precisely, we have\n\\[\n\\|A(I-Q_kQ_k^T)\\|^2 \\le \\psi_{k+1}^2(A) + \n\\sum_{j=1}^k \\psi_j(A)^2 \\cdot \\sin^2(\\mathbf z_j, \\mathcal Q_k).\n\\]\n\\end{proposition}\n\\begin{proof}\nThe lower bound follows from the SVD; the optimal $\\mathcal Q_k$ is $\\mathcal Z_k$.\nSince $W_k$ and $W_{\\perp}$ have mutually orthogonal columns, we can derive the first upper bound from\n\\begin{align*}\n\\|A(I-Q_kQ_k^T)\\|^2 & \\le \\|W_k \\Psi_k Z_k^T (I-Q_kQ_k^T)\\|^2\n+ \\|W_{\\perp} \\Psi_{\\perp} Z_{\\perp}^T (I-Q_kQ_k^T)\\|^2 \\\\\n& \\le \\|A\\|^2 \\cdot \\sin^2(\\mathcal Z_k, \\mathcal Q_k)\n+ \\psi_{k+1}^2 \\cdot \\sin^2(\\mathcal Z_{\\perp}, \\mathcal Q_k) \\\\\n& \\le \\|A\\|^2 \\cdot \\sin^2(\\mathcal Z_k, \\mathcal Q_k) + \\psi_{k+1}^2.\n\\end{align*}\nMore specifically,\n\\[\n\\|A(I-Q_kQ_k^T)\\|^2 \\le \\sum_{j=1}^k \\psi_j^2(A) \\ |\\mathbf z_j^T (I-Q_kQ_k^T)|^2\n+ \\|W_{\\perp} \\Psi_{\\perp} Z_{\\perp}^T (I-Q_kQ_k^T)\\|^2,\\]\nwhich, together with $|\\mathbf z_j^T (I-Q_kQ_k^T)| = \\sin(\\mathbf z_j, \\mathcal Q_k)$ and the bound $\\psi_{k+1}$ on the norm of the second term, gives the second claim.\n\\end{proof}\n\nThe significance of this result is that $\\|A(I-Q_kQ_k^T)\\|$ may be close to $\\psi_{k+1}(A)$ when $\\mathcal Q_k$ captures the largest singular vectors of $A$ well. For instance, in the standard CUR, $Q_k$ equals $Z_k$, so the quantity $\\sin^2(\\mathcal Z_k, \\mathcal Q_k)$ equals 0. If the matrix $B$ from \\eqref{eq:mgsvd} is close to the identity or to a scaled identity, we expect that $\\sin^2(\\mathcal Z_k, \\mathcal Q_k)$ will be approximately zero. 
However, this sine will generally not be small, as we illustrate by the following example.\n\n\\begin{example} {\\rm\nLet $A = \\text{diag}(1,2,3)$, and $B = \\text{diag}(1,20,300)$.\nDenote by $\\mathbf e_j$ the $j$th standard basis vector.\nThen clearly $Z_1 = \\mathbf z_1 = \\mathbf e_3$, while the largest right generalized singular vector $\\mathbf q_1$ is equal to the largest right singular vector of $AB^{-1} = \\text{diag}(1, 0.1, 0.01)$, and hence $Q_1 = \\mathbf q_1 = \\mathbf e_1$.\nThis implies that $\\sin(\\mathcal Z_1, \\mathcal Q_1) = \\sin(\\mathbf z_1, \\mathbf q_1) = \\sin(\\mathbf e_3, \\mathbf e_1) = 1$ is as large as possible.}\n\\end{example}\n\n\\subsection{Error Bounds in terms of the GSVD approximation}\nWith the above results in mind, instead of using the rank-$k$ SVD approximation error, we will derive error bounds for $\\norm{A-CMR}$ (see \\eqref{eq:gcur}) in terms of the error bounds of a rank-$k$ GSVD approximation of $A$ (see \\cref{pp1}).\nThe matrices $C$ and $R$ are of full rank $k$, determined by the column and row index selection matrices $P$ and $S$, respectively, and $M = C^+\\!AR^+$. From \\cref{alg:GCUR-DEIM}, we know that $S$ and $P$ are derived using the $k$ columns of the matrices $U$ and $Y$, respectively, corresponding to the largest generalized singular values (see \\eqref{eq:tgsvd}).\n\nWe use the interpolatory projector given in \\cref{pp2}. Therefore, instead of $Y$ (see \\eqref{eq:mgsvd}), we use its orthonormal basis $Q$, to exploit the properties of an orthogonal matrix.\n\nWe will now analyze the approximation error between $A$ and its interpolatory projection $A \\mathbb{P}$. The proof of the error bounds for the proposed method closely follows the one presented in \\cite{Sorensen}.\nThe second inequality of the first statement of \\cref{pp4} is in \\cite[Lemma~4.1]{Sorensen}. The first inequality of the first statement is new but completely analogous.
In the second statement, we use the GSVD.\nFor the analysis, we need the following QR-decomposition of $Y$ (see \\eqref{eq:pgsvd}):\n\\begin{equation} \\label{eq:QR}\n[Y_k \\ \\ \\widehat Y] = Y = QT = [Q_k \\ \\ \\widehat Q]\n\\begin{bmatrix} T_k & T_{12} \\\\ 0 & T_{22} \\end{bmatrix} = [Q_k T_k \\ \\ Q \\widehat T],\n\\end{equation}\nwhere we have defined\n\\begin{equation} \\label{eq:T_hat}\n\\widehat T := \\begin{bmatrix} T_{12} \\\\ T_{22} \\end{bmatrix}.\n\\end{equation}\nThis implies that\n\\[\nA = A_k + \\widehat U \\, \\widehat \\Gamma \\, \\widehat Y^T = U_k \\Gamma_k Y_k^T + \\widehat U \\, \\widehat \\Gamma \\, \\widehat Y^T\n= U_k \\Gamma_k T_k^T Q_k^T + \\widehat U \\, \\widehat \\Gamma \\, \\widehat T^T Q^T.\n\\]\n\n\\begin{proposition} \\label{pp4} (Generalization of \\cite[Lemma~4.1]{Sorensen})\nGiven $A \\in \\mathbb R^{m\\times n}$ and $Q_k \\in \\mathbb R^{n\\times k}$ with orthonormal columns, let $P \\in \\mathbb R^{n\\times k}$ be a selection matrix and $Q_k^TP$ be nonsingular.
Let $\\mathbb{P} = P(Q_k^TP)^{-1}Q_k^T$, then\n\\[\\psi_{\\min}(A(I - Q_kQ_k^T))~\\norm{(Q_k^TP)^{-1}}\\le \\norm{A-A\\mathbb{P}} \\leq \\norm{A(I - Q_kQ_k^T)}~ \\norm{(Q_k^TP)^{-1}}.\\]\nIn particular, if $Q_k$ is an orthonormal basis for $Y_k$, the first $k$ columns of $Y$, then\n\\[\n\\gamma_{k+1}\\cdot \\psi_{\\min}(T_{22}) \\cdot \\norm{(Q_k^TP)^{-1}}\\le \\norm{A-A\\mathbb{P}} \\le \\gamma_{k+1} \\cdot \\norm{T_{22}} \\cdot \\norm{(Q_k^TP)^{-1}}.\n\\]\n\\end{proposition}\n\\begin{proof} We have that $Q_k^T\\mathbb{P}=Q_k^TP(Q_k^TP)^{-1}Q_k^T=Q_k^T$ implies $Q_k^T(I-\\mathbb{P}) = 0$.\nTherefore,\n\\[\\norm{A-A\\mathbb{P}}=\\norm{A(I-\\mathbb{P})} = \\norm{A(I - Q_kQ_k^T)(I-\\mathbb{P})}\\leq \\norm{A(I - Q_kQ_k^T)}~\\norm{I-\\mathbb{P}}\\]\nand also\n\\[\\norm{A(I - Q_kQ_k^T)(I-\\mathbb{P})} \\ge \\psi_{\\min}(A(I - Q_kQ_k^T))~\\norm{I-\\mathbb{P}} \\ .\\]\nNote that, if $\\mathbb{P} \\ne 0$ and $\\mathbb{P} \\ne I$ (see, e.g., \\cite{Daniel}) then\n\\[\\norm{I-\\mathbb{P}} =\\norm{\\mathbb{P}} =\\norm{(Q_k^TP)^{-1}}.\\]\nWith $A = U \\, \\Gamma \\,Y^T$, $A_k = U_k\\Gamma_kY_k^T$, $Y=QT$ and $Y_k=Q_kT_k$ we have\n\\begin{align*}\nA\\ Q_k\\, Q_k^T &= \\big[U_k \\ \\ \\widehat U \\big] \\begin{bmatrix} \\Gamma_k & 0\\\\ 0 & \\widehat \\Gamma \\end{bmatrix} \\begin{bmatrix} T_k^T & 0 \\\\[0.5mm] T_{12}^T & T_{22}^T \\end{bmatrix} \\begin{bmatrix} I_k \\\\ 0 \\end{bmatrix} Q_k^T \\\\\n&= U_k\\Gamma_kT_k^TQ_k^T + \\widehat U \\, \\widehat \\Gamma \\, T_{12}^TQ_k^T\n\\end{align*}\nand hence\n\\begin{align*}\nA\\,(I-Q_kQ_k^T) & = (A-A_k)-\\widehat U \\, \\widehat \\Gamma \\, T_{12}^TQ_k^T \\\\\n& = \\widehat U \\, \\widehat \\Gamma \\, \\widehat T^T Q^T - \\widehat U \\, \\widehat \\Gamma \\, T_{12}^TQ_k^T = \\widehat U \\, \\widehat \\Gamma \\, T_{22}^T \\widehat Q^T.\n\\end{align*}\nThis implies\n\\[\\norm{A\\,(I - Q_kQ_k^T)} \\le \\gamma_{k+1} \\cdot \\norm{T_{22}}\\]\nand\n\\[\\norm{A\\,(I - Q_kQ_k^T)} \\ge \\gamma_{k+1} \\cdot \\psi_{\\min}(T_{22}). 
\\]\n\\end{proof}\n\nLet us now consider the operation on the left-hand side of $A$. Given the set of interpolation indices $\\{s_1, \\dots, s_k\\}$ determined from $U_k$, $S = [\\mathbf e_{s_1},\\dots,\\mathbf e_{s_k}]$ and for a nonsingular $S^TU_k$, we have the DEIM interpolatory projector $\\mathbb{S}=U_k(S^TU_k)^{-1}S^T$. Since $U_k$ consists of the dominant $k$ left generalized singular vectors of $A$ and has orthonormal columns, it is not necessary to perform a QR-decomposition as we did in \\cref{pp4}.\n\nThe following proposition is analogous to \\cref{pp4}. The results are similar to those in \\cite[p.~A1461]{Sorensen} except that here, we use the approximation error of the GSVD instead of the SVD.\n\\begin{proposition}\n\\label{pp4b} Given $U_k \\in \\mathbb R^{m\\times k}$ with orthonormal columns, let $S \\in \\mathbb R^{m\\times k}$ be a selection matrix and $S^TU_k$ be nonsingular. Furthermore, let $\\mathbb{S} = U_k(S^TU_k)^{-1}S^T$, then, with $\\widehat T$ as in \\eqref{eq:T_hat},\n\\[\\gamma_{k+1}\\cdot\\psi_{\\min}(\\widehat{T})\\cdot\\norm{(S^T\\!U_k)^{-1}}\\le\\norm{A-\\mathbb{S}A}\\leq \\gamma_{k+1}\\cdot\\norm{\\widehat{T}} \\cdot \\norm{(S^T\\!U_k)^{-1}}. \\]\n\\end{proposition}\n\\begin{proof}\nWe have\n\\[\\norm{A-\\mathbb{S}A} =\\norm{(I-\\mathbb{S})A}=\\norm{(I-\\mathbb{S})(I-U_kU_k^T)A}. \\]\nSimilar to before, if $\\mathbb{S} \\ne 0$ and $\\mathbb{S} \\ne I$ then \n\\[\\norm{I-\\mathbb{S}} =\\norm{\\mathbb{S}} =\\norm{(S^T\\!U_k)^{-1}}.\\] \nSince $(I-U_k U_k^T) A = A-U_k \\Gamma_k Y_k^T = \\widehat U \\, \\widehat \\Gamma \\, \\widehat Y^T = \\widehat U \\, \\widehat \\Gamma \\, \\widehat T^T\\!Q^T$ we get\n\\[\\norm{(I-U_kU_k^T)A}=\\norm{A-A_k} \\leq \\gamma_{k+1}\\cdot\\norm{\\widehat{T}},\\]\nand $\\norm{(I-U_kU_k^T)A} \\ge \\gamma_{k+1} \\cdot \\psi_{\\min}(\\widehat{T})$, from which the result follows. 
\n\\end{proof}\n\nWe will now use \\cref{pp4,pp4b} to find a bound for the approximation error of the GCUR of $A$ relative to $B$. As in \\cite{Sorensen}, we first show in the following proposition that the error bounds of the interpolatory projection of $A$ onto the chosen rows and columns apply equally to the orthogonal projections of $A$ onto the same row and column spaces. \n\\begin{proposition}\\label{pp5}(Generalization and slight adaptation of \\cite[Lemma~4.2]{Sorensen}) Given the selection matrices $P$, $S$, let $C=AP$ and $R=S^T\\!A$. Suppose that $C \\in \\mathbb R^{m \\times k}$ and $R \\in \\mathbb R^{k \\times n}$ are full-rank matrices with $k< \\min(m,n)$, and that $Q_k^TP$ and $S^TU_k$ are nonsingular. With $\\widehat T$ and $T_{22}$ as in \\eqref{eq:QR}--\\eqref{eq:T_hat}, we have the following bounds for the orthogonal projections of $A$ onto the column and row spaces: \n\\begin{align*}\n\\norm{(I-CC^+)A} & \\le \\gamma_{k+1} \\cdot \\norm{T_{22}} \\cdot \\norm{(Q_k^T\\!P)^{-1}}, \\\\\n\\norm{A(I-R^+\\!R)} & \\le \\gamma_{k+1} \\cdot \\norm{\\widehat{T}} \\cdot \\norm{(S^T\\!U_k)^{-1}}.\n\\end{align*}\n\\end{proposition}\n\\begin{proof} This proof is a minor modification of that of \\cite[Lemma~4.2]{Sorensen}; we closely follow their proof technique. With $C=AP$ of full rank, we have $C^+=(P^T\\!A^T\\!AP)^{-1}(AP)^T$. With this, the orthogonal projection of $A$ onto $C$ can be stated as\n\\[CC^{+\\!}A=(AP(P^T\\!A^T\\!AP)^{-1}P^T\\!A^T)A.\\]\nLet $\\Pi_P =P(P^T\\!A^T\\!AP)^{-1}P^T\\!A^T\\!A$; note that $\\Pi_P P = P$ since $\\Pi_P$ is an oblique projector onto Range$(P)$.
We can rewrite $CC^+\\!A$ as $CC^+\\!A=A\\Pi_P$.\nHence the error in the orthogonal projection of $A$ will be\n$(I-CC^+)A=A(I-\\Pi_P)$.\nSince $\\Pi_P \\mathbb{P}= \\mathbb{P}$, we have\n\\[A(I-\\Pi_P)=A(I-\\Pi_P)(I-\\mathbb{P})=(I-CC^+)A(I-\\mathbb{P}),\\]\ntherefore\n\\begin{align*}\n\\norm{(I-CC^+)A} &=\\norm{A(I-\\Pi_P)}=\\norm{(I-CC^+)A(I-\\mathbb{P})} \\\\\n&\\leq \\norm{(I-CC^+)}~\\norm{A(I-\\mathbb{P})}.\n\\end{align*}\nSince $C$ is nonsquare, $\\norm{I-CC^+} =1$ (see, e.g., \\cite{Daniel}),\nand since $\\norm{A(I-\\mathbb{P})}\\leq \\gamma_{k+1}\\cdot \\norm{T_{22}} \\cdot \\norm{(Q_k^T\\!P)^{-1}}$ by \\cref{pp4}, we have\n\\[\\norm{(I-CC^+)A}\\leq \\gamma_{k+1}\\cdot \\norm{T_{22}} \\cdot \\norm{(Q_k^T\\!P)^{-1}}.\\]\nIn a similar vein, with $R=S^T\\!A$ and $R^+=R^T(RR^T)^{-1}$ we have $R^+=A^TS(S^T\\!AA^T\\!S)^{-1}$, and the error in the orthogonal projection of $A$ is $A(I-R^+\\!R) =(I-\\Pi_S)A$, where $\\Pi_S = AA^T\\!S(S^T\\!AA^T\\!S)^{-1}S^T$, so that\n\\[(I-\\Pi_S)A=(I-\\mathbb{S})(I-\\Pi_S)A=(I-\\mathbb{S})A(I-R^+\\!R)\\]\nand\n\\[\n\\norm{A(I-R^+\\!R)}\\leq \\norm{(I-\\mathbb{S})A}~\\norm{(I-R^+\\!R)} \\leq \\gamma_{k+1} \\cdot \\norm{\\widehat{T}} \\cdot \\norm{(S^T\\!U_k)^{-1}}. \n\\]\n\\end{proof}\n\nThis result helps us prove a bound for the GCUR approximation error. For the following theorem, we again closely follow the approach of \\cite{Sorensen}, which also follows a procedure in \\cite{Mahoney}.\nAs stated in \\cref{Dfn4}, the middle matrix can be computed as $M=(C^T\\!C)^{-1}C^T\\!A R^T(R R^T )^{-1}=C^+\\!AR^+$.\n\\begin{theorem} \\label{theorem1} (Generalization of \\cite[Thm.~4.1]{Sorensen}) Given $A \\in \\mathbb R^{m\\times n}$ and $Y_k$, $U_k$ from \\eqref{eq:tgsvd}, let $P$ and $S$ be selection matrices so that $C=AP$ and $R=S^T\\!A$ are of full rank. Let $Q_k \\in \\mathbb R^{n\\times k}$ be the $Q$-factor of $Y_k$, and $\\widehat{T}$ and $T_{22}$ as in \\eqref{eq:QR}--\\eqref{eq:T_hat}.
Assume $Q_k^T\\!P$ and $S^T\\!U_k$ are nonsingular. Then, with the error constants\n\\[\\eta_p := \\norm{(Q_k^T\\!P)^{-1}}, \\qquad \\eta_s := \\norm{(S^T\\!U_k)^{-1}},\\]\nwe have\n\\[\\norm{A-CMR} \\leq \\gamma_{k+1} \\cdot (\\eta_p\\cdot\\norm{T_{22}} +\\eta_s\\cdot\\norm{\\widehat{T}}) \\le \\gamma_{k+1} \\cdot (\\eta_p + \\eta_s) \\cdot \\norm{\\widehat{T}}.\\]\n\\end{theorem}\n\\begin{proof}\nBy the definition of $M$, we have\n\\[A-CMR=A-CC^+\\!AR^+\\!R=(I-CC^+)A+CC^+\\!A(I-R^+\\!R).\\]\nUsing the triangle inequality, it follows that\n\\[\\norm{A-CMR}=\\norm{A-CC^+\\!AR^+\\!R}\\leq \\norm{(I-CC^+)A}+\\norm{CC^+}~\\norm{A(I-R^+\\!R)}.\\]\nWith the fact that $CC^+$ is an orthogonal projection with $\\norm{CC^+}=1$, together with \\cref{pp5}, we obtain\n\\[\\norm{A-CMR} \\le \\gamma_{k+1} \\cdot \\norm{T_{22}} \\cdot \\norm{(Q_k^T\\!P)^{-1}}+\\norm{(S^T\\!U_k)^{-1}}~\\norm{\\widehat{T}} \\cdot \\gamma_{k+1}.\\]\n\\end{proof}\n\nThe last line of \\cref{theorem1} can be related to the results in \\cite[Thm~4.1]{Sorensen}; both theorems have the factors $\\eta_p$ and $\\eta_s$. In \\cite{Sorensen} the error of the CUR approximation of $A$ is within a factor of $\\eta_p + \\eta_s$ of the best rank-$k$ approximation (SVD). Here we have the bounds in terms of $\\gamma_{k+1}$ from the GSVD \\eqref{gsvd} and the additional factors $\\norm{\\widehat{T}}$ and $\\norm{T_{22}}$. The results presented in this subsection suggest that index selection procedures that minimize $\\norm{(Q_k^T\\!P)^{-1}}$ and $\\norm{(S^T\\!U_k)^{-1}}$ are theoretically desirable. We note that, while these results have been presented for the matrix $A$ in \\eqref{eq:gcur}, similar results can be obtained for $B$. \n\n\\section{Numerical experiments}\\label{sec:EXP}\nWe now present the results of a few numerical experiments to illustrate the performance of GCUR for low-rank matrix approximation.
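All of the experiments below select rows and columns by DEIM applied to (generalized) singular vector matrices and then form the middle matrix $M = C^+ A R^+$. As a point of reference, the following is a minimal, hedged Python sketch of that pipeline; the function and variable names are ours, and the factor matrices are assumed to be given (the left and right singular vectors for the CUR, the matrices $U$ and $Y$ for the GCUR).

```python
import numpy as np

def deim(V):
    """Greedy DEIM index selection on the columns of V, processed in order."""
    indices = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, V.shape[1]):
        # interpolate column j at the already chosen indices ...
        c = np.linalg.solve(V[indices, :j], V[indices, j])
        # ... and pick the largest residual entry as the next index
        r = V[:, j] - V[:, :j] @ c
        indices.append(int(np.argmax(np.abs(r))))
    return np.array(indices)

def cmr_error(A, row_vecs, col_vecs, k):
    """Relative error ||A - C M R|| / ||A|| with M = C^+ A R^+."""
    s = deim(row_vecs[:, :k])          # row indices (from U in the GCUR)
    p = deim(col_vecs[:, :k])          # column indices (from Y in the GCUR)
    C, R = A[:, p], A[s, :]
    M = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return np.linalg.norm(A - C @ M @ R, 2) / np.linalg.norm(A, 2)

# sanity check: an exactly rank-5 matrix is (generically) recovered exactly
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
W, _, Vt = np.linalg.svd(A, full_matrices=False)
rel_err = cmr_error(A, W, Vt.T, 5)
```

The row and column selections are independent of each other, which is what allows them to be computed in parallel.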
For the first two experiments, we consider a case where a data matrix $A$ is corrupted by a random additive noise $E$ and the covariance of this noise (the expectation of $E^T\\!E$) is not a multiple of the identity matrix. We are therefore interested in a method that can take the actual noise into account. Traditionally, a pre-whitening matrix $R^{-1}$ (where $R$ is the Cholesky factor of the noise's covariance matrix) may be applied to the perturbed matrix \\cite{hansen}, so that one can use SVD-based methods on the transformed matrix. With a GSVD formulation, the pre-whitening operation becomes an integral part of the algorithm \\cite{hansen2007subspace}; we do not need to explicitly compute $R^{-1}$ and transform the perturbed matrix. We show in the experiments that using SVD-based methods without pre-whitening the perturbed data yields less accurate approximations of the original matrix. \n\nFor the last two experiments, we consider a setting where there are two data sets collected under different conditions, e.g., a treatment and a control experiment, where the former has distinct variation caused by the treatment, or signal-free and signal recordings, where the former contains only noise. We are interested in exploring and identifying patterns and discriminative features that are specific to one data set. \n\n\\begin{experiment}\\label{exp:1} {\\rm\nThis experiment is an adaptation of experiments in \\cite[p.~66: Sect.~3.4.4]{hansen} and \\cite[Ex.~6.1]{Sorensen}. We construct matrix $A$ to be of a known modest rank. We then perturb this matrix with a noise matrix $E \\in \\mathbb R^{m \\times n}$ whose entries are correlated. Given $A_E= A+E$, we evaluate and compare the GCUR and the CUR decomposition on $A_E$ in terms of recovering the original matrix $A$.
Specifically, the performance of each decomposition is assessed based on the 2-norm of the relative matrix approximation error, i.e., $\\norm{A-\\widetilde {A} }\/\\norm{A}$, where $\\widetilde {A}$ is the approximated low-rank matrix. \nWe present the numerical results for four noise levels; thus $E=\\varepsilon\\, \\frac{\\norm{A}}{\\norm{F}}\\, F$, where $\\varepsilon$ is the parameter for the relative noise level and $F$ is a randomly generated correlated noise matrix. We first generate a sparse, nonnegative rank-50 matrix $A \\in \\mathbb R^{m \\times n}$, with $m=100000$ and $n=300$, of the form \n\\[A=\\sum_{j=1}^{10}\\frac{2}{j}\\, \\mathbf x_j\\ \\mathbf y_j^T + \\sum_{j=11}^{50}\\frac{1}{j}\\, \\mathbf x_j\\ \\mathbf y_j^T,\\]\nwhere $\\mathbf x_j \\in \\mathbb R^{m}$ and $\\mathbf y_j \\in \\mathbb R^{n}$ are sparse vectors with random nonnegative entries (i.e., $\\mathbf x_j={\\sf sprand}(m,1,0.025)$ and $\\mathbf y_j={\\sf sprand}(n,1,0.025)$), just as in \\cite{Sorensen}. Unlike \\cite{Sorensen}, we then perturb the matrix with a correlated Gaussian noise $E$ whose entries have zero mean and a Toeplitz covariance structure (in MATLAB $\\text{desired-cov}(F)={\\sf toeplitz}(0.99^0$, $\\dots$, $0.99^{n-1})$, $R={\\sf chol}(\\text{desired-cov}(F))$, and $F= {\\sf randn}(m,n)\\cdot R)$ and $\\varepsilon \\in \\{0.05$, $0.1$, $0.15$, $0.2\\}$. We compute the SVD of $A_E$ and the GSVD of $(A_E,R)$ to get the input matrices for the CUR and the GCUR decompositions, respectively. \\Cref{fig:a,fig:b,fig:c,fig:d} compare the relative errors of the proposed DEIM-GCUR (see \\cref{alg:GCUR-DEIM}) and the DEIM-CUR (see \\cref{alg:CUR-DEIM}) for reconstructing the low-rank matrix $A$ for different noise levels. \nWe observe that for higher noise levels the GCUR technique gives a more accurate low-rank approximation of the original matrix $A$. The DEIM-GCUR scheme seems to perform distinctly well for higher noise levels and moderate values of $k$.
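For illustration, the correlated-noise construction just described can be sketched in Python with NumPy and SciPy. This is a hedged sketch with much smaller dimensions than in the experiment; the names are ours, and the noise is scaled so that the relative noise level (the parameter eps below) equals the ratio of the spectral norms of the noise and of $A$.

```python
import numpy as np
from scipy.linalg import toeplitz, cholesky

rng = np.random.default_rng(0)
m, n = 500, 50            # the experiment uses m = 100000, n = 300
eps = 0.1                 # relative noise level

A = rng.random((m, n))    # stand-in for the sparse low-rank matrix A

# Toeplitz covariance: cov[i, j] = 0.99**|i - j|, as in the experiment
desired_cov = toeplitz(0.99 ** np.arange(n))
R = cholesky(desired_cov)            # upper triangular, R^T R = desired_cov
F = rng.standard_normal((m, n)) @ R  # rows of F have covariance desired_cov

# scale so that ||E|| = eps * ||A|| (spectral norms)
E = eps * (np.linalg.norm(A, 2) / np.linalg.norm(F, 2)) * F
A_E = A + E
```

For the GCUR, the Cholesky factor (or an estimate of it) then serves as the second matrix of the pair $(A_E, R)$.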
As indicated in \\cref{sec:GCUR}, the GCUR method is slightly more expensive since it requires the computation of the TGSVD instead of the TSVD. We observe that, as $k$ approaches rank$(A)$, the relative error of the TGSVD continues to decrease; this is not true for the GCUR. We may attribute this phenomenon to the fact that the relative error is saturated by the noise, considering that we pick actual columns and rows of the noisy data. Since $\\varepsilon$ indicates the relative noise level, it is natural that for increasing $k$, the quality of the TSVD approximation rapidly approaches $\\varepsilon$. For this experiment, we assume that an estimate of the noise covariance matrix is known and therefore we have the exact Cholesky factor; we stress that this may not always be the case. \n\n We now show an example where we use an inexact Cholesky factor $\\widehat R$. We derive $\\widehat R$ by multiplying all off-diagonal elements of the exact Cholesky factor $R$ by factors which are uniformly random from the interval $[0.9, 1.1]$. Here, the experiment setup is the same as described above, with the difference that we compute the GSVD of $(A_E,\\widehat R)$ instead. In \\cref{fig:2a,fig:2b}, we observe that the GCUR and the GSVD still deliver good approximation results even for an inexact Cholesky factor $\\widehat R$, which may imply that we do not necessarily need the exact noise covariance.
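The construction of the inexact factor can be sketched as follows (again a hedged illustration on a small instance; names are ours):

```python
import numpy as np
from scipy.linalg import toeplitz, cholesky

rng = np.random.default_rng(1)
n = 50
R = cholesky(toeplitz(0.99 ** np.arange(n)))   # exact upper Cholesky factor

# multiply every off-diagonal entry by a random factor in [0.9, 1.1];
# the (zero) strictly lower triangle is unaffected by this
off_diag = ~np.eye(n, dtype=bool)
R_hat = R.copy()
R_hat[off_diag] *= rng.uniform(0.9, 1.1, size=(n, n))[off_diag]
```

One then computes the GSVD of the pair with this perturbed factor in place of the exact one.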
\n\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\subfloat[$\\varepsilon=0.2$.]{\\label{fig:a}{\\includegraphics[width=0.48\\textwidth]{Plots\/GCUR_plots\/0.2.png}}}\\hspace{0.1pt}\n\t\\subfloat[$\\varepsilon=0.15$.]{\\label{fig:b}{\\includegraphics[width=0.48\\textwidth]{Plots\/GCUR_plots\/0.15.png} }}\n\t\n\t\\subfloat[$\\varepsilon=0.1$.]{\\label{fig:c}{\\includegraphics[width=0.48\\textwidth]{Plots\/GCUR_plots\/0.1.png} }}\\hfill\n\t\\subfloat[$\\varepsilon=0.05$.]{\\label{fig:d}{\\includegraphics[width=0.48\\textwidth]{Plots\/GCUR_plots\/0.05.png} }}\n\t\\caption{Accuracy of the DEIM-GCUR approximations compared with the standard DEIM-CUR approximations in recovering a sparse, nonnegative matrix $A$ perturbed with correlated Gaussian noise (\\cref{exp:1}) using the exact Cholesky factor of the noise covariance. The relative errors $\\norm{A-\\widetilde {A}_k }\/\\norm{A}$ (on the vertical axis) as a function of rank $k$ (on the horizontal axis) for $\\varepsilon=0.2$, $0.15$, $0.1$, $0.05$, respectively.\\label{fig:1}}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\subfloat[$\\varepsilon=0.15$.]{\\label{fig:2a}{\\includegraphics[width=0.48\\textwidth]{Plots\/GCUR_plots\/0.15_inexact.png}}}\\hspace{0.1pt}\n\t\\subfloat[$\\varepsilon=0.1$.]{\\label{fig:2b}{\\includegraphics[width=0.48\\textwidth]{Plots\/GCUR_plots\/0.1_inexact.png} }}\n\t\n\t\\caption{Accuracy of the DEIM-GCUR approximations compared with the standard DEIM-CUR approximations in recovering a sparse, nonnegative matrix $A$ perturbed with correlated Gaussian noise (\\cref{exp:1}) using an inexact Cholesky factor of the noise covariance.
The relative errors $\\norm{A-\\widetilde {A}_k }\/\\norm{A}$ (on the vertical axis) as a function of rank $k$ (on the horizontal axis) for $\\varepsilon=0.15$, $0.1$, respectively.\\label{fig:2}}\n\\end{figure}\n}\n\\end{experiment}\n\n\n\\begin{experiment}\\label{exp:2}{\\rm For this experiment, we maintain all properties of the matrix $A_E$ mentioned in the preceding experiment except for the number of rows, which we reduce to 10000 (i.e., $A_E \\in \\mathbb R^{10000 \\times 300}$), and instead of a sparse nonnegative matrix $A$, we generate a dense random matrix $A$. As in \\cite[Ex.~6.2]{Sorensen}, we also modify $A$ so that there is a significant drop from the 10th to the 11th singular value. The matrix $A$ is now of the form \n\\[A=\\sum_{j=1}^{10}\\frac{1000}{j}\\, \\mathbf x_j\\ \\mathbf y_j^T + \\sum_{j=11}^{50}\\frac{1}{j}\\, \\mathbf x_j\\ \\mathbf y_j^T.\\]\nFor each fixed $\\varepsilon$ and $k$, we repeat the process 100 times and then compute the average relative error. The results in \\cref{tab:1} show that the advantage of the GCUR over the CUR still remains even when singular values of the original matrix $A$ decrease more sharply. We observe that the difference in the relative error of the GCUR and the CUR is quite significant when the rank of the recovered matrix $\\widetilde A$ is lower than that of $A$ (i.e., $k \\ll 50$).\n\n\\begin{table}[ht!]\n\n\\centering\n{\\caption{Comparison of the qualities $\\norm{A-\\widetilde {A}_k }\/\\norm{A}$ of the TSVD, TGSVD, CUR, and GCUR approximations as a function of index $k$ and noise level $\\varepsilon$ in \\cref{exp:2}.
The relative errors are the averages of 100 test cases.}\\label{tab:1}\n{\\footnotesize\n\\begin{tabular}{clcccc} \\hline \\rule{0pt}{3mm}\n$k$ & Method $\\backslash \\ \\varepsilon$ & $0.05$ & $0.1$ & $0.15$ & $0.2$ \\\\ \\hline \\rule{0pt}{3.5mm}\n10 & TSVD & $0.008$ & $0.045$ & $0.150$ & $0.200$\\\\\n& TGSVD & $0.002$ & $0.003$ & $0.005$ & $0.007$\\\\[0.5mm]\n& CUR & $0.052$ & $0.118$ & $0.141$ & $0.186$\\\\\n& GCUR & $0.053$ & $0.088$ & $0.112$ & $0.134$\\\\[0.5mm] \\hline \\rule{0pt}{3.5mm}\n15 & TSVD & $0.050$ & $0.100$ & $0.150$ & $0.200$\\\\\n& TGSVD & $0.009$ & $0.017$ & $0.026$ & $0.035$\\\\[0.5mm]\n& CUR & $0.049$ & $0.097$ & $0.146$ & $0.196$\\\\\n& GCUR & $0.046$ & $0.091$ & $0.138$ & $0.185$\\\\[0.5mm] \\hline \\rule{0pt}{3.5mm}\n20 & TSVD & $0.050$ & $0.100$ & $0.150$ & $0.200$\\\\\n& TGSVD & $0.011$ & $0.023$ & $0.034$ & $0.015$\\\\[0.5mm]\n& CUR & $0.050$ & $0.099$ & $0.149$ & $0.199$\\\\\n& GCUR & $0.049$ & $0.097$ & $0.146$ & $0.198$ \\\\[0.5mm] \\hline\n\\rule{0pt}{3.5mm}\n30 & TSVD & $0.050$ & $0.100$ & $0.150$ & $0.200$\\\\\n& TGSVD & $0.016$ & $0.031$ & $0.047$ & $0.063$\\\\[0.5mm]\n& CUR & $0.050$ & $0.100$ & $0.150$ & $0.199$\\\\\n& GCUR & $0.050$ & $0.099$ & $0.149$ & $0.199$\\\\ \\hline\n\\end{tabular}}}\n\\end{table}\n\n\nThe higher the noise level, the more advantageous the GCUR scheme may be over the CUR one. Especially for moderate values of $k$ such as $k=10$, the GCUR approximations are of better quality than those based on the CUR. For higher values of $k$ such as $k=30$, the approximation quality of the CUR and GCUR methods becomes comparable since they both start to pick up the noise in the data columns. In this case, the GCUR does not improve on the CUR.
Since the GCUR is a discrete method, picking actual columns and rows instead of generalized singular vectors, it yields worse results than the TGSVD approach.\n}\n\\end{experiment}\n\n\\begin{experiment}\\label{exp:3}{\\rm Our next experiment is adapted from \\cite{Abid}. We create synthetic data sets which give an intuition for settings where the GSVD and the GCUR may resolve the problem of subgroups. Consider a data set of interest (target data), $A$, containing 400 data points in a 30-dimensional feature space. This data set has four subgroups ({\\color{aqua} blue}, {\\color{yellow} yellow}, {\\color{orange} orange}, and {\\color{darkmagenta} purple}), each of 100 data points. The first 10 columns for all 400 data points are randomly sampled from a normal distribution with a mean of 0 and a variance of 100. The next 10 columns of two of the subgroups ({\\color{aqua} blue} and {\\color{orange} orange}) are randomly sampled from a normal distribution with a mean of 0 and a unit variance while the other two subgroups ({\\color{yellow} yellow} and {\\color{darkmagenta} purple}) are randomly sampled from a normal distribution with a mean of 6 and a unit variance. The last 10 columns of subgroups {\\color{aqua} blue} and {\\color{yellow} yellow} are sampled from a normal distribution with a mean of 0 and a unit variance and those of {\\color{darkmagenta} purple} and {\\color{orange} orange} are sampled from a normal distribution with a mean of 3 and a unit variance.\n\nOne of the goals of the SVD (or the related concept of principal component analysis) in dimension reduction is to find a low-dimensional rotated approximation of a data matrix while maximizing the variances.\nWe are interested in reducing the dimension of $A$.
If we project the data onto the two leading right singular vectors, we are unable to identify the subgroups because the variation along the first 10 columns is significantly larger than in any other direction, so some combinations of those columns are selected by the SVD. \n\nSuppose we have another data set $B$ (a background data set), whose first 10 columns are sampled from a normal distribution with a mean of 0 and a variance of 100, the next 10 columns are sampled from a normal distribution with a mean of 0 and a variance of 9 and the last 10 columns are sampled from a normal distribution with a mean of 0 and a unit variance. The choice of the background data set is key in this context. Generally, the background data set should have the structure we would like to suppress in the target data, which usually corresponds to the direction with high variance but not of interest for the data analysis \\cite{Abid}. With the new data, one way to extract discriminative features for clustering the subgroups in $A$ is to maximize the variance of $A$ while minimizing that of $B$, which leads to a trace ratio maximization problem \\cite{chen}\n\\[\\widehat U := \\argmax_{U \\in \\mathbb R^{n \\times k}, \\ U^TU=I_k}~ \\text{Tr}~\\big[(U^T\\!B^T\\!B\\,U)^{-1}(U^T\\!A^T\\!A\\,U)\\big],\\]\nwhere $n=30$ and $k=5$ or $k=10$. \nBy doing this, the first dimensions are less likely to be selected because they also have a high variance in data set $B$. Instead, the middle and last dimensions of $A$ are likely to be selected as they have the dimensions with the lowest variance in $B$, thereby allowing us to separate all four subgroups. 
The solution $\\widehat U \\in \\mathbb R^{n \\times k}$ to the above problem is given by the $k$ (right) eigenvectors of $(B^T\\!B)^{-1}(A^T\\!A)$ corresponding to the $k$ largest eigenvalues (cf.\\ \\cite[pp.~448--449]{fukunaga2013}); this corresponds to the (``largest'') right generalized singular vectors of $(A,B)$ (the transpose of \\eqref{eqn:matrix_Y}). As seen in \\cref{fig:3}, projecting $A$ onto the leading two right generalized singular vectors produces a much clearer subgroup separation (top-right figure) than projecting onto the leading two right singular vectors (top-left figure). Therefore, we can expect that a CUR decomposition based on the SVD will also perform poorly at separating the subgroups. The bottom figures visualize the data using the first two important columns selected by the DEIM-CUR (left figure) and the DEIM-GCUR (right figure). To a large extent, the GCUR is able to differentiate the subgroups while the CUR fails to do so. We investigate this further by comparing the performance of subset selection via DEIM-CUR on $A$ (\\cref{alg:CUR-DEIM}) and DEIM-GCUR on $(A, B)$ (\\cref{alg:GCUR-DEIM}) in identifying the subgroup or class representatives of $A$; we select a subset of the columns of $A$ (5 and 10) and compare the classification results of each method. We center the data sets by subtracting the mean of each column from all the entries in that column. Given the class labels of the subgroups, we perform a ten-fold cross-validation (i.e., we split the data points into 10 groups and for each unique group take that group as test data and the rest as training data \\cite[p.~181]{james2013introduction}) and apply two classifiers on the reduced data set: ECOC (Error Correcting Output Coding) \\cite{dietterich1994} and {\\em classification tree} \\cite{banfield2006} using the functions {\\sf fitcecoc} and {\\sf fitctree} with default parameters as implemented in MATLAB.
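This eigenvector characterization can be sketched in Python on random stand-in data; we use SciPy's symmetric-definite eigensolver so that the inverse of $B^T B$ is never formed explicitly (a hedged sketch; names are ours):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
m, d, n, k = 400, 300, 30, 5
A = rng.standard_normal((m, n))   # target data (stand-in)
B = rng.standard_normal((d, n))   # background data (stand-in)

# k dominant eigenvectors of (B^T B)^{-1} (A^T A), computed from the
# symmetric-definite pencil (A^T A, B^T B); eigh returns ascending eigenvalues
evals, evecs = eigh(A.T @ A, B.T @ B)
U_hat = evecs[:, ::-1][:, :k]     # eigenvectors for the k largest eigenvalues
A_proj = A @ U_hat                # discriminative low-dimensional representation
```

Solving the pencil directly avoids the conditioning penalty incurred by explicitly inverting $B^T B$.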
It is evident from \\cref{tab:2} that the TGSVD and the GCUR achieve the lowest classification error rates: e.g., for reducing the dimension from 30 to 10, the error rates are 0\\% and 6.3\\%, respectively, using the ECOC classifier, and 0\\% and 9.5\\%, respectively, using the tree classifier. The standard DEIM-CUR method achieves the worst classification error rate.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Plots\/GCUR_plots\/svd_gsvd_2.png}\\\\\n \\includegraphics[width=\\textwidth]{Plots\/GCUR_plots\/cur_gcur.png}\n \\caption{(Top-left) We project the synthetic data containing four subgroups onto the first two dominant right singular vectors. The lower-dimensional representation using the SVD does not effectively separate the subgroups. In the bottom-left figure, we visualize the data using the first two columns selected by DEIM-CUR. (Top-right) We illustrate the advantage of using the GSVD by projecting the data onto the first two dominant right generalized singular vectors corresponding to the two largest generalized singular values. In the bottom-right figure, we visualize the data using the first two columns selected by DEIM-GCUR. The lower-dimensional representation of the data using the GSVD-based methods clearly separates the four clusters while the SVD-based methods fail to do so.}\n \\label{fig:3}\n\\end{figure}\n\n\\begin{table}[ht!]\n\\centering\n{\\caption{$k$-Fold loss is the average classification loss over all 10 folds using the SVD, GSVD, CUR, and GCUR as dimension reduction methods in \\cref{exp:3}. The second and third columns give information on the number of columns selected from the data set using the CUR and GCUR plus the number of singular and generalized singular vectors considered for the ECOC classifier.
Likewise, the fifth and sixth columns for the tree classifier.}\\label{tab:2}\n{\\footnotesize\n\\begin{tabular}{lcclcc} \\hline \\rule{0pt}{3mm}%\nMethod & \\multicolumn{2}{c}{$k$-Fold Loss} & Method & \\multicolumn{2}{c}{$k$-Fold Loss}\\\\\n & 5 & 10 & & 5 &10\\\\ \\hline \\rule{0pt}{3.5mm}%\nTSVD+ECOC &$0.638$ &$0.490$ & TSVD+Tree & $0.693$ & $0.555$ \\\\\nTGSVD+ECOC & $0\\phantom{.000}$& $0\\phantom{.000}$ & TGSVD+Tree & $0\\phantom{.000}$ & $0\\phantom{.000}$ \\\\[0.5mm]\nCUR+ECOC & $0.793$ &$0.485$ & CUR+Tree & $0.793$ & $0.540$ \\\\\n GCUR+ECOC & $0.055$&$0.063$ & GCUR+Tree & $0.075$ &$0.095$ \\\\[0.5mm] \\hline\n\\end{tabular}}}\n\n\\end{table}\n}\n\\end{experiment}\n\\begin{experiment}\\label{exp:4}{\\rm We will now investigate the performance of the GCUR compared to the CUR on higher-dimensional public data sets. The data sets consist of single-cell RNA expression levels of bone marrow mononuclear cells (BMMCs) from an acute myeloid leukemia (AML) patient and two healthy individuals. We have data on the BMMCs before stem-cell transplant and the BMMCs after stem-cell transplant. We preprocess the data sets as described by the authors in \\cite{boileau2020}\\footnote{\\url{https:\/\/github.com\/PhilBoileau\/EHDBDscPCA\/blob\/master\/analyses\/}}, keeping the 1000 most variable genes measured across all 16856 cells (patient-035: 4501 cells; two healthy individuals: one with 1985 cells and the other with 2472 cells). The data from the two healthy individuals are combined to create a background data matrix of dimension $4457 \\times 1000$, and we use the patient-035 data set as the target data matrix of dimension $4501 \\times 1000$. Both data matrices are sparse: the patient-035 data matrix has 1,628,174 nonzeros; i.e., about 36\\% of all entries are nonzero, and the background data matrix has 1,496,229 nonzeros; i.e., about 34\\% of all entries are nonzero.
We are interested in exploring the differences in the AML patient's BMMC cells pre- and post-transplant.\nWe perform the SVD, GSVD, CUR, and GCUR on the target data (AML patient-035) to see if we can capture the biologically meaningful information relating to the treatment status. For the GSVD and the GCUR procedures, the background data is taken into account. As evident in \\cref{fig:4}, the GSVD and the GCUR produce almost linearly separable clusters, which correspond to the pre- and post-treatment cells. These methods evidently capture the biologically meaningful information relating to the treatment and are more effective at separating the pre- and post-transplant cell samples compared to the other two. For the SVD and the CUR, we observe that both cell types follow a similar distribution in the space spanned by the first three dominant right singular vectors and the first three important gene columns, respectively. Both methods fail to separate the pre- and post-transplant cells. \n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Plots\/GCUR_plots\/cell_035.png}\n \\caption{Acute myeloid leukemia patient-035 scRNA-seq data. (Top-left) A 3-D projection of the patient's BMMCs on the first three dominant right singular vectors. In the bottom-left figure, we visualize the data using the first three genes selected by DEIM-CUR. The lower-dimensional representation using the SVD-based methods does not effectively give a discernible cluster of the pre- and post-transplant cells. (Top-right) We illustrate the advantage of using the GSVD by projecting the patient's BMMCs onto the first three dominant right generalized singular vectors corresponding to the three largest generalized singular values. In the bottom-right figure, we visualize the data using the first three genes selected by DEIM-GCUR.
The lower-dimensional representation using the GSVD-based methods produces linearly separable clusters.}\n \\label{fig:4}\n\\end{figure}\n}\n\\end{experiment}\n\\section{Conclusions}\\label{sec:Con} In this paper, we propose a new method, the DEIM-induced GCUR (generalized CUR) factorization, with pseudocode in \\cref{alg:GCUR-DEIM}. It is an extension of the DEIM-CUR decomposition to matrix pairs. When $B$ is square and nonsingular, there are close connections between the GCUR of $(A,B)$ and the DEIM-induced CUR of $AB^{-1}$. When $B$ is the identity, the GCUR decomposition of $A$ coincides with the DEIM-induced CUR decomposition of $A$. There exists a similar connection between the CUR of $AB^+$ and the GCUR of $(A, B)$ for a nonsquare but full-rank matrix $B$. \n\nThe DEIM index selection procedure independently selects the row and column indices defining the GCUR, which facilitates the parallelization of the decomposition. We note that we can restrict the generalized CUR decomposition discussed in this paper to a generalized interpolative decomposition (see \\eqref{eq:ID}). Although we used the generalized singular vectors here, in principle one could use other vectors, e.g., an approximation to the generalized singular vectors. Instead of the DEIM procedure, we can also use other index selection procedures from CUR-type factorizations for the GCUR. In our experiments, we choose the same number of columns and rows for the approximation of the original matrix; however, this is not necessary. We have extended the existing theory concerning the DEIM-CUR approximation error to this DEIM generalized CUR factorization; we derived error bounds for a rank-$k$ GCUR approximation of $A$ in terms of a rank-$k$ GSVD approximation of $A$.\n\nWhile a CUR decomposition acts on one data set, a GCUR factorization decomposes two data sets together. 
An implication of this is that we can use it to select discriminative features of one data set relative to another. For subgroup discovery and subset selection in a classification problem where two data sets are available, the new method can perform better than the standard DEIM-CUR decomposition, as shown in the numerical experiments. The GCUR algorithm can also be useful in applications where a data matrix suffers from non-white (colored) noise. The GCUR algorithm can provide more accurate approximation results compared to the DEIM-CUR algorithm when recovering an original low-rank matrix from data perturbed with colored noise. For the recovery of data perturbed with colored noise, we need the Cholesky factor of an estimate of the noise covariance. However, as shown in the experiments, even for an inexact Cholesky factor the GCUR may still give good approximation results. We note that, while the GSVD always provides a more accurate result than the SVD regardless of the noise level, the GCUR decomposition is particularly attractive compared to the CUR factorization for higher noise levels and moderate values of the rank of the recovered matrix. In other situations, both methods may provide comparable results. In addition, the GCUR decomposition is a discrete method, so choosing indices for columns and rows instead of using the generalized singular vectors leads to worse results than the GSVD approach. \n\nComputationally, the DEIM-GCUR algorithm requires the input of the GSVD, which has the same asymptotic complexity as, but is more expensive than, the SVD required for DEIM-CUR. For the case where we are only interested in approximating the matrix $A$ from the pair $(A, B)$, we can omit some of the lines in \\cref{alg:GCUR-DEIM}, thus saving computational cost. 
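For concreteness, the greedy DEIM index selection at the core of \cref{alg:GCUR-DEIM} can be sketched in a few lines of NumPy. This is a minimal illustration of the standard procedure applied to a basis of (generalized) singular vectors, not the full algorithm from the paper; in particular, the choice $M = C^{+} A R^{+}$ below is one common way to build the middle factor, shown here only as an assumption.

```python
import numpy as np

def deim_indices(V):
    """Greedy DEIM selection on an n-by-k matrix V with independent columns
    (e.g., singular or generalized singular vectors)."""
    _, k = V.shape
    idx = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        # interpolate column j on the rows selected so far and take the
        # position of the largest residual as the next index
        c = np.linalg.solve(V[np.ix_(idx, range(j))], V[idx, j])
        r = V[:, j] - V[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def cur_from_indices(A, col_idx, row_idx):
    """CUR-type factors from chosen indices; M = C^+ A R^+ minimizes ||A - C M R||_F."""
    C, R = A[:, col_idx], A[row_idx, :]
    M = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, M, R
```

With this middle factor, a rank-$k$ matrix whose selected columns and rows have full rank is recovered exactly.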
\nWhen the matrices $A$ and $B$ are very large, a full GSVD may not be affordable; in this case, we can consider iterative methods (see, e.g., \\cite{Zwaan, hochstenbach, Zha}).\n\nWhile in this work we used the GCUR method in applications such as extracting information from one data set relative to another, we expect that its promise may be more general.\n\n\\bibliographystyle{siam}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Gamma rays from DM spikes around IMBHs}\n\\label{normalization}\nWe give simple numerical estimates that illustrate how the IMBH-spike scenario can readily account for the \\textit{Fermi} ``excess'' for very small annihilation cross-sections. Following \\citet{agrawal17}, we estimate the DM parameters that can reproduce the flux of one point source from the analysis of \\citet{lee16}. About $10^{3}$ such point sources are then needed to contribute a flux of the order of the residual $\\gamma$-ray emission.\nThe integrated photon flux for a spike around an individual IMBH---referred to as a mini-spike in the following---between $E_{\\gamma,\\mathrm{min}} = 1.893\\, \\mathrm{GeV}$ and $E_{\\gamma,\\mathrm{max}} = 11.943\\, \\mathrm{GeV}$ \\citep{lee16} is given as usual by \n\\begin{equation}\n\\Phi_{\\mathrm{sp}} = \\dfrac{\\left\\langle \\sigma v \\right\\rangle}{2 m_{\\mathrm{DM}}^{2} d^{2}} \\left( \\int_{E_{\\gamma,\\mathrm{min}}}^{E_{\\gamma,\\mathrm{max}}} \\! \\dfrac{\\mathrm{d}N}{\\mathrm{d}E_{\\gamma}} \\, \\mathrm{d}E_{\\gamma}\\right) \\left( \\int_{0}^{R_{\\mathrm{sp}}} \\! r^{2} \\rho^{2}(r) \\, \\mathrm{d}r \\right),\n\\end{equation}\nwhere $\\left\\langle \\sigma v \\right\\rangle$ is the velocity-averaged annihilation cross-section, $m_{\\mathrm{DM}}$ the mass of the DM candidate, $\\mathrm{d}N\/\\mathrm{d}E_{\\gamma}$ the $\\gamma$-ray spectrum per annihilation---taken from \\citet{cirelli2011}, and $d \\approx 8.32\\ \\rm kpc$ \\citep{gillessen2017}, the distance between Earth and the IMBH. 
Our benchmark scenario is a DM candidate with $m_{\\mathrm{DM}} = 30\\ \\rm GeV$ annihilating into $b\\bar{b}$, compatible with the spectral properties of the GC residual $\\gamma$-ray emission. The DM profile in the mini-spike is defined as follows:\\footnote{We use a simplified expression based on \\citet{gondolo99}. The inner boundary of the spike is related to capture of DM particles by the BH and is given by $2R_{\\mathrm{S}}$ for a non-rotating BH \\citep{sadeghian13}. The precise value of this inner radius has no impact on our calculations. The DM profile outside the spike is not relevant since the dominant part of the annihilation flux comes from the inner region of the spike.} \n\\begin{equation}\n\\rho(r) = \n\\begin{cases}\n0 & r \\leqslant 2 R_{\\mathrm{S}} \\\\\n\\rho_{\\mathrm{sat}} & 2 R_{\\mathrm{S}} < r \\leqslant R_{\\mathrm{sat}} \\\\\n\\rho_{0} \\left( \\dfrac{r}{R_{\\mathrm{sp}}} \\right)^{-\\gamma_{\\mathrm{sp}}} & R_{\\mathrm{sat}} < r \\leqslant R_{\\mathrm{sp}}\n\\end{cases},\n\\end{equation}\nwhere the saturation density is given by $\\rho_{\\mathrm{sat}} = m_{\\mathrm{DM}}\/(\\left\\langle \\sigma v \\right\\rangle t_{\\mathrm{BH}})$ with $t_{\\mathrm{BH}}$ the BH age, and $R_{\\mathrm{sat}} = R_{\\mathrm{sp}} (\\rho_{\\mathrm{sat}}\/\\rho_{0})^{-1\/\\gamma_{\\mathrm{sp}}}$ by continuity. The radial extension of the spike $R_{\\mathrm{sp}}$ is of the order of the BH influence radius, $G M_{\\mathrm{BH}}\/\\sigma_{*}^{2}$ \\citep{peebles72}. The extended $M_{\\mathrm{BH}}$-$\\sigma_{*}$ relation for IMBHs \\citep{tremaine2002} gives an estimated value of $\\sigma_{*} \\approx 10\\ \\rm km\\ s^{-1}$, and $R_{\\mathrm{sp}} \\approx 0.043\\, \\rm pc$. Then, $\\rho_{0} \\approx (3-\\gamma_{\\mathrm{sp}}) M_{\\mathrm{sp}}\/(4 \\pi R_{\\mathrm{sp}}^{3})$ follows by requiring the mass inside the spike, $M_{\\mathrm{sp}}$, to be of the order of the BH mass, with $M_{\\mathrm{sp}} \\approx M_{\\mathrm{BH}} \\approx 10^{2}$--$10^{3}\\ M_{\\odot}$. 
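The piecewise profile and the continuity condition fixing $R_{\mathrm{sat}}$ translate directly into code. This is a sketch in arbitrary normalized units; the parameter values below are placeholders, not the benchmark values.

```python
def make_spike(rho0, R_sp, gamma_sp, rho_sat, R_S):
    """Mini-spike density profile; R_sat follows from continuity of the two branches."""
    R_sat = R_sp * (rho_sat / rho0) ** (-1.0 / gamma_sp)

    def rho(r):
        if r <= 2 * R_S:
            return 0.0                               # inside the capture radius
        if r <= R_sat:
            return rho_sat                           # annihilation plateau
        if r <= R_sp:
            return rho0 * (r / R_sp) ** (-gamma_sp)  # power-law spike
        return 0.0                                   # outside the spike (irrelevant here)

    return rho, R_sat

# normalized placeholder parameters
rho, R_sat = make_spike(rho0=1.0, R_sp=1.0, gamma_sp=2.25, rho_sat=1e4, R_S=1e-9)
```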
For $\\gamma_{\\mathrm{sp}} > 3\/2$, the integrated flux for a single mini-spike reads\\footnote{For $\\gamma_{\\mathrm{sp}} = 3\/2$, the integrated flux for a single mini-spike is given by\n\\begin{align}\n\\Phi_{\\mathrm{sp}} \\approx \\ & 7 \\times 10^{-12} \\, \\mathrm{cm^{-2} \\, s^{-1}} \\ln \\left( 6 \\times 10^{6} \\left( \\dfrac{M_{\\mathrm{sp}}}{10^{3}\\, \\mathrm{M_{\\odot}}} \\right)^{-1} \\left( \\dfrac{R_{\\mathrm{sp}}}{0.043\\, \\mathrm{pc}} \\right)^{3} \\right. \\nonumber \\\\\n& \\times \\left. \\left( \\dfrac{m_{\\mathrm{DM}}}{30\\, \\mathrm{GeV}} \\right) \\left( \\dfrac{\\left\\langle \\sigma v \\right\\rangle}{3 \\times 10^{-31} \\, \\mathrm{cm^{3}\\, s^{-1}}} \\right)^{-1} \\left( \\dfrac{t_{\\mathrm{BH}}}{10^{10}\\, \\mathrm{yr}} \\right)^{-1} \\right) \\nonumber \\\\\n& \\times \\left( \\dfrac{d}{8.32\\, \\mathrm{kpc}} \\right)^{-2} \\left( \\dfrac{M_{\\mathrm{sp}}}{10^{3}\\, \\mathrm{M_{\\odot}}} \\right)^{2} \\left( \\dfrac{R_{\\mathrm{sp}}}{0.043\\, \\mathrm{pc}} \\right)^{-3} \\nonumber \\\\\n& \\times \\left( \\dfrac{m_{\\mathrm{DM}}}{30\\, \\mathrm{GeV}} \\right)^{-2} \\left( \\dfrac{\\left\\langle \\sigma v \\right\\rangle}{3 \\times 10^{-31} \\, \\mathrm{cm^{3}\\, s^{-1}}} \\right) \\left( \\dfrac{N_{\\gamma}^{(\\mathrm{tot})}}{5.7} \\right). 
\\nonumber\n\\end{align}}\n\\begin{align}\n\\label{spike_flux}\n\\Phi_{\\mathrm{sp}} =\\ & \\dfrac{\\gamma_{\\mathrm{sp}}}{3(2 \\gamma_{\\mathrm{sp}}-3)} \\left( \\dfrac{3-\\gamma_{\\mathrm{sp}}}{4\\pi} \\right)^{\\frac{3}{\\gamma_{\\mathrm{sp}}}} \\dfrac{1}{d^{2}} M_{\\mathrm{sp}}^{\\frac{3}{\\gamma_{\\mathrm{sp}}}} \\nonumber \\\\\n& \\times R_{\\mathrm{sp}}^{3\\left( 1-\\frac{3}{\\gamma_{\\mathrm{sp}}}\\right) } t_{\\mathrm{BH}}^{\\frac{3}{\\gamma_{\\mathrm{sp}}}-2} m_{\\mathrm{DM}}^{-\\frac{3}{\\gamma_{\\mathrm{sp}}}} \\left\\langle \\sigma v \\right\\rangle^{\\frac{3}{\\gamma_{\\mathrm{sp}}}-1} N_{\\gamma}^{(\\mathrm{tot})},\n\\end{align}\nwhere \n$N_{\\gamma}^{(\\mathrm{tot})} = \\int_{E_{\\gamma,\\mathrm{min}}}^{E_{\\gamma,\\mathrm{max}}} \\! (\\mathrm{d}N\/\\mathrm{d}E_{\\gamma}) \\, \\mathrm{d}E_{\\gamma}$. More specifically, for $\\gamma_{\\mathrm{sp}} = 9\/4$, the flux from a mini-spike is\n\\begin{align}\n\\Phi_{\\mathrm{sp}} \\approx & \\ 1 \\times 10^{-10}\\, \\mathrm{cm^{-2}\\,s^{-1}} \\left( \\dfrac{d}{8.32\\, \\mathrm{kpc}} \\right)^{-2} \\left( \\dfrac{M_{\\mathrm{sp}}}{10^{3}\\, \\mathrm{M_{\\odot}}}\\right) ^{4\/3} \\nonumber \\\\\n& \\times \\left( \\dfrac{R_{\\mathrm{sp}}}{0.043\\, \\mathrm{pc}} \\right)^{-1} \\left( \\dfrac{m_{\\mathrm{DM}}}{30\\, \\mathrm{GeV}} \\right)^{-4\/3} \\nonumber \\\\\n& \\times \\left( \\dfrac{\\left\\langle \\sigma v \\right\\rangle}{2 \\times 10^{-40} \\, \\mathrm{cm^{3}\\, s^{-1}}} \\right)^{1\/3} \\left( \\dfrac{t_{\\mathrm{BH}}}{10^{10}\\, \\mathrm{yr}} \\right)^{-2\/3} \\left( \\dfrac{N_{\\gamma}^{(\\mathrm{tot})}}{5.7} \\right).\n\\end{align}\nLet us assume that the entire point-source contribution to the \\textit{Fermi} excess fluctuations is given by the IMBH spikes. The resulting limits on the annihilation cross-section are as follows. 
The upper limit on $\\left\\langle \\sigma v \\right\\rangle$ is extremely small ($\\sim 10^{-40}\\, \\mathrm{cm^{3}\\, s^{-1}}$) for a steep mini-spike with $\\gamma_{\\mathrm{sp}} = 9\/4$ and $M_{\\mathrm{BH}} = 10^{3}\\, M_{\\odot}$. This is related to the very weak dependence of $\\Phi_{\\mathrm{sp}}$ on the cross-section. For a relaxed spike with $\\gamma_{\\mathrm{sp}} = 3\/2$, the upper limit on the cross-section is of the order of $10^{-31} \\, \\mathrm{cm^{3}\\, s^{-1}}$ for a population of $10^{3}\\, M_{\\odot}$ IMBHs. For $M_{\\mathrm{BH}} = 10^{2}\\, M_{\\odot}$, the best-fit cross-sections become $2 \\times 10^{-36}\\, \\mathrm{cm^{3}\\, s^{-1}}$ for $\\gamma_{\\mathrm{sp}} = 9\/4$ and $3 \\times 10^{-29}\\, \\mathrm{cm^{3}\\, s^{-1}}$ for $\\gamma_{\\mathrm{sp}} = 3\/2$.\n\n\n\n\\section{Global signal and spatial morphology}\nWe now consider a distribution of IMBHs that collect in the inner galaxy. Most of them are failed mergers, as mentioned above, with a total mass amounting to of the order of the mass of the central SMBH, $4 \\times 10^6\\, M_\\odot$. First, we note that to compute the radial profile of $\\gamma$-rays, we need to convolve the radial distribution of the IMBHs with the radial dependence of the mini-spike flux.\n\nThe DM interpretation of the \\textit{Fermi} excess works for the morphology because it naturally gives the $\\gamma$-ray profile as roughly the square of the NFW profile, or $r^{-2}$. The present model needs to address this point. However, the case for the DM interpretation may not be that strong. Firstly, the MWG may have a DM core \\citep{Portail17}. Second, the \\textit{Fermi} excess can be fit, according to a reanalysis, by a stellar mass (bulge)-related profile \\citep{bartels17}. \n\nThe point sources (IMBHs) have a $r^{-3\/2}$ density profile toward the GC. This is a dynamically relaxed profile that follows the Bahcall--Wolf solution for a stellar cusp \\citep{BW76}. 
This might not match the observed profile if the mini-spike masses and luminosities are independent of radius. In fact,\nthere will be mass segregation, with the more massive IMBHs falling in closer to the GC but stalling at\/near the final parsec. IMBHs are point masses, and too dense to be tidally disrupted. Let us estimate the radial dependence of the mini-spike luminosity. The radial flux profile for a set of mini-spikes around BHs is $\\Phi_{r} \\propto \\Phi_{\\mathrm{sp}} r^{-3\/2}$. From Eq.~(\\ref{spike_flux}), for a fixed $\\sigma_{*}$ (i.e., $R_{\\mathrm{sp}} \\propto M_{\\mathrm{BH}}$), the flux for an individual mini-spike scales as $\\Phi_{\\rm sp} \\propto M_{\\mathrm{BH}}^{3-6\/\\gamma_{\\rm sp}}$, where $\\gamma_{\\mathrm{sp}}= (9-2\\gamma)\/(4-\\gamma)$. Here, we instead assume $M_{\\mathrm{sp}} = M_{\\mathrm{BH}}$ and $R_{\\mathrm{sp}} = GM_{\\mathrm{BH}}\/\\sigma_{*}^2$, with $\\sigma_{*} \\propto M_{\\mathrm{BH}}^{1\/4}$ \\citep{tremaine2002} so that $R_{\\mathrm{sp}} \\propto M_{\\mathrm{BH}}^{1\/2}$. Hence, $\\Phi_{\\mathrm{sp}} \\propto M_{\\mathrm{BH}}^{(3\/2)(1-1\/\\gamma_{\\mathrm{sp}})}$. In addition, $M_{\\mathrm{BH}}$ increases as $r$ decreases because of mass segregation by settling. The two-body relaxation time-scale is $t_{\\mathrm{r}} \\propto 1\/M_{\\mathrm{BH}}$, as is the dynamical friction time, which is $\\sim M_{\\mathrm{SMBH}}\/M_{\\mathrm{BH}}$ orbital times. One needs a simple diffusion model to go further, but a rough guess using adiabatic invariants might be $r v M_{\\mathrm{BH}} = \\mathrm{const}$ (conservation of angular momentum), so that $M_{\\mathrm{BH}} \\propto r^{-1\/2}$. Hence, the radial flux profile is \n\\begin{equation}\n\\Phi_{r} \\propto r^{-\\frac{9}{4} \\left( 1 - \\frac{1}{3\\gamma_{\\mathrm{sp}}} \\right) }.\n\\end{equation}\nGenerally, $\\gamma_{\\mathrm{sp}}= (9-2\\gamma)\/(4-\\gamma)$, so that for a core, $\\gamma=0$ and $\\gamma_{\\mathrm{sp}}=9\/4$, while for $\\gamma = 1$, $\\gamma_{\\mathrm{sp}}=7\/3$ and for $\\gamma=3\/2$, $\\gamma_{\\mathrm{sp}}=12\/5$. 
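The resulting slopes can be verified with exact rational arithmetic, using the mapping $\gamma \mapsto \gamma_{\mathrm{sp}}$ and the flux exponent from the relations above:

```python
from fractions import Fraction as F

def flux_slope(gamma):
    """Exponent s in Phi_r ∝ r^(-s) for a DM halo with inner slope gamma."""
    gamma_sp = (9 - 2 * gamma) / (4 - gamma)   # adiabatic spike slope
    return F(9, 4) * (1 - 1 / (3 * gamma_sp))

for g in (F(0), F(1), F(3, 2)):
    print(g, flux_slope(g))   # slopes 23/12, 27/14, 31/16 -- all close to 2
```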
Hence, for adiabatic mini-spikes, the radial profile is $\\Phi_{r}^{(\\gamma=0)} \\propto r^{-23\/12}$, $\\Phi_{r}^{(\\gamma=1)} \\propto r^{-27\/14}$ and $\\Phi_{r}^{(\\gamma=3\/2)} \\propto r^{-31\/16}$. Therefore, $\\Phi_{r} \\propto r^{-2}$ for IMBH mini-spikes, always approximating the observed $\\gamma$-ray profile independently of the DM halo profile.\n\n\n\n\\begin{figure}[h!]\n\\centering \n\\includegraphics[width=\\linewidth]{GCE_profile.pdf} \n\\caption{\\label{profile}Angular profiles for the total $\\gamma$-ray emission at 2 GeV with bright point sources masked (black solid line; \\citealp{ackermann17}) and for various components of the $\\gamma$-ray emission. Green squares: MSP-like component extracted from the data \\citep{ackermann17}. Red dashed line: GeV excess in the sample model from \\citet{ackermann17} corresponding to a generalized NFW profile template with slope $\\gamma = 1.25$. Magenta dotted-dashed line: GeV excess in the sample model but for a regular NFW profile ($\\gamma = 1$). Yellow line: prediction for MSPs in the bulge of the Milky Way from disrupted globular clusters \\citep{brandt15}. Our IMBH--mini-spike model is depicted by the blue shaded area for benchmark slopes discussed in the text. Here, we are mostly interested in illustrating the spatial morphology of the signal in our model, so we arbitrarily rescaled the angular IMBH--mini-spike profile at the level of the first MSP-like point.}\n\\end{figure}\n\n\n\n\n\nThese predictions are illustrated in Fig.~\\ref{profile}, which shows the angular profile of the total GC $\\gamma$-ray data at 2 GeV with bright point sources masked \\citep{ackermann17}, along with the profiles of various components of the $\\gamma$-ray emission. Our model typically gives an angular profile that is consistent with expectations from bulge sources like MSPs. 
\n\nThe situation is complicated by the fact that the DM spikes may be heated---for relaxed mini-spikes $\\Phi_{r} \\propto r^{-1.75}$---and partially stripped as the IMBHs fall into the GC region, although tidal disruption of PBH clusters and dynamical friction may in turn steepen the IMBH profile, as discussed in \\citet{fragione17} for MSPs. Regardless, it seems plausible, pending detailed simulations, that our model gives a good approximation to the \\textit{Fermi} $\\gamma$-ray excess profile.\n\n\n\\section{Discussion}\nOne attractive model for the LIGO events argues that hard massive BH binaries form in dense stellar clusters. This scenario has one advantage over rivals: it was proposed before the aLIGO detection \\citep{bae14} to give acceptable rates and masses. Protoglobular clusters are likely pregalactic sites\n and are dispersed as substructure disrupts when the bulge formed. Stellar cluster-enhanced formation of massive BH binaries quantitatively accounts for the observed LIGO rates, when integrated out to several hundred Mpc \\citep{park17}. Such massive BHs may have formed prolifically at high redshift, when there was most likely a top-heavy initial mass function, providing a possible pathway to forming IMBHs. In the PBH case, one appeals to BH binary formation by early capture in the first bound DM substructures at the onset of matter domination \\citep{sasaki16}. Some subsets of these (one needs of the order of 10\\%) might have merged to form IMBHs.\n\nWe expect that massive binaries should be enhanced in number near the GC where the most massive protoglobulars dispersed to form the central NSC. These would generate MSPs as well as BH binaries. Hence, these two populations should track each other. Neither would have a significant disk component. Another consequence would be an enhanced rate of BH mergers in galactic nuclei that might be detectable by LIGO \\citep{nishikawa2017}. These LIGO events occur within the IMBH sphere of influence. 
This could lead to enhanced drag and affect the gravitational-wave signal phase evolution. This could potentially be seen as a cumulative phase shift by LISA over many cycles \\citep{yue2017}.\n\nWe showed that mini-spikes around a population of hundreds or thousands of IMBHs can significantly contribute to the GC emission and can readily account for both the normalization and spatial morphology of the $\\gamma$-ray excess for very small annihilation cross-sections. The expected morphology of the predicted excess does not necessarily follow the standard DM halo profile; for instance, it can effectively trace the Galactic bulge due to mass segregation and the dependence of mini-spike luminosities on BH mass. This circumvents the issue raised by the observation of an excess of $\\gamma$-rays in control regions in the disk where no significant contribution from DM is expected \\citep{ackermann17}. IMBHs would appear naturally in central regions due to three-body encounters and ejections. This distinctive morphology also allows the model to evade the constraints of \\citet{clark16} that ruled out a DM interpretation of the excess in terms of ultra-compact mini-halos. We note that the constraints of \\citet{clark16} do not account for more recent studies of the GC emission that revealed a more complex spatial morphology \\citep{ackermann17,bartels17}. Finally, we expect the central massive BHs seen in nearby galactic centers, if indeed formed in the early universe, to have DM spikes, and hence to be \\textit{Fermi} $\\gamma$-ray sources. 47 Tuc is a possible example \\citep{abdo09}, although one cannot easily distinguish a possible $\\gamma$-ray point source from the expected population of MSPs. Future observations may help us elucidate this point.\n\n\\section*{Acknowledgments}\nT.L. receives financial support from CNRS-IN2P3. T.L. 
also acknowledges support from the European Union's Horizon 2020 research and innovation program under the Marie Sk\\l{}odowska-Curie grant agreement Nos. 690575 and 674896; beside recurrent institutional funding by CNRS-IN2P3 and the University of Montpellier. The work of J.S. has been supported in part by European Research Council (ERC) Project No. 267117 (DARK) hosted by Universit\\'{e} Pierre \\& Marie Curie -- Paris VI, Sorbonne Universit\\'{e}s.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Appendix}\n\n\\subsection{Experiment settings}\n\\section{Approaches}\n\\label{sec:approaches}\n\nIn this setting, we hope to investigate the ability of {\\texttt{BERT}}~ model to predict events in a sentence without training. \nTo this end, we attempt to convert event detection in the following 3 well-known problems: entailment prediction, question answering, and masked token prediction. \n\\begin{enumerate}\n \\item \\textbf{Textual entailment (TE).}\n In standard textual entailment prediction setting, as depicted in ~\\cite{mnli}, the model is given a pair of sentences, called ``premise'' and ``hypothesis'', and the model is tasked to predict whether the hypothesis is entailed by, contradicts to, or is irrelevant to the premise. \n We convert event detection to this format by treating each sentence as the premise, and for each event, a statement that this event has occurred as the hypothesis. \n Ideally, the model would classify that the sentence entails the event statement if the event indeed occurred in the context of the sentence. \n \n We assume that the model predicts that the input sentence pair belongs to one of the three categories: \\emph{contradictory}, \\emph{entailment}, or \\emph{neutral}. 
\n Given a query-text pair constructed from Sections~\\ref{sec:preprocess:query_event} and~\\ref{sec:preprocess:query_argument}, we predict that the corresponding event or argument occurs if and only if the textual entailment model predicts ``entailment''. \n \\item \\textbf{Question answering (QA).}\n The input in this setting includes a pair of sentences, ``question'' and ``text'', as described in \\cite{rajpurkar2016squad}. The model is supposed to detect a span of words in the text that best answers the question, if there is any. \n We have two types of questions: the ``Wh''-type and the Yes-No type, as described in Section~\\ref{sec:method:arg_query}. We refer to them as SQ-QA and PQ-QA, respectively. \n For the first type, we apply the standard span detection approach. We treat the second type as a natural 0-1 classification problem, classifying the pair of sentences as either ``Yes'' or ``No''.\n \\item \\textbf{Masked token prediction. }\n This approach formulates event detection as a kind of ``fill in the blank'' problem. \n {\\texttt{BERT}}~ is essentially a \\emph{masked language model}, and it is supposed to predict masked tokens given the context. \n We combine sentences with event queries, but instead of filling in actual event types, we place the special \\texttt{[MASK]} token in place of event types and feed it to {\\texttt{BERT}}~. We then use {\\texttt{BERT}}~'s pretrained weights to predict the most probable tokens at the placeholders. \n We propose this method mostly as a probing technique to understand what information about events {\\texttt{BERT}}~ is able to summarize from texts and to justify the performance of other models. \n\\end{enumerate}\n\nThroughout this paper, we use DistilBert models pretrained on relevant data and tasks for finetuning and evaluation. \nSpecifically, for TE, we use DistilBert trained on MultiNLI~\\cite{mnli} data~\\footnote{https:\/\/huggingface.co\/ishan\/distilbert-base-uncased-mnli}. 
For both types of QA, we use DistilBert trained on SQuAD~\\cite{rajpurkar2016squad} data~\\footnote{https:\/\/huggingface.co\/distilbert-base-cased-distilled-squad}. \n\n\\section{Conclusion}\nWe propose a reading comprehension framework for event extraction tasks. \nWe design a textual-entailment-based method for event detection and a question-answering-based method for argument detection. \nExperimental findings suggest our framework can effectively distill knowledge from {\\texttt{BERT}}~ as well as guide the model with semantic information, achieving state-of-the-art results in few-shot and supervised settings.\n\nOur experiments also suggest several promising research directions. We summarize them here. \nIt is clear that while the current framework with mostly template-based queries can already achieve superior performance, \nthe queries are not ideal since they do not relate to the actual sentence context or to the causal relations between distinct events and arguments. \nOne promising direction is generating queries that are 1) more flexible and authentic, 2) relevant to the input sentence's context, and 3) able to reveal causal relations between events and arguments. We believe this would enable a more label-efficient and robust zero-shot and few-shot learning framework.\n\nIn our current framework and, indeed, all existing reading comprehension frameworks to the best of our knowledge, \none must construct multiple queries for one input instance to obtain complete classification results. \nThis could be burdensome when there is a large number of queries to be constructed, usually a result of a large number of label types. \nOne open problem is how to do so efficiently by enabling information sharing between different queries. \n\nExperiments show that the current {\\texttt{BERT}}~ model cannot learn efficiently from long event descriptions. A significant advancement in language modeling would be enabling it for zero- or few-shot learning with only a few descriptions and annotation guides. 
\n\nWe manually select reading comprehension tasks for two event extraction tasks. \nA key ingredient of a more general reading comprehension solution to NLP tasks is a principle to measure the transferability between tasks.\n\n\n\\subsection{Argument Detection Experiments}\n\\label{sec:arg}\n\\subsubsection{Probing without Supervision}\n\\label{sec:arg:zero_shot}\n\nAs in the event detection experiments, we first probe pretrained {\\texttt{BERT}}~ for QA tasks on SQuAD~\\cite{rajpurkar2016squad} and see if it can predict arguments without training~\\footnote{We used deepset\/bert-large-uncased-whole-word-masking-squad2. }.\nWe experiment on the following query types:\n\\begin{enumerate}\n \\item QA-Template. This constructs questions from the templates in Section~\\ref{sec:method:arg}.\n \\label{qa_exp_temp}\n \\item QA-Guide. This constructs questions from descriptions in the annotation guide~\\cite{ace05guide}.\n \\label{qa_exp_guide}\n \\item QA-Trigger. This constructs questions from a template similar to the one in Section~\\ref{sec:method:arg}, but the question is asked about the trigger, like: ``What is the \\texttt{[ARGUMENT]} in \\texttt{[TRIGGER]}?''.\n \\label{qa_exp_trigger}\n \\item QA-Trigger-Plus. This is based on a similar template which includes both the trigger word and the event name. For example: \n ``What is the \\texttt{[ARGUMENT]} in event \\texttt{[EVENT]} triggered by `\\texttt{[TRIGGER]}'?''\n \\label{qa_exp_trigger_plus}\n\\end{enumerate}\nQuery types~\\ref{qa_exp_trigger} and~\\ref{qa_exp_trigger_plus} are the ones used by \\cite{du2020event}. Since we are doing event detection without triggers, it is necessary to include~\\ref{qa_exp_trigger} \nas well to demonstrate the effect of removing trigger words on argument detection.\n\nTable~\\ref{tab:zero_shot_arg_results} shows results of models without training. Unlike in event detection, no threshold probability was manually chosen. 
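Concretely, the trigger-based query types are plain string templates. QA-Trigger and QA-Trigger-Plus below follow the templates quoted above verbatim; the event-detection hypothesis is a hypothetical stand-in for the templates of Section~\ref{sec:preprocess:query_event}, which are not reproduced here.

```python
def qa_trigger(argument, trigger):
    # QA-Trigger template, as quoted above
    return f"What is the {argument} in {trigger}?"

def qa_trigger_plus(argument, event, trigger):
    # QA-Trigger-Plus template, as quoted above
    return f"What is the {argument} in event {event} triggered by '{trigger}'?"

def te_hypothesis(event):
    # hypothetical wording for an event-detection hypothesis; the actual
    # templates of Section~\ref{sec:preprocess:query_event} may differ
    return f"A {event} event occurred."
```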
We see that all four query templates perform similarly, and all are much better than the random baseline. \nAlso, we observe that questions with trigger words perform slightly better than those without. However, this is based on ground-truth trigger-word annotations. \n\nIn Table~\\ref{tab:exp:arg_examples}, we show some examples of zero-shot predictions by generated queries.\nAdditionally, we manually wrote customized queries for each sentence based on the principle that the query should specify as much information as possible about the event, including other relevant arguments, like ``London's financial world'' and ``by the end of the year'', that are present in the input sentence. \n\nFrom Table~\\ref{tab:exp:arg_examples}, we can observe that the custom query is much more semantically and contextually related to the input sentence. Compared with template-based queries, which might ask ``Who started a new position'' or ``What is the artifact in Transfer-Ownership'', the written query is much more context-specific. \nIt provides stronger guidance for the {\\texttt{BERT}}~ model to extract the actual answer. \nUnfortunately, such queries are specific to sentence contexts and require human writing, and it is hard to scale up to the entire dataset at this time. \n\n\\begin{table}[tbp]\n \\centering\n \\begin{tabular}{c|cccc}\n & Precision~\\(\\uparrow\\) & Recall~\\(\\uparrow\\) & F1~\\(\\uparrow\\) & p-value~\\(\\downarrow\\) \\\\ \\hline\n Random & 10.64 & 15.53 & 12.63 & 0.56 \\\\\n QA-Temp & 31.83 & 22.96 & 26.67 & 0.00 \\\\ \n QA-Guide & \\textbf{31.89} & 23.08 & 26.78 & 0.00 \\\\ \n QA-Trig & 28.03 & \\textbf{26.26} & 27.12 & 0.00\\\\ \n \\rowcolor{LightCyan} QA-Trig-Plus & 30.05 & 24.80 & \\textbf{27.17} & 0.00\n \\end{tabular}\n \\caption{Zero-shot learning for argument detection. We report Precision, Recall, F1, and \\(p\\)-values. The arguments are predicted based on ground-truth event labels and trigger-word annotations. 
}\n \\label{tab:zero_shot_arg_results}\n\\end{table}\n\n\\begin{table*}[tbp]\n \\small\n \\centering\n \\begin{tabular}{p{0.43\\textwidth}|c|c|c|c}\n Sentence & Event, Argument, \\& Answer & Query Type & Prediction & Result \\\\ \\hline\n \\multirow{4}{0.43\\textwidth}{\\textbf{What is the organization he said that is going to start?} ``Prostitution is completely discriminalised in Sydney and we are going to build a monster,'' he said.} & \\multirow{4}*{\\shortstack{Start-Organization, \\\\ Organization, \\\\ ``a monster''}} & QA-Temp & ``a monster'' & Correct \\\\ \n & & QA-Guide & None & Wrong \\\\\n & & QA-Trigger & ``a monster'' & Correct \\\\ \n & & Custom & ``Universal \\(\\ldots\\) assets'' & Partial correct \\\\\\hline\n\n \\multirow{4}{0.43\\textwidth}{\\textbf{Who started a new job in London's financial world?} Former senior banker Callum McCarthy begins what is one of the most important jobs in London 's financial world in September , when incumbent Howard Davies steps down. } & \\multirow{4}*{\\shortstack{Start-Position, \\\\ Person, \\\\ ```Former \\(\\ldots\\) McCarthy''}} & QA-Temp & ``Former \\(\\ldots\\) McCarthy'' & Correct \\\\ \n & & QA-Guide & ``Former \\(\\ldots\\) McCarthy'' & Correct \\\\\n & & QA-Trigger & None & Wrong \\\\ \n & & Custom & ``Callum McCarthy'' & Partial correct \\\\\\hline\n\n \\multirow{4}{0.43\\textwidth}{\\textbf{What are the entertainment assets Vivendi tries to sell by the end of the year?} Vivendi confirmed that it planned to shed its entertainment assets by the end of the year, including its famed Universal movie studio and television assets.} & \\multirow{4}*{\\shortstack{Transfer-Ownership, \\\\ Artifact, \\\\ ``its famed\\(\\ldots\\) studio''}} & QA-Temp & None & Wrong \\\\ \n & & QA-Guide & None & Wrong \\\\\n & & QA-Trigger & None & Wrong \\\\ \n & & Custom & ``Universal \\(\\ldots\\) assets'' & Partial correct \\\\\\hline\n\n \\multirow{4}{0.43\\textwidth}{\\textbf{When did the meeting that Jean-Rene Fourtou 
participated in take place?} Chief executive Jean - Rene Fourtou told shareholders at the group 's annual general meeting Tuesday that \hide{the sale of Vivendi Universal Entertainment was a major goal for 2003 , and that} negotiations were already under way.} & \multirow{4}*{\shortstack{Meet, \\ Time-Within, \\ ``Tuesday''}} & QA-Temp & None & Wrong \\\n & & QA-Guide & None & Wrong \\\n & & QA-Trigger & None & Wrong \\\n & & Custom & ``Tuesday'' & Correct \\\hline\n \end{tabular}\n \hide{\n \begin{tabular}{l|c}\n Index & Custom Query \\ \hline\n 1 & \\\hline\n 2 & What are the entertainment assets Vivendi are trying to sell by the end of the year? \\\hline\n 3 & When did the meeting that Jean-Rene Fourtou participated in take place \\ \n \end{tabular}\n }\n \caption{Zero-shot question answering examples. The bold first sentence is the ``Custom'' query that a human manually wrote given the context. We list predictions by the three generated queries and the custom query. Some non-essential parts are removed from sentences due to space limitations. }\n \label{tab:exp:arg_examples}\n\end{table*}\n\n\subsubsection{Few-shot learning}\n\label{sec:arg:sup}\nTo the best of our knowledge, we are the first to do few-shot argument detection without using external semantic role labeling tools. We show our results in Table~\ref{tab:few_shot_arg_results}.\n\nIn Table~\ref{tab:few_shot_arg_results}, we report argument prediction scores based both on event predictions from the TE model of Section~\ref{sec:event:sup} and on ground-truth event labels. \nErrors in event predictions propagate in the first scenario. \nWe see that the hand-written queries based on the annotation guide perform much better than template-based queries, especially when \(K\) is low. 
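For concreteness, the \(K\)-shot setup used here can be sketched as follows. The function name and the (sentence, event label) data layout are illustrative assumptions, not our actual implementation.

```python
import random
from collections import defaultdict

def sample_k_shot(examples, k, seed=0):
    """Sample at most k training examples per event type.

    `examples` is assumed to be a list of (sentence, event_label) pairs;
    this layout is an illustrative assumption, not the actual data format.
    """
    rng = random.Random(seed)  # fixed seed so episodes are reproducible
    by_label = defaultdict(list)
    for sent, label in examples:
        by_label[label].append((sent, label))
    support = []
    for label, items in sorted(by_label.items()):
        rng.shuffle(items)          # pick a random subset per event type
        support.extend(items[:k])   # at most k examples for this type
    return support
```

In the experiments, each few-shot run repeats such sampling with different seeds, which is what the reported standard deviations average over.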
\n\n\n\\begin{table*}[htbp]\n \\centering\n \\begin{tabular}{c|c|ccccccccc}\n Events & \\(K\\)-shot & \\(K=1\\) & \\(K=3\\) & \\(K=5\\) & \\(K=7\\) & \\(K=9\\) \\\\ \\hline\n \\multirow{2}*{Predicted} & QA-Temp & 1.35 \\std{2.34} & 37.27 \\std{2.28} & 44.08 \\std{0.74} & 44.74 \\std{2.87} & 46.59 \\std{0.94} \\\\\n & QA-Guide & 19.58 \\std{2.45} & 41.38 \\std{1.61} & 45.41 \\std{1.48} & 44.92 \\std{1.12} & 46.86 \\std{0.74} \\\\ \\hline\n \\multirow{2}*{Ground Truth} & QA-Temp & 1.62 \\std{2.81} & 45.49 \\std{2.46} & 52.49 \\std{1.32} & 54.44 \\std{3.19} & 56.15 \\std{1.13} \\\\\n & QA-Guide & 22.67 \\std{3.03} & 50.97 \\std{2.42} & 55.48 \\std{1.68} & 54.39 \\std{0.85} & 57.66 \\std{1.23} \\\\ \n \n \n\\end{tabular}\n \\caption{Results on few-shot argument detection. The first part predicts and compute metrics based on predicted events with the best text entailment model trained in Section~\\ref{sec:method:event}. The second part is based on ground-truth event labels. }\n \\label{tab:few_shot_arg_results}\n\\end{table*}\n\n\\subsubsection{Fully Supervised}\nFor fully supervised argument detection, we compare our approach to DMCNN~\\cite{chen2015event} and VERB-QA~\\cite{du2020event} described in Section~\\ref{sec:event:sup}.\n\nVERB-QA~\\cite{du2020event} curates the argument with a trigger-word based QA framework. Specifically, their template is: \n``\n\\texttt{[Wh-word]} is the \\texttt{Argument} in \\texttt{Trigger}?\n''\nwhere ``Wh-word'' means interrogative words such as ``What'', ``Who'' and ``When'', and ``Trigger'' is the detected trigger word. This is essentially equivalent to our QA-Trig.\nIn ~\\ref{tab:arg_sup} we report the experiment results. 
Our method with QA-Guide queries achieves the best F1-score in both settings. \n\n\begin{table}\n \small\n \begin{tabular}{l|l|ccc}\n Events & Method & Precision & Recall & F1\\ \hline\n \multirow{4}*{Pred.} & DMCNN & \textbf{62.2} & 46.9 & 53.5 \\\n & VERB-QA & 56.77 & 50.24 & 53.31 \\\n & QA-Temp & 56.24~\std{1.21} & \textbf{52.14}~\std{3.52} & 54.01~\std{2.13} \\\n \rowcolor{LightCyan}& QA-Guide & 57.69~\std{0.45} & 51.90~\std{3.39} & \textbf{54.61}~\std{1.67} \\\n \hline\n & QA-Temp & 71.06~\std{1.58} & \textbf{65.84}~\std{2.14} & 68.25~\std{0.98} \\\n \rowcolor{LightCyan}\n \multirow{1}{*}{G. Truth} & QA-Guide & \textbf{72.20}~\std{2.45} & 65.79~\std{3.09} & \textbf{68.79}~\std{0.57}\n \end{tabular}\n \caption{Fully supervised learning for argument detection. As in the few-shot setting, we report scores based on both ground-truth and predicted event labels. }\n \label{tab:arg_sup}\n\end{table}\n\n\n\subsubsection{Discussions}\n\label{sec:arg:ablation}\n\n\paragraph{Does the trigger word help?}\nA potential drawback of event detection without triggers is that it loses trigger information that could be important for the subsequent argument detection task. \nIn Section~\ref{sec:arg:zero_shot}, we saw that questions with trigger words perform slightly better than questions with only event and argument names. \nSince trigger words provide the only clue in the query about the sentence context, it is indeed reasonable that questions with triggers should perform better.\n\nHowever, this is based on gold trigger words. In real applications, predicting trigger words tends to propagate more errors than predicting sentence-level event labels.\nHence, in the supervised setting, we see that methods based on sentence-level event predictions (QA-Temp, QA-Guide) perform slightly better than VERB-QA, which predicts arguments based on predicted trigger words and questions constructed from them. 
\nWe may conclude that trigger words are non-essential to both event detection and argument detection. \n\subsection{Event Detection Experiments}\n\label{sec:event_detection}\n\hide{For event detection, we use {\texttt{BERT}}~ and {\texttt{DistilBERT}}~ models finetuned on MNLI. Since we could not find a pretrained {\texttt{BERT-Large}}~ on MNLI and finetuning it exceeds overburden our computation resources, we do not experiemnt with {\texttt{BERT-Large}}~. }\n\n\begin{table}[tbp]\n \small\n \centering\n \begin{tabular}{c|ccccc}\n & Precision~\(\uparrow\) & Recall~\(\uparrow\) & F1~\(\uparrow\) & Threshold & p-value~\(\downarrow\) \\ \hline\n Random & 5.75 & 86.28 & 10.79 & 0.157 & 0.63 \\\n MTP + TE & 12.50 & 29.09 & 17.40 & 0.825 & 0.00 \\\n MTP + QA & 15.13 & 17.05 & 16.03 & 0.849 & 0.00 \\\n QA & 1.03 & 100 & 2.03 & 0.0 & 0.00 \\\n \rowcolor{LightCyan}\n TE & 16.84 & 36.89 & \textbf{23.12} & 0.385 & 0.00 \\\n \end{tabular}\n \caption{Zero-shot learning for event detection. Precision, Recall, and F1-scores are evaluated on the test set, with the Threshold chosen to maximize the F1-score on the development set. Any instance with predicted probability greater than the Threshold is classified as positive. The \(p\)-value indicates the level of statistical significance with which the model separates ground-truth events from false events. \textit{Random} is the F1-score obtained by randomly generating scores from a uniform distribution on \([0, 1]\). }\n \label{tab:zero_shot_results}\n\end{table}\n\n\begin{table*}[tbp]\n\begin{tabular}{p{0.25\textwidth}p{0.10\textwidth}p{0.25\textwidth}p{0.25\textwidth}}\n Sentence & Ground-truth & Top candidates from vocab & Top candidates from event names \\ \hline\n Jay Garner the retired general will go into Iraq soon with his troops soon. 
& \\multirow{3}{0.10\\textwidth}{Movement: Transport} & \n just time to never what war has this today now & Attack (0.98) Injure (0.96) Transport (0.92) \\\\ \\cline{1-4} \n It would not have been necessary to fire those 17 people right away. & Personnel: End-Position & never just to time had christmas not certainly have now & Die (0.97) Injure (0.92) Trial-Hearing (0.91) \\\\ \\cline{1-4}\n \\hide{If he is willing to file a lawsuit on the basis of New York non-profit organization law, which might or might not be applicable to the USCF,}\n Why wouldn't he file a lawsuit on the basis of the USCF's violation of its own bylaws, which unquestionably ARE applicable to the USCF? & \\multirow{4}*{Justice: Sue} & just never to has what not have already having had & Trial-Hearing (0.98) Sue (0.98) Injure (0.96) Arrest-Jail (0.90) Transfer-Money (0.85) \\\\ \\hline\n\\end{tabular}\n\\caption{Examples of masked token predictions. We show the original sentence, ground-truth labels, top candidate for the [MASK] placeholder from both the entire vocabulary and only from the set of event names. For each top candidate event name, we also show their predicted probabilities. 
}\n\\label{table:mtp_examples}\n\\end{table*}\n\n\n\\begin{table*}[tbp]\n \\centering\n \\begin{tabular}{l|ccccccccc}\n \\(K\\)-shot & \\(K=1\\) & \\(K=3\\) & \\(K=5\\) & \\(K=7\\) & \\(K=9\\) \\\\\n {\\texttt{DistilBERT}}~ & 0.42~\\std{0.29} & 0.42~\\std{0.29} & 0.42~\\std{0.29} & 0.42~\\std{0.29} & 0.42~\\std{0.29} s\\\\\n TED & \\textbf{43.02} \\std{2.36} & \\textbf{51.60} \\std{2.15} & 51.51 \\std{2.59} & 56.20 (\\(\\pm\\) 3.11) & 56.83 (\\(\\pm\\) 4.36) \\\\\n TED+D (1) & 23.10 (\\(\\pm\\) 5.20) & 45.94 (\\(\\pm\\) 4.80) & 51.92 (\\(\\pm\\) 4.07) & 54.88 (\\(\\pm\\) 2.82) & 57.11 (\\(\\pm\\) 2.90) \\\\\n TED+D (5) & 17.80 (\\(\\pm\\) 4.29) & 45.76 (\\(\\pm\\) 5.17) & 52.44 (\\(\\pm\\) 2.47) & 54.68 (\\(\\pm\\) 3.55) & 59.62 (\\(\\pm\\) 2.63) \\\\\n \n \n \n \\hline\n {\\texttt{BERT}}~ & 0.70~\\std{0.40} & 0.70~\\std{0.40} & 0.42~\\std{0.29} & 0.42~\\std{0.29} & 0.42~\\std{0.29} \\\\ \n TE & 36.00 \\std{3.56} & 50.68 \\std{1.51} & 53.31 \\std{2.40} & 56.01 (\\(\\pm\\) 2.52) & 57.46 (\\(\\pm\\) 3.24) \\\\\n TE+D (1) & 25.19 (\\(\\pm\\) 3.28) & 46.94 (\\(\\pm\\) 5.99) & 53.96 (\\(\\pm\\) 2.96) & 55.33 (\\(\\pm\\) 2.99) & 59.56 (\\(\\pm\\) 1.91) \\\\\n TE+D (5) & 26.66 (\\(\\pm\\) 1.74) & 49.48 (\\(\\pm\\) 1.95) & \\textbf{54.47} (\\(\\pm\\) 3.25) & \\textbf{57.00} (\\(\\pm\\) 2.03) & \\textbf{61.75}f (\\(\\pm\\) 2.12) \\\\\n \\end{tabular}\n \\caption{\n Results on few-shot event detection. We show mean scores and standard variances on micro F1-scores. \n We tried both {\\texttt{DistilBERT}}~ and {\\texttt{BERT}}~ as the backbone model. ``TED'' means pretrained {\\texttt{DistilBERT}}~ on textual entailment, and ``TE'' means pretrained {\\texttt{BERT}}~ on the same task. \n ``+D'' means trained with event description, and the number in the following parenthesis means the max number of used sentences in the description. 
\n }\n \\label{tab:few_shot_results}\n\\end{table*}\n\n\\begin{table}[tbp]\n \\small\n \\centering\n \\begin{tabular}{l|ccc}\n Method & Precision & Recall & F1 \\\\ \\hline\n DMCNN~\\cite{chen2015event} & 75.6 & 63.6 & 69.1 \\\\ \n Delta~\\cite{lu2019distilling} & 67.30 & 69.62 & 68.44 \\\\ \n VERB-QA~\\cite{du2020event} & 71.12 & 73.70 & 72.39 \\\\\n \n \\hline\n {\\texttt{BERT}}~~\\cite{devlin2018bert_} & 71.52 \\std{0.19} & 70.48 \\std{1.65} & 70.99 \\std{0.82} \\\\ \n Delta~\\cite{lu2019distilling} & 70.97 & 70.78 & 70.88 \\\\ \n DS-DMCNN~\\cite{liu2019event} & \\textbf{75.7} & 66.0 & 70.5 \\\\ \n \\rowcolor{LightCyan}\n TE & 73.28 \\std{2.13} & 76.29 \\std{1.30} & \\textbf{75.43} \\std{1.48} \\\\\n TE-D (1) & 73.04 \\std{3.21} & 75.82 \\std{1.41} & 74.39 \\std{2.29} \\\\ \n TE-D (5) & {72.95} \\std{3.03} & \\textbf{78.20} \\std{3.89} & {75.38} \\std{0.53} \n \\end{tabular}\n \\caption{Supervised results. The first group of methods are based on trigger detection, while the second group predicts events without triggers.}\n \\label{table:supervised_eventsp}\n\\end{table}\n\n\\begin{figure*}[tbp]\n \\centering\n \\subfigure[Without description.]{\n \\label{fig:distil_vs_bert:nodesc}\n \\includegraphics[width=0.3\\textwidth]{figs\/distil_vs_bert_no_desc.pdf}\n }\n \\subfigure[With description, one sentence. ]{\n \\label{fig:distil_vs_bert:desc1}\n \\includegraphics[width=0.3\\textwidth]{figs\/distil_vs_bert_desc_1.pdf}\n }\n \\subfigure[With full description. ]{\n \\label{fig:distil_vs_bert:desc5}\n \\includegraphics[width=0.3\\textwidth]{figs\/distil_vs_bert_desc_5.pdf}\n }\n \\caption{{\\texttt{DistilBERT}}~ vs {\\texttt{BERT}}~ on event detection in few-shot settings, with or without descriptions. Green lines with square marks are scores with {\\texttt{BERT}}~ as backbone model, and blue lines with circle marks with are {\\texttt{DistilBERT}}~. Strips represent confidence intervals. 
}\n \\label{fig:distil_vs_bert}\n\\end{figure*}\n\n\\subsubsection{Probing without Supervision}\n\\label{sec:event:zero_shot}\nWe first experiment on predicing events without any training, but only exploiting potential knowledge in pretrained models alone. \n\nConcretely, we mostly experiment with {\\texttt{BERT}}~ and the smaller {\\texttt{DistilBERT}}~ models pretrained on textual entailment prediction dataset, MNLI~\\cite{mnli}\\footnote{We used https:\/\/huggingface.co\/textattack\/bert-base-uncased-MNLI and https:\/\/huggingface.co\/ishan\/distilbert-base-uncased-mnli.}\n\nWe report performance on these following methods:\n\\begin{itemize}\n \\item {Random}. This method simply generates random scores from uniform distribution on \\([0, 1]\\) as predictive probability as a refernce. \n \\item Textual entailment (TE). This is the our approach introduced in Section~\\ref{sec:method:event} with {\\texttt{BERT}}~.\n \\item Question answering (QA). We ask models question like ``What is the trigger for \\texttt{[EVENT]}?'' and let the model do span detection of the trigger word, following the same method as in ~\\ref{sec:method:arg}. We convert the prediction to sentence-level detection by predicing an event if any span with scores greater than threshold is predicted for that event. For this particularly model, we use {\\texttt{BERT}}~ pretrained on SQuAD~\\cite{rajpurkar2016squad}. \n \\item Masked token prediction (MTP). Instead of filling in event labels in query templates, we fill in \\([MASK]\\) special token as a placeholder and let {\\texttt{BERT}}~ model predict what this token might be. \n Events are classified based on the event label name's average score predicted by {\\texttt{BERT}}~ to replace \\([MASK]\\). \n For this model, the original pretrained {\\texttt{BERT}}~ is used~\\cite{devlin2018bert_}.\n The scores here are obtained by sigmoid function on individual logits, instead of softmax across entire vocabulary, to avoid arithmetic underflow. 
Specifically,\n \begin{itemize}\n \item \texttt{MTP+TE} predicts the \([MASK]\) token based on the textual entailment formulation. \n \item \texttt{MTP+QA} predicts the \([MASK]\) token based on a question answering formulation, where we attach a question like ``Did any event about \texttt{[MASK]} happen?'' before the input sentence. \n \end{itemize}\n\end{itemize}\n\nIn common binary classification settings, one predicts positive when the predicted probability is greater than \(0.5\). \nHowever, when the model is not trained on the data in question, although it may contain a certain bias towards the task, it is not necessarily \emph{calibrated}, i.e., it might predict favorably towards the right answer, but not necessarily to the extent that the predicted probability is always larger than \(0.5\). \nHence, we manually choose a threshold \(p_0\), \nswept from 0 to 1 with step 0.01, that maximizes the F1-score on the development set; we use this threshold to separate the predicted negative and positive classes and report the performance on the test set. \n\nTable~\ref{tab:zero_shot_results} shows the results for zero-shot event detection. We show the following metrics: Precision, Recall, F1, and the optimal threshold on the development set. \nIn addition, for each of the methods, we performed a statistical test of whether the distribution of scores on the ground-truth events is significantly different from the distribution of scores on false events. We use the standard Kolmogorov-Smirnov two-sample test~\cite{massey1951kolmogorov,hodges1958significance} and report the \(p\)-value, the lower the better. \n\nFrom the results, we clearly see that entailment is a more natural formulation for the {\texttt{BERT}}~ model, as it carries more prior knowledge than the question answering framework. If we use a question answering model to predict trigger spans, the best performance is obtained when \(p_0=0.0\), effectively providing no useful information at all. 
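The development-set threshold sweep described above can be sketched in a few lines; the helper names are illustrative assumptions, not our actual code.

```python
def f1_at_threshold(scores, labels, t):
    """Binary F1 treating scores >= t as positive predictions."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
    if tp == 0:
        return 0.0
    p, r = tp / (tp + fp), tp / (tp + fn)
    return 2 * p * r / (p + r)

def best_threshold(dev_scores, dev_labels):
    """Sweep p0 from 0 to 1 in steps of 0.01, as described in the text,
    and keep the value maximizing dev-set F1."""
    candidates = [i / 100 for i in range(101)]
    return max(candidates,
               key=lambda t: f1_at_threshold(dev_scores, dev_labels, t))
```

The chosen threshold is then frozen and applied unchanged to the test-set scores, which is how the Threshold column in Table~\ref{tab:zero_shot_results} is produced.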
\nWe believe this is because trigger words are not natural ``answers'' to questions; events are, but event names are not necessarily present in the text where events occur. Textual entailment does not rely on this, and is hence better suited than question answering for event detection.\n\nIn Table~\ref{table:mtp_examples} we showcase some examples of wrong predictions made by masked token prediction based on textual entailment. \nWe see that when {\texttt{BERT}}~ does not predict the correct event as the one with the top probability, it still chooses events that are semantically related to the sentence, although the corresponding event might not have actually happened. \n\n\n\subsubsection{Few-shot Learning}\n\label{sec:event:sup}\nFollowing zero-shot learning, we naturally want to test the model's ability to transfer knowledge to event detection given a few examples of each event type, and whether the model can build, from a few samples, the link between its semantic knowledge and reasoning about events. \n\nTable~\ref{tab:few_shot_results} shows the experiment results on few-shot learning. \nWe would like to highlight the following observations:\n\begin{enumerate}\n \item {\texttt{BERT}}~ and {\texttt{DistilBERT}}~ on their own are not capable of few-shot learning for our tasks, converging to trivial solutions. \n \item The {\texttt{BERT}}~ model does not significantly outperform {\texttt{DistilBERT}}~ in few-shot settings. When \(K=1\), {\texttt{DistilBERT}}~ even works better in textual entailment without descriptions. \n \item {\texttt{BERT}}~ becomes advantageous as the amount of training data increases.\n \item Both {\texttt{BERT}}~ and {\texttt{DistilBERT}}~ cannot efficiently utilize description information when \(K\) is small, resulting in a lower F1-score. 
When \\(K\\) increases, the performance gradually increases and matches the model without description. \n\\end{enumerate}\nGenerally, in few-shot settings, using \\({\\texttt{DistilBERT}}~\\) could be a better choice, balancing performance and efficiency. \nAs a comparison, \\cite{peng2016event} achieved 67.6. They prefilled possible argument slots for candidate events using semantic role labeling tools, effectively summarizing event structures, and this piece of information is not readily available to {\\texttt{BERT}}~ model. In the future, it would be an interesting direction to investigate if it is possible to use {\\texttt{BERT}}~ model to more effectively extract event structures. \n\n\n\\hide{\nIn Table~\\ref{tab:few_shot_results}, we show few-shot learning results of different models with or without event description. \nEvent descriptions in the annotation guide~\\cite{ace05guide} can contain as many as 5 sentences, exceeding the text length of the sentence greatly. \nThis makes harnessing the information in description hard when the number of training data is small. \nTherefore, in addition to training with full event descriptions, we truncate event descriptions to contain their the first sentences in hope of making the model easier to learn how to use information from descriptions. \nWe make the following observations. \nFirst, when the training samples are too small (from \\(K=1\\) to \\(K=5\\)), adding event descriptions undermines model performance, especially in by a large margin when \\(K=1\\). \nFurthermore, when training with only the first sentence in description, the performance drop is much lower. \nHowever, when we have more samples per class, the models with event descriptions gradually outperform those without, and models with full descriptions outperform with only one sentence. 
\nFurthermore, when trained without descriptions, one observes counter-intuitively that the standard variance across multiple experiments increase with the number of samples. \n}\n\n\subsubsection{Fully Supervised.}\n\paragraph{Baselines.}\nBecause the commonly used dataset for event detection, ACE05, is not freely available, only a few works are open source with code that is complete and immediately runnable given the data. This makes it difficult to evaluate traditional methods in terms of sentence-level event detection performance.\nHence, for many of them, we can only report the results from the original papers. \n\nDuring our attempts to reproduce the experiments, even with official code, we still struggled to produce results consistent with what was reported in the papers. The same phenomenon was observed by \cite{orr2018event}, where metrics can drop by 5 to 10 points in rigorously designed testing compared with reported results. \nBased on these observations, we suspect such inconsistencies result from a lack of adherence to standard practices in evaluation schemes, data processing, training\/dev\/test splits, and model selection, all of which can influence the variance and validity of reported performance.\n\n\n\hide{\nIt is worth noting that even when reproducing results for trigger-based detection, as is the setting of most previous works, we struggle to reproduce their reported results consistently using official code because of the inconsistencies and sometimes dubious practices we observed in terms of model selection, prediction and evaluation schemes, data processing, and the split of training\/development\/test sets, which all could influence the variance and validity of reported performance. \nSimilar observation was made by ~\cite{orr2018event}, where they found that the performance of most previous works drop by a large margin, from 5 to even 10 points, in rigorous evaluation settings. 
\nWe believe it is indeed necessary in this field to set standards on these matters\nif any further progress in this field is to be made. }\n\nFor these reasons, we can only vouch for the validity of the performance of methods reported with standard deviations; the same observations and conclusions extend to argument detection as well. \nThe following baselines are used as comparable sentence-level event detection methods without triggers: \n\begin{itemize}\n \item {\texttt{BERT}}~~\cite{devlin2018bert_}. This is based on the original pretrained {\texttt{BERT}}~ model, where event classification is done by learning a linear classifier on top of the pooled sentence embeddings. \n \item Delta~\footnotemark~\cite{lu2019distilling}. This method is based on an adversarial framework where the model learns both discriminative and generalizable information. Although it is originally a trigger-based event detection method, we evaluate it here in terms of sentence-level scores. \n \item DS-DMCNN~\cite{liu2019event}. This method performs event detection without triggers, based on \cite{chen2015event}. \n\end{itemize}\n\nThe following baselines are trigger-based event detection methods, which we report here as a reference:\n\begin{itemize}\n \item DMCNN~\cite{chen2015event}. This method proposes a dynamic pooling layer for CNNs for event detection. \n \item Delta\footnotemark[\value{footnote}]~\cite{lu2019distilling}. This is the same method as mentioned above. \n \item VERB-QA~\cite{du2020event}. This is a QA-based event detection framework where ``verb'' is used as a query to hint the model for trigger word detection and classification. \n\end{itemize}\n\footnotetext{This method was originally proposed to detect and classify triggers. We report the performance of their method on both trigger-based event detection and sentence-level event detection, where the latter is computed by ignoring whether the span matches. 
The performance of this method is reported based on our reproduction using the authors' published code. The original paper reported an F1-score of about 74.7 on trigger detection and classification.}\n\nOur methods include TE, TE-D (1), and TE-D (5), which are the same as in the few-shot learning setting with the {\texttt{BERT}}~ model. \nWe observed a significant performance drop with {\texttt{DistilBERT}}~, with an average F1-score of only 70.04. We therefore focus on the {\texttt{BERT}}~ model only, which apparently utilizes more data more effectively. \nAs Table~\ref{table:supervised_eventsp} shows, our model achieves the best performance among all baselines. \n\n\hide{\n\begin{table}\n \centering\n \begin{tabular}{l|l|ccc}\n Setting & Method & Precision & Recall & F1 \\ \hline\n \multirow{4}{*}{w\/ triggers} & DMCNN~\cite{chen2015event} & 75.6 & 63.6 & 69.1 \\\n & Delta~\cite{lu2019distilling} & 67.30 & 69.62 & 68.44 \\\n & VERB-QA~\cite{du2020event} & 71.12 & 73.70 & 72.39 \\\n & {\texttt{BERT}}~~\cite{devlin2018bert_} & & & \\\n \hline\n \multirow{6}{*}{w\/o triggers} & {\texttt{BERT}}~~\cite{devlin2018bert_} & & & \\\n & DS-DMCNN~\cite{liu2019event} & 75.7 & 66.0 & 70.5 \\\n & Delta~\cite{lu2019distilling} & 70.97 & 70.78 & 70.88 \\\n & TE & & & \\\n & TE-D (1) & 73.04 \std{3.21} & 75.82 \std{1.41} & 74.39 \std{2.29} \\\n & TE-D (5) & 72.95 \std{3.03} & 78.20 \std{3.89} & 75.38 \std{0.53} \n \end{tabular}\n \caption{Supervised results.}\n \label{tab:my_label}\n\end{table}\n}\n\n\subsubsection{Discussions}\n\label{sec:event:ablation}\n\paragraph{Effect of query structure. }\nIn the query templates we infused two kinds of knowledge: 1) information about the event detection task, by providing a statement sentence, and 2) information about the labels (event types), by filling event names into the statement. 
\nA natural question is whether the statement structure is actually helpful, or whether the event names alone are enough.\n\n\paragraph{Do event descriptions help?}\nBased on the experiments in the few-shot (Table~\ref{tab:few_shot_results}) and fully supervised (Table~\ref{table:supervised_eventsp}) settings, \nwe see that event descriptions can be a burden to model learning when the amount of data is extremely small. While with increased data size, and in the fully supervised setting, the model with descriptions achieves performance comparable to the model without descriptions, it does not outperform the latter. \n\nHowever, event descriptions do provide valuable information about what events are, and are sufficient on their own for a human annotator. \nThis could suggest that the {\texttt{BERT}}~ model cannot learn more from descriptions than it can learn from the data, and that more work should be done on improving language models' ability to ``read'' a ``description'' or ``manual''. \n\n\paragraph{{\texttt{BERT}}~ vs. {\texttt{DistilBERT}}~}\nIn Figure~\ref{fig:distil_vs_bert} we compare the performance of {\texttt{BERT}}~ and {\texttt{DistilBERT}}~. \nIt is clear from the figures that in few-shot settings, there is no statistically significant difference in performance, except for \(1\)-shot learning, where {\texttt{DistilBERT}}~ is better without descriptions while {\texttt{BERT}}~ is better with them. \nIn fully supervised learning, the difference is significant: {\texttt{DistilBERT}}~ achieves an average F1-score of only 70 while {\texttt{BERT}}~ achieves 75. \nTherefore, when training labels are extremely scarce, it is sufficient to use {\texttt{DistilBERT}}~ to learn effectively from these samples with less memory consumption and computational burden. \nIn fully supervised learning, however, {\texttt{BERT}}~'s ability to utilize massive data is significantly better than {\texttt{DistilBERT}}~'s. 
\n\n\\section{Experiments}\n\\label{sec:exp}\nWe report in this section the experiemnt results on event detection.\nIn Section~\\ref{sec:data} we briefly introduce the dataset, ACE 2005, that we experiment on. \nFor both of the two tasks, \nfirst, we probe the pretrained language model's ability to infer events without specific training in Section~\\ref{sec:event:zero_shot},~\\ref{sec:arg:zero_shot}.\nThen, in Section~\\ref{sec:event:sup},\\ref{sec:arg:sup}, we report results when the model is finetuned on event detection and argument detection.\nAblation studies were reported separately in Section~\\ref{sec:event:ablation},~\\ref{sec:arg:ablation}.\n\n\\subsection{Data}\n\\label{sec:data}\nWe use ACE05 Multilingual Training Corpus~\\cite{doddington2004automatic} for event detection and argument detection. \nIt contains 33 event types, 28 argument types, and sentences that come with various documents. \n\nACE2005 is observed to have domian shifts between its official training, development, and test set split: the training and dev. set contain informal documents such as web logs, while the test set is a collection of newswire articles~\\cite{doddington2004automatic,orr2018event}. \n\nWe follow the data split used by ~\\cite{chen2015event}. Table~\\ref{tab:data_stat} gives data statistics. \n\n\\begin{table}\n \\begin{tabular}{l|ccc}\n Data split & \\# sentences & \\# events & \\# arguments \\\\ \\hline\n Train & 14626 & 4309 & 7702 \\\\ \n Dev & 870 & 492 & 923 \\\\ \n Test & 708 & 422 & 887 \n \\end{tabular}\n \\caption{Data statistics for training, development, and test set. We list here number of sentences, events, and arguments. 
}\n \\label{tab:data_stat}\n\\end{table}\n\n\\input{exp_event}\n\\input{exp_arg}\n\\section{Few-shot learning}\n\\label{sec:few_shot}\nSince that unlike in many other tasks, pretrained language models does not significantly outperform random guessing as explained in Section~\\ref{sec:zero_shot}, \nwe naturally want to test the model's ability to transfer to event detection task with a few learning examples and if the model can build upon a few samples the link between model's semantic knowledge and the reasoning of events. \n\nConcretely, we experiment on TE and PQ-QA and see if the \n\\newcommand{\\(\\pm\\)}{\\(\\pm\\)}\n\\newcommand{\\std}[1]{\\((\\pm#1)\\)}\n\n\\begin{table*}[htbp]\n \\centering\n \\begin{tabular}{c|cccccc}\n \\(K\\)-shot & \\(K=1\\) & \\(K=3\\) & \\(K=5\\) & \\(K=7\\) & \\(K=9\\) & K=N\\\\ \\hline\n {\\texttt{BERT}}~ & & & & & \\\\\n TE & 43.02 \\std{2.36} & 51.60 \\std{2.15} & 51.51 \\std{2.59} & 56.20 (\\(\\pm\\) 3.11) & 56.83 (\\(\\pm\\) 4.36) & \\\\\n TE+D (1) & 23.10 (\\(\\pm\\) 5.20) & 45.94 (\\(\\pm\\) 4.80) & 51.92 (\\(\\pm\\) 4.07) & 54.88 (\\(\\pm\\) 2.82) & 57.11 (\\(\\pm\\) 2.90) & \\\\\n TE+D (5) & 17.80 (\\(\\pm\\) 4.29) & 45.76 (\\(\\pm\\) 5.17) & 52.44 (\\(\\pm\\) 2.47) & 54.68 (\\(\\pm\\) 3.55) & 59.62 (\\(\\pm\\) 2.63) & \\\\\n PQ-QA & 40.23 \\std{1.30} & 48.88 \\std{3.27} & 50.05 \\std{1.40} & 54.59 \\std{1.88} & 55.77 \\std{3.83} & \\\\\n PQ-QA+D (1) & 11.81 \\std{13.45} & 47.86 \\std{2.27} & 49.72 \\std{4.57} & 52.89 \\std{3.36} & 54.40 \\std{4.41} & \\\\\n PQ-QA+D (5) & 6.34 \\std{13.09} & 42.60 \\std{3.01} & 49.34 \\std{2.86} & 52.48 \\std{3.79} & 54.97 \\std{2.38} &\n \\end{tabular}\n \\caption{Results on few-shot learning. Here, we show the average micro F1-scores and standard deviations for each model and different number of training samples per class. 
TE and PQ-QA stand for textual entailment and polar-question question answering; ``+D'' means with event descriptions, and the number in parentheses is the maximum number of sentences used from the description. }\n \label{tab:few_shot_results}\n\end{table*}\n\nIn Table~\ref{tab:few_shot_results}, we show few-shot learning results of different models with or without event descriptions. \nEvent descriptions in the annotation guide~\cite{ace05guide} can contain as many as 5 sentences, greatly exceeding the length of the input sentence. \nThis makes harnessing the information in descriptions hard when the number of training samples is small. \nTherefore, in addition to training with full event descriptions, we truncate event descriptions to their first sentence, in the hope of making it easier for the model to learn how to use information from descriptions. \n\nWe make the following observations. \nFirst, when the number of training samples is too small (from \(K=1\) to \(K=5\)), adding event descriptions undermines model performance, by a large margin when \(K=1\). \nFurthermore, when training with only the first sentence of the description, the performance drop is much smaller. \nHowever, when we have more samples per class, the models with event descriptions gradually outperform those without, and models with full descriptions outperform those with only one sentence. \nFurthermore, when trained without descriptions, one observes, counter-intuitively, that the standard deviation across multiple experiments increases with the number of samples. \n\n\section{Introduction}\n\nEvent extraction is one of the most important and challenging tasks in information extraction.\nEvent extraction consists of two subtasks: 1) The first task, event detection, is\nto detect whether natural language text describes the\noccurrence of certain events. 
2)\nThe second task, argument detection, aims to find the attributes and participants, such as ``when'', ``where'', and ``who'', of the events.\nTypically, both tasks are formulated as \\textit{supervised sequence labeling} problems:\nevent detection is usually formulated as detecting the trigger words or phrases that best ``indicate'' an event;\nand argument detection is formulated as identifying entities that serve as arguments, such as ``who'', ``when'', and ``where'' for that event type.\nVarious sequence labeling models~\\cite{yang2019exploring,orr2018event,hong2011using,liao2010using,chen2015event,chen2017automatically,nguyen2015event,nguyen2016joint,wang2019adversarial,sha2019jointly,yang2016hierarchical,mehta2019event,nguyen2018graph,lu2019distilling,du2020event}\nhave been proposed, which can be trained\non an annotated corpus and then used for recognizing event triggers and arguments at test time.\n\nFor example, the following sentence is taken from the ACE05 event extraction benchmark~\\cite{doddington2004automatic}:\n\\begin{quotation}\n ``Orders went out today to \\underline{deploy} 17,000 US soldiers in the \\textit{Persian Gulf region.} ''\n\\end{quotation}\nIn the above example, a ``Transport'' event happened, \\underline{deploy} is annotated as its trigger word, \\textit{Persian Gulf region} is the\n\\textit{Destination} argument of the event, and \\textit{17,000 US soldiers} is the \\textit{Artifact} (interpreted as passengers). \n\nHowever, formulating event extraction as supervised sequence labeling has several drawbacks.\nFor event detection,\nthe annotation of event trigger words is of high variance. 
As the definition of a ``trigger word'' is the word that most ``clearly'' expresses the occurrence of the event~\\cite{ace05guide,liu2019event}, it is inherently noisy and time-consuming to label, especially in complex documents~\\cite{liu2019event}.\nThis requires developing a specific set of rules governing ambiguity during annotation. Even so, the model is not necessarily able to recognize, or benefit from, such knowledge.\nThe annotation of arguments suffers from the same problem, as the span can be arbitrary (is it ``U.S. soldiers'' or ``17,000 U.S. soldiers'', or simply ``soldiers''? All of them are\nvalid answers).\nSome existing approaches~\\cite{liao2010using,sha2019jointly,duan2017exploiting,yan2019event,liu2018jointly}\nattempt to resolve these issues by introducing more complex structural features (such as dependency parses and document-level information) or by using more complex neural sequence\nmodels.\nBut as a result, learning such complex models becomes highly label-hungry, even when powerful pretrained language models are used~\\cite{wang2019adversarial,yang2019exploring,du2020event}.\nFurthermore, the learned model can easily overfit and be vulnerable to domain shift.\nAs reported in \\cite{orr2018event}, when rigorously tested with multiple random initializations, many works suffer a severe performance drop compared with the best performance reported in their papers.\n\nWe propose to formulate event extraction as a machine reading comprehension task.\nThe general format of machine reading comprehension is that, given a \\emph{query} and a context sentence, the algorithm finds the answer to the query conditioned on the context sentence.\nSpecifically, we formulate event detection as a textual entailment prediction task. 
\\hide{, where given each sentence as the context,\nthe model predicts if an event occurs by predicting if a description of the event is entailed by the sentence, without detecting or using the event's trigger word.}\nThe underlying intuition is that if a sentence describes an event, then a statement that this event has happened is a natural entailment of that sentence.\nCompared with formulating event extraction as a sequence labeling problem,\nthe benefit of such an entailment formulation is twofold.\nFirst,\nthe model can distill knowledge from pretrained language models, which is appealing for few-shot or even zero-shot settings. Second, it avoids\nhaving to pinpoint event triggers and can flexibly exploit complex event semantics in the sentence.\n\n\n\\begin{figure}[t]\n \\includegraphics[width=0.9\\linewidth]{figs\/model}\n \\caption{Illustration of our framework. \n Given an input sentence, our framework predicts an event by predicting if a statement about the event is a logical entailment of the sentence, using a textual entailment prediction module. \n To extract arguments, we construct natural language questions about the argument and extract the answer from the sentence with a question answering module. \n }\n \\label{fig:illustration}\n\\end{figure}\n\nWe formulate argument extraction as a question answering problem. Finding an argument to an event can be naturally posed as finding an answer to a\nquestion like ``Who was the attacker?'' or ``Where is the destination of the transport?''. \nIn principle, argument extraction could be formulated as a textual entailment problem as well, but this would require generating a statement for each event-argument-entity pair, which means pre-identifying entities, and training could be inefficient. 
\nBy reframing argument extraction as QA, we can again transfer knowledge from pretrained models on how to find answers in text.\nIn Section~\\ref{sec:method}, we give detailed descriptions of how to pose event extraction tasks as reading comprehension tasks, and how to solve them.\nAn illustration of our model can be found in Figure~\\ref{fig:illustration}.\n\nWe conducted experiments and analyzed model performance in zero-shot, few-shot, and fully-supervised settings in Section~\\ref{sec:exp}. Our major\nfindings from the experiments are: (1) Our proposed approach of formulating event extraction as reading comprehension can\neffectively distill knowledge by probing pre-trained models, and\nachieve strong performance for zero-shot event detection without training data. (2) The performance of our model improves substantially when the model is\nfed a small amount of labeled data. In fully-supervised settings, it achieves state-of-the-art performance. (3) We found that trigger words do not\nimprove performance for either event detection or argument detection, which justifies our design of discarding trigger words for event detection.\n\nThe major contributions of this paper are as follows:\n\\begin{enumerate}\n \\item We propose a reading comprehension framework for event and argument detection. 
Compared with existing work, our framework is more capable of\n exploiting pre-trained reading comprehension models and event semantics.\n \\item We are the first to propose a reading comprehension framework for event detection and argument detection without trigger words.\n \\item\nBy probing and fine-tuning\n pretrained reading comprehension models,\n our approach achieves much stronger performance for\n low-shot event extraction compared with the baselines; and it\n achieves\n state-of-the-art\nperformance for fully-supervised event detection on the ACE 2005 benchmark.\n\\end{enumerate}\n\n\\section{Our Approach}\n\\label{sec:method}\nIn this section, we introduce our framework for solving event extraction tasks by formulating them as reading comprehension problems. We cast\nevent detection as a textual entailment prediction problem based on the intuition that a sentence should entail an event if the latter is described in\nthat sentence. We cast argument detection as a question answering problem since questions can be naturally asked about the specific arguments of\nan event. In the following, we first provide high-level descriptions of such a framework in Section \\ref{sec:method:event} and Section \\ref{sec:method:arg}. We\ndescribe how we generate queries for these two subtasks in Section \\ref{sec:query_generation}, and finally detail our model along with its probing and\ntraining procedures in Section \\ref{sec:method:bert_backbone}.\n\n\n\\subsection{Event Detection as Textual Entailment}\n\\label{sec:method:event}\nTo pose event detection as textual entailment prediction, given a sentence \\(X\\) and an event type,\nwe first construct a statement \\(Q\\) that claims the event type has happened. The task of determining whether an event has\nhappened in \\(X\\) is then translated to judging if \\(Q\\) is a natural entailment of \\(X\\). For example:\n\\begin{quotation}\n (\\(X\\)) David is leaving to become chairman of London School of Economics. 
(\\(Q\\)) \\textit{Hence, an event about \\texttt{Start-Position} occurred.}\n\\end{quotation}\nIn the above, the first sentence is the original input sentence for event detection, and the second, italic sentence is a statement constructed for the queried event type, \\texttt{Start-Position}.\nIf the statement is entailed by the first sentence, we predict that the event has happened.\nWe will describe in detail how to construct such \\(Q\\) statements in Section~\\ref{sec:query_generation}.\n\n\\subsection{Argument Detection as Question Answering}\n\\label{sec:method:arg}\nTo convert argument detection to question answering, we construct a ``Wh''-question \\(Q\\) (a question that starts with an interrogative word, such as ``What'', ``Who'', or ``Where'') about the concerned event and argument type, like the following:\n\\begin{quotation}\n \\textit{\n (\\(Q\\)) Where did the \\texttt{Meeting} take place?\n }\n (\\(X\\)) But the Saint Petersburg summit ended without any formal declaration on Iraq.\n\\end{quotation}\nHere, the first, italic sentence is a question about the event \\texttt{Meet}, and the queried argument type is ``Place''.\nWe construct questions from fixed templates as well as manually written question forms for each possible combination of events and argument types. See Section~\\ref{sec:query_generation} for details.\n\n\n\\subsection{Query Generation}\n\\label{sec:query_generation}\n\nWe introduced event extraction and machine reading comprehension tasks\nin Section~\\ref{sec:tasks},\nand how the former can be transformed into the latter in Sections~\\ref{sec:method:event} and~\\ref{sec:method:arg}. \nIn this section, we describe how to generate queries for the two event extraction tasks, and how they are combined with input sentences. \nAfter this, all preparations for the model's computation flow are in place. \n\nIt is crucially important \nto generate high-quality queries, i.e. 
statements about events for entailment-prediction-based event detection, and questions about events and argument types for question-answering-based argument detection, because the quality of queries determines 1) how well the model distills information about tasks and labels from pretrained weights, and 2) how well the model connects the semantics of events, arguments, and the sentence context. \n\n\\vpara{Statements for events.}\nWe assume that each event type has a label name. To construct a statement about an event, we simply fill in the label name in the following template:\n\\begin{quotation}\n Hence, an event about \\texttt{[EVENT]} happened.\n\\end{quotation}\nwhere \\texttt{[EVENT]} is the placeholder for event names.\n\nThis statement is supposed to serve as a guide to distill knowledge from the language model about both the \\emph{task} and the \\emph{label}.\nFor the \\emph{task}, i.e. event detection, an ideal model should be informed that it is supposed to recognize ``what has happened''.\nFor the \\emph{label}, the model should recognize clues on the semantics of the event from its label name.\nIn addition to the label name, we expect that a natural language description of the event would further help us distill knowledge from the model.\nSo, we append an optional piece of description of the event, acquired from the data's annotation guide~\\cite{ace05guide}, to the above statement. Some examples of event descriptions are given in Table~\\ref{tab:event_desc}.\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{c|p{60mm}}\n Event & Description \\\\ \\hline\n Be-Born & \\textit{A \\textbf{Be-Born} Event occurs whenever a PERSON Entity is given birth to.} Please note that we do not include the birth of other things or ideas. 
\\\\ \\hline\n Marry & \\textit{\\textbf{Marry} events are official Events, where two people are married under the legal definition.} \\\\ \\hline\n Divorce & \\textit{A \\textbf{Divorce} event occurs whenever two people are officially divorced under the legal definition of divorce. We do not include separations or church annulments. }\\\\ \\hline\n Transfer-Money & \\textit{\\textbf{Transfer-Money} events refer to the giving, receiving, borrowing, or lending of money when it is not in the context of purchasing something.} The canonical examples are: (1) people giving money to organizations (and getting nothing tangible in return); and (2) organizations lending money to people or other orgs. \\\\\n \\end{tabular}\n \\caption{Example event descriptions.}\n \\label{tab:event_desc}\n\\end{table}\n\n\\vpara{Questions for events and argument types.}\nSimilar to events, each argument type has a label name as well.\nWe have two options for constructing a question for a pair of event and argument.\nFirst and most straightforwardly,\nwe could use a fixed template similar to the event statement:\n\\begin{quotation}\n Who or what participated as role\n \\texttt{[ARGUMENT]} in the event \\texttt{[EVENT]}?\n\\end{quotation}\nwhere argument names and event names are filled into their respective slots.\n\nThis rather inflexible approach does not provide much information on the relation between events and arguments,\nsince we assume the same query structure for all event-argument pairs. It is more natural to ask questions differently, specific to the concerned event and argument. For example, ``What is the Person in event Start-Position?'' is a lot less natural than ``Who started a new position?''.\nSo, we manually composed a question for every pair of event and argument based on the descriptions in the annotation guide~\\cite{ace05guide}. We do so by converting the description text to a question with minimal edits while ensuring that the concerned argument type appears in the question. 
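Both query types can be produced with a small amount of glue code. The sketch below is illustrative only: the helper names and the dictionary of manual questions are ours; the two templates are the ones given in this section.

```python
# Illustrative sketch of query generation. The template strings follow the
# paper; the helper names and the (partial) manual-question table are ours.

EVENT_STATEMENT_TEMPLATE = "Hence, an event about {event} happened."

# Hand-written questions for specific (event, argument) pairs; a generic
# template is used as a fallback for pairs without a manual question.
MANUAL_QUESTIONS = {
    ("Marry", "Person"): "Who are the married person?",
    ("Attack", "Attacker"): "Who is the attacker?",
}
FALLBACK_QUESTION_TEMPLATE = (
    "Who or what participated as role {argument} in the event {event}?"
)

def event_statement(event: str, description: str = "") -> str:
    """Statement claiming the event happened, optionally followed by an
    event description taken from the annotation guide."""
    statement = EVENT_STATEMENT_TEMPLATE.format(event=event)
    return f"{statement} {description}".strip() if description else statement

def argument_question(event: str, argument: str) -> str:
    """Manually written question if available, otherwise the fixed template."""
    return MANUAL_QUESTIONS.get(
        (event, argument),
        FALLBACK_QUESTION_TEMPLATE.format(argument=argument, event=event),
    )
```
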
For example, from ``The people who are married.'' we construct ``Who are the married person?''; ``People'' is changed to ``Person'' to match the concerned argument type name. \nExamples\nare given in Table~\\ref{tab:arg_questions}. \n\\begin{table}[h]\n \\centering\n \\begin{tabular}{c|c|p{50mm}}\n Event & Arg. Type & Question \\\\ \\hline\n \\multirow{2}{*}{Marry} & Person & Who are the married person? \\\\\n & Where & Where does the marriage take place?\\\\ \\hline\n \\multirow{2}{*}{Attack} & Attacker & Who is the attacker? \\\\\n & Target & Who is attacked?\n \\end{tabular}\n \\caption{Example questions for event-argument pairs. }\n \\label{tab:arg_questions}\n\\end{table}\n\n\\subsection{Model Details}\n\\label{sec:method:bert_backbone}\nIn the previous section, we described how to transform event extraction tasks into reading comprehension tasks.\nIn this section, we detail our model for solving these tasks, as well as how the model works in zero-shot and supervised settings.\n\n\\vpara{{\\texttt{BERT}}~ masked language model. }\nFirst, we introduce the {\\texttt{BERT}}~ model, which outputs hidden representations for tokens and sentences and serves as the backbone for textual entailment prediction and question answering.\n\n{\\texttt{BERT}}~~\\cite{devlin2018bert_} is a masked language model that consists of deep multi-head attention layers.\nGiven a pair of input sentences \\(X^{(1)}, X^{(2)}\\), {\\texttt{BERT}}~ first runs the WordPiece~\\cite{schuster2012wordpiece} tokenizer to tokenize both sentences into sequences of token ids: \\( [ x_{1}^{(i)}, \\ldots, x_{n_i}^{(i)} ], i=1,2 \\). 
Then, the two sentences are concatenated into one sequence:\n\\begin{equation}\n S = [[\\mathrm{CLS}], x^{(1)}_1,\\ldots, x^{(1)}_{n_1}, [\\mathrm{SEP}], x_1^{(2)}, \\ldots, x^{(2)}_{n_{2}}, [\\mathrm{SEP}]]\n\\end{equation}\nwhere \\([\\mathrm{CLS}]\\) is a special token at the beginning that is supposed to aggregate information from the two sentences, and \\([\\mathrm{SEP}]\\) is another special token that marks the separation between sentences. This combined sequence of tokens is the input to the {\\texttt{BERT}}~ model.\n\n\\vpara{Textual entailment for event detection.}\nTo perform textual entailment prediction,\nwe attach a linear classifier on top of the sentence embedding.\nTo obtain the sentence embedding, we use {\\texttt{BERT}}~'s default pooling method:\n\\begin{equation}\n \\vec{x} = \\texttt{Pool}({\\texttt{BERT}}~([X, Q]))\n \\label{eqn:bert_sentence_embed}\n\\end{equation}\nwhere the \\(\\texttt{Pool}\\) operator simply extracts the output embedding of the \\([\\mathrm{CLS}]\\) token.\n\nA linear layer transforms the sentence embedding to logits:\n\\begin{equation}\n [l_0, l_1, l_2] = \\vec{x}^T \\mathbf{W}_E\n\\end{equation}\nwhere \\(\\mathbf{W}_E\\in\\mathbb{R}^{d\\times 3}\\), \\(d\\) is the dimension of the hidden space, and \\(l_0,l_1,l_2\\) are logits for predicting contradiction, entailment, and neutral, respectively. We consider the underlying event to have happened if and only if the model predicts entailment. 
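This decision rule can be sketched in a few lines; the sketch below is ours and stubs out the encoder, starting directly from the three logits (in the real model they come from the linear layer over the pooled embedding).

```python
# Minimal, illustrative sketch of the entailment-based event decision.
# The encoder is stubbed out: in the real model, BERT([CLS] X [SEP] Q [SEP])
# is pooled and a linear layer produces three NLI logits
# (contradiction, entailment, neutral); here we start from given logits.

def build_input(sentence_tokens, statement_tokens):
    """Concatenate sentence and statement with the BERT special tokens."""
    return ["[CLS]", *sentence_tokens, "[SEP]", *statement_tokens, "[SEP]"]

def event_predicted(logits):
    """Predict the event iff entailment (index 1) has the largest logit."""
    contradiction, entailment, neutral = logits
    return entailment > max(contradiction, neutral)
```
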
Hence,\nthe final scores for predicting the event described in the statement are obtained by a softmax function:\n\\begin{equation}\n [p_0, p_1] = \\texttt{softmax}(l_0+l_2, l_1)\n\\end{equation}\nThe final loss function is the cross-entropy function:\n\\begin{equation}\n \\ell_E = \\texttt{cross\\_entropy}(p_0, p_1; y) = -((1-y)\\log p_0 + y \\log p_1)\n \\label{eqn:entail_loss}\n\\end{equation}\nwhere \\(y\\) is a binary label that indicates whether the described event happens.\n\n\\vpara{Question answering for argument detection.}\nGiven a question about an event and argument type, we need to predict the probability of each token being the start or end of the answer span.\nFirst, we collect the output embeddings from {\\texttt{BERT}}~ for each token in the original sentence \\(X\\):\n\\begin{equation}\n [\\vec{x}_1,\\ldots,\\vec{x}_n] = {\\texttt{BERT}}~(S)\n\\end{equation}\nTo obtain possible spans, we use two linear classifiers on the token embeddings:\n\\begin{equation}\n \\begin{aligned}\n [\\mathbf{s}^l_1,\\ldots, \\mathbf{s}^l_n] & = \\texttt{softmax}([\\vec{x}_1,\\ldots, \\vec{x}_n]^T \\mathbf{W}_l) \\\\\n [\\mathbf{s}^r_1,\\ldots, \\mathbf{s}^r_n] & = \\texttt{softmax}([\\vec{x}_1,\\ldots, \\vec{x}_n]^T \\mathbf{W}_r)\n \\end{aligned}\n\\end{equation}\nwhere \\(\\mathbf{W}_l, \\mathbf{W}_r \\in \\mathbb{R}^{d\\times 1} \\), \\texttt{softmax} normalizes logits across tokens, and \\(s_i^l, s_i^r\\) are the scores of the \\(i\\)th token being the start or end of the answer span.\nGiven ground-truth labels of span starts \\(y_i^l\\) and ends \\(y_i^r\\),\nthe model optimizes the cross-entropy loss:\n\\begin{equation}\n \\begin{aligned}\n \\ell_Q & = \\ell_Q^l + \\ell_Q^r \\\\\n \\ell_Q^l & = - \\sum_{i=1}^n y_i^l \\log s_i^l \\\\\n \\ell_Q^r & = - \\sum_{i=1}^n y_i^r \\log s_i^r\n \\end{aligned}\n \\label{eqn:qa_loss}\n\\end{equation}\nTo make a prediction,\nwe follow~\\cite{devlin2018bert_} and\npredict the span as \\(i, j\\) with the highest \\(s_i^l + 
s_j^r\\) satisfying \\(i<j\\).\n\n\\section{Introduction}\n\nLet $\\Omega\\subseteq\\mathbb R^N$ be a bounded domain with a $C^2$-boundary $\\partial\\Omega$. In this paper, we study the following parametric singular Dirichlet problem:\n\\begin{equation}\\label{eqp}\n\t\\left\\{\\begin{array}{ll}\n\t\t-\\Delta_pu(z)=\\lambda u(z)^{-\\gamma}+f(z,u(z))\\ \\mbox{in}\\ \\Omega,\\\\\n\t\tu|_{\\partial\\Omega}=0,\\ u>0,\\ \\lambda>0,\\ 0<\\gamma<1.\n\t\\end{array}\\right\\}\n\\end{equation}\n\nIn this problem, $\\Delta_p$ denotes the $p$-Laplacian differential operator defined by\n$$\\Delta_pu={\\rm div}\\,(|Du|^{p-2}Du)\\ \\mbox{for all}\\ u\\in W^{1,p}_0(\\Omega),\\ 1<p<\\infty.$$\n\nIn the reaction of \\eqref{eqp}, $\\lambda u^{-\\gamma}$ is a singular term, with $\\lambda>0$ being the parameter and $0<\\gamma<1$. Also, there is a Carath\\'eodory perturbation $f(z,x)$ (that is, for all $x\\in\\mathbb R$ the mapping $z\\mapsto f(z,x)$ is measurable and for almost all $z\\in\\Omega$ the mapping $x\\mapsto f(z,x)$ is continuous). We assume that $f(z,\\cdot)$ exhibits $(p-1)$-linear growth near $+\\infty$.\n\n We are looking for positive solutions of problem \\eqref{eqp}. Our aim is to describe in a precise way the dependence on the parameter $\\lambda>0$ of the set of positive solutions.\n\n \\textcolor{black}{\nWe prove a bifurcation-type property, which is the main result of our paper. Concerning the hypotheses $H(f)$ on the perturbation $f(z,x)$ and the other notation used in the statement of the theorem, we refer to Section~2.\nThe main result of the present paper is stated in the following theorem.}\n\n\\textcolor{black}{\n{\\bf Theorem A.} {\\sl If hypotheses $H(f)$ hold, then there exists $\\lambda^*\\in(0,+\\infty)$ such that\n\t\\begin{itemize}\n\t\t\\item [(a)] for every $\\lambda\\in(0,\\lambda^*)$, problem \\eqref{eqp} has at least two positive solutions\n\t\t\t$$\n\t\t\tu_\\lambda,\\hat{u}_\\lambda\\in {\\rm int}\\,C_+,\\ u_\\lambda\\neq\\hat{u}_\\lambda, \\ u_\\lambda\\leq\\hat{u}_\\lambda;\n\t\t\t$$\n\t\t\\item [(b)] for $\\lambda=\\lambda^*$, problem \\eqref{eqp} has at least one positive solution\n\t\t\t$$\n\t\t\tu^*_\\lambda\\in {\\rm int}\\,C_+;\n\t\t\t$$\n\t\t\\item [(c)] for $\\lambda>\\lambda^*$, problem \\eqref{eqp} has no positive solutions.\n\t\\end{itemize}}\n}\n\nIn the past, singular problems were studied in the context of semilinear equations (that is, $p=2$). 
We mention the works of Coclite \\& Palmieri \\cite{2}, Ghergu \\& R\\u{a}dulescu \\cite{5}, Hirano, Saccon \\& Shioji \\cite{10}, Lair \\& Shaker \\cite{11}, Sun, Wu \\& Long \\cite{20}. A detailed bibliography and additional topics on the subject, can be found in the book of Ghergu \\& R\\u{a}dulescu \\cite{6}. For nonlinear equations driven by the $p$-Laplacian, we mention the works of Giacomoni, Schindler \\& Taka\\v{c} \\cite{7}, Papageorgiou, R\\u{a}dulescu \\& Repov\\v{s} \\cite{16, 16bis}, Papageorgiou \\& Smyrlis \\cite{17}, Perera \\& Zhang \\cite{18}. Of the aforementioned papers, closest to our work here is that of Papageorgiou \\& Smyrlis \\cite{17}, where the authors also deal with a parametric singular problem and prove a bifurcation-type result. In their problem, the perturbation $f(z,x)$ is ($p-1$)-superlinear in $x\\in\\mathbb R$ near $+\\infty$. So, our present work complements the results of \\cite{17}, by considering equations in which the reaction has the competing effects of a singular term and of a $(p-1)$-linear term.\n\nOur approach uses variational tools together with suitable truncation and comparison techniques.\n\n\\section{Preliminaries and hypotheses}\n\nLet $X$ be a Banach space and $X^*$ its topological dual. By $\\left\\langle \\cdot,\\cdot\\right\\rangle$ we denote the duality brackets of the pair $(X^*,X)$. 
Given $\\varphi\\in C^1(X,\\mathbb R)$, we say that $\\varphi$ satisfies the ``Cerami condition\" (the ``C-condition\" for short), if the following property holds:\n\\begin{center}\n``Every sequence $\\{u_n\\}_{n\\geq 1}\\subseteq X$ such that\n$$\\{\\varphi(u_n)\\}_{n\\geq 1}\\subseteq\\mathbb R\\ \\mbox{is bounded and}\\\n\\mbox{$(1+||u_n||)\\varphi'(u_n)\\rightarrow 0$ in $X^*$ as $n\\rightarrow\\infty$,}$$\nadmits a strongly convergent subsequence.\"\n\\end{center}\n\nUsing this notion, we can state the ``mountain pass theorem\".\n\\begin{theorem}\\label{th1} {\\bf (Mountain pass theorem)}\n\tAssume that $\\varphi\\in C^1(X,\\mathbb R)$ satisfies the C-condition, $u_0,u_1\\in X$, $||u_1-u_0||>\\rho>0$,\n\t$$\\max\\{\\varphi(u_0),\\varphi(u_1)\\}<\\inf\\{\\varphi(u):||u-u_0||=\\rho\\}=m_{\\rho}$$\n\tand $c=\\inf\\limits_{\\gamma\\in\\Gamma}\\max\\limits_{0\\leq t\\leq 1}\\ \\varphi(\\gamma(t))$ with $\\Gamma=\\{\\gamma\\in C([0,1],X):\\gamma(0)=u_0,\\gamma(1)=u_1\\}$. Then $c\\geq m_{\\rho}$ and $c$ is a critical value of $\\varphi$ (that is, we can find $\\hat{u}\\in X$ such that $\\varphi'(\\hat{u})=0$ and $\\varphi(\\hat{u})=c$).\n\\end{theorem}\n\nThe analysis of problem \\eqref{eqp} will involve the Sobolev space $W^{1,p}_0(\\Omega)$ and the Banach space $$C^1_0(\\overline{\\Omega})=\\{u\\in C^1(\\overline{\\Omega}):u|_{\\partial\\Omega}=0\\}.$$ We denote by $||\\cdot||$ the norm of $W^{1,p}_0(\\Omega)$. 
On account of the Poincar\\'e inequality, we have\n$$||u||=||Du||_p\\ \\mbox{for all}\\ u\\in W^{1,p}_0(\\Omega).$$\n\nThe space $C^1_0(\\overline{\\Omega})$ is an ordered Banach space with positive (order) cone $$C_+=\\{u\\in C^1_0(\\overline{\\Omega}):u(z)\\geq 0\\ \\mbox{for all}\\ z\\in\\overline{\\Omega}\\}.$$ This cone has a nonempty interior given by\n$${\\rm int}\\, C_+=\\left\\{u\\in C_+:u(z)>0\\ \\mbox{for all}\\ z\\in\\Omega,\\ \\left.\\frac{\\partial u}{\\partial n}\\right|_{\\partial\\Omega}<0\\right\\}.$$\n\nHere, $n(\\cdot)$ denotes the outward unit normal on $\\partial\\Omega$.\n\nLet $h_1,h_2\\in L^{\\infty}(\\Omega)$. We write $h_1\\prec h_2$, if for every compact $K\\subseteq\\Omega$, we can find $c_K>0$ such that $c_K\\leq h_2(z)-h_1(z)$ for almost all $z\\in K$. Note that, if $h_1,h_2\\in C(\\Omega)$ and $h_1(z)<h_2(z)$ for all $z\\in\\Omega$, then $h_1\\prec h_2$.\n\nWe will use the following strong comparison principle.\n\\begin{prop}\\label{prop2}\n\tIf $\\hat{\\xi}>0$, $\\lambda>0$, $h_1,h_2\\in L^{\\infty}(\\Omega)$ with $h_1\\prec h_2$, and $u_1,u_2\\in C^1_0(\\overline{\\Omega})$ satisfy $u_1(z)>0$ for all $z\\in\\Omega$, $u_2\\in {\\rm int}\\, C_+$ and\n\t\\begin{eqnarray*}\n\t\t&&-\\Delta_pu_1(z)+\\hat{\\xi}u_1(z)^{p-1}-\\lambda u_1(z)^{-\\gamma}=h_1(z),\\\\\n\t\t&&-\\Delta_pu_2(z)+\\hat{\\xi}u_2(z)^{p-1}-\\lambda u_2(z)^{-\\gamma}=h_2(z)\\ \\mbox{for almost all}\\ z\\in\\Omega,\n\t\\end{eqnarray*}\n\tthen $u_2-u_1\\in {\\rm int}\\, C_+.$\n\\end{prop}\n\nWe denote by $A:W^{1,p}_0(\\Omega)\\rightarrow W^{-1,p'}(\\Omega)=W^{1,p}_0(\\Omega)^*\\left(\\frac{1}{p}+\\frac{1}{p'}=1\\right)$ the nonlinear map defined by\n$$\\left\\langle A(u),h\\right\\rangle=\\int_{\\Omega}|Du|^{p-2}(Du,Dh)_{\\mathbb R^N}dz\\ \\mbox{for all}\\ u,h\\in W^{1,p}_0(\\Omega).$$\n\nThis map has the following properties (see Motreanu, Motreanu \\& Papageorgiou \\cite[p. 
40]{15}).\n\\begin{prop}\\label{prop3}\n\tThe map $A:W^{1,p}_0(\\Omega)\\rightarrow W^{-1,p'}(\\Omega)$ is bounded (that is, $A$ maps bounded sets to bounded sets), continuous, strictly monotone and of type $(S)_+$, that is, if $u_n\\stackrel{w}{\\rightarrow}u$ in $W^{1,p}_0(\\Omega)$ and $\\limsup\\limits_{n\\rightarrow\\infty}\\left\\langle A(u_n),u_n-u\\right\\rangle\\leq 0$, then $u_n\\rightarrow u$ in $W^{1,p}_0(\\Omega)$.\n\\end{prop}\n\nConsider the following nonlinear eigenvalue problem\n\\begin{equation}\\label{eq1}\n\t-\\Delta_pu(z)=\\hat{\\lambda}|u(z)|^{p-2}u(z)\\ \\mbox{in}\\ \\Omega,\\ u|_{\\partial\\Omega}=0.\n\\end{equation}\n\nWe say that $\\hat{\\lambda}\\in\\mathbb R$ is an ``eigenvalue\" of $(-\\Delta_p,W^{1,p}_0(\\Omega))$ if problem (\\ref{eq1}) admits a nontrivial solution $\\hat{u}\\in W^{1,p}_0(\\Omega)$, known as an ``eigenfunction\" corresponding to $\\hat{\\lambda}$. The nonlinear regularity theory (see Gasinski \\& Papageorgiou \\cite[pp. 737-738]{3}) implies that $\\hat{u}\\in C^1_0(\\overline{\\Omega})$. There is a smallest eigenvalue $\\hat{\\lambda}_1>0$ with the following properties:\n\\begin{itemize}\n\t\\item $\\hat{\\lambda}_1>0$ is isolated (that is, if $\\hat{\\sigma}(p)$ denotes the spectrum of $(-\\Delta_p,W^{1,p}_0(\\Omega))$, then we can find $\\epsilon>0$ such that $(\\hat{\\lambda}_1,\\hat{\\lambda}_1+\\epsilon)\\cap\\hat{\\sigma}(p)=\\emptyset$);\n\t\\item $\\hat{\\lambda}_1$ is simple (that is, if $\\hat{u},\\hat{v}\\in C^1_0(\\overline{\\Omega})$ are eigenfunctions corresponding to $\\hat{\\lambda}_1$, then $\\hat{u}=\\xi\\hat{v}$ for some $\\xi\\in \\mathbb R\\backslash\\{0\\}$);\n\t\\begin{equation}\\label{eq2}\n\t\t\\bullet\\hspace{3cm}\\hat{\\lambda}_1=\\inf\\left\\{\\frac{||Du||^p_p}{||u||^p_p}:u\\in W^{1,p}_0(\\Omega),u\\neq 0\\right\\}.\\hspace{4cm}\n\t\\end{equation}\n\\end{itemize}\n\nIt follows from the above properties that the eigenfunctions corresponding to $\\hat{\\lambda}_1$ do not change sign. 
We denote by $\\hat{u}_1$ the positive, $L^p$-normalized (that is, $||\\hat{u}_1||_p=1$) eigenfunction corresponding to $\\hat{\\lambda}_1>0$. From the nonlinear maximum principle (see, for example, Gasinski \\& Papageorgiou \\cite[p. 738]{3}), we have $\\hat{u}_1\\in {\\rm int}\\, C_+$. Any eigenfunction corresponding to an eigenvalue $\\hat{\\lambda}\\neq\\hat{\\lambda}_1$, is nodal (that is, sign-changing). More details about the spectrum of $(-\\Delta_p,W^{1,p}_0(\\Omega))$ can be found in \\cite{3, 15}.\n\nWe can also consider a weighted version of the eigenvalue problem (\\ref{eq1}). So, let $m\\in L^{\\infty}(\\Omega)$, $m(z)\\geq 0$ for almost all $z\\in\\Omega,\\ m\\neq 0$. We consider the following nonlinear eigenvalue problem:\n\\begin{equation}\\label{eq3}\n\t-\\Delta_pu(z)=\\tilde{\\lambda}m(z)|u(z)|^{p-2}u(z)\\ \\mbox{in}\\ \\Omega,\\ u|_{\\partial\\Omega}=0.\n\\end{equation}\n\nThis problem has the same properties as (\\ref{eq1}). So, there is a smallest eigenvalue $\\tilde{\\lambda}_1(m)>0$ which is isolated, simple and admits the following variational characterization\n$$\\tilde{\\lambda}_1(m)=\\inf\\left\\{\\frac{||Du||^p_p}{\\int_{\\Omega}m(z)|u|^pdz}:u\\in W^{1,p}_0(\\Omega),u\\neq 0\\right\\}.$$\n\nAlso the eigenfunctions corresponding to $\\tilde{\\lambda}_1(m)$ have a fixed sign and we denote by $\\tilde{u}_1(m)$ the positive, $L^p$-normalized eigenfunction. We have $\\tilde{u}_1(m)\\in {\\rm int}\\, C_+$. These properties lead to the following monotonicity property of the map $m\\mapsto\\tilde{\\lambda}_1(m)$.\n\n\\begin{prop}\\label{prop4}\n\tIf $m_1,m_2\\in L^{\\infty}(\\Omega),0\\leq m_1(z)\\leq m_2(z)$ for almost all $z\\in\\Omega$ and both inequalities are strict on the sets of positive measure, then $\\tilde{\\lambda}_1(m_2)<\\tilde{\\lambda}_1(m_1)$.\n\\end{prop}\n\nGiven $x\\in\\mathbb R$, we set $x^{\\pm}=\\max\\{\\pm x, 0\\}$. Then for $u\\in W^{1,p}_0(\\Omega)$, we set $u^{\\pm}(\\cdot)=u(\\cdot)^{\\pm}$. 
We know that\n$$u^{\\pm}\\in W^{1,p}_0(\\Omega),\\ |u|=u^++u^-,\\ u=u^+-u^-.$$\n\nIf $g:\\Omega\\times\\mathbb R\\rightarrow\\mathbb R$ is a measurable function (for example, a Carath\\'eodory function), then by $N_g(\\cdot)$ we denote the Nemytski map corresponding to $g(\\cdot,\\cdot)$ defined by\n$$N_g(u)(\\cdot)=g(\\cdot,u(\\cdot))\\ \\mbox{for all}\\ u\\in W^{1,p}_0(\\Omega).$$\n\nGiven $v,u\\in W^{1,p}_0(\\Omega)$ with $v\\leq u$, we define the order interval $[v,u]$ by\n$$[v,u]=\\{y\\in W^{1,p}_0(\\Omega):v(z)\\leq y(z)\\leq u(z)\\ \\mbox{for almost all}\\ z\\in\\Omega\\}.$$\n\nThe hypotheses on the perturbation $f(z,x)$ are the following:\n\n\\smallskip\n$H(f):$ $f:\\Omega\\times\\mathbb R\\rightarrow\\mathbb R$ is a Carath\\'eodory function such that $f(z,0)=0$ for almost all $z\\in\\Omega$ and\n\\begin{itemize}\n\t\\item[(i)] for every $\\rho>0$, there exists $a_{\\rho}\\in L^{\\infty}(\\Omega)$ such that\n\t$$|f(z,x)|\\leq a_{\\rho}(z)\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\mbox{and all}\\ 0\\leq x\\leq\\rho;$$\n\t\\item[(ii)] $\\hat{\\lambda}_1<\\eta\\leq\\liminf\\limits_{x\\rightarrow+\\infty}\\frac{f(z,x)}{x^{p-1}}\\leq\\limsup\\limits_{x\\rightarrow+\\infty}\\frac{f(z,x)}{x^{p-1}}\\leq\\hat{\\eta}$ uniformly for almost all $z\\in\\Omega;$\n\t\\item[(iii)] there exists a function $w\\in C^1(\\overline{\\Omega})$ such that\n\t$$w(z)\\geq c_0>0\\ \\mbox{for all}\\ z\\in\\overline{\\Omega},\\ \\Delta_pw\\in L^{\\infty}(\\Omega)\\ \\mbox{with}\\ \\Delta_pw(z)\\leq 0\\ \\mbox{for almost all}\\ z\\in\\Omega,$$\n\tand for every compact $K\\subseteq\\Omega$ we can find $c_K>0$ such that\n\t$$w(z)^{-\\gamma}+f(z,w(z))\\leq-c_K<0\\ \\mbox{for almost all}\\ z\\in K;$$\n\t\\item[(iv)] there exists $\\delta_0\\in(0,c_0)$ such that for every compact $K\\subseteq\\Omega$\n\t$$f(z,x)\\geq\\hat{c}_K>0\\ \\mbox{for almost all}\\ z\\in K,\\ \\mbox{and all}\\ x\\in\\left(0,\\delta_0\\right];$$\n\t\\item[(v)] for every $\\rho>0$, there exists $\\hat{\\xi}_{\\rho}>0$ such that for almost 
all $z\\in\\Omega$ the function\n\t$$x\\mapsto f(z,x)+\\hat{\\xi}_{\\rho}x^{p-1}$$\n\tis nondecreasing on $[0,\\rho]$.\n\\end{itemize}\n\n\\begin{remark}\n\tSince we are looking for positive solutions and all the above hypotheses concern the positive semiaxis $\\mathbb R_+=\\left[0,+\\infty\\right)$, we may assume without any loss of generality that\n\t\\begin{equation}\\label{eq4}\n\t\tf(z,x)=0\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\mbox{and all}\\ x\\leq 0.\n\t\\end{equation}\n\\end{remark}\n\nHypothesis $H(f)(ii)$ implies that asymptotically at $+\\infty$ we have uniform nonresonance with respect to the principal eigenvalue $\\hat{\\lambda}_1>0$ of $(-\\Delta_p,W^{1,p}_0(\\Omega))$. The resonant case was recently examined for nonparametric singular Dirichlet problems by Papageorgiou, R\\u{a}dulescu \\& Repov\\v{s} \\cite{16}.\n\n\\begin{ex}\nThe following functions satisfy hypotheses $H(f)$. For the sake of simplicity we drop the $z$-dependence:\n$$f(x)=\\left\\{\\begin{array}{ll}\n\tx^{\\tau-1}-3x^{\\vartheta-1}&\\mbox{if}\\ 0\\leq x\\leq 1\\\\\n\t\\eta x^{p-1}-(\\eta+2)x^{q-1}&\\mbox{if}\\ 1<x\n\\end{array}\\right.$$\nwith $1<\\tau<\\vartheta$, $1<q<p$ and $\\eta>\\hat{\\lambda}_1$; and\n\n\\textcolor{black}{\n$$f(x)=\\left\\{\\begin{array}{ll}\n\t2\\sin(2\\pi x)&\\mbox{if}\\ 0\\leq x\\leq 1\\\\\n\t\\eta (x^{p-1}-x^{q-1})&\\mbox{if}\\ 1<x\n\\end{array}\\right.$$\nwith $\\eta>\\hat{\\lambda}_1$, $1<q<p$.}\n\\end{ex}\n\nFirst, we consider the following purely singular problem\n\\begin{equation}\\label{eqa}\n\t\\left\\{\\begin{array}{ll}\n\t\t-\\Delta_pu(z)=\\lambda u(z)^{-\\gamma}\\ \\mbox{in}\\ \\Omega,\\\\\n\t\tu|_{\\partial\\Omega}=0,\\ u>0,\\ \\lambda>0,\\ 0<\\gamma<1.\n\t\\end{array}\\right\\}\n\\end{equation}\n\nThe next proposition establishes the existence and $\\lambda$-dependence of the positive solutions for problem \\eqref{eqa}.\n\\begin{prop}\\label{prop5}\n\tFor every $\\lambda>0$ problem \\eqref{eqa} admits a unique solution $\\tilde{u}_{\\lambda}\\in {\\rm int}\\, C_+$, the map $\\lambda\\mapsto\\tilde{u}_{\\lambda}$ is nondecreasing from $(0,\\infty)$ into $C^1_0(\\overline{\\Omega})$ (that is, if $0<\\vartheta<\\lambda$, then $\\tilde{u}_{\\vartheta}\\leq\\tilde{u}_{\\lambda}$) and $||\\tilde{u}_{\\lambda}||_{C^1_0(\\overline{\\Omega})}\\rightarrow 0$ as $\\lambda\\rightarrow 
0^+$.\n\\end{prop}\n\\begin{proof}\n\tThe existence of a unique solution $\\tilde{u}_{\\lambda}\\in {\\rm int}\\, C_+$ follows from Proposition 5 of Papageorgiou \\& Smyrlis \\cite{17}.\n\t\n\tLet $0<\\vartheta<\\lambda$ and let $\\tilde{u}_{\\vartheta},\\tilde{u}_{\\lambda}\\in {\\rm int}\\, C_+$ be the corresponding unique solutions of problem \\eqref{eqa}. Evidently, $\\tilde{u}^{p'}_{\\vartheta}\\in {\\rm int}\\, C_+\\left(\\frac{1}{p}+\\frac{1}{p'}=1\\right)$ and so by Proposition 2.1 of Marano \\& Papageorgiou \\cite{14}, we can find $c_1>0$ such that\n\t\\begin{eqnarray*}\n\t\t&&\\hat{u}_1\\leq c_1\\tilde{u}^{p'}_{\\vartheta},\\\\\n\t\t&\\Rightarrow&\\hat{u}_1^{1\/p'}\\leq c_1^{1\/p'}\\tilde{u}_{\\vartheta},\\\\\n\t\t&\\Rightarrow&\\tilde{u}^{-\\gamma}_{\\vartheta}\\leq c_2\\hat{u}_1^{-\\gamma\/p'}\\ \\mbox{for some}\\ c_2>0.\n\t\\end{eqnarray*}\n\t\n\tThe lemma of Lazer \\& McKenna \\cite[p. 726]{12} implies that $\\hat{u}_1^{-\\gamma\/p'}\\in L^{p'}(\\Omega)$. Therefore $\\tilde{u}_{\\vartheta}^{-\\gamma}\\in L^{p'}(\\Omega)$. We introduce the Carath\\'eodory function $g_{\\lambda}(z,x)$ defined by\n\t\\begin{equation}\\label{eq5}\n\t\tg_{\\lambda}(z,x)=\\left\\{\\begin{array}{ll}\n\t\t\t\\lambda\\tilde{u}_{\\vartheta}^{-\\gamma}&\\mbox{if}\\ x\\leq\\tilde{u}_{\\vartheta}(z)\\\\\n\t\t\t\\lambda x^{-\\gamma}&\\mbox{if}\\ \\tilde{u}_{\\vartheta}(z)<x.\n\t\t\\end{array}\\right.\n\t\\end{equation}\n\t\n\tWe consider the following Dirichlet problem:\n\t\\begin{equation}\\label{eq6}\n\t\t-\\Delta_pu(z)=g_{\\lambda}(z,u(z))\\ \\mbox{in}\\ \\Omega,\\ u|_{\\partial\\Omega}=0.\n\t\\end{equation}\n\t\n\tSince $\\tilde{u}_{\\vartheta}^{-\\gamma}\\in L^{p'}(\\Omega)$, using the direct method of the calculus of variations, we obtain a solution $\\tilde{u}\\in {\\rm int}\\, C_+$ of problem (\\ref{eq6}) such that $\\tilde{u}_{\\vartheta}\\leq\\tilde{u}$ (see (\\ref{eq5})). Then $\\tilde{u}$ is a solution of problem \\eqref{eqa} and so, by uniqueness, $\\tilde{u}=\\tilde{u}_{\\lambda}$. Therefore $\\tilde{u}_{\\vartheta}\\leq\\tilde{u}_{\\lambda}$, that is, the map $\\lambda\\mapsto\\tilde{u}_{\\lambda}$ is nondecreasing.\n\t\n\tNext, let $\\lambda\\in\\left(0,1\\right]$. Acting on \\eqref{eqa} with the test function $\\tilde{u}_{\\lambda}\\in W^{1,p}_0(\\Omega)$, we obtain\n\t\t\\begin{eqnarray}\\label{eq8}\n\t\t\t&&||D\\tilde{u}_{\\lambda}||_p^p=\\lambda\\int_{\\Omega}\\tilde{u}_{\\lambda}^{1-\\gamma}dz\\leq\\lambda c_3||\\tilde{u}_{\\lambda}||^{1-\\gamma}\\ \\mbox{for some}\\ c_3>0\\nonumber\\\\\n\t\t\t&&(\\mbox{see Theorem 13.17 of Hewitt \\& Stromberg \\cite[p. 196]{9}}),\\nonumber\\\\\n\t\t\t&\\Rightarrow&\\{\\tilde{u}_{\\lambda}\\}_{\\lambda\\in\\left(0,1\\right]}\\subseteq W^{1,p}_0(\\Omega)\\ \\mbox{is bounded and }||\\tilde{u}_{\\lambda}||\\rightarrow 0\\ \\mbox{as}\\ \\lambda\\rightarrow 0^+.\n\t\t\\end{eqnarray}\n\t\t\n\t\tAs in the first part of the proof, using Proposition 2.1 of Marano \\& Papageorgiou \\cite{14}, we show that $\\tilde{u}_{\\lambda}^{-\\gamma}\\in L^r(\\Omega)$ for $r>N$. 
Then Proposition 1.3 of Guedda \\& V\\'eron \\cite{8} implies that\n\t\t\\begin{equation}\\label{eq9}\n\t\t\t\\tilde{u}_{\\lambda}\\in L^{\\infty}(\\Omega)\\ \\mbox{and}\\ ||\\tilde{u}_{\\lambda}||_{\\infty}\\leq c_4\\ \\mbox{for some}\\ c_4>0,\\ \\mbox{and all}\\ 0<\\lambda\\leq 1.\n\t\t\\end{equation}\n\t\t\n\t\tLet $k_{\\lambda}=\\lambda\\tilde{u}^{-\\gamma}_{\\lambda}\\in L^r(\\Omega),\\ \\lambda\\in\\left(0,1\\right]$, and consider the following linear Dirichlet problem:\n\t\t\\begin{equation}\\label{eq10}\n\t\t\t-\\Delta v(z)=k_{\\lambda}(z)\\ \\mbox{in}\\ \\Omega,\\ v|_{\\partial\\Omega}=0,\\ 0<\\lambda\\leq 1.\n\t\t\\end{equation}\n\t\t\n\t\tStandard existence and regularity theory (see, for example, Struwe \\cite[p. 218]{19}) implies that problem (\\ref{eq10}) has a unique solution $v_{\\lambda}(\\cdot)$ such that\n\t\t$$v_{\\lambda}\\in W^{2,r}(\\Omega)\\subseteq C^{1,\\alpha}_{0}(\\overline{\\Omega})=C^{1,\\alpha}(\\overline{\\Omega})\\cap C^1_0(\\overline{\\Omega}),\\ ||v_{\\lambda}||_{C^{1,\\alpha}_0(\\overline{\\Omega})}\\leq c_5$$\n\t\tfor some $c_5>0$, all $\\lambda\\in\\left(0,1\\right]$, and with $\\alpha=1-\\frac{N}{r}\\in(0,1)$ (recall that $r>N$). Let $\\beta_{\\lambda}(z)=Dv_{\\lambda}(z)$. Then $\\beta_{\\lambda}\\in C^{0,\\alpha}(\\overline{\\Omega})$ for every $\\lambda\\in\\left(0,1\\right]$. 
We have\n\t\t$$-{\\rm div}\\,[|D\\tilde{u}_{\\lambda}|^{p-2}D\\tilde{u}_{\\lambda}-\\beta_{\\lambda}]=0\\ \\mbox{in}\\ \\Omega,\\left.\\ \\tilde{u}_{\\lambda}\\right|_{\\partial\\Omega}=0\\ (\\mbox{since}\\ \\tilde{u}_{\\lambda}\\ \\mbox{solves}\\ \\eqref{eqa}).$$\n\t\t\n\t\tThen Theorem 1 of Lieberman \\cite{13} (see also Corollary 1.1 of Guedda \\& V\\'eron \\cite{8}) and (\\ref{eq9}) imply that we can find $s\\in(0,1)$ and $c_6>0$ such that\n\t\t$$\\tilde{u}_{\\lambda}\\in C^{1,s}_0(\\overline{\\Omega})\\cap {\\rm int}\\, C_+,\\ ||\\tilde{u}_{\\lambda}||_{C^{1,s}_0(\\overline{\\Omega})}\\leq c_6\\ \\mbox{for all}\\ \\lambda\\in\\left(0,1\\right].$$\n\t\t\n\t\tFinally, the compact embedding of $C^{1,s}_0(\\overline{\\Omega})$ into $C^1_0(\\overline{\\Omega})$ and (\\ref{eq8}) imply that\n\t\t$$||\\tilde{u}_{\\lambda}||_{C^1_0(\\overline{\\Omega})}\\rightarrow 0\\ \\mbox{as}\\ \\lambda\\rightarrow 0^+.$$\nThis completes the proof.\n\\end{proof}\n\n\n\\section{Bifurcation-type theorem}\n\nLet $$\\mathcal{L}=\\{\\lambda>0:\\ \\mbox{problem \\eqref{eqp} admits a positive solution}\\}$$ and \\begin{center}\n$S_{\\lambda}=\\mbox{the set of positive solutions for problem \\eqref{eqp}}$.\n \\end{center}\n\\begin{prop}\\label{prop6}\nIf hypotheses $H(f)$ hold, then $\\mathcal{L}\\neq\\emptyset$.\n\\end{prop}\n\\begin{proof}\nUsing Proposition \\ref{prop5}, we can find $\\lambda_0\\in\\left(0,1\\right]$ such that\n\\begin{equation}\\label{eq11}\n\\tilde{u}_{\\lambda}(z)\\in\\left(0,\\delta_0\\right]\\ \\mbox{for all}\\ z\\in\\Omega,\\ \\mbox{all}\\ \\lambda\\in\\left(0,\\lambda_0\\right].\n\\end{equation}\n\nHere, $\\delta_0>0$ is as postulated by hypothesis $H(f)(iv)$.\n\nWe fix $\\lambda\\in\\left(0,\\lambda_0\\right]$ and we consider the following truncation of the reaction in problem \\eqref{eqp}:\n\\begin{eqnarray}\\label{eq12}\n\\hat{k}_{\\lambda}(z,x)=\\left\\{\\begin{array}{ll}\n \\lambda\\tilde{u}_{\\lambda}(z)^{-\\gamma}+f(z,\\tilde{u}_{\\lambda}(z))&\\mbox{if}\\ 
x<\\tilde{u}_{\\lambda}(z)\\\\\n \\lambda x^{-\\gamma}+f(z,x)&\\mbox{if}\\ \\tilde{u}_{\\lambda}(z)\\leq x\\leq w(z)\\\\\n \\lambda w(z)^{-\\gamma}+f(z,w(z))&\\mbox{if}\\ w(z)<x.\n\\end{array}\\right.\n\\end{eqnarray}\n\nUsing the truncation (\\ref{eq12}), the direct method of the calculus of variations and hypotheses $H(f)(iii),(iv)$ (see also (\\ref{eq11})), we obtain a solution $u_{\\lambda}\\in\\left[\\tilde{u}_{\\lambda},w\\right]\\cap {\\rm int}\\, C_+$ of problem \\eqref{eqp}. Therefore $\\left(0,\\lambda_0\\right]\\subseteq\\mathcal{L}$ and so $\\mathcal{L}\\neq\\emptyset$.\n\\end{proof}\n\n\\begin{corollary}\\label{cor7}\n\tIf hypotheses $H(f)$ hold, then problem \\eqref{eqp} admits a positive solution $u_{\\lambda}\\in {\\rm int}\\, C_+$ for every small enough $\\lambda>0$.\n\\end{corollary}\n\nThe next proposition shows that $\\mathcal{L}$ is an interval.\n\\begin{prop}\\label{prop8}\n\tIf hypotheses $H(f)$ hold, $\\lambda\\in\\mathcal{L}$ and $\\vartheta\\in(0,\\lambda)$, then $\\vartheta\\in\\mathcal{L}.$\n\\end{prop}\n\\begin{proof}\n\tSince $\\lambda\\in\\mathcal{L}$, we can find $u_{\\lambda}\\in S_{\\lambda}\\subseteq {\\rm int}\\, C_+$. Proposition \\ref{prop5} implies that we can find $\\tau\\in\\left(0,\\lambda_0\\right]$ (see (\\ref{eq11})) such that\n\t$$\\tau<\\vartheta\\ \\mbox{and}\\ \\tilde{u}_{\\tau}\\leq u_{\\lambda}.$$\n\t\n\tWe introduce the Carath\\'eodory function $e_{\\vartheta}(z,x)$ defined by\n\t\\begin{equation}\\label{eq16}\n\t\te_{\\vartheta}(z,x)=\\left\\{\\begin{array}{ll}\n\t\t\t\\vartheta\\tilde{u}_{\\tau}(z)^{-\\gamma}+f(z,\\tilde{u}_{\\tau}(z))&\\mbox{if}\\ x<\\tilde{u}_{\\tau}(z)\\\\\n\t\t\t\\vartheta x^{-\\gamma}+f(z,x)&\\mbox{if}\\ \\tilde{u}_{\\tau}(z)\\leq x\\leq u_{\\lambda}(z)\\\\\n\t\t\t\\vartheta u_{\\lambda}(z)^{-\\gamma}+f(z,u_{\\lambda}(z))&\\mbox{if}\\ u_{\\lambda}(z)<x.\n\t\t\\end{array}\\right.\n\t\\end{equation}\n\t\n\tReasoning as in the proof of Proposition \\ref{prop6}, via the direct method and the truncation (\\ref{eq16}), we obtain a solution $u_{\\vartheta}\\in\\left[\\tilde{u}_{\\tau},u_{\\lambda}\\right]\\cap {\\rm int}\\, C_+$ of problem \\eqref{eqp} with $\\vartheta$ in place of $\\lambda$. Therefore $\\vartheta\\in\\mathcal{L}$.\n\\end{proof}\n\nIn fact, the proof of Proposition \\ref{prop8} yields the following stronger conclusion.\n\\begin{corollary}\\label{cor9}\n\tIf hypotheses $H(f)$ hold, $\\lambda\\in\\mathcal{L}$, $u_{\\lambda}\\in S_{\\lambda}\\subseteq {\\rm int}\\, C_+$ and $\\vartheta\\in(0,\\lambda)$, then $\\vartheta\\in\\mathcal{L}$ and we can find $u_{\\vartheta}\\in S_{\\vartheta}\\subseteq {\\rm int}\\, C_+$ such that $u_{\\vartheta}\\leq u_{\\lambda}$.\n\\end{corollary}\n\nThis ordering can be made strict.\n\\begin{prop}\\label{prop10}\n\tIf hypotheses $H(f)$ hold, $\\lambda\\in\\mathcal{L}$, $u_{\\lambda}\\in S_{\\lambda}\\subseteq {\\rm int}\\, C_+$ and $\\vartheta\\in(0,\\lambda)$, then $\\vartheta\\in\\mathcal{L}$ and we can find $u_{\\vartheta}\\in S_{\\vartheta}\\subseteq {\\rm int}\\, C_+$ such that\n\t$$u_{\\lambda}-u_{\\vartheta}\\in {\\rm int}\\, C_+.$$\n\\end{prop}\n\\begin{proof}\n\tFrom Corollary \\ref{cor9} we already know that $\\vartheta\\in\\mathcal{L}$ and that we can find $u_{\\vartheta}\\in S_{\\vartheta}\\subseteq {\\rm int}\\, C_+$ such that\n\t\\begin{equation}\\label{eq19}\n\t\tu_{\\vartheta}(z)\\leq u_{\\lambda}(z)\\leq\\rho\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\mbox{with}\\ \\rho=||u_{\\lambda}||_{\\infty}.\n\t\\end{equation}\n\t\n\tLet $\\hat{\\xi}_{\\rho}>0$ be as postulated by hypothesis $H(f)(v)$. 
Then\n\t\\begin{eqnarray*}\n\t\t&&-\\Delta_pu_{\\vartheta}+\\hat{\\xi}_{\\rho}u_{\\vartheta}^{p-1}-\\lambda u_{\\vartheta}^{-\\gamma}\\\\\n\t\t&=&-(\\lambda-\\vartheta)u_{\\vartheta}^{-\\gamma}+f(z,u_{\\vartheta})+\\hat{\\xi}_{\\rho}u_{\\vartheta}^{p-1}\\\\\n\t\t&\\leq&f(z,u_{\\lambda})+\\hat{\\xi}_{\\rho}u_{\\lambda}^{p-1}\\ (\\mbox{recall that}\\ \\vartheta<\\lambda\\ \\mbox{and see (\\ref{eq19}) and hypothesis}\\ H(f)(v))\\\\\n\t\t&=&-\\Delta_pu_{\\lambda}+\\hat{\\xi}_{\\rho}u_{\\lambda}^{p-1}-\\lambda u_{\\lambda}^{-\\gamma}\\ (\\mbox{since}\\ u_{\\lambda}\\in S_{\\lambda}).\n\t\\end{eqnarray*}\n\t\n\tWe set\n\t\\begin{eqnarray*}\n\t\t&&h_1(z)=f(z,u_{\\vartheta}(z))+\\hat{\\xi}_{\\rho}u_{\\vartheta}(z)^{p-1}-(\\lambda-\\vartheta)u_{\\vartheta}(z)^{-\\gamma}\\\\\n\t\t&&h_2(z)=f(z,u_{\\lambda}(z))+\\hat{\\xi}_{\\rho}u_{\\lambda}(z)^{p-1}.\n\t\\end{eqnarray*}\n\t\n\tWe have\n\t$$h_2(z)-h_1(z)\\geq(\\lambda-\\vartheta)u_{\\vartheta}(z)^{-\\gamma}\\geq(\\lambda-\\vartheta)\\rho^{-\\gamma}\\ \\mbox{for almost all}\\ z\\in\\Omega$$\n\t(see (\\ref{eq19}) and hypothesis $H(f)(v)$).\n\t\n\tWe can apply Proposition \\ref{prop2} and conclude that\n\t$$u_{\\lambda}-u_{\\vartheta}\\in {\\rm int}\\, C_+.$$\nThe proof is now complete.\n\\end{proof}\n\nDenote $\\lambda^*=\\sup\\mathcal{L}.$\n\\begin{prop}\\label{prop11}\n\tIf hypotheses $H(f)$ hold, then $\\lambda^*<+\\infty$.\n\\end{prop}\n\\begin{proof}\n\tLet $\\epsilon>0$ be such that $\\hat{\\lambda}_1+\\epsilon<\\eta$ (see hypothesis $H(f)(ii)$). 
We can find $M>0$ such that\n\t\\begin{equation}\\label{eq20}\n\t\tf(z,x)\\geq[\\hat{\\lambda}_1+\\epsilon]x^{p-1}\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\mbox{and all}\\ x\\geq M.\n\t\\end{equation}\n\t\n\tAlso, hypothesis $H(f)(i)$ implies that we can find large enough $\\tilde{\\lambda}>0$ such that\n\t\\begin{equation}\\label{eq21}\n\t\t\\tilde{\\lambda}M^{-\\gamma}+f(z,x)\\geq[\\hat{\\lambda}_1+\\epsilon]M^{p-1}\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\mbox{and all}\\ 0\\leq x\\leq M.\n\t\\end{equation}\n\t\n\tIt follows from (\\ref{eq20}) and (\\ref{eq21}) that\n\t\\begin{equation}\\label{eq22}\n\t\t\\tilde{\\lambda}x^{-\\gamma}+f(z,x)\\geq[\\hat{\\lambda}_1+\\epsilon]x^{p-1}\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\mbox{and all}\\ x\\geq 0.\n\t\\end{equation}\n\t\n\tLet $\\lambda>\\tilde{\\lambda}$ and suppose that $\\lambda\\in\\mathcal{L}$. Then we can find $u_{\\lambda}\\in S_{\\lambda}\\subseteq {\\rm int}\\, C_+$. We have\n\t\\begin{equation}\\label{eq23}\n\t\t-\\Delta_pu_{\\lambda}=\\lambda u_{\\lambda}^{-\\gamma}+f(z,u_{\\lambda})>\\tilde{\\lambda}u_{\\lambda}^{-\\gamma}+f(z,u_{\\lambda})\\geq[\\hat{\\lambda}_1+\\epsilon]u_{\\lambda}^{p-1}\\ \\mbox{for a.a.}\\ z\\in\\Omega\\ (\\mbox{see (\\ref{eq22})}).\n\t\\end{equation}\n\t\n\tSince $u_{\\lambda}\\in {\\rm int}\\, C_+$, we can find $t\\in(0,1)$ so small that\n\t\\begin{equation}\\label{eq24}\n\t\t\\hat{y}_1=t\\hat{u}_1\\leq u_{\\lambda}\n\t\\end{equation}\n\t(see Proposition 2.1 of Marano \\& Papageorgiou \\cite{14}). 
We have\n\t\\begin{equation}\\label{eq25}\n\t\t-\\Delta_p\\hat{y}_1=\\hat{\\lambda}_1\\hat{y}_1^{p-1}<[\\hat{\\lambda}_1+\\epsilon]\\hat{y}_1^{p-1}\\ \\mbox{for almost all}\\ z\\in\\Omega.\n\t\\end{equation}\n\t\n\tUsing (\\ref{eq24}), we can define the Carath\\'eodory function $\\beta(z,x)$ as follows:\n\t\\begin{eqnarray}\\label{eq26}\n\t\t\\beta(z,x)=\\left\\{\n\t\t \\begin{array}{lll}\n\t\t\t & [\\hat{\\lambda}_1 + \\epsilon]\\hat{y}_1(z)^{p-1}& \\mbox{if}\\ x<\\hat{y}_1(z)\\\\\n\t\t\t & [\\hat{\\lambda}_1 + \\epsilon]x^{p-1} & \\mbox{if}\\ \\hat{y}_1(z)\\leq x\\leq u_{\\lambda}(z)\\\\\n\t\t\t & [\\hat{\\lambda}_1 + \\epsilon]u_{\\lambda}(z)^{p-1}& \\mbox{if}\\ u_{\\lambda}(z)<x.\n\t\t \\end{array}\\right.\n\t\\end{eqnarray}\n\t\n\tUsing (\\ref{eq23}), (\\ref{eq25}), (\\ref{eq26}) and a standard truncation-comparison argument, we produce a solution $y\\in\\left[\\hat{y}_1,u_{\\lambda}\\right]\\cap {\\rm int}\\,C_+$ of the problem $-\\Delta_py(z)=[\\hat{\\lambda}_1+\\epsilon]y(z)^{p-1}$ in $\\Omega$, $y|_{\\partial\\Omega}=0$. Then $\\hat{\\lambda}_1+\\epsilon$ would be an eigenvalue of $(-\\Delta_p,W^{1,p}_0(\\Omega))$ with a positive eigenfunction, contradicting the fact that only $\\hat{\\lambda}_1$ admits eigenfunctions of constant sign. Therefore $\\lambda\\notin\\mathcal{L}$ and so $\\lambda^*\\leq\\tilde{\\lambda}<+\\infty$.\n\\end{proof}\n\nThis means that not every $\\lambda>0$ is admissible.\n\\begin{prop}\\label{prop12}\n\tIf hypotheses $H(f)$ hold, then $\\lambda^*\\in\\mathcal{L}$.\n\\end{prop}\n\\begin{proof}\n\tLet $\\{\\lambda_n\\}_{n\\geq1}\\subseteq(0,\\lambda^*)$ and assume that $\\lambda_n\\rightarrow(\\lambda^*)^{-}$ as $n\\rightarrow\\infty$. We can find $u_n=u_{\\lambda_n}\\in S_{\\lambda_n}\\subseteq {\\rm int}\\,C_+$ for all $n\\in\\mathbb N$. Then\n\t\\begin{equation}\\label{eq29}\n\t\t\\langle A(u_n),h\\rangle = \\int_\\Omega[\\lambda_n u_n^{-\\gamma}+f(z,u_n)]hdz\\ \\mbox{for all}\\ h\\in W^{1,p}_0(\\Omega),\\ \\mbox{all}\\ n\\in\\mathbb N.\n\t\\end{equation}\n\t\n\tSuppose that $||u_n||\\rightarrow\\infty$. We set $y_n=\\frac{u_n}{||u_n||},\\ n\\in\\mathbb N$. Then $||y_n||=1, y_n\\geq0$ for all $n\\in\\mathbb N$. 
So, we may assume that\n\t\\begin{equation}\\label{eq30}\n\t\ty_n\\xrightarrow{w}y\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{and}\\ y_n\\rightarrow y\\ \\mbox{in}\\ L^p(\\Omega)\\ \\mbox{as}\\ n\\rightarrow\\infty.\n\t\\end{equation}\n\t\n\tFrom (\\ref{eq29}) we have\n\t\\begin{equation}\\label{eq31}\n\t\t\\langle A(y_n),h\\rangle = \\int_\\Omega\\left[\\frac{\\lambda_n}{||u_n||^{p+\\gamma-1}}y^{-\\gamma}_n + \\frac{N_f(u_n)}{||u_n||^{p-1}}\\right]hdz\\ \\mbox{for all}\\ h\\in W^{1,p}_0(\\Omega),\\ n\\in\\mathbb N.\n\t\\end{equation}\n\t\n\tHypotheses $H(f)(i),(ii)$ imply that\n\t$$\n\t|f(z,x)|\\leq c_7[1+x^{p-1}]\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\mbox{all}\\ x\\geq0,\\ \\mbox{and some}\\ c_7>0.\n\t$$\n\t\n\tThis growth condition implies that\n\t\\begin{equation}\\label{eq32}\n\t\t\\left\\{\\frac{N_f(u_n)}{||u_n||^{p-1}}\\right\\}_{n\\geq1}\\subseteq L^{p'}(\\Omega)\\ \\mbox{is bounded}.\n\t\\end{equation}\n\t\n\tThen (\\ref{eq32}) and hypothesis $H(f)(ii)$ imply that at least for a subsequence, we have\n\t\\begin{eqnarray}\\label{eq33}\n\t\t\\frac{N_f(u_n)}{||u_n||^{p-1}}\\xrightarrow{w}\\eta_0(z)y^{p-1}\\ \\mbox{in}\\ L^{p'}(\\Omega)\\ \\mbox{as}\\ n\\rightarrow\\infty,\\\\\n\t\t\\mbox{with}\\ \\eta\\leq\\eta_0(z)\\leq\\hat{\\eta}\\ \\mbox{for almost all}\\ z\\in\\Omega \\nonumber \\\\\n\t\t\\mbox{(see Aizicovici, Papageorgiou \\& Staicu \\cite{1}, proof of Proposition 16)}. \\nonumber\n\t\\end{eqnarray}\n\t\n\tIn (\\ref{eq31}) we choose $h=y_n-y\\in W^{1,p}_0(\\Omega)$, pass to the limit as $n\\rightarrow\\infty$, and use (\\ref{eq30}) and (\\ref{eq32}). Then\n\t\\begin{eqnarray}\n\t\t& \\lim_{n\\rightarrow\\infty}\\langle A(y_n),y_n-y\\rangle=0, \\nonumber \\\\\n\t\t\\Rightarrow & y_n\\rightarrow y\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{(see Proposition \\ref{prop3}), hence}\\ ||y||=1,\\ y\\geq0. 
\\label{eq34}\n\t\\end{eqnarray}\n\t\n\tTherefore, if in (\\ref{eq31}) we pass to the limit as $n\\rightarrow\\infty$ and use (\\ref{eq34}) and (\\ref{eq33}), then\n\t\\begin{eqnarray}\n\t\t& \\langle A(y),h\\rangle = \\int_\\Omega\\eta_0(z)y^{p-1}hdz\\ \\mbox{for all}\\ h\\in W^{1,p}_0(\\Omega), \\nonumber \\\\\n\t\t\\Rightarrow & -\\Delta_py(z)=\\eta_0(z)y(z)^{p-1}\\ \\mbox{for almost all}\\ z\\in\\Omega, \\ y|_{\\partial\\Omega}=0. \\label{eq35}\n\t\\end{eqnarray}\n\t\n\tSince $\\eta\\leq\\eta_0(z)\\leq\\hat{\\eta}$ for almost all $z\\in\\Omega$ (see (\\ref{eq33})), using Proposition \\ref{prop4}, we have\n\t$$\n\t\\tilde{\\lambda}_1(\\eta_0)\\leq\\tilde{\\lambda}_1(\\eta)<\\tilde{\\lambda}_1(\\hat{\\lambda}_1)=1.\n\t$$\n\t\n\tSo, from (\\ref{eq35}) and since $||y||=1$, it follows that $y$ must be nodal, contradicting (\\ref{eq34}). Therefore\n\t$$\n\t\\{u_n\\}_{n\\geq1}\\subseteq W^{1,p}_0(\\Omega)\\ \\mbox{is bounded}.\n\t$$\n\t\n\tHence, we may assume that\n\t\\begin{equation}\\label{eq36}\n\t\tu_n\\xrightarrow{w}u^*\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{and}\\ u_n\\rightarrow u^*\\ \\mbox{in}\\ L^p(\\Omega)\\ \\mbox{as}\\ n\\rightarrow\\infty.\n\t\\end{equation}\n\t\n\tOn account of Corollary \\ref{cor9}, we may assume that $\\{u_n\\}_{n\\geq1}$ is nondecreasing. Therefore $u^*\\neq0$. 
Also, we have\n\t\\begin{equation}\\label{eq37}\n\t\t0\\leq(u^*)^{-\\gamma}\\leq u_n^{-\\gamma}\\leq u_1^{-\\gamma}\\in L^{p'}(\\Omega)\\ \\mbox{for all}\\ n\\in\\mathbb N.\n\t\\end{equation}\n\t\n\tFrom (\\ref{eq36}) and by passing to a subsequence if necessary, we can say that\n\t\\begin{equation}\\label{eq38}\n\t\tu_n(z)^{-\\gamma}\\rightarrow u^*(z)^{-\\gamma}\\ \\mbox{for almost all}\\ z\\in\\Omega.\n\t\\end{equation}\n\t\n\tFrom (\\ref{eq37}), (\\ref{eq38}) and Problem 1.19 of Gasinski \\& Papageorgiou \\cite{4}, we have that\n\t\\begin{equation}\\label{eq39}\n\t\tu_n^{-\\gamma}\\xrightarrow{w}(u^*)^{-\\gamma}\\ \\mbox{in}\\ L^{p'}(\\Omega)\\ \\mbox{as}\\ n\\rightarrow\\infty.\n\t\\end{equation}\n\t\n\tIf in (\\ref{eq29}) we choose $h=u_n-u^*\\in W^{1,p}_0(\\Omega)$, pass to the limit as $n\\rightarrow\\infty$ and use (\\ref{eq39}) and the fact that $\\{N_f(u_n)\\}_{n\\geq1}\\subseteq L^{p'}(\\Omega)$ is bounded, then\n\t\\begin{eqnarray}\n\t\t& & \\lim_{n\\rightarrow\\infty}\\langle A(u_n),u_n-u^*\\rangle=0, \\nonumber \\\\\n\t\t& \\Rightarrow & u_n\\rightarrow u^*\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{(see Proposition \\ref{prop3})}. \\label{eq40}\n\t\\end{eqnarray}\n\t\n\tFinally, in (\\ref{eq29}) we pass to the limit as $n\\rightarrow\\infty$ and use (\\ref{eq39}) and (\\ref{eq40}). 
We obtain\n\t\\begin{eqnarray*}\n\t\t& & \\langle A(u^*),h\\rangle = \\int_\\Omega[\\lambda^*(u^*)^{-\\gamma} + f(z,u^*)]hdz\\ \\mbox{for all}\\ h\\in W^{1,p}_0(\\Omega), \\\\\n\t\t& \\Rightarrow & u^*\\in S_{\\lambda^*}\\subseteq {\\rm int}\\,C_+\\ \\mbox{and}\\ \\lambda^*\\in\\mathcal{L}.\n\t\\end{eqnarray*}\nThis completes the proof.\n\\end{proof}\nWe have proved that\n$$\n\\mathcal{L}=(0,\\lambda^*].\n$$\t\n\\begin{prop}\\label{prop13}\n\tIf hypotheses $H(f)$ hold and $\\lambda\\in(0,\\lambda^*)$, then problem $(P_\\lambda)$ admits at least two positive solutions\n\t$$\n\tu_\\lambda,\\hat{u}_\\lambda\\in {\\rm int}\\,C_+, \\hat{u}_{\\lambda}-u_\\lambda\\in C_+\\backslash\\{0\\}.\n\t$$\n\\end{prop}\n\\begin{proof}\n\tLet $u^*\\in S_{\\lambda^*}\\subseteq {\\rm int}\\,C_+$ (see Proposition \\ref{prop12}). Invoking Proposition \\ref{prop10}, we can find $u_\\lambda\\in S_\\lambda\\subseteq {\\rm int}\\,C_+$ such that\n\t\\begin{equation}\\label{eq41}\n\t\tu^*-u_\\lambda\\in {\\rm int}\\,C_+.\n\t\\end{equation}\n\t\n\tWe consider the Carath\\'eodory function $\\tau_\\lambda(z,x)$ defined by\n\t\\begin{equation}\\label{eq42}\n\t\t\\tau_\\lambda(z,x)=\\left\\{\n\t\t\t\\begin{array}{ll}\n\t\t\t\t\\lambda u_\\lambda(z)^{-\\gamma} + f(z,u_\\lambda(z)) & \\mbox{if}\\ x\\leq u_\\lambda(z) \\\\\n\t\t\t\t\\lambda x^{-\\gamma} + f(z,x) & \\mbox{if}\\ u_\\lambda(z)<x.\n\t\t\t\\end{array}\\right.\n\t\\end{equation}\n\t\n\tWe set $T_\\lambda(z,x)=\\int_0^x\\tau_\\lambda(z,s)ds$ and consider the functional $\\tilde{\\varphi}_\\lambda:W^{1,p}_0(\\Omega)\\rightarrow\\mathbb R$ defined by\n\t$$\n\t\\tilde{\\varphi}_\\lambda(u)=\\frac{1}{p}||Du||_p^p-\\int_\\Omega T_\\lambda(z,u)dz\\ \\mbox{for all}\\ u\\in W^{1,p}_0(\\Omega).\n\t$$\n\t\n\t\\begin{claim}\\label{claim1}\n\t\t$K_{\\tilde{\\varphi}_\\lambda}\\subseteq[u_\\lambda)\\cap {\\rm int}\\,C_+$, where $[u_\\lambda)=\\{u\\in W^{1,p}_0(\\Omega):u_\\lambda(z)\\leq u(z)\\ \\mbox{for almost all}\\ z\\in\\Omega\\}$.\n\t\\end{claim}\n\t\n\tClaim \\ref{claim1} follows from (\\ref{eq42}) and the nonlinear regularity theory, as in the previous propositions. Using (\\ref{eq41}), we can also show that $u_\\lambda$ is a local minimizer of $\\tilde{\\varphi}_\\lambda$, and so we can find $\\rho_\\lambda\\in(0,1)$ small such that\n\t\\begin{equation}\\label{eq50}\n\t\t\\tilde{\\varphi}_\\lambda(u_\\lambda)<\\tilde{m}_\\lambda=\\inf[\\tilde{\\varphi}_\\lambda(u):||u-u_\\lambda||=\\rho_\\lambda].\n\t\\end{equation}\n\t\n\tMoreover, hypothesis $H(f)(ii)$ implies that\n\t\\begin{equation}\\label{eq51}\n\t\t\\tilde{\\varphi}_\\lambda(t\\hat{u}_1)\\rightarrow-\\infty\\ \\mbox{as}\\ t\\rightarrow+\\infty.\n\t\\end{equation}\n\t\n\t\\begin{claim}\\label{claim2}\n\t\tThe functional $\\tilde{\\varphi}_\\lambda$ satisfies the C-condition.\n\t\\end{claim}\n\t\n\tLet $\\{u_n\\}_{n\\geq1}\\subseteq W^{1,p}_0(\\Omega)$ be a sequence such that $|\\tilde{\\varphi}_\\lambda(u_n)|\\leq M_1$ for some $M_1>0$, all $n\\in\\mathbb N$, and $(1+||u_n||)\\tilde{\\varphi}'_\\lambda(u_n)\\rightarrow0$ in $W^{-1,p'}(\\Omega)$ as $n\\rightarrow\\infty$. Then\n\t\\begin{equation}\\label{eq52}\n\t\t\\left|\\langle A(u_n),h\\rangle-\\int_\\Omega\\tau_\\lambda(z,u_n)hdz\\right|\\leq\\frac{\\varepsilon_n||h||}{1+||u_n||}\\ \\mbox{for all}\\ h\\in W^{1,p}_0(\\Omega),\\ \\mbox{with}\\ \\varepsilon_n\\rightarrow0.\n\t\\end{equation}\n\t\n\tIn (\\ref{eq52}) we choose $h=-u^-_n\\in W^{1,p}_0(\\Omega)$. Using (\\ref{eq42}), we obtain\n\t\\begin{eqnarray}\n\t\t& & ||Du^-_n||_p^p\\leq c_9(1+||u^-_n||)\\ \\mbox{for some}\\ c_9>0, \\mbox{and all}\\ n\\in\\mathbb N, \\nonumber \\\\\n\t\t& \\Rightarrow & \\{u^-_n\\}_{n\\geq1}\\subseteq W^{1,p}_0(\\Omega)\\ \\mbox{is bounded}. \\label{eq53}\n\t\\end{eqnarray}\n\t\n\tSuppose that $||u^+_n||\\rightarrow\\infty$ and let $y_n=\\frac{u^+_n}{||u^+_n||},\\ n\\in\\mathbb N$. Then $||y_n||=1,y_n\\geq0$ for all $n\\in\\mathbb N$. 
So, we may assume that\n\t\\begin{equation}\\label{eq54}\n\t\ty_n\\xrightarrow{w}y\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{and}\\ y_n\\rightarrow y\\ \\mbox{in}\\ L^p(\\Omega),\\ y\\geq0.\n\t\\end{equation}\n\t\n\tFrom (\\ref{eq52}) and (\\ref{eq53}), we have\n\t\\begin{equation}\\label{eq55}\n\t\t|\\langle A(y_n),h\\rangle - \\int_\\Omega\\frac{N_{\\tau_\\lambda}(u_n^+)}{||u_n^+||^{p-1}}hdz| \\leq \\varepsilon_n'||h||\\ \\mbox{for all}\\ h\\in W^{1,p}_0(\\Omega),\\ \\mbox{with}\\ \\varepsilon_n'\\rightarrow0.\n\t\\end{equation}\n\tFrom (\\ref{eq42}) and hypothesis $H(f)(ii)$, we have\n\t\\begin{eqnarray}\\label{eq56}\n\t\t\\frac{N_{\\tau_\\lambda}(u_n^+)}{||u_n^+||^{p-1}}\\xrightarrow{w} \\eta_0(z)y^{p-1}\\ \\mbox{in}\\ L^{p'}(\\Omega)\\ \\mbox{as}\\ n\\rightarrow\\infty\\ \\\\\n\t\t\\mbox{with}\\ \\eta\\leq\\eta_0(z)\\leq\\hat{\\eta}\\ \\mbox{for almost all}\\ z\\in\\Omega.\\ \\mbox{(see (\\ref{eq33}))}.\\nonumber\n\t\\end{eqnarray}\n\t\n\tIn (\\ref{eq55}) we choose $h=y_n-y\\in W^{1,p}_0(\\Omega)$ and pass to the limit as $n\\rightarrow\\infty$. Then\n\t\\begin{eqnarray}\n\t\t& & \\lim_{n\\rightarrow\\infty}\\langle A(y_n),y_n-y\\rangle=0, \\nonumber \\\\\n\t\t& \\Rightarrow & y_n\\rightarrow y\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{(see Proposition \\ref{prop3}), hence}\\ ||y||=1, y\\geq0. \\label{eq57}\n\t\\end{eqnarray}\n\t\n\tThen passing to the limit as $n\\rightarrow\\infty$ in (\\ref{eq55}) and using (\\ref{eq56}) and (\\ref{eq57}), we obtain\n\t\\begin{eqnarray}\n\t\t& & \\langle A(y),h\\rangle = \\int_\\Omega\\eta_0(z)y^{p-1}hdz\\ \\mbox{for all}\\ h\\in W^{1,p}_0(\\Omega), \\nonumber \\\\\n\t\t& \\Rightarrow & -\\Delta_py(z)=\\eta_0(z)y(z)^{p-1}\\ \\mbox{for almost all}\\ z\\in\\Omega, y|_{\\partial\\Omega}=0. 
\\label{eq58}\n\t\\end{eqnarray}\n\t\n\tAs before, using Proposition \\ref{prop4}, we have\n\t\\begin{eqnarray*}\n\t\t& & \\tilde{\\lambda}_1(\\eta_0)\\leq\\tilde{\\lambda}_1(\\eta)<\\tilde{\\lambda}_1(\\hat{\\lambda}_1)=1, \\\\\n\t\t& \\Rightarrow & y\\ \\mbox{must be nodal (see (\\ref{eq58})), contradicting (\\ref{eq57})}.\n\t\\end{eqnarray*}\n\t\n\tThis proves that $\\{u^+_n\\}_{n\\geq1}\\subseteq W^{1,p}_0(\\Omega)$ is bounded. Hence\n\t$$\n\t\\{u_n\\}_{n\\geq1}\\subseteq W^{1,p}_0(\\Omega)\\ \\mbox{is bounded (see (\\ref{eq53}))}.\n\t$$\n\t\n\tSo, we may assume that\n\t\\begin{equation}\\label{eq59}\n\t\tu_n\\xrightarrow{w}u\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{and}\\ u_n\\rightarrow u\\ \\mbox{in}\\ L^p(\\Omega)\\ \\mbox{as}\\ n\\rightarrow\\infty.\n\t\\end{equation}\n\t\n\tIn (\\ref{eq52}) we choose $h=u_n-u\\in W^{1,p}_0(\\Omega)$, pass to the limit as $n\\rightarrow\\infty$ and use (\\ref{eq59}). Then\n\t\\begin{eqnarray*}\n\t\t& & \\lim_{n\\rightarrow\\infty}\\langle A(u_n),u_n-u\\rangle = 0, \\\\\n\t\t& \\Rightarrow & u_n\\rightarrow u\\ \\mbox{in}\\ W^{1,p}_0(\\Omega)\\ \\mbox{(see Proposition \\ref{prop3})}.\n\t\\end{eqnarray*}\n\t\n\tThis proves Claim \\ref{claim2}.\n\t\n\tOn account of (\\ref{eq50}), (\\ref{eq51}) and Claim \\ref{claim2} we can apply Theorem \\ref{th1} (the mountain pass theorem) and find $\\hat{u}_\\lambda\\in W^{1,p}_0(\\Omega)$ such that\n\t\\begin{eqnarray*}\n\t\t\\hat{u}_\\lambda\\in K_{\\tilde{\\varphi}_\\lambda}\\subseteq[u_\\lambda)\\cap {\\rm int}\\,C_+\\ \\mbox{(see Claim \\ref{claim1})}, \\\\\n\t\t\\tilde{m}_\\lambda\\leq\\tilde{\\varphi}_\\lambda(\\hat{u}_\\lambda)\\ \\mbox{(see (\\ref{eq50})), hence}\\ \\hat{u}_\\lambda\\neq u_\\lambda.\n\t\\end{eqnarray*}\n\t\n\tTherefore $\\hat{u}_\\lambda\\in {\\rm int}\\,C_+$ is the second positive solution of \\eqref{eqp} and\n\t$$\n\t\\hat{u}_\\lambda - u_\\lambda\\in C_+\\backslash\\{0\\}.\n\t$$\nThe proof is now complete.\n\\end{proof}\t\n\n\\textcolor{black}{Therefore we 
have also proved Theorem A, which is the main result of this paper.}\n\n\\begin{remark}\n\tAn interesting open problem is whether there is such a bifurcation-type theorem for resonant problems, that is,\n\t$$\n\t\\hat{\\lambda}_1\\leq\\liminf_{x\\rightarrow +\\infty}\\frac{f(z,x)}{x^{p-1}} \\leq\\limsup_{x\\rightarrow +\\infty}\\frac{f(z,x)}{x^{p-1}}\\leq\\hat{\\eta}\\ \\mbox{uniformly for almost all}\\ z\\in\\Omega\n\t$$\n\tor even for the nonuniformly nonresonant problems, that is,\n\t$$\\eta(z)\\leq\\liminf\\limits_{x\\rightarrow+\\infty}\\frac{f(z,x)}{x^{p-1}}\\leq\\limsup\\limits_{x\\rightarrow +\\infty}\\frac{f(z,x)}{x^{p-1}}\\leq\\hat{\\eta}\\ \\mbox{uniformly for almost all}\\ z\\in\\Omega$$\n\twith $\\eta\\in L^\\infty(\\Omega)$ such that\n\t$$\n\t\\hat{\\lambda}_1\\leq\\eta(z)\\ \\mbox{for almost all}\\ z\\in\\Omega,\\ \\eta\\not\\equiv\\hat{\\lambda}_1.\n\t$$\n\\end{remark}\n\nIn both cases it seems to be difficult to show that $\\lambda^*<\\infty$. Additional conditions on $f(z,\\cdot)$ might be needed.\n\n\\medskip\n{\\bf Acknowledgements.} \\textcolor{black}{The authors wish to thank the referee for his\/her remarks and suggestions.}\nThis research was supported by the Slovenian Research Agency grants\nP1-0292, J1-8131, J1-7025, N1-0064, and N1-0083. V.D.~R\\u adulescu acknowledges the support through a grant of the Romanian Ministry of Research and Innovation, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2016-0130,\nwithin PNCDI III.\n\t\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec1}\n\nSymmetry breaking and formation of ordered structures~(patterns) in spatially extended dissipative systems driven away from equilibrium via some instability mechanisms are very interesting and important phenomena, occurring widely in physics, chemistry, biology, cosmology, and even economics and sociology,\netc.~\\cite{Nicolis1977,Haken1987,Murray1989,Cross1993,Bowman1998,\nWeidman2000,Cross2009}. 
Well-known instability mechanisms for pattern formation include the Rayleigh-B\\'{e}nard instability in thermal fluid convection~\\cite{Benard1901,Drazin1981}, the\nTaylor-Couette instability in rotating fluids~\\cite{Taylor1923}, the\nelectrohydrodynamic instability in nematic liquid crystals~\\cite{Dubois1978},\nand the Faraday instability for parametric waves~\\cite{Faraday1831}; other typical examples are the lasing instability in laser devices~\\cite{Newell1990,Arecchi1999,Lugiato1999}, the\nMullins-Sekerka instability for solidification pattern growth (e.g., snowflakes)~\\cite{Mullins1964},\nand the Turing instability for structures created in chemical reaction and living systems (e.g., animal coats)~\\cite{Turing1952}. For details, see Refs.~\\cite{Nicolis1977,Haken1987,Murray1989,Cross1993,Bowman1998,\nWeidman2000,Cross2009} and references cited therein.\nOne of the main characteristics of these pattern-forming systems is that an external drive (stress) must be applied; the induced instability triggers symmetry breaking, so that the dissipative structures appear already in the linear regime.\n\n\nBesides the linear instability in driven dissipative systems, much attention has also been paid to the study of modulational instability (MI) in nonlinear systems~\\cite{Segur2007,Zakharov2009,Zakharov2013,Kibler2015,Biondini2016,\nConforti2016,Mosca2018,Kraych20191}. MI is a typical nonlinear instability first discovered in the study of water waves~\\cite{Benjamin1967},\nand may apply to non-dissipative and non-driven systems, where a plane wave of finite amplitude may undergo an instability and lose its energy to sideband components, resulting in a nonlinear modulation of the plane wave. Usually, MI is considered as a wave-dynamics problem for conservative nonlinear systems, where wave envelopes are typically governed by the cubic nonlinear Schr\\\"{o}dinger~(NLS) equation, with local and attractive Kerr nonlinearities. 
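For this local cubic NLS model, the onset of MI can be made explicit by a textbook linearization (sketched here for orientation; the dimensionless normalization is our choice and is not taken from the works cited above). For $i\\partial_t\\psi+\\frac{1}{2}\\partial_x^2\\psi+|\\psi|^2\\psi=0$, perturbing the plane wave $\\psi_0=\\sqrt{P}\\,e^{iPt}$ by sidebands $\\propto e^{\\pm i(Kx-\\Omega t)}$ yields the dispersion relation\n$$\\Omega^2=\\frac{K^2}{2}\\left(\\frac{K^2}{2}-2P\\right),$$\nso that $\\Omega$ becomes complex and the sidebands grow exponentially for $0<K^2<4P$, with the maximum growth rate $P$ attained at $K^2=2P$. For a nonlocal Kerr response with kernel $R$, the factor $P$ is replaced by $P\\tilde{R}(K)$, with $\\tilde{R}(K)$ the Fourier transform of $R$; the sign and shape of $\\tilde{R}(K)$ then determine whether, and at which wavenumber, MI sets in. 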
For such systems, the existence of MI is thought to be the major reason for the formation of solitons\n(see Refs.~\\cite{Segur2007,Zakharov2009,Zakharov2013,Kibler2015,Biondini2016,Conforti2016,Mosca2018,Kraych20191,Reece2007,Solli,\nNguyen2017,Kraych20192,Benjamin1967,Kamchatnov1997,Kartashov2011,Kivshar2006,Agrawal2007,Pethick2008} and references cited therein).\n\n\nIn recent years, considerable effort has been devoted to the study of\nMI in conservative nonlinear systems described by cubic NLS equations with nonlocal Kerr nonlinearity. It was found that MI may occur in such systems even when the Kerr nonlinearity is repulsive, or has both repulsive and attractive parts~\\cite{Krolikowski2001,Wyller2002,Krolikowski2004,\nDoktorv2007,Henkel2010,Esbensen2011,Tiofack2015,Maucher2016,\nMaucher2017}, which provides the possibility to realize spontaneous symmetry breaking~\\cite{Malomed2013} and generate spatially extended, ordered structures in nonlocal nonlinear media. Various self-organized patterns were found~\\cite{Tiofack2015,Maucher2016,Maucher2017,Mottl2012,Labeyrie2014,\nZhangYC2018}, especially atomic-density patterns formed in Rydberg-dressed~\\cite{Henkel2010,Cinti2010,Cinti2014,Henkel2012,Hsuch2012,\nHsuch2017,Li2018} and dipolar~\\cite{Saito2009,Lu2015,Kadau2016,Wachtler2016,Xi2018,Zhang2019} Bose-Einstein condensates.\n\n\nOn the other hand, in the past two decades a large number of research works focused on the investigation of cold Rydberg atomic gases~\\cite{Gallagher2006,Saffman2010} working under the condition of electromagnetically induced transparency (EIT). EIT is an important quantum destructive interference effect occurring typically in resonant three-level atomic systems, by which the absorption of a probe laser field can be largely eliminated by a control laser field~\\cite{Fleischhauer2005}. 
Due to strong, long-range atom-atom interaction~(also called Rydberg-Rydberg interaction),\nsuch systems are desirable nonlinear optical media with strong, nonlocal Kerr nonlinearity if the Rydberg-Rydberg interaction is suitably mapped to photon-photon interaction via EIT~\\cite{Fir2016,Mur2016}. In an interesting work, Sevin\\c{c}li {\\it et al.}~\\cite{Sevincli2011} reported a self-organized hexagonal optical pattern arising via an MI of a plane-wave probe beam in a cold Rydberg atomic gas with a repulsive Rydberg-Rydberg interaction.\n\n\nIn this work, we propose and analyze a scheme for realizing various self-organized optical structures and their structural phase transition in a cold Rydberg atomic gas via a Rydberg-EIT~\\cite{Mohapatra2007,Pritchard2010}. By exploiting a microwave dressing (i.e., a microwave field coupling two electrically excited Rydberg states)~\\cite{Tana2011,Sedlacek2012,Yu2013,Maxwell2013,Petrosyan2014,Pohl2014,Li2014,\nLi2015,Rao2014,Adams2014,Liu2015,Thompson2017,Votg2018,Vogt2019,Jing2020}, we show that the nonlocal Kerr nonlinearity of the Rydberg gas (which has only a repulsive Rydberg-Rydberg interaction in the absence of the microwave field) is significantly modified, and its strength and sign can be tuned actively. Based on such nonlocal Kerr nonlinearity, we demonstrate that a homogeneous (plane wave) state of the probe laser field can undergo MI and hence spontaneous symmetry breaking, which may result in the formation of various ordered optical patterns.\n\n\nThrough detailed analytical and numerical analysis, we find that a homogeneous state of the probe field first transitions into a hexagonal lattice pattern (which is the only lattice pattern when the microwave dressing is absent). Interestingly, this hexagonal lattice pattern may undergo a structural phase transition and develop into several types of square lattice patterns when the microwave field is applied and its strength is increased. 
Moreover, as an outcome of the MI, the formation of nonlocal spatial optical solitons is also possible by a suitable choice of system parameters. Different from the results reported before, the optical patterns and nonlocal optical solitons found here can be flexibly manipulated via the adjustment of the effective probe-field intensity, nonlocality degree of the Kerr nonlinearity, and the strength of the microwave field. Our study opens a way for actively controlling the self-organization and structural phase transition of optical patterns through microwave dressing of Rydberg gases, which is not only of fundamental interest, but also useful for potential applications in optical information processing and transmission.\n\n\nThe remainder of the article is arranged as follows. In Sec.~\\ref{sec2}, we describe the physical model, discuss the modification and enhancement of the Kerr nonlinearity contributed by the microwave field, and derive the nonlinear envelope equation of the probe field. In Sec.~\\ref{sec3}, we consider the MI of a plane-wave state, investigate the formation and structural phase transitions of optical patterns controlled by the microwave field, effective probe-field intensity, the nonlocal Kerr nonlinearity and its nonlocality degree. The result on the formation of nonlocal spatial optical solitons is also presented. The last section (Sec.~\\ref{sec4}) gives a summary of our main research results.\n\n\n\\section{Physical model, nonlinear envelope equation, and enhanced Kerr nonlinearity}\\label{sec2}\n\n\\subsection{Physical model}\\label{sec21}\n\nWe consider an ensemble of lifetime-broadened four-level atoms in a ladder-type level configuration, shown schematically in Fig.~{\\ref{Fig1}}{\\color{blue}(a)}.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{fig1.eps}\n\\caption{\\footnotesize Schematics of the model.\n(a)~Ladder-type four-level atomic configuration for realizing the microwave-dressed Rydberg-EIT. 
Here, the weak probe laser field~(blue), strong control laser field~(red), and strong microwave field~(green) with\nhalf Rabi frequencies ${\\Omega}_{p}$, $\\Omega_c$, and $\\Omega_m$ drive the transitions $|1\\rangle\\leftrightarrow|2\\rangle$, $|2\\rangle\\leftrightarrow|3\\rangle$, and $|3\\rangle\\leftrightarrow|4\\rangle$, respectively;\nStates $|1\\rangle$ and $|2\\rangle$ are respectively ground and excited states, both $|3\\rangle$ and $|4\\rangle$ are highly excited Rydberg states; $\\Delta_2$, $\\Delta_3$, and $\\Delta_4$ are respectively the one-, two-, and three-photon detunings; $\\Gamma_{12}$, $\\Gamma_{23}$, and $\\Gamma_{24}$ are the spontaneous emission decay rates from $|2\\rangle$ to $|1\\rangle$, $|3\\rangle$ to $|2\\rangle$ and\n$|4\\rangle$ to $|2\\rangle$, respectively.\nTwo Rydberg atoms locating respectively at position ${\\bf r}$ and ${\\bf r}'$ interact through van der Waals potential $\\hbar {\\mathcal V}_{\\rm vdw}^{l}({\\bf r}^{\\prime}-{\\bf r})$ ($l=s,d,e$; see text).\n(b)~Possible experimental geometry, where small solid circles denote atoms and large dashed circles denote Rydberg blockade spheres.\n(c)~Emergence of an optical pattern via modulation instability.\n}\\label{Fig1}\n\\end{figure}\nHere, the weak probe laser field with central angular frequency $\\omega_{p}$, wavevector ${\\bf k}_{p}$, and half Rabi frequency\n$\\Omega_p$ drives the transition from atomic ground state $|1\\rangle$ to intermediate state $|2\\rangle$, and the strong control laser field with\ncentral angular frequency $\\omega_c$, wavevector ${\\bf k}_c$, and\nhalf Rabi frequency $\\Omega_c$ drives the transition $|2\\rangle$ to the first highly excited Rydberg state $|3\\rangle$. 
This ladder-type three-level EIT is dressed by a microwave field with central angular frequency\n$\\omega_m$, wavevector ${\\bf k}_m$, and half Rabi frequency\n$\\Omega_m$, which couples the transition between the Rydberg state $|3\\rangle$ and another Rydberg state $|4\\rangle$.\nThe total electric field acting on the atomic system can be written as ${\\bf E(r},t)=\\sum_{j}{\\bf e}_{j}{\\cal E}_{j}e^{i({\\bf k}_{j}\\cdot{\\bf r}-\\omega_{j}t)} +{\\rm H.c.}$, with ${\\bf e}_{j}$ and ${\\cal E}_{j}$ respectively the polarization unit vector and the envelope of the $j$-th laser field ($j=p,c,m$). $\\Delta_2$, $\\Delta_3$, and $\\Delta_4$ are respectively the one-, two-, and three-photon detunings; $\\Gamma_{12}$, $\\Gamma_{23}$, and $\\Gamma_{24}$ are the spontaneous emission decay rates from $|2\\rangle$ to $|1\\rangle$, $|3\\rangle$ to $|2\\rangle$, and $|4\\rangle$ to $|2\\rangle$, respectively.\nThe microwave field employed here is used to realize a microwave-dressed Rydberg-EIT~\\cite{Tana2011,Sedlacek2012,Yu2013,Maxwell2013,Petrosyan2014,Pohl2014,Li2014,Li2015,\nRao2014,Adams2014,Liu2015,Thompson2017,Votg2018,Vogt2019,Jing2020} and thus to modify the Rydberg-Rydberg interaction,\nwhich, in turn, can manipulate the interaction strength and sign for the photons in the probe field and hence realize self-organized optical structures not reported before.\n\n\nThe dynamics of the system is controlled by the Hamiltonian $\\hat{H}=\\mathcal{N}_a\\int d^3{\\bf r}\\hat{\\mathcal{H}}({\\bf r},t)$, where $\\hat{\\mathcal{H}}({\\bf r},t)$ is the Hamiltonian density and $\\mathcal{N}_a$ is the atomic density. 
Under the electric-dipole and rotating-wave approximations, the Hamiltonian density in the interaction picture reads\n\\begin{align}\\label{Hamiltonian}\n\\hat{\\mathcal{H}}\\equiv\\hat{\\mathcal{H}}_1+\\hat{\\mathcal{H}}_{{\\rm vdW}},\n\\end{align} \\noindent\nwhere $\\hat{\\mathcal{H}}_1$ describes the unperturbed atoms as well as the interaction between the atoms and the laser fields, and $\\hat{\\mathcal{H}}_{{\\rm vdW}}$ describes the Rydberg-Rydberg interaction; they are respectively given by\n{\\small\n\\begin{subequations}\n\\begin{align}\n&\\hat{\\mathcal{H}}_1=-\\hbar\\sum_{\\alpha=2}^4\\Delta_{\\alpha}\\hat{S}_{\\alpha\\alpha} -\\hbar(\\Omega_p \\hat{S}_{12}+\\Omega_c\n\\hat{S}_{23}+\\Omega_m \\hat{S}_{34}+{\\rm H.c.}),\\\\\n&\\hat{\\mathcal{H}}_{{\\rm vdW}}=\\hbar\\mathcal{N}_a\\int d^3{ r}^\\prime\\bigg\\{\\sum_{\\alpha=3,4}\\hat{S}_{\\alpha\\alpha}({\\bf\nr}^{\\prime},t)\\mathcal{V}_{\\alpha\\alpha}^s({\\bf r}^\\prime-{\\bf r})\\hat{S}_{\\alpha\\alpha}({\\bf r},t)\\notag\\\\\n&\\qquad\\,+\\mathcal{V}_{34}^d({\\bf r}^\\prime-{\\bf r})\\left[\\hat{S}_{33}({\\bf\nr^\\prime},t)\\hat{S}_{44}({\\bf r},t)+\\hat{S}_{44}({\\bf r^\\prime},t)\\hat{S}_{33}({\\bf r},t)\\right]\\notag\\\\\n&\\qquad\\,+\\mathcal{V}_{34}^e\n({\\bf r}^\\prime-{\\bf r})\\left[\\hat{S}_{43}({\\bf\nr^\\prime},t)\\hat{S}_{34}({\\bf r},t)+\\hat{S}_{34}({\\bf r^\\prime},t)\\hat{S}_{43}({\\bf r},t)\\right]\\bigg\\}.\n\\end{align}\n\\end{subequations}}\\noindent\nHere $d^3 r'=dx' dy' dz'$;\n$\\hat{S}_{\\alpha\\beta}=|\\beta\\rangle \\langle \\alpha|\\exp\\{i[({\\bf k}_{\\beta}-{\\bf k}_{\\alpha})\\cdot{\\bf r}-(\\omega_{\\beta}-\\omega_{\\alpha}+\\Delta_{\\beta}-\\Delta_{\\alpha})t]\\}$\nis the transition operator satisfying the commutation relation\n$[\\hat{S}_{\\alpha\\beta}({\\bf\nr},t),\\hat{S}_{\\alpha^\\prime\\beta^\\prime}({\\bf r}^\\prime,t)]=\\mathcal{N}_a^{-1}\n\\delta ({\\bf r}-{\\bf r}{^\\prime})(\\delta_{\\alpha\\beta'}\\hat{S}_{\\alpha'\\beta}({\\bf 
r},t)-\\delta_{\\alpha'\\beta}\\hat{S}_{\\alpha\\beta'}({\\bf r'},t))$;\nthe one-, two-, and three-photon detunings are respectively given by\n$\\Delta_2=\\omega_p-(\\omega_2-\\omega_1)$, $\\Delta_3= \\omega_c+\\omega_p-(\\omega_3-\\omega_1)$, and\n$\\Delta_4=\\omega_c+\\omega_p+\\omega_m-(\\omega_4-\\omega_1)$, with\n$E_{\\alpha}=\\hbar\\omega_{\\alpha}$ the eigenenergy of the atomic state $|\\alpha\\rangle$.\nThe half Rabi frequencies of the probe, control, and microwave fields are, respectively, $\\Omega_p=({\\bf e}_{p}\\cdot{\\bf p}_{21}){\\cal\nE}_{p}\/\\hbar$, $\\Omega_c=({\\bf e}_{c}\\cdot{\\bf p}_{32}){\\cal E}_{c}\/\\hbar$, and $\\Omega_m=({\\bf e}_{m}\\cdot{\\bf p}_{43}){\\cal\nE}_{m}\/\\hbar$, with ${\\bf p}_{\\alpha\\beta}$ the electric dipole matrix element associated with the transition between the states $|\\alpha \\rangle$ and $|\\beta\\rangle$.\n\n\nThe Hamiltonian density $\\hat{\\mathcal{H}}_{{\\rm vdW}}$ is the contribution of the Rydberg-Rydberg interaction, which contains\nfour parts, represented by $\\mathcal{V}_{33}^s$, $\\mathcal{V}_{44}^s$, $\\mathcal{V}_{\\rm 34}^d$, and $\\mathcal{V}_{\\rm 34}^e$,\nrespectively; the term $\\mathcal{V}_{\\rm 33}^s=-C_{33}^{s}\/|{\\bf r}'-{\\bf r}|^6$\\,\\, ($\\mathcal{V}_{44}^s=-C_{44}^{s}\/|{\\bf r}'-{\\bf r}|^6$)\ndescribes the van der Waals interaction between the two\natoms located respectively at positions ${\\bf r}'$ and ${\\bf r}$\nand excited to the same Rydberg state $|3\\rangle$\\, ($|4\\rangle$); the term $\\mathcal{V}_{\\rm 34}^{d}=-C_{34}^{d}\/|{\\bf r}'-{\\bf r}|^6$\\,\\,\n($\\mathcal{V}_{\\rm 34}^e=-C_{34}^{e}\/|{\\bf r}'-{\\bf r}|^3$)\ndescribes the direct non-resonant van der Waals interaction\n(resonant exchange dipole-dipole interaction)\nbetween the two atoms\nexcited to different Rydberg states (i.e., $|3\\rangle$ and $|4\\rangle$). 
Here $C_{\\alpha\\beta}^{l}$ ($\\{\\alpha\\beta\\}=\\{33,44,34\\}$; $l=s,d,e$) are dispersion parameters~\\cite{Petrosyan2014,Li2014,Li2015}.\n\n\nThe time evolution of the atoms in the system is governed by the optical\nBloch equation\n\\begin{equation}\\label{Bloch0}\n\\frac{\\partial\\rho}{\\partial t}=-\\frac{i}{\\hbar} \\left[{\\hat{ H}},\\rho\\right]-\\Gamma\\left[\\rho\\right],\n\\end{equation}\nwhere $\\rho ({\\bf r},t)=\\langle \\hat{S}({\\bf r},t)\\rangle$~\\cite{note0} is a $4\\times 4$ density matrix (with density matrix elements $\\rho_{\\alpha\\beta}({\\bf r},t)=\\langle \\hat{S}_{\\alpha\\beta} ({\\bf r},t)\\rangle$; $\\alpha,\\beta=1,2,3,4$)\ndescribing the atomic population and coherence, and $\\Gamma$ is a $4\\times 4$ relaxation matrix describing the spontaneous emission\nand dephasing. Explicit expressions of $\\rho_{\\alpha\\beta}({\\bf r},t)$ are presented in Appendix~\\ref{app1}.\n\n\nThe propagation of the probe field is controlled by the Maxwell equation, which under the paraxial and slowly varying envelope approximations reduces to~\\cite{Mur2016}\n\\begin{align}\\label{Maxwell}\n i\\left(\\frac{\\partial}{\\partial z}+\\frac{1}{c}\\frac{\\partial}{\\partial\n t}\\right)\\Omega_p+\\frac{c}{2\\omega_p}\\nabla_{\\perp}^2\\Omega_p\n +\\kappa_{12}\\rho_{21}=0,\n\\end{align}\\noindent\nwhere $\\nabla_{\\perp}^2=\\partial^2\/\\partial x^2+\\partial^2\/\\partial y^2$ describes diffraction, $\\kappa_{12}=\\N_a\\omega_p|({\\bf e}_p\\cdot{\\bf p}_{12})|^2\/(2\\epsilon_0c\\hbar)$ is a parameter describing\nthe coupling between the atoms and the probe field,\nand $c$ is the speed of light in vacuum.\nWithout loss of generality, we assume the probe field propagates along the $z$ direction, i.e., ${\\bf k}_p=(0,0,\\omega_p\/c)$; to suppress the Doppler effect, the microwave field is along the $z$ direction but the control field is along the negative $z$ direction [i.e., ${\\bf k}_m=(0,0,\\omega_m\/c)$ and ${\\bf k}_c=(0,0,-\\omega_c\/c)$]. 
A possible experimental arrangement is given in Fig.~\\ref{Fig1}(b).\n\n\nNote that the physical model described above is valid for any microwave-dressed Rydberg atomic gas. However, for the later calculations, where numerical values of the system parameters are needed, we take a cold $^{87}$Rb atomic gas~\\cite{Petrosyan2014} (which has density-density interaction in the absence of the microwave field) as a realistic example. The assigned atomic levels in Fig.~\\ref{Fig1}(a) are $|1\\rangle=|5 S_{1\/2}\\rangle$, $|2\\rangle=|5P_{3\/2}\\rangle$, $|3\\rangle=|nS_{1 \/ 2}\\rangle$, and $|4\\rangle=|n P_{3 \/ 2}\\rangle$. For example, for principal quantum number $n=60$, the dispersion parameters are\n$C_{33}^{s}=- 2\\pi \\times 140\\,{\\rm GHz}~\\mu {\\rm {m^6}}$~(repulsive interaction), $C_{44}^{s}=2\\pi \\times 295 \\,{\\rm GHz}~\\mu {\\rm\n{m^6}}$~(attractive interaction), $C_{34}^{d}=-2\\pi \\times 3 \\,{\\rm GHz}~\\mu {\\rm {m^6}}$ (repulsive interaction), and\n$C_{34}^{e}=-2\\pi \\times 3.8 \\,{\\rm GHz}~\\mu {\\rm {m^3}}$~(repulsive interaction)~\\cite{Petrosyan2014,Rao2014,note1}, respectively.\nTypical system parameters are chosen as follows:\n$\\Delta_2=3.17\\times 10^2$\\,MHz,\n$\\Delta_3=15.3$\\,MHz,\n$\\Delta_4=1.32$\\,MHz;\n$\\Gamma_{12}=2\\pi \\times 6.1$\\,MHz,\n$\\Gamma_3=\\Gamma_{4}=2\\pi \\times 1.67\\times 10^{-2}$\\,MHz;\n$\\Omega_{c}=20\\,{\\rm MHz}$;\n$\\N_a=1.0\\times 10^{11}\\,{\\rm cm^{-3}}$.\n\n\nWe stress that although the Bloch Eq.~(\\ref{Bloch0}) is for the evolution of one-body density-matrix elements $\\rho_{\\alpha\\beta}({\\bf r},t)$, it involves two-body density-matrix elements $\\rho_{\\alpha\\beta,\\mu\\nu}({\\bf r^{\\prime},r},t)= \\langle\n\\hat{S}_{\\alpha\\beta}({\\bf r^{\\prime}},t) \\hat{S}_{\\mu\\nu}({\\bf r},t)\\rangle$ due to the Rydberg-Rydberg interaction; furthermore, the equation of motion for $\\rho_{\\alpha\\beta,\\mu\\nu}({\\bf r^{\\prime},r},t)$ involves three-body density-matrix elements 
$\\rho_{\\alpha\\beta,\\mu\\nu,\\gamma\\delta}({\\bf r}^{\\prime\\prime},{\\bf r}^{\\prime}, {\\bf r},t)=\\langle\n\\hat{S}_{\\alpha\\beta}({\\bf r}^{\\prime\\prime},t) \\hat{S}_{\\mu\\nu}({\\bf r}',t) \\hat{S}_{\\gamma\\delta}({\\bf r},t)\\rangle$, and so on. Thus an effective approach for solving such an infinite hierarchy of equations involving many-atom correlations is needed.\n\n\n\\subsection{Enhanced Kerr nonlinearity by the microwave dressing}\\label{sec22}\n\nWe first consider the modification of the Kerr nonlinearity of the system induced by the microwave field based on the physical model described above. For simplicity, we assume that the control and microwave fields are strong enough, so that they are not depleted during the propagation of the probe field. Since the probe field is weak, a perturbation expansion can be applied to solve the Maxwell-Bloch (MB) equations (\\ref{Bloch0}) and (\\ref{Maxwell}) by taking $\\Omega_p$ as a small expansion parameter. Generalizing the approach developed in Refs.~\\cite{Bai2016,Zhang2018,Bai2019,Bai2020}, where MB equations for Rydberg atomic gases without microwave dressing are solved beyond the mean-field approximation in a self-consistent and effective way, we can obtain the solutions of the Bloch Eq.~(\\ref{Bloch0}) using the perturbation expansion up to the third-order approximation. In particular, the result of the one-body density matrix element $\\rho_{21}$ can be obtained analytically (see the Appendix~\\ref{app1} for details).\n\n\nWith the expression of $\\rho_{21}$ and the definition of the probe-field susceptibility, i.e. 
$\\chi= \\mathcal{N}_a({\\bf e\\cdot p}_{12})\\rho_{21}\/(\\epsilon_0\\mathcal{E}_p)$, it is easy to obtain the optical susceptibility of the probe field, which reads\n\\begin{equation}\n\\chi=\\chi^{(1)}+\\left(\\chi_{\\rm loc}^{(3)}+\\chi_{\\rm nloc}^{(3)}\\right)|\\mathcal{E}_p|^2,\n\\end{equation}\nwhere $\\chi^{(1)}$ is the linear susceptibility;\n$\\chi_{\\rm loc}^{(3)}$ and $\\chi_{\\rm nloc}^{(3)}$ are the local and nonlocal third-order nonlinear (Kerr) susceptibilities, originating respectively from the non-zero two-photon detuning (i.e., $\\Delta_3\\neq 0$)~\\cite{Bai2016,Zhang2018,Bai2019,Bai2020,Wang2001,Chen2014} and from the Rydberg-Rydberg interaction in the system.\nExpressions of $\\chi^{(1)}$, $\\chi_{\\rm loc}^{(3)}$, and $\\chi_{\\rm nloc}^{(3)}$ are given in Appendix~\\ref{app3}; see\nEqs.~(\\ref{chi3s1}), (\\ref{chi3s2}), and (\\ref{chi3n}), respectively.\nUsing the system parameters given at the end of the last subsection and taking\n$\\Omega_m=18$\\,MHz, we obtain\n$\\chi^{(3)}_{\\rm loc} \\approx (5.08+0.012i)\\times 10^{-11}\\, {\\rm m^{2}\/V^2}$,\n$\\chi^{(3)}_{\\rm nloc}\\approx (3.05+0.022i)\\times 10^{-8}\\,\n{\\rm m^{2}\/V^2}$.\nWe see that the imaginary parts of $\\chi^{(3)}_{\\rm loc}$ and $\\chi^{(3)}_{\\rm nloc}$ are much smaller than their real parts,\nwhich is due to the EIT effect contributed by the control field; moreover, the nonlocal Kerr nonlinearity is three orders of magnitude larger than the local one, which is due to the strong Rydberg-Rydberg interaction together with the microwave dressing.\n\n\nIt is helpful to reveal how the microwave dressing modifies the Kerr effect\nof the system. 
Fig.~\\ref{Fig2}(a) shows\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{fig2.eps}\n\\caption{\\footnotesize\nKerr nonlinearity enhancement, interaction potential energy, and normalized nonlinear response function $\\Re$ of the probe field in the presence of the microwave dressing.\n(a)~Real part Re($\\chi^{(3)}_{\\rm loc}$)~(solid red line) and imaginary part Im($\\chi^{(3)}_{\\rm loc})$~(dotted blue line) of the local nonlinear susceptibility $\\chi_{\\rm\nloc}^{(3)}$ as a function of the half Rabi frequency $\\Omega_m$ of the microwave field.\n(b)~The same as (a) but for the nonlocal nonlinear susceptibility $\\chi_{\\rm nloc}^{(3)}$.\n(c)~Potential-energy curves $E_{1}$ (solid blue line), $E_2$ (solid black line), and $E_3$ (solid red line) of two Rydberg atoms\nas functions of the interatomic distance $r= |{\\bf r'-r}|$ for $\\Omega_m=10~$MHz and $\\Delta=13.98$ MHz; $E_{33}$ (dashed blue line), $E_{44}$ (dashed black line), and $E_{34}^{+}$ (dashed red line) are for the case without the microwave field.\n(d)~Normalized response function $\\Re\/\\Re_{\\rm max}$ as a function of the dimensionless coordinate $\\xi= x\/R_0$ ($R_0$ is the typical transverse beam radius of the probe field) with $\\Omega_m=0$~(dotted black line), 5~MHz~(solid red line), and 15~MHz~(solid blue line), respectively.\nInset: the normalized response function $\\tilde{\\Re}\/\\tilde{\\Re}_{\\rm max}$ in momentum space (i.e. the Fourier transformation of\n$\\Re\/\\Re_{\\rm max}$)\nas a function of the dimensionless wavenumber\n$\\beta_1=R_0 k_x$ ~($k_x$ is the dimensional wavenumber in $x$ direction) with $\\Omega_m=0$, 5, and 15\\,MHz, respectively.\n}\n\\label{Fig2}\n\\end{figure}\nthe real part Re($\\chi^{(3)}_{\\rm loc}$)~(solid red line) and imaginary part Im($\\chi^{(3)}_{\\rm loc})$~(dotted blue line) of the local nonlinear susceptibility $\\chi_{\\rm loc}^{(3)}$ as a function of the half Rabi frequency $\\Omega_m$ of the microwave field. 
Shown in Fig.~\\ref{Fig2}(b)\nis the same as Fig.~\\ref{Fig2}(a) but for the nonlocal nonlinear susceptibility $\\chi_{\\rm nloc}^{(3)}$.\nFrom the figure we see that the nonlinear optical susceptibilities have two evident features:\n(i)~The real parts of both $\\chi_{\\rm\nloc}^{(3)}$ and $\\chi_{\\rm nloc}^{(3)}$ are much larger than the corresponding imaginary parts, due to the EIT effect;\n(ii)~In the range of $\\Omega_m$ considered here,\n${\\rm Re}(\\chi_{\\rm loc}^{(3)})$ is an increasing function of $\\Omega_m$; however,\n${\\rm Re}(\\chi_{\\rm nloc}^{(3)})$ first increases, reaches a maximum at some value of $\\Omega_m$, and then decreases as $\\Omega_m$ is increased further. At the maximum, where\n$\\Omega_m\\approx 18$ MHz, ${\\rm Re}(\\chi_{\\rm nloc}^{(3)})\n\\approx 3.05\\times 10^{-8}\\,{\\rm m}^2\/{\\rm V}^2$, which is 15 times larger than in the case without the microwave field\n[${\\rm Re}(\\chi_{\\rm nloc}^{(3)})\\approx 0.2\\times 10^{-8}~{\\rm m^2\/V^2}$ for $\\Omega_m=0$]. Thus, microwave dressing can be used to greatly modify the Kerr effect of the system.\n\n\nTo support the above conclusion, a calculation is carried out for the interaction potential of two Rydberg atoms\nlocated respectively at positions ${\\bf r}$ and\n${\\bf r}'$, which may occupy the Rydberg states $|3\\rangle$ and $|4\\rangle$. 
In the absence of the microwave field (i.e., $\\Omega_m=0$), the basis set of such a two-atom system consists of the states\n$|33\\rangle= |3_13_2\\rangle$,\n$|44\\rangle= |4_14_2\\rangle$, and\n$|34_\\pm \\rangle= 1\/\\sqrt{2}(|3_14_2\\rangle\\pm|3_24_1\\rangle )$,\nwith the subscripts 1 and 2 representing atoms 1 and 2, respectively.\nThe (bare-state) eigenenergies of the system are $E_{33}=-\\hbar C_{33}^s\/|{\\bf r'-r}|^6$, $E_{44}=2\\hbar\\Delta -\\hbar C_{44}^s\/|{\\bf r'-r}|^6$, $E_{34}^{+}=\\hbar \\Delta+ \\hbar C_{34}^e\/|{\\bf r'-r}|^3$, and $E_{34}^{-}=\\hbar \\Delta- \\hbar C_{34}^d\/|{\\bf r'-r}|^6$, with $\\Delta=\\Delta_3-\\Delta_4=13.98$ MHz.\nSince the antisymmetric state $|34_{-}\\rangle$ is nearly decoupled from the laser fields, it can be disregarded when the microwave field is present (i.e., $\\Omega_m\\neq 0$). Then, the Hamiltonian in the two-atom basis set $\\{|33\\rangle$, $|34_{+}\\rangle$, $|44\\rangle\\}$ takes the form\n\\begin{align}\\label{HTA}\n \\mathcal{H}=\\hbar\\left(\n \\begin{array}{ccc}\n -\\frac{C_{33}^s}{|{\\bf r'-r}|^6}&\\sqrt{2}\\Omega_m&0\\\\\n \\sqrt{2}\\Omega_m&\\Delta+\\frac{C_{34}^e}{|{\\bf r'-r}|^3}&\\sqrt{2}\\Omega_m\\\\\n 0&\\sqrt{2}\\Omega_m&2\\Delta-\\frac{C_{44}^s}{|{\\bf r'-r}|^6}\\\\\n \\end{array}\n \\right).\n\\end{align}\nAfter diagonalization, we can obtain the energies $E_{1}$, $E_2$, and $E_3$ of the Hamiltonian (\\ref{HTA}).\nPotential-energy curves $E_{1}$, $E_2$, and $E_3$ as functions of the interatomic separation $r= |{\\bf r'-r}|$ for $\\Omega_m=10~$MHz are shown in Fig.~\\ref{Fig2}(c). For comparison,\nthe bare potential-energy curves $E_{33},\\, E_{44}$, and $E_{34}^+$ (for $\\Omega_m=0$) are also shown. We see that, compared with the case without the microwave field, the potential-energy curves are modified greatly by the introduction of the microwave field, especially for small interatomic separation $r$. 
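As an illustration, the diagonalization of the Hamiltonian (\ref{HTA}) can be reproduced numerically. The following Python\/NumPy sketch uses the $n=60$ dispersion parameters and $\Delta=13.98$ MHz quoted above, together with $\Omega_m=10$ MHz; it is a minimal example of how curves such as $E_1$, $E_2$, and $E_3$ in Fig.~\ref{Fig2}(c) can be generated, not the actual code behind that figure.

```python
import numpy as np

# Dispersion coefficients for n = 60 (units: MHz um^6 or MHz um^3), taken
# from the text; signs follow the convention V = -C / r^n used there.
C33 = -2 * np.pi * 140e3    # MHz um^6  (repulsive)
C44 = 2 * np.pi * 295e3     # MHz um^6  (attractive)
C34e = -2 * np.pi * 3.8e3   # MHz um^3  (exchange dipole-dipole)
Delta = 13.98               # MHz, Delta = Delta_3 - Delta_4
Om = 10.0                   # MHz, microwave half Rabi frequency

def two_atom_H(r):
    """Two-atom Hamiltonian of Eq. (HTA) in units of hbar (MHz), written
    in the basis {|33>, |34+>, |44>}; r is the interatomic distance in um."""
    s2 = np.sqrt(2) * Om
    return np.array([
        [-C33 / r**6, s2,                       0.0],
        [s2,          Delta + C34e / r**3,      s2],
        [0.0,         s2,                       2 * Delta - C44 / r**6],
    ])

def potential_curves(r):
    """Eigenenergies E1 <= E2 <= E3 at interatomic distance r (um)."""
    return np.linalg.eigvalsh(two_atom_H(r))

# Sample the dressed potential-energy curves over a range of distances.
E = np.array([potential_curves(r) for r in np.linspace(2.0, 20.0, 200)])
```

At large $r$ the interaction terms vanish and the three eigenvalues reduce to those of the microwave-coupled level structure alone, which provides a quick sanity check of the matrix.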
The reason is that the microwave dressing brings a coupling between the Rydberg states $|3\\rangle$ and $|4\\rangle$, and thereby a modification of the Rydberg-Rydberg interaction. It is the use of the microwave dressing that brings the significant change and enhancement of the nonlocal Kerr nonlinearity.\n\n\n\\subsection{Nonlinear envelope equation and the property of nonlinear response function}\\label{sec23}\n\n\nWe now derive the envelope equation that controls the dynamics of the probe field. By substituting the solution of $\\rho_{21}$ into\nthe Maxwell Eq.~(\\ref{Maxwell}) and making a local approximation along the $z$ direction on the nonlocal nonlinear response function~(see the Appendix~\\ref{app3}),\nwe obtain the following three-dimensional (3D) nonlocal nonlinear Schr\\\"odinger (NNLS) equation\n{\\small\n\\begin{align}\n& i\\frac{\\partial \\Omega_p }{\\partial z}+\\frac{c}{2\\omega_p}\\nabla_{\\perp}^{2}\\Omega_p + {W_1}|\\Omega_p{|^2}\\Omega_p\\notag\\\\\n& \\qquad+\\int d^2 {r'}\nG({\\bf r_{\\perp}'-r_{\\perp}})|\\Omega_p({\\bf r_{\\perp}'},z)|^2\\Omega_p({\\bf r_{\\perp}},z)=0,\\label{NNLS1}\n\\end{align}}\\noindent\nwith ${\\bf r}_{\\perp}=(x,y)$, $d^2{r'}=dx' dy'$.\nThe third and fourth terms on the left-hand side of this equation describe two types of self-phase modulations of the probe field, contributed respectively by the local Kerr nonlinearity (originating from the non-zero two-photon detuning $\\Delta_3\\neq 0$~\\cite{Bai2016,Zhang2018,Bai2019,Bai2020,Wang2001,Chen2014}) and the nonlocal Kerr nonlinearity (originating from the Rydberg-Rydberg interaction). 
In the integral of the fourth term of the NNLS equation, $G$ is a reduced nonlinear response function, taking the form\n$G({\\bf r_{\\perp}'-r_{\\perp}})=\\sum_{\\alpha=3,4}G_{\\alpha\\alpha}^s({\\bf r_{\\perp}'-r_{\\perp}})+\\sum_{l={d,e}}G_{34}^l({\\bf r_{\\perp}'-r_{\\perp}})$,\nwith\n$G_{\\alpha\\beta}^l({\\bf r_{\\perp}'-r_{\\perp}})=\\int dz'R_{\\alpha\\beta}^l({\\bf r'-r})$\n($\\{\\alpha\\beta\\}=\\{33,44,34\\}$; $l=s,d,e$). Explicit expressions\nof the local nonlinear coefficient $W_1$ and of the matrix elements of the nonlinear response function\n$R_{\\alpha\\beta}^{l}({\\bf r}^\\prime-{\\bf r})$ are given in the Appendix~\\ref{app3}~[see Eq.~(\\ref{W1}) and Eqs.~(\\ref{eqb5}), respectively]. Due to the microwave dressing,\nthe nonlocal Kerr nonlinearity consists of four parts; the first~(second) part $G_{33}^s({\\bf r_{\\perp}'-r_{\\perp}})$\n[$G_{44}^s({\\bf r_{\\perp}'-r_{\\perp}})$] is contributed by the interaction of the atoms lying in the same Rydberg state $|3\\rangle$~($|4\\rangle$);\nthe third (fourth) part $G_{34}^d({\\bf r_{\\perp}'-r_{\\perp}})$~[$G_{34}^e({\\bf r_{\\perp}'-r_{\\perp}})$] is contributed by the interaction of the atoms lying in different Rydberg states $|3\\rangle$ and $|4\\rangle$.\n\n\nFor the convenience of later discussions and numerical calculations, we rewrite the 3D NNLS Eq.~(\\ref{NNLS1}) into the non-dimensional form\n\\begin{align}\\label{NNLS2}\n&i\\frac{\\partial u}{\\partial s} + \\tilde{\\nabla}_{\\perp}^2u+\\int d^2\\zeta'\n\\Re(\\vec{\\zeta}'-\\vec{\\zeta})|u(\\vec{\\zeta}',s){|^2}\nu(\\vec{\\zeta},s)=0,\n\\end{align}\nwith $s = z\/(2{L_{\\rm diff}})$, $u=\\Omega_p\/U_0$,\n$\\tilde{\\nabla}_{\\perp}^2=\\partial^2\/\\partial\\xi^2\n+\\partial^2\/\\partial\\eta^2$,\n$\\vec{\\zeta}= (\\xi,\\eta)=(x,y)\/{R_0}$, and\n$d^2 {\\zeta}^{\\prime}= d\\xi^{\\prime}d \\eta^{\\prime}$.\nHere ${L_{\\rm diff}} = {\\omega _p}R_0^2\/c$ is the typical diffraction length, which is 1.61 mm in our system;\n$U_0$ is the typical Rabi frequency 
of the probe field;\n$R_0$ is the typical beam radius of the probe field;\nthe non-dimensional nonlinear response function is defined by\n$\\Re(\\vec{\\zeta}'-\\vec{\\zeta})=2L_{\\rm diff}U_0^2R_0^2 G[{(\\vec{\\zeta}'-\\vec{\\zeta})}R_0]$. Note that in writing Eq.~(\\ref{NNLS2})\nwe have neglected the term related to $W_1$ because the\nlocal Kerr nonlinearity is much smaller than the nonlocal one~\\cite{Bai2019}.\n\n\nThe property of the nonlocal Kerr nonlinearity of the system is characterized by the nonlinear response function $\\Re(\\vec{\\zeta})$.\nCompared with the case without the microwave field ($\\Omega_m=0$), $\\Re(\\vec{\\zeta})$ is modified significantly and can be manipulated by the use of the microwave field ($\\Omega_m\\neq 0$). To demonstrate this, the normalized response function $\\Re\/\\Re_{\\rm max}$ as a function of $\\xi= x\/R_0$ is shown in Fig.~\\ref{Fig2}(d), where the dotted black line, solid red line, and solid blue line are for $\\Omega_m=0$, $\\Omega_m=5$~MHz, and $\\Omega_m=15$~MHz, respectively.\nWe see that, due to the role played by the microwave field, the shape of\n$\\Re\/\\Re_{\\rm max}$ is changed significantly. In particular, for a larger microwave field, the $\\Re\/\\Re_{\\rm max}$ curve\nbecomes negative for small values of $\\xi$ (not shown here), consistent with the result obtained in Ref.~\\cite{Petrosyan2014}.\nPlotted in the inset of the figure is the normalized response function in momentum space $\\tilde{\\Re}\/\\tilde{\\Re}_{\\rm max}$\n(i.e., the Fourier transform of $\\Re\/\\Re_{\\rm max}$)\nas a function of the non-dimensional wavenumber\n$\\beta_1=R_0 k_x$ ~($k_x$ is the wavenumber in the $x$ direction) with $\\Omega_m=0$, 5, and 15\\,MHz, respectively.\nOne sees that $\\tilde{\\Re}\/\\tilde{\\Re}_{\\rm max}$ has only one change in sign for $\\Omega_m=0$; however, more changes in sign arise when $\\Omega_m$ takes nonzero values. 
Such behavior of $\\tilde{\\Re}\/\\tilde{\\Re}_{\\rm max}$ is due to the joint action of the nonlocal Kerr nonlinearity and the microwave field, through which the MI of the plane-wave probe field may occur~(see the next section).\n\n\n\nBesides the significant dependence on the microwave field $\\Omega_m$,\nthe property of the response function also depends on another parameter, i.e., the nonlocality degree of the Kerr nonlinearity, defined by\n\\begin{equation}\\label{NLD}\n\\sigma\\equiv {R_b}\/{R_0},\n\\end{equation}\nwhere $R_b$ is the radius of the Rydberg blockade sphere, given by\n$R_b=|C_{33}^{s}\/\\delta_{\\rm EIT}|^{1\/6}$~\\cite{Saffman2010,Fir2016,Mur2016}, with\n$\\delta_{\\rm EIT}$ the width of the EIT transparency window. One has\n$\\delta_{\\rm EIT}=|\\Omega_c|^2\/\\gamma_{21}$ for\n$\\Delta_2=0$, and $\\delta_{\\rm EIT}=|\\Omega_c|^2\/\\Delta_2$\nfor $\\Delta_2\\gg\\gamma_{21}$. With the system parameters used here,\nwe have $R_b\\approx 8.34 ~\\mu{\\rm m}$.\nIn the next section, we shall show that the structural phase transitions of the optical patterns of the system depend strongly not only on the microwave field $\\Omega_m$ but also on the nonlocality degree $\\sigma$ of the Kerr nonlinearity.\n\n\n\\section{Modulational instability, emergence of optical patterns and solitons}\\label{sec3}\n\n\\subsection{Modulation instability}\\label{sec31}\nMI is a nonlinear instability of constant-amplitude continuous waves under long-wavelength perturbations, occurring in a variety of contexts where the Kerr nonlinearity is attractive and local~\\cite{Zakharov2009,Biondini2016}; it can also arise in systems with a repulsive but nonlocal Kerr nonlinearity when the perturbations have either long~\\cite{Krolikowski2001,Krolikowski2004} or short~\\cite{Maucher2016,Maucher2017,Sevincli2011} wavelengths. 
To explore the MI in our system, we consider the MI of the plane-wave solution of the NNLS Eq.~(\\ref{NNLS2}), i.e.,\n\\begin{equation}\\label{PW}\nu_{\\rm pw}(\\vec{\\zeta},s)=A_0\\exp\\left[-is A_0^2\\int \\Re(\\vec{\\zeta})d^2 \\zeta\\right],\n\\end{equation}\nwhere $A_0$ is a real number.\nSince any perturbation can be expanded as a superposition of many Fourier modes, we make the MI analysis of the plane wave by taking a single periodic mode as the perturbation, i.e.,\n\\begin{eqnarray}\\label{PTS}\n\\tilde{u}(\\vec{\\zeta},s)=\n&&\\left[A_0+a_1e^{i\\vec{\\beta}\\cdot \\vec{\\zeta}+\\lambda\ns}+a_2^*e^{-i\\vec{\\beta}\\cdot \\vec{\\zeta}+\\lambda^* s}\\right]\\nonumber\\\\\n&& \\times\n\\exp\\left[-is A_0^2\\int \\Re(\\vec{\\zeta})d^2\\zeta\\right],\n\\end{eqnarray}\nwhere $a_1$ and $a_2$ are small complex amplitudes of the perturbation, $\\vec{\\beta}= (\\beta_1,\\beta_2)$\\, ($\\beta_1\\equiv R_0 k_x$, $\\beta_2\\equiv R_0 k_y$; $k_x$ and $k_y$ are the wavenumbers in the $x$ and $y$ directions, respectively) is the non-dimensional 2D wavevector, and $\\lambda$ is the growth rate of the perturbation, to be determined.\n\n\nSubstituting the perturbation solution (\\ref{PTS}) into Eq.~(\\ref{NNLS2}) and keeping only terms linear in $a_1$ and $a_2$, it is easy to obtain the expression of the growth rate\n\\begin{align}\n\\lambda^2=-{\\beta}^2\\left[\\beta^2-2A_0^2\\,\\tilde{\\Re}(\\vec{\\beta})\\right],\n\\end{align}\nwhere ${\\beta}=\\sqrt{\\beta_1^2+\\beta_2^2}$ and $\\tilde{\\Re}(\\vec{\\beta})$ is the response function in momentum space [i.e., the Fourier transform of $\\Re(\\vec{\\zeta})$].\n\n\nThe property of the growth rate $\\lambda$ depends on the plane-wave intensity $A_0^2$ and on the shape of the response function $\\tilde{\\Re}$, in which the microwave field $\\Omega_m$ plays an important role.\nShown in Fig.~\\ref{Fig3}(a)\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fig3.eps}\n\\caption{\\footnotesize Modulation instability and its 
manipulation for the plane-wave probe field in the Rydberg atomic gas with the microwave dressing.\n(a)~$-\\lambda^2$ ($\\lambda$ is the growth rate) as a function of $\\beta=\\sqrt{\\beta_1^2+\\beta_2^2}$ [$\\beta_{j}\\equiv R_0k_j$; $k_j$ is the wavenumber along the $j$th direction ($j=x,y$); $R_0$ is the typical transverse beam radius of the probe field], for\nthe microwave field $\\Omega_m=0$~(dotted black line), 5~MHz~(dashed red line), and 15~MHz~(solid blue line), respectively; shaded regions are those where MI occurs.\n(b)~Real part of the growth rate ${\\rm Re}(\\lambda$) as a function of $\\beta$ and the effective probe-field intensity $I_{\\rm eff}=\\alpha A_0^2$\\, [with $A_0$ the amplitude of the plane wave and $\\alpha=-\\int{\\Re}(\\vec{\\zeta})d^2\\zeta$\\,], for the microwave field $\\Omega_m=10$~MHz. The colored region is that with ${\\rm Re}(\\lambda)>0$, where MI occurs.\n(c)~${\\rm Re}(\\lambda$) as a function of $\\beta$ and $\\Omega_m$ for $I_{\\rm eff}=20$; the colored region is the one where MI occurs.\n}\\label{Fig3}\n\\end{figure}\nis the curve of $-\\lambda^2$ as a function of the non-dimensional wavenumber $\\beta$ for the microwave field $\\Omega_m=0$~(dotted black line), 5~MHz~(dashed red line), and 15~MHz~(solid blue line), respectively. The shaded regions in the figure are those with ${\\rm Re}(\\lambda)>0$; that is, MI occurs in these regions and hence the plane-wave state of the probe field is unstable. The MI will lead to a symmetry breaking of the system and hence a phase transition to new states. As a result, new optical self-organized structures (or pattern formation) appear in the system (see the next section). 
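The dispersion relation $\lambda^2=-\beta^2[\beta^2-2A_0^2\tilde{\Re}(\vec{\beta})]$ can be explored with a toy model. In the Python sketch below, the Gaussian momentum-space response is only an assumed stand-in for the actual $\tilde{\Re}$ of the Rydberg gas (which follows from the appendix expressions and changes sign with $\Omega_m$); the example merely illustrates how a band of $\beta$ with ${\rm Re}(\lambda)>0$ appears once the plane-wave intensity is large enough.

```python
import numpy as np

def growth_rate_sq(beta, A0, R_tilde):
    """lambda^2 = -beta^2 * (beta^2 - 2 A0^2 R~(beta)); MI occurs where
    this is positive, i.e. where Re(lambda) > 0."""
    return -beta**2 * (beta**2 - 2 * A0**2 * R_tilde(beta))

# Assumed toy response in momentum space: a Gaussian. The real R~ of the
# microwave-dressed gas is different and can change sign.
R_gauss = lambda b: np.exp(-b**2 / 4)

beta = np.linspace(1e-3, 10.0, 1000)
lam2 = growth_rate_sq(beta, A0=2.0, R_tilde=R_gauss)
unstable = beta[lam2 > 0]   # band of wavenumbers where the plane wave is unstable
```

For this toy kernel the MI band is bounded, so the instability occurs at finite (short) wavelengths, in qualitative agreement with the behavior described in the text.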
We note that, different from the cases reported in Refs.~\\cite{Krolikowski2001,Krolikowski2004} but similar to those considered in Refs.~\\cite{Maucher2016,Maucher2017,Sevincli2011},\nthe MI in the present system arises for short-wavelength perturbations.\n\n\nTo obtain a further understanding of the MI, Fig.~\\ref{Fig3}(b) shows\nthe real part of the growth rate, ${\\rm Re}(\\lambda$), as a function of $\\beta$ and the effective probe-field intensity\n\\begin{equation}\\label{I}\nI_{\\rm eff} =\\alpha A_0^2\n\\end{equation}\nfor $\\Omega_m=10$~MHz, where $\\alpha=-\\int{\\Re}(\\vec{\\zeta})d^2\\zeta$ is a parameter characterizing the role of the nonlocal Kerr nonlinearity.\nThe colored region in the figure is where ${\\rm Re}(\\lambda)>0$ and hence MI occurs. Fig.~\\ref{Fig3}(c) shows ${\\rm Re}(\\lambda$) as a function of $\\beta$ and $\\Omega_m$ for $I_{\\rm eff}=20$, with the colored region denoting where MI occurs. From these results we see that the MI depends not only on the effective probe-field intensity\n$I_{\\rm eff}$ but also on the microwave field $\\Omega_m$, which provides\nways to manipulate the MI and thereby the emergence of the optical patterns in the system.\n\n\n\\subsection{Pattern formation controlled by the Kerr nonlinearity and the microwave field}\\label{sec32}\n\n\nWe now turn to consider the outcome of the MI in the system. Note that\nin the absence of the microwave field the system is reduced to a three-level one (i.e., the conventional Rydberg-EIT) and the atom-atom interaction Hamiltonian $\\hat{\\cal H}_{\\rm vdw}$ contains only the term\n$\\hbar\\mathcal{N}_a\\int d^3{ r}^\\prime \\hat{S}_{33}({\\bf\nr}^{\\prime},t)\\mathcal{V}_{33}^s({\\bf r}^\\prime-{\\bf r})\\hat{S}_{33}({\\bf r},t)$; however, in the presence of the microwave field, the state $|4\\rangle$ may have a significant population and hence it plays an important role in the dynamics of the probe field. 
In this case $\\hat{\\cal H}_{\\rm vdw}$ contains four terms,\nwhich may be made comparable by tuning the system parameters.\nAs a result, the nonlinear response function $G$ in the envelope Eq.~(\\ref{NNLS1}) contains four terms, i.e., $G=G_{33}^s+G_{44}^s+G_{34}^d+G_{34}^e$; the nonlocal Kerr nonlinearities contributed by $G_{33}^s$, $G_{34}^d$, and $G_{34}^e$ are repulsive, but the one contributed by $G_{44}^s$ is attractive. Therefore, depending on the system parameters and based on the competition among these four terms in $G$, the total Kerr nonlinearity of the system may be of the self-defocusing or self-focusing type, which means that the system may support very rich nonlinear structures after the occurrence of the MI, including the emergence of various optical patterns and solitons.\nGenerally, when the repulsive part (contributed by $G_{33}^s$, $G_{34}^d$, and $G_{34}^e$)\nplays a dominant role over the attractive part (contributed by $G_{44}^s$), the MI results in the formation of optical patterns; on the contrary, when the attractive part is dominant over the repulsive part, the MI gives rise to the formation of bright solitons.\n\n\nAs a first step, we focus on the case of pattern formation, for which the whole Kerr nonlinearity must be of the self-defocusing type. This can be realized by choosing suitable system parameters to make the repulsive part in $G$\\, (i.e., $G_{33}^s$, $G_{34}^d$, and $G_{34}^e$) larger than the attractive part (i.e., $G_{44}^s$). In fact, the system parameters given at the end of Sec.~\\ref{sec21} fulfill this requirement. Besides these parameters, three other parameters, i.e., $I_{\\rm eff}$ (the effective probe-field intensity), $\\sigma$ (the nonlocality degree of the Kerr nonlinearity), and $\\Omega_m$ (the microwave field), play significant roles in determining the types of optical patterns in the system. 
Based on these considerations, to obtain the optical patterns we seek the ground-state solution of the system by numerically solving Eq.~(\\ref{NNLS2}) via imaginary-time evolution combined with the split-step Fourier method~\\cite{YangJK2010}, for which the total energy of the system\n{\\small\n\\begin{align}\nE=\n&\\int |\\tilde{\\nabla}_{\\perp}u(\\vec{\\zeta},s)|^2 d^2\\zeta\\nonumber\\\\\n&\n+\\frac{1}{2}\\iint \\Re({\\vec{\\zeta}^\\prime-\\vec{\\zeta}})|u(\\vec{\\zeta},s)|^2\n |u(\\vec{\\zeta}^{\\prime},s)|^2 d^2\\zeta^{\\prime} d^2 \\zeta\\label{E}\n\\end{align}}\\noindent\nis minimized. The initial condition used in the simulation is the plane wave (\\ref{PW}), perturbed by a random noise.\n\n\nShown in Fig.~\\ref{Fig4}(a)\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{fig4.eps}\n\\caption{\\footnotesize Pattern formation and phase diagram controlled by the effective intensity of the probe field $I_{\\rm eff}=\\alpha A_0^2$\nand the microwave field $\\Omega_m$, for the nonlocality degree $\\sigma\\equiv R_b\/R_0=1$. 
Here, $A_0$ is the amplitude of the plane-wave state; $\\alpha= -\\int{\\Re}(\\vec{\\zeta})d^2\\zeta$; $R_b$ and $R_0$ are the Rydberg blockade radius and the transverse radius of the probe beam, respectively.\n(a)~Phase diagram of the structural transition of optical patterns, where different regions (phases) are obtained by changing the values of $I_{\\rm eff}$ and $\\Omega_m$.\nRegion $\\circled{1}$: the homogeneous (i.e., the plane-wave) state;\nRegion $\\circled{2}$: the hexagonal lattice;\nRegion $\\circled{3}$: the type I square lattice;\nRegion $\\circled{4}$: the type II square lattice.\n(b)~The hexagonal lattice of the normalized probe-field amplitude $|u|$ as a function of $\\xi=x\/R_0$ and $\\eta=y\/R_0$ for $\\Omega_m=10$~MHz and $I_{\\rm eff}=15$, corresponding to the region $\\circled{2}$ in panel (a).\n(c)~The type I square lattice of $|u|$ as a function of $\\xi$ and $\\eta$ for $\\Omega_m=13$~MHz and $I_{\\rm eff}=25$, corresponding to the region $\\circled{3}$ in panel (a).\n(d)~The type II square lattice of $|u|$ as a function of $\\xi$ and $\\eta$ for $\\Omega_m=15$~MHz and $I_{\\rm eff}=35$, corresponding to the region $\\circled{4}$ in panel (a).\n}\\label{Fig4}\n\\end{figure}\nis the phase diagram describing the phase transition of self-organized optical structures, controlled by the effective intensity of the probe field $I_{\\rm eff}=\\alpha A_0^2$ and the microwave field $\\Omega_m$.\nThe dashed lines in the figure are the boundaries between different phases.\nIn obtaining the phase diagram, the nonlocality degree of the Kerr nonlinearity, i.e., $\\sigma= R_b\/R_0$, is fixed at 1. 
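The imaginary-time split-step Fourier procedure used to obtain such ground states can be sketched as follows. Here the response function $\Re$ is replaced by a hypothetical normalized Gaussian of width $\sigma$, and the grid, norm, and step counts are illustrative assumptions only:

```python
import numpy as np

# Sketch of the imaginary-time split-step Fourier method for a 2D
# nonlocal NLS of the schematic form  i u_s + ∇⊥²u + u (R ⊛ |u|²) = 0.
# The Gaussian response R, grid, fixed norm, and step sizes are assumptions.
N, L = 128, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

sigma, g = 1.0, -1.0                      # nonlocality degree; g < 0: defocusing
R = g * np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
Rk = np.fft.fft2(np.fft.ifftshift(R)) * dx * dx  # spectrum of the response

def nonlocal_term(u):
    """(R ⊛ |u|²)(ζ), computed as a product in Fourier space."""
    return np.real(np.fft.ifft2(Rk * np.fft.fft2(np.abs(u)**2)))

P0 = 10.0                                 # fixed norm ∫|u|² d²ζ (illustrative)
rng = np.random.default_rng(0)
u = np.exp(-(X**2 + Y**2) / 4.0) * (1.0 + 0.05 * rng.random((N, N)))

dt = 0.01
for _ in range(300):                      # relax toward the energy minimum
    u = np.fft.ifft2(np.exp(-0.5 * dt * K2) * np.fft.fft2(u))  # half kinetic
    u = u * np.exp(dt * nonlocal_term(u))                      # potential step
    u = np.fft.ifft2(np.exp(-0.5 * dt * K2) * np.fft.fft2(u))  # half kinetic
    u = u * np.sqrt(P0 / (np.sum(np.abs(u)**2) * dx * dx))     # renormalize
```

In the actual computation the derived response function $\Re$ replaces the Gaussian model, and monitoring the energy (\ref{E}) between iterations provides the stopping criterion.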
From the figure, we see that several structural transitions of optical patterns\nemerge when $I_{\\rm eff}$ and $\\Omega_m$ are changed in the following ways:\n(i)~from the homogeneous state $\\circled{1}$ to the hexagonal lattice $\\circled{2}$;\n(ii)~from the hexagonal lattice $\\circled{2}$ to the type I square lattice $\\circled{3}$;\n(iii)~from the type I square lattice $\\circled{3}$ to the type II square lattice $\\circled{4}$.\nHere $\\circled{1}$, $\\circled{2}$, $\\circled{3}$, and $\\circled{4}$ represent the regions of the homogeneous state, hexagonal lattice, type I square lattice, and type II square lattice, respectively.\n\n\nTo be more concrete, we give several examples illustrating the optical lattice patterns that correspond to the self-organized structures indicated in the different regions of Fig.~\\ref{Fig4}(a). Fig.~\\ref{Fig4}(b) shows a hexagonal lattice pattern, where the normalized probe-field amplitude $|u|$ is plotted as a function of $\\xi=x\/R_0$ and $\\eta=y\/R_0$; it is obtained by taking $\\Omega_m=10$~MHz and $I_{\\rm eff}=15$, located in the region $\\circled{2}$ of Fig.~\\ref{Fig4}(a). Such a hexagonal lattice pattern\nwas found by Sevincli {\\it et al.}~\\cite{Sevincli2011} in the case where no microwave dressing is used (i.e., $\\Omega_m=0$); in that case the hexagonal lattice pattern is the only one that can be obtained via the MI of the homogeneous (plane-wave) state.\n\n\nPlotted in Fig.~\\ref{Fig4}(c) is the optical pattern obtained by taking $|u|$ as a function of $\\xi$ and $\\eta$, for $\\Omega_m=13$~MHz and $I_{\\rm eff}=25$ [which lies in the region $\\circled{3}$ of Fig.~\\ref{Fig4}(a)]. We see that in this case a new optical structure, called the type I square lattice, emerges. 
Obviously, such a new optical structure, which does not exist when the microwave dressing is absent, arises due to the symmetry breaking induced by the introduction of the microwave field.\n\n\nFig.~\\ref{Fig4}(d) gives the result for the optical pattern with increased microwave field and effective probe-field intensity,\nobtained by taking $\\Omega_m=15$~MHz and $I_{\\rm eff}=35$ [which lies in the region\n$\\circled{4}$ of Fig.~\\ref{Fig4}(a)]. One sees that in this situation another type of optical structure, called the type II square lattice, appears. Comparing with the type I square lattice pattern of Fig.~\\ref{Fig4}(c), we see that there is an angle difference (around $45^\\circ$) between the type I and type II square lattices; furthermore, there are also differences in the normalized probe-field amplitudes and the lattice constants between these two types of square lattice patterns (for details, see Table~\\ref{tab1} below).\n\n\n\n\n\\subsection{Pattern formation controlled by the nonlocality degree of the Kerr nonlinearity and the microwave field}\\label{sec33}\nTo explore the structural phase transition of the optical patterns further, we now fix the effective probe-field intensity\n($I_{\\rm eff}=35$) but take the nonlocality degree of the Kerr nonlinearity $\\sigma$ and the microwave field $\\Omega_m$ as control parameters. 
As in the last subsection, we seek the spatial distribution of the probe field for which the total energy (\\ref{E}) of the system is minimized, through a numerical simulation of Eq.~(\\ref{NNLS2}).\n\n\nShown in Fig.~\\ref{Fig5}(a) is the phase diagram of the structural transition of optical patterns, where different regions (phases) are obtained by changing the values of $\\sigma$ and $\\Omega_m$, separated by\ndashed lines (i.e., the boundaries between different phases).\nWe see that several structural transitions [i.e., from the homogeneous state $\\circled{1}$ to the hexagonal lattice $\\circled{4}$, from the hexagonal lattice $\\circled{4}$ to the type I square lattice $\\circled{3}$, and from the type I square lattice $\\circled{3}$ to the type II square lattice $\\circled{2}$\\,] of the optical patterns arise when $\\sigma$ and $\\Omega_m$ are varied.\n\n\n\\begin{figure}[htpb]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{fig5.eps}\n\\caption{\\footnotesize Pattern formation and phase diagram controlled by the nonlocality degree of the Kerr nonlinearity $\\sigma=R_b\/R_0$ and the microwave field $\\Omega_m$, for\nthe effective probe-field intensity $I_{\\rm eff}=35$.\n(a)~Phase diagram of the structural transition of optical patterns, where different regions (phases) are obtained by changing the values of $\\sigma$ and $\\Omega_m$.\nRegion $\\circled{4}$: the pattern with hexagonal lattice structure;\nRegion $\\circled{3}$: the pattern with type I square lattice structure;\nRegion $\\circled{2}$: the pattern with type II square lattice structure;\nRegion $\\circled{1}$: the homogeneous (i.e.
the plane-wave) state.\n(b)~The hexagonal structure of the normalized probe-field amplitude $|u|$ as a function of $\\xi=x\/R_0$ and $\\eta=y\/R_0$ for $\\Omega_m=10$~MHz and $\\sigma=2$, corresponding to the region $\\circled{4}$ in panel (a).\n(c)~The type I square structure of $|u|$ as a function of $\\xi$ and $\\eta$ for $\\Omega_m=12$~MHz and $\\sigma=1$, corresponding to the region $\\circled{3}$ in panel (a).\n(d)~The type II square structure of $|u|$ as a function of $\\xi$ and $\\eta$ for $\\Omega_m=18$~MHz and $\\sigma=1$, corresponding to the region $\\circled{2}$ in panel (a).\n}\\label{Fig5}\n\\end{figure}\n\nWe also give several examples illustrating the optical patterns corresponding to the self-organized structures indicated in the different regions of Fig.~\\ref{Fig5}(a). Fig.~\\ref{Fig5}(b) shows a hexagonal lattice pattern, obtained by plotting the normalized amplitude\nof the probe field $|u|$ as a function of $\\xi=x\/R_0$ and $\\eta=y\/R_0$, for $\\Omega_m=10$~MHz and $\\sigma=2$ [located in the region $\\circled{4}$ of Fig.~\\ref{Fig5}(a)].\n\n\nShown in Fig.~\\ref{Fig5}(c) is the optical pattern for $\\Omega_m=12$~MHz and $\\sigma=1$ [located in the region $\\circled{3}$ of Fig.~\\ref{Fig5}(a)]; one sees that in this case the lattice pattern is a type I square lattice, which is absent without the microwave field. 
Illustrated in Fig.~\\ref{Fig5}(d) is the optical pattern for $\\Omega_m=18$~MHz and $\\sigma=1$, which lies in the region\n$\\circled{2}$ of Fig.~\\ref{Fig5}(a); in this case\nthe type II square lattice structure appears.\n\n\nTo see clearly the differences between the two types of square lattice patterns, a quantitative comparison is made of the normalized probe-field amplitude $|u|$, the microwave field $\\Omega_m$, the effective probe-field intensity $I_{\\rm eff}$, and the lattice constant $l$ (i.e., the distance between the maxima of two adjacent optical spots) for the type I and type II square lattice patterns obtained in Fig.~\\ref{Fig4} and Fig.~\\ref{Fig5} with the nonlocality degree of the Kerr nonlinearity $\\sigma=1$; the results are presented in Table~\\ref{tab1}.\n\\begin{table}[htbp]\n\\centering\\caption{\\footnotesize Differences between the type I and type II square lattice patterns. Values of the normalized probe-field amplitude $|u|$, microwave field $\\Omega_m$, effective probe-field intensity $I_{\\rm eff}$, and lattice constant $l$ for the two types of square lattice patterns.\n}\\label{tab1}\n\\vspace{4mm}\n\\renewcommand\\tabcolsep{9.5pt}\n\\begin{tabular}{ccccc}\n\\Xhline{1pt}\nType\n&$|u|_{\\rm max}$\n& $\\Omega_m$\n& $I_{\\rm eff}$\n& $l$\\\\\n\\Xhline{1pt}\nType I\\,\\,\\,\\, [Fig.~\\ref{Fig4}(c)]\n&28.5\n&13\n&25\n&1.74\\\\\nType II\\,\\,\\,\\,[Fig.~\\ref{Fig4}(d)]\n&54.4\n&15\n&35\n&1.66\\\\\nType I\\,\\,\\,\\, [Fig.~\\ref{Fig5}(c)]\n&37.9\n&12\n&35\n&2.82\\\\\nType II\\,\\,\\,\\,[Fig.~\\ref{Fig5}(d)]\n&69.5\n&18\n&35\n&2.35\\\\\n\\Xhline{1pt}\n\\end{tabular}\n\\end{table}\nWe see that:\n(i)~the lattice constant $l$ of the type I square lattice pattern is larger than that of the type II one;\n(ii)~comparing Fig.~\\ref{Fig5}(c) with Fig.~\\ref{Fig5}(d) [and Fig.~\\ref{Fig4}(c) with Fig.~\\ref{Fig4}(d)], the lattice constant $l$\nis larger for a smaller microwave field $\\Omega_m$.\nThe physical reason for such differences is that the nonlocal 
nonlinear response function $\\Re\/\\Re_{\\rm max}$ depends significantly on the microwave field. When the microwave field $\\Omega_m$ is increased, the shape of $\\Re\/\\Re_{\\rm max}$ is largely modified [i.e., it becomes narrower and steeper; see Fig.~\\ref{Fig2}(d)], which makes the system change into a new state, and thereby a new type of square lattice pattern emerges.\n\n\nCombining Fig.~\\ref{Fig4} and Fig.~\\ref{Fig5}, which are the key results of this work, we see that, in the parameter domains considered here, the system supports three types of self-organized optical structures (i.e., the hexagonal lattice and the type I and type II square lattices), and their phase transitions can be controlled by actively manipulating the microwave field ($\\Omega_m$), the effective probe-field intensity ($I_{\\rm eff}$), and the nonlocality degree of the Kerr nonlinearity ($\\sigma$).\nThe basic physical mechanism of the MI and the formation of the optical patterns found here may be understood as follows. When the plane-wave probe field with a finite amplitude is applied to and propagates in the Rydberg atomic gas along the $z$ direction, the nonlocal Kerr nonlinearity coming from the Rydberg-Rydberg interaction brings a phase modulation to the probe field; due to the role played by the diffraction in the transverse (i.e., $x$ and $y$) directions, the phase modulation is converted into amplitude modulation. Because of the joint effect of the phase and amplitude modulations, in some parameter domains the probe field undergoes MI and reorganizes its spatial distribution, and hence the formation of optical patterns occurs.\n\n\nThe emergence of the different self-organized structures (i.e., 
the hexagonal and square lattice optical patterns) and the related phase changes originate from the spatial symmetry breaking of the system.\nTo understand this and to further illustrate the differences between the various optical lattice patterns, a detailed theoretical analysis of the ground-state energy of the system for different spatial distributions of the probe-field intensity is given in Appendix~\\ref{appC}.\n\n\\subsection{Formation of nonlocal spatial optical solitons}\\label{sec34}\n\nSpatial optical solitons, i.e., localized nonlinear optical structures resulting from the balance between nonlinearity and diffraction, can form\nthrough the MI of plane waves~\\cite{Kivshar2006,Chen2012}. However,\na necessary condition for the formation of an optical soliton is that the Kerr nonlinearity in the system be of the self-focusing type. As indicated in Sec.~\\ref{sec32}, due to the microwave dressing there exist four kinds of nonlocal Kerr nonlinearities in our system, described by the four response functions (i.e., $G_{33}^s$, $G_{44}^s$, $G_{34}^d$, $G_{34}^e$), and one of them (i.e., $G_{44}^s$) is attractive. Therefore, it is possible to make the total Kerr nonlinearity of the system self-focusing if suitable system parameters are chosen.\n\n\nIn fact, a self-focusing total Kerr nonlinearity can indeed be obtained by choosing the following system parameters:\n$\\Delta_2=6.28\\times 10^2\\,{\\rm MHz}$,\n$\\Delta_3=6.92\\,{\\rm MHz}$,\n$\\Delta_4=1\\times 10^4\\,{\\rm Hz}$,\n$\\Gamma_{12}=2\\pi\\times 6\\,{\\rm MHz}$,\n$\\Gamma_3=\\Gamma_{4}=2\\pi\\times 1.67\\times 10^{-2}$ MHz, $\\Omega_{c}=90\\,{\\rm MHz}$, and $\\N_a=1.0\\times 10^{11}\\,{\\rm\ncm^{-3}}$. 
In this situation, the attractive interaction contributed by $G_{44}^s$ plays a dominant role, and hence the plane-wave state (\\ref{PW}) will undergo MI and may be squeezed into a soliton by the Kerr nonlinearity.\n\n\nTo confirm the MI, a numerical simulation based on an imaginary-time propagation method is carried out by solving the NNLS equation (\\ref{NNLS2}) with the above parameters. Shown in Fig.~\\ref{Fig6}(b)\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig6.eps}\n\\caption{\\footnotesize Formation and propagation of nonlocal spatial optical solitons in the presence of the microwave dressing.\n(a)~Initial condition $|u|=1.3~\\text{sech}[(\\xi^2+\\eta^2)^{1\/2}]$ at $s=0$ ($s= z\/2L_{\\rm diff}$; $L_{\\rm diff}$ is the diffraction length).\n(b)~Spatial distribution of the soliton when propagating to the position $s=5$, obtained by taking the normalized probe-field amplitude $|u|$ as a function of the dimensionless coordinates $\\xi= x\/R_0$ and $\\eta= y\/R_0$\n($R_0$ is the typical transverse beam radius of the probe field)\nfor the microwave field $\\Omega_m=10$~MHz and the nonlocality degree $\\sigma=1$.\n(c)~Random initial condition $|u|= 1+ 0.05f$ ($f$ is a Gaussian noise)\nand (d)~the soliton (generated by the random initial condition) when propagating to $s=5$.\n(e), (f), and (g)~Two-, three-, and four-soliton solutions when propagating to $s=5$ for $(\\Omega_m, \\sigma)=(10,1.2)$, $(15,1.4)$, and $(18,1.8)$, respectively.\n}\n\\label{Fig6}\n\\end{figure}\\noindent\nis the spatial distribution of the probe-field envelope when it propagates over 10 diffraction lengths (i.e., $s\\equiv z\/2L_{\\rm diff}=5$), obtained by taking $|u|$ (the normalized probe-field amplitude) as a function of the dimensionless coordinates $\\xi= x\/R_0$ and $\\eta=y\/R_0$, for the microwave field $\\Omega_m=10$~MHz and the nonlocality degree of the Kerr nonlinearity $\\sigma=1$. 
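A bare-bones real-time split-step propagation of this kind, with a hypothetical attractive Gaussian response standing in for the actual self-focusing $G$ and illustrative grid and step parameters, can be sketched as:

```python
import numpy as np

# Real-time split-step propagation sketch for the schematic nonlocal NLS
#   i u_s + ∇⊥²u + u (R ⊛ |u|²) = 0,
# launched from a sech-shaped input as in Fig. 6(a). The attractive
# Gaussian response R and all numerical parameters are assumptions.
N, L = 128, 16.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

sigma = 1.0                               # nonlocality degree (illustrative)
R = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)  # R > 0: focusing
Rk = np.fft.fft2(np.fft.ifftshift(R)) * dx * dx

u = 1.3 / np.cosh(np.sqrt(X**2 + Y**2))  # |u| = 1.3 sech(r) at s = 0
p0 = np.sum(np.abs(u)**2) * dx * dx      # input power (conserved quantity)

ds = 0.002
for _ in range(500):                      # propagate to s = 1 (illustrative)
    u = np.fft.ifft2(np.exp(-0.5j * ds * K2) * np.fft.fft2(u))   # half kinetic
    V = np.real(np.fft.ifft2(Rk * np.fft.fft2(np.abs(u)**2)))
    u = u * np.exp(1j * ds * V)                                  # nonlinear phase
    u = np.fft.ifft2(np.exp(-0.5j * ds * K2) * np.fft.fft2(u))   # half kinetic

power = np.sum(np.abs(u)**2) * dx * dx   # should equal p0 up to roundoff
```

The scheme conserves the power $\int|u|^2 d^2\zeta$ exactly (the nonlinear step is a pure phase rotation), which provides a convenient numerical check during long propagation runs.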
The initial condition used in the simulation is $|u|=1.3~\\text{sech}[(\\xi^2+\\eta^2)^{1\/2}]$\\, [Fig.~\\ref{Fig6}(a)]. We see that a nonlocal spatial optical soliton can indeed form in the system and that it is quite stable during propagation.\nNote that solitons can also be generated by using random initial conditions. Fig.~\\ref{Fig6}(d) shows the spatial distribution of a soliton when it is created and propagates over 10 diffraction lengths, for which the initial condition used is of the form $|u|= 1+ 0.05f$, where $f$ is a Gaussian noise\\, [Fig.~\\ref{Fig6}(c)].\n\n\nDifferent from previous studies, the nonlocal spatial optical solitons found here can be actively manipulated by tuning $\\Omega_m$ and $\\sigma$. Shown in panels (e), (f), and (g) of Fig.~\\ref{Fig6} are two-soliton, three-soliton, and four-soliton solutions when propagating over 10 diffraction lengths (i.e., $s=5$), obtained for $(\\Omega_m, \\sigma)=(10,1.2)$, $(15,1.4)$, and $(18,1.8)$, respectively.\nOther multi-soliton solutions of the system may also be obtained.\n\n\n\\section{Summary}\\label{sec4}\n\nIn this work, we have proposed a scheme for the realization of optical pattern formation and spatial solitons via Rydberg-EIT. Through the use of microwave dressing, we have shown that the nonlocal Kerr nonlinearity of the system can be actively manipulated and its magnitude significantly enhanced. Based on such a nonlocal and tunable Kerr nonlinearity, we have demonstrated that a plane-wave probe field can undergo MI and spontaneous symmetry breaking, and thereby various self-organized optical patterns may emerge in the system. In particular, we have found that a hexagonal lattice pattern, which appears after the MI when the repulsive part of the nonlocal nonlinear response function is larger than its attractive part, may develop into several types of square lattice patterns when the microwave field is applied and tuned actively. 
Furthermore, through the MI, the formation of nonlocal spatial optical solitons has also been found when the attractive part of the nonlocal nonlinear response function is dominant over its repulsive part. Different from previously reported results, the optical patterns and nonlocal optical solitons discovered here can be flexibly adjusted and controlled through changes of the effective probe-field intensity, the nonlocality degree of the Kerr nonlinearity, and the strength of the microwave field. Our work opens a way for the versatile control of the self-organization and structural phase transitions of laser light based on microwave-dressed Rydberg gases, which may have potential applications in optical information processing and transmission.\n\n\n\n\\acknowledgments\n\nThe authors thank Y.-C. Zhang and Z. Bai for useful discussions. This work was supported by the National Natural Science Foundation of China under Grants\nNo.~11474099 and No.~11975098.