\section{Introduction}

The present paper studies matrix sequence data with a latent low-rank structure. Instead of modeling the mean conditional on the latent factors as in recent works (\cite{wang2019factor}; \cite{yu2022projection}; \cite{Chen2020StatisticalIF}; \cite{Gao2021A}), we model the conditional quantiles by an interactive effect of the row and column sections. The parameters (row and column factor loadings and factor matrices) are learned by minimizing the empirical check loss accumulated over all entries. For the first time, we derive the statistical accuracy of the estimated factor loading matrices and factor matrices, under some mild conditions. On the modeling side, this is essentially a work where the matrix factor structure meets the quantile feature representation.

On the matrix factor structure, we assume in the present paper that each matrix observation is driven by a much lower dimensional factor matrix, and that the two cross sections along the row and column dimensions interact with each other and thus generate the entries of the $t$-th data matrix ${\mbox{\boldmath $X$}}_t=(X_{ijt})_{p_1\times p_2}$. For example, in a recommender system, ${\mbox{\boldmath $X$}}_t$ is a rating matrix of $p_1$ customers and $p_2$ commodities, and the scores in ${\mbox{\boldmath $X$}}_t$ are high when the latent $r_1$ common consumption preferences of the customers match the latent $r_2$ common features of the $p_2$ goods. We focus on matrix sequences rather than the large vectors appearing in standard statistics and econometrics for two reasons. First, many recent data sets, in financial markets, recommender systems, social networks, international trade, and electronic business platforms, are themselves well collected or organized intrinsically in matrices (or more generally in the form of tensors). The matrix factor structure is empirically found in these data sets and works well in applications, cf. \cite{Chen2020Modeling} for world trade data analysis. Second, modeling the matrix-valued data with a low-rank structure, e.g. (\ref{model}) below, makes the model parsimonious and statistical inference efficient once the structure is interpretably reasonable. A naive approach to analyzing the data matrix ${\mbox{\boldmath $X$}}_t$ is to ``flatten'' it into a long vector $vec({\mbox{\boldmath $X$}}_t)$ by piling it down column by column or row by row. After that, existing vector sequence models and inference procedures, like the vector factor models in \cite{stock2002forecasting}, \cite{stock2002macroeconomic}, \cite{bai2002determining}, \cite{bai2003inferential}, \cite{Trapani2018A}, \cite{Barigozzi2020Sequential}, \cite{fan2013large}, \cite{Ait2017high}, \cite{kong2017number}, \cite{kong2018systematic}, \cite{kong2019factor}, \cite{onatski2010determining}, \cite{Pelger2019Large} and \cite{Chen2021quantile}, can be applied. However, the flattened vector factor modeling easily misses the interplay between the row and column sections, and has parameter complexity of order $O(p_1p_2+T)$, while the row-column interaction model (see (\ref{model}) below) has complexity of order $O(p_1+p_2+T)$.
This is also where the efficiency gain of the present paper comes from compared with the vector quantile factor modeling.
For more detailed motivation to study matrix or tensor sequence data, we refer to \cite{wang2019factor}, \cite{chen2019constrained}, \cite{chen2021factor} and \cite{chen2020testing}.

On the quantile feature representation, mathematically, the $\tau$-quantile of a random variable $Y$ is $Q_{\tau}(Y)=\inf\{y; P(Y\leq y)\geq \tau\}$. With the increasing complexity of data sets, how to understand the co-movement of the quantiles of large-dimensional random vectors evolving in time is of vital importance in theory and applications. To the best of our knowledge, \cite{Chen2021quantile} is the first paper that models the $\tau$-quantile of a large vector by a vector factor structure. \cite{Ando2021Quantile} extended \cite{Chen2021quantile} to allow for observed covariates in modeling the panel quantiles. \cite{Yu2021Quantile} applied the quantile factor structure to estimate the risk premium. But, so far, no work has been done to investigate the co-movement of the quantiles of a matrix sequence (a third-order tensor) or, even more generally, a tensor sequence. The more parsimonious interactive quantile factor representation, compared to the vector quantile factor model, is still not well understood in achieving higher statistical estimation precision (faster convergence rates in estimating the row and column factor loadings and factors).

In this paper, we estimate the row and column factor loadings and factors by minimizing the empirical check loss function. Our theory demonstrates that our estimates converge at rate $O_p(1/\min\{\sqrt{p_1T},\sqrt{p_2T},\sqrt{p_1p_2}\})$ in the sense of averaged Frobenius norm, if the quantile interactive mechanism is effective. Our theoretical rate is faster than $O_p(1/\min\{\sqrt{p_1p_2},\sqrt{T}\})$, the rate expected from the vector quantile factor analysis by vectorizing ${\mbox{\boldmath $X$}}_t$, and the gain is more pronounced when the sequence length $T$ is short. To the best of our knowledge, this is the first result on the estimation of the matrix quantile factor model, and the first to reveal the interactive effect in reducing the estimation error. Our theory also shows that the convergence rates are reached without any moment constraints on the idiosyncratic errors, hence the estimation is robust to heavy tails of the heterogeneous idiosyncratic errors. Due to the non-convexity of the objective function, we present an iterative algorithm to find an approximate solution. Extensive simulation studies show that the numerical solutions are close enough to the true parameter space, and demonstrate the robustness to heavy tails. To determine the pair of row and column factor numbers, we present three criteria, which are proved to be consistent and verified by extensive simulations.

Most related to this work are some exceptional papers that focused on robust factor analysis and vector quantile factor models. The benchmark work \cite{He2022large} provided the first robust factor analysis procedure that does not require any moment conditions on the factors and idiosyncratic errors. Inference on the quantile factor structure for large vector series with statistical theory has only been studied very recently in statistics and econometrics, though check loss optimization has long been considered in machine learning, for example, in image processing, \cite{Ke2005robust} and \cite{Aan2002robust} with $\tau=1/2$.
But no statistical theory had been established in the machine learning field. The seminal work \cite{Chen2021quantile} extensively studied the statistical properties of the theoretical minimizers of the summed check loss functions, and no moment constraints were required for the idiosyncratic components of all variables. \cite{He2022learn} is a concurrent independent work that investigated the vector quantile factor model and gave a maximum-norm result on the estimated factor loadings without any moment conditions on the idiosyncratic errors. See also extensions and applications in \cite{Ando2021Quantile} and \cite{Yu2021Quantile}, though both assumed strong moment conditions on the idiosyncratic components for all variables.

The present paper is organized as follows. Section \ref{MM} gives the matrix quantile factor model and the estimation method. The main results on estimating the cross-sectional factor spaces, together with the underlying assumptions, are provided in Section \ref{MR}. Section \ref{No} presents three model selection criteria to determine the numbers of row and column factors. Section \ref{SE} conducts simulations and Section \ref{RD} does empirical data analysis. Section \ref{CD} concludes. The technical proofs are relegated to the Appendix.

\section{Model and Methodology}\label{MM}

Quantiles are widely used in robust portfolio allocation, risk management, insurance regulation, quality evaluation, manufacturing monitoring, and so on. For example, in portfolio applications, a quantile-based scatter matrix can be used to construct robust portfolios. We model the co-movement of the quantiles of all entries in each matrix by the following matrix quantile factor model,
\begin{eqnarray}\label{model}
{\mbox{\boldmath $X$}}_t&=&Q_{\tau}({\mbox{\boldmath $X$}}_t|{\mbox{\boldmath $F$}}_{t,\tau})+{\mbox{\boldmath $E$}}_{t,\tau},\nonumber\\
Q_{\tau}({\mbox{\boldmath $X$}}_t|{\mbox{\boldmath $F$}}_{t,\tau})&=&(Q_{\tau}(X_{ijt}|{\mbox{\boldmath $F$}}_{t,\tau}))_{p_1\times p_2}={\mbox{\boldmath $R$}}_{\tau}{\mbox{\boldmath $F$}}_{t,\tau}{\mbox{\boldmath $C$}}_{\tau}^{\prime},
\end{eqnarray}
where ${\mbox{\boldmath $R$}}_{\tau}$, ${\mbox{\boldmath $C$}}_{\tau}$ and ${\mbox{\boldmath $F$}}_{t,\tau}$ are the $p_1\times k_{1,\tau}$ row factor loading matrix, the $p_2\times k_{2,\tau}$ column factor loading matrix and the $k_{1,\tau}\times k_{2,\tau}$ common factor matrix, respectively, and ${\mbox{\boldmath $E$}}_{t,\tau}$ is an error matrix. Obviously, $Q_{\tau}({\mbox{\boldmath $E$}}_{t,\tau}|{\mbox{\boldmath $F$}}_{t,\tau})=0$. The subscript $\tau$ emphasizes the dependence on $\tau$; that is, the low-rank quantile structure is heterogeneous across different quantile levels. Model (\ref{model}) demonstrates that the entries of ${\mbox{\boldmath $X$}}_t$ depend on how close the rows of ${\mbox{\boldmath $R$}}_{\tau}$ are to the rows of ${\mbox{\boldmath $C$}}_{\tau}$, i.e., an interactive effect between the row and column sections of variables. We refer to ${\mbox{\boldmath $R$}}_{\tau}{\mbox{\boldmath $F$}}_{t,\tau}{\mbox{\boldmath $C$}}_{\tau}^{\prime}$ and ${\mbox{\boldmath $E$}}_{t,\tau}$ as the common and idiosyncratic components, respectively. Model (\ref{model}) includes the two-way quantile fixed effect model as a special case.
In particular, setting ${\mbox{\boldmath $R$}}_{\tau}=({\mbox{\boldmath $\alpha$}}_{p_1\times 1}(\tau), (\tilde{{\mbox{\boldmath $R$}}}_{\tau})_{p_1\times (k_{1,\tau}-2)}, {\bf{1}}_{p_1\times 1})$, ${\mbox{\boldmath $F$}}_{t,\tau}=\mbox{diag}\{1, (\tilde{{\mbox{\boldmath $F$}}}_{t,\tau})_{(k_{1,\tau}-2)\times (k_{2,\tau}-2)}, 1\}$ and ${\mbox{\boldmath $C$}}_{\tau}=({\bf{1}}_{p_2\times 1}, (\tilde{{\mbox{\boldmath $C$}}}_{\tau})_{p_2\times (k_{2,\tau}-2)}, {\mbox{\boldmath $\beta$}}_{p_2\times 1}(\tau))$, we have
$$
{\mbox{\boldmath $X$}}_t={\mbox{\boldmath $\alpha$}}(\tau) {\bf{1}}_{1\times p_2}+{\bf{1}}_{p_1\times 1}{\mbox{\boldmath $\beta$}}(\tau)^{\prime}+\tilde{{\mbox{\boldmath $R$}}}_{\tau}\tilde{{\mbox{\boldmath $F$}}}_{t,\tau}\tilde{{\mbox{\boldmath $C$}}}_{\tau}^{\prime}+{\mbox{\boldmath $E$}}_{t,\tau},
$$
where ${\mbox{\boldmath $\alpha$}}(\tau)$ and ${\mbox{\boldmath $\beta$}}(\tau)$ represent the time-invariant quantile fixed effects along the row and column dimensions, respectively. They can be heterogeneous across the rows and/or columns.

While the vector factor model is conceptually a generative mechanism for a single cross-section of variables that are closely related in nature, the matrix factor model in (\ref{model}) is a two-way joint generative model of two totally different cross-sections of variables. Though different in interpretation, model (\ref{model}) can be mathematically rewritten in the form of a vector factor model
\begin{equation}\label{vectorize}
\mbox{vec}({\mbox{\boldmath $X$}}_t)=({\mbox{\boldmath $C$}}_{\tau}\otimes{\mbox{\boldmath $R$}}_{\tau})\mbox{vec}({\mbox{\boldmath $F$}}_{t,\tau})+\mbox{vec}({\mbox{\boldmath $E$}}_{t,\tau}),
\end{equation}
where $\mbox{vec}(\cdot)$ is the vectorization operator that stacks the columns of a matrix into a long vector and $\otimes$ stands for the Kronecker product operator. A general vector factor model for an observed vector ${\mbox{\boldmath $x$}}_t$ is typically expressed as
\begin{equation}\label{vector}
({\mbox{\boldmath $x$}}_t)_{p\times 1}={\mbox{\boldmath $L$}}_{p\times k}({\mbox{\boldmath $f$}}_t)_{k\times 1}+({\mbox{\boldmath $\epsilon$}}_t)_{p\times 1},
\end{equation}
where ${\mbox{\boldmath $L$}}$, ${\mbox{\boldmath $f$}}_t$ and ${\mbox{\boldmath $\epsilon$}}_t$ are the loading matrix, the factor vector and the idiosyncratic error vector, respectively. That is, (\ref{model}) can be mathematically regarded as a vector factor model with parameter restrictions ${\mbox{\boldmath $L$}}={\mbox{\boldmath $C$}}_{\tau}\otimes {\mbox{\boldmath $R$}}_{\tau}$, $p=p_1p_2$ and $k=k_1k_2$. When the Kronecker structure ${\mbox{\boldmath $C$}}_{\tau}\otimes {\mbox{\boldmath $R$}}_{\tau}$ is latent in the matrix sequence, a simple vectorization and vector principal component analysis would yield a consistent estimate of the factor loading matrix ${\mbox{\boldmath $L$}}$ (and hence ${\mbox{\boldmath $C$}}_{\tau}\otimes {\mbox{\boldmath $R$}}_{\tau}$) up to orthogonal transformation in the sense of averaged Frobenius norm. As expected from the vector quantile factor analysis in \cite{Chen2021quantile}, the convergence rate for estimating ${\mbox{\boldmath $L$}}$ is $1/\min\{\sqrt{p_1p_2},\sqrt{T}\}$.
To recover the row and column factor spaces spanned by ${\mbox{\boldmath $R$}}_{\tau}$ and ${\mbox{\boldmath $C$}}_{\tau}$, a further nearest Kronecker decomposition has to be done, cf. \cite{van2000kronecker}, but the resulting estimates of ${\mbox{\boldmath $R$}}_{\tau}$ and ${\mbox{\boldmath $C$}}_{\tau}$ depend on the estimation error for ${\mbox{\boldmath $L$}}$. The statistical theory for the nearest Kronecker decomposition, with diverging $p_1, p_2$ and $T$, is still open and out of the scope of the present paper. The other way around with vector quantile factor analysis is to minimize the empirical check loss function under the restriction ${\mbox{\boldmath $L$}}={\mbox{\boldmath $C$}}_{\tau}\otimes {\mbox{\boldmath $R$}}_{\tau}$, but the number of restrictions diverges, which makes the computation complex. The matrix form (\ref{model}) gives a neat joint modeling of a two-way structure to start from. A simple iterative optimization approach based on (\ref{model}) is given to compute the theoretical minimizers.

Coming back to the general model (\ref{model}), the row factor loading matrix ${\mbox{\boldmath $R$}}_{\tau}$, the column factor loading matrix ${\mbox{\boldmath $C$}}_{\tau}$ and the factor matrix ${\mbox{\boldmath $F$}}_{t, \tau}$ are not separately identifiable, though the common component itself is under some signal conditions. Indeed, there exist orthonormal square matrices ${\mbox{\boldmath $O$}}_R$ and ${\mbox{\boldmath $O$}}_C$ such that ${\mbox{\boldmath $R$}}_{\tau}{\mbox{\boldmath $F$}}_{t, \tau}{\mbox{\boldmath $C$}}_{\tau}^{\prime}={\mbox{\boldmath $R$}}^*_{\tau}{\mbox{\boldmath $F$}}^*_{t, \tau}{\mbox{\boldmath $C$}}^{*\prime}_{\tau}$, where ${\mbox{\boldmath $R$}}^*_{\tau}={\mbox{\boldmath $R$}}_{\tau}{\mbox{\boldmath $O$}}_R$, ${\mbox{\boldmath $C$}}^*_{\tau}={\mbox{\boldmath $C$}}_{\tau}{\mbox{\boldmath $O$}}_C$ and ${\mbox{\boldmath $F$}}_{t,\tau}^*={\mbox{\boldmath $O$}}_R^{\prime}{\mbox{\boldmath $F$}}_{t,\tau}{\mbox{\boldmath $O$}}_C^{\prime}$.
Without loss of generality, we assume throughout the paper that
\begin{equation}\label{id}
\frac{{\mbox{\boldmath $R$}}^{\prime}_{\tau}{\mbox{\boldmath $R$}}_{\tau}}{p_1}=\mathbb{I}_{k_1}, \ \frac{{\mbox{\boldmath $C$}}^{\prime}_{\tau}{\mbox{\boldmath $C$}}_{\tau}}{p_2}=\mathbb{I}_{k_2}, \ \frac{\sum^T_{t=1}{\mbox{\boldmath $F$}}_{t,\tau}{\mbox{\boldmath $F$}}_{t,\tau}^{\prime}}{T} \ \mbox{and} \ \frac{\sum^T_{t=1}{\mbox{\boldmath $F$}}_{t,\tau}^{\prime}{\mbox{\boldmath $F$}}_{t,\tau}}{T} \ \mbox{are diagonal matrices}.
\end{equation}

To estimate the parameters, we propose to minimize the empirical check loss function,
$$
\mathbb{M}_{p_{1}p_{2}T}(\theta)=\frac{1}{p_1p_2T}\sum^{p_1}_{i=1}\sum^{p_2}_{j=1}\sum^T_{t=1}\rho_{\tau}(X_{ijt}-r_i^{\prime}{\mbox{\boldmath $F$}}_tc_j),
$$
with respect to $\theta=\{r_1,..., r_{p_1};c_1,...,c_{p_2};{\mbox{\boldmath $F$}}_1,...,{\mbox{\boldmath $F$}}_T\}$, where $\rho_{\tau}(u)=(\tau-I\{u\leq 0\})u$, and $r_i^{\prime}$ and $c_j^{\prime}$ are the $i$-th row of ${\mbox{\boldmath $R$}}_{\tau}$ and the $j$-th row of ${\mbox{\boldmath $C$}}_{\tau}$, respectively. A small numerical sketch of this objective is given below.
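The following Python snippet is a minimal sketch (not part of the formal procedure; the function names and array layout are ours) that evaluates $\mathbb{M}_{p_1p_2T}(\theta)$ for data stored as an array of shape $(T, p_1, p_2)$:
\begin{verbatim}
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = (tau - 1{u <= 0}) * u
    return (tau - (u <= 0)) * u

def objective(X, R, F, C, tau):
    # X: (T, p1, p2); R: (p1, k1); F: (T, k1, k2); C: (p2, k2)
    fitted = np.einsum('ia,tab,jb->tij', R, F, C)  # R F_t C' for each t
    return check_loss(X - fitted, tau).mean()
\end{verbatim}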
Our estimates, denoted by $\hat{{\mbox{\boldmath $R$}}}_{\tau}$, $\hat{{\mbox{\boldmath $F$}}}_{t,\tau}$ and $\hat{{\mbox{\boldmath $C$}}}_{\tau}$, are simply the minimizers of the above empirical check loss function, assuming that the numbers of factors $k_{1, \tau}$ and $k_{2, \tau}$ are known a priori. Later, we will give consistent estimates of $k_{1, \tau}$ and $k_{2, \tau}$ by three methods. Notice that the empirical check loss function is not jointly convex in ${\mbox{\boldmath $R$}}_{\tau}$, ${\mbox{\boldmath $F$}}_{t, \tau}$
and ${\mbox{\boldmath $C$}}_{\tau}$, but it is marginally convex in each of them when the other two are fixed. Hence, we propose to optimize it via an iterative algorithm, see Algorithm \ref{alg1} below, whose performance is checked by the simulations.


\begin{algorithm}[H]
	\caption{Iterative algorithm for the row and column factor loading matrices and the factor matrix}\label{alg1}
	{\bf Input:} Data matrices $\{{\mbox{\boldmath $X$}}_t\}_{t\le T}$, the pair of row and column factor numbers $k_1$ and $k_2$\\
	{\bf Output:} Factor loading matrices and factor matrix

Step I: set $h=0$ and give initial values of $\{\hat{{\mbox{\boldmath $F$}}}_t(0)\}_{t=1}^T$ and $\hat{{\mbox{\boldmath $C$}}}(0)$ satisfying (\ref{id});

Step II: given $\{\hat{{\mbox{\boldmath $F$}}}_t(h)\}_{t=1}^T$ and $\hat{{\mbox{\boldmath $C$}}}(h)$, minimize $\mathbb{M}_{p_1p_2T}(\theta)$ with respect to ${\mbox{\boldmath $R$}}$ and obtain a normalized $\hat{{\mbox{\boldmath $R$}}}(h+1)$ so that (\ref{id}) is fulfilled;

Step III: given $\hat {\mbox{\boldmath $R$}}(h+1)$ and $\hat{{\mbox{\boldmath $C$}}}(h)$, minimize $\mathbb{M}_{p_1p_2T}(\theta)$ with respect to ${\mbox{\boldmath $F$}}_1,\ldots,{\mbox{\boldmath $F$}}_T$ and obtain $\{\hat{{\mbox{\boldmath $F$}}}_t(h+1)\}_{t=1}^T$;

Step IV: given $\hat {\mbox{\boldmath $R$}}(h+1)$ and $\{\hat{{\mbox{\boldmath $F$}}}_t(h+1)\}_{t=1}^T$, minimize $\mathbb{M}_{p_1p_2T}(\theta)$ with respect to ${\mbox{\boldmath $C$}}$ and obtain a normalized $\hat{\mbox{\boldmath $C$}}(h+1)$ so that (\ref{id}) is fulfilled;

Step V: set $h=h+1$ and repeat Steps II to IV until convergence or up to $h=m$.

\end{algorithm}


Although $\mathbb{M}_{p_1p_2T}(\theta)$ is not jointly convex, in each iteration it is convex in one component of $(\hat{{\mbox{\boldmath $R$}}}(h), \hat{{\mbox{\boldmath $C$}}}(h), \hat{{\mbox{\boldmath $F$}}}_t(h))$ with the other two components given. Our simulations show that the algorithm converges fast and leads to accurate estimation. For bi-convex quadratic loss minimization, \cite{Ge2017No} showed that there are no spurious local minima in positive semi-definite matrix completion. Motivated by \cite{Ge2017No}, we set the initial values in Algorithm \ref{alg1} by random initialization. Our simulation results justify the effectiveness of the iterative convex optimization with random initialization. A compact numerical sketch of the iteration is given below.
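The sketch below implements the alternating structure of Algorithm \ref{alg1} in Python. For brevity it replaces the exact quantile regressions in Steps II to IV by subgradient steps on the check loss (the step size and iteration count are illustrative choices of ours) and enforces only the loading normalizations in (\ref{id}); it illustrates the iteration, it is not our formal estimator.
\begin{verbatim}
import numpy as np

def renormalize(R, F, C):
    # rescale so that R'R/p1 = I and C'C/p2 = I, absorbing the
    # non-orthonormal parts of R and C into the factor matrices
    p1, p2 = R.shape[0], C.shape[0]
    Ur, Dr, Vrt = np.linalg.svd(R, full_matrices=False)
    Uc, Dc, Vct = np.linalg.svd(C, full_matrices=False)
    A = (np.diag(Dr) @ Vrt) / np.sqrt(p1)
    B = (np.diag(Dc) @ Vct) / np.sqrt(p2)
    F = np.einsum('ab,tbc,dc->tad', A, F, B)  # F_t <- A F_t B'
    return np.sqrt(p1) * Ur, F, np.sqrt(p2) * Uc

def fit_mqf(X, k1, k2, tau, n_iter=200, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    T, p1, p2 = X.shape
    R = rng.standard_normal((p1, k1))   # random initialization
    C = rng.standard_normal((p2, k2))
    F = rng.standard_normal((T, k1, k2))
    for _ in range(n_iter):
        for block in ('R', 'F', 'C'):
            res = X - np.einsum('ia,tab,jb->tij', R, F, C)
            G = ((res <= 0) - tau) / (p1 * p2 * T)  # subgradient w.r.t. fit
            if block == 'R':    # Step II
                R -= lr * np.einsum('tij,tab,jb->ia', G, F, C)
            elif block == 'F':  # Step III
                F -= lr * np.einsum('tij,ia,jb->tab', G, R, C)
            else:               # Step IV
                C -= lr * np.einsum('tij,ia,tab->jb', G, R, F)
        R, F, C = renormalize(R, F, C)
    return R, F, C
\end{verbatim}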
\section{Estimation of the Factor Spaces}\label{MR}

In this section, we present a main result on the estimation accuracy of the estimated row and column factor loading matrices. Before stating the theorem, we give some technical assumptions. Without confusion, we suppress the dependence on $\tau$ of the notation $k_{1,\tau}$ and $k_{2,\tau}$, and write them simply as $k_1$ and $k_2$.

\begin{assumption}\label{ass1}
Let $\mathcal{A}\subset \mathbb{R}^{k_{1}}$, $\mathcal{F}\subset \mathbb{R}^{k_{1}\times k_{2}}$,
$\mathcal{B}\subset \mathbb{R}^{k_{2}}$ and define
\begin{equation}
\Theta^{k_{1}k_{2}}=\{ r_{i}\in \mathcal{A}, {\mbox{\boldmath $F$}}_{t,\tau}\in \mathcal{F}, c_{j}\in \mathcal{B} \ \mbox{satisfying} \ (\ref{id})\}.
\end{equation}
\begin{enumerate}
\item $\mathcal{A}$, $\mathcal{F}$, $\mathcal{B}$ are compact sets
and the true parameter $\theta_{0}\in \Theta^{k_{1}k_{2}}$. The true factor matrix ${\mbox{\boldmath $F$}}^0_{t,\tau}$ satisfies
\begin{equation}\label{F1}
\frac{1}{T}\sum_{t=1}^{T}{\mbox{\boldmath $F$}}_{t,\tau}^0{\mbox{\boldmath $F$}}_{t,\tau}^{0\prime}
=\mbox{diag}(\sigma_{T1},\cdots,\sigma_{Tk_{1}})
\end{equation}
with $\sigma_{T1}\geq \cdots\geq \sigma_{Tk_{1}}$
and $\sigma_{Tj}\rightarrow \sigma_{j}$ as $T\rightarrow\infty$
for $j=1,\ldots,k_{1}$, where $\infty>\sigma_{1}>\cdots>\sigma_{k_{1}}>0$; and
\begin{equation}\label{F2}
\frac{1}{T}\sum_{t=1}^{T}{\mbox{\boldmath $F$}}_{t,\tau}^{0\prime}{\mbox{\boldmath $F$}}_{t,\tau}^0
=\mbox{diag}(\tilde{\sigma}_{T1},\cdots,\tilde{\sigma}_{Tk_{2}})
\end{equation}
with $\tilde{\sigma}_{T1}\geq \cdots\geq \tilde{\sigma}_{Tk_{2}}$
and $\tilde{\sigma}_{Tj}\rightarrow \tilde{\sigma}_{j}$ as $T\rightarrow\infty$
for $j=1,\ldots,k_{2}$, where $\infty>\tilde{\sigma}_{1}>\cdots>\tilde{\sigma}_{k_{2}}>0$.

\item The conditional density function of the idiosyncratic error variable $\varepsilon_{ijt}$ given $\{{\mbox{\boldmath $F$}}_{t,\tau}^0\}$, denoted as $\text{f}_{ijt}$, is continuous, and satisfies that: for any compact set $C \subset \mathbb{R}$ and any $x\in C$, there exists a positive constant $\underline{\text{f}}>0$ (depending on $C$) such that $\text{f}_{ijt}(x)\geq \underline{\text{f}}$ for all $i,j,t$.

\item Given $\{{\mbox{\boldmath $F$}}_{t,\tau}^0, 1\leq t \leq T \}$, $\{\varepsilon_{ijt}, 1\leq i \leq p_{1}, 1\leq j \leq p_{2}, 1\leq t \leq T\}$ are independent across $i,j$ and $t$.
\end{enumerate}
\end{assumption}

Assumption \ref{ass1}-1 is standard in the literature, e.g., the compactness of the parameter sets was assumed in \cite{Chen2021quantile}, and the existence of the limits in (\ref{F1}) and (\ref{F2}) is guaranteed by the law of large numbers under various weak-correlation conditions. Assumption \ref{ass1}-2 assumes the existence of density functions that are uniformly bounded from below on compact sets; see also similar conditions in \cite{Chen2021quantile} and \cite{He2022learn}. Assumption \ref{ass1}-3 restricts the idiosyncratic errors to be conditionally independent, though they may be dependent unconditionally; see the same condition in \cite{Chen2021quantile} and \cite{He2022learn}. Even if (\ref{id}) is satisfied, the columns of the loading matrices ${\mbox{\boldmath $R$}}_{\tau}$ and ${\mbox{\boldmath $C$}}_{\tau}$ are identifiable only up to a positive or negative sign.
We henceforth make the convention that the signs of the columns of ${\mbox{\boldmath $R$}}_{\tau}$ and ${\mbox{\boldmath $C$}}_{\tau}$ are all positive.

Now, we state our main result on the estimated row and column factor loading matrices.
\begin{theorem}\label{th1}
Under Assumption 1, as $p_1, p_2, T\rightarrow \infty$,
$$
\frac{\|\hat{{\mbox{\boldmath $R$}}}_{\tau}-{\mbox{\boldmath $R$}}_{0,\tau}\|_F}{\sqrt{p_{1}}}=O_{p}(L_{p_{1}p_{2}T}^{-1}), \
\frac{\|\hat{{\mbox{\boldmath $F$}}}_{\tau}-{\mbox{\boldmath $F$}}_{0,\tau}\|_F}{\sqrt{T}}=O_{p}(L_{p_{1}p_{2}T}^{-1}), \
\frac{\|\hat{{\mbox{\boldmath $C$}}}_{\tau}-{\mbox{\boldmath $C$}}_{0,\tau}\|_F}{\sqrt{p_{2}}}=O_{p}(L_{p_{1}p_{2}T}^{-1}),
$$
where $L_{p_1p_2T}=\min\{\sqrt{p_1p_2}, \sqrt{p_2T}, \sqrt{p_1T}\}$, and ${\mbox{\boldmath $R$}}_{0,\tau}$, ${\mbox{\boldmath $F$}}_{0,\tau}$ and ${\mbox{\boldmath $C$}}_{0,\tau}$ are, respectively, the true row factor loading matrix, the factor matrix and the column factor loading matrix.
\end{theorem}

Theorem \ref{th1} demonstrates that the convergence rate of our estimates of the row and column factor loading matrices and the factor matrix is $O_p(1/\min\{\sqrt{p_1p_2},\sqrt{p_2T},\sqrt{p_1T}\})$ in the sense of averaged Frobenius norm. For the estimation of the loading matrix in the vectorized form (\ref{vectorize}), the plug-in estimate $\hat{{\mbox{\boldmath $C$}}}_{\tau}\otimes\hat{{\mbox{\boldmath $R$}}}_{\tau}$ has the same convergence rate as its components in the sense of averaged Frobenius norm:
\begin{eqnarray}\label{plug-in}
&&\|\hat{{\mbox{\boldmath $C$}}}_{\tau}\otimes\hat{{\mbox{\boldmath $R$}}}_{\tau}-{\mbox{\boldmath $C$}}_{0,\tau}\otimes {\mbox{\boldmath $R$}}_{0, \tau}\|_F/\sqrt{p_1p_2}\nonumber\\
&\leq & C(\|\hat{{\mbox{\boldmath $C$}}}_{\tau}-{\mbox{\boldmath $C$}}_{0,\tau}\|_F/\sqrt{p_2}+\|\hat{{\mbox{\boldmath $R$}}}_{\tau}-{\mbox{\boldmath $R$}}_{0,\tau}\|_F/\sqrt{p_1})=O_p(L_{p_1p_2T}^{-1}).
\end{eqnarray}
As expected from \cite{Chen2021quantile}, the convergence rate (in averaged Frobenius norm) of estimating the loading space spanned by ${\mbox{\boldmath $C$}}_{0,\tau}\otimes{\mbox{\boldmath $R$}}_{0,\tau}$ under the framework of the vector quantile factor model (\ref{vector}) is $O_p(1/\min\{\sqrt{p_1p_2},\sqrt{T}\})$, obtained by piling down the columns of each observed matrix into a long vector. A simple comparison shows that the latter rate is never faster than ours; in particular, when $p_1p_2$ dominates $T$, ours is strictly faster than the rate obtained by vectorizing the matrix. This is intuitively interpretable because the structural restriction on ${\mbox{\boldmath $L$}}$ in (\ref{vector}) is not exploited by the vector quantile analysis.

\section{Model Selection Criteria}\label{No}

While in the previous section the numbers of quantile-dependent factors $k_{1}$ and $k_{2}$ were assumed to be known at each $\tau$, we now propose three different methods that select the correct numbers of factors at each quantile with probability approaching 1. The first procedure selects the numbers of factors by rank minimization (RM), the second uses an information criterion (IC), while the third implements an eigenvalue ratio thresholding approach (ER).
As before, the dependence on $\tau$ in all mathematical notation, including $k_{1,\tau}$ and $k_{2,\tau}$, is suppressed for simplicity.

\subsection{Rank Minimization}
Let $K_{1}$ and $K_{2}$ be two positive integers larger than $k_{1}$ and $k_{2}$, respectively.
Let $\mathcal{A}^{K_{1}}$ be a compact subset of $\mathbb{R}^{K_{1}}$, $\mathcal{F}^{K_{1}\times K_{2}}$ be a compact subset of $\mathbb{R}^{K_{1}\times K_{2}}$ and $\mathcal{B}^{K_{2}}$ be a compact subset of $\mathbb{R}^{K_{2}}$. Assume that
\begin{equation*}
\begin{aligned}
\left(
 \begin{array}{cc}
 {\mbox{\boldmath $F$}}_{t}^0 & \bf{0}_{k_{1}\times (K_{2}-k_{2})} \\
 \bf{0}_{(K_{1}-k_{1}) \times k_{2}} & \bf{0}_{(K_{1}-k_{1}) \times (K_{2}-k_{2})}
 \end{array}
 \right)\in \mathcal{F}^{K_{1}\times K_{2}}
\end{aligned}
\end{equation*}
for all $t$. Let $r_{i}^{K_{1}} \in \mathbb{R}^{K_{1}}$, ${\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}} \in \mathbb{R}^{K_{1}\times K_{2}}$, $c_{j}^{K_{2}} \in \mathbb{R}^{K_{2}}$ for all $i, t, j$ and
write
\begin{eqnarray*}
\theta^{K_{1}K_{2}}&=&(({r_{1}^{K_{1}}})^{\prime},\cdots,({r_{p_{1}}^{K_{1}}})^{\prime},
{\mbox{\boldmath $F$}}_{1}^{K_{1}\times K_{2}} ,\cdots,{\mbox{\boldmath $F$}}_{T}^{K_{1}\times K_{2}} ,({c_{1}^{K_{2}}})^{\prime},\cdots,({c_{p_{2}}^{K_{2}}})^{\prime})',\\
{\mbox{\boldmath $R$}}^{K_{1}}&=&(r_{1}^{K_{1}},\cdots,r_{p_{1}}^{K_{1}})^{\prime}, {\mbox{\boldmath $F$}}^{K_{1}\times K_{2}}=({\mbox{\boldmath $F$}}_{1}^{K_{1}\times K_{2}},\cdots,{\mbox{\boldmath $F$}}_{T}^{K_{1}\times K_{2}}), {\mbox{\boldmath $C$}}^{K_{2}}=(c_{1}^{K_{2}},\cdots,c_{p_{2}}^{K_{2}})^{\prime}.
\end{eqnarray*}

Consider the following normalization,
\begin{eqnarray}\label{6}
&&\frac{1}{p_{1}}({\mbox{\boldmath $R$}}^{K_{1}})'{\mbox{\boldmath $R$}}^{K_{1}}=\mathbb{I}_{K_{1}}, \frac{1}{p_{2}}({\mbox{\boldmath $C$}}^{K_{2}})'{\mbox{\boldmath $C$}}^{K_{2}}=\mathbb{I}_{K_{2}},\nonumber\\
&&\frac{1}{T}\sum_{t=1}^{T}{\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}}({\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}})' \ \mbox{and} \ \frac{1}{T}\sum_{t=1}^{T}({\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}})'{\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}}
\ \text{are diagonal with descending diagonal entries.}
\end{eqnarray}
Define
\begin{equation*}
\begin{aligned}
\Theta^{K_{1}K_{2}}=\{\theta^{K_{1}K_{2}}:& \ r_{i}^{K_{1}}\in \mathcal{A}^{K_{1}}, {\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}}\in \mathcal{F}^{K_{1}\times K_{2}}, c_{j}^{K_{2}}\in \mathcal{B}^{K_{2}} \ \text{for all} \ i,t,j; \\
& r_{i}^{K_{1}},{\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}}\ \mbox{and} \ c_{j}^{K_{2}}\text{ satisfy (\ref{6})}\},
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\hat{\theta}^{K_{1}K_{2}}&=( (\widehat{r}_{1}^{K_{1}})',\cdots,(\widehat{r}_{p_{1}}^{K_{1}})',
\widehat{{\mbox{\boldmath $F$}}}_{1}^{K_{1}\times K_{2}} ,\cdots,\widehat{{\mbox{\boldmath $F$}}}_{T}^{K_{1}\times K_{2}} ,(\widehat{c}_{1}^{K_{2}})',\cdots,(\widehat{c}_{p_{2}}^{K_{2}})')'\\
&=\arg\min_{\theta^{K_{1}K_{2}}\in\Theta^{K_{1}K_{2}}}
\frac{1}{p_{1}p_{2}T}\sum_{i=1}^{p_{1}}\sum_{j=1}^{p_{2}}\sum_{t=1}^{T}
\rho_{\tau}(X_{ijt}-(r_{i}^{K_{1}})'{\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}}c_{j}^{K_{2}}).
\end{aligned}
\end{equation*}
Moreover, write $\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}=(\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{1},\cdots,\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{T})$ and
\begin{equation*}
\begin{aligned}
\frac{1}{T}\sum_{t=1}^{T}\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t}(\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t})'
&=\text{diag}(\widehat{\sigma}^{K_{1}}_{T,1},\cdots,\widehat{\sigma}^{K_{1}}_{T,K_{1}}),\\
\frac{1}{T}\sum_{t=1}^{T}(\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t})'\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t}
&=\text{diag}(\widehat{\sigma}^{K_{2}}_{T,1},\cdots,\widehat{\sigma}^{K_{2}}_{T,K_{2}}).
\end{aligned}
\end{equation*}

The rank minimization estimators of the numbers of factors, $k_{1}$ and $k_{2}$, are defined as
\begin{equation*}
\widehat{k}_{1}^{r}=\sum_{j=1}^{K_{1}}\textbf{1}\{\widehat{\sigma}^{K_{1}}_{T,j}>C_{p_{1}p_{2}T} \}, \
\widehat{k}_{2}^{r}=\sum_{j=1}^{K_{2}}\textbf{1}\{\widehat{\sigma}^{K_{2}}_{T,j}>C_{p_{1}p_{2}T} \},
\end{equation*}
where $C_{p_{1}p_{2}T}$ is a sequence that goes to 0 as $p_{1},p_{2},T\rightarrow \infty$. In other words, $\widehat{k}_{1}^r$ and $\widehat{k}_{2}^r$ are, respectively, the
numbers of the diagonal elements of
$$
\sum_{t=1}^{T}\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t}(\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t})'/T \ \mbox{and} \ \sum_{t=1}^{T}(\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t})'\widehat{{\mbox{\boldmath $F$}}}^{K_{1}\times K_{2}}_{t}/T
$$
that are larger than the threshold $C_{p_{1}p_{2}T}$. A minimal numerical sketch of this counting rule is given below. The following theorem shows the consistency of the rank minimization estimator.

\begin{theorem}\label{th3.1}
Under Assumption 1,
$
P\left(\widehat{k}_{1}^r=k_{1}, \widehat{k}_{2}^r=k_{2}\right)\rightarrow 1
$
as $p_{1},p_{2},T\rightarrow \infty$ if
$K_{1}>k_{1}$, $K_{2}>k_{2}$, $C_{p_{1}p_{2}T}\rightarrow 0$, and $C_{p_{1}p_{2}T}L_{p_{1}p_{2}T}^{2}\rightarrow \infty$.
\end{theorem}
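Given factor estimates $\widehat{{\mbox{\boldmath $F$}}}_t^{K_1\times K_2}$ fitted with the deliberately large pair $(K_1,K_2)$ and rotated to satisfy (\ref{6}), the counting rule takes one line per dimension. The following sketch (with illustrative names of our own) makes this explicit:
\begin{verbatim}
import numpy as np

def rank_minimization(F_hat, threshold):
    # F_hat: (T, K1, K2), assumed normalized so that the two
    # second-moment matrices below are diagonal and descending
    T = F_hat.shape[0]
    S1 = np.einsum('tab,tcb->ac', F_hat, F_hat) / T  # sum_t F_t F_t' / T
    S2 = np.einsum('tab,tac->bc', F_hat, F_hat) / T  # sum_t F_t' F_t / T
    k1_hat = int((np.diag(S1) > threshold).sum())
    k2_hat = int((np.diag(S2) > threshold).sum())
    return k1_hat, k2_hat
\end{verbatim}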
\subsection{Information Criterion}

The second estimator of $(k_{1}, k_{2})$ is similar to the IC-based estimator of \cite{bai2002determining}, but is adapted to the matrix observations and the check loss function. For $(l_1, l_2)\in \mathcal{P}=:\{0,..., K_1\}\times \{0,..., K_2\}$, we search for the minimizer of a penalized empirical check loss function.

The IC-based estimator of $(k_{1}, k_{2})$ is defined as
\begin{equation*}
(\widehat{k}_{1}^{IC},\widehat{k}_{2}^{IC})=
\arg\min_{(l_1,l_2)\in\mathcal{P}}\left(\mathbb{M}_{p_{1}p_{2}T}(\widehat{\theta}^{l_{1}l_{2}})+(l_{1}+l_{2}) C_{p_{1}p_{2}T}\right),
\end{equation*}
where $\hat{\theta}^{l_1l_2}$ is defined similarly to $\hat{\theta}^{K_1K_2}$ except for replacing $(K_1, K_2)$ by $(l_1, l_2)$, pretending that there are $l_1$ row factors and $l_2$ column factors. Theorem \ref{th3.2} below demonstrates that the estimators by the information criterion are consistent.

\begin{theorem}\label{th3.2}
Suppose Assumption 1 holds, and assume that for any compact set $C \subset \mathbb{R}$
and any $u\in C$, there exists $\overline{f}$ (depending on $C$) such that $\text{f}_{ijt}(u)\leq \overline{f}$ for all $i, j, t$. Then
$P\left(\widehat{k}_{1}^{IC}=k_{1},\widehat{k}_{2}^{IC}=k_{2}\right)\rightarrow 1$, as $p_{1},p_{2},T\rightarrow \infty$ if $C_{p_{1}p_{2}T}\rightarrow 0$ and $C_{p_{1}p_{2}T}L_{p_{1}p_{2}T}^{2}\rightarrow \infty$.
\end{theorem}


\subsection{Eigenvalue Ratio Thresholding}

Due to the assumption on ${\mbox{\boldmath $F$}}_{t}^{K_{1}\times K_{2}}$ in Section 4.1, we expect $(\hat\sigma_{T,k_1+1}^{K_1},\ldots,\hat\sigma_{T,K_1}^{K_1})$ and $(\hat\sigma_{T,k_2+1}^{K_2},\ldots,\hat\sigma_{T,K_2}^{K_2})$ to be redundant and negligible. Therefore, motivated by the eigenvalue ratio approach in \cite{Ahn2013eigenvalue},
a direct estimator of $(k_1, k_2)$ is given by
$$
\hat k_1^{ER}=\arg\max_{1\le k\le K_1-1}\frac{\hat\sigma_{T,k}^{K_1}}{\hat\sigma_{T,k+1}^{K_1}+c_0L_{p_1p_2T}^{-2}}, \
\hat k_2^{ER}=\arg\max_{1\le k\le K_2-1}\frac{\hat\sigma_{T,k}^{K_2}}{\hat\sigma_{T,k+1}^{K_2}+c_0L_{p_1p_2T}^{-2}},
$$
where $c_0$ is a small positive constant that keeps the denominator strictly positive. In our simulation studies and real data analysis, we set $c_0=10^{-4}$.

\begin{theorem}\label{ER}
	Under Assumption \ref{ass1}, as $p_1, p_2, T\rightarrow \infty$ and $c_0\rightarrow 0$,
$$
P\left(\hat k_1^{ER}=k_1, \hat k_2^{ER}=k_2\right)\rightarrow 1.
$$
\end{theorem}

\section{Simulation Studies}\label{SE}

\subsection{Data generating process}
To investigate the performance of the proposed estimators, we generate data from the following matrix series,
\begin{equation}\label{data generating}
{\mbox{\boldmath $X$}}_t={\mbox{\boldmath $R$}}{\mbox{\boldmath $F$}}_t{\mbox{\boldmath $C$}}^\prime+\theta g_t{\mbox{\boldmath $E$}}_t,
\end{equation}
where ${\mbox{\boldmath $R$}}$ and ${\mbox{\boldmath $C$}}$ are $p_1\times k_1$ and $p_2\times k_2$ matrices, respectively. We set $k_1=2$ and $k_2=3$. The factor process follows an autoregressive model such that ${\mbox{\boldmath $F$}}_t=0.2{\mbox{\boldmath $F$}}_{t-1}+\Xi_t$, and $g_t$ is a scalar random variable satisfying $g_t=0.2g_{t-1}+\epsilon_t$. The entries in ${\mbox{\boldmath $R$}}$, ${\mbox{\boldmath $C$}}$, $\{\Xi_t\}$ and $\{\epsilon_t\}$ are all generated from i.i.d. $\mathcal{N}(0,1)$. The entries of $\{{\mbox{\boldmath $E$}}_t\}$ are i.i.d. from $\mathcal{N}(0,1)$, or from $t$ distributions with 3 or 1 degrees of freedom, covering both light-tailed and heavy-tailed distributions. $\theta$ is a parameter controlling the signal-to-noise ratio (SNR).

It is clear that model (\ref{data generating}) has 2 row factors and 3 column factors when $\tau=0.5$. If $\tau\ne 0.5$, we let $q_\tau=Q_{\tau}(\epsilon_{ijt})$, where $\epsilon_{ijt}$ denotes the $(i,j)$-th entry of ${\mbox{\boldmath $E$}}_t$, and rewrite the model as
 \[
X_{ijt}=r_i^\prime {\mbox{\boldmath $F$}}_tc_j+\theta q_\tau g_t+\theta g_t\tilde \epsilon_{ijt},\quad \text{with } \tilde \epsilon_{ijt}=\epsilon_{ijt}-q_\tau.
 \]
 Therefore, it has 3 row factors and 4 column factors, with the new factor loading matrices and factor score matrices being
 \[
{\mbox{\boldmath $R$}}_\tau=({\mbox{\boldmath $R$}},\mathbf{1}_{p_1}),\quad {\mbox{\boldmath $C$}}_\tau=({\mbox{\boldmath $C$}},\mathbf{1}_{p_2}),\quad {\mbox{\boldmath $F$}}_{t,\tau}=\left(\begin{matrix}
 {\mbox{\boldmath $F$}}_t&0\\
 0&\theta q_\tau g_t
 \end{matrix}\right).
 \]
The following snippet sketches this data-generating process.
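A minimal Python sketch (function name and interface ours) that draws $\{{\mbox{\boldmath $X$}}_t\}$ from (\ref{data generating}):
\begin{verbatim}
import numpy as np

def generate_data(T, p1, p2, k1=2, k2=3, theta=3.0, df=None, seed=0):
    # X_t = R F_t C' + theta * g_t * E_t; df=None gives N(0,1) errors,
    # df=3 or df=1 gives t_3 or t_1 errors
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((p1, k1))
    C = rng.standard_normal((p2, k2))
    F = np.zeros((T, k1, k2))
    g = np.zeros(T)
    Ft, gt = np.zeros((k1, k2)), 0.0
    for t in range(T):
        Ft = 0.2 * Ft + rng.standard_normal((k1, k2))  # F_t AR(1)
        gt = 0.2 * gt + rng.standard_normal()          # g_t AR(1)
        F[t], g[t] = Ft, gt
    E = (rng.standard_normal((T, p1, p2)) if df is None
         else rng.standard_t(df, size=(T, p1, p2)))
    X = np.einsum('ia,tab,jb->tij', R, F, C) + theta * g[:, None, None] * E
    return X, R, F, C
\end{verbatim}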
 To ensure the identification condition (\ref{id}), a normalization step should be applied to the loading and factor score matrices. For instance, when $\tau=0.5$, perform singular value decompositions of ${\mbox{\boldmath $R$}}$ and ${\mbox{\boldmath $C$}}$,
 \[
 {\mbox{\boldmath $R$}}={\mbox{\boldmath $U$}}_R{\mbox{\boldmath $D$}}_R{\mbox{\boldmath $V$}}_R^\prime:={\mbox{\boldmath $U$}}_R{\mbox{\boldmath $Q$}}_R,\quad {\mbox{\boldmath $C$}}={\mbox{\boldmath $U$}}_C{\mbox{\boldmath $D$}}_C{\mbox{\boldmath $V$}}_C^\prime:={\mbox{\boldmath $U$}}_C{\mbox{\boldmath $Q$}}_C.
 \]
 Further define
 \[
\tilde{\mbox{\boldmath $\Sigma$}}_1=\frac{1}{Tp_1p_2}\sum_{t=1}^T{\mbox{\boldmath $Q$}}_R{\mbox{\boldmath $F$}}_t{\mbox{\boldmath $C$}}^\prime{\mbox{\boldmath $C$}}{\mbox{\boldmath $F$}}_t^\prime{\mbox{\boldmath $Q$}}_R^\prime,\quad \tilde{\mbox{\boldmath $\Sigma$}}_2=\frac{1}{Tp_1p_2}\sum_{t=1}^T{\mbox{\boldmath $Q$}}_C{\mbox{\boldmath $F$}}_t^\prime{\mbox{\boldmath $R$}}^\prime{\mbox{\boldmath $R$}}{\mbox{\boldmath $F$}}_t{\mbox{\boldmath $Q$}}_C^\prime,
 \]
 and the eigenvalue decompositions
 \[
 \tilde{\mbox{\boldmath $\Sigma$}}_1=\tilde{\mbox{\boldmath $\Gamma$}}_1\tilde{\mbox{\boldmath $\Lambda$}}_1\tilde{\mbox{\boldmath $\Gamma$}}_1^\prime, \quad \tilde{\mbox{\boldmath $\Sigma$}}_2=\tilde{\mbox{\boldmath $\Gamma$}}_2\tilde{\mbox{\boldmath $\Lambda$}}_2\tilde{\mbox{\boldmath $\Gamma$}}_2^\prime.
 \]
 Then, the normalized loading and factor score matrices are
 \[
 \tilde{\mbox{\boldmath $R$}}=\sqrt{p_1}{\mbox{\boldmath $U$}}_R\tilde{\mbox{\boldmath $\Gamma$}}_1,\quad \tilde{\mbox{\boldmath $C$}}=\sqrt{p_2}{\mbox{\boldmath $U$}}_C\tilde{\mbox{\boldmath $\Gamma$}}_2,\quad \tilde{\mbox{\boldmath $F$}}_t=\tilde{\mbox{\boldmath $\Gamma$}}_1^\prime{\mbox{\boldmath $Q$}}_R{\mbox{\boldmath $F$}}_t{\mbox{\boldmath $Q$}}_C^\prime\tilde{\mbox{\boldmath $\Gamma$}}_2.
 \]
We are actually estimating $\tilde{\mbox{\boldmath $R$}}$, $\tilde{\mbox{\boldmath $C$}}$ and $\tilde{\mbox{\boldmath $F$}}_t$. Moreover, in the iterative algorithm, we normalize the estimators similarly in each step, so that condition (\ref{id}) is always satisfied. A numerical sketch of this normalization is given below.
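For completeness, a Python sketch of this rotation (our own illustration; note that we insert an extra $1/\sqrt{p_1p_2}$ rescaling of $\tilde{\mbox{\boldmath $F$}}_t$ so that the common component ${\mbox{\boldmath $R$}}{\mbox{\boldmath $F$}}_t{\mbox{\boldmath $C$}}^\prime$ is left exactly unchanged):
\begin{verbatim}
import numpy as np

def normalize_truth(R, F, C):
    # rotate (R, F_t, C) so that R'R/p1 = I, C'C/p2 = I and the
    # factor second-moment matrices are diagonal
    T, p1, p2 = F.shape[0], R.shape[0], C.shape[0]
    Ur, Dr, Vrt = np.linalg.svd(R, full_matrices=False)  # R = U_R Q_R
    Uc, Dc, Vct = np.linalg.svd(C, full_matrices=False)  # C = U_C Q_C
    Qr, Qc = np.diag(Dr) @ Vrt, np.diag(Dc) @ Vct
    G = np.einsum('ab,tbc,dc->tad', Qr, F, Qc)           # Q_R F_t Q_C'
    S1 = np.einsum('tab,tcb->ac', G, G) / (T * p1 * p2)  # tilde Sigma_1
    S2 = np.einsum('tab,tac->bc', G, G) / (T * p1 * p2)  # tilde Sigma_2
    # eigenvectors sorted by descending eigenvalue
    w1, G1 = np.linalg.eigh(S1); G1 = G1[:, np.argsort(w1)[::-1]]
    w2, G2 = np.linalg.eigh(S2); G2 = G2[:, np.argsort(w2)[::-1]]
    R_n = np.sqrt(p1) * Ur @ G1
    C_n = np.sqrt(p2) * Uc @ G2
    # extra 1/sqrt(p1 p2) keeps R_n F_n C_n' equal to R F_t C'
    F_n = np.einsum('ab,tac,cd->tbd', G1, G, G2) / np.sqrt(p1 * p2)
    return R_n, F_n, C_n
\end{verbatim}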
\subsection{Determining the numbers of factors for $\tau=0.5$}
This section aims to verify the effectiveness of the proposed methods for estimating the numbers of row and column factors when $\tau=0.5$. Table \ref{tab:number} reports the average estimated factor numbers over 500 replications and Table \ref{tab:freq} reports the frequencies of exact estimation, as $(T,p_1,p_2)$ grows gradually and the noises are sampled from different distributions.
The approaches proposed in \cite{Chen2020StatisticalIF} and \cite{yu2022projection} are taken as competitors, which are also designed for matrix factor models. Another natural idea is to first vectorize the data matrices ${\mbox{\boldmath $X$}}_t$ and then use the approach in \cite{Chen2021quantile}, which leads to an estimate of the total number of $k=k_1k_2$ factors. The two tables also report the results of these methods for comparison. $K_1, K_2$ are set as 6 for the matrix factor models, while $k_{\max}=12$ for \cite{Chen2021quantile}'s method. The SNR parameter is $\theta=3$.

Here we give the detailed simulation settings for the proposed approaches. There is a thresholding parameter $C_{p_1p_2T}$ for the rank-minimization and information criterion methods. Following \cite{Chen2021quantile}, for rank-minimization we set $C_{p_1p_2T}=\delta L_{p_1p_2T}^{-2/3}$, where $\delta=(\hat\sigma_{T,1}^{K_1}+\hat\sigma_{T,1}^{K_2})/2$. For the information criterion, we actually use an accelerated algorithm in the simulation rather than a direct grid search over $\{1,...,K_1\}\times\{1,...,K_2\}$. In detail, we first fix $l_2=K_2$ and estimate $k_1$ by grid search in $\{1,...,K_1\}$. Next, we fix $l_1=\hat k_1$ and estimate $k_2$ by grid search in $\{1,...,K_2\}$. The thresholding parameter for the information criterion is set as $C_{p_1p_2T}=\delta L_{p_1p_2T}^{-1}$, which is slightly smaller than that for rank-minimization.

By Tables \ref{tab:number} and \ref{tab:freq}, when the noises are from the standard normal distribution, the proposed three approaches with the matrix quantile factor model perform comparably with the $\alpha$-PCA ($\alpha=0$) of \cite{Chen2020StatisticalIF} and the projected estimation (PE) of \cite{yu2022projection}. On the other hand, when the noises are from the heavy-tailed distributions $t_3$ or $t_1$, the $\alpha$-PCA and PE methods gradually lose accuracy, while the proposed three methods remain reliable, due to the robustness of the check loss function. The vectorized method does not work in this example, mainly because the dimensions are much smaller than in the settings of \cite{Chen2021quantile} and we are considering weak signals (large $\theta$). Moreover, the data matrix after vectorization is severely unbalanced ($p_1p_2\gg T$), making the idiosyncratic errors too powerful. The advantage of the matrix quantile model over the vector quantile model verifies the rate improvement in our theorems.

\begin{table}[ht]
	\caption{Averaged $(\hat k_1,\hat k_2)$ by different approaches over 500 replications. ``mqf-ER'', ``mqf-RM'' and ``mqf-IC'' stand for the matrix quantile factor model with eigenvalue ratio, rank minimization and information criterion, respectively. ``$\alpha$-PCA'' and ``PE'' are from \cite{Chen2020StatisticalIF} and \cite{yu2022projection}, respectively. ``vqf-RM'' is the rank minimization method with the vectorized quantile factor model of \cite{Chen2021quantile}. ``mqf-ER'', ``mqf-RM'', ``mqf-IC'', ``$\alpha$-PCA'' and ``PE'' estimate $(k_1, k_2)$, while ``vqf-RM'' estimates $k=k_1k_2$.
\label{tab:number}}
\resizebox{\columnwidth}{!}{%
	\begin{tabular}{cccccccccccccccc}
		\hline
		${\mbox{\boldmath $E$}}_t$&$T$&$p_1=p_2$&mqf-ER&mqf-RM&mqf-IC&$\alpha$-PCA&PE&vqf-RM\\\hline
\multirow{9}{*}{$\mathcal{N}(0,1)$}&20&20&(1.87,2.94)&(2.11,3.11)&(1.05,1)&(1.77,2.83)&(1.88,2.93)&3.64\\
&20&50&(2,2.98)&(2.07,3.08)&(2,2.4)&(2,2.98)&(2,2.99)&4.74\\
&20&80&(2,3)&(2.02,3.03)&(2,2.88)&(2,3)&(2,3)&4.89\\
&50&20&(2,3)&(2,3)&(1.25,1.01)&(2,2.98)&(2,3)&5.22\\
&50&50&(2,3)&(2.02,3.02)&(2,3)&(2,3)&(2,3)&9.38\\
&50&80&(2,3)&(2,3)&(2,3)&(2,3)&(2,3)&8.23\\
&80&20&(2,3)&(2,3)&(1.71,1)&(2,3)&(2,3)&5.9\\
&80&50&(2,3)&(2,3)&(2,3)&(2,3)&(2,3)&9.44\\
&80&80&(2,3)&(2,3)&(2,3)&(2,3)&(2,3)&9.4\\\hline
\multirow{9}{*}{$t_3$}&20&20&(1.62,2.9)&(4,4.69)&(1.15,1.03)&(1.31,1.95)&(1.53,2.35)&4.2\\
&20&50&(2.01,2.98)&(2.4,3.42)&(1.97,2.01)&(2,2.14)&(2.45,2.63)&4.34\\
&20&80&(2,2.99)&(2.11,3.14)&(1.99,2.58)&(2.03,2.46)&(2.48,2.87)&4.58\\
&50&20&(1.99,2.9)&(3.13,3.96)&(1.62,1.03)&(1.71,1.66)&(2.06,2.1)&7.41\\
&50&50&(2,3)&(2.08,3.1)&(2,3)&(1.88,2.59)&(2.46,3.19)&8\\
&50&80&(2,3)&(2,3.01)&(2,3)&(2.17,3.16)&(2.4,3.37)&8.17\\
&80&20&(2,2.98)&(3.67,4.32)&(1.8,1.08)&(1.77,2.02)&(2.19,2.5)&9.03\\
&80&50&(2,3)&(2,3)&(2,3)&(2.12,3.13)&(2.31,3.33)&10.23\\
&80&80&(2,3)&(2,3)&(2,3)&(2.1,3.12)&(2.24,3.24)&10.63\\\hline
\multirow{9}{*}{$t_1$}&20&20&(2.25,2.92)&(5.62,5.73)&(1.09,1.01)&(1.97,1.96)&(2.17,2.13)&1.63\\
&20&50&(2.02,2.88)&(3,4.03)&(1.83,1.48)&(1.92,1.97)&(2.07,2.18)&1.62\\
&20&80&(2.02,2.98)&(2.59,3.65)&(1.99,2.59)&(1.79,1.81)&(2,2)&1.63\\
&50&20&(2,2.79)&(4.24,4.87)&(1.01,1)&(1.82,1.84)&(1.97,2.04)&1.79\\
&50&50&(2,3)&(2.56,3.61)&(2,2.97)&(1.77,1.78)&(1.93,1.95)&1.79\\
&50&80&(2,3)&(2.1,3.12)&(2,3)&(1.88,1.89)&(2.09,2.11)&1.76\\
&80&20&(2.04,3)&(5.94,5.95)&(1.4,1.01)&(1.91,1.9)&(2.12,2.06)&1.89\\
&80&50&(2,3)&(2.11,3.16)&(2,3)&(1.76,1.76)&(1.97,1.95)&1.96\\
&80&80&(2,3)&(2.02,3.02)&(2,3)&(1.87,1.86)&(2.05,2.02)&1.9\\\hline
	\end{tabular}}
\end{table}

\begin{table}[ht]
	\addtolength{\tabcolsep}{12pt}
	\caption{The frequencies of exactly estimating $(k_1, k_2)$ (or $k_1\times k_2$ for the vectorized model) by different approaches over 500 replications.
\label{tab:freq}}
	\resizebox{\columnwidth}{!}{%
	\begin{tabular}{cccccccccccccccc}
		\hline
		${\mbox{\boldmath $E$}}_t$&$T$&$p_1=p_2$&mqf-ER&mqf-RM&mqf-IC&$\alpha$-PCA&PE&vqf-RM\\\hline
\multirow{9}{*}{$\mathcal{N}(0,1)$}&20&20&0.83&0.89&0.00&0.68&0.83&0.03\\
&20&50&0.99&0.92&0.52&0.98&0.99&0.18\\
&20&80&1.00&0.98&0.89&1.00&1.00&0.21\\
&50&20&1.00&1.00&0.00&0.98&1.00&0.26\\
&50&50&1.00&0.98&1.00&1.00&1.00&0.03\\
&50&80&1.00&1.00&1.00&1.00&1.00&0.07\\
&80&20&1.00&1.00&0.00&1.00&1.00&0.53\\
&80&50&1.00&1.00&1.00&1.00&1.00&0.02\\
&80&80&1.00&1.00&1.00&1.00&1.00&0.02\\\hline
\multirow{9}{*}{$t_3$}&20&20&0.49&0.12&0.00&0.03&0.05&0.14\\
&20&50&0.97&0.68&0.32&0.27&0.31&0.15\\
&20&80&0.99&0.89&0.64&0.36&0.40&0.16\\
&50&20&0.90&0.31&0.00&0.06&0.12&0.15\\
&50&50&1.00&0.90&1.00&0.39&0.45&0.12\\
&50&80&1.00&0.99&1.00&0.69&0.63&0.07\\
&80&20&0.98&0.16&0.00&0.14&0.23&0.03\\
&80&50&1.00&1.00&1.00&0.76&0.68&0.02\\
&80&80&1.00&1.00&1.00&0.86&0.77&0.01\\\hline
\multirow{9}{*}{$t_1$}&20&20&0.09&0.00&0.00&0.01&0.00&0.01\\
&20&50&0.83&0.40&0.06&0.01&0.00&0.00\\
&20&80&0.97&0.63&0.67&0.01&0.00&0.01\\
&50&20&0.60&0.04&0.00&0.03&0.01&0.01\\
&50&50&1.00&0.53&0.97&0.02&0.01&0.01\\
&50&80&1.00&0.89&1.00&0.00&0.00&0.00\\
&80&20&0.68&0.00&0.00&0.03&0.02&0.01\\
&80&50&1.00&0.85&1.00&0.01&0.00&0.01\\
&80&80&1.00&0.98&1.00&0.00&0.00&0.01\\\hline
	\end{tabular}}
\end{table}

\subsection{Estimating loadings and factor scores for $\tau=0.5$}
Next, we investigate the accuracy of the estimated loadings and factor scores by different approaches. We use similar settings to those in Table \ref{tab:number} and let $\tau=0.5$. Note that the minimizers of the check loss function are not unique, so the estimated loading matrices converge only after a rotation. Due to this identification issue, we mainly focus on the estimation accuracy of the loading spaces. Let ${\mbox{\boldmath $R$}}_0$ and $\hat {\mbox{\boldmath $R$}}$ be the true and estimated loading matrices, respectively, both satisfying the identification condition in (\ref{id}). We define the distance between the two loading spaces by
\[
\mathcal{D}({\mbox{\boldmath $R$}}_0,\hat{\mbox{\boldmath $R$}})=\bigg(1-\frac{1}{k_1p_1^2}\text{tr}(\hat{\mbox{\boldmath $R$}}^\prime{\mbox{\boldmath $R$}}_0{\mbox{\boldmath $R$}}_0^\prime\hat{\mbox{\boldmath $R$}})\bigg)^{1/2}.
\]
It is easy to see that $\mathcal{D}({\mbox{\boldmath $R$}}_0,\hat{\mbox{\boldmath $R$}})$ always takes values in the interval $[0,1]$. A smaller value of $\mathcal{D}({\mbox{\boldmath $R$}}_0,\hat{\mbox{\boldmath $R$}})$ indicates more accurate estimation of ${\mbox{\boldmath $R$}}_0$, and when $\mathcal{D}({\mbox{\boldmath $R$}}_0,\hat{\mbox{\boldmath $R$}})=0$, the two loading spaces are exactly the same. A similar distance can be defined between ${\mbox{\boldmath $C$}}_0$ and $\hat {\mbox{\boldmath $C$}}$. A short numerical version of this distance is given below.
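For reference, a small Python helper (ours) that computes $\mathcal{D}({\mbox{\boldmath $R$}}_0,\hat{\mbox{\boldmath $R$}})$ for loadings normalized as in (\ref{id}):
\begin{verbatim}
import numpy as np

def space_distance(R0, R_hat):
    # D(R0, R_hat) = sqrt(1 - tr(R_hat' R0 R0' R_hat) / (k1 * p1^2))
    p1, k1 = R0.shape
    val = 1.0 - np.trace(R_hat.T @ R0 @ R0.T @ R_hat) / (k1 * p1 ** 2)
    return np.sqrt(max(val, 0.0))  # guard against rounding below zero
\end{verbatim}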
Let ${\mbox{\boldmath $W$}}_0={\mbox{\boldmath $C$}}_0\otimes{\mbox{\boldmath $R$}}_0$, $p=p_1p_2$ and $\hat{\mbox{\boldmath $W$}}=\widehat{{\mbox{\boldmath $C$}}\otimes{\mbox{\boldmath $R$}}}$. Similarly, we define
$$
\mathcal{D}({\mbox{\boldmath $W$}}_0,\hat{\mbox{\boldmath $W$}})=\bigg(1-\frac{1}{kp^2}\text{tr}(\hat{\mbox{\boldmath $W$}}^\prime{\mbox{\boldmath $W$}}_0{\mbox{\boldmath $W$}}_0^\prime\hat{\mbox{\boldmath $W$}})\bigg)^{1/2}.
$$
The existing vector quantile factor analysis estimates ${\mbox{\boldmath $W$}}_0$ by $\hat{\mbox{\boldmath $W$}}=\hat{\mbox{\boldmath $L$}}$ given in \cite{Chen2021quantile}. The matrix quantile factor analysis estimates ${\mbox{\boldmath $W$}}_0$ by the plug-in estimator $\hat{\mbox{\boldmath $W$}}=\hat{{\mbox{\boldmath $C$}}}\otimes\hat{{\mbox{\boldmath $R$}}}$.

Table \ref{tab:loading} reports the estimation accuracy of the loading spaces by different methods over 500 replications. The conclusions largely mirror those in Tables \ref{tab:number} and \ref{tab:freq}. The estimation based on matrix quantile factor models (``mqf'') is accurate and stable under all settings, while $\alpha$-PCA and ``PE'' only work in the light-tailed cases. Even in the normal cases, ``mqf'' can outperform ``PE'', mainly because the latter involves only a one-step projection and thus relies on a good initial projection direction. There are some enormous errors for $\alpha$-PCA, ``PE'' and the vectorized method in the table.



 \begin{table}[ht]
 	\addtolength{\tabcolsep}{0pt}
 	\caption{Distances between the estimated loading spaces and the truth by different approaches over 500 replications. ``mqf'' stands for matrix quantile factor analysis, while ``vqf'' stands for vectorized quantile factor analysis. \label{tab:loading}}
 	\resizebox{\columnwidth}{!}{%
 		\begin{tabular}{cccccccccccccccc}
 			\hline
 			\multirow{2}{*}{${\mbox{\boldmath $E$}}_t$}&\multirow{2}{*}{$T$}&\multirow{2}{*}{$p_1=p_2$}&\multicolumn{3}{l}{$\mathcal{D}({\mbox{\boldmath $R$}}_0,\hat{\mbox{\boldmath $R$}})$}&\multicolumn{3}{l}{$\mathcal{D}({\mbox{\boldmath $C$}}_0,\hat{\mbox{\boldmath $C$}})$}&\multicolumn{2}{l}{$\mathcal{D}({\mbox{\boldmath $W$}}_0,\hat{{\mbox{\boldmath $W$}}})$}\\\cmidrule(lr){4-6}\cmidrule(lr){7-9}\cmidrule(lr){10-11}
 			&&&mqf&$\alpha$-PCA&PE&mqf&$\alpha$-PCA&PE&mqf&vqf\\\hline
 		\multirow{9}{*}{$\mathcal{N}(0,1)$}&20&20&0.04&0.09&0.08&0.05&0.11&0.09&0.06&0.69\\
 		&20&50&0.02&0.06&0.05&0.03&0.08&0.07&0.04&0.74\\
 		&20&80&0.02&0.04&0.04&0.03&0.06&0.05&0.03&0.75\\
 		&50&20&0.02&0.04&0.04&0.02&0.06&0.05&0.03&0.36\\
 		&50&50&0.01&0.04&0.04&0.02&0.05&0.05&0.02&0.50\\
 		&50&80&0.01&0.03&0.02&0.01&0.03&0.03&0.02&0.38\\
 		&80&20&0.01&0.03&0.03&0.02&0.04&0.04&0.02&0.17\\
 		&80&50&0.01&0.03&0.03&0.01&0.04&0.03&0.02&0.25\\
 		&80&80&0.01&0.02&0.02&0.01&0.03&0.03&0.01&0.22\\\hline
 		\multirow{9}{*}{$t_3$}&20&20&0.06&0.50&0.50&0.07&0.53&0.43&0.09&0.85\\
 		&20&50&0.03&0.23&0.23&0.04&0.34&0.21&0.05&0.85\\
 		&20&80&0.02&0.17&0.17&0.03&0.26&0.15&0.04&0.85\\
 		&50&20&0.03&0.33&0.31&0.04&0.46&0.29&0.05&0.66\\
 		&50&50&0.02&0.20&0.18&0.02&0.27&0.15&0.03&0.66\\
 		&50&80&0.01&0.10&0.10&0.02&0.16&0.09&0.02&0.61\\
 		&80&20&0.02&0.33&0.30&0.03&0.43&0.27&0.04&0.52\\
 		&80&50&0.01&0.10&0.08&0.01&0.15&0.08&0.02&0.32\\
 		&80&80&0.01&0.07&0.06&0.01&0.09&0.06&0.01&0.29\\\hline
 		\multirow{9}{*}{$t_1$}&20&20&0.08&0.95&0.95&0.10&0.92&0.92&0.13&0.98\\
 		&20&50&0.03&0.98&0.98&0.05&0.97&0.97&0.06&0.97\\
 		&20&80&0.03&0.99&0.99&0.03&0.98&0.98&0.04&0.98\\
 		&50&20&0.03&0.95&0.95&0.04&0.92&0.93&0.05&0.75\\
 		&50&50&0.02&0.98&0.98&0.02&0.97&0.97&0.03&0.81\\
 		&50&80&0.01&0.99&0.99&0.02&0.98&0.98&0.02&0.80\\
 		&80&20&0.03&0.95&0.95&0.04&0.92&0.92&0.05&0.79\\
 		&80&50&0.01&0.98&0.98&0.02&0.97&0.97&0.02&0.61\\
 		&80&80&0.01&0.99&0.99&0.01&0.98&0.98&0.02&0.57\\\hline
 	\end{tabular}}
 \end{table}




\subsection{Results for $\tau\ne 0.5$}

Our next experiment investigates the performance of the proposed estimation procedure when $\tau\ne 0.5$. As argued in the data generating process, the model has 3 row factors and 4 column factors in such cases. Table \ref{tab:tau=0.35} reports the averaged estimates of the factor numbers, the frequencies of exact estimation, and the estimation accuracy of the loading spaces by the proposed matrix quantile factor model when $\tau=0.35$.

Overall, Table \ref{tab:tau=0.35} indicates that the proposed estimators perform satisfactorily in specifying quantile factors with $\tau\ne 0.5$. It should be pointed out that when $\tau\ne 0.5$, the effective sample size is reduced while the number of unknown parameters increases (more factors). As shown in Table \ref{tab:tau=0.35}, the estimated factor numbers and loading spaces are not as accurate as those in the previous experiment with $\tau=0.5$, as expected.


 \begin{table}[ht]
	\addtolength{\tabcolsep}{0pt}
	\caption{Estimation results for the matrix quantile factor model with $\tau=0.35$.\label{tab:tau=0.35}}
	\resizebox{\columnwidth}{!}{%
		\begin{tabular}{cccccccccccccc}
			\hline
			\multirow{2}{*}{${\mbox{\boldmath $E$}}_t$}&\multirow{2}{*}{$T$}&\multirow{2}{*}{$p_1=p_2$}&\multicolumn{3}{l}{Averaged $(\hat k_1,\hat k_2)$}&\multicolumn{3}{l}{ $\mathbb{P}(\hat k_1=k_1,\hat k_2=k_2)$}&$\mathcal{D}({\mbox{\boldmath $R$}}_0,\hat{\mbox{\boldmath $R$}})$&$\mathcal{D}({\mbox{\boldmath $C$}}_0,\hat{\mbox{\boldmath $C$}})$\\\cmidrule(lr){4-6}\cmidrule(lr){7-9}
			&&&mqf-ER&mqf-RM&mqf-IC&mqf-ER&mqf-RM&mqf-IC&&\\\hline
		\multirow{9}{*}{$\mathcal{N}(0,1)$}&20&20&(2.44,3.47)&(3.27,4.21)&(1.66,1.1)&0.32&0.69&0&0.13&0.11\\
		&20&50&(3,4)&(3.02,4.02)&(2.01,2.33)&1&0.98&0.01&0.07&0.06\\
		&20&80&(3,4)&(3,4.01)&(2.23,3.02)&1&0.99&0.14&0.05&0.05\\
		&50&20&(2.81,4)&(3.01,4.01)&(1.92,1.1)&0.81&0.98&0&0.08&0.06\\
		&50&50&(3,4)&(3,4)&(2.64,3.49)&1&1&0.5&0.04&0.04\\
		&50&80&(3,4)&(3,4)&(3,4)&1&1&1&0.03&0.03\\
		&80&20&(3,4)&(3,4)&(1.97,1.19)&1&1&0&0.06&0.05\\
		&80&50&(3,4)&(3,4)&(3,4)&1&1&1&0.03&0.03\\
		&80&80&(3,4)&(3,4)&(3,4)&1&1&1&0.03&0.02\\\hline
		\multirow{9}{*}{$t_3$}&20&20&(2.86,3.28)&(4.7,5.31)&(1.48,1.03)&0.41&0.08&0&0.13&0.14\\
		&20&50&(3.01,3.99)&(3.22,4.25)&(2.18,2.23)&0.97&0.77&0.03&0.07&0.07\\
		&20&80&(3,4)&(3.04,4.05)&(2.37,3.03)&0.99&0.96&0.24&0.06&0.05\\
		&50&20&(3,3.73)&(3.21,4.16)&(1.38,1.01)&0.79&0.77&0&0.08&0.08\\
		&50&50&(3,4)&(3.17,4.18)&(3,3.96)&1&0.79&0.96&0.05&0.04\\
		&50&80&(3,4)&(3.01,4.01)&(3,4)&1&0.99&1&0.04&0.03\\
		&80&20&(3,3.71)&(5.14,5.56)&(1.48,1.01)&0.86&0.01&0&0.07&0.06\\
		&80&50&(3,4)&(3,4)&(2.64,3.53)&1&1&0.55&0.03&0.03\\
		&80&80&(3,4)&(3,4)&(3,4)&1&1&1&0.03&0.02\\\hline
		\multirow{9}{*}{$t_1$}&20&20&(2.01,2.22)&(5.38,5.54)&(1.01,1.01)&0.01&0&0&0.18&0.24\\
		&20&50&(3.02,3.88)&(3.83,4.84)&(1.76,1.29)&0.8&0.36&0&0.08&0.08\\
		&20&80&(3.02,3.91)&(3.43,4.56)&(2.37,2.25)&0.92&0.53&0.11&0.07&0.07\\
		&50&20&(2.56,3.56)&(4.67,5.27)&(1,1)&0.56&0.06&0&0.1&0.08\\
		&50&50&(3,3.94)&(4.05,5.08)&(2.98,3.16)&0.98&0.14&0.54&0.05&0.05\\
		&50&80&(3,4)&(3.05,4.06)&(3,3.99)&1&0.94&0.99&0.04&0.04\\
		&80&20&(3.03,3.3)&(5.42,5.68)&(1.02,1.01)&0.63&0&0&0.08&0.07\\
		&80&50&(3,4)&(3.13,4.16)&(3,3.92)&1&0.81&0.94&0.04&0.04\\
		&80&80&(3,4)&(3.01,4.02)&(3,4)&1&0.98&1&0.03&0.03\\\hline
	\end{tabular}}
\end{table}

\subsection{Dependent idiosyncratic errors}
The last experiment investigates the performance of the proposed estimators when the idiosyncratic errors are dependent, both serially and cross-sectionally. We follow the data-generating process of Table \ref{tab:tau=0.35}, except that the idiosyncratic errors are generated by
\[
\epsilon_{ijt}=\mathcal{V}_{ijt}+0.2\mathcal{V}_{ij,t-1}+0.2\mathcal{V}_{i-1,jt}+0.2\mathcal{V}_{i,j-1,t},
\]
where the $\mathcal{V}_{ijt}$ are i.i.d. from the $\mathcal{N}(0,1)$, $t_3$ or $t_1$ distribution. Tables \ref{tab: 0.5 dependent} and \ref{tab: 0.35 dependent} report the estimation results at $\tau=0.5$ and $\tau=0.35$, respectively. The results are not as good as, but still close to, those in the independent cases. One exception is that for $t_1$ samples, the rank minimization approach no longer performs well. This is understandable because the dependence may increase the effects of the idiosyncratic errors and enlarge the eigenvalues $\{\hat\sigma_{T,j}^{K_1}\}$ and $\{\hat\sigma_{T,j}^{K_2}\}$. Therefore, in finite samples, more eigenvalues will exceed the thresholding parameter $C_{p_1p_2T}$, leading to overestimation. When $\tau=0.35$, the information criterion tends to underestimate. The accuracy can potentially be improved if we further increase the sample size, use a smaller $\theta$, or modify the thresholding parameter.
On the other hand, the eigenvalue ratio method is less influenced by the enlarged redundant eigenvalues, as long as the leading eigenvalues are sufficiently powerful.\n\n \\begin{table}[ht]\n\t\\addtolength{\\tabcolsep}{0pt}\n\t\\caption{Estimation results for matrix quantile factor model with dependent idiosyncratic errors and $\\tau=0.5$.\\label{tab: 0.5 dependent}}\n\t\\resizebox{\\columnwidth}{!}{%\n\t\t\\begin{tabular}{cccccccccccccccc}\n\t\t\\hline\n\t\t \\multirow{2}{*}{${\\mbox{\\boldmath $E$}}_t$}&\\multirow{2}{*}{$T$}&\\multirow{2}{*}{$p_1=p_2$}&\\multicolumn{3}{l}{Averaged $(\\hat k_1,\\hat k_2)$}&\\multicolumn{3}{l}{ $\\mathbb{P}(\\hat k_1=k_1,\\hat k_2=k_2)$}&$\\mathcal{D}({\\mbox{\\boldmath $R$}}_0,\\hat{\\mbox{\\boldmath $R$}})$&$\\mathcal{D}({\\mbox{\\boldmath $C$}}_0,\\hat{\\mbox{\\boldmath $C$}})$\\\\\\cmidrule(lr){4-6}\\cmidrule(lr){7-9}\n\t\t &&&mqf-ER&mqf-RM&mqf-IC&mqf-ER&mqf-RM&mqf-IC&&\\\\\\hline\n\t\t \\multirow{9}{*}{$\\mathcal{N}(0,1)$}&20&20&(2,2.58)&(3.17,4.01)&(1.75,1.14)&0.59&0.32&0&0.05&0.07\n\\\\\n\t\t &20&50&(2,3)&(2.13,3.15)&(1.98,2.32)&1&0.88&0.47&0.03&0.04\n\\\\\n\t\t &20&80&(2,3)&(2.06,3.07)&(2,2.83)&1&0.94&0.85&0.02&0.03\n\\\\\n\t\t &50&20&(2,2.91)&(2.26,3.28)&(1.83,1.08)&0.91&0.73&0&0.02&0.03\\\\\n\t\t &50&50&(2,3)&(2.01,3.01)&(2,2.99)&1&0.99&0.99&0.01&0.02\n\\\\\n\t\t &50&80&(2,3)&(2.01,3.02)&(2,3)&1&0.98&1&0.01&0.02\n\\\\\n\t\t &80&20&(2,3)&(2.18,3.15)&(1.95,1.1)&1&0.83&0&0.02&0.02\\\\\n\t\t &80&50&(2,3)&(2.01,3.01)&(2,3)&1&0.99&1&0.01&0.01\n\\\\\n\t\t &80&80&(2,3)&(2,3)&(2,3)&1&1&1&0.01&0.01\n\\\\\\hline\n\t\t \\multirow{9}{*}{$t_3$}&20&20&(2.06,3.01)&(5.26,5.6)&(1.63,1.05)&0.55&0.01&0.01&0.06&0.08\n\\\\\n\t\t &20&50&(2.02,2.96)&(3.12,4.12)&(1.98,2.13)&0.94&0.38&0.37&0.03&0.05\n\\\\\n\t\t &20&80&(2,2.99)&(2.28,3.35)&(2,2.69)&0.99&0.77&0.73&0.02&0.03\n\\\\\n\t\t &50&20&(2,2.88)&(3.59,4.32)&(1.46,1)&0.92&0.16&0&0.03&0.04\n\\\\\n\t\t &50&50&(2,2.99)&(3.47,4.56)&(2,3)&0.99&0.18&1&0.02&0.03\n\\\\\n\t\t &50&80&(2,3)&(2.15,3.21)&(2,3)&1&0.81&1&0.01&0.02\n\\\\\n\t\t &80&20&(2,2.87)&(6,6)&(1.99,1.02)&0.82&0&0&0.04&0.04\\\\\n\t\t &80&50&(2,3)&(2.01,3.01)&(2,2.97)&1&0.99&0.97&0.01&0.01\\\\\n\t\t &80&80&(2,3)&(2.01,3.02)&(2,3)&1&0.98&1&0.01&0.01\n\\\\\\hline\n\t\t \\multirow{9}{*}{$t_1$}&20&20&(2.1,2.01)&(4.5,4.49)&(1.1,1.12)&0.02&0.01&0&0.27&0.34\n\\\\\n\t\t &20&50&(2.04,2.48)&(5.44,5.75)&(1.3,1.09)&0.2&0&0&0.05&0.07\n\\\\\n\t\t &20&80&(2.28,2.61)&(5.37,5.81)&(1.94,1.78)&0.43&0.01&0.21&0.04&0.06\n\\\\\n\t\t &50&20&(1.63,2.54)&(5.77,5.79)&(1.04,1.04)&0.01&0&0&0.05&0.06\n\\\\\n\t\t &50&50&(2.05,2.86)&(5.99,5.99)&(2,2.74)&0.58&0&0.76&0.03&0.04\n\\\\\n\t\t &50&80&(2,3)&(4.67,5.51)&(2,3)&0.99&0.04&1&0.02&0.03\n\\\\\n\t\t &80&20&(2.18,2.45)&(5.57,5.56)&(1.02,1.01)&0.1&0&0&0.04&0.06\\\\\n\t\t &80&50&(2,3.01)&(5.98,5.99)&(2,2.95)&0.99&0&0.95&0.02&0.03\n\\\\\n\t\t &80&80&(2,3)&(5.03,5.75)&(2,3)&1&0.01&1&0.02&0.02\\\\\\hline\n\\end{tabular}}\n\\end{table}\n\n \\begin{table}[ht]\n\t\\addtolength{\\tabcolsep}{0pt}\n\t\\caption{Estimation results for matrix quantile factor model with dependent idiosyncratic errors and $\\tau=0.35$.\\label{tab: 0.35 dependent}}\n\t\\resizebox{\\columnwidth}{!}{%\n\t\t\\begin{tabular}{cccccccccccccc}\n\t\t\t\\hline\n\t\t\t\t \\multirow{2}{*}{${\\mbox{\\boldmath $E$}}_t$}&\\multirow{2}{*}{$T$}&\\multirow{2}{*}{$p_1=p_2$}&\\multicolumn{3}{l}{Averaged $(\\hat k_1,\\hat k_2)$}&\\multicolumn{3}{l}{ $\\mathbb{P}(\\hat k_1=k_1,\\hat k_2=k_2)$}&$\\mathcal{D}({\\mbox{\\boldmath $R$}}_0,\\hat{\\mbox{\\boldmath $R$}})$&$\\mathcal{D}({\\mbox{\\boldmath 
$C$}}_0,\\hat{\\mbox{\\boldmath $C$}})$\\\\\\cmidrule(lr){4-6}\\cmidrule(lr){7-9}\n\t\t\t &&&mqf-ER&mqf-RM&mqf-IC&mqf-ER&mqf-RM&mqf-IC&&\\\\\\hline\n\t\t\t \\multirow{9}{*}{$\\mathcal{N}(0,1)$}&20&20&(2.52,3.24)&(3.57,4.49)&(1.59,1.06)&0.32&0.51&0&0.14&0.13\n\\\\\n\t\t\t &20&50&(3,4)&(3.04,4.05)&(2,2.25)&0.99&0.93&0.01&0.08&0.07\n\\\\\n\t\t\t &20&80&(3,4)&(3.02,4.03)&(2.29,3)&1&0.98&0.18&0.06&0.06\n\\\\\n\t\t\t &50&20&(2.84,3.98)&(3.13,4.1)&(1.85,1.09)&0.82&0.85&0&0.09&0.07\\\\\n\t\t\t &50&50&(3,4)&(3,4)&(2.77,3.57)&1&1&0.58&0.05&0.04\n\\\\\n\t\t\t &50&80&(3,4)&(3,4.01)&(3,4)&1&0.99&1&0.04&0.03\n\\\\\n\t\t\t &80&20&(3,4)&(3.07,4.04)&(1.93,1.13)&1&0.92&0&0.07&0.06\\\\\n\t\t\t &80&50&(3,4)&(3,4)&(3,4)&1&1&1&0.04&0.03\n\\\\\n\t\t\t &80&80&(3,4)&(3,4)&(3,4)&1&1&1&0.03&0.03\n\\\\\\hline\n\t\t\t \\multirow{9}{*}{$t_3$}&20&20&(2.85,2.82)&(5.18,5.61)&(1.3,1.01)&0.24&0.03&0&0.14&0.15\n\\\\\n\t\t\t &20&50&(3,3.95)&(3.58,4.61)&(2.15,1.95)&0.94&0.49&0.01&0.08&0.08\n\\\\\n\t\t\t &20&80&(3,4)&(3.17,4.2)&(2.55,3)&0.99&0.81&0.28&0.06&0.06\n\\\\\n\t\t\t &50&20&(3,3.48)&(4.03,4.75)&(1.21,1)&0.67&0.26&0&0.08&0.09\n\\\\\n\t\t\t &50&50&(3,3.99)&(3.63,4.8)&(3,3.84)&0.99&0.34&0.86&0.05&0.05\n\\\\\n\t\t\t &50&80&(3,4)&(3.04,4.06)&(3,4)&1&0.94&1&0.04&0.04\n\\\\\n\t\t\t &80&20&(2.93,2.09)&(5.68,5.92)&(1.06,1)&0.31&0&0&0.08&0.07\\\\\n\t\t\t &80&50&(3,4)&(3,4)&(2.88,3.75)&1&1&0.76&0.04&0.04\n\\\\\n\t\t\t &80&80&(3,4)&(3,4)&(3,4)&1&1&1&0.03&0.03\n\\\\\\hline\n\t\t\t \\multirow{9}{*}{$t_1$}&20&20&(1.55,1.52)&(3.72,3.71)&(1.06,1.05)&0&0.01&0&0.45&0.54\n\\\\\n\t\t\t &20&50&(2.34,1.94)&(5.05,5.67)&(1.08,1.02)&0.11&0.01&0&0.11&0.11\n\\\\\n\t\t\t &20&80&(2.42,1.85)&(4.34,5.43)&(1.22,1.05)&0.2&0.04&0&0.09&0.09\n\\\\\n\t\t\t &50&20&(1.32,1.11)&(5.45,5.56)&(1.02,1.03)&0&0&0&0.12&0.12\n\\\\\n\t\t\t &50&50&(1.76,1.01)&(5.3,5.9)&(1.11,1.02)&0&0&0&0.07&0.07\n\\\\\n\t\t\t &50&80&(3,3.68)&(3.64,4.86)&(2.5,1.78)&0.89&0.31&0.15&0.05&0.05\\\\\n\t\t\t &80&20&(1.58,1.15)&(5.42,5.44)&(1.03,1.03)&0&0&0&0.1&0.11\n\\\\\n\t\t\t &80&50&(2.94,2.01)&(4.43,5.53)&(1.28,1.05)&0.34&0.01&0&0.05&0.05\\\\\n\t\t\t &80&80&(3,3.61)&(3.27,4.46)&(2.67,2)&0.87&0.56&0.1&0.04&0.04\\\\\\hline\n\t\\end{tabular}}\n\\end{table}\n\n\n\\section{Real Data Analysis}\\label{RD}\nIn this section, we apply the proposed matrix quantile factor model and the associated estimators to the analysis of a real data set, the Fama-French 100 portfolios data set. This is an open resource provided by Kenneth R. French, which can be downloaded from the website \\url{http:\/\/mba.tuck.dartmouth.edu\/pages\/faculty\/ken.french\/data_library.html}. It contains monthly return series of 100 portfolios, structured into a $10\\times 10$ matrix according to 10 levels of market capital size (S1-S10) and 10 levels of book-to-equity ratio (BE1-BE10). Considering the missing rate, we use the data from 1964-01 to 2021-12 in this study, covering 696 months. A similar data set has been studied in \\cite{wang2019factor} and \\cite{yu2022projection}. The data library also provides information on the Fama-French three factors and the excess market returns. Following the preprocessing steps in \\cite{wang2019factor} and \\cite{yu2022projection}, we first subtract the excess market returns and standardize each of the portfolio return series.\n\nIn the first step, we provide some descriptive information about the data set. At each month $t$, we calculate the sample quantiles of the 100 portfolio returns. Figure \\ref{fig:series} plots the quantile series at $\\tau=0.95, 0.5, 0.05$; a sketch of this descriptive computation is given below. 
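As an illustration, the descriptive step can be coded as follows, assuming the standardized returns are stored in a $T\\times 100$ NumPy array (the array name, the placeholder data, and the use of \\texttt{scipy.stats.kurtosis} are our own choices for this sketch; the excess kurtosis of the $t_5$ distribution is 6).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import kurtosis\n\n# Placeholder for the T x 100 matrix of standardized returns (T = 696).\nX = np.random.default_rng(0).standard_normal((696, 100))\n\n# Cross-sectional sample quantiles for each month, one series per tau.\ntaus = [0.05, 0.5, 0.95]\nq_series = {tau: np.quantile(X, tau, axis=1) for tau in taus}\n\n# Excess kurtosis of each portfolio series; flag heavy-tailed series\n# whose sample kurtosis exceeds the theoretical kurtosis of t_5.\nexcess_kurt = kurtosis(X, axis=0, fisher=True, bias=False)\nn_heavy = int(np.sum(excess_kurt > 6.0))\n\\end{verbatim}\n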
It can be seen that different quantile series exhibit disparate local patterns across the sampling periods. For instance, around the year 2000, the $95\\%$ quantile series and the median series rise sharply while the $5\\%$ quantile series drops significantly. In other words, considering multiple quantiles helps in understanding the distribution of the data. On the other hand, some extreme values are seen in the $95\\%$ and $5\\%$ quantile series, such as in the mid-1970s and in the years 2000, 2008 and 2020, which may indicate heavy tails. The histogram in Figure \\ref{fig:kurtosis} summarizes the number of portfolio series with a given sample kurtosis. There are 18 portfolio series whose sample kurtosis exceeds the theoretical kurtosis of the $t_5$ distribution, a convincing signal of heavy-tailed behavior. According to our simulation results, it is better to use the quantile factor model under such scenarios.\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\begin{minipage}[c]{7cm}\n\t\t\\centering\n\t\t\\includegraphics[width=6.5 cm]{series.png}\n\t\t\\caption{Return quantiles.}\n\t\t\\label{fig:series}\n\t\\end{minipage}\n\\ \\\n\t\\begin{minipage}[c]{7cm}\n\t\\centering\n\t\\includegraphics[width=6.5 cm]{kurtosis.png}\n\t\\caption{Kurtosis histogram.}\n\t\\label{fig:kurtosis}\n\\end{minipage}\n\\end{figure}\n\n\nIn the second step, we fit the matrix quantile factor model. The numbers of row and column factors should be determined first. Table \\ref{tab: ff tau} provides the estimated $(k_1,k_2)$ using the proposed three approaches at different quantiles $\\tau$. The results of the vectorized method of \\cite{Chen2021quantile}, which estimates the total number of factors, are also reported in the table. By Table \\ref{tab: ff tau}, the proposed eigenvalue ratio method and information criterion always lead to an estimate of $\\hat k_1=\\hat k_2=1$, while the rank minimization approach gives more row and\/or column factors when $\\tau\\in [0.15,0.9]$. The vectorized method leads to an estimate of 2 factors in total at most quantiles. Based on these results, there should be at least one strong row factor and one strong column factor in the system, and potentially one weak row factor and\/or column factor. Combined with the analysis in \\cite{wang2019factor} and \\cite{yu2022projection}, it might be a good choice to use $\\hat k_1=\\hat k_2=2$ in this example.\n\n\n \\begin{table}[htp]\n\t\\addtolength{\\tabcolsep}{0pt}\n\t\\caption{Estimation results of factor numbers and loading spaces for the Fama-French 100 portfolio data set at different $\\tau$. 
$S^R_{12}$ is an abbreviation for $S(\\hat{\\mbox{\\boldmath $R$}}_1,\\hat{\\mbox{\\boldmath $R$}}_2)$.\\label{tab: ff tau}}\n\t\\resizebox{\\columnwidth}{!}{%\n\t\t\\begin{tabular}{lllllllllllllllllll}\n\t\t\t\\hline\n\t\t\t \\multirow{2}{*}{$\\tau$}&\\multicolumn{3}{l}{$(\\hat k_1,\\hat k_2)$}&$\\hat k_1\\times \\hat k_2$&\\multicolumn{4}{l}{Similarity of loading spaces}\\\\\\cmidrule(lr){2-4}\\cmidrule(lr){6-9}\n\t\t\t &mqf-ER&mqf-RM&mqf-IC&vqf-RM&$S^R_{mqf,\\alpha-PCA}$&$S^R_{mqf,PE}$&$S^C_{mqf,\\alpha-PCA}$&$S^C_{mqf,PE}$\\\\\\hline\n\t\t 0.05&(1,1)&(1,1)&(1,1)&1&0.953&0.961&0.918&0.849\n\\\\\n\t\t 0.1&(1,1)&(1,1)&(1,1)&1&0.979&0.965&0.967&0.972\n\\\\\n\t\t 0.15&(1,1)&(1,2)&(1,1)&2&0.987&0.986&0.976&0.943\n\\\\\n\t\t 0.2&(1,1)&(2,1)&(1,1)&2&0.994&0.992&0.972&0.926\n\\\\\n\t\t 0.25&(1,1)&(2,2)&(1,1)&2&0.993&0.984&0.985&0.991\n\\\\\n\t\t 0.3&(1,1)&(2,2)&(1,1)&2&0.997&0.992&0.993&0.970\n\\\\\n\t\t 0.35&(1,1)&(2,2)&(1,1)&2&0.995&0.997&0.996&0.980\n\\\\\n\t\t 0.4&(1,1)&(2,2)&(1,1)&2&0.994&0.998&0.990&0.997\n\\\\\n\t\t 0.45&(1,1)&(2,2)&(1,1)&2&0.998&0.992&0.963&0.994\n\\\\\n\t\t 0.5&(1,1)&(2,2)&(1,1)&2&0.996&0.998&0.964&0.995\n\\\\\n\t\t 0.55&(1,1)&(2,2)&(1,1)&2&0.997&0.994&0.986&0.998\n\\\\\n\t\t 0.6&(1,1)&(2,2)&(1,1)&2&0.989&0.998&0.976&0.998\n\\\\\n\t\t 0.65&(1,1)&(2,2)&(1,1)&2&0.985&0.997&0.993&0.996\n\\\\\n\t\t 0.7&(1,1)&(2,2)&(1,1)&2&0.995&0.993&0.993&0.992\n\\\\\n\t\t 0.75&(1,1)&(2,2)&(1,1)&2&0.994&0.992&0.991&0.981\n\\\\\n\t\t 0.8&(1,1)&(1,2)&(1,1)&2&0.985&0.995&0.988&0.979\n\\\\\n\t\t 0.85&(1,1)&(1,2)&(1,1)&2&0.985&0.988&0.986&0.963\n\\\\\n\t\t 0.9&(1,1)&(2,2)&(1,1)&1&0.980&0.974&0.974&0.963\n\\\\\n\t\t 0.95&(1,1)&(1,1)&(1,1)&1&0.826&0.838&0.942&0.881\\\\\\hline\n\t\t\t\\end{tabular}}\n\\end{table}\n\nThe next step is to estimate the loading matrices and factor scores with $\\hat k_1=\\hat k_2=2$.\nIt is worth noting that the quantile factor models can handle missing entries naturally by optimizing only over the non-missing entries, i.e., by defining\n\\[\n\\mathbb{M}_{p_1p_2T}(\\theta)=\\frac{1}{p_1p_2T}\\sum_{(i,j,t)\\in \\mathcal{M}}\\rho_{\\tau}(X_{ijt}-r_i^\\prime{\\mbox{\\boldmath $F$}}_t c_j),\n\\]\nwhere $\\mathcal{M}$ denotes the index set of all non-missing entries. However, the $\\alpha$-PCA and ``PE'' methods require imputing the missing entries first. Considering that the missing rate is small in this example ($0.23\\%$), we use a simple linear interpolation to impute the missing data. To measure the similarity of two estimated loading spaces, we define the following indicator:\n\\[\nS(\\hat {\\mbox{\\boldmath $R$}}_1,\\hat {\\mbox{\\boldmath $R$}}_2)=\\frac{1}{k_1}\\text{tr}\\bigg(\\frac{1}{p_1^2}\\hat{\\mbox{\\boldmath $R$}}_1^\\prime\\hat{\\mbox{\\boldmath $R$}}_2\\hat{\\mbox{\\boldmath $R$}}_2^\\prime\\hat{\\mbox{\\boldmath $R$}}_1\\bigg),\n\\]\nwhere $\\hat {\\mbox{\\boldmath $R$}}_1$ and $\\hat{\\mbox{\\boldmath $R$}}_2$ are the estimated $p_1\\times k_1$ row loading matrices. Note that the columns of $\\hat {\\mbox{\\boldmath $R$}}_1$ and $\\hat{\\mbox{\\boldmath $R$}}_2$ are orthogonal after scaling. Therefore, $p_1^{-1}\\hat{\\mbox{\\boldmath $R$}}_2\\hat{\\mbox{\\boldmath $R$}}_2^\\prime\\hat{\\mbox{\\boldmath $R$}}_1$ is actually the projection of $\\hat{\\mbox{\\boldmath $R$}}_1$ onto the column space of $\\hat{\\mbox{\\boldmath $R$}}_2$. The value of $S(\\hat {\\mbox{\\boldmath $R$}}_1,\\hat {\\mbox{\\boldmath $R$}}_2)$ always lies in the interval $[0,1]$; a sketch of its computation is given below. 
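As a minimal sketch (assuming NumPy and loadings scaled so that $\\hat{\\mbox{\\boldmath $R$}}^\\prime\\hat{\\mbox{\\boldmath $R$}}\/p_1={\\mbox{\\boldmath $I$}}_{k_1}$; the function name is our own), the indicator can be computed as:\n\\begin{verbatim}\nimport numpy as np\n\ndef similarity(R1, R2):\n    # S(R1, R2) = tr(R1' R2 R2' R1 \/ p1^2) \/ k1, a value in [0, 1].\n    p1, k1 = R1.shape\n    M = R1.T @ R2 @ R2.T @ R1 \/ p1**2\n    return np.trace(M) \/ k1\n\\end{verbatim}\nWith this normalization, identical column spaces give the maximal value 1 and orthogonal ones give 0.\n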
When the two loading spaces are closer to each other, the value of $S(\\hat {\\mbox{\\boldmath $R$}}_1,\\hat {\\mbox{\\boldmath $R$}}_2)$ will be larger.\n\nThe last four columns of Table \\ref{tab: ff tau} report the similarity of the loading spaces estimated by the matrix quantile factor model and by two competitors, $\\alpha$-PCA and ``PE''. It can be seen that the similarity indicators are very close to 1, implying that the estimated loading spaces are almost the same, especially when $\\tau$ is near 0.5. Table \\ref{tab: ff loading} presents the estimated $\\hat {\\mbox{\\boldmath $R$}}$ and $\\hat{\\mbox{\\boldmath $C$}}$ by the matrix quantile factor model at $\\tau=0.5$. By Table \\ref{tab: ff loading}, the effects of the row factors and column factors are closely related to the market capital sizes and book-to-equity ratios. From the perspective of size ($\\hat{\\mbox{\\boldmath $C$}}$), the small-size portfolios load more heavily on the first factor than the large-size portfolios. Moreover, the second factor has opposite effects on small-size portfolios and large-size ones. From the perspective of book-to-equity ratio ($\\hat{\\mbox{\\boldmath $R$}}$), the large-BE portfolios load more heavily on the first factor than small-BE ones, while the second factor has opposite effects on the two classes. These findings coincide with financial theories, because the capital size and book-to-equity ratio are known to be two important factors affecting portfolio returns with negatively correlated effects.\n\n\\begin{table*}[hbpt]\n\t\\begin{center}\n\t\t\\small\n\t\t\\addtolength{\\tabcolsep}{1pt}\n\t\t\\caption{ Transposed loading matrices for the Fama--French data set by the matrix quantile factor model at $\\tau=0.5$. }\\label{tab: ff loading}\n\t\t\\renewcommand{\\arraystretch}{1}\n\t\t\\scalebox{1}{ \t\t \\begin{tabular*}{16cm}{c|ccccccccccc}\n\t\t\t\t\\hline\n\t\t\t\t\\multicolumn{12}{l}{Size ($\\hat{\\mbox{\\boldmath $C$}}$)}\\\\\n\t\t\t\t\\hline\n\t\t\t\t Factor&S1&S2&S3&S4&S5&S6&S7&S8&S9&S10\\\\\\hline\n1&1.13&1.2&1.25&1.23&1.15&1.07&0.93&0.78&0.52&-0.02\n\\\\\n2&1.44&0.98&0.5&0.13&-0.29&-0.63&-0.95&-1.22&-1.46&-1.3\\\\\\hline\n\t\t\t\t \\multicolumn{12}{l}{Book-to-Equity ($\\hat{\\mbox{\\boldmath $R$}}$)}\\\\\\hline\n\t\t\t\t Factor&BE1&BE2&BE3&BE4&BE5&BE6&BE7&BE8&BE9&BE10\\\\\\hline\n1&0.53&0.8&0.97&1.05&1.12&1.14&1.12&1.11&1.08&0.89\n\\\\\n2&2.26&1.53&0.76&0.27&-0.15&-0.53&-0.64&-0.71&-0.63&-0.52\n\t\t\t\t\\\\ \t\t\t\t\\hline\t\n\t\t\\end{tabular*}}\t\t\n\t\\end{center}\n\\end{table*}\n\nBy Table \\ref{tab: ff tau}, when $\\tau$ is near the tails, the similarity indicator decreases, suggesting that considering the extreme quantiles might be helpful for extracting extra information from the data. To justify this argument, we construct a rolling prediction procedure as follows. Let $y_t$ be any of the Fama-French three factors at month $t$. We consider a forecasting model for $y_t$:\n\\[\ny_{t+1}=\\alpha+\\beta y_{t}+\\gamma^\\prime {\\mbox{\\boldmath $F$}}_{t+1}+e_{t+1},\n\\]\nwhere ${\\mbox{\\boldmath $F$}}_t$ is a vector of estimated factors from the Fama-French 100 portfolio data set. We estimate $\\alpha,\\beta,\\gamma$ by ordinary least squares. 
For ${\\mbox{\\boldmath $F$}}_t$, we consider five specifications: (i) ${\\mbox{\\boldmath $F$}}_t=0$, which is the benchmark AR(1) model, (ii) the AR(1) model plus ${\\mbox{\\boldmath $F$}}_t$ from ``PE'', (iii) AR(1) plus ${\\mbox{\\boldmath $F$}}_t$ from ``PE'' and the matrix quantile factor model at $\\tau=0.05$, (iv)\nAR(1) plus ${\\mbox{\\boldmath $F$}}_t$ from ``PE'' and the matrix quantile factor model at $\\tau=0.95$, (v) AR(1) plus ${\\mbox{\\boldmath $F$}}_t$ from ``PE'' and the matrix quantile factor model at $\\tau=0.05$ and $\\tau=0.95$. To control the dimension of the design matrix, we use $k_1=k_2=1$ in this part. To predict $y_{t+1}$, we first estimate all the factors using historical data up to and including $(t+1)$ with a rolling window of 120 months, and then fit the prediction model using data strictly before $(t+1)$. The prediction $\\hat y_{t+1}$ is then obtained from the fitted model. Table \\ref{tab: ff prediction} reports the root mean squared error (RMSE) and mean absolute error (MAE) of the predictions over all the periods, for different prediction models and different Fama-French factors. As shown in the table, adding estimated factors into the model helps reduce the error for the SMB factor and the HML factor, while considering the extreme quantiles further improves the prediction performance. For the RF factor, however, the benchmark AR(1) model already works best.\n\\begin{table*}[hbpt]\n\t\\begin{center}\n\t\t\\small\n\t\t\\addtolength{\\tabcolsep}{1pt}\n\t\t\\caption{RMSE and MAE from the rolling prediction procedure, by different models and for different Fama-French factors.}\\label{tab: ff prediction}\n\t\t\\renewcommand{\\arraystretch}{1}\n\t\t\\scalebox{1}{ \t\t \\begin{tabular*}{16cm}{lllllllllllll}\n\t\t\t\t\\hline\n\t\t\t\t&\\multicolumn{2}{l}{SMB factor}&\\multicolumn{2}{l}{HML factor}&\\multicolumn{2}{l}{RF factor}\\\\\\cmidrule(lr){2-3}\\cmidrule(lr){4-5}\\cmidrule(lr){6-7}\n\t\t\t\tPrediction models&RMSE&MAE&RMSE&MAE&RMSE&MAE\\\\\\hline\n\t\t\t\tAR benchmark&3.113&2.232&3.020&2.166&0.064&0.039\n\\\\\n\t\t\t\tAR plus $\\hat {\\mbox{\\boldmath $F$}}_{PE}$&1.659&1.093&2.927&2.130&0.065&0.040\n\\\\\n\t\t\t\tAR plus $\\hat {\\mbox{\\boldmath $F$}}_{PE}$, $\\hat{\\mbox{\\boldmath $F$}}_{qfm}^{\\tau=0.05}$&1.550&1.037&2.991&2.147&0.065&0.040\n\\\\\n\t\t\t\tAR plus $\\hat {\\mbox{\\boldmath $F$}}_{PE}$, $\\hat{\\mbox{\\boldmath $F$}}_{qfm}^{\\tau=0.95}$&1.527&1.037&2.972&2.163&0.066&0.041\n\\\\\n\t\t\t\tAR plus $\\hat {\\mbox{\\boldmath $F$}}_{PE}$, $\\hat{\\mbox{\\boldmath $F$}}_{qfm}^{\\tau=0.05}$, $\\hat{\\mbox{\\boldmath $F$}}_{qfm}^{\\tau=0.95}$&1.467&0.974&2.869&2.086&0.066&0.041\\\\\\hline\n\t\t\\end{tabular*}}\t\t\n\t\\end{center}\n\\end{table*}\n\n\n\\section{Conclusion and Discussion}\\label{CD}\n\nIn this study, we proposed a matrix quantile factor model that is a hybrid of a quantile feature representation and a low-rank structure for matrix data. By minimizing the check loss function with an iterative algorithm, we obtain estimates of the row and column factor spaces that are proved to be consistent in the sense of the Frobenius norm. Three model selection criteria were given to simultaneously and consistently determine the numbers of row and column factors. There are at least three problems worth studying in the future. First, a statistical test for the presence of the low-rank matrix structure in the matrix quantile factor model would be of potential usefulness as a model checking tool. 
Second, the latent factor structure here can be extended to the case where both observable explanatory variables and latent factors are incorporated into modeling the quantiles of matrix sequences. Third, the computation error of the algorithm, which parallels the statistical error given in our theorem, is still unknown. We leave all these to our future research work.\n\n\n\\section{Acknowledgements}\n\nKong's work is partially supported by NSF China (71971118, 11831008). Liu's work is partially supported by NSF China (12001278). Yu's work is partially supported by . Zhao's work is partially supported by the National NSF China (11871252) and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.\n\n\\section{Supplementary Material}\nThe technical proofs of the main results are included in the Supplementary Material.\n\n\n\\bibliographystyle{model2-names}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLattice QCD has now become a powerful method to study not only hadron spectra and the QCD phase structure, \nbut also hadronic interactions and multi-hadron systems.\nFor example, hadron scatterings such as $\\pi\\pi$ and $NN$ have been studied by using L\\\"uscher's finite volume method \\cite{Luscher:1990ux,Fukugita:1994ve,Beane:2002nu}.\nRecently, the binding energy of the helium nucleus was also measured directly in quenched lattice QCD \\cite{Yamazaki:2009ua}.\nIn 2007, an approach to study hadron interactions \non the basis of the Nambu-Bethe-Salpeter amplitude on the lattice was proposed \\cite{Ishii:2006ec,Aoki:2009ji}: this approach (the HAL QCD method) has been\nsuccessfully applied to baryon-baryon interactions (such as $NN$ and $YN$)\nand the meson-baryon interaction (such as $\\bar K N$) \\cite{Nemura:2008sp,Inoue:2010hs,Ikeda:2010sg}. \n\nIn any of the above approaches, \nthe spatial lattice size $L$ should be large enough\nto accommodate the multi-hadron system inside the lattice volume. \nOnce $L$ becomes large, however, the energy levels of hadrons corresponding to scattering states in the lattice box become dense,\nso that isolation of the ground state from the excited states becomes very difficult\nunless an unrealistically large imaginary time $t$ is employed.\nRecently, we proposed a new technique to resolve this issue by generalizing the original HAL QCD method \\cite{Ishii:2011,Inoue:2010es}.\nWith our new method, information on hadron interactions can be obtained even without separating energy eigenstates on the lattice.\n\nAn interesting application of the above new method is the elusive $H$-dibaryon,\nwhich has been known to receive large finite volume effects on the lattice \\cite{Wetzorke:2002mx}.\nThe $H$-dibaryon, predicted by R.~L.~Jaffe 34 years ago~\\cite{Jaffe:1976yi},\nis one of the most famous candidates for an exotic hadron. \nThe prediction was based on the observation that the Pauli exclusion principle \ncan be completely avoided due to the flavor-singlet ($uuddss$)\nnature of the $H$-dibaryon, together with the large attraction from\nthe one-gluon-exchange interaction between quarks~\\cite{Jaffe:1976yi,Sakai:1999qm}.\nSearch for the $H$-dibaryon is one of the most challenging theoretical and experimental problems\nin the physics of the strong interaction. 
\nAlthough a deeply bound $H$-dibaryon with a binding energy $B_H > 7 $ MeV from the $\\Lambda\\Lambda$ threshold\nhas been ruled out by the discovery of the double $\\Lambda$ hypernucleus\n$_{\\Lambda \\Lambda}^{\\ \\ 6}$He~\\cite{Takahashi:2001nm}, there still remains a possibility of \na shallow bound state or a resonance in this channel~\\cite{Yoon:2007aq}.\n \nIn this report, we first explain our new method to resolve the difficulty of the finite volume effect.\nBy using this method, we then study the $H$-dibaryon in lattice QCD.\nTo avoid unnecessary complications, we consider the flavor $SU(3)$ limit in this study.\nWe find a bound $H$-dibaryon with a binding energy of 20--50 MeV for the pseudo-scalar meson mass of 469--1171 MeV. \nA comparison of our result with \nthose by other groups \\cite{Luo:2007zzb,Beane:2010hg}, as well as \nthe possible $H$-dibaryon with flavor $SU(3)$ breaking, is also discussed.\n\n\\section{Formalism}\n\nIn the original works \\cite{Ishii:2006ec,Aoki:2009ji}, \nthe Nambu-Bethe-Salpeter (NBS) wave function for the two-baryon system with energy $E$, \n\\begin{equation}\n \\phi_E(\\vec r)\n = \\sum_{\\vec x} \\langle 0 \\vert B_i(\\vec x + \\vec r,0)B_j(\\vec x,0) \\vert B=2, E \\rangle,\n\\label{eqn:NBS}\n\\end{equation}\nhas been employed to study baryon-baryon ($BB$) interactions.\nA correlation function $\\Psi(\\vec r,t)$ for two baryons can be expressed in terms of $\\phi_E(\\vec r)$ in a finite volume as\n\\begin{equation}\n \\Psi(\\vec r, t)\n \\, = \\, A_{\\rm gr} \\phi_{E_{\\rm gr}}(\\vec r)e^{-E_{\\rm gr}\\,t} ~ + ~ A_{\\rm 1st}\\phi_{E_{\\rm 1st}}(\\vec r) e^{-E_{\\rm 1st}\\,t} ~ + \\cdots,\n\\label{eqn:tdepNBS}\n\\end{equation}\nwhere $E_{\\rm gr}$ and $E_{\\rm 1st}$ are the energies of the ground state and the first excited state,\nrespectively, and $A_{\\rm gr}$ and $A_{\\rm 1st}$ are the corresponding coefficients. \nHereafter we call $\\Psi(\\vec r, t)$ the time($t$)-dependent NBS wave function. \nIn principle, $\\Psi(\\vec r, t)$ is saturated by the ground state contribution\nfor $(E_{\\rm 1st}-E_{\\rm gr})\\times t \\gg 1$, so that the wave function of the ground state can be extracted.\nIf the ground state is a scattering state or a weakly bound state,\nthe energy differences $E_{i}-E_{j}$ are relatively large (several hundred MeV) in a small volume (e.g. $L\\simeq 2$ fm), \nand hence the condition can be fulfilled at a relatively small $t$ ($t\\simeq 1.2$ fm)\nwhich we can access in actual lattice simulations.\n\nIn a large volume, however, a much larger $t$ is required.\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.475\\textwidth]{l4fm_wv_27__K13760_lat11.eps}\\hfill\n \\includegraphics[width=0.475\\textwidth]{l4fm_wv__1__K13760_lat11_09to11.eps}\n\\caption{Examples of the Nambu-Bethe-Salpeter wave function of the two-baryon system measured on the lattice with $L \\simeq 4$ fm. \nThe left (right) panel shows the function for two baryons in the flavor 27-plet (singlet) channel.\nAll data are normalized to unity at the origin.}\n\\label{fig:NBS}\n\\end{figure}\nFig. 
\\ref{fig:NBS} shows the $t$-dependent NBS wave function of two baryons\nmeasured in our lattice QCD simulation on the $L \\simeq 4$ fm lattice, \nat sink-times $t$ around 10 in lattice units ($a \\simeq 0.12$ fm).\nThe shape of the function changes as the sink-time $t$ increases from 10 to 12, showing that \n$\\Psi(\\vec r, t)$ is not saturated by the ground state contribution in this range of $t$.\nSince the energy difference tends to decrease as $L^{-2}$ (with the assumption that a deeply bound state is absent), $t$ around $40$ in lattice units ($t\\simeq 5$ fm in physical units) may be necessary for the ground state saturation. \nWe need huge statistics to extract signals at such large $t$, however,\nsince the signal-to-noise ratio of $\\Psi(\\vec r, t)$, which includes 4 baryon operators,\nbecomes bad as $t$ increases. Such a calculation would be unacceptably expensive even with today's computational resources.\nWe therefore conclude that it is practically impossible to achieve the ground state saturation\nfor the two-baryon system in a large volume, unless we employ some sophisticated techniques such as\nthe variational method with optimized sources.\n\nTo overcome this difficulty in a large volume, the HAL QCD collaboration has recently proposed an alternative method \\cite{Ishii:2011,Inoue:2010es}, as summarized below.\nWithin the non-relativistic approximation,\nthe wave functions we consider satisfy the Schr\\\"{o}dinger equation with a non-local but energy-independent potential $U(\\vec r, \\vec r')$.\nFor the lowest two energy eigenstates, it reads\n\\begin{eqnarray}\n \\left[ M_1 + M_2 - \\frac{\\nabla^2}{2\\mu} \\right] \\,\\, \\phi_{\\rm gr}(\\vec r)e^{-E_{\\rm gr}\\,\\,t}\n \\,\\, + \\, \\int \\!\\! d^3 \\vec r' \\, U(\\vec r, \\vec r') \\,\\, \\phi_{\\rm gr}(\\vec r')e^{-E_{\\rm gr}\\,\\,t}\n &=& E_{\\rm gr} \\,\\, \\phi_{\\rm gr}(\\vec r) \\, e^{-E_{\\rm gr}\\,\\,t}, \\\\\n \\left[ M_1 + M_2 - \\frac{\\nabla^2}{2\\mu} \\right] \\phi_{\\rm 1st}(\\vec r) e^{-E_{\\rm 1st}\\,t}\n + \\int \\!\\! d^3 \\vec r' \\, U(\\vec r, \\vec r') \\, \\phi_{\\rm 1st}(\\vec r') e^{-E_{\\rm 1st}\\,t}\n &=& E_{\\rm 1st} \\, \\phi_{\\rm 1st}(\\vec r) e^{-E_{\\rm 1st}\\,t},\n\\end{eqnarray}\nwhere $M_{1,2}$ represent the masses of the two baryons and $\\mu$ is their reduced mass.\nSince these equations (and those for other $E$) are linear in $\\phi_E$, \n$\\Psi(\\vec r,t)=\\sum_n A_n\\phi_{E_n}({\\vec r}) e^{-E_n t}$ satisfies \n\\begin{equation}\n \\left[ M_1 + M_2 - \\frac{\\nabla^2}{2\\mu} \\right] \\Psi(\\vec r, t)\n + \\int \\!\\! d^3 \\vec r' \\, U(\\vec r, \\vec r') \\, \\Psi(\\vec r', t) \n = - \\frac{\\partial}{\\partial t} \\Psi(\\vec r, t).\n \\label{eq:t-dep}\n\\end{equation}\nUsing this equation, we can extract $U(\\vec r, \\vec r')$ from the $t$-dependent NBS wave function $\\Psi(\\vec r, t)$ at moderate values of $t$,\nwhich can be easily calculated in lattice QCD simulations. \nThis is our new method to study hadron interactions in lattice QCD without isolating energy eigenstates. \nIn practice, we expand the non-local potential $U(\\vec r, \\vec r')$ in powers of the velocity such that\n$U(\\vec r, \\vec r') = (V(\\vec r)+O(\\nabla))\\delta^3(\\vec r-\\vec r')$. \nIn this paper, we only consider the leading term $V(\\vec r)$ of the velocity expansion. 
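To illustrate how eq.~(\\ref{eq:t-dep}) is used at the leading order of the velocity expansion, note that it gives $V(\\vec r)=\\frac{\\nabla^2\\Psi(\\vec r,t)}{2\\mu\\,\\Psi(\\vec r,t)}-\\frac{\\partial_t\\Psi(\\vec r,t)}{\\Psi(\\vec r,t)}-(M_1+M_2)$. The following is a minimal numerical sketch of this extraction, assuming NumPy; the array layout, the finite-difference stencils and the periodic boundary treatment are our own illustrative choices, not details of the actual lattice analysis.\n\\begin{verbatim}\nimport numpy as np\n\ndef extract_potential(psi, a, dt, M1, M2):\n    # psi: array of shape (T, L, L, L) holding Psi(r, t) on the lattice;\n    # a, dt: spatial and temporal spacings; M1, M2: baryon masses.\n    mu = M1 * M2 \/ (M1 + M2)            # reduced mass\n    t0 = psi.shape[0] \/\/ 2              # a moderate sink time\n    # Discrete Laplacian with periodic boundaries.\n    lap = sum(np.roll(psi[t0], +1, axis=d) + np.roll(psi[t0], -1, axis=d)\n              for d in range(3)) - 6.0 * psi[t0]\n    lap \/= a ** 2\n    # Central finite difference for the time derivative.\n    dpsi_dt = (psi[t0 + 1] - psi[t0 - 1]) \/ (2.0 * dt)\n    # V(r) = Lap(Psi)\/(2 mu Psi) - (dPsi\/dt)\/Psi - (M1 + M2)\n    return lap \/ (2.0 * mu * psi[t0]) - dpsi_dt \/ psi[t0] - (M1 + M2)\n\\end{verbatim}\n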
\n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.475\\textwidth]{l4fm_vr_27__K13760_lat11.eps}\\hfill\n \\includegraphics[width=0.475\\textwidth]{volume_vr__1_K13710_lat11.eps}\n\\caption{Examples of the baryon-baryon potential extracted from QCD by our new method.\nThe left (right) panel shows the potential for two baryons in the flavor 27-plet (singlet) channel.\nOne can see the independence of the potential from the sink-time $t$ and from the volume $L^3$ in the left and the right panel, respectively.}\n\\label{fig:Indep}\n\\end{figure}\n\nFig. \\ref{fig:Indep} shows examples of baryon-baryon potentials extracted by the new method.\nIn the left panel, we show the spin-singlet and flavor 27-plet potential $V^{(27)}(r)$, which is found to be independent of the sink-time $t$.\n(Note that $V(\\vec r)$ becomes a function of $r=\\vert \\vec r\\vert$ for the spin-singlet sector.)\nThis result shows not only the success of the new method using eq.(\\ref{eq:t-dep}) but also the validity of the leading-order approximation of the velocity expansion, since the $t$-independent potential $V(r)$ can be extracted from the manifestly $t$-dependent function $\\Psi(\\vec r, t)$.\nIn the right panel, the flavor singlet potential $V^{(1)}(r)$ is plotted on the $L=2,3$ and $4$ [fm] lattices.\nSince the potential remains almost identical between $L=3$ and $4$ fm, \nthe potential obtained on the $L=4$ fm lattice is considered to be volume independent. \nThis observation is consistent with the fact that the interaction range of the potential, which is around $1.5$ fm from the figure, is smaller than half of the lattice extent $L=4$ fm.\nOnce we obtain a volume independent potential,\nany observable of the system, such as the binding energy and scattering phase shifts,\ncan be extracted by solving the Schr\\\"{o}dinger equation in the infinite volume.\n\n\\section{Lattice QCD setup}\n\n\\begin{table}[t]\n\\caption{Summary of lattice QCD parameters and hadron masses. See the main text for details.}\n\\label{tbl:mass}\n\\centering\n \\begin{tabular}{c|c|c|c|c|c|c}\n \\hline\n $a$ [fm] & $L$ [fm] & $\\kappa_{uds}$ & ~$M_{\\rm p.s.}$ [MeV]~ & ~$M_{\\rm vec}$ [MeV]~& ~ $M_{\\rm bar}$ [MeV]~ &\n ~$N_{\\rm cfg}\\,\/\\,N_{\\rm traj}$~ \\\\\n \\hline \n & & ~0.13660~ & 1170.9(7) & 1510.4(0.9) & 2274(2) & 420\\,\/\\,4200 \\\\\n & & ~0.13710~ & 1015.2(6) & 1360.6(1.1) & 2031(2) & 360\\,\/\\,3600 \\\\\n {0.121(2)} & 3.87 & ~0.13760~ & ~\\,836.5(5) & 1188.9(0.9) & 1749(1) & 480\\,\/\\,4800 \\\\\n & & ~0.13800~ & ~\\,672.3(6) & 1027.6(1.0) & 1484(2) & 360\\,\/\\,3600 \\\\\n & & ~0.13840~ & ~\\,468.6(7) & ~\\,829.2(1.5) & 1161(2) & 720\\,\/\\,3600 \\\\\n\n \\hline\n \\end{tabular}\n\\end{table}\n\nFor dynamical lattice QCD simulations in the flavor $SU(3)$ limit,\nwe have generated ensembles of gauge configurations on a $32^3 \\times 32$ lattice\nwith the renormalization group improved Iwasaki gauge action at $\\beta=1.83$\nand the non-perturbatively $O(a)$ improved Wilson quark action at five different values of the quark mass. \nThe lattice spacing $a$ is found to be 0.121(2) fm and hence the lattice size $L$ is 3.87 fm. 
\nThe hadron masses on each ensemble are given in Table \\ref{tbl:mass}, \ntogether with other parameters such as\nthe quark hopping parameter $\\kappa_{uds}$, \nthe number of thermalized trajectories $N_{\\rm traj}$ and the number of configurations $N_{\\rm cfg}$.\n \nOn each gauge configuration, \nthe baryon two-point and four-point correlation functions are constructed from\nquark propagators for the wall source\nwith the Dirichlet boundary condition in the temporal direction.\nThe baryon operators at the source are combined to generate the two-baryon state in a definite flavor irreducible representation, while the local octet-baryon operators are used at the sink.\nTo enhance the signal, 16 measurements are made for each configuration,\ntogether with the average between forward and backward propagation in time.\nStatistical errors are estimated by the Jackknife method with a bin size equal to 12 for the $\\kappa_{uds}=0.13840$ ensemble and 6 for the others.\n\n\\section{Results}\n\nWe here consider the system with baryon number $B=2$ in the flavor-singlet and $J^P=0^+$ channel, to investigate the $H$-dibaryon.\nThe left panel in Fig. \\ref{fig:V1andH} shows the baryon-baryon potential in this channel at five values of the quark mass: it is entirely attractive and has an ``attractive core\" \\cite{Inoue:2010hs}.\nThis result is consistent with the prediction by Jaffe and also with that of the quark model in\nthis channel.\nThe figure also indicates that the attractive interaction becomes stronger as the quark mass decreases.\n\nBy solving the Schr\\\"{o}dinger equation with this potential, \nwe have found one bound state in this channel~\\cite{Inoue:2010es}.\nThe right panel in Fig. \\ref{fig:V1andH} gives the energy and size of this bound state, showing\nthat a stable $H$-dibaryon\nexists in this range of the quark mass in the flavor $SU(3)$ symmetric world.\n\n\\begin{figure}[t]\n\\includegraphics[width=0.475\\textwidth]{mass_vr__1__latt11.eps} \\quad\n\\includegraphics[width=0.475\\textwidth]{hdibaryon_mass_latt11.eps}\n\\caption{ Left: Baryon-baryon potential in the flavor singlet channel.\n Right: The ground state of the system. The energy $E_0$ is measured from the two-baryon threshold.\n The bars indicate statistical errors only.\n}\n\\label{fig:V1andH}\n\\end{figure}\n\nAs shown in the right panel of Fig. 
\\ref{fig:NBS},\nthe $t$-dependent NBS wave function for this channel goes to a non-zero value at large distances,\ncontrary to the naive expectation for bound state wave functions.\nThe non-vanishing wave function at large distance is due to the contribution from excited states; they\ncorrespond to scattering states in the infinite volume and do not vanish at large distance.\nIn other words, the $t$-dependent NBS wave function is a superposition of the bound state and scattering states.\nEven in such a case, our new method is shown to work well.\n\nThe binding energy ${\\tilde B}_H = -E_0$ of the $H$-dibaryon ranges from 20 to 50 MeV and decreases as the quark mass decreases.\nThe rms distance $\\sqrt{\\langle r^2 \\rangle}$ is a measure of the \"size\" of the $H$-dibaryon,\nwhich may be compared to the rms distance of the deuteron in nature, $1.9 \\times 2 = 3.8$ fm.\nAlthough the quark masses differ between the two,\nthis comparison suggests that the $H$-dibaryon is much more compact than the deuteron.\n\nBy including a small systematic error caused by the choice of the sink-time $t$ in the $t$-dependent NBS wave function,\nthe final results for the $H$-dibaryon binding energy become\n\\begin{eqnarray}\n M_{\\rm p.s.}&=& 1171 ~\\mbox{MeV} :~ \\ {\\tilde B}_H = 49.1 (3.4)(5.5) ~\\mbox{MeV} \\\\\n M_{\\rm p.s.}&=& 1015 ~\\mbox{MeV} :~ \\ {\\tilde B}_H = 37.2 (3.7)(2.4) ~\\mbox{MeV} \\\\\n M_{\\rm p.s.}&=& ~~837 ~\\mbox{MeV} :~ \\ {\\tilde B}_H = 37.8 (3.1)(4.2) ~\\mbox{MeV} \\\\\n M_{\\rm p.s.}&=& ~~672 ~\\mbox{MeV} :~ \\ {\\tilde B}_H = 33.6 (4.8)(3.5) ~\\mbox{MeV} \\\\\n M_{\\rm p.s.}&=& ~~469 ~\\mbox{MeV} :~ \\ {\\tilde B}_H = 26.0 (4.4)(4.8) ~\\mbox{MeV} \n\\end{eqnarray} \nwith the statistical error (first) and the systematic error (second).\nA bound $H$-dibaryon is also reported by a\nfull QCD simulation with an approach different from ours \\cite{Beane:2010hg}.\nTheir binding energy from the $\\Lambda\\Lambda$ threshold, $B_H = 16.6(2.1)(4.6)$ MeV\nat $(M_{\\pi},M_{K})\\simeq(389, 544)$ MeV, is consistent with the present result.\n\nA deeply bound $H$-dibaryon is ruled out by the discovery of the double $\\Lambda$ hypernucleus.\nThe binding energy $\\tilde{B}_H$ in this paper, however, should be interpreted as\nthe binding energy from the $BB$ threshold averaged in the $(S,I)=(-2,0)$ sector.\nIn the real world, the $\\Lambda\\Lambda$, $N\\Xi$ and $\\Sigma\\Sigma$ thresholds in this sector split largely due to the flavor $SU(3)$ breaking.\nWe therefore expect that the binding energy of the $H$-dibaryon measured from the $\\Lambda\\Lambda$ threshold in nature\nbecomes much smaller than the present value, or that the $H$-dibaryon even goes above the $\\Lambda\\Lambda$ threshold.\nOur trial calculation using a phenomenological flavor $SU(3)$ breaking indicates\nthat the $H$-dibaryon becomes a resonance state above the $\\Lambda\\Lambda$ threshold but \nbelow the $N\\Xi$ threshold, as the flavor $SU(3)$ breaking reaches its physical value.\nTo draw a definite conclusion on this point, however, \nwe need lattice QCD simulations at the physical point, together with a coupled channel analysis. \nWe have already developed the formalism for this purpose \\cite{Aoki:2011gt} and tested the method in numerical simulations \\cite{sasaki2010}.\nA study towards this final goal is now in progress.\n\n\\medskip\n\n\\section*{Acknowledgements}\nWe thank K.-I. 
Ishikawa and the PACS-CS group for providing their DDHMC\/PHMC code~\\cite{Aoki:2008sm},\nand the authors and maintainer of CPS++~\\cite{CPS}, whose modified version is used in this paper.\nThe numerical computations of this work have been carried out on the University of Tsukuba supercomputer system (T2K).\nThis research is supported by the Grant-in-Aid for Scientific Research on Innovative Areas (No.2004:20105001, 20105003)\nand for Scientific Research (C) 23540321.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\IEEEPARstart{H}{ighly} immersive 3D visual content representation has been studied for decades. Nowadays, 3D visual representation is applied in both professional fields (tele-immersive medicine and communications) \\cite{wang2015unsupervised} and personal applications (virtual reality and 3D computer gaming) \\cite{zhang2007multiview}. In different applications, the 3D visual content has various representations, such as point clouds, meshes, multiview video (MVV), and texture video plus depth maps \\cite{yemez2007scene}. For all these representations, the data used to represent the 3D visual content is significantly more than that of 2D visual representations such as images and videos. In recent years, worldwide research efforts have been devoted to data compression to improve the accessibility of 3D visual content. For video based applications, e.g. the MVV and free view-point video (FVV) \\cite{smolic20113d}, the multiview video coding (MVC) \\cite{merkle2007efficient} and some advanced 3D video coding technologies have been proposed based on the traditional video coding framework. For graphics applications, like computer gaming and light fields, where point clouds and meshes are usually applied, 3D mesh coding \\cite{peng2005technologies} has been widely studied. In particular, scalable 3D mesh coding \\cite{cao20123d} has gained interest from both academia and industry. Due to the growing 3D industry, ISO\/IEC JTC 1\/SC 29\/WG 11 (Moving Picture Experts Group, MPEG) and the ITU-T SG 16 Working Party 3, the two main international standardization organizations, have worked individually and jointly on the standardization of 3D video coding \\cite{vetro2011overview} and 3D mesh coding. \n\nThis paper reviews the recent progress of the coding technologies for 3D video. The 3D video discussed in this paper mainly includes binocular video, MVV, and FVV represented as texture video plus depth maps. We review the coding framework of MVC and the joint coding technologies for MVV and depth maps. We also review the scalable 3D video coding frameworks for quality and view scalability. Finally, bit allocation in 3D video coding is summarized. 
This paper also points out some unsolved problems, such as the optimization of the coding structure and bit allocation when considering a view switching behavior model.\n\nThe rest of this paper is organized as follows: Section \\ref{sec:3D_representation} briefly introduces some background knowledge of 3D video representations, depth map generation, and view synthesis; Section \\ref{sec:mvc} presents the MVC framework and its extensions to FVV coding; Section \\ref{sec:joint_coding} analyzes the joint coding technology of MVV and depth maps; Section \\ref{sec:scalable_coding} reviews the work on both quality and view scalable 3D video coding; Section \\ref{sec:bit_allocation} describes the bit allocation efforts in FVV coding; the conclusions are summarized in Section \\ref{sec:conclusion}. \n\n\\section{3D Video Representations}\n\\label{sec:3D_representation}\nIn this section, we briefly introduce the background information about 3D video representations, depth map generation, and view synthesis. \n\n\\subsection{Multiview Video and Free View-point Video}\nIn the human vision system (HVS), the eyes capture the scene through their own channels. The brain merges the two streams of images and generates stereo vision from the disparity between the images of the different channels. Based on this observation, the stereoscopic video was proposed, presenting the videos from two parallel cameras to the two eyes respectively. Humans can thus obtain a stereo viewing experience when watching the stereoscopic video \\cite{urey2011state}. The stereoscopic video can be captured by a binocular camera as shown in Fig. \\ref{fig:binocular_camera}. Although the binocular video can generate stereo vision for the HVS, the view point is still fixed. To enhance the immersive viewing experience, the MVV was proposed to add additional view points for the observer. The MVV is usually captured by camera array systems as shown in Fig. \\ref{fig:mvv_camera}. Compared to the binocular video, the MVV expands the vision field to the area captured by the camera array. Therefore, when watching MVV, the viewer can select any view point captured by the camera array. Moreover, the autostereoscopic video system displays the video from multiple view points simultaneously on a polarized screen so that the viewer can watch the stereoscopic video from multiple physical positions. Although the MVV provides a more immersive viewing experience than the binocular stereoscopic video, the view point is nevertheless limited to the captured ones. In order to obtain a fully immersive viewing experience, the FVV \\cite{tanimoto2011free} was proposed, which aims to provide arbitrary view point accessibility. However, it is physically hard to obtain a dense sampling of the whole 3D space. Therefore, the FVV is usually achieved by virtual view synthesis with the sparsely captured texture video and geometry information. The geometry information can be represented in multiple different formats, such as depth maps, point clouds, meshes, and voxels \\cite{yemez2007scene}. For the purpose of data streaming and real-time processing, depth maps are the most popular format used in FVV. Therefore, the FVV is usually represented by the MVV captured by a sparse camera array together with the depth maps \\cite{muller20113}. 
\n\n\\begin{figure}[t] \n\t\\centering\n\t\\subfigure[Binocular Camera]{\\label{fig:binocular_camera}\n\t\t\\includegraphics[width=0.23\\textwidth]{fig_binocular_camera.png}}\n\t\\subfigure[MVV Camera]{\\label{fig:mvv_camera}\n\t\t\\includegraphics[width=0.23\\textwidth]{fig_mvv_camera.png}}\n\t\\caption{Binocular and MVV camera systems.}\n\t\\label{fig:camera_system}\n\\end{figure}\n\n\n\\subsection{Depth Map Generation}\nThe depth maps, which represent the quantized distance from the objects to the camera plane in 3D space, can be generated in multiple ways. The highest precision depth maps are usually generated by stereo matching between multiple images from different views \\cite{sun2005symmetric}. Some recent research also generates depth maps from a single image by exploiting the implicit geometry priors within the texture images \\cite{saxena2005learning}. Besides the computational approaches, the depth maps can also be sampled by depth sensors. The depth sensors can be categorized into three classes by their sensing principles, including structured light \\cite{wang2015computational} (Microsoft Kinect version 1), time-of-flight infrared cameras \\cite{zhang2012microsoft} (Microsoft Kinect version~2), and laser scanning (MC3D) \\cite{matsuda2015mc3d}. Although all these depth sensors suffer from noise and low resolution \\cite{wang2015evaluation}, they can capture depth maps in real time. Therefore, for real-time applications, such as interactive computer gaming, depth sensors are widely applied. On the other hand, for applications like 3DTV which require high precision depth information, the computational depth estimation approaches are usually applied.\n\n\\subsection{Virtual View Synthesis}\nIn the early stage of 3D video, the virtual view video was interpolated from the texture images of its reference views using coarse geometry models. However, without an accurate epipolar geometry model, the interpolation usually causes artifacts on objects with large disparities. In the latest view synthesis framework, based on the epipolar geometry model and the calibrated camera parameters, each pixel in the reference view can be projected to the virtual view plane using its depth value \\cite{chan2007image}. In FVV, the virtual view synthesis is implemented based on the latest view synthesis framework as shown in Fig. \\ref{fig:rendering}, where the virtual view is synthesized from its left and right reference views. In order to reduce the pixel synthesis drift caused by depth noise, the view synthesis module usually synthesizes the depth maps of the virtual view from the depth maps of the reference views. Afterward, each pixel in the virtual view is backward mapped to the reference views and interpolated from the neighbors if the corresponding position falls off the sampling grid of the reference views. The backward projection reduces the ambiguity in merging pixels from multiple reference views.\n\n\\begin{figure}[htbp] \n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{fig_view_synthesis.png}\n\t\\caption{Virtual view synthesis.}\n\t\\label{fig:rendering}\n\\end{figure}\n\n\\section{Simulcast and Multiview Video Coding}\n\\label{sec:mvc}\nAs the MVV is formed by the video streams from multiple view points, each video stream can be encoded individually by a single view video codec. This encoding framework is called simulcast, as shown in Fig. \\ref{fig:mvc_simulcast}. 
Although the single view video codec can exploit the temporal and intra-frame correlation, the redundancy between different views still remains in the simulcast video streams. In order to compress the redundancy between different views, the multiview video coding (MVC) framework was proposed by adding inter-view prediction to the simulcast coding framework. Fig. \\ref{fig:mvc_ks} demonstrates the MVV coding framework with the inter-view prediction implemented on the key frames, which is called MVC\\_KS in the rest of this paper. The coding framework in Fig. \\ref{fig:mvc_as} extends the inter-view prediction to both key and non-key frames to fully exploit the inter-view redundancy. The coding framework in Fig. \\ref{fig:mvc_as} is called MVC\\_AS in the rest of this paper. \n\n\\begin{figure*}[t]\n\t\\centering \n\t\\subfigure[Simulcast]{\\label{fig:mvc_simulcast}\n\t\t\\includegraphics[width=0.3\\textwidth]{fig_MVC_simulcast.png}}\n\t\\subfigure[MVC\\_KS]{\\label{fig:mvc_ks}\n\t\t\\includegraphics[width=0.3\\textwidth]{fig_MVC_KS.png}}\n\t\\subfigure[MVC\\_AS]{\\label{fig:mvc_as}\n\t\t\\includegraphics[width=0.3\\textwidth]{fig_MVC_AS.png}}\n\t\\caption{Prediction structure in the MVC framework.}\n\t\\label{fig:mvc}\n\\end{figure*}\n\nFig. \\ref{fig:mvc_rd} demonstrates the rate-distortion (R-D) performance of the different coding frameworks. Two testing sequences, \"Dancer\" and \"Poznan\\_Street\", which are both at 1080p and 30 fps, are used in this paper for all the evaluation experiments. From the curves shown in Fig. \\ref{fig:mvc_rd}, we can see that the MVC frameworks improve the coding efficiency compared to the simulcast coding framework. The coding structure with full inter-view prediction (Fig. \\ref{fig:mvc_as}) achieves a further R-D improvement over the coding structure with inter-view prediction on only the key frames. \n\n\\begin{figure}[t] \n\t\\subfigure[Dancer]{\\label{fig:mvc_rd_dancer}\n\t\t\\includegraphics[width=0.23\\textwidth]{fig_mvc_rd_dancer.png}}\n\t\\subfigure[Poznan\\_Street]{\\label{fig:mvc_rd_pstreet}\n\t\t\\includegraphics[width=0.23\\textwidth]{fig_mvc_rd_PStreet.png}}\n\t\\caption{R-D performance of simulcast, MVC\\_KS, and MVC\\_AS.}\n\t\\label{fig:mvc_rd}\n\\end{figure}\n\nFor the purpose of compressing MVV, the MVC\\_AS framework would be the optimal choice. However, in FVV applications, the viewer may randomly switch the view point. Although the simulcast coding framework achieves the lowest R-D performance of the three, the video streams of different view points can be randomly accessed without any dependency. The MVC\\_KS framework achieves both moderate R-D performance and moderate view switching latency. The MVC\\_AS framework achieves the best coding efficiency at the cost of the highest view switching latency. Therefore, the prediction structure should be optimized to balance the R-D performance and the view switching latency \\cite{ji2014online}.\n\nBased on the coding structure shown in Fig. \\ref{fig:mvc}, the average view switching latency can be modeled as follows. Denote by $N$ the number of frames in each group of pictures (GOP) and by $M$ the deepest level of the inter-view dependency. 
Suppose the view switching is fully random and uniformly distributed among all the frames and views; then the average view switching delay $T$ can be represented by \n\n\\begin{equation}\n\tT(M, N)=c\\times M \\times N.\n\t\\label{equ:delay}\n\\end{equation}\nIn equation (\\ref{equ:delay}), $c$ denotes a constant coefficient, which means the average view switching delay is proportional to the product of the GOP size and the depth of the inter-view dependency. Consequently, the overall optimization problem can be represented as\n\n\\begin{equation}\n\t\\begin{split}\n\t\\min_{M,N,Q}\\; J(M,N,Q) &= D(Q)+\\lambda R(M,N,Q)+\\beta T(M,N) \\\\\n\t\\text{s.t.}\\quad R(M,N,Q) &< R_C\n\t\\end{split}\t\n\t\\label{equ:optimization}\n\\end{equation}\nIn equation (\\ref{equ:optimization}), $D$ and $R$ denote the coding distortion and bit rate, respectively, $Q$ denotes the quantization parameter, $\\lambda$ and $\\beta$ denote the Lagrange multipliers, and $R_C$ denotes the given bit rate constraint. Increasing $M$ and $N$ can reduce the coding bit rate but also introduces additional view switching latency. On the other hand, when $M$ and $N$ are small, the system obtains good random view accessibility at the cost of an increased bit-rate. This constrained optimization problem can be solved via the Karush-Kuhn-Tucker (KKT) conditions.\n \n\\section{Multiview Video and Depth Maps Coding}\n\\label{sec:joint_coding}\nAfter introducing the depth maps into 3D video for virtual view rendering, depth map coding technologies were further considered by the 3D video coding community. Since the depth information is usually represented by gray-level pictures, the depth maps can be treated as monochromatic MVV sequences and encoded by the MVC technologies. Therefore, most 3D video coding frameworks encode the MVV and depth maps separately by MVC. Besides, other existing depth map coding technologies encode the depth maps by exploiting the signal properties of the depth maps \\cite{oh2011depth}.\n\nAlthough the MVC can remove both the intra-view and inter-view redundancy in both the MVV and depth maps, the correlation between the MVV and depth maps cannot be exploited by the MVC framework. Therefore, various joint multiview video and depth map coding technologies have been proposed to improve the 3D video coding efficiency based on the MVC framework. Since the depth information can be converted to disparity, using the depth maps to assist the inter-view prediction is one direction to improve the coding efficiency. In \\cite{yea2009view}, Yea et al. proposed an auxiliary view synthesis prediction (VSP) mode to improve the inter-view prediction efficiency. In the VSP mode, an additional reference frame is generated for each coding frame by warping the pictures of the reference view to the current view with the decoded depth maps. The rendered virtual reference frame is added into the reference frame buffer for the prediction. Compared to the reference frame from the reference view, the rendered virtual reference frame is in the same camera plane as the current frame. Therefore, the estimated disparity vector between the blocks in the current frame and the virtual reference frame is much smaller than that between the current frame and the reference frame from the reference view. Therefore, the VSP mode can reduce the R-D cost of the disparity vectors compared to the traditional inter-view prediction mode. On the other hand, if a non-translational transform exists between different views, the VSP mode can warp the reference frame to the same camera plane as the current frame. This can significantly reduce the prediction residue compared to the translational disparity compensation prediction (DCP). 
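To make the depth-to-disparity conversion used by such warping-based schemes concrete, the following is a minimal sketch for the common rectified, horizontally aligned two-camera setup; the 8-bit depth quantization convention with near\/far planes $Z_n$ and $Z_f$, the focal length $f$, and the baseline $B$ are assumptions of this sketch rather than details fixed by the cited works.\n\\begin{verbatim}\nimport numpy as np\n\ndef depth_to_disparity(depth8, z_near, z_far, focal, baseline):\n    # Recover metric depth Z from an 8-bit depth map value v:\n    #   1\/Z = (v\/255) * (1\/Z_near - 1\/Z_far) + 1\/Z_far\n    v = depth8.astype(np.float64) \/ 255.0\n    inv_z = v * (1.0 \/ z_near - 1.0 \/ z_far) + 1.0 \/ z_far\n    # Horizontal disparity between two rectified views: d = f * B \/ Z.\n    return focal * baseline * inv_z\n\\end{verbatim}\nA per-pixel disparity field of this kind is what a scheme in the spirit of DADCP, discussed next, would quantize (e.g., to quarter-pixel precision) and use directly for prediction.\n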
On the other hand, if the non-translational transform exists between different views, the VSP mode can warp the reference frame to the same camera plane as the current frame. This can significantly reduce the prediction residue compared to the translational disparity compensation prediction (DCP). \n\nAlthough the VSP mode can reduce the bit-rate in inter-view prediction, the additional buffer cost increase the complexity at both encoder and decoder sides. In order to exploit the depth information to assist inter-view prediction, Wang et al.\\cite{wang2012free} proposed a depth-assisted disparity compensation prediction (DADCP) mode. In this mode, the encoder calculates the disparity vector for each pixel in the block based on the calibrated camera parameters and the depth value of each pixel. During the prediction, the disparity vectors calculated from the depth maps are quantized to quarter pixel precision. The prediction residue of the DADCP mode is generated by per-pixel interview prediction. Since the depth maps can be treated as the side information for the texture video coding if the depth maps are encoded\/decoded ahead of the multiview video encoding\/decoding, the bit-rate of the depth calculated disparity vectors can thus be saved. The DADCP scheme can also save the bit-rate of disparity vector without the additional buffer cost compared to the VSP. Besides, since DADCP does not require any additional motion search in the prediction, its run-time complexity is also lower than that of the VSP mode. Figs. \\ref{fig:dadcp_dancer} and \\ref{fig:dadcp_pstreet} demonstrate the R-D performance of the MVC, MVC with VSP, and MVC with DADCP on the two testing sequences, respectively. From the curves, we can see that DADCP outperforms both the traditional MVC and MVC with VSP. The R-D gain is increasing with the bit-rate rising. Compared to the traditional MVC, the gain of DADCP comes from the bit rate reduction on the disparity vectors in the inter-view prediction. Moreover, the rendering distortion of the synthesized virtual reference frame suppresses the R-D performance of the VSP mode.\n\n\\begin{figure}[t] \n\t\\subfigure[Dancer]{\\label{fig:dadcp_dancer}\n\t\t\\includegraphics[width=0.23\\textwidth]{fig_dadcp_dancer.png}}\n\t\\subfigure[Poznan\\_Street]{\\label{fig:dadcp_pstreet}\n\t\t\\includegraphics[width=0.23\\textwidth]{fig_dadcp_pstreet.png}}\n\t\\caption{R-D performance of the joint MVV and depth maps coding techniques.}\n\t\\label{fig:joint_coding_rd}\n\\end{figure}\n\n\nBeside using depth to assist the inter-view prediction, other research works exploit the similarity of the motion vector field between the MVV and depth maps to reduce the overall bit-rate of 3D video coding. Despite the lack of the texture, the depth maps record the same scene as the texture video. The depth maps and texture video can share similar motion vector fields. Zhang et al. \\cite{zhang2010joint} proposed a joint coding framework that uses the motion vectors of the texture video as a candidate reference motion vector for the depth maps coding. However, since the depth maps are usually noisy than the texture video, the optimize motion vectors of the depth maps usually differ from those the corresponding block in the texture video. The R-D gain of the motion vectors sharing between texture video and depth maps is limited. Guo et al. \\cite{guo2006inter} proposed an inter-view direct mode based on the parallelogram constraint between the motion and disparity vectors. 
The parallelogram constraint comes from the conservation of the temporal and spatial optical flow: for any two pictures from one view and the two corresponding frames from another view, the two motion vectors and the two disparity vectors between the corresponding pixels form a parallelogram. This scheme reduces the coding bit rate of the reference motion or disparity vectors and also saves run-time complexity at the encoder side.

Due to the smoothness of the depth maps, some efforts have been spent on down-sampling the depth maps to a lower resolution and up-sampling them after decoding \cite{oh2009depth}. However, since the bit rate of the depth maps is usually only 10\% to 20\% of that of the texture video, the achievable savings on the depth maps are limited, and the majority of the work still focuses on reducing the bit rate of the MVV by joint coding techniques.

Due to the advantages of geometry block partitioning, Wang et al. \cite{wang2011reduced} proposed a practical geometry block partitioning scheme that exploits the correlation of the object boundaries between texture and depth images. The proposed scheme searches for the partition line with a linear operator based on the input texture and depth information, which significantly reduces the complexity of the geometry partitioning. By enabling the geometry partitioning, the proposed coding framework achieves a 6\% R-D gain compared to the traditional MVC framework with only an 18\% increase in encoding time \cite{wang2013complexity}. The proposed partition line searching scheme can also be extended to traditional 2D video compression \cite{wang2012complexity}.

For binocular stereoscopic video coding, some researchers have proposed frame-compatible 3D video coding frameworks that merge the frames from different views into a single frame. The interleaving modes include top-bottom, side-by-side, row-interleaved, column-interleaved, and checkerboard \cite{vetro2010frame}. The sequences of interleaved images can then be compressed by a traditional single-view video codec.

\section{Scalable 3D Video Coding}
\label{sec:scalable_coding}
Due to the limited bandwidth and the high coding bit rate of 3D video, scalable 3D video coding has also gained much research attention recently. In \cite{kurutepe2007client}, Kurutepe et al. proposed a resolution scalable MVC framework. In this framework, the encoder down-samples all the texture frames to a lower resolution. The base layer stream is generated by encoding all the texture frames at low resolution with MVC, and the enhancement layer is the residue between the original frames and the decoded base layer frames. For each view, the enhancement layer is encoded independently by a traditional single-view video codec. The encoder delivers the base layer together with the enhancement layers of the views selected by the viewer. If the viewer changes the viewpoint, the enhancement layer can be switched with very low latency, since there is no inter-view dependency in the enhancement layer. Therefore, this framework can also be applied to the scalable coding of FVV. This framework was further extended to a quality scalable MVC framework in \cite{ji2014online} by applying traditional scalable coding technology in MVC. Figs. \ref{fig:svc_dancer} and \ref{fig:svc_pstreet} demonstrate the R-D performance of these two coding frameworks on the two testing sequences, respectively. From Fig. \ref{fig:svc_rd}, we can see that the quality scalable framework outperforms the resolution scalable framework, since the former applies inter-layer prediction when encoding the enhancement layer.
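As an illustration of the resolution scalable structure described above, the following is a minimal sketch; it is our own simplification, assuming average-pooling down-sampling and nearest-neighbour up-sampling in place of the actual MVC coding tools, and omitting the lossy coding of both layers.

\begin{verbatim}
# Minimal sketch of resolution-scalable layering: the base layer is a
# down-sampled frame; the enhancement layer is the residue between the
# original frame and the up-sampled base layer, coded per view.
import numpy as np

def downsample(frame, s=2):
    # block average; assumes the frame dimensions are divisible by s
    h, w = frame.shape
    return frame.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(frame, s=2):
    return np.repeat(np.repeat(frame, s, axis=0), s, axis=1)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in frame

base = downsample(frame)               # base layer (low resolution)
enhancement = frame - upsample(base)   # per-view enhancement residue

# A decoder that has both layers reconstructs the full-resolution frame;
# a decoder with only the base layer still gets a low-resolution picture.
assert np.allclose(upsample(base) + enhancement, frame)
\end{verbatim}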
\begin{figure}[t] 
	\centering
	\subfigure[Dancer]{\label{fig:svc_dancer}
		\includegraphics[width=0.23\textwidth]{fig_svc_dancer.png}}
	\subfigure[Poznan\_Street]{\label{fig:svc_pstreet}
		\includegraphics[width=0.23\textwidth]{fig_svc_pstreet.png}}
	\caption{R-D performance of the scalable 3D video coding techniques.}
	\label{fig:svc_rd}
\end{figure}

In MVV and FVV, view scalability is a new research direction in 3D video coding. Shimizu et al. \cite{shimizu2007view} proposed a view scalable MVC framework based on view synthesis. In their framework, the video sequence of one view is selected as the base view and encoded by a traditional single-view codec. The remaining views are all treated as enhanced views, whose frames are predicted from the corresponding frames in the base view. In this framework, the decoder obtains full scalability of view switching, since every view can be decoded from the base view and its own prediction residue. However, the coding efficiency is much lower than that of the MVC framework. To further improve the coding efficiency, temporal prediction can also be applied to the enhanced views; random view accessibility is still maintained, since all the views depend only on the base view. In this framework, the number of base views and the prediction between the base views and the enhanced views can be extended. For example, a hierarchical inter-view prediction structure can also be applied to improve the view scalability. In a 3D video coding system, the coding framework can thus be optimized by balancing the coding efficiency and the view scalability.

\section{Bit Allocation of 3D Video Coding}
\label{sec:bit_allocation}
In FVV, since the video sequences displayed at the client side are synthesized from the texture video sequences and the depth maps of the reference viewpoints, the quality of the synthesized video is determined by the quality of both the decoded texture video and the decoded depth maps. Based on the principle of virtual view synthesis, the distortion of the texture video introduces a linear error into the synthesized video frames, whereas the distortion of the depth maps results in pixel position drifting errors. To obtain the optimal quality of the synthesized video, the encoder has to optimize the bit allocation between the texture video and the depth maps. In \cite{liu2009joint}, Liu et al. proposed a frame-level view synthesis distortion estimation algorithm and designed a bit allocation algorithm based on the rate-distortion model of 3D video coding. However, since the frequency-domain view synthesis distortion model requires regions with uniform disparity error, the frame-level model is too inaccurate to generate the R-D model. To obtain an accurate R-D model of 3D video coding, Wang et al. \cite{wang2010region} proposed a region-based view synthesis distortion estimation model that partitions the whole frame into regions with uniform depth values. Estimating the distortion for each depth-uniform region significantly reduces the error of the view synthesis distortion estimation. Based on the proposed R-D model, Wang et al. \cite{wang2012free} proposed a bit allocation algorithm with a single-pass search. The results reported in \cite{wang2012free} demonstrated that the region-based R-D model obtains a more accurate estimate of the view synthesis distortion and improves the R-D performance of the synthesized virtual view video compared to the frame-level R-D model. Besides, Yuan et al.~\cite{yuan2011model} solved the bit allocation problem by Lagrangian optimization, which optimizes the R-D performance with very low complexity.
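The following is a small sketch of such a texture/depth bit allocation search, under toy assumptions: quadratic models for the texture-induced and depth-induced synthesis distortion and a simple inverse rate model, solved by exhaustive search rather than by the single-pass or Lagrangian methods of \cite{wang2012free} and \cite{yuan2011model}.

\begin{verbatim}
# Illustrative texture/depth bit allocation: choose the texture and depth
# quantization parameters (Qt, Qd) minimizing a modeled view synthesis
# distortion under a total rate budget. All models are toy assumptions.
import itertools

R_TOTAL = 8.0                   # assumed total bit-rate budget (Mbps)

def rate(q):                    # toy rate model, decreasing in QP
    return 60.0 / q

def synthesis_distortion(qt, qd):
    d_texture = 0.8 * qt ** 2   # texture coding error enters the rendering
    d_depth = 0.3 * qd ** 2     # depth error causes position drifting
    return d_texture + d_depth

best = min(
    (synthesis_distortion(qt, qd), qt, qd)
    for qt, qd in itertools.product(range(20, 45), repeat=2)
    if rate(qt) + rate(qd) <= R_TOTAL
)
d, qt, qd = best
print(f"Qt={qt}, Qd={qd}: D_synth={d:.0f}, R={rate(qt)+rate(qd):.2f} Mbps")
\end{verbatim}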
Besides the bit allocation between the MVV and the depth maps, other works have studied the bit allocation between different views. In \cite{shao2012asymmetric}, Shao et al. proposed an asymmetric stereoscopic video coding scheme and optimized the bit allocation based on the masking effect of the human visual system (HVS). Yuan et al. \cite{yuan2015rate} proposed a bit allocation algorithm between different views based on view switching. However, since the view switching behavior is complicated, incorporating a view switching behavior model into the bit allocation remains an unsolved problem for free viewpoint video coding with multiple input viewpoints.

\section{Conclusions}
\label{sec:conclusion}
In this paper, we reviewed the recent progress of high efficiency 3D video coding technologies. Most MVV and FVV coding schemes are based on the MVC framework. For FVV represented by MVV plus depth maps, the majority of the research focuses on joint MVV and depth map coding techniques that reduce the overall coding bit rate. When view switching is considered, the prediction structure of MVV and FVV needs to be optimized to balance the view switching latency and the R-D performance. To make 3D video streaming adaptive to the bandwidth and the viewing behavior model, scalable MVC and FVV coding are also an important research area to explore. Finally, bit allocation and rate control also need to be further optimized by considering the viewing behavior model. With the emerging applications of 3D video, the coding technologies will be further improved to meet the requirements of various applications.

\bibliographystyle{IEEEtran}

\section{Introduction}

Within the past decade, the field of ultracold atomic gases has significantly extended the scope of atomic and molecular physics~\cite{Anderson1995,Davis1995,DeMarco1999}. The experimental manipulation of quantum degenerate gases has led to the development of a toolbox for quantum atom optics \cite{Rolston2002}, including guides \cite{Leanhardt2002}, beam splitters and combiners \cite{Cassettari2000}, as well as switches \cite{Muller2001}. These tools are intended for the development of a new generation of guided interferometric sensors \cite{Hansel2001,Schumm2005,Wang2005a,Fattori2007,Jo2007,Jo2007a,Billy2007}. One of the main requirements for such experiments is the coherent spatial transport of quantum degenerate gases. At present, interferometric applications involving ultracold fermionic atoms are also being evaluated and, in some cases, regarded as superior to their bosonic counterparts \cite{Andersson2002,Search2003,Modugno2004}. However, adiabatic transport of quantum degenerate fermionic samples had not been accomplished until now.

Three main types of mechanical transport mechanisms for Bose-Einstein condensates over macroscopic distances have been reported.
The first transport was achieved by carefully moving the focus of a red-detuned dipole trap over a distance of $44~{\rm cm}$ \cite{Gustavson2001}. Although the atomic cloud was heated due to vibrations of the moving optical components, the finite depth of the dipole trap provided continuous evaporative cooling, so quantum degeneracy was maintained during the transfer time of $7.5$ seconds. An alternative method transports the condensate within a one-dimensional optical lattice \cite{Schmid2006}. In order to obtain long transport distances, the lattice was produced by two counter-propagating Bessel beams. By adjusting the relative phase, it was possible to shift the lattice sites and transport a condensate over distances of up to $10~{\rm cm}$. The third method involves lithographic conducting structures, so-called atom chips. After creating a Bose-Einstein condensate with such a wire structure, it is possible to shift the condensate by applying modulated currents to an additional periodic wire pattern \cite{Haensel2001,Hommelhoff2005,Fortagh2007}. Transfer distances of up to $1.6~{\rm cm}$ were demonstrated.

However, all of these methods suffer from strong heating or loss mechanisms. A moving dipole trap involves the translation of optics and induces heating due to vibration. Optical lattices and chip traps typically produce strongly confining traps with high trapping frequencies and high atomic densities. Although high densities support fast evaporation to quantum degeneracy, they are impractical for the transport because of heating and atom loss due to enhanced three-body collision rates.

In this paper, we report on the transport of quantum degenerate samples of bosonic $^{87}{\rm Rb}$ and fermionic $^{40}{\rm K}$ in a harmonic potential over a distance of $6~{\rm mm}$. This also constitutes the first transport experiment with a quantum degenerate Fermi gas. The transport is realized by adiabatically transforming a Ioffe-Pritchard type magnetic trap produced by macroscopic coils. The atom numbers are considerably larger than in experiments with atom chips \cite{Fortagh2007}, and the trapping strength is adjustable without changing the trap depth.

The initial production of a quantum degenerate Bose-Fermi mixture is performed in a so-called QUIC trap, consisting of a pair of anti-Helmholtz coils and a third coil in perpendicular orientation~\cite{Esslinger1998}. The adiabatic transport is realized by adding a second anti-Helmholtz pair. A numerical simulation is used to calculate optimized currents for all coils in order to create a slowly changing trap and a smooth transport to the final position while maintaining a magnetic field for proper spin orientation of the sample. By these means, it is also possible to accelerate and launch degenerate ensembles with high precision and reproducibility. This capability is of great interest for applications in fountain clocks~\cite{Wynands2005} and inertial sensors~\cite{Yver-Leduc2003}.

The transport mechanism presented in this paper is particularly useful for applications where low heating rates and large atom numbers are required. In particular, it may be used to load chip traps or optical interferometers \cite{Dumke2002} with large atomic samples. It may also be used to transport atoms to probe specific position-dependent quantities \cite{Gunther2005}.
By cascading the coil configuration used in this experiment, it will be possible to cover much larger transport distances.

In our case, the transport is used to load the atomic cloud into a dipole trap located at the geometric center of the QUIC trap. Transporting the cloud to this position enables us to use the coils of the QUIC trap to generate strong homogeneous magnetic fields with small spatial inhomogeneity. Hence, the applicability of the popular QUIC trap is drastically improved, since it can be used to produce strong magnetic fields (around $1000~{\rm G}$) at moderate currents. The use of such fields has recently become important for the experimental manipulation of the scattering properties of ultracold ensembles in the vicinity of Feshbach resonances \cite{Inouye1998,Inouye2004,Ferlaino2006,Klempt2007}.

The paper is organized as follows. We give an overview of our experimental setup in Sec.~\ref{sec:setup}. Details of our implementation and results of the transport of quantum degenerate gases are discussed in Sec.~\ref{sec:transport}. We conclude with an outlook in Sec.~\ref{sec:summary}.

\section{Experimental Setup}
\label{sec:setup} The apparatus used for the experiments described here consists of two glass cells divided by a differential pumping stage: a MOT cell, where the atomic clouds are collected initially, and a science cell, where experiments with ultracold atoms are performed (see Fig.~\ref{fig:vacuum}). The MOT region, designed for the collection of large clouds of K and Rb, has been described in detail previously~\cite{Klempt2006}.

Atoms are transferred between these two regions of the experiment by transporting them in a movable magnetic quadrupole trap. This transport mechanism is described in detail, since it is a key element for further transport experiments with quantum degenerate samples.

\subsection{Dual species MOT design}
The MOT is produced in a large glass cell with inner dimensions $50~{\rm mm} \times 50~{\rm mm} \times 140~{\rm mm}$ at a pressure of $1\times 10^{-9}~{\rm mbar}$. The cell allows for trapping beams with a diameter of $3~{\rm cm}$ to capture a large number of atoms. Commercial rubidium dispensers and potassium dispensers constructed according to Ref.~\cite{DeMarco1999a} are used to provide vapors of $^{87}{\rm Rb}$ and $^{40}{\rm K}$. These dispensers are located at a distance of $35~{\rm cm}$ from the MOT cell and coat the surfaces of the chamber with rubidium and potassium.

Two high power laser systems are necessary to provide the light for magneto-optical trapping of the two atomic species. The cooling and repumping light for the rubidium atoms is provided by two external cavity diode lasers. Both beams are superposed and simultaneously amplified by a tapered amplifier (TA) chip~\cite{Walpole1996}. A further external cavity diode laser amplified by a TA provides resonant light at the $^{39}{\rm K}$ D2 transition frequency. This light is divided into two parts: one part is shifted to the $^{40}{\rm K}$ cooling frequency by an acousto-optical modulator (AOM) in double-pass configuration, while the second part is tuned resonant to the $^{40}{\rm K}$ repumping transition by an AOM in quadruple-pass configuration and recombined with the cooling light. After further amplification of the light for potassium with a second TA, it is combined with the light for rubidium on a long-pass mirror.
A single polarization-maintaining fiber collects all four frequencies for the cooling of potassium and rubidium. The use of a single fiber greatly facilitates all further adjustments, and the optical setup is no more complicated than for a single-species MOT. We operate the MOT with a total power of $360~{\rm mW}$ for Rb and $160~{\rm mW}$ for K.

\begin{figure}
\centering
\includegraphics*[width=\columnwidth]{Klempt01}
\caption{Outline of the vacuum system and magnetic field coils. The glass cell of the MOT is on the left hand side. The atoms are transported to the science cell on the right hand side by a magnetic field induced by a moving pair of coils, as illustrated in~(a). A differential pumping stage enables a lower pressure in the science cell. The three coils of the QUIC trap around the science cell are shown on the right hand side. The configuration~(b) is used for the transport of the cold mixture to the geometric center of the QUIC trap. The pair of transport coils~(1) is displaced from the geometric axis defined by the main coils of the QUIC trap~(2), opposite to the QUIC coil~(3).} \label{fig:vacuum}
\end{figure}

The performance of this MOT is further improved by the use of light-induced atom desorption (LIAD) at a wavelength of $395~{\rm nm}$. Atoms that are adsorbed at the walls of a vacuum chamber are desorbed by irradiation with weak incoherent light. This allows for a temporary increase of the desired partial pressure. LIAD can thus be used to load a rubidium MOT~\cite{Anderson2001} and to obtain high loading efficiencies~\cite{Atutov2003}. Our recent experiments~\cite{Klempt2006} have shown that LIAD is particularly well suited as a switchable atom source, since the pressure decays back to equilibrium after the desorption light is turned off. For the experiments described here, about $1\times 10^8$ $^{40}{\rm K}$ and $5\times 10^9$ $^{87}{\rm Rb}$ atoms are trapped while the desorption light is on; they are then held in the MOT without desorption light while the pressure drops, and finally, only magnetic fields are used to confine them.

\subsection{Magnetic transport}
After the desorption light is switched off, the temperature of the atoms is brought close to the recoil limit in an optical molasses phase. In a second preparation step, the atoms are optically pumped to the fully stretched states $|f,m_f\rangle=|2,2\rangle$ for Rb and $|9/2,9/2\rangle$ for~K. This allows the atoms to be captured in a magnetic quadrupole field induced by the two MOT coils, which are mounted on a translation stage. Just before the stage starts moving, the current of these coils is ramped up from $14~{\rm A}$ to $28~{\rm A}$ in $300~{\rm ms}$, which compresses the cold ensemble.

The coils are moved over a distance of $42~{\rm cm}$ to the science cell at a pressure of $2\times 10^{-11}~{\rm mbar}$ [see part~(a) in Fig.~\ref{fig:vacuum}]. Such transport systems for cold atoms have previously been realized using moving coils~\cite{Lewandowski2003} or sets of overlapping coils~\cite{Greiner2001}, since these systems do not require a second MOT in the science cell and provide far better optical access in this region.

The magnetic confinement for the transport of cold thermal samples is provided by the same coils which produce the small field gradient for the MOT. These coils can produce a quadrupole field with a gradient of up to $138~{\rm G}/{\rm cm}$. Each coil has 132~turns of $1~{\rm mm}\times 2.5~{\rm mm}$ copper wire.
The coils of $13~{\rm mm}$ thickness are separated by $74~{\rm mm}$ and have an inner diameter of $45~{\rm mm}$. Their wires are fixed with epoxy resin to avoid drifts in the performance. No active cooling is needed.

This pair of coils is mounted on a translation stage (Parker, 404~XR series) with a nominal position reproducibility of $5~\mu{\rm m}$. However, the motor of the translation stage is switched off when it is at rest to avoid rf noise between $1$ and $30~{\rm MHz}$, which perturbs the rf evaporation of rubidium. The switching does not affect the position. This experimental technique may be used to transport atoms of two species together~\cite{Goldwin2004} or for uniting cold clouds~\cite{Bertelsen2007}.

The translation stage is a reliable, maintenance-free tool in our experiments. Although the motion can be controlled in detail, we have chosen a simple operation meth\-od. We allow for a maximal speed of $10~{\rm m}/{\rm s}$, limit the acceleration to $1~{\rm m}/{\rm s}^2$, and the jerk to $100~{\rm m}/{\rm s}^3$. With these settings, the distance of $0.42~{\rm m}$ is covered in less than $1.3~{\rm s}$. Similarly to other experiments~\cite{Theis2004,Greiner2001}, at least one third of the particles reach the science cell. The losses can be attributed to collisions with the background gas and to the transport through the differential pumping tube between the two glass cells. Moreover, we have not observed heating during the transport and conclude that this method is well suited for the transport of mixtures of different species.

Mechanical transport has been suggested for mixing cold gases with many components, which may be created in different MOT regions and combined with moving coils~\cite{Bertelsen2007}. This may be especially useful for combinations that are more difficult to combine in a MOT than K and Rb. Moreover, a chain of such traps has been suggested as a step towards a continuously created BEC~\cite{Lahaye2006}.

\subsection{Production of quantum degenerate gases}
The efficient transfer of cold atoms from the MOT region into a harmonic trap in the science cell enables the production of quantum degeneracy for both $^{87}{\rm Rb}$ and $^{40}{\rm K}$ by forced rf-evaporation of rubidium. To this end, the quadrupole field used for the transport is converted into a harmonic trapping potential produced by a magnetic trap in QUIC configuration [see Graph~(b) in Fig.~\ref{fig:vacuum}].

To load the atoms into the QUIC trap, they are first transferred from the transport coils into an even stronger quadrupole field formed by the main coils of the QUIC trap. Subsequently, the current through the QUIC coil is ram\-ped up. Thus, the atoms are pulled towards this coil and the trap is deformed such that effective evaporation to quantum degeneracy is possible~\cite{Esslinger1998}.
The $^{87}{\rm Rb}$ atoms are cooled by rf-evaporation until a Bose-Einstein condensate with up to $1.5\times 10^6$ atoms at a temperature $T=460~{\rm nK}$, with a transition temperature $T_{\rm C}=580~{\rm nK}$, is reached. The $^{40}{\rm K}$ atoms are cooled sympathetically with the Rb atoms down to the same temperature, yielding a quantum degenerate Fermi gas with $1.3\times 10^6$ K atoms (with a Fermi temperature of $T_{\rm F}=1530~{\rm nK}$).

The QUIC trap consists of a pair of coils in anti-Helm\-holtz configuration with $92$~turns each and a separation of $30~{\rm mm}$ [see (2) in Graph~(b) of Fig.~\ref{fig:vacuum}], and a third QUIC coil with $86$~turns [see (3) in Graph~(b) of Fig.~\ref{fig:vacuum}], which is offset from the center of the anti-Helmholtz pair by $40~{\rm mm}$. This coil configuration produces an offset field of $1.4$~G and trapping frequencies for $^{87}{\rm Rb}$ of $23~{\rm Hz}$ axially and $240~{\rm Hz}$ radially. In this configuration, the same current of $25~{\rm A}$ flows through all three coils, yielding an offset field stability of $3~{\rm mG}$. All experimental results presented in this paper were acquired by releasing the atomic clouds from the magnetic confining potential and taking resonant absorption images after ballistic expansion.

\section{Transport in a harmonic trap}
\label{sec:transport} Many recent experiments with cold ensembles utilize homogeneous magnetic fields to manipulate the interaction of cold atoms in the vicinity of Fesh\-bach resonances~\cite{Weiner1999}. In our experiments, the homogeneous magnetic fields are created by the main coils of the QUIC trap~\cite{Klempt2007}, a solution that reduces the experimental complexity in the proximity of the cell. This allows us to profit from the achieved high mechanical and current stability of the QUIC trap.

However, any displacement of the atoms from the symmetry axis leads to a variation of the magnetic field over the width of the cloud. This variation is much smaller in the geometric center of the pair of main coils, as shown in Fig.~\ref{fig:noField}. Transporting the cold mixture to this region therefore leads to a smaller magnetic field spread and thus a better control of this crucial parameter.

\begin{figure}
\centering
\includegraphics*[width=\columnwidth]{Klempt02}
\caption{Calculated magnetic field strength and its gradient generated by the main coils of the QUIC trap operated in Helmholtz configuration, as a function of the radial position in the symmetry plane. The transport to the geometric center of the QUIC trap results in a smaller variation of the magnetic field strength over the size of the cloud.} \label{fig:noField}
\end{figure}

We have implemented such a transport of ultracold or quantum degenerate samples over a distance of $\approx 6$~mm in a harmonic trap. This transport is realized by extending the capabilities of a magnetic trap in QUIC configuration~\cite{Esslinger1998} with an additional coil pair in anti-Helmholtz configuration. In our case, that coil pair is identical to the one used for the transport to the science cell. Part~(b) of Fig.~\ref{fig:vacuum} illustrates the coil configuration around the vacuum system for this transport.

The basic idea is to obtain a quadrupole field induced by the two coil pairs (main and transport) which is shifted away from the one induced by the main coils alone. For any position, the QUIC coil can then be used to convert the effective quadrupole field into a harmonic trap.
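The shift of the field zero can be illustrated with a one-dimensional toy model, which is our own simplification and not the numerical simulation used in the experiment: near the axis, each quadrupole pair contributes a field that is linear in position with a gradient proportional to its current, so the zero of the combined field moves continuously as the current ratio is changed.

\begin{verbatim}
# Toy 1D model: two quadrupole fields B_i(x) = g_i * (x - x_i). The zero of
# the summed field, i.e. the trap position, moves as the gradient (current)
# ratio changes. Positions and gradients are illustrative values only.
import numpy as np

X_MAIN, X_TRANSPORT = 0.0, -6.0    # field zeros of the two pairs (mm)

def trap_position(g_main, g_transport):
    """Zero of g_main*(x - X_MAIN) + g_transport*(x - X_TRANSPORT)."""
    return (g_main * X_MAIN + g_transport * X_TRANSPORT) / (g_main + g_transport)

# Ramping one gradient up while the other ramps down moves the combined
# zero smoothly between the centers of the two coil pairs.
for s in np.linspace(0.0, 1.0, 5):
    print(f"ramp s={s:.2f}: trap at x = {trap_position(s, 1.0 - s):+.2f} mm")
\end{verbatim}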
We have simulated the magnetic fields to obtain optimal parameters for the timing of all currents during the transport. The position of the transport coils was chosen such that full optical access to the cell is guaranteed. Precise control of the translation stage enables us to position the atoms exactly in the center of the main coils.

For the transport of a quantum degenerate atomic ensemble in the magnetic trap, one has to consider both the position of the potential minimum and the offset field, which determines the trapping frequencies. In particular, the offset field should never drop to zero during the transformation; otherwise, spin flips quickly lead to loss from the trap and destroy the quantum degenerate ensemble. Another goal of our optimization strategy was to quickly reduce the radial trapping frequency in order to minimize density-dependent losses and heating.

\begin{figure}
\centering
\includegraphics*[width=0.7\columnwidth]{Klempt03}
\caption{Simulation of the transport from the QUIC trap to the symmetry axis of the quadrupole coils. The top graph shows the resulting positions and the offset magnetic field. The middle graph illustrates the currents as obtained in an optimization with twelve points. The wobbles of the magnetic field are due to the linear interpolation between these points. The trap frequencies for $^{87}{\rm Rb}$ are depicted in the bottom graph. Note that the trap frequencies for $^{40}{\rm K}$ can be inferred directly using the ratio of the masses, since the product $m_f\,g_f=1$ is the same for both transported states.} \label{fig:sim}
\end{figure}

An optimization algorithm was applied to find suitable currents through the coils taking these issues into account. We aimed for the following functional behavior of the position~$x$ with time~$t$:
\begin{equation}
x\left(t\right) = D \frac{1}{2} \left[ \cos\left(\pi \frac{t}{\tau} \right) +1 \right]
\label{eq:move}
\end{equation}
for a total transfer from position~$D$ to the origin in the time~$\tau$. Thus, the velocity of the trap changes smoothly from and to zero. The result of the simulation is shown in Fig.~\ref{fig:sim}. To realize the transfer, the current of the additional quadrupole coil pair and the current through the QUIC coil are simultaneously increased while the current through the original anti-Helmholtz coil pair is decreased. The center of the harmonic potential then shifts towards the common center of the two quadrupole coil pairs.

\begin{figure}
\centering
\includegraphics*[width=\columnwidth]{Klempt05}
\caption{Simulated position of the harmonic trap and measured positions of a sample of cold rubidium.} \label{fig:xt}
\end{figure}

By controlling the currents through the coils, the transfer can in principle be realized adiabatically. Since it is experimentally necessary to implement the time dependence of the currents as linear ramps, we have limited our simulation to twelve linear ramps.

Based on this simulation, we have experimentally implemented the transport and taken absorption images after each of the applied ramps. The atoms follow the position of the trap precisely, as illustrated in Fig.~\ref{fig:xt}. The measurement confirms the simulated transport of the ultracold mixture. It shows that we have complete control over the position of the atoms by applying designed currents through the coils to induce an adiabatically changing magnetic field.
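The trajectory of Eq.~(\ref{eq:move}) and its approximation by twelve linear ramps can be illustrated with a short script; this is a sketch of the scheduling idea only, not the actual field simulation, and the distance and duration are example values.

\begin{verbatim}
# Sketch: the smooth trajectory x(t) = (D/2) * (cos(pi*t/tau) + 1) and its
# approximation by twelve linear ramps (interpolation between 13 set points).
import numpy as np

D, TAU, N_RAMPS = 6.0, 1.5, 12       # distance (mm), time (s), ramp count

def x_smooth(t):
    return 0.5 * D * (np.cos(np.pi * t / TAU) + 1.0)

t_knots = np.linspace(0.0, TAU, N_RAMPS + 1)   # end points of the 12 ramps
t_fine = np.linspace(0.0, TAU, 1001)
x_ramped = np.interp(t_fine, t_knots, x_smooth(t_knots))

# The velocity vanishes at start and end, so the trap starts and stops softly.
v = np.gradient(x_smooth(t_fine), t_fine)
print(f"v(0) = {v[0]:.3f} mm/s, v(tau) = {v[-1]:.3f} mm/s")
print(f"max ramp error = {np.max(np.abs(x_smooth(t_fine) - x_ramped)):.4f} mm")
\end{verbatim}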
Due to the lower trapping frequencies, the heating rate of the sample in the final trap is lower than in the initial QUIC trap ($75$ versus $330~{\rm nK}/{\rm s}$). The observed heating during the transfer lies between these two rates, so no additional heating due to the transport is present, as shown in Fig.~\ref{fig:heat}. In fact, it is possible to transport quantum degenerate gases without loss of degeneracy if the transport time~$\tau$ is shorter than $1~{\rm s}$.

\begin{figure}
\centering
\includegraphics*[width=\columnwidth]{Klempt06}
\caption{Heating of a $^{87}{\rm Rb}$ ensemble just above $T_{\rm C}$, either held in the magnetic trap or being transported. For the latter, the temperature in the magnetic trap at the end of the transport was measured for different transport times~$\tau$.} \label{fig:heat}
\end{figure}

Due to the chosen functional behavior [see Eq.~(\ref{eq:move})], short transport times $\tau < 1.5~{\rm s}$ in particular lead to an oscillation of the atomic clouds in the final magnetic trap (see Fig.~\ref{fig:oscill}). In this case, the trap no longer changes adiabatically, and the atoms behave like a classical particle in a harmonic trap. We have experimentally determined that twelve ramps represent a good compromise between the adiabaticity of the transport and the experimental complexity.

\begin{figure}
\centering
\includegraphics*[width=0.7\columnwidth]{Klempt07}
\caption{Oscillations in the harmonic trap after the transfer from the QUIC trap to the geometric center, for pure samples and for a mixture of rubidium and potassium. Note the change in the time scale for the mixture, in which the oscillation of rubidium is not only smaller but also strongly damped by the potassium atoms.} \label{fig:oscill}
\end{figure}

We obtain similar results for the transport of a heteronuclear mixture. Due to the different mass, the $^{40}{\rm K}$ atoms oscillate with a different frequency. We observed that the amplitude of the oscillations of a cold cloud of a single species is significantly reduced if Rb and K~atoms are transported together (see Fig.~\ref{fig:oscill}). Similar effects in the hydrodynamic regime were observed and studied in greater detail in Ref.~\cite{Ferlaino2003}. For transport times above $1.5~{\rm s}$, no significant oscillations can be observed, and the twelve current ramps taken directly from the simulations need no fine adjustments.

In addition to the transfer mechanism, our method allows us to accelerate and launch ultracold ensembles by quickly switching off all currents in the middle of the transfer. Figure~\ref{fig:parabolar} shows BECs launched with horizontal velocities of up to $80~{\rm mm}/{\rm s}$. This technique is an alternative to launching cold ensembles with optical lattices~\cite{Schmid2006} or detuned laser fields, as usually applied in fountain clocks~\cite{Wynands2005} or inertial sensors~\cite{Yver-Leduc2003,Muller2007}. Also, the observation time of quantum degenerate gases can be doubled if the sample is launched against gravity, provided that the time of flight is the limiting factor.

\begin{figure}
\centering
\includegraphics*[width=\columnwidth]{Klempt08}
\caption{Trajectories of a BEC launched with two different horizontal speeds.
Several absorption images are compiled into a single image and overlaid with two parabolas fitted to the measured positions of the clouds. From these fits, velocities of $40~{\rm mm}/{\rm s}$ and $80~{\rm mm}/{\rm s}$ can be inferred, which agree with the calculated speed of the harmonic trap at the time when all currents are rapidly switched off.} \label{fig:parabolar}
\end{figure}

\section{Summary and Outlook}
\label{sec:summary} We have developed a transport mechanism for quantum degenerate gases in a harmonic trapping potential and have demonstrated the simultaneous transport of quantum degenerate bosonic and fermionic samples over a distance of up to $6~{\rm mm}$. This mechanism may be cascaded to cover even larger distances and thus enables magnetic transport experiments with large quantum degenerate samples in macroscopic trap configurations. This concept adds another powerful method to the toolbox of quantum atom optics and will allow novel designs for interferometric sensors and clocks.

This transport mechanism enriches the possible applications of the popular QUIC trap geometry significantly. It allows for the transport of a quantum degenerate sample to the geometric center of the main coil pair of the QUIC trap. This enables their use for the production of large homogeneous fields and thus the investigation and utilization of Fesh\-bach resonances, e.g., for the creation of heteronuclear dimers or the tuning of the interaction between the two trapped isotopes. Moreover, the transport improves the optical access for additional beams, e.g., for the creation of optical lattices, for optical pumping, or for photoassociation.

In the case of our experiment, the modification has reduced the spread of a magnetic field of $500~{\rm G}$ from $240~{\rm mG}$ to below $16~{\rm mG}$ for ensembles at $1~\mu{\rm K}$. This variation of the magnetic field is no longer due to the inhomogeneity of the field but due to residual current noise. Such well controlled magnetic fields in combination with a dipole trap allow for the precise control of the effective interaction strength and open the pathway to studies of many-particle physics, such as the phase separation between fermions and bosons, and of molecular physics, such as cold molecule production.

\section{Acknowledgments}
We acknowledge support from the Deutsche Forschungsgemeinschaft (SFB~407 and Graduate College \emph{Quantum Interference and Applications}).

\bibliographystyle{prsty}

\section{Introduction}
Object detection is one of the most important applications of computer vision in the field of artificial intelligence (AI). It has been widely adopted in practical applications such as safety monitoring. Over the past decade, visual object detection technologies have advanced significantly as deep learning techniques developed \cite{Ren:2017:FRT:3101720.3101780,DBLP:journals/corr/abs-1804-02767,DBLP:journals/corr/abs-1807-05511}. The prevailing training approach requires centralized storage of training data in order to obtain powerful object detection models. The typical workflow for training an object detection algorithm this way is shown in Figure \ref{fig-trad-workflow}. Under such an approach, each data owner (i.e. user) annotates visual data from cameras and uploads these labelled training data to a central database (e.g., a cloud server) for model training.
Once the model has been trained, it can be used to perform inference tasks. In this way, the users have no control over how the data are used once they are transmitted to the central database. Besides, centralized model training has the following limitations:

\begin{figure}[!t]
	\centering
	\includegraphics[width=1\columnwidth]{traditional.png}
	\caption{A typical workflow for centralized training of an object detector.}
	\label{fig-trad-workflow}
\end{figure}

\begin{enumerate}
\item It is difficult to share data across organizations due to liability concerns. Increasingly strict data sharing regulations (e.g., the General Data Protection Regulation (GDPR) \cite{Voigt:2017:EGD:3152676}) restrict data sharing across organizations;
\item The whole process takes a long time and depends on when the next round of off-line training occurs. When users accumulate new data, they must upload the data to the central training server and wait for the next round of training to happen, which they cannot control, in order to receive an updated model. This also leads to the problem of lagging feedback, which delays the correction of any errors in model inference; and
\item The amount of data required to train a useful object detector tends to be large, and uploading them to a central database incurs significant communication cost.
\end{enumerate}

Due to these limitations, in commercial settings, the following anecdotal conversation between a customer (C) and an AI solution provider (P) can often be heard:
\begin{itemize}
\item [C]: \textit{``We need a solution for detecting flames in a factory floor from surveillance videos to provide early warning of fire.''}
\item [P]: \textit{``No problem. We will need some data from you to train a flame detection model.''}
\item [C]: \textit{``Of course. We have a lot of data, both images and videos with annotations.''}
\item [P]: \textit{``Great! Please upload your datasets to our server.''}
\item [C]: \textit{``Such data contain sensitive information; we can't pass them on to a third party.''}
\item [P]: \textit{``We could send our engineers to work with your dataset on-site, but this will incur additional costs.''}
\item [C]: \textit{``Well, this looks expensive and is beyond our current budget ... ...''}
\end{itemize}

This challenging situation urges the AI research community to seek new methods of training machine learning models. Federated Learning (FL) \cite{DBLP:journals/tist/YangLCT19}, which was first proposed by Google in 2016 \cite{45648}, is a promising approach to resolving this challenge. The main idea is to build machine learning models based on distributed datasets while keeping the data stored locally, hence preventing data leakage and minimizing communication overhead. FL balances performance and efficiency while preventing sensitive data from being disclosed. In essence, FL is a collaborative computing framework: FL models are trained via model aggregation rather than data aggregation. Under the federated learning framework, we only need to train a visual object detection model locally at a data owner's site and upload the model parameters to a central server for aggregation, without the need to upload the actual training dataset.

However, there is currently no easy-to-use tool that enables developers who are not experts in federated learning to conveniently leverage this technology in practical computer vision applications.
In order to bridge this gap, we report \textit{FedVision} -- a machine learning engineering platform to support the easy development of federated learning powered computer vision applications. It currently supports a proprietary federated visual object detection algorithm framework based on YOLOv3 \cite{DBLP:journals/corr/abs-1804-02767} and allows end-to-end joint training of object detection models with locally stored datasets from multiple clients. The user interaction for learning task creation follows a simplified design which does not require users to be familiar with FL technology in order to make use of it.

The platform has been deployed through a collaboration between \textit{WeBank}\footnote{\url{https://www.webank.com/en/}} and \textit{Extreme Vision}\footnote{\url{https://www.extremevision.com.cn/?lang=en_US}} since May 2019. It has been used by three large-scale corporate customers to develop computer vision-based safety monitoring solutions in smart city applications. After four months of usage at the time of submission of this paper, the platform has helped the customers significantly improve their operational efficiency and reduce their costs, while eliminating the need to transmit sensitive data. To the best of our knowledge, this is the first industrial application of federated learning to computer vision-based tasks.

\section{Application Description}
In this section, we describe the system design of FedVision. The workflow for training a visual object detection algorithm under FedVision is shown in Figure \ref{fig-workflow}. It consists of three main steps: 1) crowdsourced image annotation, 2) federated model training, and 3) federated model update.

\begin{figure}[ht]
	\centering
	\includegraphics[width=1\columnwidth]{workflow.png}
	\caption{The FedVision workflow for training a visual object detection algorithm with data from multiple users.}
	\label{fig-workflow}
\end{figure}

\subsection{Crowdsourced Image Annotation}
This module is designed for data owners to easily label their locally stored image data for FL model training. A user can annotate a given image on his local device by using the interface provided by FedVision (Figure \ref{fig-label}) to easily specify each bounding box and the corresponding label information. FedVision adopts the Darknet\footnote{\url{https://pjreddie.com/darknet/}} model format for annotation. Thus, each row represents the information for one bounding box in the following form:
\centerline{\{label $x$ $y$ $w$ $h$\}}
\begin{figure*}[ht]
	\centering
	\includegraphics[width=1\linewidth]{UI1}
	\caption{The image annotation module of FedVision.}
	\label{fig-label}
\end{figure*}
where ``label'' denotes the category of the object, $(x, y)$ represents the center of the bounding box, and $(w, h)$ represent the width and height of the bounding box.

The process only requires the user to be able to visually identify where the objects of interest (e.g., flames) are in a given image, use the mouse to draw the bounding box, and assign it to a category (e.g., fire, smoke, disaster). Users do not need to possess knowledge about federated learning. The annotation file is automatically mapped to the appropriate system directory for model training by FedVision. With this tool, the task of labelling training data can be easily distributed among data owners in a way similar to crowdsourcing \cite{Doan-et-al:2011}, thereby reducing the burden of image annotation on the entity coordinating the learning process. It can also be used to support online learning as new image data arrive sequentially over time from the cameras.
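As an illustration of this annotation format, the following is a minimal sketch for converting a bounding box given in pixel coordinates into a Darknet-style annotation row. It assumes the common Darknet convention that $(x, y, w, h)$ are normalized by the image dimensions; this normalization is our assumption and not a detail specified by the platform.

\begin{verbatim}
# Sketch: write one Darknet-style annotation row "{label x y w h}" from a
# bounding box in pixel coordinates (left, top, right, bottom). The
# normalization by image size follows the usual Darknet convention (assumed).

def to_darknet_row(label, box, img_w, img_h):
    left, top, right, bottom = box
    x = (left + right) / 2.0 / img_w     # normalized box center
    y = (top + bottom) / 2.0 / img_h
    w = (right - left) / float(img_w)    # normalized box size
    h = (bottom - top) / float(img_h)
    return f"{label} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"

# Example: a flame (class 0) annotated in a 1920x1080 frame.
print(to_darknet_row(0, (600, 300, 900, 700), 1920, 1080))
\end{verbatim}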
\subsection{Horizontal Federated Learning (HFL)}
In order to understand the federated learning technologies incorporated into the FedVision platform, we first introduce the concept of horizontal federated learning (HFL). HFL, also known as sample-based federated learning, can be applied in scenarios in which the datasets share the same feature space but differ in sample space (Figure \ref{fig:hfl}). In other words, different parties own datasets which are of the same format but collected from different sources.

\begin{figure}[!b]
	\centering
	\includegraphics[width=1\columnwidth]{HFL.png}
	\caption{The concept of horizontal federated learning (HFL) \cite{DBLP:journals/tist/YangLCT19}.}
	\label{fig:hfl}
\end{figure}

HFL is suitable for the application scenario of FedVision, since it aims to help multiple parties (i.e. data owners) with data from the same feature space (i.e. labelled image data) to jointly train federated object detection models. The word ``horizontal'' comes from the term ``horizontal partition'', which is widely used in the context of the traditional tabular view of a database (i.e. rows of a table are horizontally partitioned into different groups, and each row contains the complete set of data features). We summarize the conditions for HFL for two parties, without loss of generality, as follows:
\begin{equation}
\mathcal{X}_{a}=\mathcal{X}_{b},\ \ \mathcal{Y}_{a}=\mathcal{Y}_{b}, \ \ I_{a}\neq I_{b},\ \ \forall \mathcal{D}_{a}, \mathcal{D}_{b}, a\neq b
\end{equation}
where the data features and labels of the two parties, $(\mathcal{X}_{a}, \mathcal{Y}_{a})$ and $(\mathcal{X}_{b}, \mathcal{Y}_{b})$, are the same, but the data entity identifiers $I_{a}$ and $I_{b}$ can be different. $\mathcal{D}_{a}$ and $\mathcal{D}_{b}$ denote the datasets owned by Party $a$ and Party $b$, respectively.

Under HFL, the data collected and stored by each party are no longer required to be uploaded to a common server to facilitate model training. Instead, the model framework is sent from the federated learning server to each party, which then uses its locally stored data to train this model. After training converges, the encrypted model parameters from each party are sent back to the server. They are then aggregated into a global federated model. This global model is eventually distributed to the parties in the federation to be used for inference in subsequent operations.
\subsection{Federated Model Training}
\begin{figure}[!b]
	\centering
	\includegraphics[width=1\columnwidth]{fl_framework.png}
	\caption{The system architecture of the federated model training module.}
	\label{fig-fl_fr}
\end{figure}

The FedVision platform includes an AI Engine which consists of the federated model training and the federated model update modules. From a system architecture perspective, the federated model training module consists of the following six components (Figure \ref{fig-fl_fr}):

\begin{enumerate}
 \item Configuration: it allows users to configure training information, such as the number of iterations, the number of reconnections, the server URL for uploading model parameters, and other key parameters.
 \item Task Scheduler: it performs global dispatch scheduling, which is used to coordinate communications between the federated learning server and the clients in order to balance the utilization of local computational resources during the federated model training process. The load-balancing approach is based on \cite{Yu-et-al:2017SciRep}, which jointly considers the clients' local model quality and the current load on their local computational resources in an effort to maximize the quality of the resulting federated model.
 \item Task Manager: when multiple model algorithms are being trained concurrently by the clients, this component coordinates the concurrent federated model training processes.
 \item Explorer: this component monitors the resource utilization on the client side (e.g., CPU usage, memory usage, network load, etc.), so as to inform the Task Scheduler's load-balancing decisions.
 \item FL\_SERVER: this is the server for federated learning. It is responsible for model parameter uploading, model aggregation, and model dispatch, which are essential steps involved in federated learning \cite{DBLP:journals/corr/abs-1902-01046}.
 \item FL\_CLIENT: it hosts the Task Manager and Explorer components and performs local model training, which is also an essential step involved in federated learning \cite{DBLP:journals/corr/abs-1902-01046}.
\end{enumerate}

\subsection{Federated Model Update}
After local model training, the model parameters from each user's FL\_CLIENT are transmitted to the FL\_SERVER. The updated model parameters need to be stored, and the number of such model parameter files, and thus the storage size required, increases with the number of training rounds. FedVision adopts Cloud Object Storage (COS) to store practically limitless amounts of data easily and at an affordable cost. The workflow of storing federated object detection model parameters via COS is shown in Figure \ref{fig-COS}.

\begin{figure}[ht]
	\centering
	\includegraphics[width=1\columnwidth]{COS.png}
	\caption{Cloud Object Storage (COS).}
	\label{fig-COS}
\end{figure}

FedVision provides a model aggregation approach to combine local model parameters into a federated object detection model. Details of the federated model training and federated model update components of the FedVision AI Engine are provided in the next section.

\section{Uses of AI Technology}
In this section, we discuss the AI Engine of FedVision.
We explain the federated object detection model which is at the core of the FedVision AI Engine, and the neural network compression technique adopted by FedVision to optimize the efficiency of transmitting federated model parameters.

\subsection{Federated Object Detection Model Training}
From the Regions with Convolutional Neural Network features (R-CNN) model \cite{RCNN2014} to the latest YOLOv3 model \cite{DBLP:journals/corr/abs-1804-02767}, deep learning-based visual object detection approaches have experienced significant improvements in terms of accuracy and efficiency. From the perspective of the model workflow, these approaches can be divided into two major categories: 1) one-stage approaches, and 2) two-stage approaches.

In a typical two-stage approach, the algorithm first generates candidate regions of interest, and then extracts features using a CNN to classify the regions while refining their localization. In a typical one-stage approach, no candidate regions need to be generated. Instead, the algorithm treats the problems of bounding box localization and region classification as regression tasks. In general, two-stage approaches produce more accurate object detection results, while one-stage approaches are more efficient. As the application scenarios for FedVision prioritize efficiency over accuracy, we adopt YOLOv3, which is a one-stage approach, as the basic object detection model and implement a federated learning version of it -- \textit{Federated YOLOv3 (FedYOLOv3)} -- in our platform. In a single end-to-end evaluation, FedYOLOv3 can identify the position of the bounding box as well as the class of the target object in an image.

\begin{figure}[!t]
	\centering
	\includegraphics[width=0.5\columnwidth]{flame.png}
	\caption{Flame detection with YOLOv3.}
	\label{fig-flame}
\end{figure}

The approach of YOLOv3 can be summarized as follows. A given image, such as the image of a flame shown in Figure \ref{fig-flame}, is first divided into an $S\times S$ grid, with each grid cell being responsible for detecting target objects whose centres fall within it (the blue square grid cell in Figure \ref{fig-flame} is used to detect flames). For each grid cell, the algorithm performs the following computations:
\begin{enumerate}
 \item Predicting the positions of $B$ bounding boxes. Each bounding box is denoted by a 4-tuple $\langle x,y,w,h \rangle$, where $(x,y)$ is the coordinate of the centre, and $(w,h)$ represent the width and height of the bounding box, respectively.
 \item Estimating the confidence score for the $B$ predicted bounding boxes. The confidence score consists of two parts: 1) whether a bounding box contains the target object, and 2) how precise the boundary of the box is. The first part can be denoted as $p(obj)$: if the bounding box contains the target object, then $p(obj)=1$; otherwise, $p(obj)=0$. The precision of the bounding box is measured by its intersection-over-union (IOU) value, $IOU$, with respect to the ground truth bounding box.
Thus, the confidence score can be expressed as $\theta=p(obj)\times IOU$.
 \item Computing the class conditional probability, $p(c_{i}|obj)\in[0,1]$, for each of the $C$ classes.
\end{enumerate}

The loss function of YOLOv3 consists of three parts:
\begin{enumerate}
 \item Class prediction loss, which is expressed as
 \begin{equation}
 \sum_{i=0}^{S^{2}}1_{i}^{obj}\sum_{c}(p_{i}(c)-\hat{p}_{i}(c))^{2}
 \end{equation}
 where $p_{i}(c)$ represents the probability of grid $i$ belonging to class $c$, and $\hat{p}_{i}(c)$ denotes the probability that grid $i$ is predicted to belong to class $c$ by the model. $1_{i}^{obj}=1$ if grid $i$ contains the target object; otherwise, $1_{i}^{obj}=0$.
 \item Bounding box coordinate prediction loss, which is expressed as
 \begin{dmath}
 \lambda_{coord}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}1_{ij}^{obj}[(x_{ij}-\hat{x}_{ij})^{2}+(y_{ij}-\hat{y}_{ij})^{2}]
 +\lambda_{coord}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}1_{ij}^{obj}[(w_{ij}-\hat{w}_{ij})^{2}+(h_{ij}-\hat{h}_{ij})^{2}]
 \end{dmath}
 where $\langle x_{ij},y_{ij},w_{ij},h_{ij} \rangle$ denote the ground truth bounding box coordinates, and $\langle \hat{x}_{ij},\hat{y}_{ij},\hat{w}_{ij},\hat{h}_{ij} \rangle$ denote the predicted bounding box coordinates.
 \item Confidence score prediction loss, which is expressed as
 \begin{equation}
 \sum_{i=0}^{S^{2}}\sum_{j=0}^{B}1_{ij}^{obj}(\theta_{i}-\hat{\theta}_{i})^{2}+\lambda_{\neg obj}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}1_{ij}^{\neg obj}(\theta_{i}-\hat{\theta}_{i})^{2}.
 \end{equation}
\end{enumerate}
Here, $\lambda_{coord}$ and $\lambda_{\neg obj}$ are well-studied hyper-parameters of the model. Their default values have been pre-configured into the platform.

Once the users have used the FedVision image annotation tool to label their local training datasets, they can join the FedYOLOv3 model training process as described in the previous section. Once the local model converges, a user $a$ can initiate the transfer of the current local model parameters (in the form of the weight matrix $W_{a}(t)$) to the FL\_SERVER in a secure encrypted manner through his FL\_CLIENT. The HFL module in FedVision operates in rounds. After each round of learning elapses, the FL\_SERVER performs federated averaging \cite{FedAvg2016} to compute an updated global weight matrix for the model, $W(t)$:
\begin{equation}
W(t)=\frac{1}{N}\sum_{a=1}^{N}W_{a}(t).
\end{equation}
The FL\_SERVER then sends the updated $W(t)$ to the $N$ participating FL\_CLIENTs, so that they can enjoy the benefits of an updated object detection model that is, in essence, trained with everyone's dataset. In this way, FedYOLOv3 can rapidly respond to any potential errors in model inference.
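The federated averaging step above can be written down compactly. The following is a minimal sketch with randomly generated stand-in weight matrices in place of real FedYOLOv3 parameters; the encryption and decryption of the uploads used by the platform are omitted.

\begin{verbatim}
# Minimal sketch of federated averaging, W(t) = (1/N) * sum_a W_a(t): each
# client holds a dict of parameter arrays, and the server averages them
# element-wise before broadcasting the result back to all clients.
import numpy as np

def federated_average(client_weights):
    """Element-wise average of per-client parameter dictionaries."""
    return {
        name: sum(w[name] for w in client_weights) / len(client_weights)
        for name in client_weights[0]
    }

# Stand-ins for N = 3 clients, each holding two layers of parameters.
rng = np.random.default_rng(42)
clients = [
    {"conv1": rng.normal(size=(3, 3)), "fc": rng.normal(size=(4,))}
    for _ in range(3)
]
global_weights = federated_average(clients)
print(global_weights["conv1"].shape, global_weights["fc"].shape)
\end{verbatim}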
Once users have labelled their local training datasets with the FedVision image annotation tool, they can join the FedYOLOv3 model training process as described in the previous section. Once the local model converges, a user $a$ can initiate the transfer of the current local model parameters (in the form of the weight matrix $W_{a}(t)$) to FL\_SERVER in a secure, encrypted manner through his FL\_CLIENT. The HFL module in FedVision operates in rounds. At the end of each round of learning, FL\_SERVER performs federated averaging \cite{FedAvg2016} to compute an updated global weight matrix for the model, $W(t)$:\n\begin{equation}\nW(t)=\frac{1}{N}\sum_{a=1}^{N}W_{a}(t).\n\end{equation}\nFL\_SERVER then sends the updated $W(t)$ to the $N$ participating FL\_CLIENTs so that, in essence, each participant enjoys the benefits of an object detection model trained with everyone's dataset. In this way, FedYOLOv3 can rapidly respond to any potential errors in model inference.
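A minimal sketch of this aggregation step is given below. It is illustrative Python under the assumption that each client's parameters arrive, already decrypted, as a list of NumPy arrays (one per model layer); the secure transport between FL\_CLIENT and FL\_SERVER is not shown.

\begin{verbatim}
# Sketch of federated averaging on FL_SERVER:
# W(t) = (1/N) * sum over a of W_a(t).
import numpy as np

def federated_average(client_weights):
    """client_weights[a][l] is layer l of client a's model;
    returns the per-layer average across the N clients."""
    n_clients = len(client_weights)
    n_layers = len(client_weights[0])
    return [sum(client_weights[a][l] for a in range(n_clients))
            / n_clients
            for l in range(n_layers)]
\end{verbatim}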
\subsection{Model Compression}\nUploading model parameters to the FL\_SERVER can be time-consuming due to network bandwidth constraints. Figure \ref{fig-upload} shows the upload time required for federated model parameters of different sizes. For example, when the network bandwidth is about 15MB\/sec, it takes more than 20 seconds to upload a $W_{a}(t)$ of 230MB in size.\n\n\begin{figure}[ht]\n\t\centering\n\t\includegraphics[width=1\columnwidth]{time}\n\t\caption{Time for uploading federated model parameters of different sizes.}\n\t\label{fig-upload}\n\end{figure}\n\nHowever, during federated model training, different model parameters may contribute differently to model performance. Thus, neural network compression can be performed to reduce the size of the network by pruning less useful weight values while preserving model performance \cite{DBLP:journals\/corr\/abs-1710-09282}. In FedVision, we apply network pruning to compress the federated model parameters and speed up transmission \cite{DBLP:conf\/iclr\/2016}.\n\nLet $M^{i,k}$ be the model parameter matrix from the $i$-th user after completing the $k$-th iteration of federated model training, and let $M^{i,k}_{j}$ be the $j$-th layer of $M^{i,k}$. We denote the sum of all parameters in the $j$-th layer as $\sum{M^{i,k}_{j}}$. The contribution of the $j$-th layer to the overall model performance, $v(j)$, can then be measured by the absolute change of this sum between consecutive training iterations:\n\begin{equation}\nv(j)=\left|\sum{M^{i,k}_{j}}-\sum{M^{i,(k-1)}_{j}}\right|\n\end{equation}\nThe larger the value of $v(j)$, the greater the impact of layer $j$ on the model. FL\_CLIENT ranks the $v(j)$ values of all layers in descending order, and selects only the parameters of the first $n$ layers to be uploaded to FL\_SERVER for federated model aggregation. A user can set the desired value of $n$ through FedVision.
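The following sketch shows how an FL\_CLIENT could compute $v(j)$ and select the top-$n$ layers for upload. It is our own illustrative Python under an assumed per-layer data layout, not the FedVision implementation.

\begin{verbatim}
# Sketch of the layer-selection heuristic: rank layers by
# v(j) = |sum(M[k][j]) - sum(M[k-1][j])| and keep the top n.
# Each model snapshot is assumed to be a list of NumPy arrays,
# one per layer.
import numpy as np

def select_layers_for_upload(curr_model, prev_model, n):
    """Returns indices of the n layers with the largest v(j),
    where curr_model and prev_model hold the per-layer weights
    at iterations k and k-1."""
    v = [abs(float(np.sum(curr)) - float(np.sum(prev)))
         for curr, prev in zip(curr_model, prev_model)]
    ranked = sorted(range(len(v)), key=lambda j: v[j],
                    reverse=True)
    return ranked[:n]
\end{verbatim}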
\section{Application Use and Payoff}\nFedVision has been deployed through a collaboration between Extreme Vision and WeBank since May 2019. It has been used by three large-scale corporate customers: 1) China Resources (CRC)\footnote{\url{https:\/\/en.crc.com.cn\/}}, 2) GRG Banking\footnote{\url{http:\/\/www.grgbanking.com\/en\/}}, and 3) State Power Investment Corporation (SPIC)\footnote{\url{http:\/\/eng.spic.com.cn\/}}.\n\nCRC has business interests in consumer products, healthcare, energy services, urban construction and operation, technology and finance. It has more than 420,000 employees. FedVision has been used to help it detect multiple types of safety hazards via cameras in more than 100 factories.\n\nGRG Banking is a globally renowned AI solution provider in the financial self-service industry. It has more than 300,000 pieces of equipment (e.g., ATMs) deployed in over 80 countries. FedVision has been used to help it monitor suspicious transaction behaviours via cameras on the equipment.\n\nSPIC is the world's largest photovoltaic power generation company, with facilities in 43 countries. FedVision has been used to help it monitor the safety of more than 10,000 square meters of photovoltaic panels.\n\nOver the four months of usage, FedVision has achieved the following business improvements:\n\begin{enumerate}\n \item \textit{Efficiency}: in the flame identification system of CRC, at least 1,000 sample images were needed to improve a model. The entire procedure generally required 5 labellers for about 2 weeks, including the time for testing and packaging. Thus, the total time for model optimization could be up to 30 days, and the procedure would be repeated in subsequent operations. With FedVision, the system administrator can finish labelling the images by himself. The time for model optimization is reduced by more than 20 days, saving labour costs.\n \item \textit{Data Privacy}: under FedVision, image data do not need to leave the machine with which they are collected in order to facilitate model training. In the case of GRG Banking, 10,000 photos were required to train its model, each around 1 MB in size. These photos used to require 2 to 3 days to be collected and downloaded to a central location. During this process, the data would pass through 2 to 3 locations and were at risk of being exposed. With the help of FedVision, GRG Banking can leverage the local storage and computational resources at its ATM equipment to train a federated suspicious activity detection model, thereby reducing the risk of data exposure.\n \item \textit{Cost}: in the generator monitoring system of SPIC, a total of 100 channels of surveillance videos are in place in one generator facility. Under the data transmission rate of 512 KB\/sec for synchronous algorithm analysis and optimization, these 100 channels require at least 50 MB\/sec of network bandwidth if image data need to be sent. This is expensive to implement on an industry scale. With FedVision, the network bandwidth required for model update is significantly reduced to less than 1 MB\/sec.\n\end{enumerate}\n\nThe improvements brought about by the FedVision platform have significantly enhanced the operations of the customers and provided them with competitive business advantages.\n\n\section{Application Development and Deployment}\n\begin{figure*}[!t]\n\t\centering\n\t\includegraphics[width=1\linewidth]{UI2}\n\t\caption{Monitoring multiple rounds of federated model training on FedVision.}\n\t\label{fig-UI}\n\end{figure*}\n\nThe FedVision platform was developed using the Python and C programming languages by WeBank, Shenzhen, China. When developing the AI Engine, we evaluated multiple potential approaches capable of fulfilling our design objectives while disallowing the explicit sharing of locally stored camera data. These include secure multi-party computation (MPC), differential privacy (DP), and federated learning (FL). The decision to select FL is based on the following considerations:\n\begin{enumerate}\n\item In recent decades, many privacy-preserving machine learning methods have been proposed. They are mostly based on secure MPC \cite{Yao:1982:PSC:1398511.1382751,Yao:1986:GES:1382439.1382944}, whose goal is to protect data privacy by enabling multiple parties to jointly and securely compute a function without a trusted third party. However, the transmission efficiency between multiple parties is very low. This makes MPC unsuitable for our application, which not only demands high efficiency, but also requires the involvement of a trusted third party (i.e., Extreme Vision Ltd) to fulfil certain business objectives.\n\item Differential privacy \cite{Dwork:2006:DP:2097282.2097284,Wang-et-al:2019} aims to protect sensitive data by adding noise into the dataset in such a way that the overall distribution of the data is preserved. A trade-off needs to be made between the strength of privacy protection and the usefulness of the resulting dataset for inference tasks. However, DP still requires data aggregation for model training. This not only violates the requirements of privacy protection laws such as GDPR, but also incurs high communication overhead as the datasets are artificially enlarged with the added noise.\n\item Federated learning is the best available technology for building the AI Engine of the FedVision platform. It does not require data aggregation for model training. Thus, it not only preserves data privacy, but also significantly reduces communication overhead. In addition, with a wide range of model aggregation algorithms (e.g., federated averaging \cite{FedAvg2016}), FL provides better support for extending the deep learning-based models which are widely used in visual object detection tasks.\n\end{enumerate}\nTherefore, FL has been selected to implement the AI Engine of FedVision.\n\nOnce the user has completed the annotation of his local dataset and joined the construction of a federated object detection model on FedVision, the rest of the process is taken care of by the platform automatically. The user can conveniently monitor the progress of different rounds of federated model training through the user interface shown in Figure \ref{fig-UI}. As the platform is developed for use in China, the language used in the actual production user interface is Chinese. The version shown in Figure \ref{fig-UI} is provided for readers who do not speak Chinese. A video demonstration of the functionalities of the FedVision platform can be accessed online\footnote{\url{https:\/\/youtu.be\/yfiO3NnSqFM}}.\n\n\section{Maintenance}\nOver time, new types of computer vision-based tasks are added to FedVision, and personnel access rights and operating parameters change. Since the platform architecture follows a modular design approach around tasks and personnel to achieve separation of concerns with respect to the AI Engine, such updates can be performed without affecting the AI Engine. Since deployment in May 2019, there has not been any AI maintenance task.\n\n\section{Conclusions and Future Work}\nIn this paper, we report our experience addressing the challenges of building effective visual object detection models with image data owned by different organizations through federated learning. The deployed FedVision platform is an end-to-end machine learning engineering platform for supporting easy development of FL-powered computer vision applications. The platform has been used by three large-scale corporate customers to develop computer vision-based safety hazard warning solutions in smart city applications. Over four months of deployment, it has helped the customers improve their operational efficiency, protect data privacy, and reduce costs significantly. To the best of our knowledge, this is the first industry application of federated learning in computer vision-based tasks. It has the potential to help computer vision-based applications comply with stricter data privacy protection laws such as GDPR \cite{Voigt:2017:EGD:3152676}.\n\nCurrently, the FedVision platform allows users to easily utilize the FedYOLOv3 algorithm. In subsequent work, we will continue to incorporate more advanced FL algorithms (e.g., federated transfer learning \cite{Liu-et-al:2019,Gao-et-al:2019}) into the platform to deal with more complex learning tasks. We are also working on improving the explainability of models through visualization \cite{Wei-et-al:2019} in the platform in order to build trust with the users \cite{Yu-et-al:2018IJCAI}. We will enhance the platform with incentive mechanisms \cite{Cong-et-al:2019} to enable the emergence of a sustainable FL ecosystem over time.\n\n\section{Acknowledgments}\nThis research is supported by the Nanyang Assistant Professorship (NAP); AISG-GC-2019-003; NRF-NRFI05-2019-0002; NTU-WeBank JRI (NWJ-2019-007); and the R\&D group of Extreme Vision Ltd, Shenzhen, China.\n\n\bibliographystyle{aaai}