diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziana" "b/data_all_eng_slimpj/shuffled/split2/finalzziana" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziana" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sec1}\n\\IEEEPARstart{D}{irection} of arrival (DOA) estimation is a fundamental problem in the wireless communications, radar-based applications, and the future integrated sensing and communication (ISAC) systems~\\cite{zhang_overview_2021,xu_rate-splitting_2021}, and has been studied for decades. Typically, the DOA estimation is based on an ideal antenna array model, without considering any imperfect effect, including the mutual coupling effect, inconsistent gains\/phases, nonlinear effect, etc. In this ideal scenario, the DOAs can be estimated by traditional methods such as the monopulse angle estimation method~\\cite{zhu_combined_2018} and the fast Fourier transformation (FFT)-based methods. \n\nMoreover, the super-resolution estimation methods have also been proposed. On one hand, subspace-based methods such as the algorithms of multiple-signal classification (MUSIC)~\\cite{lin_fsf_2006,yan_low-complexity_2013,zhang_direction_2010} and estimation of signal parameters via rotational invariance techniques (ESPRIT)~\\cite{kim_joint_2015,lin_time-frequency_2016,xiaofei_zhang_multi-invariance_2009,fang-ming_han_esprit-like_2005} were proposed. On the other hand, sparse reconsturction-based methods have also been proposed by exploiting the signals' sparsity in the spatial domain. For example, the compressed sensing (CS)-based methods for the DOA estimation have been proposed, such as sparse Bayesian learning-based methods~\\cite{wan_deep_2021,p_chen_off-grid_2019,Dai_So_2021,mao_marginal_2021,wan_robust_2022,wang_alternative_2018}, the mixed $\\ell _{2,0}$ norm-based methods~\\cite{5466152}, etc.\n \n\nHowever, the above works did not consider the effect of imperfect antenna arrays. As a result, the performance of these algorithms is significantly affected in practical DOA estimation systems. In the literature, some works have started to investigate the DOA estimation schemes under imperfect antenna array. For example, for an array with mutual coupling, gain or phase errors, and sensor location errors, a method estimating both DOA and model errors is proposed in~\\cite{lu_direction--arrival_2018}. A fourth-order parallel factor decomposition model using imperfect waveforms is given in~\\cite{ruan_parafac_2019} to estimate the DOA. Then, ref.~\\cite{liu_2-d_2021} proposes a two-dimensional (2D) DOA estimation method for an imperfect L-shaped array using active calibration. However, each of the above works only considered a subset of the imperfect array effects, because optimization over the complicated array model with all imperfect effects considered is challenging. This motivates us to use the deep learning (DL) technique for DOA estimation with all imperfect array effects taken into account because of its efficiency for training over difficult networks. \n\n\nIn the literature, several works have been done for DL-based DOA estimation~\\cite{9173575,8400482,7497454,9178434}, and have the advantages of low computational complexity and high accuracy. 
The existing DL-based methods can be categorized by the types of input data and output:\n\\begin{enumerate}\n\t\\item The input data is the raw sampled data from the array;\n\t\\item The input data is the covariance matrix of the received signal;\n\t\\item The output is the directly estimated DOAs;\n\t\\item The output is the spatial spectrum, and the DOAs are estimated from the spectrum.\n\\end{enumerate}\nFor example, in~\\cite{yuan_unsupervised_2021,wu_deep_2019}, a DL-based method is proposed for the DOA estimation, where the input is the covariance matrix and the output is the spectrum. A sparse loss function is used to train the network. Ref.~\\cite{papageorgiou_deep_2021} gives a CNN-based method for the DOA estimation using the estimated covariance matrix, and estimates the DOA by discretizing the spatial domain into grids. In~\\cite{akter_rfdoa-net_2021}, a synthetic dataset for angle classification is constructed in the presence of additive noise, propagation attenuation and delay. Then, a CNN-based method is proposed for the DOA estimation. An angle separation learning method is proposed in~\\cite{wu_deep_2019} for the DOA estimation of coherent signals, where the covariance matrix is formulated as the input features of a deep neural network (DNN). In~\\cite{8854868}, a deep convolution network (DCN) is given for the DOA estimation with the covariance matrix treated as the under-sampled linear measurements of the spatial spectrum, where the signal sparsity in the spatial domain is also exploited to improve the estimation performance. A MUSIC-based DOA estimation method is proposed in~\\cite{9328861} by using small antenna arrays, where DL is used to reconstruct the signals of a virtual large antenna array. Ref.~\\cite{8400482} gives an offline and online DNN method for the DOA estimation in the massive multiple-input multiple-output (MIMO) system, where the DOA is the output of the network and can be estimated directly from the received signal. For the DOA estimation with low signal-to-noise ratio (SNR), a convolutional neural network (CNN) is proposed in~\\cite{9457195}, where the covariance matrix is the network's input, showing enhanced robustness in the presence of noise. Moreover, multiple deep CNNs are designed in~\\cite{9034077}, where each CNN learns the MUSIC spectrum of the received signal, so that a nonlinear relationship between the received sensor data and the angular spectrum is formulated in the network. For the imperfect array, ref.~\\cite{8485631} introduces a DNN framework to estimate the DOA using a multitask autoencoder and a series of parallel multilayer classifiers. \n \nWe can find that the existing DL-based DOA estimation methods mainly use the CNN as the typical network structure~\\cite{chakrabarty_multi-speaker_2019}, with statistical inputs such as the covariance matrix. Since the estimation performance is limited by the information contained in such statistics, it cannot exceed that of methods operating on the raw sampled data. Furthermore, when the network output is the estimated DOAs themselves, the spatial spectrum cannot be obtained, so the network structure must be adjusted for different numbers of targets, which is not suitable for practical applications. In summary, the existing DL-based methods have the following limitations:\n\\begin{itemize}\n\t\\item Since the classic DOA estimation algorithms such as MUSIC are based only on the covariance matrices of the received signals, most existing ML-based schemes use these covariance matrices as the input data to train the network. 
However, the covariance matrices are not sufficient for the optimal estimator design in general. As a result, the input data used in these works does not preserve all the useful information. \n\t\\item In addition, the output of the existing ML-based DOA estimation schemes is usually the spatial spectrum of the targets. In this case, the trained network depends on the number of targets, i.e., different networks should be trained for different numbers of targets. This is of high complexity in practice. \n\t\\item Furthermore, when the spatial spectrum is used, we must discretize the DOAs into grids, and the possible DOAs must lie exactly on the discretized grids. More grids must be used as the output for high accuracy, making the network more complex and harder to train.\n\\end{itemize}\n\n\nIn this paper, we propose a new type of DNN based on the CNN, i.e., the super-resolution DOA network (SDOAnet), to overcome the above-mentioned difficulties in the DOA estimation. The proposed SDOAnet is evaluated on an imperfect array under realistic conditions. Compared with the existing methods, the proposed SDOAnet can achieve better estimation performance with lower complexity. The contributions of this paper are given as follows:\n\\begin{enumerate}\n\t\\item \\textbf{A system model with imperfect array effects for the DOA estimation is formulated.} The imperfect effects include the position perturbations, the inconsistent gains, the inconsistent phases, the mutual coupling effect, the nonlinear effect, etc. \n\tAs a result, our results are directly applicable in a practical system.\n\t\\item \\textbf{A new DL architecture is proposed for the imperfect array.} Different from existing methods, the input of the SDOAnet is the raw sampled signals, and the output is a vector, which can be used to estimate the spatial spectrum easily. Convolution layers are used to extract the signals' features and to avoid the complexity caused by high-dimensional signals. Since the output of the SDOAnet is a vector for the spectrum estimation, the problem of discretizing the spatial domain is avoided. Compared with the existing CNN-based methods, the proposed SDOAnet can be trained easily and achieves better estimation performance. \n\t\\item \\textbf{A loss function to train the SDOAnet is proposed,} where Gaussian functions are used to approximate the spatial spectrum. Inspired by the atomic norm minimization (ANM)-based DOA estimation method, the output of the SDOAnet is used to formulate the spatial spectrum. Then, a loss function measuring the error between the reference spectrum and the estimated one is used to train the network. \n\\end{enumerate}\n\nThe remainder of this paper is organized as follows. The system model for practical DOA estimation is formulated in Section~\\ref{sec2}. A review of super-resolution DOA estimation methods is given in Section~\\ref{sec3}. Then, the proposed SDOAnet for the DOA estimation is shown in Section~\\ref{sec4}. Simulation results are given in Section~\\ref{sec5}, and finally, Section~\\ref{sec6} concludes the paper. \n\n\\textit{Notations:} Upper-case and lower-case boldface letters denote matrices and column vectors, respectively. The matrix transpose and the Hermitian transpose are denoted as $(\\cdot)^\\text{T}$ and $(\\cdot)^\\text{H}$, respectively. $\\mathcal{R}\\{\\cdot\\}$ and $\\mathcal{I}\\{\\cdot\\}$ denote the real and imaginary parts of a complex value, respectively. $\\text{Tr}\\{\\cdot\\}$ is the trace of a matrix. 
$\\|\\cdot\\|_2$ is the $\\ell_2$ norm. \n\n\n\\section{The System Model for Practical DOA Estimation}\\label{sec2}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3in]{.\/Figures\/system.pdf}\n\t\\caption{The system model for the DOA estimation in a practical array.}\n\t\\label{sys}\n\\end{figure} \n\n\nIn this paper, we consider the DOA estimation problem in a practical system, and propose a DL-based estimation framework. As shown in Fig.~\\ref{sys}, we consider $K$ far-field signals, and the $k$-th ($k=0,1,\\dots,K-1$) signal is expressed as $s_{k}(t)\\in\\mathbb{C}$ with the DOA being $\\theta_k\\in\\left(-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right]$. A linear array system with $N$ antennas is used to receive the signals and estimate the DOAs, where the wavelength is denoted as $\\lambda$. Considering an additive noise $w_n(t)\\in\\mathbb{C}$, the received signal at the $n$-th ($n=0,1,\\dots, N-1$) antenna can be expressed as\n\\begin{align}\\label{eq1}\n\tr_n(t) = g\\left(x_n(t)+\\sum_{n'\\neq n}B_{n,n'}x_{n'}(t)\\right)+w_n(t),\n\\end{align}\nwhere the noise-free signal is\n\\begin{align}\n\tx_n(t) = \\sum^{K-1}_{k=0}s_k(t)A_ne^{j\\phi_n}e^{j2\\pi\\frac{d_n}{\\lambda}\\sin\\theta_k}.\n\\end{align}\nHere, taking the $0$-th antenna as the reference one, i.e., $d_0=0$, the position of the $n$-th antenna is denoted as $d_n$, and for a uniform linear array (ULA), the antenna positions are $d_n=n\\lambda\/2$. In the received signal (\\ref{eq1}), the following imperfect effects are considered:\n\\begin{enumerate}\n\t\\item \\emph{The mutual coupling effect:} The antennas cannot be ideally isolated, which introduces the mutual coupling effect among the received signals. The mutual coupling coefficient between the $n$-th and $n'$-th ($n\\neq n'$) antennas is $B_{n,n'}\\in\\mathbb{C}$ with $|B_{n,n'}|<1$ in (\\ref{eq1});\n\t\\item \\emph{The position perturbations:} The antennas cannot be exactly at the desired positions, which causes phase errors among the received signals in the steering vector;\n\t\\item \\emph{The inconsistent gains:} The radio frequency (RF) channels usually cannot have exactly the same amplifiers, which causes amplitude differences among the received signals. The channel gain of the $n$-th antenna is denoted as $A_n>0$;\n\t\\item \\emph{The inconsistent phases:} The differences among the RF channels also cause delay and phase errors of the received signals, and the channel phase of the $n$-th antenna is denoted as $\\phi_n$;\n\t\\item \\emph{The nonlinear effect:} The RF channels and the analog-to-digital converters (ADCs) introduce nonlinear distortions, which degrade the DOA estimation performance. We use a nonlinear function $g(\\cdot)$ to represent the nonlinear operation in the receiving channels.\n\\end{enumerate}\n \nHence, we collect the received signals into a vector \n\\begin{align}\n\\boldsymbol{r}\\triangleq \\begin{bmatrix}\n\tr_0(t), r_1(t),\\dots, r_{N-1}(t)\n\\end{bmatrix}^{\\text{T}}.\t\n\\end{align}\nThe DOA estimation problem can be formulated as a parameter estimation problem with the received signal $\\boldsymbol{r}$. Most existing works consider the scenario with a perfect array, where the function $g(\\cdot)$ is linear, the mutual coupling coefficients $B_{n,n'}$ are $0$, the channel gains and phases are the same ($A_n=1$ and $\\phi_n=0$), and the positions $d_n$ of the antennas are known. 
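\n\nFor concreteness, the received signal model (\\ref{eq1}) can be simulated directly. The following Python sketch is our illustrative implementation; applying $g(\\cdot)$ separately to the real and imaginary parts, as well as all numerical values, are assumptions made for illustration only:\n\\begin{verbatim}\nimport numpy as np\n\ndef snapshot(s, theta, d, A, phi, B, g=np.tanh, sigma_w=0.01):\n    # s: (K,) signals; theta: (K,) DOAs in radians; d: (N,) antenna\n    # positions in wavelengths; A, phi: (N,) gains and phases;\n    # B: (N,N) mutual coupling matrix with unit diagonal\n    x = (A*np.exp(1j*phi))[:, None] \\\n        * np.exp(2j*np.pi*np.outer(d, np.sin(theta)))\n    x = x @ s                       # noise-free signals x_n(t)\n    v = B @ x                       # mutual coupling\n    w = sigma_w*(np.random.randn(len(d))\n                 + 1j*np.random.randn(len(d)))\/np.sqrt(2)\n    return g(v.real) + 1j*g(v.imag) + w\n\n# example: N=16 ULA with d_n = n\/2 wavelengths and K=3 unit signals\nr = snapshot(np.ones(3), np.deg2rad([-30, 10, 20]),\n             np.arange(16)\/2, np.ones(16), np.zeros(16), np.eye(16))\n\\end{verbatim}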
\n\nHowever, when an imperfect array is considered, the imperfect elements include the mutual coupling effect, the nonlinear effect, the inconsistent phases, the inconsistent gains, the position perturbations, etc. In practical systems, most existing super-resolution methods cannot outperform the traditional methods, since the super-resolution methods require perfect systems and high SNR. In this paper, we will focus on a robust super-resolution method for the DOA estimation with imperfect system effects. \n\n\\section{The Review of Super-Resolution DOA Estimation Methods}\\label{sec3}\n\\subsection{The Atomic Norm-Based Estimation Methods}\nIn recent years, the atomic norm-based methods have been proposed for the line-spectra estimation and have achieved better performance by exploiting the sparsity of the spectrum in the frequency domain. Additionally, the DOA estimation problem can be easily described as a line-spectral estimation problem, so atomic norm-based methods have also been proposed for the DOA estimation.\n\nUsually, in the atomic norm-based methods, an ideal ULA is assumed, and the received signal based on (\\ref{eq1}) at the $n$-th antenna can be expressed as\n\\begin{align}\n\tr_n=\\sum^{K-1}_{k=0}s_ke^{j2\\pi\\frac{n d}{\\lambda}\\sin\\theta_k} + w_n,\n\\end{align}\nwhere the distance between adjacent antennas is $d=\\frac{\\lambda}{2}$. Then, with the definition of a steering vector\n\\begin{align}\n\t\\boldsymbol{a}(\\theta)\\triangleq \\begin{bmatrix}\n\t1, e^{j\\frac{2\\pi d}{\\lambda}\\sin\\theta},\\dots, e^{j\\frac{2\\pi (N-1)d}{\\lambda}\\sin\\theta}\n\\end{bmatrix}^{\\text{T}},\n\\end{align}\nwe collect all the received signals into a vector, and we have\n\\begin{align}\n\t\\boldsymbol{r}&\\triangleq \\begin{bmatrix}\n\t\tr_0,r_1,\\dots,r_{N-1}\n\t\\end{bmatrix}^{\\text{T}}\\notag \\\\\n\t& = \\boldsymbol{As}+\\boldsymbol{w},\n\\end{align} \nwhere we define the steering matrix as\n\\begin{align}\n\t\\boldsymbol{A}&\\triangleq \\begin{bmatrix}\n\t\t\\boldsymbol{a}(\\theta_0),\\boldsymbol{a}(\\theta_1),\\dots, \\boldsymbol{a}(\\theta_{K-1})\n\t\\end{bmatrix},\n\\end{align}\nthe signal vector is defined as\n\\begin{align}\n\t\\boldsymbol{s}&\\triangleq \\begin{bmatrix}\n\t\ts_0,s_1,\\dots,s_{K-1}\n\t\\end{bmatrix}^{\\text{T}},\n\\end{align}\nand the noise vector is\n\\begin{align}\n\t\\boldsymbol{w}&\\triangleq \\begin{bmatrix}\n\t\tw_0,w_1,\\dots,w_{N-1}\n\t\\end{bmatrix}^{\\text{T}}.\n\\end{align}\n\n\nIn the ANM-based DOA estimation method, an atomic norm is defined as \n\\begin{align}\n\t\\|\\boldsymbol{x}\\|_{\\mathcal{A}}\\triangleq \\inf \\Big\\{ & \\sum_{n'} \\alpha_{n'}:\\boldsymbol{x}=\\sum_{n'} \\alpha_{n'} e^{j\\phi_{n'}} \\boldsymbol{a}(\\theta_{n'}), \\notag\\\\\n\t& \\phi_{n'}\\in[0,2\\pi), \\alpha_{n'}\\geq 0\n\t\\Big\\}, \n\\end{align}\nwhich describes a sparse representation of $\\boldsymbol{x}$ with the sparse coefficients being $\\alpha_{n'}$ ($n'=0,1,\\dots, N'-1$). Then, with the received signal $\\boldsymbol{r}$, we denoise the signal with a sparse reconstruction signal $\\boldsymbol{x}$, which can be obtained from the ANM problem \n\\begin{align}\n\t\\min_{\\boldsymbol{x}} \\frac{1}{2}\\|\\boldsymbol{r}-\\boldsymbol{x}\\|^2_2+\\beta \\|\\boldsymbol{x}\\|_{\\mathcal{A}},\n\\end{align}\nwhere the parameter $\\beta$ is used to control the trade-off between the sparsity and the reconstruction accuracy. This ANM problem can be solved by introducing a semi-definite programming (SDP) method, which is\n\\begin{align}\\label{eq11}\n\t\\min_{\\boldsymbol{B},\\boldsymbol{h}}\\quad & \\|\\boldsymbol{r}-\\boldsymbol{h}\\|^2_2\\notag\\\\\n\t\\text{s.t.}\\quad & \\begin{bmatrix}\n\t\t\\boldsymbol{B}& \\boldsymbol{h}\\\\\n\t\t\\boldsymbol{h}^{\\text{H}}& 1\n\t\\end{bmatrix}\\succeq 0\\\\\n\t& \\boldsymbol{B}\\text{ is a Hermitian matrix}\\notag\\\\\n\t& \\text{Tr}\\{\\boldsymbol{B}\\} = \\beta^2\\notag\\\\\n\t& \\sum_n \\boldsymbol{B}_{n,n+n'} =0,\\text{ for } n'\\neq 0 \\notag \\\\ &\\qquad \\qquad\\qquad \\quad \\text{ and }n'=1-N,\\dots,N-1.\\notag\n\\end{align}\nBy solving the SDP problem (\\ref{eq11}), the sparse reconstruction signal $\\boldsymbol{h}$ can be obtained, and the DOAs of the received signal can be estimated by finding the peak values of the following polynomial \n\\begin{align}\n\tf(\\theta) = |\\boldsymbol{a}^{\\text{H}}(\\theta)\\boldsymbol{h}|^2.\n\\end{align}
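\nThe SDP~(\\ref{eq11}) can be prototyped with a generic convex solver. The following sketch is ours and only illustrative, assuming the \\texttt{cvxpy} package with an SDP-capable solver; the helper name \\texttt{anm\\_denoise} is hypothetical:\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\ndef anm_denoise(r, beta):\n    # r: (N,) complex snapshot; beta: trade-off parameter of the ANM\n    N = len(r)\n    B = cp.Variable((N, N), hermitian=True)\n    h = cp.Variable(N, complex=True)\n    M = cp.bmat([[B, cp.reshape(h, (N, 1))],\n                 [cp.reshape(cp.conj(h), (1, N)), np.ones((1, 1))]])\n    cons = [M >> 0, cp.trace(B) == beta**2]\n    # vanishing off-diagonal sums; n' < 0 follows from Hermitianity\n    for m in range(1, N):\n        cons.append(sum(B[i, i+m] for i in range(N-m)) == 0)\n    cp.Problem(cp.Minimize(cp.sum_squares(r - h)), cons).solve()\n    return h.value\n\\end{verbatim}\n\n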
The ANM-based DOA estimation method assumes an ideal array with perfect conditions, so for a practical array, the ANM-based method must be extended. In~\\cite{p_chen_new_2020,wang_gridless_2019,govinda_raj_single_2019-1,gong_doa_2022}, the atomic norm-based methods are extended for the practical array. We can find that much more complex optimization problems are formulated, from which a vector $\\boldsymbol{h}'$ analogous to $\\boldsymbol{h}$ can be obtained. Then, the DOAs are estimated from the peak values of the following polynomial \n\\begin{align}\n\tf'(\\theta) = |\\boldsymbol{a}^{\\text{H}}(\\theta)\\boldsymbol{h}'|^2.\n\\end{align} \n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=7.2in]{.\/Figures\/sys.pdf}\n\t\\caption{The network architecture of the proposed SDOAnet.}\n\t\\label{sysnet}\n\\end{figure*} \n\n\\subsection{The MUSIC-Based Estimation Methods} \nAmong the super-resolution estimation methods, the MUSIC-based methods can achieve better performance by using the noise and signal subspaces. For the single-snapshot spectral estimation, ref.~\\cite{liao_music_2016} proposes a MUSIC-based method. A Hankel matrix is obtained from the received signal $\\boldsymbol{r}$ as\n\\begin{align}\n\t\\boldsymbol{R}=\\text{Hankel}(\\boldsymbol{r})=\\begin{bmatrix}\n\t\tr_0 & r_1 & \\dots & r_{N-L}\\\\\n\t\tr_1 & r_2 & \\dots & r_{N-L+1}\\\\\n\t\t\\vdots & \\vdots & \\ddots & \\vdots\\\\\n\t\tr_{L-1} & r_{L} & \\dots & r_{N-1}\n\t\\end{bmatrix},\n\\end{align}\nwhere the received signal $\\boldsymbol{r}$ is reshaped as a matrix $\\boldsymbol{R}\\in\\mathbb{C}^{L\\times (N-L+1)}$. Then, a singular value decomposition (SVD) is used as\n\\begin{align}\n\t\\text{SVD}\\{\\boldsymbol{R}\\}=[\\boldsymbol{U}_1, \\boldsymbol{U}_2] \\boldsymbol{\\Lambda}[\\boldsymbol{V}_1,\\boldsymbol{V}_2]^{\\text{H}},\n\\end{align}\nwhere $\\boldsymbol{U}_2$ corresponds to the small singular values, and $\\boldsymbol{\\Lambda}$ is a diagonal matrix with the entries from the singular values. Finally, the spatial spectrum can be estimated as\n\\begin{align}\n\tg(\\theta) = \\frac{1}{\\|\\boldsymbol{a}^{\\text{H}}(\\theta)\\boldsymbol{U}_2\\|^2_2}.\n\\end{align} 
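\nA compact numerical sketch of this single-snapshot MUSIC procedure (ours, for illustration; note that the steering vectors here have length $L$, and $d=\\lambda\/2$ is assumed) is as follows:\n\\begin{verbatim}\nimport numpy as np\n\ndef music_single_snapshot(r, K, L, angles):\n    # Hankel matrix of size L x (N-L+1) built from the snapshot r\n    N = len(r)\n    R = np.array([r[i:i+N-L+1] for i in range(L)])\n    U, _, _ = np.linalg.svd(R)\n    U2 = U[:, K:]   # noise subspace (small singular values)\n    spec = []\n    for th in angles:\n        a = np.exp(1j*np.pi*np.arange(L)*np.sin(th))\n        spec.append(1.0\/np.linalg.norm(U2.conj().T @ a)**2)\n    return np.array(spec)\n\\end{verbatim}\nThe DOAs are then read off from the $K$ largest peaks of the returned spectrum.\n\n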
\\section{The Proposed DOA Estimation Method}\\label{sec4}\n\n\nFrom the above review of the existing DOA estimation methods, we can find that the DOAs are estimated by searching for the peak values of the spatial spectrum. In this section, we will propose a new DL-based super-resolution method for the DOA estimation, named the super-resolution DOA network (SDOAnet); its raw-data input contains more information than the covariance matrix, and it can be trained faster than the existing covariance matrix-based methods. \n\n\\subsection{The SDOAnet Architecture}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=2in]{.\/Figures\/relu.pdf}\n\t\\caption{The ReLU function.}\n\t\\label{relu}\n\\end{figure} \n\nThe SDOAnet architecture is shown in Fig.~\\ref{sysnet}. First, the received signal in (\\ref{eq1}) is rewritten as a vector with real and imaginary parts\n\\begin{align}\n\t\\boldsymbol{y}(t)\\triangleq [\\mathcal{R}^{\\text{T}}\\{\\boldsymbol{r}(t)\\},\\mathcal{I}^{\\text{T}}\\{\\boldsymbol{r}(t)\\}]^{\\text{T}}\\in\\mathbb{R}^{2N\\times 1},\n\\end{align}\nwhere we have the received signal vector \n\\begin{align}\n\t\\boldsymbol{r}(t)=[r_0(t),r_1(t),\\dots,r_{N-1}(t)]^\\text{T}\\in\\mathbb{C}^{N\\times 1}.\n\\end{align}\nWith the batch size being $M_B$, the input signal is\n\\begin{align}\n\t\\boldsymbol{Y}\\triangleq [\\boldsymbol{y}(0), \\boldsymbol{y}(1),\\dots,\\boldsymbol{y}(M_B-1)]^{\\text{T}},\n\\end{align} \nand the size is $M_B\\times 2N$. \n\nThen, since the SDOAnet is based on the convolution network, we use a fully connected (FC) layer as the input layer with the output dimension being $M_FM_I$, where $M_F$ denotes the number of filters in the convolution layers and $M_I$ denotes the inner dimension extension. After the input layer, the dimension of the signal is $M_B\\times M_FM_I$, and we reshape the signal as a tensor $f_1(\\boldsymbol{Y})$ with the dimension being $M_B \\times M_F\\times M_I$, where $f_1(\\cdot)$ denotes the input layer function.\n\nThe tensor is passed into the convolution layers, and the number of convolution layers is $M_{C}$. In each convolution layer, a one-dimensional (1D) convolution operation is realized with the kernel size being $M_F\\times M_K$, and a padding operation is used to keep the size of the convolution output the same as that of the input. The output of the convolution operation has the dimension $M_B\\times M_F \\times M_I$. Then, batch normalization is applied to the convolution output, and the normalization output is denoted as $f_3(f_2(f_1(\\boldsymbol{Y})))$. The function $f_2(\\cdot)$ denotes the convolution operation and $f_3(\\cdot)$ is the batch normalization operation\n\\begin{align}\nf_3(x)=\\frac{x-\\mathcal{E}\\{x\\}}{\\sqrt{\\text{Var}\\{x\\}+\\epsilon}},\n\\end{align}\nwhere $\\mathcal{E}\\{x\\}$ and $\\text{Var}\\{x\\}$ are the mean and variance of $x$, respectively. $\\epsilon$ is a value added to the denominator for numerical stability and can be set as $\\epsilon=10^{-5}$. In each convolution layer, a ReLU function $f_4(\\cdot)$, as shown in Fig.~\\ref{relu}, is applied to the output of the batch normalization, and is defined as \n\\begin{align}\n\tf_4(x)\\triangleq \\max(0,x).\n\\end{align}\nAfter the convolution layers, an FC layer is used as an output layer with the input and output sizes being $M_B\\times M_I M_F$ and $M_B\\times 2N$, respectively. The operation in the output layer is denoted as $f_5(\\cdot)$. 
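\n\nThis architecture can be sketched, for instance, in PyTorch, on which our simulations are based. The following code is our illustrative reading of Fig.~\\ref{sysnet}; the value of the inner dimension $M_I$ is an assumption, since it is a free hyper-parameter:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass SDOAnetSketch(nn.Module):\n    def __init__(self, N=16, M_F=2, M_I=64, M_C=6, M_K=3):\n        super().__init__()\n        self.M_F, self.M_I = M_F, M_I\n        self.fc_in = nn.Linear(2*N, M_F*M_I)       # input layer f1\n        self.convs = nn.ModuleList([nn.Sequential(\n            nn.Conv1d(M_F, M_F, M_K, padding=M_K\/\/2),  # f2\n            nn.BatchNorm1d(M_F),                       # f3\n            nn.ReLU())                                 # f4\n            for _ in range(M_C)])\n        self.fc_out = nn.Linear(M_F*M_I, 2*N)      # output layer f5\n\n    def forward(self, Y):  # Y: (M_B, 2N)\n        h = self.fc_in(Y).view(-1, self.M_F, self.M_I)\n        for conv in self.convs:\n            h = conv(h)\n        return self.fc_out(h.flatten(1))  # G: (M_B, 2N)\n\\end{verbatim}\nThe hyper-parameter values $M_F=2$, $M_C=6$ and $M_K=3$ correspond to Table~\\ref{table1}.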
\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.2in]{.\/Figures\/funcflow.pdf}\n\t\\caption{The flowchart of the function operations.}\n\t\\label{funcflow}\n\\end{figure} \n\n\n\nFinally, as shown in Fig.~\\ref{funcflow}, we can obtain the output of the SDOAnet as\n\\begin{align}\n\\boldsymbol{G}=f_5\\Big(\\big(f_4\\circ f_3\\circ f_2\\big)^{M_C}\\big(f_1(\\boldsymbol{Y})\\big)\\Big),\n\\end{align}\nwhere $(f_4\\circ f_3\\circ f_2)^{M_C}$ denotes the $M_C$-fold repetition of the convolution, batch normalization and ReLU operations, and we have \n\\begin{align}\n\t\\boldsymbol{G}\\triangleq \\begin{bmatrix}\n\\boldsymbol{g}(0),\\boldsymbol{g}(1),\\dots,\\boldsymbol{g}(M_B-1)\n\\end{bmatrix}^{\\text{T}}\\in\\mathbb{R}^{M_B\\times 2N}.\n\\end{align}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.5in]{.\/Figures\/getsp.pdf}\n\t\\caption{The flowchart to obtain the spatial spectrum from the network output.}\n\t\\label{getsp}\n\\end{figure} \n\nAs shown in Fig.~\\ref{getsp}, the corresponding complex vector can be obtained from the network output $\\boldsymbol{g}(m)$ ($m=0,1,\\dots,M_B-1$) as\n\\begin{align}\n\\boldsymbol{z}(m)\\triangleq \\boldsymbol{g}_{0:N-1}(m) +j \\boldsymbol{g}_{N:2N-1}(m),\n\\end{align}\nwhere $ \\boldsymbol{g}_{0:N-1}(m) $ denotes a sub-vector of $\\boldsymbol{g}(m)$ with the indices from $0$ to $N-1$, and $ \\boldsymbol{g}_{N:2N-1}(m) $ denotes that from $N$ to $2N-1$. With the output $\\boldsymbol{z}$ of the SDOAnet, the spatial spectrum can be estimated by \n\\begin{align}\\label{eq20}\nf_{\\text{sp}}(\\zeta) = |\\boldsymbol{a}^{\\text{H}}(\\zeta)\\boldsymbol{z}|^2,\n\\end{align}\nwhere $\\zeta$ is chosen based on the detection area, such as from $-\\pi\/2$ to $\\pi\/2$.\n\nIn the proposed SDOAnet, the network is different from existing methods. The novelty is that we use the raw sampled data as the network input, and the convolution layers are used to obtain the features of the raw data. The raw data contains all the information of the received signals. The network's output is a vector, which is neither the DOA estimates nor the spatial spectrum used by the existing methods. The output size is the same as the number of antennas in the array. Therefore, the output dimension of the SDOAnet is lower than that of networks using the spectrum as the output. Hence, the training time can be reduced significantly. Additionally, the DOAs can be obtained by finding the peak values of $f_{\\text{sp}}(\\zeta)$ in (\\ref{eq20}), which avoids fixing the number of targets in the network structure, a problem of the networks that use the DOA values as the output. \n\n\n\\subsection{The Training Approach}\nTo train the SDOAnet, the spatial spectrum $f_{\\text{sp}}(\\zeta)$ is obtained from (\\ref{eq20}) and the reference spectrum is given as\n\\begin{align}\nf_{\\text{ref}}(\\zeta) = \\sum_{k=0}^{K-1} A_k e^{-\\frac{(\\zeta-\\theta_k)^2}{\\sigma_{\\text{G}}^2}},\n\\end{align}\nwhere we use Gaussian functions to approximate the spatial spectrum. $A_k$ denotes the spectrum value, and $\\sigma_{\\text{G}}$ is the standard deviation of the Gaussian function. In this paper, we set the value of $\\sigma_{\\text{G}}$ as \n\\begin{align}\n\t\\sigma_{\\text{G}}=\\bar{\\sigma}_{\\text{G}}\/N.\n\\end{align}\nAn example of the reference spatial spectrum approximated by the Gaussian functions is shown in Fig.~\\ref{refsp}, where we use $16$ antennas, $\\bar{\\sigma}_{\\text{G}}=100$, and the ground-truth DOAs are $\\ang{-30}$, $\\ang{10}$, and $\\ang{20}$. The $3$ dB-spectrum width is about $\\ang{10.4}$. 
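\n\nThe computation of the estimated spectrum (\\ref{eq20}) and of the Gaussian reference spectrum can be illustrated with a short NumPy sketch (ours; angles are handled in degrees, which is an assumption consistent with the width quoted above, since $\\sigma_{\\text{G}}=100\/16=6.25$ then gives a half-power width $2\\sigma_{\\text{G}}\\sqrt{\\ln 2}\\approx\\ang{10.4}$):\n\\begin{verbatim}\nimport numpy as np\n\ndef spatial_spectrum(z, zeta_deg):\n    # f_sp(zeta) = |a^H(zeta) z|^2 for a ULA with d = lambda\/2\n    N = len(z)\n    A = np.exp(1j*np.pi*np.outer(np.arange(N),\n                                 np.sin(np.deg2rad(zeta_deg))))\n    return np.abs(A.conj().T @ z)**2\n\ndef reference_spectrum(zeta_deg, doas_deg, amps, sigma_g):\n    # Gaussian approximation of the spatial spectrum\n    return sum(a*np.exp(-(zeta_deg-th)**2\/sigma_g**2)\n               for a, th in zip(amps, doas_deg))\n\nzeta = np.linspace(-90, 90, 1801)\nf_ref = reference_spectrum(zeta, [-30, 10, 20], [1, 1, 1], 100\/16)\n\\end{verbatim}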
\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/refsp.pdf}\n\t\\caption{The reference spatial spectrum approximated by the Gaussian functions.}\n\t\\label{refsp}\n\\end{figure} \n\n\nWith the reference spectrum, the loss function is defined as\n\\begin{align} \\label{eq22}\nf_{\\text{loss}}(\\boldsymbol{\\zeta}) =\\frac{1}{\\Omega} \\left\\|f_{\\text{ref}}(\\boldsymbol{\\zeta})-f_{\\text{sp}}(\\boldsymbol{\\zeta})\\right\\|^2_2,\n\\end{align}\nwhere $f_{\\text{ref}}(\\boldsymbol{\\zeta})\\in\\mathbb{R}^{\\Omega\\times 1}$ and $f_{\\text{sp}}(\\boldsymbol{\\zeta})\\in\\mathbb{R}^{\\Omega\\times 1}$ are vectors with the $\\omega$-th ($\\omega=0,1,\\dots, \\Omega-1$) entry being $f_{\\text{ref}}(\\zeta_\\omega)$ and $f_{\\text{sp}}(\\zeta_\\omega)$, respectively. We define \n\\begin{align}\n\t\\boldsymbol{\\zeta}\\triangleq \\begin{bmatrix}\n\\zeta_0,\\dots, \\zeta_{\\Omega-1}\n\\end{bmatrix}^{\\text{T}},\n\\end{align}\nwhere $\\Omega$ is the number of the discretized spatial angles. The SDOAnet is trained to minimize the loss function $f_{\\text{loss}}(\\boldsymbol{\\zeta})$ in (\\ref{eq22}) by updating the network coefficients.
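\n\nA differentiable version of the loss~(\\ref{eq22}) can be written, e.g., in PyTorch (a recent version is assumed for complex tensor support). The sketch below is ours; it further assumes angles in degrees, unit spectrum values $A_k=1$, and a ULA with half-wavelength spacing:\n\\begin{verbatim}\nimport math\nimport torch\n\ndef sdoa_loss(G, doas, zeta, sigma_g):\n    # G: (M_B, 2N) network output; doas: list of per-sample\n    # DOA tensors; zeta: (Omega,) angular grid in degrees\n    N = G.shape[1] \/\/ 2\n    z = torch.complex(G[:, :N], G[:, N:])        # complex vector z(m)\n    phase = math.pi*torch.outer(torch.arange(N).float(),\n                                torch.sin(torch.deg2rad(zeta)))\n    A = torch.exp(1j*phase).to(z.dtype)          # steering matrix\n    f_sp = (z @ A.conj()).abs()**2               # estimated spectrum\n    f_ref = torch.stack([sum(torch.exp(-(zeta-th)**2\/sigma_g**2)\n                             for th in th_m) for th_m in doas])\n    return ((f_ref - f_sp)**2).mean()            # batch-averaged loss\n\\end{verbatim}\nMinimizing this loss with a stochastic gradient method, such as Adam with the learning rate of Table~\\ref{table1}, updates the network coefficients.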
\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=2.5in]{.\/Figures\/trainflow.pdf}\n\t\\caption{The training procedure for the SDOAnet.}\n\t\\label{trainflow}\n\\end{figure} \n\nFor the practical system, the mutual coupling effect, the nonlinear effect, the inconsistent phases, the inconsistent gains, and the position perturbations are considered in this paper. The training procedure is shown in Fig.~\\ref{trainflow}, and the following steps can be used to train the SDOAnet:\n\\begin{enumerate}\n\\item \\textbf{Perfect array step:} The received signals of a perfect array without the imperfect effects are used during the training procedure;\n\\item \\textbf{Position perturbation step:} The received signals with position perturbations are used. The position perturbation is generated by a Gaussian distribution with the mean being $0$ and the standard deviation $\\sigma_{\\text{per}}$ selected by a uniform distribution $\\sigma_{\\text{per}}\\in [0,\\sigma_{\\text{max\\_per}}]$. The parameter $\\sigma_{\\text{max\\_per}}$ is specified in the simulation;\n\\item \\textbf{Inconsistent gains step:} The inconsistent gains are considered in this step. Similarly, the inconsistent gains are generated by a zero-mean Gaussian distribution with the standard deviation $\\sigma_{\\text{gain}}$ being $\\sigma_{\\text{gain}}\\in [0, \\sigma_{\\text{max\\_gain}}]$, where $\\sigma_{\\text{max\\_gain}}$ is specified in the simulation;\n\\item \\textbf{Inconsistent phases step:} The inconsistent phases are also generated by a zero-mean Gaussian distribution with the standard deviation $\\sigma_{\\text{phase}}$ being $\\sigma_{\\text{phase}}\\in [0, \\sigma_{\\text{max\\_phase}}]$, where $\\sigma_{\\text{max\\_phase}}$ is specified in the simulation;\n\\item \\textbf{Mutual coupling effect step:} The mutual coupling effect is described by a matrix $\\boldsymbol{B}$ with complex entries, and the diagonal entries are all ones. The entry at the $n$-th row and the $n'$-th column is denoted as \n\\begin{align}\n\tB_{n,n'}=|B_{n,n'}|e^{j\\psi_{n,n'}},\n\\end{align}\nand $|B_{n,n'}|$ is determined by a uniform distribution $|B_{n,n'}|\\in[0,\\sigma^{|n-n'|}_{\\text{mc}}]$ with $n\\neq n'$. The phase $\\psi_{n,n'}$ follows a uniform distribution $\\psi_{n,n'}\\in[0,2\\pi)$. The parameter $\\sigma_{\\text{mc}}$ is specified in the simulation;\n\\item \\textbf{Nonlinear effect step:} The nonlinear effect is described by a nonlinear function\n\\begin{align}\n\tf_{\\text{nonlinear}}(x)=\\tanh(x\\sigma_{\\text{nonlinear}}),\n\\end{align}\nwhere $\\sigma_{\\text{nonlinear}}$ is specified in the simulation to control the nonlinear effect;\n\\item \\textbf{All imperfect effects step:} We consider all the imperfect effects together to train the network.\n\\end{enumerate}\nAfter training the SDOAnet in sequence according to the above steps, we start over from the first step and train the network again, until the maximum number of training rounds is reached. \n\n\n\n\\section{Simulation Results} \\label{sec5}\nIn this section, simulation results show the DOA estimation performance of the proposed SDOAnet using a practical array. The simulations are carried out on a personal computer with MATLAB R2020b, an Intel Core i5 @ 2.9 GHz processor, and 8 GB LPDDR3 @ 2133 MHz memory. The code for the SDOAnet is available online at \\url{https:\/\/github.com\/chenpengseu\/SDOAnet.git}, where the training codes and a pre-trained network are also provided. The network is based on PyTorch~1.4 and Python~3.7. Simulation parameters are given in Table~\\ref{table1}. We use $N=16$ antennas to receive the signals and the SDOAnet to estimate the DOA, where the number of signals is $K=3$. Moreover, the hyper-parameters for the imperfect array are also given in Table~\\ref{table1}.\n\n\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\caption{Simulation parameters}\n\t\\label{table1}\n\t\\centering\n\t\\begin{tabular}{cc}\n\t\t\\hline\n\t\t\\textbf{Parameter} & \\textbf{Value} \\\\\n\t\t\\hline\n\t\tThe standard deviation in the Gaussian function & $\\bar{\\sigma}_{\\text{G}}=100$ \\\\\n\t\tThe batch size & $64$ \\\\\n\t\tThe number of convolution layers & $6$ \\\\\n\t\tThe number of filters in the convolution layer & $2$ \\\\\n\t\tThe kernel size in the convolution layer & $3$ \\\\\n\t\tThe learning rate & $5\\times 10^{-4}$ \\\\\n\t\tThe number of antennas & $N=16$ \\\\\n\t\tThe number of targets & $K=3$ \\\\\n\t\tThe distance between adjacent antennas & $0.5$ wavelength \\\\\n\t\t\\tabincell{c}{The maximum standard deviation of\\\\ position perturbation} \n\t\t & ${\\sigma}_{\\text{max\\_per}}=0.15$ \\\\\n\t\t\\tabincell{c}{The maximum standard deviation of\\\\ inconsistent gain} & ${\\sigma}_{\\text{max\\_gain}}=0.5$ \\\\\n\t\t\\tabincell{c}{The maximum standard deviation of \\\\inconsistent phase} & ${\\sigma}_{\\text{max\\_phase}}=0.2$ \\\\\n\t\tThe maximum mutual coupling effect & ${\\sigma}_{\\text{mc}}=0.06$ \\\\\n\t\tThe nonlinear effect & ${\\sigma}_{\\text{nonlinear}}=1.0$ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n \nFirst, the proposed SDOAnet contains convolution layers, and each convolution layer has convolution, batch normalization, and ReLU activation operations. In the SDOAnet, some important hyper-parameters must be considered for a better DOA estimation. The first hyper-parameter is the number of 1D convolution layers. In Fig.~\\ref{layer}, we show the DOA estimation performance with different numbers of convolution layers. 
As shown in this figure, when the number of convolution layers is $6$, a better estimation performance is achieved, so we use $6$ convolution layers in the following simulations.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/layers.pdf}\n\t\\caption{The DOA estimation performance with different numbers of layers.}\n\t\\label{layer}\n\\end{figure} \n\nThen, we compare the DOA estimation performance among the networks using different numbers $M_F$ of filters in the convolution layers, as shown in Fig.~\\ref{filter}. Considering both the estimation performance and the network complexity, a better trade-off is achieved with $M_F=2$, so we will use $2$ filters in the following simulations. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/filter.pdf}\n\t\\caption{The DOA estimation performance with different numbers of filters.}\n\t\\label{filter}\n\\end{figure} \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/std.pdf}\n\t\\caption{The DOA estimation performance with different standard deviations $\\bar{\\sigma}_{\\text{G}}$.}\n\t\\label{std}\n\\end{figure} \n\nIn the procedure of training the SDOAnet, the reference spatial spectrum is used in the loss function, where we use the Gaussian functions to approximate the spatial spectrum. Hence, the standard deviation $\\bar{\\sigma}_{\\text{G}}$ in the Gaussian function is important in approximating the spatial spectrum. We show the DOA estimation performance with different standard deviations $\\bar{\\sigma}_{\\text{G}}$ in Fig.~\\ref{std}. When the standard deviation $\\bar{\\sigma}_{\\text{G}}$ is $100$, a better DOA estimation performance is achieved, so we will use $\\bar{\\sigma}_{\\text{G}}=100$ in the following simulations. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.8in]{.\/Figures\/spectrum.pdf}\n\t\\caption{The spatial spectrum compared with the existing methods.}\n\t\\label{spectrum}\n\\end{figure} \n\nNext, based on the above SDOAnet parameters, the estimated spatial spectrum is shown in Fig.~\\ref{spectrum} for the DOA estimation and is also compared with the following existing methods:\n\\begin{itemize}\n\t\\item \\textbf{MUSIC method~\\cite{liao_music_2016}:} In the traditional MUSIC method, multiple snapshots are used to estimate the covariance matrix, which is used to obtain the DOA based on the eigenvalue decomposition. For a fair comparison, we use the MUSIC algorithm with only one snapshot proposed in~\\cite{liao_music_2016}, which uses both a Hankel data matrix and the Vandermonde decomposition in the MUSIC method;\n\t\\item \\textbf{ANM method~\\cite{govinda_raj_single_2019,wei_gridless_2020,zai_yang_enhancing_2016}:} The ANM-based methods have been proposed for the DOA estimation and can exploit the target's sparsity in the spatial domain. Unlike the current CS-based methods discretizing the spatial domain into grids and using a dictionary matrix for the sparse reconstruction~\\cite{z_yang_orthonormal_2011,z_tan_joint_2014,g_yu_statistical_2011}, the ANM method estimates the DOA in the continuous domain. It can solve the \\emph{off-grid} problem caused by the discrete methods. \n\t\\item \\textbf{FFT method:} The FFT method is widely used in practical systems with low computational complexity. 
However, the resolution of the FFT method is unsatisfactory, although it is robust to array imperfections;\n\t\\item \\textbf{OMP method~\\cite{aghababaiyan_high-precision_2020,lin_single_2021,chen_source_2019}:} The orthogonal matching pursuit (OMP) method is a CS-based method using the discretized spatial angles, and has relatively low computational complexity. Hence, it has been widely used in sparse reconstruction problems.\n\\end{itemize} \nAs shown in Fig.~\\ref{spectrum}, the spatial spectrum estimated by the proposed SDOAnet shows better performance than those of the MUSIC, ANM, FFT, and OMP methods. Additionally, the proposed method is based on the convolution network and has lower computational complexity than the ANM and MUSIC methods. Therefore, the proposed SDOAnet is efficient for the DOA estimation problem.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/snr.pdf}\n\t\\caption{The DOA estimation performance with different SNRs.}\n\t\\label{snr}\n\\end{figure} \n\nNext, the DOA estimation performance under different SNRs is shown in Fig.~\\ref{snr}, where the SNR ranges from $0$ dB to $30$ dB. This figure shows that a better estimation performance is achieved by the proposed method in the scenario with the imperfect array than by the ANM, FFT, MUSIC, and OMP methods. The estimation performance is measured by the root mean square error (RMSE)\n\\begin{align}\n\t\\text{RMSE} = \\sqrt{\\frac{1}{N_{\\text{sim}}K}\\left\\|\n\t\\hat{\\boldsymbol{\\theta}}-\\boldsymbol{\\theta}\\right\\|^2_2},\n\\end{align}\nwhere $N_{\\text{sim}}$ is the number of simulations, $\\hat{\\boldsymbol{\\theta}}$ is the estimated DOA vector, and $\\boldsymbol{\\theta}$ is the ground-truth DOA vector. When the SNR is $10$~dB, the RMSE of the proposed SDOAnet is about $\\ang{0.70}$ and that of the ANM method is about $\\ang{1.15}$, so the RMSE improvement is about $39.13\\%$. Additionally, when the SNR is $7.5$~dB, the RMSE of the proposed SDOAnet method is the same as that of the ANM method with the SNR being $15$~dB, so the SNR improvement is about $7.5$~dB.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/imperfect.pdf}\n\t\\caption{The DOA estimation performance with different imperfect factors.}\n\t\\label{imperfect}\n\\end{figure} \n\nWe use an imperfect factor $\\xi$ to measure the strength of the imperfect effects. With the imperfect factor $\\xi$, the imperfect parameters for the position perturbation, inconsistent gain, inconsistent phase, mutual coupling effect and nonlinear effect are $\\xi\\sigma_{\\text{max\\_per}}$, $\\xi\\sigma_{\\text{max\\_gain}}$, $\\xi\\sigma_{\\text{max\\_phase}}$, $\\xi\\sigma_{\\text{mc}}$, and $\\xi\\sigma_{\\text{nonlinear}}$, respectively. In Fig.~\\ref{imperfect}, the DOA estimation performance with different imperfect factors is shown, where a better estimation performance is achieved by the proposed SDOAnet method than by the compared methods. Additionally, the performance advantage of the proposed method is larger for higher imperfect factors, which means that the proposed method is robust to the imperfect effects.\n \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/DLsp.pdf}\n\t\\caption{The spatial spectrum compared with the existing deep learning-based method.}\n\t\\label{dlsp}\n\\end{figure} \n\nFurthermore, a DL-based method has also been proposed in~\\cite{izacard2019data} for the DOA estimation, named the deep frequency network, where the output of the network is the spectrum. 
In Fig.~\\ref{dlsp}, the spatial spectra of the proposed SDOAnet and of the deep frequency network are compared. Since the output of the deep frequency network is the spatial spectrum itself, its estimated spectrum is not smooth compared with that of the proposed SDOAnet. Hence, a better DOA estimation performance can be achieved by the SDOAnet than by the deep frequency network.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.7in]{.\/Figures\/DLsnr.pdf}\n\t\\caption{The DOA estimation performance compared with the existing deep learning-based method.}\n\t\\label{dlsnr}\n\\end{figure} \n \nFinally, the DOA estimation performance under different SNRs is given in Fig.~\\ref{dlsnr}, where the SNR ranges from $0$~dB to $30$~dB. The same training data set is used for the SDOAnet and the deep frequency network. Compared with the existing methods, including the FFT method, the OMP method, and the deep frequency network, the proposed SDOAnet can achieve better DOA estimation performance. Therefore, the proposed method is efficient for the DOA estimation problem using an imperfect array.\n\n\\section{Conclusions}\\label{sec6}\nThe DOA estimation problem with an imperfect array has been considered in this paper, and a general system model describing the antenna position perturbations, the inconsistent gains\/phases, the mutual coupling effect, the nonlinear effect, etc., has been formulated. Then, the novel SDOAnet has been proposed. Different from existing methods, the input of the SDOAnet is the raw sampled signals, and the output is a vector, which can be used to estimate the spatial spectrum. With the convolution layers, the training of the SDOAnet converges much faster than that of existing DL-based methods. Simulation results show the advantages of the proposed SDOAnet in the DOA estimation problem with a practical array. Future work will focus on the theoretical analysis of the DOA estimation performance of the proposed SDOAnet. \n \n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWheeled snake-like robots \\cite{tanaka2015control}\nare a class of hypermobile robots \\cite{granosik2014hypermobile}\nthat are able to navigate flexibly through rough terrains \nand restricted geometries. Movements may be generated either\nvia central pattern generators \\cite{hopkins2009survey}, or\nvia top-down commands \\cite{pfotzer2017autonomous}, with the\nlatter being a challenging task when a large number of actuators \nis involved. An alternative is autonomous decentralized\ncontrol, which has been studied for the case of serpentine robots \nin terms of a chain of locally coupled oscillators \n\\cite{sato2011applicability,sato2011decentralized}, and \nvia neurally-inspired generating schemes capable of\nsensorless pathfinding \\cite{boyle2013adaptive}.\n\nA key rationale for developing biologically inspired robots \nis the drive for robust and highly adaptive designs\n\\cite{hirose2004biologically,liljeback2012review}. Similarly,\nthis is most of the time also the motive for studying how\nadaptive locomotion can be realized \\cite{aoi2017adaptive},\nf.i.\\ with soft robots \\cite{calisti2017fundamentals}.\nAbstracting from the direct engineering benefit, it is \nof particular interest to study generating mechanisms of \nlocomotion in general, an approach taken here. 
Our focus\nis on compliant locomotion generated by self-organizing \ndynamical systems, which may take the\nform of either limit-cycle \\cite{martin2016closed}\nor playful behavior \\cite{der2006rocking}. \n\nCompliance, which denotes the ability of a robot to \nreact elastically to environmental feedback \\cite{sprowitz2013towards},\nmay be achieved in several distinct ways, including,\nfrom the engineering perspective, suitably designed actuators \\cite{van2009compliant} \nand control algorithms \\cite{calanca2016review}. Compliant \nbehavior can, on the other hand, also emerge through the \nreciprocal dynamical coupling of control, body and environment\n\\cite{pfeifer2012challenges}, the sensorimotor loop. A particularly\ninteresting limit is here, from the perspective of complex systems\ntheory, the limit of a fully reactive and hence embodied \ncontroller. In this limit, the controller is inactive in the\nabsence of environmental feedback, with the consequence that\nthe sensorimotor feedback does not merely modulate locomotion,\nbut becomes essential for it. Locomotion then arises \nvia limit cycles and chaotic attractors that emerge within the \nsensorimotor loop, the telltale sign of self-organized locomotion. \nIt is hence important to ask, as we will do in this study, how \nlocomotion is generated in terms of a dynamical systems bifurcation \ndiagram.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[height=0.5\\textwidth]{control_schemes_v3.pdf}\n\\caption{Illustration of possible control schemes. Sensory\ninformation is processed, e.g.\\ by a neural network, and motor \ncommands are sent to the actuators. The actuators may respond either \nrigidly (top), or elastically, viz compliant (bottom).\nCompliant actuators may be realized, as illustrated here, via \na direct feedback loop involving the state of the actuator \n(propriosensation). In the limiting case of an embodied\nactuator, as considered in this study, locomotion occurs also in\nthe absence of a modulatory top-down signal.\n}\n\\label{fig:control_topDownCompliant}\n\\end{figure}\n\nDecomposing complex behavior into a series (or into a \nsuperposition) of basic reusable building blocks, the\nmotor primitives \\cite{flash2005motor}, is a well-studied \napproach for reducing the control problem of complex robots. \nMovement primitives may be modeled by nonlinear dynamical \nsystems \\cite{ijspeert2002movement} using, e.g., \nGaussian mixture models \\cite{khansari2011learning},\nwhere the parameters of the dynamical system are either\nuniquely defined or drawn from a suitable distribution\n\\cite{paraschos2013probabilistic,amor2014interaction}.\nMotor primitives can emerge also from embodied dynamics\nin terms of chaotic itinerancy \\cite{park2017chaotic},\nor, alternatively, as self-organized attracting states \nin the sensorimotor loop \\cite{sandor2015sensorimotor}, \nthat is within the state space comprising the controller, \nthe body of the robot and the environment. Here we propose \na new type of self-organized controller for wheeled robots\nthat leads to multiple fixpoint and limit-cycle \nattractor states and hence to self-organized motor \nprimitives in the sensorimotor loop. 
With the\nbehavior of the robot being self-organized both on the \nlevel of the individual wheels and with respect to \ninter-wheel coordination, the resulting dynamics reflects \nthe robot's affordances \\cite{chemero2007gibsonian}\nwhen placed in simple but structured environments.\n\n\\subsection{Control frameworks and the sensorimotor loop}\n\nSeveral partly non-exclusive routes exist, in generic terms, \nfor the generation of locomotion in robots and animats. Standard\ntop-down control, as illustrated\nin Fig.~\\ref{fig:control_topDownCompliant},\nconsists of a central processor \ngenerating motor commands either reactively,\nin response to sensory inputs, or deliberately\non its own \\cite{nakhaeinia2011review}. The \nactuators may in turn be stiff, as for industrial\nrobots, or compliant, as for the muscles and tendons\nof animals, reacting either passively or actively \nto external forces \\cite{van2009compliant}. In\nthe latter case, as sketched in\n Fig.~\\ref{fig:control_topDownCompliant}, the\nactuator changes its stiffness upon sensing \nits own state. Compliance arises then\nin response to propriosensation.\n\nWe are interested in locomotion that arises through \nthe interaction of the degrees of freedom of the robot, \nincluding both internal variables and the body, with \nenvironmental feedback. The combined variables of the \nresulting sensorimotor loop constitute then the phase \nspace for dynamical attracting states, fixed points, \nlimit cycles and chaotic attractors, that correspond \nto self-organized behavioral primitives. The locomotion \ngenerated in this way is highly compliant in the sense \nthat the attracting states in the sensorimotor loop \nrespond elastically to additional top-down commands \nchanging internal parameters.\n\n\\section{Locomotive principles}\n\nStudies of real-world and simulated robots may focus \neither on performance, and its improvement, or on the \ngenerative capabilities of locomotive principles.\nThe latter approach is gaining in importance in view of\na recent study of the neural coding of leg dynamics \nin flies, which showed that the dynamics of the \nleg becomes dysfunctional once the feedback loop between \nleg proprioception and motor commands is cut\n\\cite{mamiya2018neural}. These findings imply that\nself-organization plays a commanding role in fly\nlocomotion. Distributed computation has been found\nto be of relevance also for the nematode C.~elegans\n\\cite{kaplan2018sensorimotor}. Here we concentrate \non generative principles that are time reversal \nsymmetric in the sense that a given set of internal \nparameters allows the robot to move both forwards and \nbackwards. The direction selected by the robot then \ndepends on the initial state, like a small positive \ninitial velocity or force.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{wheel_controller_sketch}\n\\caption{Illustration of a one-neuron controller simulating \nthe transmission of classical steam engines. The actual\nposition $x^{(a)}=\\cos(\\varphi)$ of the wheel drives,\nas described by (\\ref{dot_x}), the neural\nactivity $y(x)$ setting the target position $x^{(t)}=y$. \nA simulated spring with spring constant $k$ between \n$x^{(a)}$ and $x^{(t)}$ generates subsequently the\ntorque $RF_\\mathrm{tan}$ acting on the wheel. 
Here \n$F_\\mathrm{tan}=F_k\\sin(\\varphi)$ denotes the tangential \nprojection of the spring force $F_k=k(x^{(t)}-x^{(a)})$.\n}\n\\label{fig:steamEngineController}\n\\end{figure}\n\n\\subsection{Locomotion via time reversal symmetry breaking}\n\nLocomotion is typically parametrized by a velocity vector\n$\\mathbf{v}=d\\mathbf{r}\/dt$ that incorporates both \nthe direction and the magnitude of the movement.\nReversing time, $t\\leftrightarrow (-t)$, reverses then\nalso the velocity vector. Here we are interested in \nself-organized robots that break time reversal symmetry \nspontaneously, which in our case implies that the \nattracting states in the sensorimotor loop come in pairs\nthat are related via time reversal symmetry. Whether\nthe robot moves for- or backwards depends then only\non the initial conditions. For this purpose we\nuse the one-neuron controller illustrated\nin Fig.~\\ref{fig:steamEngineController}.\n\nA wheel with a rotational angle $\\varphi$ \nis regulated individually via\n\\begin{equation}\n\\tau \\dot x = \\cos\\varphi -x, \n\\qquad\\quad y=\\tanh(ax)~,\n\\label{dot_x}\n\\end{equation}\nwhere $x$ is the membrane potential of the controlling\nneuron, $y=\\tanh(ax)$ the neural activity and $\\tau$ \nthe membrane time constant. The motor command is \nproportional to the spring force\n\\begin{equation}\nF_k=k\\big(x^{(t)}-x^{(a)}\\big),\n\\qquad\\quad\nx^{(t)}= y,\n\\qquad\\quad\nx^{(a)}=\\cos(\\varphi)\\,,\n\\label{F_k}\n\\end{equation}\nwhere $k$ is a spring constant and $x^{(a)}$ and\n$x^{(t)}$ respectively the actual and the target \nposition of the wheel in terms of a projection to \nthe ground \\cite{sandor2018kick}. Note that the \nangle $\\varphi$, which enters the right-hand side \nof Eqs.~(\\ref{dot_x}) and (\\ref{F_k}) as $\\cos(\\varphi)$, \nis the measured, viz the actual, angle of the wheel. All \nforces, gravitational and mechanical, impact the \ncontroller hence exclusively via their influence on \nthe angle $\\varphi$.\n\nThe controller simulates the transmission rod of \na classical steam engine, as sketched in \nFig.~\\ref{fig:steamEngineController}, as it\ntranslates the bounded forth-and-back motion \nof the neural activity $y(t)$ into a rotational \nmotion. Alternatively, instead of using the\nangle $\\varphi$ as the determining variable, one \ncould postulate a discrete map $\\omega^{(t)}=\\tanh(a\\omega^{(a)})$\nbetween the actual and a target angular velocity,\n$\\omega^{(a)}$ and $\\omega^{(t)}$ \\cite{der2008predictive}. This \nis not a problem for simulated\nrobots, for which $\\omega$ is a directly accessible\nvariable. To obtain a reliable estimate of the instantaneous\nangular velocity for real-world robots working with \nduty cycles of the order of $20\\,\\mathrm{Hz}$\nwould however be a challenge \\cite{sandor2018kick}. \n\nA controller enabling locomotive limit cycles to\nemerge in the sensorimotor loop \\cite{martin2016closed},\nas described here by (\\ref{dot_x}) and (\\ref{F_k}), \ndiffers qualitatively from controlling schemes employing\nlocal phase oscillators \\cite{ambe2018simple}, for which \na spontaneous reversal of the direction of motion would \nnot be possible.\n\n\\subsection{Isolated wheel}\n\nThe individual wheels of the simulated robots are \ncontrolled exclusively by (\\ref{dot_x}) and \n(\\ref{F_k}). There is no explicit inter-wheel\ncoupling present. It is illustrative to model,\nfor comparison, an idealized isolated wheel\nwith moment of inertia $I$, radius $R$, angle $\\varphi$ \nand angular velocity $\\omega$. The force $F_k$\ngenerated by the simulated transmission rod then\nenters the equations of motion as a torque \n$RF_k\\sin(\\varphi)$,\n\\begin{equation}\n\\tau \\dot x = \\cos\\varphi -x,\n\\qquad\\quad\n\\dot\\varphi= \\omega,\n\\qquad\\quad\nI\\dot\\omega = R(F_k\\sin\\varphi -f \\omega)~,\n\\label{eq_dot_x_phi_omega}\n\\end{equation}\nwhere $f>0$ is a friction coefficient.\n
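\nA minimal numerical integration of (\\ref{eq_dot_x_phi_omega}), \nsketched below in Python, illustrates the resulting dynamics. \nThe parameters $\\tau$, $I$ and $f$ are those of \nFigs.~\\ref{fig:bifurcations_a_smaller_1} \nand~\\ref{fig:bifurcations_a_larger_1}; the radius $R$, the time \nstep and the initial kick are our assumptions:\n\\begin{verbatim}\nimport numpy as np\n\ndef isolated_wheel(k, a, tau=0.2, I=0.25, R=1.0, f=0.5,\n                   x=0.0, phi=0.1, w=0.1, dt=1e-3, T=200.0):\n    # Euler integration of the isolated wheel; a small positive\n    # initial velocity w selects the direction of motion\n    traj = []\n    for _ in range(int(T\/dt)):\n        y = np.tanh(a*x)                  # neural activity\n        Fk = k*(y - np.cos(phi))          # spring force F_k\n        x += dt*(np.cos(phi) - x)\/tau\n        phi += dt*w\n        w += dt*R*(Fk*np.sin(phi) - f*w)\/I\n        traj.append((phi, w))\n    return np.array(traj)\n\\end{verbatim}\nFor $a=0.95$, a spring constant above the transition value, \nsuch as $k=25$, leads to a rotating state with a non-vanishing \nmean angular velocity, the telltale sign of locomotion.\n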
\nEq.~(\\ref{eq_dot_x_phi_omega}) is manifestly \ninvariant under $\\omega\\leftrightarrow(-\\omega)$,\n$\\varphi\\leftrightarrow(-\\varphi)$ and\n$x\\leftrightarrow x$, which implies\ntime-reversal symmetry in terms of an\ninvariance with respect to reversing \nthe direction of motion. We will investigate\n(\\ref{eq_dot_x_phi_omega}) further in\nSect.~\\ref{sec_theory}, noting here that\nsymmetry breaking may occur also in embodied\nrobots that incorporate forward world \nmodels \\cite{der2013behavior}.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[height=0.4\\textwidth]{screenshot_alone.png}\n\\hspace{4ex}\n\\includegraphics[height=0.4\\textwidth]{screenshot_slope.png}\n\\caption{Screenshots of the LPZRobots simulation\nenvironment. {\\it Left:} A snake-like train of cars composed \nof five passively coupled segments. Each segment contains\ntwo independent wheels that influence each other exclusively \nthrough the mechanics of the body and via the hinge joints\nconnecting the individual segments.\n{\\it Right:} A robot climbing intersecting slopes on its \nown, with the silver line illustrating the ground \ntrace of the last segment. No explicit control signal has \nbeen given. The wiggling observed when the robot moves fast \non straight stretches disappears at lower velocities, as is \nthe case when moving steeper up the slope. The train of \ncars reverses direction autonomously when hitting the intersecting\nslope. The wiggling amplitude on the last leg increases progressively \nwhile moving down, leading in the end to an upward curve\n(\\href{http:\/\/doi.org\/10.6084\/m9.figshare.7643123.v1}\n{click for movie}).\n}\n\\label{fig:screenshots}\n\\end{figure}\n\\section{Results}\n\nWe used the LPZRobots physics simulation\npackage \\cite{der2012playful} for the simulation\nof robots composed of chains of 1-5 two-wheeled\ncars linked passively through hinge joints, which\nare equipped with passively damped torsion springs. \nIn the absence of motor commands or external\nforces the equilibrium position of the hinge joints\ninduces a straight alignment of the connected body \nsegments. During locomotion the joints can, on the \nother hand, store potential energy when bent.\n\nShown in Fig.~\\ref{fig:screenshots} is the trajectory \nof the train of cars climbing up a slope that is \nintersected orthogonally by two other slopes. One \nobserves wiggling and straight locomotion together \nwith direction reversal and large turns.\nIn order to develop an understanding we start \nby investigating the velocity profile of a robot on an \nextended slope, concentrating on the dependence of the \nself-regulated steady-state velocity on the spring \nconstant $k$ of the actuator and on the inclination of \nthe slope. We note that the simulation cycle times of \nthe LPZRobots simulation package, which is based\non the Open Dynamics Engine \\cite{smith2005open},\nare of the order of 50\\,ms. 
\n\n\\subsection{Moving up and down an infinite slope}\n\nIn Fig.~\\ref{fig:velocities} we present the velocity\nprofile for a 5-segmented robot moving on a slope\nparallel to the gradient, that is straight up and down.\nThe downward velocity decreases in magnitude with \ndecreasing slope and spring constant $k$, as expected.\n\nFor the robot moving on a horizontal plane there\nexists a critical $k_c\\approx0.54$, such that\nthe limit-cycles corresponding to regular forward \nor backward movement disappear for $ka_c$.\nFor small $k$ the trivial fixpoint $x=0=\\cos\\varphi=y$ \nwith the Jacobian\n\\begin{equation}\nJ(\\varphi=\\pm\\pi\/2)\\ =\\ \n\\left(\\begin{array}{ccc}\n-1\/\\tau & \\,\\mp 1\/\\tau\\, &0 \\\\[0.5ex]\n0 & 0 & 1 \\\\[0.5ex]\n\\pm ka & k & -f\n\\end{array}\\right)\\,\n\\label{Jacobian_x_0}\n\\end{equation}\nis a saddle for $01$. For larger values of\nthe spring constant~$k$, the $x=0$ solution undergoes \na Hopf bifurcation leading to limit-cycle oscillations.\n\\end{itemize}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{kplot_a_smaller_1.pdf}\n\\caption{One-step heteroclinic route to locomotion for $a<1$.\nShown are stable limit cycles (red) and stable, unstable \nmanifolds (black, light\/dark blue\/green) and \nselected sample trajectories (violet). The fixpoints at \n$\\varphi=0,\\,\\pi$ are stable nodes\/foci respectively for \nsmall and larger spring constants $k$ (top and upper middle panel), \nwith the saddles at $\\varphi=\\pm\\pi\/2$ remaining \nunchanged in character for all $k$. A symmetric heteroclinic \nconnection between the saddles is created when increasing \n$k$ further. For $k\\approx19.66$ (lower bottom panel)\na stable limit cycle (red) corresponding to limit-cycle \nlocomotion (bottom panel) is generated. \nThe parameters are $a=0.95$, $\\tau=0.2$, $I=0.25$ and $f=0.5$.\n}\n\\label{fig:bifurcations_a_smaller_1}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{kplot_a_larger_1.pdf}\n\\caption{Multi-step heteroclinic route to locomotion for $a>1$.\nThe fixpoints at $\\varphi=0,\\pi$ are always stable nodes, \nwith the foci at $\\varphi=\\pm\\pi\/2$ undergoing a Hopf\nbifurcation upon increasing the spring constant $k$\n(top\/upper middle panel). The resulting limit\ncycle (red) corresponds to a periodic forth-and-back\nmotion characterized by $\\omega\\ne0$ and a vanishing\naverage $\\overline{\\omega}=0$. The forth-and-back\nmotion is destroyed by a symmetric heteroclinic transition\n(upper\/lower middle panel), leading to an intermediate\nphase without locomotion. A second heteroclinic transition\nthen generates stable limit-cycle locomotion \n(lower middle\/bottom panel). Parameters, besides $a=1.5$, \nand color-coding as for Fig.~\\ref{fig:bifurcations_a_smaller_1}.\n}\n\\label{fig:bifurcations_a_larger_1}\n\\end{figure}\n\n\\subsection{Routes to locomotion}\n\nIt is interesting to study how limit-cycle\nlocomotion arises from a configuration of\nindividual fixpoints upon increasing the\nforce acting on the wheel, that is the spring \nconstant $k$.\n\nIn Fig.~\\ref{fig:bifurcations_a_smaller_1} we illustrate\nthe case $a<1$, for which the saddles at \n$\\varphi=\\pm\\pi\/2$ do not undergo a pitchfork bifurcation \nyet. Locomotion arises in this case via a one-step \nheteroclinic transition which allows for the generation \nof limit cycles of finite amplitudes. 
A pair of stable\nand unstable limit cycles is eventually produced\nwhen increasing the spring constant $k$, with the \nstable limit cycle corresponding to a locomotive \nbehavioral primitive. Note that the phase space\nof (\ref{eq_dot_x_phi_omega}) is three dimensional \nand that the flow shown in \nFig.~\ref{fig:bifurcations_a_smaller_1} corresponds\nto a projection onto the $(\varphi,\omega)$-plane.\nTrajectories may hence intersect.\n\nIn Fig.~\ref{fig:bifurcations_a_larger_1} \na multi-step route to locomotion for $a>1$ is presented. \nOne first observes a Hopf bifurcation (HB) at\n$\varphi=\pm\pi\/2$, which leads to a first \nintermediate phase characterized by a closed \nlimit-cycle in the $(\varphi,\omega)$-plane. \nThis limit cycle, corresponding to \nsmall-amplitude forth-and-back periodic motion, \nis destroyed when it hits, in a symmetric heteroclinic \ntransition (SHE), the two saddles that are additionally present\nfor $a>a_\mathrm{c}=1$. Motion ceases in the subsequent \nsecond intermediate phase, for which the \nunstable trajectories emerging from \n$\varphi=\pm\pi\/2$ lead to the fixpoints~$\varphi=0,\pi$. \nA stable limit cycle emerges, however, from a second \nheteroclinic transition (HE) when increasing the spring\nconstant $k$ further, namely when the unstable\nmanifold of one of the additional saddles\nhits another saddle.\n\nFor the parameters used for Fig.~\ref{fig:bifurcations_a_larger_1} \nthere are hence two phases without locomotion for $a>1$,\nviz.\ phases for which the angular frequency $\omega$ decays to \nzero. The average angular frequency $\overline{\omega}$ \nvanishes for forth-and-back motion, but not for limit-cycle\nlocomotion:\n$$\n\fbox{$\phantom{|}\omega\to0\phantom{|}$} \n\ \ \xrightarrow{\mbox{\small HB}} \ \\n\fbox{$\phantom{|}\omega\ne0,\,\overline{\omega}=0\phantom{|}$} \n\ \ \xrightarrow{\mbox{\small SHE}} \ \\n\fbox{$\phantom{|}\omega\to0\phantom{|}$} \n\ \ \xrightarrow{\mbox{\small HE}} \ \\n\fbox{$\phantom{|}\omega\ne0,\,\overline{\omega}\ne0\phantom{|}$} \n$$\nWith the motor command being proportional to the spring\nconstant $k$, it is somewhat intuitive that one needs a \ncritical $k$ for locomotion to emerge. Relatively large \nspring constants have been used in \nFig.~\ref{fig:bifurcations_a_smaller_1} for \nillustrative purposes.\n\n\section{Conclusions}\n\nThe gait of an animal corresponds to a coordinated\npattern of limb movements that repeats with a certain \nfrequency. Gaits are typically generated by a central \npattern generator \cite{hopkins2009survey},\nthat is, by a central processing unit that produces\ncoordinated motor signals. We have examined here an \nalternative framework for which the actuators of an \nanimat are self-active, with the dynamics of the \nindividual actuators resulting from the presence \nof limit-cycle attractors within the sensorimotor loop.\nThe actuators, in our case the wheels of a snake-like\nrobot, are coupled in this framework only via the\nmechanics of the body and by the reaction of the environment.\nIn our framework, the gaits of the animat therefore result \nfrom self-organizing principles. For a simulated \nwheeled snake-like animat, we find that the robot\ninteracts autonomously with the environment, e.g.\nby turning on its own on a slope. 
The robot will also\npush a movable box around for a while when colliding \nwith one.\n\nWe have in addition tested that locomotion modes also \narise for heterogeneous (not identical) body segments, \ne.g.\ when the controllers have different spring constants \n$k_i$, or different wheel sizes. In particular, the multi-segmented robot \nis capable of generating locomotion even when\nseveral actuators are subcritical, i.e.\ with wheels \nwhich on their own would not maintain oscillatory \ndynamics. In our study, embodiment leads robustly to \nemergent locomotion.\n\nThe dynamical-systems approach to \nrobotic locomotion employed here allows, in addition, the \nmotion primitives to be characterized in terms of self-organized \nattractors formed in the extended phase space of the \nrobot and environment. Incorporating objects of the \nenvironment into the overarching dynamical system \nconsequently allows dynamical-systems approaches\nalso to classify computational models of affordance\n\cite{zech2017computational}.\nIn this sense, self-organizing attractors \nplay an important role in the generation of useful \nbehavior for the discovery of dynamic object \naffordances \cite{der2017selforganizing}.\n\n\section*{Acknowledgments}\nThe support of the German Science Foundation\n(DFG) is acknowledged.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA Cyber-Physical System (CPS) forms a tight integration of cyber (computational) and physical components \cite{Lee2008}.\nThe physical system is monitored and controlled by (networks of) embedded computers.\nUsually this occurs with a feedback loop: the computations affect the physical process and vice versa.\nExamples of CPSs are automobiles, medical devices, and manufacturing systems.\nIn this work we consider the supervisory control design of manufacturing systems.\nSupervisory control refers to the high-level (coordinated) monitoring and actuation of the system. \nIn a manufacturing system, the supervisory controller regulates the manufacturing processes and the movement of products through the system.\n\nThe design of supervisory controllers may be performed by applying Model-Based Systems Engineering (MBSE), in which models, rather than documents, are the primary means of information exchange, and engineering processes are applied to these models directly \cite{Lee2008,Ramos2012}.\nMBSE (of supervisory control) comprises many different disciplines, among which are specification, variation management, controller synthesis, optimization, formal verification, and implementation.\nThese disciplines each use a specific set of methods, tools, and technologies that are loosely coupled on both a syntactic and a semantic level. \nThis has a major impact on engineering efficiency: it hampers verification sufficiently early in the development process, especially concerning system-wide aspects such as throughput and collision avoidance. 
\nThe simultaneous use and integration of heterogeneous models and tools to capture system-wide properties reliably and with firm guarantees are an open issue \cite{Engell2015}.\nTo significantly improve engineering efficiency and enable rapid deployment of (new) systems and system features, seamless syntactic and semantic interoperability between engineering tools needs to be established \cite{Seshia2017}.\n\n\n\nIn this work we address how to create a multi-disciplinary workflow that has seamless integration of mono-disciplinary MBSE technologies.\nWe show how interoperability of MBSE tools can be achieved through \textit{Analytics as a Service (AaaS)}.\nIn AaaS, analytics functionality can be accessed over web-delivered technologies (i.e., the cloud).\nGenerally, AaaS is applied in the context of big data \cite{Sun2012,Demirkan2013,Assunccao2015,Skourletopoulos2016}.\nIn our work, however, we apply the concept of AaaS to MBSE:\nkey functionalities from various MBSE tools are offered as separate services on a network and are made interoperable by automatically translating models into equivalent ones in the syntax required by the requested service.\nFurthermore, new functionality and tools can be added to the process in a modular manner.\nThe network is set up using the Arrowhead Framework, which is a service-oriented architecture for tool interoperability \cite{Varga2017,Venanzi2020}. \n\nIn this paper we showcase and demonstrate the functionality of a number of state-of-the-art MBSE tools, and focus on the integration of these tools through AaaS to enable seamless synergistic multi-disciplinary MBSE.\n\n\nTo make the context more tangible, we first discuss a use case in Section \ref{sec:usecase}.\nExamples from this use case are used throughout the paper.\nIn Section \ref{sec:challenges}, we discuss some challenges that emerge during the design of the supervisory control of CPSs.\nSome state-of-the-art MBSE tools and technologies that address these challenges are discussed in Section \ref{sec:tools}.\nThese tools are made interoperable through AaaS, and integration of their functionality into a toolchain is discussed in Section \ref{sec:toolchain}.\nConclusions are provided in Section \ref{sec:conclusion}.\n\n\n\n\n\section{Use case: xCPS manufacturing system}\n\label{sec:usecase}\nIn this section we discuss a demonstrative use case from which we use examples throughout the paper.\nOur method is demonstrated on the eXplore Cyber-Physical Systems (xCPS) manufacturing system.\nIt is a platform of industrial complexity for research and education on CPSs.\nThe xCPS system is elaborately discussed in \cite{Adyanthaya2017} and \cite{Basten2020}.\nFor simplicity, we only consider a part of the system, which is formed by the collection of components mentioned in Fig. \ref{fig:overview}.\nThe realization of this (sub)system is displayed in Fig. \ref{fig:realization}.\n\nThe xCPS system receives tops (red pieces in Fig. 
\ref{fig:realization}) and bottoms (silver pieces) on the right side of conveyor belt 1 in an alternating manner.\nWhen a conveyor belt is moving, pieces can be held in place by a stopper.\nPieces that are upside down can be turned right side up at the turner station.\nSwitch 1 can push pieces onto the indexing table, or allow pieces to keep running on conveyor belt 1.\nThe indexing table can hold up to six pieces (one on each arm), and turns counterclockwise.\nPieces that are not pushed onto the indexing table transition to conveyor belt 2 where they reach the pick and place station.\nTo assemble a product, the pick and place robot picks up a top from conveyor belt 2, and places it onto a bottom on the indexing table.\nWhen the indexing table turns after assembly, it brings the product to switch 2. \nSwitch 2 can then push the assembled piece from the indexing table onto conveyor belt 3.\nFinally, the assembled pieces leave the system at the right side of conveyor belt 3.\n\n\begin{figure}[t]\n\centering\n\subfigure[xCPS system schematic overview.]{\n\includegraphics[width=0.46\columnwidth]{images\/xCPS_overview.pdf}\n\label{fig:overview}\n}\n\subfigure[xCPS system realization.]{\n\includegraphics[width=0.46\columnwidth]{images\/screenshot_xCPS_edited.png}\n\label{fig:realization}\n}\n\caption{xCPS system layout.}\n\label{fig:layout}\n\end{figure}\n\n\section{Challenges}\n\label{sec:challenges}\nWhen controlling systems such as the xCPS system, several challenges may arise.\nWe mention some examples:\n\begin{itemize}\n\item How to manage variability in this system when there is, for instance, a system with and a system without a turner station?\n\item How to guarantee safe system operation, for instance, ensuring that a piece is never pushed onto the indexing table when the spot on the table is already occupied by another piece?\n\item How to control this system optimally, with a guaranteed highest possible throughput?\n\item How to guarantee progress in the system, such that for every pair of top and bottom that enters the system, an assembled product will eventually leave the system?\n\item After designing a controller for which the above guarantees can be made, how to deploy it on the system such that the guarantees still hold?\n\end{itemize}\n\nGenerally, (theoretical) solutions have been found for such challenges.\nThese solutions originate from various disciplines.\nIf we consider the challenges mentioned above, some relevant disciplines are, in respective order: \textit{product line engineering} \cite{Pohl2005}, \textit{supervisory control synthesis} \cite{Cassandras2008}, \textit{timing analysis} \cite{Cohen1989}, \textit{formal verification} \cite{Baier2008}, and \textit{implementation} \cite{Dietrich2002}. \nEven though the challenges can be separately addressed, it is still a single system that is being engineered.\nTherefore, a workflow is required that encompasses multiple disciplines.\nA workflow in which the above challenges can be addressed is shown in Fig. 
\\ref{fig:workflow_simplified}.\n\n\n\\begin{figure}[b!]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{images\/workflow_simplified.pdf}\n\\vspace*{-2\\baselineskip}\n\\caption{Overview of the demonstrative workflow.}\n\\label{fig:workflow_simplified}\n\\vspace*{0.5\\baselineskip}\n\\subfigure[In state-of-practice, manual processes are required to use functionality from different tools.\\label{fig:AaaS1}]{\n\\centering\n\\includegraphics[width=0.475\\columnwidth]{images\/AaaS_1.pdf}\n}\n\\hfill\n\\subfigure[By applying AaaS, tools are made interoperable resulting in readily available functionality from each tool.\\label{fig:AaaS2}]{\n\\centering\n\\includegraphics[width=0.475\\columnwidth]{images\/AaaS_2.pdf}\n}\n\\caption{From conventional MBSE (a) to AaaS (b).}\n\\end{figure}\n\n\nWe observe that for each step in the workflow specialized functionality is required, which is present in distinct tools that each use their own syntax and semantics.\nThis makes it challenging to (sequentially) apply each step in a unified engineering process (Fig. \\ref{fig:AaaS1}).\nThis brings us to the following major challenge that we address in this paper: \n\\begin{itemize}\n\\item How to create a multi-disciplinary workflow that has seamless integration of mono-disciplinary MBSE technologies?\n\\end{itemize}\nIn this paper we present AaaS as a solution to tackle this challenge.\nIn this solution multiple tools are used, each specialized in their own discipline and functionality.\nThey are integrated (i.e., made interoperable) over a service-oriented architecture.\nIn this way, their functionality is offered through services, and the engineer can access them from a single interface (Fig. \\ref{fig:AaaS2}).\nWhen other tools or services are required, they may be offered as additional services for integration into the workflow.\n\n\n\n\n\n\\section{State-of-the-art tools and technologies}\n\\label{sec:tools}\nIn the following, we discuss several state-of-the-art tools that each exist in separate disciplines of MBSE.\nThe tools are LSAT, PLE tool, CIF, SDF3, mCRL2, Activity Execution Engine, and model translation tool.\nThese tools are just a selection of tools that could be applied in an MBSE process.\nThe mentioned tools are applied in \\blind{the Arrowhead Tools}{\\xblackout{the Arrowhead Tools}} project and are made Arrowhead framework compatible as discussed in Section \\ref{sec:toolchain}.\n\n\n\n\\subsection{LSAT}\n\\label{sec:LSAT}\nLogistics Specification and Analysis Tool (LSAT) has been developed in a collaboration between \\blind{ESI (part of TNO, applied research center for high-tech systems design and engineering), ASML (manufacturer of lithography machines), and Eindhoven University of Technology (TU\/e)}{\\alttext{an applied research center, a manufacturing company, and a } \\alttext{ university}}\n\\cite{Sanden2021}\n\\footnote{\\blind{\\url{https:\/\/lsat.esi.nl}}{\\alttext{url 1}} , \\url{https:\/\/projects.eclipse.org\/projects\/technology.lsat}}.\nLSAT is being open sourced as an Eclipse project.\nIt is used for system specification through activity models \\cite{Sanden2016,Thuijsman2020LSAT}.\nA system is represented by a number of resources, where each resource consists of a number of peripherals.\nEach peripheral can execute a set of actions, to which a timing is prescribed. 
\nAn activity describes a cohesive piece of behavior in the system, and is modeled as a directed acyclic graph that captures the dependencies between actions performed on peripherals of the resources that it uses. \nAdditionally, LSAT has visualization techniques such as: movement trajectory plots for moving peripherals, graphical editing of activities, and Gantt charts to represent the timing of a sequence of activities.\n\nIn Listing \\ref{lst:LSATmachine}, an LSAT definition for three peripherals of the xCPS system is given: \\texttt{gripper}, \\texttt{turner}, and \\texttt{zMotor}.\nFor these peripherals, either actions are defined which they can perform, or axes are defined along which they can move.\nThe peripherals are instantiated in resource \\texttt{Turner}.\nThe instantiation of the \\texttt{zMotor} is parameterized with positions it can move to (\\texttt{Above\\_Belt}, \\texttt{At\\_Belt}), and which movements are possible between these positions (in this case in both directions between both positions), with a speed profile.\n\n\\begin{lstlisting}[frame=single,caption=LSAT machine specification.\\label{lst:LSATmachine}]\nPeripheralType gripper {\n\tActions {\n\t\tgrab\n\t\tungrab\n\t}\n}\n\nPeripheralType turner {\n\tActions {\n\t\tflip_left\n\t\tflip_right\n\t}\n}\n\nPeripheralType zMotor {\n SetPoints {\n Z [m]\n }\n Axes { \n Z [m] moves Z\n }\n}\n\nResource Turner {\n\tturner : turner \n\tgripper : gripper\n\tzMotor : zMotor {\n\t\tAxisPositions { \n Z (Above,At) \n\t\t}\n\t\tSymbolicPositions {\n\t\t\tAbove_Belt (Z.Above)\n\t\t\tAt_Belt (Z.At)\n\t\t}\n\t\tProfiles (normal)\n\t\tPaths {\n\t\t\tAbove_Belt <-> At_Belt profile normal \n\t\t}\n\t}\n}\n\\end{lstlisting}\n\nIn Listing \\ref{lst:LSATsetting}, part of the LSAT setting specification of the xCPS system is shown.\nThe timings of the actions of the \\texttt{gripper} and \\texttt{turner} peripherals in the \\texttt{Turner} resource are defined.\nThese timings are deterministic, but LSAT also supports specification of probability distributions for timing.\nFor the \\texttt{zMotor} of the \\texttt{Turner} a speed profile \\texttt{normal} is defined by specifying (maximal) velocity, acceleration, and jerk. \nCoordinates are defined for the symbolic positions of the peripheral.\nWhile specifying activities, the user can define actions that move the peripheral between these physical positions with a specific speed profile. 
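\nTo give a feel for what such a timing computation involves, the sketch below estimates the duration of a point-to-point move under a simplified profile that bounds only velocity and acceleration; LSAT's actual third-order profiles additionally bound the jerk, so the numbers are indicative only.\n\begin{lstlisting}[frame=single,caption=Illustrative move-time estimate in Python.\label{lst:moveTimeSketch}]\nimport math\n\n# time to travel distance d with maximal velocity V and acceleration A\ndef move_time(d, V, A):\n    if d >= V * V \/ A:               # cruise phase is reached\n        return d \/ V + V \/ A         # trapezoidal profile\n    return 2.0 * math.sqrt(d \/ A)    # purely triangular profile\n\n# zMotor of the Turner: Above = 0, At = 0.12, profile normal\nprint(move_time(0.12, V=5.0, A=10.0))  # roughly 0.22 s\n\end{lstlisting}\n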
\nThen, LSAT computes the timing of that action.\n\n\\begin{lstlisting}[frame=single,caption=LSAT setting specification.\\label{lst:LSATsetting}]\nTurner.gripper {\n\tTimings {\n\t\tgrab = 0.05\n\t\tungrab = 0.04\n\t}\n}\n\nTurner.turner {\n\tTimings {\n\t\tflip_left = 0.35\n\t\tflip_right = 0.35\n\t}\n}\n\nTurner.zMotor {\n\tAxis Z {\n\t\tProfiles {\n\t\t\tnormal (V = 5, A = 10, J = 10)\n\t\t}\n\tPositions {\n\t\tAbove = 0\n\t\tAt= 0.12\n }\n }\n}\n\\end{lstlisting}\n\n\nIn Listing \\ref{lst:LSATactivity}, an LSAT specification for the activity \\texttt{TurnerTurnTop} is given.\nThis is done by first giving arbitrary names for the actions that are used in the activity, and then specifying the directed acyclic graph of the activity which defines its action flow.\nAn arrow (\\texttt{->}) between two actions denotes that the succeeding action always starts after the preceding action has completed.\nSynchronization points are marked by a vertical bar: \\texttt{|s1} and \\texttt{|s2} are used in Listing \\ref{lst:LSATactivity} to denote multiple incoming or outgoing dependencies for an action.\nE.g., after \\texttt{Down}, both actions \\texttt{Release} and \\texttt{Up2} are allowed to take place (in any order\/at the same time).\nIn the activity framework, each resource needs to be claimed by the activity before it can perform actions, and all claimed resources need to be released after performing the actions \\cite{Sanden2016}.\nIn this way, activities can be deployed in a pipelined manner.\nI.e., when a resource is free, an activity can claim it and perform its actions on that resource, regardless of what previous activities might still be performing (on other resources). \n\n\\begin{lstlisting}[frame=single,caption=LSAT activity specification.\\label{lst:LSATactivity}]\nactivity TurnerTurnTop {\n\tprerequisites{\n\t\tTurner.zMotor at At_Belt\n\t}\n\tactions { \n\t\tCT1 : claim Turner\n\t\tRT1 : release Turner\n\t\tCS1 : claim Stopper1\n\t\tRS1 : release Stopper1 \n\t\tCS2 : claim Stopper2\n\t\tRS2 : release Stopper2 \n\t\tLeft : Turner.turner.flip_left \n\t\tRight : Turner.turner.flip_right\n\t\tUp : move Turner.zMotor to Above_Belt with speed profile normal \n\t\tUp2 : move Turner.zMotor to Above_Belt with speed profile normal\n\t\tDown : move Turner.zMotor to At_Belt with speed profile normal\n\t\tGrab : Turner.gripper.grab\n\t\tRelease : Turner.gripper.ungrab \n\t}\n\taction flow {\n CS2->CS1->CT1->Grab->Up->Left->Down->|s1->Release->|s2->Right->RT1->RS2->RS1\n |s1->Up2 ->|s2\n\t} \n}\n\\end{lstlisting}\n\n\\subsection{PLE tool}\nThe Product Line Engineering (PLE) tool, developed at \\blind{TU\/e}{\\alttext{a university}}, is used to manage the variability of a system, and to automatically validate and derive product instances within a product line \n\\cite{Kahraman2021}.\nUsing the PLE tool, a product line is defined which consists of a feature model, a base LSAT model, delta modules, and a mapping model. \n\nThe feature model expresses the commonality and variability of the product line \\cite{Benavides2010}.\nFig. \\ref{fig:feature_model} shows the feature model of the xCPS product family that represents the variability in the resources, behavior, and assembly type of the system.\nConstraints expressed as propositional formulas are used to define dependencies between features. 
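\nA minimal sketch of how a configuration can be checked against such constraints is given below; the two constraints shown are hypothetical examples and are not taken from the actual xCPS feature model.\n\begin{lstlisting}[frame=single,caption=Illustrative configuration check in Python.\label{lst:featureCheckSketch}]\n# hypothetical propositional constraints over the xCPS features\nconstraints = [\n    lambda f: (not f[\"PickPlace\"]) or f[\"Resource\"],\n    lambda f: f[\"Behavior\"],  # Behavior is mandatory\n]\n\ndef is_valid(f):\n    # valid iff every propositional constraint is satisfied\n    return all(c(f) for c in constraints)\n\nprint(is_valid({\"Resource\": True, \"PickPlace\": True, \"Turner\": True,\n                \"Behavior\": True, \"FastMovement\": True}))  # True\n\end{lstlisting}\n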
\nA configuration is valid if the combination of features is allowed by the feature model.\n\n\n\begin{figure}[ht]\n\centering\n\includegraphics[width=0.90\columnwidth]{images\/FeatureModel.pdf}\n\caption{xCPS feature model.}\n\label{fig:feature_model}\n\end{figure}\n\nA base LSAT model, such as described in Section \ref{sec:LSAT}, is input to the PLE tool and serves as the source for deriving new product instances. \nDelta modules are used to define modifications that can be made to the base model.\nIn Listing \ref{lst:PLEdelta}, a delta module of the LSAT machine specification is provided for the absence of the turner station.\nNote that, for example, the gripper peripheral is not removed, since it is also used in resources other than the turner.\n\begin{lstlisting}[frame=single,caption=PLE Tool delta module.\label{lst:PLEdelta}]\ndelta \"machineDelta\"\n\tdialect \n\tmodifies <..\/model\/xCPS.machine>\n{\tremoveResourceFromResourcesOfMachine(, );\n\tremovePeripheralFromPeripheralTypesOfMachine(, );\n\t... }\n\end{lstlisting}\nThe PLE tool uses mapping models to define what delta modules need to be applied when a particular configuration is selected.\nIn Listing \ref{lst:PLEmapping} a mapping model is provided, where \texttt{!Turner} states that the following deltas should only be applied if the Turner is not selected.\n\begin{lstlisting}[frame=single,caption=PLE Tool mapping model.\label{lst:PLEmapping}]\n!Turner:\n ,\n ,\n ...\n\end{lstlisting}\nA particular instance of the xCPS product line is defined in Listing \ref{lst:PLEconfiguration}.\nNote that, in addition to the absence or presence of components, the PLE tool can also be used to, e.g., instantiate settings (for the example, those defined in \texttt{FastMovement}) or assembly procedures.\nUsing the defined product line and the configuration, the PLE tool derives LSAT variant models.\nThese derived models can then be used to perform further analysis.\n\begin{lstlisting}[frame=single,caption=PLE Tool configuration.\label{lst:PLEconfiguration}]\nconfiguration {\n\t\"Resource\",\n\t\"PickPlace\",\n\t\"Turner\",\n\t\"Behavior\",\n\t\"FastMovement\",\n}\n\end{lstlisting}\n\n\subsection{CIF} \n\label{sec:CIF}\nCIF is an automata-based tool and language, and is used to specify system behavior, formulate behavioral requirements, and perform supervisory controller synthesis \cite{Cassandras2008} to obtain a correct-by-construction supervisory controller that adheres to the requirements\n\cite{vanBeek2014}.\nCIF is part of the Eclipse Supervisory Control Engineering Toolkit (Eclipse ESCET\u2122) \cite{ESCET}\footnote{\url{https:\/\/www.eclipse.org\/escet\/} , `Eclipse', `Eclipse ESCET' and `ESCET' are trademarks of Eclipse Foundation, Inc.}, which became an Eclipse open source project in 2020.\nThis project builds upon research and tool development at \blind{TU\/e, as well as collaboration with industry including ASML and Rijkswaterstaat (part of the Dutch ministry of infrastructure and water management).}{\alttext{a university}, as well as collaboration with industry including \alttext{a manufacturing company} and \alttext{an infrastructural company}.}\n\n\n\nIn Listing \ref{lst:CIFrequirement} a requirement specified in CIF is shown.\nThe requirement automaton specifies allowed behavior for the turner station.\nActivities \texttt{TopGoTo}\-\texttt{Turner} and \texttt{BottomGoToTurner} can only occur when there is no piece present at the turner station.\nA bottom can immediately pass through the turner station.\nIt
is assumed that every top needs to be inverted.\nFirst, the \\texttt{TurnerGoDown} activity is executed.\nThis activity can be successful or fail.\nWhen it is successful, the turner will turn the top, and the top can continue after the turner.\nWhen the activity fails, the turner has to retry until it is successful.\n\n\\begin{lstlisting}[frame=single,caption=CIF requirement turner.\\label{lst:CIFrequirement}]\nrequirement automaton TurnerFlow:\n\tlocation NoPiece:\n\t\tinitial; marked;\n\t\tedge TopGoToTurner goto TurnTop;\n\t\tedge BottomGoToTurner goto Bottom;\n\tlocation TurnTop:\n\t\tedge TurnerGoDown goto TurningTop;\n\tlocation TurningTop:\n\t\tedge TURNERDOWNSUCC_TurnerTurnTop goto TurnedTop;\n\t\tedge TURNERDOWNFAIL_RetryTurnerGoDown;\n\tlocation TurnedTop:\n\t\tedge TopGoAfterTurner goto NoPiece; \n\tlocation Bottom:\n\t\tedge BottomGoAfterTurner goto NoPiece;\nend\n\\end{lstlisting}\n\n\nCIF can be used to perform supervisory controller synthesis. \nA supervisory controller is generated, that restricts the behavior of the system such that the requirements are always satisfied.\nEssentially, for each activity a predicate is computed that needs to hold for the activity to occur.\nFor example, Listing \\ref{lst:CIFsupervisor} shows the additional guard that is generated for the activity \\texttt{BottomGoAfterTurner}.\nIt makes sure that this activity can only occur when at the next station after the turner (\\texttt{Sensor3}) there are no inverted tops present, of which the amount is stored in \\texttt{Sensor3Location.nInvTops}, to avoid collisions.\nNote that when \\texttt{BottomGoAfterTurner} occurs, there can never be a non-inverted top or bottom at Sensor3, since it is assumed that tops and bottoms are input in alternating manner, cannot overtake, and every top is inverted at the turner station.\n\n\\begin{lstlisting}[frame=single,caption=CIF supervisor.\\label{lst:CIFsupervisor}]\nsupervisor automaton sup:\n location:\n initial; marked;\n edge BottomGoAfterTurner when Sensor3Location.nInvTops = 0;\n ...\nend\n\\end{lstlisting}\n\n\\subsection{SDF3} \n\\label{sec:SDF3}\nSDF3 is a toolset that has an extensive library of analysis and transformation algorithms for synchronous dataflow graphs, which are suitable for modeling both parallel and pipelined processing and cyclic dependencies \\cite{Stuijk2006}. \nAdditionally, it can be used to generate random synchronous dataflow graphs, if desirable with certain guaranteed properties.\nSDF3 is developed at \\blind{TU\/e \\footnote{\\url{https:\/\/www.es.ele.tue.nl\/sdf3\/}}}{\\alttext{a university} \\footnote{\\alttext{url}}}.\nIn this work we will use it for makespan optimization of activity models to find the optimal behavior that produces products as quickly as possible.\n\n\nFrom an Input\/Output (I\/O) automaton, SDF3 makes an internal conversion to a max-plus automaton and performs timing optimization using the methods of \\cite{Gaubert1995} to find the optimal order of activities to be dispatched to generate the lowest possible makespan.\nIn Listing \\ref{lst:SDF3MPA} an I\/O automaton is shown. \n\\texttt{loc1} is the initial location (denoted by \\texttt{i}).\nTransitions between the locations are defined, where in this case most input actions are empty (denoted by an empty string).\n\\texttt{loc4} is where the event outcome of \\texttt{TurnerGoDown} is processed. 
\nIf it succeeds, the top piece is turned, and if it fails, the turner retries to go down.\n\nFor the xCPS system model, the computed dispatching sequence is shown in Listing \\ref{lst:SDF3result}.\nThis is the optimal sequence of activities to execute when the system is empty to produce one product as quickly as possible.\nWhen there is no fail on the turner (or elsewhere), one product can be produced in 17.723 seconds.\n\n\\begin{lstlisting}[frame=single,caption=SDF3 I\/O automaton.\\label{lst:SDF3MPA}]\nioautomaton statespace { \n loc1 i -,InputTopInverted-> loc2 \n loc2 -,TopGoToTurner-> loc3 \n loc3 -,TurnerGoDown-> loc4 \n loc3 -,InputBottom-> loc5 \n loc4 -TURNERDOWNSUCC,TurnerTurnTop-> loc6 \n loc4 -TURNERDOWNFAIL,RetryTurnerGoDown-> loc7\n loc4 -,InputBottom-> loc8\n ... }\n\\end{lstlisting}\n\\begin{lstlisting}[frame=single,caption=SDF3 makespan result.\\label{lst:SDF3result}]\nmakespan: 17.723\nsequence: InputTopInverted; TopGoToTurner; InputBottom; ; TurnerGoDown; TURNERDOWNSUCC,TurnerTurnTop; TopGoAfterTurner; BottomGoToTurner; TopGoToSensor4; ; BottomGoAfterTurner; TopGoToSwitch2; BottomGoToSensor4; BottomGofromSt3ToTable2; AlignTable2WithPickPlace; AlignTable2WithBelt; AlignTable2WithPickPlace; TopGoToPickPlace; TopPickUp; TopAssemble; AlignTable2WithBelt; AlignTable2WithPickPlace; ProdGoFromTable2ToBelt4\n\\end{lstlisting}\nTo study the sequence and evaluate potential bottlenecks, LSAT can generate a Gantt chart from the sequence, which is shown in Figure \\ref{fig:Gantt}.\nThe Gantt chart shows which activities are occupying which resources at what time, what actions are executed, and the dependencies between actions on different peripherals.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{images\/Gantt_edited.pdf}\n\\vspace*{-1.2\\baselineskip}\n\\caption{Gantt chart optimal makespan one product.}\n\\vspace*{-0.2\\baselineskip}\n\\label{fig:Gantt}\n\\end{figure}\n\n\\subsection{mCRL2}\nmCRL2 is a toolset designed to reason about concurrent and distributed systems.\nThe toolset consists of its own language as well as more than sixty tools supporting visualization, simulation, minimization, and model checking \\cite{Baier2008} of complex systems\n\\cite{Groote2014,Bunte2019}\n\\footnote{\\url{https:\/\/www.mcrl2.org\/}}.\nmCRL2 is open source and developed at \\blind{TU\/e}{\\alttext{a university}} in collaboration with \\blind{University of Twente}{\\alttext{another university}}. \nGiven an mCRL2 specification and a property in the modal mu calculus, mCRL2 can apply model checking to guarantee the (non-)existence of particular behaviors in a system.\n\nA portion of an mCRL2 model describing the behavior relevant to the turner station and the activity TurnerTurnTop is provided in Listing \\ref{lst:mCRL2model}.\nThe model is derived from an automata translation of the LSAT model using \\cite{Thuijsman2020LSAT}.\n\n\\begin{lstlisting}[frame=single,caption=mCRL2 model.\\label{lst:mCRL2model}]\nsort enum_LPE = struct enumlit_Grab | enumlit_Ungrab;\nsort enum_LPE2 = struct enumlit_Flip_left | enumlit_Flip_right;\n...\nact value_Turner_gripper : enum_LPE;\nact value_Turner_turner : enum_LPE2;\n...\nproc BehProc_M(Locvar_M : LocSort_M, Activity_TurnerTurnTop : enum_LPE4, Turner_gripper : enum_LPE, Turner_turner : enum_LPE2, Turner_zMotor : enum_LPE3) =\n value_Activity_TurnerTurnTop(Activity_TurnerTurnTop) . BehProc_M(Locvar_M, Activity_TurnerTurnTop, Turner_gripper, Turner_turner, Turner_zMotor) +\n value_Turner_gripper(Turner_gripper) . 
BehProc_M(Locvar_M, Activity_TurnerTurnTop, Turner_gripper, Turner_turner, Turner_zMotor) +\n ...\n ((Locvar_M == loc_M_L) && (Activity_TurnerTurnTop == enumlit_l0)) -> claim_Stopper2 . BehProc_M(Locvar_M, enumlit_l1, Turner_gripper, Turner_turner, Turner_zMotor) +\n ((Locvar_M == loc_M_L) && (Activity_TurnerTurnTop == enumlit_l1)) -> claim_Stopper1 . BehProc_M(Locvar_M, enumlit_l2, Turner_gripper, Turner_turner, Turner_zMotor) +\n ... ;\nact claim_Stopper1, renamed_claim_Stopper1, claim_Stopper2, ...;\ninit BehProc_M(loc_M_L, enumlit_l0, enumlit_Ungrab, enumlit_Flip_left, enumlit_Above_belt);\n\end{lstlisting}\n\nIn Listing \ref{lst:mCRL2requirement}, a modal mu calculus formula is provided (in mCRL2 syntax) that expresses that whenever the turner flips left, it must eventually flip right. \nmCRL2 can be used to verify whether this property holds for the model.\nWhen a property that is checked does not hold, a counter-example is given to aid in solving the problem.\n\begin{lstlisting}[frame=single,caption=mCRL2 requirement.\label{lst:mCRL2requirement}]\n[true*.flip_left]mu X.([!flip_right]X && <true>true)\n\end{lstlisting}\n\n\subsection{Activity Execution Engine}\nThe Activity Execution Engine (AEE) is an automated execution method for the activity framework developed at \blind{TU\/e}{\alttext{a university}}.\nIt receives an activity model with a supervisory controller in the form of an I\/O automaton as input and executes it on the system.\nThe AEE is time-preserving, meaning that it adheres to the model-prescribed (LSAT) timing of actions within well-defined bounds.\nThe AEE guarantees determinate behavior of the system despite timing variations that may happen in execution.\nThe supervisory control is directly performed by the activity execution engine, i.e., the activity execution engine directly connects to the low-level resource controllers.\n\nThe execution engine provides a generic solution for executing the LSAT model on the machine. \nIt consists of three layers. \nThe supervisory control layer deals with the high-level execution concerning the order of activities and decisions based on event outcomes that it receives from the lower layers.\nThe AEE layer is concerned with execution of individual activities and sequencing them together. \nThe action translation layer is responsible for translating action descriptions in the LSAT specification to low-level function calls that execute the actions, and translates the sensor data back to communicate to the higher levels. \n\nThe generic solution provides all the mechanisms needed to guarantee timing- and behavior-preserving execution of the model.\nThe only part of the engine that the system designers and engineers would need to implement based on their specific product is the action translation layer, which is a relatively small library.\n\n\n\subsection{Model translation tool}\nAs becomes apparent from the above listings, each tool has its own unique syntax.\nTo make the tools interoperable, it needs to be possible to translate a model in one tool to an equivalent model in another tool.\nIn this way it is for example possible to generate a behaviorally equivalent mCRL2 model of an LSAT model and perform formal verification on that model, which is a function that only mCRL2 provides. 
\nThese translations are essential for the approach that we discuss next in Section \ref{sec:toolchain}.\nThey make it possible to automate the use of functionality of tool \emph{X} for models in (the syntax of) tool \emph{Y}.\nThe model translation tool is a toolset that offers a range of translations between models that are equivalent with respect to the process or computation that will be performed on the generated model.\nFor models with complex semantics, performing translation validation \cite{Pnueli1998} may be desirable.\n\n\section{Toolchain and interoperability}\n\label{sec:toolchain}\nThe introduced tools each have their own specific functionality.\nWith their combined functionality, they can be used in an MBSE workflow for manufacturing systems.\nWe first discuss a workflow that uses all tools mentioned in Section \ref{sec:tools}, and then present how the tools are integrated into an interoperable toolchain.\n\n\subsection{Example workflow}\n\label{sec:workflow}\nIn Section \ref{sec:challenges}, we discussed a number of challenges that arise in MBSE of supervisory controllers.\nAn interdisciplinary workflow was introduced in Figure \ref{fig:workflow_simplified}.\nIn that workflow, each of the steps tackled a challenge in the engineering process.\nAll steps of the workflow can individually be performed by the tools mentioned in Section \ref{sec:tools}.\nIn this section we discuss integration of the tools into a toolchain, creating an MBSE process of supervisory control design for manufacturing systems that includes specification, variation management, supervisor synthesis, timing optimization, formal verification, and implementation.\n\nWe use the following workflow to show how the tools can be integrated to form a toolchain:\n\begin{enumerate}\n\item A product line is specified using LSAT and the PLE tool.\n\item Given a feature configuration, an LSAT product instance is derived with the PLE tool.\n\item The LSAT product instance specification is converted to CIF.\n\item Safety requirements are specified, and a maximally permissive supervisory controller automaton is synthesized using CIF.\n\item The automaton supervisor is converted to an I\/O automaton.\n\item Using the I\/O automaton and the timing information from the LSAT specification, SDF3 is used to perform makespan optimization to find the optimal dispatching sequence of activities to produce products as quickly as possible.\n\item The LSAT specification and the optimal dispatching sequence are used to construct an mCRL2 model.\n\item With mCRL2, progress properties are verified for the behavior that results from the obtained dispatching sequence.\n\item The control strategy is deployed on the physical system using the AEE.\n\end{enumerate}\nThe authors note that this is just a demonstrative workflow: the tools provide more services that could be used, and different tools could be used altogether.\n\nA graphical representation of the workflow is shown in Figure \ref{fig:workflow}. \nEssentially, this is an elaboration of the workflow in Figure \ref{fig:workflow_simplified}.\nAt the top of the diagram are the artifacts.\nSome artifacts are manually constructed (these do not have an incoming arrow), while others are automatically generated during the process.\nIn the middle are services corresponding to processes that are performed during the workflow, with incoming and outgoing arrows linking them to the respective input and output artifacts. 
\nThe processes are executed as a service provided by one of the tools shown in the bottom of the diagram.\n\n\\begin{figure}[tbhp]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{images\/workflow.pdf}\n\\caption{Elaborated overview demonstrative workflow.}\n\\label{fig:workflow}\n\\end{figure}\n\nExcerpts from the artifacts are presented throughout Section \\ref{sec:tools} \\footnote{Complete models of the xCPS system for the various tools are available here: \\\\ \\url{https:\/\/github.com\/sbthuijsman\/FM_interoperability}}.\n\nThrough this workflow, the challenges mentioned in Section \\ref{sec:challenges} are addressed.\nUsing the PLE tool, variability of the system is managed by specifying a product line with deltas between particular configurations.\nIn this way, we avoid (manually) creating models of each unique configuration.\nBehavioral requirements are specified in a modular manner, and a minimally restrictive correct-by-construction supervisory controller is generated that adheres to these requirements by applying supervisory controller synthesis in CIF.\nTime-optimal control is guaranteed through application of timing optimization using SDF3, which selects the time-optimal activity sequence from the supervisory controller.\nProgress in the system can be guaranteed by specifying and verifying progress properties using mCRL2.\nFinally, the designed controller is deployed on the system with the AEE, which guarantees execution that adheres to the models.\nEven though these tools use their own semantics and syntax, their functionality can be applied sequentially on a single system through the use of model translations by using the model translation tool.\n\nThe authors note that even though behavioral requirements are already enforced by supervisor synthesis through CIF, formal verification in mCRL2 is still beneficial.\nA progress property such as listed in Listing \\ref{lst:mCRL2requirement} (eventually modality) can not be directly expressed as a requirement in CIF, and supervisory controller synthesis is in general not applicable to such requirements.\nAdditionally, in this case the CIF model only considers behavior on activity level, while the mCRL2 model includes action level behavior, which allows for more detailed inspection of the behavior.\n\n\n\n\\subsection{Toolchain integration over Arrowhead framework}\n\\label{sec:arrowhead}\nThe workflow described above spans from specification to realization.\nIn the steps along the way, several processes are performed using various tools.\nA seamless integration of these tools is established through AaaS: their functionalities are provided as services on an Arrowhead local cloud.\nThe Arrowhead cloud is constructed using the Arrowhead framework, which is an interoperability service-oriented architecture \\footnote{\\url{https:\/\/arrowhead.eu\/} , \\url{https:\/\/github.com\/eclipse-arrowhead} , the Arrowhead framework files used for the integration of the tools discussed in this paper can be found here: \\url{https:\/\/github.com\/sbthuijsman\/FM_interoperability}}.\nThe framework is elaborately discussed in \\cite{Varga2017,Venanzi2020}. 
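\nTo make the service consumption concrete, the sketch below shows what invoking a model-translation service could look like from a consumer system. The endpoint, payload layout, and field names are illustrative assumptions only; they do not reflect the actual Arrowhead service contracts, which are defined per deployment.\n\begin{lstlisting}[frame=single,caption=Illustrative service consumption in Python.\label{lst:serviceSketch}]\nimport json\nimport urllib.request\n\n# hypothetical endpoint of a translation service on the local cloud\nURL = \"http:\/\/127.0.0.1:8080\/translate\/lsat-to-mcrl2\"\n\ndef translate(lsat_model):\n    payload = json.dumps({\"model\": lsat_model}).encode(\"utf-8\")\n    req = urllib.request.Request(URL, data=payload,\n        headers={\"Content-Type\": \"application\/json\"})\n    with urllib.request.urlopen(req) as resp:  # POST the model\n        return json.load(resp)[\"mcrl2Model\"]   # receive the translation\n\end{lstlisting}\n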
\nIn addition to the MBSE tool services, the following three Arrowhead core services exist on the cloud:\n\begin{enumerate}\n\item Orchestration service to coordinate the connections between the consumer and provider services.\n\item Authorization service to provide security such that services can only be accessed by authorized consumers.\n\item Service registry system to keep track of all services within the network and ensure all systems can find each other.\n\end{enumerate}\nBecause of the integration in the Arrowhead cloud, the workflow can be performed from a single interface and the functionality from each tool is readily accessible.\nFurthermore, additional tools and services can be added in a modular manner.\nAs long as there is a tool or service that can solve the problem, and model-to-model translation is possible from some existing artifact to the required syntax of the service concerned, the toolchain can be extended to include the service in the same manner as the discussed services.\nDirect access to functionality of the MBSE tools enables rapid application of MBSE technologies from various disciplines, and allows seamless MBSE of the supervisory control of a CPS from start to finish.\nBeyond this integration, the AaaS framework provides more benefits, such as provisioning of a centralized (model) database or management of computational resources.\n\n\section{Conclusion}\n\label{sec:conclusion}\nMany challenges can be addressed through MBSE of the supervisory control of CPSs.\nSolutions originate from several disciplines.\nEach discipline uses its own tools with its own semantics and syntax.\nIt is a major challenge to create a multi-disciplinary workflow that has seamless integration of mono-disciplinary MBSE technologies.\n\nIn this paper, several tools are discussed that are each state-of-the-art in their own discipline.\nEven though the tools use their own semantics and syntax, equivalent models can be generated for each tool by model-to-model translations.\nBy applying AaaS, in this case using the Arrowhead framework, the tools are made interoperable.\nThe translation steps are automated, and the services from each tool are readily accessible.\nA seamless integration of the tools is established: the engineer can easily access their functionality from a single interface.\nBecause of the modularity of the service-oriented architecture, the toolchain can without difficulty be extended to incorporate additional functionality as long as the required model-to-model translations can be established.\n\n \bibliographystyle{splncs04}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nNakajima \\cite{N1} constructed a Heisenberg algebra using\ncorrespondence varieties which acts on the direct sum over all $n$\nof homology groups $H(X^{[n]})$, with complex coefficients, of\nHilbert schemes $X^{[n]}$ of $n$ points on a quasi-projective\nsurface $X$. This representation is irreducible thanks to\nG\\\"ottsche's earlier work \\cite{G}. Similar results have been\nindependently obtained by Grojnowski \\cite{Gr}. We refer to\n\\cite{N2} for an excellent account of Hilbert schemes and related\nworks. 
In the case when $X$ is the minimal resolution $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $ of\nthe simple singularity $\\Bbb C^2 \/ \\Gamma$ associated to a finite\nsubgroup $\\Gamma$ of $SL_2 ( {\\Bbb C} )$, this together with some additional\nsimple data provides a geometric realization of the\nFrenkel-Kac-Segal vertex construction of the basic representation\nof an affine Lie algebra \\cite{FK, S1}. We may view this as a\ngeometric McKay correspondence \\cite{Mc} which provides a\nbijection between finite subgroups of $SL_2 ( {\\Bbb C} )$ and affine Lie\nalgebras of ADE types.\n\nIn \\cite{W} we realized the important role of wreath products $\\G_n\n= \\Gamma \\sim S_n $ associated to a finite group $\\Gamma$ in equivariant\nK-theory. Various algebraic structures were constructed on the\ndirect sum over all $n$ of the topological $\\G_n$-equivariant\nK-theory $K^{ {\\footnotesize {top}} }_{\\G_n} (X^n) \\bigotimes {\\Bbb C} $ of $X^n$ for a\n$\\Gamma$-space $X$. The results of \\cite{W} generalized the work of\nSegal \\cite{S2} (also see \\cite{VW, Gr}) which corresponds to our\nspecial case when $\\Gamma$ is trivial (i.e. $\\Gamma$ is the one-element\ngroup) and the wreath product $\\G_n$ reduces to the symmetric group\n$S_n$.\n\nThe wreath product approach obtains further significance in light\nof the conjectural equivalence of various algebraic structures in\nthe following three spaces:\n %\n\\begin{eqnarray} \\label{eq_master}\n \\begin{array}{ccc}\n \\mbox{I} & & \\mbox{II} \\\\\n \\bigoplus_{n \\geq 0} H( Y^{[n]})\n & \\leftarrow -\\rightarrow\n & \\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{S_n } (Y^n) \\bigotimes {\\Bbb C} \\\\\n %\n & \\;\\, \\nwarrow \\quad \\quad &\\uparrow \\\\\n %\n & \\quad \\searrow & \\downarrow \\\\\n & & \\mbox{III} \\\\\n & & \\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{\\G_n } (X^n) \\bigotimes {\\Bbb C} \n \\end{array}\n\\end{eqnarray}\n %\nHere one assumes that $X$ is a quasi-projective surface acted upon\nby a finite group $\\Gamma$ and $Y$ is a suitable resolution of\nsingularities of $X\/ \\Gamma$ such that there exists a canonical\nisomorphism between $K_{\\Gamma}(X)$ and $K( Y)$. For $\\Gamma$ trivial\nIII reduces to II.\nThe graded dimensions of the three spaces have been shown to\ncoincide \\cite{W}. The complexity of the geometry involved\ndecreases significantly from I to II, and then to III. In each of\nthe three setups various algebraic structures have been\nconstructed in \\cite{Gr, N2, S2, W} such as Hopf algebra, vertex\noperators, and Heisenberg algebra, etc. 
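\n\nFor orientation, the common graded dimension can be made explicit. In the relevant case where the cohomology of $ Y$ is concentrated in even degrees, the formula of \cite{G} gives\n$$\n\sum_{n \geq 0} q^n \dim H( Y^{[n]}) \ =\ \prod_{m \geq 1} (1 - q^m)^{- \dim H( Y)}~,\n$$\nand the graded dimensions in II and III are given by the same product, with the exponent $\dim H( Y)$ replaced by $\dim \big( K^{ {\footnotesize {top}} } ( Y) \bigotimes {\Bbb C} \big)$ and $\dim \big( K^{ {\footnotesize {top}} }_{\Gamma} (X) \bigotimes {\Bbb C} \big)$ respectively; these numbers agree under the canonical isomorphism between $K_{\Gamma}(X)$ and $K( Y)$ assumed above.\n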
We remark that there has\nbeen a construction of an additive isomorphism between the spaces\nin I and II due to de Cataldo and Migliorini \\cite{CM}.\n\nIn a most important case when $X$ is $ {\\Bbb C} ^2$ acted upon by a finite\nsubgroup $\\Gamma \\subset SL_2( {\\Bbb C} )$ and $Y$ is the minimal resolution\n$ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $ of $ {\\Bbb C} ^2 \/\\Gamma$, the above diagram reduces to the following\none:\n %\n\\begin{eqnarray} \\label{eq_mckay}\n \\begin{array}{ccc}\n \\mbox{I} & & \\mbox{II} \\\\\n\\bigoplus_{n \\geq 0} H( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})\n & \\leftarrow -\\rightarrow\n & \\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{S_n } ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n) \\bigotimes {\\Bbb C} \\\\\n %\n & \\; \\,\\nwarrow \\quad \\quad &\\uparrow \\\\\n %\n & \\quad \\searrow & \\downarrow \\\\\n & & \\mbox{III} \\\\\n & & \\bigoplus_{n \\geq 0} R (\\G_n )\n \\end{array}\n\\end{eqnarray}\n %\nby using the Thom isomorphism between $K^{ {\\footnotesize {top}} }_{\\G_n} ( {\\Bbb C} ^{2n})$\nand the representation ring $R_{ {\\Bbb Z} }(\\G_n)$ of the wreath product.\nHere $R (\\G_n ) = R_{ {\\Bbb Z} } (\\G_n) \\otimes {\\Bbb C} $ in our notation. It was\npointed out in \\cite{W} that the Frenkel-Kac-Segal homogeneous\nvertex representation can be realized in terms of representation\nrings of such wreath products. Such a finite group theoretic\nconstruction, which can be viewed as a new form of McKay\ncorrespondence, has been firmly established recently in our work\n\\cite{FJW} jointly with I.~Frenkel and Jing. It remains a big\npuzzle however why there are many parallel algebraic structures in\nthese different setups.\n\nIn the present paper we propose a coherent approach to fill in the\ngap (at least in the setup of diagram (\\ref{eq_mckay}) above) and\npresent several canonical ingredients in our approach. More\nexplicitly, we provide direct links from wreath products to\nHilbert schemes. We find a natural interpretation of a main\ningredient (the so-called weighted bilinear form) in \\cite{FJW},\nwhich brings us one step closer to a {\\em direct} isomorphism of\nthe two forms of McKay correspondence respectively in terms of\nHilbert schemes and wreath products. We also establish\nisomorphisms of various algebraic structures in II and III. Let us\ndiscuss in more detail.\n\nGiven a finite subgroup $\\Gamma$ of $SL_2 ( {\\Bbb C} )$, we observe that there\nis a natural identification between $ {\\Bbb C} ^{2n} \/\\G_n$ and the $n$-th\nsymmetric product $( {\\Bbb C} ^2 \/\\Gamma)^{(n)}$ of the simple singularity\n$ {\\Bbb C} ^2 \/\\Gamma$. The following commutative diagram \\cite{W}\n %\n\\begin{eqnarray*}\n \\begin{array}{ccc}\n \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} & \\stackrel{\\pi_n}{\\longrightarrow}\n & ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} )^n \/ S_n \\\\\n \\downarrow \\tau_n & & \\downarrow \\tau_{(n)} \\\\\n {\\Bbb C} ^{2n} \/\\G_n & \\cong & ( {\\Bbb C} ^2 \/ \\Gamma)^n \/S_n .\n \\end{array}\n\\end{eqnarray*}\n %\ndefines a resolution of singularities $\\tau_n : \\ale^{[n]} \\rightarrow\n {\\Bbb C} ^{2n} \/\\G_n$, where $\\tau_{(n)}$ is naturally induced from the\nminimal resolution $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} \\rightarrow {\\Bbb C} ^2 \/ \\Gamma$. We show that\n$\\tau_n$ is a semismall crepant resolution, which provides the\nfirst direct link between wreath products and Hilbert schemes. 
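\n\nNote that the identification in the bottom row of the diagram is simply a quotient taken in stages: since $\Gamma^n$ is a normal subgroup of $\G_n$ with quotient $S_n$, we have\n$$\n {\Bbb C} ^{2n} \/ \G_n \ =\ \big( ( {\Bbb C} ^2)^n \/ \Gamma^n \big) \/ S_n \ =\ ( {\Bbb C} ^2 \/ \Gamma)^n \/ S_n~.\n$$\n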
We\nshow that the fiber of $\tau_n$ over $[0]\in {\Bbb C} ^{2n} \/\G_n$\n(associated to the origin of $ {\Bbb C} ^{2n}$) is of pure dimension $n$\nand we give an explicit description of its irreducible components.\n\nWe conjecture that there exists a canonical isomorphism between\nthe Hilbert quotient $ \C^{2n} \/\/ \Gn $ of $ {\Bbb C} ^{2n}$ by $\G_n$ (see\n\cite{Ka} or Subsection~\ref{subsect_link} for the definition of\nHilbert quotient) and the Hilbert scheme $ \widetilde{ {\Bbb C} ^2 \/ \Gamma} ^{[n]}$, and provide\nseveral nontrivial steps toward establishing this conjecture. More\nexplicitly, we first single out a distinguished nonsingular\nsubvariety $X_{\G,n}$ of $( {\Bbb C} ^2)^{[nN]}$ and construct a morphism\n$\varphi$ from $ \C^{2n} \/\/ \Gn $ to $X_{\G,n}$, where $N$ is the order of the\ngroup $\Gamma$. We use here a description of a set of generators for\nthe algebra of $\G_n$-invariant regular functions on $ {\Bbb C} ^{2n}$\nwhich is a generalization of a theorem of Weyl \cite{Wey}. It\nfollows by construction that our morphism from $ \C^{2n} \/\/ \Gn $ to\n$X_{\G,n}$, when restricted to a certain Zariski open set, is indeed an\nisomorphism. We give a quiver variety description of $X_{\G,n}$ and\n$ {\Bbb C} ^{2n} \/\G_n$ in the sense of Nakajima \cite{N, N3}. Such an\nidentification follows easily from Nakajima's quiver\nidentification of the Hilbert scheme of points on $ {\Bbb C} ^2$ (cf.\n\cite{N2} and Varagnolo-Vasserot \cite{VV}). According to Nakajima\n\cite{N4}, it can be shown essentially by using\nKronheimer-Nakajima \cite{KN} that the Hilbert scheme $ \widetilde{ {\Bbb C} ^2 \/ \Gamma} ^{[n]}$\nis a quiver variety associated to the same quiver data as $X_{\G,n}$\nbut with a different stability condition. It follows that $X_{\G,n} $\nand $ \ale^{[n]} $ are diffeomorphic by Corollary 4.2 in \cite{N}. In\nthis way we have obtained a second direct link between $\G_n$ and\n$ \widetilde{ {\Bbb C} ^2 \/ \Gamma} ^{[n]}$. One plausible way to establish our main conjecture\nwould be to show that $\varphi$ is an isomorphism between\n$ \C^{2n} \/\/ \Gn $ and $X_{\G,n}$ and that the diffeomorphism between $X_{\G,n} $\nand $ \ale^{[n]} $ is indeed an isomorphism of complex varieties.\n\nOur construction contains two distinguished cases which have been\nstudied by others. For $n =1$ the morphism above becomes an\nisomorphism due to Ginzburg-Kapranov (unpublished) and,\nindependently, Ito-Nakamura \cite{INr}. Our morphism above also\ngeneralizes Haiman's construction \cite{H} of a morphism from\n$ \C^{2n} \/\/ S_n $ to $( {\Bbb C} ^2)^{[n]}$ which corresponds to our special case\nfor $\Gamma$ trivial (where no passage to the quiver variety\ndescription is needed). Haiman \cite{H} has in addition shown that\nthe morphism being an isomorphism is equivalent to the validity of\nthe remarkable $n!$ conjecture due to Garsia and Haiman \cite{GH}.\n(We remark that there has also been an attempt by Bezrukavnikov and\nGinzburg \cite{BG} to establish this conjectural isomorphism\nfor $\Gamma$ trivial.) Very recently a proof of the $n!$ conjecture\n(and this isomorphism conjecture) has been announced by Haiman on\nhis homepage by establishing a Cohen-Macaulay property of a\ncertain universal scheme which was conjectured in \cite{H}. 
It is\nnatural for us to conjecture similarly a Cohen-Macaulay property\nof a certain universal scheme in our setup\n(Conjecture~\\ref{conj_cohen}), which is sufficient to imply that\n$\\varphi$ is an isomorphism.\n\nA distinguished virtual character of $\\G_n$ has been used to\nconstruct a semipositive definite symmetric bilinear form (called\na weighted bilinear form) on $R_{ {\\Bbb Z} }(\\G_n)$ which plays a\nfundamental role in the wreath product approach to McKay\ncorrespondence \\cite{FJW}. Indeed it is given by the $n$-th tensor\npower of the McKay virtual character $\\lambda ( {\\Bbb C} ^2)$ of $\\Gamma$. On the other\nhand, the virtual character $\\lambda ( {\\Bbb C} ^{2n})$ of $\\G_n$ induced\nfrom the Koszul-Thom class defines a canonical bilinear form on\nthe Grothendieck group $K^0_{\\G_n} ( {\\Bbb C} ^{2n})$ of the bounded\nderived category $D^0_{\\G_n} ( {\\Bbb C} ^{2n})$ consisting of $\\G_n$-equivariant\ncoherent sheaves whose cohomology sheaves are\nconcentrated on the origin. Although they are defined very\ndifferently, these two virtual characters of $\\G_n$ are shown to\ncoincide. This establishes an isometry between $K^0_{\\G_n}\n( {\\Bbb C} ^{2n})$ and $R_{ {\\Bbb Z} }(\\G_n)$ endowed with the weighted bilinear\nform, and thus provides a natural explanation of the weighted\nbilinear form introduced from a purely group theoretic\nconsideration. (Actually we establish the coincidence of the virtual\ncharacters for more general $\\Gamma \\subset GL_k ( {\\Bbb C} )$ and the induced\n$\\G_n$-action on $ {\\Bbb C} ^{kn}$.) We regard this isometry as an\nimportant ingredient toward a direct isomorphism of the two forms\nof McKay correspondence realized respectively in terms of Hilbert\nschemes \\cite{N2, Gr} and of wreath products \\cite{FJW}.\n\nWhile our motivation is quite different, our main conjecture fits\ninto the scheme of Reid \\cite{R}, who asks for which finite subgroups\n$G \\subset SL_K( {\\Bbb C} )$ the Hilbert quotient $ {\\Bbb C} ^K \/\/ G$ (also called\nthe $G$-Hilbert scheme on $ {\\Bbb C} ^K$) is a crepant resolution of $ {\\Bbb C} ^K\/G$.\nNote that the notion of McKay correspondence is meant in the\nstrict sense in this paper while the McKay correspondence in\n\\cite{R} is in a generalized sense. Our work provides supporting\nevidence for an affirmative answer to the McKay correspondence in\nthe sense of \\cite{R} for $ {\\Bbb C} ^{2n}$ acted upon by $\\G_n$, which in\nturn is a key step toward a direct isomorphism of the two forms of\nMcKay correspondence mentioned above.\n\nBy applying a remarkable theorem of Bridgeland-King-Reid\n\\cite{BKR} to our situation, our main conjecture on the\nisomorphism between Hilbert quotients and Hilbert schemes implies\nthe equivalence of bounded derived categories among $D_{\\G_n}\n( {\\Bbb C} ^{2n})$, $D_{S_n} ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n ),$ and $D ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$. For $n =1$\nthis is a theorem due to Kapranov-Vasserot \\cite{KV}. Such an\nequivalence can be viewed as a direct connection among the objects\nin the diagram (\\ref{eq_mckay}), where K-groups of topological\nvector bundles are replaced by K-groups of sheaves and the connection\nbetween K-groups and homology is made via the Chern character.\n\nIn the end we establish a direct isomorphism of various algebraic\nstructures in II and III in the diagram (\\ref{eq_master}). 
More\nexplicitly, we construct Schur bases of the equivariant K-groups\nin II and III and show that a canonical one-to-one correspondence\nbetween these two bases gives the desired isomorphism for various\nalgebraic structures such as Hopf algebras, $\\lambda$-rings, and\nHeisenberg algebras.\n\nThe plan of the paper is as follows. In\nSect.~\\ref{sect_morphism} we study the resolution of singularities\n$\\tau_n$ and provide various steps toward establishing our\nconjecture on the isomorphism between the Hilbert quotient\n$ \\C^{2n} \/\/ \\Gn $ and the Hilbert scheme $ \\ale^{[n]} $. In\nSect.~\\ref{sect_mckay} we show that two virtual characters of\n$\\G_n$ arising from different setups coincide with each other and\ndiscuss various implications of this and our main conjecture. In\nSect.~\\ref{sect_ktheory} we establish isomorphisms of various\nalgebraic structures in II and III of diagram (\\ref{eq_master}).\n\n\\noindent {\\bf Acknowledgment.} Some ideas of the paper were\nfirst conceived when I was visiting the Max-Planck Institut f\\\"ur\nMathematik (MPI) at Bonn in 1998. I thank MPI for a stimulating\natmosphere. Hiraku Nakajima's papers and lecture notes are sources\nof inspiration. I am grateful to him for the influence of his\nwork. His comments on an earlier version of this paper helped much\nto clarify the exposition of Subsection~\\ref{subsect_quiver}. I\nthank Mark de Cataldo for helpful discussions and comments. I also\nthank Igor Frenkel for his comments and for informing me that he has\nbeen recently thinking about some related subjects from a somewhat\ndifferent viewpoint.\n\\section{Wreath products and Hilbert schemes}\n\\label{sect_morphism}\nIn this section, we establish some direct\nconnections between wreath products and Hilbert schemes, i.e.\nbetween I and III in the diagram (\\ref{eq_mckay}). In particular,\nfor $\\Gamma$ trivial it reduces to relating I and II.\n\\subsection{The wreath products}\nGiven a finite group $\\Gamma$, we denote by $\\Gamma^*$ the set of all the\ninequivalent complex irreducible characters $\\{ \\gamma_0, \\gamma_1,\n\\ldots, \\gamma_r \\}$ and by $\\Gamma_*$ the set of conjugacy classes. We\ndenote by $\\gamma_0$ the trivial character and by $\\Gamma^*_0$ the set of\nnon-trivial characters $\\{\\gamma_1, \\ldots, \\gamma_r \\}$. The $ {\\Bbb C} $-span of\n$\\gamma \\in \\Gamma^*$, denoted by $ R(\\Gamma) $, can be identified with\nthe space of class functions on $\\Gamma$. We denote by $R_{\\Bbb Z} (\\G)$ the\nintegral span of the irreducible characters of $ \\Gamma$.\n\nLet $\\Gamma^n = \\Gamma \\times \\cdots \\times \\Gamma$ be the $n$-th\ndirect product of $\\Gamma$. The symmetric group $S_n$ acts on\n$\\Gamma^n$ by permutations: $\\sigma (g_1, \\cdots, g_n)\n = (g_{\\sigma^{ -1} (1)}, \\cdots, g_{\\sigma^{ -1} (n)}).\n$\nThe wreath product of $\\Gamma$ with $S_n$ is defined to be the\nsemi-direct product $$\n \\Gamma_n = \\{(g, \\sigma) | g=(g_1, \\cdots, g_n)\\in {\\Gamma}^n,\n\\sigma\\in S_n \\} $$\n with the multiplication given by\n $ (g, \\sigma)\\cdot (h, \\tau)=(g \\, {\\sigma} (h), \\sigma \\tau ) .\n$ Note that $\\Gamma^n$ is a normal subgroup of $\\G_n$.\n\nLet $\\lambda=(\\lambda_1, \\lambda_2, \\cdots, \\lambda_l)$ be a partition: $\\lambda_1\\geq\n\\dots \\geq \\lambda_l \\geq 1$. The integer $|\\lambda|=\\lambda_1+\\cdots+\\lambda_l$\nis called the {\\em weight}, and $l (\\lambda ) =l$ is called the {\\em\nlength} of the partition $\\lambda $. 
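For instance, the partition $\\lambda = (3,2,2,1)$ has weight\n$|\\lambda| = 3 +2 +2 +1 = 8$ and length $l (\\lambda) = 4$. 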
We will often make use of another\nnotation for partitions: $ \\lambda=(1^{m_1}2^{m_2}\\cdots) , $ where\n$m_i$ is the number of parts in $\\lambda$ equal to $i$. In this\nnotation the partition above is written as $(1^1 2^2 3^1)$.\n\nGiven a family of partitions $\\rho=(\\rho(x))_{x\\in S}$ indexed by\na finite set $S$, we define the {\\em weight} of $\\rho$ to be\n$$\\|\\rho\\|=\\sum_{x\\in S}|\\rho(x)|.$$ Sometimes it is convenient to\nregard $\\rho=(\\rho(x))_{x\\in S}$ as a partition-valued function on\n$S$. We denote by ${\\cal P}(S)$ the set of all partitions indexed\nby $S$ and by ${\\cal P}_n(S)$ the set of all partitions $\\rho$ in\n${\\cal P}(S)$ of weight $n$.\n\nThe conjugacy classes of ${\\Gamma}_n$ can be described in the\nfollowing way. Let $x=(g, \\sigma )\\in {\\Gamma}_n$, where $g=(g_1,\n\\cdots, g_n) \\in {\\Gamma}^n,$ $ \\sigma \\in S_n$. The permutation\n$\\sigma $ is written as a product of disjoint cycles. For each\nsuch cycle $y=(i_1 i_2 \\cdots i_k)$ the element $g_{i_k} g_{i_{k\n-1}} \\cdots g_{i_1} \\in \\Gamma$ is determined up to conjugacy in\n$\\Gamma$ by $g$ and $y$, and will be called the {\\em\ncycle-product} of $x$ corresponding to the cycle $y$. For any\nconjugacy class $c$ and each integer $i\\geq 1$, the number of\n$i$-cycles in $\\sigma$ whose cycle-product lies in $c$ will be\ndenoted by $m_i(c)$. Denote by $\\rho (c)$ the partition $(1^{m_1\n(c)} 2^{m_2 (c)} \\ldots )$. Then each element $x=(g, \\sigma)\\in\n{\\Gamma}_n$ gives rise to a partition-valued function $( \\rho\n(c))_{c \\in \\Gamma_*} \\in {\\mathcal P} ( \\Gamma_*)$ such that $\\sum_{i, c}\ni m_i(c) =n$. The partition-valued function $\\rho =( \\rho(c))_{ c\n\\in \\Gamma_*} $ is called the {\\em type} of $x$. It is well known (cf.\n\\cite{M,Z}) that any two elements of ${\\Gamma}_n$ are conjugate in\n${\\Gamma}_n$ if and only if they have the same type.\n\\subsection{A resolution of singularities} \\label{subsect_fiber}\nLet $X$ be a smooth complex algebraic variety acted upon by a\nfinite group $\\Gamma$ of order $N$. We denote by $X^{[n]}$ the Hilbert\nscheme of $n $ points on $X$ and denote by $X^{(n)} = X^n \/S_n$\nthe $n$-th symmetric product. Both $X^{[n]}$ and $X^{(n)}$ carry\nan induced $\\Gamma$-action from $X$.\n\nNow assume $X$ is a quasi-projective surface. A beautiful theorem\nof Fogarty \\cite{Fo} states that $X^{[n]}$ is non-singular of\ndimension $2n$. It is well known (cf. e.g. \\cite{N2}) that the\nHilbert-Chow morphism $X^{[n]} \\rightarrow X^{(n)}$ is a\nresolution of singularities. Given a partition $\\nu$ of $n$ of\nlength $l$: $\\nu_1 \\geq \\nu_2 \\geq \\ldots \\geq \\nu_l\n>0$, we define\n\\[\nX^{(n)}_{\\nu} = \\left\\{ \\sum_{i =1}^l \\nu_i x_i \\in X^{(n)} |x_i\n\\neq x_j \\mbox{ for } i \\neq j \\right\\}.\n\\]\nA natural stratification of $X^{(n)}$ is given by\n\\begin{eqnarray*}\nX^{(n)} = \\bigsqcup_{\\nu} X^{(n)}_{\\nu}.\n\\end{eqnarray*}\n\nIn the remainder of this section, we let $\\Gamma$ be a finite subgroup\nof $ SL_2 ( {\\Bbb C} )$ unless otherwise specified. The classification of\nfinite subgroups of $SL_2 ( {\\Bbb C} )$ is well known. The following is a\ncomplete list of them: the cyclic, binary dihedral, binary tetrahedral,\nbinary octahedral and binary icosahedral groups. 
We denote by $\\tau : \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} \\rightarrow {\\Bbb C} ^2 \/\\Gamma$ the minimal resolution of the simple singularity.\n\nA canonical identification between $ {\\Bbb C} ^{2n} \/\\G_n$ and $( {\\Bbb C} ^2\n\/\\Gamma)^n \/S_n $ is given as follows: given a $\\G_n$-orbit, say $\\G_n .\n(x_1, \\ldots, x_n)$ for some $(x_1, \\ldots, x_n) \\in ( {\\Bbb C} ^2)^n =\n {\\Bbb C} ^{2n}$, we obtain a point $[\\Gamma.x_1] +\\ldots + [\\Gamma.x_n]$ in\n$( {\\Bbb C} ^2 \/ \\Gamma)^n \/S_n $, where $\\Gamma.x_i$ denotes the $\\Gamma$-orbit of\n$x_i$, i.e. a point in $ {\\Bbb C} ^2 \/\\Gamma$. It is easy to see that this map\nis independent of the choice of the representative\n$(x_1,\\ldots,x_n)$ in the $\\G_n$-orbit and it is one-to-one. The\nfollowing commutative diagram \\cite{W}\n %\n\\begin{eqnarray} \\label{eq_mine}\n \\begin{array}{ccc}\n \\ale^{[n]} & \\stackrel{\\pi_n}{\\longrightarrow}\n & ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} )^n \/ S_n \\\\\n \\downarrow \\tau_n & & \\downarrow \\tau_{(n)} \\\\\n {\\Bbb C} ^{2n} \/\\G_n & \\cong & ( {\\Bbb C} ^2 \/ \\Gamma)^n \/S_n\n \\end{array}\n\\end{eqnarray}\n %\ndefines a morphism $\\tau_n : \\ale^{[n]} \\rightarrow {\\Bbb C} ^{2n} \/\\G_n.$\n\n\\begin{proposition} \\label{prop_resol}\nThe morphism $\\tau_n : \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} \\rightarrow {\\Bbb C} ^{2n} \/\\G_n $ is a\nsemismall crepant resolution of singularities.\n\\end{proposition}\n %\n\\begin{demo}{Proof}\nIt is clear by definition that $\\tau_n$ is a resolution of\nsingularities. We now describe a stratification of $ {\\Bbb C} ^{2n} \/\\G_n$.\n\nThe simple singularity $ {\\Bbb C} ^2 \/\\Gamma$ has a stratification given by\nthe singular point $o$ and its complement denoted by $ {\\Bbb C} _0^2 \/\\Gamma$.\nIt follows that a stratification of $( {\\Bbb C} ^2\/ \\Gamma)^n$ is given by the $n\n+1 $ strata $( {\\Bbb C} ^2\/ \\Gamma)^n [i]$ $( 0 \\leq i \\leq n)$, where $( {\\Bbb C} ^2\/\n\\Gamma)^n [i]$ consists of the points in the Cartesian product $( {\\Bbb C} ^2\/\n\\Gamma)^n$ which have exactly $n - i$ components given by the singular\npoint. Then a stratification of $ {\\Bbb C} ^{2n} \/\\G_n = ( {\\Bbb C} ^2 \/ \\Gamma)^n \/S_n\n$ is given by\n\\begin{eqnarray*}\n( {\\Bbb C} ^2 \/ \\Gamma)^n \/S_n\n & =& \\bigsqcup_{i=0}^n ( {\\Bbb C} ^2\/ \\Gamma)^n [i] \/S_n \\\\\n & \\cong & \\bigsqcup_{i=0}^n ( {\\Bbb C} _0^2 \/\\Gamma)^i \/S_i \\times \\{ (n -i) o \\} \\\\\n & \\cong & \\bigsqcup_{i=0}^n ( {\\Bbb C} _0^2 \/\\Gamma)^{(i)} \\\\\n & \\cong & \\bigsqcup_{i=0}^n \\bigsqcup_{|\\mu| =i} ( {\\Bbb C} _0^2 \/\\Gamma)^{(i)}_{\\mu}.\n\\end{eqnarray*}\nThe codimension of the stratum $( {\\Bbb C} _0^2\/\\Gamma)^{(i)}_{\\mu}$ is $2n - 2\nl (\\mu)$. Clearly we also have\n\\begin{eqnarray*}\n\\tau_n^{-1} ( ( {\\Bbb C} _0^2 \/\\Gamma)^{(i)}_{\\mu} \\times \\{ (n -i) o \\} ) =\n ( {\\Bbb C} _0^2 \/\\Gamma)^{[i]}_{\\mu} \\times \\tau^{-1} (0)^{n -i}.\n\\end{eqnarray*}\nIt follows that the dimension of a fiber over a point in this\nstratum is equal to $(i - l (\\mu) ) + (n -i) = n - l (\\mu)$, which\nis half of the codimension of the stratum above. Thus $\\tau_n$ is\nsemismall.\n\nThe canonical bundles over $ {\\Bbb C} ^{2n} \/\\G_n$ and $ \\ale^{[n]} $ are trivial\ndue to the existence of holomorphic symplectic forms (note that\n$\\G_n$ preserves the symplectic form on $ {\\Bbb C} ^{2n}$). 
Thus $\\tau_n$\nis crepant.\n\\end{demo}\n\n\\begin{remark} \\rm\n\\begin{enumerate}\n\\item $\\tau_n$ is a symplectic resolution in the sense that\nthe pullback of the holomorphic symplectic form on $ {\\Bbb C} ^{2n} \/ \\G_n$\noutside of the singularities can be extended to a holomorphic\nsymplectic form on $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}$. When $n =1$ this becomes the\nminimal resolution $\\tau = \\tau_1 : \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} \\rightarrow {\\Bbb C} ^2 \/\\Gamma$.\n\\item It follows from the semismallness of $\\tau_n$\nthat $\\tau_{(n)} $ is also semismall. Thus the diagram\n(\\ref{eq_mine}) is remarkable in that all three maps $\\tau_n, \\pi_n$\nand $\\tau_{(n)} $ are semismall.\n\\item The resolution $\\tau_n : \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} \\rightarrow {\\Bbb C} ^{2n} \/\\G_n $\nis one-to-one over the non-singular locus of the orbifold $ {\\Bbb C} ^{2n}\n\/ \\G_n$ corresponding to regular $\\G_n$-orbits in $ {\\Bbb C} ^{2n}$.\n\\end{enumerate}\n\\end{remark}\n\nWe denote by $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n], 0}$ the fiber of $\\tau_n$ over $[0]$,\nwhere $[0]$ denotes the image in $ {\\Bbb C} ^{2n}\/\\G_n$ of the origin of\n$ {\\Bbb C} ^{2n}$. We assume that $\\Gamma$ is not trivial. By the diagram\n(\\ref{eq_mine}), we have\n %\n\\begin{eqnarray*}\n \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n], 0} = \\pi_n^{ -1} \\tau_{(n)}^{ -1} (0) = \\pi_n^{ -1}\n(D^{(n)}),\n\\end{eqnarray*}\nwhere $D = \\tau^{-1} (0)$ is the\nexceptional divisor in $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $. It is well known that the\nirreducible components of $D$ are projective lines $\\Sigma_{\\gamma}$\nparameterized by the set of non-trivial characters $\\gamma \\in \\Gamma^*_0$\n(cf. e.g. \\cite{GSV}).\n\nRecall that given an irreducible curve $\\Sigma \\subset \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $, the\nvariety\n\\[\nL^n \\Sigma := \\left\\{ I \\in \\ale^{[n]} | \\mbox{Supp}( {\\cal O} \/I\n)\\subset \\Sigma \\right\\} = \\pi_n^{-1} (\\Sigma^{(n)})\n\\]\nintroduced by Grojnowski \\cite{Gr} (also see \\cite{N2}) plays an\nimportant role in understanding the middle-dimensional homology\ngroups of Hilbert schemes and in connection with symmetric\nfunctions. One can show (cf. {\\em loc. cit.}) that the irreducible\ncomponents of $L^n \\Sigma$ are parameterized by partitions $\\nu$\nof $n$ and given by\n %\n\\begin{eqnarray*}\nL^{\\nu} \\Sigma = \\overline{ \\pi_n^{-1} (\\Sigma^{(n)}_{\\nu} ) } ,\n\\end{eqnarray*}\nwhere $ \\Sigma^{(n)}_{\\nu} $ is the stratum of the symmetric\nproduct $\\Sigma^{(n)}$ associated to $\\nu$.\n\nIt is interesting to observe that the fiber $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n], 0}$ is a\nnatural generalization of the above construction. Given $\\rho =\n(\\rho (\\gamma))_{\\gamma \\in \\Gamma^*_0} \\in {\\cal P}_n (\\Gamma^*_0)$, we set $n_i\n= |\\rho (\\gamma_i)|$ and define\n\\begin{eqnarray*}\nL^{\\rho}D = \\overline{ \\pi_n^{-1}\n\\left((\\Sigma_{\\gamma_1})^{(n_{1})}_{\\rho (\\gamma_1)} \\times \\ldots \\times\n(\\Sigma_{\\gamma_r})^{ (n_{r})}_{\\rho (\\gamma_r)} \\right)} .\n\\end{eqnarray*}\n\n\\begin{proposition} \\label{prop_fiber}\nLet $\\Gamma$ be non-trivial. The fiber $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n], 0}$ is of pure\ndimension $n$, and its irreducible components are given by\n$L^{\\rho}D, \\rho \\in {\\cal P}_n (\\Gamma^*_0)$.\n\\end{proposition}\n\n\\begin{demo}{Proof}\nThe component $L^{\\rho}D$ is irreducible since the fiber of\n$\\pi_n$ is so. 
$L^{\\rho}D$ is of dimension $n$ since the dimension\nof $(\\Sigma_{\\gamma})^{ |\\rho (\\gamma)|}_{\\rho (\\gamma)}$ $(\\gamma \\in \\Gamma^*_0)$ is\n$l( \\rho (\\gamma))$ and the dimension of the fiber of $\\pi_n$ is equal\nto $$\\sum_{i =1}^r \\left( |\\rho (\\gamma_i) | - l (\\rho (\\gamma_i)) \\right)\n= n - \\sum_i l( \\rho (\\gamma_i)). $$ Therefore the dimension of\n$L^{\\rho}D$ is $( n - \\sum_i l( \\rho (\\gamma_i)) ) + \\sum_i l( \\rho\n(\\gamma_i)) = n$.\n\\end{demo}\n\\subsection{Hilbert quotient and a subvariety of $( {\\Bbb C} ^2)^{[nN]}$}\n\\label{subsect_link}\n %\nLet $X$ be a smooth complex algebraic variety acted upon by a\nfinite group $\\Gamma$ of order $N$. A regular $\\Gamma$-orbit can be viewed\nas an element in the Hilbert scheme $X^{[N]}$ of $N$ points in\n$X$. The Hilbert quotient is the closure $X \/\/ \\G$ of the set of\nregular $\\Gamma$-orbits in $X^{[N]}$ (cf. \\cite{Ka}). It follows that\nthere exists a tautological vector bundle over $X \/\/ \\G$ of rank\n$N$. The group $\\Gamma$ acts on the tautological bundle fiberwise and\neach fiber is isomorphic to the regular representation of $\\Gamma$.\n\nNote that the wreath product $\\G_n $ acts faithfully on the affine\nspace $ {\\Bbb C} ^{2n} = ( {\\Bbb C} ^2)^n$. We make the following conjecture which\nprovides a key connection between I and III in the diagram\n(\\ref{eq_mckay}). We remark that a more general conjecture can be\neasily formulated in the setup of the diagram (\\ref{eq_master}).\n\n\\begin{conjecture} \\label{conj_main}\nThere exists a canonical isomorphism between the Hilbert quotient\n$ \\C^{2n} \/\/ \\Gn $ and the Hilbert scheme $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}$ of $n$ points on\n$ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $.\n\\end{conjecture}\n %\n\\begin{remark} \\rm \\label{rem_one}\n \\begin{enumerate}\n\\item\nConjecture~\\ref{conj_main} for $n =1$ reduces to a theorem due to\nGinzburg-Kapranov (unpublished) and independently Ito-Nakamura\n\\cite{INr} which says $ {\\Bbb C} ^2 \/\/ \\Gamma \\cong \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $.\n\\item When $\\Gamma$ is trivial and so $\\G_n $ is the\nsymmetric group $S_n$, Haiman \\cite{H} has shown that\nConjecture~\\ref{conj_main} for $\\Gamma$ trivial is equivalent to a\nremarkable {\\em $n!$ conjecture} due to Garsia and Haiman\n\\cite{GH}. A proof of the $n!$ conjecture has been very recently\nposted by Haiman on his UCSD webpage.\n\\item Conjecture~\\ref{conj_main} implies an\nisomorphism of Hilbert quotients:\n\\[\n \\C^{2n} \/\/ \\Gn \\cong ( {\\Bbb C} ^2 \/\/ \\Gamma )^n \/\/S_n,\n\\]\nwhich has a more symmetric form. On the other hand,\nassuming the $n!$ conjecture, one can show that the isomorphism of\nHilbert quotients above implies Conjecture~\\ref{conj_main}.\n\\end{enumerate}\n\\end{remark}\n\nSince $\\tau_n : \\ale^{[n]} \\rightarrow {\\Bbb C} ^{2n } \/\\G_n$ is a resolution\nof singularities (and thus proper) and $ {\\Bbb C} ^{2n } \/\\G_n$ is a\nnormal variety, the algebra of regular functions on $ \\ale^{[n]} $ is\nisomorphic to the algebra of $\\G_n$-invariant regular\nfunctions on $ {\\Bbb C} ^{2n}$. The following lemma gives a description of\nthe algebra of $\\G_n$-invariants in $ {\\Bbb C} [ {\\bf x} , {\\bf y} ]$, where we denote\nby $ {\\bf x} $ (resp. $ {\\bf y} $) the $n$-tuple $x_1, \\ldots, x_n$ (resp. $y_1,\n\\ldots, y_n$). 
It generalizes Weyl's theorem \\cite{Wey} for\nsymmetric groups and the proof is similar.\n\n\\begin{lemma} \\label{lem_weyl}\nThe algebra of invariants $ {\\Bbb C} [ {\\bf x} , {\\bf y} ]^{\\G_n}$ is generated by\n\\begin{eqnarray*}\n\\tilde{f} ( {\\bf x} , {\\bf y} ) = f(x_1, y_1) + f(x_2, y_2)\n + \\ldots + f(x_n, y_n),\n\\end{eqnarray*}\nwhere $f$ runs over an arbitrary linear basis $\\cal B$ for\nthe space of invariants $ {\\Bbb C} [x,y]^{\\Gamma}$.\n\\end{lemma}\n\n\\begin{demo}{Proof}\nWe prove the lemma by induction on $n$. When $n =1$ it is evident.\nAssume now that we have established the lemma for $n-1$. We use\n$ {\\bf x} '$ and $ {\\bf y} '$ to denote $x_2, \\ldots, x_n$ and respectively\n$y_2, \\ldots, y_n$. The space $ {\\Bbb C} [ {\\bf x} ', {\\bf y} ' ]$ is acted on by\nthe wreath product subgroup $\\Gamma_{n -1} \\subset \\G_n$.\n\nGiven any $F ( {\\bf x} , {\\bf y} ) \\in {\\Bbb C} [ {\\bf x} , {\\bf y} ]^{\\G_n}$, we can write it as a\nlinear combination of $x_1^{\\alpha} y_1^{\\beta} F_{\\alpha\n\\beta}'( {\\bf x} ', {\\bf y} '),$ where $ F_{\\alpha \\beta}'( {\\bf x} ', {\\bf y} ')$ is\nsome $\\Gamma_{n -1}$-invariant polynomial. By the induction assumption, we\ncan write $ F_{\\alpha \\beta}'( {\\bf x} ', {\\bf y} ')$ as a polynomial in\nterms of $\\tilde{f} ( {\\bf x} ', {\\bf y} ')$ where $f \\in {\\cal B}$, and in\nturn as a polynomial in terms of $\\tilde{f} ( {\\bf x} , {\\bf y} )$ and $x_1,\ny_1$. Therefore we can write $F ( {\\bf x} , {\\bf y} )$ as a linear combination\nof polynomials of the form $ x_1^{\\alpha} y_1^{\\beta} G_{\\alpha\n\\beta}( {\\bf x} , {\\bf y} ),$ where $ G_{\\alpha \\beta}( {\\bf x} , {\\bf y} )$ is a\npolynomial in terms of $\\tilde{f} ( {\\bf x} , {\\bf y} )$ where $f \\in \\cal\nB$. Since both $ F ( {\\bf x} , {\\bf y} )$ and $G_{\\alpha\n\\beta}( {\\bf x} , {\\bf y} )$ are $\\G_n$-invariant and thus in particular\ninvariant with respect to the symmetric group $S_n$ and the first\nfactor $\\Gamma$ in $\\Gamma^n \\subset \\G_n$, $F( {\\bf x} , {\\bf y} )$ becomes a linear\ncombination of $p_{\\alpha \\beta}( {\\bf x} , {\\bf y} ) G_{\\alpha \\beta}( {\\bf x} , {\\bf y} ).$\nHere $p_{\\alpha \\beta}( {\\bf x} , {\\bf y} )$ denotes $\\frac1{nN} \\sum_{i =1}^n\n\\sum_{g \\in \\Gamma} (g. x_i)^{\\alpha} (g.y_i)^{\\beta}$, the average of\n$x_1^{\\alpha} y_1^{\\beta}$ over $\\Gamma \\times S_n$ (which is the same\nas the average over $\\G_n$). We complete the proof by noting that\n$p_{\\alpha \\beta}( {\\bf x} , {\\bf y} )$ is a linear combination of $\\tilde{f}\n( {\\bf x} , {\\bf y} )$ where $f \\in \\cal B$.\n\\end{demo}\n\nGiven $J \\in \\C^{2n} \/\/ \\Gn $ we regard it as an ideal in $ {\\Bbb C} [ {\\bf x} , {\\bf y} ]$ of\ncolength $N^n n!$ (which is the order of $\\G_n$). Then the quotient\n$ {\\Bbb C} [ {\\bf x} , {\\bf y} ] \/ J$ affords the regular representation of\n$\\G_n$, and its only $\\G_n$-invariants are constants. Thus we have\n$\\tilde{f} ( {\\bf x} , {\\bf y} ) = c_f \\mbox{ mod }J$ for some constant $c_f$.\nRecall that $\\Gamma_{n -1}$ acts on $ {\\Bbb C} [ {\\bf x} ', {\\bf y} ']$. By\nLemma~\\ref{lem_weyl}, the space $ {\\Bbb C} [ {\\bf x} , {\\bf y} ]^{\\Gamma_{n -1}}$ is\ngenerated by $x_1, y_1$ and $f(x_2, y_2) + \\ldots + f(x_n, y_n)$,\nwhere $f$ runs over $\\cal B$.\nThe latter is equal to $c_f - f(x_1, y_1) \\mbox{ mod } J$. Thus\n$( {\\Bbb C} [ {\\bf x} , {\\bf y} ] \/ J)^{\\Gamma_{n -1}}$ is generated by $x_1, y_1$ and\n$c_f - f(x_1, y_1), f \\in {\\Bbb C} [x, y]^{\\Gamma}$. 
It follows that\n\\begin{eqnarray} \\label{eq_link}\n {\\Bbb C} [x_1, y_1 ] \/ (J \\cap {\\Bbb C} [x_1, y_1 ])\n \\equiv ( {\\Bbb C} [ {\\bf x} , {\\bf y} ] \/ J)^{\\Gamma_{n -1}},\n\\end{eqnarray}\nwhich has dimension $nN = | \\G_n | \/ | \\Gamma_{n -1} |$ because\n$( {\\Bbb C} [ {\\bf x} , {\\bf y} ] \/J)^{\\Gamma_{n -1}}$ can be identified with the space of\n${\\Gamma_{n -1}}$-invariants in the regular representation of $\\G_n$.\n\nThe first copy of $\\Gamma$ in the Cartesian product $\\Gamma^n \\subset \\G_n$\ncommutes with $\\Gamma_{n -1}$ above. It follows from (\\ref{eq_link})\nthat the quotient $ {\\Bbb C} [x_1, y_1 ] \/ (J \\cap {\\Bbb C} [x_1, y_1 ])$ as a\n$\\Gamma$-module is isomorphic to $R^n$, a direct sum of $n$ copies of\nthe regular representation $R$ of $\\Gamma$.\n\nThe $\\Gamma$-action on $ {\\Bbb C} ^2$ induces a $\\Gamma$-action on the Hilbert\nscheme $( {\\Bbb C} ^2 )^{[nN]}$ and the symmetric product $\n( {\\Bbb C} ^2)^{(nN)}$. The Hilbert-Chow morphism $( {\\Bbb C} ^2 )^{[nN]}\n\\rightarrow ( {\\Bbb C} ^2 )^{(nN)}$ induces one between the sets of\n$\\Gamma$-fixed points $( {\\Bbb C} ^2 )^{[nN], \\Gamma} \\rightarrow ( {\\Bbb C} ^2\n)^{(nN),\\Gamma}$. As the fixed point set of a non-singular variety by\nthe action of a finite group, $( {\\Bbb C} ^2 )^{[nN], \\Gamma}$ is\nnon-singular. Denote by $X_{\\G,n}$ the set of $\\Gamma$-invariant ideals\n$I$ in the Hilbert scheme $ ( {\\Bbb C} ^2)^{[nN]}$ such that the quotient\n$ {\\Bbb C} [x, y] \/ I$ is isomorphic to $R^n$ as a $\\Gamma$-module. Since the\nquotients $ {\\Bbb C} [x, y]\/I$ are isomorphic as $\\Gamma$-modules for all $I$\nin a given connected component of $( {\\Bbb C} ^2 )^{[nN], \\Gamma}$, the\nvariety $X_{\\G,n}$ is a union of components of $( {\\Bbb C} ^2 )^{[nN], \\Gamma}$.\nIn particular $X_{\\G,n}$ is non-singular (we shall see that $X_{\\G,n}$\nis indeed connected of dimension $2n$).\n\nTherefore by sending the ideal $J$ to the ideal $J \\cap {\\Bbb C} [x_1,\ny_1]$, we have defined a map $\\varphi$ from $ \\C^{2n} \/\/ \\Gn $ to $ ( {\\Bbb C} ^2\n)^{[nN]}$, whose image lies in $X_{\\G,n}$. We also denote $\\varphi:\n \\C^{2n} \/\/ \\Gn \\longrightarrow X_{\\G,n}$.\n\nThe map $\\varphi$ can also be understood as follows. Let\n${\\cal U}_{\\G, n} $ be the universal family over the Hilbert quotient\n$ \\C^{2n} \/\/ \\Gn $ which is a subvariety of the Hilbert scheme\n$( {\\Bbb C} ^{2n})^{[n!N^n]}$:\n\\begin{eqnarray} \\label{eq_quotuniv}\n \\begin{array}{ccc}\n {\\cal U}_{\\G, n} & \\longrightarrow & {\\Bbb C} ^{2n} \\\\\n \\downarrow & & \\\\\n \\C^{2n} \/\/ \\Gn & &\n \\end{array}\n\\end{eqnarray}\n %\nIt has a natural fiberwise $\\G_n$-action such that each fiber\ncarries the regular representation of $\\G_n$. Then ${\\cal U}_{\\G, n} \/\n\\Gamma_{n-1}$ is flat and finite of degree $nN$ over $ \\C^{2n} \/\/ \\Gn $, and\nthus can be identified with a family of subschemes of $ {\\Bbb C} ^2$ as\nabove. Then $\\varphi$ is the morphism given by the universal\nproperty of the Hilbert scheme $( {\\Bbb C} ^2 )^{[nN]}$ for the family\n${\\cal U}_{\\G, n} \/ \\Gamma_{n-1}$.\n\nThus we have established the following.\n\n\\begin{theorem} \\label{th_morph}\nWe have a natural morphism $\\varphi : \\C^{2n} \/\/ \\Gn \\longrightarrow\nX_{\\G,n}$ defined as above.\n\\end{theorem}\n\n\\begin{remark} \\rm\nFor $\\Gamma $ trivial, $\\G_n$ reduces to $S_n$ and $X_{\\G,n}$ becomes the\nHilbert scheme $( {\\Bbb C} ^2)^{[n]}$. 
In this case the above morphism\n$ \\C^{2n} \/\/ S_n \\rightarrow ( {\\Bbb C} ^2)^{[n]}$ was earlier constructed by\nHaiman \\cite{H}. We plan to elaborate further on generalizations\nof \\cite{H} to our setup in the future.\n\\end{remark}\n\n\\begin{conjecture}\nThe morphism $\\varphi : \\C^{2n} \/\/ \\Gn \\longrightarrow X_{\\G,n}$ is an\nisomorphism.\n\\end{conjecture}\n\nObserve that $ ( {\\Bbb C} ^2 )^{(nN),\\Gamma}$ is the subset of $( {\\Bbb C} ^2)^{(nN)}$\nconsisting of points\n\\begin{eqnarray*}\n\\sum_{\\gamma \\in \\Gamma} [\\gamma \\cdot x_1] + \\ldots + \\sum_{\\gamma \\in \\Gamma} [\\gamma \\cdot\nx_a] + (n -a) N [0], \\quad 1 \\leq a \\leq n, x_1 , \\ldots , x_a \\in\n {\\Bbb C} ^2 \\backslash 0,\n\\end{eqnarray*}\nwhich can be thought of as the $\\G_n$-orbit of $(x_1, \\ldots, x_a, 0,\n\\ldots, 0) \\in ( {\\Bbb C} ^2)^n = {\\Bbb C} ^{2n}$. In this way $( {\\Bbb C} ^2)^{(nN),\\Gamma}$\nis identified with $ {\\Bbb C} ^{2n} \/ \\G_n$. Thus we have proved the\nfollowing proposition by noting the inclusion $X_{\\G,n} \\subset\n( {\\Bbb C} ^2 )^{[nN], \\Gamma} $.\n\n\\begin{proposition} \\label{prop_fix}\nThe $\\Gamma$-fixed point set $( {\\Bbb C} ^2)^{(nN),\\Gamma}$ of the symmetric\nproduct $( {\\Bbb C} ^2)^{(nN)}$ can be canonically identified with\n$ {\\Bbb C} ^{2n} \/ \\G_n$. The Hilbert-Chow morphism $( {\\Bbb C} ^2)^{[nN]}\n\\rightarrow ( {\\Bbb C} ^2)^{(nN)}$, when restricted to the $\\Gamma$-fixed\npoint set, induces a canonical morphism $X_{\\G,n} \\rightarrow {\\Bbb C} ^{2n}\n\/ \\G_n$.\n\\end{proposition}\n\n\\begin{remark} \\rm \\label{rem_gener}\nTake an unordered $n$-tuple $T$ of distinct $\\Gamma$-orbits in $ {\\Bbb C} ^2\n\\backslash 0$. Such an $n$-tuple defines a set of $nN$ distinct\npoints in $ {\\Bbb C} ^2$, and thus can be regarded as an ideal $I(T)$ in\nthe Hilbert scheme $( {\\Bbb C} ^2)^{[nN]}$. This ideal is clearly\n$\\Gamma$-invariant and as a $\\Gamma$-module $ {\\Bbb C} [x, y] \/ I(T)$ is\nisomorphic to $R^n$. On the other hand observe that such an\n$n$-tuple $T$ can be canonically identified with a regular\n$\\G_n$-orbit in $ {\\Bbb C} ^{2n}$. In this way the sets of generic points\nin $X_{\\G,n}$ and $ {\\Bbb C} ^{2n} \/\\G_n$ coincide. It is easy to see that the\nmorphism $X_{\\G,n} \\rightarrow {\\Bbb C} ^{2n} \/ \\G_n$ above is surjective and\nit is one-to-one over the set of generic points in $ {\\Bbb C} ^{2n} \/ \\G_n$\nconsisting of regular $\\G_n$-orbits.\n\\end{remark}\n\nWe define the reduced universal scheme $W_{\\Gamma, n}$ as the reduced\nfibered product\n\\begin{eqnarray*}\n\\begin{array}{ccc}\n W_{\\Gamma, n} & {\\longrightarrow} & {\\Bbb C} ^{2n} \\\\\n \\downarrow & & \\downarrow \\\\\n X_{\\G,n} & \\stackrel{ \\tau_n}{\\longrightarrow} & {\\Bbb C} ^{2n} \/\\G_n .\n\\end{array}\n\\end{eqnarray*}\nIt is known that a finite surjective morphism from $Z$ to a\nnon-singular variety is flat if and only if $Z$ is Cohen-Macaulay.\n\n\\begin{conjecture} \\label{conj_cohen}\n$W_{\\Gamma, n}$ is Cohen-Macaulay.\n\\end{conjecture}\n\nUnder the assumption of the validity of\nConjecture~\\ref{conj_cohen}, the universal property of the\nHilbert scheme $( {\\Bbb C} ^{2n})^{[n!N^n]}$ induces a morphism $\\psi:\nX_{\\G,n} \\rightarrow ( {\\Bbb C} ^{2n})^{[n!N^n]}$, whose image lies in\n$ \\C^{2n} \/\/ \\Gn $. By Remark~\\ref{rem_gener} and the fact that the sets of\ngeneric points of $ \\C^{2n} \/\/ \\Gn $ and $ {\\Bbb C} ^{2n} \/ \\G_n$ coincide, the two\nmorphisms $\\varphi$ and $\\psi$ are mutually inverse to each other\nover generic points. 
It then follows that they are inverse\neverywhere, establishing Conjecture~\\ref{conj_main}.\n\n\\begin{remark} \\rm\nThe above Conjecture~\\ref{conj_cohen} for $\\Gamma$ trivial was first\nconjectured by Haiman \\cite{H}. The proof of the $n!$ conjecture\nannounced very recently by Haiman on his UCSD homepage is based on a\nproof of this conjecture of his.\n\\end{remark}\n\\subsection{A quiver variety description} \\label{subsect_quiver}\nWe first recall (cf. \\cite{N2}) that the Hilbert scheme\n$( {\\Bbb C} ^2)^{[K]}$ of $K$ points in $ {\\Bbb C} ^2$ admits a description in\nterms of a quiver consisting of one vertex and one arrow starting\nfrom the vertex and returning to the same vertex. More\nexplicitly, we denote\n %\n\\begin{eqnarray*}\n \\widetilde{H}(K) =\n\\left\\{\n\\begin{array}{lcl}\n &|& i) [B_1, B_2 ] +ij =0 \\\\\n(B_1, B_2, i, j) &|& ii) \\mbox{ there exists no proper subspace\n}\\\\\n &|& S \\subset\n {\\Bbb C} ^K \\mbox{ such that } B_{\\alpha} (S) \\subset S \\mbox{ and } \\\\\n &|& \\mbox{ im } i \\subset S \\quad (\\alpha =1,2)\n\\end{array}\n \\right\\} ,\n\\end{eqnarray*}\n %\nwhere $B_1, B_2 \\in \\mbox{End} ( {\\Bbb C} ^K ), i \\in \\mbox{Hom} ( {\\Bbb C} , {\\Bbb C} ^K ), j \\in\n \\mbox{Hom} ( {\\Bbb C} ^K, {\\Bbb C} )$. Then we have an isomorphism\n %\n\\begin{eqnarray} \\label{eq_quot}\n( {\\Bbb C} ^2)^{[K]} \\cong \\widetilde{H}(K)\/ GL_K ( {\\Bbb C} ) ,\n\\end{eqnarray}\n %\nwhere the action of $GL_K ( {\\Bbb C} )$ on $\\widetilde{H}(K)$ is given by\n %\n\\begin{eqnarray*}\n g \\cdot (B_1, B_2, i, j) = (g B_1 g^{ -1}, g B_2 g^{ -1}, g i, jg^{-1}).\n\\end{eqnarray*}\nIt is also often convenient to regard $(B_1, B_2)$ as an element of $ \\mbox{Hom} \n( {\\Bbb C} ^K , {\\Bbb C} ^2 \\otimes {\\Bbb C} ^K )$. We remark that one may drop $j$ in\nthe above formulation because one can show by using the stability\ncondition that $j =0$ (cf. \\cite{N2}).\n\nThe bijection in (\\ref{eq_quot}) is given as follows. For $I \\in\n( {\\Bbb C} ^2)^{[K]}$, i.e. an ideal in $ {\\Bbb C} [x, y]$ of colength $K$, the\nmultiplication by $x, y$ induces endomorphisms $B_1, B_2$ on the\n$K$-dimensional quotient $ {\\Bbb C} [x, y] \/I$, and the homomorphism $i\n\\in \\mbox{Hom} ( {\\Bbb C} , {\\Bbb C} ^K)$ is given by letting $i (1) =1 \\mbox{ mod }\nI$. Conversely, given $(B_1, B_2, i)$, we define a homomorphism\n$ {\\Bbb C} [x, y] \\rightarrow {\\Bbb C} ^K$ by $f \\mapsto f(B_1, B_2) i (1)$. One\ncan show by the stability condition that the kernel $I$ of this\nhomomorphism is an ideal of $ {\\Bbb C} [x, y]$ of colength $K$. One\neasily checks that the two maps are inverse to each other.\n\nSet $K =nN$, where $N$ is the order of $\\Gamma$. We may identify\n$ {\\Bbb C} ^K$ with $R^n$, $ {\\Bbb C} ^2$ with the defining representation $Q$ of\n$\\Gamma$ by the embedding $\\Gamma \\subset SL_2 ( {\\Bbb C} )$, and $ {\\Bbb C} $ with the\ntrivial representation of $\\Gamma$. Set\n\\begin{eqnarray*}\nM (n) & =& \\mbox{Hom} (R^n , Q \\otimes R^n ) \\bigoplus \\mbox{Hom} ( {\\Bbb C} ,\nR^n)\\bigoplus \\mbox{Hom} (R^n, {\\Bbb C} ).\n\\end{eqnarray*}\nBy definition $\\widetilde{H} (nN) \\subset M(n)$. Let $GL_{\\Gamma} (R)$\nbe the group of $\\Gamma$-equivariant automorphisms of $R$. Then the\ngroup $G \\equiv GL_{\\Gamma} (R^n) \\cong GL_n ( {\\Bbb C} ) \\times GL_{\\Gamma} (R)$\nacts on the $\\Gamma$-invariant subspace $M(n)^{\\Gamma} $. We have the\nfollowing description of $X_{\\G,n}$ as a quiver variety. This result\nis known to Nakajima \\cite{N4} (also cf. 
Theorem 1 of\nVaragnolo-Vasserot \\cite{VV})\\footnote{I.~Frenkel informed us that\nhe also noticed this recently.}.\n %\n\\begin{theorem} \\label{th_quiv}\nThe variety $X_{\\G,n} $ admits the following description:\n %\n\\begin{eqnarray*}\n X_{\\G,n} \\cong (\\widetilde{H} ( nN) \\cap M(n)^{\\Gamma} ) \/ GL_{\\Gamma} (R^n).\n\\end{eqnarray*}\nIn particular $X_{\\G,n} $ is non-singular of pure dimension $2n$.\n\\end{theorem}\n\n\\begin{remark} \\rm \\label{rem_clear}\nConsider the $\\Gamma$-module decomposition $Q \\otimes V_{\\gamma_i}=\n\\bigoplus_j a_{ij} V_{\\gamma_j}$, where $a_{ij} \\in {\\Bbb Z} _+$, and\n$V_{\\gamma_i} $ $(i =0, \\ldots, r)$ are the irreducible representations\ncorresponding to the characters $\\gamma_i$ of $\\Gamma$. Set $\\dim V_{\\gamma_i}\n= n_i$. Then\n\\begin{eqnarray}\n M(n)^{\\Gamma}\n & =& \\mbox{Hom} _{\\Gamma} (R^n ,Q \\otimes R^n ) \\bigoplus \\mbox{Hom} _{\\Gamma} ( {\\Bbb C} , R^n)\n \\bigoplus \\mbox{Hom} _{\\Gamma} (R^n, {\\Bbb C} )\n \\label{eq_invt} \\\\\n & =& \\mbox{Hom} _{\\Gamma} (\\sum_i {\\Bbb C} ^{n n_i} \\otimes V_{\\gamma_i},\n {\\Bbb C} ^2 \\otimes \\sum_i {\\Bbb C} ^{n n_i} \\otimes V_{\\gamma_i} )\\nonumber \\\\\n && \\bigoplus \\mbox{Hom} _{\\Gamma} ( {\\Bbb C} , R^n) \\bigoplus \\mbox{Hom} _{\\Gamma} (R^n, {\\Bbb C} ) \\nonumber \\\\\n & =& \\sum_{i j} a_{ij} \\mbox{Hom} ( {\\Bbb C} ^{n n_i} , {\\Bbb C} ^{n n_j})\n \\bigoplus \\mbox{Hom} ( {\\Bbb C} , V_{\\gamma_0}^n ) \\bigoplus \\mbox{Hom} (V_{\\gamma_0}^n, {\\Bbb C} ), \\nonumber\n\\end{eqnarray}\nwhere $ \\mbox{Hom} _{\\Gamma}$ stands for the $\\Gamma$-equivariant homomorphisms.\nIn the language of quiver varieties as formulated by Nakajima\n\\cite{N, N3}, the above description of $X_{\\G,n}$ identifies $X_{\\G,n}$\nwith a quiver variety associated to the following data: the graph\nconsists of the same vertices and edges as the McKay quiver, which\nis an affine Dynkin diagram associated to a finite subgroup $\\Gamma$\nof $SL_2 ( {\\Bbb C} )$; the vector space $V_i$ associated to the vertex\n$i$ is isomorphic to the direct sum of $n$ copies of the $i$-th\nirreducible representation $V_{\\gamma_i}$; the vector space $W_i =0$\nfor nonzero $i$ and $W_0 = {\\Bbb C} $.\n\\end{remark}\n\n\\begin{demo}{Proof of Theorem~\\ref{th_quiv}}\nOur proof is modeled on the proofs of Theorem 1.9 and Theorem 4.4\nin \\cite{N2}, which are special cases of our isomorphism for $\\Gamma$\ntrivial and for $n =1$ respectively. We sketch the argument below for the\nconvenience of the reader.\n\nOne shows $j =0$ by using the stability condition; accordingly, we\nmay discard the component $ \\mbox{Hom} (R^n, {\\Bbb C} )$ of $M(n)$ in the\ndimension count below. The isomorphism\nstatement follows directly from the description of ${ {\\Bbb C} ^2}^{[nN]}$\ngiven by (\\ref{eq_quot}), the definition of $X_{\\G,n}$, and\nEq.~(\\ref{eq_invt}). We have seen earlier that $X_{\\G,n}$ is\nnonsingular by construction.\n\nOne shows by a direct check that $[B_1, B_2]$ is a $\\Gamma$-equivariant\nendomorphism of $R^n$ for $(B_1, B_2) \\in \\mbox{Hom} _{\\Gamma} (R^n , {\\Bbb C} ^2\n\\otimes R^n )$. The cokernel of the differential of the map\n$(B_1,B_2, i) \\mapsto [B_1, B_2]$ from $M(n)^{\\Gamma}$ to\n$ \\mbox{End} _{\\Gamma}(R^n)$ consists of the $\\Gamma$-equivariant endomorphisms of $R^n$ which\ncommute with $B_1 $ and $B_2$. By sending $f \\mapsto f( i (1))$ we\ndefine a map from the cokernel to the $n$-dimensional space\n$ \\mbox{Hom} _{\\Gamma} ( {\\Bbb C} , R^n) \\cong V_{\\gamma_0}^n$. 
Conversely, given $v \\in\nV_{\\gamma_0}^n$, an endomorphism $f$ of $R^n$ is uniquely determined\nby the equation $f(B_1^a B_2^b i(1)) = B_1^a B_2^b v$ by the\nstability condition ii) in the definition of $\\widetilde{H} (nN)$.\nOne further checks that $f$ lies in the cokernel. These two maps\nare inverse to each other. Thus the cokernel has constant\ndimension $n$.\n\nThe dimension of $\\widetilde{H} ( nN) \\cap M(n)^{\\Gamma}$ is equal to\n$\\dim M(n)^{\\Gamma} +n - \\dim GL_{\\Gamma} (R^n) $ since the dimension of\nthe cokernel is $n$. Thus the quotient description of $X_{\\G,n}$\nimplies that the dimension of $X_{\\G,n} $ near $(B_1, B_2, i)$ is\ngiven by\n\\begin{eqnarray*}\n & & \\dim M(n)^{\\Gamma} +n - 2 \\dim GL_{\\Gamma} (R^n) \\\\\n & =& (n^2 \\dim \\mbox{Hom} _{\\Gamma} (R, {\\Bbb C} ^2 \\otimes R) +n)+ n -2 n^2 \\dim GL_{\\Gamma}\n (R) \\\\\n & =& n^2 (2 N) + n + n - 2 n^2 N \\\\\n & =& 2n,\n\\end{eqnarray*}\nwhich is independent of which component a point $(B_1, B_2, i)$\nis in. Here we\nhave used the fact that the (complex) dimension of $ \\mbox{Hom} _{\\Gamma} (R,\n {\\Bbb C} ^2 \\otimes R)$ is equal to $2N$ (cf. Kronheimer \\cite{Kr}).\n\\end{demo}\n\n\\begin{remark} \\rm\nRecall that the minimal resolution $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $ endowed with a certain\nhyper-Kahler structure is called an ALE space \\cite{Kr}.\nAccording to Nakajima \\cite{N4}, one can show that the Hilbert\nscheme $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}$ over an ALE space admits a quiver variety\ndescription in terms of the same quiver data as specified in\nRemark~\\ref{rem_clear} but with a different stability condition,\nby a modification of the proof for the description of the moduli\nspace of vector bundles over an ALE space \\cite{KN}. It follows by\nCorollary 4.2 of \\cite{N} that $ \\ale^{[n]} $ and $X_{\\G,n}$ are\ndiffeomorphic. We conjecture that they are indeed isomorphic as\ncomplex varieties. In this way we would have obtained a morphism\n$\\varphi: \\C^{2n} \/\/ \\Gn \\rightarrow \\ale^{[n]} $ by combining this with\nTheorem~\\ref{th_morph}.\n\\end{remark}\n\n\\begin{remark} \\rm\nIt follows readily from the affine algebro-geometric quotient\ndescription of the symmetric product $( {\\Bbb C} ^2)^{(nN)}$\n(Proposition~2.10, \\cite{N2}) that the $\\Gamma$-fixed-point set\n$( {\\Bbb C} ^2)^{(nN),\\Gamma}$ or rather the orbifold $ {\\Bbb C} ^{2n} \/ \\G_n$ (see\nProposition~\\ref{prop_fix}) has the following description (also\ncf. \\cite{VV}, Theorem~1):\n\\[\n {\\Bbb C} ^{2n} \/ \\G_n \\cong \\{(B_1, B_2, i, j) \\in M(n)^{\\Gamma} | [B_1, B_2]\n+ ij =0 \\}\n\/\/GL_{\\Gamma} (R^n).\n\\]\nIt follows from the general theory of quiver varieties that there\nis a natural projective morphism $ \\ale^{[n]} \\rightarrow {\\Bbb C} ^{2n} \/ \\G_n$\nwhich is a semismall resolution. We expect that this is the same\nas the semismall resolution $\\tau_n : \\ale^{[n]} \\rightarrow\n {\\Bbb C} ^{2n}\/\\G_n$ explicitly constructed by the diagram\n(\\ref{eq_mine}). We also expect that the intermediate variety\n$( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} )^n\/S_n$ (see (\\ref{eq_mine})) can also be identified with a\nquiver variety associated with the same quiver data (as specified\nin Remark~\\ref{rem_clear}) but with a new stability condition. 
In\nthis case it follows from Corollary 4.2 in \\cite{N} that the fiber\n$ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n], 0}$ is a Lagrangian subvariety in $ \\ale^{[n]} $ and it is\nhomotopy equivalent to $ \\ale^{[n]} $ (compare with\nProposition~\\ref{prop_fiber}).\n\\end{remark}\n\n\\begin{remark} \\rm\nQuiver varieties are connected \\cite{N, N3}. Thus the variety\n$X_{\\G,n}$ can also be defined as the closure of the set of ideals\n$I(T)$ in $( {\\Bbb C} ^2)^{[nN]}$ associated to unordered $n$-tuples $T$\nof distinct $\\Gamma$-orbits in $ {\\Bbb C} ^2 \\backslash 0$.\n\\end{remark}\n\\subsection{Canonical vector bundles}\n %\nSince $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $ is isomorphic to the Hilbert quotient $ \\C^{2} \/\/ \\G $, there\nexists the tautological vector bundle $\\cal R$ on $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $ of rank\n$N$, whose fiber affords the regular representation of $\\Gamma$ (cf.\n\\cite{GSV, N2}). It decomposes as follows:\n\\begin{eqnarray*}\n{\\cal R} \\cong \\sum_{\\gamma \\in \\Gamma^*} {\\cal R}_{\\gamma} \\bigotimes V_{\\gamma},\n\\end{eqnarray*}\nwhere $V_{\\gamma}$ is the irreducible representation of $\\Gamma$\nassociated to $\\gamma$ and ${\\cal R}_{\\gamma}$ is a vector bundle over\n$ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $ of rank equal to $ \\deg \\gamma$ (by definition $\\deg \\gamma = \\dim\nV_{\\gamma}$).\n\nOne can associate a vector bundle $E^{[n]}$ of rank $dn$ on the\nHilbert scheme $ \\ale^{[n]} $ to a rank $d$ vector bundle $E$ over $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $\nas follows: let $U \\subset \\ale^{[n]} \\times \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $ be the universal\nfamily\n\\begin{eqnarray*}\n\\begin{array}{ccc}\n U & \\stackrel{ p_1}{\\longrightarrow} & \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} \\\\\n p_2 \\downarrow\\quad & & \\\\\n \\ale^{[n]} & &\n\\end{array}\n\\end{eqnarray*}\nThen $U$ is flat and finite of degree $n$ over $ \\ale^{[n]} $, and $E^{[n]}$\nis defined to be $(p_2)_* p_1^* E$. In this way we obtain\ncanonical vector bundles ${\\cal R}^{[n]}$ and ${\\cal\nR}^{[n]}_{\\gamma}$ $(\\gamma \\in \\Gamma^*)$ over $ \\ale^{[n]} $ associated to $\\cal R$\nand ${\\cal R}_{\\gamma}$ above.\n\nThere exists a tautological vector bundle ${\\cal R}^{\\{n \\} }$ of\nrank $nN$ over $X_{\\G,n}$ (and thus over $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}$) induced from\nthe inclusion $X_{\\G,n} \\subset ( {\\Bbb C} ^2)^{[nN]}$. The group $\\Gamma$ acts\non ${\\cal R}^{\\{n \\} }$ fiberwise such that each fiber as a\n$\\Gamma$-module is isomorphic to $R^n$. 
Then we have a decomposition:\n\\begin{eqnarray*}\n{\\cal R}^{\\{n \\} }\n = \\bigoplus_{\\gamma \\in \\Gamma^*} {\\cal R}^{\\{n \\} }_{\\gamma} \\bigotimes V_{\\gamma},\n\\end{eqnarray*}\nwhere ${\\cal R}^{\\{n \\} }_{\\gamma}$ is a vector bundle over\n$ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}$ of rank equal to $n \\deg \\gamma$.\n\nThe principal bundle\n\\begin{eqnarray*}\n\\widetilde{H} (K) \\cap M(n)^{\\Gamma} \\stackrel{\/ G}{\\longrightarrow}\nX_{\\G,n} \\end{eqnarray*}\n %\n(which follows from Theorem~\\ref{th_quiv}) gives rise to various\ncanonical vector bundles associated to canonical representations\nof $G \\equiv GL_{\\Gamma} (R^n) \\cong GL_n ( {\\Bbb C} ) \\times GL_{\\Gamma} (R)$.\nFor example, the vector bundle associated to the representation\n$GL_{\\Gamma} (R^n) \\hookrightarrow GL (R^n)$ is exactly the above\ntautological vector bundle ${\\cal R}^{\\{n \\} }$ over $X_{\\G,n}$ (or\nrather over $ \\ale^{[n]} $); the one associated to $GL_{\\Gamma} (R^n)\n\\rightarrow GL ( \\gamma^n)$ is $ {\\cal R}^{\\{n \\} }_{\\gamma}$.\n\nIn the remainder of this section we assume the validity of\nConjecture~\\ref{conj_main} that $ \\C^{2n} \/\/ \\Gn $ is isomorphic to\n$ \\ale^{[n]} $. An immediate corollary is the existence of a tautological\nbundle $\\cal V$ on $ \\ale^{[n]} $ whose fiber affords the regular\nrepresentation of $\\G_n$. This comes from the tautological bundle\nover the Hilbert quotient $ \\C^{2n} \/\/ \\Gn $. It is well known (cf. e.g.\n\\cite{M, Z}) that the irreducible representations $S_{\\rho} $ of\n$\\G_n$ are parameterized by the set ${\\cal P}_n (\\Gamma^*)$ of\npartition-valued functions $\\rho$ of weight $n$ on the set $\\Gamma^*$\nof irreducible characters of $\\Gamma$. It follows that one has a\ndecomposition\n\\begin{eqnarray} \\label{eq_wreathreg}\n {\\cal V} = \\bigoplus_{\\rho \\in {\\cal\nP}_n (\\Gamma^*)} S_{\\rho} \\bigotimes {\\cal V}_{\\rho},\n\\end{eqnarray}\n %\nwhere ${\\cal V}_{\\rho}$ is a vector bundle on $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}$ of rank\nequal to $\\dim S_{\\rho} $. The vector bundles ${\\cal V}_{\\rho}$\nare expected to form a basis of the K-group of $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}$.\n\nWe denote by ${\\cal R}^{\\langle n \\rangle }$ the subbundle of\n$\\cal V$ which is given by the (fiberwise) $\\Gamma_{n -1}$-invariants\nof $\\cal V$. Since the first copy of $\\Gamma$ in $\\Gamma^n \\subset \\G_n$\ncommutes with $\\Gamma_{n -1}$, ${\\cal R}^{\\langle n \\rangle }$ has a\nfiberwise $\\Gamma$-action such that each fiber affords the $\\Gamma$-module\n$R^n$ (cf. Subsect.~\\ref{subsect_link}). It decomposes as\n\\begin{eqnarray*}\n{\\cal R}^{\\langle n \\rangle } = \\bigoplus_{\\gamma \\in \\Gamma^*} {\\cal\nR}^{\\langle n \\rangle }_{\\gamma} \\bigotimes V_{\\gamma},\n\\end{eqnarray*}\nwhere $ {\\cal R}^{\\langle n \\rangle }_{\\gamma}$ is a vector bundle\nover $ \\ale^{[n]} $ of rank $n \\deg \\gamma$.\n\nWe define the reduced universal scheme $ U_{\\G,n} $ as the reduced\nfibered product\n\\begin{eqnarray} \\label{eq_univ}\n\\begin{array}{ccc}\n U_{\\G,n} & {\\longrightarrow} & {\\Bbb C} ^{2n} \\\\\n \\downarrow & & \\downarrow \\\\\n \\ale^{[n]} & \\stackrel{ \\tau_n}{\\longrightarrow} & {\\Bbb C} ^{2n} \/\\G_n .\n\\end{array}\n\\end{eqnarray}\nUnder the isomorphism $\\varphi: \\C^{2n} \/\/ \\Gn \\rightarrow \\ale^{[n]} $, the\nuniversal schemes ${\\cal U}_{\\G, n} , U_{\\G,n} $ defined respectively by\n(\\ref{eq_quotuniv}) and (\\ref{eq_univ}) can be identified. 
The\nfollowing proposition now follows from the way we define $\\varphi$\n(cf. Subsect.~\\ref{subsect_link}).\n\n\\begin{proposition}\nThere is a natural identification between the vector bundles\n${\\cal R}^{[n]}$ and ${\\cal R}^{\\langle n \\rangle }$, and respectively between\n${\\cal R}^{[n]}_{\\gamma}$ and ${\\cal R}^{\\langle n \\rangle }_{\\gamma}$.\n\\end{proposition}\n\\section{On the equivalence of two forms of McKay correspondence}\n\\label{sect_mckay}\n\\subsection{A weighted bilinear form}\nIn this subsection we recall the notion of a {\\em weighted}\nbilinear form on $R(\\G_n)$ introduced in \\cite{FJW}.\n\nThe standard bilinear form on $R(\\Gamma )$ is defined as follows:\n\n\\begin{eqnarray*}\n\\langle f, g \\rangle_{\\Gamma} = \\frac1{ | \\Gamma |}\\sum_{x \\in \\Gamma}\n f(x) g(x^{ -1}).\n\\end{eqnarray*}\nWe will write $\\langle \\ , \\ \\rangle$ for $\\langle \\ , \\\n\\rangle_{\\Gamma }$ when no ambiguity may arise.\n\nLet us fix a virtual character $ \\xi \\in R(\\Gamma)$. The\nmultiplication in $ R(\\Gamma)$ corresponding to the tensor product of\ntwo representations will be denoted by $* $. Recall that $\\gamma_0,\n\\gamma_1, \\ldots, \\gamma_r$ are all the inequivalent irreducible\ncharacters of $\\Gamma$ and $\\gamma_0$ denotes the trivial character. We\ndenote by $a_{ij} \\in {\\Bbb Z} $ the (virtual) multiplicities of $\\gamma_j$\nin $ \\xi * \\gamma_i $, i.e. $ \\xi * \\gamma_i = \\sum_{j =0}^r a_{ij}\n\\gamma_j.$ We denote by $A$ the $ (r +1) \\times (r +1)$ matrix $ (\na_{ij})_{0 \\leq i,j \\leq r}$.\n\nWe introduce the following {\\em weighted bilinear form} on\n$R(\\Gamma)$:\n $$\n \\langle f, g \\rangle_{ \\xi } = \\langle \\xi * f , g \\rangle_{\\Gamma },\n \\quad f, g \\in R( \\Gamma).\n$$\nIt follows that $\\langle \\gamma_i, \\gamma_j \\rangle_{ \\xi } = a_{ij}.$\n\nThroughout this paper we will always assume that $ \\xi $ is {\\em\nself-dual}, i.e. $ \\xi (x) = \\xi (x^{-1}), x \\in \\Gamma$. The\nself-duality of $ \\xi $ implies that $ a_{ij} = a_{ji}, $ i.e. $A$\nis a symmetric matrix.\n\nGiven a representation $V$ of $\\Gamma$ with character $\\gamma \\in R(\\Gamma)$,\nthe $n$-th outer tensor product $V^{ \\otimes n} $ of $V$ can be\nregarded naturally as a representation of the wreath product $\\G_n$\nwhose character will be denoted by $\\eta_n ( \\gamma )$: the direct\nproduct $\\Gamma^n$ acts on $V^{\\otimes n}$ factor by factor while\n$S_n$ acts by permuting the $n$ factors. Denote by $\\varepsilon_n$ the\n(1-dimensional) sign representation of $\\G_n$ on which $\\Gamma^n$ acts\ntrivially while $S_n$ acts by sign. We denote by\n$\\varepsilon_n ( \\gamma ) \\in R(\\G_n)$ the character of the tensor\nproduct of $\\varepsilon_n$ and $V^{\\otimes n}$.\n\nWe may extend naturally $\\eta_n$ to a map from $R(\\Gamma)$ to\n$R(\\G_n)$. 
In particular, if $\\beta$ and $\\gamma $ are characters of\n$\\Gamma$, then\n\\begin{eqnarray} \\label{eq_virt}\n \\eta_n (\\beta - \\gamma) =\n \\sum_{m =0}^n ( -1)^m \\mbox{Ind}_{\\Gamma_{n -m} \\times \\Gamma_m }^{\\G_n}\n [ \\eta_{n -m} (\\beta) \\otimes \\varepsilon_m (\\gamma ) ] .\n\\end{eqnarray}\n\nWe define a {\\em weighted bilinear form} on $R( \\G_n)$ by\nletting $$\n \\langle f, g\\rangle_{ \\xi , \\G_n } =\n \\langle \\eta_n ( \\xi ) * f, g \\rangle_{\\G_n} ,\n \\quad f, g \\in R( \\G_n).\n$$ One can show that the bilinear form $\\langle \\ , \\\n\\rangle_{ \\xi , \\G_n}$ is symmetric.\nA symmetric bilinear form on $R_{\\G} = \\bigoplus_{n} R(\\G_n)$ is\nthen given by\n\\[\n\\langle u, v \\rangle_{ \\xi }\n = \\sum_{ n \\geq 0} \\langle u_n, v_n \\rangle_{ \\xi , \\G_n } ,\n\\]\nwhere $u = \\sum_n u_n$ and $v = \\sum_n v_n$ with $u_n, v_n \\in\nR(\\G_n)$.\n\nWe further specialize to the case when $\\Gamma$ is a finite subgroup\nof $SL_2 ( {\\Bbb C} )$ by an embedding $\\pi$, and fix the virtual\ncharacter $ \\xi $ of $\\Gamma$ to be $$\n \\lambda (\\pi) \\equiv \\sum_{i=0}^2 (-1)^i \\Lambda^i \\pi =2\\gamma_0 - \\pi,\n$$ where $\\Lambda^i$ denotes the $i$-th exterior power. We\nconstruct a diagram with vertices corresponding to elements $\\gamma_i$\nin $\\Gamma^*$ and we draw one edge (resp. two edges) between the\n$i$-th and $ j$-th vertices if $a_{ij} = -1$ (resp. $-2$).\nAccording to McKay \\cite{Mc}, the associated diagram can be\nidentified with an affine Dynkin diagram of ADE type and the matrix\n$A$ is the corresponding affine Cartan matrix. It is shown in\n\\cite{FJW} that the weighted bilinear form on $R_{ {\\Bbb Z} }(\\G_n)$ is\nsymmetric and semipositive definite.\n\\subsection{Identification of two virtual characters}\nIn this subsection we set $\\Gamma$ to be an arbitrary (not necessarily\nfinite) subgroup of $GL_k( {\\Bbb C} )$ unless otherwise specified. We\ndenote by $\\pi$ the $k$-dimensional defining representation of\n$\\Gamma$ for the embedding.\n\nThe wreath product $\\G_n$ acts naturally on $ {\\Bbb C} ^{kn} =( {\\Bbb C} ^k)^n$ by\nletting $\\Gamma^n$ act factor-wise and $S_n$ act as permutations of\nthe $n$ factors. We denote by $\\lambda ( {\\Bbb C} ^{kn})$ the virtual\ncharacter $\\sum_{i =0}^{kn} ( -1)^i \\Lambda^i {\\Bbb C} ^{kn}$ of $\\G_n$,\nwhere the $i$-th exterior power $ \\Lambda^i {\\Bbb C} ^{kn}$ carries an\ninduced $\\G_n$-action. The geometric significance of $\\lambda\n( {\\Bbb C} ^{kn})$ will become clear later. 
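\n\nFor example, let $\\Gamma = \\{ \\pm I \\} \\subset SL_2 ( {\\Bbb C} )$ be the cyclic\ngroup of order $2$, so that $k =2$ and $\\pi = 2 \\gamma_1$. Then\n$\\Lambda^0 \\pi = \\Lambda^2 \\pi = \\gamma_0$ and\n$$\n \\lambda (\\pi) = \\gamma_0 - 2 \\gamma_1 + \\gamma_0 = 2 \\gamma_0 - 2 \\gamma_1 ,\n$$\nso that the matrix $A = ( \\langle \\gamma_i, \\gamma_j \\rangle_{\\lambda (\\pi)} )$\nof the preceding subsection is\n$$\n A = \\left( \\begin{array}{rr} 2 & -2 \\\\ -2 & 2 \\end{array} \\right) ,\n$$\nthe affine Cartan matrix of type $A_1^{(1)}$, in accordance with \\cite{Mc}.\n\n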
Let $\\eta_n (\\lambda (\\pi))$\nbe the virtual character of $\\G_n$ built on the $n$-th tensor power of\nthe virtual character $\\lambda (\\pi ) = \\sum_{i =0}^k (-1)^i\n\\Lambda^i {\\Bbb C} ^k$ of $\\Gamma$.\n %\n\\begin{theorem} \\label{th_key}\n The virtual characters $\\lambda ( {\\Bbb C} ^{kn})$ and\n $\\eta_n (\\lambda (\\pi))$ of $\\G_n$ are equal.\n\\end{theorem}\n\n\\begin{demo}{Proof}\n Given $x \\times y \\in \\G_n$, where\n $x \\in \\Gamma_m$ and $y \\in \\Gamma_{n -m}$ for some $m$, we have by definition\n\\begin{eqnarray} \\label{eq_mult}\n \\eta_n (\\lambda (\\pi) ) (x \\times y)\n & =& \\eta_m (\\lambda (\\pi)) (x)\\; \\eta_{n-m} (\\lambda (\\pi))(y),\n \\nonumber \\\\\n \\lambda ( {\\Bbb C} ^{kn}) (x \\times y)\n & =& \\lambda ( {\\Bbb C} ^{km}) (x)\\; \\lambda ( {\\Bbb C} ^{k(n -m)})(y).\n\\end{eqnarray}\nThus it suffices to show that the character values $ \\eta_n\n(\\lambda (\\pi) ) (\\alpha, s)$ and $\\lambda ( {\\Bbb C} ^{kn}) (\\alpha, s)$\nare equal for $(\\alpha, s) \\in \\G_n,$ where $\\alpha =(g, 1, \\ldots,\n1) \\in \\Gamma^n$ and $s$ is an $n$-cycle, say $s = (12 \\ldots n)$. It\nis known (cf. \\cite{FJW}) that the character value of $\\eta_n\n( \\xi )(\\alpha, s)$ is $ \\xi ( c)$, where $ \\xi $ is a class function\nof $\\Gamma$ and $c$ is the conjugacy class of $g$. In particular\n %\n\\begin{eqnarray*}\n\\eta_n (\\lambda (\\pi) )(\\alpha, s) = \\lambda (\\pi) (c).\n\\end{eqnarray*}\n\nDenote by $x^i_a, a =1, \\ldots, n, i = 1, \\ldots, k$ the\ncoordinates in $ {\\Bbb C} ^{kn} = ( {\\Bbb C} ^k)^n.$ The $a$-th factor $\\Gamma$ in\n$\\Gamma^n \\subset \\G_n$ acts on $ {\\Bbb C} ^k$ with coordinates $x^i_a, i =1,\n\\ldots, k$.\n\nConsider the exterior monomial basis for $\\Lambda^i {\\Bbb C} ^{kn}$.\nGiven such a monomial $X$, we first observe that the coefficient\nof $X$ in $(\\alpha, s) .X$ is $0$ unless there are equal numbers\nof lower subscripts $1, 2, \\ldots, n$ for $x^i_a$ appearing in\n$X$. It follows that\n\\begin{eqnarray} \\label{eq_vanish}\n\\Lambda^i {\\Bbb C} ^{kn} (\\alpha, s) = \\mbox{trace }(\\alpha, s)\n\\mid_{\\Lambda^i {\\Bbb C} ^{kn}} = 0, \\quad \\mbox{if } i \\mbox{ is not\ndivisible by } n.\n\\end{eqnarray}\n\nFor $i =m n, 1\\leq m \\leq k$, we further observe that the\ncoefficient of $X$ in $(\\alpha, s) .X$ is $0$ unless the monomial\n$X$ is of the form\n\\begin{eqnarray*}\n X(i_1 \\ldots i_m) & =& x_1^{i_1} \\wedge x_2^{i_1} \\wedge \\ldots\n \\wedge x_n^{i_1} \\wedge \\\\\n & & x_1^{i_2} \\wedge x_2^{i_2} \\wedge \\ldots\n \\wedge x_n^{i_2} \\wedge \\ldots \\wedge x_1^{i_m} \\wedge \\ldots\n \\wedge x_n^{i_m},\n\\end{eqnarray*}\nwhere $\\{ i_1, \\ldots, i_m \\}$ is an unordered $m$-tuple of\ndistinct numbers among $1, 2, \\ldots, k$. Write $ \\pi( g) x^{i}_1\n= \\sum_j b_{ij} x^j_1 $ and denote by $B$ the $k \\times k$ matrix\n$(b_{ij})$. Then\n\\begin{eqnarray*}\n (\\alpha, s). X(i_1 \\ldots i_m)\n & =& x_2^{i_1} \\wedge x_3^{i_1} \\wedge \\ldots\n \\wedge x_n^{i_1} \\wedge \\sum_j b_{i_1 j} x_1^j \\wedge \\\\\n & & x_2^{i_2} \\wedge x_3^{i_2} \\wedge \\ldots\n \\wedge x_n^{i_2} \\wedge \\sum_j b_{i_2 j} x_1^j\n \\wedge \\ldots \\wedge \\\\\n & & x_2^{i_m} \\wedge x_3^{i_m} \\wedge \\ldots\n \\wedge x_n^{i_m} \\wedge \\sum_j b_{i_m j} x_1^j .\n\\end{eqnarray*}\n\nIt follows that the coefficient of $X(i_1 \\ldots i_m)$ in\n$(\\alpha, s). 
X(i_1 \\ldots i_m)$ is equal to\n\\begin{eqnarray} \\label{eq_det}\n && \\sum_{\\sigma} (-1)^{(n -1) m}\n(-1)^{l (\\sigma)} b_{i_1 \\sigma (i_1)}b_{i_2 \\sigma (i_2)} \\ldots\nb_{i_m \\sigma (i_m)} \\nonumber \\\\\n & =& (-1)^{(n -1) m} \\det B(i_1 \\ldots i_m),\n\\end{eqnarray}\nwhere the summation runs over all permutations $\\sigma$ of $i_1,\n\\ldots, i_m$, $l (\\sigma) $ is the length of $\\sigma$, and $\\det\nB(i_1 \\ldots i_m)$ denotes the determinant of the $m \\times m$\nminor of $B$ consisting of the rows and columns $i_1, \\ldots,\ni_m$.\n\nBy (\\ref{eq_vanish}) and (\\ref{eq_det}) we calculate that\n\\begin{eqnarray*} \\label{eq_two}\n\\lambda ( {\\Bbb C} ^{kn}) (\\alpha, s)\n & =& \\sum_{m =0}^k (-1)^{mn}\\Lambda^{mn} ( {\\Bbb C} ^{kn})(\\alpha, s) \\nonumber \\\\\n & =& \\sum_{m =0}^k (-1)^{mn} (-1)^{m(n -1)} \\sum_{ \\{i_1, \\ldots , i_m\\} }\n \\det B(i_1 \\ldots i_m) \\nonumber \\\\\n & =& \\sum_{m =0}^k (-1)^m e_m (t_1, \\ldots, t_k) \\nonumber \\\\\n & =& \\sum_{m =0}^k (-1)^m (\\Lambda^m \\pi) (g) \\nonumber \\\\\n & =& \\lambda (\\pi) (c ),\n \\end{eqnarray*}\nwhere the summation runs over all unordered $m$-tuples $ \\{i_1,\n\\ldots , i_m\\}$ of distinct numbers among $1, 2, \\ldots, k$, and\n$e_m$ denotes the $m$-th elementary symmetric polynomial of the\neigenvalues $t_1, \\ldots, t_k$ of the matrix $\\pi (g) \\in GL_k\n( {\\Bbb C} )$. The two identities involving $e_m$ used above are well\nknown.\n\nBy comparing the character values $\\eta_n (\\lambda (\\pi) )(\\alpha,\ns)$ and $\\lambda ( {\\Bbb C} ^{kn})(\\alpha, s) $ calculated above, we see\nthat\n\\[\n\\eta_n (\\lambda (\\pi) )(\\alpha, s) =\\lambda ( {\\Bbb C} ^{kn})(\\alpha, s) =\n\\lambda (\\pi) (c ).\n\\]\n\n\nTherefore if $x \\in \\G_n$ is of type $\\rho \\in {\\cal P}_n (\\Gamma_*)$\nthen by using (\\ref{eq_mult}) we obtain that\n \\begin{eqnarray*}\n \\eta_n (\\lambda (\\pi) ) ( x)\n & =& \\prod_{c\\in \\Gamma_*} \\lambda (\\pi) (c)^{l (\\rho(c))} \\label{eq_term}\\\\\n & =& \\lambda ( {\\Bbb C} ^{kn}) (x).\n \\end{eqnarray*}\n This completes the proof.\n\\end{demo}\n\n\\begin{remark} \\rm\nThe identification of the two virtual characters can also be seen\nalternatively as follows:\n\\begin{eqnarray}\n\\lambda ( {\\Bbb C} ^{kn}) & =& \\sum_{i =0}^{kn} (-1)^i \\Lambda^i ( ( {\\Bbb C} ^k)^n )\n \\nonumber \\\\\n & =& \\sum_{i_1, \\ldots , i_n } (-1)^{i_1 + \\ldots + i_n}\n \\Lambda^{i_1} ( {\\Bbb C} ^k) \\otimes \\ldots \\otimes \\Lambda^{i_n} ( {\\Bbb C} ^k)\n \\nonumber \\\\\n & =& \\sum_{\\{n_0, \\ldots , n_k \\}} (-1)^{\\sum_i i n_i}\n \\mbox{Ind}^{\\G_n}_{\\Gamma_{n_0} \\times \\ldots \\times \\Gamma_{n_k}}\n \\Lambda^0 ( {\\Bbb C} ^k)^{\\boxtimes n_0} \\otimes \\ldots \\otimes\n \\Lambda^k ( {\\Bbb C} ^k)^{\\boxtimes n_k} \\label{eq_def} \\\\\n & =& (\\sum_i (-1)^i \\Lambda^i ( {\\Bbb C} ^k) )^{\\boxtimes n} \\label{eq_work}\\\\\n & =& (\\lambda ( {\\Bbb C} ^k) )^{\\boxtimes n} \\nonumber\n\\end{eqnarray}\n %\nwhere $\\{n_0, \\ldots , n_k \\}$ ranges over the $(k+1)$-tuples of\nnon-negative integers such that $ \\sum_i n_i = n.$\nEq.~(\\ref{eq_def}) above basically follows from the definition of\nthe induction functor. Eq.~(\\ref{eq_work}) is a generalization of\n(\\ref{eq_virt}) which can be established with some effort.\n\\end{remark}\n\nWhen $\\Gamma$ is trivial then $\\lambda (\\pi) = 0 \\in R(\\Gamma)$ and so $\\eta_n\n(\\lambda (\\pi)) =0$. 
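Indeed, for trivial $\\Gamma$ the defining representation is $\\pi = k \\gamma_0$, so that\n$$\n \\lambda (\\pi) = \\sum_{i =0}^k (-1)^i {k \\choose i} \\gamma_0 = (1 -1)^k \\, \\gamma_0 = 0 .\n$$\n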
We have an immediate corollary.\n %\n\\begin{corollary}\nWhen $\\Gamma$ is trivial and $\\G_n$ becomes the symmetric group $S_n$,\nthe virtual $S_n$-character $\\lambda ( {\\Bbb C} ^{kn})$ is zero.\n\\end{corollary}\n\\subsection{Derived categories and Grothendieck Groups}\nIn this subsection we let $\\Gamma $ be a finite subgroup of $SL_2\n( {\\Bbb C} )$ unless otherwise specified.\n\nWe denote by $D_{\\G_n} ( {\\Bbb C} ^{2n})$ the bounded derived category of\n$\\G_n$-equivariant coherent sheaves on $ {\\Bbb C} ^{2n}$, and denote by\n$D( \\ale^{[n]} )$ the bounded derived category of coherent sheaves on\n$ \\ale^{[n]} $. Define two functors $\\Phi : D( \\ale^{[n]} ) \\rightarrow D_{\\G_n}\n( {\\Bbb C} ^{2n})$ and $\\Psi : D_{\\G_n} ( {\\Bbb C} ^{2n}) \\rightarrow D( \\ale^{[n]} ) $ by\n\\begin{eqnarray*}\n \\Phi (-) & =& Rp_* ({\\cal O}_{ U_{\\G,n} } \\bigotimes q^*(-))\\\\\n \\Psi (-) & =& (Rq_* RHom({\\cal O}_{ U_{\\G,n} }, p^* (-)) )^{\\G_n},\n\\end{eqnarray*}\nwhere $ U_{\\G,n} $ is the universal scheme defined in (\\ref{eq_univ}),\nand $p, q$ denote the projections $ \\ale^{[n]} \\times {\\Bbb C} ^{2n}$ to\n$ {\\Bbb C} ^{2n}$ and $ \\ale^{[n]} $ respectively.\n\nTake a basis of $R(\\G_n)$ given by the irreducible characters\n$s_{\\rho}, \\rho \\in {\\cal P}_n (\\Gamma^*)$ of $\\G_n$ (cf. \\cite{M, Z}).\nWe denote by ${\\cal O}_{ {\\Bbb C} ^{2n}}$ the structure sheaf over\n$ {\\Bbb C} ^{2n}$. Recall that the vector bundle (i.e. locally free sheaf)\n${\\cal V}_{\\rho}$ is defined in (\\ref{eq_wreathreg}). The\nfollowing theorem can be derived by using a general theorem due to\nBridgeland, King and Reid (Theorem~1.2, \\cite{BKR}) since $\\tau_n\n: \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} \\rightarrow {\\Bbb C} ^{2n} \/\\G_n$ is a crepant resolution and\n$\\G_n$ preserves the symplectic structure of $ {\\Bbb C} ^{2n}$.\n\n\\begin{theorem}\nUnder the assumption of the validity of\nConjecture~\\ref{conj_main}, $\\Phi$ is an equivalence of categories\nand $\\Psi$ is its adjoint functor. In particular, $\\Psi$ sends\n${\\cal O}_{ {\\Bbb C} ^{2n}} \\otimes s_{\\rho}^{\\vee}$ to ${\\cal V}_{\\rho}$.\n\\end{theorem}\n\n\\begin{remark} \\rm\nWhen $n=1 $, Conjecture~\\ref{conj_main} is known to be true (cf.\nRemark~\\ref{rem_one}) and the above theorem was established by\nKapranov and Vasserot \\cite{KV}.\n\\end{remark}\n\n\\begin{remark} \\rm \\label{rem_alt}\nBy assuming the validity of the $n!$ conjecture (cf. \\cite{GH}),\nwe have by Remark~\\ref{rem_one} that $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n\/\/S_n \\cong \\ale^{[n]} $.\nThus we may replace $ \\ale^{[n]} $ by $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n \/\/ S_n$ in the crepant\nHilbert-Chow resolution $ \\ale^{[n]} \\stackrel{\\pi_n}{\\rightarrow}\n \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n \/ S_n$. Clearly $S_n$ preserves the holomorphic symplectic\nstructure of the Cartesian product $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n$. Then one can apply\nagain Theorem~1.2 in \\cite{BKR} to show that there is an\nequivalence between $D( \\ale^{[n]} )$ and the derived category $D_{S_n}\n( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n)$ of $S_n$-equivariant coherent sheaves on $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n$.\n\\end{remark}\n\nBelow we further specialize and apply the general results of\nBridgeland, King and Reid \\cite{BKR} to our setup. We denote by\n$D^0_{\\G_n} ( {\\Bbb C} ^{2n})$ the full subcategory of $D_{\\G_n} ( {\\Bbb C} ^{2n})$\nconsisting of objects whose cohomology sheaves are concentrated on\nthe origin of $ {\\Bbb C} ^{2n}$. 
We denote by $D^0_{S_n} ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n)$ the\nfull subcategory of $D_{S_n} ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n)$ consisting of objects whose\ncohomology sheaves are concentrated on the $n$-th Cartesian\nproduct of the exceptional divisor $D \\subset \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} $. Denote by\n$D^0( \\ale^{[n]} )$ the full subcategory of $D( \\ale^{[n]} )$ consisting of\nobjects whose cohomology sheaves are concentrated on the fiber\n$ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n], 0}$ of $ \\ale^{[n]} $. Recall that $ \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n], 0}$ is\ndescribed in Subsection~\\ref{subsect_fiber}.\n\nThe equivalence between the derived categories $D_{\\G_n} ( {\\Bbb C} ^{2n})$\nand $D( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$ induces an equivalence between $D^0_{\\G_n}\n( {\\Bbb C} ^{2n})$ and $D^0( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$. Thus we have the following\ncommutative diagram (under the assumption of the validity of\nConjecture~\\ref{conj_main}):\n %\n\\begin{eqnarray} \\label{eq_cat}\n \\begin{array}{ccc}\n D^0_{\\G_n}( {\\Bbb C} ^{2n}) & \\stackrel{\\simeq}{\\longrightarrow}\n & D^0( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}) \\\\\n \\downarrow & & \\downarrow \\\\\n D_{\\G_n}( {\\Bbb C} ^{2n}) & \\stackrel{\\simeq}{\\longrightarrow}\n & D( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]}) .\n \\end{array}\n\\end{eqnarray}\n %\nGiven objects $E, F$ in $D_{\\G_n}( {\\Bbb C} ^{2n})$ and\n$D^0_{\\G_n}( {\\Bbb C} ^{2n})$ respectively, we define the Euler\ncharacteristic\n\\begin{eqnarray*}\n\\chi^{\\G_n} (E, F) = \\sum_i ( -1)^i \\dim Hom_{D_{\\G_n}( {\\Bbb C} ^{2n})} (E,\nF[i]).\n\\end{eqnarray*}\nThis gives a natural bilinear pairing between $D_{\\G_n} ( {\\Bbb C} ^{2n})$\nand $D^0_{\\G_n} ( {\\Bbb C} ^{2n})$. Similarly we can define the Euler\ncharacteristic $\\chi (A, B)$ for objects $A, B$ in $D( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$\nand $D^0( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$ respectively, which gives rise to a bilinear\npairing between $D( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$ and $D^0( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$. We further\nhave $\\chi^{\\G_n}(E, F) =\\chi (\\Psi (E), \\Psi (F)),$ cf.\n\\cite{BKR}.\n\nWe denote by $K_{\\G_n} ( {\\Bbb C} ^{2n}), K^0_{\\G_n} ( {\\Bbb C} ^{2n}), K\n( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$, $ K^0 ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$, $K^0_{S_n} ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n)$ and\n$K_{S_n} ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n)$ the Grothendieck groups of the corresponding\nderived categories. It is well known that $K_{\\G_n}\n( {\\Bbb C} ^{2n})$ and $ K^0_{\\G_n} ( {\\Bbb C} ^{2n})$ are both isomorphic to the\nrepresentation ring $R_{ {\\Bbb Z} }( \\G_n)$. The bilinear pairings\nmentioned above together with the embeddings of categories induce\na bilinear form on $K^0_{\\G_n} ( {\\Bbb C} ^{2n})$ and respectively on\n$K^0( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$.\n\nLet ${\\cal O}_0$ be the skyscraper sheaf at the origin $0$ on\n$ {\\Bbb C} ^{2n}$. 
The $\\G_n$-bundles ${\\cal O} \\otimes s_{\\rho}, \\rho \\in\n{\\cal P}_n (\\Gamma^*)$ form a basis for $K_{\\G_n} ( {\\Bbb C} ^{2n})$ while the\nmodules $s_{\\rho} \\otimes {\\cal O}_0, \\rho \\in {\\cal P}_n (\\Gamma^*)$\nform the dual basis for $K^0_{\\G_n} ( {\\Bbb C} ^{2n})$.\n %\n\\begin{theorem} \\label{th_reform}\nThe map sending $s_{\\rho}$ to ${\\cal O}_0 \\otimes s_{\\rho}$,\n$\\rho \\in {\\cal P}_n (\\Gamma^*)$, is an isometry between $R_{ {\\Bbb Z} }(\\G_n)$\nendowed with the weighted bilinear form and $K^0_{\\G_n} ( {\\Bbb C} ^{2n})$\nendowed with the bilinear form defined above. In particular the\nbilinear form on $K^0_{\\G_n} ( {\\Bbb C} ^{2n})$ is positive semidefinite and\nsymmetric.\n\\end{theorem}\n\n\\begin{demo}{Proof}\nFollowing a similar argument to that in Gonzalez-Sprinberg and Verdier\n\\cite{GSV} (which is for $n =1$), we obtain the following\ncommutative diagram by using the Koszul resolution of ${\\cal O}_0$\non $ {\\Bbb C} ^{2n}$:\n\\begin{eqnarray*}\n \\begin{array}{ccc}\n R_{ {\\Bbb Z} }(\\G_n) & \\stackrel{\\simeq}{\\longrightarrow}\n & K^0_{\\G_n} ( {\\Bbb C} ^{2n} ) \\\\\n \\jmath \\downarrow & & \\downarrow \\\\\n R_{ {\\Bbb Z} }(\\G_n) & \\stackrel{\\simeq}{\\longrightarrow}\n & K_{\\G_n} ( {\\Bbb C} ^{2n} ).\n \\end{array}\n\\end{eqnarray*}\n %\nHere the horizontal maps are isomorphisms given by sending\n$s_{\\rho}$ to ${\\cal O}_0 \\otimes s_{\\rho}$ and respectively to\n${\\cal O}_{ {\\Bbb C} ^{2n}} \\otimes s_{\\rho}$, $\\rho \\in {\\cal P}_n\n(\\Gamma^*)$. The left vertical map $ \\jmath $ is given by\nmultiplication by the virtual character $\\lambda ( {\\Bbb C} ^{2n})$ of\n$\\G_n$, and the right vertical one is induced from the natural\nembedding of the corresponding categories. Now the theorem follows\nfrom the definition of the weighted bilinear form on $R(\\G_n)$,\nTheorem~\\ref{th_key}, and the fact that the basis ${\\cal O}_0\n\\otimes s_{\\rho}, \\rho \\in {\\cal P}_n (\\Gamma^*)$ for $K^0_{\\G_n}\n( {\\Bbb C} ^{2n})$ is dual to the basis ${\\cal O}_{ {\\Bbb C} ^{2n}} \\otimes\ns_{\\rho}, \\rho \\in {\\cal P}_n (\\Gamma^*)$ for $K_{\\G_n} ( {\\Bbb C} ^{2n})$.\n\\end{demo}\n\n\\begin{remark} \\rm\nThe main results in \\cite{FJW} (see Theorem 7.2 and Theorem 7.3 in\n{\\em loc. cit.}) can now be formulated by using the space\n\\[\n{\\cal F}_{\\Gamma} = \\bigoplus_{n \\geq 0} K^0_{\\G_n} ( {\\Bbb C} ^{2n})\n\\bigotimes {\\Bbb C} [K^0_{\\Gamma}( {\\Bbb C} ^2 )]\n\\]\nwith its natural bilinear form induced from the Koszul-Thom class.\nHere $ {\\Bbb C} [ - ]$ denotes the group algebra. Roughly speaking,\n${\\cal F}_{\\Gamma}$ affords a vertex representation of the toroidal\nLie algebra and a distinguished subspace of ${\\cal F}_{\\Gamma}$\naffords the basic representation of the affine Lie algebra\n$\\widehat{\\frak g}$ whose associated affine Dynkin diagram corresponds to\n$\\Gamma$ in the sense of McKay. 
This may be viewed as a form of McKay\ncorrespondence relating finite subgroups of $SL_2 ( {\\Bbb C} )$ to affine\nand toroidal Lie algebras.\n\\end{remark}\n\nNow we have the following commutative diagram (assuming the\nvalidity of Conjecture~\\ref{conj_main}):\n\\begin{eqnarray*}\n \\begin{array}{ccccc}\n R_{ {\\Bbb Z} }(\\G_n) & \\stackrel{\\simeq}{\\longrightarrow}\n & K^0_{\\G_n} ( {\\Bbb C} ^{2n} ) & \\stackrel{\\simeq}{\\longrightarrow}\n & K^0 ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} ) \\\\\n \\jmath \\downarrow & & \\downarrow & & \\downarrow \\\\\n R_{ {\\Bbb Z} }(\\G_n) & \\stackrel{\\simeq}{\\longrightarrow}\n & K_{\\G_n} ( {\\Bbb C} ^{2n} ) & \\stackrel{\\simeq}{\\longrightarrow}\n & K ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} ).\n \\end{array}\n\\end{eqnarray*}\n %\nBy Remark~\\ref{rem_alt} and a similar argument to the above, we have\nanother commutative diagram (assuming the validity of\nConjecture~\\ref{conj_main}):\n\\begin{eqnarray*}\n \\begin{array}{ccc}\n K^0_{S_n} ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n ) & \\stackrel{\\simeq}{\\longrightarrow}\n & K^0 ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} ) \\\\\n \\downarrow & & \\downarrow \\\\\n K_{S_n} ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^n) & \\stackrel{\\simeq}{\\longrightarrow}\n & K ( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]} ).\n \\end{array}\n\\end{eqnarray*}\n\nCombining the two diagrams above, we obtain the following\ntheorem.\n\n\\begin{theorem}\nUnder the assumption of the validity of\nConjecture~\\ref{conj_main}, the isomorphisms among $(R_{ {\\Bbb Z} }(\\G_n),\n\\langle -, - \\rangle_{\\lambda( {\\Bbb C} ^2)} )$, the K-groups $K^0_{S_n}\n( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{n} )$ and $K^0( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$ are isometries.\n\\end{theorem}\n\n\\begin{remark} \\rm\nAll the algebraic structures on $\\bigoplus_n R (\\G_n)$ (cf.\n\\cite{W,FJW}) and thus on $\\bigoplus_n K^0_{\\G_n} ( {\\Bbb C} ^{2n})$ can\nnow be carried over to $\\bigoplus_n K^0( \\widetilde{ {\\Bbb C} ^2 \/ \\Gamma} ^{[n]})$. However it\nremains to match these with the Grojnowski-Nakajima construction\non $\\bigoplus_{n} H( \\ale^{[n]} )$ in terms of correspondence varieties.\n\\end{remark}\n\n\\begin{remark} \\rm\nGiven a finite subgroup $G$ of $SL_K ( {\\Bbb C} )$, one asks whether there\nis a crepant resolution $Y$ of the affine orbifold $ {\\Bbb C} ^K \/ G$ so\nthat there exists a canonical isomorphism $K_{G}( {\\Bbb C} ^K) \\cong\nK(Y)$; one further asks whether or not the answer can be provided\nby the Hilbert quotient $ {\\Bbb C} ^K \/\/ G$, cf. Reid \\cite{R}. The answer\nis affirmative for $K =2$, known as the McKay correspondence\n\\cite{GSV} (also compare \\cite{IN, KV}). For $K=3$, there has been\nmuch work by various people, cf. \\cite{Ro, R, Nr, IN} and\nreferences therein, and it was settled by Bridgeland-King-Reid\n\\cite{BKR}. However not much is known in general (see however\n\\cite{BKR}) and there have been counterexamples. Our work provides\nstrong evidence for an affirmative answer in the case of $ {\\Bbb C} ^{2n}$\nacted upon by the wreath product $\\G_n$ associated to a finite\nsubgroup $\\Gamma \\subset SL_2 ( {\\Bbb C} )$.\n\\end{remark}\n\\section{A direct isomorphism of algebraic structures on equivariant K-theory}\n\\label{sect_ktheory}\n %\nIn this section we assume that the reader is familiar with\n\\cite{W}. 
For brevity of notation we will use $K^{ {\\footnotesize {top}} }(-)$\nand $K^{ {\\footnotesize {top}} }_G(-)$ to denote the {\\em complexified}\n($G$-equivariant) topological K-group. We further assume that $X$\nis a quasi-projective surface acted upon by a finite group $\\Gamma$\nand $Y$ is a resolution of singularities of $X\/ \\Gamma$ such that\nthere exists a canonical isomorphism $\\theta$ between\n$K^{ {\\footnotesize {top}} }_{\\Gamma}(X)$ and $K^{ {\\footnotesize {top}} }( Y)$.\n\nLet ${\\cal C} = \\{ V_1, \\ldots, V_l \\}$ be a basis for $ {K}^{ {\\footnotesize {top}} }_{\\G}(X) $.\nWithout loss of generality we may assume they are genuine\n$\\Gamma$-vector bundles on $X$. We denote by $W_i = \\theta ( V_i)$.\nThe set $ \\{ W_1, \\ldots, W_l \\}$ is a basis for $ K(Y) $. We remark\nthat representatives of $W_i$'s can again be chosen as certain\ncanonical vector bundles in favorable cases, including the\nimportant case when $X= {\\Bbb C} ^2$ and $\\Gamma \\subset SL_2 ( {\\Bbb C} )$.\n\nLet $S_{\\lambda} $ be the irreducible representation of the symmetric\ngroup $S_n $ associated to the partition $\\lambda$ of $n$. Define\n$$\n S_{\\lambda} (V_i) = S_{\\lambda} \\bigotimes V_i^{ \\boxtimes n}.\n$$ Endowed with the diagonal action of $S_n$ on the two tensor\nfactors and the action of $\\Gamma^n$ on the second factor, $S_{\\lambda}\n(V_i)$ is a $\\G_n$-equivariant vector bundle.\n\nGiven a partition-valued function $\\lambda = (\\lambda_i )_{ 1 \\leq i \\leq\nl} \\in {\\cal P}_n (\\cal C)$, we define the $\\G_n$-equivariant\nvector bundle $$ S^X_{\\lambda} =\n \\mbox{Ind} ^{\\G_n}_{ \\Gamma_{|\\lambda_1|} \\times \\ldots \\times \\Gamma_{|\\lambda_l|} }\n S_{\\lambda_1}(V_1 ) \\times \\ldots \\times S_{\\lambda_l}(V_l).\n$$\n %\nIn a parallel way, we can define the $S_n$-equivariant bundle $\nS^Y_{\\lambda}$ associated to $ \\lambda \\in {\\cal P}_n (\\cal C) $ as $$\nS^Y_{\\lambda}\n=\n \\mbox{Ind} ^{S_n}_{ S_{|\\lambda_1|} \\times \\ldots \\times S_{|\\lambda_l|} }\n S_{\\lambda_1}(\\theta(V_1) ) \\times \\ldots \\times S_{\\lambda_l}(\\theta (V_l)).\n$$\n\nRecall that we constructed in \\cite{W} various algebraic\nstructures such as Hopf algebra, $\\lambda$-ring, Heisenberg\nalgebra on $\\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $ for a $\\Gamma$-space $X$,\ngeneralizing the results of Segal \\cite{S2} for $\\Gamma$ trivial. The\nfollowing proposition can be proved using \\cite{W} in the way\nMacdonald \\cite{M} did when $X$ is a point.\n\n\\begin{proposition}\nThe $\\G_n$-bundles $ S^X_{\\lambda}, \\lambda \\in {\\cal P}_n (\\cal C)$ form a\nbasis of $ K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $. The $S_n$-bundles $ S_{\\lambda}^Y, \\lambda \\in {\\cal\nP}_n (\\cal C)$ form a basis of $ K_{S_n} (Y^n) $.\n\\end{proposition}\nThese bases will be referred to as Schur bases, generalizing the\nusual one for $R(\\G_n) = K^{ {\\footnotesize {top}} }_{\\G_n} (pt)$.\n\n\\begin{theorem}\nThe map $\\Theta$ from $\\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $ to\n$\\bigoplus_{n \\geq 0} K_{S_n} (Y^n) $ sending $S_{\\lambda} ^X$ to $S_{\\lambda}\n^Y$ is an isomorphism of Hopf algebras, $\\lambda$-rings, and\nrepresentations over the Heisenberg algebra.\n\\end{theorem}\n\n\\begin{demo}{Proof}\nWe use \\cite{W} as a basic reference. 
We follow the notations\nthere with an additional use of $X, Y$ as subscripts to specify\nthe space we are referring to.\n\nAs graded vector spaces $\\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $ and\n$\\bigoplus_{n \\geq 0} K_{S_n} (Y^n) $ have the same graded dimension due to\nthe isomorphism between $K^{ {\\footnotesize {top}} }_{\\Gamma} (X)$ and $K^{ {\\footnotesize {top}} } (Y)$,\ncf. Theorem 3 in \\cite{W}. Thus the map $\\Theta$ given by matching\nthe Schur basis is an additive isomorphism.\n\nRecall that the Adams operations $\\varphi^m_X$ on the space\n$\\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $ and $\\varphi^m_Y$ on $\\bigoplus_{n \\geq\n0} K_{S_n} (Y^n) $ satisfy the identities (where $q$ is a formal parameter),\ncf. Proposition 4 in \\cite{W}:\n\\begin{eqnarray*}\n\\bigoplus_{n \\geq 0} q^n V_i^{\\boxtimes n}\n & =& \\exp \\left( \\sum_{m > 0} \\frac1m \\varphi^m_X (V_i) q^m\n \\right), \\\\\n\\bigoplus_{n \\geq 0} q^n W_i^{\\boxtimes n}\n & =& \\exp \\left( \\sum_{m > 0} \\frac1m \\varphi^m_Y (W_i) q^m\n \\right).\n\\end{eqnarray*}\n\nIt follows that $\\varphi^m_X (V_i)$ and $\\varphi^m_Y (W_i)$ are\nuniquely determined by $V_i^{\\boxtimes n}$ $(n \\geq 0)$ and\nrespectively $W_i^{\\boxtimes n}$ in the same way (by taking\nlogarithms of the above identities). Since the isomorphism\n$\\Theta$ sends $V_i^{\\boxtimes n}$ to $W_i^{\\boxtimes n}$, the\nAdams operations $\\varphi^m_X (V_i)$ and $\\varphi^m_Y (W_i)$ match\nunder $\\Theta$ and so do the $\\lambda$-ring structures on\n$\\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $ and $\\bigoplus_{n \\geq 0} K_{S_n} (Y^n) $.\n\nRecall that Heisenberg algebras ${\\cal H}_X$ and ${\\cal H}_Y$ constructed\nin terms of K-theory maps act\nirreducibly on $\\bigoplus_{n \\geq 0} K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $ and $\\bigoplus_{n \\geq\n0} K_{S_n} (Y^n) $ respectively, cf. Theorem 4 in \\cite{W}. The Heisenberg\nalgebra generators are essentially defined in terms of Adams operations,\ninduction functors and restriction functors such as\n$\\mbox{Ind}_{\\Gamma_m \\times \\Gamma_{n -m}}^{\\G_n}$, $\\mbox{Res}_{\\Gamma_m\n\\times \\Gamma_{n -m}}^{\\Gamma_n}$, etc. Since the Adams operations,\ninduction and restriction functors are compatible with\nthe Schur bases and thus with the map\n$\\Theta$, the Heisenberg algebras acting on $\\bigoplus_{n \\geq 0}\n K^{ {\\footnotesize {top}} }_{\\Gn}(X^n) $ and $\\bigoplus_{n \\geq 0} K_{S_n} (Y^n) $ also match under\n$\\Theta$.\n\\end{demo}\n\n\\frenchspacing\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
For example, by comparing abnormal images of interest with normal images of the same anatomical site, physicians can diagnose the presence and extent of the disease. Meanwhile, images with similar abnormal features can also be useful as a reference because diseases with the same imaging phenotypes often have similar prognoses and treatment responses, somewhat regardless of the origin of the diseases. However, it is quite laborious to find similar images from a large database by focusing only on either normal or abnormal features of a query image by hand. Hence, we aimed to establish a content-based image retrieval (CBIR) system to support comparative diagnostic reading. \n\nCBIR is an important application to retrieve a set of the most similar images to a submitted patient's image from large databases to support clinical decision making. A CBIR system basically comprises two subsystems: at the first feature extraction stage, it converts images in a database to a set of features associated with image content, and at the next similarity search stage, similar images are retrieved from the database based on the feature similarity with respect to a query image. Traditionally, the feature extraction employed various types of handcrafted descriptors, such as shape, text, color, and texture; however, there was difficulty in reducing the ``semantic gap'' between low-level image features captured by the handcrafted descriptors and high-level visual concepts \\citep{Kumar2013, sift7935507}. Recently, with the success in many image processing tasks, the deep-learning-based approach has gained increasing attention in the field of CBIR. Since the deep neural network can automatically learn complex features in hierarchical levels of abstraction, it can simplify the feature extraction process. A common neural network architecture used in CBIR systems is the autoencoder, which enables mapping each image into a latent representation that can store compressed image information \\citep{7727562, 7172448, 8310624, 8079843, 7351389}. Even though deep-learning-based approaches often outperformed classical algorithms of CBIR systems \\citep{10.1117\/12.2251115, 10.1117\/12.2217587}, there is no specific method that can measure the distance between either normal or abnormal features of medical images as separated semantic components. To achieve our goals, we need to devise a novel feature extraction method that can decompose normal and abnormal features of medical images. We also aimed to establish a computationally effective method to retrieve similar images based on decomposed latent representations.\n\n\\begin{figure}\n \\centering\n \\includegraphics[]{.\/figures\/concept_of_decomp.jpg}\n \\caption{\\textbf{Concept of the compositionality of medical images.} It indicates that the entire medical image can be decomposed into normal and abnormal anatomy codes relevant to normal and abnormal semantic components in the image, respectively.}\n \\label{fig:concept_of_decomp}\n\\end{figure}\n\nIn this study, we define the two-tiered semantic nature of normal and abnormal anatomy as \\emph{compositionality} of medical images, as presented in {\\bf Fig. \\ref{fig:concept_of_decomp}}. Hereinafter, ``normal'' anatomy means counterfactual structures that should have existed if the sample is healthy, whereas ``abnormal'' anatomy refers to disease changes that reflect the deviation from the normal baseline. 
When the sample is free from abnormality, ``normal'' anatomy corresponds to the entire sample image, and ``abnormal'' anatomy should not indicate any condition. Then, we consider how to decompose a given image into two low-dimensional representations in a manipulable manner, where the latent codes representing normal and abnormal anatomy should be mutually exclusive and collective in terms of the reconstruction of the original image. By measuring similarities based on decomposed normal or abnormal semantic components of medical images, it can be expected to retrieve images that have the same normal anatomical context or similar abnormal findings, respectively, while ignoring the other component.\n\nFactorizing compositionality into distinct and informative factors in data is a fundamental requirement of representation learning \\citep{Bengio2013}. The majority of studies has sought to capture purely independent factors of variation that contributes to generation of data, which are called \\emph{disentangled representations} \\citep{higgins2018definition}. To obtain disentanglement of features, two approaches have exploited implicit or explicit supervision to impose strong inductive biases on autoencoders \\citep{tschannen2018recent}. One approach is to collect a large amount of data to encompass sufficient variation for each factor and then apply appropriate regularization such that disentanglement can be implicitly performed \\citep{Higgins2017betaVAELB, burgess2018understanding, pmlr-v80-kim18b, Chen2018tcvae, zhao2017infovae, kumar2017variational, lopez2018information, esmaeili2018structured, Alex2017, Achille2018, Lample2017, louizos2015variational}. Another approach is to force a model to acquire separate representations by explicitly imposing modeling assumptions \\citep{cheung2014discovering, Kulkarni2015, Eslami2016attend, shanahan2019explicitly, charakorn2020explicit}. If distinct factors of variations explaining the characteristics of data can be separately represented using independent latent units, it can be useful in downstream tasks by providing interpretability and manipulability to the data. \n\nTherefore, disentangled representations can decompose normal and abnormal semantic components of medical images. Indeed, there has been an increasing number of studies focusing on feature disentanglement in medical imaging, including performing pseudo-healthy synthesis \\citep{XIA2020101719, ADN8788607, vorontsov2019semisupervised, Tang2021}, learning generalizable features across domains \\citep{meng2020learning, 9247170} or imaging modalities \\citep{Chen2019, 9250615, Chartsias2019}, and evaluating the reliability of individual annotators from true segmentation label distributions \\citep{HumanError2020}. However, with respect to the etiology of diseases in medical images, it is noteworthy that the assumption of disentanglement, where purely independent low-dimensional latent features can mimic the generation of high-dimensional data, can be too simple to be generalizable. For example, brain tumors originate as focal changes, and with their progression, cause compressional deformations and infiltration in adjacent structures, making it difficult to clearly define the boundary between normal and abnormal tissues. Hence, we consider \\emph{decomposition} of latent representation, which encompasses the notion of disentangled representation as its special case and allows much richer properties of latent spaces such as intricate structured relationships \\citep{pmlr-v97-mathieu19a}. 
\n\nIn this study, we first devised a neural network architecture, called \\emph{feature decomposing network}, to decompose normal and abnormal semantic components of medical images in a manipulable manner. Then, we demonstrated a novel CBIR framework by utilizing the decomposed latent codes to support comparative diagnostic reading. Given an input image, the feature decomposing network is trained to map it into two latent codes, {\\it normal anatomy code} and {\\it abnormal anatomy code}. It indicates that the original latent space for representing the whole image is divided into one portion as a normal anatomy code corresponding to normal anatomy and the remaining portion as an abnormal anatomy code corresponding to abnormal anatomy. After training the feature decomposing network, latent codes become representative of targeted semantic features in medical images. We also investigate the effectiveness of a method called \\emph{distribution matching} by utilizing Wasserstein generative adversarial networks (GANs) \\citep{pmlr-v70-arjovsky17a} with gradient penalty \\citep{NIPS2017_892c3b1c}. Distribution matching is imposed on the latent distribution to minimize the overlap in the semantic contents between normal and abnormal anatomy codes. Furthermore, by constructing the two latent codes to be discrete through vector quantization \\citep{oord2017neural}, we can reduce computational burden at the time of similarity search by binary hashing. The utility of these decomposed latent codes for CBIR applications is shown based on a large dataset containing brain magnetic resonance (MR) images of gliomas. By performing nearest neighbor search utilizing either normal or abnormal anatomy codes or the combination of the two codes, our CBIR system can retrieve images according to selected semantic components while ignoring the other, if necessary.\n\nThe main contributions of this study can be summarized into the following:\n\\begin{itemize}\n \\item We propose a feature decomposing network that can explicitly decompose semantic features of medical images into normal and abnormal anatomy codes in a manipulable manner for downstream tasks. \n \\item To enhance computational efficiency at the time of similarity search, latent spaces are configured using vector quantization to be discrete, rather than continuous.\n \\item By employing the decomposed latent codes, we present a novel CBIR application that can search for similarities in images based on a selected latent code or the combination of the two latent codes, enabling retrieval of images viewed as any semantic components while ignoring the other, if necessary. \n\\end{itemize}\n\nThe proposed method is most closely related to our conference extended abstract \\citep{kobayashi2020decomposing}, a neural network architecture to decompose normal and abnormal features of medical images with its application to CBIR. However, the presented work has significantly improved the learning method with extensive experimental settings. We further validated the performance of the CBIR application with more appropriate metrics. \n\nThe rest of the manuscript is organized as follows: {\\bf Section \\ref{sec:related_work}} reviews the literature on disentangled representation, image-to-image translation in medical imaging, and CBIR. {\\bf Section \\ref{sec:methodology}} describes our proposed method with technical backgrounds. {\\bf Section \\ref{sec:experiments}} presents experimental settings and evaluation methods. 
{\\bf Section \\ref{sec:results}} provides the results. {\\bf Section \\ref{sec:discussion}} presents the discussion and conclusions. \n\n\\section{Related work}\n\\label{sec:related_work}\n\nHere, we briefly review literature related to disentangled representation learning, especially in the field of computer vision. Thereafter, as a research interest more related to our purpose, we review recent progress that mainly apply the image-to-image translation technique based on GANs for pseudo-healthy synthesis of medical images. We also introduce current progress in the CBIR system in medical imaging. \n\n\\subsection{Disentangled representation learning}\n\\label{sec:disentangleed_representation_learning}\nLearning disentangled representation attempts to separate the underlying factors of sample variations in a way that each factor exclusively represents one type of sample attributes. There are several benefits in learning disentangled representation from data because models that produce feature disentanglement can provide explainability of the model function and manipulability in the data generation process. One approach is to combine deep generative models, such as GANs \\citep{Goodfellow2014} and variational autoencoders (VAEs) \\citep{Kingma2013, Rezende2014} with regularization techniques to acquire disentanglement in an implicit manner \\citep{Higgins2017betaVAELB, pmlr-v80-kim18b, Chen2018tcvae, zhao2017infovae, kumar2017variational, lopez2018information, esmaeili2018structured, Alex2017, Achille2018, Lample2017, louizos2015variational}. For example, $\\beta$-VAE can automatically discover the independent latent factors of variation by imposing a limit on the capacity of latent information, which facilitates factorization of representations \\citep{Higgins2017betaVAELB}. However, it is still fundamentally difficult to learn disentangled features without any supervision. It is also argued that acquired disentangled representations sometimes mismatch human's predefined concepts \\citep{Locatello}. Another approach is to explicitly factorize representations into a component that is related to or independent of classes based on labeled data \\citep{cheung2014discovering, Kulkarni2015, Eslami2016attend, shanahan2019explicitly, charakorn2020explicit}. Label information or attribute annotation can serve as strong supervision for feature disentanglement; hence, the performance of disentanglement can be optimized and guaranteed. Therefore, by designing a network with an appropriate structure and exploiting segmentation labels indicating abnormal regions as supervision, we aim to obtain the desired decomposition of latent representations. \n\n\\subsection{Image-to-image translation}\n\\label{sec:image_to_image_translation}\nImage-to-image translation has been exploited in medical imaging to obtain disentangled representation. CycleGAN, which performs bidirectional translation between image domains, has been widely used in image-to-image translation \\citep{cyclegan8237506}. As an extended architecture of CycleGAN, the unsupervised image-to-image translation (UNIT) framework proposes a shared latent space assumption, where a pair of corresponding images in two different domains can be mapped \\citep{liu2017unsupervised}. More recently, multimodal UNIT decomposes an image into a content code that is domain invariant and a style code that represents domain-specific features \\citep{huang2018multimodal}. In line with studies focusing on medical imaging, Xia et al. 
demonstrated pseudo-healthy synthesis by creating a subject-specific healthy image from a pathological one by extending the learning framework of CycleGAN \\citep{XIA2020101719}. Similarly, Liao et al. proposed an artifact disentanglement network using the image-to-image translation architecture, achieving comparable performance in image restoration to existing supervised models \\citep{ADN8788607}. Vorontsov et al. also applied the same concept to improve semi-supervised training for semantic segmentation with autoencoding \\citep{vorontsov2019semisupervised}. To enhance realistic synthesis of chest radiographic images, Tang et al. proposed a disentangled generative model for disease decomposition and demonstrated that disease residual maps can indicate underlying abnormal regions \\citep{Tang2021}. Since one of our goals is to decompose medical images into normal and abnormal semantic components, the basic idea is somewhat similar to pseudo-healthy synthesis exploiting image-to-image translation techniques. However, previous approaches focused on transforming the rather superficial appearance of images and did not evaluate the accessibility and validity of latent representations acquired inside the models. Thus, our feature decomposing network has a bottleneck where imaging features are compressed, enabling us to handle latent representations of the targeted semantic component for the downstream task. \n\n\\subsection{Content-based image retrieval}\n\\label{sec:content_based_image_retrieval}\n\nCBIR is an important application in retrieving a set of the most similar images from large databases, given a query image. Similarity measurements based on various information, such as shape, text, color and texture, and features acquired inside convolutional neural networks are used to resolve the semantic gap between imaging features and high-level visual concepts \\citep{Kumar2013, sift7935507}. Even though several studies utilizing state-of-the-art techniques of deep neural networks have been introduced to CBIR \\citep{Haq2021, MohdZin2018}, to the best of our knowledge, there are few studies that exploit disentangled representation from the viewpoint of image retrieval. Recently, Havaei et al. proposed a neural network architecture to ensure content-style disentanglement of medical images and demonstrated how the inferred style and content features are disentangled from each other by utilizing CBIR as an evaluation method \\citep{havaei2020conditional}. Since CBIR is one of the essential technologies that assist physicians in diagnosis, a methodology that directly seeks to develop CBIR based on disentangled representation is worth considering. Therefore, in addition to the concept of decomposing normal and abnormal features of medical images, we particularly construct latent codes to be discrete through vector quantization. By applying vector quantization to the latent space, subspaces can be fixed and traversed by the Hamming distance rearrangement, which is favorable in a large-scale CBIR by reducing the computational cost at the time of similarity search \\citep{9050998}. \n\n\\section{Material and methods}\n\\label{sec:methodology}\n\nIn this study, we propose a method to decompose two-tiered semantic components of medical images into normal and abnormal anatomical codes for the application of CBIR that can selectively focus on semantic components. 
The proposed method is presented in two stages: first, we describe a network architecture, which we call the \\emph{feature decomposing network}, for decomposing normal and abnormal features in medical images, together with its learning strategy. Then, we present how to establish a CBIR system that can support comparative diagnostic reading by utilizing these decomposed features. \n\n\\subsection{Feature decomposing network}\n\\label{sec:training_of_feature_decomposing_network}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[]{.\/figures\/fdn_model_architecture.jpg}\n \\caption{\\textbf{Shared architecture of the feature decomposing networks.} Input image $\\bm{x}$ is mapped to a pair of latent representations, $\\bm{z}_e^-$ and $\\bm{z}_e^+$. Vector quantization is performed based on two codebooks, $\\bm{e}^-$ and $\\bm{e}^+$, to produce normal anatomy code $\\bm{z}_q^-$ and abnormal anatomy code $\\bm{z}_q^+$, respectively. The segmentation decoder predicts the semantic segmentation of abnormality $\\hat{\\bm{y}}$ from $\\bm{z}_q^+$. Depending on the collateral input of the softmax logit of abnormality $\\tilde{\\bm{y}}$, the image decoder reconstructs either the entire input image $\\hat{\\bm{x}}^+$ or normal-appearing image ${\\hat{\\bm{x}}^-}$. Several loss functions in training the network, including latent, reconstruction and segmentation losses, are shown, except for those related to distribution matching.}\n \\label{fig:fdn_model_architecture}\n\\end{figure*}\n\nThe feature decomposing network is used to decompose semantic components of medical images into normal and abnormal anatomy codes. It is trained based on a dataset containing input images $\\bm{x}$ and ground-truth segmentation labels $\\bm{y}$. The feature decomposing network consists of an encoder and two decoders: a segmentation decoder and an image decoder. A pair of discrete latent spaces with latent codebooks exists at the bottom of the network, each of which produces normal and abnormal anatomy codes. An overview of the network architecture is shown in {\\bf Fig. \\ref{fig:fdn_model_architecture}}. It is noteworthy that the feature decomposing network has the latent space as its bottleneck, where no bypass connection between the encoder and decoders, such as skip connections \\citep{drozdzal2016importance}, is implemented. Therefore, we can expect that the information processed by the encoder can be compressed in latent spaces \\citep{razavi2019generating, 8969272}. \n\n\\subsubsection{Feature encoding}\n\\label{sec:encoder_network}\n\nThe encoder uses a two-dimensional (2D) medical image $\\bm{x} \\in \\mathbb{R}^{C \\times H \\times W}$ as an input, where $C$ is the number of channels and $H$ and $W$ represent the height and width of the images, respectively. Then, the encoder maps the image into two latent representations, $\\bm{z}_e^- \\in \\mathbb{R}^{D \\times H^{\\prime} \\times W^{\\prime}}$ and $\\bm{z}_e^+ \\in \\mathbb{R}^{D \\times H^{\\prime} \\times W^{\\prime}}$, where $\\bm{z}_e^-$ and $\\bm{z}_e^+$ correspond to the semantic features of normal and abnormal anatomies, respectively. We use $\\bm{z}^\\mp_e$ to represent both features. Subsequently, vector quantization is used to discretize $\\bm{z}^\\mp_e$. Namely, each elemental vector $z^\\mp_{e_i} \\in \\mathbb{R}^{D}$ is replaced with the closest code vector in each codebook $\\bm{e}^\\mp \\in \\mathbb{R}^{K \\times D}$ that comprises $K$ $D$-dimensional code vectors. Details of this vector quantization process are presented in the next subsection. 
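\nTo make the tensor shapes concrete, the following is a minimal PyTorch-style sketch of this double-headed encoding step. It is illustrative only: the module layout and all names (e.g., \\texttt{Encoder}, \\texttt{num\\_down}) are our assumptions rather than the exact implementation, and the nearest-neighbor lookup in \\texttt{quantize} anticipates the next subsection.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass Encoder(nn.Module):\n    # Hypothetical layout: a shared trunk followed by two 1x1 heads\n    # emitting the pre-quantization maps z_e^- and z_e^+.\n    def __init__(self, in_ch=4, width=64, latent_dim=64, num_down=3):\n        super().__init__()\n        layers, ch = [], in_ch\n        for _ in range(num_down):  # (H, W) -> (H', W') by strided convs\n            layers += [nn.Conv2d(ch, width, 4, stride=2, padding=1),\n                       nn.ReLU()]\n            ch = width\n        self.trunk = nn.Sequential(*layers)\n        self.head_normal = nn.Conv2d(ch, latent_dim, 1)    # z_e^-\n        self.head_abnormal = nn.Conv2d(ch, latent_dim, 1)  # z_e^+\n\n    def forward(self, x):\n        h = self.trunk(x)\n        return self.head_normal(h), self.head_abnormal(h)\n\ndef quantize(z_e, codebook):\n    # Nearest-neighbor lookup against a (K, D) codebook, applied to\n    # every D-dimensional elemental vector of z_e of shape (B, D, H', W').\n    B, D, H, W = z_e.shape\n    flat = z_e.permute(0, 2, 3, 1).reshape(-1, D)\n    idx = (torch.cdist(flat, codebook) ** 2).argmin(dim=1)\n    z_q = codebook[idx].view(B, H, W, D).permute(0, 3, 1, 2)\n    return z_q, idx\n\\end{verbatim}\n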
We denote the quantized vector of $\\bm{z}_e^\\mp$ as $\\bm{z}_q^\\mp$. Hereinafter, $\\bm{z}_q^-$ is referred to as \\emph{normal anatomy code} and $\\bm{z}_q^+$ as \\emph{abnormal anatomy code}.\n\n\\subsubsection{Vector quantization}\n\\label{sec:vector_quantiser}\n\nThe latent space has two codebooks, $\\bm{e}^- = \\{e^-_k |k = 1, \\ldots, K\\}\\in \\mathbb{R}^{K \\times D}$ and $\\bm{e}^+ = \\{e^+_k |k = 1, \\ldots, K\\}\\in \\mathbb{R}^{K \\times D}$, corresponding to the normal and abnormal semantic features, respectively. The vector quantization process is similar to that of vector-quantized (VQ) VAEs \\citep{oord2017neural}. The $i$-th elemental vector of $\\bm{z}^\\mp_e$, denoted as $z^\\mp_{e_i} \\in \\mathbb{R}^{D}$, is quantized by executing a nearest-neighbor lookup on the codebook as follows:\n\\begin{equation} \nk^\\mp = \\mathop{\\rm arg~min}\\limits_{k \\in \\{1, \\ldots, K\\}} \\|z^\\mp_{e_i} - e^\\mp_k\\|_2^2.\n\\end{equation} \nThereafter, the $i$-th elemental vector of the quantized representation is given by the $k^\\mp$-th code vector in the codebook as follows: \n\\begin{equation} \nz^\\mp_{q_i} = e^\\mp_{k^\\mp}.\n\\end{equation} \nThis replacement is performed for all ($H^\\prime \\times W^\\prime$) elemental vectors of $\\bm{z}^\\mp_e$ to collectively form a quantized vector $\\bm{z}^\\mp_q$.\n\nTo optimize this process, the encoder and codebooks are updated to minimize an objective, which is referred to as latent loss $L_{\\mathrm{lat}}$ as follows:\n\\begin{equation} \nL^\\mp_\\mathrm{lat} = \\|\\mathrm{sg}[\\bm{z}^\\mp_e] - \\bm{e}^\\mp\\|^2_2 + \\beta \\|\\bm{z}^\\mp_e - \\mathrm{sg}[\\bm{e}^\\mp]\\|^2_2,\n\\end{equation} \n\\begin{equation} \nL_\\mathrm{lat} = L_\\mathrm{lat}^- + L_\\mathrm{lat}^+,\n\\end{equation} \nwhere $\\mathrm{sg}$ indicates a stop-gradient operator, which serves as an identity function at the forward computation time and has zero partial derivatives, and $\\beta$ is a balancing hyperparameter. During training, the first term in the abovementioned equation updates the codebook variables by delivering the code vectors to the encoder output. Meanwhile, the second term encourages the encoder output to move closer to the targeted code vectors. We use the exponential moving average to train the codebook \\citep{kaiser2018fast}.\n\n\\subsubsection{Feature decoding}\n\\label{sec:feature_decoding}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[]{.\/figures\/spade.jpg}\n \\caption{\\textbf{Overview of spatially adaptive normalization (SPADE) module.} Softmax logit $\\tilde{\\bm{y}}$ multiplied by the segmentation mask $\\hat{\\bm{y}}$ is further downsampled to achieve resolutions corresponding to those of each layer in the image decoder. The SPADE module propagates the semantic layout of abnormalities into the image generation process. Each convolution block comprises a convolution function and bias parameter.}\n \\label{fig:spade}\n\\end{figure}\n\nThe segmentation decoder uses an abnormal anatomy code $\\bm{z}_q^+$ as an input and outputs an $S$-class segmentation label $\\hat{\\bm{y}} \\in \\mathbb{R}^{S \\times H \\times W}$ that corresponds to the ground-truth label $\\bm{y}$. 
A loss function for the segmentation output $L_{\\mathrm{seg}}$ is a composite of the generalized Dice \\citep{Sudre_2017} and focal \\citep{focal2017} losses as follows:\n\\begin{equation} \nL_\\mathrm{dice} = 1 - 2 \\frac{\\sum_{s \\in S} w_s |\\hat{\\bm{y}}_{s} \\cap \\bm{y}_{s}|}{\\sum_{s \\in S} w_s (|\\hat{\\bm{y}}_{s}| + |\\bm{y}_{s}|)},\n\\end{equation} \n\\begin{equation} \nL_\\mathrm{focal} = - \\frac{1}{N} \\sum_{s \\in S} \\bm{y}_s (1 - \\tilde{\\bm{y}}_s)^{\\gamma} \\log \\tilde{\\bm{y}}_s,\n\\end{equation} \n\\begin{equation} \nL_\\mathrm{seg} = L_\\mathrm{dice} + L_\\mathrm{focal},\n\\end{equation} \nwhere $\\tilde{\\bm{y}}$ indicates the softmax logits of the segmentation decoder, $N (= H \\times W)$ is the number of pixels, and $w_s$ is determined as $w_s = \\frac{1}{(\\sum_N |\\bm{y}_s|)^2}$ to mitigate the class imbalance problem \\citep{Sudre_2017}.\n\nMeanwhile, the image decoder $f$ performs conditional image generation using the \\emph{spatially adaptive normalization} (SPADE) \\citep{park2019SPADE}. SPADE is designed to propagate semantic layouts to the process of synthesizing images ({\\bf Fig. \\ref{fig:spade}}). The image decoder uses a normal anatomy code $\\bm{z}_q^-$ as its primary input. When the image decoder is used to reconstruct the entire input image $\\hat{\\bm{x}}^+$, the softmax logits of the segmentation decoder $\\tilde{\\bm{y}}$ are transmitted to each layer of the image decoder via the SPADE modules ($f(\\bm{z}_q^-, \\tilde{\\bm{y}}) = \\hat{\\bm{x}}^+$). When {\\it null} information, where $\\tilde{\\bm{y}}$ is filled with 0s, is propagated to the SPADE modules, a normal-appearing reconstruction $\\hat{\\bm{x}}^-$ is generated by the image decoder ($f(\\bm{z}_q^-, \\bm{0}) = \\hat{\\bm{x}}^-$).\n\nTo enforce different characteristics between the two types of generated images, $\\hat{\\bm{x}}^-$ and $\\hat{\\bm{x}}^+$, we apply a pixel-wise reconstruction loss depending on the region of abnormality. Suppose $\\bm{M}^+ \\in \\{0, 1\\}^{C \\times H \\times W}$ defines a mask, indicating that pixels with any abnormality labels are set to 1 and 0 otherwise, and $\\bm{M}^- = \\bm{1} - \\bm{M}^+$ is a complementary set of $\\bm{M}^+$. Briefly, $\\bm{M}^+$ represents the abnormal anatomy region, and $\\bm{M}^-$ indicates the normal anatomy region. Using these masks, image reconstruction loss $L_\\mathrm{rec}$ is defined as follows: \\begin{equation} \n\\begin{split}\nL^-_\\mathrm{rec} &= \\|\\bm{M}^- \\odot \\hat{\\bm{x}}^- - \\bm{M}^- \\odot \\bm{x}\\|^2_2 \\\\\n &+ (1 - \\mathrm{SSIM}(\\bm{M}^- \\odot \\hat{\\bm{x}}^-, \\bm{M}^- \\odot \\bm{x})),\n\\end{split}\n\\end{equation} \n\\begin{equation}\n\\begin{split}\nL^+_\\mathrm{rec} &= \\|\\hat{\\bm{x}}^+ - \\bm{x}\\|^2_2 \\\\\n &+ (1 - \\mathrm{SSIM}(\\hat{\\bm{x}}^+, \\bm{x})) \\\\\n &+ \\|{\\bm{M}^+} \\odot \\hat{\\bm{x}}^+ - {\\bm{M}^+} \\odot \\bm{x}\\|^2_2 \\\\\n &+ (1 - \\mathrm{SSIM}({\\bm{M}^+} \\odot \\hat{\\bm{x}}^+, {\\bm{M}^+} \\odot \\bm{x})),\n\\end{split}\n\\end{equation} \n\\begin{equation} \nL_\\mathrm{rec} = L^-_\\mathrm{rec} + L^+_\\mathrm{rec},\n\\end{equation} \nwhere SSIM indicates structural similarity \\citep{Wang2004}, which is added to the L2 loss as a constraint owing to its empirical effect of stabilizing the image generation process. 
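\nFor concreteness, a minimal PyTorch-style sketch of this masked reconstruction objective is given below. It is a sketch under our own naming assumptions; in particular, \\texttt{ssim} stands for any differentiable SSIM implementation, and the squared-error terms mirror the $\\ell_2$ norms above.\n\\begin{verbatim}\ndef reconstruction_loss(x, x_hat_minus, x_hat_plus, m_plus, ssim):\n    # x: input image; x_hat_minus / x_hat_plus: normal-appearing and\n    # entire reconstructions; m_plus: binary mask of abnormal pixels\n    # (1 inside lesions), broadcast over channels.\n    m_minus = 1.0 - m_plus\n\n    # L_rec^-: constrain the normal-appearing output outside lesions only.\n    l_minus = (((m_minus * x_hat_minus - m_minus * x) ** 2).sum()\n               + (1.0 - ssim(m_minus * x_hat_minus, m_minus * x)))\n\n    # L_rec^+: whole-image terms plus extra emphasis on the lesion area.\n    l_plus = (((x_hat_plus - x) ** 2).sum()\n              + (1.0 - ssim(x_hat_plus, x))\n              + ((m_plus * x_hat_plus - m_plus * x) ** 2).sum()\n              + (1.0 - ssim(m_plus * x_hat_plus, m_plus * x)))\n\n    return l_minus + l_plus\n\\end{verbatim}\n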
\n\n\\subsubsection{Distribution matching}\n\\label{sec:density_matching}\n\nIt is quite important to ensure that each decomposed feature, normal anatomy code $\\bm{z}_q^-$ and abnormal anatomy code $\\bm{z}_q^+$, corresponds to targeted semantic content in the images. For example, when some code vectors of normal anatomy codes convey not only features corresponding to normal anatomies but also those related to abnormal anatomies, the feature decomposition can be ``leaky,'' losing its reliability for downstream tasks. Particularly, this can occur at normal anatomy codes because, when a pathological image is provided, it is fundamentally impossible to obtain a \\emph{paired} normal counterpart that can be a ground-truth for the normal-appearing reconstructions $\\hat{\\bm{x}}^-$. Therefore, we utilize \\emph{unpaired} healthy images and employ distribution matching technique to minimize the discrepancy of the distributions between normal anatomy codes from healthy images and those from disease images. We consider this discrepancy as the Wasserstein distance $d_\\mathrm{W}$ and minimize it using Wasserstein GAN \\citep{pmlr-v70-arjovsky17a} with gradient penalty \\citep{NIPS2017_892c3b1c} that has a critic network $g$. \n\nHere, we consider that the set of input images $\\mathcal{X}$ can be divided into a set consisting of healthy images $\\mathcal{X}_h$ and a set consisting of diseased images $\\mathcal{X}_d$. Suppose the distribution of normal anatomy codes $\\mathcal{Z}^-_q$ can also be split into those originating from healthy images $\\mathcal{Z}^-_h$ and those originating from diseased images $\\mathcal{Z}^-_d$. When a batch $\\{\\bm{x} \\sim \\mathcal{X}\\}$ containing both healthy images $\\{\\bm{x}_h \\sim \\mathcal{X}_h\\}$ and diseased images $\\{\\bm{x}_d \\sim \\mathcal{X}_d\\}$ is fed into the encoder, a corresponding batch of normal anatomy codes $\\{\\bm{z}_q^- \\sim \\mathcal{Z}_q^- \\}$ can be split into those originating from healthy images $\\{\\bm{z}^-_h \\sim \\mathcal{Z}^-_h \\}$ and diseased images $\\{\\bm{z}^-_d \\sim \\mathcal{Z}^-_d \\}$. If there is no leakage of semantic content of abnormality into the normal anatomy codes, the two distributions, $\\mathcal{Z}^-_h$ and $\\mathcal{Z}^-_d$, should be identical with each other, irrespective of the presence of abnormality in the input images. Then, distribution matching imposes two types of loss functions, $L_\\mathrm{critic}$ and $L_\\mathrm{reg}$, as follows:\n\\begin{equation} \nL_\\mathrm{critic} = d_\\mathrm{W} + \\lambda_\\mathrm{gp} (\\| \\nabla_{\\bm{z}^-_m} g (\\bm{z}^-_m) \\|^2_2 - 1)^2,\n\\end{equation} \n\\begin{equation} \nL_\\mathrm{reg} = - g (\\bm{z}^-_d),\n\\end{equation} \nwhere $d_\\mathrm{W} = g (\\bm{z}^-_d) - g (\\bm{z}^-_h)$ is the Wasserstein distance, $\\lambda_\\mathrm{gp}$ is the balancing term for the gradient penalty, and $\\bm{z}^-_m = \\epsilon \\bm{z}^-_h + (1 - \\epsilon) \\bm{z}^-_d$ is to enforce the Lipschitz constraint by sampling a variable along straight lines between pairs of points sampled from the two latent distributions \\citep{NIPS2017_892c3b1c}. When optimizing $L_\\mathrm{critic}$, only the critic network is trained, not propagating gradients to the modules prior to $\\bm{z}_h^-$ and $\\bm{z}_d^-$. 
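\nA self-contained sketch of the two distribution matching losses is shown below, assuming equally sized healthy and diseased subbatches, any small convolutional critic, and the standard form of the gradient penalty; as elsewhere, the names are hypothetical.\n\\begin{verbatim}\nimport torch\n\ndef distribution_matching_losses(critic, z_h, z_d, lambda_gp=10.0):\n    # z_h, z_d: normal anatomy codes from healthy / diseased images,\n    # both shaped (B, D, H', W'); the critic returns one score per sample.\n    d_w = critic(z_d).mean() - critic(z_h).mean()  # Wasserstein estimate\n\n    # Gradient penalty on straight-line interpolates between the batches.\n    eps = torch.rand(z_h.size(0), 1, 1, 1, device=z_h.device)\n    z_m = (eps * z_h + (1.0 - eps) * z_d).requires_grad_(True)\n    grad = torch.autograd.grad(critic(z_m).sum(), z_m,\n                               create_graph=True)[0]\n    gp = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()\n\n    loss_critic = d_w + lambda_gp * gp  # updates the critic only\n    loss_reg = -critic(z_d).mean()      # updates encoder and codebook\n    return loss_critic, loss_reg\n\\end{verbatim}\n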
Both the encoder and codebook for normal anatomy codes are used to bring the two distributions closer together in the process of optimizing $L_\\mathrm{reg}$.\n\nNote that, unlike usual GANs for image synthesis, our goal is to achieve an alignment between two latent distributions, $\\mathcal{Z}^-_h$ and $\\mathcal{Z}^-_d$. Since distribution matching is applied to the batch containing quantized latent codes, it is expected that certain constraints will be added to the process of vector quantization. More specifically, it can be expected to regularize code vectors in the codebook for normal anatomy codes not to be too representative even for abnormal imaging features. \n\n\\subsubsection{Overall learning objectives}\n\\label{sec:overall_learning objectives}\n\nIn summary, we define several loss functions: latent loss $L_\\mathrm{lat}$ for optimization of the encoder and codebooks; segmentation loss $L_\\mathrm{seg}$ for the segmentation decoder, encoder, and codebook for abnormal anatomy codes; reconstruction loss $L_\\mathrm{rec}$ for the image decoder, encoder, and codebook for normal anatomy codes; and distribution matching loss for the critic network $L_\\mathrm{critic}$ and that for the encoder and codebook for normal anatomy codes $L_\\mathrm{reg}$. The overall learning algorithm is shown in {\\bf Algorithm \\ref{alg:learning_algorithm}}.\n\n\\begin{algorithm}[t]\n\\SetAlgoLined\n $e$: encoder\\\\\n $v^-$: vector quantiser for normal anatomy code\\\\\n $v^+$: vector quantiser for abnormal anatomy code\\\\\n $s$: segmentation decoder\\\\\n $f$: image decoder\\\\\n $g$: critic network\\\\\n $\\mathcal{D}$: training dataset\\\\\n sg: stop-gradient operator\\\\\n $m$: the number of inner iteration for the critic network\\\\\n $\\lambda_1, \\lambda_2, \\lambda_3, \\lambda_4, \\lambda_5$: balancing terms for each loss function\\\\\n \\While{not converge}{\n Sample a batch of images $\\bm{x}$ and segmentation labels for abnormal regions $\\bm{y}$ from $\\mathcal{D}$.\\\\\n $\\bm{z}_e^-, \\bm{z}_e^+ \\leftarrow e(\\bm{x})$\\\\\n $\\bm{z}_q^- \\leftarrow v^- (\\bm{z}_e^-)$\\\\\n $\\bm{z}_q^+ \\leftarrow v^+ (\\bm{z}_e^+)$\\\\\n $\\hat{\\bm{y}}, \\tilde{\\bm{y}} \\leftarrow s (\\bm{z}_q^+)$\\\\\n $\\hat{\\bm{x}}^+ \\leftarrow f (\\bm{z}_q^-, \\mathrm{sg}(\\tilde{\\bm{y}}))$\\\\\n $\\hat{\\bm{x}}^- \\leftarrow f (\\bm{z}_q^-, \\bm{0})$\\\\\n Compute $L_\\mathrm{lat}(\\bm{z}_e^-, \\bm{z}_e^+)$, $L_\\mathrm{seg} (\\hat{\\bm{y}}, \\bm{y})$, and $L_\\mathrm{rec} (\\hat{\\bm{x}}^-, \\hat{\\bm{x}}^+, \\bm{x})$.\\\\\n Split the batch of $\\{\\bm{z}_q^-\\}$ into subbatches of $\\{\\bm{z}^-_h\\}$ and $\\{\\bm{z}^-_d\\}$ according to the presence of abnormal label in each input image.\\\\\n \\For{i = 1, $\\dots$, m}{\n $\\bm{z}^-_h \\leftarrow \\mathrm{sg} (\\bm{z}^-_h)$\\\\\n $\\bm{z}^-_d \\leftarrow \\mathrm{sg} (\\bm{z}^-_d)$\\\\\n Sample a random number $\\epsilon \\sim \\mathbb{U}[0, 1]$.\\\\\n $\\bm{z}^-_m \\leftarrow \\epsilon \\bm{z}^-_h + (1 - \\epsilon) \\bm{z}^-_d$\\\\\n Compute $L_\\mathrm{critic} (\\bm{z}^-_h, \\bm{z}^-_d, \\bm{z}^-_m)$.\\\\\n Update parameters of $g$ to minimize $\\lambda_5 L_\\mathrm{critic}$ using stochastic gradient descent (e.g., Adam).\\\\\n }\n Compute $L_\\mathrm{reg} (\\bm{z}^-_d)$.\\\\\n Update parameters of $e$, $v^\\mp$, $s$, and $f$ to minimize $\\lambda_1 L_\\mathrm{lat} + \\lambda_2 L_\\mathrm{seg} + \\lambda_3 L_\\mathrm{rec} + \\lambda_4 L_\\mathrm{reg}$ using stochastic gradient descent (e.g., Adam). 
\n }\n \\caption{Training of the Feature Decomposing Network}\n \\label{alg:learning_algorithm}\n\\end{algorithm}\n\n\\subsection{Modeling of content-based image retrieval}\n\\label{sec:modeling_of_content-based_image_retrieval}\n\nHere, we formulate the problem of CBIR as finding the closest feature vector from a reference database containing $N$ $D$-dimensional database vectors $\\{ r_n \\}^N_{n = 1}$, given a query vector $q$, as follows:\n\\begin{equation} \n\\mathop{\\rm arg~min}\\limits_{n \\in {1, \\ldots, N}} D(q, r_n),\n\\label{eq:cbir_argmin}\n\\end{equation} \nwhere $D$ is a distance function such as Euclidean distance. \n\nStarting with the abovementioned equation, our considerations for the CBIR system are twofold. First, to enhance computational efficiency in the distance calculation, we propose to binarize latent representations by leveraging the latent spaces that are constructed to be discrete rather than continuous. Then, a novel CBIR framework by utilizing the decomposed latent codes to retrieve images with targeted semantic components is introduced. \n\n\\subsubsection{Binary hashing based on separating hyperplanes}\n\\label{sec:binary_hashing_based_on_separating_hyperplanes}\n\nHere, we consider how to binarize code vectors in a codebook $\\bm{e} = \\{e_k |k = 1, \\ldots, K\\}\\in \\mathbb{R}^{K \\times D}$, which is learned in the feature decomposing network. The goal is to find subspaces that can be traversed by Hamming distance rearrangement. Since each code vector $e_k \\in \\mathbb{R}^D$ has a fixed position in the latent space, the latent space can be divided by $K \\choose 2$ separating hyperplanes that perpendicularly bisect a line segment connecting any two code vectors. Given two code vectors, $e_i$ and $e_j$, points $x$ located on the separating hyperplane can be formulated as follows:\n\\begin{equation} \nH_{(i, j)} (x) = (e_i - e_j) x - \\frac{1}{2} (\\|e_i\\|^2_2 - \\|e_j\\|^2_2) = 0.\n\\label{eq:separating_hyperplane}\n\\end{equation} \nTherefore, the position of a code vector $e_k$ can be binarized according to the side of the separating hyperplanes it is on:\n\\begin{equation} \n\\mathrm{sgn} [H_{i, j} (e_k)], \n\\label{eq:binarized_sign}\n\\end{equation} \nwhere\n\\begin{equation} \n\\mathrm{sgn} (x) = \\begin{cases}\n 1 & (x \\geq 0) \\\\\n -1 & (x < 0) \\\\\n\\end{cases}.\n\\end{equation} \nThen, by considering the positional relationships with respect to all $K \\choose 2$ separating hyperplanes, the continuous codebook $\\bm{e} \\in \\mathbb{R}^{K \\times D}$ can be converted into a binarized codebook $\\bm{b} = \\{b_k | k = 1, \\dots, K\\} \\in \\mathbb{R}^{K \\times E}$, where $E = $ $K \\choose 2$. Experimentally, we confirmed that there was no code vector that is exactly located on any separating hyperplane, which would make Eq. (\\ref{eq:separating_hyperplane}) equal to zero. Since the binary representation corresponding to the location of each vector is uniquely determined, it can be regarded as binary hashing.\n\n\\subsubsection{Optimization of the binarized vector length}\n\\label{sec:optimization_of_binarized_vector_length}\n\nWhen naively performing the abovementioned binarization, the length of each binarized code vector is ${K \\choose 2} = {512 \\choose 2} = 130,816$, which is too long to obtain any computational benefit from the Hamming distance calculation. Therefore, we consider optimization for the length of the binarized code vectors. 
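\nBefore that reduction, the naive binarization itself admits a compact sketch (hypothetical helper, PyTorch for clarity); for $K = 512$ it produces the full $130{,}816$-bit codes discussed above.\n\\begin{verbatim}\nimport torch\nfrom itertools import combinations\n\ndef binarize_codebook(e):\n    # e: (K, D) continuous codebook. For each pair (i, j), the sign of\n    # H_(i,j)(e_k) = (e_i - e_j) . e_k - (|e_i|^2 - |e_j|^2) / 2\n    # encodes which side of the bisecting hyperplane e_k lies on.\n    K = e.size(0)\n    pairs = list(combinations(range(K), 2))  # C(K, 2) hyperplanes\n    bits = torch.empty(K, len(pairs), dtype=torch.int8)\n    for col, (i, j) in enumerate(pairs):\n        h = e @ (e[i] - e[j]) - 0.5 * (e[i].dot(e[i]) - e[j].dot(e[j]))\n        bits[:, col] = (h >= 0).to(torch.int8) * 2 - 1  # {-1, +1}\n    return bits  # binarized codebook b of shape (K, C(K, 2))\n\\end{verbatim}\n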
For nearest neighbor search, local sensitivity is the primary requirement; that is, the relationships among the closest code vectors should remain the same, whereas the positional relationships among distant code vectors need not be strictly preserved. Based on this idea, we optimize the length of the binarized code vectors by removing elements one by one and checking whether the proximity relationships among the closest vectors change. This optimization algorithm is shown in {\\bf \\ref{app:optimization_algorithm}}. Empirically, the length of the code vectors can be reduced to less than 1\\% of the original, allowing the Hamming distance calculation to be much faster than the L2 distance calculation using continuous code vectors. Hereinafter, we denote $\\bm{b}^* = \\{b_k^* | k = 1, \\dots, K\\} \\in \\{-1, 1\\}^{K \\times E^*}$ as the binarized codebook after optimization. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[]{.\/figures\/cbir_overview.jpg}\n \\caption{\\textbf{Overview of the content-based image retrieval framework.} First, a query image and reference images are decomposed into normal and abnormal anatomy codes. Then, by calculating the similarity of the normal anatomy codes between the query image and reference images ($S_\\mathrm{normal}$), images that are similar when viewed as normal anatomies without any abnormality can be retrieved. In contrast, by calculating the similarity of the abnormal anatomy codes ($S_\\mathrm{abnormal}$), images with similar tumor regions can be retrieved. In addition, similarity retrieval based on whole imaging features can be performed by calculating the similarity that combines both normal and abnormal anatomy codes ($S_\\mathrm{sum}$). Note that these similarity measurements can be calculated based on different distance definitions, such as Euclidean distance, angular distance, and Hamming distance.} \n \\label{fig:cbir_overview}\n\\end{figure}\n\n\\subsubsection{Image retrieval based on the decomposed latent codes}\n\\label{sec:query_by_image}\n\nIn the proposed CBIR framework, a query image is decomposed into a normal anatomy code and an abnormal anatomy code through the feature decomposing network ({\\bf Fig. \\ref{fig:cbir_overview}}). Then, the similarity between the query image and the images in the reference dataset is calculated using either one of the latent codes or a combination of both. Specifically, when images are compared as counterfactual normal images, as they would appear without any abnormality, the similarity measurement $S_\\mathrm{normal} (q, r)$ uses only the normal anatomy codes. In contrast, when focusing only on abnormal areas, the similarity measurement $S_\\mathrm{abnormal} (q, r)$ uses only the abnormal anatomy codes. Moreover, to calculate the similarity of the whole imaging features between the query and reference images, we define $S_\\mathrm{sum} (q, r)$ as the summation of both measurements as follows:\n\\begin{equation} \n\\label{eq:dsum}\nS_\\mathrm{sum} (q, r) = S_\\mathrm{normal} (q, r) + S_\\mathrm{abnormal} (q, r).\n\\end{equation} \n\nThese similarity measurements can be calculated based on different distance definitions. Here, for comparison, three types of distances, namely, Euclidean distance $D_\\mathrm{E}$, angular distance $D_\\mathrm{A}$, and Hamming distance $D_\\mathrm{H}$, are calculated for each similarity measurement, as sketched below. 
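\n\nThe following sketch illustrates how the three similarity measurements can be computed from the decomposed latent codes; the dictionary layout, the flattening of the latent grid, and the helper names are assumptions made for exposition.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef euclidean(a, b):\n    # a, b: flattened continuous latent codes, e.g. (64 * 64,)\n    # vectors for an 8 x 8 grid of 64-dimensional code vectors.\n    return float(np.linalg.norm(a - b))\n\ndef hamming(a, b):\n    # a, b: flattened binarized latent codes with entries in {-1, +1}.\n    return int(np.sum(a != b))\n\ndef s_sum(q, r, dist):\n    # q, r: dicts {'normal': ..., 'abnormal': ...} holding the\n    # decomposed latent codes of a query and a reference image.\n    s_normal = dist(q['normal'], r['normal'])\n    s_abnormal = dist(q['abnormal'], r['abnormal'])\n    return s_normal + s_abnormal\n\\end{verbatim}\n\n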
Note that the continuous codebook $\\bm{e}$ is the basis for the Euclidean distance $D_\\mathrm{E}$ and angular distance $D_\\mathrm{A}$, while the optimized binarized codebook $\\bm{b}^*$ is the basis for the Hamming distance $D_\\mathrm{H}$.\n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\\subsection{Dataset}\n\\label{sec:dataset}\n\nWe used brain MR images of gliomas from the 2019 BraTS Challenge \\citep{6975210brats, Bakas2017, TCGAGBM, TCGALGG}, comprising a training dataset of 355 patients (MICCAI\\_BraTS\\_Training), a validation dataset of 125 patients (MICCAI\\_BraTS\\_Validation), and a testing dataset of 167 patients (MICCAI\\_BraTS\\_Testing). For each patient, T1, Gd-enhanced T1, T2, and fluid-attenuated inversion recovery (FLAIR) sequences were obtained. MICCAI\\_BraTS\\_Training contains three segmentation labels of abnormality: Gd-enhancing tumor (ET), peritumoral edema (ED), and necrotic and non-enhancing tumor core (NET). In contrast, MICCAI\\_BraTS\\_Validation and MICCAI\\_BraTS\\_Testing do not have any segmentation labels. No dataset provides ground-truth labels for normal anatomical structures. Therefore, we segmented all three datasets into six normal anatomical labels (left cerebrum, right cerebrum, left cerebellum, right cerebellum, left ventricle, and right ventricle). Moreover, the abnormal regions (ET, ED, and NET) in MICCAI\\_BraTS\\_Validation and MICCAI\\_BraTS\\_Testing were segmented in the same manner as those in MICCAI\\_BraTS\\_Training. The annotation process is described in detail in {\\bf \\ref{app:annotation_process}}.\n\nFor the training of the feature decomposing network, we combined MICCAI\\_BraTS\\_Validation and MICCAI\\_BraTS\\_Testing into a \\emph{training dataset}. The feature decomposing network was then evaluated on MICCAI\\_BraTS\\_Training as a \\emph{test dataset}, which is also utilized in the demonstration of the CBIR system based on the decomposed latent codes. \n\n\\subsection{Preprocessing}\n\\label{sec:pre-processing}\n\nAll four sequences, T1, Gd-enhanced T1, T2, and FLAIR, were concatenated into a four-channel MR volume $\\bm{X} \\in \\mathbb{R}^{4 \\times 240 \\times 240 \\times 155}$. Then, a preprocessing pipeline, including axial image resizing to $256 \\times 256$ and Z-score normalization, was performed. Subsequently, each three-dimensional (3D) MR volume was decomposed into a collection of 2D axial slices $\\{\\bm{x}_1, \\bm{x}_2, \\dots, \\bm{x}_{155} \\in \\mathbb{R}^{4 \\times 256 \\times 256} \\}$. The training and test datasets underwent the same preprocessing pipeline. Data augmentation, such as random rotation and random horizontal flipping, was applied to each image in the training dataset to train the feature decomposing network.\n\n\\subsection{Implementation}\n\\label{sec:implementation}\n\nAll experiments were implemented in Python 3.7 with PyTorch 1.2.0 \\citep{NEURIPS2019_9015} using an NVIDIA Tesla V100 graphics processing unit and CUDA 10.0. For all networks, Adam optimization \\citep{kingma2014adam} was used for training. Network initialization was performed using the method described by He et al. \\citep{he2015delving}.\n\n\\subsubsection{Feature decomposing network}\n\\label{sec:implement_feature_decomposing_network}\n\nWhen using a quantized latent space, such as that of VQ-VAE, the size of the latent representation (i.e., the width and height of the feature maps) has a significant effect on the quality of image generation \\citep{razavi2019generating}. 
Therefore, for comparison, we configured several architectures of the feature decomposing network depending on the size of the latent space. Hereinafter, we denote a feature decomposing network with a latent representation of size $H^\\prime \\times W^\\prime$ as $\\mathrm{FDN}_{H^\\prime \\times W^\\prime}$. In this study, we compared $\\mathrm{FDN}_{4 \\times 4}$, $\\mathrm{FDN}_{8 \\times 8}$, $\\mathrm{FDN}_{16 \\times 16}$, and $\\mathrm{FDN}_{32 \\times 32}$, each with or without the distribution matching for the normal anatomy codes. \n\nAs described in \\textbf{Section \\ref{sec:training_of_feature_decomposing_network}}, the feature decomposing network comprises the encoder, segmentation decoder, and image decoder. See {\\bf \\ref{app:detailed_architecture}} for the details of the architectures of these neural networks. The input for the encoder is required to be a four-channel 2D image with the size of $4 \\times 256 \\times 256$ $(= \\mathrm{channel} \\times \\mathrm{height} \\times \\mathrm{width})$. The encoder has a common trunk that takes an input image and extracts its low-level features, and then bifurcates into two branches with the same architecture, one for the normal anatomy code and the other for the abnormal anatomy code. The number of repetitions of a structure consisting of a residual block \\citep{He2015} with a strided convolution (ResBlock + StridedConv) was set adaptively according to the size of the latent representation. For example, in $\\mathrm{FDN}_{8 \\times 8}$, the encoder utilized $32-64-128-128-128-128$ filter kernels in each layer. The encoder outputs two latent representations corresponding to the normal and abnormal semantic components, $\\bm{z}_e^-$ and $\\bm{z}_e^+$, with the size of $64 \\times H^\\prime \\times W^\\prime$. These latent representations are subsequently quantized to $\\bm{z}_q^-$ and $\\bm{z}_q^+$ through vector quantization. The dimension of the code vectors $D$ and the number of code vectors $K$ in the codebooks were fixed as follows: $D = 64$ and $K = 512$.\n\nThe image and segmentation decoders have almost the same architecture, except for the normalization layers, where the image decoder utilizes SPADE modules to propagate the softmax logits $\\tilde{\\bm{y}}$ from the segmentation decoder. The number of repetitions of a structure comprising an upsampling module with bilinear interpolation followed by a residual block with or without SPADE [Upsample + (SPADE-)ResBlock] was likewise adapted to the size of the latent representation. For example, in $\\mathrm{FDN}_{8 \\times 8}$, the image and segmentation decoders utilized $128-128-128-128-64-32$ filter kernels in each layer. The segmentation decoder takes $\\bm{z}_q^+$ as an input and outputs a segmentation map $\\hat{\\bm{y}}$ with the size of $4 \\times 256 \\times 256$ $(= \\mathrm{channel} \\times \\mathrm{height} \\times \\mathrm{width})$, whose channel number corresponds to the number of abnormality labels (ET, ED, and NET) plus a background label. In addition, the softmax logits $\\tilde{\\bm{y}}$ with the size of $4 \\times 256 \\times 256$ are retained as a collateral input for the image decoder. The image decoder takes $\\bm{z}_q^-$ as its primary input. Depending on the presence or absence of the softmax logits $\\tilde{\\bm{y}}$ fed through the SPADE modules, it generates either the entire reconstruction of the input image $\\hat{\\bm{x}}^+$ or a normal-appearing reconstruction $\\hat{\\bm{x}}^-$, respectively.
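\n\nTo make this conditioning mechanism concrete, the following PyTorch sketch shows a SPADE-style normalization layer of the kind used in the image decoder; the parameter-free instance normalization and the hidden width are illustrative assumptions rather than our exact implementation.\n\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass SPADE(nn.Module):\n    # Normalizes a feature map h, then modulates it with a scale\n    # and bias predicted from the (resized) segmentation logits y.\n    def __init__(self, feat_ch, seg_ch=4, hidden=64):\n        super().__init__()\n        self.norm = nn.InstanceNorm2d(feat_ch, affine=False)\n        self.shared = nn.Sequential(\n            nn.Conv2d(seg_ch, hidden, 3, padding=1), nn.ReLU())\n        self.gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)\n        self.beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)\n\n    def forward(self, h, y):\n        y = nn.functional.interpolate(y, size=h.shape[-2:],\n                                      mode='nearest')\n        a = self.shared(y)\n        return self.norm(h) * (1 + self.gamma(a)) + self.beta(a)\n\\end{verbatim}\n\nFeeding $\\tilde{\\bm{y}}$ to every SPADE layer yields $\\hat{\\bm{x}}^+$, whereas feeding an all-zero map yields $\\hat{\\bm{x}}^-$.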
\n\nWhen using distribution matching to regularize the normal anatomy codes (see \\textbf{Section \\ref{sec:density_matching}}), the critic network is trained to approximate the Wasserstein distance $d_\\mathrm{W}$ between the distributions of the normal anatomy codes originating from healthy images $\\mathcal{Z}_h^-$ and those originating from diseased images $\\mathcal{Z}_d^-$. The detailed network architecture is described in {\\bf \\ref{app:detailed_architecture}}. The number of inner iterations in the training of the critic network $m$ was set to 5, and the balancing term for the gradient penalty $\\lambda_{\\mathrm{gp}}$ was 10.0. To balance the magnitudes of the loss functions, $\\lambda_4$ and $\\lambda_5$ were set to $1.0 \\times 10^{-4}$ for $\\mathrm{FDN}_{4 \\times 4}$ and $\\mathrm{FDN}_{8 \\times 8}$ and to $5.0 \\times 10^{-4}$ for $\\mathrm{FDN}_{16 \\times 16}$ and $\\mathrm{FDN}_{32 \\times 32}$. Larger values of $\\lambda_4$ and $\\lambda_5$ tended to cause mode collapse, where the encoder learned only a few modes of the sample distribution. The other hyperparameters were shared across the configurations as follows: batch size = 112, number of training epochs = 400, learning rate = $1.0 \\times 10^{-4}$, weight decay = $1.0 \\times 10^{-5}$, $\\lambda_1 = 0.25$, $\\lambda_2 = 5.0$, $\\lambda_3 = 5.0$, and $\\gamma = 2.0$.\n\n\\subsubsection{Preparation of codebooks}\n\\label{sec:content-based_image_retrieval_system}\n\nThe CBIR framework was constructed on a per-image basis; that is, each MR volume was separated into slices along the axial axis, in the same manner as in the training of the feature decomposing network. By straightforwardly applying the trained feature decomposing network, every single image in the test dataset was decomposed into a normal anatomy code $\\bm{z}_q^-$ and an abnormal anatomy code $\\bm{z}_q^+$. These latent codes can be considered as sets of code vectors extracted from the continuous codebooks $\\bm{e}^\\mp$, which are subject to the calculation of the Euclidean distance $D_\\mathrm{E}$ and angular distance $D_\\mathrm{A}$. Then, according to the methods described in \\textbf{Section \\ref{sec:binary_hashing_based_on_separating_hyperplanes}} and \\textbf{Section \\ref{sec:optimization_of_binarized_vector_length}}, we prepared the corresponding set of binarized latent codes. First, code vectors with an L2 norm $<$ $1.0 \\times 10^{-5}$ were rounded to the zero vector. Then, each code vector was binarized with the optimized length to obtain the optimized binarized codebooks $\\bm{b}^{\\mp*}$. Binarized latent codes were acquired by looking up the optimized binarized codebooks with the same indices as those of the corresponding latent codes in the continuous codebooks. The similarity between binarized code vectors was evaluated based on the Hamming distance $D_\\mathrm{H}$. \n\n\\subsection{Evaluation}\n\\label{sec:evaluation}\n\nSince end-to-end learning of all the modules cannot be performed, we evaluated each component individually to find the optimal combination for the CBIR framework. The evaluation process comprised three stages. First, to find the optimal configuration of the feature decomposing network, we assessed the error of image reconstruction, the performance of abnormality segmentation, the ``leakiness'' of feature decomposition, and the compactness of the codebooks. 
Second, based on the feature decomposing network with the selected configuration, we observed the extent to which the distance relationships between code vectors were changed through binarization. Lastly, we demonstrated the effectiveness of the proposed CBIR framework based on the decomposed latent codes from qualitative and quantitative perspectives.\n\n\\subsubsection{Image reconstruction error}\n\\label{sec:image_reconstruction_error}\n\nThe image decoder performs conditional image generation while switching between the entire reconstructions $\\hat{\\bm{x}}^+$ and normal-appearing reconstructions $\\hat{\\bm{x}}^-$ (see \\textbf{Section \\ref{sec:feature_decoding}}). At the training stage, the entire reconstructions are trained to match the input images, whereas the normal-appearing reconstructions are trained to match only the region of the input images outside the abnormal area, as indicated by the mask matrix $\\bm{M}^-$. Therefore, the reconstruction error of the entire reconstructions and that of the normal-appearing reconstructions can be calculated as $\\sum \\| \\bm{x} - \\hat{\\bm{x}}^+ \\|_1$ and $\\sum \\| \\bm{M}^- \\odot \\bm{x} - \\bm{M}^- \\odot \\hat{\\bm{x}}^- \\|_1$, respectively, where $\\sum$ denotes pixel-wise summation of the residual errors.\n\n\\subsubsection{Segmentation performance}\n\\label{sec:segmentation_performance}\n\nThe segmentation decoder predicts a segmentation output $\\hat{\\bm{y}}$, which should be close to the ground-truth label $\\bm{y}$ (see \\textbf{Section \\ref{sec:feature_decoding}}). The segmentation performance was evaluated based on the Dice score \\citep{dice1945} with respect to the abnormality labels (ET, ED, and NET). The segmentation outputs for the 2D axial slices $\\{\\bm{x}_1, \\bm{x}_2, \\dots, \\bm{x}_{155}\\}$ were concatenated into a volume so that the Dice score could be evaluated volume-wise. \n\n\\subsubsection{Leakiness of feature decomposition}\n\\label{sec:leakiness_of_feature_decomposition}\n\nAs described in \\textbf{Section \\ref{sec:density_matching}}, distribution matching is introduced to ensure that each decomposed latent code corresponds to the targeted semantic content of the input images without being ``leaky'' with respect to other features. This is especially important for the normal anatomy codes because there is no ground-truth for normal-appearing reconstructions originating from diseased images. However, observation of the latent space alone cannot tell whether the representations contained therein are decomposed in the desired manner. Hence, we observe the reconstructed images generated from these latent codes through the image decoder. In the evaluation, a batch containing only the normal-appearing reconstructions $\\{\\hat{\\bm{x}}^-\\}$ was fed into a classification network to predict whether each normal-appearing reconstruction was derived from healthy images $\\mathcal{X}_h$ or diseased images $\\mathcal{X}_d$. The architecture of the classification network and the training settings based on the training dataset are described in {\\bf \\ref{app:detailed_architecture_classifier}}. If the normal-appearing reconstructions do not contain any abnormal characteristics, the classification network should not be able to identify their origin, even when they are derived from diseased images. Otherwise, abnormal imaging features have ``leaked'' into the normal-appearing reconstructions, making their origin from the diseased images distinguishable, as sketched below. 
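\n\nA minimal sketch of this check, assuming a trained binary classifier whose positive class denotes a diseased origin (the function and variable names are hypothetical):\n\n\\begin{verbatim}\nimport torch\n\n@torch.no_grad()\ndef leakiness_ppv(classifier, recon_normal, from_diseased):\n    # recon_normal: (N, 4, 256, 256) normal-appearing\n    # reconstructions; from_diseased: (N,) boolean flags telling\n    # whether each source image actually contained an abnormality.\n    logits = classifier(recon_normal)          # (N, 2)\n    pred = logits.argmax(dim=1).bool()         # 1 = 'diseased origin'\n    tp = (pred & from_diseased).sum().float()  # true positives\n    # PPV = TP / (TP + FP) = TP / all positive predictions\n    return (tp / pred.sum().clamp(min=1)).item()\n\\end{verbatim}\n\n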
The positive predictive value (PPV) of the classification network with respect to the origin from the diseased images was evaluated in the test dataset as an indicator of leakiness. \n\n\\subsubsection{Compactness of the codebooks}\n\\label{sec:compactness_of_codebooks}\n\nIn some configurations of the feature decomposing network, it was empirically observed that a portion of the $K$ code vectors stored in a codebook exhibited small norms close to zero (see \\textbf{\\ref{app:distribution_of_norm}}). This suggests that such code vectors did not encode features distinguishable from the zero vector and thus could be approximated by it. In that case, the number of actually significant code vectors was less than $K$. Hence, we define the compactness of a codebook as the ratio of insignificant code vectors, whose L2 norms fall below a threshold value of $1.0 \\times 10^{-5}$, to the total number of code vectors $K$ in the codebook. A large compactness value implies that the codebook is effectively compact, with fewer than the initial $K$ code vectors. This is preferable in terms of the computational cost of similarity search because the number of code vectors to be compared can be approximately reduced.\n\n\\subsubsection{Validity of the binarization process}\n\\label{sec:validity_of_binarization_process}\n\nThe validity of the binarization process, which is introduced in \\textbf{Section \\ref{sec:binary_hashing_based_on_separating_hyperplanes}} and \\textbf{Section \\ref{sec:optimization_of_binarized_vector_length}}, was evaluated from three perspectives: the concordance of the distance relationships of the code vectors before and after the binarization process, the compression ratio of the code vector length, and a comparison of computational time. To demonstrate concordance, we investigated whether the nearest neighbor relationships among the binarized code vectors changed with respect to the continuous ones. Since the latent space is composed of a fixed set of code vectors, the distance relationships as a whole can be regarded as almost unchanged as long as the distance relationships among the individual code vectors are consistent. Therefore, the distance relationship of each code vector with the other remaining $K - 1$ code vectors was assessed and compared between the two types of codebooks. In the evaluation, for each code vector, the indices of the top-$Q$ closest code vectors were obtained. The distance calculation was performed using the Hamming distance $D_\\mathrm{H}$ for the optimized binarized codebook $\\bm{b}^*$ and the Euclidean distance $D_\\mathrm{E}$ for the continuous codebook $\\bm{e}$. Then, the concordance was calculated as the Jaccard similarity coefficient between the two sets of indices. The compression ratio of the code vectors owing to the optimization process was calculated as $\\frac{E^*}{E}$. See {\\bf \\ref{app:comparison_of_computational_time}} for the methodological details of the comparison of computational time in the distance calculation.\n\n\\subsubsection{CBIR performance based on the decomposed latent codes}\n\\label{sec:eval_query_by_image}\n\nTo quantify the image retrieval performance using the decomposed latent codes (see \\textbf{Section \\ref{sec:query_by_image}}), the image containing the largest area of abnormality was selected from each MR volume in the test dataset based on the ground-truth labels. We refer to these representative images as \\emph{query images}. 
For each query image, a \\emph{reference dataset} was constructed comprising the rest of the images in the test dataset except for the images in the same MR volume as the query image. The images in the reference dataset are called \\emph{reference images}. Then, for each MR volume in the reference dataset, the image with the latent code closest to that of the query image was obtained. The obtained images were sorted with respect to the similarity. Lastly, the top-$Q$ closest images, each of which belonged to a different MR volume, were provided as the retrieved images. This MR volume-wise retrieval is appropriate for evaluating the variety of images retrieved by a single query image because one MR volume usually contains several consecutive axial slices with similar appearance (see the sketch at the end of this subsection). \n\nTo report the retrieval performance, the mean Dice coefficient based on the two types of ground-truth labels (see \\textbf{Section \\ref{sec:dataset}}), one for the six categories of normal anatomy and the other for the three categories of abnormality, was assessed between each query image and the top-$Q$ closest images from the reference dataset. The mean Dice coefficient was calculated for all query images and then averaged. As shown in \\textbf{Fig. \\ref{fig:cbir_overview}}, when using $S_\\mathrm{normal}$, the mean Dice coefficients based on the categories of normal anatomy should be high because the similarity is evaluated based on the normal anatomy codes that correspond to normal-appearing reconstructions. Conversely, image retrieval using $S_\\mathrm{abnormal}$ should be accompanied by high mean Dice coefficients in the categories of the abnormalities because it evaluates the similarity based on the abnormal anatomy codes relevant to tumor segmentation labels. Therefore, the mean Dice coefficient based on the ground-truth labels of the normal or abnormal anatomical categories was reported for the image retrieval using $S_\\mathrm{normal}$ or $S_\\mathrm{abnormal}$, respectively. For the similarity retrieval using $S_\\mathrm{sum}$, the mean Dice coefficients based on the ground-truth labels of both the normal and abnormal anatomical categories were averaged because the similarity measurement should represent the features of the whole images. \n\nFor comparison, a \\emph{brute-force search} that directly maximizes the Dice overlap was performed for each query image to retrieve the top-$Q$ closest images. In comparing the retrieval performance using $S_\\mathrm{normal}$ or $S_\\mathrm{abnormal}$, the brute-force search retrieved images by maximizing the Dice coefficients between the ground-truth labels of the normal or abnormal anatomical categories of a query image and those of the reference images, respectively. For the similarity measurement using $S_\\mathrm{sum}$, the brute-force search maximized the simple summation of the Dice coefficients of the ground-truth labels of the normal and abnormal anatomical categories. Similar image retrieval using the brute-force search was conducted in the same MR volume-wise manner. While the brute-force search requires significant computational time and ground-truth labels for all query and reference images, the mean Dice coefficients obtained can be used as an oracle (technical upper bound). 
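\n\nAs a summary of the retrieval protocol, the following sketch performs the MR volume-wise top-$Q$ nearest neighbor search; the array layout and function names are assumptions made for exposition.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef volumewise_retrieval(q_code, ref_codes, ref_volume_ids,\n                         dist, Q=10):\n    # q_code: latent code of the query image; ref_codes: list of\n    # latent codes of all reference images; ref_volume_ids: MR\n    # volume id of each reference image. Returns the indices of\n    # the top-Q closest images, at most one per MR volume.\n    d = np.array([dist(q_code, r) for r in ref_codes])\n    best = {}  # closest image index within each MR volume\n    for idx, vol in enumerate(ref_volume_ids):\n        if vol not in best or d[idx] < d[best[vol]]:\n            best[vol] = idx\n    ranked = sorted(best.values(), key=lambda i: d[i])\n    return ranked[:Q]\n\\end{verbatim}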
\n\n\\section{Results}\n\\label{sec:results}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[]{.\/figures\/example_training_results.jpg}\n \\caption{\\textbf{Example results of model training based on $\\boldsymbol{\\mathrm{FDN}_{8 \\times 8}}$ with distribution matching.} The first row shows the input images $\\bm{x}$. The entire reconstructions $\\hat{\\bm{x}}^+$ (second row) were generated from both normal and abnormal anatomy codes, whereas the normal-appearing reconstructions $\\hat{\\bm{x}}^-$ (third row) were generated from the normal anatomy codes only. A clear distinction can be observed between $\\hat{\\bm{x}}^+$ and $\\hat{\\bm{x}}^-$ at abnormal regions, which existed in both $\\bm{x}$ and $\\hat{\\bm{x}}^+$ but not in $\\hat{\\bm{x}}^-$. The fourth and fifth rows show the ground-truth segmentation labels $\\bm{y}$ representing the tumor region and the predicted labels $\\hat{\\bm{y}}$, respectively. The predicted segmentation labels tended to be rounded and did not recover the detailed shape of each region. We consider this a natural consequence: as a tradeoff, the compressed latent representation, which is advantageous for the computational cost of similarity search, does not have sufficient capacity to preserve the detailed features of the input image. ET, Gd-enhancing tumor; ED, peritumoral edema; NET, necrotic and non-enhancing tumor core.}\n \\label{fig:example_training_results}\n\\end{figure*}\n\nHere, we first present an example training process of the feature decomposing network. Then, several configurations of the feature decomposing network, particularly focusing on the size of the latent representation and the use of distribution matching, are compared to select a model to be exploited for the downstream task. Subsequently, using the feature decomposing network with the selected configuration, we confirm the validity of binary hashing. Lastly, the CBIR system utilizing the decomposed latent codes is qualitatively and quantitatively evaluated. \n\n\\subsection{Example training results of the feature decomposing network}\n\\label{sec:training_results_of_the_feature_decomposing_network}\n\nAmong the several configurations of the feature decomposing network, an example of the training results of $\\mathrm{FDN}_{8 \\times 8}$ with distribution matching at epoch 400 is shown in {\\bf Fig. \\ref{fig:example_training_results}}. The first, second, and third rows show the input images $\\bm{x}$, the corresponding entire reconstructions $\\hat{\\bm{x}}^+$, and the normal-appearing reconstructions $\\hat{\\bm{x}}^-$, respectively. We can observe a clear distinction between the two types of reconstructions, $\\hat{\\bm{x}}^+$ and $\\hat{\\bm{x}}^-$, especially in the abnormal regions, which existed in the input images $\\bm{x}$ and entire reconstructions $\\hat{\\bm{x}}^+$ but disappeared in the normal-appearing reconstructions $\\hat{\\bm{x}}^-$. Note that the regions where the abnormality had originally existed were replaced by the imaging characteristics of normal neuroanatomy in the normal-appearing reconstructions $\\hat{\\bm{x}}^-$. This indicates the complementary capacity derived from the normal anatomy codes. In addition, because the two types of reconstructions, $\\hat{\\bm{x}}^+$ and $\\hat{\\bm{x}}^-$, share the same normal anatomy codes, their appearances outside the region of abnormality are almost identical. Furthermore, the fourth and fifth rows in {\\bf Fig. 
\\ref{fig:example_training_results}} show the ground-truth segmentation labels for the abnormality (ET, ED, and NET) and the predicted labels from the segmentation decoder, respectively. The predicted segmentation labels tended to be rounded, without precisely recovering the detailed shape of each tumor region. This is expected because the latent representation, whose spatial resolution is only $8 \\times 8$, is compressed in favor of computational efficiency at the time of similarity search and therefore lacks the capacity to preserve fine details. For the same reason, we did not pursue the generation quality of the reconstructed images as a primary purpose. Although the detailed textural appearance of realistic MR images was not perfectly reproduced, we consider the reconstructions sufficient for recognizing the anatomical location and the presence of abnormality. The learning process of this example model is demonstrated in {\\bf \\ref{app:example_learning_process}}.\n\n\\subsection{Comparison between several configurations of the feature decomposing network}\n\\label{sec:comparison_between_several_configuration}\n\nTo evaluate the effects of the size of the latent representation and the use of distribution matching, we compared several configurations of the feature decomposing network, namely, $\\mathrm{FDN}_{4 \\times 4}$, $\\mathrm{FDN}_{8 \\times 8}$, $\\mathrm{FDN}_{16 \\times 16}$, and $\\mathrm{FDN}_{32 \\times 32}$, with or without distribution matching (see \\textbf{Section \\ref{sec:implement_feature_decomposing_network}}). See {\\bf Fig. \\ref{fig:comparison_reconstructions}} for the visual results, where the two types of reconstructed images, entire reconstructions $\\hat{\\bm{x}}^+$ and normal-appearing reconstructions $\\hat{\\bm{x}}^-$, for the same input image are shown for each configuration. As the resolution of the latent space increased from $4 \\times 4$ to $32 \\times 32$, the quality of the reconstructed images improved, showing that the fine textures of the brain MR images were reproduced more realistically. As for the difference between the entire reconstructions $\\hat{\\bm{x}}^+$ and normal-appearing reconstructions $\\hat{\\bm{x}}^-$, differences are expected in the areas that correspond to the abnormal sites: the abnormal areas should appear only in the entire reconstructions $\\hat{\\bm{x}}^+$ and should be diminished in the normal-appearing reconstructions $\\hat{\\bm{x}}^-$. When the resolution of the latent representation was relatively low (i.e., $\\mathrm{FDN}_{4 \\times 4}$ and $\\mathrm{FDN}_{8 \\times 8}$), the difference was clear even without distribution matching. Conversely, particularly in $\\mathrm{FDN}_{16 \\times 16}$ and $\\mathrm{FDN}_{32 \\times 32}$, the abnormal regions, which appear as low-intensity areas in the Gd-enhanced T1 sequence and high-intensity areas in the FLAIR sequence, were partly reproduced even in the normal-appearing reconstructions $\\hat{\\bm{x}}^-$, especially when distribution matching was not utilized (see arrows in {\\bf Fig. \\ref{fig:comparison_reconstructions}}). As mentioned in \\textbf{Section \\ref{sec:leakiness_of_feature_decomposition}}, we call this failure in decomposing representations the ``leakiness'' of abnormal features into the normal anatomy codes $\\bm{z}_q^-$. 
Note that the distribution matching imposed on the normal anatomy codes mitigated this leakiness and encouraged the normal-appearing reconstructions to replace the region of abnormality with the normal imaging features that would have existed therein if the sample were healthy (indicated by arrowheads in {\\bf Fig. \\ref{fig:comparison_reconstructions}}). \n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[]{.\/figures\/comparison_reconstructions.jpg}\n \\caption{\\textbf{Comparison of reconstructed images according to different configurations of the feature decomposing network.} As the spatial resolution of the latent representation increases from $\\mathrm{FDN}_{4 \\times 4}$ to $\\mathrm{FDN}_{32 \\times 32}$, the visual fidelity of the reconstructed images tends to improve. Note that there are partially reconstructed abnormal regions (arrows) even in the normal-appearing images $\\hat{\\bm{x}}^-$ for the models with relatively high resolutions (i.e., $\\mathrm{FDN}_{16 \\times 16}$ and $\\mathrm{FDN}_{32 \\times 32}$). These ``leaky'' appearances can be alleviated by imposing the distribution matching (arrowheads). DM, distribution matching.}\n \\label{fig:comparison_reconstructions}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[]{.\/figures\/image_recon_results.jpg}\n \\caption{\\textbf{Image reconstruction results.} Mean $\\pm$ standard deviation of the image reconstruction error (see \\textbf{Section \\ref{sec:image_reconstruction_error}}) is shown for each configuration of the feature decomposing network. As the resolution of the latent representation increased from $\\mathrm{FDN}_{4 \\times 4}$ to $\\mathrm{FDN}_{32 \\times 32}$, the quality of image reconstruction improved, with decreased reconstruction errors. Note that the distribution matching did not cause any negative effect. FDN, feature decomposing network; DM, distribution matching.}\n \\label{fig:image_recon_results}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[]{.\/figures\/segmentation_results.jpg}\n \\caption{\\textbf{Segmentation results.} Mean $\\pm$ standard deviation of the averaged Dice scores for the abnormality labels (see \\textbf{Section \\ref{sec:segmentation_performance}}) is shown for each configuration of the feature decomposing network. As the resolution of the latent representation increased from $\\mathrm{FDN}_{4 \\times 4}$ to $\\mathrm{FDN}_{32 \\times 32}$, the segmentation performance improved. Note that the distribution matching did not cause any negative effect. FDN, feature decomposing network; DM, distribution matching.}\n \\label{fig:segmentation_results}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[]{.\/figures\/leakiness_results.jpg}\n \\caption{\\textbf{Evaluation of the leakiness in the normal-appearing reconstructions.} A classification network was trained on four different types of reconstructions, namely, entire reconstructions, entire reconstructions with distribution matching, normal-appearing reconstructions, and normal-appearing reconstructions with distribution matching, to classify whether the original images included the abnormality or not (see \\textbf{Section \\ref{sec:leakiness_of_feature_decomposition}}). Here, leakiness of feature decomposition means the tendency of the normal-appearing images to unintentionally contain distinguishable features of abnormality. 
Mean $\\pm$ standard deviation of the positive predictive values (PPVs) for the origin from diseased images, measured in the test dataset, is shown as an indicator of the leakiness. As the latent resolution increased, the PPVs increased (orange bars), especially without the distribution matching. This leakiness of abnormal features was alleviated by imposing the distribution matching (blue bars). FDN, feature decomposing network; DM, distribution matching.}\n \\label{fig:leakiness_results}\n\\end{figure}\n\n\\begin{table}[t]\n\\resizebox{\\linewidth}{!}{\\begin{tabular}{@{}lcc@{}}\n\\toprule\n\\textbf{FDN model} & \\textbf{Normal anatomy code} & \\textbf{Abnormal anatomy code} \\\\ \\midrule\n$\\mathrm{FDN}_{4 \\times 4}$ w\/ DM & 0.79 & 0.75 \\\\\n$\\mathrm{FDN}_{4 \\times 4}$ w\/o DM & 0.75 & 0.72 \\\\\n$\\mathrm{FDN}_{8 \\times 8}$ w\/ DM & 0.68 & 0.69 \\\\\n$\\mathrm{FDN}_{8 \\times 8}$ w\/o DM & 0.63 & 0.67 \\\\\n$\\mathrm{FDN}_{16 \\times 16}$ w\/ DM & 0.71 & 0.52 \\\\\n$\\mathrm{FDN}_{16 \\times 16}$ w\/o DM & 0.25 & 0.57 \\\\\n$\\mathrm{FDN}_{32 \\times 32}$ w\/ DM & 0.31 & 0.0 \\\\\n$\\mathrm{FDN}_{32 \\times 32}$ w\/o DM & 0.0 & 0.0 \\\\ \\bottomrule\n\\end{tabular}}\n\\caption{\\textbf{Compactness of the codebooks.} The compactness, i.e., the ratio of insignificant code vectors showing small L2 norms below a threshold to the total number of code vectors (see \\textbf{Section \\ref{sec:compactness_of_codebooks}}), is shown for each configuration of the feature decomposing network. FDN, feature decomposing network; DM, distribution matching.} \n\\label{tab:compactness_of_codebooks}\n\\end{table}\n\nSubsequently, we quantitatively compared the several configurations based on the reconstruction error (see \\textbf{Section \\ref{sec:image_reconstruction_error}}), the performance of abnormality segmentation (see \\textbf{Section \\ref{sec:segmentation_performance}}), the leakiness of feature decomposition (see \\textbf{Section \\ref{sec:leakiness_of_feature_decomposition}}), and the compactness of the codebooks (see \\textbf{Section \\ref{sec:compactness_of_codebooks}}). As shown in {\\bf Fig. \\ref{fig:image_recon_results}} and {\\bf Fig. \\ref{fig:segmentation_results}}, the performance of both image reconstruction and segmentation improved as the resolution of the latent representation increased, as demonstrated by the reduced reconstruction errors and increased Dice coefficients, respectively. Since the distribution matching is expected to impose some regularization effect on the codebooks, the performance of image reconstruction and segmentation could in principle be degraded by training with distribution matching; however, the data indicate that it did not negatively affect either image reconstruction or segmentation performance. Regarding the leakiness evaluated in {\\bf Fig. \\ref{fig:leakiness_results}}, without the distribution matching, the tendency of the normal-appearing reconstructions to unintentionally contain distinguishable features of abnormality became apparent as the latent resolution increased. For example, this is evident for $\\mathrm{FDN}_{32 \\times 32}$, where the PPV for the origin from diseased images of the classification network trained on normal-appearing reconstructions (see the orange bar in {\\bf Fig. \\ref{fig:leakiness_results}}) was as high as those of the classification networks trained on entire reconstructions (see the yellow and gray bars in {\\bf Fig. \\ref{fig:leakiness_results}}). 
However, the leakiness of abnormal features could be mitigated to some extent by introducing distribution matching. Note that the PPVs for the origin from diseased images of the classification network trained on normal-appearing reconstructions (see the orange bars in {\\bf Fig. \\ref{fig:leakiness_results}}) consistently decreased when distribution matching was imposed (see the blue bars in {\\bf Fig. \\ref{fig:leakiness_results}}). Regarding the compactness of the codebooks shown in {\\bf Table \\ref{tab:compactness_of_codebooks}}, models with lower latent resolutions showed higher values. In addition, it is noteworthy that distribution matching had the effect of rendering the codebooks more compact, with a higher ratio of insignificant code vectors, in all cases except the abnormal anatomy codes of $\\mathrm{FDN}_{16 \\times 16}$ with distribution matching. See {\\bf \\ref{app:distribution_of_norm}} for the distribution of the norms of the code vectors in each configuration, which also supports that models with low latent space resolution had more code vectors with small norms.\n\nIn summary, the spatial resolution of the latent representation governs a tradeoff between the performance of image reconstruction and segmentation on the one hand and the leakiness of feature decomposition and the compactness of the codebooks on the other. Note that, when the length and width of the latent space double (e.g., from $8 \\times 8$ to $16 \\times 16$), the number of latent positions, and hence the cost of comparing code vectors, quadruples. Hereinafter, we selected $\\mathrm{FDN}_{8 \\times 8}$ with distribution matching for the following evaluations because this model exhibited intermediate characteristics, achieving good feature decomposition and a simple latent representation with compact codebooks, which is favorable at the time of similarity calculation. \n\n\\subsection{Assessment of the binarization process}\n\\label{sec:assessment_of_binarization}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[]{.\/figures\/binarization.jpg}\n \\caption{\\textbf{Relationship between Hamming distance and Euclidean distance.} The distances between one code vector and all other code vectors were calculated by Hamming distance (horizontal axis) and Euclidean distance (vertical axis). \\textbf{(a)} Distance relationship between the continuous codebook and the binarized codebook before optimization. \\textbf{(b)} The same relationship after the optimization of the binarized codebook length. Note that not only the nearest relationships but also the global relationships are maintained through the optimization process.}\n \\label{fig:binarization}\n\\end{figure}\n\nUsing $\\mathrm{FDN}_{8 \\times 8}$ with distribution matching as the feature decomposing network, we validated the process of the binarization of the code vectors (see \\textbf{Section \\ref{sec:validity_of_binarization_process}}). Based on the techniques described in \\textbf{Section \\ref{sec:optimization_of_binarized_vector_length}}, the length of the optimized binarized vector was shortened from $130,816$ to 789 for the normal anatomy codes and to 292 for the abnormal anatomy codes. The compression ratios $\\frac{E^*}{E}$ for the normal and abnormal anatomy codes were 0.60\\% and 0.22\\%, respectively. {\\bf Fig. \\ref{fig:binarization}} demonstrates an example result on the relationship between the Hamming distance $D_\\mathrm{H}$ and Euclidean distance $D_\\mathrm{E}$ before ({\\bf Fig. \\ref{fig:binarization}a}) and after ({\\bf Fig. \\ref{fig:binarization}b}) optimization of the vector length. 
Note that, even though the optimization process considered only the nearest neighbor relationships, the global distance relationships were also maintained. The concordances of the top 1, 5, and 10 closest relationships between the Hamming distance calculation using the optimized binarized codebook $\\bm{b}^*$ and the Euclidean distance calculation using the continuous codebook $\\bm{e}$ were 0.81, 0.89, and 0.90 for the normal anatomy codes and 0.81, 0.88, and 0.90 for the abnormal anatomy codes, respectively. Notably, the relationships between the binarized codebook $\\bm{b}$ before the optimization and the continuous codebook $\\bm{e}$ were at similar levels, with concordances of the top 1, 5, and 10 closest relationships of 0.82, 0.86, and 0.89 for the normal anatomy codes and 0.84, 0.87, and 0.89 for the abnormal anatomy codes, respectively. Therefore, the optimization of the binarized vector length did not degrade the distance relationships with respect to the original Euclidean distance. The Hamming distance calculation using the optimized binarized codebook $\\bm{b}^*$ reduced the computational time by 48.3\\% and 64.5\\% for the normal and abnormal anatomy codes, respectively, compared with the Euclidean distance calculation using the continuous codebook $\\bm{e}$ (see \\textbf{\\ref{app:comparison_of_computational_time}}).\n\n\\subsection{Retrieval results based on the decomposed latent codes}\n\\label{sec:result_query_by_image}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[]{.\/figures\/query_by_image.jpg}\n \\caption{\\textbf{Example results of CBIR based on the decomposed latent codes.} Image retrieval was performed volume-wise, comparing the closest images from each MR volume. Each similarity was calculated according to the Euclidean distance $D_\\mathrm{E}$ based on the continuous codebooks $\\bm{e}$. Retrieved images are arranged from left to right, starting with the one closest to the query vector. ({\\bf a}) Similarity calculation based on normal anatomy codes $S_\\mathrm{normal}$ retrieved images with similar normal anatomical labels, irrespective of gross abnormalities. Query and retrieved images with ground-truth labels of the normal anatomical categories are shown. ({\\bf b}) Similarity calculation based on abnormal anatomy codes $S_\\mathrm{abnormal}$ retrieved images with similar abnormal anatomical labels. Query and retrieved images with ground-truth labels of the abnormal anatomical categories are shown. Note the variety of normal anatomical contexts of the retrieved images. ({\\bf c}) Similarity calculation using $S_\\mathrm{sum}$ retrieved images with combined features, as shown by the similar patterns of both normal and abnormal anatomical labels among the retrieved images. Query and retrieved images with ground-truth labels of the normal and abnormal anatomical categories are shown. 
ET, Gd-enhancing tumor; ED, peritumoral edema; NET, necrotic and non-enhancing tumor core.}\n \\label{fig:query_by_image}\n\\end{figure*}\n\n\\begin{table}[t]\n\\centering\n\\resizebox{0.8\\linewidth}{!}{%\n\\begin{tabular}{@{}lccc@{}}\n\\toprule\n\\textbf{Distance measurements} & \\boldsymbol{$S_\\mathrm{normal}$} & \\boldsymbol{$S_\\mathrm{abnormal}$} & \\boldsymbol{$S_\\mathrm{sum}$}\\\\ \\midrule\n$D_\\mathrm{E}$ & $0.50 \\pm 0.16$ & $0.28 \\pm 0.12$ & $0.20 \\pm 0.16$\\\\\n$D_\\mathrm{A}$ & $0.47 \\pm 0.19$ & $0.27 \\pm 0.11$ & $0.30 \\pm 0.11$\\\\\n$D_\\mathrm{H}$ & $0.48 \\pm 0.17$ & $0.28 \\pm 0.12$ & $0.23 \\pm 0.15$\\\\\nBrute-force search & $0.62 \\pm 0.16$ & $0.37 \\pm 0.08$ & $0.42 \\pm 0.08$\\\\ \\bottomrule\n\\end{tabular}\n}\n\\caption{\\textbf{Quantitative evaluation of the image retrieval.} Mean $\\pm$ standard deviation of the averaged Dice coefficients for the ground-truth labels with respect to the semantic components exploited by the similarity evaluation is shown.} \n\\label{tab:quantitative_evaluation}\n\\end{table}\n\nAn example of the CBIR results showing the top 5 images with the closest latent codes based on $S_\\mathrm{normal}$, $S_\\mathrm{abnormal}$, and $S_\\mathrm{sum}$ is presented in {\\bf Fig. \\ref{fig:query_by_image}}. Similarity calculation based on the normal anatomy codes $S_\\mathrm{normal}$ retrieved images with similar normal anatomical labels irrespective of gross abnormalities ({\\bf Fig. \\ref{fig:query_by_image}a}). Similarity calculation based on the abnormal anatomy codes $S_\\mathrm{abnormal}$ retrieved images with similar abnormal anatomical labels ({\\bf Fig. \\ref{fig:query_by_image}b}). Note that the retrieved images show a variety of normal anatomical contexts while containing similar abnormal lesions. In the similarity retrieval using $S_\\mathrm{sum}$, as formulated in Eq. (\\ref{eq:dsum}), the similarity measurements based on the normal and abnormal anatomy codes were summed. As shown in {\\bf Fig. \\ref{fig:query_by_image}c}, the whole imaging features of the images retrieved using $S_\\mathrm{sum}$ resemble those of the query image. In this example, the Euclidean distance $D_\\mathrm{E}$ was employed based on the continuous codebooks $\\bm{e}$. \n\nAccording to the method described in \\textbf{Section \\ref{sec:eval_query_by_image}}, we performed a top-10 nearest neighbor search between the query images and the reference dataset for quantitative evaluation. Here, the query images are the slices with the largest tumor area in each MR volume, and the reference dataset contains the rest of the images in the test dataset, except for the images in the same MR volume as each query image. The similarity measurements utilized $S_\\mathrm{normal}$, $S_\\mathrm{abnormal}$, and $S_\\mathrm{sum}$. Mean Dice coefficients were assessed according to the semantics exploited in the similarity measurement: Dice coefficients between ground-truth labels of the normal anatomical categories were averaged when using $S_\\mathrm{normal}$; those between ground-truth labels of the abnormal anatomical categories were averaged when using $S_\\mathrm{abnormal}$; and those between ground-truth labels of the normal anatomical categories together with those between ground-truth labels of the abnormal anatomical categories were averaged when using $S_\\mathrm{sum}$. The three types of distances, Euclidean distance $D_\\mathrm{E}$, angular distance $D_\\mathrm{A}$, and Hamming distance $D_\\mathrm{H}$, were calculated for each similarity measurement. 
The results are summarized in {\\bf Table \\ref{tab:quantitative_evaluation}}. The technical upper bound was provided by the brute-force search, which retrieved images by directly maximizing the Dice overlap between ground-truth labels. Note that the brute-force search requires the ground-truth labels of all query and reference images, which are usually not available in real clinical practice. \n\n\\section{Discussion and conclusions}\n\\label{sec:discussion}\n\nComparative diagnostic reading, in which an image of the condition to be diagnosed is compared with corresponding normal images without abnormal findings or with images that contain similar abnormal findings, is critical for correct diagnosis. To the best of our knowledge, this is the first study that proposes a deep-learning-based algorithm specifically designed to support comparative diagnostic reading. The fundamental contribution of our study is to extend the idea of disentangled representation into a CBIR application in medical imaging. By leveraging the feature decomposing network, a medical image can be decomposed into a normal and an abnormal anatomy code, each of which represents the targeted semantic component of the image. Note that the codebooks located at the bottleneck of the network make the decomposed latent codes manipulable for the CBIR framework, which can switch the semantic components to be focused on in the retrieval according to the user's purpose. \n\nThe proposed CBIR framework demonstrated notable results in both the qualitative and quantitative evaluations (see \\textbf{Section \\ref{sec:result_query_by_image}}). However, because the CBIR framework is established in a two-staged manner, there are several possible sources of error for the final image retrieval performance, such as errors in image reconstruction, segmentation, and vector quantization in the codebooks. Moreover, the leakiness of abnormal imaging features into the normal anatomy codes can hinder the retrieval performance such that retrieval based on $S_\\mathrm{normal}$ unintentionally returns images with a significant amount of abnormality; this can be alleviated by distribution matching. Because such an experiment would be prohibitively demanding, we could not directly compare the image retrieval performance across the different configurations of the feature decomposing network. Instead, we carefully evaluated these possible sources of error (see \\textbf{Section \\ref{sec:comparison_between_several_configuration}}) and selected $\\mathrm{FDN}_{8 \\times 8}$ trained with distribution matching for reporting the CBIR performance owing to its preferable properties for image retrieval.\n\nEven though no benchmark is available for image retrieval based on this dataset, we consider that the performance difference from the brute-force search (see \\textbf{Table \\ref{tab:quantitative_evaluation}}) can be acceptable for clinical use. Under the condition that the leakiness of abnormal imaging features is controlled, it is natural to speculate that the performance of image reconstruction and segmentation has a positive relationship with the image retrieval performance using the decomposed latent codes. Therefore, if higher retrieval performance is desired and computational resources are sufficient, a model with a higher latent space resolution can be used. 
Because the brute-force search represents an ideal setting, in which ground-truth labels are given for all query and reference images, our CBIR framework is more versatile and practical; furthermore, different configurations can be used for different situations. \n\nDistance calculations using the Euclidean distance $D_\\mathrm{E}$, angular distance $D_\\mathrm{A}$, and Hamming distance $D_\\mathrm{H}$ can be implemented according to different situations, taking into account the tradeoff between accuracy and computational efficiency at the time of similarity search. Note that, with respect to $S_\\mathrm{sum}$, the angular distance calculation $D_\\mathrm{A}$ provided a higher value than the others. This can be because the magnitude of the distance values is not normalized in the Euclidean distance calculation $D_\\mathrm{E}$ and the Hamming distance calculation $D_\\mathrm{H}$, making it difficult to evenly weight the two terms, $S_\\mathrm{normal}(q, r)$ and $S_\\mathrm{abnormal}(q, r)$, in Eq. (\\ref{eq:dsum}). It is also noteworthy that, even though the proposed binarization technique is simple, the Hamming distance calculation $D_\\mathrm{H}$ approximated the Euclidean distance calculation $D_\\mathrm{E}$ with notable accuracy (see \\textbf{Section \\ref{sec:assessment_of_binarization}}). Since the picture archiving and communication systems in hospitals usually contain a huge number of medical images, our proposal of binary hashing based on the discrete latent codes can be effective even in the context of large-scale image search. \n\nOne may argue that the distribution matching should be applied to reconstructed images instead of latent codes. This seems to be an alternative way to decompose the features of medical images, but there are some concerns. One is the issue called ``posterior collapse,'' observed in many VAE models, in which a powerful decoder learns to ignore the latent codes \\citep{oord2017neural}. Because our primary purpose is to acquire good latent representations that faithfully represent the corresponding imaging features, we intentionally avoided imposing an additional learning objective on the decoders. Another concern is that distribution matching for images with a resolution of $256 \\times 256$ was computationally expensive and required a long time for training the feature decomposing networks. We also encountered a more unstable training process, which is one of the intrinsic problems of GANs. As shown in \\textbf{Fig. \\ref{fig:comparison_reconstructions}}, where the constraints on the latent space are reflected in the differences in the imaging features of the reconstructed images, we consider that distribution matching on the latent spaces is more appropriate for CBIR utilizing latent codes. \n\nWe found that SPADE modules can effectively propagate the semantic layout obtained at the final layer of the segmentation decoder into the image reconstruction process of the image decoder. The success of the conditional image generation of the image decoder depending on the collateral input through the SPADE modules can also be observed in \\textbf{Fig. \\ref{fig:leakiness_results}}, where the PPVs for the origin from diseased images of the classification networks trained using the entire reconstructions were consistently $>$ 0.9 (see the yellow and gray bars in \\textbf{Fig. \\ref{fig:leakiness_results}}). However, future technological challenges may lie in this regard. 
Ideally, the latent codes representing the normal and abnormal semantic components of medical images should be distributed in a decomposable manner in a single latent space in which they can be combined linearly. Because the current study employed an architecture that holds two separate latent spaces for the decomposed latent codes, the similarity calculation according to the whole imaging features (Eq. (\\ref{eq:dsum})) might be deemed somewhat arbitrary. Even so, because simple autoencoders can be sufficient for similarity calculation based on whole imaging features, we believe that our proposal is innovative in enabling CBIR to selectively utilize either the normal or abnormal components of medical images to support comparative diagnostic reading.\n