diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzetxr" "b/data_all_eng_slimpj/shuffled/split2/finalzzetxr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzetxr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sect:Intro}\n\n\n\n\n \n\nHS RS imaging has gradually become one of the most vital achievements in the field of RS since the 1980s \\cite{c173}. Varied from an initial single-band panchromatic image, a three-band color RGB image, and a several-band multispectral (MS) image, an HS image contains hundreds of narrow and continuous spectral bands, which is promoted by the development of spectral imaging equipment and the improvement of spectral resolutions. The broader portion of the HS spectrum can scan from the ultraviolet, extend into the visible spectrum, and eventually reach in the near-infrared or short-wave infrared \\cite{9395693}. Each pixel of HS images corresponds to a spectral signature and reflects the electromagnetic properties of the observed object. This enables the identification and discrimination of underlying objects, especially some that have a similar property in single-band or several-band RS images (such as panchromatic, RGB, MS) in a more accurate manner. As a result, the wealthy spatial and spectral information of HS images has extremely improved the perceptual ability of Earth observation, which makes the HS RS technique play a crucial role in the fields like precision agriculture (e.g., monitoring the growth and health of crops), space exploration (e.g., searching for signs of life on other planets), pollution monitoring (e.g., detection of the ocean oil spill), and military applications (e.g., identification of military targets) \\cite{c1,c172,9174822}. \n\nOver the past decade, massive efforts have been made to process and analyze HS RS data after the data acquisition. Initial HS data processing considers either a gray-level image for each band or the spectral signature of each pixel. From one side, each HS spectral band is regarded as a gray-level image, and the traditional 2-D image processing algorithms are directly introduced band by band \\cite{c175,c176}. From another side, the spectral signatures that have similar visible properties (e.g., color, texture) can be used to identify the materials \\cite{c174}. Furthermore, extensive low-rank (LR) matrix-based methods are employed to explore the high correlation of spectral channels with the assumption that the unfolding HS matrix has a low rank \\cite{c20,c177,c178}. Given an HS image of size h\u00d7v\u00d7z, the recovery of an unfolding HS matrix (hv\u00d7z) usually requires the singular value decomposition (SVD), which leads to the computational cost of $O(h^2 v^2 z+ z^3)$ \\cite{c30,c31,c32}. In some typical tensor decomposition-based methods, the complexity of the tensor singular value decomposition (t-SVD) is about $O(hvzlogz+ hv^2z)$ \\cite{c28,c34,c61}. Compared to matrix forms, tensor decompositions achieve excellent performances with a tolerable increment of computational complexity. However, these traditional LR models reshape each spectral band as a vector, leading to the destruction of the inherent spatial-spectral completeness of HS images. Correct interpretations of HS images and the appropriate choice of the intelligent models should be determined to reduce the gap between HS tasks and the advanced data processing technique. 
Both 2-D spatial information and 1-D spectral information are considered when an HS image is modeled as a three-order tensor.\n\n\begin{figure*}[htp!]\n\t\begin{center}\n \includegraphics[width = 1\textwidth]{TLFORHSI.pdf}\n\t\end{center}\n\t\caption[houston]{A taxonomy of the main tensor decomposition-based methods for HS data processing. }\n\t\label{fig:TLreference}\n\end{figure*}\n\nTensor decomposition, which originates from Hitchcock's work in 1927 \cite{c179}, touches upon numerous disciplines and has flourished in the fields of signal processing, machine learning, and data mining and fusion over the last ten years \cite{c181,c182,c183}. Early overviews focus on two common decompositions: Tucker decomposition and CANDECOMP\/PARAFAC (CP) decomposition. In 2008, these two decompositions were first introduced into HS restoration tasks to remove Gaussian noise \cite{c25,c26}. Tensor decomposition-based mathematical models avoid reshaping the original data dimensions and, to some degree, enhance the interpretability and completeness of problem modeling. Different types of prior knowledge (e.g., non-local similarity in the spatial domain, spatial and spectral smoothness) in HS RS are considered and incorporated into tensor decomposition frameworks. However, on the one hand, additional tensor decomposition methods have been proposed recently, such as block term (BT) decomposition, t-SVD \cite{c184}, tensor train (TT) decomposition \cite{c185}, and tensor ring (TR) decomposition \cite{c126}. On the other hand, as a versatile tool, tensor decomposition related to HS image processing has not been systematically reviewed until now. In this article, we mainly present a systematic overview from the perspective of state-of-the-art tensor decomposition techniques for HS data processing in terms of the five burgeoning topics previously mentioned, as presented in Fig. \ref{fig:TLreference}. \n\n\begin{figure}[htp!]\n\t\begin{center}\n \includegraphics[width = 0.45\textwidth]{papernumber.pdf}\n\t\end{center}\n\t\caption[houston]{The number of journal and conference papers published in IEEE Xplore on the subject of \"hyperspectral\" and \"tensor decomposition\" within different time periods. }\n\t\label{fig:Visio-papernum}\n\end{figure}\nFig. \ref{fig:Visio-papernum} displays the dynamics of tensor decompositions used for HS data processing in the HS community. The listed numbers contain both scientific journal and conference papers published in IEEE Xplore, retrieved with \"hyperspectral\" and \"tensor decomposition\" as the main keywords in abstracts. To highlight the increasing trend in the number of publications, the time period has been divided into four equal slots (i.e., 2007-2010, 2011-2014, 2015-2018, and 2019-2022 (05 January)).\nThe main contributions of this article are summarized as follows. \n\n\noindent\n\hangafter=1\n\setlength{\hangindent}{2em}\n(1) To the best of our knowledge, this is the first comprehensive survey of state-of-the-art tensor decomposition techniques for processing and analyzing HS RS images. 
More than 100 publications in this field are reviewed and discussed, most of which were published during the last five years.\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(2) For each HS topic, major representative works are scrupulously presented in terms of the specific categories of tensor decomposition. We introduce and discuss the pure tensor decomposition-based methods and their variants with other HS priors in sequence. The experimental examples are performed for validating and evaluating theoretical methods, followed by a discussion of remaining challenges and further research directions.\n\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(3) This article makes a connection between tensor decomposition modeling and HS prior information. Tab. \\ref{tab:tab1} summarizes with the publication years, brief description, and prior information. Either beginners or experiencers are expected to obtain certain harvest pertinent to the tensor decomposition-based frameworks for HS RS. The available codes are also displayed in Tab. \\ref{tab:tab1} for the sake of repeatability and further studies.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{Tensor decomposition-based approaches for HS RS.}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\cline{1-6}\nCategory &Years & Methods & Brief Description & Prior Information & Code Links\\\\\n\\hline\nRstoration & 2008 & LRTA \\cite{c25} & Tucker decomposition & Spectral correlation & \\\\\nDenoising& 2008 & PARAFAC \\cite{c26} & CP decomposition & Spectral correlation & \\\\\n& 2013 & R1TD \\cite{c59} & Rank-1 Tensor decomposition & Spectral correlation & \\\\\n&2017& LRTR \\cite{c28} & TNN & Spectral correlation & \\\\\n&2019 & NTRM \\cite{xue2019nonconvex} & Logarithmic TTN & Spectral correlation &\\\\\n&2020 & 3DTNN \/ 3DLogTNN \\cite{c61}& Three-directional TNN \/ Log-based TNN &Spectral correlation & https:\/\/yubangzheng.github.io\/homepage\/ \\\\\n&2014& TDL & Tucker decomposition with dictionary learning & Spectral correlation + Non-local similarity & http:\/\/www.cs.cmu.edu\/~deyum\/ \\\\\n&2018 & NSNTD \\cite{c71} & Non-local similarity based nonnegative tucker decomposition & Spectral correlation + Non-local similarity & \\\\\n&2019 & GNWTTN \\cite{c67} & Global and non-local weighted TTN &Spectral correlation + Non-local similarity & \\\\\n&2015& NLTA-LSM \\cite{c65} & Tensor decomposition with laplacian scale mixture & Spectral correlation + Non-local similarity & \\\\\n& 2016 &ITS\\cite{c63} & CP + Tucker decomposition &Spectral correlation + Non-local similarity & https:\/\/gr.xjtu.edu.cn\/web\/dymeng\/3 \\\\\n&2019 & NLR-CPTD \\cite{xue2019nonlocal} & CP + Tucker decomposition &Spectral correlation + Non-local similarity &\\\\\n&2017& LLRT \\cite{chang2017hyper} & hyper-Laplacian prior + Unidirectional LR tensor & Non-local similarity + Spectral smoothness &https:\/\/owuchangyuo.github.io\/publications\/LLRT \\\\\n&2019 & NGmeet \\cite{he2019non}& Spectral subspace-based unidirectional LR tensor & Spectral correlation + Non-local similarity &https:\/\/prowdiy.github.io\/weihe.github.io\/publication.html\\\\\n&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n& 2020 & NLTR \\cite{c64} & Nonlocal TR decomposition & Spectral correlation + Non-local similarity &https:\/\/chenyong1993.github.io\/yongchen.github.io\/\\\\\n& 2018 & TLR-TV \\cite{c77} & TNN 
+ 2DTV \/ 3DTV & Spectral correlation + Spatial $\\&$ Spectral smoothness & \\\\\n& 2018 & SSTV-LRTF \\cite{c34} & TNN + SSTV & Spectral correlation + Spatial-Spectral smoothness & \\\\\n& 2021 & MLR-SSTV \\cite{8920965} & Multi-directional weighted TNN + SSTV & Spectral correlation + Spatial-Spectral smoothness &\\\\\n&2018& LRTDTV \\cite{c36} & 3DwTV + Tucker decomposition & Spectral correlation + Spatial-Spectral smoothness & https:\/\/github.com\/zhaoxile\\\\\n&2021 &TLR-$L_{1\\--2}{\\rm SSTV}$ \\cite{c84} & $L_{1\\--2}{\\rm SSTV}$ + Local-patch TNN&Spectral correlation + Spatial-Spectral smoothness& \\\\\n&2019 &LRTDGS \\cite{c78}& Weighted group sparsity-regularized TV + Tucker decomposition& Spectral correlation + Spatial-Spectral smoothness &https:\/\/chenyong1993.github.io\/yongchen.github.io\/ \\\\\n&2019 & LRTF$L_0$ \\cite{c81}& $l_0$ gradient constraint + LR BT decomposition & Spectral correlation + Spatial-Spectral smoothness & http:\/\/www.xiongfuli.com\/cv\/ \\\\\n&2021&TLR-${l_0}\\text{TV}$ \\cite{c82}&${l_0}\\text{TV}$ + LR tensor & Spectral correlation + Spatial-Spectral smoothness &https:\/\/github.com\/minghuawang666\/TLR-L0TV\\\\\n& 2019 & SNLRSF \\cite{c72}& Subspace-based non-local LR and sparse factorization & Spectral correlation + Non-local tensor subspace & https:\/\/github.com\/AlgnersYJW\/\\\\\n&2020 & LRTF-DFR \\cite{c86}& double-factor-regularized LR tensor factorization & Subspace spectral correlation + spatial \\& spectral constraints & https:\/\/yubangzheng.github.io\/homepage\/\\\\\n&2021 &DNTSLR \\cite{c88}& Difference continuity + Non-local tensor subspace & Spectral correlation + Non-local tensor subspace & \\\\\n\\cline{2-6}\nDeblurring&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n&2021& OLRT \\cite{c117} & Joint spectral and non-local LR tensor & Non-local similarity +Spectral smoothness & https:\/\/owuchangyuo.github.io\/publications\/OLRT \\\\\n\\cline{2-6}\nInpaninting & 2015 & TMac \\cite{c118}& LR TC by parallel matrix factorization (TMac) & Spectral correlation& \\\\\n& 2015 & TNCP \\cite{c119} & TNN + CP decomposition & Spectral correlation & \\\\\n& 2017 & AWTC \\cite{c121} & HaLRTC with well-designed weights & Spectral correlation & \\\\\n& 2019 & LRRTC \\cite{c130}& logarithm of the determinant + TTN & Spectral correlation & \\\\\n& 2020 & LRTC \\cite{c123,c124} & t-SVD & Spectral correlation & \\\\\n& 2019 & TRTV \\cite{c128} & TR decomposition + spatial TV & Spectral correlation + Spatial smoothness & \\\\\n&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n& 2021 & TVWTR \\cite{c129}& Weighted TR decomposition + 3DTV & Spectral correlation + Spatial Spectral smoothness& https:\/\/github.com\/minghuawang666\/TVWTR \\\\\n\\cline{2-6}\nDestriping& 2018 & LRTD \\cite{c132}& Tucker decomposition + Spatial \\& Spectral TV & Spectral correlation + Spatial Spectral smoothness & https:\/\/github.com\/zhaoxile?tab=repositories \\\\\n& 2018 & LRNLTV \\cite{c135} & Matrix nuclear norm + Non-local TV & Spectral correlation + Non-local similarity &\\\\\n& 2020 & GLTSA \\cite{c133} & Global and local tensor sparse approximation & Sparisity + Spatial \\& Spectral smoothness& \\\\\n&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & 
https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n\\hline\nCS & 2017 & JTenRe3-DTV \\cite{c97} & Tucker decomposition + weighted 3-D TV & Spectral correlation + Spatial \\& Spectral smoothness & https:\/\/github.com\/andrew-pengjj\/Enhanced-3DTV\\\\\n& 2017 & PLTD \\cite{c101} & Non-local Tucker decomposition & Spectral correlation + Non-local similarity & \\\\\n& 2019 & NTSRLR \\cite{c99} & TNN + Tucker decomposition & Spectral correlation + Non-local similarity & \\\\\n& 2020 & SNLTR \\cite{c90} & TR decomposition + Subspace representation & Non-local similarity + +Spectral smoothness & \\\\\n& 2015 & 3D-KCHSI \\cite{c106} & KCS with independent sampling dimensions & Spectral correlation & \\\\\n& 2015 & T-NCS \\cite{c95} & Tucker decomposition & Spectral correlation & \\\\\n& 2013 & NBOMP \\cite{6797642} & KCS with a tensor-based greedy algorithm & Spectral correlation & \\\\\n& 2016 & BOSE \\cite{7544443} & KCS with beamformed mode-based sparse estimator & Spectral correlation &\\\\\n& 2020 & TBR \\cite{c107} & KCS with multi-dimensional block-sparsity & Spectral correlation &\\\\\n\\hline\nAD & 2015 & LTDD \\cite{c110} & Tucker decomposition + Umixing & Spectral correlation & \\\\\n& 2016 & TenB \\cite{c111} & Tucker decomposition + PCA & Spectral correlation & \\\\\n& 2019 & TDCW \\cite{c139} & Tucker decomposition + Clustering & Spectral correlation & \\\\\n& 2020 & TEELRD \\cite{c113} & Tucker decomposition + Endmember extraction & Spectral correlation + Subspace Learning &\\\\\n& 2019 & LRASTD \\cite{c115} & Tucker decomposition +TNN & Spectral correlation + Subspace Learning & \\\\\n& 2018 & TPCA \\cite{c112} & TPCA + Fourier transform & Spectral correlation & \\\\ \n& 2020 & PTA \\cite{c114} & TRNN + Spatial TV & Spectral correlation + Spatial smoothness & https:\/\/github.com\/l7170\/PTA-HAD \\\\\n& 2022 & PCA-TLRSR \\cite{minghuaTC} & weighted TNN + Multi-subspace & Spectral correlation + Subspace & https:\/\/github.com\/minghuawang666\/ \\\\\n\\hline\nSR & 2018 & STEREO \\cite{c145} & CP decomposition & Spectral correlation & https:\/\/github.com\/marhar19\/HSR\\_via\\_tensor\\_decomposition \\\\\n& 2020 & NCTCP \\cite{c152} & Nonlocal coupled CP decomposition & Spectral correlation + Non-local similarity & \\\\\n& 2018 & SCUBA \\cite{c151} & CP decomposition with matrix factorization &Spectral correlation & \\\\\n& 2018 & CSTF \\cite{c146} & Tucker decomposition & Spectral correlation & https:\/\/github.com\/renweidian\/CSTF \\\\\n& 2021 & CT\/CB-STAR \\cite{c155} & Tucker decomposition with inter-image variability & Spectral correlation + Spatial Spectral Variability & https:\/\/github.com\/ricardoborsoi\n\\\\\n& 2021 & CNTD \\cite{c170} & Nonnegative Tucker decomposition & Spectral correlation &\\\\\n& 2018 & CSTF-$l_2$ \\cite{c147} & Tucker decomposition & Spectral correlation &\\\\\n& 2020 & SCOTT \\cite{c153} & Tucker decomposition + HOSVD & Spectral correlation & https:\/\/github.com\/cprevost4\/HSR$\\_$Software\\\\\n& 2020 & NNSTF \\cite{c154} & Tucker decomposition + HOSVD & Spectral correlation + Non-local similarity& \\\\\n& 2020 & WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n& 2017 & NLSTF \\cite{c156} & Non-local sparse Tucker decomposition &Spectral correlation + Non-local similarity & https:\/\/github.com\/renweidian\/NLSTF \\\\\n& 2020 & NLSTF-SMBF \\cite{c157} & Non-local sparse Tucker decomposition &Spectral correlation + Non-local 
similarity & https:\/\/github.com\/renweidian\/NLSTF \\\\\n& 2020 & UTVTD \\cite{c160} & Tucker decomposition + Unidirectional TV &Spectral correlation + Spatial-Spectral smoothness & https:\/\/liangjiandeng.github.io\/ \\\\\n& 2020 & NLRTD-SU \\cite{c158} & Non-local Tucker decomposition + SU +3-DTV & Spectral correlation + Non-local similarity + Spatial-Spectral smoothness \\\\\n& 2018 & SSGLRTD \\cite{c159} & Spatial\u2013spectral-graph Tucker decomposition & Spectral correlation + Local geometry & \\\\\n& 2021 & gLGCTD \\cite{c161} & Graph Laplacian-guided Tucker decomposition & Spectral correlation + Local geometry & \\\\\n& 2019 & NN-CBTD \\cite{c163} & BT decomposition & Spectral correlation & \\\\\n& 2021 & BSC-LL1 \\cite{c138} & BT decomposition &Spectral correlation & https:\/\/github.com\/MengDing56 \\\\\n& 2021 & GLCBTD \\cite{c164} & Graph Laplacian-guided BT decomposition & Spectral correlation + Local Geometry \\\\\n& 2019 & LTTR \\cite{c148} & Non-local TT decomposition & Spectral correlation + Non-local similarity &https:\/\/github.com\/renweidian\/LTTR \\\\\n& 2021 & NLRSR \\cite{c162} & Non-local TT decomposition & Spatial-Spectral correlation + Non-local similarity\\\\\n& 2022 & CTRF \\cite{c150} & Coupled TR decomposition & Spectral correlation & \\\\\n& 2020 & HCTR \\cite{c149} & High-Order Coupled TR decomposition & Spectral correlation + Local Geometry & \\\\\n& 2021 & FSTRD \\cite{c169} & TR decomposition + TV &Spectral correlation + Spatial-Spectral smoothness & \\\\\n& 2021 & LRTRTNN \\cite{c171} & Non-local TR decomposition + TNN & Spectral correlation + Non-local similarity&\\\\\n& 2019 & LTMR \\cite{c165} & Subspace based LR multi-Rank & Spectral correlation + Non-local similarity &https:\/\/github.com\/renweidian\/LTMR \\\\\n& 2021 & FLTMR \\cite{c166} & LTMR with a Truncation Concept & Spectral correlation + Non-local similarity& \\\\\n& 2019 & NPTSR \\cite{c8} & Non-local tensor sparse representation & Spectral correlation + Non-local similarity& \\\\\n& 2019 & TV-TLMR \\cite{c168} & Tucker decomposition + TV & Spectral correlation + Spatial-Spectral smoothness & \\\\\n& 2021 & LRTA-SR \\cite{c167} & TTN & Spectral correlation & \\\\\n\\hline \nSU& 2007 & NTF-SU \\cite{c190,c191} & CP decomposition & Spectral correlation & \\\\\n& 2020 & ULTRA-V \\cite{c212} & CP decomposition & Spectral correlation & https:\/\/github.com\/talesimbiriba\/ULTRA-V\\\\\n& 2017 & MVNTF \\cite{c192} & BT decomposition & Spectral correlation & https:\/\/gitSUb.com\/bearshng\/mvntf \\\\\n& 2019 & NTF-TV \\cite{c193} & TV + BT decomposition&\tSpectral correlation + Spatial-Spectral smoothness &\thttp:\/\/www.xiongfuli.com\/cv\/ \\\\\n& 2021 & SPLRTF \\cite{c194}\t&LR + sparsity + BT decomposition &\tSpectral correlation & \\\\\t\n& 2019 & svr-MVNTF \\cite{c195} & BT decomposition\t& Spectral correlation + Local Geometry \\\\\n& 2020 & SCNMTF \\cite{c196}\t& BT decomposition + NMF &\tSpectral correlations& \\\\\n& 2021 & NLTR \\cite{c197} & TV + Non-local LR &\tSpectral correlations+ Nonlocal similarity+ Spatial-Spectral smoothness\t& \\\\\n& 2021 & BUTTDL1 \\cite{c208} & sparsity + Tucker decomposition & Spectral correlations \\\\\n& 2021 & SeCoDe \\cite{c198} & Convolution operation + BT decomposition &\tSpectral correlations + Spatial-Spectral smoothness &\thttps:\/\/gitSUb.com\/danfenghong\/IEEE\\_TGRS\\_SeCoDe \\\\\n& 2020 & WNLTDSU \\cite{c210} & Weighted non-local LR + TV & Spectral correlation + Sparsity + Spatial smoothness & https:\/\/github.com\/sunlecncom\/WNLTDSU\\\\\n& 
2021 & NL-TSUn \\cite{c210} & Non-local LR + Joint sparsity & Spectral correlation + Sparsity & \\\\\n& 2021 & LRNTF\\cite{c204} &\tBT decomposition &\tSpectral correlations &\thttps:\/\/gitSUb.com\/LinaZhuang\/HSI\\_nonlinear\\_unmixing\\_LR\\-NTF \\\\\n\\hline\n\\end{tabular}}\n\\label{tab:tab1}\n\\end{table*}\n\n \n\\section{Notations and Preliminaries}\n\\label{sect:Notations}\nIn this section, we introduce some notations and preliminaries. For clear description, the notations are list in Table \\ref{tab:tab0}. The main abbreviations used in this article are given in Table \\ref{tab:abbreviation}.\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{The notations used in the paper}\n\\begin{spacing}{1.3}\n\\scalebox{0.75}{\n\\begin{tabular}{c|c}\n\\hline\n\\cline{1-2}\n\\cline{1-2}\nNotation&Description\\\\\n\\hline\n\n\\cline{1-2}\n$x$ &scalars\\\\\n$\\textbf{x}$ & vectors\\\\\n$\\textbf{X}$ & matrices\\\\\n$vec(\\textbf{X})$ & $vec(\\textbf{X}$) stacks the columns of $\\textbf{X}$\\\\\n$\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ & tensors with 3-modes\\\\\n$\\mathcal{X}_{h_i,v_i,z_i}$ & the ($h_i,v_i,z_i$)-element of $\\mathcal{X}$\\\\\n$\\mathcal{X}(i,:,:)$, $\\mathcal{X}(:,i,:)$ and $\\mathcal{X}(:,:,i)$ & the $i^{th}$ horizontal, lateral and frontal slices \\\\\n $||\\mathcal{X}||_1=\\sum_{h_i,v_i,z_i}{|\\mathcal{X}_{h_i,v_i,z_i}|}$ & $l_1$ norm\\\\\n $||\\mathcal{X}||_F= \\sqrt{\\sum_{h_i,v_i,z_i}{|\\mathcal{X}_{h_i,v_i,z_i}|^2}}$ & Frobenius norm\\\\\n ${\\sigma}_i(\\textbf{X})$ & the singular values of matrix $\\textbf{X}$ \\\\\n $||\\textbf{X}||_* = \\sum_i {\\sigma}_i(\\textbf{X})$ & nuclear norm\\\\ \n $||\\textbf{x}||_2 = \\sqrt{\\sum_i {|\\textbf{x}_i|^2}}$ & $l_2$ norm \\\\\n $ \\hat{\\mathcal{X}}=$fft$(\\mathcal{X},[],3)$ & Fourier transformation of $\\mathcal{X}$ along mode-3\\\\\n\\hline\n\\end{tabular}\n}\n\\end{spacing}\n\\label{tab:tab0}\n\\end{table}\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Main abbreviations used in the paper}\n\\begin{spacing}{1.3}\n\\scalebox{0.75}{\n\\begin{tabular}{c|c}\n\\hline\n\\cline{1-2}\n\\cline{1-2}\nAbbreviation &Full name\\\\\n\\hline\nAD & Anomaly detection\\\\\nCP & CANDECOMP\/PARAFAC\\\\\nCS & Compressive sensing\\\\\nBT & Block term\\\\\nHS & Hyperspectral\\\\\nLMM & Linear mixing model\\\\\nLR & Low-rank\\\\\nNMF & Nonnegative matrix factorization\\\\\nRS & Remote sensing\\\\\nSNN & The sum of the nuclear norm\\\\\nSU & Spectral unmixing\\\\\nTNN & Tensor nuclear norm\\\\\nTV & Total variation\\\\\nTT & Tensor train\\\\\nTTN & Tensor trace norm\\\\\nTR & Tensor ring\\\\\n1-D & One dimensional\\\\\n2-D & Two dimensional\\\\\n3-D & Three dimensional\\\\\n4-D & Four dimensional\\\\\n\\hline\n\\end{tabular}\n}\n\\end{spacing}\n\\label{tab:abbreviation}\n\\end{table}\n\n$\\textbf{Definition 1}$ (T-product \\cite{c43}): The T-product of two three-order tensors $\\mathcal{A} \\in \\mathbb{R}^{n_1 \\times n_2 \\times n_3}$ and $\\mathcal{B} \\in \\mathbb{R}^{n_2 \\times n_4 \\times n_3}$ is denoted by $\\mathcal{C} \\in \\mathbb{R}^{n_1 \\times n_4 \\times n_3}$:\n\\begin{equation}\n\\label{eq:product}\n\\mathcal{C}(i,k,:)= \\sum_{j=1}^{n_2} \\mathcal{A}(i,j,:) \\star \\mathcal{B}(j,k,:)\n\\end{equation}\nwhere $\\star$ represents the circular convolution between two tubes.\n\n$\\textbf{Definition 2}$ (Tensor $n$-mode product \\cite{c181}): The $n$-mode product of a tensor $\\mathcal{A} \\in \\mathbb{R}^{r_1 \\times r_2 \\times ... 
\times r_N }$ and a matrix $\mathbf{B} \in \mathbb{R}^{ B \times r_n}$ is the tensor $\mathcal{X} \in \mathbb{R}^{r_1 \times r_2 \times ... \times r_{n-1} \times B \times r_{n+1} \times ... \times r_N } $ defined by\n\begin{equation}\n\begin{aligned}\n\label{eq:sec-product}\n\mathcal{X} = \mathcal{A} \times_n \mathbf{B}\n\end{aligned}\n\end{equation}\nThe unfolding matrix form of Eq. (\ref{eq:sec-product}) is\n\begin{equation}\n\begin{aligned}\n\label{eq:sec-product2}\n\mathbf{X}_{(n)} = \mathbf{B} \mathbf{A}_{(n)} \n\end{aligned}\n\end{equation}\n\n$\textbf{Definition 3}$ (Four special tensors \cite{c27}):\n\nConjugate transpose: The conjugate transpose of a three-order tensor $ \mathcal{X}\in \mathbb{R}^{h \times v \times z}$ is the tensor $ {\rm conj}(\mathcal{X})=\mathcal{X}^* \in \mathbb{R}^{v \times h \times z}$, which is obtained by conjugately transposing each frontal slice and then reversing the order of the transposed frontal slices 2 through $z$.\n\nIdentity tensor: The identity tensor, denoted by $\mathcal{I}\in \mathbb{R}^{h \times v \times z}$, is the tensor whose first frontal slice is an identity matrix and whose other frontal slices are all zero.\n\nOrthogonal tensor: A three-order tensor $\mathcal{Q}$ is orthogonal if it satisfies $\mathcal{Q}^* * \mathcal{Q}= \mathcal{Q} * \mathcal{Q}^*=\mathcal{I}$.\n\nF-diagonal tensor: A three-order tensor $\mathcal{S}$ is f-diagonal if all of its frontal slices are diagonal matrices.\n\n$\textbf{Definition 4}$ (First mode-$k$ unfolding\/matricization \cite{c181}): This operator, denoted unfold$(\mathcal{X},k)$, converts a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times ... \times I_k \times I_{k+1} \times ... \times I_N }$ into a matrix $\textbf{X}_{(k)} \in \mathbb{R}^{I_k \times I_1 \cdots I_{k-1}I_{k+1}\cdots I_N }$. Inversely, fold($\textbf{X}_{(k)}, k$) denotes the folding of the matrix back into a tensor.\n\n$\textbf{Definition 5}$ (Second mode-$k$ unfolding\/matricization \cite{c126}): For a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times ... \times I_k \times I_{k+1} \times ... \times I_N }$, its second mode-$k$ unfolding matrix is represented by $\textbf{X}_{<k>} \in \mathbb{R}^{I_k \times I_{k+1}\cdots I_N I_1 \cdots I_{k-1} }$. The inverse operation is matrix folding (tensorization).\n\n$\textbf{Definition 6}$ (Mode-$k$ permutation \cite{c29}): \nFor an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{ I_1 \times I_2 \times ... \times I_N }$, this operator, denoted by $\mathcal{X}^k$=permutation($\mathcal{X}$, $k$), circularly shifts the mode order $k$ times and obtains a new tensor $\mathcal{X}^k \in \mathbb{R}^{ I_k \times ... \times I_N \times I_1 \times ... \times I_{k-1} } $. The inverse operator is defined as $\mathcal{X}$ = ipermutation($\mathcal{X}^k$, $k$). For example, the three mode-$k$ permutations of an HS tensor $\mathcal{X}\in \mathbb{R}^{ h \times v \times z }$ can be written as\n$\mathcal{X}^1\in \mathbb{R}^{v \times z \times h}$, $\mathcal{X}^2\in \mathbb{R}^{z \times h \times v}$, and $\mathcal{X}^3\in \mathbb{R}^{h \times v \times z}$. \n\n$\textbf{Definition 7}$ (Tensor Trace Norm (TTN) \cite{c44}): It is the sum of the nuclear norms (SNN) of the mode-$k$ unfolding matrices of a $3$-way HS tensor:\n\begin{equation}\n\label{eq:tracenorm}\n||\mathcal{X}||_{\rm SNN}:=\sum_{k=1}^{3} \alpha _k ||\textbf{X}_{(k)}||_*\n\end{equation}\nwhere the weights $\alpha_k$ satisfy $\alpha_k \geq 0 \ (k=1,2,3)$ and $\sum_{k=1}^{3} \alpha_k =1$. 
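\n\nTo make Definitions 4 and 7 concrete, the following minimal NumPy sketch (added for this overview; the helper names \texttt{unfold} and \texttt{snn} are our own and are not taken from any referenced toolbox) computes the first mode-$k$ unfolding of a three-order tensor and the corresponding SNN with equal weights:\n\begin{verbatim}\nimport numpy as np\n\ndef unfold(X, k):\n    # First mode-k unfolding: mode k becomes the rows,\n    # the remaining modes are flattened into the columns.\n    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)\n\ndef snn(X, alpha=(1/3, 1/3, 1/3)):\n    # Sum of nuclear norms (Definition 7): weighted nuclear norms\n    # of the three mode-k unfoldings; the weights should sum to one.\n    return sum(a * np.linalg.norm(unfold(X, k), ord='nuc')\n               for k, a in enumerate(alpha))\n\n# Toy HS-like cube of size h x v x z with a low-rank mode-3 unfolding.\nh, v, z, r = 20, 20, 10, 3\nX = np.random.rand(h, v, r) @ np.random.rand(r, z)\nprint(unfold(X, 2).shape)   # (10, 400)\nprint(snn(X))               # weighted sum of the three nuclear norms\n\end{verbatim}\nDefinition 5 differs from this first unfolding only in the ordering of the flattened modes, and the SNN computed here is the regularizer used by the TTN-based restoration models reviewed later.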
\n\n$\textbf{Definition 8}$ (Tucker decomposition \cite{c181,c186,c187}): The Tucker decomposition of an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}$ is defined as\n\begin{equation}\n\begin{aligned}\n\label{eq:Tucker1}\n\mathcal{X}= \mathcal{A} \times_1 \mathbf{B}_1 \times_2 \mathbf{B}_2 ... \times_N \mathbf{B}_N\n\end{aligned}\n\end{equation}\nwhere $\mathcal{A} \in \mathbb{R}^{r_1 \times r_2 \times... \times r_N}$ stands for a core tensor and $\mathbf{B}_n \in \mathbb{R}^{I_n \times r_n}, n=1,2,...,N$ represent factor matrices. The Tucker ranks are represented by ${\rm rank}_{\rm Tucker}(\mathcal{X}) = [r_1, r_2,..., r_N] $.\n\n$\textbf{Definition 9}$ (CP decomposition \cite{c180,c181,c188}):\nThe CP decomposition of an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N} $ is defined as\n\begin{equation}\n\begin{aligned}\n\label{eq:CP1}\n \mathcal{X}=\sum_{r=1}^{R} \tau_r \mathbf{b}^{(1)}_{r} \circ \mathbf{b}^{(2)}_{r} \circ ... \circ \mathbf{b}^{(N)}_{r} \n\end{aligned}\n\end{equation}\nwhere $\tau_r$ are non-zero weight parameters, and $\mathbf{b}^{(1)}_{r} \circ \mathbf{b}^{(2)}_{r} \circ ... \circ \mathbf{b}^{(N)}_{r} $ denotes a rank-one tensor with $\mathbf{b}^{(n)}_{r} \in \mathbb{R}^{I_n}$. The CP rank, denoted by ${\rm rank}_{\rm CP}(\mathcal{X}) = R $, is the minimum number of rank-one tensors.\n\n$\textbf{Definition 10}$ (BT decomposition \cite{c213}): \nThe BT decomposition of a three-order tensor $\mathcal{X} \in \mathbb{R}^{h \times v \times z} $ is defined as\n\begin{equation}\n \begin{aligned}\n \label{eq:BTD1}\n \mathcal{X} = \sum_{r=1}^{R} \mathcal{G}_r \times_1 \mathbf{A}_r \times_2 \mathbf{B}_r \times_3 \mathbf{C}_r\n \end{aligned}\n\end{equation}\nwhere $\mathcal{G}_r \in \mathbb{R}^{L_h \times L_v \times L_z}$, $\mathbf{A}_r \in \mathbb{R}^{h \times L_h}$, $\mathbf{B}_r \in \mathbb{R}^{v \times L_v}$, and $\mathbf{C}_r \in \mathbb{R}^{z \times L_z}$. Each of the $R$ component tensors can be expressed by a rank-($L_h$, $L_v$, $L_z$) Tucker decomposition. BT decomposition can be regarded as a combination of the Tucker and CP decompositions. On the one hand, Eq. (\ref{eq:BTD1}) becomes the Tucker decomposition when $R=1$. On the other hand, when each component is represented by a rank-($L$, $L$, $1$) tensor, Eq. (\ref{eq:BTD1}) is written as\n\begin{equation}\n \begin{aligned}\n \label{eq:BTD2}\n \mathcal{X} =\sum_{r=1}^{R} (\mathbf{A}_{r} \mathbf{B}_{r}^{T}) \circ \mathbf{c}_{r} \n \end{aligned}\n\end{equation}\nwhere the matrices $\mathbf{A}_{r} \in \mathbb{R}^{h \times L}$ and $\mathbf{B}_{r} \in \mathbb{R}^{v \times L}$ are of rank $L$. If a rank-$L$ matrix $\mathbf{E}_{r} \in \mathbb{R}^{h \times v}$ is factorized as $\mathbf{E}_{r} = \mathbf{A}_{r} \mathbf{B}_{r}^{T}$, Eq. 
(\ref{eq:BTD2}) can be rewritten as\n\begin{equation}\n \begin{aligned}\n \label{eq:BTD3}\n \mathcal{X} =\sum_{r=1}^{R} \mathbf{E}_{r} \circ \mathbf{c}_{r} \n \end{aligned}\n\end{equation}\n\n$\textbf{Definition 11}$ (Tensor Nuclear Norm (TNN) \cite{c123}): Let $\mathcal{X}=\mathcal{U} * \mathcal{S} * \mathcal{V}^{*}$ be the t-SVD of $\mathcal{X} \in \mathbb{R}^{h \times v \times z}$. The TNN is the sum of the singular values of $\mathcal{X}$, that is,\n\begin{equation}\n\label{eq:TNN}\n||\mathcal{X}||_*:=\sum_{k=1}^{\min(h,v)} \mathcal{S}(k,k,1),\n\end{equation}\nand it can also be expressed as the sum of the nuclear norms of all the frontal slices of $\hat{\mathcal{X}}$:\n\begin{equation}\n\label{eq:TNN2}\n||\mathcal{X}||_*:=\sum_{k=1}^{z} ||\hat{\mathcal{X}}(:,:,k)||_*.\n\end{equation}\n\n\begin{figure*}[htb]\n\t\begin{center}\n\t\t\includegraphics[width = 1\textwidth]{Visio-TD.pdf}\n\t\end{center}\n\t\caption[restoration]{Illustration of six tensor decompositions of a third-order tensor: (a) Tucker decomposition, (b) CP decomposition, (c) BT decomposition, (d) t-SVD, (e) TT decomposition, (f) TR decomposition. }\n\t\label{fig:td}\n\end{figure*}\n\nTo understand the above-mentioned tensor decompositions more intuitively, examples for a third-order tensor are shown in Fig. \ref{fig:td}, which benefits the subsequent tensor decomposition-based research on third-order HS data.\n\n$\textbf{Definition 12}$ (t-SVD \cite{c123}): $\mathcal{X} \in \mathbb{R}^{h \times v \times z}$ can be factorized as\n\begin{equation}\n\label{eq:tsvd}\n\mathcal{X}=\mathcal{U} * \mathcal{S} * \mathcal{V}^{*}\n\end{equation}\nwhere $\mathcal{U} \in \mathbb{R}^{h \times h \times z}$, $\mathcal{V} \in \mathbb{R}^{v \times v \times z}$ are orthogonal tensors and $\mathcal{S} \in \mathbb{R}^{h \times v \times z}$ is an f-diagonal tensor. The details of the t-SVD are described in Algorithm \ref{alg:tsvd1}.\n\begin{algorithm}[htb]\n\t\caption{t-SVD} \label{alg:tsvd1}\n\t\begin{algorithmic}[1]\n\t\t\REQUIRE $\; \mathcal{X}\in \mathbb{R}^{h \times v \times z}$\\\n\t\t\NoDo\n \STATE $ \hat{\mathcal{X}}=$fft$(\mathcal{X},[],3)$;\n\t\t\NoDo \FOR{$i=1,2,\dots, [\frac{z+1}{2}]$}\n\t\t\STATE $ [\hat{\mathcal{U}}(:,:,i), \hat{\mathcal{S}}(:,:,i), \hat{\mathcal{V}}(:,:,i)]=$SVD$(\hat{\mathcal{X}}(:,:,i))$;\t\t\n\t\t\ENDFOR\n\t\t\NoDo \FOR{$i= [\frac{z+1}{2}]+1,\dots,z$}\n\t\t\STATE $ \hat{\mathcal{U}}(:,:,i)=$conj($\hat{\mathcal{U}}(:,:,z-i+2)$);\n\t\STATE $ \hat{\mathcal{S}}(:,:,i)=\hat{\mathcal{S}}(:,:,z-i+2)$;\n\t\STATE $ \hat{\mathcal{V}}(:,:,i)=$conj($\hat{\mathcal{V}}(:,:,z-i+2)$);\t\t\n\t\t\ENDFOR\n\t\t\STATE $\mathcal{U} =$ifft$(\hat{\mathcal{U}},[],3)$, $\mathcal{S} =$ifft$(\hat{\mathcal{S}},[],3)$, $\mathcal{V}=$ifft$(\hat{\mathcal{V}},[],3)$;\n\t\t\ENSURE $\mathcal{U},\mathcal{S},\mathcal{V}$.\n\t\end{algorithmic}\n\end{algorithm}\n\n$\textbf{Definition 13}$ (TT decomposition \cite{c185}): The TT decomposition of an $N$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N} $ is represented by cores $\mathcal{G} = \{ \mathcal{G}^{(1)},..., \mathcal{G}^{(N)} \}$, where $\mathcal{G}^{(n)} \in \mathbb{R}^{ r_{n-1} \times I_n \times r_n} $, $n=1,2,...,N$, $r_0 = r_N =1$. The rank of the TT decomposition is defined as $ {\rm rank}_{\rm TT}(\mathcal{X}) =[r_0,r_1,...,r_{N}]$. 
Each entry of the tensor $\mathcal{X}$ is formulated as\n\begin{equation}\n\label{eq:ttd}\n\mathcal{X}(i_1,...,i_N) = \mathcal{G}^{(1)}(:,i_1,:)\mathcal{G}^{(2)}(:,i_2,:)...\mathcal{G}^{(N)}(:,i_N,:).\n\end{equation}\n\n$\textbf{Definition 14}$ (TR decomposition \cite{c126}): The purpose of TR decomposition is to represent a high-order tensor $\mathcal{X}$ by multi-linear products of a sequence of three-order tensors in circular form. These three-order tensors are named TR factors $\{\mathcal{G}^{(n)}\}^N_{n=1} = \{\mathcal{G}^{(1)}, \mathcal{G}^{(2)},...,\mathcal{G}^{(N)}\}$, where $\mathcal{G} ^{(n)}\in \mathbb{R}^{r_n \times I_n \times r_{n+1}}$, $n=1, 2, ..., N$, and $r_{N+1} = r_1$. In this case, the element-wise relationship of the TR decomposition with factors $\mathcal{G}$ can be written as\n\begin{equation}\n\label{eq:TR1}\n\begin{aligned}\n\mathcal{X}(i_1, i_2,...,i_N) &= {\rm Tr}(\mathcal{G}^{(1)}(:,i_1,:)\mathcal{G}^{(2)}(:,i_2,:)...\mathcal{G}^{(N)}(:,i_N,:))\\\n&= {\rm Tr}(\prod_{n=1}^{N} \mathcal{G}^{(n)}(:,i_n,:))\n\end{aligned}\n\end{equation} \nwhere ${\rm Tr}$ denotes the matrix trace operation.\n\n$\textbf{Definition 15}$ (Multi-linear product \cite{c16}): Given two TR factors $\mathcal{G}^{(n)}$ and $\mathcal{G}^{(n+1)}$, their multi-linear product $\mathcal{G}^{(n,n+1)} \in \mathbb{R}^{r_n \times I_n I_{n+1}\times r_{n+2}}$ is calculated as\n\begin{equation}\n\label{eq:multilinear}\n\begin{aligned}\n\mathcal{G}^{(n,n+1)}(:,I_n(i_k -1) +j_k,:)=\mathcal{G}^{(n)}(:,i_k,:)\mathcal{G}^{(n+1)}(:,j_k,:)\n\end{aligned}\n\end{equation} \nfor $i_k = 1,2,...,I_n$ and $j_k = 1, 2,...,I_{n+1}$.\n\nFrom the above $\textbf{Definition 15}$, the multi-linear product of all the TR factors can be induced as $[\mathcal{G}] = \prod^N_{n=1}\mathcal{G}^{(n)} = \mathcal{G}^{(1,2,...,N)} \in \mathbb{R}^{r_1 \times I_1 I_2... I_N \times r_1}$. The TR decomposition can be rewritten as $\mathcal{X} = \Phi({\mathcal{G}})$, where $\Phi$ is a dimensional shifting operator $\Phi: \mathbb{R}^{r_1 \times I_1 I_2...I_N \times r_1} \rightarrow \mathbb{R}^{I_1 \times I_2 \times...\times I_N}$. \n\n$\textbf{Lemma 1}$ (Circular Dimensional Permutation Invariance \cite{c126}): If the TR decomposition of $\mathcal{X}$ is $\mathcal{X} = \Phi(\mathcal{G}^{(1)},\mathcal{G}^{(2)},...,\mathcal{G}^{(N)})$ and ${\stackrel{\leftarrow}{\mathcal{X}}}^n \in \mathbb{R}^{I_n \times I_{n+1} \times ... \times I_1 \times ... \times I_{n-1}} $ is defined by circularly shifting the dimensions of $\mathcal{X}$ by $n$, then we obtain the following relation:\n\begin{equation}\n\label{eq:circluar}\n\begin{aligned}\n\stackrel{\leftarrow}{\mathcal{X}}^n = \Phi( \mathcal{G}^{(n)},\mathcal{G}^{(n+1)},...,\mathcal{G}^{(N)},\mathcal{G}^{(1)},\mathcal{G}^{(2)},...,\mathcal{G}^{(n-1)})\n\end{aligned}\n\end{equation} \n\n\begin{figure*}[htb]\n\t\begin{center}\n\t\t\includegraphics[height=9cm]{Visio-denoising_framework.pdf}\n\t\end{center}\n\t\caption[restoration]{A schematic diagram of HS image restoration. 
}\n\t\label{fig:denoisingfr}\n\end{figure*}\n\n$\textbf{Definition 16}$ (Mixed $l_{1,0}$ pseudo-norm \cite{c41}): Given a vector $\textbf{y} \in \mathbb{R}^{m}$ and index sets $\theta_1,...,\theta_i,...,\theta_n (1 \leq n \leq m)$ that satisfy \n\begin{itemize}\n\item each $\theta_i$ is a subset of $\{1,...,m\}$,\n\item $\theta_i \cap \theta_l = \emptyset$ for any $i \neq l$,\n\item $\cup^n_{i=1}\theta_i =\{1,...,m\}$,\n\end{itemize}\nthe mixed $l_{1,0}$ pseudo-norm of $\textbf{y}$ is defined as:\n\begin{equation}\n||\textbf{y}||^{\theta}_{1,0} = ||(||\textbf{y}_{\theta_1}||_1,...,||\textbf{y}_{\theta_i}||_1,...,||\textbf{y}_{\theta_n}||_1)||_0,\n\end{equation}\nwhere $\textbf{y}_{\theta_i}$ denotes a sub-vector of $\textbf{y}$ with its entries specified by $\theta_i$ and $|| \cdot ||_0$ counts the number of non-zero entries of its argument. \n\n\section{HS Restoration}\n\label{sect:restoration}\n\nIn the actual process of HS data acquisition and transformation, external environmental changes and internal equipment conditions inevitably lead to noise, blurring, and missing data (including clouds and stripes) \cite{GRSM2022,li2021progressive}, which degrade the visual quality of HS images and the efficiency of subsequent HS data applications, such as fine HS RS classification for crops and wetlands \cite{5779697,9598903} and the refinement of spectral information for target detection \cite{zhangTD2012,2009OE}. Fig. \ref{fig:denoisingfr} depicts HS RS degradation and restoration. Therefore, HS image restoration serves as a crucial pre-processing step for further applications. \n\nMathematically, an observed degraded HS image can be formulated as follows:\n \begin{equation}\n\t\begin{split}\n\t\t\label{eq:degrade}\n\mathcal{T}=M(\mathcal{X}) + \mathcal{S} + \mathcal{N}\n\t\end{split}\n\end{equation} \nwhere $\mathcal{T} \in \mathbb{R}^{h \times v \times z}$, $\mathcal{X} \in \mathbb{R}^{h \times v \times z}$, $\mathcal{S} \in \mathbb{R}^{h \times v \times z}$ and $\mathcal{N} \in \mathbb{R}^{h \times v \times z}$ represent the observed HS image, the restored HS image, the sparse error, and the additive noise, respectively, and $M(\cdot)$ denotes different linear degradation operators for different HS restoration problems: (a) when $M(\cdot)$ is a blur kernel, also called the point spread function (PSF), Eq. (\ref{eq:degrade}) becomes the HS deblurring problem; \n(b) when $M(\cdot)$ is a binary mask, i.e., 1 for observed pixels and 0 for missing data, Eq. (\ref{eq:degrade}) turns into the HS inpainting problem;\n(c) when $M(\mathcal{X})$ keeps $\mathcal{X}$ unchanged, i.e., $M(\mathcal{X}) = \mathcal{X}$, Eq. (\ref{eq:degrade}) is reformulated as the HS destriping problem ($\mathcal{T}=\mathcal{X} + \mathcal{S}$) or the HS denoising problem (considering only Gaussian noise, $\mathcal{T}=\mathcal{X} + \mathcal{N}$, or mixed noise, $\mathcal{T}=\mathcal{X} + \mathcal{S} + \mathcal{N}$). The HS restoration task is to estimate the recovered HS image $\mathcal{X}$ from the given HS image $\mathcal{T}$. This ill-posed problem suggests that extra constraints on $\mathcal{X}$ need to be enforced to obtain an optimal solution of $\mathcal{X}$. These additional constraints encode the desired HS properties and various types of HS prior information, such as non-local similarity, spatial and spectral smoothness, and subspace representation. 
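\n\nAs a simple illustration of the degradation model in Eq. (\ref{eq:degrade}), the following minimal NumPy and SciPy sketch simulates the three settings of $M(\cdot)$ discussed above; it is added for this overview only, and the function name \texttt{degrade}, the band-wise Gaussian blur width, the missing-pixel ratio, the impulse ratio, and the noise level are arbitrary illustrative choices rather than values taken from any cited work.\n\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import gaussian_filter\n\ndef degrade(X, mode='denoising', sigma_n=0.1,\n            p_missing=0.2, p_impulse=0.05):\n    # Simulate T = M(X) + S + N for a clean HS cube X of size h x v x z.\n    rng = np.random.default_rng(0)\n    if mode == 'deblurring':      # (a) M(.) is a band-wise spatial blur (PSF)\n        MX = gaussian_filter(X, sigma=(1.5, 1.5, 0))\n    elif mode == 'inpainting':    # (b) M(.) is a binary mask, 0 marks missing pixels\n        MX = X * (rng.random(X.shape) > p_missing)\n    else:                         # (c) M(X) = X for denoising and destriping\n        MX = X.copy()\n    S = np.zeros_like(X)          # sparse error (impulse noise, dead-lines, stripes)\n    mask = rng.random(X.shape) < p_impulse\n    S[mask] = rng.choice([-1.0, 1.0], size=mask.sum())\n    N = sigma_n * rng.standard_normal(X.shape)   # additive Gaussian noise\n    return MX + S + N\n\end{verbatim}\nFor example, \texttt{degrade(X, mode='inpainting')} corresponds to case (b) above, while the default call reproduces the mixed-noise denoising setting $\mathcal{T}=\mathcal{X} + \mathcal{S} + \mathcal{N}$.\n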
The HS restoration problem can be summarized as \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:summ}\n \\underset{\\mathcal{X}}{ \\min } \\frac{1}{2} || \\mathcal{T} - M(\\mathcal{X}) - \\mathcal{S} ||^2_F + \\tau f(\\mathcal{X}) + \\lambda g(\\mathcal{S})\n \\end{aligned}\n\\end{equation}\nwhere $f(\\mathcal{X})$ and $g(\\mathcal{S})$ stand for the regularizations to explore the desired properties on the recovered $\\mathcal{X}$ and sparse part $\\mathcal{S}$, respectively. $\\tau$ and $\\lambda$ are regularization parameters.\n\n\n\n\n\n\n\\subsection{HS Denoising}\n\\label{sect:HS_Denoising}\nThe observed HS images are often corrupted by mixed noise, including Gaussian noise, salt and pepper noise, and dead-line noise. Several noise types of HS images are shown in Fig. \\ref{fig:Visio-noisyHSI}. \n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.2cm]{Visio-noisyHSI.pdf}\n\t\\end{center}\n\t\\caption[washington]{HS data sets with different noise types: (a) the Urban data set, (b) the Indian Pines data set, (c) the Salinas data set. }\n\t\\label{fig:Visio-noisyHSI}\n\\end{figure}\nThe wealthy spatial and spectral information of HS images can be extracted by different prior constraints like LR property, sparse representation, non-local similarity, and total variation. Different LR tensor decomposition models are introduced for HS denoising. Consequently, one or two kinds of other prior constraints are combined with these tensor decomposition models.\n\n\n\n\\subsubsection{LR Tensor Decomposition}\n\\label{sect:LRT}\n \nIn this section, the LR tensor decomposition methods are divided into two categories: 1) factorization-based approaches and 2) rank minimization-based approaches. The former one needs to predefine rank values and update decomposition factors. The latter directly minimizes tensor ranks and updates LR tensors.\n \n$\\textbf{(1) Factorization-based approaches}$ \n\nTwo typical representatives are used in the HS image denoising literature, namely, Tucker decomposition and CP decomposition. Renard \\textit{et al}. \\cite{c25} considered Gaussian noise and suggested a LR tensor approximation (LRTA) model to complete an HS image denoising task:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:lrta}\n&\\underset{\\mathcal{X}}{\\textrm{min}}\\;||\\mathcal{T} - \\mathcal{X} ||^2_F \\\\ \n&{\\rm s.t.} \\; \\mathcal{X}= \\mathcal{A} \\times_1 \\mathbf{B}_1 \\times_2 \\mathbf{B}_2 \\times_3 \\mathbf{B}_3\n\t\\end{split}\n\\end{equation} \n\n Nevertheless, users should manually pre-define the multiple ranks along all modes before running the Tucker decomposition-related algorithm, which is intractable in reality. In Eq. (\\ref{eq:lrta}), the Tucker decomposition constraint is easily replaced by other tensor decomposition, such as CP decomposition. Liu \\textit{et al}. \\cite{c26} used a Parallel Factor Analysis (PARAFAC) decomposition algorithm and still assumed that HS images were corrupted by white Gaussian noise. Guo \\textit{et al}. \\cite{c59} presented an HS image noise-reduction model via rank-1 tensor decomposition, which was capable of extracting the signal-dominant features. However, the smallest number of rank-1 factors is served as the CP rank, which needs high computation cost to be calculated. \n\n $\\textbf{(2) Rank minimization approaches}$ \n \n The tensor rank bounds are rarely available in many HS noisy scenes. 
To avoid the occurrence of rank estimation, another kind of methods focus on minimizing the tensor rank directly, which can be formulated as follows:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:rank_mini}\n&\\underset{\\mathcal{X}}{\\textrm{min}}\\; {\\rm rank}(\\mathcal{X}) \\\\ \n&{\\rm s.t.} \\; \\mathcal{T}= \\mathcal{X} + \\mathcal{S} + \\mathcal{N} \\\\\n\t\\end{split}\n\\end{equation} \nwhere ${\\rm rank(\\mathcal{X})}$ denotes the rank of HS tensor $\\mathcal{X}$ and includes different rank definitions like Tucker rank, CP rank, TT rank, and tubal rank. Due to the above rank minimizations belong to non-convex problems, these problems are NP-hard to compute. Nuclear norms are generally used as the convex surrogate of non-convex rank function. Zhang \\textit{et al}. \\cite{c27} proposed a tubal rank related TNN to characterize the 3-D structural complexity of multi-linear data. Based on the TNN, Fan \\textit{et al}. \\cite{c28} presented an LR Tensor Recovery (LRTR) model to remove Gaussian noise and sparse noise:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:LRTR}\n&\\underset{\\mathcal{X},\\mathcal{S},\\mathcal{N}}{\\textrm{min}}\\; ||\\mathcal{X}||_* + \\lambda_1 || \\mathcal{S}||_1 + \\lambda_2 ||\\mathcal{N}||_F^2 \\\\ \n&{\\rm s.t.} \\; \\mathcal{T}= \\mathcal{X} + \\mathcal{S} + \\mathcal{N} \\\\\n\t\\end{split}\n\\end{equation} \n\nXue \\textit{et al}. \\cite{xue2019nonconvex} applied a non-convex logarithmic surrogate function into a TTN for tensor completion and (tensor robust principal component analysis) TRPCA tasks. \nZheng \\textit{et al}. \\cite{c61} explored the LR properties of tensors along three directions and proposed two tensor models: a three-directional TNN (3DTNN) and a three-directional log-based TNN (3DLogTNN) as its convex and nonconvex relaxation. Although these pure LR tensor decomposition approaches utilize the LR prior knowledge of HS images, they are hardly effective to suppress mixed noise due to the lack of other useful information. \n\n\\subsubsection{Other priors regularized LR Tensor Decomposition}\n\\label{sect:OLRT}\nVarious types of priors are combined with an LR tensor decomposition model to optimize the model solution including non-local similarity, spatial and spectral smoothness, spatial sparsity, subspace learning. \n\n$\\textbf{ (1) Non-local similarity }$\n\nAn HS image often possesses many repetitive local spatial patterns, and thus a local patch always has many similar patches across this HS image \\cite{c66}. Peng \\textit{et al}. \\cite{c62} designed a tensor dictionary learning (TDL) framework. In Fig. \\ref{fig:nonlocal}, an HS image is segmented into 3-D full band patches (FBP). The similar FBPs are clustered together as a 4-D tensor group to simultaneously leverage the non-local similarity of spatial patches and the spectral correlation. TDL is the first model to exploit the non-local similarity and the LR tensor property of 4-D tensor groups, as shown in Fig. \\ref{fig:nonlocal} (b). Instead of a traditional alternative least square based tucker decomposition, Bai \\textit{et al}. \\cite{c71} improved a hierarchical least square based nonnegative tucker decomposition method. Kong \\textit{et al}. \\cite{c67} incorporated the weighted tensor norm minimization into the Tucker decompositions of 4-D patches. \n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=10cm]{Visio-nonlocal.pdf}\n\t\\end{center}\n\t\\caption[nonlocal]{Flowchart of non-local LR tensor-based methods. 
}\n\t\\label{fig:nonlocal}\n\\end{figure*}\n\n\nDiffer from references \\cite{c62,c71,c67}, other works \\cite{c65,c63,xue2019nonlocal,chang2017hyper,he2019non,c64} obtained a 3-D tensor by stacking all non-local similar FBPs converted as matrices with a spatial mode and a spectral mode in Fig. \\ref{fig:nonlocal}(d). Based on a non-local similar framework, Dong \\textit{et al}. \\cite{c65} proposed a Laplacian Scale Mixture (LSM) regularized LR tensor approximation method for denoising.\nXie \\textit{et al}. \\cite{c63} conducted a tensor sparsity regularization named intrinsic tensor sparsity (ITS) to encode the spatial and spectral correlation of the non-local similar FBP groups. With the non-local similarity of FBPs, $\\mathcal{X}$ is estimated from its corruption $\\mathcal{T}$ by solving the following problem \n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:ITS}\n \\min _{\\mathcal{X}} \\Lambda (\\mathcal{X})+\\frac{\\gamma}{2}\\|\\mathcal{T}_{i}-\\mathcal{X}_i \\|_{F}^{2}\n\t\\end{split}\n\\end{equation} \nwhere the sparsity of a tensor $\\mathcal{X}$ is $\\Lambda (\\mathcal{X})=t\\|\\mathcal{A}\\|_{0}+(1-t) \\prod_{i=1}^{N} {\\rm rank}(X_{(i)})$ and $\\mathcal{A}$ is the core tensor of $\\mathcal{X}$ via the Tucker decomposition $\\mathcal{X}= \\mathcal{A} \\times_1 \\mathbf{B}_1 \\times_2 \\mathbf{B}_2 \\times_3 \\mathbf{B}_3$. Xue \\textit{et al}. \\cite{xue2019nonlocal} presented a non-local LR regularized CP tensor decomposition (NLR-CPTD) algorithm. However, the Tucker or CP decomposition-related methods are subject to the heavy computational burden issues. \n\nChang \\textit{et al}. \\cite{chang2017hyper} discovered the LR property of the non-local patches and used a hyper-Laplacian prior to model additional spectral information. He \\textit{et al}. \\cite{he2019non} developed a new paradigm, called non-local meets global (NGmeet) method, to fuse the spatial non-local similarity and the global spectral LR property. Chen \\textit{et al}. \\cite{c64} analyzed the advantages of a novel TR decomposition over the Tucker and CP decompositions. The proposed non-local TR decomposition method for HS image denoising is formulated as:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:TR}\n\\min _{\\mathcal{X}_{i}, \\mathcal{G}_{i}} \\frac{1}{2}\\|\\mathcal{T}_{i}-\\mathcal{X}_{i}\\|_{F}^{2} \\quad \\text { s.t. } \\; \\mathcal{X}_{i}=\\Phi([\\mathcal{G}_{i}])\n\t\\end{split}\n\\end{equation} \n\nThe non-local similarity-based tensor decomposition methods focus on removing Gaussian noise from corrupted HS images and unavoidably cause a computational burden in practice.\n\n$\\textbf{(2) Spatial and spectral smoothness }$\n\nHS images are usually captured by airborne or space-borne platforms far from the Earth's surface. The low measurement accuracy of imaging spectrometers leads to low spatial resolutions of HS images. In general, the distribution of ground objects varies gently. Moreover, high correlations exist between different spectral bands. HS images always have relatively smoothing characteristics in the spatial and spectral domains. \n\nAn original TV method was first proposed by Rudin \\textit{et al}. \\cite{c76} to remove the noise of gray-level images due to the ability to preserve edge information and promote piecewise smoothness. The HS image smoothness can be constrained by either an isotropic TV norm or an anisotropic TV norm \\cite{c35}. The obvious blurring artifacts are hardly eliminated in the denoised results of the isotropic model \\cite{c75}. 
Thus, anisotropic TV norms for HS image denoising are investigated in this article.\nWe take the Washington DC (WDC) data set as a typical example to depict the gradient images along three directions in Fig. \ref{fig:sstv1}. The smooth areas and edge information are much clearer in the gradient images than in the original band. \n\begin{figure}[htb]\n\t\begin{center}\n\t\t\includegraphics[height=2.5cm]{Visio-spatialspectral-smoothness.pdf}\n\t\end{center}\n\t\caption[washington]{The smooth properties of the Washington DC data set: (a) original band, (b) the gradient image along the spatial horizontal direction, (c) the gradient image along the spatial vertical direction, (d) the gradient image along the spectral direction. }\n\t\label{fig:sstv1}\n\end{figure}\n\nInspired by the TV applications to gray-level images, the 2-D spatial TV norm of $\mathcal{X}$ is easily introduced to an HS image in a band-by-band manner \cite{c30}. This simple band-by-band TV norm is defined as follows:\n \begin{equation}\n\t\begin{split}\n\t\t\label{eq:TV}\n\t\t|| \mathcal{X} ||_{\rm TV} = ||D_h \mathcal{X} ||_1 + ||D_v \mathcal{X} ||_1\n\t\t\t\end{split}\n\end{equation} \nwhere $D_h$ and $D_v$ stand for the first-order linear difference operators along the horizontal and vertical directions, respectively. These two operators are usually defined as:\n \begin{equation}\n\t\begin{split}\n\t\label{eq:dh}\n(D_h \mathcal{X})(i,j,k) =\left\{\n\begin{aligned}\n\mathcal{X}(i,j+1,k)-\mathcal{X}(i,j,k) ,\quad & \quad 1 \leq j < v \\\n0, \quad &\quad j=v\n\end{aligned}\n\right.\n\end{split}\n\end{equation}\n\n \begin{equation}\n\t\begin{split}\n\t\label{eq:dv}\n\t(D_v \mathcal{X})(i,j,k) =\left\{\n\begin{aligned}\n\mathcal{X}(i+1,j,k)-\mathcal{X}(i,j,k) ,\quad & \quad 1 \leq i < h \\\n0, \quad &\quad i=h\n\end{aligned}\n\right.\n\end{split}\n\end{equation}\n\nTo enforce the spatial piecewise smoothness and the spectral consistency of HS images, a 3DTV norm \cite{c35} and an SSTV norm \cite{c19} are formulated, respectively:\n \begin{equation}\n\t\begin{split}\n\t\t\label{eq:3DTV}\n\t\t|| \mathcal{X} ||_{\rm 3DTV} = ||D_h \mathcal{X} ||_1 + ||D_v \mathcal{X} ||_1 + ||D_z \mathcal{X} ||_1\n\t\t\t\end{split}\n\end{equation} \n \begin{equation}\n\t\begin{split}\n\t\t\label{eq:SSTV}\n||\mathcal{X}||_{\rm{SSTV}} = ||D_z(D_h \mathcal{X}) ||_1 +||D_z(D_v \mathcal{X}) ||_1\n\t\t\t\end{split}\n\end{equation} \nwhere $D_z$ is a 1-D finite-difference operator along the spectral direction, defined as:\n \begin{equation}\n\t\begin{split}\n\t\label{eq:dz}\n\t(D_z \mathcal{X})(i,j,k) =\left\{\n\begin{aligned}\n\mathcal{X}(i,j,k+1)-\mathcal{X}(i,j,k) ,\quad & \quad 1 \leq k < z \\\n0, \quad &\quad k=z\n\end{aligned}\n\right.\n\end{split}\n\end{equation}\n\nConsidering the degradation model with mixed noise, Chen \textit{et al}. \cite{c77} integrated both the 2DTV and the 3DTV regularizations into the TNN. Fan \textit{et al}. \cite{c34} injected the above SSTV norm into LR tensor factorization. Wang \textit{et al}. \cite{wang2020hyperspectral} used an SSTV term in a multi-directional weighted LR tensor framework. Based on the different contributions of the three gradient terms to the 3DTV regularization, Wang \textit{et al}. 
\\cite{c36} proposed the TV-regularized LR tensor decomposition (LRTDTV) method:\n \\begin{equation}\n\t\\begin{split}\n\t\\label{eq:LRTDTV}\n&\\min _{\\mathcal{X}, \\mathcal{S}, \\mathcal{N}} \\tau\\|\\mathcal{X}\\|_{\\mathrm{3DwTV}}+\\lambda\\|\\mathcal{S}\\|_{1}+\\beta\\|\\mathcal{N}\\|_{F}^{2} \\\\\n&\\text { s.t. } \\mathcal{T}=\\mathcal{X}+\\mathcal{S}+\\mathcal{N} \\\\\n&\\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}, \\mathbf{B}_{i}^{T} \\mathbf{B}_{i}=\\mathbf{I}(i=1,2,3)\n\\end{split}\n\\end{equation}\nwhere the 3DwTV term is defined as:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:3DwTV}\n\t\t|| \\mathcal{X} ||_{\\rm 3DwTV} = w_1||D_h \\mathcal{X} ||_1 + w_2||D_v \\mathcal{X} ||_1 + w_3||D_z \\mathcal{X} ||_1\n\t\t\t\\end{split}\n\\end{equation} \n\nZeng \\textit{et al}. \\cite{c84} integrating the advantages of both a global $L_{1\\--2}{\\rm SSTV}$ and the local-patch TNN.\nChen \\textit{et al}. \\cite{c78} exploited the row sparse structure of gradient images and proposed a weighted group sparsity-regularized TV combined with LR Tucker decomposition (LRTDGS) for HS mixed noise removal. \n\n\nDue to mentioned TV norms just penalizing large gradient magnitudes and easily blurring real image edges, a new $l_0$ gradient minimization was proposed to sharpen image edges \\cite{c37}. Actually, $l_1$ TV norm is a relaxation form of the $l_0$ gradient. Xiong \\textit{et al}. \\cite{c81} and Wang \\textit{et al}. \\cite{8920965} applied the $l_0$ gradient constraint in an LR BT decomposition and Tucker decomposition, respectively. However, the degrees of smoothness of this $l_0$ gradient form are controlled by a parameter, without any physical meaning. To alleviate this limitation, Ono \\cite{c41} proposed a novel $l_0$ gradient projection, which directly adopts a parameter to represent the smoothing degree of the output image. Wang \\textit{et al}. \\cite{c82} extended the ${l_0}\\text{TV}$ model into an LR tensor framework (TLR-${l_0}\\text{TV}$) to preserve more information for classification tasks after HS image denoising. The optimization model of TLR-${l_0}\\text{TV}$ is formulated as:\n\\begin{equation}\n\\label{eq:mlr-l0htv}\n\\begin{aligned}\n&\\underset{\\mathcal{X},\\mathcal{S}}{\\textrm{min}}\\; \\sum_{k=1}^{m} \\alpha _k E_k(\\mathcal{X})_{\\omega} +\\lambda\\|\\mathcal{S}\\|_1 +\\mu \\|\\mathcal{T}-\\mathcal{X}-\\mathcal{S}\\|_F^2,\\\\\n&s.t. \\ ||{B} D \\mathcal{X} ||^{\\theta}_{1,0} \\leq \\gamma,\n\\end{aligned}\t\n\\end{equation}\nwhere the functions $E_k(\\mathcal{X})_{\\omega}$ are set to be $||\\textbf{X}_{(k)}||_{\\omega,*}$ in the WSWNN-$l_0$TV-based method and $||\\mathcal{X}^k||_{\\omega,*}$ in the WSWTNN-$l_0$TV-based method. Operator ${B}$ forces boundary values of gradients to be zero when $ i = h $ and $j = v $. Operator $D$ is an operator to calculate both horizontal and vertical differences. Compared with many other TV-based LR tensor decompositions, TLR-${l_0}\\text{TV}$ achieves better denoising performances for mixed noise removal of HS images. In particular, HS classification accuracy is improved more effectively after denoising by TLR-l0TV.\n\n\n$\\textbf{(3) Subspace representation}$\n\nAs Fig. \\ref{fig:subspace} shows, an unfolding matrix $\\mathbf{X}$ of a denoised HS image can be projected into a orthogonal subspace, i.e., $\\mathbf{X} = \\mathbf{E}\\mathbf{Z}$. 
$\\mathbf{E} \\in \\mathbb{R}^{z \\times l}$ represents the basis of the subspace $S_l$ and $\\mathbf{Z} \\in \\mathbb{R}^{l \\times hv}$ denotes the representation coefficient of $\\mathbf{X}$ with respect to $\\mathbf{E}$. $\\mathbf{E}$ is reasonably assumed to be orthogonal, i.e., $\\mathbf{E}^{T} \\mathbf{E} =\\mathbf{I}$ \\cite{6736073}. \n\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=4cm]{subspace.pdf}\n\t\\end{center}\n\t\\caption[nonlocal]{A schematic diagram of the subspace representation. }\n\t\\label{fig:subspace}\n\\end{figure}\n\nCao \\textit{et al}. \\cite{c72} combined a LR and sparse factorization with the non-local tensor constraint of subspace coefficients, dubbed SNLRSF. Each spectral band of an observed HS image $\\mathcal{T} \\in \\mathbb{R}^{h \\times v \\times z}$ is reshaped as each row of an HS unfolding matrix $\\mathbf{T} \\in \\mathbb{R}^{z \\times hv}$. The spectral vectors is assumed to lie in a $l$-dimensional subspace $S_l$ ($l \\ll z$), and the optimization model can be written as\n\\begin{equation}\n\\label{eq:SNLRSF}\n\\begin{aligned}\n&\\underset{\\mathbf{E}, \\mathbf{Z}, \\mathcal{L}_{i}, \\mathbf{S}}{\\arg \\min } \\frac{1}{2}\\|\\mathbf{T}-\\mathbf{E Z}-\\mathbf{S}\\|_{F}^{2}+\\lambda_{2}\\|\\mathbf{S}\\|_{1} \\\\\n&+\\lambda_{1} \\sum_{i}\\left(\\frac{1}{\\delta_{i}^{2}}\\left\\|\\Re_{i} \\mathbf{Z}-\\mathcal{L}_{i}\\right\\|_{F}^{2}+ ||\\mathcal{L}_i ||_{\\rm TTN} \\right) \\quad \\text { s.t. } \\quad \\mathbf{E}^{T} \\mathbf{E}=\\mathbf{I}\n\\end{aligned}\t\n\\end{equation}\nwhere $\\Re_{i} \\mathbf{Z}$ is divided into three steps: 1) reshape the reduced-dimensionality coefficient image $\\mathbf{Z} \\in \\mathbb{R}^{l \\times hv}$ as a tensor $\\mathbf{Z} \\in \\mathbb{R}^{h \\times v \\times l}$; 2) segment the tensor $\\mathbf{Z}$ as an overlapped patch tensor $\\mathbf{Z}_i \\in \\mathbb{R}^{p \\times p \\times l}$; and 3) cluster $d$ similar patches in a neighborhood area by computing Euclidean distance.\n\n\n\nFrom one side, a spectral LR tensor model is explored according to the fact that spectral signatures of HS images lie in a low-dimensional subspace. From another side, a non-local LR factorization is employed to take the non-local similarity along the spatial direction into consideration. Following the line of SNLRSF, Zheng \\textit{et al}. \\cite{c86} employed LR matrix factorization to decouple spatial and spectral models. The group-sparse structure of HS images is introduced on spatial difference images (SpatDIs). A continuity constraint was applied in the spectral factor to promote the group sparsity of SpatDIs and the spectral continuity of HS images. Sun \\textit{et al}. \\cite{c88} projected the noisy HS images into a non-local tensor subspace spanned by a spectral difference continuous basis. The continuity of the restored HS data is significantly promoted by this difference regularization.\n\n\\subsubsection{Experimental results and analysis}\n\\label{sect:experiment_denoising}\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.3cm]{gaussian-result.pdf}\n\t\\end{center}\n\t\\caption[pavia80]{The different methods for Gaussian noise removal. (a) Original HS image, (b) Gaussian noise, (c) LRTA, (d) TDL, (e) ITS, (f) LLRT, (g) NGmeet. }\n\t\\label{fig:Visio-denoising_gaussian}\n\\end{figure*}\n\n\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.5cm]{mixed-result.pdf}\n\t\\end{center}\n\t\\caption[pavia80]{The different methods for mixed noise removal. 
(a) Original HS image, (b) Mixed noise, (c) LRTR, (d) LRTDTV, (e) LRTDGS, (f) 3DTNN, (g) TLR-$L_0$TV }\n\t\\label{fig:Visio-denoising_mixed}\n\\end{figure*}\n\nAn HS subimage is selected from the Pavia University data set and is normalized to $[0,1]$. The zero-mean Gaussian noise of noise variance $0.12$ is added into each band and shown in Fig. \\ref{fig:Visio-denoising_gaussian} (b). In a mixed noise case, the same Gaussian noise is also adopted. Each band is corrupted by the salt and pepper noise with a proportion of $0-20\\%$. Dead-lines are randomly added from band $61$ to band $80$, with the width of stripes generated from $1$ to $3$, and the number of stripes randomly selected from $3$ to $10$. In addition, bands $61-70$ are corrupted by some stripes with the number randomly selected from $20$ to $40$. Four different quantitative quality indices are chosen: the mean of peak signal-to-noise ratio (MPSNR), the mean of structural similarity (MSSIM), relative dimensional global error in synthesis (ERGAS), and the mean spectral angle distance (MSAD). Larger MPSNR and MSSIM values indicate better-denoised image quality. These two indices pay attention to the restoration precision of spatial pixels. In contrast, smaller ERGAS and MSAD values illustrate better performances of denoised results. \n\nFor Gaussian noise removal, all the competing approaches achieve good results to some degree in Fig. \\ref{fig:Visio-denoising_gaussian}, in which the enlarged subregions are delineated in red boxes. But residual noise remains in the result denoised by LRTA. Compared with TDL, ITS fails to preserve detailed spatial information. LLRT provides a rather similar result with NGmeet. Consistent with the visual observation, NGmeet outperforms the other methods and obtains the highest metric values among the denoising models in Tab. \\ref{tab:tab-gaussiannoise}. The non-local LR tensor methods including ITSReg, TDL, and LLRT gain better performances than LRTA, due to the formers exploiting two types of HS prior knowledge. The LRTA method is the fastest one among all the competing algorithms since LRTA just considers the spectral correlation. \n\nFig. \\ref{fig:Visio-denoising_mixed} shows the restoration results by five different methods under a heavy noise case. Dead-lines remaining in the images denoised by LRTR and 3DTNN are more obvious than the ones restored by LRTDTV and LRTDGS. The LR tensor-based model is employed in LRTR and 3DTNN, yet LRTDTV, LRTDGS, and TLR-$L_0$TV considered two kinds of prior knowledge: spectral correlation and spatial-spectral smoothness. LRTDTV and LRTDGS are more sensitive to dead-lines than TLR-$L_0$TV, leading to more or fewer artifacts in the denoised results. TLR-$L_0$TV removes most of the mixed noise and preserves image details like texture information and edges. To further evaluate the differences among competing denoising methods, we calculate four quality indices and show them in Tab. \\ref{tab:tab-mixednoise}, with the best results in bold. TLR-$L_0$TV obtains the highest denoising performance among all the approaches. For MPSNR, LRTDTV and LRTDGS are slightly larger than 3DTNN, whereas the SSIM and ERGAS values of LRTDTV and LRTDGS are better than those of 3DTNN. LRTR and LRTDGS are the first and second faster, but they hardly handle the complex mixed noise case with some dead-lines retaining. 
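\n\nFor reproducibility, the quality indices used in this section can be computed directly from the reference and restored cubes. The following minimal NumPy sketch reflects commonly used definitions (constants and normalizations may differ slightly across the cited implementations, so it should be read as an illustration rather than the exact code behind Tabs. \\ref{tab:tab-gaussiannoise} and \\ref{tab:tab-mixednoise}); MSSIM is usually obtained band by band with an off-the-shelf SSIM routine:\n\\begin{verbatim}\nimport numpy as np\n\ndef mpsnr(ref, est, peak=1.0):\n    # Mean over bands of the per-band PSNR ([0, peak] scaling).\n    mse = ((ref - est) ** 2).mean(axis=(0, 1))\n    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))\n\ndef ergas(ref, est, ratio=1.0):\n    # Relative dimensional global error in synthesis;\n    # ratio is the spatial resolution ratio (1 for denoising).\n    mse = ((ref - est) ** 2).mean(axis=(0, 1))\n    mean_b = ref.mean(axis=(0, 1))\n    return float(100.0 * ratio * np.sqrt(np.mean(mse / mean_b ** 2)))\n\ndef msad(ref, est, eps=1e-12):\n    # Mean spectral angle distance (degrees), pixel-wise.\n    num = (ref * est).sum(axis=2)\n    den = np.linalg.norm(ref, axis=2)\n    den = den * np.linalg.norm(est, axis=2) + eps\n    ang = np.arccos(np.clip(num / den, -1.0, 1.0))\n    return float(np.degrees(ang).mean())\n\\end{verbatim}\n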
\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Quantitative comparison of different selected algorithms for Gaussian noise removal.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccccc}\n\\cline{1-6}\n& \\multicolumn{5}{|c}{Gaussian noise removal} \\\\\n\\hline\nIndex & LRTA & TDL & ITS & LLRT & NGmeet \\\\\n\\hline\nPSNR & 32.14 & 34.54 & 34.38 & \\underline{35.96} & $\\mathbf{37.06}$ \\\\\n \nSSIM & 0.9097 & {0.9484} & 0.9466 & \\underline{0.9637} & $\\mathbf{0.9707}$\\\\\n \nERGAS & 5.7044 & 4.3392 & 4.3981 & \\underline{4.0462} & $\\mathbf{3.2344}$\\\\\n \nMSAD & 6.6720 & 5.0701 & 5.0912 & \\underline{4.2402} & $\\mathbf{3.7804}$ \\\\\n \nTIME(s) & $ \\mathbf{1.48} $ & \\underline{13.77} & 650.49 & 506.84 & 29.58 \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-gaussiannoise}\n\\end{spacing}\n\\end{table}\n \n \n \\begin{table}[!htbp]\n\\centering\n\\caption{Quantitative comparison of different selected algorithms for mixed noise removal.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|ccccc}\n\\cline{1-6}\n& \\multicolumn{5}{|c}{Mixed noise removal} \\\\\n\\hline\nIndex & LRTR & LRTDTV & LRTDGS & 3DTNN & TLR-$L_0$TV\\\\\n\\hline\nMPSNR & 26.89 & \\underline{30.76} & \\underline{30.76} & 30.20 & $\\mathbf{31.59}$\\\\\n \nMSSIM & 0.8157 & 0.8821 & 0.7852 & \\underline{0.8945} & $\\mathbf{0.8973}$ \\\\\n \nERGAS & 10.8842 & 7.8154 & 9.5527 & \\underline{7.4915} & $\\mathbf{7.1748}$\\\\\n \nMSAD & 9.9624 & \\underline{7.3689} & 10.4568 & 7.4712 & $\\mathbf{7.2055}$\\\\\n \nTIME(s) & $\\mathbf{19.67}$ & 35.29 & \\underline{25.11} & 44.25 & 325.67 \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-mixednoise}\n\\end{spacing}\n\\end{table}\n\n\n \\iffalse \n \\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different selected algorithms for Gaussian \/ mixed noise removal.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|ccccc|ccccc}\n\\cline{1-11}\n& \\multicolumn{5}{|c}{Gaussian noise removal} & \\multicolumn{5}{|c}{Mixed noise removal} \\\\\n\\hline\nIndex & LRTA & TDL & ITS & LLRT & NGmeet & LRTR & LRTDTV & LRTDGS & 3DTNN & TLR-$L_0$TV\\\\\nMPSNR & 32.14 & 34.54 & 34.38 & \\underline{35.96} & $\\mathbf{37.06}$ & 26.89 & \\underline{30.76} & \\underline{30.76} & 30.20 & $\\mathbf{31.59}$\\\\\n \nMSSIM & 0.9097 & \\underline{0.9484} & 0.9466 & 0.9637 & $\\mathbf{0.9707}$ & 0.8157 & 0.8821 & 0.7852 & \\underline{0.8945} & $\\mathbf{0.8973}$ \\\\\n \nERGAS & 5.7044 & 4.3392 & 4.3981 & \\underline{4.0462} & $\\mathbf{3.2344}$ & 10.8842 & 7.8154 & 9.5527 & \\underline{7.4915} & $\\mathbf{7.1748}$\\\\\n \nMSAD & 6.6720 & 5.0701 & 5.0912 & \\underline{4.2402} & $\\mathbf{3.7804}$ & 9.9624 & \\underline{7.3689} & 10.4568 & 7.4712 & $\\mathbf{7.2055}$\\\\\n \nTIME(s) & $ \\mathbf{1.48} $ & \\underline{6.14} & 650.49 & 506.84 & 29.58 & $ \\mathbf{19.67} $ & 35.29 & \\underline{25.11} & 44.25 & 325.67 \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab3}\n\\end{spacing}\n\\end{table*}\n\\fi\n\n\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.1cm]{Visio-deblur.pdf}\n\t\\end{center}\n\t\\caption[wdc]{ OLRT for different blur cases. (a) Original WDC image, (b) the light Gaussian blur on WDC ($8 \\times 8$, Sigma = 3) and corresponding deblurred image (c), (d) the heavy Gaussian blur on WDC ($17 \\times 17$, Sigma = 7) and corresponding deblurred image (e), (f) the Uniform blur and corresponding deblurred image (g). 
}\n\t\\label{fig:Visio-deblur}\n\\end{figure*}\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative evaluation of OLRT for different blur cases.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccccc}\n\\cline{1-6}\nBlur cases & MPSNR & MSSIM & ERGAS & MSAD & TIME(s) \\\\\n\\hline\nGaussian blur (8*8, Sigma = 3) & 43.50 & 0.9912 & 1.5910 & 1.9486 &314.50 \\\\\nGaussian blur (17*17, Sigma = 7) & 39.63 & 0.9807 & 2.5407 & 3.0819 &305.70 \\\\\n Uniform blur & 39.39 & 0.9784 & 2.9332 & 3.8355 & 314.28 \\\\\n \n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-deblur}\n\\end{spacing}\n\\end{table*}\n\n\n\\subsection{HS Deblurrring} \n\\label{sect:HS_Deblurring}\n\nThe atmospheric turbulence or fundamental deviation of some imaging systems often blur HS images during the data acquisition process, which unfortunately damages the high-frequency components and the edge features of HS images. HS deblurring aims to recover sharp latent images from blurred ones. Chang \\textit{et al}. \\cite{c117} discussed the LR correlations along HS spatial, spectral, and non-local similarity modes and proposed a unified optimal LR tensor (OLRT) framework for multiple HS restoration tasks. But a matrix nuclear norm is used to constrain the LR property of unfolding non-local patch groups. Consequently, Chang \\textit{et al}. \\cite{c116} proposed a weighted LR tensor recovery (WLRTR) algorithm with a reweighted strategy. Considering spectral correlation and non-local similarity, the HS deblurring optimization problem can be formulated as follows\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:WLRTR}\n &\\underset{\\mathcal{X}, \\mathcal{A}_{i}, \\mathbf{B}_{j}}{ \\min } \\frac{1}{2}\\|\\mathcal{T}-M( \\mathcal{X} ) \\|_{F}^{2}+\\\\\n &\\eta \\sum_{i}\\left(\\left\\|\\mathcal{R}_{i} \\mathcal{X}-\\mathcal{A}_{i} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}\\right\\|_{F}^{2}\\right.\\left.+\\sigma_{i}^{2}\\left\\|\\boldsymbol{w}_{i} \\circ \\mathcal{A}_{i}\\right\\|_{1}\\right) \n\\end{aligned}\n\\end{equation}\nwhere $w_i$ is a reweighting factor inversely proportional to singular values of $\\mathcal{L}_i$ with $\\mathcal{L}_i = \\mathcal{A}_{i} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}$, and higher-order SVD (HOSVD) is applied to see the different sparsities of higher-order singular values, i.e., LR property. The last term $\\|\\mathcal{Y}-M( \\mathcal{X} ) \\|_{F}^{2}$ is a data fidelity item, which can be replaced by $ || \\mathcal{T} - M(\\mathcal{X}) - \\mathcal{S} - \\mathcal{N} ||^2_F$ for HS inpainting, destriping, and denoising problems. \n\n\nAn experimental example is given to display the deblurred performances of OLRT for the Gaussian blur with different levels and the uniform blur on the WDC data set. Fig. \\ref{fig:Visio-deblur} shows the visual results under different blur cases. The specific texture information is hardly distinguished in the three blurred images shown in Fig. \\ref{fig:Visio-deblur} (b), (d), and (f). The optimal LR tensor prior knowledge of OLRT reliably reflects the intrinsic structural correlation of HS images, which benefits the recovery of structural information and image edges. The quantitative results under different blur cases are reported in Tab. \\ref{tab:tab-deblur}.\n\n\\subsection{HS Inpainting}\n\\label{sect:HS_Inpainting}\n\n\nIn this section, we introduce and discuss LR tensor-based methods for HS inpainting. 
\n\\subsection{HS Inpainting}\n\\label{sect:HS_Inpainting}\n\n\nIn this section, we introduce and discuss LR tensor-based methods for HS inpainting. These methods are also suitable for missing data recovery of high-dimensional RS (HDRS) images. \nRS images such as HS, MS, and multi-temporal images often suffer from missing data problems, such as dead pixels, thick clouds, and cloud shadows, as shown in Fig. \\ref{fig:Visio-missingdata}. The goal of inpainting is to estimate the missing data from observed images, which can be regarded as a tensor completion problem. \n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.2cm]{missing-data.pdf}\n\t\\end{center}\n\t\\caption[md]{ Examples of RS data with missing information. (a) Reflectance of Aqua MODIS band 6 with sensor failure. (b) Digital number values of Landsat ETM+ with the SLC-off problem. (c) Digital number\nvalues of a Landsat image with cloud obscuration. }\n\t\\label{fig:Visio-missingdata}\n\\end{figure}\n\nLR tensor completion theory has been successfully applied for HS inpainting \\cite{c117,c116,c118,c119,c120,c130,c131}. Liu \\textit{et al}. \\cite{c119} suggested a trace norm regularized CP decomposition for missing data recovery. Ng \\textit{et al}. \\cite{c121} drew on high-accuracy LR tensor completion (HaLRTC) \\cite{c120} for recovering the missing data of HDRS images and proposed an adaptive weighted TC (AWTC) method. The proposed AWTC model is expressed as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:AWTC}\n \\underset{\\mathcal{X}}{ \\min } \\frac{\\eta}{2} || \\mathcal{T} - M(\\mathcal{X}) ||^2_F + \\sum_{i=1}^3 w_i || \\mathbf{X}_{(i)}||_*\n\\end{aligned}\n\\end{equation} \nwhere $w_i$ is a well-designed parameter related to the singular values of $\\mathbf{X}_{(i)}$.\nXie \\textit{et al}. \\cite{c130} proposed an LR regularization-based TC (LRRTC), fusing the logarithm of the determinant with a TTN. With the definitions of a new TNN and its t-SVD \\cite{c123}, Wang \\textit{et al}. \\cite{c124} and Srindhuna \\textit{et al}. \\cite{c125} proposed new low-tubal-rank TC methods to estimate the missing values in HDRS images. Later, a novel TR decomposition was formulated to represent a high-dimensional tensor by circular multi-linear products on a sequence of third-order tensors \\cite{c126}. Based on the TR theory, He \\textit{et al}. \\cite{c128} fused the spatial TV into the TR framework and developed two solving algorithms: ALM and ALS. Similarly, Wang \\textit{et al}. \\cite{c129} incorporated a 3DTV regularization into a novel weighted TR decomposition framework. The proposed TVWTR model is formulated as:\n\\begin{equation}\n\\label{eq:TVWTR}\n\\begin{aligned}\n&\\underset{\\mathcal{X},[\\mathcal{G}]}{\\textrm{min}}\\; \\sum^N_{n=1}\\sum^3_{i=1} \\theta_i ||\\textbf{G}^{(n)}_{(i)}||_*+ \\frac{\\lambda}{2}||\\mathcal{X} - \\Phi([\\mathcal{G}])||^2_F+\\tau||\\mathcal{X}||_{\\rm 3DTV} \\\\\n&s.t.\\; \\mathcal{X}_{\\Omega}=\\mathcal{T}_{\\Omega}\n\\end{aligned}\n\\end{equation}\n\n\n\nFor HS image inpainting tasks, we test three methods: HaLRTC, LRTC, and TVWTR on the random missing data problem and the text removal problem. A subimage is chosen from the Houston 2013 data set for our experimental study. Fig. \\ref{fig:Visio-inpainting1} shows the results of the Houston2013 data set before and after recovery under a missing ratio of $80\\%$. Although missing pixels disappear in the results of HaLRTC and LRTC, these methods produce some artifacts in the top-right corner of the zoomed area. The TVWTR method performs the best among all the compared algorithms and recovers details such as the red square at the center of the zoomed area. In Fig. 
\\ref{fig:Visio-inpainting2}, original HS bands are corrupted by different texts that do not appear randomly as in previous cases. The text corruption is eliminated by three tensor decomposition-based algorithms. Few text artifacts exist in the enlarged area of LRTC. Due to the consideration of the spectral correlation and the spatial-spectral smoothness, TVWTR provides the best result with reconstructing most information of the original image.\n\nThe corresponding quantitative results of two inpainting tasks are reported in Tab. \\ref{tab:tab-inpainting}. Taking account of two types of prior knowledge, TVWTR gives a significantly fortified performance under two cases, as compared with the other competing methods. HaLRTC and LRTC are the fastest and second-fastest among all the comparing methods.\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{inpainting_results.pdf}\n\t\\end{center}\n\t\\caption[houston]{ The inpainting results by different methods under $80\\%$ missing ratio. (a) Original Houston2013 image, (b) Missing, (c) HaLRTC, (d) LRTC, (e) TVWTR. }\n\t\\label{fig:Visio-inpainting1}\n\\end{figure*}\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{inpainting_text_results.pdf}\n\t\\end{center}\n\t\\caption[houston]{ Inpainting results by different methods for the text removal case. (a) Original Houston2013 image, (b) Missing, (c) HaLRTC, (d) LRTC, (e) TVWTR. }\n\t\\label{fig:Visio-inpainting2}\n\\end{figure*}\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative evaluation of different methods for inpainting.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccc|ccc}\n\\cline{1-7}\n& \\multicolumn{3}{c}{Missing data ($80\\%$)} & \\multicolumn{3}{|c}{Text removal} \\\\\n\\hline\nIndex & HaLRTC & LRTC & TVWTR& HaLRTC & LRTC & TVWTR \\\\\n\\hline\n MPSNR & 36.54 & 38.49 & $\\mathbf{49.05}$ & 50.29 & 53.39& $\\mathbf{57.31}$ \\\\\n MSSIM & 0.9391 & 0.9555 & $\\mathbf{0.9947}$ & 0.9965 & 0.9975 & $\\mathbf{0.9991}$ \\\\\n ERGAS & 4.5703 & 3.7704 & $\\mathbf{1.0211}$ & 1.0108 & 1.1345 & $\\mathbf{0.4012}$\\\\ \n MSAD & 5.3845 & 4.6162 & $\\mathbf{1.3792}$ & 1.2424 &1.1134 & $\\mathbf{0.5497}$\\\\\n TIME(s) &$\\mathbf{9.34}$ & 23.74 & 347.46 & $\\mathbf{8.38}$ & 24.90 & 345.66\\\\\n \n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-inpainting}\n\\end{spacing}\n\\end{table*}\n\n\n\n\n\\subsection{HS Destriping}\n\\label{sect:HS_Destriping}\nIn the past three decades, plenty of airborne and space-borne imaging spectrometers have adopted a whiskbroom sensor or a pushbroom sensor commonly. The former one is built with linear charge-coupled device (CCD) detector arrays. The corresponding HS imaging systems scans the target pixel by pixel and then acquires a spatial image by track scanning with a scan mirror forward motion \\cite{c134}. The latter one contains area CCD arrays. A pushbroom sensor scans the target line by line, one direction of which is utilized for spatial imaging, and the other for spectral imaging. The incoherence of the system mechanical motion and the failure of CCD arrays lead to the non-uniform response of neighboring detectors, mainly generating stripe noise. The periodic or noneriodic stripes generally distributed along the scanning direction have a certain width and length. The values of stripes are brighter or darker than their surrounding pixels. The inherent property of stripes, i.e., $g(\\mathcal{S})$ should be considered in the HS destriping model. \n\nChen \\textit{et al}. 
\\cite{c132} were the first to develop a LR tensor decomposition for an MS image destriping task. The high correlation of the stripe component along the spatial domain is depicted by a LR Tucker decomposition. The final minimization model for solving the destriping problem is expressed as follows:\n \\begin{equation}\n \\begin{aligned}\n\\min _{\\mathcal{X}, \\mathcal{S}, \\mathcal{A}, \\mathbf{B}_{i}} & \\frac{1}{2}\\|\\mathcal{Y}-\\mathcal{X}-\\mathcal{S}\\|_{F}^{2}+\\eta_{1}\\left\\|D_{h} \\mathcal{X}\\right\\|_{1}+\\eta_{2}\\left\\|D_{z} \\mathcal{X}\\right\\|_{1} \\\\\n&+\\lambda\\|\\mathcal{S}\\|_{2,1} \\\\\n\\text { s.t. } \\mathcal{S}=& \\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}, \\mathbf{B}_{i}^{T} \\mathbf{B}_{i}=\\mathbf{I}(i=1,2,3)\n\\end{aligned}\n\\end{equation}\nwhere $\\|\\mathcal{S}\\|_{2,1} = \\sum^z_{k=1} \\sum^v_{j=1} \\sqrt{\\sum^h_{i_1} \\mathcal{S}_{i,j,k}^2 } $.\n\nCao \\textit{et al}. \\cite{c135} implemented the destriping task by the matrix nuclear norm of stripes and non-local similarity of image patches in the spatio-spectral volumes. WLRTR and OLRT \\cite{c117,c116} are also effective for a HS destriping task. Chang \\textit{et al}. \\cite{c117} simultaneously considered the LR properties of the stripe cubics and non-local patches. The OLRT algorithm is reformulated for modeling both the recovered and stripe components as follows\n\\begin{equation}\n\\begin{aligned}\n& \\min _{\\mathcal{X}, \\mathcal{L}_{i}^{j}, \\mathcal{S}} \\frac{1}{2}\\|\\mathcal{T}-\\mathcal{X}-\\mathcal{S}\\|_{F}^{2}+\\rho \\operatorname{rank}_{1}(\\mathcal{S}) \\\\\n&+\\omega_{j} \\sum_{j} \\sum_{i}(\\frac{1}{\\delta_{i}^{2}}\\|\\mathcal{R}_{i}^{j} \n\\mathcal{X}-\\mathcal{L}_{i}^{j}\\|_{F}^{2}+\\operatorname{rank}_{j} (\\mathcal{L}_{i}^{j}) )\n\\end{aligned}\n\\end{equation}\n\nIn \\cite{c133}, an HS destriping model is transformed to a tensor framework, in which the tensor-based non-convex sparse model used both $l_0$ and $l_1$ sparse priors to estimate stripes from noisy images.\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{destriping.pdf}\n\t\\end{center}\n\t\\caption[houston2018]{The destriping results by different methods. (a) Original Houston 2018 image, (b) nonperiodic stripes, (c) WLRTR, (d) LRTD. }\n\t\\label{fig:Visio-destriping}\n\\end{figure*}\n\n\n\n \n\nWe take an example with nonperiodic stripes of intensity 50 and stripe ratio 0.2, which is presented in Fig. \\ref{fig:Visio-destriping} (b). Fig. \\ref{fig:Visio-destriping} (c) and (d) display the destriping results of WLRTR and LRTD. The stripes are estimated and removed correctly by WLRTR and LRTD since both models consider non-local similarity and spectral correlation. Considering the third type of prior knowledge-- spatial and spectral smoothness, LRTD moderately preserves more details like clear edges than WLRTR. The quantitative comparison is in accordance with the above-mentioned visual results. Tab. \\ref{tab:tab-destriping} performs destriping results with four quantitative indices. 
LRTD achieves higher evaluation values than WLRTR.\n\n \\begin{table}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different selected algorithms for destriping.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|cccc}\n\\hline\n & MPSNR & MSSIM & ERGAS & MSAD \\\\\n\\hline\nWLRTR & 39.77 & 0.9844 &7.7883 & 5.1765\\\\\nLRTD & $\\mathbf{47.87}$ & $\\mathbf{0.9912}$ & $\\mathbf{3.8388}$ & $\\mathbf{1.1276}$\\\\\n\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-destriping}\n\\end{spacing}\n\\end{table}\n\\subsection{Future challenges}\n\\label{sect:challenges_restoration}\n\nVarious tensor optimization models have been developed to solve the HS restoration problem and show impressive performances. Nevertheless, these models can still be further improved in future work:\n\n\nAs prior information is effective in finding the optimal solution, novel tensor-based approaches should utilize as many types of priors as possible. Therein, how best to design a unified framework to simultaneously exploit non-local similarity, spatial and spectral smoothness, and subspace representation is a crucial challenge.\n\nThe addition of different regularizations leads to the manual adjustment of the corresponding parameters. For example, a noise-adjusted parameter pre-definition strategy needs to be studied to enhance the robustness of tensor optimization models. \n\n It is worth noting that we are usually blind to the location of the stripes or clouds. The locations of the stripes or mixed noise among neighboring bands are often different and need to be estimated. How best to predict the degradation positions and design blind estimation algorithms deserves further study in future research.\n\nSince some HS images contain hundreds of spectral bands, the high dimensionality of an HS tensor causes a heavy computational burden. The model complexity of tensorial models should be reduced while guaranteeing the efficiency and accuracy of HS restoration. \n\n\\section{HS CS}\n\\label{sect:HSI-CS}\nTraditional HS imaging techniques are based on the Nyquist sampling theory for data acquisition. A signal must be sampled at a rate greater than twice its maximum frequency component to ensure unambiguous data \\cite{c108,c109}. This sampling scheme requires huge computation and storage resources. Meanwhile, the ever-increasing spectral resolution of HS images also leads to the high expense and low efficiency of transmission from airborne or space-borne platforms to ground stations. The goal of CS is to compressively sample and reconstruct signals based on sparse representation to reduce the cost of signal storage and transmission. In Fig. \\ref{fig:CSfr}, based on the image-forming principle of a single pixel camera which uses the digital micromirror device (DMD) to accomplish the CS sampling, an HS sensor can span the necessary wavelength range and record the intensity of the light reflected by the modulator in each wavelength \\cite{c215}. Since the CS rate can be far lower than the Nyquist rate, the limitation of high cost caused by the sheer volume of HS data will be alleviated. A contradiction usually exists between the massive HS data and the limited bandwidth of satellite transmission channels. HS images can be compressed first to reduce the pressure on channel transmission. 
Therefore, the HS CS technique is conducive to onboard burst transmission and real-time processing in RS \\cite{c223}.\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{Visio-cs-fra.pdf}\n\t\\end{center}\n\t\\caption[restoration]{A schematic diagram of HS CS. }\n\t\\label{fig:CSfr}\n\\end{figure*}\n\nCS of HS images aims to precisely reconstruct an HS image $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ from a few compressive measurements $\\textbf{y} \\in \\mathbb{R}^m$ by effective HS CS algorithms. The compressive measurements $\\textbf{y}$ can be formulated by:\n\\begin{equation}\n\\label{eq:y1}\n\\textbf{y} = \\Psi (\\mathcal{X})\n\\end{equation}\nwhere $\\Psi$ is a measurement operator instantiated as $\\Psi= \\textbf{D} \\cdot \\textbf{H} \\cdot \\textbf{P}$, where $\\textbf{D}$ is a random downsampling operator, $\\textbf{H}$ is a random permutation matrix, $\\textbf{P}$ is a Walsh-Hadamard transform, and the mapping of $\\Psi$ is $ \\mathbb{R}^{h \\times v \\times z} \\rightarrow \\mathbb{R}^m$ (the sampling ratio is $m/(hvz)$). The exact reconstruction of $\\mathcal{X}$ from $\\mathbf{y}$ is guaranteed by CS theory when $\\Psi$ satisfies the restricted isometry property (RIP). This compressive operator has been successfully adopted for various HS CS tasks \\cite{c100,c92,c93,c94}. However, operator $\\Psi$ can be replaced according to real demands. Apparently, it is an ill-posed inverse problem to directly recover $\\mathcal{X}$ from Eq. (\\ref{eq:y1}). Extra prior information needs to be investigated to optimize the HS CS problem. The HS CS task can be generalized as the following optimization problem:\n\\begin{equation}\n\\label{eq:hs_cs}\n\\begin{aligned}\n\\underset{\\mathcal{X}}{\\textrm{min}}\\;&\\|\\textbf{y}-\\Psi (\\mathcal{X})\\|_F^2 + \\lambda F(\\mathcal{X}),\n\\end{aligned}\t\n\\end{equation}\nwhere $F(\\mathcal{X})$ denotes the additional regularization term to use different types of HS prior information such as spectral correlation, spatial and spectral smoothness, and non-local similarity. \n\n\n\n\\subsection{Tensor decomposition-based HS CS reconstruction methods}\n\\label{sect:TD-CS}\n\n\n\nTucker decomposition-based methods have aroused wide attention for HS CS. Tucker decomposition was first introduced into the compression of HS images to constrain the discrete wavelet transform coefficients of spectral bands \\cite{c96}. Most of the follow-up works study Tucker decomposition-based variants for HS CS \\cite{c100,c97,c98,c99,c95,c214}. \n\n$\\textbf{(1) Tucker decomposition with TV}$\n\nIn an earlier work \\cite{c100}, a 2-D TV norm was penalized in an LR matrix framework, which robustly recovers a large-size HS image when the sampling ratio is only 3\\%. However, a spectral LR model alone is hardly enough to depict the inherent properties of HS images. \nJoint tensor Tucker decomposition with a weighted 3-D TV (JTenRe3-DTV) \\cite{c97} injected a weighted 3-D TV into the LR Tucker decomposition framework to model the global spectral correlation and local spatial\u2013spectral smoothness of an HS image. Considering the disturbance $\\mathcal{E}$, the JTenRe3-DTV optimization problem for HS CS can be expressed as\n\\begin{equation}\n\\label{eq:TDTV_cs}\n\\begin{aligned}\n&\\min _{\\mathcal{X}, \\mathcal{E}, \\mathcal{A}, \\mathbf{B}_{i}} \\frac{1}{2}\\|\\mathcal{E}\\|_{F}^{2}+\\lambda\\|\\mathcal{X}\\|_{3{\\rm DwTV}} \\\\\n&\\text { s.t. 
} \\mathbf{y}=\\Psi(\\mathcal{X}), \\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}+\\mathcal{E}\n\\end{aligned}\t\n\\end{equation}\nIn \\cite{c102}, the LR tensor constraint of Eq.(\\ref{eq:TDTV_cs}) was replaced by the TNN. \n\n$\\textbf{ (2) Tucker decomposition with non-local similarity}$\n\nThe Tucker decomposition methods with non-local similarity either cluster similar patches into a 4-D group or unfold 2-D patches into a 3-D group. \nDu \\textit{et al}. \\cite{c101} represented each local patch of HS images as a 3-D tensor and grouped similar tensor patches to form a 4-D tensor per cluster. Each tensor group can be approximately decomposed by a sparse coefficient tensor and a few matrix dictionaries. \nXue \\textit{et al}. \\cite{c99} unfolded a series of 3-D cubes into 2-D matrices along the spectral modes and stacked these matrices as a new 3-D tensor. The spatial sparsity, the non-local similarity, and the spectral correlation were simultaneously employed to obtain the proposed model\n\\begin{equation}\n\\label{eq:TDNON_cs}\n\\begin{aligned}\n\\min _{\\mathbf{x}, \\mathcal{A}_{p}, \\mathbf{B}_{1 p}, \\mathbf{B}_{2 p}, \\mathbf{B}_{3 p}} &\\sum_{p=1}^{P} \\frac{\\lambda_{1}}{2}\\left\\|\\mathcal{X}_{p}-\\mathcal{A}_{p} \\times_{1} \\mathbf{B}_{1 p} \\times_{2} \\mathbf{B}_{2 p} \\times_{3} \\mathbf{B}_{3 p}\\right\\|_{F}^{2}\\\\\n&+\\lambda_{2}\\left\\|\\mathcal{A}_{p}\\right\\|_{1}+\\lambda_{3} L(\\mathcal{X}_{p}) \\\\\n\\text { s.t. } \\mathbf{y}=\\Phi \\mathbf{x}, \\mathcal{X}_{p}&=\\mathcal{A}_{p} \\times_{1} \\mathbf{B}_{1 p} \\times_{2} \\mathbf{B}_{2 p} \\times_{3} \\mathbf{B}_{3 p}, \\\\\n\\mathbf{B}_{i p}^{T} \\mathbf{B}_{i p}= \\mathbf{I}&(i=1,2,3)\n\\end{aligned}\t\n\\end{equation}\nwhere $p = 1, . . . P$, and $P$ denotes the group number, $\\mathbf{x} \\in \\mathbb{R}^{hvz}$ denotes the vector form of X, $L(\\mathbf{X})$ is the TTN of $\\mathcal{X}$,\n\n\n$\\textbf{(3) TR-based methods}$\n\nUnlike the Tucker decomposition methods \\cite{c101,c99} which directly captured the LR priror in the original image space at the cost of high computation, a novel subspace-based non-local TR decomposition (SNLTR) approach projected an HS image into a low-dimensional subspace \\cite{c90}. The non-local similarity of the subspace coefficient tensor is constrained by a TR decomposition model. The SNLTR model is presented as\n\\begin{equation}\n\\label{eq:SNLTR-cs}\n\\begin{aligned}\n&\\min _{\\mathbf{E}, \\mathbf{Z}, \\mathcal{L}_{i}, \\mathcal{G}_{i}} \\frac{1}{2}\\|\\mathbf{y}-\\Psi(\\mathbf{E} \\mathbf{Z})\\|_{F}^{2}+\\lambda \\sum_{i}\\left(\\frac{1}{2}\\left\\|\\Re_{i} \\mathbf{Z}-\\mathcal{L}_{i}\\right\\|_{F}^{2}\\right) \\\\\n&\\text { s.t. } \\mathbf{E}^{T} \\mathbf{E}=\\mathbf{I}, \\quad \\mathcal{L}_{i}=\\Phi\\left(\\left[\\mathcal{G}_{i}\\right]\\right)\n\\end{aligned}\t\n\\end{equation}\n\n\\subsection{HS Kronecker CS methods}\n\\label{sect:TD-KCS}\n\nUnlike the current 1-D or 2-D sampling strategy, Kronecker CS (KCS) comprises Kronecker-structured sensing matrices and sparsifying bases for each HS dimension \\cite{c214,8000407}. Based on multidimensional multiplexing, Yang \\textit{et al}. \\cite{c106} used a tensor measurement and a nonlinear sparse tensor coding to develop a self-learning tensor nonlinear CS (SLTNCS) algorithm. The sampling process and sparse representation can be represented as the model based on Tucker decomposition. 
Generally, an HS image $\\mathcal{X} \\in \\mathbb{R}^{n_1 \\times n_2 \\times n_3} $ can be expressed as the following Tucker model:\n\\begin{equation}\n\\label{eq:KCS}\n\\begin{aligned}\n\\mathcal{X}=\\mathcal{S} \\times_{1} \\Phi_{1} \\times_{2} \\Phi_{2} \\times_{3} \\Phi_{3}\n\\end{aligned}\t\n\\end{equation}\nwhere $\\mathcal{S} \\in \\mathbb{R}^{m_1 \\times m_2 \\times m_3} $ stands for an approximate block-sparse tensor in terms of a set of three basis matrices {$\\Phi_{j} \\in \\mathbb{R}^{ k_j \\times k_j}$}, with $m_j \\ll k_j, j=1,2,3 $.\n\nIn the context of KCS, three measurement or sensing matrices denoted by $\\Psi_j, j=1,2,3$ of size $n_j \\times k_j$ with $n_j \\ll k_j$ are used to reduce the dimensionality of the measurement tensor. The compressive sampling model is given as\n\\begin{equation}\n\\label{eq:KCS-psi}\n\\begin{aligned}\n\\mathcal{Y}&=\\mathcal{X} \\times_{1} \\Psi_{1} \\times_{2} \\Psi_{2}\\times_{3} \\Psi_{3} \\\\\n&=\\mathcal{S} \\times_{1} \\mathbf{Q}_{1} \\times_{2} \\mathbf{Q}_{2} \\times_{3} \\mathbf{Q}_{3}\n\\end{aligned}\t\n\\end{equation}\nwhere $\\mathbf{Q}_j = \\Phi_j \\Psi_j, j=1,2,3$.\n\nZhao \\textit{et al}. \\cite{c106} designed a 3-D HS KCS mechanism to achieve independent samplings in three dimensions. The suitable sparsifying bases were selected and the corresponding optimized measurement matrices were generated, which adjusted the distribution of sampling ratio for each dimension of HS images. Yang \\textit{et al}. \\cite{c95} constrained the nonzero number of the Tucker core tensor to explore the spatial-spectral correlation. To address the issue of the computational burden on the data reconstruction of early HS KCS techniques, researchers have proposed several tensor-based methods such as the tensor-form greedy algorithm, N-way block orthogonal matching pursuit (NBOMP) \\cite{6797642}, beamformed mode-based sparse estimator (BOSE) \\cite{7544443} and Tensor-Based Bayesian Reconstruction (TBR) \\cite{c107}. The TBR model exploited the multi-dimensional block-sparsity of tensors, which was more consistent with the sparse model in HS KCS than the conventional CS methods. A Bayesian reconstruction algorithm was developed to achieve the decoupling of hyperparameters by a low-complexity technique. \n\n \n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_cs}\n\nAn HS data experiment is employed to validate the effectiveness of tensor-based models on HS CS with four different sample ratios i.e. $1\\%$, $5\\%$, $10\\%$, $20\\%$. The Reno data set selected for HS CS experiments is size of $150 \\times 150 \\times 100$. The randomly permuted Hadamard transform is adopted as the compressive operator. Tab. \\ref{tab:tab-cs} compares the reconstruction results by SLNTCS and JTenRe3DTV. They have quality decays with sample ratios decreasing, but SLNTCS obtains poorer results than JTenRe3DTV in lower sampling ratios. \n\nIn the light of visual comparison, one representative band in the sampling ratio $10\\%$ is presented in Fig. \\ref{fig:Visio-cs}. The basic texture information can be found in the results of two HS CS algorithms. 
As shown in the enlarged area, SLNTCS causes some artifacts, but JTenRe3DTV produces a more acceptable result with the smoothing white area than SLNTCS.\n\n\n \\begin{table}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different selected algorithms for HS CS.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|c|cccc}\n\\hline\nMethod & Index & $1\\%$ & $5\\%$ & $10\\%$ & $20\\%$ \\\\\n\\hline\nSLNTCS & MPSNR & 18.70 & 24.44 &27.72 & 32.14 \\\\\n& MSSIM & 0.3273 & 0.6593 & 0.8047 & 0.9159\\\\\n& ERGAS & 23.3411 & 12.1203 & 8.3119 & 5.0263\\\\\n& MSAD & 22.0.35 & 11.2031 & 7.6354 & 4.6003\\\\\n\\hline\nJTenRe3DTV & MPSNR & 27.91 & 34.54 & 36.28 & 37.41 \\\\\n& MSSIM & 0.8116 & 0.9443 & 0.9638 & 0.9709\\\\\n& ERGAS & 8.2422 & 4.0139 & 3.2990 & 2.9124\\\\\n& MSAD & 7.5545 & 3.5703 & 2.9233& 2.5723\\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-cs}\n\\end{spacing}\n\\end{table}\n\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=5.2cm]{Visio-cs-exp.pdf}\n\t\\end{center}\n\t\\caption[houston]{ Inpainting results by different methods under $10\\%$ sampling ratio. (a) Original Reno image, (b) SLNTCS, (c) JTenRe3DTV. }\n\t\\label{fig:Visio-cs}\n\\end{figure}\n\n\\subsection{Future challenges}\n\\label{sect:challenges_cs} \nThe low acquisition rate of CS inspires a novel development potentiality for HS RS. Many tensor-based methods have been proposed to achieve remarkable HS CS reconstruction results at a lower sampling ratio. However, here we briefly point out some potential challenges.\n\nSome novel tensor decomposition approaches need to be explored. In the past research works, Tucker decomposition has been successfully applied for HS CS. But with the development of the tensorial mathematical theory, many tensor decomposition models have been proposed and introduced in other HS applications. Therefore, how best to find more appropriate tensor decomposition for HS CS is a vital challenge. \n\nNoise degradation usually has a negative influence in HS CS sampling and reconstruction, which is hardly ignored in the real HS CS real imaging process. As a result, considering the noise interference and enhancing the robustness of noise in the CS process remain challenging.\n\n\\section{HS AD}\n\\label{sect:HSI-AD}\n\n\n HS AD aims to discover and separate the potential man-made objects from observed image scenes, which is typically constructive for defense and surveillance developments in RS fields, such as mine exploration and military reconnaissance. For instance, aircrafts in the suburb scene and vehicles in the bridge scene are usually referred to as anomalies or outliers. In Fig. \\ref{fig:Visio-AD}, AD can be regarded as an unsupervised two-class classification problem where anomalies occupy small areas compared with their surrounding background. The key to coping with this problem is to exploit the discrepancy between anomalies and their background. Anomalies commonly occur with low probabilities and their spectral signatures are quite different from neighbors.\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=7.5cm]{Visio-AD.pdf}\n\t\\end{center}\n\t\\caption[AD]{A schematic diagram of HS image anomaly detection. }\n\t\\label{fig:Visio-AD}\n\\end{figure}\n\nHS images containing two spatial dimensions and one spectral dimension are intrinsically considered as a three-order tensor. Tensor-based approaches have been gradually attaching attention for HS AD in recent years. 
Tucker decomposition is the first and essential type of tensor-decomposition methods used for HS AD. Therefore, in the following sections, we mainly focus on the Tucker decomposition-based methods and a few other types of tensor-based methods.\n\n\\subsection{Tensor decomposition-based HS AD methods}\n\\label{sect:TD_ad}\n\n$\\textbf{ (1) Tucker decomposition-based methods }$\n\n An observed HS image $\\mathcal{T}$ can be decomposed into two parts by Tucker decomposition, i.e.,\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{T} = \\mathcal{X} + \\mathcal{S}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathcal{X}$ is LR background tensor and $\\mathcal{S}$ is the sparse tensor consisting of anomalies. The Tucker decomposition for AD is formulated as the following optimization\n\\begin{equation}\n\t\\begin{split}\n\t\\label{eq:Tucker-AD}\n\t\\left\\{\\begin{array}{l}\n\\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3} \\\\\n\\mathcal{S}=\\mathcal{T}-\\mathcal{X}\n\\end{array}\\right.\n\\end{split} \n\\end{equation}\n\nMany Tucker decomposition-based variants have been studied to improve the AD accuracy. Li \\textit{et al}. \\cite{c110} proposed a LR tensor decomposition based AD (LTDD) model, which employed Tucker decomposition to obtain the core tensor of the LR part. The final spectral signatures of anomalies is extracted by an unmixing approach. After the Tucker decomposition processing, Zhang \\textit{et al}. \\cite{c111} utilized a reconstruction-error-based method to eliminate the background pixels and remain the anomaly information. Zhu \\textit{et al}. \\cite{c139} advocated a weighting strategy based on tensor decomposition and cluster weighting (TDCW). In TDCW, Tucker decomposition was adopted to obtain the anomaly part. K-means clustering and segmenting, were assigned as post-processing steps to achieve a performance boost.\nSong \\textit{et al}. \\cite{c113} proposed a tensor-based endmember extraction and LR decomposition (TEELRD) algorithm, where Tucker decomposition and k-means are employed to construct a high-quality dictionary.\n\nBased on Tucker decomposition, Qin \\textit{et al}. \\cite{c115} proposed a LR and sparse tensor decomposition (LRASTD). The LRASTD can be formulated as\n\\begin{equation}\n\t\\begin{split}\n\t\\label{eq:LRASTD}\n&\t\\min_{\\mathcal{A},\\mathcal{S}} || \\mathcal{A} ||_* + \\beta ||\\mathcal{A} ||_1 + \\lambda || \\mathcal{S} ||_{2,2,1}\\\\\n & {\\rm s.t.} \\quad\t\\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3} + \\mathcal{S}\n\\end{split} \n\\end{equation}\nwhere $|| \\mathcal{S} ||_{2,2,1} = \\sum^z_{k=1} || \\mathcal{S}(:,:,k) ||_F$.\n\n$\\textbf{ (2) Other Tensor-based methods }$\n\n Chen \\textit{et al}. \\cite{c112} presented a TPCA-based pre-processing method to separate a principal component part and a residual part. Li \\textit{et al}. \\cite{c114} proposed a prior-based tensor approximation (PTA) approach, where the background was constrained by a truncated nuclear norm (TRNN) regularization and a spatial TV. The proposed PTA can be expressed as\n \\begin{equation}\n \\begin{aligned}\n&\\arg \\min _{\\mathcal{X}, \\mathcal{S}} \\frac{1}{2}\\left(\\left\\|\\mathbf{D}_{H} \\mathbf{X}_{(1)}\\right\\|_{F}^{2}+\\left\\|\\mathbf{D}_{v} \\mathbf{X}_{(2)}\\right\\|_{F}^{2}\\right)+\\alpha\\left\\|\\mathbf{X}_{3}\\right\\|_{r}+\\beta\\left\\|\\mathbf{S}_{3}\\right\\|_{2,1} \\\\\n&\\text { s.t. 
}\\left\\{\\begin{array}{l}\n\\mathcal{Y}=\\mathcal{X}+\\mathcal{S} \\\\\n\\mathcal{X}_{1}=\\operatorname{unfold}_{1}(\\mathcal{X}) \\\\\n\\mathcal{X}_{2}=\\operatorname{unfold}_{2}(\\mathcal{X}) \\\\\n\\mathcal{X}_{3}=\\operatorname{unfold}_{3}(\\mathcal{X}) \\\\\n\\mathcal{S}_{3}=\\operatorname{unfold}_{3}(\\mathcal{S})\n\\end{array}\\right.\n\\end{aligned}\n \\end{equation}\nwhere $\\mathbf{D}_{H} \\in \\mathbb{R}^{(h-1) \\times h}$ and $\\mathbf{D}_{v} \\in \\mathbb{R}^{(v-1) \\times v}$ are defined as\n$\\mathbf{D}_{H}=\\left[\\begin{array}{ccccc}\n1 & -1 & & & \\\\\n& 1 & -1 & & \\\\\n& & \\ddots & \\ddots & \\\\\n& & & 1 & -1\n\\end{array}\\right]$\\\\\nand\n$\\mathbf{D}_{v}=\\left[\\begin{array}{ccccc}\n1 & -1 & & & \\\\\n& 1 & -1 & & \\\\\n& & \\ddots & \\ddots & \\\\\n& & & 1 & -1\n\\end{array}\\right]$\n\nWang \\textit{et al}. \\cite{minghuaTC} proposed a novel tensor LR and sparse representation method with a PCA pre-processing step, namely PCA-TLRSR, which was the first time to expand the concept of Tensor LR representation in HS AD and exploited the 3-D inherent structure of HS images. Assisted by the multi-subspace learning of the tensor domain and the sparsity constraint along the joint spectral-spatial dimensions, the LR background and anomalies are separated in a more accurate manner.\n\n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_ad}\n\nHerein, we take an example of PTA on three HS data sets for AD. The San Diego data set \\cite{c216} was captured by the Airborne Visible\/Infrared Imaging Spectrometer (AVIRIS) sensor over the San Diego airport, CA, USA. Three flights are obviously observed in the selected region with the size $100 \\times 100 \\times 189$. The Airport-1 and Airport-2 \\cite{c217} were also acquired by AVIRIS sensor. As shown in the second column of Fig. \\ref{fig:AD-exp}, flights are regarded as anomalies in different airport scenes. \n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=8.5cm, angle = 90]{AD-EXP2.pdf}\n\t\\end{center}\n\t\\caption[AD]{Original HS images, ground-truth maps, detection maps, and AUC curves of PTA on different data sets. }\n\t\\label{fig:AD-exp}\n\\end{figure}\n\nAs the detection maps of Fig. \\ref{fig:AD-exp} display, most of flights are clearly detected by PTA. Except for the visual observation of resulted anomaly maps, the receiver operating characteristic (ROC) curve \\cite{c218} and the area under the ROC curve (AUC) \\cite{c219} are employed to quantitatively assess the detection accuracy of the tensor-based method. The ROC curve plots the varying relationship of the probability of detection (PD) and false alarm rate (FAR) for extensive possible thresholds. The area under this curve is calculated as AUC, whose ideal result is 1. PTA is capable to achieve a high detection rate and low FAR. The AUC values derived from PTA is higher than 0.9.\n\n\\subsection{Future challenges}\n\\label{sect:challenges_ad} \n\nTucker decomposition-based models have been well developed by researchers, yet other types of tensor decompositions are rarely investigated in the HS AD community. In other words, how best to introduce other novel tensor decomposition frameworks into AD is a key challenge.\n\nAlthough most anomalies are successfully detected, some background pixels like roads and roofs usually remain. The more complex background and the fewer targets make the difficulty of AD increase. 
To solve this problem, researchers need to explore multiple features and suitable regularizations.\n\nThe background and anomalies are often modeled as the LR part and the sparse part of HS images. The 3-D inherent structure of HS images is exploited by tensor decomposition-based methods. The spatial sparsity and the 3-D inherent structure of anomalies should be considered by a consolidated optimization strategy.\n\n\n \\section{HS-MS fusion}\n\\label{sect:HSI-SR}\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=8cm]{SR-fr2.pdf}\n\t\\end{center}\n\t\\caption[unmixing]{Illustration of HS and MS fusion. }\n\t\\label{fig:Visio-sr-fr}\n\\end{figure*}\n\nHS images provide abundant and varied spectral information, yet hardly possess high spatial resolution owing to the limitations of sun irradiance \\cite{c140} and imaging systems \\cite{c141}. In contrast, MS images are captured with low spectral resolution and high spatial resolution. HS and MS fusion aims to improve the spatial resolution of HS images with the assistance of MS images and generate final HS images with high spatial resolution and the original spectral resolution. The high-quality fused HS images benefit the in-depth recognition and insight of materials, which contributes to many real RS applications, such as object classification and change detection of wetlands and farms \\cite{7182258,7895167,8961105,gu2019superpixel,hong2021graph,chen2022fccdn,hong2022Spectral}.\n\nFig. \\ref{fig:Visio-sr-fr} depicts an HS and MS fusion process to generate an HR-HS image. Suppose that a desired high-spatial-spectral resolution HS (HR-HS) image, a low-resolution HS (LR-HS) image, and a high-resolution MS (HR-MS) image are denoted by $\\mathcal{X} \\in \\mathbb{R}^{ H \\times V \\times B}$, $\\mathcal{Y} \\in \\mathbb{R}^{ h \\times v \\times B}$ and $\\mathcal{Z} \\in \\mathbb{R}^{ H \\times V \\times b}$ ($H \\gg h$, $V \\gg v$, $B \\gg b$), respectively. An LR-HS image is seen as a spatially blurred and downsampled version of $\\mathcal{X}$, and an HR-MS image is the spectrally downsampled version of $\\mathcal{X}$. The two degradation models are expressed as follows\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR}\n \\mathbf{Y}_{(3)} = \\mathbf{X}_{(3)} \\mathbf{R} + \\mathbf{N}_h\n \\end{aligned}\n\\end{equation}\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR2}\n \\mathbf{Z}_{(3)} = \\mathbf{G} \\mathbf{X}_{(3)} + \\mathbf{N}_m\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{R} = \\mathbf{B} \\mathbf{K}$, $\\mathbf{B}$ denotes a convolution blurring operation, $\\mathbf{K}$ is a spatial downsampling matrix, and $\\mathbf{G}$ represents the spectral-response function of an MS image sensor, which can be regarded as a spectral downsampling matrix. $\\mathbf{N}_h$ and $\\mathbf{N}_m$ stand for noise.\n\nAccording to references \\cite{c142,c143,c144}, $\\mathbf{R}$ and $\\mathbf{G}$ are assumed to be given in advance when solving the HS SR problem \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR-pro}\n \\min_{\\mathcal{X}} ||\\mathbf{Y}_{(3)} - \\mathbf{X}_{(3)} \\mathbf{R} ||^2_F +\n ||\\mathbf{Z}_{(3)} - \\mathbf{G} \\mathbf{X}_{(3)} ||^2_F + \\tau f ({\\mathcal{X}})\n \\end{aligned}\n\\end{equation}\nwhere the first and second F-norm terms are data-fidelity terms with respect to models (\\ref{eq:HSR}) and (\\ref{eq:HSR2}), and $f ({\\mathcal{X}})$ represents the prior regularization pertinent to the desired properties of the HR-HS image ${\\mathcal{X}}$. 
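\nTo make the two degradation models in Eqs. (\\ref{eq:HSR}) and (\\ref{eq:HSR2}) concrete, the following NumPy/SciPy sketch synthesizes an LR-HS and an HR-MS observation from an HR-HS cube. It is our own illustration under simplifying assumptions: the blur $\\mathbf{B}$ is taken as a Gaussian filter, $\\mathbf{K}$ as regular decimation, and the spectral-response matrix $\\mathbf{G}$ as a random nonnegative matrix with rows summing to one (a real sensor provides its own SRF):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import gaussian_filter\n\ndef degrade(X, ratio=4, sigma=2.0, srf=None, rng=None):\n    # X: HR-HS cube of size H x V x B.\n    rng = np.random.default_rng(0) if rng is None else rng\n    # LR-HS observation (Eq. HSR): spatial blur, then downsampling.\n    Xb = gaussian_filter(X, sigma=(sigma, sigma, 0.0))\n    Y = Xb[::ratio, ::ratio, :]\n    # HR-MS observation (Eq. HSR2): spectral response per pixel.\n    if srf is None:\n        srf = rng.random((6, X.shape[2]))      # placeholder b x B SRF\n        srf /= srf.sum(axis=1, keepdims=True)\n    Z = np.tensordot(X, srf.T, axes=([2], [0]))  # H x V x b\n    return Y, Z\n\\end{verbatim}\nNoise terms $\\mathbf{N}_h$ and $\\mathbf{N}_m$ can be added to \\texttt{Y} and \\texttt{Z} to complete the simulation.\n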
\nIn the following subsections, we review current advanced HS SR methods from two categories: tensor decomposition models and prior-based tensor decomposition models.\n\n\n\\subsection{Tensor Factorizations for SR}\n\\label{sect:TDforSR} \n\n\\subsubsection{CP Decomposition Model}\n\n Initially, Kanatsoulis \\textit{et al}. \\cite{c145} employed a coupled CP decomposition framework for HS SR. The CP decomposition of an HR-HS tensor $\\mathcal{X}$ can be expressed as\n \\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-cp}\n {\\mathcal { X }} &=\\sum_{r=1}^{R} \\mathbf{a}_{r} \\circ \\mathbf{b}_{r} \\circ \\mathbf{c}_{r} \\\\\n & =\\llbracket \\mathbf{A}, \\mathbf{B}, \\mathbf{C} \\rrbracket\\\\\n \\end{aligned}\n \\end{equation}\nwhere the latent LR factors are $\\mathbf{A}= [\\mathbf{a}_1,...,\\mathbf{a}_R ]$, $\\mathbf{B}= [\\mathbf{b}_1,...,\\mathbf{b}_R ]$, and $\\mathbf{C}= [\\mathbf{c}_1,...,\\mathbf{c}_R ]$.\n In \\cite{c145}, the coupled CP decomposition made the following assumption\n \\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-cp-assum}\n {\\mathcal { Y }} \n =\\llbracket \\mathbf{P}_1 \\mathbf{A}, \\mathbf{P}_2 \\mathbf{B}, \\mathbf{C} \\rrbracket;\n {\\mathcal { Z }} \n =\\llbracket \\mathbf{A}, \\mathbf{B}, \\mathbf{P}_3 \\mathbf{C} \\rrbracket\n \\end{aligned}\n \\end{equation}\n where $\\mathbf{P}_1 \\in \\mathbb{R}^{h \\times H} $, $\\mathbf{P}_2 \\in \\mathbb{R}^{v \\times V} $, and $\\mathbf{P}_3 \\in \\mathbb{R}^{b \\times B} $ are three linear degradation matrices. The identifiability of HS SR based on the algebraic properties of CP decomposition is guaranteed under relaxed conditions. However, the LR properties of different dimensions are treated equally, which is rarely suitable for real HS SR. Subsequently, Kanatsoulis \\textit{et al}. \\cite{c151} proposed an SR cube algorithm (SCUBA) that combined the advantages of CP decomposition and matrix factorization. Xu \\textit{et al}. \\cite{c152} improved the CP decomposition-based method by adding a non-local tensor extraction module. \n \n\n \n \n\n\\subsubsection{Tucker Decomposition Model}\n\n Li \\textit{et al}. \\cite{c146} extended a coupled sparse tensor factorization (CSTF) approach, in which the fusion problem was transformed into the estimation of dictionaries along three modes and the corresponding sparse core tensor. A tensor $\\mathcal{X}$ can be decomposed by Tucker decomposition as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-tucker}\n \\mathcal{X}=\\mathcal{W} \\times_{1} \\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} \\mathbf{C}\n \\end{aligned}\n\\end{equation}\n\nThe LR-HS and HR-MS degradation models are then rewritten as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-tucker2}\n \\mathcal{Y}&=\\mathcal{W} \\times_{1}(\\mathbf{P}_{1} \\mathbf{A}) \\times_{2}(\\mathbf{P}_{2} \\mathbf{B}) \\times_{3} \\mathbf{C} \\\\\n &=\\mathcal{W} \\times_{1}\\mathbf{A}^* \\times_{2}\\mathbf{B}^*\\times_{3} \\mathbf{C} \\\\\n \\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{Z}&=\\mathcal{W} \\times_{1}\\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} (\\mathbf{P}_{3}\\mathbf{C} )\\\\\n &=\\mathcal{W} \\times_{1}\\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} \\mathbf{C}^*\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{A}^* = \\mathbf{P}_{1} \\mathbf{A}$, $\\mathbf{B}^* = \\mathbf{P}_{2} \\mathbf{B}$, and $\\mathbf{C}^* = \\mathbf{P}_{3}\\mathbf{C}$ are the downsampled dictionaries along the three modes. 
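\nTo make the coupled structure of Eq. (\\ref{eq:sr-tucker2}) concrete, the sketch below generates an LR-HS and an HR-MS observation from shared Tucker factors via mode products. It is our own illustration (not the CSTF implementation); it assumes the TensorLy library, uses arbitrary small sizes, and recall that TensorLy numbers modes from 0 rather than 1:\n\\begin{verbatim}\nimport numpy as np\nfrom tensorly.tenalg import multi_mode_dot\n\nH, V, B = 120, 120, 100       # HR-HS size (illustrative)\nh, v, b = 30, 30, 6           # LR-HS / HR-MS sizes\nnA, nB, nC = 40, 40, 10       # dictionary (factor) sizes\n\nrng = np.random.default_rng(0)\nW = rng.random((nA, nB, nC))  # core tensor (sparse in CSTF)\nA = rng.random((H, nA))\nBm = rng.random((V, nB))\nC = rng.random((B, nC))\nP1 = rng.random((h, H))\nP2 = rng.random((v, V))\nP3 = rng.random((b, B))\n\nX = multi_mode_dot(W, [A, Bm, C])            # HR-HS, Eq. (sr-tucker)\nY = multi_mode_dot(W, [P1 @ A, P2 @ Bm, C])  # LR-HS, Eq. (sr-tucker2)\nZ = multi_mode_dot(W, [A, Bm, P3 @ C])       # HR-MS\n\\end{verbatim}\nThe same factors $(\\mathbf{A}, \\mathbf{B}, \\mathbf{C})$ appear in all three tensors, which is exactly the coupling that the CSTF objective below exploits.\n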
Taking the sparsity of the core tensor $\\mathcal{W}$ into account, Li \\textit{et al}. formulated the fusion problem as follows\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-CSTF}\n \\min_{ \\mathbf{A}, \\mathbf{B}, \\mathbf{C}, \\mathcal{W}} &|| \\mathcal{Y}-\\mathcal{W} \\times_{1} \\mathbf{A}^* \\times_{2}\\mathbf{B}^*\\times_{3} \\mathbf{C} ||^2_F + \\\\ \n &|| \\mathcal{Z} - \\mathcal{W} \\times_{1}\\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} \\mathbf{C}^* ||^2_F + \\lambda || \\mathcal{W} ||_1\n \\end{aligned}\n\\end{equation}\n\nThe $l_1$ norm in Eq. (\\ref{eq:sr-CSTF}) was replaced by an $l_2$ norm in \\cite{c147}. Pr\u00e9vost \\textit{et al}. \\cite{c153} assumed that HR-HS images possess an approximately low multilinear rank and developed an SR algorithm based on coupled Tucker tensor approximation (SCOTT) with HOSVD. In the coupled Tucker decomposition and BT decomposition frameworks (named CT-STAR and CB-STAR) \\cite{c155}, an additive variability term was introduced to study the general identifiability with theoretical guarantees. Zare \\textit{et al}. \\cite{c170} offered a coupled non-negative Tucker decomposition (CNTD) method to constrain the nonnegativity of the two Tucker spectral factors.\n\n $\\textbf{Non-local Tucker decomposition:}$ Wan \\textit{et al}. \\cite{c154} grouped 4-D tensor patches using the spectral correlation and similarity under Tucker decomposition. Dian \\textit{et al}. \\cite{c156} offered a non-local sparse tensor factorization (NLSTF) method, which induced core tensors and corresponding dictionaries from HR-MS images, and spectral dictionaries from LR-HS images. A modified NLSF\\_SMBF version was developed for the semi-blind fusion of HS and MS images \\cite{c157}. However, the dictionary and the core tensor for each cluster are estimated separately by NLSTF and NLSF\\_SMBF. \n\n $\\textbf{Tucker decomposition + TV:}$ Xu \\textit{et al}. \\cite{c160} presented a Tucker decomposition model with a unidirectional TV. Wang \\textit{et al}. \\cite{c158} advocated a non-local LR tensor decomposition and SU based approach to leverage spectral correlations, non-local similarity, and spatial-spectral smoothness. \n\n\n\n $\\textbf{Tucker decomposition + Manifold:}$ Zhang \\textit{et al}. \\cite{c159} suggested a spatial\u2013spectral-graph-regularized LR tensor decomposition (SSGLRTD). In SSGLRTD, the spatial and spectral manifolds between HR-MS and LR-HS images are assumed to be similar to those embedded in HR-HS images. Bu \\textit{et al}. \\cite{c161} presented a graph Laplacian-guided coupled tensor decomposition (gLGCTD) model that incorporated global spectral correlation and complementary submanifold structures into a unified framework.\n\n\\subsubsection{BT Decomposition Model}\n \n\nZhang \\textit{et al}. \\cite{c163} observed that the identifiability guarantees in \\cite{c145,c146} come at the cost of a lack of physical meaning for the latent factors under CP and Tucker decomposition. Therefore, they employed an alternative coupled nonnegative BT tensor decomposition (NN-CBTD) approach for HS SR. The NN-CBTD model with rank-($L_r,L_r,1$) for HS SR is given as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:NNCBTD}\n \\min _{\\mathbf{A}, \\mathbf{B}, \\mathbf{C}} &\\|\\mathcal{Y}-\\sum_{r=1}^{R}(\\mathbf{P}_{1} \\mathbf{A}_{r}\\left(\\mathbf{P}_{2} \\mathbf{B}_{r}\\right)^{\\top}) \\circ \\mathbf{c}_{r}\\|_{F}^{2} \\\\\n&+\\|\\mathcal{Z}-\\sum_{r=1}^{R}(\\mathbf{A}_{r} \\mathbf{B}_{r}^{\\top}) \\circ \\mathbf{P}_{3} \\mathbf{c}_{r}\\|_{F}^{2} \\\\\n\\text { s. t. 
\\text { s. t. } &\\mathbf{A} \\geq \\mathbf{0}, \\mathbf{B} \\geq \\mathbf{0}, \\mathbf{C} \\geq \\mathbf{0}\n \\end{aligned}\n\\end{equation}\n\nCompared with the conference version \\cite{c163}, the journal version \\cite{c138} additionally provided a more detailed recoverability analysis and a more flexible decomposition framework by using an LL1 model and a block coordinate descent algorithm. Jiang \\textit{et al}. \\cite{c164} introduced a graph-Laplacian-based manifold regularization into the CBTD framework.\n\n\n\\subsubsection{TT Decomposition Model}\n\nDian \\textit{et al}. \\cite{c148} proposed a low tensor-train rank (LTTR)-based HS SR method. An LTTR prior was designed for learning correlations among the spatial, spectral, and non-local modes of 4-D FBP patches. The HS SR optimization problem can be written as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:LTTR}\n \\min _{\\mathbf{X}_{(3)}}\\|\\mathbf{Y}_{(3)}-\\mathbf{X}_{(3)} \\mathbf{R}\\|_{F}^{2}+\\|\\mathbf{Z}_{(3)}-\\mathbf{G X}_{(3)}\\|_{F}^{2}+\\tau \\sum_{k=1}^{K}\\|\\mathcal{X}_{k}\\|_{\\mathrm{TT}}\n\\end{aligned}\n\\end{equation}\nwhere $K$ denotes the number of clusters, and the weighted TT rank of the $k$-th grouped tensor $\\mathcal{X}_{k}$ is defined as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:LTTR-tt}\n\\| \\mathcal{X}_{k} \\|_{\\mathrm{TT}}=\\sum_{t=1}^{3} \\alpha_{t} \\operatorname{LS}(\\mathbf{X}_{k \\langle t\\rangle})\n\\end{aligned}\n\\end{equation} \nwith $\\mathrm{LS}(\\mathbf{A})=\\sum_{i} \\log (\\sigma_{i}(\\mathbf{A})+\\varepsilon)$ for a small positive value $\\varepsilon$.\n\nLi \\textit{et al}. \\cite{c162} presented a nonlocal LR tensor approximation and sparse representation (NLRSR) method that captured the non-local similarity and spatial-spectral correlation through a TT rank constraint on 4-D non-local patches.\n\n\n\\subsubsection{TR Decomposition Model}\n The TR decomposition of an HR-HS tensor $\\mathcal{X} \\in \\mathbb{R}^{H \\times V \\times B}$ is represented as\n \\begin{equation}\n\\begin{aligned}\n\\label{eq:SR-TR-x}\n\\mathcal{X}=\\boldsymbol{\\Phi} [\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)}]\n\\end{aligned}\n\\end{equation}\nwhere the three TR factors are denoted by $\\mathcal{G}^{(1)} \\in \\mathbb{R}^{r_{1} \\times H \\times r_{2}}$, $\\mathcal{G}^{(2)} \\in \\mathbb{R}^{r_{2} \\times V \\times r_{3}}$, and $\\mathcal{G}^{(3)} \\in \\mathbb{R}^{r_{3} \\times B \\times r_{1}}$ with TR ranks $r = [r_1, r_2, r_3]$. Based on the TR theory, an LR-HS image is rewritten as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:SR-TR-y}\n\\mathcal{Y}=\\boldsymbol{\\Phi}[\\mathcal{G}^{(1)} \\times_{2} \\mathbf{P}_{1}, \\mathcal{G}^{(2)} \\times_{2} \\mathbf{P}_{2}, \\mathcal{G}^{(3)}]\n\\end{aligned}\n\\end{equation}\nand an HR-MS image can be expressed as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:SR-TR-z}\n\\mathcal{Z}=\\boldsymbol{\\Phi}[\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)} \\times_{2} \\mathbf{P}_{3}]\n\\end{aligned}\n\\end{equation}\n\nHe \\textit{et al}. \\cite{c150} presented a coupled TR factorization (CTRF) model and a modified CTRF version (NCTRF) with a nuclear norm regularization of the third\/spectral TR factor.\n
The NCTRF model is formulated as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:NCTRF}\n\\begin{aligned}\n&\\min _{\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)}}\\|\\mathcal{Y}-\\boldsymbol{\\Phi} [\\mathcal{G}^{(1)} \\times_{2} \\mathbf{P}_{1}, \\mathcal{G}^{(2)} \\times_{2} \\mathbf{P}_{2}, \\mathcal{G}^{(3)} ] \\|_{F}^{2} \\\\\n&+\\|\\mathcal{Z}-\\boldsymbol{\\Phi} [\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)} \\times_{2} \\mathbf{P}_{3} ] \\|_{F}^{2}+\\lambda\\|\\mathbf{G}_{(2)}^{(3)}\\|_{*}\n\\end{aligned}\n\\end{aligned}\n\\end{equation}\n\nEq. (\\ref{eq:NCTRF}) becomes the CTRF model when removing the last term. In \\cite{c150}, the benefit of TR decomposition for SR is elaborated via the theoretical and experimental proof related to a low-dimensional TR subspace.\nThe relationship between the TR spectral factors of LR-HS images and HR-MS images were explored in \\cite{c149} with a high-order representation of the original HS image. The spectral structures of HR-HS images were kept to be consistent with LR-HS images by a graph-Laplacian regularization. \nChen \\textit{et al}. \\cite{c169} presented a factor-smonthed TR decomposition (FSTRD) to capture the spatial-spectral continuity of HR-HS images.\nBased on the basic CTRF model, Xu \\textit{et al}. \\cite{c171} advocated LR TR decomposition based on TNN (LRTRTNN), which exploited the LR properties of non-local similar patches and their TR factors.\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\\includegraphics[height=12cm]{SR-result.pdf}\n\t\\end{center}\n\t\\caption[sr]{The fusion results of five different HS-MS fusion methods. (a) REF, (b) Blind-STEREO, (c) CSTF, (d) LTMR, (e) LTTR, and (f) SC-LL1. }\n\t\\label{fig:Visio-sr-re}\n\\end{figure*}\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different methods for HS-MS fusion.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccccc}\n\\hline\nIndex & Blind-STEREO &\tCSTF & LTTR & LTMR & SC-LL1 \\\\\n\\hline\n MPSNR & 53.49 & $\\mathbf{54.28}$ &\t38.11 & 39.58 & 54.13\\\\\n ERGAS &0.3584 & $\\mathbf{0.3317}$ & 1.9761 & 1.5851\t& 0.3076\\\\ \n SAM & 1.0132\t& 0.8841 & 3.7971 & 3.0495 & $\\mathbf{0.8213}$ \\\\\n RMSE &0.0027\t& 0.0025 & 0.0165 & 0.0132\t& $\\mathbf{0.0023}$\\\\\nCC &0.9993\t& 0.9994 & 0.9819 & 0.9870\t& $\\mathbf{0.9995}$\\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-fusion}\n\\end{spacing}\n\\end{table*}\n\n\\subsubsection{Tensor Rank Minimization for SR}\n\\label{sect:TRMforSR} \n\nBased on t-SVD, Dian \\textit{et al}. \\cite{c165} developed a subspace based low tensor multi-rank (LTMR) that induced an HR-HS image by spectral subspace and corresponding coefficients of grouped FBPs. The specific LTMR model is expressed as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:LTMR}\n \\min_{\\mathcal{X}} ||\\mathbf{Y}_{(3)} - \\mathbf{X}_{(3)} \\mathbf{R} ||^2_F +\n ||\\mathbf{Z}_{(3)} - \\mathbf{G} \\mathbf{X}_{(3)} ||^2_F + \\tau \\sum_{k=1}^{K}\\|\\mathcal{X}_{k}\\|_{\\mathrm{TMR}}\n \\end{aligned}\n\\end{equation}\nwhere the multi-rank of tensor $\\mathcal{X}$ is defined as $\\|\\mathcal{X}\\|_{\\mathrm{TMR}}=\\frac{1}{B_{3}} \\sum_{b=1}^{B_{3}} \\operatorname{LS}(\\hat{\\mathcal{X}}(:,:,b))$, and $B$ is dimension number of the third mode of $\\mathcal{X}$. To speed up the estimation of LTMR, Long \\textit{et al}. \\cite{c166} introduced the concept of truncation value and obtain a fast LTMR (FLTMR) algorithm. Xu \\textit{et al}. 
\\cite{c8} presented a non-local patch tensor sparse representation (NPTSR) model that characterized the spectral and spatial similarities among non-local HS patches by the t-product based tensor sparse representation.\n\nConsidering HS image degradation by noise, some researchers have studied the noise-robust HS SR problem. Li \\textit{et al}. \\cite{c168} proposed a TV regularized tensor low-multilinear-rank (TV-TLMR) model to improve the performance of the mixed-noise-robust HS SR task. Liu \\textit{et al}. \\cite{c167} transformed the HS SR problem into a convex TTN optimization, which permitted an SR process robust to HS image striping. \n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_sr}\n\nIn this section, we select five representative tensor decomposition-based HS-MS fusion approaches: a Tucker decomposition-based method, i.e., CSTF \\cite{c146}; a CP decomposition-based method, i.e., Blind-STEREO \\cite{c145}; a TT decomposition-based method, i.e., LTTR \\cite{c148}; a BT decomposition-based method, i.e., SC-LL1 \\cite{c138}; and a tensor singular value decomposition-based method, i.e., LTMR \\cite{c165}.\n\nThe quality assessment is conducted within a simulation study following Wald's protocol \\cite{c220}. One HS RS data set is selected for the data fusion, i.e., the University of Houston campus scene used for the 2018 IEEE GRSS Data Fusion Contest. The original data is acquired by the ITRES CASI 1500 HS camera, covering a 380-1050 nm spectral range with 48 bands at a 1-m GSD. A sub-image of $400 \\times 400 \\times 46 $ is chosen as the ground truth after discarding some noisy bands. The input HR-MS image is generated from the reference image using the spectral response of WorldView-2, and the input LR-HS image is obtained via a Gaussian blurring kernel whose size equals five. Five quantitative metrics are used to assess the quality of the reconstructed HR-HS image, including MPSNR, ERGAS, root-mean-square error (RMSE), spectral angle mapper (SAM), and cross-correlation (CC). SAM measures the spectral angles between the reconstructed HR-HS image and the reference image, and smaller SAM values correspond to better performance. CC is a score between 0 and 1, where 1 represents the best estimation result. \n\n\n\n\nFig. \\ref{fig:Visio-sr-re} presents the reconstructed false-color images, enlarged local images, SAM error heatmaps, and mean relative absolute error (MRAE) heatmaps of the five HS-MS fusion methods. From Fig. \\ref{fig:Visio-sr-re}, all five methods provide good spatial reconstruction results. However, LTMR and LTTR produce severe spectral distortions at the edges of objects. In Tab. \\ref{tab:tab-fusion}, the conclusions of the quantitative evaluation are consistent with the visual comparison. In other words, LTTR and LTMR perform poorly in terms of spectral reconstruction quality. The other three methods show a competitive ability in HS-MS fusion.\n
Especially, CSTF gains the best MPSNR and ERGAS scores, and SC-LL1 achieves the best SAM, RMSE, and CC values among the competing approaches.\n\n\n\n\n\n\n\n\n\\subsection{Future challenges}\n\\label{sect:challenges_sr} \n\nThough tensor decomposition-based HS-MS fusion technology has been promoted rapidly in recent years and shows a promising reconstruction ability due to its strong exploitation of spatial-spectral structure information, a number of challenges remain.\n\n\n$ \\textbf{Non-registered HS-MS fusion}$: Tensor decomposition-based HS-MS fusion methods focuses on the pixel-level image fusion, which implies that image registration between two input modalities is a necessary prerequisite and the fusion quality heavily depends on the registration accuracy. However, most of the current methods pay more attention to the follow-up fusion step, ignoring the importance of registration. As a challenging task, image registration handles the inputs of two modalities acquired from different platforms and times. In the future, efforts should be made to accomplish non-registered HS-MS fusion tasks.\n\n\n\n$ \\textbf{Blind HS-MS fusion}$: Existing tensor decomposition-based HS-MS fusion methods contribute to the appropriate design of handcrafted priors to derive desired reconstruction results. However, the degradation models are often given without the estimation of real PSF and spectral response function in most of tensor-based methods. It is intractable to obtain precisely the degradation functions of real cases due to the uncertainty of sensor degradation.\nHow to devise blind HS-MS fusion methods with unknown degradation function is a desirable challenge. \n\n\n$\\textbf{Inter-image variability}$: The different times or platforms of two HS and MS modalities lead to the discrepancy, referring to the inter-image variability. However, tensor decomposition-based approaches usually assume that two modalities are acquired under the same condition, and hence ignore the spectral and spatial variability that usually happens in practice. Taking the inter-image variability phenomenon into consideration when modeling the degradation process is a key challenge for future researches.\n\n\n\n\n\n\\section{HS SU}\n\\label{sect:HSI-unmixing}\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=7cm]{unmixing_fr.pdf}\n\t\\end{center}\n\t\\caption[unmixing]{Illustration of HS unmixing based on linear mixing model and nonlinear mixing model. (a) Linear mixing, (b) intimate mixture, (c) multilayered mixture. }\n\t\\label{fig:Visio-unmixing-fr}\n\\end{figure*}\n\nOwing to its acquired continuous abundance maps, SU has been widely solved the inversion problems of typical ground object parameters, such as vegetation index, surface temperature and water turbidity in several decades \\cite{sonnentag2007mapping,alcantara2009improving, deng2013estimating}, and has been successfully applied in some RS applications, such as forest monitoring and land cover change detection \\cite{hlavka1995unmixing}. In addition, due to the mixing phenomenon caused by heterogeneity and stratified distribution of ground objects, SU can effectively realize crop identification and monitoring \\cite{lobell2004cropland, iordache2014dynamic, chi2014spectral }.\n\nWhen the mixing scale is macroscopic and each incident light reaching sensors has interacted with just one material, the measured spectrum is usually regarded as a linear mixing, as shown in Fig. \\ref{fig:Visio-unmixing-fr} (a). 
However, due to the existence of nonlinear interactions in real scenarios, several physics-based approximations of nonlinear linear mixing model (NLMM) have been proposed, mainly covering two types of mixing assumptions: intimate mixture (Fig. \\ref{fig:Visio-unmixing-fr} (b)) and multilayered mixture (Fig. \\ref{fig:Visio-unmixing-fr} (c)).\n The former describes the interactions suffered by the surface composed of particles at a microscopic scale. The intimate mixture usually occurs in scenes containing sand or mineral mixtures and requires a certain kind of prior knowledge of the geometric positioning of the sensor to establish the mixture model. The latter characterizes the light reflectance of various surface materials at a macroscopic scale. The multilayered mixture usually occurs in scenes composed of materials with some height differences, such as forest, grassland, or rocks, containing many nonlinear interactions between the ground and the canopy. In general, the multilayered mixture consisting of more than two orders is ignored owing to its negligible interactions. For the second-order multilayered mixture model, the family of bilinear mixing models is usually adopted to solve the NLMM.\nDue to the low spatial resolution of sensors, many pixels mixed by different pure materials exist in HS imagery, which inevitably conceals useful information and hinders the high-level image processing. SU aims to separate the observed spectrum into a suite of basic components, also called endmembers, and their corresponding fractional abundances. \n\n\\subsection{Linear Mixing Model}\nWith the assumption of the single interaction between the incident light and the material, representative SU methods are based on the following linear mixing model (LMM) \\cite{c189,RenL2021A}:\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:LMM}\n \\mathbf{X} = \\mathbf{E} \\mathbf{A} + \\mathbf{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{X} \\in \\mathbb{R}^{z \\times hv}$, $\\mathbf{E} \\in \\mathbb{R}^{z \\times r}$, $\\mathbf{A} \\in \\mathbb{R}^{r \\times hv}$, and $\\mathbf{N} \\in \\mathbb{R}^{z \\times hv}$ denotes the observed unfolding HS matrix, the endmember matrix, abundance matrix, and additional noise, respectively. The LMM-based methods have drawn much attention due to their model simplicity and desirable performance \\cite{c205,c206,c207}. However, current LMM-based matrix factorization methods usually convert the 3-D HS cube into a 2-D matrix, leading to the loss of spatial information in the relative positions of pixels. Tensor factorization-based approaches have been dedicated to SU to overcome the limitation of LMM.\n\n\\subsubsection{CP or Tucker Decomposition Model}\n\nZhang \\textit{et al}. \\cite{c190,c191} first introduced nonnegative tensor factorization (NTF) into SU via CP decomposition. However, this NTF-SU method hardly considers the relationship between LMM and NTF, giving rise to the lack of physical interpretation. \nImbiriba \\textit{et al}. \\cite{c212} considered the underlying variability of spectral signatures and developed a flexible approach, named unmixing with LR tensor regularization algorithm accounting for EM variability (ULTRA-V). The ranks of the abundance tensor and the endmember tensor were estimated with only two easily adjusted parameters.\nSun \\textit{et al}. 
\\cite{c208} first introduced Tucker decomposition for blinding unmixing and increased the sparse characteristic of abundance tensor.\n\n\\subsubsection{BT Decomposition Model}\n\nIn terms of tensor notation, an HS data tensor can be represented by sum of the outer products of an endmember (vector) and its abundance fraction (matrix). This enables a matrix-vector third-order tensor factorization that consists of $R$ component tensors:\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:SU-BTD}\n \\mathcal{X} & =\\sum_{r=1}^{R} \\mathbf{A}_{r} \\cdot \\mathbf{B}_{r}^{T} \\circ \\mathbf{c}_{r} +\\mathcal{N}\\\\\n &=\\sum_{r=1}^{R} \\mathbf{E}_{r} \\circ \\mathbf{c}_{r} +\\mathcal{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{E}_{r}$ calculated by the product of $\\mathbf{A}_{r}$ and $\\mathbf{B}_{r}^{T}$ denotes the abundance matrix, $\\mathbf{c}_{r}$ is the endmember vector, and $\\mathcal{N}$ represented the additional noise. Apparently, this matrix-vector tensor decomposition has the same form as BT decomposition, set up a straightforward link with the previously mentioned LMM model. Qian \\textit{et al}. \\cite{c192} proposed a matrix-vector NTF unmixing method, called MVNTF, by combining the characteristics of CPD and Tucker decomposition to extract the complete spectral-spatial structure of HS images. The MVNTF method for SU is formulated as \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:MVNTF}\n &\\min_{\\mathbf{E},\\mathbf{c}} || \\mathcal{X}- \\sum_{r=1}^{R} \\mathbf{E}_{r} \\circ \\mathbf{c}_{r} ||^2_F \\\\\n & {\\rm s.t.} \\mathbf{A}_{r} , \\mathbf{B}_{r}^{T}, \\mathbf{c}_{r} \\geq 0\n \\end{aligned}\n\\end{equation}\n\nMVNTF derived BT decomposition essentially and established a physical connection with LMM. Compared with NMF-based unmixing approaches, MVNTF can achieve better unmixing performance in most cases. Nevertheless, the abundance results extracted by MVNTF may be over-smoothing and lose detailed information due to the strict LR constraint of NTF. Various spatial and spectral structures, such as spatial-spectral smoothness and non-local similarity, are proven to tackle the problem of pure MVNTF. \n\nXiong \\textit{et al}. \\cite{c193} presented a TV regularized NTF (NTF-TV) method to make locally smooth regions share similar abundances between neighboring pixels and suppress the effect of noises. Zheng \\textit{et al}. \\cite{c194} offered a sparse and LR tensor factorization (SPLRTF) method to flexibly achieve the LR and sparsity characteristics of the abundance tensor. Feng \\textit{et al}. \\cite{c195} installed three additional constraints, namely sparseness, volume, and nonlinearity, into the MVNTF framework to improve the accuracies in impervious surface area fraction\/classification map. Li \\textit{et al}. \\cite{c196} integrated NMF into MVNTF by making full use of their individual merits to characterize the intrinsic structure information. Besides, a sparsity-enhanced convolutional operation (SeCoDe) method \\cite{c198} incorporated a 3-D convolutional operation into MVNTF for the blind SU task.\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\\includegraphics[height=12cm]{unmixing_exp.pdf}\n\t\\end{center}\n\t\\caption[unmixing1]{Abundance maps of different methods on the Urban data set. }\n\t\\label{fig:Visio-unmixing1}\n\\end{figure*}\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\\includegraphics[height=8cm]{Endmember_result.pdf}\n\t\\end{center}\n\t\\caption[unmixing2]{Endmember results of different methods on the Urban data set. 
(a) Asphalt, (b) Grass, (c) Tree, and (d) Roof. }\n\t\\label{fig:Visio-unmixing2}\n\\end{figure*}\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different methods for HS SU.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|c|cccc}\n\\hline\n\\multicolumn{2}{c|}{Method}&\tMVNTF & MVNTF-TV & SeCoDe & LR-NTF \\\\\n\\hline\n\\multirow{4}{*}{SAD} & Asphalt & 0.3738\t& 0.2606 & 0.2190 & $\\mathbf{0.1127}$ \\\\\n&Grass & 0.2572 &0.1722\t& $\\mathbf{0.0450}$ & 0.1349 \\\\\n&Tree\t&0.1474\t&0.1450 & 0.0854 &$\\mathbf{0.0632}$ \\\\\n&Roof\t&0.2825&\t0.2273&\t0.3861&\t$\\mathbf{0.0395}$\\\\\n\\cline{1-2}\n\\multicolumn{2}{c|}{MSAD} &\t0.2652&\t0.2013&\t0.1839&\n$\\mathbf{0.0876}$ \\\\\n\\multicolumn{2}{c|}{RMSE}\t&0.2638&\t0.2588\t&0.1453&$\\mathbf{\t0.1451}$ \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-unmixing}\n\\end{spacing}\n\\end{table*}\n\n\\subsubsection{Mode-$3$ Tensor Representation Model}\n\nUnder the definition of the tensor mode-$n$ multiplication, LMM (\\ref{eq:LMM}) is equivalent to\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:mul-LMM}\n \\mathcal{X} = \\mathcal{A} \\times_3 \\mathbf{E} + \\mathcal{N} \n \\end{aligned}\n\\end{equation}\nwhere $\\mathcal{A} \\in \\mathbb{R}^{h \\times v \\times R}$ denotes the abundance tensor containing $R$ endmembers. In \\cite{c197}, the non-local LR tensor and 3-DTV regularization of the abundance tensor were introduced further extract the spatial contextual information of HS data. With abundance nonnegative constraint (ANC) and abundance sum-to-one constraint (ASC) \\cite{c211}, the objective function of NLTR for SU is expressed as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:NLTR-SU}\n&\\min _{\\mathcal{A}} \\frac{1}{2}\\left\\|\\mathcal{X}-\\mathcal{A} \\times_{3} \\mathrm{E}\\right\\|_{F}^{2}+\\lambda_{\\mathrm{TV}}\\|\\mathcal{A}\\|_{\\mathrm{2DTV}}+\\lambda_{\\mathrm{NL}} \\sum_{k=1}^{K}\\left\\|\\mathcal{A}^{k}\\right\\|_{\\mathrm{NL}}\\\\\n&\\text { s.t. } \\mathcal{A} \\geq \\mathbf{0}, \\quad \\mathcal{A} \\times_{1} \\mathbf{1}_{P}=\\mathbf{1}_{h \\times v} \n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{1}_{P}$ is a $P$-dimensional vector of all 1, $\\mathbf{1}_{h \\times v} $ denotes a matrix of element 1, and the non-local LR regularization is defined as\n\\begin{equation}\n \\begin{aligned}\n\\|\\mathcal{A}^{k}\\|_{\\mathrm{NL}}=\\sum_{i=1}^{p} \\operatorname{LS}(\\mathbf{A}^{(i)})\n\\end{aligned} \n\\end{equation} \n\nANC, ASC, and the sparseness of abundance are often introduced into sparse unmixing models \\cite{Iordache2011Sparse,Iordache2014Collaborative}, which produces the endmembers and corresponding abundance coefficients by a known spectral library instead of extracting endmembers from HS data \\cite{RenL2022A,RenL2020A}. Sun \\textit{et al}. \\cite{c210} developed a weighted non-local LR tensor decomposition method for HS sparse unmixing (WNLTDSU) by adding collaborative sparsity and 2DTV of the endmember tensor into a weighted non-local LR tensor framework. The LR constraint and joint sparsity in the non-local abundance tensor were imposed in a non-local tensor-based sparse unmixing (NL-TSUn) algorithm \\cite{c209}.\n\n\n\n\n\\subsection{NonLinear Mixing Model}\nTo this end, numerous NLMMs have been studied in SU by modeling different order scatterings effects and producing more accurate unmixing results. 
\nRepresentative NLMMs \\cite{c199,c200,c201}, such as bilinear mixture models (BMMs), usually transform the HS cube into a 2-D matrix and thus share the same drawback as LMMs \\cite{c202,c203,yao2019nonconvex}. \n\n\nTo effectively address the nonlinear unmixing problem, Gao \\textit{et al}. \\cite{c204} expressed an HS cube $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ in tensor notation as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:NLMM1}\n \\mathcal{X}=\\mathcal{A} \\times_{3} \\mathbf{C}+\\mathcal{B} \\times_{3} \\mathbf{E}+\\mathcal{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{C} \\in \\mathbb{R}^{z \\times R}$, $\\mathcal{B} \\in \\mathbb{R}^{h \\times v \\times R(R-1)\/2}$, and $\\mathbf{E} \\in \\mathbb{R}^{z \\times R(R-1)\/2}$ represent the mixing matrix, the nonlinear interaction abundance tensor, and the bilinear interaction endmember matrix, respectively. The nonlinear unmixing method in \\cite{c204} was the first one based on NTF, taking advantage of the LR property of the abundance maps and nonlinear interaction maps, which validated the potential of tensor decomposition in nonlinear unmixing. \n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_SU}\n\nThe Urban HS data set, obtained by the HYDICE sensor over an urban area in Texas, USA, is selected for evaluating the performance of different unmixing methods, including MVNTF \\cite{c192}, MVNTF-TV \\cite{c193}, SeCoDe \\cite{c198}, and LR-NTF \\cite{c204}. For a fair comparison, the hyperspectral signal subspace identification by minimum error (HySime) \\cite{c221} and vertex component analysis (VCA) \\cite{c222} algorithms are adopted to determine the number of endmembers and the endmember initialization. The Urban data contains $307 \\times 307$ pixels and 210 bands ranging from 0.4 to 2.5 $\\mu$m. Due to water vapor and atmospheric effects, 162 bands remain after removing the affected channels. Four main materials in this scene are investigated, namely, $\\#1$ Asphalt, $\\#2$ Grass, $\\#3$ Tree, and $\\#4$ Roof. Two quantitative metrics are utilized to evaluate the extracted abundance and endmember results, namely RMSE and SAD. \n\nFor illustrative purposes, Fig. \\ref{fig:Visio-unmixing1} and Fig. \\ref{fig:Visio-unmixing2} display the extracted abundances and the corresponding endmember results of different tensor decomposition-based SU approaches. The quantitative results on the Urban data are reported in Tab. \\ref{tab:tab-unmixing}, where the best results are marked in bold. MVNTF yields poor unmixing performance for both endmember extraction and abundance estimation compared with the other tensor-based unmixing methods since it only considers the tensor structure to represent the spectral-spatial information of HS images and ignores other useful prior regularizations. Compared with MVNTF, MVNTF-TV integrates the advantages of TV regularization and tensor decomposition, bringing certain performance improvements in terms of SAD, MSAD, and RMSE. SeCoDe effectively addresses the problem of spectral variability in a convolutional decomposition fashion, thereby yielding further improvements in the endmember and abundance results. Different from SeCoDe, LR-NTF considers the nonlinear unmixing model of tensor decomposition and the low-rankness regularization of abundances.\n
The unmixing results of LR-NTF are superior to those of other competitive approaches on the urban data, demonstrating its superiority and effectiveness.\n\n\n\n\n\n\n\n\\subsection{Future challenges}\n\\label{sect:challenges_SU} \n\nSeveral advanced tensor decomposition-based methods have recently achieved effectiveness in HS SU. Nonetheless, there is still a long way to go towards the definition of statistical models and the design of algorithms. In the following, we briefly summarize some aspects that deserve further consideration:\n\nThe most commonly utilized evaluation indices for HS SU include RMSE (that measures the error between the estimated abundance map and the reference abundance map) and SAD (which assesses the similarity of the extracted endmember signatures and the true endmember signatures). However, RMSE and SAD just contribute to a quantitative comparison of SU results when the ground truth for abundances and endmembers exists. If there are no references in the real scenario, meaningful and suitable evaluation metrics should be developed in future work.\n\nTraditional NLMMs are readily interpreted as matrix factorization problems. The tensor decomposition-based NLMM has been springing up in the recent few years. We should consider complex interactions like the intimate and multilayered mixture for establishing general and robust tensor models.\n\n\nAnother important challenge is the high time consumption required by high-performance SU architectures, which hinders their applicability in real scenarios. Especially, as the number of end members and the size of the image increase, the current NTF-based unmixing methods are difficult to deal with this situation owing to a large amount of computational consumption. Therefore, the exploration of more computationally efficient tensor-based approaches will be an urgent research direction in the future.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sect:conclusion}\n\nHS technique accomplishes the acquisition, utilization, and analysis of nearly continuous spectral bands and permeates through a broad range of practical applications, having attached incremental attention from researchers worldwide. In HS data processing, large-scale and high-order properties are often involved in collected data. The ever-growing volume of 3-D HS data puts higher demands on the processing algorithms to replace the 2-D matrix-based methods. Tensor decomposition plays a crucial role in both problem modelings and methodological approaches, making it realizable to leverage the spectral information of each complete 1-D spectral signature and the spatial structure of each complete 2-D spatial image. In this article, we presented a comprehensive and technical review of five representative HS topics, including HS restoration, CS, AD, HS-MS fusion, and SU. Among these tasks, we reviewed current tensor decomposition-based methods with main formulations, experimental illustrations, and remaining challenges. The most important and compatible challenges related to consolidating tensor decomposition techniques for HS data processing should be emphasized and summarized in five aspects: model applicability, parameter adjustment, computational efficiency, methodological feasibility, and multi-mission applications.\n\n$\\textbf{Model applicability}$: Tensor decomposition theory and practice offer us versatile and potent weapons to solve various HS image processing problems. 
A high-dimensional tensor is often decomposed by different categories of tensor decomposition into several decomposition factors\/cores. One sign reveals that the mathematical meaning of different factors\/cores should be made connection with the physical properties of HS structure. Another sign is that each HS task contains multiple modeling problems, such as various types of HS noise (i.e., Gaussian noise, stripes, or mixed noise) caused by different kinds of sensors or external conditions. The tensor decomposition-based models should be capable of characterizing the specific HS properties and being used in different scenarios.\n\n\n$\\textbf{Parameter adjustment}$: In the algorithmic solution, parameter adjustment is an indispensable portion to achieve the significant performances of HS data processing. Parameters can be gradually tuned via extensive simulated experiments, while sometimes, they should be reset for various data sets due to the uncertainty of data size. In practice, users are most likely to be non-professional with little knowledge of a special algorithm, leading\nto improper parameter setting and unsatisfactory processing results. Therefore, in the future, efforts should be made to design a fast proper-parameter search scheme or reduce the number of parameters to increase algorithmic practicability.\n\n\n\n$\\textbf{Computational consumption}$: Tensor decomposition-based methods have achieved satisfactory results in HS data processing, yet they sometimes cause high computational consumption. For instance, a non-local LR tensor denoising model, TDL spends more than 10 min under a data set of $200 \\times 200 \\times 80$. As the image size increases, the increasing number of non-local FBPs will cause a larger amount of time consumption. Thus, there still exists a vast room for promotion and innovation of improving the optimization efficiency of HS data processing.\n\n\n\n\n$\\textbf{Methodological feasibility}$: Unlike deep learning-based methods, designing handcrafted priors is the key to tensor decomposition-based methods. Existing methods exploit the structure information of the underlying target image by implementing various handcrafted priors, such as LR, TV, and non-local similarity. However, different priors assumptions apply to specific scenarios, making it challenging to choose suitable priors according to the characteristics of HS images to be processed. Deep learning-based methods automatically learn the prior information implicitly from data sets themselves without the trouble of manually designing a manual regularizer. As an advisable approach, deep learning can be incorporated into tensor-based methods to mine essential multi-features and enhance the methodological feasibility.\n\n\n\n$\\textbf{Multi-mission applications}$: The extremely broad field of HS imagery makes it impossible to provide an exhaustive survey on all of the promising HS RS applications. It is certainly of significant interest to develop tensor decomposition-based models for other noteworthy processing and analysis chains in future work, including classification, change detection, large-scale land cover mapping, and image quality assessment. Some HS tasks serve as the pre-processing step for high-level vision. For example, the accuracy of HS classification can be improved after an HS denoising step. 
How to apply tensor decomposition for high-level vision and even multi-mission frameworks may be a key challenge.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Motivation and significance}\n\\label{sect:ms}\n\nOver the past decade, massive efforts have been made to process and analyze HS RS data after the data acquisition. Initial HS data processing considers either the gray-level image for each band or the spectral signature of each pixel \\cite{c2}. From one side, each HS spectral band is regarded as a gray-level image, and the traditional 2-D image processing algorithms are directly introduced band by band \\cite{c175}. From another side, the spectral signatures that have similar visible properties (e.g., color, texture) can be used to identify the materials \\cite{c174}. Furthermore, extensive low-rank (LR) matrix-based methods are proposed to explore the high correlation of spectral channels with the assumption that the unfolding HS matrix has a low rank \\cite{c20}. However, these traditional LR models reshape each spectral band as a vector, leading to the destruction of the inherent spatial-spectral completeness of HS images. Correct interpretations of HS images and the appropriate choice of the intelligent models should be determined to reduce the gap between HS tasks and the advanced data processing technique. Both 2-D spatial information and 1-D spectral information are considered when an HS image is modeled as a three-order tensor.\n\n\\begin{figure}[htp!]\n\t\\begin{center}\n \\includegraphics[width = 0.45\\textwidth]{papernumber.pdf}\n\t\\end{center}\n\t\\caption[houston]{The number of journal and conference papers that published in IEEE Xplore on the subject of \"hyperspectral\" and \"tensor decomposition\" within different time periods. }\n\t\\label{fig:Visio-papernum}\n\\end{figure}\n\n\\begin{figure*}[htp!]\n\t\\begin{center}\n \\includegraphics[width = 0.85\\textwidth]{total-fr.pdf}\n\t\\end{center}\n\t\\caption[houston]{A taxonomy of main tensor decomposition-based methods for hyperspectral data processing. Brackets enclose the number of papers for each topics that appeared in IEEE Xplore. }\n\t\\label{fig:Visio-total}\n\\end{figure*}\nTensor decomposition, which originates from Hitchcock's works in 1927 \\cite{c179}, touches upon numerous disciplines, but it has recently become prosperous in the fields of signal processing and machine learning over the last ten years \\cite{c181}. The early overviews focus on two common decomposition ways: Tucker decomposition and CANDECOMP\/PARAFAC (CP) decomposition. In 2008, these two decompositions were first introduced into HS restoration tasks to remove the Gaussian noise \\cite{c25}. The tensor decomposition-based mathematical models avoid converting the original dimensions, and also to some degree, enhance the interpretability and completeness for problem modeling. Different types of prior knowledge (e.g, non-local Similarity in the spatial domain, spatial and spectral smoothness) in HS RS are considered and incorporated into the tensor decomposition frameworks. However, on the one hand, additional tensor decomposition methods have been proposed recently, including block term (BT) decomposition, tensor-singular value decomposition (T-SVD) \\cite{c184}, tensor train (TT) decomposition \\cite{c185}, and tensor ring (TR) decomposition \\cite{c126}. On the other hand, as a versatile tool, tensor decomposition related to HS data processing has not been reviewed until. \n\nFig. 
\\ref{fig:Visio-papernum} displays the dynamics of tensor decompositions used for HS data processing in the HS community. The listed numbers contain both scientific journal and conference papers published in IEEE Xplore, which regards \"hyperspectral\" and \"tensor decomposition\" as the main keywords in abstracts. To highlight the increasing trend of number of publications, time period has been divided into four equal time slots (i.e., 2007-2010, 2011-2014, 2015-2018, 2019-2022(05 January)).\nIn this article, we mainly present a systematic overview from the perspective of the state-of-the-art tensor decomposition techniques for HS data processing in terms of the five burgeoning topics previously mentioned. \n\n\n\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[width = 0.8\\textwidth]{total-fr-fig.pdf}\n\t\\end{center}\n\t\\caption[restoration]{A schematic diagram of HS data processing, including restoration, compressive sensing, anomaly detection, hyperspectral-multispectral (HS-MS) fusion, and spectral unmixing. }\n\t\\label{fig:Visio-total2}\n\\end{figure*}\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(1) To the best of our knowledge, this is the first time to provide a comprehensive survey of the state-of-the-art tensor decomposition techniques for processing and analyzing HS RS images. More than 100 publications in this field are reviewed and discussed, most of which were published during the last five years.\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(2) For each HS topic, major representative works are scrupulously presented in terms of the specific categories of tensor decomposition. We introduce and discuss the pure tensor decomposition-based methods and their variants with other HS priors in sequence. The experimental examples are performed for validating and evaluating theoretical methods, followed by a discussion of remaining challenges and further research directions.\n\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(3) This article makes a connection between tensor decomposition modeling and HS prior information. We summarizes with the publication years, brief description, and prior information. Either beginners or experiencers are expected to obtain certain harvest pertinent to the tensor decomposition-based frameworks for HS RS. The available codes are also displayed for the sake of repeatability and further studies in the final submission.\n\n \n\n\n \n\n\\section{Outline}\n\\label{sect:outline}\n\nThis paper provides a brief introduction for various tensor decomposition models. Fig. \\ref{fig:Visio-total} illustrates a taxonomy of main tensor decomposition-based methods for HS data processing. Very recently, some excellent performances of tensor decompositions for HS data processing have garnered growing attention from researchers, leading to the novel prosperity of tensor modelings and related solving algorithms. These issues and solutions pose fresh challenges to research on optimizations, which inspires the further development of both tensor decompositions and HS data processing. Fig. \\ref{fig:Visio-total2} presents the illustration for each topic. 
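\n\nThroughout the topic-specific subsections below, the mode-$n$ unfolding and the mode-$n$ product are the elementary operations that connect the 3-D HS cube with the matrix and factor representations used in the formulations. As a purely illustrative sketch, not taken from any of the reviewed methods (array sizes and variable names are hypothetical), these two operations can be realized with NumPy as follows:\n\\begin{verbatim}\nimport numpy as np\n\ndef unfold(tensor, mode):\n    # Mode-n unfolding: the mode-n fibers become the columns of a matrix.\n    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)\n\ndef fold(matrix, mode, shape):\n    # Inverse of unfold for a tensor with the given target shape.\n    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]\n    return np.moveaxis(matrix.reshape(full_shape), 0, mode)\n\ndef mode_n_product(tensor, matrix, mode):\n    # X x_n M: multiply the mode-n unfolding by M and fold the result back.\n    new_shape = list(tensor.shape)\n    new_shape[mode] = matrix.shape[0]\n    return fold(matrix @ unfold(tensor, mode), mode, new_shape)\n\n# Toy example with hypothetical sizes: an HS cube of size h x v x z and a\n# spectral-response matrix G of size b x z (spectral degradation, cf. HS-MS fusion).\nh, v, z, b = 40, 40, 100, 4\nhs_cube = np.random.rand(h, v, z)\nG = np.random.rand(b, z)\nms_cube = mode_n_product(hs_cube, G, mode=2)   # shape (40, 40, 4)\n\\end{verbatim}\nThe mode-$3$ product with a spectral-response matrix in this toy example mirrors the spectral degradation used in the HS-MS fusion subsection, while the unfolding operation underlies the matrix-based LR formulations referred to throughout this paper.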
\n\n\n\n\n\\subsection{Restoration}\n\\label{sect:restoration}\n\n\n\nIn the actual process of HS data acquisition and transformation, external environmental change and internal equipment conditions inevitably lead to noises, blurs, and missing data (including clouds and stripes), which degrade the visual quality of HS images and the efficiency of the subsequent HS data analysis. Therefore, HS image restoration appears as a crucial pre-processing step for further applications. Mathematically, an observed degraded HS image can be formulated as follows\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:degrade}\n\\mathcal{T}=M(\\mathcal{X}) + \\mathcal{S} + \\mathcal{N}\n\t\\end{split}\n\\end{equation} \nwhere $\\mathcal{T} \\in \\mathbb{R}^{h \\times v \\times z}$, $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$, $\\mathcal{S} \\in \\mathbb{R}^{h \\times v \\times z}$ and $\\mathcal{N} \\in \\mathbb{R}^{h \\times v \\times z}$ represents an observed HS image, the restored HS image, the sparse error and additive noise, respectively, and $M(\\cdot)$ denotes different linear degradation operators for different HS restoration problems: (a) when $M(\\cdot)$ is a blur kernal also called as point spread function (PSF), Eq. (\\ref{eq:degrade}) becomes HS deblurring problem; \n(b) when $M(\\cdot)$ is a binary operation, i.e., 1 for original pixels, and 0 for missing data, Eq. (\\ref{eq:degrade}) turns into the HS inpainting problem;\n(c) when $M(\\mathcal{X})$ keeps $\\mathcal{X}$ constant, i.e., $M(\\mathcal{X}) = \\mathcal{X}$, Eq. (\\ref{eq:degrade}) is reformulated as the HS destriping problem ($\\mathcal{T}=\\mathcal{X} + \\mathcal{S}$) or HS denoising problem (only consider Gaussian noise $\\mathcal{T}=\\mathcal{X} + \\mathcal{N}$ or consider mixed noise $\\mathcal{T}=\\mathcal{X} + \\mathcal{S} + \\mathcal{N}$). The HS restoration task is to estimate recovered HS images $\\mathcal{X}$ from the given HS images $\\mathcal{T}$. This ill-posed problem suggests that extra constraints on $\\mathcal{X}$ need to be enforced for the optimal solution of $\\mathcal{X}$. The HS restoration problem can be summarized as \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:summ}\n \\underset{\\mathcal{X}}{ \\min } \\frac{1}{2} || \\mathcal{T} - M(\\mathcal{X}) - \\mathcal{S} ||^2_F + \\tau f(\\mathcal{X}) + \\lambda g(\\mathcal{S})\n \\end{aligned}\n\\end{equation}\nwhere $f(\\mathcal{X})$ and $g(\\mathcal{S})$ stand for the regularizations to explore the desired properties on the recovered $\\mathcal{X}$ and sparse part $\\mathcal{S}$, respectively. $\\tau$ and $\\lambda$ are regularization parameters.\n\n\n\n\\subsection{Compressive sensing}\n\\label{sect:HSI-CS}\n\n\n\nCompressive sensing (CS) of HS images aims to preciously reconstruct an HS data $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ from a few compressive measurements $\\textbf{y} \\in \\mathbb{R}^m$ by effective HS CS algorithms. The compressive measurements $\\textbf{y}$ can be formulated by:\n\\begin{equation}\n\\label{eq:y1}\n\\textbf{y} = \\Psi (\\mathcal{X})\n\\end{equation}\nwhere $\\Psi$ is a measurement operator instantiated as $\\Psi= \\textbf{D} \\cdot \\textbf{H} \\cdot \\textbf{P}$, where $\\textbf{D}$ is a random downsampling operator, $\\textbf{H}$ is a random permutation matrix, $\\textbf{P}$ is a WalshHadamard transform and the mapping of $\\Psi$ is $ \\mathbb{R}^{h \\times v \\times z} \\rightarrow \\mathbb{R}^m$ (the sampling ratio $m=hvz$). 
The strict reconstruction of $\\mathcal{X}$ from $\\mathbf{y}$ will be guaranteed by the CS theory when $\\Psi$ satisfies the restricted isometry property (RIP). The HS CS task can be generalized the following optimization problem:\n\\begin{equation}\n\\label{eq:hs_cs}\n\\begin{aligned}\n\\underset{\\mathcal{X}}{\\textrm{min}}\\;&\\|\\textbf{y}-\\Psi (\\mathcal{X})\\|_F^2 + \\lambda F(\\mathcal{X}),\n\\end{aligned}\t\n\\end{equation}\nwhere $F(\\mathcal{X})$ denotes the additional regularization term to use different types of HS prior information such as spectral correlation, spatial and spectral smoothness, and non-local Similarity. \n\n\n\n\\subsection{Anomaly detection}\n\\label{sect:HSI-AD}\n\n\n HS aomaly detection (AD) aims to discover and separate the potential man-made objects from the observed image scene, which is typically constructive for defense and surveillance developments. The key to coping with this problem is to exploit the discrepancy between anomalies and their background. Anomalies commonly occur with low probabilities and their spectral signatures are quite different from neighbors.\n\nHS images containing two spatial dimensions and one spectral dimension are intrinsically considered as a three-order tensor. The tensor-based approaches have been gradually attaching attention for HS AD in recent years. Tucker Decomposition is the first and essential type of tensor-decomposition methods used for HS AD. Therefore, in the following sections, we mainly focus on the Tucker decomposition-based methods and a few other types of tensor-based methods. An observed HS image $\\mathcal{T}$ can be decomposed into two parts by Tucker decomposition, i.e.,\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{T} = \\mathcal{X} + \\mathcal{S}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathcal{X}$ is LR background tensor and $\\mathcal{S}$ is the sparse tensor consisting of anomalies.\n\n\n\n\n \\subsection{HS-MS fusion}\n\\label{sect:HSI-SR}\n\nHS and MS fusion aims to improve the spatial resolution of HS images with the assistance of MS images and generate final HS images with high-spatial resolution and original spectral resolution. \nSuppose that a desired high-spatial-spectral resolution HS (HR-HS) image, a low-resolution HS (LR-HS) image, and a high-resolution MS (HR-MS) image are denoted by $\\mathcal{X} \\in \\mathbb{R}^{ H \\times V \\times B}$, $\\mathcal{Y} \\in \\mathbb{R}^{ h \\times v \\times B}$ and $\\mathcal{Z} \\in \\mathbb{R}^{ H \\times V \\times b}$ ($H \\gg h$, $V \\gg v$, $B \\gg b$), respectively. A LR-HS image is seen as a spatially downsampled and blurring version of $\\mathcal{X}$, and a HR-MS image is the spectrally downsampled version of $\\mathcal{X}$. The two degradation models are expressed as follow\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR}\n \\mathbf{Y}_{(3)} = \\mathbf{X}_{(3)} \\mathbf{R} + \\mathbf{N}_h\n \\end{aligned}\n\\end{equation}\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR2}\n \\mathbf{Z}_{(3)} = \\mathbf{G} \\mathbf{X}_{(3)} + \\mathbf{N}_m\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{R} = \\mathbf{B} \\mathbf{K}$, $\\mathbf{B}$ denotes a convolution blurring operation. $\\mathbf{K}$ is a spatial downsampling matrix, and $\\mathbf{G}$ represents a spectral-response function if a MS image sensor, which can be regarded as a spectral downsampling matrix. 
$\\mathbf{N}_h$ and $\\mathbf{N}_m$ stand for noise.\nAccording to references \\cite{c142}, $\\mathbf{R}$ and $\\mathbf{G}$ are assumed to be given in advance of solving the HS SR problem \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR-pro}\n \\min_{\\mathcal{X}} ||\\mathbf{Y}_{(3)} - \\mathbf{X}_{(3)} \\mathbf{R} ||^2_F +\n ||\\mathbf{Z}_{(3)} - \\mathbf{G} \\mathbf{X}_{(3)} ||^2_F + \\tau f ({\\mathcal{X}})\n \\end{aligned}\n\\end{equation}\n\n\n\\subsection{Spectral unmixing}\n\\label{sect:HSI-unmixing}\n\n\nDue to the low spatial resolution of sensors, many pixels mixed by different pure materials exist in HS imagery, which inevitably conceals useful information and hinders the further image processing. Spectral unmixing aims to separate the observed spectrum into a suite of basic components, also called endmembers, and their corresponding fractional abundances. \n\nAn HSI data tensor can be represented by sum of the outer products of an endmember (vector) and its abundance fraction (matrix). This enables a matrix-vector third-order tensor factorization that consists of $R$ component tensors:\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:SU-BTD}\n \\mathcal{X} & =\\sum_{r=1}^{R} \\mathbf{A}_{r} \\cdot \\mathbf{B}_{r}^{T} \\circ \\mathbf{c}_{r} +\\mathcal{N}\\\\\n &=\\sum_{r=1}^{R} \\mathbf{E}_{r} \\circ \\mathbf{c}_{r} +\\mathcal{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{E}_{r}$ calculated by the product of $\\mathbf{A}_{r}$ and $\\mathbf{B}_{r}^{T}$ denotes the abundance matrix, $\\mathbf{c}_{r}$ is the endmember vector, and $\\mathcal{N}$ represented the additional noise. The tensor factorization of endmember and its abundance can replaced by other decompositions.\n\nIn the final submission, we will offer specific tensor decomposition modelings, show the experimental performances and pose fresh challenges for each topic.\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:Introduction}Introduction}\n\n\n\nQuantum information processing arbitrates controlled interaction of the Hilbert space of a quantum system, for the purpose of generating a target probability distribution expressed in a computational basis defined by an experimental measurement scheme. The Hilbert space of a quantum system generally grows exponentially with the number of degrees of freedom, but for the purpose of quantum information processing it needs to be opportunistically partitioned in order to execute algorithms. The most common encoding exploits a collection of qubits, two-level systems, and it is known that the dimensionality of the Hilbert space is maximized when the states are arranged as a collection of qutrits, three-level systems, for a fixed number of allowed quantum states \n \\cite{greentree2004maximizing}. \nWithout loss of generality, multiple qudits can be merged into the definition of a new qudit, and a qudit can be mapped via binary encodings into a minimum of $\\log_2(d)$ qubits and viceversa. For instance, the binary expansion\n\n\\begin{eqnarray}\n\\bigotimes_{k=0}^{N-1}|s_k\\rangle& \n\\xrightleftharpoons[qubits]{qudits}\n&\\left|\\sum_{k=0}^{N-1} s_k d^k\\right\\rangle\n\\label{eq:quditmapping}\n\\end{eqnarray}\nwould map the computational basis state of a ququart (i.e. 
a four-level qudit) onto two qubits $|0\\rangle\\rightarrow|00\\rangle$, $|1\\rangle\\rightarrow|01\\rangle$, $|2\\rangle\\rightarrow|10\\rangle$, $|3\\rangle\\rightarrow|11\\rangle$.\n\nIt is known since the beginning of quantum computing architecture research that universal quantum computing could be achieved by operating constructively on single-qubit and two-qubit at a time~\\cite{divincenzo1995two} via the implementation of quantum gates temporally arranged into quantum circuits. A similar result is known for qudits of arbitrary dimension~\\cite{bullock2005asymptotically,wang2020qudits}, which can provide hardware-efficient solutions \\cite{liu2021constructing} and lower-depth gate compilation and noise improvement compared to qubit-based systems \\cite{gokhale2019asymptotic, otten2021impacts, blok2021quantum, gustafson2021prospects, gustafson2022noise}. Of particular interest in the current period of technological maturity of quantum processors (the NISQ Era~\\cite{preskill2018quantum}) are variational algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE) that might achieve some quantum advantage without the fault-tolerance overhead of active error-correction~\\cite{cerezo2021variational}. Typically the quantum circuits of these algorithms feature unitary gates implementing a set of parametrized single-qudit rotations $U_M(\\beta)$ depending on some real angle $\\beta$. For instance, let us consider the set of $SU(2)$ rotations around the $X$-axis of the Bloch sphere for qubit systems, and the set of $SO(3)$ rotations that leave invariant the $|0\\rangle+|1\\rangle+|2\\rangle$ state for qutrits. Their matrix representations $U_{M}^{(2)}(\\beta)$ and $U_{M}^{(3)}(\\beta)$ are, respectively:\n\\setlength{\\thickmuskip}{0mu}\n\\setlength{\\medmuskip}{0mu}\n\\begin{eqnarray}\n U_{M}^{(2)}&\\equiv&\\begin{bmatrix} c_{\\frac{\\beta}{2}}&-is_{\\frac{\\beta}{2}} \\\\is_{\\frac{\\beta}{2}}&c_{\\frac{\\beta}{2}}\\end{bmatrix}\\label{eq:UMix}\\\\\n U_{M}^{(3)}&\\equiv&\\frac{1}{3}\\begin{bmatrix}\n 1\\text{+}2c_\\beta & 1-c_\\beta-\\sqrt{3}s_\\beta & 1-c_\\beta\\text{+}\\sqrt{3}s_\\beta\\\\\n 1-c_\\beta\\text{+}\\sqrt{3}s_\\beta & 1\\text{+}2c_\\beta & 1-c_\\beta-\\sqrt{3}c_\\beta\\\\\n 1-c_\\beta-\\sqrt{3}s_\\beta & 1-c_\\beta\\text{+}\\sqrt{3}s_\\beta & 1\\text{+}2c_\\beta,\\nonumber\n \\end{bmatrix}\n\\end{eqnarray}\n\\setlength{\\thickmuskip}{2mu}\n\\setlength{\\medmuskip}{2mu}\nwhere $c_x$, $s_x$ indicate $\\cos(x)$ and $\\sin(x)$ and the computational basis states are ordered in the canonical ascending way.\nThe two-qudit gates of interest for QAOA\/VQE ans\\\"atze are often diagonal in the computational basis. 
For instance, the following two-qudit and two-qutrit unitary gates $U_C(\\gamma)$ introduce a phase shift by the angle $\\gamma$ if the two qudits have the same computational state:\n\\begin{eqnarray}\n U_C^{(2)}&\\equiv&\\begin{bmatrix}\n e^{i\\gamma} & 0 & 0 & 0\\\\\n 0 & 1 & 0 & 0\\\\\n 0 & 0 & 1 & 0\\\\\n 0 & 0 & 0 & e^{i\\gamma}\n \\end{bmatrix}\\label{eq:UCost}\\\\\n U_C^{(3)}&\\equiv&\n \\begin{bmatrix}\n e^{i\\gamma} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & e^{i\\gamma} & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{i\\gamma}\\\\\n \\end{bmatrix},\\nonumber\n\\end{eqnarray}\nwhere the canonically ordered basis for the matrix representation is used~\\footnote{$|00\\rangle$, $|01\\rangle$, $|10\\rangle$, $|11\\rangle$ for qubits, and $|00\\rangle$, $|01\\rangle$, $|02\\rangle$, $|10\\rangle$, $|11\\rangle$, $|12\\rangle$, $|20\\rangle$, $|21\\rangle$, $|22\\rangle$ for qutrits}. Note that for the most common case of qubits $U_C^{(2)}\\propto\\exp(i (\\gamma\/2)\\sigma_z\\otimes\\sigma_z)$, where $\\sigma_z$ are the standard Pauli matrices. For circuit quantum electrodynamics (cQED) systems, note also that there are ways to find effective spin models, which is generally used for encoding of quantum heuristic algorithms \\cite{miyazaki2022effective}.\n\n\n\nImplementing parametrized gates such as (\\ref{eq:UMix}-\\ref{eq:UCost}) starting from the elementary interactions provided by a NISQ processor is a non-trivial problem of \\emph{synthesis}~\\cite{magann2021pulses}, which often can be tackled only via heuristic numerical approaches and online experimental calibration~\\cite{klimov2020snake}. In this work, we consider the problem of synthesis of gates of the type (\\ref{eq:UMix}-\\ref{eq:UCost}) by driving with carefully optimized time-dependent interactions in a system of interacting states.\nMore specifically, the Hilbert space we are considering is spanned by a truncated set of anharmonic bosonic modes, defined with second quantized operators $a_m$, coupled in a density-density fashion. The corresponding many-body Hamiltonians and their truncated diagonal first quantization representations are:\n\\begin{eqnarray}\nH_m &=& \\omega_m a_m^\\dagger a_m + \\xi_m (a_m^\\dagger a_m)^2\\label{eq:Hm}\\\\\n \\Big|_{n_m\\ }&\\xrightarrow{}&|0\\rangle \\langle 0|+\\sum_{n=1}^{n_m-1} \\left[\\omega_m n + \\xi_m n^2\\right] |n\\rangle \\langle n|,\\nonumber\\\\\nH_{mm\\prime}^{int} &=& \\xi_{mm^\\prime} a_m^\\dagger a_m a^\\dagger_{m^\\prime} a_{m\\prime}\\label{eq:Hmm}\\\\\n \\Big|_{{n_m}\\atop{n_{m^\\prime}}}&\\xrightarrow{}&|00\\rangle \\langle 00|+\\sum_{n=1}^{n_m-1} \\sum_{k=1}^{n_{m\\prime}-1} \\xi_{mm^\\prime}nk |nk\\rangle\\langle nk|,\\nonumber\n\\end{eqnarray}\nwhere $n_m$, $n_{m\\prime}$ are the number of levels considered for each mode. In photonic implementations, $\\xi_m$ is called the self-Kerr coefficient for mode $m$ and $\\xi_{mm^\\prime}$ is called the cross-Kerr coefficient between modes $m$ and $m^\\prime$. \n\\begin{figure}[!htbp]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figure1v4.pdf}\n\\par\\end{centering}\n\\caption{\\label{fig:System_and_Spectrum}\nTop: System figure with colored waves representing different cavity electromagnetic modes. 
Eigenspectrum for $\\mathcal{H}^{(A)}$ (left) and $\\mathcal{H}^{(B)}$ (right). \nSystem parameters and resonant frequencies are given in Section \\ref{sec:quantum_control}.\nArrows indicate the transition frequencies.\nDashed (continuous) arrows represent transitions between different energy levels with $|0\\rangle_{T}$ and $|0\\rangle_{T}$ ($|1\\rangle_{T}$).}\n\\end{figure}\nWe are considering two illustrative setups in order to describe how quantum information could be manipulated in systems featuring Hamiltonians of the type (\\ref{eq:Hm}-\\ref{eq:Hmm}). \nIn particular we consider that there is one \"control qubit mode\" $T$ whose Hilbert space is truncated to the first two computational states: \n\\begin{eqnarray}\nH_T &=& |0\\rangle_T \\langle 0|_T+(\\omega_T+\\xi_T)|1\\rangle_T \\langle 1|_T,\\label{eq:HT}\n\\end{eqnarray}\nand either one or two computational modes ($C$) interacting with the control mode, respectively truncated to the first 8 and 3 computational states:\n\\begin{eqnarray}\n\\mathcal{H}^{(A)} &=& H_T + H_m + H_{Tm}^{int}\\Big|_{\\substack{n_T=2\\\\n_m=8}}\n\\label{eq:H2}\n\\\\\n\\mathcal{H}^{(B)} &=& \\left. \\begin{array}{l} \n H_T + H_l + H_m\\\\\n + H_{Tl}^{int} + H_{Tm}^{int} \\nonumber\\\\ \\end{array}\\right|_{^{\\substack{n_T=2\\\\n_l=3\\\\n_m=3}}},\n \n\\end{eqnarray}\nwhere the dependence over the $\\omega$ and $\\xi$ parameters of the Hamiltonians is implied. This setup is a specific case of a generalized Jaynes-Cummings model~\\cite{blais2021circuit}. Note that for $\\mathcal{H}^{(B)}$, each $C$ mode is naturally a qutrit, while as noted in Eq.~(\\ref{eq:quditmapping}), the quantum occupation numbers of the cavity modes could be directly associated to qubit registers via binary expansion~\\cite{sawaya2020resource}. \n\nIn Fig.~\\ref{fig:System_and_Spectrum}, we show the energy spectrum of two specifications of $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$ as well as a pictorial representation of a possible experimental setup that could be described by such effective Hamiltonians: A transmon circuit is embedded into a multimode 3D superconducting cavity, driven by the field of a coupled antenna. Indeed, our reference Hamiltonians can be derived by considering the superconducting transmon to be coupled with the cavity resonator in a dispersive way, i.e., by considering the effective interaction derived by perturbation theory assuming that the ratio of the transmon-cavity coupling and the difference between the transmon and the cavity fundamental frequencies is small, and neglecting the small effective couplings (i.e. cross-Kerr) between the cavity modes~\\cite{blais2021circuit, ma2021quantum}. The quantum control drive can be introduced in the model by adding a time-dependent term that allows to create and destroy excitations of a mode $m$:\n\\begin{equation}\n H^{drive}_{m}(t) = d_m(t) a_m + \\bar{d}_m(t) a_m^\\dagger,\\label{eq:HD}\n\\end{equation}\nwhere $d_m(t)$ are complex functions. This control Hamiltonian could be related to the (comparatively slowly varying) field generated by the antenna via phenomenologically justified approximations~\\cite{gerry2005introductory}. \n\n\nHaving introduced the main definitions and the systems under study, we outline the rest of the paper. In section \\ref{sec:quantum_control} we present the synthesis problem from a numerical point of view, following the implementation of quantum optimal control numerics in the open source package \\texttt{Juqbox.jl}~\\cite{Juqbox_Github}. 
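\n\nBefore describing the control parametrization, we note that, for any candidate drive, the synthesis objective implied by Eq.~(\\ref{eq:schroevol}) can be evaluated by direct numerical propagation. The fragment below is only a schematic Python illustration under simplifying assumptions (hypothetical variable names, a piecewise-constant drive acting on a single mode, $\\hbar=1$, and no treatment of guard levels); it is not the \\texttt{Juqbox.jl} implementation, which relies instead on the B-spline parametrization and the symplectic adjoint integration discussed in the following.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\ndef propagator(H0, a, d_samples, dt):\n    # H(t_k) = H0 + d_k a + conj(d_k) a^dagger, held constant over each step dt.\n    U = np.eye(H0.shape[0], dtype=complex)\n    for d in d_samples:\n        H = H0 + d * a + np.conjugate(d) * a.conj().T\n        U = expm(-1j * H * dt) @ U\n    return U\n\ndef trace_infidelity(U_target, U_realized):\n    # Phase-insensitive trace infidelity between the target and realized gates.\n    dim = U_target.shape[0]\n    overlap = np.trace(U_target.conj().T @ U_realized) / dim\n    return 1.0 - np.abs(overlap) ** 2\n\n# Toy usage: a three-level mode with a toy spectrum and a small random drive.\nn_levels = 3\na = np.diag(np.sqrt(np.arange(1, n_levels)), k=1)   # truncated annihilation operator\nH0 = np.diag([0.0, 5.0, 9.6])                        # toy energies (arbitrary units)\ndrive = 0.05 * (np.random.randn(200) + 1j * np.random.randn(200))\nU = propagator(H0, a, drive, dt=0.1)\n\\end{verbatim}\nMinimizing this kind of infidelity measure over the control parameters, augmented, e.g., with penalties that suppress occupation of guard levels, is precisely the task of the optimizer; the parametrization and the gradient computation actually employed by \\texttt{Juqbox.jl} are summarized next.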
Subsection \\ref{subseq:evaluation} will present results for the synthesis of simple QAOA proof-of-concept circuits based on the parallel execution of gates (\\ref{eq:UMix})-(\\ref{eq:UCost}). Finally,\nin Section \\ref{sec:discussion} \nwe will discuss future work, including improvements and generalizations of our case study to larger and more realistic systems, and what is needed for this method to be applied in practice to the compilation of variational quantum algorithms on bosonic quantum processors based on 3D cQED technology.\n\n\n\\section{\\label{sec:quantum_control} Pulse Engineering Approach}\n\nThe \\emph{gate synthesis} problem that we are facing could be framed as the task of discovering the functions $d_m(t)$ that allow the Schr\\\"odinger evolution for a time $\\tau$ of $\\mathcal{H}+H^{drive}$ to match as closely as possible a target unitary operation $U$: \n\\begin{eqnarray}\n U &\\;\\simeq\\;& \\mathcal{U}(\\tau)=\\mathcal{T} \\exp\\left[-\\frac{i}{\\hbar}\\int_0^\\tau dt\\left( \\mathcal{H}+H^{drive}(t)\\right)\\right]\n \\label{eq:schroevol}\n\\end{eqnarray}\nIn particular, as discussed in the previous section, we will be considering Eqs.~(\\ref{eq:H2}) for $\\mathcal{H}$ and Eqs.~(\\ref{eq:UMix}-\\ref{eq:UCost}) as target unitary matrices. In order to solve the synthesis problem numerically, it is cast into an optimization challenge over a finite number of real parameters, which can be tackled following the theory of quantum optimal control (QOC)~\\cite{palao2002quantum}. There are multiple strategies currently implemented for gate synthesis via QOC or machine learning, all with respective benefits and tradeoffs. However, these methods are currently tested on specific limited cases, and insights are difficult to generalize, e.g. see~\\cite{riaz2019optimal, niu2019universal, PRXQCTRL}. In this paper we follow the techniques described in Ref.~\\cite{petersson2021optimal}, targeting specifically cQED models, which we will now briefly review and contextualize for the system under study. \n\nWe leverage a key simplification of the QOC problem, consisting of the decomposition of the $d_m(t)$ control functions into a truncated basis spanned by $N_b$ quadratic B-spline polynomials, $S_b(t)$, corresponding to wavelets modulated with $N_f$ resonant frequencies, i.e.\n\\begin{eqnarray}\n d_m(t)&=&\\sum_{k=1}^{N_f} e^{i\\Omega_{m,k}t} W_{m,k}(t)\\nonumber\\\\\n W_{m,k}(t)&=&\\sum_{b=1}^{N_b}\\alpha_{m,k,b}S_b(t),\n \\label{eq:Bsplines}\n\\end{eqnarray}\nwhere the $\\alpha$s are complex coefficients, representing the unknowns of the optimization problem. The choice of B-splines as an expansion basis is motivated by the computational efficiency of the resulting parametrization of the control functions.\nThe resonant frequencies $\\Omega_{m,k}$ are defined by considering the energy differences between the states corresponding to the creation or annihilation of a boson, leaving the remaining occupations unchanged. Signals tuned at these frequencies initiate transitions, as can be shown by first-order time-dependent perturbation theory.\n\nWe show in Fig.~\\ref{fig:System_and_Spectrum} the resonant frequencies for our illustrative systems: for $\\mathcal{H}^{(A)}$, we count 8 transitions related to $T$-bosons and 14 transitions for $C$-bosons, for a total of 22 resonant frequencies. For $\\mathcal{H}^{(B)}$, there are 9 resonant frequencies that trigger $T$ transitions, and 24 transitions related to the $C$ modes. 
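\nTo make this counting concrete, the following minimal Python sketch (our own illustration, not part of the \\texttt{Juqbox.jl} workflow; the helper function and the use of the specific parameter values quoted below are assumptions made for this example) enumerates the single-boson transition energies of the diagonal Hamiltonian $\\mathcal{H}^{(B)}$ and counts how many distinct carrier frequencies they define.\n\\begin{verbatim}\nfrom itertools import product\n\n# Illustrative parameter values in GHz (omega over 2*pi), as quoted below.\nw_T, w_l, w_m = 5.0, 4.0, 3.0             # mode frequencies\nxi_T, xi_l, xi_m = 0.200, 0.0009, 0.0006  # self-Kerr coefficients\nxi_Tl = (xi_T * xi_l) ** 0.5              # cross-Kerr, geometric mean\nxi_Tm = (xi_T * xi_m) ** 0.5\nn_T, n_l, n_m = 2, 3, 3                   # truncation levels of H^(B)\n\ndef energy(t, l, m):\n    # Diagonal matrix element of H^(B) on the occupation state (t, l, m).\n    return (w_T * t + xi_T * t ** 2 + w_l * l + xi_l * l ** 2\n            + w_m * m + xi_m * m ** 2 + xi_Tl * t * l + xi_Tm * t * m)\n\ntransitions = []                          # (mode label, transition frequency)\nfor t, l, m in product(range(n_T), range(n_l), range(n_m)):\n    e0 = energy(t, l, m)\n    if t + 1 < n_T:\n        transitions.append(('T', energy(t + 1, l, m) - e0))\n    if l + 1 < n_l:\n        transitions.append(('l', energy(t, l + 1, m) - e0))\n    if m + 1 < n_m:\n        transitions.append(('m', energy(t, l, m + 1) - e0))\n\nprint(sum(1 for lab, _ in transitions if lab == 'T'))  # 9 T-boson transitions\nprint(sum(1 for lab, _ in transitions if lab != 'T'))  # 24 C-boson transitions\nprint(len({round(f, 6) for _, f in transitions}))      # 17 distinct frequencies\n\\end{verbatim}\nThe first two printed counts match the transition budget quoted here, and the last one anticipates the degeneracy counting discussed next.\n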
However, some transitions are degenerate -- only 17 different frequencies are required. \n\nWe consider the following parameter values, with reference to a prospective cQED implementation: $\\omega_{\\mathrm{T}}\/2\\pi=$\\,5\\,GHz; $\\omega_{\\mathrm{m}}\/2\\pi=$\\,3\\,GHz, $\\omega_{\\mathrm{l}}\/2\\pi=$\\,4\\,GHz; $\\xi_m\/2\\pi=$\\,0.6\\,MHz, $\\xi_l\/2\\pi=$\\,0.9\\,MHz; $\\xi_T\/2\\pi =$\\,200\\,MHz. In line with our inspiration from cavity-transmon systems in the dispersive regime~\\cite{nigg2012black}, we assign the interaction parameters to be the geometric means of the local self-interactions, $\\xi_{Tm}\/2\\pi = \\sqrt{\\xi_m \\times \\xi_T}\/2\\pi =$\\,10.95\\,MHz and $\\xi_{Tl}\/2\\pi = \\sqrt{\\xi_l \\times \\xi_T}\/2\\pi =$\\,13.42\\,MHz. The parameters that we used for $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$ are inspired by the results that would be expected from applying black-box quantization to Tesla-cavity systems~\\cite{romanenko2020three} coupled dispersively to transmons with coherence times $\\simeq$ 100 $\\mu s$ \\cite{nersisyan2019manufacturing}. Following that inspiration, we assume that the linewidths of the cavity modes are small compared to their separations, and we set the minimum frequency difference between the transmon and the cavity mode frequencies to be of the order of a GHz, in order to justify independent access of the control pulses to the transmon and to each cavity mode.\n\n\nFollowing \\texttt{Juqbox.jl}~\\cite{Juqbox_Github}, the \\emph{pulse engineering} algorithm attempts to discover the best $\\alpha_{m,k,b}$ coefficients (i.e. $2 \\times N_f \\times N_b$ real parameters) and works as follows. Initially, a random pulse is selected by initializing the vector of parameters using random positive numbers uniformly distributed within ${[0, 0.2\\,\\text{MHz})}$. Then, an objective function is calculated (see Subsection \\ref{subseq:evaluation}) and the pulse is iteratively updated by computing the Schr\\\"odinger evolution and gradients efficiently by symplectic time-integration of adjoint equations~\\cite{petersson2020discrete}. Note that due to the B-spline parametrization, the number of control parameters does not depend directly on the total pulse duration $\\tau$. However, the number of B-splines $N_b$ defines the temporal structure of the pulses, so one needs to choose $\\tau$ and $N_b$ large enough to allow the method to converge to a numerically robust solution. In particular, the frequency resolution of the pulses is limited by 1\/$\\tau$. We choose to vary $\\tau$ in the 500-8000 ns range for our numerical experiments on $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$, allowing for a frequency resolution of 0.125-2 MHz. \nThe B-splines vary on the time scale $\\tau\/N_b$. Hence we choose $N_b$ = 10 to allow resolution at the scale of $\\xi_{Tm}$, which controls multiple energy separations in the spectrum. The values of $\\xi_l$, $\\xi_m$ define the smallest separations between resonant frequencies. \n\n\\begin{figure*}[!htbp]\n\\begin{centering}\n\\includegraphics[width = .99 \\textwidth]{Fig2.pdf}\n\\par\\end{centering}\n\\caption{\\label{fig:EngineeredPulses} \n(a) Prototype circuits for the synthesis of Max-k-Cut QAOA. A single C-mode represents 8 computational states (equivalent to 3 qubits). (b) Illustrative Fourier spectrum of a high-fidelity engineered pulse via \\texttt{Juqbox.jl}. The top row shows results for $d_T(t)$, while the bottom row shows the controls of the computational modes ($d_m(t)$ and $d_l(t)$). 
Darker tones (black, blue, orange) indicate the pulses that synthesize mixing layers, while light tones (gray, cyan, yellow) refer to phase-separation layers. (c) Fidelity for pulse engineered QAOA layers of the prototype circuits. Black lines indicate the mean across angles, individually plotted in gray. Each line is the mean of 10 random restarts (20-80 percentiles across restarts is plotted as shaded area). Leakage plots are presented in the Supplementary Material.}\n\\end{figure*}\n\n\\subsection{Evaluation test case: QAOA}\\label{subseq:evaluation}\n\nOur numerical prototype experiment is based on the synthesis of QAOA-like quantum circuits, which in their basic implementation consist of the layered alternated application of \\emph{phase-separation} unitary gates and \\emph{mixing} gates~\\cite{hadfield2019quantum}. With reference to the known Max-k-Cut qudit mapping of QAOA~\\cite{fuchs2021efficient}, where k corresponds to the dimensionality of the qudits, we can craft the phase-separation layers using $U_C(\\gamma)_{ij}$ gates and we have the freedom of designing the mixing layers using the $U_M(\\beta)_{i}$ gates in Eqs.~\\ref{eq:UMix}-\\ref{eq:UCost}, where $i$, $j$ indicate the distinguishable qudits that are targeted by the specific gate execution. Other choices would also be appropriate~\\cite{deller2022quantum}.\nFor clarity, in Fig.~\\ref{fig:EngineeredPulses}-a, we show the two toy-model circuits that we are going to synthesize, respectively via pulse engineering on $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$. For completeness, the test circuit include the \\emph{initialization} operation, which is usually taken to be a generalized Hadamard gate (although it could be substituted by a mixing over the $|0\\rangle^{\\otimes N}$ state).\n\nWe note that quantum processor programmers have formally the freedom to execute gates sequentially or in parallel, and to exchange them in temporal execution order if they commute. However, in a real world implementation, if the processor is not fault-tolerant, under reasonable assumptions we expect decoherence and dephasing errors to be roughly proportional to execution time, so a compiler for NISQ algorithms often tries to parallelize gate execution as much as possible~\\cite{venturelli2018compiling}. Moreover, considering the mapping of the computational variables to the spectrum of the Hamiltonians (Fig.~\\ref{fig:System_and_Spectrum}), the possible qudit identity assignments are inequivalent with respect to pulse engineering, although it would be inconsequential if the synthesis was perfect. \\texttt{SWAP} operations could restrict the number of active qudits, by relegating some states to be just memory storage and not participate in processing. However, these operations and controls for our Hamiltonians need to be synthesized as well, increasing the complexity of the entire compilation significantly. \nBearing in mind these considerations, in our case study we choose to implement the single-qudit gates in parallel when possible, without implementing \\texttt{SWAP}s but directly synthesizing all required two-body interactions instead across the entire Hilbert space. 
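\nFor concreteness, the following minimal sketch (in Python, using NumPy and SciPy; it is our own illustration and not the code used for the experiments) shows how the full-space target unitary for the phase-separation layer of the $\\mathcal{H}^{(B)}$ prototype could be assembled, under the assumptions that the basis is ordered as $|n_T, n_l, n_m\\rangle$, that guard levels are omitted, and that the layer acts trivially on the $T$ mode; it also checks the qubit identity $U_C^{(2)}\\propto\\exp(i (\\gamma\/2)\\sigma_z\\otimes\\sigma_z)$ stated earlier.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\ndef phase_separation(gamma, d):\n    # Two-qudit controlled-phase gate U_C(gamma): phase exp(i gamma) on |jj>.\n    U = np.eye(d * d, dtype=complex)\n    for j in range(d):\n        U[j * d + j, j * d + j] = np.exp(1j * gamma)\n    return U\n\ngamma = np.pi * 0.4   # arbitrary illustrative angle\n\n# Qubit case: U_C^(2) equals exp(i (gamma over 2) sz x sz) up to a global phase.\nsz = np.diag([1.0, -1.0])\nref = np.exp(1j * gamma * 0.5) * expm(1j * gamma * 0.5 * np.kron(sz, sz))\nassert np.allclose(phase_separation(gamma, 2), ref)\n\n# Target for pulse engineering on H^(B): the two-qutrit phase-separation layer\n# acting on the (l, m) modes, extended by the identity on the T qubit.\n# Assumed basis ordering |n_T, n_l, n_m>; guard levels omitted for brevity.\nU_target = np.kron(np.eye(2), phase_separation(gamma, 3))\nprint(U_target.shape)   # (18, 18), the computational space of H^(B)\n\\end{verbatim}\nIn an actual synthesis run the target matrix would additionally be padded to account for guard levels, matching the enlarged truncations discussed below.\n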
We will discuss in Section~\\ref{sec:discussion} the scalability issues associated with this approach.\n\nSince in cQED implementations the Hamiltonians in Eqs.~(\\ref{eq:H2}) are defined on truncated versions of a physically infinite Hilbert space, it is customary to include a few additional \\emph{guard states} corresponding to high occupations of the boson modes to help the robustness of the numerical optimization, i.e., the truncation parameters are enlarged as $n_T\\rightarrow\\tilde{n}_T= n_T+\\delta n_T$, $n_m\\rightarrow\\tilde{n}_m=n_m+\\delta n_m$ and $n_l\\rightarrow\\tilde{n}_l=n_l+\\delta n_l$, where the $\\delta n$ denote the numbers of guard states, with values given in Table \\ref{tab:parameters}.\n\n\n\n\nFollowing \\cite{petersson2021optimal}, the optimization objective to be minimized is chosen to be a sum of the infidelity and the average leakage. The infidelity quantifies the mismatch between the synthesized unitary matrix and the target, and can be defined as $O_F=1-|\\Tr(\\mathcal{U}(\\tau)^\\dagger U)\/E|^2$, where $E$ is a normalization constant. The average leakage is defined as $O_L=(1\/\\tau)\\int_0^\\tau \\Tr(\\mathcal{U}^\\dagger(t) W \\mathcal{U}(t))dt$, where $W$ is a diagonal matrix which is non-zero only on the indices corresponding to the guard levels. \nThe weights in $W$ are set to be 1.0 for the highest guard state and then decrease exponentially in powers of 10 for each lower state. The objective of the numerics is to minimize $O=O_F+O_L$ by solving the related optimization problem on the $\\alpha$ parameters, using the IPOPT L-BFGS optimizer~\\cite{wachter2006implementation} and the efficient \\texttt{Juqbox.jl} numerical integration scheme to compute the required $O$ and $\\nabla_\\alpha O$.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{\\label{tab:parameters}Parameters used for prototype (See Fig.~\\ref{fig:EngineeredPulses})}\n\\begin{tabular}{|c||c||c||}\n\\hline\n\\bf{parameter} & \\bf{$\\mathcal{H}^{(A)}$} & \\bf{$\\mathcal{H}^{(B)}$}\\\\[0.5ex]\n\\hline\\hline\nB-splines $N_b$ & 10 & 10\\\\\ncarrier frequencies $N_f$ & 22 & 17\\\\\n$T$ guard states $\\delta n_T$ & 3 & 3\\\\\n$C$ guard states $\\delta n_m$, $\\delta n_l$ & 2 & 2\\\\\nmax iterations & 100 & 30-150\\\\\nnumber of restarts & 10 & 10\\\\\ntarget fidelity 1-$O_F$ & 0.99 & 0.99\\\\\n \\hline\\hline\n\\end{tabular}\n\\end{table}\n\nThe optimization heuristic has a stopping condition based on either the achievement of a target threshold fidelity (1-$O_F$) or the execution of a maximum number of iterations. As mentioned, we perform multiple restarts, initializing the optimization with different random pulses (see Table \\ref{tab:parameters} for a summary of some of the parameters used for the numerical experiments). Computations have been performed allowing an optimization time on the order of days. See the Supplemental Material for computational details.\n\n\nTo give a sense of the control signals that generate the QAOA circuit layers, we show the Fourier transforms of the engineered $d_T(t)$, $d_m(t)$, $d_l(t)$ functions in Fig.~\\ref{fig:EngineeredPulses}-b, for one random seed and pulse time $\\tau$ = 8000 ns, which in retrospect we know guarantees high fidelity of the synthesis. The angle parameters $\\beta$ and $\\gamma$ have been set to a fixed arbitrary value of $\\pi\/5$ for illustration, but the qualitative features of the pulses that we are describing are preserved for different $\\tau$ and angles. 
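\nTo illustrate how a control of the form of Eq.~(\\ref{eq:Bsplines}) gives rise to such spectra, the following short sketch (again our own illustration: the carrier values, coefficient scale and sampling are placeholders rather than the optimized pulses of Fig.~\\ref{fig:EngineeredPulses}) evaluates a B-spline parametrized signal and computes its Fourier spectrum.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.interpolate import BSpline\n\ntau = 8000.0                       # pulse duration in ns\nN_b = 10                           # number of quadratic B-splines\ncarriers_GHz = [3.0, 4.0, 5.2]     # placeholder carrier frequencies\nOmega = 2.0 * np.pi * np.array(carriers_GHz)   # angular frequencies, rad per ns\nN_f = len(carriers_GHz)\n\nrng = np.random.default_rng(0)\n# Coefficients uniform in [0, 0.2 MHz), mirroring the random initialization above.\nalpha = 2e-4 * rng.random((N_f, N_b))\n\n# Clamped knot vector supporting N_b quadratic B-spline elements on [0, tau].\ndeg = 2\nknots = np.concatenate(([0.0] * deg,\n                        np.linspace(0.0, tau, N_b - deg + 1),\n                        [tau] * deg))\nbasis = [BSpline.basis_element(knots[b:b + deg + 2], extrapolate=False)\n         for b in range(N_b)]\n\ndef d(t):\n    # d(t) = sum_k exp(i Omega_k t) * sum_b alpha_kb S_b(t), cf. the expansion above.\n    S = np.nan_to_num(np.array([B(t) for B in basis]))   # shape (N_b, len(t))\n    return np.sum(np.exp(1j * np.outer(Omega, t)) * (alpha @ S), axis=0)\n\nt = np.linspace(0.0, tau, 2 ** 18)\nspectrum = np.abs(np.fft.fft(d(t)))\nfreqs = np.fft.fftfreq(t.size, d=t[1] - t[0])   # in GHz, since t is in ns\nprint(freqs[np.argmax(spectrum)])               # dominant peak, near one carrier\n\\end{verbatim}\nThe peaks of the resulting spectrum sit at the chosen carriers, which is the qualitative structure visible in the optimized pulses discussed next.\n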
As evident from the plots, the scheme and parameters described above clearly generate peaks around the identified resonant frequencies corresponding to the single-boson transitions in Fig.~\\ref{fig:System_and_Spectrum}. In particular, for the $d_T(t)$ controls, the highest peak corresponds to $\\omega_T$, while the other equispaced peaks are offset by multiples of $\\xi_{Tm}$, $\\xi_{Tl}$ or integer combinations of the two energy values for the $\\mathcal{H}^{(B)}$ system. For the C-mode controls, the largest peaks are located at $\\omega_m$, $\\omega_l$, with the remaining peaks shifted by $\\xi_{Tm}$, $\\xi_{Tl}$ and by multiples of $\\xi_m$, $\\xi_l$ respectively.\nClearly, each reported spectrum corresponds to a real-time microwave combination of pulses that can be crafted via an arbitrary waveform generator (AWG) in an experimental setup.\n\n\n\nIn Fig.~\\ref{fig:EngineeredPulses}-c, we provide the aggregated performance of the pulse engineering approach, plotting the fidelity between the final evolution $\\mathcal{U}(\\tau)$ and the target circuit layers (phase separation and mixing) and Hadamard gates, for different pulse times. We show the mean fidelity, estimated by averaging 10 random initializations, i.e. restarts of the L-BFGS optimizer (the default optimizer for \\texttt{Juqbox.jl}), for QAOA layers parametrized with 11 different $\\gamma$ and $\\beta$ (from $-\\pi$ to $\\pi$ in steps of $\\pi\/5$). As expected, notwithstanding outliers, the statistics are sufficient to indicate that the method can reach the target 0.99 fidelity if the pulse is allowed to be sufficiently long. \n\n\n\n\n\n\\section{Discussion and Outlook}\\label{sec:discussion}\n\nIn the previous section, we described a proof-of-concept of numerical synthesis for simple quantum circuits describing the building blocks of Max-k-Cut QAOA using qubits (mapped onto qudits) and qutrits, on bosonic quantum processors. The main question that is left to be addressed is whether the synthesis approach we employed is sufficiently robust to be applied at application scale. We break down the question into a discussion of three scalability challenges: computational effort, realistic implementation, and circuit fidelity.\n\n\\paragraph{Computational Effort:} \nAs mentioned, the computational effort required by numerical packages to obtain high fidelity in our case study is already very significant and scales both with the Hilbert space size and with the pulse duration. This means that the proposed methodology will most certainly not be viable if straightforwardly applied to systems at large scale, although larger syntheses can be achieved if the code is optimized to leverage GPU clusters. The envisioned practical synthesis of larger circuits will necessarily need to be broken down into modules, each of which works on a subspace of the entire Hilbert space. The requirement for this modularization is that the gate synthesized numerically in a system with few modes will have to be applied in a system with several modes and levels. The optimal gate from numerics should ideally act as the identity on the degrees of freedom that were not considered in the synthesis, in order not to cause the crosstalk problem \\cite{ozguler2022dynamics}. 
Scaling up the single-mode case $\\mathcal{H}^{(A)}$ that we used will not likely be viable, since the non-local mapping onto qubits would require any gate to address the entire level structure independently from the locality of the gates, which is why we opted to synthesize the entire phase-separation circuit as opposed to the individual two-qubit gates independently. However, it is envisionable to generalize the $\\mathcal{H}^{(B)}$ system adding more C-modes, i.e., considering the Hamiltonian\n\\begin{eqnarray}\n\\mathcal{H}^{(B)}_{multi}[N] &=& \n H_T + \\sum_{j=1}^N \\left[H_{m_j}\n + H_{Tm_j}^{int}\\right], \n \\label{eq:multi-qutrits}\n\\end{eqnarray} which is $\\mathcal{H}^{(B)}$ for N=2. If the $\\xi$ parameters of each C-mode are sufficiently separated, the peaked frequency structure of the engineered pulses suggests that it is possible that none of the peaks in the final pulses would correspond to resonances with single-boson excitations that we don't want to trigger, which would likely induce very small leakage outside the two-mode target computational space. This needs to be verified theoretically or numerically in future work. Ultimately, frequency crowding will be an issue and more sophisticated numerics or frequency spacing and bandwidth engineering will be required.\n\nIt should be noted that if the modularization works as expected, the computing time spent synthesizing algorithmic primitives would be an offline \\emph{una tantum} cost to be paid to populate a lookup table (LUT) that would be accessed at runtime by the perspective user of the quantum solver. Indeed, similarly as in other domains, it is envisioned that the LUT would be computed for a large grid of parameters (angles $\\gamma$ and $\\beta$ in our QAOA example) and then machine learning algorithms would learn and return an interpolation of the engineered pulses if the compiler is called for a parameter that was not pre-computed, or would use nearby known points to initialize a fast optimization round to engineer a new pulse on the fly \\cite{xu2022neural}. \n\n\n\n\n\n\n\n\\paragraph{Realistic Implementation:} \n\nWhile the described technique is generically applicable to any bosonic interacting system, our case study has a specific 3D cQED implementation in mind, as illustrated in the inset of Fig.~\\ref{fig:System_and_Spectrum}. \nIt should be noted that the general framework that we employed, pulse engineering via QOC, while proven powerful~\\cite{heeres2017implementing} is not the only known approach to achieve universal synthesis of unitary quantum gates defined in the Fock space for these kind of systems. For instance, the use of selective number-dependent arbitrary phase (SNAP) protocol~\\cite{heeres2015cavity,fosel2020efficient} or echoed conditional displacement~\\cite{eickbusch2021fast} are strong candidates for the universal control of a single-mode system. Qudits have potential to be affected by noise less so than qubits \\cite{otten2021impacts} but working with large photon-number states comes with additional complications in terms of decoherence, which are still theoretically not entirely understood~\\cite{hanai2021intrinsic}.\n\nThe multiqudit system (Eq.~\\ref{eq:multi-qutrits}) could be viable but its practical implementation will likely suffer from the aforementioned quantum and classical crosstalk problems whose handling is currently one of the main active research topics of the 3D multimode cQED domain~\\cite{chakram2020seamless}. 
Even assuming that the bandwidth of the control pulses and the level spacing has sufficient resolution, there is a need for the co-design of a NISQ cQED architecture that would allow two-mode gates to operate in large Hilbert space with a controllable effect over spectator modes that are subject to an always-on interaction \\cite{alam2022quantum}. Theory results on quantum adiabatic protocols~\\cite{das2008colloquium, ozguler2018steering} on bosonic systems could provide an initial reference point to be generalized~\\cite{pino2018quantum, starchl2022unraveling}.\n\n\\paragraph{Fidelity:} The fidelity target we used in our prototype (0.99) is in line with the fidelity of native gates in industrial grade quantum processors but it is of course somewhat arbitrary. In accordance with conservative models of uncorrelated errors, we could estimate the final fidelity of the entire circuits in Fig.~\\ref{fig:EngineeredPulses}-a as the product of the fidelities of each synthesized layer, which means that ultimately the fidelity decreases exponentially with the number of layers. Hence, quantum-volumetric tests~\\cite{blume2020volumetric} would fail rather fast if we were to scale our circuits beyond few variables. However, it should be noted that for quantum optimization algorithms of the variational type, it is not clear if high fidelities are required, considering that the underlying computational principle is preserved for Lindblad evolution~\\cite{yang2017optimizing}. The degree of freedom of parameter setting might contribute to mitigate the misspecification of the gates due to poor synthesis. The non-requirement of exact synthesis is intuitive, since for optimization tasks we are not necessarily trying to reproduce a quantum process but rather to drive the system towards a probability distribution, which might be achievable also with partially coherent systems or in the presence of spurious unknown interactions that give rise to systematic coherent errors. So, as long as the nature of the errors is not specifically adversarial against the optimization tasks, there is still reasonable hope that a low-fidelity circuit could deliver speedup in the NISQ era. An important contribution that we are considering to improve the fidelity would be to generalize the technique of \\texttt{Juqbox.jl} to open systems, and fit the experimental noise to solve for a more realistic model. Fortunately, there has already been active development in that direction, including enabling quantum optimal control and pulse-level programming in XACC \\cite{nguyen2020extending, nguyen2021enabling} with \\texttt{QuaC} plugin \\cite{otten2017quac}, and a recently released open-source package for high-performance optimal control, \\texttt{Quandary}~\\cite{gunther2021quandary}.\n\n\n\n\nIn conclusion, we investigated the application of quantum optimal control techniques to design unitary gates for a class of physical systems that could be programmed to act as qudit-based quantum computers. We used variational algorithms such as QAOA for qubits (mapped onto a single qudit) and qutrits as targets for our case-study. Our current results, similar to other applied quantum computing works for multimode cQED~\\cite{kurkcuoglu2021quantum}, are still limited on small proof-of-concept models, due to limitations in computational effort, realistic implementation and achievable fidelity. 
While we identified pathways to overcome such limitations, we should note that for the purpose of variational optimization there are multiple recent attempts to employ co-designed digital-analog approaches that are directly related to QOC as optimization algorithms~\\cite{magann2021pulses,gokhale2019partial, choquette2021quantum} and might not require the burden of high-fidelity gate synthesis. We envision that our work could also contribute to those innovative methods, which have already been delivering promising results.\n\n\n\n\\section*{Acknowledgments}\n\nWe thank Jens Koch, Srivatsan Chakram, Taeyoon Kim, Joshua Job, Matthew Reagor, Matthew Otten, Keshav Kapoor, Silvia Zorzetti, Sohaib Alam, Doga Kurkcuoglu and the SQMS 3D Algorithms Group and SQMS Codesign Group for discussions and feedback. We thank Adam Lyon, Jim Kowalkowski, Yuri Alexeev and Norm Tubman for their assistance on computing aspects, including support through XSEDE computational Project no. TG-MCA93S030 providing compute time at Bridges-2 of the Pittsburgh Supercomputer Center. A.B.\\\"O. thanks Gabriel Perdue, Adam and Jim for their guidance during his early career years. We thank Anders Petersson for his support in configuring \\texttt{Juqbox.jl}. D.V. acknowledges support via NASA Academic Mission Service (NNA16BD14C). This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. We gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.\n\n\n\n\n\\section{Introduction}\nThe intrinsic notion of boundary has been extensively studied for both noncollapsed $\\RCD(K,N)$ spaces ($\\ncRCD(K,N)$ in short) and Alexandrov spaces. When we say Alexandrov spaces, we always mean complete, geodesic, finite-dimensional Alexandrov spaces. For an Alexandrov space $(A,\\mathsf{d}_A)$, Burago, Gromov and Perelman introduced the definition of boundary in \\cite{BGP92}, denoted by $\\mathcal{F}A$, see \\eqref{eq:Alexboundary}. From the uniqueness of tangent cones at interior points of geodesics proved by Petrunin in \\cite{Petrunin98}, it can be deduced that the interior of an Alexandrov space, i.e.\\ $A\\setminus\\mathcal{F}A$, is strongly convex, which means that any geodesic joining points in the interior does not intersect $\\mathcal{F}A$. For a $\\ncRCD(K,N)$ space $(X,\\mathsf{d},\\mathcal{H}^N)$, there are two intrinsic definitions of the boundary. One is defined by Kapovitch-Mondino in \\cite{KapMon19}, in the same spirit as the definition of the boundary of an Alexandrov space; we also denote this boundary by $\\mathcal{F}X$, see \\eqref{eq:KMboundary}. The other is defined by De Philippis-Gigli in \\cite{DPG17}, making use of the stratification of the singular set. We denote this boundary by $\\partial X$, see \\eqref{eq:DPGboundary}.\n\nIn parallel to the strong convexity of the interior of an Alexandrov space, it is conjectured by De Philippis and Gigli \\cite[Remark 3.8]{DPG17} that the interior of $X$, i.e.\\ $X\\setminus\\partial X$, is strongly convex. 
We will see that this conjecture follows from the conjecture that the two notions of the boundary of $\\ncRCD(K,N)$ spaces agree.\n\n\n\nIn this paper, we look at the boundary from an extrinsic point of view, namely, given $K\\in \\R$ and a positive integer $N$ we consider two situations\n\n\\begin{enumerate}\n \\item\\label{item:alex} an $N$-dimensional Alexandrov space has an $N$-dimensional Alexandrov subspace;\n \\item \\label{item:RCD} a $\\ncRCD(K,N)$ space has a $\\ncRCD(K,N)$ subspace with mild boundary control.\n\\end{enumerate} \n\nWe prove that in the case of \\eqref{item:alex} the intrinsic boundary of an Alexandrov subspace coincides with the topological boundary, and in the case of \\eqref{item:RCD} the De Philippis-Gigli boundary coincides with the topological boundary. See the precise statements in Theorem \\ref{thm:main1} and Theorem \\ref{thm:main2} below. A direct consequence is that synthetic curvature bounds on a subspace automatically imply regularity of its topological boundary, for example topological structure and rectifiability, see \\cite{BNS20}.\n\n\\begin{theorem}\\label{thm:main1}\nLet $(X,\\mathsf{d}_X)$ be an $N$-dimensional Alexandrov space, $N\\in \\mathbb{N}$, and let $\\Omega\\subset X$ be open. If $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is an Alexandrov space, $\\bar{\\Omega}\\cap\\mathcal{F}X=\\varnothing$, and $\\Omega={\\rm Int}_{\\rm top}(\\bar{\\Omega})$, where ${\\rm Int}_{\\rm top}(\\bar\\Omega)$ is the topological interior, i.e. the largest open subset of $\\bar\\Omega$, \nthen \n\\begin{enumerate}\n \\item\\label{main1:item1} $\\partial_{\\rm top}\\bar\\Omega=\\mathcal{F}\\bar\\Omega$;\n \\item\\label{main1:item2} any (minimizing) geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points in $\\Omega$ is a local geodesic of $(X,\\mathsf{d}_X)$;\n\n \\item\\label{main1:item3} any (minimizing) geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is a quasi-geodesic in $(X,\\mathsf{d}_X)$.\n\\end{enumerate}\n\\end{theorem}\n \nTheorem \\ref{thm:main1} will follow from the invariance of domain theorem for Alexandrov spaces, Theorem \\ref{thm:inv}. \nThe proof of Theorem \\ref{thm:inv} was worked out on the MathOverflow website quite a while ago by Belegradek, Petrunin and Ivanov but does not seem to exist in the literature. Since we have an application of this theorem, we present the proof, following closely the existing one by Belegradek-Petrunin-Ivanov. It works in the more general, purely topological category of MCS spaces, see Theorem~\\ref{thm:inv-dom-mcs}.\n\n\\begin{remark}\\label{rem:counterex}\nThe assumption $\\Omega={\\rm Int}_{\\rm top}(\\bar \\Omega)$ is clearly necessary and cannot be removed. For example, let $X=\\R^n$, $\\Omega=\\R^n\\setminus\\{0\\}$, which is open and dense. We see that $\\bar\\Omega=X$ is an Alexandrov space without Alexandrov boundary, but the topological boundary of $\\Omega$ is $\\{0\\}$, which is not empty. This shows that item \\ref{main1:item1} does not hold without the assumption $\\Omega={\\rm Int}_{\\rm top}(\\bar \\Omega)$ even for smooth manifolds. \n\nNext, the assumption that $\\bar{\\Omega}\\cap\\mathcal{F}X=\\varnothing$ is also clearly necessary. For example, let $X$ be the closed unit disk in $\\R^2$ and $\\Omega=X$. 
Then $\\partial_{\\rm top}\\bar \\Omega$ is empty while $\\mathcal{F}\\bar\\Omega=\\mathbb S^1$.\n\n\n Also, a geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ joining two points on the boundary need not be a local geodesic in $(X,\\mathsf{d}_X)$, so the conclusion in item \\ref{main1:item3} of Theorem \\ref{thm:main1} that a geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is a quasi-geodesic in the ambient space is optimal.\n\n \n \n For example, consider the space $X\\mathrel{\\mathop:}= D^2\\times\\{0\\}\\sqcup_{\\mathbb{S}^1\\times\\{0\\}}\\mathbb{S}^1\\times [0,\\infty)$ with the length metric, which is a cylinder glued along the boundary circle with a disk at the bottom; this is an Alexandrov space of non-negative curvature. Then let $\\Omega=\\mathbb{S}^1\\times (0,\\infty)$. Clearly $\\Omega$ is open; however, any geodesic in $\\partial_{\\rm top}\\bar\\Omega=\\mathcal{F}\\bar\\Omega=\\mathbb{S}^1\\times\\{0\\}$, which is an arc, is never a local geodesic w.r.t. the metric of $X$, since a segment in $D^2$ connecting any two points on its boundary circle is always shorter than the corresponding arcs.\nThis example also shows that $\\bar{\\Omega}$ need not be locally convex. Compare this with Theorem \\ref{thm:han}.\n\n\\end{remark}\n\nFor $\\ncRCD(K,N)$ spaces, we are able to obtain a similar result to Theorem \\ref{thm:main1} under an extra assumption of a local Lipschitz condition on the metric $\\mathsf{d}_\\Omega$, which serves as a weak substitute for the regularity of the topological boundary.\n \\begin{theorem}\\label{thm:main2}\n Let $(X,\\mathsf{d},\\mathcal{H}^N_X)$ be a $\\ncRCD(K,N)$ space, $\\Omega$ be an open subset of $X$ such that $\\Omega={\\rm Int}_{\\rm top}(\\bar \\Omega)$ and $\\bar \\Omega\\cap \\partial X=\\varnothing$. Suppose that $(\\bar{\\Omega},\\mathsf{d}_\\Omega,\\mathcal{H}^N_{\\bar\\Omega})$ is also an $\\RCD(K,N)$ space and for every $x\\in\\partial_{\\rm top}\\bar{\\Omega}$ there exist a neighborhood $U_x$ of $x$ and a constant $C(U_x)>1$ such that $\\mathsf{d}_\\Omega\\le C(U_x) \\mathsf{d}_X$ when restricted to $U_x\\cap\\bar \\Omega$. Then $\\partial_{\\rm top}\\bar \\Omega=\\partial \\bar\\Omega$.\n \n \n \n \\end{theorem}\n Here, $\\mathcal{H}^N_X$ (resp. $\\mathcal{H}^N_{\\bar\\Omega}$) is the Hausdorff measure induced by $\\mathsf{d}_X$ (resp. $\\mathsf{d}_{\\Omega}$). Notice the following relations between the two Hausdorff measures:\n \n \\begin{remark}\\label{rmk:equiHaus}\n From our assumption and the definition of the intrinsic length metric, it follows that $\\bar\\Omega$ is embedded in $X$ in a locally biLipschitz way, i.e.\\ for any $x\\in \\partial_{\\rm top}\\bar\\Omega$ and its neighborhood $U_x$, $\\mathsf{d}_X\\le \\mathsf{d}_{\\Omega}\\le C \\mathsf{d}_{X}$ when restricted to $U_x\\cap\\bar\\Omega$, so notions such as Hausdorff dimension and measure zero sets for the two Hausdorff measures are equivalent for sets in $\\bar\\Omega$, since we can always find a countable covering by neighborhoods on which the two metrics are biLipschitz to each other.\n \n \n\n\n \\end{remark}\n \nThere are two main technical difficulties in proving Theorem \\ref{thm:main2}. The first is that in general there is no topological information on any neighborhood of a singular point. An important fact used to prove the invariance of domain theorem for Alexandrov spaces is that every point has a neighborhood homeomorphic to a cone over its space of directions, which is not available for $\\ncRCD(K,N)$ spaces. 
In particular, as opposed to the situation in Alexandrov spaces, for a given point in an $\\ncRCD(K,N)$ space its tangent cone(s) in general do not carry topological information about its neighborhood. For example, Colding-Naber \\cite{CN11} constructed an example of a noncollapsed Ricci limit space with a singular point at which there are two non-homeomorphic tangent cones. Another difficulty is that the topological boundary may in principle vanish when taking tangent cones. Conjecturally this cannot happen, but this is unknown at the moment.\nA model case of this phenomenon would be a cusp, for example $X=\\R^2$ and $\\Omega=\\{(x,y)\\in \\R^2: y<\\sqrt{|x|}\\}$, where $0\\in \\partial_{\\rm top}\\bar\\Omega$ but its tangent cones in $\\bar\\Omega$ and in $X$ are both $\\R^2$. We can quickly rule out this case: if $\\bar\\Omega$ were a $\\ncRCD(K,N)$ space, then $0$ would have density $1$ in $\\bar\\Omega$, which in turn implies that a neighborhood of $0$ in $\\bar\\Omega$ is a manifold, a contradiction. However, this argument does not work if the point on the topological boundary is itself a singular point of the ambient space. A unified way to overcome both difficulties is to find a regular point on the topological boundary, if the latter is strictly larger than the De Philippis-Gigli boundary. Indeed, we are able to do this with the help of Deng's H\\\"older continuity of tangent cones along the interior of a geodesic \\cite{deng2020holder}.\n\nA motivation for studying the extrinsic notion of boundary is provided by the following observation on manifolds. Han in \\cite{han20} showed that for a weighted $n$-dimensional manifold $(M,g, e^{-f}\\vol_g)$ with smooth boundary, the measure-valued Ricci tensor \n\\begin{equation}\n \\mathrm{\\bf Ric}(\\nabla \\phi,\\nabla \\phi):=\\mathbf{\\Delta}\\frac{|\\nabla \\phi|^2}{2}-(\\langle\\nabla \\phi,\\nabla \\Delta \\phi\\rangle+|{\\mathrm{Hess}}_{\\phi}|^2)e^{-f}\\vol_g\n\\end{equation}\n defined by Gigli \\cite{Gigli14} can be expressed as\n\\begin{equation}\n \\mathrm{\\mathbf{Ric}}=(\\mathrm{Ric}+{\\mathrm{Hess}}_f) e^{-f}\\vol_g+ \\mathrm{II}_{\\partial M}e^{-f}\\mathcal{H}^{n-1}|_{\\partial M},\n\\end{equation}\n where $\\mathbf{\\Delta}$ is the measure-valued Laplacian. If $(M,g, e^{-f}\\vol_g)$ satisfies the $\\CD(K,\\infty)$ condition, then $\\mathrm{\\bf Ric}\\ge K e^{-f}\\vol_g$. Combined with Han's expression, this lower bound in particular implies that the second fundamental form is non-negative definite, which means the boundary is convex, and it is well known that this implies that geodesics joining interior points do not intersect the boundary. Han further interprets this convexity in the setting where a subset and its topological boundary are considered; moreover, the boundary need not be $C^2$, so it is not possible to define the second fundamental form on it.\n To proceed, we fix some notation. For a length metric space $(X,\\mathsf{d})$ and an open connected subset $\\Omega\\subset X$, denote by $\\mathsf{d}_{\\Omega}$ the intrinsic length metric on $\\Omega$; it extends by continuity to $\\bar\\Omega$. Denote by $\\partial_{\\rm top}\\bar\\Omega$ the topological boundary of $\\bar\\Omega$ in $X$. More precisely, Han proved\n\n\\begin{theorem}[\\cite{han20}]\\label{thm:han}\nLet $(M,g)$ be a complete $n$-dimensional manifold, and $\\Omega\\subset M$ be open. 
Suppose that $(\\bar{\\Omega}, \\mathsf{d}_\\Omega, \\mathfrak{m})$ satisfies that $\\supp(\\mathfrak{m})=\\bar{\\Omega}$ and $\\CD(K,\\infty)$ condition, then $\\mathfrak{m}\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\Omega\\ll \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\Omega$, if furthermore $\\bar\\Omega$ has Lipschitz and $\\mathcal{H}^{n-1}$-a.e.\\ $C^2$ boundary, then $\\mathfrak{m}(\\partial_{\\rm top}\\Omega)=0$ and $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is {locally convex}, i.e., every (minimizing) geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is a local geodesic in $(M,g)$\n\\end{theorem}\n\nIn particular, every minimizing geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ joining $2$ points in $\\Omega$ does not intersect ${\\partial_{\\rm top}\\Omega}$. We would like to generalize to non-smooth setting the {above theorem of Han, but in view of Remark \\ref{rem:counterex}, it is not true that the (synthetic) Ricci curvature lower bound on a closed subset forces the set to be locally convex. The correct notion to consider for metric spaces is the locally totally geodesic property. \n\n\\begin{definition}\n\tLet $(X,\\mathsf{d})$ be a geodesic metric space. A { connected open subset $\\Omega$} is said to be \\emph{locally totally geodesic} if every (minimizing) geodesic {in $(\\bar \\Omega, \\mathsf{d}_{\\Omega})$} joining two points in $\\Omega$ is a local geodesic in $(X,\\mathsf{d})$.\n\\end{definition}\n} \n\n\n\n\n{With this notion, we see from item \\ref{main1:item2} of Theorem \\ref{thm:main1} that we have shown that the synthetic sectional curvature lower bound on the closure of an open subset forces this open subset to be locally totally geodesic.\n\n For $\\ncRCD$ spaces, the natural approach to generalize the fact that Ricci curvature lower bound on a subset forces locally totally geodesic property is to show the equivalence between the intrinsic and topological boundary, since the convexity results for intrinsic boundary will then apply to the topological boundary as well. For example, with extra assumption that Kapovitch-Mondino boundary and De Philippis-Gigli boundary coincide, we can derive that the interior of an $\\ncRCD(K,N)$ subspace is locally totally geodesic by combining Theorem \\ref{thm:main2} and Theorem \\ref{thm:intconv}. See also Corollary \\ref{cor:loc-total-geo}. } \n\n\n However, for $\\ncRCD(K,N)$ spaces, the strong convexity of its (intrinsic) interior is not presently known, to derive it we need an extra assumption that the Kapovitch-Mondino boundary and the De Philippis-Gigli boundary are the same.\n\n\\begin{theorem}[Corollary \\ref{cor:intconv}]\\label{thm:intconv}\n Let $(X,\\mathsf{d},\\mathcal{H}^N)$ be a $\\ncRCD(K,N)$ space. Assume $\\partial X=\\mathcal{F} X$, then ${\\rm Int}(X)\\mathrel{\\mathop:}= X\\setminus \\partial X$ is strongly convex, i.e.\\ any geodesic joining points in ${\\rm Int}(X)$ does not intersect $\\partial X$.\n\\end{theorem}\n\n Although the equivalence between the two boundary notions, hence the strong convexity of the interior of $\\ncRCD(K,N)$ space is unknown, we can still obtain an a.e.\\ version of convexity of the interior of a $\\ncRCD(K,N)$ space. This in turn implies that for a $\\ncRCD(K,N)$ subset, intrinsic geodesics joining most interior points are away from its topological boundary. 
The a.e.\\ convexity of interior follows from the following more general a.e.\\ convexity of regular set at essential dimension which is a slight generalization of pairwise a.e. convexity of $\\mathcal{R}_n$ proved by Deng \\cite[Theorem 6.5]{deng2020holder}.\n \n\\begin{proposition}\\label{thm:almostconvex}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space of essential dimension $n$. For \\textit{every} $x\\in X$, there exists a subset $R_x\\subset \\mathcal{R}_n$ so that $\\mathfrak{m}(X\\setminus R_x)=0$ and for any $y\\in R_x$ there is a minimizing geodesic joining $x,y$ contained in $\\mathcal{R}_n$ except possibly for $x$. \n\\end{proposition}\n\nFor the proof we need the technique of localization via transport rays of any $1$-Lipschitz function, developed by Cavalletti-Mondino \\cite{CavMon15} in non-smooth setting. \n\nFinally, we conjecture that Theorem \\ref{thm:han} holds in much larger generality including the measure regularity part, see Conjecture \\ref{conj:collapseconv}.\n\n\n\n\n\nThe paper is organized as follows: In section \\ref{sec:prelim}, we recall concisely the structure results for Alexandrov and $\\RCD(K,N)$ spaces. In section \\ref{sec:inv} we prove invariance of domain theorem for Alexandrov spaces. Section \\ref{sec:equi} is devoted to the proof of main theorems Theorem \\ref{thm:main1} and Theorem \\ref{thm:main2}. The last two sections, section \\ref{sec:AppConv} and \\ref{sec:almost} focus on applications of the main theorems to subsets satisfying $\\ncRCD(K,N)$ condition in various ambient spaces.\n\n\\smallskip\\noindent\n\\textbf{Acknowledgement.} The second named author thanks Anton Petrunin for bringing invariance of domain for Alexandrov spaces to his attention, Qin Deng for suggesting Proposition \\ref{thm:almostconvex}, Igor Belegradek and Jikang Wang for several helpful discussions.\n\n\\section{Preliminary}\\label{sec:prelim}\n\n\\subsection{Stratified spaces}\\label{subsec:} \nIn this section we give a brief review of topological stratified spaces. \n\n\\begin{definition}\nA metrizable space $X$ is called an \\emph{MCS-space (space with multiple conic singularities)} of dimension $n$ if every point $x\\in X$ has a neighborhood pointed homeomorphic to the open cone over a compact $(n-1)$-dimensional MCS space. Here we assume the empty set to be the unique $(-1)$-dimensional MCS-space.\n\\end{definition}\n\n\n\\begin{remark}\nA compact $0$-dimensional MCS-space is a finite collection of points with discrete topology. A 1-dimensional MCS-space is a locally finite graph.\n\\end{remark}\n\n\nAn open conical neighborhood of a point in an MCS-space is unique up to pointed homeomorphism~\\cite{Kwun}. However given an open conical neighborhood $U$ of $x\\in X$ pointed homeomorphic to a cone over an $(n-1)$-dimensional space $\\Sigma_x$, the space $\\Sigma_x$ need not be uniquely determined by $U$.\n\n\nIt easily follows from the definition that an MCS space has a natural topological stratification constructed as follows.\n\nWe say that a point $p\\in X$ belongs to the $l$-dimensional stratum $X_l$ if $l$ is the maximal number $m$ such that the conical neighbourhood \nof $p$ is pointed homeomorphic to $\\R^m\\times K(S)$ for some MCS-space $S$. It is clear that $X_l$ is an $l$-dimensional topological manifold. It is also immediate that for $x\\in X_l$ all points in the conical neighborhood of $X$ belong to the union of $X_k$ with $k\\ge l$. 
Therefore the closure $\\bar X_l$ of the $l$-stratum is contained in the union $\\cup_{m\\le l} X_m$ of the strata of dimension at most $l$.\n\nThe $n$ stratum $X_n$ is an $n$-dimensional manifold and by above it is open and dense in $X$. We will also refer to $X_n$ as the \\emph{top stratum} of $X$.\n\n\n\\subsection{Structure theory for $\\RCD(K,N)$ spaces}\\label{subsec:ncRCD}\nWhen writing $\\RCD(K,N)$ space, we always assume that $N\\in [1,\\infty)$.\nWe assume familiarity with the structure theory of $\\RCD(K,N)$ spaces and just collect a few facts to fix notations. \n\n\\begin{definition}\n Given an $\\RCD(K,N)$ space $(X,\\mathsf{d},\\mathfrak{m})$, let $\\mathcal{R}_k$ be the set of points at which the tangent cone is $(\\R^k,|\\cdot|,\\mathcal{L}^k)$, for $k\\in [1,N]\\cap \\mathbb{N}$. $\\mathcal{R}(X)\\mathrel{\\mathop:}= \\cup_k \\mathcal{R}_k$ is called the regular set of $X$. \n\\end{definition}\n\nIf there is no confusion we also write $\\mathcal{R}$ instead of $\\mathcal{R}(X)$. It is shown in \\cite{MN14} that $\\mathfrak{m}(X\\setminus\\cup_{k}\\mathcal{R}_k)=0$ and each $\\mathcal{R}_k$ is $\\mathcal{H}^k$-rectifiable. Then it is shown in \\cite{BrueSemola20Constancy} that there is a unique $n\\in [1,N]\\cap \\mathbb{N}$ such that $\\mathfrak{m}(X\\setminus\\mathcal{R}_n)=0$. Such $n$ is called the essential dimension of $(X,\\mathsf{d},\\mathfrak{m})$ which is also denoted by ${\\rm essdim}$. \n{It is equal to the maximal $k$ such that $\\mathcal{R}_k$ is non empty, see for example \\cite{kitabeppu2017sufficient}.}\nThe singular set $\\mathcal{S}$ is the complement of the regular set, $\\mathcal{S}\\mathrel{\\mathop:}= X\\setminus \\cup_{k}\\mathcal{R}_k$. { The singular set has measure zero.}\n\nThe notion of noncollapsed $\\RCD(K,N)$ ($\\ncRCD(K,N)$ in short) is proposed in \\cite{DPG17}, requiring that $\\mathfrak{m}=\\mathcal{H}^N$, which in turn implies $N\\in\\mathbb{N}$ and the essential dimension of a $\\ncRCD(K,N)$ space is exactly $N$, see \\cite[Theorem 1.12]{DPG17}. When considering $\\ncRCD(K,N)$ spaces, finer structure results are available.\n\nThe density function\n\\begin{equation}\n \\Theta_N(x)\\mathrel{\\mathop:}=\\lim_{r\\to 0}\\frac{\\mathcal{H}^N(B_r(x))}{\\omega_N r^N}\\le 1\n\\end{equation}\nplays a crucial role in the study of regularity of $\\ncRCD(K,N)$ spaces. The existence of the limit and the upper bound $1$ come from the Bishop-Gromov inequality. Note that the density function characterizes the regular points in the following way \\cite[Corollary 1.7]{DPG17}:\n\\begin{equation}\n \\Theta_N(x)=1 \\Leftrightarrow x\\in \\mathcal{R}_N=\\mathcal{R}.\n\\end{equation}\n\n Thanks to the splitting theorem \\cite{Gigli13} and the volume cone to metric cone property \\cite{DPG16} in a $\\ncRCD(K,N)$ space, the singular set $\\mathcal{S}$ is stratified into \n\\[\n\\mathcal{S}_0\\subset \\mathcal{S}_1\\subset \\cdots\\subset \\mathcal{S}_{N-1},\n\\]\nwhere for $0\\le k\\le N-1$, $k\\in \\N$, $\\mathcal{S}_k=\\{x\\in \\mathcal{S}: \\text{no tangent cone at $x$ is isometric to } \\R^{k+1}\\times C(Z)\\text{ for any metric space } Z\\}$, where $C(Z)$ is the metric measure cone over a metric space $Z$. 
It is proved in \\cite[Theorem 1.8]{DPG17} that\n\\begin{equation}\\label{eq:sing}\n \\dim_{\\mathcal{H}}(\\mathcal{S}_k)\\le k.\n\\end{equation}\n With the help of the metric Reifenberg theorem \\cite[Theorem A.1.1-A.1.3]{Cheeger-Colding97I}, it can be derived that for points whose the density is close to $1$ there is a neighborhood homeomorphic to a smooth manifold. We have from \\cite[Theorem 1.7, Corollary 2.14]{KapMon19} that\n\n\\begin{theorem}\\label{thm:regular}\n Let $(X,\\mathsf{d},\\mathfrak{m})$ be a $\\ncRCD(K,N)$ space, and $\\alpha\\in (0,1)$. There exists $\\delta\\mathrel{\\mathop:}= \\delta(\\alpha,K,N)>0$ small enough so that if $x\\in X$ satisfies $\\Theta_N(x)> 1-\\delta$, then there is a neighborhood of $x$ biH\\\"older homeomorphic to a smooth manifold with H\\\"older exponent $\\alpha$. Moreover the set $\\{x\\in X: \\Theta_N(x)> 1-\\delta\\}$ is open and dense.\n\\end{theorem}\n\nWe call such points manifold points, and call the complement non-manifold points. It then follows that the set of non-manifold points has Hausdorff codimension at least $1$ since it is contained in $S^{N-1}$. \n \n Finally let us recall here some facts about the boundary of a $\\ncRCD$ space $(X,\\mathsf{d},\\mathcal{H}^N)$. Based on the stratification of $\\mathcal{S}$, De Philippis and Gigli proposed the following definition of the boundary of a $\\ncRCD(K,N)$ space $(X,\\mathsf{d},\\mathfrak{m})$:\n\\begin{equation}\\label{eq:DPGboundary}\n \\partial X\\mathrel{\\mathop:}= \\overline{\\mathcal{S}_{n-1}\\setminus \\mathcal{S}_{n-2}}.\n\\end{equation}\nOn the other hand, Kapovitch-Mondino (\\cite{KapMon19}) proposed another recursive definition of the boundary analogous to that of Alexandrov spaces, for $N\\ge 2$: \n\\begin{equation}\\label{eq:KMboundary}\n \\mathcal{F}X\\mathrel{\\mathop:}=\\{x\\in X: \\exists Y\\in {\\rm Tan}(X,\\mathsf{d},\\mathfrak{m},x), Y=C(Z), \\mathcal{F}Z\\neq \\varnothing\\}.\n\\end{equation}\nIn this definition $Z$ must be a non-collapsed $\\RCD(N-2,N-1)$ space with suitable metric and measure (\\cite[Lemma 4.1]{KapMon19}, after \\cite{Ketterer2015}), so one can inductively reduce the consideration to the case $N=1$, in which case the classification is completed in \\cite{KL15}.\n\n\nThe measure theoretical and topological structure of De Philippis-Gigli's boundary is subsequently studied in \\cite{BNS20} and \\cite{BPS21}. We will need the following relation from combining \\cite[Lemma 4.6]{KapMon19} and \\cite[Theorem 6.6]{BNS20}: \n\\begin{equation}\\label{eq:boundaryrelation}\n \\mathcal{S}^{N-1}\\setminus\\mathcal{S}^{N-2}\\subset\\mathcal{F}X\\subset \\partial X.\n\\end{equation}\n An implication of the above relation is that not having boundary in both senses are the same, { and is equivalent to $ \\mathcal{S}^{N-1}\\setminus\\mathcal{S}^{N-2}=\\varnothing$.} It is conjectured that $\\mathcal{F}X=\\partial X$, and this is verified for Alexandrov spaces and Ricci limit spaces with boundary, see \\cite[Chapter 7]{BNS20}.\n \n \n \n \\subsection{Structure theory of Alexandrov Spaces}\\label{subsec:Alexandrov}\nObserve that the structure theory of $\\ncRCD(K,N)$ spaces holds for Alexandrov spaces since $N$-dimensional Alexandrov spaces with lower curvature bound $K$ are $\\ncRCD(K,N)$ spaces \\cite{Petrunin11}, though some results can have different, usually easier, proofs. Instead of attempting to give a thorough introduction, we collect here the following facts that are necessary for this paper and are more refined than that of $\\ncRCD(K,N)$ spaces. 
We refer readers to \\cite{BGP92, BBI01, Petr-conv} for the detailed structure theory of Alexandrov spaces. \n\nFix an $N$-dimensional Alexandrov space $(X,\\mathsf{d})$. We describe the tangent cones, boundary and topological structure of $X$.\n \nTangent cones in an Alexandrov space are nicer than those in $\\ncRCD$ spaces; for example, the tangent cone at every point is unique. To better describe tangent cones, we introduce the space of directions:\n\n\\begin{definition}\nFor any $p\\in X$, we say that two geodesics emanating from $p$ have the same direction if their angle at $p$ is zero. This induces an equivalence relation on the space of all geodesics emanating from $p$, and the angle induces a metric on the space of equivalence classes of such geodesics. The metric completion of this space is the space of directions at $p$, denoted by $\\Sigma_p(X)$.\n\\end{definition} \n\n$\\Sigma_p(X)$ is an $(N-1)$-dimensional Alexandrov space with curvature bounded below by $1$ \\cite[Theorem 10.8.6]{BBI01}. The (metric) tangent cone at $p$ is the metric cone over $\\Sigma_p(X)$; this definition is consistent with the (blow-up) tangent cone $T_pX$ obtained by taking the pGH limit of $(X, r^{-1}\\mathsf{d},p)$ as $r\\to 0$. This observation, along with Perelman's stability theorem \\cite{Perelman91}, implies that $p$ has a neighborhood homeomorphic to a cone over $\\Sigma_p(X)$; therefore $X$ is an $N$-dimensional MCS-space by induction. For an alternative proof of this result see \\cite{Per-Morse}. \n\nThe boundary $\\mathcal{F}X$ is defined for $N\\ge 2$ as \n\\begin{equation}\\label{eq:Alexboundary}\n \\mathcal{F}X=\\{p\\in X: \\Sigma_p(X) \\text{ has boundary}\\}.\n\\end{equation}\nWhen $N=1$, Alexandrov spaces are manifolds and the boundary is just the boundary of the manifold, see \\cite[7.19]{BGP92}. This definition inspired the Kapovitch-Mondino boundary \\eqref{eq:KMboundary}. It is clear that when $(X,\\mathsf{d},\\mathcal{H}^N)$ is viewed as a $\\ncRCD(K,N)$ space, this boundary is exactly the Kapovitch-Mondino boundary, which justifies the use of the notation.\n\nAs in the $\\ncRCD$ case, the set of manifold points of $X$ is open and dense, and the set of non-manifold points of $X$ has Hausdorff dimension and topological dimension at most $N-1$ if $X$ has boundary, and at most $N-2$ if $X$ does not have boundary. This follows by combining \\eqref{eq:sing}, \\eqref{eq:boundaryrelation} and Theorem \\ref{thm:regular}. \n\nWe will also need the notion and properties of quasigeodesics on Alexandrov spaces \\cite{PP-quasigeoodesics}. Recall that a unit speed curve $\\gamma$ in an Alexandrov space is called a \\emph{quasigeodesic} if restrictions of distance functions to $\\gamma$ have the same concavity properties as their restrictions to geodesics. For example, for a non-negatively curved Alexandrov space $X$ this means that for any $p\\in X$ the function $t\\mapsto d(\\gamma(t), p)^2$ is $2$-concave. Every geodesic is obviously a quasigeodesic but the converse need not be true. 
For example if $X$ is the unit disk in $\\R^2$ then the boundary circle is a quasigeodesic in $X$.\nPetrunin and Perelman showed \\cite{PP-quasigeoodesics} that for every point $p$ in an Alexandrov space there is an infinite quasigeodesic starting at $p$ in every direction.\n\n\n\\section{Invariance of Domain for Alexandrov spaces}\\label{sec:inv}\n\nAs stated in the introduction, the invariance of domain for Alexandrov spaces has long been known to experts; we present here a precise statement and its proof, due to Belegradek-Ivanov-Petrunin on MathOverflow \\cite{BIP10}. \n\n\\begin{theorem}\\label{thm:inv}\nLet $(X,\\mathsf{d}_X)$, $(Y,\\mathsf{d}_Y)$ be Alexandrov spaces of the same dimension and let $f:X\\to Y$ be an injective continuous map. For any open subset $U\\subset X$, if $U\\cap \\mathcal{F}X=\\varnothing$ then $f(U)\\cap \\mathcal{F}Y=\\varnothing$, and $f(U)$ is open in $Y$. \n\\end{theorem}\n\nThis theorem follows from the following purely topological Invariance of Domain Theorem for MCS spaces.\n\n\\begin{theorem}\\label{thm:inv-dom-mcs}\nLet $X,Y$ be $n$-dimensional MCS spaces such that $X_{n-1}=Y_{n-1}=\\varnothing$ and for all points in $Y$ their open conical neighborhoods have connected $n$-strata.\n\nLet $f: X\\to Y$ be continuous and injective.\n\nThen $f(X)$ is open in $Y$ and open conical neighborhoods of all points in $X$ have connected top strata.\n\n\\end{theorem}\n\n\n\n\n\n\nWe need the following lemma regarding the $\\mathbb{Z}_2$-cohomology of MCS spaces, which originates from Grove-Petersen \\cite{PG93}, where it was initially stated for compact Alexandrov spaces without boundary. Note that finite dimensional MCS spaces are locally compact and locally contractible, since every point has a neighborhood homeomorphic to a cone, so Alexander-Spanier cohomology, singular cohomology and \\v{C}ech cohomology all coincide. It is not necessary to specify which cohomology to use. In what follows all cohomology is taken with $\\mathbb{Z}_2$ coefficients.\n\nWe will make use of the following duality which holds for Alexander-Spanier cohomology with compact support \\cite[Chapter 1]{Massey-book}.\nGiven a locally compact Hausdorff space $Y$ and a closed subset $A\\subset Y$ it holds that $H^n_c(Y,A)\\cong H^n_c(Y\\setminus A)$.\n\\begin{lemma}\\label{lem:GP}\nLet $X$ be a compact $n$-dimensional MCS space such that $X_n$ has $k$ connected components and $X_{n-1}=\\varnothing$. Then $H^n(X)\\cong\\mathbb{Z}_2^k$.\n\\end{lemma}\n\\begin{proof}\nThe proof is the same as in \\cite{PG93}.\n\n\nSince $X_n=X\\setminus S$ is an $n$-manifold with $k$ connected components we have that $H^n_c(X\\setminus S)\\cong \\mathbb{Z}_2^k$. On the other hand by Alexander-Spanier duality we have that $H^n_c(X\\setminus S)\\cong H^n_c(X,S)\\cong H^n(X,S)$ where the last isomorphism holds since $X$ is compact. 
Now the result immediately follows from the long exact sequence of the pair $(X,S)$ using the fact that $S$ is the union of strata of dimension $\\le n-2$ and hence $H^{n-1}(S)\\cong H^{n}(S)=0$.\n\\end{proof}\n\n \n \n Note that in the above proof we get that $H^n_c(X\\setminus S)\\cong H^n(X,S)\\cong H^n(X)$.\nCompare this to the proof of the following lemma.\n\n\\begin{lemma}\\label{lem:Igor}\n Let $(X,\\mathsf{d}_X)$ be a compact $n$-dimensional MCS space with connected $X_n$ and ${X_{n-1}}=\\varnothing$, and take $x\\in X_n$.\n Then we have\n \\begin{enumerate}\n \\item\\label{lem:Igoritem1} $H^n(X\\setminus \\{x\\})=0$;\n \\item\\label{lem:Igoritem2} the inclusion $i: (X,\\varnothing)\\to (X,X\\setminus\\{x\\})$ induces an isomorphism on cohomology, that is \n \n \\begin{equation}\n i^*: H^n(X, X\\setminus \\{x\\})\\to H^n(X)\n \\end{equation}\n is an isomorphism.\n \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:Igor}]\n\n\n\n We first show item \\ref{lem:Igoritem2}. Let $U\\subset X\\setminus S=X_n$ be connected and open. Since $X\\setminus S$ is a manifold and $U$ is connected, we have that the inclusion $U\\hookrightarrow X\\setminus S$ induces an isomorphism between the compactly supported cohomology groups $H^n_c(U)$ and $H^n_c (X\\setminus S)$. \n \n \n Also, since $X$ is compact we have that $H^n_c(X, X\\setminus U)\\cong H^n(X, X\\setminus U)$ and similarly $H^n_c(X,S)\\cong H^n(X,S)$.\n \n \n With this at our disposal, consider the inclusion of pairs $(X,S)\\hookrightarrow (X,X\\setminus U)$; we have \n \n \\begin{tikzcd}\nH^n_c(U) \\arrow[r, \"\\cong\"] \\arrow[d,\"\\cong\"]\n& H^n_c(X\\setminus S) \\arrow[d, \"\\cong\"] \\\\\nH^n(X, X\\setminus U) \\arrow[r]\n& |[]| H^n(X,S),\n\\end{tikzcd}\\\\\n\nwhere the vertical arrows are Alexander-Spanier duality combined with the above isomorphisms $H^n_c(X, X\\setminus U)\\cong H^n(X, X\\setminus U)$ and $H^n_c(X,S)\\cong H^n(X,S)$.\n\nThis gives an isomorphism between $H^n(X, X\\setminus U)$ and $H^n(X,S)$, hence between $H^n(X, X\\setminus U)$ and $H^n(X)$ by inclusion. Note that $X\\setminus \\{x\\}$ deformation retracts to $X\\setminus U$ for some open conical neighborhood $U\\subset X\\setminus S$ of $x$, which implies that $i^*: H^n(X, X\\setminus \\{x\\})\\to H^n(X)$ is an isomorphism. \n\nNext we show item \\ref{lem:Igoritem1}. To compute $H^n(X\\setminus \\{x\\})$, look at the long exact sequence for the pair $(X,X\\setminus \\{x\\})$:\n \\begin{equation}\n \\cdots\\rightarrow H^n(X,X\\setminus \\{x\\})\\xrightarrow{\\cong} H^n(X)\\rightarrow H^n(X\\setminus \\{x\\})\\xrightarrow{0} H^{n+1}(X,X\\setminus \\{x\\})\\rightarrow\\cdots,\n \\end{equation}\n$H^n(X\\setminus \\{x\\})=0$ follows directly. \n\\end{proof}\n\nNow we can prove the invariance of domain for MCS spaces. The strategy is to localize $X,Y$ to suspensions over lower dimensional strata, so that the proof reduces to the case of compact MCS spaces with connected top stratum and empty codimension $1$ stratum, where the above lemmas apply.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:inv-dom-mcs}]\n\nLet us first prove the theorem under the extra assumption that for all points in $X$ the top strata of their conical neighborhoods are connected.\n\nWe break the proof into steps.\n\n{\\bf Step 1: }Localize to suspensions, which are MCS spaces satisfying the assumptions of Lemma \\ref{lem:GP} and Lemma \\ref{lem:Igor}. \n\n\nLet $U\\subset X$ be open; for the statement of the theorem it suffices to take $U=X$. Let $x\\in U$ and $y\\mathrel{\\mathop:}= f(x)\\in f(U)$. 
Both $x,y$ have a neighborhood homeomorphic to cones over\n{ some $(n-1)$-dimensional MCS spaces} $\\Sigma_x$, $\\Sigma_y$, respectively. Take cone neighborhoods of $x$, $B_x\\Subset B'_x\\Subset U$, then there exists a cone neighborhood of $y$, say $B_y\\subset f(U)$ such that $B_y\\cap f(\\overline{B'_x}\\setminus B_x)=\\varnothing$. Let $C\\mathrel{\\mathop:}= U\\setminus B_x$ and $D\\mathrel{\\mathop:}= Y\\setminus B_y$. Note that both $U\/C$ and $Y\/D$ are homeomorphic to a suspension over $\\Sigma_x$, $\\Sigma_y$ respectively. The quotient map induces a new map $\\tilde{f}: U\/C\\to Y\/D$ between compact {$n$-dimensional MCS spaces with connected top stratum and empty codimension $1$ stratum. \nObserve that $\\tilde{f}$ remains injective on $f^{-1}(B_y)=f^{-1}(Y\\setminus D)$. \n\nIt suffices to show that $\\tilde{f}$ is surjective onto $B_y$ identified with its image in $Y\/D$. By continuity, it suffices to show every\npoint in $Y_n \\cap B_y$ is in the image of $\\tilde{f}$. \n\n\n\n{\\bf Step 2: }We show that $\\tilde{f}^*: H^n(Y\/D)\\to H^n(U\/C)$ is an isomorphism.\n\n\n First, we claim that there exists a\n point $x'\\in X_n $ such that $y'\\mathrel{\\mathop:}= f(x)\\in Y_n$. To see this, let $x\\in U\\cap X_n$, and take a compact neighborhood $B$, it is of topological dimension $n$, since $f$ is injective and continuous, it is a homeomorphism between $B$ and $f(B)$, so $f(B)$ also has topological dimension $n$, which means $f(B)$ can not be entirely in $\\cup_{k=0}^{n-2}Y_k$,\n which is of topological dimension at most $n-2$. Now that we have $x'\\in X_n$ and $y'\\in Y_n$ , we claim that $\\tilde{f}^*:H^n(Y\/D, Y\/(D\\setminus \\{y'\\}))\\to H^n(U\/C,U\/(C\\setminus\\{x'\\}))$ is an isomorphism. \n\nTo this end, take an excision around the manifold neighborhood of $x',y'$ respectively. The desired claim reduces to showing that $f^*:H^n(B^n,B^n\\setminus \\{x'\\})\\to H^n(f(B^n),f(B^n)\\setminus \\{y'\\})$ is an isomorphism for injective and continuous $f$ such that $f(x')=y'$, where $B^n$ is a ball in $\\mathbb{R}^n$. The invariance of domain for $\\mathbb{R}^n$ has been used to show that $f(B^n)$ is open so that an excision can be applied on $Y\/D$. The invariance of domain for $\\mathbb{R}^n$ also shows $f:(B^n, B^n\\setminus \\{x'\\})\\to (f(B^n),f(B^n)\\setminus \\{y'\\})$ is a homeomorphism, the claim follows.\n\nNow consider the induced map $f^*$ between long exact sequences of the pairs $(Y\/D, Y\/(D\\setminus \\{y'\\}))$ and $(U\/C,U\/(C\\setminus \\{x'\\}))$, taking also into account item \\ref{lem:Igoritem2} of Lemma \\ref{lem:Igor}, by 5-Lemma it follows that $\\tilde{f}^*: H^n(Y\/D)\\to H^n(U\/C)$ is an isomorphism. \n\n\n\n{\\bf Step 3: }Arguing by contradiction assume that\n $\\tilde{f}$ is not surjective onto $(Y_n\\cap B_y)$ identified with its image in $Y\/D$, we show that $\\tilde{f}^* :H^n(Y\/D)\\to H^n(X\/C)$ is a zero map. However, $\\tilde{f}^*$ cannot be both zero map and isomorphism (from Step 2), because by Lemma \\ref{lem:GP}, $H^n(Y\/D)= H^n(U\/C)=\\mathbb{Z}_2$, a contradiction.\n \n\nFor this purpose, suppose that a\npoint $z\\in Y_n\\cap B_y$ is missed by $\\tilde{f}$, then $\\tilde{f}$ can be factored through \n\\begin{equation}\n \\tilde{f}: U\/C\\rightarrow Y\/(D\\setminus \\{z\\})\\rightarrow Y\/D.\n\\end{equation}\nSince $H^n(Y\/(D\\setminus \\{z\\}))=0$ due to item \\ref{lem:Igoritem1} of lemma \\ref{lem:Igor}, $\\tilde{f}^* :H^n(Y\/D)\\to H^n(X\/C)$ is a zero map. 
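Written out on cohomology, the above factorization reads\n\\begin{equation}\n \\tilde{f}^*: H^n(Y\/D)\\longrightarrow H^n(Y\/(D\\setminus \\{z\\}))\\longrightarrow H^n(U\/C) \\ ,\n\\end{equation}\nwith vanishing middle group, which makes the conclusion of this step explicit.\n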
\n\n\nThis concludes the proof of the theorem under the extra assumption that for all points in $X$ the top strata of their conical neighborhoods are connected. \n\n\n\nTo complete the proof in the general case we will need the following general lemma.\n\n\\begin{lemma}\\label{lem-top-strata}\nLet $Z$ be a connected $n$-dimensional MCS space space that that it's top stratum $Z_n$ is not connected. Then there exists a point $z\\in Z$ such that the top stratum of its conical neighborhood $U_z$ is not connected.\n\\end{lemma}\n\\begin{proof}[Proof of Lemma \\ref{lem-top-strata}]\nlet $p, q$ be points lying in different connected components of $Z_n$. Since $Z$ is connected there is a path $\\gamma:[ 0,1]\\to Z$ such that $\\gamma(0)=p, \\gamma(1)=q$.\nBy compactness of $[0,1]$ there exists finitely many connected components $U_1,\\ldots U_k$ of $Z_n$ whose closures intersect $\\gamma$. Since the top stratum is dense in $Z$ we have that\n$\\gamma$ is contained in $\\bar U_1\\cup \\bar U_2\\cup\\ldots\\cup \\bar U_k$, therefore $[0,1]=\\gamma^{-1}(\\bar U_1)\\cup \\gamma^{-1}(\\bar U_2)\\cup \\ldots\\cup \\gamma^{-1}(\\bar U_k)$.\nAs all these sets are closed and $[0,1]$ is connected this covering can not be disjoint and hence there is $t_0\\in [0,1]$ which belongs to at least two $\\gamma^{-1}(\\bar U_j)$. Then $z=\\gamma(t_0)$ satisfies the conclusion of the Lemma.\n\n\\end{proof}\nWe now continue with the proof of Theorem \\ref{thm:inv-dom-mcs}.\n\nRecall that we have proved the theorem under the assumption that all conical neighborhood of points in $X$ have connected top strata.\n\nNow suppose there are some points in $X$ such that the top strata of their conical neighborhoods are not connected. Let $l$ be the largest number that $X_l$ contains such a point $x$. Take such $x\\in X_l$.\n\n\nThen its conical neighborhood $U_x$ has the form $\\R^l\\times C(\\Sigma)$ where $\\Sigma$ is $(n-l-1)$-dimensional MCS space. \nNote that points in $U_x$ outside of $\\R^l\\times \\{*\\}$ (here $*$ is the cone point in $C(\\Sigma)$) lie in the union of strata of dimension $>l$.\n\nWe claim that $\\Sigma$ has more than one connected components. Indeed, if not then its top stratum is not connected while $\\Sigma$ itself is connected. Then by Lemma \\ref{lem-top-strata} applied to $\\Sigma$ there exists a point $\\sigma\\in \\Sigma$ such that the top stratum of its conical neighborhood in $\\Sigma$ is not connected. But then the corresponding point in $U_x$ will lie in $X_m$ for $m>l$ and also have the property that its conical neighborhood has more than one top stratum components. This contradicts the maximality of $l$ in the choice of $x$. \n\nLet $\\Sigma'$ be one component of $\\Sigma$. Then the subset $W'=\\R^l\\times C(\\Sigma')\\subset U_x$ is an $n$-dimensional MCS space with empty $(n-1)$-stratum and such that the top stratum of all conical neighborhoods in $W'$ are connected.\n Then we have an injective embedding $f: W'\\to Y$ and by the proof above the image $f(W')$ is an open neighborhood of $f(x)$. But the same argument applies to any other component $\\Sigma''$ of $\\Sigma$ and gives another subset $W''\\subset U_x$ which contains $x$ and such that $f(W'')$ is also an open neighborhood of $f(x)$. This contradicts injectivity of $f$ near $x$. Therefore under the assumption of the theorem conical neighborhoods of points in $X$ must necessarily have connected top strata. }\n\\end{proof}\n\n\\begin{remark}\nThe connectedness assumption of top strata of conical neighborhoods in $Y$ is essential. 
For example, take $Y=\\R^n\\bigvee \\R^n$ to be the wedge sum of two copies of $\\R^n$ glued at $0$, $X=\\R^n$ and $f:X\\hookrightarrow Y$ be inclusion of the first copy of $\\R^n$. \nThis map is clearly 1-1 but the image is not open since it does not contain any neighborhood of $0$ in $Y$.\n\n\t\n\\end{remark}\n\n\\begin{remark}\nThe conclusion that conical neighborhoods of points in $X$ must be connected can be viewed as a non-embeddability result. In other words the following holds. Suppose $Y$ satisfies the assumption of the theorem and $X$ is an $n$-dimensional MCS space with empty $(n-1)$-stratum and such that there is a point in $X$ such that the top stratum of its conical neighborhood is not connected. Then there is no 1-1 continuous map $f:X\\to Y$.\n\\end{remark} \n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:inv}]\n\tAs pointed out in section \\ref{subsec:Alexandrov}, every Alexandrov space is an MCS space with connected top stratum.\n\t\n\t The assumption $U\\cap \\mathcal{F}X=\\varnothing$ implies that $U$ has empty codimension $1$ stratum. } Next, for every $p\\in Y$ its conical neighborhood $W_p$ is homeomorphic to $T_pY$ which is a nonnegatively curved Alexandrov space.\n\t\n Since the top stratum of $T_pY$ is connected the same is true for $W_p$.\n\t\n\t It suffices to show that $f(U)\\cap\\mathcal{F}Y=\\varnothing$, everything else follows from Theorem \\ref{thm:inv-dom-mcs}. \n\t\n\tAssume $\\mathcal{F}Y\\neq \\varnothing$. Take the metric double $\\tilde{Y}$ of $Y$, $\\tilde{Y}$ an $n$-dimensional Alexandrov space without boundary, and $f:X\\to Y$ extends to an injective and continuous map into $\\tilde Y$ by post composing with the inclusion map $Y\\hookrightarrow \\tilde Y$. We still denote it by $f$. Applying Theorem \\ref{thm:inv-dom-mcs} to $f:X\\to \\tilde{Y}$, we see that $f(U)$ must be open in $\\tilde Y$. If there exists $z\\in f(U)\\cap \\mathcal{F}Y$, then there exists an open neighborhood $V$ of $z$ in $f(U)\\cap\\tilde Y$. By definition of metric double $V$ must intersect both copies of $Y$ in $\\tilde Y$, this is a contradiction to the definition of $f$, from which it follows that $f(U)$ can not intersect $\\mathcal{F}Y$.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{Equivalence of intrinsic and extrinsic boundary}\\label{sec:equi}\n\n\\subsection{Alexandrov case}\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main1}]\n\n We first show that $\\partial_{\\rm top}\\bar\\Omega=\\mathcal{F}\\bar\\Omega$. Since tangent cones at points in $\\Omega$ have no boundary, we see that $\\mathcal{F}\\bar\\Omega\\subset\\partial_{\\rm top}\\Omega$. Now take $p\\in \\partial_{\\rm top}\\bar\\Omega$, if to the contrary $p\\notin \\mathcal{F}\\bar\\Omega$, then there is an open set $U\\subset \\bar{\\Omega}$ containing $p$ such that $U\\cap \\mathcal{F}\\bar\\Omega=\\varnothing$, since $\\mathcal{F}\\bar\\Omega$ is closed. The Invariance of Domain Theorem \\ref{thm:inv} applied to inclusion $i:\\bar{\\Omega}\\hookrightarrow X$ yields that $i(U)=U$ is also an open subset of $X$, so $p\\in U\\subset {\\rm Int}_{\\rm top}(\\bar\\Omega)=\\Omega$, a contradiction to $p\\in\\partial_{\\rm top}\\bar\\Omega$. \n\n Now that $\\partial_{\\rm top}\\bar\\Omega=\\mathcal{F}\\bar\\Omega$, it follows immediately that $\\Omega$ coincides with $\\bar\\Omega\\setminus \\mathcal{F}\\bar\\Omega$, which is the interior in the sense of Alexandrov spaces. 
So strong convexity of the interior of an Alexandrov space yields that $\\gamma$ does not intersect $\\mathcal{F}\\Omega$ hence $\\partial_{\\rm top}\\Omega$. \n The proof of \\ref{main1:item2} is completed by noticing that any $\\mathsf{d}_\\Omega$ geodesic connecting points in ${\\rm Int}_{\\rm top}(\\bar\\Omega)=\\Omega$ and entirely contained in $\\Omega$ is a local geodesic of $(X,\\mathsf{d}_X)$.\n\nFor the proof of item \\ref{main1:item3}, let $p,q\\in \\partial{\\rm Int}_{\\rm top}(\\bar\\Omega), d=\\mathsf{d}_\\Omega(p,q)$, and $\\gamma: [0,d]\\to \\bar\\Omega$ be a unit speed geodesic with respect to $\\mathsf{d}_\\Omega$ joining $p,q$ such that $\\gamma(0)=p$, $\\gamma(1)=q$. For any small enough $\\varepsilon\\in (0,d\/3)$, take $p'=\\gamma(\\varepsilon)$ and $q'=\\gamma(d-\\varepsilon)$. \n{\nWe can find points in $\\{p'_n\\}$ and $\\{q'_n\\}$ in $\\Omega$ so that $p'_n\\to p'$ and $q'_n\\to q'$. The geodesic $\\gamma_n$ of $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining $p_n$ and $q_n$ must converge to $\\gamma|_{[\\varepsilon,1-\\varepsilon]}$ otherwise there would have been branching geodesics between $p,q$. \nOn the other hand $\\gamma_n$ is a local geodesic of $(X,\\mathsf{d}_X)$ and hence is a quasigeodesic in $X$. Since limits of quasigeodesics are quasigeodesics it follows that \n $\\gamma|_{[\\varepsilon,1-\\varepsilon]}$ is a quasi-geodesic in $X$. Letting $\\varepsilon\\to 0$ we conclude that $\\gamma$ is a quasi-geodesic in $X$ as well.}\n\\end{proof}\n\n\n\n\\subsection{$\\ncRCD$ case}\nThe purpose of this section is to prove Theorem \\ref{thm:main2}. We need the following pairwise almost convexity proved by Deng in \\cite[Theorem 6.5]{deng2020holder}.\n\n\\begin{proposition}\\label{prop:pairconv}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space with ${\\rm essdim}=n$. For $\\mathfrak{m}\\times\\mathfrak{m}$-a.e.\\ every $(x,y)\\in\\mathcal{R}_n\\times \\mathcal{R}_n$, there exists a geodesic joining $x,y$, and entirely contained in $\\mathcal{R}_n$.\n\\end{proposition}\n\n \n \\begin{proof}[Proof of Theorem \\ref{thm:main2}]\n We assume that $\\bar\\Omega\\neq X$, otherwise it either contradicts the assumption $\\bar \\Omega\\cap \\partial X=\\varnothing$ or makes the statement trivial. We break the proof into several steps.\n \n {\\bf Step 1:} we show $\\partial \\bar\\Omega\\subset \\partial_{\\rm top} \\bar\\Omega$. \n \n \n Observe that $\\mathsf{d}_\\Omega$ and $\\mathsf{d}_X$ coincide on sufficiently small open subsets of $\\Omega$, hence tangent cones taken at the same point by the same rescaling sequence w.r.t. both metrics are isometric for points in $\\Omega$, in particular tangent cones at points in $\\Omega$ have no boundary. Which means that $\\mathcal{S}^{N-1}(\\bar\\Omega)\\setminus \\mathcal{S}^{N-2}(\\bar\\Omega)\\subset \\partial_{\\rm top} \\bar\\Omega$. Since $\\partial_{\\rm top} \\bar\\Omega$ is closed, we have $\\partial \\bar\\Omega\\subset \\partial_{\\rm top} \\bar\\Omega$.\n \n {\\bf Step 2:} Suppose $\\partial_{\\rm top} \\bar \\Omega\\subset \\partial \\bar\\Omega$ is not true, we find a point $q\\in \\partial_{top}\\bar\\Omega\\setminus \\partial \\bar\\Omega$ so that $q\\in \\mathcal{R}(X)$.\n \n \n First, there exists $p\\in \\partial_{\\rm top}\\bar\\Omega\\setminus \\partial \\bar\\Omega$. Since $\\partial_{\\rm top}\\bar\\Omega$ and $\\partial \\bar\\Omega$ are both closed, there exists $\\varepsilon>0$ such that $B_{2\\varepsilon}(p)\\cap \\partial \\bar\\Omega=\\varnothing$. 
Now consider any two points in $B_{\\varepsilon\/2}(p)$. By triangle inequality, any geodesic joining such two points lies in $B_{\\varepsilon}(p)$ hence does not intersect $\\partial \\bar\\Omega$, moreover, note that $\\mathcal{H}^N_X(B_{\\varepsilon\/2}(p)\\cap \\Omega)>0$, $\\mathcal{H}^N_X(B_{\\varepsilon\/2}(p)\\cap (X\\setminus \\bar\\Omega))>0$ (recall we assumed $\\bar\\Omega\\neq X$), by Deng's pairwise almost convexity of the regular set, Proposition \\ref{prop:pairconv}, there exist $x\\in B_{\\varepsilon\/2}(p)\\cap \\Omega\\cap \\mathcal{R}(X)$ and $y\\in B_{\\varepsilon\/2}(p)\\cap (X\\setminus \\bar\\Omega)\\cap \\mathcal{R}(X)$ such that some geodesic, denote it by $\\gamma_{xy}$, joining $x,y$ is entirely contained in $\\mathcal{R}(X)$, meanwhile, $\\gamma_{xy}$ must intersect $\\partial_{\\rm top}\\bar\\Omega$, and the point of intersection, denoted by $q$, is the desired point. \n \n {\\bf Step 3:} We show that for the point $q$ we found in step $2$, there exists a neighborhood $U$ so that $\\partial_{\\rm top}\\bar \\Omega\\cap U$ has Hausdorff codimension at least $2$ (recall Remark \\ref{rmk:equiHaus}), and there exists $\\delta\\mathrel{\\mathop:}= \\delta(K,N)>0$ depending only on $K,N$ such that $\\Theta_{\\bar\\Omega}(x)\\le 1-\\delta$ for any $x\\in \\partial_{\\rm top}\\bar \\Omega\\cap U$.\n \n \n Since $q\\in \\mathcal{R}(X)\\cap (\\partial_{\\rm top}\\bar \\Omega\\setminus \\partial \\bar\\Omega)$, there exists an open neighborhood $U$ such that $U$ is homeomorphic to a manifold and $U\\cap \\partial \\bar\\Omega=\\varnothing$. We claim that $\\partial_{\\rm top}\\bar \\Omega\\cap U\\subset \\mathcal{S}^{N-2}(\\bar\\Omega)$. It suffices to show $\\partial_{\\rm top}\\bar \\Omega\\cap U\\subset \\mathcal{S}(\\bar\\Omega)$ since $U$ is disjoint from $\\partial \\bar\\Omega$. \n \n Let $\\delta\\mathrel{\\mathop:}=\\delta(K,N)>0$ be as in Theorem \\ref{thm:regular}, if there exists $x\\in \\partial_{\\rm top}\\bar \\Omega\\cap U$ with $\\Theta_{\\bar\\Omega}(x)> 1-\\delta$ then there exists $V\\subset U\\cap \\bar\\Omega$ containing $x$, open relative to $\\bar\\Omega$, and homeomorphic to a manifold. Now the invariance of domain for manifolds applied to the inclusion $V\\hookrightarrow U$ yields that $V$ is open in $X$, hence $V\\subset \\Omega$. This contradicts that $x\\in \\partial_{\\rm top}\\bar \\Omega$. Therefore for any $x\\in \\partial_{\\rm top}\\bar \\Omega\\cap U$ it holds that $\\Theta_{\\bar\\Omega}(x)\\le 1-\\delta$ which by the choice of $\\delta$ implies that $\\partial_{\\rm top}\\bar \\Omega\\cap U\\subset \\mathcal{S}^{N-2}(\\bar\\Omega)$. \n Since Hausdorff codimension of $\\mathcal{S}^{N-2}(\\bar\\Omega)$ is at least $2$, the proof of this step is completed.\n \n \n {\\bf Step 4:} We show that when we blow up the inclusion map $ i_0:\\bar\\Omega\\hookrightarrow X$ at $q$, the induced map $i_1: T_q\\bar \\Omega\\to T_q X \\cong \\R^N$ is not surjective near $0$, in fact, $0$ is on the topological boundary of $i_1(T_q\\bar \\Omega)$.\n \n Denote by $B^X_r$ (resp. $B^{\\bar\\Omega}_r$) the ball of radius $r$ in metric $\\mathsf{d}_X$ (resp. $\\mathsf{d}_\\Omega$). We claim that $\\mathcal{H}^N_X(B^{\\bar\\Omega}_r(x))=\\mathcal{H}^N_{\\bar\\Omega}(B^{\\bar\\Omega}_r(x))$ for $x\\in U\\cap \\bar\\Omega$ and $r>0$ small enough so that $B^{\\bar\\Omega}_r(x)\\subset U$. 
Observe that the two distances $\\mathsf{d}_X$ and $\\mathsf{d}_{\\Omega}$ coincide with each other for small enough open subsets in $\\Omega$, so $\\mathcal{H}^N_X$ and $\\mathcal{H}^N_{\\bar\\Omega}$ gives the same mass to open subsets of $\\Omega$. Now observe that $B^{\\bar\\Omega}_r(x)=(B^{\\bar\\Omega}_r(x)\\cap \\Omega)\\cup (B^{\\bar\\Omega}_r(x)\\cap\\partial_{\\rm top} \\bar\\Omega)$, where the former is open in $\\Omega$, the latter has codimension at least $2$ proved in step 3 hence measure zero, which completes the proof of the claim. Recall from step 2 and step 3 we know that $\\Theta_X(p)=1$ and $\\Theta_{\\bar\\Omega}(p)\\le 1-\\delta$, it follows\n \n \\begin{equation}\\label{eq:density}\n \\lim_{r\\to 0} \\frac{\\mathcal{H}^N_X(B^{\\bar\\Omega}_r(p))}{\\mathcal{H}^N_X(B^{X}_r(p))}=\\lim_{r\\to 0} \\frac{\\mathcal{H}^N_{\\bar\\Omega}(B^{\\bar\\Omega}_r(p))}{\\mathcal{H}^N_X(B^{X}_r(p))}=\\frac{\\Theta_{\\bar\\Omega}(p)}{\\Theta_{X}(p)}\\le 1-\\delta.\n \\end{equation} \n If $i_1(T_q \\bar \\Omega)$ contains $B^{\\R^N}_{\\varepsilon}(0)$ for some $\\varepsilon>0$, then the local coincidence of the metrics when away from boundary implies $B^{\\R^N}_{\\varepsilon\/2}(0)=B^{T_q\\bar\\Omega}_{\\varepsilon\/2}(0)$, which in turn implies $\\mathcal{H}^N_{\\R^N}(B^{T_q\\bar\\Omega}_{\\varepsilon\/2}(0))=\\mathcal{H}^N_{\\R^N}(B^{\\R^N}_{\\varepsilon\/2}(0))$, this contradicts \\eqref{eq:density}.\n\n {\\bf Step 5:} We derive a contradiction by iteratively blowing up at a topological boundary point. \n \n \n If $N=1$, then the statement is clear thanks to the classification theorem \\cite{KL15}. It suffices to consider the case $N\\ge 2$. In this case the topological boundary of $T_q\\bar\\Omega$ is more than a single point, to show this, it is enough to notice that $i_1$ is bi-Lipschitz (recall remark \\ref{rmk:equiHaus}), so it is an homeomorphism onto its image. \n \n Now we summarize the properties needed for the blow-up procedure. In the setting of this theorem, let $i_0:\\bar \\Omega\\hookrightarrow X$ be the inclusion map, $q\\in \\partial_{\\rm top}\\bar\\Omega$ and $i_1: T_q\\bar\\Omega\\to T_qX$ be the blow-up of $i_0$ at $q$. In order for the cone tip of $T_q\\bar\\Omega$ to be on $\\partial_{\\rm top}i_1(T_q\\bar\\Omega)$, it is sufficient to have:\n \n \\begin{enumerate}\n \\item $q\\in \\mathcal{R}(X)$ and $\\Theta_{\\bar\\Omega}(q)\\le 1-\\delta$; \n \\item $\\mathcal{H}^N_{\\bar\\Omega}(B^{\\bar\\Omega}_r(q))=\\mathcal{H}^N_{X}(B^{\\bar\\Omega}_r(q))$ for sufficiently small $r>0$;\n \\item $q\\notin \\mathcal{F}\\bar\\Omega$.\n \\end{enumerate}\n \n After the blow-up procedure in step 4, the ambient space $T_q X\\cong \\R^N$ has no singular points, moreover, $q\\notin \\partial \\bar\\Omega$ implies $q\\notin \\mathcal{F}\\bar\\Omega$, which means iterated tangent cones at $q$ w.r.t. $(\\bar\\Omega, \\mathsf{d}_\\Omega)$ have no boundary, so every point on $\\partial_{\\rm top} i_1(T_q\\bar\\Omega)$ (not empty by step 4) still satisfies the conditions listed above, so we can continue blowing up at any point on $\\partial_{\\rm top} i_1(T_q\\bar\\Omega)$ other than the cone tip, each time keeping the the base point a point on the topological boundary. 
In finitely many blow-up procedures, we end up with a bi-Lipschitz map $i_N: \\R^N\\to \\R^N$ such that $i_N(0)=0$, $i_N$ not surjective, and $0$ is on the topological boundary of $i_N(\\R^N)$, this is impossible by invariance of domain.\n \n \\end{proof}\n \n \n \n\n\n\n\n\n\\section{Applications}\\label{sec:AppConv}\n In this section we derive from the boundary equivalence in various ambient spaces the {locally totally geodesic} property, i.e., a subset satisfying $\\ncRCD(K,N)$ condition forces the geodesics in intrinsic metric joining interior points to be disjoint from boundary. \n \n We first introduce a technical result which is a direct consequence of H\\\"older continuity along interior of tangent cones pointed out in \\cite[Corollary 1.5]{CN12}, it is available for $\\ncRCD(K,N)$ spaces thanks to Deng's generalization of this statement \\cite{deng2020holder}. \n\n\\begin{proposition}\\label{prop:dentofull}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space, and $\\gamma$ be a geodesic in $X$. The set of points in $\\gamma$ with unique tangent cone is relatively closed in the interior of $\\gamma$. In particular, for each integer $1\\le k\\le N$, $\\gamma\\cap \\mathcal{R}_k$ is closed relative to the interior of $\\gamma$. If in addition $\\gamma\\cap \\mathcal{R}_k$ is dense in the interior of $\\gamma$, then it is all of the interior.\n\\end{proposition}\n\nWe start with the following simplest setting, where the ambient space is a smooth manifold {but there are no assumption on the regularity of topological boundary}.\n\\begin{theorem}\\label{thm:smooth-rcd-subset}\n Let $(M,g)$ be an $n$-dimensional smooth\n \n manifold, and $\\Omega\\subset M$ be open, connected and such that ${\\rm Int}(\\bar\\Omega)=\\Omega$. If $(\\bar\\Omega,\\mathsf{d}_\\Omega, \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\bar\\Omega)$ is a $\\ncRCD(K,n)$ space, then \n \\begin{enumerate}\n \\item $\\partial_{\\rm top}\\bar \\Omega=\\partial \\bar\\Omega$.\n \\item any minimizing geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points in $\\Omega$ does not intersect $\\partial \\Omega$ hence a local geodesic in $(M,g)$, {i.e., $\\Omega$ is locally totally geodesic};\n \\item any minimizing geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points on $\\partial_{\\rm top}\\bar\\Omega$ is either entirely contained in $\\partial_{\\rm top}\\bar\\Omega$, or its interior is entirely in $\\Omega$. In the latter case the minimizing geodesic is also a local geodesic in $(M,g)$.\n \n \\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\n We first show that if $p\\in \\partial_{\\rm top}\\bar\\Omega$, then any tangent cone taken w.r.t. $(\\bar\\Omega,\\mathsf{d}_\\Omega, \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\bar\\Omega)$ at $p$ cannot be $\\R^n$. This is contained in step 3 of the proof of Theorem \\ref{thm:main2}. If there is a tangent cone w.r.t. $(\\bar\\Omega,\\mathsf{d}_\\Omega, \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\bar\\Omega)$ at $p$ is $\\R^n$, then there is a neighborhood $V$ of $p$ open in $\\bar\\Omega$ homeomorphic to $\\R^n$, while there is also a neighborhood $U$ of $p$ open in $M$ homeomorphic to $\\R^n$. 
Then the invariance of domain applied to the inclusion $U\\cap V\\hookrightarrow U$ show that $U\\cap V$ is open in $M$ and $U\\cap V\\subset \\Omega$, a contradiction to $p\\in \\partial_{\\rm top}\\bar\\Omega$. It follows directly that $\\partial_{\\rm top}\\bar\\Omega=\\partial \\bar \\Omega$.\n \n Consider now a minimizing geodesic $\\gamma:[0,1]\\to \\bar\\Omega$ in $(\\bar\\Omega,\\mathsf{d}_{\\Omega})$. Then $\\gamma((0,1))\\cap \\partial_{\\rm top} \\bar\\Omega$ is relatively closed in $\\gamma((0,1))$. By Proposition \\ref{prop:dentofull}, $\\gamma((0,1))\\setminus \\partial_{\\rm top} \\bar\\Omega$ is also relatively closed, this is the set of points in $\\gamma((0,1))$ having tangent cone $\\R^n$. It follows from the connectedness of $\\gamma((0,1))$ that either $\\gamma((0,1))\\setminus \\partial_{\\rm top} \\bar\\Omega$ or $\\gamma((0,1))\\cap \\partial_{\\rm top} \\bar\\Omega$ is empty.\n\\end{proof}\n\n{\n\\begin{remark}\nNote that Theorem \\ref{thm:smooth-rcd-subset} implies that $\\bar \\Omega$ is locally convex in $M$ and is hence locally Alexandrov (globally Alexandrov if it is compact).\n\\end{remark}\n}\nWe now move to the case where the ambient space is a $\\ncRCD(K,N)$ space. With the extra assumption $\\partial X=\\mathcal{F}X$ and the stability of absence of boundary \\cite[Theorem 1.6]{BNS20} of an $\\RCD(K,N)$ space, the exact same idea can be used to prove that ${\\rm Int}(X)$ is strongly geodesically convex.\n\n\\begin{corollary}\\label{cor:intconv}\n Let $(X,\\mathsf{d},\\mathcal{H}^N)$ be a $\\ncRCD(K,N)$ space. Assume $\\partial X=\\mathcal{F} X$, then ${\\rm Int}(X)\\mathrel{\\mathop:}= X\\setminus \\partial X$ is strongly convex, i.e.\\ any geodesic joining points in ${\\rm Int}(X)$ does not intersect $\\partial X$.\n\\end{corollary}\n\n \n\\begin{proof}\n For a constant speed geodesic $\\gamma:[0,1]\\to X$ joining two points in ${\\rm Int}(X)$, if $\\gamma\\cap \\partial X\\neq \\varnothing$, then there exists a $t_0\\in (0,1)$ such that $t_0=\\sup\\{t: \\gamma([0,t))\\cap \\partial X\\}=\\varnothing$ and $\\gamma (t_0)\\in \\partial X$, since $\\partial X$ is closed. Note that $\\gamma(t_0)$ is an interior point of $\\gamma$ and for every $t\\in (0,t_0)$, any tangent cone at $\\gamma(t)$ does not have boundary, now the he stability of absence of boundary \\cite[Theorem 1.6]{BNS20} under pmGH convergence and h\\\"older continuity of tangent cones along the interior of a geodesic yield that any tangent cone at $\\gamma(t_0)$ has no boundary, this contradicts $\\gamma(t_0)\\in \\partial X=\\mathcal{F}X$.\n\\end{proof}\n\n\n\n\t\\begin{corollary}\\label{cor:loc-total-geo}\n\t\tIn the setting of Theorem \\ref{thm:main2}, with the extra assumption that $\\mathcal{F}X=\\partial X$, any (minimizing) geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points in $\\Omega$ is a local geodesic in $(X,\\mathsf{d}_X)$, hence $ \\Omega$ is locally totally geodesic.\n\t\\end{corollary}\n\n\n\nIf we consider only a noncollapsed Ricci limit space with boundary $(X,\\mathsf{d},\\mathfrak{m})$, i.e. 
the pmGH limit of $n$-dimensional manifolds with convex boundary and uniform Ricci curvature lower bound in the interior and uniform volume lower bound of ball of radius $1$ centered at points chosen in the pmGH convergence, then $\\mathcal{F}X=\\partial X$ is already verified \\cite[Theorem 7.8]{BNS20}, naturally we have:\n\n\\begin{corollary}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be a noncollapsed Ricci limit space with boundary, then its interior $X\\setminus\\partial X$ is strongly convex.\n\\end{corollary}\n\nDue to the lack of a notion of intrinsic boundary for collapsed spaces we have been discussing noncollapsed spaces only. Without stratification of singular set, De Philippis-Gigli definition's cannot be applied, and Kapovitch-Mondino's definition also fails to provide the correct definition of boundary, as the metric horn example by Cheeger-Colding \\cite[Example 8.77]{Cheeger-Colding97I} shows a collapsed Ricci limit space can have an interior cusp at which the tangent cone is a half line. Nevertheless we conjecture that Han's Theorem \\ref{thm:han} holds in much larger generality, that is, a subspace in a $\\ncRCD(K,N)$ ambient space along with some reference measure satisfying $\\RCD(K,\\infty)$ condition should still enjoy the property that geodesics in the intrinsic metric joining points in the interior remains away from boundary, and the reference measure gives measure $0$ to the topological boundary. This would provide a partial converse ( different from local-to-global theorem) to the well-known global-to-local theorem for $\\RCD(K,\\infty)$ spaces from \\cite[Theorem 6.18]{AGS14a}: \n\n \\begin{theorem}\n Let $Y$ be a weakly geodesically convex closed subset of an $\\RCD(K,\\infty)$ space $(X, \\mathsf{d},\\mathfrak{m})$ such that $\\mathfrak{m}(Y)>0$ and $\\mathfrak{m}(\\partial_{\\rm top}Y)=0$. Then $(Y,\\mathsf{d}_Y, \\mathfrak{m}\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} Y)$ is also an $\\RCD(K,\\infty)$ space.\n \\end{theorem}\n\nMore precisely, we conjecture that\n\n\\begin{conjecture}\\label{conj:collapseconv}\n Let $\\Omega$ be an open subset in a $\\ncRCD(K,N)$ space $(X,\\mathsf{d}_X, \\mathcal{H}^N)$, where $N$ is a positive integer, so that for some Radon measure $\\mu$ with $\\supp \\mu=\\bar\\Omega$, $(\\bar{\\Omega}, \\mathsf{d}_{\\Omega},\\mu)$ is an $\\RCD(K,\\infty)$ space. Assume that $\\partial_{\\rm top} \\Omega$ is $\\mathcal{H}^{N-1}$-rectifiable, then $\\mu\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\Omega\\ll \\mathcal{H}^N\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex}\\Omega$ and $\\mu(\\partial_{\\rm top} \\Omega)=0$ and every geodesic joining two points in $\\Omega$ w.r.t. $\\mathsf{d}_\\Omega$ does not intersect $\\partial_{\\rm top}\\bar\\Omega$ hence a local geodesic w.r.t. $\\mathsf{d}_X${, in particular, $\\Omega$ is locally totally geodesic}.\n\\end{conjecture}\n\n\n\n\n\\section{ Almost convexity}\\label{sec:almost}\n\n\n\n\\subsection{1-D localization}\nWe minimally collect the elements of the localization technique introduced in \\cite{Cav14} and \\cite{CavMon15}, we remark that this technique is available for a much general class of metric measure spaces, the so called essentially non-branching ${\\rm MCP}(K,N)$ spaces, which contains essentially non-branching $\\CD(K,N)$ spaces, hence $\\RCD(K,N)$ spaces (\\cite{RajalaSturm12}). 
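\n\nTo fix ideas, the following elementary model case may be kept in mind (an illustration added for the reader, not taken from \\cite{Cav14,CavMon15}): let $X=\\R^2$ with the Euclidean distance, $\\mathfrak{m}=\\mathcal{L}^2$ the Lebesgue measure, $x_0\\in\\R^2$ a fixed point and $u=\\mathsf{d}(x_0,\\cdot)$. In the notation introduced below, the classes $X_{\\alpha}$ are the open radial rays emanating from $x_0$, the quotient set $Q$ can be identified with the unit circle $S^1$, and, up to the normalization of the quotient measure, the disintegration of Theorem \\ref{thm:disint} reduces to the polar coordinate formula\n\\begin{equation}\n \\int_{\\R^2} f \\,\\dd\\mathcal{L}^2=\\int_{S^1}\\left(\\int_0^{\\infty} f(x_0+r\\omega)\\, r\\,\\dd r\\right) \\mathcal{H}^1(\\dd\\omega)\n\\end{equation}\nfor every Borel function $f\\ge 0$, with densities $h_{\\alpha}(r)=r$, which are indeed $\\log$ concave.\n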
\n\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space, $u$ be a $1$-Lipschitz function. Define the transport set induced by $u$ as:\n\\[\n\\Gamma(u)\\mathrel{\\mathop:}=\\{(x,y)\\in X\\times X: u(x)-u(y)=\\mathsf{d}(x,y)\\},\n\\]\nand its transpose as $\\Gamma^{-1}(u)\\mathrel{\\mathop:}= \\{(x,y)\\in X\\times X: (y,x)\\in \\Gamma(u)\\}$. The union $R_u\\mathrel{\\mathop:}= \\Gamma^{-1}(u)\\cup \\Gamma(u)$ defines a relation on $X$. By excluding negligible isolated and branching points, one can find a transport set $\\mathcal{T}_u$ such that $\\mathfrak{m}(X\\setminus\\mathcal{T}_u)=0$ and $R_u$ restricted to $\\mathcal{T}_u$ is an equivalence relation. So there is a partition of $\\mathcal{T}_u:=\\cup_{\\alpha\\in Q} X_{\\alpha}$, where $Q$ is a set of indices, denote by $\\mathfrak{Q}:\\mathcal{T}_u\\to Q$ the quotient map. In \\cite[Proposition 5.2]{Cav14}, it is shown that there exists a measurable selection $s:\\mathcal{T}_u\\to \\mathcal{T}_u$ such that if $x R_u y$ then $s(x)=s(y)$, so we can identify $Q$ as $s(\\mathcal{T}_u)\\subset X$. Equip $Q$ with the $\\sigma$-algebra induced by $\\mathfrak{Q}$ and the measure $\\mathfrak{q}\\mathrel{\\mathop:}= \\mathfrak{Q}_{\\sharp}(\\mathfrak{m}\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex}\\mathcal{T}_u)$, we can hence view $\\mathfrak{q}$ as a Borel measure on $X$. Furthermore, each $X_{\\alpha}$ is shown (\\cite[Lemma 3.1]{CavMon15}) be to isometric to an interval $I_{\\alpha}$, the distance preserving map $\\gamma_{\\alpha}: I_{\\alpha}\\to X_{\\alpha}$ extend to an geodesic still denoted by $\\gamma_{\\alpha}:\\bar{I}_{\\alpha}\\to X$. Putting several results together, we have (\\cite[Theorem A.5]{KapMon19}):\n\n\\begin{theorem}\\label{thm:disint}\n Let $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space. $u$ be a $1$-Lipschitz function. Then $\\mathfrak{m}$ admits a disintegration:\n \\[\n \\mathfrak{m}=\\int_{Q}\\mathfrak{m}_{\\alpha}\\mathfrak{q}(\\dd\\alpha),\n \\]\n where $\\mathfrak{m}_{\\alpha}$ is a non-negative Radon measure on $X$, such that \n \\begin{enumerate}\n \\item For any $\\mathfrak{m}$-measurable set $B$, the map $\\alpha\\mapsto \\mathfrak{m}_{\\alpha}(B)$ is $\\mathfrak{q}$-measurable.\n \\item\\label{item:strcons} for $\\mathfrak{q}$-a.e.\\ $\\alpha\\in Q$, $\\mathfrak{m}_{\\alpha}$ is concentrated on $X_{\\alpha}=\\mathfrak{Q}^{-1}(\\alpha)$. This property is called strong consistency of the disintegration.\n \\item \\label{item:disint} for any $\\mathfrak{m}$-measurable set $B$ and $\\mathfrak{q}$-measurable set $C$, it holds\n \\[\n \\mathfrak{m}(B\\cap \\mathfrak{Q}^{-1}(C))=\\int_C \\mathfrak{m}_{\\alpha}(B)\\mathfrak{q}(\\dd\\alpha).\n \\]\n \\item\\label{item:pos} for $\\mathfrak{q}$-a.e.\\ $\\alpha\\in Q$, $\\mathfrak{m}_{\\alpha}=h_{\\alpha}\\mathcal{H}^1\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} X_{\\alpha}\\ll\\mathcal{H}^1\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} X_{\\alpha}$, where $h_{\\alpha}$ is a $\\log$ concave density, and $(\\bar{X}_{\\alpha}, \\mathsf{d},\\mathfrak{m}_{\\alpha})$ is an $\\RCD(K,N)$ space.\n \\end{enumerate}\n\\end{theorem}\n\n\n\n\n\\subsection{ Proof of Proposition \\ref{thm:almostconvex} and Consequences}\n\n\\begin{proof}[Proof of Proposition \\ref{thm:almostconvex}]\nTake $x\\in X$, disintegrate $\\mathfrak{m}$ w.r.t $\\mathsf{d}_x\\mathrel{\\mathop:}= \\mathsf{d}(x,\\cdot)$. 
Item \\ref{item:disint} in Theorem \\ref{thm:disint} yields that \n\\begin{equation}\n \\begin{split}\n 0=\\mathfrak{m}(X\\setminus \\mathcal{R}_n)=\\int_Q \\mathfrak{m}_\\alpha(X\\setminus \\mathcal{R}_n)\\mathfrak{q}(\\dd\\alpha).\n \\end{split}\n\\end{equation}\nThen for $\\mathfrak{q}$-a.e.\\ $\\alpha\\in Q$, $\\mathfrak{m}_\\alpha(X\\setminus \\mathcal{R}_n)=0$, we set $\\widetilde{Q}\\mathrel{\\mathop:}= \\{\\alpha\\in Q: \\mathfrak{m}_{\\alpha}(X\\setminus \\mathcal{R}_n)=0 \\}$, then $R_x\\mathrel{\\mathop:}= (\\cup_{\\alpha\\in \\widetilde{Q}}X_{\\alpha})\\cap \\mathcal{R}_n$ is the desired set. Indeed, for any $y\\in R_x$, there is a geodesic (segment) $\\gamma$ contained in $X_\\alpha$ joining $x,y$, for some $\\alpha\\in \\widetilde{Q}$, with $\\mathfrak{m}_{\\alpha}(\\gamma\\setminus \\mathcal{R}_n)=0$. the $\\log$-concavity of $h_{\\alpha}$ implies that $h_{\\alpha}$ is $\\mathcal{H}^1$ a.e.\\ positive on $X_{\\alpha}$, so we get that $\\mathcal{H}^1\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} X_\\alpha(\\gamma\\setminus \\mathcal{R}_n)=0$, which in turn implies that regular points of essential dimension is dense in the interior of $\\gamma$. Now apply Proposition \\ref{prop:dentofull}, we see that the interior of $\\gamma$ is entirely in $\\mathcal{R}_n$ and the end point $y$ is also in $\\mathcal{R}_n$. \n\\end{proof}\n\n\n\n{ Since $\\mathcal R_n\\subset {\\rm Int}(X)$ Proposition \\ref{thm:almostconvex} immediately implies almost convexity of ${\\rm Int}(X) = X\\setminus \\partial X$. }\n\n\n\\begin{corollary}\nLet $(X,\\mathsf{d},\\mathcal{H}^N)$ be a $\\ncRCD(K,N)$ space. For \\textit{every} $x\\in {\\rm Int}(X)$\n there exists a subset $R_x\\subset {\\rm Int}(X)$ so that $\\mathfrak{m}(X\\setminus R_x)=0$ and for any $y\\in R_x$ there is a minimizing geodesic joining $x,y$ and entirely contained in ${\\rm Int}(X)$. \n\n\\end{corollary}\n\n\n\n \n We then naturally obtain the following corollary.\n\n\\begin{corollary}\nIn the setting of Theorem \\ref{thm:main2}, for every point $x\\in \\Omega$, there exists a set $\\mathcal{R}_x\\subset\\Omega$ such that $\\mathcal{H}^N(\\bar\\Omega\\setminus\\mathcal{R}_x)=0$ and for every $y\\in\\mathcal{R}_x$, there is a minimizing geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining $x,y$ lies entirely in $\\Omega$, hence a local geodesic in $(X,\\mathsf{d}_X)$, {i.e., $\\Omega$ is almost locally totally geodesic}. \n\\end{corollary}\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Remark on thermodynamic consistency}\n\\label{sec:Constitutive_functions_Process_consistency}\n\nIn Section \\ref{sec:Curing_model_general_framework} the thermodynamic consistency of the general modelling framework has been considered but not finally proved since specific constitutive functions had not been set up to that point. The evaluation of the two remaining conditions \\eqref{eq:dissi_qdot} and \\eqref{eq:dissi_zdot} is discussed in this section. To this end, constitutive assumptions presented in Sections \\ref{sec:Constitutive_functions_Degree_of_Cure} - \\ref{sec:Constitutive_functions_Process_dependency} are employed.\n\nFirstly, inequality~\\eqref{eq:dissi_zdot} is considered. To prove this condition, the partial derivative of the isochoric part of the free energy function $\\hat{\\psi}_G$ with respect to the intrinsic time scale $z$ has to be calculated. 
Since only the viscoelastic parts $\\hat{\\psi}_{ve,k}$ include the dependency on $z$ (cf. Eq.~\\eqref{eq:psi_G}), it remains to show that\n\\begin{equation}\n\\label{eq:therm_cons_psi_ve}\n - \\sum_{k=1}^{N_k} \\dfrac{\\partial \\hat{\\psi}_{ve,k}}{\\partial z} \\ge 0 \\ .\n\\end{equation}\nFurthermore, it is assumed that not only the sum of all Maxwell elements but also every single Maxwell element meets the consistency condition. Thus, it is sufficient to show that\n\\begin{equation}\n\\label{eq:therm_cons_psi_ve_k}\n - \\dfrac{\\partial \\hat{\\psi}_{ve,k}}{\\partial z} \\ge 0\n\\end{equation}\nholds. According to \\cite{Haupt_Lion_2002} or \\cite{Lion_Kardelky_2004}, the thermodynamic consistency of the ansatz \\eqref{eq:psi_visc_single} is met, if the conditions \n\\begin{equation}\n\\label{eq:therm_cons_psi_ve_conditions}\n G_k(t) \\ge 0 \\ ,\n \\qquad\n \\dfrac{\\rm d}{{\\rm d}t} G_k(t) \\le 0 \\ ,\n \\qquad\n \\dfrac{\\rm d^2}{{\\rm d}t^2} G_k(t) \\ge 0\n\\end{equation}\nhold. Obviously, these conditions are satisfied by the relaxation function \\eqref{eq:kernel}.\n\nNext, the remaining condition \\eqref{eq:dissi_qdot} is considered. Since inequality \\eqref{eq:dissi_qdot} cannot be evaluated in a general form, an estimation under consideration of some physically reasonable assumptions is employed. In the first step, the term $\\partial \\hat{\\psi}_{\\theta C}(\\theta,q) \/ \\partial q $ of Eq.~\\eqref{eq:dissi_qdot} is estimated. Here, an ansatz for the thermochemical part of the specific enthalpy per unit mass is introduced \\cite{Kolmeder_Etal_2011,Lion_Yagimli_2008}\n\\begin{equation}\n\\label{eq:ansatz_enthalpy}\n h_{\\theta C}(\\theta,q) = h_{fluid}(\\theta) \\, (1-q) + h_{solid}(\\theta) \\, q \\ .\n\\end{equation}\nThe functions $h_{fluid}(\\theta)$ and $h_{solid}(\\theta)$ are the specific enthalpy per unit mass of the uncured and fully cured material, respectively. Note that the general ansatz \\eqref{eq:ansatz_enthalpy} depends on the temperature $\\theta$ and the degree of cure $q$. Specific models for the consideration of temperature dependent behaviour have been introduced in \\cite{Kolmeder_Etal_2011} and \\cite{Lion_Yagimli_2008}. However, for the estimation conducted in this section this is omitted and the values for $h_{fluid}$ and $h_{solid}$ are assumed to be constant. \n\nThe next step is to calculate the thermochemical free energy $\\psi_{\\theta C}$ from $h_{\\theta C}$. This can be accomplished by approaches presented in \\cite{Lion_Yagimli_2008} or \\cite{Mahnken_2013}. However, here an alternative formulation of this calculation step is used as follows. Firstly, the Legendre transformation \n\\begin{equation}\n\\label{eq:legendre}\n \\psi + \\theta \\, \\eta = h + \\dfrac{1}{\\JI\\STAPEL\\varrho!^\\SLtilde\\!} \\ I_1(\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde\\cdot \\Ten2 \\gamma) \n\\end{equation}\nis employed which relates the free energy and the enthalpy (see, for example, \\cite{Lion_Yagimli_2008,Lubarda_2004}). Therein, $\\eta$ is the specific entropy per unit mass and $\\Ten2 \\gamma = (\\STAPEL C!_\\SLstrich!_\\SLstrich-\\STAPEL I!_\\SLstrich!_\\SLstrich)\/2$ is the Green-Lagrange strain tensor. Next it is assumed, that DSC experiments take place at zero mechanical stresses \\cite{Lion_Yagimli_2008}. Thus, the last term of \\eqref{eq:legendre} is neglected. 
Furthermore, the constitutive relation $\\eta = - \\partial \\psi \/ \\partial \\theta$ at constant stress state is employed and the resulting equation is formulated with respect to the thermochemical potentials $\\psi_{\\theta C}$ and $h_{\\theta C}$. This yields the reduced relation \n\\begin{equation}\n\\label{eq:potentials_dgl}\n \\psi_{\\theta C}(\\theta,q) - \\theta \\, \\dfrac{\\partial \\psi_{\\theta C}(\\theta,q)}{\\partial \\theta} = h_{\\theta C}(\\theta,q) \\ .\n\\end{equation}\nEq.~\\eqref{eq:potentials_dgl} is a differential equation which has to be solved for $\\psi_{\\theta C}$. Its solution reads as\n\\begin{equation}\n\\label{eq:potentials_dgl_solu}\n \\psi_{\\theta C}(\\theta,q) = C \\, \\dfrac{\\theta}{\\theta_0} - \\theta \\, \\int \\dfrac{1}{\\theta^2}\\,h_{\\theta C}(\\theta,q) \\, {\\rm d}\\theta \\ .\n\\end{equation}\nHere, $C$ is an integration constant that does not need to be determined in our evaluation. Next, this general solution is applied to the ansatz for the thermochemical enthalpy which has been introduced in Eq.~\\eqref{eq:ansatz_enthalpy}. This yields a specific model for the thermochemical free energy \n\\begin{equation}\n\\label{eq:potentials_dgl_solu_specific}\n \\psi_{\\theta C}(\\theta,q) = C \\, \\dfrac{\\theta}{\\theta_0} + h_{fluid}\\, (1-q) + h_{solid} \\, q \\ .\n\\end{equation}\nBased on this solution, the term $\\partial \\hat{\\psi}_{\\theta C}(\\theta,q) \/ \\partial q $ of Eq. \\eqref{eq:dissi_qdot} is calculated by\n\\begin{equation}\n\\label{eq:dpsi_dq_solu}\n \\dfrac{\\partial \\hat{\\psi}_{\\theta C}(\\theta,q)}{\\partial q} \n = \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\ \\dfrac{\\partial \\psi_{\\theta C}(\\theta,q)}{\\partial q} \n = \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\ (h_{solid} - h_{fluid}) \\ .\n\\end{equation}\nTo quantify this expression, the maximum specific reaction enthalpy per unit mass $\\Delta h$ of a complete curing experiment has to be taken into account. This quantity has been measured by DSC experiments (see \\cite{Kolmeder_Etal_2011,Lion_Yagimli_2008} for detailed description) and can be related to the model \\eqref{eq:ansatz_enthalpy} by the relation\n\\begin{equation}\n\\label{eq:hsolid_hfluid}\n \\Delta h = h(\\theta,q=1) - h(\\theta,q=0) = h_{solid} - h_{fluid} \\ .\n\\end{equation}\nHere, a value of $\\Delta h \\approx - 300 \\, \\rm J\/g$ has been identified. Furthermore, taking into account the mass density $\\JI\\STAPEL\\varrho!^\\SLtilde\\! \\approx 1.1 \\, \\rm g\/cm^3$, the first term of Eq.~\\eqref{eq:dissi_qdot} is estimated by \n\\begin{equation}\n\\label{eq:dpsidq_value}\n\\partial \\hat{\\psi}_{\\theta C}(\\theta,q) \/ \\partial q = -330 \\ \\rm MPa \\ .\n\\end{equation}\n\nNext, the second term in inequality \\eqref{eq:dissi_qdot} is examined. Therein, the chemical shrinkage parameter $\\beta_q$ can be identified by the help of Eq.~\\eqref{eq:phi_thetaC}. Here, the relation\n\\begin{equation}\n\\label{eq:calc_betaq}\n \\dfrac{1}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial q} \\,\n = \\beta_q \n\\end{equation}\nholds. Furthermore, a relation to the hydrostatic pressure $p$ is obtained by evaluation of\n\\begin{equation}\n\\label{eq:calc_pressure}\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M} \\, J_M= - J p\\ ,\n \\quad\n p = - \\dfrac{1}{3} \\, \\dfrac{1}{J} \\, I_1(\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich) \\ .\n\\end{equation}\n\nFinally, inequality \\eqref{eq:dissi_qdot} can be evaluated. 
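Before doing so, a brief unit check of the value given in Eq.~\\eqref{eq:dpsidq_value}, using only the quantities quoted above, may be helpful:\n\\begin{equation}\n \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\ (h_{solid} - h_{fluid}) \\approx 1.1 \\, {\\rm g\/cm^3} \\times \\left(-300 \\, {\\rm J\/g}\\right) = -330 \\, {\\rm J\/cm^3} = -330 \\ {\\rm MPa} \\ ,\n\\end{equation}\nsince $1\\,{\\rm J\/cm^3} = 1\\,{\\rm MPa}$.\n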
To this end, expressions \\eqref{eq:calc_betaq} and \\eqref{eq:calc_pressure} are substituted into Eq. \\eqref{eq:dissi_qdot} which yields\n\\begin{equation}\n\\label{eq:reformulate_cond}\n - \\dfrac{\\partial \\hat{\\psi}_{\\theta C}(\\theta,q)}{\\partial q}\n - \\beta_q\\,J\\,p \\ \\ge 0 \\ .\n\\end{equation}\nMoreover, the estimation \\eqref{eq:dpsidq_value} and the chemical shrinkage parameter $\\beta_q=-0.05$ (cf. Section \\ref{sec:Constitutive_functions_Volume}) are inserted in \\eqref{eq:reformulate_cond}, and the resulting inequality is resolved for the expression $J\\,p$. This finally yields the condition\n\\begin{equation}\n\\label{eq:pressure_cond}\n J\\,p \\ \\ge - 6600 \\ {\\rm MPa} \\ .\n\\end{equation}\nSince $J>0$ holds in general, it can be concluded from \\eqref{eq:pressure_cond} that a hydrostatic pressure with $p>0$ does not endanger the thermodynamic consistency. However, if the material is loaded in hydrostatic tension ($p<0$), the condition \\eqref{eq:pressure_cond} may be violated. If a constant volume is assumed ($J=1$), a hydrostatic tension of $p = -6600 \\, \\rm MPa$ would be necessary to violate thermodynamic consistency. Nevertheless, this value seems to be unrealistic to be achieved in real experiments. Thus, the thermodynamic consistency can be proved for the case of physically reasonable conditions (see also \\cite{Lion_Hoefer_2007,Lion_Yagimli_2008,Mahnken_2013}).\n\n\\section{Introduction}\n\\label{sec:Introduction}\n\nMotivated by the desire for continuous improvement, industrial countries constantly aim to develop innovative concepts and products of highest standards. Within this context, lightweight construction and smart structures are doubtless crucial keywords nowadays. Within the last decade, a number of new developments based on lightweight concepts were successfully established in nearly all fields of engineering \\cite{Wiedemann_Sinapius_2013}. One very challenging aspect for the implementation of such new concepts is the joining technology. Thereby, adhesives are given an important role because they join the most diverse materials, not only locally but also as full-surface bonding \\cite{messler_2004}. In the research field of smart structures, high importance is awarded to piezoceramic patches as they combine static structures with actuator and sensor functionality \\cite{Prasad_Etal_2005}. In that context, the Piezoceramic Fibre Composites (PFC) were shown to be the most promising technology. More precisely, the Macro Fibre Composite (MFC) is the most sophisticated device yet invented \\cite{Lloyd_2004}. \n\nDespite excellent properties of piezoceramic patches, the state of art is their application to the fabricated parts only after manufacturing which leads to a time and cost intense procedure \\cite{Neugebauer_Etal_2010_ProdEng}. Scientific fundamentals for an economic production of active structural components are worked out in the Collaborative Research Center\/Transregio \"PT-PIESA\". One of the pursued concepts, that is considered in this paper, is the joining of sheet metal lightweight construction and piezo elements with structural adhesives to smart Piezo Metal Composites (PMC) by an innovative ma\\-nu\\-fac\\-tu\\-ring approach. The basic idea is to merge the steps of forming and piezo application into one process such that they are no longer separated \\cite{Neugebauer_Etal_2010_ProdEng,Neugebauer_Etal_2013_WGP}. 
A schematic representation of the approach is depicted in Fig.~\\ref{fig:pic_Principle-PMC}.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig1}\n\t\\caption{Schematic illustration of the piezo metal composite (top) and manufacturing process (bottom)}\n\t\\label{fig:pic_Principle-PMC}\n\\end{figure}\n\nFirstly, the MFC is entirely surrounded by a structural adhesive, placed inside two light metal sheets, where one of them is the sheet intended to be formed and the other is a local covering sheet, see Fig.~\\ref{fig:pic_Principle-PMC} (top). A specific distance between both metal sheets is adjusted by the help of spacers. Next, the sandwich structure is formed to its final shape while the adhesive is not yet cured. During this stage, the MFC is protected from excessively high loads by a floating support. After the forming process, the adhesive cures to a solid and thereby provides a material closure between the MFC and the light metal structure in the formed state. The principal feasibility of this method could already be demonstrated in earlier studies (see, for instance, \\cite{Drossel_Etal_2009_CIRP,Neugebauer_Etal_2010_ProdEng,Neugebauer_Etal_2013_WGP}). \n\nWithin the PMC, an essential role is given to the adhesive layer. Beside the impact of the adhesive's specific material behaviour, its geometrical design (i.e. the layer thickness) is of great importance. If, on the one hand, the adhesive layer is very thin, the protective function during forming vanishes. On the other hand, if the adhesive layer is too thick, risk of overloads due to volume shrinkage processes increases. Moreover, secondary deformations of the PMC might occur and a thick adhesive layer may lead to loss of the electric field in the piezoceramic due to the additional capacity between actuator and structure \\cite{Seemann_Sattel_1999}. \n\nFirst studies on the influence of the adhesive during forming have been conducted by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. However, curing of the adhesive has not been taken into account so far. Thus, the aim of this study is to set up a simulation tool which enables the simulation of curing phenomena in adhesives and to investigate the impact of the curing process on formed PMCs and more precisely on the embedded MFC. One essential part of this work is to provide a phenomenological model which is capable of representing the material behaviour of the adhesive during cure and in the fully cured state. The main characteristics of this model are\n\\begin{itemize}\n \\item the description of the progress of the chemical process during cure,\n \\item the modelling of dependencies of mechanical properties during the curing process as well as at different temperatures, and\n \\item the prediction of volume changes which are caused by chemical shrinkage and heat expansion phenomena.\n\\end{itemize}\nHere, different modelling approaches have been presented before (see, for example, \\cite{Hossain_Etal_2009,Hossain_Etal_2010,Klinge_Etal_2012,Kolmeder_Etal_2011,Liebl_Etal_2012,Lion_Hoefer_2007,Mahnken_2013}). The basic structure of those models is similar. Beside the application to different specific materials, one further basic difference is their employed mechanical submodel. 
For example models of finite strain elasticity \\cite{Hossain_Etal_2009}, finite strain viscoelasticity \\cite{Hossain_Etal_2010,Klinge_Etal_2012,Kolmeder_Etal_2011,Lion_Hoefer_2007,Mahnken_2013} and viscoplasticity at small \\cite{Liebl_Etal_2012} and finite strains \\cite{Landgraf_Ihlemann_2011} have been used. In this paper, a general modelling approach which includes the main characteristics of the Lion and H\\\"ofer model \\cite{Lion_Hoefer_2007} is presented (see Section \\ref{sec:Curing_model}). However, it is formulated in a more general way. Especially, different mechanical submodels can be incorporated to represent the mechanical behaviour during curing. \n\nIn Section~\\ref{sec:Curing_model_constitutive_functions}, a particular model is introduced which is able to capture curing phenomena of one specific two component epoxy based adhesive. To this end, appropriate constitutive material functions are chosen and the thermodynamic consistency is evaluated. Within this specification, the mechanical behaviour is represented by a combination of models of finite strain pseudo-elasticity and viscoelasticity. Furthermore, changes in volume due to heat expansion and chemical shrinkage processes are taken into account.\n\nThe second part of this paper deals with different aspects of the finite element implementation (see Section~\\ref{sec:FEM_implementation}). The numerical integration of constitutive equations as well as the derivation of appropriate stress and material tangent measures for the implementation into the finite element software \\textit{ANSYS}$^{\\rm TM}$ are described. Moreover, a new algorithm is presented, which addresses numerical difficulties that arise due to thermal and chemically related volume changes. The constitutive functions for the representation of heat expansion and chemical shrinkage processes are introduced with respect to specific reference values for the temperature and a degree of cure, which is an internal variable representing the progress of the curing process. If initial values for both variables differ from previously defined reference values, an immediate volume change would be computed which may lead to instant mesh distortion. The new algorithm calculates a correction and thus keeps the initial volume constant for arbitrary initial values.\n\nFinally, the material model is applied to the simulation of curing processes in bonded PMCs which is described in Section \\ref{sec:Finite_element_simulation}. Here, a finite element model of a deep drawn cup geometry is employed in a simplified manner such that only the part directly surrounding the MFC is modelled. To obtain a realistic forming simulation, the geometry of the final formed model relies on data which has been extracted from comprehensive simulations presented by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. This simplified approach allows for reduction of computational efforts related to complicated forming simulations and makes it possible to concentrate on phenomena which accompany the curing of the adhesive. An analysis of the strains in the MFC will highlight the benefits of the new process chain of manufacturing described above. \n\n\n\n\n\n\\section{Constitutive modelling of curing phenomena in polymers}\n\\label{sec:Curing_model}\n\nFor the mathematical representation of the phenomenological model presented in this paper, a coordinate free tensor formalism according to Ihlemann \\cite{Ihlemann_2006} is used. 
Thereby, the rank of a tensor is denoted by the number of its underlines. To exemplify, $\\Ten2 X$ and $\\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt$ are second- and fourth-rank tensors, respectively. Furthermore, the following general notations are used throughout this article:\n\\begin{itemize}\n \\item second-rank identity tensor: $\\STAPEL I!_\\SLstrich!_\\SLstrich$, \\\\[-2mm]\n \\item first and third principle invariant: $I_1(\\Ten2 X)$ , $I_3(\\Ten2 X)$, \\footnote{ The first principle invariant equals the trace operator of the Cartesian coordinates $X_{ab}$, thus $I_1(\\Ten2 X) = {\\rm trace}[X_{ab}]$. Accordingly, the third principle invariant can be derived by the determinant, thus $I_3(\\Ten2 X) = {\\rm det}[X_{ab}]$}\\\\[-2mm]\n \\item deviatoric part of a tensor: $\\Ten2 X' = \\Ten2 X - \\frac{1}{3}\\,I_1(\\Ten2 X) \\, \\STAPEL I!_\\SLstrich!_\\SLstrich$,\\\\[-2mm]\n \\item unimodular part of a tensor: $\\Ten2 X!^\\vrule width \\SLeffbreite height.4pt = I_3(\\Ten2 X)^{-1\/3} \\, \\Ten2 X$,\\\\[-2mm]\n \\item inverse and transpose of a tensor: $\\Ten2 X^{\\minus 1}$ and $\\Ten2 X^T$,\\\\[-2mm]\n \\item material time derivative: $\\frac{\\rm d}{{\\rm d}t}\\Ten2 X = \\Ten2 X!^\\SLdreieck$.\n\\end{itemize}\n\nA further tensor operation is introduced as follows. Assume two arbitrary second rank tensors $\\Ten2 X$ and $\\Ten2 Y$ and a symmetric second rank tensor $\\Ten2 Z = \\Ten2 Z^T$. Based on these, a tensor operation denoted by superscript $S_{24}$ is defined by\n\\begin{equation}\n\\label{eq:S24}\n \\left( \\Ten2 X \\otimes \\Ten2 Y\\right)^{S_{24}} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\Ten2 Z\n = \\dfrac{1}{2} \\left( \\Ten2 X \\cdot \\Ten2 Z \\cdot \\Ten2 Y + \\Ten2 Y^T \\cdot \\Ten2 Z \\cdot \\Ten2 X^T \\right) \\, .\n\\end{equation} \nIn the following, the kinematics and constitutive assumptions of the general modelling approach are presented. \n\n\\subsection{Kinematics}\n\\label{sec:Kinematics}\nThe phenomenological model for the representation of adhesive's curing is built up within the framework of nonlinear continuum mechanics using the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich$ for the description of the underlying kinematics. The corresponding right Cauchy-Green tensor $\\STAPEL C!_\\SLstrich!_\\SLstrich$ is defined by\n\\begin{equation}\n\\label{eq:rCG}\n \\STAPEL C!_\\SLstrich!_\\SLstrich = \\STAPEL F!_\\SLstrich!_\\SLstrich^T\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\nFurthermore, the total volume ratio is abbreviated by $J = I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich) = {\\rm d} V \/{\\rm d}\\STAPEL V!^\\SLtilde $. 
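As an elementary illustration of this notation (added for clarity, not part of the constitutive model itself): a homogeneous volumetric deformation $\\STAPEL F!_\\SLstrich!_\\SLstrich=\\lambda\\,\\STAPEL I!_\\SLstrich!_\\SLstrich$ with $\\lambda>0$ yields\n\\begin{equation}\n \\STAPEL C!_\\SLstrich!_\\SLstrich=\\lambda^2\\,\\STAPEL I!_\\SLstrich!_\\SLstrich \\ ,\n \\qquad\n J=I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich)=\\lambda^3 \\ .\n\\end{equation}\n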
To capture different sources of deformation, the deformation gradient gets multiplicatively decomposed as depicted in Fig.~\\ref{fig:defgrad}.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig2}\n\t\\caption{Multiplicative decomposition of the deformation gradient}\n\t\\label{fig:defgrad}\n\\end{figure}\nFirstly, $\\STAPEL F!_\\SLstrich!_\\SLstrich$ gets decomposed into a thermochemical part $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$ and a mechanical part $\\STAPEL F!_\\SLstrich!_\\SLstrich_M$ by\n\\begin{equation}\n\\label{eq:split_mech_thermochem}\n \\STAPEL F!_\\SLstrich!_\\SLstrich = \\STAPEL F!_\\SLstrich!_\\SLstrich_{M}\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C} \\ .\n\\end{equation}\nThe thermochemical part is related to chemical shrinkage and heat expansion phenomena which are assumed to be isotropic. Thus, $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$ is an isotropic tensor \n\\begin{equation}\n\\label{eq:FthetaC}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C} = J_{\\theta C}^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich \n \\ , \\quad\n J_{\\theta C} = \\varphi_{\\theta C}(\\theta,q) \\ ,\n\\end{equation}\nwhere $J_{\\theta C} = I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}) = {\\rm d} V_{\\theta C}\/{\\rm d}\\STAPEL V!^\\SLtilde$ is the scalar valued volume ratio which denotes the pure thermochemical volume change. This volume ratio is constituted by a function $\\varphi_{\\theta C}(\\theta,q)$ which depends on the thermodynamic temperature $\\theta$ and a variable $q$ referred to as degree of cure. A specific ansatz for $\\varphi_{\\theta C}$ is provided in Section \\ref{sec:Constitutive_functions_Volume}. The mechanical part of the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_M$ as well as its corresponding right Cauchy-Green tensor $\\STAPEL C!_\\SLstrich!_\\SLstrich_M$ are calculated by substituting Eq.~\\eqref{eq:FthetaC}$_1$ into \\eqref{eq:split_mech_thermochem} which yields\n\\begin{equation}\n\\label{eq:FM_CM}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_M = J_{\\theta C}^{\\,-1\/3} \\STAPEL F!_\\SLstrich!_\\SLstrich \\ , \\quad\n \\STAPEL C!_\\SLstrich!_\\SLstrich_M = \\STAPEL F!_\\SLstrich!_\\SLstrich_M^T\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich_M = J_{\\theta C}^{\\,-2\/3} \\STAPEL C!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\nNext, the mechanical deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_{M}$ is multiplicatively decomposed into $\\STAPEL F!_\\SLstrich!_\\SLstrich_V$ representing pure mechanical volume changes and a remaining isochoric (i.e. volume-preserving) part $\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich$:\n\\begin{equation}\n\\label{eq:split_vol_isochor}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_M = \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich_{V} \\ ,\n \\quad \\STAPEL F!_\\SLstrich!_\\SLstrich_V = J_M^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\nTherein, $J_M = I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich_M) = {\\rm d}\\widehat{V} \/{\\rm d}V_{\\theta C}$ is the mechanical volume ratio. Substituting \\eqref{eq:split_vol_isochor}$_2$ into \\eqref{eq:split_vol_isochor}$_1$ yields the isochoric deformation gradient \n\\begin{equation}\n\\label{eq:Fg}\n \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich = J_M^{-1\/3} \\ \\STAPEL F!_\\SLstrich!_\\SLstrich_M = J^{\\,-1\/3} \\ \\STAPEL F!_\\SLstrich!_\\SLstrich \\ ,\n\\end{equation}\nwhich exhibits the property $I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich) = 1$. 
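\n\nAs a small illustration of the two decomposition steps, the following sketch (Python\/NumPy, purely illustrative) evaluates Eqs.~\\eqref{eq:split_mech_thermochem} - \\eqref{eq:Fg} for a given deformation gradient and a given thermochemical volume ratio $J_{\\theta C}$:\n\\begin{verbatim}\nimport numpy as np\n\ndef split_deformation(F, J_thetaC):\n    # multiplicative split F = F_M . F_thetaC\n    # with isotropic F_thetaC\n    J   = np.linalg.det(F)            # total volume ratio\n    F_M = J_thetaC ** (-1.0/3.0) * F  # mechanical part\n    C_M = F_M.T @ F_M                 # mechanical right CG tensor\n    J_M = J / J_thetaC                # mechanical volume ratio\n    F_g = J ** (-1.0/3.0) * F         # isochoric part, det(F_g) = 1\n    return F_M, C_M, J_M, F_g\n\\end{verbatim}\n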
Its corresponding isochoric right Cauchy-Green tensor is calculated by\n\\begin{equation}\n\\label{eq:Cg}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich = \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich^T\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich = J_M^{\\,-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich_M= J^{-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\n\nAt this point, all necessary aspects of the underlying kinematics have been introduced. However, for subsequent evaluations, time derivatives of different kinematic quantities have to be calculated as well. In the following, the most important relations will be summarized. \n\nThe material time derivative of the mechanical right Cauchy-Green tensor $\\STAPEL C!_\\SLstrich!_\\SLstrich_M$ (cf. Eq. \\eqref{eq:FM_CM}$_2$) is given by\n\\begin{equation}\n\\label{eq:CMdot}\n\\begin{array}{lcl}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M \n = \\dfrac{\\rm d}{{\\rm d}t} \\Big[ J_{\\theta C}^{\\,-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich\\Big] \\\\[3mm]\n \\phantom{\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M}\n = J_{\\theta C}^{\\,-2\/3} \n \\left\\{ \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \n - \\dfrac{2}{3} \\dfrac{1}{J_{\\theta C}} \n \\left(\n \\dfrac{\\partial J_{\\theta C}}{\\partial \\theta} \\dot{\\theta}\n +\\dfrac{\\partial J_{\\theta C}}{\\partial q} \\dot{q}\n \\right)\\STAPEL C!_\\SLstrich!_\\SLstrich\n \\right\\}.\n\\end{array} \n\\end{equation}\nMoreover, the rate of the mechanical volume ratio $J_M$ equals\n\\begin{equation}\n\\label{eq:JMdot}\n\\begin{array}{lcl}\n \\STAPEL J!^{\\vbox{\\hbox{$\\displaystyle.$}\\vskip.03cm}}_M \n \\, = \\, \\dfrac{\\rm d}{{\\rm d}t} \\bigg[ \\sqrt{I_3(\\STAPEL C!_\\SLstrich!_\\SLstrich_M)}\\bigg] \n \\, = \\, \\dfrac{1}{2} \\, J_M \\, \\STAPEL C!_\\SLstrich!_\\SLstrich_M^{\\minus 1} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M \\ .\n\\end{array}\n\\end{equation}\nThe material time derivative of $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich$ (see Eq. \\eqref{eq:Cg}) can be expressed by\n\\begin{equation}\n\\label{eq:Cgdot}\n\\begin{array}{lcl}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\SLdreieck \n \\, = \\, \\dfrac{\\rm d}{{\\rm d}t}\\Big[J_M^{-2\/3}J_{\\theta C}^{-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich \\Big]\n \\, = \\, \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\cdot \\left( \\STAPEL C!_\\SLstrich!_\\SLstrich_M^{\\minus 1} \\cdot \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M\\right)' \\ .\n\\end{array}\n\\end{equation}\n\n\\subsection{General modelling framework}\n\\label{sec:Curing_model_general_framework}\n\nIn this section, a general modelling framework is introduced which defines the basic structure of the adhesive's material model. To obtain a thermodynamically consistent model, the second law of thermodynamics in the form of the Clausius-Duhem inequality is considered. In the Lagrangian representation, it reads as follows\n\\begin{equation}\n\\label{eq:CDU}\n \\dfrac{1}{2} \\, \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \n - \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\dot{\\psi} - \\JI\\STAPEL\\varrho!^\\SLtilde\\! 
\\, \\eta \\, \\dot{\\theta} - \\dfrac{1}{\\theta} \\STAPEL q!_\\SLstrich!^\\SLtilde \\cdot \\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta \\ge 0 \\ .\n\\end{equation}\nTherein, the first term is the stress power per unit volume, $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ is the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor and $\\psi$ and $\\eta$ are the Helmholtz free energy and the entropy, respectively, per unit mass. Furthermore, $\\JI\\STAPEL\\varrho!^\\SLtilde\\!$ is the mass density and $\\STAPEL q!_\\SLstrich!^\\SLtilde$ is the heat flux vector, both defined on the reference configuration. The expression $\\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta$ denotes the temperature gradient with respect to the reference configuration.\n\nTo specify the general structure of the adhesive's material model, an ansatz for the Helmholtz free energy function $\\hat{\\psi} = \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\psi$ per unit volume is introduced. It is additively decomposed into three parts according to\n\\begin{equation}\n\\label{eq:free_energy_allg}\n \\hat{\\psi} \n = \\hat{\\psi}_{G}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta,z\\Big) \n + \\hat{\\psi}_V\\Big(J_M\\Big)\n + \\hat{\\psi}_{\\theta C}\\Big(\\theta,q\\Big)\\ .\n\\end{equation}\nTherein, $\\hat{\\psi}_{G}$ represents the stored energy as a result of isochoric deformations described by $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich$. Furthermore, this part depends on the current temperature~$\\theta$ and on an additional material function $z$ which reflects some process dependencies.\\footnote{ The isochoric part of the free energy may be extended by additional internal variables. This would be necessary if, for example, models of multiplicative viscoelasticity or viscoplasticity are employed (cf. \\cite{Landgraf_Ihlemann_2011}).} The second contribution of Eq. \\eqref{eq:free_energy_allg} describes the material response due to pure mechanical volume changes and only depends on the volume ratio $J_M$ of the mechanical deformation. The remaining part of Eq.~\\eqref{eq:free_energy_allg} defines the thermochemically stored energy $\\hat{\\psi}_{\\theta C} = \\hat{\\psi}_{\\theta C}(\\theta,q)$ of the material which is a function of the temperature $\\theta$ and the degree of cure $q$. It is attributed to an amount of energy, which is initially stored in the material and which gets released due to the exothermic chemical process during curing. Furthermore, it describes the energy storage related to varying temperatures.\n\nIn Eq.~\\eqref{eq:free_energy_allg}, the variables $q$ and $z$ are treated as internal variables. Thus, they are prescribed by evolution equations which are defined in general form by \n\\begin{equation}\n\\label{eq:qdot}\n \\dot{q} = f_q(q,\\theta,t) \\ge 0 \\ , \\quad q(t=0) = q_0 \\ ,\n\\end{equation}\n\\begin{equation}\n\\label{eq:zdot}\n \\dot{z} = f_z(q,\\theta,t) \\ge 0 \\ , \\quad z(t=0) = z_0 \\ .\n\\end{equation}\nTherein, $q_0$ and $z_0$ are appropriate initial conditions. It can be seen that both variables are monotonically increasing. More details and specific constitutive functions will be provided in Section \\ref{sec:Curing_model_constitutive_functions}. \n\nTo evaluate the ansatz \\eqref{eq:free_energy_allg} within the Clausius-Duhem inequality \\eqref{eq:CDU}, the rate of the Helmholtz free energy function $\\hat{\\psi}$ has to be calculated. 
Taking into account all dependencies of Eq.~\\eqref{eq:free_energy_allg}, its rate reads as \n\\begin{equation}\n\\label{eq:free_energy_derivative}\n\\begin{array}{lcl}\n \\dot{\\hat{\\psi}} \n &=& \\left[ \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\theta}\n +\\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial \\theta}\n \\right] \\, \\dot{\\theta}\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\SLdreieck \\ + \n \\\\[4mm]\n & & \\ \\\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{V}}{\\partial J_M} \\dot{J}_M\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial q} \\, \\dot{q}\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial z} \\dot{z} \\ .\n\\end{array}\n\\end{equation}\nA substitution of expressions \\eqref{eq:CMdot}~-~\\eqref{eq:Cgdot} and \\eqref{eq:free_energy_derivative} into the Clausius-Duhem inequality \\eqref{eq:CDU} yields the dissipation inequality\n\\begin{equation}\n\\label{eq:dissip_ineq}\n\\begin{array}{rcl}\n \\left\\{ \n \\dfrac{1}{2} \\, \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \n - \\left[ \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich}\\cdot \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\right]'\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} + \\dfrac{J_M}{2} \\,\\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\\,\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \n \\right\\} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \\ + \\ \\ \\\\[5mm]\n - \\left\\{\n \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\, \\eta \n + \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial \\theta} \n + \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\theta} \n - \\left( \\dfrac{J_M}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial \\theta}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n \\right) \\right\\}\\, \\dot{\\theta} \\ + \\ \\ \\\\[3mm]\n - \\left\\{\n \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial q} \n - \\left( \\dfrac{J_M}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial q}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n \\right) \\right\\}\\, \\dot{q} \\ + \\ \\ \\\\[3mm]\n - \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial z} \\dot{z}\n - \\dfrac{1}{\\theta} \\, \\STAPEL q!_\\SLstrich!^\\SLtilde \\cdot \\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta \\ge 0 \\ ,\n\\end{array}\n\\end{equation}\nwhich has to be satisfied for arbitrary thermomechanical processes. Following the standard methods for the evaluation of \\eqref{eq:dissip_ineq} (cf. \\cite{Haupt_2002}), it is firstly stated that the terms in brackets in front of the $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck$ and $\\dot{\\theta}$ have to be zero. This yields the potential relations for the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor\n\\begin{equation}\n\\label{eq:Ttil_allg}\n \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \n = 2\\,\\left[ \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich}\\cdot \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\right]'\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} + J_M\\,\\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\\,\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\ ,\n\\end{equation}\nand the entropy \n\\begin{equation}\n\\label{eq:eta_allg}\n \\JI\\STAPEL\\varrho!^\\SLtilde\\! 
\\, \\eta \n = - \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial \\theta} \n - \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\theta} \n + \\left( \\dfrac{1}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial \\theta}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n J_M\n \\right) \\ .\n\\end{equation} \n\nNext, it is assumed that each of the remaining terms of inequality \\eqref{eq:dissip_ineq} has to be non-negative, which is a sufficient but not necessary condition. The non-negativity of the last term of inequality \\eqref{eq:dissip_ineq} is ensured by Fourier's law. Formulated on the reference configuration, it reads as\n\\begin{equation}\n\\label{eq:Fourier}\n \\STAPEL q!_\\SLstrich!^\\SLtilde = - \\kappa \\, J \\, \\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\cdot \\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta\\ .\n\\end{equation} \nHere, $\\kappa \\ge 0$ is the thermal conductivity. Furthermore, taking into account the properties $\\dot{q} \\ge 0$ and $\\dot{z} \\ge 0$ (see Eqs. \\eqref{eq:qdot} and \\eqref{eq:zdot}), the final two restrictions read as\n\\begin{equation}\n\\label{eq:dissi_qdot}\n - \\left\\{\n \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial q} \n - \\left( \\dfrac{1}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial q}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n J_M\n \\right) \\right\\} \\ \\ge 0 \\ ,\n\\end{equation} \n\\begin{equation}\n\\label{eq:dissi_zdot}\n - \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial z} \\ge 0 \\ .\n\\end{equation} \nThese conditions cannot be evaluated in general form. However, for the specific material model presented in Section~\\ref{sec:Curing_model_constitutive_functions}, the thermodynamic consistency is proved (cf. Section~\\ref{sec:Constitutive_functions_Process_consistency}).\n\n\n\n\\section{Application to an epoxy-based adhesive}\n\\label{sec:Curing_model_constitutive_functions}\n\nIn this section, the general modelling framework is specified to simulate the material behaviour of one specific class of adhesives. More precisely, the two-part epoxy-based structural adhesive \\textit{DP410}$^{\\rm TM}$ provided by \\textit{3M Scotch-Weld}$^{\\rm TM}$ is modelled \\cite{DP410_2003}. The adhesive is prepared by mixing two paste-like components. Afterwards, the mixture cures to a solid without any further initiation. In particular, curing takes place at room temperature such that no heating is necessary. The fully cured material can be applied within a temperature range of $-55^\\circ C$ to $80^\\circ C$ and the glass transition temperature is about $50^\\circ C$. Furthermore, the mass density is approximately $1.1 \\, \\rm g\/cm^3$ \\cite{DP410_2003}. In the following, different aspects of the model specifications are addressed.\n\n\\subsection{Degree of cure}\n\\label{sec:Constitutive_functions_Degree_of_Cure}\n\nFirst of all, the curing process is examined in more detail. In analogy to the procedures described in \\cite{Kolmeder_Lion_2010} and \\cite{Lion_Yagimli_2008}, the curing process has been measured by Differential Scanning Calorimetry (DSC) experiments. Thus, it is assumed that the curing process can be completely determined by the heat release of the exothermic chemical reaction. According to Halley and Mackay \\cite{Halley_Mackay_1996}, different phe\\-no\\-me\\-no\\-lo\\-gi\\-cal models can be applied to simulate the curing processes of epoxy-based materials. 
In this work, the so called $n$-th-order model (model of reaction order $n$) is employed. In view of Eq.~\\eqref{eq:qdot}, this specific ansatz is expressed by\n\\begin{equation}\n\\label{eq:q_ansatz}\n \\dot{q} = f_q(q,\\theta,t) = K_1(\\theta) \\cdot\\Big(1-q\\Big)^n \\cdot f_D(q,\\theta) \\ .\n\\end{equation}\nTherein, $n$ is a constant material parameter and $K_1(\\theta)$ is a temperature dependent thermal activation function which is constituted by the Arrhenius ansatz (cf. \\cite{Halley_Mackay_1996})\n\\begin{equation}\n\\label{eq:q_Kfunc}\n K_1(\\theta) = K_{10} \\, {\\rm exp}\\left[-\\frac{E_1}{R\\,\\theta}\\right] \\ ,\n\\end{equation}\nwhere $K_{10}$ and $E_1$ are constant material parameters and \\mbox{$R = 8.3144 \\, \\rm J\/(mol\\,K)$} is the universal gas constant. To account for diffusion controlled curing, which takes place at temperatures below the glass transition temperature, the ansatz \\eqref{eq:q_ansatz} includes an empirical diffusion factor $f_D(q,\\theta)$ which, according to Fournier \\etal \\cite{Fournier_Etal_1996}, reads as\n\\begin{equation}\n\\label{eq:q_diffusion}\n f_D(q,\\theta) = \\dfrac{2}{1+{\\rm exp}\\left[\\frac{q-q_{end}(\\theta)}{b}\\right]}-1 \\ .\n\\end{equation}\nHere, $b$ is another constant material parameter and $q_{end}$ is the maximum degree of cure, which can be attained at a certain temperature $\\theta$. To evaluate the maximum attainable degree of cure, typically the DiBenedetto equation is adopted \\cite{DiBenedetto_1987,Kolmeder_Lion_2010,Pascault_Williams_1990}:\n\\begin{equation}\n\\label{eq:q_diBenedetto}\n \\dfrac{T_g(q)-T_{g,0}}{T_{g,1}-T_{g,0}} = \\dfrac{\\lambda \\, q}{1- (1-\\lambda) \\, q} \\ .\n\\end{equation}\nTherein, $T_g(q)$ is the glass transition temperature as a function of the degree of cure and $T_{g,0}$ and $T_{g,1}$ are the glass transition temperatures at degree of cure $q=0$ and $q=1$, respectively. Furthermore, $\\lambda$ is a constant material parameter. In order to calculate the maximum attainable degree of cure at certain isothermal curing temperatures, Eq.~\\eqref{eq:q_diBenedetto} has to be solved for $q$ as follows\n\\begin{equation}\n\\label{eq:q_qend}\n q_{end}(\\theta) = \\dfrac{f_T(\\theta)}{f_T(\\theta)-\\lambda \\, f_T(\\theta) + \\lambda} \\ .\n\\end{equation}\nHere, an abbreviation $f_T(\\theta)$ has been introduced \n\\begin{equation}\n\\label{eq:q_fTheta}\n f_T(\\theta) = \\dfrac{\\theta + \\Delta T - T_{g,0}}{T_{g,1}-T_{g,0}} \\ .\n\\end{equation}\nIn Eq.~\\eqref{eq:q_fTheta} the assumption $T_g(q) = \\theta + \\Delta T$ has been employed. Therein, $\\Delta T$ denotes the difference between the glass transition temperature $T_g(q)$ attainable at specific isothermal curing temperatures, and the curing temperature~$\\theta$ itself.\n\nThe material parameters of the model \\eqref{eq:q_ansatz} - \\eqref{eq:q_fTheta} have been identified using the DSC measurements. The corresponding values are listed in Table~\\ref{tab:MatPar_Cure}. Moreover, the phenomenological behaviour of this model is depicted in Fig. 
\\ref{fig:curing} for different temperatures.\n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eqs.~\\eqref{eq:q_ansatz} - \\eqref{eq:q_fTheta}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline & & & \\\\[-3mm]\n parameter & value & parameter & value\\\\ \n \\hline & & & \\\\[-3mm]\n $\\ \\ $ $K_{10}$ & $1.608\\cdot10^{10} \\rm$ \n & $\\ \\ $ $T_{g,1}$ & $324.85 \\ \\rm K$ \\\\\n $\\ \\ $ $E_1$ & $79835 \\ \\rm J\/mol$ \n & $\\ \\ $ $T_{g,0}$ & $234.35 \\ \\rm K$ \\\\ \n $\\ \\ $ $b$ & $0.057$ \n & $\\ \\ $ $\\Delta T$ & $11 \\ \\rm K$ \\\\\n $\\ \\ $ $n$ & $1.217$ \n & $\\ \\ $ $\\lambda$ & $1.7$ \\\\ \n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Cure}\n\\end{table} \n\n\\begin{figure}[ht]\n\t\\centering\n \\includegraphics[width=0.45\\textwidth]{Fig3}\n\t\\caption{Evolution of the degree of cure $q$ for different temperatures $\\theta$}\n\t\\label{fig:curing}\n\\end{figure}\n\n\\subsection{Heat expansion and chemical shrinkage}\n\\label{sec:Constitutive_functions_Volume}\n\nNext, the thermochemical volume change due to chemical shrinkage and heat expansion processes is specified. To this end, an idealized model with the following ansatz has been chosen:\n\\begin{equation}\n\\label{eq:phi_thetaC}\n \\varphi_{\\theta C}(\\theta, q) = {\\rm exp}\\left[ \\alpha_\\theta \\, \\big(\\,\\theta - \\STAPEL \\theta!^\\SLtilde\\,\\big) + \\beta_q \\, q\\right] \\ .\n\\end{equation}\nTherein, $\\alpha_\\theta$ is a volumetric heat expansion coefficient and $\\beta_q$ is the maximum volumetric chemical shrinkage. According to first measurement results, the material parameters have been set to the values listed in Table~\\ref{tab:MatPar_Volume}.\n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eq.~\\eqref{eq:phi_thetaC}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline & & & \\\\[-3mm]\n parameter & value & parameter & value\\\\\n \\hline & & & \\\\[-3mm]\n $\\ \\ $ $\\STAPEL \\theta!^\\SLtilde$ & $295 \\ \\rm K$ \n & & \\\\\n $\\ \\ $ $\\alpha_\\theta$ & $5\\cdot10^{-4} \\ \\rm K^{\\minus 1}$ \n & $\\ \\ $ $\\beta_q$ & $-0.05$ \\\\[1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Volume}\n\\end{table} \n\n\\subsection{Free energy and stresses}\n\\label{sec:Constitutive_functions_Free_Energy}\n\nTo complete the curing model, the mechanical parts of the free energy function \\eqref{eq:free_energy_allg} and thus the corresponding stress strain relationships have to be specified. Firstly, the mechanical response due to isochoric deformations is considered. It is described by the free energy contribution $\\hat{\\psi}_{G}(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta,z)$ in Eq.~\\eqref{eq:free_energy_allg}. This part is modelled by a combination of a finite strain pseudo-elasticity with temperature and degree of cure dependent stiffness and a sum of multiple Maxwell elements, each including process dependencies described by the material function $z(t)$ (cf. Eq.~\\eqref{eq:zdot}). Fig.~\\ref{fig:rheolog_model} illustrates this model by means of a one-dimensional rheological representation. 
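\n\nAs an aside, the cure kinetics of Section~\\ref{sec:Constitutive_functions_Degree_of_Cure} and the volume function \\eqref{eq:phi_thetaC} can already be evaluated numerically at this point. The following sketch (illustrative Python; parameter values from Tables~\\ref{tab:MatPar_Cure} and \\ref{tab:MatPar_Volume}; an explicit Euler step is used here only for brevity, the implicit scheme actually employed is described in Section~\\ref{sec:FEM_implementation_integration}) reproduces the qualitative behaviour of Fig.~\\ref{fig:curing}:\n\\begin{verbatim}\nimport numpy as np\n\nR = 8.3144                    # J/(mol K)\nK10, E1, n, b = 1.608e10, 79835.0, 1.217, 0.057\nTg1, Tg0, dT, lam = 324.85, 234.35, 11.0, 1.7\nalpha_t, beta_q, theta_ref = 5e-4, -0.05, 295.0\n\ndef q_end(theta):             # maximum attainable degree of cure\n    fT = (theta + dT - Tg0) / (Tg1 - Tg0)\n    return fT / (fT - lam * fT + lam)\n\ndef q_rate(q, theta):         # n-th-order model with diffusion factor\n    K1 = K10 * np.exp(-E1 / (R * theta))\n    fD = 2.0 / (1.0 + np.exp((q - q_end(theta)) / b)) - 1.0\n    return K1 * (1.0 - q) ** n * max(fD, 0.0)  # clipped: dq/dt >= 0\n\ndef phi_thetaC(theta, q):     # thermochemical volume ratio\n    return np.exp(alpha_t * (theta - theta_ref) + beta_q * q)\n\ntheta, q, dt = 313.15, 0.0, 1.0      # isothermal curing at 40 C\nfor _ in range(36000):               # 10 hours, 1 s steps\n    q += dt * q_rate(q, theta)\nprint(q, phi_thetaC(theta, q))\n\\end{verbatim}\n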
\n\n\\begin{figure}[ht]\n\t\\centering\n \\includegraphics[width=0.25\\textwidth]{Fig4}\n\t\\caption{Rheological model of a nonlinear spring connected in parallel to multiple Maxwell elements}\n\t\\label{fig:rheolog_model}\n\\end{figure}\nThe corresponding free energy $\\hat{\\psi}_{G}$ is constituted as a sum of contributions related to a pseudo-elastic part $\\hat{\\psi}_{el}$ and $N_k$ Maxwell elements, each denoted by $\\hat{\\psi}_{ve,k}$:\n\\begin{equation}\n\\label{eq:psi_G}\n \\hat{\\psi}_{G}(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta,z)\n = \\hat{\\psi}_{el}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta\\Big) \n + \\sum_{k=1}^{N_k} \\hat{\\psi}_{ve,k}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,z\\Big) \\ .\n\\end{equation}\nThe pseudo-elastic part is modelled by an ansatz proposed by Lion and Johlitz \\cite{Lion_Johlitz_2012}. It takes the form\n\\begin{equation}\n\\label{eq:psi_ela}\n\\begin{array}{ll}\n \\hat{\\psi}_{el}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta\\Big) = \\Ten2 Q \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich , \n\\end{array}\n\\end{equation}\nwhere the tensor $\\Ten2 Q$ is given by\n\\begin{equation}\n\\label{eq:psi_ela_II}\n\\begin{array}{ll}\n \\displaystyle\n \\Ten2 Q = - \\int\\limits_{-\\infty}^{t}2\\,c_{10}\\Big(\\theta(t),q(s)\\Big)\\,\\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s\\ .\n\\end{array}\n\\end{equation}\nTherein, a stiffness function $c_{10}(\\theta,q)$ is introduced. It takes into account the dependency on the curing history described by the degree of cure $q(s)$ ($s$ is the integration variable). Furthermore, a dependency on the current temperature $\\theta(t)$ is included. The stiffness function exhibits the properties\n\\begin{equation}\n\\label{eq:psi_ela_stiffness}\n c_{10}\\Big(\\theta,q\\Big)\n \\ge 0 \\ , \\quad\n \\dfrac{\\partial }{\\partial q} \\, c_{10}\\Big(\\theta,q\\Big) \\ge 0 \\ .\n\\end{equation}\nA specific ansatz will be provided in Section \\ref{sec:Constitutive_functions_Process_dependency}. The free energy $\\hat{\\psi}_{ve,k}$ of one single Maxwell element (see Eq.~\\eqref{eq:psi_G}) is modelled according to an ansatz proposed by Haupt and Lion \\cite{Haupt_Lion_2002}:\n\\begin{equation}\n\\label{eq:psi_visc_single}\n \\displaystyle\n \\hat{\\psi}_{ve,k} \n = \\left\\{ - \\int\\limits_{-\\infty}^{z}G_k(z-s)\\,\\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s\\right\\}\\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich \\ .\n\\end{equation}\nTherein, $G_k(z-s)$ is a relaxation function which is constituted by\n\\begin{equation}\n\\label{eq:kernel}\n \\displaystyle\n G_k(z-s)\n = 2 \\, \\mu_k \\, {\\rm e}^{-\\frac{z-s}{\\tau_k}} \\ .\n\\end{equation}\nTherein, ${\\rm e}$ denotes the Euler's number and the parameters $\\mu_k$ and $\\tau_k$ are the stiffness and the relaxation time, respectively, for the $k$-th Maxwell element (cf. Fig.~\\ref{fig:rheolog_model}). Note that Eq.~\\eqref{eq:psi_visc_single} is formulated with respect to the material function $z(t)$ instead of the physical time $t$. The variable $z(t)$ is also referred to as intrinsic time scale and is governed by an evolution equation which has been introduced in general form by Eq.~\\eqref{eq:zdot}. 
Different sources of process dependencies may be defined by choosing appropriate constitutive functions for this evolution equation. However, only temperature and degree of cure dependent behaviour is assumed in this work. A specific ansatz for Eq.~\\eqref{eq:zdot} is provided in Section~\\ref{sec:Constitutive_functions_Process_dependency}.\n\nTo complete the mechanical part of the free energy function \\eqref{eq:free_energy_allg}, the volumetric stress response described by $\\hat{\\psi}_V$ has to be constituted. This contribution is assumed to be pure elastic and is described by the ansatz\n\\begin{equation}\n\\label{eq:psi_vol}\n \\displaystyle\n \\hat{\\psi}_V = \\dfrac{K}{2} \\, \\Big(J_M - 1\\Big)^2 \\ .\n\\end{equation}\nTherein, $K>0$ is the bulk modulus. \n\nFinally, the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ is calculated by evaluation of Eq.~\\eqref{eq:Ttil_allg} in combination with the specific constitutive relations \\eqref{eq:psi_G} - \\eqref{eq:psi_vol}. The resulting contributions to the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor are summarized in Eqs.~\\eqref{eq:stress_sum} - \\eqref{eq:stress_visc_k}. More information on the process dependent material functions $c_{10}(\\theta,q)$ and~$\\dot{z}$ and a summary of specific values for material parameters are provided in Section \\ref{sec:Constitutive_functions_Process_dependency}. \n\n\\begin{center}\n\\hrule \\footnotesize\n\\nopagebreak\n\\vspace{1ex}\n\\begin{eqnarray}\n\\omit\\rlap{\\text{Total $2^{\\rm nd}$ PK stress}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\ \\ \\,\\,\\, \\;\n &=& \\ \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{V} +\\displaystyle \\Big(\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{G}\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\Big)'\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\label{eq:stress_sum} \\\\[1mm]\n\\omit\\rlap{\\text{Volumetric part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{V} \\,\\,\\,\\,\n &=& \\ K \\, J_M \\, (J_M - 1 ) \\, \\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\label{eq:stress_vol} \\\\[1mm]\n\\omit\\rlap{\\text{Isochoric part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{G} \\,\\,\\,\\,\n &=& \\ \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el} + \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve} \\label{eq:stress_iso} \\\\[1mm]\n\\omit\\rlap{\\text{Pseudo-elastic part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}\\,\\,\\,\\,\n &=& - \\int\\limits_{-\\infty}^{t}2\\,c_{10}\\Big(\\theta(t),q(s)\\Big)\\,\n \\Bigg(\\frac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\label{eq:stress_ela} \\\\[1mm]\n\\omit\\rlap{\\text{Viscoelastic part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve}\\,\\,\\,\n &=& \\sum_{k=1}^{N_k} \\ \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \\label{eq:stress_visc} \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k}\n &=& - \\int\\limits_{-\\infty}^{z(t)}2 \\mu_k\\,{\\rm e}^{-\\frac{z(t)-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\,{\\rm d}s \\label{eq:stress_visc_k} \n\\end{eqnarray}\n\\nopagebreak\n\\hrule \n\\end{center}\n\n\n\n\\subsection{Process dependencies of mechanical 
properties}\n\\label{sec:Constitutive_functions_Process_dependency}\n\nIn Eq.~\\eqref{eq:psi_ela}, a stiffness function $c_{10}(\\theta,q)$ has been introduced which includes dependencies on the temperature $\\theta$ and the degree of cure $q$. The specific ansatz used in this paper separates the different physical dependencies\n\\begin{equation}\n\\label{eq:c10_ansatz}\n c_{10}(\\theta,q) = c_{10,0}\\,f_{c\\theta}(\\theta)\\,f_{cq}(q) \\ .\n\\end{equation}\nTherein, $c_{10,0}$ is a constant stiffness parameter which equals half the shear modulus $G$ in small strain shear experiments. Furthermore, $f_{c\\theta}(\\theta)$ and $f_{cq}(q)$ are normalized functions representing the temperature and degree of cure dependencies, respectively. The normalization is accomplished such that the corresponding values of both functions range from $0$ to $1$. \n\nFor the representation of the temperature dependency of the fully cured material, the normalized function $f_{c\\theta}(\\theta)$ is constituted by the ansatz\n\\begin{equation}\n\\label{eq:c10_fTheta}\n f_{c\\theta}\\big(\\theta\\big) = \\dfrac{1}{\\pi}\\Bigg\\{{\\rm atan}\\Big[ a_{c\\theta} \\cdot\\big(\\theta - T_{g,1}\\big)\\Big] + \\dfrac{\\pi}{2}\\Bigg\\} \\ .\n\\end{equation}\nIt accounts for the major part of the stiffness change near the glass transition temperature $T_{g,1}$ of the cured material (cf. Table~\\ref{tab:MatPar_Cure}). An additional material parameter $a_{c\\theta}$ enables one to adjust the specific shape of the function. Fig.~\\ref{fig:ela_temp_func} illustrates the phenomenology of Eq.~\\eqref{eq:c10_fTheta}. The chosen material parameter $a_{c\\theta}$ is listed in Table~\\ref{tab:MatPar_Ela}.\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Fig5}\n \\caption{Temperature dependency $f_{c\\theta}(\\theta)$ of the equilibrium stiffness}\n \\label{fig:ela_temp_func}\n\\end{figure}\n\nThe second normalized function $f_{cq}(q)$ of Eq.~\\eqref{eq:c10_ansatz} represents the change in stiffness due to the curing process and thus only depends on the degree of cure. The chosen ansatz reads as\n\\begin{equation}\n\\label{eq:c10_gC}\n f_{cq}\\big(q\\big) = \\dfrac{1}{d_{cq}}\\Bigg\\{{\\rm atan}\\Big[ a_{cq} \\cdot\\big(q - b_{cq}\\big)\\Big] + c_{cq}\\Bigg\\} \\ .\n\\end{equation}\nTherein, $a_{cq}$ and $b_{cq}$ are material parameters and the variables $c_{cq}$ and $d_{cq}$ are evaluated in a way such that the conditions \\mbox{$f_{cq}(q=0) = 0$} and \\mbox{$f_{cq}(q=1)=1$} hold. An evaluation of both conditions yields the expressions\n\\begin{equation}\n\\label{eq:c10_gC_II}\n c_{cq}= - {\\rm atan}\\Big[ - a_{cq}\\cdot b_{cq} \\Big] \\ ,\n\\end{equation}\n\\begin{equation}\n\\label{eq:c10_gC_III}\n d_{cq}= {\\rm atan}\\Big[ a_{cq}\\cdot (1-b_{cq})\\Big] + c_{cq} \\ .\n\\end{equation}\nThe shape of the function $f_{cq}(q)$ is depicted in Fig.~\\ref{fig:ela_cure_func}. 
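\n\nFor reference, Eqs.~\\eqref{eq:c10_ansatz} - \\eqref{eq:c10_gC_III} can be summarized in a few lines (illustrative Python; parameter values as in Table~\\ref{tab:MatPar_Ela}):\n\\begin{verbatim}\nimport numpy as np\n\nc10_0, a_ct, Tg1 = 500.0, -0.5, 324.85   # MPa, 1/K, K\na_cq, b_cq = 10.0, 0.4\n\ndef f_ctheta(theta):   # temperature dependency\n    return (np.arctan(a_ct * (theta - Tg1)) + np.pi/2) / np.pi\n\n# normalization such that f_cq(0) = 0 and f_cq(1) = 1\nc_cq = -np.arctan(-a_cq * b_cq)\nd_cq = np.arctan(a_cq * (1.0 - b_cq)) + c_cq\n\ndef f_cq(q):           # degree of cure dependency\n    return (np.arctan(a_cq * (q - b_cq)) + c_cq) / d_cq\n\ndef c10(theta, q):     # process dependent stiffness\n    return c10_0 * f_ctheta(theta) * f_cq(q)\n\\end{verbatim}\n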
The material parameters $a_{cq}$ and $b_{cq}$ used for illustration are listed in Table~\\ref{tab:MatPar_Ela}.\n\n\\begin{figure}[ht]\n \\centering\n\t \\includegraphics[width=0.45\\textwidth]{Fig6}\n \\caption{Degree of cure dependency $f_{cq}(q)$ of the \n equilibrium stiffness}\n\t\\label{fig:ela_cure_func}\n\\end{figure}\n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eqs.~\\eqref{eq:stress_vol}, \\eqref{eq:stress_ela} \n and \\eqref{eq:c10_ansatz} - \\eqref{eq:c10_gC}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline& & & \\\\[-3mm]\n parameter & value & parameter & value\\\\ \n \\hline& & & \\\\[-3mm]\n $\\ \\ $ $K$ & $5000 \\, \\rm MPa$ \n & $\\ \\ $ $c_{10,0}$ & $500 \\, \\rm MPa$ \\\\[1mm]\n $\\ \\ $ $a_{c\\theta}$ & $- 0.5 \\, \\rm K^{\\minus 1}$ \n & $\\ \\ $ $a_{cq}$ & $10$ \\\\[1mm]\n &\n & $\\ \\ $ $b_{cq}$ & $0.4 $ \\\\ [1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Ela}\n\\end{table} \n\nFinally, the process dependency of the viscoelastic part of Eq.~\\eqref{eq:psi_G} is considered. A set of several Maxwell elements has been modelled to capture the material's viscoelastic behaviour. As presented in Section \\ref{sec:Constitutive_functions_Free_Energy}, the process dependency is accommodated by the intrinsic time scale $z(t)$. The following ansatz for the evolution equation \\eqref{eq:zdot} has been chosen:\n\\begin{equation}\n\\label{eq:zdot_ansatz}\n \\displaystyle\n \\dot{z} \n \\,=\\, f_z(q,\\theta,t) = 10^{f_{z\\theta}(\\theta)}\\cdot10^{f_{zq}(q)} \\ .\n\\end{equation}\nIn accordance with the pseudo-elastic stiffness \\eqref{eq:c10_ansatz}, the dependencies on the temperature and the degree of cure have been separated in Eq.~\\eqref{eq:zdot_ansatz}. The constitutive equations for both functions $f_{z\\theta}(\\theta)$ and $ f_{zq}(q) $ are\n\\begin{align}\n f_{z\\theta}(\\theta) \n &= \\dfrac{a_z}{\\pi} \\, {\\rm atan}\\Big[b_z\\cdot[\\theta-T_{g,1}]\\Big]\n +\\dfrac{\\pi}{2} \\ ,\\label{eq:zdot_ansatz_function_theta}\n \\\\[2mm]\n f_{zq}(q) \n &= c_z\\cdot(1-q^{n_z}) \\ .\n\\label{eq:zdot_ansatz_function_q}\n\\end{align}\nThe material parameters belonging to the viscoelastic part of the model are listed in Table~\\ref{tab:MatPar_Visc}. \n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eqs.~\\eqref{eq:stress_visc}, \\eqref{eq:stress_visc_k}, \\eqref{eq:zdot_ansatz} - \\eqref{eq:zdot_ansatz_function_q}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline & & & \\\\[-3mm]\n parameter & value & parameter & value\\\\ \n \\hline & & & \\\\[-3mm]\n $\\ \\ $ $N_k$ & $7$ \n & $\\ \\ $ $\\mu_{2-7}$ & $ 5 \\, \\rm MPa$ \\\\\n $\\ \\ $ $\\mu_{1}$ & $75 \\, \\rm MPa$ \n & $\\ \\ $ $\\tau_{2-4}$ & $10^{k-4} \\, \\rm s$ \\\\ \n $\\ \\ $ $\\tau_1$ & $10 \\, \\rm s$ \n & $\\ \\ $ $\\tau_{5-7}$ & $10^{k-3} \\, \\rm s$ \\\\[1mm]\n $\\ \\ $ $a_z$ & $6.0$ \n & $\\ \\ $ $b_z$ & $0.05 \\ \\rm K^{\\minus 1}$ \\\\ \n $\\ \\ $ $c_z$ & $5.0$ \n & $\\ \\ $ $n_z$ & $0.6$ \\\\ [1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Visc}\n\\end{table} \n\n\\section{Aspects of finite element implementation}\n\\label{sec:FEM_implementation}\n\nThe constitutive model for the representation of curing phenomena in adhesives has been implemented into the finite element software \\textit{ANSYS}$^{\\rm TM}$. In this section, the numerical integration of the constitutive equations and the derivation of \\textit{ANSYS}$^{\\rm TM}$-specific stress and material tangent measures are summarized. 
Additionally, a new algorithm is introduced which suppresses undesired initial volume changes. Those volume changes may result from thermal expansion and chemical shrinkage when initial values for the temperature and the degree of cure differ from their reference values. \n\n\n\\subsection{Numerical integration}\n\\label{sec:FEM_implementation_integration}\n\nFor the numerical integration of constitutive equations, a typical time interval ($t_n$, $t_{n+1}$) with $\\Delta t = t_{n+1} - t_n > 0$ is considered. Within this time step, the $2^{\\rm nd}$ Piola-Kirchhoff stress \\eqref{eq:stress_sum} has to be computed. Thus, \n\\begin{equation}\n\\label{eq:stresses_incremental}\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde\n = \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{V}\n + \\Big(\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{G}\\cdot\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\,\\Big)'\\cdot\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \n\\end{equation}\nhas to be solved. Here, values at time instance $t_{n+1}$ are denoted by $\\indLT{n+1}(\\cdot)$. Accordingly, values at time instance $t_n$ are indicated by $\\indLT{n}(\\cdot)$ and values that are defined at the midpoint $t_n + \\frac{\\Delta t }{2}$ are represented by $\\indLT{n\/2}(\\cdot)$. Within a single time step $\\Delta t$, it is assumed that the deformation gradients $\\indLT{n}\\STAPEL F!_\\SLstrich!_\\SLstrich$ and $\\indLT{n+1}\\STAPEL F!_\\SLstrich!_\\SLstrich$ as well as the temperatures $\\indLT{n}\\theta$ and $\\indLT{n+1}\\theta$ are known. Furthermore, internal variables of the preceding time step are given. \n\nFirstly, the solution for the degree of cure $\\indLT{n+1}q$ is obtained by numerical integration of the evolution Eq.~\\eqref{eq:q_ansatz}. Here, Euler backward method (Euler implicit) is employed. A formulation of Eq.~\\eqref{eq:q_ansatz} for time instance $t_{n+1}$ and a substitution of the approximation $\\indLT{n+1}{\\dot{q}} \\approx \\frac{1}{\\Delta t}(\\indLT{n+1}q-\\indLT{n}q)$ yields \n\\begin{equation}\n\\label{eq:q_EBM}\n \\indLT{n+1}q = \\indLT{n}q + \\Delta t \\ f_q(\\indLT{n+1}q,\\indLT{n+1}\\theta) \\ ,\n\\end{equation}\nwhich is a nonlinear equation with respect to the solution $\\indLT{n+1}q$. It is computed by application of Newton's method. Additionally, the degree of cure $\\indLT{n\/2}q$ at time instance $t = t_n + \\frac{\\Delta t}{2}$ has to be computed as well. In consideration of the temperature $\\indLT{n\/2}\\theta = \\frac{1}{2}(\\indLT{n+1}\\theta + \\indLT{n}\\theta)$, the value $\\indLT{n\/2}q$ is obtained according to \\eqref{eq:q_EBM} by\n\\begin{equation}\n\\label{eq:q_EBM_n2}\n \\indLT{n\/2}q = \\indLT{n}q + \\dfrac{\\Delta t}{2} \\ f_q(\\indLT{n\/2}q,\\indLT{n\/2}\\theta) \\ .\n\\end{equation}\nNext, the computation of the intrinsic time scale $\\indLT{n+1}z$ is considered. Since this variable is governed by an evolution equation \\eqref{eq:zdot_ansatz} which cannot be solved in closed form, a numerical scheme has to be applied as well. By analogy with Eq.~\\eqref{eq:q_EBM}, Euler backward method and the approximation $\\indLT{n+1}{\\dot{z}} \\approx \\frac{1}{\\Delta t}(\\indLT{n+1}z-\\indLT{n}z)$ are adopted\n\\begin{equation}\n\\label{eq:z_EBM}\n \\indLT{n+1}z = \\indLT{n}z + \\Delta t \\ f_z(\\indLT{n+1}q,\\indLT{n+1}\\theta) \\ .\n\\end{equation}\nIn contrast to Eqs.~\\eqref{eq:q_EBM} and \\eqref{eq:q_EBM_n2}, this relation can directly be evaluated. 
Thus, no iterative procedure has to be applied.\n\nNext, the calculation of the stresses Eq.~\\eqref{eq:stresses_incremental} is considered. While $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_V$ is already completely determined (see Eq.~\\eqref{eq:stress_vol}), the stress contribution $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_G$ has to be regarded in more detail. According to Eq. \\eqref{eq:stress_iso}, it includes a pseudo-elastic part $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}$ and a sum of several viscoelastic parts summarized by $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve}$. \n\nTo compute the pseudo-elastic stress tensor, firstly Eq.~\\eqref{eq:stress_ela} has to be formulated for time instance $t_{n+1}$:\n\\begin{equation}\n\\label{eq:Tel_incemental}\n\\displaystyle \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}\n = - \\int\\limits_{-\\infty}^{t_{n+1}}2\\,c_{10}( \\indLT{n+1}\\theta, q(s) ) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ .\n\\end{equation}\nNote that the temperature $\\indLT{n+1}\\theta$ does not depend on the integration variable $s$ but only on time instance $t_{n+1}$. \n\nAfter substituting the ansatz \\eqref{eq:c10_ansatz} for the stiffness $c_{10}(\\theta,q)$, all values that do not depend on the integration variable $s$ are excluded from the integral such that the pseudo-elastic stress can be rewritten by \n\\begin{align}\n \\displaystyle\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}\n &= - 2\\, c_{10,0} \\, f_{c\\theta}\\big(\\indLT{n+1}\\theta\\big) \\ \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \\label{eq:Tel_incemental_abbrv} \\ .\n\\end{align}\nHere, $\\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el}$ is an abbreviation for the remaining integral \n\\begin{align}\n \\displaystyle\n \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n &= \\int\\limits_{-\\infty}^{t_{n+1}} \\, f_{cq}\\big(q(s)\\big) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ .\n \\label{eq:Tel_incemental_reduced}\n\\end{align}\nNext, this integral is split into two sub-integrals by substituting $t_{n+1} = t_n + \\Delta t$ \n\\begin{equation}\n\\label{eq:Tel_incemental_split}\n\\begin{array}{l}\n \\displaystyle\n \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n = \\int\\limits_{-\\infty}^{t_{n}} \\, f_{cq}\\big(q(s)\\big) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ +\\\\\n \\qquad \\qquad\\qquad \\displaystyle\n + \\int\\limits_{t_n}^{t_{n + \\Delta t}} \\, f_{cq}\\big(q(s)\\big) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ .\n\\end{array}\n\\end{equation}\nThe first term on the right-hand side of Eq.~\\eqref{eq:Tel_incemental_split} is the solution of Eq. \\eqref{eq:Tel_incemental_reduced} for time instance $t_n$. Thus, it equals $\\indLT{n}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} $. 
The second term of Eq.~\\eqref{eq:Tel_incemental_split} is computed numerically by the midpoint method \n\\begin{equation}\n\\label{eq:Tel_incemental_midpoint}\n\\begin{array}{l}\n { \\displaystyle\n \\int\\limits_{t_n}^{t_{n}+ \\Delta t}} f_{cq}\\big(q(s)\\big) \\, \\Big(\\frac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Big)\\ {\\rm d} s \\\\\n \\qquad \\qquad \\qquad\n \\approx \\Delta t \\ f_{cq}\\Big(\\indLT{n\/2}q\\Big) \\ \\ \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt!^\\SLdreieck^{\\minus 1}\\Big|_{t_n + \\frac{\\Delta t}{2}} \\ ,\n\\end{array}\n\\end{equation}\nand the material time derivative of the isochoric inverse right Cauchy-Green tensor at time instance $t_n + \\frac{\\Delta t}{2}$ is approximated by\n\\begin{equation}\n\\label{eq:CG_approx}\n\\begin{array}{l}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\triangle^{\\minus 1}\\Big|_{t_n + \\frac{\\Delta t}{2}}\n \\approx\n \\dfrac{1}{\\Delta t} \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big) \\ .\n\\end{array} \n\\end{equation}\nA substitution of Eqs.~\\eqref{eq:Tel_incemental_midpoint} and \\eqref{eq:CG_approx} into \\eqref{eq:Tel_incemental_split} yields the incremental representation\n\\begin{equation}\n\\label{eq:Tel_incemental_solu}\n\\begin{array}{l}\n \\displaystyle\n \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n = \\indLT{n}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n + f_{cq}\\Big(\\indLT{n\/2}q\\Big) \\ \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big)\\ ,\n\\end{array}\n\\end{equation}\nand the pseudo-elastic stress $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}$ can be computed by Eq.~\\eqref{eq:Tel_incemental_abbrv}. Finally, it remains to calculate the viscoelastic stress $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve}$. Note that the constitutive equations for the pseudo-elastic stress \\eqref{eq:stress_ela} and the viscoelastic stress of one Maxwell element \\eqref{eq:stress_visc_k} have a similar structure. Thus, the procedure for numerical integration is similar as well. However, the procedure described above has to be slightly adapted. 
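\n\nTo summarize the update scheme derived so far, the following sketch (illustrative Python with freely chosen names; it is not the actual \\textit{USERMAT} routine) collects Eqs.~\\eqref{eq:q_EBM}, \\eqref{eq:q_EBM_n2}, \\eqref{eq:z_EBM} and \\eqref{eq:Tel_incemental_solu} for one local time step; the adaptation required for the Maxwell elements is described next:\n\\begin{verbatim}\nimport numpy as np\n\ndef local_step(q_n, z_n, P_el_n, Cgi_n, Cgi_np1,\n               th_n, th_np1, dt, f_q, f_z, f_cq):\n    # implicit Euler for the degree of cure,\n    # solved by Newton's method (numerical derivative)\n    def solve_q(q_old, theta, h):\n        q = q_old\n        for _ in range(50):\n            r  = q - q_old - h * f_q(q, theta)\n            dr = 1.0 - h * (f_q(q + 1e-8, theta)\n                            - f_q(q, theta)) / 1e-8\n            q -= r / dr\n            if abs(r) < 1e-12:\n                break\n        return q\n\n    q_np1 = solve_q(q_n, th_np1, dt)\n    q_mid = solve_q(q_n, 0.5*(th_n + th_np1), 0.5*dt)\n\n    # intrinsic time scale: direct evaluation\n    z_np1 = z_n + dt * f_z(q_np1, th_np1)\n\n    # history tensor of the pseudo-elastic stress,\n    # Cgi = inverse isochoric right CG tensor\n    P_el_np1 = P_el_n + f_cq(q_mid) * (Cgi_np1 - Cgi_n)\n\n    return q_np1, q_mid, z_np1, P_el_np1\n\\end{verbatim}\n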
Firstly, the stress contribution for one single Maxwell element \\eqref{eq:stress_visc_k} is formulated for time instance $t_{n+1}$:\n\\begin{equation}\n\\label{eq:Tve_incemental}\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \n = - \\int\\limits_{-\\infty}^{\\indLT{n+1}z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n+1}z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\, {\\rm d}s \\ .\n\\end{equation}\nNext, this integral is split into two sub-integrals which yields\n\\begin{equation}\n\\label{eq:Tve_incremental_split}\n\\begin{array}{l}\n\\displaystyle\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \n = - \\int\\limits_{-\\infty}^{\\indLT{n}z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z + \\Delta z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\, {\\rm d}s \n \\\\[5mm]\\displaystyle\n \\qquad \\qquad\n - \\int\\limits_{\\indLT{n}z}^{\\indLT{n}z + \\Delta z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z + \\Delta z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\, {\\rm d}s .\n\\end{array}\n\\end{equation}\nNote that in contrast to the calculation step \\eqref{eq:Tel_incemental_split} here the intrinsic time scale $\\indLT{n+1}z = \\indLT{n}z + \\Delta z$ has been substituted. The first integral on the right-hand side of Eq.~\\eqref{eq:Tve_incremental_split} can be expressed by the solution $\\indLT{n}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k}$ as follows\n\\begin{equation}\n\\label{eq:Tve_incremental_first_term}\n\\begin{array}{l}\n\\displaystyle\n-{\\rm e}^{-\\frac{\\Delta z}{\\tau_k}}\n \\int\\limits_{-\\infty}^{\\indLT{n}z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z -s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\ {\\rm d}s\n \\\\[5mm]\\displaystyle\n \\qquad \\qquad\n = {\\rm e}^{-\\frac{\\Delta z}{\\tau_k}} \\ \\indLT{n}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \\ .\n \\end{array}\n\\end{equation}\nSince the second term of Eq.~\\eqref{eq:Tve_incremental_split} cannot be solved in closed form, a numerical procedure has to be applied. 
Here again, the midpoint method is employed:\n\\begin{equation}\n\\label{eq:Tve_incremental_midpoint}\n\\begin{array}{l}\n\\displaystyle\n - \\int\\limits_{\\indLT{n}z}^{\\indLT{n}z + \\Delta z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z + \\Delta z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\ {\\rm d}s\n \\\\[4mm]\n\\displaystyle\n \\qquad\\quad \\approx -\\Delta z \\\n 2 \\mu_k\\,{\\rm e}^{-\\frac{ \\Delta z}{2\\,\\tau_k}} \n \\ \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\triangle^{\\minus 1}\\Big|_{\\indLT{n}z + \\frac{\\Delta z}{2}} \\ .\n\\end{array} \n\\end{equation}\nFurthermore, the material time derivative $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\SLdreieck^{\\minus 1}$ is approximated by\n\\begin{equation}\n\\label{eq:Tve_CG_approx}\n\\begin{array}{l}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\triangle^{\\minus 1}\\Big|_{\\indLT{n}z + \\frac{\\Delta z}{2}}\n \\approx\n \\dfrac{1}{\\Delta z} \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big) \\ .\n\\end{array} \n\\end{equation}\nA substitution of Eqs.~\\eqref{eq:Tve_incremental_first_term} - \\eqref{eq:Tve_CG_approx} into Eq.~\\eqref{eq:Tve_incremental_split} yields the incremental representation of the stresses $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k}$ of one single Maxwell element\n\\begin{equation}\n\\label{eq:Tve_incremental_single}\n\\begin{array}{l}\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \n = {\\rm e}^{-\\frac{\\Delta z}{\\tau_k}} \\ \\indLT{n}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \\, +\n \\\\[5mm]\\displaystyle\n \\qquad \\qquad \\quad \n - 2 \\mu_k\\,{\\rm e}^{-\\frac{ \\Delta z}{2\\,\\tau_k}} \n \\ \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big) \\ .\n\\end{array}\n\\end{equation}\nBased on this result, the total viscoelastic stress contribution can be calculated by the help of Eq.~\\eqref{eq:stress_visc}.\n\n\n\\subsection{ANSYS specific stress and material tangent}\n\\label{sec:FEM_implementation_ANSYS} \n\nThe material model has been implemented into the commercial finite element software \\textit{ANSYS}$^{\\rm TM}$ by the help of the user subroutine \\textit{USERMAT} \\cite{Ansys_allg_2011}. Within a calculation step $\\Delta t = t_{n+1} - t_{n}$, several input variables are transferred to the user subroutine. These are, for example, the deformation gradients $\\indLT{n}\\STAPEL F!_\\SLstrich!_\\SLstrich$ and $\\indLT{n+1}\\STAPEL F!_\\SLstrich!_\\SLstrich$ as well as user defined internal variables of the previous time step. In the opposite direction, appropriate output values have to be transferred to the software. The output includes the stress and the material tangent operator. Moreover, internal variables have to be updated for the next calculation step. \n\nThe material model described in Sections \\ref{sec:Curing_model} and \\ref{sec:Curing_model_constitutive_functions} has been set up in total Lagrangian representation. Thus, the stresses are given in the form of the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$. 
Its corresponding material tangent operator $\\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde$ is defined by\n\\begin{equation}\n\\label{eq:total_lagrange}\n \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde!^\\SLdreieck = \\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \\ ,\n \\qquad\n \\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde \n = \\dfrac{\\partial \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich} \\ .\n\\end{equation}\nSince \\textit{ANSYS}$^{\\rm TM}$ uses an updated Lagrangian representation for the formulation of finite strain material models \\cite{Ansys_allg_2011}, appropriate transformations of the stress and material tangent measures have to be performed. An \\textit{ANSYS}$^{\\rm TM}$ specific stress tensor $\\Ten2 \\sigma_U$ and its corresponding material tangent operator $\\STAPEL k!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U$ are defined on the configuration $\\mathcal{K}_U$ which arises from the polar decomposition theorem. (cf. Fig.~\\ref{fig:defgrad_polar}). \n\\begin{figure}[ht]\n\t\\centering\n\t\t\t\\includegraphics[width=0.35\\textwidth]{Fig7}\n\t\\caption{Polar decomposition of the deformation gradient.}\n\t\\label{fig:defgrad_polar}\n\\end{figure}\nAccording to the polar decomposition theorem, the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich$ gets decomposed into pure stretch tensors and a pure rotation tensor \n\\begin{equation}\n\\label{eq:polar_decompo}\n \\STAPEL F!_\\SLstrich!_\\SLstrich = \\Ten2 V \\cdot \\Ten2 R = \\Ten2 R \\cdot \\Ten2 U \\ .\n\\end{equation}\nTherein, $\\Ten2 V$ and $\\Ten2 U$ are the positive definite symmetric left and right stretch tensor, respectively. Furthermore, $\\Ten2 R$ is the orthogonal rotation tensor, thus $\\Ten2 R^{\\minus 1} = \\Ten2 R^T$ holds. 
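\n\nIn a numerical implementation, the tensors $\\Ten2 U$ and $\\Ten2 R$ can be obtained, for instance, from the spectral decomposition of the right Cauchy-Green tensor. A minimal sketch (illustrative Python\/NumPy) reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef polar_decomposition(F):\n    # right polar decomposition F = R.U via the\n    # spectral decomposition of C = F^T.F\n    C = F.T @ F\n    lam, N = np.linalg.eigh(C)     # C symmetric pos. definite\n    U = N @ np.diag(np.sqrt(lam)) @ N.T   # right stretch tensor\n    R = F @ np.linalg.inv(U)       # rotation, R^(-1) = R^T\n    return R, U\n\\end{verbatim}\n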
By the help of the stretch tensor $\\Ten2 U$, the stress tensor $\\Ten2 \\sigma_U$ is obtained by the push forward operation of the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ \n\\begin{equation}\n\\label{eq:Cauchy_pull_back_final}\n \\Ten2 \\sigma_U \n \\,=\\, \\dfrac{1}{J} \\, \\Ten2 U \\cdot \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\cdot \\Ten2 U \n \\,=\\, \\dfrac{1}{J} \\, \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\ .\n\\end{equation}\nHere, Eq.~\\eqref{eq:S24} has been applied for the definition of the fourth order tensor\n\\begin{equation}\n\\label{eq:TensM}\n \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U\n = \\left( \\Ten2 U \\otimes \\Ten2 U\\right)^{S_{24}} \\ .\n\\end{equation}\nAccording to \\cite{Ihlemann_2006} and \\cite{Rendek_Lion_2010}, the corresponding material tangent $\\STAPEL k!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U$ is calculated by the operation\n\\begin{equation}\n\\label{eq:tangent_euler_rot_final}\n \\STAPEL k!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U\n = \\dfrac{2}{J} \\ \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}}\n \\left\\{\n \\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde \n + \\left( \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\otimes \\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1}\\right)^{S_{24}}\n \\right\\}\n \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U \\ .\n\\end{equation}\n\nEqs. \\eqref{eq:Cauchy_pull_back_final} and \\eqref{eq:tangent_euler_rot_final} have been implemented into the user subroutine \\textit{USERMAT} right after the computation of the Lagrangian tensors $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ and $\\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde $.\n\n\\subsection{Algorithmic correction of the initial volume}\n\\label{sec:FEM_implementation_correction}\n\nIn this section, an algorithm is presented which enables the consideration of initial conditions that differ from reference values used in constitutive equations for the representation of thermochemical volume changes. \n\nIn Section \\ref{sec:Constitutive_functions_Volume}, a material function $\\varphi_{\\theta C}(\\theta,q)$ has been introduced which describes changes in volume related to heat expansion and chemical shrinkage processes. 
This function has been formulated with respect to a specific reference temperature $\\STAPEL \\theta!^\\SLtilde$ and a reference value \\mbox{$\\STAPEL q!^\\SLtilde = 0$} for the degree of cure. If the current temperature $\\theta(t)$ and the degree of cure $q(t)$ equal those reference values, the function yields \\mbox{$\\varphi_{\\theta C}(\\STAPEL \\theta!^\\SLtilde,\\STAPEL q!^\\SLtilde) = 1$}. Thus, no change in density and volume is computed. If, however, the temperature or the degree of cure differ from their reference values, a certain volume change would be calculated depending on the differences of the input variables and the specific material properties (i.e. heat expansion and chemical shrinkage coefficients). The same situation may occur right at the beginning of a numerical simulation, i.e. in the initial state. In particular, the initial temperature $\\theta_0$ and the initial degree of cure $q_0$ may vary compared to the previously defined reference values $\\STAPEL \\theta!^\\SLtilde$ and $\\STAPEL q!^\\SLtilde$, respectively. In such a case, a value $\\varphi_{\\theta C}(\\theta_0 \\ne \\STAPEL \\theta!^\\SLtilde,q_0\\ne \\STAPEL q!^\\SLtilde) \\ne 1$ and, consequently, an immediate volume change would be calculated right at the beginning of a simulation. In finite element simulations, this would either lead the finite element mesh to change its volume or volumetric stresses would occur initially. Moreover, a distortion of the finite element mesh might occur.\n\nIn order to avoid this undesired behaviour and to keep the initial volume of a finite element mesh constant, a correction is made at the beginning of a simulation. To this end, the reference state and the initial state at $t=0$ of a simulation are strictly separated by introducing a reference configuration $\\STAPEL {\\cal K}!^\\SLtilde$ and an initial configuration $\\mathcal{K}_0$ according to Fig.~\\ref{fig:defgrad_korr}.\\footnote{The initial configuration $\\mathcal{K}_0$ can be interpreted as a new reference \\cite{Shutov_Etal_2012}.}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\t\t\\includegraphics[width=0.35\\textwidth]{Fig8}\n\t\\caption{Decomposition of the deformation gradient for the distinction between reference and initial configuration}\n\t\\label{fig:defgrad_korr}\n\\end{figure}\n\nThe reference configuration $\\STAPEL {\\cal K}!^\\SLtilde$ is constituted by reference values for the temperature $\\STAPEL \\theta!^\\SLtilde$, the degree of cure \\mbox{$\\STAPEL q!^\\SLtilde = 0$} and the mass density $\\JI\\STAPEL\\varrho!^\\SLtilde\\!$. Likewise, the initial state $\\mathcal{K}_0$ at time $t=0$ of a simulation is represented by the initial values \n\\begin{equation}\n\\theta_0 = \\theta(t=0), \\ q_0 = q(t=0), \\ \\varrho_0 = \\varrho(t=0) \\ . \n\\end{equation}\n\nNext, the different deformation paths occurring in Fig. \\ref{fig:defgrad_korr} are defined. Firstly, a new deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich^{\\rm new}$ is introduced which represents the true deformation within a simulation. Accordingly, the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$ represents the true thermochemical volume change. 
The latter operator is constituted by\n\\begin{equation}\n\\label{eq:F_thetaC_init}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C} = J_{\\theta C}^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich ,\n\\quad\n J_{\\theta C} = \\dfrac{{\\rm d}V_{\\theta C}}{{\\rm d} V_0} = \\dfrac{\\varrho_0}{\\varrho_{\\theta C}} \\ ,\n\\end{equation}\nwhere $J_{\\theta C}$ is the true thermochemical volume ratio that occurs in a simulation. Since the initial state $\\mathcal{K}_0$ is assumed to be deformation free, the condition \n\\begin{equation}\n\\label{eq:cond_init_vol}\n\\STAPEL F!_\\SLstrich!_\\SLstrich^{\\rm new}(t=0) \\, = \\, \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}(t=0) \\, = \\, \\STAPEL I!_\\SLstrich!_\\SLstrich \n\\end{equation}\nholds. Next, the mapping between the reference configuration $\\STAPEL {\\cal K}!^\\SLtilde$ and the initial configuration $\\mathcal{K}_0$ is represented by an isotropic deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_0$\n\\begin{equation}\n\\label{eq:F_0}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{0} \n = J_0^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich , \\qquad\n J_0 \n = \\dfrac{{\\rm d}V_0}{{\\rm d}\\STAPEL V!^\\SLtilde} \n = \\dfrac{\\JI\\STAPEL\\varrho!^\\SLtilde\\!}{\\varrho_0} \\ .\n\\end{equation}\nHere, $J_0$ represents the volume ratio between the reference and the initial configuration. This value is not a function of time and thus remains constant throughout the whole deformation process. Moreover, $J_0$ will be interpreted as a correction of the initial volume. \n\nFinally, a mapping between the reference state $\\STAPEL {\\cal K}!^\\SLtilde$ and the configuration $\\mathcal{K}_{\\theta C}$ is introduced as follows:\n\\begin{equation}\n\\label{eq:F_thetaC_ref}\n \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C} = \\varphi_{\\theta C}^{1\/3}\\big(\\theta,q\\big) \\ \\STAPEL I!_\\SLstrich!_\\SLstrich , \\quad\n \\varphi_{\\theta C}\\big(\\theta,q\\big) = \\dfrac{{\\rm d}V_{\\theta C}}{{\\rm d}\\STAPEL V!^\\SLtilde} = \\dfrac{\\JI\\STAPEL\\varrho!^\\SLtilde\\!}{\\varrho_{\\theta C}} \\ .\n\\end{equation}\nNote that $\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C}$ is defined by the constitutive function $\\varphi_{\\theta C}\\big(\\theta,q\\big)$ (see Eq.~\\eqref{eq:phi_thetaC}) and thus only depends on the current temperature $\\theta(t)$ and degree of cure $q(t)$. It represents a hypothetical volume ratio that would occur if no correction is computed.\n\nNext, the initial correction is calculated. According to Fig. \\ref{fig:defgrad_korr}, the relation between the deformation gradients \\eqref{eq:F_thetaC_init}, \\eqref{eq:F_0} and \\eqref{eq:F_thetaC_ref} at arbitrary values $\\theta$ and $q$ reads as\n\\begin{equation}\n\\label{eq:F0_decompo}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}(\\theta, q) = \\STAPEL F!_\\SLstrich!_\\SLstrich_0^{\\minus 1} \\cdot \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C}(\\theta, q) \\ ,\n\\end{equation}\nSince $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$, $\\STAPEL F!_\\SLstrich!_\\SLstrich_0$ and $\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C}$ are isotropic, a similar relation can be formulated for the corresponding volume ratios:\n\\begin{equation}\n\\label{eq:J0_decompo}\n J_{\\theta C}(\\theta,q) = \\dfrac{\\varphi_{\\theta C}\\big(\\theta,q\\big)}{J_0} \\ .\n\\end{equation}\nRecall that the function $\\varphi_{\\theta C}\\big(\\theta,q\\big)$ only depends on the current temperature and the current degree of cure, and thus is fully determined. 
In contrast, the values $J_{\\theta C}\\big(\\theta,q\\big)$ and $J_0$ remain to be calculated. To this end, the condition \\eqref{eq:cond_init_vol} is employed. It takes into account that no volume change occurs in the initial state. Thus, it is stated that the initial volume remains unaffected by different initial values $\\theta_0$ and $q_0$. More precisely, \n\\begin{equation}\n\\label{eq:init_vol}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}(\\theta_0, q_0) = \\STAPEL I!_\\SLstrich!_\\SLstrich \n \\ \\ \\Leftrightarrow \\ \\ \n J_{\\theta C} (\\theta_0, q_0) = 1 \\ \n\\end{equation}\nis assumed. Based on this condition, the initial correction $J_0$ is calculated by evaluation of Eq.~\\eqref{eq:J0_decompo} at the initial state $\\theta = \\theta_0$ and $q = q_0$, which yields\n\\begin{equation}\n\\label{eq:J0_result}\n J_{0} = \\varphi_{\\theta C}(\\theta_0, q_0) = \\text{const.} \\ .\n\\end{equation}\nFurthermore, the initial mass density $\\varrho_0$ for a certain set of values $\\theta_0$ and $q_0$ is adjusted by \n\\begin{equation}\n\\label{eq:rho0_result}\n \\varrho_{0} = \\dfrac{\\JI\\STAPEL\\varrho!^\\SLtilde\\!}{J_0}= \\text{const.} \\ .\n\\end{equation}\nIn summary, the algorithm works as follows. Firstly, the initial correction $J_0$ is calculated within the first load step by Eq.~\\eqref{eq:J0_result}. This value is stored and can be accessed throughout all subsequent load steps of the simulation. Within each load step, the function $\\varphi_{\\theta C}\\big(\\theta,q\\big)$ constituted by Eq.~\\eqref{eq:phi_thetaC} is evaluated and the true thermochemical volume ratio $J_{\\theta C}\\big(\\theta,q\\big)$ is computed by Eq. \\eqref{eq:J0_decompo}.\n\n\n\n\\section{Finite element simulation of PMCs}\n\\label{sec:Finite_element_simulation}\n\nIn this section, the material model is applied within finite element simulations regarding the newly proposed manufacturing process for deep drawn PMCs (cf. Section~\\ref{sec:Introduction}). Since this paper primarily focuses on phenomena related to the adhesive's curing reaction, the simulation of the forming step is reduced to a simplifying approximation based on a more complex forming simulation presented by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. Nevertheless, the forming step cannot be omitted since the subsequent volume shrinkage of the adhesive leads to quite different results concerning secondary deformations of the PMC or evolving strains on the MFC. \n\nThe considered finite element model is related to one specific deep drawing geometry (see Section~\\ref{sec:Finite_element_model}). Based on this model, two different manufacturing processes are investigated. Firstly, simulations on the new manufacturing process (see Section~\\ref{sec:Introduction}) are conducted. To this end, Section~\\ref{sec:Part_I_Forming_step} deals with the simplified forming step where the adhesive is not yet cured. The subsequent curing of the adhesive is considered in Section \\ref{sec:Part_II_Curing_process}. To compare the obtained results regarding the impact on the formed PMC and the MFC, an alternative manufacturing process is investigated as well. 
In Section~\\ref{sec:reversed_process} the order of forming and curing is switched which will illustrate the negative influence on the MFC when forming takes place after the adhesive has been fully cured.\n\n\\subsection{Finite element model}\n\\label{sec:Finite_element_model}\n\nThe specific example of sheet metal forming simulation considered in this paper relies on a deep drawn rectangular cup geometry as presented in \\cite{Neugebauer_Etal_2013_WGP}. A metal sheet with in-plane dimensions of $200 \\, \\rm mm$ x $130 \\, \\rm mm$ is to be formed. A cover metal sheet and a MFC are bonded to the structure by an adhesive as described in Section \\ref{sec:Introduction}. A schematic illustration of a quarter of the final formed deep drawing cup is depicted in Fig.~\\ref{fig:pic_deep_drawing}. \n\n\\begin{figure}[ht]\n\t\\centering\n \\includegraphics[width=0.45\\textwidth]{Fig9}\n\t\\caption{Quarter section of the deep drawn PMC with the section of the finite element model (dashed line) }\n\t\\label{fig:pic_deep_drawing}\n\\end{figure}\nDeduced from the objectives of this work, the finite element model used in this work is confined to the MFC's surrounding region of the PMC (see the dashed line in Fig.~\\ref{fig:pic_deep_drawing}). Furthermore, two planes of symmetry are utilized such that the final model covers a quarter of the inner part of the PMC. Its basic area coincides to the size of the quarter aluminium cover sheet (Fig.~\\ref{fig:pic_model}). Taking into account the thicknesses of the different layers (see Table~\\ref{tab:Thicknesses}), the overall dimensions of the employed finite element model are $42.5 \\, \\rm mm$ x $35 \\, \\rm mm$ x $2.9 \\, \\rm mm$.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig10}\n\t\\caption{Geometry of the model with aluminium layers (1), \n\t\t\t\t\t spacer (2), adhesive layer (3) and \n\t\t\t\t\t macro fibre composite (4) with its active part (5)}\n\t\\label{fig:pic_model}\n\\end{figure}\n\n\n\\begin{table}[ht]\n \\caption{Thicknesses of the PMC layers in the finite element model}\n \\vspace{.5ex}\n \\centering\n {\\begin{tabular}{p{.05cm}p{4.4cm}p{2cm}p{.05cm}}\n \\hline & & & \\\\[-3mm]\n &layer & thickness&\\\\ \n \\hline & & & \\\\[-3mm]\n &aluminium cover sheet & $0.8 \\ \\rm mm$ &\\\\[1mm]\n &adhesive above MFC & $0.15 \\ \\rm mm$ &\\\\[1mm]\n ¯o fibre composite & $0.3 \\ \\rm mm$ &\\\\[1mm]\n &adhesive below MFC & $0.15 \\ \\rm mm$ &\\\\[1mm]\n &aluminium bottom sheet & $1.5 \\ \\rm mm$ &\\\\[1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:Thicknesses}\n\\end{table} \n\nBoth, the PMC and the finite element model consist of several components with remarkably different material behaviour. Therefore, different material models are applied for the layers of the PMC. To represent the curing behaviour of the adhesive layer, the model and its corresponding material parameters presented in Section \\ref{sec:Curing_model} are employed. The material behaviour of the aluminium sheets is described by an elastic-plastic model provided by \\textit{ANSYS}$^{\\text{TM}}$ \\cite{Ansys_allg_2011}. The model incorporates plastic hardening effects represented by a bilinear kinematic hardening rule.\\footnote{ For even more exact prediction of the residual stresses and spring back, material models with nonlinear kinematic hardening are needed \\cite{Shutov_Kreissig_2008}.} The corresponding material parameters are chosen according to the manufacturer's specifications \\cite{ENAW5083_2002}. 
The active part of the MFC with its unidirectional piezoceramic fibres aligned in the y-direction is represented by a model of transversely isotropic elasticity \\cite{Giddings_2009,MFC_2012}. The non-active part of the MFC as well as the spacer are modelled by isotropic elasticity laws. A summary of the employed material parameters for the described materials is given in Table~\\ref{tab:MatPar_FE_Model}.\n\n\n\\begin{table}[ht]\n \\caption{Material parameters for the different layers of the PMC}\n \\vspace{.5ex}\n \\centering\n {\\begin{tabular}{|lll|}\n \\hline && \\\\[-3mm]\n \\multicolumn{3}{|l|}\n {\\underline{Metal sheets: Elastoplasticity with kinematic hardening}}\\\\[3mm]\n & $E \\,\\,\\, = \\ 70000 \\ \\rm MPa$ \n & $\\ \\nu \\,\\,\\, = \\ 0.31$ \\\\[1mm] \n & $\\sigma_F = \\ 150 \\ \\rm MPa$ \n & $\\ E_T = \\ 1015 \\ \\rm MPa$ \\\\[4mm]\n \\multicolumn{3}{|l|}\n {\\underline{Active part of the MFC: Transversely isotropic elasticity} }\\\\[3mm]\n & $E_y \\, \\; = \\ 30336 \\ \\rm MPa$\n & $\\ E_x \\, \\; = \\ E_z = 15857 \\ \\rm MPa$ \\\\ [1mm]\n & $\\nu_{xz} \\,\\, = \\ 0.31$ \n & $\\ \\nu_{xy} \\,\\, =\\ \\nu_{zy}= 0.16$ \\\\[1mm]\n & $G_{xy} = \\ G_{zy}= 5515 \\ \\rm MPa$ \n & $\\ G_{zx} = \\ \\frac{E_{x}}{2\\left( 1 + \\nu_{xz} \\right)}$ \\\\[4mm] \n \n \\multicolumn{3}{|l|}\n {\\underline{Non-active part of the MFC: Isotropic elasticity}}\\\\[3mm]\n & $E_{kapt} = \\ 3500 \\ \\rm MPa$ \n & $\\ \\nu_{kapt} = \\ 0.33$ \\\\[4mm]\n \\multicolumn{3}{|l|}\n {\\underline{Spacer: Isotropic elasticity}}\\\\[3mm]\n & $E_{tape} = \\ 8000 \\ \\rm MPa$ \n & $\\ \\nu_{tape} = \\ 0.35$ \\\\[1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_FE_Model}\n\\end{table} \n\n\nThe finite element model has been meshed by a bottom-up approach with three-dimensional structural solid elements, each incorporating eight nodes and linear shape functions (Fig.~\\ref{fig:pic_Mesh}). The complete mesh consists of about $40000$ elements, of which $14000$ involve the adhesive's material model. To avoid volume locking effects within the adhesive layer, a mixed u-p formulation is used for the corresponding elements. Furthermore, radii at the outer edges of the MFC have been modelled to reduce effects of stress concentration (see highlighted region of Fig.~\\ref{fig:pic_Mesh}).\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig11}\n\t\\caption{Finite element mesh with highlighted radii at the outer edge of the MFC}\n\t\\label{fig:pic_Mesh}\n\\end{figure}\n\n\n\\subsection{Part I: Forming step}\n\\label{sec:Part_I_Forming_step}\n\nThe first step of the simulation process approximates the deep drawing of the PMC's inner part. To this end, node displacements from the deep drawing simulation in \\cite{Neugebauer_Etal_2013_WGP} have been interpolated and defined as boundary conditions on the bottom sheet (see Fig.~\\ref{fig:pic_boundary_conditions_I}). Prescribing the deformation of the inner part in this way makes it possible to focus on the regions surrounding the adhesive while reducing the computational effort compared to a complete deep drawing simulation. 
Thus, typical challenges in sheet metal forming simulations like wrinkling, spring-back and contact formulations are avoided.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{Fig12}\n\t\\caption{Interpolated displacement values at the bottom (1) \n\t and symmetry boundary conditions (2) on the model \n\t for the simulation of the forming step}\n\t\\label{fig:pic_boundary_conditions_I}\n\\end{figure}\n\nWithin the forming step, the adhesive has not yet reached its gelation point and, thus, can be treated as a viscoelastic fluid \\cite{Winter_1986}. To adopt the fluid like material behaviour, a simplified model is used which includes pure elastic behaviour with respect to volume changes and viscoelastic behaviour in case of isochoric deformations. The viscoelastic behaviour is represented by one single Maxwell element with constant material parameters. Here, an \\textit{ANSYS}$^{\\rm TM}$ built in viscoelastic model has been used. The material parameters are $K = 5000 \\ \\rm MPa$ for the bulk modulus, $c = 2.0 \\ \\rm MPa$ for the Neo-Hookean stiffness, and $\\tau = 20.0 \\ \\rm s$ for the relaxation time. \n\nThe simulation time of the forming step is $1000 \\ \\rm s$. The forming itself takes $30 \\ \\rm s$. The remaining period of $970 \\ \\rm s$ is included to achieve a relaxed state in the adhesive. This procedure has been chosen due to numerical difficulties when using shorter simulation times for the forming step or applying smaller values for the adhesive's relaxation time. Some numerical investigations on appropriate representations of the liquid adhesive during forming were conducted in \\cite{Neugebauer_Etal_2013_WGP}. However, the aim of this simulation step is to obtain a finite element mesh of the formed PMC. Thus, the described procedure is assumed to be sufficient for the needs of this work. \n\nResulting from the forming simulation, Fig.~\\ref{fig:pic_uz_bottom} shows the circularity of the contour lines of the displacement $u_z$ which points to a good reproduction of the profile generated by the rectangular punch with a double curvature of $100 \\, \\rm mm$.\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig13}\n\t\\caption{Perspective view (left) and bottom view (right) of the \n\t\t\t\t\t model's $u_z$-displacement due to forming}\n\t\\label{fig:pic_uz_bottom}\n\\end{figure}\n\nIn order to analyse the functionality of the formed PMC, the strain affecting the MFC has to be examined. According to manufacturer's specifications, the linear elastic tensile strain limit of the MFC's brittle piezoceramic fibres is about $1 \\! \\cdot \\! 10^{-3}$~\\cite{Daue_Kunzmann_2010,MFC_2012}. If this strain level is exceeded, risk of depolarization effects increases significantly and the MFC might possibly not be able to maintain its sensor and actuator functionalities in the required magnitude ~\\cite{Daue_Kunzmann_2010}. Complete failure of the MFC occurs, if a maximum operational tensile strain of $4.5 \\! \\cdot \\! 10^{-3}$ is exceeded \\cite{MFC_2012}. \n\nAs a result of the conducted forming simulation, Fig.~\\ref{fig:pic_MFC_eps_y} reveals that the deformation of the forming step exceeds the linear tensile strain limit by two to three times. However, failure of the MFC is not predicted since the strain magnitudes are below the maximum operational tensile strain. Moreover, it can be seen that there are compressed and stretched regions which points to major influence of bending deformation on the MFC. 
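\n\nBefore turning to the curing step, a simple estimate (our own, not taken from the cited references) illustrates why the chosen hold period is sufficient: for a single linear Maxwell element held at constant deformation, the deviatoric stress decays as $\\exp(-t/\\tau)$, so the remaining period of $970 \\, \\rm s$ with $\\tau = 20.0 \\ \\rm s$ reduces the deviatoric stresses by a factor of $\\exp(-970/20) \\approx 10^{-21}$. The adhesive therefore enters the subsequent curing simulation in a practically relaxed state.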
\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig14}\n\t\\caption{Total mechanical strain $\\varepsilon_{y}$ along the orientation of fibres on the top (left) and the bottom (right) of the active part of the MFC}\n\t\\label{fig:pic_MFC_eps_y}\n\\end{figure}\n\n\n\\subsection{Part II: Curing process}\n\\label{sec:Part_II_Curing_process}\n\nThe second part of the simulation gives insight into the impact of the adhesive's curing and its associated volume shrinkage on the PMC. In order to obtain a continuous process chain of simulation, the deformed mesh of the simplified forming simulation presented in Section \\ref{sec:Part_I_Forming_step} is employed as starting point for the curing simulation. To account for curing phenomena in the adhesive, the viscoelastic material model used within the forming step is replaced by the material model presented in Sections~\\ref{sec:Curing_model} and \\ref{sec:Curing_model_constitutive_functions}. The point of time to conduct this change of material laws is set to the gelation point at which the fluid turns into a solid \\cite{Winter_1986}. According to Meuwissen \\etal \\cite{Meuwissen_Etal_2004} the gelation point is represented by a degree of cure \\mbox{$q \\approx 0.5-0.6$}. Here, the initial value is set to $q_0 = 0.5$. \n\nWithin the curing simulation, the applied boundary conditions are confined to symmetries at the two cross-sectional planes as can be seen in Fig.~\\ref{fig:pic_boundary_conditions_I}. The boundary conditions on the bottom sheet are released. Since no residual stresses are transferred from the forming simulation to the curing step, no initial spring-back is observed. The simulation time of the curing simulation is set to $1800 \\, \\rm s$ according to manufacturer's specifications \\cite{DP410_2003}. Additionally, a constant temperature $\\theta = 318 \\, K$ is prescribed. As an example for the decisive effects of adhesive curing process, Fig.~\\ref{fig:pic_J3_section} shows the mechanical volume ratio~$J$ resulting from the material's volume shrinkage. The final degree of cure at this stage is $q = 0.89$.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig15}\n\t\\caption{Volume ratio~$J$ of the PMC after the curing process shows the adhesive's chemically induced shrinkage (cover sheet is hidden, highlighted region indicates the adhesive)}\n\t\\label{fig:pic_J3_section}\n\\end{figure}\n\nNote that if a free volume shrinkage is assumed, $J$ would be equally distributed throughout the entire adhesive volume. The apparent deviations of~$J$ shown in Fig.~\\ref{fig:pic_J3_section} are caused by supporting effects of adjacent materials. These supporting effects can also be observed by viewing the displacements in normal direction of the free surfaces of the aluminium layers (see Fig.~\\ref{fig:pic_uz_curing}). As a consequence of their different thicknesses, the resulting normal displacements at the free surface of the top layer are more pronounced than these of the bottom layer. \n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig16}\n\t\\caption{Surface markings due to the adhesive's volume shrinkage during the curing process on top (left) and at the bottom (right) of the finite element model}\n\t\\label{fig:pic_uz_curing}\n\\end{figure}\n\nAnalogous to the investigations of the simplified for\\-ming simulation (cf. 
Section \\ref{sec:Part_I_Forming_step}), the mechanical strain $\\varepsilon_{y}$ of the piezoceramic fibres is examined and presented in Fig.~\\ref{fig:pic_MFC_eps_y_curing}. \n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig17}\n\t\\caption{Total mechanical strain~$\\varepsilon_{y}$ in the direction of the orientation of the fibres on the top (left) and the bottom (right) of the MFC due to the chemical curing process}\n\t\\label{fig:pic_MFC_eps_y_curing}\n\\end{figure}\n\nAs can be seen from Fig.~\\ref{fig:pic_MFC_eps_y_curing}, the strain induced by the curing reaction alone does not exceed the linear elastic tensile strain limit. However, superimposing the strains of the forming and the curing simulation reveals that the MFC is loaded additionally in the compressed zones. For the example presented herein, with its adhesive layer thickness, its remaining geometric dimensions and the given forming displacements, this additional influence is rather small. Nevertheless, deviations from these conditions (e.g. smaller radii or a thicker adhesive layer) may reverse the relative magnitudes of the strain amplitudes caused by the forming and the curing process.\n\n\\subsection{Comparison to reversed manufacturing process}\n\\label{sec:reversed_process}\n\nFinally, an analysis of a reversed manufacturing process demonstrates the protective function of the adhesive for the MFC. The concluding simulation is conducted with the material model presented in Sections~\\ref{sec:Curing_model} and \\ref{sec:Curing_model_constitutive_functions}. Here, an initial degree of cure of~$q_0=0.5$ is chosen and a constant temperature of $\\theta = 318 \\,K$ is prescribed. The simulation time of the curing process is $1800 \\, \\rm s$. Within this period of time, the chemical reaction finishes at a value of $q = 0.89$, which is in accordance with the simulation in Section \\ref{sec:Part_II_Curing_process}. Subsequently, the forming step with the cured adhesive is simulated. In Fig.~\\ref{fig:pic_MFC_eps_y_reverse}, the resulting tensile strain along the fibre direction is depicted.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig18}\n\t\\caption{Total mechanical strain~$\\varepsilon_{y}$ along fibre orientation on the top (left) and the bottom (right) of the MFC due to forming with already cured adhesive}\n\t\\label{fig:pic_MFC_eps_y_reverse}\n\\end{figure}\n\nAs Fig.~\\ref{fig:pic_MFC_eps_y_reverse} reveals, with this strategy the entire MFC is uniformly stretched and the maximum strain is almost four times higher than in the simulation of the actually intended manufacturing process shown in Sections~\\ref{sec:Part_I_Forming_step} and \\ref{sec:Part_II_Curing_process}. Moreover, not only the linear elastic tensile strain limit but also the maximum operational tensile strain of $4.5 \\! \\cdot \\! 10^{-3}$ is exceeded. Hence, failure of the MFC is predicted in this simulation. In conclusion, it can be confirmed that the liquid adhesive protects the MFC during the forming process (see results in Sections \\ref{sec:Part_I_Forming_step} and \\ref{sec:Part_II_Curing_process}).\n\\section{Conclusion and discussion}\n\\label{sec:Conclusions}\n\nThe present work aims at supporting investigations of the innovative manufacturing process for smart PMCs. Evidently, the role of the adhesive is crucial due to its specific material behaviour and its geometrical design. To allow for investigations on this issue, a general material modelling approach is presented in Section \\ref{sec:Curing_model}. 
Furthermore, a concretized model for the representation of curing phenomena in one specific adhesive is described in Section \\ref{sec:Curing_model_constitutive_functions}. This model is able to capture the main characteristics of the two-component epoxy based adhesive considered in this paper. Moreover, the thermodynamic consistency of the model could be proved. \n\nIn Section \\ref{sec:FEM_implementation}, different aspects of numerical implementation into the finite element software \\textit{ANSYS}$^{\\rm TM}$ have been discussed. Beside the numerical integration of constitutive equations and the derivation of software specific stress and material tangent tensors, a new algorithm has been proposed which suppresses undesired volume changes at the beginning of numerical simulations. Those volume changes may result from thermal expansion and chemical shrinkage when initial values for the temperature and the degree of cure differ from their reference values. Only by the help of the new algorithm, the consideration of heterogeneous initial fields of temperature and degree of cure has been made possible.\n\nFinally, in Section \\ref{sec:Finite_element_simulation} a finite element model for a deep drawn rectangular cup has been built up as representative example for the innovative manufacturing process. The model is based on experimental and numerical studies presented by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. However, in the present work a simplified model which includes only the closest surroundings to the MFC has been employed. Based on the results of the forming and curing simulations, it can be stated that the simplified forming step is capable of producing a sufficiently accurate geometry as basis for the curing simulation. \n\nTo highlight the benefit of the new manufacturing process, a reversed sequence of production has been investigated as well and the results of both processes have been compared. In Section \\ref{sec:reversed_process} it has been predicted that failure of the MFC occurs if the PMC is formed after the adhesive is completely cured. Hence, it can be seen that the new manufacturing process considered in this paper exhibits particular potential to overcome these shortcomings. Here, the floating support resulting from the uncured adhesive prevents sufficiently from overloading or delamination during the forming step.\n\nBased on the present work, which shows the qualitative feasibility of the proposed material model as well as the strategy of numerical simulation, different aspects are planned to be conducted in the future. This includes complete identification of material parameters based on experimental investigations and the extension to thermomechanically coupled simulations analogous to \\cite{Landgraf_Etal_2012,Landgraf_Etal_2013} or \\cite{Mahnken_2013}. Finally, it is intended to employ the presented approach to the simulation of different deep drawing processes (cf. \\cite{Neugebauer_Etal_2010_ProdEng,Neugebauer_Etal_2013_WGP}). 
\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nA broad range of observations including the cosmic microwave background radiation anisotropies have established\nthe so-called standard cosmological model in which the \nuniverse mainly consists of two unknown substances called dark energy and\ndark matter.\nAccording to the standard model,\nsmall primeval density fluctuations generated in the very early universe grow by gravitational instability, to form \nnonlinear objects called dark matter halos.\n\nDark halos with mass $\\sim 10^5$--$10^6{\\rm \\, M_{\\odot}}$ are thought to be the birth place of the first generation of stars \\citep{1997_Tegmark, 2003_YoshidaAbelHernquist}. \nStar-forming gas clouds are formed through condensation of the primordial gas by molecular hydrogen cooling. \nPopulation~III (\\ion{Pop}{3}) stars are formed \nin the primordial gas clouds when the age of the universe is a few tens\/hundreds million years \n\\citep[e.g,][]{2002_AbelBryanNorman, 2006_Naoz, 2012_Fialkov}.\n\nUltra-violet (UV) and X-ray radiation from the first stars and their remnants ionize and heat the inter-galactic medium (IGM), to initiate cosmic reionization.\nRecent observations including\nthe measurement of the electron scattering optical depth \\citep[e.g.,][]{2016_PlanckCollaboration}, the Gunn-Peterson trough in quasar spectra \\citep{1965_GunnPeterson, Fan:2006, Banados:2018}, and Ly-$\\alpha$ emission from star-forming galaxies \\citep{Mason:2018} suggest that the process of reionization completes by $z\\sim 6$ \\citep[e.g.,][]{Weinberger:2020, Naidu:2020}. \nIt is expected that early stages of reionization can be directly probed by future radio telescopes such as Square Kilometer Array\nthrough observation of redshifted 21-cm emission from neutral hydrogen in the IGM \\citep[see, e.g.,][for a recent review]{2016_Barkana, Mesinger:2019}. \n\nIn the early phase of reionization, the density distribution of the IGM,\nor the so-called gas clumping, is an important factor that critically sets the UV photon budget necessary for reionization.\nDense gas clouds hosted by cosmological minihalos can be significant \nphoton sinks, and their existence and the abundance\naffect the process and duration of reionization.\nUnfortunately, it is nontrivial to derive the abundance \nof the gas clouds and to estimate the effective gas clumping factor,\nbecause small gas clouds are effectively \nphoto-evaporated by the emerging UV background radiation.\nAt the same time, stars can be formed in the gas clouds, \nwhich then act as UV photon {\\it sources}. \n\nA number of studies have investigated early star formation under various environments \\citep[e.g.,][]{1976_Low, 2000_Omukai, 2002_Schneider, 2003_Schneider, 2007_GloverJappsen, 2007_Jappsen, 2009_Jappsen, 2009_Jappsenb, 2009_Smith, 2018_Chiaki, 2019_ChiakiWise}.\nMetal enrichment affects the evolution of gas clouds and subsequent star formation process \nthrough enhanced radiative cooling by heavy element atoms and dust grains \\citep[e.g,][]{2005_Omukai, 2019_Hartwig}. \nAlthough details of low-metallicity star formation \nhas been explored by recent numerical simulations \\citep{2018_Chiaki},\nthe effect of metal-enrichment on halo photoevaporation has not been systematically studied. 
\nIt is important to study the evolution of minihalos \nwith a wide range of metallicities\nin order to model the physical process of cosmic reionization\nin a consistent manner.\n\n\n\nReionization begins as a local process in which an individual\nradiation source generates an \\ion{H}{2} region around it.\n\\cite{1986_Shapiro} and \\cite{1987_ShapiroGiroux} \nuse a one-dimensional model under spherical symmetry to study the ionization front (I-front) propagation through the IGM in a cosmological \ncontext.\nRadiative transfer calculations have been performed in a post-processing\nmanner by using the density field realized in cosmological simulations\n\\citep{1999_Abel, 1999_RazoumovScott, 2001_SokasianAbelHernquist, 2002_Cen, 2003_HayesNorman}. \nFully coupled radiation-hydrodynamics simulations have been used to study reionization in a cosmological volume \\citep{2000_Gnedin, 2001_GnedinAbel, 2002_RicottiGnedinShull, 2004_SusaUmemuraa, 2004_SusaUmemurab, 2014_Wise, 2016_Xu}. Recent simulations of reionization explore the process of cosmic reionization employing large cosmological volumes of $\\sim 100^3$ comoving Mpc$^3$ while resolving the first galaxies in halos with mass of $\\sim 10^8 M_\\odot$\n\\citep{Semelin:2017, Ocvirk:2018}. However, even these state-of-the-art simulations do not fully resolve minihalos \nnor are able to follow the process of photoevaporation,\nand thus it still remains unclear \nhow the small-scale gas clumping affects reionization.\n\n\\cite{2004_Shapiro} and \\cite{2005_Iliev} study the dynamical evolution of minihalos irradiated by UV radiation.\nThey perform 2D radiation hydrodynamics simulations with \nincluding the relevant thermo-chemical processes. They explore a wide range of parameters such as halo mass, redshift, and the strength of UV radiation.\nIt is shown that gas clumping at sub-kiloparsec scales dominate absorption of ionizing photons during the early phase of reionization. \nAn important question is whether or not the minihalos survive\nunder a strong UVB for a long time, over a significant fraction of the age of the universe.\nAnother interesting question is whether or not stars are formed in metal-enriched minihalos. If massive stars are formed, they also contribute to reionization and may thus imprint characteristic features in the $21{\\, \\rm cm}$ signals \\citep{2016_Cohen}.\n\nIn the present paper, we perform a large set of high-resolution radiation hydrodynamics simulations of \nminihalo photoevaporation. \nWe aim at investigating systematically the effects of metal enrichment on the photoevaporation.\nWe evaluate the characteristic photoevaporation time and study its metallicity dependence. We also develop an analytic model\nof photoevaporation and compare the model prediction with our\nsimulation result.\nThe rest of the present paper is organized as follows.\nIn \\secref{sec:methods}, we explain the details of our computational methods. \nThe simulation results are presented in \\secref{sec:results}. \nWe develop an analytical model \nthat describes the physical process of minihalo photoevaporation in \\secref{sec:analytic}. \nWe discuss the physics of minihalo photoevaporation in \\secref{sec:discussion}. 
\nFinally, we give summary and concluding remarks in \\secref{sec:conclusions}.\n\nThroughout the present paper, we assume a flat $\\Lambda$CDM cosmology\nwith $(\\Omega_{\\rm m}, \\Omega_\\Lambda, \\Omega_{\\rm b}, h) = (0.27, 0.73, 0.046, 0.7)$ \\citep{2011_Komatsu}.\nAll the physical quantities in the following are given in physical units.\n\n\n\n\n\\section{Numerical Simulations} \\label{sec:methods}\n\tWe perform \n\tradiation-hydrodynamics simulations \n\tof minihalo photoevaporation by an external UV background radiation. \n\t We run a set of simulations systematically by varying the gas metallicity, dark matter halo mass, intensity of the radiation background and redshift. \n\t\n\tOur simulation set up is schematically shown in \\fref{fig:schematic}. \n\t\\begin{figure*}[htbp]\n\t\t\\begin{center}\n\t\t\\includegraphics[clip, width = \\linewidth-5cm]{schematic.pdf}\n\t\t\\caption{\n\t\t\tWe consider plane-parallel radiation incident on a halo with mass $M$ and metallicity $Z$.\n\t\t\tThe I-front reaches the halo at $z = z_{\\scalebox{0.6}{\\rm IN}}$. \n\t\t\t}\n\t\t\\label{fig:schematic}\n\t\t\\end{center}\t\t\n\t\\end{figure*}\n\t\n\tThe numerical method is essentially the same as in \\cite{2019_Nakatani}, where we study\n\tthe dynamical evolution of molecular gas clouds \n\texposed to an external UV radiation field.\n\tBriefly, we use a modified version of PLUTO \\citep[version 4.1;][]{2007_Mignone}\n\tthat incorporates ray-tracing radiative transfer\n\tof UV photons \n\tand non-equilibrium chemistry.\n\tThe details of the implemented physical processes are found in \n\t\\cite{2018_Nakatani, 2018_Nakatanib}.\n\t\n\tThe simulations are configured with 2D cylindrical coordinates. \n\tThe governing equations are\n\t\\gathering{\n\t\t\\frac{\\partial \\rho_{\\rm b}}{\\partial t} + \\nabla \\cdot \\rho_{\\rm b} \\vec{v} = 0 ,\t\n\t\t\\label{eq:continuity}\\\\\n\t\t\\frac{\\partial \\rho_{\\rm b} v_R}{\\partial t} + \\nabla \\cdot \\left( \\rho_{\\rm b} v_R \\vec{v} \\right) \n\t\t= -\\frac{\\partial P}{\\partial R} - \\rho_{\\rm b} \\partialdif{\\Phi}{R},\n\t\t\t\\\\\n\t\t\\frac{\\partial \\rho_{\\rm b} v_x}{\\partial t} + \\nabla \\cdot \\left( \\rho_{\\rm b} v_x \\vec{v} \\right) \n\t\t= - \\frac{\\partial P}{\\partial x } -\\rho_{\\rm b} \\partialdif{\\Phi}{x} , \\\\\n\t\t\\frac{\\partial \\rho_{\\rm b} E}{\\partial t} + \\nabla \\cdot \\left(\\rho_{\\rm b} H \\vec{v} \\right) \n\t\t= - \\rho_{\\rm b} \\vec{v} \\cdot \\nabla \\Phi \n\t\t+\\rho_{\\rm b} \\left( \\Gamma -\\Lambda \\right), \\\\\n\t\t\\frac{\\partial n_{\\text{\\rm H}} y_i }{\\partial t} + \\nabla \\cdot \\left( n_{\\text{\\rm H}} y_i \\vec{v} \\right)\n\t\t= n_{\\text{\\rm H}} C_i , \\label{eq:chemevoeq}\n\t}\n\twhere $R$ and $x$ are the physical radial distance and vertical height, respectively,\n\tand $t$ is proper time in the frame of the halo. \n\tWe denote the gas density, velocity, and pressure as $\\rho_{\\rm b}$, $\\vec{v} = (v_R, v_x)$, \n\tand $P$. In the equation of motion, $\\Phi$ is the external gravitational potential of the host dark halo.\n\tIn the energy equation, $E$ and $H$ are the total specific energy \n\tand total specific enthalpy, and\n\t$\\Gamma$ and $\\Lambda$ are the specific heating rate \n\tand specific cooling rate.\n\tThe abundance of $i$-th chemical species, $\\abn{i}$, is\n\tdefined by the ratio of the species' number density $n_i$ \n\tto the hydrogen nuclei number density $n_{\\text{\\rm H}}$.\n\tThe total reaction rate is denoted as $C_i$. 
The gas is composed of the following chemical species:\n\tH, \\ce{H+}, \\ce{H2}, \\ce{H2+}, He, CO, \\ce{C+}, O, and \\ce{e-}.\n\tThe elemental abundances of carbon and oxygen are normalized by\n\tthe assumed metallicity $Z$ as\n \t$0.926\\e{-4} \\, Z\/Z_{\\odot}$ \n\tand $3.568\\e{-4} \\, Z \/ Z_{\\odot}$, respectively \\citep{1994_Pollack, 2000_Omukai}.\n\t\n\t\t\n\tWe consider halos with a wide range of masses, \n\t$10^{3} {\\rm \\, M_{\\odot}} \\leq M \\leq 10^8{\\rm \\, M_{\\odot}}$. Each halo\n\thas a Navarro, Frenk \\& White density profile scaled appropriately \\citep{1997_NFW}.\n\tWe assume that the halo potential is fixed and is given by a function of \n\tthe spherical radius $r \\equiv \\sqrt{R^2+x^2}$ as\n\t\\eq{\n\t\t\\rho_{\\rm DM} \\propto \\frac{1 }{c_{\\rm N} \\xi ( 1 + c_{\\rm N} \\xi)^2},\t\\label{eq:nfw}\n\t}\n\twhere $c_{\\rm N}$ is the concentration parameter, \n\tand $\\xi$ is the spherical radius normalized by the virial radius, i.e. $\\xi \\equiv r \/ r_{\\rm vir}$.\t\n\tThe virial radius of a halo collapsing at redshift $z$ is\n\t\\eq{\n\t\\splitting{\n\t\tr_{\\rm vir} = &1.51 \\braket{\\frac{\\Omega_m h^2}{0.141}}^{-1\/3}\n\t\t\t\t\t\t\t\\braket{\\frac{M}{10^8 \\,{\\rm \\, M_{\\odot}}}}^{1\/3} \\\\\n\t\t\t\t\t\t&\\times\t\\braket{\\frac{\\Delta_{\\rm c}}{18\\pi^2}}^{-1\/3}\n\t\t\t\t\t\t\t\\braket{\\frac{1+z}{10}}^{-1}\\,\\, {\\rm kpc} \t\\label{eq:rvir}\n\t\t\t\t\t\t\t}\n\t}\n\twhere $\\Delta_{\\rm c}$ is the overdensity relative to the critical density \n\tof the universe at that epoch.\n\tWe adopt $\\Delta_{\\rm c} =18\\pi^2$.\n\tThen the halo potential $\\Phi$ is explicitly given by\n\t\\gathering{\n\t\t\\Phi (r) \n\t\t\t\t\t= - V_{\\rm c}^2\n\t\t\t\t\t\\frac{\\ln\\braket{1 + c_{\\rm N} \\xi}}{c_{\\rm N} \\xi }\n\t\t\t\t\t\\frac{c_{\\rm N} }\n\t\t\t\t\t{\\ln(1+c_{\\rm N}) - \\dfrac{c_{\\rm N}}{1+ c_{\\rm N}}},\n\t\t\t\t\t\\label{eq:potential}\n\t\\\\\n\tV_{\\rm c} \\equiv \\sqrt{\\frac{GM}{r_{\\rm vir}}}\n\t}\n \n\t\n\tThe initial conditions are given by assuming a fully atomic, isothermal gas in hydrostatic equilibrium with the virial temperature\n\t\\eq{\n\tT_{\\rm vir}\t=\t\\frac{GM \\mu m_{\\rm p}}{2r_{\\rm vir} k_{\\rm B}},\n\t}\n\twhere $\\mu$ is the mean molecular weight, \n\t$m_{\\rm p}$ is the proton mass,\n\tand $k_{\\rm B}$ is the Boltzmann constant. \n\tThe initial density profile is \n\t\\eq{\n\t\t\\rho_{\\rm b} (r)\n\t\t\t= \\hat{\\rho_{\\rm b}} \\exp\\left[-\\frac{\\Phi }{ k_{\\rm B}T_{\\rm vir}\/ \\mu m_{\\rm p}} \\right], \t\n\t\t\t\\label{eq:densityprofile}\n\t}\n\twhere $\\hat{\\rho_{\\rm b}}$ is the normalization factor \n\t\\eq{\n\t\t\t\\hat{\\rho_{\\rm b}} \\equiv \n\t\t\t\\frac{M \n\t\t\t\\Omega_{\\rm b} \\Omega_{\\rm m}^{-1}}\n\t\t\t{\n\t\t\t\\displaystyle\n\t\t\t\\int _0 ^{r_{\\rm vir}} {\\rm d} r\\, 4\\pi r^2\n\t\t\t\\exp\\left[-\\dfrac{\\Phi }{ k_{\\rm B}T_{\\rm vir}\/ \\mu m_{\\rm p}} \\right]}.\n\t}\n\tWith this normalization, the ratio of the gas mass to the total halo mass within $r_{\\rm vir}$ equals the global cosmic baryon fraction\n\t$f_{\\rm b} = \\Omega_{\\rm b} \/ \\Omega_{\\rm m}$.\n\tThe initial density profile is thus specified by $M$, $c_{\\rm N}$, and the redshift $z$.\n\tNote that $\\hat{\\rho_{\\rm b}}$ is not the central density\n\tbut a geometry-weighted average density,\n\twhich is independent of $M$ but scales as $\\propto (1+z)^3$\n\tfor fixed $\\Delta_{\\rm c}$ and $c_{\\rm N}$. 
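\n\nTo illustrate how this initial state is constructed in practice, the following minimal Python script (our own illustrative sketch, independent of the actual PLUTO setup; the concentration parameter $c_{\\rm N}=10$ and the mean molecular weight $\\mu = 1.22$ are assumed values here) evaluates Eqs.~\\eqref{eq:rvir}, \\eqref{eq:potential}, and \\eqref{eq:densityprofile} for a $10^6{\\rm \\, M_{\\odot}}$ halo at $z=10$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\nG, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24   # cgs units\nMsun, kpc = 1.989e33, 3.086e21\nOm, Ob, h = 0.27, 0.046, 0.7                   # cosmology adopted above\nmu, c_N = 1.22, 10.0                           # assumed values\n\ndef r_vir(M, z):          # Eq. (rvir), converted to cm\n    return (1.51*(Om*h**2/0.141)**(-1.0/3)*(M/(1e8*Msun))**(1.0/3)\n            *(10.0/(1.0 + z))*kpc)\n\ndef initial_profile(M, z, N=320):\n    rv = r_vir(M, z)\n    Vc2 = G*M/rv\n    T_vir = G*M*mu*m_p/(2.0*rv*k_B)\n    cs2 = k_B*T_vir/(mu*m_p)                   # equals Vc2/2\n    f = c_N/(np.log(1.0 + c_N) - c_N/(1.0 + c_N))\n    Phi = lambda xi: -Vc2*np.log(1.0 + c_N*xi)/(c_N*xi)*f   # Eq. (potential)\n    # normalization: gas mass within r_vir equals (Ob/Om) M\n    I, _ = quad(lambda xi: 4.0*np.pi*(xi*rv)**2*rv*np.exp(-Phi(xi)/cs2),\n                1e-4, 1.0)\n    rho_hat = (Ob/Om)*M/I\n    xi = np.linspace(1e-3, 1.0, N)\n    return xi, rho_hat*np.exp(-Phi(xi)/cs2), T_vir\n\nxi, rho_b, T_vir = initial_profile(1e6*Msun, 10.0)\n# 1.4 m_p ~ gas mass per hydrogen nucleus\nprint(f"T_vir ~ {T_vir:.0f} K; n_H(center) ~ {rho_b[0]/(1.4*m_p):.1e} cm^-3")\n\\end{verbatim}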
\n\n\tThe initially fully atomic gas in the halo is exposed to plane-parallel UV radiation as illustrated in Figure \\ref{fig:schematic}.\n\tWe follow photoionization by extreme UV (EUV; $13.6\\,{\\rm eV} < h\\nu < 100\\,{\\rm eV}$) photons\n\tand photodissociation by far-UV photons in the Lyman-Werner (LW) band (FUV; $11.2\\,{\\rm eV} \\lesssim h\\nu < 13.6 \\,{\\rm eV}$).\n\tThe UV spectrum is given by \n\t\\eq{\n\t\tJ(\\nu) = J_{21} \\braket{\\frac{\\nu}{\\nu_1}}^{-\\alpha} \\e{-21} \n\t\t\\unit{erg}{}\\unit{s}{-1}\\unit{cm}{-2}\\unit{Hz}{-1}\\unit{sr}{-1},\t\\label{eq:UVSED}\n\t}\n\twhere $\\nu_1$ is the Lyman limit frequency (i.e., $h\\nu_1 = 13.6\\,{\\rm eV}$). \n\tWe set the UV spectral slope $\\alpha = 1$ and \n consider $J_{21}$ in the range $0.01 \\leq J_{21} \\leq 1$ \\citep{1996_ThoulWeinberg}. \n\tWe calculate the photodissociation rate \n\ttaking into account the self-shielding of hydrogen molecules\n\t\\citep{1996_DraineBertoldi, 1996_Lee}.\n\n\tHeating and cooling rates are calculated self-consistently with the\n\tnon-equilibrium chemistry model of \\cite{2019_Nakatani}.\t\n\tThe major processes are photoionization heating,\n\tLy{\\rm $\\alpha$} cooling,\n\tradiative recombination cooling, \n\t\\ion{C}{2}~line cooling, \n\t\\ion{O}{1}~line cooling, \n\t\\ce{H2} line cooling,\n\tand CO line cooling. \n\tThe corresponding \n\theating\/cooling rates are found in\n\t\\cite{2018_Nakatani, 2018_Nakatanib}.\n\tFUV-induced photoelectric heating is not effective for the FUV intensities and metallicities of interest in this study, and we therefore omit it from our thermochemistry model. \n\tFor the present study, we also implement Compton cooling by CMB photons interacting with free electrons of physical number density $n_e$ as \n\t\\eq{\n\t\t\\Lambda_{\\rm Comp} = 5.65\\e{-36}\\nspe{e} (1+z)^4 (T - T_{\\rm CMB}) \\unit{erg}{}{\\, \\rm cm}^{-3} \\unit{s}{-1},\n\t}\n\twhere $T_{\\rm CMB}$ is the CMB temperature given by $T_{\\rm CMB} = 2.73(1+z){\\rm \\, K}$.\n\t\n\n\tOur computational domain extends over $0\\, {\\rm kpc} \\leq R \\leq r_{\\rm vir}$ and $-r_{\\rm vir} \\leq x \\leq r_{\\rm vir}$. \n\tThe computational grid is uniformly spaced, with $N_R \\times N_x = 320\\times 640$ cells. \t\n\tUV photons are injected from the boundary plane at $x = -r_{\\rm vir}$. \n\tWe assume that the cosmological I-front arrives at this plane at a redshift of $z_{\\scalebox{0.6}{\\rm IN}}$. \t\n\tAll our runs start at this time, denoted as $t = 0 \\, {\\rm yr}$. \n\tNote that the external halo potential is fixed; we do not consider growth of the halo mass.\n\tWe discuss potential influences of this simplification in \\secref{sec:discussion}. \n\t\n\tWe run a number of simulations, varying four parameters in the ranges\n\t$0\\, Z_{\\odot} \\leq Z \\leq 10^{-3} \\, Z_{\\odot}$, $0.01 \\leq J_{21} \\leq 1$,\n\t$10^{3} {\\rm \\, M_{\\odot}} \\leq M \\leq 10^8 {\\rm \\, M_{\\odot}}$,\n\tand $10\\leq z_{\\scalebox{0.6}{\\rm IN}} \\leq 20$. \n\tA total of 495 ($=5\\times3\\times11\\times3$) \n\tsimulations are performed.\n\tHereafter, we dub each run based on the assumed values of the parameters. A simulation with $(Z, J_{21}, M, z_{\\scalebox{0.6}{\\rm IN}}) = (10^{-a}Z_{\\odot}, 10^{-b}, 10^{c}{\\rm \\, M_{\\odot}}, d)$\n\tis referred to as ``\\sil{$a$}{$b$}{$c$}{$d$}''.\n\tFor example, \\sil{$\\infty$}{0}{5.5}{15} indicates \n\t$(Z, J_{21}, M, z_{\\scalebox{0.6}{\\rm IN}}) =(0 \\,Z_{\\odot}, 1, 10^{5.5}{\\rm \\, M_{\\odot}}, 15)$. 
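\n\nFor later reference, the ionizing photon number flux implied by the spectrum of Eq.~\\eqref{eq:UVSED} can be estimated by integrating $J(\\nu)/h\\nu$ over the EUV band up to $h\\nu_{\\rm max} = 100\\,{\\rm eV}$. Adopting a geometric factor of $2\\pi$ for the plane-parallel irradiation (our reading of the geometry; the exact prefactor depends on the assumed angular distribution of the incident radiation), we obtain for $\\alpha = 1$\n\\eq{\n\t2\\pi \\int_{\\nu_1}^{\\nu_{\\rm max}} \\frac{J(\\nu)}{h\\nu} \\, {\\rm d}\\nu\n\t= \\frac{2\\pi \\times 10^{-21} J_{21}}{h} \\braket{1 - \\frac{\\nu_1}{\\nu_{\\rm max}}}\n\t\\approx 8\\e{5} \\, J_{21} \\unit{s}{-1}\\unit{cm}{-2},\n}\nwhich is consistent with the photon flux normalization $J_0$ introduced in the next section.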
\n\n\n\\section{Photoevaporation Process} \\label{sec:analytic}\n\\begin{figure*}\n \\centering\n \\includegraphics[clip, width = \\linewidth]{model.pdf}\n \\caption{Schematic view of our analytic model in \\secref{sec:analytic}. Planer UVB is incident on the halo, dividing it into self-shielded region (represented by the blue hemisphere) and photoevaporative flow region. Photoevaporative flows launch from the outermost layer of the self-shielded region. The red line indicates the locus of the I-front, $x_i(R)$. There is a UVB-shade in the downstream region, where the gas is neutral.}\n \\label{fig:analytic}\n\\end{figure*}\nBefore presenting the results from a number of simulations, it would be\nillustrative to describe the basic physics of photoevaporation.\nIt also helps our understanding of the overall evolution of a UV-irradiated minihalo. To this end, we develop an analytic model and \nevaluate the photoevaporation rate numerically so that it can be compared directly with our simulation results.\n\nPhotoevaporation is driven by gas heating associated with\nphoto-ionization.\nIncident UV radiation ionizes the halo gas, forming a sharp boundary between the ionized and neutral (self-shielded) regions.\nPhotoevaporative flows are launched from the outermost layer of the self-shielded region. \nThe number of UV photons incident on the self-shielded region is equal to that of evaporating gas particles. \nThe gas mass evolution of a halo can be described as \n\\eq{\n\\splitting{\n\t\\frac{{\\rm d} M _{\\rm s}}{{\\rm d} t} & = - m \\int_{\\partial V_{\\rm s}} {\\rm d} \\vec{S}_i \\cdot \\hat{x} \\, J \\\\\n\t&=\t- m \\int_{V_{\\rm s}} {\\rm d} V \\, \\nabla \\cdot \\hat{x} \\,J \\\\\n\t&\t=\t - m \\int _0^{R_i } 2\\pi R \\, J (R, x_i) \\, {\\rm d} R, \n\t}\\label{eq:masslosseq}\n}\nwhere $M_{\\rm s}$ is the total gas mass in the self-shielded region, $m n_{\\text{\\rm H}} \\equiv \\rho_{\\rm b}$, $V_{\\rm s}$ is the volume of the self-shielded region, $\\hat{x}$ is a unit vector in the $x$-direction, $J$ is total photon number flux, $R_i$ is the maximum radial extent of the self-shielded region (i.e., the radial position of the I-front), and $x_i(R)$ gives the locus of the I-front on the $R$--$x$ plane (\\fref{fig:analytic}).\n\nWe define the following dimensionless quantities:\n$\\vec{\\xi} = (\\xi_x, \\xi_R) \\equiv (x\/r_{\\rm vir}, R\/r_{\\rm vir})$, \n$\\tilde{t} = t\/ (r_{\\rm vir} V_{\\rm c}^{-1}) $, \n$\\tilde{M} = M_{\\rm s} \/ (f_{\\rm b} M )$ with $f_{\\rm b} = \\Omega_{\\rm b} \/ \\Omega_{\\rm m}$,\nand $\\tilde{J} = J \/ J_0$ $(J_0\\equiv 8.1\\e{5}\\, J_{21} \\unit{s}{-1}\\unit{cm}{-2})$; \nand rewrite \\eqnref{eq:masslosseq} in a dimensionless form\n\\eq{\n\t\\frac{{\\rm d} \\tilde{M} _{\\rm s}}{{\\rm d} \\tilde{t} } = \n\t- \\frac{J_0 r_{\\rm vir}^3 m }{f_{\\rm b} M V_{\\rm c}}\n\t\\int _0^{\\xi_i } 2\\pi \\xi_R \\, \\tilde{J} (\\xi_R, \\xi_{x, i}) \\, {\\rm d} \\xi_R. \n\t\\label{eq:nondimmassloss}\n}\nThe ionizing photon number flux at the I-front, $J(R, x_i)$, is equal to $J_0$ minus the total recombinations along the ray up to the I-front. Note that $J(R,x_i)$ depends on the recombination coefficient and the density, and also on the velocity profile of the photoevaporative flows. \n\nWe approximate the I-front to be a hemispheric surface facing toward the incident radiation in the region $x_i < 0$,\ni.e., $x_i = - R_i \\sqrt{1 - R^2\/ R_i^2}$.\nIn the other region $x_i \\geq 0$, the I-front lies at the surface of a cylinder with radius $ R_i $. 
\nWe further assume that photoevaporative flows are spherically symmetric in $x_i < 0$ and the ionized gas is isothermal with $T = 10^4{\\rm \\, K}$.\nThe wind velocity is assumed to be the sound speed of the ionized gas, $c_i$. \nThe ionizing photon number flux at the I-front is then given by\n\\gathering{\n\t{J}(R, x_i) = \\frac{2 J_0}{1 + \\sqrt{1 + 4\\dfrac{\\alpha_{\\rm B} R_i J_0}{c_i^2} \\dfrac{\\theta_{R,i}}{\\sin \\theta_{R,i} }} } \\nonumber \\\\\n\t\\theta_{R,i} \\equiv \\arccos\\sqrt{1 - \\braket{\\frac{R}{R_i}}^2}\n\t= \\arccos\\sqrt{1 - \\braket{\\frac{\\xi_R}{\\xi_i}}^2}, \\nonumber \n}\nwhere $\\alpha_{\\rm B}$ is the case-B recombination coefficient. \nWith these results, \\eqnref{eq:nondimmassloss} reduces to \n\\gathering{\n\t4\\pi \\xi_i^2 \\tilde{\\rho_{\\rm b}}(\\xi_i) \\frac{{\\rm d} \\xi_i}{{\\rm d} \\tilde{t} } = \n\t- \\eta\n\t\\int _0^{\\xi_i } \\frac{4\\pi \\xi_R \\, {\\rm d} \\xi_R}{1 + \\sqrt{1 + 4q\\xi_i \n\t\\dfrac{\\theta_{R,i}}{\\sin \\theta_{R,i}}\n\t} \n\t}\n\t\\label{eq:difmassloss}\\\\ \n\t\\eta \\equiv \\frac{J_0 r_{\\rm vir}^3 m }{f_{\\rm b} M V_{\\rm c}}\n\t\\approx 14 J_{21} \\braket{\\frac{M}{10^6{\\rm \\, M_{\\odot}}}}^{-1\/3} \\braket{\\frac{1+z}{11}}^{-7\/2}, \\nonumber \\\\\n\tq \\equiv \\dfrac{\\alpha_{\\rm B} r_{\\rm vir} J_0}{c_i^2} \n\t\\approx 1.7\\e{2} J_{21} \\braket{\\frac{M}{10^6{\\rm \\, M_{\\odot}}}}^{1\/3} \\braket{\\frac{1+z}{11}}^{-1}. \\nonumber \n}\nNote that the dimensionless parameter $\\eta$ effectively measures the ratio of Hubble time to the ionization time scale.\nThe other parameter $q$ quantifies the magnitude of UV absorption in the photoevaporative flows; absorption is negligible if $q \\ll 1$, while it is significant if $q \\gg 1$. \nAlthough the differential equation is not solved analytically, we can derive the asymptotic behaviour of the gas mass in a few limiting cases. \nFor $q \\gg 1$, the right-hand-side coefficient is approximately proportional to $\\simeq \\eta q^{- 1\/2} \\propto J_{21}^{1\/2} M^{-1\/2} (1+z)^{-3}$.\nIt follows that there is a similarity in the gas mass evolution among halos having similar $\\eta q^{-1\/2}$ and initial $\\xi_i$.\nHence we define the similarity parameter as \n\\eq{ \n \\chi \\equiv \\eta q ^ {-1\/2}. \\label{eq:chi}\n}\nThe initial $\\xi_i$ is determined by the radius \nat which the I-front turns to the D-critical type from R-type \\citep[e.g.,][]{1978_Spitzer, 1989_Bertoldi}.\nThe radius is obtained by numerically solving the integral equation\n\\gathering{\n\t\\int_{-\\infty}^{-\\xi_i} \\tilde{\\rho_{\\rm b}}^2 (\\xi_R, \\xi_x) \\,{\\rm d}\\xi_x = \n\t\\frac{J_0 }{ r_{\\rm vir} n_0^2 \\alpha_{\\rm B}} \n\t\\braket{1 \n\t- \\frac{2c_i n_0 }{ J_0} \\tilde{\\rho_{\\rm b}} (\\xi_i)\n\t} \\nonumber \\\\\n\tn_0 \\equiv \\frac{f_{\\rm b} M} {m r_{\\rm vir}^3} \\nonumber \\\\\n\t\\tilde{L}_{\\rm s}\\equiv \\frac{J_0 }{ r_{\\rm vir} n_0^2 \\alpha_{\\rm B}} \\nonumber \\\\\n\t\\tilde{u}_{\\rm IF} \\equiv \\frac{2c_i n_0 }{ J_0}, \\nonumber \n}\nwith the initial density profile of \\eqnref{eq:densityprofile}. \nTypically, the initial $x_i$ is larger\nfor lower $J_0$ and for higher $n_0$ (i.e., higher $z_{\\scalebox{0.6}{\\rm IN}}$). \nWe list $q, \\eta, q, \\chi, \\tilde{L}_{\\rm s}, $ and $\\tilde{u}_{\\rm IF}$ for each of our runs in \\tref{tab:data} of \\appref{sec:supplymental}. \n\nIn the above model, we have assumed a constant flow velocity, \nbut in practice the flow is accelerated within the ionized boundary layer after \nlaunched with a small, negligible velocity. 
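\n\nSetting these refinements aside for a moment, Eq.~\\eqref{eq:difmassloss} is straightforward to integrate numerically once the density profile of the self-shielded gas is specified. The following minimal Python sketch (our own illustration, not the analysis code used in this work) assumes that the self-shielded region retains the initial isothermal profile of Eq.~\\eqref{eq:densityprofile}; the concentration parameter, the initial I-front position, and the time step are assumed values:\n\\begin{verbatim}\nimport numpy as np\n\nc_N = 10.0                                   # assumed concentration\nf = c_N/(np.log(1.0 + c_N) - c_N/(1.0 + c_N))\nphi = lambda xi: -f*np.log(1.0 + c_N*xi)/(c_N*xi)   # Phi / V_c^2\n\n# dimensionless initial profile, exp(-Phi/c_s^2) with c_s^2 = V_c^2/2,\n# normalized so that the gas mass inside xi = 1 is unity\nxi_grid = np.linspace(1e-3, 1.0, 400)\nw = np.exp(-2.0*phi(xi_grid))\nnorm = np.trapz(4.0*np.pi*xi_grid**2*w, xi_grid)\nrho_tilde = lambda xi: np.exp(-2.0*phi(xi))/norm\n\ndef dxi_dt(xi_i, eta, q):                    # rhs of Eq. (difmassloss)\n    theta = np.linspace(1e-4, 0.5*np.pi, 128)\n    xi_R = xi_i*np.sin(theta)\n    flux = 1.0/(1.0 + np.sqrt(1.0 + 4.0*q*xi_i*theta/np.sin(theta)))\n    integral = np.trapz(4.0*np.pi*xi_R*flux, xi_R)\n    return -eta*integral/(4.0*np.pi*xi_i**2*rho_tilde(xi_i))\n\n# eta and q quoted above for J_21 = 1, M = 1e6 Msun, z_IN = 10\neta, q = 14.0, 170.0\nxi_i, t, dt = 0.9, 0.0, 1e-3                 # initial xi_i assumed\nwhile xi_i > 0.05 and t < 100.0:\n    xi_i += dt*dxi_dt(xi_i, eta, q)\n    t += dt\nprint(f"t = {t:.1f} r_vir/V_c, xi_i = {xi_i:.3f}")\n\\end{verbatim}\n\n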
Also, we do not consider gravitational force by the host halo, which can decelerate the photoevaporative flows. In \\secref{sec:similarities}, we will introduce a few corrections \nto these simplifications and\nexamine carefully the similarity of gas mass evolution by comparing with the simulation results.\n\n\\section{Simulation Results} \\label{sec:results}\nWe first describe the dynamical evolution of \na minihalo in our fiducial case, and examine the effect of metal- and dust-cooling in \\secref{sec:result1}. \nThen, we focus on the photoevaporation rates \nand study the dependence on metallicity and on \nhalo mass in \\secref{sec:massloss}. The dependence of the photoevaporation \nrates on radiation intensity and the turn-on redshift is\nstudied in \\secref{sec:result2} and \\secref{sec:result3}.\nWe summarize the halo mass evolution in \\secref{sec:similarities}. \nThen we provide an analytical fit to the derived mass evolution as a function of time in \\secref{sec:evatime}. \nFor convenience, we term halos with $T_{\\rm vir} > 10^4{\\rm \\, K}$ atomic cooling (massive) halos in the following sections. The corresponding mass range is $M \\gtrsim 10^{7.5}{\\rm \\, M_{\\odot}}$ ($ 10^7{\\rm \\, M_{\\odot}}$) at $z = 10$ (20). Lower-mass halos ($T_{\\rm vir} < 10^4{\\rm \\, K}$) are referred to as low-mass halos; those with $M \\gtrsim 10^{6.5}$--$10^7{\\rm \\, M_{\\odot}}$ ($10^6$--$10^{6.5}{\\rm \\, M_{\\odot}}$) at $z_{\\scalebox{0.6}{\\rm IN}} = 10$ (15--20) are specifically called molecular cooling halos.\n\n\\subsection{Photoevaporation and Metallicity Dependence}\t\\label{sec:result1}\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsFigSnapshots.pdf}\n\\caption{\n Snapshots of \\sil{4}{0}{5.5}{10} (top panels) and \\sil{$\\infty$}{0}{5.5}{10} (bottom panels).\n\tThe upper and lower half of each panel indicate the density and temperature distributions, respectively. \n\tThe color bars are shown at the right. \n\tThe magenta dashed lines are density contours for $n_{\\text{\\rm H}} = 10^{-2},10^{-1},1, 10, 100 {\\, \\rm cm}^{-3}$, \n\tand the gray dot-dashed lines are ionization degree for $0.1, 0.5, 0.9$. \n\tThe arrows represent velocity fields and are scaled by the magnitude. \n\tThe reference arrow length for $20\\,{\\rm km\\,s^{-1}}$ is shown at the top left in each panel. \n\tNote that the view closes-up as time goes for clarity. \n\tThe planer UV field is incident on the computational \n\tdomain at $x\/r_{\\rm vir} = -1$. \n\tThe UV-heated gas has a temperature of $\\sim 10^4{\\rm \\, K}$ \n\tand streams off the halo. \n\tThe self-shielded regions (orange regions in the temperature maps) have relatively low temperatures. \n}\n\\label{fig:snapshots}\n\\end{center}\n\\end{figure*}\n\n\\fref{fig:snapshots} shows the density and temperature distributions for a halo with $M=10^{5.5}{\\rm \\, M_{\\odot}}$ irradiated by the UV background with $J_{21} = 1$ at $z_{\\scalebox{0.6}{\\rm IN}} =10$.\nWe compare the results with two different metallicities of $Z =0\\,Z_{\\odot}$ and $Z=10^{-4}\\,Z_{\\odot}$ (i.e., \\sil{$\\infty$}{1}{5.5}{10} and \\sil{4}{1}{5.5}{10}). \nIn both runs, we find hot, ionized gas flows (hereafter ``wind region'') \nand a cold, dense region (hereafter ``self-shielded region''). \nThe boundary between the two is the launching \"base\" of the photoevaporative flows. 
\nIn \\fref{fig:snapshots}, the base appears as a transitional layer that divides \na hot ($\\sim 10^4{\\rm \\, K}$; white) region and a cool ($\\lesssim 10^3{\\rm \\, K}$; orange) region.\nThe wind regions are heated by EUV photons, \nand the temperature is $\\sim 10^4{\\rm \\, K}$ near the base,\nbut quickly decreases as the wind expands.\n\nThe self-shielded region of a \nmetal-free halo contracts slowly\nbecause of inefficient cooling.\nWith a slight amount of heavy elements, atomic cooling such as \\ion{C}{2}{} and \\ion{O}{1}{} cooling becomes effective, \nand also the grain-catalyzed \\ce{H2} formation reaction produces abundant \\ce{H2} molecules ($y_{\\ce{H2}} \\sim 10^{-4}$)\nthat further enhance cooling efficiency. \nSince the thermal coupling of dust and gas is weak at low densities, \nthe dust temperature does not become high enough to sublimate.\nThe radiative cooling rates are roughly comparable\nfor \\ce{H2}, \\ion{C}{2}{}, and \\ion{O}{1}{} species \nin metal-rich halos ($Z \\gtrsim 10^{-4}\\,Z_{\\odot}$).\nTypically, the most efficient coolant is\n\\ce{H2} in a large portion of the self-shielded region,\nwhereas \\ion{C}{2}{} cooling is dominant only in the central, very dense region ($\\gtrsim10^{3}{\\, \\rm cm}^{-3}$).\nThe central low temperature ($T \\sim 10^2{\\rm \\, K}$) part in the upper panels of \\fref{fig:snapshots} is formed through the \nefficient atomic cooling. \n\n\nThe self-shielded region of the metal-free halo (\\sil{$\\infty$}{1}{5.5}{10}) has a temperature close to the\nvirial temperature of the halo.\nThe wind region has similar thermal and chemical structure \nin the two runs shown in \\fref{fig:snapshots}. Lyman~${\\rm \\alpha}$ cooling \nis the dominant cooling process near the I-front, whereas Compton cooling is \nimportant in outer regions. Note that the efficiency of the latter depends on the cosmological redshift $z_{\\scalebox{0.6}{\\rm IN}}$. The gas temperature in the wind region is $\\sim 5000$--$10000 {\\rm \\, K}$ and decreases as the gas expands outward.\n\nSince the cooling time is progressively shorter in the central, denser part, \nthe gas cools and condenses in an inside-out manner.\nA dense core forms quickly at the halo center, as can be seen in the time evolution\nin \\fref{fig:snapshots}.\nIn metal-enriched halos, a sufficient amount of \\ce{H2} molecules is\nformed via grain-catalyzed reactions, even though the incident radiation continuously dissociates \\ce{H2}.\nWith increasing metallicity, \\ce{H2} molecules form more rapidly,\nand \\ion{C}{2} and \\ion{O}{1} line cooling also lower the gas temperature.\nAn important effect of metal cooling is to lower the minimum \nmass for gas cloud collapse. \nWe discuss whether or not star formation takes place in low-mass, low-metallicity halos in \\secref{sec:starformation}. \n\nWe have focused on the results with $z_{\\scalebox{0.6}{\\rm IN}} = 10$\nand with the fiducial UV radiation intensity \n$J = 10^{-21} {\\rm \\, erg} \\sec^{-1} {\\, \\rm cm}^{-2} {\\rm Hz}^{-1} {\\rm sr}^{-1}$.\nEssentially the same physical processes operate \nin other cases. 
\nWith $\\sim 10^2$--$10^3$ times higher LW intensities, \\ce{H2} molecules are almost completely photodissociated in the halo \\citep{2010_ShangBryanHaiman, 2012_Agarwal, 2014_ReganJohanssonWise, 2015_Hartwig, 2016_ReganJohanssonWisea, 2017_Schauer}.\nWe note that primordial gas clouds formed under strong LW radiation are\nsuggested to be possible birth places of massive black holes \\citep{2001_Omukai, 2010_ShangBryanHaiman, 2012_Agarwal, 2014_ReganJohanssonWise, 2015_Hartwig, 2016_ReganJohanssonWisea, 2017_Schauer}. \nIn metal-enriched halos, \nthe gas can still cool and condense by metal and dust cooling\neven under strong UV radiation.\n \n \nIn contrast to low-mass, \\ce{H2}-cooling halos, Ly$\\alpha$ cooling dominates in \nhalos with $T_{\\rm vir} \\gtrsim 20000{\\rm \\, K}$.\nSince the efficiency of Ly$\\alpha$ cooling is independent of metallicity, \nthe gas condenses quickly in the massive halos with $M \\gtrsim 10^{7.5}{\\rm \\, M_{\\odot}}$. \nIf the gas is enriched to $Z \\gtrsim 10^{-3}\\,Z_{\\odot}$,\ndust-gas collisional heat transfer \ncauses efficient gas cooling for $n_{\\text{\\rm H}} \\gtrsim 10^7{\\, \\rm cm}^{-3}$ \\citep{2005_Omukai, 2008_Omukai, 2016_Chiaki}. \n\n\nOur findings are largely consistent with \\cite{2014_Wise} regarding\nthe correspondence between halo mass and dominant cooling processes.\nHalos with $M \\gtrsim 10^7{\\rm \\, M_{\\odot}}$ are expected to be \nmetal-enriched by Pop~III supernovae triggered in the progenitor halo or in nearby halos,\nand thus metal cooling is dominant or comparable to atomic\/molecular hydrogen cooling. \nFollowing \\cite{2014_Wise}, we call such halos ``metal-cooling halos'', which have masses between the upper mass limit of molecular cooling halos ($\\sim 10^{6.5}$--$10^7{\\rm \\, M_{\\odot}}$) and the atomic cooling limit ($\\sim 10^{7.5}$--$10^8{\\rm \\, M_{\\odot}}$). We find similarly efficient metal cooling for $M \\gtrsim 10^7{\\rm \\, M_{\\odot}}$, but the most important effect of metal enrichment in our cases is to lower the molecular cooling limit by allowing formation of \\ce{H2} through grain-catalyzed reactions, especially at $Z \\gtrsim10^{-4}\\,Z_{\\odot}$.\n\nThe gas in massive halos ($T_{\\rm vir} > 10^4{\\rm \\, K}$) is gravitationally bound even under strong UV radiation.\nWe find rather small mass loss in the runs with $M = 10^{7.5}{\\rm \\, M_{\\odot}}$. \nApproximately 10\\% of the initial gas mass is lost via photoevaporation, \nbut the diffuse, ionized gas follows the concentrated gas toward the center. \nThis process slightly recovers the total gas mass within $r_{\\rm vir}$. \nFor halos with $T_{\\rm vir} > 10^4{\\rm \\, K}$,\noutgoing flows are not excited, and all of the baryons concentrate \nto the halo center regardless of \nthe UV strength. The total mass slightly increases from the initial state by accretion of the diffuse gas in the outer part.\n\n\n\\subsection{Mass Loss} \\label{sec:massloss}\nThe photo-heated gas flows outward from the surface of the self-shielded region, while the central part continues contracting. \nThe rate of the gas mass loss can be calculated as \n\\eq{\n\t\\dot{M}_{\\rm ph} = \\int _ S {\\rm d} \\vec{S} \\cdot( \\rho_{\\rm b} \\vec{v} ), \t\\label{eq:masslossrate}\n}\nwhere \n$S$ is the surface area of the launching base.
\nNote that the right-hand-side of this equation is equal to that of \\eqnref{eq:masslosseq}.\nThe mass-loss rate is essentially determined by the radial extent of the self-shielded region, because the initial velocity of the photoevaporative winds is typically the sound speed of the ionized gas ($\\sim 10\\,{\\rm km\\,s^{-1}}$), and the base density is determined by the EUV flux. The mass flux at the base does not depend strongly on the gas metallicity \\citep{1989_Bertoldi,2019_Nakatani}. \nWe measure the total gas mass within a halo as \n\\eq{\n\tM_{\\rm b} = \\int_{r \\leq r_{\\rm vir}} 2\\pi R \\rho_{\\rm b} \\, {\\rm d} x \\, {\\rm d} R.\n}\nThe evolution of $M_{\\rm b}$ can be characterized by two phases separated by the time when the diffuse outer part is stripped off and a \"naked\" dense core is left. In the later phase, the core is directly exposed to UV radiation but \nthe net mass loss is small owing to its small geometrical size. The \n photoevaporation rate decreases rapidly during the transition phase. \n A similar process is also known in studies of molecular cloud photoevaporation \\citep[e.g.,][]{1989_Bertoldi, 2019_Nakatani}. \nIt is difficult to follow the photoevaporation process\nin detail after the transition phase, because\nthe small core is resolved only with \nseveral computational cells in our simulations.\nWe thus calculate $M_{\\rm b}$ only up to the transitional phase.\n\nWe empirically determine the transition time by the following\nconditions:\n\\gathering{\n \\frac{1}{M_{\\rm b}}\\int_{\\rho_{\\rm b} > 10^{-3}\\rho_{\\rm b, max}} 2\\pi R \\rho_{\\rm b} \\, {\\rm d} x \\, {\\rm d} R \n > 0.8 \\label{eq:limitingtime1}\\\\\n \\rho_{\\rm b, max} > \\rho_{\\rm b,0}. \\label{eq:limitingtime2}\n}\nHere, $\\rho_{\\rm b, max}$ is the maximum density in the computational domain and $\\rho_{\\rm b,0} \\equiv \\rho_{\\rm b}(t=0,r=0)$.\n\n\\fref{fig:masslossrates} shows the evolution of $M_{\\rm b}$ for halos with $M = 10^{5.5-8} {\\rm \\, M_{\\odot}}$ with various metallicities and other parameters.\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_Zdependence_J0M5565z10_showconcentration_annotation.pdf}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_Mdependence_Z3infJ0z10_showconcentration_annotation.pdf}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_Jdependence_Z3infM6z10_annotation.pdf}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_zdependence_Z3infM55z_showconcentration_annotation.pdf}\n\\caption{\n Time evolution of the total gas mass relative to the initial gas mass for selected runs.\n The panels show the dependence of the gas mass evolution on the four simulation parameters $(Z, M, J_{21}, z_{\\scalebox{0.6}{\\rm IN}})$:\n\t\t(a) metallicity dependence of the gas mass evolution for $M = 10^{5.5}, 10^{6.5}{\\rm \\, M_{\\odot}}$ with $J_{21} = 1$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$. The line colors and styles differentiate metallicity and the halo mass, respectively. The round marker points indicate the time at which all of the atmospheric gas has been lost, leaving a concentrated core (cf.~\\eqnref{eq:limitingtime1} and \\eqnref{eq:limitingtime2}).\n\t\tNote that the curve for \\sil{5}{0}{5.5}{10} overlaps those of \\sil{$\\infty$}{0}{5.5}{10} and \\sil{6}{0}{5.5}{10}.
\n\t\t(b) halo mass dependence of the gas mass evolution for $Z = 0, 10^{-3}\\,Z_{\\odot}$ with $J_{21} = 1$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$. The line styles correspond to halo masses as annotated. The lines for \\sil{$\\infty$}{0}{8}{10} and \\sil{3}{0}{8}{10} overlap. \n\t\t(c) $J_{21}$ dependence of the gas mass evolution for $Z = 0, 10^{-2} \\, Z_{\\odot}$ with $M = 10^6 {\\rm \\, M_{\\odot}}$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$. Solid, dashed, dotted lines indicate $J_{21} = 1, 0.1, 0.01$, respectively. \n\t\t(d) $z_{\\scalebox{0.6}{\\rm IN}}$ dependence of the gas mass evolution for $Z = 0, 10^{-3}\\,Z_{\\odot}$ with $M = 10^{5.5}{\\rm \\, M_{\\odot}}$ and $J_{21} = 1$. \n\t\t}\n\\label{fig:masslossrates}\n\\end{center}\n\\end{figure*}\nFor a given $M$, \n$M_{\\rm b}$ evolves along the same track on the $t$--$M_{\\rm b}$ plane for various metallicities. \nWe have discussed in \\secref{sec:result1}\nthat the effect of metal-enrichment appears clearly in the concentrating core, but\nthe strength of EUV-driven photoevaporation is independent of metallicity. \nThus the outer diffuse gas has not cooled in the early evolutionary phase even with nonzero $Z$ cases, and\nthe size of the self-shielded region is similar regardless of $Z$ (compare the rightmost panels in \\fref{fig:snapshots}).\nTherefore, the evolution of $M_{\\rm b}$ does not differ significantly \nuntil the diffuse envelope gas is lost. \n\n\nWe show the time evolution of $M_{\\rm b}$ for halos with various $M$\nincluding the high mass case ($T_{\\rm vir} > 10^4{\\rm \\, K}$; $M \\gtrsim 10^{7.5}{\\rm \\, M_{\\odot}}$) \nin \\fref{fig:masslossrates} (Panel~b). \nA large amount of gas is lost from low-mass halos ($T_{\\rm vir} < 10^4{\\rm \\, K}$)\nbut the mass-loss rates are significantly smaller for massive halos with $10^8{\\rm \\, M_{\\odot}}$. \nInterestingly, the gas mass {\\it increases} slightly.\nThe diffuse gas at $r>r_{\\rm vir}$ is accreted while the central part\nkeeps cooling.\nPhotoevaporation is hardly observed in these runs, because the initial temperature, $T_{\\rm vir}$, is higher than the typical temperature of a photo-heated gas in the first place.\nHalo's gravity is so strong that it retains the \nphoto-heated gas. \nAnother important feature is that the gas mass evolution of the massive halos is nearly independent of metallicity. \nRadiative cooling by hydrogen is dominant in these halos (\\secref{sec:result1}). \n\n\n\n\n\n\n\n\n\\subsection{Radiation Intensity}\t\\label{sec:result2}\n\nIn photo-ionized regions, the characteristic ionization time,\n$t_{\\rm ion} \\sim 0.01 J_{21}^{-1} \\, {\\rm Myr} $, is orders of magnitude \nshorter than the typical crossing time of photoevaporative flows,\n$ t_{\\rm cr} \\simeq r_{\\rm vir} \/ 10\\,{\\rm km\\,s^{-1}} \\simeq 10 \\,M_{5}^{1\/3} [(1+z)\/10]^{-1} \\, {\\rm Myr} $ with $M_{5} \\equiv M\/10^{5} {\\rm \\, M_{\\odot}}$. \nFor weak UV radiation, the I-front is located at the outer part \nof the halo where the density is low.\nWe find that the boundary where $\\abn{HII} = 0.5$ is located at \na radius where $\\col{HI} \\sim 10^{18} J_{21} ^{1\/2} {\\, \\rm cm}^{-2}$,\nand the base density is approximately estimated to be $n_{\\text{\\rm H}} \\sim 10^{-1}$--$10^{-0.5} J_{21} {\\, \\rm cm}^{-3}$,\nwhich is consistent with the result of \\cite{2004_Shapiro}. 
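\nAs a concrete illustration, the following short Python snippet evaluates the timescales and base properties quoted above for a fiducial parameter set; the prefactors simply restate the order-of-magnitude expressions in the text, and the chosen $(M, z, J_{21})$ values are arbitrary examples.\n\\begin{verbatim}\n# Evaluate the order-of-magnitude scalings quoted in the text for one example halo.\nM, z, J21 = 10**5.5, 10.0, 1.0   # halo mass [Msun], redshift, UV intensity\nM5 = M \/ 1e5\n\nt_ion = 0.01 \/ J21                                          # ionization time [Myr]\nt_cr = 10.0 * M5**(1.0 \/ 3.0) * ((1 + z) \/ 10.0)**(-1.0)    # crossing time [Myr]\nN_HI = 1e18 * J21**0.5                                      # I-front column density [cm^-2]\nn_base = 10**(-0.75) * J21                                  # rough base density [cm^-3]\n\nprint(t_ion, t_cr, N_HI, n_base)\n\\end{verbatim}\nFor this example the ionization time is shorter than the crossing time by roughly three orders of magnitude, which is the separation of timescales assumed in the discussion that follows.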
\n\n\nThe UV intensity $J_{21}$ does not strongly change the overall dynamical \nphotoevaporation process.\nHowever, the mass loss rate sensitively depends on $J_{21}$,\nbecause the base is located at a larger $r$ for lower $J_{21}$, where \nthe gas density is lower. \nOne may naively expect that the larger geometrical size of the base increases\nthe photoevaporation rate, \nbut the low density at such large radii mitigates\nthe increase of the mass loss\n(cf.~\\eqnref{eq:masslossrate}). \nThe base density is approximately proportional to the UV flux,\nwhile the base radius increases by only a small factor \nbecause the density \nrapidly decreases with increasing radial distance.\nHence the mass loss rate, $|\\dot{M}_{\\rm b}|$, decreases for smaller\n$J_{21}$ as can be seen in \\fref{fig:masslossrates}-(c). \nThe mass loss rate decreases with time, ${\\rm d} |\\dot{M}_{\\rm b}| \/ {\\rm d} t < 0$, because the geometrical cross-section of the halo decreases. \nWe also note that the characteristic mass-loss time for massive halos \nis longer than the Hubble time \nat the epochs considered. \n\n\\subsection{Turn-on Redshift}\t\\label{sec:result3}\nHalos forming at different redshifts have different properties.\nMost notably, high-redshift halos are more compact and denser\n(cf.~\\eqnref{eq:densityprofile}).\nWe study cases with different \n$z_{\\scalebox{0.6}{\\rm IN}}$, the timing of radiation turn-on,\nwhen the cosmological I-front reaches the halo.\nOne can consider that different $z_{\\scalebox{0.6}{\\rm IN}}$ effectively correspond \nto different \nreionization histories, or to an inhomogeneous reionization model\nin which the effective $z_{\\scalebox{0.6}{\\rm IN}}$ differs from place to place.\n\nThe process of photoevaporation is essentially the same as described in \\secref{sec:massloss} and \\secref{sec:result2}.\nThe gas density at the photoevaporative flow \"base\" is primarily set by $J_{21}$, and does not explicitly depend on $z_{\\scalebox{0.6}{\\rm IN}}$. \nHowever, the relative distance of the base to the halo center, $\\xi (\\equiv r\/r_{\\rm vir})$, \nis larger for halos at higher redshift owing to higher average density,\nwhile the physical size of the neutral, self-shielded region is {\\it smaller}.\nThese two effects nearly cancel out and yield photoevaporation rates nearly independent of $z_{\\scalebox{0.6}{\\rm IN}}$. \nWe find that $|\\dot{M}_{\\rm b}|$ increases only by $20$--$30\\%$ with $\\Delta z_{\\scalebox{0.6}{\\rm IN}} = -5$. \n\nIn the characteristic minihalo case with $M=10^{5.5}{\\rm \\, M_{\\odot}}$, $J_{21} = 1$, $Z = 10^{-3}\\,Z_{\\odot}$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$,\nabout 80\\% of the initial gas mass is lost,\nand the mass loss fraction is only $\\sim 60\\%$ with $z_{\\scalebox{0.6}{\\rm IN}} = 20$. \nA similar trend of the final core mass is seen in other runs with different $M$ and $J_{21}$. This is consistent with the results of\n\\cite{2005_Iliev} who show weak dependence of the mass-loss time scale on the turn-on redshift. \n\n\n\\subsection{Gas Mass Evolution}\\label{sec:result4}\nWe have shown that the gas mass evolution depends most sensitively \non $M$ and $J_{21}$.
Physically, these are the quantities most relevant to the gravitational force and the mass flux, respectively (cf.~\\eqnref{eq:masslosseq}).\nThe results of our numerical simulations can be characterized by two quantities: the half-mass time, $t_{1\/2}$, at which the gas fraction decreases to 0.5, and the remaining mass fraction, $f_{\\rm b,rem}$, which is the mass fraction of the \"remnant\" condensed core.\n\\begin{figure*}[htbp]\n \\centering\n \n \\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsEvaporationMap.pdf}\n \\caption{We plot $t_{1\/2}$ and $f_{\\rm b, rem}$ for our simulated halos. The left, middle, and right panels show the results at $z_{\\scalebox{0.6}{\\rm IN}} = 10, 15, 20$, respectively. The circles, triangles, and squares represent $J_{21} = 1, 0.1, 0.01$. The colors of the markers indicate metallicity, and the sizes are scaled according to the halo mass $M$. The size reference is shown at the bottom right. The maximum marker size corresponds to $M = 10^7, 10^{6.5}, 10^{6.5}{\\rm \\, M_{\\odot}}$ for the panels of $z_{\\scalebox{0.6}{\\rm IN}} = 10, 15, 20$, respectively. \n Note that $t_{1\/2}$ is definable only for the halos whose mass reduces to $f_{\\rm b, rem} \\leq 0.5$. }\n \\label{fig:evamap}\n\\end{figure*}\n\\fref{fig:evamap} shows the distribution of $t_{1\/2}$ and $f_{\\rm b, rem}$. \nThis summarizes the overall dependence of gas mass evolution on \nthe simulation parameters $(Z, J_{21}, M, z_{\\scalebox{0.6}{\\rm IN}})$. \nNote that halos irradiated at lower $z_{\\scalebox{0.6}{\\rm IN}}$ have lower average densities.\nThe mass (symbol size) is larger towards the upper right corner in each panel of \\fref{fig:evamap}, indicating that the remaining mass fraction $f_{\\rm b, rem}$ increases for higher halo mass. Also, halos with higher metallicity have higher $f_{\\rm b, rem}$ owing to\nefficient cooling (\\secref{sec:massloss}). \nFor higher $J_{21}$, both $t_{1\/2}$ and $f_{\\rm b,rem}$ are smaller\nbecause of the faster mass loss for stronger UV radiation (\\secref{sec:result2}). \n\n\n\n\n\\subsection{Similarity in Mass Loss}\t\\label{sec:similarities}\nWe have shown that the mass-loss rate has weak metallicity dependence\nat least until the bulk of the diffuse halo gas is stripped off. In this section, we first derive a nontrivial similarity of the gas mass evolution for metal-free halos. We then apply this model to other low-metallicity cases.\n\nIn \\secref{sec:analytic}, we have developed an analytic model with a key parameter $\\chi$ that characterizes the gas mass evolution of a photoevaporating halo. \nThere, the effect of the host halo's gravity has not been incorporated (Eqs. \\ref{eq:masslosseq}, \\ref{eq:nondimmassloss}, and \\ref{eq:difmassloss}). \nWe expect that deceleration by gravity becomes important for halos\nwhose virial temperatures are \ncomparable to the typical temperature of the photo-ionized gas, $\\sim 10^4{\\rm \\, K}$. In such cases, assuming $c_i = 10\\,{\\rm km\\,s^{-1}}$ as the photoevaporative\nflow velocity overestimates the photoevaporation rate (see \\eqnref{eq:difmassloss} and the description above it). \nTo account for the deceleration, we adopt\na \"reduced\" parameter defined as \n\\eq{\n \\chi^\\prime \\equiv \\chi \\frac{c_i - V_{\\rm c}}{c_i}, \\label{eq:modchi}\n}\nin the following discussions. The derived values are listed in \\tref{tab:data} of \\appref{sec:supplymental}.
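\nFor reference, the dimensionless parameters can be evaluated directly from $(M, z_{\\scalebox{0.6}{\\rm IN}}, J_{21})$. The minimal Python sketch below uses the approximate expressions for $\\eta$ and $q$ given below \\eqnref{eq:difmassloss} together with \\eqnref{eq:chi} and \\eqnref{eq:modchi}; the circular-velocity scaling $V_{\\rm c}(M, z)$ and $c_i = 10\\,{\\rm km\\,s^{-1}}$ adopted here are assumed standard values, so the resulting numbers differ slightly from the tabulated ones in \\tref{tab:data}.\n\\begin{verbatim}\nimport numpy as np\n\ndef chi_params(M, z_in, J21, c_i=10.0):\n    # eta, q: approximate expressions below eq. (difmassloss); V_c: assumed\n    # virial scaling ~ sqrt(G M \/ r_vir) in km\/s, so values are illustrative.\n    m6 = M \/ 1e6\n    zf = (1.0 + z_in) \/ 11.0\n    eta = 14.0 * J21 * m6**(-1.0 \/ 3.0) * zf**(-3.5)\n    q = 1.7e2 * J21 * m6**(1.0 \/ 3.0) * zf**(-1.0)\n    chi = eta \/ np.sqrt(q)\n    V_c = 4.7 * m6**(1.0 \/ 3.0) * np.sqrt(zf)\n    chi_p = chi * max(c_i - V_c, 0.0) \/ c_i\n    return eta, q, chi, chi_p\n\nfor M, z_in, J21 in [(10**5.5, 10, 1.0), (10**6.5, 10, 1.0), (10**6.5, 20, 1.0)]:\n    eta, q, chi, chi_p = chi_params(M, z_in, J21)\n    print(M, z_in, J21, round(chi, 2), round(chi_p, 2))\n\\end{verbatim}\nHalos with similar output $\\chi^\\prime$ are expected to evolve along similar tracks in the dimensionless $\\tilde{t}$--$\\tilde{M}$ plane, as demonstrated below.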
\n\nThe top panel of \\fref{fig:similarity} shows the gas mass evolution of metal-free halos with various parameter sets in the dimensionless form. \n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_metallicity_dependenceMassEvolution_similarity.pdf}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsEvaporationMap_selfsimilarity.pdf}\n\\caption{\n(top) Similarities in the gas mass evolution for metal-free halos with various parameter sets. Note that we omit ``Z$\\infty$'' from the simulation labels in the legend. Values in the parentheses after each simulation label show corresponding $\\chi(\\equiv \\eta q^{-0.5})$ and $\\chi^\\prime (\\equiv \\eta q^{-0.5} \\Delta c_i \/ c_i)$. The horizontal and vertical axes indicate the dimensionless time and mass, respectively (cf.~\\eqnref{eq:nondimmassloss}). Lines are colored according to $\\chi^\\prime$ as indicated by the color bar. \nLines with similar colors almost overlap on the $\\tilde{t}$--$\\tilde{M}$ plane, indicating that having similar $\\chi^\\prime$ results in similar mass evolution. \n(bottom) $\\chi^\\prime$ vs dimensionless half-mass time $\\tilde{t}_{1\/2}$ for metal-free halos. The markers are used in the same manner as in \\fref{fig:evamap} but the colors represent $z_{\\scalebox{0.6}{\\rm IN}}$. The magenta dashed line is a fit. A negative correlation is seen between $\\chi^\\prime$ and $\\tilde{t}_{1\/2}$. \n}\n\\label{fig:similarity}\n\\end{center}\n\\end{figure}\nHalos with similar $\\chi$ or $\\chi^\\prime$ \nevolve on essentially the same track in the $\\tilde{M}$--$\\tilde{t}$ plane.\nWe explain the similarity further by using a specific example as follows.\nThe simulation parameters of \\sil{$\\infty$}{0}{6.5}{10} (cyan dotted line; $\\chi = 0.54, \\chi^\\prime = 0.24$) are close to those of \\sil{$\\infty$}{0}{6.5}{20} (yellow dashed line; $\\chi = 0.078, \\chi^\\prime = 0.018$), but the mass evolution significantly deviates from each other on the dimensionless plane. \nOn the other hand, it is closer to those of \\sil{$\\infty$}{1}{4.5}{20} (cyan-green solid line; $\\chi = 0.25, \\chi^\\prime = 0.21$) and \\sil{$\\infty$}{2}{5}{10} (cyan dashed line; $\\chi = 0.31, \\chi^\\prime = 0.25$). \nA more straightforward case is \\sil{$\\infty$}{2}{5}{10} (cyan solid line; $\\chi = 0.31, \\chi^\\prime = 0.25$). It is close to \\sil{$\\infty$}{2}{4.5}{10} (light-blue dashed line; $\\chi = 0.54, \\chi^\\prime = 0.48$), as expected from the close parameter values. Interestingly, the \\sil{$\\infty$}{2}{4.5}{10} (blue dashed line; $\\chi = 0.54, \\chi^\\prime = 0.48$) run is also very close to \\sil{$\\infty$}{1}{5.5}{10} (cyan dash-dotted line; $\\chi = 0.54, \\chi^\\prime = 0.4$).\n\nWe find that gravitational deceleration of the photoevaporating gas is \nan important factor.\nThe evolution in Run~Z{$\\infty$}J{0}M{6.5}z{10} (cyan dotted line) is close to \nother cases with green lines, but its $\\chi$ value $(\\approx 0.54)$ is actually closer to those of runs indicated by the blue lines. \nAlso, $\\chi \\approx 0.31$ for Run \\sil{$\\infty$}{0}{7}{10} (yellow-green solid line)\nis close to those of runs indicated by the green lines, but the \nactual evolution apparently deviates. \nGravitational deceleration of the photoevaporating gas reduces the mass-loss rate \nfor these relatively massive minihalos with virial temperature of $2400 - 5000 {\\rm \\, K}$. 
\nClearly, it is important to incorporate the correction of $\\chi$ owing to deceleration.\nWe conclude that $\\chi^\\prime$ is the essential parameter to characterize the gas mass \nevolution of photoevaporating halos. \n\n\nThe bottom panel of \\fref{fig:similarity} shows correlation between $\\chi^\\prime$ and the dimensionless half-mass time $\\tilde{t}_{1\/2} \\equiv t_{1\/2} \/ t_0$ for metal-free halos. There is a tight correlation\ngiven by $\\tilde{t}_{1\/2} = 0.2 {\\chi^\\prime}^{-0.75}$. The correlation confirms the importance of $\\chi^\\prime$ in characterizing the gas mass evolution of photoevaporating halos. \nNote that the same correlation holds for low-metallicity halos.\nThus the fit can be applied for halos with any metallicity that lose more than a half of the initial mass. \n\n\n\\subsection{Fitting function}\t\\label{sec:evatime}\nBased on the similarity studied so far,\nwe derive a fit of $\\tilde{M} $ that can be readily used \nin semi-numerical models \\citep[e.g.,][]{2013_Sobacchi, 2012_Fialkov, 2013_Fialkov, 2015_Fialkov, 2016_Cohen}. \nFrom the result shown in ~\\fref{fig:masslossrates},\nwe propose a function\n\\eq{\n\\splitting{\n\t\\tilde{M}_{\\rm fit} &= f (\\tilde{t}) = \\frac{1-C_1}{(\\tilde{t}\/\\tilde{t}_{\\rm s})^{p} + 1} + C_1, \\label{eq:fittingfunc\n\t}\n}\nwhere $p, C_1$, $\\tilde{t}_{\\rm s}$ are fitting parameters,\nwhich control the steepness of mass decrease, the remaining mass fraction, and the dimensionless time at which $f_{\\rm b} = (1+C_1)\/2$, respectively. \nWe restrict the parameter ranges to $0 \\leq C_1\\leq 1$, $0 \\leq \\tilde{t}_{\\rm s}$, and $0\\leq p$, in order to avoid unphysical fitting results. \n\nWe list the best fit values in \\tref{tab:data} in Appendix.\nThe excellent accuracy of the fit can be seen in \\fref{fig:fitting}\nin comparison with the simulation results.\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsConcentrationTimes.pdf}\n\\caption{\nTop: We compare the mass evolution given by equation (\\ref{eq:fittingfunc}) and the \nsimulation results. \nThe horizontal and vertical axes are the same as \\fref{fig:similarity}. The solid lines show fits, and the marker points show the mass evolution in several selected runs. The marker's color represents metallicities, but the marker shapes are randomly adopted. \nThe round marker points at the tails of the curves for Z3J2M6z10, Z4J0M7z10, Z5J2M7z20, and Z6J0M8z20 indicate the time at which the bulk of the atmospheric gas is lost as in \\fref{fig:masslossrates}.\nBottom: Relative errors between the simulation and the fit in percent. \n}\n\\label{fig:fitting}\n\\end{center}\n\\end{figure}\nThe three-parameter function fits the simulation results well until the mass decreases to $\\tilde{M} \\approx 0.01$. \nFor halos with $T_{\\rm vir} > 10^4{\\rm \\, K}$,\nthe gas mass increases owing to the halo's strong gravity. 
\nWe do not consider the slight increase of gas mass in deriving the fit of \\eqnref{eq:fittingfunc},\nand simply set $\\tilde{M}_{\\rm fit} = 1$.\n\nSince we have not followed the evolution of the dense core after the \nouter diffuse gas is photoevaporated, the resulting \n$\\tilde{M}_{\\rm fit}$ does not strongly depend on metallicity.\nThe photoevaporation becomes inefficient in the late phase when the concentrated core is directly exposed to the UVB radiation, because of its small geometrical cross section (cf.~\\eqnref{eq:masslosseq}).\nIn order to follow the late phase evolution more accurately, \nwe will need to run simulations with\na much higher spatial resolution so that the small core can be fully resolved.\nHowever, we expect that the mass evolution would not differ significantly \nfrom those obtained in this section, \nbecause the remaining mass fraction is already small in the late phase. \n\n\n\n\n\n\n\\section{Discussion} \t\\label{sec:discussion}\n\n\\subsection{Star Formation in Metal-Enriched Halos}\n\\label{sec:starformation}\n\n\tAs the surrounding gas cools and falls on to the center, the\n\tcentral gas density further increases but the \n temperature remains low, \n and thus the dense core can be gravitationally unstable to induce \n star formation.\n\n\n\n\n\n\n\n\n\n\tIn metal-enriched halos, cooling by metal atoms\/ions and by dust grains\n\tenable the gas to condense even under the influence of strong UV radiation.\n\tHence, metal enrichment can effectively lower the mass threshold of star-forming halos \\citep[$\\sim 10^5$--$10^6{\\rm \\, M_{\\odot}}$; e.g.,][]{1997_Tegmark, 2001_Machacek, 2002_BrommCoppiLarson, 2003_YoshidaAbelHernquist}. \n In this section, we study the relation between metallicity and the minimum star-forming halo mass, $M_{\\rm min}$. \n\t\n\tWe assume that star formation occurs when an enclosed gas mass\n\t\\eq{\n\t\tM_{\\rm enc} ( r) = \\int_{\\leq r} \\rho_{\\rm b} \\, {\\rm d} V .\n\t\t\\label{eq:enclosedmass}\n\t}\n\texceeds the Bonnor-Ebert mass \\citep{1955_Ebert, 1956_Bonnor},\n\t\\eq{\n\t\tM_{\\rm BE} ( r) \\simeq 1.18 \\frac{\\bar{c_{\\rm s}}^4}{\\sqrt{P_{\\rm c} G^3}}, \\label{eq:bemass}\n\t}\n\tat a spherical radius of $r$.\n\tHere, $\\bar{c_{\\rm s}}$ is an average sound speed of gas,\n\tand $P_{\\rm c}$ is confining pressure. \n\tWe calculate $\\bar{c_{\\rm s}}$ and $P_{\\rm c}$ \n\tby integrating the pressure within an enclosed volume $V$ and over the corresponding enclosing surface $\\partial V$, respectively,\n\t\\gathering{\n\t\t\\bar{c_{\\rm s}} (r) = \\sqrt{ M_{\\rm enc} ^{-1} \\int_{\\leq r }P \\, {\\rm d} V } \\\\ \n\t\tP_{\\rm c} (r) = \\dfrac{1 }{4 \\pi r^2} \\int_{\\partial V} P \\, {\\rm d} S.\n\t\t\\label{eq:confiningpressure}\n\t}\n\tSince we are interested in star formation within dark halos,\n\twe set the enclosing radius to the scale radius (core radius), $r_{\\rm s} \\equiv r_{\\rm vir}\/c_{\\rm N}$, in \\eqnref{eq:enclosedmass}--\\eqnref{eq:confiningpressure}. \n\tWe regard a halo as star-forming if it has $M_{\\rm enc}(r_{\\rm s})\/M_{\\rm BE}(r_{\\rm s})$ larger than unity at a certain point during the evolution. \n\t\\fref{fig:starformation} shows star-forming halos and non-star-forming halos defined by this condition. \n\tWe also provide a fit to the resulting $M_{\\rm min}$ as a function of $z_{\\scalebox{0.6}{\\rm IN}}$, $J_{21}$ and $Z$ in \\appref{sec:fitminimum}. 
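\nA minimal sketch of how this criterion can be evaluated from a snapshot is given below, assuming spherically averaged profiles $\\rho_{\\rm b}(r)$ and $P(r)$ on a radial grid; the profile arrays are placeholders for simulation data, and under the spherical-symmetry assumption the surface average in \\eqnref{eq:confiningpressure} reduces to $P(r_{\\rm s})$.\n\\begin{verbatim}\nimport numpy as np\n\nG = 6.674e-8  # gravitational constant [cgs]\n\ndef is_unstable(r, rho, P, r_s):\n    # r [cm], rho [g cm^-3], P [erg cm^-3]: spherically averaged profiles\n    inside = r <= r_s\n    dV = 4.0 * np.pi * r[inside]**2                     # volume element per dr\n    M_enc = np.trapz(rho[inside] * dV, r[inside])       # eq. (enclosedmass)\n    cs2 = np.trapz(P[inside] * dV, r[inside]) \/ M_enc   # mean sound speed squared\n    P_c = np.interp(r_s, r, P)                          # confining pressure at r_s\n    M_BE = 1.18 * cs2**2 \/ np.sqrt(P_c * G**3)          # eq. (bemass), cbar_s^4 = cs2^2\n    return M_enc > M_BE, M_enc, M_BE\n\\end{verbatim}\nA halo is flagged as star-forming if this condition is met at the scale radius $r_{\\rm s}$ at any output time during the run.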
\n\t\\begin{figure*}\n\n\t \\centering\n\t \\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsGetMmin2.pdf}\n\t \\caption{Star formation vs. photoevaporation for all our runs.\n\t The left, middle, and right panels correspond to $z_{\\scalebox{0.6}{\\rm IN}} = 10, 15, 20$, respectively. \n\t The horizontal and vertical axes are halo mass and $J_{21}$ in all the panels. \n\t Each of the three panels is divided into $11\\times 3$ rectangular blocks indicating combinations of $M$ and $J_{21}$. Each block is further divided into 5~square sections to show the metallicity dimension; the corresponding metallicities are $Z = 0,10^{-6},10^{-5},10^{-4},10^{-3}\\,Z_{\\odot}$ from top to bottom.\n We represent star-forming halos by filling the corresponding sections with colored squares: \n\t $10^{-3}\\,Z_{\\odot}$ (navy), $10^{-4}\\,Z_{\\odot}$ (purple), $10^{-5}\\,Z_{\\odot}$ (pink), $10^{-6}\\,Z_{\\odot}$ (orange), $0\\,Z_{\\odot}$ (yellow). \n\t The vertical blue dashed line shows the molecular cooling limit, which we derive using an expression in \\cite{2013_Fialkov} \\citep[cf.][]{2001_Machacek, 2012_Fialkov}, for reference; note that we have not taken into account the correction to the molecular cooling limit due to the relative velocity between baryons and cold dark matter. Incorporating the correction does not significantly change the limit mass for the redshifts of interest here.\n\t }\n\t \\label{fig:starformation}\n\t\\end{figure*}{}\n\t\n\tIn molecular cooling halos, the gas cools sufficiently to satisfy the instability condition, $M_{\\rm enc}\/M_{\\rm BE}>1$, at any metallicity including the primordial case. \n\tThe halos retain more than 10\\% of the initial gas.\n\tThe remaining gas mass is larger for higher halo mass and for lower $J_{21}$. \n\tThe remaining fraction is nearly unity for atomic cooling halos ($T_{\\rm vir} > 10^4{\\rm \\, K}$).\n\t\n\tWe find a strong impact of metal enrichment in halos whose mass is lower than the atomic cooling limit. \n The gas cools via \\ce{H2} cooling faster than the bulk of the gas is photoevaporated. \n\tThis effect is clearly seen in higher-metallicity halos (\\fref{fig:starformation}).\n\tInterestingly, the minimum collapse mass is lowered \n\teven with very small metallicities ($Z \\lesssim 10^{-5}\\,Z_{\\odot}$),\n\tand becomes as small as $M_{\\rm min} \\sim 10^4{\\rm \\, M_{\\odot}}$ with $Z \\gtrsim 10^{-4}\\,Z_{\\odot}$. \n\t\n \\citet{2014_Wise} show that \n \n star formation is active in the metal-cooling halos, and the formed stars can provide up to 30\\% of the ionizing photons responsible for reionization. \n Metal-cooling halos are a {\\it heavy} analog of molecular cooling halos and have masses slightly below the atomic cooling limit ($\\sim 10^8{\\rm \\, M_{\\odot}}$). We have found a {\\it light} analog of molecular cooling halos in which the gas cools by \\ce{H2} molecules that are formed by grain-catalyzed reactions. \n Recent numerical simulations suggest formation of massive or even very massive stars in metal-enriched halos \\citep{2016_Chiaki, 2018_Fukushima}. \n These stars can allow the beginning of reionization to occur earlier than without the metal effects, but this is unlikely to change the redshift at which reionization completes \\citep{2018_Norman}.\n \t\n\n\n\\subsection{Implications for $21{\\, \\rm cm}$ Line Observations}\nThe hyperfine spin-flip emission of atomic hydrogen (the so-called $21{\\, \\rm cm}$ line) is a promising probe of the neutral IGM in the Epoch of Reionization.
\nBecause the strength of the 21-cm signal depends crucially on astrophysical processes at these early epochs, observations of the emission\/absorption against the CMB will provide invaluable information on star formation and the physical state of the IGM. \n\n Semi-numerical models have been used to predict the large-scale fluctuations of the $21{\\, \\rm cm}$ signal \\citep[e.g.,][]{Mesinger:2011,Fialkov:2014b}.\n Results of such models are often utilized to derive upper limits on high-redshift astrophysics \\citep[e.g.,][]{Monsalve:2018, Monsalve:2019,Ghara:2020,Mondal:2020, Greig:2020}, and to make forecasts for ongoing measurements of the 21-cm power spectrum with experiments such as HERA \\citep{DeBoer:2017}, LOFAR \\citep[e.g.,][]{Mertens:2020}, MWA \\citep[e.g.,][]{Trott:2020}, LWA \\citep{Eastwood:2019}, and the future SKA \\citep{Koopmans:2015}, and of the 21-cm sky-averaged (global) signal using LEDA \\citep{Price:2018}, SARAS \\citep{Singh:2018}, EDGES \\citep{Bowman:2018}, PRIZM \\citep{philip19}, MIST\\footnote{http:\/\/www.physics.mcgill.ca\/mist\/}, and REACH\\footnote{https:\/\/www.kicc.cam.ac.uk\/projects\/reach}. \n \n The speed and convenience of the semi-numerical methods come along with their poor spatial resolution, which is compensated for by extensive use of sub-grid models. \nFor example, \\citet{2013_Sobacchi, 2016_Cohen} study the effect of the UV\nbackground radiation in terms of a gas cooling threshold mass $M_{\\rm cool}$. \nHalos more massive than $M_{\\rm cool}$ are regarded as star-forming halos. \nThe simple prescription in previous studies, however, does not take into account the effect of metal enrichment. \nAs we have shown in \\secref{sec:starformation}, star formation can occur in metal-enriched minihalos even during reionization.\n\\cite{2016_Cohen} have shown that star formation in such metal-enriched minihalos affects the $21{\\, \\rm cm}$ signal from high redshifts.\nIn particular, signatures of baryon acoustic oscillations (BAO) imprinted on the $21{\\, \\rm cm}$ signal are amplified and can possibly be detected over a wide range of redshifts. \n\\cite{2016_Cohen} set a threshold mass for cooling (and thus star formation) similar to that of molecular cooling halos. \nInterestingly, this somewhat conservative assumption is well justified by our results, where metal enrichment lowers $M_{\\rm min}$ from the molecular cooling limit by an order of magnitude for $Z \\gtrsim 10^{-4}\\,Z_{\\odot}$ even under the effects of the UVB. Our results support the predicted enhancement and survival of the BAO signature imprinted on the $21{\\, \\rm cm}$ signal due to the effects of metal enrichment. \n\n\\input{stellarfeedback}\n\n\\subsection{X-Ray Effects}\nX-rays are attenuated by a larger column $(\\sim 10^{21} {\\, \\rm cm}^{-2})$ compared to EUV \\citep{2000_Wilms}.\nThey can reach halos and pre-ionize\/pre-heat the gas \nbefore the ionization front hits the halos. \nA larger attenuation column also implies larger penetration depths. \nHigher-density photoevaporative flows would be driven \nif the gas temperature increases sufficiently to allow the gas to escape from gravitational binding. \nAccordingly, mass-loss rates would be significantly larger than those of EUV-driven photoevaporation. \n\nX-rays can possibly delay the concentration of the self-shielded regions, and thereby star formation, if they efficiently heat the gas.
\nOn the other hand, X-ray ionization can promote \\ce{H2} formation via the electron-catalyzed reactions: \n\\ce{H + e- -> H- + \\gamma} and \\ce{H- + H -> H2 + e-} \\citep{1996_Haiman, 1999_Bromm, 2003_Glover, 2015_Hummel,2011_Inayoshi,2015_InayoshiTanaka, 2016_Glover, 2016_ReganJohanssonWiseb}. \nIf X-rays are strong, very strong LW intensities are required to photodissociate \\ce{H2} throughout the halo. \nWe expect X-rays to have significant effects on the evolution of irradiated halos and on star formation activity. \nX-ray chemistry is already implemented in our code \\citep{2018_Nakatanib}, and we plan to investigate the influence of X-rays on halo photoevaporation in the future. \n\n\n\n\\subsection{Model Limitations}\n\nWe have studied the gas mass evolution for a wide range of model parameters. \nExploring the large volume of the parameter space is essential in order to \nunderstand the evolution of a {\\it population} of photoevaporating halos during reionization. Since we have adopted a few simplifications mainly to save computational time, it would be worth examining the limitations of our results. \nWe have fixed the dark matter halo potential throughout our simulations. \nIn practice, halos grow in mass by mergers and accretion, which in turn strengthens the halo's gravity \\citep[e.g.,][]{2013_SobacchiMesinger}. \nThis effect may not be negligible for halos whose mass-loss timescales are longer than their growth timescales.\nIn particular, our results indicate that photoevaporation will be significantly suppressed once halos grow to $T_{\\rm vir} \\gtrsim 10^4{\\rm \\, K}$. \nLower-mass halos ($T_{\\rm vir} \\ll 10^4{\\rm \\, K}$) disperse quickly, and thus including halo growth is not expected to significantly alter our results. In fact, we find that our results at $Z = 0\\,Z_{\\odot}$ \nare in good agreement with \\cite{2004_Shapiro} and \\cite{2005_Iliev}, \nwhere halo evolution is incorporated in an approximate manner. \n\nThe UVB radiation intensity is also fixed in our simulations.\nIn general, it can vary over time.\nAlthough the UV background intensity is hardly constrained by observations\nfor $z\\gtrsim 5$ \\citep[e.g.,][]{2007_BoltonHaehnelt, 2011_McQuinn}, numerical simulations predict that the UV background builds up over \nseveral hundred million years. As we have shown, $J_{21}$ strongly affects the mass evolution of halos at any metallicity (\\fref{fig:masslossrates}-(c)). \nWe expect that the growth of halos with time partially compensates for the rise in the intensity of the ionizing background; \nwe have shown that halos with similar $\\chi^\\prime$, which is approximately proportional to $J_{21}^{1\/2} M^{-1\/2} (1+z)^{-3}$, evolve in a similar manner (cf.~\\secref{sec:similarities} and Eqs.(\\ref{eq:chi}), (\\ref{eq:modchi})). \n\n\nFinally, metallicity is also fixed in each individual run of our simulations. The cosmic average metallicity can increase by an order of magnitude on a timescale of $\\sim 1{\\rm \\,Gyr}$ \\citep{2014_MadauDickinson}, \nwhich is much longer than the typical mass-loss timescales found in this study.
\nTime-dependent metallicity might have an effect only on long-lived photoevaporating halos by affecting their thermochemistry.\nHowever, we expect that the metallicity-dependent trend would not significantly differ from what we have reported in this study.\n\n\n\n\n\\section{Conclusions and Summary}\t\\label{sec:conclusions}\nPhotoheating and metal enrichment of halos during the epoch of reionization can affect star formation efficiency and thus change the course of reionization. The effects of metal enrichment, however, have never been explored systematically in prior works. We have run a suite of hydrodynamics simulations of photoevaporating minihalos ($T_{\\rm vir}\\lesssim10^4{\\rm \\, K}$) irradiated by the UVB, covering a wide range of metallicity, halo mass, UV intensity, and turn-on redshift of UV sources. \n\nOur main findings are summarized as follows: \n\\begin{itemize}\n \\item \n In low-mass minihalos with $T_{\\rm vir} < 10^4{\\rm \\, K}$,\n the gas cools mainly via \\ion{C}{2} and \\ce{H2} line emission to a temperature of $\\lesssim 100{\\rm \\, K}$ if $Z \\gtrsim 10^{-4}\\,Z_{\\odot}$.\n \\ce{H2} molecules are produced through grain-catalyzed reactions in \n the self-shielded, neutral region.\n The cooled gas concentrates towards the potential center to form a dense core. \n \n \\item The evolution of the gas mass is qualitatively the same at any metallicity. The photoevaporation rate decreases after the bulk of the \n diffuse gas is lost. The dispersal of the diffuse gas completes at an earlier time for halos with higher metallicity. \n\n \n \\item In halos with $T_{\\rm vir} > 20000{\\rm \\, K}$, the gas cools by hydrogen Lyman$\\alpha$ cooling, and thus the overall evolution of photoevaporating halos does not depend on metallicity.\n\n \\item The photoevaporation rate depends only weakly on the turn-on redshift, and it is slightly smaller for higher $z_{\\rm IN}$.\n\n \\item There is a simple scaling relation for the gas mass evolution of photoevaporating minihalos. \n The time evolution is characterized by a parameter ($\\chi^\\prime$) scaling as $J_{21}^{1\/2} M^{-1\/2} (1+z)^{-3}$. \n This indicates that the obtained evolution applies to any halo with the same $\\chi^\\prime$. \n We give a fit to the gas mass evolution as a function of time. \n \n \\item The concentrating cores of the molecular\/atomic cooling halos are likely a suitable environment for star formation. The efficient cooling of metal-enriched halos accelerates the concentration, and it results in lowering the molecular cooling limit by a small factor for small metallicities ($Z \\lesssim 10^{-5}\\,Z_{\\odot}$) and by an order of magnitude for metal-rich cases ($Z \\gtrsim 10^{-4}\\,Z_{\\odot}$). \n Feedback from the formed stars may be significant enough to disperse the baryons of the parental molecular cooling halos. \n\n\\end{itemize}\n\nOur study suggests the existence of small-mass, metal-enriched halos\nin which stars are formed even under the influence of the emerging\nUV background radiation.\n\n\n\\acknowledgments %\nWe thank Gen Chiaki and Kana Moriwaki for insightful comments \non this manuscript and technical advice.
\nRN acknowledges support from the Special Postdoctoral Researcher program at RIKEN and from a Grant-in-Aid for Research Activity Start-up (19K23469).\nAF is supported by the Royal Society University Research Fellowship.\nThe numerical simulations were carried out on the Cray\nXC50 at the Center for Computational Astrophysics, National\nAstronomical Observatory of Japan.\n\n\n\n\n\\bibliographystyle{aasjournal}\n\n\n\\subsection{Internal Stellar Feedback Effect}\n\n\t\n\tMassive stars formed in metal-enriched halos affect the host halo by UV radiation and stellar winds, and by supernova explosions.\n\tThe halo gas then not only photoevaporates owing to the external UV radiation\n\tbut can also be dispersed by these internal processes.\n\t\n\tStellar feedback is effective if\n\t(i) the enclosed gas mass $M_{\\rm enc}$ (\\eqnref{eq:enclosedmass})\n\texceeds the Bonnor-Ebert mass $M_{\\rm BE}$ (\\eqnref{eq:bemass});\n\t(ii) the stellar feedback energy deposited by massive stars, $E_{\\rm dep}$,\n\tis larger than the gravitational binding energy of the neutral gas\n\tin the self-shielded regions\n\t\\eq{\n\t\tE_{\\rm dep} \\gtrsim \\frac{G (M + M_{\\rm s}) }{r_{\\rm s} } M _{\\rm s},\n\t}\n\twhere $r_{\\rm s}$ is the size of the neutral gas clump and $M_{\\rm s}$ is the gas mass contained within it.\n\tWe first consider only the supernova explosion energy for simplicity. \n\tThe energy deposited by other feedback processes can be easily accounted for\n\tby increasing $E_{\\rm dep}$ by a suitable factor.\n\t\n\tThe cold gas that satisfies the condition~(i) is assumed to form stars with \n\ta star formation efficiency of $c_*$\n\tand with an initial mass function, $\\Psi(M_*)$.\n\tLet $\\epsilon_{\\rm SN}$ be the average \n\tsupernova explosion energy. \n\tThe deposited feedback energy is estimated as \n\t\\eq{\n\t\tE_{\\rm dep} = c_* M_{\\rm cold} \\, \\epsilon_{\\rm SN}\n\t\t\\braket{\\int M_* \\Psi {\\rm d} M_*}^{-1}\n\t\t{\\int_{ M_{\\rm th} }\\Psi {\\rm d} M_*},\n\t}\n\twhere $M_{\\rm cold}$ is the mass of the enclosed cold gas,\n\tand $M_{\\rm th}$ is a threshold stellar mass above which stars explode as supernovae. \n\tWe adopt $c_* = 0.1$, $\\epsilon_{\\rm SN} = 10^{51} {\\rm \\, erg}$,\n\t$M_{\\rm th} = 8{\\rm \\, M_{\\odot}}$, \n\tand use the initial mass function of \\cite{2003_Chabrier}. \n\t\n\n\n\nWhen the conditions~(i) and (ii) are met and stars are formed \nover a free-fall time, \nthe self-shielded region is expected to evolve differently \nfrom what has been shown in \\secref{sec:results}.\nIf the feedback effects are strong enough to disrupt the entire halo, \nthe halo evolution described in \\secref{sec:results} \nis valid only up to one free-fall time. \n\\fref{fig:masslossZdependence_stars} shows the same plots as \\fref{fig:masslossrates}-(a), \nbut the lines extend only up to the time when the stellar feedback is assumed to become effective.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_metallicity_dependenceMassEvolution_showstarformation.pdf}\n\\caption{\n\t\tThe same plot as \\fref{fig:masslossrates}-(a), \n\t\tbut the lines are terminated at the time of star formation and feedback (marked with star symbols) \n\t\tfor halos that meet the conditions~(i) and (ii). \n\t\t\t\t}\n\\label{fig:masslossZdependence_stars}\n\\end{center}\n\\end{figure}\nMassive, metal-rich halos are likely to disrupt themselves before the gas is lost via photoevaporation.
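\nFor illustration, the following Python sketch evaluates the number of supernova progenitors per unit stellar mass for the \\cite{2003_Chabrier} initial mass function and the resulting $E_{\\rm dep}$ for an example cold gas mass; the IMF normalization, the integration bounds, and the value of $M_{\\rm cold}$ are assumptions made only for this example.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef chabrier(m):\n    # unnormalized dN\/dm: lognormal below 1 Msun, power law above (Chabrier 2003)\n    if m <= 1.0:\n        return np.exp(-(np.log10(m) - np.log10(0.079))**2 \/ (2.0 * 0.69**2)) \/ m\n    return chabrier(1.0) * m**(-2.3)\n\ndef sn_per_msun(M_th=8.0, m_lo=0.08, m_hi=100.0):\n    # number of stars above M_th per solar mass of stars formed\n    n_sn, _ = quad(chabrier, M_th, m_hi)\n    m_tot, _ = quad(lambda m: m * chabrier(m), m_lo, m_hi)\n    return n_sn \/ m_tot\n\nc_star, eps_sn, M_cold = 0.1, 1e51, 1e5  # c_*, erg per SN, example cold gas mass [Msun]\nE_dep = c_star * M_cold * sn_per_msun() * eps_sn\nprint(sn_per_msun(), E_dep)  # roughly 0.01 SNe per Msun; E_dep in erg\n\\end{verbatim}\nComparing $E_{\\rm dep}$ with the binding energy in condition~(ii) then determines whether the feedback can disperse the remaining neutral gas.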
\nNote that the fraction of the cold gas that is converted to stars is small.\n\nIn metal-free halos, the stellar feedback can also be important, especially for massive halos.\nWe expect that the stellar feedback is unimportant\nin low-mass halos with $T_{\\rm vir} \\lesssim 100{\\rm \\, K}$ ($M\\lesssim 10^{4.5} \\,{\\rm \\, M_{\\odot}}$ at $z = 10$) for any metallicity. In these halos, most of the gas is quickly lost by photoevaporation before star formation.\nIn conclusion, \nthe deposited energy due to the stellar feedback can disperse the gas from the host halo,\nsimilarly to the well-known feedback effect in dwarf galaxies devoid of \\ion{H}{1} content \n\\citep{2009_Grcevich, 2014_Spekkens}.