diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpbfl" "b/data_all_eng_slimpj/shuffled/split2/finalzzpbfl" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpbfl" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\IEEEPARstart{S}{uperpixel} segmentation aims at grouping the discretizing pixels into some high-level correlative units as input primitives in a variety of subsequent computer vision tasks,\n\\textit{e.g.}, salient object detection \\cite{2020Going,Video,cong2017co,xu2019video}, image dehazing \\cite{yang2018superpixel}, image classification \\cite{shi2019multiscale}, object recognition \\cite{wang2014superpixel}, adversarial attack \\cite{dong2020robust}.\nNowadays, dual cameras have been widely used in extensive industrial applications, such as assistant driving and mobile phones.\nCompared with single images, stereo image pairs can obtain complementary information from the second viewpoint, which is beneficial to scene representation and object modeling \\cite{li2021stereo}.\nHowever, how to effectively utilize complementary and correspondence information to generate superpixels for stereo images is still a challenging task.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/simple_compare\/Simple_Compare.pdf}\n \\caption{A simple illustration of the comparison between the state-of-the-art superpixel segmentation methods and our method.}\n \\label{fig:Simple-compare}\n\\end{figure}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=.95\\textwidth]{figures\/framework.pdf}\n \\caption{Overall framework of the proposed method.}\n \\label{fig:method}\n\\end{figure*}\n\nFor stereo superpixel segmentation, stereo image pairs are generally segmented separately as single views by traditional methods. By this way, the complementarity and correlation between the left and right views of stereo image pairs are ignored and cannot be explored sufficiently \\cite{cheng2015cross,liu2016complementary}. Therefore, these methods cannot be regarded as an real implementation of stereo superpixel segmentation, since the intrinsic characteristics of stereo images are neglected.\nTo take the collaborative relationship between left and right views into consideration,\nLi \\textit{et al.} \\cite{li2021stereo} propose a collaborative optimization scheme to generate stereo superpixels with the parallax consistency, which is the first attempt to devise a specific superpixel segmentation method for stereo image pairs.\nThe method first match the corresponding regions between the left and right view of a stereo image pair. Superpixels are initialized and matched in the corresponding regions. Then, the superpixels in the left and right views are refined simultaneously via a collaborative optimization strategy.\nExperimental results demonstrate it outperforms the methods that segment stereo image pairs separately.\nNevertheless, this method extracts handcrafted feature instead of deep feature by an unsupervised way, which leads to the limitation of the performance.\n\nMost recently, Wu \\textit{et al.} \\cite{2021Stereo} propose an end-to-end dual attention fusion network (StereoDFN) for stereo superpixel segmentation, which extracts the deep features of stereo image pairs by convolutional neural networks instead of handcrafted features. 
Then it models the correspondence between the left and right views via the parallax attention module to integrate the complementary information of stereo image pairs, and it achieves impressive performance.\nHowever, existing superpixel segmentation methods like StereoDFN and the superpixel sampling network (SSN) \\cite{SSN} utilize five-dimensional features (two-dimensional spatial features XY and three-dimensional color features Lab) as input,\nfrom which high-dimensional deep features are extracted. For the stereo superpixel segmentation task, the disparity information is often adopted to emphasize the correspondence between the left and right views.\nIf the spatial information is input before the feature fusion,\nthe stereo features become coupled with the spatial features,\nwhich may impose strong constraints on the spatial information\nand lead to a degradation of image boundaries.\n\nThe presented work significantly extends StereoDFN with a decoupling mechanism of spatial information that uses only three-dimensional color features (Lab) instead of five-dimensional features (XYLab) in modeling the correspondence between both views,\nthereby decoupling the stereo features (color features) and spatial features (XY) to relax the constraint of spatial information.\nConsidering the importance of spatial information in superpixel segmentation,\nwe further design a Dynamic Spatiality Embedding Module (DSEM) to re-add spatial information. The weighting of spatial information is adaptively adjusted through the Dynamic Fusion (DF) mechanism in DSEM to fit images of different sizes, thereby obtaining a more accurate representation of spatial information and achieving better performance.\nAs shown in the simple comparison in Fig.~\\ref{fig:Simple-compare}, our proposed method adheres to the object boundaries better than FCN \\cite{FCN} and StereoDFN.\n\nIn this paper, we improve upon our previous work in ICME \\cite{2021Stereo}.\nThe main contributions of the proposed work can be summarized as follows:\n\\begin{enumerate}\n \\item We propose a stereo superpixel segmentation method with a decoupling mechanism of spatial information to generate superpixels for stereo image pairs, which integrates the correspondence from both the left and right viewpoints and takes the stereo features alignment and occlusion problems into account.\n \n \\item\n Since the coupling of stereo features and spatial features may impose strong constraints on spatial information while modeling the correspondence between stereo image pairs,\n we develop a spatial decoupling mechanism that models the correspondence with a relaxed spatial constraint by decoupling the stereo features and spatial features, and postponing the embedding of spatial information until after the stereo features have been fused.\n %\n \\item We design a Dynamic Spatiality Embedding Module (DSEM) to re-add spatial information for achieving a finer segmentation. The weighting of spatial information can be adaptively adjusted via the Dynamic Fusion (DF) mechanism in DSEM to fit images of different sizes, thereby achieving better performance.\n %\n \\item Our method achieves state-of-the-art performance \n compared with previous works both quantitatively and qualitatively.\n Extensive ablation studies validate the effectiveness of the proposed strategy.\n With an application in salient object detection, we also demonstrate that our method can achieve superior performance in a downstream task.\n\\end{enumerate}\n\nThe article is organized as follows.
We briefly introduce related work on existing superpixel segmentation algorithms in Section II.\nThen, we propose our model and detail each key component in Section III.\nThe qualitative and quantitative experimental results and analyses are presented in Section IV. \nIn Section V, we present the application of the proposed method in salient object detection.\nFinally, Section VI concludes this article.\n\n\\section{Related Work}\nThe concept of \"superpixel\" was first introduced in \\cite{ren2003learning}: an over-segmentation of an image generated by grouping pixels that are similar in low-level properties.\nExisting superpixel segmentation algorithms can be roughly divided into two categories: unsupervised superpixel segmentation methods and supervised superpixel segmentation methods.\n\\subsection{Unsupervised Superpixel Segmentation Methods}\n\nSimple linear iterative clustering (SLIC) \\cite{SLIC} is one of the most widely used unsupervised methods, which employs a k-means clustering approach to generate superpixels efficiently by grouping nearby pixels based on five-dimensional color and position features of the images.\nSince SLIC has a fast runtime and impressive performance, many superpixel-based applications commonly use it for superpixel segmentation.\nLinear spectral clustering (LSC) \\cite{chen2017linear} generates superpixels based on a kernel function instead of the traditional eigen-decomposition, which not only produces compact superpixels but also incurs low computational costs.\nConsidering the irregular structure of superpixels, Li \\textit{et al.} \\cite{li2019superpixel} propose approximately structural superpixels (ASS): they regard superpixel segmentation as a square-wise asymmetric partition problem and generate ASS in an asymmetric square-wise manner, which preserves semantics better and largely reduces the data amount.\n\\subsection{Supervised Superpixel Segmentation Methods}\nIn recent years, inspired by the success of deep learning techniques in a wide variety of computer vision tasks, some works try to use deep learning techniques for superpixel segmentation. Jampani \\textit{et al.} \\cite{SSN} propose the first deep learning-based end-to-end trainable superpixel segmentation network (SSN), which is inspired by the SLIC method. To simplify the generation of superpixels, Yang \\textit{et al.} \\cite{FCN} propose a lightweight fully convolutional network (FCN) based on an encoder-decoder structure, which generates superpixels efficiently by predicting the probability map between pixels and superpixels.\nMore recently,\nWu \\textit{et al.} \\cite{2021Stereo} propose a dual attention fusion network (StereoDFN),\nwhich attempts to take the collaborative relationship between stereo image pairs into consideration by modeling the correspondence between them\nbased on the parallax attention mechanism.\n\n\n\n\n\\section{Proposed Method}\nIn this work, we propose a stereo superpixel segmentation method with a decoupling mechanism of spatial information;\nthe framework is illustrated in Fig.~\\ref{fig:method}.\nIn general, the proposed method can be divided into the following steps:\nFirst, stereo image pairs in Lab color space are input\ninto a fully convolutional network to extract the deep features.\nThen, the deep features of the left and right views are fed to the Decoupled Stereo Fusion Module (DSFM),\nwhich integrates the features from both views.
\nMoreover, the Dynamic Spatiality Embedding Module (DSEM) is proposed\nto adaptively combine the spatial information with the deep features.\nFinally, a soft clustering algorithm \\cite{SSN} is adopted to generate the superpixels.\n\nIn what follows, we detail the main components of the proposed method,\nwhich are the feature extractor, the Decoupled Stereo Fusion Module (DSFM),\nthe Dynamic Spatiality Embedding Module (DSEM) and the loss functions.\n\n\\subsection{Feature Extractor}\nA pair of weight-shared Convolutional Neural Networks (CNNs) is adopted to extract the deep features of the stereo image pair.\nThe basic block is a `Conv-BN-ReLU' block, which is composed of a convolution layer with $3 \\times 3$ kernel size and $64$ output channels,\na batch-normalization layer and a ReLU activation function.\nEvery two blocks are followed by a max-pooling layer for downsampling.\nThe features captured from Block2, Block4 and Block6\nare upsampled to the same resolution as the input image and concatenated together.\nBlock7 then fuses them and generates the final output.\nIn this way, the network can effectively learn multi-level and multi-scale features, which is beneficial for both superpixel segmentation and capturing the correspondence of stereo image pairs.\nThe schematic illustration of the feature extractor is shown in Fig.~\\ref{fig:feature_extractor}.\n\n\\subsection{Decoupled Stereo Fusion Module}\nThe Decoupled Stereo Fusion Module (DSFM) is the key component for fusing the stereo features.\nConsidering the most significant problems in stereo features fusion,\nnamely features alignment and the occlusion problem,\nthe proposed DSFM tries to solve them via the parallax attention mechanism and a valid mask.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/Feature_Extractor.pdf}\n \\caption{The schematic illustration of the Feature Extractor.}\n \\label{fig:feature_extractor}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering\n\t\\subfigure[Previous method.]{\n \\includegraphics[width=\\linewidth]{figures\/Previous_work.pdf}\n\t}\n\t\\subfigure[Proposed method.]{\n\t\t\\includegraphics[width=\\linewidth]{figures\/Current_work.pdf}\n\t}\n\t\\caption{Differences between our method and StereoDFN.
\n\tWe only use Lab color information as input to decouple the stereo features and spatial features.\n\tAfter the \\textbf{D}ecoupled \\textbf{S}tereo \\textbf{F}usion \\textbf{M}odule (DSFM),\n\twe design a \\textbf{D}ynamic \\textbf{S}patiality \\textbf{E}mbedding \\textbf{M}odule (DSEM) \n\tto re-add spatial information.} \n\t\n\t\\label{fig:compare_previous}\n\\end{figure}\n\n\\textbf{Spatial Decoupling Mechanism.}\nSince the coupling of stereo features and spatial features may impose strong constraints on spatial information while modeling the correspondence between stereo image pairs, thereby preventing superpixels from adhering to object boundaries,\nthe spatial decoupling mechanism is proposed to model the correspondence between stereo image pairs with a relaxed spatial constraint.\nMore specifically, we remove the spatial information from the input for relaxing the constraint of spatial information.\nThe input of StereoDFN is five-dimensional (XYLab),\nwhile the input of our proposed method is three-dimensional (Lab);\nthis is the essential difference between our proposed method and StereoDFN.\nWe also present a schematic illustration of the difference in Fig.~\\ref{fig:compare_previous}.\nFurthermore, the effectiveness of our spatial decoupling mechanism is shown in Fig.~\\ref{fig:decouple_visualized}: benefiting from decoupling the stereo features and spatial features, our method eliminates the interference of spatial information on the modeling, and the boundary information is much clearer.\n\n\\textbf{Stereo Features Alignment.}\nSince the corresponding pixels in stereo image pairs are located at different positions,\nit is extremely difficult to fuse the stereo features directly.\nTherefore, aligning the stereo features is necessary before fusion.\n\n\\begin{figure}[t] \n \\begin{minipage}[b]{.3\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=2.85cm]{figures\/origin.png}}\n \\centerline{(a) Original image}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{.3\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=2.85cm]{figures\/coupled.png}}\n \\centerline{(b) Coupled}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{.3\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=2.85cm]{figures\/decoupled.png}}\n \\centerline{(c) Decoupled}\n \\end{minipage}\n \\hfill\n %\n \\caption{The visualization of deep features after modeling the correspondence between stereo image pairs.\n Note that (b) is generated by StereoDFN, while (c) is generated by our method.}\n \\label{fig:decouple_visualized}\n\\end{figure}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/attention.pdf}\n \\caption{The schematic illustration of the difference between the self attention and parallax attention mechanisms.\n (a) and (b) show the self attention mechanism and our parallax attention mechanism, respectively.\n By comparing (a) and (b), we can see that the computational complexity of the parallax attention mechanism is much smaller.}\n \\label{fig:attention}\n\\end{figure}\nInspired by~\\cite{PAM}, the parallax attention mechanism is utilized\nto model the correspondence between stereo image pairs.\nSince the left and right views are only translated horizontally and are aligned in the vertical direction,\nthe matching pixels in the two views must lie on the same horizontal line.\nFor this reason, as illustrated in Fig.~\\ref{fig:attention}(a) and Fig.~\\ref{fig:attention}(b), the parallax
attention mechanism only considers the correlation of pixels on the same horizontal line, instead of all the pixels in the image as in the traditional self attention mechanism.\nIn this way, the computational complexity of the attention can be largely reduced.\n\nAs illustrated in Fig.~\\ref{fig:method} for aligning the features from the right view with the left view features,\nfor a pair of deep features $\\mathcal{F}_{L}$ and $\\mathcal{F}_{R}\\in\\mathcal{R}^{H \\times W \\times C}$,\nwe obtain $A$ and $B$ from a convolution layer with $1 \\times 1$ kernel size. \nThen, the parallax attention map $\\mathcal{M} \\in \\mathcal{R}^{H \\times W \\times W}$ will be generated by:\n\\begin{equation}\n \\mathcal{M}_{R \\to L} = softmax(A \\otimes B^T),\n\\end{equation}\n\\begin{equation}\n \\mathcal{M}_{L \\to R} = softmax(B \\otimes A^T),\n\\end{equation}\nwhere $\\otimes$ denotes the batch-wise multiplication, and $T$ denotes the batch-wise transposition.\n$\\mathcal{M}_{L \\to R}(i, j, k)$ and $\\mathcal{M}_{R \\to L}(i, j, k)$ represent the contribution of the position $(i, k)$ in one view to the position $(i, j)$ in the other view.\nIn this way, the supplementary information of one view can be obtained through the other view, which can be formulated as follows:\n\n\\begin{equation}\n \\hat{\\mathcal{F}}_{L} = \\mathcal{M}_{R \\to L} \\otimes \\mathcal{F}_{R},\n\\end{equation}\n\\begin{equation}\n \\hat{\\mathcal{F}}_{R} = \\mathcal{M}_{L \\to R} \\otimes \\mathcal{F}_{L},\n\\end{equation}\nwhere $\\hat{\\mathcal{F}}_{L}$ and $\\hat{\\mathcal{F}}_{R}$ denote the aligned features.\n\n\\textbf{Occlusion Problem.}\nOcclusion always exists in stereo images due to abrupt disparity variations,\nwhich leads to an inaccurate stereo features fusion.\nTherefore, we further add an occlusion handling part to DSFM to take the occlusion problem into consideration.\n\nTake the handling of the occlusion in the right image as an example.\nAssuming that a pixel located at $(i, j)$ is in the occluded region,\nfor any $k \\in [1,W]$, $\\mathcal{M}_{L \\to R}(i,j,k)$ is an extremely low value,\nsince no pixel on the same horizontal line in the left view is relevant to it.\nThus, the valid mask $O_{L \\to R}$ can be generated from the parallax attention map $\\mathcal{M}_{L \\to R}$,\nwhich can be formulated as Eq.~(\\ref{valid-mask}):\n\\begin{equation}\n\t\\label{valid-mask}\n\tO_{L \\to R}(i,j) =\n\t\\begin{cases}\n\t1, &{\\sum_{k \\in [1, W]} \\mathcal{M}_{L \\to R}(i, k, j) \\textgreater \\tau} \\\\\n\t0, &{\\sum_{k \\in [1, W]} \\mathcal{M}_{L \\to R}(i, k, j) \\leq \\tau},\n\t\\end{cases}\n\\end{equation}\nwhere $\\tau$ is a threshold set to $0.1$.
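To make the above construction concrete, the following PyTorch sketch (a simplified, hypothetical implementation; the $1 \\times 1$ projection convolutions and other details of the actual network are omitted) computes the parallax attention maps, the aligned features and the valid masks of Eq.~(\\ref{valid-mask}):\n\\begin{verbatim}\nimport torch\n\ndef parallax_attention(feat_l, feat_r, tau=0.1):\n    # feat_l, feat_r: (B, C, H, W); the 1x1 projections A and B are omitted.\n    b, c, h, w = feat_l.shape\n    q = feat_l.permute(0, 2, 3, 1).reshape(b * h, w, c)\n    k = feat_r.permute(0, 2, 3, 1).reshape(b * h, w, c)\n    # Row-wise attention over the horizontal dimension only.\n    m_r2l = torch.softmax(torch.bmm(q, k.transpose(1, 2)), dim=-1)\n    m_l2r = torch.softmax(torch.bmm(k, q.transpose(1, 2)), dim=-1)\n    # Warp one view to the other to obtain the aligned features.\n    feat_l_hat = torch.bmm(m_r2l, k).reshape(b, h, w, c).permute(0, 3, 1, 2)\n    feat_r_hat = torch.bmm(m_l2r, q).reshape(b, h, w, c).permute(0, 3, 1, 2)\n    # Valid mask: a position is valid if the attention mass summed over\n    # the source positions exceeds the threshold tau.\n    o_l2r = (m_l2r.sum(dim=1) > tau).float().reshape(b, 1, h, w)\n    o_r2l = (m_r2l.sum(dim=1) > tau).float().reshape(b, 1, h, w)\n    return feat_l_hat, feat_r_hat, o_l2r, o_r2l\n\\end{verbatim}\n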
\nThen, the occlusion handling part will fuse the stereo features.\nThe left fused features $\\Tilde{\\mathcal{F}}_{L}$ can be obtained as Eq.~(\\ref{eq:fuse}):\n\\begin{equation}\n \\label{eq:fuse}\n \\Tilde{\\mathcal{F}}_{L} = Concat(\\hat{\\mathcal{F}}_{L} \\circ O_{L \\to R} + \\mathcal{F}_{L} \\circ (1 - O_{L \\to R}), \\mathcal{F}_{L}),\n\\end{equation}\nwhere $Concat(,)$ represents the concatenation operation along the channel dimension,\nwhile $\\circ$ represents the Hadamard product.\nFinally, $\\Tilde{\\mathcal{F}}_{L}$ will be fed to a `Conv-BN-ReLU' block\nfor reducing the channel size to that of the original features.\n\n\\subsection{Dynamic Spatiality Embedding Module}\nTo prevent the spatial information from influencing the stereo correspondence modeling,\nthe spatial information has been removed from the input.\nHowever, spatial information is indispensable for a superpixel segmentation method to adhere to object boundaries accurately, which is vital for achieving better performance.\nTherefore, spatial information is re-added via the Dynamic Spatiality Embedding Module (DSEM) to take both conditions into consideration simultaneously, so that we not only eliminate the disadvantage of spatial information in modeling the correspondence, but also utilize its advantage.\nDSEM consists of two parts, which are Spatiality Embedding (SE) and Dynamic Fusion (DF).\nThe architecture of DSEM can be seen in Fig.~\\ref{fig:method}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/coordinate.pdf}\n \\caption{(a) denotes the input image with $300 \\times 300$ resolution,\n (b) the deep features, \n (c) the features after adding the spatial information with a value domain of $(0,1)$, \n and (d) the features after adding the spatial information zoomed in the manner of~\\cite{SLIC, SSN}.\n We can see that our embedding strategy preserves more details than (d).}\n \\label{fig:coordinate}\n\\end{figure}\n\n\\textbf{Spatiality Embedding.}\nA reliable superpixel segmentation algorithm requires the ability to handle images with different resolutions.\nHowever, the value of the spatial information can be extremely large for a high-resolution image,\nwhich will pollute the image feature representation if the spatial information is embedded directly.\nIn order to avoid such a disadvantage,\nwe normalize the spatial information $X$ and $Y$ as Eq.~(\\ref{eq:range}):\n\\begin{equation}\n \\label{eq:range}\n \\hat{X} = \\frac{X}{max(X)},~\\hat{Y} = \\frac{Y}{max(Y)},\n\\end{equation}\nwhere $X$ and $Y$ are the spatial coordinates in the horizontal and vertical directions, respectively.\n\nAfter normalization, $\\hat{X}$ and $\\hat{Y}$ are added to the fused features to obtain $\\Tilde{\\mathcal{F}_{X}}$ and $\\Tilde{\\mathcal{F}_{Y}}$.\nThen, a convolution layer with $1 \\times 1$ kernel size follows to embed the spatial information.\nIn this way, we can prevent an over-consideration of spatial information.\nFinally, $\\Tilde{\\mathcal{F}_{X}}$ and $\\Tilde{\\mathcal{F}_{Y}}$ are concatenated with the input features and sent to the dynamic fusion part.\nFig.~\\ref{fig:coordinate} shows the effectiveness of our embedding strategy.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/DF.pdf}\n \\caption{The schematic illustration of the Dynamic Fusion mechanism.}\n \\label{fig:DF}\n\\end{figure}
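As an illustration, a minimal PyTorch sketch of the SE step might read as follows (hypothetical module and parameter names; the exact layer configuration of our implementation may differ):\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass SpatialityEmbedding(nn.Module):\n    # Embed normalized coordinates, Eq. (eq:range), via 1x1 convolutions.\n    def __init__(self, channels):\n        super().__init__()\n        self.embed_x = nn.Conv2d(channels + 1, channels, kernel_size=1)\n        self.embed_y = nn.Conv2d(channels + 1, channels, kernel_size=1)\n\n    def forward(self, feat):  # feat: (B, C, H, W) fused features\n        b, c, h, w = feat.shape\n        # Normalized coordinates in (0, 1].\n        xs = torch.arange(1, w + 1, device=feat.device).float() / w\n        ys = torch.arange(1, h + 1, device=feat.device).float() / h\n        grid_x = xs.view(1, 1, 1, w).expand(b, 1, h, w)\n        grid_y = ys.view(1, 1, h, 1).expand(b, 1, h, w)\n        feat_x = self.embed_x(torch.cat([feat, grid_x], dim=1))\n        feat_y = self.embed_y(torch.cat([feat, grid_y], dim=1))\n        # Concatenate with the input features for the dynamic fusion part.\n        return torch.cat([feat, feat_x, feat_y], dim=1)\n\\end{verbatim}\n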
\\begin{figure*}[!t]\n \\centering\n \\subfigure[Performance on the KITTI2015 dataset.]{\n \\includegraphics[width=.32\\textwidth]{figures\/KITTI_performance\/ASA_KITTI.pdf}\n \\includegraphics[width=.32\\textwidth]{figures\/KITTI_performance\/BR_KITTI.pdf}\n \\includegraphics[width=.32\\textwidth]{figures\/KITTI_performance\/UE_KITTI.pdf}\n \\label{fig:performance_a}\n }\n \\subfigure[Performance on the validation set of the Cityscapes dataset.]{\n \\includegraphics[width=.32\\textwidth]{figures\/Cityscapes_performance\/ASA_Cityscapes.pdf}\n \\includegraphics[width=.32\\textwidth]{figures\/Cityscapes_performance\/BR_Cityscapes.pdf}\n \\includegraphics[width=.32\\textwidth]{figures\/Cityscapes_performance\/UE_Cityscapes.pdf}\n \\label{fig:performance_b}\n }\n \\caption{Quantitative comparison of the proposed method and other state-of-the-art methods.}\n \\label{fig:performance}\n\\end{figure*}\n\\textbf{Dynamic Fusion.}\nAlthough spatial information is indispensable in the superpixel segmentation task,\nit does not always play an equally important role in different regions of an image.\nFor example, for regions with sparse textures,\nspatial information should be considered more to generate regular and compact superpixels.\nOn the other hand, for regions with dense edges and complex contents,\nspatial information is relatively less important.\nTherefore, to achieve consistently excellent performance under different conditions,\na dynamic fusion mechanism is designed to adaptively adjust the weighting of spatial information during the fusion phase.\n\nThe dynamic fusion mechanism employs a channel-attention~\\cite{senet} scheme\nto adaptively aggregate and refine features.\nMore specifically, we first use a `Conv-BN-ReLU' block to fuse the features coarsely.\nThen, a global average pooling layer with another `Conv-BN-ReLU' block follows to generate the global feature map.\nFinally, a series of operations is utilized to produce the weighting map,\nwhich can be formulated as follows:\n\\begin{equation}\n \\mathcal{W} = g \\cdot \\sigma(C(ReLU(LN(C(g))))),\n\\end{equation}\nwhere $\\mathcal{W}$ is the weighting map and $g$ is the global feature map.\n$\\sigma, C, ReLU$ and $LN$ represent the sigmoid function, a convolution layer with $1 \\times 1$ kernel size, the ReLU activation function and layer-normalization, respectively.\nIn this way, a more effective representation of spatial information under the guidance of the weighting map can be obtained.\nFinally, the weighting map is added to the input features and fed to the third `Conv-BN-ReLU' block to generate the adjusted features.\nFig.~\\ref{fig:DF} presents the details of the Dynamic Fusion (DF) mechanism.\n\n\\subsection{Loss Functions}\nWe design two loss functions for optimizing our model.\n\n\\textbf{Semantic Loss.}\nThis loss facilitates the adherence of the superpixels to semantic boundaries\nand utilizes the cross-entropy loss function $CE$ to measure the error:\n\\begin{equation}\n \\mathcal{L}_{sem} = CE(S, S^*),\n\\end{equation}\nwhere $S$ denotes the one-hot ground-truth semantic label and $S^*$ is the reconstructed semantic label.\n\n\\textbf{Stereo Loss.}\nThis loss function is designed to constrain the model to correctly estimate the stereo correspondence.\nWe also apply the valid masks to eliminate the problems caused by occlusion.\nThe stereo loss is defined as:\n\\begin{equation}\n\\begin{aligned}\n \\mathcal{L}_{stereo} =& \\left\\|O_{L \\to R} \\circ (I_{L} - \\mathcal{M}_{R \\to L} \\otimes I_{R})\\right\\|_1+\n\t\\\\ &\\left\\|O_{R \\to L} \\circ (I_{R} - \\mathcal{M}_{L \\to R} \\otimes I_{L})\\right\\|_1,\n\\end{aligned}\n\\end{equation}\nwhere $I_L$ and $I_R$ denote the left and right images, respectively, and $\\circ$ denotes the Hadamard product.
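In PyTorch-like code, the two loss terms may be sketched as follows (a hypothetical, simplified fragment: tensor shapes follow the attention sketch above, and a mean over pixels is used in place of the plain 1-norm):\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef total_loss(label_rec, label_gt, img_l, img_r,\n               m_r2l, m_l2r, o_l2r, o_r2l, lam=1.0):\n    # Semantic loss: cross-entropy between reconstructed and one-hot labels.\n    loss_sem = F.cross_entropy(label_rec, label_gt.argmax(dim=1))\n    # Stereo loss: masked L1 photometric consistency between the two views.\n    b, c, h, w = img_l.shape\n    v_l = img_l.permute(0, 2, 3, 1).reshape(b * h, w, c)\n    v_r = img_r.permute(0, 2, 3, 1).reshape(b * h, w, c)\n    warp_l = torch.bmm(m_r2l, v_r).reshape(b, h, w, c).permute(0, 3, 1, 2)\n    warp_r = torch.bmm(m_l2r, v_l).reshape(b, h, w, c).permute(0, 3, 1, 2)\n    loss_stereo = ((o_l2r * (img_l - warp_l)).abs().mean()\n                   + (o_r2l * (img_r - warp_r)).abs().mean())\n    return loss_sem + lam * loss_stereo\n\\end{verbatim}\n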
\nThe total loss is the sum of these two functions:\n\\begin{equation}\n \\mathcal{L}_{total} = \\mathcal{L}_{sem} + \\lambda \\mathcal{L}_{stereo},\n\\end{equation}\nwhere $\\lambda$ is empirically set to $1.0$ for balancing the scales of the different losses.\n\n\\section{Experiments and Results}\n\n\\subsection{Experimental Setup}\n\\textbf{Implementation Details.} \nWe apply a batch-mode learning method with a batch size of $8$ \nto train our model for 20K iterations.\nThe Adam optimizer with default parameters ($\\beta_1 = 0.9,~\\beta_2=0.999$) is utilized to optimize the network.\nIn addition, the initial learning rate is $2\\times10^{-4}$ and decreases by half every 2k iterations.\nAfter 8k iterations, the learning rate is fixed to $2\\times10^{-5}$.\nDuring the training phase, we randomly crop the images to size $200 \\times 200$ for augmenting the training data.\nFollowing~\\cite{SLIC,SSN}, \nstereo image pairs in Lab color space are used as input,\nand the Lab color information is scaled by multiplying with a coefficient $\\beta = \\eta \\max(m_w\/n_w,m_h\/n_h)$, \nwhere $m$ and $n$ represent the numbers of superpixels and pixels (the subscripts $w$ and $h$ indicate the width and height directions) and $\\eta$ is equal to $2.5$.\nAll experiments are implemented with the PyTorch framework on a PC with an NVIDIA RTX A4000 GPU.\n\n\\begin{figure*}[t!]\n\\centering\n \\includegraphics[width=\\linewidth]{figures\/visual\/visual_KITTI.pdf}\n \\caption{Qualitative comparison of the proposed method and other state-of-the-art methods on KITTI2015.}\n \\label{fig:KITTI_Compare}\n\\end{figure*}\n\\textbf{Datasets.}\nFollowing the experimental settings in \\cite{2021Stereo}, we use the KITTI2015~\\cite{KITTI} and Cityscapes~\\cite{Cityscapes} datasets to train and test our model.\nKITTI2015 contains 200 stereo image pairs with semantic annotations of the left images;\nwe select 150 for training and 50 for testing.\nMoreover, to further indicate the superiority of the proposed method,\nwe also use the Cityscapes dataset for evaluation.\nCityscapes is a larger and more challenging dataset,\nwhich contains extensive stereo image pairs captured in diverse scenes, weathers and illumination conditions.\nSince the test set of Cityscapes is not publicly available,\nwe use the validation set for comparison,\nwhich consists of 500 stereo image pairs.\nFurthermore, the images of Cityscapes have been scaled to quarter resolution for convenience.\n\n\\textbf{Evaluation Metrics.}\nIn our experiments, we use three widely used metrics to evaluate the performance of our model,\nwhich are the achievable segmentation accuracy (ASA), \nthe undersegmentation error (UE),\nand the boundary recall (BR).\nFor a superpixel map $S=\\{S_i\\}$ and a ground-truth semantic label $G=\\{G_j\\}$, \nthe detailed definitions of these metrics are as follows:\n\n\\textit{Achievable segmentation accuracy~(ASA):} \nASA is a metric for evaluating the upper bound on the achievable segmentation accuracy,\nwhich can be formulated as:\n\\begin{equation}\n ASA(S, G) = \\frac{\\sum_i max_j|S_i \\cap G_j|}{\\sum_j |G_j|}.\n\\end{equation}\n\n\\textit{Undersegmentation error~(UE):}\nUE essentially measures the error of the superpixel segmentation with respect to the ground truth.\nThe UE is defined as:\n\\begin{equation}\n UE(S, G) = \\frac{1}{|G|}\\sum_{G_j}\\frac{(\\sum_{S_i \\cap G_j}|S_i|) - |G_j|}{|G_j|}.\n\\end{equation}\n\n\\textit{Boundary recall~(BR):}\nThis is a metric of how well the superpixels adhere to image boundaries.\nWe use a coefficient $r$ to divide all pixels into two categories.\n$TP(S,G)$ is the number of boundary pixels in $G$ for which there is a boundary pixel in $S$ within range $r$,\nand $FN(S,G)$ is the number of the remaining boundary pixels in $G$.\nThen BR can be formulated as:\n\\begin{equation}\n BR(S, G) = \\frac{TP(S, G)}{TP(S, G) + FN(S, G)}.\n\\end{equation}\nThese are the basic definitions of the metrics used for evaluation;\nmore details can be found in \\cite{benchmark}.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/visual\/visual_Cityscapes.pdf}\n \\caption{Qualitative comparison of the proposed method and other state-of-the-art methods on Cityscapes.}\n \\label{fig:Cityscapes_Compare}\n\\end{figure*}\n\\subsection{Comparison with State-of-the-Arts}\nIn this part, we compare our method with some state-of-the-art methods, \nincluding SSN~\\cite{SSN}, FCN~\\cite{FCN}, LSC~\\cite{LSC} and StereoDFN~\\cite{2021Stereo}.\nAll compared methods adopt the parameter settings of the original works and are run with the code released by~\\cite{benchmark} or by the original authors.\n\n\\textbf{Quantitative Comparison.}\nFig.~\\ref{fig:performance_a} and Fig.~\\ref{fig:performance_b} show the quantitative comparison results of our proposed method and other state-of-the-art methods on KITTI2015 and Cityscapes, respectively.\nWe can see that our method achieves the top score on KITTI2015 and comparable performance to FCN and StereoDFN on Cityscapes.\nTaking 700 superpixels as an example, in terms of ASA, UE and BR, the minimum percentage gains of our method on KITTI2015 (computed against the highest scores of the other methods) are 0.4$\\%$, 9.3$\\%$ and 1.1$\\%$, while the maximum percentage gains are 1.4$\\%$, 33.8$\\%$ and 4.8$\\%$, respectively.\nOn Cityscapes, the minimum and maximum percentage gains of our method in terms of BR are 2.4$\\%$ and 9.8$\\%$, respectively, and our method also achieves performance comparable to FCN and StereoDFN in terms of ASA and UE.\n\n\\textbf{Qualitative Comparison.}\nAs the qualitative comparison results in Fig.~\\ref{fig:KITTI_Compare} and Fig.~\\ref{fig:Cityscapes_Compare} show,\nit is clear that our method achieves the best visual performance, since it adheres to object boundaries and preserves texture better.\nMore specifically, on KITTI2015, our method adheres to the boundaries of various lane lines more accurately and captures the details of the warning sign in low light, while the other methods cannot.\nFor Cityscapes, only our method captures the details on the warning sign and the green light of the traffic light while adhering to the image boundaries well.\n\nIn conclusion, the quantitative comparison based on standard evaluation criteria shows that our method outperforms the other methods in most cases, and it achieves the best visual performance in the qualitative comparison.\nThe impressive performance of our method also verifies the superiority of the proposed spatial decoupling mechanism.\n\n\\subsection{Ablation Experiments}\nIn order to validate the effectiveness of each component of our proposed network,\nwe perform extensive ablation experiments on KITTI2015.\nThere are three types of ablation models, namely T0, T1 and T2,\nwhere T0 denotes the full model.\nThe baselines in T1\nindicate that our DSEM combines the spatial information better,\nwhile the baselines in T2 show that our method makes good use of the information from the other viewpoint and improves the performance of superpixel segmentation.\nIn addition, T1 and T2 contain three ablation models each. More
specifically,\n\\begin{itemize}\n \\item B1 denotes the ablation model without the SE and DF modules.\n \\item B2 stands for the ablation model with spatial information (XY) as input but without the SE and DF modules.\n \\item B3 represents the ablation model without the DF module.\n \\item B4 denotes the ablation model without the stereo loss, which considers neither stereo features alignment nor the occlusion problem.\n \\item B5 refers to the ablation model without considering the occlusion problem.\n \\item B6 is the ablation model without the stereo loss.\n\\end{itemize}\n\nAll ablation models are trained for 20K iterations.\nThe specific structure of each ablation model is shown in TABLE~\\ref{table:ablation experiments}.\n\n\\begin{figure*}[h!]\n\\centering\n \\includegraphics[width=\\linewidth]{figures\/visual\/SOD_Compare.pdf}\n \\caption{Visual comparison of SOD results with different superpixel segmentation methods. Note that our method can preserve more details than others.}\n \\label{fig:SOD_Compare}\n\\end{figure*}\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{c|c|c|cccc|c}\n\\toprule\n{\\multirow{2}{*}{Type}} & {\\multirow{2}{*}{ID}} & {\\multirow{2}{*}{Input}} & \\multicolumn{4}{c|}{Component} & \\multirow{2}{*}{Stereo Loss} \\\\ \\cline{4-7}\n & & & \\multicolumn{1}{c|}{SFA} & \\multicolumn{1}{c|}{OH} & \\multicolumn{1}{c|}{SE} & \\multicolumn{1}{c|}{DF} & \\\\ \\hline\nT0 & B0 & Stereo & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\ \\hline\n & B1 & Stereo & \\checkmark & \\checkmark & & & \\checkmark \\\\ \nT1 & B2 & Stereo+XY & \\checkmark & \\checkmark & & & \\checkmark \\\\ \n & B3 & Stereo & \\checkmark & \\checkmark & \\checkmark & & \\checkmark \\\\ \\hline\n & B4 & Single & & & \\checkmark & \\checkmark & \\\\\nT2 & B5 & Stereo & \\checkmark & & \\checkmark & \\checkmark & \\checkmark \\\\ \n & B6 & Stereo & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\\\ \\bottomrule\n\\end{tabular}\n\n\\caption{Detailed setup for the ablation experiments.\n B$n$ and T$n$ denote \n the $n$th baseline and the $n$th type, respectively.\n SFA and OH denote stereo features alignment and occlusion handling, respectively.}\n\\label{table:ablation experiments}\n\\end{table}\n\\begin{figure}[h]\n \\centering\n \\subfigure{\n \\includegraphics[width=.935\\linewidth]{figures\/ablation_ASA.pdf}\n\t}\n\t\\subfigure{\n\t\t\\includegraphics[width=.935\\linewidth]{figures\/ablation_BR.pdf}\n\t}\n \\caption{\\textbf{Ablation studies on KITTI2015.} The top figure shows the contributions of each component through the ASA score, while the bottom figure shows them through the BR score.}\n \\label{fig:ablation}\n\\end{figure}\n\\textbf{Effectiveness of Each Component.}\nFig.~\\ref{fig:ablation} reports the quantitative comparison results of the ablation models on KITTI2015.\nWe can see that adding spatial information directly cannot make full use of it.\nHowever,\nthe spatial information can be embedded into the network better through our SE and DF modules, resulting in higher performance gains.\nIn addition, \nthe DSFM module and the stereo loss also play important roles: the former solves the stereo features alignment and occlusion problems, while the latter constrains the model to correctly estimate the stereo correspondence.\n\n\\textbf{Influence of Spatial Information.}\nFrom Fig.~\\ref{fig:ablation},\nwe can observe that the model with the SE module tends to show a larger performance improvement than the model without it,\nwhich proves that spatial information is helpful for generating regular and compact superpixels.\nFurthermore, adjusting the weighting of the spatial information
adaptively through the DF module can make better use of it to further improve the performance.\n\n\\section{Application on Salient Object Detection}\nSalient object detection (SOD) has attracted increasing interest in recent years,\nsince it plays a significant role in many popular computer vision tasks,\nincluding object recognition and detection\n\\cite{ren2013region,zhang2017bridging},\nimage retargeting \\cite{ding2011importance,sun2011scale},\nsemantic segmentation \\cite{wei2016stc,wang2018weakly}, etc.\nTo improve the performance of salient object detection,\nZhu \\textit{et al.} \\cite{zhu2014saliency} propose a superpixel-based salient object detection method,\nwhich treats salient object detection as a saliency value optimization problem over all superpixels in an image.\nMoreover, they observe that background regions are more connected to image boundaries than salient object regions.\nTherefore, they propose a measure called boundary connectivity,\nwhich is utilized to perform salient object detection in their proposed method.\nThe boundary connectivity is defined as follows:\n\\begin{equation}\n \\label{eq:bndcon}\n BndCon(R)=\\frac{|\\{p \\mid p \\in R, p \\in Bnd\\}|}{\\sqrt{|\\{p \\mid p \\in R\\}|}}, \n\\end{equation}\nwhere $p$ is a patch of an image and $Bnd$ is the set of all image boundary patches.\n\n\\begin{table}[t]\n\\normalsize\n \\centering\n \\begin{tabular}{p{2cm}p{1.2cm}<{\\centering}p{2.2cm}<{\\centering}p{1.2cm}<{\\centering}}\n \\toprule\n Method & SSN & StereoDFN & Ours\\\\\n \\midrule\n MAE $\\downarrow$ & 0.1783& 0.1794 & \\textbf{0.1777}\\\\\n \n E-measure $\\uparrow$ & 0.6489& 0.6427 & \\textbf{0.6498}\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results on the NJU2K benchmark. $\\uparrow$ means higher is better, and $\\downarrow$ means the opposite.}\n\\label{table:application}\n\\end{table}\nTo show that our method performs better in a downstream task,\nwe replace the default SLIC~\\cite{SLIC} in \\cite{zhu2014saliency} with three state-of-the-art superpixel segmentation methods,\nnamely our proposed method, StereoDFN~\\cite{2021Stereo} and SSN~\\cite{SSN}.\nIn our experiments, we use NJU2K \\cite{ju2015depth} for evaluation.\nNJU2K is a large dataset widely used for salient object detection on stereo images, which contains 2000 stereo image pairs involving various objects and scenarios of different difficulty levels.\nMoreover, we select the first 400 images of NJU2K and resize all of them to size $400 \\times 400$ for the ease of experimentation.\n\nFollowing \\cite{zhu2014saliency}, we choose the mean absolute error (MAE) \\cite{perazzi2012saliency} to evaluate each method quantitatively, which measures the average difference between the binary ground truth and the saliency prediction map.\nHowever, MAE only focuses on pixel-wise errors.\nTo take structural cues into account, we also introduce the enhanced-alignment measure (E-measure) \\cite{fan2018enhanced} as an evaluation metric.\n \nThe results of the quantitative evaluation are shown in Table \\ref{table:application}.\nWe can see that our method achieves the best performance in terms of MAE and E-measure.\nIn addition, the visual comparison results in Fig.~\\ref{fig:SOD_Compare} also illustrate that the saliency maps generated based on our method capture more details than those of the other state-of-the-art methods, which validates that our method performs well in the downstream task both qualitatively and quantitatively.\n\n\\section{Conclusion}\nPreviously, stereo superpixel
segmentation methods neglected the coupling of stereo features and spatial features, which may impose strong constraints on spatial information while modeling the correspondence between stereo image pairs.\nTo solve this problem, we have presented an end-to-end stereo superpixel segmentation network with a decoupling mechanism of spatial information to eliminate such a negative influence.\nIn addition, spatial information is adjusted adaptively through our dynamic fusion mechanism in the dynamic spatiality embedding module to generate regular and compact superpixels.\nExtensive experiments on several popular datasets have shown that our proposed method achieves state-of-the-art performance and performs well in a downstream task.\nThe effectiveness of the components handling the spatial information and stereo features has also been verified in our ablation studies.\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe Riemann zeta-function is defined as $\\zeta(s) = \\sum_{n=1}^{\\infty} n^{-s}$ for $\\Re(s) >1$. Throughout this article we write the complex variable $s$ as $s = \\sigma + it$ with $\\sigma$ and $t$ real, and consider $N$ to be a natural number. Truncation of the zeta-function gives the partial sum $\\zeta_{N}(s) = 1 + 2^{-s} + \\cdots + N^{-s}$. One may study these partial sums in the hope of deducing some information about $\\zeta(s)$. For a comprehensive treatment of these ideas we refer the reader to \\cite{GonekZero} and \\cite{GonekMontyZero}. \n\nTur\\'{a}n \\cite{Turan} showed that the Riemann hypothesis would follow if for all $N$ sufficiently large $\\zeta_{N}(s)$ had no zero in $\\sigma>1$. Let $\\psi_{N}$ be the supremum over all values of $\\sigma$ for which $\\zeta_{N}(s)= 0$. Montgomery \\cite{MontyU} showed that for all $N$ sufficiently large,\n\\begin{equation*}\n\\psi_{N} = 1 + \\left( \\frac{4}{\\pi} - 1 - o(1)\\right) \\frac{\\log\\log N}{\\log N},\n\\end{equation*}\nwhere the constant $4\/\\pi - 1$ is best possible.\nTherefore for $N$ sufficiently large, $\\zeta_{N}(s)$ has zeroes in $\\sigma>1$. \n \nMonach \\cite{Monach} made this explicit: for all $N>30$ there are zeroes in $\\sigma>1$. His proof was in two parts: an analytic argument for $N\\geq 549,798$ and a computational proof for $30<N<549,798$.\n\\begin{table}[ht]\n\\caption{Zeroes of $\\zeta_{N}(s)$ in $\\Re(s)>1$ for various values of $N$.}\n\\label{table1}\n\\centering\n\\begin{tabular}{c c }\n\\hline\\hline\nRange of $N$ & Are there zeroes of $\\zeta_{N}(s)$ in $\\Re(s)>1$? \\\\[0.5ex]\\hline\n$1-5$ & No, \\cite[pp.\\ 7-8]{Turan}\\\\\n$6-9$ & No, \\cite[p.\\ 550]{SpiraSections1} and \\cite[Table II, \\S 4]{SpiraSections2}\\\\\n$19$ & Yes, \\cite[Table III, \\S 4]{SpiraSections2}\\\\\n$22-27$ & Yes, \\cite[Table III, \\S 4]{SpiraSections2}\\\\\n$29-50$ & Yes, \\cite[Table III, \\S 4]{SpiraSections2}\\\\\n$\\geq 51$ & Yes, \\cite[Thm.\\ 3.8]{Monach}\\\\ \n \\hline\\hline\n \\end{tabular}\n\\end{table}\nIndeed, van de Lune and te Riele \\cite{vantR} actually computed some zeroes of $\\zeta_{N}(s)$ for $N=19, 22-27, 29-35, 37-41, 47.$ Adapting Bohr's theorems on values assumed by Dirichlet series, Spira \\cite[Thm.\\ 3]{Spiraset} (see also, \\cite[p.\\ 163]{SpiraSections2}) showed that if $\\zeta_{N}(s)$ has one zero in $\\sigma>1$ then it has infinitely many such zeroes.
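Such zeroes can also be approximated numerically; the following short Python sketch (using the mpmath library; the starting point is arbitrary and purely illustrative, and whether the root found satisfies $\\Re(s)>1$ depends entirely on the quality of the initial guess) evaluates $\\zeta_{N}(s)$ and refines a candidate zero by a root-finding iteration:\n\\begin{verbatim}\nfrom mpmath import mp, mpc, findroot, power, fsum\n\nmp.dps = 30  # working precision in decimal digits\n\ndef zeta_N(s, N):\n    # Partial sum 1 + 2^{-s} + ... + N^{-s}.\n    return fsum(power(n, -s) for n in range(1, N + 1))\n\n# Refine a rough (hypothetical) starting point for N = 19.\nroot = findroot(lambda s: zeta_N(s, 19), mpc(1.05, 100.0))\nprint(root, abs(zeta_N(root, 19)))\n\\end{verbatim}\n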
\n\nTherefore, all that remains is to investigate whether, for\n\\begin{equation}\\label{big}\nN\\in\\{10,11,12,13, 14, 15, 16, 17, 18, 20, 21, 28\\},\n\\end{equation}\n$\\zeta_{N}(s)$ has zeroes in $\\sigma >1$.\nWe find that there are no zeroes for each of these values of $N$. Combining this with Table \\ref{table1}, one proves the following theorem.\n\n\\begin{theorem}\\label{MainT}\nFor $1\\leq N \\leq 18$ and $N=20, 21, 28$ there are no zeroes of $\\zeta_{N}(s)$ in the region $\\sigma>1$; for all other positive $N$ there are infinitely many such zeroes.\n\\end{theorem}\n\n\\section{Numerical Computation}\n\n\\subsection{Interval Arithmetic}\n\nAlmost all real numbers are not exactly representable by any finite-precision, floating-point system such as the $64$ bit IEEE implementation available on most modern processors. Thus any computation involving such a floating point system will, unless we are very lucky, only produce an approximation to the true result. One way of managing this is to use interval arithmetic (see, for example, \\cite{Moore1966} for a good introduction). Instead of storing a floating point number that is an approximation to the value we want, we store an interval bracketed by two floating point numbers that contains the true value. \n\nInterval arithmetic has been used to manage the accumulation of round-off and truncation errors. In this paper, we exploit the technique to get zero-free regions rigorously. As an example, consider the function $f:\\mathbb{R}\\rightarrow\\mathbb{R}$ defined by\n\\begin{equation*}\nf(x)=x^2-4x+3.\n\\end{equation*}\nSuppose we wish to demonstrate that $f$ has no zeroes for $x\\in[4,5]$. Then we can compute\n\\begin{equation*}\nf([4,5])=[16,25]-[16,20]+3=[-1,12].\n\\end{equation*}\nSince this is inconclusive, we try again, but this time with the interval split in two. We have\n\\begin{equation*}\nf([4,4.5])=[16,20.25]-[16,18]+3=[1,7.25]\n\\end{equation*}\nand\n\\begin{equation*}\nf([4.5,5])=[20.25,25]-[18,20]+3=[3.25,10]\n\\end{equation*}\nand we have our result\\footnote{Note that if we had written $f(x)=(x-1)(x-3)$ then $f[4,5]=[3,4]\\cdot[1,2]=[3,8]$ which is the ``correct'' result. This sensitivity is common in expressions involving intervals.}.\n\n\n\\subsection{Description of algorithm}\nWe first note that we need not search in all of $\\sigma>1$ to find zeroes of $\\zeta_{N}(s)$. Spira \\cite[Thm.\\ 1]{SpiraSections1} proved that all zeroes of $\\zeta_{N}(s)$ must have real part less than $1.85$; this was sharpened in \\cite[Theorem 3.1]{BorweinZero} to $1.73$. We therefore need only consider $\\sigma\\in(1, 1.73]$. We can improve this for some values of $N$, but, as we shall see in \\S \\ref{2.3}, this is more than sufficient for our purposes.\n\nLet us consider the case $N=28$. Let $p$ denote a prime and let $\\theta_p=t\\log p$. Hence we have\n\\begin{equation*}\n\\zeta_{28}(\\sigma+it)=1+\\frac{\\exp(-i\\theta_2)}{2^\\sigma}+\\frac{\\exp(-i\\theta_3)}{3^\\sigma}+\\frac{\\exp(-i 2\\theta_2)}{4^\\sigma}+\\ldots+\\frac{\\exp(-i (2\\theta_2+\\theta_7))}{28^\\sigma}\n\\end{equation*}\nand we will now write $\\zeta_{28}(\\sigma,\\theta_2,\\ldots,\\theta_{23})$ for $\\zeta_{28}(\\sigma+it)$ under this change of variables.\n\nIt would appear that we need to examine the space $\\sigma\\in(1, 1.73]$,\n $\\theta_p\\in[0,2\\pi)$ for $p\\leq 23$, for zeroes. In fact we can do considerably better. First, we observe that $\\theta_{17}$, $\\theta_{19}$ and $\\theta_{23}$ only appear once in the sum.
Call the sum without those three terms $\\zeta_{28'}(\\sigma,\\theta_2,\\ldots,\\theta_{13})$. Then $\\zeta_{28}$ cannot have a zero if there is no $\\sigma,\\theta_2,\\ldots,\\theta_{13}$ such that\n\\begin{equation*}\n\\left|\\zeta_{28'}(\\sigma,\\theta_2,\\ldots,\\theta_{13})\\right|\\leq 17^{-\\sigma}+19^{-\\sigma}+23^{-\\sigma}.\n\\end{equation*}\n\nWe can go further. The $\\theta_{11}$ and $\\theta_{13}$ terms only appear on their own or in conjunction with $\\theta_2$. We write $a=11^{-\\sigma}$, $b=22^{-\\sigma}$, $c=13^{-\\sigma}$ and $d=26^{-\\sigma}$. Then a little high school geometry (the cosine rule to be precise) tells us that \n\\begin{equation*}\n\\left|\\frac{\\exp(-i\\theta_{11})}{11^\\sigma}+\\frac{\\exp(-i(\\theta_{11}+\\theta_{2}))}{22^{\\sigma}}\\right|\\leq\\sqrt{a^2+b^2+2ab\\cos \\theta_2},\n\\end{equation*}\nand\n\\begin{equation*}\n\\left|\\frac{\\exp(-i\\theta_{13})}{13^\\sigma}+\\frac{\\exp(-i(\\theta_{13}+\\theta_{2}))}{26^{\\sigma}}\\right|\\leq\\sqrt{c^2+d^2+2cd\\cos \\theta_2}.\n\\end{equation*}\n\n\nCall $\\zeta_{28''}(\\sigma,\\theta_2,\\theta_3,\\theta_5,\\theta_7)$ the result obtained by removing the $n=11,13,22$ and $26$ terms from $\\zeta_{28'}$. With $a,b,c$ and $d$ as above, define\n\\begin{equation*}\nf(\\sigma,\\theta_2)=17^{-\\sigma}+19^{-\\sigma}+23^{-\\sigma}+\\sqrt{a^2+b^2+2ab\\cos\\theta_2}+\\sqrt{c^2+d^2+2cd\\cos\\theta_2}.\n\\end{equation*}\nThen $\\zeta_{28}$ cannot have a zero if there is no $\\sigma,\\theta_2,\\ldots,\\theta_{7}$ such that\n\\begin{equation*}\n\\left|\\zeta_{28''}(\\sigma,\\theta_2,\\theta_3,\\theta_5,\\theta_7)\\right|\\leq f(\\sigma,\\theta_2).\n\\end{equation*}\n\nOur algorithm is as follows. Divide $\\sigma$, $\\theta_2$, $\\theta_3$, $\\theta_5$ and $\\theta_7$ into small intervals that cover $[1,1.73]$ and $[0,2\\pi]^4$ respectively. We refer to any choice of five such intervals as a ``box''. Push all possible boxes onto the stack. While the stack is not empty, pop off a box and compute an interval $z$ containing $|\\zeta_{28''}|$ for that box. Compute an interval containing $f(\\sigma,\\theta_2)$. If the interval $z-f(\\sigma,\\theta_2)$ is wholly positive, then that box did not contain any zeroes, so discard it. If the interval is wholly negative, then terminate with failure.\\footnote{We believe that this condition indicates the presence of infinitely many zeroes. We are grateful to the referee for suggesting a means by which one might seek to establish this, based on \\cite{AvellarandHale},\\cite{Dubonetal} and \\cite{SepulcreandVidal}. However, the weaker statement is sufficient for our purposes and we do not pursue this line of thought further.} If the interval straddles zero, then divide the box into $16$ smaller boxes by halving the intervals for the $\\theta_p$, and push these new boxes onto the stack.\n
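In pseudocode, the search may be organized as follows (a hypothetical Python sketch; the interval evaluation of $|\\zeta_{28''}|-f(\\sigma,\\theta_2)$ is abstracted into a callback, for which a rigorous interval-arithmetic implementation is assumed):\n\\begin{verbatim}\nfrom itertools import product\n\ndef halve_all(thetas):\n    # Split each of the four theta-intervals in half: 2^4 = 16 sub-boxes.\n    halves = [((a, (a + b) / 2), ((a + b) / 2, b)) for (a, b) in thetas]\n    return product(*halves)\n\ndef search(initial_boxes, test):\n    # test(box) returns an interval (lo, hi) enclosing\n    # |zeta_28''| - f(sigma, theta_2) over the box.\n    stack = list(initial_boxes)\n    while stack:\n        sigma, thetas = stack.pop()\n        lo, hi = test((sigma, thetas))\n        if lo > 0:\n            continue       # wholly positive: no zero here, discard the box\n        if hi < 0:\n            return False   # wholly negative: terminate with failure\n        stack.extend((sigma, sub) for sub in halve_all(thetas))\n    return True            # every box eliminated: no zeroes\n\\end{verbatim}\n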
\n\\subsection{Details of the implementation}\\label{2.3}\n\nWe implemented this algorithm in ``C++'' using our own double precision interval package written in assembler. This exploits an idea of Lambov \\cite{Lambov2008} to make efficient use of the SSE instruction set of modern processors and uses CRMLIB \\cite{Muller2010} to implement the transcendental functions.\n\nWe divided the interval for $\\sigma$ into $16$ sub-intervals $[1+(2^n-1)\\cdot 2^{-16},1+(2^{n+1}-1)\\cdot2^{-16}]$ for $0\\leq n \\leq 15$. Therefore, the first interval checked was $\\sigma=[1,1+2^{-16}]$ and the last\\footnote{Note that this covered a wider interval than was strictly necessary.} was $\\sigma=[\\frac{3}{2}-2^{-16},2-2^{-16}]$. Each of these intervals for $\\sigma$ was handled by a single core of a compute node of the University of Bristol's Bluecrystal Phase III cluster \\cite{ACRC2015}\\footnote{A single node of Phase III contains two 8-core Intel E5-2670 Sandybridge processors running at $2.6$GHz.}. Within a single core, the intervals for $\\theta_2,\\theta_3,\\theta_5$ and $\\theta_7$ were initially divided into $16$, $8$, $4$ and $2$ sub-intervals respectively, for a total of $1,024$ boxes. Since $\\theta_2$ contributed to more terms than the other variables, it made sense to start with a narrower search here: this seemed to work well in practice.\n\nTable \\ref{table2} shows the data for $N=28$ and $\\sigma\\in[1,1+2^{-16}]$. At each iteration, a box could result in $16$ new boxes; at first this is what we see. We see that after the second iteration, the remaining search space decreases dramatically.\n\n\\begin{table}[ht]\n\\caption{Number of boxes at each iteration for $N=28$ and $\\sigma\\in[1,1+2^{-16}]$}\n\\label{table2}\n\\centering\n\\begin{tabular}{c c c}\n\\hline\\hline\nIteration & Number of Boxes & Coverage (\\%)\\\\[0.5ex]\\hline\n$1$ & $1,024$ & $100$\\\\\n$2$ & $16,256$ & $99.2$\\\\\n$3$ & $45,920$ & $17.5$\\\\\n$4$ & $118,560$ & $2.83$\\\\\n$5$ & $170,048$ & $0.25$\\\\\n$6$ & $195,920$ & $0.018$\\\\\n$7$ & $212,960$ & $0.0012$\\\\\n$8$ & $82,016$ & $0.000030$\\\\\n \\hline\\hline\n \\end{tabular}\n\\end{table}\n\nWe ran this algorithm for \nthose $N$ in (\\ref{big}) and in every case confirmed that $\\zeta_N(s)$ has no zeroes for $\\sigma\\geq 1$. Checking each $N$ took much less than a minute elapsed time using $16$ cores, with $N=21$ taking the longest at $30$ seconds.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\\noindent\nThe understanding of magnetization dynamics, especially on a microscale, is of utmost relevance, for example in the development of magnetic sensors, recording heads, and magnetoresistive storage devices.\nIn the literature, a well-accepted model for micromagnetic phenomena is the Landau-Lifshitz-Gilbert equation (LLG), see~\\eqref{eq:llg:physics}.\nThis non-linear partial differential equation describes the behaviour of the magnetization of some ferromagnetic body under the influence of a so-called effective field.\nExistence (and non-uniqueness) of weak solutions of LLG goes back to Ref.~\\cite{as}.\nAs far as numerical simulation is concerned, convergent integrators can be found, e.g., in the works of Refs.~\\cite{bp}, \\cite{bjp} or~\\cite{bbp}, where even coupling to Maxwell's equations is considered.\nFor a complete review, we refer to Refs.~\\cite{cimrak}, \\cite{gc}, \\cite{mp06} or the monographs~\\cite{hubertschaefer}, \\cite{prohl} and the references therein.\nRecently, there has been a major breakthrough in the development of effective and mathematically convergent algorithms for the numerical integration of LLG.\nIn Ref.~\\cite{alouges08}, an integrator is proposed which is unconditionally convergent and only needs the solution of one linear system per time step.\nThe effective field in this work, however, only covers microcrystalline exchange effects and is thus quite restricted.\nIn the subsequent works of Refs.~\\cite{alouges11}, \\cite{goldenits}, \\cite{mathmod2012}, \\cite{gamm2011}
the analysis for this integrator was widened to cover more general (linear) field contributions while still \\revision{preserving} unconditional convergence. \n\\par In our work, we generalize the integrator from Ref.~\\cite{alouges08} even more and basically allow arbitrary field contributions (Section~\\ref{section:general}).\nUnder some assumptions on those contributions, namely boundedness and some weak convergence property,\nsee~\\eqref{assumption:chi:bounded}--\\eqref{assumption:chi:convergence}, our main theorem still proves unconditional convergence towards some weak solution of LLG (Theorem~\\ref{theorem}).\nIn particular, our analysis allows us to incorporate the approximate \\revision{computation} of effective field contributions like, e.g., the stray field which cannot be computed analytically in practice, but requires certain FEM-BEM coupling methods (Section~\\ref{section:fredkinkoehler}).\nSuch additional approximation errors have so far been neglected in the previous works.\nTo illustrate this, we show that the hybrid \\revision{FEM-BEM approaches from Refs.~\\cite{fredkinkoehler,gcr}} for stray field computations do not affect the unconditional convergence of the proposed integrator (Proposition~\\ref{prop:stray field}, Proposition~\\ref{prop:gcr}).\n\\par From the point of view of applications, the numerical integration of LLG restricts the maximum element size for the underlying mesh to the (material dependent) exchange length in order to numerically resolve domain wall patterns.\nOtherwise, the numerical simulation is not able to capture the effects stemming from the exchange term and would lead to qualitatively wrong and even unphysical results.\nHowever, due to limited memory, this constraint on the mesh-size practically also imposes a restriction on the actual size of the contemplated ferromagnetic sample.\nConsidering the magnetostatic Maxwell equations combined with a (non-linear) material law instead, one does not face such a restriction on the mesh-size (and thus on the computational domain).
On the one hand, this implies that such a rough model cannot be used to describe short-range interactions like those driving LLG.\nOn the other hand, this gives us the opportunity to cover larger domains and still maintain a manageable problem size.\n\\par In our work, we show how to combine microscopic and macroscopic domains to simulate a multiscale problem (Section~\\ref{sec:multimodel}):\nOn the microscopic part, where we aim to simulate the configuration of the magnetization, we solve LLG.\nThe influence of a possible macroscopic part, where the magnetization is not the goal of the computation, is described by means of the magnetostatic Maxwell equations in combination with some (non-linear) material law.\nThis macroscopic part then gives rise to an additional non-linear and nonlocal field contribution (Section~\\ref{sec:multi}) such that unconditional convergence of the numerical integrator or even mere existence of weak solutions in this case is not obvious.\nFor certain practically relevant material laws, we analyze a discretization of the multiscale contribution by means of the Johnson-N\\'ed\\'elec coupling and prove that the proposed numerical integrator still preserves unconditional convergence.\nStriking numerical experiments for our approach are given and discussed in Ref.~\\cite{bruckner}.\n\\subsection*{Outline}\n\\noindent The remainder of this paper is organized as follows:\nIn Section~\\ref{sec:multimodel}, we give a motivation and the mathematical modelling for our multiscale model.\nWhile Section~\\ref{sec:maxwell} focuses on the new contribution to the effective field, Section~\\ref{subsection:llg} recalls the LLG\nequation used for the microscopic part.\nIn Section~\\ref{section:general}, we introduce our numerical integrator in a quite general framework and formulate the main result (Theorem~\\ref{theorem}) which states unconditional convergence under certain assumptions on the (discretized) effective field contributions.\nThe remainder of this section is then dedicated to the proof of Theorem~\\ref{theorem}.\nIn Section~\\ref{section:heff}, we consider different effective field contributions as well as possible discretizations and show that the assumptions of Theorem~\\ref{theorem} are satisfied.\nOur analysis includes general anisotropy densities (Section~\\ref{sec:anisotropy}) as well as contributions which stem from the solution of operator equations with \\revision{strongly} monotone operators (Section~\\ref{sec:monotone}).\nThis abstract framework then covers, in particular, the hybrid FEM-BEM \\revision{discretizations from Refs.~\\cite{fredkinkoehler,gcr}} for the stray field (Section~\\ref{section:fredkinkoehler}) as well as the proposed multiscale contribution to the effective field (Section~\\ref{sec:multi}). 
A short appendix comments on the physical energy dissipation.

\section{Multiscale model}\label{sec:multimodel}
In our model, we consider two separated ferromagnetic bodies $\Omega_1$ and $\Omega_2$ as schematized in Figure~\ref{fig:regions}.
Let $\Omega_1,\Omega_2\subset\mathbb{R}^3$ be bounded Lipschitz domains with Euclidean distance ${\rm dist}(\Omega_1,\Omega_2)>0$ and boundaries $\Gamma_1 = \partial\Omega_1$ resp.\ $\Gamma_2=\partial\Omega_2$.
On the microscopic part $\Omega_1$, we are interested in the domain configuration and thus solve LLG.
On $\Omega_2$, we use the macroscopic Maxwell equations with a (possibly non-linear) material law instead.
\par To motivate this setting, we consider a magnetic recording head (see Figures~\ref{fig:regions} and~\ref{fig:readhead_overview}).
The microscopic sensor element is based on the giant magnetoresistance effect (GMR), and it requires the use of LLG in order to describe the short-range interactions between the individual layers of the sensor accurately.
On the other hand, the smaller these sensor elements become, the more important is the shielding against the stray field of neighbouring data bits.
In practice, this is achieved by means of macroscopic soft-magnetic shields located directly beside the GMR sensor.
Describing these large components by use of LLG would lead to very large problem sizes, since the detailed domain structure within the magnetic shields would have to be resolved.
As proposed in this paper, the macroscopic Maxwell equations allow us to overcome this limitation and thus provide a sound method to describe the influence of the shields in an averaged sense.
While this work focuses on the mathematical model and a possible discretization, we refer to Ref.~\cite{bruckner} for numerical simulations and the experimental validation of the proposed model.
\begin{figure}[h!]
\centering
\begin{overpic}[width=0.5\columnwidth]{./regions.eps}
	\put(20,26){\color{white}$\Omega_{coil}$}
	\put(83,32){\color{white}$\Omega_{1}$}
	\put(50,60){\color{white}$\Omega_{2}$}
\end{overpic}
\caption{\small Example geometry which demonstrates the model separation into the LLG region $\Omega_1$ and the Maxwell region $\Omega_2$ (and, in this case, an electric coil region $\Omega_{coil}$). Here, $\Omega_1$ represents one grain of a recording medium, and $\Omega_2$ shows a simple model of a recording write head.}
\label{fig:regions}
\end{figure}
\begin{figure}[h!]
\centering
\begin{overpic}[width=0.5\columnwidth]{./reader_combined.eps}
\end{overpic}
\caption{\small The example setup consists of a microscopic GMR sensor element in between two macroscopic shields. Beyond the GMR sensor, a magnetic storage medium is indicated.
The multiscale algorithm is used to calculate the stationary state of the GMR sensor for various applied external fields.}
\label{fig:readhead_overview}
\end{figure}
\subsection{Magnetostatic Maxwell equations}\label{sec:maxwell}
The magnetostatic Maxwell equations read
\begin{align}\label{eq:maxwell}
 \nabla\times\boldsymbol{H} = \boldsymbol{j}
 \quad\text{and}\quad
 \nabla\cdot\boldsymbol{B} = 0
 \quad\text{in }\mathbb{R}^3,
\end{align}
where $\boldsymbol{H}:\mathbb{R}^3\to\mathbb{R}^3$ is the magnetic field strength $[A/m]$ and $\boldsymbol{B}:\mathbb{R}^3\to\mathbb{R}^3$ is the magnetic flux density $[T]$. They are related by
\begin{equation}\label{eq:flux}
\boldsymbol{B} = \mu_0(\boldsymbol{H} + \boldsymbol{M}) \quad \text{in }\mathbb{R}^3
\end{equation}
with $\mu_0=4\pi\cdot10^{-7}$ $Tm/A$ the permeability of vacuum.
The current density $\boldsymbol{j}$ $[A/m^2]$ is the source of the magnetic field strength $\boldsymbol{H}$.
The magnetization field $\boldsymbol{M}$ $[A/m]$ is non-trivial on the magnetic bodies $\Omega_1\cup\Omega_2$, but vanishes in $\mathbb{R}^3\backslash\overline{\Omega_1\cup\Omega_2}$.
The total magnetic field is split into
\begin{align}\label{eq:splitTotalMagField}
\boldsymbol{H} = \boldsymbol{H}_1 + \boldsymbol{H}_2 + \boldsymbol{F},
\end{align}
where $\boldsymbol{H}_j:\mathbb{R}^3\to\mathbb{R}^3$ is the magnetic field induced by the magnetization $\boldsymbol{M}_j=\boldsymbol{M}|_{\Omega_j}$ on $\Omega_j$, and $\boldsymbol{F}$ is the applied field generated by the current density $\boldsymbol{j}$ in $\mathbb{R}^3\backslash\overline{\Omega_1\cup\Omega_2}$.
This implies
\begin{align}
\nabla\times\boldsymbol{F} = \boldsymbol{j}
\quad\text{and therefore}\quad
\nabla\times\boldsymbol{H}_j = 0
\quad\text{in }\mathbb{R}^3.
\end{align}
In particular, the induced fields are gradient fields $\boldsymbol{H}_j = -\nabla U_j$ with certain scalar potentials $U_j:\mathbb{R}^3\to\mathbb{R}$.
We assume that $\boldsymbol{F}$ is induced by currents only, but not by magnetic monopoles.
Therefore,
\begin{align}\label{eq:Hext:div}
\nabla\cdot\boldsymbol{F} = 0\quad\text{in }\mathbb{R}^3.
\end{align}
Moreover, the sources of $\boldsymbol{H}_j$ lie inside $\Omega_j$ only and hence
\begin{align}\label{eq:divHj}
\nabla\cdot\boldsymbol{H}_j = 0 \quad\text{in }\mathbb{R}^3 \backslash \overline\Omega_j.
\end{align}
From the magnetic flux $\boldsymbol{B}$, we obtain
\begin{equation*}
0
= \nabla\cdot\boldsymbol{B} = \mu_0(\nabla\cdot\boldsymbol{H} + \nabla\cdot\boldsymbol{M})
= \mu_0(\nabla\cdot\boldsymbol{H}_j + \nabla\cdot\boldsymbol{M}_j)
\quad \text{in } \Omega_j.
\end{equation*}
Together with $\boldsymbol{H}_j = -\nabla U_j$ and~\eqref{eq:divHj}, this reveals
\begin{subequations}\label{eq:uj}
\begin{align}
\label{eq:uj:interior}
 \Delta U_j &= \nabla\cdot\boldsymbol{M}_j \quad\text{in }\Omega_j,\\
\label{eq:uj:exterior}
 \Delta U_j &= 0 \hspace*{13.0mm}\text{in }\mathbb{R}^3 \backslash\overline\Omega_j.
\end{align}
\end{subequations}
For the micromagnetic body $\Omega_1$, the respective magnetization $\boldsymbol{M}_1$ is computed by LLG, see Section~\ref{subsection:llg} below.
The overall transmission problem~\eqref{eq:uj} for $\Omega_1$, supplemented by transmission conditions as well as a radiation condition, reads
\begin{subequations}\label{eq:u1}
\begin{align}
 \Delta U_1 &= \nabla\cdot\boldsymbol{M}_1
 \hspace*{6.65mm}\text{in }\Omega_1,\\
 \Delta U_1 &= 0
 \hspace*{16.5mm}\text{in }\mathbb{R}^3 \backslash\overline\Omega_1,\\
 U_1^{\rm ext}-U_1^{\rm int} &= 0
 \hspace*{16.5mm}\text{on }\Gamma_1,\\
 \nabla(U_1^{\rm ext}- U_1^{\rm int})\cdot{\boldsymbol{\nu}}_1 &= -\boldsymbol{M}_1\cdot{\boldsymbol{\nu}}_1
 \hspace*{3.0mm}\text{on }\Gamma_1,\\
 U_1(x) &= \mathcal O(1/|x|)
 \hspace*{5.3mm}\text{as }|x|\to\infty.
\end{align}
\end{subequations}
Here, the superscripts \emph{int} and \emph{ext} indicate whether the trace is considered from inside $\Omega_1$ (resp.\ $\Omega_2$ in~\eqref{eq:u:omega2} below) or from the exterior domain $\mathbb{R}^3\backslash\overline\Omega_1$ (resp.\ $\mathbb{R}^3\backslash\overline\Omega_2$ in~\eqref{eq:u:omega2} below).
Moreover, ${\boldsymbol{\nu}}_j$ denotes the outer unit normal vector on $\Gamma_j$, which points from $\Omega_j$ to the exterior domain $\mathbb{R}^3\backslash\overline\Omega_j$.
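To illustrate the transmission problem~\eqref{eq:u1}, we include a classical worked example (the geometry and data are assumed for illustration only and play no role in the sequel): for the unit ball $\Omega_1 = \set{x\in\mathbb{R}^3}{|x|<1}$ and a constant magnetization $\boldsymbol{M}_1\equiv\boldsymbol{M}$, the solution of~\eqref{eq:u1} is
\begin{align*}
U_1(x) = \begin{cases} \boldsymbol{M}\cdot x/3, & |x|\le 1,\\[1mm] \boldsymbol{M}\cdot x/(3|x|^3), & |x|>1, \end{cases}
\end{align*}
since $\Delta U_1 = \nabla\cdot\boldsymbol{M}_1 = 0$ holds in both subdomains, the traces coincide on $\Gamma_1$, and the normal derivatives satisfy $\nabla(U_1^{\rm ext}-U_1^{\rm int})\cdot{\boldsymbol{\nu}}_1 = -\tfrac23\,\boldsymbol{M}\cdot{\boldsymbol{\nu}}_1 - \tfrac13\,\boldsymbol{M}\cdot{\boldsymbol{\nu}}_1 = -\boldsymbol{M}\cdot{\boldsymbol{\nu}}_1$.
Inside the ball, the induced field is the constant demagnetizing field $\boldsymbol{H}_1 = -\nabla U_1 = -\boldsymbol{M}/3$.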
According to~\\eqref{eq:Hext:div} and up to an additive constant,\n\\revision{$U_{\\text{app}}$} can be obtained as the unique solution of the Neumann problem\n\\begin{subequations}\\label{eq:uext}\n\\begin{align}\\label{eq:uext:interior}\n \\Delta U_{\\text{app}} &= 0 \\hspace*{19.3mm}\\text{in }\\Omega_2, \\\\\n \\label{eq:uext:boundary}\n \\revision{\\nabla U_{\\text{app}}^{\\rm int}\\cdot{\\boldsymbol{\\nu}}_2} &= - \\boldsymbol{F}}%{\\boldsymbol{H}_{\\rm app}^{\\rm int}\\cdot\\revision{{\\boldsymbol{\\nu}}_2} \n \\hspace*{5mm}\\text{on }\\Gamma_2,\n\\end{align}\n\\end{subequations}\nwith $\\int_{\\Omega_2} U_{\\text{app}} = 0$. The transmission problem for the total\npotential $U = U_1 + U_2 + U_{\\text{app}}$ of the total magnetic field $\\boldsymbol{H} = -\\nabla U$ \nin $\\Omega_2$ and for the potential $U_2$ in\n$\\mathbb{R}^3\\backslash\\overline\\Omega_2$, supplemented by a radiation condition,\nreads\n\\begin{subequations}\\label{eq:u:omega2}\n\\begin{align}\n \\label{eq:u:omega2:interior}\n \\nabla\\cdot\\big( (1+\\chi(|\\nabla U|))\\nabla U\\big) &= 0\n \\hspace*{29.1mm}\\text{in }\\Omega_2, \\\\\n \\Delta U_2 &= 0 \n \\hspace*{29.1mm}\\text{in }\\mathbb{R}^3\\backslash\\overline\\Omega_2, \\\\\n \\label{eq:u:omega2:jumpu}\n U_2^{\\rm ext} - U^{\\rm int} &= -U_1^{\\rm int} - U_{\\text{app}}^{\\rm int}\n \\hspace*{10.5mm}\\text{on }\\Gamma_2, \\\\\n \\label{eq:u:omega2:jumpdu}\n \\revision{\\big(\\nabla U_2^{\\rm ext} - (1+\\chi(|\\nabla U^{\\rm int}|))\\nabla \n U^{\\rm int}\\big)\\cdot{\\boldsymbol{\\nu}}_2} &= (\\boldsymbol{H}_1^{\\revision{\\rm int}} + \\boldsymbol{F}}%{\\boldsymbol{H}_{\\rm app}^{\\revision{\\rm int}})\\cdot\\revision{{\\boldsymbol{\\nu}}_2}\n \\hspace*{-0.6mm}\\quad\\text{on }\\Gamma_2, \\\\\n U_2(x) &= \\mathcal O(1\/|x|) \n \\hspace*{18.0mm}\\text{as }|x|\\to\\infty,\n\\end{align}\n\\end{subequations}\nwhere~\\eqref{eq:u:omega2:interior} follows\nfrom~\\eqref{eq:maxwell}--\\eqref{eq:divHj} and~\\eqref{eq:material}.\n\\revision{The transmission condition~\\eqref{eq:u:omega2:jumpu} follows from the continuity of $U_2$ on $\\Gamma_2$ and $U = U_1+U_2+U_{\\text{app}}$ in\n$\\Omega_2$. 
To see~\eqref{eq:u:omega2:jumpdu}, we stress that~\eqref{eq:maxwell} implies $(\boldsymbol{B}^{\rm ext}-\boldsymbol{B}^{\rm int})\cdot{\boldsymbol{\nu}}_2=0$ on $\Gamma_2$.
Putting~\eqref{eq:flux}--\eqref{eq:splitTotalMagField} into this condition and using $\boldsymbol{H} = -\nabla U$ in $\Omega_2$ as well as~\eqref{eq:material} gives us
\begin{align*}
 (\boldsymbol{H}_1^{\rm ext} + \boldsymbol{H}_2^{\rm ext} + \boldsymbol{F}^{\rm ext} + (1+\chi(|\nabla U^{\rm int}|))\nabla U^{\rm int}) \cdot {\boldsymbol{\nu}}_2=0
 \quad\text{on }\Gamma_2.
\end{align*}
Moreover, from~\eqref{eq:Hext:div} and~\eqref{eq:divHj} we infer $(\boldsymbol{F}^{\rm ext}-\boldsymbol{F}^{\rm int})\cdot{\boldsymbol{\nu}}_2 = 0 = (\boldsymbol{H}_1^{\rm ext}-\boldsymbol{H}_1^{\rm int})\cdot{\boldsymbol{\nu}}_2$ on $\Gamma_2$.
Together with $\boldsymbol{H}_2 = -\nabla U_2$, the transmission condition~\eqref{eq:u:omega2:jumpdu} follows.
\begin{remark}
In case of a linear material law $\chi(|\boldsymbol{H}|) = \chi \in \mathbb{R}_{>0}$ in~\eqref{eq:material}, the transmission problem~\eqref{eq:u:omega2} simplifies to $(1+\chi)\Delta U_2 = 0$ in $\Omega_2$, $U_2^{\rm ext}-U_2^{\rm int} = 0$ on $\Gamma_2$, and $\big(\nabla U_2^{\rm ext} - (1+\chi)\nabla U_2^{\rm int}\big)\cdot{\boldsymbol{\nu}}_2 = -\chi(\boldsymbol{H}_1^{\rm int} + \boldsymbol{F}^{\rm int}) \cdot {\boldsymbol{\nu}}_2$ on $\Gamma_2$ in~\eqref{eq:u:omega2:interior},~\eqref{eq:u:omega2:jumpu}, and~\eqref{eq:u:omega2:jumpdu}, respectively.
In particular, the Neumann problem~\eqref{eq:uext} does not have to be solved.
Moreover, we do not have to assume that $\Omega_2$ is simply connected.
\end{remark}
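To convey how the non-linearity in~\eqref{eq:material} enters the potential equation~\eqref{eq:u:omega2:interior}, the following minimal Python sketch solves a one-dimensional model problem $-\frac{d}{dx}\big((1+\chi(|U'|))\,U'\big)=1$ on $(0,1)$ with homogeneous Dirichlet data by a fixed-point (Picard) iteration on the coefficient. The saturating law $\chi(s)=\chi_0/(1+s)$ and all parameter values are assumptions made purely for illustration; they are unrelated to the material laws analyzed in Section~\ref{sec:multi}.
\begin{verbatim}
import numpy as np

chi = lambda s: 5.0 / (1.0 + s)        # assumed saturating material law
N = 100                                # number of cells on (0,1)
h = 1.0 / N
U = np.zeros(N + 1)                    # initial guess

for it in range(200):
    dU = np.diff(U) / h                # cellwise gradient U'
    a = 1.0 + chi(np.abs(dU))          # frozen coefficient 1 + chi(|U'|)
    # assemble the P1 stiffness matrix with the frozen coefficient
    A = np.zeros((N + 1, N + 1))
    for i in range(N):
        A[i:i+2, i:i+2] += (a[i] / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    b = np.full(N + 1, h)              # load vector for f = 1
    A[0, :] = A[N, :] = 0.0            # Dirichlet rows U(0) = U(1) = 0
    A[0, 0] = A[N, N] = 1.0
    b[0] = b[N] = 0.0
    U_new = np.linalg.solve(A, b)
    if np.max(np.abs(U_new - U)) < 1e-12:
        break
    U = U_new

print("Picard iterations:", it, " max(U) =", U.max())
\end{verbatim}
The iteration freezes the coefficient, solves the resulting linear problem, and repeats; this is one standard way to treat such monotone non-linearities in practice.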
\subsection{Landau-Lifshitz-Gilbert equation}
\label{subsection:llg}
Let $\alpha> 0$ denote a dimensionless empirical damping parameter, the so-called Gilbert damping constant, and let the magnetization of the ferromagnetic body $\Omega_1$ be characterized by the vector-valued function
\begin{equation*}
\boldsymbol{M}_1:\, (0,T)\times\Omega_1 \rightarrow \set{\boldsymbol{x}\in\mathbb{R}^3}{|\boldsymbol{x}|=M_s},
\end{equation*}
where the constant $M_s>0$ refers to the saturation magnetization $[A/m]$.
Then, the Landau-Lifshitz-Gilbert equation reads
\begin{subequations}\label{eq:llg:physics}
\begin{align}\label{eq:llg_phy}
\frac{\partial\boldsymbol{M}_1}{\partial t}
= -\frac{\gamma_0}{1+\alpha^2} \boldsymbol{M}_1\times\boldsymbol{H}_{\rm eff}
- \frac{\alpha\gamma_0}{(1+\alpha^2)M_s} \boldsymbol{M}_1\times(\boldsymbol{M}_1\times\boldsymbol{H}_{\rm eff}),
\end{align}
supplemented by initial and Neumann boundary conditions
\begin{align}
 \boldsymbol{M}_1(0) &= \boldsymbol{M}^0
 \quad\text{in }\Omega_1,\\
 \partial_{\boldsymbol{\nu}}\boldsymbol{M}_1 &= 0
 \hspace*{7.5mm}\text{on }(0,T)\times\partial\Omega_1.
\end{align}
\end{subequations}
Here, $\gamma_0 = 2.210173\cdot10^5$ $m/(As)$ denotes the gyromagnetic ratio, and $\boldsymbol{M}^0:\Omega_1\to\mathbb{R}^3$ with $|\boldsymbol{M}^0| = M_s$ in $\Omega_1$ is a given initial magnetization.
The effective field $\boldsymbol{H}_{\rm eff}$ in $[A/m]$ depends on $\boldsymbol{M}_1$ and the magnetic field strength $\boldsymbol{H}$, and is given as the negative first variation of the Gibbs free energy
\begin{equation*}
\mu_0\,\boldsymbol{H}_{\rm eff} = -\frac{\delta E(\boldsymbol{M}_1)}{\delta\boldsymbol{M}_1}.
\end{equation*}
In this work, the energy $E(\cdot)$ consists of exchange energy, anisotropy energy, and magnetostatic energy,
\begin{equation*}
 E(\boldsymbol{M}_1) = \frac{A}{M_s^2}\,\int_{\Omega_1} |\nabla\boldsymbol{M}_1|^2
 + K\,\int_{\Omega_1}\phi(\boldsymbol{M}_1/M_s)
 - \mu_0\int_{\Omega_1}\boldsymbol{H}\cdot\boldsymbol{M}_1.
\end{equation*}
The exchange constant $A>0$ $[J/m]$ and the anisotropy constant $K>0$ $[J/m^3]$ depend on the ferromagnetic material.
Moreover, $\phi$ refers to the crystalline anisotropy density.
The effective field is thus given by
\begin{equation*}
\boldsymbol{H}_{\rm eff} = \frac{2A}{\mu_0 M_s^2}\Delta \boldsymbol{M}_1 - \frac{K}{\mu_0 M_s}D\phi(\boldsymbol{M}_1/M_s) + \boldsymbol{H}.
\end{equation*}
Note that the microscopic LLG equation and the macroscopic Maxwell equations are coupled through the magnetic field strength $\boldsymbol{H}$ and hence through the effective field $\boldsymbol{H}_{\rm eff}$.
Altogether, we will thus solve the multiscale problem by solving LLG on $\Omega_1$ and incorporating the effects of $\Omega_2$ via this coupling.
\section{General LLG equation}
\label{section:general}
\noindent In this section, we consider the non-dimensional form of LLG with a quite general effective field $\boldsymbol{h}_{\rm eff}$ which covers the multiscale problem from the previous section.
We recall some equivalent formulations of LLG and then state our notion of a weak solution, which has been introduced by \textsc{Alouges} \& \textsc{Soyeur}, see Ref.~\cite{as}, for the small-particle limit $\boldsymbol{h}_{\rm eff} =\Delta \boldsymbol{m}$ and which is now extended to the present situation. We then formulate a linear-implicit time integrator in the spirit of Refs.~\cite{alouges08}, \cite{alouges11}, \cite{goldenits}, \cite{mathmod2012}, \cite{gamm2011}.
\subsection{Non-dimensional form of LLG}
\noindent We perform the substitution $t' = \gamma_0M_st$ with $t'$ being the so-called (non-dimensional) reduced time, and set $T' = \gamma_0M_sT$ for the scaled final time.
Moreover, we rescale the spatial variable $x'=x/L$ with $L$ being some characteristic length of the problem $[m]$, e.g., the intrinsic length scale $L = \sqrt{2A/(\mu_0M_s^2)}$. However, to simplify our notation, we stick with $t,T,x,\Omega_j$ instead of $t',T',x',\Omega_j/L$, respectively, and abbreviate the space-time cylinder $\Omega_t = [0,t]\times\Omega_1$ for all $0\le t\le T$.
We set $\boldsymbol{m} := \boldsymbol{M}_1/M_s$, $\boldsymbol{m}^0 := \boldsymbol{M}^0/M_s$, and $\boldsymbol{h}_{\rm eff}:=\boldsymbol{H}_{\rm eff}/M_s$.
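For orientation, the following small Python sketch evaluates the intrinsic length scale $L$ and the resulting scaling constants for permalloy-like material parameters; the numerical values are assumed for illustration only and are not used anywhere in the analysis.
\begin{verbatim}
import numpy as np

# Permalloy-like values (assumed for illustration only)
mu0    = 4.0e-7 * np.pi    # Tm/A,  permeability of vacuum
gamma0 = 2.210173e5        # m/(As), gyromagnetic ratio
A      = 1.3e-11           # J/m,   exchange constant
K      = 5.0e2             # J/m^3, anisotropy constant
Ms     = 8.0e5             # A/m,   saturation magnetization

L      = np.sqrt(2.0 * A / (mu0 * Ms**2))    # intrinsic length scale [m]
C_exch = 2.0 * A / (mu0 * Ms**2 * L**2)      # = 1 for this choice of L
C_ani  = K / (mu0 * Ms**2)

print(f"L = {L:.3e} m, C_exch = {C_exch:.3f}, C_ani = {C_ani:.3e}")
print(f"1 ns of physical time = {gamma0 * Ms * 1e-9:.1f} reduced time units")
\end{verbatim}
In particular, $L$ is of the order of a few nanometres, which reflects the mesh-size restriction discussed in the introduction.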
With these notations, the (sought) magnetization $\boldsymbol{m}:\Omega_T\to\set{x\in\mathbb{R}^3}{|x|=1}$ solves the non-dimensional form of LLG
\begin{subequations}\label{eq:llg}
\begin{align}\label{eq:llg1}
 \partial_t\boldsymbol{m}
 = - \frac{1}{1+\alpha^2}\,\boldsymbol{m}\times\boldsymbol{h}_{\rm eff}
 -\frac{\alpha}{1+\alpha^2}\,\boldsymbol{m}\times(\boldsymbol{m}\times\boldsymbol{h}_{\rm eff})
 \quad\text{in }\Omega_T,
\end{align}
supplemented by initial and Neumann boundary conditions
\begin{align}\label{eq:llg:bc1}
 \boldsymbol{m}(0) &= \boldsymbol{m}^0\quad\mbox{in }\Omega_1,\\
 \label{eq:llg:bc2}
 \partial_{\boldsymbol{\nu}}\boldsymbol{m} &=0 \quad\quad\mbox{on }(0,T)\times\partial\Omega_1.
\end{align}
\end{subequations}
The non-dimensional effective field reads
\begin{align*}
 \boldsymbol{h}_{\rm eff} =
 \frac{2A}{\mu_0 M_s^2L^2}\,\Delta\boldsymbol{m}
 - \frac{K}{\mu_0 M_s^2}\,D\phi(\boldsymbol{m})
 + \boldsymbol{f} - \nabla u_1 -\nabla u_2,
\end{align*}
where $u_1$ solves~\eqref{eq:u1} with $\boldsymbol{M}_1$ replaced by $\boldsymbol{m}$, and where $u_2$ solves~\eqref{eq:u:omega2} with, e.g., $\boldsymbol{F}$ replaced by $\boldsymbol{f}$, $\boldsymbol{H}_1$ replaced by $-\nabla u_1$, etc. For the non-linearity $\chi$, we introduce some $\widetilde \chi$ in the non-dimensional formulation. Details are elaborated in Section~\ref{sec:multi}.
\begin{remark}
Note that~\eqref{eq:llg1} implies $0 = \boldsymbol{m} \cdot \partial_t \boldsymbol{m} = \partial_t |\boldsymbol{m}|^2/2$, i.e., the time derivative $\partial_t \boldsymbol{m}$ belongs to the tangent space of $\boldsymbol{m}$. In particular, the modulus constraint $|\boldsymbol{m}| = 1$ in $\Omega_T$ also follows from the PDE formulation~\eqref{eq:llg1} and $|\boldsymbol{m}^0|=1$ in $\Omega_1$.
\end{remark}
\subsection{Notation and function spaces involved}
\noindent In this brief section, we collect the necessary notation as well as the relevant function spaces that will be used throughout.
By $L^2$, we denote the usual Lebesgue space of square integrable functions and by $H^1$ the Sobolev space of functions in $L^2$ that additionally admit a weak gradient in $L^2$.
For vector fields and corresponding spaces, we use bold symbols, e.g., for $\boldsymbol{f} \in \boldsymbol{L}^2(\Omega_1)$, we write
\begin{align*}
\norm{\boldsymbol{f}}{\boldsymbol{L}^2(\Omega_1)}^2 = \sum_{i=1}^3 \norm{f_i}{L^2(\Omega_1)}^2.
\end{align*}
For the space-time cylinder $\Omega_T=[0,T]\times\Omega_1$, we consider the function spaces $L^2(\boldsymbol{L}^2):=L^2\big([0,T], \boldsymbol{L}^2(\Omega_1)\big) = \boldsymbol{L}^2(\Omega_T)$, $L^2(\boldsymbol{H}^1):=L^2\big([0,T], \boldsymbol{H}^1(\Omega_1)\big)$, and $\boldsymbol{H}^1(\Omega_T)$, which are associated with the norms
\begin{align*}
\norm{\boldsymbol{f}}{L^2(\boldsymbol{L}^2)}^2 &:=\norm{\boldsymbol{f}}{\boldsymbol{L}^2(\Omega_T)}^2 = \int_0^T \norm{\boldsymbol{f}(t)}{\boldsymbol{L}^2(\Omega_1)}^2\, dt,\\
\norm{\boldsymbol{f}}{L^2(\boldsymbol{H}^1)}^2 &:=\norm{\boldsymbol{f}}{L^2([0,T], \boldsymbol{H}^1(\Omega_1))}^2 = \int_0^{T} \norm{\boldsymbol{f}(t)}{\boldsymbol{L}^2(\Omega_1)}^2 + \norm{\nabla \boldsymbol{f}(t)}{\boldsymbol{L}^2(\Omega_1)}^2\, dt,\\
\norm{\boldsymbol{f}}{\boldsymbol{H}^1(\Omega_T)}^2 &:= \int_0^{T} \norm{\boldsymbol{f}(t)}{\boldsymbol{L}^2(\Omega_1)}^2 + \norm{\nabla \boldsymbol{f}(t)}{\boldsymbol{L}^2(\Omega_1)}^2 + \norm{\partial_t \boldsymbol{f}(t)}{\boldsymbol{L}^2(\Omega_1)}^2\, dt,
\end{align*}
respectively.
Finally, for appropriate sets $\Sigma$, we denote by $\dual{\cdot}{\cdot}_\Sigma$ the scalar product of $\boldsymbol{L}^2(\Sigma)$.
The Euclidean scalar product of vectors $\boldsymbol{x},\boldsymbol{y} \in \mathbb{R}^3$ is denoted by $\boldsymbol{x} \cdot \boldsymbol{y}$.
In proofs, we use the symbol $\lesssim$ to abbreviate $\le$ up to some (hidden) multiplicative constant which is clear from the context and independent of the discretization parameters $h$ and $k$.
\subsection{Equivalent formulations of LLG and weak solution}
The dimensionless formulation of LLG that is usually referred to has already been stated in~\eqref{eq:llg}.
Supplemented by the same initial and boundary conditions~\eqref{eq:llg:bc1}--\eqref{eq:llg:bc2}, the equation can equivalently be stated as
\begin{align}\label{eq:form:alg}
\alpha\partial_t\boldsymbol{m} + \boldsymbol{m}\times\partial_t\boldsymbol{m}
= \boldsymbol{h}_{\rm eff} - \left(\boldsymbol{m}\cdot\boldsymbol{h}_{\rm eff}\right)\boldsymbol{m}
\end{align}
or
\begin{align}\label{eq:weaksol}
\partial_t\boldsymbol{m} - \alpha\boldsymbol{m}\times\partial_t\boldsymbol{m}
= \boldsymbol{h}_{\rm eff} \times \boldsymbol{m}.
\end{align}
In this work, \eqref{eq:form:alg} is exploited for the construction of our numerical scheme. For the notion of a weak solution, we use the so-called Gilbert formulation~\eqref{eq:weaksol}. A rigorous proof of the equivalence of the above equations can be found, e.g., in Ref.~\cite{goldenits}, Section~1.2.
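For the reader's convenience, we sketch one direction of this equivalence (a formal computation which assumes $|\boldsymbol{m}|=1$ pointwise): taking the cross product of~\eqref{eq:llg1} with $\boldsymbol{m}$ and using $\boldsymbol{m}\times(\boldsymbol{m}\times\boldsymbol{x}) = (\boldsymbol{m}\cdot\boldsymbol{x})\boldsymbol{m} - \boldsymbol{x}$ yields
\begin{align*}
\boldsymbol{m}\times\partial_t\boldsymbol{m}
= \frac{1}{1+\alpha^2}\,\big(\boldsymbol{h}_{\rm eff}-(\boldsymbol{m}\cdot\boldsymbol{h}_{\rm eff})\boldsymbol{m}\big)
+ \frac{\alpha}{1+\alpha^2}\,\boldsymbol{m}\times\boldsymbol{h}_{\rm eff},
\end{align*}
while multiplying~\eqref{eq:llg1} by $\alpha$ gives
\begin{align*}
\alpha\,\partial_t\boldsymbol{m}
= -\frac{\alpha}{1+\alpha^2}\,\boldsymbol{m}\times\boldsymbol{h}_{\rm eff}
+ \frac{\alpha^2}{1+\alpha^2}\,\big(\boldsymbol{h}_{\rm eff}-(\boldsymbol{m}\cdot\boldsymbol{h}_{\rm eff})\boldsymbol{m}\big).
\end{align*}
Adding both identities, the $\boldsymbol{m}\times\boldsymbol{h}_{\rm eff}$ terms cancel and the remaining terms sum up to $\boldsymbol{h}_{\rm eff}-(\boldsymbol{m}\cdot\boldsymbol{h}_{\rm eff})\boldsymbol{m}$, which is precisely~\eqref{eq:form:alg}.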
\par As far as numerical analysis is concerned, our integrator extends the one of Ref.~\cite{alouges08} from the small-particle limit with exchange energy only to the case under consideration.
Independently, the preceding works of Refs.~\cite{alouges11}, \cite{goldenits} generalized the approach of Ref.~\cite{alouges08} to an effective field which consists of exchange energy, stray field energy, uniaxial anisotropy, and exterior energy, where only the first term is dealt with implicitly, whereas the remaining lower-order terms are treated explicitly. In this work, we extend this approach to certain non-linear contributions of the effective field. For this purpose, we introduce a general contribution $\boldsymbol{\pi}:\boldsymbol{H}^1(\Omega_1)\times Y\to \boldsymbol{L}^2(\Omega_1)$ for some suitable Banach space $Y$, see Section~\ref{section:heff} for examples.
We now write $\boldsymbol{h}_{\rm eff}$ in the form
\begin{subequations}\label{se:multiscale}
\begin{align}\label{eq:field}
 \boldsymbol{h}_{\rm eff} = C_{\rm exch}\Delta\boldsymbol{m} - \boldsymbol{\pi}(\boldsymbol{m},\zeta) + \boldsymbol{f},
\end{align}
where $\zeta \in Y$. The exchange contribution and the exterior field $\boldsymbol{f}$ are explicitly given, while the stray field contribution, the material anisotropy, and the induced field from the macroscopic part are collected in the operator $\boldsymbol{\pi}$. Our analysis thus particularly includes the case
\begin{align}\label{eq:pi}
 \boldsymbol{\pi}\big(\boldsymbol{m}, \zeta\big)
 := \nabla u_1 + C_{\rm ani}\,D\phi(\boldsymbol{m}) +\nabla u_2,
\end{align}
but also holds true for general contributions $\boldsymbol{\pi}$ which only act on the spatial variable, as long as they fulfil the properties~\eqref{assumption:chi:bounded}--\eqref{assumption:chi:convergence} below. In~\eqref{eq:field}--\eqref{eq:pi}, the constants are given by
\begin{align}\label{eq:constants}
C_{\rm exch} := \frac{2A}{\mu_0 M_s^2L^2} \quad \text{ resp. } \quad C_{\rm ani} := \frac{K}{\mu_0 M_s^2}.
\end{align}
\end{subequations}
\begin{remark}
For the multiscale formulation~\eqref{se:multiscale}, we employ $Y=\boldsymbol{L}^2(\Omega_2)$ and $\zeta=\boldsymbol{f}$, since these data are required in~\eqref{eq:uext}--\eqref{eq:u:omega2}.
Details are given in Section~\ref{sec:multi} below.
For the classical contributions like the anisotropy field and the stray field, the operator $\boldsymbol{\pi}$ is independent of $\zeta$ and depends only on $\boldsymbol{m}$.
\end{remark}
\par With these preparations, our definition of a weak solution reads as follows:
\begin{definition}\label{def:weaksol}
Let $\boldsymbol{f}\in\boldsymbol{L}^2(\Omega_1)$, $\zeta \in Y$, and $\boldsymbol{m}^0\in\boldsymbol{H}^1(\Omega_1)$ with $|\boldsymbol{m}^0|=1$ in $\Omega_1$.
A function $\boldsymbol{m}$ is called a \emph{weak solution} to LLG in $\Omega_T$, if
\begin{itemize}
\item[(i)] $\boldsymbol{m} \in \boldsymbol{H}^1(\Omega_T)$ with $|\boldsymbol{m}| = 1$ in $\Omega_T$ and $\boldsymbol{m}(0)=\boldsymbol{m}^0$ in the sense of traces;
\item[(ii)] for all $\boldsymbol\phi \in C^\infty(\overline\Omega_T)$, we have
\begin{align}\label{eq:weaksol_def}
&\dual{\partial_t\boldsymbol{m}}{\boldsymbol\phi}_{\Omega_T}
-\alpha\,\dual{\boldsymbol{m} \times \partial_t\boldsymbol{m}}{\boldsymbol\phi}_{\Omega_T}\\
&= -C_{\rm exch}\,\dual{\nabla \boldsymbol{m} \times \boldsymbol{m}}{\nabla \boldsymbol\phi}_{\Omega_T}
- \dual{\boldsymbol{\pi}(\boldsymbol{m}, \zeta)\times \boldsymbol{m}}{\boldsymbol\phi}_{\Omega_T}
+ \dual{\boldsymbol{f}\times \boldsymbol{m}}{\boldsymbol\phi}_{\Omega_T};\nonumber
 \end{align}
\item[(iii)] for almost all $t \in (0, T)$, we have
\begin{align}\label{eq:weaksol:energy}
\norm{\nabla \boldsymbol{m} (t)}{\boldsymbol{L}^2(\Omega_1)}^2 + \norm{\partial_t\boldsymbol{m}}{\boldsymbol{L}^2(\Omega_t)}^2 \le C
\end{align}
for some constant $C>0$ which depends only on $\boldsymbol{m}^0$ and $\boldsymbol{f}$.
\end{itemize}
\end{definition}
The existence (and non-uniqueness) of weak solutions was first shown in Ref.~\cite{as} for the small-particle limit, where $\boldsymbol{\pi}$ and $\boldsymbol{f}$ are omitted. We stress, however, that our convergence proof is constructive in the sense that the analysis does not only show convergence towards, but also the existence of, weak solutions without any assumptions on the smoothness of the quantities involved.
\begin{remark}
Under certain assumptions on $\boldsymbol{\pi}$, the energy estimate~\eqref{eq:weaksol:energy} can be improved. We refer to Proposition~\ref{lem:energy:improved} in the appendix.
\end{remark}
\subsection{Linear-implicit integrator}
We discretize the magnetization $\boldsymbol{m}$ and its time derivative $\boldsymbol{v} = \partial_t\boldsymbol{m}$ in space by lowest-order Courant finite elements
\begin{equation*}
 \boldsymbol{\mathcal{V}}_h :=
 \set{\boldsymbol{n}_h:\overline\Omega_1\to\mathbb{R}^3\text{ continuous}}{\boldsymbol{n}_h|_T\text{ affine for all }T\in \mathcal{T}_h^{\Omega_1}},
\end{equation*}
where $\mathcal{T}_h^{\Omega_1}$ is a quasi-uniform and conforming triangulation of $\Omega_1$ into tetrahedra $T\in\mathcal{T}_h^{\Omega_1}$ with mesh-size $h\simeq{\rm diam}(T)$. Let $\mathcal{N}_h$ denote the set of nodes of $\mathcal{T}_h^{\Omega_1}$.
For fixed time $t_j$, the discrete magnetization is sought in the set
\begin{align*}
 \boldsymbol{m}(t_j) \approx \boldsymbol{m}_h^j \in
 \boldsymbol{\mathcal{M}}_h := \set{\boldsymbol{n}_h\in\boldsymbol{\mathcal{V}}_h}{|\boldsymbol{n}_h(z)|=1\text{ for all }z\in\mathcal{N}_h},
\end{align*}
whereas the discrete time derivative is sought in the discrete tangent space
\begin{align*}
 \boldsymbol{v}(t_j) \approx \boldsymbol{v}_h^j \in
 \boldsymbol{\mathcal{K}}_{\boldsymbol{m}_h^j} := \set{\boldsymbol{n}_h\in\boldsymbol{\mathcal{V}}_h}{\boldsymbol{n}_h(z)\cdot\boldsymbol{m}_h^j(z)=0\text{ for all }z\in\mathcal{N}_h}.
\end{align*}
For the time discretization, we impose a uniform partition $\mathcal{I}_k$ of the time interval $[0,T]$ with time step-size $k=T/N$ and time steps $t_j = jk$, $j=0,\dots,N$.
\par Let $\boldsymbol{\pi}_h$ be a numerical realization of $\boldsymbol{\pi}$ which maps $\boldsymbol{m}(t_j)\approx\boldsymbol{m}_h^j\in\boldsymbol{\mathcal{M}}_h$ and $\zeta(t_j) \approx \zeta_h^j\in Y$ to some $\boldsymbol{\pi}_h(\boldsymbol{m}_h^j, \zeta_h^j)\in\boldsymbol{L}^2(\Omega_1)$.
Finally, let $\boldsymbol{f}_h^j\in\boldsymbol{L}^2(\Omega_1)$ be an approximation of $\boldsymbol{f}(t_j)$ specified below. Then, our numerical time integrator reads as follows:
\begin{algorithm}\label{algorithm}
Input: Initial datum $\boldsymbol{m}_h^0\in\boldsymbol{\mathcal{M}}_h$, parameters $\alpha>0$ and $0 \leq\theta\leq 1$, data $\left\{\zeta_h^i\right\}_{i=0,\dots,N-1}$.
Then, for all $i=0,\dots,N-1$ iterate:
\begin{itemize}
\item[(i)] Compute $\boldsymbol{v}_h^i\in\boldsymbol{\mathcal{K}}_{\boldsymbol{m}_h^i}$ such that for all $\boldsymbol\psi_h\in\boldsymbol{\mathcal{K}}_{\boldsymbol{m}_h^i}$, it holds
\begin{align}\label{eq:alg}
 &\alpha\dual{\boldsymbol{v}_h^i}{\boldsymbol\psi_h}_{\Omega_1}
 + C_{\rm exch} k \theta \,\dual{\nabla\boldsymbol{v}_h^i}{\nabla\boldsymbol\psi_h}_{\Omega_1}
 + \dual{\boldsymbol{m}_h^i\times\boldsymbol{v}_h^i}{\boldsymbol\psi_h}_{\Omega_1}
 \\&\quad
 = -C_{\rm exch} \dual{\nabla\boldsymbol{m}_h^i}{\nabla \boldsymbol\psi_h}_{\Omega_1}
 - \dual{\boldsymbol{\pi}_h(\boldsymbol{m}_h^i,\zeta_h^i)}{\boldsymbol\psi_h}_{\Omega_1}
 + \dual{\boldsymbol{f}_h^i}{\boldsymbol\psi_h}_{\Omega_1}.
\nonumber
\end{align}
\item[(ii)] Define $\boldsymbol{m}_h^{i+1}\in\boldsymbol{\mathcal{M}}_h$ by
$\boldsymbol{m}_h^{i+1}(z) = \displaystyle\frac{\boldsymbol{m}_h^i(z) + k \boldsymbol{v}_h^i(z)}{|\boldsymbol{m}_h^i(z) + k \boldsymbol{v}_h^i(z)|}$ for all nodes $z\in\mathcal{N}_h$.
\end{itemize}
Output: Discrete time derivatives $\boldsymbol{v}_h^i$ and magnetizations $\boldsymbol{m}_h^{i+1}$ for $i=0,\dots,N-1$.
\end{algorithm}
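The following self-contained Python sketch implements one step of Algorithm~\ref{algorithm} for the small-particle limit $\boldsymbol{\pi}_h\equiv0$, $\boldsymbol{f}_h^i\equiv0$ on a one-dimensional toy mesh. The 1D setting, the mass lumping, and all parameter values are simplifications made purely for illustration; the actual scheme uses tetrahedral P1 elements and exact $\boldsymbol{L}^2$ inner products.
\begin{verbatim}
import numpy as np

def tangent_basis(m):
    # orthonormal basis {t1, t2} of the tangent plane at each unit vector m[i]
    a = np.where(np.abs(m[:, :1]) < 0.9, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
    t1 = np.cross(m, a)
    t1 /= np.linalg.norm(t1, axis=1, keepdims=True)
    return t1, np.cross(m, t1)       # t2 = m x t1, hence m x t2 = -t1

def llg_step(m, k, alpha=1.0, Cexch=1.0, theta=0.55):
    # one step for h_eff = Cexch * Laplace(m); P1 elements on [0,1]
    N = m.shape[0]; h = 1.0 / (N - 1)
    S = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h  # stiffness
    S[0, 0] = S[-1, -1] = 1.0 / h                                 # Neumann ends
    w = np.full(N, h); w[0] = w[-1] = h / 2.0                     # lumped mass
    t1, t2 = tangent_basis(m); T = (t1, t2)
    # step (i): solve for v_h = sum_i (x[i] t1_i + x[N+i] t2_i) phi_i
    A = np.zeros((2 * N, 2 * N)); b = np.zeros(2 * N)
    for p in range(2):
        for q in range(2):
            A[p*N:(p+1)*N, q*N:(q+1)*N] = Cexch * k * theta * S * (T[p] @ T[q].T)
        A[p*N:(p+1)*N, p*N:(p+1)*N] += np.diag(alpha * w)
        b[p*N:(p+1)*N] = -Cexch * np.einsum('ij,ij->i', S @ m, T[p])
    A[N:, :N] += np.diag(w)   # <m x v, psi>, lumped: test t2, trial t1 -> +w
    A[:N, N:] -= np.diag(w)   #                       test t1, trial t2 -> -w
    x = np.linalg.solve(A, b)
    v = x[:N, None] * t1 + x[N:, None] * t2
    mp = m + k * v            # step (ii): nodewise projection to the sphere
    return mp / np.linalg.norm(mp, axis=1, keepdims=True)

# usage: the discrete exchange energy decreases monotonically for theta > 1/2
N, k = 33, 1e-3
s = np.linspace(0.0, np.pi / 2.0, N)
m = np.stack([np.cos(s), np.sin(s), np.zeros(N)], axis=1)
S = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) * (N - 1.0)
S[0, 0] = S[-1, -1] = N - 1.0
for step in range(5):
    print(np.einsum('ij,ij->', m, S @ m))   # discrete exchange energy
    m = llg_step(m, k)
\end{verbatim}
The nodewise orthonormal tangent bases reduce the constrained problem to an unconstrained linear system with two unknowns per node; this mirrors how the discrete tangent space $\boldsymbol{\mathcal{K}}_{\boldsymbol{m}_h^i}$ is parametrized in practice.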
The input as well as the output of Algorithm~\ref{algorithm} consist of discrete-in-time values $\gamma_h^i$, e.g., $\gamma_h^i\in\{\boldsymbol{m}_h^i,\boldsymbol{v}_h^i\}\subseteq\boldsymbol{\mathcal{V}}_h$.
By~\eqref{eq:timeapprox}, we define interpolations in time, namely continuous and piecewise affine in time (denoted by $\mathcal{S}^1$) resp.\ piecewise constant in time (denoted by $\mathcal{P}^0$): For $t_i \le t < t_{i+1}$, the functions $\gamma_{hk}\in\mathcal{S}^1(\mathcal{I}_k;\boldsymbol{\mathcal{V}}_h)\subset\boldsymbol{H}^1(\Omega_T)$ and $\gamma_{hk}^-\in\mathcal{P}^0(\mathcal{I}_k; \boldsymbol{\mathcal{V}}_h)\subset L^2(\boldsymbol{H}^1)$ are defined by
\begin{subequations} \label{eq:timeapprox}
\begin{align}
\gamma_{hk}(t) &:= \frac{t-ik}{k}\,\gamma_h^{i+1} + \frac{(i+1)k-t}{k}\,\gamma_h^i,
\label{eq:mhk}\\
\gamma_{hk}^-(t) &:= \gamma_h^i.
\label{eq:mhk-}
\end{align}
\end{subequations}
We note that $\partial_t\gamma_{hk}=(\gamma_h^{i+1}-\gamma_h^i)/k$ on $(t_i,t_{i+1})$.
The same notation is used for $\boldsymbol{f}_{hk}^-\in\mathcal{P}^0(\mathcal{I}_k;\boldsymbol{L}^2(\Omega_1))$ and $\zeta_{hk}^-\in\mathcal{P}^0(\mathcal{I}_k;Y)$.
\begin{lemma}
Algorithm~\ref{algorithm} is well-defined, and it holds $\norm{\boldsymbol{m}_{hk}}{\boldsymbol{L}^\infty(\Omega_T)} = \norm{\boldsymbol{m}_{hk}^-}{\boldsymbol{L}^\infty(\Omega_T)} = 1$.
\end{lemma}
\begin{proof}
Problem~\eqref{eq:alg} is a linear problem on a finite dimensional space.
Therefore, existence and uniqueness of $\boldsymbol{v}_h^i\in\boldsymbol{\mathcal{K}}_{\boldsymbol{m}_h^i}$ follow from the fact that the corresponding bilinear form is positive definite.
By definition of the discrete tangent space $\boldsymbol{\mathcal{K}}_{\boldsymbol{m}_h^i}$, it holds $|\boldsymbol{m}_h^i + k\boldsymbol{v}_h^i|^2 = 1 + k^2\,|\boldsymbol{v}_h^i|^2\ge1$ nodewise.
Therefore, Step~(ii) in Algorithm~\ref{algorithm} is well-defined.
By use of barycentric coordinates, an elementary calculation finally proves the pointwise estimates $|\boldsymbol{m}_{hk}^-|\le1$ as well as $|\boldsymbol{m}_{hk}| \le 1$, see, e.g., Ref.~\cite{alouges08}.
\end{proof}
By definition of $\boldsymbol{m}_h^{i+1}$ in Step~(ii) of Algorithm~\ref{algorithm}, the following two auxiliary results follow from elementary geometric considerations (see Refs.~\cite{alouges08}, \cite{alouges11}, \cite{goldenits}).
\begin{lemma}\label{lemma1:aux}
For all $i=0,\dots,N-1$, it holds nodewise $|\boldsymbol{m}_h^{i+1}-\boldsymbol{m}_h^i| \le k\,|\boldsymbol{v}_h^i|$.\hfill\qed
\end{lemma}
\begin{lemma}\label{lemma2:aux}
For all $i=0,\dots,N-1$, it holds nodewise $|\boldsymbol{m}_h^{i+1}-\boldsymbol{m}_h^i-k\boldsymbol{v}_h^i| \le \frac12\,k^2\,|\boldsymbol{v}_h^i|^2$.\hfill\qed
\end{lemma}
These nodal estimates shall be used together with the following elementary lemma which follows from standard scaling arguments.
\begin{lemma}\label{lemma3:aux}
For any discrete function ${\bf w}_h\in\boldsymbol{\mathcal{V}}_h$ and all $1\le p<\infty$, it holds
\begin{align*}
 \c{shape}^{-1}\,\norm{{\bf w}_h}{\boldsymbol{L}^p(\Omega_1)}^p
 \le h^3\sum_{z\in\mathcal{N}_h}|{\bf w}_h(z)|^p
 \le \c{shape}\,\norm{{\bf w}_h}{\boldsymbol{L}^p(\Omega_1)}^p.
\end{align*}
The constant $\setc{shape}>0$ depends only on $p$ and the shape of the elements in $\mathcal{T}_h^{\Omega_1}$.\hfill\qed
\end{lemma}
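The assertions of Lemma~\ref{lemma1:aux} and Lemma~\ref{lemma2:aux} are purely geometric and easy to check numerically; the following small Python fragment (illustrative only) samples random nodal values $\boldsymbol{m}_h^i(z)$ and tangent updates $\boldsymbol{v}_h^i(z)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = rng.normal(size=(100000, 3))
m /= np.linalg.norm(m, axis=1, keepdims=True)        # |m(z)| = 1
v = rng.normal(size=m.shape)
v -= np.einsum('ij,ij->i', m, v)[:, None] * m        # v(z) . m(z) = 0
k = 0.1
mp = (m + k * v) / np.linalg.norm(m + k * v, axis=1, keepdims=True)

kv = k * np.linalg.norm(v, axis=1)
print(np.all(np.linalg.norm(mp - m, axis=1) <= kv + 1e-12))
print(np.all(np.linalg.norm(mp - m - k * v, axis=1) <= 0.5 * kv**2 + 1e-12))
\end{verbatim}
Both checks print \texttt{True}, in accordance with the two nodewise estimates.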
\subsection{Main theorem}
The following theorem is the main result of this work.
It states convergence of the numerical integrator (at least for a subsequence) towards a weak solution of the general LLG equation.
Afterwards, we will show that the operator $\boldsymbol{\pi}$ and its discretization $\boldsymbol{\pi}_h$ of the multiscale LLG equation satisfy the general assumptions posed.
In particular, the concrete problem is thus covered by the general approach.
\begin{theorem}\label{theorem}
\textbf{(a)}
Let $1/2<\theta\le1$ and suppose that the spatial meshes $\mathcal{T}_h^{\Omega_1}$ are uniformly shape regular and satisfy the angle condition
\begin{align}\label{assumption:mesh}
 \dual{\nabla\eta_i}{\nabla\eta_j}_{\Omega_1}
 \le0
 \quad\text{for all nodal hat functions }
 \eta_i,\eta_j\in\mathcal{S}^1(\mathcal{T}_h^{\Omega_1})
 \text{ with }i\neq j.
\end{align}
We suppose that
\begin{align}\label{assumption:f}
 \boldsymbol{f}_{hk}^- \rightharpoonup \boldsymbol{f}\text{ weakly in }\boldsymbol{L}^2(\Omega_T)
\end{align}
as well as
\begin{align}\label{assumption:m0}
 \boldsymbol{m}_h^0 \rightharpoonup \boldsymbol{m}^0\text{ weakly in }\boldsymbol{H}^1(\Omega_1).
\end{align}
Moreover, we suppose that the spatial discretization $\boldsymbol{\pi}_h$ of $\boldsymbol{\pi}$ satisfies
\begin{align}\label{assumption:chi:bounded}
 \norm{\boldsymbol{\pi}_h(\boldsymbol{n}, y)}{\boldsymbol{L}^2(\Omega_1)}
 \le \c{bounded}\,(1+\norm{\nabla\boldsymbol{n}}{\boldsymbol{L}^2(\Omega_1)})
\end{align}
for all $h,k>0$, all $\boldsymbol{n} \in \boldsymbol{H}^1(\Omega_1)$ with $|\boldsymbol{n}| \le 1$, and all $y \in Y$ with $\norm{y}{Y} \le \c{boundedy}$ for some $y$-independent constant $\c{boundedy}>0$.
Here, $\setc{bounded}>0$ denotes a constant that is independent of $h,k, \boldsymbol{n}$, and $y$, but may depend on $\c{boundedy}$ and $\Omega_1$. We further assume $\norm{\zeta_h^j}{Y} \le \setc{boundedy}$ for all $j = 0, \hdots, N-1$. Under these assumptions, Algorithm~\ref{algorithm} yields strong $\boldsymbol{L}^2(\Omega_T)$-convergence of some subsequence of $\boldsymbol{m}_{hk}^-$ as well as weak $\boldsymbol{H}^1(\Omega_T)$-convergence of some subsequence of $\boldsymbol{m}_{hk}$ towards the same limit $\boldsymbol{m}\in \boldsymbol{H}^1(\Omega_T)$, which additionally satisfies $\boldsymbol{m}\in L^\infty(\boldsymbol{H}^1)$ with $|\boldsymbol{m}|=1$ in $\Omega_T$.\\
\par \noindent
\textbf{(b)}
In addition to the above, we suppose
\begin{align}\label{assumption:chi:convergence}
 \boldsymbol{\pi}_h(\boldsymbol{m}_{hk}^-, \zeta_{hk}^-) \rightharpoonup \boldsymbol{\pi}(\boldsymbol{m}, \zeta)
 \quad\text{weakly in }\boldsymbol{L}^2(\Omega_T) \text{ for some subsequence}.
\end{align}
Then, the limit $\boldsymbol{m}\in\boldsymbol{H}^1(\Omega_T)$ from (a) is a weak solution of general LLG in the sense of Definition~\ref{def:weaksol}.
\end{theorem}
\begin{remark} \label{rem:f}
{\rm(i)} Suppose that the applied exterior field is continuous in time, i.e., $\boldsymbol{f}\in C([0,T];\boldsymbol{L}^2(\Omega_1))$. Let $\boldsymbol{f}_h^j = \boldsymbol{f}(t_j)$ denote the evaluation of $\boldsymbol{f}$ at time $t_j$. Then, assumption~\eqref{assumption:f} is satisfied, since $\boldsymbol{f}_{hk}^-\to\boldsymbol{f}$ strongly in $L^\infty(\boldsymbol{L}^2)$.\\
{\rm(ii)} Suppose that the applied exterior field is continuous in space-time, i.e., $\boldsymbol{f}\in C(\overline\Omega_T)$. Let $\boldsymbol{f}_h^j$ denote the nodal interpolant of $\boldsymbol{f}(t_j)\in C(\overline\Omega_1)$ in space. Then, assumption~\eqref{assumption:f} is satisfied, since $\boldsymbol{f}_{hk}^-\to\boldsymbol{f}$ strongly in $\boldsymbol{L}^\infty(\Omega_T)$.\\
{\rm(iii)} Suppose that $\zeta$ is continuous in time, i.e., $\zeta \in C([0,T], Y)$, and let $\zeta_h^j = \zeta(t_j)$ denote the evaluation of $\zeta$ at time $t_j$. Then, we have $\zeta_{hk}^- \to \zeta$ strongly in $L^\infty(Y)$ and $\norm{\zeta_h^j}{Y} \le \sup_{t \in [0,T]} \norm{\zeta(t)}{Y}$.
\end{remark}
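To illustrate case {\rm(i)} of Remark~\ref{rem:f}, the following Python fragment approximates the $L^2$-error of piecewise-constant-in-time sampling for a scalar toy field (an assumed smooth function, for illustration only) and exhibits the expected first-order decay in $k$:
\begin{verbatim}
import numpy as np

f = lambda t: np.sin(2.0 * np.pi * t)            # assumed smooth toy field, T = 1
t = np.linspace(0.0, 1.0, 100001)
for N in (10, 100, 1000):
    k = 1.0 / N
    fk = f(np.floor(t / k) * k)                  # f_k^-(t) = f(t_j), t_j <= t < t_{j+1}
    print(N, np.sqrt(np.mean((f(t) - fk) ** 2)))  # L2(0,1) error, decays like O(k)
\end{verbatim}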
\begin{remark}
The angle condition~\eqref{assumption:mesh} is a technical ingredient for the convergence analysis. It is automatically fulfilled for tetrahedral meshes with dihedral angles that are smaller than $\pi/2$. If the condition is satisfied by the initial mesh $\mathcal{T}_0$, it can be preserved by the mesh-refinement strategy (see, e.g., Ref.~\cite{verfuerth}, Section~4.1).
\end{remark}
The remainder of this section consists of the proof of Theorem~\ref{theorem}, which is roughly split into three steps:
\begin{itemize}
\item[(i)] Boundedness of the discrete quantities and energies.
\item[(ii)] Existence of weakly convergent subsequences.
\item[(iii)] Identification of the limits with weak solutions of LLG.
\end{itemize}
\begin{lemma}\label{lem:energy}
For all $j=0,\dots,N$, the discrete quantities $\boldsymbol{m}_h^j$ and $\left\{\boldsymbol{v}_h^i\right\}_{i=0,\dots,j-1}$ satisfy
\begin{align}\label{eq:energy_discrete}
\begin{split}
\norm{\nabla \boldsymbol{m}_h^j}{\boldsymbol{L}^2(\Omega_1)}^2 &+
k\sum_{i=0}^{j-1}\norm{\boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
+ (\theta - 1/2)k^2
\sum_{i=0}^{j-1} \norm{\nabla \boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
\le\c{energy}.
\end{split}
\end{align}
The constant $\setc{energy} > 0$ depends only on $\boldsymbol{f}$, $\boldsymbol{m}^0$, and the final time $T$, but is independent of $h$ and $k$.
\end{lemma}
\begin{proof}
In~\eqref{eq:alg}, we use the test function $\boldsymbol\psi_h = \boldsymbol{v}_h^i \in \boldsymbol{\mathcal{K}}_{\boldsymbol{m}_h^i}$ and observe $\dual{\boldsymbol{m}_h^i\times\boldsymbol{v}_h^i}{\boldsymbol{v}_h^i}_{\Omega_1}=0$ to get
\begin{align*}
\alpha \norm{\boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 + C_{\rm exch} \theta\,k \norm{\nabla \boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
= &-C_{\rm exch}\dual{\nabla \boldsymbol{m}_h^i}{\nabla \boldsymbol{v}_h^i}_{\Omega_1}
+ \dual{\boldsymbol{f}_h^i}{\boldsymbol{v}_h^i}_{\Omega_1} \\
&- \dual{\boldsymbol{\pi}_h(\boldsymbol{m}_h^i,\zeta_h^i)}{\boldsymbol{v}_h^i}_{\Omega_1}.
\end{align*}
The angle condition~\eqref{assumption:mesh} ensures $\norm{\nabla\boldsymbol{m}_h^{i+1}}{\boldsymbol{L}^2(\Omega_1)}^2 \le \norm{\nabla (\boldsymbol{m}_h^i + k \boldsymbol{v}_h^i)}{\boldsymbol{L}^2(\Omega_1)}^2$, see Refs.~\cite{alouges08}, \cite{alouges11}, \cite{goldenits}.
We thus get
\begin{align}
\frac12\norm{\nabla \boldsymbol{m}_h^{i+1}}{\boldsymbol{L}^2(\Omega_1)}^2 &\le \frac12 \norm{\nabla \boldsymbol{m}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 +
k\dual{\nabla\boldsymbol{m}_h^i}{\nabla\boldsymbol{v}_h^i}_{\Omega_1}
+ \frac{k^2}{2} \norm{\nabla \boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 \nonumber\\
& \leq \frac12 \norm{\nabla \boldsymbol{m}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 - (\theta - 1/2)k^2\norm{\nabla \boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 \label{eq:nabla_m_bounded} \\
&\quad - \frac{\alpha\, k}{C_{\rm exch}}\norm{\boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
+ \frac{k}{C_{\rm exch}}\dual{\boldsymbol{f}_h^i}{\boldsymbol{v}_h^i}_{\Omega_1}
- \frac{k}{C_{\rm exch}}\dual{\boldsymbol{\pi}_h(\boldsymbol{m}_h^i,\zeta_h^i)}{\boldsymbol{v}_h^i}_{\Omega_1}.
\nonumber
\end{align}
Next, we sum up over $i = 0, \hdots, j-1$ to see
\begin{align*}
\frac12 \norm{\nabla \boldsymbol{m}_h^j}{\boldsymbol{L}^2(\Omega_1)}^2 \le &\frac12\norm{\nabla \boldsymbol{m}_h^0}{\boldsymbol{L}^2(\Omega_1)}^2 - (\theta - 1/2)k^2 \sum_{i=0}^{j-1} \norm{\nabla \boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 \\
&- \frac{\alpha k}{C_{\rm exch}}\sum_{i=0}^{j-1}\norm{\boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
+ \frac{k}{C_{\rm exch}}\sum_{i=0}^{j-1}\big(\dual{\boldsymbol{f}_h^i}{\boldsymbol{v}_h^i}_{\Omega_1}
- \dual{\boldsymbol{\pi}_h(\boldsymbol{m}_h^i, \zeta_h^i)}{\boldsymbol{v}_h^i}_{\Omega_1}\big).
\end{align*}
Using the inequalities of Young and H\"older, this can be further estimated by
\begin{align*}
\frac12 &\norm{\nabla \boldsymbol{m}_h^j}{\boldsymbol{L}^2(\Omega_1)}^2
+ \frac{k}{C_{\rm exch}}(\alpha - \varepsilon)\sum_{i=0}^{j-1}\norm{\boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
+ (\theta - 1/2)k^2 \sum_{i=0}^{j-1}\norm{\nabla \boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
\\
&\le \frac12\norm{\nabla\boldsymbol{m}_h^0}{\boldsymbol{L}^2(\Omega_1)}^2
 + \frac{k}{2 C_{\rm exch} \varepsilon}\sum_{i=0}^{j-1}\big(\norm{\boldsymbol{f}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 + \norm{\boldsymbol{\pi}_h(\boldsymbol{m}_h^i, \zeta_h^i)}{\boldsymbol{L}^2(\Omega_1)}^2\big)
\end{align*}
for any $\varepsilon > 0$.
With the boundedness~\eqref{assumption:chi:bounded} of $\boldsymbol{\pi}_h$, the last sum is estimated by
\begin{align*}
 k \sum_{i=0}^{j-1}\big(\norm{\boldsymbol{f}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2 + \norm{\boldsymbol{\pi}_h(\boldsymbol{m}_h^i, \zeta_h^i)}{\boldsymbol{L}^2(\Omega_1)}^2\big)
 &\lesssim \norm{\boldsymbol{f}_{hk}^-}{\boldsymbol{L}^2(\Omega_T)}^2
 + k\sum_{i=0}^{j-1}\big(1+\norm{\nabla\boldsymbol{m}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2\big)
 \\
 &\lesssim \norm{\boldsymbol{f}_{hk}^-}{\boldsymbol{L}^2(\Omega_T)}^2
 + T + k\,\sum_{i=0}^{j-1}\norm{\nabla\boldsymbol{m}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2.
\end{align*}
Choosing $\varepsilon<\alpha$, we altogether obtain
\begin{align*}
&\norm{\nabla \boldsymbol{m}_h^j}{\boldsymbol{L}^2(\Omega_1)}^2
+ k\,\sum_{i=0}^{j-1}\norm{\boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
+ (\theta - 1/2)k^2 \sum_{i=0}^{j-1}\norm{\nabla \boldsymbol{v}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2
\\
&\qquad\lesssim \norm{\boldsymbol{f}_{hk}^-}{\boldsymbol{L}^2(\Omega_T)}^2
 + T + k\,\sum_{i=0}^{j-1}\norm{\nabla\boldsymbol{m}_h^i}{\boldsymbol{L}^2(\Omega_1)}^2.
\end{align*}
According to the weak convergences~\eqref{assumption:f}--\eqref{assumption:m0}, there holds uniform boundedness $\norm{\boldsymbol{f}_{hk}^-}{\boldsymbol{L}^2(\Omega_T)}^2 + \norm{\nabla\boldsymbol{m}_h^0}{\boldsymbol{L}^2(\Omega_1)}^2 \le C$.
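For the last step, let us recall the discrete Gronwall lemma in the standard form employed here: if $a_j, A, b \ge 0$ satisfy $a_j \le A + bk\sum_{i=0}^{j-1} a_i$ for all $j=0,\dots,N$, then
\begin{align*}
 a_j \le A\,(1+bk)^j \le A\exp(bjk) \le A\exp(bT)
 \quad\text{for all } j=0,\dots,N.
\end{align*}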
Consequently, the discrete\nGronwall lemma \\revision{(see, e.g., Ref.~\\cite{thomee}, Lemma 10.5)} applies and concludes the proof.}\n\\end{proof}\n\\revision{As a consequence of the energy estimate~\\eqref{eq:energy_discrete}, \nwe obtain uniform boundedness of the discrete quantities.}\n\\revision{\\begin{lemma}\\label{lemma:dpr}\nFor $1\/2\\le\\theta\\le1$, it holds\n\\begin{align}\n \\begin{split}\\label{dpr:boundedness}\n &\\norm{\\boldsymbol{m}_{hk}^-}{L^\\infty(\\boldsymbol{H}^1)}\n + \\norm{\\boldsymbol{m}_{hk}}{L^\\infty(\\boldsymbol{H}^1)}\n + \\norm{\\partial_t\\boldsymbol{m}_{hk}}{\\boldsymbol{L}^2(\\Omega_T)}\\\\\n &\\qquad+ \\norm{\\boldsymbol{v}_{hk}^-}{\\boldsymbol{L}^2(\\Omega_T)}\n + \\sqrt{(\\theta-1\/2)k}\\,\\norm{\\nabla\\boldsymbol{v}_{hk}^-}{\\boldsymbol{L}^2(\\Omega_T)}\n \\le \\c{dpr},\n \\end{split}\n\\end{align}\nwhere $\\setc{dpr}>0$ does not depend on $h$ or $k$.\n\\end{lemma}}\n\\revision{\\begin{proof}\nEstimate~\\eqref{eq:energy_discrete} reveals\n\\begin{align*}\n \\max_{j=0,\\dots,N}\\norm{\\nabla\\boldsymbol{m}_h^j}{\\boldsymbol{L}^2(\\Omega_1)}^2\n + \n \\norm{\\boldsymbol{v}_{hk}^-}{\\boldsymbol{L}^2(\\Omega_T)}^2\n + (\\theta-1\/2)k\\,\\norm{\\nabla\\boldsymbol{v}_{hk}^-}{\\boldsymbol{L}^2(\\Omega_T)}^2\n\n \\lesssim\n \\c{energy}.\n\\end{align*}\nClearly, it holds \n\\begin{align*}\n \\norm{\\nabla\\boldsymbol{m}_{hk}}{L^\\infty(\\boldsymbol{L}^2)}^2\n + \\norm{\\nabla\\boldsymbol{m}_{hk}^-}{L^\\infty(\\boldsymbol{L}^2)}^2 \n \\lesssim \n \\max_{j=0,\\dots,N}\\norm{\\nabla\\boldsymbol{m}_h^j}{\\boldsymbol{L}^2(\\Omega_1)}^2.\n\\end{align*}\nTogether with $\\norm{\\boldsymbol{m}_{hk}}{\\boldsymbol{L}^\\infty(\\Omega_T)}=1=\\norm{\\boldsymbol{m}_{hk}^-}{\\boldsymbol{L}^\\infty(\\Omega_T)}$, this bounds the $L^\\infty(\\boldsymbol{H}^1)$-norms of $\\boldsymbol{m}_{hk}$\nand $\\boldsymbol{m}_{hk}^-$.\nFor $t_j\\le t1\/2$.}\n\n\\revision{\\noindent{\\bf Verification of Definition~\\ref{def:weaksol} (ii).}}\nLet $\\boldsymbol\\phi \\in C^\\infty(\\revision{\\overline\\Omega_T})$ be arbitrary. We define test functions by \\revision{$\\boldsymbol\\psi_h:=\\mathcal{I}_h(\\boldsymbol{m}_{hk}^-\\times \\boldsymbol\\phi)$,\nwhere $\\mathcal{I}_h:C(\\overline\\Omega)\\to\\boldsymbol{\\mathcal{V}}_h$ denotes the nodal interpolation operator\nwhich only acts on the spatial variable}.\n\\revision{Note that $\\psi_h(t)\\in\\boldsymbol{\\mathcal{K}}_{\\boldsymbol{m}_h^j}$ for all $t_j\\le t0$, let $X_h \\subseteq X$ be finite dimensional subspaces of \n$X$ with $X_h \\subseteq X_{h'}$ for $h>h'$ and\n$\\overline{\\bigcup_{h>0}X_h} = X$. Let $b_h \\in X_h^*$. \nThen, the Galerkin formulation\n\\begin{align*\n\\dual{A w_h}{v_h}_{X^* \\times X} = \\dual{b_h}{v_h}_{X^* \\times X} \\quad \\text{ for all } v_h \\in X_h\n\\end{align*}\nadmits a unique solution $w_h \\in X_h$. Provided \n$\\norm{b_h}{X_h^*} \\le M < \\infty$ for all $h>0$, the sequence of \nGalerkin solutions is bounded, i.e., $\\norm{w_h}{X_h}\\le C < \\infty$ for all\n$h>0$, and the $h$-independent constant $C>0$ depends only on $M$ and the \ncoercivity \\revision{constant} of $A$. In particular, the sequence $\\left\\{w_h\\right\\}_{h>0}$ \\revision{admits a \nweakly convergent subsequence in $X$ with limit} $w \\in X$.\nIf $b_h \\to b$ strongly in $X^*$ for $h \\to 0$, this limit solves the operator equation~\\eqref{eq:browderminty}. 
\par This framework is now used in the following lemma, which guarantees the assumptions~\eqref{assumption:chi:bounded}--\eqref{assumption:chi:convergence} of Theorem~\ref{theorem} for certain energy contributions:
\begin{lemma}\label{prop:nonlin:conv}
Suppose that $X$ and $A:X\to X^*$ satisfy the foregoing assumptions.
Let $Y$ be a Banach space and let $S, S_h \in L\left(X,\boldsymbol{L}^2(\Omega_1)\right)$ and $R, R_h \in L\big(\boldsymbol{H}^{1-\varepsilon}(\Omega_1) \times Y, X^*\big)$ for some $0\le\varepsilon\le1$ with
\begin{align}
S_h x \rightharpoonup Sx
&\quad\text{weakly in $\boldsymbol{L}^2(\Omega_1)$ for all $x \in X$},
\label{prop:nonlinear:S}\\
R_h (\boldsymbol{n}, y) \rightarrow R (\boldsymbol{n}, y)
&\quad\text{strongly in $X^*$ for all $\boldsymbol{n} \in \boldsymbol{H}^{1-\varepsilon}(\Omega_1), y \in Y$},
\label{prop:nonlinear:R}
\end{align}
and $\boldsymbol{\pi}:=SA^{-1}R:\boldsymbol{H}^1(\Omega_1)\times Y\to\boldsymbol{L}^2(\Omega_1)$.
For $h > 0$, $\boldsymbol{n}\in\boldsymbol{H}^1(\Omega_1)$, and $y \in Y$, define $\boldsymbol{\pi}_h(\boldsymbol{n}, y):= S_h u_h$, where $u_h$ is the unique solution of
\begin{align}\label{eq:multiscale:galerkin}
\dual{A u_h}{v_h}_{X^*\times X} = \dual{R_h (\boldsymbol{n},y)}{v_h}_{X^*\times X} \quad \text{ for all } v_h \in X_h.
\end{align}
For all $y\in Y$, it then holds that
\begin{align}\label{eq:pi_bounded}
 \norm{\boldsymbol{\pi}_h(\boldsymbol{n}, y)}{\boldsymbol{L}^2(\Omega_1)}
 \le \c{multiscale}\,(1+\norm{\nabla\boldsymbol{n}}{\boldsymbol{L}^2(\Omega_1)})
\end{align}
for all $\boldsymbol{n} \in \boldsymbol{H}^1(\Omega_1)$ with $|\boldsymbol{n}| \le 1$ and for all $h>0$. The constant $\setc{multiscale} > 0$ does not depend on $h$ and $\boldsymbol{n}$, but only on $A$, $\norm{y}{Y}$, $\Omega_1$, and the operators $S_h$ and $R_h$.
Moreover, suppose that $\norm{\boldsymbol{m}_{hk}^-}{L^2(\boldsymbol{H}^1)}+\norm{\zeta_{hk}^-}{L^\infty(Y)}\le\setc{dp:bounded}$ and that $(\boldsymbol{m}_{hk}^-, \zeta_{hk}^-) \rightarrow (\boldsymbol{m}, \zeta)$ strongly in $L^2\big([0,T]; \boldsymbol{L}^2(\Omega_1) \times Y\big) = L^2(\boldsymbol{L}^2(\Omega_1) \times Y)$ for some subsequence as $(h,k) \rightarrow (0,0)$.
Then,
\begin{align}\label{eq:lemma}
\boldsymbol{\pi}_h(\boldsymbol{m}_{hk}^-, \zeta_{hk}^-) \rightharpoonup \boldsymbol{\pi}(\boldsymbol{m}, \zeta)
\quad\text{weakly in $\boldsymbol{L}^2(\Omega_T)$}
\end{align}
for the same subsequence.
\end{lemma}
\begin{proof}
The Banach-Steinhaus theorem implies uniform boundedness of the operator norms $C_S := \sup_{h>0}\norm{S_h:X\to\boldsymbol{L}^2(\Omega_1)}{}<\infty$ and $C_R:= \sup_{h>0}\norm{R_h:\boldsymbol{H}^{1-\varepsilon}(\Omega_1)\times Y\to X^*}{}<\infty$.
For fixed $\boldsymbol{n}\in\boldsymbol{H}^1(\Omega_1)$ with $|\boldsymbol{n}|\le 1$, $y \in Y$, and $b_h:=R_h(\boldsymbol{n},y)$, this implies
\begin{align*}
\norm{b_h}{X^*} \le C_R \norm{(\boldsymbol{n}, y)}{\boldsymbol{H}^{1-\varepsilon}(\Omega_1)\times Y} \lesssim \big(\norm{\boldsymbol{n}}{\boldsymbol{H}^1(\Omega_1)} + \norm{y}{Y}\big)=: M < \infty.
\end{align*}
Strong monotonicity of $A$ shows
\begin{align*}
 \norm{u_h}{X}^2 \lesssim \dual{Au_h-A(0)}{u_h}_{X^*\times X}
 &= \dual{b_h-A(0)}{u_h}_{X^*\times X}
 \\&\lesssim \norm{b_h-A(0)}{X^*}\norm{u_h}{X}.
\end{align*}
Thus, we infer with $|\boldsymbol{n}|\le1$ that
\begin{align*}
\norm{u_h}{X}\lesssim
\norm{\nabla\boldsymbol{n}}{\boldsymbol{L}^2(\Omega_1)} + |\Omega_1|^{1/2} + \norm{y}{Y}
+ \norm{A(0)}{X^*}
\lesssim1+\norm{\nabla\boldsymbol{n}}{\boldsymbol{L}^2(\Omega_1)},
\end{align*}
where the hidden constant $C>0$ depends only on $A$, $C_R$, and $\norm{y}{Y}$.
Consequently, this proves~\eqref{eq:pi_bounded} with $\c{multiscale}=CC_S$.

Next, we show that $\boldsymbol{\pi}_h(\boldsymbol{n}_h, y_h) \rightharpoonup \boldsymbol{\pi}(\boldsymbol{n}, y)$ weakly in $\boldsymbol{L}^2(\Omega_1)$ as $h\to0$, provided that $(\boldsymbol{n}_h, y_h) \to (\boldsymbol{n}, y)$ strongly in $\boldsymbol{H}^{1-\varepsilon}(\Omega_1) \times Y$.
Assumption~\eqref{prop:nonlinear:R} and the uniform boundedness of $R_h$ imply that $R_h(\boldsymbol{n}_h,y_h) = R_h(\boldsymbol{n},y) - R_h\big((\boldsymbol{n}-\boldsymbol{n}_h, y-y_h)\big) \to R(\boldsymbol{n},y)$ strongly in $X^*$ as $h\to0$.
Therefore, the Browder-Minty theorem for strongly monotone operators guarantees $u_h\to u$ strongly in $X$, where $u = A^{-1}R(\boldsymbol{n},y)$ and $u_h\in X_h$ solves~\eqref{eq:multiscale:galerkin} with $(\boldsymbol{n}, y)$ replaced by $(\boldsymbol{n}_h, y_h)$.
The convergence assumption~\eqref{prop:nonlinear:S} and the uniform boundedness of $S_h$ thus show $\boldsymbol{\pi}_h(\boldsymbol{n}_h, y_h) = S_hu_h = S_hu - S_h(u-u_h) \rightharpoonup Su = \boldsymbol{\pi}(\boldsymbol{n}, y)$ weakly in $\boldsymbol{L}^2(\Omega_1)$ as $h\to0$.
\par Finally, we prove $\boldsymbol{\pi}_h(\boldsymbol{m}_{hk}^-, \zeta_{hk}^-)\rightharpoonup\boldsymbol{\pi}(\boldsymbol{m}, \zeta)$ weakly in $\boldsymbol{L}^2(\Omega_T)$ for a subsequence as $(h,k)\to(0,0)$.
To that end, we choose sequences $h_\ell\to0$, $k_\ell\to0$ such that $(\boldsymbol{m}_\ell,\zeta_\ell):=(\boldsymbol{m}_{h_\ell k_\ell}^-, \zeta_{h_\ell k_\ell}^-)$ converges strongly in $L^2\big(\boldsymbol{L}^2(\Omega_1) \times Y\big)$ to $(\boldsymbol{m}, \zeta)$.
\n\\revision{According to interpolation theory (see, e.g., Ref.~\\cite{bl}, Section~5), interpolation\nof $\\boldsymbol{L}^2(\\Omega_T)=L^2(\\boldsymbol{L}^2)$ and $L^2(\\boldsymbol{H}^1)$ yields $L^2(\\boldsymbol{H}^s)$ for all $0<s<1$.\nSince $(\\boldsymbol{m}_\\ell)$ is bounded in $L^2(\\boldsymbol{H}^1)$ and converges strongly in $L^2(\\boldsymbol{L}^2)$, it follows that $\\boldsymbol{m}_\\ell\\to\\boldsymbol{m}$ strongly in $L^2(\\boldsymbol{H}^{1-\\varepsilon})$ for all $0<\\varepsilon\\le1$.\nFor a further subsequence, we thus obtain $(\\boldsymbol{m}_\\ell(t),\\zeta_\\ell(t))\\to(\\boldsymbol{m}(t),\\zeta(t))$ strongly in $\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)\\times Y$ for almost all $t\\in[0,T]$, and the second step of the proof yields\n$\\boldsymbol{\\pi}_{h_\\ell}(\\boldsymbol{m}_\\ell(t),\\zeta_\\ell(t))\\rightharpoonup\\boldsymbol{\\pi}(\\boldsymbol{m}(t),\\zeta(t))$ weakly in $\\boldsymbol{L}^2(\\Omega_1)$ for almost all $t\\in[0,T]$.\nIn combination with the uniform bound~\\eqref{eq:pi_bounded}, which ensures boundedness in $\\boldsymbol{L}^2(\\Omega_T)$, every subsequence admits a further subsequence which converges weakly in $\\boldsymbol{L}^2(\\Omega_T)$ towards the same identified limit $\\boldsymbol{\\pi}(\\boldsymbol{m},\\zeta)$.\nThis proves~\\eqref{eq:lemma} and concludes the proof.}\n\\end{proof}\n\\begin{remark}\n{\\rm(i)} For the stray field computations of Section~\\ref{section:fredkinkoehler}, the applied field does not enter, and we use $Y = \\{0\\}$ as well as $\\zeta_{hk}^- = 0 = \\zeta$. \nIn particular, we may therefore write $\\boldsymbol{\\pi}_h(\\boldsymbol{m}_{hk}^-, \\zeta_{hk}^-) = \\boldsymbol{\\pi}_h(\\boldsymbol{m}_{hk}^-)$.\\\\\n{\\rm(ii)} For the multiscale approach, we use \\revision{$Y = \\boldsymbol{L}^2(\\Omega_2)$}, $\\zeta_{hk}^- = \\boldsymbol{f}_{hk}^-$, and $\\zeta = \\boldsymbol{f}$, respectively.\n\\end{remark}\n\\revision{%\n\\begin{remark}\nProvided that $R, R_h \\in L\\big(\\boldsymbol{L}^2(\\Omega_1)\\times Y, X^*\\big)$ with\n$R_h (\\boldsymbol{n}, y) \\rightarrow R (\\boldsymbol{n}, y)$\nstrongly in $X^*$ for all $(\\boldsymbol{n},y) \\in \\revision{\\boldsymbol{L}^2(\\Omega_1)}\\times Y$\nin~\\eqref{prop:nonlinear:R}, the assumptions on the nonlinear operator $A$\ncan be weakened: Instead of strong monotonicity, uniform monotonicity \nof $A$ is sufficient. Then, $\\norm{b_h}{X^*}\\le C_R\\norm{\\boldsymbol{n}}{\\boldsymbol{L}^2(\\Omega_1)}\n\\le C_R|\\Omega_1|^{1\/2}=:M$ proves $\\norm{u_h}{X}\\le C$ for some constant\n$C = C(M)>0$, see\nRef.~\\cite{zeidler}, Section~26.2. The remaining part of the proof of\nLemma~\\ref{prop:nonlin:conv} remains unchanged with the formal choice $\\varepsilon=1$.\n\\end{remark}%\n}\n\\subsection{Application: Hybrid FEM-BEM \\revision{stray field computations}}\\label{section:fredkinkoehler}%\n\\revision{In the following, we present the hybrid FEM-BEM approaches of \\textsc{Fredkin} and \\textsc{Koehler}, see Ref.~\\cite{fredkinkoehler}, \nand \\textsc{Garc\\'ia-Cervera} and \\textsc{Roma}, see Ref.~\\cite{gcr},\nfor the approximate computation of the stray field.\nWe show that they satisfy the assumptions of Lemma~\\ref{prop:nonlin:conv}.}\nGiven any $\\boldsymbol{m}\\in \\boldsymbol{L}^2(\\Omega_1)$, the non-dimensional form of~\\eqref{eq:u1} reads\n\\begin{align*}\n \\begin{array}{rcll}\n \\Delta u_1 &=& \\nabla \\cdot \\boldsymbol{m} \\quad&\\text{in } \\Omega_1, \\\\\n \\Delta u_{1} &=& 0\\quad&\\text{in }\\mathbb{R}^3\\backslash\\overline\\Omega_1,\\\\{}\n \\revision{\\gamma_1^{\\rm ext}u_{1}-\\revision{\\gamma_1^{\\rm int}}u_{1}} &=& 0&\\text{on }\\Gamma_1,\\\\{}\n \\revision{\\delta_1^{\\rm ext}u_{1}-\\delta_1^{\\rm int}u_{1}}\n &=& -\\boldsymbol{m}\\cdot\\revision{{\\boldsymbol{\\nu}}_1}\\quad&\\text{on }\\Gamma_1,\\\\\n u_{1}(x) &=& \\mathcal O(1\/|x|)&\\text{as }|x|\\to\\infty,\n\\end{array}\n\\end{align*}\n\\revision{where the target for our LLG integrator is the stray field $\\boldsymbol{\\pi}(\\boldsymbol{m})=\\nabla u_1$ on $\\Omega_1$.}\n\\subsubsection{Fredkin-Koehler approach}\\label{section:stray field:cont}\n\\noindent\n\\revision{The approach of \\textsc{Fredkin} and \\textsc{Koehler} (Ref.~\\cite{fredkinkoehler})\nrelies on the superposition principle\n\\begin{align}\\label{dp:superposition}\n u_1 = \\begin{cases} \n u_{11}+u_{12}&\\text{in }\\Omega_1,\\\\\n u_{12}&\\text{in }\\mathbb{R}^3\\backslash\\overline\\Omega_1,\n \\end{cases}\n\\end{align}\nwhere $u_{11}\\in H^1_*(\\Omega_1)$ satisfies\n\\begin{align}\\label{eq:fk1}\n \\dual{\\nabla u_{11}}{\\nabla v}_{\\Omega_1}\n = \\dual{\\boldsymbol{m}}{\\nabla v}_{\\Omega_1}\n \\quad\\text{for all }v\\in H^1_*(\\Omega_1)\n\\end{align}\nand $u_{12} = \\widetilde K_1 \\revision{\\gamma_1^{\\rm int}}u_{11}\\in 
H^1(\\mathbb{R}^3\\backslash\\Gamma_1)$. Since the integration of LLG only requires \n$u_1$ on $\\Omega_1$, we note that $u_{12}\\in H^1(\\Omega_1)$ solves\n\\begin{align}\\label{eq:fk2}\n \\revision{\\gamma_1^{\\rm int}}u_{12} = (K_1-1\/2)\\revision{\\gamma_1^{\\rm int}}u_{11}\n \\text{ and }\n \\dual{\\nabla u_{12}}{\\nabla v}_{\\Omega_1} = 0\n \\text{ for all }v\\in H^1_0(\\Omega_1).\n\\end{align}}%\nTo discretize the equations~\\eqref{eq:fk1}--\\eqref{eq:fk2}, let \n$u_{11h}\\in\\mathcal{S}_*^1(\\mathcal{T}_h^{\\Omega_1})$ be the unique FE solution of\n\\begin{align}\\label{eq:fk1h}\n \\dual{\\nabla u_{11h}}{\\nabla v_h}_{\\Omega_1}\n = \\dual{\\boldsymbol{m}}{\\nabla v_h}_{\\Omega_1}\n \\text{ for all }v_h\\in\\mathcal{S}_*^1(\\mathcal{T}_h^{\\Omega_1}).\n\\end{align}\nSince an FE approximation $u_{12h}\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})$ of~\\eqref{eq:fk2} cannot satisfy\ncontinuous Dirichlet data $(K_1-1\/2)u_{11h}$, we need to discretize \\revision{them}. To\nthat end, let $I_h^{\\Omega_1}: H^1(\\Omega_1) \\to \\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})$ be the Scott-Zhang\nprojection from Ref.~\\cite{scottzhang}. Since $I_h^{\\Omega_1}$ is $H^1$-stable and preserves discrete boundary data, it induces a stable projection $I_h^{\\Gamma_1}: H^{1\/2}(\\Gamma_1) \\to \\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1}|_{\\Gamma_1})$ with $\\revision{\\gamma_1^{\\rm int}}I_h^{\\Omega_1}v = I_h^{\\Gamma_1}(\\revision{\\gamma_1^{\\rm int}}v)$ for all $v \\in H^1(\\Omega_1)$, \\revision{see, e.g., Ref.~\\cite{hypsing3d}.}\n\\revision{Let $u_{12h}\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})$ \nbe the unique solution of the inhomogeneous Dirichlet problem\n\\begin{align}\\label{eq:fk2h}\n \\revision{\\gamma_1^{\\rm int}}u_{12h} = I_h^{\\Gamma_1}(K_1-1\/2)\\revision{\\gamma_1^{\\rm int}}u_{11h}\n \\text{ and }\n \\dual{\\nabla u_{12h}}{\\nabla v_h}_{\\Omega_1} = 0\n \\quad\\text{for all }v_h\\in\\mathcal{S}_0^1(\\mathcal{T}_h^{\\Omega_1}).\n\\end{align}\nThe resulting approximate stray field $\\boldsymbol{\\pi}_h(\\boldsymbol{m})=\\nabla u_{11h}+\\nabla u_{12h}$ is indeed covered by our approach from Section~\\ref{sec:monotone}.}\n\n\\begin{proposition}\\label{prop:stray field}\nThe operator $\\boldsymbol{\\pi}_h(\\boldsymbol{m}) = R_h(\\boldsymbol{m}) := \\nabla u_{11h} + \\nabla u_{12h}$ defined\nvia~\\eqref{eq:fk1h}--\\eqref{eq:fk2h} satisfies $\\boldsymbol{\\pi}_h\\in\nL(\\boldsymbol{L}^2(\\Omega_1);\\boldsymbol{L}^2(\\Omega_1))$, and convergence~\\eqref{prop:nonlinear:R}\ntowards $\\boldsymbol{\\pi}(\\boldsymbol{m}) = R(\\boldsymbol{m}) :=\\nabla u_1$ holds even strongly in $\\boldsymbol{L}^2(\\Omega_1)$. In particular, Lemma~\\ref{prop:nonlin:conv} applies with $X:=\\boldsymbol{L}^2(\\Omega_1)$ and $Y:=\\{0\\}$ and guarantees the assumptions~\\eqref{assumption:chi:bounded}--\\eqref{assumption:chi:convergence} of Theorem~\\ref{theorem}.\n\\end{proposition}\n\n\\begin{proof}\nFirst, note that the FE solution $u_{11h}$ of~\\eqref{eq:fk1h} is a Galerkin approximation of~\\eqref{eq:fk1}. 
Therefore, stability \\revision{and density\narguments prove $\\norm{u_{11}-u_{11h}}{H^1(\\Omega_1)}\\to 0$ as $h\\to 0$.}\n\\revision{Second, we consider the unique solution $\\widetilde u_{12h}\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})$ of the auxiliary problem\n\\begin{align*}\n \\revision{\\gamma_1^{\\rm int}}\\widetilde u_{12h} = I_h^{\\Gamma_1}(K_1-1\/2)\\revision{\\gamma_1^{\\rm int}}u_{11}\n \\text{ and }\n \\dual{\\nabla \\widetilde u_{12h}}{\\nabla v_h}_{\\Omega_1} = 0\n \\quad\\text{for all }v_h\\in\\mathcal{S}_0^1(\\mathcal{T}_h^{\\Omega_1}).\n\\end{align*}\nNote that $\\gamma_1^{\\rm int}\\widetilde u_{12h} = I_h^{\\Gamma_1}\\gamma_1^{\\rm int} u_{12}$. Therefore, the C\\'ea lemma for inhomogeneous Dirichlet problems (see Prop.~2.3 in Ref.~\\cite{dirichlet3d}) and density arguments prove\n\\begin{align*}\n \\norm{u_{12}-\\widetilde u_{12h}}{H^1(\\Omega_1)}\n \\lesssim \\min_{v_h\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})}\\norm{u_{12}-v_h}{H^1(\\Omega_1)}\n \\xrightarrow{h\\to0}0.\n\\end{align*}}\n\\revision{Third, stability of the inhomogeneous Dirichlet problem provides\n\\begin{align*} \n \\norm{u_{12h}-\\widetilde u_{12h}}{H^1(\\Omega_1)}\n \\lesssim \\norm{\\gamma_1^{\\rm int}(u_{11}-u_{11h})}{H^{1\/2}(\\Gamma_1)}\n \\lesssim \\norm{u_{11}-u_{11h}}{H^1(\\Omega_1)},\n\\end{align*}\nand the triangle inequality reveals\n\\begin{align*}\n \\norm{u_{12}-u_{12h}}{H^1(\\Omega_1)}\n &\\le \\norm{u_{12}-\\widetilde u_{12h}}{H^1(\\Omega_1)}\n + \\norm{u_{12h}-\\widetilde u_{12h}}{H^1(\\Omega_1)}\n \\xrightarrow{h\\to0}0.\n\\end{align*}}%\n\\revision{Finally,} the triangle inequality yields\n\\begin{align*}\n\\norm{\\boldsymbol{\\pi}_h(\\boldsymbol{m})-\\boldsymbol{\\pi}(\\boldsymbol{m})}{\\boldsymbol{L}^2(\\Omega_1)} \\le \\norm{\\nabla(u_{11}-u_{11h})}{\\boldsymbol{L}^2(\\Omega_1)}\n+ \\norm{\\nabla(u_{12}-u_{12h})}{\\boldsymbol{L}^2(\\Omega_1)}\\to0\n\\end{align*}\nfor all $\\boldsymbol{m} \\in X = \\boldsymbol{L}^2(\\Omega_1)$. \n\\revision{Together with Lemma~\\ref{prop:nonlin:conv}, we conclude the proof.}\n\\end{proof}\n\\revision{\\begin{remark}\nInstead of the Scott-Zhang projection $I_h^{\\Gamma_1}$, any Cl\\'ement-type \noperator $I_h^{\\Gamma_1}:L^2(\\Gamma_1)\\to\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1}|_{\\Gamma_1})$ can be employed. \nThe assertion of Proposition~\\ref{prop:stray field} holds accordingly, see Ref.~\\cite{goldenits}, Section~4.3.\nWe note that Ref.~\\cite{fredkinkoehler}\nemploys nodal interpolation which is \\emph{not} suitable for the numerical \nanalysis as $H^1$-functions are not continuous, in general.\n\\end{remark}}\n\n\\subsubsection{Garc\\'ia-Cervera-Roma approach}\\label{section:strayfield2:cont}\n\n\\noindent\nThe approach of \\textsc{Garc\\'ia-Cervera} and \\textsc{Roma}, see Ref.~\\cite{gcr},\nalso relies on the superposition~\\eqref{dp:superposition}, where now \n$u_{11}\\in H^1_0(\\Omega_1)$ satisfies\n\\begin{align}\\label{eq:gcr1}\n \\dual{\\nabla u_{11}}{\\nabla v}_{\\Omega_1}\n = \\dual{\\boldsymbol{m}}{\\nabla v}_{\\Omega_1}\n \\quad\\text{for all }v\\in H^1_0(\\Omega_1)\n\\end{align}\nand $u_{12} = \\widetilde V_1(\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1-\\delta_1^{\\rm int}u_{11})\n\\in H^1_{\\rm loc}(\\mathbb{R}^3)$. Note that $u_{12}\\in H^1(\\Omega_1)$ solves\n\\begin{align}\\label{eq:gcr2}\n \\gamma_1^{\\rm int}u_{12} = V_1(\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1-\\delta_1^{\\rm int}u_{11})\n \\text{ and }\n \\dual{\\nabla u_{12}}{\\nabla v}_{\\Omega_1} = 0\n \\text{ for all }v\\in H^1_0(\\Omega_1).\n\\end{align}\nTo discretize~\\eqref{eq:gcr1}--\\eqref{eq:gcr2}, we employ the $L^2$-projection \n$\\Pi_h:L^2(\\Gamma_1)\\to\\mathcal{P}^0(\\mathcal{T}_h^{\\Omega_1}|_{\\Gamma_1})$ \nas well as the Scott-Zhang projection $I_h^{\\Gamma_1}$\nand solve for $u_{11h}\\in\\mathcal{S}^1_0(\\mathcal{T}_h^{\\Omega_1})$ with\n\\begin{align}\\label{eq:gcr1h}\n \\dual{\\nabla u_{11h}}{\\nabla v_h}_{\\Omega_1}\n = \\dual{\\boldsymbol{m}}{\\nabla v_h}_{\\Omega_1}\n \\quad\\text{for all }v_h\\in \\mathcal{S}^1_0(\\mathcal{T}_h^{\\Omega_1}) \n\\end{align}\nand for $u_{12h}\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})$ with\n\\begin{subequations}\\label{eq:gcr2h}\n\\begin{align}\n &\\gamma_1^{\\rm int}u_{12h} = I_h^{\\Gamma_1}V_1(\\Pi_h(\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1)-\\partial u_{11h}\/\\partial{\\boldsymbol{\\nu}}_1),\n\\\\&\n \\dual{\\nabla u_{12h}}{\\nabla v_h}_{\\Omega_1} = 0\n \\quad\\text{for all }v_h\\in \\mathcal{S}^1_0(\\mathcal{T}_h^{\\Omega_1}).\n\\end{align}\n\\end{subequations}\nThe resulting approximate stray field $\\boldsymbol{\\pi}_h(\\boldsymbol{m})=\\nabla u_{11h}+\\nabla u_{12h}$ is indeed covered by our approach from Section~\\ref{sec:monotone}.\nUnlike the Fredkin-Koehler approach, however, the numerical analysis is\nslightly more involved, since the well-posedness of~\\eqref{eq:gcr2}\nrequires at least that the normal trace $\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1$ exists \nin $H^{-1\/2}(\\Gamma_1)$, which prevents us from considering 
$\\boldsymbol{m}\\in\\boldsymbol{L}^2(\\Omega_1)$ only.\n\n\\begin{proposition}\\label{prop:gcr}\nThere exists some $\\varepsilon>0$ such that the operator \n$\\boldsymbol{\\pi}_h(\\boldsymbol{m}) = R_h(\\boldsymbol{m}) := \\nabla u_{11h} + \\nabla u_{12h}$ defined via~\\eqref{eq:gcr1h}--\\eqref{eq:gcr2h} satisfies $\\boldsymbol{\\pi}_h\\in\nL(\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1);\\boldsymbol{L}^2(\\Omega_1))$ as well as convergence~\\eqref{prop:nonlinear:R} towards $\\boldsymbol{\\pi}\\in L(\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1);\\boldsymbol{L}^2(\\Omega_1))$,\n$\\boldsymbol{\\pi}(\\boldsymbol{m}) = R(\\boldsymbol{m}) := \\nabla u_1 = \\nabla u_{11}+\\nabla u_{12}$. In particular, Lemma~\\ref{prop:nonlin:conv} applies with $X:=\\boldsymbol{L}^2(\\Omega_1)$ and $Y:=\\{0\\}$ and guarantees the assumptions~\\eqref{assumption:chi:bounded}--\\eqref{assumption:chi:convergence} of Theorem~\\ref{theorem}.\n\\end{proposition}\n\n\\begin{proof}\nWe argue essentially as in the proof of Proposition~\\ref{prop:stray field}.\nFirst, we see that \n\\begin{align*}\n \\norm{u_{11}-u_{11h}}{H^1(\\Omega_1)}\n \\lesssim \\min_{v_h\\in\\mathcal{S}^1_0(\\mathcal{T}_h^{\\Omega_1})}\\norm{u_{11}-v_h}{H^1(\\Omega_1)}\n \\xrightarrow{h\\to0}0,\n\\end{align*}\nfor all $\\boldsymbol{m}\\in\\boldsymbol{L}^2(\\Omega_1)$. Moreover, for $\\boldsymbol{m}\\in \\boldsymbol{H}^1(\\Omega_1)$, elliptic \nregularity for the Dirichlet problem~\\eqref{eq:gcr1} even predicts \n$u_{11}\\in H^{3\/2+\\mu}(\\Omega_1)$ and hence \n$\\norm{u_{11}-u_{11h}}{H^1(\\Omega_1)} = \\mathcal O(h^{1\/2+\\mu})$ for some $\\mu>0$\nwhich depends only on the shape of the polyhedral Lipschitz domain $\\Omega_1$,\nsee, e.g., Ref.~\\cite{monk}, Theorem~3.8. By interpolation, these\nobservations yield the existence of some (small) $0<\\varepsilon<1\/2$ such that \n\\begin{align}\\label{dp:regularity}\n u_{11}\\in H^{3\/2+\\varepsilon}(\\Omega_1)\\text{ with }\n \\norm{u_{11}-u_{11h}}{H^1(\\Omega_1)} = \\mathcal O(h^{1\/2+\\varepsilon})\n \\text{ for all } \\boldsymbol{m}\\in\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1).\n\\end{align}\nFrom now on, we assume $\\boldsymbol{m}\\in\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)$ and note that,\nin particular, $\\delta_1^{\\rm int}u_{11} = \\partial u_{11}\/\\partial{\\boldsymbol{\\nu}}_1$ \nexists in $L^2(\\Gamma_1)$. The trace inequality (e.g. 
Ref.~\\cite{fkmp}, \nLemma~3.4) proves for any face\n$E\\in\\mathcal{T}_h^{\\Omega_1}|_{\\Gamma_1}$ with corresponding element $T\\in\\mathcal{T}_h^{\\Omega_1}$\n(i.e., $E\\subset\\partial T\\cap\\Gamma_1$) that\n\\begin{align*}\n &\\norm{\\delta_1^{\\rm int}u_{11} - \\partial u_{11h}\/\\partial{\\boldsymbol{\\nu}}_1}{L^2(\\partial T\\cap\\Gamma_1)}^2\\\\\n &\\quad\\lesssim h^{-1}\\norm{\\nabla(u_{11}-u_{11h})}{\\boldsymbol{L}^2(T)}^2 + \\norm{\\nabla(u_{11}-u_{11h})}{\\boldsymbol{L}^2(T)} \\norm{D^2(u_{11}-u_{11h})}{\\boldsymbol{L}^2(T)}.\n\\end{align*}\nWith $D^2u_{11h}=0$ on $T$, we sum over all elements $T\\in\\mathcal{T}_h^{\\Omega_1}$ \nand obtain\n\\begin{align*}\n &\\norm{\\delta_1^{\\rm int}u_{11}- \\partial u_{11h}\/\\partial{\\boldsymbol{\\nu}}_1}{L^2(\\Gamma_1)}^2\\\\\n &\\quad\\lesssim h^{-1}\\norm{\\nabla(u_{11}-u_{11h})}{\\boldsymbol{L}^2(\\Omega_1)}^2 + \\norm{\\nabla(u_{11}-u_{11h})}{\\boldsymbol{L}^2(\\Omega_1)}\\norm{D^2u_{11}}{\\boldsymbol{L}^2(\\Omega_1)}\n = \\mathcal O(h^{2\\varepsilon}).\n\\end{align*}\nTogether with the continuous inclusion \n$L^2(\\Gamma_1)\\subseteq H^{-1\/2}(\\Gamma_1)$, \nit follows $\\norm{\\delta_1^{\\rm int}u_{11}- \\partial u_{11h}\/\\partial{\\boldsymbol{\\nu}}_1}{H^{-1\/2}(\\Gamma_1)}\\to0$ as $h\\to0$.\nLet $\\widetilde u_{12h}\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})$ be the unique\nsolution of the auxiliary problem\n\\begin{align*}\n \\gamma_1^{\\rm int}\\widetilde u_{12h} = I_h^{\\Gamma_1}V_1(\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1-\\delta_1^{\\rm int}u_{11})\n \\text{ and }\n \\dual{\\nabla \\widetilde u_{12h}}{\\nabla v_h}_{\\Omega_1} = 0\n \\quad\\text{for all }v_h\\in\\mathcal{S}_0^1(\\mathcal{T}_h^{\\Omega_1}).\n\\end{align*}\nAgain, it holds $\\gamma_1^{\\rm int}\\widetilde u_{12h} = I_h^{\\Gamma_1}\\gamma_1^{\\rm int}u_{12}$\nand hence $\\norm{u_{12}-\\widetilde u_{12h}}{H^1(\\Omega_1)}\\to0$ as $h\\to0$.\nStability of the inhomogeneous Dirichlet problem proves\n\\begin{align*} \n \\norm{\\widetilde u_{12h}-u_{12h}}{H^1(\\Omega_1)}\n &\\lesssim\\norm{I_h^{\\Gamma_1}V_1\\big((1-\\Pi_h)\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1-(\\delta_1^{\\rm int}u_{11}-\\partial u_{11h}\/\\partial{\\boldsymbol{\\nu}}_1)\\big)}{H^{1\/2}(\\Gamma_1)}\n \\\\&\\lesssim\n \\norm{(1-\\Pi_h)\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1}{H^{-1\/2}(\\Gamma_1)}\n + \\norm{\\delta_1^{\\rm int}u_{11}-\\partial u_{11h}\/\\partial{\\boldsymbol{\\nu}}_1}{H^{-1\/2}(\\Gamma_1)}.\n\\end{align*}\nWe already saw that the second term on the right-hand side vanishes\nas $h\\to0$.\nFor the first term, a duality argument (see, e.g., Ref.~\\cite{ccdpr}, Section~4) proves\n\\begin{align*}\n \\norm{(1-\\Pi_h)\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1}{H^{-1\/2}(\\Gamma_1)}\n \\lesssim h^{1\/2}\\norm{\\boldsymbol{m}\\cdot{\\boldsymbol{\\nu}}_1}{L^2(\\Gamma_1)}\n \\lesssim h^{1\/2}\\norm{\\boldsymbol{m}}{\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)},\n\\end{align*}\nwhere we also used $0<\\varepsilon<1\/2$ to admit a continuous trace operator\n$\\gamma_1^{\\rm int}:\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)\\to\\boldsymbol{L}^2(\\Gamma_1)$.\nOverall, we thus see\n\\begin{align}\\label{eq:gcr:u12conv}\n \\norm{u_{12}-u_{12h}}{H^1(\\Omega_1)}\n \\le \\norm{u_{12}-\\widetilde u_{12h}}{H^1(\\Omega_1)}\n + \\norm{\\widetilde u_{12h}-u_{12h}}{H^1(\\Omega_1)}\n \\xrightarrow{h\\to0}0.\n\\end{align}\nThe combination of~\\eqref{dp:regularity}--\\eqref{eq:gcr:u12conv}\nconcludes $\\norm{\\boldsymbol{\\pi}(\\boldsymbol{m})-\\boldsymbol{\\pi}_h(\\boldsymbol{m})}{\\boldsymbol{L}^2(\\Omega_1)}\n\\to0$ as $h\\to0$, for all 
$\\boldsymbol{m}\\in\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)$.\n\\end{proof}\n\n\\input{multiscale_fig.tex}\n\\subsection{Application: Multiscale approach for total magnetic field}\n\\label{sec:multi}\nWe aim to apply Lemma~\\ref{prop:nonlin:conv} to the model problem posed in Section~\\ref{sec:maxwell}, i.e., the\ncomputation of $\\boldsymbol{\\pi}(\\boldsymbol{m},\\boldsymbol{f}) = \\nabla u_2$ on $\\Omega_1$. \nIn the following, we consider the subproblems needed for the computation of $\\nabla u_2$\nas well as their discretizations. An overview illustration is given in Figure~\\ref{fig:u2}.\nThroughout this section, we let\n\\begin{itemize}\n\\item $X:=H^{-1\/2}(\\Gamma_2)\\times H^1(\\Omega_2)$,\n\\item $Y:=\\boldsymbol{L}^2(\\Omega_2)$.\n\\end{itemize}\nWe recall that $H^{-1\/2}(\\Gamma_2)$ is the dual space of the trace space\n$H^{1\/2}(\\Gamma_2)$ and that $\\widetilde H^{-1}(\\Omega_2)$ is the dual space of \n$H^1(\\Omega_2)$, where duality is understood according to the respective $L^2$-scalar \nproducts. In particular, the dual space of $X$ is \n$X^* = H^{1\/2}(\\Gamma_2)\\times \\widetilde H^{-1}(\\Omega_2)$.\n\\subsubsection{Continuous formulation}\nTo compute $\\nabla u_2$ on $\\Omega_1$, we proceed as implicitly outlined in Section~\\ref{sec:maxwell}.\nFor a magnetization $\\boldsymbol{m} \\in \\boldsymbol{L}^2(\\Omega_1)$, we compute $u_{1}\\in H^1(\\Omega_1)$ \nas the solution of the stray-field problem on the microscopic part.\nRecall from Section~\\ref{section:fredkinkoehler} that in $\\mathbb{R}^3\\backslash\\overline\\Omega_1\\supset\\Omega_2$ it holds\n$u_1 = u_{12} = \\widetilde K_1 \\gamma_1^{\\rm int}u_{11}$ with $u_{11}\\in H_*^1(\\Omega_1)$ being the solution of~\\eqref{eq:fk1}.\nAccording to~\\eqref{eq:intop_laplace},\n$u_1$ on $\\Omega_2$ thus solves the inhomogeneous Dirichlet problem\n\\begin{align}\\label{eq:fk5}\n \\gamma_2^{\\rm int}u_1 \n = \\gamma_2^{\\rm int}\\widetilde K_1 \\gamma_1^{\\rm int}u_{11}\n \\text{ and }\n \\dual{\\nabla u_1}{\\nabla v}_{\\Omega_2}=0\n \\text{ for all }v\\in H^1_0(\\Omega_2).\n\\end{align}\nRecall $\\nabla\\cdot\\boldsymbol{f}=0$ from~\\eqref{eq:Hext:div}, whence\n$\\dual{\\boldsymbol{f}\\cdot{\\boldsymbol{\\nu}}_2}{\\gamma_2^{\\rm int}v}_{\\Gamma_2}\n=\\dual{\\boldsymbol{f}}{\\nabla v}_{\\Omega_2}$ for all $v\\in H^1(\\Omega_2)$.\n For the auxiliary potential $u_{\\rm app}\\in H^1_*(\\Omega_2)$,\nthe non-dimensional weak formulation of~\\eqref{eq:uext} reads\n\\begin{align}\\label{eq:uext:nondim}\n \\dual{\\nabla u_{\\rm app}}{\\nabla v}_{\\Omega_2}\n = -\\dual{\\boldsymbol{f}}{\\nabla v}_{\\Omega_2}\n \\text{ for all }v\\in H^1_*(\\Omega_2).\n\\end{align}\n\\par In the next step, we compute the total magnetostatic potential \n$u = u_1 + u_2 + u_{\\rm app}$ on the macroscopic domain $\\Omega_2$. 
With \n$\\widetilde\\chi(|\\nabla u|) = \\chi\\big(M_s|\\boldsymbol{f}-\\nabla u_1-\\nabla u_2|\\big)$,\nthe non-dimensional form \nof~\\eqref{eq:u:omega2} \nreads\n\\begin{subequations}\\label{eq:jn0}\n\\begin{align}\n \\label{eq_:u:omega2:interior}\n \\nabla\\cdot\\big( (1+\\widetilde\\chi(|\\nabla u|))\\nabla u\\big) &= 0\n \\hspace*{29.1mm}\\text{in }\\Omega_2, \\\\\n \\Delta u_2 &= 0 \n \\hspace*{29.1mm}\\text{in }\\mathbb{R}^3\\backslash\\overline\\Omega_2, \\\\\n \\label{eq_:u:omega2:jumpu}\n \\gamma_2^{\\rm ext}u_2 - \\gamma_2^{\\rm int}u &= -\\gamma_2^{\\rm int}(u_1 +u_{\\rm app})\n \\hspace*{5.2mm}\\text{on }\\Gamma_2, \\\\\n \\label{eq_:u:omega2:jumpdu}\n \\delta_2^{\\rm ext}u_2 - (1+\\widetilde\\chi(|\\nabla u|))\\nabla u\\cdot{\\boldsymbol{\\nu}}_2 &= \\boldsymbol{f}\\cdot{\\boldsymbol{\\nu}}_2 - \\delta_2^{\\rm int}u_1\n \\hspace*{5.6mm}\\quad\\text{on }\\Gamma_2, \\\\\n u_2(x) &= \\mathcal O(1\/|x|) \n \\hspace*{18.0mm}\\text{as }|x|\\to\\infty.\n\\end{align}\n\\end{subequations}\nLet $V_2 : H^{-1\/2}(\\Gamma_2) \\rightarrow H^{1\/2}(\\Gamma_2)$ and $K_2:H^{1\/2}(\\Gamma_2) \\rightarrow H^{1\/2}(\\Gamma_2)$ denote the simple-layer potential and \nthe double-layer potential with respect to $\\Gamma_2$ (see Section~\\ref{section:bio}).\nThe transmission problem~\\eqref{eq:jn0}\nis then equivalently stated by means of the Johnson-N\\'ed\\'elec \ncoupling from Ref.~\\cite{johnson-nedelec},\n\\begin{subequations}\\label{eq:jn1}\n\\begin{align}\n \\dual{(1+\\widetilde\\chi(|\\nabla u|))\\nabla u}{\\nabla v}_{\\Omega_2}\n -\\dual{\\phi}{\\gamma_2^{\\rm int}v}_{\\Gamma_2}\n &= \\dual{\\delta_2^{\\rm int}u_1}{\\gamma_2^{\\rm int}v}_{\\Gamma_2}\n \\!-\\! \\dual{\\boldsymbol{f}}{\\nabla v}_{\\Omega_2},\n\\\\\n V_2\\phi + (1\/2-K_2)\\gamma_2^{\\rm int}u &= (1\/2-K_2)\\gamma_2^{\\rm int}(u_1+u_{\\rm app}),\n\\end{align}\n\\end{subequations}\nfor all $v\\in H^1(\\Omega_2)$, see Ref.~\\cite{affkmp} for the non-linear case\nand Refs.~\\cite{johnson-nedelec}, \\cite{sayas09} for the linear one.\nThe coupling formulation~\\eqref{eq:jn1} provides the total potential $u\\in H^1(\\Omega_2)$ as well as \nthe exterior normal derivative $\\phi = \\delta_2^{\\rm ext}u_2\\in H^{-1\/2}(\\Gamma_2)$. \nExistence and uniqueness of the solution $(\\phi,u)\\in X = H^{-1\/2}(\\Gamma_2)\\times H^1(\\Omega_2)$ of~\\eqref{eq:jn1} hinges strongly on the material law \n$\\widetilde\\chi$ and will be discussed in Section~\\ref{section:jn} \nbelow.\n\nSince $u_2$ solves $-\\Delta u_2 = 0$ in $\\mathbb{R}^3\\backslash\\overline\\Omega_2$, $u_2$ can be\ncomputed by means of the representation formula\n\\begin{align}\\label{eq:jn2}\n u_2 = -\\widetilde V_2\\delta_2^{\\rm ext}u_2 + \\widetilde K_2\\gamma_2^{\\rm ext}u_2\n \\quad\\text{in }\\mathbb{R}^3\\backslash\\overline\\Omega_2 \\supset \\Omega_1,\n\\end{align}\nsee, e.g., Ref.~\\cite{sauschwa}, Theorem~3.1.6.\nTo lower the computational cost for the later implementation, we will, however, not use \nthe representation formula~\\eqref{eq:jn2} on $\\Omega_1$, but only on $\\Gamma_1$ and solve an \ninhomogeneous Dirichlet problem instead. \nIt holds $\\gamma_2^{\\rm ext}u_2 = \\gamma_2^{\\rm int}u_2\n= \\gamma_2^{\\rm int}(u - u_1 - u_{\\rm app})$. 
With \n$\\phi=\\delta_2^{\\rm ext}u_2$ on $\\Gamma_2$, we obtain\n\\begin{subequations}\\label{eq:jn3}\n\\begin{align}\n -\\Delta u_2 &= 0 \\hspace*{58.5mm}\\text{in }\\Omega_1,\\\\\n \\gamma_1^{\\rm int}u_2 &= \\gamma_1^{\\rm int}\\big(-\\widetilde V_2\\phi + \\widetilde K_2\\gamma_2^{\\rm int}(u - u_1 - u_{\\rm app})\\big)\\quad\\text{on }\\Gamma_1.\n\\end{align}\n\\end{subequations}\n\\subsubsection{Discrete formulation}\nAs for the stray field, we solve~\\eqref{eq:fk1h} to obtain an approximation \n$u_{11h} \\in\\mathcal{S}_*^1(\\mathcal{T}_h^{\\Omega_1})$ of $u_{11}$. To discretize~\\eqref{eq:fk5},\nlet $u_{1h}\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_2})$ solve\n\\begin{align}\\label{eq:fk5:disc}\n \\gamma_2^{\\rm int}u_{1h} = I_h^{\\Gamma_2}\\gamma_2^{\\rm int}\\widetilde K_1 \\gamma_1^{\\rm int}u_{11h} \n \\text{ and }\n \\dual{\\nabla u_{1h}}{\\nabla v_h}_{\\Omega_2}= 0 \n \\text{ for all }v_h\\in\\mathcal{S}_0^1(\\mathcal{T}_h^{\\Omega_2}).\n\\end{align}\nThe discrete version of~\\eqref{eq:uext:nondim} reads as follows:\nLet $u_{{\\rm app},h} \\in \\mathcal{S}_*^1(\\mathcal{T}_h^{\\Omega_2})$ solve\n\\begin{align}\\label{eq:uext:disc}\n \\dual{\\nabla u_{{\\rm app},h}}{\\nabla v_h}_{\\Omega_2}\n = - \\dual{\\boldsymbol{f}}{\\nabla v_h}_{\\Omega_2}\n \\quad\\text{for all }v_h \\in \\mathcal{S}_*^1(\\mathcal{T}_h^{\\Omega_2}).\n\\end{align}\nFor the numerical solution of~\\eqref{eq:jn1}, we compute $(\\phi_h,u_h)\\in X_h:=\\mathcal{P}^0(\\mathcal{T}_h^{\\Omega_2}|_{\\Gamma_2})\\times \\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_2})$ \nsuch that\n\\begin{align}\\label{eq:jn:disc}\n\\begin{split}\n \\dual{(1+\\widetilde\\chi(|\\nabla u_h|))\\nabla u_h}{\\nabla v_h}_{\\Omega_2} \n - \\dual{\\phi_h}{v_h}_{\\Gamma_2}\n &= \\dual{\\partial u_{1h}\/\\partial{\\boldsymbol{\\nu}}_2}{v_h}_{\\Gamma_2}\n \\!-\\!\\dual{\\boldsymbol{f}}{\\nabla v_h}_{\\Omega_2}, \\\\\n \\dual{V_2\\phi_h + (1\/2-K_2)u_h}{\\psi_h}_{\\Gamma_2} \n &= \\dual{(1\/2-K_2)(u_{1h} + u_{{\\rm app},h})}{\\psi_h}_{\\Gamma_2}\n\\end{split}\n\\end{align}\nfor all $(\\psi_h,v_h)\\in X_h$. A minimal iterative solver for this class of nonlinear discrete systems is sketched below. 
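The following Python snippet is a purely illustrative sketch, not the solver used in this work: it applies the classical damped fixed-point iteration for Lipschitz continuous and strongly monotone operators (cf.\\ Ref.~\\cite{zeidler}), which converges for step sizes $0<\\theta<2\\gamma\/L^2$. The finite-dimensional toy operator below merely mimics the structure $(1+\\widetilde\\chi(|\\cdot|))$ of the material law; the matrix and right-hand side are synthetic placeholders rather than assembled FEM-BEM data.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 50\nQ = rng.standard_normal((n, n))\nS = Q @ Q.T / n + np.eye(n)            # SPD stand-in for the linear part\n\ndef chi(t):                            # tanh-type susceptibility (toy choice)\n    c1, c2 = 1.0, 2.0\n    return np.where(t > 0, c1 * np.tanh(c2 * t) / np.maximum(t, 1e-12), c1 * c2)\n\ndef A(w):                              # Lipschitz + strongly monotone operator\n    return S @ w + chi(np.linalg.norm(w)) * w\n\nb = rng.standard_normal(n)\n\n# crude constants: gamma >= lambda_min(S); the chi-term is 2-Lipschitz here,\n# so L <= lambda_max(S) + 3 is a safe overestimate\nlam = np.linalg.eigvalsh(S)\ngamma, L = lam.min(), lam.max() + 3.0\ntheta = gamma / L**2                   # inside the contraction range (0, 2*gamma/L**2)\n\nu = np.zeros(n)\nfor it in range(500):\n    r = A(u) - b\n    u -= theta * r\n    if np.linalg.norm(r) < 1e-10:\n        break\nprint(it, np.linalg.norm(A(u) - b))\n\\end{verbatim}\nIn practice, Newton-type methods are usually preferred for efficiency; the sketch only illustrates why strong monotonicity (established in the next subsection) guarantees a convergent iteration at all.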
Existence and uniqueness of $(\\phi_h,u_h)$\nis discussed in Section~\\ref{section:jn} below.\nTo discretize~\\eqref{eq:jn3}, let $u_{2h} \\in \\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_1})$ solve\n\\begin{align}\\label{eq:jn3:disc}\n\\begin{split}\n&\\gamma_1^{\\rm int}u_{2h} = I_h^{\\Gamma_1}\\gamma_1^{\\rm int}\\big(-\\widetilde V_2\\phi_h + \\widetilde K_2\\gamma_2^{\\rm int}(u_h - u_{1h} - u_{{\\rm app},h})\\big),\n\\\\\n&\\dual{\\nabla u_{2h}}{\\nabla v_h}_{\\Omega_1} = 0 \\quad \\text{ for all } v_h \\in \\mathcal{S}^1_0(\\mathcal{T}_h^{\\Omega_1}).\n\\end{split}\n\\end{align}\n\n\\subsubsection{Operator formulation}\n\\label{section:SAR}\n\nWith respect to the abstract notation of Lemma~\\ref{prop:nonlin:conv},\nthe solutions of the problems~\\eqref{eq:fk5}--\\eqref{eq:uext:nondim} and~\\eqref{eq:fk5:disc}--\\eqref{eq:uext:disc} give rise to\nthe continuous linear operators\n\\begin{align}\n\\begin{split}\\label{eq:multiscale:R}\n \\widetilde R,\\widetilde R_h&:\\boldsymbol{H}^1(\\Omega_1)\\times \\boldsymbol{L}^2(\\Omega_2)\n \\to H^{1\/2}(\\Gamma_2)\\times \\widetilde H^{-1}(\\Omega_2),\\\\\n \\widetilde R(\\boldsymbol{m},\\boldsymbol{f}) \n &:=\n \\big((1\/2-K_2)\\gamma_2^{\\rm int}(u_1+u_{\\rm app}),(\\gamma_2^{\\rm int})^*\\delta_2^{\\rm int} u_1 -\\nabla^*\\boldsymbol{f}\\big),\\\\\n \\widetilde R_h(\\boldsymbol{m},\\boldsymbol{f}) \n &:=\n \\big((1\/2-K_2)\\gamma_2^{\\rm int}(u_{1h}+u_{{\\rm app},h}),(\\gamma_2^{\\rm int})^*\\partial u_{1h}\/\\partial{\\boldsymbol{\\nu}}_2 -\\nabla^*\\boldsymbol{f}\\big),\n\\end{split}\n\\end{align}\nwhere $(\\gamma_2^{\\rm int})^*:H^{-1\/2}(\\Gamma_2)\\to \\widetilde H^{-1}(\\Omega_2)$ \ndenotes the adjoint of the trace operator \n$\\gamma_2^{\\rm int}:H^1(\\Omega_2)\\to H^{1\/2}(\\Gamma_2)$\nand $\\nabla^*:\\boldsymbol{L}^2(\\Omega_2)\\to\\widetilde H^{-1}(\\Omega_2)$ is the adjoint gradient.\nNote that $\\widetilde R$, $\\widetilde R_h$ are also well-defined and bounded operators \non $\\boldsymbol{L}^2(\\Omega_1)\\times\\boldsymbol{L}^2(\\Omega_2)$ and hence, by interpolation, on $\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)\\times\\boldsymbol{L}^2(\\Omega_2)$ for all $0<\\varepsilon<1$.\nMoreover, the left-hand side of~\\eqref{eq:jn1} gives rise to the (non-linear) operator\n\\begin{align}\n\\begin{split}\\label{eq:multiscale:A}\n &\\widetilde A: X\\to X^*,\\\\\n &\\dual{\\widetilde A(\\phi,u)}{(\\psi,v)}_{X^*\\times X}\n := \\dual{V_2\\phi + (1\/2-K_2)\\gamma_2^{\\rm int}u}{\\psi}_{\\Gamma_2}\n + \\dual{(1+\\widetilde\\chi(|\\nabla u|))\\nabla u}{\\nabla v}_{\\Omega_2}\n - \\dual{\\phi}{\\gamma_2^{\\rm int}v}_{\\Gamma_2},\n\\end{split}\n\\end{align}\nwhile the solutions of~\\eqref{eq:jn3} resp.~\\eqref{eq:jn3:disc} define the linear operators\n\\begin{align}\\label{eq:multiscale:S}\n \\widetilde S,\\widetilde S_h: X\\to\\boldsymbol{L}^2(\\Omega_1),\\quad\n \\widetilde S(\\phi,u) := \\nabla u_2\n \\quad\\text{and}\\quad\n \\widetilde S_h(\\phi,u) := \\nabla u_{2h},\n\\end{align}\nwhere $u_2$ resp.\\ $u_{2h}$ solve~\\eqref{eq:jn3} resp.~\\eqref{eq:jn3:disc} with $(\\phi_h,u_h)$ replaced by $(\\phi,u)$.\nWith these operators, the discrete approximation of $\\boldsymbol{\\pi}(\\boldsymbol{m},\\boldsymbol{f})=\\nabla u_2$ on $\\Omega_1$ reads\n\\begin{align}\\label{kotz:multiscale}\n \\boldsymbol{\\pi}_h(\\boldsymbol{m},\\boldsymbol{f}) := \\widetilde S_h(\\phi_h,u_h),\n\\end{align}\nwhere $(\\phi_h,u_h)\\in X_h$ solves~\\eqref{eq:jn:disc}.\n\\subsubsection{Well-posedness of the Johnson-N\\'ed\\'elec coupling}\n\\label{section:jn}\nThe following lemma collects the required properties of the material law; cf.\\ Ref.~\\cite{affkmp}.\n\\begin{lemma}\\label{lemma:lipAmonA}\nSuppose that the material law $\\widetilde\\chi:[0,\\infty)\\to[0,\\infty)$ is such that $g(t):=(1+\\widetilde\\chi(t))\\,t$ satisfies\n\\begin{align}\\label{eq:gproperty}\n \\gamma\\,(t-s) \\le g(t)-g(s) \\le L\\,(t-s)\n \\quad\\text{for all }0\\le s\\le t\n\\end{align}\nwith constants $L\\ge\\gamma>0$.\n Then, the (non-linear) operator\n \\begin{align*}\n \\mathcal A : \\boldsymbol{L}^2(\\Omega_2) &\\rightarrow \\boldsymbol{L}^2(\\Omega_2), \\quad \\mathcal A\\mathbf w = (1+\\widetilde\\chi(|\\mathbf w|))\\mathbf w \n \\end{align*}\n is Lipschitz continuous and strongly monotone, i.e., there holds\n \\begin{align}\\label{eq:kotz}\n L^{-2}\\,\\norm{\\mathcal A\\mathbf u-\\mathcal A\\mathbf v}{\\boldsymbol{L}^2(\\Omega_2)}^2 &\\leq \\norm{\\mathbf u-\\mathbf v}{\\boldsymbol{L}^2(\\Omega_2)}^2\n \\le\\gamma^{-1}\\,\\dual{\\mathcal A\\mathbf u-\\mathcal A\\mathbf v}{\\mathbf u-\\mathbf v}_{\\Omega_2}\n \\end{align}%\n for all $\\mathbf u,\\mathbf v\\in\\boldsymbol{L}^2(\\Omega_2)$.\n\\qed\n\\end{lemma}\n\nWe stress that the operator $\\widetilde A$ from~\\eqref{eq:jn1} resp.~\\eqref{eq:multiscale:A} is \\emph{not} strongly monotone as, e.g., the left-hand side of~\\eqref{eq:jn1} is zero for $(\\phi,u)=(0,1)$. 
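This can be verified directly: for $(\\phi,u)=(0,1)$, there holds $\\nabla u = 0$ and $\\phi = 0$, so that the first component of the left-hand side of~\\eqref{eq:jn1} vanishes, while the well-known mapping property $K_2 1 = 1\/2$ of the double-layer operator (which follows by applying the representation formula to the constant function $u\\equiv1$; see, e.g., Ref.~\\cite{sauschwa}) yields for the second component\n\\begin{align*}\n V_2\\phi + (1\/2-K_2)\\gamma_2^{\\rm int}u = (1\/2-K_2)1 = 0.\n\\end{align*}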
\nTo overcome this problem, we define the linear operator\n\\begin{align}\\label{eq:multiscale:L}\n L: X^*\\to X^*,\\quad Lx^* := x^* + \\dual{x^*}{(1,0)}_{X^*\\times X} \\dual{\\widetilde A(\\cdot,\\cdot)}{(1,0)}_{X^*\\times X},\n\\end{align}\nwhere $1$ denotes the constant function on $\\Gamma_2$.\nAs observed in Ref.~\\cite{affkmp}, Section~4, the Johnson-N\\'ed\\'elec coupling equations can then be equivalently rewritten as follows:\n\n\\begin{lemma}\\label{prop:equivalenceA}\nThe operator $L: X^* \\to X^*$ from~\\eqref{eq:multiscale:L} is well-defined, linear, and continuous. Let $\\widetilde A$ be the operator from~\\eqref{eq:jn1} resp.~\\eqref{eq:multiscale:A}. Define $A := L\\widetilde{A}$.\nLet $X_\\star$ be a closed subspace of $X=H^{-1\/2}(\\Gamma_2)\\times H^1(\\Omega_2)$\nwith $(1,0)\\in X_\\star$. Then, for any $\\tilde x^*\\in X^*$ and $x^*:=L\\tilde x^*$,\nthe pair $(\\phi_\\star,u_\\star) \\in X_\\star$ solves the operator formulation\n\\begin{align*}\n \\dual{\\widetilde A(\\phi_\\star, u_\\star)}{(\\psi_\\star,v_\\star)}_{X^*\\times X}\n = \\dual{\\tilde x^*}{(\\psi_\\star,v_\\star)}_{X^*\\times X}\n \\quad\\text{for all }(\\psi_\\star,v_\\star)\\in X_\\star\n\\end{align*}\nif and only if\n \\begin{align*}\n \\dual{A (\\phi_\\star,u_\\star)}{(\\psi_\\star,v_\\star)}_{X^*\\times X} = \\dual{x^*}{(\\psi_\\star,v_\\star)}_{X^*\\times X}\n \\quad\\text{for all }(\\psi_\\star,v_\\star)\\in X_\\star.\n \\end{align*}\nUnder the assumptions of Lemma~\\ref{lemma:lipAmonA} with $\\gamma>1\/4$, the operator $A = L\\widetilde A$ \n is Lipschitz continuous and strongly monotone.\n In particular, it fulfils the assumptions of the Browder-Minty theorem for \n strongly monotone operators.\n In this case, $A$ as well as $\\widetilde A$ are, in particular, invertible, and \n $\\widetilde A^{-1}\\tilde x^* = A^{-1}x^*$.\n \\qed\n\\end{lemma}\n\nFor $\\gamma>1\/4$, the preceding lemma applies to $X_\\star = X = H^{-1\/2}(\\Gamma_2)\\times H^1(\\Omega_2)$ as well as $X_\\star = X_h = \\mathcal{P}^0(\\mathcal{T}_h^{\\Omega_2}|_{\\Gamma_2})\n\\times\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_2})$ and thus proves \nthat~\\eqref{eq:jn1} as well as~\\eqref{eq:jn:disc} admit unique solutions.\n\\par Finally, we give some examples of material laws $\\widetilde\\chi$ covered by Lemma~\\ref{prop:equivalenceA}.\n\\begin{remark}\\label{rem:materiallaw}\n (i) Consider the material law \n \\begin{align*}\n \\widetilde\\chi(t) = \\c{tanh1}\\tanh(\\c{tanh2} t)\/t\\quad\\text{for }t>0,\n \\quad\\widetilde\\chi(0) = \\c{tanh1}\\c{tanh2}\n \\end{align*}\n with dimensionless constants\n $\\setc{tanh1},\\setc{tanh2}>0$. Then, $g(t) = t + \\c{tanh1}\\tanh(\\c{tanh2}t)$ fulfils~\\eqref{eq:gproperty}\n with $\\gamma = 1$ and $L = 1+\\c{tanh1}\\c{tanh2}$.\n \\\\ (ii)\n According to Ref.~\\cite{rzmp81}, it is reasonable to approximate the magnetic susceptibility in terms of a\n rational function, e.g.,\n \\begin{align*}\n \\widetilde\\chi(t) = \\frac{\\c{chit1} + \\c{chit2}t }{1+\\c{chit3}t + \\c{chit4}t^2 }\n \\end{align*}\n with certain, material-dependent constants\n $\\setc{chit1},\\setc{chit2},\\setc{chit3},\\setc{chit4}>0$. 
For typical materials, it holds~\\eqref{eq:gproperty} with $\\gamma=1$ and some $L>1$ that depends on $\\c{chit1},\\c{chit2},\\c{chit3},\\c{chit4}$, see Ref.~\\cite{rzmp81}, Table~1.\n\\end{remark}\n\\subsubsection{Convergence analysis}\nThe main result of this section is the following proposition.\n\\begin{proposition}\nIn addition to $\\boldsymbol{f}\\in L^2(\\Omega_T)$, suppose that $\\boldsymbol{f}\\in L^\\infty(\\boldsymbol{L}^2(\\Omega_2))$.\nAdopt the notation of Section~\\ref{section:SAR} for the operators \n$\\widetilde R,\\widetilde R_h$ from~\\eqref{eq:multiscale:R}, $\\widetilde A$ from~\\eqref{eq:multiscale:A}\nand $\\widetilde S,\\widetilde S_h$ from~\\eqref{eq:multiscale:S}. Under the assumptions of Lemma~\\ref{lemma:lipAmonA} with $\\gamma>1\/4$, the operator \n$\\boldsymbol{\\pi}:=\\widetilde S\\widetilde A^{-1}\\widetilde R$\nand its discretization $\\boldsymbol{\\pi}_h$ from~\\eqref{kotz:multiscale} satisfy the \nassumptions~\\eqref{assumption:chi:bounded}--\\eqref{assumption:chi:convergence}\nof Theorem~\\ref{theorem}.\n\\end{proposition}\n\\begin{proof}\nWith Lemma~\\ref{prop:equivalenceA}, there exists a linear and continuous operator \n$L:X^*\\to X^*$ such that $A:=L\\widetilde A$ is Lipschitz continuous and strongly\nmonotone. It holds $\\boldsymbol{\\pi} = \\widetilde SA^{-1}R$ with $R:=L\\widetilde R$ and $\\boldsymbol{\\pi}_h(\\boldsymbol{m},\\boldsymbol{f}) \n= \\widetilde S_h(\\phi_h,u_h)$, where $(\\phi_h,u_h)$ solves with $R_h:=L\\widetilde R_h$\nthe variational formulation\n\\begin{align*}\n \\dual{A(\\phi_h,u_h)}{(\\psi_h,v_h)}_{X^*\\times X}\n = \\dual{R_h(\\boldsymbol{m},\\boldsymbol{f})}{(\\psi_h,v_h)}_{X^*\\times X}\n \\text{ for all }(\\psi_h,v_h)\\in X_h.\n\\end{align*}\nTherefore, the claim follows from Lemma~\\ref{prop:nonlin:conv} if we prove that\nthere exists some $\\varepsilon>0$ such that\n\\begin{itemize}\n\\item[(i)] $\\widetilde R_h(\\boldsymbol{m},\\boldsymbol{f}) \\to \\widetilde R(\\boldsymbol{m},\\boldsymbol{f})$ strongly in $X^*$ \nfor all $(\\boldsymbol{m},\\boldsymbol{f})\\in\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)\\times\\boldsymbol{L}^2(\\Omega_2)$;\n\\item[(ii)] $\\widetilde S_h x \\to \\widetilde Sx$ strongly in $\\boldsymbol{L}^2(\\Omega_1)$ for all \n$x\\in X$.\n\\end{itemize}\nTo verify (i), we argue as in the proofs of Proposition~\\ref{prop:stray field}\nand Proposition~\\ref{prop:gcr}. First, elliptic regularity for the Neumann problem\n\\eqref{eq:fk1} (see, e.g., Ref.~\\cite{monk}, Theorem 3.8) provides \nsome $\\varepsilon>0$ such that, for $\\boldsymbol{m}\\in\\boldsymbol{H}^{1-\\varepsilon}(\\Omega_1)$, it holds\n$\\norm{u_{11}-u_{11h}}{H^1(\\Omega_1)} = \\mathcal O(h^{1\/2+\\varepsilon})$.\nSecond, recall that $u_1=\\widetilde K_1\\gamma_1^{\\rm int}u_{11}\\in C^\\infty(\\overline\\Omega_2)\\subset H^2(\\Omega_2)$. 
\nHence, the inhomogeneous Dirichlet problem~\\eqref{eq:fk5:disc} leads to\n\\begin{align*}\n \\norm{u_1-u_{1h}}{H^1(\\Omega_2)}\n \\lesssim \\min_{v_h\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_2})}\\norm{u_{1}-v_h}{H^1(\\Omega_2)}\n + \\norm{u_{11}-u_{11h}}{H^1(\\Omega_1)} = \\mathcal O(h^{1\/2+\\varepsilon}).\n\\end{align*}\nThird, arguing as in the proof of Proposition~\\ref{prop:gcr}, we derive\n\\begin{align*}\n \\norm{\\delta_2^{\\rm int}u_1 - \\partial u_{1h}\/\\partial{\\boldsymbol{\\nu}}_2}{H^{-1\/2}(\\Gamma_2)}\n =\\mathcal O(h^\\varepsilon).\n\\end{align*}\nFourth, the discretization~\\eqref{eq:uext:disc} of the auxiliary potential guarantees\n\\begin{align*}\n \\norm{u_{\\rm app}-u_{{\\rm app},h}}{H^1(\\Omega_2)}\n \\lesssim \\min_{v_h\\in\\mathcal{S}^1(\\mathcal{T}_h^{\\Omega_2})}\\norm{u_{\\rm app}-v_h}{H^1(\\Omega_2)}\n \\xrightarrow{h\\to0}0.\n\\end{align*}\nBy definition~\\eqref{eq:multiscale:R} of the operators $\\widetilde R$ and $\\widetilde R_h$, the combination of the \nforegoing convergences proves (i).\n\nThe verification of (ii) follows along the same lines. This concludes the proof.\n\\end{proof}\n\n\\section{Introduction}\n\\label{sec:intro}\n\nAnomaly detection is the problem of identifying abnormal samples among a group of normal data. This is a deviation from many machine learning problems because the set of abnormal data is either poorly sampled or unavailable during training. Recently, anomaly detection has drawn considerable attention and has many applications in the field of computer vision, like marker discovery in biomedical data \\cite{schlegl2017unsupervised} and crime detection in surveillance videos \\cite{luo2017revisit}. Tackling these problems involves modelling the distribution of normal visual samples in a way that anomalies are identified at test time.\n\nDeep neural networks have become a popular choice to reach state-of-the-art performance in anomaly detection. Despite their good performance, these models suffer from high computational cost, complexity, and training instability, making them difficult to use in practice. To overcome these limitations, we propose training a relatively shallow CNN with continuous labelling and anomaly creation, yielding state-of-the-art performance on anomaly detection with significantly fewer parameters and less training time. Specifically, we approach anomaly detection as a supervised regression problem, where the model's objective is to map normal and created anomalous data to highly separable distributions.\n\nDue to the unavailability of anomalies, we apply simple data augmentation techniques on normal data to create distinct anomalies. With the new set of anomalous data, we can treat anomaly detection as a supervised learning problem. Since there are now two classes, it is intuitive to treat this as a binary classification problem. However, we show that using regression instead of classification improves anomaly detection performance. Furthermore, we introduce continuous labelling as a means of improving performance stability.\n\nWe evaluated our proposed method, Augment to Detect Anomalies with Continuous Labelling (ADACL), on various benchmark datasets for anomaly detection. ADACL outperforms most state-of-the-art methods using significantly fewer parameters. We also provide a thorough study on loss functions, the choice of labels and the effects of the different augmentations. 
In this paper, our contributions are the following:\n\n\\begin{itemize}\n \\item We propose a novel method of anomaly detection which includes a lightweight CNN trained with regression, anomaly creation with augmentations and continuous labelling to improve performance stability. \n \\item Our method is simple yet outperforms most state-of-the-art approaches.\n \\item We study the effects of various losses, data augmentations and continuous labelling on anomaly detection performance.\n\\end{itemize}\n\n\n\n\\section{Related Works}\n\\label{sec:related_works}\nSeveral proposed methods such as reconstruction-based approaches take advantage of self-representation learning. They rely on the reconstruction error as a metric to decide whether or not an instance corresponds to the distribution of training examples \\cite{xia2015learning, sabokrou2016video}. As such, various types of autoencoders like denoising auotoencoders and context autoencoders \\cite{zimmerer2018context} are used for anomaly detection. Most of the deep learning-based models with an autoencoder architecture \\cite{sakurada2014anomaly, zhai2016deep, zhou2017anomaly, zong2018deep, chong2017abnormal} also use reconstruction error to detect anomalies. These methods strive to exclusively learn the distribution of normal data in training such that they fail to generalize to anomalies. Even though these methods can be effective in some cases, it has been shown that they generalize well to reconstruct out-of-distribution samples and thus fail to recognize anomalies at testing stage.\n\n\n\nSome works used deep convolutional generative adversarial network (DCGAN) \\cite{radford2015unsupervised} to learn a manifold of normal images for anomaly detection by mapping from an image space to a random distribution \\cite{schlegl2017unsupervised, schlegl2019f}. Sabokrou et al. \\cite{sabokrou2018adversarially} proposed a one-class classification framework consisting of a Reconstructor (R) and a Discriminator (D). R serves as a denoising autoencoder, while D operates as the detector. These two networks are trained adversarially in an end-to-end perspective. In an extension to this, Zaheer et al. \\cite{zaheer2020old} redefined the adversarial one-class classifier training setup by changing the role of the discriminator to classify between good and bad quality reconstructions and improved the performance even further. In \\cite{perera2019ocgan}, Perera et al. leveraged an autoencoder architecture to enforce the normal instances to be distributed uniformly across the latent space. \\cite{jewell2021oled} utilized adversarial setup to learn more robust representations by intelligently masking the input.\n\n\\cite{salehi2021multiresolution} and \\cite{georgescu2021anomaly} have attempted to benefit from deep pre-trained networks by distilling the knowledge where a small student model learns from a large teacher model. In \\cite{salehi2021multiresolution}, they utilized a VGG-16 \\cite{simonyan2014very} to calculate a multi-level loss from different activations for training the student network to determine the anomaly score. They also incorporate interpretability algorithms in their framework to localize anomalous regions and perform anomaly segmentation. Although knowledge-distillation methods could perform anomaly detection with high performance, they benefit from pre-training on millions of labeled images which is not effective in other modalities of data. 
Also, in practice, knowledge-distillation methods may not be suitable due to computationally expensive inference. Our proposed method does not leverage pre-trained networks, so we do not compare against these approaches.\n\nGong et al. proposed a deep autoencoder augmented with a memory module \\cite{gong2019memorizing} to encode the input to a latent space with the encoder. The resulting latent vector is used as a query to retrieve the most relevant memory item for reconstruction with the decoder. Also, in \\cite{park2020learning}, the authors introduced a memory module with items that capture prototypical models of the inlier class with a new update system.\n\n\nSome of the proposed anomaly detection methods that rely on learning the distribution of inliers cannot be applied to real-world applications. Generating anomalies alongside the available normal data builds an informative training set for the task of anomaly detection. Employing GANs for generating anomalous data turns the problem of anomaly detection into a binary classification problem. This method can also be used for data augmentation for anomalous data. In \\cite{pourreza2021g2d}, the authors trained a Wasserstein GAN on normal samples and utilized the generator before convergence. In this case, generated irregular data have a controlled deviation from inliers. Although they set a new research direction in anomaly detection, training a network to generate outliers is computationally expensive.\n\n\\section{Method}\n\\label{sec:method}\n\n\\subsection{Motivation}\nDeep neural networks have shown great performance in solving anomaly detection problems. However, they often have a large number of parameters, making them difficult and expensive to train. Not only that, most deep models suffer from training instability, complexity, and difficult implementation, and are intractable to deploy in real-world applications. To overcome these issues, we propose ADACL, where we follow an intuitive and stable training procedure which also exceeds state-of-the-art performance. ADACL is simple to implement and has relatively few parameters, leading to inexpensive training and fast inference time. Therefore, it is more suitable for use in real-world scenarios. \n\n\\subsection{Approach}\n\nTo improve performance in anomaly detection, it is desirable to produce representations of normal and anomalous samples that have distinct distributions. In our method, we redefine anomaly detection as a supervised regression problem. However, the training data consists mainly of normal samples, which makes supervised learning a cumbersome task. To solve this issue, we leverage straightforward data augmentations to create anomalous samples during training. \n\n\\subsubsection{Regression for Anomaly Detection}\nWe utilize a lightweight convolutional neural network (CNN) to train on the normal and created anomalies. The CNN acts as a regressor that outputs a continuous value between 0 and 1 to represent normal and anomalous data. Even though this can be considered a binary classification problem, we show that regression offers fast convergence and high performance in anomaly detection. In the following, we justify two design choices: using value clipping instead of a sigmoid in the last layer of the CNN, and choosing Mean Squared Error (MSE) over Binary Cross Entropy (BCE) as the loss function. A small numerical sketch comparing the gradient behaviour of these configurations is given below. 
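The following minimal Python sketch is illustrative only and is not part of our training code; it tabulates the gradient magnitudes of three output\/loss configurations, each taken with respect to the quantity the optimizer actually updates through: BCE behind a sigmoid, MSE behind a sigmoid, and MSE on a clipped linear output.\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\n# target is the label 1; probe several pre-activations z\nz = np.array([0.0, 1.0, 2.0, 4.0, 6.0])\np = sigmoid(z)\ny = 1.0\n\ngrad_bce_sigmoid = np.abs(p - y)                      # d/dz BCE(sigmoid(z), y)\ngrad_mse_sigmoid = np.abs(2 * (p - y) * p * (1 - p))  # d/dz MSE(sigmoid(z), y)\ngrad_mse_clip    = np.abs(2 * (p - y))                # d/dyhat MSE(yhat, y)\n\nfor row in zip(p, grad_bce_sigmoid, grad_mse_sigmoid, grad_mse_clip):\n    print("pred=%.4f  BCE+sig=%.4f  MSE+sig=%.6f  MSE+clip=%.4f" % row)\n\\end{verbatim}\nThe sigmoid-mediated gradients shrink rapidly as the pre-activation grows, with MSE behind a sigmoid saturating fastest, whereas the clipped-linear MSE gradient stays proportional to the prediction error inside the admissible range, which matches the design choice discussed next.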
In the case of binary classification, equation \\ref{sigmoidgrowth} shows that the rate of change of the sigmoid function is always decreasing as the prediction approaches the ground truth target. Also, as shown in figure \\ref{sigandfirstsig}, the gradient values approach zero in the same manner. Due to the saturation of gradients near the target, updating the weights will be less effective, resulting in slow convergence. A similar problem exists in binary classification with Binary Cross Entropy (BCE). Negative log likelihood in BCE also exhibits a decreasing rate of change as the predicted value approaches the ground truth target. Referring to figure \\ref{bcemsederiv}, Mean Squared Error (MSE) provides stronger gradients as the prediction approaches the target, which results in faster convergence. We perform anomaly detection experiments on ADACL and find that MSE not only converges faster but also maintains consistently high performance across multiple training runs. Consequently, we transform anomaly detection into a regression problem to reach the optimal solution faster.\n\n\n\\begin{equation}\n\\begin{aligned}\nh(x) = \\frac{1}{1 + e^{-x}} \n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nh'(x) = h(x)(1-h(x))\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\text{for } x < 0&: \\qquad h'(x) - h'(x-1) > 0, \\\\\n\\text{for } x > 0&: \\qquad h'(x-1) - h'(x) > 0.\n\\end{aligned}\n\\label{sigmoidgrowth}\n\\end{equation}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/sigmoidandderiv.png}\n\\end{center}\n \\caption{Sigmoid function and the first derivative of the sigmoid function. The rate of change of the sigmoid decreases to 0 as the prediction approaches its target.}\n\\label{sigandfirstsig}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/bce_mse_deriv_comp.png}\n\\end{center}\n \\caption{The first derivatives of MSE and BCE. Notice that the gradients of MSE become larger than those of BCE after a certain point.}\n\\label{bcemsederiv}\n\\end{figure}\n\n\\begin{figure*}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/model.png}\n\\end{center}\n \\caption{Overview of ADACL architecture. Normal examples and created anomalies are assigned continuous labels and fed into the CNN. This regression model outputs a continuous value between 0 and 1 as an anomaly score. At test time, the model is evaluated with real anomalies.}\n\\label{modelarch}\n\\end{figure*}\n\n\\subsubsection{Continuous Labelling}\nIn the anomaly detection problem, we can label normal and anomalous data as 0 and 1, respectively. We call this Discrete Labelling (DL). However, experimental results show that this leads to high variance in anomaly detection performance. Instead, we use Continuous Labelling (CL), where we designate two continuous intervals corresponding to normal and anomalous data, and sample labels from them using a uniform distribution. The intuition behind continuous labelling is that the expected value of MSE over predictions is lower in comparison to using discrete labelling. A discrete label takes values in $\\{0, 1\\}$, whereas a continuous label takes values in $[0, X_L] \\cup [X_H, 1]$, where $X_L$ is the upper bound of the normal-class interval and $X_H$ is the lower bound of the anomaly-class interval (a minimal sampling sketch is given below). 
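The following short sketch shows one way to draw such continuous labels; the interval bounds $X_L=0.3$ and $X_H=0.7$ follow the example values given later in this section, and the empirical means correspond to the quantities $A$ and $B$ introduced next.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nX_L, X_H = 0.3, 0.7\n\ndef continuous_labels(is_anomaly, rng):\n    # normal samples get labels in [0, X_L], anomalies in [X_H, 1]\n    lo = np.where(is_anomaly, X_H, 0.0)\n    hi = np.where(is_anomaly, 1.0, X_L)\n    return rng.uniform(lo, hi)\n\nis_anomaly = rng.integers(0, 2, size=100000).astype(bool)\nlabels = continuous_labels(is_anomaly, rng)\nprint(labels[~is_anomaly].mean())  # ~X_L/2 = 0.15 (the quantity A below)\nprint(labels[is_anomaly].mean())   # ~(X_H+1)/2 = 0.85 (the quantity B below)\n\\end{verbatim}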
Because we sample from a uniform distribution, $A = \\mathbb{E}([0, X_L]) = \\frac{X_L}{2}$ and $B = \\mathbb{E}([X_H, 1]) = \\frac{X_H + 1}{2}$. $A$ and $B$ are the expected label values for the normal and anomaly classes, respectively. The MSE function takes two numbers as the input to calculate the loss. In equation \\ref{expectedmse}, we let the prediction of our model be 0.5 (the farthest point from both the lower and upper bounds). The following inequalities show that the value of the MSE loss is always lower when using continuous labelling compared to discrete labelling:\n\n\\begin{equation}\n\\begin{aligned}\nMSE(0.5, A) < MSE(0.5, 0) \\quad&\\text{if } A > 0, \\\\\nMSE(0.5, B) < MSE(0.5, 1) \\quad&\\text{if } 1 > B.\n\\end{aligned}\n\\label{expectedmse}\n\\end{equation}\n\nTherefore:\n\n\\begin{equation}\n\\begin{aligned}\n\\mathbb{E}\\big(MSE(\\text{prediction}, CL)\\big) < \\mathbb{E}\\big(MSE(\\text{prediction}, DL)\\big)\n\\end{aligned}\n\\label{expected}\n\\end{equation}\n\nAccording to equation \\ref{expected}, if we choose continuous labelling over discrete labelling, the expected value of the loss is lower during training and thus convergence is slower, which should increase training stability. The experimental results in Figures \\ref{ValVarianceCIFAR} and \\ref{AUCVarianceCIFAR} support this hypothesis.\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/augs.png}\n\\end{center}\n \\caption{Various augmentations applied to create anomalies. The first, second, and third rows contain normal and created anomalies with different augmentations of images from UCSD, MNIST and CIFAR-10, respectively.}\n\\label{augs}\n\\end{figure}\n\n\\subsubsection{Anomaly Creation}\nSolving anomaly detection as a supervised regression problem requires a dataset containing both normal and anomalous data. To compensate for the unavailability of anomalies, we utilize data augmentation techniques to create them during training. Examples of these augmentations are shown in figure \\ref{augs}. The following are descriptions of our proposed augmentations for ADACL:\n\n\\begin{itemize}\n \\item \\textbf{\\textit{Cut-Paste:}} Randomly select a patch from the image and place it in a random location. \n \\item \\textbf{\\textit{Puzzling:}} Take quarters of the image and shuffle them.\n \\item \\textbf{\\textit{Rotation:}} Rotate the image 90 degrees one or three times.\n \\item \\textbf{\\textit{Mix-up:}} Add a rotated image to the original one. Prior to adding, the rotated and original images are multiplied by respective mixing coefficients. \n\\end{itemize}\n\nTo assign training labels to normal and created anomalies, we pick two separate continuous intervals from which we uniformly sample. For example, normal and anomalous labels are in the ranges $[0, 0.3]$ and $[0.7, 1]$, respectively (a sketch covering anomaly creation within a full training step follows the implementation details below).\n\n\\subsection{Implementation Details}\nOur method, which is depicted in figure \\ref{modelarch}, uses a simple CNN with fewer than 300k parameters. As a regressor, this model outputs a continuous value which is clipped between 0 and 1. It is trained on the inlier samples and created anomalies, which are augmented versions of the normal data. We use different variations of the Adam optimizer in conjunction with a cyclic learning rate. Also, we designate a fixed number of training epochs for each dataset. Then, based on the validation set, we use early stopping techniques to terminate training. This validation set consists of 150 randomly selected normal samples and their augmented versions as anomalies. 
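To make the preceding description concrete, the following PyTorch sketch shows one training step. It is illustrative rather than a faithful reimplementation: the layer configuration, the single rotation-based anomaly creation, and all hyperparameter values are placeholder assumptions, chosen only so that the snippet is self-contained and the model stays well below 300k parameters.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n# small placeholder CNN regressor (roughly 56k parameters)\nmodel = nn.Sequential(\n    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),\n    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),\n    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),\n    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),\n)\nopt = torch.optim.Adam(model.parameters(), lr=1e-3)\nX_L, X_H = 0.3, 0.7\n\ndef train_step(normal_batch):\n    # create anomalies from the normal batch, here simply by rotation\n    anomalies = torch.rot90(normal_batch, k=1, dims=(2, 3))\n    x = torch.cat([normal_batch, anomalies], dim=0)\n    n = normal_batch.size(0)\n    # continuous labels: normal in [0, X_L], anomalous in [X_H, 1]\n    labels = torch.cat([torch.rand(n) * X_L, X_H + torch.rand(n) * (1 - X_H)])\n    # regression output, clipped to [0, 1] instead of a sigmoid\n    pred = torch.clamp(model(x).squeeze(1), 0.0, 1.0)\n    loss = F.mse_loss(pred, labels)\n    opt.zero_grad()\n    loss.backward()\n    opt.step()\n    return loss.item()\n\nprint(train_step(torch.rand(8, 3, 32, 32)))\n\\end{verbatim}\nNote that the output is clipped to $[0,1]$ instead of being passed through a sigmoid, and the targets are drawn from the two continuous label intervals.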
The random selection of validation samples is kept consistent across runs by fixing a random seed. \n\\section{Experiments and Results}\n\\label{sec:exp_res}\n\\subsection{Anomaly Detection in Images}\n\nThe image datasets we choose for anomaly detection in this paper are MNIST \\cite{lecun1998mnist}, FMNIST \\cite{xiao2017fashion} and CIFAR-10 \\cite{krizhevsky2009learning}. These benchmark datasets are standard in the anomaly detection literature. In the following, we provide descriptions and protocols defined on each dataset. \n\\par\n{\\bf MNIST:} It is a dataset of handwritten digits that has 60,000 images. Samples in MNIST are grayscale with a resolution of 28 × 28. This is a benchmark dataset in anomaly detection.\n\\par\n{\\bf FMNIST:} Fashion MNIST also contains 60,000 28 × 28 grayscale images of fashion accessories, but since there is a significant amount of intra-class variation, it is a more challenging dataset compared to MNIST.\n\n{\\bf CIFAR-10:} This dataset consists of 10 classes of 32 × 32 RGB images of natural objects. With high intra-class variance, CIFAR-10 is a more challenging benchmark for anomaly detection.\n\n\\par\nThe protocol we follow for these three datasets is to consider one class as normal data and the rest as anomalies. To measure anomaly detection performance, we calculate the Area Under the ROC Curve (AUROC) for each class and report the average over all classes as the final performance. AUROC results on these datasets are shown in Table \\ref{table:mnist,fashionmnist,CIFAR-10}. From our results, ADACL outperforms recent state-of-the-art anomaly detection methods. Moreover, figure \\ref{normalauganomexamples} depicts the model's predictions on normal, augmented and anomalous samples across different datasets. Figure \\ref{learned_repr} shows the 3D distributions of learned representations of normal and anomalous samples for classes 1 and 8 of MNIST, demonstrating the separability of the learned representations.\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=0.7\\linewidth]{latex\/figures\/preds.png}\n\\end{center}\n \\caption{Predictions of the model on normal, augmented and anomalous samples from UCSD, MNIST, and CIFAR-10.}\n\\label{normalauganomexamples}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/embs.png}\n\\end{center}\n \\caption{3D visualization of the learned representations of classes 1 and 8 of the MNIST dataset. 
As shown, there are separable and distinct distributions of normal and anomalous embeddings.}\n\\label{learned_repr}\n\\end{figure}\n\n\\begin{table*}[htbp]\n\\centering\n\\caption{AUROC in \\% for anomaly detection on MNIST \\cite{lecun1998mnist}, Fashion-MNIST \\cite{xiao2017fashion} and CIFAR-10 \\cite{krizhevsky2009learning} datasets.}\n\\label{table:mnist,fashionmnist,CIFAR-10}\n\\resizebox{\\textwidth}{!}{\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nDataset & Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Mean\\\\\n\n\\hline\n\n\nMNIST\n& AnoGAN\\cite{schlegl2017unsupervised} & 96.6 & 99.2 & 85.0 & 88.7 & 89.4 & 88.3 & 94.7 & 93.5 & 84.9 & 92.4 & 91.3\\\\\n& DSVDD\\cite{ruff2018deep} & 98.0 & 99.7 & 91.7 & 91.9 & 94.9 & 88.5 & 98.3 & 94.6 & 93.9 & 96.5 & 94.8\\\\\n& OCSVM\\cite{scholkopf2002learning} & 99.5 & 99.9 & 92.6 & 93.6 & 96.7 & 95.5 & 98.7 & 96.6 & 90.3 & 96.2 & 96.0\\\\\n& CapsNet\\textsubscript{PP} \\cite{li2020exploring} & 99.8 & 99.0 & 98.4 & 97.6 & 93.5 & 97.0 & 94.2 & 98.7 & 99.3 & 99.0 & 97.7\\\\\n& OCGAN\\cite{perera2019ocgan} & 99.8 & 99.9 & 94.2 & 96.3 & 97.5 & 98.0 & 99.1 & 98.1 & 93.9 & 98.1 & 97.5\\\\\n& LSA\\cite{abati2019latent} & 99.3 & 99.9 & 95.9 & 96.6 & 95.6 & 96.4 & 99.4 & 98.0 & 95.3 & 98.1 & 97.5\\\\\n\n\n& \\textbf{Ours (ADACL)} & ${99.37}$ & ${99.30}$ & ${98.58}$ & ${97.36}$ & ${97.57}$ & ${98.43}$ & ${99.56}$ & ${98.09}$ & ${93.46}$ & ${98.38}$ & \\textbf{98.01}\\\\\n\n\\hline\nDataset & Method & T-shirt & Trouser & Pullover & Dress & Coat & Sandal & Shirt & Sneaker & Bag & Ankle boot & Mean\\\\\n\n\\hline\n\n\n\nFashion-MNIST\n& DAGMM\\cite{zong2018deep} & 30.3 & 31.1 & 47.5 & 48.1 & 49.9 & 41.3 & 42.0 & 37.4 & 51.8 & 37.8 & 41.7\\\\\n& DSEBM\\cite{zhai2016deep} & 89.1 & 56.0 & 86.1 & 90.3 & 88.4 & 85.9 & 78.2 & 98.1 & 86.5 & 96.7 & 85.5\\\\\n& LSA\\cite{abati2019latent} & 91.6 & 98.3 & 87.8 & 92.3 & 89.7 & 90.7 & 84.1 & 97.7 & 91.0 & 98.4 & 92.2\\\\\n& DSVDD\\cite{ruff2018deep} & 98.2 & 90.3 & 90.7 & 94.2 & 89.4 & 91.8 & 83.4 & 98.8 & 91.9 & 99.0 & 92.8\\\\\n& OCSVM\\cite{scholkopf2002learning} & 91.9 & 99.0 & 89.4 & 94.2 & 90.7 & 91.8 & 83.4 & 98.8 & 90.3 & 98.2 & 92.8\\\\\n\n& \\textbf{Ours (ADACL)} & ${94.42}$ & ${99.46}$ & ${89.82}$ & ${91.05}$ & ${92.68}$ & ${90.40}$ & ${80.43}$ & ${97.88}$ & ${97.14}$ & ${98.88}$ & \\textbf{93.22}\\\\\n\n\n\\hline\nDataset & Method & Plane & Car & Bird & Cat & Deer & Dog & Frog & Horse & Ship & Truck & Mean\\\\\n\n\\hline\n\n\n\nCIFAR-10\n& OCSVM\\cite{scholkopf2002learning} & 63.0 & 44.0 & 64.9 & 48.7 & 73.5 & 50.0 & 72.5 & 53.3 & 64.9 & 50.8 & 58.56\\\\\n& CapsNet\\textsubscript{PP}\\cite{li2020exploring} & 62.2 & 45.5 & 67.1 & 67.5 & 68.3 & 63.5 & 72.7 & 67.3 & 71.0 & 46.6 & 61.2\\\\\n& AnoGAN\\cite{schlegl2017unsupervised} & 67.1 & 54.7 & 52.9 & 54.5 & 65.1 & 60.3 & 58.5 & 62.5 & 75.8 & 66.5 & 61.79\\\\\n& DSVDD\\cite{ruff2018deep} & 61.7 & 65.9 & 50.8 & 59.1 & 60.9 & 65.7 & 67.7 & 67.3 & 75.9 & 73.1 & 64.81\\\\\n& LSA\\cite{abati2019latent} & 73.5 & 58.0 & 69.0 & 54.2 & 76.1 & 54.6 & 75.1 & 53.5 & 71.7 & 54.8 & 64.1\\\\\n& OCGAN\\cite{perera2019ocgan} & 75.7 & 53.1 & 64.0 & 62.0 & 72.3 & 62.0 & 72.3 & 57.5 & 82.0 & 55.4 & 65.66\\\\\n& CAVGA-D\\textsubscript{u}\\cite{venkataramanan2020attention} & 65.3 & 78.4 & 76.1 & 74.7 & 77.5 & 55.2 & 81.3 & 74.5 & 80.1 & 74.1 & 73.7\\\\\n& DROCC\\cite{goyal2020drocc} & 81.66 & 76.74 & 66.66 & 67.13 & 73.62 & 74.43 & 74.43 & 71.39 & 80.02 & 76.21 & 74.23\\\\\n\n\n& \\textbf{Ours (ADACL)} & ${73.89}$ & ${83.87}$ & ${67.47}$ & ${70.66}$ & ${69.51}$ & 
${77.91}$ & ${72.66}$ & ${83.04}$ & ${87.64}$ & ${81.35}$ & \\textbf{76.80}\\\\\n\n\\hline\n\n\n\\end{tabular}}\n\\end{table*}\n\n\nTable \\ref{table:mnist,CIFAR-10} compares our results with those of two other methods that use the knowledge distillation framework. Methods such as these rely on pre-trained networks which have been trained on millions of labelled images. To learn from these pre-trained or teacher networks, these methods have been trained over many epochs. These methods are computationally expensive and require a long time for inference, which prevents their use in real-world scenarios. Our method takes less time and computation to train even though the results are slightly lower as shown in the table.\n\n\n\\begin{table*}[htbp]\n\\centering\n\\caption{Comparison of AUROC in \\% for anomaly detection on MNIST \\cite{lecun1998mnist} and CIFAR-10 \\cite{krizhevsky2009learning} datasets with knowledge distilation methods.}\n\\label{table:mnist,CIFAR-10}\n\\resizebox{\\textwidth}{!}{\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nDataset & Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Mean & Epoch\\\\\n\n\\hline\n\n\nMNIST\n\n\n& U-Std\\cite{bergmann2020uninformed} & 99.9 & 99.9 & 99 & 99.3 & 99.2 & 99.3 & 99.7 & 99.5 & 98.6 & 99.1 & 99.35 & -\\\\\n\n& Multiresolution KDAD \\cite{salehi2021multiresolution} & 99.82 & 99.82 & 97.79 & 98.75 & 98.43 & 98.16 & 99.43 & 98.38 & 98.41 & 98.1 & 98.71 & 50\\\\\n\n\n& Ours (ADACL) & ${99.37}$ & ${99.30}$ & ${98.58}$ & ${97.36}$ & ${97.57}$ & ${98.43}$ & ${99.56}$ & ${98.09}$ & ${93.46}$ & ${98.38}$ & 98.01 & 10\\\\\n\n\\hline\n\nDataset & Method & Plane & Car & Bird & Cat & Deer & Dog & Frog & Horse & Ship & Truck & Mean & Epoch\\\\\n\n\\hline\n\nCIFAR-10\n\n& U-Std\\cite{bergmann2020uninformed} & 78.9 & 84.9 & 73.4 & 74.8 & 85.1 & 79.3 & 89.2 & 83 & 86.2 & 84.8 & 81.96 & -\\\\\n\n& Multiresolution KDAD \\cite{salehi2021multiresolution} & 90.53 & 90.35 & 79.66 & 77.02 & 86.71 & 91.4 & 88.98 & 86.78 & 91.45 & 88.91 & 87.18 & 200\\\\\n\n\n& Ours (ADACL) & ${73.89}$ & ${83.87}$ & ${67.47}$ & ${70.66}$ & ${69.51}$ & ${77.91}$ & ${72.66}$ & ${83.04}$ & ${87.64}$ & ${81.35}$ & 76.80 & 15\\\\\n\n\\hline\n\n\n\\end{tabular}}\n\\end{table*}\n\n\n\\begin{table}\n\\centering\n\\caption{Frame-level AUCROC and EER comparison \\% on UCSD dataset with state-of-the-art methods.}\n\n\n \\begin{tabular}{|l|l|c|}\n \\hline\nMethod & AUCROC (\\%) & EER (\\%) \\\\ [0.5ex] \n \\hline\nTSC \\cite{luo2017revisit_novelty} & 92.2 & - \\\\\nFRCN action \\cite{hinami2017joint_novelty} & 92.2 & - \\\\\nAbnormalGAN \\cite{ravanbakhsh2017abnormal_novelty} & 93.5 & 13 \\\\\nMemAE \\cite{gong2019memorizing} & 94.1 & - \\\\\nGrowingGas \\cite{sun2017online} & 94.1 & - \\\\\nFFP \\cite{liu2018future_novelty} & 95.4 & - \\\\\nConvAE+UNet \\cite{Nguyen_2019_ICCV} & 96.2 & - \\\\\nSTAN \\cite{lee2018stan} & 96.5 & - \\\\\nObject-centric \\cite{ionescu2019object} & 97.8 & - \\\\ \nRavanbakhsh \\cite{ravanbakhsh2019training} & - & 14 \\\\\nALOCC \\cite{sabokrou2018adversarially} & - & 13 \\\\\nDeep-cascade \\cite{sabokrou2017deep_novelty} & - & 9 \\\\\n Old is gold \\cite{zaheer2020old} & 98.1 & 7 \\\\\n\n \\textbf{Ours (ADACL)} & \\textbf{98.4} & \\textbf{7} \\\\\n \n \\hline\n\\end{tabular}\n\n\n\\label{ucsd_exp}\n\\end{table}\n\n\\subsection{Anomaly Detection in Videos}\nTo evaluate our method on video anomaly detection, we selected the UCSD dataset \\cite{xiao2017fashion}. 
This dataset contains multiple outdoor scenes with mobile objects such as pedestrians, cars, wheelchairs, skateboards and bicycles. Frames with only pedestrians are considered as the normal class, while frames containing other objects are anomalies. This dataset contains two subsets named Ped1 and Ped2. Ped1 includes 34 training video samples and 36 testing video samples and Ped2 contains 2,550 frames in 16 training videos and 2,010 frames in 12 test videos with a resolution of 240 \u00d7 360 pixels.\n\\par\nWe follow a patch-based protocol to evaluate on this dataset where each frame is divided into 30 x 30 sections. For training, we include only patches that include pedestrians. However, the model was evaluated on patches that contain pedestrians or other objects. Following \\cite{zaheer2020old}, to report the performance on this dataset, frame-level Area Under the Curve (AUROC) and Equal Error Rate (EER) are calculated in table \\ref{ucsd_exp}. Results show that ADACL surpasses state-of-the-art methods for video anomaly detection on UCSD. \n\n\\subsection{Analysis of the Method}\nIn this section, we provide experimental results that support the intuition behind our method.\n\n\\subsubsection{BCE vs. MSE}\nBased on our experimental results, we use MSE over BCE because it converges faster but still has good performance over multiple training runs. The figure \\ref{AUCmsebce} shows that our experimental results are aligned with this hypothesis.\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/test_auroc_bce_vs_mse_cifar.png}\n\\end{center}\n \\caption{ Average test AUROC taken over 10 training runs on CIFAR-10. MSE achieves higher AUROC in a shorter number of epochs. }\n\\label{AUCmsebce}\n\\end{figure}\n\n\\subsubsection{Continuous vs. Discrete Labelling}\nReferring back to the method section, we define an early stopping criteria based on validation AUROC. In this experiment, we study how labelling affects anomaly detection performance. We keep the entire training procedure the same and only modify the labelling scheme. Results show that discrete labels cause higher variance in the validation and test AUROC. With higher instability, it is increasingly difficult to create an accurate stopping criteria. As shown in figure \\ref{ValVarianceCIFAR}, continuous labelling yields lower variance in validation AUROC. Knowing this, figure \\ref{AUCVarianceCIFAR} shows that with continuous labelling, a stopping criteria over a validation set with lower AUROC variance is mostly able to produce more consistent test AUROC values per class. Therefore, continuous labelling is the better choice when using a stopping criteria for anomaly detection.\n\n\\begin{table}\n\\centering\n\\caption{Experiments on different intervals}\n \\begin{tabular}{|c|c|c|}\n \\hline\nInterval & Mean AUROC (\\%) & Variance \\\\ [0.5ex] \n \\hline\n$[0, 0.1] - [0.9, 1]$ & $97.14$ & $1.50 \\times 10^-5$ \\\\\n$[0, 0.2] - [0.8, 1]$ & $96.93 $ & $8.83 \\times 10^-5 $ \\\\\n$[0, 0.3] - [0.7, 1]$ & $97.42 $ & $1.29 \\times 10^-5 $ \\\\\n \n \\hline\n\\end{tabular}\n\n\n\\label{interval_exp}\n\\end{table}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/val_variance_mnist.png}\n\\end{center}\n \\caption{Variance of Average Validation AUROC over all classes in each epoch. The averages are taken over 10 training runs. It can be observed that continuous labelling consistently produces lower variance in higher epochs. 
\\section{Introduction}\n\\label{sec:intro}\n\nAnomaly detection is the problem of identifying abnormal samples among a group of normal data. It deviates from many machine learning problems because the set of abnormal data is either poorly sampled or unavailable during training. Recently, anomaly detection has drawn considerable attention and has found many applications in computer vision, such as marker discovery in biomedical data \\cite{schlegl2017unsupervised} and crime detection in surveillance videos \\cite{luo2017revisit}. 
Tackling these problems involves modelling the distribution of normal visual samples such that anomalies can be identified at test time.\n\nDeep neural networks have become a popular choice for reaching state-of-the-art performance in anomaly detection. Despite their good performance, these models suffer from high computational cost, complexity, and training instability, making them difficult to use in practice. To overcome these limitations, we propose training a relatively shallow CNN with continuous labelling and anomaly creation, yielding state-of-the-art performance on anomaly detection with significantly fewer parameters and less training time. Specifically, we approach anomaly detection as a supervised regression problem, where the model's objective is to map normal and created anomalous data to highly separable distributions.\n\nDue to the unavailability of anomalies, we apply simple data augmentation techniques on normal data to create distinct anomalies. With the new set of anomalous data, we can treat anomaly detection as a supervised learning problem. Since there are now two classes, it is intuitive to treat this as a binary classification problem. However, we show that using regression instead of classification improves anomaly detection performance. Furthermore, we introduce continuous labelling as a favorable means of improving performance stability.\n\nWe evaluate our proposed method, Augment to Detect Anomalies with Continuous Labelling (ADACL), on various benchmark datasets for anomaly detection. ADACL outperforms most state-of-the-art methods using significantly fewer parameters. We also provide a thorough study on loss functions, the choice of labels and the effects of the different augmentations. In this paper, our contributions are the following:\n\n\\begin{itemize}\n \\item We propose a novel method of anomaly detection which includes a lightweight CNN trained with regression, anomaly creation with augmentations and continuous labelling to improve performance stability. \n \\item Our method is simple yet outperforms most state-of-the-art approaches.\n \\item We study the effects of various losses, data augmentations and continuous labelling on anomaly detection performance.\n\\end{itemize}\n\n\\section{Related Works}\n\\label{sec:related_works}\nSeveral proposed methods, such as reconstruction-based approaches, take advantage of self-representation learning. They rely on the reconstruction error as a metric to decide whether or not an instance corresponds to the distribution of training examples \\cite{xia2015learning, sabokrou2016video}. As such, various types of autoencoders, like denoising autoencoders and context autoencoders \\cite{zimmerer2018context}, are used for anomaly detection. Most of the deep learning-based models with an autoencoder architecture \\cite{sakurada2014anomaly, zhai2016deep, zhou2017anomaly, zong2018deep, chong2017abnormal} also use reconstruction error to detect anomalies. These methods strive to exclusively learn the distribution of normal data during training, so that they fail to generalize to anomalies. 
Even though these methods can be effective in some cases, it has been shown that they often generalize well enough to reconstruct out-of-distribution samples and thus fail to recognize anomalies at the testing stage.\n\nSome works used a deep convolutional generative adversarial network (DCGAN) \\cite{radford2015unsupervised} to learn a manifold of normal images for anomaly detection by mapping from an image space to a random distribution \\cite{schlegl2017unsupervised, schlegl2019f}. Sabokrou et al. \\cite{sabokrou2018adversarially} proposed a one-class classification framework consisting of a Reconstructor (R) and a Discriminator (D). R serves as a denoising autoencoder, while D operates as the detector. These two networks are trained adversarially in an end-to-end fashion. In an extension to this, Zaheer et al. \\cite{zaheer2020old} redefined the adversarial one-class classifier training setup by changing the role of the discriminator to distinguish between good- and bad-quality reconstructions, improving the performance even further. In \\cite{perera2019ocgan}, Perera et al. leveraged an autoencoder architecture to enforce the normal instances to be distributed uniformly across the latent space. \\cite{jewell2021oled} utilized an adversarial setup to learn more robust representations by intelligently masking the input.\n\n\\cite{salehi2021multiresolution} and \\cite{georgescu2021anomaly} have attempted to benefit from deep pre-trained networks by distilling the knowledge, where a small student model learns from a large teacher model. In \\cite{salehi2021multiresolution}, they utilized a VGG-16 \\cite{simonyan2014very} to calculate a multi-level loss from different activations for training the student network to determine the anomaly score. They also incorporate interpretability algorithms in their framework to localize anomalous regions and perform anomaly segmentation. Although knowledge-distillation methods achieve high anomaly detection performance, they rely on pre-training on millions of labelled images, which does not carry over to other modalities of data. Also, in practice, knowledge-distillation methods may not be suitable due to computationally expensive inference. Our proposed method does not leverage pre-trained networks, so we do not compare against these approaches.\n\nGong et al. proposed a deep autoencoder augmented with a memory module \\cite{gong2019memorizing} to encode the input to a latent space with the encoder. The resulting latent vector is used as a query to retrieve the most relevant memory item for reconstruction with the decoder. Also, in \\cite{park2020learning}, they introduced a memory module with items that capture prototypical models of the inlier class with a new update system.\n\nSome of the proposed anomaly detection methods that rely on learning the distribution of inliers cannot be applied to real-world applications. Generating anomalies alongside the available normal data builds an informative training set for the task of anomaly detection. Employing GANs for generating anomalous data turns the problem of anomaly detection into a binary classification problem. This approach can also be used as data augmentation for anomalous data. In \\cite{pourreza2021g2d}, they trained a Wasserstein GAN on normal samples and utilized the generator before convergence. In this case, the generated irregular data have a controlled deviation from the inliers. 
Although they set a new research direction in anomaly detection, training a network to generate outliers is computationally expensive.\n\n\\section{Method}\n\\label{sec:method}\n\n\\subsection{Motivation}\nDeep neural networks have shown great performance in solving anomaly detection problems. However, they often have a large number of parameters, making them difficult and expensive to train. Moreover, most deep models suffer from training instability, complexity, and difficult implementation, and are intractable to deploy in real-world applications. To overcome these issues, we propose ADACL, where we follow an intuitive and stable training procedure which also exceeds state-of-the-art performance. ADACL is simple to implement and has relatively few parameters, leading to inexpensive training and fast inference time. Therefore, it is more suitable for use in real-world scenarios. \n\n\\subsection{Approach}\n\nTo improve performance in anomaly detection, it is desirable to produce representations of normal and anomalous samples that have distinct distributions. In our method, we redefine anomaly detection as a supervised regression problem. However, the training data consists mainly of normal samples, which makes supervised learning a cumbersome task. To solve this issue, we leverage straightforward data augmentations to create anomalous samples during training. \n\n\\subsubsection{Regression for Anomaly Detection}\nWe utilize a lightweight convolutional neural network (CNN) to train on the normal data and created anomalies. The CNN acts as a regressor that outputs a continuous value between 0 and 1 to represent normal and anomalous data. Even though this can be considered a binary classification problem, we show that regression offers fast convergence and high performance in anomaly detection. Here we explain the main design choices of our method: why we did not use a sigmoid in the last layer of the CNN and instead used value clipping, and why Mean Squared Error (MSE) was chosen over Binary Cross Entropy (BCE) as the loss function. In the case of binary classification, equation \\ref{sigmoidgrowth} shows that the rate of change of the sigmoid function is always decreasing as the prediction approaches the ground truth target. Also, as shown in figure \\ref{sigandfirstsig}, the gradient values approach zero in the same manner. Due to the saturation of gradients near the target, updating the weights will be less effective, resulting in slow convergence. A similar problem exists in binary classification with BCE: its negative log-likelihood also exhibits a decreasing rate of change as the predicted value approaches the ground truth target. Referring to figure \\ref{bcemsederiv}, MSE provides stronger gradients as the prediction approaches the target, which results in faster convergence. We perform anomaly detection experiments on ADACL and find that MSE not only converges faster but also maintains consistently high performance across multiple training runs; a minimal numerical check of the saturation argument is sketched below. 
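\n\nThe following Python snippet is our own illustration (not part of the original experiments); it evaluates the sigmoid $h(x)$ and its derivative $h'(x) = h(x)(1-h(x))$ from equation \\ref{sigmoidgrowth}, showing how quickly the gradient decays as the pre-activation saturates:\n\n\\begin{verbatim}\nimport math\n\n# Minimal numeric check (our sketch) of the saturation argument:\n# the sigmoid derivative h'(x) = h(x) * (1 - h(x)) shrinks\n# monotonically as |x| grows, so gradients vanish near saturation.\ndef h(x):\n    return 1.0 \/ (1.0 + math.exp(-x))\n\nfor x in [0.0, 2.0, 4.0, 6.0]:\n    print(x, h(x), h(x) * (1.0 - h(x)))\n# h'(x): 0.25, ~0.105, ~0.0177, ~0.00247 -- decaying toward zero\n\\end{verbatim}\n\n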
Consequently, we transform anomaly detection into a regression problem to reach the optimal solution faster.\n\n\\begin{equation}\n\\begin{aligned}\nh(x) = \\frac{1}{1 + e^{-x}}\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nh'(x) = h(x)(1-h(x))\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\text{for } x < 0: \\qquad h'(x) - h'(x-1) > 0 \\\\\n\\text{for } x > 0: \\qquad h'(x-1) - h'(x) > 0\n\\end{aligned}\n\\label{sigmoidgrowth}\n\\end{equation}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/sigmoidandderiv.png}\n\\end{center}\n \\caption{The sigmoid function and its first derivative. The rate of change of the sigmoid decreases to 0 as the prediction approaches its target.}\n\\label{sigandfirstsig}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/bce_mse_deriv_comp.png}\n\\end{center}\n \\caption{The first derivatives of MSE and BCE. Notice that the gradients of MSE become larger than those of BCE after a certain point.}\n\\label{bcemsederiv}\n\\end{figure}\n\n\\begin{figure*}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/model.png}\n\\end{center}\n \\caption{Overview of ADACL architecture. Normal examples and created anomalies are assigned continuous labels and fed into the CNN. This regression model outputs a continuous value between 0 and 1 as an anomaly score. At test time, the model is evaluated with real anomalies.}\n\\label{modelarch}\n\\end{figure*}\n\n\\subsubsection{Continuous Labelling}\nIn the anomaly detection problem, we can label normal and anomalous data as 0 and 1, respectively. We call this Discrete Labelling (DL). However, experimental results show that this leads to high variance in anomaly detection performance. Instead, we use Continuous Labelling (CL), where we designate two continuous intervals corresponding to normal and anomalous data, and sample labels from them using a uniform distribution. The intuition behind continuous labelling is that the expected value of MSE over predictions is lower in comparison to using discrete labelling. Let a discrete label $ \\in \\{0, 1\\}$ and a continuous label $ \\in [0, X_L] \\cup [X_H, 1]$, where $X_L$ is the upper bound of the interval of the normal class and $X_H$ is the lower bound of the interval of the anomaly class. Because we sample from a uniform distribution, $A = \\mathbb{E}([0, X_L]) = \\frac{X_L}{2}$ and $B = \\mathbb{E}([X_H, 1]) = \\frac{X_H + 1}{2}$, where $A$ and $B$ are the expected label values for the normal and anomaly classes, respectively. The MSE function takes two numbers as input to calculate the loss. As shown in equation \\ref{expectedmse}, we let the prediction of our model be 0.5 (the largest distance to the lower and upper bounds). The following inequalities show that the value of the MSE loss is always lower when using continuous labelling compared to discrete labelling:\n\n\\begin{equation}\n\\begin{aligned}\nMSE(0.5, A) &< MSE(0.5, 0) \\quad \\text{if } A > 0 \\\\\nMSE(0.5, B) &< MSE(0.5, 1) \\quad \\text{if } B < 1\n\\end{aligned}\n\\label{expectedmse}\n\\end{equation}\n\nTherefore:\n\n\\begin{equation}\n\\begin{aligned}\n\\mathbb{E}\\big(MSE(\\text{prediction}, CL)\\big) <\\\\ \\mathbb{E}\\big(MSE(\\text{prediction}, DL)\\big)\n\\end{aligned}\n\\label{expected}\n\\end{equation}\n
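\n\nAs a sanity check (our own sketch, not from the paper), a quick Monte-Carlo estimate with the intervals $[0, 0.3]$ and $[0.7, 1]$ used later confirms equation \\ref{expected} for a mid-range prediction of 0.5:\n\n\\begin{verbatim}\nimport random\n\n# Expected MSE of a 0.5 prediction against continuous labels drawn\n# uniformly from [0, 0.3] and [0.7, 1], versus discrete labels 0 and 1.\nN = 100000\npred = 0.5\ncont = [random.uniform(0.0, 0.3) if i % 2 == 0 else\n        random.uniform(0.7, 1.0) for i in range(N)]\ndisc = [0.0 if i % 2 == 0 else 1.0 for i in range(N)]\n\nmse = lambda p, y: (p - y) ** 2\nprint(sum(mse(pred, y) for y in cont) \/ N)  # approx 0.13\nprint(sum(mse(pred, y) for y in disc) \/ N)  # 0.25\n\\end{verbatim}\n\n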
According to equation \\ref{expected}, if we choose continuous labelling over discrete labelling, the expected value of the loss is lower during training, so convergence proceeds more gradually; this should therefore increase training stability. The experimental results in Figures \\ref{ValVarianceCIFAR} and \\ref{AUCVarianceCIFAR} support this hypothesis.\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/augs.png}\n\\end{center}\n \\caption{Various augmentations applied to create anomalies. The first, second, and third rows contain normal images and anomalies created with different augmentations from UCSD, MNIST and CIFAR-10, respectively.}\n\\label{augs}\n\\end{figure}\n\n\\subsubsection{Anomaly Creation}\nSolving anomaly detection as a supervised regression problem requires a dataset containing both normal and anomalous data. To compensate for the unavailability of anomalies, we utilize data augmentation techniques to create them during training. Examples of these augmentations are shown in figure \\ref{augs}. The following are descriptions of our proposed augmentations for ADACL (a minimal sketch of the pipeline follows this list):\n\n\\begin{itemize}\n \\item \\textbf{\\textit{Cut-Paste:}} Randomly select a patch from the image and paste it at a random location. \n \\item \\textbf{\\textit{Puzzling:}} Take the quarters of the image and shuffle them.\n \\item \\textbf{\\textit{Rotation:}} Rotate the image by 90 degrees one or three times.\n \\item \\textbf{\\textit{Mix-up:}} Add a rotated copy of the image to the original. Prior to adding, the rotated and original images are multiplied by respective coefficients. \n\\end{itemize}\n\nTo assign training labels to normal and created anomalies, we pick two separate continuous intervals from which we uniformly sample. For example, normal and anomalous labels are in the ranges of $[0, 0.3]$ and $[0.7, 1]$, respectively.\n
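\n\nThe following Python sketch is our own illustration of the augmentations and label sampling; the function names, the patch size, and the mixing coefficient are illustrative assumptions, since the paper does not fix these values:\n\n\\begin{verbatim}\nimport numpy as np\n\n# Sketch of the four anomaly-creating augmentations; `img` is an\n# H x W (or H x W x C) array with even H and W.\ndef cut_paste(img, size=8, rng=np.random):\n    out = img.copy()\n    h, w = img.shape[:2]\n    y1, x1 = rng.randint(0, h - size), rng.randint(0, w - size)\n    y2, x2 = rng.randint(0, h - size), rng.randint(0, w - size)\n    out[y2:y2 + size, x2:x2 + size] = img[y1:y1 + size, x1:x1 + size]\n    return out\n\ndef puzzling(img, rng=np.random):\n    h, w = img.shape[0] \/\/ 2, img.shape[1] \/\/ 2\n    quads = [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]\n    rng.shuffle(quads)\n    top = np.concatenate(quads[:2], axis=1)\n    bottom = np.concatenate(quads[2:], axis=1)\n    return np.concatenate([top, bottom], axis=0)\n\ndef rotation(img, rng=np.random):\n    return np.rot90(img, k=rng.choice([1, 3]))  # 90 or 270 degrees\n\ndef mix_up(img, alpha=0.5):\n    # the mixing coefficient alpha is an illustrative choice\n    return alpha * img + (1.0 - alpha) * np.rot90(img)\n\ndef continuous_label(is_anomaly, rng=np.random):\n    # intervals follow the example given in the text\n    return rng.uniform(0.7, 1.0) if is_anomaly else rng.uniform(0.0, 0.3)\n\\end{verbatim}\n\n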
\\subsection{Implementation Details}\nOur method, depicted in figure \\ref{modelarch}, uses a simple CNN with fewer than 300k parameters. As a regressor, this model outputs a continuous value which is clipped between 0 and 1. It is trained on the inlier samples and created anomalies, which are augmented versions of the normal data. We use different variations of the Adam optimizer in conjunction with a cyclic learning rate. Also, we designate a constant number of epochs for training on each dataset. Then, based on the validation set, we use early stopping techniques to terminate training. This validation set consists of 150 randomly selected samples of normal data and their augmented versions as anomalies. A fixed random seed keeps this selection consistent. \n\n\\section{Experiments and Results}\n\\label{sec:exp_res}\n\\subsection{Anomaly Detection in Images}\n\nThe image datasets we choose for anomaly detection in this paper are MNIST \\cite{lecun1998mnist}, FMNIST \\cite{xiao2017fashion} and CIFAR-10 \\cite{krizhevsky2009learning}. These benchmark datasets are standard in the anomaly detection literature. In the following, we provide descriptions and protocols defined on each dataset. \n\\par\n{\\bf MNIST:} A dataset of 60,000 handwritten digit images. Samples in MNIST are grayscale with a resolution of 28 \u00d7 28. It is a standard benchmark in anomaly detection.\n\\par\n{\\bf FMNIST:} Fashion MNIST also contains 60,000 28 \u00d7 28 grayscale images of fashion accessories, but since there is a significant amount of intra-class variation, it is a more challenging dataset than MNIST.\n\n{\\bf CIFAR-10:} This dataset consists of 10 classes of 32 \u00d7 32 RGB images of natural objects. With high intra-class variance, CIFAR-10 is a more challenging benchmark for anomaly detection.\n\n\\par\nThe protocol we follow for these three datasets is to consider one class as normal data and the rest as anomalies. To measure anomaly detection performance, we calculate the Area Under the ROC Curve (AUROC) for each class and report the average over all classes as the final performance. AUROC results on these datasets are shown in table \\ref{table:mnist,fashionmnist,CIFAR-10}. From our results, ADACL outperforms recent state-of-the-art anomaly detection methods. Moreover, figure \\ref{normalauganomexamples} depicts the model's predictions on normal, augmented and anomalous samples over different datasets. Figure \\ref{learned_repr} shows the 3D distributions of learned representations of normal and anomalous samples on classes 1 and 8 of MNIST. This figure shows the separability of the learned representations.\n
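\n\nFor concreteness, the following sketch (our illustration, assuming scikit-learn's \\texttt{roc\\_auc\\_score}) shows the one-class-vs-rest evaluation just described:\n\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\n\n# For each class treated as normal, the regressor scores every test\n# sample in [0, 1]; AUROC is computed against binary ground truth\n# (1 = anomaly) and averaged over all classes.\ndef mean_auroc(labels_per_class, scores_per_class):\n    aurocs = [roc_auc_score(y, s)\n              for y, s in zip(labels_per_class, scores_per_class)]\n    return float(np.mean(aurocs))\n\\end{verbatim}\n\n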
\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=0.7\\linewidth]{latex\/figures\/preds.png}\n\\end{center}\n \\caption{Predictions of the model on normal, augmented and anomalous samples from UCSD, MNIST, and CIFAR-10.}\n\\label{normalauganomexamples}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/embs.png}\n\\end{center}\n \\caption{3D visualization of the learned representations of classes 1 and 8 of the MNIST dataset. As shown, the normal and anomalous embeddings form separable and distinct distributions.}\n\\label{learned_repr}\n\\end{figure}\n\n\\begin{table*}[htbp]\n\\centering\n\\caption{AUROC in \\% for anomaly detection on MNIST \\cite{lecun1998mnist}, Fashion-MNIST \\cite{xiao2017fashion} and CIFAR-10 \\cite{krizhevsky2009learning} datasets.}\n\\label{table:mnist,fashionmnist,CIFAR-10}\n\\resizebox{\\textwidth}{!}{\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nDataset & Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Mean\\\\\n\n\\hline\n\n\nMNIST\n& AnoGAN\\cite{schlegl2017unsupervised} & 96.6 & 99.2 & 85.0 & 88.7 & 89.4 & 88.3 & 94.7 & 93.5 & 84.9 & 92.4 & 91.3\\\\\n& DSVDD\\cite{ruff2018deep} & 98.0 & 99.7 & 91.7 & 91.9 & 94.9 & 88.5 & 98.3 & 94.6 & 93.9 & 96.5 & 94.8\\\\\n& OCSVM\\cite{scholkopf2002learning} & 99.5 & 99.9 & 92.6 & 93.6 & 96.7 & 95.5 & 98.7 & 96.6 & 90.3 & 96.2 & 96.0\\\\\n& CapsNet\\textsubscript{PP} \\cite{li2020exploring} & 99.8 & 99.0 & 98.4 & 97.6 & 93.5 & 97.0 & 94.2 & 98.7 & 99.3 & 99.0 & 97.7\\\\\n& OCGAN\\cite{perera2019ocgan} & 99.8 & 99.9 & 94.2 & 96.3 & 97.5 & 98.0 & 99.1 & 98.1 & 93.9 & 98.1 & 97.5\\\\\n& LSA\\cite{abati2019latent} & 99.3 & 99.9 & 95.9 & 96.6 & 95.6 & 96.4 & 99.4 & 98.0 & 95.3 & 98.1 & 97.5\\\\\n\n\n& \\textbf{Ours (ADACL)} & ${99.37}$ & ${99.30}$ & ${98.58}$ & ${97.36}$ & ${97.57}$ & ${98.43}$ & ${99.56}$ & ${98.09}$ & ${93.46}$ & ${98.38}$ & \\textbf{98.01}\\\\\n\n\\hline\nDataset & Method & T-shirt & Trouser & Pullover & Dress & Coat & Sandal & Shirt & Sneaker & Bag & Ankle boot & Mean\\\\\n\n\\hline\n\n\n\nFashion-MNIST\n& DAGMM\\cite{zong2018deep} & 30.3 & 31.1 & 47.5 & 48.1 & 49.9 & 41.3 & 42.0 & 37.4 & 51.8 & 37.8 & 41.7\\\\\n& DSEBM\\cite{zhai2016deep} & 89.1 & 56.0 & 86.1 & 90.3 & 88.4 & 85.9 & 78.2 & 98.1 & 86.5 & 96.7 & 85.5\\\\\n& LSA\\cite{abati2019latent} & 91.6 & 98.3 & 87.8 & 92.3 & 89.7 & 90.7 & 84.1 & 97.7 & 91.0 & 98.4 & 92.2\\\\\n& DSVDD\\cite{ruff2018deep} & 98.2 & 90.3 & 90.7 & 94.2 & 89.4 & 91.8 & 83.4 & 98.8 & 91.9 & 99.0 & 92.8\\\\\n& OCSVM\\cite{scholkopf2002learning} & 91.9 & 99.0 & 89.4 & 94.2 & 90.7 & 91.8 & 83.4 & 98.8 & 90.3 & 98.2 & 92.8\\\\\n\n& \\textbf{Ours (ADACL)} & ${94.42}$ & ${99.46}$ & ${89.82}$ & ${91.05}$ & ${92.68}$ & ${90.40}$ & ${80.43}$ & ${97.88}$ & ${97.14}$ & ${98.88}$ & \\textbf{93.22}\\\\\n\n\n\\hline\nDataset & Method & Plane & Car & Bird & Cat & Deer & Dog & Frog & Horse & Ship & Truck & Mean\\\\\n\n\\hline\n\n\n\nCIFAR-10\n& OCSVM\\cite{scholkopf2002learning} & 63.0 & 44.0 & 64.9 & 48.7 & 73.5 & 50.0 & 72.5 & 53.3 & 64.9 & 50.8 & 58.56\\\\\n& CapsNet\\textsubscript{PP}\\cite{li2020exploring} & 62.2 & 45.5 & 67.1 & 67.5 & 68.3 & 63.5 & 72.7 & 67.3 & 71.0 & 46.6 & 61.2\\\\\n& AnoGAN\\cite{schlegl2017unsupervised} & 67.1 & 54.7 & 52.9 & 54.5 & 65.1 & 60.3 & 58.5 & 62.5 & 75.8 & 66.5 & 61.79\\\\\n& DSVDD\\cite{ruff2018deep} & 61.7 & 65.9 & 50.8 & 59.1 & 60.9 & 65.7 & 67.7 & 67.3 & 75.9 & 73.1 & 64.81\\\\\n& LSA\\cite{abati2019latent} & 73.5 & 58.0 & 69.0 & 54.2 & 76.1 & 54.6 & 75.1 & 53.5 & 71.7 & 54.8 & 64.1\\\\\n& OCGAN\\cite{perera2019ocgan} & 75.7 & 53.1 & 64.0 & 62.0 & 72.3 & 62.0 & 72.3 & 57.5 & 82.0 & 55.4 & 65.66\\\\\n& CAVGA-D\\textsubscript{u}\\cite{venkataramanan2020attention} & 65.3 & 78.4 & 76.1 & 74.7 & 77.5 & 55.2 & 81.3 & 74.5 & 80.1 & 74.1 & 73.7\\\\\n& DROCC\\cite{goyal2020drocc} & 81.66 & 76.74 & 66.66 & 67.13 & 73.62 & 74.43 & 74.43 & 71.39 & 80.02 & 76.21 & 74.23\\\\\n\n\n& \\textbf{Ours (ADACL)} & ${73.89}$ & ${83.87}$ & ${67.47}$ & ${70.66}$ & ${69.51}$ & 
${77.91}$ & ${72.66}$ & ${83.04}$ & ${87.64}$ & ${81.35}$ & \\textbf{76.80}\\\\\n\n\\hline\n\n\n\\end{tabular}}\n\\end{table*}\n\n\nTable \\ref{table:mnist,CIFAR-10} compares our results with those of two other methods that use the knowledge distillation framework. Such methods rely on pre-trained networks that have been trained on millions of labelled images, and learning from these pre-trained teacher networks requires training over many epochs. These methods are computationally expensive and require a long time for inference, which prevents their use in real-world scenarios. As shown in the table, our method takes far less time and computation to train, even though its results are slightly lower.\n\n\n\\begin{table*}[htbp]\n\\centering\n\\caption{Comparison of AUROC in \\% for anomaly detection on MNIST \\cite{lecun1998mnist} and CIFAR-10 \\cite{krizhevsky2009learning} datasets with knowledge distillation methods.}\n\\label{table:mnist,CIFAR-10}\n\\resizebox{\\textwidth}{!}{\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nDataset & Method & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Mean & Epoch\\\\\n\n\\hline\n\n\nMNIST\n\n\n& U-Std\\cite{bergmann2020uninformed} & 99.9 & 99.9 & 99 & 99.3 & 99.2 & 99.3 & 99.7 & 99.5 & 98.6 & 99.1 & 99.35 & -\\\\\n\n& Multiresolution KDAD \\cite{salehi2021multiresolution} & 99.82 & 99.82 & 97.79 & 98.75 & 98.43 & 98.16 & 99.43 & 98.38 & 98.41 & 98.1 & 98.71 & 50\\\\\n\n\n& Ours (ADACL) & ${99.37}$ & ${99.30}$ & ${98.58}$ & ${97.36}$ & ${97.57}$ & ${98.43}$ & ${99.56}$ & ${98.09}$ & ${93.46}$ & ${98.38}$ & 98.01 & 10\\\\\n\n\\hline\n\nDataset & Method & Plane & Car & Bird & Cat & Deer & Dog & Frog & Horse & Ship & Truck & Mean & Epoch\\\\\n\n\\hline\n\nCIFAR-10\n\n& U-Std\\cite{bergmann2020uninformed} & 78.9 & 84.9 & 73.4 & 74.8 & 85.1 & 79.3 & 89.2 & 83 & 86.2 & 84.8 & 81.96 & -\\\\\n\n& Multiresolution KDAD \\cite{salehi2021multiresolution} & 90.53 & 90.35 & 79.66 & 77.02 & 86.71 & 91.4 & 88.98 & 86.78 & 91.45 & 88.91 & 87.18 & 200\\\\\n\n\n& Ours (ADACL) & ${73.89}$ & ${83.87}$ & ${67.47}$ & ${70.66}$ & ${69.51}$ & ${77.91}$ & ${72.66}$ & ${83.04}$ & ${87.64}$ & ${81.35}$ & 76.80 & 15\\\\\n\n\\hline\n\n\n\\end{tabular}}\n\\end{table*}\n\n\n\\begin{table}\n\\centering\n\\caption{Frame-level AUROC and EER comparison in \\% on the UCSD dataset with state-of-the-art methods.}\n\n\n \\begin{tabular}{|l|l|c|}\n \\hline\nMethod & AUROC (\\%) & EER (\\%) \\\\ [0.5ex] \n \\hline\nTSC \\cite{luo2017revisit_novelty} & 92.2 & - \\\\\nFRCN action \\cite{hinami2017joint_novelty} & 92.2 & - \\\\\nAbnormalGAN \\cite{ravanbakhsh2017abnormal_novelty} & 93.5 & 13 \\\\\nMemAE \\cite{gong2019memorizing} & 94.1 & - \\\\\nGrowingGas \\cite{sun2017online} & 94.1 & - \\\\\nFFP \\cite{liu2018future_novelty} & 95.4 & - \\\\\nConvAE+UNet \\cite{Nguyen_2019_ICCV} & 96.2 & - \\\\\nSTAN \\cite{lee2018stan} & 96.5 & - \\\\\nObject-centric \\cite{ionescu2019object} & 97.8 & - \\\\ \nRavanbakhsh \\cite{ravanbakhsh2019training} & - & 14 \\\\\nALOCC \\cite{sabokrou2018adversarially} & - & 13 \\\\\nDeep-cascade \\cite{sabokrou2017deep_novelty} & - & 9 \\\\\n Old is gold \\cite{zaheer2020old} & 98.1 & 7 \\\\\n\n \\textbf{Ours (ADACL)} & \\textbf{98.4} & \\textbf{7} \\\\\n \n \\hline\n\\end{tabular}\n\n\n\\label{ucsd_exp}\n\\end{table}\n\n\\subsection{Anomaly Detection in Videos}\nTo evaluate our method on video anomaly detection, we selected the UCSD pedestrian dataset. 
This dataset contains multiple outdoor scenes with mobile objects such as pedestrians, cars, wheelchairs, skateboards and bicycles. Frames with only pedestrians are considered the normal class, while frames containing other objects are anomalies. The dataset comprises two subsets, Ped1 and Ped2. Ped1 includes 34 training and 36 testing video samples, while Ped2 contains 2,550 frames in 16 training videos and 2,010 frames in 12 test videos with a resolution of 240 \u00d7 360 pixels.\n\\par\nWe follow a patch-based protocol to evaluate on this dataset, where each frame is divided into 30 \u00d7 30 patches. For training, we use only patches that contain pedestrians. However, the model is evaluated on patches that contain pedestrians or other objects. Following \\cite{zaheer2020old}, to report the performance on this dataset, the frame-level AUROC and Equal Error Rate (EER) are calculated in table \\ref{ucsd_exp}. Results show that ADACL surpasses state-of-the-art methods for video anomaly detection on UCSD. \n
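\n\nA minimal sketch of this patch-based scoring (our illustration; the aggregation of patch scores into a frame score is our assumption, as the paper does not state it) is:\n\n\\begin{verbatim}\nimport numpy as np\n\n# Tile each frame into 30 x 30 patches, score every patch with the\n# regressor, and aggregate to a frame-level anomaly score (here, the\n# maximum patch score -- an assumption on our part).\ndef frame_score(frame, model, patch=30):\n    h, w = frame.shape[:2]\n    scores = [model(frame[y:y + patch, x:x + patch])\n              for y in range(0, h - patch + 1, patch)\n              for x in range(0, w - patch + 1, patch)]\n    return max(scores)\n\\end{verbatim}\n\n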
\\subsection{Analysis of the Method}\nIn this section, we provide experimental results that support the intuition behind our method.\n\n\\subsubsection{BCE vs. MSE}\nBased on our experimental results, we use MSE over BCE because it converges faster while still maintaining good performance over multiple training runs. Figure \\ref{AUCmsebce} shows that our experimental results are aligned with this hypothesis.\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/test_auroc_bce_vs_mse_cifar.png}\n\\end{center}\n \\caption{Average test AUROC taken over 10 training runs on CIFAR-10. MSE achieves higher AUROC in fewer epochs.}\n\\label{AUCmsebce}\n\\end{figure}\n\n\\subsubsection{Continuous vs. Discrete Labelling}\nReferring back to the method section, we define an early stopping criterion based on validation AUROC. In this experiment, we study how labelling affects anomaly detection performance. We keep the entire training procedure the same and only modify the labelling scheme. Results show that discrete labels cause higher variance in the validation and test AUROC. With higher instability, it is increasingly difficult to create an accurate stopping criterion. As shown in figure \\ref{ValVarianceCIFAR}, continuous labelling yields lower variance in validation AUROC. Knowing this, figure \\ref{AUCVarianceCIFAR} shows that with continuous labelling, a stopping criterion over a validation set with lower AUROC variance mostly produces more consistent test AUROC values per class. Therefore, continuous labelling is the better choice when using a stopping criterion for anomaly detection.\n\n\\begin{table}\n\\centering\n\\caption{Experiments on different labelling intervals.}\n \\begin{tabular}{|c|c|c|}\n \\hline\nInterval & Mean AUROC (\\%) & Variance \\\\ [0.5ex] \n \\hline\n$[0, 0.1] - [0.9, 1]$ & $97.14$ & $1.50 \\times 10^{-5}$ \\\\\n$[0, 0.2] - [0.8, 1]$ & $96.93$ & $8.83 \\times 10^{-5}$ \\\\\n$[0, 0.3] - [0.7, 1]$ & $97.42$ & $1.29 \\times 10^{-5}$ \\\\\n \n \\hline\n\\end{tabular}\n\n\n\\label{interval_exp}\n\\end{table}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/val_variance_mnist.png}\n\\end{center}\n \\caption{Variance of average validation AUROC over all classes in each epoch. The averages are taken over 10 training runs. It can be observed that continuous labelling consistently produces lower variance in later epochs. Thus, a stopping criterion based on the validation set yields more stable test AUROC when continuous labels are used.}\n\\label{ValVarianceCIFAR}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/auroc_variance_cont_vs_disc.png}\n\\end{center}\n \\caption{Variance in anomaly detection AUROC when using continuous versus discrete labelling for all classes, taken over 10 training runs of CIFAR-10. More often than not, using the same stopping criterion with continuous labels produces lower variance in test AUROC.}\n\\label{AUCVarianceCIFAR}\n\\end{figure}\n\n\n\\subsubsection{Effects of Augmentations}\nVariations in anomaly detection performance occur when using data augmentation to create anomalies. In this experiment, we study the effect of each augmentation used to create anomalies during training. As seen in figure \\ref{effect_augs}, the best performing solo augmentation is Cut-Paste, but the best anomaly detection results are achieved by using all augmentations. \n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{latex\/figures\/effect_diff_augs.jpg}\n\\end{center}\n \\caption{The effects on AUROC of augmentations such as Cut-Paste, Puzzling, one- and three-time rotations and various mix-ups. The most effective single augmentation was Cut-Paste; however, using all augmentations in the process of creating outliers yields the highest performance in anomaly detection.}\n\\label{effect_augs}\n\\end{figure}\n\n\\subsubsection{Interval Selection}\nIn previous experiments, we show that continuous labelling improves anomaly detection by enabling better early stopping. This is achieved through lower variance in validation AUROC. To further examine the implications of label selection, we analyze different-sized intervals to see their effects on anomaly detection performance. As shown in table \\ref{interval_exp}, the choice of interval has low impact on AUROC. \n\n\\section{Conclusion}\nDeep neural networks can achieve state-of-the-art performance when applied to anomaly detection tasks. However, most of them suffer from expensive computations, high complexity, training instability, and difficult implementation. In this paper, we alleviate these issues by proposing a simple and effective methodology for anomaly detection. We convert the problem into a supervised regression task by creating anomalies using data augmentations and training a lightweight convolutional neural network over continuous labels. In further experiments, we analyze the effects of MSE versus BCE loss, continuous labelling, interval size, and various augmentations. Results on several image and video anomaly detection benchmarks show the superiority of our approach over cutting-edge methods. \n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sect:intro}\n\nThe prospect of pre-programming a desired shape transformation in a material that may be remotely activated at a later time has understandably been the source of much recent interest, from engineering sheets with activated folds \\cite{HawkesPNAS:10} to attempts at understanding how nature accomplishes its vast spectrum of morphologies \\cite{BenAmarPRL:08, MahaPNAS:09}. The primary difficulty with such programming on an initially flat sheet lies in moving to target shapes beyond simple folding or crumpling, characterized by d-cones and developable geometry \\cite{WittenRMP:07}. 
By requiring non-developable results, one must be prepared to deal either with a high stretch or compressional cost, or find a way to pre-program a change in the material's local metric geometry. This latter option may be attempted, as often occurs in nature, through differential growth rates inside the material itself \\cite{BenAmarPRL:08b}; however, such a system is irreversible, difficult to program in advance, difficult to activate controllably, and hence not amenable to device applications. Other approaches to the problem include the use of gels \\cite{KleinSci:07} or fluid membranes \\cite{UchidaPRE:02}, but similar issues appear in these cases, coupled with the disadvantage that fluid membranes and gels are less robust materials than desirable for use in shape-programmable devices. This paper will propose a new avenue to blueprinting for the purpose of broad morphology control in a thin solid sheet that circumvents these difficulties and results in stress-free final states by taking advantage of materials with local orientational order.\n\nLiquid crystalline solids undergo macroscopic elongations and contractions in response to heat, light, pH, and other stimuli that change the molecular order. Most studied are nematic glasses \\cite{vanOosten:07} and elastomers \\cite{warnerbook:07}. Both have spontaneous deformation gradients, $\\lambda_{ij} = \\partial x_i\/\\partial x^0_j$, transforming reference space points $\\vec{x}^0$ to target points $\\vec{x}$, that are of the form\n\\begin{equation}\n\\matr{\\lambda} = (\\lambda - \\lambda^{-\\nu}) {\\vec{n} }\\, {\\vec{n} } + \\lambda^{-\\nu}\\uuline{\\delta}\n\\label{eq:def}\n\\end{equation}\nthat is, an extension\/contraction $\\lambda$ along the director $ {\\vec{n} }$ and a contraction\/extension $\\lambda^{-\\nu}$ perpendicular to $ {\\vec{n} }$, for $\\lambda > 1$ and $\\lambda < 1$ respectively. By analogy to the elastic case, $\\nu$ is what we call an opto-thermal Poisson ratio that relates the perpendicular to the parallel response. Thus $\\matr{\\lambda} $ is a uniaxial distortion when it is spontaneous and not associated with any subsequent stresses that distort the body away from the new natural state. For glasses $\\nu \\in ({\\textstyle \\frac{1}{2}}, 2)$ \\cite{vanOosten:07} while elastomers (rubbers) have $\\nu = 1\/2$. Rubbers have spectacular stimuli responses, $\\lambda \\in (0.5, 4)$, that is up to $400$\\% opto-thermal strains. Their directors are mobile in a fluid-like way -- in fact, the rotation of $\\vec{n}$, in placing the longer dimension of the solid along the direction of imposed elongation, allows shape change without energy cost \\cite{finkelmann1997,deSimone:00}. Glasses, on the other hand, have shear moduli comparable to their compressional resistance ($\\sim 10^9$ Pa) and their opto-thermal elongations\/contractions are up to $\\pm 4$\\%, that is $\\lambda \\in (0.96, 1.04)$. The loss of dramatic elastic sensitivity to stimuli is compensated by the fact that their directors are anchored to the polymer matrix, at most convecting with the matrix as it is distorted. This allows for a feasible patterning of the director field at the initial time of cross-linking, with the subsequent guarantee that the chosen pattern will not be erased by the kind of soft elasticity present in rubbers.
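\n\nTo make Eq. \\ref{eq:def} concrete, the following short Python sketch (our illustration, not from the paper) builds the two-dimensional, in-plane version of the spontaneous deformation gradient and applies it for a representative glass:\n\n\\begin{verbatim}\nimport numpy as np\n\n# Spontaneous deformation gradient, restricted to the plane:\n# extension lambda along the director n, lambda^(-nu) across it.\ndef deformation_gradient(lam, nu, theta):\n    n = np.array([np.cos(theta), np.sin(theta)])  # in-plane director\n    return ((lam - lam ** (-nu)) * np.outer(n, n)\n            + lam ** (-nu) * np.eye(2))\n\n# A 4% contraction along n for a glass with nu = 2:\nF = deformation_gradient(0.96, 2.0, 0.0)\n# F is approximately diag(0.96, 1.085): contraction along the\n# director and elongation perpendicular to it.\n\\end{verbatim}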
\n\nIt is in the spirit of this `written-in' patterning that we have previously addressed the many routes to a cantilever-style actuator \\cite{WMCPRSa:10}, along with an in-depth treatment of such actuators whose properties are facilitated by the patterning of splay-bend or twist textures through the thickness of the material \\cite{MWvOCPRE:10}. Widening the net to allow for nematic director spatial variation across the surface of a thin sheet opens the door to emergent Gaussian curvature that manifests in a controllable way, growing conical shapes from $+1$ disclination defects \\cite{MBWPREr:10, MBWPRS:10}. In this paper we will continue the approach to a realizable blueprint for actively switching a flat sheet of nematic glass from its nascent, developable state to a curved or faceted and potentially complex shape. Due to the extreme smallness of the only inherent length scales in the problem -- those governed by the Frank energies, of order tens of nm -- and the relative smallness of the practical length scales -- such as how thin a sheet may feasibly be manufactured, of order microns -- any such blueprinted sheet could be envisaged as useful in applications from remotely operable peristaltic pumps in microfluidic circuits to macroscopic shape adaptation.\n\nIn lieu of considering the full realm of all possible two-dimensional nematic director fields, we choose to concentrate instead on those textures that are \\textit{locally} simplest to pattern and hence most amenable to application. The textures we allow, therefore, are either those with a locally constant director field or those with a locally circular field; the use of masking in the preparation stages allows for regions of these types of textures to be joined with themselves, and with one another, to the desired effect. Notice that, although the topological charge in the $+1$ director field is intimately related to the appearance of conical curvature in those textures, the rest of the zoo of the traditional types of disclination charges in 2D nematics is disallowed by this restriction, as the smoothly varying realizations of these defects that minimize the Frank energy are highly non-trivial to manifest in a controllable way for patterning, particularly in groups of more than one. Furthermore, a simple symmetry argument shows that the emergent shape behavior of the smooth Frank-minimizing defects must exhibit more than simple point-like sources of Gaussian curvature. Consider such a Frank-minimizing defect field of disclination charge $m$. In the one-constant approximation this field may be defined simply by \\cite{deGennes}:\n\\begin{equation}\n\\phi = m\\theta + \\delta\n\\end{equation}\nwhere $\\phi$ is the director direction, $\\theta$ is the polar angle, and $\\delta$ is an arbitrary phase. If we now locally rotate the director at each point by an amount $\\Delta \\phi$, the new texture satisfies $\\phi + \\Delta \\phi = m\\theta + \\delta$ (keeping $\\phi$ for the new director angle). So long as $m \\neq 1$ this form may be recast as a global, solid-body rotation:\n\\begin{equation}\n\\phi - \\frac{\\Delta \\phi}{m-1} = m \\left( \\theta - \\frac{\\Delta \\phi}{m-1} \\right) + \\delta\n\\end{equation}\nimplying that local and global rotations for these defected textures are equivalent. In particular, a local rotation of $\\pi\/2$ is \\textit{also} equivalent to interchanging the roles of heating and cooling, as the director and perpendicular directions are swapped (see remarks following Eq. \\ref{eq:def}). 
Thus any shapes emergent from these textures must be the same up to solid-body rotations upon either heating or cooling from the flat state. As a consequence, simple point charges of Gaussian curvature -- which produce rotationally distinct results upon heating and cooling -- are disallowed, and more complicated morphologies must result from smooth Frank-minimizing defects with $m \\neq 1$. Indeed, by considering the metric tensor for the distorted space, $\\uuline{g} = \\uuline{\\lambda}^{T} \\uuline{\\lambda}$, one may show that the Gaussian curvature is distributed. In the coordinates of the reference space, it is:\n\\begin{equation}\n\\kappa = \\frac{m(m-1) \\left(\\lambda^{2(1+\\nu)}-1\\right)}{r^2\\lambda^2}\\cos\\big[2(m-1)\\theta\\big]\n\\end{equation}\nfor a Frank-minimizing defect of charge $m$ \\cite{unpub}.
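\n\nAs a numerical illustration (our own sketch, under the stated one-constant approximation), the distributed curvature above is straightforward to evaluate:\n\n\\begin{verbatim}\nimport numpy as np\n\n# Gaussian curvature of a Frank-minimizing defect of charge m,\n# in reference-space polar coordinates (r, theta).\ndef gaussian_curvature(r, theta, m, lam, nu):\n    return (m * (m - 1) * (lam ** (2 * (1 + nu)) - 1)\n            \/ (r ** 2 * lam ** 2) * np.cos(2 * (m - 1) * theta))\n\n# e.g. an m = -1\/2 defect with a 4% contraction and nu = 2: the\n# curvature oscillates in sign with theta rather than being point-like.\nk = gaussian_curvature(1.0, 0.0, -0.5, 0.96, 2.0)\n\\end{verbatim}\n\n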
In their stead, we will construct ``piece-wise constant\" stand-ins that play the same role, and in so doing demonstrate that our restricted set of director patterns is enough to allow for the kind of active material blueprinting we seek.\n\nWe will take a constructionist view of this class of textures and the ways they can be combined with one another, first exploring possible component pieces, then synthesizing simple textures, and finally building complicated combinations from them. Hence, the basic building blocks of the larger, complete textures will be addressed in Section II, and an examination of the point-defected textures that can be constructed from them follows in Section III. Section IV presents a guide to the intuition for designing a switchable shape blueprint from multiple such constructed defects and then considers some examples of practically and theoretically relevant nematic glass textures that can be constructed through the combination of these point defects. We conclude and discuss in Section V.\n\n\\section{Elemental Building Blocks for Point Defects}\\label{sect:blocks}\n\nFollowing this constructionist philosophy, we begin by considering the constituent pieces from which we will compose more interesting textures. Since we expect to be able to fit these pieces together, initially, around a point to create a point-defected structure, we consider wedges of material with wedge angle $\\theta$. Under spontaneous strain, the wedges we consider will deform in a self-similar way with respect to their director pattern, possibly allowing for a change in $\\theta$.\n\n\\begin{figure}[!ht]\n\\centerline{\\includegraphics[width=9cm]{wedges.eps}}\n\\caption{Three representative textured wedges whose angular extent varies with imposed spontaneous strain. The nematic director lies along the lines shown. On the left, the nematic director lies along concentric circles and the texture is simply cut from one considered in previous work. In the middle, the wedge contains a line of rank-1 connection of the nematic director, and on the right the director is trivially aligned normal to the line bisecting the wedge angle. All three cases may also occur with a director field perpendicular to that shown: radial director lines on the left, rank-1 connected with the cusp pointing away from the wedge tip and the director normal to the wedge boundaries in the middle, and with the director aligned along the wedge angle's bisector on the right.}\n\\label{fig:wedges}\n\\end{figure}\n\n\\subsection{Slices of Cone\/Anti-cone Textures}\n\nThe first such wedge we consider is simply a slice taken from a pattern of concentric circles (Fig. \\ref{fig:wedges}, left). The deformation and strain-response of a complete $2\\pi$ texture of such a ``wedge\" is well-characterized by the adoption of conical or anti-conical shapes \\cite{MBWPREr:10}, where in the conical case the Gaussian curvature is related to the resulting cone's opening angle by $K = 2\\pi(1-\\sin \\phi_c)$. Since a point charge of Gaussian curvature can be thought of as an angular deficit or surplus around that point, we may expect that a partial wedge of this texture must exhibit a change in its wedge angle upon spontaneous strain. Indeed, consider an arc of the material a distance $r$ from the wedge tip such that the director always lies along a tangent. Initially, the length of this arc is simply $r\\theta$. Because this arc always coincides with the director, after strain its new natural state will have a length of $s' =\\lambda r \\theta$. Meanwhile, the radii to this arc from the tip of the wedge coincide with the perpendicular to the director, and hence the new distance from wedge tip to arc is $r' = \\lambda^{-\\nu} r$. The new wedge angle is thus:\n\n\\begin{equation}\n\\theta' = s'\/r' = \\lambda^{1+\\nu} \\theta\n\\end{equation}\n\nThis is consistent with the conclusions drawn about a full texture of concentric circles \\cite{MBWPREr:10}, as taking $\\theta = 2\\pi$ here leads to an angular deficit, and hence Gaussian curvature, of $2\\pi(1 - \\lambda^{1+\\nu})$ as required. Note also that this wedge texture may be replaced by one in which the director lies everywhere along radii emanating from the wedge tip and the director perpendiculars lie along concentric circles, with no change in the conclusions other than a reversal of the effect of the strain, i.e. $\\theta' = \\lambda^{-1-\\nu} \\theta$.\n\n\\subsection{Rank-1 Connected Wedges}\n\n\\begin{figure}[!ht]\n\\centerline{\\includegraphics[width=7cm]{rank1-connected.eps}}\n\\caption{Rank-1 connection. A disc of material is adorned with a rank-1 connected director pattern in (a), with the director field meeting the boundary of rank-1 connection with angle $\\alpha_0$. Upon the application of spontaneous strain (b), the angle across this boundary must change to $\\alpha_1$, and the disc deforms to a stylized heart shape.}\n\\label{fig:rankone}\n\\end{figure}\n\nNext, we consider a wedge adorned with the director pattern shown in Figure \\ref{fig:wedges}, middle. This ``rank-1 connected\" wedge is characterized by two regions of simple parallel director fields joined across the wedge-bisecting line (dashed line in Fig. \\ref{fig:wedges}). Note that, in order for the resultant strains to be compatible, the angle at which each of the two separate regions meets the bisecting line must be the same \\cite{KBmicrostructure}. This condition is known as rank-1 connectedness (Fig. \\ref{fig:rankone}a). Note also that if the wedge angle is $\\theta$, then by rank-1 connectedness the angle between the director and the bisecting line is $\\theta\/2$. How does spontaneous strain affect such an object? Consider a right triangle formed with the nematic director lying along one side, the director perpendicular lying along another, and the hypotenuse lying along the wedge-bisecting line. The angle between the hypotenuse and the director side is, as just stated, $\\theta\/2$. Prior to the imposition of spontaneous strain, let the length of the director side be $p$, and that of the director-perpendicular side, $q$. 
\subsection{Rank-1 Connected Wedges}

\begin{figure}[!ht]
\centerline{\includegraphics[width=7cm]{rank1-connected.eps}}
\caption{Rank-1 connection. A disc of material is adorned with a rank-1 connected director pattern in (a), with the director field meeting the boundary of rank-1 connection with angle $\alpha_0$. Upon the application of spontaneous strain (b) the angle across this boundary must change to $\alpha_1$, and the disc deforms to a stylized heart shape.}
\label{fig:rankone}
\end{figure}

Next, we consider a wedge adorned with the director pattern shown in Figure \ref{fig:wedges}, middle. This ``rank-1 connected'' wedge is characterized by two regions of simple parallel director fields joined across the wedge-bisecting line (dashed line in Fig. \ref{fig:wedges}). Note that, in order for the resultant strains to be compatible, the angle at which each of the two separate regions meet the bisecting line must be the same \cite{KBmicrostructure}. This condition is known as rank-1 connectedness (Fig. \ref{fig:rankone}a). Note also that if the wedge angle is $\theta$, then by rank-1 connectedness the angle between the director and the bisecting line is $\theta/2$. How does spontaneous strain affect such an object? Consider a right triangle formed with the nematic director lying along one side, the director perpendicular lying along another, and the hypotenuse lying along the wedge-bisecting line. The angle between the hypotenuse and the director side is, as just stated, $\theta/2$. Prior to the imposition of spontaneous strain, let the length of the director side be $p$, and that of the director-perpendicular side, $q$. After strain, these sides will deform simply to lengths of $\lambda p$ and $\lambda^{-\nu} q$, respectively. Therefore the new half-wedge angle is related to the original by:

\begin{eqnarray}
\tan(\theta'/2) = \lambda^{-1-\nu} \tan(\theta/2) \\
\theta' = 2 \tan^{-1} \left( \lambda^{-1-\nu} \tan(\theta/2) \right)
\label{eq:r1wedge}
\end{eqnarray}

As in the case of the wedge textured with concentric circles, the spontaneous strain gives rise to a change in the angular extent of the wedge (Fig. \ref{fig:rankone}b). An initial flat state composed of $2\pi$ radians worth of such wedges would hence develop an angular deficit or surplus after spontaneous strain and exhibit the conical (or anti-conical, respectively) behavior associated with a point charge of Gaussian curvature. Note that a rank-1 connected wedge patterned with a director field perpendicular to that considered, such that the director is normal to the wedge boundaries, will distort its wedge angle in the opposite sense with respect to $\lambda$:
\begin{equation}
\theta' = 2 \tan^{-1} \left( \lambda^{1+\nu} \tan(\theta/2) \right).
\end{equation}

\subsection{Triangle Wedges}

Finally, consider the trivial, or ``triangle'' wedge pictured on the right of Figure \ref{fig:wedges}, adorned entirely with one region of a simply parallel director field. In this case, it is easy to see intuitively that the wedge angle must change under spontaneous strain as, for example, the director lines shorten and the perpendicular lines elongate. In order to quantify this intuition, consider again a right triangle, analogously to the middle case, but this time with vertex at the wedge tip, one side along the wedge-bisecting line (which coincides with the director perpendicular), one side along the director and the hypotenuse along the edge of the wedge. The angle between the director-perpendicular side and the hypotenuse is now $\theta/2$ and the argument goes through as in the case of the rank-1 connected wedges with $\lambda$ and $\lambda^{-\nu}$ swapped. Hence, we have:

\begin{eqnarray}
\tan(\theta'/2) = \lambda^{1+\nu} \tan(\theta/2) \\
\theta' = 2 \tan^{-1} \left( \lambda^{1+\nu} \tan(\theta/2) \right)
\end{eqnarray}

Note that another trivial wedge exists as a perpendicular version of the triangle wedge, where the wedge-bisecting line coincides with the director. In this case, the roles of $\lambda$ and $\lambda^{-\nu}$ are swapped \textit{again} and the relation between $\theta$ and $\theta'$ becomes identical to that given by Eq. \ref{eq:r1wedge}.
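For reference, the three wedge-angle transformation laws derived in this section can be collected in a few lines; the following is a minimal sketch, with the convention (as above) that the perpendicular versions follow by exchanging $\lambda$ and $\lambda^{-\nu}$.
\begin{verbatim}
# Sketch: the three wedge-angle laws of Section II in one place.
# (Perpendicular versions follow by swapping lam and lam**(-nu).)
import math

def circle_wedge(theta, lam, nu):     # concentric-circle texture
    return lam ** (1 + nu) * theta

def rank1_wedge(theta, lam, nu):      # rank-1 connected wedge
    return 2 * math.atan(lam ** -(1 + nu) * math.tan(theta / 2))

def triangle_wedge(theta, lam, nu):   # trivial parallel-director wedge
    return 2 * math.atan(lam ** (1 + nu) * math.tan(theta / 2))

# Sanity check: all three laws reduce to the identity at lam = 1.
for f in (circle_wedge, rank1_wedge, triangle_wedge):
    assert abs(f(1.0, 1.0, 0.5) - 1.0) < 1e-12
\end{verbatim}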
\section{Constructing Point Defects from Material Wedges}\label{sect:points}

With these wedge-shaped building blocks in hand, and an understanding of how they deform under the imposition of spontaneous strain, it is a relatively straightforward matter to put together enough wedges to reach an angular extent of exactly $2\pi$ at the shared tip in the unstrained state, and hence to construct a complete texture which exhibits a geometric (curvature) point defect under spontaneous strain. There are, however, constraints -- namely, two wedges may only be stitched together if the nematic director field is the same on both sides of the boundary, or if the boundary lies along a line of rank-1 connectedness in the director field. Hence, a wedge adorned with concentric circles may not be joined directly to a radial wedge, nor a triangle wedge with angular extent $\pi/2$ to a triangle wedge with angular extent $\pi/4$, but a rank-1 connected wedge may be joined to another, or a triangle wedge to one decorated with concentric circles.

We proceed by categorizing these constructed textures according to their corresponding disclination defect charge. Note that, because many of the final textures will include at least one rank-1 connected border, the concept of a disclination defect is somewhat ambiguous -- when the angle of the director field changes discontinuously by an (apparent) amount $\alpha$, it may instead be considered to have changed by an amount $\alpha - \pi$, or indeed, $\alpha + n \pi$ for any integer $n$. In order to sidestep this ambiguity we will always consider the angle change across such a discontinuous boundary to be either the smallest positive or largest negative value available. We will consider each of these two cases separately. We assume that the particular choice of angle change is made consistently for textures with more than one such discontinuous boundary.

\subsection{$m < 0$ and Polygonal $m=1$ Defects}

\begin{figure}[!ht]
\centerline{\includegraphics[width=6cm]{minushalf.eps}}
\caption{Nematic glass texture corresponding to a disclination defect of charge $-1/2$, up to jump angle ambiguities. The texture is composed of three rank-1 connected wedges, each with an internal connection angle $\pi/3$. Lines of rank-1 connection are illustrated as dashed, and thick gray lines mark wedge boundaries.}
\label{fig:minushalf}
\end{figure}

As noted above, stitching rank-1 connected wedges to one another is permitted by our constraints, and so one of the most straightforward available $2\pi$ constructions is to simply take $n$ congruent rank-1 connected wedges, each of angular extent $2\pi/n$, and stitch them together. The resulting complete texture qualitatively resembles a disclination defect of charge $1-n/2$ for $n \geq 3$ and indeed, by consistently choosing to assign the negative-valued angle change across the rank-1 connected boundaries, this is precisely the disclination charge of the texture (see Fig. \ref{fig:minushalf} for an $m=-1/2$ example). The deformations induced by spontaneous strain in such a texture may be directly calculated from the behavior of the constituent wedges. Conveniently, all these wedges are congruent for this texture and we immediately arrive at a total angular deficit, and hence Gaussian curvature, of:
\begin{equation}
\Delta \theta_{tot} = K_n = 2\pi - 2n \tan^{-1} \left( \lambda^{-1-\nu} \tan(\pi/n) \right)
\end{equation}
concentrated at the point in the middle of the texture where all of the wedge tips meet.
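A quick numerical check of this deficit, with the same illustrative parameter values assumed earlier, is sketched below; $n = 3$ realizes the $m=-1/2$ texture of Fig. \ref{fig:minushalf}.
\begin{verbatim}
# Sketch: angular deficit for n congruent rank-1 connected wedges.
import math

def K_rank1(n, lam, nu):
    return 2 * math.pi - 2 * n * math.atan(
        lam ** -(1 + nu) * math.tan(math.pi / n))

print(K_rank1(3, 0.95, 0.5))  # ~ -0.19 rad: a surplus, i.e. an anti-cone
print(K_rank1(3, 1.00, 0.5))  # ~0: flat in the absence of strain
\end{verbatim}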
Notice, on the other hand, that had we chosen to assign the positive-valued angle change at all the discontinuous boundaries, we could have concluded that all these textures, regardless of $n$, have disclination charge $+1$. This is intuitive as well, as consideration of the perpendicular field to the director field described above gives a texture of concentric regular polygons, qualitatively very reminiscent of the concentric circles of a traditional $+1$ disclination defect. From our constructionist point of view, this new texture could have been composed from scratch by stitching together $n$ triangle wedges of angular extent $2\pi/n$. Unsurprisingly, the total angular deficit produced by this structure -- as can consistently be seen by either combining $n$ triangle wedges or swapping the roles of the director and perpendicular in the texture considered above -- is given by:
\begin{equation}
\Delta \theta_{tot} = K_n = 2\pi - 2n \tan^{-1} \left( \lambda^{1+\nu} \tan(\pi/n) \right).
\label{eq:square}
\end{equation}
Furthermore, as $n$ tends to $\infty$ we identically recover a $+1$ disclination defect texture from our concentric polygons. As required, $ \displaystyle \lim_{n \rightarrow \infty} K_n = 2\pi (1-\lambda^{1+\nu})$ \cite{MBWPREr:10}. Likewise, the same limit taken for the texture composed of congruent rank-1 connected wedges recovers a radial $+1$ disclination defect texture, and the limiting value of the Gaussian curvature also matches as appropriate.

A concrete example, $n=4$ --- the square representation of an $m=1$ defect, Fig.~\ref{fig:conedpyramid}(a) --- serves to show that all such polygonal $+1$ defects must give 3-D structures that relax into circular cones, because the bend energy is convex.
\begin{figure}[!ht]
\centerline{\includegraphics[width=8cm]{pyramid-relax-b.eps}}
\caption{(a) A square representation of a $m = 1$ defect. (b) Ignoring bend energy, on cooling the defect rises to being a pyramidal cone. (c) Relieving the bend energy of the creased edges of the pyramid yields a circular cone, where the integral lines of $\vec{n}$ are the cusped trajectories shown.}
\label{fig:conedpyramid}
\end{figure}
Ignoring the cost of bend, we expect the $n=4$-sided defect to rise into a square pyramidal cone, Fig.~\ref{fig:conedpyramid}(b). The Gaussian curvature localised at the vertex is $K_4 = 2\pi\left(1 -\frac{4}{\pi} \tan^{-1}(\lambda^{1+\nu})\right)$ from Eq.~(\ref{eq:square}). On heating there is a contraction along lines such as $\text{AC} = L \rightarrow \text{A'C'} = \lambda L$ and elongation along $\text{OB} = L \rightarrow \text{O'B'} = \lambda^{-\nu} L$. Considering the triangle $\text{O'OB'}$ (where $\text{OB'} = \text{A'C'}$), we find that $\phi_p = \sin^{-1}(\text{OB'}/\text{O'B'}) = \sin^{-1}(\lambda^{1+\nu})$ is the pyramidal opening angle. We note the line length $\text{O'C'} = \frac{1}{\sqrt{2}} (\lambda^2 + \lambda^{-2\nu})^{{\textstyle \frac{1}{2}}} \text{OC} = (\lambda^2 + \lambda^{-2\nu})^{{\textstyle \frac{1}{2}}}L$ since the line $\text{OC}$ is at an angle $\pi/4$ with respect to $\vec{n}$ (rotate $\matr{\lambda}$ by $\pi/4$). Note that the $n = \infty$, i.e. circular, form of the $m=1$ defect gives circular cones of the same opening angle, $\phi_{\infty} = \sin^{-1}(\lambda^{1+\nu})$.
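The $n=4$ numbers quoted here are easy to verify numerically; the sketch below also confirms that the vertex curvature obtained from Eq.~(\ref{eq:square}) agrees with the deficit found by combining four triangle wedges directly (parameter values again assumed only for illustration).
\begin{verbatim}
# Sketch: square-pyramid numbers for the n = 4 polygonal +1 defect.
import math

lam, nu = 0.95, 0.5
K4 = 2 * math.pi * (1 - (4 / math.pi) * math.atan(lam ** (1 + nu)))
phi_p = math.asin(lam ** (1 + nu))     # pyramidal opening angle

# The same deficit from four triangle wedges of extent pi/2 each:
K4_wedges = (2 * math.pi
             - 8 * math.atan(lam ** (1 + nu) * math.tan(math.pi / 4)))
assert abs(K4 - K4_wedges) < 1e-12

print(K4, math.degrees(phi_p))         # ~0.30 rad, ~68 degrees
\end{verbatim}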
However, the pyramidal cone has its bend localised into 4 creases emanating from the vertex $\text{O}'$. Since the bend energy density is a quadratic function of the curvature, this convexity dictates that the energy be reduced by delocalising the curvature over the whole surface of the cone, that is, by forming a circular cone, Fig.~\ref{fig:conedpyramid}(c). The integral lines of the director are not concentric circles centred on the tip, but are cusped lines. Lengths from the tip to the integral curves include $L(\lambda^2 + \lambda^{-2\nu})^{{\textstyle \frac{1}{2}}}$ to cusps at points like $\text{C}'$, and $L\lambda^{-\nu}$ to points $\text{A}'$, $\text{B}'$ etc. Comparing the Gaussian curvature $K_4$ (which does not change on relaxation from the pyramid) with that of a circular cone of opening angle $\phi_c$, i.e. with $K_c = 2\pi(1-\sin\phi_c)$, we have for the opening angle of the relaxed circular cone $\phi_c = \sin^{-1}\left[ \frac{4}{\pi}\tan^{-1}(\lambda^{1+\nu})\right]$, which is indeed flat, $\phi_c = \pi/2$, when $\lambda = 1$.

\subsection{$m = +1/2$ Defects}

\begin{figure}[!ht]
\centerline{\includegraphics[width=7cm]{halfstad.eps}}
\caption{Nematic glass textures corresponding to a disclination defect of charge $+1/2$, up to jump angle ambiguities. The texture on the left, a hemi-stadium, is composed of a cone-textured `wedge' subtending an angle $\pi$ and a trivial `triangle' wedge subtending the remaining $\pi$. The texture on the right is composed of two rank-1 connected wedges subtending $\pi/2$ each, and a trivial wedge subtending the remaining $\pi$ radians. In both cases lines of rank-1 connection are denoted by a dashed line and wedge boundaries by thick gray lines.}
\label{fig:halfstad}
\end{figure}

By making use of wedges textured with concentric circle director patterns, we may also construct a version of a $+1/2$ charged disclination, reminiscent of a half-stadium, with the available constituent wedges (Fig. \ref{fig:halfstad}, left). Here the region of the texture adorned with a constant director field aligned perpendicular to the joining boundary, subtending $\pi$ radians, does not change its angular extent upon being strained. The other half of the texture, composed of concentric half-circles, undergoes an angular change of $\pi(1 - \lambda^{1+\nu})$, and hence this is the total curvature generated at the defect point of the texture:

\begin{equation}
\Delta \theta_{tot} = K = \pi(1 - \lambda^{1+\nu}),
\end{equation}
leading to a cone opening angle in the final, deformed state for this texture of $\sin \phi_c = {\textstyle \frac{1}{2}} (1+\lambda^{1+\nu})$.

More discrete, piece-wise constant versions of this texture may be constructed as well, in much the same way as the concentric regular polygon analogs of a smooth $+1$ texture discussed previously (Fig. \ref{fig:halfstad}, right). In this case, we again start with a $\pi$-radian region of constant director field aligned normal to the joining boundary. Instead of joining across the boundary with a semi-circular pattern, we join to a semi-polygonal pattern, obtained by slicing an \textit{even}-sided polygonal $+1$ in half through its defect, normal to a pair of its polygonal sides. An even-sided polygon is required to ensure that opposite component wedges may have their director field aligned parallel, and hence join smoothly with the other, constant-field region. Since, by construction, we may cut such a texture into pieces we have already dealt with, the curvature is simply half that of a full concentric-polygonal texture:

\begin{equation}
\Delta \theta_{tot} = K_n = \pi - n \tan^{-1} \left( \lambda^{1+\nu} \tan(\pi/n) \right)
\end{equation}
where $n$ is the number of sides of a \textit{complete} polygon, not just the number present on the polygonal side of the texture. As pointed out above, $n$ must be even as well.
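Both the smooth and the discrete $+1/2$ results are simple to evaluate; the sketch below (with the same assumed parameters as before) also illustrates the convergence of the semi-polygonal curvature to the half-stadium value as $n$ grows.
\begin{verbatim}
# Sketch: smooth half-stadium +1/2 defect and its discrete analogues.
import math

lam, nu = 0.95, 0.5
K_smooth = math.pi * (1 - lam ** (1 + nu))
phi_c = math.asin(0.5 * (1 + lam ** (1 + nu)))   # relaxed cone angle

def K_half_polygon(n, lam, nu):
    assert n % 2 == 0   # n counts the sides of the *complete* polygon
    return math.pi - n * math.atan(
        lam ** (1 + nu) * math.tan(math.pi / n))

print(K_smooth, math.degrees(phi_c))
# Approaches K_smooth as n grows:
print(K_half_polygon(4, lam, nu), K_half_polygon(64, lam, nu))
\end{verbatim}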
\subsection{$m > 1$ Defects}

We have demonstrated that our simple basis set of wedges allows for the construction of many of the possible disclination point defects, namely all those with charge $\leq 1$. The rest of the possible disclinated director fields, with charge $\geq 3/2$, may not be realized with our piece-wise constant components. In order to understand why this is so, consider the feasibility of promoting our 2D nematic textures to a 2D smectic-A state. For the smectic-A phase that develops to sit in a local minimum of the free energy, it is a necessary condition that the smectic layers be able to adopt a constant inter-layer spacing \cite{deGennes}. Because our component wedges either have locally constant director fields or are decorated with regions of concentric circles, they are compatible with this requirement and are thus, in principle, compatible with smectic layers. On the other hand, smectic textures are \textit{incompatible} with disclination charges greater than one \cite{Mermin79}, which necessarily lead to a divergent layer-compression energy. Hence the construction of these higher disclination defects with these simple wedge components is disallowed.

\subsection{Charge-Free Bending Defects}

\begin{figure}[!ht]
\centerline{\includegraphics[width=6cm]{benddef.eps}}
\caption{Nematic glass texture corresponding to a geometric point defect without associated topological disclination charge, resulting from a stepwise ``smoothing'' of a discontinuous bend in the nematic director direction. The texture is composed of three rank-1 connected wedges -- one with connection angle $\theta$ and two each with connection angle $\frac{\pi}{4} - \frac{\theta}{2}$. The remaining $\pi$ radians are accounted for by a pair of $\pi/2$ trivial regions. Lines of rank-1 connection are denoted by a dashed line and wedge boundaries by thick gray lines.}
\label{fig:benddef}
\end{figure}

Thus far, we have constructed complete textures from our simple constituent wedges designed to generate an angular surplus or deficit, and hence Gaussian curvature, by creating disclination defects. As it happens, point-sources of curvature may arise in other ways. These new point-sources arise from a step-wise smoothing of the discontinuous director-field bending associated with a line of rank-one connection. We have chosen to name these textures ``charge-free bending defects'' (Fig. \ref{fig:benddef}), where here `charge' refers to disclination charge and `defect' to a curvature defect. By calculating the angular change that results from each of the five wedges, we arrive at a formula for the curvature. For an initial rank-one line whose director lines meet the boundary at an angle $\theta$:

\begin{eqnarray}
\Delta \theta_{tot} &=& K = \pi - 2\bigg(\tan^{-1} \left[ \lambda^{-1-\nu} \tan \theta \right] \nonumber \\
&+& 2\tan^{-1} \left[ \lambda^{1+\nu} \tan \left( \frac{\pi}{4} - \frac{\theta}{2} \right) \right]\bigg).
\end{eqnarray}

Note that, despite the complicated form taken, the limiting value for $\lambda = 1$ remains appropriately $K=0$. Furthermore, the overall strength of the curvature generated by one of these charge-free bending defects is somewhat less than that seen in the disclinated director fields treated earlier, as counterbalancing terms in the strain are present. Finally, it is worth pointing out that the limit $\theta \rightarrow 0$ recovers a discrete $+1/2$ disclination defect, as the bend becomes so strong that the initial rank-one boundary disappears altogether. Accordingly, a different choice of angle-change accounting across that boundary for the `charge-free' case leads to a disclination charge of $+1/2$.
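These limits are easy to check numerically: the sketch below verifies that the curvature vanishes at $\lambda = 1$, that it is weaker than the disclinated textures of comparable size, and that $\theta \rightarrow 0$ reproduces the discrete $+1/2$ result of the previous subsection (parameters again illustrative).
\begin{verbatim}
# Sketch: curvature of a charge-free bending defect.
import math

def K_bend(theta, lam, nu):
    a = math.atan(lam ** -(1 + nu) * math.tan(theta))
    b = math.atan(lam ** (1 + nu) * math.tan(math.pi / 4 - theta / 2))
    return math.pi - 2 * (a + 2 * b)

lam, nu = 0.95, 0.5
print(K_bend(math.pi / 6, lam, nu))   # ~0.06 rad: weaker than a +1/2
print(K_bend(math.pi / 6, 1.0, nu))   # ~0: no curvature without strain
# theta -> 0 recovers the discrete (n = 4) +1/2 defect:
print(K_bend(1e-12, lam, nu), math.pi - 4 * math.atan(lam ** (1 + nu)))
\end{verbatim}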
\subsection{Generalizations and Exotica}

\begin{figure}[!ht]
\centerline{\includegraphics[width=9cm]{exotica.eps}}
\caption{Nematic glass textures corresponding to more exotic combinations of the fundamental wedges, each leading to a geometric curvature defect after spontaneous strain is imposed. First, radially-textured wedges join rank-1 connected zones. Second, a variant of the ``half-stadium'' representation of a $+1/2$ charge disclination with closed director lines. Third, a variant of the $-1/2$ disclination with unequal wedge angles. In all cases, lines of rank-1 connection are denoted by a dashed line and wedge boundaries by thick gray lines.}
\label{fig:exotica}
\end{figure}

Beyond the simple reconstruction of topological charges or bend-smoothing, there is a plethora of variants supported by the available building-block wedges. One might use the heretofore unused radial version of the concentric circular arc texture to bridge the gap between smaller rank-one connected wedges (Fig. \ref{fig:exotica}, left). One can distort a $+1/2$ defect by increasing or reducing the region covered by concentric arcs and plugging the difference with rank-one connected wedges instead of a trivial constant piece, as in the hemi-stadium $+1/2$ (Fig. \ref{fig:exotica}, middle). Or one might consider playing with the relative sizes of the regions in one of the piece-wise constant, negatively charged disclination textures (Fig. \ref{fig:exotica}, right). This last one turns out to be of particular use in the blueprinting scheme that follows in the next section.

\section{Blueprinting with Combinations of Point Defects: Texture and Shape}\label{sect:textures}

Having demonstrated the ways in which our piece-wise constant building-block wedges may be combined to produce curvature effects, we are now in a position to consider higher-level combinations, that is, combining multiple such points of curvature to achieve a desired shape. Matching multiple point defects together in a single texture using the building blocks at hand is simply a matter of allowing finite polygonal patches in the texture, in addition to the infinite wedges discussed earlier. These finite polygons are again restricted to be adorned by piece-wise constant director fields, and again must be linked to one another, and to any wedges, by rank-one connected boundaries. In this case, the grouping of polygonal vertices plays the same role as wedge tips in generating curvature. As such, there is an enormous range of possible morphologies that can emerge from the union of several such points of curvature, corresponding to the myriad ways of tiling the plane with (potentially irregular) polygons and infinitely extended regions.

In order to guide the potential design of blueprints for these nematic glass sheets, it is worth noting that, due to the restriction of the allowed director patterns on the constituent pieces, treating the integral lines of the director field as the contour lines of a topographic map of the target shape is always a stress-free solution. This is because the simple piece-wise textures chosen may always conform to any imposed spontaneous strain by adding a $z$ component to the distance between director field integral lines.
More abstractly, this is a manifestation of the fact that, by construction, our textures are allowable 2D smectics, and 2D smectics may be represented as multiply leaved height functions through their phase field \cite{Kamien}. As discussed in Section \ref{sect:points} on comparing pyramidal or cone-like outcomes for the concentric polygon texture, the primary reason a texture may not adopt this contour-line-like solution is the desire to minimize the bending energy once the metric is satisfied and stress is eliminated. In a texture with many defect points, however, it becomes impossible to freely choose a bending minimum relative to one defect without imposing costly stretch at another. In this case, the final shape will more closely resemble the topographic map, as the overall minimum energy will require balancing (relatively cheap) bend costs against (relatively expensive) stretch. The more defects present, the more closely the final shape will hew to the topographic realization, as there is ever more of a potential stretch price to bear.

A simple example of this principle is a texture in which the plane is simply tiled by regular squares, each one containing a concentric-square pattern and a polygonal $+1$ defect at the center. The vertices of this tiling correspond to $-1$ defects. If each of the individual $+1$ defects became a smooth cone, as would be the case were it isolated, then stretch penalties would proliferate all along the tile boundaries. Instead, they retain a pyramid shape, and the overall texture becomes an array of pyramids. Interestingly, the bend energy still has a role to play here: these pyramids may grow up out of the plane or down from it, and the minimization of the bend energy leads to an anti-ferromagnetic Ising model interaction on the up/down-ness of the pyramids. The final shape is thus a pyramidal square egg-crate. Of course, if we had chosen to tile the plane with hexagons instead, then the anti-ferromagnetic Ising model interaction would be frustrated and a multitude of degenerate ground states would ensue.
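The curvature bookkeeping of this tiling can be sketched as follows: per unit cell there is one $+1$ center and one $-1$ vertex, and their angular deficits cancel exactly (since $\tan^{-1}x + \tan^{-1}(1/x) = \pi/2$), consistent with an egg-crate that remains flat on average.
\begin{verbatim}
# Sketch: per-unit-cell curvature balance of the square egg-crate.
import math

def K_polygon(n, lam, nu):   # concentric n-gon (+1) center
    return 2 * math.pi - 2 * n * math.atan(
        lam ** (1 + nu) * math.tan(math.pi / n))

def K_rank1(n, lam, nu):     # n rank-1 wedges: the -1 vertex has n = 4
    return 2 * math.pi - 2 * n * math.atan(
        lam ** -(1 + nu) * math.tan(math.pi / n))

lam, nu = 0.95, 0.5
cell = K_polygon(4, lam, nu) + K_rank1(4, lam, nu)
print(cell)   # ~0: deficits and surpluses cancel in each unit cell
\end{verbatim}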
\subsection{True Blueprinting: An Emergent Pyramid}

\begin{figure}[!ht]
\centerline{\includegraphics[width=9cm]{pyramids_decorated.eps}}
\caption{A simple example of a blueprinted shape, a pyramid flanked by a square crumple pattern. The director field blueprint is shown on the left: it is distinguished from the simple cone-producing concentric square texture by a $\pm 1/2$ disclination pair, seen in the blow-up. The presence of these extra defects manifests as a terminated trough on one corner of the pyramid that subsequently spirals around repeatedly -- contour blow-up, right-hand side; darker shading corresponds to lower elevation.}
\label{fig:pyramids}
\end{figure}

In the spirit of the egg-crate morphology discussed above, we wish to present a simple example of the manner in which the director field may be used to blueprint a sheet of nematic glass in order to realize a desired shape. In addition, we wish to demonstrate that a non-trivial blueprinted object is achievable with only a small number of simple masking steps in the preparation stage. Consider a texture of concentric squares with a piece-wise constant $\pm 1/2$ pair situated some distance from the central defect along one of the lines of rank-one connection (Fig. \ref{fig:pyramids}, left and blow-up detail). Such a texture is easy to prepare, requiring only four steps and masking boundaries that are simple straight lines, or one with a small zig-zag that seeds the $\pm 1/2$ pair. Ignoring the effect of bend energy minimization in the thin sheet, this texture will produce a classic pyramid rising above a flat plane that has been weakly crumpled into a shallow spiraling moat (Fig.~\ref{fig:pyramids}, right). As can be seen in close-up detail (Fig. \ref{fig:pyramids}, right blow-up), the half-charge disclination dipole produces the interior terminus of the spiraling moat. Accounting for the effect of the bend energy will lead to some smoothing of the creases near the pyramid tip, and a gradual fading of the moat into a conical skirt far from the defect dipole. Both of these effects can be dampened by the inclusion of more defect dipoles, for example at the other three corners of the pyramid, which in this case does not increase the complexity of the preparation -- in fact it is simpler, as only one mask boundary need be used.

\section{Discussion}

We have shown how a thin sheet of nematic glass may be prepared with constituent regions of texture that are simple to understand and to produce, and that work together to create an actively switchable, pre-programmable shape change, including the development of multiple points of Gaussian curvature in concert. Such an actively transformable sheet is theoretically realizable with features at any length scale above that dominated by the Frank energies -- tens of nm -- and is already possible at the micron scale, leading to a host of possible applications. It is our fervent hope that this new tool inspires clever new device design that fulfills the strong potential of nematic glasses.

The authors would like to thank Dick Broer and Carlos Sanchez for stimulating discussions. C.D.M. and M.W. acknowledge support from the EPSRC-GB.