\section{Acknowledgements}
\label{sec:acknowledgements}

The authors would like to thank our colleagues: Fran\c{c}oise Beaufays,
Alex Graves and Leif Johnson for helpful research discussions and Mike Schuster for help with wordpiece models.
\section{Analysis}
\label{sec:analysis}

We observe that a large part of the improvements described in this work come from a reduction in substitution errors.
Using wordpieces instead of graphemes results in an absolute 2.3\% word error rate improvement,
of which 1.5\% is due to fixing substitution errors. Including pronunciation and text data
improves the voice-search word error rate by an absolute 0.6\% and 0.6\% respectively, and all of these gains are
due to improvements in word substitution errors. Many of the corrected substitution errors seem to come from improved language modeling: words which
may sound similar but have different meanings given the text context. Some selected examples include
improvements with proper nouns: `barbara stanwick' recognized by a grapheme model is fixed when using
wordpieces to the correct name `barbara stanwyck'. Similar improvements are found when including
pronunciation data: `sequoia casino' to `sycuan casino', `where is there' to `where is xur',
and also when including text data: `soldier boy' to `soulja boy', `lorenzo llamas' to `lorenzo lamas'.
We also find that wordpieces capture longer-range language context than graphemes, as in improvements like
`tortoise and the hair' to `tortoise and the hare'.

\section{Conclusion}
\label{sec:conclusion}

We train end-to-end speech recognition models using the RNN-T, which predicts graphemes
or wordpieces and thus directly outputs the transcript from audio. We find that pre-training the RNN-T
encoder with CTC results in a 5\% relative WER improvement, and using a deeper
8-layer encoder instead of a 5-layer encoder further improves WER by 10\% relative.
We incorporate pronunciation data using a pre-training hierarchical-CTC loss which
includes phoneme targets and find this improves the voice-search WER by 5\% relative
with little impact on the voice-dictation task. To include
text-only data we pre-train the recurrent networks in the decoder as LSTM language models, resulting
in an overall 5\% relative improvement. We train wordpiece RNN-Ts with 1k, 10k and 30k wordpiece targets
and find that they significantly outperform the grapheme-based RNN-Ts. For comparison we use a baseline
speech recognizer with individual acoustic, pronunciation and language models with state-of-the-art WERs
of 8.3\% on voice-search and 5.4\% on voice-dictation.
With a 30k wordpiece RNN-T achieving
WERs of 8.5\% on voice-search and 5.2\% on voice-dictation, we demonstrate that a single end-to-end neural model
is capable of state-of-the-art streaming speech recognition.

\section{Experimental Setup} \label{sec:experiments}


We compare the RNN-T end-to-end recognizer with a conventional ASR system consisting of separate acoustic, pronunciation and language models.
The acoustic model is a CTC-trained LSTM that predicts context-dependent (CD) phonemes,
first fine-tuned with sequence discriminative training as described in~\cite{lstm_am3} and
further improved with word-level edit-based minimum Bayes risk (EMBR) proposed recently by Shannon~\cite{embr}.
Acoustic models are trained on a set of $\sim$22 million hand-transcribed anonymized
utterances extracted from Google US English voice traffic, which corresponds to $\sim$18,000
hours of training data. These include voice-search as well as voice-dictation utterances.
We use 80-dimensional log mel filterbank energy features computed every 10ms, stacked every 30ms to a single 240-dimensional acoustic
feature vector. To achieve noise robustness, acoustic training data is distorted as
described in~\cite{lstm_am}. The pronunciation model is a dictionary containing hundreds of thousands of US English word pronunciations transcribed by human experts.
Additional word pronunciations are learned from audio data using pronunciation
learning techniques~\cite{pronlearning}. For out-of-dictionary words, a G2P model is trained using the transcribed word pronunciations.
A 5-gram language model is trained with a text sentence dataset which includes untranscribed anonymized speech logs:
150 million sentences each from voice-search and voice-dictation queries, and anonymized typed logs including tens of billions of
sentences from Google search from various sources. The language model is pruned to 100 million n-grams with a target vocabulary of 4 million words,
and the various sources of text data are re-weighted using interpolation~\cite{interpolation} for
optimal word error rate performance. Single-pass decoding with a conventional WFST is carried out to generate recognition transcripts.


The RNN-T is trained with the same data as the baseline. The CTC encoder network is pre-trained with
the transcribed acoustic data, and as with the baseline acoustic model, the pronunciation model is used to
generate phoneme transcriptions for the acoustic data. The RNN-T decoder is pre-trained on the text-only data as an LSTM language model:
roughly half a billion sentences from the text data are sampled according to their count and the data source
interpolation weight (as optimized in the baseline). All RNN-T models are trained with LSTM networks in the TensorFlow~\cite{tensorflow} toolkit with
asynchronous stochastic gradient descent.
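As an illustration of the decoder pre-training data selection described above, the following sketch (with hypothetical array names and toy values; it is not the pipeline used in this work) draws sentences with probability proportional to their count re-weighted by the per-source interpolation weight:
\begin{verbatim}
import numpy as np

# Toy stand-ins for the real corpus statistics (illustrative only).
sentence_counts = np.array([120.0, 45.0, 3.0, 60.0])  # occurrences of each sentence
source_of_sentence = np.array([0, 0, 1, 1])           # data source of each sentence
interp_weight = np.array([0.6, 0.4])                  # per-source interpolation weights

# Sampling probability proportional to count x source weight.
p = sentence_counts * interp_weight[source_of_sentence]
p /= p.sum()

rng = np.random.default_rng(0)
sampled = rng.choice(len(sentence_counts), size=10, replace=True, p=p)
print(sampled)  # indices of sentences used for LSTM LM pre-training
\end{verbatim}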
Models are evaluated using the RNN-T beam search algorithm
with a beam of 100 for grapheme models and 25 for wordpiece models and a temperature of 1.5 on the softmax.
Word error rate (WER) is reported on a voice-search and a voice-dictation test set with roughly 15,000 utterances each.

\section{Introduction \label{sec:introduction}}
The current state-of-the-art automatic speech recognition (ASR) systems break down
the ASR problem into three main sub-problems: acoustic, pronunciation and language modeling.
Speech recognition involves determining the most likely word sequence,
$W=w_1,...,w_n$, given an acoustic input sequence, $\mathbf{x}=x_1,...,x_T$,
where $T$ represents the number of frames in the utterance:
\begin{equation}
	W^{*} = \argmax_W P(W|{\mathbf x}),
\end{equation}
which is typically decomposed into three separate models, as follows:
\begin{align}
	W^{*} &= \argmax_W \sum_\phi P({\mathbf x}, \phi | W) P(W) \\
	 &\approx \argmax_{W, \phi} p({\mathbf x}|\phi) P(\phi|W) P(W)
\end{align}
The acoustic model, $p({\mathbf x}|\phi)$, predicts the likelihood of the acoustic input
speech utterance given a phoneme sequence, $\phi$; for conditional models that
directly predict $P(\phi|{\mathbf x})$, the likelihood is typically replaced with a
scaled likelihood obtained by dividing the posterior by the prior, $P(\phi)$,
in so-called hybrid models~\cite{MorganBourlard95}.
Deep recurrent neural networks with long short-term memory (LSTM)
cells~\cite{lstm} have recently been shown to be ideal for this
task~\cite{lstm_am1, lstm_am2, lstm_am3}.
The pronunciation model, $P(\phi|W)$, is typically built from pronunciation
dictionaries curated by expert human linguists, with back-off to a
grapheme-to-phoneme (G2P) model~\cite{g2p} for out-of-dictionary words.
Finally, an N-gram model trained on text data may be used as a language model,
$P(W)$.

Recently, there has been considerable interest in training end-to-end models for
ASR~\cite{las, baidu, BahdanauChorowskiSerdyukEtAl16}, which directly output
word transcripts given the input audio.\footnote{In the context of this work,
we consider models that are all-neural and directly output word transcripts from
audio utterances as being end-to-end.}
Thus, these models are much simpler than conventional ASR systems, as a single
neural network can be used to directly recognize utterances, without requiring
separately-trained acoustic, pronunciation and language model components.
A class of architectures known as sequence-to-sequence
models~\cite{seq2seq} is particularly suited for end-to-end ASR, as these models include
an \emph{encoder} network which corresponds to the acoustic model of a
conventional system and a decoder network which corresponds to the language
model.

One drawback of typical encoder-decoder type architectures (e.g.,~\cite{las,
BahdanauChorowskiSerdyukEtAl16}) is that the entire input sequence is encoded
before the output sequence may be decoded, and thus these models cannot be used
for real-time streaming speech recognition.
Several streaming encoder-decoder architectures have been proposed previously,
including the neural transducer~\cite{nt}, the recurrent neural aligner
(RNA)~\cite{rna}, and the recurrent neural network transducer
(RNN-T)~\cite{rnnt1, rnnt2}.
In particular, these architectures allow the output to be decoded as soon as the first
input is encoded, without the additional latency incurred when
processing the
entire utterance at once.
In this work we only consider streaming recognition architectures, specifically
the RNN-T model.

Despite recent work on end-to-end ASR, conventional systems still remain the
state-of-the-art in terms of word error rate (WER) performance.
For example, in our previous work~\cite{rohit1} we evaluated a number of
end-to-end models including attention-based models~\cite{las} and
RNN-T~\cite{rnnt1, rnnt2} trained on $\sim$12,500 hours of transcribed training
data; although end-to-end approaches were found to be comparable to a
state-of-the-art context-dependent phone-based baseline on dictation test sets,
these models were found to be significantly worse than the baseline on
voice-search test sets.
End-to-end systems are typically trained using transcribed acoustic data sets,
which are relatively expensive to generate and thus much smaller than the text-only
data sets which are used to train LMs in a traditional speech recognizer.
A deficiency of end-to-end systems appears to be in their language modeling
capacity~\cite{rohit1}, which may be because large text-only data are not
utilized in end-to-end systems.

In this work we explore a particular sequence-to-sequence architecture, RNN-T,
and show how text and pronunciation data may be included to improve end-to-end
ASR performance.
Another contribution of this work is to investigate the use of
wordpieces~\cite{wp}, which have been explored previously in the context of
machine translation, as a sub-word unit for end-to-end speech recognition.

The paper is organized as follows: in Section~\ref{sec:rnnt} we describe the RNN-T and
how it may be used for streaming recognition. Section~\ref{sec:training} describes how the RNN-T
is trained, including the units, architectures and pre-training of parts of the model. The experimental
setup, including the baseline system, is detailed in Section~\ref{sec:experiments}. Section~\ref{sec:results}
compares the word error rate performance of various RNN-T models and the baseline to show relative improvements.
We find that the techniques introduced in this work mostly improve the language modeling of the RNN-T;
Section~\ref{sec:analysis} shows some selected examples of such improved recognition. A concluding summary and
acknowledgements are given in Section~\ref{sec:conclusion} and Section~\ref{sec:acknowledgements}.



\section{Results}
\label{sec:results}

We train and evaluate various RNN-T models and incrementally show the WER impact of
each improvement.


A grapheme-based RNN-T is trained from scratch (no pre-training) on the acoustic data
with a 5-layer LSTM encoder of 700 cells and a 2-layer LSTM decoder of 700 cells. A final 700-unit
feed-forward layer and a softmax layer output grapheme label probabilities. We compare this model
to a model with identical architecture but with the encoder CTC pre-trained. We find CTC pre-training
to be helpful, improving WER 13.9\%$\rightarrow$13.2\% for voice-search and 8.4\%$\rightarrow$8.0\%
for voice-dictation.

A model with a deeper 8-layer encoder is also trained with a multi-CTC loss at depth 5 and depth 8, where both
losses are optimized for the same grapheme targets.
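To make the multi-CTC setup concrete, the sketch below (a simplified PyTorch illustration with toy dimensions; the actual models in this work are TensorFlow LSTMs) attaches a CTC loss to the encoder activations at depth 5 and depth 8 and sums the two losses over the same grapheme targets:
\begin{verbatim}
import torch
import torch.nn as nn

T, N, C = 50, 4, 30              # frames, batch size, graphemes (+ blank at index 0)
feat5 = torch.randn(T, N, 128)   # stand-in for LSTM activations at depth 5
feat8 = torch.randn(T, N, 128)   # stand-in for LSTM activations at depth 8

proj5, proj8 = nn.Linear(128, C), nn.Linear(128, C)  # heads discarded after pre-training
ctc = nn.CTCLoss(blank=0)

targets = torch.randint(1, C, (N, 12))               # grapheme target sequences
input_lens = torch.full((N,), T, dtype=torch.long)
target_lens = torch.full((N,), 12, dtype=torch.long)

# Multi-CTC: both losses are computed against the same grapheme targets.
loss = ctc(proj5(feat5).log_softmax(-1), targets, input_lens, target_lens) \
     + ctc(proj8(feat8).log_softmax(-1), targets, input_lens, target_lens)
loss.backward()
\end{verbatim}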
We found training 8-layer models without a
multi-loss setup to be unstable; we acknowledge this may be addressed with recent advancements in
training deeper recurrent models~\cite{highway}, but these are not tested as part of this work.
The deeper 8-layer encoder further improves WER 13.2\%$\rightarrow$12.0\% for voice-search and 8.4\%$\rightarrow$6.9\%
for voice-dictation.

To incorporate the knowledge of phonemes, and specifically the pronunciation dictionary data, we train an 8-layer
encoder with hierarchical-CTC with a phoneme-target CTC at depth 5 and a grapheme-target CTC at depth 8.
In this way the network is forced to model phonemes and is exposed to pronunciation variants in the labels, where
the same word (and thus the same grapheme sequence) may have different pronunciations (and thus phoneme sequences).
This approach does not address including pronunciations for words that do not occur in the acoustic training
data, which we leave as future work. We find that the pronunciation data improves WER 12.0\%$\rightarrow$11.4\% for voice-search
but with little improvement for voice-dictation. Unlike voice-search, the voice-dictation test set is
composed of mostly common words; we conjecture that it may be sufficient to learn pronunciations for these
words from the acoustic data alone, so this test set may not benefit from additional human-transcribed pronunciations.

Next, to include the text data we pre-train a 2-layer LSTM with 700 cells as a language model with grapheme targets. The model
is trained until word perplexity on a held-out set no longer improves; Table~\ref{lmperp} shows the word perplexity and
sizes of the various language models that were trained. Addition of text data in this way improves WER
11.4\%$\rightarrow$10.8\% for voice-search and 6.8\%$\rightarrow$6.4\% for voice-dictation.

We explore modeling wordpieces, with 1k, 10k and 30k wordpieces, instead of graphemes and make several changes to the architecture.
The wordpiece encoder network is a 12-layer LSTM with 700 cells each, trained with hierarchical-CTC with phoneme targets at depth 5, graphemes at depth 10
and wordpieces at depth 12. Since wordpieces are longer units, we include a time convolution after depth 10 reducing the
sequence length by a factor of 3. We find that this time convolution does not affect WER but drastically reduces training and
inference time, as there are 3 times fewer encoder features that need to be processed by the decoder network. Wordpiece language models
are trained similarly to the grapheme ones; since the number of labels is much larger, an additional input embedding of size 500 is used for wordpiece
models. The wordpiece language models perform much better in terms of word perplexity (Table~\ref{lmperp}) and the RNN-Ts initialized from
them also see significant WER improvements (Table~\ref{wer}).
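For clarity, the word-level perplexities reported in Table~\ref{lmperp} can be obtained from a grapheme or wordpiece language model by normalizing the total held-out log-probability by the number of words rather than the number of sub-word labels (our reading of the standard convention, which makes the different unit inventories comparable):
\begin{equation}
\text{PPL}_{\text{word}} = \exp\left(-\frac{1}{N_{\text{words}}} \sum_{i=1}^{N_{\text{labels}}} \ln P(l_i \mid l_1, \ldots, l_{i-1})\right),
\end{equation}
where $l_i$ are the grapheme or wordpiece labels of the held-out set and $N_{\text{words}}$ is its total word count.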
The best end-to-end RNN-T with 30k wordpieces achieves a
WER of 8.5\% for voice-search and 5.2\% on voice-dictation, which is on par with the state-of-the-art baseline speech recognition system.

\begin{table}[!h]
  \caption{The number of parameters (in millions) and word perplexity for LSTM language models trained with different units, evaluated
  on a held-out set.}
  \label{lmperp}
  \centering
  \begin{tabular}{lcc} \toprule
  Units & Params & Perplexity \\ \midrule
  Graphemes & 6M & 185 \\
  Wordpieces-1k & 10M & 138 \\
  Wordpieces-10k & 20M & 130 \\
  Wordpieces-30k & 59M & 119 \\
  \bottomrule
  \end{tabular}
\end{table}

\section{RNN-Transducer}
\label{sec:rnnt}
\begin{figure}
	\centering
	\includegraphics[width=0.5\columnwidth]{rnnt-model}
	\caption{The RNN-T model. The model consists of an encoder network,
	which maps input acoustic frames into a higher-level representation, and
	a prediction and joint network which together correspond to the decoder
	network. The decoder is conditioned on the history of previous
	predictions.}
	\label{fig:rnnt}
\end{figure}

The RNN-T was proposed by Graves~\cite{rnnt1} as an extension to the
connectionist temporal classification (CTC)~\cite{ctc} approach for sequence
labeling tasks where the alignment between the input sequence, ${\mathbf x}$, and the
output targets ${\mathbf y}$ is unknown.
This is accomplished in the CTC formulation by introducing a special label,
called the \emph{blank} label, which models the probability of outputting no
label corresponding to a given input frame.
CTC has been widely used in previous works to train end-to-end ASR
models~\cite{baidu, e2e_ctc1, e2e_ctc2}.
However, a major limitation of CTC is its assumption that model outputs at a
given frame are independent of previous output labels: $y_t \perp\!\!\!\perp y_j | {\mathbf x}$, for
$t < j$.

The RNN-T model, depicted in Figure~\ref{fig:rnnt}, consists of an
\emph{encoder} (referred to as the transcription network in~\cite{rnnt1}), a
prediction network and a joint network; as described in~\cite{rohit1}, the RNN-T
model can be compared to other encoder-decoder architectures such as ``listen,
attend, and spell''~\cite{las}, if we view the combination of the prediction
network and the joint network as a decoder.
The encoder is an RNN which converts the input acoustic frame ${\mathbf x}_t$ into a
higher-level representation, ${\mathbf h}^\text{enc}_t$, and is analogous to a CTC-based
AM in a standard speech recognizer.
Thus, as in CTC, the output of the encoder network, ${\mathbf h}^\text{enc}_t$, is
conditioned on the sequence of previous acoustic frames $x_0, \cdots, x_t$:
\begin{equation}
{\mathbf h}_{t}^\text{enc} = f^\text{enc}(x_t).
\end{equation}

The RNN-T removes the conditional independence assumption in CTC by introducing
a \emph{prediction network}, an RNN that is explicitly conditioned on the
history of previous non-blank targets predicted by the model.
Specifically, the prediction network receives as input the last \emph{non-blank}
label, $y_{u-1}$, to produce as output ${\mathbf h}^\text{dec}_u$:
\begin{equation}
{\mathbf h}_{u}^\text{dec} = f^\text{dec}(y_{u-1}).
\end{equation}

Finally, the \emph{joint network} is a feed-forward network that combines the
outputs of the prediction network and the encoder to produce logits
($\mathbf{z}_{t, u}$) followed by a softmax layer to produce a distribution
over the
next output symbol (either the blank symbol or one of the
output targets):
\begin{equation}
z_{t,u} = f^\text{joint}({\mathbf h}_{t}^\text{enc}, {\mathbf h}_{u}^\text{dec})
\end{equation}
We use the same form for $f^\text{joint}$ as described in~\cite{rnnt2}.
The entire network is trained jointly to optimize the RNN-T loss~\cite{rnnt1},
which marginalizes over all alignments of target labels with blanks as in CTC,
and is computed using dynamic programming.

During each step of inference, the RNN-T model is fed the next acoustic frame
${\mathbf x}_t$ and the previously predicted label $y_{u-1}$, from which the model produces
the next output label probabilities $P(y|t, u)$. If the predicted label, $y_{u}$, is non-blank,
then the prediction network is updated with that label as input to generate
the next output label probabilities $P(y|t, u + 1)$. Conversely, if
a blank label is predicted then the next acoustic frame, ${\mathbf x}_{t+1}$, is used to update the encoder
while retaining the same prediction network output, resulting in $P(y|t + 1, u)$. In this way
the RNN-T can stream recognition results by alternating between updating the encoder and the prediction
network based on whether the predicted label is a blank or non-blank.
Inference is terminated when blank is output at the last frame, $T$.

During inference, the most likely label sequence is computed using beam search
as described in~\cite{rnnt1}, with a minor alteration which was found to make
the algorithm less computationally intensive without degrading performance: we
skip summation over prefixes in \texttt{pref$(\mathbf{y})$} (see Algorithm
1 in~\cite{rnnt1}), unless multiple hypotheses are identical.

Note that unlike other streaming encoder-decoder architectures such as
RNA~\cite{rna} and NT~\cite{nt}, the prediction network is not conditioned on
the encoder output.
This allows for the pre-training of the decoder as an RNN language model on
text-only data as described in Section~\ref{sec:training}.


\section{Units, Architectures and Training}
\label{sec:training}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=\textwidth]{rnnt}
\end{center}
\caption{The various stages of training a wordpiece RNN-T. The encoder network is pre-trained as a hierarchical-CTC network simultaneously
predicting phonemes, graphemes and wordpieces at 5, 10 and 12 LSTM layers respectively. A time convolutional layer reduces the encoder
time sequence length by a factor of three. The decoder network is trained as an LSTM language model predicting wordpieces, optimized with
a cross-entropy loss.
Finally, the RNN-T network weights are initialized from the two pre-trained models, indicated by the dashed lines,
and the entire network is optimized using the RNN-T loss.}
\label{rnnt}
\end{figure*}

We investigate the use of graphemes and sub-words (wordpieces) as output lexical units in RNN-T models.
For the graphemes, we use letters (\texttt{a-z}), digits (\texttt{0-9}), special
symbols (\texttt{\&.'\%/-:}) and a space symbol (\texttt{$<$space$>$}).
The space symbol is used for segmenting recognized grapheme sequences into word sequences.

State-of-the-art large vocabulary speech recognition systems recognize millions of different words;
inference for an RNN-T with that many output labels would be impractically slow.
Therefore, as subword units, we use wordpieces as described in~\cite{wp}.
We train a statistical wordpiece model with word counts obtained from text data for segmenting each word individually into subwords.
An additional space symbol is included in the subword units.
An example segmentation for the sentence \texttt{tortoise and the hare} is
\texttt{$<$tor$>$ $<$to$>$ $<$ise$>$ $<$space$>$ $<$and$>$ $<$space$>$ $<$the$>$ $<$space$>$ $<$ha$>$ $<$re$>$}.
Wordpieces have been shown to benefit end-to-end recognition~\cite{lsd} since they offer
a balance between longer context than graphemes and a tunable number of labels.
Since the wordpiece
model is based on word frequencies, more common words appear as a single label.
A vocabulary of 1,000 generated wordpieces
includes words like `mall', `remember' and `doctor', while a vocabulary of 30,000 wordpieces also includes
less common words like `multimedia', `tungsten' and `49er'. The wordpiece models may also output
any word that the grapheme model may; we find that all the graphemes are included in the wordpiece vocabularies.

For the encoder networks in RNN-T models, we experimented with deep LSTM networks (5 to 12 layers).
For the decoder networks, we used a 2-layer LSTM network, a feed-forward layer and a softmax layer.
In addition to training models with random initialization of parameters, we explored variations of initializing encoder and decoder network parameters from pre-trained models.
It has been previously shown that initializing RNN-T encoder parameters from a model trained with the CTC loss is beneficial for the phoneme recognition task~\cite{rnnt2}.
We experimented with initializing encoder networks from models trained with the
CTC loss and with initializing LSTM layer parameters in prediction networks from LSTM language models trained on text data.
After initialization of the encoder and prediction network weights from separate pre-trained models, the entire RNN-T model weights are trained with the RNN-T objective.

We show one example architecture for the RNN-T wordpiece model in Figure~\ref{rnnt}.
The figure also shows the pre-trained CTC LSTM acoustic model and LSTM language model architectures used to initialize the encoder and prediction network weights.
The dotted arrows indicate the pre-trained layers used to initialize specific layers in the RNN-T model.
The encoder networks in RNN-T models are pre-trained with the CTC loss using phonemes, graphemes and wordpieces as output units.
We investigate encoder architectures with multi-task training using hierarchical-CTC~\cite{hctc} with various
`hierarchies' of CTC losses at various depths in the encoder network.
With hierarchical-CTC
the encoder networks are trained with multiple simultaneous CTC losses, which was found to be
beneficial for grapheme recognition~\cite{hctc_g}. After pre-training, all CTC losses
and additional weights associated with generating softmax probabilities are discarded.
For the wordpiece models, whose units have a longer duration than graphemes, we employ an additional `time-convolution' in the encoder network to reduce the
sequence length of encoded activations, which is similar to the pyramidal sequence length reduction in~\cite{las}.
For these models, we used filters covering 3 non-overlapping consecutive activation vectors, thus reducing them to a single activation vector.
The LSTM layers in decoder networks are pre-trained as a language model using the graphemes or wordpieces as lexical units.
The input to the network is a label (grapheme or wordpiece) in a segmented sentence represented as a one-hot vector.
The target for the network is the next label in the sequence, and the model is trained with the cross-entropy loss.
The weights in the softmax output layer are discarded after pre-training and only the LSTM network weights are used to partially initialize the RNN-T prediction network.
For wordpiece language models, we embed labels to a smaller dimension.
These embedding weights are also used to initialize the RNN-T wordpiece models.

\section{\label{sec:level1} Introduction}

Magnetic reconnection is a process that occurs on the order of Alfv\'{e}nic timescales in which the magnetic topology is rearranged and the magnetic energy is converted into plasma kinetic or thermal energy \cite{Finn1977,Bondeson1983}. Because of its broad applications in solar flares \cite{Forbes1991} and the solar corona \cite{Chen2021b,Dong2022}, coronal mass ejections (CMEs) \cite{Qiu2007}, Earth's and planetary magnetospheres \cite{Chen2008,Le2017,Phan2018,Yang2022}, and also in laboratory plasmas \cite{Yamada1994,Duan2018,Raymond2018}, reconnection has been of immense interest in recent years \cite{Ji2022}.

It has been well established that collisionless magnetic reconnection occurs in the regime beyond ideal magnetohydrodynamics, where kinetic-scale physics becomes prominent in the formation and evolution of the thin current sheets near the reconnection sites \cite{Zweibel2009,Yamada2010}. Thus, simulating collisionless magnetic reconnection typically requires kinetic approaches such as the particle-in-cell (PIC) method. In general, however, such kinetic simulations are computationally expensive and are hence unable to efficiently solve large-scale problems involving collisionless physics. In order to solve this issue with affordable computational costs, two broad approaches have been proposed for large-scale global simulations, i.e., the magnetohydrodynamics with embedded particle-in-cell (MHD-EPIC) model \cite{Daldorff2014,Toth2016,Chen2017,Chen2019JGR} and the multi-moment multi-fluid model \cite{Wang2015,Wang2018,Dong2019,Wang2020,Jarmak2020,Rulke2021}.
Recently, some progress has been made to improve the fluid closure in the multi-moment multi-fluid model through machine learning \cite{Qin2022,Cheng2023}, and meanwhile, MHD-EPIC has been improved to incorporate the feature of adaptively embedded PIC regions (MHD-AEPIC) \cite{Shou2021,Chen2021,Wang2022}, which offers flexibility for the PIC code to capture the localized regions where kinetic physics is important.

It has been well demonstrated through a series of local studies \cite{Wang2015,Ng2015,Ng2017,Ng2019} that the multi-moment multi-fluid model incorporating the higher-order moments is capable of reproducing some critical aspects of the collisionless reconnection physics from fully kinetic simulations. However, no such systematic local studies have been conducted using the MHD-AEPIC model, which motivates this study to employ the MHD-AEPIC model to investigate the magnetic island coalescence problem that is highly kinetic in nature \cite{Stanier2015,Ng2015}.

In an MHD-AEPIC simulation, only part of the simulation domain, where kinetic effects are important, is simulated by the semi-implicit PIC code, Flexible Exascale Kinetic Simulator (FLEKS) \cite{Chen2021}, and the rest of the domain is handled by the MHD (or Hall MHD) model, Block-Adaptive-Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) \cite{Powell1999,Toth2012}. For the magnetic island coalescence problem, the embedded PIC regions are applied to simulate the regions with strong current density. As we will show later, the adaptive simulation comes in close agreement with the simulation employing the fully kinetic PIC code, Object-oriented Simulation Rapid Implementation System (OSIRIS) \cite{Fonseca2002,Hemker2015}, when analyzing their out-of-plane current densities, reconnection rates, O-point separations, pressure tensor elements, and agyrotropy (a measure of the deviation of the pressure tensor of a species from cylindrical symmetry with respect to the direction of the local magnetic field \cite{Scudder2008}).

In Sec. II, we discuss the model setup for the magnetic island coalescence problem. In Sec. III, simulation results are presented and discussed by comparing the full PIC simulation results with the outputs from the MHD-EPIC and MHD-AEPIC models. Concluding remarks are given in Sec. IV.

\section{Model Setup and Methods}

In this study, we set a Fadeev equilibrium \cite{Fadeev1965} as the initial condition, where the initial magnetic vector potential is expressed as
\begin{eqnarray}
\textbf{A} = B_0 \lambda \ln \big[ \epsilon \cos(x/\lambda) + \cosh(y/\lambda) \big] \hat{\textbf{z}}
\label{eq:vecpot}
\end{eqnarray}
where $B_0$ is the asymptotic magnetic field away from the x-axis, $\epsilon = 0.4$ is a measure of the island size, and the initial density distribution is written as
\begin{eqnarray}
n = \frac{n_0 (1 - \epsilon^2)}{\big[ \epsilon \cos(x/\lambda) + \cosh(y/\lambda) \big]^2} + n_b
\label{eq:eqden}
\end{eqnarray}
where $n_b = 0.2n_0$ is the background number density. We define the global Alfv\'{e}n time as $t_A = L_x/v_A$, where $L_x = 4\pi \lambda$ is the length of the simulation box, $\lambda = 5d_{i0}$ is the half-width of the current sheet (we also run additional simulations with $\lambda = 10d_{i0}$; see Fig.~\ref{fig:lambda10_rate}), $d_{i0} = (m_i/\mu_0 n_0 e^2)^{1/2}$ is the ion inertial length, and $v_A = B_0/(\mu_0 n_0 m_i)^{1/2}$ is the Alfv\'{e}n speed. The width of the box is set as $L_y = 2\pi \lambda$.
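As a concrete illustration of this setup, the following sketch (illustrative only; the grid sizes and variable names are hypothetical and unrelated to the production codes) evaluates Eq.~(\ref{eq:vecpot}) and Eq.~(\ref{eq:eqden}) on a uniform grid and obtains the in-plane magnetic field from $\mathbf{B} = \nabla \times (A_z \hat{\mathbf{z}})$ by finite differences:
\begin{verbatim}
import numpy as np

# Normalized units: B0 = n0 = lam = 1 (an illustrative choice).
B0, n0, nb, eps, lam = 1.0, 1.0, 0.2, 0.4, 1.0
Lx, Ly = 4 * np.pi * lam, 2 * np.pi * lam

nx, ny = 256, 128
x = np.linspace(-Lx / 2, Lx / 2, nx)
y = np.linspace(-Ly / 2, Ly / 2, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

# Fadeev equilibrium: vector potential and number density.
brac = eps * np.cos(X / lam) + np.cosh(Y / lam)
Az = B0 * lam * np.log(brac)
n = n0 * (1.0 - eps**2) / brac**2 + nb

# B = curl(Az zhat):  Bx = dAz/dy,  By = -dAz/dx.
dAz_dx, dAz_dy = np.gradient(Az, x, y, edge_order=2)
Bx, By = dAz_dy, -dAz_dx
\end{verbatim}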
In the full PIC simulation with OSIRIS, we specify the ion and electron out-of-plane current densities according to the relationship $J_{zi0}/J_{ze0} = T_{i0}/T_{e0}$, where $T_{i0}$ and $T_{e0}$ are uniform. In the MHD-AEPIC simulations, the electron and ion bulk velocities are initialized from the MHD current and momentum \cite{Daldorff2014}. In the following simulations, we use the same configuration described in \citet{Stanier2015} and \citet{Ng2015}, setting $T_{i0} = T_{e0}$, $m_i = 25m_e$ (so $d_i = 5 d_e$), $t_{max} = 2.5 t_A$, and $n_0 k_B (T_{i0} + T_{e0}) = B_0^2/2\mu_0$ (so $p = p_{mag}$). The relationship between the electron plasma and cyclotron frequencies is given by $\omega_{pe} = 2\Omega_{ce}$, where $\omega_{pe} = (n_0 e^2/m_e \epsilon_0)^{1/2}$ and $\Omega_{ce} = |e|B_0/m_e$. The initial perturbation follows the same form as in \citet{Daughton2009_1}, with
\begin{subequations}
\label{eq:idealMHDpert}
\begin{eqnarray}
\delta B_x = \delta B_0 \cos \bigg( \frac{2 \pi x}{L_x} \bigg) \sin \bigg( \frac{\pi y}{L_y} \bigg)
\label{magpert1}
\end{eqnarray}
\begin{eqnarray}
\delta B_y = -\delta B_0 \sin \bigg( \frac{2 \pi x}{L_x} \bigg) \cos \bigg( \frac{\pi y}{L_y} \bigg)
\label{magpert2}
\end{eqnarray}
\end{subequations}
where we set the perturbation amplitude as $\delta B_0 = 0.1B_0$.

To simulate the merging of two magnetic islands, we first perform two simulations using the MHD-AEPIC model, one with a large static, non-adaptive (or fixed) PIC region and the other with adaptive PIC regions. We also run a full PIC simulation using OSIRIS to validate and compare with the MHD-AEPIC simulations. In all simulations, we have used periodic boundaries in the horizontal ($x$) direction, and conducting electromagnetic boundaries and reflecting particle boundaries in the vertical ($y$) direction.

The OSIRIS simulation domain consists of 2000$\times$1000 cells and each cell has 256 particles per species. In the MHD-AEPIC simulations, the MHD grid consists of 2048$\times$1024 cells. The effective PIC grid resolution and the initial particle number per cell are the same as in the OSIRIS simulation, and the Gauss's Law-satisfying Energy Conserving Semi-Implicit Method (GL-ECSIM) \cite{Chen2019} is applied. In general, the MHD and PIC grid sizes do not have to be the same \cite{Chen2021} (see Appendix A). In the magnetic island coalescence simulation, the two magnetic islands are associated with strong current density, which suggests a notable separation between the electron and ion velocities, and kinetic effects may play an important role when the two islands move toward each other. Outside the islands, the current is generally weak, and neither the magnetic field nor plasma quantities show a strong gradient, so an MHD model is sufficient to describe this region. In the MHD-AEPIC run with adaptive PIC regions, the PIC regions only cover the areas of $J\Delta x/B > 0.01$, where the selection criterion $J\Delta x/B$ is dimensionless. The threshold 0.01 is carefully chosen based on numerical experiments such that the islands are well covered by the PIC region. The selection criterion is calculated from the MHD results. We usually run a pure MHD simulation first to estimate a proper threshold that would determine the desired PIC regions, and then we run short MHD-AEPIC simulations with a coarse PIC grid to optimize the threshold.
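A minimal sketch of how such a selection criterion could be evaluated from an MHD snapshot is given below (illustrative post-processing only; the 2D arrays and the cell size are hypothetical inputs, and this is not the FLEKS/BATS-R-US implementation):
\begin{verbatim}
import numpy as np

def pic_region_mask(Jz, Bx, By, dx, threshold=0.01, b_floor=1e-12):
    """Mark cells where the dimensionless criterion J*dx/B exceeds the threshold."""
    B = np.sqrt(Bx**2 + By**2) + b_floor   # guard against division by zero at nulls
    return np.abs(Jz) * dx / B > threshold

# Toy usage with stand-in fields.
nx, ny = 64, 32
Jz = np.random.default_rng(0).random((nx, ny))
Bx, By = np.ones((nx, ny)), np.zeros((nx, ny))
mask = pic_region_mask(Jz, Bx, By, dx=0.1)
print(mask.mean())  # fraction of the domain that would be handed to the PIC solver
\end{verbatim}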
The small threshold for the island coalescence problem is due to the fact that it is highly kinetic in nature, where the coupling between the macro-scale MHD and micro-scale kinetic physics is important \cite{Stanier2015,Ng2019}. In general, the PIC region of an MHD-AEPIC simulation is selected based on the nature of a problem. For comparison, we also perform a simulation using the MHD-EPIC model with a relatively large static or fixed PIC region, which covers the majority of the entire simulation domain.

In MHD-AEPIC and MHD-EPIC simulations, when Hall MHD is used (which is the case for the current study), the Alfv\'{e}n and whistler modes can travel through the MHD-PIC boundaries smoothly \cite{Daldorff2014}. However, if there are kinetic modes reaching the MHD-PIC boundaries, they may be artificially reflected since they cannot propagate into the MHD regions. For such applications, we should increase the PIC regions so that the kinetic modes are damped before reaching the PIC region boundaries. The MHD-AEPIC model is initially designed for global magnetosphere simulations with a relatively small kinetic region. If the entire simulation domain is dominated by kinetic physics, a full PIC or hybrid-PIC \cite{Dong2021,Winske2022} code is a better choice.

We note that the boundary conditions, with the exception of periodic boundaries, inevitably introduce numerical perturbations for all kinetic codes, which is a natural consequence of numerical discretization. The perturbations, however, would be reduced if the boundaries are far away from the kinetic regions. We therefore usually choose PIC regions where the solution is smooth at the MHD-PIC interface, as in the simulations presented in this paper.

\section{Simulation Results}

\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig1.pdf}
\caption{\label{fig:Jz-contour} Out-of-plane current density at $t = 0.9 t_A$ for the adaptive (top) and fixed (middle) PIC region runs and the full-PIC run (bottom) with $\lambda=5 d_{i0}$, overlaid with the magnetic vector potential contours (black dotted curves) and with the boundary between the MHD and PIC regions (green solid curves).}
\end{figure}

\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig2.pdf}
\caption{\label{fig:pic_cell_ratio} Ratio of the active PIC regions over the entire simulation domain for the MHD-AEPIC simulation. The regions that are covered by the semi-implicit PIC code increase from $40\%$ to about $50\%$ during the simulation.
}
\end{figure}

Fig.~\ref{fig:Jz-contour} shows 2D plots of the out-of-plane current density at $t=0.9t_A$ from the three different simulations, where the $x$ and $y$ coordinates have been normalized over $d_{i0}$ and the current density over $|e| n_0 c$. In each panel of Fig.~\ref{fig:Jz-contour}, the current sheet formation, which cannot be correctly captured in a Hall MHD model for this problem \cite{Stanier2015}, is indicated by regions of positive out-of-plane current density near the origin ($x, y = 0$). Also illustrated in Fig.~\ref{fig:Jz-contour} are the boundaries (green curves) between the MHD and PIC regions in both the adaptive-PIC-region and fixed-PIC-region runs.

For the adaptive-PIC-region run, the variation of the active PIC cell number inside the entire simulation domain over time is shown in Fig.~\ref{fig:pic_cell_ratio}. Initially, about 40\% of the simulation domain is run by the PIC code, and then the ratio gradually increases to about 50\%.
In this simulation, the PIC regions are defined as the areas with large current density values and they do not vary significantly during the simulation. For other applications, such as modeling the Earth's magnetotail \cite{Wang2022}, the active PIC regions may change dramatically. We note that the smallest granularity to turn on or off PIC cells is a patch with two cells in each direction \cite{Chen2021}. Since the MHD-AEPIC model adopts a semi-implicit PIC code, which requires an iterative solver to update the electric field \cite{Chen2019}, it requires more computations per step. The computational efficiency also depends on the implementation and problem properties. For the simulations presented in Fig.~\ref{fig:Jz-contour}, the computational cost (CPU hours) ratio is about $3:8:1$ among the adaptive-PIC-region case, the fixed-PIC-region case, and the full-PIC case. Given that FLEKS is a semi-implicit PIC code and the grid resolution between the PIC and MHD regions can be different, one can reduce the grid resolution of the PIC cells inside the MHD-AEPIC domain, which lowers the computational cost of the MHD-AEPIC run so that it becomes computationally cheaper than the full-PIC case (see Appendix A). It is noteworthy that in three-dimensional (3-D) cases, MHD-AEPIC can be computationally even more efficient than a full PIC code.

\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig3.pdf}
\caption{\label{fig:ER} Reconnection rate as a function of time for the adaptive and fixed PIC region runs and the full-PIC run with $\lambda=5 d_{i0}$.}
\end{figure}

\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig4.pdf}
\caption{\label{fig:O-sep} O-point separation of the coalescing magnetic islands as a function of time for the adaptive and fixed PIC region runs and the full-PIC run with $\lambda=5 d_{i0}$.}
\end{figure}

An inspection of Fig.~\ref{fig:Jz-contour} reveals that the simulation results (such as the current density and the magnetic flux) from the three different runs present nearly identical features and the system evolves at the same rate, indicating that the reconnection rates, $E_R$, from the three simulations are very similar. In order to verify this idea, we explicitly compare the reconnection rates. We normalize the reconnection rate to the maximum initial magnetic field between the islands such that \cite{Stanier2015}
\begin{eqnarray}
E_R = \frac{1}{v_{Am}B_m} \frac{\partial}{\partial t} \big( A_{zX} - A_{zO} \big)
\label{eq:three}
\end{eqnarray}
where $v_{Am} = B_m/(\mu_0 n_0 m_i)^{1/2}$, $A_{zX}$ is the out-of-plane magnetic vector potential evaluated at the $X$-point, and $A_{zO}$ is evaluated at the $O$-point. The time evolution of the reconnection rate for the three simulations is plotted in Fig.~\ref{fig:ER}, where a maximum reconnection rate occurs around $t = 0.9 t_A$ for all the cases. After the reconnection rate reaches a maximum, it decreases with time before reaching a constant value at approximately $t = 1.6 t_A$. We note from Fig.~\ref{fig:ER} that the case with adaptive PIC regions closely captures the behavior of $E_R$ for this problem, when compared to the full-PIC simulation.


Since the O-point separation between the magnetic islands is largely dependent on the reconnection rate, we show the O-point separation as a function of time in Fig.~\ref{fig:O-sep}. In this plot, we have normalized the O-point separation by the initial separation between the islands, $L_0$.
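For reference, Eq.~(\ref{eq:three}) can be evaluated directly from a time series of the vector potential at the X- and O-points; a minimal post-processing sketch (with hypothetical array names) is:
\begin{verbatim}
import numpy as np

def reconnection_rate(t, Az_X, Az_O, v_Am, B_m):
    """Normalized reconnection rate E_R from Az sampled at the X- and O-points."""
    flux = Az_X - Az_O                 # reconnected flux per unit length
    return np.gradient(flux, t) / (v_Am * B_m)

# Toy example: a flux difference that grows and then saturates.
t = np.linspace(0.0, 2.5, 251)         # time in units of t_A
Az_X = 0.2 * (1.0 - np.exp(-t))        # stand-in for Az at the X-point
Az_O = np.zeros_like(t)                # stand-in for Az at the O-point
E_R = reconnection_rate(t, Az_X, Az_O, v_Am=1.0, B_m=1.0)
\end{verbatim}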
It is evident in Fig.~\ref{fig:O-sep} that for all three cases the island separation decreases at an increasing rate before $t = 0.9 t_A$, at which point the coalescence of the islands slows down, reaching an approximately constant rate at $t = 1.6 t_A$. Both Figs.~\ref{fig:ER} and \ref{fig:O-sep} show excellent agreement among the three simulation approaches, validating the use of the MHD-AEPIC model.


At this point, we are interested in analyzing the effects of ion kinetics, as their importance has been demonstrated previously \cite{Stanier2015}. \citet{Stanier2015} also pointed out that electron kinetics were not crucial for the island coalescence problem. To do this, we write the z-component of the normalized ion Ohm's law as
\begin{eqnarray}
E_z^\prime &=& \frac{d_i}{n} \bigg[ \frac{\partial}{\partial t} (nv_{iz}) + \nabla \cdot (n\textbf{v}_i v_{iz}) \bigg] + \frac{d_i}{n} \nabla \cdot \big( \bar{\bar{\textbf{P}}}_i \cdot \textbf{z} \big)
\label{eq:four}
\end{eqnarray}
where $E_z^\prime = (\textbf{E} + \textbf{v}_i \times \textbf{B}) \cdot \textbf{z}$ is the z-component of the nonideal electric field, $\bar{\bar{\textbf{P}}}_i$ is the ion pressure tensor, and the resistivity is neglected for collisionless reconnection. Near the X-point of the reconnection site, where the magnetic field is small, pressure tensor effects become important. Specifically, Eq.~(\ref{eq:four}) demonstrates that the non-ideal electric field is heavily influenced by the off-diagonal terms of the pressure tensor. Since it has been shown in \citet{Stanier2015} that ion kinetic effects are of importance in this specific reconnection setup, we focus our analysis on the ion pressure tensor. Fig.~\ref{fig:fig4} compares the off-diagonal ion pressure tensor elements from the three cases when the reconnection rate reaches its peak, once again showing excellent agreement among the cases. Fig.~\ref{fig:fig4} also demonstrates that these two islands should be covered by the PIC code, since the off-diagonal pressure tensor components cannot be described by the isotropic MHD model used here. In general, if the ion pressure is anisotropic, but the off-diagonal terms are relatively small in the local magnetic field coordinate system, MHD-AEPIC supports coupling with an anisotropic MHD model \cite{Daldorff2014}, and the region that requires PIC can be reduced. However, if the off-diagonal terms are prominent in the local magnetic field coordinates, which is the case here (see Appendix B), the PIC regions need to cover those areas, consistent with the previous study using a similar approach \cite{Makwana2018}.

\begin{figure*}
\includegraphics[width=\textwidth]{Fig5.pdf}
\caption{\label{fig:fig4} Comparison of the off-diagonal ion pressure tensor elements at $t = 0.9 t_A$ for the adaptive (top row) and fixed (middle row) PIC region runs and the full-PIC run (bottom row) with $\lambda=5 d_{i0}$. The left column plots $P_{xy}$, the middle column shows $P_{xz}$, and $P_{yz}$ is illustrated in the right column.}
\end{figure*}

In addition to the off-diagonal ion pressure tensor terms, we also compare the ion agyrotropies of the three simulations, employing the same formula for agyrotropy as in \citet{Scudder2008}. We note that agyrotropy is a measure useful for characterizing the ion (or electron) diffusion region in collisionless magnetic reconnection.
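For completeness, the agyrotropy measure of \citet{Scudder2008} can be written (in our notation, which we denote $A_{\oslash i}$; see \citet{Scudder2008} for the precise construction) in terms of the two perpendicular eigenvalues of the ion pressure tensor in field-aligned coordinates,
\begin{eqnarray}
A_{\oslash i} = 2\,\frac{|P_{i\perp 1} - P_{i\perp 2}|}{P_{i\perp 1} + P_{i\perp 2}},
\end{eqnarray}
which vanishes for a gyrotropic pressure tensor and grows as the two perpendicular pressures become increasingly unequal.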
We once again focus only on ion agyrotropy as ion kinetics are required to capture the correct reconnection rates and describe the global behavior of the system for the island coalescence problem \cite{Stanier2015}. A comparison of the agyrotropy from the three different simulations is shown in Fig.~\ref{fig:agyrotropy}, indicating excellent agreement among the different cases, especially near the current sheet formation where ion agyrotropy is of particular significance (indicated by the lighter colors inside the ion diffusion region).
\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig6.pdf}
\caption{\label{fig:agyrotropy} Ion agyrotropy at $t = 0.9 t_A$ for the adaptive (top) and fixed (middle) PIC region runs and the full-PIC run (bottom) with $\lambda=5 d_{i0}$, overlaid with the magnetic vector potential contours (in white).}
\end{figure}

In Fig.~\ref{fig:lambda10_rate}, we also present the reconnection rates for simulations with $\lambda = 10 d_{i0}$ to demonstrate that reconnection in large systems becomes slower. The islands are unable to coalesce on the first approach due to the slower reconnection rates, and so bounce off each other \cite{Karimabadi2011_1,Stanier2015}. All simulations reach nearly the same maximum reconnection rate and show two peaks around $t=0.9t_A$ and $t=1.2t_A$, respectively. Overall, all simulations show similar reconnection processes.

\section{Conclusion}

It has been shown that simulating collisionless magnetic reconnection requires kinetic approaches such as the PIC method. However, these simulation codes are generally computationally expensive and hence make it difficult to efficiently model large-scale global systems. In this study, we have presented an alternative method by utilizing the MHD-AEPIC model, which uses the PIC treatment in regions where the current sheet formation is prominent and uses the computationally cheap MHD model in the rest of the simulation domain. We have shown that the case with adaptive PIC regions comes in close agreement with the full PIC simulation when analyzing reconnection rates and O-point separations as well as the ion pressure tensor elements and ion agyrotropy. These results demonstrate that the MHD-AEPIC model may accurately and efficiently simulate large-scale systems that involve collisionless reconnection physics.

It should be noted that the magnetic island coalescence problem that we studied here is highly kinetic in nature, where the coupling between the MHD and kinetic scales is important \cite{Daughton2009_2,Stanier2015,Ng2019}, so relatively large PIC regions are needed within the MHD domain. For large-scale simulations such as the case of the Earth's magnetosphere, \citet{Chen2021} applied MHD-AEPIC to study the magnetopause reconnection while treating the magnetotail reconnection using Hall MHD, such that they drastically saved computational costs compared with full PIC magnetosphere simulations.

We also note that in magnetic confinement fusion, near the X-point where the magnetic field is weak, gyrokinetic approximations begin to break down and it becomes necessary to include full kinetic physics at these locations.
The MHD-AEPIC model may be useful for simulating FLARE \cite{Ji2020} and NSTX-U \cite{Ebrahimi2016} at the Princeton Plasma Physics Laboratory, MAST in the UK \cite{Tanabe2017}, as well as TREX at the Wisconsin Plasma Physics Laboratory \cite{Olson2021}.

\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig7.pdf}
\caption{\label{fig:lambda10_rate} Reconnection rate as a function of time for the adaptive and fixed PIC region runs and the full-PIC run with $\lambda=10 d_{i0}$.}
\end{figure}

\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig8.pdf}
\caption{\label{fig:lambda5_halfres_boundaries} Out-of-plane current density at $t = 0.9 t_A$ for the adaptive PIC region run (top), the adaptive PIC region run with half PIC grid resolution (middle), and the full-PIC run (bottom) with $\lambda=5 d_{i0}$, overlaid with the magnetic vector potential contours (black dotted curves) and with the boundary between the MHD and PIC regions (green solid curves).}
\end{figure}

\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{Fig9.pdf}
\caption{\label{fig:lambda5_halfres_rate} Reconnection rate as a function of time for the three cases shown in Fig.~\ref{fig:lambda5_halfres_boundaries}.}
\end{figure}

\begin{figure*}
\includegraphics[width=\textwidth]{Fig10.pdf}
\caption{\label{fig:fieldaligned} Comparison of the off-diagonal ion pressure tensor elements at $t = 0.9 t_A$ for the adaptive (top row) and fixed (middle row) PIC region runs and the full-PIC run (bottom row) in field-aligned coordinates with $\lambda = 5 d_{i0}$. The left column plots $P_{12}^\prime$, the middle column shows $P_{13}^\prime$, and $P_{23}^\prime$ is illustrated in the right column.}
\end{figure*}

\begin{acknowledgments}

This work was made possible by support from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work was supported by the U.S. Department of Energy under contract number DE-AC02-09CH11466 (through LDRD) and the DOE grant DE-SC0021205, the NASA grants 80NSSC21K1326 and 80NSSC22K0323, and the NSF grant AGS-2149787. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. The authors acknowledge the OSIRIS Consortium, consisting of UCLA and IST (Portugal), for the use of the OSIRIS 4.0 framework.

\end{acknowledgments}

\section*{Data Availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.


\section{Discussion}
\label{sec:discussion}
\vspace{-.5\baselineskip}
We have performed extensive quantitative and qualitative evaluations of our newly proposed ViTCA on a variety of datasets under a denoising autoencoding framework.
We have demonstrated the superior denoising performance and robustness of our model when compared to a U-Net-based CA baseline (UNetCA) and ViT, as well as its generalization capabilities under a variety of environmental changes such as larger inputs (\emph{i.e.}, spatial interpolation) and changing inputs \emph{during} cell updates.

Despite the computation savings---owed to our circumvention of self-attention's quadratic complexity by spatially localizing it within ViTCA---there remain the same memory limitations inherent to all recurrent models: multiple recurrent iterations are required for each training iteration, resulting in larger memory usage than a feedforward approach. This limits single-GPU training accessibility. We have experimented with gradient checkpointing \cite{chen2016training} but found its trade-off of increased backpropagation duration (and slightly different gradients) less than ideal. To fully realize the potential of NCAs (self-organization, inherent distributivity, \emph{etc}.), we encourage follow-up work to address this limitation. Adapting recent techniques using implicit differentiation is one avenue to circumvent these issues~\cite{bai2022deep,bai2019deep}. Also, as mentioned in our ablation (\Sec{\ref{subsubsec:ablation_study}}), we hope to further investigate the instabilities caused by increasing the depth of ViTCA.

\section{Background and related work}
\vspace{-.5\baselineskip}
\label{sec:related_work}

\paragraph{Neural Cellular Automata.}

Cellular Automata are algorithmic processes motivated by the biological behaviours of cellular growth and, as such, are capable of producing complex emergent (global) dynamics from the iterative application of comparatively simple (localized) rules~\cite{von1966theory}. \emph{Neural} Cellular Automata present a more general CA formulation, where the evolving cell states are represented as (typically low-dimensional) vectors and the update rule dictating their evolution is a differentiable function whose parameters are learned through backpropagation from a loss, rather than a handcrafted set of rules~\cite{mordvintsev2020growing,gilpin2019cellular,wulff1992learning}.
Neural net-based formulations of CAs in the NeurIPS community can be traced back to the early work of \cite{wulff1992learning}, where only small and simple models were examined. Recent formulations of NCAs have shown that when leveraging the power of deep learning techniques enabled by advances in hardware capabilities---namely highly-parallelizable differentiable operations implemented on GPUs---NCAs can be tuned to learn surprisingly complex desired behaviour, such as semantic segmentation \cite{sandler2020image}; common RL tasks such as cart-pole balancing \cite{variengien2021towards}, 3D locomotion \cite{najarro2022hypernca}, and Atari game playing \cite{najarro2022hypernca}; and image synthesis~\cite{palm2022variational,niklasson2021self-organising,mordvintsev2021mu}.
Although these recent formulations rely on familiar compositions of convolutions and non-linear functions, it is important to highlight that NCAs are fundamentally not equivalent to ``very-deep'' CNNs (\emph{vs}.~\cite{gilpin2019cellular}), or any other feedforward architecture (\emph{e.g.}, ResNets \cite{he2016deep}), in much the same way that a Recurrent Neural Network (RNN) is not equivalent: CNNs and other feedforward architectures induce a directed \textit{acyclic} computation graph (\emph{i.e.}, a finite impulse response), whereas NCAs (and RNNs) induce a directed \textit{cyclic} computation graph (\emph{i.e.}, an infinite impulse response), where stateful data can additionally be manipulated using (learned) feedback loops and/or time-delayed controls. As such, NCAs can be viewed as a type of RNN, and both (N)CAs and RNNs are known to be Turing complete~\cite{christen2021automatic,cook2004universality,siegelmann1995computational,wulff1992learning}.\footnote{In the case of (N)CAs, a Turing complete example is the \emph{Rule 110} elementary CA \cite{christen2021automatic,cook2004universality}.}

\paragraph{Vision Transformers.}
Vision Transformers \cite{dosovitskiy2020image} are an adaptation of Transformers \cite{vaswani2017attention} to vision-based tasks like image classification. In contrast to networks built from convolutional layers, ViTs rely on \emph{self-attention} mechanisms operating on tokenized inputs. Specifically, input images are divided into non-overlapping patches, then fed to a Transformer after undergoing a linear patch projection with an embedding matrix. While ViTs provide competitive image classification performance, the quadratic computational scaling of global self-attention limits their applicability in high-dimensional domains, \emph{e.g.}, per-pixel dense processing. Recent developments have attempted to alleviate such efficiency limitations \cite{ali2021xcit,hudson2021generative,arnab2021vivit,fan2021multiscale}, one notable example being Perceiver IO \cite{jaegle2021perceiver,yifan2021input} with its use of cross-attention. We refer interested readers to a comprehensive survey on ViTs~\cite{khan2021transformers}.
\section{Introduction}
\vspace{-.5\baselineskip}
\label{sec:introduction}

\input{figures/vitca_vs_vit} Recent developments at the intersection of two foundational ideas---Artificial Neural Networks (ANNs) and Cellular Automata (CA)---have led to new approaches for constructing Neural Cellular Automata (NCA). These advances have integrated ideas such as variational inference \cite{palm2022variational}, U-Nets \cite{zhang2020learning}, and Graph Neural Networks (GNNs) \cite{grattarola2021learning} with promising results on problems ranging from image synthesis \cite{palm2022variational,niklasson2021self-organising,mordvintsev2021mu} to Reinforcement Learning (RL) \cite{najarro2022hypernca,variengien2021towards}. Transformers are another significant development in deep learning \cite{vaswani2017attention}, but, until now, have not been examined under an NCA setting.

Vision Transformers (ViTs) \cite{dosovitskiy2020image} have emerged as a competitive alternative to Convolutional Neural Network (CNN) \cite{lecun1998gradient} architectures for computer vision, such as Residual Networks (ResNets) \cite{he2016deep}.
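For later reference, the scaled dot-product self-attention at the core of ViTs \cite{vaswani2017attention} computes, for token queries $Q$, keys $K$ and values $V$ with key dimension $d_k$,
\begin{equation}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V,
\end{equation}
whose cost is quadratic in the number of tokens when applied globally; this is the scaling that ViTCA sidesteps by restricting attention to each cell's neighbourhood.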
ViTs leverage the self-attention mechanisms of original Transformers \\cite{vaswani2017attention}, which have emerged as the dominant approach for sequence modelling in recent years. Our work combines foundational ideas from Transformers and ViTs, leading to a new class of NCAs: \\textbf{Vision Transformer Cellular Automata (ViTCA)}. \n\nAn effective and ubiquitous Transformer-based learning technique for Natural Language Processing (NLP) pre-training is the unsupervised task of Masked Language Modelling (MLM), popularized by the BERT language model \\cite{devlin-etal-2019-bert}. The success of MLM-based techniques has similarly inspired recent work re-examining the classical formulation of Denoising Autoencoders (DAEs) \\cite{vincent2010stacked}, but for ViTs \\cite{bao2022beit,dosovitskiy2020image,chen2020generative}, introducing tasks such as Masked Image Encoding \\cite{he2021masked} and Masked Feature Prediction \\cite{wei2021masked} for image and video modelling, respectively. This simple yet highly-scalable strategy of mask-based unsupervised pre-training has yielded promising transfer learning results on vision-based downstream tasks such as object detection and segmentation, image classification, and action detection, even outperforming supervised pre-training~\\cite{he2021masked,wei2021masked}. We examine training methodologies for ViTCA within a DAE setting and perform extensive controlled experiments benchmarking these formulations against modern state-of-the-art architectures, with favourable outcomes, \\emph{e.g}\\onedot, \\Fig{\\ref{fig:vitca_vs_vit}}.\n\n\\input{figures\/teaser}\n\nOur contributions are as follows: \\textit{first}---to the best of our knowledge---our work is the first to extend NCA methodologies with key Transformer mechanisms, \\emph{i.e}\\onedot, self-attention and positional encoding (and embedding), with the beneficial side-effect of circumventing the quadratic complexity of self-attention; \\textit{second}, our ViTCA formulation allows for lower model complexity (by limiting ViT depth) while retaining expressivity through CA iterations on a controlled state---all with the same encoder weights. This yields a demonstrably more parameter-efficient \\cite{mordvintsev2021mu} ViT-based model. Importantly, ViTCA mitigates the problems associated with the explicit tuning of ViT depth originally needed to improve performance (\\emph{i.e}\\onedot, we use a depth of 1). With ViTCA, we simply iterate until cell state convergence. Since ViT (and by extension, ViTCA) employs Layer Normalization (LN) \\cite{ba2016layer} at each stage of its processing, it is a fairly contractive model capable of fixed-point convergence guarantees~\\cite{bai2019deep}.\n\nIn relation to our first contribution, ViTCA respects CA requirements, most importantly that computations remain localized about a cell and its neighbourhood. As such, we modify the global self-attention mechanisms of a ViT to respect this locality requirement (\\Fig{\\ref{fig:teaser}}). Localized self-attention is not a new idea \\cite{chen2022regionvit,liu2021swin,chu2021twins,zhang2021multi}; however, because cells contain state information that depends on their previous states, over CA iterations the effective receptive field of ViTCA's localized self-attention grows increasingly larger until eventually incorporating information implicitly across all cells. 
This admits global propagation of information from spatially localized self-attention. Moreover, due to the self-organizing nature of NCAs, self-organization also manifests itself within the localized self-attention, resulting in a globally agreed-upon arrangement of local self-attention. This circumvents the quadratic complexity of explicit global self-attention (w.r.t\\onedot the input size) through a linear amortization over time, increasing the feasibility of per-pixel dense processing (as we demonstrate). This globally consistent and complex behaviour, which arises from strictly local interactions, is a unique feature of NCAs and confers performance benefits which we observe both qualitatively and quantitatively when comparing ViT and ViTCA for denoising autoencoding. \n\n\\input{figures\/overview}\n\n\n\n\n\n\n\n\\section*{Checklist}\n\\begin{enumerate}\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes\n \\item Did you describe the limitations of your work?\n \\answerYes{See \\Sec{\\ref{sec:discussion}}.}\n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerNo{Although we feel our work demonstrates the potential of NCAs as viable alternatives to common recurrent network architectures (ViTCA being our evidential contribution), our experiments intentionally tend towards the direction of optimizing model efficiency (and single-GPU training accessibility) rather than towards the increasingly popular direction of scaling upwards. However, as much as our work demonstrates the downward-scaling capabilities of NCAs, we also acknowledge that this similarly applies going upward, and as such, can be abused (\\emph{e.g}\\onedot, creating a ``deepfake''-capable ViTCA).}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes\n\\end{enumerate}\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerNA{}\n \\item Did you include complete proofs of all theoretical results?\n \\answerNA{}\n\\end{enumerate}\n\n\\item If you ran experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{Code and instructions to reproduce results are included in the supplemental material.}\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerYes{}\n \\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerNo{Given the combination of time and computational restrictions and our exhaustive list of experiments, we opted to prioritize experiment variety and dataset coverage as an implicit substitute for re-running experiments under different random seeds. 
For all experiments, we kept a fixed random seed, even pointing out (deterministic) differences caused by gradient checkpointing when used (see Appendix \\ref{sec:appendix}).}\n \\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes\n\\end{enumerate}\n\n\\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes\n \\item Did you mention the license of the assets?\n \\answerNA{Licensed frameworks used such as PyTorch (BSD-style) and Hydra (MIT) will be mentioned in acknowledgements.}\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerNo{No new assets---aside from code and training our models---were created for the purposes of this work.}\n \\item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerNo{We used publicly available datasets.}\n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerNo{Although not discussed in the manuscript, we would like to point out that the datasets we used that could potentially contain personally identifiable information (CelebA, CIFAR10, Tiny ImageNet) each have restrictions and\/or acknowledgements of such potential issues. Also, our work is not focused on classifying persons and ViTCA is not a generative model, \\emph{e.g}\\onedot, it cannot generate new faces.}\n\\end{enumerate}\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA\n\\end{enumerate}\n\n\\end{enumerate}\n\\section{Experiments}\n\\vspace{-.5\\baselineskip}\n\\label{sec:experiments}\n\nHere we examine ViTCA through extensive experiments. We begin with experiments on denoising autoencoding, followed by an ablation study and various qualitative analyses, before concluding with linear probing experiments on the learned representations for MNIST \\cite{deng2012mnist}, FashionMNIST \\cite{xiao2017fashion}, and CIFAR10 \\cite{krizhevsky2009learning}. We provide an extension to our experiments in Appendix \\ref{sec:appendix}.\n\n\\paragraph{Baseline models and variants.}\nSince we are performing pixel-level reconstructions, we create a ViT baseline in which the class token has been removed. This applies identically to ViTCA. Unless otherwise stated, for our ViT and ViTCA models we use a patch size of $1\\!\\times\\!1$ ($P_H\\!=\\!P_W\\!=\\!1$), and only a single encoding block with $h\\!=\\!4$ \\mhsa\\ heads, \\embed\\ size $d\\!=\\!128$, and \\mlp\\ size of $128$. For ViTCA, we choose $N_H\\!=\\!3$ and $N_W\\!=\\!3$ (\\emph{i.e}\\onedot, the \\emph{Moore neighbourhood}~\\cite{weissteinmoore}). 
We also compare with a U-Net baseline similar to the original formulation \\cite{ronneberger2015u}, but based on the specific architecture from \\cite{lehtinen2018noise2noise}. Since most of our datasets consist of $32\\!\\times\\!32$ (resampled) images, we only have two downsampling steps as opposed to five. We implement a U-Net-based CA (UNetCA) baseline consisting of a modified version of our U-Net with 48 initial output feature maps as opposed to 24 and with all convolutions except the first changed to $1\\!\\times\\!1$ to respect typical NCA restrictions~\\cite{palm2022variational,mordvintsev2020growing}.\n\n\\vspace{-.5\\baselineskip}\n\\subsection{Denoising autoencoding}\n\\vspace{-.5\\baselineskip}\n\\label{subsec:denoising_autoencoding}\n\nWe compare our baseline models with a number of ViTCA variants in the context of denoising autoencoding. We present test set results across six benchmark datasets: a land cover classification dataset intended for representation learning (LandCoverRep) \\cite{yeh2021sustainbench}, MNIST, CelebA \\cite{liu2015faceattributes}, FashionMNIST, CIFAR10, and Tiny ImageNet (a subset of ImageNet \\cite{russakovsky2015imagenet}). All datasets consist of $32\\!\\times\\!32$ resampled images except Tiny ImageNet, which is at $64\\!\\times\\!64$ resolution. During testing, we use all masking combinations, chosen in a fixed order, and we update cells using a fixed number of iterations ($T\\!=\\!64$). See \\Tab{\\ref{tab:denoising_results}} for quantitative results.\n\nAs briefly mentioned in \\Sec{\\ref{subsec:update_rule}}, we employ a masking strategy inspired by Curriculum Learning (CL) \\cite{wang2021survey,bengio2009curriculum} to ease training. This schedule follows a geometric progression of difficulty---tied to training iterations---maxing out at 10K training iterations. Specifically, masking starts by covering 25\\% of the input with $1\\!\\times\\!1$ patches of noise (dropout for RGB inputs, Gaussian for grayscale), then at each shift in difficulty, new masking configurations are added to the list of available masking configurations in the following order: $(2^0\\!\\times\\!2^0, 50\\%), (2^0\\!\\times\\!2^0, 75\\%), (2^1\\!\\times\\!2^1, 25\\%), (2^1\\!\\times\\!2^1, 50\\%), (2^1\\!\\times\\!2^1, 75\\%), ..., (2^2\\!\\times\\!2^2, 75\\%)$. Masking configurations are randomly chosen from this list.\n\nWe initialize weights\/parameters using He initialization \\cite{he2015delving}, except for the final layer of CA-based models, which is initialized to zero \\cite{mordvintsev2020growing}. Unless otherwise stated, we train for $I\\!=\\!100$K iterations, use a minibatch size $b\\!=\\!32$, the AdamW optimizer \\cite{loshchilov2019decoupled}, learning rate $\\eta\\!=\\!10^{-3}$ with a cosine annealing schedule \\cite{loshchilov2017sgdr}, pool size \\poolsize\\ $\\!=\\!1024$, and cell hidden channel size $C_h\\!=\\!32$. In the case of Tiny ImageNet, $b\\!=\\!8$ to accommodate training on a single GPU (48GB Quadro RTX 8000). Training typically lasts a day at most, depending on the model. Due to the recurrent iterations required per training step, CA-based models take the longest to train. To alleviate memory limitations for some of our experiments, we use gradient checkpointing \\cite{chen2016training} during CA iterations at the cost of increased backpropagation duration and slight variations in gradients due to its effect on round-off propagation. We also experiment with a cell fusion and mitosis scheme as an alternative. See Appendix \\ref{sec:appendix} for details on runtime performance, gradient checkpointing, and fusion and mitosis.
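\n\nFor concreteness, a minimal sketch of the masking curriculum described above is shown below (the helper names and the exact geometric unlocking rule are illustrative assumptions, not our exact implementation):\n\\begin{verbatim}\nimport random\n\n# Masking configurations in order of increasing difficulty:\n# (patch side length, fraction of the input covered).\nCONFIGS = [(1, 0.25), (1, 0.50), (1, 0.75),\n           (2, 0.25), (2, 0.50), (2, 0.75),\n           (4, 0.25), (4, 0.50), (4, 0.75)]\n\ndef available_configs(step, max_step=10_000):\n    # Unlock configurations geometrically with the training step, so that\n    # all of CONFIGS are available by max_step (assumed ramp).\n    progress = min(step, max_step) / max_step\n    n = max(1, round(len(CONFIGS) ** progress))\n    return CONFIGS[:n]\n\ndef sample_mask_config(step):\n    # Configurations are sampled uniformly from the unlocked list.\n    return random.choice(available_configs(step))\n\\end{verbatim}\n\n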
\\input{tables\/denoising_results}\n\nAmongst baselines, ViTCA outperforms on most metrics across the majority of datasets used (10 out of 18). Exceptions include LandCoverRep, where UNetCA universally outperforms by a small margin, likely due to the texture-dominant imagery being amenable to convolutions. Notably, ViTCA strongly outperforms on MNIST. Although MNIST is a trivial dataset for common tasks such as classification, our masking\/noise strategy turns it into a challenging dataset for denoising autoencoding, \\emph{e.g}\\onedot, it is difficult for even a human to classify a $32\\!\\times\\!32$ MNIST digit 75\\% corrupted by $4\\!\\times\\!4$ patches of Gaussian noise. We hypothesize that ViTCA's weaker inductive biases (owed to attention \\cite{yifan2021input,jaegle2021perceiver}) give it an advantage over convolutional models when there are large regions lacking useful features, \\emph{e.g}\\onedot, MNIST digits cover only a small portion of the canvas. This is not the case with FashionMNIST, where the content is more filled out. Between baselines and ViTCA variants, ViTCA-32 (32 heads) and 32xy (xy-coordinate positional encoding) outperform all models by large margins, demonstrating the benefits of multi-head self-attention. We also experiment with a parameter-reduced (by $\\sim\\!60\\%$), inverted bottleneck variant where $d\\!=\\!64$ and \\mlp\\ size is 256, often with a minimal reduction in performance.\n\\vspace{-.5\\baselineskip}\n\\subsubsection{Ablation study}\n\\label{subsubsec:ablation_study}\n\\vspace{-.5\\baselineskip}\n\nIn \\Tab{\\ref{tab:ablation_results}} we perform an ablation study on CelebA, using the baseline ViTCA model above as reference. Results are ordered in row-wise blocks, top-to-bottom. Specifically, we examine the impact of varying the cell hidden size $C_h$; the \\embed\\ size $d$; the number of \\mhsa\\ heads $h$; the depth (\\# encoders), comparing ViTCA (used throughout the table) with ViT; and in the last block we examine the impact of various methods of incorporating positional information into the model.\n\n\\input{tables\/ablation_results}\n\nSpecifically, we examine the use of: (1) an xy-coordinate-based positional encoding \\emph{concatenated} (``injected'') to cells; and (2) a Transformer-based positional encoding (or embedding, if learned) \\emph{added} into \\embed. These two categories are subdivided into: \n(1a) sincos5---consisting of handcrafted Fourier features \\cite{mildenhall2020nerf} with four doublings of a base frequency, \\emph{i.e}\\onedot, \\pe\\ $\\!=\\!(\\sin{2^0\\pi p},$ $\\cos{2^0\\pi p},$ $...,\\sin{2^{J-1}\\pi p},$ $\\cos{2^{J-1}\\pi p})\\!\\in\\mathbb{R}^{N \\times (4JP_HP_W)}$ where $J\\!=\\!5$ and $p$ is the pixel coordinate (normalized to $[-1,1]$) for each pixel the cell is situated on (one pixel since $P_H\\!=\\!P_W\\!=\\!1$); \n(1b) sincos5xy---consisting of both Fourier features and explicit xy-coordinates concatenated; \n(1c) xy---only xy-coordinates; \n(2a) handcrafted (our baseline approach)---sinusoidal encoding \\pe\\ $\\!\\in\\!\\mathbb{R}^{N \\times d}$ similar to (1a) but following a Transformer-based approach \\cite{vaswani2017attention}; and \n(2b) learned---learned embedding \\pe\\ $\\!\\in\\!\\mathbb{R}^{N \\times d}$ following the original ViT approach \\cite{dosovitskiy2020image}. \nTo further test the self-organizing capabilities of ViTCA, we also include: (3) none---no explicit positioning provided, where we let the cells localize themselves.
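\n\nAs a concrete reference for (1a), a minimal sketch of the sincos Fourier-feature code (an illustration under the definitions above; the exact ordering and implementation details may differ from ours):\n\\begin{verbatim}\nimport math\nimport torch\n\ndef fourier_features(height, width, J=5):\n    # For each pixel, concatenate sin(2^j * pi * p) and cos(2^j * pi * p)\n    # for j = 0..J-1 and both coordinates p normalized to [-1, 1].\n    ys = torch.linspace(-1.0, 1.0, height)\n    xs = torch.linspace(-1.0, 1.0, width)\n    yy, xx = torch.meshgrid(ys, xs, indexing='ij')\n    p = torch.stack([xx, yy], dim=-1).reshape(-1, 2)       # (N, 2)\n    freqs = (2.0 ** torch.arange(J).float()) * math.pi     # (J,)\n    angles = p[:, None, :] * freqs[None, :, None]          # (N, J, 2)\n    pe = torch.cat([angles.sin(), angles.cos()], dim=-1)   # (N, J, 4)\n    return pe.reshape(p.shape[0], -1)                      # (N, 4*J)\n\\end{verbatim}\nWith a $32\\!\\times\\!32$ cell grid and $J\\!=\\!5$ this yields a $1024 \\times 20$ code that is concatenated (``injected'') to the cells.\n\n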
As shown in \\Tab{\\ref{tab:ablation_results}}, ViTCA benefits from increasing most CA- and Transformer-centric parameters, at the cost of computational complexity and\/or an increase in parameter count. A noticeable decrease in performance is observed when \\embed\\ size $d\\!=\\!512$, most likely due to the vast increase in parameter count necessitating more training. In the original ViT, multiple encoding blocks were needed before the model could exhibit performance equivalent to its baseline CNN \\cite{dosovitskiy2020image}, as verified in our ablation with our ViT. However, for ViTCA we notice the opposite effect of Transformer depth: increasing it causes a divergence in cell state. It is not clear why this is the case, as we have observed that the LN layers and overflow losses otherwise encourage a contractive $F_\\theta$. This is an investigation we leave for future work. Despite the benefits of increasing $h$, we use $h\\!=\\!4$ for our baseline to optimize runtime performance. Finally, we show that ViTCA does not dramatically suffer when no explicit positioning is used---in contrast to typical Transformer-based models---as cells are still able to localize themselves by relying on their stored hidden information.\n\n\\vspace{-.5\\baselineskip}\n\\subsubsection{Cell state analysis}\n\\label{subsubsec:cell_state_analysis}\n\\vspace{-.5\\baselineskip}\n\nHere we provide an empirically-based qualitative analysis of the effects ViTCA and UNetCA have on cell states through several experiments with our pre-trained models (\\Fig{\\ref{fig:analysis} (a,b,c)}). We notice that in general, ViTCA indefinitely maintains cell state stability while UNetCA typically induces a divergence past a certain point. An extended analysis is available in Appendix \\ref{subsec:extended_analysis}.\n\n\\textbf{Damage resilience.}\nAs shown in \\Fig{\\ref{fig:analysis}} (a), we damage a random $H\/2\\!\\times\\!W\/2$ patch of cells with random values $\\sim\\!\\mathcal{U}(-1,1)$ twice in succession. ViTCA is able to maintain cell stability despite not being trained to deal with such noise, while UNetCA induces a divergence. Note both models are simultaneously performing the typical denoising task.
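\n\nA sketch of this damage operation, assuming the cell grid is stored as a $B \\times C \\times H \\times W$ tensor (illustrative only):\n\\begin{verbatim}\nimport torch\n\ndef damage(cells):\n    # Overwrite a random H/2 x W/2 patch of the cell grid (all channels,\n    # including hidden state) with values drawn from U(-1, 1).\n    B, C, H, W = cells.shape\n    y = torch.randint(0, H - H // 2 + 1, (1,)).item()\n    x = torch.randint(0, W - W // 2 + 1, (1,)).item()\n    patch = torch.rand(B, C, H // 2, W // 2, device=cells.device) * 2 - 1\n    cells[:, :, y:y + H // 2, x:x + W // 2] = patch\n    return cells\n\\end{verbatim}\n\n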
We also note that ViTCA's inherent damage resilience is in contrast to recent NCA formulations that required explicit training for it \\cite{palm2022variational,mordvintsev2020growing}.\n\n\\textbf{Convergence stability.}\n\\Fig{\\ref{fig:analysis}} (b) shows denoising results after 2784 cell grid updates. ViTCA is able to maintain a stable cell grid state while UNetCA causes cells to diverge.\n\n\\textbf{Hidden state visualizations.}\n\\Fig{\\ref{fig:analysis}} (c) shows 2D and 3D PCA dimensionality reductions on the hidden states of converged cell grids for all examples in FashionMNIST~\\cite{xiao2017fashion}. The clusters suggest some linear separability in the learned representation, motivating our probing experiments in \\Sec{\\ref{subsec:probing}}. \n\n\\input{figures\/analysis}\n\n\\subsubsection{Investigating update rule inductive biases}\n\\label{subsubsec:inductive_bias_investigation}\n\\vspace{-.5\\baselineskip}\n\nHere we investigate the inductive biases inherent in ViTCA and UNetCA by testing their adaptation to various environmental changes (\\Fig{\\ref{fig:analysis}} (d,e,f,g,h)).\n\n\\textbf{Adaptation to varying update rates.}\nDespite being trained with a $\\sigma\\!=\\!50\\%$ cell update rate, ViTCA is able to adapt to varying rates (\\Fig{\\ref{fig:analysis}} (d)). Higher rates result in proportionally faster cell state convergence, and lower rates in proportionally slower convergence. UNetCA exhibits a similar relationship, although it is unstable at $\\sigma\\!=\\!100\\%$ (see Appendix \\ref{subsec:extended_analysis}). For details comparing training with a synchronous \\emph{vs}\\onedot asynchronous cell grid update, see Appendix \\ref{subsec:extended_ablation}.\n\n\\textbf{Generalization to noise unseen during training.}\nViTCA is capable of denoising configurations of noise it has not been trained on. \\Fig{\\ref{fig:analysis}} (e; \\emph{left-to-right}): $4\\!\\times\\!1$ and $1\\!\\times\\!4$ patches of Gaussian noise at 65\\% coverage. In contrast, UNetCA induces a cell state divergence (see Appendix \\ref{subsec:extended_analysis}).\n\n\\textbf{Adaptation to changing inputs.}\nAt various moments during cell updates, we re-inject cells with new masked inputs (\\Fig{\\ref{fig:analysis}} (f)). ViTCA is able to consistently adapt cells to new inputs while UNetCA experiences difficulty past a certain point (\\emph{e.g}\\onedot, at 464 iterations in the figure).\n\n\\textbf{Effects of not \\emph{vs}\\onedot completely masking input.}\n\\Fig{\\ref{fig:analysis}} (g; \\emph{left}): ViTCA is able to perform autoencoding despite not being trained for it. UNetCA induces a cell grid divergence (see Appendix \\ref{subsec:extended_analysis}). \\Fig{\\ref{fig:analysis}} (g; \\emph{right}): Interestingly, when the input is completely masked, ViTCA outputs the median image~\\cite{lehtinen2018noise2noise}. UNetCA does not exhibit such behaviour and instead causes cells to diverge (see Appendix \\ref{subsec:extended_analysis}). \n\n\\textbf{Spatial interpolation.}\nWe use ViTCA models trained at $32\\!\\times\\!32$ using various types of positioning to generate $128\\!\\times\\!128$ outputs during inference, assuming an identical cell grid resolution. 
\\Fig{\\ref{fig:analysis}} (h; \\emph{top-to-bottom of \\textcolor{vitcaPurple}{outputs}}): xy-coordinates, no positioning, Fourier features \\cite{mildenhall2020nerf}, Fourier features concatenated with xy-coordinates, and a Transformer-based handcrafted positional encoding (baseline) \\cite{vaswani2017attention}. Results are ordered from best to worst. The baseline approach is not capable of spatial interpolation since it is a 1D encoding, while, as expected, the 2D encodings make it capable. Surprisingly, removing Fourier features and using only xy-coordinates results in a higher-fidelity interpolation. We believe this is because Fourier features provide a distracting amount of positional information to cells, which can instead rely on their hidden states to store higher-frequency positional information. Finally, with no explicit positioning, ViTCA is still able to perform high-quality interpolation---even exceeding the Fourier-feature variants---by taking advantage of its self-organizing nature. As a side note, ViTCA is simultaneously denoising at a spatial scale it has not been trained on, exemplifying its generalization capabilities.\n\n\\vspace{-.5\\baselineskip}\n\\subsection{Investigating hidden representations via linear probes}\n\\vspace{-.5\\baselineskip}\n\\label{subsec:probing}\n\n\\input{tables\/linear_probe_results}\nHere we examine the learned representations of our models pre-trained for denoising. We freeze model parameters and learn linear classifiers on each of their learned representations: converged cell hidden states for CA-based models, bottleneck features for U-Net, and LN'd tokens for ViT. This is a common approach used to probe learned representations~\\cite{chen2020generative}. Classification results on MNIST, FashionMNIST, and CIFAR10 are shown in \\Tab{\\ref{tab:linear_probe_results}}; we use the same training setup as for denoising, but without any noise. For comparison, we also provide results using a linear classifier and two 2-layer MLPs of varying complexity, all trained directly on raw pixel values. Correlations between denoising performance in \\Tab{\\ref{tab:denoising_results}} and classification performance in \\Tab{\\ref{tab:linear_probe_results}} can be observed. Linear classification accuracy on ViTCA-based features typically exceeds classification accuracy using other model-based features or raw pixel values, even outperforming the MLPs in most cases.\n\\section{Vision Transformer Cellular Automata (ViTCA)}\n\\vspace{-.5\\baselineskip}\n\\label{sec:vitca}\n\nBuilding upon NCAs and ViTs, we propose a new class of \\emph{attention-based} NCAs formed using a spatially localized---yet globally organized---self-attention scheme. We detail an instance of this class, ViTCA, by first reviewing its backbone ViT architecture before describing the ``pool sampling''-based training process for the ViTCA update rule (see overview in \\Fig{\\ref{fig:overview}}).\n\n\\paragraph{Input tokenization.}\n\nViT starts by dividing a $C_i\\!\\times\\!H\\!\\times\\!W$ input image \\gt\\ into $N$ non-overlapping $P_H\\!\\times\\!P_W$ patches ($16\\!\\times\\!16$ in the original work~\\cite{dosovitskiy2020image}), followed by a linear projection of the flattened image patches with an embedding matrix $\\mathbf{E} \\in \\mathbb{R}^{L \\times d}$ (\\Fig{\\ref{fig:overview}} \\embed), where $L\\!=\\!C_iP_HP_W$, to produce initial tokens \\pretokens\\ $\\!\\in\\!\\mathbb{R}^{N \\times d}$. 
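\n\nA rough sketch of this tokenization step, assuming a plain image input (for ViTCA the rows fed to \\embed\\ are flattened cells rather than raw image patches, as described next; names are illustrative):\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass Tokenizer(nn.Module):\n    # Split an image into non-overlapping P_H x P_W patches, flatten each\n    # patch, and linearly project it to a d-dimensional token.\n    def __init__(self, in_channels, patch_h, patch_w, d):\n        super().__init__()\n        self.patch_h, self.patch_w = patch_h, patch_w\n        self.embed = nn.Linear(in_channels * patch_h * patch_w, d)\n\n    def forward(self, x):                            # x: (B, C_i, H, W)\n        x = x.unfold(2, self.patch_h, self.patch_h)  # split rows\n        x = x.unfold(3, self.patch_w, self.patch_w)  # split columns\n        x = x.permute(0, 2, 3, 1, 4, 5).flatten(3).flatten(1, 2)\n        return self.embed(x)                         # tokens: (B, N, d)\n\\end{verbatim}\n\n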
Next, a handcrafted positional encoding \\cite{vaswani2017attention} or learned positional embedding \\pe\\ $\\in\\!\\mathbb{R}^{N \\times d}$ \\cite{dosovitskiy2020image} is added to tokens to encode positional information and break permutation invariance. Finally, a learnable class token is appended to the token sequence, resulting in \\tokens\\ $\\!\\in\\!\\mathbb{R}^{(N+1) \\times d}$. For the purposes of our task, we omit this token in all ViT-based models. In ViTCA, the input to the embedding is a flattened cell grid \\cells\\ $\\!\\in\\!\\mathbb{R}^{N \\times L}$ where $L\\!=\\!C_PP_HP_W+C_h$, $C_P\\!=\\!C_i+C_o+C_{\\peplain}$,\\, $C_h$ is the cell hidden size,\\, $C_o$ is the number of output image channels (one or three for grayscale or RGB), and $C_{\\peplain}$ is the positional encoding size when positional encoding is (optionally) concatenated to each cell rather than added to the tokens \\cite{mildenhall2020nerf}.\n\n\\paragraph{Multi-head self-attention (MHSA).}\n\nGiven a sequence of tokens \\tokens, self-attention estimates the relevance of one token to all others (\\emph{e.g}\\onedot, which image patches are likely to appear together in an image) and aggregates this global information to update each token. This encodes each token in terms of global contextual information, and does so using three learned weight matrices: $\\mathbf{W}_Q\\!\\in\\!\\mathbb{R}^{d \\times d}$, $\\mathbf{W}_K\\!\\in\\!\\mathbb{R}^{d \\times d}$, and $\\mathbf{W}_V\\!\\in\\!\\mathbb{R}^{d \\times d}$. \\tokens\\ is projected onto these weight matrices to obtain Queries $\\mathbf{Q}\\!=\\!$ \\tokens$\\mathbf{W}_Q$, Keys $\\mathbf{K}\\!=\\!$ \\tokens$\\mathbf{W}_K$, and Values $\\mathbf{V}\\!=\\!$ \\tokens$\\mathbf{W}_V$. The self-attention layer output $\\sa\\!\\in\\!\\mathbb{R}^{N \\times d}$ is:\n\n\\input{equations\/sa}\n\n\\emph{Multi-head} self-attention employs many sets of weight matrices, $\\{\\mathbf{W}_{Q_i},$ $\\mathbf{W}_{K_i},$ $\\mathbf{W}_{V_i}\\!\\in\\!\\mathbb{R}^{d \\times (d\/h)}\\!\\mid$ $i\\!=\\!0,...,(h-1)\\}$. The outputs of $h$ self-attention \\emph{heads} are concatenated into $(\\sa_0,$ $...,$ $\\sa_{h-1})\\!\\in\\!\\mathbb{R}^{N \\times d}$ and projected onto a weight matrix $\\mathbf{W}\\!\\in\\!\\mathbb{R}^{d \\times d}$ to produce $\\mhsa\\!\\in\\!\\mathbb{R}^{N \\times d}$. Self-attention explicitly models global interactions and is more flexible than grid-based operators (\\emph{e.g}\\onedot, convolutions) \\cite{perez2019turing,cordonnier2019relationship}, but its quadratic cost in time and memory limits its applicability to high-resolution images.\n\n\\paragraph{Spatially localizing self-attention.}\n\nThe global nature of self-attention directly conflicts with the spatial locality constraint of CAs; in response, we limit the connectivity structure of the attention operation to each cell's neighbourhood. This can be accomplished by either masking each head's attention matrix ($\\mathbf{A}\\!=\\!\\texttt{softmax}(\\cdots) \\in \\mathbb{R}^{N \\times N}$ in Eq.\\ \\ref{eq:sa}) with a banded matrix representing local connectivity (\\emph{e.g}\\onedot, \\Fig{\\ref{fig:overview}} \\localize), or more efficiently,\n\n\\input{equations\/localize_attn} Here, we assume top-left-to-bottom-right input flattening.\nInstead of explicitly computing the global self-attention matrix $\\mathbf{A}\\!\\in\\!\\mathbb{R}^{N \\times N}$ then masking it, this approach circumvents the $\\mathcal{O}(N^2d)$ computation in favour of an $\\mathcal{O}(N\\!M\\!d)$ alternative that indexes the necessary rows and columns \\emph{during} self-attention. The result is a localized self-attention matrix $\\mathbf{A}^{\\!\\star}\\!\\in\\!\\mathbb{R}^{N \\times M}$, where $M\\!=\\!N_HN_W\\!\\ll\\!N$.
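\n\nA single-head sketch of this gather-based localization (zero padding at the borders and the scaling are illustrative simplifications; a multi-head version splits $d$ across $h$ heads as in \\mhsa):\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef localized_attention(q, k, v, H, W, nbhd=3):\n    # q, k, v: (B, N, d) with N = H*W. Each query attends only to the\n    # M = nbhd*nbhd keys/values in its (padded) spatial neighbourhood.\n    B, N, d = q.shape\n    def gather(t):  # (B, N, d) -> (B, N, M, d) of spatial neighbours\n        t = t.transpose(1, 2).reshape(B, d, H, W)\n        t = F.unfold(t, kernel_size=nbhd, padding=nbhd // 2)  # (B, d*M, N)\n        return t.reshape(B, d, nbhd * nbhd, N).permute(0, 3, 2, 1)\n    k_n, v_n = gather(k), gather(v)                           # (B, N, M, d)\n    attn = torch.einsum('bnd,bnmd->bnm', q, k_n) / d ** 0.5   # (B, N, M)\n    attn = attn.softmax(dim=-1)\n    return torch.einsum('bnm,bnmd->bnd', attn, v_n)           # (B, N, d)\n\\end{verbatim}\nOnly the $N \\times M$ localized attention matrix is materialized, which is what yields the $\\mathcal{O}(N\\!M\\!d)$ cost noted above.\n\n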
As we show in our experiments, ViTCA is still capable of global self-attention despite its localization, by leveraging stored state information across cells and their global self-organization during CA iterations (\\Fig{\\ref{fig:teaser}}).\n\nFollowing \\mhsa\\ is a multilayer perceptron (\\Fig{\\ref{fig:overview}} \\mlp) with two layers and a GELU non-linearity. We apply Layer Normalization (LN) \\cite{ba2016layer} before \\mhsa\\ and \\mlp, and residual connections afterwards, forming a single encoding block. We use an MLP head (\\Fig{\\ref{fig:overview}} \\mlphead) to decode to a desired output, with LN applied to its input, finalizing the ViTCA update rule $F_\\theta$. In our experiments, ViT's \\mlphead\\ decodes directly into an image output whereas ViTCA decodes into update vectors added to cells.\n\n\\vspace{-.5\\baselineskip}\n\\subsection{Update rule training procedure}\n\\vspace{-.5\\baselineskip}\n\\label{subsec:update_rule}\n\nTo train the ViTCA update rule, we follow a ``pool sampling''-based training process \\cite{palm2022variational,mordvintsev2020growing} along with a curriculum-based masking\/noise schedule when corrupting inputs. During odd training iterations, we uniformly initialize a minibatch of cells \\cells\\ $\\!=\\!(\\cellsplain_1,...,\\cellsplain_b)$ with constant values (0.5 for output channels, 0 for hidden---see Appendix \\ref{subsec:extended_ablation} for alternatives), then inject the masked input \\maskedinput\\ (see \\Sec{\\ref{subsec:denoising_autoencoding}}). After input injection, we asynchronously update cells ($\\sigma\\!=\\!50\\%$ update rate) using $F_\\theta$ for $T\\!\\sim\\!\\mathcal{U}\\{8,32\\}$ recurrent iterations. We retrieve output \\cellsoutput\\ from the cell grid and apply an $L_1$ loss against the ground truth \\gt. We also apply overflow losses to penalize cell output values outside of $[0,1]$ and cell hidden values outside of $[-1,1]$. We use $L_2$ normalization on the gradient of each parameter in $\\theta$. After backpropagation, we append the updated cells and their ground truths to a pool \\pool\\ which we then shuffle and truncate to the first \\poolsize\\ elements. During even training iterations, we retrieve a minibatch of cells and their ground truths from \\pool\\ and process them as above. This encourages $F_\\theta$ to guide cells towards a stable fixed-point.
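\n\nA highly simplified sketch of one such training step (omitting the overflow losses, gradient normalization, and pool bookkeeping; helper names such as \\texttt{seed}, \\texttt{inject}, \\texttt{mask}, and \\texttt{rgb} are illustrative, not our actual interface):\n\\begin{verbatim}\nimport torch\n\ndef train_step(model, pool, gt, optimizer, step, sigma=0.5):\n    if step % 2 == 0 and len(pool) >= gt.shape[0]:\n        cells, gt = pool.sample(gt.shape[0])   # even steps: reuse the pool\n    else:\n        cells = model.seed(gt)                 # 0.5 output chans, 0 hidden\n    cells = model.inject(cells, mask(gt))      # write masked input to cells\n    T = torch.randint(8, 33, (1,)).item()      # T ~ U{8, 32}\n    for _ in range(T):\n        update = model(cells)                  # ViTCA update rule F_theta\n        gate = (torch.rand_like(update[..., :1]) < sigma).float()\n        cells = cells + gate * update          # asynchronous (50%) update\n    loss = (model.rgb(cells) - gt).abs().mean()  # L1 reconstruction loss\n    optimizer.zero_grad(); loss.backward(); optimizer.step()\n    pool.push(cells.detach(), gt)              # shuffle/truncate elsewhere\n    return loss\n\\end{verbatim}\n\n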
\\Alg{\\ref{alg:vitca_training}} in Appendix \\ref{sec:appendix} details this process.\n\\section{Appendix}\n\\label{sec:appendix}\n\n\\input{algorithms\/vitca_training}\n\n\n\\subsection{Training on high-resolution imagery with fusion and mitosis}\n\\label{subsec:high_resolution_training}\n\nAs an alternative to gradient checkpointing for reducing memory usage, we briefly experimented with a downsampling scheme inspired by cell fusion and mitosis when training on CelebA at $64\\!\\times\\!64$. Specifically, we split the $T$ applications of the update rule (within a training iteration) into multiple stages: 1) We apply the update rule twice so that cells will have, at minimum, some amount of knowledge of their neighbours. 2) We stash the masked input for a later re-injection. 3) \\emph{Fusion}---we apply a $2\\!\\times\\!2$ average pooling with a stride of 2 across the cell grid, combining $2\\!\\times\\!2$ groups of cells into singular cells. 4) We apply the update rule $T-4$ times at this $32\\!\\times\\!32$ downsampled cell grid resolution. 5) \\emph{Mitosis}---we perform a $2\\!\\times\\!2$ duplication of cells (each cell is duplicated to its right, bottom-right, and bottom). 6) We re-inject the stashed masked input. 7) We apply the update rule twice to adapt the cells to the $64\\!\\times\\!64$ resolution and to fill in any missing information.\n\nWe found that performing this fusion and mitosis scheme decreased training memory consumption to levels similar to our gradient checkpointing scheme ($\\sim\\!50\\%$ memory reduction) while having a $\\sim\\!70\\%$ faster backward pass. Loss-wise, we observed a $\\sim\\!33\\%$ increase in the average validation reconstruction loss during training, which can qualitatively be observed in the example provided in \\Fig{\\ref{fig:fusion_mitosis}} (\\emph{\\textcolor{unetcaBlue}{bottom}}). Although the results shown are not ideal---\\emph{i.e}\\onedot, we did not perform a hyper-parameter search here, for example, finding the optimal number of iterations preceding fusion and following mitosis---this brief experiment tests the feasibility of reducing memory consumption while maintaining denoising capability and avoiding gradient checkpointing. As shown in the figure, ViTCA with fusion and mitosis is able to successfully denoise the input despite applying updates at two different scales. This scale-agnostic behaviour reveals potentially interesting research directions beyond the scope of this work, such as allowing an NCA update rule to dynamically and locally modify cell grid resolution based on a compute budget, which could see applications in signal (image, video, or audio) compression.
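\n\nA minimal sketch of the fusion and mitosis steps themselves, assuming the cell grid is stored as a $B \\times C \\times H \\times W$ tensor (illustrative only):\n\\begin{verbatim}\nimport torch.nn.functional as F\n\ndef fuse(cells):\n    # Fusion: 2x2 average pooling with stride 2,\n    # (B, C, H, W) -> (B, C, H/2, W/2).\n    return F.avg_pool2d(cells, kernel_size=2, stride=2)\n\ndef mitose(cells):\n    # Mitosis: duplicate every cell to its right, bottom, and bottom-right\n    # neighbours, (B, C, H, W) -> (B, C, 2H, 2W).\n    return cells.repeat_interleave(2, dim=2).repeat_interleave(2, dim=3)\n\\end{verbatim}\n\n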
\\begin{figure}[t]\n \\begin{floatrow}\n \\input{figures\/fusion_mitosis}\n \\input{tables\/other_ablation_results}\n \\vspace{-0.5\\baselineskip}\n \\end{floatrow}\n\\end{figure}\n\n\\subsection{Extended ablation study}\n\\label{subsec:extended_ablation}\n\nHere we present an extension of our ablation study in \\Sec{\\ref{subsubsec:ablation_study}}, using the baseline ViTCA model as our reference. As before, the ablation examines the effects certain training configuration parameters have on test performance.\n\n\\input{tables\/attn_size_ablation_results}\n\\input{tables\/update_rate_ablation_results}\n\\input{tables\/gradient_checkpointing_ablation_results}\n\n\\paragraph{Pool size, cell initialization, and patch size.}\nIn \\Tab{\\ref{tab:other_ablation_results}}, we examine the impact of varying the (max) pool size $N_\\mathcal{P}$, cell initialization method, and patch size $P_H\\!\\times\\!P_W$ on CelebA. As shown in the table, it is difficult to correlate pool size with test performance. However, when pool size $N_\\mathcal{P}\\!=\\!8192$, there is a noticeable reduction in performance. Test performance also degrades when initializing cells such that their output and hidden channels receive random values sampled from $\\mathcal{U}(0,1)$ and $\\mathcal{U}(-1,1)$, respectively, as opposed to receiving constant values (0.5 for output channels and 0 for hidden). Finally, we see a consistent decrease in performance when the input image is divided into non-overlapping patches larger than $1\\!\\times\\!1$, along with an increase in the number of model parameters.\n\n\\paragraph{Attention neighbourhood size.}\nIn \\Tab{\\ref{tab:attn_size_ablation_results}}, we examine the impact of attention neighbourhood size $N_H\\!\\times\\!N_W$ on FashionMNIST. Interestingly, increasing the neighbourhood size past $3\\!\\times\\!3$ causes a degradation in performance. This is most likely due to the increase in complexity caused by incorporating more information into ViTCA's self-attention. One would expect explicitly increasing the receptive field of spatially localized self-attention to result in better performance, but it can also complicate the process of figuring out which neighbours to attend to. We believe this may be alleviated by increasing model capacity and\/or training duration. As described in \\Sec{\\ref{sec:experiments}}, we use the Moore neighbourhood ($3\\!\\times\\!3$) as it requires less computation while still demonstrating ViTCA's effectiveness.\n\n\\paragraph{Asynchronous \\emph{vs}\\onedot synchronous cell updates.}\nIn \\Tab{\\ref{tab:update_rate_ablation_results}}, we compare training with asynchronous cell updates ($\\sigma\\!=\\!50\\%$) against training with synchronous cell updates ($\\sigma\\!=\\!100\\%$) on LandCoverRep, MNIST, CelebA, and FashionMNIST. Training with asynchronous cell updates provides a meaningful increase in performance compared to training with synchronous cell updates and comes with several benefits, such as not requiring cells in a neighbourhood to be in sync with each other and serving as additional data augmentation. As similarly noted in related work \\cite{niklasson2021self-organising}, this allows ViTCA to be used in a distributed system where cells need not exist under a global clock and can be updated at varying rates, making it easier to scale up or down within a non-homogeneous compute environment. This was somewhat demonstrated in \\Fig{\\ref{fig:analysis}} (d) where ViTCA was able to adapt to varying update rates despite being trained on a fixed asynchronous update rate ($\\sigma\\!=\\!50\\%$).\n\n\\paragraph{Effects of gradient checkpointing.}\nIn \\Tab{\\ref{tab:gradient_checkpointing_ablation_results}}, we compare training with gradient checkpointing disabled against training with it enabled on LandCoverRep, MNIST, CelebA, and FashionMNIST. 
As similarly shown in \\Tab{\\ref{tab:ablation_results}}, we see here that training with gradient checkpointing has an adverse effect on test performance. As mentioned in \\Sec{\\ref{sec:discussion}}, NCAs---during training---require all activations from each recurrent iteration to be stored in memory before performing backpropagation. This results in memory usage being proportional to the number of recurrent iterations. As such, depending on ViTCA's configuration, gradient checkpointing may be required to train on a single GPU. We make use of PyTorch's \\texttt{checkpoint\\_sequential} as follows: given the number of CA iterations $T$, we divide the sequential (forward) application of the update rule into $\\lfloor T\/2 \\rfloor$ segments of roughly the same length (depending on whether $T$ is even or odd). Then, all segments are executed in sequence, where activations from only the first and last segments are stored as well as the inputs to each intermediate segment. The intermediate inputs are used for re-running the segments without stored activations during the backward pass to compute gradients. This results in a trade-off between memory consumption and backpropagation duration since each intermediate segment's forward pass needs to be re-computed during its backward pass.
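\n\nA minimal sketch of this wrapping, assuming the update rule is expressed as a module that maps the cell grid to the updated cell grid:\n\\begin{verbatim}\nfrom torch.utils.checkpoint import checkpoint_sequential\n\ndef ca_rollout_checkpointed(update_rule, cells, T):\n    # Apply the update rule T times, split into floor(T/2) segments;\n    # intermediate segments are re-run during the backward pass instead\n    # of storing all of their activations.\n    steps = [update_rule] * T\n    segments = max(1, T // 2)\n    return checkpoint_sequential(steps, segments, cells)\n\\end{verbatim}\n\n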
Moreover, and not mentioned in the documentation of PyTorch at the time of writing, there exists a subtle yet meaningful side-effect which we have observed and confirmed through the use of GNU Debugger (GDB) and Python Debugger (PDB): Without gradient checkpointing, gradients are accumulated all at once at the end of backpropagating through the entire computation graph, resulting in the expected round-offs due to limitations in machine precision (\\texttt{float32} in our case). At this point, PyTorch may use a variety of numerical techniques to minimize round-off, such as cascade summation (verified to be used for CPU-based summation, see \\texttt{SumKernel.cpp} in PyTorch) which recursively sums two halves of a sequence of summands as opposed to naively summing them in sequence. \\emph{With} gradient checkpointing, gradients are accumulated at each segment. This means that round-offs are forced to (potentially) occur at each checkpoint\/segment instead of once at the end of the entire computation graph. Even if cascade summation is used when summing gradients within each segment, the segment-wise ordering may reduce its effectiveness. We verified this behaviour by observing an exact machine epsilon difference ($\\epsilon\\approx1.19\\!\\times\\!10^{-7}$ in the IEEE 754 standard) in the gradient---when compared to the non-checkpointed scheme---of the final operation of the update rule at the second-last segment, once the loss started to diverge.\n\nIt is important to note that despite the difference in gradients, the accuracy of the forward pass remains unchanged between the checkpointed and non-checkpointed models. Also, round-offs are unavoidable when performing floating-point arithmetic, meaning that gradients computed within a deep learning library such as PyTorch are always an \\emph{estimation} of the true gradient. Importantly, both checkpointed and non-checkpointed models exhibited the same spikes and dips in their validation losses over the course of training, also decreasing at similar rates.\n\n\n\n\\subsection{Extended analysis of cell state and update rule inductive biases}\n\\label{subsec:extended_analysis}\n\n\\input{figures\/unetca_extended_analysis}\n\nHere we present an extension of the analyses provided in \\Sec{\\ref{subsubsec:cell_state_analysis}} and \\Sec{\\ref{subsubsec:inductive_bias_investigation}}.\n\n\\paragraph{Adaptation to varying update rates (UNetCA).}\n\\Fig{\\ref{fig:unetca_extended_analysis}} (a) shows UNetCA capable of adapting to a slower ($\\sigma\\!=\\!25\\%$) cell update rate despite being trained with a $\\sigma\\!=\\!50\\%$ cell update rate. Interestingly, UNetCA experiences difficulty synchronously updating all cells ($\\sigma\\!=\\!100\\%$), producing a noticeably lower-quality output compared to its outputs at asynchronous rates. This is in contrast to ViTCA (\\Fig{\\ref{fig:analysis}} (d)), where the quality of output remains the same across all update rates. Also, although not shown in \\Fig{\\ref{fig:analysis}} (d), it is important to note the number of ViTCA iterations from left to right: 1, 8, 12, 16, 32. We note that UNetCA required 48 iterations to converge with $\\sigma\\!=\\!25\\%$, 24 iterations to converge with $\\sigma\\!=\\!50\\%$, and could not converge to a good solution with $\\sigma\\!=\\!100\\%$, while ViTCA required 32 iterations to converge with $\\sigma\\!=\\!25\\%$, 16 iterations to converge with $\\sigma\\!=\\!50\\%$, and 8 iterations to converge with $\\sigma\\!=\\!100\\%$.\n\n\\paragraph{Generalization to noise unseen during training (UNetCA).}\nAs shown in \\Fig{\\ref{fig:unetca_extended_analysis}} (b), UNetCA is incapable of generalizing to noise configurations unseen during training, inducing a divergence in cell states. This is in contrast to ViTCA as shown in \\Fig{\\ref{fig:analysis}} (e). ViTCA not only produces a higher-fidelity output mid-denoising, but it also maintains cell state stability.\n\n\\paragraph{Effects of not \\emph{vs}\\onedot completely masking input (UNetCA).}\n\\Fig{\\ref{fig:unetca_extended_analysis}} (c; \\emph{top}): Although UNetCA is able to successfully autoencode the unmasked input image, it eventually induces a divergence amongst cell states. This is in contrast to ViTCA as shown in \\Fig{\\ref{fig:analysis}} (g; \\emph{left}). ViTCA not only produces a higher-fidelity output mid-denoising, but it also maintains cell state stability. \\Fig{\\ref{fig:unetca_extended_analysis}} (c; \\emph{bottom}): Unlike ViTCA (\\Fig{\\ref{fig:analysis}} (g; \\emph{right})), UNetCA does not output the median image when attempting to denoise a completely masked input and instead causes cells to diverge.\n\n\\paragraph{Effect of masking heads.}\n\\input{figures\/attn_head_masking}\n\n\\input{tables\/runtime_profiling_results}\n\n\\Fig{\\ref{fig:attn_head_masking}} shows how ViTCA reacts to having its self-attention heads masked while autoencoding (without noise) an example from CelebA. The purpose of this experiment is to observe each head's contribution to the output. We can see that when none of the heads are masked, they attend to facial features and contours, and the output is as expected. However, once heads are masked, the unmasked heads stop attending to the features they once did and instead deteriorate. In some cases, the unmasked heads stop attending to anything at all. 
There are a couple of interesting cases: 1) When only the first head is masked, ViTCA is still able to successfully autoencode the input, although there is a slight degradation in quality. This is consistent with examples from the other datasets as well as when there is noise involved. 2) When certain heads are masked, the noise that the model was trained to denoise starts to appear (\\emph{e.g}\\onedot, fourth column from the left and fifth column from the right).\n\n\\subsection{Runtime analysis of ViTCA}\n\\label{subsec:runtime_analysis}\n\nHere we provide a brief analysis of ViTCA's runtime performance and memory usage while training on a minibatch of random $32\\times 3 \\times H \\times W$ images, measuring forward pass duration (ms), backward pass duration (ms), and training memory usage (GB), with and without gradient checkpointing. We use $T\\!=\\!32$ ViTCA iterations and 16 checkpoint segments. Results are shown in \\Tab{\\ref{tab:runtime_profiling_results}}. Gradient checkpointing provides substantial memory savings at the cost of proportionally increasing the duration of the backward pass.
Also, our work is not focused on classifying persons and ViTCA is not a generative model, \\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, it can not generate new faces.}\n\\end{enumerate}\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA\n\\end{enumerate}\n\n\\end{enumerate}\n\\section{Discussion}\n\\label{sec:discussion}\n\\vspace{-.5\\baselineskip}\nWe have performed extensive quantitative and qualitative evaluations of our newly proposed ViTCA on a variety of datasets under a denoising autoencoding framework. We have demonstrated the superior denoising performance and robustness of our model when compared to a U-Net-based CA baseline (UNetCA) and ViT, as well as its generalization capabilities under a variety of environmental changes such as larger inputs (\\emph{i.e}\\onedot, } \\def\\Ie{\\emph{I.e}\\onedot, spatial interpolation) and changing inputs \\emph{during} cell updates.\n\nDespite the computation savings---owed to our circumvention of self-attention's quadratic complexity by spatially localizing it within ViTCA---there remains the same memory limitations inherent to all recurrent models: multiple recurrent iterations are required for each training iteration, resulting in larger memory usage than a feedforward approach. This limits single-GPU training accessibility. We have experimented with gradient checkpointing \\cite{chen2016training} but found its trade-off for increased backpropagation duration (and slightly different gradients) less than ideal. To fully realize the potential of NCAs (self-organization, inherent distributivity, \\emph{etc}\\onedot), we encourage follow-up work to address this limitation. Adapting recent techniques using implicit differentiation is one avenue to circumvent these issues~\\cite{bai2022deep,bai2019deep}. Also, as mentioned in our ablation (\\Sec{\\ref{subsubsec:ablation_study}}), we hope to further investigate the instabilities caused by increasing the depth of ViTCA.\n\n\n\n\n\\section{Experiments}\n\\vspace{-.5\\baselineskip}\n\\label{sec:experiments}\n\nHere we examine ViTCA through extensive experiments. We begin with experiments for denoising autoencoding, then an ablation study followed by various qualitative analyses, before concluding with linear probing experiments on the learned representations for MNIST \\cite{deng2012mnist}, FashionMNIST \\cite{xiao2017fashion}, and CIFAR10 \\cite{krizhevsky2009learning}. We provide an extension to our experiments in Appendix \\ref{sec:appendix}.\n\n\\paragraph{Baseline models and variants.}\nSince we are performing pixel level reconstructions, we create a ViT baseline in which the class token has been removed. This applies identically for ViTCA. Unless otherwise stated, for our ViT and ViTCA models we use a patch size of $1\\!\\times\\!1$ ($P_H\\!=\\!P_W\\!=\\!1$), and only a single encoding block with $h\\!=\\!4$ \\mhsa\\ heads, \\embed\\ size $d\\!=\\!128$, and \\mlp\\ size of $128$. 
For ViTCA, we choose $N_H\\!=\\!3$ and $N_W\\!=\\!3$ (\\emph{i.e}\\onedot, } \\def\\Ie{\\emph{I.e}\\onedot, the \\emph{Moore neighbourhood}~\\cite{weissteinmoore}). We also compare with a U-Net baseline similar to the original formulation \\cite{ronneberger2015u}, but based on the specific architecture from \\cite{lehtinen2018noise2noise}. Since most of our datasets consist of $32\\!\\times\\!32$ (resampled) images, we only have two downsampling steps as opposed to five. We implement a U-Net-based CA (UNetCA) baseline consisting of a modified version of our U-Net with 48 initial output feature maps as opposed to 24 and with all convolutions except the first changed to $1\\!\\times\\!1$ to respect typical NCA restrictions~\\cite{palm2022variational,mordvintsev2020growing}.\n\n\\vspace{-.5\\baselineskip}\n\\subsection{Denoising autoencoding}\n\\vspace{-.5\\baselineskip}\n\\label{subsec:denoising_autoencoding}\n\nWe compare between our baseline models and a number of ViTCA variants in the context of denoising autoencoding. We present test set results across six benchmark datasets: a land cover classification dataset intended for representation learning (LandCoverRep) \\cite{yeh2021sustainbench}, MNIST, CelebA \\cite{liu2015faceattributes}, FashionMNIST, CIFAR10, and Tiny ImageNet (a subset of ImageNet \\cite{russakovsky2015imagenet}). All datasets consist of $32\\!\\times\\!32$ resampled images except Tiny ImageNet, which is at $64\\!\\times\\!64$ resolution. During testing, we use all masking combinations, chosen in a fixed order, and we update cells using a fixed number of iterations ($T\\!=\\!64$). See \\Tab{\\ref{tab:denoising_results}} for quantitative results.\n\nBriefly mentioned in \\Sec{\\ref{subsec:update_rule}}, we employ a masking strategy inspired by Curriculum Learning (CL) \\cite{wang2021survey,bengio2009curriculum} to ease training. This schedule follows a geometric progression of difficulty---tied to training iterations---maxing out at 10K training iterations. Specifically, masking starts at covering 25\\% of the input with $1\\!\\times\\!1$ patches of noise (dropout for RGB inputs, Gaussian for grayscale), then at each shift in difficulty, new masking configurations are added to the list of available masking configurations in the following order: $(2^0\\!\\times\\!2^0, 50\\%), (2^0\\!\\times\\!2^0, 75\\%), (2^1\\!\\times\\!2^1, 25\\%), (2^1\\!\\times\\!2^1, 50\\%), (2^1\\!\\times\\!2^1, 75\\%), ..., (2^2\\!\\times\\!2^2, 75\\%)$. Masking configurations are randomly chosen from this list.\n\nWe initialize weights\/parameters using He initialization \\cite{he2015delving}, except for the final layer of CA-based models, which are initialized to zero \\cite{mordvintsev2020growing}. Unless otherwise stated, we train for $I\\!=\\!100$K iterations, use a minibatch size $b\\!=\\!32$, AdamW optimizer \\cite{loshchilov2019decoupled}, learning rate $\\eta\\!=\\!10^{-3}$ with a cosine annealing schedule \\cite{loshchilov2017sgdr}, pool size \\poolsize\\ $\\!=\\!1024$, and cell hidden channel size $C_h\\!=\\!32$. In the case of Tiny ImageNet, $b\\!=\\!8$ to accommodate training on a single GPU (48GB Quadro RTX 8000). Training typically lasts a day at most, depending on the model. Due to the recurrent iterations required per training step, CA-based models take the longest to train. 
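As a concrete reference for the curriculum masking schedule described above, the sketch below builds the ordered list of (patch size, coverage) configurations and samples one according to the training iteration. Only the ordering of configurations and the 10K-iteration cap follow the text; the exact iteration at which each new configuration unlocks is not fully specified, so the geometric unlocking rule used here is an assumption, as is the dropout-style corruption in the usage example.

\begin{verbatim}
# Sketch of the curriculum masking schedule (unlocking rule is an assumption).
import math
import random
import torch

CONFIGS = [(1, 0.25), (1, 0.50), (1, 0.75),
           (2, 0.25), (2, 0.50), (2, 0.75),
           (4, 0.25), (4, 0.50), (4, 0.75)]
MAX_ITER = 10_000  # curriculum "maxes out" here

def num_unlocked(train_iter: int) -> int:
    # Geometric progression of difficulty: unlocked count grows with log of progress.
    if train_iter >= MAX_ITER:
        return len(CONFIGS)
    frac = math.log1p(train_iter) / math.log1p(MAX_ITER)
    return max(1, min(len(CONFIGS), 1 + int(frac * (len(CONFIGS) - 1))))

def sample_mask(train_iter: int, h: int, w: int):
    """Returns a (h, w) float mask with 1 marking corrupted pixels."""
    patch, coverage = random.choice(CONFIGS[:num_unlocked(train_iter)])
    gh, gw = h // patch, w // patch
    grid = (torch.rand(gh, gw) < coverage).float()   # which patches get corrupted
    mask = grid.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
    return mask, patch, coverage

# Usage: corrupt an RGB batch with dropout-style noise on the masked pixels.
if __name__ == "__main__":
    x = torch.rand(8, 3, 32, 32)
    mask, _, _ = sample_mask(train_iter=2_000, h=32, w=32)
    x_masked = x * (1.0 - mask)
\end{verbatim}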
To alleviate memory limitations for some of our experiments, we use gradient checkpointing \\cite{chen2016training} during CA iterations at the cost of backpropagation duration and slight variations in gradients due to its effect on round-off propagation. We also experiment with a cell fusion and mitosis scheme as an alternative. See Appendix \\ref{sec:appendix} for details on runtime performance, gradient checkpointing, and fusion and mitosis.\n\n\\input{tables\/denoising_results}\n\nAmongst baselines, ViTCA outperforms on most metrics across the majority of datasets used (10 out of 18). Exceptions include LandCoverRep, where UNetCA universally outperforms by a small margin, likely due to the texture-dominant imagery being amenable to convolutions. Notably, ViTCA strongly outperforms on MNIST. Although MNIST is a trivial dataset for common tasks such as classification, our masking\/noise strategy turns it into a challenging dataset for denoising autoencoding, \\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, it is difficult for even a human to classify a $32\\!\\times\\!32$ MNIST digit 75\\% corrupted by $4\\!\\times\\!4$ patches of Gaussian noise. We hypothesize that when compared to convolutional models, ViTCA's weaker inductive biases (owed to attention \\cite{yifan2021input,jaegle2021perceiver}) immediately outperform these models when there are large regions lacking useful features, \\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, MNIST digits cover a small space in the canvas. This is not the case with FashionMNIST, where the content is more filled out. Between baselines and ViTCA variants, ViTCA-32 (32 heads) and 32xy (xy-coordinate positional encoding) outperform all models by large margins, demonstrating the benefits of multi-head self-attention. We also experiment with a parameter-reduced (by $\\sim\\!60\\%$), inverted bottleneck variant where $d\\!=\\!64$ and \\mlp\\ size is 256, often with a minimal reduction in performance.\n\\vspace{-.5\\baselineskip}\n\\subsubsection{Ablation study}\n\\label{subsubsec:ablation_study}\n\\vspace{-.5\\baselineskip}\n\nIn \\Tab{\\ref{tab:ablation_results}} we perform an ablation study using the baseline ViTCA model above as reference on CelebA. Results are ordered in row-wise blocks, top-to-bottom. Specifically, we examine the impact of varying the cell hidden size $C_h$; the \\embed\\ size $d$; the number of \\mhsa\\ heads $h$; the depth (\\# encoders), comparing both ViTCA (used throughout the table) with ViT; and in the last block we examine the impact of various methods of incorporating positional information into the model.\n\n\\input{tables\/ablation_results}\n\nSpecifically, we examine the use of: (1) a xy-coordinate-based positional encoding \\emph{concatenated} (``injected'') to cells, and; (2) a Transformer-based positional encoding (or embedding, if learned) \\emph{added} into \\embed. 
These two categories are subdivided into: \n(1a) sincos5---consisting of handcrafted Fourier features \\cite{mildenhall2020nerf} with four doublings of a base frequency, \\emph{i.e}\\onedot, } \\def\\Ie{\\emph{I.e}\\onedot, \\pe\\ $\\!=\\!(\\sin{2^0\\pi p},$ $\\cos{2^0\\pi p},$ $...,\\sin{2^{J-1}\\pi p},$ $\\cos{2^{J-1}\\pi p})\\!\\in\\mathbb{R}^{N \\times (4JP_HP_W)}$ where $J\\!=\\!5$ and $p$ is the pixel coordinate (normalized to [-1,1]) for each pixel the cell is situated on (one pixel since $P_H\\!=\\!P_W\\!=\\!1$); \n(1b) sincos5xy---consisting of both Fourier features and explicit xy-coordinates concatenated; \n(1c) xy---only xy-coordinates; \n(2a) handcrafted (our baseline approach)---sinusoidal encoding \\pe\\ $\\!\\in\\!\\mathbb{R}^{N \\times d}$ similar to (1a) but following a Transformer-based approach \\cite{vaswani2017attention}, and; \n(2b) learned---learned embedding \\pe\\ $\\!\\in\\!\\mathbb{R}^{N \\times d}$ following the original ViT approach \\cite{dosovitskiy2020image}. \nTo further test the self-organizing capabilities of ViTCA, we also include: (3) none---no explicit positioning provided, where we let the cells localize themselves.\n\nAs shown in \\Tab{\\ref{tab:ablation_results}}, ViTCA benefits from an increase to most CA and Transformer-centric parameters, at the cost of computational complexity and\/or an increase in parameter count. A noticeable decrease in performance is observed when \\embed\\ size $d\\!=\\!512$, most likely due to the vast increase in parameter count necessitating more training. In the original ViT, multiple encoding blocks were needed before the model could exhibit performance equivalent to their baseline CNN \\cite{dosovitskiy2020image}, as verified in our ablation with our ViT. However, for ViTCA we notice an inverse relationship of the effect of Transformer depth, causing a divergence in cell state. It is not clear why this is the case, as we have observed that the LN layers and overflow losses otherwise encourage a contractive $F_\\theta$. This is an investigation we leave for future work. Despite the benefits of increasing $h$, we use $h\\!=\\!4$ for our baseline to optimize runtime performance. Finally, we show that ViTCA does not dramatically suffer when no explicit positioning is used---in contrast to typical Transformer-based models---as cells are still able to localize themselves by relying on their stored hidden information.\n\n\\vspace{-.5\\baselineskip}\n\\subsubsection{Cell state analysis}\n\\label{subsubsec:cell_state_analysis}\n\\vspace{-.5\\baselineskip}\n\nHere we provide an empirically-based qualitative analysis on the effects ViTCA and UNetCA have on cell states through several experiments with our pre-trained models (\\Fig{\\ref{fig:analysis} (a,b,c)}). We notice that in general, ViTCA indefinitely maintains cell state stability while UNetCA typically induces a divergence past a certain point. An extended analysis is available in Appendix \\ref{subsec:extended_analysis}.\n\n\\textbf{Damage resilience.}\nShown in \\Fig{\\ref{fig:analysis}} (a), we damage a random $H\/2\\!\\times\\!W\/2$ patch of cells with random values $\\sim\\!\\mathcal{U}(-1,1)$ twice in succession. ViTCA is able to maintain cell stability despite not being trained to deal with such noise, while UNetCA induces a divergence. Note both models are simultaneously performing the typical denoising task. 
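For reference, a minimal sketch of the damage operation used in this test (our reading of the description above, not the released code): a random $H/2\times W/2$ patch of the cell grid is overwritten with values drawn from $\mathcal{U}(-1,1)$, and in the experiment this is applied twice in succession between cell updates.

\begin{verbatim}
# Sketch of the cell-damage test: randomize one random half-size patch of cells.
import torch

def damage(cells: torch.Tensor) -> torch.Tensor:
    """cells: (B, C, H, W); returns a copy with one H/2 x W/2 patch randomized."""
    b, c, h, w = cells.shape
    ph, pw = h // 2, w // 2
    top = torch.randint(0, h - ph + 1, (1,)).item()
    left = torch.randint(0, w - pw + 1, (1,)).item()
    out = cells.clone()
    out[:, :, top:top + ph, left:left + pw] = \
        torch.empty(b, c, ph, pw).uniform_(-1.0, 1.0)
    return out
\end{verbatim}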
We also note that ViTCA's inherent damage resilience is in contrast to recent NCA formulations that required explicit training for it \\cite{palm2022variational,mordvintsev2020growing}.\n\n\\textbf{Convergence stability.}\n\\Fig{\\ref{fig:analysis}} (b) shows denoising results after 2784 cell grid updates. ViTCA is able to maintain a stable cell grid state while UNetCA causes cells to diverge.\n\n\\textbf{Hidden state visualizations.}\n\\Fig{\\ref{fig:analysis}} (c) shows 2D and 3D PCA dimensionality reductions on the hidden states of converged cell grids for all examples in FashionMNIST~\\cite{xiao2017fashion}. The clusters suggest some linear separability in the learned representation, motivating our probing experiments in \\Sec{\\ref{subsec:probing}}. \n\n\\input{figures\/analysis}\n\n\\subsubsection{Investigating update rule inductive biases}\n\\label{subsubsec:inductive_bias_investigation}\n\\vspace{-.5\\baselineskip}\n\nHere we investigate the inductive biases inherent in ViTCA and UNetCA by testing their adaptation to various environmental changes (\\Fig{\\ref{fig:analysis}} (d,e,f,g,h)).\n\n\\textbf{Adaptation to varying update rates.}\nDespite being trained with a $\\sigma\\!=\\!50\\%$ cell update rate, ViTCA is able to adapt to varying rates (\\Fig{\\ref{fig:analysis}} (d)). Higher rates result in a proportionally faster rate of cell state convergence, and equivalently with lower rates. UNetCA exhibits a similar relationship, although is unstable at $\\sigma\\!=\\!100\\%$ (see Appendix \\ref{subsec:extended_analysis}). For details comparing training with a synchronous \\emph{vs}\\onedot asynchronous cell grid update, see Appendix \\ref{subsec:extended_ablation}.\n\n\\textbf{Generalization to noise unseen during training.}\nViTCA is capable of denoising configurations of noise it has not been trained on. \\Fig{\\ref{fig:analysis}} (e; \\emph{left-to-right}): $4\\!\\times\\!1$ and $1\\!\\times\\!4$ patches of Gaussian noise at 65\\% coverage. In contrast, UNetCA induces a cell state divergence (see Appendix \\ref{subsec:extended_analysis}).\n\n\\textbf{Adaptation to changing inputs.}\nAt various moments during cell updates, we re-inject cells with new masked inputs (\\Fig{\\ref{fig:analysis}} (f)). ViTCA is able to consistently adapt cells to new inputs while UNetCA experiences difficulty past a certain point (\\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, at 464 iterations in the figure).\n\n\\textbf{Effects of not \\emph{vs}\\onedot completely masking input.}\n\\Fig{\\ref{fig:analysis}} (g; \\emph{left}): ViTCA is able to perform autoencoding despite not being trained for it. UNetCA induces a cell grid divergence (see Appendix \\ref{subsec:extended_analysis}). \\Fig{\\ref{fig:analysis}} (g; \\emph{right}): Interestingly, when the input is completely masked, ViTCA outputs the median image~\\cite{lehtinen2018noise2noise}. UNetCA does not exhibit such behaviour and instead causes cells to diverge (see Appendix \\ref{subsec:extended_analysis}). \n\n\\textbf{Spatial interpolation.}\nWe use ViTCA models trained at $32\\!\\times\\!32$ using various types of positioning to generate $128\\!\\times\\!128$ outputs during inference, assuming an identical cell grid resolution. 
\\Fig{\\ref{fig:analysis}} (h; \\emph{top-to-bottom of \\textcolor{vitcaPurple}{outputs}}): xy-coordinates, no positioning, Fourier features \\cite{mildenhall2020nerf}, Fourier features concatenated with xy-coordinates, and a Transformer-based handcrafted positional encoding (baseline) \\cite{vaswani2017attention}. Results are ordered from best to worst. The baseline approach is not capable of spatial interpolation due to being a 1D positioning, while, as expected, the 2D encodings make it capable. Surprisingly, removing Fourier features and using only xy-coordinates results in a higher fidelity interpolation. We believe this to be caused by the distracting amount of positional information Fourier features provide to cells, as cells can instead rely on their hidden states to store higher frequency positional information. Finally, with no explicit positioning, ViTCA is still able to perform high-quality interpolation---even exceeding using Fourier features---by taking advantage of its self-organizing nature. As a side note, we point attention to the fact that ViTCA is simultaneously denoising at a scale space it has not been trained on, exemplifying its generalization capabilities.\n\n\\vspace{-.5\\baselineskip}\n\\subsection{Investigating hidden representations via linear probes}\n\\vspace{-.5\\baselineskip}\n\\label{subsec:probing}\n\n\\input{tables\/linear_probe_results}\nHere we examine the learned representations of our models pre-trained for denoising. We freeze model parameters and learn linear classifiers on each of their learned representations: converged cell hidden states for CA-based models, bottleneck features for U-Net, and LN'd tokens for ViT. This is a common approach used to probe learned representations~\\cite{chen2020generative}. Classification results on MNIST, FashionMNIST, and CIFAR10 are shown in \\Tab{\\ref{tab:linear_probe_results}} and we use the same training setup as for denoising, but without any noise. For comparison, we also provide results using a linear classifier and two 2-layer MLPs of varying complexity, all trained directly on raw pixel values. Correlations between denoising performance in \\Tab{\\ref{tab:denoising_results}} and classification performance in \\Tab{\\ref{tab:linear_probe_results}} can be observed. Linear classification accuracy on ViTCA-based features typically exceeds classification accuracy using other model-based features or raw pixel values, even outperforming the MLPs in most cases.\n\n\n\n\n\n\n\n\\section{Introduction}\n\\vspace{-.5\\baselineskip}\n\\label{sec:introduction}\n\n\\input{figures\/vitca_vs_vit} Recent developments at the intersection of two foundational ideas---Artificial Neural Networks (ANNs) and Cellular Automata (CA)---have led to new approaches for constructing Neural Cellular Automata (NCA). These advances have integrated ideas such as variational inference \\cite{palm2022variational}, U-Nets \\cite{zhang2020learning}, and Graph Neural Networks (GNNs) \\cite{grattarola2021learning} with promising results on problems ranging from image synthesis \\cite{palm2022variational,niklasson2021self-organising,mordvintsev2021mu} to Reinforcement Learning (RL) \\cite{najarro2022hypernca,variengien2021towards}. 
Transformers are another significant development in deep learning \\cite{vaswani2017attention}, but, until now, have not been examined under an NCA setting.\n\nVision Transformers (ViTs) \\cite{dosovitskiy2020image} have emerged as a competitive alternative to Convolutional Neural Network (CNN) \\cite{lecun1998gradient} architectures for computer vision, such as Residual Networks (ResNets) \\cite{he2016deep}. ViTs leverage the self-attention mechanisms of original Transformers \\cite{vaswani2017attention}, which have emerged as the dominant approach for sequence modelling in recent years. Our work combines foundational ideas from Transformers and ViTs, leading to a new class of NCAs: \\textbf{Vision Transformer Cellular Automata (ViTCA)}. \n\nAn effective and ubiquitous Transformer-based learning technique for Natural Language Processing (NLP) pre-training is the unsupervised task of Masked Language Modelling (MLM), popularized by the BERT language model \\cite{devlin-etal-2019-bert}. The success of MLM-based techniques has similarly inspired recent work re-examining the classical formulation of Denoising Autoencoders (DAEs) \\cite{vincent2010stacked}, but for ViTs \\cite{bao2022beit,dosovitskiy2020image,chen2020generative}, introducing tasks such as Masked Image Encoding \\cite{he2021masked} and Masked Feature Prediction \\cite{wei2021masked} for image and video modelling, respectively. This simple yet highly-scalable strategy of masked-based unsupervised pre-training has yielded promising transfer learning results on vision-based downstream tasks such as object detection and segmentation, image classification, and action detection, even outperforming supervised pre-training~\\cite{he2021masked,wei2021masked}. We examine training methodologies for ViTCA within a DAE setting and perform extensive controlled experiments benchmarking these formulations against modern state of the art architectures, with favourable outcomes, \\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, \\Fig{\\ref{fig:vitca_vs_vit}}.\n\n\\input{figures\/teaser}\n\nOur contributions are as follows: \\textit{first}---to the best of our knowledge---our work is the first to extend NCA methodologies with key Transformer mechanisms, \\emph{i.e}\\onedot, } \\def\\Ie{\\emph{I.e}\\onedot, self-attention and positional encoding (and embedding), with the beneficial side-effect of circumventing the quadratic complexity of self-attention; \\textit{second}, our ViTCA formulation allows for lower model complexity (by limiting ViT depth) while retaining expressivity through CA iterations on a controlled state---all with the same encoder weights. This yields a demonstrably more parameter-efficient \\cite{mordvintsev2021mu} ViT-based model. Importantly, ViTCA mitigates the problems associated with the explicit tuning of ViT depth originally needed to improve performance (\\emph{i.e}\\onedot, } \\def\\Ie{\\emph{I.e}\\onedot, we use a depth of 1). With ViTCA, we simply iterate until cell state convergence. Since ViT (and by extension, ViTCA) employs Layer Normalization (LN) \\cite{ba2016layer} at each stage of its processing, it is a fairly contractive model capable of fixed-point convergence guarantees~\\cite{bai2019deep}.\n\nIn relation to our first contribution, ViTCA respects CA requirements, most importantly that computations remain localized about a cell and its neighbourhood. As such, we modify the global self-attention mechanisms of a ViT to respect this locality requirement (\\Fig{\\ref{fig:teaser}}). 
Localized self-attention is not a new idea \\cite{chen2022regionvit,liu2021swin,chu2021twins,zhang2021multi}; however, because cells contain state information that depends on its previous state, over CA iterations the effective receptive field of ViTCA's localized self-attention grows increasingly larger until eventually incorporating information implicitly across all cells. Thus, admitting global propagation of information from spatially localized self-attention. Moreover, due to the self-organizing nature of NCAs, self-organization also manifests itself within the localized self-attention, resulting in a globally agreed-upon arrangement of local self-attention. Thus, circumventing the quadratic complexity of explicit global self-attention (w.r.t\\onedot the input size) through a linear amortization over time, and increasing the feasibility of per-pixel dense processing (as we demonstrate). This globally consistent and complex behaviour, which arises from strictly local interactions, is a unique feature of NCAs and confers performance benefits which we observe both qualitatively and quantitatively when comparing ViT and ViTCA for denoising autoencoding. \n\n\\input{figures\/overview}\n\n\\section{Background and related work}\n\\vspace{-.5\\baselineskip}\n\\label{sec:related_work}\n\n\\paragraph{Neural Cellular Automata.}\n\nCellular Automata are algorithmic processes motivated by the biological behaviours of cellular growth and, as such, are capable of producing complex emergent (global) dynamics from the iterative application of comparatively simple (localized) rules~\\cite{von1966theory}. \\emph{Neural} Cellular Automata present a more general CA formulation, where the evolving cell states are represented as (typically low-dimensional) vectors and the update rule dictating their evolution is a differentiable function whose parameters are learned through backpropagation from a loss, rather than a handcrafted set of rules~\\cite{mordvintsev2020growing,gilpin2019cellular,wulff1992learning}.\nNeural net-based formulations of CAs in the NeurIPS community can be traced back to the early work of \\cite{wulff1992learning}, where only small and simple models were examined. Recent formulations of NCAs have shown that when leveraging the power of deep learning techniques enabled by advances in hardware capabilities---namely highly-parallelizable differentiable operations implemented on GPUs---NCAs can be tuned to learn surprisingly complex desired behaviour, such as semantic segmentation \\cite{sandler2020image}; common RL tasks such as cart-pole balancing \\cite{variengien2021towards}, 3D locomotion \\cite{najarro2022hypernca}, and Atari game playing \\cite{najarro2022hypernca}; and image synthesis~\\cite{palm2022variational,niklasson2021self-organising,mordvintsev2021mu}. 
Although these recent formulations rely on familiar compositions of convolutions and non-linear functions, it is important to highlight that NCAs are fundamentally not equivalent to ``very-deep'' CNNs (\\emph{vs}\\onedot~\\cite{gilpin2019cellular}), or any other feedforward architecture (\\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, ResNets \\cite{he2016deep}), particularly, in the same way that a Recurrent Neural Network (RNN) is not equivalent: CNNs and other feedforward architectures induce a directed \\textit{acyclic} computation graph (\\emph{i.e}\\onedot, } \\def\\Ie{\\emph{I.e}\\onedot, a finite impulse response), whereas NCAs (and RNNs) induce a directed \\textit{cyclic} computation graph (\\emph{i.e}\\onedot, } \\def\\Ie{\\emph{I.e}\\onedot, an infinite impulse response), where stateful data can additionally be manipulated using (learned) feedback loops and\/or time-delayed controls. As such, NCAs can be viewed as a type of RNN, and both (N)CAs and RNNs are known to be Turing complete~\\cite{christen2021automatic,cook2004universality,siegelmann1995computational,wulff1992learning}.\\footnote{In the case of (N)CAs, a Turing complete example is the \\emph{Rule 110} elementary CA \\cite{christen2021automatic,cook2004universality}}\n\n\\paragraph{Vision Transformers.}\nVision Transformers \\cite{dosovitskiy2020image} are an adaptation of Transformers \\cite{vaswani2017attention} to vision-based tasks like image classification. In contrast to networks built from convolutional layers, ViTs rely on \\emph{self-attention} mechanisms operating on tokenized inputs. Specifically, input images are divided into non-overlapping patches, then fed to a Transformer after undergoing a linear patch projection with an embedding matrix. While ViTs provide competitive image classification performance, the quadratic computational scaling of global self-attention limits their applicability in high-dimensional domains, \\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, per-pixel dense processing. Recent developments have attempted to alleviate such efficiency limitations \\cite{ali2021xcit,hudson2021generative,arnab2021vivit,fan2021multiscale}, one notable example being Perceiver IO \\cite{jaegle2021perceiver,yifan2021input} with its use of cross-attention. We refer interested readers to a comprehensive survey on ViTs~\\cite{khan2021transformers}.\n\n\n\n\n\n\n\n\n\\section{Vision Transformer Cellular Automata (ViTCA)}\n\\vspace{-.5\\baselineskip}\n\\label{sec:vitca}\n\nBuilding upon NCAs and ViTs, we propose a new class of \\emph{attention-based} NCAs formed using a spatially localized---yet globally organized---self-attention scheme. We detail an instance of this class, ViTCA, by first reviewing its backbone ViT architecture before describing the ``pool sampling''-based training process for the ViTCA update rule (see overview in \\Fig{\\ref{fig:overview}}).\n\n\\paragraph{Input tokenization.}\n\nViT starts by dividing a $C_i\\!\\times\\!H\\!\\times\\!W$ input image \\gt\\ into $N$ non-overlapping $P_H\\!\\times\\!P_W$ patches ($16\\!\\times\\!16$ in the original work~\\cite{dosovitskiy2020image}), followed by a linear projection of the flattened image patches with an embedding matrix $\\mathbf{E} \\in \\mathbb{R}^{L \\times d}$ (\\Fig{\\ref{fig:overview}} \\embed), where $L\\!=\\!C_iP_HP_W$, to produce initial tokens \\pretokens\\ $\\!\\in\\!\\mathbb{R}^{N \\times d}$. 
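As a rough illustration of this tokenization step (an illustration only, not the paper's code), the sketch below splits an image into non-overlapping $P_H\times P_W$ patches, flattens each patch, and applies a shared linear projection playing the role of the embedding matrix $\mathbf{E}$; with $P_H\!=\!P_W\!=\!1$, the setting used in the experiments, this reduces to a per-pixel linear embedding.

\begin{verbatim}
# Sketch of ViT-style patch tokenization with a shared linear embedding.
import torch
import torch.nn as nn

def tokenize(img: torch.Tensor, proj: nn.Linear, ph: int, pw: int) -> torch.Tensor:
    """img: (B, C, H, W) -> tokens: (B, N, d), N = (H/ph)*(W/pw), L = C*ph*pw."""
    b, c, h, w = img.shape
    patches = img.unfold(2, ph, ph).unfold(3, pw, pw)   # (B, C, H/ph, W/pw, ph, pw)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * ph * pw)
    return proj(patches)                                 # linear patch embedding E

if __name__ == "__main__":
    ph = pw = 1                                          # per-pixel tokens
    c, d = 3, 128
    proj = nn.Linear(c * ph * pw, d)
    tokens = tokenize(torch.rand(2, c, 32, 32), proj, ph, pw)
    print(tokens.shape)                                  # torch.Size([2, 1024, 128])
\end{verbatim}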
Next, a handcrafted positional encoding \\cite{vaswani2017attention} or learned positional embedding \\pe\\ $\\in\\!\\mathbb{R}^{N \\times d}$ \\cite{dosovitskiy2020image} is added to tokens to encode positional information and break permutation invariance. Finally, a learnable class token is appended to the token sequence, resulting with \\tokens\\ $\\!\\in\\!\\mathbb{R}^{(N+1) \\times d}$. For the purposes of our task, we omit this token in all ViT-based models. In ViTCA, the input to the embedding is a flattened cell grid \\cells\\ $\\!\\in\\!\\mathbb{R}^{N \\times L}$ where $L\\!=\\!C_PP_HP_W+C_h$, $C_P\\!=\\!C_i+C_o+C_{\\peplain}$,\\, $C_h$ is the cell hidden size,\\, $C_o$ is the number of output image channels (one or three for grayscale or RGB), and $C_{\\peplain}$ is the positional encoding size when positional encoding is (optionally) concatenated to each cell rather than added to the tokens \\cite{mildenhall2020nerf}.\n\n\\paragraph{Multi-head self-attention (MHSA).}\n\nGiven a sequence of tokens \\tokens, self-attention estimates the relevance of one token to all others (\\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, which image patches are likely to appear together in an image) and aggregates this global information to update each token. This encodes each token in terms of global contextual information, and does so using three learned weight matrices: $\\mathbf{W}_Q\\!\\in\\!\\mathbb{R}^{d \\times d}$, $\\mathbf{W}_K\\!\\in\\!\\mathbb{R}^{d \\times d}$, and $\\mathbf{W}_V\\!\\in\\!\\mathbb{R}^{d \\times d}$. \\tokens\\ is projected onto these weight matrices to obtain Queries $\\mathbf{Q}\\!=\\!$ \\tokens$\\mathbf{W}_Q$, Keys $\\mathbf{K}\\!=\\!$ \\tokens$\\mathbf{W}_K$, and Values $\\mathbf{V}\\!=\\!$ \\tokens$\\mathbf{W}_V$. The self-attention layer output $\\sa\\!\\in\\!\\mathbb{R}^{N \\times d}$ is:\n\n\\input{equations\/sa}\n\n\\emph{Multi-head} self-attention employs many sets of weight matrices, $\\{\\mathbf{W}_{Q_i},$ $\\mathbf{W}_{K_i},$ $\\mathbf{W}_{V_i}\\!\\in\\!\\mathbb{R}^{d \\times (d\/h)}\\!\\mid$ $i\\!=\\!0,...,(h-1)\\}$. The outputs of $h$ self-attention \\emph{heads} are concatenated into $(\\sa_0,$ $...,$ $\\sa_{h-1})\\!\\in\\!\\mathbb{R}^{N \\times d}$ and projected onto a weight matrix $\\mathbf{W}\\!\\in\\!\\mathbb{R}^{d \\times d}$ to produce $\\mhsa\\!\\in\\!\\mathbb{R}^{N \\times d}$. Self-attention explicitly models global interactions and is more flexible than grid-based operators (\\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, convolutions) \\cite{perez2019turing,cordonnier2019relationship}, but its quadratic cost in time and memory limits its applicability to high resolution images.\n\n\\paragraph{Spatially localizing self-attention.}\n\nThe global nature of self-attention directly conflicts with the spatial locality constraint of CAs; in response, we limit the connectivity structure of the attention operation to each cell's neighbourhood. 
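As a rough, self-contained sketch of the masking-based realization discussed next (our illustration, not the released implementation), the code below computes standard multi-head self-attention and masks the $N\times N$ attention logits before the softmax so that each cell attends only to its $3\times3$ Moore neighbourhood; this is equivalent to zeroing the disallowed entries of $\mathbf{A}$ and renormalizing. Toroidal wrap-around at the grid boundary and all module/function names are assumptions of this sketch.

\begin{verbatim}
# Sketch: multi-head self-attention restricted to a 3x3 Moore neighbourhood.
import torch
import torch.nn as nn

def moore_mask(h: int, w: int, nh: int = 3, nw: int = 3) -> torch.Tensor:
    """Boolean (N, N) mask, N = h*w, True where attention is allowed
    (row-major, i.e. top-left-to-bottom-right flattening)."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()
    dy = (ys[:, None] - ys[None, :]).abs()
    dx = (xs[:, None] - xs[None, :]).abs()
    dy = torch.minimum(dy, h - dy)   # wrap-around distance (assumed toroidal grid)
    dx = torch.minimum(dx, w - dx)
    return (dy <= nh // 2) & (dx <= nw // 2)

class LocalizedMHSA(nn.Module):
    def __init__(self, d: int, heads: int):
        super().__init__()
        self.h, self.dk = heads, d // heads
        self.qkv = nn.Linear(d, 3 * d, bias=False)
        self.out = nn.Linear(d, d, bias=False)

    def forward(self, tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        b, n, d = tokens.shape
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.h, self.dk).transpose(1, 2) for t in (q, k, v))
        logits = q @ k.transpose(-2, -1) / self.dk ** 0.5   # (B, heads, N, N)
        logits = logits.masked_fill(~mask, float("-inf"))   # keep only the neighbourhood
        return self.out((logits.softmax(-1) @ v).transpose(1, 2).reshape(b, n, d))

if __name__ == "__main__":
    h = w = 8
    attn = LocalizedMHSA(d=128, heads=4)
    out = attn(torch.rand(2, h * w, 128), moore_mask(h, w))
    print(out.shape)                                         # torch.Size([2, 64, 128])
\end{verbatim}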
This can be accomplished by either masking each head's attention matrix ($\\mathbf{A}\\!=\\!\\texttt{softmax}(\\cdots) \\in \\mathbb{R}^{N \\times N}$ in Eq.\\ \\ref{eq:sa}) with a banded matrix representing local connectivity (\\emph{e.g}\\onedot, } \\def\\Eg{\\emph{E.g}\\onedot, \\Fig{\\ref{fig:overview}} \\localize), or more efficiently,\n\n\\input{equations\/localize_attn} Here, we assume top-left-to-bottom-right input flattening.\nInstead of explicitly computing the global self-attention matrix $\\mathbf{A}\\!\\in\\!\\mathbb{R}^{N \\times N}$ then masking it, this approach circumvents the $\\mathcal{O}(N^2d)$ computation in favour of an $\\mathcal{O}(N\\!M\\!d)$ alternative that indexes the necessary rows and columns \\emph{during} self-attention. The result is a localized self-attention matrix $\\mathbf{A}^{\\!\\star}\\!\\in\\!\\mathbb{R}^{N \\times M}$, where $M\\!=\\!N_HN_W\\!\\ll\\!N$. As we show in our experiments, ViTCA is still capable of global self-attention despite its localization, by leveraging stored state information across cells and their global self-organization during CA iterations (\\Fig{\\ref{fig:teaser}}).\n\nFollowing \\mhsa\\ is a multilayer perceptron (\\Fig{\\ref{fig:overview}} \\mlp) with two layers and a GELU non-linearity. We apply Layer Normalization (LN) \\cite{ba2016layer} before \\mhsa\\ and \\mlp, and residual connections afterwards, forming a single encoding block. We use an MLP head (\\Fig{\\ref{fig:overview}} \\mlphead) to decode to a desired output, with LN applied to its input, finalizing the ViTCA update rule $F_\\theta$. In our experiments, ViT's \\mlphead\\ decodes directly into an image output whereas ViTCA decodes into update vectors added to cells.\n\n\\vspace{-.5\\baselineskip}\n\\subsection{Update rule training procedure}\n\\vspace{-.5\\baselineskip}\n\\label{subsec:update_rule}\n\nTo train the ViTCA update rule, we follow a ``pool sampling''-based training process \\cite{palm2022variational,mordvintsev2020growing} along with a curriculum-based masking\/noise schedule when corrupting inputs. During odd training iterations, we uniformly initialize a minibatch of cells \\cells\\ $\\!=\\!(\\cellsplain_1,...,\\cellsplain_b)$ with constant values (0.5 for output channels, 0 for hidden---see Appendix \\ref{subsec:extended_ablation} for alternatives), then inject the masked input \\maskedinput\\ (see \\Sec{\\ref{subsec:denoising_autoencoding}}). After input injection, we asynchronously update cells ($\\sigma\\!=\\!50\\%$ update rate) using $F_\\theta$ for $T\\!\\sim\\!\\mathcal{U}\\{8,32\\}$ recurrent iterations. We retrieve output \\cellsoutput\\ from the cell grid and apply an $L_1$ loss against the ground truth \\gt. We also apply overflow losses to penalize cell output values outside of [0,1] and cell hidden values outside of [-1,1]. We use $L_2$ normalization on the gradient of each parameter in $\\theta$. After backpropagation, we append the updated cells and their ground truths to a pool \\pool\\ which we then shuffle and truncate up to the first \\poolsize\\ elements. During even training iterations, we retrieve a minibatch of cells and their ground truths from \\pool\\ and process them as above. This encourages $F_\\theta$ to guide cells towards a stable fixed-point. \\Alg{\\ref{alg:vitca_training}} in Appendix \\ref{sec:appendix} details this process.","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{ACKNOWLEDGMENT}\nThis work is supported by the National Key program for S\\&T Research and\nDevelopment (No. 
2019YFA0307700 and No. 2016YFA0401100), the National Natural Science Foundation\nof China (Nos. 11504215, 11774387, 11874246, 11834015, 11974383), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB21010400), and the Science and Technology Department of Hubei Province (No. 2019CFA035).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\nOwing to the lack of channel reciprocity in the frequency division duplex (FDD) mode, most proposals for FDD massive MIMO systems \nenvision some sort of feedback from the user equipment for downlink channel state acquisition at the base stations \\cite{ad2013,dec2015,miretti18,miretti18SPAWC,xie2016overview,hag2018multi,dai2018}. If traditional channel feedback mechanisms are employed, and the number of antennas is large, downlink channel estimation in FDD massive MIMO systems may incur a prohibitive feedback overhead. Therefore, approaches for reducing this overhead have received a great deal of attention in recent years. A promising approach is to design downlink pilots based on information about the downlink covariance matrix, which is estimated from the uplink covariance matrix \\cite{ad2013,dec2015,miretti18,miretti18SPAWC,xie2016overview,hag2018multi,dai2018}. In this study, we address this estimation problem, hereafter called the uplink-downlink conversion problem.\n\nAlthough channel reciprocity is lost in FDD massive MIMO systems, estimating the downlink covariance matrix from the uplink covariance matrix is possible by exploiting a different form of reciprocity, the so-called reciprocity of the angular power spectrum \\cite{miretti18,miretti18SPAWC,xie2016overview,hag2018multi}. The basic assumption behind this form of reciprocity is that the average receive\/transmit power at a unit of angle (hereafter called {\\it angular power spectrum}) at an antenna array is frequency invariant because, if the frequency separation is not too large, the scattering environments are the same for both the uplink and downlink channels. Building upon this characteristic of wireless channels, researchers have proposed to estimate the angular power spectrum from the uplink covariance matrix, with the intent to use this estimate in models of the antenna array to recover the downlink covariance matrix \\cite{miretti18,miretti18SPAWC,hag2018multi,dai2018} .\n\nIn particular, the algorithms in \\cite{miretti18,miretti18SPAWC} estimate the angular power spectrum with set-theoretic methods that can easily include side information written in terms of closed convex sets in a Hilbert space. Despite working in possibly infinite dimensional spaces, one of the approaches in \\cite{miretti18,miretti18SPAWC} have shown that good uplink-downlink conversion performance can be obtained with a very simple matrix-vector multiplication. In that scheme with remarkably low computational complexity, the matrix is computed only once for the entire system lifetime, and the vector is constructed by rearranging the components of the uplink covariance matrix to be converted. If additional information about the angular power spectrum is used, the studies in \\cite{miretti18,miretti18SPAWC} have also shown that the conversion performance of set-theoretic approaches can be further improved with simple fixed point algorithms that do not appeal to finite-dimensional approximations of the physical models. 
These approaches can easily take into account the polarization of antennas and real-world impairments (e.g., dissimilarities of the antennas in the array), but performance bounds on the conversion performance have not been considered in \\cite{miretti18,miretti18SPAWC}. \n\n\nMore recently, by using ideal antenna models, the study in \\cite{hag2018multi} has proved that, in uplink-downlink channel covariance conversion based on algorithms that first estimate the angular power spectrum (such as those in \\cite{miretti18,miretti18SPAWC}), some of the components of the downlink covariance matrix can be reliably reconstructed. Based on this observation, that study has derived a scheme in which the angular power spectrum is first estimated by using solvers for nonnegative least square problems (this first step can be interpreted as a finite-dimensional approximation of a particular case of \\cite[Algorithm~2]{miretti18}). This estimate is then used to reconstruct the downlink covariance matrix, and, based on a formal analysis of the reliability of the reconstruction, the authors of \\cite{hag2018multi} have proposed to set to zero the components of the downlink covariance matrix that are not guaranteed to be reliably estimated, in an approach called \\emph{truncation}. However, setting to zero these components is a somewhat heuristic approach that can actually decrease the performance of the reconstruction in some scenarios, as the simulations in \\cite{hag2018multi} have already shown. Furthermore, the reliability analysis does not seem easy to extend to realistic propagation and antenna models such as those in \\cite{miretti18SPAWC}, or to cases where the antenna array response is measured to mitigate modeling errors.\n\n\n\nInspired by the findings in \\cite{hag2018multi}, we derive novel bounds on the reconstruction error of each component of the downlink covariance matrix. Unlike previous results, the proposed performance bounds do not assume any particular antenna array or propagation model. They are based on elementary arguments in Hilbert spaces, and they can be easily applied to the realistic models in \\cite{miretti18SPAWC} (without any changes) or to cases where the array response is measured. Furthermore, the bounds provide insights to improve set-theoretic algorithms. In particular, we show that, if information about the support of the angular power spectrum is used appropriately, the simplest of the algorithms for uplink-downlink conversion in \\cite{miretti18,miretti18SPAWC} can be so effective that performance improvements obtained with more advanced conversion mechanisms are marginal at best (NOTE: the proposed bounds can also be used in the analysis of these complex mechanisms). This result is particularly appealing because that simple algorithm has no parameters to be tuned, and all steps of the enhancements we propose are justified by rigorous arguments.\n\nThis study is structured as follows. In Sect.~\\ref{sect.preliminaries} we introduce the main mathematical concepts, and we prove a simple result (Proposition~\\ref{prop.general_bound}) that is the main mathematical tool used to derive the novel bounds. The general result in Sect.~\\ref{sect.preliminaries} is specialized to the problem of uplink-downlink covariance matrix conversion in Sect.~\\ref{sect.error_bounds}, which also discusses how to exploit the proposed bounds to enhance set-theoretic methods (see Sect.~\\ref{sect.side_info}). 
To keep the presentation as general as possible, we do not assume any particular antenna array or propagation model in Sect.~\\ref{sect.error_bounds}. A concrete application of the theory developed here is shown in Sect.~\\ref{sect.ULA}.\n\n\n\n\\section{Mathematical preliminaries}\n\\label{sect.preliminaries}\nIn the following, we use ${\\mathbb R}_+$ to indicate nonnegative reals. The (coordinate-wise) real and imaginary components of complex vectors or matrices are given by, respectively, $\\mathrm{Real}({\\cdot})$ and $\\mathrm{Imag}(\\cdot)$, and $i$ is the imaginary unit, which satisfies $i^2=-1$. Given a matrix $\\signal{M}\\in{\\mathbb R}^{T\\times N}$, $\\mathrm{vec}(\\signal{M})\\in{\\mathbb R}^{TN}$ is the vector obtained by stacking the columns of $\\signal{M}$, and $\\signal{M}^\\dagger$ is the Moore-Penrose (pseudo-)inverse of $\\signal{M}$. We denote by $\\mathcal{H}$ a real Hilbert space with inner product $\\innerprod{\\cdot}{\\cdot}$ and norm $\\|\\cdot\\|=\\sqrt{\\innerprod{\\cdot}{\\cdot}}$. Given ${x}\\in\\mathcal{H}$ and a set $C\\subset\\mathcal{H}$, we define $x+C:=\\{h+x \\in\\mathcal{H}~|~h\\in C\\}$. A {\\it linear variety} $V\\subset\\mathcal{H}$ is a set that can be expressed as $V=x+M$ for a vector $x\\in\\mathcal{H}$ and a subspace $M\\subset\\mathcal{H}$; i.e., $V$ is a translation of the subspace $M$. If $C\\subset \\mathcal{H}$ is a nonempty closed convex set, the projection $P_C:\\mathcal{H}\\to~C$ maps an arbitrary vector $x\\in \\mathcal{H}$ to the unique solution to the optimization problem $\\mathrm{min.}_{y\\in C} \\|x-y\\|$. The orthogonal complement of a subset $C\\subset \\mathcal{H}$ is the closed subspace given by $C^\\perp:=\\{y\\in\\mathcal{H}~|~(\\forall x\\in C)~\\innerprod{x}{y}=0\\}$, and note that $M\\oplus M^\\perp=\\mathcal{H}$ and $(M^\\perp)^\\perp = M$ for any closed subspace $M$. The closure of a set $C\\subset\\mathcal{H}$ is denoted by $\\overline{C}$. A set $S=\\{x_1,\\ldots,x_N\\}\\subset\\mathcal{H}$ is called {\\it linearly independent} (respectively, dependent) if the vectors $x_1,\\ldots,x_N$ are linearly independent (respectively, dependent). Given a function $f:\\Omega\\to {\\mathbb R}$ with $\\Omega\\subset{\\mathbb R}^N$, we define its support to be the set $\\mathrm{Supp}(f)=\\{x\\in\\Omega~|~f(x)\\neq 0\\}$. By $L^2(\\Omega)$, $\\Omega\\subset {\\mathbb R}^N$, we denote the space of real-valued square-integrable functions $f:\\Omega\\to{\\mathbb R}$ with respect to the standard Lebesgue measure. \n \n \n Below is a summary of standard results in convex analysis that we use throughout this study. The proofs can be found in most standard references on convex and functional analysis (e.g., \\cite{yukawa2010,luen}).\n\n\\begin{fact} \n\t\\label{fact.basic}\n\t\\begin{itemize}\n\t\t\\item[(i)] Let $M\\subset \\mathcal{H}$ be a closed linear subspace. Then $(\\forall x\\in\\mathcal{H})~ x=P_M(x)+P_{M^\\perp} (x)$.\n\t\tFurthermore, the projection $P_M:\\mathcal{H}\\to M$ onto $M$ is a bounded linear operator with operator norm given by $\\|P_M\\|_\\mathrm{o}:=\\sup_{\\|x\\|=1}{\\|P_M(x)\\|}\\le 1$, and the equality is achieved if $M\\neq\\{0\\}$.\n\t\t\\item[(ii)] Let $M\\subset\\mathcal{H}$ be a closed subspace. For a given $u\\in\\mathcal{H}$, consider the closed linear variety $V=u+M$. 
Then $(\\forall x\\in V)~V=x+M$ and $x=P_V(0)+P_M(x)$.\n\t\t\\item[(iii)] Let $V=\\cap_{k=1}^K\\{x\\in\\mathcal{H}~|~\\innerprod{x}{v_k}=b_k\\}\\neq\\emptyset$, where $(v_k,~b_k)\\in\\mathcal{H}\\times{\\mathbb R}$ for each $k\\in\\{1,\\ldots,K\\}$. Then we have $(\\forall u\\in V)~V=u+{M}^\\perp$, where $M=\\mathrm{span}\\{v_1,\\ldots, v_K\\}$.\n\t\t\n\t\n\t\\end{itemize} \n\\end{fact}\n\n\n\nIn the application described in the next section, we study the performance of algorithms producing an estimate $\\widetilde{\\rho}\\in V$ of $\\rho\\in V$, where $V$ is a closed linear variety generated by the translation of the orthogonal complement $M^\\perp$ of a finite dimensional subspace $M$ (NOTE: the closed subspace $M^\\perp$ can be infinite dimensional). In that application, we are not directly interested in the error $\\|\\widetilde{\\rho}-\\rho\\|$, but in the approximation of $\\innerprod{\\rho}{y}$ by $\\innerprod{\\widetilde{\\rho}}{y}$ for a given $y\\in\\mathcal{H}$. The absolute error of the approximation is given by $e:=\\left|\\innerprod{\\widetilde{\\rho}-\\rho}{y}\\right|$, and we show in Proposition~\\ref{prop.general_bound} elementary bounds for $e$ that decouples into the product of two terms. The first term depends on the choice of the algorithm. The second term is algorithm independent, and it depends on system parameters captured by the subspace $M\\subset\\mathcal{H}$. \n\n\\begin{proposition}\n\\label{prop.general_bound} Let $M\\subset\\mathcal{H}$ be a closed subspace, and consider the linear variety $V=\\rho+M^\\perp$ for a given $\\rho\\in\\mathcal{H}$. Suppose that an algorithm produces an estimate $\\widetilde{\\rho}\\in {V}$ of $\\rho\\in {V}$. Then each of the following holds:\n\\begin{enumerate}\n\t\\item[(i)] $(\\forall y\\in \\mathcal{H})$ \n\t\\begin{multline}\n\t\t\\label{eq.decoupled_bound}\n\t\t\\left|\\innerprod{\\widetilde{\\rho}-\\rho}{y}\\right|\\le \\|\\widetilde{\\rho} - \\rho\\|~ \\|y-P_{M}(y)\\| \\\\ = \\|P_{M^\\perp}(\\widetilde{\\rho}) - P_{M^\\perp}(\\rho)\\|~ \\|y-P_{M}(y)\\| \n\t\\end{multline}\n\t\\item[(ii)] In particular, if $\\widetilde{\\rho}=P_V(0)$, then $(\\forall y\\in \\mathcal{H})$\n\t\\begin{multline}\n\t\t\\label{eq.bound_algo1}\n\t\t\\left|\\innerprod{\\widetilde{\\rho}-\\rho}{y}\\right|\\le \\|\\rho - P_{M}(\\rho)\\|~ \\|y-P_{M}(y)\\| \n\t\\end{multline}\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n\t(i) Let $y\\in\\mathcal{H}$ be arbitrary. By assumption, both $\\widetilde{\\rho}$ and $\\rho$ are elements of the linear variety $V$, so we have \n\t\\begin{multline}\n\t\\label{eq.error_x}\n\t\\widetilde{\\rho}-\\rho = P_{M^\\perp}(\\widetilde{\\rho})+P_V(0)-P_{M^\\perp}(\\rho)-P_V(0) \\\\ = P_{M^\\perp}(\\widetilde{\\rho})-P_{M^\\perp}(\\rho)= P_{M^\\perp}(\\widetilde{\\rho}-\\rho)\n\t\\end{multline}\n\t by Fact~\\ref{fact.basic}(i)-(ii). From $y=P_{M}(y)+P_{M^\\perp}(y)$ (Fact~\\ref{fact.basic}(i)) and the definition of orthogonal complements, we obtain \n\t\\begin{multline*}\n\t\\left|\\innerprod{\\widetilde{\\rho}-\\rho}{y}\\right| = \\left|\\innerprod{P_{M^\\perp}(\\widetilde{\\rho}-\\rho)}{P_{M}(y)+P_{M^\\perp}(y)}\\right| \\\\ = \\left|\\innerprod{P_{M^\\perp}(\\widetilde{\\rho}-\\rho)}{P_{M^\\perp}(y)}\\right|. \n\t\\end{multline*}\n\tA direct application of the Cauchy-Schwartz inequality yields $\\left|\\innerprod{\\widetilde{\\rho}-\\rho}{y}\\right| \\le \\|P_{M^\\perp}(\\widetilde{\\rho}-\\rho)\\|~ \\|P_{M^\\perp}(y)\\|$. 
The result in \\refeq{eq.decoupled_bound} now follows from \n$P_{M^\\perp}(y) = y - P_{M}(y)$ (Fact~\\ref{fact.basic}(i)) and \\refeq{eq.error_x}.\n\n\t\n\t(ii) By Fact~\\ref{fact.basic}(i)-(ii) and $\\widetilde{\\rho}=P_V(0)$, we deduce $P_{M^\\perp}(\\widetilde{\\rho}) = 0$. Now use the equality $\\rho=P_{M^\\perp}(\\rho)+P_{M}(\\rho)$ (Fact~\\ref{fact.basic}(i)) in \\refeq{eq.decoupled_bound} to obtain the desired result.\n\\end{proof}\n\n\n Proposition~\\ref{prop.general_bound} has a very natural interpretation. If $\\|\\widetilde{\\rho}-\\rho\\|$ is not too large, then \\refeq{eq.decoupled_bound} shows that the error $e:=\\left|\\innerprod{\\widetilde{\\rho}-\\rho}{y}\\right|$ is small if there exists a vector $u\\in M$ that is sufficiently close to $y\\in\\mathcal{H}$ with respect to the metric $d(u,y)=\\|u-y\\|$. In this case, the choice of the algorithm used to produce $\\widetilde{\\rho}\\in V$ does not play a decisive role in the minimization of the error $e$. Proposition~\\ref{prop.general_bound}(ii) shows the guaranteed performance bound of a simple algorithm for estimating $\\rho$. For this scheme, the error $e$ is small if $\\rho$ or $y$, or both, can be well approximated by vectors in the subspace $M$. \n\n\n\\section{Error bounds for uplink-downlink conversion in FDD MIMO systems}\n\\label{sect.error_bounds}\n\nIn this section, we apply the results of Sect.~\\ref{sect.preliminaries} to the problem of covariance matrix conversion in FDD massive MIMO systems, which, as mentioned in the introduction, we call the uplink-downlink conversion problem. We first describe the problem in Sect.~\\ref{sect.system_model}, and then we proceed to tailor the bounds in Proposition~\\ref{prop.general_bound} to our particular application in Sect.~\\ref{sect.bounds_array}. In Sect.~\\ref{sect.side_info}, we show how the bounds can be used to improve the approaches in \\cite{miretti18,miretti18SPAWC}. To keep the discussion as general as possible, we do do not assume any particular array geometry or propagation model in this section. \n\n\\subsection{The uplink-downlink conversion problem}\n\\label{sect.system_model}\n We consider a single-cell flat-fading wireless system, in which a base station equipped with N antennas exchanges data with a single-antenna user. In the uplink, the base station first estimates the uplink channel covariance matrix $\\signal{R}_\\mathrm{u} = E[\\signal{h}_\\mathrm{u} \\signal{h}_\\mathrm{u}^H] \\in \\mathbb{C}^{N\\times N}$ from samples of the uplink channel $\\signal{h}_\\mathrm{u}$ and any prior knowledge of this covariance matrix. In the uplink-downlink conversion problem, samples $\\signal{h}_\\mathrm{d}\\in\\mathbb{C}^N$ of the downlink channel are not available at the base station, and the objective is to obtain an estimate of the downlink channel covariance matrix $\\signal{R}_\\mathrm{d} = E[\\signal{h}_\\mathrm{d} \\signal{h}_\\mathrm{d}^H]\\in \\mathbb{C}^{N\\times N}$ directly from the estimate of $\\signal{R}_\\mathrm{u}$. The main challenge for the conversion in FDD MIMO systems is the lack of channel reciprocity. The uplink and downlink channels use different frequencies, so their statistics are also different, which in turn implies that $\\signal{R}_\\mathrm{u}\\neq\\signal{R}_\\mathrm{d}$. However, $\\signal{R}_\\mathrm{d}$ and $\\signal{R}_\\mathrm{u}$ are related. 
In particular, estimating $\\signal{R}_\\mathrm{d}$ from $\\signal{R}_\\mathrm{u}$ is possible by using the so-called reciprocity of the angular power spectrum, which we now formally describe.\n \n For typical frequency separation gaps, the real and imaginary parts of each component of $\\signal{R}_\\mathrm{u}$ and $\\signal{R}_\\mathrm{d}$ can be seen as the result of an inner product in an infinite dimensional Hilbert space $(\\mathcal{H},\\innerprod{\\cdot}{\\cdot})$, with the vectors and the inner product taking a particular form that depends on system parameters such as the antenna polarization, array geometry, and the propagation model, among others \\cite{miretti18,miretti18SPAWC}. More precisely, let $r_{\\mathrm{u}, k}\\in{\\mathbb R}$ and $r_{\\mathrm{d}, k}\\in{\\mathbb R}$ denote one of the $k\\in\\mathcal{I}:=\\{1,\\ldots, 2N^2\\}$ components of, respectively, $[\\mathrm{Real}(\\signal{R}_\\mathrm{u})~\\mathrm{Imag}(\\signal{R}_\\mathrm{u})]\\in {\\mathbb R}^{N\\times 2N}$ and $[\\mathrm{Real}(\\signal{R}_\\mathrm{d})~\\mathrm{Imag}(\\signal{R}_\\mathrm{d})]\\in {\\mathbb R}^{N\\times 2N}$. It has been shown in \\cite{miretti18,miretti18SPAWC} that, for each $k\\in\\mathcal{I}$ and for a given inner product $\\innerprod{\\cdot}{\\cdot}$ that depends on the system model, we have \n \\begin{align}\n \\label{eq.upr}\n r_{\\mathrm{u}, k}=\\innerprod{\\rho}{g_{\\mathrm{u}, k}}\n \\end{align}\n and \n \\begin{align}\n \\label{eq.dlr}\n r_{\\mathrm{d}, k}=\\innerprod{\\rho}{g_{\\mathrm{d}, k}},\n \\end{align}\n where $\\rho\\in\\mathcal{H}$ is the unknown frequency independent function called {\\it angular power spectrum}, and $g_{\\mathrm{u}, k}\\in\\mathcal{H}$ and $g_{\\mathrm{d}, k}\\in\\mathcal{H}$ are known uplink and downlink functions related to the antenna array responses (see Sect.~\\ref{sect.ULA} for a concrete example). Intuitively, the angular power spectrum shows the average angular power density that an array receives from the user at a given azimuth (and possibly elevation) angle. In the literature \\cite{miretti18,miretti18SPAWC,xie2016overview,hag2018multi}, it is assumed to be the same for both the uplink and downlink channels, which is the phenomenon we call reciprocity of the angular power spectrum.\n \n With the above explanations, we can summarize the set-theoretic approaches in \\cite{miretti18,miretti18SPAWC} (and some of the approaches in \\cite{hag2018multi}) to the uplink-downlink conversion problem with the following two steps:\n \\begin{itemize}\n \t\\item[(i)] We first obtain an estimate $\\widetilde{\\rho}\\in\\mathcal{H}$ of $\\rho\\in\\mathcal{H}$ from the equations $r_{\\mathrm{u}, k}=\\innerprod{\\rho}{g_{\\mathrm{u}, k}}$ ($k\\in\\mathcal{I}$), and possibly known properties of $\\rho\\in\\mathcal{H}$, by using set-theoretic methods.\n \t\\item[(ii)] With $\\widetilde{\\rho}\\in\\mathcal{H}$, we obtain an estimate $\\widetilde{r}_{\\mathrm{d}, k}\\in{\\mathbb R}$ of ${r}_{\\mathrm{d}, k}\\in{\\mathbb R}$ for each $k\\in\\mathcal{I}$ by computing $\\widetilde{r}_{\\mathrm{d}, k}=\\innerprod{\\widetilde{\\rho}}{g_{\\mathrm{d}, k}}$.\n \\end{itemize}\n One of the contributions of the next section is to derive conditions guaranteeing that, given $k\\in\\mathcal{I}$, the estimate $\\widetilde{r}_{\\mathrm{d},k}$ of ${r}_{\\mathrm{d},k}$ in step (ii) is accurate even if the estimate $\\widetilde{\\rho}$ of $\\rho$ in step (i) is inaccurate. 
These conditions will be used to derive simple techniques to improve set-theoretic methods addressing the problem in step (i).\n\n\n \n \n \\subsection{Bounds for the error of UL-DL covariance conversion with general arrays}\n \\label{sect.bounds_array}\n We now derive performance bounds for the set-theoretic approaches described above. To this end, we assume that the uplink covariance matrix $\\signal{R}_\\mathrm{u}$ (and hence $(r_{\\mathrm{u}, k})_{k\\in\\mathcal{I}}$) is perfectly estimated. By recalling that covariance matrices have structure (they are at least Hermitian), the number of different equations in \\refeq{eq.upr} and \\refeq{eq.dlr} is strictly less than $|\\mathcal{I}|=2N^2$ ($|\\mathcal{I}|$ denotes the cardinality of $\\mathcal{I}$). Therefore, many repeating equations can be removed, but for brevity this simple operation is not considered in this section. \n\nTo proceed with the bounds, we define \n\\begin{align}\n\\label{eq.setSp}\nS^\\prime=\\{g_{\\mathrm{u},1},\\ldots, g_{\\mathrm{u},|\\mathcal{I}|}\\}\\subset\\mathcal{H}\n\\end{align}\nto be the set corresponding to the uplink functions in \\refeq{eq.upr}. The angular power spectrum $\\rho\\in\\mathcal{H}$ is related to $S^\\prime$ by the fact that, from \\refeq{eq.upr} and Fact.~\\ref{fact.basic}(ii)-(iii), we have $\\rho\\in V^\\prime:=\\cap_{k\\in\\mathcal{I}}\\{x \\in\\mathcal{H}~|~\\innerprod{x}{g_{\\mathrm{u}, k}} = r_{\\mathrm{u}, k}\\}=\\rho+\\mathrm{span}(S^\\prime)^\\perp$. Intuitively, the linear variety $V^\\prime\\subset\\mathcal{H}$ is the set containing all angular power spectrum functions that produce the same uplink covariance matrix.\n\nTo include in the analysis any prior information about $\\rho\\in\\mathcal{H}$ expressed in terms of closed linear varieties or closed subspaces, we assume that\n\\begin{align}\n\\label{eq.Vdprime}\n\\rho\\in V^{\\prime\\prime}:=\\cap_{k=1}^Q\\{x\\in\\mathcal{H}~|~\\innerprod{x}{v_k}=b_k\\} \n\\end{align}\nand that the tuples $\\{(v_k, b_k)\\}_{k=1,\\ldots,Q}\\subset \\mathcal{H}\\times {\\mathbb R}$ used to construct the linear variety $V^{\\prime\\prime}$ ($V^{\\prime\\prime}$ is a subspace if $b_1=\\ldots=b_Q=0$) are known. We now define $V:=V^\\prime\\cap V^{\\prime\\prime}$ and construct a new set $S\\subset\\mathcal{H}$ containing all vectors in $S^\\prime$ in \\refeq{eq.setSp} and all vectors in $S^{\\prime\\prime}:= \\{v_{1},\\ldots, v_{Q}\\}$; i.e., \n\\begin{align}\n\\label{eq.setS}\nS:=S^\\prime\\cup S^{\\prime\\prime} \\subset\\mathcal{H}.\n\\end{align}\n\nBy recalling that a nonempty intersection of closed linear varieties is a closed linear variety, the set $V=V^\\prime \\cap V^{\\prime\\prime}\\ni\\rho$ defined above is a closed linear variety $V$ that can be equivalently written as $V=\\rho+M^\\perp$, where $M\\subset\\mathcal{H}$ is the closed subspace $M:=\\mathrm{span}(S)$. With these operations, we are now exactly in the setting of Proposition~\\ref{prop.general_bound}. 
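\nFor a concrete illustration of this setting, the following minimal Python sketch (ours, not part of the original development; it uses NumPy and a randomly generated finite-dimensional toy problem, where orthogonal projectors can be formed explicitly) verifies the bounds in Proposition~\\ref{prop.general_bound} for the minimum-norm estimate $\\widetilde{\\rho}=P_V(0)$:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, K = 50, 8                        # toy dimensions (arbitrary)\nS = rng.standard_normal((K, n))     # rows play the role of the vectors spanning M\nrho = rng.standard_normal(n)        # "true" rho; V = {x : S x = S rho} = rho + M^perp\n\n# Orthogonal projector onto M = span of the rows of S (A A^dagger projects onto range(A))\nPM = S.T @ np.linalg.pinv(S.T)\n\n# Minimum-norm element of V, i.e., the finite-dimensional analogue of P_V(0)\nrho_tilde = np.linalg.lstsq(S, S @ rho, rcond=None)[0]\n\ny = rng.standard_normal(n)\nerr = abs((rho_tilde - rho) @ y)\n# Proposition (i) and (ii): both right-hand sides must dominate the error\nprint(err <= np.linalg.norm(rho_tilde - rho) * np.linalg.norm(y - PM @ y) + 1e-10)\nprint(err <= np.linalg.norm(rho - PM @ rho) * np.linalg.norm(y - PM @ y) + 1e-10)\n\\end{verbatim}\nIn the infinite-dimensional setting considered here, projectors cannot be formed explicitly as above, which is one reason for the Gram-matrix computation described next.\n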
Before we proceed with the specialization of this proposition to the problem of uplink-downlink conversion, we first show that the projection $P_M:\\mathcal{H}\\to M$ is easy to compute numerically.\n\nTo simplify the notation, denote by $x_1, \\ldots, x_{L}$ the $L=|\\mathcal{I}|+Q$ vectors in the set $S$ in \\refeq{eq.setS}, and define the following matrix:\n\n\\begin{align}\n\\label{eq.matrixg}\n\\signal{G} = \\left[\\begin{matrix}\\innerprod{x_1}{x_1}&\\cdots&\\innerprod{x_1}{x_{L}}\\\\\n\\vdots&\\ddots&\\vdots \\\\\n\\innerprod{x_{L}}{x_1} & \\cdots & \\innerprod{x_{L}}{x_{L}}\n\\end{matrix}\\right] \\in{\\mathbb R}^{L \\times L}.\n\\end{align}\n\n\nWith the above definitions, we can use arguments similar to those in \\cite[Ch. 3]{luen}\\footnote{Here we do not assume $S$ to be a linearly independent set.} to show that the projection from $y\\in\\mathcal{H}$ onto the closed subspace $M$ is given by:\n\\begin{align}\n\\label{eq.projm}\nP_M:\\mathcal{H}\\to M:y\\mapsto \\sum_{k=1}^{L}\\alpha_k x_{k},\n\\end{align}\nwhere $\\signal{\\alpha}=[\\alpha_1,\\ldots,\\alpha_{L}]^T \\in{\\mathbb R}^{L}$ is any solution to $\\signal{G}\\signal{\\alpha}=\\signal{z}$, and $\\signal{z}=[\\innerprod{x_1}{y} \\ldots \\innerprod{x_{L}}{y}]^T\\in{\\mathbb R}^{L}.$\n\n\nAs will soon become clear, in the proposed performance bounds we are especially interested in the approximation error $\\|g_{\\mathrm{d}, k}-P_M(g_{\\mathrm{d}, k})\\|$ for each $k\\in\\mathcal{I}$, which can be easily computed as shown in the following standard result. We omit the proof for brevity, but it can be easily obtained by using \\refeq{eq.projm}.\n\n\\begin{proposition}\n\t\\label{proposition.error_qdk} Let $P_M(y)\\in M=\\mathrm{span}(\\{x_1,\\ldots,x_{L}\\})$ be the approximation in the subspace $M$ of an arbitrary vector $y\\in\\mathcal{H}$. Then the approximation error $\\|y-P_M(y)\\|$ is given by \n\t\\begin{align*}\n\t \\|y-P_M(y)\\|=\\sqrt{(\\|y\\|^2 - \\signal{z}^T \\signal{G}^{\\dagger} \\signal{z})},\n\t \\end{align*}\n\t where $\\signal{z}=\\signal{G}\\signal{\\alpha}\\in{\\mathbb R}^L$, $\\signal{\\alpha}\\in{\\mathbb R}^L$, and $\\signal{G}\\in{\\mathbb R}^{L\\times L}$ are as defined above.\n\\end{proposition}\n\nNow, let\n\\begin{align}\n\\label{eq.qmatrix}\n\\signal{Q} := \\left[\\begin{matrix}\\innerprod{x_1}{g_{\\mathrm{d},1}}&\\cdots&\\innerprod{x_1}{g_{\\mathrm{d},|\\mathcal{I}|}}\\\\\n\\vdots&\\ddots&\\vdots \\\\\n\\innerprod{x_{L}}{g_{\\mathrm{d},1}} & \\cdots & \\innerprod{x_{L}}{g_{\\mathrm{d},|\\mathcal{I}|}}\n\\end{matrix}\\right] \\in{\\mathbb R}^{L\\times |\\mathcal{I}|}.\n\\end{align}\nThe proposed error bounds for uplink-downlink conversion are shown in the next corollary. \n\n\\begin{Cor}\n\t\\label{cor.general_bound} Denote by $\\signal{q}_k\\in{\\mathbb R}^{L}$ the $k$th column of the matrix $\\signal{Q}$ in \\refeq{eq.qmatrix}, and let $\\signal{G}$ be as defined in \\refeq{eq.matrixg}. Suppose that $\\widetilde{\\rho}\\in V$ is an estimate of the angular power spectrum $\\rho\\in V$ obtained by a given algorithm, where $V=\\rho+M^\\perp$ is the linear variety defined above, all elements of which produce the same uplink covariance matrix, and $M\\subset\\mathcal{H}$ is the closed subspace $M=\\mathrm{span}(S)$ with $S$ as defined in \\refeq{eq.setS}. Further, assume that $\\|\\rho\\|\\le B$ for some $B\\in{\\mathbb R}$. 
Let $\\innerprod{\\widetilde{\\rho}}{g_{\\mathrm{d}, k}}=\\widetilde{r}_{\\mathrm{d}, k}$ be the estimate of the $k$th ($k\\in\\mathcal{I}$) component $r_{\\mathrm{d}, k}$ of the downlink covariance matrix $\\signal{R}_\\mathrm{d}$. Then the estimation error $e_k := |\\widetilde{r}_{\\mathrm{d}, k}-r_{\\mathrm{d}, k}|$ for each $k\\in\\mathcal{I}$ satisfies the following: \n\t\\begin{itemize}\n\t\t\\item[(i)] If the algorithm used to produce the estimate $\\widetilde{\\rho}\\in V$ also guarantees $\\|\\widetilde{\\rho}\\|\\le B$, then \n\t\t\\begin{multline*}\n\t\t\t(\\forall k\\in\\mathcal{I})~ e_k \\le \\|\\rho-\\widetilde{\\rho}\\|~ \\|g_{\\mathrm{d},k}-P_M(g_{\\mathrm{d},k})\\| \\\\ \\le 2B \\sqrt{(\\|g_{\\mathrm{d}, k}\\|^2 - \\signal{q}_k^T \\signal{G}^{\\dagger} \\signal{q}_k)}.\n\t\t\\end{multline*} \n\t\t\n\t\t\\item[(ii)] Using $\\widetilde{\\rho}=P_V(0)$ as the estimate of the angular power spectrum $\\rho$, we have $(\\forall k\\in\\mathcal{I})$\n\t\t\\begin{multline}\n\t\t\\label{eq.bound_cor}\n\t\te_k \\stackrel{(a)}{\\le} \\|\\rho-P_M(\\rho)\\| ~ \\|g_{\\mathrm{d},k}-P_M(g_{\\mathrm{d},k})\\|\n\t\t\\\\ \\stackrel{(b)}{\\le} \\|\\rho\\|~\\|g_{\\mathrm{d},k}-P_M(g_{\\mathrm{d},k})\\| \\stackrel{(c)}{\\le} B \\sqrt{(\\|g_{\\mathrm{d}, k}\\|^2 - \\signal{q}_k^T \\signal{G}^{\\dagger} \\signal{q}_k)}.\n\t\t\\end{multline} \n\t\\end{itemize}\n\\end{Cor}\n\\begin{proof} \n\t\nThe proof of (i) is immediate from Proposition~\\ref{prop.general_bound}(i), Proposition~\\ref{proposition.error_qdk}, and the triangle inequality. To prove (ii), we note that the inequality in $(a)$ follows from Proposition~\\ref{prop.general_bound}(ii), the inequality in $(b)$ follows from $\\|\\rho-P_M(\\rho)\\|=\\|P_{M^\\perp}(\\rho)\\|\\le \\|P_{M^\\perp}\\|_\\mathrm{o}~\\|\\rho\\| \\le \\|\\rho\\|$ [see Fact~\\ref{fact.basic}(i)], and the inequality in $(c)$ follows from Proposition~\\ref{proposition.error_qdk} and the assumption $\\|\\rho\\|\\le B$.\n\\end{proof}\n\n\\subsection{Improving the performance of the conversion with information about the support of the angular power spectrum}\n\\label{sect.side_info}\n\nOne of the practical implications of Corollary~\\ref{cor.general_bound} is that, for a given $k\\in\\mathcal{I}$, any algorithm producing an estimate $\\widetilde{\\rho}\\in V$ of $\\rho\\in V$ is able to approximate reliably the components $r_{\\mathrm{d}, k}$ of the downlink covariance matrix $\\signal{R}_{\\mathrm{d}}$ provided that the term $\\|g_{\\mathrm{d},k}-P_M(g_{\\mathrm{d},k})\\|$ is sufficiently small, regardless of how challenging the scenario for the estimation of $\\rho$ may be. By recalling that the projection $P_M(g_{\\mathrm{d},k})\\in\\mathcal{H}$ can be interpreted as the best approximation of $g_{\\mathrm{d},k}$ in the closed subspace $M$, adding to the subspace $M$ functions as similar as possible to $g_{\\mathrm{d},k}$ is a natural idea to decrease the estimation error bound $2B\\|g_{\\mathrm{d},k}-P_M(g_{\\mathrm{d},k})\\|$ of $r_{\\mathrm{d}, k}$. In the discussion below, we show a simple technique to design $M$ based on this principle.\n\n\n\nThe subspace $M$ is by definition the span of $S=S^\\prime\\cup S^{\\prime\\prime}$, where $S^\\prime=\\{g_{\\mathrm{u},1},\\ldots,g_{\\mathrm{u},|\\mathcal{I}|}\\}$ is the set of uplink functions and $S^{\\prime\\prime}=\\{v_1,\\ldots,v_Q\\}$ is the set of functions resulting from any prior knowledge about $\\rho$ (see \\refeq{eq.Vdprime}). 
Therefore, a simple means to include in the subspace $M$ functions that are close to each of the downlink functions $(g_{\\mathrm{d},k})_{k\\in\\mathcal{I}}$ is to make the uplink functions $(g_{\\mathrm{u},k})_{k\\in\\mathcal{I}}$ as similar as possible to the downlink functions $(g_{\\mathrm{d},k})_{k\\in\\mathcal{I}}$. Alternatively, we can also include in the set $S^{\\prime\\prime}$ functions that are as similar as possible to the downlink functions $(g_{\\mathrm{d},k})_{k\\in\\mathcal{I}}$ (NOTE: including $(g_{\\mathrm{d},k})_{k\\in\\mathcal{I}}$ directly while guaranteeing $\\rho\\in V$ is difficult). The first approach, which corresponds to the design of uplink and downlink functions, may not be always possible because it typically entails changes in hardware (e.g., changes in the inter-antenna spacing) or other modifications in standardized system parameters (e.g., operating frequencies). Therefore, here we focus on the second approach; namely, the construction of an appropriate set $S^{\\prime\\prime}$, or, equivalently, the corresponding linear variety or subspace $V^{\\prime\\prime}$ in \\refeq{eq.Vdprime}. To derive the sets, we further assume the following:\n\n\\begin{itemize}\n\t\\item[(A1)] The angular power spectrum $\\rho\\in\\mathcal{H}$, the downlink functions $(g_{\\mathrm{d},k})_{k\\in \\mathcal{I}}\\subset\\mathcal{H}$, and the uplink functions $(g_{\\mathrm{u},k})_{k\\in \\mathcal{I}}\\subset\\mathcal{H}$ are functions in a Hilbert space of functions in $L^2(\\Omega)$, or, as in \\cite{miretti18SPAWC}, a Hilbert space $\\mathcal{H}$ of tuples in $L^2(\\Omega)\\times L^2(\\Omega)$ (NOTE: extensions to different Hilbert spaces is straightforward),~\\footnote{In these Hilbert spaces, which are used in Sect.~\\ref{sect.ULA}, we typically work with classes of equivalent functions, with the equivalence relation between two functions $f$ and $g$ defined by $f\\sim g\\Leftrightarrow \\|f-g\\|=0$. Equalities such as $f=g$ should be understood as equalities between the classes, not to the particular functions (in a pointwise sense) because $f$ and $g$ can differ, for example, in a countable set in their domains.} where $\\Omega\\subset{\\mathbb R}^K$. \\\\\n\t\\item[(A2)] There exists a known measurable set $C_\\mathrm{S}\\subset \\Omega$ such that $\\mathrm{Supp}(\\rho)\\subset C_\\mathrm{S}\\neq \\emptyset$ for the angular power spectrum functions $\\rho\\in\\mathcal{H}$ that can be observed in the system. Intuitively, the set $C_\\mathrm{S}$ is a superset of $\\mathrm{Supp}(\\rho)$ for which $\\theta\\notin C_\\mathrm{S}$ implies $\\rho(\\theta)=0$.\n\\end{itemize}\nAssumption A1 is very natural. It is satisfied in many realistic models representing the angular power spectrum in practical systems. In these models, the set $\\Omega$ has the interpretation of azimuth and elevation angles \\cite{miretti18,miretti18SPAWC}. Assumption A2 is system dependent, but it may be valid in scenarios where signals of users impinging on the antenna array are not likely to have any significant power at certain angles, which are used for the construction of $C_\\mathrm{S}$.\n\nWe now proceed to show how support information of $\\rho$ can be used to design the subspace $M$ by using arguments that have a strong theoretical justification. 
To this end, consider the closed subspace\n \\begin{align*}\n\\mathcal{K}:=\\overline{{\\{x\\in\\mathcal{H}~|~(\\forall\\theta\\in C_\\mathrm{S})~ x(\\theta)=0\\}}}.\n\\end{align*}\n The projection $P_\\mathcal{K}:\\mathcal{H}\\to\\mathcal{K}$ from $v\\in\\mathcal{H}$ onto $\\mathcal{K}$ is the function given by (we omit the proof for brevity):\n\\begin{align*}\n\\mathcal{H}\\ni P_{\\mathcal{K}}(v):\\Omega\\to{\\mathbb R}:\\theta\\mapsto\\begin{cases}\n0,&\\text{if }\\theta\\in C_{\\mathrm{S}}, \\\\\n v(\\theta) & \\text{otherwise}.\n\\end{cases}\n\\end{align*}\nSince $P_{\\mathcal{K}}(\\rho) = 0$ from the assumption $\\mathrm{Supp}(\\rho) \\subset C_\\mathrm{S}$ and the definition of the subspace $\\mathcal{K}$, we have $\\rho\\in \\mathcal{K}^\\perp$, and thus\n\\begin{align} \n\\label{eq.support_info}\n(\\forall v\\in\\mathcal{H})\\innerprod{P_\\mathcal{K}(v)}{\\rho}=0.\n\\end{align}\n In particular, using the downlink functions as the function $v$ in \\refeq{eq.support_info} yields\n\\begin{align}\n\\label{eq.projgk}\n(\\forall k\\in\\mathcal{I})\\innerprod{P_\\mathcal{K}(g_{\\mathrm{d},k})}{\\rho}=0.\n\\end{align}\nWe have now reached the point to show the closed subspace $V^{\\prime\\prime}$ we propose to represent the prior knowledge about the support of $\\rho$. More precisely, in light of \\refeq{eq.projgk}, we use \n\\begin{align*}\nV^{\\prime\\prime}=\\cap_{k\\in\\mathcal{I}}\\{y\\in\\mathcal{H}~|~\\innerprod{y}{v_k}=0\\}\\ni \\rho,\n\\end{align*}\n where $v_k:=P_\\mathcal{K}(g_{\\mathrm{d},k})$ for each $k\\in \\mathcal{I}$. This choice is intuitively appealing because we add to the set $S$ in \\refeq{eq.setS} all vectors $(v_k)_{k\\in{\\mathcal{I}}}$ in $\\mathcal{K}$ that best approximate (with respect to the metric $d(x,y)=\\|x-y\\|$) the downlink functions $(g_{\\mathrm{d},k})_{k\\in{\\mathcal{I}}}$, and we recall from the above discussion that, for each $k\\in \\mathcal{I}$, the estimation error $|\\widetilde{r}_{\\mathrm{d},k}-r_{\\mathrm{d},k}|$ decreases as the ability to represent $g_{\\mathrm{d},k}$ with functions in $M=\\mathrm{span}(S)$ improves. Note that we could further improve the reliability of the conversion by repeating the above procedure to include in $S$ additional functions of the form $P_\\mathcal{K}(v)$ with $v\\in\\mathcal{H}$ [e.g., the functions $(P_\\mathcal{K}(g_{\\mathrm{u},k}))_{k\\in\\mathcal{I}}$]. Alternatively, we could also change the definition of inner products to consider only functions in $L^2(C_\\mathrm{S})$. These approaches can be numerically unstable in large antenna arrays if the information about the support of $\\rho$ is erroneous and appropriate mitigation techniques are not applied, but we leave this discussion to a future study because of the space limitation. \n\n \n\n\n\nAll the above improvements are available for the simple approach using $\\widetilde{\\rho}=P_V(0)$ as the estimate of $\\rho$. This approach is particularly interesting because, as shown in the study in \\cite{miretti18}, which has not considered the enhancements discussed above, the whole process of estimating the angular power spectrum and using this estimate to reconstruct the downlink covariance matrix can be done with a simple matrix-vector multiplication. This important feature is not lost with the enhancements proposed in this subsection. 
More precisely, denote by $\\widetilde{\\signal{R}}_\\mathrm{d}$ the estimate of the downlink covariance matrix $\\signal{R}_\\mathrm{d}$, and recall that $\\widetilde{r}_{\\mathrm{d},1} = \\innerprod{P_V(0)}{g_{\\mathrm{d},1}},\\ldots,\\widetilde{r}_{\\mathrm{d},|\\mathcal{I}|}=\\innerprod{P_V(0)}{g_{\\mathrm{d},|\\mathcal{I}|}}$ represent the real and imaginary parts of the components of $\\widetilde{\\signal{R}}_\\mathrm{d}$. By ordering the functions $(g_{\\mathrm{d},k})_{k\\in\\mathcal{I}}$ appropriately, and by mimicking the steps in \\cite[Sect~3.1]{miretti18}, we verify that uplink-downlink channel covariance conversion can be performed with the following simple linear operation:\n\n\\begin{align}\n\\label{eq.algorithm1}\n\\mathrm{vec}[\\mathrm{Real}(\\widetilde{\\signal{R}}_\\mathrm{d})~\\mathrm{Imag}(\\widetilde{\\signal{R}}_\\mathrm{d})] = \\signal{Q}^T\\signal{G}^{\\dagger}\\underline{\\signal{r}}=\\signal{A}{\\signal{r}},\n\\end{align}\nwhere $\\underline{\\signal{r}} = [{\\signal{r}}^T, 0,\\ldots, 0]^T\\in{\\mathbb R}^{L}$, ${\\signal{r}}:=[r_{\\mathrm{u},1},\\ldots,r_{\\mathrm{u},|\\mathcal{I}|}]^T\\in{\\mathbb R}^{|\\mathcal{I}|}$, and $\\signal{A}\\in{\\mathbb R}^{|\\mathcal{I}|\\times |\\mathcal{I}|}$ is the matrix obtained by keeping only the first $|\\mathcal{I}|$ columns of the matrix $\\signal{Q}^T\\signal{G}^{\\dagger}$. Note that these matrices depend on only the support information and the array response, so they need to be computed only once.\n\nBefore we finish this section, it is also worth noticing that, by increasing the subspace $M$ with support information about $\\rho$ as described above, we also decrease the algorithm error term $\\|\\rho-P_M(\\rho)\\|$ in the bound (a) in \\refeq{eq.bound_cor}, thus further improving the reliability of the conversion.\n\n \n \\section{Example: Uniform linear arrays}\n \\label{sect.ULA}\n\n \nWe now further specialize the results in the previous section to uniform linear arrays. 
This particular choice enables us to relate the analysis in the previous sections to existing results in the literature that, unlike our approaches, do not seem easy to extend to schemes exploiting information about the structure of the angular power spectrum or to systems where the functions $(g_{\\mathrm{u},k})_{k\\in\\mathcal{I}}$ and $(g_{\\mathrm{d},k})_{k\\in\\mathcal{I}}$ are determined by measurements instead of models.\n\n\\subsection{System Model and bounds without support information} \n \tIn a uniform linear array with $N$ antennas, under very mild assumptions \\cite{haghighatshoar2017massive,xie2016overview}, the uplink and downlink channel covariance matrices for typical frequency gaps are given by $\\signal{R}_\\mathrm{u}:=\\signal{R}(f_\\mathrm{u})\\in\\mathbb{C}^{N\\times N}$ and $\\signal{R}_\\mathrm{d}:=\\signal{R}(f_\\mathrm{d})\\in\\mathbb{C}^{N\\times N}$, where $f_\\mathrm{u}\\in{\\mathbb R}_+$ and $f_\\mathrm{d}\\in{\\mathbb R}_+$ are, respectively, the uplink and downlink frequencies; \n \t\\begin{align*}\n \t\\signal{R}(f) = \\int_{-\\pi\/2}^{\\pi\/2} \\rho(\\theta) \\signal{a}(\\theta, f)\\signal{a}(\\theta, f)^H\\mathrm{d}\\theta\n \t\\end{align*}\n \t(the integral should be understood coordinate-wise) is the channel covariance matrix for a given frequency $f$;\t${\\rho:[-\\pi\/2, \\pi\/2]\\to{\\mathbb R}_+}$ is the angular power spectrum; \n \t\\begin{align}\n \t\\label{eq.integral}\n \t\\begin{array}{rl}\n \t\\signal{a}:[-\\frac{\\pi}{2}, \\frac{\\pi}{2}]\\times{\\mathbb R}_+\\to&\\mathbb{C}^N \\\\\n \t(\\theta, f)\\mapsto&\\left[1,e^{i2\\pi \\frac{f}{c}d\\sin\\theta},\\ldots,e^{i2\\pi \\frac{f}{c}d (N-1)\\sin\\theta}\\right]\n \t\\end{array}\n \t\\end{align}\n \tis the array response for a given angle $\\theta$ and frequency $f$; $c$ is the speed of the wave propagation; and $d$ is the inter-antenna spacing. \n \t\n \tIn real physical systems, we can safely assume that $\\rho$ is an element of the Hilbert space $(\\mathcal{H},\\innerprod{\\cdot}{\\cdot})$ of Lebesgue (real) square-integrable functions $\\mathcal{H}=L^2([-\\pi\/2,\\pi\/2])$ equipped with the inner product $(\\forall \\rho \\in\\mathcal{H})(\\forall g\\in\\mathcal{H})\\innerprod{\\rho}{g}=\\int_{-\\pi\/2}^{\\pi\/2}\\rho(\\theta)g(\\theta)\\mathrm{d}\\theta$. 
As a result, by fixing $f_\\mathrm{u}$, in light of \\refeq{eq.integral} the functions $(g_{\\mathrm{u},k})_{k\\in \\{1,\\ldots,2N^2\\}}$ in \\refeq{eq.upr} are obtained from the equality $(\\forall\\theta\\in[-\\pi\/2,\\pi\/2])$\n \t\\begin{multline}\n \t\\label{eq.ordering}\n \t[g_{\\mathrm{u},1}(\\theta),\\ldots,g_{\\mathrm{u},2N^2}(\\theta)]^T = \\\\ \\mathrm{vec}\\left(\\left[\\begin{matrix}\\mathrm{Real}(\\signal{a}(\\theta, f_\\mathrm{u})\\signal{a}(\\theta, f_\\mathrm{u})^H)\\\\ \\mathrm{Imag}(\\signal{a}(\\theta, f_\\mathrm{u})\\signal{a}(\\theta, f_\\mathrm{u})^H)\\end{matrix}\\right]\\right).\n \t\\end{multline}\n \tThe downlink functions $(g_{\\mathrm{d},k})_{k\\in\\{1,\\ldots,2N^2\\}}$ in \\refeq{eq.dlr} are obtained analogously by considering the downlink frequency $f_\\mathrm{d}$ in \\refeq{eq.ordering}.\n \t\n \t In uniform linear arrays, the covariance matrices are Hermitian and Toeplitz \\cite{haghighatshoar2017massive,miretti18,hag2018multi}, so, with the ordering in \\refeq{eq.ordering}, we can consider only the functions $g_{\\mathrm{u},1},\\ldots,g_{\\mathrm{u},2N}$ responsible for the first column of $\\signal{R}_\\mathrm{u}$ because knowledge of this column is enough to reconstruct all elements of $\\signal{R}_\\mathrm{u}$. For the same reason, we use only the downlink functions $g_{\\mathrm{d},1},\\ldots,g_{\\mathrm{d},2N}$. By doing so, the set $S^\\prime$ in \\refeq{eq.setSp} is given by $S^\\prime=\\{g_{\\mathrm{u},1},\\ldots, g_{\\mathrm{u},2N}\\}\\subset\\mathcal{H}$, and we can redefine the index set $\\mathcal{I}$ accordingly; i.e., $\\mathcal{I}:=\\{1,\\ldots, 2N\\}$. \n \t \n \t Without any information about the support of $\\rho$, we have $S=S^\\prime$, and we can use the results in \\cite[Sect.~4.1]{miretti18} to compute the algorithm-independent term $\\|g_{\\mathrm{d},k}-P_M(g_{\\mathrm{d},k})\\|$ ($k\\in\\mathcal{I}$) of the bounds in Corollary~\\ref{cor.general_bound} by using the Bessel function of the first kind of order zero, which we denote by $J_0:{\\mathbb R}\\to{\\mathbb R}$. 
In particular, the bound in the last inequality in Corollary~\\ref{cor.general_bound}(ii) reduces to\n \t\\begin{align}\n \t \t \\label{bound.specific}\n (\\forall k\\in \\mathcal{I})~ \t e_k\\le B\n\\sqrt{(\\|g_{\\mathrm{d},k}\\|^2 - \\signal{q}_k^T \\signal{G}^{\\dagger} \\signal{q}_k)},\n \t\\end{align}\n \twhere \n \t\\begin{multline*}\n\t \t\\|g_{\\mathrm{d},k}\\|^2=\\\\ \\begin{cases}\n\t \t\\dfrac{\\pi}{2} \\left(1+J_0\\left(4\\pi \\dfrac{f_\\mathrm{d}}{c} d (k-1) \\right)\\right)&\\text{ if } 1\\le k \\le N \\\\\n\t\t\\dfrac{\\pi}{2} \\left(1-J_0\\left(4\\pi \\dfrac{f_\\mathrm{d}}{c} d (k-N-1) \\right)\\right)&\\text{ otherwise, } \n\t \t\\end{cases}\n\t\\end{multline*}\n\\begin{align*}\n\\signal{G}=\\dfrac{\\pi}{2}\\left[\\begin{matrix}\n\\signal{G}_\\mathrm{r}& \\signal{0} \\\\ \n\\signal{0}& \\signal{G}_\\mathrm{j}\n\\end{matrix}\\right], \\signal{Q}=[\\signal{q}_1,\\ldots,\\signal{q}_{2N}]=\\dfrac{\\pi}{2}\\left[\\begin{matrix}\n\\signal{Q}_\\mathrm{r}& \\signal{0} \\\\ \n\\signal{0}& \\signal{Q}_\\mathrm{j}\n\\end{matrix}\\right],\n\\end{align*}\nand the components of the $n$th row and $m$th column of the matrices $\\signal{G}_\\mathrm{r},\\signal{G}_\\mathrm{j},\\signal{Q}_\\mathrm{r},\\signal{Q}_\\mathrm{j}\\in{\\mathbb R}^{N\\times N}$ are given by $\\signal{G}_{\\mathrm{r},nm}=J_0(x_{nm})+J_0(y_{nm})$, $\\signal{G}_{\\mathrm{j},nm}=J_0(x_{nm})-J_0(y_{nm})$, $\\signal{Q}_{\\mathrm{r},nm}=J_0(p_{nm})+J_0(q_{nm})$,\n$\\signal{Q}_{\\mathrm{j},nm}=J_0(p_{nm})-J_0(q_{nm})$ with $$x_{nm}=2\\pi~d~\\dfrac{f_\\mathrm{u}}{c}(n-m),~y_{nm}=2\\pi~d~\\dfrac{f_\\mathrm{u}}{c}(n+m-2),$$ $$p_{nm}=2\\pi d\\left(\\dfrac{f_\\mathrm{u}(n-1)}{c}-\\dfrac{f_\\mathrm{d}(m-1)}{c}\\right),$$\nand $$q_{nm}=2\\pi d\\left(\\dfrac{f_\\mathrm{u}(n-1)}{c}+\\dfrac{f_\\mathrm{d}(m-1)}{c}\\right).$$\n\n\\subsection{Numerical experiments}\nFor a concrete example of the bounds, we use an antenna array with the configuration in Table~\\ref{table.parameters}. As discussed in \\cite{hag2018multi}, this configuration is particularly challenging for uplink-downlink conversion for two main reasons: (i) the uplink frequency is lower than the downlink frequency, and (ii) the antenna spacing is larger than half of the wavelength $c\/(2f_\\mathrm{d})$ of the higher frequency $f_\\mathrm{d}$, so we have the undesirable phenomenon known as grating lobes \\cite{van2004optimum,hag2018multi}. We show below that this challenging scenario for uplink-downlink conversion can be formally verified with the simple bounds in \\refeq{bound.specific}, and that the resulting problems can be mitigated with information about the support of the angular power spectrum.\n\n\\begin{table}\n\t\\caption{Parameters of the uniform linear array}\n\t\\label{table.parameters}\n\t\\begin{center}\n\t\t\\begin{tabular}{cc}\n\t\t\t\\hline \\\\\n\t\t\tNumber of antennas $(N)$ & 30 \\\\\n\t\t\tUplink frequency $(f_\\mathrm{u})$ & 1.8~GHz \\\\\n\t\t\tDownlink frequency $(f_\\mathrm{d})$ & 1.9~GHz \\\\\n\t\t\tSpeed of wave propagation $(c)$ & $3\\cdot 10^8$~m\/s \\\\\n\t\t\tAntenna spacing $(d)$ & $1.05~\\dfrac{c}{2f_\\mathrm{u}}$ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\n\n\\end{table}\n\nTo illustrate the theoretical gains that can be achieved with the technique discussed in Sect.~\\ref{sect.side_info}, we assume that $\\mathrm{Supp}(\\rho)\\subset C_\\mathrm{S}=[0,~\\pi\/2]$, and $C_\\mathrm{S}$ is known. 
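\nBefore discussing the results, we note that the bound in \\refeq{bound.specific} (i.e., the bound without support information) can be evaluated with a few lines of code. The following Python sketch (ours, not part of the original evaluation; it assumes NumPy and SciPy, the parameters in Table~\\ref{table.parameters}, and $B=1$ as in the experiments below) computes the right-hand side of \\refeq{bound.specific} for all $k\\in\\mathcal{I}$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import j0\n\n# Parameters of Table 1 (frequencies in Hz), B = 1\nN, fu, fd, c, B = 30, 1.8e9, 1.9e9, 3e8, 1.0\nd = 1.05 * c / (2 * fu)\n\nn = np.arange(1, N + 1)\nnn, mm = np.meshgrid(n, n, indexing='ij')\nx = 2 * np.pi * d * fu / c * (nn - mm)\ny = 2 * np.pi * d * fu / c * (nn + mm - 2)\np = 2 * np.pi * d * (fu * (nn - 1) - fd * (mm - 1)) / c\nq = 2 * np.pi * d * (fu * (nn - 1) + fd * (mm - 1)) / c\n\nZ = np.zeros((N, N))\nG = np.pi / 2 * np.block([[j0(x) + j0(y), Z], [Z, j0(x) - j0(y)]])\nQ = np.pi / 2 * np.block([[j0(p) + j0(q), Z], [Z, j0(p) - j0(q)]])\n\nk = np.arange(1, 2 * N + 1)\nnorm_gd_sq = np.where(k <= N,\n                      np.pi / 2 * (1 + j0(4 * np.pi * fd / c * d * (k - 1))),\n                      np.pi / 2 * (1 - j0(4 * np.pi * fd / c * d * (k - N - 1))))\n\n# e_k <= B * sqrt(||g_dk||^2 - q_k^T G^dagger q_k); clip round-off negatives\nqGq = np.einsum('ik,ij,jk->k', Q, np.linalg.pinv(G), Q)\nbounds = B * np.sqrt(np.maximum(norm_gd_sq - qGq, 0.0))\nprint(bounds)\n\\end{verbatim}\n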
For all simulations in this section, we use only the scheme in Corollary~\\ref{cor.general_bound}(ii) for the estimation of $\\rho$ because of its low computational complexity, as discussed in Sect.~\\ref{sect.side_info}. \n\nIn Fig.~\\ref{fig.bounds_theory}, assuming $B=1$, we show the bounds in the last inequality in \\refeq{eq.bound_cor} with and without support information (SI). For the computation of the bound without SI, we use the closed-form expressions in \\refeq{bound.specific}. For the bound with SI, we construct the matrices $\\signal{G}$ and $\\signal{Q}$ in \\refeq{eq.matrixg} and \\refeq{eq.qmatrix} by computing the integrals numerically, unless an integral falls into one of the cases covered by \\refeq{bound.specific}. From Fig.~\\ref{fig.bounds_theory}, it is clear that, without any support information, the estimate $\\widetilde{r}_{\\mathrm{d},k}$ can be unreliable for many indices $k\\in\\mathcal{I}$, which is also in accordance with the results in \\cite{hag2018multi}. In contrast, with support information, all estimates $(\\widetilde{r}_{\\mathrm{d},k})_{k\\in\\mathcal{I}}$ of the components of the downlink covariance matrix are reliable, even if the estimate $\\widetilde{\\rho}=P_V(0)$ of the angular power spectrum $\\rho$ is not necessarily accurate. \n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=.9\\columnwidth]{bounds_theory.pdf}\n\t\t\\caption{Theoretical error bounds for the estimates of the components of the downlink covariance matrix.}\n\t\t\\label{fig.bounds_theory}\n\t\\end{center}\n\\end{figure}\n\n\n\n\nTo illustrate the above fact, consider the following example for $\\rho:[-\\pi\/2,\\pi\/2]\\to{\\mathbb R}_+$:\n\n\\begin{align} \n\\label{eq.aps}\n\\rho(\\theta)= n{e}^{-{|\\theta-.5|}\/{.05}} + 4n~{e}^{-{|\\theta-1.4|}\/{.05}},\n\\end{align}\nwhere $n\\in{\\mathbb R}_+$ is a normalizing constant chosen to guarantee that $\\|\\rho\\|=1$. This particular $\\rho$ can be interpreted as coming from a user with two multipath components at angles 0.5~rad and 1.4~rad. Note that this choice violates the assumption on the support of $\\rho$, but the signal energy outside $C_\\mathrm{S}$ is small compared to the energy in $C_\\mathrm{S}$, so we can expect the bounds shown in Fig.~\\ref{fig.bounds_theory} to be accurate. This fact is illustrated in Fig.~\\ref{fig.bounds_practice}, which shows the absolute error $(|r_{\\mathrm{d},k}-\\widetilde{r}_{\\mathrm{d},k}|)_{k\\in\\mathcal{I}}$ (with $\\widetilde{r}_{\\mathrm{d},k}$ computed by using \\refeq{eq.algorithm1}) of the estimates. 
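\nA possible implementation of the experiment described above is sketched below in Python (ours; the discretization of $[-\\pi\/2,\\pi\/2]$, the grid size, and the simple trapezoidal quadrature are arbitrary implementation choices). It builds the matrices $\\signal{G}$ and $\\signal{Q}$ with the support information of Sect.~\\ref{sect.side_info}, applies the linear conversion in \\refeq{eq.algorithm1} to the angular power spectrum in \\refeq{eq.aps}, and prints the largest absolute error of the estimated components:\n\\begin{verbatim}\nimport numpy as np\n\n# ULA parameters (Table 1, frequencies in Hz) and a grid on [-pi/2, pi/2]\nN, fu, fd, c = 30, 1.8e9, 1.9e9, 3e8\nd = 1.05 * c / (2 * fu)\ntheta = np.linspace(-np.pi / 2, np.pi / 2, 20001)\nw = np.full_like(theta, theta[1] - theta[0])   # trapezoidal quadrature weights\nw[[0, -1]] /= 2\n\ndef g(f):\n    # Real and imaginary parts of the first column of a(theta, f) a(theta, f)^H\n    phase = 2 * np.pi * f / c * d * np.outer(np.arange(N), np.sin(theta))\n    return np.vstack([np.cos(phase), np.sin(phase)])   # shape (2N, len(theta))\n\ndef inner(A, B):\n    # Pairwise L^2 inner products of the rows of A and B, approximated on the grid\n    return (A * w) @ B.T\n\ngu, gd = g(fu), g(fd)\nv = gd * (theta < 0)        # v_k = P_K(g_dk): zero on C_S = [0, pi/2]\n\n# Angular power spectrum of (eq.aps), normalized so that its L^2 norm is 1\nrho = np.exp(-np.abs(theta - 0.5) / 0.05) + 4 * np.exp(-np.abs(theta - 1.4) / 0.05)\nrho /= np.sqrt(np.sum(w * rho**2))\n\nX = np.vstack([gu, v])                  # the vectors in S = S' U S''\nG, Q = inner(X, X), inner(X, gd)\nru = inner(gu, rho[None, :])[:, 0]      # exact uplink components r_u,k\nrd = inner(gd, rho[None, :])[:, 0]      # exact downlink components r_d,k\n\n# Conversion as in (eq.algorithm1): r_tilde_d = Q^T G^dagger [r_u; 0]\nrd_est = Q.T @ np.linalg.pinv(G) @ np.concatenate([ru, np.zeros(2 * N)])\nprint(np.abs(rd_est - rd).max())\n\\end{verbatim}\nThe errors observed with this sketch depend on the discretization, so it should be seen only as a starting point for reproducing Fig.~\\ref{fig.bounds_practice}.\n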
Note that, by including support information, uplink-downlink conversion has been performed reliably for all components of the downlink covariance matrix, even though the estimate of the angular power spectrum (APS) is not necessarily accurate, as depicted in Fig.~\\ref{fig.aps}.\n\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=.9\\columnwidth]{bounds_practice.pdf}\n\t\t\\caption{Estimation error of the components of the downlink covariance matrix with the angular power spectrum in \\refeq{eq.aps}.}\n\t\t\\label{fig.bounds_practice}\n\n\t\\end{center}\n\n\\end{figure}\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=.9\\columnwidth]{aps.pdf}\n\t\t\\caption{Estimates of the angular power spectrum (APS) with the approach in Corollary~\\ref{cor.general_bound}(ii).}\n\t\t\\label{fig.aps}\n\t\\end{center}\n\n\\end{figure}\n\n\n\\section{Summary and Conclusions}\nRecent work has proved that, without side information about the angular power spectrum, existing algorithms in the literature may not be able to estimate reliably all components of the downlink covariance matrix. In this study, we have introduced alternative reliability bounds that are based on elementary arguments in infinite dimensional Hilbert spaces. The main advantages of the proposed analysis are its simplicity and generality. Unlike previous results, the bounds shown here can be straightforwardly used to analyze the performance of algorithms that exploit information about the support of the angular power spectrum in challenging scenarios that take into account the polarization of antennas and physical impairments of real antenna arrays. To illustrate a possible application of the bounds, we have improved a simple set-theoretic algorithm that does not require any parameter tuning. We have shown that, with coarse information about the angular power spectrum, all components of the downlink covariance matrix can be reliably estimated from the uplink covariance matrix with a simple linear operation. This result suggests that, in some scenarios, the main challenge may be the estimation of the uplink covariance matrix, not necessarily the uplink-downlink conversion problem.\n\n\n\\bibliographystyle{IEEEtran}\n