diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhzqm" "b/data_all_eng_slimpj/shuffled/split2/finalzzhzqm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhzqm" @@ -0,0 +1,5 @@ +{"text":"\\section*{METHODS}\n\\subsection{DFT}\nWe carried out DFT calculations within the \nspin-polarized generalized-gradient approximation ($\\sigma$-GGA) with the \nPerdew-Burke-Ernzerhof parametrization~\\cite{1996Perdew-PRB} for the \nexchange and correlation energy, with same parameters as in Ref.\\citen{miuraPRB2008}.\nAll the calculations were done with the plane wave {\\tt PWscf} code, \nincluded in the {\\tt QUANTUM-Espresso} package.~\\cite{espresso} \nUltrasoft pseudopotentials~\\cite{1990Vanderbilt-PRB} were employed and\nkinetic energy cutoffs of $30$ Ry and $300$ Ry were used for the wave functions \nand the charge density, respectively.\nFor both B and SUB geometries, the Au-Au bond length was taken $2.80$ \\AA \nto avoid spurious magnetizations due to DFT self-interaction errors,~\\cite{miuraPRB2008} \nwhereas all the Au-Ni distances were fully optimized. \nThe calculation supercell consisted of $16$ Au atoms plus one Ni atom periodically \nrepeated in all three directions. The wire-wire distance in the $xy$ plane (perpendicular to the wire axis $z$) \nwas $10.58$ \\AA, making spurious interactions between periodic replicas negligible. \nIn the B configuration, the Ni atom lies in the $xz$ plane. \nConvergence with respect to $k$-points and smearing parameters was carefully checked.\n\nTransmission and reflection amplitudes, and the spin rotation angles (see the Supplementary material) \nwere calculated using the Choi and Ihm's method~\\cite{1999Choi-PRB} generalized \nto ultrasoft pseudopotentials,~\\cite{2004Smogunov-PRB} as implemented in the PWCOND code \n(a part of the {\\tt QUANTUM-Espresso} package). The self-consistent potential in the first part of the supercell \ndescribed above, of length equal to the Au-Au distance, was used to build the periodic potential \nof the left and right leads, while the potential in the rest of the supercell was used as the scattering region. \nThe zero bias ballistic conductance was obtained using the Landauer-B\\\"uttiker formula~\\cite{Landauer}\nfrom the transmission coefficient at $E_F$ (with all spin moments \nfrozen). Spin-orbit effects were not taken into account\nin the present study. The PDOS in Fig. \\ref{fig1} were calculated directly from the scattering states.\n\n\n\\subsection{NRG}\n\n\nIn our NRG code (for a review see Ref.~\\citen{bullaRMP}) we implement the $U(1)$ charge symmetry \n(quantum number $Q$, the total charge with respect to one electron per site and orbital) and the $SU(2)$ spin symmetry \n(quantum number $S$, the total spin). \nWe choose the Wilson discretization parameter $\\Lambda=2$ (or $1.8$ when calculating the Green functions), and \nkeep up to 1500 states per iteration when calculating dynamical quantities, or 800 if we are only interested in the energy spectrum.\nThe Kondo temperature can be expressed as\\cite{hewson} $T_K= \\frac{\\pi w Z \\Gamma}{4 k_B}$ where $w=0.4128$ \nis the Wilson coefficient, $\\Gamma$ the hybridization linewidth in the magnetic channel, and $Z$ the quasiparticle residue. 
\nThe latter can be extracted from the self-energy according to:\n$\nZ^{-1}=\\left.1-({\\partial \\Sigma(i\\omega_n)}\/{\\partial \\,i \\omega_n})\\right|_{i\\omega_n=0}\\:.\n$\nThe self-energy is $\\Sigma=-G^{-1}+G_0^{-1}$ where $G$ and $G_0$ are the impurity Green functions calculated by NRG, respectively \nin the presence and absence of interaction. The resulting Kondo temperature in our model is shown in Fig.~\\ref{fig2} \nas a function of $U$.\nIn the bridge geometry the zero bias conductance is evaluated as a function of temperature by direct inspection of the NRG flow.\nIt can be expressed as $G=2e^2 \\,\\sin^2(\\delta_{e}-\\delta_{o})\/h$. The phase shift $\\delta_{e\/o}$ are related to \nthe two lowest energies of states with quantum numbers $(Q,S)=(1,1\/2)$, which correspond to the cost of adding an even or odd \nelectron to the ground state that has quantum numbers $(0,0)$.\nWe calculated the difference between these phase shifts, hence the zero-bias conductance, as a function of the temperature $T$ extracted from the NRG iterations. \nWe note however that, while we are quite confident about the values at low temperatures, those at high temperatures must be taken \nwith caution, since the spectrum is still far from a Fermi liquid one.\nIn the substitutional geometry we calculate the Ni $4s$ spectral function following the guidelines of \nRef.\\citen{costi_hewson_zlaticJPC94}. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAutomated fake news detection is the task of assessing the truthfulness of claims in news.\nThis is a new but critical NLP problem because both traditional news media and social media have huge social-political impacts on every individual in the society.\nFor example, exposure to fake news can cause attitudes of inefficacy, alienation, and cynicism toward certain political candidates \\cite{balmas2014fake}.\nFake news even relates to real-world violent events that threaten public safety (e.g., the PizzaGate \\cite{kang2016washington}).\nDetecting fake news is an important application in the world that NLP can help with, as it also creates broader impacts on how technologies can facilitate the verification of the veracity of claims while educating the general public.\n\nThe conventional solution to this task is to ask professionals such as journalists to check claims against evidence based on previously spoken or written facts.\nHowever, it is time-consuming and expensive. For example, PolitiFact\\footnote{https:\/\/www.politifact.com\/} takes three editors to judge whether a piece of news is real or not.\n\nAs the Internet community and the speed of the spread of information are growing rapidly, automated fake news detection on internet content has gained interest in the Artificial Intelligence research community.\nThe goal of automatic fake news detection is to reduce the human time and effort to detect fake news and help us stop spreading it.\nThe task of fake news detection has been studied from various perspectives with the development in subareas of Computer Science, such as Machine Learning (ML), Data Mining (DM), and NLP.\n\n\nIn this paper, we survey automated fake news detection from the perspective of NLP. \nBroadly speaking, we introduce the technical challenges in fake news detection and how researchers define different tasks and formulate ML solutions to tackle this problem. We discuss the pros and cons, as well as the potential pitfalls and drawbacks of each task. 
More specifically, we provide an overview of research efforts for fake news detection and a systematic comparison of their task definitions, datasets, model construction, and performances. We also discuss a guideline for future research in this direction.\nThis paper also includes some other aspects such as social engagement analysis. Our contributions are three-fold:\n\\begin{itemize}\n \\item We provide the first comprehensive review of Natural Language Processing solutions for automatic fake news detection;\n \\item We systematically analyze how fake news detection is aligned with existing NLP tasks, and discuss the assumptions and notable issues for different formulations of the problem;\n \\item We categorize and summarize available datasets, NLP approaches, and results, providing first-hand experiences and accessible introductions for new researchers interested in this problem. \n\\end{itemize}\n\\begin{table*}[t!]\n\\begin{center}\n\\begin{tabular}{| l | l | l | l | l |}\n\\hline \\bf Name& \\bf Main Input & \\bf Data Size & \\bf Label & \\bf Annotation \\\\ \\hline\n\\textsc{liar} & short claim & 12,836 & six-grade & editors, journalists \\\\\n\\textsc{fever} & short claim& 185,445 &three-grade& trained annotators\\\\ \n\\textsc{buzzfeednews} & FB post & 2,282 & four-grade &journalists\\\\\n\\textsc{buzzface} & FB post & 2,263 & four-grade &journalists \\\\\n\\textsc{some-like-it-hoax} & FB post& 15,500 & hoaxes or non-hoaxes & none\\\\\n\\textsc{pheme} & Tweet & 330 &true or false & journalists\\\\\n\\textsc{credbank} & Tweet & 60 million & 30-element vector &workers\\\\\n\\textsc{fakenewsnet} & article & 23,921 & fake or real & editors\\\\\n\\textsc{bs detector} & article & - & 10 different types & none\\\\\n\\hline\n\\end{tabular}\n\\caption{A Summary of Various Fake News Detection Related Datasets. \\emph{FB: FaceBook.}}\n\\label{font-table}\n\\end{center}\n\\end{table*}\n\n\\begin{table*}[t!]\n\\begin{center}\n\\begin{tabular}{| l | l |}\n\\hline \\bf Attributes & \\bf Value \\\\ \\hline\nID of the statement & 11972 \\\\\nLabel & True \\\\\nStatement & Building a wall on the U.S.-Mexico border will take literally years. \\\\\nSubject(s) & Immigration\\\\\nSpeaker & Rick Perry \\\\\nSpeaker's job title & Governor of Texas \\\\\nParty affiliation & Republican \\\\\nTotal Credibility History Counts & 30, 30, 42, 23, 18 \\\\\nContext & Radio Interview \\\\\n\\hline\n\\end{tabular}\n\\caption{An Example Entry from \\textsc{liar}. The ordered total credibility history counts are \\{barely true, false, half true, mostly true, pants on fire\\}, note that the history counts only include the history for inaccurate statements.}\n\\label{LIARtable}\n\\end{center}\n\\end{table*}\n\n\n\\section{Related Problems}\n\\label{RelatedP}\n\\subsection{Fact-Checking} Fact-checking is the task of assessing the truthfulness of claims made by public figures such as politicians, pundits, etc \\cite{vlachos2014fact}. Many researchers do not distinguish fake news detection and fact-checking since both of them are to assess the truthfulness of claims.\nGenerally, fake news detection usually focuses on news events while fact-checking is broader.\n\\newcite{thorne2018automated} provides a comprehensive review of this topic.\n\n\\subsection{Rumor Detection}\n\\label{RumorDetection}\nThere is not a consistent definition of rumor detection. 
A recent survey \\cite{zubiaga2018detection} defines rumor detection as separating personal statements into rumor or non-rumor, where rumor is defined as a statement consisting of unverified pieces of information at the time of posting.\nIn other words, rumor must contain information that can be verified rather than subjective opinions or feelings.\n\n\\subsection{Stance Detection} Stance detection is the task of assessing what side of debate an author is on from text.\nIt is different from fake news detection in that it is not for veracity but consistency.\nStance detection can be a subtask of fake news detection since it can be applied to searching documents for evidence \\cite{ferreira2016emergent}. PHEME, one of the fake-news datasets has tweets related to news, capturing the behavior of users who trust or untrust.\n\\subsection{Sentiment Analysis} Sentiment analysis is the task of extracting emotions, such as customers' favorable or unfavorable impression of a restaurant.\nDifferent from rumor detection and fake news detection, sentiment analysis is not to do an objective verification of claim but to analyze personal emotions. \n\n\\section{Task Formulations}\n\\label{Tasks}\nIn Section \\ref{RelatedP}, we compared related problems with fake news detection to define the scope of this survey.\nIn this survey, The general goal of fake news detection is to identify fake news, defined as the false stories that appear to be news, including rumors judged as information that can be verified in rumor detection.\nEspecially, we focus on fake news detection of text content. The input can be text ranging from short statements to entire articles. Inputs are related to which dataset is used (see Section \\ref{Datasets}), and additional information such as speakers' identity can be appended. \n\nThere are different types of labeling or scoring strategies for fake news detection. \nIn most studies, fake news detection is formulated as a classification or regression problem, but the classification is more frequently used.\n\\subsection{Classification} The most common way is to formulate the fake news detection as a binary classification problem.\nHowever, categorizing all the news into two classes (fake or real) is difficult because there are cases where the news is partially real and partially fake.\nTo address this problem, adding additional classes is common practice.\nMainly, a category for the news which is neither completely real nor completely fake, or, more than two degrees of truthfulness are set as additional classes.\nWhen using these datasets, the expected outputs are multi-class labels, and those labels are learned as independent labels with i.i.d assumptions \\cite{rashkin2017truth,wang2017liar}.\n\nOne of the conditions for fake news classifiers to achieve good performances is to have sufficient labeled data. \nHowever, to obtain reliable labels requires a lot of time and labor. Therefore, semi\/weakly-supervised and unsupervised methods are proposed \\cite{rubin2012identification,bhattacharjee2017active}.\n\n\\subsection{Regression} Fake news detection can also be formulated as a regression task, where the output is a numeric score of truthfulness. 
This approach is used by \\newcite{nakashole2014language}.\nUsually, evaluation is done by calculating the difference between the predicted scores and the ground truth scores or using Pearson\/Spearman Correlations.\nHowever, since the available datasets have discrete ground truth scores, the challenge here is how to convert the discrete labels to numeric scores.\n\n\n\\section{Datasets}\\label{Datasets}\n\nA significant challenge for automated fake news detection is the availability and quality of the datasets. \nWe categorize public fake-news datasets into three categories: claims, entire articles, and Social Networking Services (SNS) data.\nClaims are one or a few sentences including information worth validating (there is a sample in Table \\ref{LIARtable}), while entire articles are composed of many sentences related to each other constituting information as the whole. SNS data are similar to claims in length but featured by structured data of accounts and posts, including a lot of non-text data.\n\\subsection{Claims}\n\\textsc{PolitiFact}, \\textsc {Channel4.com}\\footnote{https:\/\/www.channel4.com\/news\/factcheck\/}, and \\textsc{Snopes}\\footnote{https:\/\/www.snopes.com\/fact-check\/} are three sources for manually labeled short claims in news, which is collected and labeled manually. Editors handpicked the claims from a variety of occasions such as debate, campaign, Facebook, Twitter, interviews, ads, etc.\nMany datasets are created based on these websites.\n\n\\newcite{vlachos2014fact} released the first public fake news detection dataset gathering data from \\textsc{PolitiFact} and \\textsc {Channel4.com}.\nThis dataset has 221 statements with the date it was made, the speaker and the URL, and the veracity label of a five-point scale. \\textsc{Emergent} \\cite{ferreira2016emergent} is the early work of claim-verification dataset too. It is for stance classification in the context of fact-checking, including claim with some documents for or against them. This dataset can improve fact-checking in the condition that some articles related to the claim were given.\n\nVlachos includes only 221 claims and Emergent includes only 300 claims so that it was impractical to use them for machine learning based assessments.\nThese days, datasets with many claims are published, which can use as an improved version of the first two.\n\nA recent benchmark dataset for fake news detection is \\textsc {liar} \\cite{wang2017liar}. \n This dataset collected data from Politifact as \\newcite{vlachos2014fact}, but includes 12,836 real-world short statements, and each statement is labeled with six-grade truthfulness.\n The information about the subjects, party, context, and speakers are also included in this dataset.\n For the datasets from Politifact articles, \\newcite{rashkin2017truth} also published large datasets. They collect articles from PunditFact (Politifact's spin-off site) too.\n\n \n\n {\\textsc fever} \\cite{thorne2018fever} is a dataset providing related evidence for fact-checking. In this point, it is similar to \\textsc{Emergent}. {\\textsc fever} contains 185,445 claims generated from Wikipedia data. Each statement is labeled as \\emph{Supported, Refuted, or Not Enough Info}. They also marked which sentences from Wikipedia they use as evidence. 
{\\textsc fever} makes it possible to develop a system that can predict the truthfulness of a claim together with the evidence, even though the type of facts and evidence from Wikipedia may still exhibit some major stylistic differences from those in real-world political campaigns. \n\n\n \n \n\n\\subsection{Entire-Article Datasets}\nThere are several datasets for fake news detection predicting whether the entire article is true or fake. For example, \\textsc{fakenewsnet} \\cite{shu2017fake,shu2017exploiting,shu2018fakenewsnet} is an ongoing data collection project for fake news research. It consists of headlines and body texts of fake news articles based on BuzzFeed and PolitiFact. It also collects information about the social engagements of these articles from Twitter. \n\n\\textsc{bs detector}\\footnote{https:\/\/github.com\/bs-detector\/bs-detector}\\label{BSdetector} is collected from a browser extension named BS Detector, indicating that its labels are the outputs of the BS Detector, not human annotators. \nBS Detector searches all links on a web page at issue for references to unreliable sources by checking against a manually compiled list of unreliable domains.\n\n \n\n\n\n\\subsection{Posts On Social Networking Services}\nThere are some datasets for fake news detection focusing on SNS, but they tend to have a limited set of topics and can be less related to news.\n\n\\textsc{buzzfeednews}\\footnote{https:\/\/github.com\/BuzzFeedNews\/2016-10-facebook-fact-check} collects 2,282 posts from 9 news agencies on Facebook. Each post is fact\\-checked by 5 BuzzFeed journalists. The advantages of this dataset are that the articles are collected from both sides of left-leaning and right-leaning organizations.\nThere are two enriched versions of \\textsc{buzzfeednews}: \\newcite{potthast2017stylometric} enriched them by adding data such as the linked articles, and \\textsc{buzzface} \\cite{santia2018buzzface} extends the BuzzFeed dataset with the\n1.6 million comments related to news articles on Facebook.\n\n\\textsc{some-like-it-hoax}\\footnote{https:\/\/github.com\/gabll\/some-like-it-hoax} \\cite{tacchini2017some} consists of 15,500 posts from 32 Facebook pages, that is, the public profile of organizations (14 conspiracy and 18 scientific organizations). \nThis dataset is labeled based on the identity of the publisher instead of post-level annotations. A potential pitfall of such a dataset is that such kind of labeling strategies can result in machine learning models learning characteristics of each publisher, rather than that of the fake news.\n\n\\textsc{pheme} \\cite{zubiaga2016analysing} and \\textsc{credbank} \\cite{mitra2015credbank} are two Twitter datasets.\n\\textsc{pheme} contains 330 twitter threads (a series of connected Tweets from one person) of nine newsworthy events, labeled as true or false.\n\\textsc{credbank} contains 60 million tweets covering 96 days, grouped into 1,049 events with a 30-dimensional vector of truthfulness labels. Each event was rated on a 5-point Likert scale of truthfulness by 30 human annotators. They concatenate 30 ratings as a vector because they find it difficult to reduce it to a one-dimensional score.\n\nAs mentioned above, these datasets were created for verifying the truthfulness of tweets. Thus they are limited to a few topics and can include tweets with no relationship to news. Hence both datasets are not ideal for fake news detection, and they are more frequently used for rumor detection. 
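\n\nBefore turning to methods, the following minimal sketch (ours, not from any of the surveyed works) illustrates how a claim-style record such as the \\textsc{liar} entry in Table \\ref{LIARtable} might be represented for model input; the field names follow that table, while the ordering of the six labels is an assumption made for illustration.\n\\begin{verbatim}\nfrom dataclasses import dataclass\nfrom typing import List\n\n# six-grade labels, ordered here from least to most truthful\nLABELS = ['pants-fire', 'false', 'barely-true',\n          'half-true', 'mostly-true', 'true']\n\n@dataclass\nclass Claim:\n    statement: str      # the short claim text\n    speaker: str        # e.g., 'Rick Perry'\n    subject: str        # e.g., 'Immigration'\n    context: str        # e.g., 'Radio Interview'\n    history: List[int]  # credibility history counts\n    label: str          # one of LABELS\n\n    def label_index(self) -> int:\n        return LABELS.index(self.label)\n\\end{verbatim}\nSuch a representation keeps the statement text separate from the meta-data, which several of the models discussed next consume through dedicated input branches.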
\n\n\\section{Methods}\n\\label{Models}\nWe introduce the methods for fake news detection.\nAs usual, we first preprocess input texts into suitable forms (\\ref{Preprocessing}).\nIf the dataset has an entire article length, the rhetorical approach can be used as one of the hand-crafted features extraction (\\ref{Rhetorical Approach}).\nIf the dataset has evidence like \\textsc{Emergent} or \\textsc{FEVER}, we can use methods in \\ref{collectevidence} to gather evidence for outputs.\n\n\\subsection{Preprocessing}\\label{Preprocessing}\nPreprocessing usually includes tokenization, stemming, and generalization or weighting words.\nTo convert tokenized texts into features, Term Frequency-Inverse Document Frequency (TF-IDF) and Linguistic Inquiry and Word Count (LIWC)\\label{LIWC} are frequently used. For word sequences, pre-learned word embedding vectors such as word2vec \\cite{mikolov2013efficient} and GloVe \\cite{pennington2014glove} are commonly used.\n \nWhen using entire articles as inputs, an additional preprocessing step is to identify the central claims from raw texts. \\newcite{thorne2018fever} rank the sentences using TF-IDF and DrQA system \\cite{chen2017reading}.\nThese operations are closely related to subtasks, such as word embeddings, named entity recognition, disambiguation or coreference resolution.\n\n\\subsection{Machine Learning Models}\nAs mentioned in Section \\ref{Tasks}, the majority of existing research uses supervised methods while semi-supervised or unsupervised methods are less commonly used. In this section, we mainly describe classification models with several actual examples.\n \n\\subsubsection{ Non-Neural Network Models}\nSupport Vector Machine (SVM) and Naive Bayes Classifier (NBC) are frequently used classification models \\cite{conroy2015automatic,khurana2017linguistic,shu2018fakenewsnet}\nThese two models differ a lot in structure and both of them are usually used as baseline models. \nLogistic regression (LR) \\cite{khurana2017linguistic,bhattacharjee2017active} and decision tree such as Random Forest Classifier (RFC) \\cite{hassan2017toward} are also used occasionally.\n\n\\subsubsection{Neural Network Models}\n\\label{NNM}\nRecurrent Neural Network (RNN) is very popular in Natural Language Processing, especially Long Short-Term Memory (LSTM), which solves the vanishing gradient problem so that it can capture longer-term dependencies.\nIn Section \\ref{Results}, many models based on LSTM perform high accuracy on \\textsc{LIAR} and \\textsc{FEVER}.\nIn addition, \\newcite{rashkin2017truth} set up two LSTM models and input text as simple word embeddings to one side and as LIWC feature vectors to the other. In both cases, they were more accurate than NBC and Maximum Entropy(MaxEnt) models, though only slightly.\n\nConvolutional neural networks (CNN) are also widely used since they succeed in many text classification tasks. \n\\newcite{wang2017liar}\\label{CNN-W} uses a model based on Kim's CNN \\cite{kim2014convolutional}, concatenating the max-pooled text representations with the meta-data representation from the bi-directional LSTM.\nCNN is also used for extracting features with a variety of meta-data. 
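\n\nAs a concrete illustration of the simplest pipeline described in this section (TF-IDF features feeding a non-neural classifier), the sketch below is ours; scikit-learn is assumed and the two toy claims merely stand in for a real dataset such as \\textsc{liar}.\n\\begin{verbatim}\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\n# toy stand-ins for claim texts and their class indices\ntexts = ['the economy grew by ten percent', 'taxes were never raised']\nlabels = [1, 0]\n\nmodel = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),\n                      LogisticRegression(max_iter=1000))\nmodel.fit(texts, labels)\nprint(model.predict(['taxes grew by ten percent']))\n\\end{verbatim}\nNeural models go beyond such bag-of-words features by exploiting word order and, often, the accompanying meta-data.\n\n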
As an example of a CNN consuming meta-data, \\newcite{deligiannisdeep} took graph-like data of relationships between news and publishers as input and assessed the news with it.\n\n\\newcite{karimi2018multi} proposed the Multi-source Multi-class Fake news Detection framework (MMFD)\\label{MMFD}, in which a CNN analyzes local patterns of each text in a claim and an LSTM analyzes temporal dependencies in the entire text; the concatenation of all last hidden outputs is then passed through a fully connected network.\nThis design combines the strengths of both architectures, since the LSTM handles longer texts better.\n\nAttention mechanisms are often incorporated into neural networks to achieve better performance. \\label{kirilin2018exploiting}\n\\newcite{long2017fake}\\label{LSTM-A} used an attention model that incorporates the speaker's name and the statement's topic to attend to features first; the weighted vectors are then fed into an LSTM. This increases accuracy by about 3\\% (Table \\ref{LIARresult}). \\newcite{kirilin2018exploiting} used a very similar attention mechanism.\nMemory networks, another kind of attention-based neural network, are used by \\newcite{pham2018study}.\n\n\\subsection{Rhetorical Approach}\n\\label{Rhetorical Approach}\nRhetorical Structure Theory (RST)\\label{RST}, sometimes combined with the Vector Space Model (VSM), is also used for fake news detection \\cite{rubin2015towards,della2018automatic,shu2017exploiting}.\nRST is an analytic framework for the coherence of a story. By defining the semantic role of text units (e.g., whether a sentence provides Circumstance, Evidence, or Purpose), this framework can systematically identify the essential idea and analyze the characteristics of the input text. Fake news is then identified according to its coherence and structure.\n\nTo interpret the RST analysis, the VSM is used to convert news texts into vectors, which are compared to the centers of true news and fake news in a high-dimensional RST space. Each dimension of the vector space indicates the number of occurrences of a rhetorical relation in the news text. \n\n\\subsection{Collecting Evidence}\n\\label{collectevidence}\nRecognizing Textual Entailment (RTE) \\cite{dagan2010recognizing} is frequently used to gather and utilize evidence.\nRTE is the task of recognizing relationships between sentences.\nBy gathering sentences that support or refute the input from data sources such as news articles, RTE-based methods can predict whether the input is correct.\nRTE-based models need textual evidence for fact-checking; thus this approach can be used only when the dataset includes evidence, as \\textsc{fever} and \\textsc{Emergent} do. \n\n\\section{Results \\& Observations}\n\\label{Results}\nIn this section, we compare the empirical results of various machine learning models on the classification datasets.\nWe focus on three datasets: \\textsc{liar}, \\textsc{fever}, and \\textsc{fakenewsnet}.\nAlthough we introduced nine datasets above, we restrict the comparison to these three because the others are smaller, have fewer reported experiments, or lean more toward rumor detection than fake news detection.\n\n\\subsection{\\textsc{LIAR}}\n\\begin{table}[t!]\n\\begin{center}\n\\begin{tabular}{|l|l|l|c|}\n\\hline \\bf Author & \\bf Meta-data & \\bf Base Model & \\bf Acc. 
\\\\ \\hline\nWang & & SVMs & 0.255 \\\\ \n & & CNNs & 0.270 \\\\ \n & +Speaker & CNNs & 0.248 \\\\ \n & +All & CNNs & \\textbf{0.274} \\\\ \\hline\n\nKarimi& & MMFD & 0.291 \\\\ \n & +All & MMFD& \\textbf{0.348} \\\\ \\hline\n\nLong& &LSTM+Att & 0.255 \\\\ \n& +All & LSTM(no Att) & 0.399 \\\\\n& +All & LSTM+Att & \\textbf{0.415} \\\\ \\hline\n\nKirilin& +All & LSTM & 0.415 \\\\\n& +All+Sp2C & LSTM & \\textbf{\\underline{0.457}} \\\\ \\hline\nBhatta-&2-class label& NLP Shallow & 0.921 \\\\\ncharjee& & Deep (CNN)& 0.962 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\label{LIARresult} The Current Results for \\textsc{LIAR}. +All means including all meta-data in \\textsc{LIAR}. Bhattacharjee convert 6-class labels to 2-class labels.}\n\\end{table}\nTable \\ref{LIARresult} shows accuracy of recent studies on \\textsc{LIAR}. For the detailed explanations of methods, see Section \\ref{Models} As the tendency, LSTM based models \tachieve higher accuracy than CNN based models.\nThe additional meta-data is also important.\n\\newcite{karimi2018multi} supplement LIAR by adding the verdict reports written by annotators and raise accuracy by 4\\%. \n\\newcite{kirilin2018exploiting} improve accuracy by 21\\% through replacing the credibility history in LIAR with a larger credibility source (speaker2credit\\footnote{https:\/\/github.com\/akthesis\/speaker2credit}).\nThe two papers also show the attention scores for verdict reports\/speaker credit are higher than the statement of claim. \n\n\\subsection{\\textsc{fever}}\n\n\\begin{table}[t!]\n\\begin{center}\n\\begin{tabular}{|l|l|c|}\n\\hline \\bf Author & \\bf Model & \\bf Acc. \\\\ \\hline\nThorne& Decomposable Att& \\textbf{0.319} \\\\ \n& & 0.509 \\\\ \\hline \nYin&TWOWINGOS & {\\bf 0.543} \\\\ \n&& 0.760 \\\\\\hline \nHanselowski&LSTM (ESIM-Att) & \\underline{ {\\bf 0.647} } \\\\ \n&& 0.684 \\\\ \\hline \n \nUNC-NLP&Semantic Matching Network& {\\bf 0.640} \\\\\nNie&(LSTM)& 0.680 \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\caption{\\label{FEVERresult} The Current Results for FEVER. The results in boldface are the accuracy of evidence-collection task.\n\\end{table}\nTable \\ref{FEVERresult} shows accuracy of recent studies on \\textsc{FEVER}. \nTWOWINGOS \\cite{yin2018twowingos} is the model based on attentive convolution, and \\newcite{thorne2018fever} and \\newcite{hanselowski2018ukp} also use attention based methods. As \\textsc{LIAR}, attention-LSTM has the best score both of verification and evidence-collection task.\nThe bottom one of the table is the top results of the workshop for FEVER from EMNLP 2018\\footnote{http:\/\/fever.ai\/2018\/workshop.html}. This method selects evidence by conducting semantic matching between each sentence from retrieved pages and the claim.\n\n\\subsection{\\textsc{fakenewsnet}}\n\\begin{table}[t!]\n\\begin{center}\n\\begin{tabular}{|l|l|l|c|}\n\\hline \\bf Author &Data& \\bf Model & \\bf Acc. \\\\ \\hline\nShu&Buzz& RST & 0.610 \\\\ \n&Feed&LIWC & 0.655 \\\\ \n&&Castillo & 0.747 \\\\ \n&&TriFN & {\\bf 0.864} \\\\ \nDella&&HC-CB-3 & {\\bf 0.856} \\\\ \nDeligiannis&&GCN & \\textbf{\\underline{0.944}} \\\\ \\hline\nShu&Politi&RST & 0.571 \\\\ \n&Fact&LIWC & 0.637 \\\\ \n&&Castillo & 0.779 \\\\\n&&TriFN & {\\bf 0.878} \\\\ \nDeligiannis&&GCN& {\\bf 0.895} \\\\\nDella&&HC-CB-3 & \\textbf{\\underline{0.938}} \\\\ \\hline\n\n\\end{tabular}\n\\end{center}\n\\caption{\\label{FNNresult} The Current Results for \\textsc{fakenewsnet}. 
There are two separate sources of data: BuzzFeed and PolitiFact.}\n\\end{table}\nTable \\ref{FNNresult} shows the accuracy of recent studies on \\textsc{fakenewsnet}.\n\\newcite{shu2017exploiting} achieve over 60\\% accuracy with the RST and LIWC methods, using hand-selected features such as linguistic or rhetorical features, but both models achieve lower accuracy than the other methods.\nThe other methods largely rely on social-engagement data, because \\textsc{fakenewsnet} records the social engagements of these articles on Twitter.\nCastillo, which uses social-engagement data only, outperforms the models using only textual data (RST, LIWC).\nHC-CB-3 sets a threshold on the amount of social-engagement data for combining a content-based method and a social-engagement-based method.\nGCN takes graph-like data of relationships between news and publishers as input to a CNN and assesses news with it.\nThese methods achieve very high accuracy through the successful utilization of additional data.\n\n\\section{Discussions \\& Recommendations}\n\\label{Discussions}\n\\subsection{Datasets and Inputs}\n\\newcite{rubin2015deception} define nine requirements for a fake news detection corpus: 1. Availability of both truthful and deceptive instances; 2. Digital textual format accessibility; 3. Verifiability of ``ground truth''; 4. Homogeneity in lengths; 5. Homogeneity in writing matters; 6. Predefined timeframe; 7. The manner of news delivery; 8. Pragmatic concerns; 9. Consideration for language and culture differences.\n\nAs the performance of fake news detection improves, more fine-grained and detailed detection becomes practical.\nWe propose recommendations for a new dataset that expand and concretize the nine requirements above, based on our observations of existing datasets and experimental results.\n\n\n\\subsubsection{Sophisticated Index of Truthfulness}\n\nFirst, news articles or claims might be a mixture of true and false statements, so it is not practical to categorize them strictly as true or false.\nThis is reflected in the fact that existing manual fact-checking sites use fine-grained labels such as MOSTLYFALSE, HALFTRUE, or Mixture, and that annotators found it difficult to reduce 30 human annotations to a one-dimensional score when building \\textsc{credbank}, as mentioned in Section \\ref{Datasets}.\nFor crowdsourcing in particular, \\newcite{roitero2018many} report that more classes or continuous scales seem to lead ordinary people to closer agreement with expert judges.\n\nSecond, as machine learning and NLP technology have improved, high accuracy is already achieved on binary classification, especially on claims. In Table \\ref{FNNresult} and the bottom two rows of Table \\ref{LIARresult}, the accuracy of 2-class prediction is over 90\\%, while 6-way classification using only text is lower than 30\\%.\nAs the next step, we should develop models that predict the veracity of news in more detail than a binary assessment. Currently, models for multi-class fake news detection ignore the order of the labels and simply classify. For example, it is a serious error if a classifier judges True news as False, but much less so if it judges True news as Mostly True; yet these two are treated as the same mistake in the learning methods used so far. 
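\n\nOne simple way to encode the distance between ordered labels, sketched below purely as an illustration (ours, not a method from the surveyed papers; PyTorch is assumed), is to penalize a prediction by its expected ordinal distance from the gold label.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef ordinal_aware_loss(logits, gold):\n    # logits: (batch, K) scores over ordered truthfulness grades\n    # gold:   (batch,) integer labels, 0 = most fake ... K-1 = most true\n    probs = F.softmax(logits, dim=-1)\n    classes = torch.arange(logits.size(-1), device=logits.device)\n    dist = (classes.unsqueeze(0) - gold.unsqueeze(1)).abs().float()\n    # judging True as Mostly True costs 1; True as False costs much more\n    return (probs * dist).sum(dim=-1).mean()\n\nlogits = torch.randn(4, 6, requires_grad=True)\ngold = torch.tensor([5, 0, 3, 2])\nordinal_aware_loss(logits, gold).backward()\n\\end{verbatim}\nSuch a distance-weighted objective makes the penalty grow with the gap between the predicted and the gold grade, unlike standard cross-entropy.\n\n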
Using this label distance during learning could improve practicality, so it is a promising direction for future work.\n\n\\subsubsection{Quote Claims or Articles from Various Speakers and Publishers within the Scope of the Dataset}\n\nFake news differs in nature depending on how it is created; for example, some news is intended to cause harm, while other news is created only for fun. Claire Wardle defines 7 different types of fake news \\cite{wardle2017fake}.\n\\newcite{roitero2018many} show that satire can be distinguished well from both real and fake news by style analysis, confirming that the type of fake news is an important factor, so we should be careful about which types of fake news we collect.\n\nAfter defining which types of fake news the dataset will cover, we should collect data carefully so as not to label statements solely according to their website source, nor to collect all true or all fake news from a single speaker or publisher.\n\\newcite{shu2018fakenewsnet} explore the distribution of publishers who publish fake news on PolitiFact and GossipCop, and find that the majority of publishers who published fake news published only one piece of fake news.\nHence, it is dangerous to assume a publisher is authentic merely because it has not made any mistakes yet. \n\nIn addition, collecting all true or all fake news from a certain situation or publisher introduces confounding variables, and the task may degenerate into website classification.\nWhen collecting data from fact-checking sites, the data have different backgrounds even if the source site is the same; this is why existing datasets are frequently based on such sites.\n\n\\subsubsection{Validate Entire Article}\nFor claim datasets, there are several sources of manually labeled sentences, such as \\textsc{PolitiFact} or Channel4.com.\nUsing them is easier and cheaper than annotating data from scratch; additionally, the claims are collected from grounded, varied, and natural contexts and labeled with solid analysis.\n\nFor entire-article datasets, there are fewer such websites\\footnote{GossipCop may be one of the few, but only for celebrity reporting: https:\/\/www.gossipcop.com\/}. The only human-annotated entire-article dataset is \\textsc{Fakenewsnet}, but its labels apply to claims in the article rather than to the entire article, and methods on this dataset emphasize the utilization of social-engagement data (Table \\ref{FNNresult}).\nFor those reasons, it is difficult to obtain human annotations for entire articles.\nMachine annotation, however, relies on strong assumptions. For example, \\textsc{BS detector} only infers an article's truthfulness by checking the domains of its links (as mentioned in Section \\ref{Datasets}) and not by its content, so machine learning models trained on this dataset are effectively learning the parameters of the BS Detector.\nAs a future task, we should consider how to evaluate and annotate the truthfulness of entire articles. For example, it may be preferable to add truthfulness scores to individual statements.\n\n\n\\subsection{Critiques of Common Methods}\nIn this section, we analyze different automatic fake news detection solutions and discuss our findings.\n\nFirst, hand-crafted features were essential in non-neural network approaches but can be replaced by neural networks.\n\nPsycho-linguistic categories and rhetorical features are typical features to extract in fake news detection. 
After \\newcite{mihalcea2009lie} and \\newcite{rubin2012identification} identified characteristics of word usage and of structure in deceptive language, respectively, \\newcite{shu2017exploiting} achieved 60\\% accuracy using them (Table \\ref{FNNresult}).\n\nHowever, neural networks seem to learn something more useful than these hand-crafted features, and combining the two does not necessarily help. For example, \\newcite{rashkin2017truth} show that adding LIWC did not improve the performance of the LSTM model, while the non-neural network models improved considerably on their dataset.\nNo existing study proves that rhetorical features cannot be combined with neural network models, but it is plausible that neural networks learn something more useful, since the RST model achieves lower accuracy than the other methods (Table \\ref{FNNresult}).\n\n\nSecond, the attention mechanism can help improve the performance of fake news detection models.\nAs neural network models for fake news detection, LSTMs and attention-based methods such as attention layers or memory networks are often used, as mentioned in Section \\ref{NNM}.\nThis is because they can analyze long-term and content-transitional information, so they can exploit the abundant word data of sentences and detect context.\nMany studies achieving high accuracy in Tables \\ref{LIARresult}, \\ref{FEVERresult}, and \\ref{FNNresult} use attention methods or LSTMs to learn textual models.\n\n\nThird, meta-data and additional information can be utilized to improve robustness and to suppress the noise of a single textual claim or article, but they should be used carefully. \nMost studies on the three datasets in Section \\ref{Results} improve accuracy by developing better ways to utilize not the texts but meta-data, including speaker credibility and social-engagement information.\n\nHowever, relying too much on speakers' or publishers' information for judgment may cause problems, such as silencing minorities' voices, as Vlachos indicates \\cite{graves2018understanding}.\nTo address this, he developed \\textsc{fever}, mentioned above, which includes evidence so that it can be used for claim verification and not only for classification, and its shared task has been tackled by many researchers. Considering that the top team in the shared task used semantic matching networks, the focus on content-based methods may be promoted as intended.\nFrom this point of view, content-based approaches should be developed further in the future, for example, based on writing style as in \\newcite{potthast2017stylometric}.\n\n\n\n\n\\section{Conclusion}\n\\label{Conclusion}\nIn this survey, we first discussed the importance and definitions of automatic fake news detection.\nWe then compared and discussed the most recent benchmark datasets and the experimental results of different methods.\nBased on our observations, we proposed new recommendations for future datasets and gave the following suggestions for future fake news detection models:\ninvestigating whether hand-crafted features can be combined with neural network models,\nmaking appropriate use of non-textual data,\nand extending content-based verification.\n\n\n\\bibliographystyle{lrec}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDigitalization is reshaping the energy sector, and smart meters have been widely installed in households \\cite{RN92}. 
A large amount of data collected from smart meters has great potential to enhance the energy efficiency, improve energy services, and secure system operations \\cite{RN99}. Many existing studies, such as \\cite{haben2015analysis}, analyzed residential customers' energy consumption behaviors using smart meter data. Studies in \\cite{RN79} took a further step revealing major factors that determine energy consumption behaviors, including socioeconomic factors and dwelling factors. \nThe exploration of linkages between load patterns and demographic and socioeconomic factors can help provide more personalized services. Different social groups have different load patterns \\cite{jain2017data}. If these differences can be identified, there can be a better understanding of the impact of energy policies and programs on energy equity and justice.\nHowever, the relationship between users' energy consumption behaviors and their demographic and socioeconomic background (e.g., age, income, and education) is less well understood. Therefore, this paper aims to develop a methodology to model consumers' load patterns using smart meter data and establish the link between load patterns and demographic and socioeconomic factors.\n\n\nThe relationship between consumers' load patterns and their demographic and socioeconomic information has been analyzed in the literature. For example, the authors in \\cite{RN3} used a random forest model to assess the impact of socioeconomic and environmental factors on residential energy consumption. The study in \\cite{RN1} pointed out that the consumption behavior is more strongly linked to intrinsic factors than to characteristics of the residential environment. In \\cite{RN2}, an interesting observation was made based on demographic information, and the results showed that younger users had their peak consumption later in the day than elderly users. In \\cite{RN67}, socioeconomic information, such as income and education levels, was selected as features to train a prediction model for the load distribution. The studies mentioned above usually input the raw load data into clustering or classification models, and the measurement noises would affect the results. Moreover, from the operator's perspective, the information regarding load shape (e.g., single peak or dual peak) and peak time weights more than subtle load variations in the off-peak period. Hence, a more effective method is needed to model consumption behaviors using smart meter data.\n\nThis paper develops an effective methodology to address the above shortcomings in handling smart meter data and improve the understanding of the relationship between loading patterns and demographic and socioeconomic factors. 
\nSpecifically, we use clustering techniques to extract consumers' load patterns and build a machine learning model to study the relationship between load patterns and demographic and socioeconomic factors.\n\\begin{itemize}\n \\item We use symbolic aggregate approximation (SAX) to process the smart meter data and eliminate the effect of noises on the clustering results.\n \\item We model load patterns using a set of load clusters, representing different consumption behaviors.\n \\item We build a deep neural network (DNN) model to study the relationship between load patterns and demographic and socioeconomic information, such as consumers' age, income level, education level, and premise area.\n \\item Our developed methodology is validated using real-world data of households.\n\\end{itemize}\nNote that the data involved in this work are anonymized and do not involve personal privacy issues. Also, our study focuses on the energy consumption patterns at the group level, aiming to improve the understanding of different groups' behaviors for better energy equity. \n\n\\section{Data Description}\\label{sec:data}\nThis paper uses data from the Pecan Street database \\cite{RN128} as a case study to validate our design and study the relationship between load patterns and demographic and socioeconomic factors. This database includes smart meter data of over a thousand households at a resolution of one hour. This dataset also provides these households' demographic and socioeconomic information, including the number of residents in each age group, annual household income, education level, and the household audit data. \n\n\\subsection{Load Data}\nWe first pre-process the load data to remove invalid and duplicate values and obtain valid load data of $312$ households from January 1st, 2015 to December 31st, 2017. Specifically, the load data are recorded as the electricity consumption in kWh for each household in each hour, thus forming daily load profiles over three years with a total of $341,328$ data entries.\n\n\\subsection{Demographic and Socioeconomic Data}\nAnother essential part of the data is consumers' demographic and socioeconomic information. According to \\cite{RN128}, consumers' demographic and socioeconomic information includes many aspects, such as income level, education level, age of residents, household location, and year built. Since not all the demographic and socioeconomic data serve meaningful features, simply including them will increase models' complexity and reduce accuracy. It is necessary to eliminate redundant features and select the appropriate variables. According to the results from the entropy-based feature selection in \\cite{RN67}, the most relevant demographic and socioeconomic characteristics to load patterns are age, education, income, and household square footage. Therefore, we choose the above characteristics as the demographic and socioeconomic information in our study.\n\n\\section{Methods for Characterizing Load Patterns}\\label{sec:method}\nTo study the relationship between load patterns and demographic and socioeconomic factors, we process the load data to model representative load patterns and then build a machine learning model to analyze the relationship. Our method consists of three parts: processing load data, clustering load patterns, and developing a model to find the relationship between load patterns and demographic and socioeconomic factors. 
These three parts are presented in Sections \\ref{subsec:sax}-\\ref{subsec:dnn}, respectively.\n\n\\subsection{Representing Load Data with SAX}\\label{subsec:sax}\nWe further process the original load data for the following reasons. The hourly time-series load data collected by smart meters often contain measurement noise. Moreover, the operator or the utility cares more about the peak load and the load type, such as a single or dual peak and when the peaks appear. The subtle differences between load profiles, especially in off-peak hours, do not provide many insights but make load clustering less effective. Dimension reduction is therefore necessary to eliminate these effects and capture the key information in the load profiles. In \\cite{RN29}, SAX has been used to convert the load profiles into symbol strings. This process eliminates the influence of noise and subtle variations and highlights the trend change of the load profile. We use the SAX method to process the load data in our work.\n\nThe SAX algorithm proposed in \\cite{RN66} extended the Piece-wise Aggregate Approximation (PAA) and kept the low complexity of the original method \\cite{RN66}. It can convert time-series data into different segments with a few discrete values as an approximation. In this paper, we use PAA to process the original time-series load data $\\boldsymbol{L} = \\{ l_1, l_2, \\ldots, l_H\\} $, which represents the daily load profile of $H=24$ hours. We evenly divide the $24$ hours into $S$ segments, each of which has an interval length of $\\tau$. PAA converts the original time-series load $\\boldsymbol{L}$ into an approximating representation $\\bar{\\boldsymbol{L}} = \\{\\bar{l}_1, \\bar{l}_2, \\ldots, \\bar{l}_S\\}$, where $\\bar{l}_s,~s=1,...,S$ is calculated as\n\\begin{equation}\n\\bar{l}_s=\\frac{1}{\\tau} \\sum_{t=\\tau(s-1)+1}^{\\tau s} {l_t}.\n\\end{equation}\n\nAfter the PAA representation is complete, we use the Gaussian distribution to find interval breakpoints, each of which defines a discrete approximation level. Each segment is allocated to the approximation level corresponding to its closest breakpoint, thus dividing the time-series load profile into a few segments with several discrete values \\cite{RN66}. In our case study, we choose $\\tau = 3$ and divide the $24$-hour load profiles into $S = 8$ segments. The load data in each segment can be represented by five discrete values \\cite{RN125} as an approximation of consumers' load profiles.\n\n\\subsection{Modeling Load Patterns via Clustering}\\label{subsec:cluster}\nAfter processing the load data with SAX, it is necessary to extract several representative load patterns from the processed load data. Specifically, we use the clustering technique to model load patterns by dividing the SAX-processed load profiles into several clusters, each of which represents a representative load pattern.\n\nPartitioning clustering methods \\cite{RN97} have been widely used to cluster time-series data, and K-means is one of the most commonly used partitioning clustering methods. The main advantage of K-means is that it is easy to implement and scale up \\cite{RN114}, and thus we choose K-means as the clustering method in our work. The number of clusters $K$ must be chosen carefully to represent typical load patterns. If $K$ is too large, clusters can be very similar to each other and do not exhibit meaningful differences. If $K$ is too small, some typical load patterns may be missing. 
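\n\nA minimal illustrative sketch of the PAA\/SAX representation and of scanning candidate cluster numbers is given below (ours, not the original implementation; NumPy and scikit-learn are assumed, the random profiles are placeholders for the real hourly data, and the z-normalization step is an assumption that is standard for SAX).\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import silhouette_score\n\n# Gaussian breakpoints for a 5-symbol SAX alphabet\nBREAKPOINTS = np.array([-0.84, -0.25, 0.25, 0.84])\n\ndef sax(profile, n_segments=8):\n    # z-normalize, apply PAA with tau = 3, map segments to symbols 0..4\n    z = (profile - profile.mean()) \/ (profile.std() + 1e-8)\n    paa = z.reshape(n_segments, -1).mean(axis=1)\n    return np.digitize(paa, BREAKPOINTS)\n\nprofiles = np.random.rand(1000, 24)   # placeholder daily load profiles\nsymbols = np.array([sax(p) for p in profiles])\n\nfor k in range(3, 10):                # candidate numbers of clusters\n    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(symbols)\n    print(k, km.inertia_, silhouette_score(symbols, km.labels_))\n\\end{verbatim}\nHere the inertia corresponds to the SSE and is reported together with the silhouette index for each candidate $K$.\n\n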
Therefore, we consider both Sum of Squared Errors (SSE) and the silhouette index \\cite{RN127} to select the most appropriate value for $K$.\n\nA smaller SSE indicates better clustering performance. As the number of clusters $K$ increases, the SSE will gradually become smaller. When $K$ reaches an appropriate number, the declining rate of SSE diminishes and stabilizes even with an increasing $K$. In our case, the SSE starts to stabilize after the cluster number $K$ reaches $5$, so any $K$ greater than $5$ can be considered. To determine the best $K$, we further consider the silhouette index, as it is also an important measure of clustering performance \\cite{RN127}. A larger silhouette index means that the load profiles are closer to each other within a cluster but more distant from each other across different clusters, indicating better clustering performance. In our study, the silhouette index peaks at a cluster number of $K = 7$. Therefore, we choose $K= 7$ clusters to model load patterns for achieving a low SSE and the highest silhouette index.\n\nTo sum up, this paper divides the load profiles into $7$ clusters by the K-means method, representing $7$ different electricity consumption behaviors. We further study how the consumers' load patterns are correlated with the demographic and socioeconomic features in Section~\\ref{subsec:dnn}.\n\n\\subsection{Machine Learning Model Based on DNN}\\label{subsec:dnn}\nWe develop a machine learning model to explore the relationship between demographic and socioeconomic characteristics and consumers' load patterns. Our problem can be regarded as a regression problem, in which the independent variables are consumers' demographic and socioeconomic features, and the dependent variables are the probabilities of load patterns. We tried to solve the problem with regression models, such as linear regression and polynomial regression, but the fit was not satisfactory due to unknown nonlinearity in the data. Therefore, we tried more efficient models. DNN is a powerful machine learning method with many successful applications \\cite{RN67}. It consists of a collection of neurons organized in a sequence of multiple layers, which can incorporate nonlinear activation functions. Therefore, this paper uses a DNN model and takes a linear regression model and a polynomial regression model as two benchmark models.\n\nAfter the clusters have been identified in our study, we have a set of representative load patterns. Each consumer's load profiles can be modeled as a probability distribution over the representative load patterns. We take users' demographic and socioeconomic features as the input of the DNN model to find the relationship between them and the load patterns. \nA set of percentages of load profiles belonging to each representative load pattern is the estimated target, i.e., the DNN model's output. \nThe probability for each cluster is always in the range [0,1], and the sum of all probabilities is always $1$. 
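\n\nAs a small illustration of how such a target distribution can be obtained from the clustering output (our sketch only; the household's daily cluster labels and the array shapes are assumptions), the fraction of a household's daily profiles falling in each of the $K=7$ patterns can be computed as follows.\n\\begin{verbatim}\nimport numpy as np\n\ndef target_distribution(day_labels, n_clusters=7):\n    # fraction of the household's days assigned to each load pattern\n    counts = np.bincount(day_labels, minlength=n_clusters)\n    return counts \/ counts.sum()\n\n# cluster index of every daily profile of one household\nday_labels = np.array([2, 2, 5, 2, 6, 5, 2])\nprint(target_distribution(day_labels))  # entries in [0,1], summing to 1\n\\end{verbatim}\n\n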
Therefore, we introduce the SoftMax function to normalize the output as a valid probability distribution.\n\nIn summary, the developed DNN based model takes consumers' demographic and socioeconomic features as the input and the probability distribution of load patterns as the output, thereby correlating the demographic and socioeconomic features with load patterns and revealing their relationship.\n\n\\section{Case Study}\nTo validate our developed method in Section~\\ref{sec:method}, we use real-world data described in Section~\\ref{sec:data} as a case study. We implement a DNN model with $3$ hidden layers, each of which includes $100$ neurons. The number of residents in each age group, annual household income, the education level of the residents, and total square footage of the house are used as inputs. The outputs are the probability distributions of load patterns. Considering the foreseeable difference in behaviors on weekdays and weekends, we split the load data into weekday and weekend load data. We use 80\\% of the data for training and the remaining 20\\% for the test based on the weekday and weekend load data, respectively. \n\n\\subsection{Model Performance}\nWe compare our developed DNN model with a linear regression model and a polynomial regression model. The data used for training and test are the same for all of the models. Due to the page limit, we only depict the comparison results for weekday load. Figure \\ref{f12} shows the load patterns described as $7$ clusters. Figure \\ref{f11} shows a comparison of weekday load patterns for three typical consumers with different demographic and socioeconomic features.\n\\begin{figure*}[!htbp] %\n \\centering\n \\includegraphics[width=0.85\\linewidth]{1to7}\n \\caption{The $7$ representative load patterns on weekdays.}\n \\label{f12}\n\\end{figure*}\n\n\\begin{figure*}[!htbp] %\n \\centering\n \\includegraphics[width=0.32\\linewidth]{household_59_wkday}\n \\includegraphics[width=0.32\\linewidth]{household_121_wkday}\n \\includegraphics[width=0.32\\linewidth]{household_585_wkday}\n \\caption{The comparison of three different users on weekdays.}\n \\label{f11}\n\\end{figure*}\n\nIn Figure \\ref{f12}, we see that the lower electricity consumption periods may suggest that residents are resting or not at home, while the first peak load (as shown in Clusters 1, 2, and 4) usually means the start of activities in the morning, such as preparing breakfast or doing laundry before going to work. In some clusters (such as Clusters 1, 2, 5, and 6), we see peak load in the evening, suggesting that residents get back home after work or start preparing dinner and have recreational activities. In Cluster 3, the peak occurs during midday, which indicates that residents are at home and residents' activities are concentrated throughout the day. Cluster 7 has a late-night peak, but the electricity consumption is very low during the day. A possible reason for this load pattern is that residents are sleeping or not at home during the daytime but awake at night.\n\nThe residents' load patterns reflect their household activities, which are correlated with their demographic and socioeconomic background. Figure \\ref{f11} shows the estimated results on three residents' probability distributions of load patterns based on their demographic and socioeconomic features using our DNN model and two regression benchmarks, compared with the ground truth. 
We see that the probability distributions of the load patterns for both Household 59 and Household 585 are concentrated on two specific clusters, but the probability distribution of the load patterns for Household 121 is less concentrated, which may be related to the age distribution of the residents. Both Household 59 and Household 585 have residents with ages over 65, but Household 121 also has children under 12 years of age, which suggests that the elderly usually have a more regular routine. Moreover, both Household 59 and Household 585 have load patterns concentrated in Cluster 2 and Cluster 4 with a peak during morning hours. This could be because elderly users are more inclined to wake up early and have a hot breakfast. The probability distribution of Household 121 is less concentrated than those of the other two households, but Household 121 has a higher fraction of load in Cluster 2 and Cluster 6, indicating a peak during the evening hours. This can be caused by households with children spending the evening on family entertainment activities. Such households with an evening peak load are more likely to be called for demand response, and their household activities will be affected.\n\nFurthermore, the DNN model has a better fit than the two benchmark models. Due to the complex relationship between consumers' demographic and socioeconomic characteristics and the probability distribution of their load patterns, the two benchmark models may under-fit, while our DNN model can provide more accurate results. The errors of the three models are measured by the Mean Squared Error (MSE), and a smaller MSE means a better fit. The DNN model reduces the MSE by 90\\% compared to the linear regression model and by more than 70\\% compared to the polynomial regression model. Note that different results would be obtained using a different dataset. Our work provides a methodology to model the load patterns and study the relationship between load patterns and demographic and socioeconomic features; we do not intend to emphasize the improvement in estimation accuracy.\n\n\\section{Discussions and Conclusion}\nWe developed an analytical method to model residential load patterns and discover the relationship between the load patterns and demographic and socioeconomic factors. We use the SAX method to remove the impact of measurement noise in the load data, symbolically represent the data, and approximate the load profiles. We use K-means clustering to model the representative load patterns. Using demographic and socioeconomic factors as inputs and load patterns as outputs, we develop a DNN model to study the relationship between the load data and the demographic and socioeconomic data. Our model generates new insights into how consumers with different demographic backgrounds use electricity.\n\nIn our future work, we plan to further reduce the impact of measurement noise in the load data when modeling load patterns. The data availability also restricts the current results, and we plan to find a larger dataset for validation. We also aim to analyze the impact of energy tariffs and policy on consumers and improve energy equity.\n\n\\newpage\n\\bibliographystyle{ACM-Reference-Format}\n
A large amount of data collected from smart meters has great potential to enhance energy efficiency, improve energy services, and secure system operations \\cite{RN99}. Many existing studies, such as \\cite{haben2015analysis}, analyzed residential customers' energy consumption behaviors using smart meter data. Studies in \\cite{RN79} took a further step by revealing major factors that determine energy consumption behaviors, including socioeconomic factors and dwelling factors. \nThe exploration of linkages between load patterns and demographic and socioeconomic factors can help provide more personalized services. Different social groups have different load patterns \\cite{jain2017data}. If these differences can be identified, the impact of energy policies and programs on energy equity and justice can be better understood.\nHowever, the relationship between users' energy consumption behaviors and their demographic and socioeconomic background (e.g., age, income, and education) is less well understood. Therefore, this paper aims to develop a methodology to model consumers' load patterns using smart meter data and establish the link between load patterns and demographic and socioeconomic factors.\n\n\nThe relationship between consumers' load patterns and their demographic and socioeconomic information has been analyzed in the literature. For example, the authors in \\cite{RN3} used a random forest model to assess the impact of socioeconomic and environmental factors on residential energy consumption. The study in \\cite{RN1} pointed out that the consumption behavior is more strongly linked to intrinsic factors than to characteristics of the residential environment. In \\cite{RN2}, an interesting observation was made based on demographic information, and the results showed that younger users had their peak consumption later in the day than elderly users. In \\cite{RN67}, socioeconomic information, such as income and education levels, was selected as features to train a prediction model for the load distribution. The studies mentioned above usually feed the raw load data into clustering or classification models, so measurement noise can affect the results. Moreover, from the operator's perspective, the information regarding load shape (e.g., single peak or dual peak) and peak time matters more than subtle load variations in the off-peak period. Hence, a more effective method is needed to model consumption behaviors using smart meter data.\n\nThis paper develops an effective methodology to address the above shortcomings in handling smart meter data and improve the understanding of the relationship between load patterns and demographic and socioeconomic factors. 
\nSpecifically, we use clustering techniques to extract consumers' load patterns and build a machine learning model to study the relationship between load patterns and demographic and socioeconomic factors.\n\\begin{itemize}\n \\item We use symbolic aggregate approximation (SAX) to process the smart meter data and eliminate the effect of noises on the clustering results.\n \\item We model load patterns using a set of load clusters, representing different consumption behaviors.\n \\item We build a deep neural network (DNN) model to study the relationship between load patterns and demographic and socioeconomic information, such as consumers' age, income level, education level, and premise area.\n \\item Our developed methodology is validated using real-world data of households.\n\\end{itemize}\nNote that the data involved in this work are anonymized and do not involve personal privacy issues. Also, our study focuses on the energy consumption patterns at the group level, aiming to improve the understanding of different groups' behaviors for better energy equity. \n\n\\section{Data Description}\\label{sec:data}\nThis paper uses data from the Pecan Street database \\cite{RN128} as a case study to validate our design and study the relationship between load patterns and demographic and socioeconomic factors. This database includes smart meter data of over a thousand households at a resolution of one hour. This dataset also provides these households' demographic and socioeconomic information, including the number of residents in each age group, annual household income, education level, and the household audit data. \n\n\\subsection{Load Data}\nWe first pre-process the load data to remove invalid and duplicate values and obtain valid load data of $312$ households from January 1st, 2015 to December 31st, 2017. Specifically, the load data are recorded as the electricity consumption in kWh for each household in each hour, thus forming daily load profiles over three years with a total of $341,328$ data entries.\n\n\\subsection{Demographic and Socioeconomic Data}\nAnother essential part of the data is consumers' demographic and socioeconomic information. According to \\cite{RN128}, consumers' demographic and socioeconomic information includes many aspects, such as income level, education level, age of residents, household location, and year built. Since not all the demographic and socioeconomic data serve meaningful features, simply including them will increase models' complexity and reduce accuracy. It is necessary to eliminate redundant features and select the appropriate variables. According to the results from the entropy-based feature selection in \\cite{RN67}, the most relevant demographic and socioeconomic characteristics to load patterns are age, education, income, and household square footage. Therefore, we choose the above characteristics as the demographic and socioeconomic information in our study.\n\n\\section{Methods for Characterizing Load Patterns}\\label{sec:method}\nTo study the relationship between load patterns and demographic and socioeconomic factors, we process the load data to model representative load patterns and then build a machine learning model to analyze the relationship. Our method consists of three parts: processing load data, clustering load patterns, and developing a model to find the relationship between load patterns and demographic and socioeconomic factors. 
These three parts are presented in Sections \\ref{subsec:sax}-\\ref{subsec:dnn}, respectively.\n\n\\subsection{Representing Load Data with SAX}\\label{subsec:sax}\nWe further process the original load data for the following reasons. The hourly time-series load data collected by smart meters often contain measurement noise. Moreover, the operator or the utility cares more about the peak load and the load type, such as dual peak or single peak, and when the peaks appear. The subtle differences between load profiles, especially in off-peak hours, do not provide many insights but make load clustering less effective. Dimension reduction is therefore necessary to eliminate these effects and capture the key information in the load profiles. In \\cite{RN29}, SAX has been used to convert the load profiles into symbol strings. This process eliminates the influence of noise and subtle variations and highlights the trend change of the load profile. We use the SAX method to process the load data in our work.\n\nThe SAX algorithm proposed in \\cite{RN66} extends the Piece-wise Aggregate Approximation (PAA) while keeping the low complexity of the original method. It converts time-series data into a few segments, each represented by a discrete value, as an approximation. In this paper, we use PAA to process the original time-series load data $\\boldsymbol{L} = \\{ l_1, l_2, \\ldots, l_H\\} $, which represents the daily load profile of $H=24$ hours. We evenly divide the $24$ hours into $S$ segments, each of which has an interval length of $\\tau$. PAA converts the original time-series load $\\boldsymbol{L}$ into an approximating representation $\\bar{\\boldsymbol{L}} = \\{\\bar{l}_1, \\bar{l}_2, \\ldots, \\bar{l}_S\\}$, where $\\bar{l}_s,~s=1,...,S$ is calculated as\n\\begin{equation}\n\\bar{l}_s=\\frac{1}{\\tau} \\sum_{t=\\tau(s-1)+1}^{\\tau s} {l_t}.\n\\end{equation}\n\nAfter the PAA representation is complete, we use the Gaussian distribution to find the interval breakpoints, each interval corresponding to one discrete approximation value. Each segment is allocated to the approximation value corresponding to its closest breakpoint, thus dividing the time-series load profile into a few segments taking several discrete values \\cite{RN66}. In our case study, we choose $\\tau = 3$ and divide the $24$-hour load profiles into $S = 8$ segments. The load data in each segment can be represented by one of five discrete values \\cite{RN125} as an approximation of consumers' load profiles.\n\n\\subsection{Modeling Load Patterns via Clustering}\\label{subsec:cluster}\nAfter processing the load data with SAX, it is necessary to extract several representative load patterns from the processed load data. Specifically, we use the clustering technique to model load patterns by dividing the SAX-processed load profiles into several clusters, each of which represents a representative load pattern.\n\nThe partitioning clustering methods \\cite{RN97} have been widely used to cluster time-series data, and K-means is one of the most commonly used partitioning clustering methods. The main advantage of K-means is that it is easy to implement and scale up \\cite{RN114}, and thus we choose K-means as the clustering method in our work. The number of clusters $K$ must be chosen carefully to represent typical load patterns. If $K$ is too large, clusters can be very similar to each other and do not exhibit meaningful differences. If $K$ is too small, some typical load patterns may be missing. 
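A minimal sketch of this selection procedure is given below (a hypothetical scikit-learn implementation; the paper does not specify its clustering toolchain, and the candidate range of $K$ is our own choice). The two criteria it computes, the SSE and the silhouette index, are discussed next.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import silhouette_score\n\n# X: SAX-processed daily load profiles, one row per profile\n# (e.g., the 8 segment symbols mapped to their numeric levels).\ndef select_k(X, candidates=range(2, 13)):\n    scores = {}\n    for k in candidates:\n        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)\n        sse = km.inertia_                      # sum of squared errors (SSE)\n        sil = silhouette_score(X, km.labels_)  # silhouette index\n        scores[k] = (sse, sil)\n    # inspect the SSE elbow and pick the K with the best silhouette index\n    return scores\n\\end{verbatim}\n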
Therefore, we consider both Sum of Squared Errors (SSE) and the silhouette index \\cite{RN127} to select the most appropriate value for $K$.\n\nA smaller SSE indicates better clustering performance. As the number of clusters $K$ increases, the SSE will gradually become smaller. When $K$ reaches an appropriate number, the declining rate of SSE diminishes and stabilizes even with an increasing $K$. In our case, the SSE starts to stabilize after the cluster number $K$ reaches $5$, so any $K$ greater than $5$ can be considered. To determine the best $K$, we further consider the silhouette index, as it is also an important measure of clustering performance \\cite{RN127}. A larger silhouette index means that the load profiles are closer to each other within a cluster but more distant from each other across different clusters, indicating better clustering performance. In our study, the silhouette index peaks at a cluster number of $K = 7$. Therefore, we choose $K= 7$ clusters to model load patterns for achieving a low SSE and the highest silhouette index.\n\nTo sum up, this paper divides the load profiles into $7$ clusters by the K-means method, representing $7$ different electricity consumption behaviors. We further study how the consumers' load patterns are correlated with the demographic and socioeconomic features in Section~\\ref{subsec:dnn}.\n\n\\subsection{Machine Learning Model Based on DNN}\\label{subsec:dnn}\nWe develop a machine learning model to explore the relationship between demographic and socioeconomic characteristics and consumers' load patterns. Our problem can be regarded as a regression problem, in which the independent variables are consumers' demographic and socioeconomic features, and the dependent variables are the probabilities of load patterns. We tried to solve the problem with regression models, such as linear regression and polynomial regression, but the fit was not satisfactory due to unknown nonlinearity in the data. Therefore, we tried more efficient models. DNN is a powerful machine learning method with many successful applications \\cite{RN67}. It consists of a collection of neurons organized in a sequence of multiple layers, which can incorporate nonlinear activation functions. Therefore, this paper uses a DNN model and takes a linear regression model and a polynomial regression model as two benchmark models.\n\nAfter the clusters have been identified in our study, we have a set of representative load patterns. Each consumer's load profiles can be modeled as a probability distribution over the representative load patterns. We take users' demographic and socioeconomic features as the input of the DNN model to find the relationship between them and the load patterns. \nA set of percentages of load profiles belonging to each representative load pattern is the estimated target, i.e., the DNN model's output. \nThe probability for each cluster is always in the range [0,1], and the sum of all probabilities is always $1$. 
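For concreteness, a minimal sketch of such a model is shown below (hypothetical PyTorch code; the paper does not specify its implementation framework). The three hidden layers of $100$ neurons match the configuration used in the case study below, and the SoftMax output layer, introduced next, turns the raw network outputs into a valid probability distribution over the $7$ load patterns.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\n# Inputs: demographic and socioeconomic features (residents per age group,\n# annual income, education level, total square footage). The exact input\n# dimension depends on the feature encoding; 8 is assumed here.\nmodel = nn.Sequential(\n    nn.Linear(8, 100), nn.ReLU(),\n    nn.Linear(100, 100), nn.ReLU(),\n    nn.Linear(100, 100), nn.ReLU(),\n    nn.Linear(100, 7),\n    nn.Softmax(dim=1),  # normalize the 7 outputs into a probability distribution\n)\n\nloss_fn = nn.MSELoss()  # fit against the empirical cluster frequencies\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\n\ndef train_step(x, target):  # x: (batch, 8), target: (batch, 7)\n    optimizer.zero_grad()\n    loss = loss_fn(model(x), target)\n    loss.backward()\n    optimizer.step()\n    return loss.item()\n\\end{verbatim}\n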
Therefore, we introduce the SoftMax function to normalize the output as a valid probability distribution.\n\nIn summary, the developed DNN based model takes consumers' demographic and socioeconomic features as the input and the probability distribution of load patterns as the output, thereby correlating the demographic and socioeconomic features with load patterns and revealing their relationship.\n\n\\section{Case Study}\nTo validate our developed method in Section~\\ref{sec:method}, we use real-world data described in Section~\\ref{sec:data} as a case study. We implement a DNN model with $3$ hidden layers, each of which includes $100$ neurons. The number of residents in each age group, annual household income, the education level of the residents, and total square footage of the house are used as inputs. The outputs are the probability distributions of load patterns. Considering the foreseeable difference in behaviors on weekdays and weekends, we split the load data into weekday and weekend load data. We use 80\\% of the data for training and the remaining 20\\% for the test based on the weekday and weekend load data, respectively. \n\n\\subsection{Model Performance}\nWe compare our developed DNN model with a linear regression model and a polynomial regression model. The data used for training and test are the same for all of the models. Due to the page limit, we only depict the comparison results for weekday load. Figure \\ref{f12} shows the load patterns described as $7$ clusters. Figure \\ref{f11} shows a comparison of weekday load patterns for three typical consumers with different demographic and socioeconomic features.\n\\begin{figure*}[!htbp] %\n \\centering\n \\includegraphics[width=0.85\\linewidth]{1to7}\n \\caption{The $7$ representative load patterns on weekdays.}\n \\label{f12}\n\\end{figure*}\n\n\\begin{figure*}[!htbp] %\n \\centering\n \\includegraphics[width=0.32\\linewidth]{household_59_wkday}\n \\includegraphics[width=0.32\\linewidth]{household_121_wkday}\n \\includegraphics[width=0.32\\linewidth]{household_585_wkday}\n \\caption{The comparison of three different users on weekdays.}\n \\label{f11}\n\\end{figure*}\n\nIn Figure \\ref{f12}, we see that the lower electricity consumption periods may suggest that residents are resting or not at home, while the first peak load (as shown in Clusters 1, 2, and 4) usually means the start of activities in the morning, such as preparing breakfast or doing laundry before going to work. In some clusters (such as Clusters 1, 2, 5, and 6), we see peak load in the evening, suggesting that residents get back home after work or start preparing dinner and have recreational activities. In Cluster 3, the peak occurs during midday, which indicates that residents are at home and residents' activities are concentrated throughout the day. Cluster 7 has a late-night peak, but the electricity consumption is very low during the day. A possible reason for this load pattern is that residents are sleeping or not at home during the daytime but awake at night.\n\nThe residents' load patterns reflect their household activities, which are correlated with their demographic and socioeconomic background. Figure \\ref{f11} shows the estimated results on three residents' probability distributions of load patterns based on their demographic and socioeconomic features using our DNN model and two regression benchmarks, compared with the ground truth. 
We see that the probability distributions of the load patterns for both Household 59 and Household 585 are concentrated on two specific clusters, but the probability distribution of the load patterns for Household 121 is less concentrated, which may be related to the age distribution of the residents. Both Household 59 and Household 585 have residents with ages over 65, but Household 121 also has children under 12 years of age, which suggests that the elderly usually have a more regular routine. Moreover, both Household 59 and Household 585 have load patterns concentrated in Cluster 2 and Cluster 4 with a peak during morning hours. This could be because elderly users are more inclined to wake up early and have a hot breakfast. The probability distribution of Household 121 is less concentrated than those of the other two households, but Household 121 has a higher fraction of load in Cluster 2 and Cluster 6, indicating a peak during the evening hours. This can be caused by households with children spending the evening on family entertainment activities. Households with an evening peak load are more likely to be called for demand response, and their household activities will be affected.\n\nFurthermore, the DNN model has a better fit than the two benchmark models. Due to the complex relationship between consumers' demographic and socioeconomic characteristics and the probability distribution of their load patterns, the two benchmark models may under-fit, while our DNN model can provide more accurate results. The errors of the three models are measured by the Mean Squared Error (MSE), and a smaller MSE means better fitting. The DNN model reduces the MSE by 90\\% compared to the linear regression model and by more than 70\\% compared to the polynomial regression model. Note that different results will be obtained using a different dataset. Our work provides a methodology to model the load patterns and study the relationship between load patterns and demographic and socioeconomic features. We do not intend to emphasize the improvement in estimation accuracy.\n\n\\section{Discussions and Conclusion}\nWe developed an analytical method to model the residential load patterns and discover the relationship between the load patterns and the demographic and socioeconomic factors. We use the SAX method to remove the impact of measurement noise in the load data, symbolically represent the data, and approximate the load profiles. We use K-means clustering to model the representative load patterns. Using demographic and socioeconomic factors as inputs and load patterns as outputs, we develop a DNN model to study the relationship between the load data and the demographic and socioeconomic data. Our model generates new insights into how consumers with different demographic backgrounds use electricity.\n\nIn our future work, we plan to further reduce the impact of measurement noise in the load data when modeling load patterns. The data availability also restricts the current results, and we plan to find a larger dataset for validation. 
We also aim to analyze the impact of energy tariffs and policy on consumers and improve energy equity.\n\n\\newpage\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSatellite gravimetry missions, such as the GRACE (Gravity Recovery And Climate Experiment) \\cite{Tapley2004} and the GOCE (Gravity field and steady-state Ocean Circulation Explorer) \\cite{Drinkwater2003, Johannessen2003} missions, have revolutionized our knowledge of the gravity field over the whole Earth surface and our understanding of mass redistribution and mass transport processes on a global scale. In particular, the GOCE mission, launched in 2009 and active up to 2013, carried a gravity gradiometer on-board a satellite for the first time. It allowed for a precise measurement of the static gravity field with unprecedented accuracy and spatial resolution. The geoid was determined with an accuracy of about 1 to 2 cm for a spatial resolution of 100 km on the Earth surface \\cite{Brockmann2014a}. By providing the Earth gravity field down to small spatial scales, our understanding of a number of processes related to solid-earth physics and ocean circulation has been greatly improved \\cite{Knudsen2011} and the global unification of height systems \\cite{Rummel2014} could be implemented. GOCE also brought new and unexpected scientific results in seismology, space weather and changes in ice masses. In this mission, differential accelerations measured on-board a single satellite with an ensemble of ultra-sensitive electrostatic accelerometers allowed to determine all components of the gravity gradient tensor, with best sensitivities in the range of 10-20 mE\/Hz$^{1\/2}$ in the measurement bandwidth (i.e. 5-100 mHz), out of which models of the gravity field could be reconstructed.\n\n\n\n\n\nIn this article, we show that the use of cold-atom-based gravity gradiometers, on-board a dedicated satellite at low altitude (250-300 km), can meet the requirements to improve our present knowledge of the Earth gravity field. In particular, the GOCE gravity gradients showed poor performance in the lower frequency band, where the noise power spectral density (PSD) increases with the inverse of the frequency. Dealing with this low-frequency noise is a great challenge for gravity field recovery, where special decorrelation filters were tailored and used for whitening the noise \\cite{Schuh2003}. This will not be a problem for a gravity gradiometer based on a cold atom interferometer (CAI), as it naturally provides gravity gradients with white noise at all frequencies, except for very high frequencies (i.e. above 100 mHz) which is not relevant for gravity field recovery from space. Moreover, this novel atom-based gradiometer is expected to provide gravity gradients with an improved sensitivity level of the order of 5 mE\/Hz$^{1\/2}$.\n\nThe article is structured as follows. We start in section \\ref{sec:principle} by describing the instrument and measurement principle, which relies on a state-of-the-art manipulation sequence of the atomic source described in \\ref{sec:source}. We then calculate in section \\ref{sec:phase} the phase shift of the interferometer for arbitrary gradients and rotation rates, in the simplified case of a circular orbit around a spherical and homogeneous Earth. This allows to derive in section \\ref{sec:error} specifications for the control of the atomic source parameters and for the attitude control of the satellite. 
A Monte Carlo simulation of the interferometer is then presented in section \\ref{sec:model}, which allows us for accounting in a comprehensive way for the geometry of the interferometer and furthermore to evaluate the loss in sensitivity as well as the amplitude of several systematic effects due for example to the finite size of the laser beams and the atomic cloud or the finite duration of the interferometer pulses. The results of this simulation are used to refine key specifications for the laser system setup and the atomic source, in order to keep parasitic differential phase shifts (both noise and systematic) below the target uncertainty. \n\nSection \\ref{Sec:Design} is dedicated to the instrument design and related engineering aspects. Details on the design of critical elements and subsystems are given, in particular on the retroreflecting mirror, on the laser, vacuum and detection systems. Engineering tables are elaborated. Finally, we evaluate in section \\ref{Performance_analysis} the impact of the sensor performance for gravity field recovery. This is performed thanks to numerical simulations of the measured gravity in the presence of realistic noise for the sensor and the control of the attitude of the satellite.\n\n\n\\section{Principle of the measurement}\n\\label{sec:principle}\nThe concept of the gradiometer is based on the geometry proposed in \\cite{Carraz2014}, which measures differential acceleration with two spatially separated atom interferometers (AIs) \\cite{Snadden1998}. The interferometers are realized using a sequence of three light pulses based on stimulated Raman transitions. The momentum transfer provided by the Raman diffraction process allows the splitting, redirection and recombination of the atomic wave packets along two paths, thus creating an atomic analogy of a Mach-Zehnder interferometer. In such atom interferometers~\\cite{Berg2015PRL,Louchet2011NJP,Gauguet2009PRA,Kasevich91PRL}, the atomic populations in the two output ports are modulated with the phase difference accumulated along the two paths. For an acceleration $a$ along the direction of the laser beams, this phase is given by $\\Phi=kaT^2$, where $\\hbar k$ is the momentum imparted by the Raman transitions onto the atoms, and $T$ is the free evolution time in between two consecutive Raman pulses. Performing differential acceleration measurements with two such interferometers separated by a distance $D$ allows extracting the gravity gradient $\\gamma$ out of the differential phase shift $\\Delta\\Phi=\\Phi_2-\\Phi_1=k(a_2-a_1)T^2=k\\gamma D T^2$, with $a_1$ and $a_2$ the accelerations experienced by the atoms in the two interferometers~\\cite{McGuirk02PRA}. Moreover, using the same Raman lasers for the two interferometers enables a high rejection ratio to common mode sources of noise and systematic effects~\\cite{McGuirk02PRA,Bertoldi2006}. More details on the working principle of the interferometers will be given in section \\ref{doublediffraction}. \n\nAssuming an interferometer phase noise at the mrad\/shot level, the corresponding sensitivity to the gradient is in the mE\/shot range, for a pulse separation time $T=5$~s and a distance $D=0.5$~m. To take a full advantage of this excellent single-shot sensitivity, a high measurement rate is desirable, which can be achieved by interrogating several atomic clouds at the same time \\cite{Biedermann2013,Dutta2016}. 
This interleaved scheme requires producing atomic sources with a cycle time significantly shorter than the interferometer duration \\cite{savoie2018interleaved}. With a production time of about 1~s, the corresponding sensitivity would lie in the low mE\/Hz$^{1\/2}$ range, which compares favourably with the ultra-sensitive electrostatic gradiometers of the GOCE mission. \n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{Concept-Scheme.pdf}\n\\caption{(a) Scheme of the gravity gradiometer, based on differential accelerometry with two separated atom interferometers. (b) An initial BEC source of $10^6$~atoms is magnetically evaporated, displaced and collimated in $1.1$~s. (c) Horizontal transport step to the interferometry chamber (12\\,cm in 100\\,ms). (d) The BEC is split in two by the combination of a double Raman diffraction and a twin-lattice technique feeding both interferometers with ensembles at a horizontal velocity of 4 recoils.}\n\\label{Concept-Scheme}\n\\end{center}\n\\end{figure}\n\n\n\\section{Source preparation}\n\\label{sec:source}\nThe presented measurement principle is not limited to a specific atomic species. It requires, however, the production of an atomic source at a high flux ($10^6$~atoms\/s) to reach the targeted phase noise of 1\\,mrad per cycle of 1\\,s, yet at a low expansion rate (0.1~mm\/s) characteristic of near-condensed regimes. We analyzed possible candidates of high-flux degenerate sources in terms of their technical readiness for the proposed space-borne gravity gradiometer. Sources based on alkaline atoms (e.g. Li, Na, K, Rb, Cs) are widely used in cold-atom experiments and have shown excellent performance. Due to their rather simple energy level structure, several laser cooling schemes are applicable, and evaporative cooling to degeneracy is possible both in magnetic and optical potentials~\\cite{Ketterle1995,Mewes-PRL-1997,Clement2009,Naik2018}. In particular, Rubidium sources have been established as reliable sources for atom interferometry in fundamental physics experiments as well as in geodetic applications~\\cite{zhou2015test,rosi2014precision,hauth2013first}. High-flux evaporative cooling of Rb ($>10^5$~atoms\/s) has been shown with both atom chips~\\cite{Rudolph2015} and dipole traps~\\cite{PhysRevA.71.011602}, and atom interferometry can be performed with either Raman or Bragg transitions in single or double diffraction configurations~\\cite{le2008limits,Leveque2009,altin2013precision,ahlers2016double}. Recently, alkaline-earth-like atoms (e.g. Sr, Yb) have been successfully cooled to degeneracy~\\cite{stellmer2013laser,fukuhara2007bose} and are considered to be promising candidates for high-precision interferometry as well. Thanks to their special energy level structure, they are immune to residual quadratic Zeeman shifts, one of the dominant contributions to the uncertainty budget in alkaline-based atom interferometers. Moreover, interferometry can be performed on the clock transition to suppress technical laser phase noise, thus increasing the performance of precision measurements, e.g. weak equivalence principle tests, gradiometry or gravitational wave detection~\\cite{hogan2011atomic,Canuel2018}. The source flux of alkaline-earth-like atoms is, however, not yet at the same level of performance as the Rb ones. Besides the performance of the source, the maturity of the required technology is crucial for a successful space mission~\\cite{Loriani2018GOAT}. 
The cooling and manipulation laser sources of Rb could be derived from two complementary systems, which are available and field-proven: compact diode lasers at 780~nm together with free-space optics~\\cite{schkolnik2016compact} or fiber-based laser systems fed by frequency-doubled laser running at 1560~nm~\\cite{theron2015narrow}. High-flux sources of condensed Rb have been already demonstrated in transportable and space qualified systems~\\cite{Becker2018}. In addition, complexity, power consumption, size and mass considerations are in favor of a Rb-based choice compared to setups based on Sr or Yb. With recent progress made by the QUANTUS, MAIUS and CAL consortia ~\\cite{abend2016atom,Becker2018,elliott2018nasa}, the atom chip solution has been assessed to be more advanced than the dipole trap one and will be the baseline for the proposed setup.\n\nIn order to span the required baseline of 50 cm together with a cycle time close to 1 s, we propose the following sequence for the transport and preparation of the atoms: we start by producing the BEC (Bose-Einstein Condensate) atoms in close vicinity of the atom chip in a dedicated chamber (left frame a in figure~\\ref{Concept-Scheme}). With an optimized atomic chip design, building on the work of \\cite{Rudolph2015}, about $10^6$ ultracold atoms could be produced in less than 1~s (i.e. 800 ms) at a distance of few hundred microns. The created BEC is immediately magnetically displaced up to a distance of about 5 mm away from the chip surface (frame b in figure~\\ref{Concept-Scheme}) using the external magnetic coils and shortcut-to-adiabaticity protocols as proposed in \\cite{Corgier2018} and implemented in~\\cite{rudolph2016matter}, allowing to reduce the duration of this transport down to about 200 ms. The geometry of the displaced trap can be tailored, thanks for instance to several layers of Z-shaped wires on the chip, to be almost spherically symmetric~\\cite{li2018expansion}, with a final trapping frequency of about 15 Hz. These fast transports have the feature of inducing very low residual dipole oscillations in the final trap, which is an essential ingredient for the next steps. If the quantum gas needs, in addition, to be in its ground state, optimal control solutions are available and proved to be equally fast~\\cite{amri2018optimal}. The atomic ensemble is released from this weak trap, freely expanding for 100\\,ms before being collimated by a magnetic lens flashed for about 1.2\\,ms~\\cite{Muntinga2013}. This drastically reduces the expansion rate of the cloud, already at a Thomas-Fermi radius of $150~\\mu$m, down to a calculated effective expansion energy of 100\\,pK. The subsequent time evolution of the size of the BEC complies with the interferometric requirements on the atomic source. The control accuracy of the position and the velocity of the atomic clouds at the end of this first chip manipulation are estimated to be of the order of a fraction of a $\\mu$m and $\\mu$m\/s, respectively\n\\cite{Corgier2018,rudolph2016matter,amri2018optimal}. \n\n\nTo move the atomic ensembles into the interferometry region, a first Raman double diffraction initiates the cloud momentum at $4\\,\\hbar k$ as indicated in frame b of figure~\\ref{Concept-Scheme}. This ensures a transport by a Bloch accelerated optical lattice (blue beam in figure~\\ref{Concept-Scheme}(c)) that moves the atoms to the interferometry chamber by imparting, in few ms, a velocity corresponding to 200 recoils \\cite{Cadoret2008}. 
Thanks to the atomic cloud collimation step, it is possible to use a small beam waist of 1-2 mm for the Bloch lattice and thus keep the power usage at a reasonable level of roughly 200 mW. Once the atoms reach the interferometry chamber (12\\,cm in 100\\,ms), the same optical lattice with opposite acceleration direction is used to decelerate the atoms back to a final momentum of $4\\,\\hbar k$, as they started. \nAt this point, the atom source has to be split into two halves to be moved upwards and downwards in order to feed the two interferometers, as indicated in figure~\\ref{Concept-Scheme}(d). This is realized by combining the use of double-Raman-diffraction beams~\\cite{Leveque2009} and twin-Bloch lattices \\cite{abend2017chipgravi}. A first pair of retro-reflected Raman pulses splits the BEC to generate a pair of vertically moving momentum classes with $\\pm 4 \\hbar k$. \nBy reflecting two Bloch lattices on the mirror, with 100 GHz relative detuning, one is able to create two running lattices, similarly to what was done in~\\cite{abend2017chipgravi}. The advantage of this highly symmetric scheme is that each of the initially Raman-split clouds will interact with the optical lattice moving in the same direction, thanks to Doppler selectivity. The same treatment as in frame (c) is subsequently applied, with an acceleration up to 200 recoil velocities followed by a deceleration to a still momentum state. The transport distance needs, however, to be double that of the horizontal step, reaching 24 cm in 200 ms.\nIn this manner, each of the gradiometer's Mach-Zehnder interferometers is fed by an incoming flux of atoms with $4\\,\\hbar k$ initial velocity.\n\nIn total, the entire process of production, manipulation and transport of the BEC takes 1.4~s. However, a new magneto-optical trap (MOT) production can start as soon as the previously produced BEC has been loaded into the vertical lattices, resulting in an effective cycle time of only 1.2~s. With that scheme, it would be possible to run up to 8 interferometers simultaneously.\n\n\\section{Interferometer phase shift}\n\\label{sec:phase}\nWe calculate in this section the phase shift of a (single) atom interferometer linked to the frame of a satellite, orbiting at a fixed altitude with a constant orbiting frequency $\\Omega_{\\mathrm{orb}}$. The measurement axis is taken in the orbital plane, initially along $z$. We take into account the rotation rates of the satellite $\\Omega_{x,y,z}$, which allows us to discuss the cases of nadir pointing, for which $\\Omega_y = \\Omega_{\\mathrm{orb}}$ and $\\Omega_{x,z} \\simeq 0$, and inertial pointing, for which $\\Omega_{x,y,z} \\simeq 0$. The calculation of the atom trajectories is performed in the satellite frame, hence one needs to rotate the gravity gradient tensor. This is performed by applying a product of three orthogonal rotations, starting with the rotation along $y$, whose amplitude eventually largely dominates in Nadir pointing. As long as the two other rotations are small enough, this correctly deals with the influence of the satellite rotation at the leading orders.\n\nWe also consider implementing compensation systems for the following two physical quantities:\n\n- the rotation of the satellite, in order to obtain mirrors with a fixed orientation in the frame of the atoms. 
This configuration can be obtained by tilting the two first and last retroreflecting mirrors by angles $\\pm\\theta=\\pm \\Omega_y T$, where $\\Omega_y$ is the rotation rate along $y$, the cross-track axis, as displayed in figure~\\ref{tiltedmirrors}. As we will see below, this configuration removes the sensitivity of the interferometer to centrifugal and Coriolis accelerations;\n\\begin{figure}[ht]\\centering\n\\includegraphics[width=5in]{TiltedMirrorsConfig.pdf}\n\\caption{Tilted mirror configuration.}\n\\label{tiltedmirrors}\n\\end{figure}\n\n\n- the phase shift induced by the gravity gradient, as recently proposed in \\cite{Roura2017}. This relies on an adequate change by $\\delta k$ of the Raman wavevector at the second pulse. This method has recently been demonstrated in \\cite{Damico2017,Overstreet2018}.\n\nThe trajectories of the atomic wavepackets along the arms of the interferometer are determined analytically by solving the Euler-Lagrange equations using a power series expansion as a function of time $t$, as in ~\\cite{Hogan2009}. Subsequently, the phase shift can be calculated out of the positions of the center of the atomic wavepackets at the different Raman laser pulses using the following formula~\\cite{Borde2004}\n\\begin{eqnarray}\n\\phi = 2k_1\\cdot r_A-4k_2\\frac{r_B+r_C}{2}+2k_3\\frac{r_D+r_{D2}}{2}.\n\\label{formula1}\n\\end{eqnarray}\nWe consider here a $\\pi\/2-\\pi-\\pi\/2$ double diffraction interferometer, such as displayed later on figure \\ref{Interferometer_Scheme}, with effective wave numbers $k_{1,2,3}$, corresponding to two photon transitions. (A, B, D) (resp. (A, C, D2)) are the positions of the centers of the partial wavepackets at the time of the three pulses along the upper (resp. lower) arm of the interferometer. \nThis formula is valid for Hamiltonians at most quadratic in position and momentum \\cite{storey_1994,Borde2004}, which is the case considered here; terms beyond this approximation can be treated as proposed in \\cite{dagostino_2011,Bertoldi2019}.\n\nTable \\ref{tab:phasesingle1} presents the dominant terms contributing to the phase of a single interferometer and to the separation between the wavepackets at the output of the interferometer, listed with respect to their scaling on initial coordinates and velocities of the atoms in the satellite frame $(x,y,z,v_x,v_y,v_z)$. Only the dominant terms, which scale as $T^2$ or $T^3$ depending on the terms, are listed here. For the sake of simplicity, we take here $\\Omega_x=\\Omega_z=0$. We consider the case where the rotation rates $\\Omega_{\\mathrm{orb}}$, $\\Omega_y$ and $\\Omega_m$, the equivalent rotation rate of the mirrors given by $\\Omega_m=\\theta\/T$, are different, and the cases where the rotation and\/or gravity gradient are compensated ($\\Omega_m=\\Omega_y$ and\/or $\\delta k=kT_{zz}T^2\/2$). \n\nHere, $T$ denotes the free evolution time between pulses, $k = 4\\pi\/(780 \\mathrm{nm})$ the effective wave number for a two-photon Raman transition, $T_{zz}$ the gravity gradient. Typical values for the relevant parameters are $\\Omega_{\\mathrm{orb}} = 1.2\\,\\mathrm{mrad\/s}$ (for an altitude of about 250 km), $T = 5\\,\\mathrm{s}$, $T_{zz} = -2.7\\cdot10^{-6}\\,\\mathrm{s}^{-2}$ and $k=16.1\\cdot10^{6}\\,\\mathrm{m}^{-1}$.\n\nThe gravity gradient phase shift is given by $\\Phi=2kzT_{zz}T^2$, and thus leads to a differential phase of $\\Delta\\Phi=2kDT_{zz}T^2$ between two interferometer separated along $z$ by the distance $D$. 
For $D=0.5$~m, and the parameters above, this phase shift is as large as 1087~rad. We assume that the individual phase measurements are performed with a sensitivity limited by detection noise at the quantum projection limit, for which $\\sigma_\\Phi=1\/\\sqrt{N}$, $N$ being the number of detected atoms, assuming an ideal contrast. For $10^6$ detected atoms at the output of each interferometer, we obtain an expected sensitivity of $\\sigma_{T_{zz}}=\\sqrt{2}\/2\\sqrt{N}kDT^2=3.5$~mE\/shot. \n\n \\begin{table*}[!ht]\n \\begin{center}\n \\caption{Leading terms in the phase of a single interferometer}\n \\small\n \\begin{tabular}{|p{0.12\\textwidth}|p{0.25\\textwidth}|p{0.25\\textwidth}|p{0.25\\textwidth}|}\\hline\n & \\textbf{General case} & \\textbf{Compensated rotation} & \\textbf{Compensated gradient and rotation, Nadir pointing} \\\\ \\hline\n\t & Any $\\Omega_{orb,y,m}$ & $ \\Omega_m=\\Omega_y$ &$ \\Omega_m=\\Omega_y=\\Omega_{\\mathrm{orb}}$ \\\\ \n\t & $\\delta k=0$ & $\\delta k=0$ &$\\delta k=kT_{zz}T^2\/2$ \\\\ \\hline\n $x$ & $kx(T_{xx}(5\\Omega_y-4\\Omega_m-\\Omega_{\\mathrm{orb}})+4(\\Omega_y-\\Omega_m)\\Omega_y^2+T_{zz}(\\Omega_{\\mathrm{orb}}-\\Omega_y))T^3 $& $kx(T_{xx}-T_{zz})(\\Omega_{y}-\\Omega_{\\mathrm{orb}})T^3 $ & $ 0 $ \\\\ \\hline\n $y$ & 0 & 0 & 0\\\\ \\hline\n $z$ & $2kz(T_{zz}-\\Omega_m^2+\\Omega_y^2)T^2$ & $2kzT_{zz}T^2$ & 0 \\\\ \\hline\n $v_x$ & $4kv_x(-\\Omega_m+\\Omega_y)T^2$ & 0 & 0 \\\\ \\hline \n $v_z$ & $2kv_z(T_{zz}-\\Omega_m^2+4\\Omega_m\\Omega_y-3\\Omega_y^2)T^3$ & $2kv_zT_{zz}T^3$ & 0 \\\\ \n $v_y$ & 0 & 0 & 0\\\\ \\hline\n Separation & & & \\\\ \\hline\n$\\Delta x$ & $4 \\hbar k T^2(\\Omega_m-\\Omega_y)\/m_{\\mathrm{Rb}} $ & 0 & 0 \\\\\n$\\Delta y$ & 0 & 0 & 0 \\\\\n\n$ \\Delta z$ & $2\\hbar kT^3(T_{zz}-(\\Omega_m-3\\Omega_y)(\\Omega_m-\\Omega_y))\/m_{\\mathrm{Rb}}$ & $2 \\hbar kT^3 T_{zz}\/m_{\\mathrm{Rb}}$ & 0 \\\\\n\\hline\n\\end{tabular}\n \\label{tab:phasesingle1}\n \\end{center}\n \\end{table*}\n\nWe first discuss the case where $\\Omega_m=0$, i.e. without rotation compensation. The gravity tensor measurement is biased by a contribution in $\\Omega_y^2$ due to centrifugal accelerations. The interferometer phase features a Sagnac phase term $4 \\, kv_x \\, \\Omega_y \\, T^2$, and the separation between the two partial wavepackets at the output of the interferometer along $x$ is $4 \\, \\hbar k \\, T^2 \\, \\Omega_y\/m_{\\mathrm{Rb}}$. This leads to a reduction of the interferometer contrast due to dephasing when averaging the Sagnac phase across the velocity distribution. Equivalently, the contrast is reduced when the separation is comparable to the coherence length of the atomic wavepackets. For the resulting loss of contrast to be negligible, one needs $\\sigma_v\\ll 1\/ \\left ( 4k\\Omega_yT^2 \\right )$. This corresponds to temperatures $T\\ll T_0$, where $T_0$ is given by \n\\begin{equation}\nT_0=\\frac{m_{\\mathrm{Rb}}}{16k_Bk^2\\Omega_y^2T^4} \\,\n\\end{equation} \nwhere $k_B$ is the Boltzmann's constant. For Nadir pointing, in which $\\Omega_y=\\Omega_{\\mathrm{orb}}=1.2\\,\\mathrm{mrad\/s}$, a temperature lower than $T_0=3$ fK is required, which is well below what can be achieved with current technology. This limit would also apply for measurements along the $x$-axis, which is also impacted by the large rotation rate along the $y$-axis, but not for measurements along the $y$-axis. Thus, as such, gravity gradient measurements in Nadir configuration can only be performed along one axis. 
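As a sanity check of these orders of magnitude, the short script below (a sketch in Python, using the parameter values quoted above) reproduces the gradiometer phase, the quantum-projection-noise-limited sensitivity $\\sigma_{T_{zz}}=\\sqrt{2}\/(2\\sqrt{N}kDT^2)$ and the temperature limit $T_0$.\n\\begin{verbatim}\nimport numpy as np\n\nk    = 16.1e6     # effective Raman wave number (1/m)\nT    = 5.0        # pulse separation time (s)\nD    = 0.5        # gradiometer baseline (m)\nTzz  = -2.7e-6    # gravity gradient (1/s^2)\nN    = 1e6        # detected atoms per interferometer\nm_Rb = 1.44e-25   # 87Rb mass (kg)\nkB   = 1.38e-23   # Boltzmann constant (J/K)\nW_y  = 1.2e-3     # rotation rate along y in Nadir pointing (rad/s)\n\ndphi    = 2 * k * D * abs(Tzz) * T**2                    # ~1.09e3 rad\nsigma_T = np.sqrt(2) / (2 * np.sqrt(N) * k * D * T**2)   # ~3.5e-12 s^-2 = 3.5 mE\nT0      = m_Rb / (16 * kB * k**2 * W_y**2 * T**4)        # ~3e-15 K = 3 fK\nprint(dphi, sigma_T, T0)\n\\end{verbatim}\n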
\n\nAlso, in a similar manner, gravity gradients put an additional requirement on the atomic temperature, given by $T < T_1$, where $T_1$ is given by \n\\begin{equation}\nT_1=\\frac{m_{\\mathrm{Rb}}}{4k_bk^2T_{zz}^2T^6} \\, .\n\\end{equation} \nFor $T_{zz} = -2.7\\cdot10^{-6}\\,\\mathrm{s}^{-2}$, $T_1\\sim100$~pK. Such a limit is compatible with the ultralow atomic temperatures reached thanks to Delta Kick collimation techniques (of the order of a few tens of pK). This requirement can also be relaxed if the gravity gradient phase shift is compensated for using an appropriate change of Raman laser frequency at the second pulse.\n\nMeasuring the three axes in Nadir configuration requires a compensation scheme for the large rotation along $y$, which can be realized by tilting the Raman mirrors, as discussed above. This corresponds to the second case in table \\ref{tab:phasesingle1}, where the Sagnac phase, as well as the centrifugal acceleration, are canceled if the angles set on the mirrors are perfectly tuned ($\\Omega_m=\\Omega_y$).\nOn the other hand, if the pointing of the satellite is inertial (and no rotation compensation is applied), rotation rates up to $\\Omega_0=6\\,\\mu\\mathrm{rad\/s}$ can be accepted, for the same temperature limit of 100 pK. This makes gravity measurements along three orthogonal directions possible in inertial pointing mode (with flat mirrors). Equivalently, this translates into the same limit for the maximum rotation rate mismatch between $\\Omega_m$ and $\\Omega_y$ in the case of imperfect rotation compensation in the Nadir pointing mode: $\\delta\\Omega=\\Omega_m-\\Omega_y \\ll \\Omega_0$. \n\nFinally, the last column in table \\ref{tab:phasesingle1} shows that for a properly tuned rotation rate for Nadir operation ($\\Omega_{y}=\\Omega_{\\mathrm{orb}}$), for properly tilted mirrors ($\\Omega_m=\\Omega_y$), and for a properly set change in the Raman wavevector at the second Raman pulse ($\\delta k=kT_{zz}T^2\/2$), all higher order terms in the interferometer phase, as well as the separation of the two wavepackets at the output of the interferometer, are canceled. The loss of contrast due to dephasing is thus suppressed.\n\n\n\\section{Error budgeting}\n\\label{sec:error}\nHaving discussed the constraints set by the finite coherence length of the atomic wavepacket onto the contrast of the interferometer, we now examine the requirements on the atomic source parameters, on the Raman laser setup and on the control of rotations to keep the uncertainties in the determination of systematics in the differential measurement below 1 mrad (which corresponds to an error of 3.5 mE).\n\n\\subsection{Requirements}\n\nWe start by briefly discussing the inertial case. There, the two measurement axes $x$ and $z$, fixed in the satellite frame and lying in the orbital plane, rotate with respect to the frame where the gravity tensor is diagonal. This leads to a mixing between $T_{zz}$ and $T_{xx}$ components, and a modulation of the gradiometer phase, which is given by: $\\Phi=2kL(T_{zz}\\textrm{cos}(\\chi)+T_{xx}\\textrm{sin}(\\chi))T^2$, where $L$ is the separation between the two interferometers and $\\chi$ is the satellite angle position in the orbital plane. $T_{zz}$ and $T_{xx}$ can then be separated by combining the measurements along two orthogonal directions. 
There, the uncertainty in the determination of the pointing direction $\\delta\\theta$ leads to an error in the determination of the tensor component $T$ of interest of the order of $\\delta\\theta \\times T$, which amounts up to 3 mE for $\\delta\\theta=1~\\mu$rad. This is a very tight requirement for the mission, especially for a low altitude orbit. The potential of such a configuration for gravity field recovery has already been studied in \\cite{Douch2018}, together with the single cross-track axis in Nadir configuration. We thus focus through the rest of the paper onto the case of a 3-axis determination in Nadir configuration, with the help of compensated rotation. \n\n\\begin{table*}[!ht]\n \\begin{center}\n \\caption{Dominant terms in the residue of the phase of a single interferometer, for compensated gravity gradient and non perfect rotation compensation, corresponding sensitivity for differential measurements, and phase dispersion}\n \\small\n \\begin{tabular}{|p{0.12\\textwidth}|p{0.32\\textwidth}|p{0.28\\textwidth}|p{0.16\\textwidth}|}\\hline\n & \\textbf{Terms} & \\textbf{Differential} & \\textbf{Phase} \\\\ \n & & \\textbf{Phase (in rad)} & \\textbf{dispersion}\\\\ \n & & & \\textbf{(in rad)}\\\\ \n \n\t & $\\Omega_m=\\Omega_y+\\delta\\Omega_m$ & $\\delta\\Omega_m=10^{-6}$~rad\/s&\\\\ \n\t & $\\delta k=kT_{zz}T^2\/2$ & $\\Omega_y-\\Omega_{\\mathrm{orb}}=10^{-6}$~rad\/s&\\\\ \n\t & & $\\delta x=10^{-6}$~m & $\\sigma_x=0.1$~mm \\\\ \n\t & & $\\Delta z=L=0.5$~m & $\\sigma_z=0.1$~mm\\\\ \n\t & & $\\delta v_x=10^{-6}$~m\/s & $\\sigma_v=98~\\mu$m\/s\\\\ \n\t & & $\\delta v_z=10^{-6}$~m\/s & $\\sigma_v=98~\\mu$m\/s\\\\ \\hline\n\t\n\t $x$ & $-kx((T_{xx}-T_{zz})(\\Omega_{\\mathrm{orb}}-\\Omega_{y})+4\\delta\\Omega_m (T_{xx}+\\Omega_y^2))T^3 $& $-8.3\\times10^{-9}$& $-8.3\\times10^{-7}$\\\\ \\hline\n $z$ & $-4kz\\delta\\Omega_m\\Omega_yT^2$ & -0.944 & $-1.9\\times10^{-4}$\\\\ \\hline\n $v_x$ & $-4kv_x\\delta\\Omega_mT^2$ & $-1.6\\times10^{-3}$ & -0.157 \\\\ \\hline \n $v_z$ & $4kv_z\\delta\\Omega_m\\Omega_yT^3$ & $9.4\\times10^{-6}$ & $9.2\\times10^{-4}$ \\\\ \\hline \n Separation & & \\textbf{Separation}&\\\\ \n & & \\textbf{(in m)}&\\\\ \\hline\n$ \\Delta x$ & $4 \\hbar k \\delta\\Omega_m T^2\/m_{\\mathrm{Rb}} $ & $1.2\\times10^{-6}$&\\\\\n$\\Delta z$ & $4\\hbar k\\Omega_y\\delta\\Omega_m T^3\/m_{\\mathrm{Rb}} $ & $6.8\\times10^{-9}$&\\\\\n\\hline\n\\end{tabular}\n \\label{tab:phasesingle3}\n \\end{center}\n \\end{table*}\n\nTable \\ref{tab:phasesingle3} lists the dominant terms in the development of the output interferometer phase in the case where the compensation of the rotation is not perfect ($\\Omega_m \\neq \\Omega_y$). We find for a mismatch of $\\delta\\Omega_m=10^{-6}$ rad\/s, (which corresponds to an error in the tilt of the mirrors of $\\delta \\theta=\\delta \\Omega \\, T=5\\mu$rad) a phase error on the differential acceleration of -944 mrad, due to residual centrifugal accelerations. To keep the error below 1 mrad, an uncertainty in the knowledge of the rotation rate along y of 1 nrad\/s is thus required, or equivalently an uncertainty in the tilt (of the two mirrors) of 5 nrad. 
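The dominant residual term can be checked numerically in the same way (again a Python sketch, with the values used in table \\ref{tab:phasesingle3}): the centrifugal bias $4k\\,\\Delta z\\,\\delta\\Omega_m\\Omega_y T^2$ and the rotation-rate knowledge required to keep it below 1 mrad follow directly.\n\\begin{verbatim}\nk, T, L = 16.1e6, 5.0, 0.5   # Raman wave number (1/m), pulse spacing (s), baseline (m)\nW_y     = 1.17e-3            # rotation rate along y (rad/s)\ndW_m    = 1e-6               # mirror rotation-rate mismatch (rad/s)\n\nbias     = 4 * k * L * dW_m * W_y * T**2   # ~0.94 rad differential phase error\ndW_max   = dW_m * 1e-3 / bias              # mismatch giving a 1 mrad error: ~1e-9 rad/s\ntilt_max = dW_max * T                      # corresponding mirror tilt error: ~5e-9 rad\nprint(bias, dW_max, tilt_max)\n\\end{verbatim}\n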
\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{Requirements on the relative initial positions and velocities for the phase error to remain below 1 mrad, in the case of non perfect rotation compensation $\\delta\\Omega_m=10^{-6}$~rad\/s and non perfect Nadir pointing $\\Omega_y-\\Omega_{\\mathrm{orb}}=10^{-6}$~rad\/s}\n\n \t\\begin{tabular}{|c|c|}\\hline\n$x$\t & $\\delta x < 0.12$~m \\\\ \n$z$\t & $\\delta z < 0.5$~mm\\\\ \n$v_x$\t & $\\delta v_x < 0.6~\\mu$m\/s\\\\ \n$v_z$\t & $\\delta v_z < 106~\\mu$m\/s\\\\ \n\\hline\n\\end{tabular}\n \\label{tab:phasesingle3b}\n \\end{center}\n \\end{table}\n\nTable \\ref{tab:phasesingle3b} gives the requirements on the source parameters (velocities and positions) to keep the phase error below 1 mrad, in the case of non perfect compensation of the rotation, and for non perfect Nadir pointing. We assume here a knowledge of the rotation rate along y at the level given above. The corresponding requirements are found to be largely manageable. One should nevertheless keep in mind that these requirements scale with the amplitudes of the mismatches $\\delta\\Omega_m$ and $\\Omega_y-\\Omega_{\\mathrm{orb}}$.\n\n\n\\subsection{Measurements along the two other axes}\n\nThe interferometer configuration required to measure $T_{xx}$ is identical to the one studied before, as for any measurement axis in the orbital plane, the rotation rate along $y$ needs to be compensated. The conclusions and requirements derived in the previous sections thus apply, provided the role of $x$ and $z$ are exchanged. As for the cross-track axis, fixed and parallel mirrors can be used, which simplifies the laser setup design and relaxes the constraints on the control of the beam alignment. This was the configuration considered in \\cite{Douch2018}.\n\n\\subsection{Combining the three signals}\n\nWhile the requirements on the control and knowledge of $\\Omega_{x,z}$ can be met with current technologies, using for instance fiber optic gyroscopes of the Astrix class, the ones of the rotation rate $\\Omega_y$, and of the mismatch of the mirror with respect to the ideal tilt, are very stringent, and cannot presently be met, even with the best space qualified gyroscopes. 
Instead we propose to use the mathematical properties of the gravity tensor, and its null trace, to estimate $\\Omega_y$, or at least its fluctuations.\n\nThe phase signals of the three interferometers are given by (we consider here only the leading terms, the gravity tensor terms, the gravity gradient compensating terms due to the wavevector change at the second pulse and the centrifugal terms): \n\\begin{eqnarray*}\n\\delta\\Phi_x & = & 2kz(T_{xx}-T_{xx}^{\\mathrm{eff}}-\\Omega_{m,x}^2+\\Omega_y^2+\\Omega_z^2)T^2\\\\\n\\delta\\Phi_y & = & 2kz(T_{yy}-T_{yy}^{\\mathrm{eff}}+\\Omega_x^2+\\Omega_z^2)T^2\\\\\n\\delta\\Phi_z & = & 2kz(T_{zz}-T_{zz}^{\\mathrm{eff}}-\\Omega_{m,z}^2+\\Omega_y^2+\\Omega_x^2)T^2\n\\end{eqnarray*}\nwith $T_{ii}^{\\mathrm{eff}}=2\\delta k_{ii}\/kT^2$ and $\\delta k_{ii}$ the change in $k$ applied in the direction $i$.\n\nAssuming the $T_{ii}^{\\mathrm{eff}}$ are tuned so as to null the output phases, we have\n\\begin{eqnarray*}\nT_{xx}^{\\mathrm{eff}} & = & T_{xx}-\\Omega_{m,x}^2+\\Omega_y^2+\\Omega_z^2\\\\\nT_{yy}^{\\mathrm{eff}} & = & T_{yy}+\\Omega_x^2+\\Omega_z^2\\\\\nT_{zz}^{\\mathrm{eff}} & = & T_{zz}-\\Omega_{m,z}^2+\\Omega_y^2+\\Omega_x^2\n\\end{eqnarray*}\n\nSumming the three equation and exploiting the null trace relation $T_{xx}+T_{yy}+T_{zz}=0$, we find\n\\begin{equation}\n\\sum T_{ii}^{\\mathrm{eff}} = -\\Omega_{m,x}^2-\\Omega_{m,z}^2+2(\\Omega_x^2+\\Omega_y^2+\\Omega_z^2)\n\\end{equation}\n\nFor $\\Omega_x$ and $\\Omega_z$ well below $10^{-6}$ rad\/s (or sufficiently well determined with gyroscopes), and assuming that the mirror tilts are left unchanged, fluctuations of $\\Omega_y$ can be determined with an uncertainty limited by the combined sensitivities of the three gradiometers.\n\nWriting $\\Omega_y=\\Omega_{y0}+\\delta\\Omega_y$, where $\\Omega_{y0}$ is a reference value close to $\\Omega_{\\mathrm{orb}}$, we obtain\n\\begin{equation}\n\\sum T_{ii}^{\\mathrm{eff}} \\simeq -\\Omega_{m,x}^2-\\Omega_{m,z}^2+2\\Omega_{y0}^2+4\\Omega_{y0}\\delta\\Omega_y\n\\end{equation}\n\nThe uncertainty in the evaluation of $\\delta\\Omega_y$ is finally given by \n\\begin{equation}\n\\Delta(\\delta\\Omega_y)=\\frac{\\sqrt{3} \\, \\sigma_T}{4 \\, \\Omega_{\\mathrm{orb}}}\n\\end{equation}\nwhere $\\sigma_T$ is the sensitivity of each gradiometer.\n\nFor $\\sigma_T=3.5$~mE and $\\Omega_{\\mathrm{orb}}=1.17$~mrad\/s, we find $\\Delta(\\delta\\Omega_y)=0.9$~nrad\/s.\n\nThis determination can in turn be used to correct the measurement along $x$ and $z$ from centrifugal accelerations. 
Neglecting, as before, terms related to $\\Omega_x$ and $\\Omega_z$, this yields the following equations:\n\\begin{eqnarray*}\nT_{xx} & = & T_{xx}^{\\mathrm{eff}}+\\Omega_{m,x}^2-\\frac{1}{2}\\left(\\sum T_{ii}^{\\mathrm{eff}}+\\Omega_{m,x}^2+\\Omega_{m,z}^2\\right)\\\\\nT_{yy} & = & T_{yy}^{\\mathrm{eff}}\\\\\nT_{zz} & = & T_{zz}^{\\mathrm{eff}}+\\Omega_{m,z}^2-\\frac{1}{2}\\left(\\sum T_{ii}^{\\mathrm{eff}}+\\Omega_{m,x}^2+\\Omega_{m,z}^2\\right)\n\\end{eqnarray*}\nand finally\n\\begin{eqnarray*}\nT_{xx} & = & \\frac{1}{2}(T_{xx}^{\\mathrm{eff}}-T_{yy}^{\\mathrm{eff}}-T_{zz}^{\\mathrm{eff}}+\\Omega_{m,x}^2-\\Omega_{m,z}^2)\\\\\nT_{yy} & = & T_{yy}^{\\mathrm{eff}}\\\\\nT_{zz} & = & \\frac{1}{2}(T_{zz}^{\\mathrm{eff}}-T_{xx}^{\\mathrm{eff}}-T_{yy}^{\\mathrm{eff}}+\\Omega_{m,z}^2-\\Omega_{m,x}^2)\n\\end{eqnarray*}\nExploiting the null trace to correct for the centrifugal acceleration actually also decreases the uncertainty in the gradiometric measurement along $x$ and $z$ by a factor $2\/\\sqrt{3}=1.15$.\n\n\n\\section{Monte Carlo model of the interferometer}\n\\label{sec:model}\nThis section describes the simulations of a gravity gradiometer based on a pair of double diffraction atom interferometers, focusing here on the effects related to the physics of the interferometer, independently from the inertial forces applied to the atoms. A Monte Carlo model of the interferometers was developed in order to precisely evaluate the impact of the experimental parameters, such as those related to the lasers or the atomic sources, on the differential phase between the two interferometers. The interferometers are fed out of a single ultra-cold atomic source which is split, using a combination of Raman and Bragg laser beams, into two clouds separated by 50 cm, as represented in figure~\\ref{Concept-Scheme}. The two clouds are thus taken to be identical in their initial velocity, temperature, and spatial distribution. \n\n\\subsection{Double-diffraction interferometers}\n\\label{doublediffraction}\nThe interferometer geometry is based on the double diffraction technique demonstrated in \\cite{Leveque2009}. The Raman beams, of wavevectors $\\vec{k}_{1}$ and $\\vec{k}_{2}$, are brought together onto the atoms before being retroreflected on mirror(s), leading to the existence of two pairs of counterpropagating Raman beams, with opposite effective wavevectors $\\pm\\hbar\\vec{k}_{\\mathrm{eff}}$, where $\\vec{k}_{\\mathrm{eff}}=\\vec{k}_{1}-\\vec{k}_{2}$. Both pairs are resonant when the motion of the atoms is perpendicular to the laser beams, as no Doppler shift lifts the degeneracy between their resonance conditions.\n\nSimilarly to diffraction by stationary optical waves, the coupling with the two Raman laser pairs leads to populating several orders of diffraction. In our model, we consider the first 5 coupled atomic states $\\vert j\\rangle$ ($j=-2\\cdots2$). These states correspond to diffraction orders 0, $\\pm 1$ and $\\pm 2$: they are linked to the interaction of the atoms with 2 or 4 photons following the two directions $\\pm\\hbar\\vec{k}_{\\mathrm{eff}}$. They also differ by their internal states (0 and $\\pm 2$ correspond to $\\vert F=1\\rangle$, $\\pm\\,1$ to $\\vert F=2\\rangle$) as the Raman pairs couple different electronic states. 
Then, the evolution of the atomic quantum state during the pulse is calculated by solving the Schr\\\"odinger equation, generalizing the method of \\cite{Moler1992}, based on adiabatic elimination of the excited state.\n\n\\begin{figure}[hbtp]\n \\centering\n \\includegraphics[scale=0.5]{Interference_Scheme2.pdf}\n \\caption{Double diffraction interferometer scheme using three Raman pulses. Note that we do not display the $\\vert\\pm 2\\rangle$ states as they are pushed away together with the $\\vert 0\\rangle$ wave-packets.}\n \\label{Interferometer_Scheme}\n\\end{figure}\n \nThe double diffraction interferometer is realized using a sequence of three Raman pulses $\\frac{\\pi}{2} - \\pi - \\frac{\\pi}{2}$, as shown in figure~\\ref{Interferometer_Scheme}, separated from each other by a free evolution time $T$. The three pulses are realized with the same laser power but with different durations, respectively $\\tau - 2\\tau - \\tau$. The duration $\\tau$ is defined with respect to the effective Rabi pulsation $\\Omega_{\\mathrm{eff}}$, using the relation: $\\tau =\\pi \/ \\left ( \\sqrt{2} \\, \\Omega_{\\mathrm{eff}} \\right )$, so that the first Raman pulse splits the wave-function of the atoms into a coherent superposition of the two coupled states, $\\vert 0\\rangle\\longrightarrow\\vert\\pm 1\\rangle$. During the first free evolution time $T$, the two arms of the interferometer separate. The second Raman pulse acts on the two interferometer arms in order to deflect them (in our case $\\vert\\pm 1\\rangle$ into $\\vert\\mp1\\rangle$). The first and second Raman pulses can also populate other coupled states (such as $\\vert\\pm 2\\rangle$), leading to parasitic paths which could interfere. We suppress these unwanted paths, which are in the $\\vert F=2\\rangle$ state, by pushing them away from the interferometer area using resonant laser beams, after each Raman pulse, as shown in figure~\\ref{Interferometer_Scheme}. The non-deflected wave-packets after the middle pulse do not disturb the measurement process if we detect only the ``interference signal'' (see figure~\\ref{Interferometer_Scheme}), which is possible in our configuration due to the large distance between the different wave-packet trajectories. \n\nAfter the second $\\pi$-Raman pulse, the two wave-packets $\\vert\\mp1\\rangle$ get closer and overlap after a second free evolution time $T$. Finally, the last $\\frac{\\pi}{2}$ Raman pulse recombines these atomic wave packets, thus realizing a Mach-Zehnder type interferometer. \n\nThe interferometer phase, which corresponds to the difference between the phase shifts accumulated by the two interferometer arms, is finally extracted from the measurement of the transition probability $P=N_1\/(N_1+N_2)$, where $N_1$ and $N_2$ are respectively the numbers of atoms detected in the hyperfine states $\\vert F=2\\rangle$ (corresponding to the atoms in state $\\vert 0\\rangle$) and $\\vert F=1\\rangle$ (corresponding to the sum of the atoms in states $\\vert \\pm 1\\rangle$).
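\n\nThe following minimal sketch (Python) illustrates this coupled-state propagation for a single square pulse: the five states $\\vert j\\rangle$ and the pulse duration $\\tau=\\pi\/(\\sqrt{2}\\,\\Omega_{\\mathrm{eff}})$ follow the description above, whereas the detuning assigned to the second diffraction orders is only an illustrative assumption, and the laser phases, Doppler shifts and position dependence of the Rabi frequency are omitted.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\n# Toy model of the 5 coupled states |-2>,|-1>,|0>,|+1>,|+2>:\n# the |0> <-> |+-1> transitions are taken resonant, the |+-2> states\n# are detuned (illustrative value, not from this work).\nOmega_eff = 40e3                 # effective Rabi pulsation (rad\/s)\ndelta_2   = 2*np.pi*60e3         # assumed detuning of the |+-2> states (rad\/s)\n\nH = np.diag([delta_2, 0.0, 0.0, 0.0, delta_2]).astype(complex)\nfor a in range(4):               # nearest-neighbour coupling by the two Raman pairs\n    H[a, a+1] = H[a+1, a] = Omega_eff \/ 2\n\ndef pulse(psi, tau):\n    # evolve the 5-component state through a square pulse of duration tau\n    return expm(-1j * H * tau) @ psi\n\npsi0 = np.zeros(5, complex); psi0[2] = 1.0      # start in |0>\ntau  = np.pi \/ (np.sqrt(2) * Omega_eff)         # beam-splitter pulse\nprint(np.abs(pulse(psi0, tau))**2)              # populations after the pulse\n\\end{verbatim}\nWith these values, most of the population is transferred to $\\vert\\pm 1\\rangle$ after the first pulse, with only a weak leakage towards $\\vert\\pm 2\\rangle$, in line with the behaviour described above.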
\n\n\nWith this geometry, the interferometer phase is given by:\n\\begin{eqnarray}\n\\Delta\\Phi & = & \\left[\\phi_{\\downarrow}(\\vec{r}_{A}) - \\phi_{\\downarrow}(\\vec{r}_{C}) + \\phi_{\\uparrow}(\\vec{r}_{C}) - \\phi_{\\uparrow}(\\vec{r}_{D})\\right] \\nonumber \\\\\n& & - \\left[\\phi_{\\uparrow}(\\vec{r}_{A}) - \\phi_{\\uparrow}(\\vec{r}_{B}) + \\phi_{\\downarrow}(\\vec{r}_{B}) - \\phi_{\\downarrow}(\\vec{r}_{D})\\right]\n\\label{Analytic_PaseShift}\n\\end{eqnarray}\nwhere $\\vec{r}_{A},\\vec{r}_{B},\\vec{r}_{C}$ and $\\vec{r}_{D}$ are the center of mass positions of the atomic wave-packets at the different locations $A, B, C$ and $D$ represented in the figure~\\ref{Interferometer_Scheme}, and $\\phi_{\\uparrow}$ (resp. $\\phi_{\\downarrow}$) the phase difference between the $k_{\\uparrow}$ (resp. $k_{\\downarrow}$) pair of Raman lasers. Equation \\ref{Analytic_PaseShift} generalizes equation \\ref{formula1} and allows to account for differences between the wavevectors of the counterpropagating Raman lasers pairs. \n\n\n\\subsection{Description of the Monte-Carlo simulation}\n\nUsing this model for the interferometer, we have developed Monte-Carlo simulations of the space-borne gravity gradiometer depicted in figure~\\ref{Concept-Scheme}. We average the contribution to the output signals of a large ensemble of atoms, randomly drawing their initial positions and velocities in Gaussian distributions, and calculating the evolution of their wave-function as well as their classical trajectory along the two interferometer paths. The initial mean longitudinal velocity of the atoms is $4v_{rec}$, and the rms initial atomic position is taken to be 100~$\\mu$m. The initial mean vertical velocity, ideally null, can be taken different from zero when we estimate the effect of an initial velocity drift along the direction of the lasers onto the interferometer. By simulating two interferometers at different initial positions, and computing the difference between the output phase shift, we simulate a cold atom interferometer gravity gradiometer. \n\nTo simulate the propagation of the atoms through the interferometer, we consider that the momentum kicks occur at the middle of the Raman pulses and we neglect the variations of their position during the pulses. The duration of the first Raman pulse is $\\tau =\\frac{\\pi}{\\sqrt{2}\\Omega_{\\mathrm{eff}}}$, where $\\Omega_{\\mathrm{eff}}$ is the Rabi angular frequency at the center of the Raman lasers, whose intensity profiles are Gaussian with identical waists $w_0$. Therefore, the effective Rabi angular frequency $\\Omega_{\\mathrm{eff}}(\\vec{r})$ seen by the atoms, as well as the phase shifts $\\phi_{\\uparrow\\downarrow}(\\vec{r})$, depend on their position in the Raman beams $\\vec{r}$ at the time of the pulses. \n\n\\subsection{Results of the simulation}\n\nThe following section discusses the results from the simulation and derives the requirements.\n\n\\subsubsection{Parallel retroreflecting mirrors}\n\nWe start by calculating the contrast and fraction of detected atoms as a function of the Rabi frequency and the temperature, for parallel Raman mirrors (which corresponds to the case where the measurement axis is cross-track, i.e. along $y$). The results are displayed in figure~\\ref{Contrast-NatmDet_Temperature-Omega} for temperatures ranging from 0.1~pK to 10~nK, and Rabi pulsation from 5~rad\/s to 10~Mrad\/s. Here, the Raman laser beams are taken as Gaussian beams with a 5~mm waist size. 
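\n\nThe structure of this average can be illustrated by the following simplified sketch (Python), restricted to one transverse dimension: atoms are drawn from Gaussian position and velocity distributions, propagated to the three pulses, and contribute a weighted three-pulse phase. The waist, temperature and initial position spread correspond to the values used in the simulations, whereas the free evolution time, the wavefront curvature radius and the pulse-efficiency weight are crude, illustrative stand-ins for the actual ingredients of the full model.\n\\begin{verbatim}\nimport numpy as np\n\nrng   = np.random.default_rng(0)\nN     = 20000\nT     = 5.0          # free evolution time (s), assumed value\nw0    = 5e-3         # Raman beam waist (m)\nR     = 2800.0       # effective wavefront curvature radius (m), assumed value\nk     = 2*np.pi\/780e-9\nsig_x = 100e-6       # rms initial transverse position (m)\nsig_v = np.sqrt(1.38e-23*100e-12\/1.44e-25)   # rms velocity at 100 pK for 87Rb (m\/s)\n\nx0 = rng.normal(0.0, sig_x, N)\nv0 = rng.normal(0.0, sig_v, N)\n\nnum, den = 0.0, 0.0\nfor x, v in zip(x0, v0):\n    xs    = x + v*np.array([0.0, T, 2*T])        # position at the 3 pulses\n    omega = np.exp(-2*xs**2\/w0**2)               # normalized local Rabi frequency\n    phi   = k*xs**2\/(2*R)                        # toy wavefront-curvature phase (rad)\n    w     = np.prod(np.sin(0.5*np.pi*omega)**2)  # crude pulse-efficiency weight\n    num  += w*(phi[0] - 2*phi[1] + phi[2])\n    den  += w\nprint('weighted mean phase (rad):', num\/den)\n\\end{verbatim}\nThe full model replaces the weight and the phase terms by the actual pulse evolution described in section~\\ref{doublediffraction} and by the local laser phases $\\phi_{\\uparrow\\downarrow}(\\vec{r})$.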
\n\\begin{figure}[hbtp]\n\\centering\n\\includegraphics[scale=0.8]{2D_Contrast_NatomDet_vs_Omega_Temperature2.pdf}\n\\caption{Contrast (a) and number of detected atoms (b) as a function of the Rabi pulsation $\\Omega_{\\mathrm{eff}}$ for different atom temperatures.}\n\\label{Contrast-NatmDet_Temperature-Omega}\n\\end{figure}\n\nIn figure~\\ref{Contrast-NatmDet_Temperature-Omega}(a) we present the interferometer contrast \nwith respect to $\\Omega_{\\mathrm{eff}}$ for different temperatures. A plateau of contrast $>0.8$ is observed for a range of Rabi pulsations between $10^{2}$ and $10^{5}$~rad\/s and a temperature range between 10 and 100~pK. The corresponding fractions of detected atoms are displayed in figure~\\ref{Contrast-NatmDet_Temperature-Omega}(b), where a similar, though smaller, plateau is found, corresponding to a range of $\\Omega_{\\mathrm{eff}}$ between $5\\times10^{3}$ and $2\\times10^{5}$~rad\/s and a temperature range between 10 and 100~pK. These results confirm the expectation that at large effective Rabi pulsations ($>10^{5}$~rad\/s), the coupling to higher momentum states ($\\vert\\pm 2\\rangle$ ...) leads to a loss of contrast. Also, when the temperature increases, the fraction of detected atoms decreases quickly due to the velocity selectivity of the Raman transitions. This motivates working at the lowest possible temperatures.\n\nBased on these calculations, we select for the rest of the simulations the following parameters: $\\Omega_{\\mathrm{eff}}=40\\times10^{3}$~rad\/s and a temperature of 100~pK, for which the contrast $C$ is $\\approx$ 80\\% and the fraction of detected atoms $\\approx$ 80\\%. Temperatures below 100~pK have already been obtained using the delta-kick collimation technique~\\cite{Kovachy2015PRL}. As for the Rabi pulsation, it corresponds to a Raman pulse duration $\\tau$ of the order of 55~$\\mu$s, similar to what is used in standard ground-based Raman interferometers~\\cite{Leveque2009}. \n\nThe simulation also allows evaluating the effect of the finite size of the Raman beams onto the phase of the interferometers, due to the effects of curvature and Gouy phase. Figure~\\ref{Effect_Laser_Waist_Position} shows the calculated phase shifts as a function of the Raman laser waists in the 1-10 mm range, assuming identical waists for the three beams, located at $y=0$, in between the two interferometers at $y=\\pm25$~cm. The temperature is $T$ = 100~pK and the effective Rabi pulsation 40~krad\/s. The retroreflecting mirrors are placed at $y=+0.4$~m.\n\n\\begin{figure}[hbtp]\n\\centering\n\\includegraphics[scale=0.35]{PhaseVsWaistAll.png}\n\\caption{Effect of the Raman lasers waist on the phase shifts at the output of the interferometer at 0.25~m $y$-position (open circles) (resp. at -0.25~m $y$-position (open squares)), and on the differential phase shift between the two interferometers (full triangles). All the Raman laser beams have the same size, at the same $y$-position.}\n\\label{Effect_Laser_Waist_Position}\n\\end{figure}\n\nFigure~\\ref{Effect_Laser_Waist_Position} shows the calculated phase shifts for each individual interferometer, displayed as open squares and circles, and their difference, displayed as full triangles. We find phase shifts that decrease quickly with increasing waist sizes, and which are not suppressed in the differential measurement.
The resulting systematic effect on the gradiometer phase is lower than 1 mrad for waists larger than 4 mm, and was found to be dominated by the impact of the residual curvature of the wavefront rather than the Gouy phase. This motivates the choice made above of a waist of 5 mm.\n\nThe relative positions of the Raman laser waists were then varied in order to evaluate the effect of their positions on the differential phase shift of the gravity gradiometer, and define an error margin for the adjustment of the waist position of the Raman laser beams. The positions of the waists of the three incoming Raman beams were randomly drawn in the range $\\pm 10$~m (resp. $\\pm 50$~m) with respect to the $y=0$ position at the middle of the two interferometers. The corresponding gradiometer phase shifts were found to vary respectively within $\\pm 0.5$~ mrad (resp. within 6 mrad) around the average value of 0.5 mrad. In order to keep the differential phase shift $<$ 1~mrad, the relative $y$-positions of the Raman laser waists should thus be in the range of $\\pm$10~m. The Rayleigh length of a Gaussian beam of 5~mm waist being 100 m, this corresponds to a maximum radius of curvature of 1 km at 10~m from the waist, which is well within the measurement capabilities of state of the art wavefront sensors. \n\nThe model was also used to evaluate the impact of other effects, such as light shifts, residual mean Doppler shifts. In particular, we calculated for single interferometers a residual sensitivity to the mean initial velocity along $y$. The sensitivity amounts to 0.05~mrad per $\\mu$m\/s of mean velocity drift for our parameters (Rabi frequency $\\Omega_{\\mathrm{eff}}$ = 40~krad\/s), and increases when decreasing the Rabi frequency. A control of the relative initial vertical velocity between the two atomic clouds at the input of each interferometer better than 20~$\\mu$m\/s is thus required to keep the phase error below 1 mrad. \n\n\\subsubsection{Tilted retroreflecting mirrors}\n\nIn the tilted mirror configuration, finite size effects are expected to have a stronger impact, as the positions of the atomic clouds are (symmetrically) offset with respect to the centre of the Raman beams by about 1.5~mm at the first and third Raman pulses. Figure~\\ref{Effect_Laser_Waist_Position_Tilted} presents the calculated contrast (left) and fraction of detected atoms (center) as a function of the beam waist. As expected, smaller contrasts and fraction of detected atoms are found with respect to the parallel configuration (63 \\% of contrast instead of 80 \\% for a waist of 5 mm). The effect of the curvature onto the gradiometer phase is displayed on figure~\\ref{Effect_Laser_Waist_Position_Tilted}-right), where a waist larger than 8 mm is required to keep the phase error below 1 mrad. We finally chose a waist of 1~cm and evaluated the impact of the Rabi frequency and temperature onto the contrast and fraction of detected atoms. We found similar behaviours as before, with plateaus in the trends with respect to the Rabi frequency, with significantly higher contrast (92 \\%) and fraction of detected atoms (95 \\%) for a temperature of 100 pK. 
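\n\nThese orders of magnitude can be checked with a simple Gaussian-beam estimate. The short sketch below (Python) evaluates the Rayleigh length, the wavefront curvature radius and the Gouy phase at a given distance from the waist for an ideal Gaussian beam, and reproduces for instance the curvature radius of about 1 km at 10 m from a 5 mm waist quoted above; the Monte-Carlo model includes these effects at the level of the individual atomic trajectories.\n\\begin{verbatim}\nimport numpy as np\n\nlam = 780e-9\nfor w0 in (5e-3, 10e-3):            # waists considered above (m)\n    zR = np.pi*w0**2\/lam            # Rayleigh length (m)\n    for z in (1.0, 10.0):           # distance from the waist (m)\n        R    = z*(1 + (zR\/z)**2)    # wavefront curvature radius (m)\n        gouy = np.arctan(z\/zR)      # Gouy phase (rad)\n        print(f'w0={w0*1e3:.0f} mm, z={z:.0f} m: '\n              f'zR={zR:.0f} m, R={R:.0f} m, Gouy={gouy*1e3:.0f} mrad')\n\\end{verbatim}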
\n\n\\begin{figure}[hbtp]\n\\centering\n\\includegraphics[scale=0.21]{ContVsWaist.png}\n\\includegraphics[scale=0.21]{FracdetVsWaist.png} \n\\includegraphics[scale=0.21]{PhaseVsWaist.png}\n\\caption{Effect of the Raman lasers waist on the contrast (left), fraction of detected atoms (center), and on the differential phase shift between the two interferometers (right), for tilted Raman mirrors. All the Raman laser beams have the same size, at the same $z$-position.}\n\\label{Effect_Laser_Waist_Position_Tilted}\n\\end{figure}\n\n\\section{Design of the instrument}\n\n\\label{Sec:Design}\n\n\\subsection{Vacuum system}\n\nFigure~\\ref{VacuumSchematics} shows the design of the vacuum system for $T_{zz}$ and $T_{xx}$. The main dimensions of the vacuum chamber are adapted to the displacement of the atomic clouds. The architecture of the BEC chamber, where the ultra-cold atomic source is produced, is inspired by the solution designed for STE-QUEST ATI \\cite{Aguilera2014} but without the dipole trap. In the 2D-MOT chamber, a beam of pre-cooled atoms is created by a two dimensional MOT. This pre-cooled beam of atoms is formed out of the background gas pressure created by a reservoir. In the BEC chamber this atom beam is captured by a three dimensional MOT and the atoms are then transferred in a purely magnetic trap. The magnetic fields for the traps are created by a combination of a three-layer atom chip and comparably small magnetic coils. In the magnetic trap, the atomic cloud is compressed and then cooled via RF-evaporation. The chip is parallel to the plane of the atomic clouds displacements. The transport beams launch the atoms into the CAI chamber. \n\nFour mirrors are fixed inside the vacuum system: one for the two vertical Bloch lattices and three tilted reference mirrors for the interferometer to compensate for the rotation of the satellite which can create bias terms in the output phases and a loss of contrast. The relative angle between two consecutive reference mirrors is $\\sim 7$~mrad corresponding to the mean rotation rate of the satellite. The two mirrors for the $\\pi\/2$ pulses are fixed on piezoelectric tip-tilt mounts to allow the fine control of the relative angle between the three reference mirrors \\cite{Hauth2013}. A dynamic range of $\\pm 30~\\mu$rad and an accuracy of 10 nrad is needed, which is slightly beyond the state-of-the-art of this technology and requires custom development. The impact of these actuators on the power budget is not negligible and their power consumption needs to be optimized. For the $T_{yy}$ CAI, a single reference mirror is used for the splitting and the three interferometer pulses instead of three independent reference mirrors and no tip-tilt mount is needed.\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[width=12cm]{VacuumSchematicsTiltMirrorArticleCAI.pdf}\\\\\n \\caption{Design of the vacuum system for $T_{zz}$ and $T_{xx}$. An atom beam is produced in the 2D MOT chamber and used to load a mirror 3D MOT on a chip in the BEC chamber, where the ultra-cold atom source is achieved. The atom cloud is then launched thanks to a Raman pulse and horizontal Bloch lattices towards the CAI chamber where the differential interferometer is produced. The atom cloud is slowed down thanks to horizontal Bloch lattices, then split and transported at the entrance of the interferometer area by applying vertical Bloch lattices. The detection is achieved in a separated small chamber in order to avoid parasitic light in the CAI interferometric chamber. 
A $\\pi$ Raman pulse is applied 1 s after the last beamsplitter pulse, to have the 3 output ports overlapped 1 s later for counting by fluorescence detection on a CCD camera.}\\label{VacuumSchematics}\n\\end{figure}\n\nDuring the interferometer the atom clouds pass through the CAI to finally reach their respective detection chambers. Figure~\\ref{VacuumSchematics} zooms in on the detection zone. The idea is to wait for the atom cloud to exit the interferometer chamber to avoid parasitic light due to fluorescence. A shutter is placed at the entrance to prevent the scattered photons from reaching the CAI chamber. A double diffraction $\\pi$ pulse is applied to bring the diffracted states back to the center. Spatial fringes on the atomic population are observed with a CCD camera.\n\n\n\\subsection{Detection signal}\n\nSpatially resolved detection prevents the contrast loss determined by the inhomogeneous dephasing due to the initial velocity and position distribution and allows the extraction of information on velocity dependent phase shifts \\cite{Dickerson2013PRL,Sugarbaker13PRL}. We consider at first a point-like atomic source, to evaluate the effects of the satellite angular rotation $\\Omega_{x,y,z}$ when $\\Omega_m=0$ on the final fringe pattern thanks to the ballistic expansion of the atomic ensemble during the long interrogation time. The remaining phase terms are those related to the initial velocities of the atoms ($v_x$, $v_y$, $v_z$), including Sagnac terms and the effect of the vertical component of the gravity-gradient $T_{zz}$:\n\n\\begin{equation}\n\\phi(v_x, v_y, v_z)= 4 k \\left ( v_x \\Omega_y + v_y \\Omega_x \\right ) T^2 + 2 k v_z \\left ( T_{zz} - 3\\Omega_y^2 \\right ) T^3\n\\label{eq:phase_velocity}\n\\end{equation}\n\nThe $x$--$y$ cross-section of the final density distribution is shown in figure~\\ref{fig:fringes} for different values of $\\Omega_y$ and when $\\Omega_x=\\Omega_z=1 \\times 10^{-6}$~rad\/s, and $T_{zz}=-2.7 \\times 10^{-6}$ s$^{-2}$. The signals are calculated with a bias phase to have the top of a fringe at $(x,y,z)=(0,0,0)$; in the case of a small residual radial velocity, and an interferometer signal spanning over only a fraction of a fringe period, a suitable phase shift can be applied to the interference pattern in order to center the signal at half fringe to increase the phase sensitivity. The increasing angular rotation along the $y$ axis determines an increasing spatial frequency for the fringes along the $x$ axis. The effect of $T_{zz}$ is to spread the phase by $\\approx$ 600 mrad along the $z$ direction. When $z$ is chosen as the observation direction, the interference pattern encodes the angular velocities along the $x$ and $y$ directions as:\n\\begin{equation}\n\\phi(x,y)= 2 k T \\left ( \\Omega_y x + \\Omega_x y \\right ) \\, \n\\end{equation}\nwhere the two angular velocities can be obtained with a 2D fit on the atomic fringes. The fringe spacing is inversely proportional to the projection of the angular velocity on the $x$--$y$ plane, i.e. $\\Omega_{x-y}=\\left ( \\Omega_x^2 + \\Omega_y^2 \\right )^{1\/2}$, and it is equal to $\\pi\/ \\left ( k T \\Omega_{x-y} \\right )$. In order to resolve the interferometer fringes, the CCD camera has to have enough resolution; the requirement becomes more demanding as the satellite angular rotation increases.
For example, when $\\Omega_{x-y}=\\Omega_{\\textrm{orb}}$ the fringe spacing is equal to 33 $\\mu$m, and imaging 4 mm over 1024 pixels will lead to 3 pixels for each fringe.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.24\\textwidth]{f6.pdf}\n\\includegraphics[width=0.24\\textwidth]{f5.pdf}\n\\includegraphics[width=0.24\\textwidth]{f4.pdf}\n\\includegraphics[width=0.24\\textwidth]{f3.pdf}\n\\caption{\\label{fig:fringes} Interferometer fringes obtained for a point-source atomic cloud in the $x$--$y$ plane for different values of $\\Omega_y$, from left to right: $1 \\times 10^{-6}$ rad\/s, $1 \\times 10^{-5}$ rad\/s, $1 \\times 10^{-4}$ rad\/s, and $1.17 \\times 10^{-3}$ rad\/s$=\\Omega_{\\textrm{orb}}$.}\n\\end{figure}\n\nThe initial spatial distribution of the atomic cloud determines the blurring effect on the interferometer fringes; the resulting signal is obtained by calculating the convolution of the probability distribution obtained for the point-like source case with the initial spatial distribution of the atomic ensemble. Figure~\\ref{fig:fringesBlur} shows how the fringe signal worsens when the initial cloud is considered as an isotropic normal distribution along the 3 directions, with a standard deviation equal to 150 $\\mu$m. The fringe contrast is further decreased because of the integration of the density distribution along the observation direction required for the imaging. For $\\Omega_y \\ll 10$ mrad\/s the phase spread along the vertical direction is mainly due to $T_{zz}$ (see Eq.~\\ref{eq:phase_velocity}) and is $\\approx$ 600 mrad over the final size of the atomic cloud. Other effects, not taken into account here, further reduce the phase sensitivity, such as the quantum projection noise (QPN) due to the finite number of atoms detected by each pixel of the CCD, and the technical noise determined by the detection technique \\cite{Rocco2014}; these effects must be evaluated to define the requirements for the instrument adopted for the detection.\n\n\\begin{figure}\n\\begin{minipage}{\\textwidth}\n \\centering\n \\raisebox{0.0\\height}{\\includegraphics[width=.27\\textwidth]{fO4.pdf}} \\hspace*{3mm}\n \\raisebox{0.0\\height}{\\includegraphics[width=.27\\textwidth]{fO4intZ.pdf}}\n \\hspace*{3mm}\n \\raisebox{0.13\\height}{\\includegraphics[width=.37\\textwidth]{fringes_Oy4_intZ.pdf}}\n \\caption{\\label{fig:fringesBlur} (left) Fringes on the $x$--$y$ plane at $z=0$ when the initial atomic distribution is taken into account to calculate the final density distribution; a reduction of the fringe amplitude of about 5\\%, hardly discernible by eye, is obtained with respect to the signal resulting from a point-like source, as shown in the third image of figure~\\ref{fig:fringes}; (center) fringes on the $x$--$y$ plane when the atomic density distribution is integrated along the measurement direction $z$ for the CCD imaging; (right) the blurring effect on the fringe visibility is shown on the two density distribution profiles taken at $y=0$, when the signal is integrated (red) or not (black) along the $z$ direction. The combined effect is a fringe amplitude reduction of 20\\%.
The images are obtained for $\\Omega_x = 1 \\times 10^{-6}$ rad\/s, $\\Omega_y = 1 \\times 10^{-4}$ rad\/s, and a final size of the cloud along $z$ of 1.1 mm.}\n\\end{minipage}\n\\end{figure}\n\nNote that the above results also apply to the case where the Raman mirrors are tilted to compensate for rotation, provided one replaces $\\Omega_y$ with $\\Omega_y-\\Omega_m$.\n\n\\subsection{Design of the laser source\/frequency power distribution}\n\nThe architecture of the laser system including the frequency\/power distribution is depicted on figure \\ref{architectureLasersource}. It is based on telecom technology combined with second harmonic generation (SHG) \\cite{Thompson2003}. The main specifications of the laser are detailed in table \\ref{laserspecifications}.\n\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{table}\n \\centering\n \n \n\t\\small\n \\begin{tabular}{|c|c|ccccccc|}\n \\hline\n Specifications & Units& 2D MOT & 3D MOT & Raman & Bloch lattice & Push & Detection &Raman $\\Delta k_{z}$\\\\ \\hline\n Wavelength & nm & 780.24& 780.24 & 780.24 & 780.24 & 780.24& 780.24& 780.24\\\\\n Power & mW &200\/20 & 100\/10& 45 & 200 & 5& 5&30\/30\\\\\n Frequency & MHz &- 20& [-20;-120]& -3400 & +100000& $\\sim 0$& $\\sim 0$&[+19100;+24100] \\\\\n Linewidth & MHz & 0.5 & 0.5 & $<$ 0.01 & $<$ 6& 1& 0.1& $<$ 0.01\\\\\n Frequency accuracy & kHz &$<$ 100 & $<$ 100 & $<$ 100 & Not critical& $<$ 100& $<$ 100& $<$ 25\\\\\n Second frequency & NA &F=1$\\rightarrow$ F'=2 & F=1$\\rightarrow$ F'=2 & 6834+$\\omega_{R}$ & [0.075; -3.087] & No &6568& 6834-3 $\\omega_{R}$ \\\\\n Polarization & dB &20 & 20& 30 & 20& 20 & 30&30\\\\\n Power stability & NA &$< 1\\%$ & $< 1\\%$ & $< 0.1\\%$& $< 1\\%$& $< 0.1\\%$ & $< 1\\%$& $< 0.1\\%$\\\\\n Disruptive lines & NA &Tolerated & Tolerated & No & No& No& No&No\\\\ \\hline\n \\end{tabular}\n \\caption{Main specifications of the laser source for $T_{zz}$. Each specification is determined for one type of function. For instance the ``Raman\" column is for both the atom motion and the $\\pi\/2$ interferometer pulses since the specifications are identical. The required power is for a single Raman beam. Since they are used simultaneously, this value need to be multiplied by the number of beams in the total power budget. The two values for the laser power correspond respectively to one optical frequency. Disruptive lines can be produced if we use a phase modulator to create the second optical frequency. $\\omega_{R}$ is the recoil frequency. The difference about the laser parameters for $T_{xx}$ and $T_{yy}$ are the frequency detuning range ([-17150;-14650] MHz) and the laser power (22\/22 mW) of the interferometer $\\pi$ pulse. }\\label{laserspecifications}\n\\end{table}\n\\renewcommand{\\arraystretch}{1.0}\n\nA reference laser supplies an absolute optical frequency corresponding to the atom transition of Rubidium ($^{85}$Rb crossover transition $|F=3\\rangle\\rightarrow|F'=3(4)\\rangle$). Then all the laser frequencies are servo locked compared to this reference thanks to a beatnote and a lock-in electronics box. The laser system is composed of four other blocks corresponding to the functions of Table \\ref{laserspecifications}: one block for the cooling and the detection, two subsystems for the Raman transitions (one for the $\\pi$ pulse of the interferometer, one for the other pulses), and the last part corresponding to the implementation of the Bloch lattice. The wave vector modification of the $\\pi$ pulse interferometer $\\delta k$ implies an additional laser system. 
Indeed, the laser system to generate $\\pi$ and $\\pi\/2$ pulses of the atom interferometer cannot be the same because the frequency difference is too large and they are used simultaneously. The reference frequency for the $\\pi$ Raman pulse is produced using a phase modulator (PM 2) with a tunable Microwave Frequency Reference (MWFR). The reference frequency for the Bloch lattice laser is the fifth harmonics at +50 GHz of a frequency comb generated using a phase modulator (PM 1) with a DRO operating at 10 GHz.\n\nThe laser source for the cooling part is a standard DFB diode.\nA phase modulator (PM 3) with a DRO at 6.8 GHz creates the repumping frequency used for laser cooling in the 2D MOT. This beam is then amplified by an erbium-doped fiber amplifier (EDFA). Frequency doubling from 1560 nm to 780 nm is accomplished via SHG in a periodically-poled lithium niobate waveguide (PPLN-WG), where the confinement of the optical mode leads to high intensity and thus high efficiency. A similar architecture is implemented for the 3D MOT, the detection and the push beam except for the laser diode (External Cavity Diode Laser). A fibered splitter is used to separate the beams for the three functions.\n\nThe configuration of the Raman laser source is based on the generation of the two frequencies by distinct external cavity diodes (ECDL). The two lasers are phase locked thanks to a beat note on a photoconductor at the output of the laser system to avoid phase noise along the optical paths. We duplicate the Raman laser system for the interferometer $\\pi$ pulse which has a specific detuning and power compared to the $\\pi\/2$ pulses due to the shift of the wave vector $\\delta_{k}$. The required frequency difference between the $\\pi$ pulse and the $\\pi\/2$ pulses to compensate for $T_{zz}$ is +25 GHz. Since the gravity gradient is not constant during the full orbit and the full mission, this frequency shift has to be tunable over a range of $\\pm 2.5$ GHz. Moreover, the idea is to actively compensate for the phase shift due to the variation of the gravity gradient, and extracting the measurement from the needed frequency shift. This requires a control of the frequency shift with a relative accuracy of $10^{-6}$ to be compliant with the specification of 5 mE per shot for the gravity gradient measurement.\n\nThe laser source for the Bloch lattice subsystem is a DFB diode. To create the two frequencies, the beam is split into two paths and a different RF frequency is supplied on each fibered acousto-optic modulators (AOMs). The two beams are then recombined on the free space combination bench.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=16cm]{architecturelLasersourceArticleCAIv2.pdf}\\\\\n \\caption{Architecture of the laser source. Telecom components (green area for laser cooling, violet areas for the Raman pulses, orange area for Bloch oscillation) are used to develop a compact fiber-based laser system. Free-space optical benches (light pink areas) were implemented with 780 nm optical components to control the frequency of the reference laser (blue area) via saturated absorption spectroscopy and to control the power of the laser output via AOMs. DFB: Distributed Feedback Diode Laser. ECDL: External Cavity Diode Laser. PM: phase modulator. EDFA: erbium-doped fiber amplifier. PPLN-WG: periodically-poled lithium-niobate waveguide. MWFR: microwave frequency reference. DRO: dielectric resonator oscillator. AOM: acousto-optic modulator. FC: fiber coupler. PD: photodiode. PLL: Phase Lock Loop. 
The needed optical powers (violet) are used for the power budget. The frequencies and the power for the $\\pi$ pulse laser part on this figure are for $T_{zz}$.}\\label{architectureLasersource}\n\\end{figure}\n\n\n ZERODUR\\textregistered free-space platforms hold AOMs and mechanical shutters to switch on and off the output light. An assembly of polarizing cubes and waveplates splits and recombines the beams.\n Two laser sources with optical frequencies separated by the Rubidium clock transition frequency of 6.8~GHz are combined for the Raman pulses. The polarizations are crossed and the two outputs of the polarizing cubes are exploited in order to avoid power losses. A beat-note is detected on a photoconductor and the second laser source is phase locked to the first one.\n\n\\subsection{Reference mirror design}\n\nThe mirrors discussed in this section retroreflect the beamsplitting light fields. They serve as a reference for the effective wave fronts of the light fields used to coherently manipulate the atoms. Ideally, these effective wave fronts are flat and smooth over all atom-light interaction zones to avoid spurious phase contributions. This condition implies a parallel alignment of the surfaces in the cross-track direction, and well-aligned tilts in the other two directions which compensate for rotations~\\cite{Freier16CS,Dickerson2013PRL,Lan2012PRL} and lead to parallel effective wave fronts seen by the atoms. The retro reflection setup suppresses inhomogeneity of the incoming light field, but imperfections in the ideally flat mirror surface directly affect the effective wave front. Wave front distortions are a major contribution to the uncertainty in precision atom interferometers~\\cite{Freier16CS,Gauguet2009PRA,Louchet2011NJP,Berg2015PRL}; this noise can be reduced using colder atoms~\\cite{Louchet2011NJP,Karcher2018NJPL}, and further minimized with condensed~\\cite{Rudolph2015} or even collimated atoms~\\cite{Abend2016PRL,Kovachy2015PRL,Muntinga2013}. These considerations motivate the detailed discussion for the gradiometer. We assess the requirements on the mirror in three steps: (i) tilts, (ii) defocus and Gouy phase, (iii) higher order distortions, and propose a technical implementation. The following assessments treat a Gaussian distribution with an initial standard deviation of the position of $400\\,\\mu\\mathrm{m}$, and of the velocity of $100\\,\\mu\\mathrm{m\/s}$ for an atomic ensemble of $10^6$ atoms.\n\nFor assessing the impact of relative tilts of the effective wave fronts, we follow the description of Ref.~\\cite{Tackmann2012NJP}. To first order, and for small angles $\\alpha$, a tilt implies a change in the distance of the atoms to the retro reflection mirror of $d\\cdot\\sin{\\alpha}$. Here, $d$ denotes the distance between the pivot point and the projection of the position of the atoms onto the mirror surface in the direction of the beam splitting light field. Although the choice of $d$ has a strong impact onto the phase of a single atom interferometer, it is suppressed in the differential signal. This suppression depends on the overlap of the trajectories parallel to the mirror surfaces~\\cite{Fils2005EPJD}, which itself depends on the precision of the launching mechanism that transfers the atoms into the interferometer region, and ultimately on the shot-noise limited position and velocity uncertainty of the atomic ensembles.
For our gradiometer, we have to control phase noise, and can neglect an unknown, but stable phase shift.\n\\begin{figure}[tp]\n\\centering\n\\includegraphics[width=10cm]{tilt_mirror.pdf}\n\\caption{Impact on the phase of a relative tilt of the wave fronts.\nThe figure shows the case for a relative tilt $\\Delta_{2,3}$ of the wave fronts in direction of the forward drift velocity in the second and third interaction zone. While a significant impact on the single atom interferometer (black) is visible, the effect is strongly suppressed in the differential phase (red).\nThe blue curve exaggerates the phase by a factor of $10^4$ for clarity.\nThe perpendicular direction shows the same behaviour.\n}\n\\label{fig:tilt}\n\\end{figure}\nFigure~\\ref{fig:tilt} shows an example of a phase shift induced by the relative tilt between two effective wave fronts and its suppression in the differential signal when assuming a shot noise limited differential starting position and velocity (cross-track direction). From the slope of the curves we derive the requirement to keep relative tilts below $25\\,\\mu$rad [$0.5\\,\\mu$rad] if the relative position jitter between two subsequent cycles is below $2\\,\\mu$m [$100\\,\\mu$m], which is at or above the shot-noise limit ($<2\\,\\mu$m). This corresponds to a peak to valley of the mirror surface of 250 nm [5 nm] over a region with a diameter of 1 cm. The same requirements are valid for the other two axes (along track and nadir) when interpreting the relative mirror tilts as the deviation from the ideal alignment for compensating the rotations.\n\nIn a simplified model~\\cite{Louchet2011NJP}, an acceleration signal is contaminated by $\\delta\\,a=\\sigma_v^2\/R$ for an expansion rate $\\sigma_v$ of the atomic ensemble, assuming a Gaussian distribution, and a radius $R$ of a static effective wave front curvature.\nIn our more detailed model the wave front curvature changes as the beam propagates, and the effective wave front is generated by subtracting the retro reflected beam from the incoming one~\\cite{Schubert2013arXiv}. We propagate the beam parameters via ABCD matrices~\\cite{Kogelnik1966ApplOpt}, determining the wave front curvature and Gouy phase depending on the longitudinal position inside the light field. The wave front of the incoming beam depends on the numerical aperture of the fiber, a free propagation, the focal length of the lens for collimating the beam, and another free propagation to the atoms. The wave front of the retro reflected beam depends on the numerical aperture of the fiber, a free propagation, the focal length of the lens for collimating the beam, a free propagation to the mirror (past the atoms), the focal length of the lens describing the mirror, and a free propagation back to the atoms. We take the different positions of the atoms on the upper and lower trajectories of the interferometer into account, and calculate the phase for both interferometers forming the gradiometer. Our model also enables the evaluation of cases where the atoms are off center with respect to the beam. 
In addition, it introduces a weighting according to the Gaussian intensity profile, since transition probabilities at the edge of the beam degrade because of reduced Rabi frequencies, and atoms at these positions contribute less to the average phase.\n\nWe initially randomize the values for longitudinal positions of the optics in the beam (standard deviation of 5 $\\mu$m to 20 $\\mu$m), the focal length of the collimation lens (standard deviation of few 10 $\\mu$m), the velocities and positions of the atomic ensembles (shot noise limited standard deviation of 0.1 $\\mu$m\/s, and 0.4 $\\mu$m, respectively), and the curvature of the lens modelling the mirror (standard deviation of 300 m) to slightly deviate from the ideal value. For the mirror, we assume a mean curvature radius of $R$=5600 m. This corresponds to a peak to valley of $\\lambda$\/20 for $\\lambda$=780 nm and diameter of the mirror of 6 cm and leads to a reflected wave front with a curvature radius of $R$=2800 m. We individually scan the focal lengths of the collimation lens and the mirror for each interaction zone and keep other parameters fixed to evaluate the impact on the phase of the interferometer for approximate beam waists of 2.5 mm, 5 mm, and 10 mm. An example curve is shown in figure~\\ref{fig:defocus}.\n\\begin{figure}[tp]\n\\centering\n\\includegraphics[width=10cm]{defocus_mirror.pdf}\n\\caption{Impact on the phase for different focal lengths of the mirror.\nThe figure shows the case for an approximate waist of 10\\,mm, third interaction zone for recombination, in the tilted-mirror configuration.\nAs can be seen, the differential phase is robust against variations of the focal length within the given parameters, and significantly suppressed compared to a single atom interferometer (single ATI top \/ bottom). To better illustrate the phase behavior, the blue signal has been multiplied by 100 and reported in purple. The other two zones have a qualitatively similar behaviour.\n}\n\\label{fig:defocus}\n\\end{figure}\nIn addition, we take the standard deviation of the phase for 1000 shots with randomized values for the optics positions in longitudinal direction (standard deviation of few $\\mu$m), the positions of the beam splitting axes in transverse direction (standard deviation 100 $\\mu$m), and the velocities and positions of the atomic ensembles (shot noise limited).\n\nFor the cross-track direction, with an approximate waist of 2.5 mm [5 mm, 10 mm], we estimate a phase noise per shot in the gradiometer signal of about 1 mrad [0.2 mrad, 0.05 mrad] and a phase bias of about 4 mrad [4 mrad, 1 mrad] or less for a deviation of the focal length of the collimation lens from the optimum of $\\leq5\\,\\mu$m [40 $\\mu$m, 40 $\\mu$m] or less and a deviation of the focal length of the mirror of $\\leq$600 m. For waists above 2.5 mm, the Gouy phase dominates the phase excursion in a single atom interferometer. Although it still dominates the residual phase shift in the differential signal for the gradiometer in our simulation, it is sufficiently suppressed.\n\nFor simulating the other two axes, we add displacements to the positions of the atoms with respect to the beam center as required by the tilted-mirror configuration. 
Again using an approximate waist of 2.5 mm [5 mm, 10 mm], we estimate a phase noise per shot in the gradiometer signal of about 3 mrad [1 mrad, 3 mrad] and a phase bias of about 10 mrad [5 mrad, 1 mrad] or less for a deviation of the focal length of the collimation lens from the optimum of $\\leq5\\,\\mu$m [40 $\\mu$m, 40 $\\mu$m] or less and a deviation of the focal length of the mirror of $\\leq$600 m. Here, the quality of the input collimation dominates the residual shift for waists of 2.5 mm and 5 mm, and for 10 mm the dependence on the Gouy phase is qualitatively similar to the previous case. To push the noise of the configuration with a waist of 10 mm below 1 mrad, the standard deviation of the positions of the beam splitting axes in the transverse direction has to be limited to 25 $\\mu$m.\n\nConsequently, this validates our choice of a mirror with a curvature radius of about $R$=5600 m, and sets the requirement of a minimum beam waist of 5 mm.\n\nTo simulate higher order distortions, we follow the approach of Refs.~\\cite{Fils2005EPJD,Gauguet2009PRA,Tackmann2009Dipl} by parameterizing the atomic trajectories and calculating the local phase shifts imprinted by the atom-light interactions, which are affected by imperfect mirror surfaces, represented by reference objects. Here, we do not propagate the wave fronts, and simply assume the effective wave fronts to be a copy of the surface inhomogeneities. The three reference objects have a peak-to-valley figure of about $\\lambda$\/20 [$\\lambda$\/75] and a root mean square of $\\sim\\lambda$\/100 [$\\lambda$\/150] over the whole surface [central region, diameter 1 cm]. Due to reflection, these values are doubled in the simulation. We take the standard deviation of 10 averages with 400 shots each for randomized positions [velocities] of an atom with a standard deviation of 400 $\\mu$m [100 $\\mu$m\/s]. This procedure is repeated for four different center positions of the reference objects and two different sequences of the reference objects to rule out readings from particularly good or bad spots. The simulation returns an estimate for the differential phase noise of 10 mrad per shot, consistent between the individual initializations. This is a factor of 10 above the requirements. Since the results indicate uncorrelated noise between the two single interferometers contributing to the gradiometer signal, we expect these results to hold for the tilted mirror configuration.\n\nThe simulations imply the requirements of having a local peak-to-valley figure of $\\leq\\lambda$\/1000, and a local root mean square of $\\leq\\lambda$\/1000 ($\\sim$1 nm). For designing appropriate optics, the simulation can be adapted by replacing the reference objects with two blanks and an object whose surface is defined by a single Zernike polynomial~\\cite{Prata1989}, and determining the required pre-factors.\n\nSummarizing these assessments, the requirements are a peak-to-valley figure of $\\lambda$\/20 for a mirror with a diameter of 6 cm, a peak-to-valley figure of $\\lambda$\/1000 ($\\sim$1 nm) in the central region with a diameter of 1 cm, a root mean square of $\\lambda$\/1000 ($\\sim$1 nm) in the central region with a diameter of 1 cm, and a maximum relative tilt of the effective wave fronts in the beam splitting zones below 0.5 $\\mu$rad.
When assuming a fused silica substrate for the mirrors, temperature gradients have to be limited below 63 K\/m to avoid distortions which violate the requirements above.\n\nMirrors in gravitational wave detectors such as VIRGO~\\cite{Beauville2004CQG,Tesar1992ApplOpt} and LIGO~\\cite{Granata2016PRD,Acernese2015CQG} are based on fused silica substrates coated by ion beam sputtering with SiO$_2$ and Ta$_2$O$_5$ or TiO$_2$. Here, the surface quality of the substrate dominates the inhomogeneity after coating. Test substrates with a diameter of 48 mm and a thickness of 12 mm were polished to a peak-to-valley figure below 1 nm and a root mean square of 0.2 nm~\\cite{Middleton2006OptEng,Antonucci2011CQG,Harry2010CQG}. In LIGO, a test sample of a coated substrate reached a roughness below 0.5 nm root mean square, and the requirement on controlling the curvature of the mirror at the level of $\\pm$2 m implies a control of about 1 nm in the peak-to-valley figure.\n\nSince these values are compatible with our requirements, we propose to use the same mirror technology for the gradiometer. For the cross-track axis we propose a single mirror covering all three atom-light interaction zones for passive stability, with a size of 6$\\, \\times \\,$6$\\, \\times \\,$31 cm$^3$, leading to a mass of 2.5 kg, and for the other two axes three separate mirrors with a diameter of 6 cm and a thickness of 1.5 cm. The mirrors should be mounted inside the vacuum system to avoid additional distortions by viewports. In the axes with the tilted mirror configuration, the outer ones require motorized mirror mounts for initial alignment and adjustment to the actual rotation rate.\n\n\n\n\\subsection{Magnetic field}\n\n\\subsubsection{Requirements} \\label{Sec:MagFieldReq}\n\nFor the interferometer region a static bias field of, e.g., $B$=100 nT aligned with the Raman lasers is applied to provide a quantization axis for the atoms. External magnetic fields need to be attenuated below this level by passive shielding, and temporal and spatial fluctuations of the magnetic fields need to be suppressed, since they cause phase shifts due to the second order Zeeman effect and thus bias the measurement of the gravity gradient. \n\nWith a gradient in the magnetic field $\\nabla B$ and the quadratic Zeeman shift in energy $\\delta E=hKB^2$ with $K$=575 Hz\/G$^2$, the atoms accelerate by $\\delta a = \\frac{-2hK}{m} B\\nabla B$. Given the same constant spatial gradient in both interferometers, the differential acceleration between the two interferometers then is $\\Delta(\\delta a)=\\frac{-2hK}{m}(\\nabla B)^2 \\Delta z$. This results in a bias on the gravity tensor \n\\begin{equation}\n\\Delta T_{zz}=\\frac{\\Delta (\\delta a)}{\\Delta z} = \\frac{-2hK}{m}(\\nabla B)^2.\n\\end{equation}\nIn order to reduce the bias on the gradiometer phase below 1 mrad, the magnetic field gradient needs to be reduced below 60 nT\/m.\n\nSimilarly, a difference of the time averaged field $\\langle B \\, \\rangle$ between the first and second half of the pulse sequence leads to a bias in the phase measurement of $\\Delta \\phi = 4 \\pi K B \\Delta \\langle B \\, \\rangle T$. Thus with $B$=100 nT a field fluctuation by $\\Delta \\langle B \\, \\rangle$=14 nT causes a phase shift of 1 mrad. This is then subject to common mode suppression in a differential measurement depending on the spatial and temporal correlations of the fields at the two interferometers.
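\n\nAs an illustration, the short sketch below (Python) evaluates the bias on $T_{zz}$ induced by a given magnetic field gradient from the expression above, using the $^{87}$Rb values for $m$ and $K$. Converting this tensor bias into a phase bias additionally involves the effective wavevector, the gradiometer baseline and the interrogation time, which is how the 60 nT\/m value quoted above translates into a phase bias at the mrad level.\n\\begin{verbatim}\nh = 6.62607015e-34      # J s\nm = 1.443e-25           # 87Rb mass (kg)\nK = 575e8               # 575 Hz\/G^2 expressed in Hz\/T^2\n\ndef tensor_bias(gradB):\n    # bias on T_zz (s^-2) for a magnetic-field gradient gradB (T\/m)\n    return 2*h*K\/m * gradB**2\n\nfor gradB in (10e-9, 60e-9, 200e-9):        # T\/m\n    print(f'{gradB*1e9:5.0f} nT\/m  ->  {tensor_bias(gradB)\/1e-12:6.2f} mE')\n\\end{verbatim}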
\n\nAll of the above requirements need to be met, as the satellite moves in Earth's magnetic field of $B_{\\rm{earth}} \\approx 40~\\mu$T, causing a change of the field component projected on the interferometer axis of up to 60 nT\/s. Thus, an efficient mumetal magnetic shield is required to provide a suitable magnetic field environment \\cite{Dickerson2012}.\n\n\\subsubsection{Magnetic shield design} \\label{Sec:MagShieldDesign}\n\nTo provide shielding of the interferometer against external magnetic fields, we consider a passive multilayer mumetal shield. The effectiveness of the magnetic shield can be described by the shielding factor\n\\begin{equation}\nS = \\frac{B_{outside}}{B_{inside}},\n\\end{equation}\ndefined as the ratio of the total initial magnetic field outside to the total residual magnetic field inside the shield. The magnetic field requirements derived above thus translate into a required shielding factor of $S>1000$.\n\nTo verify that our design complies with this requirement, we performed simulations based on finite-element modeling (FEM), similar to \\cite{MuShield2016}. The modeled mumetal shield was placed inside a static magnetic field $B_{\\rm{earth}}$=40 $\\mu$T and the residual field at the interferometer region was calculated to determine the shielding factor and the resulting magnetic field gradient along the interferometer area. Starting from a cylindrical design, the best trade-off between shielding effectiveness, dimensions and mass was found with elliptical shields. The 3-layer magnetic shield design, shown in figure~\\ref{fig:MagShieldDesign}, consists of one overall outer layer and two inner layers for the 2D-MOT\/BEC chamber and the CAI chamber, respectively. The gaps between the layers were set to 20 mm and each layer has a thickness of 1 mm.\n\n\\begin{figure}[ht]\n\\centering\t\n\\includegraphics[width=0.35\\textwidth]{Design_TN3_ellipse_V1.png}\n\\caption{Magnetic shield design, consisting of one 1 mm outer layer and two 1 mm inner layers for each vacuum chamber. The blue dotted area depicts the location of the atoms inside the shield.}\n\\label{fig:MagShieldDesign}\n\\end{figure}\n\nThe FEM simulation results in a shielding factor of $S_{x,y,z}\\approx(49\\,700, 9\\,900, 4\\,800)$ for the interferometer area, which complies with the requirement. The actually achievable shielding factor will of course depend on the fabrication and on details of the design, such as the location of necessary feedthroughs, which are not yet included here. Thus, we take these results only to confirm that a three-layer shield should be capable of meeting the requirement in principle. For the magnetic gradient we observe a maximum value of $\\Delta B_{x,y,z}=(0.74, 2.7, 4.6)$ nT\/m, also fulfilling the requirement. These values are determined from the maximum observed differences of the total residual fields over the two interferometer regions and thus represent an upper limit to the gradient.\n\n\n\\subsection{Payload architecture}\n\n\\begin{figure}[ht]\n\\centering\t\n\\includegraphics[width=0.8\\textwidth]{FuncDiagUpdate.png}\n\\caption{Functional diagram of a single axis gradiometer instrument in CAI.}\n\\label{fig:FuncDiag}\n\\end{figure}\n\nHere we consider a satellite in nadir-pointing configuration with a payload of three differently orientated instruments to measure the gravity gradient in each spatial direction: $T_{xx}$, $T_{yy}$ and $T_{zz}$.
Each of the three instruments is divided into three functional units: the physics package, the laser system and an electronics unit as depicted in the functional diagram of figure~\\ref{fig:FuncDiag}. In table \\ref{Tab:zOverview}, we give a breakdown of the mass and power budget for the single $T_{zz}$ instrument into these three units. The mass and power for the $T_{xx}$ and $T_{yy}$ sensors slightly differs due to the additionally required tip-tilt mirror. This budget is derived from the design presented in section \\ref{Sec:Design} and based on state of the art technology of cold atom experiments in microgravity \\cite{Rudolph2015,Muntinga2013}. \n\n\\begin{table}[ht]\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|}\n\t\t\\hline\n\tSystem & Mass (kg) & Power (W) & Size (mm)\\\\\n\t\t\\hhline{====}\n\t\tPhysics Package & 160.4 & 128.4 & (1052 x 444 x 805)\\\\\n\t\t\\hline\n\t\tLaser System & 51.1 & 104.2 & (300 x 300 x 400)\\\\\n\t\t\\hline\n\t\tElectronic System & 52.4 & 607.9 & (300 x 300 x 1000) \\\\\n\t\t\\hhline{====}\n\t\t{\\bf Total } & \\bf 264.0 & \\bf 840.9 & (1052 x 750 x 805) \\\\ \t\t\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Mass and power budget for a $T_{zz}$ instrument including the tip-tilt mirror design. A component margin of 20\\% on mass and power is included here.}\n\t\\label{Tab:zOverview}\n\\end{table}\n\nFrom the basic vacuum chamber design in figure~\\ref{VacuumSchematics}, we estimate the size of the physics package for a single axis instrument to be (1052$\\, \\times \\,$444$\\, \\times \\,$805) mm$^3$ including the elliptical magnetic shield design, discussed in section \\ref{Sec:MagShieldDesign}. An arrangement of all three instruments including lasers and electronics then results in an approximate estimate of the payload size of (1054$\\, \\times \\,$1054$\\, \\times \\,$1600) mm$^3$. This allows for an elongated satellite shape similar to that of GOCE with a front surface slightly larger than 1 m$^2$, which is important to minimize the residual atmospheric drag of the satellite. The power and mass of a full 3-axis instrument including a $20\\%$ margin is estimated at 785 kg and 2940 W. However we expect that ongoing technology development should allow for a significant reduction of these values.\n\n\\section{Performance analysis and optimum measurement bandwidth}\n\n\\label{Performance_analysis}\n\nIn this section, we quantify precisely how the CAI gravity gradiometer can improve our knowledge on the Earth's gravity field. As such, we designed a closed-loop numerical simulator and applied it to evaluate the performance of the CAI gradiometer. We focused our analysis on the nadir pointing mode with the compensation of rotation. \n\n\\subsection{Closed-loop simulator}\n\\label{Simulator}\n\nIn order to represent the chain of the measurement process realistically, we carried out this study through a closed-loop simulation in the time domain. This approach is flexible and provides the possibility to precisely quantify the errors in terms of gravity field solutions. \n\nA general overview of the closed-loop simulation workflow is given in figure ~\\ref{Fig_Workflow}. On the one hand, the simulator takes a gravitational model and the noise-free time series of the satellite orbit, angular velocity and attitude, and, on the other hand, the spectral or statistic characteristics of the sensors' noise, to synthesize realistically degraded observables, namely the estimated orbit, angular velocity, attitude and gradients. 
These synthesized observables are finally used to derive a gravity field model. The comparison of the estimated gravity field model with the original one enables then to precisely characterize the error in the frequency and spatial domain. \n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{Workflow.pdf}\n\t\\caption{Workflow diagram of the simulation principle, focused on the processing of the gravity gradients.}\n\t\\label{Fig_Workflow}\n\\end{figure}\n\n\\subsection{Gravity field recovery}\n\\label{Recovery}\n\nThe Earth's global gravity field is usually expressed in terms of a Spherical Harmonics (SH) series \\cite{Hofmann-Wellenhof2006} as \n\\begin{equation}\n\t\\label{eq_gravity_field_model}\n\tV (r,\\theta,\\lambda) = \\frac{GM}{R} \\sum_{n=0}^{N} (\\frac{R}{r})^{n+1} \n\t\\sum_{m=0}^{n} \\left[ \\bar{C}_{nm} \\cos m\\lambda + \\bar{S}_{nm} \\sin m\\lambda \\right] \\bar{P}_{nm}(\\cos \\theta), \n\\end{equation}\nwhere $GM$ is the gravitational constant of the Earth, $R$ is the radius of Earth, $(r,\\theta,\\lambda)$ are spherical coordinates of a point on the Earth surface ($r$ radius, $\\theta$ co-latitude, $\\lambda$ longitude), $n,m$ denote SH degree and order, $N$ denotes the maximum degree of the model expansion (in theory, the maximum degree is $\\infty$), $\\bar{P}_{nm} (\\cos \\theta)$ are fully normalized associated Legendre functions, and $\\bar{C}_{nm}, \\bar{S}_{nm}$ are the normalized SH coefficients, which are the unknowns of the gravity field solution. \n\nThe gravity gradients $V_{ij}$ are point-wise measurements of the second-order partial derivatives of the gravity potential. They are usually delivered in the instrumental reference frame, e.g., the Gradiometer Reference Frame (GRF), while the Earth's gravity field model is expressed in the Earth-fixed Reference Frame (ERF). We thus have to transform the gravity gradients and the gravity field model to the same reference frame by\n\\begin{equation}\n\t\\label{eq_Func_model}\n\tV_{ij} = R \\frac{\\partial^2 V}{\\partial x_i \\partial x_j} R^T ,\n\\end{equation}\nwhere $R$ represents the rotation matrix between different reference frames. This equation represents the observational equation for the gravity gradients in the frame of gravity field recovery. \n\nDue to the large amount of observations and the large number of unknowns, it forms a large-scale and over-determined linear equation system for the determination of the gravity field model. The classic Least-Squares (LS) adjustment is applied to solve this linear equation system, which poses a great numerical challenge because of the high computational requirements in terms of both time and memory. The computation of this part is mainly accomplished on the clusters of Leibniz Universit\\\"at IT Services (LUIS)\\footnote{https:\/\/www.luis.uni-hannover.de\/luis.html}. \n\n\\subsection{Data}\n\\label{Data}\n\nThree kinds of observations are required for gravity field recovery, including satellite's orbit, attitude and gravity gradients. Orbit data is mainly used to geolocate other observations, while attitude data is necessary for the setup of the rotation matrix between different reference frames. The gravity gradients are the primary observations for the retrieval of the SH coefficients. \n\n\\subsubsection{Synthesis of noise-free data}\n\\label{Noise_free_obs}\n\nSince the gravity field signal attenuates quickly with altitude, it is important to have the satellite's orbit as low as possible. 
For the sake of rigorous comparisons to GOCE, we have chosen 71 days of the GOCE orbit, from 2\\textsuperscript{nd} March to 10\\textsuperscript{th} May, 2013. During this period, the orbit had an average altitude of 239 km. Note that this altitude is lower than the standard GOCE altitude of typically 259 km. The time series of the orbit coordinates have been interpolated at 1 Hz from the GOCE L2 dynamic orbit data sampled at 0.1 Hz using spline functions. Based on the orbit data, the noise-free gravity gradients are computed using the gravity field model ``Eigen-6c4\" \\cite{Forste2014} up to SH degree and order of 360. \n\nThe generation of the angular velocity $\\bm{\\Omega}$ is indispensable for the determination of attitude and rotations. This is done in two steps: first, the instantaneous angular velocity $\\omega_{\\rm{LORF}}$ of the Local Orbital Reference Frame (LORF) with respect to the Inertial Reference Frame (IRF) is computed from the orbital position and velocity, and then assigned to the component $\\omega_y$ so that the main rotation of the instrument frame, i.e., GRF, at the orbital frequency is indeed about the y-axis; second, a residual zero-mean angular velocity $\\delta \\bm{\\Omega} = (\\delta \\omega_x, \\delta \\omega_y, \\delta \\omega_z)^T$ is added so that we finally have\n\\begin{equation}\n\t\\label{angular_velocity}\n\t\\bm{\\Omega} = \\left(\n\t\\begin{matrix}\n\t\t \\delta \\omega_x \\\\\n\t\t\\omega_{LORF} + \\delta \\omega_y \\\\\n\t\t\\delta \\omega_z \n\t\\end{matrix}\t\n\t\\right) .\n\\end{equation}\n\n\\subsubsection{Synthesis of noisy data}\n\\label{Noisy_obs}\n\nWe assumed a zero-mean, normally distributed error with a standard deviation of 2.6 cm, added to the initial orbit coordinates. This precision was typically achieved with GOCE \\cite{Bock2014}. \n\nWe assumed that three star-trackers and a gyroscope are on board the satellite for the determination of the angular velocity. Both kinds of sensors give estimates of the three components of the angular velocity, which are then optimally combined in the frequency domain using a Wiener filter, as in the processing of the angular velocity of GOCE, see \\cite{Stummer2011}. This method requires a model of the PSD of the noise of both sensors. The assumed noise models of the two sensors, with their spectral characteristics, are shown in figure~\\ref{PSD_Angular_Velocity}. For more details on this aspect, we refer to reference~\\cite{Douch2018}. \n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{PSD_Angular_Velocity.pdf}\n\t\\caption{Amplitude spectral density of the noise degrading the estimation of one component of the angular velocity of the instrument frame, i.e., GRF, with respect to the inertial frame.}\n\t\\label{PSD_Angular_Velocity}\n\\end{figure}\nThe errors of the gravity gradients are the sum of the gradiometer noise and the error due to the correction of the centrifugal terms. An additive white noise is assumed for the gradiometer, with a PSD amplitude of 5 mE\/Hz$^{1\/2}$. The centrifugal terms are the squares of the angular velocities, whose noise has been discussed above. \n\n\\subsection{Results}\n\\label{Results}\n\nGlobal gravity field models were finally recovered from the synthesized noisy observations. They were resolved up to SH degree and order of 240, including 58 077 unknowns. A schematic illustration of the underlying least-squares adjustment and of the formal errors discussed below is given in the short sketch that follows. 
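The following toy example is only meant to fix ideas: it solves a dense least-squares problem with a handful of unknowns, whereas the actual adjustment of the 58 077 SH coefficients is performed on the LUIS clusters. The design matrix, the number of observations and the noise level are illustrative assumptions.
\\begin{verbatim}
# Schematic LS adjustment: x = (A^T A / s^2)^(-1) (A^T y / s^2), with formal
# errors taken from the diagonal of the inverse normal matrix. Toy sizes only.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 2000, 10                         # toy dimensions (real case: ~58,000 unknowns)
A = rng.standard_normal((n_obs, n_par))         # partials of the gradients w.r.t. C_nm, S_nm
x_true = rng.standard_normal(n_par)             # "true" coefficients (placeholders)
sigma = 3.5e-12                                 # 1/s^2; 5 mE/Hz^(1/2) white noise sampled
                                                # at 1 Hz gives roughly 3.5 mE per sample
y = A @ x_true + rng.normal(0.0, sigma, n_obs)  # synthetic noisy observations

N = A.T @ A / sigma**2                          # normal matrix
b = A.T @ y / sigma**2
x_hat = np.linalg.solve(N, b)                   # estimated coefficients
formal = np.sqrt(np.diag(np.linalg.inv(N)))     # formal errors (1-sigma)
print(np.max(np.abs(x_hat - x_true)), formal[0])
\\end{verbatim}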
As pointed out previously, the diagonal gradient components, i.e., $V_{xx}, V_{yy}, V_{zz}$, are available in the nadir pointing mode with compensation of the rotation. We thus derived three component-wise gravity field solutions as well as a combined solution from all three components. To evaluate the performance of these models, both the true errors (the differences between the recovered SH coefficients and the input background model, i.e., Eigen-6c4) and the formal errors (the accompanying standard deviations of the parameters, obtained in the LS adjustment) are analyzed. \n\nThe formal errors of the component-wise and combined solutions are displayed in figure~\\ref{Formal_errors}. They show that the gradient components are sensitive to different parts of the gravity field. For instance, $V_{xx}$ is more sensitive to lower order zonal and near-zonal coefficients, i.e., around order zero, but less sensitive to higher order coefficients. In this regard, $V_{yy}$ exactly complements $V_{xx}$ with the inverse sensitivity. It contributes mainly to non-zonal coefficients, i.e., the sectorial coefficients. The reason for the inverse and complementary contribution is related to the orientation of the corresponding gradiometer pairs. Compared to $V_{xx}$ and $V_{yy}$, $V_{zz}$ is sensitive to all orders of the coefficients, in accordance with Laplace's equation, $V_{zz} = - (V_{xx} + V_{yy})$. The contribution of $V_{zz}$ is identical to the combination of $V_{xx}$ and $V_{yy}$. This can be demonstrated indirectly by comparing the $V_{zz}$ component solution with the combined solution, both of which show quite similar patterns. Here, we would also like to mention that the zonal and near-zonal coefficients are determined with degraded accuracy for all solutions. This is attributed to the GOCE orbit, which leaves polar gaps where no observations are available \\cite{Sneeuw1997}. \n\\begin{figure}\n\t\\centering\n\t\\subfigure[$V_{xx}$]{\n\t\t\\includegraphics[width=0.47\\textwidth]{Formalerr_Nadir_Vxx.png}}\n\t\t\\hfill\n\t\\subfigure[$V_{yy}$]{\n\t\t\\includegraphics[width=0.47\\textwidth]{Formalerr_Nadir_Vyy.png}} \\\\\n\t\\subfigure[$V_{zz}$]{\n\t\t\\includegraphics[width=0.47\\textwidth]{Formalerr_Nadir_Vzz.png}}\n\t\t\\hfill\n\t\\subfigure[Combined]{\n\t\t\\includegraphics[width=0.47\\textwidth]{Formalerr_Nadir_Combined.png}}\t\n\t\\caption{Formal errors of the component-wise gravity field solutions, in logarithmic scale.}\t\n\t\\label{Formal_errors}\n\\end{figure}\n\nThe contribution analysis indicates that the one-axis nadir pointing mode has inherent deficits for determining the Earth's gravity field: only a part of the gravity field signal can be precisely retrieved from the $V_{yy}$ component. However, when the compensation of the rotation is applied in this mode, the other two components $V_{xx}$ and $V_{zz}$ can be obtained with a comparably high accuracy. This three-axis mode can thus integrally capture the gravity field signal. \n\nThe degree medians of the true errors for all recovered models are shown in figure~\\ref{Degree_error}. The degree medians, which are more robust with respect to the degraded coefficients, are used to represent the error amplitude for each degree. For a better understanding, the errors are expressed in terms of geoid height. The solutions from $V_{yy}$ in the one-axis and the three-axis mode show the same performance, as the same signal-to-noise ratio (SNR) of the observations has been assumed. 
Similarly, the error curve of the $V_{xx}$ component is at a level comparable to that of $V_{yy}$. However, the solution from $V_{zz}$ is about a factor of two better than those of $V_{xx}$ and $V_{yy}$. This is mainly due to the doubled signal power; see \\cite{Douch2018} for a more detailed discussion. Summing up the contributions of all components, the combined solution shows the best performance. It is much better than the component-wise solutions of $V_{xx}$ and $V_{yy}$ but only marginally better than the $V_{zz}$ component solution. The combination of all components by optimal weighting indicates that the combined solution is dominated by the $V_{zz}$ solution. \n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=0.6\\textwidth]{DM_Nadir_case_three_axis.pdf}\n\t\\caption{Degree medians of the true errors for the recovered models in three-axis nadir pointing mode, expressed in terms of geoid height. For comparison, the $V_{yy}$ solution in the one-axis mode was plotted as well.}\t\n\t\\label{Degree_error}\n\\end{figure}\n\n\\subsection{Discussion}\n\\label{Discussion}\n\nWe now analyze to what extent the CAI gradiometer concept can outperform GOCE when assuming a nominal gradiometer noise of 5 mE\/Hz$^{1\/2}$. To this end, we extrapolate the error of these models from 71 days to 8 months, assuming the error is purely stochastic and decreases as $1\/\\sqrt{t}$, where $t$ is the time of integration. To gain an idea of the GOCE solution error, we compute a gravity field model based on the three diagonal gravity gradients of the whole GOCE mission period (November 2009 -- October 2013, about 47 months). Since the comparison concerns only the contribution of the gradiometer, we have not taken into account the GOCE high-low satellite-to-satellite tracking data that is mainly responsible for the recovery of the low-degree gravity field coefficients. \n\nThe 8-month solution for the three-axis nadir pointing mode is better than the GOCE solution for the whole mission period, as shown in figure~\\ref{Compare_GOCE}. We can thus conclude that an 8-month mission at an altitude of 239 km, using a 3-axis CAI gradiometer in the nadir pointing mode with a nominal white noise of 5 mE\/Hz$^{1\/2}$, would outperform the full GOCE mission and yield a more precise gravity field model. Nonetheless, in this comparison, it should be kept in mind that the GOCE satellite spent most of its lifetime at an altitude higher than 239 km, with about 3 months at 239 km and 5 months below 239 km. This fact shows that it is technically possible to fly a satellite at an altitude as low as 239 km for a duration of 8 months. \n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{DM_final_eigen6c4.pdf}\n\t\\caption{Degree medians of the true errors for gravity field solutions that are extrapolated to 8 months for the CAI gradiometer as well as the errors for the GOCE SGG (Satellite Gravity Gradient) solutions. Two GOCE SGG solutions are included here. One is based on the gravity gradients of the whole mission period, while the other one corresponds to the period where the satellite was in lower orbits.} \n\t\\label{Compare_GOCE}\n\\end{figure}\n\nFor a fair comparison, we have also plotted the GOCE solution for the 8 months where the satellite was at an altitude equal to or lower than 239 km. Again, the 8-month CAI gradiometer in the three-axis nadir mode at a constant altitude yields a better solution, despite the fact that the 8-month GOCE solution is partly based on gravity gradients measured at a lower altitude. 
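The error extrapolation used in this comparison is simple enough to be stated explicitly. The following sketch applies the assumed stochastic $1\/\\sqrt{t}$ scaling and converts the coefficient errors of one degree into a degree median in terms of geoid height; the coefficient errors in the example are placeholder values, not the actual solution.
\\begin{verbatim}
# Extrapolate a purely stochastic error from 71 days to ~8 months and express a
# degree median of coefficient errors as geoid height. Placeholder numbers only.
import numpy as np

t_sim  = 71.0                    # days simulated (2 March - 10 May 2013)
t_goal = 8 * 30.44               # days, roughly 8 months
scale  = np.sqrt(t_sim / t_goal) # error reduction factor, ~0.54

R = 6378136.3                    # m, reference radius (approximate)
dC = np.abs(np.random.default_rng(0).normal(0.0, 1e-11, size=101))  # |dC_nm| of one degree
degree_median_geoid_cm = np.median(dC) * R * 100.0 * scale          # in cm
print(scale, degree_median_geoid_cm)
\\end{verbatim}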
\n\n\n\n\n\\section{Conclusion}\n\n\nWe have carried out a detailed design of a 3D gradiometer based on atom interferometry for space geodesy. We have performed a detailed analysis and modelling of the atomic signals and of the constraints on relevant parameters (i.e. atomic source, interferometer geometry and attitude control of the satellite). The implementation of cancellation methods for the large rotation rate at the orbital frequency when operating Nadir, and for the gravity gradient, allows for reducing dephasing and systematic effects and for extracting the signal with maximal sensitivity. With an expected sensitivity of 5 mE\/Hz$^{1\/2}$ (PSD), we show a two-fold improvement on the gravity field recovery for degrees above 50, and significantly better for lower orders, when comparing an 8-month solution at an altitude of 239 km with the model obtained from GOCE data over its whole mission duration. \n\nThe determination of the optimal gain requires a realistic mission scenario, which remains to be investigated. As inputs, such mission-oriented study would use the constraints which we have determined for the attitude control and the overall size, weight and power (SWaP) budget of the total instrument. This budget has been established considering existing and available technology, and certainly needs to be reduced to end up with a more reasonable load. Possible modifications to the design, such as sharing subsystems between the instruments, would certainly help, but it is clear that a number of specific technological and engineering efforts are also required, in particular directed towards the optimization of the power consumption. This challenging task motivates on-going and future research and development activities. This concerns not only the technological efforts mentioned above, to improve for instance the generation of BEC sources on atom chips or the compactness and power consumption of fiber-based laser systems, but also the validation of the instrument concept. Indeed, if the key scientific methods, such as bloch-lattice transport, double Raman diffraction or interleaved measurements, have for most of them been demonstrated individually, demonstration activities combining several, and in the end all, of them in a single setup in a representative environment, need to be pushed. This calls for carrying prototyping activities, such as developing an elegant breadboard model of the sensor and characterizing it in a relevant environment. A thorough assessment of the performances of such a prototype will establish gradiometers based on cold atom interferometry as appealing sensors for future gravity missions aiming at improving our knowledge of the Earth's gravity field.\n\n\\section*{Acknowledgments}\n\n\\addcontentsline{toc}{section}{Acknowledgments}\n\nThis work has been carried out in the context of the ``Study of a Cold Atom Interferometer Gravity Gradiometer Sensor and Mission Concepts\", supported by the European Space Agency through Contract No. 
4000112677\/14\/NL\/MP.\nThe authors affiliated to the IQ acknowledge financial support by ``Nieders\\\"achsisches Vorab\" through the ``Quantum- and Nano- Metrology (QUANOMET)\" initiative within the project QT3, and by the German Space Agency DLR with funds provided by the Federal Ministry of Economics and Technology (BMWi) under the grant numbers 50 WP 1431 and 1700.\n\n\n\\pagebreak\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe distribution of dust in the interstellar medium (ISM) is coupled to the evolution of galaxies. Even though only about a percent of the gas in the ISM is condensed in the form of dust grains, they play an enourmous role for galaxy evolution. There is a plethora of processes occuring on the surfaces of dust grains, which greatly affect the chemical and thermal history of galaxies. One of these processes, the efficient formation of molecular hydrogen on grain surfaces has important consequences for star formation as it leads to the formation of molecular clouds which eventually form stars \\citep{2002MNRAS.337..921H, 2004ApJ...604..222C, 2011ApJ...735...44Y}. By absorbing ultra-violet (UV) light and reemitting in the infrared (IR), the dust also acts as a coolant itself, leading to star formation in dusty clouds \\citep{2005ApJ...626..627O}, shaping the initial mass function (IMF) \\citep{2012MNRAS.419.1642C} and spectral energy density (SED) of observations of galaxies \\citep{2005MNRAS.362..592T} as well as determining the typical range of stellar masses \\citep{2006MNRAS.369.1437S}. Since smaller grains have a larger surface area per unit mass than larger grains, the total available surface area for these processes is not just determined by the total abundance of dust, but also by the grain size distribution (GSD). In particular, the shape of GSD affects the efficiency of the rate at which processes occuring on grain surfaces happen \\citep[e.g.][]{2011ApJ...735...44Y, 2017ApJ...837...78H} as well as shape of the dust extinction curve \\citep[e.g.][henceforth MRN]{1977ApJ...217..425M}. A complete understanding of the evolution of the GSD is thus of critical importance for a complete picture of galaxy evolution.\n\n\\citet{2013MNRAS.432..637A} have developed a comprehensive model for the evolution of the GSD in the ISM, in which large dust grains with sizes $a \\sim 0.1 \\mu\\text{m}$ \\citep{2012ApJ...745..159Y} condense from the metals in supernova (SN) ejecta \\citep[e.g.][]{1989ApJ...344..325K, 2007MNRAS.378..973B, 2007ApJ...666..955N} and the stellar winds of asymptotic giant branch (AGB) stars \\citep[e.g.][]{2006A&A...447..553F, 2014MNRAS.439..977V, 2017MNRAS.467.4431D}. These grains are then processed through a number of processes. In sufficiently warm and diffuse gas, large grains are efficiently shattered, leaving behind small fragments \\citep[e.g.][]{2009MNRAS.394.1061H}. In the cold, dense and metal enriched ISM small grains can efficiently grow through accretion of gas phase metals and stick together to form larger grains in a process known as coagulation \\citep[e.g.][]{2012MNRAS.422.1263H}. \nIn regions where the metallicity exceeds $\\sim 0.1\\, Z_{\\sun}$, accretion can lead to a rapid growth of small grains until it saturates once most gas phase metals have been used up\n\\citep[e.g.][]{1998ApJ...501..643D, 2008A&A...479..453Z, 2012MNRAS.424L..34K}, increasing the abundance of small grains. 
Subsequent coagulation shapes the GSD by populating the intermediate size range, smoothing it towards a power-law shape similar to the \\citetalias{1977ApJ...217..425M} grain size distribution. In the hot gas of the circum-galactic medium (CGM) or gas heated by SN shocks, grains can be evaporated through sputtering processes \\citep[e.g.][]{1994ApJ...431..321T}, while in star-forming regions dust is depleted by astration.\n\nThe relative efficiencies of the processes listed above strongly depend on the local physical conditions, like temperature, density and metallicity. Therefore in order to understand the spatial variation of the GSD and the effect of dust transport to different environments, the evolution of the dust needs to computed consistently with the hydrodynamic evolution of the ISM. To this end, hydrodynamic simulations have been used to study the evolution of the dust alongside the evolution of the ISM in isolated galaxies \\citep[e.g.][henceforth HA19 and AHN20]{2019MNRAS.482.2555H, 2020MNRAS.491.3844A} and in cosmological simulations of structure formation \\citep[e.g.][]{2018MNRAS.478.4905A, 2019MNRAS.485.1727H}.\nA variety of methods have been used by many researchers to model the dust. \\citet{2015MNRAS.451..418Y} used cosmological zoom-in simulations to study the dust distribution in high-redshift galaxies, assuming a constant dust-to-metal ratio. \\citet{2015MNRAS.449.1625B} treated dust as a separate massive particle species, which is affected by a drag force due to gas and radiation pressure due to stars. The dust particles in their model evolve through dust growth and destruction, but also affect star formation and feedback. \\citet{2016MNRAS.457.3775M} treated dust as a component dynamically coupled to the gas in their cosmological zoom-in simulations. They considered growth and destruction processes, finding that in the absence of growth processes the dust mass is heavily suppressed. Furthermore, \\citet{2017MNRAS.468.1505M} studied the statistical properties of dust in a large sample of galaxies from a suite of cosmological simulations. They were able to reproduce the present-day dust abundance in galaxies, but tend to underestimate the dust mass at high redshift. \\citet{2018MNRAS.478.2851M} combine previous efforts in modelling the evolution of dust using `live' dust particles, each of which samples a local GSD. They performed promising test calculations of an isolated galaxy, but further implementation of e.g. stellar feedback, is needed to provide results to be compared to observations.\n\\citet{2021MNRAS.501.1336H} post-processed a cosmological simulation with a model of the GSD to study the extinction law in Milky-Way-like galaxies. \\citet{2021MNRAS.507..548L} performed a similar study by directly implententing the evolution of GSD in a cosmological zoom-in simulation.\nBoth of these studies find a large diversity of extinction laws, with bump strengths and UV slopes that are comparable to observations in the Milky Way.\n\n\\citet[][hereafter A17]{2017MNRAS.466..105A} studied the evolution of small and large dust grains in a simulation of an isolated galaxy. Their model is based on the \\textit{two-size approximation} \\citep{2015MNRAS.447.2937H}, in which the GSD is sampled by two bins referred to as `small' and `large' grains to save the computational cost. Their radial dust profiles are in agreement with observations of late type galaxies by \\citet{2012MNRAS.423...38M} and their results predict a variation of the GSD with the galactic radius. 
\\citet{2017MNRAS.469..870H} followed up on these results, but evolved carbonaceous and silicate dust separately, studying the dynamical evolution of extinction curves in the same isolated galaxy. They find a steepening of the extinction curve at intermediate age and metallicity, at which the dust is efficiently processed by shattering and accretion. \\citet{2018MNRAS.474.1545C} post-process the simulation by \\citetalias{2017MNRAS.466..105A} to compute the equilibrium abundances of H$_2$ and CO, finding that H$_2$ fails to trace the star formation rate (SFR) at low metallicity because under such conditions H$_2$ is confined to dense, compact clouds. The two-size approximation has also been applied in cosmological simulations \\citep[e.g.][]{2018MNRAS.478.4905A, 2018MNRAS.479.2588G, 2019MNRAS.484.1852A, 2019MNRAS.485.1727H, 2021MNRAS.503..511G}. These simulations largely explained a number of observed relations, like the relation between the dust-to-gas ration and metallicity, the dust mass function and the evolution of the comoving dust mass density.\n\n\\citet{2019MNRAS.485.1727H}, adopting the same approximation in a cosmological simulation, also attempted to investigate extinction curves. However, the limited freedom in the GSD did not allow them to predict detailed extinction curve shapes. Recently, \\citetalias{2020MNRAS.491.3844A} adressed this, by simulating the evolution of the full GSD sampled at 32 logarithmically spaced grid points in a simulation of an isolated galaxy. They showed that some of the results, like the evolution of the dust-to-gas ratio with metallicity and radial profiles of the dust-to-gas ratio and dust-to-metal ratio are unaffected by the two-size approximation. The main focus of their work was laid on the spatial and temporal evolution of the grain size distribution. They find that dust evolution happens in three stages dominated by stellar yield, accretion and coagulation, respectively. They also studied the evolution of extinction curves, in the dense and diffuse medium and find that the extinction curve in the dense medium first becomes steeper than in the diffuse medium at intermediate times and then flattens as the GSD settles to the MRN power law while the extinction curve in the diffuse medium steepens.\n\n\\citet{2020A&A...636A..18R} compared the results of the simulations from \\citet{2017MNRAS.466..105A} and \\citet{2017MNRAS.469..870H} to spatially resolved observations of nearby spiral galaxies and the results from \\citet{2019MNRAS.485.1727H} to the integrated properties of their sample galaxies. They found that while the simulations tend to agree with the observed total dust abundances at high metallicity, the agreement gets worse at low metallicity. Moreover, they find that there are some discrepancies between the observed and the simulated small-to-large grain ratios, especially in galaxies with high stellar mass. Rela\\~no et al. (2022; in prep.) take an unprecedented observational sample of 247 local galaxies from five state-of-the-art galaxy surveys and compare their dust properties obtained by SED fitting to the results of the cosmological simulations of \\citet{2018MNRAS.478.4905A} and \\citet{2019MNRAS.485.1727H}.\nThey find that the simulations tend to overestimate the dust mass in the high stellar mass regime. They also find that the small-to-large grain ratios predicted by the simulations are consistent with a subsample of their galaxy sample, which exhibits lower small-to-large grain ratios at high stellar and dust mass. 
However, another subsample of the galaxies with high small-to-large grain ratios cannot be explained by the simulations. It remains unclear how to explain the large scatter in the relation between the small-to-large grain ratio at high stellar and dust mass. It is encouraging to see that such efforts to reconcile simulations and highly detailed observations are becoming possible, especially in the light of high resolution observations from ALMA and integral field spectroscopy.\n\nWhile these simulations successfully explain a large variety of observational results, like the evolution of the dust-to-gas ratio with metallicity or the radial dust profiles in nearby galaxies, there are still a number of issues that need to be addressed. \\citetalias{2019MNRAS.482.2555H} point out that the production of small grains is insufficient in order to explain the Milky Way extinction curve and that coagulation needs to be more efficient in order to explain the observational trend of flatter extinction curves in denser gas.\nWhile this issue turned out to be less problematic than originally expected \\citepalias{2020MNRAS.491.3844A}, the GSD in their simulations still tend to be slightly biased toward large unprocessed grains and thus their median extinction curves are slightly flatter than the observations. \n\nGiven that the models used for dust processing seem to be largely comprehensive, it is worthwhile considering whether or not the results might be biased due to the hydrodynamical scheme. Most groups, studying the evolution of dust in the ISM used Lagrangian hydrodynamics schemes. In these methods, the fluid is mass-discretized and thus, by definition mass mixing between the fluid tracers and therefore also chemical mixing is absent. A popular method to account for the missing advection in Lagrangian codes is the diffusion prescription of \\citet{2010MNRAS.407.1581S}, which itself is based on the Smagorinsky-Lilly model \\citep{1963MWRv...91...99S}. The\nimpact of a numerical diffusion prescription in Lagrangian schemes has been investigated by numerous groups \\citep[e.g.][]{2018MNRAS.480..800H, 2018MNRAS.474.2194E, 2021ApJ...917...12S}. \n\\citet{2018MNRAS.480..800H} find that including metal mixing does not affect any of the gross galactic properties like star formation or gas dynamics but can influence the abundance ratio distributions as discussed in detail by \\citet{2018MNRAS.474.2194E}. \\citet{2021ApJ...917...12S} compare the metal enrichment of the CGM in simulations of an isolated galaxy, with the mesh-based code ENZO and the particle based codes {\\sc Gadget-2} and Gizmo-PSPH. They find that the inclusion of a subgrid model for turbulent diffusion between the particles is required in particle based codes in order to achieve the same level of mixing as in the mesh-based code. Given that in the dust evolution model by \\citet{2013MNRAS.432..637A}interstellar processing is treated in the dense and diffuse medium separately,\nincluding mixing between the dense and the diffuse medium would provide a natural channel to accelerate dust processing and enhance the interplay between the processes. In particular, the interplay between shattering in the diffuse ISM and accretion in the dense ISM plays an important role in enhancing the small-grain abundance \\citepalias{2020MNRAS.491.3844A}. 
This\nmight reduce the bias towards\nunprocessed large grains.\n\nThe goal of this study is to address the previously reported issues in the dust model employed by \\citetalias{2019MNRAS.482.2555H} and \\citetalias{2020MNRAS.491.3844A}. To this end we study, to what extent fluid mixing by turbulent diffusion can affect dust processing.\nAt the same time, we examine the robustness of \\citetalias{2020MNRAS.491.3844A}'s conclusions against the inclusion of diffusion. For example, we test if the different grain size distributions between the dense and diffuse ISM are maintained or not.\nAdditionally we address the low efficiency of coagulation and accretion reported by \\citetalias{2019MNRAS.482.2555H} by recalibrating the subgrid recipe for the modelling of the unresolved dense clouds.\nApart from the issues that these changes are meant to address, we expect to see additional features in the spatial distribution of dust and metals, due to mixing between the galactic disk and the CGM.\n\nThis paper is structured as follows. We describe the simulation setup and the adopted physical models in Section~\\ref{sec:methods}. In Section~\\ref{sec:results}, we present our simulation results. In Section~\\ref{sec:Extinction} we compare our results with the observed extinction curve in the Milky Way. In Section~\\ref{sec:discussion}, we discuss our results and conclude by summarizing our findings. Throughout this paper, we adopt a value of $Z_{\\sun} = 0.01295$ for the solar metallicity consistent with the default value in Grackle-3 \\citep{2017MNRAS.466.2217S}. We adopt a value of $0.03\\,\\micron$ for the grain radius which separates `large' and `small' grains.\n \n\\section{Methods}\n\\label{sec:methods}\n\n\\subsection{Hydrodynamic Simulation}\\label{sec:simulation}\nWe study the evolution of a simulated isolated galaxy using the {\\sc Gadget4-Osaka} smooth particle hydrodynamics (SPH) simulation code, which is based on a combination of the {\\sc Gadget-4} code \\citep{2021MNRAS.506.2871S} and the {\\sc Gadget3-Osaka} feedback model \\citep[]{2017MNRAS.466..105A,2019MNRAS.484.2632S, 2021ApJ...914...66N}. \nWe treat the star formation and production of dust and metals self-consistently with the hydrodynamic evolution of the system, accounting for the effects of SN feedback. In our simulations, the relative motion of dust and gas is neglected, and instead it is assumed that the dust is carried along by the gas particles (i.e.\\ tight coupling between them). Gas cooling and primordial chemistry are treated using the {\\sc Grackle-3} library \\footnote{\\label{footer:Grackle3} \\href{https:\/\/grackle.readthedocs.org\/}{https:\/\/grackle.readthedocs.org\/}}\\citep{2017MNRAS.466.2217S}, which provides a 12 species non-equilibrium chemistry solver for a network including reactions among the species H, H$^+$, He, He$^+$, He$^{2+}$, e$^-$, H$_2$, H$^-$, H$_2^+$, D, D$^+$ and HD.\nPhoto-heating, photo-ionization and photo-dissociation due to the UV background radiation (UVB) at $z = 0$ from \\citet{2012ApJ...746..125H} is taken into account.\n\nIn order to estimate the effect diffusion on the evolution of the GSD, we evolve an isolated Milky Way-like galaxy for $2\\,\\text{Gyr}$, beyond which the GSD does not evolve much. The initial conditions (ICs) are the low resolution ones of the AGORA collaboration \\citep{2016ApJ...833..202K}, but following \\citet{2021ApJ...917...12S} we have added a hot ($T = 10^6\\, \\text{K}$) gaseous halo with a mass of $\\sim 10^9 \\, \\text{M}_\\odot$. 
We let the ICs relax adiabatically for $500 \\text{Myr}$, in order to avoid numerical artifacts due to density fluctuations. We then enable the subgrid physics and evolve the relaxed ICs for another $2 \\,\\text{Gyr}$. We run four low resolution simulations with varying strength of turbulent diffusion of metals and dust (including one without diffusion) and one simulation with tenfold mass refinement with diffusion. The gravitational softening length is set to $\\epsilon_\\text{grav} = 80\\, \\text{pc}$ in the low resolution runs and to $\\epsilon_\\text{grav} = 40\\, \\text{pc}$ in the high resolution run. We do not allow the SPH smoothing length to get smaller than a tenth of the gravitational softening length.\n\nIn order to produce similar amounts of stars and metals over the simulated period in all runs, we need to recalibrate the star formation and feedback model parameters. To this end, we increase the threshold density for star formation according to Larson's law \\citep{1981MNRAS.194..809L}, i.e. we effectively double its value to $n_{\\text{H, th}} = 20\\,\\text{cm}^{-3}$, and halve the number of energy injections due to type-II SN feedback per star particle in order to achieve similar energy injections per feedback event. This treatment leads to a similar star and metal production in all runs, as discussed below in section \\ref{sec:SFhistory}.\n\n\\subsection{Dust Processing}\\label{sec:dustprocessing}\n\nWe use an updated version of the model of \\citetalias{2019MNRAS.482.2555H} and \\citetalias{2020MNRAS.491.3844A} for the evolution of the GSD, which is based on the model by \\citet{2013MNRAS.432..637A}. The processes considered are the stellar dust production, shattering in the diffuse ISM, growth by accretion and coagulation in the dense ISM, as well as destruction in SN shocks and through thermal sputtering. On the scales considered in this work, it is usually safe to assume that the gas and dust are dynamically coupled \\citep{2018MNRAS.478.2851M}. We therefore neglect the relative dynamics of dust and gas and simply treat the dust as a property of the gas particles. Just as in previous versions of the model, the GSD is expressed in terms of the grain mass distribution $\\rho_d\\left(m, t\\right)$, which is defined such that $\\rho_d\\left(m, t\\right) \\text{d}m$ is the mass density of grains with mass $m \\in \\left[m, m+\\text{d}m\\right]$. The grains are assumed to be compact and spherical such that $m\\left(a\\right) = \\left(4\\pi\/3\\right) s a^3$, where the bulk density $s = 3.5\\, \\text{g}\\,\\text{cm}^{-3}$ appropriate for silicates is adopted \\citep{2001ApJ...548..296W}. With this definition of the grain mass distribution, the dust-to-gas ratio is\n\\begin{equation}\n \\mathcal{D}_{\\text{tot}}\\left(t\\right) = \\int \\frac{\\rho_d\\left(m, t\\right)}{\\rho_{\\text{gas}}\\left(t\\right)} \\text{d}m.\n\\end{equation}\nWe sample the GSD with 30 logarithmically spaced bins in the range of $3 \\times 10^{-4}$ -- $10 \\,\\micron$ and enforce vanishing boundary conditions through the use of a virtual empty bin at each boundary.\n\nThe evolution of the GSD through stellar yield, shattering, coagulation and accretion is modelled in the exact same way as described in \\citetalias{2020MNRAS.491.3844A} and we refer the interested reader to their paper. For orientation, the adopted size discretisation is illustrated by the short sketch below. 
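The following lines set up the 30 logarithmic bins quoted above and evaluate the dust-to-gas ratio and small-to-large grain mass ratio for an illustrative, MRN-like test distribution; that distribution and its normalisation are assumptions for demonstration purposes only, not a simulation output.
\\begin{verbatim}
# Sketch of the adopted grain-size discretisation: 30 logarithmic bins between
# 3e-4 and 10 micron, compact spherical grains with bulk density 3.5 g/cm^3.
import numpy as np

N_BINS = 30
a_edges = np.logspace(np.log10(3e-4), np.log10(10.0), N_BINS + 1)   # micron
a_cen   = np.sqrt(a_edges[:-1] * a_edges[1:])                       # geometric bin centres
s_bulk  = 3.5e-12                                                   # g per micron^3 (= 3.5 g cm^-3)
m_grain = 4.0 * np.pi / 3.0 * s_bulk * a_cen**3                     # grain mass at bin centre [g]

n_of_a  = a_cen**-3.5 * np.diff(a_edges)       # illustrative MRN-like n(a) da (arbitrary units)
rho_dust = np.sum(n_of_a * m_grain)            # dust mass density (arbitrary units)
rho_gas  = rho_dust / 0.01                     # choose gas density so that D_tot = 0.01
D_tot    = rho_dust / rho_gas
small    = a_cen < 0.03                        # 0.03 micron boundary between small and large grains
ratio    = (n_of_a[small] @ m_grain[small]) / (n_of_a[~small] @ m_grain[~small])
print(D_tot, ratio)
\\end{verbatim}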
In their model, whenever metals are ejected into the ISM by SN explosions or stellar winds from AGB stars, a fraction $f_{in} = 0.1$ of the ejected metals \\citep{2011EP&S...63.1027I, 2013MNRAS.436.1238K} are assumed to be condensed into dust grains following an initial log-normal GSD centered around $a_{0} = 0.1\\,\\micron\\,$ and with a variance of $\\sigma = 0.47$ \\citep{2013MNRAS.432..637A}. In the diffuse ISM large grains are shattered into smaller fragments, while in the cold and dense ISM small grains can grow through accretion of gas phase metals or stick together to form larger grains in a process known as coagulation. Dust is lost due to star formation and strong shocks due to SN feedback. \nWe modified the model for estimating the multiphase structure on subgrid scales used to determine the strength of accretion and coagulation, added thermal sputtering and modified the treatment of destruction in SN shocks. These changes are described below.\n\n\\subsubsection{Two-phase ISM subgrid model}\\label{sec: MultiPhase}\nProcesses like coagulation and accretion can only happen in a sufficiently cold and dense medium, while other processes like shattering are more efficient in the warm diffuse medium. Since the former presently cannot be resolved in our simulations, a subgrid model needs to be employed in order to resolve the effects associated to such environments.\n\n\\citetalias{2020MNRAS.491.3844A} assume that cold ($T < 10^3\\, \\text{K}$) and dense ($n_\\text{H} > 10\\, \\text{cm}^{-3}$) gas particles host dense clouds with $n_\\text{H} = 10^3\\, \\text{cm}^{-3}$ and $T = 50 \\text{K}$ on unresolved scales, making up $f_\\text{cloud} = 0.5$ of the particle's hydrogen mass. Analysis of snapshots of their simulations revealed that the global fraction of dense gas rarely exceeds $1 \\,\\%$. Assuming that this dense gas traces the amount of molecular gas, this puts an upper limit of $\\sim 1 \\,\\%$ on the molecular gas fraction, in strong disagreement with the typical value of $f_{\\text{H}_2} \\sim 20 \\,\\%$ for Milky Way-like spiral galaxies \\citep{2018MNRAS.476..875C}.\n\nLowering the density threshold can lead to slightly better agreement, but also leads to undesirably large amounts of accretion and coagulation in relatively diffuse gas $n_\\text{H} \\leq 1 \\,\\text{cm}^{-3}$. Instead, if we assume that dense gas traces H$_2$, then previous results by \\citet{2011ApJ...728...88G} and \\citet{2014ApJ...795...37G} indicate that denser gas hosts more dense clouds. We thus model this trend by assuming that in cool gas ($T < 10^4\\, \\text{K}$) the dense fraction increases linearly with the density until it saturates at large densities as \n\\begin{equation} \\label{eq: fdense}\n f_\\text{cloud} = \\text{min}\\left(\\alpha\\, n_0, 1\\right),\n\\end{equation}\nwhere $n_0 = n_\\text{H} \\left[\\text{cm}^{-3}\\right]$, and alpha is a parameter to set the slope. We have found that $\\alpha \\sim 0.12$ leads to a global dense gas fraction of $\\sim 20 \\,\\%$ at our fiducial resolution. At higher resolution, as more of the gas reaches higher densities, slightly lower values are preferred. 
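For concreteness, the cloud-fraction prescription of equation (\\ref{eq: fdense}), restricted to cool gas, can be written as the following minimal sketch; the densities in the example are arbitrary illustrative values.
\\begin{verbatim}
# Subgrid dense-cloud mass fraction: f_cloud = min(alpha * n_H, 1) in cool gas
# (T < 1e4 K), and zero otherwise. alpha ~ 0.12 at the fiducial resolution.
import numpy as np

def f_cloud(n_H, T, alpha=0.12):
    """Fraction of a particle's hydrogen mass assigned to unresolved dense clouds."""
    f = np.minimum(alpha * np.asarray(n_H, dtype=float), 1.0)
    return np.where(np.asarray(T) < 1.0e4, f, 0.0)

for n in (0.1, 1.0, 10.0, 100.0):      # illustrative densities in cm^-3
    print(n, f_cloud(n, T=50.0))
\\end{verbatim}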
In the high resolution run we thus set $\\alpha = 0.08$, in order to match the time evolution of the global dense gas fraction to the lower resolution case.\n\nAnother detail that \\citetalias{2020MNRAS.491.3844A} neglected in their study was that the rest of the gas in a gas particle hosting dense clouds must be warmer and more diffuse than the densities and temperatures obtained from the hydrodynamics, in order to be consistent. In order to see this, consider a gas particle with fixed mass $m$, average number density $n$ and temperature $T$. The volume occupied by the particle is $V \\sim m \/ n$, omitting constant factors. The condition that the union of the diffuse and the dense medium (indicated by subscripts `diff' and `dense', respectively) fill up the volume occupied by the gas particle reads \n\\begin{equation}\\label{eq:volume}\n V = V_\\text{diff} + V_\\text{dense}.\n\\end{equation}\nConservation of internal energy implies \n\\begin{equation}\\label{eq:energy}\n pV = \\left(pV\\right)_\\text{diff} + \\left(pV\\right)_\\text{dense},\n\\end{equation}\nwhere $p_i = n_i T_i$ is the pressure in each component. Given that the fraction of the mass present in the diffuse medium is $f_\\text{diff} = 1 - f_\\text{cloud}$, equation \\ref{eq:volume} leads to the relation between the densities in the diffuse and dense medium\n\\begin{equation}\n \\frac{1}{n} = \\frac{f_\\text{diff}}{n_\\text{diff}} + \\frac{f_\\text{dense}}{n_\\text{dense}},\n\\end{equation}\nwhich in the typical case that $n_\\text{dense} \\gg n$ implies $n_\\text{diff} \\sim f_\\text{diff}\\, n$. Similarly eq. \\ref{eq:energy} implies that the temperature of the gas particle is the mass weighted temperature of the different ISM components\n\\begin{equation}\n T = f_\\text{diff} T_\\text{diff} + f_\\text{dense} T_\\text{dense}.\n\\end{equation}\nIn the typical case, where $T \\gg T_{\\text{dense}}$, this implies $T_\\text{diff} \\sim T \/ f_\\text{diff}$.\nWe adjust the efficiencies of processes happening in either medium by attenuating the corresponding reaction rates with the respective mass fractions. We neglect processes happening in either medium if the mass fraction falls below $1 \\,\\%$ in order to save computation time. We note that a similar, but more detailed model for the multi-phase nature of star formation and dust physics on subgrid scales has recently been applied by \\citet{2021MNRAS.503..511G}.\n\n\n\\subsubsection{Grain destruction}\\label{sec:destruction}\n\nIn our model we consider two separate channels of grain destruction, destruction by SN shocks and thermal sputtering in hot gas. Both processes keep the number of grains constant and lead to a shedding of grain surface layers that get ejected in the form of gas phase metals. Thus both processes can be modelled with a continuity equation \\citepalias{2019MNRAS.482.2555H}\n\\begin{equation}\\label{eq:sputtering}\n \\left[\\frac{\\partial \\rho_d\\left(m, t\\right)}{\\partial t}\\right]_{\\text{sput}} = \\frac{\\Dot{m}}{m} \\rho_d\\left(m, t\\right) - \n \\frac{\\partial}{\\partial m }\\left[\\Dot{m} \\rho_d\\left(m, t\\right)\\right],\n\\end{equation}\nwhere we estimate $\\Dot{m} = - m \/ \\tau_{\\text{sput}}\\left(m\\right)$. We integrate the continuity equation (\\ref{eq:sputtering}) by applying the same formulation and integration scheme as \\citetalias{2019MNRAS.482.2555H} and \\citetalias{2020MNRAS.491.3844A}. \nIn the employed feedback model, cooling is temporarily turned off for gas particles subject to SN feedback in order to keep them hot. 
Since this hot phase cannot be properly resolved, we turn off dust processing as well and only take into account the destruction due to the SN shock after the hot phase has ended. This way the destruction can be regarded as an effective treatment of the dust processing happening in the unresolved hot phase. We employ the same estimate of the SN destruction timescale as \\citetalias{2020MNRAS.491.3844A}, which is based on the mass sweeping timescale \\citep[e.g.][]{1989IAUS..135..431M}.\n\nThermal sputtering becomes important at temperatures around $T \\sim 10^6\\, \\text{K}$. Since in our simulation diffusion manages to transport dust out of the cold and dense disk and into the diffuse and hot halo, including thermal sputtering is important in order to prevent the overproduction of small grains through efficient shattering in the halo. \nWe approximate the thermal sputtering timescale using equation (14) of \\citet{1995ApJ...448...84T}:\n\\begin{equation}\n \\tau_\\text{sp}\\left(m\\right) = \\frac{1}{3} \\tau_{0, \\text{sp}} \\left(\\frac{a}{0.1 \\mu \\text{m}}\\right) n_0^{-1} \\left[ 1 + \\left(\\frac{T_\\text{sput}}{T_\\text{gas}}\\right)^{\\omega}\\right],\n\\end{equation}\nwhere $\\tau_{0, \\text{sp}} = 9.9 \\times 10^4\\, \\text{yr}$, $T_\\text{sput} = 2 \\times 10^6\\, \\text{K}$, and $\\omega = 2.5$. Given that this timescale can get much shorter than the dynamical timescale, explicit integration can become very expensive and slow down the simulation. Fortunately the discrete version of the continuity equation provided in the appendix of \\citetalias{2019MNRAS.482.2555H} admits an analytical solution that can be used to efficiently integrate the thermal sputtering over long dynamical timesteps. We refer the interested reader to the Appendix \\ref{Appendix:thermsputtering} for details about the derivation of this analytical solution.\n\n\\subsection{Turbulent diffusion}\\label{sec:diffusion}\n\nIn Lagrangian simulations by default there are no mass fluxes in between the fluid tracer particles; therefore fluid mixing between particles is generally suppressed leading to discontinuities in quantities like element abundances.\nA commonly used computationally rather inexpensive method for smoothing out passive scalar fields like metallicity is to smooth them within a kernel radius similar to the density field \\citep[e.g.][]{2005MNRAS.363.1299O, 2019MNRAS.484.2632S}. However, this method fails to capture the transport of abundances beyond the kernel radius and cannot be used in a satisfactory way for quantities that evolve dynamically due to chemistry or processing in the ISM like dust or molecules. Indeed, \\citet{2021ApJ...917...12S} have found that explicit inter-particle diffusion of metals due to turbulent mixing is essential for rendering the metal poor gas in the CGM. The importance of diffusion for matching the observed scatter in metal element abundances is widely recognized and many groups working on metal transport with particle based codes have devised ways of modelling diffusion between particles \\citep[see e.g.][]{2016A&A...588A..21R, 2018MNRAS.474.2194E, 2021MNRAS.506.4374D}.\nIn this work, we adopt a turbulent metal and dust diffusion scheme similar to the one used in \\citet{2018MNRAS.474.2194E} and \\citet{2021ApJ...917...12S} which is based on the Smagorinsky--Lilly model \\citep{1963MWRv...91...99S, 2010MNRAS.407.1581S}. In this model, the effect of subgrid diffusion is triggered by the resolved shear between particles. 
The diffusion operator and the diffusivity are given by\n\\begin{align}\n \\left.\\frac{d}{dt}\\right|_\\text{diff} &= \\frac{1}{\\rho} \\nabla\\cdot \\left(D \\nabla \\right)~,\\\\\n D &= C_{d} \\left\\|S_{ab}\\right\\| h^2 \\rho~, \n\\end{align}\nwhere $C_d$ is a constant parameter setting the diffusion scale $L_{\\text{diff}} = \\sqrt{C_d} h$, $\\left\\|\\cdot\\right\\|$ is the Frobenius-norm, $h$ is the SPH smoothing length and $S_{ab}$ is the symmetric, trace-free shear tensor\n\\begin{equation}\n S_{ab} = \\frac{1}{2} \\left(\\frac{\\partial v_a}{\\partial x_b} + \\frac{\\partial v_b}{\\partial x_a}\\right) - \\frac{1}{3} \\left(\\nabla\\cdot \\Vec{v}\\right) \\delta_{ab}.\n\\end{equation}\nHere indices $a, b$ refer to cartesian directions and $v_a$ is the velocity field. We compute $\\left\\|S_{ab}\\right\\|$ using the higher-order estimate of the shear tensor computed by {\\sc Gadget-4} using matrix inversion \\citep{2014MNRAS.443.1173H}. \nWe discretize the diffusion equation for the scalar field $A$ following \\citet{2004MNRAS.351..423J}\n\\begin{equation}\n \\frac{dA_i}{dt} = \\sum_j \\frac{m_j}{\\rho_i \\rho_j} \\frac{D_{ij} \\left(A_j - A_i\\right)}{\\left\\|\\Vec{x}_{ij}\\right\\|^2}\\, \\Vec{x}_{ij}\\cdot \\nabla_i W_{ij},\n\\end{equation}\nwhere the sum runs over all SPH neighbors, $i$ and $j$ are particle indices, $W_{ij} = 1\/2 \\left[W\\left(\\Vec{x}_{ij}; h_i\\right) + W\\left(\\Vec{x}_{ij}; h_j\\right) \\right]$ is a symmetrised version of the SPH kernel function and $\\Vec{x}_{ij} = \\Vec{x}_i - \\Vec{x}_j$. We make the replacement $\\left(D_{i} + D_j\\right) \\mapsto D_{ij} = \\frac{4 D_i D_j}{D_i + D_j}$ following \\citet{1999JCoPh.148..227C}, who show that this ensures continuity in the flux at boundaries by effectively selecting the minimum of the two fluxes.\n\nThe timescale for diffusion can be estimated from the amount of shear and the degree of mixing between two adjacent fluid elements\n\\begin{equation}\n \\tau_{\\text{diff}, A} \\sim 1 \\left(\\frac{A}{\\Delta A}\\right) \\left(\\frac{C_d}{0.01}\\right)^{-1} \\left(\\frac{\\left\\|S_{ab}\\right\\|}{100 \\,\\text{km}\\, \\text{s}^{-1}\\, \\text{kpc}^{-1}}\\right)^{-1} \\text{Gyr},\n\\end{equation}\nwhich is long compared to the dynamical timescale for well-mixed fluids, but can get extremely short if concentrations vary by many orders of magnitudes. Here $\\Delta A$ refers to the difference in the quantity $A$ between the two neighboring fluid elements. In practice this also means that na\\\"ively requiring a Courant-like timestep criterion\n\\begin{equation} \\label{eq:timestep}\n dt \\leq \\alpha \\frac{A}{\\left|\\text{d}A\/\\text{d}t\\right|}\n\\end{equation}\ncan potentially lead to extremely short timesteps that would unnecessarily slow down the simulation. We thus limit diffusive outflows from low concentration gas with abundances falling below $A_\\text{low} = 10^{-4}\\, A_\\odot$ where $A_\\odot$ refers to solar abundance in order to avoid negative abundances. We do not limit the timestep for inflows into low concentration gas. \nIn gas with concentrations higher than $A_\\text{low}$ we employ a timestep criterion of the form given in equation (\\ref{eq:timestep}) with $\\alpha = 5$ for inflows ($\\text{d}A\/\\text{d}t > 0$) and $\\alpha = 0.5$ for outflows ($\\text{d}A\/\\text{d}t < 0$).\nGiven the large range of metallicities, this treatment ensures that diffusion does not significantly overshoot, while at the same time avoiding computational overhead. 
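To make the discretised exchange and the time-step control above concrete, the sketch below performs a single pairwise update and evaluates the Courant-like criterion of equation (\\ref{eq:timestep}); the SPH kernel gradient is collapsed into a single positive weight, so this is a schematic illustration rather than the actual SPH operator, and all numerical values are placeholders.
\\begin{verbatim}
# Schematic pairwise turbulent-diffusion exchange for a passive scalar A (e.g. the
# metal mass fraction) with the combined diffusivity D_ij and the Courant-like
# time-step limiter. The kernel geometry is collapsed into a single weight w_ij.
import numpy as np

Z_SUN = 0.01295                        # solar metallicity adopted in this work
A_LOW = 1.0e-4 * Z_SUN                 # below this, outflows are flux-limited instead

def D_pair(D_i, D_j):
    """Combined diffusivity 4 D_i D_j / (D_i + D_j)."""
    return 4.0 * D_i * D_j / (D_i + D_j)

def dA_dt(A_i, A_j, D_i, D_j, w_ij):
    """dA_i/dt from one neighbour; antisymmetric, so the exchanged amount is conserved."""
    return D_pair(D_i, D_j) * (A_j - A_i) * w_ij

def dt_diff(A, rate, alpha_in=5.0, alpha_out=0.5):
    """Time-step limit dt <= alpha * A / |dA/dt|; looser for inflows than outflows."""
    if A < A_LOW or rate == 0.0:
        return np.inf                  # no time-step restriction in this regime (see text)
    alpha = alpha_in if rate > 0.0 else alpha_out
    return alpha * A / abs(rate)

# Two particles with illustrative diffusivities D = C_d |S| h^2 rho (code units)
A = (1.0e-2, 1.0e-4)                   # metal mass fractions
D = (2.0e-3, 1.0e-3)
w = 0.5                                # collapsed geometric weight
rate_0 = dA_dt(A[0], A[1], D[0], D[1], w)
print(rate_0, dt_diff(A[0], rate_0))
\\end{verbatim}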
Inflows tend to occur in particles with very low concentrations, where diffusion time-steps are short even with this relatively loose time-step criterion. In principle if our time-step criterion is too loose, overshooting could lead to spurious metal production or depletion. We have verified that the levels of spurious metal production or depletion are negligible, by comparing to a simulation with a more stringent time-step criterion ($\\alpha = 0.1$ for both in- and outflows). \n\nFinally it should be noted that the model is resolution dependent through the definition of the diffusion length scale with respect to the SPH smoothing length which itself scales with the mass resolution. If one requires the diffusion length to be a physical length-scale independent of the resolution, the diffusion parameter $C_d$ has to scale as\n\\begin{equation} \\label{eq: resolution}\n C_\\mathrm{d}\\propto m_\\mathrm{SPH}^{-2\/3},\n\\end{equation}\ni.e. $C_d$ needs to be increased as the resolution is refined. A wide range of different diffusion coefficients has been used in the literature \\citep[][]{2010MNRAS.407.1581S, 2018MNRAS.474.2194E}. Therefore we estimate the impact of diffusion for a range of values spanning two orders of magnitudes. The full suite of simulations is described in Table~\\ref{tab:runs}.\n\n\\begin{table}\n\\caption{List of different simulations}\n\\label{tab:runs}\n\\begin{tabular}{lccc}\n\\hline\nRun name & $C_d$ & Resolution & Notes \\\\\n\\hline\nNoDiff & -- & Low & No diffusion \\\\\nDiffx0.1 & 0.002 & Low & Weak diffusion \\\\\nDiffx1 & 0.02 & Low & Intermediate diffusion \\\\\nDiffx10 & 0.2 & Low & Strong diffusion \\\\\nHiRes & 0.08 & High & High resolution with diffusion \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Particle selection}\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/Methods\/R_z_disk_040.pdf}\n\\caption{The spatial distribution of gas particles belonging to the disk on the $R$--$z$ plane. The color coding indicates the total mass of the particles belonging to each bin. We use the selection criterion in eq. (\\ref{eq:selection}) to select the disk gas particles.}\\label{fig:selection}\n\\end{figure}\nThe physical properties of the gas within the galactic disk and the halo are very different. As a result, relations between physical quantities like metallicity and dust abundance can be very different within the disk and the halo. While we are mostly interested in the relations in the disk, which makes up most of the gas mass, reliably selecting only particles belonging to this component is not straightforward. While selecting particles through geometric criteria is fairly straightforward, this method is prone to pollution from halo particles in the vicinity of the disk. A method that has proven to be more reliable is selection of particles based on the equation of state. The idea is, that particles which are part of the disk tend to be denser and colder than particles in the halo. The halo and disk equations of state are well separated in $T$--$n$ space, and thus a rough criterion like\n\\begin{equation}\\label{eq:selection}\n T_\\text{gas} < 10^4 \\left(\\frac{n_\\text{H}}{3 \\times 10^{-4}\\,\\text{cm}^{-3}}\\right)\\,\\text{K} \n\\end{equation}\ncan reliably select only particles belonging to the disk as is shown in Figure~\\ref{fig:selection}, which shows the distribution of particle coordinates in the $r$-$z$-plane (integrated over the azimuthal angle $\\phi$). 
Here $r$ and $z$ refer to the cylindrical radius as measured from the density-weighted center-of-mass of the gas and the vertical displacement from the disk plane, respectively.\n\n\\subsection{Extinction Curves}\n\nThe wavelength dependence of the optical depth of dust is usually expressed in terms of the extinction curve. Extinction curves can be derived from observations and are sensitive to the shape of the GSD, making them a useful tool for relating observations to simulations and constraining the GSD \\citep[e.g.][]{2001ApJ...548..296W}. We calculate extinction curves in the same way as \\citetalias{2020MNRAS.491.3844A} using the GSD $n_{i}\\left(a\\right)$, where $i$ indicates the composition of the grains, adopting the same fixed mass ratio of graphite to silicates (54:46), corresponding to the value in the Milky Way. We write the extinction at wavelength $\\lambda$ as \n\\begin{equation}\n A_{\\lambda} \\propto \\sum_i \\int n_i\\left(a\\right) \\pi a^2 Q_{\\text{ext}, i}\\left(a, \\lambda\\right) da,\n\\end{equation}\nwhere $Q_{\\text{ext}, i}\\left(a, \\lambda\\right)$ is the ratio of the extinction cross section $\\sigma_{\\text{ext}, i}\\left(a, \\lambda\\right)$ and the geometrical cross section $\\sigma_{\\text{g}} = \\pi a^2$, which \\citet{2001ApJ...548..296W} have evaluated for silicates and carbonaceous grains using Mie theory. \nTheir results are tabulated and made publicly available\\footnote{\\label{footer:ExtinctionParameters} \\href{https:\/\/www.astro.princeton.edu\/~draine\/dust\/dust.diel.html}{https:\/\/www.astro.princeton.edu\/~draine\/dust\/dust.diel.html}}. We normalize $A_\\lambda$ to the value in the $V$ band ($\\lambda^{-1} = 1.8 \\,\\micron^{-1}$) in order to cancel out the proportionality constant. We assume that both grain species follow the same GSD, following the approach of \\citetalias{2020MNRAS.491.3844A}.\n\n\n\\section{Results}\\label{sec:results}\n\nTo explore the effects of turbulent diffusion, we study both the global assembly history of the dust and metal components and their spatial distribution. We then check how our simulations compare to available observations of dust extinction and small-to-large grain ratios.\n\n\\subsection{Star Formation and Metal Enrichment}\\label{sec:SFhistory}\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/global\/M_star_history.pdf}\n\\caption{The formed stellar mass (solid lines) and the formed metal mass $M_{Z}$ (dashed lines) over the simulated timespan for the different models. Different lines represent different models as indicated in the legend (also in the figures below). All runs produce similar amounts of stellar and metal mass.}\\label{fig:Mstar}\n\\end{figure}\n\nMetal production in galaxies is linked to the star formation, as metals are produced in stars and subsequently injected into the ISM. All runs therefore need to exhibit a similar star formation history, in order to ensure that differences in the metal distribution are due to differences in the models and not due to different star formation histories. \nThe solid lines in Figure~\\ref{fig:Mstar} show the time evolution of the formed stellar mass. All runs exhibit very similar star formation histories, enabling us to compare the metal distributions among the runs. Despite the similar star formation histories, the metal production, depicted by the dashed lines in Figure~\\ref{fig:Mstar}, is slightly lower in the runs with stronger diffusion. 
In order to understand this difference, it is good to know that the metallicity of the stellar ejecta is almost independent of the stellar metallicity, which implies that metal-poor stars overall introduce more newly formed metals into the ISM than metal-rich stars. \nThus, differences in the net metal production are likely due to differences in the distribution of stellar metallicities. With stronger diffusion one expects a narrower metallicity distribution with less metal poor stars and thus slightly lower metal production. \n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/global\/Z_star_dist.pdf}\n\\caption{The distribution of stellar mass with respect to metallicity at $t = 500 \\,\\text{Myr}\\,$ (solid) in the run without diffusion (blue) and the run with strong diffusion (orange) as well as the metal yield per solar mass as a function of metallicity (dashed).}\\label{fig:Zstardist}\n\\end{figure}\n\nThis trend is reflected in Figure~\\ref{fig:Zstardist}, which depicts the distribution of stellar mass with respect to metallicity after the first 500 Myr and clearly shows that diffusion pushes stellar metallicity towards the average, where the yield is slightly lower. Even though the total metal mass is slightly different between the runs, the differences are only small and should not affect a comparison between the runs too much.\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/global\/metal_history.pdf}\n\\caption{The global (i.e. mass-weighted average) stellar metallicity (dashed) and gas metallicity (solid) as a function of time for the different models. With stronger diffusion or higher resolution, stellar metallicity is lowered and gas metallicity increased.}\\label{fig:Zstar-gas}\n\\end{figure}\n\nWhile the total mass of metals is similar in all runs, diffusion can affect how it is distributed among stars and gas. In Figure~\\ref{fig:Zstar-gas}, the time evolution of the galactic stellar and gas metallicity is shown. The stellar metallicity (dashed) is generally higher than the gas metallicity (solid), but with stronger diffusion or higher resolution stellar metallicity is lowered while gas metallicity increases. If all components were perfectly mixed there would be no difference between them and so this trend is easily understood for diffusion which enhances mixing. At higher resolution, the difference between gas and stellar metallicity is even smaller. This can be attributed to the smoother distribution of sources which leads to less extreme metallicities and therefore a narrower metallicity distribution.\n\n\\subsection{Spatial Distribution of Metals and Dust}\n\n\\subsubsection{Projected Distributions}\n\n\\begin{figure*}\n\\includegraphics[width=0.95\\textwidth]{figures\/spatial maps\/Metallicity_005.pdf}\n\\caption{Density-weighted projection of metallicity of the simulated galaxies at $t = 250\\,\\text{Myr}$ for different runs. The top and bottom panels show the face-on view and edge-on view of the simulated galaxy, respectively. 
The $x$ and $y$ coordinates run from $-20$ to $20\\,\\text{kpc}$ and the $z$ coordinate runs from $-8$ to $8\\,\\text{kpc}\\,$.}\\label{fig:Zmap250}\n\\includegraphics[width=0.95\\textwidth]{figures\/spatial maps\/Large_005.pdf}\n\\caption{Same as Fig.~\\ref{fig:Zmap250} but for large grain abundance.}\\label{fig:DLmap250}\n\\includegraphics[width=0.95\\textwidth]{figures\/spatial maps\/DsDl_005.pdf}\n\\caption{Same as Fig.~\\ref{fig:Zmap250} but for small-to-large grain ratio.}\\label{fig:DsDlmap250}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[width=0.95\\textwidth]{figures\/spatial maps\/Metallicity_040.pdf}\n\\caption{Same as Fig.~\\ref{fig:Zmap250} but at $t = 2\\,\\text{Gyr}$}\\label{fig:Zmap2Gyr}\n\\includegraphics[width=0.95\\textwidth]{figures\/spatial maps\/Large_040.pdf}\n\\caption{Same as Fig.~\\ref{fig:DLmap250} but at $t = 2\\,\\text{Gyr}$}\\label{fig:DLmap2Gyr}\n\\includegraphics[width=0.95\\textwidth]{figures\/spatial maps\/DsDl_040.pdf}\n\\caption{Same as Fig.~\\ref{fig:DsDlmap250} but at $t = 2\\,\\text{Gyr}$}\\label{fig:DsDlmap2Gyr}\n\\end{figure*}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/spatial maps\/Shear.pdf}\n\\caption{Density-weighted projection of the shear $\\left\\|S_{ab}\\right\\|$ at $t = 2\\,\\text{Gyr}$ in the run with $C_{\\text{d}} = 0.02$. The top and bottom panels show the face-on view and edge-on view of the simulated galaxy, respectively.}\\label{fig:ShearMap}\n\\end{figure}\n\nDiffusion impacts the spatial distribution of metals and dust. To highlight this, we show spatial maps of the metallicity, the large grain abundance, i.e. the mass fraction of large grains and the small-to-large grain mass ratio in Figures~\\ref{fig:Zmap250} -- \\ref{fig:DsDlmap2Gyr} at early ($t = 250\\, \\text{Myr}$) times, where metallicities are still low and we thus do not expect significant dust growth and at late times ($t = 2 \\, \\text{Gyr}$) and high metallicities, where the GSD has settled in its final state and is not expected to evolve much further. All projection plots have been made with the SPH visualization code SPLASH \\citep{2007PASA...24..159P}.\n\nThe distribution of metallicity at early and late times is shown in Figures~\\ref{fig:Zmap250} and \\ref{fig:Zmap2Gyr}, respectively. As expected, the spatial distribution of metals in the run without diffusion is more grainy than in the other runs and becomes increasingly smoother as the diffusion strength is increased. At early times, the metals are extending out to similar radii, but in the run with strong diffusion they reach slightly higher altitude above and below the disk. In the run without diffusion, the metallicity hardly traces the spiral structure of the galaxy, while in all runs with diffusion it is clearly visible in the face-on view, both at early and late times.\n\nAt late times, the distribution of metals shows significantly different spatial extent among the runs. \nIn the \\emph{NoDiff}\\ run, there are hardly any metals present beyond $r \\sim 15 \\, \\text{kpc}$ and $\\left|z\\right| \\sim 8 \\, \\text{kpc}$, while they tend to extend out further from the galactic disk with increasing diffusion strength. Furthermore, in the runs with diffusion, there is a toroidal region with low metallicity located in the galactic plane just at the edge between the disk and the halo around $r \\gtrsim 15\\, \\text{kpc}$ which is not present in the \\emph{NoDiff}\\ run. A likely explanation for this metal-poor torus is that the diffusion is sourced by turbulence. 
Within the disk, turbulence arises due to gravitational torques and stellar feedback from young stars, while in the halo it originates from random motion. In the outer parts of the disk, where stellar feedback is weak,\nthere is hardly any turbulence that could drive diffusion. This point is further illustrated in Figure~\\ref{fig:ShearMap}, which shows the spatial distribution of the shear $\\left\\|S_{ab}\\right\\|$ in the intermediate diffusion run at $t = 2\\,\\text{Gyr}\\,$. The shear is strongest in the thin disk and the halo gas just above and below the disk, but is weak in the thick disk. The qualitative features of this spatial map do not differ much between different snapshots and different runs.\nThus there should be a radius within the disk beyond which diffusion is inefficient. Within the halo the fluid can easily mix and therefore will eventually populate the parts of the halo that lie in the plane of the galactic disk, leading to the formation of a metal-poor torus. Another notable detail is that, in the lower resolution runs, there are more metals in the polar regions above and below the disk than in the \\emph{HiRes}\\ run. This is likely due to less efficient feedback in the \\emph{HiRes}\\ run, owing to a lower total energy output per feedback event, which is quickly dissipated. This can lead to weaker outflows in the central region, where star formation is strongest and therefore feedback effects would become most apparent.\n\nThe spatial distribution of large dust grains is shown in Figures~\\ref{fig:DLmap250} and \\ref{fig:DLmap2Gyr} at early and late times, respectively. Given that a large fraction of the dust mass is locked in large grains, they are an excellent tracer of the total dust abundance; supplemented with the small-to-large grain ratio, they retain all information about the dust. Similarly to the distribution of metallicity, the distribution of large grains is increasingly smooth with stronger diffusion and traces the spiral structure of the galaxy once diffusion is included. The distribution of large grains closely follows the overall distribution of metals within the disk, but rapidly drops with the onset of the halo. This is contrary to the na\\\"ive expectation that the halo might in fact be a rather hospitable environment for large grains. Shattering due to grain-grain collisions, which most efficiently depletes the large grain abundance, is rather inefficient in the halo, due to the extremely low densities. Furthermore, since large grains have a low surface-area-to-volume ratio, thermal sputtering, which efficiently depletes small grains in the halo, can only slowly deplete large grains. \nIn spite of this, the low abundance of large grains in the halo owes to their production in the dense environment of the galactic disk. In order to reach the very diffuse parts of the halo, large grains need to travel through gas with intermediate density, where they are shattered efficiently. As a result, hardly any large grains make it into the halo, while the shattered fragments are depleted through thermal sputtering. Interestingly, hot and diffuse outflows, e.g. 
due to strong SN or AGN feedback, might constitute a way for large grains to escape from the disk and populate the halo.\n\nIn order to see where dust processing is most efficient, the spatial distribution of the small-to-large grain ratio $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ is shown in Figures~\\ref{fig:DsDlmap250} and \\ref{fig:DsDlmap2Gyr} at early and late times, respectively. At early times, enrichment with small grains is only seen in the spiral arms of the disk and it extends out to larger radii in the \\emph{NoDiff}\\ and \\emph{Diffx0.1}\\ runs. With stronger diffusion, small grains are initially restricted to the very center of the disk and cannot reach far from the relatively metal-rich center. These differences are likely explained by the fact that metal mixing introduces some sort of \\emph{socialism of metals}, in the sense that growth can only happen once the whole disk has been sufficiently enriched with metals, as it initially \\emph{lowers} the local metal budget by sharing it among the whole neighborhood. Without diffusion, whichever cell gets enriched with metals first can grow first, leading to a headstart in growth compared to the rest of the cells, explaining the overall higher small grain enrichment at early times.\n\nAt late times, in all runs the full galactic disk is enriched with small grains, and $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ increases from the center towards the edge of the disk and then falls off towards the halo. The exact radius at which the maximum is attained seems to be rather independent of the strength of diffusion as long as some degree of diffusion is present. Within the halo, in the vicinity of the disk $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ is higher in the runs with stronger diffusion. Furthermore, in the runs with diffusion $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ becomes large within a thin layer above and below the disk, whereas it tends to fall off with increasing height in the \\emph{NoDiff}\\ run. This is due to a steady supply of large grains from the thin disk into the surrounding layers, where they can be efficiently shattered, leading to the slightly increased $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$.\n\n\\subsubsection{Radial Profiles}\\label{sec:profiles}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/radial profiles\/Z_rho.pdf}\n\\caption{The gas metallicity as a function of cylindrical radius $r$ for the different simulations at early (top) and late times (bottom). We normalised $r$ by $R_{25}$ (see text). Solid lines depict the median relation, while the shaded area shows the range of values between the 25th and 75th percentile. We compare our results with observational data compiled by \\citet{2012MNRAS.423...38M}. In the top panel the filled circles with errorbars correspond to the data in Holmberg II. In the bottom panel the circle, $\\triangle$, $\\triangleleft$, $\\triangleright$, $\\triangledown$ and diamond symbols correspond to NGC628, NGC2403, NGC4736, NGC5055, NGC5194 and NGC7793, respectively.}\\label{fig:RadialZ}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/radial profiles\/Dtot_rho.pdf}\n\\caption{Same as Fig. \\ref{fig:RadialZ} but presenting the radial profile of the dust-to-gas ratio.}\\label{fig:RadialDust}\n\\end{figure}\n\nIn Figures~\\ref{fig:Zmap250} to \\ref{fig:DsDlmap2Gyr}, we have seen that the metal and dust components of the galaxy follow qualitatively similar radial trends. 
We have also shown that diffusion leads to the formation of a metal-poor torus in the galactic plane. In order to quantitatively assess the impact of diffusion and compare the models to the observations, in this section we present radial profiles of quantities such as the dust abundance and the metallicity. To this end, we assign particles to logarithmically spaced radial bins. For each bin, we compute the median, 25th and 75th percentile of the quantities of interest. We normalize the radius to $R_{25}$ (the radius beyond which the surface brightness falls below 25 mag arcsec$^{-2}$), in order to cancel the galaxy size effect, enabling us to compare our results to observations of several spiral galaxies at once.\nWe evaluate $R_{25}$ for the simulated galaxies by using the relation $R_{25} \\simeq 4 R_{\\text{d}}$ \\citep{1998ggs..book.....E}. Here $R_{d}$ refers to the scale length of the radial column density profile of young stars, obtained by fitting the profile with an exponential function. In our simulated galaxies $R_{25}$ grows from $R_{25} \\sim 4\\,\\text{kpc}\\,$ at early times to $R_{25} \\sim 4.5\\,\\text{kpc}\\,$ at late times, with mild differences between the runs.
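\n\nAs a rough illustration of this procedure (Python, with placeholder names for the particle arrays; the binning choices are illustrative and not a verbatim excerpt of our analysis scripts), the profile extraction could be sketched as follows:\n\\begin{verbatim}\nimport numpy as np\n\ndef radial_profile(r, q, r25, nbins=30):\n    # median and 25th\/75th percentiles of quantity q\n    # in logarithmically spaced bins of r \/ R25\n    x = r \/ r25\n    edges = np.logspace(np.log10(x[x > 0].min()),\n                        np.log10(x.max()), nbins + 1)\n    idx = np.digitize(x, edges) - 1\n    prof = np.full((nbins, 3), np.nan)\n    for i in range(nbins):\n        sel = q[idx == i]\n        if sel.size > 0:\n            prof[i] = np.percentile(sel, [25, 50, 75])\n    return np.sqrt(edges[1:] * edges[:-1]), prof\n\ndef r25_from_young_stars(r_star):\n    # R25 ~ 4 R_d, with R_d from an exponential fit to the\n    # radial column density profile of young star particles\n    edges = np.linspace(0.0, r_star.max(), 41)\n    counts, _ = np.histogram(r_star, bins=edges)\n    sigma = counts \/ (np.pi * (edges[1:]**2 - edges[:-1]**2))\n    centers = 0.5 * (edges[1:] + edges[:-1])\n    mask = sigma > 0\n    slope, _ = np.polyfit(centers[mask], np.log(sigma[mask]), 1)\n    return 4.0 * (-1.0 \/ slope)  # R_d = -1 \/ slope for a declining profile\n\\end{verbatim}\n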
\nIn order to allow for a better comparison with previous theoretical work, we overplot some of the profiles with the same observational dataset as the one adopted by \\citetalias{2017MNRAS.466..105A} and \\citetalias{2020MNRAS.491.3844A}, which has been compiled by \\citet{2012MNRAS.423...38M}. \nThe dataset consists of spatially resolved measurements of the dust-to-gas ratio and the dust-to-metal ratio in a sample of nearby galaxies chosen from the \\emph{Spitzer} Infrared Nearby Galaxies Survey sample \\citep{2003PASP..115..928K}. \\citetalias{2017MNRAS.466..105A} classified the sample according to the specific star formation rates of the galaxies, which allows them to be mapped to different simulation snapshots. We apply their classification in order to perform a similar mapping. We compare the dwarf irregular galaxy Holmberg II to the snapshots at $t = 250\\,\\text{Myr}\\,$ and the galaxies NGC628, NGC2403, NGC4736, NGC5055, NGC5194, NGC7793 to the snapshots at $t = 2\\,\\text{Gyr}\\,$.\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/radial profiles\/D_Z_rho.pdf}\n\\caption{Same as Fig. \\ref{fig:RadialZ} but presenting the radial profile of the dust-to-metal ratio.}\\label{fig:RadialDZ}\n\\end{figure}\n\nIn Figure~\\ref{fig:RadialZ}, the radial metallicity profile at early ($t = 250\\,\\text{Myr}$) and late times ($t = 2\\,\\text{Gyr}$) is shown for the different models. Observational data points for $Z$ are derived from the dataset of \\citet{2012MNRAS.423...38M}. In all models the metallicity peaks at the center and gradually falls off towards $r \\sim R_{25}$, beyond which it rapidly drops. Without diffusion, the dispersion of metallicity values is generally larger than in the runs with diffusion. This is expected, as diffusion leads to the local averaging of diffused quantities, pulling their values closer to the local mean value. At early times and small radii, all models agree quite well with each other and differences only become visible at large radii, where the radius beyond which the metallicity starts to drop rapidly increases with diffusion strength. In contrast, the observational data (Holmberg II) exhibit no significant metallicity gradient. This difference might be due to the different morphology of Holmberg II, which is classified as a dwarf irregular galaxy. Nonetheless, the overall metallicity levels agree with our simulations. At late times, the drop at the edge of the disk is shallower with stronger diffusion, as diffusion acts to populate the halo with metals. In the runs with diffusion, the metallicity reaches a local minimum at $r \\gtrsim 3 R_{25}$ before increasing towards a local maximum at $r \\lesssim 4 R_{25}$, beyond which it falls off rapidly. The minimum is more pronounced with weaker diffusion, but requires some degree of diffusion to be present in the first place. In the \\emph{HiRes}\\ run, the minimum is less pronounced than in the \\emph{Diffx0.1}\\ run, but more pronounced than in the \\emph{Diffx1}\\ run, indicating that the effective diffusion strength is somewhere in between the two. This region around the local minimum corresponds to the metal-poor torus mentioned in the discussion of Figure~\\ref{fig:Zmap2Gyr} above. Very strong diffusion ($C_d = 0.2$) acts to flatten the metallicity profile and reduces the slope of the drop in metallicity outside the disk. At lower values of the diffusion strength, the slopes in the central region and the outer parts of the halo are hardly changed compared to the case of no diffusion, but the profile is slightly shallower at intermediate radii $r \\sim R_{25}$. The observational data tend to lie above the simulation data. Since metallicity tends to increase with time, this suggests that a comparison with a later snapshot might lead to better agreement. Nonetheless, apart from the normalization, the overall radial trend is well captured by all models. The runs with diffusion tend to be in better agreement with the data at large radii, where the profiles tend to be flatter with diffusion. \n\nTo our knowledge, the metal-poor torus seen in the runs with diffusion has not been observed in spatially resolved observations of nearby spirals. Here we want to explore why this might be. First, it should be noted that spatially resolved observations of metals and dust require bright sources, i.e. it is significantly easier to make reliable spatially resolved observations of the dense inner parts of a galaxy than of the outer part of the disk or the halo, where such observations are much more difficult and much less conclusive. This is why most spatially resolved observational data do not extend much beyond $R_{25}$. Given that the metal-poor torus is located at around $r\\sim 3-4 R_{25}\\,$, it is thus likely that current observational methods are simply not able to resolve such a structure. Even if observations at this radius existed, the scatter in the observed metallicity values might be larger than the depth of the dip in metallicity, i.e. such a torus would be unobservable. \nFurthermore, it is possible that the torus is just a numerical relic arising from the perfectly spherical structure of the halo and the absence of environmental effects like mergers, filaments or major in- or outflows. It is thus unclear whether such a torus would even form in the first place in a more realistic cosmological environment, which includes all of these disruptive features.\n\nFigure~\\ref{fig:RadialDust} depicts the radial profile of the dust-to-gas ratio $\\mathcal{D}_{\\text{tot}}\\,$ at early and late times. Observational data are taken from \\citet{2012MNRAS.423...38M}. 
The dust profile exhibits similar features as the metallicity profile, but at early times is flatter in the center and at late times exhibits a less pronounced feature at the location of the metal-poor torus in the runs with diffusion. Central dust abundances are slightly higher with diffusion than without, except in the \\emph{Diffx10}\\ run where a similar abundance is reached. In all runs, the central slope flattens over time as the disk becomes more enriched with dust, while the slope at the edge of the disk steepens. At early times all models agree well with the data in Holmberg II, however overall levels of $\\mathcal{D}_{\\text{tot}}\\,$\\ tend to be slightly lower in Holmberg II. At late times, the observational data have a large dispersion. All simulations tend to agree reasonably well with most profiles, though the runs with diffusion, which tend to have a flatter and more extended profile are in better agreement with the data in NGC5055, NGC5194 and NGC7793, while NGC628 and NGC2403 where the profiles fall off more rapidly tend to be more consistent with the run without diffusion.\nOur results are in reasonable agreement with \\citetalias{2020MNRAS.491.3844A}, who find that while their profiles of the dust-to-gas ratio tend to be in good agreement with some of the observations, the discrepancies with the rest of the data could potentially be attributed to the still rather large uncertainties in the destruction efficiency in SN reverse shocks \\citep[see, e.g.][]{2007MNRAS.378..973B, 2007ApJ...666..955N} and the dust yield from AGB stars \\citep[e.g.][]{2011EP&S...63.1027I, 2013MNRAS.436.1238K}.\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/radial profiles\/DsDl_rho.pdf}\n\\caption{Same as Fig. \\ref{fig:RadialZ} but presenting the radial profile of small-to-large grain ratio. The data points are taken from \\citet{2020A&A...636A..18R}. The galaxies from this sample are late stage spirals, which is why we are only comparing them to the snapshot at late times.}\\label{fig:RadialDsDl}\n\\end{figure}\n\nFigure~\\ref{fig:RadialDZ} shows the radial profile of the dust-to-metal ratio $\\mathcal{D}\/Z\\,$ at early and late times. Observational data are taken from \\citet{2012MNRAS.423...38M}. At early times, there are major differences between the models. In the \\emph{NoDiff}\\ run, $\\mathcal{D}\/Z\\,$ is roughly constant at $\\mathcal{D}\/Z\\,$$\\sim 0.1$ within the disk, but rapidly drops beyond $r \\gtrsim 2 R_{25}$.\nIn the other runs, it is slightly larger in the center but drops to $\\mathcal{D}\/Z\\,$$ \\sim 7 \\times 10^{-2}\\,$ beyond $r \\gtrsim 0.3 - 0.5 R_{25}$. The central increase in $\\mathcal{D}\/Z\\,$ is larger with weaker diffusion. With strong diffusion, there is no increase in the central value of $\\mathcal{D}\/Z\\,$, but at intermediate radii it is still lower than without diffusion. In the runs with diffusion, the drop in $\\mathcal{D}\/Z\\,$ beyond $r \\sim 2 R_{25}$ is shallower and happens at larger radii than in the \\emph{NoDiff}\\ run. The profile in the \\emph{HiRes}\\ run is very different from those in the \\emph{Diffx1}\\ and \\emph{Diffx10}\\ runs, indicating that $\\mathcal{D}\/Z\\,$ might be sensitive to the details of the feedback prescription. In all runs, the values of $\\mathcal{D}\/Z\\,$ tend to be higher than the observed values in Holmberg II. 
$\\mathcal{D}\/Z\\,$ tends to decrease with increasing radius, a trend which is captured in the runs with diffusion, slightly improving the agreement compared to the run without diffusion, where $\\mathcal{D}\/Z\\,$ is constant throughout the disk. \n\nThe radial trend of the dust-to-metal ratio at early times may be explained by the fact that the galaxy is still actively star forming, which leads to stronger feedback and efficient outward advection of dust- and metal-enriched particles. In the \\emph{NoDiff}\\ run, this leads to the build-up of a constant $\\mathcal{D}\/Z\\,$ profile consistent with the yield relation. In the runs with diffusion, the large grains diffuse into the outer parts of the disk where they can be shattered, leading to the formation of small grains. At this point, the gas in the center is still hosting mostly large grains and thus diffusion transports some of the small shattered fragments to the central region. Since the only sources of dust grains in the outer parts of the disk are advection and diffusion of large grains from the center, this leads to lower dust-to-metal ratios in this region. With stronger diffusion this asymmetry is stronger leading to lower values of $\\mathcal{D}\/Z\\,$ in the outer parts of the disk and initially larger values in the center. If shattering can deplete the abundance of large grains in the outer disk fast enough, diffusion can further enhance the outward advection of large grains from the center, shifting the small-to-large grain ratio $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ towards larger values. However, the central star-forming part of the disk is a very hostile environment for small grains at early times, since sputtering due to SN explosions efficiently consumes their abundances. Thus the overall destruction rate of small grains in the central region increases with increasing diffusion rate, resulting in lower $\\mathcal{D}\/Z\\,$ in the central region.\n\nAt late times, the feedback is more quiescent and differences between the simulations arise mostly due to differences in grain growth. All runs exhibit a similar $\\mathcal{D}\/Z\\,$ profile within $r < 2 R_{25}$, which is rather flat at around $\\mathcal{D}\/Z\\,$ $\\sim 0.8 - 0.9$, with a slight decrease towards small $r$. The profile in the \\emph{NoDiff}\\ run reaches slightly lower values than the other runs and drops steeply just beyond $r \\sim 2 R_{25}$. In the runs with diffusion the steep drop only happens at slightly larger radii. With strong diffusion, higher values of $\\mathcal{D}\/Z\\,$ exceeding the yield relation $\\mathcal{D}\/Z\\,$$ = 0.1$ are attained even in the halo, indicating the presence of large grains, as small grains would be lost to sputtering almost immediately. At late times, the values of $\\mathcal{D}\/Z\\,$ from the simulations tend to be slightly higher than the observational data at large radii. None of the models can explain the steep decline in $\\mathcal{D}\/Z\\,$ in the systems NGC628 and NGC 2403. \\citetalias{2020MNRAS.491.3844A} report the same issue of too flat dust-to-metal ratio profiles in their simulation. They discuss that this may be related to too efficient accretion, which may be related to the missing distinction between different grain species. \n\nIn Figure~\\ref{fig:RadialDsDl} the radial profile of the small-to-large grain ratio is shown at early and late times. Observational data points are taken from \\citet{2020A&A...636A..18R}. 
They fitted near- to far-IR maps of\nthe three spiral galaxies M33, M101 and NGC628 on a pixel-by-pixel basis and derived dust maps within the disks of their galaxy sample. They fitted the data with the classical dust model by \\citet{1990A&A...237..215D}, assuming three types of grains: polycyclic aromatic hydrocarbons (PAHs) and very small graphite grains for the small grains, and big silicates ($a > 0.15 \\,\\micron$) for the large grains. Such a comparison is bound to exhibit certain differences due to numerous uncontrolled systematic effects related to the exact evolutionary stage and assembly histories of the galaxies; however, it is still worthwhile and might lead to some insight.\n\nIn all runs, the early time $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ profile exhibits a gentle increase from the center towards intermediate radii, where a local maximum is attained. Beyond the maximum, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ drops rather mildly and, in the runs with diffusion, reaches a constant value of $\\left(\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\right)_h \\sim 2 \\times 10^{-3}\\,$. The shape of the profile is significantly altered by the inclusion of diffusion. In the \\emph{NoDiff}\\ run the maximum value of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ $\\sim \\mathcal{O}\\left(10^{-2}\\right)$ is attained at $r \\sim 0.2 R_{25}$. If diffusion is enabled, the central value of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ $\\sim 0.1$ is independent of the diffusion strength, while the exact value and the position of the maximum depend on the strength of diffusion. With weaker diffusion, the maximum tends to lie further out at larger radii and to reach slightly larger values, even though the variation is small, with $\\left(\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\right)_\\text{max} \\sim 0.1 - 0.2$. In the \\emph{Diffx1}\\ and \\emph{Diffx10}\\ runs the drop from the maximum to $\\left(\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\right)_h$ is almost identical, with $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ being slightly lower in the case of stronger diffusion, while the drop is shallower in the run with weaker diffusion. The $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ profile in the \\emph{HiRes}\\ run is almost constant below $r \\sim 0.3 R_{25}$ and then falls with a slope similar to the \\emph{Diffx1}\\ and \\emph{Diffx10}\\ runs, even below $\\left(\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\right)_h$. At $r \\sim 2 R_{25}$ the small-to-large grain ratio reaches a local minimum slightly below $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim 10^{-3}$ and increases from there towards $\\left(\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\right)_h$.\n\nAt late times, the shape of the profile has changed significantly, and in the central region differences between the runs with and without diffusion have become smaller. In the \\emph{NoDiff}\\ run, the central value of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ has increased compared to early times to $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim 6 \\times 10^{-2}\\,$ and the profile mildly increases up to $r \\sim 2 R_{25}$, where it reaches its maximum at $\\left(\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\right)_\\text{max} \\sim 0.3$. Beyond the maximum, the small-to-large grain ratio drops steeply. In the runs with diffusion the central value of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim 0.1 - 0.2$ is rather independent of the diffusion strength, but with strong diffusion the central value is $2 - 3$ times larger. 
In the runs with diffusion, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ is almost constant and only mildly increases up to $r \\lesssim R_{25}$ where the slope slightly steepens. At $r \\sim 2.5 R_{25}$ a maximum value of $\\left(\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\right)_\\text{max} \\sim 1$ is reached. At larger radii the profile drops towards an almost constant value of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ that is higher in the runs with stronger diffusion. At small radii, the \\emph{Diffx0.1}, \\emph{Diffx1}\\ and \\emph{HiRes}\\ runs agree well with the observations. The \\emph{Diffx10}\\ and \\emph{NoDiff}\\ runs exhibit too high and too low values of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$, respectively. At intermediate radii the discrepancy becomes less prominent.\nHowever, none of the models reproduce the mild drop in $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ towards intermediate radii. Possible reasons for this discrepancy are manifold. Environmental effects like mergers or cold gas inflows, which are not present in the simple case of an isolated galaxy, can cause radial inflows that can impact the morphology of the galaxy at all radii and change how metals and dust are distributed within the galaxy \\citep{2014MNRAS.438.1870D}. Furthermore, some of the assumptions made in the dust model used to fit the observations, like optically thin emission \\citep{1998ApJ...509..103S}, may bias the results.\n\n\\subsection{Evolution of the Grain Size Distribution}\n\nThe GSD is determined by two main components: its normalization, i.e. the dust-to-gas ratio, and its shape, as captured by the small-to-large grain ratio. Here we will study how these two properties of the GSD evolve with time and metallicity. We also show how the full GSD in the dense and diffuse ISM changes over time. \n\n\\subsubsection{Global Picture}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/GSD\/DsDl_history.pdf}\n\\caption{The galactic small-to-large grain ratio (solid) and the dust-to-metal ratio (dashed) as a function of time for the different models.}\\label{fig:DsDlHistory}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/Dtot\/Dust_history.pdf}\n\\caption{The galactic dust-to-gas ratio as a function of time for the different models.}\\label{fig:DtotHistory}\n\\end{figure}\n\nIn order to compare the efficiency of overall dust growth in the models, we study the evolution history of the dust-to-gas ratio, shown in Figure~\\ref{fig:DtotHistory}. In all models, the dust abundance first undergoes a phase of exponential growth, before it slows down at around $t = 500\\,\\text{Myr}\\,$, after which the dust abundance only grows linearly, limited by the very slow injection of new metals. The initial growth is slower, but lasts longer, in the runs with stronger diffusion. The final value of the dust-to-gas ratio is higher by about $16\\,\\%\\,$ with stronger diffusion.\n\nComparing this trend to the evolution of the small-to-large grain ratio might give some insight into what drives the dust growth. The global small-to-large grain ratio is defined as the ratio of the total mass of small and large grains within the disk. The time evolution of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ and $\\mathcal{D}\/Z\\,$ is shown in Figure~\\ref{fig:DsDlHistory} for the different models. In all runs, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ first undergoes a phase of exponential growth, before it saturates, similar to the time evolution of $\\mathcal{D}_{\\text{tot}}\\,$. 
The time evolution of $\\mathcal{D}\/Z\\,$ reflects the trends seen in the time evolution of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$, but is slightly offset, indicating that grain growth occurs only after grain processing shifted the GSD towards smaller grain sizes.\nThere are a number of notable differences in the evolution histories of the different models. In the runs with weaker or no diffusion, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ initially grows at a higher rate, but for a shorter time, resulting in overall lower values of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ and $\\mathcal{D}\/Z\\,$ at saturation. The differences in the growth phase due to diffusion can be explained by the above coined concept of \\emph{Socialism of metals}. As long as the diffusion timescale is shorter than the growth timescale, newly grown small grains tend to be shared among more gas particles, leading to a dilution of grain abundances. Since the grains tend to grow faster in regions with higher abundances, this dilution initially slows down overall levels of grain growth. However, once sufficiently high abundances are reached, since there are more gas particles involved, the growth can go much further, leading to more growth in the long run. In the \\emph{HiRes}\\ run, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ initially grows as slowly as in the \\emph{Diffx10}\\ run and after a few $100 \\,\\text{Myr}\\,$, the growth slows down even further and saturation sets in slightly delayed at a similar value as in the \\emph{Diffx0.1}\\ run. \nThis is in line with the above discussion of $\\mathcal{D}\/Z\\,$ at early times. If at high resolution, there is more turbulence at early times, this would mean that diffusion can be comparable, and even stronger than in the run with strong diffusion, delaying the growth of small grains, until the feedback driven turbulence has dispersed and diffusion weakens to a level, that is comparable to the runs with intermediate or weak diffusion. In the runs with diffusion, the final value of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ is higher by a factor of 1.5 to 2 compared to the \\emph{NoDiff}\\ run, indicating that diffusion enhances the production of small grains. \n\nThere are three interesting time intervals at which the GSD is expected to be qualitatively different. At early times ($t \\sim 250\\,\\text{Myr}\\,$), we expect the GSD to be yield dominated, i.e. closely following the log-normal distribution from the stellar yield relation. At intermediate times ($t \\sim 750\\,\\text{Myr}\\,)$, we expect the abundance of small grains to have reached its maximum. Finally at late times ($t \\sim 2 \\,\\text{Gyr}\\,$), we do not expect the GSD to change much. In the following, we will analyse the GSD at these points in time and compare the effect of the different models at each point in time.\n\n\\subsubsection{Dust-to-gas Ratio vs. Metallicity}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/Dtot\/DtotvsZ.pdf}\n\\caption{The dust-to-gas ratio as a function of metallicity at early (top panel), intermediate (center panel) and late times (bottom panel) for the different runs. The solid lines show the median relation in the disk, while dashed lines show the relation in the halo. The shaded area corresponds to the range between the 25th and 75th percentile. The black dashed lines correspond to the yield relation $\\mathcal{D}_{\\text{tot}}\\,$$ = 0.1 Z$ and saturation $\\mathcal{D}_{\\text{tot}}\\,$$ = Z$, respectively. 
We compare our results to the observational data from \\citet{2014A&A...563A..31R}.}\\label{fig:DtotvsZ}\n\\end{figure}\n\nWe use the relation of the dust-to-gas ratio, $\\mathcal{D}_{\\text{tot}}\\,$, with metallicity, $Z$, in order to verify that our model reasonably calculates the dust abundance. This relation is often used as a benchmark to assess the validity of chemical evolution models \\citep[e.g.][]{1998ApJ...496..145L, 2020MNRAS.491.3844A}. In Figure~\\ref{fig:DtotvsZ}, we show the relation between $\\mathcal{D}_{\\text{tot}}\\,$ and $Z$ in the early, intermediate, and late stage of the evolution of the galaxy for each run. We show separately the relation in the disk (solid lines) and halo (dashed lines) by grouping the particles according to the criterion in Eq.~(\\ref{eq:selection}). We also plot the observed $\\mathcal{D}_{\\text{tot}}\\,$--$Z$ relation for a sample of nearby individual galaxies compiled by \\citet{2014A&A...563A..31R} from the KINGFISH survey and the sample from \\citet{2011A&A...532A..56G}. The conclusions derived from the comparison are not affected even if we use newer analysis \\citep[e.g.][]{2019A&A...623A...5D,2021A&A...649A..18G} for the observational data. The simulation data corresponds to the median relation of gas particles within a single galaxy. Thus a direct comparison cannot be made. Instead, we just use the observational data as a first reference to decide whether or not our model produces results in line with the observed relation. All models are roughly in line with the observations, indicating that our implementation reasonably reproduces the trend of dust evolution with metallicity.\n\nIn the disk, the relation is generally characterised by $\\mathcal{D}_{\\text{tot}}\\,$ following the linear yield relation at low $Z$ and saturation at high $Z$. At intermediate metallicities non-linear growth kicks in connecting the two regimes \\citep[see, e.g.][]{1998ApJ...501..643D, 1999ApJ...510L..99H, 1999ApJ...522..220H, 2003PASJ...55..901I, 2008A&A...479..453Z, 2013EP&S...65..213A, 2020MNRAS.491.3844A}. At high metallicity, the dust-to-gas ratio tends to slightly drop, which is likely related to sputtering due to SN shocks, as particles with high $Z$ tend to be located closer to stars. In the halo, typically lower dust abundances are achieved, due to depletion by thermal sputtering. \n\nAt early times, the dust has not yet experienced growth in the ISM; thus, the dust-to-gas ratio in the disk closely follows the stellar yield relation in all runs. In the runs with diffusion, $\\mathcal{D}_{\\text{tot}}\\,$ falls below the yield relation at low $Z$. This is because the gas in this regime is typically in the warm phase, where large grains can be shattered, leaving behind small fragments. Shattering increases the small grain abundances in the diffuse ISM to levels, that are larger than in the dense ISM, leading to an outflow of grains from the diffuse ISM that lowers the total dust-to-gas ratio. This is further enhanced by the erosion of small grains due to thermal sputtering in the halo and destruction in SN shocks in the dense ISM.\nDiffusion also leads to higher dust-to-gas ratios in the halo. These effects are more pronounced with stronger diffusion.\n\nAt intermediate times, dust growth has started to saturate at high metallicity. In all runs, the $\\mathcal{D}_{\\text{tot}}\\,$--$Z$ relation in the disk exhibits yield dominated low $Z$ gas and dust saturated high-$Z$ gas, with a transition region around $Z \\sim 0.1 Z_{\\sun}$. 
\nJust as at early times, the relation in the runs with diffusion lies slightly below the yield relation at low $Z$, which again is due to the excess of small grains which are easily destroyed in the halo and star forming regions and therefore keep flowing out of the diffuse ISM. Diffusion lowers the metallicity at which non-linear growth kicks in, through mixing of enriched high $Z$ gas where grain growth has already commenced and lower metallicity gas, but keeps the metallicity at which saturation is achieved rather unchanged. The resulting $\\mathcal{D}_{\\text{tot}}\\,$-$Z$ relation in the non-linear growth regime is therefore shallower but extends over a larger range of metallicities than in the \\emph{NoDiff}\\ run. As discussed above, grain growth is slower in the \\emph{HiRes}\\ run. This might be due to differences in the driving of turbulence, which could prolong the initial dilution phase. Indeed, this phase seems to be still ongoing in this run, as indicated by dust-to-gas ratios below the stellar yield relation at relatively high $Z$ and the onset of non-linear growth at larger $Z$. The effects of non-linear growth are also visible in the halo, especially in the runs with stronger diffusion. \n\nAt late times, in the disk, the dust abundance in the high-$Z$ particles has saturated. In the \\emph{NoDiff}\\ run, the shape of the $\\mathcal{D}_{\\text{tot}}\\,$--$Z$ relation has hardly evolved compared to intermediate times, while in the runs with diffusion dilution tends to raise dust-to-metal ratio at low $Z$ above the yield relation. This is most pronounced in the \\emph{Diffx0.1}\\ run, as in the runs with higher diffusion, there are no low-$Z$ gas particles left in the disk. In the halo, non-linear growth has left its imprint in all runs. In the \\emph{NoDiff}\\ run,\nthe relation falls just below the yield relation, with some departure towards saturation at high $Z > Z_{\\sun}$. In the runs with diffusion, $\\mathcal{D}_{\\text{tot}}\\,$ reaches much lower values at low $Z$, owing to thermal sputtering efficiently depleting the dust abundance far away from the disk. This depletion is competing with the enrichment with new dust from the disk and therefore in the runs with stronger diffusion, dust abundances in the halo tend to be higher.\n\n\\subsubsection{Small-to-large grain Ratio vs. Metallicity}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/GSD\/DsDlvsZ.pdf}\n\\caption{The small-to-large grain ratio as a function of metallicity at early (top panel), intermediate (center panel) and late times (bottom panel) for the different runs. The solid lines show the median relation in the disk, while dashed lines show the relation in the halo. The shaded area corresponds to the range between the 25th and 75th percentile. Observational data shown in the bottom panel are taken from \\citet{2020A&A...636A..18R} for the three galaxies as presented in the legend.}\n\\label{fig:DsDlvsZ}\n\\end{figure}\n\nThe relation of the small-to-large grain ratio $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ with metallicity $Z$ may give hints about the evolution of the GSD. In Figure~\\ref{fig:DsDlvsZ}, we show the relation between $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ and $Z$ in the early, intermediate and late stage of the evolution of the galaxy for each run. We show separately the relation in the disk (solid lines) and halo (dashed lines) by grouping the particles according to the criterion in Eq.~(\\ref{eq:selection}). 
\nWe further compare the relation at late times to spatially resolved data from an observational sample of late type spirals taken from \\citet{2020A&A...636A..18R}. \nIn the disk, the relation is generally characterised by a rise in $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ at low $Z < 0.1 Z_{\\sun}$, while at high $Z$ it is decreased due to coagulation. This trend can be roughly seen in all runs at all times, but there are some differences as discussed below. In the halo, the relation is flatter most likely due to the importance of thermal sputtering.\n\nAt early times, in the \\emph{NoDiff}\\ run, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ rises from $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -4$ at low $Z$ to $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -(1 - 2)$ at around $Z \\sim 0.4 Z_{\\sun}$. At higher $Z$, it then drops slightly to $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -2$. The dispersion is rather large, spanning more than one order of magnitude. In the halo, the relation is essentially flat at $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -3$, with some variation at larger $Z$. In the \\emph{Diffx0.1}\\ and \\emph{Diffx10}\\ runs, the relation in the disk initially shows little differences, with $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ being slightly higher in the \\emph{Diffx1}\\ run.\nAt low $Z$, the disk and halo relations are identical and flat, with constant $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -3$ for $Z < 0.04 Z_{\\sun}$. At larger metallicities, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ increases and reaches its maximum value at around $Z \\sim 0.3 Z_{\\sun}$ before it slightly drops due to coagulation. The increase is less pronounced in the halo than in the disk, where it reaches a maximum value of $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -1$. In the \\emph{Diffx0.1}\\ run, the relation in the halo is similar to that in the \\emph{Diffx1}\\ run but it is slightly flatter towards large $Z$, while the relation in the disk at low $Z$ exhibits larger values, probably because it has already reached a more advanced stage in the dust growth, due to a shorter dilution phase. In the \\emph{HiRes}\\ run, the relation in the disk is similar to that in the \\emph{Diffx1}\\ and \\emph{Diffx10}\\ runs, but it is shifted towards slightly lower values of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$. The relation in the halo is similar as well, but the increase in $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ starts at slightly larger $Z$. Overall the inclusion of diffusion seems to slightly increase $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ at all metallicities, which is in line with the expectation that fluid mixing enhances dust processing. Moreover, we see an enhancement in the halo, due to a steady supply with dust grains from the disk.\n\nAt intermediate times, the relation in the halo flattens to a mild power-law in all runs with diffusion, while it develops a bump at $Z \\sim Z_{\\sun}$ in the \\emph{NoDiff}\\ run.\nThe power-law is shallower with stronger diffusion, but has a larger value of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim 0.05 - 0.1$, whereas in the other runs with diffusion the values are lower at $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -2$. In the \\emph{HiRes}\\ run, the relation in the halo has a similar shape as the one in the disk with a peak at $Z \\sim 0.2 Z_{\\sun}$ at $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -1$. 
In the disk, the relation in the runs with diffusion at low resolution and at high resolution are very different at low $Z$, while they agree well at $Z > 0.2 Z_{\\sun}$. \nIn the low-resolution runs, the low-$Z$ relation is rather flat at $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim -1$ and increases towards $Z \\sim 0.1 Z_{\\sun}$ where it attains a value of $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\sim 0$, while at high resolution the value at low $Z$ is lower by almost two orders of magnitudes and increases towards a maximum of $\\log$\\,$\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\lesssim 0$ at $Z > 0.2 Z_{\\sun}$ beyond which the curves join. \nAt large $Z \\gtrsim 0.2 Z_{\\sun}$, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ falls off like $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$$\\propto Z^{-1}$. The relation in the \\emph{NoDiff}\\ run\nhas a similar shape to the one in the \\emph{HiRes}\\ run, but it is overall lower by about $0.5\\,\\text{dex}\\,$. The enhancement of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ at low $Z$ and in the halo in the runs with diffusion is likely due to the diffusion of small grain abundances from high-$Z$ regions where they can grow efficiently. In the low-$Z$ regions in the disk, which are usually associated with warm diffuse gas, sputtering, which could deplete the small grain abundance, is very inefficient due to low star-formation activity and temperatures that are too low for thermal sputtering. Thus small grain abundances which are comparable to the high-$Z$ growth regions can accumulate in these regions, due to a steady supply from the growth regions.\n\nAt late times, the shape of the $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$--Z relation in the disk in the \\emph{NoDiff}\\ run has hardly changed, but it has been shifted towards slightly higher $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ values. In the runs with diffusion, the low-$Z$ relation has flattened even further, due to the ongoing diffusion of small grains out of high-$Z$ regions, while the large-$Z$ decline with $Z^{-1}$ remains roughly the same. The range of attained metallicity values in the disk is drastically narrowed down with increasingly stronger diffusion. \nIn the halo, $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ is almost constant with respect to $Z$ in the runs with diffusion, and takes on higher values with stronger diffusion, due to more efficient mixing with the small-grain-enriched disk. In the \\emph{NoDiff}\\ run, the relation in the halo is very different from the ones seen in the other runs.\n\nNone of the runs can reproduce the observations completely, which is rather surprising, given the good agreement with the radial trend of the data and the runs with diffusion shown in the bottom panel of Figure~\\ref{fig:RadialDsDl}.\nThe slight decreasing trend of $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ at high metallicity in M33 and NGC 628 is consistent with coagulation, but the decrease is not as large as that predicted by the models. In the \\emph{HiRes}\\ and the \\emph{Diffx10}\\ runs, the relation in the halo is similar to what is observed.\nThis might indicate that our models overestimate the effect of processes that only occur in the disk, such as accretion and coagulation.\n\n\\subsubsection{Dense and Diffuse ISM}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/GSD\/dense.pdf}\n\\caption{The grain size distribution in the dense ISM at early (top frame), intermediate (center frame) and late times (bottom frame) for the different runs. 
The solid lines depict the median relation. The shaded region shows the range between the 25th and 75th percentile. The black dashed line marked with `MRN' in the bottom panel shows the power-law corresponding to the \\citetalias{1977ApJ...217..425M} grain-size distribution $\\left[n\\left(a\\right) \\propto a^{-3.5}\\right]$.}\\label{fig:GSDdense}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/GSD\/diffuse.pdf}\n\\caption{Same as Fig. \\ref{fig:GSDdense} but in the diffuse ISM.}\\label{fig:GSDdiffuse}\n\\end{figure}\n\nWe compare the general features of the GSD in the dense and the diffuse medium at early, intermediate, and late times among the different models. We follow the definitions of the dense and diffuse medium used by \\citetalias{2020MNRAS.491.3844A}. In particular we restrict our discussion to particles within $r < 7\\,\\text{kpc}\\,$ and $\\left|z\\right| < 0.3\\,\\text{kpc}\\,$, where $r$ corresponds to the cylindrical radius with respect to the density-weighted center-of-mass of the gas and $\\left|z\\right|$ corresponds to the distance to the galactic plane. Particles are considered to belong to the cold, dense ISM if their density and temperature satisfy $n_\\text{H} > 10 \\,\\text{cm}^{-3}\\,$ and $T_\\text{gas} < 10^3\\,\\text{K}\\,$ and to the warm, diffuse ISM if $0.1\\,\\text{cm}^{-3}\\, < n_\\text{H} < 1 \\,\\text{cm}^{-3}\\,$ and $10^{3}\\,\\text{K}\\, < T_\\text{gas} < 10^4\\,\\text{K}\\,$. \n\nIn Figure~\\ref{fig:GSDdense}, the GSD in the dense ISM is shown at early, intermediate, and late times. As expected, at early times the GSD is dominated by the large grains from the yield relation. In all runs, the dense GSD has a small grain tail with a bump, indicating that growth has already begun. The weight of the small grain tail differs between the runs. In the \\emph{NoDiff}\\ run, \nthe tail hardly exceeds the yield relation, but admits a large dispersion. With diffusion, the abundance of small grains is greatly enhanced as exchange of dust between the dense and diffuse ISM enhances the processing of grains, by moving grains to where they can be processed. Without diffusion, large grains may end up trapped in dense clumps, where they may never be shattered, artificially biasing the GSD towards large grains. The enhancement is the largest with weaker diffusion, in line with the argument above, that the growth only kicks in after an initial diffusive period, where the diffusion timescale is shorter than the growth timescale. The distribution of GSD values is narrower with stronger diffusion. \n\nAt intermediate times, dust growth has increased the abundance of small grains to a level similar to that of large grains. The overall normalisation of the GSD has increased compared to early times. At the largest grain radii, the GSD still drops with a tail similar to that of the initial log-normal distribution, but the tail towards smaller grains is now a heavy power-law tail similar to the \\citetalias{1977ApJ...217..425M} grain size distribution. At the smallest grain radii, the distribution falls off significantly. The large-grain-end of the distribution shows little variation among the runs, whereas the drop at the small-grain-end occurs at larger grain radii with weaker (or no) diffusion, i.e. there are more very small grains with stronger diffusion, as exchange rates of grains between the dense and the diffuse ISM, which drive enhanced dust processing, are proportional to the diffusion strength. 
In the intermediate grain size range, all runs with diffusion agree remarkably well, while the GSD in the \\emph{NoDiff}\\ run falls short by $\\sim 0.5\\,\\text{dex}\\,$.\n\nAt late times, coagulation has kicked in, steepening the GSD and enhancing the abundance of large grains ($a > 0.1\\, \\micron$). This enhancement is less pronounced in the \\emph{NoDiff}\\ run, since the absence of mixing leads to less efficient grain processing. The power-law in the intermediate size range is closely resembling the \\citetalias{1977ApJ...217..425M} grain size distribution. This is a robust prediction from theory, which shows, that collisional processes like shattering and coagulation lead to a MRN-like GSD \\citep[e.g.][]{1969JGR....74.2531D, 1994Icar..107..117W, 1996Icar..123..450T, 2010Icar..206..735K}.\n\nIn Figure~\\ref{fig:GSDdiffuse} the GSD in the diffuse ISM ($0.1\\,\\text{cm}^{-3} < n_\\text{H} < 1 \\,\\text{cm}^{-3}\\,$ and $10^{3}\\,\\text{K} < T_\\text{gas} < 10^4\\,\\text{K}\\,$) at early intermediate and late times is shown. At early times, similar trends as in the case of the dense ISM are shown, though at lower normalisation. The values in the runs with diffusion are higher by almost an order of magnitude indicating that the transport of dust into the diffuse ISM is significantly more efficient with diffusion. Remarkably, while the dust enrichment of the diffuse ISM is slightly more efficient in the runs with stronger diffusion, the differences are only marginal, indicating that as long as there is even a small amount of diffusion, the diffuse ISM becomes significantly more enriched than without diffusion. \n\nAt intermediate times, there are significant differences in the GSDs with and without diffusion. The GSD in the \\emph{NoDiff}\\ run is essentially the yield log-normal with a slight tail towards small grain radii. In the runs with diffusion, the yield GSD is present, but is overshadowed by a broad bump at small grain radii, centered around $a \\sim 0.1 \\,\\micron$. With stronger diffusion, the amount of dust in the diffuse ISM is larger, because dust growth has already started in the dense ISM and grain abundances are continuously mixed throughout the ISM. This is why small grain abundances are higher in runs with stronger diffusion, where mixing is strongest. At the very small grain end, abundances of small grains are even higher than in the dense ISM, due to the absence of processes which could deplete the small grain abundance like destruction in SN shocks, which is more efficient in the dense star-forming regions. In the \\emph{HiRes}\\ run, the small grain bump is narrower and less pronounced than in the other runs with diffusion. This might simply be due to the later onset of dust growth which additionally is slower in this run.\n\nAt late times, coagulation kicks in, smoothing the small and large grains to form a power-law like distribution. In the \\emph{NoDiff}\\ run, there are two bumps; the large grain bump from the yield relation is still present, but it is slightly overshadowed by a wide small grain bump at $a \\sim 0.04 \\,\\micron$. The abundance of small grains slowly drops towards the smallest grains, but exhibits a lot of scatter. \nContrary to this, the GSDs in the runs with diffusion show only little scatter. In these runs, the GSD has been smoothed to a power-law that is shallower than the \\citetalias{1977ApJ...217..425M} one. 
The reason for this shallower slope, which implies larger small-to-large-grain ratios is, that in the diffuse ISM there are no processes like sputtering due to SN shocks which could deplete the small grain abundances. At grain radii $a < 10^{-3}\\,\\micron$ and $a > 0.3 \\,\\micron$ the distribution falls off. The overall normalisation of the GSD is higher with stronger diffusion.\n\n\\section{Comparison with Observations --- Extinction curves}\\label{sec:Extinction}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/Extinction\/dense.pdf}\n\\caption{The extinction curve in the dense ISM at early (top frame), intermediate (center frame), and late times (bottom frame) for the different runs. The solid lines depict the median relation. The shaded region shows the range between the 25th and 75th percentile. The observed extinction curve in the Milky Way is shown by purple dots \\citep{1992ApJ...395..130P}. The error bars show the dispersion along various lines of sight in the Milky Way \\citep{2013ApJ...770...27N}.}\\label{fig:Extdense}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{figures\/Extinction\/diffuse.pdf}\n\\caption{Same as Fig. \\ref{fig:Extdense} but in the diffuse ISM.}\\label{fig:Extdiffuse}\n\\end{figure}\n\nWe use extinction curves in order to translate the GSDs presented above into an observable that can be directly compared to observations. We compare the extinction curves obtained with the different models in the dense and diffuse medium at early, intermediate and late times to the observed extinction curve in the Milky Way. To this end, Figures~\\ref{fig:Extdense} and \\ref{fig:Extdiffuse} show the extinction curves in the dense and diffuse medium, respectively. Observational data are taken from \\citet{1992ApJ...395..130P} for the mean curve and from \\citet{2013ApJ...770...27N} for the dispersion in the various lines of sight \\citep[originally from][]{2007ApJ...663..320F}.\n\nIn the \\emph{NoDiff}\\ run, the extinction curve at early times is flat, both in the diffuse and in the dense medium, corresponding to the log-normal GSD from the stellar yield. With diffusion, the same is true for the extinction curve in the diffuse medium. In the dense medium however, diffusion steepens the extinction curve already at early times. This relates to the fact that the abundance of small grains of size $a \\sim 0.01 \\,\\micron$ is already growing (Fig.~\\ref{fig:GSDdense}), especially in the runs with weaker diffusion, where the initial dilution phase is ending earlier. \n\nAt intermediate times, in the dense ISM the extinction curve steepens significantly. In the diffuse ISM, the extinction curve remains flat in the \\emph{NoDiff}\\ run, but it steepens significantly in the runs with diffusion. The steepening of the extinction curves is more pronounced in runs with stronger diffusion, corresponding to more efficient small-grain production shown in Fig.~\\ref{fig:GSDdiffuse}.\n\nAt late times, the extinction curves in the dense ISM flatten in the runs with diffusion, becoming comparable to the observed extinction curve in the Milky Way. This trend agrees with the findings of \\citet{2021MNRAS.507..548L}, who find that the extinction curves tend to flatten at high metallicity. In the \\emph{NoDiff}\\ run, the extinction curve becomes slightly steeper instead, becoming slightly more comparable to the Milky Way extinction curve. 
In the \\emph{NoDiff}\\ run, the extinction curve at early times is flat, both in the diffuse and in the dense medium, corresponding to the log-normal GSD from the stellar yield. With diffusion, the same is true for the extinction curve in the diffuse medium. In the dense medium, however, diffusion steepens the extinction curve already at early times. This relates to the fact that the abundance of small grains of size $a \\sim 0.01 \\,\\micron$ is already growing (Fig.~\\ref{fig:GSDdense}), especially in the runs with weaker diffusion, where the initial dilution phase ends earlier. \n\nAt intermediate times, the extinction curve in the dense ISM steepens significantly. In the diffuse ISM, the extinction curve remains flat in the \\emph{NoDiff}\\ run, but it steepens significantly in the runs with diffusion. The steepening of the extinction curves is more pronounced in runs with stronger diffusion, corresponding to the more efficient small-grain production shown in Fig.~\\ref{fig:GSDdiffuse}.\n\nAt late times, the extinction curves in the dense ISM flatten in the runs with diffusion, becoming comparable to the observed extinction curve in the Milky Way. This trend agrees with the findings of \\citet{2021MNRAS.507..548L}, who find that extinction curves tend to flatten at high metallicity. In the \\emph{NoDiff}\\ run, the extinction curve instead becomes slightly steeper, bringing it marginally closer to the Milky Way extinction curve. The late-time dense extinction curves in the \\emph{Diffx0.1}\\ and \\emph{Diffx1}\\ runs are in best agreement with the observations around the peak and at short wavelengths, whereas the extinction curves in the \\emph{Diffx10}\\ and \\emph{HiRes}\\ runs agree better at long wavelengths and around $\\lambda^{-1} \\sim 6\\, \\micron^{-1}$. The median extinction curve in the \\emph{NoDiff}\\ run shows too little extinction at short wavelengths and too much at long wavelengths to explain the observations. This is consistent with previous results \\citep[e.g.][]{2020MNRAS.491.3844A}. \nIn the diffuse ISM, the extinction curves steepen even further compared with intermediate times. In the \\emph{NoDiff}\\ run, the extinction curve exhibits large diversity, ranging from flat to extremely steep curves. The median relation is comparable to the observed Milky Way extinction curve. In the runs with diffusion, the extinction curve in the diffuse ISM is extremely steep and there is little variation among the different runs, indicating that even very weak diffusion might lead to the same result. The steepness is due to the overproduction of small grains relative to the MRN slope, as shown in Figure \\ref{fig:GSDdiffuse}.\n\n\n\\section{Discussion and Conclusions}\\label{sec:discussion}\n\nWe presented here an extension of the dust evolution model of \\citetalias{2020MNRAS.491.3844A}, aimed at addressing the reported underproduction of small grains and the low efficiency of coagulation. We have addressed the former by introducing a subgrid model for fluid mixing by diffusion sourced by turbulence on unresolved scales, as described in Section~\\ref{sec:diffusion}. This has been motivated by the work of \\citet{2021ApJ...917...12S}, who stress that SPH by design suppresses fluid mixing \\citep[see also, e.g.][]{2010MNRAS.407.1581S, 2018MNRAS.474.2194E, 2018MNRAS.480..800H}. We have addressed the latter issue by recalibrating the subgrid prescription for dense clouds to be roughly consistent with the molecular gas fraction in Milky Way-mass galaxies \\citep{2018MNRAS.476..875C}, as described in Section~\\ref{sec: MultiPhase}. As diffusion inevitably leads to the enrichment of the halo gas with dust grains, we additionally had to extend the treatment of \\citetalias{2020MNRAS.491.3844A} with the destruction of dust grains by thermal sputtering \\citep{1995ApJ...448...84T}. Indeed, the results of \\citet{2021MNRAS.503..511G} indicate that the omission of thermal sputtering can bias the total dust mass towards significantly higher values by promoting additional dust growth.
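For orientation, we sketch the generic forms of the diffusion and sputtering ingredients introduced above; the expressions below are meant as a schematic summary rather than a verbatim restatement of our implementation, whose details and parameter values are given in the model sections above. Subgrid mixing of a passive scalar $A$ (a metal or grain abundance) is commonly modelled with a Smagorinsky-type diffusion coefficient sourced by the local, unresolved velocity shear \\citep[e.g.][]{2010MNRAS.407.1581S},\n\\begin{equation}\n \\frac{\\text{d}A}{\\text{d}t} = \\nabla \\cdot \\left( D \\, \\nabla A \\right), \\qquad D = C_{\\text{d}} \\, |\\tilde{S}| \\, h^2 \\:,\n\\end{equation}\nwhere $\\tilde{S}$ denotes the trace-free velocity shear tensor, $h$ the smoothing length, and $C_{\\text{d}}$ the diffusion parameter varied across our runs. For thermal sputtering, the widely used approximation of \\citet{1995ApJ...448...84T} corresponds to an erosion timescale\n\\begin{equation}\n \\tau_{\\text{sp}} \\simeq 0.17\\,\\text{Gyr} \\left( \\frac{a}{0.1\\,\\micron} \\right) \\left( \\frac{10^{-27}\\,\\text{g}\\,\\text{cm}^{-3}}{\\rho} \\right) \\left[ \\left( \\frac{2 \\times 10^{6}\\,\\text{K}}{T} \\right)^{2.5} + 1 \\right] \\:,\n\\end{equation}\nso that grains are eroded appreciably only in hot ($T \\gtrsim 10^{6}\\,\\text{K}$) gas and the smallest grains are destroyed first, which is precisely the regime of the halo gas that diffusion now enriches with dust.\n\n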
In order to understand the effect of diffusion, we have run a suite of simulations of an isolated Milky Way-like galaxy, including a run without diffusion serving as a baseline, three runs with diffusion spanning three orders of magnitude in the diffusion parameter $C_{\\text{d}}$, and a high-resolution run with diffusion.\nIn the following we summarise our findings:\n\\begin{enumerate}\n \\item Diffusion of metals tends to reduce the stellar metallicity, while also reducing the number of very metal-poor stellar populations, by narrowing the metallicity distribution of the (star-forming) gas. This in turn leads to a slight increase in gas metallicity, bringing the metallicities of the two populations closer together towards one unified value.\n \\item Diffusion generally leads to narrower relations (i.e. less scatter) between diffused quantities such as metallicities and grain abundances, with less weight on the tails (i.e. fewer extreme values).\n \\item With diffusion, metals and dust extend out to larger distances from the galactic disk, and the spiral arms are traced much better by their spatial distributions.\n \\item Diffusion leads to the formation of a metal-poor torus in the galactic plane just beyond the edge of the gas disk. In Section~\\ref{sec:profiles} we have discussed several reasons why such a structure might not have been observed so far. \n \\item Diffusion can enhance the processing of dust grains by moving grains to the sites where they can be most efficiently processed. This tends to increase small-to-large grain ratios and boosts overall dust growth, which is most efficient for smaller grains.\n \\item Diffusion initially delays dust growth during a \\emph{dilution} phase, in which locally high dust abundances are diluted as they spread throughout the disk. However, once the diffusion timescales become longer than the growth timescales, dilution ends and growth by accretion starts to deplete most gas-phase metals, binding them onto dust grains. Diffusion increases the fraction of gas-phase metals that are depleted in this way.\n \\item All of our simulations reproduce the relation between the dust-to-gas ratio and the metallicity. However, in the runs with diffusion, the metallicity above which the dust-to-gas ratio exceeds the yield relation is lowered. This is not because growth starts at lower metallicity, but rather because relatively metal- and dust-poor gas is enriched with dust from nearby, higher-metallicity gas in which growth can occur.\n \\item Diffusion leads to steeper extinction curves both in the diffuse and the dense ISM. The extinction curve in the dense ISM at late times is largely consistent with the observations in the Milky Way for a value of the diffusion parameter of $C_{\\text{d}} = 0.02$. However, in the diffuse ISM the extinction curve tends to be too steep, a prediction that is rather insensitive to the choice of the diffusion parameter. Observed extinction curves are measured along a line of sight, which may sample a mixture of dense and diffuse ISM. The observed extinction curves are therefore expected to lie somewhere in between our respective predictions for the dense and diffuse ISM.\n \\item As indicated by the differences in the high-resolution run, the strength of diffusion depends not only on the value of the diffusion parameter, but also on the spatial distribution of the sources of metals and dust and on the calibration of the feedback model driving the turbulence. If the distribution of sources of metals and dust is very clumpy and metal injections tend to affect only a few particles, metal gradients become large, artificially enhancing diffusion. Similarly, in the presence of strong kinetic feedback driving strong turbulence, diffusion can be significantly enhanced. The exact value of the diffusion parameter therefore needs to be calibrated along with the feedback model. \n \\item In all of our models, the radial profiles of dust and metal abundances as well as the small-to-large grain ratios are in reasonable agreement with the observations of nearby galaxies from \\citet{2012MNRAS.423...38M} and \\citet{2020A&A...636A..18R}. The dust-to-metal ratios in our models tend to be slightly larger than the observed values taken from \\citet{2012MNRAS.423...38M}. 
Curiously, from Figure~\\ref{fig:DsDlvsZ} we can see that in our simulations the relation between $\\mathcal{D}_{S}\/\\mathcal{D}_{L}\\,$ and $Z$ in the disk tends to lie above the observational relation \\citep{2020A&A...636A..18R}, while the relation in the halo lies below it. This might indicate that the observations effectively sample some average of the two relations, which would explain the slightly lower observed values; however, it seems more likely that a better calibration of our subgrid model, an improved treatment of dust processing by SN feedback, and the inclusion of AGN feedback will lead to better agreement with the observations.\n\\end{enumerate}\nIn this work, we employed an isotropic diffusion model. While this might be appropriate for gas-phase metals and small grains, which are closely coupled to the gas, large dust grains can decouple from small-scale turbulent eddies\n\\citep[e.g.][]{2015MNRAS.449.1625B, 2015MNRAS.452.3932B, 2018MNRAS.478.2851M}. It might thus be worthwhile to model the effect of the drag force on the dust grains as an anisotropic diffusion current. Moreover, throughout this paper we have assumed neutral grains and neglected grain charging. \\citet{2022MNRAS.510.1068G} have found that dust grains in \\ion{H}{ii} regions can pick up considerable amounts of negative charge. The charging of grains can have a profound impact on interaction rates, which might affect both grain growth and dust-based heating and cooling rates. Therefore, it might be interesting to include the effects of grain charging in future dust evolution models. Finally, in this work we assumed that dust grains are compact spheres. \\citet{2022MNRAS.509.5771H} developed a model for the grain-size-dependent evolution of grain porosity. They show that porosity may act to steepen the extinction curve.\nGrain porosity might also have important implications for other processes, such as molecule formation on grain surfaces, through the change in grain surface area. It might thus be advisable to take the effect of porosity into account in future studies.\n\nIn conclusion, including the diffusion of dust and metals in our simulations leads to an enhancement in the production of small grains, while remaining largely consistent with the total dust and metal abundances found in previous works \\citep[e.g.][]{2017MNRAS.466..105A, 2019MNRAS.485.1727H, 2020MNRAS.491.3844A}. There are some discrepancies between our simulations and the observations. In particular, we have found that in the outer parts of the galactic disk, the dust-to-metal ratios and the small-to-large grain ratios tend to be too high in our simulations, indicating that the production of small grains is too efficient in this regime. Future analysis will show how this issue can be resolved. It should also be stressed that, while the setting of an isolated disk galaxy is a useful framework for testing new models, its idealised initial conditions and the absence of environmental effects such as mergers, filaments, and large-scale tidal fields limit its applicability to realistic comparisons with observations. In order to obtain results that allow for a reasonable comparison with nearby spirals, future cosmological zoom-in simulations will have to be performed. 
Despite these limitations, we are confident that our results regarding the effects of diffusion hold true, and we hope that they will help guide future efforts to model the evolution of dust and the GSD in the ISM.\n\n\\section*{Acknowledgements}\n\nWe acknowledge Shohei Aoyama and Ikkoh Shimizu, who provided their versions of {\\sc GADGET3-Osaka}, which served as useful references for the implementation of the feedback and dust evolution models in our {\\sc GADGET4-Osaka}.\nOur numerical simulations and analyses were carried out on our local cluster {\\sc Orion}.\nThis work was partly supported by the JSPS KAKENHI Grant Numbers JP17H01111, 19H05810, and 20H00180. \nKN acknowledges support from the Kavli IPMU, World Premier Research Center Initiative (WPI).\nHH thanks the Ministry of Science and Technology (MOST) for support through grant\nMOST 108-2112-M-001-007-MY3, and the Academia Sinica\nfor Investigator Award AS-IA-109-M02.\n\n\\section*{Data Availability}\n\nData related to this publication and its figures are available on request from\nthe corresponding author.\n\n\n\n\\bibliographystyle{mnras}\n