diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznjoc" "b/data_all_eng_slimpj/shuffled/split2/finalzznjoc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznjoc" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and Background}\nNatural Language Processing is now dominated by transformer-based models \\citep{vaswani2017attention}, like BERT \\citep{devlin2019bert}, a model trained on predicting masked tokens and relations between sentences. BERT's impact is so strong that we already talk about `BERTology' \\citep{bertology}. \n\nIn addition to using BERT in NLP tasks and end applications, research has also been done \\textit{on} BERT, especially to reveal what linguistic information is available in different parts of the model. This is done, e.g., investigating \nwhat BERT's attention heads might be attending to \\citep{clark-etal-2019-bert}, or looking at its internal vector representations using so-called probing (or diagnostic) classifiers \\citep{tenney2019bert}.\nIt has been noted that BERT progressively acquires linguistic information roughly in the same the order of the classic language processing pipeline \\citep{tenney2019you,tenney2019bert}: surface features are expressed in lower layers, syntactic features more in middle layers and semantic ones in higher layers \\citep{jawahar-etal-2019-bert}. So, for example, information on part-of-speech appears to be acquired earlier than on coreference.\n\nMost work dedicated to understanding the inner workings of BERT has focused on English, though non-English BERT models do exist, in two forms.\nOne is a multilingual model \\citep[mBERT]{devlin2019bert}, which is trained on Wikipedia dumps of 104 different languages. The other one is a series of monolingual BERTs \\citep[among others]{polignano2019alberto, le2019flaubert, virtanen2019multilingual, camembert, bertje}. As expected, also the non-English monolingual BERT models achieve state-of-the-art results on a variety of NLP tasks, and mostly outperform the multilingual model on common NLP tasks \\citep{nozza2020mask}. Nevertheless, mBERT performs surprisingly well on zero-shot POS tagging and Named Entity Recognition (NER), as well as on cross-lingual model transfer \\citep{pires-etal-2019-multilingual}.\n\nIf these results imply that the inner workings of other monolingual BERTs and of mBERT are the same as BERT's is not yet known. Also not known is how \\textit{homogeneous} layer specialisation is: through general performance of, e.g., POS tagging, we see a peak at a given layer, but we do not know how specialisation actually evolves across the whole model. \nThis work investigates such issues. \n\n\\paragraph{Contributions}\nUsing probing classifiers for four tasks on six datasets for a monolingual Dutch model and for mBERT, we observe that (i) these models roughly exhibit the same classic pipeline observed for the original BERT, suggesting this is a general feature of BERT-based models; (ii) the most informative mBERT layers are consistently earlier layers than in monolingual models, indicating an inherent task-independent difference between the two models. 
Through a deeper analysis of POS tagging, we also show that (iii) the picture of a neatly ordered NLP pipeline is not completely correct, since information appears to be more spread across layers than suggested by the performance peak at a given layer.\n\nThe full source code is publicly available on GitHub\\footnote{\\url{https:\/\/github.com\/wietsedv\/bertje\/tree\/master\/probing}}.\n\n\n\\section{Approach}\n\nWe run two kinds of analyses. \n\nThe first is aimed at a rather high-level comparison of the performance of a monolingual (Dutch) BERT model (BERTje, \\citealt{bertje}) and multilingual BERT (mBERT) on a variety of tasks at different levels of linguistic complexity (POS tagging, dependency parsing, named entity recognition, and coreference resolution; see Section~\\ref{sec:task-data}), with attention to what happens at different layers. \n\nThe second is an in-depth analysis of the performance of BERTje and mBERT on part-of-speech tagging. The reason behind this is that looking at global performance over a given task does not provide enough information on what is actually learned by different layers of the model \\textit{within} that task. POS tagging lends itself well to this type of layerwise evaluation.\nFirst, because it is a low-level task for which relatively little real-world knowledge is required.\nSecond, because analysis of single tags is straightforward since it is done at the token level.\nThird, because POS tagging contains both easy and difficult cases that depend on surrounding context. Some words are more ambiguous than others, and some classes are open whereas others are closed.\nToken ambiguity may for instance be an important factor for differences between a monolingual and a multilingual model since the latter has to deal with more homographs, due to the co-presence of multiple languages.\n\nSection~\\ref{sec:analysis} describes how these analyses can be performed in practice using the probes.\n\n\\subsection{Experimental setup}\nOur method for measuring task performance at different layers is based on the edge probing approach of \\citet{tenney2019bert, tenney2019you}.\nEdge probing is a method to evaluate how well linguistic information can be extracted from a pre-trained encoder.\nSeparate classifiers trained on the outputs of individual Transformer layers in BERT can reveal which layers contain the most information for a particular task. \n\nThe inputs of the probing classifiers are embeddings extracted from the lexical layer (layer 0) and each Transformer layer (layers 1 up to 12) from either the pre-trained BERTje or mBERT model.\nEmbeddings of token spans are extracted from these full sentence or document embeddings and those spans are used as probe model inputs.\nThe probing classifiers are trained to predict task labels based on span representations, using an LSTM layer for tokens that require multiple WordPieces.\\footnote{See \\citet{tenney2019bert} for technical details on the classifier architecture. 
Our hyper-parameters are in Appendix~\\ref{appendix:hyperparams}.}\n\nFor each model, layer and task we train two probes: a single-layer probe and a scalar mixing probe.\nThe single-layer probe uses the output of a single pre-trained Transformer layer as its input, whereas the scalar mixing probe uses a weighted sum of the target layer and all preceding layers.\n\n\\subsection{Tasks and Data}\n\\label{sec:task-data}\n\nWe train the probing classifiers on six datasets with four different tasks, chosen to represent different layers of linguistic abstraction.\\footnote{Details on size, splits, and processing are in Appendix~\\ref{appendix:data}.}\nFor POS tagging and dependency parsing, the LassySmall and Alpino datasets from Universal Dependencies (UD) v2.5 \\citep{ud25} are used with the provided splits.\nFor Named Entity Recognition, we use the Dutch portion of the CoNLL-2002 NER dataset \\citep{tjong2002conll} with the provided splits.\nFinally, we use the coreference annotations of the SoNaR-1 corpus \\citep{sonar1} for coreference, with document-level training (80\\%), validation (10\\%) and testing (10\\%) splits.\n\n\\begin{figure*}[!t]\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udlassy-pos-v0-weight-curves.png}\n \\caption{UDLassy POS}\n \\label{fig:weights:udlassy-pos}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udalpino-pos-v0-weight-curves.png}\n \\caption{UDAlpino POS}\n \\label{fig:weights:udalpino-pos}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udlassy-dep-v0-weight-curves.png}\n \\caption{UDLassy DEP}\n \\label{fig:weights:udlassy-dep}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udalpino-dep-v0-weight-curves.png}\n \\caption{UDAlpino DEP}\n \\label{fig:weights:udalpino-dep}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/conll2002-ner-v0-weight-curves.png}\n \\caption{CoNLL-2002 NER}\n \\label{fig:weights:conll2002-ner}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/sonar-coref-v0-weight-curves.png}\n \\caption{SoNaR Coref}\n \\label{fig:weights:sonar-coref}\n \\end{subfigure}\n \\caption{Scalar mixing weights for each pre-trained model and each task.\n Highlights:\n The sorted weights form clean curves;\n BERTje makes more use of lexical embeddings;\n Weights decrease at final layers;\n mBERT peaks earlier than BERTje;\n POS and DEP results are consistent across datasets.\n }\n \\label{fig:weights}\n\\end{figure*}\n\n\\subsection{Analysis}\n\\label{sec:analysis}\nWe perform a series of analyses aimed at creating a picture of what happens inside BERTje and mBERT.\nInitial overall analyses of the tasks are done with the scalar mixing probes as well as the single layer probes for each of the six tasks.\n\nFirst, the weights that the scalar mixing probes give to each pre-trained model layer are compared (Section~\\ref{sec:weights}).\nLayers that get larger scalar mixing weights may be considered more informative than lower-weighted layers for a particular task \\citep{tenney2019bert}.\nThe most informative layers need not be at the same position in the model, since an interaction between layers at different positions may be even more informative.\nTherefore, we compare layer weights between tasks and 
pre-trained models.\nThe two different data sources for POS tagging and dependency parsing will give an indication of the stability of these weight distributions across datasets and within tasks.\nThese weights are solely based on training data, so they may not represent the exact layer importance for unseen data.\n\nSecond, we compare overall prediction scores of the probes on unseen test data for each task (Section~\\ref{sec:predictions}).\nThrough this, we can observe at what stage models peak for what task, and where monolingual and multilingual models might differ.\nThe accuracy deltas between layers for scalar mixing probes will give an indication about which layers add information that was not present in all previous layers combined.\nFor these probes, deltas should be positive if information is added and zero if a layer is uninformative.\n\nThird, we take a closer look at POS tagging (Section~\\ref{sec:pos}). \nThe previous analyses reveal information about the amount of task-relevant information that is present in each layer, but POS tagging can require different kinds of abstraction for different labels, so that POS performance might be non-homogeneous across layers.\nSpecifically, we (i) compare layerwise performance for each tag and the groups of open and closed class POS tags; (ii) \ninvestigate whether information is lost, learned or relearned within the model by combining probe predictions for each individual token; and (iii)\ncheck the most frequent confusions between tags to better understand the causes of errors.\n\n\n\n\\section{Analysis over all tasks}\n\nFirst, the weights of the scalar mixing models are compared in order to see which layer combinations are most informative. \nThese weights are tuned solely on the training data, so they need not reflect the exact layer importance for unseen data.\nSecond, we compare overall prediction scores of the probes on unseen test data for each of the tasks.\n\n\n\n\\subsection{Layer weights}\\label{sec:weights}\n\nFigure \\ref{fig:weights} shows the scalar mixing weights of the full scalar mixing probes. 
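\nTo make the scalar mixing concrete, the following minimal sketch (our own illustration under assumed variable names, not the actual probing code) shows how a mixed representation is obtained from per-layer embeddings; the learned scalars, after softmax normalisation, are the weights plotted in Figure~\\ref{fig:weights}.\n\\begin{verbatim}\nimport numpy as np\n\ndef scalar_mix(layer_embs, scalars, gamma=1.0):\n    # layer_embs: list of per-layer span embeddings, arrays of shape (dim,)\n    # scalars: one unnormalised learned weight per layer\n    w = np.exp(scalars - np.max(scalars))\n    w = w \/ w.sum()  # softmax over layers\n    mixed = sum(wi * e for wi, e in zip(w, layer_embs))\n    return gamma * mixed  # gamma: learned global scale\n\\end{verbatim}\nA probe for layer $\\ell$ restricts the mix to layers $0$ up to $\\ell$, so a large normalised weight for a layer indicates that the classifier relies on it more.\n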
We highlight a few important patterns that are consistent between tasks, and suggest possible explanations for what we observe, in particular regarding the differences between BERTje and mBERT.\n\n\\paragraph{The sorted weights form clean curves.}\nThe probing classifier is ignorant of the ordering of layers when the weights are tuned.\nNevertheless, the sorted weights mostly show clean curves.\nThe clean curves indicate that useful information for these tasks is gradually added and removed across the Transformer layers.\nThis also confirms that our probing model is actually sensitive to these gradual changes in the embeddings.\n\n\\begin{figure*}[!t]\n\\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udlassy-pos-v0-mix-accuracies.png}\n \\caption{UDLassy POS}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udalpino-pos-v0-mix-accuracies.png}\n \\caption{UDAlpino POS}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udlassy-dep-v0-mix-accuracies.png}\n \\caption{UDLassy DEP}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/udalpino-dep-v0-mix-accuracies.png}\n \\caption{UDAlpino DEP}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/conll2002-ner-v0-mix-accuracies.png}\n \\caption{CoNLL-2002 NER}\n \\end{subfigure}\n %\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{images\/sonar-coref-v0-mix-accuracies.png}\n \\caption{SoNaR Coref}\n \\end{subfigure}\n \\caption{Accuracy deltas for cumulative introduction of layers with scalar mixing probes. Positive values indicate that these layers contain new task-specific information. Some negative values in later layers suggest overfitting.}\n \\label{fig:mix:acc-deltas}\n\\end{figure*}\n\n\\paragraph{BERTje makes more use of lexical embeddings.}\nThe curves in Figure~\\ref{fig:weights} show that the probes for BERTje give higher weights to the first layer than the mBERT probes. This suggests that the pre-trained context-independent lexical embeddings of BERTje are more informative for these tasks than those of mBERT. \nThis makes sense because mBERT word pieces are shared between languages, so there is more word-piece-level lexical ambiguity in mBERT than in BERTje.\n\nThe exception to this pattern is the SoNaR coreference task, where the difference between mBERT and BERTje is small. 
Establishing whether two spans of text corefer requires more context-dependent information in addition to lexical embeddings, whereas the other tasks contain examples where context is not always required.\nBERTje does not rely on the lexical layer more strongly than on subsequent layers for this task.\n\n\n\\paragraph{Weights decrease at final layers.}\nIf the Transformer layers continually added information, the final layer would contain the most information.\nHowever, information actually decreases after peaking in layers 5 to 9.\nThe reason may be that the actual output of the model should be roughly the same as the original input.\nTherefore, generalisations are discarded in favour of representations that map back to actual word pieces.\nGeneralisations may lead to information loss if they do not correspond to our target tasks, because original information may become less accessible after generalisation.\nThe first (lexical) and last layers contain the most token identity information. If the probes did not benefit from learned language model representations, we would observe that these layers are the most important to solve the tasks.\nHowever, the weight peaks that we see in between the lexical layers suggest that the language models contain generalisations that are informative for the given tasks.\n\n\n\\paragraph{mBERT peaks earlier than BERTje.}\nThe weight peak for the mBERT probes is always in an earlier layer than the peaks of equivalent BERTje probes.\nThese peaks do not correspond to the center measures that \\citet{tenney2019bert} report for BERT's scalar mixing weights, since a single center measure only corresponds to a peak if the distribution is roughly normal.\n\nThis might suggest differing priorities during pre-training.\nGenerally, BERTje's weights start to decrease somewhere in the second half of the layers whereas mBERT's peaks are closer to the center.\nThis suggests that BERTje uses more layers to generalise than to instantiate back to tokens.\nThe large vocabulary and variety of languages in mBERT may require mBERT to start instantiating earlier, resulting in an equal amount of generalisation and instantiation.\n\n\\paragraph{POS and DEP results are consistent across datasets.}\nThe UDLassy and UDAlpino datasets contain equivalent annotations, but the data originates from different text genres.\nTheir POS curves in Figures \\ref{fig:weights:udlassy-pos} and \\ref{fig:weights:udalpino-pos} and their DEP curves in Figures \\ref{fig:weights:udlassy-dep} and \\ref{fig:weights:udalpino-dep} are, however, mostly the same.\nThis indicates that the probes are sensitive to the task and the input embeddings, but not overly sensitive to the specific data that the probes are trained on.\n\n\n\n\n\\subsection{Prediction scores}\\label{sec:predictions}\n\nFigure \\ref{fig:mix:acc-deltas} shows deltas of accuracy scores compared to the preceding layer based on test predictions.\nThe minimum absolute accuracy scores for each task range from 0.630 (SoNaR Coref) to 0.979 (CoNLL-2002 NER) and the maximum accuracy scores per task range from 0.729 (SoNaR Coref) to 0.991 (CoNLL-2002 NER).\\footnote{Accuracy deltas for single layer probes are in Appendix~\\ref{appendix:accuracies}.}\n\nIntuitively, positive deltas in the mixing results in Figure~\\ref{fig:mix:acc-deltas} indicate that the introduced layer contains new information that was not present in any preceding layers, whereas zero-deltas indicate that the new layer is completely uninformative.\nIdeally, the accuracy deltas would never be negative 
since the probe of layer $N$ has access to information from all layers up to $N$.\nNegative deltas with cumulative introduction of layers to the probes suggest that the probes sometimes overfit to training data.\nOtherwise, these deltas should always be zero or higher.\nScalar mixing weights of layers that correspond to these uninformative negative-delta layers should be lower in order to reduce their effect on the predictions.\nFigure~\\ref{fig:weights} shows that negative accuracy deltas mainly correspond to negative weight slopes.\nTherefore, the effects in Figure~\\ref{fig:weights} may be stronger in optimally performing probes.\n\nThe general pattern in the scalar mixing accuracy deltas in Figure~\\ref{fig:mix:acc-deltas} is that deltas are positive in earlier layers and improvement stops for the last layers.\nThis fits with the decreasing weights for the last layers in the full scalar mixing model (Figure~\\ref{fig:weights}).\n\nOne important difference between the layer mixing probes and the single layer probes is that single layer probes sometimes show negative accuracy deltas while the corresponding accuracy delta is positive for the mixing probe.\nPositive mixing probe deltas suggest that new information is introduced or made more accessible, whereas the negative single layer deltas suggest that some information is lost or has been made less accessible by the language model.\nIntuitively, this indicates that some information is sacrificed in order to make room for new information in the embedding.\nIf that is the case, the actual probe prediction mistakes may change between layers even if overall accuracy scores stay the same.\n\nAnalysis of scalar mixing weights or accuracy on the whole test data only gives an indication of the overall amount of information for a task.\nHowever, a more fine-grained error analysis is required to give any indication about what information is retrievable in which layer and what information becomes harder to identify.\n\n\n\\section{In-depth analysis for POS tagging}\\label{sec:pos}\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[width=1\\linewidth]{images\/udlassy-pos\/preds-tag-distribution.png}\n \\caption{Distributions of POS tags in the full test set as well as the filtered test set. The filtered distribution is not equivalent to the original distribution because some common tags are relatively easy.}\n \\label{fig:tag-dist}\n\\end{figure}\n\n\\begin{figure*}[h!]\n\\centering\n\\begin{subfigure}[b]{0.45\\textwidth}%\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layers\/preds-tags-layer-changes-bertje-closed_classes.png}\n\\caption{BERTje closed class POS tags}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.45\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layers\/preds-tags-layer-changes-mbert-closed_classes.png}\n\\caption{mBERT closed class POS tags}\n\\end{subfigure}\n\\caption{F1 scores per closed class POS tag per layer for BERTje and mBERT. 
Closed class performance stabilises around the sixth layer and does not significantly decrease.}\n\\label{fig:tags:agg:closed}\n\\end{figure*}\n\n\\begin{figure*}[h!]\n\\centering\n\\begin{subfigure}[b]{0.45\\textwidth}%\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layers\/preds-tags-layer-changes-bertje-open_classes.png}\n\\caption{BERTje open class POS tags}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.45\\textwidth}%\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layers\/preds-tags-layer-changes-mbert-open_classes.png}\n\\caption{mBERT open class POS tags}\n\\end{subfigure}\n\\caption{F1 scores per open class POS tag per layer for BERTje and mBERT. Except for verbs, performance decreases in later layers. This indicates that these tag representations become hard to distinguish in later layers.}\n\\label{fig:tags:agg:open}\n\\end{figure*}\n\nLayer-wise task performance and scalar mixing weights give information about overall information density for a task.\n\nFor POS tagging, maximum performance and largest scalar mixing weights are assigned to layers 5 to 9 for the pre-trained models, but this does not tell the whole story.\nIndeed, probes can make different types of errors for different layers and models, because the models may clarify or lose information between layers.\nMoreover, different examples and labels within a task may rely on information from different layers.\n\nWe want to give a more thorough view of what BERTje and mBERT learn, whether information becomes unidentifiable between layers, and whether the two models make the same mistakes.\nTherefore, we evaluate the errors of the UDLassy POS predictions with single layer probes.\n\nWe do this analysis on POS predictions because this task stays closest to the lexical level of embedding that the models are pre-trained for, but also relies on context and generalisation for optimal performance.\nWe focus on UDLassy data rather than UDAlpino because the differences between the accuracy deltas of scalar mixing models and single layer models appear larger for UDLassy. This would suggest a larger shift in mistakes.\n\nThe following analysis is done on the predictions of the 13 single layer BERTje probes and the 13 single layer mBERT probes.\nPOS tagging is not difficult for all tokens: for 85\\% of the test data, all 26 probes predict the correct tag.\nIn order to focus on errors, we perform all analyses using the subset of the tokens that have an incorrect prediction by at least one of the probes. 
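\nThe construction of this analysis set can be sketched as follows (a minimal illustration with hypothetical variable names, not the actual evaluation code):\n\\begin{verbatim}\nimport numpy as np\n\n# preds: shape (26, n_tokens), predicted tags of the 13 BERTje and\n# 13 mBERT single layer probes; gold: shape (n_tokens,), gold tags\ndef analysis_subset(preds, gold):\n    always_correct = (preds == gold).all(axis=0)\n    return np.where(~always_correct)[0]  # tokens with at least one error\n\\end{verbatim}\n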
This amounts to 1,720 tokens.\nThe original test data distribution as well as the filtered distribution are shown in Figure~\\ref{fig:tag-dist}.\n\n\nNote that the filtered data distribution does not correspond to the original distribution since some tags are easier to recognise than others.\nFor instance, proper nouns are over-represented in our analysis set whereas adpositions and punctuation are under-represented.\nThis is not a problem since we are explicitly interested in the mistakes and difficult cases and not in overall performance.\n\n\\subsection{Accuracies per POS tag}\n\nFigures \\ref{fig:tags:agg:closed}~and~\\ref{fig:tags:agg:open} show the F1 scores per POS tag per layer for the single layer probe predictions.\n\nPOS tags are grouped in aggregates based on whether they are considered to be closed categories (Figure~\\ref{fig:tags:agg:closed}) or open categories (Figure~\\ref{fig:tags:agg:open}) according to the Universal Dependencies guidelines.\nThere are six POS tags with relatively low average performance, which also have random fluctuations in per-layer performance.\nTherefore, \\textit{adp}, \\textit{cconj}, \\textit{punct}, \\textit{num}, \\textit{sym} and \\textit{x} are left out of Figures \\ref{fig:tags:agg:closed}~and~\\ref{fig:tags:agg:open}.\n\nFigure \\ref{fig:tags:agg:closed} shows that closed class POS tags seem to be learned by the pre-trained models and not lost in later layers.\nOn average, their scores increase for the first six layers, indicating that the probe uses learned information to identify these tags.\nAfter reaching top performance, the probe performance does not really decrease; rather, it plateaus.\nOnly the subordinating conjunction class seems to show some decline.\nThere is remarkably little difference between BERTje and mBERT for these classes.\n\nFigure \\ref{fig:tags:agg:open} shows the tag F1 scores for open class POS tags.\nContrary to the closed classes, the mean scores on open classes do seem to decline in later layers.\nWithin the open classes there are three different patterns.\nNouns and proper nouns are learned quickly and stay relatively stable.\nThis is especially true for mBERT.\nFor BERTje, the scores for (proper) nouns seem to decline somewhat after reaching a peak.\nVerbs keep improving for more layers than (proper) nouns.\nApparently, recognition of verbs is something that is resolved later in the pre-trained models.\nFinally, adjectives and adverbs show an actual decline in performance, since these two tags become hard to distinguish from each other, or possibly other tags, in later layers.\n\n\n\\subsection{Confusion between tags}\nThe previous figures give an indication about which POS tags are learned by pre-trained models based on context and which tags become unidentifiable, but they do not give an indication about changes in tag confusion.\nFigure \\ref{fig:tags:agg:open} shows that overall single layer performance of open class words peaks in layer~6 for BERTje, and layer~6 is also among the peak layers for mBERT.\n\nTo investigate whether biases and confusions change after this peak, we compare the summed confusion matrices from the six layers before and the six layers after layer~6. 
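\nThis aggregation can be sketched as follows (a minimal illustration assuming per-layer prediction arrays; the names are ours):\n\\begin{verbatim}\nfrom sklearn.metrics import confusion_matrix\n\ndef summed_confusions(layer_preds, gold, layers, tags):\n    # layer_preds[l]: predictions of the single layer probe for layer l,\n    # restricted to the filtered analysis set\n    return sum(confusion_matrix(gold, layer_preds[l], labels=tags)\n               for l in layers)  # e.g. layers 0..5 versus layers 7..12\n\\end{verbatim}\n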
These confusion matrices (Figure~\\ref{fig:cm:summed:open}) show that there are many similarities between BERTje and mBERT with respect to which confusions are resolved and which are introduced.\n\nA decrease in error counts between the first half and the second half of the models suggests that differentiation between tags is learned, whereas an increase in errors suggests information loss.\nFor instance, verbs and adverbs are more often misclassified as determiners in the first than in the second half. \nSimilarly, proper nouns are confused a lot more often with auxiliary verbs or pronouns in the first half than in the second half.\n\nThese differences suggest that discrimination between these tags is learned by both models.\nHowever, nouns and proper nouns are confused with adjectives a lot more often in the second than in the first half.\n\n\n\\begin{figure*}\n\\centering\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layer-errors-bertje-06-before.png}\n\\caption{BERTje layers 0 up to 5}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layer-errors-bertje-06-after.png}\n\\caption{BERTje layers 7 up to 12}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layer-errors-mbert-06-before.png}\n\\caption{mBERT layers 0 up to 5}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/udlassy-pos\/layer-errors-mbert-06-after.png}\n\\caption{mBERT layers 7 up to 12}\n\\end{subfigure}\n\n\\caption{Total confusions of open class POS tags before and after the middle. Confusions are very similar between BERTje and mBERT, but some confusions change between the first and last layers.}\n\\label{fig:cm:summed:open}\n\\end{figure*}\n\n\\subsection{Example errors}\n\nBERTje and mBERT do not always make the same mistakes, nor are the same mistakes made in each layer.\nFor many tokens, the probes make incorrect predictions for the first layer(s), but start making correct predictions in later layers, which indicates that learned information is used.\nOften, these error patterns are similar between BERTje and mBERT.\nThe following are examples of differences:\n\\setlength{\\Extopsep}{5pt}\n\n\\smallskip\n\n\\ex.\\label{ex1} Max Rood --- minister van Binnenlandse Zaken , kabinet - Van \\textbf{Agt} III \\\\\n {[}Max Rood --- minister of Internal Affairs , cabinet - Van \\textbf{Agt} III{]}\n\n\\ex.\\label{ex2} \\textbf{Federale} Regering \\\\\n {[}\\textbf{Federal} Government{]}\n\n\\ex.\\label{ex3} Het ontplooiingsliberalisme stelde de vrije \\textbf{maar} verantwoordelijke mens centraal. \\\\\n\t{[}Self-development liberalism placed the free \\textbf{but} responsible person at the centre.{]}\n\n\\ex.\\label{ex4} \\textbf{Reeds} in het begin van de 20ste eeuw \\dots\\\\\n {[}\\textbf{Already} in the beginning of the 20th century{]}\n\n\\ex.\\label{ex5} \\dots het \\textbf{Duitstalig} taalgebied \\dots\\\\\n {[}\\dots the \\textbf{German} language-area \\dots{]}\n\n\\ex.\\label{ex6} \\dots de Keltische \\textbf{stammen} in het gebied \\dots \\\\\n {[}\\dots the Celtic \\textbf{tribes} in the area \\dots{]}\n\n\\smallskip\n\n\n\\noindent In \\ref{ex1}, mBERT initially tags the proper noun ``Agt\\" as a verb. 
In \\ref{ex2} BERTje initially tags the adjective ``Federale\\" as a proper noun.\nBoth classifications are incorrect guesses, but with additional context both pre-trained models make the correct predictions in later layers.\nA different pattern of errors is that the probes make correct predictions based on the first or last layer, but make some mistakes for layers in between.\nIn \\ref{ex3} the conjunction ``maar\\" (but) receives the tag \\textit{adv} in several layers instead of the correct tag \\textit{cconj}.\nBERTje makes this mistake in layers 4, 5, and 10; mBERT makes it in layers 3 to 7.\nIt happens relatively often that all BERTje probes assign correct labels, but mBERT goes from incorrect to correct. These mistakes are typically resolved in the first layer of mBERT, suggesting such errors are easily resolvable with a little bit of context; see \\ref{ex4} for an example.\n\n\nThere are also many examples where mBERT probes are always correct, but BERTje probes make a mistake somewhere in the middle.\nIt may be the case that these examples are resolvable with and without context but that the internal representations of BERTje get generalised based on non-POS properties. In \\ref{ex5} the adjective ``Duitstalig\\" is confused with the proper noun tag in layers 4, 5, 7, 8 and 9, but the BERTje probes for the layers before and after get it right.\nSemantically it is reasonable to think that ``Duitstalig\\" has proper noun-like properties.\nFinally, \\ref{ex6} is an example where BERTje is always correct but mBERT makes a mistake somewhere in the middle.\nThe word ``stammen\\" should be a noun but mBERT sometimes tags it as a verb.\n\n\n\\section{Conclusion}\n\nOur results show that BERTje and mBERT exhibit a pipeline-like behaviour across tasks, similar to what has previously been shown for English.\n\n\\citet{tenney2019bert} observed that the pipeline order is roughly first POS tagging, then named entity recognition, then dependency parsing and coreference resolution. \nOur results suggest that BERTje encodes these pipeline tasks in a similar order.\nScalar mixing weights show that there is not a single layer that contains all important information, because the weight curves show peaks and valleys.\nThis suggests that useful task information is distributed between layers.\nGenerally, the most informative layers are located early in the second half of the pre-trained models.\nAs an additional note, because we ran the models on different datasets for the same task, we can assess stability across datasets.\nWe observe that POS tagging and dependency parsing results are consistent, suggesting that the probes are sensitive to the task and the embeddings, but not overly sensitive to the specific data that they are trained on.\n\nThe main task differences between the monolingual BERTje model and the multilingual mBERT model are that BERTje probes make more use of the lexical embedding layer than the mBERT probes, and that the most important layers of BERTje are mostly later than those of mBERT.\n\nSemantically rich POS tags like nouns and adjectives become harder to identify in later layers (Figure~\\ref{fig:tags:agg:open}) and confusions mainly happen between semantically rich open categories (Figure~\\ref{fig:cm:summed:open}). 
This suggests that semantic content is more important than POS-discriminating features for final token predictions.\nSo even if the POS abstraction is readily present neither in the lexical layer nor in the final token prediction layer, POS tag information is still found in middle layer generalisations.\nPOS tagging is a part of what the pre-trained models learn, but different tag abstractions are present in different layers.\nTherefore, feature-based use of these models should not use the output of a \\textit{single best} layer.\nIt would be better to combine the outputs of multiple or all layers in order to retrieve all learned information that is relevant for a downstream task.\nHowever, actual fine-tuning of pre-trained language models should still be the preferred approach.\n\nIn sum, our results show that pipeline-like behaviour is present in both a monolingual pre-trained BERT-based model as well as a multilingual model, even though task-specific information is distributed between layers.\nWe observed this for POS tagging, but it is still unclear how information is distributed within other tasks in these models. \nMoreover, it would be interesting to investigate \nalternative probing strategies in order to better disentangle what pertains to the model itself from what is specific to a given probing strategy. \nLastly, it is an open question how well linguistic properties are embedded within large pre-trained language models for non-Indo-European languages.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Technical Proofs}\n\n\\subsection{Proof of Lemma~\\ref{lem:bold}}\\label{app:L2}\n\nLet $\\mathsf{BSCC}$ denote the set of BSCCs of the chain-automaton product and $\\mathsf{SCC}$ the set of its SCCs.\n\nFor a subset $K$ of states of the product, $\\skcand k(K)$ denotes the event (random predicate) of $K$ being a candidate with strength at least $k$ on a run of the product.\nFurther, the ``weak'' version $\\kcand k(K)$ denotes the event that $K$ has strength $k$ when counting visits even prior to discovery of $K$, i.e., each state of $K$ has been visited and exited at least $k$ times on a prefix $\\path$ of the run with $K(\\path)=K$.\nPrevious work bounds the probability that a non-BSCC is falsely deemed a BSCC based on the high strength it reaches.\n\n\\begin{lemma}[\\cite{DacaHKP17}]\n\tFor every set of states $K\\notin\\mathsf{BSCC}$, and every $s\\in K$, $k\\in\\mathbb N$,\n\t$$\n\t\\mathbb P_s[\\kcand k (K)]\\leq (1-\\mathsf{p}_{\\mathsf{min}})^k\\,.\n\t$$\n\\end{lemma}\n\\begin{proof}\n\tSince $K$ is not a BSCC, there is a state $t\\in K$ with a transition to $t'\\notin K$.\n\tThe set of states $K$ becomes a $k$-candidate of a run starting from $s$ only if $t$ is visited at least $k$ times by the path and was never followed by $t'$ (indeed, even if $t$ is the last state in the path, by definition of a $k$-candidate, there are also at least $k$ previous occurrences of $t$ in the path). \n\tFurther, since the transition from $t$ to $t'$ has probability at least $\\mathsf{p}_{\\mathsf{min}}$, the probability of not taking the transition $k$ times is at most $(1-\\mathsf{p}_{\\mathsf{min}})^k$. 
\n\\end{proof}\n\nIn contrast to \\cite{DacaHKP17}, we need to focus on runs where $\\varphi$ is satisfied.\nFor clarity of notation, we let $K\\models\\varphi$ denote that $K$ is good, and $K\\not\\models\\varphi$ denote that $K$ is bad.\nIn particular, $K_\\infty\\models\\varphi$\ndenotes the event that the run satisfies $\\varphi$.\n\n\n\\begin{lemma}\\label{lem:one-cand}\n\tFor every set of states $K\\notin\\mathsf{BSCC}$, and every $s\\in K$, $k\\in\\mathbb N$,\n\t$$\n\t\\mathbb P_s[\\kcand k (K)\\mid K_\\infty\\models\\varphi]\\leq (1-\\mathsf{p}_{\\mathsf{min}})^k\\,.\n\t$$\n\\end{lemma}\n\\begin{proof}\n\tThe previous argument also applies when we condition on the run continuing in any concrete way (in particular, satisfying $\\varphi$) after this strength is reached, due to the Markovian nature of the product:\n\t\n\n\t\\begin{align*}\n\t &\\mathbb P_s[\\kcand k (K)\\mid K_\\infty\\models\\varphi]\\\\\n\t=& \\sum_{t\\to t'}\\mathbb P_s[\\kcand k (K), K\\text{ exited by }t\\to t'\\mid K_\\infty\\models\\varphi]\\\\\n\t=& \\sum_{t\\to t'}\\mathbb P_s[\\kcand k (K), K\\text{ exited by }t\\to t', K_\\infty\\models\\varphi]\/\\mathbb P_s[ K_\\infty\\models\\varphi]\\\\\n\t=& \\sum_{t\\to t'}\\mathbb P_s[\\kcand k (K), K\\text{ exited by }t\\to t'] \\cdot \\mathbb P_s[K_\\infty\\models\\varphi\\mid \\kcand k (K), K\\text{ exited by }t\\to t'] \/\\mathbb P_s[ K_\\infty\\models\\varphi]\\\\\n\t\\stackrel{(1)}=& \\sum_{t\\to t'}\\mathbb P_s[\\kcand k (K), K\\text{ exited by }t\\to t'] \\cdot\\mathbb P_s[K_\\infty\\models\\varphi\\mid K\\text{ exited by }t\\to t'] \/\\mathbb P_s[K_\\infty\\models\\varphi]\\\\\n\t\\stackrel{(2)}=& \\sum_{t\\to t'}\\mathbb P_s[\\kcand k (K), K\\text{ exited by }t\\to t'] \\cdot\\mathbb P_{t'}[K_\\infty\\models\\varphi] \/\\mathbb P_s[ K_\\infty\\models\\varphi]\\\\\n\t\\leq&\\sum_{t\\to t'\\text{ exiting }K}\\mathbb P_s[\\text{reach }t]\\mathbb P_t[\\text{not take }t\\to t'\\text{in }k\\text{ visits of }t]\\cdot\\mathbf{P}(t,t') \\cdot\\mathbb P_{t'}[K_\\infty\\models\\varphi] \/\\mathbb P_s[ K_\\infty\\models\\varphi] \\\\\n\t=&\\sum_{t\\to t'\\text{ exiting }K}\\mathbb P_t[\\text{not take }t\\to t'\\text{in }k\\text{ visits of }t] \\mathbb P_s[\\text{reach }t]\\cdot\\mathbf{P}(t,t')\\cdot\\mathbb P_{t'}[K_\\infty\\models\\varphi] \/\\mathbb P_s[ K_\\infty\\models\\varphi] \\\\\n\t\\leq&\\sum_{t\\to t'\\text{ exiting }K}(1-\\mathsf{p}_{\\mathsf{min}})^k\\mathbb P_s[\\text{reach $t'$ as the first state outside $K$}] \\cdot\\mathbb P_{t'}[K_\\infty\\models\\varphi] \/\\mathbb P_s[ K_\\infty\\models\\varphi]\\\\\n\t=& (1-\\mathsf{p}_{\\mathsf{min}})^k \\mathbb P_s[K_\\infty\\models\\varphi] \/\\mathbb P_s[ K_\\infty\\models\\varphi]\\\\\n\t=&(1-\\mathsf{p}_{\\mathsf{min}})^k\n\t\\end{align*}\n\n\\noindent where (1) follows by the Markov property and by the fact that a.s. 
$K \\neq K_\\infty$, and (2) by the Markov property.\n\\end{proof}\n\n\nIn the next lemma, we lift the results from fixed designated candidates to arbitrary discovered candidates, at the expense of requiring the (strong version of the) strength instead of only the weak strength.\nTo that end, let the \\emph{birthday} $b_i$ be the moment when the $i$-th candidate on a run is discovered, i.e., a run is split into $\\rho=\\path b_i\\rho'$ so that $K_i=K(\\path b_i)\\neq K(\\path)$.\nIn other words, $b_i$ is the moment we start counting the occurrences for the strength, whereas the weak strength is already 1 there.\n\n\\begin{lemma}\\label{lem:ith-cand}\n\tFor every $i,k\\in\\mathbb N$, we have $$\\mathbb P[\\skcand k(K_i) \\mid K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]\\leq (1-\\mathsf{p}_{\\mathsf{min}})^k\\,.$$ \n\n\\end{lemma}\n\\begin{proof}\n\t\\begin{align*}\n\t&\\mathbb P[\\skcand k(K_i) \\mid K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]\\\\[0.2cm]\n\t&= \\frac{\\mathbb P[\\skcand k(K_i), K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]}{\\mathbb P[K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]}\\\\\n\t&= \\frac1{\\mathbb P[K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]} \\sum_{\\substack{C\\in\\mathsf{SCC}\\setminus\\mathsf{BSCC}\\\\s\\in C}}\\mathbb P[\\skcand k(C),K_i=C,b_i=s,K_\\infty\\models\\varphi]\\\\\n\t&= \\frac1{\\mathbb P[K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]} \\sum_{\\substack{C\\in\\mathsf{SCC}\\setminus\\mathsf{BSCC}\\\\s\\in C}}\\mathbb P[K_i=C,b_i=s]\\cdot\\mathbb P_s[\\kcand k (C),K_\\infty\\models\\varphi] \\\\\n\t&= \\frac1{\\mathbb P[K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]} \\sum_{\\substack{C\\in\\mathsf{SCC}\\setminus\\mathsf{BSCC}\\\\s\\in C}}\\mathbb P[K_i=C,b_i=s]\\cdot\\mathbb P_s[\\kcand k (C)\\mid K_\\infty\\models\\varphi]\\cdot\\mathbb P_s[K_\\infty\\models\\varphi] \\\\\n\t&\\leq \\frac{(1-\\mathsf{p}_{\\mathsf{min}})^k }{\\mathbb P[K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]} \\sum_{\\substack{C\\in\\mathsf{SCC}\\setminus\\mathsf{BSCC}\\\\s\\in C}}\\mathbb P[K_i=C,b_i=s]\\cdot\\mathbb P_s[K_\\infty\\models\\varphi]\\tag{by Lemma~\\ref{lem:one-cand}}\\\\\n\t&\\leq \\frac{(1-\\mathsf{p}_{\\mathsf{min}})^k }{\\mathbb P[K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]} \\sum_{\\substack{C\\in\\mathsf{SCC}\\setminus\\mathsf{BSCC}\\\\s\\in C}}\\mathbb P[K_i=C,b_i=s,K_\\infty\\models\\varphi]\\\\\t\n\t&= (1-\\mathsf{p}_{\\mathsf{min}})^k\n\t\\end{align*}\n\twith the last equality due to $$K_i\\notin\\mathsf{BSCC} \\cap K_\\infty\\models\\varphi=\\biguplus_{\\substack{C\\in\\mathsf{SCC}\\setminus\\mathsf{BSCC}\\\\s\\in C}}K_i=C,b_i=s,K_\\infty\\models\\varphi$$\n\\end{proof}\n\n\n\n\n\n\nThe set $\\mathcal E\\mathit{rr}$ of the next lemma is actually exactly the set considered in Lemma~\\ref{lem:bold}, but in a more convenient notation for the computation.\n\n\\begin{lemma}\\label{lem:more-cand}\n\tFor $(k_i)_{i=1}^\\infty\\in\\mathbb N^\\mathbb N$, let \n\t$\\mathcal E\\mathit{rr}$ be the set of runs such that for some $i\\in\\mathbb N$, we have $\\skcand {k_i}(K_i)$ despite $K_i\\not\\models\\varphi$ and $K_{\\infty}\\models\\varphi$.\n\tThen $$\\displaystyle\\mathbb P[\\mathcal E\\mathit{rr}]\\leq\\mathsf{p}_{\\varphi}\\cdot\\sum_{i=1}^\\infty(1-\\mathsf{p}_{\\mathsf{min}})^{k_i}\\,.$$\n\\end{lemma}\n\\begin{proof}\n\tImmediate from Lemma~\\ref{lem:ith-cand} by a union bound over all $i$, noting that on runs satisfying $\\varphi$ a bad candidate is a.s.\\ not a BSCC, and that $\\mathbb P[K_i\\notin\\mathsf{BSCC},K_\\infty\\models\\varphi]\\leq\\mathsf{p}_{\\varphi}$.\n\\end{proof}\n\nLemma~\\ref{lem:bold} follows by instantiating $k_i:=\\alpha(i-\\log\\varepsilon)$: since $\\alpha\\geq\\alpha_0$ implies $(1-\\mathsf{p}_{\\mathsf{min}})^{\\alpha}\\leq 1\\/2$, we get $\\mathbb P[\\mathcal E\\mathit{rr}]\\leq\\mathsf{p}_{\\varphi}\\sum_{i=1}^\\infty 2^{-(i-\\log\\varepsilon)}=\\varepsilon\\,\\mathsf{p}_{\\varphi}\\sum_{i=1}^\\infty 2^{-i}\\leq\\varepsilon\\,\\mathsf{p}_{\\varphi}$.\n\n\\subsection{Proof of Theorem~\\ref{thm:performance-known-pmin}}\nFor every run index $j$, candidate index $i$ and strength $k$, let $I_{ji,k}$ denote the number of steps that the $j$-th run takes to increase the strength of its $i$-th candidate from $k$ to $k+1$ (and $0$ if it does not). Note that $I_{ji,k} = 0$ if $i > n$ or $k > \\alpha(i- \\log\\varepsilon)$. Indeed, if the Markov chain has $n$ states, \nthen along the run there are at most $n$ candidates; moreover, the strength of the $K_i$ stays strictly below\n$\\alpha(i-\\log\\varepsilon)$, because otherwise the run is aborted. 
So we have\n\n\\begin{equation}\n\\label{eq:T}\nT = \\sum_{j=1}^\\infty T_j = \\sum_{j=1}^\\infty T_{j}^\\bot + \\sum_{j=1}^\\infty T_{j}^C = \\sum_{j=1}^\\infty T_{j}^\\bot + \n\\sum_{j=1}^\\infty \\sum_{i=1}^n \\sum_{k=1}^{\\alpha(i- \\log\\varepsilon)} I_{ji,k}\n\\end{equation}\n\n\\noindent and so, by linearity of expectations,\n\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{equation}\n\\label{eq:Tbound}\n\\begin{aligned}\n\\mathbb E(T) & = \\mathbb E\\left( \\sum_{j=1}^{\\infty} \\left( T_{j}^\\bot + \\sum_{i=1}^n \\sum_{k=1}^{\\alpha(i- \\log\\varepsilon)} I_{ji,k} \\right) \\right) \\\\\n & = \\mathbb E\\left( \\sum_{j=1}^{\\infty} T_{j}^\\bot \\right) + \n \\sum_{i=1}^n \\sum_{k=1}^{\\alpha(i- \\log\\varepsilon)} \\sum_{j=1}^{\\infty} \\mathbb E\\left( I_{ji,k} \\right) \n\\end{aligned}\n\\end{equation}\nLet us bound the first summand. Since $K(\\pi)= \\bot$ only holds when the \nlast state of $\\path$ is visited for the first time, we have $T_j^\\bot \\leq n$. Moreover, $T_j^\\bot = 0$ for every $j \\geq R$, the number of resets.\nSo we get\n\\begin{equation}\n\\label{eq:Tbound2}\n\\mathbb E\\left( \\sum_{j=1}^{\\infty} T_{j}^\\bot \\right) \\leq \\mathbb E(n \\cdot R) = n \\cdot \\mathbb E(R) \n\\end{equation}\nConsider now the variables $I_{ji,k}$. If $j \\geq R$ then $I_{ji,k}=0$ by definition, since there is no $(j+1)$-th reset. Moreover, under the condition\n$j < R$ the variables $I_{ji,k}$ and $I_{(j+1)i,k}$ have the same expectation, because the runs after different resets are identically distributed. By Theorem \\ref{thm:bold}(a) $R$ is geometrically distributed\nwith parameter at least $\\mathsf{p}_{\\varphi}(1-\\varepsilon)$, and so we get\n\\begin{equation}\n\\label{eq:Tbound3}\n\\mathbb E(I_{(j+1)i,k}) \\le \\mathbb E(I_{ji,k}) \\cdot (1-\\mathsf{p}_{\\varphi}(1-\\varepsilon))\n\\end{equation}\nPlugging (\\ref{eq:Tbound2}) and (\\ref{eq:Tbound3}) into (\\ref{eq:Tbound}), and taking into account that $\\mathbb E(R) \\leq 1\/(\\mathsf{p}_{\\varphi} (1 - \\varepsilon))$, we obtain\n\n\\begin{equation}\n\\label{eq:Tbound4}\n\\begin{aligned} \n\\mathbb E(T) & \\leq \\mathbb E(n \\cdot R) + \\sum_{i=1}^n \\sum_{k=1}^{\\alpha(i- \\log\\varepsilon)} \\left(\\mathbb E(I_{0i,k}) \\sum_{j=0}^{\\infty} (1-\\mathsf{p}_{\\varphi}(1-\\varepsilon))^j \\right) \\\\\n & = n \\cdot \\mathbb E(R) + \\sum_{i=1}^n \\sum_{k=1}^{\\alpha(i- \\log\\varepsilon)} \\frac{\\mathbb E(I_{0i,k})}{\\mathsf{p}_{\\varphi}(1- \\varepsilon)} \\\\\n & \\leq \\frac{1}{\\mathsf{p}_{\\varphi}(1 - \\varepsilon)} \\left( n + \\sum_{i=1}^n \\sum_{k=1}^{\\alpha(i- \\log\\varepsilon)} \\mathbb E(I_{0i,k})\\right)\n\\end{aligned}\n\\end{equation}\nIf we can find an upper bound $I \\geq \\mathbb E(I_{0i,k})$ for every $i, k$, then we finally get:\n\n\\begin{equation}\n\\label{eq:Tbound5}\n\\begin{aligned} \n\\mathbb E(T) & \\leq \\frac{1}{\\mathsf{p}_{\\varphi}(1 - \\varepsilon)} \\cdot n \\cdot \\left( 1 + \\alpha (n- \\log\\varepsilon) \\cdot I \\right) \\\\\n& \\leq \\frac{1}{\\mathsf{p}_{\\varphi}(1 - \\varepsilon)} \\cdot 2 n \\alpha (n- \\log\\varepsilon) I\n\\end{aligned}\n\\end{equation}\n\nBefore estimating the bound $I$, let us consider the family of chains of Figure \\ref{fig:smallscc}, \nand the property ${\\ensuremath{\\mathbf{F}}} p$. In this case each candidate contains only one state, and its strength increases whenever the self-loop\non the state is traversed. So $\\mathbb E(I_{0i,k}) \\leq 1$ holds for every $i, k$, and so we can take $I := 1$.\n\nWe now compute a bound $I \\geq \\mathbb E(I_{0i,k})$ valid for arbitrary chains. 
Recall that $\\mathbb E(I_{0i,k})$ is the expected number of steps it takes to increase the \nstrength of the $i$-th candidate $K_i$ of the $0$-th run from $k$ to $k+1$. This is bounded by the number of steps it takes to visit every state of $K_i$ once. Let $\\mathsf{{\\scriptsize mxsc}} \\in O(n)$ be the maximal \nsize of an SCC. Given any two states $s, s'$ of an SCC, the probability of reaching $s'$ from $s$ after at most $\\mathsf{{\\scriptsize mxsc}}$ steps is at least $\\mathsf{p}_{\\mathsf{min}}^{\\mathsf{{\\scriptsize mxsc}}}$. \nSo the expected time it takes to visit every state of an SCC at least once is bounded by $\\mathsf{{\\scriptsize mxsc}} \\cdot \\mathsf{p}_{\\mathsf{min}}^{-\\mathsf{{\\scriptsize mxsc}}}$. So taking $I := \\mathsf{{\\scriptsize mxsc}} \\cdot \\mathsf{p}_{\\mathsf{min}}^{-\\mathsf{{\\scriptsize mxsc}}}$\nwe obtain the final result.\n\\end{proof}\n\n\\section{The bold monitor} \n\nWe proceed in two steps. In Section \\ref{subsec:chainwithpmin}, inspired by \\cite{DacaHKP17}, we design a bold controller that knows the minimum probability $\\mathsf{p}_{\\mathsf{min}}$ appearing in \n$\\mathcal{M}$ (more precisely, a lower bound on it). In Section \\ref{subsec:arbitrarychain} we modify this controller to produce another one that works correctly without any prior knowledge \nabout $\\mathcal{M}$, at the price of a performance penalty.\n\n\\subsection{Chains with known minimal probability}\n\\label{subsec:chainwithpmin}\nThe cautious controller aborts a run if the strength of the current candidate exceeds a fixed threshold that remains constant throughout the execution. \nIn contrast, the bold controller dynamically increases the threshold, depending on\nthe number of different candidates it has seen since the last reset. Intuitively, the controller becomes bolder over time,\nwhich prevents it from resetting too soon on the family of Figure \\ref{fig:smallscc}, independently of the length of the chain.\nThe controller is designed so that it resets almost all bad runs and only a fixed fraction~$\\varepsilon$ of the good runs.\nLemma~\\ref{lem:bold} below shows how to achieve this. We need some additional definitions.\n\n\\paragraph{Strength of a candidate and strength of a path.} Let $\\path$ be a path of $\\mathcal{M}\\otimes \\mathcal{A}$.\nThe \\emph{strength of $K(\\path)$} in $\\path$ is undefined if $K(\\path) = \\bot$. \nOtherwise, write $\\path = \\path' \\, s \\, \\kappa$, where $\\path'$ is the shortest prefix of $\\path$ such that $K(\\path' s) = K(\\path)$; the strength of $K(\\path)$ is the largest $k$ such that every state of $K(\\path)$ occurs at least $k$ times in $s \\, \\kappa$, and the last element of $s \\, \\kappa$ occurs at least $k+1$ times. Intuitively, if the strength is $k$ then every state of the candidate has been exited at least $k$ times but, for technical reasons, we start counting only after the candidate is discovered. 
The function $\\textsc{Str}(\\path)$ returns the strength of $K(\\path)$ if $K(\\path) \\neq \\bot$, and $0$ otherwise.\n\n\\begin{example}\nThe following table illustrates the definition of strength.\n$$\\begin{array}{l|l|l|l|l|c}\n\\path & K(\\path) & \\path' & s & \\kappa & \\textsc{Str}(\\path) \\\\ \\hline\np_0p_1 & \\bot & - & - & - & 0 \\\\\np_0p_1p_1 & \\{p_1\\} & p_0p_1 & p_1 & \\epsilon & 0 \\\\\np_0p_1p_1p_1 & \\{p_1\\} & p_0p_1 & p_1 & p_1 & 1 \\\\\np_0p_1p_1p_1p_0 & \\{p_0,p_1\\} & p_0p_1p_1p_1 & p_0 & \\epsilon & 0 \\\\\np_0p_1p_1p_1p_0p_1& \\{p_0,p_1\\} & p_0p_1p_1p_1 & p_0 & p_1 & 0 \\\\\np_0p_1p_1p_1p_0p_1p_0 & \\{p_0,p_1\\} & p_0p_1p_1p_1 & p_0 & p_1 p_0 & 1 \\\\\np_0p_1p_1p_1p_0p_1p_0p_0 p_1& \\{p_0,p_1\\} & p_0p_1p_1p_1 & p_0 & p_1 p_0 p_0 p_1& 2 \n\\end{array}$$\n\\end{example}\n\n\\paragraph{Sequence of candidates of a run.} Let $\\rho=s_0s_1\\cdots$ be a run of $\\Mc \\otimes \\dra$. Consider the sequence of random variables defined by $K(s_0\\ldots s_j)$ for $j\\geq 0$, and let $(K_i)_{i\\geq 1}$ be the subsequence without undefined elements and with no repetition of consecutive elements.\nFor example, for $\\varrho=p_0p_1p_1p_1p_0p_1p_2p_2\\cdots$, we have $K_1=\\{p_1\\}$, $K_2=\\{p_0,p_1\\}$, $K_3=\\{p_2\\}$, etc.\nGiven a run $\\rho$ with a sequence of candidates $K_1, K_2, \\ldots, K_k$, we call $K_k$ the final candidate. We define the \\emph{strength} of $K_i$ in $\\rho$ as the supremum of the strengths of $K_i$ in all prefixes $\\path$ of $\\rho$ such that $K(\\path) = K_i$. %\nFor technical convenience, we define $K_\\ell:=K_k$ for all $\\ell>k$ and $K_\\infty:=K_k$. Observe that $\\rho$ satisfies $\\varphi$ if{}f its final candidate is good. \n\\begin{lemma} \\label{lem:strength-increases}\nW.p.1 the final candidate of a run $\\rho$ is a BSCC of $\\Mc \\otimes \\dra$. Moreover, for every $k$ there exists a prefix $\\path_k$ of $\\rho$ such that\n$K(\\pi_k)$ is the final candidate and $\\textsc{Str}(\\pi_k) \\geq k$.\n\\end{lemma}\n\\begin{proof}\nFollows immediately from the definitions, and the fact that w.p.1 the runs of a finite-state Markov chain eventually get trapped in a BSCC and visit\nevery state of it infinitely often.\n\\end{proof}\n\n\n\n\n\\paragraph{The bold monitor.} We now present the bold monitor for chains with minimal probability\n$\\mathsf{p}_{\\mathsf{min}}$, shown in Algorithm \\ref{alg:boldalpha}. For every $\\rho$ and $i \\geq 1$, we define two random variables:\n\\begin{itemize}\n\\item $\\textsc{Str}_i(\\rho)$ is the strength of $K_i(\\rho)$ in $\\rho$;\n\\item $\\textsc{Bad}_i(\\rho)$ is \\textbf{true} if $K_i(\\rho)$ is a bad candidate, and \\textbf{false} otherwise. \n\\end{itemize}\nLet $\\alpha_0 := \\max\\{1, - 1\\/\\log(1-\\mathsf{p}_{\\mathsf{min}})\\}$. Lemma~\\ref{lem:bold} below states that, for every $\\alpha \\ge \\alpha_0$ and $\\varepsilon > 0$, the runs that satisfy $\\varphi$ and in which some bad candidate, say $K_i$, reaches a strength of at least $\\alpha(i-\\log \\varepsilon)$, have probability at most $\\varepsilon \\mathsf{p}_{\\varphi}$. 
This leads to the following strategy for the controller: when the controller is considering the $i$-th candidate, abort only if the strength reaches $\\alpha(i-\\log\\varepsilon)$.\n\n\\begin{lemma}\n\\label{lem:bold}\nLet $\\mathcal{M}$ be a finite-state Markov chain with minimum probability $\\mathsf{p}_{\\mathsf{min}}$, and let\n$\\varphi$ be an LTL formula with positive probability $\\mathsf{p}_{\\varphi}$.\nFor every $\\alpha \\ge \\alpha_0$ and $\\varepsilon > 0$: $$\\mathbb P \\left[ \\; \\big\\{ \\rho \\mid \\rho \\models \\varphi \\wedge \\exists i \\geq 1 \\, . \\, \n\\textsc{Bad}_i(\\rho) \\wedge \\textsc{Str}_i(\\rho) \\geq \\alpha(i-\\log\\varepsilon) \\big\\} \\; \\right] \\leq \\varepsilon \\mathsf{p}_{\\varphi}$$\n\\end{lemma}\n\\begin{proof}\n\tThe proof, inspired by \\cite{DacaHKP17}, is quite technical and can be found in Appendix~\\ref{app:L2}.\n\tThe main technical difficulty, compared to \\cite{DacaHKP17}, is the omnipresent conditioning on the property $\\varphi$ being satisfied.\n\tThis also allows for strengthening the bound by a factor of the probability of satisfying $\\varphi$.\n\\end{proof}\n\n\nThe monitor is parametric in $\\alpha$ and~$\\varepsilon$. The variable $C$ stores the current candidate, and is used to detect when the candidate changes. The variable $i$ maintains the index of the current candidate, i.e., in every reachable configuration of the algorithm, if $C \\neq \\bot$ then $C = K_i$. \n\n\\begin{algorithm}[ht]\n \\caption{\\textsc{BoldMonitor}$_{\\alpha,\\epsilon}$}\n \\label{alg:boldalpha}\n\\begin{algorithmic}[1]\n\\While {\\textbf{true}}\n\\State $\\path \\gets \\lambda$ \\Comment{Initialize path}\n\\State $C \\gets \\bot$, $i \\gets 0$ \\Comment{Initialize candidate and candidate counter}\n\\Repeat\n\\State $\\path \\gets \\path \\,.\\, \\mathsf{NextState}(\\path)$ \\Comment{Extend path}\n\\If {$\\bot \\neq K(\\path) \\neq C$} \n\\State $C \\gets K(\\path)$; $i \\gets i+1$ \\Comment{Update candidate and candidate counter} \n\\EndIf \n\\Until {$\\textsc{Bad}(\\path)$ \\textbf{and} $\\textsc{Str}(\\path) \\geq \\alpha(i-\\log\\varepsilon)$}\n\\EndWhile\n\\end{algorithmic}\n\\end{algorithm}\n\nThe infinite-state Markov chain $\\mathcal{B}$ of the bold monitor is defined as the chain $\\mathcal{C}$ for the\ncautious monitor; we just replace the condition that $\\pi$ is bad (and thus has strength at least 1) by the condition that $K(\\path)$ is bad and has strength at least $\\alpha(i - \\log \\varepsilon)$. The random variable $R$ and the event $S_\\varphi$ are also defined as for \\textsc{CautiousMonitor}.\n\n\\begin{theorem}\n\\label{thm:bold}\nLet $\\mathcal{M}$ be a finite-state Markov chain with minimum probability $\\mathsf{p}_{\\mathsf{min}}$, and let\n$\\varphi$ be an LTL formula with probability $\\mathsf{p}_{\\varphi} > 0$ in $\\mathcal{M}$.\nLet $\\mathcal{B}$ be the Markov chain, defined as above, corresponding to the execution of \\textsc{BoldMonitor}$_{\\alpha,\\epsilon}$ on $\\Mc \\otimes \\dra$, where $\\alpha \\ge \\alpha_0$ and $\\varepsilon > 0$. We have:\n\\begin{itemize}\n\t\\item[(a)] The random variable $R$ is geometrically distributed, with parameter (success probability) at least $\\mathsf{p}_{\\varphi}(1-\\varepsilon)$. 
Hence, we have $\\mathbb P_\\mathcal{B}[R < \\infty] = 1$ and $\\mathbb E_\\mathcal{B}(R) \\leq\n\t 1\\/(\\mathsf{p}_{\\varphi}(1-\\varepsilon))$ for every $\\varepsilon >0$.\n\t\\item[(b)] $\\mathbb P_\\mathcal{B}[S_\\varphi | R < \\infty] = \\mathbb P_\\mathcal{B}[S_\\varphi]=1$.\n\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\n (a) By Lemma~\\ref{lem:strength-increases}, almost all bad runs are reset.\n By Lemma~\\ref{lem:bold}, runs, conditioned on being good, are reset with probability at most~$\\varepsilon$.\n It follows that the probability that a run is good and not reset is at least $\\mathsf{p}_{\\varphi}(1-\\varepsilon)$.\n\t(b)~In runs satisfying $R < \\infty$, the suffix after the last reset almost surely reaches a BSCC of $\\mathcal{M}\\otimes \\mathcal{A}$ and visits all its states infinitely often, increasing the strength of the last candidate beyond any bound. So runs satisfying $R < \\infty$ belong to $S_\\varphi$ with probability 1. \t\n\\end{proof}\n\n\n\\paragraph{Performance.} Recall that $T$ is the random variable that assigns to a run the number of steps until the last reset.\nLet $T_j$ be the number of steps between the $j$-th and $(j+1)$-th reset. Observe that all the $T_j$ are identically distributed.\nWe have $T_j = T_{j}^\\bot + T_{j}^C$, where $T_j^\\bot$ and $T_{j}^C$ are the number of prefixes $\\path$ such that $K(\\path) = \\bot$\n(no current candidate) and $K(\\path) \\neq \\bot$ (a candidate), respectively.\nBy deriving bounds on $\\mathbb E(T_j^\\bot)$ and $\\mathbb E(T_j^C)$, we obtain:\n\n\\begin{theorem} \\label{thm:performance-known-pmin}\nLet $\\mathcal{M}$ be a finite-state Markov chain with $n$ states, minimum probability $\\mathsf{p}_{\\mathsf{min}}$, and maximal SCC size $\\mathsf{{\\scriptsize mxsc}}$. Let\n$\\varphi$ be an LTL formula with probability $\\mathsf{p}_{\\varphi} > 0$ in $\\mathcal{M}$.\nLet $\\alpha \\ge \\alpha_0$ and $\\varepsilon > 0$.\nLet $T$ be the number of steps taken by \\textsc{BoldMonitor}$_{\\alpha,\\epsilon}$ until the \nlast reset (or $\\infty$ if there is no last reset). We have:\n\\begin{equation}\n\\label{eq:Tbound6}\n\\begin{aligned} \n\\mathbb E(T) & \\leq \\frac{1}{\\mathsf{p}_{\\varphi}(1 - \\varepsilon)} \\cdot 2 n \\alpha (n- \\log\\varepsilon) \\mathsf{{\\scriptsize mxsc}} \\left( \\frac{1}{\\mathsf{p}_{\\mathsf{min}}}\\right)^{\\mathsf{{\\scriptsize mxsc}}}\n\\end{aligned}\n\\end{equation}\n\\end{theorem}\n\n\\noindent Here we observe the main difference with \\textsc{CautiousMonitor}: instead of the exponential dependence on $n$ of Theorem \\ref{thm:bold}, we only have an exponential dependence on $\\mathsf{{\\scriptsize mxsc}}$. So for chains satisfying \n$\\mathsf{{\\scriptsize mxsc}} \\ll n$ the bold controller performs much better than the cautious one. \n\n\n\n\\subsection{General chains}\n\\label{subsec:arbitrarychain}\n\nWe adapt \\textsc{BoldMonitor} so that it works for arbitrary finite-state Markov chains, at the price of a performance penalty.\nThe main idea is very simple: given any non-decreasing sequence \n$\\{\\alpha_n\\}_{n=1}^\\infty$ of natural numbers such that $\\alpha_1 = 1$ and $\\lim_{n\\rightarrow \\infty} \\alpha_n = \\infty$, we sample as in \n\\textsc{BoldMonitor}$_{\\alpha,\\epsilon}$ but, instead of using the same value $\\alpha$ for every sample, we use $\\alpha_j$ for the $j$-th sample\n(see Algorithm~\\ref{alg:bold}). 
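\nIn code, the outer loop is simply the following (a minimal sketch with $\\textsc{Sample}$ abstracted away; the names are ours):\n\\begin{verbatim}\nimport itertools\n\ndef bold_monitor(sample, alphas):\n    # sample(alpha): one round of BoldMonitor; it returns (= resets)\n    # only if some bad candidate i reaches strength alpha*(i - log2(eps)),\n    # and otherwise runs forever on a good run\n    for j in itertools.count(1):\n        sample(alphas(j))  # j-th round uses the increased factor alphas(j)\n\\end{verbatim}\nwith, e.g., \\texttt{alphas = lambda j: j}, the choice analysed below.\n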
The intuition is that $\alpha_j \geq \alpha_0$ holds from some index $j_0$ onwards, and so, by the previous analysis, after the $j_0$-th reset the monitor a.s.\ only executes a finite number of resets.
Let $\textsc{Sample}(\alpha)$ be the body of the while loop of \textsc{BoldMonitor}$_{\alpha,\varepsilon}$ for a given value of $\alpha$.
\begin{algorithm}[t]
 \caption{\textsc{BoldMonitor}$_\varepsilon$ for $\{\alpha_n\}_{n=1}^\infty$}
 \label{alg:bold}
\begin{algorithmic}[1]
\State $j \gets 0$
\While {\textbf{true}}
\State $j \gets j+1$
\State \textsc{Sample}$(\alpha_j)$
\EndWhile
\end{algorithmic}
\end{algorithm}
More formally, the correctness follows from the following two properties.
\begin{itemize}
\item For every $j \geq 1$, if \textsc{Sample}$(\alpha_j)$ does not terminate then it almost surely executes a good run.\\
Indeed, if \textsc{Sample}$(\alpha_j)$ does not terminate then it a.s.\ reaches a BSCC of $\Mc \otimes \dra$ and visits all its states infinitely often. So from some moment on $K(\path)$ is and remains equal to this BSCC, and $\textsc{Str}(\path)$ grows beyond any bound. Since \textsc{Sample}$(\alpha_j)$ does not terminate, the BSCC is good, and it executes a good run.
\item If $\alpha_j \ge \alpha_0$ then the probability that \textsc{Sample}$(\alpha_j)$ does not terminate is at least $\mathsf{p}_{\varphi}(1-\varepsilon)$. \\
Indeed, for $\alpha_j = \alpha_0$ this follows from Lemma~\ref{lem:bold} (cf.\ Theorem~\ref{thm:bold}(a)). Increasing $\alpha$ strengthens the exit condition of the until loop. So the probability that the loop terminates is lower, and the probability of non-termination higher.
\end{itemize}
These two observations immediately lead to the following proposition:
\begin{proposition}
\label{prop:bold2}
Let $\mathcal{M}$ be an arbitrary finite-state Markov chain, and let
$\varphi$ be an LTL formula such that $\mathsf{p}_{\varphi} := \mathbb P[\mathcal{M} \models \varphi] > 0$.
Let $\mathcal{B}$ be the Markov chain corresponding to the execution of \textsc{BoldMonitor}$_\varepsilon$ on $\Mc \otimes \dra$ with sequence $\{\alpha_n\}_{n=1}^\infty$.
Let $\mathsf{p}_{\mathsf{min}}$ be the minimum probability of the transitions of $\mathcal{M}$ (which is unknown to \textsc{BoldMonitor}$_\varepsilon$). We have
\begin{itemize}
\item[(a)] $\mathbb P_\mathcal{B}[R < \infty] = 1$.
\item[(b)] $\mathbb P_\mathcal{B}[S_\varphi | R < \infty] = \mathbb P_\mathcal{B}[S_\varphi]=1$.
\item[(c)] $\mathbb E(R) \leq j_{\text{min}} + 1/(\mathsf{p}_{\varphi}(1 - \varepsilon))$, where $j_{\text{min}}$ is the smallest index $j$ such that
$\alpha_j \geq \alpha_0$.
\end{itemize}
\end{proposition}

\paragraph{Performance.} Different choices of the sequence $\{\alpha_n\}_{n=1}^\infty$ lead to versions of \textsc{BoldMonitor}$_\varepsilon$ with different
performance features. Intuitively, if the sequence grows very fast, then $j_{\text{min}}$ is very small, and the expected number of resets $\mathbb E(R)$
is only marginally larger than the number for the case in which the monitor knows $\mathsf{p}_{\mathsf{min}}$.
However, in this case
the last $1/(\mathsf{p}_{\varphi}(1 - \varepsilon))$ aborted runs (in expectation) are performed for very large values $\alpha_j$, and so they take many steps.
If the sequence grows slowly, then the opposite happens; there are more resets, but aborted runs have shorter length. Let us analyze two extreme cases:
$\alpha_j := 2^j$ and $\alpha_j := j$.

Denote by $f(\alpha)$ the probability that a run is reset, i.e., the probability that a call \textsc{Sample}$(\alpha)$ terminates.
Let further $g(\alpha)$ denote the expected number of steps done in \textsc{Sample}$(\alpha)$ of a run that is reset (taking the number of steps as $0$ if the run is not reset).
According to the analysis underlying Theorem~\ref{thm:performance-known-pmin}, for $\alpha \ge \alpha_0$ we have $g(\alpha) \le c \alpha$ with $c := 2 n (n- \log\varepsilon) \mathsf{{\scriptsize mxsc}}\,\mathsf{p}_{\mathsf{min}}^{-\mathsf{{\scriptsize mxsc}}}$.
We can write $T = T_1 + T_2 + \cdots$, where $T_j = 0$ when either the $j$-th run or a previous run is not aborted, and otherwise $T_j$ is the number of steps of the $j$-th run.
For $j \le j_{\text{min}}$ we obtain $\mathbb E(T_j) \le g(\alpha_{j_\text{min}})$ and hence we have:
\begin{align*}
\mathbb E(T) \ &=\ \sum_{j=1}^\infty \mathbb E(T_j) \ \le\ j_{\text{min}} g(\alpha_{j_{\text{min}}}) + \sum_{i=0}^\infty f(\alpha_{j_{\text{min}}})^i g(\alpha_{j_{\text{min}}+i})
\end{align*}
By Theorem~\ref{thm:bold}(a) we have $f(\alpha_{j_{\text{min}}}) \le 1 - \mathsf{p}_{\varphi}(1-\varepsilon)$.
It follows that choosing $\alpha_j := 2^j$ does not in general lead to a finite bound on $\mathbb E(T)$: the terms $g(\alpha_{j_{\text{min}}+i})$ then grow like $2^i$, which can outpace the geometric decay of $f(\alpha_{j_{\text{min}}})^i$.
Choosing instead $\alpha_j := j$, we get
\begin{align*}
\mathbb E(T) \ &\le \ c j_{\text{min}}^2 + \sum_{i=0}^\infty (1 - \mathsf{p}_{\varphi}(1-\varepsilon))^i c (j_{\text{min}}+i) \\
&\le \ \left( j_{\text{min}}^2 + \frac{j_{\text{min}}}{\mathsf{p}_{\varphi}(1-\varepsilon)} + \frac{1}{(\mathsf{p}_{\varphi}(1-\varepsilon))^2}\right) c\,,
\end{align*}
where $j_\text{min}$ can be bounded by $j_\text{min} \le - 1/\log(1-\mathsf{p}_{\mathsf{min}}) + 1 \le 1/\mathsf{p}_{\mathsf{min}}$.
So with $c = 2 n (n- \log\varepsilon) \mathsf{{\scriptsize mxsc}}\,\mathsf{p}_{\mathsf{min}}^{-\mathsf{{\scriptsize mxsc}}}$ we arrive at
\begin{equation}
 \mathbb E(T) \ \le \ \left( \frac{1}{\mathsf{p}_{\mathsf{min}}^2} + \frac{1}{\mathsf{p}_{\mathsf{min}} \mathsf{p}_{\varphi}(1-\varepsilon)} + \frac{1}{(\mathsf{p}_{\varphi}(1-\varepsilon))^2}\right) 2 n (n- \log\varepsilon) \mathsf{{\scriptsize mxsc}}\,\mathsf{p}_{\mathsf{min}}^{-\mathsf{{\scriptsize mxsc}}}\,,
\end{equation}
a bound broadly similar to the one from Theorem~\ref{thm:performance-known-pmin}, but with the monitor not needing to know $\mathsf{p}_{\mathsf{min}}$.
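The trade-off between the two schedules can also be seen numerically. The following Python sketch (all parameter values are arbitrary illustration choices) evaluates the truncated tail sum $\sum_{i\ge 0} q^i\, g(\alpha_{j_{\text{min}}+i})$ with $g(\alpha) = c\alpha$ and $q = 1 - \mathsf{p}_{\varphi}(1-\varepsilon)$:

\begin{verbatim}
# Truncated tails sum_{i>=0} q**i * g(alpha_{j_min + i}) with
# g(alpha) = c*alpha and q = 1 - p_phi*(1 - eps); the values of
# p_phi, eps, c and j_min below are arbitrary illustration choices.
p_phi, eps, c, j_min = 0.2, 0.5, 1.0, 4
q = 1 - p_phi * (1 - eps)

def tail(schedule, terms=200):
    return sum(q**i * c * schedule(j_min + i) for i in range(terms))

print(tail(lambda j: j))        # linear schedule: the series converges
print(tail(lambda j: 2.0**j))   # doubling schedule: terms grow like (2q)**i,
                                # so the series diverges whenever 2q >= 1
\end{verbatim}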
\section{The cautious monitor} \label{sec:cautious}

All our monitors assume the existence of a deterministic Rabin automaton $\mathcal{A} = (Q, {2^{Ap}}, \gamma, q_o, Acc)$ for $\varphi$. They monitor the path $\path$ of the chain $\mathcal{M}\otimes \mathcal{A}$ corresponding to the path of $\mathcal{M}$ executed so far. In order to present the cautious monitor we need some definitions and notation.

\paragraph{Candidate of a path.}
Given a finite or infinite path $\rho=s_0 s_1\cdots$ of $\mathcal{M}\otimes \mathcal{A}$, the \emph{support} of $\rho$ is the set $\support{\rho}=\{s_0,s_1,\ldots\}$. The \emph{graph of $\rho$} is $G_\rho = (\support{\rho}, E_\rho)$, where $E_\rho=\{(s_i,s_{i+1})\mid i=0,1,\ldots\}$.

Let $\path$ be a path of $\mathcal{M}\otimes \mathcal{A}$. If
$\path$ has a suffix $\kappa$ such that $G_\kappa$ is a BSCC of $G_\path$, we call $\support{\kappa}$ the \emph{candidate of $\path$}. Given a path $\path$, we define $K(\path)$ as follows: if $\path$ has a candidate $\support{\kappa}$, then $K(\path) := \support{\kappa}$; otherwise, $K(\path) := \bot$, meaning that $K(\path)$ is undefined.

\begin{example}
Consider the family of Markov chains of Figure \ref{fig:largescc}. We have e.g. $K(s_0) = K(s_0s_1) = K(s_0s_0s_1) = \bot$, $K(s_0 s_0) = \{s_0\}$, and $K(s_0s_1s_0s_1) = \{s_0,s_1\}$. In the family of Figure \ref{fig:smallscc} we have e.g. $K(s_0s_1s_1)=\{s_1\}$, $K(s_0s_1s_1s_2)= \bot $, and $K(s_0s_1s_1s_2s_2)= \{s_2\}$.
\end{example}

\begin{figure}
	\centering
	\scalebox{0.8}{
		\begin{tikzpicture}
		\node[state,initial,initial text=] (s0) at (0,1){$s_0$};
		\node[state] (s1) at (2,1){$s_1$};
		\node[state] (s2) at (4,1){$s_2$};
		\node (dots) at (5,1){\large $\cdots$};
		\node[state] (s3) at (6,1){$s_{n-1}$};
		\node[state] (s4) at (8,1){$s_n$};
		\node[state] (sg) at (10,2){$s_{\text{good}}$};
		\node[state] (sb) at (10,0){$s_{\text{bad}}$};
		\path[->]
  (s0) edge[loop above] node[above=-1pt]{$1/2$} ()
		(s0) edge node[below]{$1/2$} (s1)
		(s1) edge node[below]{$1/2$} (s2)
		(s3) edge node[below]{$1/2$} (s4)
		%
		(s1) edge[bend right=25] node[above=-2pt]{$1/2$} (s0)
		(s2) edge[bend right=50] node[above=-2pt]{$1/2$} (s0)
		(s3) edge[bend right=60] node[above=-2pt]{$1/2$} (s0)
		(sg) edge[loop above] node[above=-2pt]{$1$} ()
		(sb) edge[loop above] node[above=-2pt]{$1$} ()
		(s4) edge node[above]{$1/2$} (sg)
		(s4) edge node[below]{$1/2$} (sb);
		\end{tikzpicture}}
	\caption{A family of Markov chains with one large SCC}
	\label{fig:largescc}
\end{figure}

\begin{figure}[ht]
	\centering
	\scalebox{0.8}{
		\begin{tikzpicture}
		\node[state,initial,initial text=] (s0) at (0,1){$s_0$};
		\node[state] (s1) at (2,1){$s_1$};
		\node[state] (s2) at (4,1){$s_2$};
		\node (dots) at (5,1){\large $\cdots$};
		\node[state] (s3) at (6,1){$s_{n-1}$};
		\node[state] (sg) at (8,1){$s_{\text{good}}$};
		\path[->]
		(s0) edge node[below]{$1/2$} (s1)
		(s1) edge node[below]{$1/2$} (s2)
		(s3) edge node[below]{$1/2$} (sg)
		%
		(s0) edge[loop above] node[above=-2pt]{$1/2$} ()
		(s1) edge[loop above] node[above=-2pt]{$1/2$} ()
		(s2) edge[loop above] node[above=-2pt]{$1/2$} ()
		(s3) edge[loop above] node[above=-2pt]{$1/2$} ()
		(sg) edge[loop above] node[above=-2pt]{$1$} ();
		\end{tikzpicture}}
	\caption{A family of Markov chains with small SCCs}
	\label{fig:smallscc}
\end{figure}
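These example values can be checked mechanically; reusing the naive \texttt{candidate()} helper from the Python sketch given for the bold monitor (again our own illustration), one obtains:

\begin{verbatim}
for path in (["s0"], ["s0", "s1"], ["s0", "s0", "s1"],
             ["s0", "s0"], ["s0", "s1", "s0", "s1"]):
    print(path, candidate(path))
# -> None, None, None, frozenset({'s0'}), frozenset({'s0', 's1'})
\end{verbatim}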
\paragraph{Good and bad candidates.} A candidate $K$ is \emph{good} if there exists a Rabin pair $(E,F) \in Acc$ such that $K \cap (S\times E) = \emptyset$ and $K \cap (S\times F) \neq \emptyset$. Otherwise, $K$ is \emph{bad}.
A path $\path$ of $\mathcal{M}\otimes \mathcal{A}$ is \emph{bad} if $K(\path)\neq \bot$ and $K(\path)$ is a bad candidate.
The function $\textsc{Bad}(\path)$ returns \textbf{true} if $\path$ is bad, and \textbf{false} otherwise.


\begin{proposition}
\label{prop:suspect}
\begin{itemize}
\item[(a)] Bad runs of $\Mc \otimes \dra$ almost surely have a bad finite prefix.
\item[(b)] If the good runs of $\Mc \otimes \dra$ have nonzero probability, then the set of runs without bad prefixes also has nonzero probability.
\end{itemize}
\end{proposition}
\begin{proof}
(a) By standard properties of Markov chains, bad runs of $\Mc \otimes \dra$ almost surely reach a BSCC of $\Mc \otimes \dra$ and then traverse all edges of that BSCC infinitely often.
Therefore, a bad run~$\rho$ almost surely has a finite prefix~$\path$ that has reached a bad BSCC, say $B$, of~$\Mc \otimes \dra$ and has traversed all edges of~$B$ at least once.
Then $K(\path) = B$, and so $\path$ is bad.

\medskip \noindent (b) Suppose the good runs of $\Mc \otimes \dra$ have nonzero probability.
We construct a finite path~$\path$ starting at $s_0$ so that $K(\path')$ is good for all extensions $\path'$ of~$\path$, and $K(\path'')$ is good or undefined for all prefixes $\path''$ of~$\path$.

Since the good runs of $\Mc \otimes \dra$ have nonzero probability, $\Mc \otimes \dra$ has a good BSCC $B$; let $(E,F)$ be a Rabin pair witnessing that $B$ is good.
Let $\path_1'$ be a simple path from~$s_0$ to a state $s_1 \in B \cap (S \times F)$.
Extend~$\path_1'$ by a shortest path back to $\support{\path_1'}$ (forming a lasso) and denote the resulting path by~$\path_1$.
Observe that $K(\path_1) \subseteq B$ is good, and $K(\path') = \bot$ holds for all proper prefixes $\path'$ of~$\path_1$.
If $K(\path_1) = B$, then we can choose $\path := \path_1$ and $\path$ has the required properties.
Otherwise, let $\path_2'$ be a shortest path extending~$\path_1$ such that $\path_2'$ leads to a state in $B \setminus K(\path_1)$.
Extend that path by a shortest path back to $K(\path_1)$ and denote the resulting path by~$\path_2$.
Then we have $K(\path_1) \subsetneq K(\path_2) \subseteq B$, and $K(\path_2)$ is good, and $K(\path') \in \{K(\path_1), \bot\}$ holds for all paths~$\path'$ that extend~$\path_1$ and are proper prefixes of~$\path_2$.
Repeat this process until a path $\path$ is found with $K(\path) = B$.
This path has the required properties.
\end{proof}

\paragraph{The cautious monitor.} The cautious monitor is shown in Algorithm \ref{alg:cautious}.
The algorithm samples a run of $\Mc \otimes \dra$ step by step, and resets whenever the current path $\path$ is bad.
\begin{algorithm}[ht]
 \caption{\textsc{CautiousMonitor}}
 \label{alg:cautious}
\begin{algorithmic}[1]
\While {\textbf{true}}
\State $\path \gets \lambda$ \Comment{Initialize path}
\Repeat
\State $\path \gets \path \,.\, \mathsf{NextState}(\path)$ \Comment{Extend path}
\Until {$\textsc{Bad}(\path)$}
\EndWhile
\end{algorithmic}
\end{algorithm}
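In the same Python rendering as before (an illustration only), one round of \textsc{CautiousMonitor} is simply the sampling loop with the strength test dropped:

\begin{verbatim}
def cautious_sample(step, s0, is_bad, rng):
    # One round of CautiousMonitor: extend the path until it becomes bad,
    # i.e. until its candidate exists and is a bad candidate, then reset.
    path = [s0]
    while True:
        path.append(step(path[-1], rng))
        K = candidate(path)   # naive helper from the BoldMonitor sketch
        if K is not None and is_bad(K):
            return path
\end{verbatim}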
We formalize its correctness with respect to the specification given in the introduction. Consider the infinite-state Markov chain $\mathcal{C}$ (for cautious) defined as follows.
The states of $\mathcal{C}$ are pairs $\tup{\path, r}$, where $\path$ is a path of $\Mc \otimes \dra$, and $r\geq 0$. Intuitively, $r$ counts the number of resets so far. The initial probability distribution assigns probability $1$ to $\tup{\lambda,0}$, and $0$ to all others. The transition probability matrix $\mathbf{P}_\mathcal{C}(\tup{\path, r}, \tup{\path',r'})$ is defined as follows.
\begin{itemize}
\item If $\path$ is bad, then
$$
\mathbf{P}_\mathcal{C}(\tup{\path, r}, \tup{\path',r'}) =
\begin{cases}
1 & \mbox{if $\path' = \lambda$ and $r' = r+1$} \\
0 & \mbox{otherwise}
\end{cases}
$$
\noindent We call such a transition a \emph{reset}.
\item If $\path$ is not bad, then
$$
\mathbf{P}_\mathcal{C}(\tup{\path, r}, \tup{\path',r'}) =
\begin{cases}
\mathbf{P}(s, s') & \mbox{if $r'=r$, $\path = \path'' \,.\, s$ and $\path' = \path \,.\, s'$} \\
0 & \mbox{otherwise.}
\end{cases}
$$
\end{itemize}
A run of \textsc{CautiousMonitor} corresponds to a run $\rho = \tup{\path_1, r_1} \tup{\path_2, r_2} \cdots$ of~$\mathcal{C}$. Let $R$ be the random variable that assigns to $\rho$ the supremum of $r_1, r_2, \ldots$. Further, let $S_\varphi$ be the set of runs $\rho$ such that $R(\rho)<\infty$ and the suffix of $\rho$ starting immediately after the last reset satisfies $\varphi$.
The following theorem states that \textsc{CautiousMonitor} is correct with respect to the specification described in the introduction. The proof is an immediate consequence of Proposition \ref{prop:suspect}.

\begin{theorem}
Let $\varphi$ be an LTL formula such that $\mathbb P[\mathcal{M} \models \varphi] > 0$.
Let $\mathcal{C}$ be the Markov chain defined as above. We have
\begin{itemize}
\item[(a)] $\mathbb P_\mathcal{C}[R < \infty] = 1$.
\item[(b)] $\mathbb P_\mathcal{C}[S_\varphi | R < \infty] = \mathbb P_\mathcal{C}[S_\varphi]=1$.
\end{itemize}
\end{theorem}


\paragraph{Performance.} Let $T$ be the random variable that assigns to $\rho$ the number of steps till the last reset, or $\infty$ if the number of resets is infinite. First of all, we observe that without any assumption on the system, $\mathbb E(T)$ can grow exponentially in the number of states of the chain. Indeed, consider the family of Markov chains of Figure \ref{fig:largescc} and the property ${\ensuremath{\mathbf{F}}} p$. Assume the only state satisfying $p$ is $s_{\text{good}}$. Then the product of each chain in the family with the DRA for ${\ensuremath{\mathbf{F}}} p$ is essentially the same chain, and the good runs are those reaching $s_{\text{good}}$. We show that even if the controller has full knowledge of the chain, $\mathbb E(T)$ grows exponentially. Indeed, since doing a reset brings the chain to $s_0$, it is clearly useless to abort a run that has not yet reached $s_n$. In fact, the optimal monitor is the one that resets whenever the run reaches $s_\text{bad}$. The average number of resets for this controller is clearly 1, and so $\mathbb E(T)$ is the expected number of steps needed to reach $s_\text{bad}$, under the assumption that it is indeed reached. It follows that $\mathbb E(T) \ge 2^n$. We record this observation as a fact.

\begin{fact}
Let ${\cal M}_n$ be the Markov chain of Figure \ref{fig:largescc} with $n$ states. Given a monitor ${\cal N}$ for the property ${\ensuremath{\mathbf{F}}} p$, let $T_{\cal N}$ be the random variable that assigns to a run of the monitor on ${\cal M}_n$ the number of steps till the last reset, or $\infty$ if the number of resets is infinite.
Then $\mathbb E(T_{\cal N}) \geq 2^n$ for every monitor ${\cal N}$.
\end{fact}

We learn from this example that all monitors have problems when the time needed to traverse a non-bottom SCC of the chain can be very large. So we conduct a parametric analysis in the maximal size $\mathsf{{\scriptsize mxsc}}$ of the SCCs of the chain. This reveals the weak point of \textsc{CautiousMonitor}: $\mathbb E(T)$ remains exponential even for families satisfying $\mathsf{{\scriptsize mxsc}}=1$. Consider the family of Figure \ref{fig:smallscc}.
\textsc{CautiousMonitor} resets whenever it takes any of the self-loops in the states $s_i$. Indeed, after taking a self-loop in a state, say $s_i$, the current path $\path$ ends in $s_i s_i$, and so we have $K(\path) = \{s_i\}$, which is a bad candidate.
So after the last reset the chain must follow the path $s_0 s_1 s_2 \cdots s_{\text{good}}$. Since this path has probability $1/2^n$, we get $\mathbb E(T) \geq 2^n$.

In the next section we introduce a ``bold'' monitor. Intuitively, instead of resetting at the first suspicion that the current path may not yield a good run, the bold monitor ``perseveres''.



\section{Conclusions}
We have shown that monitoring of arbitrary $\omega$-regular properties is possible for finite-state Markov chains, even if the monitor has no information \emph{at all} about the chain, its probabilities, or its structure. More precisely, we have exhibited
monitors that ``force'' the chain to execute runs satisfying a given property $\varphi$ (with probability 1). The monitors reset the chain whenever the current run is suspected of not satisfying $\varphi$. They work even if $\varphi$ is a liveness property without any ``good prefix'', i.e., a prefix after which any extension satisfies $\varphi$.

Unsurprisingly, the worst-case behaviour of the monitor, measured as the number of steps until the last reset,
is bad when the probability of the runs satisfying $\varphi$ or the minimal probability of the transitions of the chain are very small, or when the strongly connected components of the chain are very large. We have given performance estimates that quantify the relative weight of each of these parameters. The design of dedicated monitors that exploit information on these parameters is an interesting topic for future research.




\section{Experimental results}

In this section, we illustrate the behaviour of our monitors on concrete models from the PRISM Benchmark Suite \cite{DBLP:conf/qest/KwiatkowskaNP12} and compare the obtained theoretical bounds to the experimental values.
To this end, we have re-used the code provided in \cite{DacaHKP17}, which in turn is based on the PRISM model checker \cite{prism}.
Whenever the obtained candidate is actually an accepting BSCC, we have a guarantee that no restart will ever happen and we terminate the experiment.

Table~\ref{tab:exper} shows the results on several models.
For the bluetooth benchmark, the optimal expected number of restarts is $1/0.2=5$, and with $\varepsilon=0.5$ it should be smaller than 10.
We see that while the bold monitor required $6.6$ restarts on average, the cautious one indeed required a bit more.
For Herman's stabilization protocol, almost all sufficiently long runs have a good candidate.
In this model, we have not even encountered any bad candidate on the way.
This can be easily explained since only 38 states out of the half million are outside of the single BSCC.
They are spread over 9 non-bottom SCCs and some states are transient; however, no runs got stuck in any of the tiny SCCs.
A similar situation occurs for the case of gridworld, where on average every tenth run is non-satisfying.
However, in our five repetitions (each with a single satisfying run), we have not encountered any non-satisfying run.
Finally, we could not determine the satisfaction probability in crowds since PRISM times out on this model with more than two million states.
However, one can still see the bold monitor requiring slightly fewer resets than the cautious one, predicting $\mathsf{p}_{\varphi}$ to be in the middle of the $[0,1]$-range.
It is also worth mentioning that the large size of the system, which prevents a rigorous numerical analysis, did not prevent our monitors from determining satisfaction on single runs.

\begin{table}[t]
	\caption{Experimental comparison of the monitors, showing the average number of restarts and the average length of a restarted run. The average is taken over five runs of the algorithm and $\varepsilon=0.5$.}
	\label{tab:exper}
\begin{tabular}{llr@{\hskip 2mm}lr@{\hskip 4mm}lrr}
Model &Property &States&$\mathsf{p}_{\mathsf{min}}$&$\mathsf{p}_{\varphi}$&Monitor&Avg. $R$&Avg. $\frac T R$\\\hline
bluetooth&{\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}&143,291&$0.008$&0.20&Cautious &9.0 & 4578\\
&&&&&Bold&6.6&3758\\
herman&{\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}}&524,288&$1.9\cdot 10^{-6}$&1&Cautious&0&-\\
&&&&&Bold&0&-\\
gridworld&${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}}\to{\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}}$& 309,327&0.001& 0.91& Cautious & 0 & -\\
&&&&&Bold&0&-\\
crowds& {\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}}&2,464,168& 0.066&?&Cautious& 0.8 & 63\\
&&&&&Bold&0.6&90\\
\end{tabular}
\end{table}
\subsection{Implementing the bold monitor}
A straightforward implementation of the bold monitor in which the candidate $K(\path)$ and its strength are computed anew
each time the path is extended is very inefficient. We present a far more efficient algorithm that continuously maintains the candidate of the current
path and its strength. The algorithm runs in $O(n \log n)$ amortized time for a path $\path$ of length $n$, and uses $O(s_n \log s_n)$ space,
where $s_n$ denotes the number of states visited by $\path$ (which can be much smaller than $n$ when states are visited multiple times).

Let $\path$ be a path of $\mathcal{M}\otimes \mathcal{A}$, and let $s \in \path$.
(Observe that $s$ now denotes a state of $\mathcal{M}\otimes \mathcal{A}$, not of $\mathcal{M}$.) We let $G_\path = (V_\path, E_\path)$
denote the subgraph of $\mathcal{M}\otimes \mathcal{A}$ where $V_\path$ and $E_\path$ are the sets of states and edges visited by $\path$, respectively.
Intuitively, $G_\path$ is the fragment of $\mathcal{M}\otimes \mathcal{A}$ explored by the path $\path$. We
introduce some definitions.
\begin{itemize}
\item The \emph{discovery index} of a state $s$, denoted $d_\path(s)$, is the number of states that appear in the prefix of $\path$ ending
with the first occurrence
of $s$.
Intuitively, $d_\path(s) = k$ if $s$ is the $k$-th state discovered by $\path$. Since different states have different discovery indices,
and the discovery index of a state does not change when the path is extended, we also call $d_\path(s)$ the \emph{identifier} of $s$.
\item A \emph{root} of $G_\path$ is a state $r \in V_\path$ such that $d_\path(r) \leq d_\path(s)$ for every state $s \in \mathsf{SCC}_\path(r)$,
where $\mathsf{SCC}_\path(r)$ denotes the SCC of $G_\path$ containing $r$. Intuitively, $r$ is the first state of $\mathsf{SCC}_\path(r)$ visited by $\path$.
\item The \emph{root sequence} $R_\path$ of $\path$ is the sequence of roots of $G_\path$, ordered by ascending discovery index.
\item Let $R_\path = r_1 \, r_2 \cdots r_m$. We define the sequence $S_\path = S_\path(r_1) \, S_\path(r_2) \cdots S_\path(r_m)$ of sets, where

$$S_\path(r_i) := \{s \in V_\path \mid d_\path(r_i) \leq d_\path(s) < d_\path(r_{i+1}) \}$$
\noindent for every $1 \leq i < m$, i.e., $S_\path(r_i)$ is the set of states discovered after $r_i$ (including $r_i$) and before $r_{i+1}$ (excluding $r_{i+1}$); and
$$S_\path(r_m) := \{s \in V_\path \mid d_\path(r_m) \leq d_\path(s)\} \ . $$


\item $\mathit{Birthday}_\path$ is defined as $\bot$ if $K(\path) = \bot$, and as the length of the shortest prefix $\path'$ of $\path$ such that $K(\path') = K(\path)$ otherwise. Intuitively, $\mathit{Birthday}_\path$ is the time at which the current candidate of $\path$ was created.


\item For every state $s$ of $\path$, let $\path_s$ be the longest prefix of $\path$ ending at $s$.
We define $\mathit{Visits}_\path(s)$ as the pair $(\mathit{Birthday}_{\path_s}, v)$, where $v$ is $0$ if $\mathit{Birthday}_{\path_s}=\bot$, and $v$ is the number of times $\path_s$ has visited $s$ since $\mathit{Birthday}_{\path_s}$ otherwise. We define a total order on these pairs: $(b, v) \preceq (b',v')$ if{}f $b > b'$ (where $\bot > n$ for every number $n$), or $b=b'$ and $v \leq v'$. Observe that, if $\path$ has a candidate, then the smallest pair w.r.t.\ $\preceq$ corresponds to the state that, among the states visited since the creation of the candidate, has been visited the least number of times.
\end{itemize}

\noindent The following lemma is an immediate consequence of the definitions.
\begin{lemma}
Let $G_\path = (V_\path, E_\path)$. The SCCs of $G_\path$ are the sets of $S_\path$. Further, let
$(b,v)= \min \{ \mathit{Visits}_\path(s) \mid s \in V_\path \}$, where the minimum is taken with respect to $\preceq$. We have $\textsc{Str}(\path) = v$.
\end{lemma}

By the lemma, in order to efficiently implement \textsc{BoldMonitor} it suffices to maintain $R_\path$, $S_\path$, and
a mapping $\mathit{Visits}_\path$ that assigns $\mathit{Visits}_\path(s)$ to each state $s$ of $\path$.
More precisely, assume that \textsc{BoldMonitor} has computed so far a path $\path$ leading to a state $s$, and now it extends $\path$
to $\path' = \path \cdot s'$ by traversing a transition $s \rightarrow s'$ of $\mathcal{M}\otimes \mathcal{A}$;
it suffices to compute $R_{\path'}$, $S_{\path'}$ and $\mathit{Visits}_{\path'}$
from $R_\path$, $S_\path$, and $\mathit{Visits}_{\path}$ in
$O(\log n)$ amortized time, where $n$ is the length of $\path$. We first show how to update
$R_\path$, $S_\path$, and $\textsc{Str}(\path)$, and then we describe data structures to maintain them in $O(n \log n)$ amortized time.
We consider three cases:
\begin{itemize}
\item $s' \notin V_\path$. That is, the monitor discovers the state $s'$ by traversing $s \rightarrow s'$. Then the SCCs of $G_{\path'}$ are the
SCCs of $G_\path$, plus a new trivial SCC containing only $s'$, with $s'$ as root. So $R_{\path'} = R_\path \cdot s'$ and $S_{\path'} = S_\path \cdot \{s'\}$. Since $s'$ has just been discovered, $\path'$ has no candidate, and so $\mathit{Visits}_{\path'}(s') = (\bot, 0)$.
\item $s' \in V_\path$, and $d_\path(s) \leq d_\path(s')$. That is, the monitor had already discovered $s'$, and it had discovered it after $s$.
Then $G_{\path'} = (V_\path, E_\path \cup \{(s, s')\})$, but the SCCs of $G_\path$ and $G_{\path'}$ coincide, and so
$R_{\path'} = R_\path$, $S_{\path'} = S_\path$, and $\textsc{Str}(\path') = \min \{ \#_{\path'}(s'), \textsc{Str}(\path)\}$, where $\#_{\path'}(s')$ denotes the number of visits to $s'$ recorded in $\mathit{Visits}_{\path'}(s')$.
\item $s' \in V_\path$, and $d_\path(s) > d_\path(s')$. That is, the monitor discovered $s'$ before $s$. Let $R_\path = r_1 \, r_2 \cdots r_m$ and
let $r_i$ be the root of $\mathsf{SCC}_\path(s')$. Then $G_{\path'}$ has a path
$$r_i \trans{*} r_{i+1} \trans{*} \cdots \trans{*} r_{m-1} \trans{*} r_m \trans{*} s \trans{} s' \trans{*} r_i \ .$$
So we have
\begin{align*}
R_{\path'} & = r_1 \, r_2 \cdots r_i \\
S_{\path'} & = S_\path(r_1) \cdots S_\path(r_{i-1}) \, \left(\bigcup_{j = i}^m S_{\path}(r_j) \right)
\end{align*}
Moreover, $K(\path') = S_{\path'}(r_i)$, because we have discovered a new candidate.
Since the strength of a just discovered candidate is $0$ by definition, we set $\textsc{Str}(\path')=0$.
\end{itemize}

In order to efficiently update $R_\path$, $S_\path$, and $\mathit{Visits}_\path$ we represent them using the following data structures.
\begin{itemize}
\item The number $N$ of different states visited so far.
\item A hash map $D$ that assigns to each state $s$ discovered by $\path$ its discovery index. When
$s$ is visited for the first time, $D(s)$ is set to $N+1$ (and $N$ is incremented); subsequent lookups return this value.
\item A structure $R$ containing the identifiers of the roots of $R_\path$, and supporting the following operations in
amortized $O(\log n)$ time: $\textsc{insert}(r)$, which inserts the identifier of $r$ in $R$; $\textsc{extract-max}$, which returns the largest identifier
in $R$; and $\textsc{find}(s)$, which returns the largest identifier of $R$ smaller than or equal to the identifier of $s$. (This is the identifier of the root of
the SCC containing $s$.) For example, this is achieved by implementing $R$ both as a search tree and a heap.
\item For each root $r$ a structure $S(r)$ that assigns to each state $s \in S(r)$ the value $\mathit{Visits}_\path(s)$,
and supports the following operations in amortized $O(\log n)$ time: $\textsc{find-min}$, which returns the minimum value of the states of $S(r)$; $\textsc{increment-key}(s)$, which increases the value of $s$ by 1; and $\textsc{merge}$, which returns the union of two given maps.
For example, this is achieved by implementing $S(r)$ as a Fibonacci heap.
\end{itemize}

When the algorithm explores an edge $s \rightarrow s'$ of the Markov chain $\mathcal{M}\otimes \mathcal{A}$, it
updates these data structures as follows.
The algorithm first computes $D(s')$, and then proceeds according to the three cases above:
\begin{itemize}
\item[(1)] If $s'$ had not been visited before (i.e., $D(s') = N+1$), then the algorithm sets $N := N+1$, inserts $D(s')$ in $R$, and creates a new Fibonacci heap $S(s')$ containing only the state $s'$ with key 1.
\item[(2)] If $s'$ had been visited before (i.e., $D(s') \leq N$), and $D(s) \leq D(s')$, then the algorithm executes
$\textsc{find}(s')$ to find the root $r$ of the SCC containing $s'$, and then increments the key of $s'$ in $S(r)$ by 1.
\item[(3)] If $s'$ had been visited before (i.e., $D(s') \leq N$), and $D(s) > D(s')$, then the algorithm
executes the following pseudocode, where $\sigma$ is an auxiliary Fibonacci heap:
\begin{center}
\begin{minipage}{5cm}
\begin{algorithmic}[1]
\State $\sigma \gets \emptyset$
\Repeat
\State $r \gets \textsc{extract-max}(R)$
\State $\sigma \gets \textsc{merge}(\sigma, S(r))$
\Until{$D(r) \leq D(s')$}
\State $S(r) \gets \sigma$
\State $\textsc{insert}(r, R)$
\end{algorithmic}
\end{minipage}
\end{center}
\end{itemize}
At every moment in time the current candidate is the set $S(r)$, where $r$ is the maximal identifier in $R$, and its strength can be obtained from $\textsc{find-min}(S(r))$.

Let us now examine the amortized runtime of the implementation. Let $n_1, n_2, n_3$ be the numbers of steps executed by the algorithm corresponding to the cases (1), (2), and (3) above. In cases (1) and (2), the algorithm executes a constant
number of heap operations per step, and so it takes $O((n_1 + n_2) \log n)$ amortized time for all steps together.
This is no longer so for case (3) steps. For example, if the Markov chain is a big elementary circuit $s_0 \trans{} s_1 \trans{} \cdots \trans{} s_{n-1} \trans{} s_0$, then at each step but the last one we insert one state into the heap, and at the last step we extract them all; that is, the last step takes $O(n)$ heap operations. However, observe that each state is inserted in a heap exactly once, when it is discovered, and extracted at most once. So the algorithm executes at most $n$
 \textsc{extract-max} and \textsc{merge} heap operations for all case (3) steps together, and all of them together take $O(n \log n)$ amortized time. This gives an overall amortized runtime of $O(n \log n)$.
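The update logic can be rendered compactly in Python. The sketch below (our own illustration) follows cases (1)--(3), but replaces the Fibonacci heaps by plain dictionaries, so a merge costs time linear in its size instead of the amortized bounds above, and it elides the birthday component of $\mathit{Visits}_\path$, tracking raw visit counts only.

\begin{verbatim}
import bisect

class CandidateTracker:
    # Maintains N, D, the root stack and the per-root maps along a path,
    # following cases (1)-(3); dicts stand in for the Fibonacci heaps.
    def __init__(self):
        self.N = 0          # number of distinct states discovered
        self.D = {}         # hash map: state -> discovery index
        self.roots = []     # ascending stack of root identifiers
        self.S = {}         # root identifier -> {state: visit count}
        self.prev = None    # current end of the path

    def step(self, s2):
        s1, self.prev = self.prev, s2
        if s2 not in self.D:                   # case (1): new state
            self.N += 1
            self.D[s2] = self.N
            self.roots.append(self.N)
            self.S[self.N] = {s2: 1}
        elif self.D[s1] <= self.D[s2]:         # case (2): SCCs unchanged
            r = self._find(self.D[s2])
            self.S[r][s2] = self.S[r].get(s2, 0) + 1
        else:                                  # case (3): roots collapse
            sigma = {}
            while True:
                r = self.roots.pop()
                sigma.update(self.S.pop(r))
                if r <= self.D[s2]:
                    break
            self.roots.append(r)
            sigma[s2] = sigma.get(s2, 0) + 1
            self.S[r] = sigma

    def _find(self, ident):
        # largest root identifier <= ident (the find() of the structure R)
        return self.roots[bisect.bisect_right(self.roots, ident) - 1]

    def last_scc(self):
        # last SCC of G_path and the minimum visit count of its states
        r = self.roots[-1]
        return set(self.S[r]), min(self.S[r].values())

t = CandidateTracker()
for s in ["s0", "s1", "s0", "s1"]:
    t.step(s)
print(t.last_scc())   # ({'s0', 's1'}, 2): the candidate of the example;
                      # counts include pre-birthday visits in this sketch
\end{verbatim}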
\section{Introduction}
Runtime verification, also called runtime monitoring, is the problem of checking at runtime whether an execution of a system satisfies a given correctness property (see e.g. \cite{HavelundR04a,LeuckerS09,FalconeHR13,BartocciFFR18}). It can be used to automatically evaluate test runs, or to steer the application back to some safety region if a property is violated. Runtime verification of LTL or $\omega$-regular properties has been thoroughly studied \cite{BauerLS06,LeuckerS09,BauerLS11,BartocciBNR18}. It is conducted by automatically translating the property into a monitor that inspects the execution online in an incremental way, and (in the most basic setting) outputs ``yes'', ``no'', or ``unknown'' after each step.
A fundamental limitation of runtime verification is that, if the system is not known \textit{a priori}, then many properties, like for example ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} p$ or ${\ensuremath{\mathbf{F}}}{\ensuremath{\mathbf{G}}} p$, are not monitorable.
Loosely speaking, since every finite execution can be extended to a run satisfying ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} p$ and to another run satisfying its negation, monitors can only continuously answer ``unknown'' (see \cite{BartocciBNR18} for a more detailed discussion). Several approaches to this problem have been presented, which modify the semantics of LTL in different ways to refine the prediction and palliate the problem \cite{PnueliZ06,BauerLS07,BauerLS10,MorgensternGS12,ZhangLD12,BartocciBNR18}, but the problem is of a fundamental nature.

Runtime monitoring of stochastic systems modeled as Hidden Markov Chains (HMM) has been studied by Sistla \textit{et al.\ } in a number of papers \cite{SistlaS08,GondiPS09,SistlaZF11}. Given a HMM $H$ and an $\omega$-regular language $L$,
these works construct a monitor that (a) rejects executions of $H$ not in $L$ w.p.1, and (b) accepts executions of $H$ in $L$ with positive probability (this is called a \emph{strong monitor} in \cite{SistlaS08}). Observe, however, that the monitor knows $H$ in advance. The case where $H$ is not known in advance is also considered in \cite{SistlaS08}, but in this case strong monitors only exist for languages recognizable by deterministic, possibly infinite-state, B\"{u}chi automata. Indeed, it is easy to see that, for example, a monitor that has no information about the HMM cannot be strong for a property like ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} p$.

Summarizing, the work of Sistla \textit{et al.\ } seems to indicate that one must either know the HMM in advance, or give up monitorability of liveness properties. In this paper we leverage a technique introduced in \cite{DacaHKP17} to show that there is a third way: assume that, instead of only being able to observe the output of a state, as in the case of HMMs, we can observe the state itself. In particular, we can observe that the current state is the same one we visited at some earlier point. We show that this allows us to design monitors for all $\omega$-regular properties that work without any knowledge of the system in the following simple setting. We have a finite-state Markov chain, but we have no information on its size, probabilities, or structure; we can only execute it. We are also given an arbitrary $\omega$-regular property $\varphi$, and the purpose of monitoring is to abort runs of the system that are ``unlikely'' to satisfy the specification until the system executes a correct run. Let us make this informal idea more precise. The semantics of the system is a finite-state Markov chain about which, as said above, we have no information beyond the ability to execute it. We know that the runs satisfying $\varphi$ have nonzero probability, but the probability is also unknown. We are allowed to monitor runs of the system and record the sequence of states it visits; further, we are allowed to abort the current run at any moment in time, and reset the system back to its initial state. The challenge is to design a controller for the reset action that satisfies the following property w.p.1: the number of resets is finite, and the run of the system after the last reset satisfies $\varphi$.
Intuitively, the controller must abort the right number of executions: if it aborts too many, then it may reset infinitely often with positive probability; if it aborts too few, the run after the last reset might violate $\varphi$.
For a safety property like ${\ensuremath{\mathbf{G}}} p$ the controller can just abort whenever the current state does not satisfy $p$; indeed, since ${\ensuremath{\mathbf{G}}} p$ has positive probability by assumption, eventually the chain executes a run satisfying ${\ensuremath{\mathbf{G}}} p$ a.s., and this run is not aborted. Similarly, for a co-safety property like ${\ensuremath{\mathbf{F}}} p$, the controller can abort the first execution after one step, the second after two steps, etc., until a state satisfying $p$ is reached. Since ${\ensuremath{\mathbf{F}}} p$ has positive probability by assumption, at least one reachable state satisfies $p$, and with this strategy the system will almost surely visit it.
But for ${\ensuremath{\mathbf{G}}}{\ensuremath{\mathbf{F}}} p$ the problem is already more challenging. Unlike the cases of ${\ensuremath{\mathbf{G}}} p$ and ${\ensuremath{\mathbf{F}}} p$, the controller can never be sure that every extension of the current execution will satisfy the property or will violate it.

In our first result we show that, perhaps surprisingly, notions introduced in \cite{DacaHKP17} can be used to show that the problem has a very simple solution. Let $\mathcal{M}$ be the (unknown) Markov chain of the system, and let $\mathcal{A}$ be a deterministic Rabin automaton for $\varphi$. Say that a run of the product chain $\mathcal{M} \otimes \mathcal{A}$ is \emph{good} if it satisfies $\varphi$, and \emph{bad} otherwise.
We define a set of \emph{suspect} finite executions satisfying two properties:
\begin{itemize}
\item[(a)] bad runs a.s. have a suspect prefix; and
\item[(b)] if the set of good runs has nonzero probability, then the set of runs without suspect prefixes also has nonzero probability.
\end{itemize}
The controller resets whenever the current execution is suspect. We call it the \emph{cautious controller}. By property (a) the cautious controller aborts bad runs w.p.1, and by property (b) w.p.1 the system eventually executes a run without suspect prefixes, which by (a) is necessarily good.

The performance of a controller is naturally measured in terms of two parameters: the expected number of resets $R$, and the expected number $S$ of steps to a reset (conditioned on the occurrence of the reset). While the cautious controller is very simple, it has poor performance: in the worst case, both parameters are exponential in the number of states of the chain. A simple analysis shows that, without further information on the chain, the exponential dependence of $S$ is unavoidable. However, the exponential dependence of $R$ can be avoided: using a technique of \cite{DacaHKP17}, we define a \emph{bold controller} for which the expected number of resets is almost optimal.

\paragraph{Related work.} Sistla \textit{et al.\ } have also studied the power of finite-state probabilistic monitors for the analysis of non-probabilistic systems, and characterized the monitorable properties \cite{ChadhaSV09}. This is also connected to work by Baier \textit{et al.\ } \cite{BaierGB12}.
There is also a lot of work on the design of monitors whose purpose is not to abort runs that violate a property, say $\varphi$, but to gain information about the probability of the runs that satisfy $\varphi$. This is often called statistical model checking, and we refer the reader to \cite{LegayDB10} for an overview.

\paragraph{Appendix.} Some proofs have been moved to an Appendix available at \\
\url{https://www7.in.tum.de/~esparza/tacas2021-134.pdf}


\section{Preliminaries}

\paragraph{Directed graphs.} A directed graph is a pair $G=(V, E)$, where $V$ is the set of vertices and $E \subseteq V\times V$ is the set of edges. A path (infinite path) of $G$ is a finite (infinite) sequence $\path = v_0 v_1 \ldots$ of vertices such that $(v_i, v_{i+1}) \in E$ for every $i=0,1,\ldots$. We denote the empty path by $\lambda$ and the concatenation of paths $\path_1$ and $\path_2$ by $\path_1\,.\,\path_2$. A graph $G$ is strongly connected if for every two vertices $v, v'$ there is a path leading from $v$ to $v'$. A graph $G'=(V',E')$ is a subgraph of $G$, denoted $G' \preceq G$, if $V' \subseteq V$ and $E' \subseteq E \cap V' \times V'$; we write $G' \prec G$ if $G' \preceq G$ and $G'\neq G$. A graph $G' \preceq G$ is a strongly connected component (SCC) of $G$ if it is strongly connected and no graph $G''$ satisfying $G' \prec G'' \preceq G$ is strongly connected. An SCC $G'=(V',E')$ of $G$ is a bottom SCC (BSCC) if $v \in V'$ and $(v, v') \in E$ imply $v' \in V'$.

\paragraph{Markov chains.} A \emph{Markov chain (MC)} is a tuple $\mathcal{M} = (S, \mathbf{P}, \mu)$, where
\begin{itemize}
\item $S$ is a finite set of states,
\item $\mathbf{P} \;:\; S \times S \to [0,1]$ is the transition probability matrix, such that for every $s\in S$ it holds that $\sum_{s'\in S} \mathbf{P}(s,s') = 1$,
\item $\mu$ is a probability distribution over $S$.
\end{itemize}
The graph of $\mathcal{M}$ has $S$ as vertices and $\{ (s, s') \mid \mathbf{P}(s,s') > 0\}$ as edges. Abusing language,
we also use $\mathcal{M}$ to denote the graph of $\mathcal{M}$.
We let $\mathsf{p}_{\mathsf{min}}:=\min\{\mathbf{P}(s,s') \mid s,s'\in S,\ \mathbf{P}(s,s') > 0\}$ denote the smallest positive transition probability in $\mathcal{M}$.
A \emph{run} of $\mathcal{M}$ is an infinite path $\rho = s_0 s_1 \cdots$ of $\mathcal{M}$; we let $\rho[i]$ denote the state $s_i$.
Each path $\path$ in $\mathcal{M}$ determines the set of runs $\mathsf{Cone}(\path)$ consisting of all runs that start with $\path$.
To $\mathcal{M}$ we assign the probability space
$(\mathsf{Runs},\mathcal F,\mathbb P)$, where $\mathsf{Runs}$ is the set of all runs in $\mathcal{M}$, $\mathcal F$ is the $\sigma$-algebra generated by all $\mathsf{Cone}(\path)$,
and $\mathbb P$ is the unique probability measure such that
$\mathbb P[\mathsf{Cone}(s_0s_1\cdots s_k)] =
\mu(s_0)\cdot\prod_{i=1}^{k} \mathbf{P}(s_{i-1},s_i)$, where the empty product equals $1$.
The expected value of a random variable $f:\mathsf{Runs}\to\mathbb R$ is $\mathbb E[f]=\int_\mathsf{Runs} f\ d\,\mathbb P$.
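The cone measure can be computed directly from this definition; the following Python fragment (an illustration only, with an arbitrary two-state chain) does so:

\begin{verbatim}
def cone_probability(mu, P, path):
    # P[Cone(s0 s1 ... sk)] = mu(s0) * prod_i P(s_{i-1}, s_i);
    # the empty product equals 1.
    p = mu[path[0]]
    for s, t in zip(path, path[1:]):
        p *= P[s][t]
    return p

mu = {"s0": 1.0, "s1": 0.0}
P = {"s0": {"s0": 0.5, "s1": 0.5}, "s1": {"s0": 0.5, "s1": 0.5}}
print(cone_probability(mu, P, ["s0", "s1", "s0"]))   # 0.25
\end{verbatim}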
Given a finite set $Ap$ of atomic propositions,
a \emph{labelled Markov chain} (LMC) is a tuple $\mathcal{M} = (S, \mathbf{P}, \mu, Ap, L)$, where $(S,\mathbf{P}, \mu)$ is a MC and $L : S \to 2^{Ap}$ is a labelling function.
Given a labelled Markov chain $\mathcal{M}$ and an LTL formula $\varphi$, we are interested in the measure $\mathbb P[\mathcal{M} \models \varphi] := \mathbb P[\{\rho\in\mathsf{Runs} \mid L(\rho) \models \varphi\}],$
where $L$ is naturally extended to runs by $L(\rho)[i]=L(\rho[i])$ for all $i$.

\paragraph{Deterministic Rabin Automata.} For every $\omega$-regular property $\varphi$ there is a \emph{deterministic Rabin automaton} (DRA) $\mathcal{A} = (Q, {2^{Ap}}, \gamma, q_o, Acc)$ that accepts exactly the runs that satisfy $\varphi$~\cite{BK08}.
Here
$Q$ is a finite set of states, $\gamma : Q \times {2^{Ap}} \to Q$ is the transition function, $q_o \in Q$ is the initial state, and $Acc \subseteq 2^Q \times 2^Q$ is the acceptance condition.

\paragraph{Product Markov Chain.} The product of
a MC $\mathcal{M}$ and a DRA $\mathcal A$ is the Markov chain $\Mc \otimes \dra =(S \times Q, \mathbf{P}', \mu')$, where $\mathbf{P}'((s,q),(s',q')) = \mathbf{P}(s,s')$ if $q'=\gamma(q,L(s'))$ and $\mathbf{P}'((s,q),(s',q'))=0$ otherwise, and $\mu'(s,q) = \mu(s)$ if $\gamma(q_o, L(s))=q$ and $\mu'(s,q)=0$ otherwise.

An SCC $B$ of $\Mc \otimes \dra$ is \emph{good} if
there exists a Rabin pair $(E,F) \in Acc$ such that $B \cap (S\times E) = \emptyset$ and $B \cap (S\times F) \neq \emptyset$. Otherwise, the SCC is \emph{bad}. Observe that the runs of $\Mc \otimes \dra$ satisfying $\varphi$ almost surely reach a good BSCC, and the runs that do not satisfy $\varphi$ almost surely reach a bad BSCC.

\section{Basic equations}
\label{sec:1}

There are modifications of Einstein's gravity which, depending on the features of a given theory, survive in the non-relativistic limit derived from their fully relativistic equations.
That is, some of those proposals modify Newtonian gravity, which is commonly used to describe stellar objects such as the Sun and other stars of the Main Sequence. The same equations are also used to study the substellar family, starting with brown dwarf stars, through giant gaseous planets, down to objects more similar to the Earth. Therefore, there is a need to explore non-relativistic objects, not only for consistency in describing different astrophysical bodies and gravitational phenomena
with the \textit{same} theory of gravity\footnote{However, ``which one?'' is a question which many physicists try to answer.}, but also because this is an opportunity to understand the nature of the theory, since the density regimes of such objects are well understood. Moreover, since the data sets of the discussed stars and exoplanets, as well as the accuracy of the observations, are still growing, the objects described by non-relativistic equations can be used to constrain some of the gravitational proposals, as presented further in this chapter.

Before discussing the recent findings regarding non-relativistic objects in modified gravity, we will go through a suitable formalism needed to study low-mass stars and other objects living at the cold and dark edge of the Hertzsprung-Russell diagram (see Figure~\ref{hr} and the basic literature \cite{stellar,hansen,planets,planets2}).
As a working theory we will consider Palatini $f(\bar{R})$ gravity for the Starobinsky model
\begin{equation}
 f(\bar{R})=\bar{R}+\beta\bar{R}^2,
\end{equation}
where $\beta$ is the theory parameter; similar results to the ones presented here are expected in any theory of gravity which alters the Newtonian limit. To read more about Palatini gravity, see \cite{DeFelice:2010aj}; here we focus directly on the modified hydrostatic equilibrium equation, without deriving it \cite{aneta1,artur,gonzalo,aneta2,aneta3,maria,olek}. We will therefore consider a toy model of a star or planet, that is, a spherically symmetric low-mass object, without taking into account nonsphericity, magnetic fields, or time dependence, described by the modified hydrostatic equilibrium equation
\begin{equation}\label{pres}
 p'=-g\rho(1+\kappa c^2 \beta [r\rho'-3\rho]) \ ,
\end{equation}
where the prime denotes the derivative with respect to the radius coordinate $r$, $\kappa=-8\pi G/c^4$, and $G$ and $c$ are the Newtonian constant and the speed of light, respectively. The quantity $g$ is the surface gravity, approximated in the object's atmosphere as a constant value ($r_{atmosphere}\approx R$, where $R$ is the radius of the object):
\begin{equation}\label{surf}
 g\equiv\frac{G m(r)}{r^2}\sim\frac{GM}{R^2}=\textrm{constant},
\end{equation}
where $M=m(R)$. We will consider only the usual definition of the mass function (however, see the discussion in \cite{olek,olek2} on modified gravity issues)
\begin{equation}\label{masa}
 m'(r)=4\pi r^2\rho(r).
\end{equation}
Using (\ref{masa}) and (\ref{surf}), the equation (\ref{pres}) can be written as
\begin{equation}\label{hyd}
 p'=-g\rho\left( 1+8\beta\frac{g}{c^2 r} \right).
\end{equation}
One of the most important elements in the modelling of a star or planet is the heat transport through the object's interior and its atmosphere. A simple and common criterion determining which kind of energy transport takes place is the Schwarzschild criterion
 \cite{schw,schw2}:
\begin{eqnarray}
 \nabla_{rad}&\leq&\nabla_{ad}\;\;\textrm{pure diffusive radiative or conductive transport,}\\
\nabla_{rad}&>&\nabla_{ad}\;\;\textrm{adiabatic convection is present locally,}
\end{eqnarray}
where the gradient stands for the temperature $T$ variation with depth
\begin{equation}
 \nabla_{{rad}}:=\left(\frac{d \ln{T}}{d\ln{p}}\right)_{{rad}}.
\end{equation}
The radiative gradient turns out to be modified in Palatini gravity \cite{aneta2}:
\begin{equation}\label{grad}
 \nabla_{rad}=\frac{3\kappa_{rc}lp}{16\pi acG mT^4}\left(1+8\beta\frac{G m}{c^2 r^3}\right)^{-1},
\end{equation}
with $l$ being the local luminosity, $a=7.57\times 10^{-15}\frac{erg}{cm^3K^4}$ the radiation density constant, and $\kappa_{rc}$ the radiative and/or conductive opacity. The additional $\beta$-term, depending on the sign of the parameter, has a stabilizing or destabilizing effect.
On the other hand, the adiabatic gradient $\nabla_{ad}$ is a constant value in the particular cases considered here, as we will see further on.

Regarding the microscopic description of matter, an approximation which we will be using here is the polytropic equation of state (EoS)
\begin{equation}\label{pol}
 p=K\rho^{1+\frac{1}{n}}.
\end{equation}
It is good enough for our purposes, in particular taking into account the fact that $K$, since it depends on the composition of the fluid, carries information about the interactions between particles, the effects of electron degeneracy, phase transitions, etc.\ \cite{aud}. We will use at least three different polytropic EoS, depending on the physical situation. On the other hand, the value of the polytropic index $n$ is related to the class of astrophysical objects we study \cite{politropia}. The simplest case we will deal with is a fully convective object whose interior is modelled by a non-relativistic degenerate
electron gas, for which $n=3/2$ while $K$ is given by \cite{stellar}:
\begin{equation}\label{Ka}
 K=\frac{1}{20}\left(\frac{3}{\pi}\right)^\frac{2}{3}\frac{h^2}{m_e}\frac{1}{(\mu_e m_u)^\frac{5}{3}}.
\end{equation}
In the case of an analytic EoS it is always useful to write it in the polytropic form (\ref{pol}), since there exists a very convenient approach, called the Lane-Emden (LE) formalism, allowing one to rewrite all relevant equations in dimensionless form. It can be shown that for our particular model of gravity the equation (\ref{hyd}) transforms into the modified Lane-Emden equation \cite{aneta1}
\begin{equation}\label{LE}
 \frac{1}{\xi}\frac{d^2}{d\xi^2}\left[\sqrt{\Phi}\xi\left(\theta-\frac{2\alpha}{n+1}\theta^{n+1}\right)\right]=
 -\frac{(\Phi+\frac{1}{2}\xi\frac{d\Phi}{d\xi})^2}{\sqrt{\Phi}}\theta^n,
\end{equation}
where $\Phi=1+2\alpha \theta^n$ and $\alpha=\kappa c^2\beta\rho_c$. The dimensionless variables $\theta$ and $\xi$ are defined in the following way:
\begin{eqnarray}
 r=r_c\xi,\;\;\;\rho=\rho_c\theta^n,\;\;\;p=p_c\theta^{n+1},\;\;\;
 r^2_c=\frac{(n+1)p_c}{4\pi G\rho^2_c},
\end{eqnarray}
with $p_c$ and $\rho_c$ being the core values of the pressure and density, respectively. The equation (\ref{LE}) can be solved numerically, and its solution $\theta$ provides the star's mass, radius, central density, and temperature:
\begin{eqnarray}
 M&=&4\pi r_c^3\rho_c\omega_n,\;\;\;
 R=\gamma_n\left(\frac{K}{G}\right)^\frac{n}{3-n}M^\frac{n-1}{n-3},\\
 \rho_c&=&\delta_n\left(\frac{3M}{4\pi R^3}\right) \label{rho0s},\;\;\;
 T=\frac{K\mu}{k_B}\rho_c^\frac{1}{n}\theta_n,
\end{eqnarray}
where $k_B$ is Boltzmann's constant, $\mu$ the mean molecular weight, and $\xi_R$ the dimensionless radius for which $\theta(\xi_R)=0$.
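As a check of this machinery, the following Python/SciPy sketch (our own illustration; solver and tolerances are arbitrary choices) integrates the $\alpha \to 0$ limit of equation (\ref{LE}), i.e.\ the classical Lane-Emden equation $\theta'' + (2/\xi)\theta' + \theta^n = 0$, and evaluates $\xi_R$ together with the $\Phi=1$ limits of the constants defined in the next paragraph; for $\alpha \neq 0$ one would additionally have to track the $\Phi$-terms.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n = 1.5  # polytropic index of a fully convective object

def lane_emden(xi, y):
    # y = (theta, theta'); classical Lane-Emden equation (alpha -> 0)
    theta, dtheta = y
    return [dtheta, -max(theta, 0.0)**n - 2.0 * dtheta / xi]

def surface(xi, y):   # the first zero of theta marks the surface xi_R
    return y[0]
surface.terminal = True

# start slightly off-centre, using theta ~ 1 - xi^2/6 near the centre
xi0 = 1e-6
sol = solve_ivp(lane_emden, (xi0, 20.0),
                [1.0 - xi0**2 / 6.0, -xi0 / 3.0],
                events=surface, rtol=1e-10, atol=1e-12)

xi_R = sol.t_events[0][0]
dtheta_R = sol.y_events[0][0][1]
omega = -xi_R**2 * dtheta_R        # omega_n with Phi = 1
delta = -xi_R / (3.0 * dtheta_R)   # delta_n with Phi = 1
print(xi_R, omega, delta)          # ~3.654, ~2.714, ~5.991
\end{verbatim}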
In the case of the model of gravity used here, the constants (\ref{omega}) and (\ref{delta}) appearing in the above equations also include modifications \cite{artur}; this is not a common feature of modified gravity (see, for instance, the case of Horndeski gravity \cite{koyama} or of Eddington-inspired Born-Infeld gravity \cite{merce}):
\begin{eqnarray}
 \omega_n&=&-\frac{\xi^2\Phi^\frac{3}{2}}{1+\frac{1}{2}\xi\frac{\Phi_\xi}{\Phi}}\frac{d\theta}{d\xi}\mid_{\xi=\xi_R},\label{omega}\\
 \gamma_n&=&(4\pi)^\frac{1}{n-3}(n+1)^\frac{n}{3-n}\omega_n^\frac{n-1}{3-n}\xi_R,\label{gamma}\\
 \delta_n&=&-\frac{\xi_R}{3\frac{\Phi^{-\frac{1}{2}}}{1+\frac{1}{2}\xi\frac{\Phi_\xi}{\Phi}}\frac{d\theta}{d\xi}\mid_{\xi=\xi_R}}. \label{delta}
\end{eqnarray}
 Using the LE formalism we may rewrite (\ref{hyd}) and (\ref{grad}) as
 \begin{equation}\label{hyd_pol}
 p'=-g\rho\left( 1-\frac{4\alpha}{3\delta} \right),\;\;\;\;\nabla_{{rad}}=\frac{3\kappa_{rc}lp}{16\pi acG mT^4}\left(1-\frac{4\alpha}{3\delta}\right)^{-1},
\end{equation}
where we have dropped the index $n$ in the parameter $\delta$ (\ref{delta}).
Some of the objects we will consider in these notes are massive enough to burn light elements in their cores; these can be hydrogen, deuterium, or lithium. The product of any of these energy generation processes is luminosity, which can be obtained by integrating the expression
\begin{equation}\label{Lbur}
 \frac{dL_{burning}}{dr}=4\pi r^2\dot\epsilon\rho.
\end{equation}
The energy generation rate $\dot\epsilon$ is a function of the energy density, temperature, and stellar composition; however, it can be approximated as a power-law function of the first two \cite{fowler}.
The energy produced in the core is radiated through the surface and can be expressed by the Stefan-Boltzmann law
\begin{equation}\label{stefan}
 L=4\pi f\sigma T_{eff}^4R^2,
\end{equation}
where $\sigma$ is the Stefan-Boltzmann constant. We have added the factor $f\leq1$, which allows us to include planets, which obviously radiate less than a black body with the same effective temperature $T_{eff}$. This particular temperature (as well as other elements of the atmosphere modelling) is usually difficult to determine and can carry significant uncertainties. Nevertheless, there is a tool which we will often use when we look for some characteristics of the atmosphere: the optical depth $\tau$, averaged over the object's atmosphere (see e.g. \cite{stellar,hansen}),
\begin{equation} \label{eq:od}
 \tau(r)=\bar\kappa\int_r^\infty \rho dr,
\end{equation}
where $\bar\kappa$ is a mean opacity. In what follows, since we will mainly work with objects whose atmospheres have low temperatures, we will use Rosseland mean opacities, given by the simple Kramers law
\begin{equation}\label{abs}
 \bar\kappa= \kappa_0 p^u T^{w},
\end{equation}
where $\kappa_0$, $u$ and $w$ are values depending on the different opacity regimes \cite{planets,kley}.
We will also assume that the atmosphere is made of particles satisfying the ideal gas relation ($N_A$ is the Avogadro constant)
\begin{equation}\label{ideal}
 \rho=\frac{\mu p}{N_A k_B T}.
\end{equation}
Again we can use the polytropic EoS (\ref{pol}) to rewrite the above as
\begin{equation}\label{eos}
 p=\tilde K T^{1+n},\;\;\;\tilde K=\left(\frac{N_Ak_B}{\mu}\right)^{1+n}K^{-n},
\end{equation}
where $K$ can be shown to be a function of the solutions of the modified Lane-Emden equation, and therefore it depends on the theory of gravity \cite{aneta2}.

\section{Pre-Main Sequence phase}
In this section we will discuss some of the processes related to early stellar evolution. Before reaching the Main Sequence, a young star on the so-called Hayashi track still contracts, decreasing its luminosity without changing its surface temperature very much. Often the conditions present in the core are sufficient to burn light elements such as deuterium and lithium; however, in order to burn hydrogen, the temperature in the star's core must be much higher than in the lithium case. Moreover, during its journey down along the Hayashi track the pre-Main Sequence star is fully convective apart from its radiative atmosphere. As already mentioned, because of the gravitational contraction the physical conditions in the core change, and it may happen that the convective core becomes radiative. In such a situation, the star will subsequently follow the Henyey track. This phase is much shorter than the Hayashi one, it concerns more massive stars (see Figure~\ref{hr}), and will not be discussed here. The development of a radiative core, hydrogen burning, and other processes related to early stellar evolution depend not only on the star's mass but also on the theory of gravity, as we will see in the following subsections.

\subsection{Hayashi track}


The photosphere is defined at the radius for which the optical depth (\ref{eq:od}) with mean opacity $\kappa$ is equal to $2/3$.
Using this relation in order to integrate the hydrostatic equilibrium equation (\ref{hyd}) with $r=R$ and $M=m(R)$, and applying the absorption law (\ref{abs}) for a stellar atmosphere dominated by $H^-$ in the temperature range $3000< T<6000$\,K, with $\kappa_0\approx1.371\times10^{-33}Z\mu^\frac{1}{2}$ and $u=\frac{1}{2},\,w=8.5$, where $Z=0.02$ is the solar metallicity \cite{hansen}, one gets
\begin{equation}\label{fotos}
 p_{ph}=8.12\times10^{14}\left(\frac{ M\left(1-\frac{4\alpha}{3\delta}\right)}{L T_{ph}^{4.5}Z\mu^\frac{1}{2}}\right)^\frac{2}{3},
\end{equation}
in which the Stefan-Boltzmann law $L=4\pi\sigma R^2T^4_{ph}$ with $ T_{{eff}{\mid_{r=R}}}\equiv T_{ph}$ was already used.
On the other hand, from (\ref{eos}) taken at the photosphere with $n=3/2$, and applying the Stefan-Boltzmann law again, we have
\begin{equation}
 T_{ph}=9.196\times10^{-6}\left( \frac{L^\frac{3}{2}Mp_{ph}^2\mu^5}{-\theta'\xi_R^5} \right)^\frac{1}{11}.
\end{equation}
The pressure appearing above is the pressure of the atmosphere; therefore, using (\ref{fotos}) and rescaling the mass and luminosity to the solar values $M_\odot$ and
$L_\odot$, respectively, we can finally write
\\frac{\\left(\\frac{1-\\frac{4\\alpha}{3\\delta}}{Z}\\right)^\\frac{4}{3}}{\\xi_R^5\\sqrt{-\\theta'}} \\right)^\\frac{1}{17}\\textrm{K}.\n\\end{equation}\nThe obtained formula relates the effective temperature and luminosity of the pre-Main Sequence star for a given mass $M$ and mean molecular weight $\\mu$. That is, it provides an evolutionary track called the Hayashi track \\cite{hayashi}. These tracks, almost vertical lines on the right-hand side of the H-R diagram, are followed by baby stars until they develop a radiative core or reach the Main Sequence. We immediately observe that the effective temperature is nearly constant; notice, however, that the temperature coefficient is too low, which is caused by our toy-model assumptions, mainly those related to the atmosphere modelling. However, this simplified analysis allows us to conclude that modified gravity indeed shifts the curves (see Figure 2 in \\cite{aneta2}), leading to the possibility of constraining models of gravity by studying T Tauri stars \\cite{bert} positioned near the Hayashi forbidden zone (see Figure \\ref{hr}).\n\n\n\n\n\n\\subsection{Lithium burning}\nIn fully convective stars (in which case we may assume that the star is well mixed) with mass $M$ and hydrogen fraction $X$,\nthe depletion rate is given by the expression\n\\begin{equation}\\label{reac}\n M\\frac{{d}f}{{d}t}=-\\frac{Xf}{m_H}\\int^M_0\\rho\\langle\\sigma v\\rangle dM,\n\\end{equation}\nwhere the non-resonant reaction rate for temperatures $T<6\\times 10^6$ K is\n\\begin{equation}\n N_A\\langle\\sigma v\\rangle=Sf_{scr} T^{-2\/3}_{c6}\\textrm{exp}\\left[-aT_{c6}^{-\\frac{1}{3}}\\right]\\;\\frac{\\textrm{cm}^3}{\\textrm{s g}},\n\\end{equation}\nwhere $T_{c6}\\equiv T_c\/10^6$ K and $f_{scr}$ is the screening correction factor, while $S=7.2\\times10^{10}$ and $a=84.72$ are dimensionless parameters \nin the fit to the reaction rate $^7\\textrm{Li}(p,\\alpha)\\,^4\\textrm{He}$ \\cite{usho,cf,raimann}. The Lane-Emden formalism for Palatini gravity provides the expressions for the central temperature $T_c$ and central density $\\rho_c$ (\\ref{rho0s}). However, instead of the simplest polytropic model (\\ref{Ka}), we need to take into account an arbitrary electron degeneracy degree $\\Psi$ and mean molecular weight $\\mu_{eff}$, and thus the radius is\n\\begin{equation}\\label{Rpol}\n \\frac{R}{R_\\odot}\\approx\\frac{7.1\\times10^{-2}\\gamma}{\\mu_{eff}\\mu_e^\\frac{2}{3}F^\\frac{2}{3}_{1\/2}(\\Psi)}\n \\left(\\frac{0.1M_\\odot}{M}\\right)^\\frac{1}{3},\n\\end{equation}\nwhere $F_n(\\Psi)$ is the $n$th order Fermi-Dirac function. Inserting the quantities $T_c$, $\\rho_c$, and $R$ given by the Lane-Emden formalism, changing the variables to the spatial ones, and assuming that the burning process is restricted to the central region of the star (so that we can use the near-center solution of the LE equation), the depletion rate (\\ref{reac}) can be written as \\cite{aneta3}\n\\begin{eqnarray}\n \\frac{\\mathrm{d}}{\\mathrm{d}t}\\ln f&=&-6.54\\left(\\frac{X}{0.7}\\right)\\left(\\frac{0.6}{\\mu_{eff}}\\right)^3\\left(\\frac{0.1M_\\odot}{M}\\right)^2\\nonumber\\\\\n &\\times& Sf_{scr} a^7 u^{-\\frac{17}{2}}e^{-u}\n \\left(1+\\frac{7}{u}\\right)^{-\\frac{3}{2}}\\xi_R^2(-\\theta'(\\xi_R)),\n\\end{eqnarray}\nwhere $u\\equiv aT_6^{-1\/3}$. 
In order to proceed further, we need to find the time dependence of the central temperature parameter $u$, which can be obtained from the Stefan-Boltzmann law together with the virial theorem\n\\begin{equation}\n L=4\\pi\\sigma R^2 T^4_{eff}=-\\frac{3}{7}\\Omega\\frac{GM^2}{R^2}\\frac{{d}R}{{d}t}.\n\\end{equation}\nThe factor $\\Omega$ stands for modified gravity effects on the equation (in the Palatini quadratic model $\\Omega=1$ for $n=3\/2$). The above relations provide the radius and luminosity as functions of time during the contraction phase\n\\begin{eqnarray}\n \\frac{R}{R_\\odot}&=&0.85\\Omega^\\frac{1}{3} \\left(\\frac{M}{0.1M_\\odot}\\right)^\\frac{2}{3} \\left(\\frac{3000\\textrm{K}}{T_{eff}}\\right)^\\frac{4}{3}\n \\left(\\frac{\\textrm{Myr}}{t}\\right)^\\frac{1}{3}\\label{Rt}\\\\\n \\frac{L}{L_\\odot}&=& 5.25\\times10^{-2}\\Omega \\left(\\frac{M}{0.1M_\\odot}\\right)^\\frac{4}{3} \\left(\\frac{T_{eff}}{3000\\textrm{K}}\\right)^{\\frac{4}{3}}\n \\left(\\frac{\\textrm{Myr}}{t}\\right)^\\frac{2}{3}\\label{Lcontr},\n\\end{eqnarray}\nwith the contraction time given as\n\\begin{eqnarray}\\label{tcon}\n t_{cont}\\equiv&-&\\frac{R}{{d}R\/{d}t}\\approx841.91 \\left(\\frac{3000\\textrm{K}}{T_{eff}}\\right)^4 \\left(\\frac{0.1M_\\odot}{M}\\right)\\\\\n &\\times&\n \\left(\\frac{0.6}{\\mu_{eff}}\\right)^3 \\left(\\frac{T_c}{3\\times10^6\\textrm{K}}\\right)^3 \\frac{\\xi_R^2(-\\theta'(\\xi_R))\\Omega}{\\delta^2}\\,\\textrm{Myr}.\\nonumber\n\\end{eqnarray}\nUsing equations (\\ref{Rt}) and (\\ref{Rpol}) it is possible to express the central temperature $T_c$ in terms of time during the contraction epoch, which yields\n\\begin{equation}\n \\frac{u}{a}=1.15\\left(\\frac{M}{0.1M_\\odot}\\right)^{2\/9}\\left(\\frac{\\mu_eF_{1\/2}(\\eta)}{t_6 T^4_{3eff}}\\right)^{2\/9}\n \\times\n \\left(\\frac{\\xi_R^5\\Omega^{2\/3}(-\\theta'(\\xi_R))^{2\/3}}{\\gamma\\delta^{2\/3}}\\right)^{1\/3},\n\\end{equation}\nwhere $T_{3eff}\\equiv T_{eff}\/3000$K and $t_6\\equiv t\/10^6\\,$yr.\n\nLet us focus now on stars with masses $M<0.2M_\\odot$ such that the degeneracy effects are insignificant and $\\dot{\\mu}_{eff}$ can be neglected when compared to $\\dot{R}$. Then, we can write the depletion rate as\n\\begin{eqnarray}\n &\\frac{\\mathrm{d}\\ln f}{\\mathrm{d}u}& = 1.15\\times10^{13}~T_{3eff}^{-4}\\left(\\frac{X}{0.7}\\right)\\left(\\frac{0.6}{\\mu_{eff}}\\right)^6\n\\left(\\frac{M_\\odot}{M}\\right)^3\\nonumber\\\\\n&\\times& Sf_{scr} a^{16}u^{-\\frac{37}{2}}e^{-u}\\left(1-\\frac{21}{2u}\\right)\\frac{\\xi_R^4(-\\theta'(\\xi_R))^2\\Omega}{\\delta^2}.\n\\end{eqnarray}\nThe above equation can be integrated from $u_0=\\infty$ to $u$ ($\\mathcal{F}\\equiv\\ln\\frac{f_0}{f}$):\n\\begin{eqnarray}\\label{sol}\n \\mathcal{F}=1.15\\times10^{13}\\frac{X}{0.7}\\left(\\frac{0.6}{\\mu_{eff}}\\right)^6\n\\left(\\frac{M_\\odot}{M}\\right)^3\n \\frac{Sf_{scr} a^{16}g(u)}{~T_{3eff}}\\frac{\\xi_R^4(-\\theta'(\\xi_R))^2\\Omega}{\\delta^2},\n\\end{eqnarray}\nwhere $g(u)=u^{-37\/2}e^{-u}-29\\,\\Gamma(-37\/2,u)$, with $\\Gamma(-37\/2,u)$ the upper incomplete gamma function. The $^7\\textrm{Li}$ abundance depends on the gravity model.\n\nOne obtains the central temperature $T_c$ from $u(\\mathcal{F})$ for a given depletion $\\mathcal{F}$. The star's age, radius, and luminosity are then given by the equations (\\ref{tcon}), (\\ref{Rt}), and (\\ref{Lcontr}). Let us emphasize that all these values depend on the model of gravity, clearly altering the pre-Main Sequence stage of the stellar evolution. 
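In practice, extracting $u$ (and hence $T_c$) from a measured depletion requires inverting (\\ref{sol}) numerically. The sketch below shows one way to do this; the prefactor \\texttt{A}, collecting all the model- and gravity-dependent constants of (\\ref{sol}), is set to a purely illustrative value, and arbitrary-precision arithmetic (here via the \\texttt{mpmath} package) is used because $g(u)$ involves an incomplete gamma function of negative parameter and extremely small magnitudes.\n\\begin{verbatim}\n# Sketch: solve F = A*g(u) for u at a given depletion F = ln(f0\/f).\n# \"A\" stands for the full prefactor of the depletion formula and is\n# illustrative only; g(u) is monotonically decreasing on the bracket.\nimport mpmath as mp\n\ndef g(u):\n    z = mp.mpf('-18.5')                   # -37\/2\n    return u**z * mp.exp(-u) - 29 * mp.gammainc(z, u)\n\ndef u_of_F(F, A):\n    lo, hi = mp.mpf(30), mp.mpf(80)       # physically relevant range\n    for _ in range(100):                  # simple bisection\n        mid = (lo + hi) \/ 2\n        if A * g(mid) > F:\n            lo = mid                      # predicted depletion too large\n        else:\n            hi = mid\n    return (lo + hi) \/ 2\n\nu = u_of_F(F=mp.log(2), A=mp.mpf('5e54'))  # 50% lithium depletion\nTc6 = (mp.mpf('84.72') \/ u)**3             # T_c\/10^6 K, as u = a*T_6^(-1\/3)\nprint(u, Tc6)\n\\end{verbatim}\nFor these illustrative numbers one finds $u\\approx52$, that is, $T_c\\approx4\\times10^6\\,$K, in the expected range for lithium burning; in the modified-gravity case the same inversion applies with the appropriate $\\Omega$-, $\\gamma$-, and $\\delta$-dependent prefactor. 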
Moreover, age determination techniques which are based on lithium abundance measurements are not model-independent: they depend on the model of gravity used, as presented above (see details in \\cite{aneta3}).\n\n\n\n\n\n\\subsection{Approaching the Main Sequence - Hydrogen burning}\\label{Shb}\nThe process of becoming a true star is related to stable hydrogen burning. This means that the energy produced in this reaction is radiated away through the star's atmosphere, and that the pressure resulting from the energy transport balances the gravitational contraction. When a star contracts, the central temperature increases, and when it reaches values $\\sim3\\times10^6\\,$K in the core, the thermonuclear ignition of hydrogen starts. There are three reactions responsible for this process: $p + p \\to d + e^{+}+\\nu_e$, $p +e^{-}+p \\to d +\\nu_e$, $p+d \\to {}^{3}\\textrm{He} + \\gamma$, where the first one is slow and constitutes a bottleneck for lower-mass objects; that is, it stands behind the Minimum Main Sequence Mass (MMSM) notion. It was demonstrated that the energy generation rate per unit mass for the hydrogen ignition process can be well described by the power law form \\cite{fowler,burrows1}\n\\begin{equation} \\label{eq:pp}\n\\dot{\\epsilon}_{pp}= \\dot{\\epsilon}_c \\left(\\frac{T}{T_c}\\right)^s \\left(\\frac{\\rho}{\\rho_c} \\right)^{u-1}, \\,\\,\\,\\,\\,\\dot{\\epsilon}_c=\\epsilon_0T_c^s\\rho_c^{u-1} ,\n\\end{equation}\nwhere\nthe two exponents can be approximated as $s \\approx 6.31$ and $u \\approx 2.28$, while $\\epsilon_0\\approx 3.4\\times10^{-9}$ ergs g$^{-1}$s$^{-1}$. For a baby star with hydrogen fraction $X=0.75$, the number of baryons per electron in low-mass stars is $\\mu_e \\approx 1.143$.\n\nUsing the energy generation rate (\\ref{eq:pp}) and luminosity (\\ref{Lbur}) formulae, we can integrate the latter over the stellar volume ($M_{-1}=M\/(0.1M_\\odot)$):\n \\begin{equation}\\label{lhb}\n \\frac{L_{HB}}{L_\\odot}=4\\pi r_c^3\\rho_c\\dot{\\epsilon}_c\\int^{\\xi_R}_{0}\\xi^2\\theta^{n(u+\\frac{2}{3}s)}d\\xi=\n\\frac{1.53\\times10^7\\Psi^{10.15}}{(\\Psi+\\alpha_d)^{16.46}}\n\\frac{\\delta^{5.487}_{3\/2}M^{11.977}_{-1}}{\\omega_{3\/2}\\gamma^{16.46}_{3\/2}},\n \\end{equation}\nwhere we used the Lane-Emden formalism with\n\\begin{eqnarray}\n K= \\frac{ (3\\pi^2)^{2\/3} \\hbar^2}{5 m_e m_H^{5\/3} \\mu_e^{5\/3}} \\left( 1+ \\frac{\\alpha_d}{\\Psi} \\right),\n\\end{eqnarray}\nand the near-center solution of the LE equation, which is\n$\\theta(\\xi \\approx 0)= 1- \\frac{\\xi^2}{6} \\sim \\exp\\left( -\\frac{\\xi^2}{6}\\right)$ for Palatini $f(R)$ gravity. Here, $\\alpha_d\\equiv5\\mu_e\/2\\mu\\approx4.82$.\n\nNow we will focus on finding the photospheric luminosity, which must equal (\\ref{lhb}) in order for the star to be a stable system.\nTherefore, the surface gravity (\\ref{surf}) needs to be rewritten with respect to the Lane-Emden variables:\n\\begin{equation}\n g=\\frac{3.15\\times10^6}{\\gamma^2_{3\/2}}\nM_{-1}^{5\/3}\\left(1+\\frac{\\alpha_d}{\\Psi}\\right)^{-2} \\textrm{cm\/s}^2.\n\\end{equation}\nThe trickiest part is finding the photospheric temperature. 
Usually it is obtained from matching the specific entropy of the gas and metallic phases of the H-He mixture \\cite{burrows1} (here without the phase transition points \\cite{aud}) \n\\begin{equation}\\label{temp}\n T_{ph}=1.8\\times10^6\\frac{\\rho_{ph}^{0.42}}{\\Psi^{1.545}}\\textrm{K} \\ .\n\\end{equation}\nApplying these two results to (\\ref{hyd}) and (\\ref{ideal}), one writes the photospheric density as ($\\kappa_{-2}=\\kappa_R\/(10^{-2}\\mathrm{cm^2 g^{-1}})$, $\\kappa_R$ being the Rosseland mean opacity):\n\\begin{equation}\n \\frac{\\rho_{ph}}{\\mathrm{g\/cm^3}}=\n 5.28\\times10^{-5}M^{1.17}_{-1}\\left(\\frac{1+8\\beta\\frac{g}{c^2 R} }{\\kappa_{-2}}\\right)^{0.7}\n\\frac{\\Psi^{1.09}}{\\gamma^{1.41}_{3\/2}}\\left(1+\\frac{\\alpha_d}{\\Psi}\\right)^{-1.41}.\n\\end{equation}\nInserting it into $T_{ph}$ and using the stellar luminosity (\\ref{stefan}) we find\n\\begin{equation}\nL_{ph}=28.18L_\\odot\\frac{M^{1.305}_{-1}}{\\gamma^{2.366}_{3\/2}\\Psi^{4.351}}\n \\times\\left(\\frac{1+8\\beta\\frac{g}{c^2 R} }{\\kappa_{-2}}\\right)^{1.183}\n \\left(1+\\frac{\\alpha_d}{\\Psi}\\right)^{-0.366} \\ .\n\\end{equation}\nFinally, setting $L_{HB}=L_{ph}$ and performing straightforward algebra, we obtain the MMSM:\n\\begin{equation} \\label{result}\nM_{-1}^{MMSM}=0.290 \\frac{\\gamma_{3\/2}^{1.32} \\omega_{3\/2}^{0.09}}{\\delta_{3\/2}^{0.51}} \\frac{(\\alpha_d + \\Psi)^{1.509}}{\\Psi^{1.325}} \\left(1-1.31\\alpha\\frac{\\left(\\frac{\\alpha_d+\\Psi}{\\Psi}\\right)^4}{\\delta_{3\/2}\\kappa_{-2} }\\right)^{0.111}. \n\\end{equation}\nIt is clearly modified by our model of gravity, not only through the parameter $\\alpha$ but also through the solutions of the LE equation (\\ref{LE}).\n\n\n\n\\section{Low-mass Main Sequence stars}\nIn any stellar modelling one needs to determine which kind of energy transport mechanism is present in each particular layer of the given star. It is usually given by the Schwarzschild criterion (\\ref{grad}), which is also altered by the model of gravity \\cite{aneta2}. Using that result we will demonstrate that the mass limit of fully convective stars on the Main Sequence is shifted, which can have a significant effect on how we model stars in this mass range. Newtonian-based models predict that the interiors of Main Sequence stars with masses smaller than $\\sim0.6M_\\odot$ are fully convective.\n\nSince the star's luminosity decreases when it contracts following the Hayashi track, a radiative zone may appear in the star's interior, after which the star starts following the Henyey track \\cite{henyey,hen2,hen3}. In the case of low-mass stars, however, the fully convective baby star may also reach the Main Sequence without developing a radiative core. In order to deal with such a situation, the decreasing luminosity in the Schwarzschild condition for radiative core development (which happens when $ \\nabla_{rad}=\\nabla_{ad}$, where in our simplified model $\\nabla_{ad}=0.4$) cannot be lower than the luminosity of H burning (\\ref{lhb}). 
Therefore, the modified Schwarzschild criterion, after inserting (\\ref{stefan}) and (\\ref{rho0s}) with a homology contraction argument, provides the minimum luminosity for radiative core development:\n\\begin{equation}\\label{lmin}\n L_{min}=9.89\\times10^{7}L_\\odot \\frac{\\delta_{3\/2}^{1.064}(\\frac{3}{4}\\delta_{3\/2}-\\alpha)}{\\xi^{8.67}(-\\theta')^{1.73}} \\left(\\frac{T_{eff}}{\\kappa_0}\\right)^{0.8} M_{-1}^{4.4},\n\\end{equation}\nwhere we have used the Kramers absorption law (\\ref{abs}) with $u=1$ and $w=-4.5$. Thus, a star at the onset of radiative core development will reach the Main Sequence when $L_{min}=L_{HB}$, so the mass of the maximal fully convective star on the Main Sequence is given by the following expression:\n\\begin{equation}\\label{masss}\n M_{-1}=1.7\\frac{\\mu^{0.9}T_{eff}^{0.11}(\\alpha_d+\\Psi)^{2.173}}{\\Psi^{1.34}\\kappa_0^{0.11}}\n \\frac{\\gamma^{2.173}\\omega^{0.132}}{\\delta_{3\/2}^{0.58}\\xi^{1.14}(-\\theta')^{0.23}}.\n\\end{equation}\nLet us first focus on the GR case, that is, $\\alpha=0$. Considering a star with \n$\\alpha_d=4.82$, electron degeneracy $\\Psi=9.4$, and mean molecular weight $\\mu=0.618$ with $T_{eff}=4000\\,$K, the maximal mass of the fully convective star on the Main Sequence is:\n\\begin{equation}\n M=4.86M_\\odot\\kappa^{-0.11}_0.\n\\end{equation}\nWe immediately notice that the final value depends on the opacity. Considering the two Kramers opacities, the total bound-free and free-free, estimated to be (in $\\textrm{cm}^2\\textrm{g}^{-1}$) \\cite{hansen}\n\\begin{equation}\n \\kappa_0^{bf}\\approx 4\\times10^{25}\\mu \\frac{ Z(1+X)}{N_Ak_B},\\;\\;\\;\\;\n \\kappa_0^{ff}\\approx 4\\times10^{22}\\mu\\frac{(X+Y)(1+X)}{N_Ak_B},\n\\end{equation}\nthe corresponding masses, for $X=0.75$ and $Z=0.02$, are \n\\begin{equation}\n M_{bf}=0.099M_\\odot,\\;\\;\\;M_{ff}=0.135 M_\\odot,\n\\end{equation}\nrespectively. The obtained masses are, as expected, too low; this is a result of our simplified analysis, mainly of the description of the atmosphere and of the gas behaviour in the considered pressure and temperature regimes. However, we may use the obtained values as a reference against which to compare the results arising from modified gravity: depending on the parameter value, the masses can differ by as much as $50\\%$ \\cite{aneta2}.\n\n\n\n\n\n\n\\section{Aborted stars: brown dwarfs}\n\nLet us discuss a family of objects which do not satisfy the conditions necessary to ignite hydrogen in their cores\\footnote{Some massive brown dwarfs do burn hydrogen; however, the process is not stable ($L_{HB}\\neq L_{ph}$): although there is some energy production, the object radiates more than it produces, and therefore it cools down, following the brown dwarfs' evolution.} and subsequently to enter the Main Sequence phase. Such an object will radiate away all its stored energy, which is a result of gravitational contraction and possibly of light element burning in the early stages of its evolution. It stops contracting when the electron degeneracy pressure balances the gravitational pull, and consequently it cools down with time. In order to study a simple but accurate cooling model of brown dwarfs, we need to consider a more realistic description of matter, as brown dwarfs are composed of a mixture of degenerate and ideal gas at finite temperature. 
It turns out, however, that such an EoS can be rewritten in polytropic form for $n=3\/2$ \\cite{aud}, but with a more complicated polytropic function, $K=C\\mu_e^{-\\frac{5}{3}}(1+b+a \\eta)$, where $C=10^{13} \\rm{cm}^4g^{-2\/3}s^{-2}$ is a constant, $a=\\frac{5}{2}\\mu_e\\mu_1^{-1}$, and $\\mu_e$ is the number of baryons per electron. Here, we use $\\eta=\\Psi^{-1}$ as the electron degeneracy parameter, while $\\mu_1$ takes into account ionization, and it is defined as\n\\begin{equation}\n\\frac{1}{\\mu_1}=(1+x_{H^+})X+\\frac{Y}{4},\n\\end{equation}\nwhere $x_{H^+}$ is the ionization fraction of hydrogen, $X$ and $Y$ being the hydrogen and helium fractions, and depends on the phase transition points \\cite{chab4}. Moreover, the quantity $b$ is \n\\begin{equation}\\label{defb}\n b=-\\frac{5}{16} \\eta \\ln(1+e^{-1\/\\eta})+\\frac{15}{8}\\eta^2\\left( \\frac{\\pi^2}{3}+\\mathrm{Li}_2[-e^{-1 \/\\eta}] \\right),\n\\end{equation}\nwhere $\\rm{Li}_2$ denotes the second-order polylogarithm and the degeneracy parameter is given by $\\eta=\\frac{k_B T}{\\mu_F}$, with $\\mu_F$ the Fermi energy.\nTherefore, we can still use the LE formalism for our purposes, that is, we can express the star's central pressure, radius, central density, and temperature $T_c=\\frac{K\\mu}{k_B}\\rho_c^\\frac{1}{n}$ as functions of the above parameters; we will soon see that the degeneracy parameter depends on time because of the still ongoing gravitational contraction.\n\nAs already commented, the most uncertain part of our calculations is related to the photospheric values of, for instance, the effective temperature. In the brown dwarf case one usually uses the entropy method, that is, matching the entropy of the non-ionized molecular mixture of H and He in the atmosphere to that of the interior, composed mainly of degenerate electron gas \\cite{aud,burrows1}:\n\\begin{equation}\\label{entr}\n S_{interior}=\\frac{3}{2}\\frac{k_BN_A}{\\mu_{1mod}}(\\ln\\eta+12.7065)+C_1,\n\\end{equation}\nwhere $C_1$ is an integration constant of the first law of thermodynamics and $\\mu_{1mod}$ is the modified $\\mu_1$ at the photosphere (see the details and its form in \\cite{maria,aud}). The matching provides the effective temperature as\n\\begin{equation}\\label{tsur}\n T_{eff}=b_1 \\times 10^6 \\rho_{ph}^{0.4}\\eta^\\nu\\,\\,\\mathrm{K},\n\\end{equation}\nwhere the parameters $b_1$ and $\\nu$ depend on the specific model describing the phase transition between the metallic H and He state in the brown dwarf's interior and the photosphere composed of the molecular ones \\cite{chab4}. Following steps analogous to those in Section \\ref{Shb}, one gets the photospheric temperature as\n\\begin{eqnarray}\n T_{eff}=\\frac{2.558\\times10^4\\,\\mathrm{K}}{\\kappa_R^{0.286}\\gamma^{0.572}}\n \\left(\\frac{M}{M_\\odot} \\right)^{0.4764}\n \\frac{\\eta^{0.714\\nu}b_1^{0.714}}{(1+b+a\\eta)^{0.571}}\n \\left(1-1.33\\frac{\\alpha}{\\delta}\\right)^{0.286},\n\\end{eqnarray}\nwhere $\\mu_e=1.143$ was used. That allows us to find the luminosity of the brown dwarf; hence, using the Stefan-Boltzmann equation, one gets:\n\\begin{equation}\\label{lumph}\n L=\\frac{0.0721 L_\\odot}{\\kappa_R^{1.1424}\\gamma^{0.286}}\n \\left(\\frac{M}{M_\\odot} \\right)^{1.239}\n \\frac{\\eta^{2.856\\nu}b_1^{2.856}}{(1+b+a\\eta)^{0.2848}}\n \\left(1-1.33\\frac{\\alpha}{\\delta}\\right)^{1.143}.\n\\end{equation}\n\nThe above luminosity depends on time since the electron degeneracy $\\eta$ does. 
To find such a relation for the latter \\cite{burrows1,stev}, let us consider the pace of cooling and contraction given by the first and second laws of thermodynamics\n\\begin{equation}\n \\frac{\\rm d E}{\\rm d t}+p\\frac{\\rm d V}{\\rm d t}=T\\frac{\\rm d S}{\\rm d t}\n =\\dot\\epsilon-\\frac{\\partial L}{\\partial M},\n\\end{equation}\nin which the energy generation term $\\dot\\epsilon$ is negligible in brown dwarfs. We can integrate the above equation over mass to find\n\\begin{equation}\n \\frac{\\rm d\\sigma}{\\rm dt} \\left[\n \\int N_A k_B T \\rm dM\n \\right]=-L,\n\\end{equation}\nwhere $L$ is the surface luminosity and we have defined $\\sigma=S\/(k_BN_A)$. The LE polytropic relations allow us to eliminate $T$ and $\\rho$ and to write\n\\begin{equation}\\label{eqL}\n \\frac{\\rm d\\sigma}{\\rm dt}\n \\frac{N_A A \\mu_e\\eta}{C(1+b+a\\eta)} \\int p \\rm dV\n =-L,\n\\end{equation}\nwhere $A=(3\\pi\\hbar^3 N_A)^\\frac{2}{3}\/(2m_e)\\approx4.166\\times10^{-11}$. The integral in the above equation can be simply found to be $\\int p \\rm dV=\\frac{2}{7}\\Omega G\\frac{M^2}{R}$ with $\\Omega=1$ for $n=3\/2$ in Palatini gravity \\cite{artur,aneta3}.\n\nWith the use of the entropy formula (\\ref{entr}) one easily gets the entropy rate (recall that $\\sigma=S\/(k_BN_A)$):\n\\begin{equation}\n \\frac{\\rm d\\sigma}{\\rm dt}=\\frac{1.5}{\\mu_{1{mod}}}\\frac{1}{\\eta} \\frac{\\rm d\\eta}{\\rm dt}.\n\\end{equation}\nInserting the above expression into (\\ref{eqL}) together with the luminosity (\\ref{lumph}) gives us the evolutionary equation for the degeneracy parameter $\\eta$\n\\begin{eqnarray}\n \\frac{\\rm d\\eta}{\\rm dt}=&-&\n \\frac{1.1634\\times10^{-18}b_1^{2.856}\\mu_{1{mod}}}{\\kappa_R^{1.1424}\\mu_e^{8\/3}}\n \\left(\\frac{M_\\odot}{M} \\right)^{1.094}\n \\\\\n &\\times&\\eta^{2.856\\nu}(1+b+a\\eta)^{1.715} \\frac{\\gamma^{0.7143}}{\\Omega} \\left(1-1.33\\frac{\\alpha}{\\delta}\\right)^{1.143}.\\nonumber\n\\end{eqnarray}\nThis equation, together with the luminosity equation (\\ref{lumph}) and the initial condition $\\eta=1$ at $t=0$, provides the cooling model for a brown dwarf star in Palatini $f(\\bar R)$ gravity. To see how modified gravity affects such an evolution, see \\cite{maria}, where these equations are solved numerically\\footnote{https:\/\/github.com\/mariabenitocst\/brown\\_dwarfs\\_palatini}. \n\n\n\n\\section{(Exo)-planets}\nAs we will see, some theories of gravity can change the giant planets' evolution, and may also affect the internal structure of gaseous and terrestrial ones. This fact can change our understanding of the Solar System's formation, and it can also be used to constrain different gravitational proposals once high-accuracy observational and experimental data are at our disposal. 
Missions such as ESA's Cosmic Vision \\cite{esa} will soon bring more data on the physical properties of Jupiter-like planets, while improved seismic experiments \\cite{butler}, as well as those performed in laboratories \\cite{merkel} or with the new generation of neutrino telescopes \\cite{donini}, will provide more information about the behaviour of matter in the Earth's core and its exact composition.\n\n\n\n\\subsection{Jovian planets}\n\nGiant gaseous planets, although their formation process differs significantly from the one followed by stars and brown dwarfs \\cite{planets,planets2}, also contract and cool down until they reach thermal equilibrium, that is, until the energy received from the parent star equals the energy radiated away from the surface of the planet. Their interior description is quite similar to that of brown dwarfs; however, the main difference in the cooling process between these two substellar objects is that Jovian planets possess an additional source of energy, provided by the parent star, which cannot be ignored. When a planet with radius $R_p$ at a distance $R_{sp}$ from its parent star is in the mentioned thermal equilibrium, its equilibrium temperature, defined by \n\\begin{equation}\n (1-A_{p})\\left(\\frac{R_{p}}{2R_{sp}}\\right)^2L_{s}=4\\pi f\\sigma T_{eq}^4R_{p}^2,\n\\end{equation}\nwhere $A_p$ is the albedo of the planet and $L_s$ the star's luminosity, is equal to its effective one. However, when we are dealing with additional energy sources such as gravitational contraction, Ohmic heating, or tidal forces, this is no longer the case, since the planet radiates more than it receives. Therefore, we need a relation between these two temperatures; it is derived from the radiative transport equation with the use of Eddington's approximation \\cite{hansen}:\n\\begin{equation}\\label{temp}\n 4T^4=3\\tau(T^4_{eff}-T^4_{eq})+2(T^4_{eff}+T^4_{eq}),\n\\end{equation}\nwhere $T$ is the stratification temperature in the atmosphere while $\\tau$ is the optical depth. This allows us, upon integrating equation (\\ref{hyd_pol}) with (\\ref{abs}), to write down the atmospheric pressure as (see \\cite{aneta_jup} for $w=4$): \n\\begin{eqnarray}\\label{presat}\n p^{u+1}_{w\\neq4}=\\frac{4^\\frac{w}{4}g}{3\\kappa_0}\\frac{u+1}{1-\\frac{w}{4}}\\left(1-\\frac{4\\alpha}{3\\delta}\\right)\n T_-^{-1}\\Big((3\\tau T_-+2T_+)^{1-\\frac{w}{4}}-(2T_+)^{1-\\frac{w}{4}}\\Big),\n\\end{eqnarray}\nwhere we have defined $T_-:=T^4_{eff}-T^4_{eq}$ and $T_+:=T^4_{eff}+T^4_{eq}$. The atmosphere is radiative, so there must exist a region in which the convective transport of energy in the planet's interior becomes radiative. 
In order to find this boundary, we will use the Schwarzschild criterion (\\ref{grad}) to find the critical depth at which the radiative process is replaced by the convective one:\n\\begin{eqnarray}\n \\tau_c=\\frac{2}{3}\\frac{T_+}{T_-}\\left(\\Big(1+\\frac{8}{5}\\Big(\\frac{\\frac{w}{4}-1}{u+1}\\Big)\\Big)^\\frac{1}{\\frac{w}{4}-1}-1\\right),\\;\\;w\\neq4.\n\\end{eqnarray}\nSubstituting these expressions into (\\ref{presat}) and (\\ref{temp}), we may write the formulas for the boundary pressure and temperature\n\\begin{eqnarray}\\label{pbound}\n p^{u+1}_{conv}&=&\\frac{8g}{15\\kappa_0}\\frac{4^\\frac{w}{4}\\left(1-\\frac{4\\alpha}{3\\delta}\\right)}{T_-(2T_+)^{w-1}}\\left(\\frac{5(u+1)}{5u+8\\frac{w}{4}-3}\\right),\\\\\n T^4_{conv}&=&\\frac{T_+}{2} \\left(\\frac{5u+8\\frac{w}{4}-3}{5(u+1)}\\right)^{\\frac{w}{4}-1},\\;\\;\\;\\;\\;w\\neq4.\n\\end{eqnarray}\nOn the other hand, to describe the planet's convective interior, let us consider a combination of pressures \\cite{don0}\n\\begin{equation}\\label{prescomb}\n p=p_1+p_2,\n\\end{equation}\nwhere $p_1$ is the pressure arising from electron degeneracy, given by the polytropic EoS (\\ref{pol}) with $n=3\/2$, while $p_2$ is the ideal gas pressure (\\ref{ideal}). It can be shown that such a mixture can again be written as a polytrope \\cite{stev}. Matching the above interior pressure with (\\ref{pbound}) provides a relation between the effective temperature $T_{eff}$ and the radius of the planet $R_p$ which depends on modified gravity:\n\\begin{eqnarray}\\label{cond}\n T_+^{\\frac{5}{8}u+\\frac{w}{4}-\\frac{3}{8}}T_-&=&CG^{-u} M_p^{\\frac{1}{3}(2-u)}R_p^{-(u+3)}\\mu^{\\frac{5}{2}(u+1)}k_B^{-\\frac{5}{2}(u+1)}\\nonumber\\\\\n &\\times&\\gamma^{u+1}(G\\gamma^{-1}M_p^\\frac{1}{3}R_p-K)^{\\frac{5}{2}(u+1)}\\left(1-\\frac{4\\alpha}{3\\delta}\\right),\n\\end{eqnarray}\nwhere $C$ is a constant depending on the opacity constants $u$ and $w$:\n\\begin{equation}\n C_{w\\neq4}=\\frac{16}{15\\kappa_0}2^{\\frac{5}{8}(1+u)+\\frac{w}{4}}\\left(\\frac{5u+8\\frac{w}{4}-3}{5(u+1)}\\right)^{1+\\frac{5}{8}(1+u)(\\frac{w}{4}-1)}.\n\\end{equation}\nSince the contraction of the planet is a quasi-equilibrium process, the planet's luminosity is a sum of the total energy absorbed by the planet and the internal energy, such that for a polytrope with $n=3\/2$ \\cite{maria} we may write\n\\begin{equation}\\label{cooling}\n L_p=(1-A_{p})\\left(\\frac{R_{p}}{2R_{sp}}\\right)^2L_{s}-\\frac{3}{7}\\frac{GM_p^2}{R_p^2}\\frac{dR_p}{dt}.\n\\end{equation}\nUsing (\\ref{stefan}) and (\\ref{abs}), integrating from an initial radius $R_0$ to the final one $R_F$, and inserting (\\ref{cond}) to eliminate $T_-$, we can derive the cooling equation for Jovian planets:\n\\begin{eqnarray}\n t=-\\frac{3}{7}\\frac{GM_p^\\frac{4}{3}k_B^{\\frac{5}{2}(u+1)}\\kappa_0}{\\pi ac\\gamma\\mu^{\\frac{5}{2}(u+1)}K^{\\frac{3}{2}u+\\frac{5}{2}}C}\\left(1-\\frac{4\\alpha}{3\\delta}\\right)^{-1}\n \\int^{x_p}_{x_0}\\frac{(T_{eff}^4+T^4_{eq})^{\\frac{5}{8}u+\\frac{w}{4}-\\frac{3}{8}}dx}{x^{1-u}(x-1)^{\\frac{5}{2}(u+1)}}.\\nonumber\n\\end{eqnarray}\n This, together with (\\ref{cond}) providing the effective temperature for a given radius, allows us to find the age of the planet, which clearly differs from the value given by Newtonian physics (see Figure 2 and Tables 1-2 in \\cite{aneta_jup}).\n\n\n\n\n\n\n\n\n\n\\subsection{Terrestrial planets}\nIn this section we will briefly comment on some findings regarding rocky planets, such as the Earth and Mars. 
Although the numerical analysis demonstrates that we should not expect a large degeneracy in the mass-radius plots for Earth-sized and smaller planets\\footnote{In the case of larger terrestrial planets we observe a significant difference, making the exoplanet's composition more difficult to determine \\cite{olek2,seager}.} \\cite{olek2} (see, however, a more realistic approach in \\cite{olek3,olek4}), it turns out that there is a considerable difference in the density profiles $\\rho(r)$, which could be used to constrain and test models of gravity.\nKnowing the density profile of a given planet allows us to obtain the polar moment of inertia $\\mathcal{C}$ ($R_p$ is the planet's radius)\n\\begin{equation}\\label{polar}\n \\mathcal{C}=\\frac{8\\pi}{3}\\int_0^{R_p} \\rho(r) r^4 dr.\n\\end{equation}\nThe density profiles provide information on the number of layers composed of different materials (that is, different EoS) and on their boundaries. The inner structure of the Earth is given by the PREM model \\cite{prem,kustowski,iasp91,aki135}, a result of seismic data analysis, while the Martian interior will soon be known, when the Seismic Experiment for Interior Structure, the seismometer of NASA's Mars\nInSight mission \\cite{nasa}, provides the required data.\n\nSince the density profiles (central and boundary values of density and pressure, and the layers' thicknesses) in modified gravity are slightly different from those obtained in Newtonian gravity, this has an influence on the polar moment of inertia (\\ref{polar}), yielding different results for different models of gravity. Such a prediction can be compared with the observational value of $\\mathcal{C}$ provided by the precession rate $d\\eta\/dt$ caused by gravitational torques from the Sun \\cite{kaula}:\n\\begin{equation}\\label{precession}\n \\frac{d\\eta}{dt}=-\\frac{3}{2}J_2\\cos{\\epsilon}(1-e^2)\\frac{n^2}{\\omega}\\frac{MR^2}{\\mathcal{C}},\n\\end{equation}\nwhere the orbital eccentricity $e$, the obliquity $\\epsilon$, the rotation rate $\\omega$, the effective mean motion $n$, and the gravitational harmonic coefficient $J_2$ are known with high accuracy for the Solar System planets, especially for the Earth \\cite{ziemia} and Mars \\cite{konopliv,smith,folk2}. Therefore, the polar moment of inertia computed from a given model of gravity must agree with the observational one provided by (\\ref{precession}). That procedure, once the theoretical modelling is improved, can be a powerful tool to test theories of gravity which alter the Newtonian equations.\n\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{HR.png}\n\t\\caption{A sketch of the low-temperature region of the evolutionary Hertzsprung-Russell diagram for the astrophysical objects discussed in this chapter (the proportions of the evolution and the scales are not preserved). A baby star travels along the Hayashi track until it reaches the Main Sequence, possibly burning lithium and deuterium. Depending on the star's mass, the object can reach the Main Sequence (the MMSM, Minimum Main Sequence Mass, for hydrogen burning is indicated at masses $\\sim0.08M_\\odot$) as a fully convective star (the MFCM, Maximal Fully Convective Mass, is marked), or it can develop a radiative core (this happens for stars with masses $\\sim0.6M_\\odot$) and then move along the Henyey track. The Hayashi forbidden zone as well as the region occupied by brown dwarfs are also indicated. 
Giant gaseous planets can be found in the colder and dimmer region of the diagram.}\n\t\\label{hr}\n\\end{figure}\n\n\n\n\\begin{acknowledgement}\nThis work was supported by the EU through the European Regional Development Fund CoE program TK133 ``The Dark Side of the Universe\". \n\\end{acknowledgement}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction} \nTheories of dark energy that introduce new, light scalar fields coupled to matter have inspired the study of screening mechanisms to explain why the associated fifth forces have not yet been detected \\cite{Joyce:2014kja,Clifton:2011jh}. Screening mechanisms allow the scalar field theory to have non-trivial self-interactions, and so the properties of the scalar, and the resulting fifth force, can vary with the environment. Whilst screening mechanisms were introduced in order to explain the absence of an observation of a fifth force to date, that does not mean that such fifth forces are intrinsically unobservable. Experimental searches need only be carefully designed to take advantage of the non-linear screening behaviour. \n\nGiven a background field profile, the self-interactions of the screened scalar field can have three possible consequences on the properties of scalar fluctuations on top of that background \\cite{Joyce:2014kja}:\n\\begin{enumerate}\n\\item The mass of the fluctuations becomes dependent on the background. If the field becomes heavy in dense environments and light in diffuse ones, this can explain why the scalar force would not be detected around the macroscopic dense sources used in current fifth force experiments. This is known as the chameleon mechanism after the archetypal chameleon model \\cite{Khoury:2003rn,Khoury:2003aq}. \n\n\\item The strength of the coupling to matter becomes dependent on the background. If the field becomes weakly coupled in experimental environments it is clear that it will be harder to detect. Examples of models that employ this mechanism include the symmetron \\cite{Hinterbichler:2010es,Hinterbichler:2011ca} and the density-dependent dilaton \\cite{Damour:1994zq,Brax:2010gi}. \n\n\\item The coefficient of the scalar kinetic term becomes dependent on the background. If the coefficient becomes large in experimental searches it becomes difficult for the scalar to propagate, and so the force is suppressed. This effect occurs in any model which has gradient self-interactions, including Galileon \\cite{Nicolis:2008in} and k-essence models \\cite{Babichev:2009ee,Brax:2012jr,Burrage:2014uwa}, and is called the Vainshtein mechanism \\cite{Vainshtein:1972sx}.\n\\end{enumerate}\n\nIt has recently been demonstrated that atomic nuclei inside a high-quality vacuum chamber are very sensitive probes of chameleon screening; this is because the nucleus is so small that the screening cannot work efficiently. Forces on individual atoms can now be measured to a very high precision using atom interferometry, and as a result new constraints on chameleon models have been derived. Further improvements to these experiments are currently underway. \n\nIt remains to be determined whether the power of atom interferometry can be extended to constrain theories which screen through other means. Vainshtein screening will not be accessible, because the gradient self-interactions mean that the screening takes place over much longer distance scales than are achievable in a terrestrial laboratory. 
In contrast, however, theories which screen by varying their coupling constant with the environment are phenomenologically similar to chameleon models, and so it is expected that atom interferometry could also provide useful constraints. \n\nIn this work we will focus on the symmetron model, as an example of a theory which screens by varying its coupling constant. This model is chosen because it has been shown that the model can be constructed in such a way that it is radiatively stable and quantum corrections remain under control \\cite{Burrage:2016xzz}. Earlier work studied a similar model but with a different motivation~\\cite{Pietroni:2005pv,Olive:2007aj}, and string-inspired models with similar phenomenology have also been proposed~\\cite{Damour:1994zq,Brax:2011ja}.\n\n\nIn Section \\ref{sec:symm} we will review the symmetron model, and how the force between two extended objects can be screened. In Section \\ref{sec:atom} we apply the results of existing atom interferometry experiments to find new constraints on the symmetron model which are presented in Figure \\ref{fig:constraints}. In Section \\ref{sec:domain} we discuss the possibility that domain walls could form inside the vacuum chamber, leading to the possibility that atoms could experience a symmetron force, even in the absence of a source inside the vacuum chamber. We conclude in Section \\ref{sec:conclusions}.\n\n\n\\section{The Symmetron}\n\\label{sec:symm}\nThe simplest version of the symmetron model is as a canonical scalar field with potential \n\\begin{equation}\nV(\\phi) = \\frac{\\lambda}{4} \\phi^4 -\\frac{\\mu^2}{2}\\phi^2\\;,\n\\end{equation}\nwhere $\\lambda$ (which is dimensionless) and $\\mu$ (which has mass dimensions) are the parameters of the theory which must be determined by experiment. \nThe scalar field couples to matter through dimension six terms in the Lagrangian of the form\n\\begin{equation}\n\\mathcal{L} \\supset \\frac{\\phi^2}{2 M^2}T^{\\mu}_{\\mu}\\;,\n\\end{equation}\nwhere $T_{\\mu\\nu}$ is the energy-momentum tensor of all of the matter fields and $M$ is an energy scale which controls the strength of the coupling to matter. \n\nThe interactions with matter mean that in the presence of a non-relativistic, static background matter density $\\rho$ the symmetron field moves in an effective potential \n\\begin{equation}\nV_{\\rm eff}(\\phi) = \\frac{1}{2}\\left(\\frac{\\rho}{M^2}-\\mu^2\\right)\\phi^2 +\\frac{\\lambda}{4}\\phi^4\\;,\n\\end{equation}\nfrom which it can be seen that when the density is sufficiently high, $\\rho>M^2 \\mu^2$, the effective potential has only one minimum and the field is trapped at $\\phi=0$. As the density is decreased the potential undergoes a symmetry breaking transition, and the field can roll into one of two minima with $\\phi^2 = (\\mu^2 -\\rho\/M^2)\/\\lambda$.\n\nIn Ref.~\\cite{Hinterbichler:2011ca}, the symmetry-breaking scale is chosen close to the cosmological density today, i.e.~$\\mu^2 M^2\\sim H_0^2M_{\\rm Pl}^2$, where $H_0$ is the present-day Hubble scale. In addition, the symmetron force in vacuum is required to have approximately gravitational strength, i.e.~$\\phi\/M^2 \\sim 1\/M_{\\rm Pl}$, such that there may be observable consequences without fine-tuning of the coupling scale. However, other choices of parameters are possible. 
In particular, it has been shown that both E\\\"{o}t-Wash experiments \\cite{Upadhye:2012rc} and measurements of exo-planets \\cite{Santos:2016rdg} constrain a very different region of parameter space, with coupling constants $M\\lesssim 10 \\mbox{ TeV}$ and the mass scale $\\mu$ around the electronvolt scale. The self-coupling parameter $\\lambda$ is currently very poorly constrained. \n\nThe radiatively stable model derived in \\cite{Burrage:2016xzz} has a slightly more complicated potential \n\\begin{equation}\nV(\\phi) =\\left(\\frac{\\lambda}{16\\pi}\\right)^2 \\phi^4 \\left(\\ln \\frac{\\phi^2}{m^2}-Y\\right)\\;,\n\\end{equation}\nfor constant $m$, $\\lambda$ and $Y$. However, the structure of the symmetry breaking transition and the resulting phenomenology remain essentially the same as in the original symmetron model, and so we will focus our attention on the simpler model in what follows. \n\nAround a static, spherically symmetric object of density $\\rho_{\\rm in}$ and radius $R$, embedded in a background density $\\rho_{\\rm out}$, the symmetron field profile is \n\\begin{equation}\n\\phi=\\phi_{\\rm out} -\\frac{(\\phi_{\\rm out}-\\phi_{\\rm in})R e^{m_{\\rm out}(R-r)}}{r}\\left(\\frac{m_{\\rm in}R-\\tanh m_{\\rm in} R}{m_{\\rm in}R+Rm_{\\rm out}\\tanh m_{\\rm in}R}\\right)\\;,\n\\end{equation}\nwhere $m_{\\rm in}$ and $m_{\\rm out}$ are, respectively, the masses of the field inside and outside the source object, and $\\phi_{\\rm in}$ and $\\phi_{\\rm out}$ are the values of the scalar field that minimize the effective potential inside and outside the source. \nThe force on a test particle moving on top of this field profile is then given by $F= \\phi\\nabla \\phi\/M^2$.\n\nIf everywhere in the experiment the density is higher than that required for the symmetry breaking transition, $(\\rho_{\\rm out},\\rho_{\\rm in})>M^2 \\mu^2$, then the field will be constrained to be $\\phi=0$ everywhere and it will never be possible to see the associated fifth forces. \nIf $\\rho_{\\rm out}< \\mu^2 M^2$ then the field has a non-trivial profile inside the vacuum chamber, and the value of the field at the center of the vacuum chamber (assumed to be spherical for simplicity) depends on the relative sizes of the radius of the vacuum chamber $L$ and the Compton wavelength of the field. If $m_{\\rm out} L\\gtrsim1$ then \n\\begin{equation}\n\\phi_{\\rm out} =\\frac{1}{\\sqrt{\\lambda}}\\left(\\mu^2 -\\frac{\\rho_{\\rm out}}{M^2}\\right)^{1\/2}\\;,\n\\end{equation}\notherwise the field does not have room to evolve away from the value $\\phi=0$ it takes in the walls of the chamber, and we have\n\\begin{equation}\n\\phi_{\\rm out}=0\\;.\n\\end{equation}\n\n\nIf $\\rho_{\\rm in}< \\mu^2 M^2$, then the source object causes only a small perturbation of the background field. In this case there is no screening, and the force has the usual Yukawa form. On the other hand, if $\\rho_{\\rm in}> \\mu^2 M^2$ and $m_{\\rm in}R \\gg 1$, then the symmetry is restored inside the source and the resulting force on a test particle is suppressed \\cite{Hinterbichler:2010es}. \n\nWe are not interested, however, in the force on an infinitesimal test particle but instead in the force between two extended objects, either or both of which could be screened. 
Following the arguments of \\cite{Hui:2009kc}, if we assume a hierarchy between mass A and mass B, so that the field profile due to B can be considered a small perturbation of the field profile of A {\\it at the surface of mass B}, then the force can be found by considering the change in momentum of mass B and using the Bianchi identity. The symmetron force between two objects A and B is therefore\n\\begin{equation}\nF_{\\rm symm}=4 \\pi \\lambda_A\\lambda_B (1+m_{\\rm out}R_B)(1+m_{\\rm out}r)\\frac{e^{m_{\\rm out}(R_A-r)}}{r^2}\\;,\n\\end{equation}\nwhere \n\\begin{equation}\n\\lambda_i = \\left.(\\phi_{\\rm out}-\\phi_{\\rm in})R\\left(\\frac{m_{\\rm in}R-\\tanh m_{\\rm in} R}{m_{\\rm in}R+Rm_{\\rm out}\\tanh m_{\\rm in}R}\\right)\\right|_{i}\\;,\n\\label{eq:lambda}\n\\end{equation}\nwhere all of the quantities on the right hand side of Equation (\\ref{eq:lambda}) are evaluated for the object in question. $\\lambda_i$ can therefore be considered the symmetron `charge' for the object. The fact that we are treating the field profiles due to A and B hierarchically explains the slight asymmetry in the dependencies on $R_A$ and $R_B$.\nIf $\\lambda_i \\ll 1$ then we say that the object is screened, and the force is suppressed. \n\n\\section{Atom Interferometry}\n\\label{sec:atom}\nAtom interferometry has been shown to be a powerful technique for constraining chameleon models with screening \\cite{Burrage:2014oza}. These experiments work by putting an atom into a superposition of states which travel on two different paths. If the wavefunction accumulates a phase difference between the two paths this can be detected as an interference pattern when the two paths are merged \\cite{Storey:1994oka,feynmanhibbs}. If the atoms experience a constant acceleration in the same direction as the separation between the paths then this results in exactly such a phase difference, allowing the force the atoms have experienced to be measured very precisely. Atom interferometry measurements looking for chameleons have reached a sensitivity of $10^{-6} g$ (where $g$ is the acceleration due to free fall at the surface of the Earth) and are forecast to reach $10^{-9} g$ \\cite{Hamilton:2015zga,Elder:2016yxm}.\n\nFor chameleon screening, two properties make atom interferometry particularly powerful. Firstly, because the atoms are so small they are unscreened and so the chameleon force is less suppressed than it would be in a comparable macroscopic fifth force experiment. Secondly, the walls of a vacuum chamber are sufficiently thick that they screen the interior of the vacuum chamber from any chameleon gradients or fluctuations in the exterior. This simplifies the computation of the chameleon forces in the experiment, but does have the consequence that the source mass must be placed inside the vacuum chamber, unlike many other tests of gravity performed with atom interferometry. \n\nDo these same advantages also apply to symmetron screening? Assuming the walls of the vacuum chamber have a density $\\rho_{\\rm wall}\\sim 1\\mbox{ g\/cm}^3$, the field can reach $\\phi=0$ and restore the symmetry inside the walls if the thickness of the walls is greater than $\\sim 1\/m_{\\rm wall}$. We will see shortly that atom interferometry experiments constrain a fairly narrow region in $\\mu$ around $\\mu \\sim 10^{-4}\\mbox{ eV}$ and coupling strengths in the range $10^{-4} \\mbox{ GeV}< M < 10^{4} \\mbox{ GeV}$. 
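\n\nTo give a feeling for the numbers involved, the following sketch evaluates the symmetron charge (\\ref{eq:lambda}) of a source sphere in natural units. It is illustrative only: the parameter values are chosen arbitrarily within the region discussed below, the factor of $\\sqrt{2}$ in the broken-phase mass is the curvature of the effective potential at its minimum, and the unit conversions are the standard ones.\n\\begin{verbatim}\n# Illustrative evaluation of the symmetron charge of equation\n# (eq:lambda) in natural units (everything in powers of eV).\n# Parameter values are examples only.\nimport numpy as np\n\nCM = 5.0677e4            # 1 cm in 1\/eV\nG_PER_CM3 = 4.31e18      # 1 g\/cm^3 in eV^4\n\nmu, M, lam = 1e-4, 1e12, 0.1   # eV, eV (10^3 GeV), dimensionless\n\ndef phi_min(rho):\n    m2 = mu**2 - rho \/ M**2\n    return np.sqrt(m2 \/ lam) if m2 > 0 else 0.0\n\ndef mass(rho):\n    # curvature of V_eff at the relevant minimum\n    m2 = rho \/ M**2 - mu**2\n    return np.sqrt(m2) if m2 > 0 else np.sqrt(-2.0 * m2)\n\ndef charge(R, rho_in, rho_out):\n    m_in, m_out = mass(rho_in), mass(rho_out)\n    num = m_in * R - np.tanh(m_in * R)\n    den = m_in * R + R * m_out * np.tanh(m_in * R)\n    return (phi_min(rho_out) - phi_min(rho_in)) * R * num \/ den\n\nR = 0.95 * CM                      # aluminium sphere, radius 9.5 mm\nrho_in = 2.7 * G_PER_CM3           # aluminium\nrho_out = 6.6e-17 * G_PER_CM3      # ~6e-10 Torr of H2, room temperature\nprint(mass(rho_in) * R)            # m_in R >> 1: the sphere is screened\nprint(charge(R, rho_in, rho_out))  # dimensionless symmetron charge\n\\end{verbatim}\nFor these numbers $m_{\\rm in}R\\sim10^2$, so the sphere is deep in the screened regime; repeating the evaluation for a caesium nucleus shows how its much smaller radius competes with the much higher nuclear density, which is precisely the trade-off discussed below. 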
\nIn this region of the parameter space ($\\mu\\sim10^{-4}\\mbox{ eV}$ and $10^{-4}\\mbox{ GeV}<M<10^{4}\\mbox{ GeV}$), the Compton wavelength of the field in the symmetry-restored phase inside the wall has a maximum value of $1\/m_{\\rm wall} \\sim 1\\mbox{ mm}$. Therefore we should expect the symmetry to be restored in the wall in the region of parameter space we consider, and so the interior is effectively decoupled from the behaviour of the symmetron in the exterior. \n\nFor a compact object to be screened from the symmetron force we need both $\\rho_{\\rm in}\/M^2 >\\mu^2$ and $m_{\\rm in}R\\gg1$. The first condition is actually {\\it easier } to satisfy for an atomic nucleus than for a macroscopic test mass, because the nuclear density is much higher than the density of, for example, silicon. The second condition is harder to satisfy for atoms than for macroscopic masses because of the small size of atomic nuclei. It is therefore not always the case that atoms make better probes of the symmetron field than macroscopic objects do; however, they will be sensitive in some region of the parameter space, which we will now determine. \n\nWe apply the results of reference \\cite{Hamilton:2015zga}, which measures the acceleration of cold caesium-133 atoms. The atoms were held $8.8\\mbox{ mm}$ away from an aluminium sphere of radius $9.5\\mbox{ mm}$. The experiment was performed in a vacuum chamber of radius $5 \\mbox{ cm}$ and pressure $6 \\times 10^{-10}\\mbox{ Torr}$. No anomalous acceleration of the atoms is measured, restricting the acceleration due to the symmetron field to satisfy $a< 6.8 \\times 10^{-6}\\mbox{ m\/s}^2$. The constraints that this places on the symmetron parameter space can be seen in Figure \\ref{fig:constraints}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.8]{figure.pdf}\n\\caption{\\label{fig:constraints} Constraints on the symmetron parameters from the atom interferometry experiment of \\cite{Hamilton:2015zga}. The excluded regions are shaded blue. The different regions are for different values of $\\mu$; from left to right $\\mu= 10^{-4}\\mbox{ eV}$, $\\mu= 10^{-4.5}\\mbox{ eV}$, $\\mu= 10^{-5}\\mbox{ eV}$, $\\mu= 10^{-5.5}\\mbox{ eV}$. The black dashed line on the left shows constraints from observations of exo-planets (points to the left of the line are excluded), with $\\mu \\rightarrow 0$ the most constraining choice \\cite{Santos:2016rdg}. The black dotted line on the right shows the constraints from torsion pendulum experiments (points to the right of the line are excluded) with $\\mu=10^{-4} \\mbox{ eV}$ chosen for reference \\cite{Upadhye:2012rc}.}\n\\end{figure}\n\nConstraints are restricted to a narrow range of the mass parameter $\\mu$, around $10^{-5}$--$10^{-4}\\mbox{ eV}$. For smaller $\\mu$ the Compton wavelength of the symmetron in the vacuum is larger than the size of the vacuum chamber and so the field cannot vary its value over the scale of the experiment. For larger $\\mu$ the Compton wavelength of the symmetron in vacuum is so small that the Yukawa term exponentially suppresses the force. 
The peak in the $\\mu= 10^{-4}\\mbox{ eV}$ plot occurs because there is a value of $M$ for which the Compton wavelength of the field in vacuum exactly matches the distance between the aluminium sphere and the atoms.\n\nWe see that, whilst the range of accessible $\\mu$ values is relatively narrow, where atom interferometry experiments do give constraints they explore a region of parameter space that is inaccessible to other experiments and observations.\n\n\\subsection{Other Experiments with Unscreened Test Particles}\nAtoms are not the only objects that can be unscreened in a laboratory vacuum. Experiments that measure forces on neutrons \\cite{Brax:2011hb,Ivanov:2012cb,Brax:2013cfa,Jenke:2014yel,Lemmel:2015kwa,Li:2016tux} and on silicon microspheres \\cite{Rider:2016xaq} have also been shown to be sensitive to chameleon forces, precisely because the test particles are sufficiently small that they are not screened. However, these have not yet reached the sensitivity of the atom interferometry experiments and so do not provide better constraints on symmetron models than those presented in Figure \\ref{fig:constraints}. We note, however, that silicon microspheres have a lower average density than neutrons and atomic nuclei, and so, if the sensitivity can be improved, they may provide the best prospect for searching for symmetron forces. \n\n\\section{Domain Walls}\n\\label{sec:domain}\nSymmetron fields open another possibility for laboratory searches that is not present for the chameleon model. Since the symmetron effective potential has two minima, as gas is pumped from the vacuum chamber the field could settle in either minimum with equal probability, and there is no reason for different regions of the chamber to all settle into the same one; if the Compton wavelength is comparable to or smaller than the size of the vacuum chamber then it is possible for a domain wall or a network of domain walls to form. \n\n\n If we approximate the wall as being straight and static, then its field profile is\n\\begin{equation}\n\\phi(z)=\\phi_0\\tanh\\left(\\frac{\\tilde{\\mu}z}{\\sqrt{2}}\\right)\\;,\n\\label{eq:soliton}\n\\end{equation}\nwhere \n\\begin{equation} \\label{eq_muTilde}\n\\tilde{\\mu}^2:=\\mu^2-\\frac{\\rho}{M^2}\\;,\n\\end{equation}\nand $\\phi_0 = \\frac{\\tilde{\\mu}}{\\sqrt{\\lambda}}$, meaning that the thickness of the wall is $\\sim 1\/\\tilde{\\mu}$, and its tension is $4\\tilde{\\mu}^3\/(3\\lambda)$ \\cite{Vilenkin:2000jqa}. \n\nTaking into account that the atom may be screened, the acceleration experienced by an atom moving in the neighbourhood of a domain wall is \n\\begin{equation}\n\\vec{a} = 4 \\pi \\lambda_{\\rm atom} (1+m_{\\rm out}R_{\\rm atom}) \\nabla \\phi\\;,\n\\end{equation}\nwhere we should remember that $\\lambda_{\\rm atom}$ depends on the background field value $\\phi_{\\rm out}$, which in this case should be replaced by the domain wall field profile $\\phi$ evaluated at the position of the atom. \n\n\n\n\nThe maximum acceleration that an atom may experience in such a situation is roughly proportional to $|\\phi\\nabla\\phi|$. We can find an approximation for this by assuming that the domain wall thickness is small compared to the radius of the chamber, such that we can use the planar solution for a domain wall given above in equation (\\ref{eq:soliton}). 
We find that this maximum acceleration is $a_\\phi\\approx\\frac{|\\phi\\nabla\\phi|}{M^2}\\approx 0.27\\frac{\\tilde{\\mu}^3}{\\lambda M^2}$, which is always much less than $10^{-10}\\;g$ within the parameter space we have examined. This means that any domain walls that form will have a negligible effect on searches for symmetron fifth forces between atoms and source masses in the vacuum chamber, and that the sensitivity must be improved if we are to detect the forces due to the domain walls directly. \n\n\n\n\nOf course, depending on the correlation length, more than one domain wall can form, creating a network. The symmetry is restored in the walls of the vacuum chamber, and in the core of the domain wall, so from the point of view of the field the chamber walls can be viewed as a fixed sphere of $\\phi=0$ surrounding the domain wall network. We know from cosmological studies \\cite{Vilenkin:2000jqa} that networks of domain walls tend to evolve towards the configuration with the minimum wall length. Therefore we can assume that the network inside the vacuum chamber is not stable: the domain walls will straighten and merge with one another and with the walls of the vacuum chamber. It is therefore reasonable to expect that the end point of this evolution will be the vacuum chamber entirely filled with one domain, with no domain walls present.\n An example of such an evolution is shown in Figure \\ref{fig:network}. This figure was constructed using a two-dimensional numerical simulation with unphysical values for the symmetron parameters, and so should be considered only as an example of what kind of evolution is possible. We leave a full numerical simulation for future work. \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.65]{symmetronConfigs.pdf}\n\\caption{\\label{fig:network} A two-dimensional domain wall network inside a circular cavity. The colours indicate the value of the scalar field. Blue and grey regions represent the positive and negative symmetry broken vacua respectively, and in the black regions the symmetry is restored at $\\phi=0$. The system evolves in time from left to right. This simulation was performed with unphysical values for the symmetron parameters due to numerical limitations. }\n\\end{figure}\n\n\n It remains to be determined how long such an evolution takes. We can use some results from cosmological studies of domain walls as a guide to what to expect. In a true vacuum it is expected that the domain walls move with relativistic velocities. If this were the case in our vacuum chamber the walls would exist for an undetectably short period of time. However, this motion can be slowed down by friction if the walls interact with a surrounding particle bath. The force per unit area on the wall can be approximated by \\cite{Kibble:1976sj}\n\\begin{equation}\nF_{\\rm friction} \\sim N n T v\\;,\n\\end{equation}\nwhere $N$ is the number of light particles interacting with the domain wall, $n$ is the number density of these particles, $T$ is their temperature and $v$ is the velocity of the wall relative to the background. Taking the number density corresponding to the hydrogen gas pressure in the atom interferometry experiment described above, which is performed at room temperature, we find\n\\begin{equation}\nF_{\\rm friction} \\sim v \\times 1.3 \\times 10^{-45} \\mbox{ GeV}^4\\;.\n\\end{equation}\nThis frictional force is comparable to the force per unit area exerted by the domain wall tension if $F_{\\rm friction} \\sim 4 \\mu^3\/(3 \\lambda R)$, where $R$ is the mean curvature radius. 
From this we can deduce that the walls will move non-relativistically if\n\\begin{equation}\n0.02 \\left(\\frac{\\mbox{cm}}{R}\\right) \\ll \\lambda\\;.\n\\end{equation}\nThis suggests that, at least for some values of $\\lambda$, the domain walls could be long-lived inside the vacuum chamber. As mentioned above, a full numerical study of the evolution of the domain walls in a vacuum chamber remains a topic for future work.\n\n \n\nSearches for the forces due to domain walls are not the most sensitive way to search for symmetron fields, although they do have the technical advantage that they do not rely on the presence of a source mass that can be moved inside the vacuum chamber. However, domain walls only occur in theories of screening, such as the symmetron, which undergo a symmetry breaking transition. If a fifth force with screening is detected in an upcoming experiment, the presence or absence of a network of domain walls in the experiment could be used to discriminate between models. \n\n\n\n\n\n\n\n\n\\section{Conclusions}\\label{sec:conclusions}\n\nWe have shown that symmetron fifth forces, inspired by theories of dark energy, can be constrained by terrestrial experiments using cold atoms. The constraints we have found, shown in Figure \\ref{fig:constraints}, are particularly interesting as they fill a previously empty region of parameter space between the constraints coming from E\\\"{o}t-Wash experiments and those coming from observations of exo-planets. \n\nWe have also discussed the possibility that symmetron domain walls may form in the vacuum chamber, leading to the atoms experiencing a fifth force without the need to place a source mass inside the vacuum chamber. Whilst we find that the accelerations experienced by the atoms are smaller than the sensitivity of current experiments, they are not so small that it would be impossible to detect them in the future. Additionally, as the domain walls only form for symmetron models, if a screened fifth force is ever detected in a terrestrial experiment the presence or absence of these domain walls would provide a way to discriminate between different models of screening. \n\n\\subsection*{}\nIn the final stages of writing this article it has come to our attention that Brax and Davis have derived the same constraints on the symmetron model using the tomographic model of screening \\cite{A&P}.\n\n\\section*{Acknowledgements}\nWe would like to thank Ed Copeland for useful discussions during the completion of this work. CB is supported by a Royal Society University Research Fellowship. JS is supported by the Royal Society.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nTo understand the molecular details of how any living system functions, one inevitably has to know the three-dimensional structure of a large number of proteins. Despite recent progress in cryo-electron microscopy~\\citep{fernandez2016unravelling}, experimental approaches to this problem are unlikely to scale up, and there is a dire need for innovation in computational methods.\n\nState-of-the-art methods that attempt to solve the protein folding problem~\\citep{dill2012} usually rely on complex workflows that consist of multiple loosely interconnected operations~\\citep{yang2015tasser, raman2009structure}. For the majority of these methods, a first step consists in generating a ``rough'' protein structure, using either homology modelling or some fragment-based assembly approach. 
The second step consists in refining this structure using one of many optimization techniques. The parameters of these two steps are usually tuned separately.\n\nHowever, new computational approaches to protein folding have recently emerged, which make use of end-to-end learning. The work of AlQuraishi~\\citep{Mohammed:2018} attempts to learn the positions of protein backbone atoms using a BiLSTM that predicts distributions of dihedral angles, combined with a differentiable transformation that converts these internal coordinates into atomic Cartesian coordinates. Other work under review~\\citep{anonymous2019learning} tries to learn the force field parameters used by a simulation of the protein folding process, using the same differentiable conversion between internal coordinates and atomic coordinates.\n\nWe anticipate that end-to-end models will become extremely important for structural biology, due to the growing amounts of data for both protein sequences and protein structures, and the necessity of relating the two. Since the transformation between internal coordinates and atomic positions is the only known unambiguous differentiable mapping between protein sequence and protein structure, a fast implementation of this transformation is the key building block for any end-to-end ``sequence-to-structure'' model.\n\nIn this work we present \\textsc{TorchProteinLibrary}, a library that implements the conversion between internal protein coordinates and atomic positions for ``full-atom'' and ``backbone-only'' models of protein structure. It also contains an implementation of the least root-mean-square deviation (LRMSD), a measure of distance in the space of protein structures that respects translational and rotational invariance.\n\n\\section{Full-atom model}\n\nThe ``full-atom'' representation of protein structure specifies the positions of all non-hydrogen atoms. (Following the usual convention, hydrogen atoms are omitted from the representation because their positions are easy to infer from the rest of the molecular structure.) The layer computes the Cartesian coordinates by building a graph of transforms that act on the standard coordinates of rigid groups of atoms. In that standard reference frame, the first atom of the rigid group is at the origin and any additional atom is at a position consistent with the stereochemistry of the group. The smallest rigid group consists of a single atom at the origin. Each amino acid conformation is described by a list of up to 7 dihedral angles. For example, Figure~\\ref{Fig:aminoacid} shows a schematic representation of the amino acid threonine, which has 5 rigid groups in total and is parameterized by 4 transforms.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\columnwidth]{Fig\/Amino-acid-real.png}\n \\caption{Example of the parameterization of threonine in terms of dihedral angles $\\phi$, $\\psi$, $\\omega$, and $\\chi$. In the current implementation $\\omega$ is fixed to $\\pi$, which makes $R_4$ constant. Blue boxes are aligned to local coordinate systems, in which the initial coordinates of the rigid groups are defined. 
$R'$ is an out-of-plane transform that does not contain differentiable parameters.}\n \\label{Fig:aminoacid}\n\\end{figure}\n\nEach transform $R_i$ is parameterized by a dihedral angle $\\alpha_i$ and has the form \n\\begin{equation*}\nR_i(\\alpha_i) = R(\\alpha_i, \\theta_i, d_i) = R_y (\\theta_i) T_x(d_i) R_x(\\alpha_i)\n\\end{equation*}\nwhere $R_x(\\alpha_i)$ and $R_y(\\theta_i)$ are the $4\\times 4$ rotation matrices about axes $x$ and $y$, respectively, and $T_x(d_i)$ is the $4\\times 4$ translation matrix along axis $x$. The parameters $\\theta_i$ and $d_i$ are fixed and we do not compute derivatives with respect to them; they depend only on the type of the amino acid and its stereochemical properties. For instance, the first transform of the threonine parameterization of Figure~\\ref{Fig:aminoacid} can be written as\n\\begin{equation*}\nR_1(\\phi) = R(\\phi, \\theta_1, d_1)\n\\end{equation*} \nwhere $\\phi$ is the first dihedral angle (variable), $\\theta_1$ is the C-N-CA angle (fixed), and $d_1$ is the N-CA bond length (also fixed).\n\n\nWe compute the position of every atom in a rigid group by transforming its position in the standard reference frame with the appropriate matrix. In short, to get the cumulative transform $M_{i}$ of node $i$, we take the cumulative transform of its parent, $M_{\\mathrm{parent}(i)}$, and multiply it by the transform $R_{i}$ of the current node. In the threonine example, the cumulative transforms are as follows:\n\\begin{eqnarray*}\nM_1 &=& M_0 R_1(\\phi)\\\\\nM_2 &=& M_0 R_1(\\phi) R' R_2(\\chi)\\\\\nM_3 &=& M_0 R_1(\\phi) R_3(\\psi)\\\\\nM_4 &=& M_0 R_1(\\phi) R_3(\\psi) R_4(\\omega)\n\\end{eqnarray*}\n$M_0$ represents the cumulative transform leading to the threonine residue considered, due to all preceding residues in the sequence. For L-amino acids, $R'$ corresponds to a counterclockwise rotation of $122.686^\\circ$ about the $x$ axis, needed to properly orient the side chain.\nThe atomic coordinates ``$\\mathbf{r}$'' are obtained by transforming the standard coordinates ``$\\mathbf{r}^\\circ$'' of each rigid group $i$ by its corresponding cumulative transform $M_i$. For instance, the atomic positions of the threonine residue can be written as follows:\n\\begin{eqnarray*}\n\\mathbf{r}_\\mathrm{CA} &=& M_1 \\mathbf{r}_\\mathrm{CA}^\\circ = M_1 \\mathbf{0}\\\\\n\\mathbf{r}_\\mathrm{CB} &=& M_2 \\mathbf{r}_\\mathrm{CB}^\\circ = M_2 \\mathbf{0}\\\\\n\\mathbf{r}_\\mathrm{OG1} &=& M_2 \\mathbf{r}_\\mathrm{OG1}^\\circ\\\\\n\\mathbf{r}_\\mathrm{CG2} &=& M_2 \\mathbf{r}_\\mathrm{CG2}^\\circ\\\\\n\\mathbf{r}_\\mathrm{C} &=& M_3 \\mathbf{r}_\\mathrm{C}^\\circ = M_3 \\mathbf{0}\\\\\n\\mathbf{r}_\\mathrm{O} &=& M_3 \\mathbf{r}_\\mathrm{O}^\\circ\\\\\n\\mathbf{r}_\\mathrm{N} &=& M_4 \\mathbf{r}_\\mathrm{N}^\\circ = M_4 \\mathbf{0}\n\\end{eqnarray*}\nwhere $\\mathbf{r}_t^\\circ = (x_t^\\circ, y_t^\\circ, z_t^\\circ, 1)^\\mathrm{T}$ is the 4-component vector representing the position of an atom of type $t$ in the standard reference frame and $\\mathbf{0} = (0,0,0,1)^\\mathrm{T}$ is the 4-component vector representing the origin.\n\n\\begin{figure}\n \\centering\n \\includegraphics{Fig\/example_graph.png}\n \\caption{Example of the threonine molecular graph. Each node in the graph contains the cumulative transform $M_i$, computed during the forward pass. 
Rigid-group coordinates associated with each node are denoted $\\mathbf{r}_i$.}\n \\label{Fig:molgraph_simple}\n\\end{figure}\n\nWe describe the algorithm for computing the gradients of atomic positions with respect to any dihedral angle $\\alpha_i$ by considering the transformation graph of Figure~\\ref{Fig:molgraph_simple}, corresponding to threonine. The graph has four nodes that contain the cumulative transformation matrices $M_1$ to $M_4$. To simplify the notation, we assume that each rigid group contains a single atom at position $\\mathbf{r}_i$. The corresponding atoms in threonine would be $\\mathbf{r}_1 = \\mathbf{r}_\\mathrm{CA}$, $\\mathbf{r}_2 = \\mathbf{r}_\\mathrm{CB}$, $\\mathbf{r}_3 = \\mathbf{r}_\\mathrm{C}$, and $\\mathbf{r}_4 = \\mathbf{r}_\\mathrm{N}$. Supposing we have a function $L$ that depends only on the coordinates of these four atoms, we can write its derivative with respect to the dihedral angle $\\phi$ as:\n\\begin{equation*}\n\\frac{\\partial L}{\\partial \\phi} = \n\\sum_{k=1}^4 \\frac{\\partial L}{\\partial \\mathbf{r}_k}\\cdot \\frac{\\partial \\mathbf{r}_k}{\\partial \\phi}\n\\end{equation*}\nwhere ``$\\cdot$'' denotes the scalar product of two vectors.\nThe derivative of the first position with respect to $\\phi$ can be written as:\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_1}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} \\mathbf{r}_1^\\circ\n\\end{equation*}\nwhere $\\mathbf{r}_1^\\circ$ represents the position of the atom in the standard reference frame. The other three derivatives can be written as:\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_2}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} R' R_2 \\mathbf{r}_2^\\circ\n,\\qquad\n\\frac{\\partial \\mathbf{r}_3}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} R_3 \\mathbf{r}_3^\\circ\n\\qquad\\mathrm{and}\\qquad\n\\frac{\\partial \\mathbf{r}_4}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} R_3 R_4 \\mathbf{r}_4^\\circ\n\\end{equation*}\nWe can rewrite those expressions using $\\mathbf{r}_1$, $\\mathbf{r}_2$, $\\mathbf{r}_3$ and $\\mathbf{r}_4$, the atomic coordinates calculated during the forward pass:\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_1}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} M_1 \\mathbf{r}_1^\\circ =\nM_0 \\frac{\\partial R_1}{\\partial \\phi} M^{-1}_1 \\mathbf{r}_1\n\\end{equation*}\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_2}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} M_1 R' R_2 \\mathbf{r}_2^\\circ = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} M_2 \\mathbf{r}_2^\\circ = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} \\mathbf{r}_2\n\\end{equation*}\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_3}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} M_1 R_3 \\mathbf{r}_3^\\circ = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} M_3 \\mathbf{r}_3^\\circ = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M^{-1}_1 \\mathbf{r}_3\n\\end{equation*}\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_4}{\\partial \\phi} = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} M_1 R_3 R_4 \\mathbf{r}_4^\\circ = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M_1^{-1} M_4 \\mathbf{r}_4^\\circ = \nM_0 \\frac{\\partial R_1}{\\partial \\phi} M^{-1}_1 \\mathbf{r}_4\n\\end{equation*}\nThe expression for the derivative becomes:\n\\begin{equation*}\n\\frac{\\partial L}{\\partial \\phi} = \n\\sum_{k=1}^4 
\\frac{\\partial L}{\\partial \\mathbf{r}_k}\\cdot M_0 \\frac{\\partial R_1}{\\partial \\phi} M^{-1}_1 \\mathbf{r}_k\n\\end{equation*}\nIn this formula the index $k$ runs over all nodes downstream of $R_1$, that is, over every atom whose position depends on $\\phi$, and the matrix $M_0 \\frac{\\partial R_1}{\\partial \\phi} M^{-1}_1$ is independent of $k$ and can be computed once for all terms in the sum. This expression can be generalized to any graph without loops. For $L$ a function of all atomic positions, the derivative with respect to the dihedral angle $\\alpha_i$ parameterizing the transform $R_i$ out of node $i$ is:\n\\begin{equation}\n\\label{Eq:FAMBackward}\n\\frac{\\partial L}{\\partial \\alpha_i} = \n\\sum_{k \\in \\mathrm{subtree}(i)} \n\\frac{\\partial L}{\\partial \\mathbf{r}_k}\\cdot \nM_{i} \\frac{\\partial R_i}{\\partial \\alpha_i} M^{-1}_{i+1} \n\\mathbf{r}_k\n\\end{equation}\nwhere $\\mathrm{subtree}(i)$ contains all nodes downstream of $R_i$ and $M_{i+1} = M_i R_i$ is the cumulative transform of the node reached through $R_i$. This sum is computed during a backward depth-first propagation through the graph. Matrices $F_i = M_{i} \\frac{\\partial R_i}{\\partial \\alpha_i} M^{-1}_{i+1}$, however, are computed during the forward pass. The library presented in this work implements the forward and backward passes of the full-atom model on CPU.\n\n\\section{Backbone model}\n\nAnother widely used protein representation is the ``backbone'' model, shown in Figure~\\ref{Fig:reducedmodel}, for which we compute only three atomic positions per residue (the N, CA, and C atoms). The backbone O atoms are omitted, but their positions can be easily inferred from the positions of the other three atoms. In this reduced representation, the amino acid side chains are ignored.\n\nThe backbone model, unlike the full-atom model, can be efficiently implemented on GPU. The key to an efficient parallel implementation is that $\\partial \\mathbf{r}_i\/\\partial \\theta_j$, the derivatives of the coordinates with respect to the parameters of the model, can be computed independently of one another. Here we describe the detailed computation of the coordinates and write down the derivatives in terms of quantities saved during the forward pass.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.6\\columnwidth]{Fig\/reducedmodel.png}\n\\caption{Illustration of the parameterization of amino acid number $j$ in the backbone model.}\n\\label{Fig:reducedmodel}\n\\end{figure}\n\nThe position of the $i$-th atom in the chain is:\n\\begin{equation*}\n\\mathbf{r}_i = R_0 R_1 \\cdots R_i \\mathbf{0}\n\\end{equation*}\nwhere $\\mathbf{0} = (0,0,0,1)^\\mathrm{T}$ and the $R_i$ are transformation matrices, each parameterized by a dihedral angle $\\alpha$:\n\\begin{equation*}\nR(\\alpha, \\theta, d) = \\begin{bmatrix}\n\\cos(\\theta) & \\sin(\\alpha)\\sin(\\theta) & \\cos(\\alpha)\\sin(\\theta) & d\\cos(\\theta) \\\\\n0 & \\cos(\\alpha) & -\\sin(\\alpha) & 0 \\\\\n-\\sin(\\theta) & \\sin(\\alpha)\\cos(\\theta) & \\cos(\\alpha)\\cos(\\theta) & -d\\sin(\\theta) \\\\\n0 & 0 & 0 & 1\n\\end{bmatrix}\n\\end{equation*}\n
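For concreteness, this matrix is easy to reproduce outside the library. The following minimal NumPy sketch of $R(\\alpha, \\theta, d)$ is our own illustration of the formula above, not \\textsc{TorchProteinLibrary}'s API (which implements the transform natively on GPU); the function name is ours, and we reuse it in a later sketch:\n\\begin{verbatim}\nimport numpy as np\n\ndef transform(alpha, theta, d):\n    # R(alpha, theta, d) = R_y(theta) T_x(d) R_x(alpha), as a 4x4 matrix\n    ca, sa = np.cos(alpha), np.sin(alpha)\n    ct, st = np.cos(theta), np.sin(theta)\n    return np.array([[ ct, sa*st, ca*st,  d*ct],\n                     [0.0,    ca,   -sa,   0.0],\n                     [-st, sa*ct, ca*ct, -d*st],\n                     [0.0,   0.0,   0.0,   1.0]])\n\\end{verbatim}\nChaining such matrices, $M_i = M_{i-1} R_i$, and applying the result to $\\mathbf{0}$ reproduces the atomic positions defined above.\n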
The sequence of transformations is defined for an amino acid sequence indexed by $j\\in [0,L)$, where $L$ is the length of the sequence. A transform that refers to residue $j-1$ with $j=0$ is taken to be the identity matrix; otherwise (bond lengths in \\AA{}, angles in radians):\n\\begin{itemize}\n \\item C-N peptide bond of residue $j-1$:\\\\\n\t\t Atomic index: $i = 3j$\\\\\n\t\t Transformation: $ R_i = R(\\omega_j, \\pi - 2.1186, 1.330)$\n\t\\item N-CA bond of residue $j$:\\\\\n\t\t Atomic index: $i = 3j + 1$\\\\\n\t\t Transformation: $ R_i = R(\\phi_{j}, \\pi - 1.9391, 1.460)$\n\t\\item CA-C bond of residue $j$:\\\\\n\t\t Atomic index: $i = 3j + 2$\\\\\n\t\t Transformation: $ R_i = R(\\psi_{j}, \\pi - 2.0610, 1.525)$\n\\end{itemize}\n\nIn the current implementation the angle $\\omega_j$ is fixed to $\\pi$, corresponding to the \\emph{trans} conformation of the peptide bond. During the forward pass, we save the cumulative transformation matrices for each atom:\n\\begin{equation*}\nM_i = R_0 R_1 \\cdots R_i\n\\end{equation*}\nNotice that atom $3j$ is always N, atom $3j + 1$ is always CA, and atom $3j+2$ is always C. Thus\ntransformation matrices $R_{3j+1}$ and $R_{3j+2}$ depend on the angles $\\phi_j$ and $\\psi_j$, respectively. During the backward pass, we first compute the gradient of $\\mathbf{r}_i$ with respect to each ``$\\phi$'' and ``$\\psi$'' angle:\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_i}{\\partial \\phi_j} = R_0 R_1 \\cdots \\frac{\\partial R_{3j+1}}{\\partial \\phi_j} \\cdots R_i \\mathbf{0}\n\\qquad \\mathrm{and} \\qquad\n\\frac{\\partial \\mathbf{r}_i}{\\partial \\psi_j} = R_0 R_1 \\cdots \\frac{\\partial R_{3j+2}}{\\partial \\psi_j} \\cdots R_i \\mathbf{0}\n\\end{equation*}\nWe can rewrite these expressions using the matrices $M$, saved during the forward pass:\n\\begin{equation*}\n\\frac{\\partial \\mathbf{r}_i}{\\partial \\phi_j} = M_{3j} \\frac{\\partial R_{3j+1}}{\\partial \\phi_j} M_{3j+1}^{-1} M_{i} \\mathbf{0}\n\\qquad \\mathrm{and} \\qquad\n\\frac{\\partial \\mathbf{r}_i}{\\partial \\psi_j} = M_{3j+1} \\frac{\\partial R_{3j+2}}{\\partial \\psi_j} M_{3j+2}^{-1} M_{i} \\mathbf{0}\n\\end{equation*}\n\nThe inverses of the matrices $M_k$ have simple forms and can be computed on the fly during the backward pass. This allows all derivatives to be computed simultaneously on GPU. To compute the derivatives, with respect to the input angles, of a function $L$ of the atomic coordinates, we have to calculate the following sums:\n\\begin{equation}\n\\label{Eq:BackboneGradOutput}\n\\sum_i \\frac{\\partial L}{\\partial \\mathbf{r}_i}\\cdot \\frac{\\partial \\mathbf{r}_i}{\\partial \\phi_j}\n\\qquad \\mathrm{and} \\qquad\n\\sum_i \\frac{\\partial L}{\\partial \\mathbf{r}_i}\\cdot \\frac{\\partial \\mathbf{r}_i}{\\partial \\psi_j}\n\\end{equation}\nwhich can be efficiently computed on GPU.\n\n\\section{Loss function}\n\nThe training of any protein structure prediction model requires some measure of how close two structures of the same sequence are. While there are multiple variations of such a measure, the most common one is the least root-mean-square deviation (LRMSD) of atomic Cartesian coordinates:\n\\begin{equation*}\n\\mathrm{LRMSD} = \\min_M \\sqrt{ \\sum_i^{N_\\mathrm{atoms}}{\\frac{|\\mathbf{x}_i - M\\mathbf{y}_i|^2}{N_\\mathrm{atoms}} }}\n\\end{equation*}\nHere, $\\mathbf{x}_i$ and $\\mathbf{y}_i$ are the atomic positions of the ``target'' and ``input'' structure, respectively, and the root-mean-square deviation is minimized over all possible rigid transformation matrices $M$. 
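The library computes this minimum in closed form, following the quaternion method outlined below. For reference, the entire forward computation fits in a few lines of NumPy; this is our own sketch of the published algorithm, not the library's GPU implementation, and the function name is ours:\n\\begin{verbatim}\nimport numpy as np\n\ndef lrmsd(x, y):\n    # x, y: (N, 3) arrays of target and input coordinates\n    x = x - x.mean(axis=0)           # center both structures\n    y = y - y.mean(axis=0)\n    R = x.T @ y                      # 3x3 correlation matrix\n    T = np.array(\n      [[R[0,0]+R[1,1]+R[2,2], R[1,2]-R[2,1], R[2,0]-R[0,2], R[0,1]-R[1,0]],\n       [R[1,2]-R[2,1], R[0,0]-R[1,1]-R[2,2], R[0,1]+R[1,0], R[0,2]+R[2,0]],\n       [R[2,0]-R[0,2], R[0,1]+R[1,0], -R[0,0]+R[1,1]-R[2,2], R[1,2]+R[2,1]],\n       [R[0,1]-R[1,0], R[0,2]+R[2,0], R[1,2]+R[2,1], -R[0,0]-R[1,1]+R[2,2]]])\n    lam = np.linalg.eigvalsh(T)[-1]  # maximum eigenvalue of T\n    msd = (np.sum(x**2) + np.sum(y**2) - 2.0*lam) \/ x.shape[0]\n    return np.sqrt(max(msd, 0.0))    # clamp tiny negative rounding errors\n\\end{verbatim}\nThe eigenvector associated with this maximum eigenvalue yields the optimal rotation needed for the gradient, as described next.\n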
This measure is invariant with respect to rotations and translations of the two structures being compared.\n\n\\textsc{TorchProteinLibrary} contains an implementation of the algorithm of Coutsias, Seok and Dill~\\citep{coutsias2004using}. This algorithm computes the LRMSD and its derivative with respect to the coordinates of one of the structures without explicit minimization. We briefly outline the key steps of the algorithm; a detailed derivation can be found in Ref.~\\citep{coutsias2004using}.\n\nWe first move both target and input structures (positions $\\mathbf{x}_i$ and $\\mathbf{y}_i$) so that their barycenters are at the origin, then we compute the correlation matrix $R$:\n\\begin{equation*}\nR = \\sum_i^{N_\\mathrm{atoms}} \\mathbf{x}_i \\mathbf{y}^\\mathrm{T}_i\n\\end{equation*}\nUsing this $3\\times 3$ matrix we compute the following $4\\times 4$ matrix $T$:\n\\begin{equation*}\nT = \\begin{bmatrix}\nR_{11} + R_{22} + R_{33} & R_{23} - R_{32} & R_{31} - R_{13} & R_{12} - R_{21} \\\\\nR_{23} - R_{32} & R_{11} - R_{22} - R_{33} & R_{12} + R_{21} & R_{13} + R_{31} \\\\\nR_{31} - R_{13} & R_{12} + R_{21} & -R_{11} + R_{22} - R_{33} & R_{23} + R_{32} \\\\\nR_{12} - R_{21} & R_{13} + R_{31} & R_{23} + R_{32} & -R_{11} - R_{22} + R_{33} \\\\\n\\end{bmatrix}\n\\end{equation*}\nWe then compute $\\lambda$, the maximum eigenvalue of matrix $T$, and its associated eigenvector $\\mathbf{q}$. This eigenvector corresponds to the quaternion that gives the optimal rotation of one structure with respect to the other. The rotation matrix can be computed as follows:\n\\begin{equation*}\nU = \\begin{bmatrix}\nq^2_0 + q^2_1 - q^2_2 - q^2_3 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\\\\n2(q_1 q_2 + q_0 q_3) & q^2_0 - q^2_1 + q^2_2 - q^2_3 & 2(q_2 q_3 - q_0 q_1) \\\\\n2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q^2_0 - q^2_1 - q^2_2 + q^2_3\n\\end{bmatrix}\n\\end{equation*}\nThe LRMSD is computed using the formula:\n\\begin{equation*}\n\\mathrm{LRMSD} = \\sqrt{ \\frac{ \\sum_i^{N_\\mathrm{atoms}} \\left( |\\mathbf{x}_i|^2 + |\\mathbf{y}_i|^2 \\right) - 2\\lambda }{N_\\mathrm{atoms}}}\n\\end{equation*}\nThe derivative of the LRMSD with respect to the input coordinates is proportional to\n\\begin{equation*}\n\\frac{\\partial \\mathrm{LRMSD}}{\\partial \\mathbf{x}_i} \\propto \\mathbf{x}_i - U^\\mathrm{T} \\mathbf{y}_i\n\\end{equation*}\nwhere the omitted prefactor, $1\/(N_\\mathrm{atoms}\\,\\mathrm{LRMSD})$, follows from the chain rule and does not change the direction of the gradient.\nThis expression, combined with Eqs.~\\ref{Eq:FAMBackward} or \\ref{Eq:BackboneGradOutput}, allows any sequence-based model predicting internal coordinates of proteins to be directly trained on known protein structures, using LRMSD as a loss function.\n\n\\section{Benchmarks}\n\nHere we give estimates of the run times of the modules described above, together with a simple baseline for comparison. We perform the measurements on a machine with a Titan X (Maxwell) GPU and an Intel Core i7-5930K CPU, using the PyTorch 0.4.1 build for CUDA 9.2.\n\nFigure~\\ref{Fig:FAMTime} shows the scaling of the computation time of the forward and backward passes for the full-atom model. We see that the computational complexity of the backward pass is $O(L^2)$, where $L$ is the sequence length. The reason for this quadratic scaling is that we compute Eq.~\\ref{Eq:FAMBackward} using a depth-first graph traversal. In principle, this layer can be further optimized by unfolding the graph and computing the gradients simultaneously on GPU. 
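All timings reported in this section follow the standard pattern for benchmarking CUDA operations, in which the device is synchronized before the clock is read. A generic sketch of such a harness (our illustration, not the exact script used to produce the figures) is:\n\\begin{verbatim}\nimport time\nimport torch\n\ndef time_op(fn, *args, repeats=10):\n    torch.cuda.synchronize()         # finish pending kernels\n    start = time.perf_counter()\n    for _ in range(repeats):\n        fn(*args)\n    torch.cuda.synchronize()         # wait for the timed kernels\n    return (time.perf_counter() - start) \/ repeats\n\\end{verbatim}\n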
We plan to incorporate this graph-unfolding optimization in the next version of the library.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{Fig\/FullAtomModelTime.png}\n \\caption{Scaling of the computation time of the forward and backward passes for the full-atom model. The batch size was set to 32 and amino acid sequences were generated at random for each measurement. We performed 10 measurements per data point and plotted the 95\\% confidence interval.}\n \\label{Fig:FAMTime}\n\\end{figure}\n\nFigure \\ref{Fig:BackboneTime} shows the scaling of the computation time for the backbone protein model as a function of sequence length. While the computational complexity of the backward pass still scales as $O(L^2)$, the scaling coefficient is smaller than for the full-atom model. Here, the presence of quadratic scaling can be attributed to GPU hardware limitations, and we expect the scaling coefficient to decrease as the number of CUDA cores increases, due to increasingly efficient parallelization. Another contributing factor to this scaling behavior is the computation of the sums in Eq.~\\ref{Eq:BackboneGradOutput}, which currently does not use a parallel ``reduce'' algorithm.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{Fig\/ReducedModelTime.png}\n \\caption{Scaling of the computation time of the forward and backward passes for the backbone model. The batch size was set to 32. We performed 10 measurements per data point and plotted the 95\\% confidence interval.}\n \\label{Fig:BackboneTime}\n\\end{figure}\n\nFinally, the LRMSD layer computation time is shown in Figure~\\ref{Fig:RMSDTime}. The forward pass run time exhibits the expected linear behavior.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{Fig\/RMSDTime.png}\n \\caption{Scaling of the computation time of the forward and backward passes of LRMSD. The batch size was set to 32. We performed 10 measurements per data point and plotted the 95\\% confidence interval.}\n \\label{Fig:RMSDTime}\n\\end{figure}\n\nTo have a meaningful reference timescale for the computation times of the layers implemented in the library, we measured the forward and backward computation times of an LSTM model~\\citep{hochreiter1997long} on GPU, as implemented in PyTorch~\\citep{paszke2017automatic}. The batch size of the input is 32 and the number of input features is 128. The LSTM has 256 hidden units and one layer. Figure~\\ref{Fig:LSTMTime} shows the scaling of the forward and backward passes of this model as a function of input length. We see that the LSTM and backbone models have comparable computation times.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{Fig\/LSTMTime.png}\n \\caption{Scaling of the computation time of the forward and backward passes of a one-layer LSTM on GPU. The batch size was set to 32, the number of input features was 128, and the number of hidden units was set to 256. We performed 10 measurements per data point and plotted the 95\\% confidence interval.}\n \\label{Fig:LSTMTime}\n\\end{figure}\n\nAnother important concern regarding the backbone model is the numerical stability of the chained matrix multiplications performed in single-precision arithmetic during the forward pass. To estimate the error of the computation, we compared the output of a forward pass of the backbone protein model to that of the full-atom model, implemented using double-precision arithmetic on CPU. 
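The essence of this comparison can be mimicked in a few lines. The toy check below chains single- and double-precision transforms, reusing the hypothetical \\texttt{transform} sketch from the backbone section, and reports the drift of the final position; it is our illustration, not the library's actual test:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nM32, M64 = np.eye(4, dtype=np.float32), np.eye(4)\nfor _ in range(3 * 700):                       # ~700 residues, 3 bonds each\n    T = transform(rng.uniform(-np.pi, np.pi),  # random dihedral angle\n                  rng.uniform(1.0, 2.0), 1.5)  # arbitrary bond geometry\n    M32, M64 = M32 @ T.astype(np.float32), M64 @ T\nprint(np.abs(M32[:3, 3] - M64[:3, 3]).max())   # error in the final position\n\\end{verbatim}\nIn the actual test, described next, the single-precision backbone layer is compared against the double-precision full-atom layer rather than against a NumPy reimplementation.\n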
In this test we generated random input angles for the backbone of a protein, passed them through both the full-atom and backbone layers, and then computed the distances between equivalent backbone atoms in the two models. Figure~\\ref{Fig:ErrorEstimate} shows that the resulting error is negligible for protein sequences of any realistic length.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{Fig\/ErrorStat.png}\n \\caption{Scaling of the computation error of the forward pass of the backbone model. The atomic index corresponds to the atom numbering in the backbone protein model; for example, index 2100 corresponds to the CA in residue number 700. We performed 10 measurements per data point and plotted the 95\\% confidence interval.}\n \\label{Fig:ErrorEstimate}\n\\end{figure}\n\n\\section{Conclusion}\n\nIn this paper we have described two differentiable representations that allow the mapping of protein sequences to protein atomic coordinates. These representations enable the development of end-to-end differentiable models and of model-based reinforcement learning~\\citep{berkenkamp2017safe}. We believe these new approaches hold great promise for protein folding and protein modelling in general.\n\nIt should be mentioned that the scope of \\textsc{TorchProteinLibrary} is not limited to these layers. To gain access to the chemical properties of protein molecules, functions mapping atomic coordinates to a scalar value have to be defined. For that purpose, we have also implemented a representation of the atomic coordinates as a density map on a three-dimensional (3D) grid. This representation can then be transformed into a scalar using 3D convolutional networks. This particular 3D representation makes it possible to circumvent some common problems associated with pairwise potentials: currently available protein structure datasets are often insufficient for extracting meaningful pairwise potentials for all combinations of atom types.\n\nAnother research direction for which this library is expected to be useful is the prediction of protein-protein interactions (PPIs). We have implemented differentiable volume convolutions using cuFFT~\\citep{nvidia2010cufft}. These operations are at the core of most algorithms for exhaustive rigid protein-protein docking~\\citep{katchalski1992molecular}. By constructing a differentiable model that maps two sets of atomic coordinates to the distribution of relative rotations and translations of one set with respect to the other, one can in principle learn to dock proteins directly from experimental data.\n\n\\section*{Acknowledgments}\nWe thank Yoshua Bengio for hosting one of us (G.D.) at the Montreal Institute for Learning Algorithms (MILA) during some of the critical stages of the project, and for access to the MILA computational resources.\nThis work was supported by a grant from the Natural Sciences and Engineering Research Council of Canada to G.L.\\ (RGPIN 355789).\n\n\\bibliographystyle{authordate1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}