diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkrgu" "b/data_all_eng_slimpj/shuffled/split2/finalzzkrgu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkrgu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nHuman intelligence exhibits \\emph{compositional generalization}, the ability to understand and produce unseen combinations of seen components \\cite{chomsky1965aspects,fodor1988connectionism}.\nFor example, if a person knows the meaning of ``run'' and ``run twice'', once she learns the meaning of a new word ``jump'', she can immediately understand the meaning of ``jump twice''.\nThis compositional generalization is essential in human cognition, enabling humans to learn semantics from very limited data and extend to unseen data \\cite{montague1974formal,partee1984compositionality,lake2017building}.\nTherefore, it is widely believed that compositional generalization is a key property towards human-like AI \\cite{mikolov2016roadmap,bengio2019towards,mollica2020composition}.\nHowever, in recent years, accumulating evidences have shown that neural sequence-to-sequence (seq2seq) models exhibit limited compositional generalization ability~\\cite{lake2018generalization,hupkes2020compositionality,keysers2020measuring,furrer2020compositional}.\n\n\n\nOn the other hand, semi-supervised learning \\cite{chapelle2009semi} is a machine learning paradigm that alleviates the limited-data problem through exploiting a large amount of unlabelled data, which has been successfully used in many different areas such as machine translation \\cite{sennrich2016improving,he2016dual,skorokhodov2018semi} and semantic parsing \\cite{yin2018structvae,cao2019semantic}.\nIn sequence-to-sequence (seq2seq) tasks, unlabelled data (i.e., monolingual data) are usually cheap and abundant, containing a large amount of combinations that are unseen in limited labelled data (i.e., parallel data).\nTherefore, we propose a hypothesis that \\emph{semi-supervised learning can enable seq2seq models understand and produce much more combinations beyond labelled data, thus tackling the bottleneck of lacking compositional generalization}.\nIf this hypothesis holds true, the lack of compositional generalization would no longer be a bottleneck of seq2seq models, as we can simply tackle it through exploiting large-scale monolingual data with diverse combinations.\n\nIn this work, we focus on \\emph{Iterative Back-Translation (IBT)} \\cite{hoang2018iterative}, a simple yet effective semi-supervised method that has been successfully applied in machine translation.\nThe key idea behind it is to iteratively augment original parallel training data with pseudo-parallel data generated from monolingual data.\nTo our best knowledge, iterative back-translation has not been studied extensively from the perspective of compositional generalization.\nThis is partially because a concern about \\emph{the quality of pseudo-parallel data}:\ndue to the problem of lacking compositional generalization, for non-parallel data with unseen combinations beyond the parallel training data, pseudo-parallel data generated from them will be error-prone.\nIt is natural to speculate that errors in pseudo-parallel data are going to be reinforced and then even harm the performance.\n\nThis paper broadens the understanding of iterative back-translation from the perspective of compositional generalization, through answering three research questions:\n\\textbf{RQ1. 
} How does iterative back-translation affect neural seq2seq models' ability to generalize to more combinations beyond parallel data?\n\\textbf{RQ2. } If iterative back-translation is useful from the perspective of compositional generalization, what is the key that contributes to its success?\n\\textbf{RQ3. } Is there a way to further improve the quality of pseudo-parallel data, thereby further improving the performance?\n\n\\textbf{Main Contributions.}\n(1) We empirically show that iterative back-translation substantially improves the performance on compositional generalization benchmarks (CFQ and SCAN) (Section \\ref{section:c1}).\n(2) To understand what contributes to its success, we carefully examine the performance gains and observe that iterative back-translation is effective to correct errors in pseudo-parallel data (Section \\ref{section:c2}).\n(3) Motivated by this analysis, we propose \\emph{curriculum iterative back-translation} to further improve the quality of pseudo-parallel data, thereby improving the performance of iterative back-translation (Section \\ref{section:c3}).\n\n\\section{Background}\n\n\\subsection{Compositional Generalization Benchmarks}\n\\label{section:tasks}\n\nIn recent years, the compositional generalization ability of DNN models has become a hot research problem in artificial intelligence, especially in Natural Language Understanding (NLU), because compositional generalization ability has been recognized as a basic but essential capability of human intelligence~\\cite{lake2017building}.\nTwo benchmarks, SCAN \\cite{lake2018generalization} and CFQ \\cite{keysers2020measuring}, have been proposed for measuring the compositional generalization ability of different machine learning-based NLU systems.\nOur experiments are also conducted on these two benchmarks.\n\n\\subsubsection{SCAN}\nThe SCAN dataset consists of input natural language commands (e.g., ``\\emph{jump and look left twice}'') paired with output action sequences (e.g., ``\\emph{JUMP LTURN LOOK LTURN LOOK}'').\nDifficult tasks are proposed based on different data split settings, e.g.,\n(1)~\\textbf{ADD\\_JUMP}: the pairs of train and test are split in terms of the primitive \\emph{JUMP}.\nThe train set consists of \\emph{(jump, JUMP)} and all pairs without the primitive \\emph{JUMP}.\nThe rest forms the test set.\n(2)~\\textbf{LENGTH}: Pairs are split by action sequence length into train set ($\\leq 22$ tokens) and test set ($\\geq 24$ tokens). 
(3)~\\textbf{AROUND\\_RIGHT}: The phrase ``\\emph{around right}\" is held out from the train set, while ``around\" and ``right\" appear separately.\n(4)~\\textbf{OPPOSITE\\_RIGHT}: This task is similar to AROUND\\_RIGHT, while the held-out phrase is ``\\emph{opposite right}''.\n\n\\subsubsection{CFQ}\nThe \\emph{Complex Freebase Questions (CFQ)} benchmark \\cite{keysers2020measuring} contains input natural language questions paired with their output meaning representations (SPARQL queries against the Freebase knowledge graph).\nTo comprehensively measure a learner's compositional generalization ability, CFQ dataset is splitted into train and test sets based on two principles:\n(1) \\emph{Minimizing primitive divergence}: all primitives present in the test set are also present in the training set, and the distribution of primitives in the training set is as similar as possible to their distribution in the test set.\n(2) \\emph{Maximizing compound divergence}: the distribution of compounds (i.e., logical substructures in SPARQL queries) in the training set is as different as possible from the distribution in the test set.\nThe second principle guarantees that the task is compositionally challenging.\nFollowing these two principles, three different \\emph{maximum compound divergence (MCD)} dataset splits are constructed.\nThe mean accuracy of standard neural seq2seq models on the MCD splits is below 20\\%.\nMoreover, this benchmark also constructs 3 MCD data splits for SCAN dataset, and the mean accuracy of standard neural seq2seq models is about 4\\%.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8\\columnwidth,clip=true]{pictures\/IBT\/bt.png}\n\\caption{Iterative back-translation: exploiting monolingual data to augment parallel training data.\nThe solid lines mean that the src2trg\/trg2src model generates pseudo-parallel data from monolingual data;\nthe dashed lines mean that parallel and pseudo-parallel data are used to train the models.}\n\\label{fig:ibt}\n\\end{figure}\n\n\\subsection{Iterative Back-Translation~(IBT)}\n\nIterative back-translation~\\cite{hoang2018iterative} has been proved as an effective method utilizing bi-directional monolingual data to improve the performance of machine translation models.\nAlgorithm~\\ref{alg:ibt} describes the training process of iterative back-translation.\nFirstly, the source-to-target (src2trg) model and target-to-source (trg2src) model are initially trained on the original parallel data ($D_p$) for $K$ steps (Line 1).\nAfter this initial stage, each training step samples a batch of monolingual data from one direction and back-translate them to the other direction.\nThen it would perform two mini-batches to train source-to-target model using the given parallel data and pseudo backtranslated parallel data separately, and also another two mini-batches for training target-to-source model.\nFigure~\\ref{fig:ibt} visualizes the iterative training process.\n\n\\begin{algorithm}[h]\n\\small\n\t\\caption{Iterative Back-Translation}\n\t\\textbf{Input:} parallel data $D_p$, monolingual data $D_{src}$, $D_{trg}$ \\\\\n\t\\textbf{Output:} source-to-target model $M_{\\rightarrow}$ and target-to-source model $M_{\\leftarrow}$\n\t\\begin{algorithmic}[1]\n\t\t\\State \\textbf{Initial Stage}: Separately train source-to-target model $M_{\\rightarrow}$ and target-to-source model $M_{\\leftarrow}$ on $D_{p}$ for K steps\n\t\t\\While{$M_{\\rightarrow}$ and $M_{\\leftarrow}$ have not converged \\textbf{or} step $\\leq$ N}\n\t\t \\State Sample a batch 
$B_{t}$ from $D_{trg}$ and use $M_{\\leftarrow}$ to create $B_p=\\{(\\hat{s}, t) | t\\in B_t\\}$ \n\t\t \\State Sample a batch $B_{s}$ from $D_{src}$ and use $M_{\\rightarrow}$ to create $B_p'=\\{(s, \\hat{t}) | s\\in B_s\\}$\n\t\t \n\t\t \n\t\t \n\t\t \\State Train source-to-target model $M_{\\rightarrow}$ on $D_{p}$\n\t\t \\State Train source-to-target model $M_{\\rightarrow}$ on $B_p$\n\t\t \\State Train target-to-source model $M_{\\leftarrow}$ on $D_{p}$\n\t\t \\State Train target-to-source model $M_{\\leftarrow}$ on $B_{p}'$\n\t \\EndWhile\n\t\\end{algorithmic}\n\\label{alg:ibt}\n\\end{algorithm}\n\n\\begin{table*}[htbp]\n\\small\n\\caption{Performance (accuracy) on SCAN Tasks. Cells with a white background are results obtained in previous papers; cells with a grey background are results obtained in this paper.}\n\\centering\n\\begin{tabular}{lccccccccc}\n\\toprule[1pt]\n Models & ADD\\_JUMP & LENGTH &AROUND\\_RIGHT& OPPOSITE\\_RIGHT & MCD1 & MCD2 & MCD3\\\\\n\\midrule\nLSTM+Att &$0.0\\pm0.0$&$14.1$ & $0.0\\pm0.0$ & $16.5\\pm6.4$ &$6.5\\pm3.0$ & $4.2\\pm1.4$& $1.4\\pm0.2$\\\\\nTransformer &$1.0\\pm0.6$& $0.0$& $53.3\\pm10.9$ & $3.0\\pm6.8$ & $0.4\\pm0.2$ & $1.6\\pm0.3$&$0.88\\pm0.4$\\\\\nUni-Transformer& $0.3\\pm0.3$ & $0.0$ & $47.0 \\pm10.0$ &15.2$\\pm$13.0& $0.5\\pm0.1$ & $1.5\\pm0.2$&$1.1\\pm0.4$\\\\\n\\midrule\nSyn-att&$91.0\\pm27.4$&$15.2\\pm0.7$&$28.9\\pm34.8$&$10.5\\pm8.8$&-&-&-\\\\\nCGPS&$98.8\\pm1.4$& $20.3\\pm1.1$ &$83.2\\pm13.2$ &$89.3\\pm15.5$ &$1.2\\pm1.0$ & $1.7\\pm2.0$& $0.6\\pm0.3$ \\\\\nEquivariant&$99.1\\pm0.0$&$15.9\\pm3.2$&$\\mathbf{92.0\\pm0.2}$&-&-&-&-\\\\\nGECA & $87.0$ &- & $82.0$&- &-&-&-\\\\\nMeta-seq2seq&$99.9$&$99.9$&$16.6$&-&-&-&-\\\\\nT5-11B&$98.3$&$3.3$&$49.2$&$\\mathbf{99.1}$ &$7.9$&$2.4$ &$16.8$ \\\\\n\\midrule[1pt]\n GRU+Att~(Ours) & \\cellcolor{lightgray}$0.6$ &\\cellcolor{lightgray}$14.3$&\\cellcolor{lightgray} $23.8$ & \\cellcolor{lightgray}$0.27$ &\\cellcolor{lightgray} $18.7$ &\\cellcolor{lightgray}$32.0$& \\cellcolor{lightgray}$42.4$ \\\\\n\\quad +mono30&\\cellcolor{lightgray}$\\mathbf{99.6}$&\\cellcolor{lightgray}$\\mathbf{77.7}$&\\cellcolor{lightgray}$37.8$&\\cellcolor{lightgray}$95.8$ &\\cellcolor{lightgray} $\\mathbf{64.3}$&\\cellcolor{lightgray}$\\mathbf{80.8}$&\\cellcolor{lightgray}$\\mathbf{52.2}$ \\\\\n\\midrule\n\\quad +mono100&\\cellcolor{lightgray}$100.0$&\\cellcolor{lightgray}$99.9$&\\cellcolor{lightgray}$42.9$&\\cellcolor{lightgray}$100.0$&\\cellcolor{lightgray}$87.1$&\\cellcolor{lightgray}$99.0$&\\cellcolor{lightgray}$64.7$\\\\\n\\quad +transductive&\\cellcolor{lightgray}$100.0$&\\cellcolor{lightgray}$99.3$&\\cellcolor{lightgray}$34.5$&\\cellcolor{lightgray}$100.0$&\\cellcolor{lightgray}$74.8$ &\\cellcolor{lightgray}$99.7$ &\\cellcolor{lightgray}$77.4$ \\\\\n\\bottomrule[1pt]\n\\end{tabular}\n\\label{tab:scan}\n\\end{table*}\n\n\\section{IBT for Compositional Generalization}\n\\label{section:c1}\n\nTo examine the effectiveness of IBT for compositional generalization, we conduct a series of experiments on two compositional generalization benchmarks (SCAN and CFQ, introduced in Section~\\ref{section:tasks}).\n\n\\subsection{Setup}\n\\label{sec:ibt_setup}\n\\subsubsection{Monolingual Data Settings}\nIn our experiments, we conduct different monolingual data settings as follows:\n\\begin{itemize}\n \\item \\textbf{+transductive}:\n \\emph{use all test data as monolingual data.}\n We use this setting to explore how iterative back-translation performs with the most ideal monolingual data.\n \\item \\textbf{+mono100}:\n \\emph{use all dev 
data as monolingual data.}\n In this setting, test data do not appear in the monolingual data, while the monolingual data contain many more unseen combinations beyond the parallel training data.\n \\item \\textbf{+mono30}:\n \\emph{randomly sample 30\\% source sequences and 30\\% target sequences from dev data.}\n This setting is more realistic than ``+transductive'' and ``+mono100'', because there is no implicit correspondence between source-side and target-side monolingual data (i.e., for a sequence in source-side monolingual data, the target-side monolingual data do not necessarily contain its corresponding target sequence, and vice versa).\n Therefore, in our analysis of experimental results, we regard the results of \\textbf{``+mono30\"} as the actual performance of iterative back-translation.\n\\end{itemize}\n\nNote that because there are no dev data in SCAN tasks, we randomly hold out half of the original test data as the dev data (the held-out dev data and the remaining test data do not overlap).\n\n\\subsubsection{Implementation Details} We implement iterative back-translation based on the code of UNdreaMT\\footnote{https:\/\/github.com\/artetxem\/undreamt}~\\cite{artetxe2018iclr}. For CFQ, we use a 2-layer GRU encoder-decoder model equipped with attention. We set the size of both word embeddings and hidden states to 300. We use a dropout layer with a rate of 0.5, and the training process lasts 30000 iterations with batch size 128. For SCAN, we also use a 2-layer GRU encoder-decoder model with attention. Both the embedding size and the hidden size are set to 200. We use a dropout layer with a rate of 0.5, and the training process lasts 35000 iterations with batch size 64. We set $K=5000$ steps in Algorithm~\\ref{alg:ibt} for both the CFQ and SCAN benchmarks.\n\\subsubsection{Baselines}\n(i) \\textbf{General seq2seq models}:~LSTM, Transformer and Uni-Transformer~\\cite{keysers2020measuring}. (ii)~\\textbf{SOTA models on SCAN\/CFQ benchmarks}: Syn-att~\\cite{russin2019compositional}, CGPS~\\cite{li2019compositional}, Equivariant~\\cite{gordon2019permutation}, GECA~\\cite{andreas-2020-good}, Meta-seq2seq~\\cite{lake2019compositional} and T5-11B~\\cite{furrer2020compositional}. 
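For concreteness, the training loop of Algorithm~\ref{alg:ibt}, instantiated with the CFQ hyper-parameters above ($K=5000$ initial steps, 30000 iterations in total, batch size 128), can be sketched as follows. This is only an illustrative Python sketch under a hypothetical simplified interface of ours (models exposing \texttt{train\_batch} and \texttt{translate}); it is not the UNdreaMT implementation actually used in the experiments.

\begin{verbatim}
import random

def iterative_back_translation(src2trg, trg2src, parallel, mono_src, mono_trg,
                               init_steps=5000, total_steps=30000, batch_size=128):
    """Sketch of Algorithm 1. `parallel` is a list of (source, target) pairs;
    `mono_src` / `mono_trg` are lists of monolingual sequences.  Both models are
    assumed to expose train_batch(pairs) (one update on (input, output) pairs)
    and translate(seqs) (greedy decoding)."""
    # Initial stage: train both directions on the parallel data only (Line 1).
    for _ in range(init_steps):
        src2trg.train_batch(random.sample(parallel, batch_size))
        trg2src.train_batch([(t, s) for (s, t) in random.sample(parallel, batch_size)])

    # Iterative stage (Lines 2-8).
    for _ in range(init_steps, total_steps):
        # Back-translate target-side monolingual data into pseudo sources.
        batch_trg = random.sample(mono_trg, batch_size)
        pseudo_src2trg = list(zip(trg2src.translate(batch_trg), batch_trg))
        # Forward-translate source-side monolingual data into pseudo targets.
        batch_src = random.sample(mono_src, batch_size)
        pseudo_trg2src = list(zip(src2trg.translate(batch_src), batch_src))
        # Two mini-batches per direction: one on parallel data, one on pseudo data.
        src2trg.train_batch(random.sample(parallel, batch_size))
        src2trg.train_batch(pseudo_src2trg)
        trg2src.train_batch([(t, s) for (s, t) in random.sample(parallel, batch_size)])
        trg2src.train_batch(pseudo_trg2src)
    return src2trg, trg2src
\end{verbatim}

The on-the-fly character of the loop is what Section~\ref{section:otf} later refers to: every pseudo-parallel batch is produced by the current state of the dual model, so the same monolingual sequence receives different back-translations at different training steps.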
\n\n\\begin{table}[tbp]\n\\small\n\\caption{Performance (accuracy) on CFQ tasks.}\n\\centering\n\\begin{tabular}{lcccc}\n\\toprule[1pt]\n Models & MCD1 & MCD2 & MCD3 \\\\\n\\midrule\nLSTM+Attn & $28.9\\pm1.8$ & $5.0\\pm0.8$ & $10.8\\pm0.6$ \\\\\nTransformer & $34.9\\pm1.1$ & $8.2\\pm0.3$ & $10.6\\pm1.1$ \\\\\nUni-Transformer & $37.4\\pm2.2$ & $8.1\\pm1.6$ & $11.3\\pm0.3$\\\\\nCGPS & $13.2\\pm3.9$ & $1.6\\pm0.8$ & $6.6\\pm0.6$\\\\\nT5-11B & $61.4\\pm4.8$&$30.1\\pm2.2$&$31.2\\pm5.7$\\\\\n\\midrule[1pt]\nGRU+Attn~(Ours) &\\cellcolor{lightgray}$32.6\\pm0.22$ &\\cellcolor{lightgray}$6.0\\pm0.25$ & \\cellcolor{lightgray}$9.5\\pm0.25$\\\\\n\\quad +mono30 & \\cellcolor{lightgray}$\\mathbf{64.8\\pm4.4}$&\\cellcolor{lightgray}$\\mathbf{57.8\\pm4.9}$&\\cellcolor{lightgray}$\\mathbf{64.6\\pm4.9}$\\\\\n\\midrule\n\\quad +mono100&\\cellcolor{lightgray}$83.2\\pm3.1$&\\cellcolor{lightgray}$71.5\\pm6.9$&\\cellcolor{lightgray}$81.3\\pm1.6$\\\\\n\\quad +transductive &\\cellcolor{lightgray}$88.4\\pm0.7$ &\\cellcolor{lightgray}$81.6\\pm6.5$ &\\cellcolor{lightgray}$88.2\\pm2.2$\\\\\n\\bottomrule[1pt]\n\\end{tabular}\n\\label{tab:cfq}\n\\end{table}\n\n\\subsection{{Observations}}\n\\begin{figure*}[t]\n\\small\n\\centering\n\\includegraphics[width=0.9\\textwidth]{pictures\/IBT\/accuracy.jpg}\n\\caption{Quality~(accuracy scores) of generated target data during the training procedure.}\n\\label{fig:accuracy_scores}\n\\end{figure*}\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{pictures\/IBT\/bleu.jpg}\n\\caption{Quality~(BLEU scores) of generated source data during the training procedure.}\n\\label{fig:bleu_scores}\n\\end{figure*}\nWe report the experimental results in Table~\\ref{tab:scan} (SCAN) and~\\ref{tab:cfq} (CFQ).\nNotice that we pay more attention to MCD splits (on both SCAN and CFQ), since it has been proven that they are more comprehensive for evaluating compositional generalization than other splits \\cite{keysers2020measuring,furrer2020compositional}.\nOur observations are as follows:\n\n\\textit{1.Iterative Back-translation substantially improves the performance on compositional generalization benchmarks.}\nFirstly, IBT can always bring large performance gains to our baseline (GRU+Att).\nThen, compared to the previous state-of-the-art models, IBT can consistently achieve remarkable performance on all tasks, while those SOTA models only show impressive performances on ADD\\_JUMP \/ LENGTH \/ AROUND\\_RIGHT \/ OPPOSITE\\_RIGHT tasks of SCAN, but perform poorly on MCD tasks of SCAN\/CFQ.\nMoreover, IBT also achieves higher performance than T5-11B, which is a strong baseline that incorporates rich external knowledge during pre-training stage.\n\n\n\n\n\\textit{2. Better monolingual data, better results.} \nAs described in section~\\ref{sec:ibt_setup}, the quality of monolingual data are gradually improved from ``+mono30\" to ``+mono100\" then to ``+transductive\". 
In most cases, as expected, ``+transductive\" performs better than ``+mono100\", and ``+mono100\" performs better than ``+mono30\".\n\n\n\n\\section{Secrets behind Iterative Back-Translation}\n\\label{section:c2}\n\nSection \\ref{section:c1} shows that iterative back-translation substantially improves seq2seq models' ability to generalize to more combinations beyond parallel data, but it is still unclear how and why iterative back-translation succeeds (RQ 2).\n\nTo answer this research question, we first empirically analyze the quality of pseudo-parallel data during the training process of iterative back-translation, and find that errors are increasingly corrected (Section \\ref{section:quality}).\nThen, we conduct ablation experiments to further illustrate this observation.\nWe find that\n\\emph{even error-prone pseudo-parallel data are beneficial.}\nWe speculate that this is because knowledge of unseen combinations is implicitly injected into the model, so that the src2trg model and the trg2src model boost each other rather than harming each other (Section \\ref{section:static}).\nMoreover, pseudo-parallel data in iterative back-translation contain perturbations, helping models correct errors more effectively (Section \\ref{section:otf}).\nIn this section, all experiments are conducted on the CFQ benchmark, as it is much more complex and comprehensive than SCAN.\n\n\\begin{figure*}[t]\n\\small\n\\centering\t\n \\begin{subfigure}{.35\\textwidth}\n \t\\centering\n \t\\includegraphics[width=\\textwidth]{pictures\/IBT\/acc_hist.pdf}\n \t\\caption{Accuracy of Src2trg models}\n \t\\label{fig:acc}\n \\end{subfigure}\n \\begin{subfigure}{.35\\textwidth}\n \t\\centering\n \t\\includegraphics[width=\\textwidth]{pictures\/IBT\/bleu_hist.pdf}\n \t\\caption{BLEU of Trg2src models}\n \t\\label{fig:bleu}\n \\end{subfigure}\n\\caption{Ablation experiments for understanding the key factors that contribute to the performance gain.\nThe comparison of baseline and BT indicates that even error-prone pseudo-parallel data are beneficial due to the injected implicit knowledge of unseen combinations (Section \\ref{section:static}).\nThe comparison of BT and BT+OTF indicates that perturbations brought by the on-the-fly mechanism can prevent models from learning specific incorrect biases, thus improving the performance (Section \\ref{section:otf}).\nIBT performs best because it uses both source-side and target-side monolingual data, while the others use only target-side monolingual data.}\n\\label{fig:stanford_res}\n\\end{figure*}\n\n\\subsection{Quality of Pseudo-Parallel Data}\n\\label{section:quality}\n\nWe use Figures \\ref{fig:accuracy_scores} and \\ref{fig:bleu_scores} to get a better sense of the quality of pseudo-parallel data during the training process.\nThe x-axis represents the number of training steps: in the first 5000 steps, we train the src2trg model and the trg2src model with only parallel data;\nthe iterative back-translation process starts from step 5001 (dashed vertical lines).\nFor each step, we use the model to generate pseudo-parallel data from all monolingual data, and then evaluate the quality of the generated data.\nSpecifically:\nthe src2trg model generates a SPARQL query for each natural language utterance in the source-side monolingual data, and we define the quality of these generated SPARQL queries (i.e., target-side sequences) as their accuracy;\nthe trg2src model generates a natural language utterance for each SPARQL query in the target-side monolingual data, and we define the quality of these 
generated natural language utterances (i.e., source-side sequences) as their BLEU score~\\cite{papineni2002bleu}.\n\nWe observe that \\emph{iterative back-translation can increasingly correct errors in pseudo-parallel data.}\nEven though the pseudo-parallel data are error-prone at the end of the initial stage~(dashed horizontal lines), the accuracy\/BLEU score increases dramatically once the iterative back-translation process starts.\n\n\n\n\n\\subsection{Impact of Error-Prone Pseudo-Parallel Data}\n\\label{section:static}\n\nTo explore why iterative back-translation can increasingly correct erroneous pseudo-parallel data, we investigate whether models can benefit from the initially generated pseudo-parallel data, which are error-prone.\nIf so, errors will be reduced during the iterative training process;\notherwise, errors will be reinforced.\n\nToward this end, we conduct two ablation experiments with the ``+mono30\" monolingual data setting:\n\n\\begin{itemize}\n\\item \\textbf{BT-Src2trg}: the standard back-translation method for training the src2trg model (i.e., BT in Figure \\ref{fig:acc}).\nIt consists of three phases:\nfirst, train a src2trg model and a trg2src model using parallel data;\nthen, generate pseudo-parallel data from target-side monolingual data using the trg2src model;\nfinally, tune the src2trg model using the union of parallel data and pseudo-parallel data.\n\\item \\textbf{BT-Trg2src}: the standard back-translation method for training the trg2src model (i.e., BT in Figure \\ref{fig:bleu}).\nIt also consists of three phases, dual to those in BT-Src2trg.\n\\end{itemize}\n\n\n\n\n\n\n\nAccording to the performance of BT and the baseline (i.e., src2trg\/trg2src models trained only on parallel data) in Figure \\ref{fig:stanford_res}, we observe that \\emph{even error-prone pseudo-parallel data can be beneficial.}\nSpecifically, though the initial pseudo-parallel data are error-prone (on average, 21.9\\% BLEU for Trg2src and 16.0\\% accuracy for Src2trg), the two models can still improve each other (5.4\\% BLEU and 4.0\\% accuracy gains for the Trg2src and Src2trg models, respectively).\n\nA reasonable explanation for this observation is that even though the pseudo-parallel data are error-prone, they can still implicitly inject the knowledge of unseen combinations into the src2trg\/trg2src models, thereby boosting these models rather than harming them.\n\n\\subsection{Impact of Perturbations}\n\\label{section:otf}\n\n\nPrevious studies show that \\emph{perturbations (or noise)} in pseudo data play an important role in semi-supervised learning \\cite{zhang2016exploiting,edunov2018understanding,he2019revisiting,xie2020self}.\n\nIn iterative back-translation, the on-the-fly mechanism (i.e., models are tuned on-the-fly at each mini-batch with a batch of pseudo-parallel data) brings perturbations to the pseudo-parallel data.\nDifferent batches of pseudo-parallel data are produced by different models at different steps, rather than by the same model.\nThis makes errors more diverse, thus preventing the src2trg\/trg2src model from learning a specific incorrect bias from a specific dual model.\nTherefore, it is reasonable to speculate that some performance gains are brought by such perturbations.\n\nWe conduct two ablation experiments (with the ``+mono30\" monolingual data setting) to study the impact of such perturbations:\n\n\\begin{itemize}\n\\item \\textbf{BT-Src2trg-with-OTF}:\nthis method can be seen as iterative back-translation without source-side monolingual data (i.e., BT+OTF in 
Figure \\ref{fig:acc}).\nSpecifically, during the iterative training process, the trg2src model will only be tuned with parallel data.\nAs Figure \\ref{fig:bleu_scores} shows, the quality of pseudo-parallel data produced by the trg2src model would keep error-prone if it is only tuned on parallel data (the baseline curves in Figure \\ref{fig:bleu_scores}).\nWe want to see whether the src2trg model could be improved if the trg2src model keeps providing error-prone pseudo-parallel data.\n\\item \\textbf{BT-Trg2src-with-OTF}:\nthis method can be seen as iterative back-translation without target-side monolingual data (i.e., BT+OTF in Figure \\ref{fig:bleu}).\nWe want to see whether the trg2src model could be improved if the src2trg model keeps providing error-prone pseudo-parallel data.\n\\end{itemize}\n\n\n\n\n\nAccording to the performance of BT and BT+OTF in Figure \\ref{fig:stanford_res}, we find that perturbations brought by the on-the-fly mechanism do help models benefit from error-prone pseudo-parallel data more effectively.\nFor example, on average, the accuracy of BT-Src2trg-with-OTF is 33.3\\%, while the accuracy of BT-Src2trg is only 20.1\\%.\nAccording to our statistics, each sequence in target-side monolingual data has 49.3 different back-translations during the iterative training process.\nThe perturbations bring 13.3\\% performance gain, even if the quality of pseudo-parallel data has almost no improvement (see the baseline curves in Figure \\ref{fig:bleu_scores}).\n\nThis experimental results support our hypothesis that \\emph{perturbations brought by the on-the-fly mechanism contributes a lot to the success of iterative back-translation in compositional generalization benchmarks}.\n\n\\section{Curriculum Iterative Back-Translation}\n\\label{section:c3}\n\nAs discussed in Section \\ref{section:c2}, the major reason for why iterative back-translation succeeds is that it can increasingly correct errors in pseudo-parallel data.\nBased on this observation, it is reasonable to speculate that iterative back-translation can be further improved through injecting inductive bias that can help errors be reduced more efficiently.\nTowards this end, we consider curriculum learning~\\cite{bengio2009curriculum} as a potential solution: start out iterative back-translation with easy monolingual data (of which the generated pseudo-parallel data are less error-prone), then gradually increase the difficulty of monolingual data during the training process.\nBased on this intuition, we propose \\emph{Curriculum Iterative Back-Translation (CIBT)}.\n\n\\begin{table*}[t]\n\\small\n \\centering\n \\caption{Performance~(accuracy) of curriculum iterative back-translation.}\n\t\\begin{tabular}{lcccccc}\n\t \\hline\n \t\\multirow{2}{*}{} & \\multirow{2}{*}{IBT} & \\multicolumn{5}{c}{CIBT with hyperparameter $c$ (steps in each stage)} \\\\ \\cline{3-7}\n \t & & 2000 & 2500 & 3000 & 3500 & 4000 \\\\ \\cline{1-1} \\cline{2-2} \\cline{3-7}\n MCD1 & $ 64.8 \\pm 4.4 $ & $ 66.1 \\pm 5.0 $ & $ 66.0 \\pm 4.8 $ & $ \\textbf{66.6} \\pm 5.4 $ & $ 65.9 \\pm 3.7 $ & $ 65.4 \\pm 3.8 $ \\\\ \n MCD2 & $ 57.8 \\pm 4.9 $ & $ 68.6 \\pm 2.6 $ & $ \\textbf{69.1} \\pm 3.1 $ & $ 68.0 \\pm 1.9 $ & $ 66.8 \\pm 2.4 $ & $ 65.4 \\pm 3.1 $ \\\\\n MCD3 & $ 64.6 \\pm 4.9 $ & $ 70.2 \\pm 4.9 $ & $ 68.4 \\pm 7.0 $ & $ \\textbf{70.4} \\pm 4.8 $ & $ 69.2 \\pm 4.1 $ & $ 67.0 \\pm 6.3 $ \\\\ \n Mean & $ 62.4 \\pm 6.1 $ & $ \\textbf{68.3} \\pm 4.1 $ & $ 67.8 \\pm 4.7 $ & $ \\textbf{68.3} \\pm 4.1 $ & $ 67.3 \\pm 3.4 $ & $ 65.9 \\pm 4.1 $ \\\\ 
\\hline\n \\end{tabular}\n\t\\label{cl_results_summary}\n\\end{table*}\n\n\\subsection{Method}\n\n\\subsubsection{Problem Formulation}\n\nWe denote source-side and target-side monolingual data in iterative back-translation as $D_{src}$ and $D_{trg}$, respectively.\nSuppose that we have a simple heuristic algorithm $\\mathcal{C}$, which can divide $D_{src}$ into $n$ parts (denoted as $D_{src}^{(1)}$, $D_{src}^{(2)}$, ..., $D_{src}^{(n)}$), and $D_{trg}$ can also be divided into $n$ parts (denoted as $D_{trg}^{(1)}$, $D_{trg}^{(2)}$, ..., $D_{trg}^{(n)}$).\nThey are sorted by difficulty in ascending order:\nthe initial src2trg model performs roughly better on $D_{src}^{(i)}$ than on $D_{src}^{(j)}$ if $i>1$ time unit) could be employed, using the GWRM. The lesson learned from various problems solved so far is that the manageable interval length is indeed problem dependent. For non-smooth problems like the present, the time interval length is limited even if the Chebyshev order is increased because, again, of the spurious solutions found.\n\nAlthough a causal problem is solved, the GWRM need not necessarily be used as a causal method since both initial and end conditions may be applied. This is an interesting possibility for time-spectral methods which, however, requires further theoretical understanding. Various combination of initial and end conditions have indeed been tried, all within the attractor domains of the solutions, but no improvement of convergence was found. \n\nUsually overlapping time intervals, employing two point contact, improves convergence. It is notable, and worthy of further study, that the best results (maximum efficiency) were here obtained using one point contact.\n\nSummarizing, the fact that the GWRM competes well with standard finite difference methods in solving the demanding Lorenz 1984 equations and that very efficient GWRM techniques have been developed for pde's where spatial dimensions are included, suggests that usage of time-spectral methods for NWP is well worth further exploration. \n\n\n\\section{Conclusion}\nIn this work, a preliminary evaluation of a recently developed time-spectral method GWRM is carried out with respect to potential use for numerical weather prediction. The Lorenz (1984) chaotic equations are solved. The efficiency and the behaviour of the error growth with time is compared to traditional explicit and implicit finite difference methods. In particular, the optimal length of the solution time intervals have been determined, the accuracy of the method has been studied in detail and predictability has been investigated. \n\nIt is found that GWRM efficiency is in parity of, or better than, the finite difference methods. Efficiency is further enhanced for cases where several perturbed scenarios need be computed. This is mainly due to that GWRM time intervals are two orders of magnitude larger than those of finite difference methods. Furthermore, the GWRM solutions are analytical Chebyshev series expansions. These findings, and the existence of efficient algorithms for time parallelisation of spatially dependent pde's, are encouraging for future studies of time-spectral methods for NWP. 
\n\n\\section{Acknowledgement}\n\\noindent The authors would like to thank Dr {\\AA}ke Johansson at SMHI (Swedish Meteorological and Hydrological Institute) for inspiration and for pointing at the relevance of solving the Lorenz equations for a primary evaluation of the GWRM as a method for NWP.\n\n\\label{}\n\n\n\n\n\\section*{References}\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Categories of causal r\\textsc{pes}\\ and \\textsc{rcn}}\\label{sec:category}\nOccurrence nets and \\textsc{pes} es are equipped with morphisms and turned into categories that are\nrelated by suitable functors. \nIn this section, we extends such constructions to \\textsc{rcn}\\ and causal-r\\textsc{pes}. \nWe start by recalling the notions of morphisms for occurrence nets and prime event structures.\n\\begin{definition}\\label{de:occ-net-morph}\n Let $C_0 = \\langle B_0, E_0, F_0, \\mathsf{c}_0\\rangle$ and $C_1 = \\langle B_1, E_1, F_1, \\mathsf{c}_1\\rangle$\n be two occurrence nets. Then, an occurrence net \\emph{morphism} from $C_0$ to $C_1$ is a pair $(\\beta,\\eta)$ where\n $\\beta$ is a relation between $B_0$ and $B_1$, $\\eta : E_0\\to E_1$ is a partial mapping such that\n \\begin{itemize}\n \\item $\\beta(\\mathsf{c}_0) = \\mathsf{c}_1$ and for each $b_1\\in \\mathsf{c}_1$ there exists a unique\n $b_0\\in \\mathsf{c}_0$ such that $b_0 \\beta b_1$, and\n \\item for all $e_0\\in E_0$, when $\\eta(e_0)$ is defined and equal to $e_1$ then \n $\\beta(\\pre{e_0}) = \\pre{\\eta(e_0)} = \\pre{e_1}$\n and $\\beta(\\post{e_0}) = \\post{\\eta(e_0)} = \\post{e_1}$.\n \\end{itemize}\n\\end{definition}\nAs we consider just occurrence nets, we have also that if $b_0 \\beta b_1$ and $(e_1,b_1)\\in F_1$, then there\nexists a unique event $e_0\\in E_0$ such that $\\eta(e_0) = e_1$ and $(e_0,b_0)\\in F_0$. \nConsequently, if a condition $b_1$ in $C_1$ is related to a condition $b_0$ in $C_0$ \nby $\\beta$, then the events producing these conditions are related \nby $\\eta$; moreover, there is a unique event $e_0$ in $C_0$ mapped to the event $e_1$. \nMorphisms on occurrence nets compose and the identity mapping is defined as\nthe identity relation on conditions and the \nidentity mapping on events. \nWe have the category having as objects the occurrence nets and as morphisms the \noccurrence nets morphisms, which will be denoted with $\\mathbf{Occ}$.\n\n\\begin{definition}\\label{de:pes-morph}\n Let $P_0 = (E_0, <_0, \\#_0)$ and $P_1 = (E_1, <_1, \\#_1)$ be two \\textsc{pes} es. Then, a\n \\emph{\\textsc{pes}\\ morphism} from $P_0$ to $P_1$ is a partial mapping $f : E_0 \\to E_1$ such that\n \\begin{enumerate}\n \\item for all $e_0\\in E_0$ if $f(e_0)$ is defined then $\\hist{f(e_0)} \\subseteq f(\\hist{e_0})$, \n \\item for all $e_0, e_0'\\in E_0$ such that $f(e_0)$ and $f(e_0')$ are both defined and\n $f(e_0) \\#_1 f(e_0')$ then $e_0 \\#_0 e_0'$, and \n \\item for all $e_0, e_0'\\in E_0$ such that $f(e_0)$ and $f(e_0')$ are both defined and equal, then\n $e_0 \\neq e_0'$ implies $e_0 \\#_0 e_0'$. 
\n \\end{enumerate}\n\\end{definition}\n\nAgain \\textsc{pes}\\ morphisms compose, and we have the category $\\mathbf{PES}$ \n whose objects are \\textsc{pes} es\nand whose arrows are the \\textsc{pes}\\ morphisms.\n\nThe categories $\\mathbf{Occ}$ and $\\mathbf{PES}$ are related as follows: each occurrence net is associated with a prime event structure \nas stated\nin Proposition~\\ref{pr:on-to-pes}; each occurrence net morphism\n $(\\beta,\\eta)$ is associated with $\\eta$, which turns out to be a \\textsc{pes}\\ morphism. We denote such a functor by\n$\\mathcal{P}$ (as in Proposition~\\ref{pr:on-to-pes}).\nConversely, the mapping $\\mathcal{E}$ introduced in Proposition~\\ref{pr:pes-to-on} can be turned into a functor, \nas shown in \\cite{Win:ES}. \nA \\textsc{pes}\\ morphism $f$ is turned into an occurrence net morphism as follows:\nthe relation $\\beta$ on conditions is defined such that (i) $(a,A) \\beta (a',A')$ when $a = \\bot = a'$, \n$A' = f(A) \\neq \\emptyset$, and $|A| = 1$, and (ii) $(e,A) \\beta (f(e),A')$ when $f(e)$ is defined and\n$A' = f(A) \\neq \\emptyset$; the partial mapping on events is just $f$. It is then easy to\nsee that such a definition indeed yields an occurrence net morphism. \nA well-known result is that these functors form a coreflection, where $\\mathcal{P}$ is the right\nadjoint and $\\mathcal{E}$ the left adjoint.\n\nWe now recall the notion of r\\textsc{pes}\\ morphisms introduced in \\cite{GPY:CatRES}.\n\n\\begin{definition}\\label{de:rpes-morph}\n Let $P_0 = (E_0, U_0, <_0, \\#_0, \\prec_0, \\triangleright_0)$ and $P_1 = (E_1, U_1, <_1, \\#_1, \\prec_1, \\triangleright_1)$ be two r\\textsc{pes}. Then, an\n \\emph{r\\textsc{pes}\\ morphism} from $P_0$ to $P_1$ is a partial mapping $f : E_0 \\to E_1$ such that\n \\begin{enumerate}\n \\item for all $e_0\\in E_0$ if $f(e_0)$ is defined then \n $\\setcomp{e_1\\in E_1}{e_1 <_1 f(e_0)} \\subseteq \\setcomp{f(e)}{e <_0 e_0}$, \n \\item for all $e_0, e_0'\\in E_0$ such that $f(e_0)$ and $f(e_0')$ are both defined and\n $f(e_0) \\#_1 f(e_0')$ then $e_0 \\#_0 e_0'$, \n \\item for all $e_0, e_0'\\in E_0$ such that $f(e_0)$ and $f(e_0')$ are both defined and equal, then\n $e_0 \\neq e_0'$ implies $e_0 \\#_0 e_0'$, \n \\item for all $e_0\\in E_0$ and $e\\in U_0$ such that\n $f(e_0)$ and $f(e)$ are both defined and $f(e_0) \\triangleright_1 f(e)$ then\n $e_0 \\triangleright_0 e$, and\n \\item for all $e_0\\in U_0$ if $f(e_0)$ is defined then \n $\\setcomp{e_1\\in E_1}{e_1 \\prec_1 \\underline{f(e_0)}} \\subseteq \\setcomp{f(e)}{e \\prec_0 \\underline{e_0}}$. \n \\end{enumerate}\n\\end{definition}\nThe notion of morphism above generalises the one on \\textsc{pes} es by requiring that prevention is preserved as\nwell as the reverse causality relation. \nIn \\cite{GPY:CatRES} it is also shown that \nr\\textsc{pes} es and r\\textsc{pes}\\ morphisms form a category, which is called $\\mathbf{RPES}$. 
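To make the conditions of Definition~\ref{de:rpes-morph} concrete, the following small Python sketch checks conditions (1)--(5) on finite structures. The set-based encoding of an r\textsc{pes}\ and the dictionary representation of the partial map are ours and purely illustrative; they are not part of the formal development.

\begin{verbatim}
from itertools import product

# Illustrative encoding (ours, not from the paper) of a finite rPES:
#   P = {'E': events, 'U': reversible events,
#        'lt':   {(a, b): a < b},                      (causality)
#        'conf': symmetric set of pairs {(a, b): a # b},
#        'prec': {(a, u): a is a reverse cause of undoing u},
#        'prev': {(a, u): a prevents the undoing of u}}
# A partial map f : E_0 -> E_1 is a dict defined only on the events it maps.

def is_rpes_morphism(P0, P1, f):
    """Check conditions (1)-(5) of the rPES-morphism definition (a sketch)."""
    dom = set(f)
    # (1) every cause of f(e0) is the image of some cause of e0
    for e0 in dom:
        img_causes = {f[e] for (e, x) in P0['lt'] if x == e0 and e in dom}
        if not {a for (a, b) in P1['lt'] if b == f[e0]} <= img_causes:
            return False
    # (2) conflicts between images are reflected; (3) f is injective up to conflict
    for e0, e1 in product(dom, dom):
        if (f[e0], f[e1]) in P1['conf'] and (e0, e1) not in P0['conf']:
            return False
        if e0 != e1 and f[e0] == f[e1] and (e0, e1) not in P0['conf']:
            return False
    # (4) prevention of undoings is reflected
    for e0, u in product(dom, dom & P0['U']):
        if (f[e0], f[u]) in P1['prev'] and (e0, u) not in P0['prev']:
            return False
    # (5) every reverse cause of undoing f(u) is the image of a reverse cause of undoing u
    for u in dom & P0['U']:
        img_rev = {f[e] for (e, x) in P0['prec'] if x == u and e in dom}
        if not {a for (a, x) in P1['prec'] if x == f[u]} <= img_rev:
            return False
    return True
\end{verbatim}

On small finite examples this makes it easy to check, for instance, that the identity map is a morphism, while a map that collapses two events that are not in conflict violates condition (3).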
\nWe restrict our attention to the subcategory $\\mathbf{cRPES}$, which \n has causal r\\textsc{pes} es as objects and r\\textsc{pes}\\ morphisms as arrows.\n\n\\begin{proposition}\\label{pr:full-subcat}\n $\\mathbf{cRPES}$ is a full subcategory of $\\mathbf{RPES}$.\n\\end{proposition}\n\nWe now turn our attention to reversible causal nets and introduce a notion of morphism for them.\n\n\\begin{definition}\\label{de:rcn-morph}\n Let $R_0 = \\langle B_0, E_0, U_0, F_0, \\mathsf{c}_0\\rangle$ and \n $R_1 = \\langle B_1, E_1, U_1, F_1, \\mathsf{c}_1\\rangle$ be two \\textsc{rcn}{s}.\n Then an \\textsc{rcn}\\ \\emph{morphism} from $R_0$ to $R_1$ is a pair $(\\beta,\\eta)$ such that\n \\begin{itemize}\n \\item $(\\beta,\\eta)$, restricted to the underlying occurrence nets, is an occurrence net morphism from \n $\\langle B_0, E_0\\setminus U_0, F'_0, \\mathsf{c}_0\\rangle$ to\n $\\langle B_1, E_1\\setminus U_1, F'_1, \\mathsf{c}_1\\rangle$, \n and\n \\item $\\eta(U_0) \\subseteq U_1$ and if $e\\in U_0$ and $\\eta(e)$ is defined then\n also $\\eta(e')$ is defined, where $e' = h(e)$. \n \\end{itemize}\n\\end{definition}\nIt is straightforward to check that \\textsc{rcn}\\ morphisms compose and that the identity relation on conditions together with the identity mapping on events\nconstitute an \\textsc{rcn}\\ morphism. Hence, \\textsc{rcn} s together with \\textsc{rcn}\\ morphisms form a category, which we call\n$\\mathbf{rOcc}$. \n\nThen, the definition of $\\mathcal{C}$ in Proposition~\\ref{pr:rce-to-rpes} can be extended to a functor, which we denote by $\\mathcal{C}_{r}$.\n\\begin{proposition}\\label{pr:C-is-functor} \n $\\mathcal{C}_{r} : \\mathbf{rOcc} \\to \\mathbf{cRPES}$, acting on objects as in Proposition~\\ref{pr:rce-to-rpes}\n and on morphisms $(\\beta,\\eta) : R_0 \\to R_1$ as $\\eta$ restricted to the events that are not\n reversing ones, is a functor.\n\\end{proposition}\n\nAlso the construction in Definition~\\ref{de:rpestorcn} can be extended to a functor. \n\n\\begin{proposition}\\label{pr:E-is-functor} \n $\\mathcal{E}_{r} : \\mathbf{cRPES} \\to \\mathbf{rOcc}$, acting on objects as in Definition~\\ref{de:rpestorcn}\n and on morphisms as stipulated for occurrence nets, requiring in addition that reversing events are\n preserved, is a functor.\n\\end{proposition}\n\nAlong the lines of \\cite{Win:ES} we can establish a relation between the categories\n$\\mathbf{cRPES}$ and $\\mathbf{rOcc}$.\n\n\\begin{theorem}\\label{th:coreflection} \n $\\mathcal{E}_{r}$ and $\\mathcal{C}_{r}$ form a coreflection, where $\\mathcal{C}_{r}$ is the right\nadjoint and $\\mathcal{E}_{r}$ the left adjoint.\n\\end{theorem}\n\n\n\n\\subsection{From r\\textsc{pes}\\ to \\textsc{rcn}}\nIn line with what is usually done when relating nets to event structures, we show that \nif we focus on causal r\\textsc{pes} es then we can relate them to reversible occurrence nets.\nThe construction is indeed quite standard (see \\cite{Win:ES,BCP:LenNets} among many others),\nbut we do need a further observation on causal r\\textsc{pes} es.\n\n\\begin{proposition}\\label{pr:pes-associated-to-crpes}\n Let $\\mathsf{P} = (E, U, <, \\#, \\prec, \\triangleright)$ be a causal r\\textsc{pes}\\ and\n let $<^{+}$ be the transitive closure of $<$. 
Then, $\\#$ is inherited along $<^{+}$,\n \\emph{i.e.} $e\\ \\#\\ e' <^{+} e''\\ \\Rightarrow\\ e\\ \\#\\ e''$.\n\\end{proposition}\nA consequence of this proposition is that the conflict relation is fully characterized by the\ncausality relation, and the same intuition for introducing reversible causal net can be used in\nassociating a net to a causal r\\textsc{pes}\\ like the one used to associate an occurrence net to a \\textsc{pes}. \n\n\\begin{definition}\\label{de:rpestorcn}\n Let $\\mathsf{P} = (E, U, <, \\#, \\prec, \\triangleright)$ be a causal r\\textsc{pes}, and \n $\\bot\\not\\in E$ be a new symbol. Define \n $\\mathcal{E}_{r}(\\mathsf{P})$ as the Petri net $\\langle B, \\hat{E}, F, \\mathsf{c}\\rangle$ where\n \\begin{itemize}\n \\item $B = \\setcomp{(a,A)}{a\\in E\\cup\\setenum{\\bot} \\land A \\subseteq E \\land \\CF{A} \\land \n a \\neq \\bot\\ \\Rightarrow\\ \\forall e\\in A.\\ a \\ll e}$,\\smallskip\n \\item $\\hat{E} = E\\times\\setenum{\\mathtt{f}}\\ \\cup\\ U\\times\\setenum{\\mathtt{r}}$,\n \\item $F = \\setcomp{(b,(e,\\mathtt{f}))}{b = (a,A)\\ \\land\\ e\\in A} \\quad \\cup\\quad\n \\setcomp{((e,\\mathtt{f}),b)}{b = (e,A)}\\quad\\cup\\quad$ \\phantom{asdfa}\n $\\setcomp{(b,(e,\\mathtt{r}))}{b = (e,A)}\\quad\\cup\\quad\n \\setcomp{((e,\\mathtt{r}),b)}{b = (a,A)\\ \\land\\ e\\in A}$,\n and\n \\item $\\mathsf{c} = \\setcomp{(\\bot,A)}{A \\subseteq E\\ \\land\\ \\CF{A}}$, \n \\end{itemize}\n\\end{definition} \nIn essence the construction above takes the \\textsc{pes}\\ associated to an r\\textsc{pes}\\ and\nconstructs the associated occurrence net, which is then \\emph{enriched} with \nthe reversing events (transitions). The result is a reversible occurrence net.\n\n\\begin{proposition}\\label{prp:rpes_to_cnet}\n Let $\\mathsf{P} = (E, U, <, \\#, \\prec, \\triangleright)$ be a causal r\\textsc{pes}. Then \n $\\mathcal{E}_{r}(\\mathsf{P}) = \\langle B, \\hat{E}, U\\times\\setenum{\\mathtt{r}}, F, \\mathsf{c}\\rangle$ as defined\n in Definition \\ref{de:rpestorcn} is a reversible occurrence net with respect to $U\\times\\setenum{\\mathtt{r}}$.\n\\end{proposition} \n\\begin{theorem}\\label{th:cccrpestorcn}\n Let $\\mathsf{P}$ be a causal \n r\\textsc{pes}. Then \n $X'$ is a configuration of $\\mathcal{E}_{r}(\\mathsf{P})$ iff $X$ \n is a configuration of $\\mathsf{P}$, where $X' = \\setcomp{(e,\\mathtt{f})}{e\\in X}$.\n\\end {theorem}\nClearly, if we start from a reversible causal net, we get a r\\textsc{pes}\\ from which a reversible causal\nnet can be obtained having the same states (up to renaming of events).\n\\begin{corollary}\n Let $R$ be a \\textsc{rcn}. Then $\\states{\\mathcal{E}_{r}(\\mathcal{C}_{r}(R))} = \\states{R}$.\n\\end{corollary}\n\n\\begin{example}\n Consider the r\\textsc{pes}\\ with four events $\\setenum{e_1,e_2,e_3,e_4}$ such that $e_1 < e_3$ and $e_2 < e_4$,\n $e_1$ is in conflict with $e_2$ and this conflict is inherited along $<$. Furthermore, let \n $e_1$ and $e_3$ be reversible, and $e_3 \\triangleright \\underline{e}_1$. The construction \nin Definition~\\ref{de:rpestorcn} gives the net below.\n \\begin{center}\n \\input{rpesrcnfig.tex}\n \\end{center}\n\\end{example}\n\n\n\n\\section{Conclusions and future works}\\label{sec:conc}\nThe constructions we have proposed to associate a reversible causal net to a causal \nreversible prime event structure, and vice versa, are certainly driven by the classical ones \n(see \\cite{Win:ES}) for relating occurrence nets and prime event structures. 
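Before discussing the consequences of this approach, it may help to make the ``enrichment'' step behind Definition~\ref{de:rpestorcn} explicit: on top of the forward occurrence net, every reversible event gets a reversing counterpart whose preset is the postset of the corresponding forward event and whose postset is its preset. The following Python fragment is only an illustrative sketch over our own set-based encoding of nets; it assumes the forward part has already been built (e.g.\ by the classical \textsc{pes}-to-occurrence-net construction) and is not part of the formal development.

\begin{verbatim}
# Sketch (our encoding): add reversing events to a forward occurrence net.
def add_reversing_events(places, events, flow, reversible):
    """events: forward events; flow: set of arcs (x, y); reversible: subset of events."""
    pre  = {e: {b for (b, y) in flow if y == e} for e in events}
    post = {e: {b for (x, b) in flow if x == e} for e in events}

    fwd = {(e, 'f') for e in events}          # forward events, tagged (e, 'f') as in E_r
    rev = {(u, 'r') for u in reversible}      # reversing events (u, 'r')
    new_flow = ({(b, (e, 'f')) for e in events for b in pre[e]} |
                {((e, 'f'), b) for e in events for b in post[e]} |
                {(b, (u, 'r')) for u in reversible for b in post[u]} |  # undo consumes what u produced
                {((u, 'r'), b) for u in reversible for b in pre[u]})    # and puts back what u consumed
    return places, fwd | rev, new_flow
\end{verbatim}

A reversing event $(u,\mathtt{r})$ can thus fire only when all conditions produced by $u$ are marked again, i.e.\ when every event caused by $u$ has itself been reversed; this is exactly the prevention behaviour discussed earlier.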
\nThe consequence of this approach is that the causality relation, either the one given in an r\\textsc{pes}\\ or\nthe one induced by the flow relation in the occurrence net obtained by ignoring the reversing\nevents, is the one driving the construction. One of the other two relations of an r\\textsc{pes}\\ is substantially\nignored (and we obtain from a \\textsc{rcn}\\ a causal r\\textsc{pes}\\ where the reverse causality relation just \nsays that an event can be reversed only after it has occurred), whereas the second (prevention)\nis tightly related to the causality relation: $b$ is caused by $a$ precisely when $b$ prevents \nthe undoing of $a$.\nThe notion of reversible causal net we have proposed suggests this construction, so the problem \nof finding which kind of net would correspond to, for example, a cause-respecting or even\nan arbitrary r\\textsc{pes}\\ remains open and\ncertainly deserves to be investigated.\n\nIt is however interesting to observe that the construction in Definition \\ref{de:rpestorcn}\ngives a reversible causal net even when the r\\textsc{pes}\\ one started with is not a causal r\\textsc{pes}. \n Consider the r\\textsc{pes}\\ with two events $\\setenum{e_1, e_2}$ such that\n $e_1 < e_2$ and where the conflict and the prevention relations are empty.\n The only reversible event is $e_1$ and $e_1 \\prec \\underline{e}_1$.\n The set $\\{e_2\\}$ is a reachable configuration: we can remove $e_1$ from the reachable configuration \n$\\setenum{e_1, e_2}$ by performing the event $\\underline{e}_1$. \nThis is an example of out-of-causal-order computation. \nGiven this r\\textsc{pes}, our construction produces the following \\textsc{rcn}, which does not have $\\{e_2\\}$ among its configurations.\n %\n \\begin{center}\n \\input{counterex.tex}\n \\end{center}\n\n\\noindent\nThe constructions we have proposed are the ones most adherent to what is usually done, based on the \ninterpretation that \\emph{causality} implies that the event causing some other event somehow\nproduces something that is used by the latter. \nThis is not the only interpretation of what causality could mean. \nIn fact, causality is often confused with the observation that two causally related events appear\nordered in the same way in each possible execution, and when we talk about ordered execution, \nit should be stressed that this can be achieved in several ways, for instance using \\emph{inhibitor}\narcs. \nConsider the net $C$\n\\begin{center}\n\\input{inhiscausal.tex}\n\\end{center}\nwhere the event $e_2$ can be executed only after the event $e_1$ has been executed. However, $e_1$ \ndoes not produce a token (resource) that must be used by $e_2$. If we simply make\nthe event $e_1$ reversible but do nothing to prevent the reversing of $e_1$ before $e_2$ is reversed,\nthen we would obtain the net $C'$. We could do better in $C''$, where we model the prevention using the\nso-called \\emph{read arcs} \\cite{MR:CN}. Hence, using inhibitor or read arcs seems a feasible way\nforward to capture more precisely the new relations of r\\textsc{pes} es, including prevention. 
\nA similar approach has already been pursued in\n\\cite{CP:soap17} to model so-called modifiers, which are able to change the causality pattern of\nan event.\nThis suggests that, for arbitrary r\\textsc{pes} es, we need to find relations different from the flow \nrelation to capture faithfully the (forward and reverse) causal and prevention dependencies.\nThis will be the subject of future research.\n\n\n\n\n\\section{Introduction}\n\\label{sec:intro}\nEvent structures and nets are closely related. Since the seminal papers\nby Nielsen, Plotkin and Winskel \\cite{NPW:PNES} and Winskel \\cite{Win:ES}, the relationship between\nnets and event structures has been considered a pivotal characteristic of\nconcurrent systems, as witnessed by numerous papers in the literature addressing\n event structures and nets. \nThe ingredients of an event structure are, besides a set of events, a number of relations that are\nused to express which events can be part of a configuration (a snapshot of a concurrent \nsystem), modelling a consistency predicate, and how events can be added to reach another \nconfiguration, modelling the dependencies among the (sets of) events.\nOn the net side, the ingredients boil down to constraints on how transitions may be executed,\nand usually have a structural flavour.\n\nSince the introduction of event structures there has been a flurry of\ninvestigations into the possible relations among events, giving rise to a number\nof different definitions of event structures.\nWe recall some of them, without any claim of completeness. \nThe first to mention are the classical \\emph{prime} event structures \n\\cite{Win:ES}, where\nthe dependency between events, called \\emph{causality}, is given by a partial order and the consistency\nis determined by a \\emph{conflict} relation.\n\\emph{Flow} event structures \\cite{Bou:FESFN} drop the requirement that the dependency should\nbe a partial order, and \\emph{bundle} event structures \\cite{Langerak:1992:BES} are able to represent \nOR-causality by allowing each event to be caused by a member of a bundle of events.\n\\emph{Asymmetric} event structures \\cite{BCM:CNAED} introduce the notion of weak causality, which can \nmodel asymmetric conflicts. \\emph{Inhibitor} event structures \\cite{BBCP:rivista} are able\nto faithfully capture the dependencies among events which arise in the presence of read and inhibitor arcs. \nIn \\cite{BCP:LenNets} event structures where the causality relation may be circular are\ninvestigated, and in \\cite{AKPN:lmcs18} the notion of dynamic causality is considered.\nFinally, we mention the quite general approach presented in \\cite{GP:CSESPN}, where there is\na unique relation, akin to a \\emph{deduction relation}. \nTo each of the aforementioned event structures \na particular class of nets corresponds. 
To prime event structures we have \n\\emph{occurrence nets}, to flow event structures we have flow nets, to bundle event structures\nwe have \\emph{unravel nets} \\cite{CaPi:PN14}, to asymmetric and inhibitor event structures we\nhave \\emph{contextual nets} \\cite{BCM:CNAED,BBCP:rivista}, to event structures with circular\ncausality we have \\emph{lending nets} \\cite{BCP:LenNets}, to those with dynamic causality we have\n\\emph{inhibitor unravel nets} \\cite{CP:soap17} and finally to the ones presented in \n\\cite{GP:CSESPN} \\emph{1-occurrence nets} are associated.\n\nRecently a new type of event structure tailored to model \\emph{reversible} computation has been\nintroduced and studied \\cite{PU:jlamp15,UPY:RES-NGC18}. In particular, in \\cite{PU:jlamp15},\n\\emph{reversible prime event structures} have been introduced. In this kind of event structure \ntwo relations are added: \nthe \\emph{reverse causality} relation and the \\emph{prevention} relation. The first \none is a standard dependency relation: in order to reverse an event some other events must be\npresent. The second relation, on the contrary, identifies those events whose presence \\emph{prevents}\nthe event from being reversed. \nThis kind of event structure is able to model different\nflavours of reversible computation\nsuch as causal-consistent reversibility~\\cite{rccs,ccsk,rhotcs} and out-of-causal-order \nreversibility~\\cite{PhiUliYuen12,UK16}. \nCausally consistent reversibility relates reversibility with causality: an event can be undone provided that all of its effects have been undone. This allows the system to get back to a past state, which could have been reached by the normal (forward) computation alone.\nThis notion of reversibility is natural in reliable distributed systems since, when an error occurs, the system tries to go back to a past consistent state. Examples of how causally consistent reversibility is used to model reliable systems are \ntransactions~\\cite{DanosK05,LaneseLMSS13} and rollback protocols~\\cite{VassorS18}.\nAlso, causally consistent reversibility can be used for program analysis and debugging \\cite{GiachinoLM14,LanesePV19}. \n On the other hand, out-of-causal-order reversibility does not preserve causes, and it is suitable to model biochemical reactions where, for example, a bond can be undone, leading to a different state which was not present before.\n\nReversibility in Petri nets has been studied in \\cite{PhilippouP18,MMU:coordination19} with two different approaches. \nIn \\cite{PhilippouP18} reversibility for acyclic Petri nets is achieved by\nrelying on a new kind of tokens, called bonds, that\nkeep track of the execution history. Bonds are rich enough to allow other approaches to\nreversibility, such as out-of-causal-order and causally consistent reversibility. In\n \\cite{MMU:coordination19}\na notion of \\emph{unfolding} of a P\/T (place\/transition) net, where all the transitions can be reversed, has been\nproposed. In particular, by resorting to standard notions of Petri net theory, \\cite{MMU:coordination19} provides a causally-consistent reversible semantics \nfor P\/T nets. This exploits the well-known unfolding of P\/T nets into occurrence nets \n\\cite{Win:ES}, and is done by adding, for each transition, its reversible counterpart. \n \n\nIn this paper we start\nour investigation of what kind of nets can be\nassociated to reversible prime event structures. 
To this aim we introduce the notion of\na \\emph{reversible causal net} which is an occurrence net enriched with \ntransitions which operationally\n\\emph{reverse} the effects of executing some others\n\n\nWe associate to each reversing transition (event in the occurrence net dialect) a unique transition, \nwhich is in charge of producing\n the effects that the reversing transition\nhas to undo.\nTo execute a reversing event in a reversible causal net \nthe events caused by the event to be reversed (the one associated to the reversing one)\nhave to be reversed as well, and if this is not possible then the reversing event cannot be\nexecuted. This corresponds, in the reversible event structure, to the fact that such \nevents \\emph{prevent} the reversing event from happening. A reversible causal \nnet where the reversing events \nhave been removed is just an occurrence net.\nThis discussion suggests the easiest way of relating\nreversible causal nets and reversible prime event structure: the causal relation is\nthe one induced by the occurrence net, and the prevention relation is the one induced\nby the events that are in the future of the one to be reversed. The reversible causality relation\nis the basic one: in order to reverse an event the event itself must be present. \nWhat is obtained from a reversible causal net is a \\emph{causal} reversible prime event structure,\nwhich is a subclass of reversible prime event structures. \n\nWhen we start from a causal reversible prime event structure, it is possible to obtain\na reversible causal net. The ingredients that are used are just the causal relation and\nthe set of reversible events. \nThus, if we start from a causal\nreversible prime event structure, the obtained reversible causal net \nhas the same set of configurations.\nHence, the precise correspondence is between causal reversible prime event structures and \nreversible causal nets. \nThis relation is made clear also by turning causal reversible prime event structures and\nreversible causal nets into categories. Then the constructions associating reversible causal nets to \ncausal reversible prime event structures can be turned into functors and these functors form a coreflection.\nThis implies that the notion of reversible causal net is the appropriate one when dealing with causal\nprime event structures.\n\n\\paragraph{Structure of the paper.}\nSection \\ref{sec:preliminaries} reviews some preliminary notions for nets and\nevent structures, including reversible prime event structures. Section \\ref{sec:cn-and-pes} recalls\nthe well-known relationship between prime event structures and occurrence nets. The core of the\npaper is Section \\ref{sec:rcn-and-rpes} where we first introduce reversible causal nets and then\nwe show how to obtain a reversible causal net from an occurrence net. We then \nshow how to associate a causal reversible prime event structure to a reversible causal net, \nand vice versa. \nWe sum up our findings in Section \\ref{sec:category} where we introduce a notion of morphism for \nreversible causal nets and show that this gives a category which is related to the subcategory of causal\nreversible prime event structures.\nSection \\ref{sec:conc} concludes the paper\n\n\n\\section{Omitted Proofs}\\label{sec:proof}\n\\input{proofs}\n\\end{document}\n\n\\section{Occurrence nets and prime event structures}\\label{sec:cn-and-pes}\nWe now review the notion of occurrence nets~\\cite{NPW:PNES,Win:ES}. 
\nGiven a net $N = \\langle S, T, F, \\mathsf{m}\\rangle$,\nwe \nwrite $<_N$ for transitive closure of $F$, and $\\leq_N$ for the reflexive closure of $<_N$.\n We say $N$ is \\emph{acyclic} if $\\leq_N$ is a partial order.\nFor occurrence nets, we adopt the usual convention and refer to places and transitions respectively \n as {\\em conditions} and {\\em events}, and correspondingly use $B$ and \n$E$ for the sets of conditions and events.\n We will often confuse conditions with places and \nevents with transitions.\n\\begin{definition}\n An \\emph{occurrence net} (\\textsc{on}) $C = \\langle B, E, F, \\mathsf{c}\\rangle$ is an acyclic, safe net \n satisfying the\n following restrictions:\n \\begin{itemize}\n \\item $\\forall b\\in \\mathsf{c}$. $\\pre{b} = \\emptyset$,\n %\n \\item $\\forall b\\in B$. $\\exists b'\\in \\mathsf{c}$ such that $b' \\leq_C b$,\n %\n \\item $\\forall b\\in B$. $\\pre{b}$\n is either empty or a singleton,\n \n \\item for all $e\\in E$ the set $\\hist{e} = \\setcomp{e' \\in E}{e'\\leq_C e}$ is finite,\n and\n \n \\item $\\#$ is an irreflexive and symmetric relation defined as follows:\n \n \\begin{itemize}\n \\item $e\\ \\#_0\\ e'$ iff $e, e' \\in E$, $e\\neq e'$ and \n $\\pre{e}\\cap\\pre{e'}\\neq \\emptyset$,\n %\n \\item $x\\ \\#\\ x'$ iff $\\exists y, y'\\in E$ such that $y\\ \\#_0\\ y'$ and $y \\leq_C x$ and \n $y' \\leq_C x'$.\n \\end{itemize}\n \\end{itemize}\n\\end{definition}\nThe intuition behind occurrence nets is the following: each condition $b$ represents \nthe occurrence of a token,\nwhich is produced by the \\emph{unique} event in $\\pre{b}$, \nunless $b$ belongs to the initial marking,\nand it is used by only one transition (hence if $e, e'\\in\\post{b}$, then $e\\ \\#\\ e'$).\nOn an occurrence net $C$ it is natural to define a notion of \\emph{causality} among elements of the \nnet: we say that $x$ is \\emph{causally dependent} on $y$ iff $y \\leq_C x$.\n \nOccurrence nets are often the result of the \\emph{unfolding} of a (safe) net. \nIn this perspective an occurrence net is meant to describe precisely the non-sequential semantics\nof a net, and each reachable marking of the occurrence net corresponds to a reachable marking\nin the net to be unfolded. Here we focus purely on occurrence nets and not on the nets\nthey are unfoldings of.\n\n\\begin{definition}\\label{de:cn-conf}\n Let $C = \\langle B, E, F, \\mathsf{c}\\rangle$ be a \\textsc{on}\\ and \n $X\\subseteq E$ be a subset of events. Then $X$ is a \\emph{configuration} of\n $C$ whenever $\\CF{X}$ and $\\forall e\\in X$. $\\hist{e}\\subseteq X$.\n The set of configurations of the occurrence net $C$ is denoted by $\\Conf{C}{\\textsc{on}}$.\n\\end{definition}\n\nGiven an occurrence net $C = \\langle B, E, F, \\mathsf{c}\\rangle$ and a state\n$X \\in \\states{C}$, it is easy to see that it is \\emph{conflict free}, \\emph{i.e.}\n$\\forall e, e'\\in X$. $e\\neq e'\\ \\Rightarrow\\ \\neg (e\\ \\#\\ e')$, and \\emph{left closed},\n\\emph{i.e.} $\\forall e \\in X$. $\\setcomp{e'\\in E}{e'\\leq_C e}\\subseteq X$.\n\n\\begin{proposition}\\label{pr:states-are-conf}\n Let $C = \\langle B, E, F, \\mathsf{c}\\rangle$ be an occurrence net and $X\\in \\states{C}$. Then\n $X\\in \\Conf{C}{\\textsc{on}}$.\n\\end{proposition}\nWe now recall the connection among occurrence nets and prime event structures~\\cite{Win:ES}.\n\\begin{proposition}\\label{pr:on-to-pes}\n Let $C = \\langle B, E, F, \\mathsf{c}\\rangle$ be an occurrence net. 
We now recall the connection between occurrence nets and prime event structures~\cite{Win:ES}.
\begin{proposition}\label{pr:on-to-pes}
 Let $C = \langle B, E, F, \mathsf{c}\rangle$ be an occurrence net. Then
 $\mathcal{P}(C) = (E, \leq_C, \#)$
 is a \textsc{pes}, and 
 $\Conf{C}{\textsc{on}} = \Conf{\mathcal{P}(C)}{\textsc{pes}}$.
\end{proposition}

\begin{example}\label{ex:cn}
Figure~\ref{fig:occ-nets} illustrates some (finite) occurrence nets. We can associate \textsc{pes} es to them as follows.
The net $C_1$ has two concurrent events, which hence are neither causally ordered nor in conflict;
consequently both $<$ and $\#$ are empty. The two events $e_1$ and $e_2$ in $C_2$ are in conflict,
namely $e_1\ \#\ e_2$, while they are causally ordered in $C_3$, namely $e_1 < e_2$, and not in conflict.
Finally, in $C_4$ we have $e_1 < e_3$ and $e_2 < e_3$.
\end{example}

\section{Preliminaries}\label{sec:preliminaries}
We denote with $\mu A$ the set of \emph{multisets} over a set $A$, i.e. functions $m : A \rightarrow \mathbb{N}$, with multiset union $+$, difference $-$ and inclusion $\subseteq$ defined pointwise, and with $|m| = \sum_{a\in A} m(a)$ the cardinality of a multiset $m$. Given $m\in\mu A$, we write $\flt{m}$ for the multiset defined as $\flt{m}(a) = 1$ if $m(a) > 0$ and $\flt{m}(a) = 0$ otherwise. 
 When a multiset $m$ of $A$ is a set, 
 \emph{i.e.} $m = \flt{m}$, we write
 $a\in m$ to denote that $m(a) \neq 0$, and often confuse the
 multiset $m$ with the set $\setcomp{a\in A}{m(a) \neq 0}$. 
 Furthermore we use the standard set operations like $\cap$, $\cup$ or
 $\setminus$.
 
 Given a set $A$ and a relation $<\ \subseteq A\times A$, we say that $<$ is 
 an \emph{irreflexive} partial order whenever
it is irreflexive and transitive. 
We shall write $\leq$ for the reflexive closure of a partial order $<$.
 
\subsection{Petri nets} 
We review the notion of Petri net along with some auxiliary notions.
\begin{definition}
 A \emph{Petri net} is a 4-tuple 
 $N = \langle S, T, F, \mathsf{m}\rangle$, where
 $S$ is a set of {\em places} and $T$ is a set of {\em transitions} 
 (with $S \cap T = \emptyset$), 
 $F \subseteq (S\times T)\cup (T\times S)$
 is the 
 \emph{flow} relation, and
 $\mathsf{m}\in \mu S$ is called the {\em initial marking}.
 \end{definition}
 Petri nets are depicted as usual. 
 Given a net $N = \langle S, T, F, \mathsf{m}\rangle$ and $x\in S\cup T$, we define the following
 multisets: 
 $\pre{x} = \{y\ |\ (y,x)\in F\}$
 and 
 $\post{x} = \{y\ |\ (x,y)\in F\}$.
 If $x\in S$ then 
 $\pre{x} \in \mu T$ and $\post{x} \in \mu T$;
 analogously, 
 if $x\in T$ then $\pre{x}\in\mu S$ and $\post{x} \in \mu S$.
 A multiset of transitions $A\in \mu T$, called a \emph{step}, 
 is enabled at a marking $m\in \mu S$, denoted by $m\trans{A}$,
 whenever $\pre{A} \subseteq m$, where $\pre{A} = \sum_{x\in\flt{A}}\ A(x)\cdot\pre{x}$. 
\n %\n A step $A$ enabled at a marking $m$ can \\emph{fire} and its firing produces \n the marking $m' = m - \\pre{A} + \\post{A}$, where \n $\\post{A} = \\sum_{x\\in\\flt{A}}\\ A(x)\\cdot\\post{x}$.\n The firing of $A$ at a marking $m$ is denoted by $m\\trans{A}m'$.\n %\n We assume that each transition $t$ of a net $N$ is such that $\\pre{t}\\neq\\emptyset$,\n meaning that no transition may fire \\emph{spontaneously}.\n %\n Given a generic marking $m$ (not necessarily the initial one), \n the (step) \\emph{firing sequence} ({shortened as} \\textsf{fs}) of \n $N = \\langle S, T, F, \\mathsf{m}\\rangle$ starting at $m$ is\n defined as: \n ($i$) $m$ is a firing sequence (of length 0), and \n %\n ($ii$) if $m\\trans{A_1}m_1$ $\\cdots$ $m_{n-1}\\trans{A_n}m_n$ is a firing sequence \n and $m_n\\trans{A}m'$, \n then also $m\\trans{A_1}m_1$ $\\cdots$ $m_{n-1}\\trans{A_n}m_n\\trans{A}m'$\n is a firing sequence.\n %\n Let us note that each step $A$ such that $|A| = n$ \n can be written as $A_1 + \\cdots + A_n$ where for each $1 \\leq i\\leq n$ it holds that \n $A_i = \\flt{A_i}$ and $|A_i| = 1$, and $m\\trans{A}m'$ iff for each decomposition\n of $A$ in $A_1 + \\cdots + A_n$, we have that $m\\trans{A_1}m_1\\dots m_{n-1}\\trans{A_n}m_n = m'$.\n %\n When $A$ is a singleton, \\emph{i.e.}\n $A = \\setenum{t}$, we write \n $m\\trans{t}m'$. \n\n The set of firing sequences of a net $N$ \n starting at a marking $m$ is denoted by $\\firseq{N}{m}$ and it is ranged over by $\\sigma$.\n Given a \\textsf{fs}\\ $\\sigma = m\\trans{A_1}\\sigma'\\trans{A_n}m_n$, we denote with\n $\\start{\\sigma}$ the marking $m$, with $\\lead{\\sigma}$ the marking $m_n$ \n and with $\\remains{\\sigma}$ the \n \\textsf{fs}\\ $\\sigma'\\trans{A_n}m_n$. \n %\n Given a net $N = \\langle S, T, F, \\mathsf{m}\\rangle$, a marking $m$ is \\emph{reachable} \n iff there exists a \n \\textsf{fs}\\ $\\sigma \\in \\firseq{N}{\\mathsf{m}}$ \n such that $\\lead{\\sigma}$ is $m$. \n %\n The set of reachable markings of $N$ is\n $\\reachMark{N} = \\bigcup_{\\sigma\\in\\firseq{N}{\\mathsf{m}}} \\lead{\\sigma}$.\n %\n Given a \\textsf{fs}\\ $\\sigma = m\\trans{A_1}m_1\\cdots m_{n-1}\\trans{A_n}m'$, \n we write \n $X_{\\sigma} = \\sum_{i=1}^{n} A_i$\n for the multiset of transitions associated to \\textsf{fs}. \nWe call $X_{\\sigma}$ a \\emph{state} of the net and write\n \\(\n \\states{N} = \\setcomp{X_{\\sigma}\\in \\mu T}{\\sigma\\in\\firseq{N}{\\mathsf{m}}}\n \\)\n for the set of states of $N$. \n \n \n\n\\begin{definition} \n A net $N = \\langle S, T, F, \\mathsf{m}\\rangle$ is said to be \\emph{safe} if\n each marking $m\\in \\reachMark{N}$ is such that $m = \\flt{m}$. \n\\end{definition}\n\nIn this paper we consider safe nets $N = \\langle S, T, F, \\mathsf{m}\\rangle$ \nwhere each transition can be fired, \\emph{i.e.}\n$\\forall t\\in T\\ \\exists m\\in \\reachMark{N}.\\ m\\trans{t}$, and each place is marked in a \ncomputation, \\emph{i.e.} $\\forall s\\in S\\ \\exists m\\in \\reachMark{N}.\\ m(s) = 1$.\n\n\\subsection{Prime event structures}\n\nWe now recall the notion of prime event structure~\\cite{Win:ES}.\n\n\\begin{definition}\\label{de:pes-winskel}\n A \\emph{prime event structure ({\\textsc{pes}})} is a triple $P = (E, <, \\#)$, where \n \\begin{itemize}\n \\item $E$ is a countable set of \\emph{events},\n \\item $<\\ \\subseteq E\\times E$ is\nan irreflexive partial order called the \\emph{causality relation}, such that\n $\\forall e\\in E$. 
$\\setcomp{e'\\in E}{e' < e}$ is finite, and \n \\item $\\#\\ \\subseteq E\\times E$ is a \\emph{conflict relation}, which is irreflexive,\n symmetric and \\emph{hereditary} relation with respect to $<$: if $e\\ \\#\\ e' < e''$, then\n $e\\ \\#\\ e''$ for all $e,e',e'' \\in E$.\n \\end{itemize} \n\\end{definition}\n\nGiven an event $e\\in E$, $\\hist{e}$ denotes the set $\\setcomp{e'\\in E}{e'\\leq e}$. \n{A subset of events $X \\subseteq E$ is left-closed if $\\forall e\\in X. \n\\hist{e}\\subseteq X$.\n} \nGiven a\n subset $X\\subseteq E$ of events, we say that $X$ is \\emph{conflict free} iff \nfor all $e, e'\\in X$ it holds that \n$e\\neq e'\\ \\Rightarrow\\ \\neg(e\\ \\#\\ e')$, and we denote it with $\\CF{X}$. \nGiven $X\\subseteq E$ such that $\\CF{X}$ and $Y\\subseteq X$, then also $\\CF{Y}$.\n\nWhen adding reversibility to \\textsc{pes} es, conflict heredity may not hold.\nTherefore, we rely on a weaker form of \\textsc{pes}\\ by following the approach in~\\cite{PU:jlamp15}.\n\\begin{definition}\\label{de:pre-pes}\n A \\emph{pre-\\textsc{pes}} (p\\textsc{pes}) is a triple $P = (E, <, \\#)$, where\n \\begin{itemize}\n \\item $E$ is a set of \\emph{events},\n \\item $\\#\\ \\subseteq E\\times E$ is an irreflexive and symmetric relation,\n \\item $<\\ \\subseteq E\\times E$ is an irreflexive partial order such that\nfor every $e\\in E$. $\\setcomp{e'\\in E}{e' < e}$ is finite and\n conflict free, and\n \\item $\\forall e, e'\\in E$. if $e < e'$ then not $e\\ \\#\\ e'$.\n \\end{itemize}\n \\end{definition}\n{A p\\textsc{pes}\\ is a prime event structure in which conflict heredity does not hold, \nand since every \\textsc{pes}\\ is also a p\\textsc{pes}\\ the notions and results stated below \nfor p\\textsc{pes} es also apply to \\textsc{pes} es.\n}\n\n\\begin{definition}\\label{de:ppes-conf-enab}\n Let $P = (E, <, \\#)$ be a p\\textsc{pes}\\ and\n \n $X\\subseteq E$ such that $\\CF{X}$.\n For $A\\subseteq E$, we say that $A$ is \\emph{enabled}\n at $X$ if\n \\begin{itemize}\n \\item $A\\cap X = \\emptyset$ and $\\CF{X\\cup A}$, and\n \\item $\\forall e\\in A$. if $e' < e$ then $e'\\in X$.\n \\end{itemize}\n If $A$ is enabled at $X$, then $X \\stackrel{A}{\\longrightarrow} Y$ where $Y = X\\cup A$.\n \\end{definition}\n\\begin{definition}\\label{de:rpes-forwconf}\n Let $P = (E, <, \\#)$ be a p\\textsc{pes}\\ and\n \n {$X\\subseteq E$ s.t. $\\CF{X}$}.\n $X$ is a \\emph{forwards reachable configuration} if there exists a sequence\n \n {$A_1,\\ldots,A_n$}, such that\n \\[ X_i\\stackrel{A_i}{\\longrightarrow} X_{i+1} \\ \\mathit{ for all}\\ i, \\ \\mathit{and}\\\n X_1 = \\emptyset \\ \\mathit{and}\\ X_{n+1}=X.\n \\]\nWe write $\\Conf{P}{p\\textsc{pes}}$ for the set of all (forwards reachable) configurations of $P$.\n \\end{definition}\nWhen a p\\textsc{pes}\\ is a \\textsc{pes}\\ we shall write $\\Conf{P}{\\textsc{pes}}$ instead of $\\Conf{P}{p\\textsc{pes}}$, with \n$\\Conf{P}{\\textsc{pes}}=\\Conf{P}{p\\textsc{pes}}$ holding.\nFrom a p\\textsc{pes}\\ a \\textsc{pes}\\ can be obtained.\n\n\n\\begin{definition}\\label{de:hereditary-closure}\n Let $P = (E, <, \\#)$ be a p\\textsc{pes}. 
Then $\\mathsf{hc}(P) = (E, <, \\sharp)$ is the \n \\emph{hereditary closure} of $P$, where $\\sharp$ is\n derived by using the following rules \n \\[ \n\\begin{array}{ccccc}\n\\infer{e\\ \\sharp\\ e'}{e\\ \\#\\ e'} & \\qquad & \\infer{e\\ \\sharp\\ e''}{e\\ \\sharp\\ e'\\ \\ \\ \\ \\ e' < e''}\n & \\qquad & \\infer{e\\ \\sharp\\ e'}{e'\\ \\sharp\\ e}\\\\\n\\end{array}\n\\]\n\\end{definition}\nThe following proposition relates p\\textsc{pes}\\ to \\textsc{pes}\\ \\cite{PU:jlamp15}.\n\\begin{proposition}\\label{pr:ppes-prop}\n Let $P = (E, <, \\#)$ be a p\\textsc{pes}, then\n \\begin{itemize}\n \\item $\\mathsf{hc}(P) = (E, \\leq, \\sharp)$ is a \\textsc{pes},\n \\item if $P$ is a \\textsc{pes}, then $\\mathsf{hc}(P) = P$, and\n \\item $\\Conf{P}{p\\textsc{pes}} = \\Conf{\\mathsf{hc}(P)}{\\textsc{pes}}$.\n \\end{itemize}\n\\end{proposition}\n\n\\subsection{Reversible prime event structures} \nWe now focus on the notion of \n\\emph{reversible prime event structure}. The definitions and the results in this subsection \nare drawn from \\cite{PU:jlamp15}. In reversible event structures some \nevents are categorised as \\emph{reversible}. The added relations\nare among events and those representing the \\emph{actual} undoing of the reversible events.\nThe undoing of events is represented by \\emph{removing} them (from a configuration), and is achieved\nby \\emph{executing} the appropriate \\emph{reversing} events.\n\n\\begin{definition}\\label{de:rpes}\n A \\emph{reversible prime event structure} (r\\textsc{pes}) is the tuple \n $\\mathsf{P} = (E, U, <, \\#, \\prec, \\triangleright)$ where \n $(E, <, \\#)$ is a p\\textsc{pes}, $U\\subseteq E$ are the\n \\emph{reversible\/undoable}\n\n events\n (with reverse events being denoted by \n {$\\underline{U} = \\setcomp{\\underline{u}}{u\\inU}$ and disjoint from $E$, i.e., $\\un U \\cap E = \\emptyset$})\nand\n { \\begin{itemize}\n \\item $\\triangleright\\ \\subseteq E\\times \\underline{U}$ is the \\emph{prevention} relation,\n %\n \\item $\\prec\\ \\subseteq E\\times \\underline{U}$ is the \\emph{reverse causality} relation and\n it is such that $u\\prec \\underline{u}$ for each $u\\inU$ and \n $\\setcomp{e\\in E}{e\\prec\\underline{u}}$ is finite and conflict-free for every $u\\inU$, \n %\n \\item if $e \\prec \\underline{u}$ then not $e \\triangleright \\underline{u}$,\n %\n \\item the \n \\emph{sustained causation} $\\ll$ is a transitive relation defined such that $e \\ll e'$ if $e < e'$ and if $e\\inU$, then \n $e' \\triangleright \\underline{e}$, and \n %\n \\item $\\#$ is hereditary with respect to $\\ll$: if $e\\ \\#\\ e' \\ll e''$, then $e\\ \\#\\ e''$.\n \\end{itemize}\n}\n\\end{definition}\nThe ingredients of a r\\textsc{pes}\\ partly overlap with those of a \\textsc{pes}: there is a causality\nrelation ($<$) and a conflict one ($\\#$) and the two are related by the \\emph{sustained causation} relation $\\ll$. The new ingredients are the \n\\emph{prevention} relation and the \\emph{reverse causality} relation.\nThe prevention relation states that certain events should be absent when trying to reverse an event, e.g., \n{$e\\triangleright \\underline{u}$}\nstates that {$e$} should be absent when reversing {$u$}. The reverse causality relation\n{$e \\prec \\underline{u}$} says that \n{$\\underline{u}$} can be executed \nonly when $e$ is present.\n\n\n \n\\begin{example}\\label{ex:cr not causal}\nLet $\\mathsf{P} = (E,U,<,\\#,\\prec,\\mathrel{\\triangleright})$ where\n$E = U = \\{a,b,c\\}$, $a