\section*{Background}
In light of increased vaccine hesitancy in various countries, consistent monitoring of public beliefs and opinions about the national immunization program is important. Besides performing qualitative research and surveys, real-time monitoring of social media data about vaccination is a valuable tool to this end. Its advantages are that possible vaccine concerns can be detected and responded to in a timely manner, that it generates continuous data, and that the data consist of unsolicited, voluntary user-generated content.

Several studies that analyse tweets have already been conducted, providing insight into the content that was tweeted most during the 2009 H1N1 outbreak \cite{Chew+10}, the information flow between users with a certain sentiment during this outbreak \cite{Salathe+11}, or trends in tweets that convey, for example, worries about the efficacy of HPV vaccines \cite{Du+17,Massey+16}. While human coders are best at deploying world knowledge and interpreting the intention behind a text, manual coding of tweets is laborious. The above-mentioned studies therefore aimed at developing and evaluating a system to code tweets automatically. There are several systems in place that make use of this automatic coding. The Vaccine Confidence Project \cite{Larson+13} is a real-time worldwide internet monitor for vaccine concerns. The Europe Media Monitor (EMM) \cite{Linge+09} was installed to support EU institutions and Member State organizations with, for example, the analysis of real-time news for medical and health-related topics and with early warning alerts per category and country. MEDISYS, derived from the EMM and developed by the Joint Research Centre of the European Commission \cite{Rortais+10}, is a media monitoring system providing event-based surveillance to rapidly identify potential public health threats based on information from media reports.

These systems cannot be used directly for the Netherlands because they do not contain search words in Dutch, are missing an opinion-detection functionality, or do not include categories of the proper specificity. Furthermore, opinions towards vaccination are contextualized by national debates rather than a multinational debate \cite{Becker+16}, which implies that a system for monitoring vaccination stance on Twitter should ideally be trained on and applied to tweets of the same language and nationality. Finally, by creating an automatic system for mining public opinion on vaccination concerns, one can continue to train and adapt the system. We therefore believe it will be valuable to build our own system.

Besides analysing the content of tweets, several other applications that use social media with regard to vaccination have been proposed. They, for example, use data about internet search activity and numbers of tweets as a proxy for (changes in) vaccination coverage or for estimating epidemiological patterns. Huang et al. \cite{Huang+17} found a high positive correlation between reported influenza attitude and behavior on Twitter and influenza vaccination coverage in the US. In contrast, Aquino et al.
\cite{Aquino+17} found an inverse correlation between Mumps, Measles, Rubella (MMR) vaccination coverage and tweets, Facebook posts and internet search activity about autism and the MMR vaccine in Italy. This outcome was possibly due to a decision of the Court of Justice in one of the regions to award vaccine-injury compensation for a case of autism. Wagner, Lampos, Cox and Pebody \cite{Wagner+18} assessed the usefulness of geolocated Twitter posts and Google search as source data to model influenza rates, by measuring their fit to the traditional surveillance outcomes and analyzing the data quality. They found that Google search could be a useful alternative to the regular means of surveillance, while Twitter posts did not correlate well, due to their lower volume and bias in demographics. Lampos, De Bie and Cristianini \cite{Lampos+10} also make use of geolocated Twitter posts, in their case to track epidemics, and present a monitoring tool with a daily flu-score based on weighted keywords.

Various studies \cite{Nagar+14, Kim+13, Signorini+11} show that estimates of influenza-like illness symptoms mentioned on Twitter can be exploited to track reported disease levels relatively accurately. However, other studies \cite{Vasterman+13, Mollema+15} showed that this was only the case when looking at severe cases (e.g. hospitalizations, deaths) or only for the start of the epidemic, when interest from journalists was still high.

Other research focuses on detecting discussion communities on vaccination on Twitter \cite{Bello-orgaz+17} or analysing semantic networks \cite{Kang+17} to identify the most relevant and influential users as well as to better understand complex drivers of vaccine hesitancy for public health communication. Tangherlini et al. \cite{Tangherlini+16} explore what can be learned about the vaccination discussion from the realm of "mommy blogs": parents posting messages about children's health care on forum websites. They aim to obtain insights into the underlying narrative frameworks, and analyse the topics of the messages using Latent Dirichlet Allocation (LDA) \cite{Blei+03}. They find that the most prominent frame is a focus on the exemption of one's child from receiving a vaccination in school. The motivation against vaccination is most prominently based on personal beliefs about health, but could also be grounded in religion. Surian et al. \cite{Surian+16} also apply topic modeling to distinguish dominant opinions in the discussion about vaccination, and focus on HPV vaccination as discussed on Twitter. They find a common distinction between tweets reporting on personal experience and tweets that they characterize as `evidence' (statements of having had a vaccination) or `advocacy' (statements that support vaccination).

Most similar to our work is the study by Du, Xu, Song, Liu and Tao \cite{Du+17}. With the ultimate aim of improving vaccine uptake, they applied supervised machine learning to analyse the stance towards vaccination as conveyed on social media. Messages were labeled as either related to vaccination or unrelated, and, when related, as `positive', `negative' or `neutral'. The `negative' category was further broken down into several considerations, such as `safety' and `cost'. After having annotated 6,000 tweets, they trained a classifier on different combinations of features, obtaining the highest macro F1-score (the average of the separate F1-scores for each prediction category) of $0.50$ and micro F1-score (the F1-score over all predictions) of $0.73$. Tweets with a negative stance that point to safety risks could best be predicted, at an optimal F1-score of $0.75$, while the other five sub-categories with a negative stance were predicted at an F1-score below $0.5$ or even $0.0$.
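For readers less familiar with these two aggregate measures, the minimal Python sketch below (using scikit-learn, with hypothetical toy labels rather than the data of Du et al.) illustrates the difference between the macro and micro F1-score:

\begin{verbatim}
from sklearn.metrics import f1_score

# Hypothetical gold labels and predictions; not the data of Du et al.
y_true = ["negative", "negative", "positive", "positive", "neutral", "neutral"]
y_pred = ["negative", "positive", "positive", "positive", "neutral", "negative"]

# Macro F1: the unweighted average of the per-category F1-scores.
print(f1_score(y_true, y_pred, average="macro"))
# Micro F1: the F1-score computed over all predictions pooled together.
print(f1_score(y_true, y_pred, average="micro"))
\end{verbatim}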
Like Du et al. \cite{Du+17}, we focus on analysing sentiment about vaccination using Twitter as a data source, applying supervised machine learning approaches to extract public opinion from tweets automatically. In contrast, our evaluation concentrates on detecting messages with a negative stance in particular. Accurately monitoring such messages helps to recognize discord at an early stage and take appropriate action. We do train machine learning classifiers to model categories other than the negative stance, evaluating whether this is beneficial for detecting tweets with a negative stance. For example, we study whether it is beneficial to this task to also model tweets with a positive or neutral stance. We also inquire whether a more fine-grained categorization of sentiment (e.g.: worry, relief, frustration and informing) offers an advantage. Apart from comparing performance in the context of different categorizations, we compare different machine learning algorithms and compare training data with different levels of annotation reliability. Finally, the performance of the resulting systems is compared to the regular sentiment analysis common to social media monitoring dashboards. At the public health institute in the Netherlands, we make use of the social media monitoring tools offered by Coosto\footnote{\url{https://www.coosto.com/en}}. To define whether a message is positive, negative or neutral with regard to vaccination, this system makes use of the presence or absence of positive or negative words in the messages. We believe that we could increase the sensitivity and specificity of the sentiment analysis by using supervised machine learning approaches trained on a manually coded dataset. The performance of our machine learning approaches is therefore compared to the sentiment analysis that is currently applied in the Coosto tool.

\section*{Implementation}

We set out to curate a corpus of tweets annotated for their stance towards vaccination, and to employ this corpus to train a machine learning classifier to distinguish tweets with a negative stance towards vaccination from other tweets. In the following, we describe the stages of data acquisition, from collection to labeling.

\subsection*{Data collection}
We queried Twitter messages that refer to a vaccination-related key term from TwiNL\footnote{\url{https://twinl.surfsara.nl/}}, a database with IDs of Dutch Twitter messages from January 2012 onwards \cite{TjongKimSang+13}. In contrast to the open Twitter Search API\footnote{\url{https://developer.twitter.com/en/docs/tweets/search/api-reference}}, which only allows one to query tweets posted within the last seven days, TwiNL makes it possible to collect a much larger sample of Twitter posts, spanning several years.

We queried TwiNL for different key terms that relate to the topic of vaccination in a five-year period, ranging from January 1, 2012 until February 8, 2017. Query terms that we used were the word `vaccinatie' (Dutch for `vaccination') and six other terms closely related to vaccination, with and without a hashtag (`\#'). Among the six words is `rijksvaccinatieprogramma', which refers to the vaccination programme in the Netherlands.
An overview of all query terms along with the number of tweets that could be collected based on them is displayed in Table \ref{tab:queried}.

We collected a total of 96,566 tweets from TwiNL, which we filtered in a number of ways. First, retweets were removed, as we wanted to focus on unique messages.\footnote{Although senders can add original content to retweets, this was only done in a small part of the retweets in our dataset. It was therefore most effective to remove them.} This led to a removal of 31\% of the messages. Second, we filtered out messages that contain a URL. Such messages often share a news headline and include a URL to refer to the complete news message. As a news headline does not reflect the stance of the person who posted the tweet, we decided to apply this filtering step. It is likely that part of the messages with a URL do include a message composed by the sender themselves, but this step helps to filter out many unwanted messages. Third, we removed messages that include a word related to animals or travelling (`dier', animal; `landbouw', agriculture; and `teek', tick), as we strictly focus on messages that refer to vaccination that is part of the governmental vaccination program. After filtering, 27,534 messages were left. This is the data set that is used for experimentation.
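The sketch below illustrates how such a filtering pipeline could be implemented in Python. The tweet fields and the exact matching rules are our own assumptions; the paper's filtering was performed on the TwiNL output and may have used different heuristics:

\begin{verbatim}
import re

BLACKLIST = ("dier", "landbouw", "teek")  # animal- and travel-related terms

def keep_tweet(text, is_retweet):
    """Return True if a tweet survives the three filtering steps."""
    lowered = text.lower()
    if is_retweet or lowered.startswith("rt @"):    # 1. remove retweets
        return False
    if re.search(r"https?://\S+", lowered):         # 2. remove tweets with a URL
        return False
    if any(term in lowered for term in BLACKLIST):  # 3. remove blacklisted terms
        return False
    return True

tweets = [("Vandaag mijn kind laten vaccineren", False)]  # hypothetical records
kept = [text for text, retweet in tweets if keep_tweet(text, retweet)]
\end{verbatim}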
& \"Lethal virus has been fatal to at least twelve rabbits in Hellevoetsluis. Veterinarians sound the alarm: get inoculation \\#ADRD \\#VoornePutten\" \\\\ \\hline\n\\multirow{3}{*}{Subject} &\tVaccine & If the message contains an expression about the vaccine. & \"Rutte: pastors please encourage inoculation measles\" \\\\ \\cline{2-4}\n &\tDisease & If the message contains an expression about the disease. & \"I am not happy. I have the chickenpox, which is not in the governmental vaccination program.\" \\\\ \\cline{2-4}\n &\tVaccine and disease & If the message contains an expression about both the vaccine and disease. & \"I think the whooping-cough disease is rather significant, too bad the vaccine does not have much effect.\" \\\\ \\hline\n\\multirow{4}{*}{Stance} &\tPositive & If one is positive with regard to vaccination and\/or believes the vaccine preventable disease is severe. & \"To inoculate against the measles is at least better than not inoculating. The reformed church is also divided about this.\" \\\\ \\cline{2-4}\n &\tNegative & If one is negative towards vaccination and\/or believes the vaccine preventable disease is not severe. & \"Did you ever check the number of casualties as a result of vaccination? Now those are really in vain. One does not die from \\#measles\" \\\\ \\cline{2-4}\n &\tNeutral & If one takes a neutral stance towards vaccination and if one only wants to inform others. & \"[anonymized name] : inoculating at home \\#measles at \\#refo's\" \\\\ \\cline{2-4}\n &\tNot clear & If from the message it is not clear whether one is positive or negative, if both polarities are present, or if the message is about a related topic such as information about vaccination. & \"Facts and opinions related to \\#HPV vaccination: why is it almost impossible to find them on the website of \\#RIVM?\" \\\\ \\hline\n\\multirow{5}{*}{Sentiment} &\tInformative & If one wants to inform others. & \"GGGD\\_Utrecht: Today the GG\\&GD will start vaccinating all 9-year olds against DTP and BMR. This applies to 3395 kids in Utrecht!\" \\\\ \\cline{2-4}\n &\tAnger, frustration & If one is angry about people who vaccinate or do not vaccinate. & \"Measles epidemic in the \\#biblebelt. Incomprehensible that the love for God can be greater than the love for one's own child.\" \\\\ \\cline{2-4}\n &\tWorry, fear, doubts & If one is worried about side-effects of the vaccine or about the severity of the disease; if one has doubts to vaccinate. & \"I will watch zorg.nu in a bit. This time I am doubtful once more as to whether I should have my youngest daughter inoculated against cervical cancer.\" \\\\ \\cline{2-4}\n &\tRelieved & If one is relieved that the vaccination has been administered or that he\/she recovered from the disease. & \"I am happy that the vaccination is over with.\" \\\\ \\cline{2-4}\n &\tOther & If one expresses another sentiment than those mentioned above, such as humor, sarcasm (see example), personal experience, question raised, or minimized risks. & \"What a genius idea of the doctor to vaccinate me for yellow fever, polio, meningitis, and hepatitis A, all in once! Bye bye weekend.. \" \\\\ \\hline\n\\end{tabular}\n\\end{scriptsize}\n\\label{tab:examples}\n\\end{table}\n\nThe stance towards vaccination was categorized into `Negative', `Neutral', `Positive' and `Not clear'. The latter category was essential, as some posts do not convey enough information about the stance of the writer. 
In addition to the four-valued {\bf stance} classes we included separate classes grouped under {\bf relevance}, {\bf subject} and {\bf sentiment} as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting.\footnote{We give a full overview of the annotated categories, to be exact about the decisions made by the annotators. However, we did not include all annotation categories in our classification experiment. A motivation will be given in the `Data categorization' section.}

The {\bf relevance} categories were divided into `Relevant', `Relevant abroad' and `Irrelevant'. Despite our selection of vaccination-related keywords, tweets that mention these words might not refer to vaccination at all. A word like `vaccine' might be used in a metaphorical sense, or the tweet might refer to vaccination of animals.

The {\bf subject} categorization was included to describe what the tweet is primarily about: `Vaccine', `Disease' or `Both'. We expected that a significant part of the tweets would focus on the severity of a disease when discussing vaccination. Distinguishing these tweets could help the detection of the stance as well.

Finally, the {\bf sentiment} of tweets was categorized into `Informative', `Angry/Frustration', `Worried/Fear/Doubts', `Relieved' and `Other', where the latter category lumps together occasional cases of humor, sarcasm, personal experience, and questions raised. These categories were based on the article by Chew and Eysenbach \cite{Chew+10}, and emerged from analysing their H1N1-related tweets. The `Informative' category refers to a typical type of message in which information is shared, potentially in support of a negative or positive stance towards vaccination. If a message contained more than one sentiment, the first sentiment identified was chosen. Table \ref{tab:examples} shows examples of tweets for the above-mentioned categories.

We aimed to annotate a sufficient number of tweets to train a machine learning classifier. The majority of tweets were annotated twice. We built an annotation interface catered to the task. Upon being presented with the text of a Twitter post, the annotator was first asked whether the tweet was relevant. If it was deemed relevant, the tweet could be annotated for the other categorizations. Otherwise, the user could click `OK', after which he or she was directly presented with a new Twitter post. The annotator was presented with sampled messages that were either not annotated yet or annotated once. We ensured a fairly equal distribution of these two types, so that most tweets would be annotated twice.

As annotators, we hired four student assistants and additionally made use of the Radboud Research Participation System.\footnote{\url{https://radboud.sona-systems.com}}
We asked participants to annotate for the duration of an hour, in exchange for a voucher valued at ten euros, or one course credit. Before starting the annotation, the participants were asked to read the annotation manual, with examples and an extensive description of the categories, and were presented with a short training round in which feedback on their annotations was given. The annotation period lasted for six weeks. We stopped when the number of applicants dropped.
A total of 8,259 tweets were annotated, of which 6,472 were annotated twice (78\%).\footnote{The raw annotations by tweet identifier can be downloaded from \url{http://cls.ru.nl/~fkunneman/data_stance_vaccination.zip}} In total, 65 annotators joined the study, with an average of $229.5$ annotated tweets per person. The number of annotations per person varied considerably, with $2,388$ tweets coded by the most active annotator. This variation is due to the different ways in which annotators were recruited: student assistants were recruited for several days, while participants recruited through the Radboud Research Participation System could only join for the duration of an hour.

\begin{table}[t]
\caption{Agreement scores for all four categorizations; the mutual F-score is reported by category.}
\begin{scriptsize}
\begin{tabular}{ l | l l l l l l l l }
\hline
& \multicolumn{2}{c}{Relevance} & \multicolumn{2}{c}{Subject} & \multicolumn{2}{c}{Stance} & \multicolumn{2}{c}{Sentiment} \\ \hline
Percent agreement & \multicolumn{2}{r}{0.71} & \multicolumn{2}{r}{0.70} & \multicolumn{2}{r}{0.54} & \multicolumn{2}{r}{0.54} \\
Krippendorff's Alpha & \multicolumn{2}{r}{0.27} & \multicolumn{2}{r}{0.29} & \multicolumn{2}{r}{0.35} & \multicolumn{2}{r}{0.34} \\
Mutual F-score & Relevant & 0.81 & Vaccine & 0.79 & Negative & 0.42 & Worry, fear, doubts & 0.21 \\
& Relevant abroad & 0.40 & Disease & 0.06 & Neutral & 0.23 & Anger, frustration & 0.50 \\
& Irrelevant & 0.42 & Vaccine and disease & 0.49 & Positive & 0.64 & Informative & 0.49 \\
& & & & & Not clear & 0.31 & Relieved & 0.19 \\
& & & & & & & Other & 0.20 \\
\hline
\end{tabular}
\end{scriptsize}
\label{tab:agreement}
\end{table}

We calculated inter-annotator agreement by Krippendorff's Alpha \cite{Hayes+07}, which accounts for different annotator pairs and empty values. To also zoom in on the particular agreement by category, we calculated mutual F-scores for each of the categories. This metric is typically used to evaluate system performance by category on gold standard data, but can also be applied to annotation pairs by alternating the roles of the two annotators between classifier and ground truth. A summary of the agreement by categorization is given in Table \ref{tab:agreement}. While both the Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\alpha=0.27$ and $\alpha=0.29$. The percent agreement on Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\alpha=0.35$ and $\alpha=0.34$. The mutual F-scores show marked differences in agreement by category, where the categories that were annotated most often typically yield a higher score. This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$). The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$). We found that these two categories are often confused. After combining their annotations, the stance agreement increases to $\alpha=0.43$.
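As an illustration of how these two agreement statistics can be computed, consider the sketch below. It uses scikit-learn for the per-category F1-scores and the third-party \texttt{krippendorff} package for the alpha coefficient; the integer label encoding and the toy annotations are our own assumptions, not the paper's data:

\begin{verbatim}
import numpy as np
import krippendorff  # pip install krippendorff
from sklearn.metrics import f1_score

# Toy stance annotations by two coders, encoded as integers
# (0=Negative, 1=Neutral, 2=Positive, 3=Not clear); np.nan = not annotated.
coder_a = [0, 1, 2, 2, 0, np.nan]
coder_b = [0, 2, 2, 1, 0, 2]

alpha = krippendorff.alpha(reliability_data=[coder_a, coder_b],
                           level_of_measurement="nominal")

def mutual_f1(a, b, category):
    """Per-category F1 with the two coders alternating as 'classifier'
    and 'ground truth'; the two directions are averaged."""
    pairs = [(x, y) for x, y in zip(a, b)
             if not (np.isnan(x) or np.isnan(y))]   # doubly annotated only
    gold, pred = zip(*pairs)
    f_ab = f1_score(gold, pred, labels=[category], average="macro")
    f_ba = f1_score(pred, gold, labels=[category], average="macro")
    return (f_ab + f_ba) / 2

print(alpha, mutual_f1(coder_a, coder_b, category=0))
\end{verbatim}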
The rather low agreement over the annotation categories indicates the difficulty of interpreting stance and sentiment in tweets that discuss the topic of vaccination. We therefore proceed with caution to categorize the data for training and testing our models. The agreed upon tweets will form the basis of our experimental data, as was proposed by Jakubíček, Kovář and Rychlý \cite{Jakubicek+14}, while the other data is added as additional training material to see if the added quantity is beneficial to performance. We will also annotate a sample of the agreed upon tweets, to make sure that these data are reliable in spite of the low agreement rate.

\subsection*{Data categorization}
The labeled data that we composed based on the annotated tweets are displayed in Table \ref{tab:labeled_data}. We combined the Relevant and Relevant abroad categories into one category (`Relevant'), as only a small part of the tweets was annotated as Relevant abroad. We did not make use of the {\bf subject} annotations, as only a small minority of the relevant tweets referred to a disease only. For the most important categorization, {\bf stance}, we included all annotated labels. Finally, we combined the more frequent sentiment categories with the Positive stance label.

We distinguish three types of labeled tweets: `strict', `lax' and `one'. The strictly labeled tweets were labeled by both annotators with the same label. The lax labels describe tweets that were annotated twice but assigned a certain category by only one of the two coders. To decide on the lax labels, the categories were ordered by importance. For instance, in case of the third categorization, Negative was preferred over Positive, followed by Neutral, Not clear and Irrelevant. If one of the annotators labeled a tweet as Positive and the other as Neutral, the lax label for this tweet is Positive. In Table \ref{tab:labeled_data}, the categories are ordered by preference as imposed on the lax labeling. The `one' labeling applies to all tweets that were annotated by only one annotator. Note that the total counts can differ between label categorizations due to the lax labeling: the counts for Positive labels in the Polarity + Sentiment labeling (Positive + Frustration, Positive + Information and Positive + Other) do not add up to the count of the Positive label in the Polarity labeling.

With the `strict', `lax' and `one' labeling, we end up with four variants of data to experiment with: only strict, strict + lax, strict + one and strict + lax + one. The strict data, which are most reliable, are used in all variants. By comparing different combinations of training data, we test whether the addition of less reliably labeled data (lax and/or one) boosts performance.
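A minimal sketch of how such a preference ordering can be applied to resolve annotator disagreements, shown here for the Polarity labeling (the function name and structure are our own):

\begin{verbatim}
# Preference order for the Polarity labeling, most important first.
PREFERENCE = ["Negative", "Positive", "Neutral", "Not clear", "Irrelevant"]

def resolve_labels(label_a, label_b):
    """Return the 'strict' label on agreement, the 'lax' label otherwise."""
    if label_a == label_b:
        return label_a, "strict"
    for category in PREFERENCE:
        if category in (label_a, label_b):
            return category, "lax"

print(resolve_labels("Neutral", "Positive"))  # -> ('Positive', 'lax')
\end{verbatim}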
\begin{table}[t!]
\caption{Overview of the data set (the cells indicate the number of examples per label and data type)}
\begin{tabular}{ l|l|l|l|l|l }
\hline
& & \multicolumn{4}{c}{Training data} \\
Labeling & Labels & Strict & Strict + Lax & Strict + One & Strict + Lax + One \\ \hline
Binary & Negative & 343 & 1,188 & 534 & 1,379 \\
& Other & 2,543 & 5,358 & 4,074 & 6,889 \\ \hline
Irrelevance filter & Negative & 343 & 1,188 & 534 & 1,379 \\
& Irrelevant & 633 & 633 & 1,077 & 1,077 \\
& Other & 1,910 & 4,725 & 2,997 & 5,812 \\ \hline
Polarity & Negative & 343 & 1,188 & 534 & 1,379 \\
& Positive & 1,312 & 2,693 & 1,835 & 3,216 \\
& Neutral & 345 & 1,271 & 623 & 1,549 \\
& Not clear & 253 & 761 & 539 & 1,047 \\
& Irrelevant & 633 & 633 & 1,077 & 1,077 \\ \hline
Polarity + Sentiment & Negative & 343 & 1,188 & 534 & 1,379 \\
& Positive + Frustration & 392 & 726 & 560 & 894 \\
& Positive + Information & 300 & 1,084 & 513 & 1,297 \\
& Positive + Other & 620 & 879 & 762 & 1,021 \\
& Neutral & 345 & 1,271 & 623 & 1,549 \\
& Not clear & 253 & 761 & 539 & 1,047 \\
& Irrelevant & 633 & 633 & 1,077 & 1,077 \\ \hline
\end{tabular}
\label{tab:labeled_data}
\end{table}

The four labelings have an increasing granularity, where the numbers of examples for the Negative category are stable across each labeling. In the first labeling, these examples are contrasted with any other tweet. It hence comprises a binary classification task. In the second labeling, irrelevant tweets are indicated in a separate category. The Other class here represents all relevant tweets that do not convey a negative stance towards vaccination. In the third labeling, this class is specified as the {\bf stance} categories Positive, Neutral and Not clear. In the fourth labeling, the Positive category, which is the most frequent polarity class, is further split into `Positive + Frustration', `Positive + Information' and `Positive + Other'. Positivity about vaccination combined with a frustration sentiment reflects tweets that convey frustration about the arguments of people who are negative about vaccination (e.g.: "I just read that a 17 year old girl died of the measles. Because she did not want an inoculation due to strict religious beliefs. -.- \#ridiculous"). The Positive + Information category reflects tweets that provide information in favor of vaccination, or information combined with a positive stance towards vaccination (e.g.: "\#shingles is especially common with the elderly and chronically diseased. \#vaccination can prevent much suffering. \#prevention").\footnote{The tweet IDs and their labels can be downloaded from \url{http://cls.ru.nl/~fkunneman/data_stance_vaccination.zip}}

In line with Kovář, Rychlý and Jakubíček \cite{Kovar+14}, we evaluate system performance only on the reliable part of the annotations: the instances labeled with the same label by two annotators. As the overall agreement is not sufficient, with Krippendorff's Alpha ranging between $0.27$ and $0.35$, the first author annotated 300 tweets sampled from the strict data (without knowledge of the annotations) to rule out the possibility that these agreed upon annotations are due to chance agreement.
Comparing these new annotations to the original ones, the Negative category and the Positive category are agreed upon at mutual F-scores of $0.70$ and $0.81$. The percent agreement on the binary classification scheme (Negative versus Other) is $0.92$, with $\alpha=0.67$, which decreases to $\alpha=0.55$ for the Relevance categorization, $\alpha=0.54$ for the Polarity categorization and $\alpha=0.43$ for the Polarity + Sentiment categorization. We find that instances of a negative and a positive stance can be clearly identified by humans, while the labels Neutral and Not clear are less clear-cut. Since it is our focus to model tweets with a negative stance, the agreement on the binary decision between Negative and Other is just sufficient for experimentation, based on Krippendorff's \cite{Krippendorff04} remark that "$\alpha \geq .667$ is the lowest conceivable limit" (p.~241). In our experimental set-up we will therefore only evaluate system performance on distinguishing the Negative category from any other category in the strict data.

\subsection*{Experimental Set-up}
For each combination of labeling (four types of labeling) and training data (four combinations of training data) we train a machine learning classifier to best distinguish the given labels. Two different classifiers are compared: Multinomial Naive Bayes and Support Vector Machines (SVM). In total, this makes for 32 variants (4 labelings $\times$ 4 combinations of training data $\times$ 2 classifiers). All settings are tested through ten-fold cross-validation on the strict data and are compared against two rule-based sentiment analysis baselines and two random baselines. All components of the experimental set-up are described in more detail below.

\subsubsection*{Preprocessing}
To properly distinguish word tokens and punctuation we tokenized the tweets by means of Ucto, a rule-based tokenizer with good performance on the Dutch language, and with a configuration specific for Twitter.\footnote{\url{https://languagemachines.github.io/ucto/}} Tokens were lowercased in order to focus on the content. Punctuation was maintained, as well as emoji and emoticons. Such markers could be predictive in the context of a discussion topic such as vaccination. To account for sequences of words and characters that might carry useful information, we extracted word unigrams, bigrams and trigrams as features. Features were coded as binary, i.e. set to 1 if a feature is seen in a message and set to 0 otherwise. During training, all features apart from the top 15,000 most frequent ones were removed.
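In scikit-learn terms, this feature extraction could look as follows; we assume here that the tweets have already been tokenized by Ucto, so that splitting on whitespace suffices (the toy tweets are ours):

\begin{verbatim}
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical Ucto-tokenized tweets (whitespace-separated tokens).
train_texts = ["ik laat mijn kind vaccineren", "vaccinatie is belangrijk"]

# Binary word uni-, bi- and trigram features,
# keeping only the 15,000 most frequent n-grams.
vectorizer = CountVectorizer(ngram_range=(1, 3), binary=True,
                             max_features=15000, lowercase=True,
                             tokenizer=str.split)
X_train = vectorizer.fit_transform(train_texts)
\end{verbatim}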
\subsubsection*{Machine Learning}
We applied two machine learning algorithms with a different perspective on the data: Multinomial Naive Bayes and SVM. The former algorithm is often used on textual data. It models the Bayesian probability of features to belong to a class and makes predictions based on a linear calculation. Features are naively seen as independent of one another \cite{Hand+01}. In their simplest form, SVMs are binary linear classifiers that make use of kernels. They search for the optimal hyperplane in the feature space that maximizes the geometric margin between any two classes. The advantage of SVMs is that they provide a solution to a global optimization problem, thereby reducing the generalization error of the classifier \cite{Hearst+98}.

We applied both algorithms by means of the scikit-learn toolkit, a Python library that offers implementations of many machine learning algorithms \cite{Pedregosa+11}. To cope with imbalance in the number of instances per label, for Multinomial Naive Bayes we set the Alpha parameter to $0.0$ and muted the fit prior. For SVM, we used a linear kernel with the $C$ parameter set to $1.0$ and a balanced class weight.
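A sketch of these classifier settings, continuing from the feature-extraction sketch above; the toy labels are ours, and \texttt{probability=True} is our addition so that class probabilities are available for the threshold analysis later in the paper:

\begin{verbatim}
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

y_train = ["Positive", "Negative"]  # hypothetical labels for the toy tweets

# Naive Bayes without add-one smoothing and without a fitted class prior.
nb = MultinomialNB(alpha=0.0, fit_prior=False).fit(X_train, y_train)

# Linear-kernel SVM; 'balanced' weights classes inversely to their frequency.
svm = SVC(kernel="linear", C=1.0, class_weight="balanced",
          probability=True).fit(X_train, y_train)
\end{verbatim}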
\subsubsection*{Baselines}
As baselines, we applied two rule-based sentiment analysis systems for Dutch as well as two random baselines. The first rule-based sentiment analysis system is Pattern, an off-the-shelf sentiment analysis system that makes use of a list of adjectives with a positive or negative weight, based on human annotations \cite{Smedt+12}. Sentences are assigned a score between $-1.0$ and $1.0$ by multiplying the scores of their adjectives. Bigrams like `horribly good' are seen as one adjective, where the adjective `horribly' increases the positivity score of `good'. We translated the polarity score into the discrete labels `Negative', `Positive' and `Neutral' by using the training data to infer which threshold leads to the best performance on the `Negative' category.

The second baseline is the sentiment analysis offered by the social media monitoring dashboard Coosto. As Coosto is a commercial product, there is no public documentation on their sentiment analysis tool.

In addition to these two baselines, we applied two random baselines: predicting the negative class randomly for 50\% of the messages and predicting the negative class randomly for 15\% of the messages. The latter proportion relates to the proportion of vaccination-hesitant tweets in the strictly labeled data on which we test the systems.

\subsection*{Evaluation}

We evaluate performance by means of ten-fold cross-validation on the strictly labeled data. In each of the folds, 90\% of the strictly labeled data is used as training data, which are complemented with the laxly labeled data and/or the data labeled by one annotator, in three of the four training data variants. Performance is always tested on the strict data. As evaluation metrics we calculate the F1-score and the Area Under the ROC Curve (AUC) on predicting the negative stance towards vaccination in the test tweets.

\section*{Results}

We trained machine learning (ML) classifiers to distinguish Twitter messages with a negative stance towards vaccination, alternating three aspects of the system: the labels to train on, the composition of the training data and the ML algorithm. The results are presented in Table \ref{tab:results}, as the F1-score and AUC of each setting on correctly predicting tweets with a negative stance. Systems with specific combinations of the ML classifier and size of the training data are given in the rows of the table. The four types of labelings are listed in the columns.

The results show a tendency for each of the three manipulations. Regarding the ML algorithm, SVM consistently outperforms Naive Bayes for this task. Furthermore, adding additional training data, albeit less reliable, generally improves performance. Training a model on all available data (strict + lax + one) leads to an improvement over using only the strict data, while adding only the laxly labeled data is generally better than using all data. Adding only the data labeled by one annotator often leads to a worse performance. With respect to the labeling, the Polarity + Sentiment labeling generally leads to the best outcomes, although the overall best outcome is yielded by training an SVM on the Polarity labeling with strict data appended by lax data, at an area under the curve score of $0.66$.\footnote{We choose to value the AUC over the F1-score, as the former is more robust in case of imbalanced test sets.}

\begin{table}[t!]
\caption{Machine learning performance of correctly predicting the label of tweets with a negative stance (Clf = Classifier, NB = Naive Bayes, SVM = Support Vector Machines, AUC = Area under the curve).}
\begin{tabular}{ l l | r r | r r | r r | r r }
\hline
& & \multicolumn{2}{c|}{Binary} & \multicolumn{2}{c|}{Irrelevance} & \multicolumn{2}{c|}{Polarity} & \multicolumn{2}{c}{Polarity +} \\
& & & & \multicolumn{2}{c|}{filter} & & & \multicolumn{2}{c}{Sentiment} \\
Training data & Clf & F1 & AUC & F1 & AUC & F1 & AUC & F1 & AUC \\ \hline
Strict & NB & 0.14 & 0.53 & 0.15 & 0.54 & 0.24 & 0.56 & 0.30 & 0.60 \\
& SVM & 0.30 & 0.59 & 0.32 & 0.61 & 0.34 & 0.62 & 0.35 & 0.63 \\
Strict + Lax & NB & 0.26 & 0.58 & 0.27 & 0.59 & 0.33 & 0.63 & 0.36 & 0.64 \\
& SVM & 0.33 & 0.63 & 0.34 & 0.63 & 0.36 & \textbf{0.66} & 0.36 & 0.64 \\
Strict + One & NB & 0.13 & 0.53 & 0.15 & 0.54 & 0.24 & 0.57 & 0.27 & 0.59 \\
& SVM & 0.29 & 0.59 & 0.29 & 0.59 & 0.34 & 0.62 & \textbf{0.37} & 0.64 \\
Strict + Lax + One & NB & 0.27 & 0.58 & 0.27 & 0.59 & 0.33 & 0.62 & 0.32 & 0.61 \\
& SVM & 0.34 & 0.63 & 0.32 & 0.62 & 0.35 & 0.64 & 0.36 & 0.64 \\ \hline
\end{tabular}
\label{tab:results}
\end{table}

\begin{table}[t!]
\caption{Baseline performance of correctly predicting the label of tweets with a negative stance (for comparison, the best ML system is included; Pr = Precision, Re = Recall, AUC = Area under the curve).}
\begin{tabular}{l | rrrr}
\hline
& Pr & Re & F1 & AUC \\ \hline
Random (50\%) & 0.11 & 0.46 & 0.18 & 0.48 \\
Random (15\%) & 0.12 & 0.15 & 0.13 & 0.50 \\
Pattern & 0.14 & 0.34 & 0.20 & 0.53 \\
Coosto & 0.20 & 0.31 & 0.25 & 0.57 \\
Best ML system & 0.29 & 0.47 & 0.36 & 0.66 \\ \hline
\end{tabular}
\label{tab:results_baseline}
\end{table}

\begin{table}[t!]
\caption{Confusion table of the classification of tweets in the best ML setting (SVM trained on the Polarity labeling with strict data appended by lax data). The vertical axis gives the classifier decisions, the horizontal axis gives the gold standard labels. Numbers given in bold are accurate classifications.}
\begin{tabular}{ l | l l l l l }
\hline
& Irrelevant & Negative & Neutral & Positive & Not clear \\ \hline
Irrelevant & \textbf{172} & 17 & 20 & 60 & 25 \\
Negative & 74 & \textbf{161} & 42 & 230 & 57 \\
Neutral & 108 & 37 & \textbf{118} & 133 & 55 \\
Positive & 195 & 103 & 140 & \textbf{832} & 84 \\
Not clear & 84 & 25 & 25 & 57 & \textbf{32} \\ \hline
\end{tabular}
\label{tab:conf_table}
\end{table}

\begin{figure}[t!]
\includegraphics{learningcurvenegatiefcombined.png}
\caption{\csentence{}Learning curve of the best ML system.}
\label{fig:learning_curve}
\end{figure}

The best reported performance is an F1-score of $0.36$ and an AUC of $0.66$. In comparison to the baselines (Table \ref{tab:results_baseline}), these scores are considerably higher. Nevertheless, there is room for improvement.
The performance of the random baselines, with F1-scores of $0.18$ (50\%) and $0.13$ (15\%), indicates that the minimal performance on this task is rather low. The rule-based sentiment analyses yield better performances, at an F1-score of $0.20$ for Pattern and $0.25$ for Coosto. To analyse the behavior of the best ML system, we present a confusion table of its classifications in Table \ref{tab:conf_table}. Tweets in the Irrelevant category are most often classified as one of the other categories, while the Positive and Negative categories are each other's biggest confusables. The classifier is possibly identifying features that denote a stance, but struggles to distinguish positive from negative.

To gain insight into the potential of increasing the amount of training data, we tested the best ML system (an SVM trained on strict and lax data on the polarity labels) on a fixed 10\% of the strictly labeled data, starting with a small sample of the training data and increasing it to all available data (excluding the test data). The learning curve is presented in Figure \ref{fig:learning_curve}. It shows a performance that keeps improving up to the point where the last training data are added, indicating that more training data would likely yield better performance.

\subsection*{Comparison of machine learning and rule-based sentiment analysis}

A confusion table of the predictions of one of the two rule-based baselines, Pattern, and the best ML system is displayed in Table \ref{tab:conf_table1}. Only 192 tweets are labeled by both systems as Negative, while the best ML system uniquely labels almost double this number (372 tweets) and Pattern about three times as many (604 tweets). Comparing the predictions to the gold standard labeling, 99 of the tweets predicted only by the best ML system as Negative are correct (27\%), as opposed to 51 that are exclusive to Pattern (8\%). Of the tweets that were classified by both as Negative, 63 are correct (33\%). This shows that the approaches have a rather complementary view on tweets with a negative stance.

\begin{table}[t!]
\caption{Confusion table of the labeled Twitter messages predicted as `Negative' or another category by Pattern and the best ML system.}
\begin{tabular}{ll|rr}
\hline
& & \multicolumn{2}{c}{Best ML system} \\
& & Other & Negative \\ \hline
\multirow{2}{*}{Pattern} & Other & 1,718 & 372 \\
& Negative & 604 & 192 \\ \hline
\end{tabular}
\label{tab:conf_table1}
\end{table}

\begin{table}[t!]
\caption{Confusion table of the unlabeled Twitter messages predicted as `Negative' or another category by Pattern and the best ML system.}
\begin{tabular}{ll|rr}
\hline
& & \multicolumn{2}{c}{Best ML system} \\
& & Other & Negative \\ \hline
\multirow{2}{*}{Pattern} & Other & 8,954 & 2,225 \\
& Negative & 3,383 & 1,015 \\ \hline
\end{tabular}
\label{tab:conf_table2}
\end{table}

To gain more insight into the behavior of both approaches, we applied them to 15,577 unlabeled tweets. Table \ref{tab:conf_table2} presents a confusion table with the numbers of tweets that were classified as Negative or another category by both approaches. Again, Pattern accounts for the majority of negatively labeled messages, and the overlap is small.
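Cross-tabulations such as Tables \ref{tab:conf_table1} and \ref{tab:conf_table2} are straightforward to produce; a sketch with hypothetical predictions:

\begin{verbatim}
import pandas as pd

# Hypothetical binary predictions of the two systems for four tweets.
ml      = ["Negative", "Other", "Negative", "Other"]
pattern = ["Other", "Other", "Negative", "Negative"]

print(pd.crosstab(pd.Series(pattern, name="Pattern"),
                  pd.Series(ml, name="Best ML system")))
\end{verbatim}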
Two of the authors validated for a sample of 600 messages whether they actually manifested a negative attitude towards vaccination: 200 messages that were uniquely classified by the best ML system as Negative, 200 messages that were solely labeled as Negative by Pattern and 200 messages that were classified by both systems as Negative. This validation showed the same tendency as for the labeled data, with a higher precision of the best ML system in comparison to Pattern (33.5\% versus 21\% of the messages correctly predicted) and the highest precision when both systems predicted the negative class (36\%).

The complementary view on tweets with a negative stance between the best ML system and rule-based sentiment analysis becomes clear from their differing predictions. To make this difference concrete, we present a selection of the messages, along with the system(s) that predicted them as Negative, in Table \ref{tab:predictions}. The first three are only predicted by the best ML system as Negative, and not by Pattern, while the fourth to sixth examples are only seen as Negative by Pattern. Where the former give arguments (`cannot be compared...', `kids are dying from it') or take a stance (`I'm opposed to...'), the latter examples display more intensified words and exclamations (`that's the message!!', `Arrogant', `horrific') and aggression towards a person or organization. The following three tweets are seen by both systems as Negative. They are characterized by intensified words that link strongly to a negative stance towards vaccination (`dangerous', `suffering', `get lost with your compulsory vaccination').

Table \ref{tab:predictions} also features tweets that were predicted as Negative by neither the best ML system nor Pattern, representing the most difficult instances of the task. The first two tweets include markers that explicitly point to a negative stance, such as `not been proven' and `vaccinating is nonsense'.
The third tweet manifests a negative stance by means of the sarcastic phrase `way to go' (English translation). The use of sarcasm, where typically positive words are used to convey a negative valence, complicates this task of stance prediction. The last tweet advocates an alternative to vaccination, which can implicitly be interpreted as a negative stance towards vaccination. Such implicitly packaged viewpoints also hamper the prediction of negative stance. Both sarcasm and implicit stance could be addressed by specific modules.

\subsection*{Improving recall}

For monitoring the number of Twitter messages over time that are negative towards vaccination, it is arguably more important to detect them at a high recall than at a high precision. False positives (messages incorrectly flagged as Negative) could be filtered manually by a human end user, while false negatives (messages with a negative stance that are not detected) will be missed. We set out to improve recall, making use of classifier confidence scores and the complementary classifications of Pattern and the best ML system.

A first recall-improving approach is to adjust the prediction threshold for the Negative category. For any given instance, the SVM classifier estimates the probability of all categories it was trained on. It will predict the Negative category for an instance if its probability exceeds the probabilities of the other categories. This prediction can be altered by changing the threshold: setting the threshold higher generally means that fewer instances are predicted as Negative (corresponding to a higher precision), whereas setting it lower means that more instances are predicted as such (corresponding to a higher recall). Thus, the balance between precision and recall can be set as desired, to favor one or the other. However, in many cases, changing the threshold will not lead to a (strong) increase in overall performance.
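The sketch below shows one way to implement such a threshold on the probability estimates of the SVM from the earlier sketch; the threshold value is arbitrary here and would in practice be tuned on training data:

\begin{verbatim}
def predict_negative(model, X, threshold=0.25):
    """Predict 'Negative' whenever its estimated probability exceeds the
    threshold; otherwise pick the most probable of the other classes."""
    probs = model.predict_proba(X)
    neg = list(model.classes_).index("Negative")
    labels = []
    for row in probs:
        if row[neg] >= threshold:
            labels.append("Negative")
        else:
            rest = [(p, c) for i, (p, c) in enumerate(zip(row, model.classes_))
                    if i != neg]
            labels.append(max(rest)[1])  # class with the highest probability
    return labels
\end{verbatim}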
\begin{table}[t!]
\caption{Examples of tweets that were classified by the best ML system and/or Pattern as `Negative' (for privacy reasons, user mentions are replaced with `@USER').}
\begin{tabular}{L{8cm}|L{2cm}}
\hline
Tweet (translated from Dutch) & Predicted as `Negative' by... \\ \hline
@USER aluminum which is a natural component in food cannot be compared to the stuff they put in that vaccine & ML only \\
@USER Kids are dying from it, what will you say to parents who are forced into inoculation despite their reluctance? & ML only \\
@USER And I'm opposed to having teenaged girls vaccinated against cervical cancer. @USER @USER @USER & ML only \\ \hline
@USER If your child is autistic after a vaccination, does the phrasing matter? No vaccinations, that's the message!! & Pattern only \\
@USER My experience with the RIVM is that I (mother) had proof that the inoculation was a trigger for epi. Arrogant and not empathic! @USER & Pattern only \\
@USER @USER I will never get inoculated again since this horrific experience \#scream \#connythemartyr & Pattern only \\ \hline
@USER True. But the inoculation is just like that. Dangerous junk & ML and Pattern \\
Paternalistic bullshit. I had the measles, the mumps, Rubella and the fifth disease and I'm still here. Get lost with your COMPULSORY inoculation. & ML and Pattern \\
The suffering called \#vaccination... \#nightparents 2.0 today... \#poor \#baby & ML and Pattern \\ \hline
@USER Prevalence HPV is very low; effect has not been proven, extremely high frequency of medical issues after vaccination; simply criminal. & Neither ML nor Pattern \\
Vaccinating is nonsense because polio is non-existent. & Neither ML nor Pattern \\
Narcolepsy due to the vaccine against the swine flu. Way to go... \#eenvandaag & Neither ML nor Pattern \\
Preventive colonoscopy saves many more lives than inoculating against virus cervical cancer 13-year olds. & Neither ML nor Pattern \\ \hline
\end{tabular}
\label{tab:predictions}
\end{table}

\begin{figure}[t!]
\includegraphics{trade-off.png}
\caption{\csentence{}Balance between precision and recall of predicting tweets with a negative stance when applying the best ML system, alternating the prediction threshold for this category.}
\label{fig:trade-off}
\end{figure}

Figure \ref{fig:trade-off} presents the balance between recall and precision as a result of predicting the Negative category with the best ML system, when the threshold for this category is altered from lowest to highest. Compared to the standard recall of $0.43$ at a precision of $0.29$, increasing the recall to $0.60$ would lead to a drop of precision to $0.21$. The F1-score would then decrease to $0.31$.

A second means by which recall might be improved is to employ ensemble classification. The comparison in the previous section between the best ML method and rule-based sentiment analysis revealed that both systems have a rather disjoint perspective on negative stance: many more tweets are labeled as `Negative' by only one of the two systems than by both. We therefore built an ensemble system that follows both systems in their perspective on tweets with a negative stance: for each tweet, if either of the systems predicts the Negative category, the ensemble system makes this prediction.
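The ensemble rule itself is a simple disjunction; a minimal sketch:

\begin{verbatim}
def ensemble_label(ml_label, pattern_label):
    """Flag a tweet as 'Negative' if either system predicts that category."""
    if "Negative" in (ml_label, pattern_label):
        return "Negative"
    return "Other"

print(ensemble_label("Other", "Negative"))  # -> 'Negative'
\end{verbatim}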
The performance of the ensemble system is presented in Table \ref{tab:ensemble}. Of the 343 tweets in the test set that are labeled as Negative, 210 are retrieved by the ensemble system. The result is a recall of $0.61$. The system does overshoot in its categorization of tweets as Negative: this category is predicted for 1,168 tweets (about 40\% of the total test set of 2,886 tweets). The result is a precision of $0.18$. In comparison to lowering the prediction threshold of the ML system, the ensemble system thus yields a slightly worse trade-off between precision and recall.

\begin{table}[h!]
\caption{Performance of the ensemble system on correctly predicting tweets labeled as `Negative' (AUC = Area under the curve, Total = all tweets in the test set that are labeled as `Negative', Predicted = the number of tweets that were classified as `Negative' by the system, Correct = the number of tweets that were correctly classified as `Negative').}
\begin{tabular}{rrrrrrr}
\hline
Precision & Recall & F1-score & AUC & Total & Predicted & Correct \\ \hline
0.18 & 0.61 & 0.28 & 0.62 & 343 & 1,168 & 210 \\ \hline
\end{tabular}
\label{tab:ensemble}
\end{table}

\section*{Discussion}

With an F1-score of $0.36$, our system lags behind the $0.75$ F1-score reported by Du et al. \cite{Du+17}. Several factors might have influenced this difference. A first factor is the low proportion of tweets with the label `Negative' in our dataset. In the strict labeling condition, only 343 cases are labeled as Negative by two annotators, against 2,543 labeled as Other; the negative cases comprise only about 12\% of all instances. In the study of Du et al., the anti-vaccination category comprises 24\% of all instances (1,445 tweets). More (reliable) examples might have helped in our study to train a better model of negative tweets. Secondly, Du et al. \cite{Du+17} focused on the English language domain, while we worked with Dutch Twitter messages. The Dutch Twitter realm harbors less data to study than the English one, and might bring forward different discussions when it comes to the topic of vaccination. It could be that the senders' stance towards vaccination is more difficult to pinpoint within these discussions. In line with this language difference, a third prominent factor that might have led to a higher performance in the study of Du et al. \cite{Du+17} is that they focus on a particular case of vaccination (HPV vaccination) and split the anti-vaccination category into several more specific categories that describe the motivation of this stance. The diverse motivations for being against vaccination are indeed reflected in several other studies that focus on identifying discussion communities and viewpoints \cite{Bello-orgaz+17,Surian+16,Tangherlini+16}. While splitting the data into more specific categories leads to fewer examples per category, it could boost performance on predicting certain categories due to a larger homogeneity.
Indeed, the most dominant negative category in the study by Du et al. \cite{Du+17}, dubbed `NegSafety' and occurring in 912 tweets (63\% of all negative tweets), yielded the highest F1-score of $0.75$. While two less frequent categories were predicted at an F1-score of $0.0$, this outcome shows the benefit of breaking down the motivations behind a negative stance towards vaccination.

A major limitation of our study is that the agreement rates for all categorizations are low. This is also the case in other studies, such as \cite{Huang+17}, who report an agreement of $K = 0.40$ on polarity categorization. Foremost, this reflects the difficulty of the task. The way in which the stance towards vaccination is manifested in a tweet depends on the author, his or her specific viewpoint, the moment in time at which a tweet was posted, and the possible conversation thread that precedes it. Making a judgment solely based on the text could be difficult without this context. Agreement could possibly be improved by presenting the annotator with the preceding conversation as context to the text. Furthermore, tweets could be coded by more than two annotators. This would give insight into the subtleties of the data, with a graded scale ranging from tweets that clearly manifest a negative stance towards vaccination to tweets that merely hint at such a stance. Such a procedure could likewise help to generate more reliable examples to train a machine learning classifier.

The low agreement rates also indicate that measuring stance towards vaccination in tweets is too difficult a task to assign to a machine alone. We believe that a human-in-the-loop could be an important asset in any monitoring dashboard that focuses on stance in particular discussions. The system will have an important role in filtering the bigger stream of messages, ideally leaving the human with a controllable set of messages to sift through, in order to end up with reliable statistics on the stance that is seen in the discussion at any point in time. In the analysis section, we explored two approaches to increase the recall of messages with a negative stance, which would be most useful in this scenario. Lowering the prediction threshold proved to be most effective to this end.

Our primary aim in future work is to improve performance. We did not experiment with different types of features in our current study. Word embeddings might help to include more semantics in our classifier's model. In addition, domain knowledge could be added by including word lists, and different components might be combined to address different features of the data (e.g.: sarcasm and implicit stance). We also aim to divide the negative category into the specific motivations behind a negative stance towards vaccination, as in the study of Du et al. \cite{Du+17}, so as to obtain more homogeneous categories. Parallel to this new categorization of the data, adding more labeled data appears to be the most effective way to improve our model. The learning curve that we present in Figure \ref{fig:learning_curve} shows that no performance plateau is reached with the current size of the data. An active learning setting \cite{Tong+01}, starting with the current system, could be applied to select additional tweets to annotate. Such a setting could be incorporated in the practical scenario where a human-in-the-loop judges the messages that were flagged as displaying a negative stance by the system. The messages that are judged as correctly and incorrectly predicted could then be added as additional reliable training data to improve upon the model. We have installed a dashboard that is catered for such a procedure,\footnote{\url{http://prikbord.science.ru.nl/}} starting with the machine learning system that yielded the best performance in our current study.
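A common starting point for such an active learning loop is uncertainty sampling. The sketch below is our own illustration, assuming a binary Negative-versus-Other SVM; it selects the unlabeled tweets closest to the decision boundary as candidates for annotation:

\begin{verbatim}
import numpy as np

def select_for_annotation(model, X_unlabeled, n=100):
    """Return the indices of the n tweets the model is least certain about,
    measured as the absolute distance to the separating hyperplane."""
    margins = np.abs(model.decision_function(X_unlabeled))
    return np.argsort(margins)[:n]
\end{verbatim}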
The messages that are judged as correctly and incorrectly predicted could be added as additional reliable training data to improve upon the model. We have installed a dashboard that caters for such a procedure,\\footnote{\\url{http:\/\/prikbord.science.ru.nl\/}} starting with the machine learning system that yielded the best performance in our current study. \n\n\n\\section*{Conclusions}\n\nWe set out to train a classifier to distinguish Twitter messages that display a negative stance towards vaccination from other messages that discuss the topic of vaccination. Based on a set of 8,259 tweets that mention a vaccination-related keyword, annotated for their relevance, stance and sentiment, we tested a multitude of machine learning classifiers, alternating the algorithm, the reliability of training data and the labels to train on. The best performance, with a precision of $0.29$, a recall of $0.43$, an F1-score of $0.36$ and an AUC of $0.66$, was yielded by training an SVM classifier on strictly and laxly labeled data to distinguish irrelevant tweets and polarity categories. The baselines, with an optimal F1-score of $0.25$ (rule-based sentiment analysis), were considerably outperformed. The latter shows the benefit of machine-learned classifiers on domain-specific sentiment: despite being trained on a reasonably small amount of data, the machine-learning approach outperforms general-purpose sentiment analysis tools.\n\n\\section*{Availability and requirements}\n\n{\\bf Project name:} Prikbord \\\\\n{\\bf Project home page:} http:\/\/prikbord.science.ru.nl\/ \\\\ \n{\\bf Operating system:} Linux \\\\\n{\\bf Programming language:} Python, JavaScript \\\\\n{\\bf Other requirements:} Django 1.5.11 or higher, MongoDB 2.6.10, pymongo 2.7.2 or higher, requests 2.13.0 or higher \\\\\n{\\bf License:} GNU GPL \\\\\n{\\bf Any restrictions to use by non-academics:} licence needed \\\\\n\n\\section*{Abbreviations}\n\n{\\bf EMM:} Europe Media Monitor \\\\\n{\\bf MMR:} Mumps, Measles, Rubella \\\\\n{\\bf LDA:} Latent Dirichlet Allocation \\\\\n{\\bf ML:} Machine learning \\\\\n{\\bf SVM:} Support Vector Machines \\\\\n{\\bf AUC:} Area under the ROC Curve \\\\\n{\\bf Clf:} Classifier \\\\\n{\\bf NB:} Naive Bayes \\\\\n{\\bf Pr:} Precision \\\\\n{\\bf Re:} Recall \n\n\n\\begin{backmatter}\n\n\\section*{Declarations}\n\n\\subsection*{Ethics approval and consent to participate}\n\nNot applicable.\n\n\\subsection*{Consent for publication}\n\nNot applicable.\n\n\\subsection*{Availability of data and materials}\nhttp:\/\/cls.ru.nl\/\\~{}fkunneman\/data\\_stance\\_vaccination.zip\n\n\\subsection*{Competing interests}\nThe authors declare that they have no competing interests.\n\n\\subsection*{Funding}\nThis study has been funded by the Rijksinstituut voor Volksgezondheid en Milieu. \n\n\\subsection*{Authors' contributions}\nFK has set up the annotation procedure, performed the Machine Learning experiments and analysis, annotated tweets in the analysis and did a major part of the writing. ML has done part of the writing in the Introduction and Conclusion sections. AW has advised on the experimentation and analysis. AB has advised on the experimentation and has edited the complete text. LM has set up the annotation procedure, annotated tweets in the analysis and has done a major part of the writing. All authors read and approved the final manuscript.\n\n\\subsection*{Acknowledgements}\nWe thank Erik Tjong Kim Sang for the development and support of the \\url{http:\/\/twiqs.nl} service. 
We also thank those who contributed annotations. \n\n\n\\bibliographystyle{bmc-mathphys}\n\n\\end{backmatter}\n\n\n\\section{Introduction}\nLinear-quadratic (LQ) control problems are an important class of optimal control problems with a quadratic cost (or objective) functional to minimize subject to linear differential equation constraints describing the dynamics---see \\cite{BurKayMaj2014,MTM,AmmKen1998,MauObe2003,Mou2011,BusMau2000,KugPes1990} for theory and applications. In this paper we will study applications of projection methods to solving the minimum-energy control of pure, under-, critically- and over-damped harmonic oscillators, as well as a machine tool manipulator, which are all examples of LQ control problems. In fact, in all these applications we impose constraints on the control variable, which makes the problems computationally more challenging, justifying a novel implementation of projection methods. For the quadratic objective functional, we consider the squared norm of the control variable throughout the paper. These problems are what we refer to as {\\em minimum-energy control problems}\\footnote{It must be stressed that we are not necessarily minimizing the ``true'' energy of, for example, a harmonic oscillator from a physics point of view. Rather, we are concerned with minimizing the ``energy of the control or signal'' or the ``energy of the force.'' Elaboration of this subtle difference in the terminology can also be found in \\cite[Section~6.17]{AthaFalb1966}, \\cite[Section~5.5]{Kirk1970}, \\cite[Section~2.9]{Klamka2019} and \\cite[page~118]{Sethi2019}.}. \n\nProjection methods are an emerging field of research in mathematical optimization with successful applications to a wide range of problems, including road design \\cite{BauKoch}, protein reconstruction \\cite{AraBorTam2014}, sphere packing \\cite{GraEls2008}, sudoku \\cite{Bau2008}, graph colouring problems \\cite{AraCampEls2020} and radiation therapy treatment planning \\cite{AltCenPow1988}. These methods have chiefly been applied to discrete-time optimal control problems \\cite{OdoStaBoy2013}, but there has been little or no research into applications to continuous-time optimal control problems, except recently in~\\cite{BausBuraKaya2019} by Bauschke, Burachik and Kaya. In~\\cite{BausBuraKaya2019} projection methods are applied to solve the energy-minimizing double integrator problem, where the control variable is constrained, with promising results. The numerical experiments show that the projection methods outperform the method employing direct discretization even in solving a relatively simple optimal control problem.\n\n\n\nThe aforementioned direct discretization approach is to first discretize the problem, typically using a Runge--Kutta method such as the Euler or trapezoidal methods, and then apply finite-dimensional optimization software (for example, AMPL~\\cite{AMPL} paired with Ipopt~\\cite{WacBie2006}) in order to solve the resulting large scale discrete-time optimal control problem. We aim to show the merits of the projection method approach by solving more general LQ control problems than just the double integrator problem and comparing with direct discretization.\n\nThe approach in this paper exploits the structure of LQ control problems to obtain advantages, just as the approach in \\cite{BausBuraKaya2019} does for the simple double integrator problem. 
In our approach we split the constraints of the original LQ problem into two sets: one contains the ODE constraints involving the state variables, and the other contains box constraints on the control variables. These sets are subsets of a Hilbert space, the first one constituting an affine set and the second a closed and convex set. We define two simpler optimal control subproblems for computing projections, one subject to the affine set and the other to the box. Solutions to these subproblems yield the projectors onto each of the two sets. \n\nThe main contributions of this paper are as follows.\n\\begin{itemize}\n \\item We derive a general expression for the projectors onto the affine and box sets of the minimum-energy control problem. (See Theorems~\\ref{thm:projA_gen} and \\ref{thm:projB}.)\n \\item We obtain closed-form analytical expressions for the projectors of the special problems whose dynamics involve pure as well as under-, critically- and over-damped harmonic oscillators. (See Corollaries~\\ref{cor:projA_PHO}--\\ref{cor:projA_PDHO3}, resp., for projections onto the affine sets of each case, and Corollary~\\ref{cor:projB} for projection onto the box.)\n \\item The projector expression in Theorem~\\ref{thm:projA_gen} requires knowledge of the state transition matrix as well as the Jacobian of the near-miss function of the shooting method. For the case of general minimum-energy control, we present a computational algorithm (namely Algorithm~\\ref{alg:projA}) for constructing the state transition matrix and the Jacobian and thus finding a projector onto the affine set, which in turn can be used in general projection algorithms.\n \\item We illustrate the working of Algorithm~\\ref{alg:projA} and Theorem~\\ref{thm:projA_gen} in four popular projection algorithms; namely, the method of alternating projections (MAP), Dykstra, Douglas--Rachford (DR) and Arag\\'on Artacho--Campoy (AAC) algorithms. These algorithms are applied to solving not only the above-mentioned example problems but also a challenging machine tool manipulator example problem. These problems furnish a class of test-bed examples for future studies.\n \\item Selection of algorithmic parameters plays an important role in the performances of the DR and AAC algorithms. Through case studies, by means of the test-bed examples listed above, we explore and propose the ranges of values of these parameters with which the algorithms seem to converge in a smaller number of iterations.\n\\end{itemize}\n\nWe note that Corollary~\\ref{cor:projA_PDI}, which provides an analytical projector expression in closed-form for the double integrator problem, was originally derived in~\\cite[Proposition~1]{BausBuraKaya2019}. Nevertheless, in this paper, we show that this expression can also be obtained using direct substitutions of the state transition matrix and the Jacobian into the general expression in Theorem~\\ref{thm:projA_gen}.\n\nFor all the above-mentioned example problems we perform numerical experiments and compare the performances of the four projection algorithms with that of the optimization modelling software AMPL paired with the interior point optimization software Ipopt. In these experiments we observe not only that the projection methods we have implemented are more efficient than the AMPL--Ipopt suite, i.e., they can find a solution in a much smaller amount of time, but also that Ipopt sometimes fails to find a solution at all. We also compare the errors in the control and state variables separately. 
These cases for different problems are tabulated altogether for an easier appreciation of the conclusions we set out.\n\nThe paper is organized as follows. Section~\\ref{sec:oc} contains necessary background and preliminaries on minimum-energy control problems and optimality. In Section~\\ref{sec:proj} we derive the projectors for a general minimum-energy control problem as well as some specific cases. Section \\ref{sec:projAlg} presents the projection algorithms that we apply in Section \\ref{sec:num}. Section \\ref{sec:num} provides a numerical approach for obtaining the projectors onto the affine set when it is not possible or convenient (due to length) to use an analytical expression. This section also contains numerical experiments comparing the performance of the projection algorithms with a direct discretization approach, as well as an exploration of (in some sense) best values of the parameters of the DR and AAC algorithms. Section \\ref{sec:con} contains concluding remarks and open problems. In the appendices we provide the detailed proofs of the projectors onto the affine set for the harmonic oscillator problems as well as the detailed steps of each of the four projection algorithms.\n\n\n\n\\section{Minimum-energy Control Problem}\n\\label{sec:oc}\n\n\nBefore introducing the optimal control problem we will give some standard definitions. Unless otherwise stated all vectors are column vectors. Let ${\\cal L}^2([t_0,t_f];\\mathbb{R}^q)$ be the Banach space of Lebesgue measurable functions $z:[t_0,t_f]\\rightarrow\\mathbb{R}^q$, with finite ${\\cal L}^2$ norm, namely,\n\\[{\\cal L}^2([t_0,t_f];\\mathbb{R}^q):=\\left\\{z:[t_0,t_f]\\rightarrow\\mathbb{R}^q\\,\\:|\\,\\:\\|z\\|_{{\\cal L}^2} := \\left(\\int_{t_0}^{t_f}\\|z(t)\\|^2\\,dt\\right)^{1\/2}<\\infty\\right\\}\\]\nwhere $\\|\\cdot\\|$ is the $\\ell_2$ norm in $\\mathbb{R}^q$. Furthermore, ${\\cal W}^{1,2}([t_0,t_f];\\mathbb{R}^q)$ is the Sobolev space of absolutely continuous functions, namely\n\\[{\\cal W}^{1,2}([t_0,t_f];\\mathbb{R}^q):=\\left\\{z\\in {\\cal L}^2([t_0,t_f];\\mathbb{R}^q)\\,|\\,\\dot{z}:=dz\/dt\\in {\\cal L}^2([t_0,t_f];\\mathbb{R}^q)\\right\\},\\]\nendowed with the norm\n\\[\\|z\\|_{{\\cal W}^{1,2}}:=(\\|z\\|_{{\\cal L}^2}^2+\\|\\dot{z}\\|_{{\\cal L}^2}^2)^{1\/2}.\\]\nWith these definitions we define a general minimum-energy optimal control problem, which is an LQ control problem, as follows.\n\\[\n\\mbox{(P) }\\left\\{\\begin{array}{rl}\\label{eqn:OCP1}\n\\displaystyle\\min & \\ \\ \\displaystyle \\frac{1}{2}\\int_{t_0}^{t_f} \\|u(t)\\|^2 \\,dt \\\\[5mm] \n\\mbox{subject to} & \\ \\ \\dot{x}(t) = A(t)x(t)+B(t)u(t)\\,,\\ \\ x(t_0) = x_0\\,,\\ \\ x(t_f) = x_f\\,, \\\\[2mm]\n& \\ \\ u(t)\\in U\\subseteq\\mathbb{R}^m\\,,\\ \\ x(t)\\in\\mathbb{R}^n, \\ \\ \\forall t\\in[t_0,t_f].\n\\end{array}\\right.\n\\]\nThe {\\em state variable} $x\\in {\\cal W}^{1,2}([t_0,t_f];\\mathbb{R}^n)$, with $x(t) := (x_1(t),\\ldots,x_n(t))\\in\\mathbb{R}^n$, and the {\\em control variable} $u\\in {\\cal L}^2([t_0,t_f];\\mathbb{R}^m)$, with $u(t) := (u_1(t),\\ldots,u_m(t))\\in\\mathbb{R}^m$, and $U$ is a fixed closed subset of $\\mathbb{R}^m$. \nThe time varying matrices $A:[t_0,t_f]\\rightarrow\\mathbb{R}^{n\\times n}$ and $B:[t_0,t_f]\\to\\mathbb{R}^{n\\times m}$ are continuous. The initial and terminal states are given as $x_0$ and $x_f$ respectively. 
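To fix ideas before proceeding, the short sketch below (our own illustration, not part of the formal problem statement) sets up discretized data for one instance of Problem~(P), namely the double integrator treated later in Section~\\ref{subsec:projPDI}; all names and numerical values in the sketch are hypothetical.
\\begin{verbatim}
import numpy as np

# Illustrative data for one instance of Problem (P): the double
# integrator (n = 2, m = 1).  All numerical values are hypothetical.
t0, tf = 0.0, 1.0
N = 1000                          # number of grid subintervals
t = np.linspace(t0, tf, N + 1)    # uniform time grid on [t0, tf]

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # system matrix A(t) = A
B = np.array([[0.0],
              [1.0]])             # control matrix B(t) = B
x0 = np.array([0.0, 0.0])         # initial state x(t0)
xf = np.array([1.0, 0.0])         # terminal state x(tf)
a  = np.array([2.5])              # control bounds, U = [-a_1, a_1]
\\end{verbatim}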
Note that, for every $t\\in[t_0,t_f]$, we can write\n\\[\nB(t)u(t) = \\sum_{i=1}^m b_i(t)\\,u_i(t)\\,,\n\\]\nwhere $b_i(t)\\in \\mathbb{R}^n,\\,i=1,\\ldots,m$, is the $i$th column of $B(t)$.\n\n\n\\subsection{Optimality conditions}\\label{subsec:opCond}\nIn this section, we use the Maximum Principle to derive the necessary conditions of optimality for Problem (P). Various forms of the Maximum Principle can be found, along with their proofs, in a number of reference books---see, for example, \\cite[Theorem 1]{pontry1962}, \\cite[Chapter 7]{Hestenes66}, \\cite[Theorem 6.4.1]{Vinter2000}, \\cite[Theorem 6.37]{mord2006}, and \\cite[Theorem 22.2]{clarke2013}. We will state the Maximum Principle using notation and settings from these references. We start by defining the Hamiltonian function $H:\\mathbb{R}^n\\times\\mathbb{R}^m\\times\\mathbb{R}^n\\times\\mathbb{R}\\times[t_0,t_f] \\to \\mathbb{R}$ for Problem~(P) as\n\\[H(x,u,\\lambda,\\lambda_0,t):=\\frac{\\lambda_0}{2}\\|u\\|^2 +\\lambda^T\\left(A(t)x+\\sum_{i=1}^m b_i(t)\\,u_i\\right),\\]\nwhere the adjoint variable vector $\\lambda:[t_0,t_f]\\rightarrow\\mathbb{R}^n$, with $\\lambda(t):=(\\lambda_1(t),\\dots,\\lambda_n(t))\\in\\mathbb{R}^n$, and $\\lambda_0$ is a real constant. For brevity, we use the following notation,\n\\[\nH[t] := H(x(t),u(t),\\lambda(t),\\lambda_0,t)\\,.\n\\]\nThe adjoint variable vector is assumed to satisfy the condition (see, e.g., \\cite{Hestenes66})\n\\begin{equation}\\label{eqn:adj}\n \\dot{\\lambda}(t) := -H_x[t] = -A(t)^T\\lambda(t)\n\\end{equation}\nfor every $t\\in[t_0,t_f]$, where $H_x:=\\partial H\/\\partial x$. Suppose that the control set $U$ is a box in $\\mathbb{R}^m$, i.e., $U = [-a_1,a_1]\\times\\cdots\\times[-a_m,a_m]$, and that the pair $(x,u)\\in {\\cal W}^{1,2}([t_0,t_f];\\mathbb{R}^n)\\times {\\cal L}^2([t_0,t_f];\\mathbb{R}^m)$ is optimal for Problem~(P). Then the Maximum Principle asserts that there exist a real number $\\lambda_0 \\ge 0$ and a continuous adjoint variable vector $\\lambda\\in {\\cal W}^{1,2}([t_0,t_f];\\mathbb{R}^n)$ as defined in Equation~\\eqref{eqn:adj}, such that $\\lambda(t)\\neq\\mathbf{0}$ for all $t\\in[t_0,t_f]$, and that, for all $t\\in[t_0,t_f]$,\n\\begin{align}\n u_i(t) &= \\argmin_{|v_i|\\leq a_i} H(x(t),u_1,\\ldots,v_i,\\dots,u_m,\\lambda(t),\\lambda_0,t) \\nonumber \\\\\n &= \\argmin_{|v_i|\\leq a_i} \\frac{\\lambda_0}{2}\\,(u_1^2 + \\ldots + v_i^2 + \\ldots + u_m^2) \\nonumber \\\\\n &\\hspace*{20mm} +\\lambda^T(t)\\,(A(t)x(t)+ b_1(t)\\,u_1 + \\ldots + b_i(t)\\,v_i + \\ldots +b_m(t)\\,u_m) \\nonumber \\\\\n &= \\argmin_{|v_i|\\leq a_i} \\frac{\\lambda_0}{2}\\, v_i^2 +\\lambda^T(t)\\,b_i(t)\\,v_i\\,,\n \\label{eqn:maxPrin}\n\\end{align}\nfor $i = 1,\\ldots,m$. We ignored all terms that do not depend on $v_i$ to arrive at Equation~\\eqref{eqn:maxPrin}. Suppose $a_i=\\infty$, $i = 1,\\ldots,m$, i.e., the control vector is unconstrained. Then Condition \\eqref{eqn:maxPrin} becomes\n\\[H_{u_i}[t] = 0,\\]\ni.e.,\n\\begin{equation} \\label{ui_unconstr}\n\\lambda_0 u_i(t)+b_i(t)^T\\lambda(t) = 0\\,,\n\\end{equation}\n$i = 1,\\ldots,m$. We assume that the problem is normal, i.e., $\\lambda_0>0$, so we can take $\\lambda_0=1$ without loss of generality. 
Then \\eqref{ui_unconstr} can be solved for $u_i(t)$ as\n\\begin{equation} \nu_i(t) = -b_i(t)^T\\lambda(t)\\,,\n\\end{equation}\nfor $i = 1,\\ldots,m$; or using the input matrix $B(t)$,\n\\begin{equation} \\label{eqn:u(t)}\nu(t) = -B(t)^T\\lambda(t)\\,.\n\\end{equation}\nWith the box constraint on $u(t)$, one gets from~\\eqref{eqn:maxPrin}\n\\begin{eqnarray}\\label{eqn:u_gen}\nu_i(t) = \\left\\{\\begin{array}{rl}\n a_i, & \\mbox{if } b_i^T(t)\\lambda(t)\\leq-a_i, \\\\[2mm]\n -b_i^T(t)\\lambda(t), & \\mbox{if } -a_i\\leq b_i^T(t)\\lambda(t)\\leq a_i, \\\\[2mm]\n -a_i, & \\mbox{if } b_i^T(t)\\lambda(t)\\geq a_i,\n\\end{array} \\right.\n\\end{eqnarray}\nfor all $t\\in[t_0,t_f]$, $i=1,\\dots,m$. From \\cite{BorCol}, the solution of the initial value problem $\\dot{x}(t) = A(t)x(t)+B(t)u(t)$, $x(t_0) = x_0$, in Problem~(P) can simply be written as\n\\begin{equation} \\label{x_soln}\nx(t) = \\Phi_A(t,t_0)\\,x_0 + \\int_{t_0}^t \\Phi_A(t,\\tau)\\,B(\\tau)\\,u(\\tau)\\,d\\tau\\,,\n\\end{equation}\nwhere $\\Phi_{A}(t,t_0)$ is the {\\em state transition matrix} of $\\dot{x}(t) = A(t)\\,x(t)$, also referred to as the {\\em resolvent matrix}---see \\cite{Rugh1995} for further details and properties.\nSimilarly, Equation \\eqref{eqn:adj} can be solved as $\\lambda(t) = \\Phi_{(-A^T)}(t,t_0)\\lambda_0$, or by using the identity $\\Phi_{(-A^T)}(t,t_0) = \\Phi_A(t_0,t)^T$~\\cite[Property~4.5]{Rugh1995},\n\\begin{equation}\\label{eqn:lambda_gen}\n \\lambda(t) = \\Phi_A(t_0,t)^T\\lambda_0\\,.\n\\end{equation}\nWhen $a_i$ is small enough so that the control constraint is active, it is usually impossible to find an analytical solution for (P), hence the need for numerical methods.\n\n\n\n\\subsection{Constraint splitting}\\label{subsec:cons}\nWe split the constraints into two sets given below.\n\\begin{eqnarray} \n{\\cal A} &:=& \\big\\{u\\in {\\cal L}^2([t_0,t_f];\\mathbb{R}^m)\\ |\\ \\exists x\\in {\\cal W}^{1,2}([t_0,t_f];\\mathbb{R}^n)\\mbox{ which solves } \\nonumber \\\\\n&&\\ \\ \\dot{x}(t) = A(t)x(t)+B(t)u(t), \\ \\ x(t_0) = x_0, \\ \\ x(t_f) = x_f,\\ \\forall t\\in[t_0,t_f]\\big\\}\\,, \\label{A_gen} \\\\[2mm]\n{\\cal B} &:=& \\big\\{u\\in {\\cal L}^2([t_0,t_f];\\mathbb{R}^m)\\ | -a_i\\le u_i(t)\\le a_i\\,,\\ \\forall t\\in[t_0,t_f], i=1,\\dots,m\\big\\}\\,.\\label{B_gen}\n\\end{eqnarray}\nThe set ${\\cal A}$ is an {\\em affine space} and contains all the feasible control functions from (P) where the control function is unconstrained. The set ${\\cal B}$ is a {\\em box} which contains all the control functions with components $u_i$ that are constrained by $-a_i$ and $a_i$. These two sets form the constraint sets for our two subproblems. We split the original problem into these two subproblems because each is much simpler to solve individually, allowing us to derive analytical expressions. 
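As a computational preview of how the box set ${\\cal B}$ is handled, note that membership can be enforced pointwise: on a time grid it suffices to clip each component of a control iterate to $[-a_i,a_i]$. The minimal sketch below is our own illustration (assuming the discretized control is stored as a numpy array on a uniform grid); this clipping operation will reappear as the exact projector onto ${\\cal B}$ in Theorem~\\ref{thm:projB} below.
\\begin{verbatim}
import numpy as np

def clip_to_box(u, a):
    """Componentwise clipping of a discretized control onto the box B.

    u : array of shape (m, N+1), samples of u(t) on a time grid
    a : array of shape (m,), the bounds a_i defining U
    """
    return np.clip(u, -a[:, None], a[:, None])
\\end{verbatim}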
\\section{Projectors}\\label{sec:proj}\nIn this section we give the projectors for a general problem~(P) followed by the projectors for some specific cases, namely the double integrator, pure harmonic oscillator and (under-, critically- and over-) damped harmonic oscillator.\n\n\n\\subsection{Projectors for general minimum-energy control}\nNow that we have the constraint sets, we need to define the subproblems. The projection onto ${\\cal A}$ from a current iterate $u^-$ is the point $u$ which solves the following problem.\n\\[\\mbox{(P1) }\\left\\{\\begin{array}{rl}\n\\displaystyle\\min & \\ \\ \\displaystyle\\frac{1}{2}\\int_{t_0}^{t_f} \\|u(t)-u^-(t)\\|^2\\,dt = \\frac{1}{2} \\|u - u^-\\|_{{\\cal L}^2}^2 \\\\[5mm] \n\\mbox{subject to} & \\ \\ u\\in{\\cal A}.\n\\end{array} \\right.\\]\nThe projection onto ${\\cal B}$ from a current iterate $u^-$ is the point $u$ which solves the following problem.\n\\[\\mbox{(P2) }\\left\\{\\begin{array}{rl}\n\\displaystyle\\min & \\ \\ \\displaystyle \\frac{1}{2} \\|u - u^-\\|_{{\\cal L}^2}^2 \\\\[5mm] \n\\mbox{subject to} & \\ \\ u\\in{\\cal B}.\n\\end{array} \\right.\\]\n\nFirst we provide a technical lemma.\n\\begin{lemma}\\label{lem:A_tilde}\nGiven the $n\\times n$ matrix $A(t)$, consider the $n^2\\times n^2$ matrix $\\widetilde{A}(t)$, defined as\n\\[\n\\widetilde{A}(t) := \\begin{bmatrix}\nA(t) & & \\mathbf{0}\\\\\n& \\ddots & \\\\\n\\mathbf{0} & & A(t) \n\\end{bmatrix},\n\\]\nwhere $\\mathbf{0}$ is a zero matrix of appropriate size, and the matrix $A(t)$ appears repeatedly ($n$ times) in diagonal blocks. The state transition matrix of $\\widetilde{A}(t)$ is the $n^2\\times n^2$ matrix defined as\n\\begin{equation}\\label{eqn:phi_A_tilde}\n\\Phi_{\\widetilde{A}}(t,t_0) := \\begin{bmatrix}\n\\Phi_A(t,t_0) & & \\mathbf{0}\\\\\n& \\ddots & \\\\\n\\mathbf{0} & & \\Phi_A(t,t_0)\n\\end{bmatrix},\n\\end{equation}\nwhere $\\Phi_A(t,t_0)$ (the state transition matrix for $A(t)$) appears repeatedly ($n$ times) in diagonal blocks, and all other elements are zero.\n\\end{lemma}\n\\begin{proof}\nSuppose that $\\Phi_{A}(t,t_0)$ is the state transition matrix of $\\dot{y_i}(t) = A(t)\\,y_i(t)$, $i = 1,\\ldots,n$, where $y_i(t)\\in\\mathbb{R}^n$. Suppose that $y_i(t_0) = y_{i,0}$, $i = 1,\\ldots,n$, are the initial conditions. By~\\eqref{x_soln}, $y_i(t) = \\Phi_{A}(t,t_0)\\,y_{i,0}$ for $i = 1,\\ldots,n$ is the unique solution. Then with $\\widetilde{y}(t) := (y_1(t),\\ldots,y_n(t))\\in\\mathbb{R}^{n^2}$, we get $\\dot{\\widetilde{y}}(t) = \\widetilde{A}(t)\\,\\widetilde{y}(t)$ and in turn $\\Phi_{\\widetilde{A}}(t,t_0)$ is as required by~\\eqref{eqn:phi_A_tilde} in the lemma.\n\\end{proof}\nTheorem~\\ref{thm:projA_gen} below furnishes an expression for the projector onto $\\cal{A}$. The proof of this theorem broadly follows steps similar to those in~\\cite[Proposition~1]{BausBuraKaya2019} but for a much more general problem.\n\\begin{theorem} \\label{thm:projA_gen}\nThe projection $P_{\\cal{A}}$ of $u^-\\in {\\cal L}^2([t_0,t_f];\\mathbb{R}^m)$ onto the constraint set $\\cal{A}$, as the solution of Problem~{\\em (P1)}, is given by\n\\begin{equation}\\label{eqn:projA}\nP_{\\cal{A}}(u^-)(t) = u^-(t) - B(t)^T\\,\\Phi_A(t_0,t)^T\\,\\lambda_0\\,,\n\\end{equation}\nfor all $t\\in[t_0,t_f]$, where $\\lambda_0$ solves\n\\begin{equation}\\label{eqn:J}\n J_{\\varphi}(0)\\,\\lambda_0 = -(x(t_f)-x_f)\\,,\n\\end{equation}\nwith\n\\[\nx(t_f) := \\Phi_A(t_f,t_0)\\,x_0 + \\int_{t_0}^{t_f} \\Phi_A(t_f,\\tau)\\,B(\\tau)\\,u^-(\\tau)\\,d\\tau\\,,\n\\]\nand\n\\[\n J_\\varphi(0):=\\frac{\\partial x(t_f)}{\\partial \\lambda_0}\\,.\n\\]\n\\end{theorem}\n\\begin{proof}\nThe Hamiltonian for Problem (P1) is\n\\begin{equation*}\nH[t] := \\frac{1}{2}\\|u(t)-u^-(t)\\|^2+\\lambda^T(t)(A(t)x(t)+B(t)u(t)).\n\\end{equation*}\nFrom the Maximum Principle, $H_u[t] = 0$ and so\n\\begin{align}\nu(t) = u^-(t)-B(t)^T\\lambda(t)\\,, \\label{eqn:gen_u}\n\\end{align}\nfor all $t\\in[t_0,t_f]$. 
Substituting~\\eqref{eqn:lambda_gen} into~\\eqref{eqn:gen_u} and then substituting~\\eqref{eqn:gen_u} into the two-point boundary-value problem (TPBVP) in ${\\cal A}$, we have\n\\begin{equation}\\label{eqn:TPBVP}\n\\dot{x}(t) = A(t)x(t) + B(t)[u^-(t) - B(t)^T\\Phi_A(t_0,t)^T\\lambda_0], \\ \\ x(t_0)=x_0, \\ \\ x(t_f)=x_f\\,,\n\\end{equation}\nfor all $t\\in[t_0,t_f]$. \nSo the problem is reduced to one of finding $\\lambda_0$ that solves the TPBVP in~\\eqref{eqn:TPBVP}. Therefore, we define a new variable $z(t,\\lambda_0) := x(t)$.\nNext, following a simple shooting approach, consider that the initial value problem (rewritten from~\\eqref{eqn:TPBVP})\n\\begin{equation} \\label{eqn:z}\n\\frac{\\partial z(t,\\lambda_0)}{\\partial t} = A(t)z(t,\\lambda_0) + B(t)[u^-(t) - B(t)^T\\Phi_A(t_0,t)^T\\lambda_0],\\ \\ z(t_0,\\lambda_0)=x_0\\,,\n\\end{equation}\nis solved for a given $\\lambda_0$ to get the {\\em near-miss function}\n\\begin{equation}\\label{eqn:nearmiss}\n\\varphi(\\lambda_0) := z(t_f,\\lambda_0)-x_f\\,,\n\\end{equation}\nwhich measures the discrepancy at the end-point $t=t_f$. \nSo the problem is now further transformed into one of finding $\\lambda_0$ such that $\\varphi(\\lambda_0) = 0$.\n\nNext we prove that $\\varphi$ is affine, as then we can express the solution of $\\varphi(\\lambda_0) = 0$ more simply. From \\eqref{eqn:nearmiss} we can see that if $z(t_f,\\lambda_0)$ is affine in $\\lambda_0$ then $\\varphi$ is affine. Now rewrite~\\eqref{eqn:z} without the substitution of the solution of $\\dot{\\lambda}(t) = -A^T(t)\\,\\lambda(t)$ as the system\n\\begin{align}\\label{eqn:zODE}\n\\begin{bmatrix}\\dot{z}(t,\\lambda_0) \\\\ \\dot{\\lambda}(t)\\end{bmatrix} = \\begin{bmatrix}A(t) & -B(t)B^T(t) \\\\ 0_{n\\times n} & -A^T(t)\\end{bmatrix}\\begin{bmatrix}z(t,\\lambda_0) \\\\ \\lambda(t)\\end{bmatrix} + \\begin{bmatrix}B(t) \\\\ 0_{n\\times m}\\end{bmatrix}u^-(t)\\,,\n\\end{align}\nwhere, with a slight abuse of notation, $\\dot{z}(t,\\lambda_0) := \\partial z(t,\\lambda_0)\/\\partial t$.\nLet $g(t,\\lambda_0):=[z(t,\\lambda_0) \\ \\lambda(t)]^T$. To show affineness we need to prove that\n\\begin{equation}\\label{eqn:affine_g}\ng(t,\\alpha\\lambda_1+(1-\\alpha)\\lambda_2)=\\alpha g(t,\\lambda_1)+(1-\\alpha)g(t,\\lambda_2)\n\\end{equation}\nfor all $\\alpha\\in\\mathbb{R}$ and $\\lambda_1,\\lambda_2\\in\\mathbb{R}^n$. Let the first coefficient matrix on the right-hand side of \\eqref{eqn:zODE} be denoted by $C(t)$ and the matrix multiplying $u^-(t)$ be denoted by $D(t)$. Solving \\eqref{eqn:zODE} gives\n\\begin{align*}\ng(t,\\lambda_0) = \\Phi_C(t,t_0)\\begin{bmatrix}x_0 \\\\ \\lambda_0\\end{bmatrix} + \\int_{t_0}^{t} \\Phi_C(t,\\tau)D(\\tau)u^-(\\tau)\\, d\\tau,\n\\end{align*}\nwhere $\\Phi_C(t,t_0)$ is the state transition matrix of $\\dot{y}(t) = C(t)y(t)$, for all $t\\in[t_0, t_f]$. Next we start off with the left-hand side of~\\eqref{eqn:affine_g}, with the aim of getting the right-hand side after direct manipulations.\n\\begin{align*}\ng(t,\\alpha\\lambda_1+(1-\\alpha)\\lambda_2) &= \\Phi_C(t,t_0)\\begin{bmatrix}x_0 \\\\ \\alpha\\lambda_1+(1-\\alpha)\\lambda_2\\end{bmatrix}+\\gamma(t),\n\\end{align*}\nwhere $\\gamma(t):=\\int_{t_0}^{t}\\Phi_C(t,\\tau)D(\\tau)u^-(\\tau)\\,d\\tau$. 
Continuing with further manipulations,\n\\begin{align*}\ng(t,\\alpha\\lambda_1+(1-\\alpha)\\lambda_2) &= \\Phi_C(t,t_0)\\begin{bmatrix}\\alpha x_0+(1-\\alpha)x_0 \\\\ \\alpha\\lambda_1+(1-\\alpha)\\lambda_2\\end{bmatrix}+\\alpha\\gamma(t)+(1-\\alpha)\\gamma(t), \\\\\n&= \\alpha\\Phi_C(t,t_0)\\begin{bmatrix}x_0 \\\\ \\lambda_1\\end{bmatrix}+(1-\\alpha)\\Phi_C(t,t_0)\\begin{bmatrix}x_0 \\\\ \\lambda_2\\end{bmatrix}+\\alpha\\gamma(t)+(1-\\alpha)\\gamma(t), \\\\\n&= \\alpha\\left(\\Phi_C(t,t_0)\\begin{bmatrix}x_0 \\\\ \\lambda_1\\end{bmatrix}+\\gamma(t)\\right)+(1-\\alpha)\\left(\\Phi_C(t,t_0)\\begin{bmatrix}x_0 \\\\ \\lambda_2\\end{bmatrix}+\\gamma(t)\\right), \\\\\n&= \\alpha g(t,\\lambda_1)+(1-\\alpha)g(t,\\lambda_2),\n\\end{align*}\nwhich verifies~\\eqref{eqn:affine_g} and thus proves the affineness of $\\varphi$.\n\nSince $\\varphi$ is affine, one can now simply write\n\\[\n\\varphi(\\lambda_0) = \\varphi(0) + J_{\\varphi}(0)\\,\\lambda_0 = 0\\,,\n\\]\nwhere, with $\\lambda(t_0) = \\lambda_0 := (\\lambda_{0,1},\\ldots,\\lambda_{0,n})$, the Jacobian $J_{\\varphi}$ of $\\varphi$ at $\\lambda_0$ is defined by\n\\begin{equation}\\label{eqn:jac}\nJ_{\\varphi}(0) := \n\\left[\\frac{\\partial z(t_f,0)}{\\partial \\lambda_{0,1}}\\ \\Big|\\ \\cdots\\ \\Big|\\ \\displaystyle\\frac{\\partial z(t_f,0)}{\\partial \\lambda_{0,n}}\n\\right] = \\frac{\\partial x(t_f)}{\\partial \\lambda_0}\\,,\n\\end{equation}\nwhere we have set $x(t_f) = z(t_f,0)$. So one simply needs to solve for $\\lambda_0$ the linear system\n\\begin{equation} \\label{eqn:lineq_lambda0}\nJ_{\\varphi}(0)\\,\\lambda_0 = -(x(t_f)-x_f)\\,,\n\\end{equation}\nwhere we obtain $x(t_f)$ by solving~\\eqref{eqn:z} with $\\lambda_0 = 0$ as\n\\begin{equation} \\label{eqn:lineq_xf}\nx(t_f) = \\Phi_A(t_f,t_0)\\,x_0 + \\int_{t_0}^{t_f}\\,\\Phi_A(t_f,\\tau)\\,B(\\tau)\\,u^-(\\tau)\\,d\\tau\\,.\n\\end{equation}\nConsequently, substitution of~\\eqref{eqn:lambda_gen} into~\\eqref{eqn:gen_u}, subject to \\eqref{eqn:lineq_lambda0} and \\eqref{eqn:lineq_xf}, furnishes the projector expression given in the theorem. \n\\end{proof}\n\n\\begin{remark} \\rm\nOne can find the elements of $J_{\\varphi}(\\lambda_0)$ by solving the variational equations of~\\eqref{eqn:z} with respect to $\\lambda_0$, i.e., by solving the following equations in $(\\partial z \/ \\partial \\lambda_{0,i})(t,\\lambda_0) \\in \\mathbb{R}^n$, for $i=1,\\dots,n$.\n\\[\n\\frac{\\partial}{\\partial t}\\left(\\frac{\\partial z}{\\partial \\lambda_{0,i}}\\right)(t,\\lambda_0) = A(t)\\frac{\\partial z}{\\partial \\lambda_{0,i}}(t,\\lambda_0) - B(t)B(t)^T\\Phi_A(t_0,t)^T\\,e_i\n\\]\nwhere $e_i\\in\\mathbb{R}^n$ are the canonical basis vectors, i.e., with 1 in the $i$th coordinate and zero elsewhere. 
Let $\\widetilde{y}(t),\\dot{\\widetilde{y}}(t)\\in\\mathbb{R}^{n^2}$, where\n\\begin{equation*}\n\\widetilde{y} := \\begin{bmatrix}\n\\partial z\/\\partial \\lambda_{0,1} \\\\\n\\vdots \\\\\n\\partial z\/\\partial \\lambda_{0,n}\n\\end{bmatrix} = \\begin{bmatrix}\ny_1 \\\\ \\vdots \\\\ y_n\n\\end{bmatrix} \\ \\\n\\textrm{ and } \\ \\\n\\dot{\\widetilde{y}} := \\begin{bmatrix}\n\\frac{\\partial}{\\partial t}\\left(\\partial z\/\\partial \\lambda_{0,1}\\right) \\\\\n\\vdots \\\\\n\\frac{\\partial}{\\partial t}\\left(\\partial z\/\\partial \\lambda_{0,n}\\right)\n\\end{bmatrix}= \\begin{bmatrix}\n\\dot{y}_1 \\\\ \\vdots \\\\ \\dot{y}_n\n\\end{bmatrix}.\n\\end{equation*}\nThen $\\dot{\\widetilde{y}}=\\widetilde{A}(t)\\widetilde{y}+\\widetilde{B}(t)$, $\\widetilde{y}(t_0)=0$, where\n\\begin{equation*}\n\\widetilde{A}(t) = \\begin{bmatrix}\nA(t) & & \\mathbf{0}\\\\\n& \\ddots & \\\\\n\\mathbf{0} & & A(t) \n\\end{bmatrix}\n\\in\\mathbb{R}^{n^2\\times n^2} \\ \\\n\\textrm{ and } \\ \\\n\\widetilde{B}(t) = -B(t)B(t)^T\\Phi_A(t_0,t)^T\\begin{bmatrix}\ne_1 \\\\\n\\vdots \\\\\ne_n\n\\end{bmatrix}\n\\in\\mathbb{R}^{n^2}.\n\\end{equation*}\nUsing Lemma \\ref{lem:A_tilde} and the standard solution formula for linear ODEs, we obtain\n\\begin{equation}\\label{eqn:y}\n\\widetilde{y}(t) = \\int_{t_0}^{t}\\Phi_{\\widetilde{A}}(t,\\tau)\\widetilde{B}(\\tau)\\,d\\tau\\,,\n\\end{equation}\nwhere ${\\Phi}_{\\widetilde{A}}(t,t_0)$ is the transition matrix of $\\dot{\\widetilde{y}}=\\widetilde{A}(t)\\widetilde{y}$. So evaluating the above integral and substituting $t=t_f$ gives the components of $J_{\\varphi}(0)$.\n\\hspace{\\fill}{$\\Box$}\n\\end{remark}\n\n\\begin{theorem}[Projection onto \\boldmath{${\\cal B}$}]\\label{thm:projB}\nThe projection $P_{{\\cal B}}$ of $u^-\\in {\\cal L}^2([t_0,t_f];\\mathbb{R}^m)$ onto the constraint set ${\\cal B}$, as the solution of Problem~{\\em(P2)}, is given by\n\\begin{eqnarray}\\label{eqn:projB}\n[P_{{\\cal B}}(u^-)(t)]_i = \\left\\{\\begin{array}{rl}\n a_i, & \\mbox{if } u^-_i(t) \\geq a_i, \\\\[2mm]\n u^-_i(t), & \\mbox{if } -a_i\\leq u^-_i(t) \\leq a_i, \\\\[2mm]\n -a_i, & \\mbox{if } u^-_i(t) \\leq -a_i,\n\\end{array} \\right.\n\\end{eqnarray}\nfor all $t\\in[t_0,t_f],\\ i=1,\\dots,m$. \\end{theorem}\n\\begin{proof}\nSimply use separability of Problem~(P2) in $u_i$, $i = 1,\\ldots,m$.\n\\end{proof}\n\n\n\\subsection{Projectors for special cases}\n\\label{subsec:proj_2var}\n\nIn this subsection we consider problems with two state variables ($n = 2$) and one control variable ($m=1$). In particular, we consider problems involving the double integrator and the pure and damped harmonic oscillators, for which the general system and control matrices in set ${\\cal A}$ in~\\eqref{A_gen} become\n\\begin{equation}\\label{eqn:matrices}\nA(t) = A = \\begin{bmatrix}\n0 & 1 \\\\\n-\\omega_0^2 & -2\\zeta\\omega_0\n\\end{bmatrix}\\quad\\mbox{and}\\quad\nB(t) = b = \\begin{bmatrix}\n0 \\\\\n1\n\\end{bmatrix},\n\\end{equation}\nwhere $\\omega_0$ is the natural frequency and $\\zeta$ is the damping ratio. Note that $\\zeta = 0$ is the case of the pure (undamped) harmonic oscillator, $0<\\zeta<1$ the under-damped, $\\zeta=1$ the critically-damped, and $\\zeta>1$ the over-damped harmonic oscillator. 
The general forms of the constraint sets can be found in \\eqref{A_gen}--\\eqref{B_gen}, but for this specialization we define\n\\begin{eqnarray} \n{\\cal A}_{\\omega_0,\\zeta} &:=& \\big\\{u\\in {\\cal L}^2([0,t_f];\\mathbb{R})\\ |\\ \\exists x\\in {\\cal W}^{1,2}([0,t_f];\\mathbb{R}^2)\\mbox{ which solves } \\nonumber \\\\\n&&\\ \\ \\dot{x}_1(t) = x_2(t)\\,,\\ x_1(0) = s_0\\,,\\ x_1(t_f) = s_f\\,, \\nonumber \\\\[1mm]\n&&\\ \\ \\dot{x}_2(t) = -\\omega_0^2 x_1(t) -2\\zeta\\omega_0 x_2(t) + u(t)\\,,\\ \\ \\,x_2(0) = v_0\\,,\\ x_2(t_f) = v_f\\,,\\nonumber\\\\[1mm] \n&&\\ \\ \\forall t\\in[0,t_f]\\big\\}\\,, \\label{A} \\\\[2mm]\n{\\cal B} &:=& \\big\\{u\\in {\\cal L}^2([0,t_f];\\mathbb{R})\\ | -a\\le u(t)\\le a\\,,\\ \\forall t\\in[0,t_f]\\big\\}\\,.\\label{B}\n\\end{eqnarray}\nTo maintain the flow of this paper we move the proofs from this subsection to \\ref{sec:proofs}, as these are rather lengthy and the proof techniques follow a similar pattern. In the lemmas we complete some of the technical steps by deriving expressions for the state transition matrices and Jacobians required in Theorem \\ref{thm:projA_gen}. Then the corollaries follow by direct substitution into the expression \\eqref{eqn:projA} in Theorem~\\ref{thm:projA_gen}. Since we find analytical expressions in each of the lemmas, we express the inverse of the Jacobian and use it directly in the expression in Theorem~\\ref{thm:projA_gen}.\n\nIn the case where the inverse of the Jacobian is analytical and not lengthy, we express $\\lambda_0$ as\n\\[\n\\lambda_0 = -[J_\\varphi(0)]^{-1}(x(t_f)-x_f)\n\\]\nto have a closed-form expression for the projector:\n\\begin{multline}\\label{eqn:projA_spec}\nP_{\\cal{A}}(u^-)(t) = u^-(t) + B(t)^T\\Phi_A(t_0,t)^T[J_\\varphi(0)]^{-1}\\bigg(\\Phi_A(t_f,t_0)\\,x_0 \\\\ + \\int_{t_0}^{t_f}\\Phi_A(t_f,\\tau)B(\\tau)u^-(\\tau)d\\tau-x_f\\bigg).\n\\end{multline}\nIn the cases of the double integrator as well as the under-, critically- and over-damped harmonic oscillators the inverse of the Jacobian is simple enough that we will use \\eqref{eqn:projA_spec}.\n\n\\subsubsection{Double integrator}\\label{subsec:projPDI}\nThe dynamics of the double integrator are given by $\\ddot{y}(t) = f(t)$, where $f(t)$ stands for forcing, which typically models the motion of a point mass (or analogously, an electric circuit or a fluid system with capacitance)---see pertaining references in \\cite{BausBuraKaya2019}, where $y(t)$ is the position and $\\dot{y}(t)$ the velocity at time $t$. With $x_1 := y$ and $x_2 := \\dot{y}$, one gets the state equations $\\dot{x}_1 = x_2$ and $\\dot{x}_2 = u$; in other words, $\\omega_0=0$ and $\\zeta=0$, resulting in the constraint set ${\\cal A}_{0,0}$. We note from~\\eqref{eqn:matrices} that\n\\[\nA = \\begin{bmatrix}\n0 &\\ 1 \\\\\n0 &\\ 0\n\\end{bmatrix}. \\ \\\n\\]\nThe minimum-energy problem that we consider corresponds, for example, to the practical problem of engineering where one would like to minimize the average magnitude of the force, or the problem of designing cubic (variational) curves.\n\nIn what follows we present the projections onto ${\\cal A}_{0,0}$ and $\\cal B$ in Corollaries \\ref{cor:projA_PDI} and \\ref{cor:projB} below. These two results and their proofs can be found in \\cite{BausBuraKaya2019}.\n\nRecall the definition of the state transition matrix $\\Phi_A(t,t_0)$ via \\eqref{x_soln} and the definition of the Jacobian $J_\\varphi(0)$ in \\eqref{eqn:jac}. 
The following lemma evaluates $\\Phi_A(t,0)$ and $[J_\\varphi(0)]^{-1}$ for the double integrator, which are utilized in the proof of Corollary~\\ref{cor:projA_PDI}.\n\\begin{lemma}[Computation of $\\Phi_A$ and $J_\\varphi$ for $\\omega_0=0,\\,\\zeta=0$] \\label{lem:PDE}\nOne has that\n\\begin{equation} \\label{PDI_Phi}\n \\Phi_A(t,0) = e^{At} = \\begin{bmatrix}\n1 & t \\\\\n0 & 1\n\\end{bmatrix}, \\quad\n[J_\\varphi(0)]^{-1} = \\begin{bmatrix}\n-12 & 6 \\\\ -6 & 2\n\\end{bmatrix}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nSee \\hyperlink{lem:PDE_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\nThe following result is a corollary of Theorem \\ref{thm:projA_gen}. As mentioned already, this result along with its proof can be found in~\\cite{BausBuraKaya2019}, but we present here a new proof which directly substitutes the expressions in Lemma~\\ref{lem:PDE} into Theorem~\\ref{thm:projA_gen}.\n\\begin{corollary}\n[Projection onto ${\\cal A}_{0,0}$~\\cite{BausBuraKaya2019}] \\label{cor:projA_PDI}\nThe projection $P_{{\\cal A}_{0,0}}$ of $u^-\\in {\\cal L}^2([0,1];\\mathbb{R})$ onto the constraint set ${\\cal A}_{0,0}$, as the solution of Problem~{\\em(P1)} with $\\omega_0=0$ and $\\zeta=0$, is given by\n\\begin{equation}\\label{eqn:u_PDI}\nP_{{\\cal A}_{0,0}}(u^-)(t) = u^-(t) + c_1\\,t + c_2\\,,\n\\end{equation}\nfor all $t\\in[0,1]$, where\n{\\begin{eqnarray}\n&& c_1 := 12\\left(s_0+v_0-s_f+\\int_0^1 (1-\\tau)u^-(\\tau)d\\tau\\right)-6\\left(v_0-v_f+\\int_0^1 u^-(\\tau)d\\tau\\right), \\\\[1mm]\n&& c_2 := -6\\left(s_0+v_0-s_f+\\int_0^1 (1-\\tau)u^-(\\tau)d\\tau\\right)+2\\left(v_0-v_f+\\int_0^1 u^-(\\tau)d\\tau\\right).\n\\end{eqnarray}}\n\\end{corollary}\n\\begin{proof}\nSee \\hyperlink{cor:PDE_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\nThe following result is a direct consequence of Theorem~\\ref{thm:projB} for all cases of the harmonic oscillator. \n\n\\begin{corollary}[Projection onto \\boldmath{${\\cal B}$}~\\cite{BausBuraKaya2019}]\\label{cor:projB}\nThe projection $P_{{\\cal B}}$ of $u^-\\in {\\cal L}^2([0,t_f];\\mathbb{R})$ onto the constraint set ${\\cal B}$, as the solution of Problem~{\\em(P2)}, is given by\n\\begin{equation} \\label{u_proj_B}\nP_{{\\cal B}}(u^-)(t) = \\left\\{\\begin{array}{rl}\na\\,, &\\ \\ \\mbox{if\\ \\ } u^-(t)\\ge a\\,, \\\\[1mm]\nu^-(t)\\,, &\\ \\ \\mbox{if\\ \\ } -a\\le u^-(t)\\le a\\,, \\\\[1mm]\n-a\\,, &\\ \\ \\mbox{if\\ \\ } u^-(t)\\le -a\\,, \\\\[1mm]\n\\end{array} \\right.\n\\end{equation}\nfor all $t\\in[0,t_f]$. \n\\end{corollary}\n\n\n\n\\subsubsection{Pure harmonic oscillator}\\label{subsec:projPHO}\nWhen a spring is added to the point mass, or an inductor to the electric circuit with capacitance, one gets the {\\em pure} (or undamped) harmonic oscillator, as without forcing, once excited the state variables will exhibit sustained oscillations (or sinusoids) at frequency $\\omega_0$. We extend Corollary~\\ref{cor:projA_PDI} to the general case of projecting onto ${\\cal A}_{\\omega_0,0}$. 
In this case, from \\eqref{eqn:matrices} one has\n\\begin{equation} \\label{A_PHO}\nA = \\begin{bmatrix}\n0 & 1 \\\\ -\\omega_0^2 & 0\n\\end{bmatrix}.\n\\end{equation}\nThe following lemma provides major ingredients for the projector in Theorem~\\ref{thm:projA_gen}.\n\\begin{lemma}[Computation of $\\Phi_A$ and $J_\\varphi$ for $\\omega_0>0,\\,\\zeta=0$]\\label{lem:PHO}\nOne has that\n\\begin{equation} \\label{eqn:PHO_Phi}\n\\Phi_A(t,0) = e^{At} = \\begin{bmatrix}\n\\cos(\\omega_0t) & \\displaystyle\\frac{\\sin(\\omega_0t)}{\\omega_0} \\\\ -\\omega_0\\sin(\\omega_0t) & \\cos(\\omega_0t)\n\\end{bmatrix}, \\quad\n[J_\\varphi(0)]^{-1} = \\begin{bmatrix}\n-\\omega_0^2\/\\pi & 0 \\\\ 0 & -1\/\\pi\n\\end{bmatrix}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nSee \\hyperlink{lem:PHO1_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\nThe following corollary is a direct consequence of Theorem~\\ref{thm:projA_gen}.\n\\begin{corollary}\n[Projection onto ${\\cal A}_{\\omega_0,0}$] \\label{cor:projA_PHO}\nThe projection $P_{{\\cal A}_{\\omega_0,0}}$ of $u^-\\in {\\cal L}^2([0,2\\pi];\\mathbb{R})$ onto the constraint set ${\\cal A}_{\\omega_0,0}$, as the solution of Problem~{\\em(P1)} with $\\zeta=0$, is given by\n\\begin{equation} \\label{eqn:u_PHO}\nP_{{\\cal A}_{\\omega_0,0}}(u^-)(t) = u^-(t)+c_1\\sin(\\omega_0t) - c_2\\cos(\\omega_0t),\n\\end{equation}\nwhere\n\\[\n\\begin{array}{l}\n c_1:=\\displaystyle \\frac{\\omega_0}{\\pi}\\bigg(s_0-s_f-\\frac{1}{\\omega_0}\\int_0^{2\\pi}\\sin(\\omega_0\\tau)u^-(\\tau)\\,d\\tau\\bigg) , \\\\\n \\\\\n c_2:=\\displaystyle \\frac{1}{\\pi}\\bigg(v_0-v_f+\\int_0^{2\\pi}\\cos(\\omega_0\\tau)u^-(\\tau)\\,d\\tau\\bigg).\n\\end{array}\n\\]\n\n\\end{corollary}\n\\begin{proof}\nSee \\hyperlink{cor:PHO1_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n
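For illustration, the projector of Corollary~\\ref{cor:projA_PHO} is straightforward to evaluate numerically once the two integrals defining $c_1$ and $c_2$ are approximated by quadrature. The sketch below is our own; it assumes a fine uniform grid on $[0,2\\pi]$ and uses the trapezoidal rule, so it only approximates the exact projector.
\\begin{verbatim}
import numpy as np

def trapezoid(y, t):
    """Trapezoidal-rule approximation of the integral of y over t."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(t)) / 2.0)

def project_onto_A_pho(u, t, omega0, s0, v0, sf, vf):
    """Sketch of the projector of Corollary 3 (pure harmonic
    oscillator) on a time grid t covering [0, 2*pi]; u holds the
    samples of the current iterate u^-."""
    I_sin = trapezoid(np.sin(omega0 * t) * u, t)
    I_cos = trapezoid(np.cos(omega0 * t) * u, t)
    c1 = (omega0 / np.pi) * (s0 - sf - I_sin / omega0)
    c2 = (v0 - vf + I_cos) / np.pi
    return u + c1 * np.sin(omega0 * t) - c2 * np.cos(omega0 * t)
\\end{verbatim}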
\\subsubsection{Damped harmonic oscillator}\\label{subsec:projPDHO}\nIf a damper (which is an element that dissipates energy) is added to the mass-spring system (or analogously a resistor added to a capacitor-inductor electrical circuit), one gets what is referred to as a {\\em damped harmonic oscillator}. There are three cases to consider for a damped system, namely the {\\em critically-} ($\\zeta = 1$), {\\em over-} ($\\zeta > 1$) and {\\em under-damped} ($0 < \\zeta < 1$) cases. We provide the projectors for each case.\n\nCorollary~\\ref{cor:projA_PDHO1} below presents the projector onto the set ${\\cal A}_{\\omega_0,1}$ for the critically-damped case. From \\eqref{eqn:matrices}, the system matrix $A$ for this case is\n\\[A=\\begin{bmatrix}\n0 & 1 \\\\ -\\omega_0^2 & -2\\omega_0\n\\end{bmatrix}.\\]\n\\begin{lemma}[Computation of $\\Phi_A$ and $J_{\\varphi}$ for $\\omega_0>0,\\,\\zeta=1$]\\label{lem:PDHO_crit}\nOne has that\n\\begin{align}\\label{eqn:PDHO1_Phi}\n\\Phi_A(t,0) = e^{At} = e^{-\\omega_0t}\\begin{bmatrix}\n\\omega_0t+1 & t \\\\ -t\\omega_0^2 & -\\omega_0t+1\n\\end{bmatrix}\n\\end{align}\nand\n\\begin{equation}\\label{eqn:Jphi_PDHO}\n[J_\\varphi(0)]^{-1} = \\dfrac{1}{y_{11}(2\\pi)y_{22}(2\\pi)-y_{12}(2\\pi)y_{21}(2\\pi)}\\begin{bmatrix}\ny_{22}(2\\pi) & -y_{21}(2\\pi) \\\\ -y_{12}(2\\pi) & y_{11}(2\\pi)\n\\end{bmatrix}\n\\end{equation}\nwhere $y_{ij}(2\\pi)$, $j = 1,2$, are the components of the vectors $y_i(2\\pi)$, $i = 1,2$, given below.\n\\begin{equation}\\label{eqn:y_PDHO1}\ny(2\\pi) = \\left[\\begin{array}{c}\ny_1(2\\pi) \\\\[1mm] \\hdashline[3pt\/3pt] \\\\[-4mm]\ny_2(2\\pi)\n\\end{array}\\right] = \n\\left[\\begin{array}{c}\n\\dfrac{e^{-2\\pi\\omega_0}(2\\pi\\omega_0-e^{4\\pi\\omega_0}+2\\pi\\omega_0e^{4\\pi\\omega_0}+1)}{4\\omega_0^3} \\\\[4mm] \\dfrac{\\pi\\sinh(2\\pi\\omega_0)}{\\omega_0} \\\\[4mm] \\hdashline[3pt\/3pt] \\\\[-2mm]\n-\\dfrac{\\pi\\sinh(2\\pi\\omega_0)}{\\omega_0} \\\\[4mm] -\\dfrac{e^{-2\\pi\\omega_0}(2\\pi\\omega_0+e^{4\\pi\\omega_0}+2\\pi\\omega_0e^{4\\pi\\omega_0}-1)}{4\\omega_0}\n\\end{array}\\right].\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nSee \\hyperlink{lem:PDHO1_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\n\\begin{corollary}[Projection onto ${\\cal A}_{\\omega_0,1}$] \\label{cor:projA_PDHO1}\nThe projection $P_{{\\cal A}_{\\omega_0,1}}$ of $u^-\\in {\\cal L}^2([0,2\\pi];\\mathbb{R})$ onto the constraint set ${\\cal A}_{\\omega_0,1}$, as the solution of Problem~{\\em(P1)} with $\\zeta=1$, is given by\n\\begin{align}\\label{eqn:u_PDHO1}\nP_{{\\cal A}_{\\omega_0,1}}(u^-)(t) = u^-(t)&+\\frac{e^{\\omega_0(t-2\\pi)}}{y_{11}y_{22}-y_{12}y_{21}}\\left(-((y_{22}+y_{12}\\omega_0)t+y_{12})\\left(x_1(2\\pi)-\\frac{s_f}{e^{-2\\pi\\omega_0}}\\right)\\right.\\nonumber\\\\ &+\\left.((y_{21}+y_{11}\\omega_0)t+y_{11})\\left(x_2(2\\pi)-\\frac{v_f}{e^{-2\\pi\\omega_0}}\\right)\\right),\n\\end{align}\nwhere $y_{ij}$ are the components of $y(2\\pi)$ given in~\\eqref{eqn:y_PDHO1}.\n\\end{corollary}\n\\begin{proof}\nSee \\hyperlink{cor:PDHO1_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\n\nIn Corollary~\\ref{cor:projA_PDHO2} we consider the derivation of the projection onto the set ${\\cal A}_{\\omega_0,\\zeta}$ from \\eqref{A} where $\\zeta>1$.\n\\begin{lemma}[Computation of $\\Phi_A$ and $J_\\varphi$ for $\\omega_0>0,\\zeta>1$]\\label{lem:PDHO_2}\nOne has that \n\\begin{equation} \\label{eqn:PDHO2_Phi}\n\\Phi_A(t,0) = e^{At} = \\frac{e^{-\\alpha t}}{\\beta}\\begin{bmatrix}\n\\omega_0\\sinh(\\beta t+\\eta) & \\sinh(\\beta t) \\\\ -\\omega_0^2\\sinh(\\beta t) & \\omega_0\\sinh(-\\beta t+\\eta)\n\\end{bmatrix}\n\\end{equation}\nwith $\\alpha=\\omega_0\\zeta$, \\ $\\beta=\\omega_0\\sqrt{\\zeta^2-1}, \\ \\eta = \\frac{1}{2}\\ln\\left|\\frac{\\beta+\\alpha}{\\beta-\\alpha}\\right|$. 
Then we express the inverse of the Jacobian as in \\eqref{eqn:Jphi_PDHO} where $y_{ij}(2\\pi)$, $j = 1,2$, are the components of the vectors $y_i(2\\pi)$, $i = 1,2$, given below.\n\\begin{equation}\\label{eqn:y_PDHO2}\ny(2\\pi) = \\left[\\begin{array}{c}\ny_1(2\\pi) \\\\[1mm] \\hdashline[3pt\/3pt] \\\\[-4mm]\ny_2(2\\pi)\n\\end{array}\\right] = \n\\left[\\begin{array}{c}\n\\dfrac{e^{-2\\pi\\alpha}}{4\\omega_0^2}\\left(\\dfrac{(1-e^{4\\pi\\alpha})\\cosh(2\\pi\\beta)}{\\alpha}+\\dfrac{(1+e^{4\\pi\\alpha})\\sinh(2\\pi\\beta)}{\\beta}\\right) \\\\[4mm]\n\\dfrac{\\sinh(2\\pi\\alpha)\\sinh(2\\pi\\beta)}{2\\alpha\\beta} \\\\[4mm] \\hdashline[3pt\/3pt] \\\\[-2mm]\n-\\dfrac{\\sinh(2\\pi\\alpha)\\sinh(2\\pi\\beta)}{2\\alpha\\beta} \\\\[4mm]\n\\dfrac{e^{-2\\pi\\alpha}}{4}\\left(\\dfrac{(1-e^{4\\pi\\alpha})\\cosh(2\\pi\\beta)}{\\alpha}+\\dfrac{(1+e^{4\\pi\\alpha})\\sinh(2\\pi\\beta)}{\\beta}\\right)\n\\end{array}\\right].\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nSee \\hyperlink{lem:PDHO2_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\n\\begin{corollary}[Projection onto ${\\cal A}_{\\omega_0,\\zeta}$ with $\\zeta>1$]\\label{cor:projA_PDHO2}\nThe projection $P_{{\\cal A}_{\\omega_0,\\zeta}}$ of $u^-\\in {\\cal L}^2([0,2\\pi];\\mathbb{R})$ onto the constraint set ${\\cal A}_{\\omega_0,\\zeta}$, as the solution of Problem~{\\em(P1)} where $\\zeta>1$, is given by\n\\begin{align} \\label{eqn:u_PDHO2}\nP_{{\\cal A}_{\\omega_0,\\zeta}}(u^-)&(t) = \\dfrac{e^{\\alpha (t-2\\pi)}}{\\beta^2(y_{11}(2\\pi)y_{22}(2\\pi)-y_{12}(2\\pi)y_{21}(2\\pi))}\\bigg(-(y_{22}(2\\pi)\\sinh(\\beta t)+y_{12}(2\\pi)\\nonumber\\\\ & \\ \\ \\ \\ \\ \\ \\ \\times\\omega_0\\sinh(\\beta t+\\eta))\\left(x_1(2\\pi)-\\dfrac{\\beta s_f}{e^{-2\\pi\\alpha}}\\right)\\nonumber\\\\ &\\ \\ \\ \\ \\ \\ \\ +(y_{21}(2\\pi)\\sinh(\\beta t)+y_{11}(2\\pi)\\omega_0\\sinh(\\beta t+\\eta))\\left(x_2(2\\pi)-\\dfrac{\\beta v_f}{e^{-2\\pi\\alpha}}\\right)\\!\\bigg)\n\\end{align}\nwhere $\\alpha=\\omega_0\\zeta, \\ \\beta=\\omega_0\\sqrt{\\zeta^2-1}$ and $y_{ij}$ are the components of $y(2\\pi)$ given in \\eqref{eqn:y_PDHO2}.\n\\end{corollary}\n\\begin{proof}\nSee \\hyperlink{cor:PDHO2_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\nIn Corollary~\\ref{cor:projA_PDHO3} we consider the final case for the damped harmonic oscillator, which is the projection onto the set ${\\cal A}_{\\omega_0,\\zeta}$ from \\eqref{A} where $0<\\zeta<1$.\n\n\\begin{lemma}[Computation of $\\Phi_A$ and $J_\\varphi$ for $\\omega_0>0,0<\\zeta<1$]\\label{lem:PDHO_3}\nOne has that\n\\begin{equation} \\label{eqn:PDHO3_Phi}\n\\Phi_A(t,0) = e^{At} = \\dfrac{e^{-\\alpha t}}{\\widetilde{\\beta}}\\begin{bmatrix}\n\\omega_0\\cos(\\widetilde{\\beta} t+\\gamma) & \\sin(\\widetilde{\\beta} t) \\\\ -\\omega_0^2\\sin(\\widetilde{\\beta} t) & \\omega_0\\cos(\\widetilde{\\beta} t-\\gamma)\n\\end{bmatrix},\n\\end{equation}\nwhere $\\alpha=\\omega_0\\zeta, \\ \\widetilde{\\beta}=\\omega_0\\sqrt{1-\\zeta^2}, \\ \\gamma = \\tan^{-1}(-\\frac{\\alpha}{\\widetilde{\\beta}})$. 
Then we express the inverse of the Jacobian as in \\eqref{eqn:Jphi_PDHO}, where $y_{ij}(2\\pi)$, $j = 1,2$, are the components of the vectors $y_i(2\\pi)$, $i = 1,2$, given below.\n\\begin{equation}\\label{eqn:y_PDHO3}\ny(2\\pi) = \\left[\\begin{array}{c}\ny_1(2\\pi) \\\\[1mm] \\hdashline[3pt\/3pt] \\\\[-4mm]\ny_2(2\\pi)\n\\end{array}\\right] = \\left[\\begin{array}{c}\n\\dfrac{e^{-2\\pi\\alpha}}{4\\omega_0^2}\\left(\\dfrac{\\cos(2\\pi\\widetilde{\\beta})(1-e^{4\\pi\\alpha})}{\\alpha}+\\dfrac{\\sin(2\\pi\\widetilde{\\beta})(1+e^{4\\pi\\alpha})}{\\widetilde{\\beta}}\\right) \\\\[4mm]\n\\dfrac{\\sinh(2\\pi\\alpha)\\sin(2\\pi\\widetilde{\\beta})}{2\\alpha\\widetilde{\\beta}} \\\\[4mm] \\hdashline[3pt\/3pt] \\\\[-2mm]\n-\\dfrac{\\sinh(2\\pi\\alpha)\\sin(2\\pi\\widetilde{\\beta})}{2\\alpha\\widetilde{\\beta}} \\\\[4mm]\n\\dfrac{e^{-2\\pi\\alpha}}{4}\\left(\\dfrac{\\cos(2\\pi\\widetilde{\\beta})(1-e^{4\\pi\\alpha})}{\\alpha}-\\dfrac{\\sin(2\\pi\\widetilde{\\beta})(1+e^{4\\pi\\alpha})}{\\widetilde{\\beta}}\\right)\n\\end{array}\\right].\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nSee \\hyperlink{lem:PDHO3_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\n\\begin{corollary}[Projection onto ${\\cal A}_{\\omega_0,\\zeta}$ with $0<\\zeta<1$]\\label{cor:projA_PDHO3}\nThe projection $P_{{\\cal A}_{\\omega_0,\\zeta}}$ of $u^-\\in {\\cal L}^2([0,2\\pi];\\mathbb{R})$ onto the constraint set ${\\cal A}_{\\omega_0,\\zeta}$, as the solution of Problem~{\\em(P1)} where $0<\\zeta<1$, is given by\n\\begin{align*}\nP_{{\\cal A}_{\\omega_0,\\zeta}}(u^-)(t) &= u^-(t)+\\dfrac{e^{\\alpha(t-2\\pi)}}{\\widetilde{\\beta}^2(y_{11}(2\\pi)y_{22}(2\\pi)-y_{12}(2\\pi)y_{21}(2\\pi))}\\\\&\\Bigg(-\\left(y_{22}(2\\pi)\\sin(\\widetilde{\\beta} t)+y_{12}(2\\pi)\\omega_0\\cos(\\widetilde{\\beta} t+\\gamma)\\right)\\left(x_1(2\\pi)-\\frac{\\widetilde{\\beta} s_f}{e^{-2\\pi\\alpha}}\\right)\\\\& + \\left(y_{21}(2\\pi)\\sin(\\widetilde{\\beta} t)+y_{11}(2\\pi)\\omega_0\\cos(\\widetilde{\\beta} t+\\gamma)\\right)\\left(x_2(2\\pi)-\\frac{\\widetilde{\\beta} v_f}{e^{-2\\pi\\alpha}}\\right) \\Bigg),\n\\end{align*}\nwhere $\\alpha=\\omega_0\\zeta, \\ \\widetilde{\\beta}=\\omega_0\\sqrt{1-\\zeta^2}$ and $y_{ij}$ are the components of $y(2\\pi)$ given in \\eqref{eqn:y_PDHO3}.\n\\end{corollary}\n\\begin{proof}\nSee \\hyperlink{cor:PDHO3_proof}{proof} in \\ref{sec:proofs}.\n\\end{proof}\n\nNote that Corollary~\\ref{cor:projA_PHO} cannot be recovered from Corollary~\\ref{cor:projA_PDHO3} by simply substituting $\\zeta=0$ because it would result in division by zero since $\\zeta=0$ implies that $\\alpha=0$.\n\n\\subsection{Machine tool manipulator}\nA machine tool manipulator is an automatic machine that simulates human hand operations. The dynamics of this machine can be formulated as an LQ control problem as in \\cite{MTM}. For this problem, the system and control matrices in \\eqref{A_gen} become\n\\begin{align*}\nA(t) &= A = \\begin{bmatrix}\n0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n-4.441\\times10^7\/450 & 0 & 0 & -8500\/450 & 0 & 0 & -1\/450 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1\/750 \\\\\n0 & 0 & -8.2\\times10^6\/40 & 0 & 0 & -1800\/40 & 0.25\/40 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & -1\/0.0025\n\\end{bmatrix}, \\\\ B(t) &= b = \\begin{bmatrix}\n0 & 0 & 0 & 0 & 0 & 0 & 1\/0.0025\n\\end{bmatrix}^T.\n\\end{align*}\nUnlike the special cases in the previous subsection, we will not provide analytical projectors for this problem. 
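For concreteness, these matrices can be entered directly in code and passed to the numerical procedure of Section~\\ref{sec:num}; the short sketch below (our own) simply transcribes them.
\\begin{verbatim}
import numpy as np

# Machine tool manipulator data (n = 7 states, m = 1 control),
# transcribed entry by entry from the matrices displayed above.
A = np.zeros((7, 7))
A[0, 3] = A[1, 4] = A[2, 5] = 1.0
A[3, 0] = -4.441e7 / 450; A[3, 3] = -8500 / 450; A[3, 6] = -1 / 450
A[4, 6] = 1 / 750
A[5, 2] = -8.2e6 / 40;    A[5, 5] = -1800 / 40;  A[5, 6] = 0.25 / 40
A[6, 6] = -1 / 0.0025
b = np.zeros((7, 1)); b[6, 0] = 1 / 0.0025
\\end{verbatim}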
Because this problem has 7 state variables, computing $\\Phi_A, \\Phi_{(-A^T)}$ and $[J_\\varphi(0)]^{-1}$ analytically is not a simple task. Instead, we will introduce and implement the numerical procedure in Section \\ref{sec:num}.\n\n\\section{Projection algorithms}\\label{sec:projAlg}\nA best approximation algorithm is an algorithm that produces the closest element in a given, prescribed set. In our context, these algorithms use projection algorithms, which we recall next. For this, we closely follow the framework in \\cite{BausBuraKaya2019}. We consider a real Hilbert space denoted by $X$, with inner product $\\langle\\cdot,\\cdot\\rangle$ and induced norm $\\|\\cdot\\|$.\n\nWe will consider the sets $\\cal{A}$ and $\\cal{B}$ to align with the previous results, but note that the only assumptions required are that we have a closed affine subspace of $X$ and a nonempty closed convex subset of $X$.\n\nGiven $z\\in X$, our aim is to find\n\\[\nP_{\\cal{A}\\cap \\cal{B}}(z),\n\\]\nthe projection of $z$ onto the intersection $\\cal{A}\\cap \\cal{B}$. We assume that the set $\\cal{A}\\cap \\cal{B}$ is \\emph{nonempty}. Hence, the element $P_{\\cal{A}\\cap \\cal{B}}(z)$ will constitute a feasible point, which will in turn be optimal for our optimization problems. In our analysis, we assume that we are able to compute the projector operators $P_{\\cal{A}}$ and $P_{\\cal{B}}$.\nThese operators project a given point onto each of the constraint sets $\\cal{A}$ and $\\cal{B}$, respectively.\n\n\nWe will focus on four methods: the {\\em Method of Alternating Projections (MAP), Dykstra's, Douglas--Rachford (DR)} and {\\em Arag\\'on Artacho--Campoy (AAC) algorithms}. These algorithms have previously been described in \\cite{BausBuraKaya2019}, but we provide them in \\ref{sec:alg} for the reader's convenience. In what follows we provide brief settings and descriptions of these algorithms. Then in Section \\ref{sec:num}, we will numerically test them.\n\n\\subsection{MAP}\\label{ssubsec:MAP}\nThis method and the pertaining algorithm are not given in \\cite{BausBuraKaya2019} since the method coincides with Dykstra's algorithm (yielding identical iterations) for the case of the double integrator. Let $a_0=z$. Given $a_n$ where $n\\geq0$, update\n\\begin{equation}\n b_n:=P_{\\cal{B}}(a_n),\n \\;\\;\n \\text{and}\n \\;\\;\n a_{n+1}:=P_{\\cal{A}}(b_n).\n\\end{equation}\nSee Algorithm \\ref{alg:map} in \\ref{sec:alg} for the numerical implementation.\n\n\\subsection{Dykstra's Algorithm}\\label{ssubsec:Dyk}\nNext we describe Dykstra's projection algorithm (see \\cite{BoyleDykstra}), which operates as follows: Set $a_0 := z,\\, p_0 := 0$ and $q_0 := 0$. Given $a_n,p_n,q_n$, where $n\\geq 0$, update\n\\begin{align*}\n&b_{n} := P_{\\cal{B}}(a_{n}+q_{n})\\,,\n\\ \\ \\ \\ \\ \\,\\,\nq_{n+1} := a_{n}+q_n-b_{n}\\,, \\\\\n&a_{n+1} := P_{\\cal{A}}(b_{n}+p_n)\\,,\n\\ \\ \\ \np_{n+1} := b_{n}+p_n-a_{n+1}\\,.\n\\end{align*}\nThe numerical implementation of this method can be found in Algorithm \\ref{alg:dyk} within \\ref{sec:alg}. 
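For concreteness, both updates take only a few lines of code once the two projectors are available as black boxes. In the sketch below (ours), \\texttt{proj\\_A} and \\texttt{proj\\_B} stand for implementations of $P_{\\cal A}$ and $P_{\\cal B}$ (for instance, Algorithm~\\ref{alg:projA} and pointwise clipping), and a fixed iteration count replaces a proper stopping test.
\\begin{verbatim}
import numpy as np

def map_iterations(z, proj_A, proj_B, n_iter=100):
    """Sketch of MAP: alternate the two projectors, starting from z."""
    a = z
    for _ in range(n_iter):
        b = proj_B(a)
        a = proj_A(b)
    return a

def dykstra(z, proj_A, proj_B, n_iter=100):
    """Sketch of Dykstra's algorithm with increments p, q as above."""
    a, p, q = z, np.zeros_like(z), np.zeros_like(z)
    for _ in range(n_iter):
        b = proj_B(a + q)
        q = a + q - b
        a_next = proj_A(b + p)
        p = b + p - a_next
        a = a_next
    return b
\\end{verbatim}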
It is known that both $(a_n)_{n\\in\\mathbb{N}}$ and $(b_n)_{n\\in\\mathbb{N}}$ converge \\emph{strongly} (i.e., in norm) to $P_{\\cal{A}\\cap \\cal{B}}(z)$.\n\n\\begin{fact}\nWhen $\\cal{A}$ is affine, updates to $p_n$ have no impact on the iterates, so the algorithm can be posed as in \\cite{BausBuraKaya2019}.\n\\end{fact}\n\n\n\\subsection{DR Algorithm}\\label{ssubsec:DR}\nGiven $\\beta>0$, we specialize the DR algorithm (see\n\\cite{DougRach}, \\cite{LM} and \\cite{EckBer}) \nto the case of minimizing the sum of the two functions $f(x):=\\iota_{\\cal B}(x) +\n\\tfrac{\\beta}{2}\\|x-z\\|^2$ and $g :=\n\\iota_{\\cal A}$, where $\\iota_{\\cal A}$ is the indicator function of ${\\cal A}$. For this case, the DR operator becomes\n\\[ \nT := \\ensuremath{\\operatorname{Id}} - {\\rm Prox}_f + {\\rm Prox}_g(2 {\\rm Prox}_f-\\ensuremath{\\operatorname{Id}}). \n\\]\nGiven $f$ and $g$, we know that the respective proximal mappings are \n$ {\\rm Prox}_f(x) = P_{\\cal{B}}\\big(\\tfrac{1}{1+\\beta}x+\\tfrac{\\beta}{1+\\beta}z\\big)$ and\n$ {\\rm Prox}_g =P_{\\cal{A}}$ (see \\cite[Proposition~24.8(i)]{BauCombettes}).\nSet $\\lambda := \\tfrac{1}{1+\\beta}\\in\\left]0,1\\right[$. \nIt follows that the DR operator becomes\n\\begin{align}\nTx &= x-P_{\\cal{B}}\\big(\\lambda x+(1-\\lambda)z\\big)+P_{\\cal{A}}\\Big(2P_{\\cal{B}}\\big(\\lambda\nx+(1-\\lambda)z\\big)-x\\Big).\n\\end{align}\nNow let $x_0\\in X$ and given $x_n\\in X$, where $n\\geq 0$, update\n\\begin{equation}\n\\label{e:180304a}\nb_n:= P_{\\cal{B}}\\big(\\lambda x_n+(1-\\lambda)z\\big),\\;\\;\nx_{n+1} := Tx_n \n= x_n-b_n\n+P_{\\cal{A}}\\big(2b_n-x_n\\big).\n\\end{equation}\nThen it is known (see \\cite{Svaiter} or \\cite{BauMoursi}) that\n$(b_n)_{n\\in\\mathbb{N}}$ converges weakly to $P_{\\cal{A}\\cap \\cal{B}}(z)$.\nNote that \\eqref{e:180304a} simplifies to \n\\begin{equation}\\label{eq:DR}\nx_{n+1} := x_n - P_{\\cal{B}}(\\lambda x_n)+P_{\\cal{A}}\\big(2P_{\\cal{B}}(\\lambda x_n)-x_n\\big)\n\\quad\\text{provided that $z=0$.}\n\\end{equation}\nSee Algorithm \\ref{alg:DR} in \\ref{sec:alg} for the numerical implementation. \n\nRobustness of the DR algorithm is supported by the fact that many inexact versions of it are shown to converge as well; see \\cite{Svaiter2019,AlvEckGerMel2020}. In \\cite{AlvGer2019} we see a study of the complexity of an inexact version of the algorithm. This justifies the use of discrete approximations of the function iterates in our implementation.\n
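A corresponding sketch of the DR iteration \\eqref{e:180304a} (ours, with the same conventions as the MAP and Dykstra sketches above) reads as follows; note that the returned shadow sequence $(b_n)$, rather than the governing sequence $(x_n)$, is the one that converges to the projection.
\\begin{verbatim}
def douglas_rachford(z, proj_A, proj_B, lam, n_iter=100):
    """Sketch of the DR iteration with lam = 1/(1+beta) in ]0,1[."""
    x = z.copy()
    for _ in range(n_iter):
        b = proj_B(lam * x + (1 - lam) * z)   # shadow iterate b_n
        x = x - b + proj_A(2 * b - x)         # governing iterate
    return b   # (b_n) converges weakly to the projection of z
\\end{verbatim}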
\subsection{AAC Algorithm}\label{ssubsec:AAC}

The AAC Algorithm was relatively recently presented in \cite{AAC}; see also \cite{AlwBauMouWan2018, AraCam2018}. Given two fixed parameters $\alpha$ and $\beta$ in $\left]0,1\right[$, define 
\begin{align}
Tx &= (1-\alpha)x +
\alpha\Bigg(2\beta\bigg(P_{\cal{A}}\Big(2\beta\big(P_{\cal{B}}(x+z)-z\big)-x+z\Big)-z\bigg)+x+2\beta\big(z-P_{\cal{B}}(x+z)\big)\Bigg)\notag\\
&=x +
2\alpha\beta\Bigg(P_{\cal{A}}\Big(2\beta\big(P_{\cal{B}}(x+z)-z\big)-x+z\Big)-P_{\cal{B}}(x+z) \Bigg)\,.
\end{align}
Now let $x_0\in X$ and, given $x_n\in X$, where $n\geq 0$, update
\begin{equation}
b_{n} := P_{\cal{B}}(x_n+z),
\end{equation}
and 
\begin{equation}
\label{e:180302a}
x_{n+1} := Tx_n = x_n +
2\alpha\beta\bigg(P_{\cal{A}}\Big(2\beta\big(b_{n}-z\big)-x_n+z\Big)-b_{n}\bigg).
\end{equation}
By \cite[Theorem~4.1(iii)]{AAC},
the sequence $(b_n)_{n\in\mathbb{N}}$ converges strongly to
$P_{\cal{A}\cap \cal{B}}(z)$
provided that\footnote{It appears that this constraint qualification is not easy
to check in our setting.} $z-P_{\cal{A}\cap \cal{B}}(z)\in (N_{\cal{A}}+N_{\cal{B}})(P_{\cal{A}\cap \cal{B}}(z))$.
Note that \eqref{e:180302a} simplifies to 
\begin{equation}\label{eq:AAC}
x_{n+1} := Tx_n = x_n +
2\alpha\beta\Big(P_{\cal{A}}\big(2\beta P_{\cal{B}}x_n-x_n\big)-P_{\cal{B}}x_n\Big)
\quad\text{provided that $z=0$\,.}
\end{equation}
In \ref{sec:alg}, Algorithm \ref{alg:AAC} contains the numerical implementation of this method.

\begin{remark}\label{rem:DR-AAC}\rm
From \eqref{eq:DR} and \eqref{eq:AAC} we see that when the parameter $\lambda$ in DR takes the limit value $\lambda=1$, while the parameter $\alpha$ in AAC takes the value $\alpha=1/2$ and the parameter $\beta$ in AAC takes the limit value $\beta=1$, both methods produce the same iterates.
This could explain the similarity between these two methods observed in the numerical experiments. Choices of the parameters far from those mentioned here may give us some idea of the ``distance'' between these two methods.
\end{remark}


\section{Numerical Approach}
\label{sec:num}

In Subsection \ref{subsec:proj_2var} we presented a selection of problems for which we derived analytical expressions for the projection onto $\cal{A}$. In practice, however, the state transition matrix may be too difficult (if not impossible) to find analytically, in which case one needs to employ a numerical technique, as outlined below.

\subsection{Backdrop and algorithm}
From Equation \eqref{eqn:gen_u} we can see that, in order to define the projection, we must find $\lambda$. In Theorem \ref{thm:projA_gen} we assumed we could find $\Phi_{A}(t_0,t)$, which, as we can see from \eqref{eqn:lambda_gen}, is necessary to find $\lambda$. In the case where we cannot find the state transition matrix directly to substitute into \eqref{eqn:lambda_gen}, we must solve 
\begin{align}\label{eqn:lin_sys}
\begin{bmatrix}\dot{x}(t) \\ \dot{\lambda}(t)\end{bmatrix} = \begin{bmatrix}A(t) & -B(t)B^T(t) \\ 0_{n\times n} & -A^T(t)\end{bmatrix}\begin{bmatrix}x(t) \\ \lambda(t)\end{bmatrix} + \begin{bmatrix}B(t) \\ 0_{n\times m}\end{bmatrix}u^-(t)\,,
\end{align}
for all $t\in[t_0,t_f]$, with the boundary conditions $x(t_0)=x_0$ and $x(t_f)=x_f$, to find $\lambda$.
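For illustration, a minimal Python sketch of this forward integration is given below, using SciPy's \texttt{solve\_ivp} in place of {\sf ode45}. It assumes time-invariant matrices $A$ and $B$ and a current iterate \texttt{u\_minus} discretized on a grid \texttt{t\_grid} (hypothetical names); it returns $z(t_f,\lambda_0):=x(t_f)$ for a given initial costate $\lambda_0$, and is reused in the sketches that follow.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def propagate(A, B, t_grid, u_minus, x0, lam0):
    # Integrate (eqn:lin_sys) forward from x(t_0) = x0, lambda(t_0) = lam0;
    # returns x(t_f), i.e. z(t_f, lam0).
    n, m = A.shape[0], B.shape[1]
    def rhs(t, y):
        x, lam = y[:n], y[n:]
        # interpolate the discretized control u^-(t) componentwise
        u = np.array([np.interp(t, t_grid, u_minus[:, j]) for j in range(m)])
        dx = A @ x - B @ (B.T @ lam) + B @ u
        dlam = -A.T @ lam
        return np.concatenate([dx, dlam])
    y0 = np.concatenate([x0, lam0])
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), y0, rtol=1e-8, atol=1e-8)
    return sol.y[:n, -1]
\end{verbatim}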
Throughout the steps of Algorithm~\ref{alg:projA}, we will solve the linear system \eqref{eqn:lin_sys} with different ICs.
The ICs that we will consider are
\begin{align}\label{eqn:IC}
\mbox{(i)} \begin{bmatrix}x(t_0) \\ \lambda(t_0)\end{bmatrix} = \begin{bmatrix}x_0
\\ 0 \end{bmatrix}, \ \
\mbox{(ii)} \begin{bmatrix}x(t_0) \\ \lambda(t_0)\end{bmatrix} = \begin{bmatrix}x_0 \\ e_i \end{bmatrix}, \ \
\mbox{(iii)}\begin{bmatrix}x(t_0) \\ \lambda(t_0)\end{bmatrix} = \begin{bmatrix}x_0 \\ \lambda_0 \end{bmatrix}.
\end{align}


As in the proof of Theorem \ref{thm:projA_gen}, we define $z(t,\lambda_0):=x(t)$. Recall that, in this case, $\dot{x}(t) = dx(t)/dt$ can be written as $\partial z(t,\lambda_0)/\partial t$. 
We also recall that the near-miss function $\varphi:\mathbb{R}^n\rightarrow\mathbb{R}^n$, as defined in \eqref{eqn:nearmiss}, is affine, as shown in the proof of Theorem~\ref{thm:projA_gen}. Then the Taylor series expansion of $\varphi$ about zero is simply
\begin{align*}
\varphi(\lambda_0) = \varphi(0)+J_{\varphi}(0)\lambda_0\,.
\end{align*}
Substituting \eqref{eqn:nearmiss}, one gets
\[
z(t_f,\lambda_0) = z(t_f,0)+J_{\varphi}(0)\lambda_0\,,
\]
and, rearranging,
\[
J_{\varphi}(0)\lambda_0 = z(t_f,\lambda_0)-z(t_f,0)\,.
\]
Suppose $\lambda_0=e_i$. Then
\begin{align}\label{eqn:J_comp}
J_{\varphi}(0)e_i= z(t_f,e_i)-z(t_f,0)\,,
\end{align}
which is the $i$th column of $J_{\varphi}(0)$. Therefore,
by finding $z(t_f,0)$ and $z(t_f,e_i)$ for every $i = 1,\ldots,n$, we can build the Jacobian $J_\varphi(0)$. Consequently, a procedure for constructing $J_\varphi(0)$ can be prescribed as follows (see also the sketch after this list).
\begin{enumerate}
 \item For each $i = 1,\ldots,n$, solve \eqref{eqn:lin_sys} with ICs (ii) in~\eqref{eqn:IC} to get $z(t_f,e_i)$.
 \item Solve \eqref{eqn:lin_sys} with ICs (i) in~\eqref{eqn:IC} to get $z(t_f,0)$.
 \item Compute the $i$th column of $J_\varphi(0)$ using \eqref{eqn:J_comp}, for $i = 1,\ldots,n$, and assemble $J_\varphi(0)$.
\end{enumerate}
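A minimal sketch of this procedure, reusing the \texttt{propagate} routine from the previous sketch (and sharing its assumptions), is as follows; the final two lines anticipate the linear solve of \eqref{eqn:lineq_lambda0} discussed next.

\begin{verbatim}
import numpy as np

def jacobian_near_miss(A, B, t_grid, u_minus, x0, x_f):
    # Build J_phi(0) column by column via (eqn:J_comp).
    n = A.shape[0]
    z0 = propagate(A, B, t_grid, u_minus, x0, np.zeros(n))  # ICs (i)
    J = np.empty((n, n))
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        z_i = propagate(A, B, t_grid, u_minus, x0, e_i)     # ICs (ii)
        J[:, i] = z_i - z0                                  # i-th column
    phi0 = z0 - x_f                                         # near-miss at 0
    lam0 = np.linalg.solve(J, -phi0)    # solves J_phi(0) lam0 = -phi(0)
    return J, lam0
\end{verbatim}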
We can now solve \eqref{eqn:lineq_lambda0} for $\lambda_0$ since, in the above procedure for finding $J_\varphi(0)$, we have computed all the other components of this equation. Then, once we have $\lambda_0$, we can solve \eqref{eqn:lin_sys} with ICs (iii) in~\eqref{eqn:IC} to obtain $\lambda$.

The algorithm below describes the steps for computing the projection of a current iterate $u^-$ onto the constraint set $\cal{A}$. In solving \eqref{eqn:lin_sys} with each of the ICs in \eqref{eqn:IC} we implement {\sc Matlab}'s numerical ODE solver {\sf ode45} or a direct implementation of some Runge--Kutta method such as the Euler method.\\

\begin{algorithm}{({\bf Numerical Computation of the Projector onto ${\cal A}$})} \label{alg:projA}\
\begin{description}
\item[Step 0] ({\em Initialization}) The following are given: the current iterate $u^-$, the system and control matrices $A(t)$ and $B(t)$, the numbers of state and control variables $n$ and $m$, and the initial and terminal states $x_0$ and $x_f$, respectively.
\item[Step 1] ({\em Near-miss function}) Solve \eqref{eqn:lin_sys} with ICs in \eqref{eqn:IC}(i) to find $z(t_f,0) := x(t_f)$. \\ Set $\varphi(0) := z(t_f,0)-x_f$.
\item[Step 2] ({\em Jacobian}) For $i = 1,\ldots,n$, solve \eqref{eqn:lin_sys} with ICs in \eqref{eqn:IC}(ii) to get $z(t_f,e_i)$. \\ 
Set $\beta_i := z(t_f,e_i) - z(t_f,0)$ and $J_\varphi(0) := \left[\beta_1\ |\ \dots\ |\ \beta_n \right]$.
\item[Step 3] ({\em Missing IC}) Solve $J_{\varphi}(0)\,\lambda_0 = -\varphi(0)$ for $\lambda_0$. 
\item[Step 4] ({\em Projector onto ${\cal A}$}) Solve \eqref{eqn:lin_sys} with ICs in \eqref{eqn:IC}(iii) to find $\lambda(t)$. \\ 
Set $P_{\cal{A}}(u^-)(t) := u^-(t)-B^T(t)\lambda(t)$.
\end{description}
\end{algorithm}

\subsection{Experiments}
For the computations in this section we use {\sc Matlab} release R2021b for the projection methods and error analysis. We also use the AMPL--Ipopt computational suite \cite{AMPL,WacBie2006} for comparison with the projection methods, since it is commonly used for solving similar optimal control problems. In all the projection method results, the relative stopping tolerance for each method is set to $\varepsilon=10^{-6}$ and the number of discretization points is $N=10^3$, unless otherwise specified. For AMPL--Ipopt we set {\sf tol} to $10^{-6}$.

In Figure \ref{fig:plots} we present the pure and under-damped oscillator solution plots for the constrained control where $\omega_0=5$. The boundary conditions are $x_2(0) = 1$ and $x_1(0) = x_1(2\pi) = x_2(2\pi) = 0$. The bound on $u$ for the under-damped case is much smaller than the value used in the pure case to ensure that the control constraint is active.

\begin{figure}[t!]
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=7cm]{w0=5_zeta=0_u.png}
 \includegraphics[width=7cm]{w0=5_zeta=0.5_u.png}
 \caption{\footnotesize Control $u(t)$.}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=7cm]{w0=5_zeta=0_x.png}
 \includegraphics[width=7cm]{w0=5_zeta=0.5_x.png}
 \caption{\footnotesize States $x_1(t),x_2(t)$.}
\end{subfigure}
\caption{Top plots for $\omega_0=5,\,\zeta=0$, where $|u(t)|\leq 0.259$. Bottom plots for $\omega_0=5,\,\zeta=0.5$, where $|u(t)|\leq 9.34\times10^{-7}$.}
\label{fig:plots}
\end{figure}



\subsubsection{Parameter plots}

In Section \ref{sec:projAlg}, both DR and AAC require some parameters to be chosen: $\lambda\in\left]0,1\right[$ for DR and $\alpha,\beta\in\left]0,1\right[$ for AAC. From the theory, $\alpha<1$ is required, but in most of the examples in this study and in \cite{BausBuraKaya2019} $\alpha=1$ causes no problems, so we will allow this choice. In fact, from \cite{BausBuraKaya2019}, $\alpha=1$ is the optimal choice. In Figures \ref{fig:PHO}--\ref{fig:MTM} we experiment with different parameter choices for DR and AAC for the pure harmonic oscillator, the under-damped harmonic oscillator and the machine tool manipulator.

In Figures \ref{fig:PHO}--\ref{fig:MTM} we see, for different bounds on $u$, plots of the number of iterations taken by DR and AAC for different values of the parameters $\lambda$ and $\beta$. We chose to exclude an analysis of the $\alpha$ values for AAC, as was also done in \cite{BausBuraKaya2019}, since the impact on the number of iterations when altering $\alpha$ is not as significant as when we vary $\beta$. In the experiments using AAC we use a fixed value for $\alpha$, chosen from a similar analysis of the number of iterations. In each of these plots, five values for the bound on $u$ were taken, the smallest bound being close to the value that leads to a problem with no solution and the largest resulting in an unconstrained $u$.
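Curves of this kind can be generated by a simple parameter sweep, sketched below in Python for the DR case. Here \texttt{count\_dr\_iterations} is a hypothetical variant of the DR routine given earlier that returns the number of iterations to convergence, \texttt{proj\_A} and \texttt{z} are assumed given, and $P_{\cal B}$ is the pointwise clipping of the control to the box $|u(t)|\le a$.

\begin{verbatim}
import numpy as np

def parameter_sweep(count_dr_iterations, proj_A, z, bounds_on_u):
    # One iteration-count curve per bound a, as in the figures.
    lam_grid = np.linspace(0.01, 0.99, 99)
    curves = {}
    for a in bounds_on_u:
        proj_B = lambda u, a=a: np.clip(u, -a, a)  # P_B: clip |u(t)| <= a
        curves[a] = [count_dr_iterations(proj_A, proj_B, z, lam)
                     for lam in lam_grid]
    return lam_grid, curves
\end{verbatim}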
These plots give information on the ``best'' value of the parameters to choose, that is, the value that produces the smallest number of iterations. This is advantageous because a reduction in the number of iterations will result in a reduction in run time.

Figure \ref{fig:PHO} contains the experiments for the pure harmonic oscillator with $\omega_0=1$ and $5$. Comparing the top and bottom rows of Figure \ref{fig:PHO}, we see that varying $\omega_0$ does not have an impact on the best parameter choices, at least not when $\omega_0=1$ or $\omega_0=5$. For example, when $|u(t)|\leq0.259$, the ``best'' value for $\lambda$ when using DR would be approximately $0.7$ for both $\omega_0$ values, as shown on the left of the figure. In the AAC curves, and in some of the DR cases, we observe downward spikes at some parameter values that result in a large decrease in iterations. Although these spikes achieve the smallest number of iterations, we would not necessarily select these values in practice, because a slight shift from these parameter values would result in a large increase in the number of iterations. In the case of AAC where $|u(t)|\leq0.252$, the smallest number of iterations is achieved at the blue spike where $\beta\approx0.85$, but we would instead choose $\beta\approx0.95$, since that choice appears more stable.

\begin{figure}[t!]
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=6.5cm]{param_vs_iter_w0=1_zeta=0_DR.png}
 \includegraphics[width=6.5cm]{param_vs_iter_w0=5_zeta=0_DR.png}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=6.5cm]{param_vs_iter_w0=1_zeta=0_AAC_alpha=1.png}
 \includegraphics[width=6.5cm]{param_vs_iter_w0=5_zeta=0_AAC_alpha=1.png}
\end{subfigure}
\caption{Left: DR; right: AAC with $\alpha=1$; top: $\omega_0=1,\,\zeta=0$; bottom: $\omega_0=5,\,\zeta=0$.}
\label{fig:PHO}
\end{figure}


In Figure \ref{fig:PDHO} we have the parameter graphs for the under-damped harmonic oscillator with $\zeta=0.5$ and $\omega_0=1,3$ and $5$. For this damped problem, the larger the value of $\omega_0$, the closer the optimal $u$ gets to zero. Hence the values of $a$ are chosen to be much smaller for $\omega_0=3,5$, so that the control constraint remains active, though again the largest value of $a$, given by the green curve, is the case of unconstrained $u$. The behaviours for $\omega_0=1,3$ are very similar, with both having some spikes present, the optimal $\lambda$ value for the unconstrained $u$ case being almost 1, and bumps present in the DR curves. However, for $\omega_0=5$ we see some odd behaviour for the three largest values of $a$. When using DR, the green curve, i.e., the case where $u$ is unconstrained, shows a great increase in the number of iterations at $\lambda\approx0.98$. We also observe for the yellow and purple DR curves that the number of iterations seems to plateau before rapidly increasing, which is something we do not observe in the other cases. These anomalies could potentially be numerical artifacts, but further investigation is needed to draw a conclusion.

For the machine tool manipulator in Figure \ref{fig:MTM}, we again see a lot of similarities to the figures from the other problems. However, for the machine tool manipulator we see many small fluctuations in the curves of both methods, and for the blue curves there are almost no parameter choices where the methods converge in fewer than 300 iterations.
The blue curves represent the case where $a$ is so small that the problem is nearly infeasible.

In general, we observe some similarities across all the problems in Figures \ref{fig:PHO}--\ref{fig:MTM}. For all the problems, the ``best'' parameter choices are $\lambda\geq0.5$, $\beta<1$ and $0.5\leq\alpha\leq1$. Despite the theory requiring $\alpha\in\left]0,1\right[$ for AAC, we observe that for most of the problems not only is $\alpha=1$ a viable choice, but in many cases it is the optimal choice. As we tighten the bounds on $u$, the choice of parameters becomes more difficult. For example, when $\zeta=0.5$, $\omega_0=5$ (bottom right of Figure \ref{fig:PDHO}) and $a=1.46\times10^{-6}$, AAC with $\alpha=0.8$ converges in fewer than 300 iterations for almost every $\beta\in\left]0,1\right[$, while when $a=7.59\times10^{-7}$, AAC only converges for a small interval of values where $\beta\approx0.95$. Not only does the number of possible parameter choices shrink as the bounds on $u$ tighten, but the parameter values also shift closer and closer to 1. This can also be seen in the same example, since the optimal parameter choice for $a=1.46\times10^{-6}$ is $\beta\approx0.6$, while when $a=7.59\times10^{-7}$ we should choose $\beta\approx0.95$. As we approach the critical value of $a$ where the problem has no solutions, the best choices of parameter values, for both DR and AAC, approach 1.

\begin{figure}[t!]
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=6.5cm]{param_vs_iter_w0=1_zeta=0.5_DR.png}
 \includegraphics[width=6.5cm]{param_vs_iter_w0=3_zeta=0.5_DR.png}
 \includegraphics[width=6.5cm]{param_vs_iter_w0=5_zeta=0.5_DR.png}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=6.5cm]{param_vs_iter_w0=1_zeta=0.5_AAC_alpha=1.png}
 \includegraphics[width=6.5cm]{param_vs_iter_w0=3_zeta=0.5_AAC_alpha=1.png}
 \includegraphics[width=6.5cm]{param_vs_iter_w0=5_zeta=0.5_AAC_alpha=0.8.png}
\end{subfigure}
\caption{Left: DR; right: AAC; top: $\omega_0=1,\,\zeta=0.5$ with $\alpha=1$; middle: $\omega_0=3,\,\zeta=0.5$ with $\alpha=1$; bottom: $\omega_0=5,\,\zeta=0.5$ with $\alpha=0.8$.}
\label{fig:PDHO}
\end{figure}

\begin{figure}[t!]
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=6.5cm]{param_vs_iter_MTM_DR.png}
 \caption{\footnotesize DR.}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
 \centering
 \includegraphics[width=6.5cm]{param_vs_iter_MTM_AAC_alpha=0.9.png}
 \caption{\footnotesize AAC with $\alpha=0.9$.}
\end{subfigure}
\caption{Machine tool manipulator.}
\label{fig:MTM}
\end{figure}




\subsubsection{Error and CPU time comparisons}

In Tables \ref{tbl:erroru}--\ref{tbl:errorx} we display the errors for the discretizations $N=10^3,\,10^4$ and $10^5$ for all the previously mentioned problems, with the specifications given in Table \ref{tbl:probs}. Since we cannot find analytical solutions for these problems, the ``true'' solution we compare against in this error analysis was computed using the projection algorithms with discretization $N=10^7$ and relative tolerance $10^{-12}$.

We are mostly interested in the errors in the control variable, since that is the variable being optimized. The states are computed as an auxiliary process using the optimal control and Euler's method.
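For completeness, a minimal sketch of this auxiliary computation is given below, assuming time-invariant $A$ and $B$ and the plant dynamics $\dot{x}=Ax+Bu$; \texttt{u\_opt} is the computed optimal control on the grid \texttt{t\_grid} (hypothetical names).

\begin{verbatim}
import numpy as np

def states_from_control(A, B, t_grid, u_opt, x0):
    # Forward Euler reconstruction of the states from the optimal control.
    N, n = len(t_grid), A.shape[0]
    x = np.empty((N, n))
    x[0] = x0
    for k in range(N - 1):
        h = t_grid[k + 1] - t_grid[k]
        x[k + 1] = x[k] + h * (A @ x[k] + B @ u_opt[k])  # xdot = A x + B u
    return x
\end{verbatim}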
In Table \ref{tbl:erroru} we see that, for the projection methods, an increase of one order of magnitude in the number of discretization points generally results in a decrease of one order of magnitude in the error. This is useful because, if a particular error is required, the number of discretization points needed to reach that accuracy can easily be determined. The only case where this seems to differ is the machine tool manipulator example, for all the projection methods excluding MAP. For example, with $N=10^3$ for Dykstra we see that the error is $4.7\times10^{-2}$, so, following the observed pattern, we would expect that when $N=10^5$ the error should be around $4.7\times10^{-4}$, but instead we have nearly double that value with $8.6\times10^{-4}$. A possible explanation may be that the expected error is not far enough from the tolerance of the ``true'' solution. Alternatively, since the machine tool manipulator is the only example that uses the numerical implementation rather than analytical expressions, more errors are introduced. Another possibility is that, since MAP has the simplest algorithm, the extra operations in the other projection algorithms are causing an accumulation of the errors from the numerical implementation. Whatever the reason, we see that the machine tool manipulator only produces larger errors for the case where $N=10^5$ and still performs well when $N=10^3$ and $10^4$.

We see that in virtually every case the errors produced by the projection methods are much smaller than those from Ipopt. In Table \ref{tbl:erroru}, the only example where Ipopt produces a smaller error is the machine tool manipulator example when $N=10^3$, but even then Ipopt is unsuccessful in finding a solution at all for larger values of $N$, so in general the projection methods are more reliable for this problem. We also do not observe for Ipopt the relationship between the number of discretization points and the resulting errors that we see with the projection methods.

In Table \ref{tbl:errorx} we observe that, for the states, we have the same relationship between the number of discretization points and the error for the projection algorithms, though the errors are not exactly one order apart. This is seen for $\omega_0=5,\,\zeta=0$ using MAP: when $N=10^3$ the error is $3.5\times10^{-1}$, while when $N=10^4$ the error is $2.6\times10^{-2}$. In the states, the errors for the projection methods are much closer to those of Ipopt, with Ipopt performing better in some of the cases. These differences in the performance of the projection methods in the state variables could be due to the extra errors introduced by Euler's method in the computation of the states.
If we were to use a more accurate method to compute the states, it is possible we would have fewer discrepancies, though implementing a more complicated method would increase the run time.


\begin{table}[t!]
 \centering
 \begin{tabular}{ScScScScSc}
 $(\omega_0,\zeta)$ & $a$ & $\lambda$ & $\alpha$ & $\beta$ \\ \hline \hline
 $(1,0)$ & $0.259$ & $0.75$ & $1$ & $0.85$ \\
 $(5,0)$ & $0.259$ & $0.75$ & $1$ & $0.85$ \\
 $(1,0.5)$ & $4.96\times10^{-2}$ & $0.65$ & $1$ & $0.85$ \\
 $(5,0.5)$ & $9.34\times10^{-7}$ & $0.6$ & $0.8$ & $0.7$ \\
 (MTM) & $2000$ & $0.55$ & $0.9$ & $0.65$
 \end{tabular}
 \caption{Parameter choices and bounds on the control variable for the numerical experiments.}
 \label{tbl:probs}
\end{table}

\begin{table}[H]
 \centering
 \begin{tabular}{ScScllllll}
 $N$ & $(\omega_0,\zeta)$ & \multicolumn{1}{c}{MAP} & \multicolumn{1}{c}{Dykstra} & \multicolumn{1}{c}{DR} & \multicolumn{1}{c}{AAC} & \multicolumn{1}{c}{Ipopt} \\ \hline\hline
 & $(1,0)$ & $2.4\times10^{-2}$ & $1.9\times10^{-2}$ & $1.6\times10^{-2}$ & $1.6\times10^{-2}$ & $1.7\times10^{-1}$ \\
 & $(5,0)$ & $7.4\times10^{-2}$ & $6.9\times10^{-2}$ & $6.8\times10^{-2}$ & $6.9\times10^{-2}$ & \multicolumn{1}{c}{$-$} \\
 $10^3$ & $(1,0.5)$ & $1.5\times10^{-2}$ & $7.2\times10^{-2}$ & $4.3\times10^{-2}$ & $2.9\times10^{-2}$ & $1.1\times10^{-1}$ \\
 & $(5,0.5)$ & $5.6\times10^{-2}$ & $1.6\times10^{-1}$ & $1.2\times10^{-1}$ & $1.2\times10^{-1}$ & $1.6\times10^{0}$ \\
 & (MTM) & $4.7\times10^{-2}$& $4.7\times10^{-2}$ & $4.7\times10^{-2}$ & $4.7\times10^{-2}$ & $2.2\times10^{-2}$ \\
 \hline
 & $(1,0)$ & $2.4\times10^{-3}$ & $1.9\times10^{-3}$ & $1.6\times10^{-3}$ & $1.6\times10^{-3}$ & $5.3\times10^{-2}$ \\
 & $(5,0)$ & $7.4\times10^{-3}$ & $7.0\times10^{-3}$ & $6.8\times10^{-3}$ & $6.8\times10^{-3}$ & $4.6\times10^{-1}$ \\
 $10^4$ & $(1,0.5)$ & $1.5\times10^{-3}$ & $7.0\times10^{-3}$ & $4.3\times10^{-3}$ & $2.9\times10^{-3}$ & $2.1\times10^{-1}$ \\
 & $(5,0.5)$ & $5.7\times10^{-3}$ & $1.6\times10^{-2}$ & $1.2\times10^{-2}$ & $1.2\times10^{-2}$ & $9.4\times10^{-1}$ \\
 & (MTM) & $4.6\times10^{-3}$ & $4.7\times10^{-3}$ & $4.7\times10^{-3}$ & $4.7\times10^{-3}$ & \multicolumn{1}{c}{$-$} \\
 \hline
 & $(1,0)$ & $2.4\times10^{-4}$ & $1.9\times10^{-4}$ & $1.5\times10^{-4}$ & $1.6\times10^{-4}$ & $2.6\times10^{-1}$ \\
 & $(5,0)$ & $7.3\times10^{-4}$ & $6.9\times10^{-4}$ & $6.7\times10^{-4}$ & $6.8\times10^{-4}$ & $2.9\times10^{-1}$ \\
 $10^5$ & $(1,0.5)$ & $1.5\times10^{-4}$ & $6.9\times10^{-4}$ & $4.2\times10^{-4}$ & $2.9\times10^{-4}$ & $2.9\times10^{-1}$ \\
 & $(5,0.5)$ & $5.7\times10^{-4}$ & $1.6\times10^{-3}$ & $1.2\times10^{-3}$ & $1.2\times10^{-3}$ & $9.6\times10^{-1}$ \\
 & (MTM) & $4.6\times10^{-4}$ & $8.6\times10^{-4}$ & $7.8\times10^{-4}$ & $7.0\times10^{-4}$ & \multicolumn{1}{c}{$-$} \\
 \hline
 \end{tabular}
 \caption{$L^\infty$ error in the control for MAP, Dykstra, DR, AAC and Ipopt, with relative tolerance $\varepsilon = 10^{-6}$ and using the specifications from Table \ref{tbl:probs}.
A dash indicates a method was unsuccessful in obtaining a solution.}
 \label{tbl:erroru}
\end{table}

\begin{table}[H]
 \centering
 \begin{tabular}{ScScllllll}
 $N$ & $(\omega_0,\zeta)$ & \multicolumn{1}{c}{MAP} & \multicolumn{1}{c}{Dykstra} & \multicolumn{1}{c}{DR} & \multicolumn{1}{c}{AAC} & \multicolumn{1}{c}{Ipopt} \\ \hline\hline
 & $(1,0)$ & $1.2\times10^{-2}$ & $1.3\times10^{-2}$ & $1.2\times10^{-2}$ & $1.2\times10^{-2}$ & $1.1\times10^{-2}$ \\
 & $(5,0)$ & $3.5\times10^{-1}$ & $3.5\times10^{-1}$ & $3.5\times10^{-1}$ & $3.5\times10^{-1}$ & \multicolumn{1}{c}{$-$} \\
 $10^3$ & $(1,0.5)$ & $4.4\times10^{-3}$ & $4.3\times10^{-3}$ & $4.3\times10^{-3}$ & $4.4\times10^{-3}$ & $5.0\times10^{-3}$ \\
 & $(5,0.5)$ & $1.5\times10^{-2}$ & $1.5\times10^{-2}$ & $1.5\times10^{-2}$ & $1.5\times10^{-2}$ & $1.5\times10^{-2}$ \\
 & (MTM) & $1.4\times10^{-2}$ & $1.4\times10^{-2}$ & $1.4\times10^{-2}$ & $1.4\times10^{-2}$ & $1.5\times10^{-2}$ \\
 \hline
 & $(1,0)$ & $1.2\times10^{-3}$ & $1.3\times10^{-3}$ & $1.2\times10^{-3}$ & $1.2\times10^{-3}$ & $2.6\times10^{-3}$ \\
 & $(5,0)$ & $2.6\times10^{-2}$ & $2.6\times10^{-2}$ & $2.6\times10^{-2}$ & $2.6\times10^{-2}$ & $7.4\times10^{-3}$ \\
 $10^4$ & $(1,0.5)$ & $4.4\times10^{-4}$ & $4.3\times10^{-4}$ & $4.3\times10^{-4}$ & $4.3\times10^{-4}$ & $9.5\times10^{-3}$ \\
 & $(5,0.5)$ & $1.5\times10^{-3}$ & $1.5\times10^{-3}$ & $1.5\times10^{-3}$ & $1.5\times10^{-3}$ & $1.5\times10^{-3}$ \\
 & (MTM) & $1.5\times10^{-3}$ & $1.5\times10^{-3}$ & $1.5\times10^{-3}$ & $1.5\times10^{-3}$ & \multicolumn{1}{c}{$-$} \\
 \hline
 & $(1,0)$ & $1.2\times10^{-4}$ & $1.3\times10^{-4}$ & $1.2\times10^{-4}$ & $1.2\times10^{-4}$ & $1.0\times10^{-2}$ \\
 & $(5,0)$ & $2.5\times10^{-3}$ & $2.5\times10^{-3}$ & $2.5\times10^{-3}$ & $2.5\times10^{-3}$ & $3.1\times10^{-3}$ \\
 $10^5$ & $(1,0.5)$ & $4.3\times10^{-5}$ & $4.3\times10^{-5}$ & $4.3\times10^{-5}$ & $4.3\times10^{-5}$ & $1.6\times10^{-2}$ \\
 & $(5,0.5)$ & $1.5\times10^{-4}$ & $1.5\times10^{-4}$ & $1.5\times10^{-4}$ & $1.5\times10^{-4}$ & $1.5\times10^{-4}$ \\
 & (MTM) & $1.5\times10^{-4}$ & $3.5\times10^{-4}$ & $3.1\times10^{-4}$ & $2.7\times10^{-4}$ & \multicolumn{1}{c}{$-$} \\
 \hline
 \end{tabular}
 \caption{$L^\infty$ error in the states for MAP, Dykstra, DR, AAC and Ipopt, with relative tolerance $\varepsilon = 10^{-6}$ and using the specifications from Table \ref{tbl:probs}. A dash indicates a method was unsuccessful in obtaining a solution.}
 \label{tbl:errorx}
\end{table}


In Table \ref{tbl:runtime} we present the CPU times accompanying the experiments from the previous tables. The CPU times recorded are averages over $1,000$ runs on a PC with an i5-10500T 2.30GHz processor and 8GB RAM. The majority of the projection methods are at least 10 times faster than Ipopt, with some cases of the projection methods being as much as 300 times faster. In general, for all four projection methods, an increase in the number of discretization points results in a proportional increase in run time.
We can use this observation to estimate the CPU time for any number of discretization points.


\begin{table}[t!]
 \centering
 \begin{tabular}{ScScllllll}
 $N$ & $(\omega_0,\zeta)$ & \multicolumn{1}{c}{MAP} & \multicolumn{1}{c}{Dykstra} & \multicolumn{1}{c}{DR} & \multicolumn{1}{c}{AAC} & \multicolumn{1}{c}{Ipopt} \\ \hline\hline
 & $(1,0)$ & $1.6\times10^{-2}$ & $1.5\times10^{-2}$ & $5.4\times10^{-3}$ & $3.5\times10^{-3}$ & $2.4\times10^{-1}$ \\
 & $(5,0)$ & $1.8\times10^{-2}$ & $1.6\times10^{-2}$ & $5.7\times10^{-3}$ & $3.7\times10^{-3}$ & \multicolumn{1}{c}{$-$} \\
 $10^3$ & $(1,0.5)$ & $1.2\times10^{-2}$ & $1.4\times10^{-2}$ & $5.5\times10^{-3}$ & $4.3\times10^{-3}$ & $2.0\times10^{-1}$ \\
 & $(5,0.5)$ & $4.7\times10^{-3}$ & $6.4\times10^{-3}$ & $3.8\times10^{-3}$ & $2.2\times10^{-3}$ & $1.8\times10^{-1}$ \\
 & (MTM) & $1.6\times10^{-1}$ & $1.6\times10^{-1}$ & $2.3\times10^{-1}$ & $1.4\times10^{-1}$ & $1.9\times10^{0}$ \\
 \hline
 & $(1,0)$ & $1.4\times10^{-1}$ & $1.5\times10^{-1}$ & $5.0\times10^{-2}$ & $3.2\times10^{-2}$ & $2.1\times10^{0}$ \\
 & $(5,0)$ & $1.6\times10^{-1}$ & $1.6\times10^{-1}$ & $5.4\times10^{-2}$ & $3.5\times10^{-2}$ & $1.3\times10^{-1}$ \\
 $10^4$ & $(1,0.5)$ & $1.2\times10^{-1}$ & $1.2\times10^{-1}$ & $4.7\times10^{-2}$ & $4.2\times10^{-2}$ & $7.6\times10^{-1}$ \\
 & $(5,0.5)$ & $4.8\times10^{-2}$ & $5.0\times10^{-2}$ & $3.6\times10^{-2}$ & $2.3\times10^{-2}$ & $1.3\times10^{0}$ \\
 & (MTM) & $1.7\times10^{0}$ & $3.1\times10^{0}$ & $2.5\times10^{0}$ & $2.5\times10^{0}$ & \multicolumn{1}{c}{$-$} \\
 \hline
 & $(1,0)$ & $1.2\times10^{0}$ & $1.3\times10^{0}$ & $6.6\times10^{-1}$ & $2.8\times10^{-1}$ & $8.5\times10^{1}$ \\
 & $(5,0)$ & $1.4\times10^{0}$ & $1.4\times10^{0}$ & $4.7\times10^{-1}$ & $3.1\times10^{-1}$ & $1.5\times10^{0}$ \\
 $10^5$ & $(1,0.5)$ & $1.0\times10^{0}$ & $1.1\times10^{0}$ & $4.1\times10^{-1}$ & $3.8\times10^{-1}$ & $7.1\times10^{0}$ \\
 & $(5,0.5)$ & $4.2\times10^{-1}$ & $4.3\times10^{-1}$ & $3.4\times10^{-1}$ & $2.1\times10^{-1}$ & $1.5\times10^{1}$ \\
 & (MTM) & $1.7\times10^{1}$ & $6.4\times10^{1}$ & $2.1\times10^{2}$ & $1.8\times10^{2}$ & \multicolumn{1}{c}{$-$} \\
 \hline
 \end{tabular}
 \caption{CPU times in seconds for MAP, Dykstra, DR, AAC and Ipopt, with relative tolerance $\varepsilon = 10^{-6}$ and using the specifications from Table \ref{tbl:probs}. A dash indicates a method was unsuccessful in obtaining a solution.}
 \label{tbl:runtime}
\end{table}

\section{Conclusion and Open Problems}\label{sec:con}
We have derived general expressions for the projectors onto the affine set and onto the box of the minimum-energy control problem, respectively. We provided closed-form expressions for the pure, critically-, over- and under-damped harmonic oscillators. For problems where we do not have the necessary information to use the general expression for the projector onto the affine set, we proposed a numerical scheme to compute the projection. In our numerical experiments we applied this numerical scheme to solve a machine tool manipulator problem. We carried out numerical experiments with all the previously mentioned problems, both the closed-form examples and the machine tool manipulator, comparing the errors and CPU times.
These numerical experiments compared the performance of the MAP, Dykstra, DR and AAC algorithms with that of the AMPL--Ipopt suite.

For the DR and AAC algorithms, we collected some numerical results regarding the use of different values of the parameters $\lambda$, $\alpha$ and $\beta$ and their effect on the number of iterations required for the method to converge. In this parameter analysis we observed that, as the bounds on the control variable are tightened, the choice of parameters becomes more difficult. We also noticed that when the problem is almost infeasible, i.e., the bounds on the control variable are so tight that almost no solutions exist, the best parameter values approach 1.

Regarding our other numerical experiments, we found some similarities in performance across the four projection methods. We observed that an increase in the order of the number of discretization points produces a corresponding decrease in the order of the errors, both for the control and the state variables. We also saw that an increase in the order of the number of discretization points results in an increase in the order of the CPU time. These observations are useful for estimating the run time and errors for any number of discretization points, and were not seen in the results from Ipopt. In general, we see smaller errors and faster CPU times when using the projection methods. Overall, using projection algorithms with the general expressions and the numerical approach we proposed is more advantageous than using Ipopt for the class of problems we consider.


In the future it would be useful to extend this research to more general problems. One such extension is the case when the ODE constraints are nonlinear. If the ODE constraints are nonlinear, then we have a nonconvex problem. The DR algorithm has already been shown to have success with finite-dimensional nonconvex problems, so it would be interesting to apply this method to nonconvex minimum-energy control problems. An extension to LQ problems where the objective function is given by
\begin{equation*}
\frac{1}{2}\int_{t_0}^{t_f} \big( x(t)^TQ(t)x(t) + u(t)^TR(t)u(t) \big)\, dt,
\end{equation*}
where $Q$ and $R$ are matrix functions of dimensions $n\times n$ and $m\times m$, respectively, should also be investigated. The fact that the state variables appear in the objective makes this problem particularly interesting and challenging to study. 
Another possibility is to look into the cause of the intriguing results in the bottom left of Figure \ref{fig:PDHO}, where $\omega_0=5,\,\zeta=0.5$, using the DR algorithm. It is currently unknown whether the behaviour in this case is a numerical artifact or whether it has a theoretical explanation.




\setcounter{section}{0}

\section{Introduction}\label{sec:introduction}

\IEEEPARstart{T}{he} first historical stamps of data visualization reach back to before the 17\textsuperscript{th} century.
Its ``golden age'' dates to the second half of the 19\textsuperscript{th} century~\cite{friendly2008brief}.
However, it is the 21\textsuperscript{st} century that may be called the renaissance of data visualization. 

The human preference to acquire information by visual means~\cite{kaas2014current}, in tandem with the time efficiency of conveying information with charts~\cite{6305953}, results in the massive production of data visualizations applied in various fields, including business analysis, Big Data, psychology, journalism and the production process~\cite{weber2012data, sinar2015data}.
Mass-produced charts, graphs, diagrams, schemes and infographics flood the market and the viewers~\cite{few2007data}. 

The influx of data visualizations is possible thanks to a vast plethora of specialized tools. Although novel software makes visualization easier, it does not ensure the quality of its creations.
Moreover, there are no modern tools designed solely to evaluate the quality of data visualizations. We are still limited to guidelines presented in checklists, which usually contain a non-exhaustive list of questions or advice. To fill this void, we propose an organized, systematic set of data visualization guidelines based on state-of-the-art practices, VisQualdex.

The methodology allows everyone, from non-specialists to data science experts, to assess the quality of a data visualization and pinpoint existing problems. 
Compared to current methods for data visualization evaluation, VisQualdex\ leads to a more exhaustive and complete evaluation, due to utilizing systematic, precise and scientifically supported criteria. 
Moreover, VisQualdex\ is also available as a companion web application, VisQual, and thus can be easily incorporated into many design workflows.

\subsection{The spectrum of data visualizations}
The focus of VisQualdex\ is to evaluate the quality of broadly understood ``data visualizations''. 
The definitions in the literature vary from the ``image that is representative of the raw data''~\cite{azzam2013data} to ``the set of methods for graphically displaying information in a way that is understandable and straightforward''~\cite{tonidandel2015big}. Despite the relative vagueness of these definitions, they capture the essential aim of data visualization, which is to communicate information in a graphical form. 

In all types of communication, there are many possibilities for conveying the same message. Therefore, data visualization itself covers drastically different entities, ranging from the simple chart (Figure~\ref{fig:simple_chart}) to more complex visualizations (Figure~\ref{fig:alternative_data_visualization}). This adds a layer of complication to data visualization assessment, as the rules must be general enough to apply to all types of data visualization.


\begin{figure}
 \centering
 \includegraphics[width=8cm]{img/simple_chart.png}
 \caption{An example of a ``classical'' data visualization.}
 \label{fig:simple_chart}
\end{figure}

\begin{figure}
 \centering
 \includegraphics[width=8cm]{img/Lallemand-tour-de-force.png}
 \caption{An example of a ``non-classical'' data visualization (adapted from Charles Lallemand's ``tour de force'' nomogram from a work on hexagonal charts~\cite{lallemand1885abaques}).}
 \label{fig:alternative_data_visualization}
\end{figure}

\subsection{State of the art}

Despite the vagueness of definitions, the way we describe data visualizations has been subject to numerous improvements over time~\cite{friendly2008brief}. These small steps have resulted in more structured descriptions of visualizations, exemplified by the grammar of graphics. These theoretical achievements have contributed to the development of practical tools~\cite{ali2016big,waskom2021seaborn,wickham2006introduction,porcu2018matplotlib}.

In glaring contrast, the evaluation of data visualization is still as unsystematic as it used to be in the past. The first debate on the topic dates back to 1857 and resulted in several pieces of advice~\cite{palsky1999debate}.
The theoretical understanding of the correctness of data visualization has grown over time with new or revised good practices~\cite{azzam2013data,cawthon2007effect,carpendale2008evaluating,few2006information}, but very few works propose practical and applicable evaluation methods.
 
The lack of a systematic approach stems from the popularity of checklists as one of the most commonly adopted evaluation systems. Here, a checklist is a list of potential mistakes, sometimes divided into thematic sections. The first checklists (or guidelines) date back to 1915~\cite{first-data-visualization-guideline}, but the community of data visualization practitioners is still producing newer counterparts~\cite{evergreen2016data}.


Checklists have two main practical advantages: simplicity and shortness. Simplicity means that the majority of the state-of-the-art checklists can be easily operated by any viewer with a basic understanding of the most trivial data visualization concepts, like chart, axis or scale. Moreover, most visualization checklists are brief (e.g., about 20 questions~\cite{evergreen2016data,jour-data-viz-checklist,david-mckie-checklist,berkley-checklist}). This short length, along with the simplicity, translates to a relatively good assessment pace. 

\subsection{Disadvantages of the checklist usage}

However, checklists are characterized by disorder on two different conceptual levels: a lack of organization and varying levels of universality. 

In the majority of the studied examples, checklists do not possess any grouping or hierarchy of the guidelines. Although some checklists demonstrate some degree of guideline categorization, the categorization often does not reflect state-of-the-art data visualization descriptions and may be driven by a practical rationale instead. 

The varying level of universality happens when general guidelines (e.g., ``keep the graph two-dimensional'') occur along with more specific pieces of advice (e.g., ``use bar charts to visualize achievement of an objective''). This limits the scope of the checklist to a particular set of data visualizations. The problem is even more pronounced if the checklist contains a scale based on the number of questions answered correctly or guidelines fulfilled. In this situation, non-general guidelines falsely lower the actual score of a visualization.

Moreover, using checklists forces one to follow the state-of-the-art methodology and logic of checklists~\cite{scriven2000logic}, which states that (some points are skipped):
\begin{itemize}
 \item ``The list should be complete (no significant omissions).'' and ``The checkpoints should refer to criteria and not mere indicators.''
 
 According to this research, the second rule is most often broken, as none of the state-of-the-art checklists has ``complete''/``full'' coverage of the evaluation criteria. 
 As most state-of-the-art data visualization checklists aim at an ``engineering'' approach, they tend to have very strict, concise and precise points. 
 However, this causes them to sometimes be superficial, oversimplified and focused on indicators instead of criteria. 
 For example, the rule ``No more than 3 colors''~\cite{berkley-checklist} focuses purely on indicators, but disallows great 4-color visualizations from passing the benchmark. 
 Additionally, the rule ``Did you start the Y-axis at 0?'' (answer yes/no)~\cite{david-mckie-checklist} rejects charts that cut the axis in a justifiable way (e.g.,
shoe sizes for adults, Earth temperatures in Kelvin units, etc.). 
 
 
 \item ``The criteria should be commensurable.''
 
 This criterion is not applicable to data visualization checklists, because some mistakes are more significant than others. 
 There exist visualizations which can follow all criteria except one and still be condemningly wrong. 
 
 
 \item ``The list should be concise (to assist its mnemonic function).''
 
 In contrast to engineering processes or medical applications~\cite{grigg2015smarter}, data visualization does not always follow strict regularities, as it is a mix of applied arts and fine arts which cannot be separated~\cite{mirahan7chapter}. 
 
 
\end{itemize}

Therefore, applying the checklist format to data visualization evaluation may lead to problems which stem neither from the checklist methodology nor from data visualization evaluation itself. 
The problem lies in forcefully fusing the two ideas together. 

All the problems described above, together with the brevity, result in the non-exhaustiveness of checklists. Right now, there are no checklists that approach the evaluation of data visualization in a systematized way. Therefore, we have designed VisQualdex\ to at least partially alleviate these issues and produce an evaluation methodology applicable to the broad spectrum of data visualizations. 


\section{Design of VisQualdex}
\label{sec:design}
The development of VisQualdex\ follows the VISupply framework for the design of data visualization guidelines~\cite{engelke2018visupply}. It covers four main steps:

\begin{itemize}
 \item Evidence collection \\ 
 Collection and diligent analysis of research works about data visualization. 
 \item Integration \\
 Curation of existing data visualization guidelines, good practices, suggestions and the like.
 \item Contextualization \& Generalization \\
 Merging concepts from different works and forming clusters, a.k.a. ``categories'' (see Section \ref{sec:categories}).
 \item Guideline Definition \\
 Formalization of VisQualdex, i.e., stating the ``question'' format, supplementing missing areas and verifying VisQualdex\ in practice, utilizing and extending nomenclatural notions~\cite{diehl2020studying}.
\end{itemize}


Moreover, an original concept introduced in VisQualdex\ is the set of four main traits of a correct data visualization.
These four pillars of VisQualdex\ are:
\begin{enumerate}
 \item Real data instead of guesstimates.
 \item Clarity and readability instead of incomprehensibility and ambiguity.
 \item Simplicity and summarization instead of complexity and raw data.
 \item Guidance and objectivity instead of manipulation and subjectivity.
\end{enumerate}


\subsection{Categories}
\label{sec:categories}
\label{categories}

An important part of VisQualdex\ is the categorization of the guidelines. The baselines are the formalized descriptions of the Grammar of Graphics (GoG)~\cite{wilkinson2012grammar} and the Layered Grammar of Graphics (LGoG)~\cite{wickham2010layered}. Although both of these approaches constitute an in-depth description of data visualization, they are used primarily for either building or decomposing the visualization object. As this is a different goal from the evaluation of data visualizations, GoG and LGoG serve only as reference points. Therefore, VisQualdex\ utilizes a redesigned categorization of guidelines. Furthermore, the proposed categories are complete (i.e., there is no ``others'' category) and disjoint (i.e.,
each question belongs to exactly one category).

\textit{Subjective} \\
The objective of this category is to incorporate any purely subjective aspects of the visualization. Although all categories concern issues that may be answered differently depending on the viewer, this one focuses on things exclusively related to the opinion of the onlooker. 

\textit{Theme} \\
This category contains all visual features and artistic choices not directly depending on the data, like colors (not related to the color scale), fonts, spacing, and any additional graphics that are not strictly part of the chart. 

\textit{Coordinates} \\
This category is responsible for the coordinate system and units. Its purpose is to check whether all coordinate systems (or their alternatives), units and axes are correctly prepared, provided and presented. It also examines whether the relation between the raw data and all the aspects above is consistent. 

\textit{Summaries} \\
This category involves all statistical transformations, understood as everything that was ``built on top'' of the raw data. It concerns any values that summarize the data (e.g., mean, median, confidence intervals, trends). The main goal is to verify whether the summary is appropriate and helps the user understand the data. 

\textit{Facets} \\
This category concerns the separation of one visualization into many small multiples showing the data subset by some criteria. It checks whether the division is consistent and maintains the best practices to show different aspects of the raised issue. 

\textit{Geometry} \\
This category includes all information about the shapes used for data presentation (e.g., the shape itself and its dimensions). It concerns the shape of the whole figure, as well as all used figures and any other geometrical aspects. 

\textit{Guides} \\
This category handles any text content that appears on the visualization, e.g., the title, legend, axis labels, additional comments and other labels. It focuses only on the content, not the display. It verifies, most importantly, the content of the text, but also its clarity, objectivity and overall necessity. 

\textit{Position} \\
The main focus of this category is to verify the positions of elements that do not represent any values, such as gaps between bars in a bar chart or the title positioning on the canvas. It evaluates whether the positioning prevents elements from veiling one another and increases the clarity of the visualization.

\textit{Perception} \\
This category focuses on the general perception of the data. It is also responsible for detecting all misuses leading to an incorrect understanding of the data, e.g., bar charts with bars starting at an arbitrary point to make the differences between bar lengths more pronounced.

\textit{Data} \\
This category is responsible for evaluating issues related only to the data, such as the data source and validity, missing data, and the appropriateness of data explanations (e.g., used metrics).


\subsubsection{Questions}
\label{sec:questions}

Each category contains questions which represent unitary criteria based on the pillars described in Section~\ref{sec:design}. 
Their most important features are:
\begin{itemize}
 \item All questions are ``yes or no'' and trigger (negative answer) only if something is incorrect. 
 \item Questions do not overlap or include each other. 
 \item All questions address as general issues as possible while focusing on one particular type of mistake.
 This means that each question can be applied to any visualization, regardless of factors such as form, type or content. 
 However, some categories are incompatible with some visualizations by definition; e.g., a simple bar chart without any faceting cannot be evaluated in terms of faceting. 
 \item It is possible for one general bad practice to trigger many questions.
 \item Depending on the context, a single negative answer may have a tiny or a gigantic impact on the understanding of the visualization. 
 This means that it is impossible to judge the quality of a visualization solely by the fraction of positively answered questions.
 \item Literature sources support most questions (the complete citation list is available in the supplementary materials). 
\end{itemize}

VisQualdex\ contains a total of 120 criteria, in the form of questions, which address and detect different mistakes. We present their general content in the form of a word cloud (Figure~\ref{fig:question_wordcloud}). 

\begin{figure}
 \centering
 \includegraphics[width=8cm]{img/question_wordcloud.pdf}
 \caption{Word cloud displaying the most popular words used in the questions (without stopwords).}
 \label{fig:question_wordcloud}
\end{figure}

The full list of questions is available in the supplementary materials. 
Here we present and analyze a few exemplary questions.

\textbf{Q}: ``Are colors on a gradient scale equidistributed?''
In the case of a gradient color scale, the distribution of the colors should be regular. 
Figure~\ref{fig:ColorGradientScale} shows a proper distribution of colors on a gradient scale. 
Even though this rule does not touch upon the topic of the choice of colors, it is worth mentioning that various studies~\cite{rogowitz1999trajectories,rogowitz1998data,bergman1995rule,rogowitz1996not,kindlmann2014algebraic} suggest refraining from the ``rainbow scale'' and advise simpler color combinations with fewer colors instead. 

\begin{figure}
 \centering
 \includegraphics[width=8cm]{img/ColorGradientScale}
 \caption{Example of an equidistributed color gradient scale (using the Iris dataset~\cite{fisher1936use}).}
 \label{fig:ColorGradientScale}
\end{figure}

\textbf{Q}: ``Is the data-to-ink ratio rational?''
The ``data-ink ratio'' describes the efficiency of communicating information with graphics.
Finding the balance is the topic of a heated debate between many experts, including Edward Tufte (favoring minimalism in data visualization and rejecting ``chart junk''~\cite{gatto2015making}) and Alberto Cairo~\cite{cairo2012functional, tufte1990envisioning}. 
The former advises that the less ``ink'' is used, the better. 
However, the latter challenges this minimalistic approach, claiming that redundancy (e.g., highlighting in color) may help convey the message more quickly. 
A golden mean lies somewhere in between. 
Overly encumbering the visualization with unnecessary information may lead to confusion, but leaving as little trace of the information as possible may also turn a visualization into a ``clue hunt'' instead of quickening information perception. 

\textbf{Q}: ``Are the additional dimensions of shapes showing additional information?''
This question focuses on minimizing the additional dimensional complications of the visualization. 
According to state-of-the-art research~\cite{bertini2016judgment}, using more than two dimensions on a visualization may be misleading and difficult to perceive.
Moreover, another study~\cite{cleveland1985graphical} shows that interpreting angles (which appear much more often in 3D graphics) has many possible vicious implications, ranging from minor ``illusory effects'' distorting the viewer's perception to completely hiding some data points on the visualization. 
See Figure~\ref{fig:3dPieChartsAngles} for a graphical example of how differently the same angle (data point) looks due to different projections.

\begin{figure}
 \centering
 \includegraphics[width=6cm,trim={5cm 0 5cm 0}]{img/3dPieChartsAngles}
 \caption{Example of different angles for a 3D pie chart.}
 \label{fig:3dPieChartsAngles}
\end{figure}

\textbf{Q}: ``Does the visualization obey the reading gravity?''
The concept of reading gravity~\cite{reading-gravity} comprehensively aggregates all aspects of the order in which the viewer perceives (reads) the visualization. 
It accounts for how the user reads the text of the visualization and in what order they see the data points, sub-charts (if faceting is used) and all other visuals. 
Although most western cultures are sinistrodextral (i.e., read from left to right) and read from top to bottom, some cultures follow different patterns (e.g., Arabic is written right to left~\cite{jordan2014reading}, and the Hanunuo script is written bottom to top~\cite{epo2014discourse}). 
Hence, to maintain flexibility and universality, the question imposes a general ``reading gravity'' instead of ``left to right, top to bottom''. 

\section{Discussion}

The field of data visualization does not suffer from a lack of guidelines and checklists, but rather from their surplus and disorder. This disarray results in a situation where our checklists are not comprehensive and thus dismiss the impact of the interaction of the many factors determining the effectiveness of visual communication~\cite{kandogan2016grounded}. VisQualdex\ is the first codex (a structured set of criteria) that could be at the same time versatile and extensive enough to cover all existing data visualizations. 

Our systematic approach results in a set of rules that constitutes a foundation for tools for data visualization creation (e.g., Microsoft Excel, plotly, ggplot, matplotlib, D3, etc.) and for instruments for automatic or semi-automatic data visualization correction (e.g., project ReVision~\cite{revision}). A principal example is the usage of VisQualdex\ for the default settings of these tools.

One of the problems concerning checklists is a varying level of detail. 
Specific points may mention, at the same time, significantly narrow and very general criteria while being on the same ``level'' of evaluation, or may even share the evaluation weight. This leads to an imbalanced evaluation, which may either allow ``incorrect'' visualizations to slip through the metric or cause ``good'' visualizations to be unfairly punished for minor mistakes. 
VisQualdex\ partially solves this problem with categories, which guard against question overlap and enforce the thematic division. However, we see it only as one of the first steps towards a comprehensive visualization ontology focused on evaluation instead of creation~\cite{duke2004building}. 

The price for the comprehensiveness of VisQualdex\ is its relative length. If evaluators want to be precise, they have to go through each of the 120 questions manually. We hope that, with techniques from image processing, it is possible to automate or semi-automate some evaluation steps.
To alleviate this issue, we introduce a web application that streamlines the usage of VisQualdex\ (\url{visqual.info}), presented in Figure \ref{fig:visqual-screenshot}.

\begin{figure}
 \centering
 \includegraphics[width=8cm]{img/visqual-screenshot.PNG}
 \caption{Screenshot of the VisQual.info website.}
 \label{fig:visqual-screenshot}
\end{figure}

\section{Conclusion}
The following conclusions emerged from this thorough analysis and from seeing thousands of graphs, charts and infographics. Data visualization is a field on the border of computer science, data science and the arts~\cite{mirahan7chapter}, which renders it highly susceptible to the biases of the creator and the viewer. However, we believe it is possible to forge universal criteria and find a universal standard to visualize data more understandably. VisQualdex is our first attempt at this task. 


\appendices


\ifCLASSOPTIONcompsoc
 \section*{Acknowledgments}
\else
 \section*{Acknowledgment}
\fi

The authors would like to thank Michael Friendly (York University) for his valuable insights on the history of data visualization checklists.

\ifCLASSOPTIONcaptionsoff
 \newpage
\fi

\printbibliography

\end{document}

\section{Supplement 1 -- VisQualdex questions}

\subsection{Coordinates} 

\begin{enumerate} 
 \item [1] Is the scale complete (not cut)?

 See Figure \ref{fig:question1_bad} for a bad example. 

\begin{figure}[h!]
\caption{Bad example for question 1}
\label{fig:question1_bad}
\includegraphics[width=0.8\textwidth]{supplement_img/1_badExampleImage.png}
\end{figure} 

 \item [2] Are the units properly displayed?

 See Figure \ref{fig:question2_bad} for a bad example. 

\begin{figure}[h!]
\caption{Bad example for question 2}
\label{fig:question2_bad}
\includegraphics[width=0.8\textwidth]{supplement_img/2_badExampleImage.png}
\end{figure} 

 \item [3] Does it use the proper coordinate system?

 See Figure \ref{fig:question3_bad} for a bad example. 

\begin{figure}[h!]
\caption{Bad example for question 3}
\label{fig:question3_bad}
\includegraphics[width=0.8\textwidth]{supplement_img/3_badExampleImage.png}
\end{figure} 

 \item [4] Are the coordinates consistent?

 See Figure \ref{fig:question4_bad} for a bad example. 

\begin{figure}[h!]
\caption{Bad example for question 4}
\label{fig:question4_bad}
\includegraphics[width=0.8\textwidth]{supplement_img/4_badExampleImage.png}
\end{figure} 

 \item [5] Are the coordinate units equal?

 See Figure \ref{fig:question5_bad} for a bad example. 

\begin{figure}[h!]
\caption{Bad example for question 5}
\label{fig:question5_bad}
\includegraphics[width=0.8\textwidth]{supplement_img/5_badExampleImage.png}
\end{figure} 

 \item [6] Is each axis in one dimension?

 See Figure \ref{fig:question6_bad} for a bad example. 

\begin{figure}[h!]
\caption{Bad example for question 6}
\label{fig:question6_bad}
\includegraphics[width=0.8\textwidth]{supplement_img/6_badExampleImage.png}
\end{figure} 

 \item [7] Is the scale range proper?

 \item [8] Do the coordinates correspond to the rest of the visualization?

 \item [9] Are the units displayed clearly?

 See Figure \ref{fig:question9_bad} for a bad example.
\n\n\\begin{figure}[h!]\n\\caption{Bad example for question 9}\n\\label{fig:question9_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/9_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [10] Does it not use a logarithmic scale?\n\n\t\t\t\\item [11] Are the coordinates changing in the intuitive direction?\n\n\t\t\t\\item [12] Is the data on all axes changing?\n\n See Figure~\\ref{fig:question12_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 12}\n\\label{fig:question12_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/12_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [13] Does the scale cover the whole data?\n\n See Figure~\\ref{fig:question13_good} for a good example. \n\\begin{figure}[h!]\n\\caption{Good example for question 13\\footnote{https:\/\/ggplot2.tidyverse.org\/\\#usage}}\n\\label{fig:question13_good}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/13_goodExampleImage.png}\n\\end{figure}\n\n\t\t\t\\item [14] Is there a scale\/axis?\n\n See Figure~\\ref{fig:question14_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 14}\n\\label{fig:question14_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/14_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [15] Are there enough axes or value references?\n\n See Figure~\\ref{fig:question15_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 15}\n\\label{fig:question15_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/15_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [16] Is the scale\/axis not duplicated?\n\n\t\t\t\\item [17] Is there no more than one scale in one dimension?\n\n See Figure~\\ref{fig:question17_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 17}\n\\label{fig:question17_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/17_badExampleImage.png}\n\\end{figure} \n\n\t\t\\end{enumerate} \n\n\t\\subsection{Data} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [18] Does it contain any data?\n\n See Figure~\\ref{fig:question18_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 18}\n\\label{fig:question18_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/18_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [19] Is the data correct?\n\n\t\t\t\\item [20] Is data redundancy (if present) conveyed with different means?\n\n See Figure~\\ref{fig:question20_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 20}\n\\label{fig:question20_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/20_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [21] Is the data ordered?\n\n\t\t\t\\item [22] Is the visualization not only pure data?\n\n See Figure~\\ref{fig:question22_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 22}\n\\label{fig:question22_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/22_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [23] Is the data source provided?\n\n\t\t\t\\item [24] Is there not too much data?\n\n See Figure~\\ref{fig:question24_bad} for a bad example.
\n\n\\begin{figure}[h!]\n\\caption{Bad example for question 24}\n\\label{fig:question24_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/24_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [25] Does the data not contain too many NAs?\n\n\t\t\t\\item [26] Is missing data represented?\n\n\t\t\t\\item [27] Is all data factual without interpolation?\n\n\t\t\t\\item [28] Is the data sufficient for presentation?\n\n\t\t\t\\item [29] Is the data sample size provided?\n\n See Figure~\\ref{fig:question29_good} for a good example. \n\\begin{figure}[h!]\n\\caption{Good example for question 29\\footnote{https:\/\/www.reddit.com\/r\/dataisbeautiful\/comments\/ct876f\/whens\\_the\\_best\\_time\\_of\\_day\\_to\\_post\\_to\\_reddit\\_oc\/}}\n\\label{fig:question29_good}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/29_goodExampleImage.png}\n\\end{figure}\n\n\t\t\t\\item [30] Is the data consistent?\n\n\t\t\t\\item [31] Is the whole data presented?\n\n See Figure~\\ref{fig:question31_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 31}\n\\label{fig:question31_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/31_badExampleImage.png}\n\\end{figure} \n\n\t\t\\end{enumerate} \n\n\t\\subsection{Facets} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [32] Does the faceting preserve the focus-plus-context principle?\n\n See Figure~\\ref{fig:question32_good} for a good example. \n\\begin{figure}[h!]\n\\caption{Good example for question 32\\footnote{https:\/\/www.reddit.com\/r\/dataisbeautiful\/comments\/cqo1j9\/global\\_monthly\\_temperature\\_compared\\_to\\_19611990\/}}\n\\label{fig:question32_good}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/32_goodExampleImage.png}\n\\end{figure}\n\n\t\t\t\\item [33] Are there not too many facets?\n\n\t\t\t\\item [34] Are the facets' metadata consistent?\n\n See Figure~\\ref{fig:question34_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 34}\n\\label{fig:question34_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/34_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [35] Do the facets represent the same topic?\n\n See Figure~\\ref{fig:question35_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 35}\n\\label{fig:question35_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/35_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [36] Are there enough subplots for faceting?\n\n\t\t\t\\item [37] Are facet sizes appropriate?\n\n\t\t\t\\item [38] Are the facets ordered in a logical way?\n\n\t\t\t\\item [39] Are the facets distinctly separated?\n\n See Figure~\\ref{fig:question39_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 39}\n\\label{fig:question39_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/39_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [40] Does it follow the 4-partition rules?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Geometry} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [41] Are the used shapes consistent?\n\n See Figure~\\ref{fig:question41_bad} for a bad example.
\n\n\\begin{figure}[h!]\n\\caption{Bad example for question 41}\n\\label{fig:question41_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/41_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [42] Is the shape-to-data conversion easy to interpret?\n\n\t\t\t\\item [43] Do the shapes allow easy comparison of the data?\n\n See Figure~\\ref{fig:question43_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 43}\n\\label{fig:question43_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/43_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [44] Are the additional dimensions of shapes showing additional information?\n\n See Figure~\\ref{fig:question44_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 44}\n\\label{fig:question44_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/44_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [45] Are the shapes distinct?\n\n\t\t\t\\item [46] Do the shapes not prevent reading the data?\n\n See Figure~\\ref{fig:question46_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 46}\n\\label{fig:question46_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/46_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [47] Are the shapes associated with the topic?\n\n See Figure~\\ref{fig:question47_good} for a good example. \n\\begin{figure}[h!]\n\\caption{Good example for question 47\\footnote{reddit.com\/r\/dataisbeautiful\/comments\/cmbpxd\/oc\\_the\\_periodic\\_table\\_of\\_jeopardy\\_how\\_many\\_times\/}}\n\\label{fig:question47_good}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/47_goodExampleImage.png}\n\\end{figure}\n\n\t\t\t\\item [48] Do the shapes represent the data in proper scale?\n\n See Figure~\\ref{fig:question48_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 48}\n\\label{fig:question48_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/48_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [49] Do the shapes represent the data precisely (without approximation)?\n\n See Figure~\\ref{fig:question49_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 49}\n\\label{fig:question49_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/49_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [50] Do the shapes not prevent reading one another?\n\n See Figure~\\ref{fig:question50_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 50}\n\\label{fig:question50_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/50_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [51] Do the used shapes represent any data?\n\n\t\t\t\\item [52] Are there fewer than 50 visual primitives\/elements?\n\n\t\t\t\\item [53] Does it not use glyphs?\n\n See Figure~\\ref{fig:question53_bad} for a bad example.
\n\n\\begin{figure}[h!]\n\\caption{Bad example for question 53}\n\\label{fig:question53_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/53_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [54] Does it not use shapes to indicate order?\n\n\t\t\t\\item [55] Does it not use curves to represent the data?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Perception} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [56] Does the visualization avoid unnecessary ``symbolization chaining''?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Guides} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [57] Does it not use additional trend lines?\n\n See Figure~\\ref{fig:question57_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 57}\n\\label{fig:question57_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/57_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [58] Are there not too many labels?\n\n\t\t\t\\item [59] Are the axes labeled?\n\n See Figure~\\ref{fig:question59_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 59}\n\\label{fig:question59_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/59_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [60] Is the topic related to the visualization?\n\n See Figure~\\ref{fig:question60_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 60}\n\\label{fig:question60_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/60_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [61] Are all crucial data points labelled?\n\n See Figure~\\ref{fig:question61_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 61}\n\\label{fig:question61_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/61_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [62] Is the title\/additional text precise and not misleading?\n\n See Figure~\\ref{fig:question62_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 62}\n\\label{fig:question62_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/62_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [63] Is all text readable?\n\n See Figure~\\ref{fig:question63_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 63}\n\\label{fig:question63_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/63_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [64] Is all text necessary?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Perception} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [65] Is the data interpretation correct?\n\n See Figure~\\ref{fig:question65_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 65}\n\\label{fig:question65_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/65_badExampleImage.png}\n\\end{figure} \n\n\t\t\\end{enumerate} \n\n\t\\subsection{Manipulation} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [66] Is the data displayed in an unbiased fashion?\n\n\t\t\t\\item [67] Is the data not manipulated?\n\n See Figure~\\ref{fig:question67_bad} for a bad example.
\n\n\\begin{figure}[h!]\n\\caption{Bad example for question 67}\n\\label{fig:question67_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/67_badExampleImage.png}\n\\end{figure} \n\n\t\t\\end{enumerate} \n\n\t\\subsection{Perception} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [68] Is the data memorable after a first glance?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Data} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [69] Are the measures obvious or explained?\n\n See Figure~\\ref{fig:question69_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 69}\n\\label{fig:question69_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/69_badExampleImage.png}\n\\end{figure} \n\n\t\t\\end{enumerate} \n\n\t\\subsection{Perception} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [70] Are the main features of the data easy to remember?\n\n\t\t\t\\item [71] Is continuous data shown as continuous (and discrete as discrete)?\n\n See Figure~\\ref{fig:question71_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 71}\n\\label{fig:question71_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/71_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [72] Are the labels descriptive?\n\n See Figure~\\ref{fig:question72_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 72}\n\\label{fig:question72_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/72_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [73] Does the visualization obey the reading gravity?\n\n\t\t\t\\item [74] Is the information organized according to the five hat racks?\n\n See Figure~\\ref{fig:question74_good} for a good example. \n\\begin{figure}[h!]\n\\caption{Good example for question 74\\footnote{https:\/\/www.reddit.com\/r\/dataisbeautiful\/comments\/bvz5ne\/passenger\\_fatalities\\_per\\_billion\\_passenger\\_miles\/}}\n\\label{fig:question74_good}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/74_goodExampleImage.png}\n\\end{figure}\n\n\t\t\t\\item [75] Are all axes' directions and shapes proper?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Guides} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [76] Is all text objective?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Manipulation} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [77] Does it contain no items with an illusory effect?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Position} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [78] Do elements not overlap with one another?\n\n\t\t\t\\item [79] Does the visualization utilize the space properly?\n\n\t\t\t\\item [80] Is the alignment proper?\n\n\t\t\t\\item [81] Does it maintain any symmetry?\n\n\t\t\t\\item [82] Are angles not too sharp\/too flat?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Statistics} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [83] Is additional statistical information provided?\n\n See Figure~\\ref{fig:question83_bad} for a bad example.
\n\n\\begin{figure}[h!]\n\\caption{Bad example for question 83}\n\\label{fig:question83_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/83_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [84] Do all statistics correspond to the same data?\n\n\t\t\t\\item [85] Is the statistic useful for the data interpretation?\n\n\t\t\t\\item [86] Does the statistic take into account the outliers?\n\n\t\t\t\\item [87] Do the statistics cover the whole data?\n\n\t\t\t\\item [88] Is the statistic labelled?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Subjective} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [89] Is this visualization better than similar ones that you have seen?\n\n\t\t\t\\item [90] Do you like the theme of the visualization?\n\n\t\t\t\\item [91] Did you learn something from this visualization?\n\n\t\t\t\\item [92] Is the visualization aesthetically pleasing?\n\n\t\t\t\\item [93] Is the presented information useful?\n\n\t\t\t\\item [94] Was the time spent experiencing the visualization worth it?\n\n\t\t\t\\item [95] Would you like to see a visualization similar to this one?\n\n\t\t\t\\item [96] Do you understand the visualization?\n\n\t\t\t\\item [97] Do you like it?\n\n\t\t\t\\item [98] Is this visualization interesting?\n\n\t\t\\end{enumerate} \n\n\t\\subsection{Theme} \n\n\t\t\\begin{enumerate} \n\n\t\t\t\\item [99] Is it tritanopia-friendly?\n\n\t\t\t\\item [100] Does it look good in black and white?\n\n See Figure~\\ref{fig:question100_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 100}\n\\label{fig:question100_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/100_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [101] Are colors distinct?\n\n See Figure~\\ref{fig:question101_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 101}\n\\label{fig:question101_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/101_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [102] Is the use of colors appropriate?\n\n See Figure~\\ref{fig:question102_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 102}\n\\label{fig:question102_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/102_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [103] Is the shape of the visualization appropriate?\n\n\t\t\t\\item [104] Does it use color to indicate categories?\n\n\t\t\t\\item [105] Are bipolar problems represented using a two-color scale?\n\n\t\t\t\\item [106] Are color values on the color scale consistent with their representation?\n\n\t\t\t\\item [107] Does it use complex textures to represent data?\n\n\t\t\t\\item [108] Does it use at most 3 colors to represent the data, if possible?\n\n\t\t\t\\item [109] Are the proportions of the visualization close to 1:1?\n\n\t\t\t\\item [110] Is the data-to-ink ratio rational?\n\n\t\t\t\\item [111] Does it contain additional graphics?\n\n See Figure~\\ref{fig:question111_good} for a good example.
\n\\begin{figure}[h!]\n\\caption{Good example for question 111\\footnote{https:\/\/www.reddit.com\/r\/dataisbeautiful\/comments\/cr6nf1\/oc\\_after\\_receiving\\_much\\_feedback\\_criticism\\_here\/}}\n\\label{fig:question111_good}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/111_goodExampleImage.png}\n\\end{figure}\n\n\t\t\t\\item [112] Are the graphics consistent?\n\n\t\t\t\\item [113] Are there not too many colors?\n\n\t\t\t\\item [114] Is the data not overwhelmed by the excess of form?\n\n\t\t\t\\item [115] Are the colors intuitive?\n\n See Figure~\\ref{fig:question115_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 115}\n\\label{fig:question115_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/115_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [116] Is the gradient scale wide enough to allow distinct colors?\n\n See Figure~\\ref{fig:question116_good} for a good example. \n\\begin{figure}[h!]\n\\caption{Good example for question 116\\footnote{https:\/\/www.reddit.com\/r\/dataisbeautiful\/comments\/cu58i3\/the\\_south\\_of\\_norway\\_is\\_better\\_at\\_eating\\_their\/}}\n\\label{fig:question116_good}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/116_goodExampleImage.png}\n\\end{figure}\n\n\t\t\t\\item [117] Are colors on the gradient scale equidistributed?\n\n\t\t\t\\item [118] Are colors readable on the medium that the visualization is supposed to be presented on?\n\n See Figure~\\ref{fig:question118_bad} for a bad example. \n\n\\begin{figure}[h!]\n\\caption{Bad example for question 118}\n\\label{fig:question118_bad}\n\\includegraphics[width=0.8\\textwidth]{supplement_img\/118_badExampleImage.png}\n\\end{figure} \n\n\t\t\t\\item [119] Is it protanopia-friendly?\n\n\t\t\t\\item [120] Is it deuteranopia-friendly?\n\n \\end{enumerate} \n\nThe questions have been built using, among others, the following sources:\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn domain theory, the probabilistic powerdomain monad on the category of continuous domains and Scott-continuous maps, proposed by Jones and Plotkin \\cite{jones89, jones90}, is the most widely used mathematical construct for denoting probabilistic computation in denotational semantics. It was realised by Kirch~\\cite{kirch93}, Tix~\\cite{tix95} and many others that the weak topology on the probabilistic powerdomain is more natural than the Scott topology, and for this reason, the extended probabilistic powerdomain was proposed over the category of $T_{0}$ spaces and continuous maps~\\cite{cohen06}. Indeed, the weak topology behaves more transparently than the Scott topology on general dcpos as well as on topological spaces. For example, Alvarez-Manilla, Jung and Keimel~\\cite{alvarez04} proved that the probabilistic powerdomain of a stably compact space is again stably compact in the weak topology. Schr\\\"oder and Simpson~\\cite{schroder05} showed that continuous linear functionals on the extended probabilistic powerdomain are uniquely determined by continuous functions from the underlying space to the reals.
Detailed proofs of these results can be found in~\\cite{goubault15, keimel12}.\n\nThe extended probabilistic powerdomain over a topological space $X$ consists of the so-called continuous valuations on $X$ (see Definition~\\ref{defofvaluations}), and among all continuous valuations, the simple valuations are the most natural ones. They are of the form $\\Sigma_{i= 1}^{n}r_{i}\\delta_{x_{i}}$, where $r_{i}, i=1, \\ldots, n$ are nonnegative real numbers and $\\delta_{x}$ is the Dirac measure at $x$ for $x\\in X$. It was proved by Jones~\\cite{jones90} that for a continuous domain $P$ with the Scott topology, every continuous valuation on $P$ can be written as a directed supremum of simple valuations. This is not true for general dcpos, but simple valuations are usually regarded as ``bricks'' that reflect properties of the entire ``building'' of continuous valuations. For this reason, it is useful to single out classes of spaces (that include more general dcpos than the continuous domains) for which there is a completion of the simple valuations ${\\mathcal V}_\\mathrm{s}$ that also forms a monad. The purpose of this paper is to show that each $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category admits such a monad. \n\nHeckmann~\\cite{heckmann96} considered the simple valuations in the topological setting, where he proved that the sobrification of the space of simple valuations on a topological space~$X$ consists of the point-continu\\-ous valuations on~$X$. Again, he used the weak topology rather than the Scott topology, and he proved that such a sobrification gives rise to the free weakly locally convex sober topological cone over~$X$. Goubault-Larrecq and the first author~\\cite{goubault19} proved that the sobrification of the space of simple valuations also defines a monad over topological spaces. Keimel and Lawson~\\cite{keimel09a} realised that the sobrification of a space is just a special case of the $\\mathcal{K}$-completion of that space, which led the authors to consider whether there is a general distributive law between $\\mathcal{K}$-completion and the simple valuation powerdomain. As is well known, such a distributive law can be used to show that their composition is again a monad. We show that this distributive law does exist, so the $\\mathcal{K}$-completion of the space of simple valuations gives a monad on the category of $\\mathcal{K}$-spaces for each $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category~$\\mathcal{K}$. Generalising Heckmann's result, we prove that the $\\mathcal{K}$-completion of the simple valuations over a space $X$ gives rise to the free weakly locally convex topological $\\mathcal{K}$-cone over $X$. This result enables us to prove that the $\\mathcal{K}$-completion of any locally convex (resp., weakly locally convex, locally linear) topological cone is a locally convex (resp., weakly locally convex, locally linear) $\\mathcal{K}$-cone. \n\nThe paper is organised as follows: we recall the extended powerdomain construction, topological cones and $\\mathcal{K}$-completions in Section~2, where we also prove many properties of general $\\mathcal{K}$-completions that are useful in our later discussion. In Section~3, we give two approaches to proving that the $\\mathcal{K}$-completion of the simple valuation monad gives a monad over topological spaces.
In Section~4, we characterise the Eilenberg-Moore algebras of our new monad as the weakly locally convex $\\mathcal{K}$-cones, and we employ this characterisation in Section~5 to prove that the $\\mathcal{K}$-completion of a locally convex (resp., weakly locally convex, locally linear) topological cone is a locally convex (resp., weakly locally convex, locally linear) $\\mathcal{K}$-cone. In the final section, we show that ${\\mathcal C}^\\top$, the Cantor tree with a top element, has the property that the dcpo-completion of its poset of simple valuations is not the monotone convergence completion of the space~${\\mathcal V}_\\mathrm{s}\\, {\\mathcal C}^\\top$ endowed with the weak topology. \n\n\n\\section{Preliminaries}\n\n\nWe use standard notions and terminology from Domain Theory~\\cite{gierz03,abramsky94} and from non-Hausdorff topology~\\cite{goubault13a}. \n\nIn this paper, we restrict attention to the category ${\\bf TOP_{0}}$ of $T_0$ spaces and continuous maps, so all topological spaces are assumed to be $T_{0}$ spaces unless stated otherwise. For a poset $P$, we use $\\Sigma P$ to denote the space $P$ equipped with the Scott topology; and for a topological space $X$, we use $\\Omega X$ to denote the poset $X$ equipped with the specialisation order on $X$, that is, $x\\leq y$ for $x, y\\in X$ if and only if $x$ is in the closure of $\\{y\\}$. We use $\\mathbb R_+$ to denote the set of nonnegative reals, and $\\overline{\\mathbb R}_+$ the set of nonnegative reals extended with $\\infty$. Whenever $\\overline{\\mathbb R}_+$ is treated as a topological space, we mean that it is equipped with the Scott topology. Note that this implies that a function $f\\colon X\\to \\overline{\\mathbb R}_+$ is continuous iff it is lower semicontinuous. For a topological space $X$, we use $X^{\\mathrm{s}}$ to denote the canonical sobrification of $X$, which we identify with the set of irreducible closed subsets of $X$ endowed with the lower Vietoris topology (see, for example, \\cite[Definition 8.2.17]{goubault13a}). We use $\\eta^{\\mathrm{s}}_{X}$ to denote the topological embedding of $X$ into $X^{\\mathrm{s}}$ that sends each $x\\in X$ to the closure of $\\{x\\}$, which is $\\mathord{\\downarrow} x$ in the specialisation order. Notice that $\\eta^{\\mathrm{s}}_{X}$ is an embedding if and only if $X$ is $T_0$. We use $\\mathbb S$ to denote the Sierpi\\'nski space consisting of two elements $0$ and $1$, in which the only nontrivial open set is the singleton $\\{ 1\\}$. Finally, for a subset $A$ of a space $X$ we use $\\chi_{A}$ to denote the characteristic function of~$A$, that is, $\\chi_{A}(x)=1$ when $x\\in A$ and $0$ otherwise.
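\n\nAs a small worked example of this notation, which we record here because it is used repeatedly later: for any space $X$ and any subset $U\\subseteq X$, the characteristic function $\\chi_{U}\\colon X\\to \\mathbb S$ is continuous if and only if $U$ is open, since $\\chi_{U}^{-1}(\\{ 1\\}) = U$. In particular, the open subsets of $X$ correspond bijectively to the continuous maps from $X$ to $\\mathbb S$, and the specialisation order of $\\mathbb S$ is given by $0\\leq 1$.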
\n\n\\subsection{The extended probabilistic powerdomain monad ${\\mathcal V}_w$}\n\n\\begin{definition}\n\\label{defofvaluations}\nA \\emph{valuation} on a topological space $(X,\\mathcal OX)$ is a function~$\\mu$ from $\\mathcal OX$ to the extended nonnegative reals $\\overline{\\mathbb{R}}_+$ satisfying for any $U, V\\in \\mathcal OX$:\n\\begin{itemize}\n\\item (strictness) $\\mu (\\emptyset) = 0$; \n\\item (monotonicity) $\\mu(U)\\leq \\mu(V)$ if $U\\subseteq V$; \n\\item (modularity) $ \\mu(U) + \\mu(V) = \\mu (U\\cup V) +\\mu (U\\cap V)$.\n\\end{itemize}\n\n\nA \\emph{continuous valuation} $\\mu$ on $(X, \\mathcal OX)$ is a valuation that is Scott-continuous from $\\mathcal OX$ to $\\overline{\\mathbb{R}}_+$, that is, for every directed family of open subsets $U_i, i\\in I$, it is true that:\n\\begin{itemize}\n\\item (Scott-continuity) $\\mu (\\bigcup_{i\\in I}U_i) = \\sup_{i\\in I} \\mu(U_i)$.\n\\end{itemize}\n\n\nValuations on a topological space~$X$ can be ordered: given $\\mu$ and $\\nu$, $\\mu\\leq \\nu$ if and only if $\\mu(U)\\leq \\nu(U)$ for all $U\\in \\mathcal OX$, an order that is sometimes referred to as the \\emph{stochastic order}. \n\nThe set of continuous valuations on $X$ endowed with the stochastic order is denoted by~$\\mathcal VX$.\n\\end{definition}\n\nAmong all continuous valuations on $X$ there are the \\emph{Dirac masses} $\\delta_x, x\\in X$. For each $x\\in X$, $\\delta_x$ is defined as:\n$$\\delta_x(U) = \\begin{cases}1 & x\\in U \\\\ 0 & \\text{otherwise.} \\end{cases}$$\n\\emph{Simple valuations} are finite linear combinations of Dirac masses. They are of the form $\\Sigma_{i=1}^n r_i\\delta_{x_i}$, where $x_i, i=1,...,n$ are in $X$ and $r_i, i=1,..., n$ are nonnegative real numbers, i.e., $r_i\\in \\mathbb R_+$. For each open set $U$, $(\\Sigma_{i=1}^n r_i\\delta_{x_i})(U) = \\Sigma_{i\/ x_i\\in U} r_i $. The set of all simple valuations on $X$ is denoted by~${\\mathcal V}_\\mathrm{s} X$. In contrast to simple valuations, one can consider \\emph{finite valuations}, i.e., those continuous valuations that take only finitely many (finite) values. Simple valuations are finite valuations, and it was proved by R. Tix~\\cite{tix99} that finite valuations on sober spaces are simple valuations. In general, finite valuations are not necessarily simple, but we do have the following:\n\n\\begin{proposition}\nLet $X$ be a $T_0$ topological space. Then the finite valuations on $X$ are precisely the simple valuations if and only if $X$ is sober.\n\\end{proposition}\n\\begin{proof}\nThe ``if'' (hard) direction is given as \\cite[Satz 2.2]{tix95}. For the ``only if'' direction, we assume that $A$ is an irreducible closed subset of $X$ and define a finite valuation $\\mu$ as \n$$ \\mu (U) = \\begin{cases} 1 & U\\cap A \\not= \\emptyset\\\\ 0 & \\text{otherwise}. \\end{cases}$$\nIt is easy to see that $\\mu$ is a continuous finite valuation, hence by assumption there exists a finite subset $F=\\oneset{x_1, ..., x_n}$ of $X$ and positive reals $r_i, i=1, ..., n$ such that $\\Sigma_{i=1}^n r_i\\delta_{x_i} = \\mu $. It follows that $\\Sigma_{i=1}^n r_i = 1$ (take $U=X$), and hence for any open set $U$, $U$ intersects $A$ if and only if $F\\subseteq U$ (indeed, since each $r_i$ is positive and $\\mu$ takes only the values $0$ and $1$, no open set can contain some but not all of the points of $F$).
Since $X$ is $T_0$, this implies that $x_1 = x_2 = \\cdots = x_n$, so $A$ is the closure of $\\{x_1\\}$ and $x_1$ is the unique point whose closure is $A$.\n\\end{proof}\n\n\nFor any topological space $X$, we topologise $\\mathcal VX$ with the \\emph{weak topology}, which is generated by a subbasis of sets of the form \n$$[U>r]= \\{\\mu\\mid \\mu \\text{ is continuous and } \\mu(U) > r\\},$$ \nfor $U\\in \\mathcal OX, r\\in \\mathbb R_+$. We use ${\\mathcal V}_w X$ to denote the space $\\mathcal VX$ equipped with the weak topology and call ${\\mathcal V}_w X$ the \\emph{extended probabilistic powerdomain} or the \\emph{valuation powerdomain} over $X$. The weak topology on ${\\mathcal V}_w X$ is natural in many ways. For example, the specialisation order of the weak topology is just the stochastic order on $\\mathcal VX$, and the canonical map $\\delta_X\\colon X \\to {\\mathcal V}_w X$, sending each $x\\in X$ to the Dirac mass $\\delta_x$ at $x$, is a topological embedding.\n\nWe can extend ${\\mathcal V}_w$ to an endofunctor on ${\\bf TOP_0}$ by defining its action ${\\mathcal V}_w f$ on continuous maps $f\\colon X\\to Y$ by ${\\mathcal V}_w f(\\mu)(V) = \\mu(f^{-1}(V))$. Moreover, ${\\mathcal V}_w$ is a monad over the category ${\\bf TOP_0}$. In order to show this, we need an integration theory of lower semicontinuous functions with respect to continuous valuations and Manes' equivalent description of monads. \n\nFor any topological space $X$, every lower semicontinuous function $h\\colon X\\to \\overline{\\mathbb R}_+$ has a Choquet-type \\emph{integral} with respect to a continuous valuation $\\mu$ on $X$ defined by:\n$$ \\int_{x\\in X} h(x)d\\mu = \\int_0^\\infty \\mu(h^{-1}(r,\\infty])dr,$$\nwhere the right side of the equation is a Riemann integral. If there is no danger of confusion, we write $\\int_{x\\in X} h(x)d\\mu$ as $\\int h~d\\mu$. This integral enjoys many nice properties. For example, it is linear: for any continuous valuation~$\\mu$ on~$X$, $a, b\\in \\mathbb R_{+}$, and lower semicontinuous maps $f, g \\colon X\\to \\overline{\\mathbb R}_+$, \n$$\\int af + bg~d\\mu = a\\int f~d\\mu+ b\\int g~d\\mu. $$\nFor simple valuations $\\Sigma_{i=1}^n r_i\\delta_{x_i}$, we also have: \n$$\\int f d(\\Sigma_{i=1}^n r_i\\delta_{x_i}) = \\Sigma_{i=1}^{n}r_{i}f(x_{i}),$$ and the function \n$$\\mu\\mapsto \\int fd\\mu \\colon {\\mathcal V}_w X \\to \\overline{\\mathbb R}_+ $$ is continuous. The reader is referred to \\cite{kirch93, lawson04, tix95, goubault19} for more related properties of integration. \n\n\n\\begin{definition}[Manes' description of monads]{\\rm \\cite{manes76}}\nA \\emph{monad} on a category {\\bf C} is a triple $(T , \\eta, \\_^\\dagger)$ consisting of a map $T$ from objects $X$ of~{\\bf C} to objects $T X$ of {\\bf C}, a collection $\\eta = (\\eta_X)_X$ of morphisms $\\eta_X : X \\to TX$, one for each object $X$ of {\\bf C} (called the unit of $T$), and a so-called extension operation $\\_^\\dagger$ that maps every morphism $ f : X\\to TY$ to $f ^\\dagger : T X\\to T Y$ such that:\n\\begin{enumerate}\n\\item $\\eta_X^\\dagger = \\mathrm{id}_{TX}$;\n\\item for every morphism $f : X\\to TY$, $ f^\\dagger\\circ \\eta_X = f$;\n\\item for all morphisms $f : X\\to TY$ and $g: Y\\to TZ$, $g^\\dagger\\circ f^\\dagger = (g^\\dagger\\circ f)^\\dagger$.\n\\end{enumerate}\n\\end{definition}\n\n\nWith the aforementioned ingredients, we can show that ${\\mathcal V}_w$ is a monad over the category ${\\bf TOP_0}$.
The unit of ${\\mathcal V}_w$ is given by $\\delta_X\\colon x\\mapsto \\delta_x$ for each $X$, and for continuous functions $f\\colon X\\to {\\mathcal V}_w Y$ the extension operation $\\_^{\\dagger}$ is given by \n$$f^\\dagger(\\mu)(U) = \\int_{x\\in X} f(x)(U)d\\mu.$$\nThe function $f^\\dagger\\colon {\\mathcal V}_w X\\to {\\mathcal V}_w Y: \\mu\\mapsto (U\\mapsto \\int_{x\\in X} f(x)(U)d\\mu)$, in particular, is continuous. Alternatively, one can describe the multiplication $\\beta\\colon {\\mathcal V}_w^2\\to{\\mathcal V}_w$ of the monad at $X$ as the map $\\beta_{X}$ sending each continuous valuation $\\varpi\\in {\\mathcal V}_w({\\mathcal V}_w X )$ to $\\mathrm{id}_{{\\mathcal V}_w X}^\\dagger(\\varpi)=(U\\mapsto \\int_{\\mu\\in {\\mathcal V}_w X} \\mu(U)d\\varpi)$. For a detailed discussion, the reader is referred to Section 2.3 in \\cite{goubault19}.\n\nRestricting ourselves to the simple valuations, we conclude that ${\\mathcal V}_\\mathrm{s}$ is a monad over the category ${\\bf TOP_0}$, where for a topological space $X$, ${\\mathcal V}_\\mathrm{s} X$ is the subspace of ${\\mathcal V}_w X$ consisting of simple valuations; and for continuous maps $f\\colon X\\to Y$ and simple valuations $\\Sigma_{i=1}^n r_i\\delta_{x_i}$, ${\\mathcal V}_\\mathrm{s} f (\\Sigma_{i=1}^n r_i\\delta_{x_i}) = {\\mathcal V}_w f (\\Sigma_{i=1}^n r_i\\delta_{x_i})= \\Sigma_{i=1}^n r_i\\delta_{f(x_i)}$. The unit of ${\\mathcal V}_\\mathrm{s}$ at $X$ is the map ${ {\\delta^{\\s}}}_{X}\\colon X\\to {\\mathcal V}_\\mathrm{s} X\\colon x\\mapsto \\delta_x$, the corestriction of the map $\\delta_{X}$ to ${\\mathcal V}_\\mathrm{s} X$. The multiplication $ {\\beta^{\\s}}\\colon {\\mathcal V}_\\mathrm{s}^2\\to {\\mathcal V}_\\mathrm{s}$ at $X$ is the map $ {\\beta^{\\s}}_{X}(\\Sigma_{i=1}^{n}r_{i}\\delta_{\\mu_{i}}) = \\Sigma_{i=1}^{n} {r_{i} \\mu_{i}}$, where for $i= 1,\\ldots,n$, $\\mu_{i}$ is a simple valuation and $r_{i}$ is a nonnegative real number. \n\n\nWhile a characterisation of the Eilenberg-Moore algebras of the ${\\mathcal V}_w$-monad remains an open problem, we know that the ${\\mathcal V}_\\mathrm{s}$-algebras are precisely the \\emph{weakly locally convex topological cones}~\\cite{goubault19}: for any weakly locally convex topological cone $X$, the structure map $\\alpha_X \\colon {\\mathcal V}_\\mathrm{s} X\\to X$ is the continuous map which sends each simple valuation\n$\\Sigma_{i=1}^n r_i\\delta_{x_i}$ to its \\emph{barycentre} $\\Sigma_{i=1}^n r_i x_i$. Moreover, the ${\\mathcal V}_\\mathrm{s}$-algebra morphisms are continuous \\emph{linear maps}. The notions of topological cone and linear map are explained below, and the barycentre of a valuation is defined in Definition~\\ref{defn:bary:choquet}.\n\n\\subsection{Ordered cones and topological cones}\n\nThe following notions are mainly from \\cite{keimel08} and \\cite{goubault19}.
\n\n\n\\begin{definition}[Cone]\nA \\emph{cone} is defined to be a commutative monoid~$C$ together with a scalar multiplication by nonnegative real numbers satisfying the same axioms as for vector spaces; that is, $C$ is endowed with an addition $(x, y)\\mapsto x+y: C\\times C\\to C$ which is associative, commutative and admits a neutral element~$0$, and with a scalar multiplication $(r, x)\\mapsto r \\cdot x: \n\\mathbb R_+ \\times C\\to C$ satisfying the following axioms for all $x, y\\in C$ and all $r, s\\in \\mathbb R_+$:\n\\begin{align*}\n&r\\cdot (x+y) = r\\cdot x+r\\cdot y & & (rs)\\cdot x=r\\cdot (s\\cdot x) & 0\\cdot x = 0 \\\\\n& (r+s)\\cdot x = r\\cdot x+s\\cdot x & &1\\cdot x=x &r\\cdot 0 = 0\n\\end{align*}\nWe shall often write $rx$ instead of $r \\cdot x$ for $r\\in \\mathbb R_+$ and $x\\in C$.\n\nAn \\emph{ordered cone} is a cone $C$ endowed with a partial order $\\leq$ such that addition and multiplication by fixed scalars $r\\in \\mathbb R_{+}$ are order preserving, that is, for all $x, y, z\\in C$ all $r\\in \\mathbb R_{+}$: $x\\leq y \\Rightarrow x+ z\\leq y+z $ and $rx\\leq ry $. \n\nA \\emph{semitopological cone} is a cone with a $T_0$ topology that makes $+$ and $\\cdot$ separately continuous.\n\nA \\emph{topological cone} is a cone with a $T_0$ topology that makes $+$ and $\\cdot$ jointly continuous.\n\\end{definition}\n\nTopological cones are semitopological cones, semitopological cones are ordered cones in their specialisation order. The extended reals $\\overline{\\mathbb R}_+$ with the Scott topology is a topological cone, with scalar multiplication and addition extended as: $a+\\infty = \\infty$ for all $a\\in \\overline{\\mathbb R}_+$, $b\\cdot \\infty=\\infty$ for $b\\in \\overline{\\mathbb R}_+\\setminus \\{0\\}$ and $0\\cdot\\infty=0$. Usually a cone cannot be embedded into a vector space as a subcone, the reason is that a cone might not satisfy the Cancellation Law (a+b = a+c implies b=c), a property enjoyed by every vector space. As an example, the extended reals $\\overline{\\mathbb R}_+$ does not satisfy the Cancellation Law hence cannot be be embedded into any vector space. \n\nThe following definition is taken from~\\cite[Definition 3.7]{keimel08}. \n\n\\begin{definition}\nA function $f:C\\to D$ from cone $C$ to $D$ is called \\emph{linear} if it is \\emph{homogeneous}:\n$$f(ra)= rf(a)~\\text{for all}~a\\in C~\\text{and}~r\\in \\mathbb R_{+} $$ \nand \\emph{additive}:\n$$f(a+b)=f(a)+f(b)~\\text{for all}~a, b\\in C.$$\nIf $D$ is an ordered cone, $f$ is called \\emph{superadditive} if\n$$ f(a+b)\\geq f(a)+f(b)~\\text{for all}~a, b\\in C$$\nand \\emph{subadditive} if\n$$ f(a+b)\\leq f(a)+f(b)~\\text{for all}~a, b\\in C.$$\nWe say that $f$ is \\emph{sublinear} (resp., \\emph{superlinear}), if f is homogeneous and subadditive (resp., superadditive).\n\\end{definition}\n\n\n\\begin{definition} Let $C$ be a cone. \n\\begin{itemize}\n\\item A subset $A\\subseteq C$ is \\emph{convex}, if given $a, b\\in A$, the linear combination $ra+(1-r)b\\in A$ for any $r\\in [0,1 ]$.\n\\item A subset $A\\subseteq C$ is a \\emph{half-space}, if both $A$ and its complement are convex. 
\n\\item A cone $C$ with a $T_0$ topology is called \\emph{weakly locally convex}\\footnote{This was introduced as local convexity in~\\cite{heckmann96}.}, if for every point $x\\in C$, every open neighbourhood $U$ of $x$ contains a convex (not necessarily open) neighbourhood of $x$.\n\\item A cone $C$ with a $T_0$ topology is called \\emph{locally convex}, if each point has a neighbourhood basis of open convex neighbourhoods.\n\\item A cone $C$ with a $T_0$ topology is called \\emph{locally linear}, if $C$ has a subbase of open half-spaces.\n\\end{itemize}\n\\end{definition}\n\nIt is immediate from the definition that every locally linear (semi)topological cone is locally convex and every locally convex (semi)topological cone is weakly locally convex. \n\n\\begin{proposition}\nFor any topological space $X$, ${\\mathcal V}_w X$ has a canonical cone structure that makes it a locally linear \\emph{topological cone} (see, for example, {\\rm \\cite[Proposition 3.8]{goubault19}}).\n\\end{proposition}\n\\begin{proof}\nFor any $\\mu, \\nu\\in \\mathcal VX, r\\in \\mathbb R_+$ and $U$ open in $X$, define $(\\mu+\\nu)(U)= \\mu(U)+\\nu(U)$ and $(r\\cdot\\mu)(U)=r\\cdot\\mu(U)$. The \\emph{zero valuation} $(U\\mapsto 0)$ is the neutral element. Then the triple $(\\mathcal VX, +, \\cdot)$ is a cone. The addition $ + \\colon {\\mathcal V}_w X\\times {\\mathcal V}_w X\\to {\\mathcal V}_w X\\colon (\\mu, \\nu)\\mapsto \\mu+\\nu $ and the scalar multiplication $\\cdot\\colon \\mathbb R_+\\times {\\mathcal V}_w X\\to {\\mathcal V}_w X\\colon (r, \\mu)\\mapsto r\\cdot\\mu$ are jointly continuous, which implies that ${\\mathcal V}_w X$ is a topological cone. Notice that the sub-basic opens $[U>r]$ of ${\\mathcal V}_w X$ are all half-spaces, hence ${\\mathcal V}_w X$ is locally linear, and therefore also locally convex and weakly locally convex. \n\\end{proof}\n\nAlso, note that ${\\mathcal V}_w f\\colon {\\mathcal V}_w X\\to {\\mathcal V}_w Y$ is continuous and linear, for any continuous map $f\\colon X\\to Y$.\n\nLet $(C, +,\\cdot)$ be a cone. By a \\emph{subcone} $A$ of $C$ we mean a subset such that the neutral element~$0$ of $C$ is in $A$, and for any $a, b\\in A, r\\in \\mathbb R_+$, we have $a+b\\in A$ and $r\\cdot a\\in A$. Following this terminology, the cone ${\\mathcal V}_\\mathrm{s} X$ of simple valuations on~$X$ is a subcone of~${\\mathcal V}_w X$, and ${\\mathcal V}_\\mathrm{s} X$ with the subspace topology is also a locally linear topological cone. \n\n\nWe also need the notion of a linear retraction between semitopological cones. In topology, a continuous function $f\\colon X\\to Y$ is called a \\emph{retraction}\nif there exists a continuous map $g\\colon Y\\to X$ such that $f\\circ g= \\mathrm{id}_Y$. The function~$g$ is called a \\emph{section} (of $f$), and $Y$ is called a \\emph{retract} of $X$. \nFor semitopological cones $C$ and $D$, a retraction map $f\\colon C\\to D$ is called a \\emph{linear retraction} if $f$ is linear. Notice that we do not require the section of $f$ to be a linear map. \n\n\\begin{lemma}\\label{retractionofweaklylc}{\\rm \\cite[Proposition 6.6]{heckmann96}}\nLet $C, D$ be topological cones, and $f\\colon C\\to D$ be a linear retraction. If $C$ is weakly locally convex, then so is $D$.
\\hfill $\\Box$\n\\end{lemma}\n\n\n\\subsection{The $\\mathcal{K}$-completion}\n\nIn order to prove that the category of monotone convergence spaces is reflective in ${\\bf TOP_0}$, Keimel and Lawson considered the general setting of \\emph{$\\text{{\\fontfamily{cmtt}\\selectfont K}}$-categories}~\\cite{keimel09a}. \n\n\\begin{definition}[$\\text{{\\fontfamily{cmtt}\\selectfont K}}$-\\emph{category}] \\label{defk}\nA full subcategory $\\mathcal K$ of the category ${\\bf TOP_0}$ is called a \n$\\text{{\\fontfamily{cmtt}\\selectfont K}}$-\\emph{category} if all objects of $\\mathcal K$ satisfy the following properties:\n\\begin{enumerate}\n \\item Homeomorphic copies of $\\mathcal{K}$-spaces are $\\mathcal{K}$-spaces. That is, $\\mathcal{K}$ is a replete subcategory of ${\\bf TOP_{0}}$.\n \\item All sober spaces are $\\mathcal{K}$-spaces. That is, all sober spaces are in the category $\\mathcal{K}$.\n \\item In a sober space $S$, the intersection of any family of $\\mathcal{K}$-subspaces is a $\\mathcal{K}$-space. \n \\item Continuous maps $f\\colon S\\to T$ between sober spaces $S$ and $T$ are \\emph{$\\mathcal{K}$-continuous}, that is, for every $\\mathcal{K}$-subspace $K$ of $T$, the inverse image\n $f^{-1}(K)$ is a $\\mathcal{K}$-subspace of $S$ or, equivalently, $f(\\mathrm{cl}_{\\kk}(Z))\\subseteq \\mathrm{cl}_{\\kk} f(Z)$ for every subset $Z\\subseteq S$, where for a subset $A$ of a sober space $X$, $\\mathrm{cl}_{\\kk} A$ stands for the smallest $\\mathcal{K}$-subspace of $X$ containing $A$. The existence of $\\mathrm{cl}_{\\kk} A$ is guaranteed by Item 3. \n\\end{enumerate}\n\\end{definition}\nNote that a $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category $\\mathcal{K}$ is also referred to as \\emph{having {Property}~$\\text{{\\fontfamily{cmtt}\\selectfont K}}$}, and the objects of $\\mathcal{K}$ are called $\\mathcal{K}$-spaces.\n\n\n\nKeimel and Lawson \\cite{keimel09a} proved that every $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category~$\\mathcal{K}$ is a reflective subcategory of ${\\bf TOP_0}$: the corresponding reflector $\\mathcal{K}_\\mathrm{c}$ sends each topological space~$X$ to the smallest $\\mathcal{K}$-subspace of~$X^\\mathrm{s}$ that contains $\\eta^{\\mathrm{s}}_{X}(X)$, the embedding image of $X$ into $X^{\\mathrm{s}}$. They then showed that the category of monotone convergence spaces, which we denote by $\\mathcal{D}$, is a $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category. \n\nAdditionally, the category $\\mathcal S$ of sober spaces is a $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category; the related reflector $\\mathcal S_\\mathrm c$ is just the sobrification functor. It was proved in \\cite{wu20} that the category $\\mathcal{W}$ of well-filtered spaces is also a $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category. However, the category of Scott spaces (dcpos equipped with the Scott topology) and Scott-continuous functions is not a $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category, since not every sober space is a Scott space: the topology on a sober space need not be the Scott topology of its specialisation order; for instance, the unit interval $[0, 1]$ with the ordinary topology is Hausdorff, hence sober, but it is not a Scott space. \n\\begin{remark}\nIn this paper, $\\mathcal{K}$ is always reserved for some $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category.
Moreover, $\\mathcal{K}_\\mathrm{c}$ is used to denote the corresponding reflector.\n\\end{remark}\n\n\n\n\\begin{figure}[h]\n\\centering\n\\begin{tikzpicture}[commutative diagrams\/every diagram]\n\\matrix[matrix of math nodes, name=m, commutative diagrams\/every cell] {\n\\mathcal{K}_\\mathrm{c}(X) & K \\\\\n X & \\\\};\n \\path[commutative diagrams\/.cd, every arrow, every label]\n (m-1-1) edge[\"$\\overline f$\"] (m-1-2)\n (m-2-1) edge[swap,\"$f$\"] (m-1-2)\n edge[commutative diagrams\/hook, \"$\\eta_X$\"] (m-1-1);\n\\end{tikzpicture}\n\\begin{tikzpicture}[commutative diagrams\/every diagram]\n\\matrix[matrix of math nodes, name=m, commutative diagrams\/every cell] {\n \\mathcal{K}_\\mathrm{c}(X) & \\mathcal{K}_\\mathrm{c}(Y) \\\\\n X & Y \\\\};\n \\path[commutative diagrams\/.cd, every arrow, every label]\n (m-1-1) edge[\"$\\mathcal{K}_\\mathrm{c} (f)$\"] (m-1-2)\n (m-2-1) edge[swap,\"$f$\"] (m-2-2)\n edge[commutative diagrams\/hook, \"$\\eta_X$\"] (m-1-1)\n (m-2-2) edge[commutative diagrams\/hook, \"$\\eta_Y$\"] (m-1-2);\n\\end{tikzpicture}\n\\end{figure}\n\n\nSince each $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category $\\mathcal{K}$ is a reflective subcategory of $\\bf TOP_0$, the reflector $\\mathcal{K}_\\mathrm c\\colon {\\bf TOP_0}\\to \\mathcal{K}$ is left adjoint to the inclusion of $\\mathcal{K}$ into $\\bf TOP_0$. We now state some properties of this situation that will be useful in what follows.\n\nThroughout the paper, for a space $X$ we use $\\eta_{X}\\colon X\\to \\mathcal{K}_\\mathrm{c}(X) $ to denote the corestriction of $\\eta^{\\mathrm{s}}_{X}$ to $\\mathcal{K}_\\mathrm{c}(X)$. Then $\\eta_{X}$ is the unit for the monad~$\\mathcal{K}_\\mathrm c$, and it is a topological embedding of $X$ into $\\mathcal{K}_\\mathrm{c}(X)$. Moreover, the pair $(\\mathcal{K}_\\mathrm{c}(X), \\eta_{X})$ satisfies the following universal property: for each $\\mathcal{K}$-space $K$ and continuous map $f\\colon X\\to K$, there is a unique extension map $\\overline f \\colon \\mathcal{K}_\\mathrm{c} (X)\\to K$ of $f$ such that $\\overline f\\circ \\eta_{X} = f$. We call $(\\mathcal{K}_\\mathrm{c}(X), \\eta_{X})$ the \\emph{canonical $\\mathcal{K}$-completion} of $X$. More generally, we say a pair $(Y, g)$ is a \\emph{$\\mathcal{K}$-completion} of $X$ if $Y$ is a $\\mathcal{K}$-space, $g\\colon X\\to Y$ is a topological embedding, and for every continuous map $f$ from $X$ to a $\\mathcal{K}$-space $K$, there exists a unique extension map $\\overline f\\colon Y\\to K$ such that $\\overline f\\circ g=f $. \n\nAs a functor from $\\bf TOP_0$ to $\\mathcal{K}$, the reflector $\\mathcal{K}_\\mathrm{c}$ sends each space $X$ to $\\mathcal{K}_\\mathrm{c}(X)$, and each continuous map $f\\colon X\\to Y$ to $\\mathcal{K}_\\mathrm{c}(f)=\\overline{\\eta_{Y}\\circ f}$, the unique continuous function from $\\mathcal{K}_\\mathrm{c}(X)$ to $\\mathcal{K}_\\mathrm{c}(Y)$ that extends $\\eta_{Y}\\circ f$, i.e., $\\mathcal{K}_\\mathrm{c}(f) \\circ\\eta_X=\\eta_Y \\circ f$.\n\n\n\\begin{remark}\nRestricting to the category $\\mathcal{D}$ of monotone convergence spaces, Keimel and Lawson showed that the $\\mathcal{D}$-completion gives rise, in a natural way, to the dcpo-completion, a concept well studied in~\\cite{zhao10}.
In detail, for a poset~$P$, the pair $(\\overline P, e)$ is called a \\emph{dcpo-completion} of~$P$ if $\\overline P$ is a dcpo, $e\\colon P\\to \\overline P$ is a topological embedding with respect to the Scott topologies on both $P$ and $\\overline P$, and for every Scott-continuous map $f$ from $P$ to a dcpo $L$, there exists a unique Scott-continuous map $\\overline f$ such that $\\overline f\\circ e = f$. They proved that the functor $\\Omega\\circ \\mathcal{D}_{\\mathrm c}\\circ \\Sigma$ is the left adjoint of the inclusion functor $U$ from the category $\\mathbf{DCPO}$ of dcpos and Scott-continuous maps into the category $\\mathbf{POS_{d}}$ of posets and Scott-continuous maps (not order-preserving maps). \n\\[\n\\begin{tikzcd}\n\\mathbf{TOP_{0}}\n\\arrow[rr, \"\\mathcal{D}_{\\mathrm c} \"] && \\mathcal{D} \\arrow[dd, \"\\Omega\"] \\\\\n\\\\\n\\mathbf{POS_{d}}\\arrow[uu, \"\\Sigma\"] \\arrow[rr, \"\\Omega\\circ \\mathcal{D}_{\\mathrm c}\\circ \\Sigma\" {name=F}, bend left = 15] && \\mathbf{DCPO} \\arrow[ll, \"U\"{name=G}, bend left=15 ]\n\\arrow[phantom, from=F, to=G, \"\\dashv\" rotate=-90]\n\\end{tikzcd}\n\\]\nHence for each poset $P$, the pair $(\\Omega(\\mathcal{D}_{\\mathrm c}( \\Sigma P)), e)$ is a dcpo-completion of $P$, where $e$ is the unit of the adjunction. \n\\end{remark}\n\n\n\n\nBy composition with the inclusion of $\\mathcal{K}$ into $\\bf TOP_0$, $\\mathcal{K}_\\mathrm{c}$ can be seen as an endofunctor on ${\\bf TOP_0}$, so it defines a monad. In fact, $\\mathcal{K}_\\mathrm{c}$ is an idempotent monad, due to the universal property mentioned above. Its unit at $X$ is just the map $\\eta_{X}$ described above, and its multiplication $m_{X}$ at $X$ is the extension map of $\\mathrm{id}_{X}$. Moreover, we know that $\\eta_{\\mathcal{K}_\\mathrm{c}(X)}=\\mathcal{K}_\\mathrm{c}(\\eta_{X})$, which is inverse to $m_{X}$; this implies that $\\mathcal{K}_\\mathrm{c}(X)$ is homeomorphic to $\\mathcal{K}_\\mathrm{c}(\\mathcal{K}_\\mathrm{c}(X))$.\nThe (Eilenberg-Moore) algebras of $\\mathcal{K}_\\mathrm c$ are precisely the $\\mathcal{K}$-spaces (see, for example, \\cite[Proposition 4.23, Corollary 4.24]{borceux2}).\n\n\n\nSince each $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category $\\mathcal{K}$ is a reflective, full and replete subcategory of ${\\bf TOP_0}$, the product of any family of $\\mathcal{K}$-spaces, as computed in ${\\bf TOP_0}$, is a $\\mathcal{K}$-space, and the same holds for equalisers. An easy consequence is that any $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category is closed under products and equalisers, hence is complete. (See, for example, Exercise~7 on page 92 in~\\cite{maclane98}.)\n\n\nThe following slightly stronger statement also holds. \n\n\\begin{lemma}\n\\label{equaliser:kspaces}\nLet $X$ be a $\\mathcal{K}$-space and $Y$ be a $T_0$ space, and let $f, g$ be continuous maps from $X$ to $Y$. If $h\\colon Z\\to X$ is an equaliser of $f$ and $g$, then $Z$ is also a $\\mathcal{K}$-space. \n\\end{lemma}\n\\begin{proof}\n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\n Z \\arrow[r, \"h\"] & X \\arrow[r, \"g\"', bend right =10] \\arrow[r, \"f\", bend left=10] & Y \\arrow[r, hook, \"\\eta^{\\mathrm{s}}_{Y}\"] & Y^{s} \n\\end{tikzcd}\n\\end{figure}\nLet $Y^\\mathrm{s}$ be the canonical sobrification of $Y$ and $\\eta^{\\mathrm{s}}_{Y}$ be the canonical topological embedding of $Y$ into $Y^{\\mathrm{s}}$ sending $y\\in Y$ to the closure of $\\{y\\}$.
Then we know that $h\\colon Z\\to X$ is also an equaliser of $\\eta^{\\mathrm{s}}_{Y}\\circ f$ and $\\eta^{\\mathrm{s}}_{Y}\\circ g$. Since $Y^\\mathrm{s}$ is sober, hence a $\\mathcal{K}$-space, this implies that $Z$ is a $\\mathcal{K}$-space. \n\\end{proof}\n\n\n\\begin{corollary}\n\\label{densedetermined}\nLet $X$ be a $T_0$ space and let $f, g$ be continuous maps from $\\mathcal{K}_\\mathrm{c}(X)$ to a $T_0$ space $Y$. If $f$ and $g$ are equal on $X$, that is, $f\\circ \\eta_{X} = g\\circ \\eta_{X}$, then $f=g$. \n\\end{corollary}\n\\begin{proof}\n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\n X \\arrow[r, \"\\eta_{X}\"] \\arrow[dr, \"j\"'] & \\mathcal{K}_\\mathrm{c}(X) \\arrow[d, \"\\overline j\", bend left = 10] \\arrow[r, \"f\", bend left = 10] \\arrow[r, \"g\"', bend right=10] & Y \\\\\n & Z \\arrow[u, \"h\", bend left = 10] &\n\\end{tikzcd}\n\\end{figure}\nLet $h\\colon Z\\to \\mathcal{K}_\\mathrm{c}(X)$ be the equaliser of $f$ and $g$. We know that there exists a unique $j\\colon X\\to Z$ such that $\\eta_{X}=h\\circ j$, since by assumption $\\eta_{X}$ equalises $f$ and $g$. Lemma~\\ref{equaliser:kspaces} implies that $Z$ is a $\\mathcal{K}$-space, so the map $j$ has a unique extension $\\overline j\\colon \\mathcal{K}_\\mathrm{c}(X)\\to Z$ such that $j=\\overline j\\circ \\eta_{X}$. We claim that $\\overline j$ and $h$ are inverse to each other. First, notice that $h\\circ \\overline j\\circ \\eta_{X} = h\\circ j= \\eta_{X} =\\mathrm{id}_{\\mathcal{K}_\\mathrm{c} (X)}\\circ \\eta_{X}$. This means that both the maps $h\\circ \\overline j$ and $\\mathrm{id}_{\\mathcal{K}_\\mathrm{c} (X)}$ extend $\\eta_{X}$, and hence it follows that $h\\circ\\overline j = \\mathrm{id}_{\\mathcal{K}_\\mathrm{c}(X)}$. This also implies that $h\\circ \\overline j\\circ h=h$, which in turn implies that $\\overline j\\circ h = \\mathrm{id}_{Z}$, since the map $h$, as an equaliser, is a monomorphism. Hence $\\mathcal{K}_\\mathrm{c}(X)$ and $Z$ are homeomorphic. Finally, we conclude that $f=g$, since $f\\circ h = g\\circ h$ and $h$ is a homeomorphism. \n\\end{proof}\n\n\n\nAs another corollary to Lemma~\\ref{equaliser:kspaces}, every $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category is also stable under \\emph{retractions}. \n\n\\begin{corollary}\n\\label{retracts:of:kspace}\nLet $\\mathcal{K}$ be a $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category, $Y$ be a $\\mathcal{K}$-space, and $X$ be a $T_0$ space. If $X$ is a retract of $Y$, then $X$ is also a $\\mathcal{K}$-space. \n\\end{corollary}\n\\begin{proof}\n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\n X \\arrow[r, \"s\"] & Y \\arrow[rr, \"\\mathrm{id}_{Y}\"', bend right =13] \\arrow[r, \"r\", bend left=13] & X \\arrow[r, \"s\", bend left=13] & Y \n\\end{tikzcd}\n\\end{figure}\nLet $r\\colon Y\\to X$ be a retraction and $s\\colon X\\to Y$ be its section. It is immediate that $s\\colon X\\to Y$ is the equaliser of $\\mathrm{id}_{Y}$ and $s\\circ r$. By Lemma~\\ref{equaliser:kspaces} we conclude that $X$ is a $\\mathcal{K}$-space. \n\\end{proof}\n\n\nThe following is \\`a la \\cite[Theorem 2.4]{heckmann96}.
\n\n\\begin{corollary}\n\\label{alahcekmann}\nFor a subspace $X$ of a $\\mathcal{K}$-space $Y$, consider the following statements:\n\\begin{enumerate}\n\\item The pair $(Y, e)$ is a $\\mathcal{K}$-completion of $X$, where $e$ is the subspace embedding of $X$ into $Y$.\n\\item For every $y\\in Y$ and open subset $O$ of $Y$ with $y\\in O$ there is some $x\\in X$ with $x\\in O$ and $x\\leq y$;\n\\item For all opens $U$ and $V$ of $Y$, $X\\cap U\\subseteq V$ implies that $U\\subseteq V$.\n\\end{enumerate}\nThen $(1) \\Rightarrow (2) \\Leftrightarrow (3)$. \n\\end{corollary}\n\\begin{proof}\nThe proof of the equivalence between $(2)$ and $(3)$ goes exactly as in that of~\\cite[Theorem 2.4]{heckmann96}. \n\nWe prove that $(1)$ implies $(2)$. \nTo this end we assume that $(1)$ holds and $(2)$ does not. Then there exists some $y\\in Y$ and open set $O\\subseteq Y$ of $y$ such that $\\mathord{\\downarrow} y\\cap U\\cap X = \\emptyset$. This means that $X\\subseteq Y\\setminus (\\mathord{\\downarrow} y\\cap U)$. Now consider the map $\\chi_{O\\cap X}\\colon X\\to \\mathbb S$. Since $\\mathbb S$ is sober hence a $\\mathcal{K}$-space, the fact that both $\\chi_{O}\\colon Y\\to \\mathbb S$ and $\\chi_{O\\setminus \\mathord{\\downarrow} y}\\colon Y\\to \\mathbb S $ extend $\\chi_{O\\cap X}$ would contradict with the fact that $(Y, e)$ is a $\\mathcal{K}$-completion of $X$. \n\\end{proof}\n\n\n\\begin{remark}\\label{isobetweenxandkx}\nThe Item~$3$ of Corollary~\\ref{alahcekmann} implies that for each open subset $W$ of $X$, there is one and only one open subset $O$ of $Y$, such that $W=O\\cap X$, and $\\chi_{O}\\colon Y\\to \\mathbb S$ is the unique continuous map extending $\\chi_{W}\\colon X\\to \\mathbb S$. Moreover, the map $\\lambda O. X\\cap O$ is an order-isomorphism between the lattice of open sets of $Y$ and that of $X$. \n\\end{remark}\n\n\n\n\\begin{proposition}\n\\label{kpreservesorder}\nLet $X$ be a $T_{0}$ space, $K$ be a $\\mathcal{K}$-space and $f\\colon X\\to K$ be any continuous map. If we use $[X\\to K]$ (resp., $[\\mathcal{K}_\\mathrm{c}(X) \\to K]$) to denote the set of continuous functions from $X$ (resp., $\\mathcal{K}_\\mathrm{c}(X)$) to $K$, then the extension map $f\\mapsto \\overline f\\colon [X\\to K]\\to[\\mathcal{K}_\\mathrm{c}(X)\\to K] $ is a bijection. Moreover, it is an order isomorphism. That is, for $f, g\\colon X\\to K$ with $f\\leq g$, we have that $\\overline f\\leq \\overline g$. The order mentioned here is the pointwise order between functions induced by the specialisation order of $K$. \n\\end{proposition}\n\\begin{proof}\nIt is obvious that $(f\\mapsto \\overline f)$ is a bijection. We assume that it is not an order isomorphism. \nHence we know that there exist functions $f$ and $g$ from $X$ to $K$ with $f\\leq g$, but $\\overline f\\not\\leq \\overline g$. So there is some point $a\\in \\mathcal{K}_\\mathrm{c}(X)$ such that $\\overline f(a)\\not\\leq \\overline g(a)$. This implies that $\\overline f(a)$ is in the open set $U= K\\setminus \\mathord{\\downarrow} \\overline g(a)$. By Item~2 of Corollary~\\ref{alahcekmann} we have some $x\\in X$ such that $\\eta_{X}(x)\\in \\mathord{\\downarrow} a\\cap {\\overline f^{-1}}(U)$. Applying $\\overline f$ to $\\eta_{X}(x)$ yields that $f(x)= \\overline f(\\eta_{X}(x))\\in U$. This implies that $f(x)\\not\\leq \\overline g(a) $. However, this is impossible since $f(x)\\leq g(x) = \\overline g(\\eta_{X}(x)) \\leq \\overline g(a)$. 
(the last inequality comes from the fact that $\\overline g$ is continuous hence order-preserving and that $\\eta_{X}(x)\\leq a$.)\n\\end{proof}\n\n\n\\section{The monad structure}\nIn this section, we prove that for each $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category $\\mathcal{K}$, the composition $\\mathcal{K}_\\mathrm{c} \\circ {\\mathcal V}_\\mathrm{s}$ gives rise to a monad over the category ${\\bf TOP_{0}}$. First of all, it is easy to see that the assignment of the space $\\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} X)$ to each space $X$, and of the continuous function $\\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} f)$ to each continuous function $f\\colon X\\to Y$ defines an endofunctor on ${\\bf TOP_0}$, which we denote by~${\\mathcal V}_\\mathcal{K}$.\n\nThe $\\mathcal{K}$-completion of ${\\mathcal V}_\\mathrm{s} X$, as a topological space, is homeomorphic to the smallest $\\mathcal{K}$-subspace in ${\\mathcal V}_w X$ containing ${\\mathcal V}_\\mathrm{s} X$. This is because \\cite[Theorem 4.4]{keimel09a} tells us that we could complete ${\\mathcal V}_\\mathrm{s} X$ in any $\\mathcal{K}$-space that has (the homeomorphic copy of) ${\\mathcal V}_\\mathrm{s} X$ as a subspace. The space ${\\mathcal V}_w X$, in our case, is a sober space~\\cite[Proposition5.1]{heckmann96} hence a $\\mathcal{K}$-space and it contains ${\\mathcal V}_\\mathrm{s} X$ as a subspace. Henceforth we identify ${\\mathcal V}_\\mathcal{K} X$ as a subspace of ${\\mathcal V}_w X$. \n\nIf $f\\colon X\\to Y$ is a continuous map, then ${\\mathcal V}_w f\\colon {\\mathcal V}_w X\\to {\\mathcal V}_w Y$ also is continuous. Since ${\\mathcal V}_w X, {\\mathcal V}_w Y$ are sober spaces, ${\\mathcal V}_w f$ is $\\mathcal{K}$-continuous by Item~4 of Definition~\\ref{defk}. Thus, ${\\mathcal V}_w f$ sends ${\\mathcal V}_\\mathcal{K} X$ into ${\\mathcal V}_\\mathcal{K} Y$. Moreover, ${\\mathcal V}_\\mathcal{K} f$ is defined to be the unique map $\\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} f)$ such that $\\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} f) \\circ \\eta_{{\\mathcal V}_\\mathrm{s} X} =\\eta_{{\\mathcal V}_\\mathrm{s} Y}\\circ {\\mathcal V}_\\mathrm{s} f$. Hence, ${\\mathcal V}_\\mathcal{K} f = {\\mathcal V}_w f$ on ${\\mathcal V}_\\mathcal{K} X$, since for each simple valuation $s$, ${\\mathcal V}_w f(s) = {\\mathcal V}_\\mathrm{s} f(s)$. That is, for each $\\mu\\in {\\mathcal V}_\\mathcal{K}(X)$ and each open subset $U$ of $Y$, ${\\mathcal V}_\\mathcal{K} f (\\mu)(U) = {\\mathcal V}_w f (\\mu)(U) = \\mu(f^{-1}(U))$.\n\nWe summarise comments above in Figure~\\ref{relation}, hooked arrows denote topological embeddings and tailed arrows denote subspace embeddings. 
\n\n\n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\nX \\arrow[d, \"f\"] \\arrow[r, hook, \" {\\delta^{\\s}}_{X}\"] & {\\mathcal V}_\\mathrm{s} X \\arrow[d, \"{\\mathcal V}_\\mathrm{s} f\"] \\arrow[r, tail, \"\\eta_{{\\mathcal V}_\\mathrm{s} X}\"] & {\\mathcal V}_\\mathcal{K} X \\arrow[d, \"{\\mathcal V}_\\mathcal{K} f\"] \\arrow[r, tail, \"e_{{\\mathcal V}_\\mathcal{K} X}\"] & {\\mathcal V}_w X \\arrow[d, \"{\\mathcal V}_w f\"] \\\\\nY \\arrow[r, hook, \" {\\delta^{\\s}}_{Y}\"] & {\\mathcal V}_\\mathrm{s} Y \\arrow[r, tail, \"\\eta_{{\\mathcal V}_\\mathrm{s} Y}\"] & {\\mathcal V}_\\mathcal{K} Y \\arrow[r, tail, \"e_{{\\mathcal V}_\\mathcal{K} Y}\"] & {\\mathcal V}_w Y \n\\end{tikzcd}\n\\caption{ \\label{relation} }\n\\end{figure}\n\nNotice that ${\\mathcal V}_w X$ is a topological cone containing ${\\mathcal V}_\\mathrm{s} X$ as a topological subcone. A natural question is whether the cone structure of ${\\mathcal V}_w X$ restricts to that of ${\\mathcal V}_\\mathcal{K} X$. That is, if we take valuations from ${\\mathcal V}_\\mathcal{K} X$ and form finite sums and scalar multiples in ${\\mathcal V}_w X$, do the results land back in ${\\mathcal V}_\\mathcal{K} X$? The following proposition answers this question in the positive. \n\n\\begin{proposition}\\label{subcone}\n${\\mathcal V}_\\mathcal{K} X$ is a subcone of ${\\mathcal V}_w X$ for every topological space~$X$. \n\\end{proposition}\n\\begin{proof}\nObviously the neutral valuation $(U\\mapsto 0)$ is in ${\\mathcal V}_\\mathcal{K} X$. For any $\\mu, \\nu \\in {\\mathcal V}_\\mathcal{K} X, r\\in \\mathbb{R}_+$, we prove that $\\mu+ \\nu, r\\mu$, as computed in ${\\mathcal V}_w X$, are in ${\\mathcal V}_\\mathcal{K} X$. First, notice that $+\\colon {\\mathcal V}_w X\\times {\\mathcal V}_w X\\to {\\mathcal V}_w X$ is jointly continuous. Hence for each simple valuation $s\\in {\\mathcal V}_\\mathrm{s} X$, $s+\\_\\colon{\\mathcal V}_w X\\to {\\mathcal V}_w X$ is continuous, then $s'\\mapsto s+ s'$ is $\\mathcal{K}$-continuous since ${\\mathcal V}_w X$ is sober. Since $s'\\mapsto s+ s'$ maps ${\\mathcal V}_\\mathrm{s} X$ into itself, that is, $s + ({\\mathcal V}_\\mathrm{s} X) \\subseteq {\\mathcal V}_\\mathrm{s} X$, it follows from Item~4 of Definition~\\ref{defk} that $s+\\mathrm{cl}_{\\kk}({\\mathcal V}_\\mathrm{s} X)\\subseteq \\mathrm{cl}_{\\kk}(s+{\\mathcal V}_\\mathrm{s} X)\\subseteq \\mathrm{cl}_{\\kk}({\\mathcal V}_\\mathrm{s} X)$. Thus, $\\nu\\mapsto s+\\nu$ maps ${\\mathcal V}_\\mathcal{K} X$ into ${\\mathcal V}_\\mathcal{K} X$. It follows that for each valuation $\\mu\\in {\\mathcal V}_\\mathcal{K} X$, $s\\mapsto \\mu+ s$ maps ${\\mathcal V}_\\mathrm{s} X$ into ${\\mathcal V}_\\mathcal{K} X$. Using the fact that $\\mu+\\_: {\\mathcal V}_w X\\to {\\mathcal V}_w X$ is continuous, hence $\\mathcal{K}$-continuous, we conclude that $\\mu+\\nu\\in {\\mathcal V}_\\mathcal{K} X$ for any $\\mu, \\nu\\in {\\mathcal V}_\\mathcal{K} X$. Similar reasoning shows $r\\mu\\in {\\mathcal V}_\\mathcal{K}(X)$ for $r\\in \\mathbb{R}_+, \\mu\\in {\\mathcal V}_\\mathcal{K} X$, since $s\\mapsto r\\cdot s$ sends simple valuations to simple valuations for each $r\\in\\mathbb{R}_+$. 
\n\\end{proof}\n\nSince ${\\mathcal V}_\\mathcal{K} X$ has a canonical cone structure inherited from ${\\mathcal V}_w X$, the following statements make sense; when referring to the cone structure on ${\\mathcal V}_\\mathcal{K} X$, we always mean the cone structure inherited from ${\\mathcal V}_w X$.\n\n\n\\begin{corollary}\\label{llofvk}\n${\\mathcal V}_\\mathcal{K} X$ is a locally linear topological cone. \n\\end{corollary}\n\\begin{proof}\nStraightforward from the fact that ${\\mathcal V}_w X$ is locally linear ant that ${\\mathcal V}_\\mathcal{K} X$ is a subcone of ${\\mathcal V}_w X$.\n\\end{proof}\n\n\\begin{lemma}\\label{kpreslinear}\nLet $f\\colon {\\mathcal V}_\\mathrm{s} X\\to {\\mathcal V}_\\mathrm{s} Y$ be a continuous linear map, then $\\mathcal{K}_\\mathrm{c} (f)$ is a continuous linear map from ${\\mathcal V}_\\mathcal{K} X$ to ${\\mathcal V}_\\mathcal{K} Y$.\n\\end{lemma}\n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\n{\\mathcal V}_\\mathcal{K} X \\arrow[rr, \"\\mathcal{K}_\\mathrm{c}(f)\"] && {\\mathcal V}_\\mathcal{K} Y \\\\\n&&\\\\\n{\\mathcal V}_\\mathrm{s} X \\arrow[uu, \"\\eta_{{\\mathcal V}_\\mathrm{s} X}\"] \\arrow[rr, \"f\"] && {\\mathcal V}_\\mathrm{s} Y \\arrow[uu, \"\\eta_{{\\mathcal V}_\\mathrm{s} Y}\"]\n\\end{tikzcd}\n\\end{figure}\n\n\\begin{proof}\nWe denote $\\mathcal{K}_\\mathrm{c}(f)$ by $F$ and prove that $F$ is linear. Note that for a fixed simple valuation $s\\in{\\mathcal V}_\\mathcal{K} X$, the two maps $$F_{1}\\colon \\mu\\mapsto F(s) + F(\\mu)\\colon {\\mathcal V}_\\mathcal{K} X \\to {\\mathcal V}_\\mathcal{K} Y~\\text{and}~F_{2}\\colon\\mu \\mapsto F(s+\\mu)\\colon {\\mathcal V}_\\mathcal{K} X \\to {\\mathcal V}_\\mathcal{K} Y$$ are equal when $\\mu$ is a simple valuation, since $f$ is linear. Hence they are equal, as they both equal $\\mathcal{K}_\\mathrm{c}(g)$, where $g\\colon {\\mathcal V}_\\mathrm{s} X\\to {\\mathcal V}_\\mathrm{s} Y$ by $g(t) = f(s)+f(t)$ for each simple valuation $t$. Thus, the maps \n$$\\nu \\mapsto F(\\nu) + F(\\mu)\\colon {\\mathcal V}_\\mathcal{K} X\\to {\\mathcal V}_\\mathcal{K} Y~\\text{and}~\\nu \\mapsto F(\\nu+\\mu)\\colon {\\mathcal V}_\\mathcal{K} X\\to {\\mathcal V}_\\mathcal{K} Y$$ are equal since they are unique continuous extensions of $(s \\mapsto F(\\eta_{{\\mathcal V}_\\mathrm{s} X}(s))+F(\\mu) \\colon {\\mathcal V}_\\mathrm{s} X \\to {\\mathcal V}_\\mathcal{K} Y )$ and $(s \\mapsto F(\\eta_{{\\mathcal V}_\\mathrm{s} X}(s)+ \\mu) \\colon {\\mathcal V}_\\mathrm{s} X \\to {\\mathcal V}_\\mathcal{K} Y)$, respectively. 
With a similar reasoning we can prove that $F$ is homogeneous.\n\\end{proof}\n\n\\begin{proposition}\\label{dagger}\nFor topological spaces $X$ and $Y$, and for every continuous function $f\\colon X\\to {\\mathcal V}_\\mathcal{K} Y$, the map \n$$ f^\\dagger_\\mathcal{K} \\colon \\mu \\mapsto (U\\mapsto \\int_{x\\in X} f(x) (U)d\\mu )\\colon {\\mathcal V}_\\mathcal{K} X \\to {\\mathcal V}_\\mathcal{K} Y$$\nis well-defined and continuous.\n\\end{proposition}\n\\begin{proof}\nSince $f^\\dagger\\colon {\\mathcal V}_w X\\to {\\mathcal V}_w Y$ is continuous for any continuous map $f\\colon X\\to {\\mathcal V}_\\mathcal{K} Y$, it is sufficient to show that $f^\\dagger$ sends ${\\mathcal V}_\\mathcal{K} X$ into ${\\mathcal V}_\\mathcal{K} Y$.\n\nWe start with showing that $f^\\dagger$ sends ${\\mathcal V}_\\mathrm{s} X$ to ${\\mathcal V}_\\mathcal{K} Y$.\nTo this end, we let $\\mu$ be $\\Sigma_{i=1}^n r_i \\delta_{x_i}$ and perform the following computation for each open subset $U$ of $Y$:\n$$f^\\dagger (\\mu) (U) = \\int_{x\\in X} f(x) (U)d\\mu = \\Sigma_{i=1}^n r_i f(x_i)(U) = (\\Sigma_{i=1}^n r_i f(x_i)) (U).$$\nThis shows that $f^\\dagger (\\mu) = \\Sigma_{i=1}^n r_i f(x_i)$. Since $f(x_i)\\in {\\mathcal V}_\\mathcal{K} Y$ for $i=1, ..., n$, and ${\\mathcal V}_\\mathcal{K} Y$ is a subcone of ${\\mathcal V}_w Y$ by Proposition~\\ref{subcone}, we have $f^\\dagger (\\mu)=\\Sigma_{i=1}^n r_i f(x_i)$ is in ${\\mathcal V}_\\mathcal{K} Y$. \n\nSo we have that $f^\\dagger|_{{\\mathcal V}_\\mathrm{s} X}$, the restriction of $f^\\dagger$ on ${\\mathcal V}_\\mathrm{s} X$, is continuous from ${\\mathcal V}_\\mathrm{s} X$ to ${\\mathcal V}_\\mathcal{K} Y$. By the universal property of $\\mathcal{K}$-completion we know that $f^\\dagger|_{{\\mathcal V}_\\mathrm{s} X} $ has a unique extension $F$ from ${\\mathcal V}_\\mathcal{K} X$ to ${\\mathcal V}_\\mathcal{K} Y$. Since ${\\mathcal V}_\\mathcal{K} Y$ is a subspace of ${\\mathcal V}_w Y$, $F$ is also continuous from ${\\mathcal V}_\\mathcal{K} X$ to ${\\mathcal V}_w Y$. We note that $f^\\dagger|_{{\\mathcal V}_\\mathcal{K} X}$, the restriction of $f^\\dagger$ on ${\\mathcal V}_\\mathcal{K} X$, is also continuous from ${\\mathcal V}_\\mathcal{K} X$ to ${\\mathcal V}_w Y$, and moreover that $F$ and $f^\\dagger|_{{\\mathcal V}_\\mathcal{K} X}$ are equal on ${\\mathcal V}_\\mathrm{s} X$, so $F$ and $f^\\dagger|_{{\\mathcal V}_\\mathcal{K} X}$ are equal on ${\\mathcal V}_\\mathcal{K} X$ by Corollary~\\ref{densedetermined}. Now we conclude that $f^\\dagger$ sends ${\\mathcal V}_\\mathcal{K} X$ into ${\\mathcal V}_\\mathcal{K} Y$ because $F$ does. \n\\end{proof}\n\n\\begin{theorem}\n\\label{vkisamonad}\nFor each $\\text{{\\fontfamily{cmtt}\\selectfont K}}$-category $\\mathcal{K}$, ${\\mathcal V}_\\mathcal{K}$ is a monad on ${\\bf TOP_0}$. \n\\end{theorem}\n\\begin{proof}\nThe unit of ${\\mathcal V}_\\mathcal{K}$ sends each point $x\\in X$ to the Dirac mass $\\delta_x$ in ${\\mathcal V}_\\mathcal{K} X$, which we denote by $ { {\\delta^{k}}}_{X} \\colon X\\to {\\mathcal V}_\\mathcal{K} X\\colon x\\mapsto \\delta_{x}$. For each continuous function $f\\colon X\\to {\\mathcal V}_\\mathcal{K} Y$, we have an extension $f^\\dagger_\\mathcal{K}\\colon {\\mathcal V}_\\mathcal{K} X \\to {\\mathcal V}_\\mathcal{K} Y$ of $f$ guaranteed by Proposition~\\ref{dagger}. 
To verify the monad laws one then follows the proof of \\cite[Proposition 2.12]{goubault19}.\n\nAn easy consequence is that the multiplication of ${\\mathcal V}_\\mathcal{K}$ at $X$, in symbols ${\\beta^{k}}_{X}$, is given by $({\\mathrm{id}_{{\\mathcal V}_\\mathcal{K} X}})_{\\mathcal{K}}^{\\dagger}$, as defined in Proposition~\\ref{dagger}. \n\\end{proof}\n\n\\begin{remark}\nWhen $\\mathcal{K}$ is the category of sober spaces and continuous functions, the functor ${\\mathcal V}_\\mathcal{K}$ was shown to coincide with the \\emph{point-continuous valuation} functor $\\mathcal V_{\\mathrm p}$ by Heckmann \\cite{heckmann96}; that $\\mathcal V_{\\mathrm p}$ is a monad was shown in~\\cite{goubault19}.\n\\end{remark}\n\n\\begin{remark}\nWe summarise different monads and their respective units and multiplications that we have so far.\n\n\\begin{center}\n\\begin{tabular}{ |c|c|c| } \n \\hline\nmonads & unit & multiplication \\\\ \n \\hline\n ${\\mathcal V}_w$ & $ \\delta$ & $\\beta$ \\\\ \n \\hline\n ${\\mathcal V}_\\mathrm{s}$ & $ {\\delta^{\\s}}$ & $ {\\beta^{\\s}}$ \\\\ \n \\hline\n $\\mathcal{K}_\\mathrm{c}$ & $\\eta$ & $m$ \\\\ \n \\hline\n ${\\mathcal V}_\\mathcal{K}=\\mathcal{K}_\\mathrm{c}\\circ {\\mathcal V}_\\mathrm{s}$ & $ {\\delta^{k}}$ & ${\\beta^{k}}$ \\\\ \n \\hline\n\\end{tabular}\n\\end{center}\n\\end{remark}\n\nIn order to show that ${\\mathcal V}_\\mathcal{K}$ is a monad over ${\\bf TOP_0}$, one can alternatively show that there exists a distributive law of ${\\mathcal V}_\\mathrm{s}$ over $\\mathcal{K}_\\mathrm{c}$. For this, we recall the definition of distributive laws between monads and a result due to Heckmann. \n\n\\begin{definition}[Distributive law]\n\\label{distrilaw}\nLet $(T_{1}, \\eta_{1}, m_{1}), (T_{2}, \\eta_{2}, m_{2})$ be monads over category $\\bf C$. A distributive law of $T_{1}$ over $T_{2}$ is a natural transformation $\\lambda\\colon T_{1}T_{2}\\to T_{2}T_{1}$ such that the diagrams in Figure~\\ref{distributivelaws} commute. \n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\n & T_{1} \\arrow[dl, \"T_{1}\\eta_{2}\"'] \\arrow[dr, \"\\eta_{2}T_{1}\"] & \\\\\nT_{1}T_{2} \\arrow[rr, \"\\lambda\"] & & T_{2}T_{1}\\\\\n\\end{tikzcd}\n\\begin{tikzcd}\n & T_{2} \\arrow[dl, \"\\eta_{1}T_{2}\"'] \\arrow[dr, \"T_{2}\\eta_{1}\"] & \\\\\nT_{1}T_{2} \\arrow[rr, \"\\lambda\"] & & T_{2}T_{1}\\\\\n\\end{tikzcd}\n\\begin{tikzcd}\n T_{1}^{2}T_{2} \\arrow[r, \"T_{1}\\lambda\"] \\arrow[d, \"m_{1}T_{2}\"'] & T_{1}T_{2}T_{1} \\arrow[r, \"\\lambda T_{1}\"] & T_{2}T_{1}^{2} \\arrow[d, \"T_{2}m_{1}\"] \\\\\nT_{1}T_{2} \\arrow[rr, \"\\lambda\"] & & T_{2}T_{1}\\\\\n\\end{tikzcd}\n\\begin{tikzcd}\n T_{1}T_{2}^{2} \\arrow[r, \"\\lambda T_{2}\"] \\arrow[d, \"T_{1}m_{2}\"'] & T_{2}T_{1}T_{2} \\arrow[r, \"T_{2}\\lambda\"] & T_{2}^{2}T_{1} \\arrow[d, \"m_{2}T_{1}\"] \\\\\nT_{1}T_{2} \\arrow[rr, \"\\lambda\"] & & T_{2}T_{1}\n\\end{tikzcd}\n\\caption{\\label{distributivelaws}}\n\\end{figure}\n\\end{definition}\n\n\n\n\n\n\n\\begin{theorem}{\\rm \\cite[Theorem 6.7]{heckmann96}}\n\\label{theoremfreeeofvf}\n${\\mathcal V}_\\mathrm{s} X$ is the free weakly locally convex cone over $X$ in ${\\bf TOP_{0}}$. This means: for a $T_{0}$ topological space $X$, ${\\mathcal V}_\\mathrm{s} X$ is itself a weakly locally convex cone, and for every continuous function $f\\colon X\\to M$ from $X$ to a weakly locally convex cone $M$, there is a unique continuous linear function $\\overline f \\colon {\\mathcal V}_\\mathrm{s} X\\to M$ with $\\overline f\\circ {\\delta^{\\s}}_{X} = f$. 
\n\\end{theorem}\n\n\nTheorem~\\ref{theoremfreeeofvf} and Proposition~\\ref{subcone} allow us to define a distributive law of ${\\mathcal V}_\\mathrm{s}$ over $\\mathcal{K}_\\mathrm{c}$. Notice from Corollary~\\ref{llofvk} that for each space $X$, ${\\mathcal V}_\\mathcal{K} X$ is a locally linear cone, in particular, a weakly locally convex cone. Then by Theorem~\\ref{theoremfreeeofvf} the continuous map $\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})\\colon \\mathcal{K}_\\mathrm{c}(X) \\to {\\mathcal V}_\\mathcal{K} X$ has a unique continuous extension \n$\\overline{\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}\\colon {\\mathcal V}_\\mathrm{s} \\mathcal{K}_\\mathrm{c} (X)\\to {\\mathcal V}_\\mathcal{K} X$ that is linear, and that satisfies $\\overline{\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})} \\circ {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)} = \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})$ (See Figure~\\ref{existenceoflambda}). \n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\n{\\mathcal V}_\\mathrm{s} \\mathcal{K}_\\mathrm{c}(X) \\arrow[dr, \"\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})} \"] & \\\\\n\\mathcal{K}_\\mathrm{c}(X) \\arrow[u, \" {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)} \"] \\arrow[r, \"\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})\"'] & {\\mathcal V}_\\mathcal{K} X \\\\\nX \\arrow[u, \"\\eta_{X}\"] \\arrow[r, \" {\\delta^{\\s}}_X\"'] & {\\mathcal V}_\\mathrm{s} (X) \\arrow[u, \"\\eta_{{\\mathcal V}_\\mathrm{s} X}\"'] \n\\end{tikzcd}\n\\caption{\\label{existenceoflambda}}\n\\end{figure}\nWe claim that the collection of morphisms $\\overline{ \\mathcal{K}_\\mathrm{c}(\\delta_{X})}$ indexed by $X$ is the (unique) distributive law of ${\\mathcal V}_\\mathrm{s}$ over~$\\mathcal{K}_\\mathrm{c}$. \n\n\n\n\\begin{remark}\nOne may immediately notice that it would be more involved to show that there is a distributive law of ${\\mathcal V}_\\mathrm{s}$ over $\\mathcal{K}_\\mathrm{c}$, and then use that to prove ${\\mathcal V}_\\mathcal{K} = \\mathcal{K}_C\\circ {\\mathcal V}_\\mathrm{s}$ defines a monad, instead of appealing to Theorem~\\ref{vkisamonad}. However, by doing so, we obtain additional information that is useful in Section~\\ref{extensionsection}.\n\\end{remark}\n\n\n\n\\begin{theorem}\n\\label{existenceofdis}\nThe family $\\overline{\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}\\colon {\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}(X) \\to \\mathcal{K}_\\mathrm{c} ({\\mathcal V}_\\mathrm{s} X)$, for $X$ in $\\bf TOP_0$, of continuous linear maps form the (unique) distributive law of ${\\mathcal V}_\\mathrm{s}$ over $\\mathcal{K}_\\mathrm{c}$.\n\\end{theorem}\n\\begin{proof}\nWe first show that the maps $\\overline{\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}$ form a natural transformation from ${\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}$ to $\\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}$. That is, for each continuous map $f\\colon X\\to Y$, the equation \n\\begin{equation}\n \\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{Y})} \\circ {\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}(f) = \\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} f) \\circ \\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}. 
\\tag{$\\star$}\n \\end{equation}\nholds.\n\\[\n\\begin{tikzcd}\n{\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}(Y) \\arrow[rr, \"\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{Y})} \"] & & \\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} Y) \\\\\n& \\mathcal{K}_\\mathrm{c}(Y) \\arrow[ul, \" {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(Y)}\"] \\arrow[ur, \"{\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{Y})}\"'] &\\\\\n& \\mathcal{K}_\\mathrm{c}(X) \\arrow[dl, \" {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)}\"'] \\arrow[u, \"\\mathcal{K}_\\mathrm{c}(f)\"] \\arrow[dr, \"\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_X)\"] & \\\\\n {\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}(X) \\arrow[uuu, \"{\\mathcal V}_\\mathrm{s}{\\mathcal{K}_\\mathrm{c}(f)}\"] \\arrow[rr, \"\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}\"'] & & \\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} X) \\arrow[uuu, \"\\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} f)\"']\n\\end{tikzcd}\n\\]\nTo this end we perform the following: \n\\begin{align*}\n&\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{Y})} \\circ {\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}(f) \\circ {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c} (X)} \\\\ \n& = \\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{Y})}\\circ {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(Y)}\\circ \\mathcal{K}_\\mathrm{c}(f) & \\text{naturality of $ {\\delta^{\\s}}$ }\\\\\n& = \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{Y})\\circ \\mathcal{K}_\\mathrm{c}(f) & \\text{definition of $\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{Y})}$}\\\\\n& = \\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} f) \\circ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})& \\text{naturality of $ {\\delta^{\\s}}$ and functoriality of $ \\mathcal{K}_\\mathrm{c} $}\\\\\n& = \\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} f) \\circ \\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})} \\circ {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c} X}. &\\text{definition of $\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}$ }\n\\end{align*}\nNotice that both sides of the Equation~$(\\star)$ are continuous linear maps by Lemma~\\ref{kpreslinear} and the fact that ${\\mathcal V}_\\mathrm{s} f$ is linear for all continuous~$f$, by the freeness of \n${\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}(X)$ over $\\mathcal{K}_\\mathrm{c}(X)$ (Theorem~\\ref{theoremfreeeofvf}) we conclude that Equation~$(\\star)$ holds. 
For each $X$ we denote $\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}$ by $\\lambda_{X}$, and hence $\\lambda$ is indeed a natural transformation from ${\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}$ to $\\mathcal{K}_\\mathrm{c} {\\mathcal V}_\\mathrm{s}$.\n\\[\n\\begin{tikzcd}\n& & {\\mathcal V}_\\mathrm{s} X \\arrow[ddll, \"{\\mathcal V}_\\mathrm{s}\\eta_{X}\"'] \\arrow[dd, \"\\eta_{{\\mathcal V}_\\mathrm{s} X}\"] & & \\\\\n& & & & \\\\\n{\\mathcal V}_\\mathrm{s} \\mathcal{K}_\\mathrm{c}(X) \\arrow[rr, \"\\lambda_{X}\" description] & & \\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} X) & & X\\arrow[uull, \" {\\delta^{\\s}}_{X}\"'] \\arrow[ddll, \"\\eta_{X}\"]\\\\\n& & & & \\\\\n& & \\mathcal{K}_\\mathrm{c}(X)\\arrow[uull, \" {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)}\"] \\arrow[uu, \"\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})\"'] & &\n\\end{tikzcd}\n\\]\n\nTo verify the commutativity of the first digram of Definition~\\ref{distrilaw} in our setting, we show that \n$ \\lambda_{X}\\circ {\\mathcal V}_\\mathrm{s}\\eta_{X}= \\eta_{{\\mathcal V}_\\mathrm{s} X}$. We notice that\n\\begin{align*}\n&\\lambda_{X} \\circ {\\mathcal V}_\\mathrm{s}\\eta_{X} \\circ {\\delta^{\\s}}_{X} \\\\ \n& = \\lambda_{X} \\circ {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)}\\circ \\eta_{X} &\\text{ naturality of $ {\\delta^{\\s}}$ } \\\\\n& = \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X}) \\circ \\eta_{X} & \\text{definition of $\\lambda_{X}$}\\\\\n& = \\eta_{{\\mathcal V}_\\mathrm{s} X} \\circ {\\delta^{\\s}}_{X} & \\text{definition of $ \\mathcal{K}_\\mathrm{c} $}\n\\end{align*}\nand use the fact that $ \\lambda_{X}\\circ {\\mathcal V}_\\mathrm{s}\\eta_{X}$ and $ \\eta_{{\\mathcal V}_\\mathrm{s} X}$ are continuous linear maps and freeness of ${\\mathcal V}_\\mathrm{s}(X)$ over $X$ to conclude that \n\\begin{equation} \n \\lambda_{X}\\circ {\\mathcal V}_\\mathrm{s}\\eta_{X} = \\eta_{{\\mathcal V}_\\mathrm{s} X} \\tag{$\\bullet$}.\n\\end{equation}\nThe definition of~$\\lambda$ trivially implies the commutativity of the second digram in our setting, and the uniqueness of $\\overline{ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{X})}$ guarantees the uniqueness of such a distributive law $\\lambda$. \n\nWe now verify the commutativity of the third digram in Figure~\\ref{distributivelaws} in our setting. That is, we need to prove that\n \\begin{equation}\n \\mathcal{K}_\\mathrm{c} {\\beta^{\\s}} \\circ \\lambda_{{\\mathcal V}_\\mathrm{s}}\\circ {\\mathcal V}_\\mathrm{s} \\lambda = \\lambda\\circ {\\beta^{\\s}}_{\\mathcal{K}_\\mathrm{c}}.\\tag{$\\ast$}\n \\end{equation}\nTo this end, we notice that ${\\mathcal V}_\\mathcal{K} X$ is a weakly locally convex cone by Corollary~\\ref{llofvk}, and it then follows from Theorem~\\ref{theoremfreeeofvf} that there exists a unique continuous linear map $h_{X}\\colon {\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} X)\\to {\\mathcal V}_\\mathcal{K} X$ such that $h_{X}\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathcal{K} X} = \\mathrm{id}_{{\\mathcal V}_\\mathcal{K} X}$. 
We claim that \n\\begin{equation}\n\\lambda_{X}\\circ {\\beta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)} = h_{X}\\circ {\\mathcal V}_\\mathrm{s}\\lambda_{X} \\tag{$1$}\n\\end{equation}\nas illustrated in the following:\n\n\\[\n\\begin{tikzcd}\n{\\mathcal V}_\\mathrm{s}^{2} \\mathcal{K}_\\mathrm{c} \\arrow[rr, \"{\\mathcal V}_\\mathrm{s} \\lambda\"] \\arrow[ddd, \" {\\beta^{\\s}}_{\\mathcal{K}_\\mathrm{c}}\"'] & & {\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s} \\arrow[rr, \"\\lambda_{{\\mathcal V}_\\mathrm{s}}\"] \\arrow[ddd, bend right = 25 , \"h\" description] & & \\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}^{2} \\arrow[ddd, \"\\mathcal{K}_\\mathrm{c} {\\beta^{\\s}}\"] \\\\\n &&&&\\\\\n &&&&\\\\ \n{\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c} \\arrow[rr, \"\\lambda\"] && \\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s} \\arrow[rr, \"\\mathrm{id}_{\\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}}\", equal] \\arrow[uuurr, \"\\mathcal{K}_\\mathrm{c} {\\delta^{\\s}}_{{\\mathcal V}_\\mathrm{s}} \" description] \\arrow[uuu, bend right = 25, \" {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}}\"' description ] && \\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}\n\\end{tikzcd}\n\\]\n For this, we take any simple valuation $\\Sigma_{i=1}^{n}r_{i}\\delta_{\\mu_{i}}$ in ${\\mathcal V}_\\mathrm{s}^{2}\\mathcal{K}_\\mathrm{c}(X)$, where for each $i$, $\\mu_{i}$ is some simple valuation on $\\mathcal{K}_\\mathrm{c}(X)$, and perform the following: \n\\begin{align*}\n& h_{X}\\circ {\\mathcal V}_\\mathrm{s}\\lambda_{X}(\\Sigma_{i=1}^{n}r_{i}\\delta_{\\mu_{i}})\\\\\n& =h_{X}(\\Sigma_{i=1}^{n}r_{i}\\delta_{\\lambda_{X}(\\mu_{i}) } ) & \\text{actions of ${\\mathcal V}_\\mathrm{s}\\lambda_{X}$} \\\\\n& = \\Sigma_{i=1}^{n}r_{i} h_{X}(\\delta_{\\lambda_{X}(\\mu_{i})}) & \\text{$h_{X}$ is linear}\\\\\n& = \\Sigma_{i=1}^{n}r_{i} \\lambda_{X}( \\mu_{i}) & h_{X}\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathcal{K} X} = \\mathrm{id}_{{\\mathcal V}_\\mathcal{K} X}\\\\\n& = \\lambda_{X}(\\Sigma_{i=1}^{n} r_{i}\\mu_{i} ) &\\text{ $\\lambda_{X}$ is linear } \\\\\n& = \\lambda_{X}\\circ {\\beta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)} (\\Sigma_{i=1}^{n}r_{i}\\delta_{\\mu_{i}})& \\text{definition of $ {\\beta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(X)}$}\n\\end{align*}\nNext, we show that \n\\begin{equation}\n\\mathcal{K}_\\mathrm{c}( {\\beta^{\\s}}_{X}) \\circ\\lambda_{{\\mathcal V}_\\mathrm{s} X } = h_{X}. \\tag{$2$}\n\\end{equation} \nBy Lemma~\\ref{kpreslinear} both sides of this equation are continuous linear maps, so we only need to show that \n$$\\mathcal{K}_\\mathrm{c}( {\\beta^{\\s}}_{X}) \\circ\\lambda_{{\\mathcal V}_\\mathrm{s} X}\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathcal{K} X} = h_{X}\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathcal{K} X}$$ \nand conclude with Theorem~\\ref{theoremfreeeofvf}. Indeed, the right side of the equation is just $\\mathrm{id}_{{\\mathcal V}_\\mathcal{K} X}$ by the definition of $h_{X}$. For the left side, one notices that $\\mathcal{K}_\\mathrm{c}( {\\beta^{\\s}}_{X}) \\circ\\lambda_{{\\mathcal V}_\\mathrm{s} X}\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathcal{K} X} =\\mathcal{K}_\\mathrm{c}( {\\beta^{\\s}}_{X})\\circ \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{{\\mathcal V}_\\mathrm{s} X})$ by definition of $\\lambda_{{\\mathcal V}_\\mathrm{s} X}$. 
But $\\mathcal{K}_C( {\\beta^{\\s}}_X)\\circ\\mathcal{K}_C( {\\delta^{\\s}}_{{\\mathcal V}_\\mathrm{s} X}) = \\mathcal{K}_C( {\\beta^{\\s}}_X\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathrm{s} X})$ as $\\mathcal{K}_C$ is a functor, and $ {\\beta^{\\s}}_X\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathrm{s} X} = \\mathrm{id}_{{\\mathcal V}_\\mathrm{s} X}$, so $\\mathcal{K}_\\mathrm{c}( {\\beta^{\\s}}_{X}\\circ {\\delta^{\\s}}_{{\\mathcal V}_\\mathcal{K} X}) = \\mathcal{K}_\\mathrm{c}(\\mathrm{id}_{{\\mathcal V}_\\mathrm{s} X}) = \\mathrm{id}_{{\\mathcal V}_\\mathcal{K} X}$, again since $\\mathcal{K}_\\mathrm{c}$ is a functor. Then Equation~$(\\ast)$ follows directly from Equations~$(1)$ and $(2)$. \n\n\n\\[\n\\begin{tikzcd}\n{\\mathcal V}_\\mathrm{s} \\mathcal{K}_\\mathrm{c}^{2} \\arrow[rr, \"\\lambda_{\\mathcal{K}_\\mathrm{c}}\"] \\arrow[ddd, \"{\\mathcal V}_\\mathrm{s}{m}\", bend left = 20] & & \\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c} \\arrow[rr, \"\\mathcal{K}_\\mathrm{c} \\lambda\"] & & \\mathcal{K}_\\mathrm{c}^{2}{\\mathcal V}_\\mathrm{s} \\arrow[ddd, \"m_{{\\mathcal V}_\\mathrm{s}}\", bend left =20 ] \\\\\n &&&&\\\\\n &&&&\\\\ \n{\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c} \\arrow[rr, \"\\lambda\"] \\arrow[uuu, \"{\\mathcal V}_\\mathrm{s}{\\mathcal{K}_\\mathrm{c} \\eta}\", bend left =20] && \\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s} \\arrow[rr, \"\\mathrm{id}_{\\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}}\", equal] \\arrow[uuu, \"\\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}\\eta\"'] && \\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s} \\arrow[uuu, \"{\\mathcal{K}_\\mathrm{c}\\eta_{{\\mathcal V}_\\mathrm{s}}}\", bend left =20]\n\\end{tikzcd}\n\\]\n\nFinally, to show the commutativity of the last diagram of the distributive law in our setting, we first notice that ${\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}\\eta$ is inverse to $\\vsm$ and that $\\mathcal{K}_\\mathrm{c}\\eta_{{\\mathcal V}_\\mathrm{s}} $ is inverse to $m_{{\\mathcal V}_\\mathrm{s}}$ since $\\mathcal{K}_\\mathrm{c}$ is idempotent. Then we only need to show that \n\\begin{equation}\n\\mathcal{K}_\\mathrm{c}\\lambda\\circ \\lambda_{\\mathcal{K}_\\mathrm{c}}\\circ{\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}\\eta = \\mathcal{K}_\\mathrm{c}\\eta_{{\\mathcal V}_\\mathrm{s}}\\circ \\lambda. \\tag{$**$}\n\\end{equation} To show Equation~$(**)$ holds, it suffices to prove that $\\lambda_{\\mathcal{K}_\\mathrm{c}} \\circ {\\mathcal V}_\\mathrm{s}\\mathcal{K}_\\mathrm{c}\\eta = \\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}\\eta \\circ \\lambda$ and that $\\mathcal{K}_\\mathrm{c}\\lambda \\circ\\mathcal{K}_\\mathrm{c}{\\mathcal V}_\\mathrm{s}\\eta =\\mathcal{K}_\\mathrm{c}\\eta_{{\\mathcal V}_\\mathrm{s}}$. \nThese two equations hold trivially; the first one follows from the naturality of~$\\lambda$, and the second follows from Equation~($\\bullet$), which we proved above, and the functoriality of~$\\mathcal{K}_\\mathrm{c}$.\n\\end{proof}\n\nThe following is then a straightforward application of Beck's theorem of composing monads~\\cite{beck69}, knowing that $\\lambda$ is a distributive law of ${\\mathcal V}_\\mathrm{s}$ over~$\\mathcal{K}_{\\mathrm c}$. 
\n\n\\begin{corollary}\nThe functor ${\\mathcal V}_\\mathcal{K} = \\mathcal{K}_\\mathrm{c}\\circ {\\mathcal V}_\\mathrm{s}$ is a monad whose unit is $\\eta_{{\\mathcal V}_\\mathrm{s}} {\\delta^{\\s}}$, and whose multiplication is $m_{{\\mathcal V}_\\mathrm{s}}\\circ \\mathcal{K}_\\mathrm{c}^{2} {\\beta^{\\s}} \\circ \\mathcal{K}_\\mathrm{c}\\lambda_{{\\mathcal V}_\\mathrm{s}} = \\mathcal{K}_\\mathrm{c} {\\beta^{\\s}} \\circ m_{{\\mathcal V}_\\mathrm{s}^{2}} \\circ \\mathcal{K}_\\mathrm{c}\\lambda_{{\\mathcal V}_\\mathrm{s}}$. \\hfill $\\Box$\n\\end{corollary}\n\n\n\n\n\\section{The Eilenberg-Moore algebras of ${\\mathcal V}_\\mathcal{K}$}\n\nRecall that an \\emph{(Eilenberg-Moore) algebra of a monad} $\\mathcal T$ over category ${\\bf C}$ is a pair $(A, \\alpha)$, where $A$ is an object in ${\\bf C}$ and $\\alpha_A\\colon \\mathcal T(A)\\to A$ is a morphism of~${\\bf C}$, called the \\emph{structure map}, such that $\\alpha_A \\circ \\eta_A=\\mathrm{id}_A$ and $\\alpha_A\\circ\\mu_A=\\alpha_A\\circ \\mathcal T\\alpha_A$. We often simply call such a pair $(A, \\alpha_A)$ a $\\mathcal T$-algebra. Given $\\mathcal T$-algebras $(A, \\alpha_A)$ and $(B, \\alpha_B)$, a morphism $f\\colon A\\to B$ of~${\\bf C}$ is a \\emph{$\\mathcal T$-algebra morphism} if $f\\circ \\alpha_A=\\mathcal Tf\\circ \\alpha_B$. \n\nIn order to summarise what the ${\\mathcal V}_\\mathrm{s}$-algebras and ${\\mathcal V}_\\mathrm{s}$-algebra morphisms are, we recall the definition of a barycentre of a valuation.\n\n\n\\begin{definition}\n \\label{defn:bary:choquet}\n Let $C$ be a semitopological cone, and $\\nu$ be a continuous\n valuation on $C$. A \\emph{barycentre} of $\\nu$ is any point\n $b_\\nu \\in C$ such that, for every linear lower semicontinuous map\n $\\Lambda \\colon C \\to \\overline{\\mathbb R}_+$,\n $\\Lambda (b_\\nu) = \\int \\Lambda ~d \\nu$.\n\\end{definition}\n\n\\begin{example}\nLet $X$ be a semitopological cone and $\\Sigma_{i=1}^n r_i\\delta_{x_i}$ be a simple valuation on $X$. Then $\\Sigma_{i=1}^n r_i {x_i}$ is a barycentre of $\\Sigma_{i=1}^n r_i\\delta_{x_i}$. \n\\end{example}\n\n\n\n\\begin{theorem}{\\rm \\cite[Theorem 5.9, Theorem 5.10]{goubault19}}\n\\label{theoremvf}\nLet $X$ be a $T_0$ space, and $\\alpha_X: {\\mathcal V}_\\mathrm{s} X\\to X$ be a continuous map.\n\\begin{enumerate}\n\\item \nIf $(X, \\alpha_X)$ is a ${\\mathcal V}_\\mathrm{s}$-algebra, then $X$ is a weakly locally convex topological cone: For $x, y\\in X$ and $r\\in \\mathbb R_+$, $x+y$ is defined as $\\alpha_X(\\delta_x+\\delta_y)$ and $r\\cdot x$ is defined as $\\alpha_X(r\\delta_x)$. The map $\\alpha_X$ is then the barycentre map which sends each simple valuation $\\Sigma_{i=1}^n r_i \\delta_{x_i}$ to its barycentre $\\Sigma_{i=1}^n r_i{x_i}$.\n\\item\nConversely, for every weakly locally convex topological cone~$C$, there exists a (unique) continuous linear map~$\\alpha_C$ from ${\\mathcal V}_\\mathrm{s} C$ to $C$, sending each simple valuation $\\Sigma_{i=1}^n r_i \\delta_{x_i}$ to its barycentre $\\Sigma_{i=1}^n r_i{x_i}$, and the pair $(C, \\alpha_C)$ is a ${\\mathcal V}_\\mathrm{s}$-algebra. \n\\item\nMoreover, ${\\mathcal V}_\\mathrm{s}$-morphisms are precisely the continuous linear maps between ${\\mathcal V}_\\mathrm{s}$-algebras. \n\\end{enumerate}\n\\end{theorem}\n\n\n\n\n\nFrom Corollary~\\ref{llofvk} we know that ${\\mathcal V}_\\mathcal{K} X$ is a locally linear topological cone for each $T_0$ space $X$. 
Then the following proposition is not hard to obtain.\n\n\\begin{proposition}\\label{prop:vkalgcones}\nLet $(X, \\rho_X)$ be a ${\\mathcal V}_\\mathcal{K}$-algebra. Then $X$ is a weakly locally convex topological cone with $+$ defined by $x + y = \\rho_X(\\delta_x + \\delta_y)$, and scalar multiplication $\\cdot$ defined by $r \\cdot x = \\rho_X(r \\delta_x)$ for $r \\in \\mathbb R_+$ and $x, y\\in X$. \n\nMoreover $X$ is also a $\\mathcal{K}$-space, and the structure map $\\rho_{X}$ maps each valuation $\\mu\\in {\\mathcal V}_\\mathcal{K} X$ to one of its barycentres. \n \\end{proposition}\n\\begin{proof}\nThe proof that $X$ is a topological cone is similar to that of Lemma~4.6 and Proposition~4.9 in \\cite{goubault19}. In particular, we know that $\\rho_{X}$ is a continuous linear retraction since it is a structure map. Hence $X$ is a weakly locally convex cone by Lemma~\\ref{retractionofweaklylc}. Since $\\rho_X$ is a retraction and the category~$\\mathcal{K}$ is stable under retractions by Corollary~\\ref{retracts:of:kspace}, we conclude that $X$ is a $\\mathcal{K}$-space.\nWe next prove that $\\rho_{X}$ sends each valuation ${\\mathcal V}_\\mathcal{K} X$ to one of its barycentres. Assume that $\\Lambda\\colon X\\to \\overline{\\mathbb R}_+$ is a lower semicontinuous linear map. We consider two maps from ${\\mathcal V}_\\mathcal{K} X$ to $\\overline{\\mathbb R}_+$: $\\mu\\mapsto \\Lambda(\\rho_{X}(\\mu))$ and $\\mu\\mapsto \\int \\Lambda~d\\mu$. It is straightforward to see that these two maps are continuous and they agree on ${\\mathcal V}_\\mathrm{s} X$, hence they also agree on ${\\mathcal V}_\\mathcal{K} X$ by Corollary~\\ref{densedetermined}. So we have $\\Lambda\\circ\\rho_{X}= \\int \\Lambda d\\_$. \n\\end{proof}\n\nWe call a topological cone $C$ which is also a $\\mathcal{K}$-space a \\emph{$\\mathcal{K}$-cone}. Proposition~\\ref{prop:vkalgcones} shows that every ${\\mathcal V}_\\mathcal{K}$-algebra is a weakly locally convex $\\mathcal{K}$-cone. The following proposition shows the converse also holds. That is, each weakly locally convex $\\mathcal{K}$-cone is the underlying space of a ${\\mathcal V}_\\mathcal{K}$-algebra. \n\n\n\\begin{proposition}\n\\label{whatbetadoes}\nFor every weakly locally convex topological $\\mathcal{K}$-cone~$C$, there exists a (unique) continuous linear map~$\\rho_C$ from ${\\mathcal V}_\\mathcal{K} C$ to $C$, sending each continuous valuation in ${\\mathcal V}_\\mathcal{K} C$ to one of its barycentres, in particular, sending each simple valuation $\\Sigma_{i=1}^n r_i \\delta_{x_i}$ to $\\Sigma_{i=1}^n r_i{x_i}$. Moreover, the pair $(C, \\rho_C)$ is a ${\\mathcal V}_\\mathcal{K}$-algebra. \n\\end{proposition}\n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd}\n{\\mathcal V}_\\mathcal{K} C= \\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} C) \\arrow[dr, \"\\rho_{C}=\\overline{\\alpha_{C}}\"] \\\\\n {\\mathcal V}_\\mathrm{s} C \\arrow[r, \"\\alpha_{C}\"'] \\arrow[u, hook, \"\\eta_{{\\mathcal V}_\\mathrm{s} C}\"] & C \n\\end{tikzcd}\n\\end{figure}\n\\begin{proof}\nSince $C$ is a weakly locally convex $\\mathcal{K}$-cone, it is in particular a ${\\mathcal V}_\\mathrm{s}$-algebra, hence from Theorem~\\ref{theoremvf} we know that the (unique) map \n$\\alpha_{C}\\colon {\\mathcal V}_\\mathrm{s} C\\to C$ that takes each simple valuation $\\Sigma_{i=1}^n r_i \\delta_{x_i}$ to its barycentre $\\Sigma_{i=1}^n r_i{x_i}$ is continuous and linear. 
Since $\\mathcal{K}_\\mathrm{c}$ is a reflector and $C$ is a $\\mathcal{K}$-space, the map $\\alpha_{C}$ has a unique continuous extension $\\overline{\\alpha_{C}}$ from ${\\mathcal V}_\\mathcal{K} C$ to $C$, which we claim is the desired~$\\rho_{C}$. \n\nThe map $\\rho_{C} = \\overline{\\alpha_{C}}$ is obviously continuous. To see that it is linear, for each continuous valuation $\\mu\\in {\\mathcal V}_\\mathcal{K} C$ we define $$F_{\\mu}\\colon {\\mathcal V}_\\mathcal{K} C\\to C\\colon \\nu\\mapsto \\rho_{C}(\\nu)+\\rho_{C}(\\mu)$$ and $$G_{\\mu}\\colon {\\mathcal V}_\\mathcal{K} C\\to C\\colon \\nu\\mapsto \\rho_{C}(\\nu+\\mu).$$\nIt is easy to see that both $F_{\\mu}$ and $G_{\\mu}$ are continuous for each $\\mu$. Notice that $\\eta_{{\\mathcal V}_\\mathrm{s} C}$ is the space embedding of ${\\mathcal V}_\\mathrm{s} C$ into ${\\mathcal V}_\\mathcal{K} C$, so for simple valuations $s$ and $t$ we have:\n\\begin{align*}\nF_{s}(t) &= \\rho_{C}(s)+\\rho_{C}(t) &\\text{definition of $F_{s}$} \\\\\n&= \\alpha_{C}(s)+\\alpha_{C}(t) & \\text{$\\rho_{C}$ is an extension of $\\alpha_{C}$ } \\\\\n&= \\alpha_{C}(s+t) &\\text{$\\alpha_{C}$ is linear}\\\\\n&= \\rho_{C}(s+t) & \\text{$\\rho_{C}$ is an extension of $\\alpha_{C}$}\\\\ \n&=G_{s}(t) & \\text{definition of $G_{s}$} \n\\end{align*}\nSo we have that $F_{s}$ and $G_{s}$ are equal on ${\\mathcal V}_\\mathrm{s} C$ hence they are equal on ${\\mathcal V}_\\mathcal{K} C$ by Corollary~\\ref{densedetermined}. This means for each continuous valuation $\\nu\\in {\\mathcal V}_\\mathcal{K} C$, $F_{s}(\\nu)=G_{s}(\\nu)$. Notice that $F_{s}(\\nu) = F_{\\nu}(s)$ and $G_{s}(\\nu) = G_{\\nu}(s)$ for any simple valuation $s$ and continuous valuation $\\nu\\in {\\mathcal V}_\\mathcal{K} C$. We know that $F_{\\nu}$ are $G_{\\nu}$ are equal on ${\\mathcal V}_\\mathrm{s}(C)$ for each $\\nu$. Hence by using Corollary~\\ref{densedetermined} again, they are equal on ${\\mathcal V}_\\mathcal{K} C$. That is, for each $\\nu, \\mu\\in {\\mathcal V}_\\mathcal{K} C$, $F_{\\nu}(\\mu)=G_{\\nu}(\\mu)$, which is just equivalent to saying that $\\rho_{C}(\\nu)+ \\rho_{C}(\\mu) = \\rho_{C}(\\nu+\\mu)$.\n\nTo prove that $\\rho_{C}(r\\mu) = r\\rho_{C}(\\mu)$ for $r\\in \\mathbb R_+, \\mu\\in {\\mathcal V}_\\mathcal{K} C$, one notes that both sides of the equation are continuous in $\\mu$, and they are equal on simple valuations. Hence they are equal.\n\nNow we prove that $\\rho_{C}$ sends continuous valuations in ${\\mathcal V}_\\mathcal{K} C$ to one of their barycentres. Assume that $\\Lambda\\colon C\\to \\overline{\\mathbb R}_+$ is a lower semicontinuous linear map. We define \n$$ H\\colon {\\mathcal V}_\\mathcal{K} C \\to \\overline{\\mathbb R}_+ \\colon \\mu\\mapsto \\int f~d\\mu$$\nand \n$$ J\\colon {\\mathcal V}_\\mathcal{K} C \\to \\overline{\\mathbb R}_+ \\colon \\mu\\mapsto f(\\rho_{C}(\\mu)).$$\nIt is straightforward to show $H$ and $J$ are continuous, and they are equal on simple valuations because $\\rho_{C}$ is a linear extension of $\\alpha_{C}$. Hence they are equal on ${\\mathcal V}_\\mathcal{K} C$ by~Corollary~\\ref{densedetermined}.\n\nThe uniqueness of $\\rho_{C}$ follows from the uniqueness of $\\alpha_{C}$.\n\nFinally, we prove that the pair $(C, \\rho_{C})$ is a ${\\mathcal V}_\\mathcal{K}$-algebra. One easily sees that $\\rho_{C}\\circ {\\delta^{k}}_{ C} = \\mathrm{id}_{C}$ since $\\rho_{C}$ extends $\\alpha_{C}$. 
To prove that \n$\\rho_{C}\\circ {\\mathcal V}_\\mathcal{K} \\rho_{C} = \\rho_{C}\\circ {\\beta^{k}}_{C}$ one notes that both sides of the equation are linear continuous maps, hence we only need to prove that $\\rho_{C}\\circ {\\mathcal V}_\\mathcal{K} \\rho_{C}(\\delta_{\\nu}) = \\rho_{C}\\circ {\\beta^{k}}_{C} (\\delta_{\\nu})$ for every $\\nu \\in {\\mathcal V}_\\mathcal{K} C$. Indeed we know that these are equal since they both are just $\\rho_{C}(\\nu)$ via easy calculations. \n\\end{proof}\n\n\n\\begin{proposition}\nLet $(C_{1}, \\rho_{1}), (C_{2}, \\rho_{2})$ be two ${\\mathcal V}_\\mathcal{K}$-algebras. Then a continuous function $f\\colon C_{1}\\to C_{2}$ is a ${\\mathcal V}_\\mathcal{K}$-algebra morphism if and only if it is linear. \n\\end{proposition}\n\\begin{proof}\nFor the ``if'' direction, we show that for every continuous linear map $f$, $\\rho_{2}\\circ {\\mathcal V}_\\mathcal{K} f = f\\circ \\rho_{1}$. By Corollary~\\ref{densedetermined} we only need to show that the maps are equal on simple valuations. Indeed, \n\\begin{align*}\n&\\rho_{2}\\circ {\\mathcal V}_\\mathcal{K} f(\\Sigma_{i=1}^{n}r_{i}\\delta_{x_{i}}) \\\\\n&= \\rho_{2}( \\Sigma_{i=1}^{n}r_{i}\\delta_{f(x_{i})}) &\\text{${\\mathcal V}_\\mathcal{K} f = {\\mathcal V}_w f$ on ${\\mathcal V}_\\mathcal{K} C_{1}$}\\\\\n&= \\Sigma_{i=1}^{n}r_{i}{f(x_{i})} & \\text{Proposition~\\ref{whatbetadoes}}\\\\\n&= f(\\Sigma_{i=1}^{n}r_{i}{x_{i}} ) & \\text{$f$ is linear}\\\\\n&= f\\circ \\rho_{1}( \\Sigma_{i=1}^{n}r_{i}\\delta_{x_{i}} ). &\\text{Proposition~\\ref{whatbetadoes} }\n\\end{align*}\n\nFor the ``only if'' direction, we take $x_{i}\\in C_{1}, r_{i}\\in \\mathrm{R_{+}}, i=1, ...,n$ and calculate \n\\begin{align*}\n& f(\\Sigma_{i=1}^{n}r_{i}{x_{i}}) \\\\\n&= f(\\rho_{1}( \\Sigma_{i=1}^{n}r_{i}{\\delta_{x_{i}}})) & \\text{Proposition~\\ref{whatbetadoes}}\\\\\n&= \\rho_{2}({\\mathcal V}_\\mathcal{K} f ( \\Sigma_{i=1}^{n}r_{i}{\\delta_{x_{i}}})) & \\text{$f$ is a ${\\mathcal V}_\\mathcal{K}$-algebra morphism }\\\\\n&= \\rho_{2}( \\Sigma_{i=1}^{n}r_{i}\\delta_{f(x_{i})} ) & \\text{${\\mathcal V}_\\mathcal{K} f$ is linear}\\\\\n&= \\Sigma_{i=1}^{n}r_{i}f(x_{i}) . &\\text{Proposition~\\ref{whatbetadoes} }\n\\end{align*}\n\\end{proof}\n\nSummarising the above results, we have the following.\n\n\\begin{theorem}\nThe category ${\\bf TOP_{0}^{{\\mathcal V}_\\mathcal{K}}}$ of ${\\mathcal V}_\\mathcal{K}$-algebras and ${\\mathcal V}_\\mathcal{K}$-algebra morphisms is isomorphic to the category of weakly locally convex $\\mathcal{K}$-cones and continuous linear maps between them. \\hfill $\\Box$\n\\end{theorem}\n\n\nThe following theorem which generalises \\cite[Theorem 6.8]{heckmann96} is then immediate, and holds for general categorical reasons.\n\\begin{theorem}\n${\\mathcal V}_\\mathcal{K} X$ is the free weakly locally convex $\\mathcal{K}$-cone over $X$. That is, ${\\mathcal V}_\\mathcal{K} X$ is a weakly locally convex $\\mathcal{K}$-cone, and for every continuous function $f\\colon X\\to C$ from $X$ to a weakly locally convex $\\mathcal{K}$-cone $C$, there is a unique continuous linear map $\\overline f \\colon {\\mathcal V}_\\mathcal{K} X\\to C$ with $\\overline f(\\delta_{x})=f(x)$ for all $x\\in X$.\\hfill $\\Box$\n\\end{theorem}\n\n\n\\section{Extensions of lower semicontinuous linear maps}\n\\label{extensionsection}\n\nIn this section, we show that the cone structure on each weakly locally convex cone $C$ can be extended to $\\mathcal{K}_\\mathrm{c}(C)$ making the latter a weakly locally convex $\\mathcal{K}$-cone. 
Moreover, each lower semicontinuous linear map $\\Lambda\\colon C\\to \\overline{\\mathbb R}_+$ admits a unique lower semicontinuous linear extension $\\overline \\Lambda\\colon \\mathcal{K}_\\mathrm{c}(C)\\to \\overline{\\mathbb R}_+$ such that $\\overline\\Lambda\\circ \\eta_{C}=\\Lambda$.\n\n\n\n\\begin{proposition}\n\\label{kcisacone}\nLet $C$ be a weakly locally convex topological cone, $\\alpha_{C}$ be the structure map of the ${\\mathcal V}_\\mathrm{s}$-algebra at $C$, and $ {\\delta^{\\s}}_{C}$ be the unit of ${\\mathcal V}_\\mathrm{s}$ at $C$. Then $\\mathcal{K}_\\mathrm{c}(C)$ is a weakly locally convex topological $\\mathcal{K}$-cone: For $x, y\\in \\mathcal{K}_\\mathrm{c}(C)$, $r\\in \\mathbb R_{+}$, $x+y$ is defined as $\\mathcal{K}_\\mathrm{c}(\\alpha_{C})(\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(x)+\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(y))$ and $r\\cdot x$ is defined as $\\mathcal{K}_\\mathrm{c}(\\alpha_{C})(r\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(x))$. Moreover, the topological embedding $\\eta_{C}$ of $C$ into $\\mathcal{K}_\\mathrm{c}(C)$ is linear with respect to this cone structure on $\\mathcal{K}_\\mathrm{c}(C)$.\n\\end{proposition}\n\\begin{figure}[h]\n\\centering\n\\begin{tikzcd} \n{\\mathcal V}_\\mathrm{s} \\mathcal{K}_\\mathrm{c}(C) \\arrow[rr, \" \\lambda_{C}\"] &&\\mathcal{K}_\\mathrm{c}({\\mathcal V}_\\mathrm{s} C) \\arrow[ddd, \"\\mathcal{K}_\\mathrm{c}(\\alpha_{C})\", bend left = 20] && {\\mathcal V}_\\mathrm{s} C \\arrow[ll, \"\\eta_{{\\mathcal V}_\\mathrm{s} C} \"] \\arrow[ddd, \"\\alpha_{C}\", bend left = 20] \\\\\n&\\\\\n&\\\\\n&&\\mathcal{K}_\\mathrm{c}(C) \\arrow[uuu, \"\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})\" description , bend left =20] \\arrow[uuull, \" {\\delta^{\\s}}_{\\mathcal{K}_\\mathrm{c}(C)} \"] && C\\arrow[ll, \"\\eta_{C}\"] \\arrow[uuu, \" {\\delta^{\\s}}\", bend left = 20]\n\\end{tikzcd}\n\\end{figure} \n\\begin{proof}\nTheorem~\\ref{existenceofdis} implies there is a unique distributive law $\\lambda$, consisting of linear maps, of ${\\mathcal V}_\\mathrm{s}$ over $\\mathcal{K}_\\mathrm{c}$, and then Beck's lifting theorem states that $(\\mathcal{K}_\\mathrm{c}(C), \\mathcal{K}_\\mathrm{c}(\\alpha_{C})\\circ \\lambda_{C})$ is a ${\\mathcal V}_\\mathrm{s}$-algebra (see the Proposition on Page 122 in \\cite{beck69}). Then, Theorem~\\ref{theoremvf} implies $\\mathcal{K}_\\mathrm{c}(C)$ is a weakly locally convex cone in which $x+y$ is defined as \n$\\mathcal{K}_\\mathrm{c}(\\alpha_{C})( \\lambda_{C} ( \\delta_{x}+\\delta_{y}) )$ and $r x $ is defined as $\\mathcal{K}_\\mathrm{c}(\\alpha_{C})(\\lambda_{C} ( r \\delta_{x}))$. Moreover, we note that $\\lambda_{C}$ is linear and $\\lambda_{C}(\\delta_{x}) = \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(x) $ for all $x\\in \\mathcal{K}_\\mathrm{c}(C)$, so $\\mathcal{K}_\\mathrm{c}(\\alpha_{C})( \\lambda_{C} ( \\delta_{x}+\\delta_{y}) ) = \\mathcal{K}_\\mathrm{c}(\\alpha_{C})(\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(x)+\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(y)) $ and $ \\mathcal{K}_\\mathrm{c}(\\alpha_{C})(\\lambda_{C} ( r \\delta_{x})) = \\mathcal{K}_\\mathrm{c}(\\alpha_{C})(r\\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(x))$. 
\n\nTo prove that $\\eta_{C}$ is linear, let $a, b\\in C$, and calculate:\n\\begin{align*}\n&\\eta_{C}( a + b ) \\\\\n&= \\eta_{C} (\\alpha_{C}(\\delta_{a}+\\delta_{b}) ) & \\text{ $\\alpha_{C}$ is a structure map } \\\\ \n&= \\mathcal{K}_\\mathrm{c}(\\alpha_{C})( \\eta_{{\\mathcal V}_\\mathrm{s} C} (\\delta_{a}+ \\delta_{b}) ) &\\text{ definition of $ \\mathcal{K}_\\mathrm{c}$ }\\\\\n&= \\mathcal{K}_\\mathrm{c}(\\alpha_{C}) ( \\eta_{{\\mathcal V}_\\mathrm{s} C} (\\delta_{a}) + \\eta_{{\\mathcal V}_\\mathrm{s} C}(\\delta_{b}) ) & \\text{ $\\eta_{{\\mathcal V}_\\mathrm{s} C}$ is linear }\\\\\n&= \\mathcal{K}_\\mathrm{c}(\\alpha_{C}) ( \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(\\eta_{C}(a ) ) + \\mathcal{K}_\\mathrm{c}( {\\delta^{\\s}}_{C})(\\eta_{C}(a ) ) ) &\\text{definition of $\\mathcal{K}_\\mathrm{c}$ }\\\\\n&= \\eta_{C}(a) + \\eta_{C}(b). &\\text{definition of $\\eta_{C}(a) + \\eta_{C}(b)$}\n\\end{align*}\nThe equation $\\eta_{C}(ra)= r\\eta_{C}(a)$ can be proved similarly, for $r\\in \\mathbb R_{+}, a\\in C$. \n\\end{proof}\n\nLet $C$ be a weakly locally convex cone and $f\\colon C\\to \\overline{\\mathbb R}_+$ be lower semicontinuous. Since $\\overline{\\mathbb R}_+$ is a sober space, hence a $\\mathcal{K}$-space, and $\\mathcal{K}_\\mathrm{c}$ is a reflector, there is a unique continuous extension $\\overline f\\colon \\mathcal{K}_\\mathrm{c}(C)\\to \\overline{\\mathbb R}_+$. Moreover, since $C$ is weakly locally convex, $\\mathcal{K}_\\mathrm{c}(C)$ is a weakly locally convex $\\mathcal{K}$-cone by Proposition~\\ref{kcisacone}. We can show more:\n\n\\begin{theorem}\n\\label{extensionoflinearmaps}\nLet $C$ be a weakly locally convex cone. Then a lower semicontinuous map $f \\colon C\\to \\overline{\\mathbb R}_+$ is homogeneous (resp., superadditive, subadditive) if and only if its extension $\\overline f\\colon \\mathcal{K}_\\mathrm{c}(C)\\to \\overline{\\mathbb R}_+ $ is homogeneous (resp., superadditive, subadditive). As a result, $f$ is superlinear (resp., sublinear, linear) if and only if $\\overline f$ is, respectively. \n\\end{theorem}\n\\begin{proof}\nWe prove the case that $f$ is superadditive, that is, $f(a+b)\\geq f(a)+f(b)$ for all $a, b\\in C$. We need to show that for any fixed $c, d\\in \\mathcal{K}_\\mathrm{c}(C)$, $\\overline f(c+d)\\geq \\overline f(c)+\\overline f(d)$. For fixed $a\\in C$, it follows from the linearity of $\\eta_{C}$ that $\\lambda x. \\overline f(\\eta_{C}(a)+x)\\colon \\mathcal{K}_\\mathrm{c}(C)\\to K$ is the extension of $\\lambda x. f(a+x)\\colon C\\to K$ and that $\\lambda x. f(a)+ \\overline f(x)\\colon \\mathcal{K}_\\mathrm{c}(C)\\to K$ is the extension of $\\lambda x. f(a)+ f(x)\\colon C\\to K$. By superadditivity of $f$ we know that $\\lambda x. f(a+x)\\geq \\lambda x. f(a)+ f(x)$, hence it follows from Proposition~\\ref{kpreservesorder} that $\\lambda x. \\overline f(\\eta_{C}(a)+x)\\geq \\lambda x. f(a)+ \\overline f(x)$. In particular, we know that $\\overline f(\\eta_{C}(a)+d)\\geq f(a)+ \\overline f(d)$ for any $a\\in C$. This means that $\\lambda a. \\overline f(\\eta_{C}(a)+d)\\geq \\lambda a. f(a)+ \\overline f(d)$. By applying Proposition~\\ref{kpreservesorder} again with the same reasoning we obtain that $\\overline f(x+d)\\geq \\overline f(x)+\\overline f(d)$ for any $x\\in \\mathcal{K}_\\mathrm{c}(C)$. Hence, we have $\\overline f(c+d)\\geq \\overline f(c)+\\overline f(d)$. \n\nThe rest can be proved similarly. 
\n\\end{proof}\n\nThe following result due to Keimel states that there is an order isomorphism between the family of convex open subsets of a topological cone $C$, ordered by set containment, and the family of superlinear functionals on $C$, ordered pointwise, via the so-called \\emph{upper Minkowski functionals}.\n\\begin{definition}\nLet $A$ be a subset of a cone $C$, the \\emph{upper Minkowski functional} $M^{A}$ of $A$ is defined as $$ M^{A}(x)= \\sup\\{r \\in \\mathbb R_{+} \\mid x\\in rA\\}. $$\n\\end{definition}\n\n\\begin{lemma}{\\rm \\cite[Proposition 7.4, Proposition 7.6]{keimel08}}\nLet $C$ be a semitopological cone. \n\\begin{itemize}\n\\item Assigning the upper Minkowski functional $M^{U}$ to each proper open subset $U$ of $C$ induces an order isomorphism between the poset $\\mathcal O(C)$ of proper open subsets and the poset of homogeneous lower semicontinuous functionals on $C$. \n\\item Assigning the upper Minkowski functional $M^{U}$ to each proper open convex subset $U$ of $C$ induces an order isomorphism between the poset $\\mathcal O_{c}(C)$ of proper open convex subsets and the poset of lower semicontinuous superlinear functionals on $C$. \n\\item In both cases, the inverse assignment sends each lower semicontinuous (superlinear) functional $f$ to the open $f^{-1}((1, +\\infty ] )$. \n\\end{itemize}\n\\end{lemma}\n\n\nWe have the following similar result with proper open half-spaces in lieu of proper open convex subsets and lower semicontinuous linear functionals in lieu of lower semicontinuous superlinear functionals. The proof is similar to \\cite[Proposition 7.6]{keimel08}. \n\n\\begin{lemma}\n\\label{inlieu}\nLet $C$ be a semitopological cone. Assigning the upper Minkowski functional $M^{U}$ to every proper open half-space $U$ of $C$ induces an order isomorphism between the poset of proper open half-spaces of $C$ and the poset of lower semicontinuous linear functionals on $C$. The inverse assignment sends each lower semicontinuous linear functional $f$ to the open $f^{-1}((1, +\\infty ] )$. \\hfill $\\Box$\n\\end{lemma}\n\nFinally, we have the following result as a corollary to Theorem~\\ref{extensionoflinearmaps}. \n\n\\begin{theorem}\nLet $C$ be a topological cone. Then $C$ is locally convex (resp., locally linear, weakly locally convex) if and and only if $\\mathcal{K}_\\mathrm{c}(C)$ is a locally convex (resp., locally linear, weakly locally convex) $\\mathcal{K}$-cone. \n\\end{theorem}\n\\begin{proof}\nThe ``if'' direction comes from the fact that $\\eta_{C}$ is linear by Proposition~\\ref{kcisacone}. We prove the ``only if'' direction for the case when $C$ is locally convex. To this end, we assume that $U$ is an open subset of $\\mathcal{K}_\\mathrm{c} (C)$, and $x\\in U$. Then $\\eta_{C}^{-1}(U)$ is an open subset of $C$. By Corollary~\\ref{alahcekmann} we have some point $a\\in \\eta_{C}^{-1}(U)$ with $\\eta_{C}(a)\\leq x$. Since $C$ is locally convex, there exists an open convex subset $W$ of $C$ such that $a\\in W\\subseteq \\eta^{-1}(U)$. Hence the upper Minkowski functional $M^{W}$ of $W$ is a lower semicontinuous superlinear map from $C$ to $\\overline{\\mathbb R}_+$. By Theorem~\\ref{extensionoflinearmaps}, it has an extension $\\overline{ M^{W} }\\colon \\mathcal{K}_\\mathrm{c}(C)\\to \\overline{\\mathbb R}_+$ which is also lower semicontinuous and superlinear. \n\nSince $\\eta_{C}$ is linear by Proposition~\\ref{kcisacone}, $x\\in r \\eta_{C}^{-1}(U)$ if and only if $\\eta_{C}(x)\\in rU$ for $x\\in C$ and $r\\in \\mathbb R_{+}$. 
This implies that $M^{U} \circ\eta_{C}= M^{\eta_{C}^{-1}(U)}$, hence by uniqueness of extension, 
 $\overline{M^{\eta_{C}^{-1}(U)}}=M^{U}$.
 It is then straightforward to verify that $x\in \overline{ M^{W} }^{-1}((1, +\infty])\subseteq U$. We conclude by noticing that $ \overline{ M^{W} }^{-1}((1, +\infty])$ is an open convex subset of $\mathcal{K}_\mathrm{c}(C)$. 

The case that $C$ is locally linear can be proved similarly by invoking Lemma~\ref{inlieu}, and the case that $C$ is weakly locally convex was proved in Proposition~\ref{kcisacone}. 
\end{proof}

\begin{remark}
In contrast with Remark~\ref{isobetweenxandkx}, we know from the proof of the above theorem that for each convex open subset $W$ of a weakly locally convex topological cone~$C$, there exists one and only one open convex subset~$V$ of $\mathcal{K}_\mathrm{c}(C)$ such that $\eta_{C}(W)= V\cap C$, and $\overline{M^{W}} = M^{V}$.
\end{remark}


\section{Topping the Cantor tree}
Focusing on the $\mathcal{K}$-category $\mathcal{D}$ of monotone convergence spaces and continuous functions, we note that the monad $\mathcal V_{\mathcal{D}}$ gives rise to a natural monad over $\mathbf{DCPO}$. 
\[
\begin{tikzcd}
\mathbf{DCPO}
\arrow[rr, "\Sigma"{name=F}, bend left=10] 
&&
\mathcal{D} \ar[rr, "F" {name=H}, bend left = 10] 
\arrow[loop, "\mathcal V_\mathcal{D}"', distance=2.5em, start anchor={[xshift=1ex]north}, end anchor={[xshift=-1ex]north}]{}{}
\arrow[ll, "\Omega"{name=G}, bend left=10]
\arrow[phantom, from=F, to=G, "\dashv" rotate=-90]
&& \mathcal{D}^{\mathcal V_{\mathcal{D}}} \ar[ll, "U" {name=K}, bend left = 10] 
\arrow[phantom, from=H, to=K, "\dashv" rotate=-90]
\end{tikzcd}
\]
The key to this observation is an adjunction $(\Sigma, \Omega)$ between $\mathbf{DCPO}$ and $\mathcal{D}$: the left adjoint $\Sigma$ sends each dcpo $L$ to $\Sigma L$, the space $L$ equipped with the Scott topology, and the right adjoint $\Omega$ sends each monotone convergence space $X$ to the dcpo $\Omega X$. 
Since $\mathcal V_{\mathcal{D}}X$ is a monotone convergence space, we can restrict $\mathcal V_{\mathcal{D}}$ to the category $\mathcal{D}$, and it is again a monad over $\mathcal{D}$.
It is then immediate that the functor $\Omega \circ \mathcal V_{\mathcal{D}}\circ \Sigma$ is a monad over {\bf DCPO}: just note that $\mathcal V_{\mathcal{D}}$ can be written as $U \circ F$, where $F\dashv U$ is the canonical adjoint pair between $\mathcal{D}$ and the Eilenberg-Moore category $\mathcal{D}^{\mathcal V_{\mathcal{D}}}$ of $\mathcal V_{\mathcal{D}}$ that recovers the monad $\mathcal V_{\mathcal{D}}$. Then one easily verifies that $F\circ \Sigma\dashv \Omega\circ U$, and $\Omega \circ \mathcal V_{\mathcal{D}}\circ \Sigma =(\Omega\circ U)\circ (F\circ \Sigma)$. 


Since we know that $\mathcal V_{\mathcal{D}} X$ is the $\mathcal{D}$-completion of ${\mathcal V}_\mathrm{s} X$, one may wonder whether $\Omega(\mathcal V_{\mathcal{D}} X) $ is the dcpo-completion of $\Omega({\mathcal V}_\mathrm{s} X)$. We see in the following example that the answer is no. 
The reason is that the embedding of ${\mathcal V}_\mathrm{s} X$ into ${\mathcal V}_w X$ is not Scott-continuous in general, even when $X$ is an algebraic lattice endowed with its Scott topology.


\begin{example}
Consider the Cantor tree $\mathcal C$, consisting of the finite and infinite words over $\{0, 1\}$ endowed with the prefix order. $\mathcal C$ is a bounded complete algebraic domain with the empty word $\epsilon$ as the least element. The Cantor set can be realised as the set $C$ of maximal elements in $\mathcal C$, that is, the set of all infinite words. By adding a top element~$\top$ to $\mathcal C$, the resulting poset, which we denote by $\mathcal C^\top$, is an algebraic lattice.
\begin{center}
\tikzset{
 solid node/.style={circle,draw,inner sep=1.2,fill=black},
 hollow node/.style={circle,draw,inner sep=1.2},
}
\begin{tikzpicture}[rotate=180,font=\footnotesize]
\draw[thick,dashed] (-3.75,-4.3) -- (-3.25,-4.3); 
\draw[thick,dashed] (-2.25,-4.3) -- (-2.75,-4.3); 
\draw[thick,dashed] (-1.25,-4.3) -- (-1.7,-4.3); 
\draw[thick,dashed] (-.25,-4.3) -- (-.75,-4.3); 
\draw[thick,dashed] (.75,-4.3) -- (.25,-4.3); 
\draw[thick,dashed] (1.25,-4.3) -- (1.75,-4.3); 
\draw[thick,dashed] (2.25,-4.3) -- (2.75,-4.3); 
\draw[thick,dashed] (3.25,-4.3) -- (3.75,-4.3); 
 \tikzset{
 level 1/.style={level distance=12mm,sibling distance=40mm},
 level 2/.style={level distance=12mm,sibling distance=20mm},
 level 3/.style={level distance=12mm,sibling distance=10mm},
 level 4/.style={level distance=12mm,sibling distance=7mm},
 }
 \node(l0)[solid node,label=below:{${\epsilon}$}]{}
 child{node[solid node,label=right:{$^{1}$}]{}
 child{node(l1)[solid node,label=right:{$^{11}$}]{}
 child{node(R5)[solid node,label=above:{$\vdots$},label= right:{$^{111}$}]{}
 		edge from parent node[left]{}}
 child{node[solid node,label=above:{$\vdots$},label= right:{$^{110}$}]{}
 		edge from parent node[left]{}}
 edge from parent node[left]{}
 }
 child{node(l2)[solid node,label=right:{$^{10}$}]{}
 child{node[solid node,label=above:{$\vdots$},label= right:{$^{101}$}]{}
 		edge from parent node[left]{}}
 child{node[solid node,label=above:{$\vdots$},label= right:{$^{100}$}]{}
 		edge from parent node[left]{}}
 edge from parent node[left]{}
 }
 edge from parent node[left]{}
 }
 child{node[solid node,label=right:{$^{0}$}]{}
 child{node(r1)[solid node,label=right:{$^{01}$}]{}
 child{node[solid node,label=above:{$\vdots$},label= right:{$^{011}$}]{}
 		edge from parent node[left]{}}
 child{node[solid node,label=above:{$\vdots$},label= right:{$^{010}$}]{}
 		edge from parent node[left]{}}
 edge from parent node[left]{}
 }
 child{node(r2)[solid node,label=right:{$^{00}$}]{}
 child{node[solid node,label=above:{$\vdots$},label= right:{$^{001}$}]{}
 		edge from parent node[left]{}}
 child{node(L5)[solid node,label=above:{$\vdots$},label= right:{$^{000}$}]{}
 		edge from parent node[left]{}}
	edge from parent node[left]{}
 }
 edge from parent node[left]{}
 }
 ;
 \draw (0,-5) node[solid node,label=above:{$^\top $}]{};
\end{tikzpicture}
\end{center}

For each natural number $n$, we define the \emph{normalized counting measure at level $n$} as: $$c_{n}= {1\over 2^{n}} \sum_{x\in F_{n}} \delta_{x},$$ where $F_{n}$ is the set of words in $\mathcal C$ of length $n$. It is obvious that in ${\mathcal V}_w \mathcal C^{\top}$ the supremum $\sup_{n} c_{n}$ is not a simple valuation. 
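To get a feel for the $c_{n}$, one may compute them on the basic Scott-open sets $\mathord{\uparrow} w$: for a finite word $w$ of length $m$ and any $n\geq m$, exactly $2^{n-m}$ of the $2^{n}$ words in $F_{n}$ extend $w$, so that
\[
c_{n}(\mathord{\uparrow} w) \;=\; \frac{2^{n-m}}{2^{n}} \;=\; 2^{-m},
\]
independently of $n$; this is exactly the measure that the Haar measure on the Cantor set assigns to the cylinder of sequences extending $w$.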
In fact, it is well known that $\sup_{n}c_{n}$ is equal to ${\mathcal V}_w(e)(\mu)$, where $e$ is the topological embedding of $C$ into $\mathcal C^{\top}$ (with the Scott topology) and $\mu$ is the Haar measure on $C$, regarded as an infinite product of two-point groups (see~\cite{mislove20}). 
 
We claim that in ${\mathcal V}_\mathrm{s} \mathcal C^{\top}$, the supremum of the $c_{n}$, $n=1, 2, 3, \ldots$, is~$\delta_\top$:


First, $\delta_\top$ is obviously an upper bound of the $c_{n}$. 

Since $\mathord{\downarrow} \delta_\top$ is Scott closed in ${\mathcal V}_\mathrm{s} \mathcal C^\top$, for any subset of $\mathord{\downarrow} \delta_\top$ there is no difference between computing its supremum in $\mathord{\downarrow} \delta_\top$ and in ${\mathcal V}_\mathrm{s} \mathcal C^\top$.

Let $\sum_{x_{i}\in F} r_i \delta_{x_i}$, $F$ finite, be a simple valuation. We show that if $x_i \not= \top$ for some index $i$, or if $\sum_i r_i < 1$, then $\sum_{x_{i}\in F} r_i \delta_{x_i}$ is not an upper bound of some $c_n$. Indeed, if $\sum_i r_i < 1$, then $\sum_{x_{i}\in F} r_i \delta_{x_i}({\mathcal C}^\top) < 1 = c_n({\mathcal C}^\top)$, so $\sum_{x_{i}\in F} r_i \delta_{x_i}$ is not above any of the $c_n$. 

On the other hand, assume $\sum_i r_i =1$, but $x_i < \top$ for some $i$. Then $U = {\mathcal C}^\top \setminus \mathord{\downarrow} x_i$ is an open set, and $\sum_{x_{i}\in F} r_i \delta_{x_i}(U) \leq 1 - r_i$. Then we can choose an $n$ large enough that $c_n(U) > 1 - r_i$: 

Indeed, if $x_i\not\in C$, then $x_i \in F_m$ for some $m$, and choosing any $n > m$ implies $c_n(U) = 1$. Hence $c_n \not\leq \sum_{x_{i}\in F} r_i \delta_{x_i}$.

If $x_i\in C$, then we can choose an $n > 0$ large enough such that $b$ out of the $2^{n}$ elements of $F_{n}$ are not below $x_{i}$ and $b/2^{n}$ is strictly greater than $1-r_{i}$. Hence $\sum_{x_{i}\in F} r_i \delta_{x_i}(U) \leq 1-r_{i} < b/2^{n} \leq c_{n}(U)$, and again $c_n \not\leq \sum_{x_{i}\in F} r_i \delta_{x_i}$.

Consequently, every simple valuation that is an upper bound of all the $c_{n}$ is of the form $r\delta_\top$ with $r\geq 1$, and therefore $\sup_{n} c_{n}=\delta_\top$ in ${\mathcal V}_\mathrm{s} \mathcal C^{\top}$, as claimed.
\end{example}


It has been found that some metal-rich stars ([Fe/H] $>-1.0$ dex) strongly differ from disk stars in kinematics, and instead exhibit halo-like motions \citep[e.g.,][]{Nissen10,Nissen11,Schuster12,Bonaca17,Posti18,Fernandez-Alvar19}. According to the kinematic properties of these metal-rich stars, they are identified as metal-rich halo stars. 
Furthermore, several works have attempted to reveal the origin of the metal-rich halo stars. An in-situ metal-rich halo has been corroborated by some of these works. For example, \cite{Hawkins15} used a sample of 57 high-velocity stars from the fourth data release of the Radial Velocity Experiment to report the discovery of a metal-rich halo star that has likely been dynamically ejected into the halo from the thick disc, which supports the theory of \cite{Purcell10} that massive accretion events are believed to heat more metal-rich disk stars so that they are ejected into the halo. 
\cite{Bonaca17} reported that metal-rich halo stars in the solar neighborhood actually formed in situ, rather than having been accreted from satellite systems, based on kinematically identifying halo stars within 3 kpc from the Sun. 

\par
Some studies \citep{Haywood18,Di Matteo18,Gallart19} also detected a substantial population of stars with thick disc chemistry on halo-like orbits, and corroborated their in-situ origin. \cite{Belokurov19} used Gaia DR2 and auxiliary spectroscopic data sets to identify a large population of metal-rich ([Fe/H]$> -0.7$) stars on highly eccentric orbits in the rotational velocity versus metallicity plane, and dubbed them the Splash stars. 
They confirmed that the Splash stars are predominantly old, but not as old as the stars deposited into the Milky Way in the last major merger, suggesting that the Splash stars could have been born in the Milky Way's proto-disc prior to the massive ancient accretion event which drastically altered their orbits. Although these metal-rich halo stars have been found, whether they are part of the halo still needs more detailed research.
\par
To understand the complex structure of the Galaxy, we need more information, such as chemical abundances and kinematics, for a large number of individual stars. The ongoing Large Sky Area Multi-Object Fiber Spectroscopic Telescope survey \citep[LAMOST, also called Guoshoujing Telescope;][]{Zhao12} has released more than five million stellar spectra with stellar parameters in the DR5 catalog. Furthermore, more accurate elemental abundances and radial velocities from high-resolution spectra are provided by
the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE;][]{Majewski17} survey. Accurate kinematic information requires proper motions and parallaxes with sufficiently small uncertainties, which are provided by the second data release of the Gaia survey \citep{Gaia18a, Gaia18b}. These data sets allow us to explore the Galactic structure accurately.

\par
In this work, we use a low-resolution sample from the LAMOST DR5 and a high-resolution sample from the APOGEE DR14, combined with Gaia DR2, to study the metal-rich halo stars kinematically, to probe the existence of a high-velocity thick disk, and to measure their metallicity distribution functions (MDFs). 
The paper is structured as follows: Sect.~\ref{sec:data} introduces the observational data, the determination of the distances and velocities of the sample stars, and the sample selection. Sect.~\ref{sec:kinematics} presents the kinematic evidence for the existence of the metal-rich stellar halo and the high-velocity thick disk, and studies their kinematic properties. In Sect.~\ref{sec:metallicity}, we present the MDFs of the metal-rich stellar halo and the high-velocity thick disk. Sect.~\ref{sec:discussion} discusses their potential origins. The summary and conclusions are given in Sect.~\ref{sec:summary}. 


\section{Data}
\label{sec:data}
\subsection{LAMOST, APOGEE, and Gaia}
\par
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is a reflecting Schmidt telescope located at the Xinglong station, operated by the National Astronomical Observatories, Chinese Academy of Sciences (NAOC). LAMOST has an effective aperture of 3.6 - 4.9 m in diameter, a focal length of 20 m, and 4000 fibers within a field of view of $5^\circ$, which enable it to take 4000 spectra in a single exposure to a limiting magnitude as faint as $r = 19$ (where $r$ denotes magnitude in the SDSS $r$-band) at resolution R = 1800. Its observable sky covers $-10^\circ \sim +90^\circ$ in declination, and its observed wavelength range spans 3,700 {\AA} $\sim$ 9,000 {\AA} \citep{Cui12, Zhao12}. In this work, we use the LAMOST DR5 catalog, which contains over 5 million A-, F-, G-, and K-type stars. Stellar parameters, including radial velocity, effective temperature, surface gravity, and metallicity ([Fe/H]), are derived from the spectra with the LAMOST Stellar Parameter Pipeline \citep[LASP;][]{Wu11,Luo15}. The accuracy of LASP was tested by selecting 771 stars from the LAMOST commissioning database and comparing its results with those of the SDSS/SEGUE Stellar Parameter Pipeline (SSPP). 
The precisions of the effective temperature, surface gravity, and metallicity ([Fe/H]) were found to be 167 K, 0.34 dex, and 0.16 dex, respectively.
\begin{figure}[]
	\includegraphics[width=1.0\hsize]{Figure1.pdf}
	\caption{Top panel: Toomre diagram of our high-velocity sample stars for the high-resolution sample. The black dashed line represents the total spatial velocity $v_{{\rm tot}} =220$ ${\rm km\ s^{-1}}$, and we adopt $v_{\rm LSR}=232.8\ {\rm km\ s^{-1}}$. Our high-velocity sample stars are defined as $v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$. Bottom panel: the spatial distribution in cylindrical Galactic coordinates of these high-velocity sample stars. Red dots indicate the Sun, which is located at ($x_{\odot}$, $y_{\odot}$, $z_{\odot}$) $=$ ($-8.2, 0, 0.015$) kpc. $N$ represents the number of stars.}
	\label{figure1}
\end{figure}

\begin{figure}[]
	\includegraphics[width=1.0\hsize]{Figure2.pdf}
	\caption{Effective temperature ($T_{\rm eff}$) versus surface gravity (log(g)) diagram for our high-resolution sample (top panel) and low-resolution sample (bottom panel). The sample stars have been selected using $v_{{\rm tot}} > 220$ ${\rm km\ s^{-1}}$ and the sample criteria given in the text. The red dots in the top panel represent the 695 common stars between the high-resolution and low-resolution samples. The color bar in the bottom panel represents the number of stars. }
	\label{figure2}
\end{figure}
\par 
The Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, is a near-infrared (H-band; 1.51-1.70 ${\rm \mu m}$), high-resolution (R $\sim$ 22,500) spectroscopic survey targeting primarily red giant (RG) stars \citep{Zasowski13}. It provides accurate ($\sim$ 0.1 ${\rm km\ s^{-1}}$) radial velocities, stellar atmospheric parameters, and precise ($\lesssim 0.1$ dex) chemical abundances for about 15 chemical species \citep{Nidever15}. Detailed information about the APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP) can be found in \cite{Holtzman15} and \cite{Garcia16}.
\par 
Gaia is an ambitious mission to chart a three-dimensional map of the Milky Way, launched by the European Space Agency (ESA) in 2013. The second Gaia data release, Gaia DR2, provides high-precision positions, parallaxes, and proper motions for 1.3 billion sources brighter than magnitude $G \sim 21$ mag, as well as line-of-sight velocities for 7.2 million stars brighter than $G_{RVS}$ = 12 mag \citep{Gaia18a, Gaia18b}. More detailed information about Gaia can be found in \cite{Gaia Collaboration16,Gaia18a, Gaia18b}. 

\begin{figure}[]
	\includegraphics[width=1.0\hsize]{Figure3.pdf}
	\caption{The metallicity distribution of our high-velocity sample stars ($v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$). The red dashed line represents the low-resolution sample and the black line the high-resolution sample.}
	\label{figure3}
\end{figure}
\subsection{Distance and velocity determination}
\label{sec:Distance}
\par 
In this study, we use two initial samples. One is the low-resolution sample, obtained by cross-matching the LAMOST DR5 and Gaia DR2 catalogs, which provides a large quantity of stars
to study the Galactic disk and halo statistically. In this sample, the stellar parameters such as [Fe/H], radial velocity, effective temperature, and surface gravity are from the LAMOST DR5 catalog, and the proper motions and parallaxes are from the Gaia DR2 catalog. 
The other is the high-resolution sample, from the APOGEE DR14 and Gaia DR2 catalogs; its stellar parameters ([Fe/H], radial velocity, effective temperature, and surface gravity) are from the APOGEE DR14 catalog, and its proper motions and parallaxes from the Gaia DR2 catalog. We restrict the samples to relative parallax uncertainties smaller than 20$\%$, proper motion errors smaller than $0.2$ mas/year, radial velocity uncertainties smaller than $10$ ${\rm km\ s^{-1}}$, [Fe/H] errors smaller than $0.2$ dex, and signal-to-noise ratios S/N $> 20$ in the $g$-band. \textbf{We also restrict the error of the effective temperature to be smaller than 150 K and the error of the surface gravity to be smaller than 0.3 dex for the low-resolution sample.}

\par 
\cite{Bailer-Jones15} discussed that the inversion of the parallax to obtain the distance is not appropriate when the relative parallax error is above 20 percent. Therefore, we treat separately the derivation of distances and velocities for stars with $\sigma_{\varpi}/(\varpi-\varpi_{\rm zp}) <0.1$ and with $\sigma_{\varpi}/(\varpi-\varpi_{\rm zp}) \geq 0.1$ \citep{Marchetti19}.
The quantities $\varpi$ and $\sigma_{\varpi}$ denote the stellar parallax and its error, and $\varpi_{\rm zp}$ is the global parallax zero-point of the Gaia observations. \cite{Butkevich17} confirmed that various instrumental effects of the Gaia satellite, in particular a certain kind of basic-angle variations, can bias the parallax zero-point of an astrometric solution derived from observations. This global parallax zero-point was determined by \cite{Lindegren18} based on observations of quasars: $\varpi_{\rm zp}=-0.029$ mas. Thus, it is necessary to subtract the parallax zero-point ($\varpi_{\rm zp}$) when the parallax is used to calculate astrophysical quantities \citep{Li19}. 	
For the sample stars with $\sigma_{\varpi}/(\varpi-\varpi_{\rm zp}) <0.1$, we use simple inversion to calculate the distance, while for the stars with $\sigma_{\varpi}/(\varpi-\varpi_{\rm zp}) \geq 0.1$ we adopt a Bayesian approach to derive it. The Bayesian approach used to estimate the distances and velocities of the sample stars is introduced in the Appendix, together with comparisons of our distances and velocities with other works.
Here we only introduce the distance and velocity determination for the sample stars with $\sigma_{\varpi}/(\varpi-\varpi_{\rm zp}) <0.1$, using the parallax, the proper motions in right ascension ($\mu_{\alpha^*}$) and declination ($\mu_{\delta}$), and the radial velocity ($rv$). 


\par 
We calculate the Galactocentric Cartesian ($x, y, z$) coordinates from the Galactic ($l, b$) coordinates, where $l$ and $b$ are the Galactic longitude and latitude. We apply a right-handed Galactocentric Cartesian coordinate system with the $x$-axis pointing toward the Galactic center:
\begin{align}
& x = d\,\cos(l)\,\cos(b) + x_\odot \nonumber \\
& y = d\,\sin(l)\,\cos(b) \\
& z = d\,\sin(b) + z_\odot. \nonumber
\end{align}
Here we adopt $x_{\odot} = -8.2$ kpc for the position of the Sun relative to the Galactic center and $z_{\odot} = 15$ pc for its height above the plane \citep{Bland-Hawthorn16}. In such a coordinate system, the Sun is located at ($x_{\odot}$, $y_{\odot}$, $z_{\odot}$) $=$ ($-8.2, 0, 0.015$) kpc, and $d$ is the distance from the Sun. 
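For concreteness, this position transformation can be sketched in Python as follows (a minimal illustration with hypothetical inputs, not the code used in this work):
\begin{verbatim}
import numpy as np

def galactic_to_galactocentric(d, l_deg, b_deg, x_sun=-8.2, z_sun=0.015):
    """Heliocentric distance d [kpc] and Galactic (l, b) [deg] to
    right-handed Galactocentric Cartesian (x, y, z) [kpc]; the Sun
    sits at (x_sun, 0, z_sun), with x pointing to the Galactic center."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = d * np.cos(l) * np.cos(b) + x_sun
    y = d * np.sin(l) * np.cos(b)
    z = d * np.sin(b) + z_sun
    return x, y, z

# A star 2 kpc away toward the Galactic center (l = 0, b = 0):
print(galactic_to_galactocentric(2.0, 0.0, 0.0))  # (-6.2, 0.0, 0.015)
\end{verbatim}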
The proper motions together with the radial velocity are used to derive the Galactic velocity components $(U, V, W)$ in a right-handed Cartesian frame. The directions of $U$ and $W$ are toward the Galactic center and the north Galactic pole, and $V$ is in the direction of the Galactic rotation. The Galactic velocities are given relative to the Local Standard of Rest (LSR): $(U, V, W)$ $=$ $(U_{\rm LSR}, V_{\rm LSR}, W_{\rm LSR})$. 
The velocity components in the Galactocentric Cartesian coordinates can then be obtained as $(V_x, V_y, V_z)$ $=$ $(U, V+v_{\rm LSR}, W)$, where $v_{\rm LSR}$ is the LSR velocity; we adopt $v_{\rm LSR}=232.8\ {\rm km\ s^{-1}}$ \citep{McMillan17}. The corrections applied for the motion of the Sun with respect to the LSR are $(V_x^{\odot,{\rm pec}},V_y^{\odot,{\rm pec}},V_z^{\odot,\rm pec}) = {\rm (10.0\ km\ s^{-1}, 11.0\ km\ s^{-1}, 7.0\ km\ s^{-1})}$ \citep{Tian15, Bland-Hawthorn16}. The Galactocentric cylindrical components can be calculated as:
\begin{align}
& R = \sqrt{x^2+y^2} \nonumber \\
& \phi = \tan^{-1}(\frac{y}{x})
\end{align}

\begin{align}
& V_R = V_x\,\frac{x}{R}+V_y\,\frac{y}{R} \nonumber \\
& V_{\phi} = V_x\,\frac{y}{R}-V_y\,\frac{x}{R} \\
& V_z = V_z, \nonumber
\end{align}
where $V_{\phi}$ is positive in the direction of the Galactic rotation and $V_R$ is positive outward. To propagate the errors of the observed quantities, the uncertainties of the derived parameters for each star are determined from 1,000 realizations of a Monte Carlo simulation. The standard deviation is adopted as the uncertainty.
\par
We integrate the stellar orbits of the sample stars using the observed parameters as starting points. We use a recent Galactic potential model provided by \cite{McMillan17}. Their model includes five components: the cold gas discs near the Galactic plane, the thin and thick stellar discs, a bulge component, and a dark-matter halo. The GALPOT code \citep{McMillan17,Dehnen98} is used to integrate the stellar orbits, with an integration time of 1,000 Myr. As a result, we obtain various stellar orbital parameters, such as the closest approach of an orbit to the Galactic center ($ r_{\rm peri}$, i.e., the perigalactic distance), the farthest extent of an orbit from the Galactic center ($r_{\rm apo}$), the orbital energy ($E$), and the angular momentum ($L_{z}$). The orbital eccentricities of the sample stars, $e$, are defined as $e= (r_{\rm apo} - r_{\rm peri})/(r_{\rm apo} + r_{\rm peri})$.



\subsection{Sample selection}

\par 
\begin{figure*}
	\centering
	\includegraphics[width=1.0\textwidth]{Figure4.pdf}
	\caption{The rotational velocity distribution of the high-velocity sample stars with [Fe/H] $>-1.0$ dex (top panels) and [Fe/H] $>-0.8$ dex (bottom panels). The left and right panels are for the low-resolution and high-resolution sample stars, respectively. The distribution functions of the rotational velocity are well fitted with a two-Gaussian model according to the lowest $BIC$. The two single-Gaussian components are interpreted as the metal-rich stellar halo (MRSH) and the high-velocity thick disk (HVTD), and their sum is illustrated by the red curve. 
The best-fit values of the means ($\mu$), standard deviations ($\sigma$), and weights of each single-Gaussian component are given in the corresponding panels, and $N$ represents the number of stars.}
	\label{figure4}
\end{figure*}
\begin{figure*}
	\centering
	\includegraphics[width=1.0\textwidth]{Figure5.pdf}
	\caption{The rotational velocity distribution of the canonical halo stars with $v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$ and [Fe/H] $<-1.0$ dex for the low-resolution (left panel) and high-resolution sample (right panel). The distribution functions can be fitted with a two-Gaussian model according to the lowest $BIC$. The two single-Gaussian components are regarded as the inner-halo and outer-halo, and their sum is illustrated by the red curve. The best-fit values of the means ($\mu$), standard deviations ($\sigma$), and weights of each single-Gaussian component are given in the corresponding panels.}
	\label{figure5}
\end{figure*}

\par 
The Toomre diagram, a plot of $\sqrt{U_{\rm LSR}^2+W_{\rm LSR}^2}$ versus the rotational component $V_{\rm LSR}$, has been widely used to distinguish thin-disk, thick-disk, and halo stars.
Halo stars are usually defined as stars with $v_{{\rm tot}} = \sqrt{U_{{\rm LSR} }^2 + V_{{\rm LSR}}^2 + W_{{\rm LSR}}^2 } > 200\sim220$ ${\rm km\ s^{-1}}$ \citep[e.g.,][]{Venn04, Nissen10, Bonaca17,XingZhao18}. In order to investigate the properties of the stellar halo, we define stars with $v_{{\rm tot}} > 220$ ${\rm km\ s^{-1}}$ as our high-velocity sample stars. According to previous studies, the high-velocity sample stars mainly consist of halo stars; their distribution in the Toomre diagram is presented in the top panel of Figure \ref{figure1}. In total, we obtain 17,470 high-velocity sample stars of low resolution and 3,391 of high resolution. There are 695 common targets between these two samples.
As shown in the bottom panel of Figure \ref{figure1}, our high-resolution high-velocity sample stars lie within $4\lesssim R \lesssim13$ kpc and extend up to 6 kpc in height from the Galactic plane. 


\par 
In this work, the low-resolution sample is used to study statistically the kinematic and chemical characteristics of the stellar halo.
At the same time, the high-resolution sample, which has accurate stellar parameters, is used to confirm the conclusions derived from the low-resolution sample. 
As shown in Figure \ref{figure2}, the high-resolution sample mainly consists of G- and K-type giant stars, while the low-resolution sample mainly consists of A-, F-, G-, and K-type stars. 
Obviously, there are many more low-resolution sample stars than high-resolution sample stars, so the low-resolution sample can reduce the influence of sample selection bias.


\section{Kinematics of the Metal-Rich Stellar Halo and High-Velocity Thick Disk}
\label{sec:kinematics}
\subsection{Kinematic evidence for the metal-rich stellar halo and high-velocity thick disk}


\par 
Although we have selected halo stars according to the kinematic criterion $v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$, most of the high-velocity sample stars are metal-rich ([Fe/H] $>-1.0$ dex), as shown in Figure \ref{figure3}. 
This is similar to the result of \cite{Bonaca17}, who selected sample stars within $\lesssim$ 3 kpc from the Sun based on the first Gaia data release and the RAVE and APOGEE spectroscopic surveys. They regarded these metal-rich stars as metal-rich stellar halo stars. 
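For reference, the kinematic selection used above reduces to a simple cut on the total space velocity; the following is a minimal sketch with hypothetical velocity arrays standing in for the measured LSR components (not the actual sample):
\begin{verbatim}
import numpy as np

# Hypothetical (U, V, W) velocity components relative to the LSR, in km/s.
U_lsr = np.array([30.0, -150.0, 200.0])
V_lsr = np.array([-10.0, -180.0, -60.0])
W_lsr = np.array([5.0, 90.0, -120.0])

v_tot = np.sqrt(U_lsr**2 + V_lsr**2 + W_lsr**2)
high_velocity = v_tot > 220.0  # the halo-like cut adopted in this work
print(v_tot.round(1), high_velocity)
\end{verbatim}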
However, because these metal-rich stars exhibit the metallicity of the thick disk, whether they belong to the halo or to the disk still needs more consideration. Since the rotational behavior is a very effective way to distinguish the thin disk, thick disk, and halo components, we shall further study the rotational velocity distribution. 

\par 
To study how many components these metal-rich stars ([Fe/H] $>-1.0$) contain, 
we first make the traditional assumption that the distribution function of the stellar rotational velocity of a single stellar population is well described by a single Gaussian function; the optimal number of Gaussian components is then given by the Bayesian information criterion ($BIC$) \citep{Ivezic14}:
\begin{align}
BIC=-2\ln[L^{0}(M)] + k\ln N, 
\end{align}
where $L^{0}(M)$ represents the maximum value of the likelihood function of the model, $N$ is the number of data points, and $k$ is the number of free parameters. Uncertainties of the best-fit values are determined from 1,000 realizations of a Monte Carlo simulation, and the standard deviations are adopted as the errors.
Figure \ref{figure4} shows that the rotational velocity distribution of the metal-rich stars can be fitted with a two-peak Gaussian model according to the lowest $BIC$. 
\begin{table*}
	\begin{center}
		\centering
		\caption{ \upshape {The Best-fit Values of Mean ($\mu$), Standard Deviation ($\sigma$), and Weight of Each Rotational Velocity Distribution of Gaussian Form in Different Metallicity Intervals}}
		\label{Table 1}
		\begin{tabular}{lllllll}
			\hline
			\hline 
			[Fe/H]& \multicolumn{3}{c}{MRSH} & \multicolumn{3}{c}{HVTD}\\
			- & $\mu$ & $\sigma$ & Weight & $\mu$ & $\sigma$ & Weight \\
			(dex)&(${\rm km\ s^{-1}}$)&(${\rm km\ s^{-1}}$)&-&(${\rm km\ s^{-1}}$)&(${\rm km\ s^{-1}}$)&-\\
			\hline
			\multicolumn{7}{c}{Low Resolution Sample}\\
			\hline
			[Fe/H]$>-1$ & $+3.0\pm0.3$&$45.7\pm0.3$&$0.72\pm0.002$&$164.2\pm0.7$&$63.2\pm0.5$&$0.28\pm0.002$\\
			
			[Fe/H]$>-0.8$ & $+6.44\pm0.3$&$45.7\pm0.4$&$0.69\pm0.002$&$169.2\pm0.7$&$62.4\pm0.5$&$0.31\pm0.002$\\
			\hline
			\multicolumn{7}{c}{High Resolution Sample}\\
			\hline
			[Fe/H]$>-1$ & $+4.7\pm1.0$&$49.8\pm2.7$&$0.61\pm0.005$&$201.6\pm1.4$&$50.2\pm2.9$&$0.39\pm0.005$\\
			
			[Fe/H]$>-0.8$ & $+7.9\pm1.8$&$48.4\pm2.6$&$0.57\pm0.004$&$203.5\pm1.7$&$49.1\pm2.6$&$0.43\pm0.004$\\
			
			\hline
		\end{tabular}
	\end{center}
\end{table*}
\par
Here, we use the low- and high-resolution samples to confirm each other. It needs to be noted that the fitted parameters from the two samples are slightly different; for example, the best-fit mean rotational velocity is $\mu_{V_{\phi}}=164.2\pm 0.7$ ${\rm km\ s^{-1}}$ for the low-resolution sample in the top left panel of Figure \ref{figure4}, while it is $\mu_{V_{\phi}}=201.6\pm 1.4$ ${\rm km\ s^{-1}}$ for the high-resolution sample in the top right panel. But we notice that the two samples show a consistent number of components, which implies that the component number does not depend on the sample. In order to be consistent with the type of the high-resolution sample stars, we also restrict the effective temperature to $4000<{\rm T_{eff}}<5300$ K and the surface gravity to log(g) $<3.5$ dex for the low-resolution sample. 
We find that the fitted parameters from this restricted low-resolution sample are still slightly different from those of the high-resolution sample, but the differences have diminished. So we consider that the parameter differences between the two samples could result from the uncertainties of the stellar parameters in the low-resolution sample or from the incompleteness of the high-resolution sample.


\par
A small number of inner-halo stars with [Fe/H] $>-1.0$ dex have been reported by some studies \citep[e.g.,][]{An13,An15,Zuo17,Liu18,Gu19}. In order to eliminate the effects of the inner halo, we also inspect the component number for the stars with [Fe/H] $>-0.8$ dex. Our results show that the component number is identical for the stars with [Fe/H] $>-1.0$ and with [Fe/H] $>-0.8$ dex, as shown in the bottom panels of Figure \ref{figure4}, which indicates that the effect of the inner halo on our results is negligible. Table \ref{Table 1} lists the best-fit values of the two single-Gaussian components in Figure \ref{figure4}.


\begin{figure*}
	\centering
	\includegraphics[width=1.0\textwidth]{Figure6.pdf}
	\caption{Top left panel: Toomre diagram of the high-velocity thick disk (HVTD, marked by the red dots) and metal-rich stellar halo (MRSH, marked by the cyan dots) for the high-resolution sample. Top right and bottom left panels: the distribution of the total energy versus the vertical angular momentum (Lindblad diagram) for the thick disk (marked by the black dots), HVTD (marked by the red dots), MRSH (marked by the cyan dots), and canonical halo stars (marked by the orange dots). 
		The blue dashed line represents the separation of the populations. 
		Bottom right panel: distribution of the orbital eccentricity of the HVTD (marked by the red dotted line), MRSH (marked by the cyan line), thick disk stars (black dashed line), and canonical halo stars (orange line). }
	\label{figure6}
\end{figure*}
\begin{figure*}
	\centering
	\includegraphics[width=1.0\textwidth]{Figure7.pdf}
	\caption{Variation of rotational velocity with metallicity for the MRSH (marked by the cyan dots), HVTD (marked by the orange dots), and halo stars (marked by the yellow-green dots) from our high-resolution sample. These variation trends can be fitted with linear functions using the least-squares method. Stars in the gray rectangular box represent the Splash stars defined by \cite{Belokurov19}.}
	\label{figure7}
\end{figure*}
\par 
We have confirmed that the high-velocity stars with [Fe/H] $>-1.0$ dex contain two independent components using their rotational velocity distribution. According to previous studies, the mean rotational velocity of the thick disk is within the range of 160-200 ${\rm km\ s^{-1}}$, and that of the thin disk is greater than 210 ${\rm km\ s^{-1}}$ \citep[e.g.,][]{Kordopatis11,Li18}. For example, 
\cite{Carollo10} and \cite{Li17,Li18} reported a mean rotational velocity of $\langle V_{\phi}\rangle \sim 180$ ${\rm km\ s^{-1}}$ for the thick disk, and \cite{Kordopatis11} measured $\langle V_{\phi}\rangle = 166$ ${\rm km\ s^{-1}}$. 
The halo stars have a lower mean rotational velocity \citep[e.g.,][]{Kafle17}; for example, \cite{Smith09} measured $\langle V_{\phi}\rangle \sim 2.3$ ${\rm km\ s^{-1}}$ for the halo stars, \cite{Carollo10} reported $\langle V_{\phi}\rangle =+7\pm4$ and $\langle V_{\phi}\rangle =-80\pm13$ ${\rm km\ s^{-1}}$ for the inner halo and outer halo, and \cite{Tian19} reported that the local halo rotates progradely with $\langle V_{\phi}\rangle \sim +27$ ${\rm km\ s^{-1}}$. 
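The $BIC$-based choice of the number of Gaussian components described above can be reproduced, for instance, with scikit-learn; the following is a minimal sketch on simulated rotational velocities (hypothetical numbers, not the survey data):
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two simulated V_phi populations [km/s]: a halo-like and a disk-like one.
v_phi = np.concatenate([rng.normal(3.0, 46.0, 700),
                        rng.normal(164.0, 63.0, 300)]).reshape(-1, 1)

# Fit mixtures with 1..4 components; keep the one with the lowest BIC.
fits = [GaussianMixture(n_components=k, random_state=0).fit(v_phi)
        for k in range(1, 5)]
best = min(fits, key=lambda g: g.bic(v_phi))
print(best.n_components, best.means_.ravel(), best.weights_)
\end{verbatim}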
As shown in Figure \ref{figure4}, for the stars with [Fe/H]$>-1.0$, one component peaks at $\langle V_{\phi}\rangle \sim 201.3$ ${\rm km\ s^{-1}}$ for the high-resolution sample and at $\langle V_{\phi}\rangle \sim 164.2$ ${\rm km\ s^{-1}}$ for the low-resolution sample, which is consistent with the thick disk. So we consider that this component should be the high-velocity thick disk (HVTD): it has the same rotational velocity and metallicity as the canonical thick disk, but its member stars have the same position as the halo in the Toomre diagram. For the stars with [Fe/H]$>-1.0$, the other component peaks at $\langle V_{\phi}\rangle \sim +4.7$ ${\rm km\ s^{-1}}$ for the high-resolution sample and at $\langle V_{\phi}\rangle \sim +3.0$ ${\rm km\ s^{-1}}$ for the low-resolution sample, which is similar to the rotational velocity of the halo. Therefore we regard this component as a metal-rich stellar halo (MRSH): it has the same rotational velocity and position as the halo in the Toomre diagram, but it has the metallicity of the canonical thick disk. \cite{Belokurov19} measured the rotational velocity distribution of the metal-rich stars with $-0.7<$[Fe/H]$<-0.2$ and $2<|z|<3$ kpc on halo-like orbits (Splash stars), with a peak at 25 ${\rm km\ s^{-1}}$ and a standard deviation of $108\pm 19$ ${\rm km\ s^{-1}}$. They showed that the Splash stars include many stars with $V_{\phi} > 100$ ${\rm km\ s^{-1}}$. Because they did not remove disk stars, their Splash stars may be contaminated by thin and thick disk stars. 
Since there exists a clear gap between the rotational velocity distributions of the HVTD and MRSH, as shown in Figure \ref{figure4}, the HVTD is defined as the high-velocity sample stars with [Fe/H] $>-1.0$ dex and $V_{\phi} >90$ ${\rm km\ s^{-1}}$, while the MRSH is the high-velocity sample stars with [Fe/H] $>-1.0$ dex and $V_{\phi} <90$ ${\rm km\ s^{-1}}$. 
\begin{figure}[]
	\includegraphics[width=1.0\hsize]{Figure8.pdf}
	\caption{The variation of rotational velocity with metallicity for the thick disk stars of \cite{Yan19} in different spatial velocity intervals, $70<v_{\rm tot}<220$ ${\rm km\ s^{-1}}$ and $v_{\rm tot}>220$ ${\rm km\ s^{-1}}$.}
	\label{figure8}
\end{figure}
\par
The canonical halo stars are defined as the stars with $v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$ and [Fe/H] $<-1.0$ dex.
In order to check whether the canonical halo stars contain HVTD or MRSH stars, we study the rotational velocity distribution of the canonical halo, which is fitted with a two-peak Gaussian model according to the lowest $BIC$, as shown in Figure \ref{figure5}.
Some previous studies showed that the canonical stellar halo has two components: the inner halo and the outer halo. Thus, we could regard these two single-Gaussian components as the inner halo and outer halo. This implies that the stars with $v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$ and [Fe/H] $<-1$ dex contain very few HVTD or MRSH stars. 

\begin{figure}
	\includegraphics[width=1.0\hsize]{Figure9.pdf}
	\caption{Metallicity distribution of the high-velocity stars for the low-resolution (top panel) and high-resolution sample (bottom panel). The distribution functions are well fitted with a four-Gaussian model according to the lowest $BIC$, which represents the contributions from the outer-halo, inner-halo, MRSH, and HVTD components, and the sum is illustrated by the red curve. The best-fit values of the means ($\mu$), standard deviations ($\sigma$), and weights for each single-Gaussian component are given in the corresponding panels. }
	\label{figure9}
\end{figure}
\par 
\begin{figure*}
	\centering
	\includegraphics[width=1.0\textwidth]{Figure10.pdf}
	\caption{Metallicity distribution of the high-velocity sample stars for the low-resolution (left panels) and high-resolution sample (right panels) in different vertical height intervals. 
The distribution functions are well fitted with a four-Gaussian or three-Gaussian model according to the lowest $BIC$. The best-fit values of the means ($\mu$), standard deviations ($\sigma$), and weights for each single-Gaussian component are given in the corresponding panels.}
	\label{figure10}
\end{figure*}

\subsection{Kinematic properties of the metal-rich stellar halo and high-velocity thick disk}
\par
Figure \ref{figure6} shows the Toomre diagram, the Lindblad diagram (a plot of the integrals of motion representing the total energy, $E$, and the vertical angular momentum, $L_z$), and the distribution of orbital eccentricity for the HVTD, MRSH, thick disk, and canonical halo. Our results indicate that there is a relatively clear separation between the HVTD and MRSH in the Toomre diagram, with a small amount of mixing at the boundary. This also implies that the HVTD and MRSH could be different populations. In addition, in order to present a clear dynamical relation between the HVTD, MRSH, canonical halo, and thick disk, we compare the thick disk sample stars from \cite{Yan19} with the HVTD, MRSH, and canonical halo in the Lindblad diagram.
As shown in the top right panel of Figure \ref{figure6}, there is an apparent separation between the canonical halo and the thick disk, as indicated by the blue dashed line. About $65\%$ of the HVTD stars are clustered with the thick disk in the Lindblad diagram, while the other $35\%$ of the HVTD stars are clustered with the canonical halo stars. Furthermore, the HVTD contains $23\%$ stars of high orbital eccentricity ($e>0.8$), as shown in the bottom right panel of Figure \ref{figure6}. After excluding those $35\%$ of the HVTD stars that have the same position as the canonical halo in the Lindblad diagram, the orbital eccentricity distribution of the HVTD is basically consistent with that of the thick disk. These results indicate that our HVTD stars could be contaminated by the MRSH stars, and that most of the HVTD stars share the same dynamical properties as the thick disk. As shown in the bottom left and right panels of Figure \ref{figure6}, most MRSH stars are clustered with the canonical halo. The orbital eccentricity distribution of the MRSH is basically consistent with that of the canonical halo, and most of these stars have high orbital eccentricity ($e>0.6$), which is consistent with previous studies \citep[e.g.,][]{Mackereth19b,Fernandez-Alvar19,Belokurov19}. These results imply that the MRSH stars share the same dynamical properties as the canonical halo.

\par 
The gradient of rotational velocity with metallicity is important for the Galactic disk, as it can provide useful clues to its formation and evolution. Many works have confirmed that the thin disk stars show a negative gradient of rotational velocity versus metallicity, with gradients ranging from $\sim$ $-16$ to $-24$ ${\rm km\ s^{-1}\ dex^{-1}}$, while the thick disk stars show a positive gradient from $\sim$ $+30$ to $+49$ ${\rm km\ s^{-1}\ dex^{-1}}$ \citep[e.g.,][]{Lee11,Adibekyan13,Recio-Blanco14,Guiglion15,Jing16,Yan19}.
Figure \ref{figure7} shows the variations of rotational velocity with metallicity for the MRSH, HVTD, and halo stars. The variation trends of the MRSH and HVTD stars can be fitted with linear functions, but their gradients are distinctly different. The HVTD shows a steeper gradient than the canonical thick disk, $\Delta V_{\phi}/\Delta{\rm [Fe/H]} = +82.2 \pm 1.8 $ ${\rm km\ s^{-1}\ dex^{-1}}$. 
The MRSH, however, shows a relatively flat gradient, $\Delta V_{\phi}/\Delta{\rm [Fe/H]} = +18.0 \pm 3.1 $ ${\rm km\ s^{-1}\ dex^{-1}}$, which is smaller than that of the canonical thick disk. The distribution of rotational velocity with metallicity for the halo stars is more scattered than for the MRSH and HVTD stars, but it globally exhibits a relatively flat gradient, $\Delta V_{\phi}/\Delta{\rm [Fe/H]} = +22.2 \pm 1.7 $ ${\rm km\ s^{-1}\ dex^{-1}}$. The gradient of the halo is basically equal to that of the MRSH. \cite{Belokurov19} noticed that some metal-rich stars ([Fe/H]$>-0.7$) show a distinct difference from the thin and thick disk stars in the rotational velocity versus metallicity distribution. The distribution of rotational velocity with metallicity for these metal-rich stars shows a vertical trend, and these metal-rich stars are referred to as Splash stars by \cite{Belokurov19}. The boundaries of the Splash stars have been marked by a rectangular box in Figure \ref{figure7}. It can be clearly seen that the Splash stars lie within the MRSH.
\par 
The HVTD has a steeper gradient than the canonical thick disk in the rotational velocity versus metallicity distribution; the gradient of the HVTD is about twice as high as that of the canonical thick disk. We noticed that the gradient of rotational velocity versus metallicity in the thick disk depends strongly on the spatial velocity. Figure \ref{figure8} displays the variation of rotational velocity with metallicity in different spatial velocity intervals for the canonical thick disk from \cite{Yan19}. We can see that, while the thick disk stars with $70<v_{\rm tot}<220$ ${\rm km\ s^{-1}}$ behave like the canonical thick disk, for the thick disk stars with $v_{\rm tot}>220$ ${\rm km\ s^{-1}}$ the gradient of rotational velocity versus metallicity is steeper than in the canonical thick disk, which implies that the HVTD stars could belong to the thick disk. 


\section{The Metallicity Distribution of the Metal-Rich Stellar Halo and High-Velocity Thick Disk}
\label{sec:metallicity}
\par 

We selected the HVTD and MRSH stars by their rotational velocity distribution and metallicity.
The mean metallicities of the HVTD stars are $\langle{\rm [Fe/H]}\rangle=-0.51 \pm 0.002$ dex with a standard deviation of $\sigma_{\rm [Fe/H]}=0.26$ dex 
for the low-resolution sample, and $\langle{\rm [Fe/H]}\rangle=-0.31 \pm 0.0004$ dex with a standard deviation of $\sigma_{\rm [Fe/H]}=0.31$ dex
for the high-resolution sample. The mean metallicities of the MRSH stars are $\langle{\rm [Fe/H]}\rangle=-0.67 \pm 0.001$ dex with a standard deviation of $\sigma_{\rm [Fe/H]}=0.20$ dex 
for the low-resolution sample and $\langle{\rm [Fe/H]}\rangle=-0.60 \pm 0.0003$ dex with a standard deviation of $\sigma_{\rm [Fe/H]}=0.23$ dex for the high-resolution sample. Therefore, the HVTD stars have a higher metallicity than the MRSH stars on average.
The metallicity distributions of both high-velocity samples are well fitted with a four-peak Gaussian model according to the lowest $BIC$, as shown in Figure \ref{figure9}. 
The two single-Gaussian components for the canonical halo stars with [Fe/H] $\lesssim-1$ dex could be interpreted as the inner halo and the outer halo. 
The relatively metal-rich stars with [Fe/H] $\gtrsim-1$ dex also show two single-Gaussian components, which could be interpreted as the HVTD and MRSH stars. 
As shown in Figure \ref{figure9}, the canonical halo contains few HVTD and MRSH stars.

\begin{table}
	\begin{center}
		\centering
		\caption{ \upshape {The Best-fit Values of Mean ($\mu$), Standard Deviation ($\sigma$), and Weight (W) of Each Metallicity Distribution of Gaussian Form in Different Vertical Height Intervals.} }
		\label{Table 2}
		\begin{tabular}{lllll}
			\hline
			\hline 
			& Outer-halo&Inner-halo &MRSH&HVTD\\
			\hline
			\multicolumn{5}{c}{Low Resolution Sample $|z|\lesssim6$ kpc}\\
			\hline
			$\mu$&$-1.69\pm0.009$&$-1.21\pm0.005$&$-0.65\pm0.003$&$-0.56\pm0.02$\\
			$\sigma$ &$0.29\pm0.003$&$0.20\pm0.002$&$0.17\pm0.002$&$0.39\pm0.006$\\
			W&$0.12\pm0.003$&$0.30\pm0.005$&$0.43\pm0.005$&$0.15\pm0.009$\\
			\hline
			\multicolumn{5}{c}{High Resolution Sample $|z|\lesssim6$ kpc}\\
			\hline
			$\mu$ &$-2.0\pm0.007$&$-1.34\pm0.003$&$-0.59\pm0.002$&$-0.25\pm0.002$\\
			$\sigma$ &$0.30\pm0.004$&$0.23\pm0.002$&$0.21\pm0.001$&$0.34\pm0.006$\\
			W&$0.13\pm0.002$&$0.33\pm0.003$&$0.37\pm0.016$&$0.17\pm0.01$\\
			
			\hline
			\multicolumn{5}{c}{Low Resolution Sample $|z|<3$ kpc}\\
			\hline
			$\mu$ &$-1.65\pm0.01$&$-1.20\pm0.006$&$-0.65\pm0.003$&$-0.53\pm0.019$\\
			$\sigma$ &$0.30\pm0.004$&$0.20\pm0.002$&$0.17\pm0.002$&$0.38\pm0.006$\\
			W&$0.12\pm0.004$&$0.30\pm0.004$&$0.44\pm0.005$&$0.14\pm0.009$\\
			\hline
			\multicolumn{5}{c}{High Resolution Sample $|z|<3$ kpc}\\
			\hline
			$\mu$ &$-1.96\pm0.006$&$-1.33\pm0.003$&$-0.58\pm0.005$&$-0.14\pm0.02$\\
			$\sigma$ &$0.29\pm0.003$&$0.21\pm0.007$&$0.21\pm0.003$&$0.27\pm0.008$\\
			W&$0.13\pm0.002$&$0.27\pm0.003$&$0.42\pm0.011$&$0.18\pm0.012$\\
			
			\hline
			\multicolumn{5}{c}{Low Resolution Sample $3<|z|<6$ kpc}\\
			\hline
			$\mu$ &$-1.80\pm0.02$&$-1.28\pm0.01$&$-0.69\pm0.01$&-\\
			$\sigma$&$0.27\pm0.007$&$0.20\pm0.009$&$0.25\pm0.005$&-\\
			W&$0.17\pm0.007$&$0.31\pm0.012$&$0.52\pm0.015$&-\\
			\hline
			\multicolumn{5}{c}{High Resolution Sample $3<|z|<6$ kpc}\\
			\hline
			$\mu$ &$-2.17\pm0.016$&$-1.41\pm0.005$&$-0.65\pm0.002$&-\\
			$\sigma$&$0.23\pm0.01$&$0.24\pm0.003$&$0.29\pm0.002$&-\\
			W&$0.12\pm0.005$&$0.42\pm0.004$&$0.46\pm0.002$&-\\
			\hline
		\end{tabular}
	\end{center}
\end{table}
\par

We notice that the parameters fitted from the low-resolution and high-resolution samples are slightly different, but the two samples show a consistent number of components, which implies that the component number does not depend on the sample. In addition, we restrict the effective temperature to $4000<{\rm T_{eff}}<5300$ K and the surface gravity to log(g) $<3.5$ dex for the low-resolution sample, in order to be consistent with the high-resolution sample stars. We find that the parameters fitted from this restricted low-resolution sample are still slightly different from those of the high-resolution sample, but the differences have diminished. So the parameter differences between the two samples could result from the metallicity uncertainty of the low-resolution sample.


\par 
Thus, we have also confirmed the existence of the HVTD and MRSH through the metallicity distribution. 
We now study the variation of the MDFs with vertical distance for these high-velocity sample stars.
Figure \ref{figure10} shows the lowest-$BIC$ fits to the data: a four-peak Gaussian model for $|z|<3$ kpc and a three-peak Gaussian model for $3<|z|<6$ kpc, in both the low- and high-resolution samples. The top panels of Figure \ref{figure10} show that there are four components within $|z|<3$ kpc: the outer halo, inner halo, MRSH, and HVTD. The inner-halo and MRSH components make up the vast majority, and the outer-halo component still exists within $|z|<3$ kpc. The bottom panels of Figure \ref{figure10} show that there are three components at $3<|z|<6$ kpc: the outer halo, inner halo, and MRSH. The inner-halo and MRSH components still make up the majority, but their weights are higher than those at $|z|<3$ kpc. 
The weight of the outer-halo component is basically invariable, which implies that the vertical height has little effect on the outer-halo component within $|z|<6$ kpc. Furthermore, at $|z|>3$ kpc the HVTD component disappears, which indicates that most of the HVTD stars are within $|z|<3$ kpc. Therefore, the variation of the component weights with vertical height also indicates that the MRSH stars belong to the halo, and the HVTD stars to the thick disk. Table \ref{Table 2} lists the best-fit values of the four or three single-Gaussian components in Figure \ref{figure9} and Figure \ref{figure10}. \cite{Belokurov19} used K-giants identified in the Sloan Digital Sky Survey spectroscopy to show that the Splash population extends as far as $|z|\sim 20$ kpc, with the ranking of the vertical sizes of the Splash, the disc, and the halo being $z_{\rm disc}<z_{\rm Splash}<z_{\rm halo}$.

\section{Discussion}
\label{sec:discussion}
\par
Since the metal-rich halo stars with [Fe/H] $>-1.0$ dex are preferentially aligned with the disk rotation, it has been proposed that these metal-rich halo stars may have formed in situ, rather than having been accreted from satellite systems, and that they have likely undergone substantial radial migration or heating \citep[e.g.,][]{Bonaca17}. In addition, as a part of the metal-rich halo stars, the Splash stars have chemical and kinematic properties similar to our MRSH stars. Because the Splash stars are predominantly old, but not as old as the stars deposited into the Milky Way in the last major merger, \cite{Belokurov19} concluded that the Splash stars may have been born in the Milky Way's proto-disc prior to the massive ancient accretion event which drastically altered their orbits, and they constrained the epoch of the last massive accretion event to have finished 9.5 Gyr ago. This massive ancient merger event is the Gaia-Sausage \citep{Belokurov18,Myeong18}, sometimes also referred to as Gaia-Enceladus \citep{Helmi18}. Therefore, according to the chemical and kinematic properties, the MRSH stars were likely born in situ and the HVTD stars are a part of the thick disk. 

\par 
On the other hand, stellar ages are also an effective way to probe the potential origins of a population. However, it is difficult to obtain accurate stellar ages, and different methods of estimating ages have systematic differences \citep{Frankel19}. In this work, we only use the age range of the stars to discuss the potential origins of the MRSH and HVTD. Because the Gaia-Sausage merger could have happened $\sim 9-11$ Gyr ago \citep[e.g.,][]{Belokurov18,Helmi18,Di Matteo18,Belokurov19}, we define old stars as those older than $9$ Gyr and young stars as those younger than $9$ Gyr. The ages of our sample stars are obtained by cross-matching with two catalogs, the Sanders18 catalog \citep{Sanders18} and the Wu19 catalog \citep{Wu19}. 
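Such a cross-match can be performed positionally, for example with astropy; the following is a minimal sketch with hypothetical coordinates and matching radius (the exact matching procedure used here is not specified):
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

# Hypothetical RA/Dec (deg) for our stars and for an external age catalog.
sample = SkyCoord(ra=[10.68, 83.82] * u.deg, dec=[41.27, -5.39] * u.deg)
catalog = SkyCoord(ra=[10.681, 200.0] * u.deg, dec=[41.269, 30.0] * u.deg)

idx, sep2d, _ = sample.match_to_catalog_sky(catalog)
matched = sep2d < 1.0 * u.arcsec  # assumed matching radius
print(idx, sep2d.arcsec, matched)
\end{verbatim}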
\cite{Sanders18} presented a catalog of stellar distances, masses, and ages for $\sim3$ million giant stars. The masses and ages have been estimated using the method outlined in \cite{Das19}. \cite{Sanders18} only estimated masses and ages for stars more metal-rich than $-1.5$ dex, and the maximum age isochrone considered is 12.6 Gyr. 
\cite{Wu19} presented a catalog of stellar age and mass estimates for red giant branch (RGB) stars from the LAMOST DR4. The estimated ages have a median error of 30 percent for stars with SNR $>30$.
The age distributions of the MRSH and HVTD stars are shown in Figure \ref{figure11}. Although the age distributions of the MRSH and HVTD stars inferred from different samples and methods show some differences, they confirm that both the MRSH and HVTD stars contain a certain number of young stars ($<9$ Gyr) and old stars ($>9$ Gyr). 
\par
For the young stars ($<9$ Gyr), their formation may not have been affected by the Gaia-Sausage merger. In this regard, the MRSH stars were likely born in situ rather than accreted during the Gaia-Sausage merger. The in-situ population can contain stars formed in the initial gas collapse \citep{Samland03} and/or stars formed in the disk
that have subsequently been kicked out and placed on halo orbits \citep{Zolotov09,Purcell10}. However, it is difficult to distinguish between the MRSH stars formed in the initial gas collapse and those heated from the disk. \cite{Cooper15} listed two different channels by which the initial gas collapse can form the in-situ stellar halo: stars formed from gas smoothly accreted onto the halo, and stars formed in streams of gas stripped from infalling satellites. The `phase wrapping' signature in the disk \citep[e.g.,][]{Fux01,Minchev09,Gomez12,de la Vega15} and some substructures in phase space, such as the Gaia snail and spiral \citep{Antoja18}, are now widely considered to be relics of a recent external perturbation by a satellite or dwarf-galaxy flyby such as Sagittarius \citep[e.g.,][]{Antoja18,Binney18,Laporte19,Bland-Hawthorn19}. In particular, the last pericentre of the orbit of Sagittarius has been shown to have a strong effect on the disk stars \citep{Purcell10,Gomez12,de la Vega15}. These external perturbations may heat up disk stars \citep{Mackereth19a} and subsequently alter their orbits. In addition, radial migration may also explain the origin of the MRSH stars. \cite{El-Badry16} reported that stars in low-mass galaxies experience significant radial migration via two related processes. First, inflowing and outflowing gas clouds driven by stellar feedback can remain star-forming, and the initial orbits of the stars they produce can be eccentric and highly anisotropic. Second, gas outflows and inflows drive strong fluctuations in the overall galactic potential, and stellar orbits are affected by such fluctuations, ultimately becoming heated. \cite{Bonaca17} concluded that this radial migration mechanism could explain the origin of metal-rich stars on halo-like orbits in the solar neighborhood. 

\par 
For the old stars formed in situ ($>9$ Gyr), the Gaia-Sausage merger event may have had a major effect on their formation. The MRSH stars may have formed in an old proto-disk, possibly dynamically heated by the Gaia-Sausage merger, and subsequently been kicked out into the halo. This result is consistent with previous studies \citep[e.g.,][]{Haywood18,Di Matteo18,Gallart19,Belokurov19}. 
The HVTD stars may also have formed in an old proto-disk, but these stars may have been less affected by the Gaia-Sausage merger event than the MRSH stars, so that they retain some properties of the thick disk. 


\section{Summary and Conclusions}
\label{sec:summary}
\par 
Based on a high-resolution sample of G-/K-type giant stars from the APOGEE DR14 
and a low-resolution sample of A-, F-, G-, and K-type stars from the LAMOST spectroscopic survey, combined with the Gaia DR2 survey, 
we obtained high-velocity sample stars ($v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$) in the Toomre diagram. 
From the kinematic and chemical distributions of these high-velocity sample stars, we concluded that the Galaxy hosts 
a metal-rich stellar halo (MRSH) and a high-velocity thick disk (HVTD), and we studied their kinematic and chemical properties. 

\par
The rotational velocity distribution of the sample stars with $v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$ and [Fe/H] $>-1.0$ dex can be well described by a two-Gaussian model, associated with the HVTD and MRSH. We also confirmed that the metallicity distribution of the sample stars can be described by a four-Gaussian model: outer halo, inner halo, MRSH, and HVTD. The HVTD has basically the same rotational velocity and metallicity as the canonical thick disk, and it shares the same dynamical properties as the thick disk; however, its member stars occupy the same position as the halo in the Toomre diagram. The MRSH shows basically the same rotational velocity, orbital eccentricity, and position in the Lindblad and Toomre diagrams as the canonical halo, but its metallicity distribution is similar to that of the thick disk. 

\par
In addition, we found that the outer-halo component still exists within $|z|<3$ kpc, and that the increase of vertical height has little effect on the proportion of the outer-halo component within $|z|<6$ kpc. Among the stars with $v_{{\rm tot}}>220$ ${\rm km\ s^{-1}}$, $|z|<6$ kpc, and $4\lesssim R \lesssim13$ kpc, the inner-halo and MRSH stars make up the vast majority; most of the HVTD stars are within $|z|<3$ kpc and have a higher metallicity than the MRSH stars on average; and the canonical halo contains very few HVTD or MRSH stars. 
The HVTD shows a steeper gradient of rotational velocity with metallicity than the canonical thick disk, while the corresponding gradient for the MRSH is flatter than that of the canonical thick disk. Their chemical and kinematic properties and ages imply that the MRSH and HVTD stars may have formed in situ rather than having been accreted from satellite systems.

\section{Acknowledgements}
\par 
We especially thank the referee for insightful comments and suggestions, which have improved the paper significantly.
This work was supported by the National Natural Science Foundation of China (NSFC Nos. 11973042 and 11973052).
This project was developed in part at the 2016 NYC Gaia Sprint, hosted by the Center for Computational Astrophysics at the Simons Foundation in New York City. The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. 
\par 
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. 
Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges
support and resources from the Center for High-Performance Computing at
the University of Utah. The SDSS web site is www.sdss.org.
\par 
SDSS-IV is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics,
Instituto de Astrof\'isica de Canarias, Johns Hopkins University,
Kavli Institute for the Physics and Mathematics of the Universe (IPMU) /
University of Tokyo, Lawrence Berkeley National Laboratory,
Leibniz Institut f\"ur Astrophysik Potsdam (AIP),
Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg),
Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE),
National Astronomical Observatories of China, New Mexico State University,
New York University, University of Notre Dame,
Observat\'ario Nacional / MCTI, The Ohio State University,
Pennsylvania State University, Shanghai Astronomical Observatory,
United Kingdom Participation Group,
Universidad Nacional Aut\'onoma de M\'exico, University of Arizona,
University of Colorado Boulder, University of Oxford, University of Portsmouth,
University of Utah, University of Virginia, University of Washington, University of Wisconsin,
Vanderbilt University, and Yale University.
\par 
This work has made use of data from the European Space Agency (ESA) mission Gaia (http://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, http://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.