diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlksi" "b/data_all_eng_slimpj/shuffled/split2/finalzzlksi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlksi" @@ -0,0 +1,5 @@ +{"text":"\n\n\n\\section{Introduction}\n\nElectrostatics is an important topic in most calculus-based introductory physics courses.\nAlthough Coulomb's law, superposition principle, and Gauss's law are taught in\nmost of these courses, investigations have shown that these concepts are challenging\nfor students~\\cite{maloney,rainson,singh}. Despite the fact that students may have learned the\nsuperposition principle in the context of forces in introductory mechanics, this learning\ndoes not automatically transfer to the abstract context of electrostatics and students get\ndistracted by the very different surface features of the electrostatics problems.\nEffective application of Gauss's law implicitly requires\nunderstanding the principle of superposition for electric fields and the symmetry\nthat ensues from a given charge distribution.\nHelping students learn these concepts will not only help them build a more coherent\nknowledge structure, it can also improve their reasoning and meta-cognitive skills.\nHere, we discuss the development and evaluation of research-based tutorials and the corresponding \npre-\/post-tests to help students develop a functional understanding of these concepts.\n\n\\vspace{-.05in}\n\\section{Tutorial Development and Administration}\n\n\\input All_Q_by_Q_Rounded1_new\n\nBefore the development of the tutorials, we conducted investigation of student difficulties\nwith these concepts~\\cite{singh} by administering free-response and multiple-choice questions and by\ninterviewing individual students. \nWe found that many students have difficulty distinguishing between the electric charge, field and force.\nStudents also have difficulty with the principle of superposition and in recognizing whether sufficient\nsymmetry exists for a particular charge distribution to calculate the electric field using Gauss's law.\nChoosing appropriate Gaussian surfaces to calculate the electric field using Gauss's law\nwhen sufficient symmetry exists is also challenging for students.\nDistinguishing between electric field and flux was often difficult.\n\n\n\\input All_Rounded_new\n\n\nWe then developed the preliminary version of five tutorials and the corresponding pre-\/post-tests\nbased upon the findings of the difficulties elicited in previous research and a theoretical task analysis of the underlying concepts.\nTheoretical task analysis involves making a fine-grained flow chart of the concepts involved in solving specific class of problems.\nThis type of analysis can help identify stumbling blocks where students may have difficulty.\nThe first two tutorials were developed to help students learn about\nCoulomb's law, superposition principle and symmetry in the context of discrete and continuous charge distributions (conceptually),\nthe third tutorial focused on distinguishing between electric flux and field, and the fourth and fifth\ntutorials dealt with symmetry and Gauss's law and on revisiting superposition principle after Gauss's law.\nAlthough some tutorials on related topics have been developed by the University of Washington group, those tutorials\nare complementary to the ones we have developed focusing on symmetry ideas.\nWe administered each pre-test, tutorial and post-test to 5 students individually\nwho were asked to talk aloud while working on them. 
After each administration, we modified the\ntutorials based upon the feedback obtained from student interviews. These individual administrations \nhelped fine-tune the tutorials and improve their organization and flow. Then, the tutorials\nwere administered to four different calculus-based introductory physics classes with four lecture hours and one recitation\nhour per week. Students\nworked on each tutorial in groups of two or three either during the lecture section of the class or in the\nrecitation depending upon what was most convenient for an instructor. Table 1 shows the pre-\/post-test data\non each question from three of the classes in which the tutorials were administered. The details of each question will be discussed\nelsewhere. In the fourth class, the post-tests were\nreturned without photocopying them and we only have complete data on student performance on the cumulative test\nadministered after all tutorials. As shown in Table 1,\nfor some tutorials, additional questions were included in the pre-test and\/or post-test after the previous\nadministration and analysis of data. The pre-\/post-tests were not identical but focused on the same topics covered in a tutorial.\n\n\n\\input Gauss_Law_Avg_short\n\nAll pre-tests and tutorials were administered after traditional instruction in relevant concepts. Instructors\noften preferred to alternate between lectures and tutorials during the class and give an additional\ntutorial during the recitation. \nThis way all of the five tutorials from Coulomb's law to Gauss's\nlaw were administered within two weeks. For the tutorials administered in lecture section of the class, pre-tests were given to students right before\nthey worked on the tutorials in groups. Since not all students completed a tutorial during the class,\nthey were asked to complete them as part of their homework assignment. At the beginning of the next\nclass, students were given an opportunity to ask for clarification on any issue related to the part of the tutorial\nthey completed at home and then they were administered the corresponding post-test before the lecture began. Each pre-\/post-test\ncounted as a quiz and students were given a full quiz grade for taking each of the pre-test regardless of students' actual performance. \nThe pre-tests were not returned but the post-tests were returned after grading. \nWhen a tutorial was administered in the recitation (the second and fifth tutorials which were shorter), \nthe teaching assistant (TA) was given specific \ninstruction on how to conduct the group work effectively during the tutorial. Moreover, since the TA had to \ngive the post-test corresponding to the tutorial during the same recitation class in which the students worked\non the tutorials (unlike the lecture administration in which\nthe post-tests were in the following class), the pre-tests were skipped for some\nof these tutorials due to a lack of time. \nSometimes, the instructors gave the pre-tests \nin the lecture section of the class for a tutorial that was administered in the recitation. \n\n\nIn all of the classes in which the tutorials were used, 2-2.5 weeks were\nsufficient to cover all topics from Coulomb's law to Gauss's law.\nThis time line is not significantly different from what\nthe instructors in other courses allocated to this material. The main difference between the tutorial\nand the non-tutorial courses is that fewer solved examples were presented in the tutorial classes (students\nworked on many problems themselves in the tutorials). 
We note that since many of the tutorials were administered\nduring the lecture section of the class, \nsometimes two instructors (e.g., the instructor\nand the TA) were present during these ``large\" tutorial sessions to ensure smooth facilitation.\nIn such cases, students working in groups of three were asked to raise their hands for questions and clarifications.\nOnce the instructor knew that a group of students was making good progress, that group was invited to help other\ngroups in the vicinity which had similar questions. Thus, students not only worked in small groups discussing\nissues with each other, some of them also got an opportunity to help those in the other groups.\n\n\\vspace*{-.1in}\n\\section{Discussion}\n\\vspace*{-.05in}\n\n\nOut of the five tutorials that students worked on, the first two focused on Coulomb's law, superposition\nand symmetry. The first tutorial started with the electric field due to a single point charge\nin the surrounding region and then extended this discussion to two or more point charges. The second tutorial further\ncontinued the conceptual discussion that started in the first tutorial (which was mainly about discrete charges) to continuous charge\ndistributions. The tutorials guided students to understand the vector nature of the electric field,\nlearn the superposition principle and recognize\nthe symmetry of the charge distribution. Students worked on examples in which the symmetry of the charge distribution\n(and hence the electric field) was the same but the charges were embedded on objects of different shapes\n(e.g., four equidistant charges on a plastic ring vs. a plastic square). Common misconceptions were explicitly\nelicited often by having two students discuss an issue in a particular context. \nStudents were asked to identify the student with whom they agreed.\n\n\nThe third tutorial was designed to help students learn to distinguish between the electric field and flux. The tutorial\ntried to help students learn that the electric field is a vector while the electric flux is a scalar. Also, electric\nfield is defined at various {\\it points} in space surrounding a charge distribution while the electric flux is always\nthrough an {\\it area}. Students learn about Gauss's law and how to relate the flux through a closed surface to the net\ncharge enclosed. \nRather than emphasizing the symmetry considerations, this tutorial focused on helping students use Gauss's law to find \nthe net flux through a closed surface given the net charge enclosed and vice versa. \n\nThe fourth tutorial was designed to help students learn to exploit Gauss's law to calculate the electric field at a point\ndue to a given charge distribution if a high symmetry exists. Students were helped to draw upon the superposition and symmetry ideas they\nlearned in the first two tutorials to evaluate whether sufficient symmetry exists to exploit Gauss's law to calculate the\nelectric field. Then, students learn to choose the appropriate Gaussian surfaces that would aid in using Gauss's law\nto find the electric field. Finally, they use Gauss's law to calculate the electric field in these cases. The last\ntutorial revisits the superposition principle after students have learned to exploit Gauss's law to calculate the electric field.\nFor example, students learn to find the electric field at a point due to two non-concentric uniform spheres of charge or due to\na point charge and an infinitely long uniform cylinder of charge. 
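For instance, for the latter case (shown here only as an illustration of the kind of calculation involved, not as an excerpt from the tutorial), if the field point lies outside the cylinder, superposition of the two Gauss's-law results gives
\\[
\\vec{E}_{P} = \\frac{1}{4\\pi\\epsilon_{0}}\\frac{q}{r^{2}}\\hat{r} + \\frac{\\lambda}{2\\pi\\epsilon_{0} s}\\hat{s},
\\]
where $r$ is the distance from the point charge to the field point, $s$ is the perpendicular distance from the cylinder's axis, $\\lambda$ is the cylinder's charge per unit length, and each term is obtained by applying Gauss's law to one source alone (a concentric spherical surface for the point charge and a coaxial cylindrical surface for the cylinder).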
\n\nThe pre-tests and post-tests were graded by two individuals and the inter-rater reliability is good.\nThe average pre-\/post-test scores on matched pairs for a particular class graded by them differed at most by a few percent.\nTable 1 shows the student performance (on each question and also overall) on the pre-test and post-test in each of the five tutorials \n(I-V) in percentage. The classes utilizing each tutorial may differ either because additional \npre-\/post-test questions were added or the pre-tests for tutorial II and V were not administered to some of the classes. \nThe differences in the performance of different classes may also be due to the differences in student \nsamples, instructor\/TA differences or the manner in which the tutorials were administered. Table 2 shows the performance of \nstudents on the pre-\/post-tests for each tutorial \npartitioned into three separate groups based upon the pre-test performance (see the Range column). \nAs can be seen from Table 2, tutorials generally\nhelped all students including those who performed poorly on the pre-test.\nTable 3 shows the average percentage scores from a cumulative test which includes concepts from all of the tutorials~\\cite{singh} \nadministered to different student populations. Although the performance of students working on the tutorials is\nnot as impressive on the cumulative test as on the pre-\/post-tests administered with the tutorials, Table 3 shows that students who \nworked through the tutorials significantly outperformed both Honors students and those in upper-level undergraduate courses,\nbut not physics graduate students.\n\n\\vspace*{-.10in}\n\\section{Conclusion}\n\\vspace*{-.06in}\n\nWe developed and evaluated tutorials to help calculus-based introductory students learn Coulomb's law, \nsuperposition, symmetry and Gauss's law. Pre-\/post-tests for each tutorial and a test that \nincludes content on all of the tutorials show that the tutorials can be effective in improving student understanding\nof these concepts. \nMoreover, these tutorials appear to be helpful for students who obtained low scores ($0-33\\%$) on the \npretest after traditional instruction. \n\n\n\\vspace*{-.07in}\n\\begin{theacknowledgments}\nWe are grateful to the NSF for award DUE-0442087.\n\\end{theacknowledgments}\n\\vspace*{-.07in}\n\n\\bibliographystyle{aipproc}\n\\vspace*{-.07in}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDepression or Major Depressive Disorder (MDD) is regarded as one of the most commonly identified mental health problems among young adults in developed countries, accounting for 75\\% of all psychiatric admissions \\cite{boyd1982screening}. Most people who suffer from depression do not acknowledge it, for various reasons, ranging from social stigma to plain ignorance; this means that a vast majority of depressed people remain undiagnosed.\nLack of diagnosis eventually results in suicide, drug abuse, crime and many \nother societal problems. For example, depression has been found to be a major cause behind \n800,000 deaths committed through suicide each year \nworldwide\\footnote{\\url{https:\/\/who.int\/mental_health\/prevention\/suicide\/suicideprevent\/en\/}}. Moreover, the economic burden created by depression is estimated to \nhave been 210 billion USD in 2010 in the USA alone \\cite{greenberg2015economic}. 
Hence, detecting, monitoring and treating depression is very important and there is huge need for effective, inexpensive and almost real-time \ninterventions. In such a scenario, social media, such as, Twitter and Facebook, provide the foundation of a remedy. Social media are very popular among young adults, where depression is prevalent. In addition, it has been found that people who are otherwise socially aloof (and more prone to having depression) can be very active in the social media platforms \\cite{Choudhury2013Pred}. \nAs a consequence, there has been significant depression detection research, based on various social media attributes, such as social network size, social media behavior, and language used in social media posts. Among these multi-modalities, human language alone can be a very good predictor of depression \\cite{Choudhury2013Pred}. However, the main bottle neck of social media posts-based depression detection task is the lack of labelled data to identify rich feature representations, which provide the basis for constructing models that help identify depressive language.\n\nHere we discuss the creation of a word embedding \nthat leverages both the Twitter vocabulary (from pre-trained Twitter word embedding) and depression semantics (from a word embedding created from depression forum posts) to identify depressive Tweets. We believe our proposed methods would significantly relieve us from the burden of curating a huge volume of human annotated data \/ high quality labelled data (which is very expensive and time consuming) to support the learning of better feature representations, and eventually lead to improved classification. In the next sections we provide a brief summary of earlier research together with some background supporting our formulation of our proposed methods for identifying depression from Tweets. \n\nThroughout our paper, we use the phrase ``word embedding'' as an object that consists of word vectors. So by ``word embeddings'' we mean multiple instances of that object.\n\n\\section{Background \\& motivation}\nPrevious studies suggest that the words we use in our daily life can express our mental state, mood and emotion \\cite{Pennebaker2003}. Therefore analyzing language to identify and monitor human mental health problems has been regarded as an appropriate avenue of mental health modeling. With the advent of social media platforms, researchers have found that social media posts can be used as a good proxy for our day to day language usage \\cite{Choudhury2013Pred}. There have been many studies that identify and monitor depression through social media posts in various social media, such as, Twitter \\cite{reece2017forecasting,Choudhury2013RoleSM,Choudhury2013Pred}, Facebook \\cite{schwartz2014towards,moreno2011feeling} and online forums \\cite{yates2017depression}. \n\n\t\nDepression detection from social media posts can be specified as a low resource supervised classification task \nbecause of the paucity of valid data. Although there is no concrete precise definition of valid data, previous research emphasizes collecting social media posts, which are either validated by annotators as carrying clues of depression, or coming from the people who are clinically diagnosed as depressed, or both. 
Based on the methods of depression intervention using these data, earlier research can be mostly divided into two categories: (1) general categories of post-specific depression detection (or depressive language detection) \\cite{Choudhury2013RoleSM,jamil2017monitoring,vioules2018detection}, and (2) user-specific depression detection, which considers all the posts made by a depressed user in a specific time window \\cite{Resnik2013,resnik2015beyond}. The goal of (1) is to identify depression in a more fine grained level, i.e., in social media posts, which further helps in identifying depression inclination of individuals when analyzed by method (2). \n \n\nFor the post specific depression detection task, previous research concentrate on the extraction of depression specific features used to train machine learning models, e.g., building depression lexicons based on unigrams present in posts from depressed individuals \\cite{Choudhury2013Pred}, depression symptom related unigrams curated from depression questionnaires \\cite{cheng2016psychologist}, metaphors used in depressive language \\cite{neuman2012proactive}, or psycholinguistic features in LIWC \\cite{tausczik2010psychological}. For user specific depression identification, variations of topic modeling have been popular to identify depressive topics and use them as features \\cite{resnik2015beyond,Resnik2013}. But recently, some research has used convolutional neural network (CNN) based deep learning models to learn feature representations \\cite{yates2017depression} and \\cite{orabi2018deep}. Most deep learning approaches require a significant volume of labelled data to learn the depression specific embedding from scratch, or from a pre-trained word embedding in a supervised manner. So, in general, both post level and user level depression identification research emphasize the curation of labelled social media posts indicative of depression, which is a very expensive process in terms of time, human effort, and cost. Moreover, previous research showed that a robust post level depression\nidentification system is an important prerequisite for accurately identifying depression at the user level \\cite{Choudhury2013RoleSM}. In addition, most of this earlier research leveraged Twitter posts to identify depression because a huge volume of Twitter posts are publicly available. \n\nTherefore the motivation of our research comes from the need for a better feature representation specific to depressive language, and reduced dependency on a large set of (human annotated) labelled data for depressive Tweet detection task. We proceed as follows: \n\n\n \n\n\\begin{enumerate}\n \\item We create a word embedding space that encodes the semantics of depressive language from a\n small but high quality \n depression corpus curated from depression related public forums.\n \\item We use that word embedding to create feature representations for our Tweets and feed them to our machine learning models to identify depressive Tweets; this achieves good accuracy, even with very small amount of labelled Tweets.\n \\item Furthermore, \n \nwe adjust a pre-trained Twitter word embedding based on our depression specific word embedding, using a non-linear mapping between the embeddings (motivated by the work of \\cite{mikolov2013exploiting} and \\cite{smith2017offline} on bilingual dictionary induction for machine translation), and use it to create feature representation for our Tweets and feed them to our machine learning models. 
This helps us achieve 4\\% higher F1-score than our strongest baseline in depressive Tweets detection.\n\\end{enumerate}\n\nAccuracy improvements mentioned in points 2 and 3 above are true for a high quality dataset curated through rigorous human annotation, \nas opposed to the low quality dataset with less rigorous human annotation; this indicates the effectiveness of our proposed feature representations for depressive Tweets detection. To the best of our knowledge, ours is the first effort to build a depression specific word embedding for identifying depressive Tweets, and to formulate a method to gain further improvements on top of it, then to present a comprehensive analysis on the quantitative and qualitative performance of our embeddings.\n \n\\section{Datasets} \\label{sec:datasets}\nHere we provide the details of our two datasets that we use for our experiments and their annotation procedure, the corpus they are curated from and their quality comparisons.\n\n\\subsection{Dataset1} Dataset1 is curated by the ADVanced ANalytics for data SciencE (ADVANSE) research team at the University of Montpellier, France \\cite{vioules2018detection}. This dataset contains Tweets having key-phrases generated from the American Psychiatric Association (APA)'s list of risk factors and the American Association of Suicidology (AAS)'s list of warning signs related to suicide. Furthermore, they randomly investigated the authors of these Tweets to identify 60 distressed users who frequently write about depression, suicide and self mutilation. They also randomly collected 60 control users.\nFinally, they curated a balanced and human annotated dataset of a total of around 500 Tweets, of which 50\\% Tweets are from distressed and 50\\% are from control users, with the help of seven annotators and one professional psychologist. The goal of their annotation was to provide a distress score (0 - 3) for each Tweet. They reported a Cohen's kappa agreement score of 69.1\\% for their annotation task. Finally, they merged Tweets showing distress level 0, 1 as control Tweets and 2, 3 as distressed Tweets. \\textit{Distressed Tweets} carry signs of suicidal ideation, self-harm and depression while control Tweets are about daily life occurrences, such as weekend plans, trips and common distress such as exams, deadlines, etc. We believe this dataset is perfectly suited for our task, and we use their distressed Tweets as our depressive Tweets and their control as our control. \n\n\\subsection{Dataset2}\nDataset2 is collected by a research group at the University of Ottawa \\cite{jamil2017monitoring}. They first filtered depressive Tweets from \\#BellLetsTalk2015 (a Twitter campaign) based on keywords such as, suffer, attempt, suicide, battle, struggle and first person pronouns. \nUsing topic modeling, they removed Tweets under the topics of public campaign, mental health awareness, and raising money. They further removed Tweets which contain mostly URLs and are very short. Finally, from these Tweets they identified 30 users who self-disclosed their own depression, and 30 control users who did not. They \nemployed two annotators to label Tweets from 10 users as either depressed or non-depressed. They found that their annotators labelled most Tweets as non-depressed. To reduce the number of non-depressive Tweets, they further removed neutral Tweets from their dataset, as they believe neutral Tweets surely do not carry any signs of depression. 
After that, they annotated Tweets from the remaining 50 users with the help of two annotators with a Cohen's kappa agreement score of 67\\%. Finally, they labelled a Tweet as depressive if any one of their two annotators agree, to gather more depressive Tweets. This left them with 8,753 Tweets with 706 depressive Tweets.\n\n\\subsection{Quality of Datasets} Here we present a comparative analysis of our datasets based on their curation process and the linguistic components present in them relevant to depressive language detection as follows:\n\\subsubsection{Analysis based on data curation process:}\nWe think Dataset2 is of lower quality compared to Dataset1 for the following reasons: (1) this dataset is collected from the pool of Tweets which is a part of a mental health campaign, and thus compromises the authenticity of the Tweets; (2) the words they used for searching depressive Tweets are not validated by any depression or suicide lexicons; (3) although they used two annotators (none of them are domain experts) to label the Tweets, they finally considered a Tweet as depressive if at least one annotator labelled it as depressive, hence introduced more noise in the data; (4) it is not confirmed how they identified neutral Tweets since their neutral Tweets may convey depression as well; (5) they identified a person is depressed if s\/he disclose their depression, but they did not mention how they determined these disclosures. Simple regular expression based methods to identify these self disclosures can introduce a lot of noise in the data. In addition, these self disclosures may not be true.\n\n\\subsubsection{Analysis based on linguistic components present in the dataset:} \\label{subsect: LIWC}\nFor this analysis, we use Linguistic Inquiry and Word Count (LIWC) \\cite{tausczik2010psychological}. LIWC is a tool widely used in psycholinguistic analysis of language. It extracts the percentage of words in a text, across 93 pre-defined categories, e.g., affect, social process, cognitive processes, etc. To analyse the quality of our datasets, we provide scores of few dimensions of LIWC lexicon relevant for depressive language detection \\cite{nguyen2014affective}, \\cite{Choudhury2013Pred} and \\cite{kuppens2012emotional}, such as, 1st person pronouns, anger, sadness, negative emotions, etc, see Table \\ref{tab:LIWC-feats} for the depressive Tweets present both in our datasets. The bold items in that table shows significant score differences in those dimensions for both datasets and endorses the fact that Dataset1 indeed carries more linguistic clues of depression than Dataset2 (the higher the score, the more is the percentage of words from that dimension is present in the text). Moreover, Tweets labelled as depressive in Dataset2 are mostly about common distress of everyday life unlike those of Dataset1, which are indicative of severe depression. Figures \\ref{fig:wordcloud1} and \\ref{fig:wordcloud2} depict the word clouds created from Dataset1 and Dataset2 depressive Tweets respectively. We provide few random samples of Tweets from Dataset1 and Dataset2 at Table \\ref{tab:exampleTweets} as well. 
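To make the scores in Table \\ref{tab:LIWC-feats} concrete, the percentage-of-words computation that LIWC performs can be sketched as follows. LIWC itself is a commercial tool, so the two category word lists below are hypothetical stand-ins for its 93 categories, used only to illustrate the scoring:
\\begin{verbatim}
# Sketch of LIWC-style scoring: percentage of tokens falling in each category.
# The category word lists are illustrative stand-ins, NOT the actual LIWC dictionary.
from collections import Counter

CATEGORIES = {
    "first_person": {"i", "me", "my", "mine", "myself"},
    "sadness":      {"sad", "crying", "grief", "lonely", "hopeless"},
}

def category_scores(tweets):
    tokens = [tok for tweet in tweets for tok in tweet.lower().split()]
    counts = Counter()
    for tok in tokens:
        for cat, words in CATEGORIES.items():
            if tok in words:
                counts[cat] += 1
    return {cat: 100.0 * counts[cat] / len(tokens) for cat in CATEGORIES}

print(category_scores(["I feel so sad and lonely", "my exam went fine"]))
# -> {'first_person': 20.0, 'sadness': 20.0}
\\end{verbatim}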
\n\n\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|C{1.5cm}|C{1.7cm}|C{1.9cm}|C{1.9cm}|}\n\\hline\n\\textbf{LIWC Category} & \\textbf{Example Words} & \\textbf{Dataset1 Depressive Tweets score} & \\textbf{Dataset2 Depressive Tweets score} \\\\ \\hline\n\\textbf{1st person pronouns} &I, me, mine &12.74 &7.06\\\\ \\hline\n\\textbf{Negations} &no, not, never &3.94 &2.63 \\\\ \\hline\nPositive Emotion &love, nice, sweet &2.79 &2.65 \\\\ \\hline\n\\textbf{Negative Emotion} &hurt, ugly, nasty &8.59 &6.99 \\\\ \\hline\nAnxiety &worried, fearful &0.72 &1.05 \\\\ \\hline\nAnger &hate, kill, annoyed &2.86 &2.51 \\\\ \\hline\n\\textbf{Sadness} &crying, grief, sad &3.29 &1.97 \\\\ \\hline\nPast Focus &ago, did, talked &2.65 &3 \\\\ \\hline\n\\textbf{Death} &suicide, die, overdosed &1.43 &0.44 \\\\ \\hline\n\\textbf{Swear} &fuck, damn, shit &1.97 &1.39 \\\\ \\hline\n\\end{tabular}\n\\caption{Score of Dataset1 and Dataset2 in few LIWC dimensions relevant to depressive language detection}\n\\label{tab:LIWC-feats}\n\\end{table}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.3 \\textwidth]{img\/wc_bal.png}\n\\caption{\\label{fig:wordcloud1} Dataset1 depressive Tweets word cloud }\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.3 \\textwidth]{img\/wc_imbal.png}\n\\caption{\\label{fig:wordcloud2} Dataset2 depressive Tweets word cloud}\n\\end{figure}\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{ |l|p{6cm}| }\n\\multicolumn{1}{ c }{}\\\\\n\\hline\n\\textbf{Datasets} & \\textbf{Depressive Tweets} \\\\ \\hline\n\\multirow{1}{*}{Dataset1} & ``I wish I could be normal and be happy and feel things like other people'' \\\\ \\cline{2-2}\n & ``I feel alone even when I'm not'' \\\\ \\cline{2-2}\n & ``Yesterday was difficult...and so is today and tomorrow and the days after...'' \\\\\n \\hline\n\\multirow{1}{*}{Dataset2} & ``Last night was not a good night for sleep... so tired And I have a gig tonight... yawnnn'' \\\\ \\cline{2-2}\n & ``So tired of my @NetflixCA app not working, I hate Android 5'' \\\\ \\cline{2-2}\n & ``I have been so bad at reading Twitter lately, I don't know how people keep up, maybe today I'll do better'' \\\\\n \\hline\n\\end{tabular}\n\\caption{Sample Tweets from Dataset1 and Dataset2}\n\\label{tab:exampleTweets}\n\\end{table}\n\n\\subsection{Why these two datasets?}\nWe believe these two datasets represent the two broad categories of publicly available Twitter datasets for the depressive Tweets identification task. One category relies on keywords related to depression and suicidal ideation to identify depressive Tweets and employed rigorous annotation to filter out noisy Tweets (like Dataset1); the other relies on self disclosures of Twitter users to identify depressive Tweets and employed less rigorous annotation (like Dataset2) to filter noisy Tweets. So any other datasets that fall into one of above categories or do not go through annotation procedure atleast like Dataset2, such as datasets released in a CLPsych 2015 shared task \\cite{coppersmith2015clpsych} are not evaluated in this research. Moreover, our Dataset2 is a representative of imbalanced dataset (with fewer depressive Tweets than non-depressive Tweets) which is a very common characteristic of the datasets for depressive Tweets identification task. It is also to be noted that we are interested in depressive Tweets identification, so our datasets are from Twitter, not from depression forums. 
We are using depression forum posts only to learn improved word embedding feature representation that can help us identifying depressive Tweets in the above mentioned Twitter datasets.\n\n\\subsection{Creating a depression specific corpus}\n\\label{subsec: depcorpora}\nTo build a depression specific word embedding, we curate our own depression corpus. \nFor this, we collect all the posts from the Reddit depression forum: r\/depression \\footnote{\\url{reddit.com\/r\/depression\/}} between 2006 to 2017 and all those from Suicidal Forum \\footnote{\\url{suicideforum.com\/}} and concatenated for a total of 856,897 posts. We choose these forums because people who post anonymously in these forums usually suffer from severe depression and share their struggle with depression and its impact in their personal lives \\cite{de2014mental}. We believe these forums contain useful semantic components indicative of depressive language. \n\n\n\\section{Feature extraction methods} \\label{sec:features}\n\\subsection{Bag-of-Words (BOW)} \\label{subsec:bow} We represent each Tweet as a vector of vocabulary terms and their frequency counts in that Tweet, also known as bag-of-words. The vocabulary terms refer to the most frequent 400 terms existing in the training set. Before creating the vocabulary and the vector representation of the Tweets, we perform the following preprocessing: (1) we make the Tweets all lowercase, then (2) tokenize them using the NLTK Tweet tokenizer \\footnote{\\url{nltk.org\/api\/nltk.tokenize.html}}; the reason for using Tweet tokenizer is to consider Tweet emoticons (:-)), hashtags (\\#Depression) and mentions (@user) as single tokens; we then (3) remove all stop words except the first person pronouns such as, I, me and my (because they are useful for depression detection) and then (4) use NLTK porter stemmer \\footnote{\\url{nltk.org\/_modules\/nltk\/stem\/porter.html}}. \nStemming helps us reduce sparsity of the bag-of-words representations of Tweets. \n\n\\subsection{Lexicons} We experimented with several emotion and sentiment lexicons, such as, LabMT \\cite{dodds2011temporal}, Emolex \\cite{mohammad2013nrc}, AFINN \\cite{nielsen2011new}, LIWC \\cite{tausczik2010psychological}, VADER \\cite{gilbert2014vader}, NRC-Hashtag-Sentiment-Lexicon (NHSL) \\cite{kiritchenko2014sentiment}, NRC-Hashtag-Emotion-Lexicon (NHEL) \\cite{COIN:COIN12024} and CBET \\cite{shahrakilexical}. Among these lexicons we find LIWC and NHEL perform the best and hence we report the results of these two lexicons. The following subsections provide a brief description of LIWC, NHEL and lexicon-based representation of Tweets.\n\n\\subsubsection{Linguistic Inquiry and Word Count (LIWC):} \nLIWC has been widely used as a good baseline for depressive Tweet detection in earlier research \\cite{nguyen2014affective,coppersmith2014quantifying}. We use it to convert a Tweet into a fixed length vector representation of 93 dimensions, that is then used as the input for our machine learning models. \n\n\\subsubsection{NRC Hashtag Emotion Lexicon (NHEL):} \nIn NHEL there are 16,862 unigrams, each of which are associated with a vector of 8 scores for 8 emotions, such as, anger, anticipation, disgust, fear, joy, sadness, surprise and trust. 
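The NHEL feature described in this subsection reuses the tokenization pipeline from the Bag-of-Words subsection above; a minimal sketch of that pipeline (assuming the NLTK stopword list has been downloaded) is:
\\begin{verbatim}
# Sketch of the Tweet preprocessing used for BOW and reused for the lexicon features.
# Requires: nltk.download("stopwords")
from nltk.tokenize import TweetTokenizer
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

KEEP = {"i", "me", "my", "mine", "myself"}         # first person pronouns we retain
STOP = set(stopwords.words("english")) - KEEP
tokenizer = TweetTokenizer()
stemmer = PorterStemmer()

def preprocess(tweet):
    tokens = tokenizer.tokenize(tweet.lower())     # keeps emoticons, #hashtags, @mentions
    tokens = [t for t in tokens if t not in STOP]
    return [stemmer.stem(t) for t in tokens]

# For the BOW features, a CountVectorizer limited to the 400 most frequent
# training-set terms can be built on top of this tokenizer, e.g.
# CountVectorizer(tokenizer=preprocess, lowercase=False, max_features=400).
\\end{verbatim}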
Each of the scores (a real value between $0$ and $\\infty$) indicate how much a particular unigram is associated with each of the 8 emotions.\nIn our experiments, we tokenize each Tweet as described in the Bag-of-Words (BOW) section, then we use the lexicon to determine a score for each token in the Tweet; finally, we sum them to get a vector of 8 values for each Tweet, which represents the expressed emotions in that tweet and their magnitude. Finally, we use that value as a feature for our machine learning models.\n\n\\subsection{Distributed representation of words} \\label{subsec:wordembed}\nDistributed representation of words (also known as word embedding (WE) or a collection of word vectors \\cite{Mikolov2013}) capture the semantic and syntactic similarity between a word and its context defined by its neighbouring words that appear in a fixed window, and has been successfully used as a compact feature representation in many downstream NLP tasks.\nPrevious research show that a domain specific word embedding is usually better for performing domain specific tasks than a general word embedding, e.g., \\cite{bengio2014word} proposed word embedding for speech recognition, \\cite{tang2014learning} proposed the same for sentiment classification and \\cite{asgari2015continuous} for representing biological sequences. Inspired by these works, we here report the construction of depression specific word embedding, \nin an unsupervised manner, \nIn addition, we report that the\nword embedding resulting from a non-linear mapping between general (pre-trained) word embedding and depression specific word embedding can be a very useful feature representation for our depressive Tweet identification task. \n\nOur embedding adjustment method has some similarity to embedding retrofitting proposed by \\cite{faruqui:2014:NIPS-DLRLW} and embedding refinement proposed by \\cite{yu2018refining}, in the sense that we also adjusted (or retrofitted) our pre-trained embedding. However, there is a major difference between their method and ours. They only adjusted those words, $w \\in V$, where $V$ is the common vocabulary between their pre-trained embedding and semantic\/sentiment lexicons, e.g. WordNet \\cite{miller1995wordnet}, ANEW \\cite{nielsen2011new} etc. By this adjustment they brought each word in the pre-trained embedding closer to the other words which are semantically related to them (as defined in the semantic lexicons) through an iterative update method, where all these words are member of $V$. So their method strictly depends on semantic lexicons and their vocabularies. In depression detection research, where labelled data is scarce and human annotation is expensive, building depression lexicons (given there is no good publicly available depression lexicons) and using them for retrofitting is counter intuitive. Even if we create one, there is a good chance that its vocabulary would be limited. Also, most importantly, there is no comprehensive discussion\/analysis in their paper on how to retrofit those words which are only present in pre-trained embedding but not in semantic lexicons or out-of-vocabulary (OOV) words.\nIn our method we do not depend on any such semantic lexicons. We retrofitted a general pre-trained embedding based on the semantics present in depression specific embedding through a non-linear mapping between them. Our depression specific embedding is created in an unsupervised manner from depression forum posts. 
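As a minimal, illustrative sketch of how such an embedding can be trained (the input file name is hypothetical; the hyper-parameters shown match those listed for DE below, using the gensim 4.x API):
\\begin{verbatim}
# Sketch: train a skip-gram embedding on depression-forum sentences with gensim.
# "forum_sentences.txt" is a hypothetical file, one tokenized sentence per line.
from gensim.models import Word2Vec

sentences = [line.split() for line in open("forum_sentences.txt", encoding="utf-8")]
de = Word2Vec(sentences,
              vector_size=400,   # same dimensionality as the pre-trained Twitter embedding
              sg=1,              # skip-gram
              negative=5,        # negative sampling
              window=5,
              min_count=10,
              workers=4)
de.wv.save("depression_embedding.kv")   # keyed vectors, reused later for the mapping
\\end{verbatim}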
Moreover, through our mapping process we learn a transformation matrix, see Equation \\ref{eq:3}, that can be further used to predict embedding for OOVs and this helps us to achieve better accuracy (see Table \\ref{tab:embed-augment}).\n\n\n\nInterestingly, there have been no attempts taken in depressive language detection research area which primarily focus on building better depression specific embedding in an unsupervised manner, then further analyse its use in augmenting a general pre-trained embedding. Very recently \\cite{orabi2018deep} proposed a multi-task learning method which learns an embedding in a purely supervised way by simultaneously performing (1) adjacent word prediction task from one of their labelled train dataset and (2) depression\/PTSD sentence prediction task again from the same labelled train dataset, where this labelled dataset was created with the help of human annotation. We have no such dependency on labelled data. Also, we have disjoint sets of data for learning\/adjusting our embedding and detecting depressive Tweets across all our experiments, unlike them, which makes our experiments fairer than theirs. They did not provide any experimental result and analysis on depressive post (or Tweet) identification rather on depressive Twitter user identification which is fundamentally different from our task. Also, our paper discusses the transferability of depressive language specific semantics from forums to microblogs, which is not the focus of their paper. \nFinally, we argue that the dataset they used for depression detection is very noisy and thus not very suitable for the same (See ``Quality of Datasets'' section). \n\nIn the following subsections we describe different word embeddings used in our experiments. \n\n\\subsubsection{General Twitter word Embedding (TE):} We use a pre-trained 400 dimensional skip-gram word embedding learned from $400$ million Tweets with vocabulary size of $3,039,345$ words \\cite{godin2015multimedia} as a representative of word embedding learned from a general dataset (in our case, Tweets); we believe this captures the most relevant vocabulary for our task. The creator of this word embedding used negative sampling ($k = 5$) with a context window size = $1$ and mincount = $5$. Since it is pre-trained, we do not have control over the parameters it uses and simply use it as is. \n\n\\subsubsection{Depression specific word Embedding (DE):} We create a 400 dimensional depression specific word embedding (DE) on our curated depression corpus. \nFirst, we identify sentence boundaries in our corpora based on punctuation, such as: ``?'',``!'' and ``.''.\nWe then feed each sentence into \na skip-gram based word2vec implementation in gensim\n\\footnote{\\url{radimrehurek.com\/gensim\/models\/word2vec.html}}. We use negative sampling ($k=5$) with the context window size = $5$ and mincount = $10$ for the training of these word embeddings. DE has a vocabulary size of $29,930$ words. We choose skip-gram for this training because skip-gram learns good embedding from a small corpus \\cite{mikolov2013efficient}.\n\n\n\\subsubsection{Adjusted Twitter word Embedding (ATE): a non-linear mapping between TE and DE:} \nIn this step, we create a non-linear mapping between TE and DE. 
\nTo do this, we use a Multilayer Perceptron Regressor (MLP-Regressor) with a single hidden layer with 400 hidden units and Rectified Linear Unit (ReLU) activations (from hidden to output layer), which attempts to minimize the Minimum Squared Error (MSE) loss function, $\\mathcal{F(\\theta)}$ in Equation \\ref{eq:1}, using stochastic gradient descent: \n\n\\begin{equation}\n\\label{eq:1}\n\\mathcal{F(\\theta)} = \\argmin_\\theta (\\mathcal{L(\\theta)})\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq:2}\n\\mathcal{L(\\theta)} = \\frac{1}{m}\\sum_{i=1}^{m}||g_i(x) - y_i||_{2}^{2}\n\\end{equation} \nand\n\\begin{equation}\n\\label{eq:3}\ng(x) = ReLU(b_1 + W_1(b_2+W_2x))\n\\end{equation} \nhere, $g(x)$ is the non-linear mapping function between the vector $x$ (from TE) and $y$ (from DE) of a word $w \\in V$, where, $V$ is a common vocabulary between TE and DE; $W_1$ and $W_2$ are the hidden-to-output and input-to-hidden layer weight matrices respectively, $b_1$ is the output layer bias vector and $b_2$ is the hidden layer bias vector (all these weights and biases are indicated as $\\theta$ in Equation \\ref{eq:1}) \nIn Equation \\ref{eq:2}, $m$ is the\nlength of $V$ (in our case it is 28,977). \nOnce the MLPR learns the $\\theta$ that minimizes $\\mathcal{F(\\theta)}$, \nit is used to predict the vectors for the words in TE which are not present in DE (i.e., out of vocabulary(OOV) words for DE). After this step, we finally get an adjusted Twitter word embedding\nwhich encodes the semantics of depression forums as well as word coverage from Tweets. We call these embedding the Adjusted Twitter word Embedding (ATE). The whole process is depicted in Figure \\ref{fig:Mapper}. \n\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.5 \\textwidth] {img\/MLPRegressor.pdf}\n\\caption{\\label{fig:Mapper} Non-linear mapping of TE to DE (creation of ATE)}\n\\end{figure}\n\n\\subsubsection{ATE original (ATE(orig.)):} We report our general Twitter embedding (TE) adjusted by DE(original) or DE(orig.) and we name it adjusted Twitter embedding, ATE(original) or ATE(orig.). DE(orig.) is created with same parameter settings as our TE. We show that our DE with more frequent words (trained with mincount=10) and a bit larger context (context window size=5) help us create an improved ATE.\n\n\nA summary of the vocabulary sizes and the corpus our embedding sets are built on is provided in Table \\ref{tab:corpusStat}.\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|C{2cm}|C{2cm}|c|c|}\n\\hline\n\\textbf{Word Embeddings} & \\textbf{Corpus Type} & \\textbf{\\#Posts} & \\textbf{Vocab. Size} \\\\ \\hline\nTE, ATE and ATE (orig.) &Twitter &400M &3M\\\\ \\hline\nDE &Depression Forums &1.5M &30K \\\\ \\hline\n\\end{tabular}\n\\caption{Corpus and vocabulary statistics for word embeddings}\n\\label{tab:corpusStat}\n\\end{table}\n\n\n\\subsubsection{Conditions for embedding mapping\/adjustment:} Our non-linear mapping between two embeddings works better given that those two embeddings are created from the same word embedding creation algorithm (in our case skip-gram) and have same number of dimensions (i.e. 400). We also find that a non-linear mapping between our TE and DE produces slightly better ATE than a linear mapping for our task, although the former is a bit slower.\n\n\\subsection{Other embedding augmentation methods:} We experiment with two more embedding augmentation methods. 
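Before describing them, we give a minimal sketch of the mapper defined in Equations \\ref{eq:1}--\\ref{eq:3}, assuming TE and DE are loaded as gensim keyed vectors. Note that scikit-learn's \\texttt{MLPRegressor} applies the ReLU to the hidden layer with a linear output, which is a close stand-in for Equation \\ref{eq:3}, and the assembly of ATE at the end is one straightforward choice:
\\begin{verbatim}
# Sketch of the TE -> DE mapping (Eqs. 1-3) and OOV prediction with scikit-learn.
# "te" and "de" are assumed to be gensim KeyedVectors loaded beforehand.
import numpy as np
from sklearn.neural_network import MLPRegressor

common = [w for w in te.key_to_index if w in de.key_to_index]  # shared vocabulary V
X = np.vstack([te[w] for w in common])                         # source vectors (TE)
Y = np.vstack([de[w] for w in common])                         # target vectors (DE)

mapper = MLPRegressor(hidden_layer_sizes=(400,), activation="relu",
                      solver="sgd", max_iter=200)
mapper.fit(X, Y)                                               # minimise the MSE of Eq. (2)

# ATE: keep the DE vectors for the shared words and predict vectors for the
# words that occur in TE but not in DE (the OOV words for DE).
oov = [w for w in te.key_to_index if w not in de.key_to_index]
ate = {w: de[w] for w in common}
ate.update(zip(oov, mapper.predict(np.vstack([te[w] for w in oov]))))
\\end{verbatim}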
These two additional methods are slightly more complex extensions of our proposed methods and do not necessarily surpass them in accuracy; we therefore do not report them in the `Results Analysis' section but discuss them briefly here.\n\n\\subsubsection{ATE-Centroids:} \nWe propose two further embedding-augmentation methods that can be seen as extensions of our ATE construction method; collectively, we call them ATE-Centroids. For a word $w \\in V$, where $V$ is the common vocabulary between TE and DE, we learn a non-linear mapper (like the one used to create ATE) that does the following: (1) it minimizes the squared Euclidean distance between the embedding of $w$ in TE and the centroid (the average of the word vectors) computed from $w$ and its neighbours in DE (i.e., the words closest to it in Euclidean distance in DE); we call the resulting embedding ATE-Centroid-1. (2) In addition to (1), it maximizes the distance between $w$ in TE and the centroid computed from the word vectors of the words distant from $w$ in DE; we call the resulting embedding ATE-Centroid-2. After learning, the mapper is used to predict vectors for OOV words (the words that are in TE but not in DE). In summary, operations (1) and (2) adjust a source pre-trained embedding (in our case TE) by pushing each of its words closer to words of similar semantics and away from words of distant semantics, as defined by our target embedding (in our case DE). \n\nThese methods obtain average F1 scores that are 0.7\\% below those of our best models on our two datasets. The reason for this slight under-performance could be that the mapping process is less accurate than in our best models. The stability of the F1 scores (i.e., their standard deviation) is not significantly different from that of our best models, at around $0.020$ on average across our datasets. See Tables \\ref{tab:meta-embeddings} and \\ref{tab:meta-embeddings-centroids}.\n\n\\subsubsection{Averaged Autoencoded Meta-embedding (AAEME):} We also try a state-of-the-art variational-autoencoder-based meta-embedding \\cite{bollegala2018learning} created from our TE and DE. In this method, an encoder encodes the word vector of each $w \\in V$, where $V$ is the common vocabulary between TE and DE. The average of the encoded vectors is then computed, which \\cite{bollegala2018learning} call the ``meta-embedding'' of $w$. Finally, a decoder is used to reconstruct the corresponding TE and DE vectors of $w$ from that meta-embedding. In this way we gradually learn a meta-embedding that is intended to retain the useful semantics of both TE and DE for all the words in $V$.\n\nWith this meta-embedding (which we call ATE-AAEME), we achieve F1 scores that are on average 3.45\\% lower than those of our best model on both datasets. Since this method works better with a larger common vocabulary between the embeddings, we also learn a meta-embedding from TE and ATE (instead of DE), which we call ATE-AAEME-OOV. This slightly improves the F1 score (by 2\\%), but on both datasets the resulting F1 score is still 1.36\\% lower (on average) than that of our best models. Moreover, ATE-AAEME-OOV yields F1 scores that are 1.3\\% less stable than those of our best model on Dataset1 and only 0.09\\% more stable on Dataset2. 
So we observe that the performance of AAEME method is significantly dependent on an efficient mapper function that we outlined in this paper.\nSee Table \\ref{tab:meta-embeddings} and \\ref{tab:meta-embeddings-centroids}.\n\n\\subsection{Word embedding representation of Tweets:}\nFor our standard machine learning models, we represent a Tweet by taking the average of the vector of the individual words in that Tweet, ignoring the ones that are out of vocabulary. For our deep learning experiments, we take the vector of each word in a Tweet and concatenate them to get a word vector representation of the Tweet. Since this approach will not create a fixed length word vector representation, we pad each tweet to make their length equal to the maximum length Tweet in the training set. In the next sections we provide detailed technical descriptions of our word experimental setup.\n\n\n\\section{Experimental setup} \n\nWe experiment with all the\n28 combinations from seven feature extraction methods, such as, BOW, NHEL, LIWC, TE, DE, ATE, ATE(orig.) and four standard machine learning models, such as, Multinomial Na\\\"{\\i}ve Bayes (NB), Logistic Regression (LR), Linear Support Vector Machine (LSVM) and Support Vector Machine with radial basis kernel function (RSVM). In addition, \nwe run experiments on all our four word embeddings and a randomly initialized embedding representations combined with our deep learning model (cbLSTM) to further analyse the efficacy of our proposed word embeddings in deep learning setting. We run all these experiments in our datasets (i.e. Dataset1 and Dataset2).\n\n \\subsubsection{Train-test splits:} For a single experiment, we split all our data into a disjoint set of training (70\\% of all the data) and testing (30\\% of all the data) (see Table \\ref{tab:splits}). \n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Datasets} & \\textbf{Train} & \\textbf{Test} \\\\ \\hline\nDataset1 & 355(178) & 152(76) \\\\\nDataset2 & 6127(613) & 2626(263) \\\\ \\hline\n\\end{tabular}\n\\caption{Number of Tweets in the train and test splits for the two datasets. The number of depressive Tweets is in parenthesis.\n}\n\\label{tab:splits}\n\\end{table}\n\nWe use stratified sampling so that the original distribution of labels is retained in our splits. Furthermore, with the help of 10-fold cross validation in our training set, we learn the best parameter settings for all our model-feature extraction combinations, except for those that require no such parameter tuning. We then find the performance of the best model on our test set. \n\nWe have run 30 such experiments on 30 random train-test splits. Finally, we report the performance of our model-feature extraction combinations based on the Precision, Recall, and F1 score averaged over the test sets of those 30 experiments. \n\n\\subsubsection{Standard machine learning model specific settings:} \nFor the SVMs and LR, we tune the parameter, $C \\in \\{ 2^{-9}, 2^{-7}, \\dots, \n2^5 \\}$ and \nadditionally, $\\gamma \\in \\{ 2^{-11}, 2^{-9},\\dots, 2^2 \\}$ for the RSVM (see scikit-learn SVM \\footnote{\\url{http:\/\/scikit-learn.org\/stable\/modules\/svm.html}} and LR \\footnote{\\url{http:\/\/scikit-learn.org\/stable\/modules\/svm.html}} docs for further description of these parameters). 
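Concretely, the feature construction and tuning for these models can be sketched as follows (the embedding lookup, \\texttt{tweets} and \\texttt{labels} are assumed inputs from the training split; the min-max scaling mentioned below is folded into the pipeline):
\\begin{verbatim}
# Sketch: averaged word-vector features + min-max scaling + RSVM grid search.
# "embedding" is a word -> 400-d vector lookup (e.g. the ATE vectors);
# "tweets" and "labels" are assumed to come from the training split.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def tweet_vector(tokens, embedding, dim=400):
    vecs = [embedding[t] for t in tokens if t in embedding]   # ignore OOV tokens
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.vstack([tweet_vector(t.lower().split(), embedding) for t in tweets])

pipe = Pipeline([("scale", MinMaxScaler()),
                 ("svm", SVC(kernel="rbf"))])
grid = {"svm__C":     [2.0 ** k for k in range(-9, 6, 2)],    # 2^-9 ... 2^5
        "svm__gamma": [2.0 ** k for k in range(-11, 3)]}      # 2^-11 ... 2^2
search = GridSearchCV(pipe, grid, scoring="f1", cv=10)        # 10-fold CV on the train split
search.fit(X, labels)
\\end{verbatim}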
We use min-max feature scaling for all our features.\n\n\\subsubsection{Deep learning model specific settings:} We use a state of the art deep learning model which is a combination of Convolutional Neural Network (CNN) layer followed by a Bidirectional Long Short Term Memory (Bi-LSTM) layer (see Figure \\ref{fig:cbLSTM}) inspired by the work of \\cite{Zhou2015ACN} and \\cite{nguyen2017deep}, which we name as cbLSTM.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.15 \\textwidth] {img\/cbLSTM.pdf}\n\\caption{\\label{fig:cbLSTM} Deep learning model (cbLSTM) architecture}\n\\end{figure}\n\n\nFrom the train splits (as described in ``Train-test splits'' section), the deep learning model separates 10\\% of samples for validation purpose and reports the result on test set. Although we have the liberty to learn our pre-trained word embedding in our deep learning model, we keep the embedding layer untrainable so that we can report the results that reflect only the effectiveness of pre-trained embedding, not the learned ones. Moreover, we report results on random initialized embedding to show how the other embeddings improved upon it. Since we have a smaller dataset, learning embedding do not introduce added value. \n\n\n\\section{Results analysis} \\label{subsec:results}\n\n\\subsection{Quantitative performance analysis} \nHere we report the average results (i.e., average Precision, Recall and F1) for the best performing combinations among all the 28 combinations of our standard machine learning and feature extraction methods (as described in ``Experimental setup'' section). We also report the same for our four word embeddings combined with the deep learning (cbLSTM) model. We report these results separately for our Dataset1 and Dataset2.\n\nMoreover, we report the results of two experiments, one by \\cite{vioules2018detection} for Dataset1 and another by \\cite{jamil2017monitoring} for Dataset2, where they use their own depression lexicons as a feature representation for their machine learning models. We report these two previous results because these are the most recent \nresults on depressive Tweets identification task. See Tables \\ref{tab:dataset1-results} and \\ref{tab:dataset2-results}.\n\n\\subsubsection{Standard machine learning models:}\nIn general, Tweet level depression detection is a tough problem and a good F1 score is hard to achieve \\cite{jamil2017monitoring}. Still, our LSVM-ATE achieves an average F1 score of $0.8238 \\pm 0.0259$ which is 4\\% better than our strongest baseline (RSVM-TE) with average F1 score of $0.7824 \\pm 0.0276$ and 11\\% better than \\cite{vioules2018detection} with F1 score of 0.71 in Dataset1, see Table \\ref{tab:dataset1-results} and Figure \\ref{fig:barcharts-dataset1}. \n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|c|C{2cm}|c|c|c|}\n\\hline\n\\textbf{Category} & \\textbf{Model-Feat.} & \\textbf{Prec.} & \\textbf{Rec.} & \\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Baselines} \n&NB-NHEL &0.6338\t& \\textbf{0.9224}\t&0.7508 \\\\\n&LR-BOW &0.6967 &0.8264 &0.7548 \\\\\n &LR-LIWC &0.7409\t&0.7772\t&0.7574 \\\\\n &RSVM-TE &0.7739\t&0.7939\t&0.7824 \\\\\n\n \\hline\n\\multirow{1}{*}{Our Models}\n&cbLSTM-DE &0.6699\t&0.8606\t&0.7526 \\\\\n&RSVM-DE &0.7495\t&0.8280\t&0.7859 \\\\\n&LR-ATE(orig.) &0.7815\t&0.8020\t&0.7906 \\\\\n&\\textbf{LSVM-ATE} &\\textbf{0.7984} &0.8520 &\\textbf{0.8239} \\\\\n\n \n\n \\hline\n\n\n\n\n \\multirow{1}{*}{Prev. 
Res.} &\\cite{vioules2018detection} &0.71\t&0.71\t&0.71 \\\\\n \\hline\n\\end{tabular}\n \\caption{Average results on Dataset1 best model-feat combinations}\n\n\\label{tab:dataset1-results}\n\\end{table}\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|c|C{2cm}|c|c|c|}\n\\hline\n\\textbf{Category} & \\textbf{Model-Feat.} & \\textbf{Prec.} & \\textbf{Rec.} & \\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Baselines} \n &RSVM-NHEL &0.1754\t&\\textbf{0.7439}\t&0.2858 \\\\\n &RSVM-BOW &0.2374\t&0.5296\t&0.3260 \\\\\n\n &RSVM-LIWC &0.2635\t&0.6750\t&0.3778 \\\\\n &RSVM-TE &0.3485\t&0.6305\t&0.4448 \\\\\n\n \\hline\n\\multirow{1}{*}{Our Models} \n&RSVM-DE &0.3437\t&0.5198\t&0.4053 \\\\\n&cbLSTM-ATE &\\textbf{0.4416}\t&0.3987\t&0.4178 \\\\\n&RSVM-ATE(orig.) &0.3476\t&0.5648\t&0.4276 \\\\\n&\\textbf{RSVM-ATE} &0.3675\t&0.5923\t&\\textbf{0.4480}\\\\\n\n \\hline\n\n\n\n\n \\multirow{1}{*}{Prev. Res.} \n &\\cite{jamil2017monitoring} &0.1706 &0.5939 &0.265 \\\\\n \\hline\n\\end{tabular}\n \\caption{Average results on Dataset2 best model-feat combinations} \n\n\\label{tab:dataset2-results}\n\\end{table}\n\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|c|C{2cm}|c|c|c|}\n\\hline\n\\textbf{Category} & \\textbf{Model-Feat.} & \\textbf{Prec.} & \\textbf{Rec.} & \\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Baselines} \n &cbLSTM-Random &0.5464\t&\\textbf{0.9817} &0.6986\\\\\n &cbLSTM-TE &0.6510\t &0.8325\t&0.7262 \\\\\n \\hline\n\\multirow{1}{*}{Proposed} \n &cbLSTM-ATE(orig.) &0.6288\t&0.8439\t&0.7093 \\\\\n &cbLSTM-ATE &\\textbf{0.6915} &0.8231\t&0.7491 \\\\\n &\\textbf{cbLSTM-DE} &0.6699\t&0.8606\t&\\textbf{0.7526} \\\\\n \\hline\n\n\n\\end{tabular}\n \\caption{Average results for deep learning model (cbLSTM) in Dataset1 for all our (Baseline and Proposed) embeddings} \n\n\\label{tab:dataset1-deep-learn}\n\\end{table}\n\n\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|c|C{2cm}|c|c|c|}\n\\hline\n\\textbf{Category} & \\textbf{Model-Feat.} & \\textbf{Prec.} & \\textbf{Rec.} & \\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Baselines} \n&cbLSTM-Random &0.2308\t&0.2791 &0.2502\\\\\n&cbLSTM-TE &0.2615\t&\\textbf{0.6143}\t&0.3655\\\\\n\n \\hline\n\\multirow{1}{*}{Proposed} \n&cbLSTM-ATE(orig.) &\\textbf{0.4598}\t&0.3105\t&0.3671 \\\\\n&cbLSTM-DE &0.3231\t&0.4891\t&0.3880 \\\\\n&\\textbf{cbLSTM-ATE} &0.4416\t&0.3987\t&\\textbf{0.4178} \\\\\n\n \\hline\n\\end{tabular}\n \\caption{Average results for deep learning model (cbLSTM) in Dataset2 for all our (Baseline and Proposed) embeddings} \n\n\\label{tab:dataset2-deep-learn}\n\\end{table}\n\nIn Dataset2, which is imbalanced (90\\% samples are non-depressive Tweets), our best model RSVM-ATE achieves 0.32\\% better average F1 score (i.e. $0.4480 \\pm 0.0209$) than the strongest baseline, RSVM-TE with average F1 score of $0.4448 \\pm 0.0197$ and 22.3\\% better F1 score than \\cite{jamil2017monitoring} (i.e. $0.265$) , see Table \\ref{tab:dataset2-results} and Figure \\ref{fig:barcharts-dataset2}.\n\nIn both datasets, NHEL has the best recall and the worst precision, while, BOW, LIWC and word embedding based methods have acceptable precision and recall.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.50 \\textwidth]{img\/all_bal_updated_1.png}\n\\caption{\\label{fig:barcharts-dataset1} Error bars of F1 scores for our best model-feat. 
combinations on Dataset1}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.50 \\textwidth]{img\/all_imbal_updated_1.png}\n\\caption{\\label{fig:barcharts-dataset2} Error bars of F1 scores for our best model-feat. combinations on Dataset2}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.45 \\textwidth]{img\/cbLSTM-bal-all.png}\n\\caption{\\label{fig:barcharts-dataset1-cbLSTM} Error bars of F1 scores for deep learning models (cbLSTM) with all our embeddings on Dataset1}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.45 \\textwidth]{img\/cbLSTM-imbal-all.png}\n\\caption{\\label{fig:barcharts-dataset2-cbLSTM} Error bars of F1 scores for deep learning models (cbLSTM) with all our embeddings on Dataset2}\n\\end{figure}\n\n\n\n\\subsubsection{Deep learning model:}\nIn Dataset1, our best performing model cbLSTM-DE achieves around 3\\% better in average F1 score (i.e. $0.7526 \\pm 0.0244$) than the strongest baseline cbLSTM-TE with average F1 score of $0.7262 \\pm 0.034$. It slightly performs better (0.35\\%) than cbLSTM-ATE with average F1 score of $0.7491 \\pm 0.0301$. In Dataset2, our best performing model cbLSTM-ATE with average F1 score of $0.4178 \\pm 0.0202$ performs 5\\% better than the strongest baseline cbLSTM-TE with average F1 score of $0.3655 \\pm 0.021$ and around 2\\% better than close contender cbLSTM-DE with average F1 score of $0.3880 \\pm 0.0233$, see Figures \\ref{fig:barcharts-dataset1-cbLSTM}, \\ref{fig:barcharts-dataset2-cbLSTM} and Tables \\ref{tab:dataset1-deep-learn}, \\ref{tab:dataset2-deep-learn}. However, our best deep learning model performs on average 5\\% lower in F1 scores than our standard machine learning models across our two datasets.\n \n\\subsection{Influence of datasets and feature representations in predictions}\n\\subsubsection{Standard machine learning models:}\nOverall, in both datasets, word embedding based methods perform much better than BOW and lexicons. The reason is, \nthey have a bigger vocabulary and better feature representation than BOW and lexicons. Among non-word embedding methods, BOW and LIWC perform better than NHEL, because the former provide better discriminating features than the latter. In Dataset1, ATE achieves better and stable F1 scores than both TE and DE with DE performing close enough. This confirms that DE can capture the semantics of depressive language very well. ATE is superior in performance because it leverages both the vocabulary coverage and semantics of a depressive language. In Dataset2, ATE achieves slightly better (although slightly unstable) F1 score than TE but significantly better F1 score than DE. The reason for this could be that the Tweet samples in Dataset2 are more about general distress than actual depression, also dataset is very imbalanced. In this case, the performance is affected mostly by the vocabulary size rather than the depressive language semantics. \n\\subsubsection{Deep learning model:}\nAlthough the overall F1 scores achieved with the deep learning model (cbLSTM) while used with the word embeddings is below par than the standard machine learning models for both datasets, we observe a general trend of improvement of our proposed ATE and DE embedding compared to strong baseline TE and random initialized embedding. 
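For reference, the cbLSTM configuration used throughout these comparisons (a frozen pre-trained embedding layer feeding a CNN layer followed by a Bi-LSTM layer, as described in the ``Deep learning model specific settings'' section) can be sketched as follows. This is only a minimal sketch assuming Keras; the framework choice, filter count, kernel size, number of LSTM units, dropout rate and maximum sequence length are illustrative assumptions and not values taken from the text.\n\\begin{verbatim}\n# Minimal sketch (assumed Keras) of the CNN + Bi-LSTM (cbLSTM) classifier with\n# a frozen pre-trained embedding (TE, DE, ATE, or a random matrix as baseline).\n# All hyperparameters below are illustrative, not the paper's actual values.\nimport numpy as np\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\ndef build_cblstm(embedding_matrix, max_len=50):\n    vocab_size, embed_dim = embedding_matrix.shape\n    model = keras.Sequential([\n        keras.Input(shape=(max_len,), dtype='int32'),\n        # trainable=False keeps the embedding fixed, so the scores reflect the\n        # pre-trained vectors rather than vectors fine-tuned on the small data.\n        layers.Embedding(\n            vocab_size, embed_dim,\n            embeddings_initializer=keras.initializers.Constant(embedding_matrix),\n            trainable=False),\n        layers.Conv1D(filters=64, kernel_size=3, activation='relu'),\n        layers.MaxPooling1D(pool_size=2),\n        layers.Bidirectional(layers.LSTM(64)),\n        layers.Dropout(0.5),\n        layers.Dense(1, activation='sigmoid'),  # depressive vs. control\n    ])\n    model.compile(optimizer='adam', loss='binary_crossentropy',\n                  metrics=['accuracy'])\n    return model\n\n# model = build_cblstm(pretrained_matrix)\n# model.fit(X_train, y_train, validation_split=0.1)  # 10% held out for validation\n\\end{verbatim}\nPrecision, recall and F1 are then computed on the held-out test split from the model predictions, as reported in the tables above.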
Moreover, unlike with the standard machine learning models, this performance improvement is more pronounced in our noisy dataset (Dataset2), which suggests that deep learning models might be better at handling noisy data. Overall, all our proposed embeddings (i.e., DE and ATE) achieve more stable F1 scores than TE and the random embedding in both datasets. \n\nWe believe the overall under-performance of the deep learning model is attributable to our small datasets. In our future analysis of deep learning models and their performance, we intend to experiment with a much larger and better dataset to derive more insights.\n\n\\subsection{Qualitative performance analysis} \nIn Table \\ref{tab:correct_fn} we report depressive Tweets from a test set of Dataset1 that are correctly predicted by LSVM-ATE (our overall best model) \nbut mistakenly predicted as control Tweets (i.e., false negatives) by \nRSVM-TE (our strongest baseline). The first example from Table \\ref{tab:correct_fn}, \\textit{``Tonight may definitely be the night''}, may be indicative of suicidal ideation and should not be taken lightly; the second one, \\textit{``0 days clean.''}, is a trademark indication of continued self-harm, although many depression detection models will predict these as normal Tweets. It is also interesting to see how our best word embedding helps identify depressive Tweets that are more subtle, such as \\textit{``Everyone is better off without me. Everyone.''}.\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|p{7.4cm}|}\n\\hline\n\\textbf{Tweets} \\\\ \\hline\n``Tonight may definitely be the night.'' \\\\ \\hline\n``0 days clean.'' \\\\ \\hline\n``Everyone is better off without me. Everyone.'' \\\\ \\hline\n``Is it so much to ask to have a friend who will be there for you no matter what?'' \\\\ \\hline\n``I understand you're `busy', but fuck that ... people make time for what they want.'' \\\\ \\hline\n``I'm a failure.'' \\\\ \\hline\n\\end{tabular}\n\\caption{False negative depressive Tweets when TE is used, correctly predicted when ATE is used in a test set from Dataset1.}\n\\label{tab:correct_fn}\n\\end{table}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=8.59cm] {img\/twitter_zoomed.png}\n\\caption{\\label{fig:GTEcluster} Two-dimensional PCA projection of LIWC \nPOSEMO and NEGEMO words (frequently occurring in our datasets) in General Twitter word Embedding (TE).}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=8.59cm] {img\/trans_depsf_zoomed.png}\n\\caption{\\label{fig:ATEcluster} Two-dimensional PCA projection of LIWC POSEMO and NEGEMO words (frequently occurring in our datasets) in Adjusted Twitter word Embedding (ATE).}\n\\end{figure}\n\nAccording to earlier research, depression is closely connected with abnormal regulation of positive and negative emotion \\cite{kuppens2012emotional,seabrook2018predicting}. To examine how words carrying positive and negative sentiment are situated in our adjusted vector space, we plot the PCA projections of ATE and TE for \nthe high-frequency words used in both datasets that are members of the LIWC positive emotion (POSEMO) and negative emotion\n(NEGEMO) \ncategories. We observe that in ATE the POSEMO and NEGEMO words form two clearly distinct clusters, C2 and C1 respectively. We also notice that the words ``insomnia'' and ``sleepless'', which represent a common sleep problem in depressed people, reside in C1, the NEGEMO cluster. However, we do not see any such clusters in TE (see Figures \\ref{fig:ATEcluster} and \\ref{fig:GTEcluster}). 
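The clustering seen in these projections can be reproduced schematically as follows. This is a minimal sketch assuming scikit-learn and matplotlib; the embedding lookup and the two word lists are placeholders for the actual TE or ATE vectors and for the frequent LIWC POSEMO and NEGEMO members used in the figures.\n\\begin{verbatim}\n# Minimal sketch: two-dimensional PCA projection of LIWC POSEMO/NEGEMO words\n# in a word embedding (TE or ATE). The word lists and the embedding lookup\n# (word -> vector, e.g. a dict or gensim KeyedVectors) are placeholders.\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA\n\ndef plot_affect_clusters(embedding, posemo_words, negemo_words):\n    words = [w for w in posemo_words + negemo_words if w in embedding]\n    vectors = np.stack([embedding[w] for w in words])\n    proj = PCA(n_components=2).fit_transform(vectors)\n    posemo = set(posemo_words)\n    colors = ['tab:green' if w in posemo else 'tab:red' for w in words]\n    plt.scatter(proj[:, 0], proj[:, 1], c=colors, s=12)\n    for w, (x, y) in zip(words, proj):\n        plt.annotate(w, (x, y), fontsize=7)\n    plt.xlabel('PC 1')\n    plt.ylabel('PC 2')\n    plt.show()\n\\end{verbatim}\nWith ATE the POSEMO and NEGEMO points separate into the two clusters discussed above (C2 and C1), whereas with TE no comparable separation appears.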
We believe these distinctions of affective content in the vector space partially explain our overall accuracy. The PCA projection also gives a glimpse of the semantic relationships among affective words in depressive language. Although this is not an exhaustive analysis, we believe it is an insightful one that would be helpful for further analysis of affect in depressive language.\n\n\\subsection{Effects of embedding augmentation:}\nThe fact that ATE generally performs better than TE (see Tables \\ref{tab:dataset1-results}, \\ref{tab:dataset2-results}, \\ref{tab:dataset1-deep-learn} and \\ref{tab:dataset2-deep-learn}) shows that our embedding adjustment improves the semantic representation of TE, because ATE and TE have exactly the same vocabulary. To further show that the embedding adjustment of OOVs contributes to this improved accuracy, we run an experiment in which we replace the words in TE that are common to DE with their ATE word vectors. We name this embedding ``$TE \\cap DE$-in-TE-adjusted.'' This ensures that none of the OOVs are adjusted, and we compare this result with our ATE (where all the OOVs are adjusted). In this experiment, $TE \\cap DE$-in-TE-adjusted obtains a 2.62\\% lower F1 score than ATE for Dataset1 and a 2.13\\% lower F1 score for Dataset2, confirming that the semantic adjustment of OOVs does play an important role. Also, this model has less stable F1 scores than our best model in both datasets. See Table \\ref{tab:embed-augment}.\n\n\n\\begin{table}[!ht]\n\\small\n\\centering\n\\begin{tabular}{|c|C{3.5cm}|c|}\n\\hline\n\\textbf{Dataset} & \\textbf{Model-Feat.} &\\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Dataset1} \n\n&LR-$TE \\cap DE$-in-TE-adjusted & $0.7977 \\pm 0.0305$ \\\\ \n&\\textbf{LSVM-ATE (Best Model)} &$\\bm{0.8239 \\pm 0.0259}$ \\\\\n\n \\hline\n\\multirow{1}{*}{Dataset2} \n\n\n &RSVM-$TE \\cap DE$-in-TE-adjusted &$0.4267 \\pm 0.0217$ \\\\ \n &\\textbf{RSVM-ATE (Best Model)} &$\\bm{0.4480 \\pm 0.0209}$ \\\\\n\n \\hline\n\\end{tabular}\n \\caption{Average F1 scores from experiments to show the effect of our augmentation for OOVs}\n\n\\label{tab:embed-augment}\n\\end{table}\n\n\n\\section{Ethical concerns} \\label{subsec:ethics} We use \\textit{Suicidal Forum} posts, where users are strictly required to stay anonymous. Additionally, the forum employs active moderators who regularly anonymize the content in case a user reveals something that could identify them. Moreover, we use public Reddit and Twitter posts, which incur minimal risk of user privacy violation, as established by earlier research utilizing the same kind of data (\\cite{milne2016clpsych}, \\cite{coppersmith2015adhd} and \\cite{losada2016test}). We also obtained approval from our university ethics board to use datasets collected by external organizations.\n\n\\section{Conclusion} \nIn this paper, we empirically present the following observations for a high-quality dataset:\n\n\\begin{itemize}\n\\item For depressive Tweet detection, we can use a word embedding trained in an unsupervised manner on a small corpus of depression forum posts, which we call Depression specific word Embedding (DE), and then use it as a feature representation for our machine learning models. 
This approach can achieve good accuracy, despite the fact that it has 100 times smaller vocabulary than our general Twitter pre-trained word embedding.\n\\item Furthermore, we can use DE to adjust the general Twitter pre-trained word Embedding (available off the shelf) or TE through non-linear mapping between them. This adjusted Twitter word Embedding (ATE) helps us achieve even better results for our task. \n\\item We need not to depend on human annotated data or labelled data for any of our word embedding representation creation.\n\\item Depression forum posts have specific distributed representation of words and it is different than that of general twitter posts and this is reflected in ATE, see Figure \\ref{fig:ATEcluster}. \n\n\\item We intend to make our depression corpus and embeddings publicly available upon acceptance of our paper. \n\n\\end{itemize}\n\n\\section{Future work}\n In the future, we would like to analyze DE more exhaustively to find any patterns in semantic clusters that specifically identify depressive language. We would also like to use ATE for Twitter depression lexicon induction and for discovering depressive Tweets. We can see great promise in its use in creating a semi-supervised learning based automated depression data annotation task later on.\n \n \n\\section{Supplementary Results}\nHere we report Tables \\ref{tab:other-vocab}, \\ref{tab:ate-centroids}, \\ref{tab:meta-embeddings} and \\ref{tab:meta-embeddings-centroids} for the analysis of our proposed augmentation methods.\n\n\\begin{table}[!htp]\n\\small\n\\centering\n\\begin{tabular}{|c|c|C{2.2cm}|}\n\\hline\n\\textbf{Dataset} &\\textbf{Model-Feat.} &\\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Dataset1}\n&\\textbf{LSVM-ATE (Best Model)} &\\textbf{$\\bm{0.8239 \\pm 0.0259}$} \\\\\n&RSVM-DE &$0.7859 \\pm 0.0335$ \\\\\n&RSVM-TE &$0.7824 \\pm 0.0276$ \\\\\n\n&RSVM-$ATE \\cap DE$ &$0.7786 \\pm 0.0290$ \\\\\n&LSVM-$TE \\cap DE$ &$0.7593 \\pm 0.0340$\\\\\n&RSVM-$TE\\cap DE$-concat &$0.7834 \\pm 0.0321$ \\\\\n&LR-$TE \\cap DE$-in-TE-adjusted &$0.7977 \\pm 0.0304$ \\\\ \n\\hline\n\\multirow{1}{*}{Dataset2} \n&\\textbf{RSVM-ATE (Best Model)} &\\textbf{$\\bm{0.4480 \\pm 0.0209}$} \\\\\n&RSVM-DE &$0.4351 \\pm 0.0206$ \\\\\n&RSVM-TE &$0.4448 \\pm 0.0196$ \\\\\n&RSVM-$ATE \\cap DE$ &$0.4155 \\pm 0.028$ \\\\\n&LSVM-$TE \\cap DE$ &$0.4002 \\pm 0.0292$\\\\\n&RSVM-$TE\\cap DE$-concat &$0.4249 \\pm 0.0187$ \\\\\n&RSVM-$TE \\cap DE$-in-TE-adjusted &$0.4267 \\pm 0.0216$ \\\\ \n\\hline\n\\end{tabular}\n\\caption{Average Precision, Recall and F1 scores from experiments to show the effect of our augmentation}\n\\label{tab:other-vocab}\n\\end{table}\n\n\n\\begin{table}[!htp]\n\\small\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Dataset} &\\textbf{Model-Feat.} &\\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Dataset1} \n&\\textbf{LSVM-ATE (Best Model)} &\\textbf{$\\bm{0.8239 \\pm 0.0259}$} \\\\ \n&LR-ATE-Centroid-1 &$0.8219 \\pm 0.0257$ \\\\\n&LR-ATE-Centroid-2 &$0.8116 \\pm 0.029$\\\\\n\\hline\n\\multirow{1}{*}{Dataset2} \n&\\textbf{RSVM-ATE (Best Model)} &\\textbf{$\\bm{0.4480 \\pm 0.0209}$} \\\\ \n&RSVM-ATE-Centroid-1 &$0.4372 \\pm 0.0172$ \\\\\n&RSVM-ATE-Centroid-2 &$0.4431 \\pm 0.0211$ \\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:ate-centroids}Experiments on centroid based methods}\n\n\\end{table}\n\n\\begin{table}[!htp]\n\\small\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Dataset} &\\textbf{Model-Feat.} &\\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Dataset1} \n&\\textbf{LSVM-ATE (Best Model)} 
&\\textbf{$\\bm{0.8239 \\pm 0.0259}$} \\\\\n&NB-ATE-AAEME &$0.7845 \\pm 0.0229$ \\\\\n&LR-ATE-AAEME-OOV &$0.8014 \\pm 0.0333$ \\\\\n\\hline\n\\multirow{1}{*}{Dataset2}\n&\\textbf{RSVM-ATE (Best Model)} &\\textbf{$\\bm{0.4480 \\pm 0.0209}$} \\\\\n&RSVM-ATE-AAEME &$0.4166 \\pm 0.0187$ \\\\\n&RSVM-ATE-AAEME-OOV &$0.4433 \\pm 0.0200$ \\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:meta-embeddings}Experiments on meta-embedding methods}\n\n\\end{table}\n\n\\begin{table}[!htp]\n\\small\n\\centering\n\\begin{tabular}{|c|c|C{2.2cm}|}\n\\hline\n\\textbf{Dataset} &\\textbf{Model-Feat.} &\\textbf{F1} \\\\ \\hline\n\\multirow{1}{*}{Auto-Encoder Based Methods} \n&\\textbf{ATE (Best Model)} &\\textbf{$\\bm{0.8239 \\pm 0.0259}$} \\\\\n&ATE-AAEME &$0.7845 \\pm 0.0229$ \\\\\n&ATE-AAEME-OOV &$0.8014 \\pm 0.0333$ \\\\\n\\hline\n\\multirow{1}{*}{Centroid Based Methods}\n&ATE-Centroid-1 &$0.8219 \\pm 0.0257$ \\\\\n&ATE-Centroid-2 &$0.8116 \\pm 0.029$\\\\ \n\\hline\n\\end{tabular}\n\\caption{\\label{tab:meta-embeddings-centroids}Experiments on meta-embedding and centroid based methods on Dataset-1}\n\\end{table}\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Related Work}\\label{sec:relatedwork}\n\n\n\n\n\n\n\\textbf{Scan Matching}.\nThe most basic element of lidar-based navigation is \\emph{local} data association, often referred to as scan matching.\nIt is frequently carried out by iterative closest point (ICP)~\\cite{besl1992method} and its variants~\\cite{rusinkiewicz2001efficient}, though care must be taken to provide a good initialization, otherwise, a wrong odometry solution can be obtained.\nBosse and Zlot~\\cite{bosse2009continuous} perform scan matching on spinning 2D lidar sweeps where the correspondence generation step of ICP is informed by local shape information.\nLOAM~\\cite{zhang2014loam} uses feature-based scan matching by minimizing the distance between edge points and planar points of subsequent scans in a Levenberg-Marquardt (LM) framework, resulting in high-rate, low-drift odometry.\nLeGO-LOAM~\\cite{shan2018lego} specializes LOAM for ground vehicles with limited computation; by first extracting ground points and segmenting remaining points into local clusters, noisy points can be removed and scan matching is performed in a two-step LM optimization.\n\n\\textbf{Place Recognition}.\nScan matching alone will introduce drift over time, which can be reduced via loop closure or localization within a map.\nTo identify potential loop closure scan pairs, some systems extract a compact global descriptor of the scan~\\cite{yin2018locnet,kim2018scan} which is then used to search for similar scans via a k-d tree.\nOnce the top loop candidate is identified, the rigid transformation between two scans is refined using ICP, which requires that the initial pose error relating the two scans is low and that all the points be saved for each each scan.\nDescriptors of a subset of the scan could instead be extracted~\\cite{bosse2013place,cop2018delight} and subsequently matched, but handcrafted features are especially known to be unstable due to the sparsity of the lidar point cloud~\\cite{dewan2018learning}.\nSegMap~\\cite{dube2020segmap} incrementally segments new scans into clusters of a local map to overcome the sparsity of scans and to reduce the number of points required to store, after which descriptors of each cluster are used to find matches, followed by a graph-based geometric verification step.\n\nOther graph-based methods~\\cite{zhu2020gosmatch,kong2020semantic} 
leverage semantic information to create histogram-based global descriptors used for place retrieval, followed by RANSAC~\\cite{fischler1981random} geometric verification.\nFern\\'andez-Moral et al.~\\cite{fernandez2013fast} present a graph-based place recognition system which matches planar patches extracted from RGB-D scans using an interpretation tree to validate various unary and binary constraints between candidate plane matches, followed by geometric verification.\nJiang et al.~\\cite{jiang2020lipmatch} extend~\\cite{fernandez2013fast} and introduce additional unary and binary constraints.\nSome of these constraints are sensitive to viewpoint changes, thus these methods rely on close proximity of the 3D scans as an initialization, precluding their applicability in settings like global localization.\n\n\nPathak et al.~\\cite{pathak2010online} use 3D plane landmarks in a relaxed graph-based SLAM and perform data association of planes using a series of geometric tests followed by a maximum consensus selection~\\cite{pathak2010fast}.\nKaess~\\cite{kaess2015simultaneous} extracts 3D planes from RGB-D data and proposes a novel quaternion-based representation of planes for use in SLAM which avoids the issues of overparameterized state vector in nonlinear least-squares estimation.\nGeneva et al.~\\cite{geneva2018lips} alternatively introduce the ``closest point'' (CP) parameterization of planes for estimation and demonstrate its advantages in lidar-inertial SLAM.\nHowever, \\cite{kaess2015simultaneous} and \\cite{geneva2018lips} do not provide a means for global data association for detection of loop closures.\nZhou et al.~\\cite{zhou2021pi} present an indoor smoothing and mapping system which incorporates a plane-based bundle adjustment.\nLoop closures candidates, identified by previous keyframes in close proximity, are verified by first matching plane CP vectors, followed by a plane-based RANSAC.\n\nPole-based localization methods~\\cite{schaefer2019long,kummerle2019accurate,wilbers2019localization} commonly treat poles as 2D points based on their intersection with the ground plane and use point-based registration methods for geometric verification given an initial guess.\nBrenner~\\cite{brenner2009global} investigates the use of upright poles extracted from lidar scans to construct descriptors for global localization.\nSchlichting and Brenner~\\cite{schlichting2014localization} extend this descriptor to include upright planes, but effectively treat poles and planes as 2D points and lines.\nCao et al.~\\cite{cao2021lidar} perform object-level SLAM using poles, walls, and parked cars as landmarks and propose to use pole positions within a scan to create a scan signature used for detecting loops.\nUpon detecting a pair of scans as a loop candidate, clusters of pole-points are matched~\\cite{cao2020accurate} and a rigid transformation is estimated in a RANSAC framework.\n\nOur method similarly leverages poles and planes, but is not limited to treating these landmarks as 2D objects and does not make assumptions on the proximity of scans, nor does it require an initial alignment guess.\nInstead, we perform global data association by identifying matches based on pairwise geometric consistency between lines and planes.\nThus, our method provides a means for obtaining the transformation between two sets of geometric objects, a key feature for place recognition in object-based maps.\n\n\n\n\n\\textbf{Grassmannian Manifold}.\nThe Grassmannian manifold has been used extensively in subspace 
learning~\\cite{hamm2008grassmann}, especially in face recognition~\\cite{huang2015projection} and appearance tracking~\\cite{shirazi2014object} tasks in the computer vision community.\nRentmeesters et al.~\\cite{rentmeesters2010filtering} develop an observer for subspace tracking on the manifold.\nCalinon~\\cite{calinon2020gaussians} outlines the use of Riemannian manifolds in robotics and notes the under-representation of the Grassmannian.\n\n\n\n\n\\subsection{Preliminaries}\\label{sec:preliminaries}\n\nWe briefly introduce the Grassmannian manifold.\nFor a more comprehensive introduction, we refer the reader to~\\cite{edelman1998geometry}.\nThe Grassmannian is the space of $k$-dimensional subspaces of $\\mathbb{R}^n$, denoted $\\mathrm{Gr}(k,n)$.\nFor example, $\\mathrm{Gr}(1,3)$ represent 3D lines containing the origin.\nAn element $\\mathbb{A}\\in\\mathrm{Gr}(k,n)$ is represented by an orthonormal matrix $A\\in\\mathbb{R}^{n\\times k}$ whose columns form an orthonormal basis of $\\mathbb{A}$.\nNote that the choice of $A$ is not unique.\nThe geodesic distance between two subspaces $\\mathbb{A}_1\\in\\mathrm{Gr}(k_1,n)$ and $\\mathbb{A}_2\\in\\mathrm{Gr}(k_2,n)$ is\n\\begin{equation}\nd_\\mathrm{Gr}(\\mathbb{A}_1, \\mathbb{A}_2) = \\left(\\sum_{i=1}^{\\min(k_1,k_2)} \\theta_i^2\\right)^{1\/2}\n\\end{equation}\nwhere $\\theta_i$ are known as the principal angles~\\cite{edelman1998geometry}.\nThese angles can be computed via the singular value decomposition (SVD) of the corresponding orthonormal matrices of $\\mathbb{A}_1$ and $\\mathbb{A}_2$,\n\\begin{equation}\nA_1^\\top A_2 = U\\, \\mathrm{diag}(\\cos\\theta_1, \\dots, \\cos\\theta_k )\\, V^\\top.\n\\end{equation}\nNote that if the subspaces are of unequal dimension, the number of principal angles is equal to the smaller dimension of the two.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.49\\columnwidth}\n \\includeinkscape[pretex=\\footnotesize,width=\\columnwidth]{graffex2}\n \\caption{}\n \\label{fig:graffexample}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.49\\columnwidth}\n \\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=1\\linewidth]{graffsensitivity}\n \\caption{}\n \\label{fig:graffsensitivity}\n \\end{subfigure}\n \\caption{(a) Example of a point in $\\mathrm{Graff}(0,1)$ being embedded as a line in $\\mathrm{Gr}(1,2)$.\n The principal angle between these two linear subspaces is $\\theta_1$.\n (b) When applied directly, $d_\\mathrm{Graff}$ is not invariant to global translation $s$.\n }\n \\label{fig:graffexample-both}\n\\end{figure}\n\nWe are specifically interested in affine subspaces of $\\mathbb{R}^3$, e.g., lines and planes that are at some distance away from the origin.\nIn analogy to $\\mathrm{Gr}(k,n)$, the set of $k$-dimensional affine subspaces constitute a smooth manifold called the \\emph{affine Grassmannian}, denoted $\\mathrm{Graff}(k,n)$~\\cite{lim2021grassmannian}.\nWe write an element of this manifold as $\\mathbb{Y}=\\mathbb{A}+b\\in\\mathrm{Graff}(k,n)$ with affine coordinates $[A,b]\\in\\mathbb{R}^{n\\times(k+1)}$, where $A\\in\\mathbb{R}^{n\\times k}$ is an orthonormal matrix and $b\\in\\mathbb{R}^n$ is the displacement of $\\mathbb{A}$ from the origin.\nWe emphasize that $\\mathrm{Graff}(k,n)\\neq\\mathrm{Gr}(k,n)\\times\\mathbb{R}^n$.\nInstead, an element $\\mathbb{Y}\\in\\mathrm{Graff}(k,n)$ is treated as a higher-order subspace via the embedding\n\\begin{align}\nj:\\mathrm{Graff}(k,n)&\\hookrightarrow\\mathrm{Gr}(k+1,n+1), \\notag \\\\ \n\\mathbb{A}+b &\\mapsto 
\\mathrm{span}(\\mathbb{A}\\cup\\{b+e_{n+1}\\}),\n\\end{align}\nwhere $e_{n+1} = (0,\\dots,0,1)^\\top\\in\\mathbb{R}^{n+1}$ (see \\cite[Theorem 1]{lim2021grassmannian}).\nFig.~\\ref{fig:graffexample} shows an example of a point in $\\mathbb{R}$ being embedded as a line in $\\mathbb{R}^2$.\n\nThe Stiefel coordinates of $\\mathbb{Y}\\in\\mathrm{Graff}(k,n)$,\n\\begin{equation}\nY =\n\\begin{bmatrix}\nA & b_0\/\\sqrt{1+\\|b_0\\|^2} \\\\\n0 & 1\/\\sqrt{1+\\|b_0\\|^2}\n\\end{bmatrix}\\in\\mathbb{R}^{(n+1)\\times(k+1)},\n\\end{equation}\nallow for the computation of distances between two affine subspaces using the Grassmannian metric,\n\\begin{equation}\nd_\\mathrm{Graff}(\\mathbb{Y}_1,\\mathbb{Y}_2) = d_\\mathrm{Gr}(j(\\mathbb{Y}_1),j(\\mathbb{Y}_2)),\n\\end{equation}\nwith principal angles computed via the SVD of $Y_1^\\top Y_2$.\nThe vector $b_0\\in\\mathbb{R}^n$ is the orthogonal displacement of $\\mathbb{A}$, which is the projection of $b$ onto the left nullspace of $A$ s.t. $A^\\top b_0=0$.\n\nFor convenience, the line $\\mathbb{Y}^\\ell\\in\\mathrm{Graff}(1,3)$ may also be represented in point-direction form as $\\ell = [A;b]\\in\\mathbb{R}^6$, and a plane $\\mathbb{Y}^\\pi\\in\\mathrm{Graff}(2,3)$ may be represented in Hesse normal form as $\\pi = [n;d]\\in\\mathbb{R}^4$ where $n = \\mathrm{ker}\\,A^\\top$ and $d = \\|b_0\\|$.\nUnder a rigid transformation $T=(R,t)\\in\\mathrm{SE}(3)$, the transformation law of lines and planes can be written\n\\begin{align}\n\\ell' &= f_\\ell(\\ell,R,t) := \\begin{bmatrix}RA&Rb+t\\end{bmatrix}^\\top \\\\\n\\pi' &= f_\\pi(\\pi,R,t) := T^{-\\top}\\pi.\n\\end{align}\n\n\n\\section{Introduction}\\label{sec:intro}\n\nGeometric verification provides a critical line of defense against incorrect loop closure, which can lead to disastrous map distortion and estimation error.\nPlace recognition modules attempt to suggest previously explored areas that are similar to current local sensor observations, but require a geometric verification step to confirm the loop closure hypothesis and to provide a geometric constraint between the pair of associated poses.\nThese constraints are extremely valuable in reducing odometric drift present in simultaneous localization and mapping (SLAM) systems~\\cite{cadena2016past}, so long as they are correct.\nThe core challenge of place recognition and geometric verification is associating current local observations with previously processed observations without relying on an initial guess.\nThis challenge is known as global data association~\\cite{durrant2006simultaneous,bailey2006simultaneous} and is at the heart of many perception problems, such as extrinsic calibration, multi-robot map merging, loop closure detection, and global (re)localization.\n\nIn the visual place recognition~\\cite{lowry2015visual} setting, image features are commonly used in bag-of-words techniques~\\cite{galvez2012bags} for loop candidate retrieval and geometric verification.\nHowever, appearance-based methods are sensitive to illumination, weather, and viewpoint changes and can fail to detect loop closures in these settings.\nAlternatively, geometric-based methods~\\cite{yin2018locnet,kim2018scan,chen2021auro} utilizing 3D lidar sensors are more resilient to these changes, but come at the expense of processing and storing hundreds of thousands of point measurements per second.\nTo maintain the benefits of geometric data, but to reduce the storage and computational costs of large point maps, some lidar odometry and SLAM systems use geometric primitives 
like lines and planes instead of points~\\cite{brenner2009global,pathak2010online,schaefer2019long,kummerle2019accurate,cao2021lidar}.\nIn addition to providing lightweight maps with high-level semantic information, navigating using explicit planes extracted from the environment provides extra information over points and has lead to improved, low-drift odometry~\\cite{kaess2015simultaneous,hsiao2017keyframe,geneva2018lips}.\nIn fact, even utilizing \\emph{points} (momentarily ignoring the storage costs) that exhibit strong local planarity have allowed for high-quality lidar-based odometry systems~\\cite{zhang2014loam,shan2018lego}.\nWhile existing works either use lines\/poles or planes (often in 2D) for global data association, a remaining challenge is performing global data association using 3D lines and planes simultaneously.\nWe present an efficient and robust method for global data association and geometric verification amenable to any combination of points, lines, and planes.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[trim=1cm 1cm 1cm 2cm, clip, width=\\columnwidth]{fig1}\n \\caption{Successful alignment between the lidar scans of a loop closure hypothesis.\n Sensor origins are denoted by the green and yellow cars, which are \\SI{18}{\\m} apart.\n Poles and planes extracted from each lidar scan are represented as 3D affine Grassmannian elements.\n Using the associated Riemannian metric allows for the evaluation of geometric consistency between object pairs, even between a pole and a plane.\n Object correspondences with high pairwise consistency are identified using our graph-based global data association algorithm and then used to estimate the rigid transformation between the two frames, yielding an alignment error of \\SI{4}{\\cm} and \\SI{0.3}{\\deg}.\n }\n \\label{fig:teaser-image}\n\\end{figure}\n\n\nA key novelty of our approach is in the representation of line and plane landmarks as elements of a Grassmannian manifold, which is the space of all linear subspaces.\nIn particular, we utilize the \\emph{affine} Grassmannian manifold, which allows for the representation of affine subspaces (i.e., linear subspaces not necessarily containing the origin).\nBy leveraging this manifold representation, distances between simple geometric landmarks can easily be defined in a principled manner.\nWe use these distances between pairwise landmarks in each lidar scan to build a consistency graph, enabling the use of our robust, graph-theoretic global data association framework~\\cite{lusk2021clipper} to find the largest set of landmark associations that are geometrically consistent.\nThen, the rigid transformation between a pair of candidate loop closure scans can be estimated by solving a line and plane registration problem with known correspondences in the least-squares sense.\nExperimental evaluation of loop closure verification on the KITTI dataset~\\cite{geiger2012we} shows that our method surpasses the state-of-the-art in global data association with geometric primitives. 
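As a concrete illustration of the distance that underpins this consistency scoring (formal definitions are given in the Preliminaries), the following minimal sketch computes $d_\\mathrm{Graff}$ between two affine subspaces of $\\mathbb{R}^{n}$ given in affine coordinates $[A,b]$. It assumes NumPy, and it omits the shift by $-b_{1}$ and the scaling of the affine components by $\\rho$ that are applied before this computation in the data-association step described later.\n\\begin{verbatim}\n# Minimal sketch (NumPy) of the affine Grassmannian distance d_Graff between\n# two affine subspaces of R^n in affine coordinates [A, b], where A is an\n# n x k orthonormal basis and b is a displacement vector.\nimport numpy as np\n\ndef stiefel_coords(A, b):\n    b0 = b - A @ (A.T @ b)            # orthogonal displacement, A^T b0 = 0\n    eta = np.sqrt(1.0 + b0 @ b0)\n    top = np.hstack([A, (b0 / eta)[:, None]])\n    bottom = np.hstack([np.zeros(A.shape[1]), [1.0 / eta]])\n    return np.vstack([top, bottom])   # (n+1) x (k+1) Stiefel coordinates\n\ndef d_graff(A1, b1, A2, b2):\n    Y1 = stiefel_coords(A1, b1)\n    Y2 = stiefel_coords(A2, b2)\n    s = np.linalg.svd(Y1.T @ Y2, compute_uv=False)\n    theta = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles\n    return float(np.linalg.norm(theta))\n\n# Example: a 3-D line (k = 1) and a 3-D plane (k = 2)\nline_A, line_b = np.array([[1.0], [0.0], [0.0]]), np.array([0.0, 2.0, 0.0])\nplane_A, plane_b = np.eye(3)[:, :2], np.array([0.0, 0.0, 1.0])\nprint(d_graff(line_A, line_b, plane_A, plane_b))\n\\end{verbatim}\nBecause the number of principal angles is simply the smaller of the two embedded dimensions, the same routine scores line--line, plane--plane, and mixed pole--plane pairs, which is what the consistency-graph construction relies on.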
%\nCompared to pole-only approaches and plane-only approaches, our method yields a \\SI{71}{\\percent} and \\SI{325}{\\percent} increase respectively to loop closure recall at \\SI{100}{\\percent} precision.\nIn summary, our main contributions are:\n\\begin{itemize}\n \\item the introduction of the affine Grassmannian representation of pole and plane objects for global data association, leading to geometric verification with 3D landmarks free of requirements of an initial alignment;\n \\item a least squares estimator for rigid transformation using lines and planes instead of points, leading to a more accurate estimate for rotation and translation;\n \\item evaluation of loop closure geometric verification on four sequences of the KITTI~\\cite{geiger2012we} dataset, showing superior recall and accuracy over the state-of-the-art.\n\\end{itemize}\nWe emphasize that this is the first work using the affine Grassmannian manifold for data association, which provides a unifying and principled framework for associating points, lines, planes (or higher dimensional linear objects) in robotic loop closure and geometric verification problems.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Experiments}\\label{sec:experiments}\n\nWe evaluate our global data association method using candidate loop closure pairs from KITTI~\\cite{geiger2012we} sequences 00, 02, 05, and 08.\nWe compare our method, called GraffMatch, with a pole-only method~\\cite{cao2021lidar} based on 2D cluster matching~\\cite{cao2020accurate}, and a plane-only method~\\cite{zhou2021pi} that attempts to match planes via nearest neighbor search on CP parameterization~\\cite{geneva2018lips} followed by RANSAC~\\cite{fischler1981random}.\nWe adapt the pole-only method~\\cite{cao2021lidar} to 3D and denote it as PoleMatch, while the plane-only method is denoted PlaneMatch.\nThe algorithms are implemented in MATLAB\\footnote{\\href{https:\/\/github.com\/mit-acl\/clipper}{https:\/\/github.com\/mit-acl\/clipper}} and executed on an i9-7920X CPU with 64 GB RAM.\n\n\\subsection{Dataset}\\label{sec:dataset}\n\nEach sequence contains a trajectory of ground truth poses $T_i = (R_i,t_i)\\in\\mathrm{SE}(3)$.\nWe generate potential loop candidates by sampling $K$ keyframe poses $\\bar{T}_k,\\forall\\,k\\in[K]$ along the trajectory with a stride of \\SI{20}{\\m}, e.g., see Fig.~\\ref{fig:kitti-traj-kfs}.\nLet the set of all poses $T_i$ leading up to keyframe $\\bar{T}_k$ be denoted $\\mathcal{T}_k$.\nThe set of previously visited poses near keyframe $\\bar{T}_k$ is then\n\\begin{equation*}\n\\mathcal{X}_k = \\{ T_i\\;\\colon \\|t_k-\\bar{t}_i\\| < r,\\;\\forall\\,T_i\\in\\mathcal{T}_{k-1} \\},\n\\end{equation*}\nwhere we have set $r=\\SI{20}{\\m}$ to prevent selecting a loop pair without overlapping scans.\nFrom each $\\mathcal{X}_k\\ne\\emptyset$, three loop candidates are generated with $\\bar{T}_k$ based on straight-line distance.\nWe used distances of \\SI{0}{\\m}, \\SI{8}{\\m}, and \\SI{16}{\\m}, for easy, medium, and hard difficulty, respectively.\nThese three cases allow us to evaluate each method's sensitivity to noise, baseline, and partial overlap.\nSome keyframes may not have a loop candidate at a specified distance, resulting in an unequal number of easy, medium, and hard cases.\nA histogram of these distances is shown in Fig.~\\ref{fig:candidate-dists} for all KITTI sequences.\n\nPole and plane features are extracted from each loop candidate lidar scan and are used as input for each algorithm 
for global data association.\nPoles are extracted as lines by leveraging the SemanticKITTI~\\cite{behley2019semantickitti} dataset for simplicity.\nGiven points corresponding to the pole or trunk classes, we use DBSCAN~\\cite{ester1996density} implemented in Open3D~\\cite{Zhou2018} to generate clusters, from which PCA~\\cite{shlens2014tutorial} is used to estimate a line.\nPlanar patches are extracted from the lidar scan using our implementation\\footnote{\\href{https:\/\/github.com\/plusk01\/pointcloud-plane-segmentation}{https:\/\/github.com\/plusk01\/pointcloud-plane-segmentation}} of~\\cite{araujo2020robust}.\nBecause planar patches are bounded, there may be multiple planar patches that correspond to the same infinite plane.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=\\columnwidth]{kitti-pdists}\n \\caption{\n Pairwise object distances from KITTI 00, 02, 05, and 08.\n The mean is \\SI[separate-uncertainty=true,multi-part-units=single]{27\\pm16}{\\meter}.\n Using this data, we choose the scaling parameter as $\\rho=40$.\n }\n \\vskip0.1in\n \\label{fig:kitti-pdists}\n\\end{figure}\n\n\\subsection{Selection of Scaling Parameter}\\label{sec:exp-scaling}\n\nThe scaling parameter $\\rho$ (see Section~\\ref{sec:consistency}) is chosen so that the pairwise affine Grassmannian distance lies in the linear regime and is therefore more sensitive when scoring consistencies.\nThe Velodyne HDL-64E used in KITTI has a range of \\SIrange{50}{120}{\\meter}, with an average point range in the KITTI dataset of approximately \\SI{80}{\\meter}.\nIn terms of pairwise object distances, we find that the average Euclidean distance is \\SI[separate-uncertainty=true,multi-part-units=single]{27\\pm16}{\\meter}, as shown in Fig.~\\ref{fig:kitti-pdists}.\nTherefore, we select $\\rho=40$ so that relative Euclidean distances of \\SI{80}{\\meter} will be scaled to \\SI{2}{\\meter}, which is at the end of the linear regime (see Fig.~\\ref{fig:graffsensitivity}).\n\n\\begin{table}[!t] %\n\\centering\n\\caption{\nRecall at \\SI{100}{\\percent} precision.\nDivided into easy (E), medium (M), hard (H) cases based on straight-line distance between loop candidate poses.\n}\n\\setlength{\\tabcolsep}{3.3pt}\n\\ra{1.2}\n\\begin{tabular}{c c c c c c c c c c c c}\n\\toprule\nSeq. 
& \\multicolumn{3}{c}{GraffMatch (Ours)} && \\multicolumn{3}{c}{PoleMatch} && \\multicolumn{3}{c}{PlaneMatch} \\\\\n\\cmidrule{2-4}\\cmidrule{6-8}\\cmidrule{10-12}\n & E & M & H && E & M & H && E & M & H \\\\\n\\toprule\n$00$ & \\SI{91}{\\percent} & \\SI{78}{\\percent} & \\SI{46}{\\percent} && \\SI{69}{\\percent} & \\SI{43}{\\percent} & \\SI{41}{\\percent} && \\SI{66}{\\percent} & \\SI{3}{\\percent} & \\SI{3}{\\percent} \\\\\n$02$ & \\SI{100}{\\percent} & \\SI{78}{\\percent} & \\SI{50}{\\percent} && \\SI{44}{\\percent} & \\SI{33}{\\percent} & \\SI{17}{\\percent} && \\SI{33}{\\percent} & \\SI{11}{\\percent} & \\SI{0}{\\percent} \\\\\n$05$ & \\SI{95}{\\percent} & \\SI{68}{\\percent} & \\SI{35}{\\percent} && \\SI{42}{\\percent} & \\SI{41}{\\percent} & \\SI{18}{\\percent} && \\SI{42}{\\percent} & \\SI{14}{\\percent} & \\SI{6}{\\percent} \\\\\n$08$ & \\SI{100}{\\percent} & \\SI{79}{\\percent} & \\SI{78}{\\percent} && \\SI{55}{\\percent} & \\SI{32}{\\percent} & \\SI{44}{\\percent} && \\SI{0}{\\percent} & \\SI{0}{\\percent} & \\SI{0}{\\percent} \\\\\n\\midrule\nall & \\SI{94}{\\percent} & \\SI{76}{\\percent} & \\SI{48}{\\percent} && \\SI{56}{\\percent} & \\SI{39}{\\percent} & \\SI{33}{\\percent} && \\SI{45}{\\percent} & \\SI{6}{\\percent} & \\SI{3}{\\percent} \\\\\n\\bottomrule \n\\end{tabular}\n\\label{tbl:recall}\n\\end{table}\n\n\n\\begin{table}[t] %\n\\centering\n\\caption{\nMedian translation and rotation alignment error of all successful loop closures, divided into easy (E), medium (M), hard (H) cases.\n}\n\\setlength{\\tabcolsep}{3.1pt}\n\\ra{1.2}\n\\begin{tabular}{c c c c c c c c c c c c}\n\\toprule\n & \\multicolumn{3}{c}{GraffMatch (Ours)} && \\multicolumn{3}{c}{PoleMatch} && \\multicolumn{3}{c}{PlaneMatch} \\\\\n\\cmidrule{2-4}\\cmidrule{6-8}\\cmidrule{10-12}\n & E & M & H && E & M & H && E & M & H \\\\\n\\toprule\n$\\tilde{t}_\\text{err}$ [cm] & $9.1$ & $17.3$ & $25.7$ && $10.4$ & $23.2$ & $16.0$ && $11.8$ & $17.3$ & $25.1$ \\\\\n$\\tilde{\\theta}_\\text{err}$ [deg] & $0.57$ & $0.92$ & $1.32$ && $0.74$ & $1.6$ & $1.72$ && $0.97$ & $1.78$ & $2.58$ \\\\\n\\bottomrule \n\\end{tabular}\n\\label{tbl:alignment-error}\n\\end{table}\n\n\n\n\\subsection{Loop Closure Results}\nGlobal data association is attempted on each loop closure candidate, after which line and plane matches are used to estimate a rigid transformation $\\hat{T}^i_j$ of scan $j$ w.r.t scan $i$.\nThe quality of loop closure is evaluated by comparing $\\hat{T}^i_j$ with the ground truth $T^i_j$ and calculating the rotation and translation error.\nIf the rotation error is less than \\SI{5}{\\degree} and the translation error is less than \\SI{1}{\\meter}, the loop closure is accepted.\nIf the number of matches returned by an algorithm is less than 3, the loop closure attempt is considered failed.\nThe parameters used for GraffMatch (see \\eqref{eq:consistency}) are $\\epsilon=0.2$ and $\\sigma=0.02$.\n\nTable~\\ref{tbl:recall} lists the recall at \\SI{100}{\\percent} precision for each tested KITTI sequence.\nAs expected, utilizing both poles and planes in GraffMatch produces a higher number of successful loop closures.\nThe number of successful PoleMatch loop closures is low due to too few poles or variation of extracted poles across lidar scans (i.e., in a single scan, few lidar point returns may exist for a pole-like object, leading to a noisy centroid).\nPlaneMatch also scores low in general and even fails to successfully match and align planes in all of sequence 08, where the car drives through previously visited streets 
in the opposite direction.\nBecause the CP parameterization heavily depends on the origin and orientation of the lidar sensor frame, successful CP plane matching requires a very good initialization, as in the easy case where PlaneMatch performs at its best.\nThis requirement can be problematic in the presence of odometry-only measurements, as drift could prevent loop closure from ever succeeding.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=\\columnwidth]{kitti-err-2dhist}\n \\caption{\n Alignment error for loop closure pairs, visualized as a grid of likelihood-normalized density plots.\n From left to right, the grid columns correspond to GraffMatch (ours), PoleMatch, and PlaneMatch.\n From top to bottom, the grid rows correspond to the easy, medium, and hard cases.\n For each case, GraffMatch achieves the highest recall, indicated by the high density of points in the low-translation, low-rotation error regime.\n PoleMatch fails to generate enough pole correspondences in many loop closures due to the scarcity of poles; in these cases, the error is set to a high value (upper-right corner).\n PlaneMatch performs at its best in the easy case when lidar scans have very close initial poses, but breaks down as the baseline distance increases.\n }\n \\label{fig:kitti-alignment-err}\n\\end{figure}\n\nFig.~\\ref{fig:kitti-alignment-err} shows the alignment error of loop candidates from all sequences as a $3\\times3$ grid of density heatmaps, where columns correspond to algorithms and rows (from top to bottom) correspond to easy, medium, and hard cases.\nIn many cases of PoleMatch and in some cases of PlaneMatch, less than 3 matches were returned and so alignment error is set high, causing increased density in the upper-right corner.\nGraffMatch is the only data association method that consistently scores in the low-translation, low-rotation error regime.\nThe median alignment error for successful loop closures is listed in Table~\\ref{tbl:alignment-error}.\n\n\nAs discussed in Section~\\ref{sec:consistency}, the distance function used to score consistency in our graph-theoretic framework is an important consideration.\nWe choose $d_\\mathrm{Graff}$ because it allows us to score the consistency of affine subspaces pairs with arbitrary dimension in a principled manner.\nOther distance functions might only consider the distance or angle between objects, for example.\nFig.~\\ref{fig:recall-vs-distance} shows recall at \\SI{100}{\\percent} precision and compares our choice of $d_\\mathrm{Graff}$ with four other possible distances.\nThe distances $d_\\mathrm{Gr}$ and $d_{\\pi\\ell}$ disregard distance information, treating lines and planes as linear subspaces containing the origin, or naively using the inner product between a plane's normal vector and a line's direction vector, respectively.\nThe standard Euclidean distance $d_{\\mathbb{R}^n}$ disregards subspace orientation and instead treats lines and planes as bounded, using their centroids as measurements.\nAs discussed previously in this section, using centroid requires that points be segmented into the same bounded lines and planes in every view, and thus will suffer as the baseline between loop pairs increases.\nNaively combining orientation and distance information in $d_{\\mathrm{Gr}\\times\\mathbb{R}^n}$ leverages all available information, but requires the weighting function $f$ (see Section~\\ref{sec:consistency}) to take on an ad-hoc mixture of kernels with additional parameters, e.g., 
$f(c_r,c_\\theta):=\\exp(-c_r^2\/\\sigma_r^2)\\exp(-c_\\theta^2\/\\sigma_\\theta^2)$.\nUsing $d_\\mathrm{Graff}$ leads to a simple method of calculating distances on the manifold of affine subspaces and leads to higher recall.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=\\columnwidth]{recallvsdgraff}\n \\caption{\n Recall at \\SI{100}{\\percent} precision of loop candidate alignment using different distance functions in our data association framework.\n The shifted affine Grassmannian distance $d_\\mathrm{Graff}$, which combines line and plane `direction' with distance, provides the highest recall.\n Using centroid information ($d_\\mathbb{R}^n$, $d_{\\mathrm{Gr}\\times\\mathbb{R}^n}$) also gives good results, but depends on accurate line and plane segmentation.\n Using only directional information ($d_\\mathrm{Gr}$, $d_{\\pi\\ell}$) performs poorly due to many objects with similar plane normals and line directions.\n }\n \\label{fig:recall-vs-distance}\n\\end{figure}\n\n\n\nTiming results for GraffMatch, PoleMatch, and PlaneMatch are respectively \\SI[separate-uncertainty=true,multi-part-units=single]{0.076\\pm0.102}{\\second}, \\SI[separate-uncertainty=true,multi-part-units=single]{0.005\\pm0.004}{\\second}, and \\SI[separate-uncertainty=true,multi-part-units=single]{0.011\\pm0.003}{\\second}.\nThus, GraffMatch is suitable for online operation in loop closure tasks and is a robust alternative to PoleMatch and PlaneMatch, both of which rely on assumptions to speed up their execution, but limit their accuracy.\nSpecifically, PoleMatch treats infinite lines as centroid points and PlaneMatch requires an initial frame alignment guess.\nIn our experiments, there were an average of $7$ poles and $23$ planar patches extracted per frame, resulting in an average of $650$ initial correspondences to be processed for geometric consistency.\nExecution time could be reduced by leveraging additional object information to immediately discard initial matches instead of allowing each object be potentially associated with each other object (e.g., a plane with large area is unlikely to be matched to a small plane).\n\n\n\n\n\n\n\\subsection{Proof of Invariance}\n\n\\begin{repprop}{prop:invariance}\n\\input{paper\/prop_invariance}\n\\end{repprop}\n\\begin{proof}\nThe subspace distance between $\\mathbb{Y}_1$ and $\\mathbb{Y}_2$ is $d_\\mathrm{Graff}(\\mathbb{Y}_1,\\mathbb{Y}_2) = \\|\\Theta\\|$, where $\\Theta$ is a vector of $k=\\min(k_1,k_2)$ principal angles.\nThese angles can be calculated via the singular value decomposition of $Y_1^\\top Y_2$, the inner product of the Stiefel coordinates of $\\mathbb{Y}_1,\\mathbb{Y}_2$.\nWithout loss of generality, assume $\\mathbb{Y}_1,\\mathbb{Y}_2$ are shifted s.t. 
$b_{1}=0$.\nThen,\n\\begin{equation}\nY_1^\\top Y_2 =\n\\begin{bmatrix}\nA_1^\\top A_2 & \\tfrac{1}{\\eta_2}A_1^\\top b_{02} \\\\\n0 & \\tfrac{1}{\\eta_1\\eta_2}\n\\end{bmatrix},\n\\end{equation}\nwhere $\\eta_i=\\sqrt{\\|b_{0i}\\|^2 + 1}$.\nGiven $T=(R,t)\\in\\mathrm{SE}(3)$, let $\\bar{\\mathbb{Y}}_1,\\bar{\\mathbb{Y}}_2$ be the rotated and translated versions of $\\mathbb{Y}_1,\\mathbb{Y}_2$, respectively, with affine coordinates\n\\begin{equation}\n\\mathbb{Y}_i:[A_i,b_i] \\xrightarrow{\\quad T\\quad} \\bar{\\mathbb{Y}}_i:[RA_i, Rb_i + t].\n\\end{equation}\nShifting $\\bar{\\mathbb{Y}}_1,\\bar{\\mathbb{Y}}_2$ by $-\\bar{b}_1=-(Rb_1+t)$ leads to the affine coordinates $\\bar{\\mathbb{Y}}_1:[RA_1, 0]$ and\n\\begin{equation}\n\\bar{\\mathbb{Y}}_2:[RA_2, Rb_2+t-(Rb_1+t)] = [RA_2, Rb_2],\n\\end{equation}\nso that\n\\begin{equation}\n\\bar{Y}_1^\\top \\bar{Y}_2 =\n\\begin{bmatrix}\nA_1^\\top A_2 & \\tfrac{1}{\\eta_2}A_1^\\top b_{02} \\\\\n0 & \\tfrac{1}{\\eta_1\\eta_2}\n\\end{bmatrix},\n\\end{equation}\nwhich is free of $R$ and $t$ and equal to $Y_1^\\top Y_2$, as desired.\n\n\n\n\n\n\n\n\\end{proof}\n\\section{Method}\\label{sec:method}\n\nGiven a candidate pair of lidar scans produced by, e.g., matching global scan descriptors~\\cite{zhu2020gosmatch} or comparison with past keyframes~\\cite{jiang2020lipmatch}, we seek to geometrically verify the loop pair and produce a relative transformation between the two sensor poses.\nIn the following discussion, we assume that scan $i$ has already had $l_i$ lines and $p_i$ planes extracted, and we refer to them collectively as objects $s_{i,a}\\in{\\mathcal{S}_i = \\{ \\mathbb{Y}^\\ell_1,\\dots,\\mathbb{Y}^\\ell_{l_i}, \\mathbb{Y}^\\pi_1,\\dots,\\mathbb{Y}^\\pi_{p_i}\\}}$.\nOur method is comprised of the following steps: (i) constructing a consistency graph based on pairwise object distances in each scan, (ii) identifying object correspondences via the densest fully-connected subgraph in the consistency graph, and (iii) estimating a rigid transformation based on object correspondences.\n\n\\begin{figure}[t]\n \\centering\n \\includeinkscape[pretex=\\footnotesize,width=\\columnwidth]{consistencygraph}\n \\caption{\n Construction of a consistency graph.\n Using $d_\\mathrm{Graff}$, the distance between a line and a plane in scan $\\mathcal{S}_i$ (\\tikzcircle[scan1blue,fill=scan1blue]{1.5pt}) is compared to the distance between the two corresponding objects in $\\mathcal{S}_j$ (\\tikzcircle[scan2red,fill=scan2red]{1.5pt}).\n The consistency of these two distances is evaluated using \\eqref{eq:consistency} and the edge $(u_1,u_2)$ is so weighted.\n }\n \\label{fig:consistencygraph}\n\\end{figure}\n\n\n\n\n\n\\subsection{Consistency Graph Construction}\\label{sec:consistency}\n\nA consistency graph for two scans $\\mathcal{S}_i$, $\\mathcal{S}_j$ is an undirected weighted graph $\\mathcal{G}=(\\mathcal{V},\\mathcal{E},w)$ with potential object correspondences $s_{i,a}\\leftrightarrow s_{j,b}$ as vertices, edges between consistent correspondences, and a weighting function $w:\\mathcal{E}\\to[0,1]$ that evaluates the strength of consistency.\nA pair of correspondences $u_1,u_2\\in\\mathcal{V}$ is consistent if the distance between the underlying objects $s_{i,a}\\in\\mathcal{S}_i,s_{j,b}\\in\\mathcal{S}_j$ satisfies\n\\begin{equation}\\label{eq:consistency}\nc_{u_1,u_2} = |d(s_{i,u_1^a},\\,s_{i,u_2^a}) - d(s_{j,u_1^b},\\,s_{j,u_2^b})| < \\epsilon,\n\\end{equation}\nfor some distance function $d$.\nNote that the two distances in \\eqref{eq:consistency} are 
between objects \\emph{internal} to scans $\\mathcal{S}_i$ and $\\mathcal{S}_j$, respectively.\nIf a pair of correspondences are deemed consistent, the corresponding edge is attributed the weight $w(u_1,u_2):=f(c_{u_1,u_2})$, for some choice of ${f:\\mathbb{R}_+\\to[0,1]}$ that scores very consistent pairs close to 1.\nIn this paper, we choose $f(c):=\\exp(-c^2\/2\\sigma^2)$ for simplicity, though other appropriate functions could be used.\nGiven a consistency graph, correspondences are selected that maximize consistency, further explained in Section~\\ref{sec:clipper}.\n\nThe distance function $d$ must be carefully chosen to ensure accuracy of graph-based data association.\nIn particular, we desire \\eqref{eq:consistency} to hold when $s_{j,u_1^b},\\,s_{j,u_2^b}$ are the transformed versions of $s_{i,u_1^a},\\,s_{i,u_2^a}$, respectively.\nThis property is called invariance and leads to subgraphs of the consistency graph that indicate a set of object matches in a loop pair.\n\\begin{definition}\\label{defn:invariance}\nA distance $d:X\\times X\\to\\mathbb{R}$ is \\emph{invariant} if $d(x_1,x_2) = d(x_1',x_2')$, where $x_1',x_2'\\in X$ are the transformation of $x_1,x_2\\in X$ under $T\\in\\mathrm{SE}(3)$, respectively.\n\\end{definition}\nWe establish the invariance of the metric $d_\\mathrm{Graff}$ to rotation and, under careful application, translation.\n\\begin{prop}\\label{prop:invariance}\n\\input{paper\/prop_invariance}\n\\end{prop}\n\\begin{proof}\nSee Appendix A.\n\\end{proof}\nThe intuition of Proposition~\\ref{prop:invariance} can be understood from Fig.~\\ref{fig:graffexample-both}.\nAs $\\mathbb{Y}_1$ and $\\mathbb{Y}_2$ are together translated further from the origin, the principal angle between $j(\\mathbb{Y}_1)$ and $j(\\mathbb{Y}_2)$ decreases to zero in the limit.\nHowever, the distance between the affine components of $\\mathbb{Y}_1$ and $\\mathbb{Y}_2$ remains the same, no matter the translation.\nBy first shifting the affine components, we remove the dependence of the absolute translation in the computation of the principal angle, while maintaining the dependence on the \\emph{relative} translation between $\\mathbb{Y}_1$ and $\\mathbb{Y}_2$.\n\nA remaining challenge is to address the insensitivity of $d_\\mathrm{Graff}$ to the Euclidean distance between affine components of objects.\nThe yellow curve ($s=0$) in Fig.~\\ref{fig:graffsensitivity} represents the principal angle between $\\mathbb{Y}_1,\\mathbb{Y}_2\\in\\mathrm{Graff}(0,1)$ after shifting them as per Proposition~\\ref{prop:invariance}, as a function of the Euclidean distance between $\\mathbb{Y}_1$ and $\\mathbb{Y}_2$.\nObserve that after a distance of approximately \\SI{2}{\\meter}, the curve flattens significantly as it asymptotes towards $\\tfrac{\\pi}{2}$.\nThis nonlinearity leads to poor discrimination between pairs of correspondences whose internal objects are far apart in the Euclidean sense.\nTo combat this when calculating pairwise affine Grassmannian distances, we first scale the affine component of each $\\mathbb{Y}_i$ by a constant parameter $\\rho$ so that the affine coordinates of $\\mathbb{Y}_i$ become $[A_i,b_i\/\\rho]$.\nThe choice of $\\rho$ depends on the average Euclidean distance between objects in the environment and its effect is to bring principal angles into the linear regime.\nThe selection of $\\rho$ is discussed further in Section~\\ref{sec:exp-scaling}.\n\nWith Proposition~\\ref{prop:invariance} and the scaling parameter $\\rho$ in hand, a consistency graph between objects in 
$\\mathcal{S}_i$ and $\\mathcal{S}_j$ can be constructed.\nWe establish initial correspondences between each object in $\\mathcal{S}_i$ with each object of $\\mathcal{S}_j$ so long as the objects are of the same dimensions $k$ (i.e., we do not consider lines associated to planes).\nGiven additional information such as color, scan intensity, planar patch area, or pole radius, this initial set of correspondences could be refined, but would rely on accurately segmenting lines and planes across potentially wide baselines.\nWhile we restrict object correspondences to be of the same dimension, the machinery we have developed allows for computing the consistency of two correspondences whose internal pair of objects have differing dimension.\nEvaluating the consistency of a correspondence pair in our affine Grassmannian framework is illustrated in Fig.~\\ref{fig:consistencygraph}.\n\n\n\n\n\n\n\\subsection{Graph-based Global Data Association}\\label{sec:clipper}\n\nGiven a consistency graph, the task of matching objects from two scans is reduced to identifying the densest clique of consistent correspondences, formalized as the problem\n\\begin{gather}\\label{eq:densestclique}\n\\begin{array}{ll}\n\\underset{u \\in \\{0,1\\}^m}{\\text{maximize}} & \\dfrac{u^\\top M \\, u}{u^\\top u}\n\\\\\n\\text{subject to} & u_i \\, u_j = 0 \\quad \\text{if}~ M(i,j)=0, ~ \\forall_{i,j},\n\\end{array}\n\\end{gather}\nwhere $M\\in[0,1]^{m\\times m}$ is the weighted adjacency matrix (i.e., from $w$ as defined in Section~\\ref{sec:consistency}) with ones on the diagonal, and ${u\\in\\{0,1\\}^m}$ indicates a consistent set of correspondences.\nNote that we choose to maximize the \\emph{density} of correspondences rather than the cardinality (e.g., maximum clique) as our previous work has found this objective to produce more accurate results~\\cite{lusk2021clipper}.\nProblem~\\eqref{eq:densestclique} is NP-hard, therefore we solve a particular relaxation which yields high accuracy solutions via our efficient CLIPPER algorithm (see ~\\cite{lusk2021clipper} for more details).\n\n\n\n\\subsection{Transformation Estimation}\n\nGiven pairwise correspondences between objects in $\\mathcal{S}_i$ and $\\mathcal{S}_j$, consider finding the best rigid transformation to simultaneously align matched lines and planes by solving the optimization problem\n\\begin{equation}\n\\min_{\\substack{R\\in\\mathrm{SO}(3),\\\\ t\\in\\mathbb{R}^3}}\n\\sum_{i=1}^{p} \\|\\pi_i' - f_\\pi(\\pi_i,R,t)\\|^2\n+\n\\sum_{i=1}^{l} \\|\\ell_i' - f_\\ell(\\ell_i,R,t)\\|^2.\n\\end{equation}\nThis problem can be solved in closed-form by first solving for the rotation via SVD, then solving for the translation via least squares, similar to Arun's method for point cloud registration~\\cite{arun1987least}.\nThe benefit of using the line and plane geometry directly, as opposed to a point parameterization, is twofold.\nFirst, it allows the use of the full information present in the infinite plane or line, i.e., distance from origin as well as orientation.\nSecond, it does not require assumptions about where the ``centroid'' of the plane or line is, which is undefined for infinite planes and lines and requires consistent segmentation of objects across scans.\nTogether, these benefits lead to a more accurate rigid transformation estimate when aligning line and plane features.\n\n\\section{Conclusion}\\label{sec:conclusion}\n\nWe presented a global data association method that achieved high recall with low alignment error when evaluated on candidate loop closures in 
the KITTI dataset.\nBy unifying the representation of poles and planes extracted from lidar scans as affine Grassmannian manifold elements, GraffMatch widens the applicability of using geometric primitive in place of memory-intensive point cloud maps.\nImportantly, leveraging the invariant shifted affine Grassmannian distance in our graph-based data association framework enables the geometric verification of place recognition candidates with a wide range of baseline distances between frames.\nBy removing assumptions on initial frame alignment (e.g., from noisy odometry), GraffMatch is applicable to other perception problems requiring geometric verification, such as extrinsic sensor calibration, map merging, and global relocalization.\n\nIn future work we will incorporate GraffMatch into a complete SLAM pipeline, using affine Grassmannian objects for both local and global data association.\nIn particular, we will investigate the estimation of lines and planes directly via subspace tracking methods, using manifold-based optimization techniques to perform online bundle adjustment of affine Grassmannian object landmarks.","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\n\n\n\\section{Introduction}\n\nThe Doppler effect of atmospheric turbulence on the broadening of spectral \nlines plays a significant role in stellar spectroscopy. For example,\nin spectroscopic determination of projected rotational velocities \n($v_{\\rm e}\\sin i$) of solar-type (FGK-type) stars, it is crucial to\nproperly eliminate the line-broadening component of turbulence origin, \nbecause it is comparable to (or predominant over) the rotational \nbroadening in such generally slow rotators decelerated due to \nthe magnetic braking mechanism. \n\nTo make the problem easy and tractable, a very rough approximation \nhas been adopted in traditional stellar spectroscopy, where turbulence \nin stellar atmospheres is divided into ``micro''-turbulence and \n``macro''-turbulence and separately treated, where the former \n(microscopic scale) is included in the Doppler with of the \nline-opacity profile (like thermal velocity) while the latter \n(macroscopic scale) acts as a global velocity distribution\nfunction (like rotational broadening function) to be convolved \nwith the intrinsic profile. Given that the extent of the latter \n($\\gtsim 2$~km~s$^{-1}$) is known to be comparatively larger \nand more important than the former ($\\ltsim 1$~km~s$^{-1}$) \nin solar-type dwarfs, the latter ``macroturbulence'' is \nthe main issue in this context.\n\nRegarding the velocity distribution function of macroturbulence, \nthe so-called ``radial-tangential macroturbulence'' (hereinafter \nabbreviated as RTM) model has been widely used so far,\nwhich was introduced by Gray (1975) for the first time\nfor analyzing the line-profiles of late-type stars.\nThat is, the appearance of solar granular velocity field \n(consisting of convective cells moving upward\/downward and horizontal \nmotions between the rising\/falling cells) inspired Gray (1975) to\npostulate that the velocity vectors are directionally restricted \nto being either along stellar radius or tangential to the surface \nwhile the speed of gas motion in each direction follows the random \nGaussian distribution with a dispersion parameter $\\zeta_{\\rm RT}$. 
\nSince then, along with the efficient Fourier transform technique \n(e.g., Smith \\& Gray 1976), Gray and his coinvestigators have extensively \napplied this RTM model to line-profile analyses of F-, G-, and K-type \nstars in various evolutionary stages (e.g., to determine $v_{\\rm e}\\sin i$ \nby separating $\\zeta_{\\rm RT}$; see, Gray 1988, 2005 for more details\nregarding the technical descriptions and accomplished results in this field). \n\nHowever, we feel some concern regarding the applicability of RTM to the case \nof solar-type dwarfs. Namely, according to Gray (1984; cf. section~V therein),\nthe value of $\\zeta_{\\rm RT}$ ($\\sim 4$~km~s$^{-1}$) derived from the flux spectrum of \nthe Sun-as-a-star based on RTM is appreciably larger than the non-thermal dispersion \nor typical granular velocities ($\\sim$2--3~km~m$^{-1}$) directly estimated \nfrom spectroscopic observations of the resolved-disk Sun. What is the cause of \nthis difference?\n\nIn order to clarify this situation, we refer to the work of Takeda (1995b; \nhereinafter referred to as Paper~I). In paper~I, an extensive profile-fitting \nanalysis was carried out for many ($\\sim 300$) blend-free lines of various \nstrengths in the solar flux spectrum by using RTM model with an aim to \ninvestigate the relation between $\\zeta_{\\rm RT}$ and the mean-formation \ndepth ($\\overline{\\log\\tau}$),\\footnote{\nAs described in subsection~5.1 of Paper~I, the mean formation depth\nin the solar flux spectrum ($\\overline{\\log\\tau}$) is defined as \n$\\overline{\\log\\tau} \\equiv \\int R^{0}_{\\lambda}\\log\\tau_{5000}(\\tau_{\\lambda} =2\/3)d\\lambda\/\n\\int R^{0}_{\\lambda} d\\lambda$, where $R^{0}_{\\lambda}$ is the line depth of the intrinsic \nprofile expressed as $R^{0}_{\\lambda} \\equiv 1.- F^{0}_{\\lambda}\/F^{0}_{\\rm cont}$.\nNote that the notation $\\overline{\\log\\tau}$ is used for the line-forming depth \nin the flux spectrum as in Paper~I, while that for the intensity spectrum\nis denoted as $\\langle \\log\\tau \\rangle$ in this paper [cf. equation~(9)].}\n and the following results were derived\n(cf. figure~2 in Paper~I):\\\\\n--- $\\zeta_{\\rm RT}$ progressively increases \nwith depth from $\\sim 2.3$~km~s$^{-1}$ (at $\\overline{\\log\\tau} \\sim -2$) to\n$\\sim 3.8$~km~s$^{-1}$ (at $\\overline{\\log\\tau} \\sim -0.5$).\\\\\n--- While this depth-dependence of $\\zeta_{\\rm RT}$ is qualitatively \nconsistent as compared to the tendency of solar photospheric non-thermal \nvelocity dispersion ($V^{\\rm rad}$, $V^{\\rm tan}$; see, e.g., figures~1--3 \nin Gurtovenko 1975c or figure~1 in Canfield \\& Beckers 1976), the former is \nsystematically higher by $\\sim 1$~km~s$^{-1}$ for unknown reason,\\footnote{\nAlthough it was speculated in Paper~I that the limb effect might cause some \nextra broadening of line width in the disk-integrated flux spectrum, such an \neffect (even if any exists) is quantitatively too small to account for this excess.} \nwhich again confirmed that $\\zeta_{\\rm RT}$ tends to be larger than \nthe directly estimated velocity dispersion.\n\nIt is worth noting here that previous determinations of non-thermal \nvelocity dispersion mentioned above were done under\nthe assumption of anisotropic Gaussian distribution (cf. equation~(4)\nin subsection~2.3; i.e., near-random distribution of velocity vectors), \nwhich is markedly different from the basic assumption of RTM. 
\nAs a trial, we repeated the same analysis as done in Paper~I \n(with a fixed $v_{\\rm e}\\sin i$ of 1.9~km~s$^{-1}$) but with \nthe classical Gaussian macroturbulence (hereinafter referred to as GM) \nexpressed by one dispersion parameter $\\eta$ instead of RTM. \nThe resulting $\\eta$ values are plotted against $\\overline{\\log\\tau}$ \nin figure~1a, where the $\\zeta_{\\rm RT}$ vs. $\\overline{\\log\\tau}$ relation \nis also shown for comparison. It is manifest from this figure that \n$\\eta$ is systematically smaller than $\\zeta_{\\rm RT}$ (the difference \namounting to a factor of $\\sim 2$) and more consistent with the\nliterature results of $V^{\\rm rad}$ or $V^{\\rm tan}$.\n\nThis $\\zeta_{\\rm RT}$ vs. $\\eta$ discrepancy is reasonably interpreted as due \nto the difference between the characteristic widths of these two broadening functions.\nIn order to demonstrate this point, the broadening functions for RTM ($M_{1}$)\nand GM ($M_{2}$) are graphically displayed in figures~1b and 1c, respectively.\nFocusing on the zero-rotation case ($v_{\\rm e}\\sin i = 0$), we see \nthat the half-width at half maximum (HWHM) for $M_{1}$ is \nHWHM$_{1} = 0.36 \\zeta_{\\rm RT}$ while that for $M_{2}$ is HWHM$_{2} = 0.83 \\eta$, \nwhich yields $\\zeta_{\\rm RT}\/\\eta \\sim 2.3 (\\simeq 0.83\/0.36)$ by equating these\ntwo widths as HWHM$_{1}$ = HWHM$_{2}$. \nThat is, since the width of RTM is narrower than that of GM (see also Fig.~17.5 of \nGray 2005), the inequality of $\\zeta_{\\rm RT} > \\eta$ generally holds \nregarding the solutions of $\\zeta_{\\rm RT}$ and $\\eta$ required to reproduce \nthe observed line width. This should be the reason why $\\zeta_{\\rm RT}$ is larger than\n$\\eta$ by a factor of $\\sim 2$. \n\nGiven that resulting solutions of macroturbulence are so significantly dependent \nupon the choice of broadening function, it is necessary to seriously consider \nwhich model represents the actual velocity field more adequately. \nEspecially, we wonder whether RTM is ever based on a reasonable assumption,\nbecause its broadening width is appreciably narrower than the extent of turbulent \nvelocity dispersion, which stems from the extraordinary two-direction-confined \ncharacteristics (i.e., due to the projection effect; cf. figure~2).\nSince the peculiarity of RTM lies in its specific angle-dependence, we would \nbe able to give an answer to this question by studying the solar photospheric \nvelocity dispersion at various points on the disk (i.e., from different view angles).\n\nAccording to this motivation, we decided to challenge the task of verifying \nthe validity of RTM by using the Sun as a testbench. Our approach is simple and \nstraightforward in the sense that we carefully examine the widths of spectral \nlines from the disk center to the limb by making use of the profile-fitting\ntechnique (as adopted in Paper~I), by which the widths of local broadening function \ncan be efficiently determined while eliminating the effects of \nintrinsic and instrumental profiles. 
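The factor of $\sim 2.3$ quoted above follows from elementary arithmetic. The short sketch below (illustrative only; the coefficient $0.36$ for the disk-integrated RTM kernel is taken from the text, and only the Gaussian coefficient is recomputed) makes the step explicit:
\begin{verbatim}
import numpy as np

# HWHM of the Gaussian kernel exp[-(v/eta)^2]:
# exp[-(v/eta)^2] = 1/2  ->  v = eta*sqrt(ln 2) ~ 0.83*eta
hwhm_gauss = np.sqrt(np.log(2.0))     # in units of eta

# HWHM of the disk-integrated RTM kernel M_1 for zero rotation,
# as quoted in the text: ~0.36*zeta_RT
hwhm_rtm = 0.36                       # in units of zeta_RT

# equating the two widths gives the expected ratio of the two solutions
print("HWHM(GM) = %.2f eta" % hwhm_gauss)               # ~0.83
print("zeta_RT/eta ~ %.1f" % (hwhm_gauss / hwhm_rtm))   # ~2.3
\end{verbatim}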
\nRegarding the observational data, we employed two data sets of local intensity \nspectra taken at a number of points on the solar disk: (i) spatially averaged \nspectra (over $\\sim 50''$) to test whether the center-to-limb variation \nof the macrobroadening width predicted by RTM is observed, and (ii) spectra of \nfine spatial resolution (with sampling step of $\\sim 0.''1$--$0.''3$) to examine \nthe validity of the fundamental assumption on which RTM stands.\n\nThe remainder of this article is organized as follows:\nWe first explain the definitions of RTM and GM models in section~2, \nwhich forms the fundamental basis for the following sections.\nSection~3 describes our method of analysis using the profile-fitting technique. \nIn section~4, the behavior of velocity dispersion along the line-of-sight \nis investigated based on the low-resolution ground-based data and compared with \nthe prediction from RTM. The analysis of high-resolution data from space observation \nis presented in section~5, where statistical properties of local velocity dispersion \nand radial velocity are discussed. The conclusions are summarized in section~6.\nIn addition, three special appendices are provided, where the influence of the choice\nof microturbulence is checked (appendix~1), solar depth-dependent non-thermal \nvelocity dispersions are derived to compare with the literature results (appendix~2), \nand the behavior of macroturbulence in solar-type stars is discussed (appendix~3).\n\n\\section{Definition of macroturbulence broadening function}\n\nIn this section, we briefly describe the basic definitions of representative \nmacroturbulence broadening functions, which form the basis for the contents \nin later sections. \n\n\\subsection{Line-profile modeling with macroturbulence}\n\nIn the approximation that the intrinsic specific intensity \ngoing to a direction angle $\\theta$ [$I^{0}(v, \\theta)$] is broadened by \nthe local macroturbulence function [$\\Theta(v, \\theta)$], \nthe emergent intensity profile [$I(v, \\theta)$] is expressed as\n\\begin{equation}\nI(v, \\theta) = I^{0}(v, \\theta) \\otimes \\Theta(v, \\theta),\n\\end{equation}\nwhere $\\otimes$ means ``convolution.''\n\nSimilarly, when the intrinsic stellar flux [$F^{0}(v)$] is broadened by the \nintegrated macroscopic line-broadening function [$M(v)$] (including the \ncombined effects of macroturbulence and rotation), we may write the finally \nresulting flux profile [$F(v)$] as \n\\begin{equation}\nF(v) = F^{0}(v) \\otimes M(v),\n\\end{equation} \nwhere an implicit assumption is\nmade that the continuum-normalized profile of $F^{0}$ does not vary over the \nstellar disk. 
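As a concrete illustration of equation (1), the broadening can be carried out numerically on a velocity grid. The following is a minimal sketch under assumed numbers (a purely Gaussian intrinsic depth profile of width 1.5~km~s$^{-1}$ and a Gaussian kernel of dispersion 2~km~s$^{-1}$, not the synthetic spectra used in the actual analysis); convolving the line-depth profile with a unit-area kernel is equivalent to convolving the continuum-normalized intensity itself.
\begin{verbatim}
import numpy as np

dv = 0.05                                  # velocity step [km/s]
v  = np.arange(-30.0, 30.0 + dv, dv)

# assumed intrinsic line-depth profile (thermal + microturbulent width)
R0 = 0.6 * np.exp(-(v / 1.5) ** 2)

# assumed local macroturbulence kernel, normalized to unit area
Theta = np.exp(-(v / 2.0) ** 2)
Theta /= Theta.sum() * dv

# emergent profile: I/I_cont = 1 - R0 (convolved with) Theta
R = np.convolve(R0, Theta, mode="same") * dv

print("central depth: %.2f -> %.2f" % (R0.max(), R.max()))
print("equivalent width conserved: %.3f vs %.3f" % (R0.sum() * dv, R.sum() * dv))
\end{verbatim}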
\nGenerally, $M(v)$ is derived by integrating $\\Theta(v, \\theta)$ over the disk, while \nappropriately Doppler-shifting (with the assumed $v_{\\rm e}\\sin i$) as well as \nmultiplying by the limb-darkening factor (see Gray 1988 or Gray 2005 for more details).\n\n\\subsection{Radial--tangential macroturbulence}\n\nRegarding the widely used radial-tangential macroturbulence (RTM) model (Gray 1975),\n$\\Theta(v, \\theta)$ is defined as\n\\begin{eqnarray}\n\\Theta_{1}(v, \\theta) = \n\\frac{A_{\\rm R}}{\\pi^{1\/2}\\zeta_{\\rm R}\\cos\\theta} \\exp[-v^{2}\/(\\zeta_{\\rm R}\\cos\\theta)^2]+ \\nonumber \\\\\n\\frac{A_{\\rm T}}{\\pi^{1\/2}\\zeta_{\\rm T}\\sin\\theta} \\exp[-v^{2}\/(\\zeta_{\\rm T}\\sin\\theta)^2],\n\\end{eqnarray}\nthough $A_{\\rm R} = A_{\\rm T}$ and $\\zeta_{\\rm R} = \\zeta_{\\rm T} (= \\zeta_{\\rm RT})$ are\nusually assumed to represent the macroturbulence by only one parameter ($\\zeta_{\\rm RT}$). \nIt should be remarked that this is essentially a two-component model, in the sense\nthat fraction $A_{\\rm R}$ and fraction $A_{\\rm T}$ of the stellar surface are covered with \nregion R and region T (respectively) and that intrinsic intensity spectrum in each region \nis broadened by only either (i.e., not both) of the radial or tangential turbulent flow\n(cf. figure~2 for a schematic description of this model). \nIn practice, direct application of $\\Theta_{1}$ to solar intensity spectrum is difficult, \nsince it has an unusual profile especially near to the disk center ($\\sin \\theta \\sim 0$) or \nto the limb ($\\cos \\theta \\sim 0$). That is, as the broadening function is defined by \nthe sum of a Gaussian profile (with a reasonable width) and a $\\delta$-function-like profile \n(with very narrow width and very high peak), its width at half-maximum \ndoes not represent the real velocity dispersion any more (cf. figure~3a and figure~3a'). \nAccordingly, this model is used primarily in stellar application after the integration \nover the disk has been completed. 
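A minimal numerical transcription of equation (3) (assuming the usual one-parameter form $A_{\rm R}=A_{\rm T}=1/2$ and $\zeta_{\rm R}=\zeta_{\rm T}$, with $\zeta = 4$~km~s$^{-1}$ chosen only for illustration) makes the pathology described above easy to reproduce: near the disk center or the limb the half-width of $\Theta_{1}$ collapses far below the input dispersion.
\begin{verbatim}
import numpy as np

def theta1(v, theta, zeta=4.0, a_r=0.5, a_t=0.5):
    """Local RT broadening function of eq. (3), with zeta_R = zeta_T = zeta."""
    out = np.zeros_like(v)
    for a, w in ((a_r, zeta * np.cos(theta)), (a_t, zeta * np.sin(theta))):
        if w > 1e-12:
            out += a / (np.sqrt(np.pi) * w) * np.exp(-(v / w) ** 2)
    return out

v = np.linspace(-10.0, 10.0, 4001)
for deg in (5.0, 45.0, 85.0):
    prof = theta1(v, np.radians(deg))
    hwhm = v[prof >= 0.5 * prof.max()].max()
    print("theta = %2.0f deg :  HWHM = %.2f km/s  (zeta = 4 km/s)" % (deg, hwhm))
# near theta ~ 0 or ~ 90 deg the HWHM is only a few tenths of km/s,
# i.e. the kernel is dominated by its delta-function-like component
\end{verbatim}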
\nFigure~1b shows the profiles of the disk-integrated macrobroadening function \nfor this model $M_{1}(v; \\zeta_{\\rm RT}, v_{\\rm e}\\sin i)$, which were computed \nfor various values of $v_{\\rm e}\\sin i\/\\zeta_{\\rm RT}$ ratio by following\nthe procedure described in Gray (1988) (with the assumption of rigid rotation\nand limb-darkening coefficient of $\\epsilon = 0.6$).\n\n\\subsection{Anisotropic Gaussian macroturbulence}\n\nAlternatively, we can consider a Gaussian macroturbulence with an anisotropic \ncharacter with respect to the radial and tangential direction.\nThis is the case where the intrinsic intensity spectrum is broadened by \ngas of near-random motions (in terms of both speed and direction) following\nthe Gaussian velocity distribution of ellipsoidal anisotropy\n(with dispersions of $\\eta_{\\rm R}$ and $\\eta_{\\rm T}$ in the radial and\ntangential direction, respectively).\nThen, the local broadening function is expressed by the convolution of two Gaussians as\n\\begin{eqnarray}\n\\Theta_{2}(v, \\theta) \\propto \n \\frac{ \\exp[-v^{2}\/(\\eta_{\\rm R}\\cos\\theta)^2] }{ \\pi^{1\/2}\\eta_{\\rm R}\\cos\\theta } \n\\otimes \\frac{ \\exp[-v^{2}\/(\\eta_{\\rm T}\\sin\\theta)^2] }{ \\pi^{1\/2}\\eta_{\\rm T}\\sin\\theta } \\nonumber \\\\\n\\propto \\exp\\Bigl[-\\frac{v^{2}}{(\\eta_{\\rm R}\\cos\\theta)^2+(\\eta_{\\rm T}\\sin\\theta)^2}\\Bigr].\n\\end{eqnarray}\nActually, this is the traditional turbulence model which was used by solar physicists \nin 1960s--1970s to derive the radial and tangential components of non-thermal velocity \ndispersions (cf. section~1).\nIn the special case of $\\eta_{\\rm R} = \\eta_{\\rm T} (= \\eta)$, equation~(4)\nreduces to the simple isotropic Gaussian function\n\\begin{equation}\n\\Theta_{2}(v) \\propto \\exp[-(v\/\\eta)^2].\n\\end{equation}\nBy integration of $\\Theta_{2}(v)$ over the disk, the integrated macrobroadening function \n$M_{2}(v; \\eta, v_{\\rm e}\\sin i)$ can be obtained.\nFigure~1c display the profiles of $M_{2}(v; \\eta, v_{\\rm e}\\sin i)$ for \ndifferent $v_{\\rm e}\\sin i\/\\eta$ ratios, which were numerically computed in the same manner \nas in $M_{1}$ (though $M_{2}$ in this case of angle-independent $\\Theta_{2}$ can be expressed \nby a simple convolution of rotational broadening function and Gaussian function).\n\n\\section{Profile fitting for parameter determination}\n\nRegarding the profile-fitting analysis (to be described in the following two sections), \nalmost the same procedure as in Paper~I was adopted,\nexcept that (i) specific intensity ($I$) emergent with angle $\\theta$ \ncorresponding to each observed point is relevant here (instead of angle-integrated \nflux) and (ii) Gaussian line-broadening function parameterized by $V_{\\rm los}$ \n(velocity dispersion along the line of sight) was used for the kernel function:\n\\begin{equation}\nK(v) \\propto \\exp[-(v\/V_{\\rm los})^2].\n\\end{equation} \nThat is, the intensity profile $I(v,\\theta)$ emergent to direction angle \n$\\theta$ is expressed as\n\\begin{equation}\nI(v,\\theta) = I^{0}(v,\\theta) \\otimes K(v) \\otimes P(v),\n\\end{equation}\nwhere $P(v)$ is the instrumental profile. 
\n$I^{0}(v,\\theta)$ is the intrinsic profile of outgoing specific intensity \nat the surface, which is written by the formal solution of radiative transfer as \n\\begin{equation}\nI^{0}(\\lambda; \\theta) = \\int_{0}^{\\infty} S_{\\lambda}(t_{\\lambda}) \n\\exp(-t_{\\lambda}\/\\cos\\theta)d(t_{\\lambda}\/\\cos\\theta),\n\\end{equation}\nwhere $S_{\\lambda}$ is the source function and $t_{\\lambda}$ ts the optical depth\nin the vertical direction. \nRegarding the calculation of $I^{0}$, we adopted Kurucz's (1993) ATLAS9 solar \nphotospheric model with a microturbulent velocity of $\\xi$ = 0.5~km~s$^{-1}$ \n(see appendix~1 regarding the effect of changing this parameter) while assuming LTE.\n\nFollowing Paper~I, we adopted the algorithm described in Takeda (1995a) \nto search for the best-fit theoretical profile, where the following three \nparameters were varied for this purpose: $\\log\\epsilon$ (elemental abundance), \n$V_{\\rm los}$ (line-of-sight velocity dispersion), and $\\Delta\\lambda_{\\rm r}$ \n(wavelength shift) [which is equivalent to the radial velocity $v_{\\rm r}$\n($\\equiv c \\Delta\\lambda_{\\rm r}\/\\lambda$; c: velocity of light)]. \n \nAfter the solutions of these parameters have been converged, we computed \nthe mean depth of line formation ($\\langle \\log \\tau \\rangle$) defined as follows:\n\\footnote{\nNote that equation~(9) (see also footnote~1 for the flux case) \nis one of the various possibilities of defining line-forming depth.\nActually, photons of a given wavelength in a profile naturally \nemerge (not from a certain depth but) from a wide region \naccording to the contribution function, for which several\ndefinitions are proposed (see, e.g., Magain 1986).\nBesides, there is no appointed procedure regarding how to \nrepresent the mean-formation depth of a line ``as a whole''\nfrom the different forming depths for each of the wavelength points\nwithin a profile. See, e.g., Gurtovenko and Sheminova (1997) \nfor a review on the formation region of spectral lines. \nAt any rate, the difference between our adopted definition and \nother ones is not so significant in the practical sense (cf. \nsubsection~2.3 of Takeda 1992).\n}\n\\begin{equation}\n\\langle \\log \\tau \\rangle \\equiv \\frac{\\int R^{0}_{\\lambda}\\log \n \\tau_{5000}(\\tau_{\\lambda} = \\cos\\theta) d\\lambda} {\\int R^{0}_{\\lambda} d\\lambda}\n\\end{equation}\nwhere $\\tau_{5000}$ is the continuum optical depth at 5000~$\\rm\\AA$,\n$R^{0}_{\\lambda}$ is the line depth of theoretical intrinsic profile \n(corresponding to the resulting solution of $\\log\\epsilon$) with respect to \nthe continuum level ($R^{0}_{\\lambda} \\equiv 1 - I^{0}_{\\lambda}\/I^{0}_{\\rm cont}$),\nand integration is done over the line profile.\nBesides, the local equivalent width ($w_{\\lambda}^{i}$) could be evaluated\nas a by-product by integrating $R^{0}_{\\lambda}$ over the wavelength.\n\n\\section{Analysis of low spatial-resolution spectra}\n\n\\subsection{Expected angle-dependence of $V_{\\rm los}$}\n\nWe are now ready to test the validity of RTM based on actual spectra\nat various points of the solar disk. As RTM is a two-component model,\nwhich is meaningful only in the combination of radial- and tangential-flow\nparts, the observed spectra to be compared must be locally averaged\nover a sufficient number of granular cells. \n\nSince $\\Theta_{1}(v, \\theta)$ has an extraordinary form (cf. 
subsection~2.2) \nand can not be directly incorporated in the analysis scheme described \nin section~3, we proceed with the following strategy:\\\\\n--- Let us first assume that the RTM model exactly holds, which is characterized \nby two parameters ($\\zeta_{\\rm R}$ and $\\zeta_{\\rm T}$; \nwhile $A_{\\rm R} = A_{\\rm T}$ is assumed).\\\\\n--- Then, the emergent intensity profile can be simulated by convolving\nthe intrinsic profile $R_{0}(v, \\theta)$ with the RTM broadening function\n$\\Theta_{1}(v, \\theta; \\zeta_{\\rm R}, \\zeta_{\\rm T})$ as\n$R_{0} \\otimes \\Theta_{1}$.\\\\\n--- Let us consider here how much $V_{\\rm los}$ value would be obtained\nif this RTM-broadened profile is analyzed by the Gaussian-based procedure described \nin section~3. This can be reasonably done by equating the HWHM (half-width at half-maximum) \nof $R_{0}(v, \\theta) \\otimes \\Theta_{1}(v, \\theta; \\zeta_{\\rm R}, \\zeta_{\\rm T})$\nwith that of $R_{0}(v, \\theta) \\otimes \\exp[-(v\/V_{\\rm los})^2]$, by which we can express\n$V_{\\rm los}$ as a function of $\\theta$ for any combination of ($\\zeta_{\\rm R}$, \n$\\zeta_{\\rm T}$).\\\\ \n--- Regarding the intrinsic profile $R_{0}$, we adopted a Gaussian profile \nwith $e$-folding half-width of 1.5~km~s$^{-1}$ (typical value for the thermal \nmotion of Fe atom plus microturbulence in the solar photosphere). \n\nAs examples, we display in figure~3 the profiles of $\\Theta_{1}$ and \n$R_{0} \\otimes \\Theta_{1}$ computed for two cases ($\\zeta_{\\rm R} = \\zeta_{\\rm T}$ \n= 2 and 4~km~s$^{-1}$) at various angles ($\\theta$).\nThe resulting $V_{\\rm los}$ vs. $\\theta$ relations for various combinations\nof $(\\zeta_{\\rm R}, \\zeta_{\\rm T})$ are depicted (in solid lines) in figure~4, where \nthe curves for the (anisotropic) Gaussian macroturbulence (GM) case are also shown \n(in dashed lines) for comparison\n[$V_{\\rm los}^{2} = (\\eta_{\\rm R} \\cos\\theta)^{2} + (\\eta_{\\rm T} \\sin\\theta)^{2}$\nholds in this GM case according to equation~(4)].\n\nSeveral notable points are summarized below regarding the trends read from figure~4:\\\\\n--- $V_{\\rm los}$(GM) is almost the same order as $\\sim \\eta$, and thus\nits dependence upon $\\theta$ is nearly flat or monotonic.\\\\ \n--- On the contrary, $V_{\\rm los}$(RTM) is significantly smaller \nthan $\\zeta$ [and $V_{\\rm los}$(GM)] especially near to the disk center and near to the limb, \nresulting in a peak of $V_{\\rm los}$(RTM) around $\\theta \\sim 45^{\\circ}$. 
\\\\\n--- This is due to the very characteristics of $\\Theta_{1}$; i.e., its width \ntends to be considerably narrow without reflecting the turbulence dispersion \n(especially around $\\theta \\sim 0^{\\circ}$ and $\\theta \\sim 90^{\\circ}$) as clearly \nseen in figures~3a and 3a'.\\\\\n--- Besides, why the inequality relation $V_{\\rm los} < \\zeta$ holds can also be \nunderstood from figure~2 (for the $\\theta \\sim 0^{\\circ}$ case).\\\\ \n--- Another important point is that this anomaly becomes particularly manifest \nwhen $\\zeta$ outweighs $v_{\\rm th}$ (1.5~km~s$^{-1}$), while less pronounced \nif $\\zeta$ is comparable or smaller than $v_{\\rm th}$.\\\\\nWe will make use of these characteristics to check the RTM model in subsection~4.4.\n\n\\subsection{Observational data of Hida\/DST}\n\nThe ground-based observations were carried out on 2015 November 3--5\n(JST) by using the 60~cm Domeless Solar Telescope (DST) with the\nHorizontal Spectrograph at Hida Observatory of Kyoto University\n(Nakai \\& Hattori 1985).\nThe aspect angles of the solar rotation axis ($P$: position angle \nbetween the geographic north pole and the solar rotational north pole; \n$B_{0}$: heliographic latitude of the central point of the solar disk) \nin this period were $P = +24^{\\circ}$ and $B_{0} = +4.^{\\circ}1$.\nRegarding the target positions on the solar disk, we selected 32 \npoints on the northern meridian line of the solar disk \n(from the disk center to 0.97~$R_{0}$ with a step of \n30$''\\simeq$~0.03~$R_{0}$, where $R_{0}$ is the apparent radius \nof the solar disk) as depicted in figure~5a, at which the slit \nwas aligned in the E--W direction.\nSince the disk center and the nearest-limb point correspond to \n$\\cos\\theta =1$ and $\\cos\\theta = 0.24$ $(\\equiv \\sqrt{1-0.97^{2}})$ \nin this arrangement ($\\theta$ is the emergent angle of the ray \nmeasured with respect to the normal to the surface), the angle\nrange of $0^{\\circ}\\le \\theta \\ltsim 76^{\\circ}$ is covered by our data.\n\nIn the adopted setting of the spectrograph, \nour observation produced a solar spectrum covering of 153$''$ (spatial) \nand 24~$\\rm\\AA$ (wavelength) on the CCD detector with $1\\times2$ binning \n(1600 pixels in the dispersion direction and 600 pixels in the spatial direction). \nWe repeated the whole set (consecutive observations on 32 points\nalong the center-to-limb meridian) 30 times while changing the central \nwavelength, and finally obtained the spectra in the wavelength \nregions of 5190--5450~$\\rm\\AA$, 5650--5690~$\\rm\\AA$, 5830--5870~$\\rm\\AA$, \nand 6050--6310~$\\rm\\AA$ (about $\\sim 600$~$\\rm\\AA$ in total).\nAlthough most of our observations were done in quiet regions on\nthe solar disk, active regions may have affected in some of our data, \nsince a notable spot passed through the meridian on November 4.\n \nThe data reduction was done by following the standard procedures\n(dark subtraction, spectrum extraction, wavelength calibration,\\footnote{\nSince the wavelength vs. pixel relation was derived (not legitimately \nby using the comparison spectra but) based on $\\sim$~20--50 solar lines in \nthe disk-center spectrum for each region, any absolute wavelength calibration \nis not accomplished in our data.} and continuum normalization). 
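As a quick check of the angular coverage quoted above (a one-line calculation using only the spherical relation $\cos\theta = \sqrt{1-(r/R_{0})^{2}}$, with the outermost slit position taken from the text):
\begin{verbatim}
import numpy as np

r_over_R0 = 0.97                        # outermost slit position
mu = np.sqrt(1.0 - r_over_R0 ** 2)      # mu = cos(theta)
print("mu = %.2f, theta = %.0f deg" % (mu, np.degrees(np.arccos(mu))))
# -> mu ~ 0.24 and theta ~ 76 deg, as quoted for the outermost point
\end{verbatim}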
\nThe 1D spectrum was extracted by \nintegrating over 200 pixels ($= 51''$; i.e., $\\pm 100$ pixels \ncentered on the target point) along the spatial direction.\nGiven that typical granule size is on the order of $\\sim$~1$''$,\nour spectrum corresponds to the spatial mean of each region including \nseveral tens of granular cells, by which the condition necessary for testing \nthe RTM model is reasonably satisfied (cf. 1st paragraph of subsection~4.1). \nFinally, the effect of scattered light was corrected by following \nthe procedure described in subsection~2.3 of Takeda and UeNo (2014), \nwhere the adopted value of $\\alpha$ (scattered-light fraction) was \n0.10 ($\\lambda < 5500$~$\\rm\\AA$) and 0.15 (($\\lambda > 5500$~$\\rm\\AA$) \naccording to our estimation.\nGiven that the main scope of this study is to measure the ``widths''\nof spectral lines, this scattered-light correction does not \nhave any essential influence, since it is a simple multiplication\nof a factor to the line-depth profile (i.e., its similarity is\nunaffected) as shown in equations (1) and (2) of Takeda and UeNo (2014).\n\nThe S\/N ratio of the resulting spectrum (directly measured from \nstatistical fluctuation in the narrow range of line-free continuum) \nturned out to be sufficiently high (typically $\\sim$~500--1000).\nThe e-folding half width of the instrumental profile (assumed to be \nGaussian as $\\propto \\exp (v\/v_{\\rm ip})^{2}$ in this study) \nwas determined to be $v_{\\rm ip} \\simeq 1.3$~km~s$^{-1}$\nby using the lamp + I$_{2}$ gas cell spectrum (cf. section~2 in \nTakeda \\& Ueno 2012 for details), which corresponds to \nFWHM ($=2\\sqrt{\\ln 2} \\; v_{\\rm ip}$) $\\simeq 2.2$~km~s$^{-1}$ and \nthe spectrum resolving power of $\\sim 140000 (\\simeq c\/{\\rm FWHM})$ \n($c$: velocity of light).\nNote that this $v_{\\rm ip}$ of $\\sim$~1.3~km~s$^{-1}$ is nearly \nthe same order of the combined thermal+microturbulent velocity (e.g., \n$\\sim 1.5$~km~s$^{-1}$ for the case of Fe atoms) and comparatively\nsmaller than the typical non-thermal velocity dispersion\n($\\sim$~2--3~km~s$^{-1}$; cf. appendix~2).\n\n\\subsection{Spectrum fitting and parameter determination}\n\nBased on the same line list as used in Paper~I [originally \ntaken from Meylan et al. (1993)], we selected a total of 86 lines \ncorresponding to the wavelength ranges available in our data.\nThe basic data of these 86 lines (mostly Fe lines) are given in table~1, \nwhere we can see that lines of diversified strengths (equivalent widths \nfrom a few m$\\rm\\AA$ to $\\sim 200$~m$\\rm\\AA$) are included. \nFor convenience, we divided these 86 lines into 3 classes according\nto the flux equivalent width ($W_{\\lambda}^{f}$), as given in the table\n(cf. the caption of the table).\n\nWe then applied the fitting technique described in section~3 to \nthese lines. Quite a satisfactory convergence was successfully attained for \nmost cases ($\\sim 95$\\%) of the total 2752 (= 32 points $\\times$ 86 lines) \ntrials, though solutions sometimes failed to converge or settled at\na poor fitting. 
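The instrumental numbers quoted in the previous subsection can be verified in one line (a sketch; $c$ is the speed of light in km~s$^{-1}$ and $v_{\rm ip}$ is the value determined from the lamp + I$_{2}$ spectrum):
\begin{verbatim}
import numpy as np

c    = 2.99792458e5                  # speed of light [km/s]
v_ip = 1.3                           # e-folding half width of the IP [km/s]

fwhm = 2.0 * np.sqrt(np.log(2.0)) * v_ip
print("FWHM = %.1f km/s, resolving power R ~ %.0f" % (fwhm, c / fwhm))
# -> FWHM ~ 2.2 km/s and R ~ 1.4e5, as quoted for the Hida/DST setup
\end{verbatim}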
\n \nAs a demonstration, in figure~6 are compared the best-fit theoretical \nprofiles with the observed profiles for three representative Fe~{\\sc i} \nlines (weak, medium strength, strong) at three points (disk center,\nintermediate angle of $\\theta = 45^{\\circ}$, near to the limb) on the disk.\nBesides, the resulting values of $V_{\\rm los}$, $\\langle \\log \\tau \\rangle$,\nand $w_{\\lambda}^{i}$ for these three lines at each point on the disk are \nplotted against $\\theta$ in figure~7, where we can see that all these three\nparameters tend to increase with an increase of $\\theta$ and that stronger \nlines form at shallower layers.\n\nWe note in figure~7a that $V_{\\rm los}$ shows appreciable fluctuations.\nActually, the solutions (especially of $V_{\\rm los}$) appear to be rather sensitive to \nthe local conditions as shown in figure~8, where the statistical distributions \nof $V_{\\rm los}$ and $v_{\\rm r}$ derived from a test analysis applied to 200 \ndisk-center spectra corresponding to each pixel (before spatially averaging) \nare shown. \n\n\\subsection{Implication from the $V_{\\rm los}$ vs. $\\theta$ relation}\n\nThe resulting $V_{\\rm los}$ solutions are plotted against $\\theta$ in figure~9a (class-1 lines), \nfigure~9b (class-2 lines), figure~9c (class-3 lines), and figure~9d (all lines).\nAs seen from the curve of growth depicted in figure~9e, each class\ncorresponds to weak unsaturated (class-1), moderately saturated (class-2),\nand strongly saturated lines (class-3), respectively.\nAmong these three classes, the results derived from class-3 lines \n($W_{\\lambda}^{f} \\ge 100$~m$\\rm\\AA$) had better be viewed with some caution, \nbecause of a different sensitivity to a choice of microturbulence (cf. appendix~1).\nWe can see from figures~9a--c that the center-to-limb variation of $V_{\\rm los}$ is \ncharacterized by a monotonic\/gradual increase from $\\sim 2$~km~s$^{-1}$ ($\\theta \\sim 0^{\\circ}$)\nto $\\sim$~2.5--3~km~s$^{-1}$ ($\\theta \\sim 80^{\\circ}$). The trend that $V_{\\rm los}$\ntends to slightly decrease as the line becomes stronger may be attributed to its\ndepth-dependence (i.e., increasing with depth; cf. appendix~2),\nsince the line-forming depth becomes progressively shallower with the line strength \n(cf. figure~7b). \n\nComparing the observed tendency of $V_{\\rm los}$ (figures~9a--d) with the predicted \ntrends for both RTM and GM (figure~4), we can draw a clear conclusion:\nNone of the $V_{\\rm los}$(RTM) vs. $\\theta$ relations matches the observed\ncenter-to-limb variation (gradual increase), since the characteristic peak\nat $\\theta \\sim 45^{\\circ}$ expected for the RTM case is lacking. \nLet us recall that a significantly large $\\zeta_{\\rm RT}$ value of $\\sim$~3--4~km~s$^{-1}$\nwas derived from the analysis of solar flux spectrum (cf. section~1), which is evidently\nlarger than $v_{\\rm th}$. Then, a prominent peak should be observed if the condition\nassumed by RTM is really realized in the solar surface. Given the absence of such a\nkey trend, we can state that RTM is not a valid model for the solar atmospheric\nvelocity field. In contrast, the $V_{\\rm los}$(GM) vs. 
$\\theta$ curve predicted\nfor $(\\eta_{\\rm R}, \\eta_{\\rm T})$ = (2~km~s$^{-1}$, 3~km~s$^{-1}$) satisfactorily\nreproduces the observed relation, which may indicate that the classical GM is \na more reasonable and better representation in this respect.\n\n\n\\section{Analysis of high spatial-resolution spectra}\n\n\\subsection{Merit of studying well-resolved surface structures}\n\nIn order to ascertain the consequence of subsection~4.4 from an alternative \npoint of view, we further carried out a similar analysis but using spectra \nof high spatial resolution acquired by satellite observations. \nUnlike the case of low-resolution spatially-averaged spectra \nstudied in section~4, we can not employ these highly-resolved data \nfor direct comparison with predictions from the RTM model, because\neach spectrum reflects the gas motion of a local part \nsmaller than the typical size of granules, to which the concept \nof RTM (meaningful only for spectra averaged over granular cells; cf. figure~2) \nis no more applicable. Instead, however, we can make use of such observational \ndata of high spatial resolution to verify the fundamental assumption\non which the RTM model is based, since the velocity distribution (amplitude, \ndirection) within a cell can be directly studied; e.g., whether the vectors of\nturbulent motions are really coordinated in two orthogonal directions as \nassumed in RTM (cf.figure~2). This would make a decisive touchstone.\n\n\\subsection{Observational data of Hinode\/SOT}\n\nRegarding the spectra used for this purpose, we adopted the data obtained \nby the Solar Optical Telescope (SOT; Tsuneta et al. 2008) aboard the {\\it Hinode}\\footnote{\n{\\it Hinode} is a Japanese mission developed and launched by ISAS\/JAXA, with NAOJ \nas domestic partner and NASA and STFC (UK) as international partners. \nIt is operated by these agencies in co-operation with ESA and NSC (Norway).\n} satellite (Kosugi et al. 2007). Since the Spectro-Polarimeter (SP; Lites et al. 2013)\nin {\\it Hinode}\/SOT provides full calibrated Stokes $IQUV$ spectra of \n6301--6303~$\\rm\\AA$ region (comprising two Fe~{\\sc i} lines at 6301.498 and \n6302.494~$\\rm\\AA$), we could use unpolarized $I$ spectra for our purpose. \nwhich are available as Level-1 data from the Hinode Data Center\\footnote{\n$\\langle$http:\/\/darts.isas.jaxa.jp\/solar\/hinode\/$\\rangle$.\n} or from the SolarSoft site\\footnote{\n$\\langle$http:\/\/sot.lmsal.com\/data\/sot\/level1d\/$\\rangle$}.\n\nHaving inspected the archived data, we decided to use the spectra obtained by \nnormal-map mode observations of three quiet regions along the southern meridian \non 2008 December 17 (the start time of each mapping was 05:43:35, 09:34:05, and \n10:34:05 in UT, respectively). These SP mapping observations were done by moving \nthe (N--S aligned) slit of $0.''16$ width in E--W direction by $\\sim 0.''1$, \nand field-of-view of in the slit direction is $129''$ (corresponds to 408 pixels \non the detector), resulting in sampling steps of $\\sim 0.''1$ ($x$ or E--W direction) \nand $\\sim 0.''3$ ($y$ or N--S direction).\nAlthough the total region covered by each mapping was $30''\\times 129''$,\nwe used only the spectra within three $20'' \\times 20''$ square regions centered\nat ($0''$, $0''$), ($0''$, $-700''$), and ($0''$, $-975''$) corresponding to\nthe disk center ($\\theta \\simeq 0^{\\circ}$), the half-right-angle view point \n($\\theta \\simeq 45^{\\circ}$), and the limb ($\\theta \\simeq 80^{\\circ}$),\nrespectively. 
Figure~5b indicates the locations of these three regions,\nfor which the numbers of the resulting spectra were 8191, 8266, and 8253, respectively. \n\n\\subsection{Statistical properties of $V_{\\rm los}$ and $v_{\\rm r}$}\n\nAs done in subsection~4.3, we applied the spectrum-fitting method \n(cf. section~3) to these spectra and successfully established\nthe solutions of $V_{\\rm los}$ and $v_{\\rm r}$, where only \nFe~{\\sc i}~6302.494 line (the slightly weaker one of the \navailable two lines; also included in table~1) was used\nfor this analysis, and the instrumental profile was assumed\nto be the Gaussian function with the $e$-folding half-width \nof 0.71~km~s$^{-1}$ corresponding to the spectrum resolving power \nof $R \\simeq 6302.5\/0.025 \\simeq 252000$ (cf. figure~7 of \nLites et al. 2013).\nThe typical signal-to-noise ratio of these SOT spectra is \naround $\\sim 100$.\nSome selected examples of fitted theoretical and \nobserved spectra are displayed in figure~10.\nThe histograms for the resulting $V_{\\rm los}$ and $v_{\\rm r}$, \n$V_{\\rm los}$ vs. $v_{\\rm r}$ correlation, and the continuum brightness \nvs. $v_{\\rm r}$ relation, are graphically shown figure~11.\n\nAlthough several notable trends are observed in figure~11 (e.g., \nblue-shift tendency of brighter points at the disk-center which should \nbe due to rising hot bubbles), we here confine ourselves only to \nthe main purpose of this study; i.e., checking the validity of RTM. \n\nLet us focus on the disk-center results ($\\theta \\sim 0^{\\circ}$) \nshown in the top row of figure~12. If the condition assumed in RTM\nis really existent in the solar surface (cf. figure~2), observations \nof disk center (i.e., line-of-sight normal to the surface) with \nhigh spatial resolution should reveal almost comparable numbers \nof cases with turblent-broadened profiles (R) and those with \nunbroadened sharp profiles (T). Then, the following trends are expected:\\\\\n--- The distribution function of $V_{\\rm los}$ would show an extraordinary \nfeature (e.g., an appreciable hump at low $V_{\\rm los}$).\\\\\n--- Since cells of horizontal flow (T) show no radial velocity,\nan unusually prominent peak would exist in the distribution of \n$v_{\\rm r}$ at $v_{\\rm r} \\sim 0$.\\\\\n--- As a consequence, a considerable bias at (small $V_{\\rm los}$, \n$v_{\\rm r} \\sim 0$) would be observed in the $V_{\\rm los}$ vs. \n$v_{\\rm r}$ plot.\n\nHowever, none of these features is observed in the disk-center results \nin figure~11, where we can confirm that $V_{\\rm los}$ as well as $\nv_{\\rm r}$ follow a statistically near-normal distribution without \nany such expected bias as mentioned above. \nThese observational facts suggest that the velocity vectors of solar \nphotospheric turbulence are not confined to only two (radial and \ntangential) directions but more chaotic with rather random orientations. 
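To make the expected signature quantitative, the toy comparison below (purely illustrative; the sample size, velocity scale, and measurement scatter are assumed values, not quantities fitted to the SOT data) contrasts the disk-center $v_{\rm r}$ statistics of a strict two-component RTM surface with those of randomly oriented Gaussian motions:
\begin{verbatim}
import numpy as np

rng   = np.random.default_rng(1)
n     = 8000       # roughly the number of analysed spectra per region
sig   = 2.0        # assumed turbulent velocity scale [km/s]
noise = 0.2        # assumed measurement scatter [km/s]

# strict RTM at disk center: half "radial" cells (Gaussian line-of-sight
# velocities), half "tangential" cells (v_r ~ 0 apart from noise)
vr_rtm = np.concatenate([rng.normal(0.0, sig,   n // 2),
                         rng.normal(0.0, noise, n - n // 2)])

# randomly oriented motions: a single Gaussian line-of-sight component
vr_iso = rng.normal(0.0, sig, n) + rng.normal(0.0, noise, n)

for name, vr in (("RTM-like ", vr_rtm), ("isotropic", vr_iso)):
    print("%s  std = %.2f km/s,  fraction with |v_r| < 0.5 km/s = %.2f"
          % (name, vr.std(), np.mean(np.abs(vr) < 0.5)))
# the RTM-like sample shows a pronounced spike at v_r ~ 0 (more than half
# of the points), which is absent from the observed near-normal histograms
\end{verbatim}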
\nAccordingly, we have reached a decision that the basic assumption of RTM does not\nrepresent the actual solar photosphere, which means that the RTM model \ndoes not correctly describe the spectral line-broadening of solar-type stars.\n\n\\section{Concluding remark}\n\nWe carried out an extensive spectroscopic investigation on the non-thermal \nvelocity dispersion along the line-of-sight by analyzing spectral lines \nat various points of the solar disk, in order to check \nwhether the RTM model (which has been widely used for line-profile \nstudies of solar-type stars) adequately represents the actual solar \nphotospheric velocity field.\nApplying the profile-fitting analysis to two sets of observational data: \n(spatially-averaged spectra from Hida\/DST observations and very high \nspatial-resolution spectra from {\\it Hinode}\/SOT observations), \nwe found the following results. \n\nFirst, the center-to-limb variation of $V_{\\rm los}$ derived from \nlow-resolution spectra turned out simply monotonic with a slightly increasing \ntendency. This apparently contradicts the characteristic trend (an appreciable \npeak at $\\theta \\sim 45^{\\circ}$) expected from the RTM model. \nSecond, the distributions of $V_{\\rm los}$ and $v_{\\rm r}$ values derived from \nspectra of very high spatial resolution revealed to show a nearly normal \ndistribution, without any sign of anomalous distribution predicted from\nthe RTM model.\n\nThese observational facts suggest that the fundamental assumption of RTM \nis not compatible with the real atmospheric velocity field of the Sun, \nwhich can not be so simple (i.e, being confined only to radial and tangential \ndirections) but should be directionally more chaotic. We thus conclude that \nRTM is not an adequate model at least for solar-type stars.\n\nIt is evident that RTM significantly overestimates the turbulent velocity \ndispersion in the solar photosphere, which should actually be $\\sim 2$~km~s$^{-1}$ \n(disk center) and $\\sim 2.5$~km~s$^{-1}$ (limb) as evidenced from the mean (or peak)\nvalue of $V_{\\rm los}$ derived from high-resolution data indicated in figure~11\n(leftmost panels). Therefore, the fact that RTM yields $\\zeta_{\\rm RT} \\sim$~3--4~km~s$^{-1}$ \nfor the solar macroturbulence (cf. section~1) simply means that the width \nof RTM broadening function ($M_{1}$) is unreasonably too narrow. \nWe therefore stress that, when using RTM for \nanalyzing line profiles of solar-type stars, $\\zeta_{\\rm RT}$ should be \nregarded as nothing but a fudge parameter without any physical meaning.\nIf it were carelessly associated with discussion of physical processes\n(e.g., in estimation of the turbulent energy budget or in comparison \nwith the sonic velocity), erroneous results would come out.\n\nOn the other hand, the classical Gaussian macroturbulence model should be \nmore reasonable and useful in this respect. Actually,\nour application of GM to the analysis of solar flux spectrum resulted in \n$\\eta \\sim 2$~km~s$^{-1}$ (cf. figure~1b). Likewise, the GM-based conversion formula\n[equation~(A1)] lead to $V^{\\rm rad} \\sim 2$~km~s$^{-1}$ (at $\\log\\tau \\sim -1.5$) \nand $V^{\\rm tan} \\sim 2.5$~km~s$^{-1}$ (at $\\log\\tau \\sim -2$) as the non-thermal \ndispersion in radial and tangential direction (cf. 
figure~13 in appendix~2), which \nare in fairly good agreement with the directly evaluated results based on \nhigh-resolution observations mentioned above (note that \nthe mean formation depth for Fe~{\\sc i} 6302.494 line is\n$\\log\\tau \\sim -1.5$ for the disk center and $\\log\\tau \\sim -2$ for the limb;\ncf. figure~7b). Accordingly, application of the simple GM would be \nmore recommended, rather than the inadequate and complex RTM. \n(See appendix~3, where the trend of macroturbulence in FGK-type dwarfs \nis discussed in view of applying the GM model.) \n\nFinally, some comments may be due on the future prospect \nin this field. Regarding the modeling of turbulent velocity field \nin the atmosphere of the Sun or solar-type stars, we have to \nmention the recent remarkable progress in the simulations of \n3D time-dependent surface convection (see, e.g., Nordlund, \nStein, \\& Asplund 2009, and the references therein), which \nsuccessfully reproduce the observed characteristics of spectral \nlines (e.g., Asplund et al. 2000; Pereira et al. 2013, for the \nsolar case) without any ad-hoc turbulent-velocity parameters (such as \nmicro- and macro-turbulence in the classical case) and thus by far \nsuperior to the traditional modeling. However, given the enormous \ncomputational burden of calculating such elaborate 3D models, \nthe simple micro\/macro-turbulence model is expected to remain \nstill in wide use for practical analysis of stellar spectra. \nTherefore, it would be very helpful if the behaviors of classical\nmicroturbulence as well as macroturbulence can be predicted or\nunderstood based on the realistic 3D simulations.\nFor example, while main emphasis is placed on shift and asymmentry \n(bisector) of spectral lines in demonstrating the predictions\nof 3D models in comparison with observations, less attention seems \nto be paid to the ``width'' of spectral lines. Can the trend of \napparent turbulent dispersion derived in this study (i.e., \ntangential component being slightly larger than the radial component, \nan increasing tendency with depth) be reproduced by such \nstate-of-the-art 3D hydrodynamical models? 
Further contributions \nof theoreticians in this light would be awaited.\n\n\\bigskip\n\nThis work was partly carried out on the Solar Data Analysis System \noperated by the Astronomy Data Center in cooperation with the Hinode \nScience Center of the National Astronomical Observatory of Japan.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and summary}\n\n\nThis is a note on numerical computation of\nthe decay rate of a false vacuum within a potential\nof multiple scalar fields.\nThe aim is to describe a new strategy to obtain the bounce configuration,\nthat is based on an earlier idea by Konstandin and Huber\nsplitting the task into two stages \\cite{Konstandin:2006nd}.\nThe first is to search for an intermediate solution\nin one-dimensional spacetime using an improved potential.\nIn the second, the field profile is stretched smoothly\nto that in four dimensions.\n\nOne advantage of the original approach is that\nthe outcome, by construction, must be either\na valid solution of the Euclidean equation of motion or\nsomething clearly wrong.\nFor instance, this property is not shared by\nthe improved action method \\cite{Kusenko:1995jv}.\nThe authors of Ref.~\\cite{Kusenko:1996jn} use this method\nto find a fit and then check its quality\nby taking the kinetic to potential ratio within the action.\nThis ratio must be $-2$ for a field configuration\nto leave the action stationary under an infinitesimal scale transformation\n\\cite{Derrick:1964ww,Coleman:1977th}.\nBeing a necessary condition, however,\nthis is not more than a consistency check and\none cannot be sure that the numerical data\nis indeed a bounce until it is put into the equation.\n\nIts virtues notwithstanding, the original approach has drawbacks\ndue to the use of an improved potential\nthat render it suboptimal in some circumstances.\nParticularly problematic is a case in which\nthe false and the true vacua are separated far apart.\nThis can arise for instance\nin the Minimal Supersymmetric Standard Model (MSSM)\nwith a large trilinear term unless it involves a top squark\n\\cite{Casas:1995pd,Casas:1996de}.\nOne can put an upper limit on such a term by demanding that\nthe lifetime of the standard vacuum be longer than the age of the universe\n\\cite{Endo:2010ya,Hisano:2010re}.\nThe same criterion has been employed in a study of\nmaximal flavour violation by a soft trilinear coupling \\cite{FVA}.\nFor this last work,\na modification has been made to the original approach,\nwhich is to be the subject of this article.\n\n\n\n\n\n\\FIGURE{\n \\incgraph{improvedpot2}\n \\caption{An example potential $V$\n and the corresponding improved potentials $U_\\epsilon$\n and $\\widetilde{U}_\\epsilon$, displayed upside down.\n The false and the true vacua are designated by\n $\\phi_+$ and $\\phi_-$, respectively.\n Between them, there are $\\phi_e$ and $\\phi_b$.\n The former denotes any point with $V(\\phi_e) = V(\\phi_+)$ and\n the latter such a point on the bounce.}\n \\label{fig:nondeg}}\nConsider the potential depicted in Fig.~\\ref{fig:nondeg},\nwhere it has been inverted to make it easy to imagine a classical motion.\nIt is a function of scalar fields collectively denoted by $\\phi$.\nThe problem is to estimate the rate of transition from\nthe local minimum $\\phi_+$ to the state $\\phi_-$ with the lowest energy.\nNote that the notations $\\phi_\\pm$ in this article follow\nRef.~\\cite{Coleman:1977py}\nand are reversed with respect to Ref.~\\cite{Konstandin:2006nd}.\n\nUsing a semiclassical 
approximation \\cite{Coleman:1977py,Callan:1977pt},\none finds that the decay rate of a false vacuum per unit volume is\n\\begin{equation}\n \\label{eq:probability}\n \\Gamma\/V = A\\,\\exp(-S_E[\\overline{\\phi}]) ,\n\\end{equation}\nwhere the determinantal factor $A$\ncomes from the Gaussian functional integral\naround the stationary point $\\overline{\\phi}$, called the bounce,\nof the Euclidean action $S_E$.\nThe formula for $A$ can be found in Ref.~\\cite{Callan:1977pt}.\nIt is very difficult to compute in practice and so\ncan only be estimated to be around\nthe characteristic mass scale of the problem.\nNormally, evaluation of $A$ is exponentially less important\nthan that of $S_E[\\overline{\\phi}]$.\nWithout loss of generality,\nthe dominant bounce can be assumed to be $O(4)$-invariant\n\\cite{Coleman:1977th}.\nTherefore, one can let $\\phi$ be a function of the single coordinate\n$\\rho \\equiv \\sqrt{\\tau^2 + \\mathbf{x}^2}$,\nand write $S_E$ in the form,\n\\begin{equation}\n \\label{eq:SE}\n S_E[\\phi(\\rho)] = 2\\pi^2 \\int_0^\\infty d\\rho\\,\\rho^3\n \\biggl[ \\frac{1}{2}\\left(\\frac{d\\phi}{d\\rho}\\right)^2 + V(\\phi) \\biggr] .\n\\end{equation}\nThe ultimate goal is then to find the solution $\\overline{\\phi}$ of\nthe Euler-Lagrange equation,\n\\begin{equation}\n \\label{eq:diff eq}\n \\frac{d^2\\phi}{d\\rho^2} + \\frac{3}{\\rho} \\frac{d\\phi}{d\\rho} =\n \\nabla V(\\phi) ,\n\\end{equation}\nderived from the above Euclidean action,\nthat meets the boundary conditions,\n\\begin{equation}\n \\label{eq:BCs}\n \\frac{d\\phi}{d\\rho} (\\rho=0) = 0, \\quad\n \\phi(\\rho \\rightarrow \\infty) = \\phi_+ .\n\\end{equation}\n\n\n\n\n\n\n\n\nThe original approach \\cite{Konstandin:2006nd} is based on the\nidea that one can generalise the equation of motion to\nthat in $\\alpha$ spacetime dimensions,\n\\begin{equation}\n\\label{eq:diff eq alpha}\n \\frac{d^2\\phi}{d\\rho^2} +\n \\frac{\\alpha-1}{\\rho} \\frac{d\\phi}{d\\rho} =\n \\nabla V(\\phi) ,\n\\end{equation}\nwhich corresponds to the generalised action,\n\\begin{equation}\n \\label{eq:SE alpha}\n \\SEx{\\alpha}[\\phi(\\rho)] = \\int_0^\\infty d\\rho\\,\\rho^{(\\alpha - 1)}\n \\biggl[ \\frac{1}{2}\\left(\\frac{d\\phi}{d\\rho}\\right)^2 + V(\\phi) \\biggr] .\n\\end{equation}\nThe procedure can be divided into two parts:\nto solve the equation in the undamped case with $\\alpha = 1$\nand then to deform the solution to its final shape.\nThe deformation part has no problem and shall be presented later\nwhen the refined version is described.\n\nThe first part for the undamped case is summarised.\nIf $\\alpha = 1$, the damping term in~\\eqref{eq:diff eq alpha} disappears\nand so the motion conserves energy.\nTherefore, one can reproduce the initial condition in~\\eqref{eq:BCs}\nin the limit where the starting and the ending points are degenerate.\nThe degeneracy is approximated by flattening out part of the potential\nthat is lower than the false vacuum.\nThe solution is found\nby minimising the action~\\eqref{eq:SE alpha} for $\\alpha = 1$.\nMore specifically, the procedure is made up of the following steps.\n\\begin{enumerate}\n\\item\nFind the minimum of $\\SEx{1}$ with $V$ in~\\eqref{eq:SE alpha}\nreplaced by the improved potential,\n\\begin{equation}\n \\label{eq:Uepsilon}\n U_\\epsilon(\\phi) = \\frac{V(\\phi)-V(\\phi_+)}{2} +\n \\left[ \\frac{[V(\\phi)-V(\\phi_+)]^2}{4} + \\epsilon^2 \\right]^{1\/2} ,\n\\end{equation}\nsatisfying the boundary conditions,\n\\begin{equation}\n \\label{eq:BCs improved potential}\n \\phi(0) = 
\\phi_- , \\quad\n \\phi(T) = \\phi_+ .\n\\end{equation}\nThe shape of $U_\\epsilon$ is illustrated in Fig.~\\ref{fig:nondeg}\nwith $\\epsilon$ set to $[V(\\phi_-) - V(\\phi_+)]\/10$.\nOne can accelerate the convergence by making\na further modification to $U_\\epsilon$ to get\n\\begin{align}\n \\widetilde{U}_\\epsilon(\\phi) &= U_\\epsilon(\\phi) + \\Delta U_\\epsilon(\\phi) ,\n\\intertext{shown in Fig.~\\ref{fig:nondeg}, with a small perturbation,}\n \\Delta U_\\epsilon(\\phi) &=\n - 2 \\epsilon \\frac{|\\phi-\\phi_+|^3}{|\\phi_--\\phi_+|^3}\n + 3 \\epsilon \\frac{|\\phi-\\phi_+|^2}{|\\phi_--\\phi_+|^2} .\n\\label{eq:Delta Uepsilon}\n\\end{align}\nThe initial choice of the parameter $\\epsilon$ and the time $T$ may be\n\\begin{align}\n\\epsilon_\\mathrm{init} &= V(\\phi_-) - V(\\phi_+) ,\n\\\\\nT_\\mathrm{init} &= 20\\, |\\phi_--\\phi_+| \/ \\sqrt{8 V_b} ,\n\\label{eq:Tinit}\n\\end{align}\nwhere $V_b$ is the height of the potential barrier.\n\\item\nFind the point $\\phie$ between $\\phi_-$ and $\\phi_+$\nsuch that $V(\\phie) = V(\\phi_+)$\nwithin the configuration obtained in the previous step.\nTruncate the part before $\\phie$, i.e.\\ the configuration on the plateau.\n\\item\nRepeat minimisation with the new set of boundary conditions,\n\\begin{equation}\n \\label{eq:BCs float orig}\n \\phi(0) = \\phi_e , \\quad\n \\phi(T) = \\phi_+ ,\n\\end{equation}\nwhile iteratively sending $\\epsilon$ down to zero.\nInitially $\\phi_e$ is set to the point found above\nand then\nit is allowed to move freely on the submanifold with\n$V(\\phi_e) = V(\\phi_+)$.\n\\item\nThe last minimum for $\\epsilon = 0$ is the solution of\nthe undamped equation,\n\\begin{equation}\n \\label{eq:diff eq alpha=1}\n \\frac{d^2\\phi}{d\\rho^2} = \\nabla V(\\phi) ,\n\\end{equation}\nwhich is~\\eqref{eq:diff eq alpha} for $\\alpha = 1$,\nand obeys the boundary conditions,\n\\begin{equation}\n \\label{eq:BCs finite T}\n \\frac{d\\phi}{d\\rho} (0) = 0, \\quad\n \\phi(T) = \\phi_+.\n\\end{equation}\n\\end{enumerate}\n\nThere are a couple of points of concern.\nFirst, the improved potential~\\eqref{eq:Uepsilon},\nsketched in Fig.~\\ref{fig:nondeg},\nhas a plateau on which the solution of step 1 has to spend a long time.\nIn fact this is what the improved potential is designed for so that\nthe initial condition in~\\eqref{eq:BCs} can be approximated.\nAs already stated in Ref.~\\cite{Konstandin:2006nd},\nhowever, this costs generically many lattice points to store\na long part of the path that is to be discarded eventually.\nThis grows more and more problematic as $\\phi_-$ moves away from $\\phi_b$,\nand hampers the computation.\nSecond, step 3 might lead to a numerical instability.\nThe intention of this step is to find a path that\nstarts from $\\phi_e$ and then rolls down the inverted potential.\nAlthough modified to have a plateau,\nthe improved potential still has a part with\n$U_\\epsilon(\\phi) < U_\\epsilon(\\phi_e) = U_\\epsilon(\\phi_+)$\nfor a finite $\\epsilon$.\nAfter starting from $\\phi_e$,\nthe path may prefer to climb up the inverted potential\nand spend time in this region to minimise the action.\nThis can be avoided by using $\\widetilde{U}_\\epsilon$\ninstead of $U_\\epsilon$.\nWith a judicious choice of $\\Delta U_\\epsilon$,\none can make $\\phi_+$ the global minimum of $\\widetilde{U}_\\epsilon$\nso that the system tries to spend as much time as possible around\n$\\phi_+$ at the end of the path.\nNote that for this, $\\epsilon$ in~\\eqref{eq:Delta Uepsilon}\nshould be different from $\\epsilon$ 
in~\\eqref{eq:Uepsilon}\nin general.\nThe initial condition in~\\eqref{eq:BCs finite T} is violated\nby $\\widetilde{U}_\\epsilon(\\phi_e) - V(\\phi_e)$, which eventually goes away\nas $\\epsilon \\rightarrow 0$.\n\n\n\n\nThe first point makes it difficult to analyse many interesting problems.\nFor instance, vacuum transition triggered by the stau trilinear coupling\nin the MSSM is studied in Ref.~\\cite{Hisano:2010re}.\nThe charge-breaking global minimum in the scalar potential is far from\nthe local one due to the small tau Yukawa coupling.\nTo overcome this problem,\nthey first found the bounce\nof a temporary potential with two nearly degenerate minima\nusing the above steps,\nand then made a continuation to the actual potential by iteration.\nIn the course of deforming the potential,\none should be careful not to introduce a singular behaviour.\n\n\n\n\nIn what follows, a more streamlined procedure shall be presented.\nIt does not cost extra lattice points to be truncated in the end.\nIt does not need a deformation of the potential.\nIt does not rely on the location of the true vacuum.\nConsequently, it can deal with a potential that has\nwidely separated local and global minima or\nthat is even unbounded from below.\n\nThe main idea is to exploit the energy conservation in the undamped case,\nwhich was already mentioned in Ref.~\\cite{Konstandin:2006nd}.\nThis feature enables one to replace the Neumann boundary condition\nof the original problem by a constraint on the potential.\nAs pointed out above,\nfixing the potential at $\\rho = 0$ is not enough to\nhave the desired solution be a minimum since\na path can lower the action further by\nshooting up the inverted potential and\nstaying in a region with $V(\\phi) < V(\\phi_+)$.\nThe refinement here is to eliminate those paths by additional constraints.\n\nIncluding the preparation and\nthe continuation stages as well,\nthe new series of steps would be as follows.\n\\begin{enumerate}\n\\item\nFind a point $\\phie$ such that $V(\\phie) = V(\\phi_+)$\nthat is on the other side of the barrier.\nFor instance, one can walk over the barrier\nalong a valley starting from $\\phi_+$\nuntil the level comes down back to $V(\\phi_+)$.\n\\item\nConstruct an initial configuration $\\phi(\\rho)$ such that\n\\begin{equation}\n \\phi(0) = \\phie , \\quad\n \\phi(T) = \\phi_+ .\n\\end{equation}\nThe simplest choice would be a step-like profile\nin which $\\phi(\\rho)$ is constant except for one jump\nsomewhere in the middle, possibly around $\\rho = 0.1 T$.\n\nChoosing a suitable $T$ is important for efficient evaluation.\nRecall that \\eqref{eq:constraints on ends} can substitute for\n\\eqref{eq:BCs finite T} only in the limit of energy conservation.\nRealising this limit would require infinite $T$, whereas\na small $T$ saves the computation time and storage.\nHowever, it is difficult to estimate an optimal $T$\nbefore doing any minimisation.\nInitially, one may try~\\eqref{eq:Tinit} with $\\phie$ instead of\n$\\phi_-$ and $V_b$ from the previous step,\nand then $T$ can be adjusted later as explained below.\n\nOptionally to prepare a better initial profile, one could perform\na single-field minimisation of $\\SEx{1}$ using as the field\nthe position on the segment connecting $\\phi_+$ and $\\phie$.\nOne should repeat this process until $T$ is adjusted appropriately.\nThe criterion for this and the constraints on the potential are\nthe same as in the next step.\n\\item\nFind a minimum of $\\SEx{1}$ obeying the boundary 
\n\\item\nFind a minimum of $\\SEx{1}$ subject to the constraints,\n\\begin{subequations}\n \\label{eq:constraints}\n \\begin{align}\n \\label{eq:constraints on ends}\n V(\\phi(0)) &= V(\\phi_+), \\quad\n \\phi(T) = \\phi_+, \\\\\n\\intertext{in combination with}\n \\label{eq:constraints in between}\n V(\\phi(\\rho)) &\\ge V(\\phi_+) \\quad\\text{for}\\quad 0 < \\rho < T .\n \\end{align}\n\\end{subequations}\nThe inequality \\eqref{eq:constraints in between} should be enforced\nat every lattice point in order to prevent the unwanted\n`upshooting'.\n\nOne should repeat the minimisation with an increased $T$\nuntil energy is sufficiently conserved.\nIt usually works to require that\nthe average difference between the kinetic and the potential energy densities\nbe less than around 1\\% of the barrier height\nas measured within the resulting profile.\n(A minimal numerical sketch of this step is given below, after the list.)\n\\item\nThe minimum is a solution of the equation,\n\\begin{equation}\n\\label{eq:eq for iteration}\n \\frac{d^2\\phi}{d\\rho^2} +\n \\frac{\\alpha-1}{\\rho + \\Delta \\rho} \\frac{d\\phi}{d\\rho} =\n \\nabla V(\\phi) ,\n\\end{equation}\nwith the damping term switched off by $\\alpha = 1$, that\ncomplies with the boundary conditions~\\eqref{eq:BCs finite T}.\nIn comparison to~\\eqref{eq:diff eq alpha},\nthere is an additional\noffset parameter $\\Delta \\rho$ that has been introduced to avoid a pathological\nbehaviour near $\\rho = 0$ \\cite{Konstandin:2006nd}.\n\\item\nMake a continuation to the damped case by gradually increasing\n$\\alpha$ from 1 to 4 with $\\Delta \\rho$ fixed around $0.75 T$, and then\nsend $\\Delta \\rho$ also to zero.\nOne can choose to stop at $\\alpha = 3$ for tunnelling in\na finite temperature system \\cite{thermal}.\nFor each pair of $\\alpha$ and $\\Delta \\rho$,\none can linearise~\\eqref{eq:eq for iteration}\nusing the series expansion of the right hand side,\n\\begin{equation}\n \\nabla V(\\phi) \\approx \\nabla V(\\widetilde{\\phi}) + (\\phi - \\widetilde{\\phi}) \\cdot\n \\nabla \\nabla V(\\widetilde{\\phi}) ,\n\\end{equation}\nand then iteratively solve it by matrix inversion.\nBoundary conditions are set by \\eqref{eq:BCs finite T}.\n\\item\nThe final solution is the bounce configuration.\n\\end{enumerate}\nIt should be noted that\nsteps 4 and 5 are carried over from Ref.~\\cite{Konstandin:2006nd};\nmodifications have been made only to the steps for the undamped case.\n
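\nTo make step 3 concrete, the sketch below shows one possible way of setting up\nthe constrained minimisation on a uniform lattice.\nIt assumes the discretised one-dimensional action density\n$\\frac{1}{2}\\, (d\\phi\/d\\rho)^2 + V(\\phi) - V(\\phi_+)$ and\nuses the \\texttt{SLSQP} routine of SciPy merely as a stand-in for a dedicated\nnonlinear-programming package such as Ipopt;\nall function and variable names are illustrative only.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef solve_undamped(V, phi_plus, phi_e, T, n):\n    # phi_plus, phi_e: arrays of length f (number of field components)\n    phi_plus = np.atleast_1d(np.asarray(phi_plus, dtype=float))\n    phi_e = np.atleast_1d(np.asarray(phi_e, dtype=float))\n    f, drho = phi_plus.size, T \/ n     # lattice: rho_i = i*drho, i = 0..n\n    V_plus = V(phi_plus)\n\n    def profile(x):\n        # free variables are phi at i = 0..n-1; phi_n is pinned to phi_plus\n        return np.vstack([x.reshape(n, f), phi_plus])\n\n    def action(x):\n        # discretised one-dimensional action (see the assumption above)\n        phi = profile(x)\n        kin = 0.5 * np.sum((np.diff(phi, axis=0) \/ drho)**2)\n        pot = sum(V(p) - V_plus for p in phi[:-1])\n        return drho * (kin + pot)\n\n    cons = [\n        # V(phi(0)) = V(phi_plus): replaces the Neumann condition at rho = 0\n        {'type': 'eq',\n         'fun': lambda x: np.atleast_1d(V(profile(x)[0]) - V_plus)},\n        # V(phi(rho)) >= V(phi_plus) at the interior points: no 'upshooting'\n        {'type': 'ineq',\n         'fun': lambda x: np.array([V(p) - V_plus for p in profile(x)[1:-1]])},\n    ]\n\n    x0 = np.tile(phi_plus, (n, 1))     # step-like initial profile\n    x0[: max(1, n \/\/ 10)] = phi_e      # jump placed near rho = 0.1 T\n    res = minimize(action, x0.ravel(), method='SLSQP',\n                   constraints=cons, options={'maxiter': 1000})\n    return profile(res.x)\n\ndef energy_violation(phi, drho, V, V_plus, barrier_height):\n    # step 3 criterion: average |kinetic - potential| relative to the barrier\n    kin = 0.5 * np.sum((np.diff(phi, axis=0) \/ drho)**2, axis=1)\n    pot = np.array([V(p) - V_plus for p in phi[:-1]])\n    return np.mean(np.abs(kin - pot)) \/ barrier_height\n\\end{verbatim}\nThe helper \\texttt{energy\\_violation} implements the criterion quoted in step 3:\nif its value exceeds roughly $0.01$, the minimisation should be repeated with a\nlarger $T$.\nNote also that the step-like initial profile is what steers the local search\ntowards the nontrivial minimum rather than the trivial configuration\n$\\phi \\equiv \\phi_+$.\n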
\nThe proof of step 3 follows directly from\nthat of step 1 in the original method \\cite{Konstandin:2006nd}.\nFirst of all, consider a solution determined by\nthe improved potential~\\eqref{eq:Uepsilon}\nand the boundary conditions~\\eqref{eq:BCs improved potential}.\nIt is a minimum of the action since\n$\\phi_-$ is the global minimum of $U_\\epsilon$.\nNext, take the limit of $\\epsilon \\rightarrow 0$.\nSince the path on the plateau makes no contribution to the action,\nthe rest of the path becomes a minimum of $\\SEx{1}$\nsubject to the constraints~\\eqref{eq:constraints}.\nThis motion off the plateau satisfies the equation~\\eqref{eq:diff eq alpha=1}\nand the boundary conditions~\\eqref{eq:BCs finite T},\nwhich are equivalent to~\\eqref{eq:constraints on ends}\nby virtue of energy conservation.\nNote that the improved potential is invoked only for\nthe proof but never appears in the practical numerical calculation.\n\nThere is a subtlety regarding the bounce as a minimum of the action\nunder the conditions~\\eqref{eq:constraints}.\nIn the limit of $T \\rightarrow \\infty$,\nthere is a flat direction of minima that initially spend\ndifferent amounts of time staying at $\\phi_e$ before rolling down.\nAny member of this family of minima qualifies in principle as a solution of\nthe undamped equation.\nIn practice, however, it is desirable to find a path\nthat does not have a long constant part, in order to save computation resources.\nThis is why the initial profile in step 2 was set to have\na jump at a position close to $\\rho = 0$.\nFor a finite $T$, the flat direction is slightly lifted.\nThe minimum-search routine should be able to cope with\nsuch continuously (quasi-)degenerate minima.\nAlso, the termination condition should be tuned\nin such a way that the routine stops after\nreaching a point that is acceptable to step 4\neven though it is not the exact minimum.\nThis requires some experimentation.\n\nThe minimisation problem in step 3 is a nontrivial task;\nit belongs to the class of problems termed nonlinear programming \\cite{nlp},\nwhich is a broad research area in its own right.\nOne possible strategy is the following (see e.g.\\ \\cite{ipopt paper}).\nAn equality constraint is taken care of by employing a Lagrange multiplier.\nAn inequality constraint is replaced with an equality\nby introducing a slack variable with a limited range.\nThis limit is enforced by adding\na barrier term to the function to be minimised.\nThis term is a product of the barrier parameter $\\mu$\nand a barrier function of the slack variable.\nThe solution of the original problem is obtained by\nsolving a sequence of subproblems\nwith $\\mu$ successively decreasing towards zero.
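\nIn schematic notation, with $F$ the function to be minimised, $g$ and $c$\ncollecting the equality and inequality constraints, and $s$ the slack variables,\neach subproblem at fixed $\\mu$ reads\n\\begin{equation}\n \\min_{x,\\, s}\\; F(x) - \\mu \\sum_i \\ln s_i\n \\quad\\text{subject to}\\quad\n g(x) = 0 , \\quad\n c(x) - s = 0 ,\n\\end{equation}\nwhere the logarithmic barrier is a standard choice.\n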
\nAt each `outer iteration', the subproblem with the fixed $\\mu$\nis solved by a damped Newton's method which involves `inner iterations',\nand then this and the previous outcomes are used for making\nthe initial guess for the next subproblem with a smaller $\\mu$.\nThe method in Ref.~\\cite{ipopt paper} needs\nthe `outer loop' to be repeated several times.\nIf implementing such an algorithm from scratch is too demanding,\none can let a library do the job.\nFor instance, the example exhibited below uses\nthe Ipopt package \\cite{ipopt}.\n\nIt might be of interest to compare this and the original approaches\nfor a problem that both can solve.\nSuppose that one introduces $n$\nlattice points each holding $f$ unfixed field variables\nbetween $\\phi_+$ and $\\phi_e$ (see Fig.~\\ref{fig:nondeg}).\nThe constrained minimisation problem in step 3 roughly corresponds to\napplying Newton's method to\na function of $n (f+2) - 1$ free variables,\ni.e.\\ $n f$ field values, $n$ Lagrange multipliers, and\n$n-1$ slack variables.\nThis procedure is repeated, each time with a decreased barrier parameter.\nTo use the original approach, on the other hand, one needs $N$ additional points\nbetween $\\phi_e$ and $\\phi_-$, excluding $\\phi_e$, which has already been counted.\nThe function to be minimised\ndepends on $(n + N) f$ variables in the first step\nand on $n f + 1$ in the second, including one Lagrange multiplier.\nNewton's method is repeated while iteratively driving\n$\\epsilon \\rightarrow 0$.\nGiven only this information,\nit is difficult to tell which approach is more efficient on general grounds.\nNevertheless, it is obvious that\na case with large $N$ is better handled with the new strategy\nsince each Newton iteration becomes very expensive with the original.\nThis happens when $\\phi_-$ is distant from $\\phi_e$.\nAnother limit arises with large $f$\nthat may favour the new approach over the original.\nIn this case, the number of auxiliary degrees of freedom, $2n-1$,\nin the former becomes much smaller than $Nf$ in the latter.\n\n\nNow, the refined procedure is applied to a problem\nwith two real fields as a demonstration.\nThe scalar potential is chosen to be\n\\begin{equation}\n \\label{eq:ufb potential}\n V = \\frac{m^2_1}{2} \\phi_1^2 + \\frac{m^2_2}{2} \\phi_2^2\n - \\frac{A}{2} \\phi_1\\phi_2^2 + g^2 (\\phi_1^2 - \\phi_2^2)^2 .\n\\end{equation}\nThis potential is unbounded from below.\nIn this sense, it is qualitatively\ndifferent from the examples in Ref.~\\cite{Konstandin:2006nd}.\nClearly, one cannot impose\nthe boundary conditions~\\eqref{eq:BCs improved potential}\nsince there exists no $\\phi_-$.\nTherefore, the original method is not applicable unless\nthe potential is altered.\nThere is a minimum with $V = 0$ at $\\phi_1 = \\phi_2 = 0$ that is\nonly local as long as $A \\ne 0$.\nOne can see the metastability easily by taking the field direction\n$\\phi_1 = \\phi_2 = a$, along which the quartic term drops out, leaving\n\\begin{equation}\n \\label{eq:flat direction}\n V_\\mathrm{flat} = \\frac{1}{2} (m^2_1 + m^2_2)\\,a^2 - \\frac{A}{2}\\,a^3 .\n\\end{equation}\nAlong this direction, a barrier starts from the origin\nand ends at the position where $A\\,a = m^2_1 + m^2_2$,\nafter which the potential drops below zero and keeps falling.\n\n\\FIGURE[t]{\n\\incgraph{ufbexcntrs}\n\\caption{Contours of the scalar potential~\\eqref{eq:ufb potential}.\n The local minimum is marked with $\\phi_+$.\n The white curve is the set of points $\\phi_e$ at the same level as $\\phi_+$.\n The dotted and the solid curves show\n the bounces in the undamped and the damped cases, respectively.}\n\\label{fig:contours}}\nThe shape of the potential is illustrated in Fig.~\\ref{fig:contours}.\nThe masses and couplings in~\\eqref{eq:ufb potential} are set to be\n\\begin{equation}\n \\label{eq:parameters}\n m^2_1 = 3.0, \\quad\n m^2_2 = 0.5, \\quad\n A = 1.5, \\quad\n g = 0.4 .\n\\end{equation}\nThere is a force that drives the path off\nthe straight directions with $\\phi_1 = \\pm\\phi_2$,\narising from the gradients of both the trilinear and the mass terms.\nA large difference between $m^2_1$ and $m^2_2$ has been introduced\nto emphasise this effect.\n
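\nAs a concrete reference for this example, the potential~\\eqref{eq:ufb potential}\nwith the parameter set~\\eqref{eq:parameters} and its gradient can be coded up as in\nthe following sketch;\nthe last lines use~\\eqref{eq:flat direction} to read off a point at the same level\nas $\\phi_+$ along the direction $\\phi_1 = \\phi_2$, which can serve as the $\\phie$\nof step 1.\nThe snippet is purely illustrative and does not come from any existing package.\n\\begin{verbatim}\nimport numpy as np\n\nm1sq, m2sq, A, g = 3.0, 0.5, 1.5, 0.4    # parameter values quoted in the text\n\ndef V(phi):\n    p1, p2 = phi\n    return (0.5 * m1sq * p1**2 + 0.5 * m2sq * p2**2\n            - 0.5 * A * p1 * p2**2 + g**2 * (p1**2 - p2**2)**2)\n\ndef gradV(phi):\n    p1, p2 = phi\n    return np.array([\n        m1sq * p1 - 0.5 * A * p2**2 + 4 * g**2 * p1 * (p1**2 - p2**2),\n        m2sq * p2 - A * p1 * p2 - 4 * g**2 * p2 * (p1**2 - p2**2),\n    ])\n\n# Along phi_1 = phi_2 = a the quartic term drops out, so the barrier ends\n# where A*a = m1sq + m2sq.  This point lies at the same level as\n# phi_+ = (0, 0) and can serve as phi_e in step 1.\na_e = (m1sq + m2sq) \/ A\nphi_e = np.array([a_e, a_e])\nprint(V(phi_e))    # vanishes up to rounding, i.e. V(phi_e) = V(phi_+) = 0\n\\end{verbatim}\nWith these definitions, the routine sketched after the list of steps above could be\ninvoked as \\texttt{solve\\_undamped(V, [0.0, 0.0], phi\\_e, T, n)} for suitable $T$\nand $n$.\n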
\nThe white contour corresponds to the point $\\phi_e$ in Fig.~\\ref{fig:nondeg}.\nFinding any point on this curve is enough to complete step 1.\nSuccessful execution up to step 3 should lead to the\nundamped solution for $\\alpha = 1$ shown in Fig.~\\ref{fig:contours}.\nNotice that the starting point of\nthe trajectory is on the white curve,\nas enforced by~\\eqref{eq:constraints on ends}.\nThen one can proceed to take steps 4 and 5 to\ndeform this path into the\nfinal bounce for $\\alpha = 4, \\Delta \\rho = 0$,\nwhich is also partly visible in the same figure.\n\n\\FIGURE{\n\\incgraph{ufbexsols}\n\\caption{Field profiles of the bounces.\n The dotted and the solid curves show\n the undamped and the damped cases, respectively.}\n\\label{fig:sol}}\nThe solutions are plotted as functions of time in Fig.~\\ref{fig:sol}.\nAs $\\alpha$ increases,\nthe starting point retreats from the false vacuum\ndue to the damping term in~\\eqref{eq:eq for iteration}.\nNotice that the path does not run away even though\nthe potential is unbounded from below.\nThis is understandable from the fact that\nthe original problem formulated in~\\eqref{eq:diff eq}\nand~\\eqref{eq:BCs} makes no reference to\nthe global minimum.\nFor the same reason,\nthe procedure described in this article\ndoes not need to explore deep regions of the potential\nin search of a bottom that may not even exist.\nOne can put this solution back into the Euclidean action~\\eqref{eq:SE}\nto calculate the final answer,\n\\begin{equation}\n S_E[\\overline{\\phi}] = 488 .\n\\end{equation}\nIn this problem, the potential is symmetric under the reflection\n$\\phi_2 \\rightarrow -\\phi_2$.\nThis pairs each bounce with a mirror image that makes exactly the same\ncontribution to the decay rate.\nTherefore, one must multiply the decay rate~\\eqref{eq:probability} by 2.\n\nIt is straightforward to write down the\nlatticised versions of the fields, the Euclidean action,\nthe Euler-Lagrange equation, and the boundary conditions;\nthey can be found in Ref.~\\cite{Konstandin:2006nd} for instance.\nThe same reference reports an analysis of the discretisation error.\nThis error is determined by the final iteration\nstep of the continuation to four dimensions,\nwhich is common to the present and the original methods.\n\n\n\n\\acknowledgments\n\nThe author thanks Ahmed Ali for helpful comments.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}