diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpxok" "b/data_all_eng_slimpj/shuffled/split2/finalzzpxok" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpxok" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{S:intro}\n\nIn this paper, we consider the Gaussian linear regression model, given by\n\\begin{equation}\n\\label{eq:reg}\nY = X \\beta + \\varepsilon, \n\\end{equation}\nwhere $Y$ is a $n \\times 1$ vector of response variables, $X$ is a $n \\times p$ matrix of predictor variables, $\\beta$ is a $p \\times 1$ vector of slope coefficients, and $\\varepsilon$ is a $n \\times 1$ vector of iid $\\mathsf{N}(0,\\sigma^2)$ random errors. Recently, there has been considerable interest in the high-dimensional case, where $p \\gg n$, driven primarily by challenging applications. Indeed, in genetic studies, where the response variable corresponds to a particular observable trait, the number of subjects, $n$, may be of order $10^3$, while the number of genetic features, $p$, in consideration can be of order $10^5$. Despite the large number of features, usually only a few have a genuine association with the trait. For example, the \\citet{wellcome2007} has confirmed that only seven genes have a non-negligible association with Type~I diabetes. Therefore, it is reasonable to assume that $\\beta$ is sparse, i.e., only a few non-zero entries. \n\nGiven the practical importance of the high-dimensional regression problem, there is now a substantial body of literature on the subject. In the frequentist setting, a variety of methods are available based on minimizing loss functions, equipped with a penalty on the complexity of the model. This includes the lasso \\citep{tibshirani1996}, the smoothly clipped absolute deviation \\citep{fanli2001}, the adaptive lasso \\citep{zou2006}, and the Dantzig selector \\citep{candes.tao.2007, james.radchenko.2009, james.radchenko.lv.2009}. \\citet{fan.lv.2010} give a selective overview of these and other frequentist methods. From a Bayesian perspective, popular methods for variable selection in high-dimensional regression include stochastic search variable selection \\citep{george.mccullogh.1993} and the methods based on spike-and-slab priors \\citep{ishwaran.rao.2005.aos, ishwaran.rao.2005.jasa}. These methods and others are reviewed in \\citet{clydegeorge2004} and \\citet{heaton.scott.2009}. More recently, \\citet{bondell.reich.2012}, \\citet{johnson.rossell.2012}, and \\citet{narisetty.he.2014} propose Bayesian variable selection methods and establish model selection consistency. \n\nAny Bayesian approach to the regression problem \\eqref{eq:reg} yields a posterior distribution on the high-dimensional parameter $\\beta$. It is natural to ask under what conditions will the $\\beta$ posterior distribution concentrate around the true value at an appropriate or optimal rate. Recently, \\citet{castillo.schmidt.vaart.2014} show that, with a suitable Laplace-like prior for $\\beta$, similar to those in \\citet{park.casella.2008}, and under conditions on the design matrix $X$, the posterior distribution concentrates around the truth at rates that match those for the corresponding lasso estimator \\citep[e.g.,][]{buhlmann.geer.book}. 
These results leave room for improvement in at least two directions; first, the rates associated with the lasso estimator are not optimal, so a break from the Laplace priors (and perhaps even the standard Bayesian setup itself) is desirable; second, and perhaps most importantly, posterior computation with these inconvenient non-conjugate priors is expensive and non-trivial. In this paper, we develop a new approach, motivated by computational considerations, which leads to improvements in both directions, simultaneously. \n\nTowards a model that leads to more efficient computation, it is natural to consider a conjugate normal prior for $\beta$. However, Theorem~2.8 in \citet{castillo.vaart.2012} says that if the prior has normal tails, then the posterior concentration rates can be suboptimal, motivating a departure from the somewhat rigid Bayesian framework. Following \citet{martin.walker.eb}, we consider a new empirical Bayes approach, motivated by the very simple idea that the tails of the prior are irrelevant as long as its center is chosen informatively. So, our proposal is to use the data to provide an informative center for the normal prior for $\beta$, along with an extra regularization step to prevent the posterior from tracking the data too closely. Details of our proposed empirical Bayes model are presented in Section~\ref{S:model}. It turns out that this new empirical Bayes posterior is both easy to compute and has desirable asymptotic concentration properties. Section~\ref{S:theory} presents a variety of concentration rate results for our empirical Bayes posterior. For example, under almost no conditions on the model or design matrix, a concentration rate relative to prediction error loss is obtained which is, at least in some cases, minimax optimal; the optimal rate can be achieved in all cases, but at a cost (see Remark~\ref{re:minimax}). Furthermore, we provide a model selection consistency result which says that, under optimal conditions, the empirical Bayes posterior can asymptotically identify those truly non-zero coefficients in the linear model. Our approach has some similarities with the exponential weighting methods in, e.g., \citet{rigollet.tsybakov.2011, rigollet.tsybakov.2012} and \citet{ariascastro.lounici.2014}; in fact, ours can be viewed as a generalization of these approaches, defining a full posterior that, when suitably summarized, corresponds essentially to their estimators. In Section~\ref{S:numerical} we propose a simple and efficient Markov chain Monte Carlo method to sample from our empirical Bayes posterior, and we present several simulation studies to highlight both the computational speed and the superior finite-sample performance of our method compared to several others in terms of model selection. Finally, Section~\ref{S:discuss} gives a brief discussion, the key message being that we get provable posterior concentration results, optimal in a minimax sense in some cases, fast and easy computation, and strong finite-sample performance. Lengthy proofs and some auxiliary results are given in the Appendix. \n\n\n\n\section{The empirical Bayes model}\n\label{S:model}\n\n\subsection{The prior}\n\label{SS:prior}\n\nHere, and in the theoretical analysis in Section~\ref{S:theory}, we take the error variance $\sigma^2$ to be known, as is often done \citep[e.g.,][]{rigollet.tsybakov.2012, castillo.schmidt.vaart.2014}. Techniques for estimating $\sigma^2$ in the high-dimensional case are available; see Section~\ref{S:numerical}. 
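For instance, one simple possibility, in the spirit of the lasso-based residual variance estimator of \citet{reid.tibshirani.friedman.2014} that we use in Section~\ref{S:numerical}, is sketched below in R; the function name and the use of the \texttt{glmnet} package are illustrative choices of ours, not part of the formal development.\n\begin{verbatim}\n## rough sketch of a lasso-based estimate of sigma^2;\n## assumes the glmnet package and data (X, y) with no intercept\nlibrary(glmnet)\nestimate.sigma2 <- function(X, y) {\n  cvfit <- cv.glmnet(X, y, intercept = FALSE)\n  bhat  <- as.numeric(coef(cvfit, s = 'lambda.min'))[-1]\n  shat  <- sum(bhat != 0)                     # number of selected variables\n  yhat  <- predict(cvfit, newx = X, s = 'lambda.min')\n  sum((y - yhat)^2) \/ max(1, nrow(X) - shat)  # df-adjusted residual mean square\n}\n\end{verbatim}\n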
To specify a prior for $\beta$ that incorporates sparsity, we decompose $\beta$ as $(S,\beta_S)$, where $S \subset \{1,\ldots,p\}$ denotes the ``active set'' of variables, $S=\{j: \beta_j \neq 0\}$, and $\beta_S$ is the $|S|$-vector containing the particular non-zero values. Based on this decomposition, we can specify the prior for $\beta$ in two steps: a prior for $S$ and then a prior for $\beta_S$, given $S$. \n\nFirst, the prior $\pi(S)$ for the model $S$ decomposes as follows:\n\begin{equation}\n\label{eq:prior.decomp}\n\pi(S) = \textstyle\binom{p}{s}^{-1} \, f_n(s), \quad s=0,1,\ldots,p, \quad s=|S|,\n\end{equation}\nwhere $f_n(s)$ is a probability mass function on the size $|S|$ of $S$. That is, we assign a prior distribution $f_n(s)$ on the model size and then, given the size, put a uniform prior on all models of the given size. Some conditions on $f_n(s)$ will be required for suitable posterior concentration. In particular, we assume that $f_n(s)$ is supported on $\{0,1,\ldots,R\}$, not on $\{0,1,\ldots,p\}$, where $R \leq n$ is the rank of the matrix $X$; see, also, \citet{jiang2007}, \citet{abramovich.grinshtein.2010}, \citet{rigollet.tsybakov.2012}, and \citet{ariascastro.lounici.2014}. That is, \n\begin{equation}\n\label{eq:prior.support}\nf_n(s) = 0 \quad \text{for all $s=R+1,\ldots,p$}. \n\end{equation}\nOur primary motivation for imposing this constraint is that, in practical applications, the true value of $s$, i.e., $s^\star=|S^\star|$, is typically much smaller than $R$. Even in the ideal case where $S^\star$ is known, if $|S^\star| > R$, then quality estimation of the corresponding parameters is not possible. Moreover, models containing a large number of variables can be difficult to interpret. Therefore, since having no more variables than samples in the fixed-model case is a reasonable assumption, we do not believe that restricting the support of our prior for the model size is a strong condition. \n\nSecond, for the conditional prior on $\beta_S$, given a model $S$ that satisfies $|S| \leq R$, we propose to employ the available distribution theory for the least squares estimator $\widehat\beta_S$. Specifically, we take the prior for $\beta_S$, given $S$, as \n\[ \beta_S \mid S \sim \mathsf{N}_{|S|}\bigl( \widehat\beta_S, \gamma^{-1}(X_S^\top X_S)^{-1} \bigr).\]\nHere, $X_S$ denotes the columns of $X$ corresponding to $S$, and $\gamma > 0$ is a tuning parameter, to be specified. This is reminiscent of Zellner's $g$-prior \citep[e.g.,][]{zellner1986}, except that it is centered at the least squares estimator; see Section~\ref{SS:likelihood} for more on this data-dependent prior centering. To summarize, our proposed prior $\Pi$ for $\beta$ is given by \n\begin{equation}\n\label{eq:prior}\n\Pi(d\beta) = \sum_{S: |S| \leq R} \mathsf{N}_{|S|}\bigl( d\beta_S \; \bigl\vert \; \widehat\beta_S, \gamma^{-1}(X_S^\top X_S)^{-1} \bigr) \, \delta_0(d\beta_{S^c}) \, \pi(S). \n\end{equation}\nFollowing \citet{martin.walker.eb}, we refer to this data-dependent prior as an empirical prior; see Section~\ref{SS:posterior}. By restricting $|S| \leq R$, we can be sure that the least squares estimator $\widehat\beta_S$ is available, along with the usual distribution theory. 
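To make the construction concrete, a draw from this empirical prior can be simulated by first drawing $S \sim \pi$, then drawing $\beta_S$ from the normal distribution above and setting $\beta_{S^c}=0$; a minimal R sketch of the conditional draw, given $S$, is below (the function name is ours, and the sketch assumes $X_S$ has full column rank).\n\begin{verbatim}\n## sketch: one draw of beta from the conditional empirical prior, given a model S\ndraw.beta.given.S <- function(S, X, y, gamma) {\n  beta <- numeric(ncol(X))\n  if (length(S) > 0) {\n    XS  <- X[, S, drop = FALSE]\n    fit <- lm.fit(XS, y)                     # least squares center beta-hat_S\n    Sig <- solve(crossprod(XS)) \/ gamma      # gamma^{-1} (X_S' X_S)^{-1}\n    beta[S] <- fit$coefficients + drop(crossprod(chol(Sig), rnorm(length(S))))\n  }\n  beta\n}\n\end{verbatim}\n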
In our implementation, $\\gamma^{-1}$ will be large, which means that the conditional prior for $\\beta_S$ is rather diffuse, so the dependence on the data, through $\\widehat\\beta_S$, is not overly strong.\n\nObviously, to properly define the conditional prior for $\\beta_S$, we implicitly assume that $X_S^\\top X_S$ is non-singular for all subsets $S$ with $|S| \\leq R$. This is only for simplicity, however, since the theory in Section~\\ref{S:theory} goes through without this assumption at the cost of making computations more difficult. \n\n \n\\subsection{The likelihood function}\n\\label{SS:likelihood}\n\nFor the likelihood function, write $L_n(\\beta) = \\mathsf{N}_n(Y \\mid X\\beta, \\sigma^2 I)$ as the $n$-dimensional Gaussian density at $Y$, with mean $X\\beta$, covariance matrix proportional to the identity matrix, and treated as a function of $\\beta$. One unique feature of our approach so far is the centering of the (conditional) prior on the least squares estimator, which is greedy, in some sense. To prevent the posterior from tracking the data too closely, the second feature of our proposed approach is that we introduce a fractional power $\\alpha \\in (0,1)$ on the likelihood. That is, instead of $L_n(\\beta)$, our likelihood will be $L_n(\\beta)^\\alpha$; see \\citet{martin.walker.eb}. Other authors have advocated the use of a fractional likelihood, including \\citet{barron.cover.1991}, \\citet{walker.hjort.2001}, \\citet{zhang2006}, \\citet{jiang.tanner.2008}, \\citet{dalalyan.tsybakov.2008}, and \\citet{grunwald.ommen.2014}, but these papers have different foci and none include a data-dependent (conditional) prior centering. In fact, we feel that this combination of centering and fractional likelihood regularization (see Section~\\ref{SS:posterior}) is a powerful tool that can be used for a variety of high-dimensional problems. \n\nOur analysis in what follows does not go through for the genuine Bayes case, corresponding to $\\alpha=1$, but $\\alpha$ can be arbitrarily close to 1. Clearly, for finite-samples, the numerical differences between results for $\\alpha \\approx 1$ and for $\\alpha=1$ are negligible. \n\n\n\\subsection{The posterior distribution}\n\\label{SS:posterior}\n\nGiven the prior $\\Pi$ for $\\beta$ and the fractional likelihood, we form an empirical Bayes posterior distribution, denoted by $\\Pi^n$, for $\\beta$ using the standard Bayesian update. That is, for $B$ a measurable subset of $\\mathbb{R}^p$, we have \n\\begin{equation}\n\\label{eq:post0}\n\\Pi^n(B) = \\frac{\\int_B L_n(\\beta)^\\alpha \\,\\Pi(d\\beta)}{\\int_{\\mathbb{R}^p} L_n(\\beta)^\\alpha \\, \\Pi(d\\beta)}. \n\\end{equation}\nComputation of this empirical Bayes posterior will be discussed in Section~\\ref{S:numerical}.\n\nWe interpret ``empirical Bayes'' loosely---if the prior depends on data, then the corresponding posterior is empirical Bayes. The combination of a prior, data-dependent or not, with a fractional likelihood via Bayes formula can also be understood from this empirical Bayes point of view. Indeed, \n\\[ L_n(\\beta)^\\alpha \\, \\Pi(d\\beta) = L_n(\\beta) \\, \\frac{\\Pi(d\\beta)}{L_n(\\beta)^{1-\\alpha}}, \\]\ni.e., the Bayes combination of a fractional likelihood with a prior is equivalent to a Bayes combination of the original likelihood function with a data-dependent prior. 
As \\citet{walker.hjort.2001} explain, rescaling the prior by a portion of the likelihood helps to protect from possible inconsistencies by penalizing those parameter values that ``track the data too closely.''\n\n\n\n\n\n\\section{Posterior concentration rates}\n\\label{S:theory}\n\n\\subsection{Setup}\n\\label{SS:setup}\n\nBefore getting into details about the concentration rates, we first want to clarify what is meant by asymptotics in this context. There is an implicit triangular array setup, i.e., for each $n$, the response vector $Y^n = (Y_1^n,\\ldots,Y_n^n)^\\top$ is modeled according to \\eqref{eq:reg} with the $n \\times p$ design matrix $X^n = ((X_{ij}^n))$, of rank $R \\leq n$, which we take to be deterministic but depending on $n$, and vector of coefficients $\\beta^n = (\\beta_1,\\ldots,\\beta_p)^\\top$. When $n$ is increased, more data is available so, even though there are more variables to contend with (since $p \\gg n$), there is hope that something about the true $\\beta^n$ can be learnt, provided that it is sufficiently sparse. In what follows, we will use the standard notation in \\eqref{eq:reg} which is less cumbersome but hides the triangular array formulation. It is important to keep in mind, however, that, throughout our analysis, $p$, $R$, and $s^\\star$ depend implicitly on $n$. \n\n\nWe make some minimal standing assumptions. First, without loss of generality, we can assume that $s^\\star \\leq R \\leq n \\ll p$. No other assumptions concerning $n$, $p$, $R$, and $s^\\star$ will be required. The results below also hold for all fixed tuning parameters $\\alpha \\in (0,1)$ and $\\gamma > 0$; see Section~\\ref{SS:implementation} for guidance on the practical choice of $(\\alpha,\\gamma)$. For the design matrix $X$, there is a standing simplifying assumption that we shall make. In particular, we assume that $X_S$ is full-rank for each $S$ satisfying $|S| \\leq R$. This assumption holds, for example, if $X$ satisfies the ``sparse Riesz condition with rank $n$'' discussed in \\citet{zhang.huang.2008} and \\citet{chen.chen.2008}. It is possible, however, to remove this condition, but it requires a modification of the empirical Bayes model. Indeed, if the prior $\\pi$ for $S$ only puts positive mass on those $S$ such that $X_S$ is full-rank, and if $X_{S^\\star}$ is full-rank, then the theoretical results presented below follow similarly. The drawback for adjusting the prior for $S$ in this way is additional computational cost, i.e., the less-than-full-rank models must be identified and removed by zeroing out the prior mass. We opt here to keep things simple by making the full-rank assumption. \n\n\n\n\n\n\\subsection{A preliminary result}\n\\label{SS:prelim}\n\nLet $B$ be a generic event for $\\beta \\in \\mathbb{R}^p$. Our empirical Bayes posterior probability of the event $B$ in \\eqref{eq:post0} can be rewritten as \n\\begin{equation}\n\\label{eq:post}\n\\Pi^n(B) = \\frac{\\int_B R_n(\\beta, \\beta^\\star)^\\alpha \\, \\Pi(d\\beta)}{\\int R_n(\\beta, \\beta^\\star)^\\alpha \\,\\Pi(d\\beta)}, \n\\end{equation}\nwhere $R_n(\\beta,\\beta^\\star) = L_n(\\beta)\/L_n(\\beta^\\star)$ is the likelihood ratio. Let $D_n$ denote the denominator in the above display, i.e., $D_n = \\int R_n(\\beta,\\beta^\\star)^\\alpha \\, \\Pi(d\\beta)$. The next result, which will be useful throughout our analysis, gives a sure lower bound on $D_n$. 
\n\n\\begin{lem}\n\\label{lem:denominator}\nThere exists $c=c(\\alpha, \\gamma, \\sigma^2) > 0$ such that $D_n \\geq \\pi(S^\\star)e^{-c|S^\\star|}$. \n\\end{lem}\n\n\\begin{proof}\n$D_n$ is an average of a non-negative $S$-dependent quantity with respect to $\\pi(S)$. This average is clearly greater than the quantity for $S=S^\\star$ times $\\pi(S^\\star)$. That is, \n\\begin{align*}\nD_n & > \\pi(S^\\star) \\int R_n(\\beta, \\beta^\\star)^\\alpha \\mathsf{N}(\\beta_{S^\\star} \\mid \\hat\\beta_{S^\\star}, \\gamma^{-1}(X_{S^\\star}^\\top X_{S^\\star})^{-1}) \\, d\\beta_{S^\\star} \\\\\n& = \\pi(S^\\star) \\int e^{-\\frac{\\alpha}{2\\sigma^2}\\{\\|Y-X_{S^\\star}\\beta_{S^\\star}\\|_2^2 - \\|Y-X_{S^\\star}\\beta_{S^\\star}^\\star\\|_2^2\\}} \\mathsf{N}(\\beta_{S^\\star} \\mid \\hat\\beta_{S^\\star}, \\gamma^{-1}(X_{S^\\star}^\\top X_{S^\\star})^{-1}) \\,d\\beta_{S^\\star}.\n\\end{align*}\nDirect calculation shows that the lower bound above equals \n\\[ \\pi(S^\\star) e^{\\frac{\\alpha}{2\\sigma^2}\\|X_{S^\\star}(\\hat\\beta_{S^\\star}-\\beta_{S^\\star}^\\star)\\|_2^2} \\Bigr(1 + \\frac{\\alpha}{\\gamma \\sigma^2} \\Bigr)^{-|S^\\star|\/2}. \\]\nUsing the trivial bound $\\|\\cdot\\|_2 \\geq 0$ on the norm in the exponent, the proof is complete if we let $c = \\frac12\\log\\bigl( 1 + \\frac{\\alpha}{\\gamma \\sigma^2} \\bigr)$, which is clearly positive. \n\\end{proof}\n\n\n\n\n\\subsection{Prediction loss}\n\\label{SS:prediction}\n\nWe now present a result characterizing the concentration rate of the posterior distribution for the mean $X\\beta$. Set \n\\begin{equation}\n\\label{eq:prediction.event}\nB_{\\varepsilon_n} = \\{\\beta \\in \\mathbb{R}^p: \\|X(\\beta-\\beta^\\star)\\|_2^2 > \\varepsilon_n\\}, \n\\end{equation}\nwhere $\\varepsilon_n$ is a positive sequence to be specified. Since this loss involves the $X$ matrix, the notion of convergence we are considering here is related to prediction. Different loss functions will be considered in Section~\\ref{SS:other}. As discussed in \\citet{buhlmann.geer.book}, e.g., their equation (2.8), $\\varepsilon_n$ proportional to $s^\\star \\log p$ corresponds to the convergence rate for the lasso estimator. Intuitively, if $S^\\star$ were known, then the best rate for the prediction error would be $s^\\star$, so the logarithmic term acts as a penalty for having to also deal with the unknown model. \n\nLet $N_n$ be the numerator for the posterior probability of $B_{\\varepsilon_n}$, as in \\eqref{eq:post}, i.e., $N_n = \\int_{B_{\\varepsilon_n}} R_n(\\beta,\\beta^\\star)^\\alpha \\, \\Pi(d\\beta)$. We have the following bound on $N_n$. \n\n\\begin{lem}\n\\label{lem:numerator}\nThere exists $d=d(\\alpha, \\sigma^2) > 0$ and $\\phi=\\phi(\\alpha, \\gamma, \\sigma^2) > 1$ such that $\\mathsf{E}_{\\beta^\\star}(N_n) \\leq e^{-d \\varepsilon_n} \\sum_{S: |S| \\leq R} \\phi^{|S|} \\pi(S)$, uniformly in $\\beta^\\star$. \n\\end{lem}\n\n\\begin{proof}\nSee Appendix~\\ref{SS:proof.num}. \n\\end{proof}\n\n\\ifthenelse{1=1}{}{\n\\begin{remark}\n\\label{re:hyper}\n{\\color{red} Following the intuition in Section~\\ref{SS:prior}, we take $\\alpha$ close to 1. This makes the constant $d$ in Lemma~\\ref{lem:numerator} small. Also, if $\\alpha$ is close to 1, then $\\gamma$ can be chosen small enough so that the constant $\\phi$ is not too big, i.e., close to $2^{1\/2}$. Justification for these statements can be found in the proof of Lemma~\\ref{lem:numerator}; see, also, Section~\\ref{S:numerical}. 
} \n\\end{remark}\n}\n\nTo bound the posterior probability of $B_{\\varepsilon_n}$, let $b_n = \\pi(S^\\star) e^{-c s^\\star}$. Since $D_n \\geq b_n$, surely, by Lemma~\\ref{lem:denominator}, we have \n\\[ \\Pi^n(B_{\\varepsilon_n}) = \\frac{N_n}{D_n} \\cdot 1(D_n \\geq b_n) + \\frac{N_n}{D_n} \\cdot 1(D_n < b_n) \\leq \\frac{N_n}{b_n}. \\]\nTaking expectation and plugging in the bound in Lemma~\\ref{lem:numerator} gives \n\\begin{align*}\n\\mathsf{E}_{\\beta^\\star}\\{\\Pi^n(B_{\\varepsilon_n})\\} & \\leq e^{c|S^\\star|-d\\varepsilon_n} \\, \\frac{1}{\\pi(S^\\star)} \\sum_S \\phi^{|S|} \\pi(S) \\\\\n& = e^{cs^\\star-d\\varepsilon_n} \\, \\frac{\\binom{p}{s^\\star}}{f_n(s^\\star)} \\sum_{s=0}^R \\phi^s f_n(s); \n\\end{align*}\nwhich holds uniformly in $\\beta^\\star$ with $|S_{\\beta^\\star}|=s^\\star$. Then the empirical Bayes concentration rate $\\varepsilon_n = \\varepsilon_n(p, R, s^\\star)$ is such that the above upper bound vanishes. A first conclusion is that $\\varepsilon_n$ must satisfy $s^\\star = o(\\varepsilon_n)$. More precisely, if we set\n\\[ \\zeta_n = \\zeta_n(p, R, s^\\star) = \\frac{\\binom{p}{s^\\star}}{f_n(s^\\star)} \\sum_{s=0}^R \\phi^s f_n(s), \\]\nthen the rate $\\varepsilon_n$ satisfies \n\\begin{equation}\n\\label{eq:growth}\n\\log\\zeta_n = O(\\varepsilon_n), \\quad \\text{as $n \\to \\infty$}.\n\\end{equation}\nThis amounts to a condition on the prior $f_n$ for $|S|$. Indeed, \\eqref{eq:growth} requires that $f_n$ should be sufficiently concentrated near $s^\\star$, so that $f_n(s^\\star)$ is not too small and the expectation of $\\phi^{|S|}$ with respect to $f_n$ is not too big. Compare this to the prior support conditions in \\citet{ggv2000}, \\citet{shen.wasserman.2001}, and \\citet{walker2007}. \n\n\\begin{thm}\n\\label{thm:mean.concentration}\nFor any $s^\\star \\leq R$, if the prior $f_n$ on $|S|$ admits $\\zeta_n$ such that \\eqref{eq:growth} holds with $\\varepsilon_n$, then there exists a constant $M > 0$ such that $\\mathsf{E}_{\\beta^\\star} \\{\\Pi^n(B_{M\\varepsilon_n})\\} \\to 0$ as $n \\to \\infty$, uniformly in $\\beta^\\star$ with $|S_{\\beta^\\star}| = s^\\star$. \n\\end{thm}\n\n\\begin{proof}\nBy Lemmas~\\ref{lem:denominator} and \\ref{lem:numerator}, and the growth condition \\eqref{eq:growth}, we have that, for large $n$, \n\\[ \\log \\mathsf{E}_{\\beta^\\star}\\{\\Pi^n(B_{M\\varepsilon_n})\\} \\leq \\Bigl( \\frac{cs^\\star}{\\varepsilon_n} - Md + \\frac{\\log\\zeta_n}{\\varepsilon_n} \\Bigr) \\varepsilon_n. \\]\nThe first term inside the parentheses vanishes since $s^\\star = o(\\varepsilon_n)$. Next, under \\eqref{eq:growth}, there exists a $K > 0$ such that $(\\log\\zeta_n)\/\\varepsilon_n < K$. So, if we take $M$ such that $Md > K$, then the upper bound above goes to $-\\infty$ as $n \\to \\infty$. This implies the result. \n\\end{proof}\n\n\\begin{remark}\n\\label{re:minimax}\nWhat rates $\\varepsilon_n$ are desirable\/attainable? The minimax rate for estimation under this prediction error loss is $\\min\\{R, s^\\star \\log(p \/ s^\\star)\\}$; see, e.g., \\citet{rigollet.tsybakov.2012}. Note the phase transition between the ordinary [$s^\\star \\log(p \/ s^\\star) < R$] and the ultra high-dimensional [$s^\\star \\log(p \/ s^\\star) > R$] regimes. 
According to Remark~\\ref{re:geometric}, an empirical Bayes posterior concentration rate equal to $s^\\star \\log(p \/ s^\\star)$ obtains for a class of priors on $S$, which is minimax optimal but only in the ordinary high-dimensional regime; this rate is slightly better than those obtained in \\citet{ariascastro.lounici.2014} and \\citet{castillo.schmidt.vaart.2014}. By picking a prior outside this class, in particular, one that puts a little mass on an overly-complex model, the minimax rate can be achieved in both the ordinary and ultra high-dimensional regimes. There is a price to be paid, however, for this complete minimax rate: the little piece of extra prior mass on the complex model is large enough to cause problems with the proofs of marginal posterior concentration properties for $S$. Justification of these claims can be found in {Appendix~\\ref{S:claims}}. Based on these observations, we conjecture that the priors on $S$ that lead to minimax concentration rate under prediction error loss do not lead to desirable model selection properties. This is intuitively reasonable, since good prediction generally does not require a correctly specified model, but more work is needed to confirm this. Since we prefer to have a single prior that does well in all aspects, we will not concern ourselves here with attaining the optimal minimax rate in the ultra high-dimensional regime, though we do know how to obtain it. \n\\end{remark}\n\n\\begin{remark}\n\\label{re:geometric}\nThe growth condition \\eqref{eq:growth} holds with $\\varepsilon_n$ proportional to $s^\\star \\log(p \/ s^\\star)$, the minimax rate in the ordinary high-dimensional case, if there exists constants $a_1$, $a_2$, $c_1$, $c_2$, $C_1$, and $C_2$ such that $f_n$ satisfies \n\\begin{equation}\n\\label{eq:prior.dim}\nC_1\\Bigl( \\frac{1}{c_1 p^{a_1}} \\Bigr)^s \\leq f_n(s) \\leq C_2 \\Bigl( \\frac{1}{c_2 p^{a_2}} \\Bigr)^s \\quad \\text{for all $s=0,1,\\ldots,R$} \n\\end{equation}\nThe proof of this claim follows from calculations similar to those in Example~\\ref{ex:complexity} below. Assumption~1 in \\citet{castillo.schmidt.vaart.2014} implies \\eqref{eq:prior.dim}, but our restriction, $|S| \\leq R$, allows us to get rates for priors that may not satisfy \\eqref{eq:prior.dim}.\n\\end{remark}\n\n\\begin{remark}\n\\label{re:phi}\nConsider the expectation term $\\sum_{s=0}^R \\phi^s f_n(s)$. The trivial bound $\\phi^R$ could be used in the ultra high-dimensional case where $s^\\star \\log(p \/ s^\\star) \\gg R$. More generally, if $f_n$ satisfies \\eqref{eq:prior.dim}, then the formulas for partial sums of a geometric series reveal that this expectation term is bounded as $n \\to \\infty$. In fact, in all three examples discussed below, it is easy to confirm that the expectation term is bounded. Therefore, the rate is determined completely by the prior concentration around $S^\\star$. \n\\end{remark}\n \nNext we identify the rate $\\varepsilon_n$ corresponding to several choices of prior $f_n$. The complexity prior in Example~\\ref{ex:complexity}, which is simple and has good properties, will be our choice of prior in what follows; our proofs Sections~\\ref{SS:dimension}--\\ref{SS:selection} can be easily modified to cover any $f_n$ that satisfies \\eqref{eq:prior.dim}. 
\n\n\\begin{ex}\n\\label{ex:complexity}\nThe complexity prior for the model size $|S|$ in Equation~(2.3) of \\citet{castillo.schmidt.vaart.2014} is given by \n\\begin{equation}\n\\label{eq:complexity}\nf_n(s) \\propto c^{-s} p^{-as}, \\quad s=0,1,\\ldots,R, \n\\end{equation}\nwhere $a$ and $c$ are positive constants. This prior clearly satisfies the condition \\eqref{eq:prior.dim} in Remark~\\ref{re:geometric}. We claim that this complexity prior satisfies \\eqref{eq:growth} with $\\varepsilon_n = s^\\star\\log(p \/ s^\\star)$. To see this, note that $\\log f_n(s^\\star)$ is lower bounded by\n\\[ -s^\\star \\log(c s^{\\star a}) -as^\\star\\log(p \/ s^\\star) = -\\Bigl( a + \\frac{\\log c + a \\log s^\\star}{\\log(p\/s^\\star)} \\Bigr) s^\\star \\log(p \/ s^\\star). \\]\nThe ratio inside the parentheses above vanishes since $s^\\star \\ll p$. Similarly, by Stirling's formula, we have that $\\log \\binom{p}{s^\\star} \\leq s^\\star \\log(p \/ s^\\star)\\{1 + o(1)\\}$. Putting these two bounds together, and using the result in Remark~\\ref{re:phi}, we can conclude that the complexity prior above yields a posterior concentration rate $s^\\star \\log(p \/ s^\\star)$. \n\\end{ex}\n\n\\begin{ex}\n\\label{ex:beta.binom}\nConvergence rates can be obtained for other priors $f_n$. First, consider a beta--binomial prior for $|S|$, i.e., \n\\[ f_n(s) = \\int_0^1 \\binom{R}{s} w^{R-s}(1-w)^s \\, a_n w^{a_n-1} \\,dw, \\]\nwhich corresponds to a $\\mathsf{Beta}(a_n,1)$ prior for $W$ and a conditional $\\mathsf{Bin}(R,1-w)$ prior for $|S|$, given $W=w$. For $a_n = a R$, for a constant $a > 0$, it can be shown that the corresponding rate $\\varepsilon_n$ is proportional to $s^\\star \\log(p \/ s^\\star)$. If, on the other hand, $f_n$ is a $\\mathsf{Bin}(R, R^{-1})$ mass function, then similar calculations show that the concentration rate is $\\varepsilon_n = s^\\star \\log p$, which agrees with the lasso rate in \\citet{buhlmann.geer.book}, but falls short of the rates discussed previously. \n\\end{ex}\n\n\\ifthenelse{1=1}{}{\nThere are other priors whose corresponding posterior concentration rate is worse than $s^\\star \\log(p \/ s^\\star)$, if only by a little bit. The next example illustrates this. \n\n\\begin{ex}\n\\label{ex:binomial}\nLet $f_n$ be a $\\mathsf{Bin}(R,R^{-1})$ mass function. Using the same lower and upper bounds on the binomial coefficients as in the previous example, we have \n\\begin{align*}\n\\frac{f_n(s^\\star)}{\\binom{p}{s^\\star}}\n& \\geq p^{-s^\\star} e^{-s^\\star-1} \\{1 + o(1)\\}. \n\\end{align*}\nAgain, the expectation of $\\phi^{|S|}$ is bounded as $n \\to \\infty$, so we get that $\\log\\zeta_n = O(s^\\star\\log p)$. This matches the rate for the lasso quoted in Equation~(2.8) of \\citet{buhlmann.geer.book}, but falls short of the rates in the previous examples. \n\\end{ex}\n}\n\n\n\\subsection{Effective dimension}\n\\label{SS:dimension}\n\nUnder our proposed prior, the empirical Bayes posterior distribution for $\\beta$ is concentrated on an $R$-dimensional subspace of the full $p$-dimensional parameter space. In the sparse case, where the true $\\beta^\\star$ has effective dimension $s^\\star \\leq R\\ll p$, it is interesting to ask if the posterior distribution is actually concentrated on a space of dimension close to $s^\\star$. Below we give an affirmative answer to this question under some conditions. Such considerations will also be useful in Sections~\\ref{SS:other} and \\ref{SS:selection}. 
\n\nFor a given $\\Delta$, let $B_n(\\Delta) = \\{\\beta \\in \\mathbb{R}^p: |S_\\beta| \\geq \\Delta\\}$ be those $\\beta$ vectors with no less than $\\Delta$ non-zero entries. We say that the effective dimension of $\\Pi^n$ is bounded by $\\Delta=\\Delta_n$ if the expected posterior probability of $B_n(\\Delta)$ vanishes as $n \\to \\infty$. Next write \n\\[ N_n(\\Delta) = \\int_{B_n(\\Delta)} R_n(\\beta,\\beta^\\star)^\\alpha \\, \\Pi(d\\beta), \\]\nfor the numerator of the posterior probability of $B_n(\\Delta)$. \n\n\\begin{lem}\n\\label{lem:dim}\n$\\mathsf{E}_{\\beta^\\star}\\{N_n(\\Delta)\\} \\leq \\sum_{s = \\Delta}^R \\phi^s f_n(s)$ for all $\\beta^\\star$. \n\\end{lem}\n\n\\begin{proof}\nSee Appendix~\\ref{SS:proof.dim}. \n\\end{proof}\n\nWe can combine Lemma~\\ref{lem:dim} and Lemma~\\ref{lem:denominator} to conclude that\n\\[ \\mathsf{E}_{\\beta^\\star}[ \\Pi^n\\{B_n(\\Delta)\\} ] \\leq e^{cs^\\star} \\frac{\\binom{p}{s^\\star}}{f_n(s^\\star)} \\sum_{s = \\Delta}^R \\phi^s f_n(s), \\]\nuniformly in $\\beta^\\star$ with $|S_{\\beta^\\star}| = s^\\star$. Since $\\phi > 1$, we have $\\sum_s \\phi^s f_n(s) > 1$ and, therefore, \n\\begin{equation}\n\\label{eq:dim.bound}\n\\mathsf{E}_{\\beta^\\star}[ \\Pi^n\\{B_n(\\Delta)\\} ] \\leq e^{cs^\\star + \\log \\zeta_n} \\sum_{s=\\Delta}^R \\phi^s f_n(s). \n\\end{equation}\nSo, if the tail of the prior $f_n$ on the model size is sufficiently light, then the posterior probability assigned to models with complexity of order greater than $s^\\star$ will be small. Under the conditions of Theorem~\\ref{thm:mean.concentration}, we know the magnitude of $\\log\\zeta_n$, but here we need additional control on the tails of $f_n$. \n\n\\begin{thm}\n\\label{thm:dim}\nLet $s^\\star \\leq R$. If $f_n$ is of the form \\eqref{eq:complexity}, then $\\mathsf{E}_{\\beta^\\star}[\\Pi^n\\{B_n(\\Delta_n)\\}] \\to 0$, holds with $\\Delta_n = Cs^\\star$, uniformly in $\\beta^\\star$ with $|S_{\\beta^\\star}|=s^\\star$, i.e., the effective dimension $\\Pi^n$ is bounded by $Cs^\\star$. \n\\end{thm}\n\n\\begin{proof}\nRecall that, for this $f_n$, $\\log\\zeta_n$ is of the order $s^\\star \\log(p\/s^\\star)$. Moreover, for a generic $\\Delta$, the summation $\\sum_{s=\\Delta}^R \\phi^s f_n(s)$ is bounded by a partial sum of a geometric series. In particular, the bound is $O(r^{\\Delta+1})$, where $r = \\phi\/cp^a$ and $a,c$ are in \\eqref{eq:complexity}. In that case, \n\\[ r^{\\Delta+1} = e^{-(\\Delta+1)[a\\log p + \\log(c\/\\phi)]}. \\]\nSo, if $\\Delta$ is a suitable multiple of $s^\\star$, then clearly the $r^{\\Delta+1}$ term dominates the $e^{cs^\\star + \\log\\zeta_n}$ term. In particular, if $\\Delta = C s^\\star$ with $C > a^{-1}$, then the product on the right-hand side of \\eqref{eq:dim.bound} vanishes, proving the claim.\n\\end{proof}\n\n\nTo summarize, our prior is such that the posterior distribution is supported on models of size no more than $R$. However, a good prior is one such that the posterior ought to be able to learn the size of the true model that generated the data, which is possibly much less than $R$. Theorem~\\ref{thm:dim} shows that, indeed, if the prior $f_n$ on the model size has sufficiently light tails, then the posterior will concentrate on models of size proportional to $s^\\star$, the true model size. Note, also, that we cannot take a $\\tilde{s} 0. \n\\end{equation}\nConsequently, $\\kappa$ is an important quantity and will appear in Theorem~\\ref{thm:beta.concentration} below. 
One can define quantities analogous to $\\kappa$ in order to get concentration results relative to the $\\ell_1$- or $\\ell_\\infty$-norm of $\\beta$; see \\citet[][Section~2]{castillo.schmidt.vaart.2014}. \n\nThe result presented below will follow almost immediately from Theorem~\\ref{thm:mean.concentration} and the definition of $\\kappa$. Indeed, for any $\\beta$, we have\n\\begin{equation}\n\\label{eq:L2.bound}\n\\|X(\\beta-\\beta^\\star)\\|_2 \\geq \\kappa(|S_{\\beta-\\beta^\\star}|) \\, \\|\\beta-\\beta^\\star\\|_2. \n\\end{equation}\nFor example, if $\\|\\beta-\\beta^\\star\\|_2$ is lower-bounded, then so is $\\|X(\\beta-\\beta^\\star)\\|_2$, for suitable $\\kappa$, so a posterior concentration result for the $\\ell_2$-norm on $\\beta$ should follow from an analogous result for the $\\ell_2$ prediction error as in Theorem~\\ref{thm:mean.concentration}. The only obstacle is that the $\\kappa$ term on the right-hand depends on the particular $\\beta$. The following result leads to the observation that $\\kappa(|S_{\\beta-\\beta^\\star}|)$ can be controlled by a term that depends only on $s^\\star$. \n\n\\begin{lem}\n\\label{lem:kappa}\nFor any $\\beta$ and $\\beta^\\star$, $\\kappa(|S_{\\beta-\\beta^\\star}|) \\geq \\kappa(|S_\\beta| + |S_{\\beta^\\star}|)$.\n\\end{lem}\n\n\\begin{proof}\nThis follows since $\\kappa$ is non-increasing and $|S_{\\beta-\\beta^\\star}| \\leq |S_\\beta| + |S_{\\beta^\\star}|$.\n\\end{proof}\n\nUnder our prior formulation, we know that the posterior puts probability~1 on those $\\beta$ for which $|S_\\beta| \\leq R$. So, if $|S_{\\beta^\\star}| = s^\\star$, then, trivially, $\\kappa(|S_{\\beta-\\beta^\\star}|) \\geq \\kappa(R + s^\\star)$. For better control on the $\\kappa$ term in \\eqref{eq:L2.bound}, recall that Theorem~\\ref{thm:dim} says that the posterior probability of the event $\\{|S_\\beta| \\geq Cs^\\star\\}$ vanishes as $n \\to \\infty$. Therefore, for $C'=C+1$, \n\\begin{equation}\n\\label{eq:kappa.bound}\n\\kappa(|S_{\\beta-\\beta^\\star}|) \\geq \\kappa(C's^\\star) \n\\end{equation}\nholds for all $\\beta$ in a set with posterior probability approaching 1. Compare this to Theorem~1 of \\citet{castillo.schmidt.vaart.2014}, and also to the corresponding model selection results for frequentist point estimators in, e.g., \\citet[][Chap.~7]{buhlmann.geer.book}. \n\nWe are now ready for the concentration rate result with respect to the $\\ell_2$-norm loss on the parameter $\\beta$ itself. This time, set \n\\[ B_{\\delta_n}' = \\{\\beta \\in \\mathbb{R}^p: \\|\\beta-\\beta^\\star\\|_2^2 > \\delta_n\\}, \\]\nwhere $\\delta_n$ is a positive sequence to be specified. \n\n\\begin{thm}\n\\label{thm:beta.concentration}\nFor $s^\\star \\leq \\min(n, R)$, suppose that the prior $f_n$ satisfies \\eqref{eq:complexity} with exponent $a > $, so that Theorem~\\ref{thm:mean.concentration} holds with $\\varepsilon_n$ equal to $s^\\star\\log(p\/s^\\star)$ and Theorem~\\ref{thm:dim} holds with $\\Delta_n=Cs^\\star$, for $C > a^{-1}$. Then there exists a constant $M$ such that $\\mathsf{E}_{\\beta^\\star}\\{\\Pi^n(B_{M\\delta_n}')\\} \\to 0$ as $n \\to \\infty$, uniformly in $\\beta^\\star$ with $|S_{\\beta^\\star}|=s^\\star$, where \n\\[ \\delta_n = \\frac{s^\\star \\log(p \/ s^\\star)}{\\kappa(C's^\\star)^2}, \\]\nprovided that $\\kappa(C's^\\star) > 0$, where $C' = 1+C > 1 + a^{-1} > 1$. 
\n\\end{thm}\n\n\\begin{proof}\nIt follows immediately from \\eqref{eq:L2.bound} that $\\|\\beta-\\beta^\\star\\|_2^2 > M\\delta_n$ implies \n\\[ \\|X(\\beta-\\beta^\\star)\\|_2^2 > M\\kappa(|S_{\\beta-\\beta^\\star}|)^2 \\delta_n. \\]\nBy definition of $\\delta_n$ and the inequality \\eqref{eq:kappa.bound}, this last inequality implies \n\\[ \\|X(\\beta-\\beta^\\star)\\|_2^2 > M s^\\star \\log(p \/ s^\\star). \\]\nIf we take $M$ as in Theorem~\\ref{thm:mean.concentration}, then the event in the above display is exactly $B_{M\\varepsilon_n}$. We have shown that $\\Pi^n(B_{M\\delta_n}') \\leq \\Pi^n(B_{M\\varepsilon})$. By Theorem~\\ref{thm:mean.concentration}, the expectation of the upper bound vanishes uniformly in $\\beta^\\star$ as $n \\to \\infty$, so the proof is almost complete. The remaining issue to deal with is an extra term in the upper bound for $\\Pi^n(B_{M\\delta_n}')$ coming from using $\\kappa(C's^\\star)$ in place of $\\kappa(|S_{\\beta-\\beta^\\star}|)$ above. However, this extra term is $o(1)$ by Theorem~\\ref{thm:dim}, and, therefore, does not actually impact the proof.\n\\end{proof}\n\nCompare this result to the third in Theorem~2 of \\citet{castillo.schmidt.vaart.2014}. First, our rate is slightly better, $s^\\star \\log(p \/ s^\\star)$ compared to the lasso rate $s^\\star \\log p$. Second, our bound does not depend on a ``compatibility number'' \\citep[e.g.,][Definition~2.1]{castillo.schmidt.vaart.2014}, which also improves the rate and makes interpretation of our result easier. {A referee has indicated that the improved results are as a direct consequence of the $(X_S^\\top X_S)^{-1}$ term that appears in the prior for $\\beta_S$.} Also, the condition $\\kappa(C' s^\\star) > 0$, with $C' = 1 + a^{-1}$ and $a < 1$, agrees with the condition, roughly, $\\kappa\\bigl( (2 + \\varepsilon)s^\\star \\bigr) > 0$ for some $\\varepsilon > 0$, in \\citet{ariascastro.lounici.2014}; that is, just a little more than identifiability, as in \\eqref{eq:identifiable} is needed. \n\n\n\n\\subsection{Model selection}\n\\label{SS:selection}\n\nInterest here is on the model $S$ and not directly on the regression coefficients. In this case, it is convenient to work with the marginal posterior distribution for $S$ which, thanks to the simple conjugate structure in the conditional prior, we can write explicitly as \n\\begin{equation}\n\\label{eq:marginal0}\n\\pi^n(S) \\propto \\pi(S) e^{-\\frac{\\alpha}{2\\sigma^2}\\|Y-\\hat Y_S\\|^2} \\nu^{-|S|}, \n\\end{equation}\nwhere $\\nu=(\\gamma + \\alpha \/ \\sigma^2)^{1\/2}$. Then \n\\begin{equation}\n\\label{eq:model.bound}\n\\pi^n(S) \\leq \\frac{\\pi^n(S)}{\\pi^n(S^\\star)} = \\frac{\\pi(S)}{\\pi(S^\\star)} \\nu^{|S^\\star|-|S|} e^{\\frac{\\alpha}{2\\sigma^2}\\{\\|Y-\\hat Y_{S^\\star}\\|^2 - \\|Y-\\hat Y_S\\|^2\\}}. \n\\end{equation}\nFrom this bound, we can show that the posterior concentrates on models contained in $S^\\star$, i.e., asymptotically, it will not charge any models with unnecessary variables. Furthermore, this conclusion requires no conditions on the $X$ matrix or true $\\beta^\\star$. For simplicity, we will focus on the particular complexity prior $f_n$ in \\eqref{eq:complexity} shown previously to yield desirable posterior concentration properties. \n\n\\begin{thm}\n\\label{thm:selection1}\nLet the constant $a > 0$ in the complexity prior \\eqref{eq:complexity} be such that $p^a \\gg R$. Then $\\mathsf{E}_{\\beta^\\star}\\{\\Pi^n(\\beta: S_\\beta \\supset S_{\\beta^\\star})\\} \\to 0$, uniformly over $\\beta^\\star$. 
\n\\end{thm}\n\n\\begin{proof}\nSee Appendix~\\ref{proof:selection1}. \n\\end{proof}\n\nTheorem~\\ref{thm:selection1} says that, asymptotically, our empirical Bayes posterior will not include any unnecessary variables. It remains to say what it takes for the posterior to asymptotically identify all the important variables. The first condition is one on the $X$ matrix, specifically, if $s^\\star$ is the true model size, then we require $\\kappa(s^\\star) > 0$; this is implied by monotonicity of $\\kappa$ and the identifiability condition \\eqref{eq:identifiable} in Section~\\ref{SS:other}. For our second assumption, we consider the magnitudes of the non-zero entries in a $s^\\star$-sparse $\\beta^\\star$. Intuitively, we cannot hope to be able to distinguish between an actual zero and a very small non-zero, but defining what is ``very small'' requires some care. Here, we define this cutoff by \n\\begin{equation}\n\\label{eq:rho}\n\\rho_n = \\frac{\\sigma}{\\kappa(s^\\star)} \\Bigl\\{ \\frac{2M(1+\\alpha)}{\\alpha} \\, \\log p \\Bigr\\}^{1\/2}, \n\\end{equation}\nwhere $M > 0$ is a constant to be determined. In particular, coefficients of magnitude greater than $\\rho_n$ are large enough to be detected. The so-called \\emph{beta-min} condition assumes that all the non-zero coefficients are sufficiently far from zero. The cutoff $\\rho_n$ in \\eqref{eq:rho} is better than that appearing in Equation~(2.18) in \\citet{buhlmann.geer.book} for the lasso model selector but comparable to that in Theorem~1 of \\citet{ariascastro.lounici.2014} and in the third part of Theorem~5 in \\citet{castillo.schmidt.vaart.2014}, where the latter requires additional assumptions on $X$. \n\n\\begin{thm}\n\\label{thm:selection2}\nFor any $s^\\star \\leq R$, let $\\beta^\\star$ be such that $|S_{\\beta^\\star}| = s^\\star$ and \n\\[ \\min_{j \\in S^\\star} |\\beta_j^\\star| \\geq \\rho_n, \\]\nwith $M > a+1$, where $a > 0$ is in the complexity prior, with $p^a \\gg R$. Assuming the condition of Theorem~\\ref{thm:dim} holds, if $\\kappa(s^\\star) > 0$, then $\\mathsf{E}_{\\beta^\\star}\\{\\Pi^n(\\beta: S_\\beta = S_{\\beta^\\star})\\} \\to 1$. \n\\end{thm}\n\n\\begin{proof}\nSee Appendix~\\ref{proof:selection2}.\n\\end{proof}\n\n\n\n\n\\section{Numerical results}\n\\label{S:numerical}\n\n\\subsection{Implementation}\n\\label{SS:implementation}\n\nTo compute our empirical Bayes posterior distribution, we employ a Markov chain Monte Carlo method. To start, recall from \\eqref{eq:marginal0} that we can write the marginal posterior mass function, $\\pi^n(S)$, for the model $S$ can be written down explicitly, i.e., \n\\[ \\pi^n(S) \\propto \\pi(S) \\, e^{-\\frac{\\alpha}{2\\sigma^2}\\|Y - \\widehat Y_S\\|^2} \\Bigl(\\gamma + \\frac{\\alpha}{\\sigma^2} \\Bigr)^{-|S|\/2}, \\]\nwhere $\\widehat Y_S = X_S \\widehat\\beta_S$ is the least-squares prediction for model $S$. Intuitively, there are three contributing factors to the posterior distribution for $S$, namely, the prior probability of the model, a measure of how well the model fits the data, and an additional penalty on the complexity of the model. So, clearly, the posterior distribution will favor models with smaller number of variables that provide adequate fit to the observed $Y$. This provides further intuition about theorems presented in Section~\\ref{S:theory}. \n\nBesides this intuition, the formula $\\pi^n(S)$ provides a convenient way to run a Rao--Blackwellized Metropolis--Hastings method to sample from the posterior distribution of $S$. 
Indeed, if $q(S' \\mid S)$ is a proposal function, then a single iteration of our proposed Metropolis--Hastings sampler goes as follows:\n\\begin{enumerate}\n\\item Given a current state $S$, sample $S' \\sim q(\\cdot \\mid S)$.\n\\vspace{-2mm}\n\\item Move to the new state $S'$ with probability \n\\[ \\min\\Bigl\\{1, \\frac{\\pi^n(S')}{\\pi^n(S)} \\frac{q(S \\mid S')}{q(S' \\mid S)} \\Bigr\\}; \\]\notherwise, stay at state $S$.\n\\end{enumerate}\nRepeating this process $M$ times, we obtain a sample of models $S_1,\\ldots,S_M$ from the posterior $\\pi^n(S)$. Monte Carlo approximations of, say, the inclusion probabilities (Section~\\ref{SS:simulations}) of individual variables can then easily be computed based on this sample. In our case, we use a symmetric proposal distribution $q(S' \\mid S)$, i.e., one that samples $S'$ uniformly from those models that differ from $S$ in exactly one position, which simplifies the acceptance probability above since the $q$-ratio is identically 1. Also, we initialize our Markov chain Monte Carlo search at the model selected by lasso. \n\nTo implement this procedure, some additional tuning parameters need to be specified. First, recall that $(\\alpha,\\gamma) = (1,0)$ corresponds to the genuine Bayes model with a flat prior for $\\beta_S$. Our theory does not cover this case, but we can mimic it by picking something close. Here we consider $\\alpha=0.999$ and $\\gamma=0.001$; in our experience, the performance is not sensitive to the choice of $(\\alpha,\\gamma)$ in a neighborhood of $(0.999, 0.001)$. Second, for the prior on the model size, we employ the complexity prior \\eqref{eq:complexity} with $c=1$ and $a=0.05$, i.e., $f_n(s) \\propto p^{-0.05 s}$. The choice of small $a$ makes the prior sufficiently spread out, allowing the posterior to move across the model space and, in particular, helping the Markov chain for $S$ to mix reasonably well. Third, in practice, the error variance $\\sigma^2$ is seldom known, so some procedure to handle unknown $\\sigma^2$ is needed. We proposed to modify our empirical Bayes posterior by plugging in an estimate of $\\sigma^2$. In particular, we use a residual mean square error based on a lasso fit \\citep{reid.tibshirani.friedman.2014}. \n\n\nFinally, if samples from the $\\beta$ posterior are desired, then these can easily be obtained, via conjugacy, after a sample of $S$ is available. In particular, the conditional posterior distribution for $\\beta_S$, given $S$, is normal with mean $\\hat\\beta_S$ and variance $(\\gamma + \\frac{\\alpha}{\\sigma^2})^{-1} (X_S^\\top X_S)^{-1}$. R code to implement our procedure is available at \\url{www.math.uic.edu\/~rgmartin}. \n\n\n\n\n\\subsection{Simulations}\n\\label{SS:simulations}\n\nIn this section, we reconsider some of the simulation experiments performed by \\citet{narisetty.he.2014}, which are related to experiments presented in \\citet{johnson.rossell.2012}. In each setting, the error variance is $\\sigma^2=1$; the covariate matrix is obtained by sampling from a multivariate normal distribution with zero mean, unit variance, and constant pairwise correlation $\\rho=0.25$; and the true model $S^\\star$ has $s^\\star=5$. The particular correlation structure among the covariates is given practical justification in \\citet{johnson.rossell.2012}. Under this setup, we consider three different settings:\n\\begin{description}\n\\item[\\it Setting~1.] $n=100$, $p=500$, and $\\beta_{S^\\star}=(0.6, 1.2, 1.8, 2.4, 3.0)^\\top$;\n\\vspace{-2mm}\n\\item[\\it Setting~2.] 
$n=200$, $p=1000$, and $\beta_{S^\star}$ the same as in Setting~1;\n\vspace{-2mm} \n\item[\it Setting~3.] $n=100$, $p=500$, and $\beta_{S^\star}=(0.6, 0.6, 0.6, 0.6, 0.6)^\top$.\n\end{description}\nOur Settings~1--2 correspond to the two $(n,p)$ configurations in Case~2 of \citet{narisetty.he.2014} and our Setting~3 is the same as their Case~3. \n\nWe carry out model selection by retaining those variables whose inclusion probability $p_j = \Pi^n(\beta_j \neq 0)$, $j=1,\ldots,p$, exceeds 0.5; this is the so-called median probability model, shown to be optimal, in a certain sense, by \citet{barbieri.berger.2004}. Alternatively, one could select the model with the largest posterior probability, but this is more expensive computationally compared to the median probability model---only $p$ inclusion probabilities are needed instead of up to $2^p$ model probabilities. In all cases, the posterior almost immediately concentrates on the true model. Our Markov chain Monte Carlo required only 5000 iterations to reach convergence, which took only a few seconds on an ordinary laptop computer: about 10 seconds for Setting~1 and about 25 seconds for Setting~2. \n\nTo summarize the performance, we consider five different measures. First, we consider the mean inclusion probability for those variables in and out of the active set $S^\star$, respectively, i.e., \n\[ \bar p_1 = \frac{1}{s^\star} \sum_{j \in S^\star} p_j \quad \text{and} \quad \bar p_0 = \frac{1}{p-s^\star} \sum_{j \not\in S^\star} p_j. \]\nWe expect the former to be close to 1 and the latter to be close to 0. Next, we consider the probability that the model selected by our empirical Bayes method, denoted by $\hat S$, is equal to or contains the true model $S^\star$. Finally, we also compute the false discovery rate of our selection procedure. A summary of these quantities for our empirical Bayes method, denoted by \emph{EB}, across the three settings is given in Tables~\ref{table:setting1}--\ref{table:setting3}. \n\nFor comparison, we consider those methods discussed in \citet{narisetty.he.2014}, including their two Bayesian methods, denoted by BASAD and BASAD.BIC. Two other Bayesian methods considered are the credible region approach of \citet{bondell.reich.2012}, denoted by BCR.Joint, and the spike-and-slab method of \citet{ishwaran.rao.2005.jasa, ishwaran.rao.2005.aos}, denoted by SpikeSlab. We also consider three penalized likelihood methods, all tuned with BIC, namely, the lasso \citep{tibshirani1996}, the elastic net \citep{zou.hastie.2005}, and the smoothly clipped absolute deviation \citep{fanli2001}, denoted by Lasso.BIC, EN.BIC, and SCAD.BIC, respectively. The results for these methods are taken from Tables~2--3 in \citet{narisetty.he.2014}, which were obtained based on 200 samples from the models described in Settings~1--3 above. \n\nOur selection method based on the empirical Bayes posterior is overall the best among those being compared in terms of selecting the true model and false discovery rate. In addition to the strong finite-sample performance of our model selection procedure, our theory is arguably stronger than that available for the other methods in this comparison. Take, for example, the BASAD method of \citet{narisetty.he.2014}, the next-best performer in the simulation study. Their method produces a posterior distribution for $\beta$ but, since their prior has no point mass, this posterior cannot concentrate on a lower-dimensional subspace of $\mathbb{R}^p$. 
So, it is not clear if their posterior distribution for $\\beta$ can attain the minimax concentration rate without tuning the prior using knowledge about the underlying sparsity level. \n\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{lccccc}\n\\hline\nMethod & $\\bar p_0$ & $\\bar p_1$ & $\\mathsf{P}(\\hat S = S^\\star)$ & $\\mathsf{P}(\\hat S \\supseteq S^\\star)$ & FDR \\\\\n\\hline\nBASAD & 0.001 & 0.948 & 0.730 & 0.775 & 0.011 \\\\\nBASAD.BIC & 0.001 & 0.948 & 0.190 & 0.915 & 0.146 \\\\\nBCR.Joint & & & 0.070 & 0.305 & 0.268 \\\\\nSpikeSlab & & & 0.000 & 0.040 & 0.626 \\\\\nLasso.BIC & & & 0.005 & 0.845 & 0.466 \\\\\nEN.BIC & & & 0.135 & 0.835 & 0.283 \\\\\nSCAD.BIC & & & 0.045 & 0.980 & 0.328 \\\\\n$EB$ & 0.002 & 0.959 & 0.680 & 0.795 & 0.051 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Simulation results for Setting~1. First seven rows taken from Table~2 (top) in \\citet{narisetty.he.2014}; the \\emph{EB} row corresponds to our empirical Bayes procedure.}\n\\label{table:setting1}\n\\end{table}\n\n\n\\begin{table}[t]\n\\begin{center}\n{\n\\begin{tabular}{lccccc}\n\\hline\nMethod & $\\bar p_0$ & $\\bar p_1$ & $\\mathsf{P}(\\hat S = S^\\star)$ & $\\mathsf{P}(\\hat S \\supseteq S^\\star)$ & FDR \\\\\n\\hline\nBASAD & 0.000 & 0.986 & 0.930 & 0.950 & 0.000 \\\\\nBASAD.BIC & 0.000 & 0.986 & 0.720 & 0.990 & 0.046 \\\\\nBCR.Joint & & & 0.090 & 0.250 & 0.176 \\\\\nSpikeSlab & & & 0.000 & 0.050 & 0.574 \\\\\nLasso.BIC & & & 0.020 & 1.000 & 0.430 \\\\\nEN.BIC & & & 0.325 & 1.000 & 0.177 \\\\\nSCAD.BIC & & & 0.650 & 1.000 & 0.091 \\\\\n$EB$ & 0.000 & 0.998 & 0.945 & 0.990 & 0.015 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\caption{Simulation results for Setting~2. First seven rows taken from Table~2 (bottom) in \\citet{narisetty.he.2014}; the \\emph{EB} row corresponds to our empirical Bayes procedure.}\n\\label{table:setting2}\n\\end{table}\n\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{lccccc}\n\\hline\nMethod & $\\bar p_0$ & $\\bar p_1$ & $\\mathsf{P}(\\hat S = S^\\star)$ & $\\mathsf{P}(\\hat S \\supseteq S^\\star)$ & FDR \\\\\n\\hline\nBASAD & 0.002 & 0.622 & 0.185 & 0.195 & 0.066 \\\\\nBASAD.BIC & 0.002 & 0.622 & 0.160 & 0.375 & 0.193 \\\\\nBCR.Joint & & & 0.030 & 0.315 & 0.447 \\\\\nSpikeSlab & & & 0.000 & 0.000 & 0.857 \\\\\nLasso.BIC & & & 0.000 & 0.520 & 0.561\\\\\nEN.BIC & & & 0.040 & 0.345 & 0.478 \\\\\nSCAD.BIC & & & 0.045 & 0.340 & 0.464 \\\\\n$EB$ & 0.003 & 0.811 & 0.305 & 0.350 & 0.092 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Simulation results for Setting~3. 
First seven rows taken from Table~3 in \citet{narisetty.he.2014}; the \emph{EB} row corresponds to our empirical Bayes procedure.}\n\label{table:setting3}\n\end{table}\n\n\n\n\section{Discussion}\n\label{S:discuss}\n\nWe have presented an empirical Bayes model for the sparse high-dimensional regression problem. Though the proposed approach has some unusual features, such as a data-dependent prior, we characterize the posterior concentration rate, which agrees with the optimal minimax rate in some cases. To our knowledge, this is the only available minimax concentration rate result for a full posterior distribution in the sparse high-dimensional linear model. Moreover, our formulation allows for relatively simple posterior computation, via Markov chain Monte Carlo, and simulation studies show that model selection by thresholding the posterior inclusion probabilities outperforms a variety of existing methods. \n\n\nThe general strategy proposed here goes as follows. Suppose we have a high-dimensional parameter $\theta$, and different models $S$ identify a set of parameters $\theta_S$. Suppose further that $\theta$ is sparse in the sense that only a few of its entries are non-null. Then an empirical Bayes model is obtained by specifying a prior for $(S,\theta_S)$ as $\pi(S) \pi(d\theta_S \mid S)$, where $\pi(d\theta_S \mid S)$ would be allowed to depend on data through, say, the maximum likelihood estimator $\hat\theta_S$ of $\theta_S$. Intuitively, the idea is to center the conditional prior on a data-dependent point, say $\hat\theta_S$, and then use the fractional likelihood to prevent the posterior from tracking the data too closely. 
We believe this is a general tool that can be used in high-dimensional problems, and one possible application of this approach, which we plan to explore, is a mixture model where $S$ represents the number of mixture components, and $\\theta_S$ is the set of parameters associated with an $S$-component mixture. \n \n\n\n\\section*{Acknowledgements}\n\nThe authors are grateful for the valuable comments provided by the Editor, Associate Editor, and three anonymous referees. This work is partially supported by the U.~S.~National Science Foundation, grants DMS--1506879 and DMS--1507073.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCoupled surface and grain boundary motion is an important\nphenomenon controlling grain growth in materials processing\nand synthesis. A commonly used model to study this coupled effect\nis the ``quarter loop\" geometry introduced by Dunn et\nal.~\\cite{dunn}.\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 12cm, height = 5.1cm, clip]{quarterlp.eps}\n\\caption{The quarter loop bicrystal geometry.} \\label{fig:fig1}\n\\end{center}\n\\end{figure}\n\nIn the quarter loop geometry, there are two grains between which\nthere is an interface called the grain boundary, as shown in\nFig.\\ref{fig:fig1}. The two grains are of the same material and\ndiffer only in their relative crystalline orientation. The grain\nboundary runs parallel to a free surface before it turns up and\nattaches to the upper surface at a groove root. When heated to a\nspecific temperature, the grain boundary migrates to reduce the\nsurface energy and to heal the orientation mismatch. Since the\ndriving force is constant, the grain boundary moves at a constant\nvelocity after a short initial transient. It is reasonable to assume that\nthe bicrystal is uniform along the cross-section direction, so we\nconsider only the two-dimensional (2D) geometry in this paper.\n\nThis geometry involves two types of motion: mean curvature motion\nfor the grain boundary, and surface diffusion for the upper\nsurfaces. More detail about this model is given in~\\cite{amy}.\n\nMotion by mean curvature is an evolution law in which the normal\nvelocity of an interface is proportional to its mean curvature.\nMore precisely, the motion of an interface $\\Gamma$ satisfies\n\\begin{equation}\nV_{c}=A\\kappa\n\\end{equation}\nHere $V_c$ denotes the velocity in the normal direction of\n$\\Gamma$, and $\\kappa$ stands for the mean curvature of $\\Gamma$.\n\nFirst proposed by Mullins~\\cite{mullins} to model the curvature\ndriven diffusion on the surface of a crystal, surface diffusion is\na different evolution law in which the normal velocity of an\ninterface is proportional to the surface Laplacian of mean\ncurvature. The motion of the interface $\\Gamma$ satisfies\n\\begin{equation}\\label{eq:q1}\nV_{d}=-B\\Delta_s \\kappa\n\\end{equation}\nHere $V_{d}$ stands for the normal velocity and $\\Delta_s$ stands\nfor the surface Laplacian operator, defined as\n\\begin{equation}\n\\Delta_s=\\nabla_s\\cdot\\nabla_s \\textrm{ where }\n\\nabla_s=\\nabla-n\\partial_n\n\\end{equation}\nIn two dimensions, surface diffusion reduces to a normal\ndirection motion with a speed function depending on the second\nderivative of the curvature with respect to arc length, i.e.,\n\\begin{equation}\\label{eq:surdiff}\nV_{d}=-B\\kappa_{ss}\n\\end{equation}\nHere $s$ is the arc length parametrization. 
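\n\nAs a rough illustration of these two laws, the sketch below approximates $V_c=A\\kappa$ and $V_d=-B\\kappa_{ss}$ for a closed curve sampled at uniformly spaced parameter values, using periodic centered differences; this is only a schematic illustration and is not the discretization developed later in the paper.\n\\begin{verbatim}\nimport numpy as np\n\ndef normal_velocities(x, y, A=1.0, B=1.0):\n    # V_c = A*kappa and V_d = -B*kappa_ss for a closed curve sampled at\n    # uniformly spaced parameter values (periodic centered differences).\n    d = lambda f: 0.5 * (np.roll(f, -1) - np.roll(f, 1))\n    xs, ys = d(x), d(y)                      # X_sigma\n    xss, yss = d(xs), d(ys)                  # X_sigma_sigma\n    g = np.hypot(xs, ys)                     # |X_sigma|\n    kappa = (xs * yss - ys * xss) / g**3     # signed curvature\n    kappa_s = d(kappa) / g                   # d kappa / ds\n    kappa_ss = d(kappa_s) / g                # d^2 kappa / ds^2\n    return A * kappa, -B * kappa_ss\n\\end{verbatim}\nFor a circle of radius $r$ this returns $V_c\\approx \\pm A/r$ (the sign depending on the orientation of the parametrization) and $V_d\\approx 0$, as expected.\n\n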
Since the problem is posed in two dimensions, motion by surface diffusion always refers to equation (\\ref{eq:surdiff}) rather than the general case (\\ref{eq:q1}) in this paper.\n\nWe shall prove later in the appendix that we can normalize $A$ and $B$ by rescaling time and space. Since this reformulation makes the problem neither harder nor easier, we will take both $A$ and $B$ to be one.\n\nSurface diffusion is an intrinsically difficult problem to solve numerically, even in two dimensions. The main difficulty is that it is stiff due to the fourth order derivatives, so an explicit time stepping strategy requires very small time steps. Moreover, owing to the lack of a maximum principle, an embedded curve may not stay embedded; in other words, it may self-intersect during the evolution.\n\n\n\nTravelling wave solutions have been derived for the whole nonlinear problem and for a linearized problem in~\\cite{amy2} and~\\cite{amy} respectively. These analytic results are used to verify the numerical results in this paper.\n\nThe formulations in this paper will be proposed in parameterized form. There are two reasons why we prefer the parameterized form. Firstly, we try to set up a versatile formulation that is extensible to other problems, such as the evolution of a closed curve, other freely positioned triple junction problems, and even the evolution of curve networks, whether for a single type of motion or a mixed type of motion. Secondly, even for the coupled grain boundary motion, the function $y(x)$ which represents the exterior surface may not be single-valued, as shown in Fig.\\ref{fig:exception}, and this phenomenon is physically reasonable~\\cite{amy-1,amy0}. For this reason we will treat the exterior surface as two curves separated by the triple junction in the following discussion.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 6cm, height = 4cm, clip]{exception.eps}\n\\caption{An example when one of the surfaces is not single-valued.} \\label{fig:exception}\n\\end{center}\n\\end{figure}\n\nThe outline of this paper is as follows. In section \\ref{se:2}, parabolic equations are derived for the motion by mean curvature and the motion by surface diffusion separately. The boundary conditions, including the triple junction conditions and domain boundary conditions, are discussed in section \\ref{se:3}, referring to the analytic work of Novick-Cohen et al.~\\cite{amy,amy2} and Wong et al.~\\cite{Donhong}. In section \\ref{se:4}, the well-posedness of a linear parabolic system that is closely related to the full nonlinear problem is analyzed, followed by a discussion of the artificial tangential condition. From section \\ref{se:5} to \\ref{se:tangential} we discuss the numerical details, including discretization, time stepping and some other numerical issues. We introduce a PDAE system in section \\ref{se:8}. An interesting computational example of surface diffusion is given in section \\ref{se:star}. 
The linear\nwell-posedness of the PDAE system is analyzed in section\n\\ref{se:9}.\n\n\\section{Cartesian Formulation}\nIn this section, we consider the problem in the cartesian\ncoordinate system which is give as below(see~\\cite{amy}).\n\\begin{eqnarray}\\label{eq:car}\ny_t&=&-\\Big[\\frac{1}{(1+y_x^2)^{1\/2}}\\Big[\\frac{y_{xx}}{(1+y_x^2)^{3\/2}}\\Big]_x\\Big]_x,\\quad\nt>0,x\\in(-\\infty,s(t)^-)\\cup(s(t)^+,\\infty),\\nonumber\\\\\nu_t&=&u_{xx}(1+u_x^2)^{-1},\\quad t>0,x>s(t)\n\\end{eqnarray}\nwith following triple junction conditions\n\\begin{eqnarray}\\label{eq:carcon}\n&&y(s(t)^+,t)=y(s(t)^-,t)=u(s(t)^+,t),\\quad t>0\\nonumber\\\\\n&&\\arctan(y_x(s(t)^+,t))-\\arctan(y_x(s(t)^-,t))=2\\arcsin(\\frac{\\gamma_{grain\n}}{2\\gamma_{exterior}})\\nonumber\\\\\n&&\\arctan(u_x(s(t)^+,t))=-\\frac{\\pi}{2}+\\frac{1}{2}[\\arctan(y_x(s(t)^+,t))+\\arctan(y_x(s(t)^-,t))],\\,t>0\\\\\n&&\\frac{y_{xx}}{(1+y_x^2)^{3\/2}}\\Big|_{(s(t)^+,t)}=\\frac{y_{xx}}{(1+y_x^2)^{3\/2}}\\Big|_{(s(t)^-,t)},\\,t>0\\nonumber\\\\\n&&\\Big[\\frac{1}{(1+y_x^2)^{1\/2}}\\Big[\\frac{y_{xx}}{(1+y_x^2)^{3\/2}}\\Big]_x\\Big]_{(s(t)^+,t)}=\\Big[\\frac{1}{(1+y_x^2)^{1\/2}}\\Big[\\frac{y_{xx}}{(1+y_x^2)^{3\/2}}\\Big]_x\\Big]_{(s(t)^-,t)},\\,t>0\\nonumber\\\\\n&&y(+\\infty,t)=y(-\\infty,t)=0,\\, t>0\\nonumber\\\\\n&&u(+\\infty,t)=-1,\\,t>0\\nonumber\n\\end{eqnarray}\nHere $y=y(x,t)$ stands for the height of the two exterior\nsurfaces. $u=u(x,t)$ is the height of the grain boundary and\n$s(t)$ denotes the location of the junction where the three\nsurfaces meet.\n\nSince the junction is moving it is not straightforward to solve\nthis system numerically. We fix the junction by making the\nfollowing transform,\n\\begin{eqnarray}\n\\bar{x}=x-s(t)\n\\end{eqnarray}\nWe let $y(\\bar{x},t)=y(x,t)$ and $u(\\bar{x},t)=u(x,t)$. 
Therefor,\n\\begin{eqnarray*}\ny_x(x,t)&=&y_{\\bar{x}}(\\bar{x},t)\\\\\ny_t(x,t)&=&y_t(\\bar{x},t)-y_{\\bar{x}}(\\bar{x},t)s_t\\\\\nu_x(x,t)&=&u_{\\bar{x}}(\\bar{x},t)\\\\\nu_t(x,t)&=&u_t(\\bar{x},t)-u_{\\bar{x}}(\\bar{x},t)s_t\n\\end{eqnarray*}\nThen system (\\ref{eq:car}) becomes\n\\begin{eqnarray}\\label{eq:car2}\ny_t&=&-\\Big[\\frac{1}{(1+y^2_{\\bar{x}})^{1\/2}}\\Big[\\frac{y_{\\bar{x}\\bar{x}}}{(1+y_{\\bar{x}}^2)^{3\/2}}\\Big]_{\\bar{x}}\\Big]_{\\bar{x}}+y_{\\bar{x}}(\\bar{x},t)s_t,\\quad\nt>0,\\bar{x}\\in(-\\infty,0)\\cup(0,\\infty),\\nonumber\\\\\nu_t&=&u_{\\bar{x}\\bar{x}}(1+u_{\\bar{x}}^2)^{-1}+y_{\\bar{x}}(\\bar{x},t)s_t,\\quad\nt>0,\\bar{x}>0\n\\end{eqnarray}\nAnd the boundary conditions (\\ref{eq:carcon}) become\n\\begin{eqnarray}\\label{eq:carcon2}\n&&y(0^+,t)=y(0^-,t)=u(0^+,t),\\quad t>0\\nonumber\\\\\n&&\\arctan(y_{\\bar{x}}(0^+,t))-\\arctan(y_{\\bar{x}}(0^-,t))=2\\arcsin(\\frac{\\gamma_{grain\n}}{2\\gamma_{exterior}})\\nonumber\\\\\n&&\\arctan(u_{\\bar{x}}(0^+,t))=-\\frac{\\pi}{2}+\\frac{1}{2}[\\arctan(y_{\\bar{x}}(0^+,t))+\\arctan(y_{\\bar{x}}(0^-,t))],\\,t>0\\\\\n&&\\frac{y_{{\\bar{x}}{\\bar{x}}}}{(1+y_{\\bar{x}}^2)^{3\/2}}\\Big|_{(0^+,t)}=\\frac{y_{{\\bar{x}}{\\bar{x}}}}{(1+y_{\\bar{x}}^2)^{3\/2}}\\Big|_{(0^-,t)},\\,t>0\\nonumber\\\\\n&&\\Big[\\frac{1}{(1+y_{\\bar{x}}^2)^{1\/2}}\\Big[\\frac{y_{{\\bar{x}}{\\bar{x}}}}{(1+y_{\\bar{x}}^2)^{3\/2}}\\Big]_{\\bar{x}}\\Big]_{(0^+,t)}=\\Big[\\frac{1}{(1+y_{\\bar{x}}^2)^{1\/2}}\\Big[\\frac{y_{{\\bar{x}}{\\bar{x}}}}{(1+y_{\\bar{x}}^2)^{3\/2}}\\Big]_{\\bar{x}}\\Big]_{(0^-,t)},\\,t>0\\nonumber\\\\\n&&y(+\\infty,t)=y(-\\infty,t)=0,\\, t>0\\nonumber\\\\\n&&u(+\\infty,t)=-1,\\,t>0\\nonumber\n\\end{eqnarray}\n\n\nThis system could easily be discretized using standard finite\ndifference schemes on a fixed staggered grid. A numerical result\nis shown in Fig.\\ref{fig:car}.\n\\begin{figure\n\\begin{center}\n\\includegraphics[width = 7cm, height = 6cm, clip]{cartesian.eps}\n\\caption{Numerical result for system\n(\\ref{eq:car2})-(\\ref{eq:carcon2}) with $m=0.5$. Dotted line:\nnumerical result; Solid line: travelling wave solution.}\n\\label{fig:car}\n\\end{center}\n\\end{figure}\n\nThe disadvantage of using cartesian formulation is that it is not\napplicable to non-single valued case as shown in\nFig.\\ref{fig:exception}. What's more, since the grain boundary is\nnearly singular at the junction, it requires very small grid size\nfor accuracy. For wider application we consider two parametric\nformulations in the rest of this paper.\n\n\n\\section{A Parabolic Formulation}\\label{se:2}\nIn this section we derive a parabolic system to model the coupled\nmotion. Here and throughout this paper we use\n$X=(u(\\cdot),v(\\cdot))$ to represent a parameterized curve with\n$u(\\cdot)$ and $v(\\cdot)$ being the coordinates.\n\nSeveral more notations should be introduced as well. The arc\nlength parameter is denoted by $X(s)=X(u(s),v(s))$ and any other\nparameter is denoted by $X(\\sigma)=X(u(\\sigma),v(\\sigma))$.\n$\\vec{t}$ and $\\vec{n}$ stand for unit tangential direction and\nunit normal direction respectively. $\\kappa$ stands for curvature.\nAlthough all final equations are parameterized by $\\sigma$, the\narc length parametrization is useful for the intermediate\ndeviations.\n\\subsection{Motion by Mean Curvature}\nWe first derive a parabolic equation to describe the motion by\nmean curvature. Similar discussion has been addressed\nin~\\cite{brian} and~\\cite{garcke}. 
We give a brief description for the reader's convenience.\n\n\nWith the notations introduced above one has\n\\begin{eqnarray*}\nX_s=\\vec{t}\\nonumber\n\\end{eqnarray*}\n\\begin{eqnarray*}\nX_{ss}=\\kappa\\vec{n}\n\\end{eqnarray*}\nHere the subscript $s$ stands for the derivative of $X$ with respect to arc length $s$. Direct computation shows that\n\\begin{equation}\\label{eq:8}\nX_{\\sigma}=X_s\\frac{dS(\\sigma)}{d\\sigma}=X_s|X_\\sigma|\n\\end{equation}\nwhere $S(\\sigma)$ is defined by\n\\begin{eqnarray}\\label{eq:9}\nS(\\sigma)=\\int_{\\sigma_0}^{\\sigma}\\sqrt{u_{\\sigma}^2+v_{\\sigma}^2}d\\sigma\n\\end{eqnarray}\nwhich stands for the length of the curve from point $X(\\sigma_0)$ to $X(\\sigma)$. $|X_\\sigma|$ is the $L_2$ norm of $X_\\sigma$, defined by\n\\begin{eqnarray*}\n|X_\\sigma|=\\sqrt{u_\\sigma^2+v_\\sigma^2}\n\\end{eqnarray*}\nDifferentiate equation (\\ref{eq:8}) with respect to $\\sigma$ to obtain\n\\begin{equation}\nX_{\\sigma\\sigma}=X_{ss}|X_\\sigma|^2+X_s|X_\\sigma|_s|X_\\sigma|\n\\end{equation}\nBy the previous derivations one can compute the normal component of the vector $\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}$ and obtain\n\\begin{eqnarray}\\label{eq:normal}\n\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}\\cdot\\vec{n}&=&X_{ss}\\cdot\n\\frac{X_{ss}}{\\kappa}+\\frac{|X_\\sigma|_s}{|X_\\sigma|}X_s\\cdot\\frac{X_{ss}}{\\kappa}\\nonumber\\\\\n&=& \\kappa\n\\end{eqnarray}\nThus, if we set up the formulation\n\\begin{equation}\\label{eq:curvature1}\nX_t=\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}\n\\end{equation}\nit is obvious that the motion described by (\\ref{eq:curvature1}) has normal velocity $\\kappa$. This gives us an option to describe the motion by mean curvature. Equation (\\ref{eq:curvature1}) is fully parabolic, which means it is parabolic in both the normal and tangential components.\n\nThere are some other equations that can also describe curvature motion, for example,\n\\begin{eqnarray}\\label{eq:alternate}\nX_t=\\kappa \\vec{n}\n\\end{eqnarray}\nBut this system is not fully parabolic. A linearization shows that this system is parabolic in the normal component and hyperbolic in the tangential component. A discretization of system (\\ref{eq:alternate}) will not have numerical properties as good as those of a fully parabolic system, due to the lack of regularity in the parametrization, as shown in~\\cite{brian}.\n\n\\subsection{Motion by Surface Diffusion} Considering the good properties of a parabolic formulation, we hope to find a parabolic formulation for motion by surface diffusion. 
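\n\nAs a brief aside before deriving the surface diffusion analogue, formulation (\\ref{eq:curvature1}) is simple to test numerically; the sketch below evolves a closed curve with explicit Euler time stepping (an illustrative toy with an arbitrarily chosen step size, not the scheme used later for the coupled problem).\n\\begin{verbatim}\nimport numpy as np\n\ndef mean_curvature_step(x, y, dt):\n    # One explicit Euler step of X_t = X_{sigma sigma} / |X_sigma|^2 on a\n    # closed curve with uniform parameter spacing (toy illustration only).\n    d1 = lambda f: 0.5 * (np.roll(f, -1) - np.roll(f, 1))\n    d2 = lambda f: np.roll(f, -1) - 2.0 * f + np.roll(f, 1)\n    g2 = d1(x)**2 + d1(y)**2                 # |X_sigma|^2\n    return x + dt * d2(x) / g2, y + dt * d2(y) / g2\n\n# a unit circle should shrink with radius close to sqrt(1 - 2t)\ntheta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)\nx, y = np.cos(theta), np.sin(theta)\nfor _ in range(2000):\n    x, y = mean_curvature_step(x, y, dt=1.0e-4)\n\\end{verbatim}\nAfter these $2000$ steps ($t=0.2$) the radius should be close to $\\sqrt{1-2t}\\approx 0.775$, which provides a quick consistency check of the formulation.\n\n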
By analogy with the\napproach to the mean curvature motion described above, we try the\nfollowing form:\n\\begin{equation}\\label{eq:14}\nX_t=-\\frac{X_{\\sigma\\sigma\\sigma\\sigma}}{|X_\\sigma|^4}+\nL(X_{\\sigma\\sigma\\sigma},X_{\\sigma\\sigma},X_{\\sigma})\n\\end{equation}\nwhere $L(X_{\\sigma\\sigma\\sigma},X_{\\sigma\\sigma},X_{\\sigma})$\nincludes some lower order terms and will be determined such that\n\\begin{eqnarray}\\label{eq:145}\nX_t\\cdot \\vec\n{n}=(-\\frac{X_{\\sigma\\sigma\\sigma\\sigma}}{|X_\\sigma|^4}+\nL(X_{\\sigma\\sigma\\sigma},X_{\\sigma\\sigma},X_{\\sigma}))\\cdot\n\\vec{n}=-\\kappa_{ss}\n\\end{eqnarray}\n\nWe focus our study on finding out\n$L(X_{\\sigma\\sigma\\sigma},X_{\\sigma\\sigma},X_{\\sigma})$ in the\nrest of this section.\n\nNote first the following equation (see appendix for proof),\n\\begin{equation}\\label{eq:ksss}\n(X_{ssss}+\\kappa^2X_{ss})\\cdot \\vec{n}=\\kappa_{ss}\n\\end{equation}\nCompare equation (\\ref{eq:145}) and (\\ref{eq:ksss}) to get a\nchoice for $L$,\n\\begin{equation}\\label{eq:newl}\nL(X_{\\sigma\\sigma\\sigma},X_{\\sigma\\sigma},X_{\\sigma})=\\frac{X_{\\sigma\\sigma\\sigma\\sigma}}{|X_\\sigma|^4}-X_{ssss}\n-\\kappa^2 X_{ss}\n\\end{equation}\nOne will find later that the fourth order terms appeared in\n(\\ref{eq:newl}) could be cancelled with each other and such that\n$L$ involves only third or lower order derivatives.\n\nStart by equation (\\ref{eq:8}) and differentiate several times\nwith respect to $\\sigma$ to get following relations,\n\\begin{equation}\\label{eq:17}\nX_{\\sigma\\sigma}=X_{ss}S_\\sigma^2+X_sS_{\\sigma\\sigma}\n\\end{equation}\n\\begin{equation}\\label{eq:15}\nX_{\\sigma\\sigma\\sigma}=X_{sss}S_\\sigma^3+3X_{ss}S_\\sigma\nS_{\\sigma\\sigma}+X_sS_{\\sigma\\sigma\\sigma}\n\\end{equation}\n\\begin{equation}\\label{eq:16}\nX_{\\sigma\\sigma\\sigma\\sigma}=X_{ssss}S_\\sigma^4+6X_{sss}S_\\sigma^2S_{\\sigma\\sigma}+4X_{ss}S_\\sigma\nS_{\\sigma\\sigma\\sigma}+3X_{ss}S_{\\sigma\\sigma}^2+X_sS_{\\sigma\\sigma\\sigma\\sigma}\n\\end{equation}\nDividing through equation (\\ref{eq:16}) by $S_\\sigma^4$ and\nnoticing the fact that $|X_\\sigma|=S_\\sigma$ one obtains\n\\begin{equation}\\label{eq:116}\n\\frac{X_{\\sigma\\sigma\\sigma\\sigma}}{|X_\\sigma|^4}=X_{ssss}+6\\frac{S_{\\sigma\\sigma}}{S_\\sigma^2}X_{sss}\n+4\\frac{S_{\\sigma\\sigma\\sigma}}{S_\\sigma^3}X_{ss}+3\\frac{S_{\\sigma\\sigma}^2}{S_\\sigma^4}X_{ss}\n+\\frac{S_{\\sigma\\sigma\\sigma\\sigma}}{S_\\sigma^4}X_s\n\\end{equation}\nSubstitute equation (\\ref{eq:116}) into (\\ref{eq:newl}), rewrite\narc length parametrization $s$ into $\\sigma$ using\n(\\ref{eq:17})-(\\ref{eq:15}). 
Since vector $X_s$ is perpendicular\nto $\\vec{n}$ and has no contribution to normal direction we can\nignore all $X_s$ terms and obtain\n\\begin{equation}\\label{eq:F}\nL(X_{\\sigma\\sigma\\sigma},X_{\\sigma\\sigma},X_{\\sigma})=\n6\\frac{S_{\\sigma\\sigma}}{S_\\sigma^2}\\frac{X_{\\sigma\\sigma\\sigma}}{|X_\\sigma|^3}-\n15\\frac{S_{\\sigma\\sigma}^2}{S_\\sigma^4}\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}+\n4\\frac{S_{\\sigma\\sigma\\sigma}}{S_\\sigma^3}\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}\n-\\kappa^2\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}\n\\end{equation}\n\nSubstitute equation (\\ref{eq:F}) back into (\\ref{eq:14}) and\ncollect to get the scheme as\n\\begin{equation}\\label{eq:scheme}\nX_t=-\\frac{X_{\\sigma\\sigma\\sigma\\sigma}}{|X_\\sigma|^4}+\n6S_{\\sigma\\sigma}\\frac{X_{\\sigma\\sigma\\sigma}}{|X_\\sigma|^5}-\n(15\\frac{S_{\\sigma\\sigma}^2}{|X_\\sigma|^4}-\n4\\frac{S_{\\sigma\\sigma\\sigma}}{|X_\\sigma|^3}+\\kappa^2)\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}\n\\end{equation}\n\nWe would like to point out that the choice of $L$ is not unique. A\nsimilar expression has been given by Garcke et al.\nin~\\cite{garcke}.\n\n\n\\subsection{The Parabolic System}\nWe now give the fully parabolic system,\n \\begin{eqnarray}\nX_{t}^1&=&\\frac{X^1_{\\sigma\\sigma}}{|X^1_{\\sigma}|^2}\\nonumber\\\\\nX_{t}^2&=&-\\frac{X^2_{\\sigma\\sigma\\sigma\\sigma}}{|X^2_{\\sigma}|^4}+\n6S_{\\sigma\\sigma}\\frac{X^2_{\\sigma\\sigma\\sigma}}{|X^2_{\\sigma}|^5}-\n(15\\frac{S_{\\sigma\\sigma}^2}{|X^2_{\\sigma}|^4}-\n4\\frac{S_{\\sigma\\sigma\\sigma}}{|X^2_{\\sigma}|^3}+\\kappa^2)\\frac{X^2_{\\sigma\\sigma}}{|X^2_{\\sigma}|^2}\\label{eq:sys}\\\\\nX^3_{t}&=&-\\frac{X^3_{\\sigma\\sigma\\sigma\\sigma}}{|X^3_{\\sigma}|^4}+\n6S_{\\sigma\\sigma}\\frac{X^3_{\\sigma\\sigma\\sigma}}{|X^3_{\\sigma}|^5}-\n(15\\frac{S_{\\sigma\\sigma}^2}{|X^3_{\\sigma}|^4}-\n4\\frac{S_{\\sigma\\sigma\\sigma}}{|X^3_{\\sigma}|^3}+\\kappa^2)\\frac{X^3_{\\sigma\\sigma}}{|X^3_{\\sigma}|^2}\\nonumber\n\\end{eqnarray}\nwhere $X^1$ stands for the grain boundary and $X^2,X^3$ stand for\nthe left branch and right branch of the upper surface\nrespectively. All curves are represented by $X(\\sigma)$ with\n$\\sigma\\in[0,\\infty)$.\n\nThis system will be solved numerically with the boundary\nconditions discussed in the next section.\n\n\\section{Boundary Conditions}\\label{se:3}\nThe grain boundary and the two upper surfaces meet together at one\nend which is referred as triple junction. The other end of the\nthree curves tends to infinity in the quarter loop geometry. For\nnumerical reasons, we compute this problem in a bounded domain.\nThis domain is chosen large enough such that it can simulate the\nmotion at least for a short time. This restriction is reasonable\nsince the curves are asymptotically flat for the parts far away\nfrom the triple junction. 
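\n\nBefore turning to the boundary conditions, we note that the right-hand side of (\\ref{eq:scheme}) is straightforward to evaluate once the $\\sigma$-derivatives are approximated. The sketch below does this with periodic centered differences on a closed test curve; it is an illustration only, since the coupled problem of this paper uses the junction and domain boundary conditions described in this section instead of periodicity.\n\\begin{verbatim}\nimport numpy as np\n\ndef parabolic_sd_rhs(x, y):\n    # Right-hand side of the fourth-order scheme, evaluated with periodic\n    # centered differences on a closed test curve (illustration only).\n    d = lambda f: 0.5 * (np.roll(f, -1) - np.roll(f, 1))\n    xs, ys = d(x), d(y)\n    xss, yss = d(xs), d(ys)\n    xsss, ysss = d(xss), d(yss)\n    xssss, yssss = d(xsss), d(ysss)\n    g = np.hypot(xs, ys)                              # S_sigma = |X_sigma|\n    S2 = (xs * xss + ys * yss) / g                    # S_sigma_sigma\n    S3 = (-(xs * xss + ys * yss)**2 / g**3\n          + (xss**2 + yss**2 + xs * xsss + ys * ysss) / g)\n    kappa = (xs * yss - ys * xss) / g**3              # curvature\n    coef = 15.0 * S2**2 / g**4 - 4.0 * S3 / g**3 + kappa**2\n    fx = -xssss / g**4 + 6.0 * S2 * xsss / g**5 - coef * xss / g**2\n    fy = -yssss / g**4 + 6.0 * S2 * ysss / g**5 - coef * yss / g**2\n    return fx, fy\n\\end{verbatim}\n\n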
All computations presented in this paper\nare constrained in a finite domain $[-6,12]$ and the curves are\nparameterized with $\\sigma\\in [0,1]$.\n\n\nAt $\\sigma=0$ the three curves meet at a triple junction and at\n$\\sigma=1$ the three curves meet the artificial domain boundary\nseparately.\n\n\\subsection{Triple Junction Conditions at $\\sigma=0$}\\label{se:junction}\n\\begin{figure\n\\begin{center}\n\\includegraphics[width = 8.5cm, height = 5cm, clip]{groove.eps}\n\\caption{Sketch of the grain boundary groove.} \\label{fig:fig2}\n\\end{center}\n\\end{figure}\nWe first discuss the boundary conditions at the triple junction.\nFirst of all, three curves should have common coordinates at\n$\\sigma=0$, i.e.,\n\\begin{equation}\\label{eq:cond1}\nX^1(0,t)=X^2(0,t)=X^3(0,t)\n\\end{equation}\n\n\n By Young's law we have two more conditions which are referred as angle\n conditions:\n\\begin{eqnarray}\\label{eq:cond2}\n&&\\frac{X^1_{\\sigma}}{|X^1_{\\sigma}|}\\cdot \\frac{X^2_{\\sigma}}{|X^2_{\\sigma}|}=\\cos\\theta_{12}=\\cos(\\frac{\\pi}{2}+\\arcsin\\frac{m}{2})\\label{eq:cond3}\\\\\n&&\\frac{X^1_{\\sigma}}{|X^1_{\\sigma}|}\\cdot\n\\frac{X^3_{\\sigma}}{|X^3_{\\sigma}|}=\\cos\\theta_{13}=\\cos(\\frac{\\pi}{2}+\\arcsin\\frac{m}{2})\\label{eq:cond3}\n\\end{eqnarray}\nwhere $\\theta_{ij}$ denotes the angle between curve $i$, $j$ and\n$m=\\gamma_{grain }\/{\\gamma_{exterior}}$ is a constant measuring\nthe relative surface energy between the grain boundary and\nexterior surface.\n\nThe continuity of the surface chemical potentials implies that\n\\begin{equation}\\label{eq:potentials}\n\\kappa^2=-\\kappa^3\\qquad\n(\\kappa=\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}\\cdot\n\\frac{X_\\sigma^\\bot}{|X_\\sigma|})\n\\end{equation}\nHere the superscripts are indices of curves.\n\nAnd the balance of mass flux implies that\n\\begin{equation}\\label{eq:mass}\n\\kappa^2_{s}=\\kappa^3_{s}\\qquad\n(\\kappa_s=\\frac{X_{\\sigma\\sigma\\sigma}\\cdot\nX_\\sigma^\\perp}{|X_\\sigma|^4}-\n3\\frac{|X_\\sigma|_\\sigma(X_{\\sigma\\sigma}\\cdot X_\\sigma^\\perp)}\n{|X_\\sigma|^5})\n\\end{equation}\nwhere the expression for $\\kappa_s$ is obtained by taking the\nderivative of the expression of $\\kappa$ directly.\n\nWe must be careful about condition (\\ref{eq:potentials}).\nBasically, we need the two upper surfaces have the same convexity.\nSince $\\sigma$ has opposite directions for the two curves the odd\ntime derivatives will have opposite signs when computed by\nparametric form. Thus we should put a minus sign for\n(\\ref{eq:potentials}) and keep the same for (\\ref{eq:mass}).\n\n\\subsection{Boundary Conditions at $\\sigma=1$}\nAt the other ends of the curves we put several artificial\nconditions such that they do not move during evolution and keep\nbeing flat. This is reasonable since they start being flat and\nthey will not be influenced by the motion of the triple junction\nin a short time. The following conditions are imposed at\n$\\sigma=1$,\n\\begin{eqnarray*}\n&& X_{t}^i(1,t)=\\textbf{0}\\qquad \\textrm{for}\\, i=1,2,3\\\\\n&& X_{\\sigma\\sigma}^i(1,t)=\\textbf{0}\\qquad \\textrm{for}\\, i=2,3\n\\end{eqnarray*}\n\\subsection{Artificial Tangential Conditions}\nWe should point out that the whole system contains two second\norder equations and four fourth order equations and it should have\nten conditions at the junction point for well-posedness. Recall\nthat there are only eight junction conditions as have been\naddressed above. Thus, we need two more conditions. 
There are\nseveral options to impose the extra conditions. And we will prove\nlater that different conditions could only change the\nparametrization of the curves and will not change the profiles of\nthe curves. Since these conditions do change the tangential\nvelocities of the grid nodes we refer them as artificial\ntangential conditions. As one of the options, the following two\nconditions are applied into the system:\n\\begin{equation}\nX_{\\sigma\\sigma}^i\\cdot X_{\\sigma}^i=0 \\,\\,\\quad \\textrm{for}\n\\,i=2,3\n\\end{equation}\n\\section{Well-posedness for the Parabolic System}\\label{se:4}\nIn this section, we analyze the well-posedness of the system\nproposed above. We linearize around fixed straight line solutions\nand get a system that has the same highest order parabolic\nbehavior as the original problem. The well-posedness we do gives\nthe conditions that match those that in more complicated nonlinear\nanalysis gives, where such analysis exists. And therefore, we\nbelieve the results of the analysis should apply to the full\nnonlinear problem.\n\\subsection{Linearization of the System}\\label{se:linearized}\nTo linearize the system we consider a perturbation expansion\naround the tangential direction at the triple junction for each\ncurve, i.e.,\n\\begin{eqnarray*}\n&&X^1=d_1\\sigma+\\epsilon \\bar{X}^1+O(\\epsilon^2)\\\\\n&&X^2=d_2\\sigma+\\epsilon \\bar{X}^2+O(\\epsilon^2)\\\\\n&&X^3=d_3\\sigma+\\epsilon \\bar{X}^3+O(\\epsilon^2)\n\\end{eqnarray*}\nwhere $d_i=(d_{i1},d_{i2})$ is a constant vector standing for the\nunit tangential direction. Substitute above equations into\n(\\ref{eq:sys}), linearize and keep the leading order terms to get\na linear system:\n\\begin{eqnarray}\n&&\\bar{X}^1_{t}=\\bar{X}^1_{\\sigma\\sigma}\\nonumber\\\\\n&&\\bar{X}^2_{t}=-\\bar{X}^2_{\\sigma\\sigma\\sigma\\sigma}\\label{eq:linearized}\\\\\n&&\\bar{X}^3_{t}=-\\bar{X}^3_{\\sigma\\sigma\\sigma\\sigma}\\nonumber\n\\end{eqnarray}\nFor convenience, we omit the bar above $X$ in following\ndiscussion.\n\nThe linearization of the triple junction conditions is\nstraightforward.\n\\begin{itemize}\n\\item Common point at $\\sigma=0$:\n\\begin{eqnarray*}\nX^1=X^2=X^3\n\\end{eqnarray*}\n\\item Angle conditions:\n\\begin{eqnarray*}\nd_1\\cdot X^2_{\\sigma}+d_2\\cdot X^1_{\\sigma}-(d_1\\cdot\nd_2)(d_1\\cdot\nX^1_{\\sigma}+d_2\\cdot X^2_{\\sigma})=0\\\\\nd_1\\cdot X^3_{\\sigma}+d_3\\cdot X^1_{\\sigma}-(d_1\\cdot\nd_3)(d_1\\cdot X^1_{\\sigma}+d_3\\cdot X^3_{\\sigma})=0\n\\end{eqnarray*}\n\\item Continuity of surface chemical potentials:\n\\begin{eqnarray*}\nX^2_{\\sigma\\sigma}\\cdot d_2^\\perp =-X^3_{\\sigma\\sigma} \\cdot\nd_3^\\perp\n\\end{eqnarray*}\n\\item Balance of mass flux:\n\\begin{eqnarray*}\nX^2_{\\sigma\\sigma\\sigma}\\cdot d_2^\\perp =X^3_{\\sigma\\sigma\\sigma}\n\\cdot d_3^\\perp\n\\end{eqnarray*}\n\\item Artificial tangential conditions:\n\\begin{eqnarray*}\nX^2_{\\sigma\\sigma}\\cdot d_2=0\\\\\nX^3_{\\sigma\\sigma}\\cdot d_3=0\n\\end{eqnarray*}\n\\end{itemize}\nThe linear system (\\ref{eq:linearized}) can be solved using\nLaplace transforms to get {\\large\n\\begin{eqnarray}\\label{eq:sol}\n\\left\\{\\begin{array}{l}\n u_1=A_{11}e^{-\\sqrt{s}\\sigma}\\\\\n v_1=A_{12}e^{-\\sqrt{s}\\sigma}\\\\\n u_2=A_{21}e^{\\lambda_1\\sigma}+B_{21}e^{\\lambda_2\\sigma}\\\\\n v_2=A_{22}e^{\\lambda_1\\sigma}+B_{22}e^{\\lambda_2\\sigma}\\\\\n u_3=A_{31}e^{\\lambda_1\\sigma}+B_{31}e^{\\lambda_2\\sigma}\\\\\n v_3=A_{32}e^{\\lambda_1\\sigma}+B_{32}e^{\\lambda_2\\sigma}\\\\\n \\end{array}\n 
\\right.\n\\end{eqnarray}}\nwhere\n\\begin{eqnarray*}\n\\lambda_1=(-\\frac{\\sqrt{2}}{2}+\\frac{\\sqrt{2}}{2}i)\\sqrt[4]{s}\\quad\n\\lambda_2=(-\\frac{\\sqrt{2}}{2}-\\frac{\\sqrt{2}}{2}i)\\sqrt[4]{s}\n\\end{eqnarray*}\nand here $s$ temporally stands for the transformed time variable\nof Laplace transform.\n\nFor simplicity, we first suppose the angles between any two curves\nare $\\frac{2}{3}\\pi$. Substituting solution (\\ref{eq:sol}) into\nboundary conditions one obtains a $10 \\times 10$ coefficient\nmatrix $M$(transposed)\n\\begin{eqnarray}\\label{eq:matrix}\n\\left(\\begin{array}{cccccccccc} 1&0&0&0&(-d_{21}-\\frac{1}{2}\nd_{11})\\sqrt{s}&(-d_{31}-\\frac{1}{2}\nd_{11})\\sqrt{s}&0&0&0&0\\\\\n0&0&1&0&(-d_{22}-\\frac{1}{2}d_{12})\\sqrt{s}&(-d_{32}-\\frac{1}{2}d_{12})\\sqrt{s}&0&0&0&0\\\\\n-1&1&0&0&(d_{11}+\\frac{1}{2}\nd_{21})\\lambda_1&0&-d_{22}\\lambda_1^2&-d_{22}\\lambda_1^3&d_{21}\\lambda_1^2&0\\\\\n-1&1&0&0&(d_{11}+\\frac{1}{2}\nd_{21})\\lambda_2&0&-d_{22}\\lambda_2^2&-d_{22}\\lambda_2^3&d_{21}\\lambda_2^2&0\\\\\n0&0&-1&1&(d_{12}+\\frac{1}{2}\nd_{22})\\lambda_1&0&d_{21}\\lambda_1^2&d_{21}\\lambda_1^3&d_{22}\\lambda_1^2&0\\\\\n0&0&-1&1&(d_{12}+\\frac{1}{2}d_{22})\\lambda_2&0&d_{21}\\lambda_2^2&d_{21}\\lambda_2^3&d_{22}\\lambda_2^2&0\\\\\n 0&-1&0&0&0&(d_{11}+\\frac{1}{2}\n d_{31})\\lambda_1&-d_{32}\\lambda_1^2&d_{32}\\lambda_1^3&0&d_{31}\\lambda_1^2\\\\\n0&-1&0&0&0&(d_{11}+\\frac{1}{2}\nd_{31})\\lambda_2&-d_{32}\\lambda_2^2&d_{32}\\lambda_2^3&0&d_{31}\\lambda_2^2\\\\\n0&0&0&-1&0&(d_{12}+\\frac{1}{2}\nd_{32})\\lambda_1&d_{31}\\lambda_1^2&-d_{31}\\lambda_1^3&0&d_{32}\\lambda_1^2\\\\\n0&0&0&-1&0&(d_{12}+\\frac{1}{2}d_{32})\\lambda_2&d_{31}\\lambda_2^2&-d_{31}\\lambda_2^3&0&d_{32}\\lambda_2^2\\\\\n \\end{array}\n\\right)\n\\end{eqnarray}\n\n\nLinear well-posedness requires that the determinant of matrix $M$\nis nonsingular for any $s$ satisfying Re$(s)>0$. Since the\nwell-posedness depends only on their relative positions, we\nsuppose further that\n\\begin{eqnarray*}\nd_1=\\left(\\begin{array}{c} 0\\\\\\\\-1\\end{array}\\right)\\quad\nd_2=\\left(\\begin{array}{c}\n-\\frac{\\sqrt{3}}{2}\\\\\\\\\\frac{1}{2}\\end{array}\\right)\\quad\nd_3=\\left(\\begin{array}{c}\n\\frac{\\sqrt{3}}{2}\\\\\\\\\\frac{1}{2}\\end{array}\\right)\n\\end{eqnarray*}\nWith these assumptions, one obtains the determinant of $M$:\n\\begin{eqnarray*}\n|M|=6\\sqrt{6}s^{11\/4}+24\\sqrt{3}s^3\n\\end{eqnarray*}\nSimilarly the determinant of $M$ for arbitrary angles is\n\\begin{eqnarray*}\n|M|=32(\\sin\\theta_{13}\\sin^2\\theta_{12}+\\sin\\theta_{12}\\sin^2\\theta_{13})s^3-16\\sqrt{2}(\\sin\\theta_{12}\\sin\\theta_{13}\n\\sin(\\theta_{12}+\\theta_{13}))s^{11\/4}\n\\end{eqnarray*}\nwhere $\\theta_{12}$, $\\theta_{13}$ are the angles between the\ncurves as shown in Fig.\\ref{fig:fig2}. $M$ is nonsingular for any\n$s$ with $\\textrm{Re}(s)>0$ if $0<\\theta_{12},\\theta_{13} <\\pi$.\nThe constraint on $\\theta$ is not an issue since it has included\nall the cases of interest.\n\n\\subsection{Analysis of Artificial Tangential Conditions}\nAs have been mentioned before, there are several options for the\nartificial tangential conditions. We are interested to know if\ndifferent choices will lead to the same solution which is shown to\nbe true. To prove this point, it suffices to prove that the\nposition of the junction and the three tangential directions do\nnot depend on the artificial tangential conditions. The idea to\nprove this point is to show that they all lead to the same\nsolution for $X_1$. 
If this is true, the position of the junction\npoint and the tangential direction of $X_1$ are uniquely\ndetermined. Since the angle conditions are guaranteed, the\ntangential directions of the other two curves could also be\nuniquely determined. To sum up, the key point is proving\ncoefficients of $X^1$, i.e., $A_{11},A_{12}$ do not depend on the\nextra conditions.\n\nThe coefficients $A_{ij},B_{ij}$ in solution (\\ref{eq:sol}) can be\nsolved by\n\\begin{equation}\nM \\cdot C =P\n\\end{equation}\nwhere $M$ is the coefficient matrix (\\ref{eq:matrix}) for boundary\nconditions, $C=[A_{11},A_{12},\\cdots, A_{32},B_{32}]$ is the\ncoefficient vector to be solved and\n$P=[p_1,p_2,\\cdots,p_9,p_{10}]$ is a constant vector depending on\nthe initial data. Note that only $p_9,p_{10}$ and the last two\nlines of $M$ depend on artificial tangential conditions.\n\nAccording to the discussion above we need to prove $A_{11},A_{12}$\ndo not depend on the artificial tangential conditions. More\nprecisely, we need to prove $A_{11},A_{12}$ do not depend on the\nlast two lines of matrix $M$ and $p_9,p_{10}$.\n\nFor convenience, we rewrite $M$ into a block form\n\\begin{equation}\nM=\\left(\\begin{array}{cc}\n M_1(8\\times 2)& M_2(8\\times 8)\\\\\\\\\n M_3(2\\times 2) & M_4(2\\times 8)\n \\end{array}\n \\right)\n \\end{equation}\nWe do the Gauss elimination for block $M_2$ and it shows that the\nrank of submatrix $M_2$ is 6 for any angle conditions. this means\nwe can make the last two lines of $M_2$ be zeros by row deduction\nand meanwhile making the last two lines of $M_1$ into a full rank\n$(2\\times 2)$ matrix.\n\nWe again use $M$ to denote the new matrix after row deduction.\nNext we compute $M^{-1}$ in a block form satisfying\n\\begin{eqnarray}\\label{eq:times}\nM\\times M^{-1}&=&\\left(\\begin{array}{cc}\n M_1(8\\times 2)& M_2(8\\times 8)\\\\\\\\\n M_3(2\\times 2) & M_4(2\\times 8)\n \\end{array}\n \\right)\\times\n\\left(\\begin{array}{cc}\n \\bar{M}_1(2\\times 8)& \\bar{M}_2(2\\times 2)\\\\\\\\\n \\bar{M}_3(8\\times 8) & \\bar{M}_4(8\\times 2)\n \\end{array}\n \\right)\\nonumber\\\\ \\nonumber\\\\\n &=&\\left(\\begin{array}{cc}\n I(8\\times 8)& \\mathbf{0}\\\\\\\\\n \\mathbf{0} & I(2\\times 2)\n \\end{array}\n \\right)\n \\end{eqnarray}\nExpand directly to get\n\\begin{eqnarray}\n&&M_1\\times \\bar{M}_1+M_2\\times \\bar{M}_3=\\mathbf{I(8\\times 8)} \\label{eq:e1}\\\\\n&&M_1\\times \\bar{M}_2+M_2\\times \\bar{M}_4=\\mathbf{0(8\\times\n2)}\\label{eq:e2}\n\\end{eqnarray}\nNote that (\\ref{eq:e1})-(\\ref{eq:e2}) do not involve $M_3,M_4$\nwhich means they do not depend on the artificial tangential\nconditions. If $\\bar{M}_1,\\bar{M}_2$ can be determined by equation\n(\\ref{eq:e1})-(\\ref{eq:e2}) then we can say $\\bar{M}_1,\\bar{M}_2$\ndo not depend on the artificial conditions. The fact\n\\begin{eqnarray*}\n\\left(\\begin{array}{c}\n A_{11}\\\\\n A_{12}\n \\end{array}\n \\right)\n =\\left(\\begin{array}{cc}\n \\bar{M}_1& \\bar{M}_2\n \\end{array}\n \\right)\\times\n P\n \\end{eqnarray*}\nimplies that $A_{11},A_{12}$ do not depend on the artificial\nconditions if we can further prove $\\bar{M}_2=\\mathbf{0}$ .\n\n\nActually, $\\bar{M}_1$ can surely be solved from equation\n(\\ref{eq:e1}). This is because the last two lines of $M_2$ are\nzeros and we have exactly sixteen equations involving only the\nsixteen unknowns of $\\bar{M}_1$. For the same reason we can solve\nfor $\\bar{M}_2$ by equation (\\ref{eq:e2}). 
Actually, since the\nlast two lines of $M_1$ is a full rank $(2\\times 2)$ matrix\n$\\bar{M}_2$ must be zero. This completes the proof that the\ncoefficients $A_{11},A_{12}$ in equation (\\ref{eq:linearized}) do\nnot depend on the artificial conditions. And consequently, the\nshapes of the three curves do not depend on the artificial\ntangential conditions. Novick-Cohen et al.~\\cite{amy0} also\npointed out that the artificial conditions do not influence the\nsolutions, although the problem there is a little bit different.\nIn~\\cite{amy0} the authors look at a three phase problem in which\nall three interfaces evolve by minus the surface Laplacian of mean\ncurvature and meet at a triple junction.\n\n\\section{Numerical Discretization}\\label{se:5}\n\nBack to the full nonlinear problem, we present in detail the\ndiscretization procedure of the parabolic scheme (\\ref{eq:sys})\nand junction conditions (\\ref{eq:cond1})-(\\ref{eq:mass}). The\nbasic approach is to use a staggered grid in $\\sigma$ and we shall\ndenote the approximations by capital letters with subscripts,\ni.e., $X_j(t)\\simeq X((j-1\/2)h,t)=(u((j-1\/2)h,t),v((j-1\/2)h,t))$\nwhere $h$ is grid spacing and $N=1\/h$ is the number of interior\ngrid points for $\\sigma\\in [0,1]$.\n\n\nIn order to write the discretized equations we introduce some\nadditional notations. Let $D_k$ denote the second order centered\napproximation of the $k$th derivative, i.e.,\n\\begin{eqnarray*}\nD_1X_j&=&(X_{j+1}-X_{j-1})\/2h\\\\\nD_2X_j&=&(X_{j+1}+X_{j-1}-2X_j)\/h^2\n\\end{eqnarray*}\nand let $D_+$ and $\\mathcal{F}$ denote forward differencing and\nforward averaging, respectively,\n\\begin{eqnarray*}\nD_+X_j&=&(X_{j+1}-X_{j})\/h\\\\\n\\mathcal{F}X_j&=&(X_{j+1}+X_{j})\/2\n\\end{eqnarray*}\nWe discretize each motion separately.\n\\subsection{Grain Boundary Motion(Motion by Mean Curvature)}\nThe grain boundary motion is approximated at all grid points by\nstandard differences,\n\\begin{equation}\n\\dot{X_j^i}=\\frac{D_2X_j^i}{|D_1X_j^i|^2} \\qquad i=1,\nj=1,2,\\cdots,N\n\\end{equation}\nwhere $\\dot{X_j}$ stands for time derivative. Formally, these\ndiscrete equations require values of $X_0$ and $X_{N+1}$ outside\nthe computation domain. We shall use the boundary condition to\nextrapolate the interior values of $X_1$ and $X_N$ to the unknown\nexterior values of $X_0$ and $X_{N+1}$. We shall give the details\nof the extrapolation procedure later.\n\n\\subsection{Surface Diffusion}\nThe higher order derivatives appeared in surface diffusion are\napproximated by\n\\begin{equation}\n(X_{\\sigma\\sigma\\sigma})_j\\simeq\nD_3X_j=\\frac{D_2X_{j+1}-D_2X_{j-1}}{2h}\n\\end{equation}\n\\begin{equation}\n(X_{\\sigma\\sigma\\sigma\\sigma})_j\\simeq\nD_4X_j=\\frac{D_2X_{j-1}+D_2X_{j+1}-2D_2X_j}{h^2}\n\\end{equation}\nThere are some other terms such as $S_\\sigma,S_{\\sigma\\sigma},\nS_{\\sigma\\sigma\\sigma}$ to be approximated. 
Start from\n(\\ref{eq:9}) and differentiate several times with respect to\n$\\sigma$ to get\n\\begin{eqnarray*}\nS_\\sigma&=&\\sqrt{u_{\\sigma}^2+v_{\\sigma}^2}=|X_\\sigma|\\\\\nS_{\\sigma\\sigma}&=&\\frac{u_{\\sigma}u_{\\sigma\\sigma}+v_{\\sigma}v_{\\sigma\\sigma}}{\\sqrt{u_{\\sigma}^2+v_{\\sigma}^2}}\n=\\frac{X_\\sigma\\cdot X_{\\sigma\\sigma}}{|X_\\sigma|}\\\\\nS_{\\sigma\\sigma\\sigma}&=&-\\frac{(X_\\sigma \\cdot\nX_{\\sigma\\sigma})^2}{|X_\\sigma|^3}+ \\frac{X_{\\sigma\\sigma}\\cdot\nX_{\\sigma\\sigma}+X_\\sigma\\cdot X_{\\sigma\\sigma\\sigma}}{|X_\\sigma|}\n\\end{eqnarray*}\nEvery term in scheme (\\ref{eq:sys}) is now ready to be\napproximated by standard differences.\n\\subsection{Junction Conditions at $\\sigma=0$}\nThe discretization at the junction point is much more complicated.\nSince there are fourth order derivatives for the surface diffusion\nwe shall need two ghost points for each surface curve and one\nghost point for grain boundary. These ghost points are denoted by\n$X^1_{0},X^2_{-1},X^2_{0},X^3_{-1}, X^3_{0}$ respectively. The\njunction conditions (\\ref{eq:cond1})-(\\ref{eq:mass}) are\napproximated as follows, see Fig. \\ref{fig:junc}.\n\\begin{figure\n\\begin{center}\n\\includegraphics[width = 10cm, height = 5.1cm, clip]{ghostpoint.eps}\n\\caption{Sketch of the ghost points at the triple junction.}\n\\label{fig:junc}\n\\end{center}\n\\end{figure}\n\n\nCondition (\\ref{eq:cond1}),\n\\begin{equation}\\label{eq:cond1d}\n\\mathcal{F}X^1_0=\\mathcal{F}X^2_0=\\mathcal{F}X^3_0=C\n\\end{equation}\nwhere $C$ denotes the junction point.\n\nThe angle conditions (\\ref{eq:cond2})-(\\ref{eq:cond3}) are\napproximated by\n\\begin{eqnarray}\n\\frac{D_+X_0^1}{|D_+X_0^1|}\\cdot\\frac{D_+X_0^2}{|D_+X_0^2|}&=&\\cos\\theta_{12}\\\\\n\\frac{D_+X_0^1}{|D_+X_0^1|}\\cdot\\frac{D_+X_0^3}{|D_+X_0^3|}&=&\\cos\\theta_{13}\n\\end{eqnarray}\nDiscretize condition (\\ref{eq:potentials}) for each surface curve\nto get\n\\begin{equation}\n\\frac{D_2X_C^2\\cdot\n(D_1X_C^2)^\\perp}{|D_1X_C^2|^3}=-\\frac{D_2X_C^3\\cdot\n(D_1X_C^3)^\\perp}{|D_1X_C^3|^3}\n\\end{equation}\nSince staggered grid are used, center $C$ is a midpoint not a grid\npoints. 
But we still can use previous notations $D_k$ with the\nfollowing extensions\n\\begin{eqnarray*}\nX_{C-1}^i&=&(X_{-1}^i+X_0^i)\/2=\\mathcal{F}X_{-1}^i\\\\\nX_{C+1}^i&=&(X_{1}^i+X_{2}^i)\/2=\\mathcal{F}X_{1}^i\n\\end{eqnarray*}\n$\\kappa_s$ can be expressed by\n\\begin{equation}\n\\kappa_s=\\frac{X_{\\sigma\\sigma\\sigma}\\cdot\nX_\\sigma^\\perp}{|X_\\sigma|^4}-\n3\\frac{S_{\\sigma\\sigma}(X_{\\sigma\\sigma}\\cdot X_\\sigma^\\perp)}\n{|X_\\sigma|^5}\n\\end{equation}\nThus condition (\\ref{eq:mass}) is approximated by\n\\begin{equation}\n\\frac{D_3X^2_C\\cdot (D_1X^2_C)^\\perp}{|D_1X^2_C|^4}-\n3\\frac{S^2_{\\sigma\\sigma}(D_2X^2_C\\cdot (D_1X^2_C)^\\perp)}\n{|D_1X^2_C|^5}=\\frac{D_3X^3_C\\cdot\n(D_1X^3_C)^\\perp}{|D_1X^3_C|^4}-\n3\\frac{S^3_{\\sigma\\sigma}(D_2X^3_C\\cdot (D_1X^3_C)^\\perp)}\n{|D_1X^3_C|^5}\n\\end{equation}\nFinally, the artificial tangential conditions is calculated by\n\\begin{equation}\n\\frac{D_2X_C^i\\cdot D_1X_C^i}{|D_1X_C^i|^3}=0 \\,\\,\\quad for\n\\,i=2,3\n\\end{equation}\nWe now finish discretizing the junction conditions.\n\\subsection{Domain Boundary Conditions at $\\sigma=1$}\nThe discretization at $\\sigma=1$ is straightforward.\n\\begin{eqnarray*}\n&&\\mathcal{F}X_N^i\\big|_{t=n\\cdot\ndt}=\\mathcal{F}X_N^i\\big|_{t=(n-1)\\cdot dt}\\quad\\textrm{for}\\,\ni=1,2,3 \\\\\n&&D_+X_N^i=0 \\quad \\textrm{for} \\,i=2,3\n\\end{eqnarray*}\n\\section{Time Stepping}\\label{se:6}\n\\subsection{Explicit Scheme}\nAs an explicit scheme, forward Euler method is used for the time\nstepping process.\n\\begin{equation}\nX^{n+1}=X^n+\\Delta tF(X^n)\n\\end{equation}\nHere $F(X^n)$ denotes the right hand side in formulation\n(\\ref{eq:scheme}) evaluated at time level $n$. Time steps $\\Delta\nt$ are chosen so that the full discrete scheme is stable. Here we\nchoose $\\Delta t=1e-12$. This scheme is easy to implement. Given\nthe results at time $n$ we update the values of the interior grid\npoints by forward Euler method to time level $n+1$ for the three\ncurves respectively. Next solve for the ghost points, junction\npoint and the boundary points by the boundary conditions. Then go\non to the next time level. The time step is excessively small due\nto the stiffness of the fourth order parabolicity as noted\npreviously.\n\\subsection{Implicit Scheme}\nIn order to avoid the excessively small time steps for explicit\nscheme we consider implicit techniques in this section. For\nsimplicity we use backward Euler method. Given the values at time\n$n$ we update the values at time level $n+1$\n by solving the nonlinear system\n\\begin{equation}\nX^{n+1}-\\Delta tF(X^{n+1})-X^n=0\n\\end{equation}\n\nSince the three curves are strongly coupled by the junction, we\nsolve all unknown points simultaneously including the ghost points\nand the extrapolated boundary points. This leads to a large\nnonlinear system which is solved by Newton's method. There is no\ndoubt that this scheme should be stable for any time steps. But it\ncan not survive a long time computation due to the nonuniform\ntangential velocity which leads to a nonuniform distribution of\nthe grid points. This phenomenon can not be fixed even if we\nrefine the grid.\n\nOne way to overcome this difficulty is regridding the grid points\nonce they become too far or too close. But the bad distribution\ncould happen only near the junction and the closer to the junction\nthe sparser (or denser) the grid points are. Hence it is hard to\nregrid no matter globally or locally. 
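\n\nFor reference, the backward Euler step above amounts to solving $G(X^{n+1})=X^{n+1}-\\Delta t\\,F(X^{n+1})-X^n=0$ at every time level. The sketch below shows one such step with a generic right-hand side $F$ and a simple finite-difference Jacobian; it is schematic only, since in the actual computation the unknown vector also collects the three curves, the ghost points and the junction point, as described above.\n\\begin{verbatim}\nimport numpy as np\n\ndef backward_euler_step(U_old, F, dt, max_newton=20, tol=1.0e-10):\n    # Solve G(U) = U - dt*F(U) - U_old = 0 by Newton's method with a\n    # finite-difference Jacobian (schematic illustration only).\n    def G(U):\n        return U - dt * F(U) - U_old\n    U = U_old.copy()\n    n = U.size\n    eps = 1.0e-7\n    for _ in range(max_newton):\n        r = G(U)\n        if np.linalg.norm(r) < tol:\n            break\n        J = np.empty((n, n))\n        for j in range(n):            # difference Jacobian, column by column\n            Up = U.copy()\n            Up[j] += eps\n            J[:, j] = (G(Up) - r) / eps\n        U = U - np.linalg.solve(J, r)\n    return U\n\\end{verbatim}\n\n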
Another way is adjusting the\ntangential velocity of the grid points such that they could adjust\nthemselves being uniform. And this is the motivation for the next\nsection.\n\nNumerical results for scheme (\\ref{eq:sys}) with time step $\\Delta\nt=1e-4$ are shown in Fig. \\ref{fig:fig3}. All numerical\nexperiments in this paper start from the same position as showed\nin Fig. \\ref{fig:fig3}. All results are compared with a travelling\nwave solution solved by Amy et al~\\cite{amy2}.\n\\begin{figure}[h]\n\\setlength{\\unitlength}{1cm}\n\\begin{picture}(10,6.5)\n\\put(0.5,0.5){\\includegraphics[width=7cm,height=6cm]{initial.eps}}\n\\put(8.5,0.5){\\includegraphics[width=7cm,height=6cm]{originalstarcolor.eps}}\n\\end{picture}\n\\caption{Plot of results for scheme (\\ref{eq:sys}) with backward\nEuler method for a short time with $m=0.5$. Left: initial status\nwith grid points; Right: result zoomed in near triple junction.\nDotted line: numerical solution; Solid line: travelling wave\nsolution; Time step size: $\\Delta t=1e-2$.}\\label{fig:fig3}\n\\end{figure}\\\\\n\\section{Adjustment of Tangential Velocity}\\label{se:tangential}\nWe have mentioned that long time computations are problematical\neven for implicit schemes. This is because of the bad distribution\nof grid points. To get a more uniform distribution of gird points\nalong the curve we consider adding an artificial term to adjust\nthe tangential velocity of the grid points for the motion by\nsurface diffusion.\n\nWe consider the following modified scheme for the fourth order\nproblem\n\\begin{equation}\\label{eq:newsc}\nX_t=F(X)+\\alpha\n(\\frac{X_{\\sigma\\sigma\\sigma\\sigma}}{|X_\\sigma|^4}\\cdot\\vec{t})\\,\\vec{t}\n\\end{equation}\nwhere $\\alpha$ is a constant to be determined. The newly added\nterm in (\\ref{eq:newsc}) will not influence the normal velocity\nbut it does change the tangential velocity. We do not know exactly\nhow to choose the optimal $\\alpha$ but $\\alpha=-100$ seems to work\nwell for our problem. The result is shown in Fig.\\ref{fig:fig4}.\nIt is obvious that the grid points are much more uniform than that\nin Fig.\\ref{fig:fig3}. The time step size for Fig.\\ref{fig:fig4}\nis $\\Delta t=0.01$. A numerical convergence study is shown in\nTable \\ref{tb:para}.\n\\begin{table}[!h]\n\\begin{center}\n\\begin{tabular}{|c||c||c|c|c|c|}\n\\hline\n$dt$& $\\Delta s$ & $L_2$ Norm & Rate & $L_\\infty$ Norm& Rate\\\\\n\\hline\\hline& 0.2 & 3.1494e-04 && 2.0241e-03&\\\\\n\\cline{2-6}\n $dt=0.01\\Delta s^2$ & 0.1 & 7.9775e-05 &1.9811 & 5.4797e-04&1.8852\\\\\n\\cline{2-6}\n& 0.05 &2.1445e-05 &1.8953&1.4530e-05 &1.9151\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Estimated errors and convergence rates for parabolic\nformulation with $m=0.5$. Errors are evaluated at\n$t=0.02$}.\\label{tb:para}\n\\end{table}\n\nAlthough the newly added tangential term improves the numerical\nbehavior it can not completely overcome the difficulty. The\nartificial tangential conditions discussed in section \\ref{se:4}\nmake the problem even more complicated. All these motivate us to\nseek a more efficient scheme.\n\\begin{figure}[h]\n\\setlength{\\unitlength}{1cm}\n\\begin{picture}(10,6.5)\n\\put(0.5,0.5){\\includegraphics[width=7cm,height=6cm]{fulltanadjcolor.eps}}\n\\put(8.5,0.5){\\includegraphics[width=7cm,height=6cm]{comparecolor.eps}}\n\\end{picture}\n\\caption{Plot of the results for scheme (\\ref{eq:newsc}) with\n$m=0.5, \\alpha=-100, \\Delta t=0.01$. Left: result at t=0.2; Right:\nresult zoomed in near triple junction at t=0.2. 
Dotted line:\nnumerical result; Solid line: travelling wave solution.\n}\\label{fig:fig4}\n\\end{figure}\\\\\n\n\\noindent\\textbf{Remark} There is another way to adjust the\ntangential velocity,\n\\begin{equation}\nX_t=F(X)+\\alpha\n(\\frac{X_{\\sigma\\sigma}}{|X_\\sigma|^2}\\cdot\\vec{t})\\,\\vec{t}\n\\end{equation}\nFor this case, we should choose $\\alpha$ positive, for example\n$\\alpha=100$.\n\n\n\\section{A PDAE Formulation}\\label{se:8}\nAs we have pointed out in section \\ref{se:tangential}, the fully\nparabolic scheme (\\ref{eq:sys}) does not always have good\nnumerical behavior. And the presence of the artificial tangential\ncondition makes the discretization of the original problem more\ncomplicated. In this section we propose another formulation that\ncan overcome these disadvantages and also avoids possible loss of\ntangential monotonicity in the parametrization due to the fourth\norder PDE.\n\nLet us again start from the motion by mean curvature. First of all\nthe basic evolution law should be satisfied, i.e.,\n\\begin{equation}\\label{eq:curvature}\nX_t\\cdot \\vec{n}-\\kappa =0\n\\end{equation}\nBecause there are two free variables in this equation we need one\nmore equation for solvability. Since the requirement for the\nnormal direction motion has been fulfilled by equation\n(\\ref{eq:curvature}) we use the second equation to impose a\nconstraint on the distribution of grid points. It is natural to\nlet all grid points have equal spaces. To avoid introducing an\nextra variable we let the change rate between any two adjacent\nspaces is zero, i.e.,\n\\begin{equation}\\label{eq:constraint}\n|X_\\sigma|_\\sigma =0\n\\end{equation}\nNote that\n\\begin{eqnarray*}\n|X_\\sigma|_\\sigma =(\\sqrt{X_\\sigma\\cdot\nX_\\sigma})_\\sigma=\\frac{X_\\sigma\\cdot\nX_{\\sigma\\sigma}}{|X_\\sigma|}\n\\end{eqnarray*}\nthe following equations are actually used to describe the motion\nand keep grids equi-spaced,\n\\begin{eqnarray*}\n&&X_t\\cdot \\vec{n}-\\kappa =0\\\\\n&&X_\\sigma\\cdot X_{\\sigma\\sigma}=0\n\\end{eqnarray*}\nThese are called partial differential algebraic equations (PDAEs).\n\nIn a similar way we derive the PDAEs for the motion by surface\ndiffusion,\n\\begin{eqnarray*}\n&&X_t\\cdot \\vec{n}+\\kappa_{ss} =0\\\\\n&&X_\\sigma\\cdot X_{\\sigma\\sigma}=0\n\\end{eqnarray*}\nThen the full PDAE system for the coupled motion is\n\\begin{eqnarray}\\label{eq:daesys}\n&&X^1_{t}\\cdot \\vec{n}-\\kappa =0\\nonumber\\\\\n&&X^2_{t}\\cdot \\vec{n}+\\kappa_{ss} =0\\nonumber\\\\\n&&X^3_{t}\\cdot \\vec{n}+\\kappa_{ss} =0\\\\\n&&X^i_{\\sigma}\\cdot X^i_{\\sigma\\sigma}=0 \\qquad i=1,2,3\\nonumber\n\\end{eqnarray}\nThe boundary conditions are the same as the parabolic case except\nthat we do not need artificial tangential conditions any more.\n\nThis is an implicit index-1 DAE system. Usually an index-1 DAE can\nbe discretized directly without any numerical difficulties\n~\\cite{ascher}, and that is our experience in this case.\n\nAlthough the boundary conditions are the same as those of the\nparabolic system, the discretization is a little bit different.\nInstead of using five ghost points we now introduce only three\nghost points plus two extra variables which stand for the\ncurvature at the two ghost points corresponding to the two surface\nbranches. 
The two variables are denoted by $\\kappa^2_0,\\kappa^3_0$\nand the last two junction conditions are approximated by\n\\begin{eqnarray*}\n\\frac{\\kappa^2_0+\\kappa^2_1}{2}&=&-\\frac{\\kappa^3_0+\\kappa^3_1}{2}\\\\\n\\frac{(\\kappa^2_0-\\kappa^2_1)}{|D_1X^2_c|}&=&\\frac{(\\kappa^3_0-\\kappa^3_1)}{|D_1X^3_c|}\n\\end{eqnarray*}\nwhere $\\kappa^i_1$ stands for the curvature of the first interior\npoint of curve $i$ and we use the average of $k^i_0, k^i_1$ to\napproximate the curvature at the center point, i.e., junction\npoint. Again the sign should be carefully handled.\n\nImplementing this scheme one obtains a better result shown in\nFig.\\ref{fig:fig5}. The result is much more accurate and the grid\npoints are more uniform as well. An error comparison is shown in\nTable \\ref{tb:err}. A numerical convergence study of the PDAE\nformulation is shown in Table \\ref{tb:pdae}. The convergence rates\nshown in Table \\ref{tb:pdae} are close to 2 as expected.\n\\begin{figure}[h]\n\\setlength{\\unitlength}{1cm}\n\\begin{picture}(10,6.5)\n\\put(0.5,0.5){\\includegraphics[width=7cm,height=6cm]{forcomparetan2.eps}}\n\\put(8.5,0.5){\\includegraphics[width=7cm,height=6cm]{notanstarcolor2.eps}}\n\\end{picture}\n\\caption{Results comparison between the two schemes. Left: result\nfor (\\ref{eq:sys}); Right: result for (\\ref{eq:daesys}). Both\npictures are zoomed in near triple junction. Dotted line:\nnumerical solution; Solid line: travelling wave solution; Time\nstep size: $\\Delta t=0.01$.}\\label{fig:fig5}\n\\end{figure}\n\\begin{table}[!h]\n\\begin{center}\n\\begin{tabular}{|c||c|c|}\n\\hline & Parabolic Formulation & PDAE Formulation\\\\\n\\hline\\hline $\\Delta s$ & 0.05 & 0.05 \\\\\n\\hline $\\Delta t$ & 0.01 & 0.01 \\\\\n\\hline $L_\\infty$ & 0.0041 & 0.0027\n\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Performance of the two formulations with $L_\\infty$ norm\nand $\\Delta t=0.01$.}\\label{tb:err}\n\\end{table}\n\\begin{table}[!h]\n\\begin{center}\n\\begin{tabular}{|c||c||c|c|c|c|}\n\\hline\n$dt$& $\\Delta s$ & $L_2$ Norm & Rate & $L_\\infty$ Norm& Rate\\\\\n\\hline\\hline& 0.2 & 2.7837e-04 && 1.8996e-03&\\\\\n\\cline{2-6}\n $dt=0.01\\Delta s^2$ & 0.1 & 7.2717e-05 &1.9366 & 5.4444e-04&1.8029\\\\\n\\cline{2-6}\n& 0.05 &1.8732e-05 &1.9568&1.4732e-04 &1.8858\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Estimated errors and convergence rates for PDAE\nformulation with $m=0.5$. Errors are evaluated at\n$t=0.02$.}\\label{tb:pdae}\n\\end{table}\n\nWithout difficulty we can apply this scheme to the case when the\nsurface curve is not a single-valued function as shown in\nFig.\\ref{fig:nonsingle}\n\\begin{figure}[h]\n\\setlength{\\unitlength}{1cm}\n\\begin{picture}(10,6.5)\n\\put(4.5,0.5){\\includegraphics[width=7cm,height=6cm]{nonsingle.eps}}\n\\end{picture}\n\\caption{Plot of the results for scheme (\\ref{eq:daesys}) with\n$m=1.96$ which has non-single valued upper surface. Dotted line:\nnumerical result; Solid line: travelling wave\nsolution.}\\label{fig:nonsingle}\n\\end{figure}\\\\\n\n\n\n\n\\section{An Example of Surface Diffusion Problem}\\label{se:star}\nWe temporally move our focus to a normal direction motion that\ninvolves only motion by surface diffusion. 
The motion starts with\na closed star shaped curve and evolves with a speed equal to the\nsecond derivative of curvature with respect to arc length.\nAccording to the properties of surface diffusion the curve will\nevolve into a circle and preserve the area enclosed by itself.\nThis problem is computed using the PDAE formulation for the\nsurface diffusion and the result is shown in Fig.\\ref{fig:star}.\nThe method conserves the area quite well and the change is about\n0.032\\%. Similar examples have been investigated using level set\nmethods in~\\cite{chopp,peter}. Level set methods have unbeatable\nsuperiority for interface motion problem especially when there is\ntopology change. But for this simple problem (with no topology\nchange ) our method is more efficient and accurate. Note that this\nproblem can also be computed by the parabolic formulation.\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width = 11cm, height = 10cm, clip]{star.eps}\n\\caption{A computational example that involve only the motion by\nsurface diffusion. $\\Delta t=5\\times 10^{-7}$. The area changes by\n0.032\\% } \\label{fig:star}\n\\end{center}\n\\end{figure}\n\n\\section{Well-posedness for the PDAE System}\\label{se:9}\nSimilar to the parabolic system we do a well-posedness analysis\nfor the PDAE system in this section.\n\nConsidering the same linear problem as that in section\n\\ref{se:linearized} one obtains\n\\begin{eqnarray}\n&&X^1_{t}\\cdot d_1^\\perp=X^1_{\\sigma\\sigma}\\cdot d_1^\\perp\\nonumber\\\\\n&& d_1\\cdot X^1_{\\sigma\\sigma}=0\\nonumber\\\\\n&&X^2_{t}\\cdot d_2^\\perp=-X^2_{\\sigma\\sigma\\sigma\\sigma}\\cdot d_2^\\perp\\nonumber\\\\\n&& d_2\\cdot X^2_{\\sigma\\sigma}=0\\label{eq:2ndlinear}\\\\\n&&X^3_{t}\\cdot d_3^\\perp=-X^3_{\\sigma\\sigma\\sigma\\sigma}\\cdot d_3^\\perp\\nonumber\\\\\n&& d_3\\cdot X^3_{\\sigma\\sigma}=0\\nonumber\n\\end{eqnarray}\nwhere $d_i$ and $d_i^\\perp$ stand for unit tangential direction\nand unit normal direction of the $i^{th}$ curve respectively.\n\nLinearization of the boundary conditions are exactly the same as\nbefore. They differ only for the discretization procedure.\n\nIf $d_{i1},d_{i2}\\neq0$ the linearized system (\\ref{eq:2ndlinear})\nhas solution in the form {\\large\n\\begin{eqnarray}\\label{eq:1}\n\\left\\{\\begin{array}{l}\n u_1=A_{11}e^{-\\sqrt{s}\\sigma}+B_{11}\\\\\n v_1=-k_{1}A_{11}e^{-\\sqrt{s}\\sigma}+\\frac{1}{k_{1}}B_{11}\\\\\n u_2=A_{21}e^{\\lambda_1\\sigma}+B_{21}e^{\\lambda_2\\sigma}+C_{21}\\\\\n v_2=-k_{2}(A_{22}e^{\\lambda_1\\sigma}+B_{22}e^{\\lambda_2\\sigma})+\\frac{1}{k_{2}}C_{21}\\\\\n u_3=A_{31}e^{\\lambda_1\\sigma}+B_{31}e^{\\lambda_2\\sigma}+C_{31}\\\\\n v_3=-k_{3}(A_{32}e^{\\lambda_1\\sigma}+B_{32}e^{\\lambda_2\\sigma})+\\frac{1}{k_{3}}C_{31}\\\\\n \\end{array}\n \\right.\n\\end{eqnarray}}\nwhere $k_{i}=\\frac{d_{i1}}{d_{i2}}$ is a constant and\n\\begin{eqnarray*}\n\\lambda_1=(-\\frac{\\sqrt{2}}{2}+\\frac{\\sqrt{2}}{2}i)\\sqrt[4]{s}\\quad\n\\lambda_2=(-\\frac{\\sqrt{2}}{2}-\\frac{\\sqrt{2}}{2}i)\\sqrt[4]{s}\n\\end{eqnarray*}\n\nWithout changing the well-posedness of the problem we specify one\nof the tangential directions, say $d_1=(0,-1)^T$. 
Further we\nassume\n\\begin{eqnarray*}\n\\theta_{12},\\theta_{13} \\in (0,\\pi) \\textrm{ and\n}\\theta_{12},\\theta_{13}\\neq \\frac{\\pi}{2}\n\\end{eqnarray*}\nSince $d_{11}=0$ now the solution is changed to\n\\begin{eqnarray}\\label{eq:2}\n\\left\\{\\begin{array}{l}\n u_1=A_{11}e^{-\\sqrt{s}\\sigma}\\\\\n v_1=B_{11}\\\\\n u_2=A_{21}e^{\\lambda_1\\sigma}+B_{21}e^{\\lambda_2\\sigma}+C_{21}\\\\\n v_2=-k_{2}(A_{22}e^{\\lambda_1\\sigma}+B_{22}e^{\\lambda_2\\sigma})+\\frac{1}{k_{2}}C_{21}\\\\\n u_3=A_{31}e^{\\lambda_1\\sigma}+B_{31}e^{\\lambda_2\\sigma}+C_{31}\\\\\n v_3=-k_{3}(A_{32}e^{\\lambda_1\\sigma}+B_{32}e^{\\lambda_2\\sigma})+\\frac{1}{k_{3}}C_{31}\\\\\n \\end{array}\n \\right.\n\\end{eqnarray}\nApply these solutions to boundary conditions to get an $8 \\times\n8$ matrix $M$ and compute the determinant of $M$ directly to get\n\\begin{eqnarray*}\n|M|=\\frac{4\\sqrt{2}s^{7\/4}\\sin(\\theta_{12}+\\theta_{13})-8s^2(\\sin\\theta_{12}+\\sin\\theta_{13})}\n{\\cos^2\\theta_{12}\\cos^2\\theta_{13}}\n\\end{eqnarray*}\nFor the special case when one of the angles\n$\\theta_{12},\\theta_{13}$ is $\\frac{\\pi}{2}$, for example,\n$\\theta_{12}=\\frac{\\pi}{2}$,\n\\begin{eqnarray}\\label{eq:case2}\n\\left\\{\\begin{array}{l}\n u_1=A_{11}e^{-\\sqrt{s}\\sigma}\\\\\n v_1=B_{11}\\\\\n u_2=C_{21}\\\\\n v_2=A_{22}e^{\\lambda_1\\sigma}+B_{22}e^{\\lambda_2\\sigma}\\\\\n u_3=A_{31}e^{\\lambda_1\\sigma}+B_{31}e^{\\lambda_2\\sigma}+C_{31}\\\\\n v_3=-k_{3}(A_{32}e^{\\lambda_1\\sigma}+B_{32}e^{\\lambda_2\\sigma})+\\frac{1}{k_{3}}C_{31}\\\\\n \\end{array}\n \\right.\n\\end{eqnarray}\nThe determinant of the coefficient matrix is\n\\begin{eqnarray*}\n|M|=\\frac{-8s^2\\sin\\theta_{13}+4\\sqrt{2}s^{7\/4}\\cos\\theta_{13}-8s^2}{\\cos^2\\theta_{13}}\n\\end{eqnarray*}\nAnd if both $\\theta_{12},\\theta_{13}$ are $\\frac{\\pi}{2}$,\n\\begin{eqnarray}\\label{eq:case2}\n\\left\\{\\begin{array}{l}\n u_1=A_{11}e^{-\\sqrt{s}\\sigma}\\\\\n v_1=B_{11}\\\\\n u_2=C_{21}\\\\\n v_2=A_{22}e^{\\lambda_1\\sigma}+B_{22}e^{\\lambda_2\\sigma}\\\\\n u_3=C_{31}\\\\\n v_3=A_{32}e^{\\lambda_1\\sigma}+B_{32}e^{\\lambda_2\\sigma}\\\\\n \\end{array}\n \\right.\n\\end{eqnarray}\nThe corresponding determinant of the matrix $M$ is\n\\begin{eqnarray*}\n|M|=-16s^2\n\\end{eqnarray*}\nSimilar as before all cases discussed above are well-posed if\n$0<\\theta_{12},\\theta_{13} <\\pi$ and\n$\\theta_{12}+\\theta_{13}\\geq\\pi$. Note that the well-posedness\nproperty here coincides with that for the parabolic system we got\nbefore.\n\\section{Conclusion}\nWe proposed two formulations to describe the coupled surface and\ngrain boundary motion. Both of them are well-posed and easy to be\nimplemented by finite difference method. Numerical results are\nshown to be accurate. The PDAE formulation behaves better than the\nparabolic form does. And since all grid points are equispaced for\nthe PDAE formulation it is convenient to regrid globally when\nnecessary. This often happens when the curves expand or shrink\nquickly.\n\nIt is obvious that these schemes can also be used to simulate the\nmotion of a curve that involves only mean curvature motion or\nsurface diffusion as shown in section \\ref{se:star}. And they are\nextensible to any normal direction motion. Wherever applicable\nthese methods are more efficient comparing to the level set\nmethods. 
\section{Conclusion}
We proposed two formulations to describe the coupled surface and grain boundary motion. Both are well-posed and easy to implement with a finite difference method. The numerical results are shown to be accurate. The PDAE formulation behaves better than the parabolic form, and since all grid points remain equispaced in the PDAE formulation it is convenient to regrid globally when necessary; this is often needed when the curves expand or shrink quickly.

These schemes can also be used to simulate the motion of a curve driven purely by mean curvature or by surface diffusion, as shown in Section~\ref{se:star}, and they extend to any motion in the normal direction. Wherever applicable, these methods are more efficient than level set methods, although they cannot handle topology changes during the evolution.

\section*{Acknowledgements}
The authors would like to thank Amy Novick-Cohen for helpful information on recent progress on the model addressed in this paper.
\section*{Appendix A: Reformulation of the Motion.}
The original curvature motion and surface diffusion are given by
\begin{eqnarray*}
V_{c}&=&A\kappa\\
V_{d}&=&-B\kappa_{ss}
\end{eqnarray*}
Without loss of generality, we show below, working in parameterized form, that both $A$ and $B$ can be normalized to unity by rescaling time and space.

Suppose the original motions are modelled by
\begin{eqnarray}\label{eq:rescale}
X_t\cdot \vec{n}&=&A\kappa \nonumber\\
Y_t\cdot \vec{n}&=&-B\kappa_{ss}
\end{eqnarray}
We first rescale time by
\begin{eqnarray*}
\tilde{t}=Tt
\end{eqnarray*}
Then equation (\ref{eq:rescale}) becomes
\begin{eqnarray}\label{eq:rescale2}
X_{\tilde{t}}\cdot \vec{n}&=&\frac{A}{T}\kappa \nonumber\\
Y_{\tilde{t}}\cdot \vec{n}&=&-\frac{B}{T}\kappa_{ss}
\end{eqnarray}
The spatial variables $X,Y$ are rescaled as
\begin{eqnarray*}
\tilde{X} = RX,\quad \tilde{Y} = RY
\end{eqnarray*}
and one easily verifies that
\begin{eqnarray*}
\tilde{s}=Rs
\end{eqnarray*}
where $\tilde{s}$ is the arc length in the new system. A further short calculation shows that
\begin{eqnarray*}
\tilde{X}_{\tilde{s}}=X_{s}, \quad
\tilde{Y}_{\tilde{s}}=Y_{s},\quad
 \tilde{\kappa}=\frac{1}{R}\kappa, \quad \tilde{\kappa}_{\tilde{s}\tilde{s}}=\frac{1}{R^3}\kappa_{ss}
\end{eqnarray*}
Now equation (\ref{eq:rescale2}) becomes
\begin{eqnarray}\label{eq:rescale3}
\tilde{X}_{\tilde{t}}\cdot \vec{n}&=&\frac{AR^2}{T}\tilde{\kappa} \nonumber\\
\tilde{Y}_{\tilde{t}}\cdot
\vec{n}&=&-\frac{BR^4}{T}\tilde{\kappa}_{\tilde{s}\tilde{s}}
\end{eqnarray}
Here we keep the same notation $\vec{n}$ since the normal direction does not change.
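The scaling relations for $\tilde{\kappa}$ and $\tilde{\kappa}_{\tilde{s}\tilde{s}}$ used above can be verified directly on a concrete test curve. The following short SymPy sketch (our own illustration; the ellipse is an arbitrary choice) confirms them numerically:
\begin{verbatim}
import sympy as sp

t, R = sp.symbols('t R', positive=True)
x, y = 2 * sp.cos(t), sp.sin(t)          # arbitrary test curve (an ellipse)

def curvature_and_kss(x, y):
    """Curvature kappa and its second arc-length derivative of (x(t), y(t))."""
    xp, yp = sp.diff(x, t), sp.diff(y, t)
    xpp, ypp = sp.diff(xp, t), sp.diff(yp, t)
    g = sp.sqrt(xp**2 + yp**2)           # ds/dt
    kappa = (xp * ypp - yp * xpp) / g**3
    d_ds = lambda f: sp.diff(f, t) / g   # d/ds = (1/|X_t|) d/dt
    return kappa, d_ds(d_ds(kappa))

k, kss = curvature_and_kss(x, y)
K, Kss = curvature_and_kss(R * x, R * y)          # rescaled curve X~ = R X

val = {t: sp.Rational(7, 10), R: sp.Rational(13, 10)}
assert abs(float((K - k / R).subs(val))) < 1e-9         # kappa~ = kappa / R
assert abs(float((Kss - kss / R**3).subs(val))) < 1e-9  # kappa~_ss = kappa_ss / R^3
\end{verbatim}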
If we choose\n\\begin{eqnarray*}\nR=\\sqrt{\\frac{A}{B}},\\quad T=\\frac{A^2}{B}\n\\end{eqnarray*}\nthen equation (\\ref{eq:rescale3}) becomes\n\\begin{eqnarray}\\label{eq:rescale4}\n\\tilde{X}_{\\tilde{t}}\\cdot \\vec{n}&=&\\tilde{\\kappa} \\nonumber\\\\\n\\tilde{Y}_{\\tilde{t}}\\cdot\n\\vec{n}&=&-\\tilde{\\kappa}_{\\tilde{s}\\tilde{s}}\n\\end{eqnarray}\nThis complete the normalization of coefficients $A$ and $B$.\n\\section*{Appendix B: Proof of Equation (\\ref{eq:ksss}) }\nOne has the following fact\n\\begin{equation}\\label{eq:kappa}\nX_{ss}\\cdot X_{ss}=\\kappa^2\n\\end{equation}\nDifferentiating (\\ref{eq:kappa}) with respect to $s$ one obtains\n\\begin{eqnarray*}\\label{eq:26}\nX_{sss}\\cdot X_{ss}=\\kappa\\kappa_s\n\\end{eqnarray*}\n\\begin{eqnarray*}\nX_{ssss}\\cdot X_{ss}+X_{sss}\\cdot X_{sss}\n=\\kappa\\kappa_{ss}+\\kappa_s^2\n\\end{eqnarray*}\nThen an expression for $\\kappa_{ss}$ can be derived,\n\\begin{equation}\\label{eq:36}\n\\kappa_{ss}=\\frac{X_{ssss}\\cdot X_{ss}+X_{sss}\\cdot X_{sss} -\n\\kappa_s^2}{\\kappa}\n\\end{equation}\nOn the other hand, one has\n\\begin{eqnarray*}\nX_{ss}=\\kappa \\vec{n}\n\\end{eqnarray*}\nTake derivative again and calculate the inner product of $X_{sss}$\n\\begin{eqnarray*}\nX_{sss}=\\kappa_s\\vec{n}+\\kappa\n\\vec{n}_s=\\kappa_s\\vec{n}-\\kappa^2\\vec{t}\n\\end{eqnarray*}\n\\begin{eqnarray}\\label{eq:39}\nX_{sss}\\cdot X_{sss} &=&\\kappa_s^2+\\kappa^4\n\\end{eqnarray}\nSubstitute equation (\\ref{eq:39}) into (\\ref{eq:36}) to get\n\\begin{equation}\\label{eq:40}\n\\kappa_{ss}=X_{ssss}\\cdot\\vec{n}+\\kappa^3\n\\end{equation}\nUsing equation (\\ref{eq:40}) together with the fact that\n\\begin{equation}\\label{eq:44}\n\\kappa^2X_{ss}\\cdot \\vec{n}=\\kappa^2(X_{ss}\\cdot \\vec{n})=\\kappa^3\n\\end{equation}\none obtains\n\\begin{eqnarray*}\n\\kappa_{ss}=(X_{ssss}+\\kappa^2X_{ss})\\cdot \\vec{n}\n\\end{eqnarray*}\nwhich completes the proof.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThere is a growing list of materials which behave as neither a metal nor an\ninsulator.~\\cite{Stewart01:73} Recent examples of interest include underdoped\nand high temperature superconducting cuprates, heavy fermions, iron based\nchalcogenides~\\cite{Sales09:103,Rosler11:84,Rodriguez13:88}, and\noxyselenides~\\cite{Zhu10:104,McCabe14:89,Stock16:28}. The underlying cause of\nthis unconventional behavior is not understood on a general level and, as\nfound in at least the cuprates~\\cite{Keimer15:518} and iron based\nsuperconductors~\\cite{Dai15:87}, is often complicated by several competing\nstructural and magnetic orders. Here, we report neutron scattering measurements\nstudying the magnetic fluctuations in an extreme example of this unusual\nelectronic behavior found in FeCrAs.\n\nFeCrAs displays very unusual properties which have led to it being termed a\n``non-metallic metal\" \\cite{Wu09:85,Akrap14:89}. Thermodynamic measurements\nreveal a highly enhanced Fermi liquid: the linear coefficient of specific heat\nis $\\gamma \\sim$ 30~mJ\/mole\\,K$^2$, while the susceptibility is Pauli like and\nquite large, leading to a Wilson ratio of approximately 4. 
On the other hand,\nnot only does the electrical resistivity $\\rho(T)$ show a strong departure from\nFermi liquid $T^2$ behavior -- as $T \\rightarrow 0$ K it has a {\\em\n sub-linear} power law $\\rho(T) \\simeq \\rho_\\circ + A T^{0.6}$ -- but it is\nalso ``non-metallic\" in the sense that the $A$ coefficient is {\\em negative}.\nThat is, the resistivity rises with decreasing temperature, but without any\nevidence of a gap in the density of states. In contrast to the Kondo effect,\nwhere such behaviour is seen only at low temperature, in FeCrAs the resistivity\nhas a negative slope over a huge temperature range. The $ab$-plane resistivity\nrises monotonically with decreasing temperature from near 900~K down to the\nlowest measured temperatures of 80 mK, while the $c$-axis resistivity has a\nsimilar rising form interrupted only by a sharp fall just below the\nantiferromagnetic ordering temperature $T_N \\simeq 125$~K. The magnitude of\nthe resistivity is in the range of a few hundred $\\mu\\Omega$\\,cm, which is very\nlarge for a metal. First principles calculations predict a carrier density of\napproximately $2\\times 10^{28}$ m$^{-3}$, and for this density the measured\nresistivity would suggest an extremely short mean-free-path, well below one\nlattice spacing.\n\nFeCrAs has a hexagonal crystal structure (space group $P\\overline{6}2m$ with\nlattice constants $a$=6.068 \\AA\\ and $c$= 3.657 \\AA). The Fe sublattice forms\na triangular lattice of trimers, while the Cr ions form a highly distorted\nKagome framework within the basal plane (See Figure~\\ref{fig:structure}).\nHowever, the interlayer Cr-Cr distance is relatively short (3.657 \\AA),\nsuggesting that the interlayer hopping is substantial in this material. This is\nconsistent with the small resistivity anisotropy ($\\rho_c\/\\rho_{ab} < 2$). The\nCr magnetic moments order at T$_{N} \\sim$125 K forming a spin-density wave with\nthe ordered moments varying from 0.6 to 2.2 $\\mu_{B}$.~\\cite{Swainson10:88}\nGiven that the Cr magnetic moment measured with neutrons is proportional to\n$gS$ ($g$ is electron gyromagnetic ratio and $S$ is spin quantum number), it is\nlikely that Cr has valence of $3+$ (hence $S={3\\over 2}$) and therefore lacks\nan orbital degeneracy in pyramidal crystal field environment. In contrast to\niron based pnictides, earlier studies report that the Fe site in FeCrAs does\nnot carry an observable moment at any temperature. The neutron diffraction\nresults found no ordered moment at the Fe site.\\cite{Swainson10:88} A\nfluctuating Fe moment should result in some induced polarization when the Cr\nsublattice orders, but this is not observed in M\\\"ossbauer\nspectroscopy.\\cite{Rancourt:thesis} Linear spin density approximation\ncalculation suggests significant covalency between Fe and As, so that moment\nformation is negligible (i.e. it is below the Stoner criterion).\\cite{Ishida96}\nAll of these studies are also consistent with the suppressed Fe K$\\beta^\\prime$\nfluorescence line observed by X-ray emission spectroscopy, which is sensitive\nto any fluctuating moment down to the x-ray time scale.\\cite{Gret11:84} These\ncombined observations provide compelling evidence that any static and dynamic\nFe moment is negligibly small in FeCrAs.\n\nA number of theoretical studies have been devoted to understanding the strange\nmetallic properties of FeCrAs. 
The magnetic phase diagram of the coupled Fe\ntrimer lattice and the distorted Kagome lattice of Cr has been mapped out\npredicting magnetic order consistent with experiment.~\\cite{Redpath11:xx}\nGiven the lack of observable static magnetic order on the Fe sublattice, a\nhidden spin-liquid phase has been proposed arising from the close proximity to\na metal-insulator transition. The strong charge fluctuations associated with\nthis nearby critical point have been implicated as the origin of the unusual\ntransport properties.~\\cite{Rau11:84} An alternate explanation has been\nproposed in the context of ``Hund's metals'' where large localized moments are\ncoupled to more itinerant electrons.~\\cite{Nevi09:103,Yin11:10} There have been\nonly limited number of spectroscopic studies to put these theories to test.\nCharge excitations have been investigated using optical spectroscopy which\nrevealed that the anomalous temperature dependence of resistivity was dominated\nby the temperature dependence of scattering rate, rather than\ncarrier-concentration.\\cite{Akrap14:89} In addition, they found that two Drude\ncomponents with drastically different energy scales contribute to the low\nenergy charge dynamics. On the other hand, the spin dynamics in FeCrAs have not\nbeen investigated to date.\n\n\nIn this study, we apply neutron scattering to investigate the magnetic\nproperties of FeCrAs with emphasis on the static order and fluctuations\noriginating from the Cr$^{3+}$ sites. We first present diffraction work showing\nthe magnetic order associated with the propagation wave vector of\n$\\vec{q}_0=({1\\over3}, {1\\over3})$, is described with a mean-field critical\nexponent. We then measured the powder averaged fluctuations showing stiff\nmagnetic fluctuations extending up to at least $\\sim$ 80 meV, while the low\nenergy excitations seem to be well described with gapless spin waves emanating\nfrom the ordering wave-vector. These results illustrate spin excitations in\nFeCrAs resemble those in itinerant magnets. We further discuss the magnetic\nexcitation spectrum in the context of the unusual transport properties. Our\nfinding of a high energy scale for magnetic fluctuations suggests that magnetic\nfluctuations could be responsible for the anomalous scattering that is observed\nup to high temperatures, despite the N\\'eel temperature occurring at much lower\ntemperature. Although our observations do not directly speak to the mechanism\nof non-metallic and non-Fermi-liquid resistivity in the $T \\rightarrow 0$ K\nlimit, it seems natural to hypothesize that anomalous magnetic correlations\nbegin to form at very high temperature in FeCrAs, and continue to evolve down\nto very low temperature, somehow producing the non-metallic metal state.\n\n\n\\begin{figure}[t]\n \\includegraphics[width=3.25in]{fig1.eps}\n \\caption{\\label{fig:structure} (a) Crystal structure of FeCrAs illustrating\n the CrAs$_{5}$ pyramids and connectivity along the $c$-axis. (b) The\n structure projected onto the ab-plane.}\n\\end{figure}\n\n\\section{Experiment Details}\n\nThe powder samples of FeCrAs were prepared by melting high purity Fe, Cr, and\nAs in stoichiometric ratios following Ref. \\onlinecite{Katsuraki66:21}. A small\nsingle crystal (with mass 25 mg) was also produced by slow cooling from the stoichiometric\nmelt. The single crystal used here was from the same batch as those used in\nearlier transport and thermodynamic studies discussed in Ref. 
\\onlinecite{Wu09:85}.\n\nHigh energy inelastic neutron scattering measurements on powder samples were performed\nusing the MARI direct geometry chopper spectrometer (ISIS, Didcot).\nMeasurements were performed with incident energies of E$_{i}$=150~meV, and\n300~meV that were selected using the ``relaxed'' Fermi chopper spinning at\nf=300~Hz, and 450~Hz respectively with the data being collected in a time of\nflight mode. Details of the background subtraction are provided below.\nSingle crystal spectroscopy measurements were not successful owing to the small\nsample size.\n\nFurther higher resolution neutron spectroscopy measurements were performed on\nthe MACS cold triple-axis spectrometer (NIST, Gaithersburg). Instrument and\ndesign concepts can be found elsewhere.~\\cite{Rodriguez08:19,Broholm96:369}\nData was collected by measuring momentum space cuts at constant energy\ntransfers by fixing the final energy at E$_{f}$=2.4 meV\nusing the 20 double-bounce PG(002) analyzing crystals and detectors and varying\nthe incident energy defined by a double-focused PG(002) monochromator. Each\ndetector channel was collimated using 90$'$ Soller slits before the analyzing\ncrystal and a cooled Be filter was placed before the analyzing crystals.\nMaps of the spin excitations as a function of energy transfer were then\nconstructed from a series of constant energy scans at different energy\ntransfers. All of the data has been corrected for the $\\lambda\/2$\ncontamination of the incident-beam monitor and an empty cryostat measurement\nwas used to estimate the background.\n\nSingle crystal magnetic neutron diffraction measurements were performed on the\n1T1 thermal triple axis spectrometer (LLB, Saclay) utilizing an open\ncollimation sequence, double focusing monochromater and vertically focusing\nanalyzer. The crystal was aligned in the (HK0) scattering plane of the\nhexagonal unit cell for the duration of the experiment.\n\n\\section{Results}\n\n\\subsection{Magnetic order from neutron diffraction}\nNeutron diffraction characterizing the magnetic order is presented in\nFig.~\\ref{fig:order_param}. The resolution limited magnetic Bragg peaks in\nFig.~\\ref{fig:order_param} (b) confirm the presence of long-range magnetic\norder with a $({1\\over 3}, {1\\over 3})$ propagation vector as observed in\nprevious powder diffraction measurements.\\cite{Swainson10:88} The integrated\nneutron scattering intensity which is proportional to the squared magnetic\norder parameter is plotted in Fig.~\\ref{fig:order_param} (a). We observe the\nonset of magnetic Bragg intensity at $T_N\\!=\\!115.5(5)$~K, a temperature\nsignificantly lower than the $T_N\\!=\\!125$~K N\\'eel temperature extracted from\nresistivity and magnetic susceptibility measurements on the same sample\n(Ref.~\\onlinecite{Wu09:85}). The value of $T_N$ in FeCrAs is known to vary across\ndifferent samples between 100 and 125~K depending on the synthesis conditions\nand sample quality. 
Those samples with a higher T$_N$ are observed to have a\nsplitting of field cooled and zero field cooled magnetic susceptibility at\nlower temperatures and the highest quality samples are associated with the\nhighest $T_N$.\\cite{Wu11:170} However, neutron diffraction and magnetic\nsusceptibility measurements were performed on the same sample so the origin of\nthis discrepancy is presently not clear.\n\n\\begin{figure}[t]\n \\includegraphics[]{fig2.eps}\n \\caption{\\label{fig:order_param} (a) Integrated intensity of the\n (2\/3,2\/3,0) magnetic Bragg peak measured on 1T-1 (LLB), solid line is a\n fit to $\\left(1-T\/T_N\\right)^{2\\beta}$ with $T_N\\!=\\!115.5\\pm 0.5$~K\n and $\\beta\\!=\\! 0.54\\!\\pm\\!0.05$. (b) Transverse scans through the\n magnetic Bragg peak at (2\/3,2\/3,0).}\n\\end{figure}\nIn a mean field approximation, for localized magnetism, the critical\ntemperature is related to the magnetic exchange interaction via the relation,\n\\begin{eqnarray}\nk_B T_{N} \\sim k_{B}\\Theta_{CW}={2 \\over 3} S (S+1)zJ,\n\\label{Curie_Weiss} \\nonumber\n\\end{eqnarray}\nwhere $S$ is the spin value, presumably ${3 \\over 2}$ for Cr$^{3+}$, and $J$ is\nthe average exchange constant with $z$ representing the number of nearest\nneighbors. The FeCrAs magnetic structure is highly\nfrustrated~\\cite{Florez15:27} potentially making the sum over neighbors quite\ncomplicated. However, this expression does allow us to obtain an estimate of\nthe mean-field spin-wave velocity of $zSJa \\sim 3 k_B T_N a\/2(S+1)\\sim\n20$~meV$\\cdot$\\AA\\, if we assume local spin moments. We compare this energy\nscale to the measured spin fluctuations below.\n\nA fit of the temperature dependent integrated neutron intensity to a power law\nnear $T_{c}$ finds the mean-field critical exponent $\\beta$=0.54 $\\pm$ 0.05.\nThis differs from the critical exponent of $\\beta \\sim $0.25 found in iron based\nlangasite~\\cite{Stock11:83} and other two-dimensional triangular\nmagnets~\\cite{Kawamura88:63}. The fluctuations critical to magnetic order in\nFeCrAs also differ from iron based pnictides and chalcogenides which broadly\ndisplay Ising universality class\nbehavior.~\\cite{Wilson10:81,Wilson09:79,Pajerowski13:87,Stock16:28} However,\nthe mean-field critical exponent is expected for an itinerant ferromagnetic\ntransition. For example, Moriya's spin-fluctuation theory\npredicts the temperature dependence of $M \\sim\n(1-T\/T_c)^{1\/2}$.\\cite{Mohn_book}\n\n\\subsection{Magnetic dynamics from inelastic neutron scattering}\nWe now discuss the magnetic dynamics as measured by inelastic neutron\nscattering. Figure \\ref{exp} illustrates the high-energy spectroscopy\nmeasurements performed on the MARI chopper spectrometer. Panel (a) displays a\npowder-averaged energy-momentum map at 5~K showing the presence of scattering\nat low momentum transfers above $\\sim$ 50~meV which decays rapidly with $Q$.\nThe white region corresponds to where no data could be taken due to kinematic\nconstraints of neutron scattering imposed by a minimum scattering angle of\n$2\\theta \\sim 3 ^{\\circ}$. Panel (b) shows a constant energy cut illustrating\nthe presence of two components to the scattering: one rapidly decaying with\nmomentum, indicative of magnetic fluctuations and well described by the\nCr$^{3+}$ form factor, and the other slowly increasing at large momentum\ntransfers, characteristic of a phonon contribution. 
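This two-component structure is what the background separation described next exploits: the high-$Q$ part of a cut is modelled as $B_{0}+B_{1}Q^{2}$ and extrapolated to low $Q$, where the remainder is attributed to magnetic scattering. A schematic sketch of the idea (with synthetic numbers, not the measured data) is:
\begin{verbatim}
import numpy as np

# Synthetic constant-energy cut: a magnetic term that falls off with Q
# (form-factor like) plus a phonon term that rises roughly as Q**2.
rng = np.random.default_rng(0)
Q = np.linspace(0.5, 8.0, 80)                      # |Q| in 1/Angstrom
magnetic = 5.0 * np.exp(-(Q / 2.0) ** 2)
phonon = 0.2 + 0.15 * Q ** 2
I = magnetic + phonon + 0.05 * rng.standard_normal(Q.size)

high_Q = Q > 5.0                                   # assumed purely phonon
B1, B0 = np.polyfit(Q[high_Q] ** 2, I[high_Q], 1)  # linear fit in Q**2
I_magnetic = I - (B0 + B1 * Q ** 2)                # extrapolate and subtract

print(B0, B1)    # recovers ~0.2 and ~0.15 for this synthetic example
\end{verbatim}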
To extract magnetic\nfluctuations at high energy transfers, we relied on the fact that the magnetic\nscattering is confined to small momentum transfers and decays with increasing\n$Q$ while the phonon background increases with $Q^2$.\n\nWe have separated the two components by fitting the high angle detector\nintensity (where magnetic scattering is expected to be negligibly weak) to\n$I_{BG}=B_{0}+B_{1}Q^{2}$ and extrapolating to small momentum transfers. An\nexample of this analysis is illustrated by the dashed curves in Fig.~\\ref{exp}\n(b) which show a cut integrated over energies between 75 and 100 meV. The\ndashed lines in (b) show an estimate of the background based on a fit to the\nhigh angle detectors and also the Cr$^{3+}$ form factor scaled by a constant\nfactor to agree with the low-$Q$ momentum dependence. The result of applying\nthis analysis to each energy transfer and subtracting the high-$Q$ background\nis shown by the false color image in panel (c). Individual cuts integrating\nover $E$=[55,60] meV and $E$=[25,30] meV are plotted in panels (d) and (e).\nThe analysis successfully extracts magnetic intensity for energy transfers\nabove $\\sim$ 45 meV, but failed to separate out the magnetic and phonon\ncontribution at lower energy transfers resulting in an over subtraction of\nintensity. This is seen in the false color image in panel (c) and further\ndisplayed through constant energy cuts in panels (d) and (e). While the\nbackground subtraction works at large energy transfers as shown in panel (d),\nthe assumptions behind this background correction break down for low-energy\ntransfers, where the phonon scattering becomes intense and highly structured in\nmomentum as shown in panel (e). Therefore, we have removed the region below\n20~meV from the plots. We note that this technique for background subtraction\nhas been successfully applied previously to studying high energy $d-d$\ntransitions in NiO and CoO.~\\cite{Kim11:84, Cowley13:88} It was also applied\nto extract the magnetic fluctuations in $\\alpha$-NaMnO$_{2}$.\\cite{NaMnO2} In\nall of these cases the analysis was only applied to a region in momentum-energy\nwhere the powder averaged phonon contribution was small and unstructured in\n$Q$.\n\n\\begin{figure}[t]\n\\includegraphics[width=8.5cm] {fig3.eps}\n \\caption{\\label{exp} (a) Powder-averaged inelastic neutron spectrum in FeCrAs\n taken on MARI. The intensity between 75~meV and 100~meV is integrated and\n plotted as a momentum cut in panel (b). The blue dashed line is\n an estimate of the background from extrapolating from large momentum\n transfers as described in the text and the dark red curve is the scale\n magnetic Cr$^{3+}$ form factor. (c) Illustrates the same data as in panel\n (a), but with the background removed. The solid line at $E$=40~meV shows\n where the background subtraction fails due to strong and highly structured\n in momentum phonon scattering. Constant energy cuts from this panel are\n plotted in (d) for the energy interval $E$=[55,60] meV, and (e) for the\n energy interval $E$=[25,30] meV. }\n\\end{figure}\n\n\\begin{figure}[t]\n\\includegraphics[width=9.5cm] {fig4.eps}\n\\caption{\\label{constQ} The powder averaged magnetic response at 5 K in FeCrAs\n measured with (a) $E_{i}$=300 meV (MARI, ISIS) and (b) $E_{f}$=2.4 meV\n (MACS, NIST). The variation in pixel size as a function of energy transfer in\n the MACS data, panel (b), is due to the difference in the way the data is\n collected. 
MARI data was collected in a time of flight configuration while\n MACS is a triple-axis which each energy transfer corresponding to a different\n constant energy scan. (c-d) The powder averaged heuristic parametrization\n based on the single mode approximation (SMA) discussed in the text. The\n calculation was done assuming two dimensional linear spin-waves with a\n velocity of 200 meV $\\cdot$ \\AA. }\n\\end{figure}\n\n\\begin{figure}[t]\n\\includegraphics[width=8.5cm] {fig5.eps}\n\\caption{\\label{temp} The powder averaged magnetic response measured with\n $E_{i}$=150 meV at (a) T=5 K and (b) 150 K. (c-d) Constant energy cuts\n illustrating the decay of magnetic intensity with momentum transfer. Panel\n (e) illustrates an energy cut at 25 meV where phonon scattering prevents a\n reliable subtraction of the background.}\n\\end{figure}\n\nGiven the failure to extract reliable magnetic scattering below $\\sim$ 40 meV\nusing the MARI direct geometry spectrometer we have used the MACS cold\ntriple-axis spectrometer with a low fixed $E_{f}$=2.4 meV to investigate the\nmagnetic response at low energy transfers. This configuration kinematically\naffords access to low momentum transfers where phonon scattering is expected to\nbe negligible. The background corrected data from MACS is compared against the\nhigh-energy magnetic response extracted used in MARI in Fig.~\\ref{constQ}\n(a-b). Steeply dispersing magnetic fluctuations are observable at low energies\nbelow $\\sim$ 6~meV, emanating from $Q$ positions which correspond to the\npropagation vector of $\\vec{q}_{0} = ({1\\over3}, {1\\over3})$. Further magnetic\nfluctuations are observable above $\\sim$ 40 meV using MARI. Data between these\ntwo energy ranges, bridging the MACS and MARI data sets, could not be reliably\nextracted, as discussed above, due to both kinematic constraints of neutron\nscattering and also the substantial phonon background over this energy range\nhighlighted in Fig. \\ref{exp} panel (c).\n\nThe MACS data in Fig.~\\ref{constQ} (b) reveal additional weak magnetic scattering\nnear 3~meV suggestive of a second low-frequency magnetic mode. It is possible\nthat this mode is the second transverse mode (magnon) with a gap of about 3~meV\nresulting from a weak easy plane anisotropy. Another possibility is that this\nintensity arises from a longitudinal mode, similar to what has been found in\nother metallic magnets.\\cite{Endoh06} Experiments using single crystal\nsamples are necessary to address the nature of these low energy modes. The high velocity, or stiff, spin\nexcitations extend up to 6~meV beyond which they are outside of the observation\nwindow on MACS. The fact that these excitations form steep rods in $Q$ seen in\nthe MACS data allows us to speculate that they link to the high-energy response\nobserved on MARI. We discuss this point below by applying a parameterization,\nillustrated in Fig. \\ref{constQ} (c-d), based on the first moment sum rule.\n\n\nAbove, we have relied on the momentum dependence to extract the magnetic\nintensity. To further confirm the magnetic origin of the low-angle response,\nwe have measured the magnetic fluctuations at higher temperature shown in\nFig.~\\ref{temp} which plots the extracted magnetic scattering with\n$E_{i}$=150~meV at 5~K and 150~K, below and above $T_{N}$ respectively.\nBackground corrected false color maps at these two temperatures are shown in\npanels (a-b) with constant energy cuts shown in panels (c-d). 
While the\nlow-$Q$ excitations are still present at high temperatures, indicative of a\nlarge underlying energy scale, a decrease in the scattering confirms the\nmagnetic origin of this scattering present at small momentum transfers.\n\n\n\\subsection{Parameterization in terms of high velocity damped spin waves}\n\nThe two data sets from time of flight and triple-axis spectroscopy show\nmagnetic excitations at high and low energy regime quite clearly; however, we\nnote that it is difficult to measure magnetic excitations in the intermediate\nenergy regime connecting these two data sets because of strong phonon\nscattering. To illustrate a consistent link between the low and high energy\ndata sets, we have parameterized the spin fluctuations by high velocity damped\nspin-waves from the magnetic $=({1 \\over 3}, {1\\over 3})$ positions. We have\nsimulated the scattering using the following form motivated by the\nHohenberg-Brinkmann first moment sum rule applied in the case of a dominant\nsingle mode, known as the single mode approximation.~\\cite{Hohenberg74:10}\nThis approach has been applied to low-dimensional organic magnets (Refs.\n\\onlinecite{Hong06:74,Stone01:64}) and the form reflects that used to describe\nmagnetic excitations in powder samples of triangular magnets (Refs.\n\\onlinecite{NaMnO2,Wheeler09:79}).\n\n\\begin{eqnarray}\nS(\\vec{Q},E)\\propto {1\\over \\epsilon(\\vec{Q})} \\gamma(\\vec{Q}) f^{2} (Q) \\delta (E-\\epsilon(\\vec{Q})),\n\\label{equation_rho} \\nonumber\n\\end{eqnarray}\n\n\\noindent where $\\gamma(\\vec{Q})$ is a geometric term chosen to peak at the\nBragg positions with propagation vector $({1\\over 3} , {1\\over 3})$, $f^{2}(Q)$\nis the magnetic form factor for Cr$^{3+}$, $\\delta (E-\\epsilon(\\vec{Q}))$ is an\nenergy conserving delta function, and $\\epsilon(\\vec{Q})$ is the dispersion\nrelation for the spin excitations. We only consider Cr$^{3+}$ moments here\nbecause the iron moment is negligibly small as discussed above. Given that the\nscattering is concentrated at low momentum transfers and a large portion is\nkinematically inaccessible, we are not able to derive an accurate measure of\nthe total integrated intensity for comparison to sum rules of neutron\nscattering. For the calculations shown here, we have taken the spin wave\ndispersion to be two dimensional (within the $a-b$ plane) and also be linear\ngiven that no upper band is observed. Powder averaging was done using a finite\ngrid of 10$^{4}$ points and summed at each momentum and energy transfer.\nWe note due to powder averaging it is difficult to make any conclusions from\nthe data regarding any continuum scattering that may exist owing to\nlongitudinal spin fluctuations as observed in other itinerant\nsystems~\\cite{Stock15:114}. As displayed in Fig. \\ref{constQ} panel (d), the\ncombination of powder averaging results in scattering over an extended range in\nmomentum transfer. Given kinematics associated with the $({1\\over 3} , {1\\over\n 3})$ type order, we are not able to draw any conclusions about possible\nferromagnetic fluctuations that may exist near $Q$=0.\n\nThe results of this calculation using a linear and three dimensional spin-wave velocity of $\\hbar c$=\n200 meV $\\cdot$ \\AA\\ and performing the powder average are shown in Fig.\n\\ref{constQ}(c-d). 
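To convey the flavour of this powder average, the following Monte Carlo sketch (our own, deliberately simplified illustration: a single ordering wavevector rather than the full star of $({1\over 3},{1\over 3})$ positions, an isotropic linear dispersion, an intensity weight of $1/\epsilon$ only, and no magnetic form factor) accumulates the single-mode cross section on $10^{4}$ sampled points:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
c = 200.0      # assumed spin-wave velocity, meV * Angstrom
q0 = 0.69      # |(1/3,1/3,0)| in 1/Angstrom for a = 6.068 Angstrom
tau = np.array([q0, 0.0, 0.0])

n = 10_000
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)      # random directions of Q
Q_mag = rng.uniform(0.2, 2.0, n)                   # |Q| values to average over
eps = c * np.linalg.norm(Q_mag[:, None] * u - tau, axis=1)

# Accumulate S(|Q|, E) with weight 1/eps at E = eps(Q).
S, Q_edges, E_edges = np.histogram2d(Q_mag, eps, bins=(45, 60),
                                     range=[[0.2, 2.0], [0.0, 120.0]],
                                     weights=1.0 / eps)
# The lowest-energy intensity collects near |Q| = q0, producing the steep
# "rods" emanating from the ordering wavevector, as in the map above.
\end{verbatim}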
The calculation confirms that the two experimental data\nsets presented in Fig.~\\ref{constQ} (a-b) can be consistently understood in\nterms of high velocity spin-waves emanating from the (${1\\over 3}$, ${1\\over\n 3}$) positions. As seen in Fig. \\ref{constQ}(c), the magnetic form factor\nensures that the magnetic scattering is suppressed at large momentum transfers.\nThe value used in this calculation, $\\hbar c$= 200 meV $\\cdot$ \\AA\\, should be\nconsidered as a lower bound of the spin wave velocity. The steep velocity\nensures magnetic scattering is confined to low scattering angles as observed\nexperimentally which are eventually completely masked at high energy transfers\nby kinematic constraints of neutron scattering. One thing that is not clear in\nthis analysis is the highest energy scale of the steeply dispersing magnetic\nexcitations. Our measurements do not reveal a high energy peak in the powder\naveraged spectra that would result from an enhanced density of states for zone\nboundary spin waves and instead we observe an apparent high energy continuum.\nThis may be attributed to either a combination of kinematic constraints and the\nmagnetic form factor, or possibly to strong damping of the highest energy\nmagnetic excitations that results from coupling to conduction electrons. The\nlatter case occurs in classic itinerant magnets.~\\cite{Ishikawa77:16} We note that our model of three dimensional spin waves emanating from magnetic $({1\\over 3} , {1\\over 3})$ does not capture the momentum dependent intensity of the spin excitations at low energy transfers measured on MACS (Fig. \\ref{constQ} (b)). We speculate that such modulation with momentum originates from a more complex momentum dependence not captured in our analysis originating from unusual magnetic structure. To refine a model to capture this, single crystal data is required.\n\nOur parameterization of the data in terms of three dimensional and high velocity spin waves emanating from $\\vec{q}_{0}$=(1\/3, 1\/3) positions is arguably the simplest model that is consistent with the three dimensional nature of the resistivity and also the structure discussed above. However, it should be emphasized that powder averaging does mask features that would become clear in single crystals. It is possible that the three dimensional character of the spin excitations is only present at low energies crossing over to two dimensional excitations at higher energies. Indeed, our heuristic model does not capture the additional scattering at $\\sim$ 3 meV measured on MACS which could be suggestive of such a scenario. As an example, we point to powder averaged spin excitations in BaFe$_{2}$As$_{2}$~\\cite{Ewings08:78} which did give clear ridges of scattering up to high energies while later single crystal work confirmed the two dimensional character. Our data and parameterization does show that high velocity spin excitations are present up to unusually high energies in FeCrAs with the exact nature of the dimensionality made ambiguous from the powder averaging.\n\n\\section{Discussion}\nOur neutron diffraction measurements (with resolution $\\sim$ 2~meV) show that\nthe magnetic order sets in around $T_N=115$~K in FeCrAs, and the sublattice\nmagnetization is well described with mean-field critical exponent of\n$\\beta=1\/2$. The inelastic neutron scattering measurements show that the\nlow-energy spin excitations of FeCrAs are well-defined gapless spin-waves\nextending up to $\\sim$ 6 meV. 
The spin excitations are observed up to high\nenergy transfers of at least 80~meV. Such a large energy scale of these spin\nexcitations indicate an underlying magnetic energy scale that is significantly\nlarger than that estimated from local moment molecular field model (See\nSec.~IIIA).\n\nThese observations are quite reminiscent of the spin excitations observed in\nchromium metal. The incommensurate spin-density wave order in Cr is considered\nas a textbook example of magnetic order driven by the Fermi surface\nnesting.\\cite{Fawcett88} Its spin excitation spectrum has been the subject of\nintense investigation both theoretically and\nexperimentally.\\cite{Fedders66,Walker76,Fishman96,Fincher79,Lorenzo94,Endoh06}\nExperimentally, spin-wave like excitations with very steep dispersion have been\nobserved by inelastic neutron scattering\nmeasurements.\\cite{Fincher79,Lorenzo94,Endoh06} Theoretical studies showed that\nthe transverse spin fluctuations in the long wavelength limit can be described\nby spin-wave modes even for this type of itinerant\nsystems.\\cite{Fedders66,Walker76,Fishman96} That is, $\\omega=c|q|$, but the\nspin-wave velocity is given by $c=v_F\/\\sqrt{3}$, where $v_F$ is the Fermi\nvelocity which originates from charge physics and therefore is much larger than\ntypical spin-wave velocity observed in a localized spin model. In addition to\nthe transverse spin waves, a longitudinal mode is allowed and, in fact, has\nbeen observed to be quite strong.\\cite{Lorenzo94} The observed spin wave\nvelocity is a weighted combination of transverse and longitudinal modes and so\ncan differ significantly from $c=v_F\/\\sqrt{3}$. In Cr, the longitudinal\nfluctuation renormalizes the apparent spin-wave velocity down \\cite{Endoh06}\nand the apparent spin-wave velocity $\\hbar c$(Cr) is given by $\\hbar c$(Cr)$\n\\sim \\hbar \\sqrt{c_L c_T} \\sim 1000$~meV $\\cdot$ \\AA\\, where $c_L$ and $c_T$\ndenote longitudinal and transverse velocity of Cr.\n\nSince the carrier density in FeCrAs is known from the first principles\ncalculation ($n=2 \\times 10^{28}$~m$^{-3}$), we can estimate $v_F\/\\sqrt{3} \\sim\n4000$~meV \\AA. Although this value is much larger than the spin wave velocity\nused in Fig.~\\ref{constQ}, we do not consider this as significant numerical\ndiscrepancy. First, the spin-wave velocity used in Fig.~\\ref{constQ} is just a\nlower bound, and the data will be still adequately described with a larger\nvalue of $c$. Second, the Sommerfeld coefficient of 30~mJ\/mol K$^2$ suggests a\nlarge renormalization of the bare Fermi velocity. Finally, longitudinal\nmagnetic excitations are expected to reduce the apparent spin-wave velocity. 
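The quoted estimate is easy to reproduce, assuming a free-electron (parabolic) band at the calculated carrier density and ignoring any mass renormalization; the result, expressed as $\hbar v_F/\sqrt{3}$, is consistent with the $\sim 4000$~meV~\AA\ figure above:
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34     # J s
m_e = 9.1093837015e-31     # kg
e = 1.602176634e-19        # C
n = 2e28                   # carriers per m^3 (first-principles estimate)

k_F = (3 * np.pi ** 2 * n) ** (1.0 / 3.0)        # 1/m
v_F = hbar * k_F / m_e                           # m/s, free-electron mass

# convert J*m -> meV*Angstrom:  1 J = 1e3/e meV,  1 m = 1e10 Angstrom
to_meV_A = (1e3 / e) * 1e10
print(hbar * v_F / np.sqrt(3) * to_meV_A)        # ~3.7e3 meV*Angstrom
\end{verbatim}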
Of\ncourse a calculation based on the real band structure would be necessary to\nobtain a more quantitative comparison between itinerant theory and experiment.\n\nWe would like to point out that there is a growing list of materials which\ndisplay low-energy localized excitations but itinerant fluctuations at higher\nenergy transfers.\nFe$_{1+x}$Te~\\cite{Fruchart75:10,Koz13:88,Okada09:78,Rodriguez11:84,Yu2018} has\nbeen found to have localized transverse fluctuations at low-energies which\ncross over to high energy fluctuations resembling more itinerant\nfluctuations.~\\cite{Stock14:90,Stock17:95} CeRhIn$_{5}$ shows well defined\nlocalized spin waves which breakdown into a multiparticle\ncontinuum.~\\cite{Stock15:114} YBa$_{2}$Cu$_{3}$O$_{6+x}$ similarly displays\nlocalized low-energy fluctuations but itinerant fluctuations at high\nenergies.~\\cite{Stock07:75,Stock10:82} However, unlike above materials, FeCrAs\nis far from the quasi-two-dimensional limit. The observed weak resistivity\nanisotropy is a strong indicator of this, with an additional support provide by\nour observation of mean-field critical exponent. In the parent compounds of\niron or copper based superconductors, the critical behavior is usually governed\nby strong 2D fluctuations, giving rise to critical exponents in the range of $\\beta \\sim 0.2 - 0.3$, much smaller than the observed mean-field exponent.~\\cite{Bramwell1993,Wilson10:81,Wilson09:79,Pajerowski13:87,Stock16:28}\n\nWe now discuss the relation between the spin excitations and the unusual\nresponse measured in resistivity. The data shows fast spin excitations at low\nmomentum transfers. The MACS data illustrate that the excitations are\noriginating from finite-$Q$, but extend up to high energy transfers. A central\nquestion in FeCrAs is the origin of the unusual metallic properties with the\nresistivity increasing in a power-law fashion from 600 K. The resistivity from\nspin fluctuations has been suggested to have the following form in the context\nof work done on the cuprate superconductors.~\\cite{Keimer91:67,Moriya90:59}\n\n\\begin{eqnarray}\n\\rho(T) \\propto T \\int_{-\\infty}^{\\infty} {E \\over T} d\\left({E \\over T} \\right) {{e^{E\/T}} \\over {(e^{E\/T}-1)^{2}}}\\int d^{3}q \\chi '' (\\vec{q},E).\n\\label{equation_rho} \\nonumber\n\\end{eqnarray}\n\n\\noindent Given that the neutron scattering cross section $I(Q, E)\\propto S(Q,\nE)={1\\over \\pi} [n(E)+1]\\chi ''(Q, E)$, an energy independent $\\int d^{3}q \\chi\n'' (\\vec{q},E)$ would result in a resistivity which has linear temperature\ndependence. If, however, this local susceptibility integral term decreased\nslowly with increasing temperature, then a temperature independent resistivity\nmay be explained. While the kinematic constraints of our experiment preclude\nmeasurement of the temperature dependent local susceptibility, we do observe\nonly a weak decrease of high energy magnetic intensity with increasing\ntemperature implying that the associated change in local susceptibility is\nsmall. The large energy scale of the fluctuations inevitably will affect the\nresistivity over a very broad temperature scale.\n\nThe measurements above find two results in the context of the dynamics; first\nthat the magnetic excitations are gapless down to the energy scale set by the\n$\\sim$ 0.5 meV resolution of MACS; and second, the high energy scale\nfluctuations are present at high temperatures above T$_{N}$. 
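A rough numerical illustration connects these two observations to the resistivity integral above. Assuming, purely hypothetically, a featureless local susceptibility $\int d^{3}q\, \chi''(\vec{q},E)$ that is constant between a 0.5~meV cutoff and 80~meV, the integral yields a resistivity that keeps growing essentially linearly (up to a slowly varying logarithmic factor) over the entire range up to $\sim$900~K; a local susceptibility that instead decreases slowly with temperature would flatten $\rho(T)$, as noted above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

E_lo, E_hi = 0.5, 80.0        # meV: hypothetical band of gapless fluctuations
k_B = 0.08617                 # meV per K

def rho(T_kelvin):
    """rho(T) ~ T * int x e^x / (e^x - 1)^2 dx over x = E/T in the band."""
    T = k_B * T_kelvin
    integrand = lambda x: x * np.exp(x) / np.expm1(x) ** 2
    val, _ = quad(integrand, E_lo / T, E_hi / T)
    return T * val            # arbitrary units

for T in (50, 150, 300, 600, 900):
    print(T, rho(T))
# rho(T) grows roughly linearly with T over the whole range, so a band of
# gapless fluctuations extending to ~80 meV influences the resistivity up
# to ~900 K.
\end{verbatim}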
The\nlarge energy scale and gapless nature of the spin fluctuations may provide an\nexplanation for the unusual transport response. A similar coupling between\nspin fluctuations and the electron response was suggested in Fe$_{1+x}$Te which\nalso display little change in the resistivity over a broad range in\ntemperature.~\\cite{Liu09:80} Indeed, only when the magnetic fluctuations\nbecome gapped in Fe$_{1+x}$Te does the resistivity drop and the two can be\ncorrelated using the relation above.~\\cite{Rodriguez13:88} FeCrAs may\nrepresent an extreme example with gapless spin excitations that extend up to at\nleast 80 meV ($\\sim$ 926 K).\n\nIn summary, we studied critical behavior of the magnetic order parameter near\nthe Neel transition in FeCrAs, and observed that the temperature dependence of\nthe magnetic order parameter is described with the mean-field critical\nexponent. Our neutron spectroscopy measurements reveal high velocity gapless\nspin wave excitations which extend up to at least $\\sim$ 80 meV, which\nresembles spin excitations in itinerant magnets. We suggest that coupling\nbetween this broad-band spin fluctuations is the origin of the unusual\nresistivity measured in this ``nonmetal-metal\".\n\n\\begin{acknowledgements}\nWe acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), Canada Foundation for Innovation (CFI), and Ontario Innovation Trust (OIT).\nThis work was supported by the Carnegie Trust for the Universities of Scotland, the Royal Society, and the Engineering and Physical Sciences Research Council (EPSRC).\n\\end{acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\paragraph{}Stars form out of clouds of dense molecular gas and dust. From detailed studies of nearby molecular clouds, we have developed a picture of how stars with masses typically close to our Sun's form and evolve onto the main sequence (e.g., Shu, Adams, \\& Lizano~1987). A corresponding picture does not exist for the highest mass end of the stellar-mass spectrum. This is in part due to high-mass star-forming regions in our Galaxy being at greater distances, and thus being observed at lower spatial resolution, than low-mass regions. High-mass stars also form in highly clustered environments, whereas the well-studied nearby low-mass stars are typically more isolated (e.g. Taurus). Not even the basic formation mechanism of massive-star formation (scaled-up version of monolithic core collapse vs. competitive accretion formation; Shu et al.~1987; Bonnell \\& Bate~2006; McKee \\& Ostriker~2007) is yet well agreed upon, especially for the formation of the highest mass stars. It is possible that both processes are important in different regimes of the stellar mass spectrum. For low-mass stars, there are observational indicators of the evolutionary state of the protostar (e.g. $T_{bol}$ -- Temperature of a blackbody with a peak at the flux weighted mean frequency of the spectral energy distribution and $\\alpha_{IR}$ -- IR spectral index, defining the Class~0, I, II, III system; Lada 1987, Evans et al. 2009); a universal evolutionary sequence for high-mass stars is still being developed and that exact ordering of the possible observational indicators (e.g., the presence of a H$_2$O maser or a CH$_3$OH Class I or Class II maser, e.g. 
Plume et al.~1997; Shirley et al.~2003; De Buizer et al.~2005; Minier et al.~2005; Ellingsen et al.~2007; Longmore et al.~2007; Purcell et al.~2009; Breen et al.~2010) is still debated.\n\n\\paragraph{}One observational aspect that has limited our complete understanding of star formation is that we lack a complete census of the star-forming regions in our own Galaxy and, therefore, a census of their basic properties (size, mass, luminosity). Previous surveys of star-forming regions have been heavily biased. For instance, the earliest studies of dense molecular gas focused on known (optical or radio) H~{\\small \\sc II} regions where an O or B spectral type star had already formed. The discovery and cataloguing of UCH~{\\small \\sc II} regions (e.g. Wood \\& Churchwell 1989) extended studies to an earlier embedded phase, but still required the presence of a forming high-mass star. Infrared Dark Clouds (IRDCs), clouds of dust and gas that are opaque at mid-infrared wavelengths (i.e. 8~$\\mu$m), permitted less-biased studies of star forming regions through the earliest (prestellar) phases and across the stellar mass spectrum (Carey et al. 2009; Peretto \\& Fuller 2009); however, they were limited to clouds at near kinematic distances and typically observable only in the inner Galaxy ($-60 < \\ell < 60$ degrees). Dust continuum observations at far-infrared through millimeter wavelengths provide the least-biased means to survey star forming regions at all embedded evolutionary phases and a wide range of the stellar mass spectrum across the Milky Way Galaxy since the emission is optically thin, always present, and can trace small amounts of mass. \n\n\\paragraph{}In the past decade, new surveys of the Milky Way Galaxy have been made from mid-infrared through millimeter wavelengths. Several Galactic plane surveys are published or currently being observed, including the Bolocam Galactic Plane Survey (BGPS: Aguirre et al. 2011), the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL: Schuller et al. 2009), and the Herschel infrared Galactic Plane Survey (Hi-GAL: Molinari et al. 2010). The goals of these surveys are to look for the precursors to massive star formation in the Galaxy as a whole, without targeting individual regions known to contain forming stars. They are an integral part of completing a full census of star-forming cores and clumps in the Milky Way as they are sensitive to star formation at all stages.\n\n\\paragraph{}The Bolocam Galactic Plane Survey is a 1.1~mm continuum survey of the Galactic plane (Aguirre et al. 2011). Covering 220~square~degrees at 33$^{\\prime\\prime}$ resolution, the BGPS is one of the first large-area, systematic continuum surveys of the Galactic plane in the millimeter regime. The BGPS spans the entire first quadrant of the Galaxy with a latitude range of $|b|< 0.5$ degrees from the Galactic plane and portions of the second quadrant (Aguirre et al. 2011). The survey has detected and catalogued approximately 8400 clumps of dusty interstellar material (Rosolowsky et al. 2010). The BGPS is beam matched to the spectroscopic data we are taking in this paper. This allows us to easily compare the gas and dust in the same phase of star formation. The BGPS data are available in full from the IPAC website\\footnote{http:\/\/irsa.ipac.caltech.edu\/data\/BOLOCAM\\_GPS\/}.\n\n\\paragraph{}The vast majority of sources detected in the BGPS represents a new population of dense, potentially star-forming clumps in the Milky Way. 
The basic properties of these objects such as size, mass, and luminosity depend on the distance to the objects. However, since the BGPS observations are continuum observations, they contain no kinematic information. In this paper, we use the line-of-sight velocity (v$_{LSR}$) from a molecular line detection and a model of the Galaxy to determine a kinematic distance. Not only is the v$_{LSR}$ useful, but the line properties themselves can elucidate a number of properties of the dense gas in the clumps (e.g., virial mass, infalling gas, outflows, etc.).\n\n\\paragraph{}Most kinematic surveys of the Milky Way have been performed using low gas density tracers (e.g. H~{\\small \\sc I}: Giovanelli et al. 2005; $^{12}$CO: Dame et al. 2001, GRS($^{13}$CO): Jackson et al. 2006). With these low density tracers, most lines-of-sight in the Galaxy have multiple velocity components. To mitigate this, we choose dense gas tracers that will be excited exclusively in the BGPS clumps. Surveying dense gas has been done before using CS $J=2-1$ toward IRDCs (see Jackson et al.~2008). In this survey, we simultaneously observe two dense gas tracers HCO$^+$ $J=3-2$ and N$_2$H$^+$ $J=3-2$ using the 1~mm~ALMA prototype receiver on the Heinrich Hertz Submillimeter Telescope (HHT). The HHT resolution of $\\sim 30^{\\prime\\prime}$ at 1.1~mm is nearly beam-matched to the original BGPS survey, allowing a one-to-one comparison between these dense gas tracers and peak 1.1~mm continuum emission positions. These two molecular tracers have very similar effective excitation densities, n$_{eff}\\sim 10^4$ cm$^{-3}$, that are well-matched to the average density derived from the continuum-emitting dust (see Dunham et al. 2010). The effective excitation density for a molecular tracer is defined in Evans (1999) as the density at a given kinetic temperature required to excite a 1~K line for a column density of log$_{10}$~N~\/~$\\Delta v = 13.5$. To determine the effective density, we use RADEX, which is a Monte Carlo radiative transfer code (Van~der~Tak et al. 2007), assuming log$_{10}$~N~=~13.5~cm$^{-2}$ and $\\Delta v~=~1$ km s$^{-1}$. \n\n\\paragraph{}The chemistry of HCO$^+$ and N$_2$H$^+$ is useful, as these two molecules have opposite chemistries with respect to the CO molecule (J\\o rgensen et al. 2004). HCO$^+$ is created by CO and N$_2$H$^+$ is destroyed by CO. The formation routes of HCO$^+$ and N$_2$H$^+$ are dominated by the following reactions: \n\\begin{equation}\n H_3^+ + CO \\rightarrow HCO^+ +H_2\n\\end{equation}\n\\begin{equation}\n H_3^+ + N_2 \\rightarrow N_2H^+ + H_2\n\\end{equation}\n\\begin{equation}\n N_2H^+ + CO \\rightarrow N_2 + HCO^+\n\\end{equation}\nFor N$_2$H$^+$ to exist in large quantities, the gas must be cold where CO has frozen out onto dust grains. The ratio of N$_2$H$^+$\/HCO$^+$ emission can be a chemical indicator of the amount of dense, cold gas in BGPS clumps. Even with the modest upper energy levels, E$_{u}$\/k~(HCO$^+$ $J=3-2$)~=~25.67~K and E$_{u}$\/k~(N$_2$H$^+$ $J=3-2$)~=~26.81~K, these transitions can still be excited at low temperatures if the density of the gas is high enough. This brings up an interesting conflict for N$_2$H$^+$: chemically, it favors a low kinetic temperature where CO is depleted but higher T$_{kin}$ or higher gas density leads to a stronger excitation of the $J=3-2$ line.\n\n\\paragraph{}In this paper, we present the results for spectroscopic observations of 1882 BGPS clumps. 
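To make the kinematic-distance determination concrete, the following minimal sketch (our own illustration, assuming a flat rotation curve with the IAU constants $R_{0}=8.5$~kpc and $\Theta_{0}=220$~km~s$^{-1}$, and $\cos b \approx 1$ for $|b|<0.5$ degrees; the analysis in \S~4 may adopt a different rotation model) converts a measured $v_{LSR}$ and Galactic longitude into near and far distances. The ambiguity between the two must still be resolved separately.
\begin{verbatim}
import numpy as np

R0, Theta0 = 8.5, 220.0                      # kpc, km/s (assumed values)

def kinematic_distances(glon_deg, v_lsr):
    l = np.radians(glon_deg)
    # Galactocentric radius from v_lsr = Theta0 * (R0/R - 1) * sin(l)
    R = R0 * Theta0 * np.sin(l) / (v_lsr + Theta0 * np.sin(l))
    disc = R ** 2 - (R0 * np.sin(l)) ** 2
    if disc < 0:
        return None                          # velocity beyond the tangent point
    d_near = R0 * np.cos(l) - np.sqrt(disc)
    d_far = R0 * np.cos(l) + np.sqrt(disc)
    return d_near, d_far

print(kinematic_distances(30.0, 60.0))       # roughly 3.9 and 10.9 kpc
\end{verbatim}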
In \\S~2 we discuss source selection and observing, calibration, and reduction procedures. In \\S~3 detection statistics, line intensities, velocities and linewidths are analyzed. In \\S~4 we calculate the kinematic distance to each detected source and determine our size-linewidth relations, clump mass spectra, and present a face-on view of the Milky Way Galaxy based on kinematic distances determined from our sample.\n\n\\section{Observations, Calibrations, and Reduction}\n\\subsection{Facility and Setup}\n\n\\paragraph{}Observations were conducted with the Heinrich Hertz Submillimeter Telescope on Mount Graham, Arizona. The data were taken over the course of 44 nights beginning in February 2009 and ending in June 2009. We utilized the ALMA~Band-6 dual-polarization sideband-separating prototype receiver in a 4-IF setup (Lauria et al. 2006, ALMA memo \\#553). With this setup we simultaneously and separately observe both the upper and lower sidebands (USB and LSB, respectively) in horizontal polarization (H$_{pol}$) and vertical polarization (V$_{pol}$) using two different linearly polarized feeds on the receiver. The receiver was tuned to place the HCO$^+$ $J=3-2$ (267.5576259~GHz) line in the center of the LSB. The IF was set to 6~GHz, which offsets the N$_2$H$^+$ $J=3-2$ line (279.5118379~GHz) in the USB by $+47.47$~km~s$^{-1}$. The signals were recorded by the 1~GHz Filterbanks (1~MHz per channel, 512~MHz bandwidth in 4-IF mode; LSB velocity resolution $\\Delta v_{ch} = 1.12$~km~s$^{-1}$ and USB velocity resolution $\\Delta v_{ch} = 1.07$~km~s$^{-1}$) in each polarization and sideband pair (V$_{pol}$ LSB, V$_{pol}$ USB, H$_{pol}$ LSB, H$_{pol}$ USB). \n\n\n\\subsection{\\label{calibration}Calibration and Sideband Rejections}\n\n\\paragraph{}Every observing session utilized 3 types of observations to calibrate the velocity offset, temperature scale, and rejection of the sideband separating receiver. The antenna temperature scale T$^*_{A}$ is used at the HHT and is set by the chopper wheel calibration method (Penzias \\& Burrus 1973). This temperature scale was then converted to T$_{mb} = $T$^*_A \/ \\eta_{mb}$ by observing Jupiter and calculating the main beam efficiency ($\\eta_{mb}$; Mangum 1993)\n\\begin{equation}\n \\eta_{mb} = \\frac{ f_{rej} \\cdot T^*_{A} (Jupiter)}{J(\\nu_s,T_{Jupiter}) - J(\\nu_s,T_{CMB})} \\cdot \\left[ 1-\\exp \\left(-\\;ln\\: 2\\cdot \\frac{\\theta_{eq}\\theta_{pol}}{\\theta^2_{mb}} \\right) \\right]^{-1} \\;\\; ,\n\\end{equation}\n\\noindent where $J(\\nu,T_{B}) = \\frac{h \\nu\/k}{\\exp(h \\nu\/k T_{B} ) - 1}$ is Planck function in temperature units evaluated at the observed frequency $\\nu$ and brightness temperature T$_{B}$, T$^*_{A}$ is the average observed temperature of Jupiter in the bandpass, T$_{Jupiter} = 170$ K $\\pm 5$ K, T$_{CMB} = 2.73$ K, $\\theta_{eq}$ \\& $\\theta_{pol}$ are the daily equatorial and poloidal angular diameters of Jupiter, $\\theta_{mb}$ is the HHT FWHM (equal to 28.2$^{\\prime\\prime}$~LSB and 26.9$^{\\prime\\prime}$~USB). The sideband rejection correction factor is given by,\n\\begin{equation}\n f_{rej} = \\left(1+\\frac{I(T^*_{A} USB)}{I(T^*_{A} LSB)}\\right)^{-1}.\n\\end{equation}\n\\noindent We calculate $f_{rej}$, by measuring the integrated intensity of the flux that bleeds over from the LSB into the USB by observing S140, a source with very strong HCO$^+$ $J=3-2$ emission ($T_{mb} = 18$~K). The average rejection was $-13.8$~dB in V$_{pol}$ and $-15.2$~dB in H$_{pol}$. 
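Putting the calibration pieces together, a worked example of the $\eta_{mb}$ formula (with hypothetical but representative numbers; the per-session values actually used are those in Table~\ref{etamb}) is:
\begin{verbatim}
import numpy as np

h_over_k = 0.04799                 # K per GHz
nu = 267.5                         # GHz, HCO+ J=3-2
theta_mb = 28.2                    # arcsec, LSB beam FWHM
theta_eq, theta_pol = 40.0, 37.4   # arcsec, Jupiter diameters (example values)
T_jup, T_cmb = 170.0, 2.73         # K
TA_jup = 90.0                      # K, example antenna temperature of Jupiter

def J(nu_ghz, T):
    """Planck brightness in temperature units."""
    x = h_over_k * nu_ghz
    return x / np.expm1(x / T)

# -13.8 dB rejection corresponds to I_USB / I_LSB = 10**(-1.38)
f_rej = 1.0 / (1.0 + 10 ** (-1.38))

dilution = 1.0 - np.exp(-np.log(2) * theta_eq * theta_pol / theta_mb ** 2)
eta_mb = f_rej * TA_jup / (J(nu, T_jup) - J(nu, T_cmb)) / dilution
print(round(eta_mb, 2))            # ~0.7 for these example numbers
\end{verbatim}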
We ignore the difference in atmospheric opacities between 267~GHz and 279~GHz since it is small.\n\n\n\\paragraph{}We report the computed $\\eta_{mb}$ for each observing session in Table \\ref{etamb} (see Figure \\ref{calibplot}). Each data point for $\\eta_{mb}$ in Figure \\ref{calibplot} consists of the average of five or more observations of Jupiter each night. There are no apparent trends in $\\eta_{mb}$ versus time except on MJD 918 -- 920, when $\\eta_{mb}$ for V$_{pol}$ is significantly lower than for the rest of the observing sessions. For these dates we choose to use the average $\\eta_{mb}$ and treat them independently from the rest of the calibration. A drop in the integrated intensity, I(T$_A^*$), is also seen in the data taken of the two spectral line calibration sources, S140 and W75(OH) (Figure \\ref{calibplot}), supporting the decision to treat these days separately. \n\n\\paragraph{}We also compared $\\eta_{mb}$ for the two polarizations and sidebands against each other (Figure \\ref{polcompjup}). The two upper panels compare the USB and LSB of each polarizations against each other. These are highly correlated with Spearman's rank coefficients of $\\rho \\sim 1$, which is expected, as each linear polarization has its own feed. The Spearman's rank coefficient is a measure of the monotonic dependence between two variables. The lower panels compare the two polarization feeds of the LSB and two polarization feeds of the USB against each other. These are less well-correlated with Spearman's rank coefficients of $\\rho \\sim 0.5$ and show that the variation we see in $\\eta_{mb}$ does not come from a systematic effect that affects both polarizations at the same time.\n\n\\paragraph{}We observed each source in the catalog described in \\S~\\ref{catalog} for 2~minutes total integration time. We position switched between a common OFF position for each 0.5~degrees in Galactic longitude ($\\ell$). Each of these OFF positions was observed for 6~minutes to check if the OFF position had any detectable line emission. For a subset of sources near the end of our observations, starting on MJD~$-$~245400~=~986, the integration time was increased to 4 or 10 minutes to compensate for deteriorating weather. Our goal was to keep the baseline rms less than 100~mK ($\\Delta v_{ch} \\sim 1$~km~s$^{-1}$) for as many sources as possible. S140 was also used to calculate the Allan Variance of the ALMA prototype receiver (Schieder \\& Kramer 2001). Accounting for the Allan Variance, we determined that $\\sim 20$ seconds was the optimal switching time between ON and OFF positions for position switching with the 1~mm~ALMA prototype receiver.\n\n\\subsection{\\label{catalog} Source Selection}\n\n\\paragraph{}We selected sources out of a preliminary version of the BGPS source catalog (BOLOCAT, Rosolowsky et al. 2010). We used the BOLOCAT version 0.7 to divide sources into logarithmic flux bins with equal numbers using the 40$^{\\prime\\prime}$ aperture flux. We point at peak $1.1$ mm continuum positions as listed in the BOLOCAT and we restricted the range of Galactic longitudes to fall between $10^\\circ \\le \\ell \\le 100^\\circ$. The entire BOLOCAT in this longitude range was divided into logarithmically-spaced flux bins from S$_{1.1mm} = 0.1$ Jy to S$_{1.1mm} \\sim 0.4$ Jy in intervals of log$_{10}($S$_{1.1mm}) = 0.1$. All sources greater than S$_{1.1mm} \\sim 0.4$ Jy were included for observation ($N=689$). 
Below this flux, 100 sources per bin were selected at random to comprise a flux-selected set of $N=1,289$ sources from the BOLOCAT v0.7. In addition, we observed all BOLOCAT v0.7 sources in $\\ell$ ranges of 10$^{\\circ}$ -- 11.5$^{\\circ}$, 15$^{\\circ}$ -- 21$^{\\circ}$, and 80$^{\\circ}$ -- 85.5$^{\\circ}$. These ranges were chosen due to a combination of overlap with other surveys and observing availability. Sources with $\\ell>100^{\\circ}$ were taken from the BOLOCAT v1.0 and were only restricted by observability.\n\n\\paragraph{}Near the end of our spectroscopic survey, the BGPS version 1.0 maps and the BOLOCAT v1.0 were released. We recomputed the photometry of our sources at the observed v0.7 positions on the v1.0 maps using the HHT beamsize, this is the data presented in \\S~3. A correction factor of 1.5 was multiplied to all of the 1.1~mm fluxes (see Aguirre et al. 2011). This calibration factor was determined by the BGPS team to account, in part, for the spatial filtering present in the BGPS v1.0 maps and possible calibration differences between the BGPS and other surveys. This factor brings the BGPS fluxes inline with those of other surveys (e.g. M\\H{o}tte et al.~2007).\n\n\\paragraph{}Between versions of the catalogs the algorithms for processing the images were improved and thus source peak continuum positions may have moved or sources may even be removed from the catalog. In \\S~4 we compare physical properties of the sources and need to know the overall source properties not just the photometry of the locations we observed. We take the nearest source from the v1.0 catalog to the position we observed. The median offset between observed v0.7 positions and v1.0 positions is $6.4^{\\prime\\prime}$. The vast majority ($83$\\% ) of v0.7 observed positions lie within 15$^{\\prime\\prime}$ (1\/2 the beam width) of the nearest v1.0 position.\nSince the median angular size of our observed sources is $\\sim 60^{\\prime\\prime}$, the small positional differences between v0.7 and v1.0 do not significantly affect the physical properties derived from the 1.1 mm emission (e.g., size, mass). No source for which we have resolved the distance ambiguity and derived physical properties for in \\S4 (Known Distance Sample) has an offset between the catalogs greater than 30$^{\\prime\\prime}$ (one beam width) when determining physical properties. \n\n\\paragraph{} In the following analysis, we refer to the ``Full Sample'' of 1882 observed sources and the ``Deep Sample'' of $707$ sources where the entire BOLOCAT was observed within the longitude ranges described above. Figure \\ref{lvb} shows the location of the sources we observed in the Full Sample and the sources in the Deep Sample. \n\n\\subsection{\\label{reduction} Data Reduction}\n\n\\paragraph{}The spectra were reduced using scripts we developed for the CLASS software package\\footnote{http:\/\/www.iram.fr\/IRAMFR\/GILDAS\/}. In 4-IF mode, there are four filterbank spectra for each source. The HCO$^+$ $J=3-2$ spectrum was used to determine the baseline window, typically $\\pm$50~--~75~km~s$^{-1}$ from the line center, and to determine the line window, typically $\\pm$ 10~--~15~km~s$^{-1}$ from line center. The two polarizations of HCO$^+$ $J=3-2$ data were then baselined, and averaged together. This averaged spectrum is used to determine the line flag of the observed source and the line flag is determined from the approximate line shape (see Table \\ref{lineflags}). 
A flag of \\textit{0} means there is no apparent line in the spectrum at any velocity. A flag of \\textit{1} indicates a single line in the spectrum and the line exhibits no apparent structure from a line wing or self-absorbed profile. A flag of \\textit{2} means there was confusion along the line of sight and multiple velocity components are observed. In this case there is no way of determining which source is associated with the 1.1~mm map without mapping the molecular emission. A flag of \\textit{4} means there was a possible line wing. A Gaussian fit is plotted over the data to emphasize any deviations from a Gaussian shape; this helped to accentuate any sources with line profiles with a red or blue line wing, or possibly both. A flag of \\textit{5} means the line profile was possibly self-absorbed. Examples of spectra for the various flags are shown in Figure \\ref{sampplots}.\n\n\\paragraph{}The N$_2$H$^+$ $J=3-2$ line is offset $+47.47$~km~s$^{-1}$ from the center of the USB; therefore, each of the HCO$^+$ baseline windows is shifted by that offset in order to baseline the N$_2$H$^+$ data. The N$_2$H$^+$ spectra are baselined and averaged in the same manner as above and the line is flagged for its quality and structure. \n\n\\paragraph{}After each spectrum is flagged, it is converted to the T$_{mb}$ scale and the two polarizations are averaged together. Each spectrum is corrected with the corresponding $\\eta_{mb}$ given by the date it was observed and its polarization, as explained in \\S \\ref{calibration}. The spectra are weighted by their baseline rms values, averaged, and baselined. The resulting combined spectra are used in our analysis. \n\n\n\\subsection{Analysis Pipeline}\n\n\\paragraph{}Once the spectrum for each source has been calibrated and averaged, the next step is to measure the line properties. The analysis of all the spectra is performed in IDL using custom and ASTROLIB routines, and all CLASS spectra are exported to an {\\small \\sc ASCII} file containing the final spectrum. The peak temperature is given by the maximum temperature within the defined line window and the error is the rms of the data outside of that window. The integrated intensity, central velocity, and line width are computed using both an analysis of the 0$^{th}$, 1$^{st}$, and 2$^{nd}$ moments and by fitting a Gaussian model to the spectral line. \n\n\\subsubsection{Moment Analysis}\n\n\\paragraph{}The moments of a spectral line are calculated using\n\\begin{equation}\n M_n = \\sum_{i=v_l}^{v_u} T_i v_i^n \\Delta v_{ch},\n \\label{momenteqn}\n\\end{equation}\n\\noindent where $n$ is the moment and $i$ represents each channel between the $v_l$ and $v_u$, defining the line window. These moments are then used to compute the integrated intensity, central velocity, and FWHM using\n\\begin{equation}\n I(T_{mb}) = M_0\n\\end{equation}\n\\begin{equation}\n v_{cen} = \\frac{M_1}{M_0}\n\\end{equation}\n\\begin{equation}\n v_{FWHM} = \\sqrt{ 8 \\; \\ln{2}} \\cdot \\left(\\frac{M_2}{M_0} - v_{cen}^2 \\right)^{1\/2}.\n\\end{equation}\n\n\\noindent Moment calculations are sensitive to the rms of the baseline and our generously large line windows. For lower signal-to-noise lines, a small noise feature in the spectrum can drastically change the first moment when using all of the data points in the line window. To compensate for low signal-to-noise we estimate the line center using only data three times the baseline rms in the first moment calculation. 
This new method returns velocities and widths that more closely match those that are determined by eye than the method using all of the data within the line window. This only has significant effects for low signal-to-noise spectra.\n\n\\subsubsection{\\label{gaussian}Gaussian Fitting}\n\n\\paragraph{}Another method of determining the central velocity and FWHM for a spectral line is to fit a Gaussian model to the line profile. This method has its drawbacks as well, but the main drawback is that it struggles with lines that deviate strongly from Gaussian shaped line profiles. Examples of such are lines with self-absorbed profiles or lines with very prominent line wings, (Figure \\ref{sampplots} (c) and (d)). The Gaussian fits are computed with the MPFITPEAK function (Markwardt 2009) and return a reduced $\\chi^2$. The boundary conditions chosen are the following: $1)$ the baseline is defined to be 0, $2)$ the peak line temperature of the Gaussian is defined to be positive, $3)$ the central velocity of the line must be within the line window, and $4)$ the FWHM must be smaller than the line window. For the starting parameters of the fit, we use the results from the moment analysis. \n\n\\paragraph{}At this point we refine our method of computing the desired quantities for the line shape. After the initial Gaussian fit is completed, the next step is to modify the line window and recompute both the moment analysis and the Gaussian fitting. To modify the line window we used the parameters of the line as determined by the Gaussian fit to center the line window on the line center, v$_{Gauss}$, and extend it by the measured linewidth, $\\pm 2 \\cdot \\sigma_{Gauss}$. If either bound of this new line window extends outside of the original, the original bounding velocity is used for that term. The fit and moments are recomputed but do not yield significant changes for most sources. At this time $\\sigma_I$ (the error on the integrated intensity) is calculated using the final ``fit'' line window. $\\sigma_I=\\sigma_T \\cdot \\sqrt{\\delta v_{ch}\\cdot (v_u-v_l)}$ where $v_{ch}$ is the channel width ($\\delta v_{ch} = 1.1$ km\/s), $v_u$ and $v_l$ are the upper and lower bounds of the line window, and $\\sigma_T$ is the baseline rms. If the line was undetected, flag of \\textit{0}, the line window used to estimate the error is $\\sim 6$~km~s$^{-1}$, within which more than 95\\% of all measured FWHMs lie (Figure \\ref{linewidthhist}). The Gaussian fits are used to determine the HCO$^+$ and N$_2$H$^+$ central velocities and the HCO$^+$ linewidths while the zero$^{th}$ moment is used to calculate the integrated intensity. For N$_2$H$^+$ linewidths, we use an IDL script that deals with the hyperfine lines (assuming a Gaussian shape for each hyperfine line) and uses MPFIT to determine the best-fit line profile. For the rest of the paper, linewidths refer to only the Gaussian-fit, observed HCO$^+$ linewidth.\n\n\\section{\\label{analysis}Detection Statistics and Analysis}\n\n\\paragraph{}In this section, we discuss the statistics of the detected sources in HCO$^+$ and N$_2$H$^+$ and correlations between integrated intensity for HCO$^+$ and N$_2$H$^+$ with respect to the two sample groups. 
We also discuss the determined v$_{LSR}$ and the analysis of the line centroids and linewidths.\n\n\\subsection{\\label{detectionstats}Detection Statistics}\n\n\\paragraph{}Each source has two flags, one for HCO$^+$ and one for N$_2$H$^+$; multiple flags are not set for any of the sources\/tracer pairs; it is either a detection (1,2,4,5) or a non detection (0). A breakdown of the number of sources with each flag is shown in Figure \\ref{stats}. Out of a total of 1882 sources observed we detect 1444~(76.7\\%) in HCO$^+$ and 952~(50.5\\%) in N$_2$H$^+$ at a $3\\sigma$ or greater level. Out of 1444 HCO$^+$ detections, 1119~(77.49\\%) are single-velocity component detections, 39~(2.70\\%) are multiple-velocity component detections, 67~(4.64\\%) have possible line wings, and 219~(15.17\\%) have a possible self-absorption profile. For the 952 N$_2$H$^+$ detections, 919~(96.53\\%) are single-velocity component detections, 14~(1.47\\%) are multiple-velocity component detections, 6~(0.63\\%) have possible line wings, and 13~(1.37\\%) have a possible self-absorption profile. Breaking the sources down into the ``Deep Sample'' where we observed every source in the BOLOCAT v0.7 in certain $\\ell$ ranges, we find slightly lower detection rates: 72.6\\% and 41.2\\% of the $N=707$ sources are detected in HCO$^+$ and N$_2$H$^+$ respectively (See Figure \\ref{stats} for the breakdown of flagging statistics for the ``Deep Sample''). Detection statistics versus 1.1~mm dust emission are presented in Figure \\ref{percentile}. For sources in the lowest flux percentile, we detect barely 40\\% in HCO$^+$. Sources in the highest flux percentile have a 99\\% detection rate in HCO$^+$ and 88\\% in N$_2$H$^+$.\n\n\\paragraph{}We find that nearly all (99.6\\%) of N$_2$H$^+$ detections are associated with an HCO$^+$ detection at the $\\ge3\\sigma$ level. Only 4 sources that are detected in N$_2$H$^+$ do not have a $3\\sigma$ detection in HCO$^+$. These sources are approximately $2\\sigma$ detections and show a small amount of HCO$^+$ emission at the correct velocity to be associated with the N$_2$H$^+$ emission. The HCO$^+$ line flag statistics change in a fairly interesting way for the sources with detected N$_2$H$^+$. The percentage of sources showing a possible self-absorbed profile increases from 15.17\\% to 21.10\\%. The percentage of single line detections drops by a similar amount. About 1\\% of sources show possible self-absorption in both HCO$^+$ and in N$_2$H$^+$. \nFor sources that display self absorption in N$_2$H$^+$, 11 of 13 also showed self absorption in HCO$^+$. \n\n\n\\paragraph{}A recent mapping survey of these two molecular transitions toward a sample of IRDCs has shown that HCO$^+$ and N$_2$H$^+$ $J=3-2$ emission has a very similar extent and morphology to the 1.1~mm emission (Battersby et al. 2010). To completely understand the physical properties of the gas that is excited in these transitions, we require a detailed source model and radiative transfer modeling of multiple transitions in each species. However, from our astrochemical knowledge of these two species, we can make some general statements about the regions where they are excited. HCO$^+$ probes clumps with a wide range of properties. It can exist in warm regions where CO is abundant (e.g., Reiter et al. 2011) and cold regions where CO has frozen out onto dust grains (e.g., Gregersen and Evans 2000). 
It is possible that HCO$^+$ is depleted by freeze-out in some of these clumps with cold ($T < 20$ K) dense ($n > 10^4$ cm$^{-3}$) gas within the HHT beam (see Tafalla et al.~(2006) for examples observed toward low mass cores), although our observations indicate that this mechanism is unlikely to be dominant in BGPS clumps. HCO$^+$ $J=3-2$ emission likely originates in the dense, warm inner regions of these clumps. In contrast, N$_2$H$^+$ is destroyed by CO in the gas phase, and thus N$_2$H$^+$ is most abundant in cold, dense gas where the CO abundance is depleted (J\\o rgensen et al. 2004). Thus, in star-forming clumps with a strong temperature increase toward the center, N$_2$H$^+$ may only be tracing the outer parts of the clumps where the gas is still relatively dense and cold. This chemical differentiation of N$_2$H$^+$ has been mapped in a few high-mass star-forming regions (Pirogov et al. 2007; Reiter et al. 2011; Busquet et al. 2010), although the differentiation mostly occurs on size scales that are unresolved within our $30^{\\prime\\prime}$ beam.\n\n\n\\paragraph{}In nearly 12\\% of sources, the HCO$^+$ line profiles display apparent self-absorption. For an optically thick line profile, a blue asymmetry (redshifted self-absorption) may be an indication of infalling gas (see Myers et al. 2000); however, the blue asymmetric profile can also be created by rotating and outflowing gas (Redman et al.~2004). For a large sample of sources, it is possible to statistically identify infall in the population by searching for an excess of blue asymmetric profiles. Infall does not create a red asymmetric profile in centrally heated, optically thick gas while rotation and outflow can equally produce both blue and red asymmetric profiles. Surveys of high-mass star-forming regions in HCN $J=3-2$ have shown statistical excesses in blue asymmetric profiles (Wu \\& Evans 2004). In order to calculate the line asymmetry of our subset of sources with self-absorbed profiles, we must obtain observations of an optically thin isotopologue (H$^{13}$CO$^+$) to discriminate between self-absorption and a cloud with two closely-spaced velocity components along the line of sight. We shall observe this subset of sources in H$^{13}$CO$^+$ $J=3-2$ in a future study.\n\n\n\\subsection{Integrated Intensity and Peak Line Temperature Analysis}\n\n\\subsubsection {Comparison of Molecular Emission}\n\n\\paragraph{}Figures \\ref{ithist} (a) and (b) show the difference in the distributions of line temperature and integrated intensity for HCO$^+$ and N$_2$H$^+$ $J=3-2$. The HCO$^+$ emission extends to far greater intensities than N$_2$H$^+$, whose distribution seems to be truncated at high intensities. We find, on average, in our Full Sample, HCO$^+$ $J=3-2$ to be 2.18 times as bright as N$_2$H$^+$ $J=3-2$ in integrated intensity I(T$_{mb}$) (Figure \\ref{HCOPvN2HP} (a)). There are a small number of sources (11.8\\%, $N=114$) detected in both HCO$^+$ and N$_2$H$^+$ that show stronger N$_2$H$^+$ $J=3-2$ emission than HCO$^+$ $J=3-2$ emission. For these sources 1\/3 are self absorbed in HCO$^+$. The integrated intensity of each species is highly correlated with a Spearman's rank correlation coefficient of $\\rho=0.82$. The slope of a linear regression (MPFIT) is $m=0.82$, taking into account errors in x and y directions. The highest intensity points appear to form a tail turning upward on the plot of I(T$_{mb}$ HCO$^+$) versus I(T$_{mb}$ N$_2$H$^+$). 
This would be expected for the warmest clumps since the N$_2$H$^+$ abundance is expected to decrease in warmer regions, which should be more prevalent toward brighter 1.1~mm sources (\\S 3.2.2).\n\n\n\\paragraph{}The peak line temperature tells a similar story to the integrated intensities. The upward curving tail of points at the brightest end of the Full Sample is less noticeable for peak line temperature (Figure \\ref{HCOPvN2HP} (b)). The average ratio of peak line temperatures is T$_{mb}($HCO$^+$ $J=3-2)\/$T$_{mb}($N$_2$H$^+$ $J=3-2) = 1.94$ and the correlation coefficient is $\\rho = 0.79$. The best fit line has a slope of $m=0.83$. \n\n\n\\paragraph{}It is interesting that HCO$^+$ and N$_2$H$^+$ $J=3-2$, with their similar effective densities but different chemistries, are so highly correlated. Their similar effective densities should result in their emission being co-spatial; however, their chemical differences should result in differentiation (e.g., J\\o rgensen et al. 2004; Pirogov et al. 2007). It is likely that any differentiation is unresolved within our beam, which averages over the densities, temperatures, and abundance structure on size scales of a few tenths of a parsec (e.g., Battersby et al. 2010; Reiter et al. 2011). \n\n\n\\subsubsection{Comparison with Millimeter Continuum Emission}\n\n\\paragraph{}In Figures \\ref{ITvs1mm} (a) and (b), we show the integrated intensity of HCO$^+$ $J=3-2$ and N$_2$H$^+$ $J=3-2$ versus the 1.1~mm flux per beam obtained from the BOLOCAT v0.7 positions on the v1.0 BGPS maps (Rosolowsky et al. 2010). The Spearman's rank coefficient for the two species are $\\rho_{HCO^+} = 0.80$ and $\\rho_{N_2H^+} = 0.73$. The slopes are $m_{HCO^+}=1.15$ and $m_{N_2H^+} = 1.28$. For HCO$^+$, the high 1.1~mm flux points have a tail that continues curving up toward higher HCO$^+$ emission with increasing 1.1~mm flux. In contrast, the N$_2$H$^+$ $J=3-2$ emission for high 1.1~mm fluxes shows a flattening which is again consistent with N$_2$H$^+$ being less abundant in warm sources. The median ratio of integrated intensity to 1.1~mm emission is 6.32~K~km~s$^{-1}$ per Jy\/beam and 3.27~K~km~s$^{-1}$ per Jy\/beam for HCO$^+$ $J=3-2$ and N$_2$H$^+$ $J=3-2$, respectively.\n\n\\paragraph{}Comparing the peak line temperatures of the molecular emission versus the 1.1~mm dust flux (Figures \\ref{ITvs1mm} (c) and (d)) leads to similar results as the integrated intensities. The correlations are still significant: $\\rho_{HCO^+} = 0.75$ and $\\rho_{N_2H^+} = 0.73$. The slopes are lower than for integrated intensity: $m_{HCO^+}=0.88$ and $m_{N_2H^+} = 0.98$. The median ratio of peak line temperature to 1.1~mm emission is 1.76~K per $Jy\/beam$ and 1.06~K per $Jy\/beam$ for HCO$^+$ $J=3-2$ and N$_2$H$^+$ $J= 3-2$, respectively.\n\n\\paragraph{}We also compare the ratios of the integrated intensities and the peak line temperatures of HCO$^+$ and N$_2$H$^+$ with 1.1~mm flux in Figure \\ref{ratiov1mm}. Both ratios are uncorrelated with the 1.1~mm dust emission. Surprisingly, there is a wide range in the observed intensity and peak temperature ratios, even for bright sources with fluxes above $1$~Jy. Even in these brightest 1.1~mm sources, the N$_2$H$^+$ $J=3-2$ emission can be strong, indicating significant amounts of unresolved dense, cold ($T < 20$~K) gas within the beam.\n\n\n\\subsection{Velocities and Linewidths}\n\n\\paragraph{}We use the Gaussian fit velocity centers of the HCO$^+$ and N$_2$H$^+$ lines as described in \\S \\ref{gaussian} to determine v$_{LSR}$. 
We plot v$_{LSR}$ versus Galactic longitude in Figure \\ref{lvlsr} $(a)$. We find the distribution of v$_{LSR}$ in the dense gas tracers is comparable to that of CO 1-0 (Dame et al. 2001). The spread in dense gas velocities is very similar to the spread in CO emission at each $\\ell$ when our data is overplotted on the Dame et al. (2001) $v-\\ell$ map. The v$_{LSR}$ determined for sources detected in both HCO$^+$ and N$_2$H$^+$ agree very well, see Figure \\ref{lvlsr} $(b)$. We use only the Gaussian fit HCO$^+$ velocities in \\S4 to calculate the kinematic distances of sources.\n\n\\paragraph{}Figure \\ref{lvlsr} $(c)$ shows the FWHM of our detected HCO$^+$ lines versus Galactic longitude. There is no trend with $\\ell$ apparent in the sources we have observed. The few bins where the linewidth seems to vary by an appreciable amount have small numbers of sources in them. There is a moderate relationship in the plot of $\\Delta v[$HCO$^+]$ versus $I(T_{MB})[$ HCO$^+]$, see Figure \\ref{deltavIhcop} $(a)$. This is expected, as the integrated area of a line is directly related to the peak temperature multiplied by the FWHM. Given the relationship between $\\Delta v[$ HCO$^+]$ and $I(T_{MB})[$ HCO$^+]$ and $S_{1.1mm}$ and $I(T_{MB})[$ HCO$^+]$, it is logical to expect a trend of $\\Delta v$ with $S_{1.1mm}$; Figure \\ref{deltavIhcop} (b) shows this trend. A moderate correlation also exists between the linewidth and the 1.1~mm dust emission per beam at the BOLOCAT v0.7 positions; however, there is large amount of scatter around this trend.\n\n\n\\section{Discussion}\n\\subsection{Kinematic Distances}\n\n\\paragraph{}We use the kinematic model of the Galaxy as defined by the parameters determined by Reid et al. (2009) to calculate the near and far distances to BGPS clumps. One thing to note is that the distance determination for Reid et al. (2009) assumes all motions are in the azimuthal direction only and does not account for any radial streaming which is known to exist near $\\ell \\sim 0$. This model sets the distance from the Galactic center to the Sun to be $R_0 = 8.4\\pm0.6$ kpc and the circular rotation speed $\\Theta_0=254\\pm 16$~km~s$^{-1}$ from VLBI parallax measurements. We then use these parameters and the kinematic definition of v$_{LSR}$ to compute the distances to all of our sources with single HCO$^+$ velocity components. \n\n\\paragraph{}The distances for all detected sources are plotted versus Galactic longitude in Figure \\ref{lvdist}. In the first quadrant ($0^\\circ \\le \\ell < 90^\\circ$), a velocity will give two distances that are degenerate. Without further information, we cannot tell if a source is on the near side or the far side of the galaxy. For sources that are known to be within a given region, and thus approximately the same distance, we can quantify the velocity spread of the individual sources. For instance, the spread in v$_{LSR}$ for sources in $109^\\circ < \\ell < 112^\\circ$ is 4.9~km~s$^{-1}$. This is one measure of the systematic ``random'' errors in our v$_{LSR}$ due to intrinsic motion that limits the accuracy of the corresponding distances. Some sources have much larger peculiar motions determined from VBLI parallax, as great as 40~km~s$^{-1}$ (Nagayama et al.~2011), but it is not likely the majority of sources will be severely discrepant. Some distribution of distances is expected and the spread in velocities for sources nearby makes an accurate kinematic distance determination difficult. 
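The distance calculation itself is straightforward. A minimal sketch is given below (Python, assuming for simplicity a flat rotation curve with the Reid et al. 2009 parameters; this illustrates the method and is not the code used to build the catalog):\n\\begin{verbatim}\nimport numpy as np\n\nR0, TH0 = 8.4, 254.0     # kpc and km s^-1, Reid et al. (2009)\n\ndef kinematic_distance(glon_deg, vlsr):\n    # Galactocentric radius from v_lsr = Theta0 sin(l) (R0\/R - 1),\n    # i.e. a flat rotation curve with purely circular motions\n    l = np.radians(glon_deg)\n    R = TH0 * R0 * np.sin(l) \/ (vlsr + TH0 * np.sin(l))\n    disc = R**2 - (R0 * np.sin(l))**2\n    if disc < 0.0:                       # velocity beyond the tangent velocity:\n        return R, R0 * np.cos(l), R0 * np.cos(l)   # use the tangent distance\n    d_near = R0 * np.cos(l) - np.sqrt(disc)\n    d_far  = R0 * np.cos(l) + np.sqrt(disc)\n    return R, d_near, d_far              # R_gal and the two heliocentric solutions\n\\end{verbatim}\n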
In our distance determination, a cloud with a velocity at or greater than the tangent velocity will be placed at the tangent distance. In \\S~4.2, we resolve the distance ambiguity for a subsample of our sources. \n\n\\subsubsection{Galactocentric Distance}\n\n\\paragraph{}The Galactocentric distance is the distance of the source from the Galactic Center and is only dependent on velocity and Galactic longitude of a source and therefore does not have a distance ambiguity. We plot a variety of source properties versus their Galactocentric distance in Figure \\ref{GCdist}. The distribution of sources clearly traces four major spiral arm structures in the Galaxy. The large peak at $4.5$~kpc corresponds to the molecular ring and, for sources near $\\ell = 30^{\\circ}$ , the edge of the central bar. The largest concentration of sources is within these two structures. The other structures in order of galactocentric distance are the Sagittarius arm, the local arm, and the Perseus arm.\n\n\\paragraph{}We also plot the observed quantities (linewidth, integrated intensity, and 1.1~mm flux) versus Galactocentric distance. There is a large amount of scatter in each 1.5~kpc bin, and the median values of each quantity are nearly constant except for the bin beyond 10~kpc. The median linewidth, integrated intensity (both HCO$^+$ and N$_2$H$^+$), and 1.1~mm flux are systematically lower for sources beyond 10~kpc compared to smaller Galactocentric distances. This could be due to a bias in the original BGPS observing strategy. Sources at Galactocentric distances greater than 10~kpc are predominately in the second quadrant of the Galaxy. Only a few selected regions (e.g., Gem OB1, G111\/NGC7538, IC 1396) were observed by the BGPS in this quadrant. Unlike BGPS observations toward the first quadrant, these second-quadrant heterodyne follow-up observations are not a complete census of sources in the second quadrant and biased toward known star forming regions. It is possible that the observed regions are not entirely representative of the properties of Galactic sources at this distance biasing our results to sources with stronger HCO$^+$ and N$_2$H$^+$ lines. This would make the ``true'' Galactocentric gradient larger than what we see in Figure \\ref{GCdist}. Another possibility is that nitrogen and\/or carbon metallicity gradients in the Galaxy are becoming important, and that is why HCO$^+$ and N$_2$H$^+$ are becoming weaker, on average, beyond 10~kpc. There is a decreasing trend for both N and C in OB stars when going from the inner galaxy to the outer galaxy (Daflon \\& Cunha 2004). This same effect may manifest itself in the dense gas as well; however, more complete sampling is needed to understand these effects.\n\n\n\\subsection{Resolving the Distance Ambiguity}\n\n\\paragraph{}We must determine whether or not a source lies on the near or far side of the tangent distance in order to resolve the distance ambiguity (see Figure \\ref{lvdist}). We use three conservative methods to break the degeneracy: coincidence with sources with observed maser parallax measurements, coincidence with Infrared Dark Clouds (IRDCs),and correspondence with known kinematic structures in the Galaxy. We then use this subsample of sources where the distance ambiguity has been resolved, the ``Known Distance Sample,'' to study the properties of BGPS objects. A detailed study of the probability that BGPS sources in the first quadrant lie at the near or far distance is currently being made by Ellsworth-Bowers et al. 
(in preparation).\n\n\\paragraph{}The most accurate distance determination technique is direct parallax measurements of sources by very long baseline interferometry (e.g., Reid et al. 2009a, 2009b). For a source to be considered associated with a VLBA-determined parallax, we allow for a source position to be different from the VLBA position by up to 30$^{\\prime\\prime}$ (one beam size). We have 4 sources that are coincident on the sky with a VLBA source but only 3 with a single HCO$^+$ detection. The distances used for these objects are their parallax distances.\n\n\\paragraph{}The next selection criterion we used to determine distance is coincidence with an IRDC. IRDCs are clouds of dust that appear dark against the background of mid-infrared Galactic radiation (for example at 8~$\\mu$m). Because these objects appear in front of the majority of Galactic emission, they are assumed to be on the near side of the galaxy. Placing an IRDC at the near kinematic distance is a good assumption but disregards the fact that a small number of IRDCs could be at the far distance. IRDCs have been catalogued in the Galactic plane from \\textit{MSX} mid-infrared observations (Egan et al. 1999; Simon et al. 2006) and most recently from \\textit{Spitzer Space Telescope} observations (Peretto \\& Fuller 2009). Peretto \\& Fuller (2009) developed a catalog of IRDCs and describe each cloud with an ellipse. We first select sources that lie within the ellipse itself. In reality, IRDCs have a wide range of projected geometries and a simple ellipse is not always the best choice to describe more complicated filamentary shapes. Therefore, we also did a by-eye comparison of Spitzer GLIMPSE (Benjamin et al. 2003) images and BGPS images and made a list of BGPS sources that appeared to be coincident with an IRDC. Both the ellipse-coincidence method and the by-eye method suffer from different biases (e.g., the ellipse shape is too simple and the by-eye method is subjective and depends on image display parameters). We conservatively assume only those sources identified as coincident with an IRDC by both methods to be at the near distance ($N=192$). \n\n\\paragraph{}We have also included sources in the Known Distance Sample that do not have distance ambiguities, including sources in the Outer Galaxy ($\\ell > 90^\\circ$, $N=89$) and sources in the first quadrant that lie at the tangent distance. A caveat is that the method used to compute the distances forces a source to be at the tangent distance if its velocity is larger than allowed in the circular rotation model of the Galaxy. We also include sources that are coincident with the sources in Shirley et al. (2003) and the H {\\small \\sc II} regions in Kolpak et al. (2003). In addition, we use kinematic information from Dame et al. (2001) to include sources near $\\ell \\sim 80^\\circ$ that have velocities corresponding to the Cygnus-X region. The other sources in this $\\ell$ range have negative velocities, indicating they lie in the outer arms. We do the same analysis and add sources near $20^\\circ < \\ell < 55^\\circ$ that have velocities corresponding to the Outer Arm. We also add a few sources with $\\ell \\sim 10^\\circ$ that correspond to the 3 kpc Arm. \nAll of the 529 sources for which the distance has been determined comprise our Known Distance Sample. For the remainder of the paper, we will only use the Known Distance Sample for our analysis of source properties unless otherwise specified.
Resolution of the distance ambiguity for all observed sources is beyond the scope of this paper. In a future paper, we will build probability density functions for the distances to BGPS sources by combining dense gas tracers such as HCO$^+$ and N$_2$H$^+$ with extant data sets such as the Galactic Ring Survey ($^{13}$CO $J=1-0$) for diffuse-gas velocity comparison, H {\\small \\sc I} Galactic Plane Surveys (VGPS, CGPS, SGPS) for H {\\small \\sc I} Self-Absorption, models of the Galactic molecular gas distribution, and a more refined analysis with IRDCs (Ellsworth-Bowers et al., in preparation).\n\n\n\\paragraph{}Figure \\ref{disthist} is a histogram of the heliocentric distances for the Known Distance Sample. These sources include outer Galaxy sources, so the peaks of the distribution do not always correspond to a specific spiral arm. The observed peak of sources at 5~kpc from us does correspond with the Near 3~kpc arm and the edge of the Galactic bar. The median distance of the Known Distance Sample is 2.65~kpc. The large number of sources at a distance of 1-2~kpc comes from sources that are in the range of $\\ell = 80^\\circ-85^\\circ$ and those in the outer galaxy (W3\/4\/5, NGC~7538). In this area, the intrinsic dispersion of velocities is much larger than the allowable velocities for the kinematic model of the galaxy, so we use known average distances to the regions in this $\\ell$ range. \n\n\\paragraph{}Compiling this distance information together gives us a look at Galactic structure. Figure \\ref{galpolar} shows the face-on view of the Milky Way given the kinematic distances we have measured. The sources that lie in the Near Sample are separated from those where we have not resolved the distance ambiguity. The unresolved sources are plotted twice, at their near and far distances. Even with the distance ambiguity affecting the majority of our sources, one can begin to see strong evidence for Galactic structure including the Near 3 kpc arm, the end of the Galactic bar, as well as the Sagittarius and Perseus Arms. The ``molecular ring'' is also visible between 3 and 5 kpc from the center of the Galaxy. \n\n\\subsection{Size-Linewidth Relation}\n\\paragraph{}Several physical properties of the clumps may be derived once the distance to the source is determined. The sizes of BGPS sources were determined from analysis of the flux distribution in 1.1~mm continuum images (Rosolowsky et al. 2010). The physical size of the object is simply $r = \\theta \\cdot D$, where $\\theta$ is the angular radius of the source and $D$ is its distance. Figure \\ref{raddist} (a) shows histograms of the deconvolved angular source size as measured in the BOLOCAT v1.0. The median source size for the Known Distance Sample is $\\sim 60^{\\prime\\prime}$, or 0.752~pc. This is a factor of 2.35 larger than the CS $J=5-4$ source sizes from Shirley et al. (2003) (0.32~pc median source size) but in line with Wu et al.~(2010), who find median sizes of 0.71~pc in HCN $J=1-0$ and 0.77~pc in CS $J=2-1$. This indicates that the typical BGPS clump is of lower density and more extended than the sample of water maser selected clumps traced in CS $J=5-4$ in the Shirley et al. survey. CS $J=5-4$ also has an effective density an order of magnitude higher than HCO$^+$ $J=3-2$ and is tracing the denser gas toward the center (Reiter et al.~2011). The Shirley et al. (2003) CS $J=5-4$ survey was a very heterogeneous sample of sources that spanned a wide range of distances.
In comparison, for a more homogeneous sample at a common distance, the 1.2~mm continuum survey of the nearby GMC Cygnus X by Motte et al. (2007) finds an average clump size of 0.1~pc (with an HPBW of 11$^{\\prime\\prime}$); this is a factor of seven smaller than the median BGPS clump size. \n\n\\paragraph{} We plot the clump size against the distance to the source (Figure \\ref{raddist} (b)). There is a very strong correlation, simply because the physical size grows in direct proportion to distance for a given angular size. The line indicates the physical size that is equivalent to our beamsize at each distance. This shows that 89\\% of sources are spatially resolved by the BGPS beam. We caution that we are not resolving everything within the source, as there is clear evidence of unresolved cores within the larger clump structures. Rather, the large size indicates that BGPS objects are, on average, larger and more diffuse structures than have been systematically studied before in high-mass star-formation surveys.\n\n\n\\paragraph{}The integrated kinematic motions along the line-of-sight determine the observed FWHM linewidth of the clumps. Linewidths may be broadened due to thermal motions, unresolved bulk flows of gas across a cloud, small scale (unresolved) turbulence, and optical depth effects. Thermal broadening for our sources is minimal since $\\Delta v_{therm}(FWHM) = 0.612$~km~s$^{-1}$ at $T=20$ K. \nThe observed median linewidth of HCO$^+$ for the Known Distance Sample is 2.98~km~s$^{-1}$, which indicates supersonic motions within the $30^{\\prime \\prime}$ beam. The typical linewidth is comparable with the observed linewidths toward high-mass star forming clumps (e.g., Shirley et al. 2003) and much larger than the typical linewidths observed toward nearby low-mass star forming cores (e.g., Rosolowsky et al. 2008). Comparing the distribution of linewidths versus distance shows no trend at all. As a source gets farther away, our beam is averaging over a larger source size; yet, we do not see any systematic increase in linewidth with distance. \n\n\\paragraph{}Optical depth effects may broaden the observed linewidth. This effect may be especially acute for the HCO$^+$ $J=3-2$ line (e.g., $> 11$\\% of sources have evidence of possible self-absorption). Equation \\ref{widthvtau} shows that as the optical depth increases, the linewidth increases:\n\\begin{equation}\n \\frac{\\Delta v}{\\Delta v_o} = \\frac{1}{\\sqrt{\\ln 2}} \\; \\sqrt{\\ln\\left(\\frac{\\tau}{\\ln\\left(\\frac{2}{1 + e^{-\\tau}}\\right)}\\right)}\n \\label{widthvtau}\n\\end{equation}\n\\noindent (Phillips et al. 1979). For $\\tau = 10$, the optically thick linewidth is larger than the optically thin linewidth by a factor of 2. For our observed linewidths, which are 10 times the thermal linewidth, even accounting for modest optical depth effects, it is unlikely that optical depth effects can account for the large observed median linewidth. Therefore, we conclude that the dense molecular gas in typical BGPS sources is characterized primarily by supersonic turbulence.\n\n\\paragraph{}The size and linewidth of the Known Distance Sample clumps are directly compared in Figure \\ref{sizelinewidth} (a). In contrast to the Larson relationship for molecular clouds (Larson 1981), we do not observe a strong size-linewidth relationship in the dense molecular gas for BGPS clumps: $\\rho_{Spearman}=0.40$.
While there is a very weak trend that generally agrees with Larson's relationship, the scatter in the data erases our ability to say much about it. Traditionally, studies of the size-linewidth relationship in cores find two different slopes depending on the mass of the objects (Caselli \\& Myers 1995). For instance, the study of Caselli \\& Myers (1995) finds a very shallow slope of R$^{0.21}$ for high-mass cores and a much steeper slope of R$^{0.53}$ for low-mass cores. Combining these two distributions may partially erase the size-linewidth relationship (Shirley et al. 2003); however, the derived power laws from Caselli \\& Myers (1995) predict significantly smaller linewidths than the sources in the Known Distance Sample. The lack of a correlation between size and linewidth argues against a universal scaling relationship between the amount of supersonic turbulence in dense clumps and their size. \n\n \n\\subsection{Mass Calculations}\n\n\\paragraph{}The clump mass may be calculated from the total 1.1~mm continuum flux for each source in the Known Distance Sample by\n\\begin{equation}\n M_{H_{2}} = \\frac{100 \\, S_{1.1}\\cdot D^2}{B_\\nu(T_{dust}) \\cdot \\kappa_{dust,1.1}}\n \\label{Masseqn}\n\\end{equation}\n\\noindent (Hildebrand 1983), where $S_{1.1}$ is the total 1.1~mm flux density of a source, $D$ is the distance to the source, $\\kappa_{dust}(1.1mm)=1.14$~cm$^2$~g$^{-1}$ is the dust opacity at 1.1~mm (Ossenkopf \\& Henning 1994), $B_\\nu(T_{dust})$ is the Planck function at 1.1~mm, for which we initially assume a temperature of $T_{dust} = 20$ K, and the factor of 100 is the assumed gas-to-dust mass ratio (Hildebrand 1983). Millimeter dust continuum observations are extremely sensitive to small amounts of mass, as we can detect a 20~K source of 1.1~M$_{\\odot}$ at distances of 1~kpc given an average $3\\sigma$ flux threshold of 90~mJy at 1.1~mm. Below (\\S~4.4.1), we will systematically explore the results of changing the dust temperature distribution of BGPS sources using a Monte Carlo simulation.\n\n\n\\paragraph{}Figure \\ref{masshistdist} (a) plots a histogram of the masses of the Known Distance Sample assuming $T_{dust} = 20$ K. The median mass of the sample is 320 M$_\\odot$ and the mean is 1648 M$_\\odot$. The BOLOCAT is complete to 98\\% for sources $> 0.4$ Jy; this gives us a completeness limit for masses of 580 M$_\\odot$ at a distance of 8.5~kpc (most of our sources are at the near distance). The mass versus distance plot, Figure \\ref{masshistdist} (b), shows the expected trend of more massive sources at the farthest distances due to the $D^2$ term in the mass calculation. The masses we observed range from 10~$M_{\\odot}$ to 10$^5$~$M_{\\odot}$. Compared to other samples of high-mass clumps, we probe similar mass ranges as the H$_2$O maser sample of Shirley et al. (2003), although our observed median mass is smaller. In comparison, surveys of clumps in targeted regions in Orion (Johnstone \\& Bally 2006) and Cygnus-X (Motte et al. 2007) probe typical masses of 10s to 100s of $M_{\\odot}$, more analogous to core masses. In comparison to IRDCs (Rathborne et al. 2005; Peretto \\& Fuller 2010), we appear to probe the same physical properties on the same scales. IRDC fragments tend to agree with masses of clumps in targeted regions, while the overall IRDC properties agree with those found in this study.
For example, Peretto \\& Fuller (2010) find masses ranging from a few tenths of a solar mass to nearly 10$^{5}$~$M_{\\odot}$ and are complete to $\\sim 800$~$M_{\\odot}$. We are sensitive throughout the entire mass range of these other samples, leading us to believe we are observing objects that span from dense cores to clouds (see Dunham et al. 2010).\n\n\\paragraph{}Using the dust-determined mass and the source size, we compute the volume-averaged number density, assuming a spherical volume for each source and a mean molecular weight per free particle of $\\mu=2.37$ (Figure \\ref{Avgnumberden} (a)). The median value is $n=2.5\\times10^3$~cm$^{-3}$, which is within a factor of $3$ of the value determined in the NH$_3$ survey of Gem~OB1, where they find a median volume-averaged density of $n=6.2\\times10^3$ cm$^{-3}$ (Dunham et al. 2010a). Compared to the IRDC sample of Peretto \\& Fuller (2010), we probe the same range of volume-averaged number densities from 100 to 10$^{4}$~cm$^{-3}$. This low volume-averaged number density is a result of the large observed source sizes and, therefore, the large volume observed toward BGPS clumps. The volume-averaged density is lower than the typical effective excitation density for HCO$^+$ and N$_2$H$^+$ $J=3-2$ emission. Steep density gradients are known to exist in both low-mass (Shirley et al. 2002) and high-mass (Mueller et al. 2002) star-forming clumps. If we were resolving individual dense knots, we would expect to see a trend of linewidth with volume-averaged density as well (Figure \\ref{Avgnumberden} (c)). Therefore, BGPS sources tend to be fairly low-density sources with possible compact high-density regions that may be probed with higher angular resolution observations. Some of these clumps of low density gas contain high density regions up to $n=10^5$~cm$^{-3}$, several orders of magnitude higher than the average source properties (Dunham et al. 2011). \n\n\\paragraph{}Comparing the time scales of these clumps shows that the free-fall and crossing time scales are similar at a few times 10$^5$ years. The median free-fall time is $8\\times 10^5$~yr; thus, if these clouds were bound, it would take a few hundred thousand years to collapse and begin forming stars. With the masses of these clumps we also compute the virial linewidth as \n\\begin{equation}\n\\Delta v^{2}_{virial} = \\frac{8 \\ln{2}\\; a\\; G\\; M_{virial}}{5\\; R} \\approx \\frac{a \\cdot\\frac{M_{obs}}{209\\;M_{\\odot}}}{(R\\;\/1\\;pc)} \\; km^{2} \\; s^{-2}\n \\label{viriallinewidth}\n\\end{equation}\n\\begin{equation}\na = \\frac{1-p\/3}{1-2p\/5}; \\quad p=1.5\n \\label{virialavalue}\n\\end{equation}\n\\noindent This definition includes the correction factor $a$ for a power-law density distribution with $p=1.5$ (Bertoldi \\& McKee 1992; Shirley et al. 2003). The virial linewidth represents the velocity dispersion from internal motions due to self-gravity. Figure \\ref{sizelinewidth} (b) shows the virial linewidths are smaller on average than the observed linewidths, about 2--3~km~s$^{-1}$, indicating that many of the BGPS clumps are not entirely gravitationally bound. It is likely that there are smaller denser regions within these clumps that are gravitationally bound.
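\n\n\\paragraph{}To make the typical numbers concrete, the chain of estimates in equations (\\ref{Masseqn}) and (\\ref{viriallinewidth}) can be evaluated for a single fiducial clump. The sketch below is written in Python for illustration only; the input values are hypothetical choices near the sample medians, not measurements of any particular source:\n\\begin{verbatim}\nimport numpy as np\n\nh, c, k, G = 6.626e-27, 2.998e10, 1.381e-16, 6.674e-8   # cgs\nmsun, pc, kpc, mH = 1.989e33, 3.086e18, 3.086e21, 1.661e-24\n\ndef planck(nu, T):\n    return 2.0 * h * nu**3 \/ c**2 \/ np.expm1(h * nu \/ (k * T))\n\n# hypothetical fiducial clump, roughly the sample medians\nS_jy, d_kpc, theta_as, Tdust = 3.5, 2.65, 60.0, 20.0\n\nnu = c \/ 0.11                                  # 1.1 mm in Hz\nD = d_kpc * kpc\nM = 100.0 * S_jy * 1e-23 * D**2 \/ (planck(nu, Tdust) * 1.14)  # gas mass, g\nR = np.radians(theta_as \/ 3600.0) * D          # physical radius, cm\nrho = M \/ (4.0 \/ 3.0 * np.pi * R**3)           # mean mass density\nn = rho \/ (2.37 * mH)                          # volume-averaged number density\nt_ff = np.sqrt(3.0 * np.pi \/ (32.0 * G * rho)) \/ 3.156e7      # yr\na = (1.0 - 1.5 \/ 3.0) \/ (1.0 - 2.0 * 1.5 \/ 5.0)   # Bertoldi & McKee, p = 1.5\ndv_vir = np.sqrt(a * (M \/ msun) \/ 209.0 \/ (R \/ pc))           # km s^-1\n# -> roughly 3e2 Msun, 3e3 cm^-3, 6e5 yr, and 1.6 km s^-1 for these inputs\n\\end{verbatim}\nFor these illustrative inputs the virial linewidth is smaller than the observed median linewidth, consistent with the discussion above.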
\n\nWe can also look at the virial parameter in terms of the surface density (see \\S 4.4.1) of the clumps given by\n\\begin{equation}\n\\alpha_{virial} = \\frac{5 \\Delta v^2_{obs}} {8\\ln{2}\\pi G \\Sigma R}.\n \\label{virialparam}\n\\end{equation}\n\\noindent This is shown in Figure \\ref{sizelinewidth} (c). The virial parameter versus mass is also shown in Figure \\ref{sizelinewidth} (d). This is another way of showing that few of our clouds indicate virialized motion ($\\alpha \\lesssim 1$) and that most lie more than a factor of a few above this line. The caveat is that we assume the BGPS traces all of the mass, which may not be entirely true. The BGPS does resolve out diffuse emission, and the dense gas tracers are missing the low-density gas traced by CO or H~{\\small \\sc I}. It appears that most of the clumps we see are likely not gravitationally bound but do contain dense substructures that likely are.\n\n\\subsubsection{\\label{DiffMass}Differential Mass Histogram}\n\n\\paragraph{}Figure \\ref{masshistdist} (c) shows the differential mass histogram for the number of sources in bins of log(M). In this parameterization, the mass function takes on the form of\n\\begin{equation}\n \\frac{dN}{d\\log(M)} \\sim M^{-(\\alpha -1)},\n \\label{massfunc}\n\\end{equation}\n\\noindent where the power-law index is the slope of a line through the histogram. For the observed masses, calculated assuming a dust temperature of $20$ K, we find a slope of $\\alpha - 1 = 0.91$. This slope is shallower than the slope of the Salpeter stellar IMF, $\\alpha - 1 = 1.35$ for dN\/dlog(M). Our observed slope is steeper than has been found for the mass distribution of CO clumps, $\\alpha - 1 = 0.6 - 0.7$ (e.g., Scoville \\& Sanders 1987). Observations of dense-gas tracers tend to increase the differential mass histogram slope. For instance, Shirley et al. (2003) find a slope of $\\alpha -1 = 0.9$ for their cumulative mass function for cores probed by CS $J=5-4$. For IRDCs, Peretto \\& Fuller (2010) find a slope of $\\alpha - 1 = 0.85$ for the clouds as a whole. This intermediate slope also suggests that we really are looking at the intermediate case of star forming ``clumps'' rather than clouds or cores.\n\n\\paragraph{}There are several sources of uncertainty in determining the slope of the differential mass histogram which we must characterize. The effect of binning has been shown to cause problems in studies of the stellar mass function and thus will also be problematic with the clump mass spectrum (Ma{\\'i}z Apell{\\'a}niz \\& {\\'U}beda 2005). For instance, the choice of binsize may have a substantial effect on the computed slope of the differential mass histogram. Bins with small numbers of sources dominate the fit if the binwidth is chosen too small, and the number of bins used in a linear regression decreases rapidly if the binwidth is too large. Our data do not span many decades in mass above the estimated completeness limit in the Known Distance Sample, and thus choosing an appropriate binwidth is difficult. We can circumvent this binning problem by using the maximum-likelihood estimator (MLE) to compute the power-law slope $\\alpha$ (Clauset et al. 2007, Swift \\& Beaumont 2010). The MLE maximizes the likelihood that the data were drawn from a given model.
If the data are drawn from the distribution given by equation \\ref{MLEDIST}, then the maximum likelihood estimate of the power-law index is $\\hat{\\alpha}$ given by equation \\ref{MLE}, where $M_{min}$ is the lower bound of the power law behavior and $n$ is the number of sources with mass greater than $M_{min}$ (Clauset et al. 2007),\n\\begin{equation}\n p(x) = \\frac{\\alpha-1}{M_{min}}\\left(\\frac{M}{M_{min}}\\right)^{-\\alpha}\n \\label{MLEDIST}\n\\end{equation}\n\n\\begin{equation}\n \\hat{\\alpha} = 1+n\\left[\\sum_{i=1}^{n} \\ln \\frac{M_i}{M_{min}} \\right]^{-1}.\n \\label{MLE}\n\\end{equation}\n\n\n\\paragraph{}Another important source of uncertainty is the assumed dust temperature of BGPS sources. We initially assume $T_{dust} = 20$ K for every source in the Known Distance Sample to compute our masses (Merello et al., \\textit{in prep}). In reality, BGPS sources have a range of dust temperatures of an unknown distribution. We perform a Monte Carlo simulation of the differential mass distribution to constrain the range $\\alpha$ by assuming that the source temperature distribution is approximated by a Normal distribution, characterized by mean $T_{mean}$ and $\\sigma(T)$. We then compute the mass of each source with a random temperature drawn from the temperature distribution and the source flux that has an error term drawn randomly from a Normal distribution added in. For each $T_{mean}$ and $\\sigma(T)$, the mass histogram is computed $10^4$ times, and we calculate the median value of $\\hat{\\alpha}$. \n\n\\paragraph{}The variation of $\\alpha$ with the temperature distribution is shown in Figure \\ref{montecarlo} (a). The effect of temperature distribution is significant but not strong. As part of the MLE procedure we estimate the best value of $M_{min}$, the minimum mass used in the power-law fit for each mass distribution. This is determined by minimizing the KS statistic between the best-fit model and $M_{min}$ as a function of $M_{min}$ (Clausset et al. 2007; Swift \\& Beaumont 2010). The dependence of $M_{min}$ on temperature means that colder temperatures lead to higher masses for a given observed flux. This dependence is also seen in the median of the mass distribution, Figure \\ref{montecarlo} (b), and the median volume-averaged number density, Figure \\ref{montecarlo} (c). \n\n\n\\paragraph{}There are several important caveats that must be considered in this analysis. Foremost, the BGPS pipeline reductions will leave artifacts in the data. The process by which the sky variation is removed from the data will also remove some of the diffuse emission, resulting in spatial filtering of the 1.1~mm brightness distributions (Aguirre et al. 2011). Furthermore, the cataloguing algorithm is also more sensitive to peaked emission than low levels of diffuse emission (Rosolowsky et al. 2010). This means we are not seeing diffuse sources unless they are very bright. We are missing flux from extended emission surrounding the clumps we do see, thus leaving us with a lower limit on the total mass, and it also affects the derived sizes. This survey is flux-limited, which means we also suffer from Malmquist bias. The BGPS is sensitive to different types of sources (i.e. cores and clumps vs. clouds) as the distance increases (see Dunham et al. 2010). 
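\n\n(As an aside, the maximum-likelihood slope estimate of equation (\\ref{MLE}) and the Monte Carlo temperature perturbation described above reduce to only a few lines of code. The sketch below is in Python with illustrative parameter choices; it is not the analysis code itself, and the masses are assumed to have been computed at $T_{dust}=20$~K as above.)\n\\begin{verbatim}\nimport numpy as np\n\nh, c, k = 6.626e-27, 2.998e10, 1.381e-16\nNU = c \/ 0.11                                 # 1.1 mm\n\ndef bnu(T):                                   # proportional to B_nu at fixed nu\n    return 1.0 \/ np.expm1(h * NU \/ (k * T))\n\ndef alpha_mle(m, m_min):                      # alpha-hat = 1 + n \/ sum ln(M_i\/M_min)\n    m = m[m > m_min]\n    return 1.0 + m.size \/ np.sum(np.log(m \/ m_min))\n\ndef mc_alpha(m20, m20_err, t_mean, t_sig, m_min, n_trial=10000, seed=1):\n    # m20: masses computed at T_dust = 20 K; rescale each to a random temperature\n    rng = np.random.default_rng(seed)\n    vals = np.empty(n_trial)\n    for i in range(n_trial):\n        T = np.clip(rng.normal(t_mean, t_sig, m20.size), 5.0, None)\n        m = rng.normal(m20, m20_err) * bnu(20.0) \/ bnu(T)\n        vals[i] = alpha_mle(m, m_min)\n    return np.median(vals)\n\\end{verbatim}\n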
The last caveat is that the brightest BGPS clumps (most massive) are associated with H~{\\small \\sc II} regions, which heat the dust and may contain a significant amount of free-free emission, which would most likely lead to a steepening of the mass spectrum. However, we present one of the first differential mass distributions with statistically significant numbers showing that massive clumps appear to have a shallower slope than nearby low-mass cores. \n\n\\subsubsection{Mass Surface Density}\n\n\\paragraph{}While the derived quantities discussed above depend explicitly on distance, one quantity that is distance independent is mass surface density. We compute the mass surface density while assuming $T_{dust} = 20$~K from \n\\begin{equation}\n \\Sigma_{H_{2}} = \\frac{M_{H_{2}}}{\\pi R^2} = \\frac{37.2\\cdot S_{Jy}}{\\theta^2_{arcsec}} \\;\\;\\; \\rm{g} \\;\\; \\rm{cm}^{-2},\n \\label{surfeq}\n\\end{equation}\n\\noindent where $\\Sigma$ only depends on the observed flux, the observed source size, and the assumed dust temperature.\n\n\\paragraph{}Figure \\ref{surfacedensity} (a) shows the mass surface density for all of the BGPS sources we observed with determined fluxes and sizes from the v1.0 catalog ($N=1684$). Since dust temperature variations affect the derived mass (\\S\\ref{DiffMass}), we also perform a Monte Carlo simulation to calculate the median mass surface density for distributions with different $T_{mean}$ and $T_{\\sigma}$. The Monte Carlo simulation also varies the fluxes, drawing each from a Normal distribution defined by the measured flux and its error. Figure \\ref{surfacedensity} (b) shows how the median of the surface density distribution changes with temperature. The median of the $T_{dust} = 20$~K distribution is 0.027~g~cm$^{-2}$ for the Full Sample and 0.033~g~cm$^{-2}$ for the Known Distance Sample, indicating that the properties derived from the Known Distance Sample are likely representative of the total sample. Both values are significantly smaller than the results from Shirley et al. (2003), where they find a median surface density of 0.605~g~cm$^{-2}$. Our median surface density is also much smaller than the 1.0~g~cm$^{-2}$ threshold that Krumholz and McKee (2008) require for massive star formation (see Fall et al. 2010). \n\n\\paragraph{}The dependence of the median of the surface density distribution on temperature is shown in Figure \\ref{montecarlo} (d). The median mass surface density only increases by up to a factor of $2.5$ for distributions with colder dust temperatures, but the values are still significantly below that of Shirley et al. (2003). The smaller mass surface density is likely a result of the larger clump sizes measured for BGPS clumps compared to previous surveys. However, our result does not indicate that these BGPS clumps are not capable of forming massive stars. It is likely that observations with higher spatial resolution will reveal the observed mass surface density approaching 1.0~g~cm$^{-2}$ for cores within BGPS clumps. Indeed, the infrared populations of BGPS clumps have recently been characterized by Dunham et al. (2011), who found that many BGPS clumps are in fact forming high-mass stars. They find that 49\\% of sources that are in the regions where the BGPS overlaps with other mid-IR Galactic plane surveys contain at least one mid-IR source. \n\n\\section{Conclusions}\n\n\\paragraph{}We used the 10m Heinrich Hertz Telescope to perform spectroscopic follow-up observations of 1882 sources in the Bolocam Galactic Plane Survey.
We simultaneously observed HCO$^+$ $J=3-2$ and N$_2$H$^+$ $J=3-2$ emission using the dual-polarization, sideband-separating ALMA prototype receiver. Out of the 1882 observed sources, we detect $\\sim 77\\%$ of the sources in HCO$^+$ and over 50\\% in N$_2$H$^+$. Multiple velocity components along the line-of-sight to BGPS clumps in these dense molecular gas tracers are rare. Our detection statistics are somewhat biased toward more detections because this sample includes all of the intrinsically brightest sources. \n\n\\paragraph{}We find a strong correlation between peak temperature and integrated intensity of each dense gas tracer with each other and with the 1.1~mm dust flux. The median ratio of HCO$^+$ integrated intensity to 1.1~mm flux is 5.42~K~km~s$^{-1}$ per Jy\/beam . We find that HCO$^+$ is brighter than N$_2$H$^+$ for the vast majority of sources, with a subset of only 117 sources (12.6\\% of sources singly detected in N$_2$H$^+$ ) where N$_2$H$^+$ is the brighter of the two dense gas tracers. The ratio of the peak line temperature and integrated intensity of the two molecules does not correlate well with 1.1~mm dust emission.\n\n\\paragraph{}The observed v$_{LSR}$ appear to follow the same distribution with Galactic longitude as $^{13}$CO $J=1-0$ emission observed by Dame et al. (2001). We determine the linewidths from a best-fit Gaussian model and find little change, on average, with Galactic longitude. The median linewidth is $2.98$~km~s$^{-1}$, indicating that BGPS clumps are dominated by supersonic turbulence. Linewidths only modestly correlate with 1.1~mm flux.\n\n\\paragraph{}We compute kinematic distances for all detected sources and are able to break the distance ambiguity for 529 of our detections in the first and second quadrants using IRDC coincidence, VLBA-determined parallax source coincidence, or proximity to the tangent velocity or known kinematic structures. Using the set of sources of known distance, we compute the radius, mass, and average density of the sources. We find the median source size to be 0.752~pc at a median distance of 2.65~kpc, typically larger than source sizes observed in published surveys of high-mass star-forming clumps and cores. Comparing linewidth to the physical size of the source, we do not find any compelling evidence for a size-linewidth relation in our data. We find our sources lie above the relationships found by Caselli and Myers (1995) and have too small of a correlation to say much about Larson's relationship (Larson 1981). \n\n\\paragraph{}For an assumed dust temperature of 20K we find a median mass of $\\sim 300$~M$_\\odot$, a low median volume-averaged number density of $2.4\\times 10^3$~cm$^{-3}$, and a median mass surface density of $0.03$~g~cm$^{-2}$. The similarity of the median mass surface density between the full sample and the Known Distance Sample indicates that the sources in the Known Distance Sample are characteristic of the Full Sample. Compared to published surveys of high mass star formation, BGPS clumps tend to be larger and less dense on average. We also analyzed the variation in median mass and volume density using a Monte Carlo simulation of cores with various dust temperature distributions. From the differential clump mass histogram, we find a power-law slope (dN\/dlogM) that is intermediate between the slope derived for diffuse CO clumps and the stellar IMF. 
Finally, a comparison of the virial linewidth to the observed linewidth indicates that many of the BGPS clumps in this survey have motions consistent with not being gravitationally bound.\n\n\\paragraph{}In the future, we plan to complete observations of the BGPS catalog for all sources with $\\ell \\ge 10^\\circ$ in the dense molecular gas tracers HCO$^+$ and N$_2$H$^+$ with the HHT. This will be the largest systematic survey of dense molecular gas in the Milky Way. A series of follow-up observations are currently underway to characterize the physical properties (density, temperature) and evolutionary state of BGPS clumps. \n\n\n\\acknowledgements\nWe would like to thank the operators (John Downey, Patrick Fimbres, Sean Keel, and Bob Moulton) and the staff of the Heinrich Hertz Telescope for their excellent assistance through numerous observing sessions. Yancy L. Shirley is funded by NSF Grant AST-1008577.\n\n{\\it Facilities:} \\facility{HHT (ALMA Band 6)}\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction }\n The integral of the energy density over space volume gives the mass of the body and this parameter is incorporated in Schwarzschild solution. The second quantity- pressure is absent. Usually the role of pressure is negligible. But when a star like our Sun is so compressed that its radius becomes of order of its gravitational radius, the volume per nucleon becomes of order\n$(\\frac{\\hbar}{mc})^3$ ( For this remark I am indebted to V.I. Ritus). So the particles of the body become relativistic. The pressure parameter $p$ becomes comparable with energy density $\\epsilon$, see for example equation (35.9) in [1]. This pressure should influence the gravitational field not only inside the body, but also outside.\nThis is seen from the linearized Einstein equations written in the form ($g_{ik}=\\eta_{ik}+h_{ik}$)\n$$\n\\Delta h_{ik}=\\frac{16\\pi G}{c^4}\\bar T_\n{ik},\\quad \\bar T_{ik}=T_{ik}-\\frac12\\eta_{ik}T_l{}^l, \\quad\n\\eta_{ik}=\\rm diag(1,-1,-1,-1.), \\eqno(1)\n$$\nsee for example, equations (95.8) and (105.11) in [1].\nHere\n$$\nT_{ik}=\\rm diag(\\epsilon, p,p,p), \\quad \\bar T_{00}=\\frac12(\\epsilon+3p),\\quad\n\\bar T_{11}=\\bar T_{22}=\\bar T_{33}=\\frac12(\\epsilon-p). \\eqno(2)\n$$\n\nThe solution of (1) is well known, \n$$\nh_{ik}(\\vec x)=-4\\frac{G}{c^4}\\int\\frac{\\bar T_{ik}(\\vec x')}{|\\vec x-\\vec x'|}d^3x'. \\eqno(3)\n$$\nIt is similar to Coulomb problem. For spherically symmetric body it follows from (3) that outside the body\n$$\nh_{00}=-\\frac{2Gm_t}{rc^2}, \\quad h_{s}\\equiv h_{11}=-\\frac{2Gm_s}{rc^2}. \\eqno(4)\n$$\nHere $m_t$ is the effective mass (with the pressure taken into account) defining $h_{00}$ and\nsimilarly for space part of metric $h_s\\equiv h_{11}$.\n When $p=0$ we have the linear approximation of Schwarzschild solution in isotropic coordinate system $h_{ik}=\\delta_{ik}2\\phi$, where $\\phi$ is the Newtonian potential, see\neq.(100.3) in [1]. When $p\\ne0$, $h_{00}\\ne h_{11}$. The difference reflects the role of pressure in generating the gravitational field.\n\nSo, the presence of pressure $p$ of the order $\\epsilon$ essentially affect the gravitational field. To go to the next approximation , we have to know the 3-graviton vertex. It is difficult to believe that summing the contributions from all powers of gravitational constant will eliminate the influence of pressure in gravitational field outside the body. \n\nIn any case one can think of a sphere filled with photon gas. 
The inner wall of the sphere can be made to reflect the photons almost ideally. For this gas $p=\\epsilon\/3$. For a sufficiently large radius of the sphere one can make the $mc^2$ of the sphere of the order of the energy of the photon gas (at fixed photon density). The gravitational field of this ball may be weak everywhere, so that equations (1)--(4) are applicable. The influence of pressure on the gravitational field outside the body should then be essential. But the linear approximation of the Schwarzschild solution says otherwise. This suggests that the phenomenological approach to gravity, using only graviton propagators and many-graviton vertices, will lead to a theory different from general relativity. \n\n\nThe remarks made for the Schwarzschild field also concern Weyl's solution for an axially symmetric body [1-3]. We note that, in contrast to the Schwarzschild solution, Weyl's solution has no event horizon. If we doubt these solutions, it is not surprising that Weyl's solution does not go over to the Schwarzschild one (in a natural way), even when the axial symmetry becomes indistinguishable from the spherical one.\n\n\nI am greatly indebted to V.I. Ritus for criticism, which helped me to formulate the discussed problem more clearly. The work was supported by Scientific Schools and the Russian Fund for Fundamental Research (Grants 1615.2008.2 and 08-02-01118).\n\n \\section*{References}\n1. L.D. Landau and E.M. Lifshitz, {\\sl The classical theory of fields} (Addison-Wesley, Cambridge, MA, 1971).\\\\\n2. J.L. Synge, {\\sl Relativity: The general theory} (North Holland Publishing Company, Amsterdam, 1960).\\\\\n3. C. M\\\"oller, {\\sl The theory of relativity} (Clarendon Press, Oxford, 1972).\\\\\n\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}