\section{Introduction}
\label{intro}

Solutions of the Tolman-Oppenheimer-Volkoff equation yield quite realistic models for non-rotating stars.
Moreover, the system of the static Einstein equations with perfect fluids also
provides a test case for general mathematical techniques and has stimulated their development.
Firstly, group theoretical and Hamiltonian methods for generating solutions have applications to
this system on the local level (see, e.g. \cite{ES,KR1}).
Secondly, under natural global conditions like asymptotic flatness the system is overdetermined,
which should lead to spherical symmetry of the solutions for all equations of state (EOS) $\rho(p)$
with $\rho \ge 0$ for $p \ge 0$. This long-standing conjecture has, in essence, been settled recently by
Masood-ul-Alam \cite{MA1} by using extensions of the techniques of Witten's positive mass theorem
\cite{EW}. Thirdly, as an interesting result on ODEs, Rendall and Schmidt \cite{RS} and
Schaudt \cite{US} have proven existence of 1-parameter families of spherically
symmetric, asymptotically flat solutions of the Tolman-Oppenheimer-Volkoff
equation for very general classes of EOS, and for all values of the central pressure and the
central density for which the EOS is defined.

To illustrate these mathematical results, to study remaining conjectures
(see e.g. \cite{WS1,SYY}) and to make the connection with physics,
it is desirable to have at one's disposal some "exact" model solutions,
preferably with a physically realistic equation of state.
Well suited for these purposes is the two-parameter family of Pant-Sah EOS
(PSEOS) \cite{PS}
\begin{eqnarray}
\label{psr}
\rho & = & \rho_{-}(1 - \lambda)^{5} + \rho_{+}(1 + \lambda)^{5} \\
\label{psp}
 p & = & \frac{1}{6 \lambda}[\rho_{-}(1 - \lambda)^{6} -
\rho_{+}(1 + \lambda)^{6}]
\end{eqnarray}
for some constants $\lambda$, $\rho_{-}$ and $\rho_{+}$ with $0<\lambda<1$
and $0\le\rho_+ < \rho_-$.
This is a parametric representation of solutions of the second order ODE
$I[\rho(p)] \equiv 0$ where
\begin{equation}
\label{Ikap}
I[\rho(p)] = \frac{1}{5}\kappa^{2} + 2\kappa + (\rho + p)\frac{d\kappa}{dp}
\qquad \mbox{with} \quad \kappa = \frac{\rho + p}{\rho +
3p}\frac{d\rho}{dp}.
\end{equation}
Putting $\rho_+ = 0$ and eliminating $\lambda$ in (\ref{psr}),(\ref{psp})
we obtain the one-parameter family of Buchdahl equations of state (BEOS),
which has the 2-parameter family of Buchdahl solutions \cite{HB1}.
The case $\rho = const.$ is included here as a (degenerate) solution of $I\equiv 0$,
and it also arises in the limit $\rho_+ \rightarrow \rho_-$, $\lambda
\rightarrow 0$ in (\ref{psr}),(\ref{psp}) (cf. Sect. 2.2.2).
The general 2-parameter family (\ref{psr}),(\ref{psp}) was considered by Pant and Sah \cite{PS},
who gave the 3-parameter family of corresponding solutions in terms of elementary functions.
A decade later, the PSEOS arose in the course of work on a uniqueness proof \cite{BS1,BS2},
which led to the first "rediscovery" of the Pant-Sah solutions \cite{WS2}.
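As a quick plausibility check of the parametrization (\ref{psr}), (\ref{psp}), the identity $I[\rho(p)] \equiv 0$ can be verified numerically. The following minimal Python sketch (with \texttt{numpy} assumed, the values of $\rho_{\pm}$ chosen arbitrarily for illustration, and the range of $\lambda$ kept inside the fluid region $p > 0$) also confirms the closed form $\kappa = 10\lambda^2/(1 - \lambda^2)$ that will be derived in Sect. 2.2.2 as Equ. (\ref{kap}):
\begin{verbatim}
import numpy as np

rho_m, rho_p = 1.0, 0.01              # illustrative rho_-, rho_+
lam = np.linspace(0.02, 0.30, 4000)   # stays inside the region p > 0

rho = rho_m*(1 - lam)**5 + rho_p*(1 + lam)**5
p   = (rho_m*(1 - lam)**6 - rho_p*(1 + lam)**6)/(6*lam)

drho_dlam = np.gradient(rho, lam)     # derivatives w.r.t. the parameter
dp_dlam   = np.gradient(p, lam)
kappa     = (rho + p)/(rho + 3*p)*drho_dlam/dp_dlam
dkap_dp   = np.gradient(kappa, lam)/dp_dlam

I = kappa**2/5 + 2*kappa + (rho + p)*dkap_dp
print(np.max(np.abs(I[5:-5])))                              # ~ 0
print(np.max(np.abs(kappa - 10*lam**2/(1 - lam**2))[5:-5])) # ~ 0
\end{verbatim}
Both printed residuals vanish up to finite-differencing error.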
Moreover, the Pant-Sah solutions also came up in a systematic Hamiltonian approach to
relativistic perfect fluids by Rosquist \cite{KR1}.

These papers also established the basic properties of the solutions relevant
for their use as stellar models, namely:
\begin{itemize}
\item The Pant-Sah solutions are regular as long as the central pressure stays bounded
(contrary to the claim by Delgaty and Lake \cite{DL}; see however \cite{KL}
for a correction).
\item All Pant-Sah solutions except for the Buchdahl solutions have a fluid region of finite extent,
which is obvious from (\ref{psr}), (\ref{psp}) since $\rho > 0$ at $p=0$ iff $\rho_+ > 0$.
\item The energy density is positive and the pressure is non-negative everywhere,
and these functions decrease monotonically with the radius.
\item Under suitable restrictions on the parameters, the speed of sound
remains subluminal everywhere \cite{KR2}.
\end{itemize}
Moreover, Pant and Sah showed that the parameters can be fitted quite
well to neutron star data \cite{PS}, while Rosquist \cite{KR2} considered
Pant-Sah solutions as possible "traps" for gravitational waves.

The purpose of this paper is to give a unified description of the Pant-Sah
solutions, to explain their wide range of applicability and to extend it even further.
Apart from the physically relevant properties mentioned above, we find here the following.
Depending on the choice of $\rho_+$ and $\rho_-$, the mass-radius relation
(which is a polynomial equation quadratic in the mass) is either monotonic,
or it exhibits a maximum of the radius only, or a maximum of the mass as well,
before it reaches a solution with a singular center.
However, the surface redshift, and therefore the quotient mass/radius,
uniquely characterizes a Pant-Sah solution for any given EOS. This implies that the mass-radius curves
can form a single, open "loop" but will not exhibit the "spiral" form typical for degenerate matter
at extreme densities \cite{HTWW,TM}.
Nevertheless, for suitable $\rho_+$ and $\rho_-$ the mass-radius curve fits remarkably
well with some quark star models discussed in recent years (see, e.g. \cite{WNR}-\cite{DBDRS}),
except at extreme densities.

As to the mathematical properties of the Pant-Sah solutions, the key to their
understanding is the {\it "Killing-Yamabe property"}.
By this term, motivated by the Yamabe problem \cite{LP}, we mean the following:
{\it For all static solutions of Einstein's equations with a perfect fluid}
(defined only locally and not necessarily spherically symmetric)
{\it we require that $g_{ij}^+ = (1 + fV)^4 g_{ij}/16$ is a metric of constant scalar curvature} ${\cal R}_+$,
where $g_{ij}$ is the induced metric on the static slices, $V$ is the norm of the Killing vector
and $f$ is a constant chosen a priori \cite{WS2}.
(If $f\ne 0$, it may be absorbed in $V$ by a suitable scaling of the Killing vector.
The case $f = 0$ clearly corresponds to fluids with $\rho = const.$).
If the Killing-Yamabe property holds, the field equations {\it imply that}
$g_{ij}^{-} = (1 - f V)^4 g_{ij}/16$ {\it has constant scalar curvature ${\cal R}_-$
as well, with ${\cal R}_- \ne {\cal R}_+$ in general}.
Together with (\ref{psr}),(\ref{psp}) and $I\equiv 0$, the Killing-Yamabe
property provides a third alternative
characterization of the PSEOS, and the two curvatures are related to the constants in (\ref{psr})
and (\ref{psp}) by ${\cal R}_{\pm} = 512 \pi \rho_{\pm}$.

To understand how the Killing-Yamabe property leads to the "exactness" of the Pant-Sah solutions, we note that
spherically symmetric 3-metrics with constant scalar curvature and
regular centre are "Einstein spaces" (i.e. the Ricci tensor is pure trace).
Such spaces enjoy simple expressions in suitable coordinates, and the same
applies to the conformal factors defined above.

To sketch the proofs of spherical symmetry of asymptotically flat solutions,
we need two further generally defined rescalings of the spatial metric, namely
$g_{ij}^{\ast} = K(V) g_{ij}$, where the function $K(V)$ is chosen such that
$g_{ij}^{\ast}$ is flat if $g_{ij}$ is a Pant-Sah solution ($K(V)$ is non-unique in
general), and $g_{ij}'= (1 - f^2 V^2)^{4}g_{ij}/(16 V^2)$.
We then show that for solutions with PSEOS there holds the "$\pm$"-pair of equations
\begin{equation}
\label{LapR}
\Delta' {\cal R}_{\ast} = \beta_{\pm} {\cal B}_{ij}^{\pm} {\cal B}^{ij}_{\pm}
\end{equation}
where $\Delta'$ is the Laplacian of $g_{ij}'$, ${\cal B}^{+}_{ij}$ and
${\cal B}^{-}_{ij}$ are the trace-free parts of the Ricci tensors of $g_{ij}^{+}$
and $g_{ij}^{-}$ respectively, and $\beta_{+}$ and $\beta_-$ are non-positive functions.

To show uniqueness of the Buchdahl solutions for the BEOS (${\cal R}_{+} = 0$, ${\cal R}_- \ne 0$),
one first shows that all asymptotically flat solutions must extend to infinity
\cite{WS1,WS3,MH}. Hence for a Killing vector normalized such that
$V\rightarrow 1$ at infinity, we can choose $f = 1$ and $K(V) = (1 + V)^4/16$ in the definitions
above. Then there are two alternative ways to continue \cite{BS1}. The first one consists of
noting that $g_{ij}^{\ast} = g_{ij}^+$ is asymptotically flat with vanishing mass.
Hence the positive mass theorem \cite{EW,SY} implies that these metrics are flat and $(V,g_{ij})$ is
a Buchdahl solution.
Alternatively, we can integrate the "minus" version of (\ref{LapR}) over the static slice.
By the divergence theorem and by asymptotic flatness,
${\cal B}_{ij}^- = 0$, i.e. $g_{ij}^-$ is an Einstein space, which again yields
a Buchdahl solution.

In the generic case ${\cal R}_{+} \ne 0$, the divergence theorem alone
applied to (\ref{LapR}) is insufficient for a proof, as ${\cal R}_{\ast}$
cannot be made $C^1$ on the fluid boundary in general.
However, by employing a suitable elliptic identity in the vacuum
region as well, the maximum principle now yields ${\cal R}_{\ast} \ge 0$.
Then the positive mass theorem leads to the required conclusion.

The positive mass theorem combined with Equ. (\ref{LapR}) has been employed earlier
in proofs
of spherical symmetry in the cases of fluids with constant density \cite{Lind} and "near constant density" \cite{MA2}.
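To make the divergence-theorem step used in these arguments explicit, note that, schematically (a sketch suppressing the regularity issues at the fluid boundary mentioned above, with $dV'$ and $dS'_i$ referring to $g_{ij}'$),
\begin{equation*}
0 = \lim_{r \rightarrow \infty} \oint_{S_r} \nabla'^{i} {\cal R}_{\ast}\, dS'_i
= \int \Delta' {\cal R}_{\ast}\, dV'
= \int \beta_- {\cal B}_{ij}^{-} {\cal B}^{ij}_{-}\, dV' \le 0,
\end{equation*}
where the flux term vanishes by asymptotic flatness. Since $\beta_- \le 0$ while ${\cal B}_{ij}^{-} {\cal B}^{ij}_{-} \ge 0$, the integrand must vanish identically, which yields the statement ${\cal B}_{ij}^- = 0$ used above.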
Moreover, there are generalizations of (\ref{LapR}) for non-Killing-Yamabe EOS, for fluids which satisfy $I\le 0$,
which again give uniqueness \cite{BS2,LM}. The general proof of spherical symmetry by Masood-ul-Alam
\cite{MA1} involves modified Witten spinor identities and integral versions of generalizations of (\ref{LapR}).

Jointly with the PSEOS we will consider here a model in Newtonian theory
characterized by the following counterpart of the Killing-Yamabe property:
We require that {\it a conformal rescaling of flat space with
$(\bar v - v)^4/16$,} where $v$ is the Newtonian potential (not necessarily spherically
symmetric) and $\bar v$ a constant, {\it is a metric of constant scalar
curvature} ${\cal R}_-$. This leads to the 2-parameter family of equations of state
\begin{equation}
\label{Neos}
p = \frac{1}{6} \left( \rho_{-}^{- \frac{1}{5}}~ \rho^{\frac{6}{5}} - \rho_+ \right)
\end{equation}
where ${\cal R}_- = 512 \pi \rho_-$ and $\rho_+$ is another constant, which here has no obvious relation to curvature.
We will refer to (\ref{Neos}) as "the Newtonian equation(s) of state" (NEOS).
For $\rho_+ = 0$ the NEOS are polytropes of index 5, which are analogous to the BEOS.
The general NEOS and the corresponding solutions may be considered as "Newtonian limits" of the PSEOS and the
Pant-Sah solutions, with similar properties at low density and pressure.
As to uniqueness proofs for asymptotically flat solutions with the NEOS,
some sort of counterpart of (\ref{LapR}) is available, and the positive mass theorem
has to be replaced here by the "virial theorem".

This paper is organized as follows. In Sect. 2 we give the field
equations in the Newtonian and in the relativistic case and introduce our models.
In Sect. 3 we rederive the spherically symmetric solutions and discuss
their main properties, in particular the mass-radius curves.
In Sect. 4 we prove spherical symmetry of asymptotically flat solutions
with the NEOS and the PSEOS. The
Appendix contains general material on conformal rescalings of metrics
and on spaces of constant curvature.

\section{The Field Equations}

Our description of the Newtonian and the relativistic fluids will be as close as
possible. For simplicity we will use identical symbols
($g_{ij}$, $g_{ij}^-$, $\nabla_i$, ${\cal R}$, ${\cal R}^-$...)
for analogous quantities but with different formal definitions, depending on
the context.

We denote by $\cal F$ the fluid region, which we assume to be open and connected,
and which may extend to infinity. $\cal V$ is the open vacuum region
(which may be empty) and $\partial {\cal F} = \partial {\cal V}$ is the
{\it common} boundary (i.e. infinity is not included in ${\partial \cal V}$).
This redundant terminology is useful to describe the matching.
When $\cal F$ is spherically symmetric it is called a "star".

\subsection{Newtonian Fluids}

\subsubsection{General properties}

As Newtonian model for $\cal F \cup \partial {\cal F} \cup \cal V$ we consider a manifold
$({\cal M}, g_{ij})$ with a flat metric $g_{ij}$. The potential function $v$ is assumed
to be smooth in ${\cal F}$ and ${\cal V}$, $C^{1,1}$ at $\partial {\cal F}$,
negative everywhere, and $v\rightarrow 0$ at infinity.
\nFor smooth density $\\rho(x^i)$ and pressure functions $p(x^i)$ in ${\\cal F}$, \nwith $p \\rightarrow 0$ at $\\partial {\\cal F}$, Newton's and Euler's equations read \n\\begin{eqnarray}\n\\label{Poi}\n\\Delta v & = & 4\\pi \\rho \\\\\n\\label{Eul}\n\\nabla_i p & = & - \\rho \\nabla_i v\n\\end{eqnarray}\nwhere $\\nabla_i$ and $\\Delta = \\nabla_i \\nabla^i$ denote the gradient and the Laplacian of flat space. \n(Indices are moved with $g_{ij}$ and its inverse $g^{ij}$).\n\nA general EOS is of the form $H(\\rho,p) = 0$. (If possible we choose\n$H(\\rho,p) = p - p(\\rho)$). $H(\\rho,p)$ should be defined in the intervals $\\rho \\in\n[\\rho_s,\\infty)$ and $p \\in [0,\\infty)$ with $\\rho_s \\ge 0$\nand smooth in the intervals $\\rho \\in (\\rho_s,\\infty)$ and $p \\in (0,\\infty)$.\nUsing the EOS, we can write $p$ and $\\rho$ as smooth functions $p(v)$ and\n$\\rho(v)$ and Euler's equation (\\ref{Eul}) as $dp\/dv = - \\rho$.\n\nTo recall the matching conditions, we note that, from the above requirements, the metric induced on \n$\\partial {\\cal F}$ is $C^{1,1}$ and the mean curvature of $\\partial {\\cal F}$ is continuous. \nWe now write these conditions in terms of the quantity $w=\\nabla_i v \\nabla^i v$.\n(The gradient always acts only on the subsequent argument, i.e. $w=(\\nabla_i v)( \\nabla^i v)$).\nFrom (\\ref{Poi}) it follows that the quantity in brackets on the l.h. side of \n\\begin{equation}\n\\label{Ngm}\n \\left[w^{-1} \\nabla^i v \\nabla_i w - 8 \\pi \\rho \\right]_{\\Rightarrow \\partial{\\cal F}} \n= \\left[w^{-1}\\nabla^i v \\nabla_i w \\right]_{\\partial{\\cal V} \\Leftarrow} \n\\end{equation} \nis continuous at the surface. Hence (\\ref{Ngm}) must hold, where\n\"$\\Rightarrow \\partial{\\cal F}$\" and \"$\\partial{\\cal V} \\Leftarrow$\" denote\nthe approach to the boundary from the fluid and the vacuum sides, respectively. \n\nGeneralizations to several disconnected \"matching surfaces\" are\ntrival and will not be considered here.\n\nTo formulate the asymptotic properties we consider an \"end\" \n${\\cal M}^{\\infty} = {\\cal M} \\setminus \\{ \\mbox{a compact set} \\}$. \nWe assume that, for some $\\epsilon > 0$\n\\begin{equation}\n\\label{NM}\nv = - \\frac{M}{r} + O(\\frac{1}{r^{1+\\epsilon}}), \\quad \n\\partial_i v = - \\partial_i \\frac{M}{r} + O(\\frac{1}{r^{2+\\epsilon}}) \\quad\n\\partial_i \\partial_j v = - \\partial_i \\partial_j \\frac{M}{r} +\nO(\\frac{1}{r^{3+\\epsilon}})\n\\end{equation}\nwhere $M$ is the mass.\nWith (\\ref{Poi}) and (\\ref{Eul}) this implies that\n\\begin{equation}\n\\label{Nas}\n\\rho = O(\\frac{1}{r^{3+\\epsilon}}), \\qquad p = O(\\frac{1}{r^{4+\\epsilon}}).\n\\end{equation}\nA more natural but involved precedure is to derive the falloff conditions\nof the potential $v$ and of $p$ only from the falloff of $\\rho$ \n(c.f. \\cite{WS1}).\n\n \\subsubsection{The Newtonian equation of state}\n\nTo introduce our model it is useful to rescale the Euclidean metric by the\nfourth power of a linear function of the Newtonian potential, i.e.\nwe define $g_{ij}^- = g_{ij} (\\bar v - v)^4\/16$ for some constant $\\bar v \\ge 0$. \n\nWe use the general formula $(\\ref{csc})$ for conformal rescalings with \n$\\wp_{ij}$ flat, $\\Phi = (\\bar v - v)\/2$, and hence $\\widetilde \\wp_{ij} =\ng_{ij}^-$. 
Together with the field equation (\ref{Poi}), we find that
\begin{equation}
\label{conf}
 {\cal R}_- (\bar v - v)^5 = 512\pi \rho
\end{equation}
where ${\cal R}_-$ is the scalar curvature of $g_{ij}^-$.
We now determine the NEOS by requiring that ${\cal R}_- = const$.
Introducing $\rho_-$ by ${\cal R}_- = 512 \pi \rho_-$ and another constant
$\rho_+$, eqs. (\ref{Eul}) and (\ref{conf}) yield
\begin{equation}
\label{rhop}
 \rho = \rho_- (\bar v - v)^5, \qquad p = \frac{1}{6}[\rho_- (\bar v - v)^6 - \rho_+].
\end{equation}
We require that $\rho_+ \ge 0$ and $\rho_- > 0$.
Eliminating the potential we obtain the NEOS, equ. (\ref{Neos}).
In terms of the variables $p/\rho_-$ and $\rho/\rho_-$ and $\tau = (\rho_+/\rho_-)^{1/6}$,
this equation reads
$p/\rho_- = \frac{1}{6} \left[ (\rho/\rho_-)^{\frac{6}{5}} - \tau^6 \right]$.
This means that we have singled out $\rho_-$ as a "scaling" parameter while
$\tau \in [0,\infty)$ plays a more "essential" role.
(This terminology mainly serves to simplify the analysis of static spherically symmetric
solutions in Sect. 3. Both parameters have direct physical significance, as follows from
(\ref{Nsurf}) below. On the other hand, from the dynamical system point of
view, both parameters can be considered as "scaling" except in the case $\rho = const.$
(cf. \cite{HU,HRU}).)

Fig. (\ref{NEOS}) shows the NEOS for the values
$\tau = (3 - \sqrt{5})/2 \approx 0.382$, $\tau = 0.6$ and
$\tau = [(\sqrt{5} - 1)/2]^{1/2} \approx 0.786$.
(These particular values play a role in Relativity and are chosen here for
comparison.)

\begin{figure}[h!]
\centering
%% [gnuplot-generated picture environment omitted; it plots $p/\rho_-$ (ordinate)
%% against $\rho/\rho_-$ (abscissa) for the three values of $\tau$.]
\caption{The equation of state (\ref{Neos}) for the values $\tau = 0.382$ (thin line),
$\tau = 0.6$ (medium) and $\tau = 0.786$ (thick line).}
\label{NEOS}
\end{figure}
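The curves in Fig. (\ref{NEOS}) are elementary to reproduce; a minimal Python sketch (assuming \texttt{numpy} and \texttt{matplotlib} are available) of the rescaled NEOS $p/\rho_- = \frac{1}{6}[(\rho/\rho_-)^{6/5} - \tau^6]$ reads:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 1.2, 500)                 # x = rho/rho_-
for tau, width in [(0.382, 0.5), (0.6, 1.5), (0.786, 2.5)]:
    p = (x**1.2 - tau**6)/6.0                  # rescaled NEOS
    keep = p >= 0                              # physical branch p >= 0
    plt.plot(x[keep], p[keep], 'k', linewidth=width)
plt.xlabel(r'$\rho/\rho_-$')
plt.ylabel(r'$p/\rho_-$')
plt.show()
\end{verbatim}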
The speed of sound $C$, defined by the first equality in (\ref{ss}), takes the
simple form
\begin{equation}
\label{ss}
C^2 = \frac{dp}{d\rho} = \frac{1}{5}\left( \frac{\rho}{\rho_-} \right)^{\frac{1}{5}}
= \frac{1}{5}(\bar v - v)
\end{equation}
in terms of the potential.
We recall that $v$ is negative and $\bar v$ non-negative, hence $\bar v - v > 0$, so
$dp/d\rho > 0$ and (\ref{ss}) makes sense.

At the surface, where the pressure is zero, the potential, the density and the speed of
sound take the values
\begin{equation}
\label{Nsurf}
v_s = \bar v - \tau, \qquad \rho_s= \tau^{-1} \rho_{+}, \qquad C_s^2 = \frac{1}{5}\tau.
\end{equation}

Note that $\rho_s$ and $C_s$ are determined by the equation of state alone,
as opposed to $v_s$ and $\bar v$, which will be used in Sect. 3.1 to parametrize the spherically symmetric solutions.
The polytrope of index 5, for which the solutions extend to infinity, arises from the equations above
as the special case $\rho_+ = \rho_s = 0 = v_s = \bar v$. The corresponding
curve would pass through the origin in Fig. (\ref{NEOS}), very close to the curve for $\tau = 0.382$.

\subsection{Relativistic Fluids}

\subsubsection{General properties}

We consider static spacetimes of the form
$\mathbb{R} \times {\cal M} = \mathbb{R} \times
\left({\cal F} \cup \partial {\cal F} \cup {\cal V} \right)$
with metric
\begin{equation}
\label{met}
ds^{2} = - V^{2}dt^{2} + g_{ij}dx^{i}dx^{j}
\end{equation}
where $V(x^i)$ and $g_{ij}(x^i)$ are smooth on ${\cal F}$ and ${\cal V}$ and $C^{1,1}$ at $\partial {\cal F}$.
Moreover, $0 < V < 1$ on ${\cal M}$ and $V \rightarrow 1$ at infinity.
\nOn ${\\cal F}$ we consider smooth density and pressure functions \n$\\rho(x^i)$, $p(x^i)$, with $p\\rightarrow 0$ on $\\partial {\\cal F}$, \nin terms of which Einstein's and Euler's equations read \n\\begin{eqnarray} \n\\label{Alb}\n\\Delta V & = & 4 \\pi V (\\rho + 3p) \\\\\n\\label{Ein}\n{\\cal R}_{ij} & = & V^{-1}\\nabla_{i}\\nabla_{j}V + 4 \\pi(\\rho - p)g_{ij}\\\\\n\\label{Bia}\n\\nabla_i p & = & - V^{-1} (\\rho + p) \\nabla_i V.\n\\end{eqnarray}\nThe gradient $\\nabla_i$, the Laplacian $\\Delta = \\nabla_{i}\\nabla^{i}$ and the Ricci\ntensor ${\\cal R}_{ij}$ now refer to $g_{ij}$. \nAs well known the Euler equation (\\ref{Bia}) is a consequence of the Bianchi identity for\n${\\cal R}_{ij}$.\n\nA general equation of state $H(\\rho,p) = 0$, in particular with $H(\\rho,p) = p -\n\\rho(p)$, should be defined in $\\rho \\in [\\rho_s,\\infty)$ and $p \\in [0,\\infty)$ \nwith $\\rho_s \\ge 0$ and smooth in the intervals $\\rho \\in (\\rho_s,\\infty)$ and $p \\in (0,\\infty)$. \nEuler's equation (\\ref{Bia}) together with the equation of state imply that there are smooth \nfunctions $p(V)$ and $\\rho(V)$, and Euler's equation becomes $dp\/dV = - V^{-1}(\\rho + p)$.\n\nIn analogy with the Newtonian case, the metric induced on $\\partial {\\cal F}$ is $C^{1,1}$ \nand the mean curvature of $\\partial {\\cal F}$ is continuous. \nIn terms of the quantity $W=\\nabla_i V \\nabla^i V$, the matching conditions\ntogether with equ. (\\ref{Alb}) imply that the quantity in brackets on the l.h. side of \n\\begin{equation}\n\\label{Egm}\n \\left[W^{-1} \\nabla^i V \\nabla_i W - 8 \\pi V \\rho \\right]_{\\Rightarrow \\partial{\\cal F}} \n= \\left[W^{-1}\\nabla^i V \\nabla_i W \\right]_{\\partial{\\cal V} \\Leftarrow} \n\\end{equation} \nis continuous at $\\partial {\\cal F}$ and hence (\\ref{Egm}) holds. \n\nTo formulate the asymptotic properties we consider an \"end\" \n${\\cal M}^{\\infty} = {\\cal M} \\setminus \\{ \\mbox{a compact set} \\}$. \nWe require that, for some $\\epsilon > 0$\n\\begin{eqnarray}\n\\label{EM}\n V = 1 - \\frac{M}{r} + O(\\frac{1}{r^{1+\\epsilon}}), \\quad\n \\partial_i V & = & - \\partial_i \\frac{M}{r} + O(\\frac{1}{r^{2+\\epsilon}}), \\nonumber \\\\ \n\\partial_i \\partial_j V & = & - \\partial_i \\partial_j \\frac{M}{r} +\nO(\\frac{1}{r^{3+\\epsilon}}), \\\\ \n\\label{Eg}\ng_{ij} = (1 + \\frac{2M}{r})\\delta_{ij} + O(\\frac{1}{r^{1+\\epsilon}}), \\quad\n\\partial_k g_{ij} & = &\\partial_k \\frac{2M}{r}\\delta_{ij} +\nO(\\frac{1}{r^{2+\\epsilon}})\n\\nonumber \\\\\n\\partial_k \\partial_l g_{ij} & = & \\partial_k \\partial_l \\frac{2M}{r} \\delta_{ij} +\nO(\\frac{1}{r^{3+\\epsilon}})\n\\end{eqnarray}\nin suitable coordinates, where $M$ is the mass.\nEqus.\n (\\ref{Alb}) and (\\ref{Bia}) together with the decay conditions (\\ref{EM}) and (\\ref{Eg}) imply that\n\\begin{equation}\n\\label{Eas}\n\\rho = O(\\frac{1}{r^{3+\\epsilon}}), \\qquad p = O(\\frac{1}{r^{4+\\epsilon}}).\n\\end{equation}\nHere the falloff conditions of the potential $V$ could also be derived from\nsome weak falloff conditions of $g_{ij}$, and $\\rho$ and $p$\n\\cite{WS1}. \nClearly a substantial refinement of all asymptotic conditions is possible if \n ${\\cal M}^{\\infty}$ is vacuum, c.f. \\cite{KM}. 
\n\n\\subsubsection{The Pant-Sah equation of state}\n\nWe now introduce conformal rescalings of the spatial metric of the form \n$g_{ij}^{\\pm} = g_{ij} (1 \\pm fV)^4\/16$ for some constant $f$ which we take to be\nnon-negative (this just fixes the notation), \nand we restrict ourselves to the range $V < 1\/f$.\nWhile any $f > 0$ could be absorbed into $V$ by rescaling the static Killing\nfield, we have already fixed the scaling above by requiring $V \\rightarrow 1$ at infinity, \nwhich is why the extra constant $f$ will persist here in general. \nWe now use the standard formula (\\ref{csc}) with $\\wp = g_{ij}$, $\\Phi = (1 \\pm fV)\/2$ so\nthat $\\widetilde \\wp_{ij} = g^{\\pm}_{ij}$. Together with the field equations (\\ref{Alb}) and \n(\\ref{Ein}) this gives\n\\begin{equation}\n\\label{Rpm}\n\\frac{1}{128} {\\cal R}_{\\pm}(1 \\pm fV)^5 = - (\\Delta - \\frac{1}{8}{\\cal R})(1 \\pm fV) = 2\\pi[\\rho(1 \\mp fV) \\mp 6fpV]\n\\end{equation} \nwhere ${\\cal R}$ and ${\\cal R}_{\\pm}$ are the scalar curvatures of $g_{ij}$ and\n$g_{ij}^{\\pm}$, respectively. By differentiating (\\ref{Rpm}) with respect to $V$ we obtain\n\\begin{equation} \n\\label{dRdV}\n\\frac{d {\\cal R}_{\\pm}}{dV} = \\frac{2560 \\pi (\\rho + 3p)}{(1 \\pm fV)^{6}}\n\\left[f^2 V - \\frac{\\kappa}{10 V} \\left(1 - f^2 V^2\\right) \\right] \n\\end{equation}\nwhere $\\kappa$ has been introduced in (\\ref{Ikap}).\n\nWe now implement the \"Killing Yamabe property\" defined in the introduction\nby requiring that at least one of the curvatures ${\\cal R}_{+}$ and ${\\cal R}_{-}$ is constant. \nThis implies that the quantity in brackets in (\\ref{dRdV}) vanishes, i.e.,\n\\begin{equation}\n \\label{kap} \n\\kappa = \\frac{10 f^{2}V^{2}}{(1 - f^{2}V^{2})}\n\\end{equation} \nand therefore the other scalar curvature is necessarily constant as well.\nUsing this in (\\ref{Rpm}) and setting ${\\cal R}_{\\pm} = 512 \\pi \\rho_{\\pm}$ \nwe obtain \n\\begin{eqnarray} \n\\label{rV}\n\\rho & = & \\rho_{-}(1 - fV)^{5} + \\rho_{+}(1 + fV)^{5}, \\\\\n\\label{pV}\n p & = & \\frac{1}{6 fV}[\\rho_{-}(1 - fV)^{6} - \\rho_{+}(1 + fV)^{6}] \n\\end{eqnarray}\nwhich yields the parametric form (\\ref{psr}), (\\ref{psp}) of the PSEOS \nwhen we set $fV = \\lambda$. Positivity of the pressure now requires that we\nrestrict ourselves to $0 \\le \\rho_+ < \\rho_- < \\infty$. \nNote that Equ. (\\ref{kap}) implies that $d\\rho\/dp$ takes on all real positive values for the \nallowed range $0< fV < 1$ of $V$.\n\nThe case $\\rho = const.$ is included in (\\ref{rV}) and (\\ref{pV}) in the\nlimit $\\rho_+ \\rightarrow \\rho_-$ and $fV \\rightarrow 0$. \nTo see this we expand (\\ref{rV}) and (\\ref{pV}) in $fV$, \n\\begin{equation} \n\\label{rhoc}\n\\rho = \\rho_+ + \\rho_- + O(fV), \\quad \np = \\frac{\\rho_- - \\rho_+}{6 f V} - (\\rho_+ + \\rho_-) + O(fV).\n\\end{equation}\nWithout the terms of order $fV$, this is a solution of the Euler equation (\\ref{Bia}) \nfor constant density $\\rho_+ + \\rho_-$. \nNow the limit $\\rho_+ \\rightarrow \\rho_-$ and $fV \\rightarrow 0$ \nhas to be taken in such a way that $p$ stays regular and non-negative\n(we skip mathematical subtleties).\n\nWe define $\\tau = (\\rho_+\/\\rho_-)^{\\frac{1}{6}}$ which, in contrast to the Newtonian case, \nis now restricted to be less than $1$.\nWe draw the PSEOS in Fig. (\\ref{PSEOS}) in terms of the rescaled variables $p\/\\rho_-$, \n$\\rho\/\\rho_-$ and for the same values of $\\tau$ as chosen for the NEOS in Fig. 
\begin{figure}[h!]
% gnuplot-generated picture omitted; axes: $p/\rho_-$ (vertical) versus
% $\rho/\rho_-$ (horizontal)
\caption{The Pant-Sah equation of state (\ref{psr}), (\ref{psp}) for the values $\tau = 0.382$ (thin solid line),
$\tau = 0.6$ (medium solid) and $\tau = 0.786$ (thick solid line).
The dotted lines indicate the respective limits of $\rho/\rho_-$ for $p/\rho_- \rightarrow
\infty$.}
\label{PSEOS}
\end{figure}
At first sight the PSEOS looks very different from the Newtonian model,
Fig. (\ref{NEOS}). In fact, in contrast to the latter,
the density now stays bounded and tends to $\rho_+ + \rho_-$ as the pressure goes to infinity
(which happens for $fV \rightarrow 0$).
This means that for high pressures the PSEOS first violates the energy
conditions, and finally always becomes infinitely "stiff".
Note however that Fig. (\ref{NEOS}) and Fig. (\ref{PSEOS}) have very
different scales in the $p/\rho_-$ direction. For small $p/\rho_-$ and small
$\rho/\rho_-$ we can still consider the EOS (\ref{Neos}) as the Newtonian limit of the
PSEOS (\ref{rV}) and (\ref{pV}).

At the surface, the quantity $\kappa$ as defined by (\ref{Ikap}) is related
to the speed of sound by $C_s^2 = dp/d\rho|_s = \kappa_s^{-1}$.
In analogy with (\ref{Nsurf}) we now determine the surface potential,
the surface density and $C_s$ from (\ref{kap}), (\ref{rV}) and (\ref{pV}) as
\begin{equation}
\label{Esurf}
fV_s = \frac{1 - \tau}{1 + \tau}, \qquad \rho_s = \frac{32\rho_- \tau^5}{(1 +
\tau)^4}, \qquad C_s^2 = \frac{2\tau}{5 (1 - \tau)^2}.
\end{equation}
Since $V_s < 1$, $f$ is bounded from below by $f > (1 - \tau)/(1 + \tau)$.

Again $\rho_s$ and $C_s$ are determined by the EOS alone, whereas one of $f$ or $V_s$
can be used to parametrize the solutions. If $\tau > (6 - \sqrt{11})/5
\approx 0.537$, then $C_s > 1$,
i.e. the speed of sound exceeds the speed of light already at the surface.
This applies in particular to the curves for $\tau = 0.6$ and $\tau =
0.786$ of Fig. (\ref{PSEOS}).
On the other hand, $\tau < (6 - \sqrt{11})/5$ implies $C_s < 1$, and for
sufficiently "small" spherical stars $C < 1$ then holds up to the center.
An example is the thin line $\tau = 0.382$ of Fig. (\ref{PSEOS}).
Nevertheless, for all $\tau$, $C > 1$ at the centre if the star is
sufficiently large, due to the stiffness of the PSEOS at high pressures.
The size limits for the star follow from Fig. 1 in \cite{KR2}, and
they could also be determined from the results of Sect. 3.2.

While for our purposes the parametric form (\ref{rV}) and (\ref{pV})
suffices as EOS, the latter can also be
displayed in closed form.
We first consider the BEOS ($\rho_+ = 0$), which reads
\begin{equation}
p = \frac{\rho^{6/5}}{6 (\rho_{-}^{1/5} - \rho^{1/5})}
\end{equation}
and holds for $\rho < \rho_{-}$.
In the general case $\rho_+ > 0$ it is clearly simplest to eliminate one of $\rho_-$ or
$\rho_+$, and to interpret the other one, together with $\lambda = fV$, as parameters
of the PSEOS.
However, in view of the geometric interpretation of $\rho_-$ and $\rho_+$,
and in view of the "symmetric form" of equations
(\ref{rV}) and (\ref{pV}), it is more natural to eliminate $\lambda = fV$.
To do so we note that the following linear combination of equations
(\ref{rV}) and (\ref{pV}),
\begin{eqnarray}
\label{poly4}
\lefteqn{\frac{1}{20 \rho_-} [\rho(1 + \lambda) + 6 p\lambda]
+ \frac{1}{20 \rho_+} [\rho(1 - \lambda) - 6 p\lambda] = {} } \nonumber\\
 & & {} = \frac{1}{10}(1 - \lambda)^5 + \frac{1}{10}(1 + \lambda)^5 =
\lambda^4 + 2\lambda^2 + \frac{1}{5},
\end{eqnarray}
gives a polynomial equation of fourth order in $\lambda$ which can be solved algebraically by a
standard procedure.
Alternatively, we can use (\ref{poly4}) to eliminate the fifth and fourth order terms in
(\ref{rV}), which leaves us with the polynomial equation
\begin{equation}
\label{poly3}
\lambda^3 - \frac{\nu_-}{5} (\rho + 6p) \lambda^2 +
\frac{3}{5}[1 + 2\nu_+ (\rho + 5p)] \lambda - \nu_- \rho = 0
\end{equation}
of third order, with $32 \nu_{\pm} = \rho_+^{-1} \pm \rho_-^{-1}$. Solving either (\ref{poly4})
or (\ref{poly3}) for $\lambda = \lambda(\rho,p,\rho_+,\rho_-)$ and putting this back again into
(\ref{rV}) or (\ref{pV}) gives the PSEOS in closed form $H(\rho,p,\rho_+,\rho_-) = 0$.
The function $H$ is elementary but involved and will not be displayed here.
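The elementary but involved algebra can be delegated to a computer algebra system;
a minimal sketch checking (\ref{poly4}) and (\ref{poly3}) on the parametric form:
\begin{verbatim}
# Sketch (sympy): verify the quartic (poly4) and the cubic (poly3) on the
# parametric PSEOS, with 32 nu_{+-} = rho_+^{-1} +- rho_-^{-1}.
import sympy as sp

lam, rm, rp = sp.symbols('lambda rho_minus rho_plus', positive=True)
rho = rm*(1 - lam)**5 + rp*(1 + lam)**5
p   = (rm*(1 - lam)**6 - rp*(1 + lam)**6)/(6*lam)
nup = (1/rp + 1/rm)/32
num = (1/rp - 1/rm)/32

quartic = (rho*(1 + lam) + 6*p*lam)/(20*rm) \
        + (rho*(1 - lam) - 6*p*lam)/(20*rp) \
        - (lam**4 + 2*lam**2 + sp.Rational(1, 5))
assert sp.simplify(quartic) == 0

cubic = lam**3 - num/5*(rho + 6*p)*lam**2 \
      + sp.Rational(3, 5)*(1 + 2*nup*(rho + 5*p))*lam - num*rho
assert sp.simplify(cubic) == 0
\end{verbatim}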
\section{The Spherically Symmetric Solutions}

We now determine the spherically symmetric solutions corresponding to the two-parameter
families of the NEOS (\ref{Neos}) and the PSEOS (\ref{psr}), (\ref{psp}),
making use of the formulas in the Appendix.
By the general theorem \cite{RS} there exist 1-parameter families of such
solutions in either case, for all values of the central pressure.
We will in particular determine the physically relevant parameters, mass $M$ and radius
$R$. As is well known, and easy to see from the definitions and the field equations,
families of static, spherically symmetric fluid solutions are always invariant under the scaling
\begin{equation}
\rho \rightarrow \gamma^2 \rho, \qquad p \rightarrow \gamma^2 p,
\qquad M \rightarrow \gamma^{-1} M, \qquad R \rightarrow \gamma^{-1} R
\end{equation}
for any $\gamma > 0$. For our families of solutions this means that one of the three
parameters is "trivial" in this sense.
In Sects. 3.1.2 and 3.2.3 we will therefore use scale invariant variables $\widehat M$ and
$\widehat R$ defined by
\begin{equation}
\label{resc}
\widehat M = \sqrt{\frac{8\pi \rho_s}{3}}M, \qquad \widehat R = \sqrt{\frac{8\pi \rho_s}{3}} R,
\end{equation}
where $\rho_s$ is the surface density. Note that the latter is given in terms of $\rho_-$ and
$\rho_+$ in Newtonian theory by (\ref{Nsurf}) but in Relativity by (\ref{Esurf}).

\subsection{The Newtonian Solutions}

\subsubsection{The matching}

Using Lemma A.3 of the Appendix with $\wp^+_{ij}$ flat, $\Phi = (\bar v - v)/2$
and ${\cal R}_- = 512\pi\rho_-$, we can write the spherically symmetric solutions
(\ref{Phi}) of (\ref{conf}) as
\begin{equation}
\label{vmu}
\bar v - v = 2 \mu \sqrt{\frac{1}{1 + \frac{64 \pi}{3}\mu^4 \rho_- r^2}}.
\end{equation}
It remains to eliminate one of the constants $\bar v$ and $\mu$ by global conditions.
In the case $\rho_+ = 0$ the NEOS becomes the polytrope of index 5.
It follows from (\ref{PSvir}) that $\bar v = 0$, which means that ${\cal F}$ extends to
infinity and (\ref{vmu}) is valid for all $r$.
The solutions can be conveniently parametrized by their mass $M$ defined in (\ref{NM}).
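As a consistency check, (\ref{vmu}) can be verified symbolically, assuming that
(\ref{Poi}) is the flat-space Poisson equation $\Delta v = 4\pi\rho$ and that (\ref{rhop})
gives $\rho = \rho_-(\bar v - v)^5$, as suggested by (\ref{ss}); both ingredients are
assumptions of this sketch.
\begin{verbatim}
# Sketch (sympy): (vmu) solves Delta v = 4 pi rho with rho = rho_-(vbar-v)^5;
# the Poisson equation and the density profile are assumed, cf. (Poi), (ss).
import sympy as sp

r, mu, rm, vb = sp.symbols('r mu rho_minus vbar', positive=True)
v = vb - 2*mu/sp.sqrt(1 + sp.Rational(64, 3)*sp.pi*mu**4*rm*r**2)
laplacian = sp.diff(r**2*sp.diff(v, r), r)/r**2   # flat radial Laplacian
assert sp.simplify(laplacian - 4*sp.pi*rm*(vb - v)**5) == 0
\end{verbatim}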
In the case $\rho_+ > 0$, it is simplest to parametrize the solutions in terms
of $\bar v$, which is related to the surface potential by (\ref{Nsurf}).
To get $\mu$ it suffices to use that $v \in C^1$, which implies that
\begin{equation}
\label{Nmcoo}
 \left. \frac{dv}{dr} \right|_{\Rightarrow {\cal F}} =
 \left. \frac{dv}{dr} \right|_{{\cal V} \Leftarrow} = - \frac{v_s}{R},
\end{equation}
where $R = r|_s$ is the radius of the star.

Using (\ref{vmu}) and (\ref{Nsurf}) and recalling that $\bar v$ was assumed
non-negative, it follows that $\mu$ can be expressed as
\begin{equation}
\label{Nmu}
 \mu = \left\{ \begin{array}{r@{\quad \mbox{for} \quad}l}
 {\displaystyle \frac{1}{M} \sqrt{ \frac{3}{16 \pi \rho_-} } } & \rho_+ = 0, \\
 {\displaystyle \frac{\tau}{2} \sqrt{\frac{\tau}{\bar v} } } & \rho_+ > 0,
\end{array} \right.
\end{equation}
and we can write (\ref{vmu}) as
\begin{equation}
\label{vex}
 v = \left\{ \begin{array}{r@{\quad \mbox{for}\quad}l}
 {\displaystyle - \frac{M}{\sqrt{\frac{4\pi}{3} \rho_- M^4 + r^2}} } & \rho_+ = 0, \\
 {\displaystyle \bar v - \tau \sqrt{\frac{\tau \bar v}{\bar v^2 + \frac{4\pi}{3} \rho_+ r^2}} }
 & \rho_+ > 0.
 \end{array} \right.
\end{equation}
Note that $M$ can take any value $M \in [0,\infty)$, while
the allowed values for the other parameters are
$\bar v \in [0,\tau]$ or, equivalently, $v_s \in [-\tau,0]$.

For all $\rho_+$ and $\rho_-$ the density,
the pressure and the speed of sound follow from (\ref{rhop}) and (\ref{ss});
they are monotonic functions of $r$.
For $\rho_+ > 0$ the central density $\rho_c$, the central pressure $p_c$
and the speed of sound at the center $C_c$ take the values
\begin{equation}
\rho_c = \frac{\rho_+ \tau}{\bar v^2}\sqrt{\frac{\tau}{\bar v}}, \qquad
p_c = \frac{\rho_+}{6} \left( \frac{\tau^3}{\bar v^3} - 1 \right), \qquad
C_c^2 = \frac{\tau}{5} \sqrt{\frac{\tau}{\bar v}}.
\end{equation}
These quantities diverge as the parameter $\bar v$ goes to zero.
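Under the same assumptions as before ($\rho = \rho_-(\bar v - v)^5$, $\rho_+ = \rho_-\tau^6$,
and the pressure normalized to vanish at the surface), these central values follow from
(\ref{vex}) at $r = 0$; a minimal symbolic check:
\begin{verbatim}
# Sketch (sympy): central values at r = 0 from (vex) for rho_+ > 0, with
# rho_+ = rho_- tau^6 and the assumed Newtonian profile rho = rho_-(vbar-v)^5.
import sympy as sp

tau, vb, rm = sp.symbols('tau vbar rho_minus', positive=True)
rp = rm*tau**6
v_c = vb - tau*sp.sqrt(tau*vb/vb**2)          # (vex) at r = 0
rho_c = rm*(vb - v_c)**5
p_c = rm*((vb - v_c)**6 - tau**6)/6           # pressure with p_s = 0

assert sp.simplify(rho_c - rp*tau/vb**2*sp.sqrt(tau/vb)) == 0
assert sp.simplify(p_c - rp/6*(tau**3/vb**3 - 1)) == 0
\end{verbatim}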
Instead of the coordinate expressions (\ref{vmu})--(\ref{vex}), the matching and the solutions
can be described in a "covariant" manner in terms of $w = \nabla_i v \nabla^i v$,
which is a function of $v$ in the spherically symmetric case.
In particular we have $w = M^{-2} v^4$ in the vacuum region.
To determine $w$ for the spherically symmetric solutions, which are characterized by
${\cal B}_{ij}^- = 0$, we use Lemma A.2 of the Appendix, which shows that
$g_{ij}^- = g_{ij} (\bar v - v)^4/16$ are spaces of constant curvature.
With the general formula (\ref{cri}) this yields
\begin{equation}
0 = (\bar v - v)^2 {\cal B}_{ij}^- = 2 {\cal C}[(\bar v - v) \nabla_i\nabla_j v + 3 \nabla_i v \nabla_j v].
\end{equation}
Contracting this equation with $\nabla^i v \nabla^j v$ and using (\ref{Poi})
and (\ref{rhop}) gives
\begin{equation}
\frac{d}{dv} \left[ \frac{w}{(\bar v - v)^4} \right] = \frac{8\pi}{3}\rho_- (\bar v - v).
\end{equation}
This has the solution
\begin{equation}
\label{wv}
w = \frac{4 \pi}{3} \rho_- (\bar v - v)^4
\left[\sigma^2 - (\bar v - v)^2 \right]
\end{equation}
for some constant $\sigma$ which has to be determined by global conditions.

From the exterior form $w = M^{-2} v^4$ and from the matching conditions (\ref{Ngm})
we obtain
\begin{equation}
\label{Nsm}
 \left[ \frac{d w}{d v} - 8 \pi \rho
\right]_{\Rightarrow \partial {\cal F}} = \left[ \frac{d w}{d v} \right]_{\partial {\cal V} \Leftarrow}
= \frac{4 w_s}{v_s}.
\end{equation}
Using the asymptotic property (\ref{NM}) for $\rho_+ = 0$ and (\ref{Nsm})
for $\rho_+ > 0$ we find that
\begin{equation}
\label{sigma}
 \sigma^{2} = \left\{ \begin{array}{r@{\quad \mbox{for} \quad}l}
 {\displaystyle \frac{3}{4 \pi \rho_- M^2} } & \rho_+ = 0, \\
 {\displaystyle \frac{\tau^3}{\bar v} } & \rho_+ > 0. \end{array} \right.
\end{equation}
Alternatively, equ. (\ref{wv}) can of course be checked directly from (\ref{vex}).
In particular the value of $\sigma$ follows from (\ref{Nmu}) via $\sigma = 2\mu$, or vice versa.

\subsubsection{The Mass-Radius relation}

To determine mass and radius we take equ. (\ref{vex}), or the first of
(\ref{Nsm}) and (\ref{wv}), at the surface and use $v_s = -M/R$.
In terms of the rescaled variables (\ref{resc}) this gives
\begin{equation}
\widehat R^2 = \frac{2 \bar v}{\tau} (\tau - \bar v), \qquad
\widehat M^2 = \frac{2 \bar v}{\tau} (\tau - \bar v)^3,
\end{equation}
which implies the mass-radius relation
\begin{equation}
\frac{ \widehat M^2}{\tau} - \widehat R \, \widehat M + \frac{\widehat R^4}{2} = 0,
\end{equation}
which can be solved for the mass,
\begin{equation}
\label{NMRE}
\widehat M = \frac{\tau \widehat R}{2} \left[1 \pm \sqrt{ 1 - \frac{2 \widehat R^2}{\tau}} \right].
\end{equation}
We remark that in (\ref{NMRE}) $\tau$ could be removed completely by a further rescaling of
$\widehat M$ and $\widehat R$. We avoid this, however, to keep the close
analogy to the relativistic case where this is not possible.
The behaviour of the parameters introduced above is illustrated in Table (\ref{Npar}) and
Fig. (\ref{NMR}).
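The extremal configurations listed in Table (\ref{Npar}) can be recovered numerically
by scanning the family in $\bar v$; a minimal sketch for the illustrative value $\tau = 0.382$:
\begin{verbatim}
# Sketch: scan the Newtonian family in vbar and recover the "biggest" and
# "heaviest" stars of Table (Npar); tau = 0.382 is illustrative.
import numpy as np

tau = 0.382
vbar = np.linspace(0.0, tau, 10001)
R_hat = np.sqrt(2*vbar*(tau - vbar)/tau)
M_hat = np.sqrt(2*vbar*(tau - vbar)**3/tau)

print("max R:", R_hat.max(), "vs", np.sqrt(tau/2))            # at vbar = tau/2
print("max M:", M_hat.max(), "vs", 3*tau/8*np.sqrt(3*tau/2))  # at vbar = tau/4
\end{verbatim}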
\begin{table}[h!]
\caption{The parameters of the Newtonian solutions}
\label{Npar}
\vspace*{0.5cm}
\begin{tabular}{|r||c|c|c|c|c|c|c|}
{} & $\bar v$ & $v_s$ & $\widehat R$ & $\widehat M$ & $v_c$ & $\rho_c$ & $p_c$ \\ [1.0ex] \hline \hline
{} & {} & {} & {} & {} & {} & {} & {} \\
dust particle & $\tau$ & $0$ & $0$ & $0$ & $0$ & $\rho_-\tau^5$ & $0$ \\ [1.5ex]
biggest star & {\large $\frac{\tau}{2}$} & {\large $-\frac{\tau}{2}$} &
{\large $\sqrt{\frac{\tau}{2}}$ } & {\large $\frac{\tau}{2} \sqrt{\frac{\tau}{2}} $ }
& {\large $ \frac{\tau (1 - 2\sqrt{2})}{2}$} &
 $4\sqrt{2}\rho_- \tau^5$ & {\large $\frac{7 \rho_+}{6}$} \\ [1.5ex]
heaviest star & {\large $\frac{\tau}{4}$} & {\large $-\frac{3\tau}{4}$} &
{\large $ \frac{1}{2} \sqrt{\frac{3\tau}{2}}$ } & {\large $\frac{3\tau}{8} \sqrt{\frac{3\tau}{2}}$}
& {\large $ -\frac{7\tau}{4}$} &
 $32\rho_- \tau^5$ & {\large $\frac{21 \rho_+}{2}$} \\ [1.5ex]
singularity & $0$ & $-\tau$ & $0$ & $0$ & $- \infty$ & $\infty$ & $\infty$
\end{tabular}
\end{table}

\begin{figure}[h!]
% gnuplot-generated picture omitted; axes: $\widehat M$ (vertical) versus
% $\widehat R$ (horizontal)
multiput(842.00,178.34)(18.604,8.000){2}{\\rule{1.300pt}{0.800pt}}\n\\multiput(866.00,189.40)(1.789,0.526){7}{\\rule{2.714pt}{0.127pt}}\n\\multiput(866.00,186.34)(16.366,7.000){2}{\\rule{1.357pt}{0.800pt}}\n\\multiput(888.00,196.40)(1.432,0.520){9}{\\rule{2.300pt}{0.125pt}}\n\\multiput(888.00,193.34)(16.226,8.000){2}{\\rule{1.150pt}{0.800pt}}\n\\multiput(909.00,204.40)(1.179,0.516){11}{\\rule{1.978pt}{0.124pt}}\n\\multiput(909.00,201.34)(15.895,9.000){2}{\\rule{0.989pt}{0.800pt}}\n\\multiput(929.00,213.40)(1.285,0.520){9}{\\rule{2.100pt}{0.125pt}}\n\\multiput(929.00,210.34)(14.641,8.000){2}{\\rule{1.050pt}{0.800pt}}\n\\multiput(948.00,221.40)(1.139,0.520){9}{\\rule{1.900pt}{0.125pt}}\n\\multiput(948.00,218.34)(13.056,8.000){2}{\\rule{0.950pt}{0.800pt}}\n\\multiput(965.00,229.40)(0.990,0.516){11}{\\rule{1.711pt}{0.124pt}}\n\\multiput(965.00,226.34)(13.449,9.000){2}{\\rule{0.856pt}{0.800pt}}\n\\multiput(982.00,238.40)(0.927,0.516){11}{\\rule{1.622pt}{0.124pt}}\n\\multiput(982.00,235.34)(12.633,9.000){2}{\\rule{0.811pt}{0.800pt}}\n\\multiput(998.00,247.40)(0.863,0.516){11}{\\rule{1.533pt}{0.124pt}}\n\\multiput(998.00,244.34)(11.817,9.000){2}{\\rule{0.767pt}{0.800pt}}\n\\multiput(1013.00,256.40)(0.737,0.516){11}{\\rule{1.356pt}{0.124pt}}\n\\multiput(1013.00,253.34)(10.186,9.000){2}{\\rule{0.678pt}{0.800pt}}\n\\multiput(1026.00,265.40)(0.800,0.516){11}{\\rule{1.444pt}{0.124pt}}\n\\multiput(1026.00,262.34)(11.002,9.000){2}{\\rule{0.722pt}{0.800pt}}\n\\multiput(1040.00,274.40)(0.674,0.516){11}{\\rule{1.267pt}{0.124pt}}\n\\multiput(1040.00,271.34)(9.371,9.000){2}{\\rule{0.633pt}{0.800pt}}\n\\multiput(1052.00,283.40)(0.674,0.516){11}{\\rule{1.267pt}{0.124pt}}\n\\multiput(1052.00,280.34)(9.371,9.000){2}{\\rule{0.633pt}{0.800pt}}\n\\multiput(1064.00,292.40)(0.611,0.516){11}{\\rule{1.178pt}{0.124pt}}\n\\multiput(1064.00,289.34)(8.555,9.000){2}{\\rule{0.589pt}{0.800pt}}\n\\multiput(1075.00,301.40)(0.487,0.514){13}{\\rule{1.000pt}{0.124pt}}\n\\multiput(1075.00,298.34)(7.924,10.000){2}{\\rule{0.500pt}{0.800pt}}\n\\multiput(1085.00,311.40)(0.548,0.516){11}{\\rule{1.089pt}{0.124pt}}\n\\multiput(1085.00,308.34)(7.740,9.000){2}{\\rule{0.544pt}{0.800pt}}\n\\multiput(1096.40,319.00)(0.516,0.548){11}{\\rule{0.124pt}{1.089pt}}\n\\multiput(1093.34,319.00)(9.000,7.740){2}{\\rule{0.800pt}{0.544pt}}\n\\multiput(1105.40,329.00)(0.520,0.554){9}{\\rule{0.125pt}{1.100pt}}\n\\multiput(1102.34,329.00)(8.000,6.717){2}{\\rule{0.800pt}{0.550pt}}\n\\multiput(1113.40,338.00)(0.520,0.627){9}{\\rule{0.125pt}{1.200pt}}\n\\multiput(1110.34,338.00)(8.000,7.509){2}{\\rule{0.800pt}{0.600pt}}\n\\multiput(1121.40,348.00)(0.526,0.650){7}{\\rule{0.127pt}{1.229pt}}\n\\multiput(1118.34,348.00)(7.000,6.450){2}{\\rule{0.800pt}{0.614pt}}\n\\multiput(1128.40,357.00)(0.526,0.738){7}{\\rule{0.127pt}{1.343pt}}\n\\multiput(1125.34,357.00)(7.000,7.213){2}{\\rule{0.800pt}{0.671pt}}\n\\multiput(1135.39,367.00)(0.536,0.797){5}{\\rule{0.129pt}{1.400pt}}\n\\multiput(1132.34,367.00)(6.000,6.094){2}{\\rule{0.800pt}{0.700pt}}\n\\multiput(1141.39,376.00)(0.536,0.909){5}{\\rule{0.129pt}{1.533pt}}\n\\multiput(1138.34,376.00)(6.000,6.817){2}{\\rule{0.800pt}{0.767pt}}\n\\multiput(1147.38,386.00)(0.560,1.096){3}{\\rule{0.135pt}{1.640pt}}\n\\multiput(1144.34,386.00)(5.000,5.596){2}{\\rule{0.800pt}{0.820pt}}\n\\put(1151.34,395){\\rule{0.800pt}{2.000pt}}\n\\multiput(1149.34,395.00)(4.000,4.849){2}{\\rule{0.800pt}{1.000pt}}\n\\put(1155.34,404){\\rule{0.800pt}{2.200pt}}\n\\multiput(1153.34,404.00)(4.000,5.434){2}{\\rule{0.800pt}{1.100pt}}\n\\put(1158.84,414){\\rule{0.
800pt}{2.168pt}}\n\\multiput(1157.34,414.00)(3.000,4.500){2}{\\rule{0.800pt}{1.084pt}}\n\\put(1161.84,423){\\rule{0.800pt}{2.168pt}}\n\\multiput(1160.34,423.00)(3.000,4.500){2}{\\rule{0.800pt}{1.084pt}}\n\\put(1164.34,432){\\rule{0.800pt}{2.168pt}}\n\\multiput(1163.34,432.00)(2.000,4.500){2}{\\rule{0.800pt}{1.084pt}}\n\\put(1165.84,441){\\rule{0.800pt}{2.168pt}}\n\\multiput(1165.34,441.00)(1.000,4.500){2}{\\rule{0.800pt}{1.084pt}}\n\\put(1166.84,450){\\rule{0.800pt}{1.927pt}}\n\\multiput(1166.34,450.00)(1.000,4.000){2}{\\rule{0.800pt}{0.964pt}}\n\\put(1167.84,458){\\rule{0.800pt}{2.168pt}}\n\\multiput(1167.34,458.00)(1.000,4.500){2}{\\rule{0.800pt}{1.084pt}}\n\\put(1167.84,475){\\rule{0.800pt}{2.168pt}}\n\\multiput(1168.34,475.00)(-1.000,4.500){2}{\\rule{0.800pt}{1.084pt}}\n\\put(1166.34,484){\\rule{0.800pt}{1.927pt}}\n\\multiput(1167.34,484.00)(-2.000,4.000){2}{\\rule{0.800pt}{0.964pt}}\n\\put(1164.84,492){\\rule{0.800pt}{1.686pt}}\n\\multiput(1165.34,492.00)(-1.000,3.500){2}{\\rule{0.800pt}{0.843pt}}\n\\put(1162.84,499){\\rule{0.800pt}{1.927pt}}\n\\multiput(1164.34,499.00)(-3.000,4.000){2}{\\rule{0.800pt}{0.964pt}}\n\\put(1159.84,507){\\rule{0.800pt}{1.686pt}}\n\\multiput(1161.34,507.00)(-3.000,3.500){2}{\\rule{0.800pt}{0.843pt}}\n\\put(1156.34,514){\\rule{0.800pt}{1.600pt}}\n\\multiput(1158.34,514.00)(-4.000,3.679){2}{\\rule{0.800pt}{0.800pt}}\n\\put(1152.34,521){\\rule{0.800pt}{1.600pt}}\n\\multiput(1154.34,521.00)(-4.000,3.679){2}{\\rule{0.800pt}{0.800pt}}\n\\multiput(1147.85,529.39)(-0.462,0.536){5}{\\rule{1.000pt}{0.129pt}}\n\\multiput(1149.92,526.34)(-3.924,6.000){2}{\\rule{0.500pt}{0.800pt}}\n\\multiput(1144.06,534.00)(-0.560,0.592){3}{\\rule{0.135pt}{1.160pt}}\n\\multiput(1144.34,534.00)(-5.000,3.592){2}{\\rule{0.800pt}{0.580pt}}\n\\multiput(1136.30,541.39)(-0.574,0.536){5}{\\rule{1.133pt}{0.129pt}}\n\\multiput(1138.65,538.34)(-4.648,6.000){2}{\\rule{0.567pt}{0.800pt}}\n\\multiput(1128.52,547.38)(-0.760,0.560){3}{\\rule{1.320pt}{0.135pt}}\n\\multiput(1131.26,544.34)(-4.260,5.000){2}{\\rule{0.660pt}{0.800pt}}\n\\multiput(1120.86,552.38)(-0.928,0.560){3}{\\rule{1.480pt}{0.135pt}}\n\\multiput(1123.93,549.34)(-4.928,5.000){2}{\\rule{0.740pt}{0.800pt}}\n\\put(1110,556.34){\\rule{2.000pt}{0.800pt}}\n\\multiput(1114.85,554.34)(-4.849,4.000){2}{\\rule{1.000pt}{0.800pt}}\n\\put(1100,560.34){\\rule{2.200pt}{0.800pt}}\n\\multiput(1105.43,558.34)(-5.434,4.000){2}{\\rule{1.100pt}{0.800pt}}\n\\put(1089,563.84){\\rule{2.650pt}{0.800pt}}\n\\multiput(1094.50,562.34)(-5.500,3.000){2}{\\rule{1.325pt}{0.800pt}}\n\\put(1078,566.34){\\rule{2.650pt}{0.800pt}}\n\\multiput(1083.50,565.34)(-5.500,2.000){2}{\\rule{1.325pt}{0.800pt}}\n\\put(1065,568.34){\\rule{3.132pt}{0.800pt}}\n\\multiput(1071.50,567.34)(-6.500,2.000){2}{\\rule{1.566pt}{0.800pt}}\n\\put(1052,569.84){\\rule{3.132pt}{0.800pt}}\n\\multiput(1058.50,569.34)(-6.500,1.000){2}{\\rule{1.566pt}{0.800pt}}\n\\put(1170.0,467.0){\\rule[-0.400pt]{0.800pt}{1.927pt}}\n\\put(1004,569.34){\\rule{4.095pt}{0.800pt}}\n\\multiput(1012.50,570.34)(-8.500,-2.000){2}{\\rule{2.048pt}{0.800pt}}\n\\put(985,567.34){\\rule{4.577pt}{0.800pt}}\n\\multiput(994.50,568.34)(-9.500,-2.000){2}{\\rule{2.289pt}{0.800pt}}\n\\put(965,564.34){\\rule{4.200pt}{0.800pt}}\n\\multiput(976.28,566.34)(-11.283,-4.000){2}{\\rule{2.100pt}{0.800pt}}\n\\multiput(950.22,562.06)(-3.111,-0.560){3}{\\rule{3.560pt}{0.135pt}}\n\\multiput(957.61,562.34)(-13.611,-5.000){2}{\\rule{1.780pt}{0.800pt}}\n\\multiput(931.78,557.08)(-1.964,-0.526){7}{\\rule{2.943pt}{0.127pt}}\n\\multiput(937.89,557.34)(-17.892,
-7.000){2}{\\rule{1.471pt}{0.800pt}}\n\\multiput(908.38,550.08)(-1.797,-0.520){9}{\\rule{2.800pt}{0.125pt}}\n\\multiput(914.19,550.34)(-20.188,-8.000){2}{\\rule{1.400pt}{0.800pt}}\n\\multiput(885.02,542.08)(-1.287,-0.512){15}{\\rule{2.164pt}{0.123pt}}\n\\multiput(889.51,542.34)(-22.509,-11.000){2}{\\rule{1.082pt}{0.800pt}}\n\\multiput(857.59,531.08)(-1.349,-0.511){17}{\\rule{2.267pt}{0.123pt}}\n\\multiput(862.30,531.34)(-26.295,-12.000){2}{\\rule{1.133pt}{0.800pt}}\n\\multiput(827.86,519.09)(-1.130,-0.508){23}{\\rule{1.960pt}{0.122pt}}\n\\multiput(831.93,519.34)(-28.932,-15.000){2}{\\rule{0.980pt}{0.800pt}}\n\\multiput(795.16,504.09)(-1.077,-0.506){29}{\\rule{1.889pt}{0.122pt}}\n\\multiput(799.08,504.34)(-34.080,-18.000){2}{\\rule{0.944pt}{0.800pt}}\n\\multiput(757.69,486.09)(-0.991,-0.505){35}{\\rule{1.762pt}{0.122pt}}\n\\multiput(761.34,486.34)(-37.343,-21.000){2}{\\rule{0.881pt}{0.800pt}}\n\\multiput(717.17,465.09)(-0.913,-0.504){45}{\\rule{1.646pt}{0.121pt}}\n\\multiput(720.58,465.34)(-43.583,-26.000){2}{\\rule{0.823pt}{0.800pt}}\n\\multiput(670.46,439.09)(-0.865,-0.503){57}{\\rule{1.575pt}{0.121pt}}\n\\multiput(673.73,439.34)(-51.731,-32.000){2}{\\rule{0.788pt}{0.800pt}}\n\\multiput(615.82,407.09)(-0.808,-0.502){75}{\\rule{1.488pt}{0.121pt}}\n\\multiput(618.91,407.34)(-62.912,-41.000){2}{\\rule{0.744pt}{0.800pt}}\n\\multiput(550.13,366.09)(-0.761,-0.502){105}{\\rule{1.414pt}{0.121pt}}\n\\multiput(553.06,366.34)(-82.065,-56.000){2}{\\rule{0.707pt}{0.800pt}}\n\\multiput(465.35,310.09)(-0.727,-0.501){183}{\\rule{1.362pt}{0.121pt}}\n\\multiput(468.17,310.34)(-135.173,-95.000){2}{\\rule{0.681pt}{0.800pt}}\n\\put(1021.0,572.0){\\rule[-0.400pt]{7.468pt}{0.800pt}}\n\\put(201,123){\\usebox{\\plotpoint}}\n\\put(201,121.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(201.00,121.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(203,122.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(203.00,122.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(204,123.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(204.00,123.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(206,124.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(206.00,124.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(207,125.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(207.00,125.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(207.84,128){\\rule{0.800pt}{0.482pt}}\n\\multiput(207.34,128.00)(1.000,1.000){2}{\\rule{0.800pt}{0.241pt}}\n\\put(210,128.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(210.00,128.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(212,129.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(212.00,129.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(213,130.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(213.00,130.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(215,131.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(215.00,131.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(216,132.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(216.00,132.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(218,133.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(218.00,133.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(219,134.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(219.00,134.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(221,135.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(221.00,135.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(222,136.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(222.00,136.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(223,137.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(223.00,137.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\
\put(225,138.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(225.00,138.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(226,139.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(226.00,139.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(228,140.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(228.00,140.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(229,141.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(229.00,141.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(231,142.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(231.00,142.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(232,143.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(232.00,143.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(232.84,146){\\rule{0.800pt}{0.482pt}}\n\\multiput(232.34,146.00)(1.000,1.000){2}{\\rule{0.800pt}{0.241pt}}\n\\put(235,146.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(235.00,146.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(237,147.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(237.00,147.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(238,148.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(238.00,148.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(240,149.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(240.00,149.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(241,150.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(241.00,150.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(243,151.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(243.00,151.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(244,152.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(244.00,152.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(246,153.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(246.00,153.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(247,154.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(247.00,154.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(249,155.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(249.00,155.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(250,156.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(250.00,156.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(252,157.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(252.00,157.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(253,158.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(253.00,158.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(255,159.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(255.00,159.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(256,160.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(256.00,160.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(256.84,163){\\rule{0.800pt}{0.482pt}}\n\\multiput(256.34,163.00)(1.000,1.000){2}{\\rule{0.800pt}{0.241pt}}\n\\put(259,163.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(259.00,163.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(261,164.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(261.00,164.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(262,165.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(262.00,165.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(264,166.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(264.00,166.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(265,167.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(265.00,167.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(267,168.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(267.00,168.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(268,169.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(268.00,169.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(270,170.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(270.00,170.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(271,171.84){\\ru
le{0.241pt}{0.800pt}}\n\\multiput(271.00,171.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(272,172.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(272.00,172.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(274,173.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(274.00,173.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(275,174.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(275.00,174.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(277,175.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(277.00,175.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(278,176.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(278.00,176.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(280,177.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(280.00,177.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(281,179.34){\\rule{0.482pt}{0.800pt}}\n\\multiput(281.00,178.34)(1.000,2.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(283,180.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(283.00,180.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(284,181.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(284.00,181.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(286,182.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(286.00,182.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(287,183.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(287.00,183.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(289,184.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(289.00,184.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(290,185.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(290.00,185.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(292,186.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(292.00,186.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(293,187.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(293.00,187.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(295,188.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(295.00,188.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(296,189.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(296.00,189.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(298,190.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(298.00,190.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(299,191.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(299.00,191.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(301,192.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(301.00,192.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(302,193.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(302.00,193.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(304,194.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(304.00,194.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(305,195.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(305.00,195.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(305.84,198){\\rule{0.800pt}{0.482pt}}\n\\multiput(305.34,198.00)(1.000,1.000){2}{\\rule{0.800pt}{0.241pt}}\n\\put(308,198.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(308.00,198.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(310,199.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(310.00,199.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(311,200.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(311.00,200.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(313,201.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(313.00,201.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(314,202.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(314.00,202.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(316,203.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(316.00,203.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(317,204.84){\\rule{0.482pt}{0.800pt}}
\n\\multiput(317.00,204.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(319,205.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(319.00,205.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(320,206.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(320.00,206.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(322,207.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(322.00,207.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(323,208.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(323.00,208.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(324,209.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(324.00,209.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(326,210.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(326.00,210.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(327,211.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(327.00,211.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(329,212.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(329.00,212.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(330,214.34){\\rule{0.482pt}{0.800pt}}\n\\multiput(330.00,213.34)(1.000,2.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(332,215.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(332.00,215.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(333,216.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(333.00,216.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(335,217.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(335.00,217.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(336,218.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(336.00,218.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(338,219.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(338.00,219.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(339,220.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(339.00,220.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(341,221.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(341.00,221.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(342,222.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(342.00,222.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(344,223.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(344.00,223.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\put(345,224.84){\\rule{0.482pt}{0.800pt}}\n\\multiput(345.00,224.34)(1.000,1.000){2}{\\rule{0.241pt}{0.800pt}}\n\\put(347,225.84){\\rule{0.241pt}{0.800pt}}\n\\multiput(347.00,225.34)(0.500,1.000){2}{\\rule{0.120pt}{0.800pt}}\n\\sbox{\\plotpoint}{\\rule[-0.600pt]{1.200pt}{1.200pt}}%\n\\put(280,123){\\usebox{\\plotpoint}}\n\\put(280,121.51){\\rule{33.726pt}{1.200pt}}\n\\multiput(280.00,120.51)(70.000,2.000){2}{\\rule{16.863pt}{1.200pt}}\n\\put(420,124.01){\\rule{19.031pt}{1.200pt}}\n\\multiput(420.00,122.51)(39.500,3.000){2}{\\rule{9.516pt}{1.200pt}}\n\\put(499,127.51){\\rule{14.695pt}{1.200pt}}\n\\multiput(499.00,125.51)(30.500,4.000){2}{\\rule{7.347pt}{1.200pt}}\n\\put(560,132.01){\\rule{12.286pt}{1.200pt}}\n\\multiput(560.00,129.51)(25.500,5.000){2}{\\rule{6.143pt}{1.200pt}}\n\\put(611,137.01){\\rule{10.840pt}{1.200pt}}\n\\multiput(611.00,134.51)(22.500,5.000){2}{\\rule{5.420pt}{1.200pt}}\n\\put(656,142.01){\\rule{9.636pt}{1.200pt}}\n\\multiput(656.00,139.51)(20.000,5.000){2}{\\rule{4.818pt}{1.200pt}}\n\\multiput(696.00,149.24)(4.867,0.509){2}{\\rule{7.500pt}{0.123pt}}\n\\multiput(696.00,144.51)(20.433,6.000){2}{\\rule{3.750pt}{1.200pt}}\n\\multiput(732.00,155.24)(2.853,0.505){4}{\\rule{6.129pt}{0.122pt}}\n\\multiput(732.00,150.51)(21.280,7.000){2}{\\rule{3.064pt}{1.200pt}}\n\\multiput(766.00,162.24)(2.565,0.505){4}{\\rule{5.614pt}{0.122pt}}\n\\multiput(766.00,157.51)(19.347,7.000){2}{\\rule{2.807pt}{1.200
pt}}\n\\multiput(797.00,169.24)(2.373,0.505){4}{\\rule{5.271pt}{0.122pt}}\n\\multiput(797.00,164.51)(18.059,7.000){2}{\\rule{2.636pt}{1.200pt}}\n\\multiput(826.00,176.24)(2.180,0.505){4}{\\rule{4.929pt}{0.122pt}}\n\\multiput(826.00,171.51)(16.771,7.000){2}{\\rule{2.464pt}{1.200pt}}\n\\multiput(853.00,183.24)(1.638,0.503){6}{\\rule{4.050pt}{0.121pt}}\n\\multiput(853.00,178.51)(16.594,8.000){2}{\\rule{2.025pt}{1.200pt}}\n\\multiput(878.00,191.24)(1.562,0.503){6}{\\rule{3.900pt}{0.121pt}}\n\\multiput(878.00,186.51)(15.905,8.000){2}{\\rule{1.950pt}{1.200pt}}\n\\multiput(902.00,199.24)(1.487,0.503){6}{\\rule{3.750pt}{0.121pt}}\n\\multiput(902.00,194.51)(15.217,8.000){2}{\\rule{1.875pt}{1.200pt}}\n\\multiput(925.00,207.24)(1.225,0.502){8}{\\rule{3.233pt}{0.121pt}}\n\\multiput(925.00,202.51)(15.289,9.000){2}{\\rule{1.617pt}{1.200pt}}\n\\multiput(947.00,216.24)(1.260,0.503){6}{\\rule{3.300pt}{0.121pt}}\n\\multiput(947.00,211.51)(13.151,8.000){2}{\\rule{1.650pt}{1.200pt}}\n\\multiput(967.00,224.24)(1.098,0.502){8}{\\rule{2.967pt}{0.121pt}}\n\\multiput(967.00,219.51)(13.843,9.000){2}{\\rule{1.483pt}{1.200pt}}\n\\multiput(987.00,233.24)(0.970,0.502){8}{\\rule{2.700pt}{0.121pt}}\n\\multiput(987.00,228.51)(12.396,9.000){2}{\\rule{1.350pt}{1.200pt}}\n\\multiput(1005.00,242.24)(0.865,0.502){10}{\\rule{2.460pt}{0.121pt}}\n\\multiput(1005.00,237.51)(12.894,10.000){2}{\\rule{1.230pt}{1.200pt}}\n\\multiput(1023.00,252.24)(0.907,0.502){8}{\\rule{2.567pt}{0.121pt}}\n\\multiput(1023.00,247.51)(11.673,9.000){2}{\\rule{1.283pt}{1.200pt}}\n\\multiput(1040.00,261.24)(0.810,0.502){10}{\\rule{2.340pt}{0.121pt}}\n\\multiput(1040.00,256.51)(12.143,10.000){2}{\\rule{1.170pt}{1.200pt}}\n\\multiput(1057.00,271.24)(0.779,0.502){8}{\\rule{2.300pt}{0.121pt}}\n\\multiput(1057.00,266.51)(10.226,9.000){2}{\\rule{1.150pt}{1.200pt}}\n\\multiput(1072.00,280.24)(0.698,0.502){10}{\\rule{2.100pt}{0.121pt}}\n\\multiput(1072.00,275.51)(10.641,10.000){2}{\\rule{1.050pt}{1.200pt}}\n\\multiput(1087.00,290.24)(0.642,0.502){10}{\\rule{1.980pt}{0.121pt}}\n\\multiput(1087.00,285.51)(9.890,10.000){2}{\\rule{0.990pt}{1.200pt}}\n\\multiput(1101.00,300.24)(0.583,0.502){12}{\\rule{1.827pt}{0.121pt}}\n\\multiput(1101.00,295.51)(10.207,11.000){2}{\\rule{0.914pt}{1.200pt}}\n\\multiput(1115.00,311.24)(0.587,0.502){10}{\\rule{1.860pt}{0.121pt}}\n\\multiput(1115.00,306.51)(9.139,10.000){2}{\\rule{0.930pt}{1.200pt}}\n\\multiput(1128.00,321.24)(0.587,0.502){10}{\\rule{1.860pt}{0.121pt}}\n\\multiput(1128.00,316.51)(9.139,10.000){2}{\\rule{0.930pt}{1.200pt}}\n\\multiput(1141.00,331.24)(0.484,0.502){12}{\\rule{1.609pt}{0.121pt}}\n\\multiput(1141.00,326.51)(8.660,11.000){2}{\\rule{0.805pt}{1.200pt}}\n\\multiput(1153.00,342.24)(0.475,0.502){10}{\\rule{1.620pt}{0.121pt}}\n\\multiput(1153.00,337.51)(7.638,10.000){2}{\\rule{0.810pt}{1.200pt}}\n\\multiput(1164.00,352.24)(0.434,0.502){12}{\\rule{1.500pt}{0.121pt}}\n\\multiput(1164.00,347.51)(7.887,11.000){2}{\\rule{0.750pt}{1.200pt}}\n\\multiput(1175.00,363.24)(0.434,0.502){12}{\\rule{1.500pt}{0.121pt}}\n\\multiput(1175.00,358.51)(7.887,11.000){2}{\\rule{0.750pt}{1.200pt}}\n\\multiput(1188.24,372.00)(0.502,0.475){10}{\\rule{0.121pt}{1.620pt}}\n\\multiput(1183.51,372.00)(10.000,7.638){2}{\\rule{1.200pt}{0.810pt}}\n\\multiput(1198.24,383.00)(0.502,0.524){8}{\\rule{0.121pt}{1.767pt}}\n\\multiput(1193.51,383.00)(9.000,7.333){2}{\\rule{1.200pt}{0.883pt}}\n\\multiput(1207.24,394.00)(0.502,0.524){8}{\\rule{0.121pt}{1.767pt}}\n\\multiput(1202.51,394.00)(9.000,7.333){2}{\\rule{1.200pt}{0.883pt}}\n\\multiput(1216.24,405.00)(0
.502,0.524){8}{\\rule{0.121pt}{1.767pt}}\n\\multiput(1211.51,405.00)(9.000,7.333){2}{\\rule{1.200pt}{0.883pt}}\n\\multiput(1225.24,416.00)(0.503,0.581){6}{\\rule{0.121pt}{1.950pt}}\n\\multiput(1220.51,416.00)(8.000,6.953){2}{\\rule{1.200pt}{0.975pt}}\n\\multiput(1233.24,427.00)(0.503,0.581){6}{\\rule{0.121pt}{1.950pt}}\n\\multiput(1228.51,427.00)(8.000,6.953){2}{\\rule{1.200pt}{0.975pt}}\n\\multiput(1241.24,438.00)(0.505,0.642){4}{\\rule{0.122pt}{2.186pt}}\n\\multiput(1236.51,438.00)(7.000,6.463){2}{\\rule{1.200pt}{1.093pt}}\n\\multiput(1248.24,449.00)(0.505,0.642){4}{\\rule{0.122pt}{2.186pt}}\n\\multiput(1243.51,449.00)(7.000,6.463){2}{\\rule{1.200pt}{1.093pt}}\n\\multiput(1255.24,460.00)(0.505,0.642){4}{\\rule{0.122pt}{2.186pt}}\n\\multiput(1250.51,460.00)(7.000,6.463){2}{\\rule{1.200pt}{1.093pt}}\n\\multiput(1262.24,471.00)(0.509,0.792){2}{\\rule{0.123pt}{2.700pt}}\n\\multiput(1257.51,471.00)(6.000,6.396){2}{\\rule{1.200pt}{1.350pt}}\n\\multiput(1268.24,483.00)(0.509,0.622){2}{\\rule{0.123pt}{2.500pt}}\n\\multiput(1263.51,483.00)(6.000,5.811){2}{\\rule{1.200pt}{1.250pt}}\n\\put(1272.01,494){\\rule{1.200pt}{2.650pt}}\n\\multiput(1269.51,494.00)(5.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1277.01,505){\\rule{1.200pt}{2.650pt}}\n\\multiput(1274.51,505.00)(5.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1281.51,516){\\rule{1.200pt}{2.650pt}}\n\\multiput(1279.51,516.00)(4.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1286.01,527){\\rule{1.200pt}{2.891pt}}\n\\multiput(1283.51,527.00)(5.000,6.000){2}{\\rule{1.200pt}{1.445pt}}\n\\put(1290.01,539){\\rule{1.200pt}{2.650pt}}\n\\multiput(1288.51,539.00)(3.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1293.51,550){\\rule{1.200pt}{2.650pt}}\n\\multiput(1291.51,550.00)(4.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1297.01,561){\\rule{1.200pt}{2.650pt}}\n\\multiput(1295.51,561.00)(3.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1299.51,572){\\rule{1.200pt}{2.650pt}}\n\\multiput(1298.51,572.00)(2.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1301.51,583){\\rule{1.200pt}{2.409pt}}\n\\multiput(1300.51,583.00)(2.000,5.000){2}{\\rule{1.200pt}{1.204pt}}\n\\put(1303.51,593){\\rule{1.200pt}{2.650pt}}\n\\multiput(1302.51,593.00)(2.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1305.01,604){\\rule{1.200pt}{2.650pt}}\n\\multiput(1304.51,604.00)(1.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1306.01,615){\\rule{1.200pt}{2.409pt}}\n\\multiput(1305.51,615.00)(1.000,5.000){2}{\\rule{1.200pt}{1.204pt}}\n\\put(1307.01,625){\\rule{1.200pt}{2.650pt}}\n\\multiput(1306.51,625.00)(1.000,5.500){2}{\\rule{1.200pt}{1.325pt}}\n\\put(1307.01,646){\\rule{1.200pt}{2.409pt}}\n\\multiput(1307.51,646.00)(-1.000,5.000){2}{\\rule{1.200pt}{1.204pt}}\n\\put(1306.01,656){\\rule{1.200pt}{2.409pt}}\n\\multiput(1306.51,656.00)(-1.000,5.000){2}{\\rule{1.200pt}{1.204pt}}\n\\put(1305.01,666){\\rule{1.200pt}{2.409pt}}\n\\multiput(1305.51,666.00)(-1.000,5.000){2}{\\rule{1.200pt}{1.204pt}}\n\\put(1303.51,676){\\rule{1.200pt}{2.168pt}}\n\\multiput(1304.51,676.00)(-2.000,4.500){2}{\\rule{1.200pt}{1.084pt}}\n\\put(1301.51,685){\\rule{1.200pt}{2.409pt}}\n\\multiput(1302.51,685.00)(-2.000,5.000){2}{\\rule{1.200pt}{1.204pt}}\n\\put(1299.51,695){\\rule{1.200pt}{2.168pt}}\n\\multiput(1300.51,695.00)(-2.000,4.500){2}{\\rule{1.200pt}{1.084pt}}\n\\put(1296.51,704){\\rule{1.200pt}{1.927pt}}\n\\multiput(1298.51,704.00)(-4.000,4.000){2}{\\rule{1.200pt}{0.964pt}}\n\\put(1293.01,712){\\rule{1.200pt}{2.168pt}}\n\\multiput(1294.51,712.00)(-3.000,4.500){2}{\\rule{1.200pt}{1.084pt}}\n\\put(1289.01,721){
\\rule{1.200pt}{1.927pt}}\n\\multiput(1291.51,721.00)(-5.000,4.000){2}{\\rule{1.200pt}{0.964pt}}\n\\put(1284.51,729){\\rule{1.200pt}{1.927pt}}\n\\multiput(1286.51,729.00)(-4.000,4.000){2}{\\rule{1.200pt}{0.964pt}}\n\\multiput(1282.25,737.00)(-0.509,0.113){2}{\\rule{0.123pt}{1.900pt}}\n\\multiput(1282.51,737.00)(-6.000,4.056){2}{\\rule{1.200pt}{0.950pt}}\n\\put(1273.51,745){\\rule{1.200pt}{1.686pt}}\n\\multiput(1276.51,745.00)(-6.000,3.500){2}{\\rule{1.200pt}{0.843pt}}\n\\put(1267.51,752){\\rule{1.200pt}{1.686pt}}\n\\multiput(1270.51,752.00)(-6.000,3.500){2}{\\rule{1.200pt}{0.843pt}}\n\\put(1260,759.51){\\rule{1.686pt}{1.200pt}}\n\\multiput(1263.50,756.51)(-3.500,6.000){2}{\\rule{0.843pt}{1.200pt}}\n\\multiput(1252.11,767.24)(-0.113,0.509){2}{\\rule{1.900pt}{0.123pt}}\n\\multiput(1256.06,762.51)(-4.056,6.000){2}{\\rule{0.950pt}{1.200pt}}\n\\multiput(1244.11,773.24)(-0.113,0.509){2}{\\rule{1.900pt}{0.123pt}}\n\\multiput(1248.06,768.51)(-4.056,6.000){2}{\\rule{0.950pt}{1.200pt}}\n\\put(1235,777.01){\\rule{2.168pt}{1.200pt}}\n\\multiput(1239.50,774.51)(-4.500,5.000){2}{\\rule{1.084pt}{1.200pt}}\n\\put(1225,781.51){\\rule{2.409pt}{1.200pt}}\n\\multiput(1230.00,779.51)(-5.000,4.000){2}{\\rule{1.204pt}{1.200pt}}\n\\put(1214,785.51){\\rule{2.650pt}{1.200pt}}\n\\multiput(1219.50,783.51)(-5.500,4.000){2}{\\rule{1.325pt}{1.200pt}}\n\\put(1203,789.01){\\rule{2.650pt}{1.200pt}}\n\\multiput(1208.50,787.51)(-5.500,3.000){2}{\\rule{1.325pt}{1.200pt}}\n\\put(1191,791.51){\\rule{2.891pt}{1.200pt}}\n\\multiput(1197.00,790.51)(-6.000,2.000){2}{\\rule{1.445pt}{1.200pt}}\n\\put(1177,793.01){\\rule{3.373pt}{1.200pt}}\n\\multiput(1184.00,792.51)(-7.000,1.000){2}{\\rule{1.686pt}{1.200pt}}\n\\put(1163,794.01){\\rule{3.373pt}{1.200pt}}\n\\multiput(1170.00,793.51)(-7.000,1.000){2}{\\rule{1.686pt}{1.200pt}}\n\\put(1310.0,636.0){\\rule[-0.600pt]{1.200pt}{2.409pt}}\n\\put(1132,793.51){\\rule{3.854pt}{1.200pt}}\n\\multiput(1140.00,794.51)(-8.000,-2.000){2}{\\rule{1.927pt}{1.200pt}}\n\\put(1115,791.51){\\rule{4.095pt}{1.200pt}}\n\\multiput(1123.50,792.51)(-8.500,-2.000){2}{\\rule{2.048pt}{1.200pt}}\n\\put(1096,788.51){\\rule{4.577pt}{1.200pt}}\n\\multiput(1105.50,790.51)(-9.500,-4.000){2}{\\rule{2.289pt}{1.200pt}}\n\\put(1076,784.01){\\rule{4.818pt}{1.200pt}}\n\\multiput(1086.00,786.51)(-10.000,-5.000){2}{\\rule{2.409pt}{1.200pt}}\n\\multiput(1059.81,781.26)(-1.604,-0.505){4}{\\rule{3.900pt}{0.122pt}}\n\\multiput(1067.91,781.51)(-12.905,-7.000){2}{\\rule{1.950pt}{1.200pt}}\n\\multiput(1039.43,774.26)(-1.487,-0.503){6}{\\rule{3.750pt}{0.121pt}}\n\\multiput(1047.22,774.51)(-15.217,-8.000){2}{\\rule{1.875pt}{1.200pt}}\n\\multiput(1018.30,766.26)(-1.256,-0.502){10}{\\rule{3.300pt}{0.121pt}}\n\\multiput(1025.15,766.51)(-18.151,-10.000){2}{\\rule{1.650pt}{1.200pt}}\n\\multiput(994.55,756.26)(-1.119,-0.501){14}{\\rule{3.000pt}{0.121pt}}\n\\multiput(1000.77,756.51)(-20.773,-12.000){2}{\\rule{1.500pt}{1.200pt}}\n\\multiput(969.12,744.26)(-0.954,-0.501){20}{\\rule{2.620pt}{0.121pt}}\n\\multiput(974.56,744.51)(-23.562,-15.000){2}{\\rule{1.310pt}{1.200pt}}\n\\multiput(940.38,729.26)(-0.929,-0.501){24}{\\rule{2.559pt}{0.121pt}}\n\\multiput(945.69,729.51)(-26.689,-17.000){2}{\\rule{1.279pt}{1.200pt}}\n\\multiput(909.29,712.26)(-0.837,-0.501){30}{\\rule{2.340pt}{0.121pt}}\n\\multiput(914.14,712.51)(-29.143,-20.000){2}{\\rule{1.170pt}{1.200pt}}\n\\multiput(875.87,692.26)(-0.780,-0.500){38}{\\rule{2.200pt}{0.121pt}}\n\\multiput(880.43,692.51)(-33.434,-24.000){2}{\\rule{1.100pt}{1.200pt}}\n\\multiput(838.28,668.26)(-0.740,-0.500){46}{\\rule
{2.100pt}{0.121pt}}\n\\multiput(842.64,668.51)(-37.641,-28.000){2}{\\rule{1.050pt}{1.200pt}}\n\\multiput(796.87,640.26)(-0.682,-0.500){58}{\\rule{1.959pt}{0.121pt}}\n\\multiput(800.93,640.51)(-42.934,-34.000){2}{\\rule{0.979pt}{1.200pt}}\n\\multiput(750.19,606.26)(-0.651,-0.500){72}{\\rule{1.880pt}{0.121pt}}\n\\multiput(754.10,606.51)(-50.097,-41.000){2}{\\rule{0.940pt}{1.200pt}}\n\\multiput(696.48,565.26)(-0.623,-0.500){90}{\\rule{1.812pt}{0.120pt}}\n\\multiput(700.24,565.51)(-59.239,-50.000){2}{\\rule{0.906pt}{1.200pt}}\n\\multiput(633.84,515.26)(-0.588,-0.500){118}{\\rule{1.725pt}{0.120pt}}\n\\multiput(637.42,515.51)(-72.420,-64.000){2}{\\rule{0.863pt}{1.200pt}}\n\\multiput(558.05,451.26)(-0.568,-0.500){170}{\\rule{1.673pt}{0.120pt}}\n\\multiput(561.53,451.51)(-99.527,-90.000){2}{\\rule{0.837pt}{1.200pt}}\n\\multiput(455.34,361.26)(-0.542,-0.500){400}{\\rule{1.605pt}{0.120pt}}\n\\multiput(458.67,361.51)(-219.668,-205.000){2}{\\rule{0.803pt}{1.200pt}}\n\\put(1148.0,797.0){\\rule[-0.600pt]{3.613pt}{1.200pt}}\n\\put(201,123){\\usebox{\\plotpoint}}\n\\put(201,121.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(201.00,120.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(202,122.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(202.00,121.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(202.01,125){\\rule{1.200pt}{0.482pt}}\n\\multiput(201.51,125.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(205,125.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(205.00,124.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(206,126.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(206.00,125.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(207,127.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(207.00,126.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(208,128.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(208.00,127.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(209,129.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(209.00,128.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(211,130.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(211.00,129.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(212,131.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(212.00,130.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(213,132.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(213.00,131.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(212.01,135){\\rule{1.200pt}{0.482pt}}\n\\multiput(211.51,135.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(215,135.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(215.00,134.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(217,136.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(217.00,135.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(218,137.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(218.00,136.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(219,138.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(219.00,137.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(220,139.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(220.00,138.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(221,140.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(221.00,139.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(223,141.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(223.00,140.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(224,142.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(224.00,141.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(223.01,145){\\rule{1.200pt}{0.482pt}}\n\\multiput(222.51,145.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(226,145.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(226.00,144.51)(0.500,1.000){2}{\\rule{0.
120pt}{1.200pt}}\n\\put(227,146.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(227.00,145.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(229,147.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(229.00,146.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(230,148.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(230.00,147.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(231,149.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(231.00,148.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(232,150.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(232.00,149.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(233,151.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(233.00,150.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(234,152.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(234.00,151.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(234.01,155){\\rule{1.200pt}{0.482pt}}\n\\multiput(233.51,155.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(237,155.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(237.00,154.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(238,156.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(238.00,155.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(239,157.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(239.00,156.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(240,158.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(240.00,157.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(242,159.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(242.00,158.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(243,160.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(243.00,159.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(244,161.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(244.00,160.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(245,162.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(245.00,161.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(246,163.51){\\rule{0.482pt}{1.200pt}}\n\\multiput(246.00,162.51)(1.000,2.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(248,165.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(248.00,164.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(249,166.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(249.00,165.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(250,167.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(250.00,166.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(251,168.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(251.00,167.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(252,169.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(252.00,168.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(253,170.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(253.00,169.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(255,171.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(255.00,170.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(256,172.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(256.00,171.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(255.01,175){\\rule{1.200pt}{0.482pt}}\n\\multiput(254.51,175.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(258,175.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(258.00,174.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(259,176.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(259.00,175.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(261,177.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(261.00,176.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(262,178.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(262.00,177.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(263,179.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(263.00,178.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\p
ut(264,180.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(264.00,179.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(265,181.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(265.00,180.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(267,182.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(267.00,181.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(266.01,185){\\rule{1.200pt}{0.482pt}}\n\\multiput(265.51,185.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(269,185.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(269.00,184.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(270,186.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(270.00,185.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(271,187.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(271.00,186.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(272,188.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(272.00,187.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(274,189.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(274.00,188.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(275,190.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(275.00,189.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(276,191.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(276.00,190.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(277,192.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(277.00,191.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(278,193.51){\\rule{0.482pt}{1.200pt}}\n\\multiput(278.00,192.51)(1.000,2.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(280,195.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(280.00,194.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(281,196.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(281.00,195.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(282,197.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(282.00,196.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(283,198.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(283.00,197.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(284,199.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(284.00,198.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(286,200.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(286.00,199.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(287,201.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(287.00,200.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(288,202.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(288.00,201.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(287.01,205){\\rule{1.200pt}{0.482pt}}\n\\multiput(286.51,205.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(290,205.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(290.00,204.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(291,206.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(291.00,205.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(293,207.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(293.00,206.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(294,208.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(294.00,207.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(295,209.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(295.00,208.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(296,210.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(296.00,209.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(297,211.01){\\rule{0.482pt}{1.200pt}}\n\\multiput(297.00,210.51)(1.000,1.000){2}{\\rule{0.241pt}{1.200pt}}\n\\put(299,212.01){\\rule{0.241pt}{1.200pt}}\n\\multiput(299.00,211.51)(0.500,1.000){2}{\\rule{0.120pt}{1.200pt}}\n\\put(298.01,215){\\rule{1.200pt}{0.482pt}}\n\\multiput(297.51,215.00)(1.000,1.000){2}{\\rule{1.200pt}{0.241pt}}\n\\put(301,215.01){\\rule
\end{picture}
\caption{The mass-radius diagram for the Newtonian model with equation of
state (\ref{Neos}), for the values $\tau = 0.382$ (thin line),
$\tau = 0.6$ (medium) and $\tau = 0.786$ (thick line).}
\label{NMR}
\end{figure}

For the NEOS (\ref{Neos}) with $\rho_- \in [0,\infty)$, $\rho_+ \in (0,\infty)$,
the quantities $\bar v$, $v_s$, $v_c$, $\rho_c$ and $p_c$ can take all values
in the open interval bounded by the respective values of the "dust particle"
and the point singularity, which are clearly unphysical themselves. These parameters are
monotonic functions of one another, and any of them can be used to characterize the solutions.

On the other hand, the mass and the radius have extrema which follow easily from
(\ref{NMRE}) and are also given in Table (\ref{Npar}). Fig. (\ref{NMR}), where we have
chosen the same values of $\tau$ as in Fig. (\ref{NEOS}), shows the following.
Starting with the dust particle at $\widehat R = \widehat M = v_s = 0$
and increasing $p_c$ and $|v_s|$, we follow the lower branch of the
mass-radius curve, which corresponds to the minus sign in
(\ref{NMRE}). After passing the maximum radius $\widehat R = \sqrt{\tau/2}$,
the mass, now given by the plus sign in (\ref{NMRE}), continues to
increase until the "heaviest star" of the table is reached. Then mass and
radius drop towards the point singularity $\widehat R = \widehat M = 0$ and
$v_s = - \tau$.
The surface potential $v_s = - \widehat M/\widehat R$
(minus the slope of the line joining points of the curve with the origin)
decreases monotonically along the curve, whence the latter forms precisely one "loop":
if the curve intersected itself, two distinct solutions would share the same value of $v_s$.

\subsection{The Relativistic Solutions}

\subsubsection{The matching}

Using Lemma A.3 of the Appendix with $\wp_{ij} = g_{ij}$, $\Phi = (1 - fV)/(1 + fV)$ and
${\cal R}_{\pm} = 512 \pi \rho_{\pm}$, we write the spherically symmetric solutions of (\ref{Rpm}) as
\begin{eqnarray}
\label{PSV}
\frac{1 - fV}{1 + fV} & = & \mu \sqrt{\frac{1 + \frac{64\pi}{3}\rho_+ r^2}{1 + \frac{64\pi}{3}\mu^4 \rho_- r^2}}, \\
\label{PSg}
g_{ij} dx^i dx^j & = & \frac{16 \left(dr^2 + r^2 d\omega^2 \right)}{\left(1 + fV \right)^4 \left(1 + \frac{64\pi}{3}\rho_+ r^2 \right)^2}.
\end{eqnarray}

Again we have to eliminate one of the constants $f$ and $\mu$ by global conditions.
Recall that the parameters in the EOS are now restricted by $0 \le \rho_+ < \rho_- < \infty$,
and so $0 \le \tau < 1$.
In the case $\rho_+ = 0$ the solutions extend to infinity
(which can be shown independently of spherical symmetry, c.f. \cite{WS1,WS3,MH}),
and we set $f=1$. For $\rho_+ > 0$ the solutions have finite extent since $\rho_s > 0$.
We claim that the Buchdahl solutions and the Pant-Sah solutions are given by (\ref{PSV}) and (\ref{PSg}) with
\begin{equation}
\label{Emu}
 \mu = \left\{ \begin{array}{r@{\quad \mbox{for} \quad}l}
 {\displaystyle \frac{1}{4M} \sqrt{ \frac{3}{\pi \rho_-} } } & \rho_+ = 0,
 \\
 {\displaystyle \frac{\Sigma_+ + \Sigma_-}{2} } & \rho_+ > 0,
 \end{array} \right.
\end{equation}
and
\begin{equation}
\label{Spm}
\Sigma_{\pm} = \tau \sqrt{(1 \pm \tau)^2 + \frac{(1 + \tau)^2 f^2 - (1 - \tau)^2}{1 - f^2}},
\end{equation}
which requires $f < 1$ to make sense.
For $\rho_+ = 0$, equ. (\ref{Emu}) follows easily from the asymptotic condition (\ref{EM}).
On the other hand, for $\rho_+ > 0$ the matching to Schwarzschild is quite involved,
as the isotropic coordinates of (\ref{PSV}), (\ref{PSg}), which simplify the interior
solutions, are unsuited for the matching. We will verify (\ref{Emu}) below by matching "covariantly"
(c.f. (\ref{Esm})).

We also write the Pant-Sah solutions in the alternative form
\begin{equation}
\label{Econf}
ds^2 = - V^2 dt^2 + \Omega^2 (dr^2 + r^2 d\omega^2),
\end{equation}
where
\begin{equation}
\label{Omega}
\Omega = \frac{4}{2\mu^2 - \Sigma^2}
\left[ \frac{\mu^2}{(1 + fV)^2} - \frac{\tau^6}{(1 - fV)^2}\right]
\end{equation}
and the constant $\Sigma$ is given by
\begin{equation}
\label{Sigma}
 \Sigma^2 = \left\{ \begin{array}{r@{\quad \mbox{for} \quad}l}
 {\displaystyle \mu^2} & \rho_+ = 0, \\
 {\displaystyle \frac{\Sigma_+^2 + \Sigma_-^2}{2} } & \rho_+ > 0
 \end{array} \right.
\end{equation}
in terms of the quantities defined in (\ref{Emu}) and (\ref{Spm}).
The form (\ref{Econf}) will be useful in Section 5 for the proof of spherical symmetry of solutions with the PSEOS.
Equ. (\ref{Omega}) makes sense for $\rho_+ = 0$ as well (i.e. for the BEOS with $\rho_- \ne 0$
as well as for vacuum, $\rho_- = 0$) and reduces to $\Omega = 4/(1 + V)^2$ in either case.
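To make these formulas concrete, here is a minimal numerical sketch in Python. It takes $\tau$ and $f$ as free parameters, fixes the scale by $\rho_- = 1$, and assumes the abbreviation $\tau^6 = \rho_+/\rho_-$ suggested by the reduction of (\ref{Omega}) just noted; the sample values of $\tau$ and $f$ are arbitrary.
\begin{verbatim}
import math

tau, f = 0.6, 0.4                      # sample values; (1-tau)/(1+tau) < f < 1
rho_m = 1.0                            # rho_-, fixing the density scale
rho_p = tau**6 * rho_m                 # rho_+, assuming tau^6 = rho_+/rho_-

def Sigma(sign):                       # Sigma_{+-} of (\ref{Spm})
    return tau*math.sqrt((1 + sign*tau)**2 +
                         ((1 + tau)**2*f**2 - (1 - tau)**2)/(1 - f**2))

mu   = (Sigma(+1) + Sigma(-1))/2       # (\ref{Emu}) for rho_+ > 0
Sig2 = (Sigma(+1)**2 + Sigma(-1)**2)/2 # (\ref{Sigma}) for rho_+ > 0

def V(r):                              # potential, solving (\ref{PSV}) for V
    A = mu*math.sqrt((1 + 64*math.pi/3*rho_p*r**2) /
                     (1 + 64*math.pi/3*mu**4*rho_m*r**2))
    return (1 - A)/(f*(1 + A))

def Omega(v):                          # conformal factor (\ref{Omega})
    return 4/(2*mu**2 - Sig2)*(mu**2/(1 + f*v)**2 - tau**6/(1 - f*v)**2)

print(mu, V(0.0), Omega(V(0.0)))       # here mu < 1, so V(0) > 0 at the centre
\end{verbatim}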
In analogy with the quantity $w$ in the Newtonian case, we now determine $W = \nabla^i V \nabla_i V$,
which is a function of $V$ in spherical symmetry.
In particular, for Schwarzschild we have
$W = \left(16 M^2\right)^{-1} \left(1 - V^2 \right)^4$.
For our model, characterized by ${\cal R}_{\pm} = const.$,
Lemma A.2 shows that the spherically symmetric solutions are spaces of constant curvature.
Using the field equations (\ref{Ein}) and the general formula (\ref{cri}) with
$\wp_{ij} = g_{ij}$, $\Phi^{\pm} = (1 \pm fV)/2$, $\widetilde \wp_{ij} = g_{ij}^{\pm}$, we have
\begin{equation}
0 = V (1 \pm V)^2 {\cal B}^{\pm}_{ij} =
{\cal C}[(1 - f^2 V^2) \nabla_i \nabla_j V + 6 f^2 V \nabla_i V \nabla_j V].
\end{equation}

Contracting this equation with $\nabla^i V \nabla^j V$ and using the field equation
(\ref{Alb}) together with (\ref{rV}) and (\ref{pV}) gives
\begin{equation}
\frac{d}{dV} \left[ \frac{W}{(1 - f^2 V^2)^4} \right] =
\frac{4\pi (1 - fV)}{3 f (1 + fV)}\left[ \frac{\rho_-}{(1 + fV)^2} - \frac{\rho_+ (1 + fV)^2}{(1 - fV)^4} \right],
\end{equation}
which has the solution
\begin{eqnarray}
\label{PSW}
W & = & \frac{\pi \rho_-}{3 f^2} (1 - f^{2}V^{2})^{4}
\left[ \Sigma^2 - \frac{(1 - fV)^2}{(1 + fV)^2} - \tau^6 \frac{(1 + fV)^2}{(1 - fV)^2}\right]
= \\
\label{Wprod}
& = & \frac{\pi \rho_- (2\mu^2 - \Sigma^2)}{12 f^2} (1 - f^2V^2)^4 \Omega \left[(1 + fV)^2 - \frac{(1 - fV)^2}{\mu^2}
\right].
\end{eqnarray}

In equ. (\ref{PSW}), $\Sigma^2$ arises as a constant of integration,
and we first verify that it is consistent with the earlier definitions
(\ref{Sigma}), (\ref{Emu}) and (\ref{Spm}).

For $\rho_+ = 0$ this follows once again from the asymptotics, equ. (\ref{EM}).
For $\rho_+ > 0$ this is done with the "covariant" matching condition (\ref{Egm}),
which becomes
\begin{equation}
\label{Esm}
 \left[ \frac{d W}{d V} - 8 \pi \rho V \right]_{\Rightarrow \partial {\cal F}} =
\left[ \frac{d W}{d V} \right]_{ \partial {\cal V} \Leftarrow} = \frac{8 V_s W_s }{1 - V_s^2}.
\end{equation}
Next, with a little algebra one can check (\ref{Wprod}), which contains
$\mu$ and $\Omega$ defined in (\ref{Emu}) and (\ref{Omega}): expanding the product reduces
(\ref{Wprod}) to (\ref{PSW}) by virtue of the identity $\Sigma^2 = \mu^2 + \tau^6/\mu^2$.
To verify that $\mu$ as defined in (\ref{Emu}) in fact agrees with
the constant appearing in (\ref{PSV}), it is simplest to use (\ref{PSW}) and the
general definition $W=\nabla_i V \nabla^i V$.
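This identity, and with it the equality of (\ref{PSW}) and (\ref{Wprod}), is also easily checked numerically; the following Python sketch samples admissible pairs $(\tau, f)$ (the ranges are illustrative):
\begin{verbatim}
import math, random

# Check Sigma^2 = mu^2 + tau^6/mu^2 for random admissible (tau, f),
# using (\ref{Spm}), (\ref{Emu}) and (\ref{Sigma}) for rho_+ > 0.
for _ in range(1000):
    tau = random.uniform(0.05, 0.95)
    f = random.uniform((1 - tau)/(1 + tau), 0.99)  # f above the "dust" value
    S = [tau*math.sqrt((1 + s*tau)**2 +
         ((1 + tau)**2*f**2 - (1 - tau)**2)/(1 - f**2)) for s in (+1, -1)]
    mu, Sig2 = (S[0] + S[1])/2, (S[0]**2 + S[1]**2)/2
    assert math.isclose(Sig2, mu**2 + tau**6/mu**2, rel_tol=1e-9)
\end{verbatim}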
Finally we note that one can alternatively write the Pant-Sah solutions by using $V$ as a coordinate {\it everywhere}.
(Equ. (\ref{Econf}) still contains $r$.)
From eqs. (3.1) and (3.17) of \cite{BS2} one finds \cite{WS1}
\begin{equation}
ds^2 = - V^{2}dt^{2} + \frac{1}{W} dV^2 +
\frac{9 f^2 W }{4\pi^2 \rho_-^2 (2\mu^2 -\Sigma^2)^2 (1 - f^2 V^2)^6} d\omega^2.
\end{equation}

\subsubsection{The centre}

We now turn to the important issue of regularity at the centre.
The latter is characterized either by $r=0$ or by the minimum of $V$, i.e. $W = 0$.
From (\ref{PSV}) and (\ref{PSg}) it is easy to see that the centre
is regular if $V > 0$; it can be made manifestly regular, i.e.
$g_{ij}dx^idx^j = (1 + O(r^2))(dr^2 + r^2 d\omega^2)$, by a suitable rescaling of $r$.

We also note that from either (\ref{PSV}) or (\ref{Wprod}) it follows that regularity is equivalent to $\mu < 1$.
For $\rho_+ = 0$ this entails the {\it lower bound} $M^2 > 3/(16\pi\rho_-)$ {\it for the
mass}, while there is no upper bound.
For $\rho_+ > 0$ we have collected in Table (\ref{Epar1}) the most
important parameters, which are monotonic functions of one another and
provide unique characterizations of the model.
We use the shorthand ${\cal T}_{\pm} = \sqrt{1 \pm \tau + \tau^2}$.

\begin{table}[h]
\caption{Some parameters for the Pant-Sah solutions}
\label{Epar1}
\vspace*{0.5cm}
\begin{tabular}{|r||c|c|c|c|c|}
{} & $f$ & $V_s$ & $V_c$ & $\rho_c$ & $p_c$ \\ [1.0ex] \hline \hline
{} & {} & {} & {} & {} & {} \\
dust particle & {\large $\frac{1 - \tau}{1 + \tau}$ } & $1$ & $1$ &
{\large $\frac{32 \rho_- \tau^5}{(1 + \tau)^4}$ } & $0$ \\ [1.5ex]
singular centre & {\large $\frac{(1 - \tau){\cal T}_+^2}{(1 + \tau){\cal T}_-^2}$ } &
{\large $\frac{{\cal T}_-^2}{{\cal T}_+^2}$} & $0$ & $ \rho_- + \rho_+$ & $\infty$
\end{tabular}

\end{table}

The allowed parameter values are bounded by the respective ones of the "dust particle"
with $p_c = 0$ and $V_c = V_s = 1$, and of the model with singular centre, for which
$p_c = \infty$ and $V_c = 0$.
Like their Newtonian counterparts these limits are unphysical, but unlike
in the Newtonian case the singular solution now has finite extent.
The dust particle has $\rho_c = \rho_s > 0$, while $\rho_c$ stays finite
even as the singular centre is approached (in contrast to the Newtonian case), due to the
"stiffness" of the PSEOS. The singular model also has the largest redshift,
which can be tested against the Buchdahl limit \cite{HB2}
$V_s \ge 1/3$. The latter is saturated for fluids of constant density
only. Such fluids are approached by the present models for $\tau \rightarrow 1$ (c.f.
Equ. (\ref{rhoc})), and in fact we find that $V_s \rightarrow 1/3$ in this limit.
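These limiting values can be checked from the formulas above: evaluating (\ref{PSV}) at $r=0$ gives $V_c = (1 - \mu)/[f(1 + \mu)]$, and the table entries correspond to $\mu = \tau$ for the dust particle and $\mu \rightarrow 1$ at the singular centre. A small Python sketch (the value of $\tau$ is an arbitrary sample):
\begin{verbatim}
import math

def mu_of(tau, f):                      # mu from (\ref{Emu}) and (\ref{Spm})
    S = [tau*math.sqrt((1 + s*tau)**2 +
         ((1 + tau)**2*f**2 - (1 - tau)**2)/(1 - f**2)) for s in (+1, -1)]
    return (S[0] + S[1])/2

tau = 0.6
Tp2, Tm2 = 1 + tau + tau**2, 1 - tau + tau**2     # T_+^2 and T_-^2
for f in ((1 - tau)/(1 + tau),                    # dust particle
          (1 - tau)*Tp2/((1 + tau)*Tm2)):         # singular centre
    mu = mu_of(tau, f)
    print(f, mu, (1 - mu)/(f*(1 + mu)))           # V_c from (\ref{PSV}) at r=0
# prints mu = tau, V_c = 1 and mu = 1, V_c = 0, respectively (up to rounding)
\end{verbatim}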
Eliminating $f$ gives the mass-radius relation
\begin{equation}
\frac{(1 + \tau)^2}{2\tau} \widehat M^2 - (1 + \widehat R^2)\widehat R \widehat M + \frac{\widehat R^4}{2}
= 0
\end{equation}
which can be solved for the mass
\begin{equation}
\label{EMR}
\widehat M = \frac{\tau \widehat R}{(1 + \tau)^2}
\left[1 + \widehat R^2 \pm \sqrt{\left(\tau - \widehat R^2 \right )\left(\frac{1}{\tau} - \widehat R^2 \right)}\right].
\end{equation}
The extrema of mass and radius are listed in Table (\ref{Epar2}).
(Recall that ${\cal T}_{\pm} = \sqrt{1 \pm \tau + \tau^2}$.)

\begin{table}[h!]
\caption{Surface potential, radius and mass of the Pant-Sah solution}
\label{Epar2}
\vspace*{0.5cm}
\begin{tabular}{|r||c|c|c|}
{} & $V_s^2$ & $\widehat R$ & $\widehat M$ \\ [1.0ex] \hline \hline
{} & {} & {} & {} \\
dust particle & 1 & 0 & 0 \\ [1.5ex]
biggest star & {\large $\frac{1 - \tau}{1 + \tau}$ } &
 $\sqrt{\tau}$ & {\large $\frac{\tau \sqrt{\tau}}{1 + \tau}$ } \\ [1.5ex]
heaviest star &
 {\large $\frac{(1-\tau) \left( 2 {\cal T}_+ + 1 - \tau \right)} {3(1 + \tau)^2} $ } &
 {\large $\frac{{\cal T}_+ + \tau - 1}{\sqrt{3\tau}}$} &
 {\large $\frac{2{\cal T}_+^3 - (2 + \tau)(1 + 2\tau)(1 - \tau)}{3 \sqrt{3\tau}(1 + \tau)^2}$} \\ [1.5ex]
sing. centre & {\large $ \frac{{\cal T}_-^4} {{\cal T}_+^4}$ } &
{\large $\frac{2 \tau \sqrt{\tau(\tau^2 + 1)}}{{\cal T}_-^2 {\cal T}_+^2}$ } &
{\large $\frac{4 \tau^2 (1 + \tau^2) \sqrt{\tau(\tau^2 + 1)}}{{\cal T}_-^2 {\cal T}_+^6} $ }
\end{tabular}
\end{table}
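As a quick check of Table (\ref{Epar2}), the "biggest star" row can be read off directly
from (\ref{EMR}): at $\widehat R^2 = \tau$ the square root vanishes and the two branches meet, so
\begin{equation}
\widehat M = \frac{\tau \sqrt{\tau}}{(1 + \tau)^2} \left( 1 + \tau \right) = \frac{\tau \sqrt{\tau}}{1 + \tau},
\qquad
1 - V_s^2 = \frac{2 \widehat M}{\widehat R} = \frac{2\tau}{1 + \tau},
\end{equation}
in agreement with the entries given there (the $V_s^2$ entry follows since the rescaling
(\ref{resc}) leaves the quotient $M/R$, and hence $1 - V_s^2 = 2M/R$, unchanged).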
\begin{figure}[h!]
\centering
% [gnuplot-generated picture omitted: mass-radius curves, $\widehat M$ plotted against $\widehat R$]
\caption{The mass-radius diagram for the Pant-Sah solution for the values $\tau=0.382$
(thin line), $\tau = 0.6$ (medium) and $\tau=0.786$ (thick line).}
\label{PSMR}
\end{figure}

As in the Newtonian case the surface potential characterizes the solution uniquely,
which implies the loop-like structure of the mass-radius curves, Fig.
(\ref{PSMR}).
We first describe the diagrams for sufficiently small values of $\tau$
such as $\tau = (3 -\sqrt{5})/2 \approx 0.382$ and $\tau = 0.6$.
(These particular values correspond to $V_s^2 = 1/3$ and $V_s^2 = 1/6$ at
the respective mass maxima.)
Starting with the dust particle and increasing $p_c$, $V_s$ decreases and
the mass-radius curve corresponds to the minus sign in front of the root in (\ref{EMR}).
At the maximum radius, which now lies at $\widehat R = \sqrt{\tau}$,
we pass to the plus sign. From then onwards the star shrinks,
reaching its maximum mass at some lower value of $V_s$, and subsequently
losing weight as well.
In contrast to the Newtonian case, the singular model now prevents the
"mass-radius loop" from closing. As already mentioned in the discussion of
Table (\ref{Epar1}), at some finite size of the star the central pressure
diverges, and this is where the curves in Fig. (\ref{PSMR}) terminate.

For $\tau=0.6$, the star with maximal mass still has a regular centre.
However, for larger values of $\tau$, the central pressure diverges
before the maximal mass or even the maximal radius is reached.
This means that the "biggest star" and the "heaviest star" in Table (\ref{Epar2}) only make sense
if the respective values of $V_s$ are larger than the ones given for the
"singular centre". For $\tau = [(\sqrt{5} - 1)/2]^{\frac{1}{2}} \approx 0.786$
the star with maximal radius is precisely the first one with a singular centre,
and the meaningful part of the mass-radius curve is monotonic (c.f. Fig. (\ref{PSMR})).
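The two distinguished values of $\tau$ quoted above can be checked directly from
Table (\ref{Epar2}); we record the computation for illustration.
For $\tau = (3 - \sqrt{5})/2$ one finds $1 + \tau + \tau^2 = 6 - 2\sqrt{5} = (\sqrt{5} - 1)^2$,
hence ${\cal T}_+ = \sqrt{5} - 1$, and the "heaviest star" entry gives
\begin{equation}
V_s^2 = \frac{(1 - \tau)(2{\cal T}_+ + 1 - \tau)}{3(1 + \tau)^2} =
\frac{\frac{\sqrt{5} - 1}{2} \cdot \frac{5(\sqrt{5} - 1)}{2}}{3 \cdot \frac{5(\sqrt{5} - 1)^2}{4}} = \frac{1}{3}.
\end{equation}
Similarly, equating the "biggest star" and "singular centre" radii leads to
$2\tau\sqrt{\tau^2 + 1} = 1 + \tau^2 + \tau^4$, i.e. $(\tau^4 + \tau^2 - 1)^2 = 0$,
which is solved by $\tau^2 = (\sqrt{5} - 1)/2$.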
We finally note that the mass-radius curves for the "softer" PSEOS (such as $\tau = 0.382$
in Fig. (\ref{PSMR})) resemble strikingly the mass-radius curves for
quark stars \cite{WNR}-\cite{DBDRS}
(in particular those with "harder" equations of state such as \cite{RBDD}, \cite{DBDRS}).
Putting $\rho_- = 3~\mbox{GeV/fm}^3$ in the PSEOS with $\tau = 0.382$
one obtains typical values of about 1.5 solar masses for the maximal mass
and about 7 km for the maximal radius.
However, this coincidence should not be overestimated.
As mass and radius are obtained by integration, they "smooth out" differences in the
EOS, which seem quite substantial even at moderate densities.
Moreover, we recall that for the PSEOS the pressure always diverges at finite density,
whereas the EOS of "ultrarelativistic" quarks is nowhere too far from $p = \rho/3$.
This discrepancy prevents us from modelling extreme quark conditions
and has rather drastic consequences for the mass-radius relation.
As follows from Harrison et al. \cite{HTWW} and has been shown rigorously
by Makino \cite{TM}, if the quotient $p(\rho)/\rho$ for some given EOS tends to a constant
sufficiently fast for $\rho \rightarrow \infty$ or $p \rightarrow \infty$,
the mass-radius curve develops the form of a "spiral", with an infinite number
of twists, for high central pressure.
While the EOS for quark stars given in the literature seem safely
within the range of the Makino theorem, the mass-radius diagrams are normally
not drawn up to the point where the spiral sets in. On the other hand, the mass-radius diagrams for the
Pant-Sah solutions are single open loops,
which we have drawn to the end (infinite central pressure) in Fig. (\ref{PSMR}).


\section{Proofs of Spherical Symmetry}

In Newtonian theory Lichtenstein \cite{Lich} has given a proof
of spherical symmetry of static perfect fluids which satisfy
$\rho \ge 0$ and $p \ge 0$.
Under the same condition on the equation of state, Masood-ul-Alam
has recently proven spherical symmetry in the relativistic case
by using a substantial extension of the positive mass theorem \cite{MA1}.
For the relativistic model considered in this paper, spherical symmetry
is a consequence of the uniqueness theorem of Beig and Simon \cite{BS2}.
In Sect. 4.2 we reproduce the core of this proof for the present model,
for which it simplifies.

In Sect. 4.1 we give a version of the Newtonian proof which resembles as closely as
possible the relativistic proof, substituting the positive mass theorem
by the virial theorem. A proof along the same lines has been sketched in
\cite{BS2} for fluids of constant density.

\subsection{The Newtonian Case}

We use the notation of sections 2.1.1 and 2.1.2 with the following
additions and modifications.
We define $w = g^{ij}\nabla_i v\nabla_j v$ as in Sect. 2.1.1.
However, for a given model described by $v_s$,
we now denote by $w_0(v)$ the function of $v$ and $v_s$
defined by the r.h. side of equ. (\ref{wv}).
Note that this function may become negative, which happens if the central potential
$v_c$ of the given model is less than the central potential of the spherically symmetric
model with the same $v_s$. The proof of spherical symmetry proceeds by
showing that $w$ and $w_0$ coincide. We split this demonstration into a series
of Lemmas. \\ \\
{\bf Lemma 4.1.1. (The virial theorem)}\\
For all static asymptotically flat fluids as described in Sect. 2.1.1., we
have, denoting the volume element by $d\eta$,
\begin{equation}
\label{vir}
\int_{\cal F} (6p + \rho v) d\eta = 0.
\end{equation} \\
{\bf Remark.} In kinetic gas theory, the two terms in the integral (\ref{vir})
are four times the kinetic energy and twice the potential energy of the
particles, respectively.\\ \\
{\bf Proof.} Let $\xi_i$ be a dilation in flat space, i.e. $\nabla_{(i}\xi_{j)} =
g_{ij}$. (In Cartesian coordinates, $g_{ij} = \delta_{ij}$ and $\xi_i =
x^i$.) Let $v$, $\rho$ and $p$ define an asymptotically flat
Newtonian model as in Sect. 2.1.1. Then there holds the Pohozaev identity \cite{SP}
\begin{equation}
\label{Poho}
\nabla_i \left [\left(\xi^j \nabla_j v + \frac{1}{2} v \right) \nabla^i v -
 \frac{1}{2} w \xi^i + 4\pi p \xi^i \right] = 2 \pi (6p + \rho v)
\end{equation}
which is verified easily. We integrate this equation over ${\cal M}$ and apply the divergence
theorem, using that the integrand in brackets on the left is continuous at the
surface. Due to the asymptotic conditions (\ref{NM}), (\ref{Nas}) the boundary integral at
infinity vanishes, which gives the required result (\ref{vir}).
\hfill $\Box$ \\ \\
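For completeness we record the verification of the Pohozaev identity (\ref{Poho}),
assuming the standard conventions that (\ref{Poi}) is the Poisson equation
$\Delta v = 4\pi \rho$ and that hydrostatic equilibrium reads $\nabla_i p = - \rho \nabla_i v$.
Using $\nabla_i \xi^i = 3$, the individual terms expand as
\begin{eqnarray}
\nabla_i \left[ (\xi^j \nabla_j v) \nabla^i v \right] & = & w + \frac{1}{2} \xi^i \nabla_i w + 4\pi \rho\, \xi^i \nabla_i v, \nonumber \\
\nabla_i \left[ \frac{1}{2} v \nabla^i v \right] & = & \frac{1}{2} w + 2\pi \rho v, \qquad
\nabla_i \left[ \frac{1}{2} w \xi^i \right] = \frac{1}{2} \xi^i \nabla_i w + \frac{3}{2} w, \nonumber \\
\nabla_i \left[ 4\pi p\, \xi^i \right] & = & 12\pi p - 4\pi \rho\, \xi^i \nabla_i v, \nonumber
\end{eqnarray}
and adding them with the signs of (\ref{Poho}), all terms cancel except $2 \pi (6p + \rho v)$. \\ \\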
{\bf Lemma 4.1.2.}
For solutions with the NEOS (\ref{Neos}), we have
\begin{eqnarray}
\label{PSvir}
0 & = & \int_{\cal F} (6p + \rho v) d\eta = - \rho_+ Y + \bar v M \\
\label{intw}
0 & = & \int_{\cal F} \frac{w - w_0}{(\bar v - v)^4} d\eta
\end{eqnarray}
where $Y = \int_{\cal F} d\eta$ is the volume of the fluid.
In particular, $\rho_+ = 0$ iff the solutions extend to infinity ($0 = v_s = \bar v - \tau$). \\ \\
{\bf Proof.} For the Newtonian model the virial theorem (\ref{vir}) and (\ref{rhop}) imply
\begin{equation}
\label{PSvirp}
0 = \int_{\cal F} (6p + \rho v) d\eta =
\int_{\cal F} [6p - \rho (\bar v - v)] d\eta + \bar v \int_{\cal F} \rho d\eta = - \rho_+ Y + \bar v M
\end{equation}
which proves (\ref{PSvir}). To show (\ref{intw})
we use the divergence theorem, (\ref{Poi}), (\ref{rhop}),
(\ref{Nsurf}) and (\ref{wv}). We obtain
\begin{eqnarray}
& & 3 \int_{\cal F} \frac{w - w_0}{(\bar v - v)^4} d\eta =
\int_{\cal F} \left [ \nabla_i \frac{\nabla^i v}{(\bar v - v)^3}
- \frac{\Delta v}{(\bar v - v)^3} - \frac{3 w_0}{(\bar v - v)^4} \right] d\eta =
\nonumber \\
\label{ww0}
{} & = & \frac{1}{\tau^3} \int_{\partial {\cal F}} \nabla_i v~ d{\cal S}^i
- \frac{4\pi \rho_- \tau^3}{\bar v} \int_{\cal F} d\eta =
\frac{4\pi}{\tau^3} \left( M - \frac{\rho_+ Y}{\bar v} \right).
\end{eqnarray}
Equs. (\ref{PSvir}) and (\ref{ww0}) now give the required result.
\hfill $\Box$ \\ \\
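{\bf Remark.} For the reader's convenience we make the last step in (\ref{ww0}) explicit:
it uses the divergence theorem in the form
$\int_{\partial {\cal F}} \nabla_i v~ d{\cal S}^i = 4\pi M$, together with the relation
$\rho_+ = \rho_- \tau^6$ between the parameters of the NEOS (implicit in the two expressions above), whence
\begin{equation}
\frac{4\pi \rho_- \tau^3}{\bar v}\, Y = \frac{4\pi}{\tau^3} \cdot \frac{\rho_- \tau^6 Y}{\bar v}
= \frac{4\pi}{\tau^3} \cdot \frac{\rho_+ Y}{\bar v}.
\end{equation} \\ \\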
{\bf Lemma 4.1.3.} For the NEOS there holds, inside ${\cal F}$,
\begin{equation}
\label{NLapf}
 2 \Delta^- ~\frac{ w - w_0}{(\bar v - v)^4} = (\bar v - v)^2 {\cal B}_{ij}^- {\cal B}^{ij}_- \ge 0.
\end{equation}
\\ {\bf Proof.}
We first note that formula (\ref{cri}) with $\wp_{ij} = g_{ij}^-$, $\Phi = (\bar v - v)^{-1}$,
and $\widetilde \wp_{ij} = \Phi^{4} \wp_{ij} = g_{ij}$ implies
$ {\cal B}_{ij}^- = - (\bar v - v)^{-2} {\cal C}_- [ \nabla_i^- \nabla_j^- (\bar v - v)^2]$.
It follows that
\begin{eqnarray}
 2 \Delta^- ~ \frac{w - w_0}{(\bar v - v)^4} & = &
- \nabla^i_- [ {\cal B}_{ij}^- \nabla^j_- (\bar v - v)^2] =
 (\bar v - v)^2 {\cal B}_{ij}^- {\cal B}^{ij}_- - \nonumber \\
 & - & \frac{1}{6} [\nabla^i_- (\bar v - v)^2 ] \nabla_i^- {\cal R}^- =
(\bar v - v)^2 {\cal B}_{ij}^- {\cal B}^{ij}_-
\end{eqnarray}
where we have used the Bianchi identity $\nabla^i_- {\cal B}_{ij}^- = \nabla_j^- {\cal R}^-/6$
and the fact that ${\cal R}^- = const.$ for the NEOS.
\hfill $\Box$ \\ \\
{\bf Lemma 4.1.4.} In ${\cal V}$ we have
\begin{equation}
\label{NLapv}
\Delta^- ~ \frac { w - w_0}{|v|^3 (\bar v - v)} = \frac{|v|^7}{(\bar v - v)^5} \widehat{\cal B}_{ij}
\widehat {\cal B}^{ij} \ge 0
\end{equation}
where $\widehat{\cal B}_{ij}$ is the trace-free part of the Ricci tensor
w.r. to the metric $\widehat g_{ij} = v^4 g_{ij}$. \\ \\
{\bf Proof.}
In the vacuum region (\ref{NLapf}) still holds (we set $\bar v = 0$), since the metric
$\widehat g_{ij}$ has curvature $\widehat {\cal R} = 0$.
It follows that
\begin{eqnarray}
 \Delta^- ~ \frac { w - w_0}{|v|^3 (\bar v - v)} & = &
\frac{v^6}{(\bar v - v)^6}
\widehat \nabla_i \left [ \frac{(\bar v - v)^2}{v^2}
\widehat \nabla^i ~ \left( \frac{|v|}{\bar v - v}~ \frac{w - w_0}{v^4} \right) \right]
= \nonumber \\
& = & \frac{|v|^5}{(\bar v - v)^5} \widehat \Delta~ \frac{w - w_0}{v^4} =
 \frac{|v|^7}{(\bar v - v)^5} \widehat {\cal B}_{ij} \widehat {\cal B}^{ij}
\end{eqnarray}
where $\widehat \nabla^i$ and $\widehat \Delta$ refer to $\widehat g_{ij}$.
\hfill $\Box$ \\ \\
{\bf Lemma 4.1.5.} On ${\cal M} = {\cal V} \cup {\cal F}$, we obtain $w \le w_0$.
\\ \\
{\bf Proof.}
The weak maximum principle applied to (\ref{NLapf}) on ${\cal F}$
implies that $(w - w_0)/(\bar v - v)^4$ takes on its maximum at some point
$q \in \partial {\cal F}$, i.e.
\begin{equation}
\label{Nin2}
\sup_{\cal F} \frac{w - w_0}{(\bar v - v)^4} \le \max_{\partial {\cal F}} \frac{w - w_0}{(\bar v - v)^4}
= \left. \frac{w - w_0}{(\bar v - v)^4} \right|_q.
\end{equation}
On the other hand, the weak maximum principle applied to (\ref{NLapv}) shows that
$(w - w_0)/|v|^3 (\bar v - v)$ either takes on its (absolute) maximum at infinity (where it vanishes)
or has a positive maximum on $\partial {\cal F}$.
In the latter case this maximum is located at $q$,
since $v$ is constant on $\partial {\cal F}$. This leads to a contradiction
as follows. Taking $n^i$ to be the normal to $\partial {\cal F}$ directed towards
infinity, we have
\begin{equation}
\label{Nbd}
\left. n^{i} \nabla_i~ \frac{w - w_0}{(\bar v - v)^4} \right|_q =
\left. \frac{|v_s|^3}{\tau^3} n^{i} \nabla_i ~ \frac{w - w_0}{(\bar v - v) |v|^3} \right|_q
+ 3 \frac{\bar v (w - w_0)}{\tau^3 |v_s|^3} \left. n^i \nabla_i~ |v| \right|_q.
\end{equation}
By the boundary point lemma \cite{GT} applied to (\ref{NLapv}),
the first term on the right of (\ref{Nbd}) is negative,
and the same applies to the second term by virtue of $\Delta v = 0$.
It follows that
\begin{equation}
\label{Nin1}
\left. n^{i} \nabla_i ~\frac{w - w_0}{(\bar v - v)^4} \right|_q < 0.
\end{equation}
But as $\bar v - v$ is $C^1$ on ${\cal M}$, this contradicts
(\ref{Nin2}). Hence we are left with $(w - w_0)/|v|^3 (\bar v - v) \le 0$ in ${\cal V}$ which,
together with (\ref{Nin2}), implies $(w - w_0)/(\bar v - v)^4 \le 0$
everywhere on ${\cal M}$.
This proves the Lemma. \hfill $\Box$ \\ \\
{\bf Lemma 4.1.6.} On ${\cal M} = {\cal V} \cup {\cal F}$, we have $w = w_0$.
Furthermore $w = w_0(v)$ is positive for $v > v_{min}$, smooth in $[v_{min}, v_s)$
and such that at $v_{min}$ there holds $w_0 = 0$ and $ dw/dv \ne 8\pi \rho$.
\\ \\
{\bf Proof.} The first assertion follows from Lemmas 4.1.2 and 4.1.5,
and the rest is easily checked. \hfill $\Box$ \\ \\
{\bf Proposition 4.1.7.} Asymptotically flat solutions with the NEOS are spherically symmetric
and uniquely defined by $w_0$. \\ \\
{\bf Proof.} A trivial modification of a relativistic result, Lemma 4 of \cite{BS2},
has the conclusion of Lemma 4.1.6 as hypothesis. The conclusion of this
modified Lemma is the proposition. \hfill $\Box$


\subsection{The Relativistic Case}

We use the notation of the previous sections with modifications analogous to the Newtonian case.
We recall that $W = g^{ij}\nabla_i V \nabla_j V$, and for a given model described by $V_s$
we now denote by $W_0(V)$ the function of $V$ and $V_s$ defined by the r.h. side of equ. (\ref{PSW}).
Again this function may become negative, which happens if the central potential $V_c$ of the given model
is less than the central potential of the spherically symmetric model with the same $V_s$.
We first prove that $W$ and $W_0$ coincide \cite{Lind},
which is done in a series of Lemmas. From Lemma 4.2.3 onwards, they are direct counterparts
of the Newtonian ones in the previous section. \\ \\
{\bf Lemma 4.2.1. (The vanishing mass theorem)}.
We recall that an asymptotically flat Riemannian manifold with non-negative scalar curvature
and vanishing mass is flat \cite{SY}. \\ \\
{\bf Lemma 4.2.2.} For fluids with the PSEOS, we have
\begin{equation}
\label{Rstar}
{\cal R}_{\star} = \frac{1536 \tau^6 \mu^2 f^2 (W_0 - W)}{(2\mu^2 - \Sigma^2)(1 - f^2V^2)^4}
\end{equation}
where ${\cal R}_{\star}$ is the scalar curvature w.r. to the metric
 $g_{ij}^{\star} = \Omega^{-2} g_{ij}$, with $\Omega(V)$ defined in (\ref{Omega}).
For $\rho_+ = 0$, i.e. for the BEOS and for vacuum, we obtain ${\cal R}_{\star} \equiv 0$.
\\ \\
{\bf Proof.}
to $g_{ij}^{\\star}$ we obtain \n\\begin{eqnarray}\n{\\cal R}_{\\star} & = & 2 \\left[3 \\left(\\frac{d\\Omega}{dV} \\right)^2 - 2 \\Omega \\frac{d^2\n\\Omega}{dV^2} \\right] \n(W_0 - W) = \\nonumber\\\\\n& = & 16\\pi\\Omega^2 \\left[ \\rho + (\\rho + 3p) \\frac{V}{\\Omega} \\frac{d\\Omega}{dV}\\right]\n\\left( 1 - \\frac{W}{W_0} \\right).\n\\end{eqnarray}\nHere the first equation holds for conformal rescalings \nof the form $g_{ij}^{\\star} = \\Omega^{-2} g_{ij}$ (for any $g_{ij}$ and\n$\\Omega(V)$ if ${\\cal R}$ is a function of $V$ only) \nwhile the second one uses the general formula (\\ref{csc}), property (\\ref{Econf}), \nand the field equations (\\ref{Alb}) and (\\ref{Ein}). \nNow (\\ref{Rstar}) follows by using the explicit forms (\\ref{rV}), (\\ref{pV}) and (\\ref{Omega}) \nof $\\rho$, $p$ and $\\Omega$. \\hfill $\\Box$ \\\\ \\\\\n{\\bf Lemma 4.2.3.} \nFor solutions with PSEOS, we have\n\\begin{equation}\n\\label{ELapf}\n \\Delta^{\\prime} ~ \\frac{W - W_0}{(1 - f^2V^2)^4} = \n \\frac{V^4 (1 \\pm fV)^2}{(1 \\mp fV)^{10}} {\\cal B}_{ij}^{\\pm} {\\cal B}^{ij}_{\\pm} \\ge 0\n\\end{equation}\nwhere $\\Delta^{\\prime}$ refers to the metric $g_{ij}^{\\prime} = (1 - f^2V^2)^4 g_{ij}\/16 V^2$.\n\\\\ \\\\\n{\\bf Proof.}\nWe first note that formula (\\ref{cri}) with $\\wp_{ij} = g_{ij}^{\\pm}$, $\\Phi^{\\pm} = 2\/(1 \\pm fV)$,\nand $\\widetilde \\wp_{ij} = \\Phi^{4}_{\\pm} \\wp_{ij} = g_{ij}$ implies, \ntogether with the field equation (\\ref{Ein}),\n\\begin{equation}\n\\label{EB}\n{\\cal C}^{\\pm}\\left[ \\nabla_i^{\\pm} X_j^{\\pm} \\right ] = \\alpha^{\\pm} {\\cal B}_{ij}^{\\pm}\n\\end{equation}\nwhere we have defined\n\\begin{equation}\nX_i^{\\pm} = \\frac{1 \\pm fV}{(1 \\mp fV)^3} \\nabla_{i}V, \n\\qquad \\alpha^{\\pm} = \\frac{V (1 \\pm fV)^{2}}{(1 \\mp fV)^4}.\n\\end{equation}\nThen we find from (\\ref{EB}) that\n\\begin{eqnarray}\n \\frac{(1 \\mp fV)^6}{V^3} \\Delta^{\\prime}~ \\frac{W - W_0}{(1 - f^2V^2)^4} \n& = &\n \\nabla^i_{\\pm} \\left[{\\cal B}_{ij}^{\\pm} X^j_{\\pm} \\right] =\n \\alpha^{\\pm} {\\cal B}_{ij}^{\\pm} {\\cal B}^{ij}_{\\pm} + \\nonumber \\\\\n& + & \\frac{1}{6} X^i_{\\pm} \\nabla_i^{\\pm} {\\cal R}^{\\pm} = \n\\alpha^{\\pm} {\\cal B}_{ij}^{\\pm} {\\cal B}^{ij}_{\\pm} \n\\end{eqnarray}\nwhere we have used the Bianchi identity $\\nabla^i_{\\pm} {\\cal B}_{ij}^{\\pm} = \\nabla_j^{\\pm}\n{\\cal R}^{\\pm}\/6$ and the fact that ${\\cal R}^{\\pm} = const. $ for our model.\n\\hfill $\\Box$ \\\\ \n\nNote that the argument of the Laplacian on the l.h. side of (\\ref{ELapf}) agrees with ${\\cal R}_{\\star}$ \nas given in (\\ref{Rstar}) modulo a constant factor. In other words, (\\ref{Rstar}) and (\\ref{ELapf}) \nshow that $\\Delta^{\\prime} {\\cal R}_{\\star} \\ge 0$. \\\\ \\\\\n{\\bf Lemma 4.2.4.} In ${\\cal V}$, we have\n\\begin{equation}\n\\label{ELapv}\n\\Delta^{\\prime}~\\frac{W - W_0}{(1 - V^2)^3 (1 - f^2 V^2)} = \n\\frac{V^3 (1 \\pm V)^7}{(1- f^2 V^2)^5 (1 \\mp V)^5} \n\\widehat {\\cal B}_{ij}^{\\pm} \\widehat {\\cal B}^{ij}_{\\pm} \\ge 0\n\\end{equation} \nwhere $\\widehat {\\cal B}_{ij}^{\\pm}$ are the trace free parts of the Ricci\ntensors w.r. to the metrics $\\widehat g_{ij}^{\\pm} = (1 \\pm V)^4 g_{ij}\/2$.\\\\ \\\\\n{\\bf Proof.} In vacuum (\\ref{ELapf}) still holds (we set $f=1$), since the\nmetrics $\\widehat g_{ij}^{\\pm}$ have vanishing curvatures $\\widehat {\\cal R}^{\\pm}$. 
\nIt follows that\n\\begin{eqnarray}\n & & \\Delta^{\\prime}~\\frac{W - W_0}{(1 - V^2)^3 (1 - f^2 V^2)} = \\nonumber \\\\\n& = & \\frac{(1 - V^2)^6}{(1 - f^2V^2)^6} \\widehat \\nabla_i^{\\pm} \\left[\\frac{(1 - f^2 V^2)^2}{(1 - V^2)^2}\n\\widehat \\nabla^i~ \\left( \\frac{1 - V^2}{1 - f^2 V^2} ~ \\frac{W - W_0}{(1 - V^2)^4} \\right) \\right] = \\nonumber \\\\ \n& = & \\frac{(1 - V^2)^5}{(1 - f^2 V^2)^5} \\widehat \\Delta^{\\pm}~ \\frac{W - W_0}{(1 - V^2)^4} =\n\\frac{V^3 (1 \\pm V)^7}{(1- f^2 V^2)^5 (1 \\mp V)^5} \\widehat {\\cal B}_{ij}^{\\pm} \\widehat {\\cal B}^{ij}_{\\pm}\n\\end{eqnarray} \nwhere $\\widehat \\nabla^{\\pm}$ and $\\widehat \\Delta^{\\pm}$ refer to $\\widehat g_{ij}^{\\pm}$, respectively.\n\\hfill $\\Box$ \\\\ \\\\\n{\\bf Lemma 4.2.5.} On ${\\cal M} = {\\cal V} \\cup \\partial {\\cal F} \\cup {\\cal F}$, we have $W \\le W_0$. \\\\ \\\\ \n{\\bf Proof.} \nThe weak maximum principle applied to (\\ref{ELapf}) on ${\\cal F}$ implies that $(W - W_0)\/(1 - f^2 V^2)^4$ \ntakes on its maximum at some point $q \\in \\partial {\\cal F}$, i.e.\n\\begin{equation}\n\\label{Ein2}\n\\sup_{\\cal F} \\frac{W - W_0}{(1 - f^2 V^2)^4} \\le \\max_{\\partial {\\cal F}} \\frac{W - W_0}{(1 - f^2 V^2)^4}\n= \\left. \\frac{W - W_0}{(1 - f^2 V^2)^4} \\right|_q\n\\end{equation}\nOn the other hand, the weak maximum principle applied to (\\ref{ELapv}) shows that either\n$(W - W_0)\/[(1 - V^2)^3 (1 - f^2 V^2)]$ takes on its (absolute) maximum at infinity (where it vanishes)\nor that it has a positive maximum on $\\partial {\\cal F}$.\nIn the latter case this maximum is located at $q$,\nas $V$ is constant on $\\partial {\\cal F}$. This leads to a contradiction as\nfollows. Taking $n^i$ to be the normal to $\\partial {\\cal F}$ directed\ntowards infinity, we have \n\\begin{eqnarray}\n\\label{Ebd}\n\\left. n^{i} \\nabla_i~ \\frac{W - W_0}{(1 - f^2 V^2)^4} \\right|_q & = &\n\\left. \\frac{(1 + \\tau)^2 (1 - V_s^2)^3}{4\\tau} n^{i} \\nabla_i ~ \\frac{W - W_0}{(1 - f^2 V^2)(1 - V^2)^3} \\right|_q\n- \\nonumber \\\\\n& -& \\frac{3 (1 + \\tau)^6 (1 - f^2) V_s (W - W_0)}{32 \\tau^3 (1 - V_s^2)^3} \\left. n^i \\nabla_i~ V \\right|_q\n\\end{eqnarray} \nBy the boundary point lemma applied to (\\ref{ELapv}), the first term on the right of (\\ref{Ebd}) is negative,\nwhile the second term (without the minus in front) is positive by virtue of $\\Delta V = 0$.\nIt follows that\n\\begin{equation}\n\\label{Ein1}\n\\left. n^{i} \\nabla_i ~\\frac{W - W_0}{(1 - f^2 V^2)^4} \\right|_q < 0\n\\end{equation}\nBut as $1 - f^2 V^2$ is $C^1$ on ${\\cal M}$, this contradicts\n(\\ref{Ein2}). Hence we are left with $(W - W_0)\/[(1 - V^2)^3 (1 - f^2 V^2)] \\le 0$ in ${\\cal V}$ which, \ntogether with (\\ref{Ein2}), implies $(W - W_0)\/(1 - f^2 V^2)^4 \\le 0$\neverywhere on ${\\cal M}$.\nThis proves the Lemma. \\hfill $\\Box$ \\\\ \\\\\n{\\bf Lemma 4.2.6.} On ${\\cal M} = {\\cal V} \\cup {\\cal F}$, we have $W = W_0$.\nFurthermore $W = W_0(V)$ is positive\nfor $V > V_{min}$, smooth in $[V_{min}, V_s)$ and such that at $V_{min}$\nthere holds $W_0 = 0$ and $ dW\/dV \\ne 8\\pi (\\rho + 3p) V$.\n\\\\ \\\\\n{\\bf Proof.} The first assertion follows from Lemmas 4.2.2 and 4.2.5,\nand the rest is easily checked. \\hfill $\\Box$ \\\\ \\\\\n{\\bf Proposition 4.2.7.} Asymptotically flat solutions with the PSEOS are spherically symmetric\nand uniquely defined by $W_0$. 
\\\\ \\\\\n{\\bf Proof.} Lemma 4 of \\cite{BS2} (which is a reformulation of \nresults of \\cite{AA,HK}) has the conclusion of Lemma 4.2.6 as hypothesis.\nThe conclusion of the former Lemma is the proposition.\n \\\\ \\\\\n{\\large\\bf Acknowledgements.}\nI am grateful to J. Mark Heinzle, Kayll Lake, Marc Mars and to the referees for useful comments\non the manuscript. \n\n\n\\section{Appendix}\n\nWe recall here the basic formulas for the behaviour of the curvature\nunder conformal rescalings of the metric and the standard form of metrics\nof constant curvature. We give the proofs of the latter two Lemmas. \\\\ \\\\ \n{\\bf Lemma A.1.} \nLet $\\wp_{ij}$, $\\widetilde \\wp_{ij} = \\Phi^4 \\wp_{ij}$ be conformally related metrics on a\n3-dimensional manifold ${\\cal M}$.\nThen the scalar curvatures $\\Re$, $\\widetilde \\Re$ and\nthe trace-free parts ${\\cal B}_{ij}= {\\cal C}[\\Re_{ij}]$ and \n$\\widetilde {\\cal B}_{ij} = \\widetilde {\\cal C}[\\widetilde \\Re_{ij}]$ \nof the Ricci tensors $\\Re_{ij}$ and $\\widetilde \\Re_{ij}$ behave as\n\\begin{eqnarray}\n\\label{csc}\n - \\frac{1}{8} \\widetilde \\Re \\Phi^5 & = & \\left(\\Delta - \\frac{1}{8} \\Re \\right) \\Phi\n\\\\\n\\label{cri}\n\\widetilde {\\cal B}_{ij} & = & {\\cal B}_{ij} - 2\\Phi^{-1} {\\cal C}[\\nabla_i\\nabla_j \\Phi] + 6\n\\Phi^{-2} {\\cal C}[\\nabla_i \\Phi \\nabla_j \\Phi]\n\\end{eqnarray}\nwhere $\\nabla_i$ is the gradient and $\\Delta = \\wp^{ij}\\nabla_i \\nabla_j $ \nthe Laplacian of $\\wp_{ij}$.\\\\ \\\\\n{\\bf Lemma A.2.} Any smooth, spherically symmetric metric $({\\cal M}, \\wp_{ij})$\nwith constant scalar curvature $\\Re$ is a space of constant curvature\n(i.e. ${\\cal B}_{ij} = 0$) and can be written as\n\\begin{equation}\n\\label{cc}\nds^2 = \\frac{1}{\\left(1 + \\frac{1}{24} \\Re r^2 \\right)^2}\n(dr^2 + r^2 d\\omega^2)\n\\end{equation}\n{\\bf Proof.} By solving ODEs we can write $\\wp_{ij}$ in isotropic coordinates\nas\n\\begin{equation}\n\\label{iso}\nds^2 = \\Phi(r)^4 (dr^2 + r^2 d\\omega^2)\n\\end{equation} \nTo determine $\\Phi$, we solve (\\ref{csc}) with $\\wp_{ij}$ flat and\n$\\widetilde \\Re = const.$. Near the center, the solution determined uniquely\n by the initial values $\\Phi_c = 1$ and $\\partial \\Phi\/\\partial x^i|_c = 0$ reads \n\\begin{equation}\n\\label{Phi0}\n\\Phi(r) = \\frac{1}{\\sqrt{1 + \\frac{1}{24} \\Re r^2}}\n\\end{equation}\nThe solution can be extended analytically as far as required, which gives (\\ref{cc}). \\hfill $\\Box$ \\\\ \\\\\n
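For illustration we note two special cases of (\\ref{cc}): for $\\Re = 0$ it reduces to the flat metric, while for $\\Re = 6\/a^2$ with a constant $a > 0$ it becomes\n\\begin{equation*}\nds^2 = \\frac{1}{\\left(1 + \\frac{r^2}{4 a^2} \\right)^2}(dr^2 + r^2 d\\omega^2),\n\\end{equation*}\nthe round three-sphere of radius $a$ in stereographic coordinates. \\\\ \\\\\n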
{\\bf Lemma A.3.} Let $({\\cal M}_+, \\wp_{ij}^+)$ and $({\\cal M}_-, \\wp^-_{ij})$ be 3-dimensional, \nspherically symmetric manifolds with smooth metrics and with constant scalar curvatures $\\Re_{+}, \\Re_-$. \nThen the smooth, spherically symmetric solutions $\\Phi_+(r)$ of equation (\\ref{csc}) \non $({\\cal M}_+, \\wp^+_{ij})$ are given by\n\\begin{equation}\n\\label{Phi}\n\\Phi_+(r) = \\mu \\sqrt{\\frac{1 + \\frac{1}{24}\\Re_+ r^2}{1 + \\frac{1}{24}\\mu^4 \\Re_- r^2}}\n\\end{equation}\nwhere $\\mu$ is a constant. \\\\ \\\\\n{\\bf Proof.}\nUsing Lemma A.2, the required solution is determined by a conformal rescaling \n$\\wp_{ij}^- = \\Phi_+^4 \\wp_{ij}^+$ between spaces of constant curvature. \nWriting the metrics in the forms (\\ref{cc}) gives \n\\begin{equation}\n\\label{cc+-}\n \\frac{\\Phi_+(r_+)^4} {\\left(1 + \\frac{1}{24}\\Re_+ r_+^2 \\right)^2}(dr_+^2 + r_+^2\nd\\omega^2) = \n\\frac{1} {\\left(1 + \\frac{1}{24}\\Re_- r_-^2 \\right)^2}(dr_-^2 + r_-^2 d\\omega^2) \n\\end{equation}\nHence $\\Psi_+ (r_+)$ defined by\n\\begin{equation}\n\\Psi_+(r_+) = \\Phi_{+}(r_+) \\sqrt{\\frac{1 + \\frac{1}{24} \\Re_- r_-^2}\n{1 + \\frac{1}{24} \\Re_+ r_+^2}}\n\\end{equation}\nwith a yet unknown relation $r_- = r_-(r_+)$, \ndetermines a conformal diffeomorphism of flat space, i.e.\n\\begin{equation}\n\\label{cd}\n \\Psi_+^4 (dr_+^2 + r_+^2 d\\omega^2) = (dr_-^2 + r_-^2 d\\omega^2) \n\\end{equation}\n\nBy (\\ref{csc}) all such diffeomorphisms are solutions of $\\Delta \\Psi_+ = 0$ on flat space, \nand hence given by $\\Psi_+ = \\mu + \\nu\/r_+$ for some constants $\\mu$ and $\\nu$. \nA consistency check with (\\ref{cd}) now shows that either $\\nu = 0$ and $r_-\n= \\mu^2 r_+$ or $\\mu = 0$ and $r_- = \\nu^2\/r_+$. Setting $r_+ = r$, the\nfirst case leads directly to (\\ref{Phi}) while in the second case we have to\nput $\\mu^2 = 24\/(\\nu^2 \\Re_-)$. The solution again extends analytically as far\nas needed. \\hfill $\\Box$\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\makeatletter\n\\renewcommand\\section{\\@startsection {section}{1}{\\z@}\n{-30pt \\@plus -1ex \\@minus -.2ex} {2.3ex \\@plus.2ex}\n{\\normalfont\\normalsize\\bfseries}}\n\n\\renewcommand\\subsection{\\@startsection{subsection}{2}{\\z@}\n{-3.25ex\\@plus -1ex \\@minus -.2ex} {1.5ex \\@plus .2ex}\n{\\normalfont\\normalsize\\bfseries}}\n\n\\renewcommand{\\@seccntformat}[1]{\\csname the#1\\endcsname. }\n\n\\makeatother\n\\title{\\bf{Skew cyclic and skew $\\boldsymbol{(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})}$-constacyclic codes over $\\boldsymbol{F_{q}+uF_{q}+vF_{q}+uvF_{q}}$}}\n\\author{ \\bf Habibul Islam and Om Prakash\\\\\\\\\nDepartment of Mathematics \\\\\nIndian Institute of Technology Patna\\\\ Patna- 801 106, India \\\\\nE-mail: habibul.pma17@iitp.ac.in and om@iitp.ac.in}\n\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\n\nIn this note, we study skew cyclic and skew constacyclic codes over the ring $\\mathcal{R}=F_{q}+uF_{q}+vF_{q}+uvF_{q}$ where $q=p^{m},$ $p$ is an odd prime, $u^{2}=u,~v^{2}=v,~uv=vu$. We show that the Gray images of skew cyclic and skew $\\alpha$-constacyclic codes of length $n$ are skew quasi-cyclic codes of length $4n$ over $F_{q}$ of index 4. Also, it is shown that skew $\\alpha$-constacyclic codes are either equivalent to $\\alpha$-constacyclic codes or $\\alpha$-quasi-twisted codes over $\\mathcal{R}$. Further, structural properties, in particular generating polynomials and idempotent generators, for skew cyclic and skew constacyclic codes are determined by the decomposition method.\n\\end{abstract}\n\n\\noindent {\\it Key Words} : Skew polynomial ring; skew cyclic code; skew quasi-cyclic code; skew constacyclic code; Gray map; generating polynomial; idempotent generator.\\\\\n\n\\noindent {\\it 2010 MSC} : 94B15, 94B05, 94B60.\n \\section{Introduction}\nLinear codes have been studied in coding theory for the last six decades. Initially, linear codes were considered and studied over the binary field. In 1994, a remarkable work was presented by Hammons et al. \\cite{hamm94}, which showed that some good binary non-linear codes can be viewed as linear codes over $\\mathbb{Z}_{4}$ through the Gray map. 
Consequently, the study of linear codes over finite rings has received more attention than the study of linear codes over the binary field. One of the most important classes of linear codes is the class of cyclic codes, which are easy to implement and play a crucial role in the development of algebraic coding theory. Later, cyclic codes over finite rings attracted the attention of many researchers, and many new techniques have been discovered to produce cyclic codes over finite commutative rings with better parameters and properties [\\cite{tabul03}, \\cite{dinh10}, \\cite{xiying17}, \\cite{joel}]. But all of these works are restricted to finite commutative rings.\\\\\n\nIn 2007, Boucher et al. \\cite{D07} introduced skew cyclic codes, which generalize the class of cyclic codes to a non-commutative ring, namely, the skew polynomial ring $F[x;\\theta]$, where $F$ is a field and $\\theta$ is an automorphism of $F$. Boucher and Ulmer \\cite{D09} constructed some skew cyclic codes with Hamming distance larger than that of the previously known linear codes with the same parameters. Later on, Abualrub et al. \\cite{abul12} studied skew cyclic codes over $F_{2}+vF_{2}$, where $v^{2}=v$. In these works, they constructed skew cyclic codes by taking an automorphism whose order must divide the length of the code. By relaxing the said restriction, Siap et al. \\cite{siap11} investigated structural properties of skew cyclic codes of arbitrary length and found that a skew cyclic code of arbitrary length is equivalent to either a cyclic code or a quasi-cyclic code over $F_{p}+vF_{p}$, where $v^{2}=v$. In 2014, Gursoy et\nal. \\cite{Gursoy14} constructed skew cyclic codes along with their generators and also found the idempotent generators of skew cyclic codes over $F_{p}+vF_{p}$, where $v^{2}=v$. Jitman\net al. \\cite{jitman} introduced skew constacyclic codes over skew polynomial rings whose coefficients come from finite chain rings, particularly for the ring $F_{p^{m}} + uF_{p^{m}}$ where $u^{2} = 0.$ Very recently, Gao et al. \\cite{gao} investigated some structural properties of skew constacyclic codes over the ring $F_{q}+vF_{q}; v^{2}=v$. \\\\\n\nInspired by these studies, we consider $\\mathcal{R}=F_{q}+uF_{q}+vF_{q}+uvF_{q}$, where $q=p^{m},~p$ is an odd prime and $u^{2}=u,v^{2}=v,uv=vu $, and study skew cyclic codes and skew constacyclic codes over $\\mathcal{R}$.\nNow, for the skew polynomial ring, we define an automorphism\n\\begin{align*}\n{\\theta}_{t}: \\mathcal{R}\\rightarrow \\mathcal{R}\n\\end{align*}\nby\n\\begin{align*}\n{\\theta}_{t}(a+ub+vc+uvd)=a^{p^{t}}+ub^{p^{t}}+vc^{p^{t}}+uvd^{p^{t}},\n\\end{align*}\nwhere $a,b,c,d\\in F_{q}$, and use the same throughout this paper.\nIn this case, the order of the automorphism is $\\mid\\langle{\\theta}_{t}\\rangle\\mid =\\frac{m}{t}=k$ (say). Also, the invariant subring under the automorphism ${\\theta}_{t}$ is $F_{p^{t}}+uF_{p^{t}}+vF_{p^{t}}+uvF_{p^{t}}$. For this automorphism ${\\theta}_{t}$ on $\\mathcal{R}$, the set $\\mathcal{R}[x;{\\theta}_{t}]=\\big \\{a_{0}+a_{1}x+a_{2}x^{2}+\\dots +a_{n}x^{n}:a_{i}\\in \\mathcal{R}, \\forall~ i=0,1,2,...,n \\big \\}$ forms a ring under the usual addition of polynomials and the multiplication, denoted by $\\ast$, defined by $(ax^{i})\\ast(bx^{j})=a{{\\theta}_{t}}^{i}(b)x^{i+j}$. This is a non-commutative ring, unless ${\\theta}_{t}$ is the identity, known as a skew polynomial ring.
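 For example, already for coefficients in the subfield $F_{9}\\subset \\mathcal{R}$ (taking $p=3$, $m=2$, $t=1$, so that ${\\theta}_{1}(a)=a^{3}$), with $F_{9}=F_{3}(\\alpha)$ and ${\\alpha}^{2}+1=0$ we have ${\\theta}_{1}(\\alpha)={\\alpha}^{3}=-\\alpha=2\\alpha$, and hence\n\\begin{align*}\nx\\ast(\\alpha x)={\\theta}_{1}(\\alpha)x^{2}=2\\alpha x^{2}, \\qquad (\\alpha x)\\ast x=\\alpha x^{2},\n\\end{align*}\nwhich shows explicitly that the multiplication $\\ast$ is non-commutative. \\\\\n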
\nPresentation of the manuscript is as follows: In section \\ref{sec2}, we discuss some preliminaries which are used later. In section \\ref{sec3}, we define the Gray map and describe the Gray images of skew cyclic codes over $\\mathcal{R}$. Section \\ref{sec4} contains some important results on skew cyclic codes over $\\mathcal{R}$, while section \\ref{sec5} gives the idempotent generators of skew cyclic codes along with their duals and includes two examples to elaborate the derived results. Section \\ref{sec6} presents a relation among skew constacyclic, constacyclic and quasi-twisted codes. Section \\ref{sec7} discusses the characterization of skew $\\alpha$-constacyclic codes along with the quasi-twisted shift operator. Section \\ref{sec8} gives the structure of skew constacyclic codes by the decomposition method, and section \\ref{sec9} concludes the paper.\n\n\n\\section{Preliminaries}\\label{sec2}\n\nThroughout the paper $\\mathcal{R}$ represents $F_{q}+uF_{q}+vF_{q}+uvF_{q}$, where $q=p^{m}, ~p$ is an odd prime and $u^{2}=u,v^{2}=v,uv=vu$. It can be checked that $\\mathcal{R}$ is a finite commutative ring containing $q^{4}$ elements and of characteristic $p$.\\\\ We recall that a linear code $\\mathcal{C}$ of length $n$ over $\\mathcal{R}$ is an $\\mathcal{R}$-submodule of $\\mathcal{R}^{n}$, and members of $\\mathcal{C}$ are called codewords. A linear code $\\mathcal{C}$ of length $n$ over $\\mathcal{R}$ is said to be a skew cyclic code with respect to the automorphism ${\\theta}_{t}$ if and only if for any codeword $a=(a_{1},a_{2},...,a_{n})\\in \\mathcal{C}$ we have $\\sigma(a)=( {\\theta}_{t}(a_{n}),{\\theta}_{t}(a_{1}),...,{\\theta}_{t}(a_{n-1}) )\\in \\mathcal{C}$, where $\\sigma$ is the skew cyclic shift on $\\mathcal{C}$.\n\\\\The inner product of any $a=(a_{1},a_{2},...,a_{n})$, $b=(b_{1},b_{2},...,b_{n})$ in $\\mathcal{C}$ is defined as $a.b=\\sum_{i=1}^{n}a_{i}b_{i}$, and $a$ and $b$ are said to be orthogonal if $a.b=0$. For a code $\\mathcal{C}$, its dual, denoted by $\\mathcal{C}^{\\perp}$, is defined as $\\mathcal{C}^{\\perp}=\\big \\{ x\\in \\mathcal{R}^{n}:x.c=0 ~\\forall c\\in \\mathcal{C}\\big \\}$. 
If $\\mathcal{C}=\\mathcal{C}^{\\perp}$, then $\\mathcal{C}$ is said to be a self-dual code.\nThe Hamming weight $w_{H}(a)$ is defined as the number of non-zero components in $a=(a_{1},a_{2},...,a_{n})\\in \\mathcal{C}$, and the Hamming distance between two codewords $a$ and $b$ of $\\mathcal{C}$ is defined as $d_{H}(a,b)=w_{H}(a-b)$, while the Hamming distance for the code $\\mathcal{C}$ is denoted by $d_{H}(\\mathcal{C})$ and defined as $d_{H}(\\mathcal{C})=min\\big\\{ d_{H}(a,b)\\mid a\\neq b, ~\\forall a,b \\in \\mathcal{C} \\big\\}$.\nThe Lee weight of an element $r=a+ub+vc+uvd\\in \\mathcal{R}$ is defined by\n$w_{L}(r)= w_{H}(a,a+b,a+c,a+b+c+d)$, and the Lee weight of a codeword $a=(a_{1},a_{2},...,a_{n})\\in \\mathcal{R}^{n}$ is $w_{L}(a)=\\sum_{i=1}^{n}w_{L}(a_{i}).$ The Lee distance between two codewords $a, b\\in \\mathcal{R}^{n}$ is defined as $d_{L}(a,b)= w_{L}(a-b)=\\sum_{i=1}^{n}w_{L}(a_{i}-b_{i})$, and the Lee distance for the code $\\mathcal{C}$ is defined by $d_{L}(\\mathcal{C})=min\\big\\{ d_{L}(a,b)\\mid a\\neq b, \\forall a,b\\in \\mathcal{C}\\big \\}$.\\\\\nA code $\\mathcal{C}$ of length $nm$ over $F_{q}$ is said to be a skew quasi-cyclic code of index $m$ if ${\\pi}_{m}(\\mathcal{C})=\\mathcal{C}$, where ${\\pi}_{m}$ is the skew quasi-cyclic shift on $(F_{q}^{n})^{m}$ defined by\n\\begin{align}\\label{eq 1}\n{\\pi}_{m}(a^{1}\\mid a^{2}\\mid...\\mid a^{m})=(\\sigma(a^{1})\\mid \\sigma(a^{2})\\mid...\\mid \\sigma(a^{m}) ).\n\\end{align}\n\n\\section{Gray map and $F_{q}$-images of skew cyclic codes over $\\mathcal{R}$}\\label{sec3}\n\nA map $\\Psi: \\mathcal{R} \\rightarrow F_{q}^{4}$ defined by\n\\begin{align*}\n\\Psi(a+ub+vc+uvd)=(a,a+b,a+c,a+b+c+d)\n\\end{align*}\nis called the Gray map [\\cite{xiying17}]. It can be checked that $\\Psi$ is a linear map and can be extended to $\\mathcal{R}^{n}$ in an obvious way by $\\Psi: \\mathcal{R}^{n}\\rightarrow F_{q}^{4n}$ such that\n\\begin{align*}\n\\Psi(r_{1},r_{2},...,r_{n})&=(a_{1},...,a_{n},a_{1}+b_{1},...,a_{n}+b_{n},a_{1}+c_{1},...,a_{n}+c_{n},\\\\\n&a_{1}+b_{1}+c_{1}+d_{1},...,a_{n}+b_{n}+c_{n}+d_{n})\n\\end{align*}\nwhere $r_{i}=a_{i}+ub_{i}+vc_{i}+uvd_{i}\\in \\mathcal{R} ~\\forall~ i=1,2,...,n.$\\\\\nIn light of the above definition, the following facts can be easily checked:\\\\\n\n\\begin{thm}\\label{Gray th1}\nThe Gray map $\\Psi$ is an $F_{q}$-linear distance preserving map from $\\mathcal{R}^{n}$ (Lee distance) to $F_{q}^{4n}$ (Hamming distance).\n\\end{thm}\n\n\\begin{thm}\\label{Gray th2}\nIf $\\mathcal{C}$ is an $[n,k,d_{L}]$ linear code over $\\mathcal{R}$, then $\\Psi(\\mathcal{C})$ is a $[4n,k,d_{H}]$ linear code over $F_{q}$, where $d_{L}=d_{H}.$\n\\end{thm}\n\n\\begin{proof}\nBy Theorem \\ref{Gray th1}, $\\Psi$ is an $F_{q}$-linear map, so $\\Psi(\\mathcal{C})$ is a linear code of length $4n$. Since $\\Psi$ is a bijection as well as distance preserving, $\\Psi(\\mathcal{C})$ has the same minimum distance as $\\mathcal{C}$, $i.e.$, $d_{L}=d_{H}$, and dimension $k$.\n\\end{proof}\n
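\nTo illustrate the Gray map and the Lee weight, take, for instance, $q=3$ and $r=1+u+2v+uv\\in \\mathcal{R}$, so that $a=1,~b=1,~c=2,~d=1$; then\n\\begin{align*}\n\\Psi(r)=(1,~1+1,~1+2,~1+1+2+1)=(1,2,0,2)\\in F_{3}^{4}, \\qquad w_{L}(r)=w_{H}(1,2,0,2)=3.\n\\end{align*}\n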
\n\n\\begin{pro}\\label{pro1}\nLet $\\sigma$ be the skew cyclic shift and ${\\pi}_{4}$ the skew quasi-cyclic shift as defined in equation \\ref{eq 1}, and let $\\Psi$ be the Gray map from $\\mathcal{R}^{n}$ to $F_{q}^{4n}$. Then $\\Psi \\sigma={\\pi}_{4}\\Psi$.\n\\end{pro}\n\n\\begin{proof}\nLet $r_{i}=a_{i}+ub_{i}+vc_{i}+uvd_{i}\\in \\mathcal{R}$ for $i=1,2,..., n.$ Take $r=(r_{1},r_{2},\\dots ,r_{n})$, then $\\sigma(r)=( {\\theta}_{t}(r_{n}),{\\theta}_{t}(r_{1}),...,{\\theta}_{t}(r_{n-1})).$ Applying $\\Psi$, we have $\\Psi(\\sigma(r))=( a^{p^{t}}_{n},{a^{p^{t}}_{1}},...,{a^{p^{t}}_{n-1}},{a^{p^{t}}_{n}}+{b^{p^{t}}_{n}},...,{a^{p^{t}}_{n-1}}+{b^{p^{t}}_{n-1}},{a^{p^{t}}_{n}}+{c^{p^{t}}_{n}},...,{a^{p^{t}}_{n-1}}+{c^{p^{t}}_{n-1}},{a^{p^{t}}_{n}}+{b^{p^{t}}_{n}}+{c^{p^{t}}_{n}}+{d^{p^{t}}_{n}},...,{a^{p^{t}}_{n-1}}+{b^{p^{t}}_{n-1}}+{c^{p^{t}}_{n-1}}+{d^{p^{t}}_{n-1}})$.\nOn the other side, $\\Psi(r) =(a_{1},a_{2},...,a_{n},a_{1}+b_{1},...,a_{n}+b_{n},a_{1}+c_{1},...,a_{n}+c_{n},a_{1}+b_{1}+c_{1}+d_{1},...,a_{n}+b_{n}+c_{n}+d_{n})$. Therefore, ${\\pi}_{4}(\\Psi(r))=( {a^{p^{t}}_{n}},{a^{p^{t}}_{1}},...,{a^{p^{t}}_{n-1}},{a^{p^{t}}_{n}}+{b^{p^{t}}_{n}},...,{a^{p^{t}}_{n-1}}+{b^{p^{t}}_{n-1}},{a^{p^{t}}_{n}}+{c^{p^{t}}_{n}},...,{a^{p^{t}}_{n-1}}+{c^{p^{t}}_{n-1}},{a^{p^{t}}_{n}}+{b^{p^{t}}_{n}}+{c^{p^{t}}_{n}}+{d^{p^{t}}_{n}},...,{a^{p^{t}}_{n-1}}+{b^{p^{t}}_{n-1}}+{c^{p^{t}}_{n-1}}+{d^{p^{t}}_{n-1}})$. Hence $\\Psi \\sigma={\\pi}_{4}\\Psi$.\n\n\\end{proof}\n\n\n\\begin{thm}\nA linear code $\\mathcal{C}$ of length $n$ over $\\mathcal{R}$ is skew cyclic if and only if $\\Psi(\\mathcal{C})$ is a skew quasi-cyclic code of length $4n$ over $F_{q}$ of index 4.\n\\end{thm}\n\\begin{proof}\nLet $\\mathcal{C}$ be a skew cyclic code of length $n$ over $\\mathcal{R}$. So, $\\sigma(\\mathcal{C})=\\mathcal{C}$ and this implies $\\Psi(\\sigma(\\mathcal{C}))=\\Psi(\\mathcal{C})$. By Proposition \\ref{pro1}, we have ${\\pi}_{4}(\\Psi(\\mathcal{C}))=\\Psi(\\mathcal{C})$. This shows that $\\Psi(\\mathcal{C})$ is a skew quasi-cyclic code of length $4n$ over $F_{q}$ of index 4.\\\\\nConversely, suppose $\\Psi(\\mathcal{C})$ is a skew quasi-cyclic code of length $4n$ over $F_{q}$ of index 4. Then ${\\pi}_{4}(\\Psi(\\mathcal{C}))=\\Psi(\\mathcal{C})$. By Proposition \\ref{pro1}, we get $\\Psi(\\sigma(\\mathcal{C}))=\\Psi(\\mathcal{C})$ and since $\\Psi$ is injective, $\\sigma(\\mathcal{C})=\\mathcal{C}$, $i.e.$, $\\mathcal{C}$ is a skew cyclic code over $\\mathcal{R}$.\n\\end{proof}\n\n\n\nIt is noted that sometimes the use of a permuted version of the Gray map instead of the direct Gray map is more convenient. The permuted version $\\Psi_{\\pi}$ is defined as $\\Psi_{\\pi}(r) = (a_{0}, a_{0}+b_{0}, a_{0}+c_{0}, a_{0}+b_{0}+c_{0}+d_{0}, a_{1}, a_{1}+b_{1}, a_{1}+c_{1}, a_{1}+b_{1}+c_{1}+d_{1},\\dots , a_{n-1}, a_{n-1}+b_{n-1}, a_{n-1}+c_{n-1}, a_{n-1}+b_{n-1}+c_{n-1}+d_{n-1})$. Clearly, the codes obtained by $\\Psi$ and $\\Psi_{\\pi}$ are permutation equivalent. 
Now, for a particular case, $\\mid\\langle\\theta_{t}\\rangle\\mid = k = 3$, we have the following result:\n\n\\begin{pro}\nFor any $r\\in \\mathcal{R}^{n}$, we have $\\Psi_{\\pi}\\sigma(r) = \\sigma^{4}\\Psi_{\\pi}(r)$.\n\\end{pro}\n\\begin{proof}\nLet $r = (r_{0}, r_{1},\\dots , r_{n-1})\\in \\mathcal{R}^{n}$ where $r_{i}=a_{i}+ub_{i}+vc_{i}+uvd_{i}$ for $0\\leq i\\leq n-1.$ We have\n$\\Psi_{\\pi}(\\sigma(r)) = \\Psi_{\\pi}(\\theta_{t}(r_{n-1}), \\theta_{t}(r_{0}), \\dots , \\theta_{t}(r_{n-2}))=( a^{p^{t}}_{n-1}, a^{p^{t}}_{n-1}+b^{p^{t}}_{n-1}, a^{p^{t}}_{n-1}+c^{p^{t}}_{n-1}, a^{p^{t}}_{n-1}+b^{p^{t}}_{n-1}+c^{p^{t}}_{n-1}+d^{p^{t}}_{n-1}, a^{p^{t}}_{0}, a^{p^{t}}_{0}+b^{p^{t}}_{0}, a^{p^{t}}_{0}+c^{p^{t}}_{0}, a^{p^{t}}_{0}+b^{p^{t}}_{0}+c^{p^{t}}_{0}+d^{p^{t}}_{0}, \\dots , a^{p^{t}}_{n-2}, a^{p^{t}}_{n-2}+b^{p^{t}}_{n-2}, a^{p^{t}}_{n-2}+c^{p^{t}}_{n-2}, a^{p^{t}}_{n-2}+b^{p^{t}}_{n-2}+c^{p^{t}}_{n-2}+d^{p^{t}}_{n-2}).$\\\\ \\\\\nOn the other hand,\\\\ \\\\\n$\\sigma^{4}\\Psi_{\\pi}(r) = \\sigma^{4}(a_{0}, a_{0}+b_{0}, a_{0}+c_{0}, a_{0}+b_{0}+c_{0}+d_{0}, a_{1}, a_{1}+b_{1}, a_{1}+c_{1}, a_{1}+b_{1}+c_{1}+d_{1}, \\dots , a_{n-1}, a_{n-1}+b_{n-1}, a_{n-1}+c_{n-1}, a_{n-1}+b_{n-1}+c_{n-1}+d_{n-1}) =(\\theta_{t}^{4}(a_{n-1}), \\theta_{t}^{4}(a_{n-1}+b_{n-1}), \\theta_{t}^{4}(a_{n-1}+c_{n-1}), \\theta_{t}^{4}(a_{n-1}+b_{n-1}+c_{n-1}+d_{n-1}), \\theta_{t}^{4}(a_{0}), \\theta_{t}^{4}(a_{0}+b_{0}), \\theta_{t}^{4}(a_{0}+c_{0}), \\theta_{t}^{4}(a_{0}+b_{0}+c_{0}+d_{0}), \\dots \\theta_{t}^{4}(a_{n-2}), \\theta_{t}^{4}(a_{n-2}+b_{n-2}), \\theta_{t}^{4}(a_{n-2}+c_{n-2}), \\theta_{t}^{4}(a_{n-2}+b_{n-2}+c_{n-2}+d_{n-2})).$\\\\ \\\\\nAs $\\mid\\langle \\theta_{t}\\rangle\\mid = 3$, $\\theta_{t}^{3}(a) = a, ~~\\forall~~ a\\in \\mathcal{R}$. Finally,\\\\ \\\\\n$\\sigma^{4}\\Psi_{\\pi}(r) = ( a^{p^{t}}_{n-1}, a^{p^{t}}_{n-1}+b^{p^{t}}_{n-1}, a^{p^{t}}_{n-1}+c^{p^{t}}_{n-1}, a^{p^{t}}_{n-1}+b^{p^{t}}_{n-1}+c^{p^{t}}_{n-1}+d^{p^{t}}_{n-1}, a^{p^{t}}_{0}, a^{p^{t}}_{0}+b^{p^{t}}_{0}, a^{p^{t}}_{0}+c^{p^{t}}_{0}, a^{p^{t}}_{0}+b^{p^{t}}_{0}+c^{p^{t}}_{0}+d^{p^{t}}_{0},\\dots , a^{p^{t}}_{n-2}, a^{p^{t}}_{n-2}+b^{p^{t}}_{n-2}, a^{p^{t}}_{n-2}+c^{p^{t}}_{n-2}, a^{p^{t}}_{n-2}+b^{p^{t}}_{n-2}+c^{p^{t}}_{n-2}+d^{p^{t}}_{n-2}).$\nThis completes the proof.\n\\end{proof}\n\n\nIn the light of the above proposition, we conclude the following:\n\n\\begin{thm}\nLet $\\mid\\langle \\theta_{t}\\rangle\\mid = k = 3$ and $\\mathcal{C}$ be a skew cyclic code of length $n$ over $\\mathcal{R}$. Then its $F_{q}$-image $\\Psi_{\\pi}(\\mathcal{C})$ is equivalent to a $4$-quasi-cyclic code of length $4n$ over $F_{q}$.\n\\end{thm}\n\n\n\\section{Skew cyclic codes over $\\mathcal{R}$}\\label{sec4}\n\n We denote $B_{1}\\oplus B_{2}\\oplus B_{3}\\oplus B_{4}=\\big\\{ a_{1}+a_{2}+a_{3}+a_{4}:a_{i}\\in B_{i}$ for $i=1,2,3,4\\big \\}$.\\\\\nAlso, any $a+ub+vc+uvd\\in \\mathcal{R}$ can be written as $a+ub+vc+uvd=(1-u-v+uv)a+(u-uv)(a+b)+(v-uv)(a+c)+uv(a+b+c+d)$ where $a,b,c,d\\in F_{q}$ and by \\cite{joel}, $a+ub+vc+uvd\\in \\mathcal{R}$ is a unit if and only if $a,(a+b),(a+c),(a+b+c+d)$ are units in $F_{q}$. 
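\nNote that $e_{1}=1-u-v+uv$, $e_{2}=u-uv$, $e_{3}=v-uv$ and $e_{4}=uv$ are pairwise orthogonal idempotents of $\\mathcal{R}$ with $e_{1}+e_{2}+e_{3}+e_{4}=1$; for instance,\n\\begin{align*}\ne_{2}^{2}=(u-uv)^{2}=u-2uv+uv=u-uv=e_{2}, \\qquad e_{2}e_{4}=(u-uv)uv=uv-uv=0.\n\\end{align*}\nThis underlies the decomposition used below.\n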
Let $\\mathcal{C}$ be a linear code of length $n$ over $\\mathcal{R}$ and \\\\\n$\\mathcal{C}_{1}=\\big \\{a\\in F_{q}^{n}\\mid a+ub+vc+uvd\\in \\mathcal{C}$, ~for ~some~ $b,c,d\\in F_{q}^{n} \\big\\}$;\\\\\n$\\mathcal{C}_{2}=\\big \\{a+b\\in F_{q}^{n}\\mid a+ub+vc+uvd\\in \\mathcal{C}$, ~for ~some~ $c,d\\in F_{q}^{n} \\big\\}$;\\\\\n$\\mathcal{C}_{3}=\\big \\{a+c\\in F_{q}^{n}\\mid a+ub+vc+uvd\\in \\mathcal{C}$, ~for ~some~ $b,d\\in F_{q}^{n} \\big\\}$;\\\\\n$\\mathcal{C}_{4}=\\big \\{a+b+c+d\\in F_{q}^{n}\\mid a+ub+vc+uvd\\in \\mathcal{C}\\big\\}$.\\\\\nThen $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3},\\mathcal{C}_{4}$ are linear codes of length $n$ over ${F_{q}}$ and $\\mathcal{C}$ can be expressed uniquely as $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$.\\\\ \\\\\nNow, we calculate the dual code of $\\mathcal{C}$, denoted by $\\mathcal{C}^{\\perp}$, as follows:\n\n\\begin{thm}\\label{dual th}\n Let $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a linear code over $\\mathcal{R}$ where $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ be linear codes over ${F_{q}}$. Then $\\mathcal{C}^{\\perp}=(1-u-v+uv)\\mathcal{C}_{1}^{\\perp}\\oplus(u-uv)\\mathcal{C}_{2}^{\\perp}\\oplus(v-uv)\\mathcal{C}_{3}^{\\perp}\\oplus uv\\mathcal{C}_{4}^{\\perp}$. Further, $\\mathcal{C}$ is self-dual if and only if $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are self-dual.\n\\end{thm}\n\\begin{proof}\nThe proof is similar to the proof of Theorem 5 in \\cite{xiying17}.\n\\end{proof}\n\nLet $\\mathcal{D}$ be a linear code of length $n$ over $F_{q}$. For any codeword $c=(c_{0},c_{1},...,c_{n-1})$ in $\\mathcal{D}$, we identify it with a polynomial $c(x)$ in ${F_{q}[x;{\\theta}_{t}]}\/{\\langle x^{n}-1\\rangle}$ where $c(x)=c_{0}+c_{1}x+\\dots +c_{n-1}x^{n-1}.$ By this identification, Siap et al. \\cite{siap11} studied skew cyclic codes over the field $F_{q}$ and established many structural properties, e.g., that $\\mathcal{D}$ is a skew cyclic code over $F_{q}$ if and only if $\\mathcal{D}$ is a left $F_{q}[x;{\\theta}_{t}]-$submodule of ${F_{q}[x;{\\theta}_{t}]}\/{\\langle x^{n}-1\\rangle}$,\nand that $\\mathcal{D}$ is generated by a monic polynomial which is a right divisor of $(x^{n}-1)$ in $F_{q}[x;{\\theta}_{t}]$. Analogously to their study, we discuss skew cyclic codes over $\\mathcal{R}$ in this article.\n\\begin{thm}\\label{sk th1}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a linear code of length $n$ over $\\mathcal{R}$ where $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ be linear codes of length $n$ over ${F_{q}}$. Then $\\mathcal{C}$ is a skew cyclic code over $\\mathcal{R}$ if and only if $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are skew cyclic codes over ${F_{q}}$.\n\\end{thm}\n\\begin{proof}\nSuppose $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are skew cyclic codes over ${F_{q}}$. In order to prove $\\mathcal{C}$ is a skew cyclic code over $\\mathcal{R}$, let $r=(r_{0},r_{1},...,r_{n-1})\\in \\mathcal{C}$ where $r_{i}=(1-u-v+uv)a_{i}+(u-uv)b_{i}+(v-uv)c_{i}+uvd_{i}, 0\\leq i\\leq n-1.$ Take $a=(a_{0},a_{1},...,a_{n-1}), ~ b=(b_{0},b_{1},...,b_{n-1}), ~c=(c_{0},c_{1},...,c_{n-1}), ~d=(d_{0},d_{1},...,d_{n-1})$. 
Then $a\\in \\mathcal{C}_{1},b\\in \\mathcal{C}_{2},c\\in \\mathcal{C}_{3},d\\in \\mathcal{C}_{4}$ and since $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are skew cyclic, so $\\sigma(a)\\in \\mathcal{C}_{1},\\sigma(b)\\in \\mathcal{C}_{2},\\sigma(c)\\in \\mathcal{C}_{3},\\sigma(d)\\in \\mathcal{C}_{4}.$ Note that ${\\theta}_{t}(r_{i})=(1-u-v+uv){\\theta}_{t}(a_{i})+(u-uv){\\theta}_{t}(b_{i})+(v-uv){\\theta}_{t}(c_{i})+uv{\\theta}_{t}(d_{i})$ for $0\\leq i\\leq n-1.$ Then $\\sigma(r)=({\\theta}_{t}(r_{n-1}),{\\theta}_{t}(r_{0}),...,{\\theta}_{t}(r_{n-2}))=(1-u-v+uv)\\sigma(a)+(u-uv)\\sigma(b)+(v-uv)\\sigma(c)+uv\\sigma(d)\\in (1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}=\\mathcal{C}$. This shows that $\\mathcal{C}$ is a skew cyclic code over $\\mathcal{R}$.\\\\\nConversely, suppose $\\mathcal{C}$ is a skew cyclic code over $\\mathcal{R}$. Let $a=(a_{0}, a_{1},.., a_{n-1})\\in \\mathcal{C}_{1}, b = (b_{0},b_{1},..,b_{n-1})\\in \\mathcal{C}_{2}, c=(c_{0},c_{1},...,c_{n-1})\\in \\mathcal{C}_{3}, d=(d_{0},d_{1},..,d_{n-1})\\in \\mathcal{C}_{4}$. Consider $r_{i}=(1-u-v+uv)a_{i}+(u-uv)b_{i}+(v-uv)c_{i}+uvd_{i}, 0\\leq i\\leq n-1.$ Then $r=(r_{0},r_{1},...,r_{n-1})\\in \\mathcal{C}$ and since $\\mathcal{C}$ is skew cyclic, $\\sigma(r)\\in \\mathcal{C}$. But $\\sigma(r)=(1-u-v+uv)\\sigma(a)+(u-uv)\\sigma(b)+(v-uv)\\sigma(c)+uv\\sigma(d)$, it follows that $\\sigma(a)\\in \\mathcal{C}_{1},\\sigma(b)\\in \\mathcal{C}_{2},\\sigma(c)\\in \\mathcal{C}_{3},\\sigma(d)\\in \\mathcal{C}_{4}$. Hence $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are skew cyclic codes over $F_{q}$.\n\\end{proof}\n\n\n\\begin{cor}\nThe dual code $\\mathcal{C}^{\\perp}$ is a skew cyclic code over $\\mathcal{R}$, provided $\\mathcal{C}$ is a skew cyclic code over $\\mathcal{R}$.\n\\end{cor}\n\\begin{proof}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew cyclic code over $\\mathcal{R}$. Then by Theorem \\ref{sk th1}, $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are skew cyclic codes over ${F_{q}}$. Since the dual of a skew cyclic code over $F_{q}$ is again a skew cyclic code over ${F_{q}}$ by Corollary 18 of \\cite{D09}, $\\mathcal{C}_{1}^{\\perp},\\mathcal{C}_{2}^{\\perp},\\mathcal{C}_{3}^{\\perp}$ and $\\mathcal{C}_{4}^{\\perp}$ are skew cyclic codes over ${F_{q}}$. Thus, by Theorem \\ref{dual th} and Theorem \\ref{sk th1}, $\\mathcal{C}^{\\perp}$ is a skew cyclic code over $\\mathcal{R}$.\n\\end{proof}\n\n\n\n\\begin{cor}\nThe code $\\mathcal{C}$ is a self-dual skew cyclic code over $\\mathcal{R}$ if and only if $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are self-dual skew cyclic codes over ${F_{q}}$.\n\\end{cor}\n\n\\begin{thm}\\label{sk th2}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew cyclic code of length $n$ over $\\mathcal{R}$. Then $\\mathcal{C}$ has a generating polynomial $f(x)$ which is a right divisor of $(x^{n}-1)$ in ${\\mathcal{R}[x;{\\theta}_{t}]}$.\n\\end{thm}\n\\begin{proof}\nLet $f_{i}(x)$ be a generator of $\\mathcal{C}_{i}$ in ${F_{q}[x;{\\theta}_{t}]}$ for $i=1,2,3,4.$ Then $(1-u-v+uv)f_{1}(x),(u-uv)f_{2}(x),(v-uv)f_{3}(x),uvf_{4}(x)$ are generators of $\\mathcal{C}$. 
Let $f(x)=(1-u-v+uv)f_{1}(x)+(u-uv)f_{2}(x)+(v-uv)f_{3}(x)+uvf_{4}(x)$ and $\\mathcal{G}=\\langle f(x) \\rangle.$ Then $\\mathcal{G} \\subseteq \\mathcal{C}$. Now, $(1-u-v+uv)f(x) = (1-u-v+uv)f_{1}(x)\\in \\mathcal{G}, (u-uv)f(x)=(u-uv)f_{2}(x)\\in \\mathcal{G}, (v-uv)f(x)=(v-uv)f_{3}(x)\\in \\mathcal{G}, uv f(x) = uv f_{4}(x)\\in \\mathcal{G}$, so it follows that $\\mathcal{C}\\subseteq \\mathcal{G}$ and hence $\\mathcal{C}=\\mathcal{G}=\\langle f(x)\\rangle.$\\\\\nSince $f_{i}(x)$ is a right divisor of $(x^{n}-1)$ in ${F_{q}[x;{\\theta}_{t}]}$ for $i=1,2,3,4$, there exist $h_{i}(x)\\in {F_{q}[x;{\\theta}_{t}]}$ such that $(x^{n}-1)=h_{1}(x)\\ast f_{1}(x),(x^{n}-1)=h_{2}(x)\\ast f_{2}(x),(x^{n}-1)=h_{3}(x)\\ast f_{3}(x),(x^{n}-1)=h_{4}(x)\\ast f_{4}(x).$ Now, $[(1-u-v+uv)h_{1}(x)+(u-uv)h_{2}(x)+(v-uv)h_{3}(x)+uvh_{4}(x)]\\ast f(x)=(1-u-v+uv)h_{1}(x)\\ast f_{1}(x) +(u-uv)h_{2}(x)\\ast f_{2}(x)+(v-uv)h_{3}(x)\\ast f_{3}(x) +uvh_{4}(x)\\ast f_{4}(x)=(x^{n}-1)$. Thus, $f(x)$ is a right divisor of $(x^{n}-1)$.\n\\end{proof}\n\n\n\n\\begin{cor}\nEach left submodule of $\\mathcal{R}[x;\\theta_{t}]\/\\langle x^{n}-1 \\rangle$ is generated by a single element.\n\\end{cor}\nLet $\\mathcal{C}$ be a skew cyclic code over $F_{q}$ generated by the polynomial $f(x)$ such that $(x^{n}-1)= h(x)\\ast f(x)$ where $f(x)=f_{0}+f_{1}x+\\dots +f_{r}x^{r}, h(x)=h_{0}+h_{1}x+\\dots +h_{n-r}x^{n-r}$ in $F_{q}[x;{\\theta}_{t}]$. Then by \\cite{D09}, its dual $\\mathcal{C}^{\\perp}$ is a skew cyclic code over $F_{q}$ generated by the polynomial $\\widehat{h}(x)=h_{n-r}+{\\theta}_{t}(h_{n-r-1})x+{\\theta}_{t}^{2}(h_{n-r-2})x^{2}+\\dots +{\\theta}_{t}^{n-r}(h_{0})x^{n-r}$.\\\\\n\n\\begin{cor}\\label{cor 1}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew cyclic code over $\\mathcal{R}$ and $f_{i}$ the generator of $\\mathcal{C}_{i}$ such that $(x^{n}-1)=h_{i}(x)\\ast f_{i}(x)$, for $i=1,2,3,4$. Then $\\mathcal{C}^{\\perp}=\\langle (1-u-v+uv)\\widehat{h}_{1}(x)+(u-uv)\\widehat{h}_{2}(x)+(v-uv)\\widehat{h}_{3}(x)+uv\\widehat{h}_{4}(x)\\rangle$ and $\\mid \\mathcal{C}^{\\perp}\\mid =q^{\\sum_{i=1}^{4}\\deg f_{i}(x)}$.\n\\end{cor}\n\\begin{proof}\nSince $\\widehat{h}_{i}(x)$ is a generator of ${\\mathcal{C}_{i}^{\\perp}}$ for $i=1,2,3,4$, by Theorem \\ref{dual th} and Theorem \\ref{sk th2}, $\\mathcal{C}^{\\perp}=\\langle (1-u-v+uv)\\widehat{h}_{1}(x)+(u-uv)\\widehat{h}_{2}(x)+(v-uv)\\widehat{h}_{3}(x)+uv\\widehat{h}_{4}(x)\\rangle$.\\\\\n Also, $\\mid \\mathcal{C}^{\\perp}\\mid=\\mid \\mathcal{C}_{1}^{\\perp}\\mid \\mid \\mathcal{C}_{2}^{\\perp}\\mid \\mid \\mathcal{C}_{3}^{\\perp}\\mid \\mid \\mathcal{C}_{4}^{\\perp}\\mid=q^{\\sum_{i=1}^{4}\\deg f_{i}(x)}.$\n\\end{proof}\n\n\n\n\\section{Idempotent generators of skew cyclic codes and their dual codes over $\\mathcal{R}$}\\label{sec5}\n\nSince the underlying ring is the skew polynomial ring, which is non-commutative, polynomials here may exhibit more factorizations. Therefore, it is not easy to find the exact number of skew cyclic codes in $\\mathcal{R}[x;\\theta_{t}]$ or the number of idempotent generators of skew cyclic codes over $\\mathcal{R}$. But, when we impose the conditions $gcd(n,k)=1$ and $gcd(n,q)=1$ [as given by \\cite{Gursoy14}], where $k$ is the order of the automorphism and $n$ is the length of the code, we can find idempotent generators. 
Towards this, we have the following:\n\n\\begin{thm} (\\cite{Gursoy14}) \\label{idm th1}\n~~Let $f(x)\\in F_{q}[x;{\\theta}_{t}]$ be a monic right divisor of $(x^{n}-1)$ and $\\mathcal{C}=\\langle f(x) \\rangle.$ If $gcd(n,k)=1$ and $gcd(n,q)=1$, then there exists an idempotent polynomial $e(x)\\in F_{q}[x;{\\theta}_{t}]\/ \\langle x^{n}-1 \\rangle$ such that $\\mathcal{C}=\\langle e(x) \\rangle.$\n\\end{thm}\n\n\\begin{thm}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew cyclic code of length $n$ over $\\mathcal{R}$ with $gcd(n,k)=1$ and $gcd(n,q)=1$. Then $\\mathcal{C}$ has an idempotent generator $e(x)$ in $\\mathcal{R}[x;{\\theta}_{t}]$.\n\\end{thm}\n\\begin{proof}\nBy Theorem \\ref{idm th1}, there exists an idempotent generator $e_{i}(x)$ of $\\mathcal{C}_{i}$ for $i=1,2,3,4$ in $F_{q}[x;{\\theta}_{t}]$. Then by Theorem \\ref{sk th2}, $e(x)=(1-u-v+uv)e_{1}(x)+(u-uv)e_{2}(x)+(v-uv)e_{3}(x)+uve_{4}(x)$ is a generator of $\\mathcal{C}$, which is also idempotent.\n\\end{proof}\n\n\n\\begin{thm}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew cyclic code of length $n$ over $\\mathcal{R}$ with $gcd(n,k)=1$ and $gcd(n,q)=1$. Then $\\mathcal{C}^{\\perp}$ has an idempotent generator in $\\mathcal{R}[x;{\\theta}_{t}]$.\n\\end{thm}\n\\begin{proof}\nSuppose $e(x) = (1-u-v+uv)e_{1}(x)+(u-uv)e_{2}(x)+(v-uv)e_{3}(x)+uve_{4}(x)$ is an idempotent generator of $\\mathcal{C}$ where $e_{i}(x)$ is an idempotent generator of $\\mathcal{C}_{i}$ for $i=1,2,3,4.$ Then $\\mathcal{C}_{i}^{\\perp}$ has idempotent generator\n$1-e_{i}(x^{-1})$, for $i=1,2,3,4$ [See Lemma 12.3.23 (i) in \\cite{book}].\nHence, by Theorem \\ref{sk th2}, $\\mathcal{C}^{\\perp}$ has idempotent generator $(1-u-v+uv)(1-e_{1}(x^{-1}))+(u-uv)(1-e_{2}(x^{-1}))+(v-uv)(1-e_{3}(x^{-1}))+uv(1-e_{4}(x^{-1}))=1-e(x^{-1}).$\n\\end{proof}\n\n\\begin{exam}\nConsider the field $F_{25}=F_{5}[\\alpha]$, where ${\\alpha}^{2}+\\alpha +1=0$. Take $n=4$ and the Frobenius automorphism ${\\theta}_{t}:F_{25}\\rightarrow F_{25}$ defined by ${\\theta}_{t}(\\alpha)={\\alpha}^{5}.$ Now, $x^{4}-1=(x+2)(x+3)(x+\\alpha)(x+\\alpha+1)=(x+2)(x+3)(x+\\alpha+1)(x+\\alpha).$ Take $f_{1}(x)=f_{2}(x)=f_{3}(x)=f_{4}(x)=(x+\\alpha+1)$. Then $\\mathcal{C}_{1}=\\langle f_{1}(x) \\rangle, \\mathcal{C}_{2}=\\langle f_{2}(x) \\rangle, \\mathcal{C}_{3}=\\langle f_{3}(x) \\rangle, \\mathcal{C}_{4}=\\langle f_{4}(x) \\rangle$ are skew cyclic codes of length 4 with dimension 3 over $F_{25}$. Let $f(x) =(1-u-v+uv)f_{1}(x)+(u-uv)f_{2}(x)+(v-uv)f_{3}(x)+uvf_{4}(x)=(x+\\alpha+1).$ Then $\\mathcal{C}=\\langle f(x) \\rangle$ is a skew cyclic code of length 4 over $\\mathcal{R}=F_{25}+uF_{25}+vF_{25}+uvF_{25},$ where $u^{2}=u, v^{2}=v, uv=vu.$ Here, the Gray image $\\Psi(\\mathcal{C})$ is a skew quasi-cyclic code of index $4$ over $F_{25}$ with parameters $[16,12,2]$, which is an optimal code.\n\\end{exam}\n\\begin{exam}\nConsider the field $F_{9}=F_{3}[2\\alpha+1]$, where ${\\alpha}^{2}+1=0$. Take $n=6$ and the Frobenius automorphism ${\\theta}_{t}:F_{9}\\rightarrow F_{9}$ defined by ${\\theta}_{t}(\\alpha)={\\alpha}^{3}$. Now, $x^{6}-1=(2+x+(1+2\\alpha)x^{2}+x^{3})(1+x+(2\\alpha+2)x^{2}+x^{3})=(2+\\alpha x+2\\alpha x^{3}+x^{4})(1+\\alpha x+x^{2})$. 
Take $f_{1}(x)=f_{2}(x)=f_{3}(x)=(2+\\alpha x+2\\alpha x^{3}+x^{4})$ and $f_{4}(x)=(2+x+(1+2\\alpha)x^{2}+x^{3})$, then $\\mathcal{C}_{1}=\\langle f_{1}(x) \\rangle, \\mathcal{C}_{2}=\\langle f_{2}(x) \\rangle, \\mathcal{C}_{3}=\\langle f_{3}(x) \\rangle, \\mathcal{C}_{4}=\\langle f_{4}(x) \\rangle$ are skew cyclic codes of length $6$ over $F_{9}$ with dimension $2, 2, 2$ and $3$ respectively. If we take $f(x) =(1-u-v+uv)f_{1}(x)+(u-uv)f_{2}(x)+(v-uv)f_{3}(x)+uvf_{4}(x)=(1-uv)(2+\\alpha x+2\\alpha x^{3}+x^{4})+uv(2+x+(1+2\\alpha)x^{2}+x^{3})$, then $\\mathcal{C}=\\langle f(x) \\rangle$ is a skew cyclic code of length 6 over $\\mathcal{R}=F_{9}+uF_{9}+vF_{9}+uvF_{9}$, where $u^{2}=u, v^{2}=v, uv=vu.$ The Gray image $\\Psi(\\mathcal{C})$ is a skew quasi-cyclic code of index $4$ over $F_{9}$ with parameters $[24,9,4].$\n\\end{exam}\n\n\n\\section{Skew Constacyclic codes over $\\mathcal{R}$}\\label{sec6}\n\\begin{df}\nLet $\\alpha=(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$ be a unit in $\\mathcal{R}$ where $\\alpha_{i}\\in F_{p^{t}}\\backslash\\{0\\}$ and ${\\theta}_{t}$ be the automorphism on $\\mathcal{R}$. A linear code $\\mathcal{C}$ of length $n$ is said to be a skew $\\alpha$-constacyclic code over $\\mathcal{R}$ if and only if $\\mathcal{C}$ is invariant under the skew $\\alpha$-constacyclic shift operation ${\\tau}_\\alpha$ where $\\tau_{\\alpha}:\\mathcal{R}^{n} \\rightarrow \\mathcal{R}^{n}$ is defined by $\\tau_{\\alpha}(c_{0},c_{1},..,c_{n-1})=(\\alpha \\theta_{t}(c_{n-1}),\\theta_{t}(c_{0}),..,\\theta_{t}(c_{n-2}))$, $i.e.$, $\\mathcal{C}$ is a skew $\\alpha$-constacyclic code if and only if $\\tau_{\\alpha}(\\mathcal{C})=\\mathcal{C}.$\\\\\nClearly, $\\mathcal{C}$ is said to be a skew cyclic code for $\\alpha= 1$ and a skew negacyclic code for $\\alpha= -1$.\n\\end{df}\nIn this section, our study is restricted to the condition $\\alpha\\in \\mathcal{R}$ such that $\\alpha^{2}=1$. By identifying each codeword $c = (c_{0}, c_{1}, \\dots c_{n-1})\\in \\mathcal{R}^{n}$ with a polynomial $c(x) = c_{0}+c_{1}x+\\dots +c_{n-1}x^{n-1}$ in the left $\\mathcal{R}$-module $\\mathcal{R}_{n,\\alpha}= \\mathcal{R}[x;\\theta_{t}]\/\\langle x^{n}-\\alpha\\rangle$, we see that a linear code $\\mathcal{C}$ is a skew $\\alpha$-constacyclic code if and only if it is a left $\\mathcal{R}$-submodule of $\\mathcal{R}_{n,\\alpha}= \\mathcal{R}[x;\\theta_{t}]\/\\langle x^{n}-\\alpha\\rangle$.\n\n\\begin{thm}\nDefine a map $\\rho : \\mathcal{R}_{n} = \\mathcal{R}[x;\\theta_{t}]\/\\langle x^{n}-1 \\rangle \\rightarrow \\mathcal{R}_{n,\\alpha} = \\mathcal{R}[x;\\theta_{t}]\/\\langle x^{n}-\\alpha\\rangle$ by $\\rho(f(x)) = f(\\alpha x)$. If $n$ is odd, then $\\rho$ is a left $\\mathcal{R}$-module isomorphism.\n\\end{thm}\n\\begin{proof}\nThe justification is straightforward; one only needs to observe that\n\\begin{align*}\nf(x)&=g(x) ~mod~ (x^{n}-1)\\\\\n\\Leftrightarrow f(x)-g(x)&=h(x)\\ast(x^{n}-1) ~for~ some~ h(x)\\in \\mathcal{R}[x;\\theta_{t}]\\\\\n\\Leftrightarrow f(\\alpha x)-g(\\alpha x) &=h(\\alpha x)\\ast (\\alpha^{n}x^{n}-1)\\\\\n& =h(\\alpha x)\\ast (\\alpha x^{n}-\\alpha^{2})~(as~\\alpha^{n}=\\alpha~for~odd~n)\\\\\n& =\\alpha h(\\alpha x)\\ast ( x^{n}-\\alpha )\\\\\n\\Leftrightarrow f(\\alpha x) &= g(\\alpha x) ~mod~ (x^{n}-\\alpha).\n\\end{align*}\n\\end{proof}\n\n\n\n\\begin{cor}\nThere is a one-to-one correspondence between the skew cyclic codes and skew $\\alpha$-constacyclic codes over $\\mathcal{R}$ of odd length.\n\\end{cor}\n
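\nFor instance, for odd $n$ and $\\alpha=-1$, the map $\\rho$ sends $c_{0}+c_{1}x+c_{2}x^{2}+\\dots$ to $c_{0}-c_{1}x+c_{2}x^{2}-\\dots$ (note that $(\\alpha x)^{i}=\\alpha^{i}x^{i}$, since $\\theta_{t}(\\alpha)=\\alpha$), so the skew cyclic code generated by $f(x)$ in $\\mathcal{R}_{n}$ corresponds to the skew negacyclic code generated by $f(-x)$ in $\\mathcal{R}_{n,-1}$.\n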
\n\\begin{cor}\nLet $n$ be an odd positive integer. Define a permutation $\\mu$ on $\\mathcal{R}^{n}$ as $\\mu(c_{0}, c_{1}, \\dots ,c_{n-1}) = (c_{0}, \\alpha c_{1}, \\dots ,\\alpha^{n-1}c_{n-1})$. Then $\\mathcal{C}$ is a skew cyclic code of length $n$ if and only if $\\mu(\\mathcal{C})$ is a skew $\\alpha$-constacyclic code of length $n$ over $\\mathcal{R}$.\n\\end{cor}\n\n\\subsection{Relations}\n\n\\begin{df}\nLet $\\mathcal{C}$ be a linear code of length $n$ over $\\mathcal{R}$ and $n=ml.$ Then $\\mathcal{C}$ is said to be an $\\alpha$-quasi-twisted code if\n\\begin{align*}\n(c_{0 ~0},c_{0 ~1},...,c_{0 ~l-1},...,c_{m-1~ 0},c_{m-1 ~1},...,c_{m-1 ~l-1})\\in \\mathcal{C}\n\\end{align*}\nimplies\n\\begin{align*}\n(\\alpha c_{m-1 ~0},\\alpha c_{m-1 ~1},...,\\alpha c_{m-1~ l-1},...,c_{m-2 ~0 },c_{m-2 ~1},...,c_{m-2 ~l-1})\\in \\mathcal{C}.\n\\end{align*}\nIf $l$ is the least positive integer satisfying $n=ml$, then $\\mathcal{C}$ is known as an $\\alpha$-quasi-twisted code of length $n$ over $\\mathcal{R}.$\n\\end{df}\n\nThere is a nice relationship among skew $\\alpha$-constacyclic codes, constacyclic codes and $\\alpha$-quasi-twisted codes over $\\mathcal{R}$, which is obtained from the following results.\n\\begin{thm}\\label{ret 1}\nLet $\\mathcal{C}$ be a skew $\\alpha$-constacyclic code of length $n$ over $\\mathcal{R}$ and $gcd(n, k) = 1$. Then $\\mathcal{C}$ is an $\\alpha$-constacyclic code of length $n$ over $\\mathcal{R}$.\n\\end{thm}\n\\begin{proof}\nSince $gcd(n, k) = 1$, by an elementary fact of number theory, there exist integers $T>0$ and $L\\geq 0$ such that $Tk= 1+Ln$.\nLet $c(x)=c_{0}+c_{1}x+\\dots +c_{n-1}x^{n-1}\\in \\mathcal{C}$. As $\\mathcal{C}$ is a skew $\\alpha$-constacyclic code and $x\\ast c(x)$ represents the skew $\\alpha$-constacyclic shift of the codeword $c(x)$, the elements $x\\ast c(x), x^{2}\\ast c(x),...,x^{Tk}\\ast c(x)$ belong to $\\mathcal{C}$, where\n\\begin{align*}\nx^{Tk}\\ast c(x)& = x^{Tk}\\ast (c_{0}+c_{1}x+\\dots +c_{n-1}x^{n-1})\\\\\n& = \\theta_{t}^{Tk}(c_{0})x^{Tk}+ \\theta_{t}^{Tk}(c_{1})x^{Tk+1}+\\dots + \\theta_{t}^{Tk}(c_{n-1})x^{Tk+n-1}\\\\\n& = c_{0}x^{1+Ln}+ c_{1}x^{2+Ln}+\\dots + c_{n-1}x^{Ln+n}\\\\\n& = \\alpha^{L}(c_{0}x+c_{1}x^{2}+\\dots + c_{n-2}x^{n-1}+\\alpha c_{n-1})~(as~in~\\mathcal{R}_{n,\\alpha},~ x^{n} = \\alpha)\\\\\n\\implies \\alpha^{L}x^{Tk}\\ast c(x)& = c_{0}x+c_{1}x^{2}+\\dots + c_{n-2}x^{n-1}+\\alpha c_{n-1}\\in \\mathcal{C}~(as~ \\alpha^{2} = 1).\n\\end{align*}\nThis proves that $\\mathcal{C}$ is an $\\alpha$-constacyclic code of length $n$ over $\\mathcal{R}$.\n\\end{proof}\n\n\n\\begin{cor}\nLet $gcd(n, k) = 1$. If $f(x)$ is a right divisor of $x^{n}-\\alpha$ in the skew polynomial ring $\\mathcal{R}[x;\\theta_{t}]$, then $f(x)$ is a factor of $x^{n}-\\alpha$ in the polynomial ring $\\mathcal{R}[x]$.\n\\end{cor}\n\n\n\\begin{thm}\\label{ret 2}\nLet $\\mathcal{C}$ be a skew $\\alpha$-constacyclic code of length $n$ and $gcd(n,k) = l.$ Then $\\mathcal{C}$ is an $\\alpha$-quasi-twisted code of index $l$ over $\\mathcal{R}.$\n\\end{thm}\n\\begin{proof}\n Since $gcd(n,k) = l$, there exist two integers $T$ and $D>0$ such that $Tk = l + Dn.$\nLet $r = (c_{0 ~0},c_{0~ 1},...,c_{0 ~l-1},...,c_{m-1~ 0},c_{m-1~ 1},...,c_{m-1~ l-1})\\in \\mathcal{C}$. 
Since $\\mathcal{C}$ is a skew $\\alpha$-constacyclic code, the iterates $\\tau_{\\alpha}(r), \\tau_{\\alpha}^{2}(r),\\dots$ all belong to $\\mathcal{C}$; in particular $\\tau_{\\alpha}^{l}(r), \\tau_{\\alpha}^{l+Dn}(r)\\in \\mathcal{C}$, where\n\\begin{align*}\n\\tau_{\\alpha}^{l}(r)&= (\\theta_{t}^{l}(\\alpha c_{m-1~ 0}),...,\\theta_{t}^{l}(\\alpha c_{m-1~ l-1}),\\theta_{t}^{l}(c_{0~ 0}),...,\\theta_{t}^{l}(c_{0 ~l-1}),...,\\\\& ~~~~\\theta_{t}^{l}(c_{m-2~ 0}),...,\\theta_{t}^{l}(c_{m-2 ~l-1})).\\\\\n\\implies \\tau_{\\alpha}^{l+Dn}(r)&=(\\theta_{t}^{l+Dn}(c_{m-1~ 0}),...,\\theta_{t}^{l+Dn}(c_{m-1~ l-1}),...,\\\\& ~~~~\\theta_{t}^{l+Dn}(\\alpha c_{m-2~ 0}),...,\\theta_{t}^{l+Dn}(\\alpha c_{m-2~ l-1}))\\\\\n&= (\\theta_{t}^{Tk}(c_{m-1 ~0}),...,\\theta_{t}^{Tk}(c_{m-1 ~l-1}),...,\\\\&~~~~\\theta_{t}^{Tk}(\\alpha c_{m-2~ 0}),...,\\theta_{t}^{Tk}(\\alpha c_{m-2~ l-1}))\\\\\n&= (c_{m-1~ 0},...,c_{m-1~ l-1},...,\\alpha c_{m-2~ 0},...,\\alpha c_{m-2~ l-1}).\\\\\n\\implies \\alpha \\tau_{\\alpha}^{l+Dn}(r)&= (\\alpha c_{m-1~ 0},...,\\alpha c_{m-1~ l-1},...,c_{m-2 ~0},...,c_{m-2 ~l-1})\\in \\mathcal{C}~(as~ \\alpha^{2} = 1).\n\\end{align*}\nThis proves that $\\mathcal{C}$ is an $\\alpha$-quasi-twisted code of index $l$.\n\\end{proof}\n\n\n\n\\section{Constacyclic codes with other shift constants}\\label{sec7}\n\nIn this section we characterize the Gray images of skew $\\alpha$-constacyclic codes over $\\mathcal{R}$ as skew quasi-twisted codes over $F_{q}$. We define the skew quasi-twisted shift operator $\\omega_{l}$ on $(F_{q}^{n})^{l}$ by\n\\begin{align*}\n\\omega_{l}((c^{1})\\mid (c^{2})\\mid(c^{3})\\mid\\dots\\mid(c^{l})) = (\\tau_{\\alpha}(c^{1})\\mid \\tau_{\\alpha}(c^{2})\\mid\\tau_{\\alpha}(c^{3})\\mid\\dots \\mid\\tau_{\\alpha}(c^{l}))\n\\end{align*}\nwhere $c^{i}\\in F_{q}^{n}$ and $\\tau_{\\alpha}$ is the skew $\\alpha$-constacyclic shift operator as defined in Section \\ref{sec6}.\\\\\nA linear code $\\mathcal{C}$ of length $nl$ over $F_{q}$ is said to be a skew quasi-twisted code of index $l$ if $\\omega_{l}(\\mathcal{C}) = \\mathcal{C}$.\n\nWith the help of the above definition, we get the following results:\n\n\\begin{pro}\\label{pro3}\nLet $\\Psi$ be the Gray map as defined earlier. Then $\\Psi\\tau_{\\alpha} = \\omega_{4}\\Psi$.\n\\end{pro}\n\\begin{proof}\nLet $r = (r_{0}, r_{1},\\dots ,r_{n-1})\\in \\mathcal{R}^{n}$. 
We have\n$\\Psi\\tau_{\\alpha}(r) = \\Psi(\\alpha \\theta_{t}(r_{n-1}), \\theta_{t}(r_{0}),\\\\ \\dots , \\theta_{t}(r_{n-2})) = (\\alpha a_{n-1}^{p^t}, a_{0}^{p^t},\\dots , a_{n-2}^{p^t}, \\alpha a_{n-1}^{p^t}+\\alpha b_{n-1}^{p^t},\\dots , a_{n-2}^{p^t}+b_{n-2}^{p^t}, \\alpha a_{n-1}^{p^t}+\\alpha c_{n-1}^{p^t}, \\dots , a_{n-2}^{p^t}+ c_{n-2}^{p^t}, \\alpha a_{n-1}^{p^t}+\\alpha b_{n-1}^{p^t}+\\alpha c_{n-1}^{p^t}+\\alpha d_{n-1}^{p^t}, \\dots , a_{n-2}^{p^t}+b_{n-2}^{p^t}+c_{n-2}^{p^t}+d_{n-2}^{p^t}).$\\\\ \\\\\nOn the other hand,\\\\ \\\\\n$\\omega_{4}\\Psi(r) = \\omega_{4}(a_{0}, a_{1},\\dots ,a_{n-1}, a_{0}+b_{0}, a_{1}+b_{1}, \\dots , a_{n-1}+b_{n-1}, a_{0}+c_{0}, a_{1}+c_{1}, \\dots , a_{n-1}+c_{n-1}, a_{0}+b_{0}+c_{0}+d_{0}, a_{1}+b_{1}+c_{1}+d_{1}, \\dots , a_{n-1}+b_{n-1}+c_{n-1}+d_{n-1}) = (\\alpha a_{n-1}^{p^t}, a_{0}^{p^t},\\dots , a_{n-2}^{p^t}, \\alpha a_{n-1}^{p^t}+\\alpha b_{n-1}^{p^t},\\dots , a_{n-2}^{p^t}+b_{n-2}^{p^t}, \\alpha a_{n-1}^{p^t}+\\alpha c_{n-1}^{p^t}, \\dots , a_{n-2}^{p^t}+ c_{n-2}^{p^t}, \\alpha a_{n-1}^{p^t}+\\alpha b_{n-1}^{p^t}+\\alpha c_{n-1}^{p^t}+\\alpha d_{n-1}^{p^t}, \\dots , a_{n-2}^{p^t}+b_{n-2}^{p^t}+c_{n-2}^{p^t}+d_{n-2}^{p^t}).$ Therefore, $\\Psi\\tau_{\\alpha} = \\omega_{4}\\Psi$.\n\\end{proof}\n\nAs a consequence of Proposition \\ref{pro3}, we have the following:\n\n\\begin{thm}\nIf $\\mathcal{C}$ is a skew $\\alpha$-constacyclic code of length $n$ over $\\mathcal{R}$, then its $F_{q}$-image $\\Psi(\\mathcal{C})$ is a skew quasi-twisted code of length $4n$ over $F_{q}$ of index $4$, and vice versa.\n\\end{thm}\n\n\\section{ Decomposition of skew $\\boldsymbol{(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})}$-constacyclic codes over $\\mathcal{R}$} \\label{sec8}\nIn this section, we discuss skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic codes of arbitrary length $n$ over $\\mathcal{R}$ by the decomposition method.\\\\\n\nGao et al. \\cite{gao} have shown that a skew $\\alpha$-constacyclic code $\\mathcal{C}$ of length $n$ over $F_{q}$ is a left $F_{q}[x;\\theta_{t}]$-submodule of $F_{q}[x;\\theta_{t}]\/\\langle x^{n}-\\alpha \\rangle$ generated by a monic polynomial $f(x)$ of minimal degree in $\\mathcal{C}$, where $f(x)$ is a right divisor of $(x^{n}-\\alpha)$.\nMotivated by this study, we find some structural properties of skew $\\alpha$-constacyclic codes over $\\mathcal{R}$, which are listed below.\n\\begin{thm}\\label{de th1}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a linear code of length $n$ over $\\mathcal{R}$. Then $\\mathcal{C}$ is a skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic code over $\\mathcal{R}$ if and only if $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are a skew $\\alpha_{1}$-constacyclic code, a skew $(\\alpha_{1}+\\alpha_{2})$-constacyclic code, a skew $(\\alpha_{1}+\\alpha_{3})$-constacyclic code and a skew $(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})$-constacyclic code of length $n$ over $F_{q}$ respectively, where $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$ is a unit in $\\mathcal{R}.$\n\\end{thm}\n\\begin{proof}\nLet $r=(r_{0},r_{1},..,r_{n-1})\\in \\mathcal{C}$, where $r_{i}=(1-u-v+uv)a_{i}+(u-uv)b_{i}+(v-uv)c_{i}+uvd_{i},0\\leq i\\leq n-1$. Take $a=(a_{0},..,a_{n-1}),b=(b_{0},..,b_{n-1}),c=(c_{0},..,c_{n-1}),d=(d_{0},..,d_{n-1})$. 
Suppose $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are skew $\\alpha_{1}$-constacyclic, skew $(\\alpha_{1}+\\alpha_{2})$-constacyclic, skew $(\\alpha_{1}+\\alpha_{3})$-constacyclic, skew $(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})$-constacyclic codes over $F_{q}$ respectively. So $\\tau_{\\alpha_{1}}(a)\\in \\mathcal{C}_{1},\\tau_{\\alpha_{1}+\\alpha_{2}}(b)\\in \\mathcal{C}_{2},\\tau_{\\alpha_{1}+\\alpha_{3}}(c)\\in \\mathcal{C}_{3}$, and $\\tau_{\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4}}(d)\\in \\mathcal{C}_{4}.$ Now, $\\tau_{\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}}(r)$=\n$((\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})\\theta_{t}(r_{n-1}),\\theta_{t}(r_{0}),..,\\theta_{t}(r_{n-2}))=(1-u-v+uv)\\tau_{\\alpha_{1}}(a)+(u-uv)\\tau_{\\alpha_{1}+\\alpha_{2}}(b)+(v-uv)\\tau_{\\alpha_{1}+\\alpha_{3}}(c)+uv\\tau_{\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4}}(d)\\in (1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}=\\mathcal{C}.$ This shows that $\\mathcal{C}$ is a skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic code over $\\mathcal{R}$.\\\\\nConversely, let $a=(a_{0},..,a_{n-1})\\in \\mathcal{C}_{1},b=(b_{0},..,b_{n-1})\\in \\mathcal{C}_{2},c=(c_{0},..,c_{n-1})\\in \\mathcal{C}_{3},d=(d_{0},..,d_{n-1})\\in \\mathcal{C}_{4}$. Take $r_{i}=(1-u-v+uv)a_{i}+(u-uv)b_{i}+(v-uv)c_{i}+uvd_{i},0\\leq i\\leq n-1$. Then $r=(r_{0},r_{1},..,r_{n-1})\\in \\mathcal{C}$. Suppose $\\mathcal{C}$ is a skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic code over $\\mathcal{R}$. So, $\\tau_{\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}}(r)\\in \\mathcal{C}$, where $\\tau_{\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}}(r)=(1-u-v+uv)\\tau_{\\alpha_{1}}(a)+(u-uv)\\tau_{\\alpha_{1}+\\alpha_{2}}(b)+(v-uv)\\tau_{\\alpha_{1}+\\alpha_{3}}(c)+uv\\tau_{\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4}}(d).$ It follows that $\\tau_{\\alpha_{1}}(a)\\in \\mathcal{C}_{1},\\tau_{\\alpha_{1}+\\alpha_{2}}(b)\\in \\mathcal{C}_{2},\\tau_{\\alpha_{1}+\\alpha_{3}}(c)\\in \\mathcal{C}_{3}$ and $\\tau_{\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4}}(d)\\in \\mathcal{C}_{4}.$ Hence $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ are a skew $\\alpha_{1}$-constacyclic code, a skew $(\\alpha_{1}+\\alpha_{2})$-constacyclic code, a skew $(\\alpha_{1}+\\alpha_{3})$-constacyclic code and a skew $(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})$-constacyclic code of length $n$ over $F_{q}$ respectively.\n\\end{proof}\n
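\nAs an illustration of Theorem \\ref{de th1}, take $q=5$ and the unit $\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}=1+u+v+uv$; here $\\alpha_{1}=1$, $\\alpha_{1}+\\alpha_{2}=2$, $\\alpha_{1}+\\alpha_{3}=2$ and $\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4}=4$, so $\\mathcal{C}$ is a skew $(1+u+v+uv)$-constacyclic code over $\\mathcal{R}$ if and only if $\\mathcal{C}_{1}$ is skew cyclic, $\\mathcal{C}_{2}$ and $\\mathcal{C}_{3}$ are skew $2$-constacyclic, and $\\mathcal{C}_{4}$ is skew $4$-constacyclic ($i.e.$, skew negacyclic, as $4=-1$ in $F_{5}$) over $F_{5}$.\n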
Then the dual code $\\mathcal{C}^{\\perp}=(1-u-v+uv)\\mathcal{C}_{1}^{\\perp}\\oplus(u-uv)\\mathcal{C}_{2}^{\\perp}\\oplus(v-uv)\\mathcal{C}_{3}^{\\perp}\\oplus uv\\mathcal{C}_{4}^{\\perp}$ is skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})^{-1}$-constacyclic code over $\\mathcal{R}$, where $\\mathcal{C}_{1}^{\\perp},\\mathcal{C}_{2}^{\\perp},\\mathcal{C}_{3}^{\\perp},\\mathcal{C}_{4}^{\\perp}$ are skew $\\alpha_{1}^{-1}$-constacyclic code, skew $(\\alpha_{1}+\\alpha_{2})^{-1}$-constacyclic code, skew $(\\alpha_{1}+\\alpha_{3})^{-1}$-constacyclic code and skew $(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})^{-1}$-constacyclic code over $F_{q}$ respectively, provided $n$ is a multiple of $k$(order of the automorphism).\n\\end{cor}\n\\begin{proof}\nAs $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$ is fixed by $\\theta_{t}$ and $n$ is a multiple of $k$, then by Lemma 3.1 of \\cite{jitman}, $\\mathcal{C}^{\\perp}$ is $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})^{-1}$-constacyclic over $\\mathcal{R}$. Since $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})=(1-u-v+uv)\\alpha_{1}+(u-uv)(\\alpha_{1}+\\alpha_{2})+(v-uv)(\\alpha_{1}+\\alpha_{3})+uv(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})$, it follows $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})^{-1}=(1-u-v+uv){\\alpha_{1}}^{-1}+(u-uv)(\\alpha_{1}+\\alpha_{2})^{-1}+(v-uv)(\\alpha_{1}+\\alpha_{3})^{-1}+uv(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})^{-1}.$ Hence by Theorem \\ref{de th1}, $\\mathcal{C}_{1}^{\\perp},\\mathcal{C}_{2}^{\\perp},\\mathcal{C}_{3}^{\\perp},\\mathcal{C}_{4}^{\\perp}$ are skew $\\alpha_{1}^{-1}$-constacyclic code, skew $(\\alpha_{1}+\\alpha_{2})^{-1}$-constacyclic code, skew $(\\alpha_{1}+\\alpha_{3})^{-1}$-constacyclic code and skew $(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})^{-1}$-constacyclic code over $F_{q}$ respectively.\n\\end{proof}\n\n\n\\begin{cor}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic code of length $n$ over $\\mathcal{R}$. Then $\\mathcal{C}$ is self-dual if and only if $\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}=1,-1,1-2u,1-2v,1-2uv,-1+2u,-1+2v,-1+2uv,1-2u+2uv,1-2v+2uv,-1+2u-2uv,-1+2v-2uv,1-2u-2v+2uv,-1+2u+2v-2uv,1-2u-2v+4uv,-1+2u+2v-4uv.$\n\\end{cor}\n\\begin{proof}\nIt can easily be proved that $\\mathcal{C}$ is self-dual if and only if $\\alpha_{1}=\\pm 1, \\alpha_{1}+\\alpha_{2}=\\pm 1, \\alpha_{1}+\\alpha_{2}+\\alpha_{3}=\\pm 1$ and $\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4}=\\pm 1.$\n\\end{proof}\n\n\\begin{thm}\\label{de th2}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic code over $\\mathcal{R}$. Then there exits a polynomial $f(x)$ in ${\\mathcal{R}[x;{\\theta}_{t}]}$ which is a right divisor of $x^{n}-(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$ and $\\mathcal{C}=\\langle f(x)\\rangle.$\n\\end{thm}\n\\begin{proof}\nLet $f_{i}(x)$ be generator of $\\mathcal{C}_{i}$ for $i=1,2,3,4$. Then $(1-u-v+uv)f_{1}(x),(u-uv)f_{2}(x),(v-uv)f_{3}(x)$ and $uvf_{4}(x)$ are generators of $\\mathcal{C}$. Take $f(x)=(1-u-v+uv)f_{1}(x)+(u-uv)f_{2}(x)+(v-uv)f_{3}(x)+uvf_{4}(x)$ and $\\mathcal{G}=\\langle f(x) \\rangle$. Then $\\mathcal{G}\\subseteq\\mathcal{C}$. 
On the other hand, $(1-u-v+uv)f(x)=(1-u-v+uv)f_{1}(x)\\in\\mathcal{G}, (u-uv)f(x)=(u-uv)f_{2}(x)\\in \\mathcal{G} , (v-uv)f(x)=(v-uv)f_{3}(x)\\in \\mathcal{G}$ and $uvf(x)=uvf_{4}(x)\\in \\mathcal{G}$. This implies that $\\mathcal{C}\\subseteq \\mathcal{G}$ and hence $\\mathcal{C}=\\mathcal{G}=\\langle f(x) \\rangle.$\\\\\nSince $f_{1}(x), f_{2}(x), f_{3}(x)$ and $f_{4}(x)$ are right divisors of $x^{n}-\\alpha_{1},x^{n}-(\\alpha_{1}+\\alpha_{2}), x^{n}-(\\alpha_{1}+\\alpha_{3})$, and $x^{n}-(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})$ respectively, so there exist $h_{1}(x),h_{2}(x),h_{3}(x),h_{4}(x)$ such that $x^{n}-\\alpha_{1}=h_{1}(x)\\ast f_{1}(x),x^{n}-(\\alpha_{1}+\\alpha_{2})=h_{2}(x)\\ast f_{2}(x),x^{n}-(\\alpha_{1}+\\alpha_{3})=h_{3}(x)\\ast f_{3}(x)$ and $x^{n}-(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})=h_{4}(x)\\ast f_{4}(x).$ Also, $[(1-u-v+uv)h_{1}+(u-uv)h_{2}+(v-uv)h_{3}+uvh_{4}]\\ast f(x)=(1-u-v+uv)h_{1}\\ast f_{1}+(u-uv)h_{2}\\ast f_{2}+(v-uv)h_{3}\\ast f_{3}+uvh_{4}\\ast f_{4}=x^{n}-(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$. This proves that $f(x)$ is a right divisor of $x^{n}-(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$.\n\\end{proof}\n\n\n\\begin{cor}\nEach left submodule of $\\mathcal{R}[x;\\theta_{t}]\/\\langle x^{n}-(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}) \\rangle$ is generated by single element where $\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}$ is a unit in $\\mathcal{R}$.\n\\end{cor}\n\n\\begin{cor}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic code of length $n$ over $\\mathcal{R}$ and $f_{1}(x), f_{2}(x), f_{3}(x)$ and $f_{4}(x)$ be generators of $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ in $F_{q}[x;\\theta_{t}]$ respectively such that $x^{n}-\\alpha_{1}=h_{1}(x)\\ast f_{1}(x),x^{n}-(\\alpha_{1}+\\alpha_{2})=h_{2}(x)\\ast f_{2}(x),x^{n}-(\\alpha_{1}+\\alpha_{3})=h_{3}(x)\\ast f_{3}(x)$ and $x^{n}-(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4})=h_{4}(x)\\ast f_{4}(x).$ Then $\\mathcal{C}^{\\perp}=\\langle (1-u-v+uv)\\widehat{h}_{1}(x)+(u-uv)\\widehat{h}_{2}(x)+(v-uv)\\widehat{h}_{3}(x)+uv\\widehat{h}_{4}(x)\\rangle$ and $\\mid \\mathcal{C}^{\\perp}\\mid =q^{\\sum_{i=1}^{4}f_{i}(x)}$.\n\\end{cor}\n\\begin{proof}\n Same as Corollary \\ref{cor 1}.\n\\end{proof}\n\n\n\n\\begin{thm}\nLet $\\mathcal{C}=(1-u-v+uv)\\mathcal{C}_{1}\\oplus(u-uv)\\mathcal{C}_{2}\\oplus(v-uv)\\mathcal{C}_{3}\\oplus uv\\mathcal{C}_{4}$ be a skew $(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4})$-constacyclic code over $\\mathcal{R}$. Let $gcd(n,k)=1$ and $gcd(n,q)=1$. 
Then there exists an idempotent generator $e(x)=(1-u-v+uv)e_{1}(x)+(u-uv)e_{2}(x)+(v-uv)e_{3}(x)+uve_{4}(x)$ in $\\mathcal{R}[x;\\theta_{t}]\/\\langle x^{n}-(\\alpha_{1}+u\\alpha_{2}+v\\alpha_{3}+uv\\alpha_{4}) \\rangle$ such that $\\mathcal{C}=\\langle e(x) \\rangle$, where $e_{1}(x) \\in F_{q}[x; \\theta_{t}]\/ \\langle x^{n}- \\alpha_{1} \\rangle, e_{2}(x)\\in F_{q}[x;\\theta_{t}]\/ \\langle x^{n}-(\\alpha_{1}+\\alpha_{2}) \\rangle,e_{3}(x)\\in F_{q}[x;\\theta_{t}]\/\\langle x^{n}-(\\alpha_{1}+\\alpha_{3}) \\rangle, e_{4}(x)\\in F_{q}[x;\\theta_{t}]\/\\langle x^{n}-(\\alpha_{1}+\\alpha_{2}+\\alpha_{3}+\\alpha_{4}) \\rangle$ are idempotent generators of $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ respectively.\n\\end{thm}\n\\begin{proof}\nBy using same argument of the proof of Theorem 16 of \\cite{siap11}, we conclude that there exist idempotent generators $e_{1}(x),e_{2}(x),e_{3}(x)$ and $e_{4}(x)$ of $\\mathcal{C}_{1},\\mathcal{C}_{2},\\mathcal{C}_{3}$ and $\\mathcal{C}_{4}$ respectively in $F_{q}[x; \\theta_{t}]$. Then by Theorem \\ref{de th2}, $e(x)=(1-u-v+uv)e_{1}(x)+(u-uv)e_{2}(x)+(v-uv)e_{3}(x)+uve_{4}(x)$ is an idempotent generator of $\\mathcal{C}.$\n\\end{proof}\n\n\\begin{exam}\nConsider the field $F_{49}=F_{7}[\\alpha];$ where ${\\alpha}^{2}-\\alpha+3=0$. Take $n=4$ and Frobenius automorphism $\\theta_{t}:F_{49}\\rightarrow F_{49}$ defined by $\\theta_{t}(\\alpha)={\\alpha}^{7}.$ Now, $x^{4}-1=(x+1)(x-1)(x^{2}+1); x^{4}+1=(x^{2}+3x+1)(x^{2}+4x+1)$. Take $f_{1}(x)=f_{2}(x)=f_{3}(x)=(x+1)$ and $f_{4}(x)=x^{2}+4x+1$. Then $\\mathcal{C}=\\langle (1-u-v+uv)f_{1}(x)+(u-uv)f_{2}(x)+(v-uv)f_{3}(x)+uvf_{4}(x) \\rangle=\\langle(1-uv)(x+1)+uv(x^{2}+4x+1)\\rangle$ is a self-dual skew $(1-2uv)$-constacyclic code of length $4$ over $\\mathcal{R}=F_{49}+uF_{49}+vF_{49}+uvF_{49}$, where $u^{2}=u,v^{2}=v,uv=vu.$ Also, $\\mid\\langle\\theta_{t}\\rangle\\mid = k = 2$ and $gcd(n, k) = 2$. So, by Theorem \\ref{ret 2}, $C$ is a $(1-2uv)$-quasi-twisted code of length $4$ over $\\mathcal{R}$ of index 2.\n\\end{exam}\n\n\\begin{exam}\nConsider the field $F_{9} = F_{3}[2\\alpha+1];$ where ${\\alpha}^{2}+1=0$. Take Frobenius automorphism $\\theta_{t}:F_{9}\\rightarrow F_{9}$ defined by $\\theta_{t}(\\alpha)={\\alpha}^{3}$ and $\\mathcal{R}=F_{9}+uF_{9}+vF_{9}+uvF_{9}$, where $u^{2}=u,v^{2}=v,uv=vu.$ The polynomial $f(x) = x^{6}+(1-2v)x^{5}+x^{4}+(1-2v)x^{3}+x^{2}+(1-2v)x+1$ is a right divisor of $x^{7}-(1-2v-2uv)$ in $\\mathcal{R}[\\theta_{t};x]$ and also $gcd(n, k) = gcd(7, 2) = 1$. Therefore, by Theorem \\ref{ret 1}, $C = \\langle f(x) \\rangle$ is a $(1-2v-2uv)$-constacyclic code of length 7 over $\\mathcal{R}$.\n\\end{exam}\n\n\\section{Conclusion}\\label{sec9}\nIn this paper, we considered skew cyclic and skew constacyclic codes over $\\mathcal{R}=F_{p^{m}}+uF_{p^{m}}+vF_{p^{m}}+uvF_{p^{m}}$, where $p$ is an odd prime and $u^{2}=u,v^{2}=v,uv=vu.$ It is shown that skew cyclic codes and skew constacyclic codes over $\\mathcal{R}$ are principally generated. Also, we have given the necessary and sufficient conditions for codes being self-dual over $\\mathcal{R}$.\n\n\\section*{Acknowledgement}\nThe authors are thankful to University Grant Commission (UGC), Government of India for financial support under Ref. No. 20\/12\/2015(ii)EU-V dated 31\/08\/2016 and Indian Institute of Technology Patna for providing the research facilities. 
The authors would like to thank the anonymous referees for their useful comments and suggestions.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}}\n\\IEEEPARstart{G}{raph} data, such as citation networks, social networks, and knowledge networks, are attracting much attention in real applications. Graphs can depict not only node features but their relationships.\nWith the development of deep learning, \ngraph neural networks (GNNs)~\\cite{wu2019comprehensive} are currently the most popular paradigm to learn and represent nodes in a graph. \nGNN encodes the patterns from node features, aggregates the representations of neighbors based on their edge connections, and generates effective embeddings for downstream tasks, such as node classification, link prediction, and community detection. \nTypical GNN models include graph convolution-based semi-supervised learning~\\cite{kipf2016semi}, and\ngenerating relational features by incorporating input features with a columnar network~\\cite{pham2017column}. In addition, graph attention is developed to estimate the contribution of incident edges~\\cite{velivckovic2017graph}. The theory of information aggregation in GNNs has also discussed to enhance the representation ability~\\cite{xu2018powerful}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_Introduction_PPR.pdf}\n \\caption{An elaboration of privacy-protected graph perturbation. \\underline{Left}: we aim to perturb the given graph by removing an edge and adding a new one such that two requirements are satisfied: (1) The prediction confidence (y-axis) on private labels (i.e., square and circle) is lowered down, i.e., decreasing the risk of leaking privacy. (2) The prediction confidence on targeted labels (i.e., light green and yellow) is maintained, i.e., keeping the data utility. \\underline{Right}: The proposed NetFense model can achieve such two requirements, compared to clean and perturbed data generated by Netteck~\\cite{zugner2018adversarial}.}\n \\label{fig:Intro}\n \\vspace{-2.5em}\n\\end{figure}\n\nPowerful GNNs make us concern about the disclosure of private information if the adversary considers private labels (e.g., gender and age) as the label. For example, \nwhile online social networks, such as Facebook, Twitter, and LinkedIn \nallow users to do privacy controls, partial data still have leakage crisis if users do not actively enable the privacy settings or agree with the access for external apps. As the adversary has partial data, GNNs can be trained to infer and acquire private information. For example, GNNs can be used to detect the visited location via user-generated texts on Twitter~\\cite{rahimi2018semi}, and to predict age and gender using users' e-commerce records~\\cite{chen2019semi}. \nDifferential privacy (DP)~\\cite{dwork2006calibrating} is a typical approach to add noise into an algorithm so that the risk of leaking private data can be lowered down. \nDPNE~\\cite{xu2018dpne} and PPGD~\\cite{ZhangDifferential}\ndevise shallow DP-based embedding models to decrease performances of link prediction and node classification.\nHowever, since the original graph data cannot be influenced, such two models still lead to a high potential of risk exposure of private information.\nWe use Fig.~\\ref{fig:Intro} (left) to elaborate the idea. Through a well-devised defense model, the graph is perturbed by removing one edge and adding another one. 
We expect that the new graph misleads the inference on private labels by decreasing the prediction confidence while keeping the data utility on target labels by maintaining the prediction confidence. \n\n\\ct{In fact, an adversary can train a model based on the public-available data of some users' profiles in online social platforms, such as Twitter and Instagram. Not all of the users seriously care about their privacy. Hence, personal attributes and connections can be exposed due to two factors. First, some users may be not aware of privacy leaking when they choose to set some fields public. Second, some users do not care whether their private information is obtained by other people but are eager to promote themselves and maximize the visibility of themselves by proving full personal data. Data collected from such kinds of users allow the attackers to train the attack model. Therefore, we aim to find and fix the weaker parts from the data that can cause the risk of privacy exposure inferred by the attack model, and also to maintain the data utility. Then, the attackers would see the privacy-preserved data and cannot disclose the private labels of users by using the attack model.}\n\nNettack~\\cite{zugner2018adversarial} is the most relevant study.\nA gradient-based attack model is developed to perturb node features and graph structure so that the performance of a task (e.g., node classification) is significantly reduced. However, Nettack cannot work for privacy protection in two aspects. First, when the targeted private label is binary, the adversary can reverse the misclassified labels to obtain the true value if she knows there is some protection. Second, while Nettack can be used to defend against privacy attacks by decreasing the performance, it does not guarantee the utility of the perturbed data on inferring non-private labels. As shown in Fig.~\\ref{fig:Intro} (right) conducted on real graph data, Nettack leads to misclassification on the private label, but fails to maintain the prediction confidence on the target label. Note that the y-axis is the classification margin, indicating the difference of prediction probabilities between the ground truth and the $2$-nd probable label. It can be also regarded as the prediction confidence of a model. Negative values mean higher potential to be identified as the $2$-nd probable label.\n\nIn this paper, we propose a novel problem of \\textit{adversarial defense against privacy attack} on graph data. Given a graph, in which each node is associated with a feature vector, a targeted label (e.g., topic or category), and a private label (e.g., gender or age), our goal is to perturb the graph structure by adding and removing edges such that the privacy is protected and the data utility is maintained at the same time. To be specific, we aim at lowering down the prediction confidence on the private label to prevent privacy from being inferred by the adversary's GNNs, and simultaneously maintaining the prediction confidence on the targeted label to keep the data utility under GNNs when the data is released. This task can be treated as a kind of \\textit{privacy defense}, i.e., defending the model attack that performs learning to infer private labels. \n\nWe create Table~\\ref{tab:att} to highlight the key differences between model attack (i.e., Nettack~\\cite{zugner2018adversarial}) and our proposed privacy defense (i.e., NetFense) on graph data. 
First, since the model attack is performed by the adversary and the privacy defense is conducted by the data owner, their scope of accessible data is different. Second, as mentioned above, our problem is to tackle two tasks at the same time (i.e., fool the model on private labels and keep data utility), but model attack deals with only fooling the model on targeted labels. Third, in the context of privacy protection, decreasing the prediction accuracy on private labels cannot prevent them from being inferred if the private label is binary. The adversary can reverse the prediction results if she knows the existence of defense mechanism. Therefore, we reduce the prediction confidence as close as possible to $0.5$. Fourth, while model attack tends to make one or fewer nodes misclassified on targeted labels, privacy defense is expected to shield the private labels of more nodes from being accurately inferred. Last, both model attack and privacy defense need to ensure the perturbation on graph data is unnoticeable. Privacy defense further requires to achieve model unnoticeability, which is maintaining the performance of target label prediction using the perturbed graph under the same model (i.e., equivalent to maintain data utility). It is quite challenging to have a defense model that meets all of these requirements at the same time.\n\n\\begin{table}\n\\caption{Summary of differences between model attack and privacy defense on graph data in terms of who is doing the attack\/defense (WHO), accessible data (\\textbf{AD}), strategy (STG), perturbation objective (\\textbf{PO}), non-noticeable perturbation (\\textbf{NP}), number of tackled task (\\textbf{\\#Task}), and number of concerned targets (\\textbf{\\#Trg}). ``Pred-Acc'' and ``Pred-Confi'' are prediction accuracy and confidence. ``$\\Join$'' is maintenance.}\n\\vspace{-0.5em}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{c|l|l}\n\\hline\n & \\textbf{Model Attack} (e.g., Nettack~\\cite{zugner2018adversarial}) & \\textbf{Privacy Defense} (i.e., NetFense) \\\\ \\hline\nWHO & the adversary & the data owner \\\\\\hdashline\nAD & partial & all \\\\\\hdashline\nSTG & fool model on target labels & (1) fool model on private labels \\\\\n & & (2) keep utility on target labels \\\\\\hdashline\nPO & Pred-Acc@target: $\\downarrow$ & (1) Pred-Confi@privacy: $\\leadsto 0.5$ \\\\\n & & (2) Pred-Confi@target: $\\Join$ \\\\\\hdashline\n\\#Task & $1$ & $\\geq2$ \\\\\\hdashline\n\\#Trg & fewer & more \\\\\\hdashline\nNP & data & data$+$model \\\\ \\hline \n\\end{tabular}\n}\n\\label{tab:att}\n\\vspace{-1.5em}\n\\end{table}\n\nTo tackle the proposed privacy defense problem, we propose an adversarial method, NetFense\\footnote{The code of NetFense can be accessed via the following Github link:\n\\url{https:\/\/github.com\/ICHproject\/NetFense\/}}, based on the adversarial model attack. NetFense consists of three phases, including \\textit{candidate selection}, \\textit{influence with GNNs}, and \\textit{combinatorial optimization}. The first phase ensures the perturbed graph is unnoticeable while the second and third phases ensure both privacy preservation and data utility (i.e., model unnoticeable) of the perturbed graph. \nWe summarize the contribution of this paper as follows.\n\\begin{itemize}[leftmargin=*]\n\\item We propose a novel problem of adversarial defense against privacy attacks on graph data based on graph neural networks (GNNs). 
\nThe goal is to generate a perturbed graph that can simultaneously keep graph data unnoticeability, maintain the prediction confidence of targeted label classification (i.e., model unnoticeability), and reduce the prediction confidence of private label classification (i.e., privacy protection).\n\n\\item We devise a novel adversarial perturbation-based defense framework, NetFense, to achieve the three-fold goal. We justify that perturbing graph structure can bring more influence on GNNs than perturbing node features, and prove edge perturbations can strike a balance between model unnoticeability and privacy protection.\n\n\\item We conduct experiments on single- and multi-target perturbations using three real datasets, and the results exhibit the promising performance of NetFense. Advanced empirical studies further give crucial insights, including our loss function is allowed to flexibly control the effect of model unnoticeability and privacy protection, the private labels of high-degree nodes can be better protected, the proposed PPR-based data unnoticeability strategy in NetFense can better preserve the local graph structure of each node, and perturbing graph structure is more effective to prevent private labels from being confidentially inferred than perturbing node features.\n\\end{itemize}\n\n\\textit{Paper Organization.} We review relevant studies in Sec.~\\ref{sec-related}, and present problem statement in Sec.~\\ref{sec-prob}. Sec.~\\ref{sec-method} provides the technical details of NetFense, and Sec.~\\ref{sec-exp} exhibits the experimental results. Sec.~\\ref{sec-concl} concludes this work.\n\\vspace{-0.5em}\n\n\n\\section{Related Work}\n\\label{sec-related}\n\\vspace{-0.25em}\n\\textbf{Privacy-preserving Learning.} For non-graph data, Wang et al.~\\cite{wang2018not} present a privacy-preserving deep learning model via differential privacy (DP). Beigi et al. ~\\cite{BeigiRecommendation} devise a privacy-protected recommender system via bi-optimization. For graph data, existing studies develop shallow privacy-preserving models to achieve DP, such as DPNE~\\cite{xu2018dpne} and PPGD~\\cite{ZhangDifferential}. Besides, Zheleva et al. ~\\cite{zheleva2009join}, Liu et al.~\\cite{liu2016linkmirage} and Cai et al.~\\cite{cai2016collective} protect the link privacy by obfuscating the social graph structure through neighborhood search and community detection. \nWe compare privacy-preserving studies based on four aspects in Table~\\ref{tab:PPprop} (top). (1) \\underline{Data}: representing the input data type, including ``Continuous'' (e.g., images and user profiles) ``Graph.'' \n(2) \\underline{Approach}: the way to ensure privacy protection in the generated data, including differential privacy (``DP''), establishing some privacy protection criteria for ``Optimization'', and utilizing data ``Statistics'' (e.g., degree or centrality) to change the graph structure so that the privacy is free from leaking. \n(3) \\underline{Model}: a machine learning or deep learning model that is adopted by the adversary, including shallow models (``SM'') such as KNN, Bayesian inference and SVM, deep neural network (``DNN''), and graph neural network (``GNN''). \n(4) \\underline{Goal}: the goals of the privacy-preserving studies are the same as ``Protect'' the private information. 
\nAlthough existing studies achieve some success, their goals are different from ours that relies on GNNs to simultaneously let the adversary misclassify private labels and maintain the utility of the perturbed graphs on targeted labels.\n\n\\begin{table}\n\\caption{Summary and comparison of related studies.}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{l|c|l|l|l|l}\n\\hline\n & &Data &Approach & Model&Goal \\\\ \\hline\n\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{Privacy}}}&~\\cite{wang2018not} & Continuous & DP & DNN&Protect \\\\ \\cdashline{2-6}\n&~\\cite{BeigiRecommendation} & Continuous & Optimization & DNN&Protect \\\\\\cdashline{2-6}\n&~\\cite{xu2018dpne, ZhangDifferential} & Graph & DP & SM&Protect \\\\\\cdashline{2-6}\n&~\\cite{zheleva2009join, liu2016linkmirage, cai2016collective} & Graph & Statistics & SM&Protect \\\\\\hline\n{\/}&\\textbf{NetFense} & {Graph} & {Statistics+Gradient} & {GNN}&{Protect}\\\\\\hline \n\\parbox[t]{2mm}{\\multirow{7}{*}{\\rotatebox[origin=c]{90}{Adversarial}}}&~\\cite{goodfellow2014explaining} & Continuous & Gradient & DNN&Attack \\\\\\cdashline{2-6\n&~\\cite{wang2019attacking} & Graph & Optimization & DNN&Attack \\\\\\cdashline{2-6}\n&~\\cite{dai2018adversarial, xu2019topology, zugner2019adversarial} & Graph & Optimization & GNN&Attack \\\\\\cdashline{2-6\n&~\\cite{zugner2018adversarial} & Graph & Gradient & GNN&Attack \\\\\\cdashline{2-6} \n&~\\cite{wu2019adversarial} & Graph & Gradient & GNN&Attack \\\\\\cdashline{2-6} \n&~\\cite{bhagoji2018enhancing, das2018shield} & Continuous & Data Transformation & DNN&Defense \\\\\\cdashline{2-6} \n&~\\cite{Entezari2020AllYN} & Graph & Data Transformation & GNN&Defense \\\\\\cdashline{2-6} \n&~\\cite{tang2020transferring} & Graph & Optimization & GNN&Defense \\\\\\cdashline{2-6} \n&~\\cite{zhu2019robust} & Graph & Optimization & GNN&Defense \\\\\\cdashline{2-6} \n&~\\cite{wu2019adversarial} & Graph & Data Transformation & GNN&Defense \\\\\\hline \n\\end{tabular}\n}\n\\label{tab:PPprop}\n\\vspace{-2.0em}\n\\end{table}\n\n\n\\textbf{Model Attack on Graphs.} \\ct{\nThe adversarial learning~\\cite{chen2020survey} benefits the attacks on graphs from examining the robustness of the model via simulating the competitive model or data. On the one hand, the adversarial learning techniques can improve the classification performance like training with adversarial examples~\\cite{pan2019learning}, which leads the model to learn the embeddings more precisely and robustly. On the other hand, the adversarial attack can fool the model to damage the prediction outcomes by discovering which parts in the graph structure are more sensitive to mislead the results.\n} \nThe typical model attack is FGSM~\\cite{goodfellow2014explaining}, which is a gradient-based approach that utilizes small noise to create adversarial examples to fool the model. However, perturbing graph data is more challenging for the gradient-based method due to its discrete structure. \nWang et al.~\\cite{wang2019attacking} manipulate the graph structure by solving an optimization problem, but their method cannot deal with graphs with node features. \nXu et al.~\\cite{xu2019topology} also consider the optimization-based perturbation for attack and robustness problems, which only discuss un-specific targets. \nIn addition, Dai et al.~\\cite{dai2018adversarial} perturb graph data through Q-learning under GNN models. 
Nettack~\\cite{zugner2018adversarial} is the first to perform adversarial attacks on attributed graphs and consider the unnoticebility of graph perturbations for GNNs. \nThen Z{\\\"u}gner et al.~\\cite{zugner2019adversarial} incorporate meta learning with a bi-level optimization for GNN-based model attack on graphs. \n\\ct{\nOn the other hand, the methods against model attack also gain much attention, including the techniques proposed by Bhagoji et al.~\\cite{bhagoji2018enhancing} and Das et al.~\\cite{das2018shield}, \nwhich focus on image data. \nFor graph data, Entezari et al.~\\cite{Entezari2020AllYN} discuss the low singular values of adjacency matrix can be affected by the attackers, and reconstruct the graph via low-rank approximation. \nTang et al.~\\cite{tang2020transferring} adopt transfer learning to incorporate the classifiers trained from perturbed graphs to build a more robust GNN against poisoning attacks. \nZhu et al.~\\cite{zhu2019robust} propose to leverage Gaussian distributions to absorb the effects of adversarial changes so that the GNN models can be trained more robustly.\nFurthermore, Wu et al.~\\cite{wu2019adversarial} revise the gradient-based attack to adaptively train robust GCN with binary node features, and develop a defense algorithm against the attack by making the GNN aggregation weights trainable to alleviate the influence of misleading edges. \n}\n\nWe again use Table~\\ref{tab:PPprop} (bottom) to compare the studies of adversarial model attack on graphs. The meaning of some column names here are different from those of privacy-preserving learning.\n\\underline{Approach}: the adversarial method to fool or enhance data robustness, including ``Gradient''-based and ``Optimization''-based approaches to perturb graph data,\nand ``data transformation'' to modify the graph structure (e.g., extracting the critical part via SVD or condensing the image to remove the noise).\n\\underline{Goal}: the adversary can ``Attack'' the model, and the defender can ``Defend'' against model attack. \nThese studies perturb graph data for model attack, but have not connections with privacy protection on graphs. \nOur work is the first attempt to propose adversarial defense against privacy attack on graph data, in which the predictability of private labels is destroyed and the utility of perturbed graphs is maintained.\n\n\n\\section{Preliminaries}\n\\label{sec-prob}\n\n\\subsection{Definitions}\nLet $G = (V, A, X)$ denote an attributed graph, where $V = \\{ v_1, v_2, ..., v_N \\}$ is the set of nodes, $A \\in \\{0, 1\\}^{N\\times N}$ is the adjacency matrix, and $X \\in \\{0, 1\\}^{N\\times d}$ is the feature matrix. The typical \\textit{node classification} task assumes that nodes are annotated with known or unknown target labels, given by $C = \\{c_1,c_2, ..., c_N\\}\\in \\{0, 1\\}^{N}$, and is to accurately predict the unknown labels of nodes by training a model (e.g., GNNs) based on $G$. \n\nThe predictions of nodes' \\textit{target labels} (e.g., topics and interests) and \\textit{private labels} (e.g., age and gender) are treated as two node classification tasks: Target Label Classification (TLC) and Private Label Classification (PLC). Note that to simplify the problem, here we consider binary classifications for TLC and PLC. 
Different from the conventional adversary who aims at accurately performing PLC using $G$, in our scenario, the data owners (e.g., companies) are allowed to change and publish the data on their platforms by perturbing $G$ and generating $G'$, whose difference from $G$ is \\textit{unnoticeable}, such that: (1) the adversary's PLC leads to worse performance, and (2) normal users' TLC performance based on $G'$ is as close as possible to that based on $G$. In this work, we play the role of the data owner (i.e., company).\n\n\\begin{definition}\nGiven graph $G = (V, A, X)$, the graph \\textbf{perturbation} is the change of the graph structure $A$ or node features $X$ with budget $b$. Let the graph after perturbation be $G' = (V, A', X')$. The number of perturbations is $N_p = \\sum|A-A'|\/2+\\sum|X-X'|$. We require $N_p \\leq b$.\n\\label{def:perturbation}\n\\end{definition}\n\nA perturbation is an action of changing a value in either $A$ or $X$ from $0$ to $1$ or from $1$ to $0$. For the perturbation on the adjacency matrix $A$, each change represents adding or removing an edge $(u,v)$ (e.g., updating one friendship). We assume that the change is symmetric since the graph is undirected.\nFor the perturbation on the feature matrix $X$, every change implies the adjustment of binary feature $x_i$ of a node (e.g., updating an attribute in the user profile).\nMoreover, we limit the number of perturbations $N_p$ using a budget $b$ to have unnoticeable perturbations.\n\n\\begin{definition}\n\\textbf{Unnoticeable} perturbation indicates a slight difference between original graph $G$ and perturbed graph $G'$, and involves two aspects: \\textbf{data} and \\textbf{model}. The perturbation is \\textbf{data-unnoticeable} if $|P_g(G)-P_g(G')|<\\delta_g$ for a small scalar $\\delta_g$ with respect to a given statistic $P_g$. Let $f_C: G \\rightarrow \\mathbb{R}^N$ is the trained GNN model that outputs the predicted TLC scores for all nodes. The perturbation is \\textbf{model-unnoticeable} if $\\sum_v |(f_C(G)-f_C(G'))|<\\delta_c$ for a small scalar $\\delta_c$.\n\\label{def:unnoticeable}\n\\end{definition}\n\nThe choice of graph statistic $P_g$ can be either degree distribution or covariance of features. GNN is used as the prediction model $f_C$ to measure the loss of information after perturbation. Note that model unnoticeability is equivalent to maintain the data utility for the normal usage of users. We formally elaborate the privacy-preserving graph that we perturb towards.\n\n\\begin{definition}\nLet $P = \\{p_1,p_2, ..., p_N\\} \\in \\{0, 1\\}^{N}$ be known\/unknown private labels for all nodes in $G$. Also let $f_P: G \\rightarrow \\mathbb{R}^N$ be the trained GNN model that outputs the predicted PLC scores for all nodes. The perturbing graph $G' = (V, A', X')$ is \\textbf{privacy-preserved} if $\\sum_{v\\in V} |\\rho(f_P(G))-0.5|<\\delta_p$ for a small scalar $\\delta_p$, where $\\rho: \\mathbb{R} \\rightarrow [0,1]$ is a scaling function.\n\\label{def:pp}\n\\end{definition}\n\nThe value of $\\rho(f_P(G))$ is considered as the prediction confidence. Since the prediction is binary and the goal is to prevent private labels from being inferred, we want $\\rho(f_P(G))$ to be as close to $0.5$ as possible. Such a requirement indicates that\nthe model $f_P(G)$ cannot distinguish which of the binary private labels is true. 
That is, it would become more difficult for the adversary to accurately obtain user's personal information.\nLast, we combine all the above-defined to describe the proposed privacy-protected graph perturbation as follows. \n\n\\begin{definition}\nGiven an attributed graph $G$ with target labels $C$ and private labels $P$, GNN models $f_C$ and $f_P$ can be trained for the prediction of $C$ and $P$, respectively. A perturbed graph $G'$ is a \\textbf{privacy-preserved graph} from $G$ with budget $b$ if $G'$ is generated through $N_p$ \\textbf{unnoticeable perturbations}.\n\\label{def:all}\n\\end{definition}\n\n\\ct{\nNote that the actions of adding or removing edges need to rely on user will, and cannot be controlled by the company (i.e., the service provider). What the company can do is to provide ``suggestions'' for users, and meanwhile inform users the risk of privacy exposure by the attackers. There are two practical ways that the company can do to protect user privacy. First, when a user attempts to create a connection with another, the platform can display the privacy-leaking confidence for her reference, instead of enforcing her not to make friends. Some users who concern much about the privacy attacking can set a higher accessing level for their profile visibility. Second, the company can also provide users an option: automatically ``hide'' highly privacy-exposed connections from user profiles. Those who are afraid of being attacked can enable this option for privacy protection.\n}\n\n\\subsection{Graph Neural Networks}\nFor the GNN models of $f_C$ and $f_P$ for semi-supervised node classification, we adopt Graph Convolutional Networks (GCN)~\\cite{kipf2016semi}. The GCN information propagation to generate node embedding matrix $H$ from $l$ to $l+1$ layer is given by: \n$\n H^{(l+1)} = \\sigma ( \\tilde{D}^{(-\\frac{1}{2})} \\tilde{A} \\tilde{D}^{(-\\frac{1}{2})} H^{(l)} W^{(l)} ), \n$\nwhere $\\tilde{A} = A + I$ is the addition of adjacency and identity matrices.\n$\\tilde{D}$ is the recomputed degree matrix, $\\tilde{D}_{ii} = \\sum_{j} \\tilde{A}_{ij}$. \n$H^{(l)}$ is the input representation of layer-$l$, and \n$H^{(0)} = X$ (using node features as input). $\\sigma$ is the activation function and $W^{(l)}$ is the trainable weight. \n\nTo have an effective prediction and avoid over-smoothing, we create GCNs with $2$ layers for node classification~\\cite{wu2019comprehensive}. The output prediction probabilities $Z$ are:\n\\begin{equation}\n Z = f(G ; W^{(1)}, W^{(2)} ) = softmax( \\hat{A} \\sigma ( \\hat{A} X W^{(1)} ) W^{(2)} ),\n\\label{eq:GCN2}\n\\end{equation}%\nwhere $ \\hat{A} = \\tilde{D}^{(-\\frac{1}{2})} \\tilde{A} \\tilde{D}^{(-\\frac{1}{2})}$ is the normalized adjacency matrix with self-loop.\nThe trainable weights $W^{(1)}, W^{(2)}$ are updated by cross-entropy loss $L$, given by:\n$\n Loss = -\\sum_{v \\in V_{train}} \\ln Z_{v, c_v},\n$\nwhere $c_v$ is the given label of $v$ in the training data $V_{train}$, and $Z_{v, c_v}$ is the probability of classifying node $v$ to label $c$. \n\n\\section{The Proposed NetFense Framework}\n\\label{sec-method}\nOur NetFense is an adversarial method that fits the transductive learning setting for GNNs. 
NetFense consists of three components, (a) candidate selection, (b) influence with GNNs models, and (c) combinatorial optimization.\n\n\\subsection{Candidate Edge Selection}\n\\label{sec-cand}\nThis component aims at ensuring data-unnoticeable perturbation.\nRecall that we limit the number of perturbations $N_P$ using budget $b$ in Def.~\\ref{def:perturbation}. \nWe think each edge provides different contributions in achieving data-unnoticeable perturbation. \nA proper selection of edges to be perturbed can lead to better data-unnoticeability under the given budget.\nHence, we devise a perturbed candidate selection method, which selects candidate edges by examining how each edge contributes to data unnoticeability. Note that we perturb only edges, i.e., graph structure, because we have discussed in Sec.~\\ref{sec-perturbcom} that the action of changing user attributes is more obvious and sensitive in real social networks. The number of perturbations accordingly becomes $N_P = \\sum|A-A'|\/2$.\n\nWe measure the degree of data unnoticeability using\nPersonalized PageRank (PPR) \\cite{page1999pagerank} since it exhibits high correlation to the capability of GNN models~\\cite{klicpera2018predict, bojchevski2019pagerank}.\nOur idea is if the PPR scores of nodes before and after perturbations can be maintained, the perturbed graph is data-unnoticeable. In other words, PPR is treated as the graph statistic $P_g$ mentioned in Def.~\\ref{def:unnoticeable}.\nThe PPR values on all nodes can be computed via ${{\\pi}}_{r}^{(n)}={(1-\\alpha){H}{\\pi}}_{r}^{(n-1)} + \\alpha {e}_r$, where ${{\\pi}}_{r}^{(n)}$ is the probability vector from each starting node $r$ to all of the other nodes at $n$-th iteration, ${H} = {D}^{-1}{A}$ is the normalized adjacency matrix derived from ${A}$, ${D}$ is the degree matrix, $\\alpha \\in [0,1]$ is the restart probability, and ${e}_r$ is the one-hot encoding indicating the start node. \nThe closed form of PPR matrix ${\\Pi}$ is given by: ${\\Pi} = \\alpha (I-(1-\\alpha) H)^{-1}$, where the entry ${\\Pi}_{ij}$ denotes the stationary probability of arriving at node $j$ from node $i$. \n\nAccording to Def.~\\ref{def:unnoticeable}, we set the $P_g={\\Pi}$ and investigate the influence of edge perturbation $(u, v)$ on graph $G$. First, we simplify the symmetric effect of the undirected edge as one direction. We define the perturbation $(u, v)$ \nas an one-hot matrix $B$, where $B_{ij} =0$ if $(i, j) \\neq (u, v)$, $B_{ij} =1- 2\\cdot\\mathbb{I}((u, v)\\in G)$ if $(i, j) = (u, v)$ and the indicating function $\\mathbb{I}((u, v)\\in G) = 1$ if the edge $(i, j)$ is on graph $G$; otherwise, $\\mathbb{I}((u, v)\\in G) = 0$ if the edge $(i, j)$ is not on graph $G$. That is, $\\mathbb{I}((u, v)\\in G) = 1$ means an edge deletion $A'_{uv} = A_{uv}+(1- 2\\cdot\\mathbb{I}((u, v)\\in G)) = A_{uv} -1$, and we would add a new edge to adjacency (i.e., $A'_{uv} = A_{uv} +1$) if $\\mathbb{I}((u, v)\\in G)= 0$.\nThen, the graph after perturbation via $(u, v)$ can be denoted as a new adjacency matrix $A' = A+B$. We can derive the PPR score after perturbation via Lemma~\\ref{lamma_inev} ~\\cite{miller1981inverse}.\n\\begin{lemma}\n\\label{lamma_inev}\nLet $M_1$ be an inevitable matrix, $M_2$ be a matrix with rank $1$ and $g = trace(M_2(M_1)^{-1})$. 
If $M_1 + M_2$ is inevitable, $g\\neq -1$ and $(M_1+M_2)^{-1} = M_1^{-1} - (M_1^{-1}M_2 M_1^{-1})\/(1+g)$.\n\\end{lemma}\nTo derive the new PPR score after the perturbation, we let $M_1 = I-(1-\\alpha)H$ and $M_2 = -(1-\\alpha)H'$, where $H' = D^{-1}B$, from which we suppose the degree shift is small, i.e., $D'\\approx D$, for the degree matrix $D'$ of $A'$. The change of one entry of $A$ would affect only $D_{uu}$, i.e., $D_{uu} \\pm 1$. Hence, we can \nminimize the difference between $D$ and $D'$ based on the given budget. Note that the normalized matrix $H$ with non-negative and bounded entries satisfies the condition of Neumann series and is invertible. Besides, we prevent the graph from being disconnected by not considering nodes with degree $1$ for perturbation, to ensure $H'$ is invertible. In detail, by employing PPR as the statistic $P_g$, we derive the formula that depicts the difference of PPR score $P_{g}$ for from any nodes $i$ to $j$ by perturbing the directed edge $u \\to v$, denoted by $\\Delta_{u \\to v} P_{g}[i,j]$, as follows:\n\\begin{equation}\n\\begin{aligned}\n \\Delta_{u \\to v} P_{g}&[i,j] = \\{P_g(G') - P_g(G)\\}_{ij} \\\\\n &= \\{ \\alpha (M_1+M_2)^{-1} - \\alpha M_1^{-1} \\}_{ij} \\\\\n &= -\\alpha\\{(M_1^{-1}M_2M_1^{-1})\/(1+g)\\}_{ij} \\\\\n &= -\\alpha\\{(M_1^{-1} (- (1-\\alpha)D^{-1}B ) M_1^{-1})\/(1+g)\\}_{ij} \\\\\n &= \\alpha d_u^{-1}(1-\\alpha) \\{M_1^{-1} B M_1^{-1}\\}_{ij}\/(1+g)) \\\\\n &= \\alpha c' \\{(M_1^{-1})_{*u} (M_1^{-1})_{v*}\\}_{ij}\/(1+g) \\\\\n &= \\alpha c' (M_1^{-1})_{iu} (M_1^{-1})_{vj}\/(1-c'(M_1^{-1})_{vu}) \\label{eq:PPR}\n\\end{aligned}%\n\\end{equation}\nwhere $c' = b_s d_u^{-1}(1-\\alpha)$ and $b_s = sign(B_{uv})$. We can directly derive the quadratic term and trace of $M_1$ by selecting related column $(M_1^{-1})_{*u}$ and row $(M_1^{-1})_{v*}$ since $B$ is a one-hot matrix and only values with the corresponding indexes are took account. To reduce the destruction of graph structure, we consider the perturbation of edge $u \\to v$ with the lower absolute value of $\\Delta_{{u \\to v}} P_{g}[i,j]$ as the candidate. In other words, $\\Delta_{{u \\to v}} P_{g}[i,j]$ is regarded as the influence on PPR values by perturbing edge $u \\to v$.\nRegarding $(M_1^{-1})_{iu}$ and $(M_1^{-1})_{vj}$ in the numerator, the strength of such interaction effect depends on the degree of the original relationship among neighbors of $u$ and $v$. Higher values of $(M_1^{-1})_{iu}$ and $(M_1^{-1})_{vj}$ reflect the larger influence for perturbation $(u,v)$. Regarding $(M_1^{-1})_{vu}$ in the denominator, such an individual effect comes from the reverse direction $(v,u)$, and is relevant to the action of adding or deleting. \n\nSince the PPR derived by iterating the passing probability through nodes' directed edges, the influence of the perturbation results from not only the paths $u' \\leadsto u$ and $v \\leadsto v'$ for any nodes $u', v' \\in V$, but the effect of its opposite direction ($v \\to u$). For the case of edge addition, we have $sign(-c'(M_1^{-1})_{uv}) = sign(-b_s) = sign(-(1-0))<0$, which means lower $(M_1^{-1})_{vu}$ has lower $\\Delta_{{u \\to v}} P_{g}[i,j]$. That is, for an directed edge $v \\to u$ that some paths can pass through, the addition of edge $u \\to v$ would make it a bi-directed one, and bring the increasing flow for PPR values. On the other hand, for edge deletion, we have $sign(-c'(M_1^{-1})_{uv}) = sign(-b_s) = sign(-(1-2))>0$, indicating that larger $(M_1^{-1})_{vu}$ decreases the change of PPR. 
That said, if edge $v \\to u$ can support the unnoticeable structural change in terms of PPR, the deletion of edge $u \\to v$ would not cause much destruction of PPR flow from nodes $v$ to $u$. However, we need not to consider edge direction. We assume the graph is undirected, and the edge change is symmetric. Hence, we need to revise the denominator term. \nWe combine all influence of PPR from any nodes $i$ to $j$ by perturbing $(u,v)$, denoted by $\\Delta_{u \\leftrightarrow v}P_{g}$, in which the other edge direction $v \\to u$ is took into account, given by:\n\\begin{equation}\n\\begin{aligned}\n \\Delta_{u \\to v}P_{g} &= \\sum_{(i,j)} \\Delta_{u \\to v} P_{g}[i,j] \\label{eq:oneside}\\\\\n &= \\sum_{(i,j)} \\alpha c' (M_1^{-1})_{iu} (M_1^{-1})_{vj}\/(1+c'(M_1^{-1})_{uv}) \\\\\n &= \\sum_{i} c' (M_1^{-1})_{iu} \/(1+c'(M_1^{-1})_{uv})\\\\\n\\end{aligned}%\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n \\Delta_{u \\leftrightarrow v}P_{g} &= |\\Delta_{u \\to v}P_{g} + \\Delta_{v \\to u}P_{g}|,\\label{eq:twoside}\n\\end{aligned}%\n\\end{equation}\nwhere $\\sum_{j}\\alpha (M_1^{-1})_{vj} = 1$ since the row of the PPR marix $\\Pi_v$ represents all probabilities for node $v$ to each node. \nConsidering the adjustment of edge direction, we can simply focus on the PPR score regarding $-(M_1^{-1})_{uv}$ \nto achieve the data unnoticeability. The new formula ensures the low influence of perturbation $\\Delta_{u \\to v}P_{g}$ by deleting edges with lower PPR (i.e., $sign(c'(M_1^{-1})_{uv})<0$) and adding edges with higher PPR (i.e., $sign(c'(M_1^{-1})_{uv})>0$). In other words, we add the edges with higher $(M_1^{-1})_{uv}$, and delete the edges with lower $(M_1^{-1})_{uv}$. Both the PPR influence on all node pairs and the symmetric effect are considered as the measurement of candidate selection.\n\nThe formula above is to discuss the data unnoticeablity of a single edge perturbation (i.e., $N_p=1$). In practice, we require multiple edge perturbations. Here we aim at selecting multiple edge candidates being perturbed to approximate our original setting of data unnoticeability in $|P_g(G)-P_g(G')|<\\delta_g$ with $N_p>1$ in Def. \\ref{def:unnoticeable}. We find every candidate node pair by setting an upper bound $\\tau$, and consider node pair $e=(u, v)$ as our candidate if it satisfies the data unnoticeability, i.e., $|P_g(G)- P_g(G'))| = \\Delta_{u \\leftrightarrow v}P_{g}<\\tau$, where $G' = G \\pm e$, and the threshold $\\tau$ is a given hyperparamerter. We collect all selected node pairs into the candidate set $Cand = \\{(v,u) | (\\Delta_{u \\leftrightarrow v}P_{g}<\\tau) \\wedge (v,u \\in V) \\}$ for the following two components.\n\n\n\\subsection{Influence with GNNs}\n\\label{sec-infgnn}\nTo achieve both goals of model unnoticeablility (i.e., data utility) in Def. \\ref{def:unnoticeable} and privacy preservation in Def. \\ref{def:pp}, we need to investigate how the perturbation affects the trained GCN model. In this section, we first discuss the difference between perturbing graph structure and perturbing node features. Then we elaborate how to generate the perturbed graph structure using edge additions and removals.\n\n\\subsubsection{Perturbation Comparison between Graph Structure and Node Features on GNNs}\n\\label{sec-perturbcom}\nBefore devising an approach to select better candidates of perturbations, we need to understand to what extent perturbing graph structure and node features contribute to the prediction results of GNN. 
We will verify that perturbing structure has more significant influence than perturbing node features in terms of GNN prediction probabilities.\n\n\\textbf{GCN Simplication.} We simplify the activation function in GNNs to better discuss the prediction difference between graph structure and node features. Given the adjacency matrix $A$ and the feature matrix $X$, the output prediction probabilities $Z'$ of a $2$-layers GCN without activation functions is given by:\n\\begin{equation}\n Z'_{i}(A,X)= \\left(\\hat{A}^2 X W^{(1)} W^{(2)}\\right)_i = \\sum_{v_j \\in N_{G}^{(2)}(v_i)} \\hat{A}_{ij}^2 x_j^\\top W' ,\n\\label{eq:GCNs}\n\\end{equation}%\nwhere $W' = W^{(1)}W^{(2)}$ learned during pre-training. $N_{G}^{(2)}$ is the set containing the first- and second-hop neighbors derived from the $2$-nd order transition matrix for graph $G$, in which $\\hat{A}_{ij}^2 = 0$ if the minimum distance between nodes $i$ and $j$ is greater than $2$.\nOmitting the activation functions alleviates the computation for the propagation of node features with learned weights, without losing much information. We find the simplified output still involves the aggregation of propagated features from a node's first- and second-hop neighbors using trained weights. In other words, the simplified GCN aggregates the first and second order neighbors at the same layer, and the weights learned from GCN's second layer produce the deviation since we ignore the regulating by activation.\n\nTo evaluate the influence of perturbation, we compute $Z_i'(A',X')=(\\hat{A'}^2 X' W^{(1)} W^{(2)})_i$ with perturbed graph $A'$ and feature $X'$, and compare it with $Z'_{i}(A,X)$. We analyze how prediction probabilities are affected by perturbing node features and graph structure, respectively. Here we assume every perturbation is performed on either one feature or one edge at one time. \n\n\\textbf{Feature Perturbation.} Considering a specific entry $x_{kl}$ depicting the feature $l$ of node $k$, we denote the perturbation by: $\\Delta(x_{kl}): x_{kl} \\to (1-x_{kl})$, which indicates such a binary feature becomes opposite. We derive the difference of $Z'$ under $\\Delta( x_{kl})$, denoted as $\\epsilon_{\\Delta(x_{kl})}Z'$, given by:\n\\begin{align*}\n \\epsilon_{\\Delta(x_{kl})}Z' \n &= Z'_i(A,X') - Z'_i(A,X)\\\\ \n &= \\left(\\hat{A}^2 X' W'\\right)_i - \\left(\\hat{A}^2 X W'\\right)_i\\\\\n &= \\left(\\hat{A}^2 (X'-X) W'\\right)_i \\\\\n &= \\sum_{v_j \\in N_{G}^{(2)}(v_i)} \\hat{A}_{ij}^2 \\left({x'}_j-x_j\\right)^\\top W' \\\\\n &= \\hat{A}_{ik}^2 \\left({x'}_k-x_k\\right)^\\top W'\\\\\n &= \\hat{A}_{ik}^2 h_{kl}^\\top W'.\n\\label{eq:GCN2sx}\n\\end{align*}%\nwhere $h_{kl}\\in \\mathbb{R}^d$ is the one-hot encoding vector with value $1-2x_{kl}$ for feature $l$ \n, and $h_{kg}=x'_{kg}-x_{kg}=0$ for $g \\neq l$.\nThe result $\\epsilon_{\\Delta(x_{kl})}Z'=|\\hat{A}_{ik}^2h_{kl}^\\top W'|$ implies that the feature perturbation only influences the prediction outputs for $\\hat{A}_{ik} \\neq 0$, i.e., nodes nearby the target $k$.\n\n\\textbf{Structure Perturbation.} A structure perturbation is to add or remove an edge $(k,m)$ between nodes $k$ and $m$, denoted as $\\Delta(e(k,m)): A_{km} \\to (1-A_{km})$. Structure perturbation affects the adjacency matrix $A$ in deriving $Z'$. 
We derive the difference of $Z'$ under $\\Delta(e(k,m))$, denoted as $\\epsilon_{\\Delta(e(k,m))}Z'$, as follows:\n\\begin{align*}\n \\epsilon_{\\Delta(e(k,m))}&Z' =\n Z'_i(A',X) - Z'_i(A,X) \\\\ \n &= \\left(\\hat{A'}^2 X W'\\right)_i - \\left(\\hat{A}^2 X W'\\right)_i \\\\\n &= \\left(\\left(\\hat{A'}^2 - \\hat{A}^2\\right) X W'\\right)_i \\\\\n &= \\left(\\sum_{v_j \\in N_{G'}^{(2)}(v_i)} \\hat{A'}_{ij}^2 x_j^\\top - \\sum_{v_j \\in N_{G}^{(2)}(v_i)} \\hat{A}_{ij}^2 x_j^\\top\\right) W',\n\\end{align*}%\nwhere $\\hat{A'}_{ij}^2$ is the normalized adjacency matrix with self-loop under ${\\Delta(e(k,m))}$, and $N_{G'}^{(2)}(v_i)$ is the new neighborhood of node $v_i$ in the perturbed graph $G'$. The structure perturbation influences a node's neighbors for the aggregation and weights due to the changed $N_{G'}^{(2)}(v_i)$ and $\\hat{A'}^2$. Such change results from increasing or decreasing the minimum distance between nodes $k$ and $m$, which further affects the numbers of their first- and second-hop neighbors.\n\nTo sum up, by looking into a particular node $k$ that involves in the perturbation of feature ${\\Delta(x_{kl})}$ or edge ${\\Delta(e(k,m))}$, we can understand which parts in GNN prediction $Z'$ are influenced. For feature perturbation, ${\\Delta(x_{kl})}$ brings $|\\hat{A}_{ik}^2h_{kl}^\\top W'|$ for $\\hat{A}_{ik}^2 \\neq 0$. Such influence locates at node $k$'s specific feature and its first two hops' neighbors. On the other hand, for structure perturbation, ${\\Delta(e(k,m))}$ produces $|(\\sum_{v_j \\in N_{G'}^{(2)}(v_i)} \\hat{A'}_{ij}^2 x_j^\\top - \\sum_{v_j \\in N_{G}^{(2)}(v_i)} \\hat{A}_{ij}^2 x_j^\\top)|$. Its influence range can cover the aggregation terms ($N_{G'}$), the adjacency weights ($\\hat{A'}$) for nodes $k$ and $m$, and their first two hops' neighbors. Consequently, we can apparently find that structure perturbation affects wider on the GNN model. This suggests that perturbing edges takes more effect.\n\n\\subsubsection{Calcuating the Updated Graph Structure}\nBased on the above analysis, to estimate how the perturbation of edge $(k,m)$ affects the predicted probability on a target node $k$, we need to recalculate $\\hat{A}$. Such a recalculation has a high computational cost.\nSince the $2$-layer GNN-based prediction relies on only the first- and second-hop neighbors of the target node $k$, i.e., most elements in $\\hat{A}$ are zero, we can have an incremental update. \nIn perturbing edge $(k,m)$, we extend Theorem 5.1 in Netteck~\\cite{zugner2018adversarial} to consider the row $k$ of $\\hat{A}^2$ as well as the computation of $\\hat{A'}_{ij}^2$ in term of $\\hat{A}_{ij}^2$ to have an efficient update. 
The updated $\\hat{A'}_{ij}^2$ is given by:\n\\begin{equation}\n\\label{adj_update}\n\\begin{split}\n \\hat{A'}_{ij}^2 & = \\frac{1}{\\sqrt{\\tilde{d'}_i \\tilde{d'}_j}}\n \\Biggl[\n \\sqrt{\\tilde{d}_i \\tilde{d}_j}\\hat{A}_{ij}^2 \n +\\left(\n \\frac{\\tilde{a'}_{ij}}{\\tilde{d'}_i}\n -\\frac{\\tilde{a}_{ij}}{\\tilde{d}_i}\n +\\frac{\\tilde{a'}_{ij}}{\\tilde{d'}_j}\n -\\frac{\\tilde{a}_{ij}}{\\tilde{d}_j}\n \\right)\\\\\n &+\\left(\n \\frac{{a'}_{ik}{a'}_{kj}}{\\tilde{d'}_k}\n -\\frac{{a}_{ik}{a}_{kj}}{\\tilde{d}_k}\n +\\frac{{a'}_{im}{a'}_{mj}}{\\tilde{d'}_m}\n -\\frac{{a}_{im}{a}_{mj}}{\\tilde{d}_m}\n \\right)\n \\Biggr],\n\\end{split}%\n\\end{equation}\nwhere the lowercase notations are the elements of their corresponding matrix forms: the self-loop adjacency matrix $\\tilde{A}_{ij} := \\tilde{a}_{ij}$, the original adjacency matrix $A_{ij}:=a_{ij}$, the self-loop degree matrix $\\tilde{D}_{ii}:=\\tilde{d}_{i}$, and the updated matrix $A'_{ij} := a'_{ij}$ being positioned within ``$\\left[\\; \\right]$.''\nUpdated elements $\\tilde{a'}$, $a'$, $\\tilde{d'}$ can be obtained via: \n\\begin{align*}\n \\tilde{d'_i} & = \\tilde{d_i} + \\mathbb{I}(i \\in \\{k, m\\})(1-2\\cdot a_{km})\\\\\n {a'_{ij}} & = {a_{ij}} + \\mathbb{I}((i, j) = (k, m) )(1-2\\cdot {a_{ij}})\\\\\n \\tilde{a}'_{ij} & = \\tilde{a}_{ij} + \\mathbb{I}((i, j) = (k, m))(1-2\\cdot {a_{ij}}),\n\\end{align*}%\nwhere the indicating function $\\mathbb{I}(i \\in \\{k, m\\}) =1$ is used to update node $k$'s degree if $i=k$ or $i=m$ (e.g., $\\tilde{d'_k} = \\tilde{d_k} + (1-2\\cdot a_{km})$ for $i=k$); else $\\mathbb{I}(i \\in \\{k, m\\}) = 0$ implies no degree change. On the other hand, $\\mathbb{I}((i, j) = (k, m)) = 1$ indicates the changes between ${a}_{ij}$ and ${a}'_{ij}$ and between $\\tilde{a}_{ij}$ and $\\tilde{a}'_{ij}$ are only applied when $i=k$ and $i= m$ (e.g., ${a'}_{km} = {a}_{km} + (1-2\\cdot a_{km})$ for the case ${a'}_{km}$); else $\\mathbb{I}((i, j) = (k, m)) = 0$. \n\nEquation~\\ref{adj_update} consists of three parts, including the original adjacency matrix $\\hat{A}_{ij}^2$, the direct adjustment in the first bracket, and the indirect adjustment in the second bracket, in which $j \\in N_{G}^{(2)}$ that reflects the influence of perturbation covers the first two neighbors of node $k$. The first adjustment comes from the direct influence of perturbation ${\\Delta(e(k,m))}$ and node $k$'s first-order neighbors. The second-order neighbors of node $k$ suffer from the indirect adjustment via its first-order neighbor $m$. Equation~\\ref{adj_update} exhibits the influence of perturbation on the first- and second-order neighbors, and provides guidance for us to select more effective perturbation candidates in the next phase. \n\n\\subsection{Perturbations via Combinatorial Optimization}\n\\label{sec-opt}\nWe generate the best combination of edges being perturbed, i.e., the perturbed graph, by a combinatorial optimization. We iteratively find the set of edge candidates $Cand$ that satisfies the data unnoticeability described in Sec.~\\ref{sec-cand}, then select an edge (from $Cand$) at one time that leads to the best model unoticeability (i.e., data utility) and privacy preservation discussed in Sec.~\\ref{sec-infgnn}. The selection of edges being perturbed will stop when either running out of the budget $b$ or no candidates in $Cand$. Below we first present our NetFense algorithm that generates the perturbed edges when the adversary aims to attack the privacy of a single node $v$. 
Then we elaborate the extension from single-target attacks to multi-target attacks.\n\nFor single-target attacks, we consider the protection of the private label for one node $v\\in V$. We can derive the corresponding node-pair candidates $Cand(v) = \\{(v,u) | (\\Delta_{u \\leftrightarrow v}P_{g}<\\tau) \\wedge (u \\in V) \\}$ at each time of perturbation selection. That is, the perturbation candidates are node pairs whose one end is the target node $v$. Some candidate node pairs are selected to create new edges while some are removed from the graph. \nThe objective for perturbation selection consists of two parts. To ensure model unnoticeability, we devise the objective $O_C$ for target label classification, given by $O_C = [|f_C(G)-f_C(G')|]_v$, where $f_C = \\hat{A}^2 X W_C$. The objective $O_C$ aims to minimize the classification error of node $v$. To protect node privacy, we design the objective $O_P$ for private label classification, i.e., minimizing $O_P = |[\\rho(f_P(G))]_v-0.5|$, where $f_P = \\hat{A}^2 X W_P$. Both $f_C$ and $f_P$ are simplified pre-trained GCN models, and $W_C$ and $W_P$ are learnable weights.\nWe formulate the final loss function to solve the bi-tasks, i.e., minimizing both $O_C$ and $O_P$, as follows:\n\\begin{equation}\n \\mathcal{L}(G', W_C, W_P , v) = \\frac{|[\\hat{A'}^2XW_P]_{v}^{p_1} - [\\hat{A'}^2XW_P]_{v}^{p_2} |^{a_d}}{ ([\\hat{A'}^2 XW_C]_{v}^{\\check{c}})^{a_m} },\n\\label{eq:Score}\n\\end{equation}%\nwhere $p_1$ and $p_2$ are two classes of the specified binary private label, $[\\hat{A'}^2XW_P]_{v}^{p_1}$ is the predicted score of node $v$ on private class $p_1$, $\\check{c}$ is the predicted target label by the pre-trained GCN, and $[\\hat{A'}^2 XW_C]_{v}^{\\check{c}}$ is the predicted score of node $v$ on a particular predicted target class $\\check{c}$. In addition, ${a_d}$ and ${a_m}$ are balancing hyperparameters that control the strength of maintaining the performance of target label classification (for model unnoticeability) and reducing the performance of private label classification (for privacy protection). Below we elaborate the loss function. Larger ${a_d}$ and smaller ${a_m}$ implies concentrating more on privacy protection than model unnoticeability. We will analyze how ${a_d}$ and ${a_m}$ affect the performance in the experiments.\n\nFor privacy protection, as depicted by the numerator in Equation~\\ref{eq:Score}, since the private label is binary (i.e., only two classes $p_1$ and $p_2$), the score of one class approaches $0.5$ implies scores of both classes are close. Hence, we can rewrite the objective $O_P$ as: $|[\\hat{A'}^2XW_P]_v^{p_1} - [\\hat{A'}^2XW_P]_v^{p_2}|$. The scaling function $\\rho$ is skipped here because minimizing the difference with scaling can be approximated by that without scaling. Lower values of rewritten $O_P$ indicate that the perturbed graph leads to lower confidence in disclosing private labels of the target node since two classes $p_1$ and $p_2$ are indistinguishable.\n\nTo maintain model unnoticeability, as depicted by the denominator in Equation~\\ref{eq:Score}, since we cannot know the ground-truth target labels for all nodes, what we can utilize is the label $\\check{c}$ predicted by the pre-trained GCN model. 
\n\nTo maintain model unnoticeability, as depicted by the denominator in Equation~\\ref{eq:Score}, since we cannot know the ground-truth target labels for all nodes, we utilize the label $\\check{c}$ predicted by the pre-trained GCN model. Since the predicted score $f_C(G)$ on the target label using the original graph is a constant for a fixed input and parameters, and to strongly prevent edges that contribute to target label classification from being selected as perturbation candidates, we rewrite the objective $O_C$ from minimizing $|[f_C(G)-f_C(G')]_{v}^{\\check{c}}|$ to approximately maximizing $[f_C(G')]_{v}^{\\check{c}}$, which can be further transformed into minimizing $([f_C(G')]_{v}^{\\check{c}})^{-1}$. Higher values of $[f_C(G')]_{v}^{\\check{c}}$ tend to maintain the performance of target label classification.\n\nWe summarize the proposed single-target adversarial defense framework, NetFense, in Algorithm~\\ref{alg:algorithm}. We first use the original graph $G=(V,A,X)$ to pre-train two GCN models, $f_C$ and $f_P$, so that the loss in Equation~\\ref{eq:Score} can be constructed using $W_C$ and $W_P$. Then, by assuming every node pair $(u,v)$ is perturbed, we compute the PPR influence between nodes, $\\Delta_{u \\leftrightarrow v}P_{g}(G)$, which is used to limit the number of possible candidates for perturbing the given target. \nIn selecting each structural perturbation, until either the budget $b$ is exhausted or no appropriate candidates remain (lines 4--15), we rely on the loss in Equation~\\ref{eq:Score} to choose the best candidate at each round. We generate the perturbation candidates $Cand(v)$ based on the threshold $\\tau$ (line 5), accordingly produce the perturbed graphs by adding or removing every candidate in $Cand(v)$ (lines 7--8), and find the perturbed graph $G'^{*(t)}$ that minimizes Equation~\\ref{eq:Score} (line 9).\n\nThe NetFense algorithm can be extended to multi-target adversarial defense by repeating single-target NetFense (Algorithm~\\ref{alg:algorithm}) on different targets. That is, we randomly choose a node from the target set without replacement, and then apply the NetFense algorithm in an iterative manner. The perturbed graph for the current target is utilized to generate the new perturbed graph for the next target.\n\n\\begin{algorithm}[tb]\n\\caption{NetFense}\n\\label{alg:algorithm}\n\\textbf{Input}: Graph $G = (V, A, X)$, single target $v$, threshold $\\tau$, perturbation budget $b$, and hyperparameters $({a_d},{a_m})$\\\\\n\\textbf{Output}: Perturbed Graph $G' = (V, A', X)$\\\\\n \\begin{algorithmic}[1]\n \\STATE Train two GNNs $f_C$ and $f_P$ using $G$ to get $W_C$ and $W_P$\\\\\n \\STATE Compute $\\Delta_{u \\leftrightarrow v}P_{g}(G)$ for all possible node pairs\\\\\n \\STATE Let $G^{(0)} = G$ and $t=0$\n \\WHILE{$\\sum (A^{(0)}-A^{(t)}) \\leq b$}\n \n \\STATE $Cand(v) = \\{(v,u) | (\\Delta_{u \\leftrightarrow v}P_{g}<\\tau) \\wedge (u \\in V) \\}$\n \\IF {$|Cand(v)| > 0$}\n \n \\STATE $\\{G'^{(t)}\\}=\\{G^{(t)}\\pm(k, l)| (k, l) \\in Cand(v)\\}$\n \\STATE Update $\\hat{A'}^2$ for every graph in $\\{G'^{(t)}\\}$ by Eq.~\\ref{adj_update}\n \n \\STATE $G'^{*(t)} = \\arg\\min_{G'\\in \\{G'^{(t)}\\}}\\{\\mathcal{L}(G', W_C, W_P , v)\\}$\n \n \\ELSE\n \\STATE break\n \\ENDIF\n \\STATE $G^{(t+1)} = G'^{*(t)}$\n \\STATE $t = t+1$\n \\ENDWHILE\n \\STATE \\textbf{return} $G^{(t)}$\n \\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experiments}\n\\label{sec-exp}\n\\subsection{Evaluation Settings}\nThree real-world network datasets, including Cora, Citeseer~\\cite{zugner2018adversarial}, and PIT~\\cite{zhao2006entity}, are utilized for the evaluation. The statistics are shown in Table \\ref{tab:data}.
Cora and Citeseer are citation networks with 0\/1-valued features, corresponding to the presence or absence of dictionary words, and their TLC labels are paper categories. PIT depicts the relations between terrorists, along with anonymized features. The label ``colleague'' is used as the TLC task. \n\ct{Since the private label is not defined in these datasets, we regard one feature as the privacy label to simulate the situation that a few users do not make their sensitive attribute (e.g., hometown location) invisible. That is, we assume the attack aims to encroach on the target feature (e.g., hometown location) as the PLC label by building the model based on social relationships, other features, and a few visible PLC labels.}\nTo give the adversary stronger attack capability, in each dataset we select the binary attribute with the most balanced 0\/1 ratio and at least $0.6$ classification accuracy to be the PLC label. Besides, we also study the correlation between the TLC and PLC labels. By adopting \\textit{Cohen's kappa coefficient} $\\kappa$~\\footnote{\\url{https:\/\/en.wikipedia.org\/wiki\/Cohen\\%27s_kappa}}, we can measure the correlation between targeted and private labels. A higher $\\kappa$ (close to 1) indicates higher agreement between the PLC and TLC labels, and means the performance of TLC is more likely to be influenced when we perform perturbations for PLC. Here, we report the $\\kappa$ coefficient values between the TLC and PLC labels for our choices of private labels: $-0.13$, $0.01$ and $-0.14$ for Citeseer, Cora, and PIT, respectively. We further discuss the label correlation between PLC and TLC in Sec.~\\ref{sec-exp-adv}.\nFor the data processing, we split each dataset into training, validation, and testing sets with ratios $0.1$, $0.1$, and $0.8$ for semi-supervised learning.\n\nIn the hyperparameter settings, we train a 2-layer GCN with a $16$-dim hidden layer tuned via the validation set. We set the PPR restarting probability $\\alpha = 0.1$, and the balancing values in the loss function Eq.~\\ref{eq:Score} to $a_d = 2$ and $a_m = 1$. For candidate selection, we set the quantile $\\tau=0.9$, indicating that we retain a node pair $(u^*, v^*)$ if its $\\Delta_{u^* \\leftrightarrow v^*}P_{g}(G)$ is at most the $0.9$ quantile of $\\{\\Delta_{u \\leftrightarrow v}P_{g}(G) | u,v\\in V \\}$. That is, we exclude extremely influential node pairs. \n\n\\begin{table}\n\\centering\n\\caption{Summary of data statistics.}\n\\begin{tabular}{lcccc}\n\\hline\nDataset & \\#nodes & \\#edges & density & \\#features\\\\\n\\hline\nCora & 2708 & 5429 & 0.0015 & 1433 \\\\\nCiteseer & 3312 & 4715 & 0.0009 & 3703 \\\\\nPIT & 851 & 16392 & 0.0227 & 1224 \\\\\n\\hline\n\\end{tabular}\n\\label{tab:data}\n\\end{table}\n\nIn the single-target setting, we follow Nettack~\\cite{zugner2018adversarial} to consider the classification margin, i.e., the probability of the ground-truth label minus the second-highest class probability given by the pre-trained 2-layer GCN, and to select three different sets of target nodes for the experiment:\n(a) high-confidence targets: $10$ nodes with the highest positive classification margin, (b) low-confidence targets: $10$ nodes with the lowest positive classification margin, and (c) random targets: $20$ more randomly-chosen nodes. \nThe classification margin can exhibit both accuracy and confidence, as sketched below.
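\n\nA minimal sketch of the margin computation (our own illustration; probs is the softmax output of the pre-trained GCN for one node):\n\\begin{verbatim}\nimport numpy as np\n\ndef classification_margin(probs, true_label):\n    # probability of the ground-truth class minus the highest\n    # probability among the remaining classes; a positive value\n    # means a correct and confident prediction\n    others = np.delete(probs, true_label)\n    return probs[true_label] - others.max()\n\\end{verbatim}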
\n\nThe magnitude of the margin indicates the confidence level, and a positive\/negative margin means correct\/wrong classification.\nWe also follow Nettack to repeat the random splitting of train\/validation\/test sets up to five times, and report the average results. Besides, we set the maximum budget $b=20$.\n\nFor the multiple-target setting, all test nodes are considered as the targets.\nThe maximum budget is set to $b = 10$ for each target.\nWe repeat the experiment up to $20$ times in terms of different data splittings, and report the average results for both perturbed nodes (denoted as ``Set'') and all testing nodes (denoted as ``Overall''). \n\nSince no existing methods perturb graphs to worsen PLC while maintaining TLC,\nwe consider the following baselines: \n(1) \\textbf{clean}: generating the evaluation scores using the original graphs, i.e., no perturbation is performed;\n(2) \\textbf{pseudo-random perturbation} (RD): randomly perturbing candidate edges that pass the hypothesis test of an unnoticeable power-law degree distribution (proposed by Nettack~\\cite{zugner2018adversarial}); and \n(3) \\textbf{adversarial attack} (NT): fully adopting Nettack~\\cite{zugner2018adversarial},\nwhich is similar to NetFense but considers only the PLC task, i.e., we apply Nettack's structured graph perturbations to lower the PLC performance. \n\n\\subsection{Single-Target Perturbations}\nWe display the results in Table~\\ref{tab:single} and Figs.~\\ref{fig:R1ST_cora} and~\\ref{fig:R1ST_terr}. In each figure, the results of TLC and PLC are shown on the left and right, respectively.\nThe proposed NetFense method leads to more satisfying performance compared to RD and NT. NetFense maintains the classification margin in the TLC task, and lowers the classification margin toward zero in the PLC task. \nSpecifically, in TLC, NetFense keeps the positive confidence in predicting the target label (i.e., close to clean), indicating that the perturbed graphs do not hurt the data utility. In PLC, NetFense makes both private labels (binary classification) tend to have equal prediction probabilities, which indicates that the privacy can be protected.\nNT largely reduces the classification margin of PLC. This is because NT directly perturbs the graphs to mislead the PLC task by remarkably imposing the opposite effect on the original data distribution. However,\nthe negative PLC margin generated by NT would result in the crisis of anti-inference, since the task is binary classification.\nIn addition, RD gives both margins more moderate changes. Although the TLC margins of RD keep a level similar to clean in Cora and PIT, its PLC margin still holds at a high level, which exhibits the instability of RD.\n\nWe also present the averaged scores of margin and accuracy (Acc.), along with each method's performance difference from Clean, in Table~\\ref{tab:single}.\n\nSimilar to the performance in Figs.~\\ref{fig:ST_cite},~\\ref{fig:ST_cora} and~\\ref{fig:ST_terr}, \nthe perturbations of RD and NT can neither maintain TLC accuracy well nor reduce PLC accuracy to approach $0.5$, compared to our NetFense.
In other words, NetFense yields a smaller performance change in TLC, which preserves the data utility, and simultaneously makes the accuracy of PLC binary classification very close to random prediction, which protects private labels.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_SA_citeseer_TP_PPR.pdf}\n \\caption{Boxplots of single targets' margin for Citeseer.}\n \\label{fig:ST_cite}\n\\end{figure}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_SA_cora_TP_PPR.pdf}\n \\caption{Boxplots of targets' margin for Cora.}\n \\label{fig:ST_cora}\n\\end{figure}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_SA_TerroristRel_TP_PPR.pdf}\n \\caption{Boxplots of targets' margin for PIT.}\n \\label{fig:ST_terr}\n\\end{figure}\n\n\\begin{table*}[!t]\n\\centering\n\\caption{Results of single-target perturbations.}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{l||c|rr|rr|rr|rr|rr|rr}\n\\hline\nMethods & Datasets & \\multicolumn{4}{c|}{Citeseer} & \\multicolumn{4}{c|}{Cora} & \\multicolumn{4}{c}{PIT} \\\\ \\hline \n & & \\multicolumn{2}{c|}{TLC (Margin\/Acc.)} & \\multicolumn{2}{c|}{PLC (Margin\/Acc.)} & \\multicolumn{2}{c|}{TLC (Margin\/Acc.)} & \\multicolumn{2}{c|}{PLC (Margin\/Acc.)} & \\multicolumn{2}{c|}{TLC (Margin\/Acc.)} & \\multicolumn{2}{c}{PLC (Margin\/Acc.)} \\\\ \\hline \\hline\nClean & Score & 0.545 & 0.980 & 0.616 & 0.840 & 0.735 & 0.960 & 0.278 & 0.685 & 0.657 & 1.000 & 0.544 & 0.825 \\\\ \\hline\nRD & Score & {0.371} & {0.810} & 0.561 & 0.770 & 0.545 & 0.855 & 0.130 & 0.605 & {0.629} & 0.950 & 0.573 & 0.850 \\\\ \n & Difference & -0.174 & -0.170 & -0.055 & -0.070 & -0.190 & -0.105 & -0.148 & -0.080 & -0.028 & -0.050 & 0.029 & 0.025 \\\\ \\hline\nNT & Score & -0.456 & 0.245 & -0.390 & {0.270} & 0.327 & 0.690 & -0.825 & {0.020} & 0.344 & 0.745 & -0.669 & {0.155} \\\\\n & Difference & -1.001 & -0.735 & -1.006 & -0.570 & -0.408 & -0.270 & -1.103 & -0.665 & -0.313 & -0.255 & -1.213 & -0.670 \\\\ \\hline\nNetFense & Score & \\textbf{0.408} & \\textbf{0.845} & \\textbf{0.274} & \\textbf{0.685} & \\textbf{0.596} & \\textbf{0.915} & \\textbf{0.045} & \\textbf{0.540} & \\textbf{0.660} & \\textbf{0.955} & \\textbf{0.284} & \\textbf{0.635} \\\\\n & Difference & -0.137\t& -0.135\t& -0.342\t& -0.155\t& -0.139\t& -0.045\t& -0.233\t& -0.145\t& 0.003\t& -0.045\t& -0.260\t& -0.190\n \\\\ \\hline\n\n\\end{tabular}\n}\n\\label{tab:single}\n\\end{table*}\n\n\\subsection{Multiple-Target Perturbations}\nWe repeatedly apply the single-target perturbation on each testing node to obtain the results of multi-target perturbation.\nThe performance in terms of classification accuracy is exhibited at each round of single-target perturbation. Considering the perturbation ratio, i.e., the percentage of testing nodes being perturbed, as the x-axis, \nwe show the results in Figs. \\ref{fig:MT_cora}, \\ref{fig:MT_cite} and \\ref{fig:MT_terr}. Each figure exhibits the averaged accuracy of only the perturbed nodes (``Set'') on the left and the accuracy of all testing nodes (``Overall'') on the right. \n\nOverall, the proposed NetFense better approaches the TLC accuracy of Clean, along with more stable performance as the perturbation ratio increases, in both the Set and Overall results. NetFense also simultaneously produces a clear accuracy drop in the PLC task.
Compared to NT, although NT has stronger perturbation power to lower the accuracy of PLC, we are concerned that such over-perturbation may lead to the crisis of anti-inference for binary private labels. That is, one could reverse the predicted labels to obtain the true private information. On the other hand, since NT does not aim to preserve the structural features for TLC, its perturbations apparently destroy the TLC accuracy. \n\nFor the ``Set'' accuracy, there exists obvious fluctuation when the perturbation ratio is low, since fewer perturbed targets tend to result in extreme accuracy scores. After about a $5\\%$--$10\\%$ perturbation ratio, the accuracy becomes stable. \nIn addition, since NT imposes too much disturbance on the near-by graph structure of perturbed targets, \nits performance curves are more unstable than those of NetFense and RD. \nOn the other hand, the curves of the ``Overall'' accuracy decrease as the perturbation ratio increases. Such results exhibit that each target is gradually perturbed, and the perturbing effect is kept and even reinforced. \n\n\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_MA_cora_TP_PPR.pdf}\n \\caption{Multiple targets' accuracy for Cora.}\n \\label{fig:MT_cora}\n\\end{figure}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_MA_citeseer_TP_PPR.pdf}\n \\caption{Multiple targets' accuracy for Citeseer.}\n \\label{fig:MT_cite}\n\\end{figure}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_MA_TerroristRel_TP_PPR.pdf}\n \\caption{Multiple targets' accuracy for PIT.}\n \\label{fig:MT_terr}\n\\end{figure}\n\n\\subsection{Advanced Experimental Analyses}\n\\label{sec-exp-adv}\n\\textbf{Balancing Factors of TLC and PLC.} \nBy performing single-target perturbation using our NetFense, we examine the effect of the hyperparameters ${a_d}$ and ${a_m}$ in Eq.~\\ref{eq:Score} that determine the balance between TLC and PLC. As the results on the other datasets exhibit similar trends, we only show the results on the PIT dataset due to the page limit.\nWithout loss of generality, we discuss the effect of positive values of ${a_d}$ and ${a_m}$. \nWe fix one and present the effect of the other by varying ${a_d}$ and ${a_m}$ around $1$. The results in terms of TLC and PLC classification margins are exhibited in Fig.~\\ref{fig:Hyper}.\nOn the left, we can find that both TLC and PLC performance are quite sensitive to ${a_d}$. This is because ${a_d}$ is directly related to the perturbation goal. A higher $a_d$ drops PLC performance more significantly than TLC performance. We choose a higher $a_d$ as it makes NetFense select perturbation candidates that heavily hurt PLC but only slightly hurt TLC.\nOn the right of Fig.~\\ref{fig:Hyper}, the results of varying $a_m$ are opposite to those of $a_d$. Increasing $a_m$ strengthens the maintenance of TLC performance but weakens the privacy protection effect on PLC (i.e., raising the margin towards that of Clean). \nA higher $a_m$ leads NetFense to avoid selecting candidates that can hurt TLC.\nNevertheless, since there could exist some features shared by both TLC and PLC, too strong a maintenance power (i.e., higher $a_m$) can limit the perturbation capability, i.e., increase the margin of PLC. We eventually suggest $(a_d, a_m)=(2, 1)$, which can better maintain TLC performance and decrease the PLC margin.\n\\comments{\nTherefore, we keep $a_d = 1$ as the original value and then select $a_m$ to ensure an appropriate effect of maintenance. 
We choose $a_m = 2$ because the red curve of TLC start to convergence around $a_m = 2$ (top - right) and its corresponding margin of PLC is lower. Additionally, we find the margins of $a_d = 2$ with fixed $a_m$ for PLC and TLC can still hold a medium level.\n}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_HyperparametersTest_PPR.pdf}\n \\caption{Effects of ${a_d}$ and ${a_m}$ in the PIT dataset.}\n \\label{fig:Hyper}\n\\end{figure}\n\n\\textbf{Perturbation Factors.} \nWe demonstrate how the perturbation factors, including the perturbation budget $b$, the threshold $\\tau$ of data unnoticeability, and the node degree, affect the performance.\nWe conduct the single-target perturbation with various $(b,\\tau)$ combinations\non the PIT dataset, and show the PLC margin that reflects privacy disclosure. \nWe select $\\tau$ via the quantile $q\\in \\{0.3, 0.5, 0.7, 0.9\\}$ of $\\Delta_{u \\leftrightarrow v}P_{g}$ for all $u, v \\in V$, denoted as $privacy_q$.\nThe results shown in Fig.~\\ref{fig:D_N} (left) exhibit that a higher budget $b$ leads to better privacy protection (a lower PLC margin, towards zero). This is because a higher $b$ brings more perturbations.\nThe steep decrements of the PLC margins indicate that most of the edges that could reveal private labels are perturbed. A higher $\\tau$ leads to better privacy protection (i.e., steeper decrements). Nevertheless, there is a trade-off between data unnoticeability and privacy protection. Here we pay more attention to privacy protection, and thus apply a higher $\\tau$ (i.e., $q = 0.9$) to consider more candidates for perturbation.\nA higher $\\tau$ (i.e., higher $q$) can help identify the edges whose perturbations lead to a nearly unnoticeable graph change, and prevent significant edges from becoming candidates.\n\n\\textbf{Effect of Node Degree.}\nSince high-degree nodes have more neighbors to learn their embeddings from, they might have a higher potential of being influenced by perturbations. Here we aim at investigating how node degree correlates with the performance of TLC and PLC after perturbations. We conduct single-target perturbations on the PIT data, and compare the results between Clean and NetFense (NF). Nodes are divided into four groups based on their degree values. The results are exhibited in Fig.~\\ref{fig:D_N} (right).\nFor TLC, we can find that the margins do not change much across the various degree ranges after NF perturbations. That is, NetFense can maintain TLC performance for both high- and low-degree nodes.\nFor PLC, the decrements after NetFense perturbations are more obvious.\nMoreover, the difference in margin between Clean and NetFense becomes significant as the degree increases\n(i.e., $0.167, 0.577, 0.532, 0.491$).\nSuch results show that the privacy protection capability of NetFense is quite significant for high-degree nodes, because the PLC margin can be reduced to approach zero. \n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.00\\columnwidth]{Fig_Np_deg_group_margin.pdf}\n \\caption{Effects of perturbation factors (i.e., $b$ and $\\tau$) on PLC (left) and node degree on TLC and PLC (right) in PIT.}\n \\label{fig:D_N}\n\\end{figure}\n\n\\textbf{Strategies of Selecting Data-Unnoticeable Candidates.} \nNetFense relies on the revised PPR for data-unnoticeable candidate edge selection in Sec.~\\ref{sec-cand}. Here we aim to examine whether the perturbed graphs obtained with our proposed revised PPR can truly keep the local neighborhood of each node unchanged.
We consider that a good strategy of selecting data-unnoticeable candidates should hurt the local structure of every node as little as possible in the perturbed graphs. We compare the proposed revised PPR-based selection (i.e., Eq.~\\ref{eq:oneside}) with three baseline strategies of candidate selection. \n(a) \\textit{Random Choice}: randomly selecting edges without replacement.\n(b) The hypothesis testing of the degree distribution~\\cite{clauset2009power}, which is adopted by Nettack~\\cite{zugner2018adversarial}: we examine the scaling parameter of the power-law distribution\nbased on the likelihood ratio test for the hypotheses $\\{$ $H_0$: the distributions of $G$ and $G'$ are the same$\\}$ and $\\{$ $H_1$: the distributions of $G$ and $G'$ are different from each other$\\}$, where the graph $G'$ indicates the graph with at least one edge perturbation from $G$. We identify an edge perturbation as unnoticeable if the likelihood ratio is lower than the threshold $\\chi_{(1),0.95}^2 \\approx 0.004$ for the given significance level of p-value $=0.05$. We determine the edge with the smallest ratio as the candidate.\n(c) \\textit{PPR (Original)}: according to our discussion (i.e., Eq.~\\ref{eq:oneside} and Eq.~\\ref{eq:twoside})\nin Sec.~\\ref{sec-cand}, we can derive the influence of an edge perturbation via the original PPR score, i.e., the version without the direction of adjustment, given by $\\Delta_{u \\to v} P^{o}_{g}((i \\to j))= \\sum_{i} c' (M_1^{-1})_{iu} \/(1-c'(M_1^{-1})_{vu})$. Edges with lower scores are considered good unnoticeable candidates. \n\\comments{\nNote that the difference between PPR (Original) and our revised PPR lies in\ntheir denominators of (Eq.\\ref{eq:oneside} vs. Eq.~\\ref{PPR-revised}), which reverses the sign and the direction of the PPR score of $v,u$. The trick helps us to make the low influence edge with a higher score to remove, and we increase the score to add the edge with high PPR.\n}\nTo examine which strategy can better maintain the local graph neighborhood, we utilize the \\textit{average of the local clustering coefficients}, denoted by \\textbf{CA}, as the evaluation metric (a minimal sketch of its computation is given below). For each strategy, we sequentially perturb the candidates in descending order of their unnoticeability scores, and report the CA score at each step.\nThe damage to local neighborhoods resulting from the accumulated edge perturbations is reflected by the decrement of CA. \n\nWe present the results on the PIT dataset in Fig.~\\ref{fig:Un_FS} (left), in which the x-axis is the number of perturbed edges ($N_p$), and the y-axis is the difference between the CA scores before and after the perturbations. \nWe can clearly find that the PPR-based strategies demonstrate promising preservation of the local graph structure (i.e., lower CA differences). The proposed revised PPR keeps the CA scores almost unchanged. \nRandom Choice tremendously reduces the CA scores. The hypothesis testing of the degree distribution alleviates the destruction of the local structure, but still cannot maintain the local connectivity well, because it considers no interactions between nodes, which are exploited by the PPR-based strategies.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_unnotice_Struct_feat.pdf}\n \\caption{Left: effect of different data-unnoticeable candidate selection strategies in PIT. Right: comparison of structure and feature effects in Citeseer.}\n \\label{fig:Un_FS}\n \\vspace{-1em}\n\\end{figure}
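\n\nThe CA metric itself can be computed with a one-liner; the following is a minimal sketch using the networkx package on a dense 0\/1 adjacency matrix (names are ours):\n\\begin{verbatim}\nimport networkx as nx\nimport numpy as np\n\ndef ca_metric(adjacency):\n    # average of the local clustering coefficients over all nodes\n    return nx.average_clustering(\n        nx.from_numpy_array(np.asarray(adjacency)))\n\n# CA decrement caused by accumulated perturbations:\n# ca_metric(A) - ca_metric(A_perturbed)\n\\end{verbatim}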
\n\n\\textbf{Structure vs. Feature Perturbations.}\nRecall that in Sec.~\\ref{sec-infgnn} we have discussed that perturbing the graph structure (\\textbf{Struc.}) brings a more significant influence on GNNs than perturbing node features (\\textbf{Feat.}). \nWe now empirically examine and compare structural and feature perturbations in terms of TLC and PLC margins under single-target perturbation.\nTo have a fair comparison, either all edges or all nodes' features are considered for perturbation by NetFense. That is, we do not apply candidate selection here. \nSince perturbing node features affects the feature matrix in GNNs, we keep $\\hat{A}$ but replace $X$ by the perturbed feature matrix $X'$ in the propagation term (i.e., $\\hat{A}^2XW$) of the loss function Eq.~\\ref{eq:Score}.\nTo further understand whether perturbing node features is effective in protecting private labels, the analysis is conducted in two cases: (1) maintaining TLC performance and reducing PLC prediction confidence by setting $a_d>0$ and $a_m>0$, denoted by \\textbf{P+T}, and (2) only reducing PLC prediction confidence by setting $a_d>0$ and $a_m=0$, denoted by \\textbf{P}, which implies protecting private labels without preserving data utility. \n\nBy reporting the margins from accumulated perturbations of edges or node features, we show the results on Citeseer in Fig.~\\ref{fig:Un_FS} (right). \nWe find that the feature perturbations in both the P+T and P settings apparently maintain the TLC margins as Clean, but fail to reduce the PLC margins. That is, perturbing node features cannot effectively prevent private labels from being inferred.\nThe structural perturbations sacrifice some TLC margin (i.e., decrease the TLC margins) to lower the PLC margins so that the private labels can be protected. \nIn addition, to maintain data utility, the P+T case of structural perturbation does not work as well as the P case. \nThese results indicate that perturbing node features can only maintain the data utility in terms of TLC, and has no effect on privacy protection.\nPerturbing edges with TLC in the loss function (i.e., P+T) is able to strike the balance between keeping data utility and preventing private labels from being confidently inferred.\n\n\\ct{\n\\textbf{Perturbing Both.}\nWe discuss whether perturbing both features and structure can bring more effective results for NetFense. Single-target perturbations with NetFense (NF) and Random (RD) are considered.\nWe follow Nettack~\\cite{zugner2018adversarial} to combine both perturbation options by choosing either structure or feature with the higher score at each round of perturbation. The score is defined by the gradient-based function in Eq.~\\ref{eq:Score}. That is, there could be some structural and some feature perturbations at the end of the perturbation process.\nNote that RD randomly chooses either structure or feature at each round of perturbation with uniform probability.
\nThe results are exhibited in Figs.~\\ref{fig:SFST_cora} and~\\ref{fig:SFST_terr} for the Cora and PIT datasets, respectively, in which NF-Struct$\\_$Feat and RD-Struct$\\_$Feat denote the results of NetFense and Random, respectively.\nWe find that NetFense with perturbing both structure and features leads to higher margin values in TLC and approaches zero confidence in PLC, and the results look slightly better than those of NetFense with perturbing only the structure.\nRandom perturbation (RD-Struct$\\_$Feat) does not consider how to maintain TLC performance and decrease PLC confidence, and thus produces worse results. \nAlthough perturbing both is more effective for TLC and PLC, we still need to point out that perturbing features raises two concerns. First, as discussed in Sec.~\\ref{sec-infgnn}, perturbing features brings much higher computation cost. Second, it is less reasonable to perturb features (i.e., modify user profiles) by an algorithm in online social platforms, because profiles should be manually filled and modified by users. Therefore, given that the performance of perturbing both and of perturbing only the structure is competitive, we suggest perturbing only the structure in real applications.\n}\n\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_SA_PPR_RD_struct_feat_cora.pdf}\n \\caption{(Struct+Feat) Boxplots of targets' margin for Cora.}\n \\label{fig:SFST_cora}\n\\end{figure}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_SA_PPR_RD_struct_feat_TerroristRel.pdf}\n \\caption{(Struct+Feat) Boxplots of targets' margin for PIT.}\n \\label{fig:SFST_terr}\n\\end{figure}\n\n\\ct{\n\\textbf{Label Swapping.}\nTo understand whether the proposed NetFense is also effective for different classification tasks, we swap the classification tasks of PLC and TLC, i.e., considering the original private labels as targeted labels and the original targeted labels as private labels, and conduct the same experiments. The results are displayed in Figs.~\\ref{fig:R1ST_cora} and~\\ref{fig:R1ST_terr} (NF-Reverse) for the Cora and PIT datasets, respectively, in which the ``Reverse'' results indicate the performance when the PLC and TLC labels are swapped. We find that our NetFense can successfully lower the prediction confidence of TLC (as it is now the task to be protected) and simultaneously keep the high performance of PLC (as it is now the task to be maintained). Such results also verify that NetFense can protect any specified labels from being confidently attacked and maintain the performance of any specified targeted labels.}\n\n\\ct{\n\\textbf{Label Correlation.}\nWe study how the correlation between the PLC and TLC labels affects the performance of our NetFense. We create a new TLC label that is highly correlated with the PLC label.
While the PLC label is binary, suppose the number of TLC labels is $\\vartheta$; the new TLC label $c'_i$ of node $v_i$ is then defined as follows: $c'_i$ is the same as the original TLC label $c_i$, i.e., $c'_i = c_i$, if its PLC label is $p_i = 1$; otherwise, if its PLC label is $p_i = 0$, we set $c'_i$ to be an extra newly-created label (i.e., the ($\\vartheta+1$)-th label).\nWith Cohen's kappa coefficient, we display the change of correlation between the TLC and PLC labels before and after modifying the TLC labels.\n\\comments{To examine the correlation change between TLC and PLC labels before and after modifying the TLC label, \nwe adopt \\textit{Cohen's kappa coefficient} $\\kappa$ value \\footnote{\\url{https:\/\/en.wikipedia.org\/wiki\/Cohen\\%27s_kappa}}, which can measure the correlation of two discrete distributions. Higher $\\kappa$ (close to 1) indicates more similar agreements, and lower $\\kappa$ (close to -1) means less consistent than random selection.} \nWe find that the $\\kappa$ coefficients between the TLC and PLC labels change from $0.01$ and $-0.14$ to $0.33$ and $0.63$ for \nCora and PIT, respectively. Such $\\kappa$ differences imply that the new TLC label is highly correlated with the PLC label in this experiment. \nWe present the performance of Clean and NetFense for the new TLC with the new ``Higher Correlated'' (HC) label\nin Figs.~\\ref{fig:R1ST_cora} and~\\ref{fig:R1ST_terr}. The new TLC performances without perturbation (Clean-HC) are similar to the scores of the original TLC. The new TLC scores of NetFense (NF-HC) can be maintained at the original level; however, the PLC scores are a bit worse than in the original setting (training with the original TLC label). Such results tell us that when the PLC label is highly correlated with the TLC label, it becomes more challenging for NetFense to maintain the effectiveness of privacy protection on PLC. Since accurately classifying the mutually-correlated TLC and PLC labels involves learning similar information from the data, the model naturally cannot well distinguish them from each other during the process of adversarial-defense perturbation.\n}\n\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_SA_PPR_R1_cora.pdf}\n \\caption{Boxplots for Cora on various settings.}\n \\label{fig:R1ST_cora}\n\\end{figure}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_SA_PPR_R1_TerroristRel.pdf}\n \\caption{Boxplots for PIT on various settings.}\n \\label{fig:R1ST_terr}\n\\end{figure}\n\n\n\n\n\n\\ct{\n\\textbf{Against Robust GNNs.}\nWe further study whether the perturbed\/defended graphs can survive robust GNNs on the PLC and TLC tasks, because the adversary can choose to utilize a robust GNN as a stronger attacking model to avoid the perturbation influence and disclose user privacy. In other words, we examine how robust the perturbed graphs generated by NetFense are against robust GNNs. One may expect that robust GNNs can generate better prediction performance on TLC and PLC. We employ RobustGCN (RGCN)~\\cite{zhu2019robust} as the robust attacker, which has no knowledge about whether the graph is perturbed. A two-layer RGCN is implemented and performed on the perturbed graphs generated by single-target NetFense, denoted as NF-RGCN. The results are reported in Figs.~\\ref{fig:R1ST_cora} and~\\ref{fig:R1ST_terr}. We can find that on the TLC task, the performance of RGCN (i.e., NF-RGCN) is competitive with that of our simplified GCN (i.e., NF).
In more detail, NF is slightly better at maintaining the TLC performance. On the other hand, for the PLC task, RGCN (i.e., NF-RGCN) leads to lower classification confidence than our simplified GCN (i.e., NF). Such results further demonstrate the usefulness of our NetFense model, because even the robust GNN (RGCN) cannot increase the prediction confidence on the PLC task, i.e., the private labels are still well protected by NetFense. That is, the perturbed graphs generated by NetFense are verified to provide solid privacy protection against robust GNN attackers.\n}\n\n\n\n\n\\comments{\n\\textbf{Disclosing Crisis for Edges.}\nIn this case, we consider the detection of dangerous edges which raise the possibility of leakage of private label via the effect metric in Eq. \\ref{eq:Score}. \\ic{We focus on three parts to measure the crisis of edges, including prediction for Clean case, loss for each edge and prediction for NF perturbation.} Given a random node $v_1$ from Citeseer \\ct{(how\/why to select this $v_1$?)}\\footnote{To clearly present, we filter out the node with the degree $<5$ and $>10$ and consistent-label neighbors}, \\ic{we choose its neighbors with unknown labels (i.e., belong to the testing set), and we would compare each edge as well as the corresponding neighbors via the prediction values before\/ after perturbation and its loss of Eq. \\ref{eq:Score}. Note that we present the prediction value in the form of the scale score of the model's output for the ground truth label.} In Fig. \\ref{fig:Crisis} (Top - left), we depict the $v_1$ and its neighbors with different private labels in the form of square and circle shapes, and we display the predicted scores without perturbation for their true private labels of these nodes for the original graph shown in Fig. \"Original Predicted Score\" (bottom-left) \\ct{(is each node v1-7's private label unknown?)}.\nWe can find that the private label of most of the neighbors is the same as target's; \\ct{therefore GCN model can predict nodes' labels via the information of the interaction offered by these in a similar neighborhood.} \n\\ic{Then, we display the crisis of privacy leakage without completely training the model here, and therefore we purpose to examine the crisis of each edges by a measurement from revising our NetFense's objective loss (i.e., Eq. \\ref{eq:Score}) $L(G', W_P, v_1) =|[\\hat{A'}^2XW_P]_{vc_1} - [\\hat{A'}^2XW_P]_{vc_2}|$, where $G' = G-e(v_1, v_i)$ for $i = 2, ..., 7$ and $W_P$ is pre-train weights. The results are shown in the figure of \"Crisis of each edge\", which the lower loss indicates the relationship is more dangerous to suffer the privacy leakage.}\nThe figure shows the loss of each relationship for $v_1$, \\ct{which the edge with lower loss implies more influential potential to make $v_1$ be unrecognized if we delete the edge. The lower loss also indicates the crisis of the privacy leakage because a better perturbation candidate to dominate the capability of performance means the edge helps the prediction of the target. \nWe find the node with the same private label tends to increase the crisis (i.e., low loss), such as $v_5$ and $v_7$. Other nodes with the same private label (i.e., $v_4$, $v_6$) may suffer from other factors such as different features and graph structure, and therefore GCN is hard to learn the high weight on them.}\n\nAfter the measurement, we remove the edge for each relationship to verify the true influence on the privacy disclosing of these nodes. 
\\ic{In the figure \"Influence for Removing Each Edge\", we demonstrate each node's (x-axis) predicted scores after removing different edges in the corresponding color refer to the color in Fig. \"Crisis of each edge\". For an example of $v_1$, there are five bars in different color respected to $v_1$ (x-axis) that we annotate with the words \"(target)\" before the vertical dotted line, and the red bar for $v_1$ represents the predicted score of $v_1$ after we delete the edge $(v_1, v_2)$, and other bars follow the same rule so on.} \n\\ct{For the target $v_1$, we add a horizontal black line as $v_1$'s original performance and observe the $v_5$ (blue), $v_6$ (yellow) and $v_7$ (royal blue) with the lower loss reduce the probability to disclose the private label of $v_1$ after we delete their edges. $v_3$ and $v_4$ with higher losses would not cause much influence on the prediction of $v_1$, and the opposite result reflects the quite different attributes between $v_1$ and $v_2$ for PLC which helps the de-noising and increase the margin for prediction of PLC.\nBesides, we also show the predicted scores of $v_1$'s neighbors. The predicted scores of nodes $v_2$ and $v_3$ have the tremendous improvement (i.e., red bar for $v_2$ and pink bar for $v_3$) when we remove them from $v_1$, which indicates the neighborhood of $v_1$ offers the noise to protect the information of node with opposite attributes.\nSimilarly, the nodes with private label $1$ gain the decrements of the predicted score as they don't connected with $v_1$ because the capability of GCN highly depends on the information of the target's neighbors. To sum up, our proposed loss can detect the most dangerous edges and also give the user a security alert for the specific relationship. In the social network, the conclusion suggests that people stay in their echo chamber with high risk of privacy disclosing.}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{Fig_Crisis_PPR.pdf}\n \\caption{Analysis of the disclosing crisis.}\n \\label{fig:Crisis}\n\\end{figure}\n}\n\n\\section{Conclusions}\n\\label{sec-concl}\nThis paper presents a novel research task: adversarial defense against privacy attacks via graph neural networks under the setting of semi-supervised learning. \nWe analyze and compare the differences between the proposed problem and model attacks on graph data, and find that the perturbed graphs should keep data unnoticeability, maintain model unnoticeability (i.e., data utility), and achieve privacy protection at the same time.\nWe develop an adversarial approach, NetFense, and empirically find that the graphs perturbed by NetFense can simultaneously lead to the least change of local graph structures, maintain the performance of targeted label classification, and lower the prediction confidence of private label classification. We also show that perturbing edges brings more damage in misclassifying private labels than perturbing node features. In addition, the promising performance of the proposed NetFense lies not only in single-target perturbations, but also in multi-target perturbations, which cannot be done well by model attack methods such as Nettack. The evaluation results also show that \nmoderate edge disturbance can influence the graph structure to avoid the leakage of privacy via GNNs while alleviating the destruction of the graph data. \nBesides, we also offer analyses of the hyperparameters and perturbation factors that are highly related to the performance. 
We believe the insights found in this study can encourage future work to further investigate how to devise privacy-preserving graph neural networks, and to study the correlation between the leakage of multiple private labels and attributed graphs.\n\\vspace{-5pt}\n\n\n\\ifCLASSOPTIONcompsoc\n \n \\section*{Acknowledgments}\n\\else\n \n \\section*{Acknowledgment}\n\\fi\n\n\nThis work is supported by the Ministry of Science and Technology (MOST) of Taiwan under grants 109-2636-E-006-017 (MOST Young Scholar Fellowship) and 109-2221-E-006-173, and also by Academia Sinica under grant AS-TP-107-M05.\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\uppercase{Introduction}}\n\\label{sec:introduction}\n\\subsection{Incremental PCA}\nPrincipal Component Analysis (PCA) is a widely used technique and a well-studied subject in the literature. PCA is a technique to reduce the dimensionality of a data set of correlated variables. Several natural phenomena and industrial processes are described by a large number of variables, and hence their study can benefit from the dimensionality reduction PCA has been invented for. As such, PCA naturally applies to statistical data analysis, and the technique is traditionally implemented as an offline batch operation. Nevertheless, PCA can be useful when applied to data that are available incrementally, e.g. in the context of process monitoring \\cite{dunia1996identification} or gesture recognition \\cite{Lippi2009}. \nThe PCA can be applied to a data-flow after defining the transformation on a\nrepresentative off-line training set using the batch algorithm \\cite{lippi2011can}. This approach can be used in pattern recognition problems for data pre-processing \\cite{Lippi2009}. \nNevertheless, one can imagine an on-line implementation of the algorithm.\nAn on-line implementation is more efficient in terms of memory usage than a batch one. This can be particularly relevant for memory-consuming data-sets such as image collections; in fact, in the field of visual processing some techniques to implement incremental PCA have been proposed, see for example \\cite{artavc2002incremental}.\nPCA consists of a linear transformation to be applied to the data-set. Dimensionality reduction is performed by selecting a subset of the transformed variables that are considered more relevant in the sense that they exhibit a larger variance compared to the others. Usually the transformation is computed on the Z-score, and hence the averages and the variances of the dataset are taken into account. Depending on the application, the algorithm has been extended in different ways, adding samples on-line as presented in \\cite{artavc2002incremental} or incrementally increasing the dimension of the reduced variable subset as seen in \\cite{neto2005incremental}. A technique to dynamically merge and split the variable subsets has been presented in \\cite{hall2000merging}.\nSeveral \\textit{approximate} incremental algorithms have been proposed for PCA, e.g. see \\cite{shamir2015convergence} and\n\\cite{boutsidis2015online}, as well as for singular value decomposition \\cite{sarwar2002incremental}. An implementation of on-line PCA has been proposed, for example, for the R language \\cite{DegrasandCardot2015}. 
In some cases the incremental process is designed to preserve some specific information; for example, in \\cite{hall1998incremental} the average of the samples is updated with new observations.\n\nCurrently, to the best of our knowledge, there is no available description of an exact incremental implementation of PCA, where \\textit{exact} means that the transformation obtained given $n$ samples is exactly the same as would have been produced by the batch algorithm, including the z-score normalization, a step that is not included in previous works presenting a similar approach like \\cite{artavc2002incremental}. In light of this, we decided to describe the algorithm in detail in this paper. The incremental techniques cited above \\cite{hall1998incremental,artavc2002incremental,hall2000merging} are designed to update the reduced set of variables and to change its dimensionality when it is convenient for data representation. In the present work, no indication is provided for which subset of variables should be used, i.e. how many principal components to consider. All the components are used during the algorithm to ensure the exact solution. After describing the exact algorithm, some implications of using this incremental analysis are discussed. In particular, we provide an intuitive definition of continuity for the obtained transformation and then we propose a modified version designed to avoid discontinuities. \n\nThe concept of continuity is strictly related to the incremental nature of the proposed algorithm: in standard PCA the batch analysis implies that the notion of time does not exist, e.g. the order of the elements in the sample set is not relevant for the batch algorithm. In our treatment we instead want to follow the time evolution of variances and eigenvectors.\nWe are thus led to consider a dynamical evolution. \n\nThe paper is organized as follows.\nIn the remaining part of the Introduction we recall the PCA algorithm and introduce the notation used.\nIn Section \\ref{sec:incremental} we give a detailed account of the incremental\nalgorithm for an on-line use of PCA. \nIn Section \\ref{sec:continuity} we address the problems related to the data\nreconstruction, in particular those connected with the signal continuity.\nIn Sections \\ref{sec:results}, \\ref{sec:conclusion} we then present the results\nof some applications to an industrial data set and draw our conclusions.\n\n\n\\subsection{The principal component analysis}\n\\label{sec:PCA}\nThe computation for the PCA starts by considering a set of observed data.\nWe suppose we have $m$ sensors which sample some physical observables at a constant rate.\nAfter $n$ observations we can construct the matrix\n\\begin{equation}\nX_n=\\left[\\begin{array}{c}\n x_1 \\\\\n x_2 \\\\\n \\vdots \\\\\n x_n\n \\end{array}\\right]\n\\end{equation}\nwhere $x_i$ is a row vector of length $m$ representing the measurements of the\n$i^{th}$ time step, so that $X_n$ is an $n \\times m$ real matrix whose columns\nrepresent all the values of a given observable.\n\nThe next step is to define the sample means $\\bar{x}_n$ and standard deviations\n$\\sigma_n$ with respect to the columns (i.e. for the observables) in the usual\nway as
\\begin{align}\n\\bar{x}_{n(j)} &= \\frac{1}{n} \\sum^n_{i=1} X_{n(ij)} \\\\\n\\sigma_{n(j)} &= \\sqrt{\\frac{1}{n-1} \\sum^{n}_{i=1}\n \\left[ X_{n(ij)} - \\bar{x}_{n(j)} \\right]^2}\n\\end{align}\nwhere in parentheses we write the matrix and vector indices explicitly.\nIn this way we can define the standardized matrix for the data as\n\\begin{equation}\nZ_n=\\left[\\begin{array}{c}\n x_1-\\bar{x}_n \\\\\n x_2-\\bar{x}_n \\\\\n \\vdots \\\\\n x_n-\\bar{x}_n\n \\end{array}\\right]\\Sigma_n^{-1}\n\\end{equation}\nwhere $\\Sigma_n \\equiv {\\rm diag}(\\sigma_n)$ is an $m \\times m$ matrix.\nThe covariance matrix $Q_n$ of the data matrix $X_n$ is then defined as\n\\begin{equation}\n\\label{eq:cov}\nQ^{}_n = \\frac{1}{n-1} Z^{T}_n Z^{}_n \\; .\n\\end{equation}\nWe see that $Q_n$ is for any $n$ a symmetric $m \\times m$ matrix and it is\npositive definite.\n\nFinally, we perform a standard diagonalization so that we can write\n\\begin{equation}\nQ_n = C^{-1}_n \\left[\\begin{array}{cccc}\n \\lambda_1 \\\\\n & \\lambda_2 \\\\\n && \\ddots \\\\\n &&& \\lambda_m\n \\end{array}\\right] C_n\n\\end{equation}\nwhere the (positive) eigenvalues $\\lambda_i$ are in descending order:\n$\\lambda_i > \\lambda_{i+1}$.\nThe transformation matrix $C_n$ is the eigenvector matrix and it is\northogonal, $C^{-1}_n = C^T_n$.\nIts rows are the principal components of the matrix $Q_n$ and the value of\n$\\lambda_i$ represents the variance associated to the $i^{th}$ principal\ncomponent.\nSetting $P_n = Z_n C_n$, we have a time evolution for the values of the PCs\nuntil time step $n$.\n\nWe recall that the diagonalization procedure is not uniquely defined: once the\norder of the eigenvalues is chosen, one can still choose the ``sign'' of the\neigenvector for one-dimensional eigenspaces and a suitable orthonormal basis for\ndegenerate ones (in Section \\ref{sec:continuity} we will see some consequences\nof this fact).\nWe stress that, since only the eigenspace structure is an intrinsic property\nof the data, the PCs are quantities useful for their interpretation but they are\nnot uniquely defined.\n\n\n\n\\section{On-line analysis}\n\\subsection{Incremental algorithm}\n\\label{sec:incremental}\n\n\nThe aim of the algorithm is to construct the covariance matrix $Q_{n+1}$\nstarting from the old matrix $Q_n$ and the new observed data $x_{n+1}$.\nTo do this, at the beginning of step $(n+1)$, we consider the sums of the\nobservables and their squares after step $n$:\n\n\\begin{equation}\na_{n(j)} = \\sum^{n}_{i=1} X_{n(ij)}\n\\end{equation}\n\n\\begin{equation}\nb_{n(j)} = \\sum^{n}_{i=1} X_{n(ij)}^2\n\\end{equation}\n\nThese sums are updated on-line at every step.\nFrom these quantities we can recover the starting means and standard\ndeviations: $\\bar{x}_n = a_n \/ n$ and $(n-1) \\sigma^2_n = b_n - a^2_n \/ n$.\nSimilarly the current means and standard deviations are also simply obtained.\n\nThe key observation to get an incremental algorithm is the following identity:\n\\begin{equation}\nZ_{n+1} = \\left[ \\begin{array}{c}\n Z_n \\Sigma_n + \\Delta \\\\\n y\n \\end{array} \\right] \\Sigma^{-1}_{n+1}\n\\end{equation}\nwhere $y = x_{n+1} - \\bar{x}_{n+1}$ is a row vector and $\\Delta$ is an\n$n \\times m$ matrix built by repeating $n$ times the row vector\n$\\delta = \\bar{x}_n - \\bar{x}_{n+1}$.\nBy definition $nQ^{}_{n+1} = Z_{n+1}^T Z^{}_{n+1}$ and, expanding the preceding\nidentity, we get\n\\begin{align}\nn \\, Q^{}_{n+1} \\,\n = \\, & \\Sigma^{-1}_{n+1} \\Sigma_n Z^T_n Z_n \\Sigma_n \\Sigma^{-1}_{n+1} +\n \\nonumber \\\\\n & \\Sigma^{-1}_{n+1} \\Sigma_n (Z^T_n \\Delta) \\Sigma^{-1}_{n+1} +\n \\nonumber \\\\\n & \\Sigma^{-1}_{n+1} (\\Delta^T Z_n) \\Sigma_n \\Sigma^{-1}_{n+1} +\n \\nonumber \\\\\n & \\Sigma^{-1}_{n+1} \\Delta^T \\Delta \\Sigma^{-1}_{n+1} +\n \\nonumber \\\\\n & z^T z\n\\end{align}
where $z=y\\Sigma^{-1}_{n+1}$ and we used the fact that the $\\Sigma$s are\ndiagonal.\n\nRecalling that by hypothesis all the columns of the matrix $Z_n$ have zero mean\nand that all the entries in each column of the matrix $\\Delta$ are equal, we see that\nthe terms in parentheses vanish.\nThus, using $Z^T_n Z^{}_n = (n-1) Q_n$ from eq. (\\ref{eq:cov}),\n\\begin{align}\nn \\, Q_{n+1} \\, \n = \\, & \\Sigma^{-1}_{n+1} \\Sigma_n (n-1) \\, Q_n \\Sigma_n \\Sigma^{-1}_{n+1} +\n \\nonumber \\\\\n & n \\, \\Sigma^{-1}_{n+1} \\delta^T \\delta \\Sigma^{-1}_{n+1} +\n z^T z\n\\end{align}\nwhere $\\delta^T \\delta$, $z^T z$ and $Q_n$ are three $m \\times m$ matrices.\nWe now see that we can compute $Q_{n+1}$ by making operations only on\n$m \\times m$ matrices, with the sole knowledge of $Q_n$, the running sums $a_n$ and $b_n$,\nand the new sample $x_{n+1}$.\n\nThe computational advantage of this strategy is that we do not need to keep\nin memory all the sampled data $X_{n+1}$, and moreover we do not need to\nperform the explicit matrix product in eq. (\\ref{eq:cov}), which would require\na great amount of memory and time for $n \\approx 10^{5}$--$10^{6}$.\nConsequently this algorithm can be fruitfully applied in situations where the\nnumber of sensors $m$ is small (e.g. of the order of tens) but the data stream is\nexpected to grow quickly.\n\nThe meaning of the normalization procedure depends on the process under\nanalysis and the meaning that is associated to the data within the current\nstudy: both centering around the empirical mean and dividing by the empirical\nvariance can be avoided by respectively setting $\\Delta=0$ or $\\Sigma=I$.\n\nIn practice, one keeps $n_{\\rm start}$ observations, computes $Q$ as given by\neq. (\\ref{eq:cov}) and the relative $C$ (and hence $P_{\\rm start}$).\nThen the updated $Q$s are used, step by step, to compute the $n^{th}$ values of\nthe evolving PCs in the standard way as $p_n = z_n C_n$.\nIn this way, for any $n$, the last transformed sample is equal to the one that would\nresult from a batch analysis up to time step $n$.\nInstead, the whole sequence of the $p_n$ values with\n$n_{\\rm start} < n < n_{\\rm final}$ would not coincide with that from\n$P_{\\rm final}$, since the $Q$ matrices change every time a sample is added,\nand likewise for the $C$ matrices.\nThe most relevant implications of this fact will be considered in the next\nsubsection.\n\nThe library for the present implementation of the algorithm is available on the\nMathworks website under the name \\textit{incremental PCA}.
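\n\nAs an illustration, the following minimal NumPy sketch (our own naming, not the Mathworks library) maintains the running sums $a_n$, $b_n$ and updates $Q$ with each new sample according to the formula above; since the $\\Sigma$s are diagonal, all the products reduce to elementwise scalings and outer products:\n\\begin{verbatim}\nimport numpy as np\n\nclass IncrementalCovariance:\n    # Exact incremental update of the z-score covariance matrix Q_n:\n    # after each update, mean, std and Q equal the values that a batch\n    # computation on all the samples seen so far would produce.\n    def __init__(self, X_start):\n        self.n = X_start.shape[0]\n        self.a = X_start.sum(axis=0)          # running sums a_n\n        self.b = (X_start ** 2).sum(axis=0)   # running sums of squares\n        self.mean = self.a \/ self.n\n        self.std = np.sqrt(\n            (self.b - self.a ** 2 \/ self.n) \/ (self.n - 1))\n        Z = (X_start - self.mean) \/ self.std\n        self.Q = Z.T @ Z \/ (self.n - 1)\n\n    def update(self, x):\n        n = self.n\n        self.a += x\n        self.b += x ** 2\n        mean_new = self.a \/ (n + 1)\n        std_new = np.sqrt((self.b - self.a ** 2 \/ (n + 1)) \/ n)\n        s = self.std \/ std_new               # diag of Sigma_n Sigma_(n+1)^-1\n        d = (self.mean - mean_new) \/ std_new # delta scaled by Sigma_(n+1)^-1\n        z = (x - mean_new) \/ std_new         # standardized new sample\n        self.Q = ((n - 1) * self.Q * np.outer(s, s)\n                  + n * np.outer(d, d) + np.outer(z, z)) \/ n\n        self.n, self.mean, self.std = n + 1, mean_new, std_new\n        # variances and PCs of the updated Q, in descending order\n        w, C = np.linalg.eigh(self.Q)\n        return w[::-1], C[:, ::-1]\n\\end{verbatim}\nA quick sanity check is to compare the matrix $Q$ obtained after a sequence of updates with the one given by a batch computation on the full data matrix.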
\n\n\n\\subsection{Continuity issues}\n\\label{sec:continuity}\n\\begin{figure*}[t!]\n \\centering\n \\begin{tabular}{c}\n \\includegraphics[width=2.00\\columnwidth]{data.pdf} \\\\\t\t\n \\includegraphics[width=2.00\\columnwidth]{dataPCs.pdf}\n \\end{tabular}\n \\caption{Top, data-set used in the example.\n Bottom, variances associated to the PCs of the system.}\n\\label{fig:distillation}\n\\end{figure*}\nWe now consider the problem of the continuity for the PCs during the on-line\nanalysis.\nIn a batch analysis, one computes the PCs using all the data at the end of the\nsampling, obtaining the matrix $C_{\\rm final}$, and then, by applying this\ntransformation and its inverse, one can pass from the original data set to the\nset PCs values.\nOf course, since we are considering sampled data, we cannot speak of continuity in a strict sense. As previously stated, the temporal evolution of the data is not something relevant for the batch PCA. Regardless, we may intuitively expect to use a sampling rate of at least two times the signal bandwidth (for the sampling theorem) usually even more, i.e. ten times. We hence expect a limited difference between two samples in proportion to the overall signal amplitude. For sampled data we can then define continuity in a intuitive sense as a condition where the difference between two consecutive samples is smaller than a given threshold. A discontinuity in the original samples may be reflected in the principal components depending on the transformation coefficients, in detail\n\\begin{equation}\n\tp_n-p_{n-1}=z_{n}C_{n}-z_{n-1}C_{n-1}\n\\end{equation}\nthat is equal to\n\\begin{equation}\n\\label{difference}\n\tp_n-p_{n-1}=(z_{n}-z_{n-1})C_{n}+z_{n-1}(C_{n}-C_{n-1})\n\\end{equation}\nThe first term would be the same for the batch procedure (in that case with constant $C$) and the second term shows how $p$ is changing due to the change in coefficients $C$. We can regard this term as the source of discontinuities due to the incremental algorithm.\n\nTo understand the problems that could arise, from the point of view of the\ncontinuity of the PCs values, let us consider the on-line procedure more\nclosely.\n\nWe start with some the matrices $Q_{\\rm start}$ and $C_{\\rm start}$.\nAt a numerical level the eigenvalues are all different (since the machine\nprecision is at least of order $10^{-15}$), so that we have a set of formally\none-dimensional eigenspaces, from which the eigenvectors are taken.\nGoing on in the time sampling, we naturally create $m$ different time series of\neigenvectors.\n\nWe could expect that the difference of two subsequent eigenvectors of a given\nseries be slowly varying (in the sense of the standard euclidean norm), since\nthey come from different $C$s that are obtained from different $Qs$ which\ndiffer only slightly (i.e. 
But this is not fully justified, since the PCs are not uniquely defined and in\nsome cases two subsequent vectors of $p_n$ and $p_{n+1}$ can differ\nconsiderably, as shown in the example in Figure \\ref{fig:continuity}.\nThere are three ways in which one or more eigenvector series could exhibit a\ndiscontinuity (in the general sense discussed above).\n\\begin{itemize}\n\\item Consider the case of a given eigenvalue associated with two eigenspaces at two subsequent time steps, spanned by\n the vectors $c_n$ and $c_{n+1}$.\n They belong by hypothesis to two close ``lines'' but the algorithm can\n choose $c_{n+1}$ in the ``wrong'' direction.\n In this case, to preserve the on-line continuity as much as possible, we\n take the new PC to be $-c_{n+1}$, i.e. minus the eigenvector given by the\n diagonalization process at step $n+1$.\n The ``right'' orientation can be identified with simple considerations on\n the scalar product of $c_n$ with $c_{n+1}$ (see the sketch after this list).\n Recalling the considerations at the end of Section \\ref{sec:PCA}, this\n substitution does not change the meaning of our analysis.\n\\item Consider the case where the differences of a group of $\\nu$ contiguous\n eigenvalues are much smaller than the others: we can say that these\n eigenvalues correspond in fact to a degenerate eigenspace.\n In this case we can choose an infinite number of sets of $\\nu$ orthonormal\n vectors that can be legitimately considered our PCs, but the incremental\n algorithm can choose, at subsequent time steps, two bases which\n differ considerably.\n To overcome this problem, we must apply to the new transformation a\n change of basis in such a way as not to modify the eigenspace structure and\n to ``minimize'' the distance with the old basis. Although the case of a\n properly degenerate eigenspace on real data is virtually impossible, as the difference\n between two or more eigenvalues of $Q$ approaches zero, the numerical values of the associated PCs can become\n discontinuous in a real-time analysis. This by itself does not represent an error in an absolute sense in computing\n $C_n$, as the specific $C_n$ is the same as that which would be computed off-line.\n\\item In the two previous cases the discontinuity was due to an ambiguity of the diagonalization matrix. A third source of\n discontinuity can consist in the temporal evolution of the eigenvalues. Consider two one-dimensional eigenspaces, one associated with a variance that is increasing in time, the other with a variance that is decreasing: there will be a time step $\\bar{n}$ at which the two eigenspaces are degenerate. This is called a ``level crossing'' and corresponds in the algorithm to an effective swap in the ``correct'' order of the eigenvectors. To restore continuity, the two components must be swapped.\n\\end{itemize}
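\n\nThe sign-flip and swap corrections of the first and third cases can be illustrated by the following minimal sketch (our own naming; eigenvectors are stored as columns, and a greedy matching is used rather than the full degenerate-subspace rotation of the second case):\n\\begin{verbatim}\nimport numpy as np\n\ndef align_components(C_prev, C_new):\n    # Match each new eigenvector to the previous component it\n    # overlaps most with (handles level crossings), then flip signs\n    # so that consecutive eigenvectors have positive scalar product.\n    # Greedy matching: a proper assignment would be needed when\n    # several overlaps are comparable.\n    overlap = np.abs(C_prev.T @ C_new)\n    order = overlap.argmax(axis=1)\n    C_new = C_new[:, order]\n    signs = np.sign(np.sum(C_prev * C_new, axis=0))\n    signs[signs == 0] = 1.0\n    return C_new * signs\n\\end{verbatim}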
\n\n\\section{Examples and Results}\n\\label{sec:results}\nA publicly available data-set was used for this example: it consists of snapshot measurements of $27$ variables from a distillation column, with a sampling rate of one sample every three days, measured over $2.5$ years.\nSampling rate and time in general are not relevant \\textit{per se} for PCA. Nevertheless, since we have discussed the continuity issue, it is interesting to see how the algorithm behaves on physical variables representing the continuous evolution of a physical system.\n\nThe variables represent temperatures, pressures, flows and other kinds of measurements (the database is of industrial origin and the exact meaning of all the variables is not specified).\nDetails are available on-line \\cite{distillation}.\n\nThis kind of data set includes variables that are strongly correlated with each other, variables with a large variance and variables that remain almost constant over several samples.\nIn Figure \\ref{fig:distillation} we display the time evolution of the variables and the standard batch PCA.\nIn Figure \\ref{fig:resultDistillation} the evolution of the covariance matrix $Q$ and the incremental PCs are shown.\nNotice that the values $p_n$ are obviously not equal to the ones computed with the batch method until the last sample.\nThe matrix $Q$ steadily converges to the covariance matrix computed with the batch method.\nNote that at the beginning the Frobenius norm of the difference between the two matrices sometimes grows when samples are added; the number of samples needed for $Q$ to settle to its final value depends on the regularity of the data, and the variations in $Q$ may represent an interesting description of the analyzed process.\nThis behavior is expected for the estimator of the covariance matrix until $m \\gtrsim n$: while the sample covariance matrix is an unbiased estimator for $n \\rightarrow \\infty$, it is known to converge inefficiently \\cite{smith2005covariance}.\n\nIn order to quantify the efficiency of the algorithm, the computational cost of the proposed incremental solution has been compared with the batch implementation provided by the Matlab built-in function \\textit{PCA} on an Intel Core i7-7700HQ CPU running at 2.81 GHz with the Windows 10 operating system. The results are shown in Figure \\ref{fig:PerformanceTcrop}. The time required to execute the incremental algorithm grows linearly with the number of samples, while the batch implementation shows an execution time that grows with the size of the dataset. As reasonably expected, the batch implementation is more efficient than the incremental one when the PCA is computed on the whole dataset, while the incremental implementation is more efficient when samples are added incrementally. \n\\begin{figure}[htb]\n\t\\centering\n\t\t\\includegraphics[width=1.00\\columnwidth]{PerformanceT_Errorbar_Crop.pdf}\n\t\\caption{Time required to execute the incremental PCA and the batch implementation as a function of the number of samples. For the batch algorithm, both the time required to compute the PCA on the given number of samples (single case) and the cumulative time required to perform the PCA with each additional sample (cumulative) are shown. The computational time is measured empirically and can be affected by small fluctuations due to the activity of the operating system: in order to take this into account, the average times (darker lines) and their standard deviations (error bars) are computed over 33 trials. 
The batch implementation is more efficient than the incremental one when the PCA is computed on the whole dataset, while the incremental implementation is more efficient when samples are added incrementally.}\n\t\\label{fig:PerformanceTcrop}\n\\end{figure}\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[trim={5cm 2.8cm 5cm 0},clip,width=2.00\\columnwidth]{resultscrop.pdf}\n \\caption{Application of the incremental algorithm to the sample data-set.\n The uppermost picture shows the 27 principal components computed with the batch algorithm; the second from the top shows the incremental PCA computed without the continuity check; the third picture from the top represents the Frobenius norm of the difference between the covariance matrix computed through the incremental algorithm up to a given sample and the one computed on the whole sample-set.\n The lowermost picture represents the variances associated with the PCs (the eigenvalues of the covariance matrix). \\textcolor{black}{The covariance matrix and the variable values are the same for the batch algorithm and the on-line implementation when they are provided with the same samples. The differences in the pictures are due to the fact that the same transformation computed with the batch algorithm is applied to the whole set, while the one computed on-line changes with every sample.}}\n\\label{fig:resultDistillation}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n \\centering\t\n\\includegraphics[width=2.00\\columnwidth]{continuity3crop.pdf}\n \\caption{The figure shows the $9^{th}$ and the $10^{th}$ PCs computed without (top) and with (bottom) continuity constraints.\n Note the discontinuities indicated by the arrows.}\n\\label{fig:continuity}\n\\end{figure*}\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=1.00\\columnwidth]{postPCcrop.pdf}\n \\caption{Variances and cumulative variance of the PCs computed with the on-line algorithm \\textcolor{black}{including the continuity correction (blue) and the ones computed with the batch algorithm (yellow)}. The order of the on-line computed PCs is the one produced by the transformation. \\textcolor{black}{The differences between the two sets of PCs' variances are due to the continuity correction and to the fact that the variance of the on-line series is computed on the whole set of data.}}\n\\label{fig:postPC}\n\\end{figure}\n\nIn Figure \\ref{fig:postPC} the variance of the whole incremental PCA is shown.\nComparing it with Figure \\ref{fig:distillation} (bottom), it is evident that the incremental PCs that are not linearly independent over the whole sampling time have a slightly different distribution of the variance compared to the PCs computed with the batch algorithm.\nNevertheless they are a good approximation, in that they are still ordered by variance and most of the variance is in the first components (i.e. more than $90 \\%$ is in the first 5 PCs).\n\n\\section{Discussion and Conclusions}\n\\label{sec:conclusion}\nThe continuity issues arise for principal components with similar variances.\nWhen working with real data this issue often affects the components with smaller variance, which are usually dropped, and hence it can be reasonable to execute the algorithm without taking measures to preserve the continuity.\n\nNevertheless it should be noticed that, in some process analyses, the components with a smaller variance identify the \\textit{stable} part of the analyzed data, and hence the part identifying the process, e.g. 
the controlled variables in a human movement \\cite{lippi2011uncontrolled} or the response of a dynamic system known through input-output samples \\cite{huang2001process}.\n\nIn Figure \\ref{fig:continuity} the effects of the discontinuities are shown: two discontinuities present in the values of one of the principal components are fixed according to Section \\ref{sec:continuity}.\nWhen continuity is imposed the phenomenon is limited, but this comes at the price of modifying the exact computation of the eigenvectors of $Q$ at a given step in the case of a degenerate eigenspace.\nIn any case, the error introduced on the $Q$ eigenvectors depends on the threshold used to establish that two slightly different eigenvalues are degenerate, and so we can still consider the transformation to be ``exact'', although not at machine precision.\nIn the reported example, the two big discontinuities highlighted by the arrows disappear when continuity is imposed. Notice that the two PCs have different values in the corrected version also before the two big discontinuities, because of previous corrections on $Q$.\nThe choice as to whether or not continuity is imposed depends on the application, on the data-set and on the meaning associated with the analysis.\n\\subsection{Software}\nThe \\textsc{Matlab} software implementing the function and the examples shown in the figures is available at the URL:\n\\href{https:\/\/it.mathworks.com\/matlabcentral\/fileexchange\/69844-incremental-principal-component-analysis}{https:\/\/it.mathworks.com\/matlabcentral\/fileexchange\/69844-incremental-principal-component-analysis}\n\n\\section*{Acknowledgments}\nThe authors thank Prof. Thomas Mergner for his support of this work. \\\\\nThe financial support from the European project H$_{2}$R \\\\\n(http:\/\/www.h2rproject.eu\/) is appreciated. \\\\\nWe gratefully acknowledge financial support for the project MTI-engAge (16SV7109) by BMBF. \\\\\nG.C. has been supported by I.N.F.N. \\\\\n\\balance\n\\bibliographystyle{apalike}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe amount of human knowledge is rapidly increasing. Almost every discipline has been subdivided into many sub-disciplines. In the information age, humans, especially knowledge workers, need to keep learning during their whole lives. There is much demand for knowledge workers to estimate their knowledge quantitatively. The following are some examples:\n\\begin{itemize}\n\\item A computer engineer wants to estimate how much he has mastered of the collection of concepts and algorithms of the curriculum ``Information Retrieval'';\n\\item A researcher wants to predict how much he will understand of the contents of a lecture just from its poster; a subsequent decision of whether to attend it will be made based on the prediction;\n\\item Two researchers with different academic backgrounds want to find out the set of knowledge points of which they both have a solid understanding; these knowledge points can serve as the starting point of an academic communication;\n\\item A scientist wants to have a quantitative evaluation of his research concentrations over the last three years.\n\\end{itemize}\nMost of an individual's knowledge is obtained from postnatal learning. 
By recording and analyzing one's learning history, it is possible to estimate his knowledge quantitatively.\n\n\\subsection{Classification of an individual's activities}\nTo analyze one's learning history, an individual's daily activities are classified into two categories: learning activities and non-learning activities. Learning activities are those which are related to at least one piece of knowledge. The definition of a piece of knowledge will be explained in section 1.4.1.\nExamples of learning activities are reading books, taking courses, discussing a piece of knowledge with someone, etc.\n\n\\subsection{Capturing the text learning contents}\nMost learning processes can be associated with a piece of learning material. For example, when reading a book, the book is the learning material; when taking a course or having a discussion, the course and discussion contents can be regarded as the learning material. Some of the learning materials are text or can be converted to text. For example, when discussing a piece of knowledge with others, the discussion contents can be converted to text by exploiting speech recognition technologies. Similarly, if one is reading a printed book, the contents of the book can be recorded by a camera like Google Glass and then converted to text by utilizing Optical Character Recognition (OCR) technology. If the book is an electronic one, no conversion is needed; text can be extracted directly.\n\\subsection{Analyzing the text content with topic models}\nGiven the extracted or converted text, the main ideas of the text can be obtained in a quantitative manner with topic models \\cite{blei2012probabilistic}. With probabilistic topic models, the main ideas of a piece of text can be computed as a distribution over a series of topics. Each topic is expressed as a word distribution over a vocabulary set.\nWith the calculated topic distribution and word distribution, further analysis with knowledge model is available.\n\\subsection{Analyzing the learning history with knowledge model}\nKnowledge model can quantitatively evaluate an individual's knowledge based on his learning history.\n\\subsubsection{Organization of human knowledge}\nIn knowledge model, all the knowledge pieces are organized in a tree structure. Every node of the knowledge tree can be referenced by a name. A branch node represents a discipline or sub-discipline of knowledge, such as math, computer science, information retrieval etc. A leaf node represents a concrete piece of knowledge which is explicitly defined and has been widely accepted, such as Bayes' theorem, mass-energy equivalence, the Expectation-Maximization algorithm etc. A leaf node of the knowledge tree is called a knowledge point; a branch node of it is called a knowledge branch. The knowledge tree can be constructed and maintained empirically by a group of experts of each discipline. Fig. 1 is an example of the knowledge tree based on a classification from Wikipedia\\footnote{It can be found at \\url{https:\/\/en.wikipedia.org\/wiki\/Branches_of_science}}. To keep it simple, other nodes of the tree are omitted.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[height=2.2in, width=4in]{1knowledgeTree}\n\t\\caption{An example of the knowledge tree.}\n\\end{figure}\n\n\\subsubsection{Learning sessions}\nAn individual's learning activities can be separated into a series of learning sessions based on some specific criteria, such as the intervals between activities or the topics of activities. 
Details of how to discriminate learning sessions will be discussed in section 3.\nTable 1 illustrates some examples of learning sessions.\n\n\\subsubsection{An individual's learning history}\nEach individual has a knowledge tree which records his learning history about each knowledge node. Each node of the tree has a data structure which records the individual's every learning experience about the corresponding knowledge point or knowledge branch. Each recorded learning experience has the following 4 attributes:\n\\begin{itemize}\n\\item Learning sequence ID\\\\\nRecording the sequence ID of the learning experience.\n\\item Stop time\\\\\nRecording when the learning session stopped.\n\\item Duration\\\\\nRecording the duration of the learning session.\n\\item Proportion\\\\\nRecording the knowledge point's share of the learning contents. The calculation of the proportion is based on the results of topic model analysis; details of the calculation will be discussed in section 3.\n\\end{itemize}\nTable 2 is an example of a learning history; it is a snippet of a subject's learning history of the knowledge point ``$Bayes'~ rule$''. A sketch of a data structure holding such records is given below.\n\n\\begin{table*}[htbp]\n\t\\footnotesize\n\t\\centering\n\t\\caption{Some examples of learning sessions}\n\t\\begin{tabular}{|c|c|c|c|} \\hline\n\t\tDate& Activities & Duration (s) & Captured text contents \\\\\\hline\n\t\t... &&& \\\\\\hline\n\t\t\\multirow{2}{20mm}{2016-03-13 9:30:00}&\\multirow{2}{35mm}{Started reading a document}& \\multirow{4}{*}{3610 } &\t\\multirow{4}{70mm}{... Probabilistic models, such as hidden Markov\n\t\t\tmodels or Bayesian networks, are commonly ...} \\\\\t\n\t\t&&& \\\\ \\cline{1-2}\n\t\t\\multirow{2}{20mm}{2016-03-13 10:30:10}&\\multirow{2}{35mm}{Stopped reading the document}&& \\\\\n\t\t&&& \\\\\\hline\n\t\t\n\t\t\\multirow{2}{20mm}{2016-03-13 13:30:20}&\\multirow{2}{35mm}{Started attending a class}& \\multirow{4}{*}{2710 } &\t\\multirow{4}{70mm}{... how does the expectation maximization algorithm work ...} \\\\\t\n\t\t&&& \\\\ \\cline{1-2}\n\t\t\\multirow{2}{20mm}{2016-03-13 14:15:30}&\\multirow{2}{35mm}{Stopped attending the class}&& \\\\\n\t\t&&& \\\\\\hline\n\t\t\n\t\t\\multirow{2}{20mm}{2016-03-13 15:10:10}&\\multirow{2}{35mm}{Started a discussion}& \\multirow{4}{*}{930 } &\t\\multirow{4}{70mm}{... I think your understanding of Bayes' theorem is wrong ...} \\\\\t\n\t\t&&& \\\\ \\cline{1-2}\n\t\t\\multirow{2}{20mm}{2016-03-13 15:25:40}&\\multirow{2}{35mm}{Stopped the discussion}&& \\\\\n\t\t&&& \\\\\\hline\n\t\t\n\t\t... &&& \\\\\\hline\n\t\\end{tabular}\t\n\\end{table*}\n\n\\begin{table*}\n\t\\centering\n\t\\caption{A subject's learning history of the knowledge point ``$Bayes'~ rule$''}\n\t\\begin{tabular}{|c|c|c|c|} \\hline\n\tLearning sequence ID&\tLearning stop time&\tDuration (s)&\tProportion\\\\ \\hline\n\t1&\t2\/27\/2016 18:41&\t1171&\t1.22\\%\\\\ \\hline\n\t2&\t2\/27\/2016 18:47&\t220&\t2.12\\%\\\\ \\hline\n\t3&\t2\/29\/2016 16:08&\t2523&\t1.17\\%\\\\ \\hline\n\t4&\t2\/29\/2016 16:55&\t330&\t0.66\\%\\\\ \\hline\n\t5& 3\/3\/2016 16:21& \t1710& \t1.17\\%\t\\\\\n\t\t\\hline\\end{tabular}\n\\end{table*}
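\n\nAs a concrete illustration, the record structure described above could be sketched as follows. This is a minimal sketch of ours, not part of the original system; the class names are chosen for exposition.\n\\begin{verbatim}\nfrom dataclasses import dataclass, field\nfrom datetime import datetime\n\n@dataclass\nclass LearningExperience:\n    sequence_id: int     # learning sequence ID\n    stop_time: datetime  # when the learning session stopped\n    duration_s: float    # duration of the learning session in seconds\n    proportion: float    # knowledge point's share of the contents\n\n@dataclass\nclass KnowledgeNode:\n    # A node of the individual's knowledge tree: a knowledge branch\n    # if it has children, a knowledge point if it is a leaf.\n    name: str\n    children: list = field(default_factory=list)\n    experiences: list = field(default_factory=list)\n\\end{verbatim}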
\\subsubsection{Calculation of an individual's familiarity measure about a knowledge point}\nWith an individual's learning history of a knowledge point, it is possible to measure the individual's familiarity with the knowledge point. There is no unanimous agreement on how exactly previous learning experiences affect an individual's current understanding of a knowledge point. Therefore, there are many possible ways of calculating the familiarity measure. Details of the calculation will be discussed in section 3. Figure 2 illustrates a flowchart of using topic model and knowledge model to estimate an individual's knowledge quantitatively. Each hexagon of the diagram indicates a step of processing; the following rectangle indicates the results of the processing. A preliminary system for evaluating an individual's knowledge is implemented in section 3.\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[height=3.6in, width=2.1in]{2flowchart}\n\t\\caption{A flowchart to estimate an individual's knowledge quantitatively.}\n\\end{figure}\n\n\n\\section{Related works}\nRecording an individual's learning history is vital for knowledge model. Bush envisioned the $ memex $ system, in which individuals could compress and store all of their personally experienced information, such as books, records, and communications \\cite{bush1945atlantic}. Inspired by $ memex $, Gemmell et al. developed a project named `$ MyLifeBits $' to store all of a person's digital media, including documents, images, sounds, and videos \\cite{gemmell2002mylifebits}. Knowledge model shares with $ memex $ and `$ MyLifeBits $' the idea of recording an individual's digital history, but with a different intention: $ memex $ and `$ MyLifeBits $' are mainly for re-finding or reviewing personal data, while knowledge model is for quantitatively evaluating a knowledge worker's knowledge.\n\\paragraph*{Probabilistic topic model.} Probabilistic topic models are used to analyze the topics of a collection of text documents. Each topic is represented as a multinomial distribution of words over a vocabulary set, and each document is represented as a distribution over the topics \\cite{steyvers2007probabilistic, blei2012probabilistic}. Probabilistic latent semantic analysis (PLSA) \\cite{hofmann1999probabilistic} and Latent Dirichlet Allocation (LDA) \\cite{blei2003latent} are two representative probabilistic topic models. PLSA models each word of a document as a sample from a mixture model. It has the limitation that the parameterization of the model is susceptible to over-fitting. In addition, it does not provide a straightforward way to make inferences about new documents \\cite{lu2011investigating}. LDA is an unsupervised algorithm that models each document as a mixture of topics. It addresses some of PLSA's limitations by adding a Dirichlet prior on the per-document topic distribution.\n\\paragraph*{Forgetting curve.} Human memory declines over time. In 1885, Hermann Ebbinghaus hypothesized the exponential nature of forgetting \\cite{ebbinghaus1913memory}. Ebbinghaus found that Equation 1 can be used to describe the proportion of memory retention after a period of time, where $ t $ is the time in minutes counting from one minute before the end of the learning, and $ k $ and $ c $ are two constants which equal 1.84 and 1.25 respectively\\footnote{It can be found at \\url{http:\/\/psychclassics.yorku.ca\/Ebbinghaus\/memory7.htm}}.\n\\begin{equation}\nb = k\/((\\log t)^c + k)\n\\end{equation}\nFigure 3 shows the percentage of memory retention over time calculated by Equation 1.\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[height=2.4in, width=2.4in]{3forgettingcurve}\n\t\\caption{The percentage of memory retention over time calculated by Equation 1.}\n\\end{figure}\nAverell and Heathcote proposed other forms of forgetting curves; there is no unanimous agreement on how human memory declines, and psychologists have debated the form of the forgetting curve for a century \\cite{averell2011form}.
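\nFor concreteness, the following minimal sketch evaluates Equation 1. The base-10 logarithm follows Ebbinghaus' original formula, and the helper name \\texttt{retention} is ours.\n\\begin{verbatim}\nimport math\n\nK = 1.84\nC = 1.25\n\ndef retention(t_minutes):\n    # Proportion of memory retained t_minutes after learning\n    # (Equation 1, valid for t_minutes >= 1).\n    return K \/ (math.log10(t_minutes) ** C + K)\n\nprint(retention(60 * 24))      # after one day:  about 0.30\nprint(retention(60 * 24 * 7))  # after one week: about 0.25\n\\end{verbatim}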
\n\n\\section{A preliminary knowledge evaluating system}\nA preliminary knowledge evaluating system was developed to test the feasibility of knowledge model.\nBecause of the complexity of human learning activities and the workload of programming, it is impractical to handle all the learning situations once and for all. Therefore, the system only handles the situation in which a user is reading Portable Document Format (PDF) documents. Other document formats and learning methods like listening and discussing will be considered in further research.\n\nA plug-in for the Adobe Acrobat Reader application was developed. With the plug-in, the system can detect an individual's PDF reading activities and divide them into a sequence of learning sessions. Meanwhile, it extracts the text contents of each learning session, uses topic model to analyze the topics of the text contents, selects the topics which are knowledge points, and finally updates the individual's learning histories of the related knowledge points. With the learning histories, the individual's familiarity measure of each knowledge point at time $ t $ can be calculated with knowledge model.\n\n\\subsection{An Algorithm to Discriminate Learning Sessions}\nDiscriminating learning sessions is critical to knowledge model because it is essential to know how many times, and for how long each time, the individual has learned a knowledge point. Further analyses are based on these results. Algorithm 1 was devised to discriminate learning sessions when the user is reading. The algorithm periodically checks what the individual is doing. Any of the following three conditions indicates that a learning session has started.\n\\begin{enumerate}\n\t\\item The individual opens a document;\n\t\\item The foreground window has switched to an opened document from another application (APP), such as a game APP;\n\t\\item After the computer has been idle for a period of time, some mouse or keyboard inputs are detected, which indicates that the individual has come back from other things; meanwhile, the foreground window is an opened document.\n\\end{enumerate}\n\nIf any of the following three conditions is satisfied, a learning session is assumed to have terminated.\n\\begin{enumerate}\n\t\\item The individual closes a document;\n\t\\item The foreground window has switched to another APP from a document;\n\t\\item The foreground window is a document, but the computer has been idle for a certain period of time without any mouse or keyboard inputs detected; the individual is assumed to have left to do other things.\n\\end{enumerate}\n\n\nThe algorithm periodically checks whether any of the conditions listed above is satisfied; if so, it records that a learning session has started or stopped. When a document is opened or closed, the PDF Reader APP sends the plug-in a message, so there is no need to check these two actions. The duration of a learning session equals the interval between its start and stop time. 
Page numbers are recorded for the purpose of extracting the learning content, which will be analyzed with topic model.\n\n\\begin{algorithm}\n\t\\caption{An algorithm to discriminate one's learning sessions when reading}\n\t\\label{alg1}\n\t\\begin{algorithmic}[1]\n\t\t\\WHILE{The PDF Reader APP is running}\n\t\t\\IF{The foreground window has switched to another APP from a document \\textbf{OR}\\\\\n\t\t\tthe computer has been idle for a certain period of time when showing a document}\n\t\t\\STATE record that the document's learning session has stopped;\n\t\t\\ELSIF{The foreground window has switched to an opened document from another APP \\textbf{OR}\\\\\n\t\t\tthe individual has come back to continue reading a document}\n\t\t\\STATE record that a learning session about the current document has started;\n\t\t\\ELSIF{There is no APP and document switch}\n\t\t\\STATE check and record if there is a page switch;\n\t\t\\ENDIF\n\t\t\\STATE keep silent for T seconds;\n\t\t\\ENDWHILE\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\nFigure 4 shows some examples of discriminated learning sessions. Attribute ``$ did $'' means document ID, which indexes the documents uniquely. Attribute ``$ actiontype $'' indicates the type of an action: ``$ Doc~ Act $'' means a document has been activated, and ``$ Page~ Act $'' is defined similarly; ``$ Doc~ DeAct $'' means a document has been deactivated, that is to say, a learning session has stopped. Attribute ``$ page $'' indicates a page number. Attribute ``$ duration $'' records how long a page has been activated, in seconds. If the interval between two learning sessions is less than a certain threshold, such as 30 minutes, and their learning material is the same, for example the same document, they are merged into one session (a sketch of this merging rule is given below). Therefore, ``$ Session 2 $'' and ``$ Session 3 $'' are merged into one session.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[height=2.45in, width=2.8in]{4learningsession}\n\t\\caption{Some examples of discriminated learning sessions.}\n\\end{figure}
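\n\nThe merging rule can be sketched as follows; this is a minimal sketch of ours, assuming each session is represented by its document ID and its start and stop timestamps in seconds.\n\\begin{verbatim}\nGAP_THRESHOLD_S = 30 * 60  # merge sessions closer than 30 minutes\n\ndef merge_sessions(sessions):\n    # sessions: list of dicts with keys 'did', 'start' and 'stop'\n    # (timestamps in seconds), sorted by 'start'.\n    merged = []\n    for s in sessions:\n        if (merged\n                and s['did'] == merged[-1]['did']\n                and s['start'] - merged[-1]['stop'] <= GAP_THRESHOLD_S):\n            # Same document and a small gap: extend the last session.\n            merged[-1]['stop'] = max(merged[-1]['stop'], s['stop'])\n        else:\n            merged.append(dict(s))\n    return merged\n\\end{verbatim}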
\n\n\\subsection{Analyzing the learning contents with topic model}\nWhen discriminating learning sessions, the text learning contents are also extracted. Because Algorithm 1 records the exact set of pages the individual has read during a learning session, only the related pages' text contents are extracted. This strategy brings in fewer errors than extracting all the text contents of the whole document: a document may contain many pages, usually some of them are not read during a learning session, and it is unreasonable to count them in. The inputs of a probabilistic topic model are a collection of $ N $ documents, a vocabulary set $ V $, and the number of topics $ k $. The outputs of a probabilistic topic model are the following:\n\n\\begin{itemize}\n\t\\item $ k $ topics, each a word distribution: $ \\{\\theta_{1},...,\\theta_{k}\\} $;\n\t\\item Coverage of topics in each document $ d_{i} $: $ \\{\\pi_{i1},...,\\pi_{ik}\\} $;\\\\\n\t $ \\pi_{ij} $ is the probability of document $ d_{i}$ covering topic $ \\theta_{j} $.\n\\end{itemize}\n\nIn the implementation, $ N $ is set to 1 because there is only one document during a learning session, and $ k $ is currently set to 2. The LDA analysis of the learning contents is based on the implementation of MeTA, which is an open source text analysis toolkit\\footnote{The package is available at \\url{https:\/\/meta-toolkit.org}}. Before the topic model analysis, the text learning contents are scanned to find word groups that form multi-word knowledge points, such as ``inverse document frequency'' (IDF). Each such word group is merged into one word like inverse-document-frequency. After the merging of multi-word knowledge points, the text contents are analyzed with the unigram method of LDA.\n\n\\subsection{Computation of a knowledge point's share of the learning contents}\nTopic model can calculate each topic's contribution to the learning contents and each term's share of a topic. Each knowledge point can be allocated a share based on its topic share. The share is an estimation of how much the learning contents concern the knowledge point.\nOnly the top $ m $ terms of each topic are considered. Each related topic term's share is calculated with Equation 2, where $ \\varphi_{ij} $ is the share of term $ i $ of topic $ j $, $ \\pi_{j} $ is topic $ j $'s share of the learning contents, and $ p(t_{i}|\\theta_{j}) $ is term $ i $'s share of topic $ j $. A knowledge point's share equals its topic term share.\n\n\\begin{equation}\n\\varphi_{ij} = \\frac{\\pi_{j}p(t_{i}|\\theta_{j})}{\\sum_{j=1}^k\\sum_{i=1}^m\\pi_{j}p(t_{i}|\\theta_{j})}\n\\end{equation}
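\n\nA minimal sketch of Equation 2 (ours, with hypothetical variable names) is the following, where \\texttt{pi} holds the $ k $ topic shares and \\texttt{p\\_t} the per-topic probabilities of the top $ m $ terms.\n\\begin{verbatim}\ndef term_shares(pi, p_t):\n    # pi: list of k topic shares; p_t: k lists of m term probabilities.\n    # Returns phi[j][i], the share of term i of topic j (Equation 2).\n    total = sum(pi[j] * p_t[j][i]\n                for j in range(len(pi))\n                for i in range(len(p_t[j])))\n    return [[pi[j] * p_t[j][i] \/ total for i in range(len(p_t[j]))]\n            for j in range(len(pi))]\n\\end{verbatim}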
\n\n\\subsection{Computation of the familiarity measure of a knowledge point at a particular time}\nWith the recognized learning sessions and the results of the topic model analysis, an individual's learning history of a knowledge point can be generated. Table 2 shows an example of an individual's learning history of a knowledge point.\nWith the learning history, there are many possible ways to calculate the individual's familiarity measure of a knowledge point. The simplest method is just considering the cumulative learning time of each knowledge point, multiplied by its corresponding share in each learning session. However, the human brain works in a very complicated manner when learning, and many factors affect how effectively an individual can learn a knowledge point. For example, human memory declines: there is much difference between having learned a knowledge point yesterday and three years ago. Moreover, subsequent learning of a knowledge point will be associated with what has been learned previously. A simplified method of calculating familiarity measures is used in this preliminary implementation. The computation is based on the following hypotheses:\n\\begin{itemize}\n\\item Each learning experience of a knowledge point is independent of the other learning experiences of it;\n\\item The effect of each learning experience declines in time according to Ebbinghaus' forgetting curve of Equation 1;\n\\item The familiarity measure of a knowledge point is the additive effect of all the learning experiences of it.\n\\end{itemize}\nEquation 3 is used to calculate an individual's familiarity measure of knowledge point $ k_{i} $ at time $ t $. The input is a sequence of $ n $ learning sessions, where $ d_{j} $ is session $ j $'s duration in seconds, $ \\xi_{ij} $ is knowledge point $ k_{i} $'s share in session $ j $, calculated with Equation 2, and $ b_{j} $ is the proportion of memory retention of learning session $ j $ at time $ t $, calculated with Equation 1.\n\\begin{equation}\nF_{k_{i}} = \\sum_{j=1}^nd_{j}*\\xi_{ij}*b_{j}\n\\end{equation}\n\nA relative familiarity measure can be calculated by dividing the familiarity measures by their mean value.
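\n\nA minimal sketch of Equation 3 (ours), reusing the \\texttt{retention} helper sketched in section 2, is the following; the tuple layout of the history records is chosen here for exposition.\n\\begin{verbatim}\ndef familiarity(history, now_minutes):\n    # history: list of (stop_time_minutes, duration_s, share) tuples\n    # for knowledge point k_i; now_minutes: the evaluation time t on\n    # the same clock.\n    total = 0.0\n    for stop_time, duration_s, share in history:\n        elapsed = max(now_minutes - stop_time, 1.0)\n        total += duration_s * share * retention(elapsed)\n    return total\n\\end{verbatim}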
\n\\subsection{Results}\nA subject's 13 days (from 2\/23\/2016 to 3\/6\/2016) of PDF document reading history were recorded and analyzed. During this period of time, the subject read 38 documents 417 times.\nFor the simplicity of calculation, pages on which the subject spent less than 30 seconds are ignored; learning sessions which are shorter than 150 seconds are also ignored. After the filtering, a total of 43 learning sessions were recognized and 69 knowledge points were captured. Table 3 illustrates the subject's statistics and familiarity measures of 5 randomly selected knowledge points; the calculation time is 2016-03-29 19:24:00. The values of the familiarity measures change over time, because human memory declines over time.\n\\begin{table*}\n\t\\centering\n\t\\caption{A subject's statistics and familiarity measures of 5 randomly selected knowledge points}\n\t\\begin{tabular}{|l|c|c|l|c|} \\hline\n\t\t\n\t\t\\multirow{2}{*}{Knowledge point name}&\tLearning&\tCumulative&\t\\multirow{2}{*}{Latest learning date}&\tFamiliarity \\\\\n\t\t&frequency& learning time (s)& &\tmeasure \\\\ \\hline\n\t\tBayes' rule&\t5&\t5954&\t3\/3\/2016 16:21&\t15.14 \\\\ \\hline\n\t\tConditional entropy&\t3&\t6294&\t2\/24\/2016 16:13&\t25.75\\\\ \\hline\n\t\tPosterior distribution&\t5&\t4715&\t3\/5\/2016 17:44&\t35.05\\\\ \\hline\n\t\tLagrange multiplier&\t1&\t751&\t2\/27\/2016 19:52&\t3.97\\\\ \\hline\n\t\tExpectation-maximization&\\multirow{2}{*}{12}\t&\\multirow{2}{*}{11448}\t&\\multirow{2}{*}{3\/3\/2016 16:21}\t&\t\\multirow{2}{*}{122.54}\\\\ \n\t\talgorithm& &\t &\t & \\\\\n\t\t\n\t\t\\hline\\end{tabular}\n\\end{table*}\n\n\n\\section{Potential applications of knowledge model}\nWith a quantitative evaluation of an individual's knowledge, many decisions which used to be made empirically can now be based on a numerical analysis. The following are some examples:\n\\subsection{Searching common topics}\nAs mentioned in section 1, knowledge model can be used to efficiently discover common topics for people with different educational or cultural backgrounds. A discipline or sub-discipline they both are interested in can be selected first; then the set of knowledge points with which they both are familiar can be found based on the familiarity measures. These knowledge points can serve as the common topics of their conversation. This application can be extended to discover common topic terms which are not defined as knowledge points, such as a movie star's name.\n\\subsection{Selecting a lecture}\nIt is common for a knowledge worker to take part in all kinds of academic lectures. It is frustrating and a waste of time when a lecture is too recondite to understand. To help a potential audience member predict how much he can understand of the contents of a lecture, the lecturer can list on the poster a set of knowledge points which are important for understanding it; the audience can then check their familiarity measures of those knowledge points. A score of how much he can understand it can be calculated based on the familiarity measures.\n\\subsection{Evaluating a scientist's research concentrations in a period of time}\nBecause an individual's learning histories of all knowledge points have been recorded, it is convenient to extract the fragments of the learning histories of a period of time, for example the last three years, and then use knowledge model to calculate the familiarity measures for that period. The set of knowledge points which have larger familiarity measures are the scientist's research concentrations.\n\\subsection{Selecting appropriate referees for a research paper}\nWhen a research paper is submitted for reviewing, choosing the optimal referees from a candidate set is a difficult problem. At present it is usually decided empirically. With knowledge model, an objective numerical analysis is possible. For example, each candidate referee's research concentrations can be calculated, the submitted paper's knowledge points and their corresponding shares can also be calculated, and by matching these values the optimal referee list can be obtained.\n\\subsection{Evaluating a knowledge worker's expertise on a discipline or sub-discipline}\nWith an individual's familiarity measures of all the knowledge points, it is not hard to evaluate his expertise on a discipline or sub-discipline. The knowledge points are organized in a tree structure, and each subtree represents a discipline or sub-discipline of knowledge. The evaluation can be made based on how many knowledge points the individual has mastered and on the average familiarity measure of the subtree.\n\n\\section{Conclusion}\nIn this paper, a method named knowledge model, which can quantitatively evaluate a knowledge worker's knowledge, is proposed. The main idea is to record an individual's learning history of each piece of knowledge, and then use the learning history as an input to calculate the individual's familiarity measure of each knowledge point. A preliminary knowledge evaluating system was developed; it analyzes an individual's PDF document reading activities, then uses topic model and knowledge model to calculate the individual's familiarity measures of the captured knowledge points. An algorithm for discriminating learning sessions was devised. In addition, a method of calculating an individual's familiarity measure of a knowledge point based on its learning history was proposed.\n\n\\section{Discussion}\nEvaluating a person's possession of knowledge is very complicated. This part discusses related issues about individual knowledge evaluation.\n\\subsection{Cognitive assumption of knowledge model}\nKnowledge model focuses on evaluating a person's gaining of conceptual knowledge, because it is difficult for a machine to observe the learning of procedural knowledge. However, since learning of procedural knowledge is usually accompanied by learning of conceptual knowledge, the model has some ability to assess the gaining of procedural knowledge. Knowledge model divides a person's learning activities into two categories: Observable Learning Activities (OLA) and Unobservable Learning Activities (ULA). For OLA, the learning start and stop times and the text contents are observable; examples are reading, listening, discussing, writing, and speaking. For ULA, the learning start and stop times and the text contents are unobservable; examples are rumination and meditation. 
Therefore, we cannot analyze ULA.\n\nIf the interval between the evaluation time and the last learning time of a knowledge point is large, the third variable of Equation 3 approaches a constant, because the speed of memory decay attenuates. If we set the third variable of Equation 3 to a constant and sum all the knowledge points' familiarity measures, we obtain a value that is proportional to a person's total time spent on learning; that is to say, knowledge model assumes that the quantity of a person's knowledge is proportional to his\/her total time spent on learning knowledge. This quantity is then distributed over different knowledge points. A set of related knowledge points forms a domain. Since ULA are unobservable, the quantity and distribution of ULA are also unobservable. Knowledge model does not make assumptions about the total quantity of ULA. However, it assumes the distribution of ULA is equivalent to that of OLA (at least at the domain level). To put it simply, knowledge model does not assume how much time a person has spent on rumination; it assumes the contents of ruminations are related to, and proportional to (at least at the domain level), what the person has experienced. E.g., a farmer who has nothing to do with quantum physics cannot regularly ruminate about topics in quantum physics, whereas a quantum physicist will; a person cannot ruminate on a concept he has never seen or heard, unless he is the creator of the concept.\n\n\n\\subsection{Normalization among knowledge points}\nThe calculation of the familiarity measure mainly considers the individual's time devotion to a knowledge point and its share of each learning content. However, the complexity levels of knowledge points are usually different. For example, spending 20 minutes is sufficient for a normal knowledge worker to understand and remember the Pythagorean theorem, but it is usually not enough to understand a complicated algorithm like LDA. Therefore, the familiarity measures should be normalized among knowledge points. Each knowledge point can be allocated a complexity level, and the familiarity measure can be multiplied by a factor which is a function of the knowledge point's complexity level. The complexity level of a knowledge point can be decided empirically by a group of experts when constructing the knowledge tree; another method for calculating the complexity level is by examining its Understanding Graph \\cite{abs-1711-06553}. Another method for normalizing familiarity measures among knowledge points is by analyzing the Understanding Tree \\cite{1612.07714}. If we want to evaluate a person's possession of knowledge about a domain, the average familiarity measure of that domain can be used.\n\n\\subsection{Normalization among knowledge workers}\nIf knowledge model is used for self-evaluation, as in most applications mentioned in section 4, it is not essential to normalize familiarity measures among knowledge workers: a standardized value, obtained by subtracting the mean value and then dividing by the standard deviation, is sufficient (a sketch is given below). If knowledge model is used for making decisions in a competition, such as using knowledge model analysis as a substitute for an examination (test), normalization of familiarity measures among knowledge workers is essential. The normalization can be made by multiplying the familiarity measures by a factor that is determined by the subject's characteristics, such as Intelligence Quotient (IQ). 
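\n\nBoth normalizations can be sketched as follows; this is a minimal sketch of ours, where the per-person factor is assumed to have been determined by testing as described below.\n\\begin{verbatim}\nimport statistics\n\ndef standardize(measures):\n    # Z-score of one person's familiarity measures (self-evaluation).\n    mu = statistics.mean(measures)\n    sigma = statistics.stdev(measures)\n    return [(m - mu) \/ sigma for m in measures]\n\ndef rescale(measures, factor):\n    # Per-person factor for comparisons among knowledge workers.\n    return [m * factor for m in measures]\n\\end{verbatim}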
\n\nAccording to \\cite{hunt2010human}, human IQs are normally distributed with a mean value of 100 and a standard deviation of 15. Approximately two-thirds of all scores lie between 85 and 115. Five percent (1\/20) are above 125, and one percent (1\/100) are above 135. Similarly, five percent are below 75 and one percent below 65. Therefore, in many circumstances, such as evaluating a group of undergraduates from the same university and department, we can hypothesize that the people's IQs are equivalent. Even under this assumption, we cannot compare people's familiarity measures directly, because of the existence of ULA, unless the gap between the familiarity measures is distinct.\n\nThe factor can be determined by testing. E.g., by testing we may conclude that person A's familiarity measure of 100 is equivalent to person B's familiarity measure of 80; if we define person A's factor as 1, then person B's factor is $100\/80 = 1.25$. Cheating is inevitable in a lot of competitions; it can be detected by sampling a set of knowledge points and examining them by test.\n\n\\subsection{Using topic as knowledge unit}\nKnowledge model uses the knowledge point (concept) as a unit of knowledge. A knowledge point is defined as a piece of knowledge that is explicitly defined and has been widely accepted; it is embodied by a concept. This definition makes an attempt to exclude concepts that are unsubstantiated, such as the ones presented in a newly published paper. The name conveys the idea that a ``knowledge point'' is just a ``point'' in the tremendous tree or network of knowledge. It is trivial or less meaningful to examine one isolated knowledge point; it is more meaningful to analyze a group of related knowledge points, such as all the knowledge points of a domain, or knowledge points that are organized by an Understanding Graph \\cite{abs-1711-06553} or by a topic of a topic model.\n\nA topic of a topic model can also be considered as a unit of knowledge \\cite{wallach2006topic,huang2016framework}. It has the advantage that a topic is naturally a set of related concepts, and the topics can be calculated automatically by machines. The disadvantage is that the topics are not fixed: when the corpus changes, the topics also change. This characteristic makes a topic unsatisfactory as a unit of knowledge. If we choose a topic of a topic model as a unit of knowledge, Equation 4 can be used to calculate a person's familiarity measure of topic $ T_{i} $ at time $ t $. The input is a sequence of $ m $ learning sessions that are related to topic $ T_{i} $, where $ d_{j} $ is session $ j $'s duration, $ \\rho_{ij} $ is topic $ T_{i} $'s share in session $ j $, calculated by the topic model, and $ b_{j} $ is the proportion of memory retention of learning session $ j $ at time $ t $, calculated with Equation 1. Each document in the corpus matches the text contents of a learning experience.\n\n\\begin{equation}\nF_{T_{i}} = \\sum_{j=1}^md_{j}*\\rho_{ij}*b_{j}\n\\end{equation}\n\n\\subsection{Constructing concepts pool}\nA concepts pool is defined as a set of concepts that are selected based on some standards. The following lists several types of concepts pools:\n\\begin{itemize}\n\t\\item Type 1 concepts pool is corpus based; the concepts are selected by analyzing a corpus.\n\t\\begin{itemize}\n\t\t\\item Type 1A concepts pool is constructed by checking the Term Frequency (TF) of a concept in a corpus. 
If the TF is larger than a threshold, the concept is selected.\n\t\t\\item Type 1B concepts pool is constructed by checking the Inverse Document Frequency (IDF) of concepts.\n\t\\end{itemize}\n\t\\item Type 2 concepts pool is familiarity measure based; the concepts are selected by checking a person's or a group of people's familiarity measures at some time.\n\t\\begin{itemize}\n\t\t\\item Type 2A concepts pool is constructed by checking a person's familiarity measure of a concept at time $ t $. If the familiarity measure is larger than a threshold, the concept is selected.\n\t\t\\item Type 2B concepts pool is constructed by checking a group of people's familiarity measures at some time.\n\t\\end{itemize}\n\t\\item Type 3 concepts pool is based on the structure of knowledge, such as selecting concepts from a concept's n-level neighborhood in an Understanding Map \\cite{abs-1711-06553} or a concept map \\cite{mcclure1999concept}.\n\\end{itemize}\nBy comparing a Type 1A concepts pool with a Type 2A one, a person's expertise in a domain can be obtained. By examining a Type 2B concepts pool, some cultural elements that belong to a society or nation of people can be obtained.\n\n\\subsection{Using logistic regression for estimating understanding}\nIn \\cite{1612.07714}, the average familiarity measure in an Understanding Tree (except for the root) is used to estimate a person's understanding degree of the root concept. The root is differentiated to make sure the subject has substantial learning experiences about it. Equation 2 of \\cite{1612.07714} assumes that the effect of each descendant on the understanding of the root is equal. However, different descendants may play different roles in understanding the root. Logistic regression can be used to discriminate the effects of different descendants. Equation 5 is the logistic regression equation, where $ F_{k_{j}}(t) $ is a concept's familiarity measure on the Understanding Tree at time $ t $ and $ \\alpha_{j} $ is its coefficient; the parameters can be determined by testing. Equation 6 calculates the subject's probability of understanding the root at time $ t $. Besides selecting concepts from an Understanding Tree for evaluation, an alternative method is selecting from the root's n-level neighborhood in an Understanding Map \\cite{abs-1711-06553}.\n\n\\begin{equation}\n\\theta (t) = \\alpha_{0} + \\sum_{j=1}^m \\alpha_{j} * F_{k_{j}}(t)\n\\end{equation}\n\n\\begin{equation}\nP_{r}(t) = \\frac{1}{1 + e^{-\\theta (t)}}\n\\end{equation}
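\n\nA minimal sketch of Equations 5 and 6 (ours) is the following, where \\texttt{alpha[0]} plays the role of the intercept $ \\alpha_{0} $ and \\texttt{F} holds the familiarity measures $ F_{k_{j}}(t) $.\n\\begin{verbatim}\nimport math\n\ndef understanding_probability(alpha, F):\n    # Equation 5: linear predictor over the familiarity measures.\n    theta = alpha[0] + sum(a * f for a, f in zip(alpha[1:], F))\n    # Equation 6: logistic link.\n    return 1.0 \/ (1.0 + math.exp(-theta))\n\\end{verbatim}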
\n\n\\subsection{Evaluation of knowledge model}\nEvaluation of the effectiveness of knowledge model is a huge project.\n\\begin{itemize}\n\\item Firstly, almost all of a knowledge worker's learning activities should be recorded and analyzed, such as the reading of all kinds of documents and web pages, the attending of lectures, oral discussions etc. Each knowledge point's complete learning history is thereby obtained, and its relative familiarity measure is computed;\n\\item Secondly, select a sample of knowledge points and group them according to their relative familiarity measures;\n\\item Thirdly, let the individual take an examination which tests his understanding of the sample knowledge points;\n\\item Fourthly, compare the results of the examination with the relative familiarity measures calculated by knowledge model;\n\\item Finally, repeat the above procedure on other knowledge workers to reduce the randomness of the results.\n\\end{itemize}\nA detailed evaluation will be considered in further research.\n\\subsection{Limitations of using Ebbinghaus' forgetting curve}\nEbbinghaus' forgetting curve formula is used in the computation of the familiarity measures. It depicts the decline of memory retention over time. Many research results have testified to the soundness of the formula \\cite{murre2015replication, rubin1996one}. However, other factors may affect the speed of memory decay as well, such as how the information is presented and the physiological state of the individual. There are no unanimously accepted formulas for how these factors affect the speed of memory decay. In addition, it is difficult to obtain accurate values for these factors.\n\nThe calculation of the familiarity measures is based on the individual's learning histories over a long range of time, usually several years or decades. In my opinion, when observing over a long time range, it can be hypothesized that the average presentation qualities and the average physiological states among knowledge points are equivalent, so these factors can be ignored. If other forms of forgetting curve formulas are proved to be better than Ebbinghaus', they can be used as substitutes when calculating the familiarity measures.\n\\subsection{Privacy issues}\nRecording the learning histories of each knowledge point will inevitably involve an individual's privacy. To protect the privacy, the learning histories can be password protected or even encrypted. They are stored in the individual's personal storage and should not be revealed to other people. The only information the outside world can see is the individual's familiarity measures of the knowledge points. The knowledge points which may involve the individual's privacy are separated from the other knowledge points, and every output of their familiarity measures should be authorized by the owner.\n\n\\footnotesize\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}