diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzghyb" "b/data_all_eng_slimpj/shuffled/split2/finalzzghyb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzghyb" @@ -0,0 +1,5 @@ +{"text":"\\section{introduction}\n\nSince $J\/\\psi$ suppression was first suggested by Matsui and Satz as a signature of quark-gluon plasma (QGP) formation in relativistic heavy-ion collisions~\\cite{Matsui:1986dk}, there have been many experimental~\\cite{Alessandro:2004ap,Adare:2006ns} and theoretical studies~\\cite{Vogt:1999cu,Zhang:2000nc,Zhang:2002ug,Zhao:2007hh,Yan:2006ve} on this very interesting phenomenon; see, e.g., Refs. \\cite{Rapp:2008tf,Andronic:2006ky} for a recent review. The original idea of Matsui and Satz was that the color screening in the produced QGP would prohibit the binding of charm and anticharm quarks into the $J\/\\psi$ and thus suppress its production. However, lattice QCD calculations of the $J\/\\psi$ spectral function have since shown that the $J\/\\psi$ can survive above the critical temperature for the QGP phase transition~\\cite{Hatsuda04,Datta04}. As a result, the study of $J\/\\psi$ suppression in relativistic heavy-ion collisions has been changed from being a signature of the QGP to a probe of its properties. Indeed, we have recently shown in a two-component model, which includes $J\/\\psi$ production from both initial hard nucleon-nucleon scattering and regeneration from charm and anticharm quarks in the produced QGP, that the in-medium effect on $J\/\\psi$ interactions in the QGP can affect the $J\/\\psi$ nuclear modification factor and elliptic flow in Au+Au collisions at $\\sqrt{s_{NN}}=200$ GeV at the Relativistic Heavy Ion Collider (RHIC) \\cite{Song:2010er}. In the present study, we extend this study to $J\/\\psi$ production in Pb+Pb collisions at the higher energy of $\\sqrt{s_{NN}}=2.76$ TeV at the Large Hadron Collider (LHC)~\\cite{:2010px,cms} and also at the lower energy of $\\sqrt{s_{NN}}=17.3$ GeV at the Super Proton Synchrotron (SPS) \\cite{Alessandro:2004ap}. Furthermore, a schematic viscous hydrodynamic model is used to include the effect of viscosity on the expansion dynamics of the produced hot dense matter that was neglected in our previous studies. We find that the two-component model can give a good description of the experimental data from heavy-ion collisions at these different energies.\n\nTo make the present paper self contained, we briefly review in Sec.~\\ref{two} the two-component model for $J\/\\psi$ production, in Sec.~\\ref{hydrodynamics} the schematic causal viscous hydrodynamical model used in modeling the expansion dynamics of produced hot dense matter, and in Sec.~\\ref{properties} the in-medium dissociation temperatures and thermal decay widths of charmonia. Results obtained from our study for the $J\/\\psi$ nuclear modification factors in heavy-ion collisions at SPS, RHIC and LHC are then presented in Sec. \\ref{suppression}. Finally, a summary is given in Sec. \\ref{summary}.\n\n\\section{The two-component model}\\label{two}\n\nThe two-component model for $J\/\\psi$ production in heavy-ion collisions~\\cite{Grandchamp:2002wp,Grandchamp:2003uw} includes contributions from both initial hard nucleon-nucleon scattering and regeneration from charm and anticharm quarks in the produced QGP. For initially produced $J\/\\psi$'s, their number is proportional to the number of binary collisions between nucleons in the two colliding nuclei. 
Whether these $J\/\\psi$'s can survive after the collisions depends on many effects from both the initial cold nuclear matter and the final hot partonic and hadronic matters. The cold nuclear matter effects include the Cronin effect of gluon-nucleon scattering before the production of the primordial $J\/\\psi$ from the gluon-gluon fusion~\\cite{Cronin:1974zm}; the shadowing effect due to the modification of the gluon distribution in a heavy nucleus~\\cite{Eskola:2009uj}; and the nuclear absorption by the passing nucleons~\\cite{Alessandro:2003pi,Lourenco:2008sk,Vogt:2010aa}. In our previous work~\\cite{Song:2010fk} on $J\/\\psi$ production in heavy-ion collisions at RHIC, we have considered only the most important nuclear absorption effect. In this case, the survival probability of a primordial $J\/\\psi$ after the nuclear absorption is given by \\cite{Kharzeev:1996yx,Ferreiro:2008wc}\n\\begin{eqnarray}\nS_{\\rm cnm}({\\bf b},{\\bf s})=\n\\frac{1}{T_{AB}({\\bf b},{\\bf s})}\\int dz dz' \\rho_A({\\bf s},z)\\rho_B({\\bf b}-{\\bf s},z')\\nonumber\\\\\n\\times {\\rm exp}\\bigg\\{ -(A-1)\\int_z^\\infty dz_A \\rho_A ({\\bf s},z_A)\\sigma_{\\rm abs}\\bigg\\}~~~~~~~\\nonumber\\\\\n\\times {\\rm exp}\\bigg\\{ -(B-1)\\int_{z'}^\\infty dz_B \\rho_B\n({\\bf b}-{\\bf s},z_B)\\sigma_{\\rm abs}\\bigg\\},\n\\label{absorption}\n\\end{eqnarray}\nwhere ${\\bf b}$ is the impact parameter and ${\\bf s}$ is the transverse vector from the center of nucleus A; $T_{AB}({\\bf b},{\\bf s})$ is the nuclear overlap function; $\\rho_A ({\\bf s},z)$ is the density distribution in the nucleus; $\\sigma_{\\rm abs}$ is the $J\/\\psi$ absorption cross section by a nucleon. For the latter, it is obtained from p+A collisions and has values of 4.18 and 2.8 mb for the SPS and RHIC, respectively \\cite{Alessandro:2004ap,Adare:2007gn}. Presently, there are no p+A data available from the LHC. Since the cross section for $J\/\\psi$ absorption is expected to decrease with increasing energy~\\cite{Lourenco:2008sk}, we consider in the present study the two extreme values of 0 and 2.8 mb to study its effect on the $J\/\\psi$ yield in heavy-ion collisions at LHC.\n\nAlthough the shadowing effect has usually been neglected in heavy-ion collisions at SPS and RHIC, this may not be justified at LHC. In the present study, we thus include also the shadowing effect for heavy-ion collisions at LHC using the EPS09 package~\\cite{Eskola:2009uj}. The shadowing effect is expressed by the ratio $R_i^A$ of the parton distribution $f_i^A(x,Q)$ in a nucleus to that in a nucleon $f_i^{\\rm nucleon}(x,Q)$ multiplied by the mass number $A$ of the nucleus, i.e.,\n\\begin{eqnarray}\nR_i^A(x,Q)=\\frac{f_i^A(x,Q)}{A f_i^{\\rm nucleon}(x,Q)}, \\quad i=q, \\bar{q}, g.\n\\end{eqnarray}\nIn the above, $x=m_T\/\\sqrt{s_{NN}}$, with $m_T$ being\nthe transverse energy of the produced charmonium and $\\sqrt{s_{NN}}$ being the center-of-mass energy of colliding nucleons, is the momentum fraction and $Q=m_T$ is the momentum scale. 
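As a concrete illustration of Eq.~(\\ref{absorption}), the sketch below evaluates the cold-nuclear-matter survival probability numerically for identical colliding nuclei described by a Woods-Saxon density, with ${\\bf b}$ and ${\\bf s}$ taken along the same axis so that only their magnitudes enter. The Woods-Saxon parameters, the integration grids, and the per-nucleon normalization of $\\rho_A$ are our own illustrative assumptions and are not taken from the text.
\\begin{verbatim}
import numpy as np

# Illustrative Woods-Saxon parameters for a Pb-like nucleus (assumed, not from the text)
A = 208
R_ws, a_ws = 6.62, 0.546           # fm

z = np.linspace(-15.0, 15.0, 601)  # grid along the beam axis (fm)
dz = z[1] - z[0]

r = np.linspace(0.0, 20.0, 4001)
dr = r[1] - r[0]
norm = np.sum(4.0*np.pi*r**2 / (1.0 + np.exp((r - R_ws)/a_ws))) * dr  # so that int d^3r rho = 1

def rho(s, zz):
    """Per-nucleon Woods-Saxon density rho_A(s, z) in fm^-3."""
    return 1.0 / (1.0 + np.exp((np.hypot(s, zz) - R_ws)/a_ws)) / norm

def survival(b, s, sigma_abs=0.28):
    """Survival probability S_cnm(b, s) defined above, for A = B and
    sigma_abs in fm^2 (2.8 mb = 0.28 fm^2)."""
    rhoA, rhoB = rho(s, z), rho(abs(b - s), z)
    TA, TB = rhoA.sum()*dz, rhoB.sum()*dz              # thickness functions
    pathA = (np.cumsum(rhoA[::-1])*dz)[::-1]           # int_z^infinity rho_A dz''
    pathB = (np.cumsum(rhoB[::-1])*dz)[::-1]
    wA = np.sum(rhoA * np.exp(-(A - 1.0)*sigma_abs*pathA)) * dz
    wB = np.sum(rhoB * np.exp(-(A - 1.0)*sigma_abs*pathB)) * dz
    return wA * wB / (TA * TB)

print(survival(b=6.0, s=3.0))   # e.g. survival probability at b = 6 fm, s = 3 fm
\\end{verbatim}
Because the integrand of Eq.~(\\ref{absorption}) factorizes into separate $z$ and $z'$ integrals, the double integral reduces to a product of two one-dimensional integrals, which is what the sketch exploits.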
Assuming the shadowing effect is proportional to the path length, we can then express the spatial dependence of $R_i^A$ as \\cite{Vogt:2004dh,Lansberg:2005pc,Ferreiro:2008wc}:\n\\begin{eqnarray}\n\\frac{R_i^A({\\bf s},x,Q)-1}{R_i^A(x,Q)-1}=N\\frac{\\int dz \\rho_A({\\bf s},z)}{\\int dz \\rho_A({\\bf 0},z)},\n\\end{eqnarray}\nwhere $N$ is a normalization factor determined from the condition\n\\begin{eqnarray}\n\\frac{1}{A}\\int d^2{\\bf s}\\int dz \\rho_A({\\bf s},z) R_i^A({\\bf s},x,Q)=R_i^A(x,Q).\n\\end{eqnarray}\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{c| c c c c}\n\\hline ~~~ & ~SPS~ & ~RHIC~ & ~LHC~ & ~LHC~\\\\[2pt]\n~~~ & ~~~ & ~~~ & ~~~ &$p_T>$6.5 GeV\\\\[2pt]\n\\hline production ($\\mu$b)~& & & &\\\\[2pt]\n~$d\\sigma_{J\/\\psi}^{pp}\/dy$~ & ~0.05~\\cite{Andronic:2006ky} & ~0.774~\\cite{Adare:2006kf} & ~4.0~ \\\\[2pt]\n~$d\\sigma_{c\\bar{c}}^{pp}\/dy$~ & ~5.7~\\cite{Andronic:2006ky} & ~119~\\cite{Adare:2010de} & ~615~ &\\\\[2pt]\n\\hline feed-down (\\%)~& & & &\\\\[2pt]\n~$f_{\\chi_c}$~ & ~25~\\cite{Faccioli:2008ir} & ~32~\\cite{Adare:2011vq} & ~26.4~\\cite{Abe:1997yz} & ~23.5~\\cite{Abe:1997yz}\\\\[2pt]\n~$f_{\\psi^\\prime(2S)}$~ & ~8~\\cite{Faccioli:2008ir} & ~9.6~\\cite{Adare:2011vq} & ~5.6~\\cite{Abe:1997yz} & ~5~\\cite{Abe:1997yz}~ \\\\[2pt]\n~$f_b$~& ~~~ & ~~~ & ~11~\\cite{Acosta:2004yw} &~21~\\cite{Acosta:2004yw}\\\\[2pt]\n\\hline\n~$R_g^A$ for charm~ & ~~~ & ~~~ & ~0.813~ & ~0.897~\\\\[2pt]\n\\hline ~$\\tau_0$ (fm\/c)~ & ~1.0~ & ~0.9~\\cite{Song:2011qa} & ~1.05~\\cite{Song:2011qa} \\\\[2pt]\n~$\\eta\/s$~& ~0.16~ & ~0.16~\\cite{Song:2011qa} & ~0.2~\\cite{Song:2011qa} &\\\\[2pt]\n\\hline\n\\end{tabular}\n\\caption{Parameters for $J\/\\psi$ production and the firecylinder expansion in Pb+Pb collisions at\n$\\sqrt{s_{NN}}=17.3$ GeV at SPS and at $\\sqrt{s_{NN}}=2.76$ TeV at LHC and in Au+Au collisions\nat $\\sqrt{s_{NN}}=17.3$ GeV at RHIC. $d\\sigma_{J\/\\psi}^{pp}\/dy$ and $d\\sigma_{c\\bar{c}}^{pp}\/dy$ are, respectively, the differential $J\/\\psi$ and $c{\\bar c}$ production cross sections in rapidity in p+p collisions; $f_{\\chi_c}$, $f_{\\psi^\\prime(2S)}$, and $f_b$ are, respectively, the fraction of $J\/\\psi$ production from the decay of $\\chi_c$, $\\psi^\\prime$, and bottom hadrons in p+p collisions; $R_g^A$ is the gluon shadowing effect on charm production; and $\\tau_0$ and $\\eta\/s$ are the thermalization time and the specific viscosity of the produced $QGP$. Also shown in the last column are the parameters for the feed-down contribution to the production of $J\/\\psi$'s of transverse momentum $p_T > 6.5$ GeV at LHC.} \\label{parameters}\n\\end{table}\n\nThe shadowing effect reduces the survival probability of a primordial $J\/\\psi$ after the nuclear absorption (Eq.(\\ref{absorption})) by the factor $R_g^A({\\bf s},x,Q)R_g^B({\\bf b}-{\\bf s},x,Q)$. Taking the momentum scale $Q=4.2~{\\rm MeV}$ to be the average $J\/\\psi$ transverse energy at $\\sqrt{s_{NN}}=1.96$ TeV \\cite{Acosta:2004yw}, we obtain the value of the ratio $R_g^{\\rm pb}(x,Q)$ given in Table \\ref{parameters} for charm production in heavy-ion collisions at the LHC.\n\nFor the hot partonic and hadronic matter effect, the model includes the dissociation of charmonia in the QGP\nof temperatures higher than the dissociation temperature and the thermal decay of survived charmonia through interactions with thermal partons in the expanding hot dense mater. 
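Before turning to the hot-matter effects in detail, we note that the normalization condition above fixes $N$ in closed form. Writing the thickness function as $T_A({\\bf s})=\\int dz\\,\\rho_A({\\bf s},z)$ and assuming that $\\rho_A$ is normalized such that $\\int d^2{\\bf s}\\,T_A({\\bf s})=A$, insertion of the linear parameterization of $R_i^A({\\bf s},x,Q)$ into the condition gives (a short derivation using only the definitions above)
\\begin{eqnarray}
N=\\frac{A\\,T_A({\\bf 0})}{\\int d^2{\\bf s}\\,T_A^2({\\bf s})},
\\end{eqnarray}
so that $N$ depends only on the nuclear geometry and is the same for every parton species, momentum fraction, and momentum scale.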
Since the number of produced charm quarks in relativistic heavy-ion collisions is not small, charmonia can also be regenerated from charm and anticharm quarks in the QGP. The effect of thermal dissociation and regeneration of charmonia on the number $N_i$ of charmonium of type $i$ is taken into account via the rate equation \\cite{Grandchamp:2003uw}\n\\begin{eqnarray}\n\\frac{dN_i}{d\\tau}=-\\Gamma_i(N_i-N_i^{\\rm eq}),\n\\label{rate}\n\\end{eqnarray}\nwhere $\\tau$ is the longitudinal proper time, while $N_i^{\\rm eq}$ and $\\Gamma_i$ are, respectively, the equilibrium number and thermal decay width of charmonia and will be discussed in Sec.~\\ref{properties}.\n\nSince charm quarks are not expected to be completely thermalized either chemically or kinetically during the expansion of the hot dense matter, the fugacity parameter $\\gamma$ and the relaxation factor $R$ are introduced to describe their distributions. Assuming that the number of charm and anticharm quark pairs does not change during the fireball expansion, the fugacity is obtained from \\cite{BraunMunzinger:2000px,Gorenstein:2000ck}\n\\begin{eqnarray}\nN_{c\\bar{c}}^{AB}=\\bigg\\{\\frac{1}{2}\\gamma n_o\\frac{I_1(\\gamma n_o V)}{I_0(\\gamma n_o V)}+\\gamma^2 n_h \\bigg\\}V,\n\\label{fugacity}\n\\end{eqnarray}\nwhere $N_{c\\bar{c}}^{AB}$ is the number of $c\\bar{c}$ pairs produced in an A+B collision; $n_o$ and $n_h$ are, respectively, the number densities of open- and hidden-charm hadrons in grand canonical ensemble; $V$ is the volume of the hot dense matter; and $I_0$ and $I_1$ are modified Bessel functions resulting from the canonical suppression of charm quarks in heavy-ion collisions~\\cite{Gorenstein:2000ck,Ko:2000vp}. For the relaxation factor, it is defined as $R(\\tau)=1-\\exp[-(\\tau-\\tau_0)\/\\tau_{\\rm eq}]$ with the relaxation time $\\tau_{\\rm eq}=3~{\\rm fm\/c}$ of charm quarks in the QGP taken from Ref. \\cite{Zhao:2007hh} and $\\tau_0$ being the initial thermalization time.\n\nSince charmonia can only be regenerated in the QGP of temperature below the dissociation temperature $T_i$, the number of equilibrated charmonium of type $i$ in the QGP is\n\\begin{eqnarray}\nN_i^{\\rm eq}=\\gamma^2 R~ n_i ~f V\\theta(T_i-T),\n\\end{eqnarray}\nwhere $n_i$ is its number density in grandcanonical ensemble;\n$f$ is the fraction of QGP in the mixed phase and is 1 in the QGP; and $\\theta(T_i-T)$ is the step function.\n\nFor the initial charmonium number $N_i$ and the charm quark pair number $N_{c\\bar{c}}$, they are obtained from multiplying their respective differential cross sections in rapidity $d\\sigma_i^{pp}\/dy$ and $d\\sigma_{c\\bar{c}}^{pp}\/dy$ in p+p collisions \\cite{Andronic:2006ky,Adare:2006kf,Adare:2010de} by the number of binary collisions $N_{\\rm coll}$ in heavy-ion collisions. Since only the $J\/\\psi$ production cross section at $\\sqrt{s}=7$ TeV \\cite{Khachatryan:2010yr} has been measured in p+p collisions at LHC, its value at $\\sqrt{s_{NN}}=2.76$ TeV is obtained by using a linear function in $\\sqrt{s}$ to interpolate from the measured values at $\\sqrt{s}=1.96$ TeV by the CDF Collaboration at the Fermi Lab \\cite{Acosta:2004yw} to the one at $\\sqrt{s}=7$ TeV at LHC. The cross section for $c\\bar{c}$ pair production at $\\sqrt{s_{NN}}=2.76$ TeV is then determined by assuming that the ratio between the $J\/\\psi$ and $c\\bar{c}$ pair production cross sections is the same as that at RHIC. 
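In practice, Eq.~(\\ref{fugacity}) has to be inverted numerically for $\\gamma$ at every time step, since the volume and the densities change as the fireball expands. A minimal sketch of such an inversion is given below; the open- and hidden-charm densities, the volume, and the number of $c\\bar{c}$ pairs are placeholder inputs rather than values from the text, and exponentially scaled Bessel functions are used only to avoid numerical overflow.
\\begin{verbatim}
from scipy.optimize import brentq
from scipy.special import ive

def ncc(gamma, n_o, n_h, V):
    """Right-hand side of the fugacity relation above:
    N_cc = [ (1/2) gamma n_o I1(x)/I0(x) + gamma^2 n_h ] V  with x = gamma n_o V.
    The ratio I1/I0 is computed from the scaled functions ive to avoid overflow."""
    x = gamma * n_o * V
    bessel_ratio = ive(1, x) / ive(0, x)
    return (0.5 * gamma * n_o * bessel_ratio + gamma**2 * n_h) * V

def solve_gamma(N_cc, n_o, n_h, V):
    """Invert the fugacity relation for gamma by bracketing and root finding."""
    f = lambda g: ncc(g, n_o, n_h, V) - N_cc
    g_hi = 1.0
    while f(g_hi) < 0.0:        # expand the upper bracket until the root is enclosed
        g_hi *= 2.0
    return brentq(f, 1e-12, g_hi)

# placeholder numbers: densities in fm^-3, volume in fm^3
print(solve_gamma(N_cc=10.0, n_o=1.0e-4, n_h=1.0e-7, V=2000.0))
\\end{verbatim}
In a production code the value of $\\gamma$ from the previous time step would be a natural starting bracket for the search.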
In Table \\ref{parameters}, we list the differential cross sections for $J\/\\psi$ and $c\\bar{c}$ pair production in p+p collision at SPS, RHIC and LHC that are used in the present study.\n\nSince $J\/\\psi$ production in p+p collisions includes the contribution from the decay of excited charmonium states, the cross section $d\\sigma_{J\/\\psi}^{pp}\/dy$ shown in Table \\ref{parameters} is the sum of the production cross sections for the $J\/\\psi$ and its excited states. For p+p collisions at SPS, we use the global average values of the fractions $f_{\\chi_c}=$ 25\\% from the $\\chi_c$ decay and $f_{\\psi^\\prime(2S)}=$ 8\\% from the $\\psi'$ decay \\cite{Faccioli:2008ir}. The cross sections for $J\/\\psi$, $\\chi_c$ and $\\psi^\\prime$ production in a p+p collision at the SPS are then given, respectively, by\n\\begin{eqnarray}\n\\sigma_{J\/\\psi}^*&=&0.67 ~\\sigma_{J\/\\psi}\\nonumber\\\\\n\\sigma_{\\chi_c}&=&\\frac{0.25 ~\\sigma_{J\/\\psi}}{{\\rm Br}(\\chi_c \\rightarrow J\/\\psi+X)},\\nonumber\\\\\n\\sigma_{\\psi'}&=&\\frac{0.08 ~\\sigma_{J\/\\psi}}{{\\rm Br}(\\psi' \\rightarrow J\/\\psi+X)},\n\\label{excited}\n\\end{eqnarray}\nwhere $\\sigma_{J\/\\psi}^*$ is the cross section for $J\/\\psi$ production without the feed-down contribution, and `${\\rm Br}$' denotes the branching ratio.\n\nFor p+p colisions at RHIC, the fractions of $J\/\\psi$'s from $\\chi_c$ and $\\psi^\\prime (2S)$ decays are taken to be $f_{\\chi_c}=$ 32\\% and $f_{\\psi^\\prime}=$ 9.6 \\%, respectively, based on recent experimental results by the PHENIX Collaboration \\cite{Adare:2011vq}. Since the fractions of $J\/\\psi$'s from $\\chi_c$ and $\\psi^\\prime(2S)$ decays are not known at LHC, we use the values inferred from $p+{\\bar p}$ annihilation at $\\sqrt{s_{NN}}=1.96$ TeV by the CDF Collaboration at the Fermi Lab. It was found in these reactions that among promptly produced $J\/\\psi$'s, about 64\\% are directly produced and about 29.7\\% from the $\\chi_c$ decay, and both are approximately independent of the $J\/\\psi$ transverse momentum~\\cite{Abe:1997yz}. This leads to the fraction of promptly produced $J\/\\psi$'s from the $\\psi^\\prime(2S)$ decay to be 6.3\\%. Using the experimental result that promptly produced $J\/\\psi$'s constitute about 89\\% of measured $J\/\\psi$'s~\\cite{Acosta:2004yw}, we obtain the fractions $f_{\\chi_c}=26.4\\%$ and $f_{\\psi^\\prime(2S)}=5.6\\%$ of measured $J\/\\psi$'s that are from $\\chi_c$ and $\\psi^\\prime(2S)$ decays, respectively. Since the fraction of prompt $J\/\\psi$'s is reduced to 79\\% for $J\/\\psi$ of transverse momentum $p_T>6.5$ GeV~\\cite{Acosta:2004yw}, the fractions of measured $J\/\\psi$'s of $P_T>6.5$ GeV that are from $\\chi_c$ and $\\psi^\\prime(2S)$ decays are reduced to $f_{\\chi_c}=23.5\\%$ and $f_{\\psi^\\prime(2S)}=5.0\\%$, respectively. Besides the contribution from excited charmonia, the decay of bottom hadrons can also contribute to $J\/\\psi$ production in high-energy collisions.\nThis contribution increases significantly with $p_T$ as shown in the experiments by the CDF~\\cite{Acosta:2004yw}, CMS~\\cite{Khachatryan:2010yr}, LHCb~\\cite{Collaboration:2011sp} and ATLAS~\\cite{Aaij:2011jh} Collaborations. The fraction is between 5 \\% and 10 \\% for $p_T < 3$ GeV, depending on the rapidity of the $J\/\\psi$, then increases to more than 40 \\% at $p_T\\sim 15$ GeV, and reaches 60-70 \\% for $p_T$ above 25 GeV. 
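Before turning to the bottom-hadron numbers, note that the LHC feed-down fractions quoted above follow from simple bookkeeping: with 29.7\\% and 6.3\\% of the prompt $J\/\\psi$'s coming from $\\chi_c$ and $\\psi^\\prime(2S)$ decays, and with prompt $J\/\\psi$'s making up 89\\% (79\\% for $p_T>6.5$ GeV) of all measured $J\/\\psi$'s, one has
\\begin{eqnarray}
f_{\\chi_c}\\simeq 0.297\\times 0.89\\approx 26.4\\%,&&\\quad f_{\\psi^\\prime(2S)}\\simeq 0.063\\times 0.89\\approx 5.6\\%,\\nonumber\\\\
f_{\\chi_c}\\simeq 0.297\\times 0.79\\approx 23.5\\%,&&\\quad f_{\\psi^\\prime(2S)}\\simeq 0.063\\times 0.79\\approx 5.0\\%~~(p_T>6.5~{\\rm GeV}),\\nonumber
\\end{eqnarray}
which are the values listed in Table~\\ref{parameters}.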
On the average, about 11 \\% of produced $J\/\\psi$'s are from the decay of bottom hadrons in $p+{\\bar p}$ annihilation at $\\sqrt{s}=1.96~{\\rm TeV}$ at the Fermi Lab \\cite{Acosta:2004yw}, and the fraction increases to about 21 \\% for $J\/\\psi$'s of transverse momentum $p_T > 6.5$ GeV \\cite{Acosta:2004yw}. These values and other input parameters used in the present study are shown in Table~\\ref{parameters}.\n\n\\section{A schematic viscous hydrodynamic model}\\label{hydrodynamics}\n\nFor the expansion dynamics of the hot dense matter formed in relativistic heavy-ion collisions, we describe it by a schematic causal viscous hydrodynamic model recently developed in Ref. \\cite{Song:2010fk}. It is based on the assumption that all thermal quantities such as the energy density, temperature, entropy density, and pressure as well as the azimuthal and space-time rapidity components of the shear tensor are uniform along the transverse direction in the hot dense matter. Assuming the boost-invariance and using the $(\\tau, r, \\phi, \\eta)$ coordinate system\n\\begin{eqnarray}\n\\tau&=&\\sqrt{t^2-z^2}, ~~\\eta=\\frac{1}{2}\\ln \\frac{t+z}{t-z},\\nonumber\\\\\nr&=&\\sqrt{x^2+y^2}, ~~\\phi=\\tan^{-1}(y\/x),\n\\end{eqnarray}\nthen the following equations are obtained from the usual Israel-Stewart viscous hydrodynamic equations:\n\\begin{eqnarray}\n&&\\partial_\\tau (A\\tau \\langle T^{\\tau \\tau}\\rangle)=-(p+\\pi^\\eta_\\eta)A,\\label{energy7}\\\\\n\\nonumber\\\\\n&&\\frac{T}{\\tau}\\partial_\\tau (A\\tau s \\langle \\gamma_r\\rangle)=-A\\bigg\\langle\\frac{\\gamma_r v_r}{r}\\bigg\\rangle \\pi^\\phi_\\phi-\\frac{A\\langle \\gamma_r\\rangle}{\\tau}\\pi^\\eta_\\eta\\nonumber\\\\\n&&~~~~~~~+\\bigg\\{\\partial_\\tau(A\\langle \\gamma_r\\rangle)-\\frac{\\gamma_R \\dot{R}}{R}A\\bigg\\}(\\pi^\\phi_\\phi+\\pi^\\eta_\\eta),\\label{entropy7}\\\\\n\\nonumber\\\\\n&&\\partial_\\tau (A\\langle \\gamma_r\\rangle \\pi^\\eta_\\eta) -\\bigg\\{\\partial_\\tau(A\\langle\\gamma_r\\rangle)+2\\frac{A\\langle\\gamma_r\\rangle}{\\tau} \\bigg\\}\\pi^\\eta_\\eta\\nonumber\\\\\n&&~~~~=-\\frac{A}{\\tau_\\pi}\\bigg[\\pi^\\eta_\\eta-2\\eta_s\\bigg\\{\\frac{\\langle\\theta\\rangle}{3}-\\frac{\\langle\\gamma_r\\rangle}{\\tau}\\bigg\\}\\bigg],\\label{entropy7}\\\\\n\\nonumber\\\\\n&&\\partial_\\tau(A\\langle\\gamma_r\\rangle~ \\pi^\\phi_\\phi)-\\bigg\\{\\partial_\\tau(A\\langle\\gamma_r\\rangle)+2A\\bigg\\langle\\frac{\\gamma_r v_r}{r}\\bigg\\rangle\\bigg\\}\\pi^\\phi_\\phi\\nonumber\\\\\n&&~~~~=-\\frac{A}{\\tau_\\pi}\\bigg[ \\pi^\\phi_\\phi-2\\eta_s \\bigg\\{\\frac{\\langle\\theta\\rangle}{3}-\\bigg\\langle\\frac{\\gamma_r v_r}{r}\\bigg\\rangle\\bigg\\}\\bigg]\\label{shear7b}.\n\\end{eqnarray}\nIn the above, $T^{\\tau\\tau}=(e+P_r)u_\\tau^2 -P_r$ is the time-component of the energy-momentum tensor, $\\pi^\\phi_\\phi=r^2\\pi^{\\phi\\phi}$ and $\\pi^\\eta_\\eta=\\tau^2\\pi^{\\eta\\eta}$ are, respectively, the azimuthal and the space-time rapidity component of the shear tensor; $\\eta_s$ and $\\tau_\\pi$ are the shear viscosity of the hot dense matter and the relaxation time for the particle distributions, respectively; $\\theta=\\frac{1}{\\tau}\\partial_\\tau (\\tau \\gamma_r)+\\frac{1}{r}\\partial_r(rv_r \\gamma_r)$ with $\\gamma_r=1\/\\sqrt{1-v_r^2}$ in terms of the radial velocity $v_r$; $A=\\pi R^2$ with $R$ being the transverse radius of the uniform matter; and $\\langle\\cdots\\rangle$ denotes average over the transverse area. 
For the radial flow velocity that is a linear function of the radial distance from the center, i.e., $\\gamma_r v_r=\\gamma_R \\dot{R}(r\/R)$, where $\\dot{R}=\\partial R\/\\partial \\tau$ and $\\gamma_R=1\/\\sqrt{1-\\dot{R}^2}$, we have $\\langle\\gamma_r^2\\rangle=1+\\gamma_R^2 \\dot{R}^2\/2$, $\\langle\\gamma_r^2 v_r^2\\rangle=\\gamma_R^2 \\dot{R}^2\/2$, $\\langle\\gamma_r\\rangle=2(\\gamma_R^3-1)\/(3\\gamma_R^2 \\dot{R}^2)$, and $\\langle\\gamma_r v_r\/r\\rangle=\\gamma_R \\dot{R}\/R$. With the energy density $e$ and pressure $p$ related by the equation of state of the matter through its temperature $T$, Eqs.(\\ref{energy7})-(\\ref{shear7b}) are four simultaneous equations for $T$, $\\dot{R}$, $\\pi^\\phi_\\phi$ and $\\pi^\\eta_\\eta$, and can be solved numerically by rewriting them as difference equations.\n\nFor the equation of state of the produced dense matter, we use the quasiparticle model with three flavors for the QGP phase \\cite{Levai:1997yx,Song:2010ix} and the resonance gas model for the HG phase. As to the specific shear viscosity $\\eta_s\/s$, where $s$ is the entropy density, its value in the QGP is taken to be 0.16 for SPS and RHIC, and 0.2 for LHC \\cite{Song:2011qa}, while it has the same value of $5\/2\\pi$ in the HG \\cite{Demir:2008tr}. The specific viscosity in the mixed phase is assumed to be their linear combination, i.e., $(\\eta\/s)_{QGP}f+(\\eta\/s)_{HG}(1-f)$, where $f$ is the fraction of QGP in the mixed phase. The initial thermalization time is taken to be 1.0 fm\/$c$ for SPS, which has usually been used, and 0.9 fm\/c and 1.05 fm\/c for RHIC and LHC, respectively \\cite{Song:2011qa}. Although the initial thermalization time for RHIC is 0.6 fm\/c in ideal hydrodynamics \\cite{Hirano:2001eu}, the nonzero viscosity generates additional transverse flow \\cite{Song:2010fk} and requires a late thermalization to fit the experimental data on $p_T$ spectra and elliptic flows. This is the same reason for the later thermalization at LHC in viscous hydrodynamics.\n\nThe initial local temperature of produced matter can be calculated from the equation of state and the local entropy density, which we parameterize as \\cite{Song:2010ix,Bozek:2011wa}\n\\begin{equation}\n\\frac{ds}{d\\eta}=C\\left[(1-\\alpha)\\frac{n_{\\rm part}}{2}+\\alpha~n_{\\rm coll}\\right],\n\\label{entroden}\n\\end{equation}\nwith $\\alpha=$ 0, 0.11 and 0.15 for SPS, RHIC and LHC, respectively \\cite{Antinori:2000ph,Kharzeev:2000ph,Bozek:2011wa}. The number density $n_{\\rm part(coll)}$ in Eq. (\\ref{entroden}) is defined as $\\Delta N_{\\rm part(coll)}\/(\\tau_0\\Delta x \\Delta y)$, where $\\Delta N_{\\rm part(coll)}$ is the number of participants (binary collisions) in the volume $\\tau_0\\Delta x\\Delta y$ of the transverse area $\\Delta x\\Delta y$ and is obtained from the Glauber model with the inelastic nucleon-nucleon cross sections of 30, 42 and 64 mb for SPS, RHIC and LHC, respectively \\cite{Back:2004je,Ferreiro:2011rw}. 
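To make the initialization of Eq.~(\\ref{entroden}) concrete, the sketch below evaluates the initial entropy-density profile in the optical limit of the Glauber model, where the participant and binary-collision densities follow from nuclear thickness functions. The Woods-Saxon parameters and the optical-limit formulas are our own illustrative choices and are not necessarily the Glauber implementation used in the text.
\\begin{verbatim}
import numpy as np

# LHC-like inputs quoted in the text: alpha, C, tau0 (fm/c), sigma_inel = 64 mb = 6.4 fm^2
alpha, C, tau0, sigma_nn = 0.15, 27.0, 1.05, 6.4
A, R_ws, a_ws = 208, 6.62, 0.546          # assumed Pb Woods-Saxon parameters (fm)

r = np.linspace(0.0, 20.0, 2001); dr = r[1] - r[0]
rho0 = A / (np.sum(4.0*np.pi*r**2 / (1.0 + np.exp((r - R_ws)/a_ws))) * dr)  # central density (fm^-3)

z = np.linspace(-15.0, 15.0, 1501); dz = z[1] - z[0]

def thickness(s):
    """Nuclear thickness T_A(s) = int dz rho_A(s,z), normalized so that int d^2s T_A = A."""
    return rho0 * np.sum(1.0 / (1.0 + np.exp((np.hypot(s, z) - R_ws)/a_ws))) * dz

def ds_deta(x, y, b):
    """Initial entropy density (fm^-3) from the parameterization above,
    with optical-limit participant and binary-collision densities per unit area."""
    TA, TB = thickness(np.hypot(x + 0.5*b, y)), thickness(np.hypot(x - 0.5*b, y))
    npart = TA*(1.0 - np.exp(-sigma_nn*TB)) + TB*(1.0 - np.exp(-sigma_nn*TA))  # per fm^2
    ncoll = sigma_nn * TA * TB                                                 # per fm^2
    return C * ((1.0 - alpha)*0.5*npart + alpha*ncoll) / tau0

print(ds_deta(0.0, 0.0, b=0.0))   # entropy density at the center of a central collision
\\end{verbatim}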
The factor $C$ is determined by fitting the multiplicity of final charged particles after the hydrodynamical evolution to the measured one.\n\nAssuming the same chemical freeze out temperature $T_f=160$ MeV for all charged particles, their pseudorapidity distribution at midrapidity is then \\cite{Song:2010er}\n\\begin{eqnarray}\n\\frac{dN_{\\rm ch}}{d\\eta}\\bigg|_{y=0}&=&\\sum_i \\int dp_T \\sqrt{1-\\frac{m_i^2}{{m_{Ti}}^2}}D_i\\frac{dN_i}{dydp_T}\\nonumber\\\\\n&=&\\frac{\\tau}{\\pi}\\sum_i D_i\\int dp_T~ p_T^2 \\int_0^R rdr \\nonumber\\\\\n&&\\times I_0\\bigg[\\frac{p_T \\sinh\n\\rho}{T_f}\\bigg] K_1\\bigg[\\frac{m_{Ti} \\cosh \\rho}{T_f}\\bigg],\n\\label{multiplicity}\n\\end{eqnarray}\nwhere $\\rho=\\tanh^{-1}(v_r)$. The summation $i$ includes all mesons lighter than 1.5 GeV and all baryons lighter than 2.0 GeV. In including the contribution from the decays of particles, we simply multiply their pseudorapidity distributions by the product $D_i$ of their decay branching ratio and the number of charged particles resulting from the decay. We have thus neglected the difference between the rapidity of the daughter particles and that of the decay particle. Also, we have used the thermal momentum distributions at chemical freeze out as well as during the expansion of the hot dense matter, thus ignoring the viscous effect on the particle momentum distributions as it is only important for particles of large momenta \\cite{Dusling:2009df}. From the multiplicities of charged particles per half participant, $(dN_{\\rm ch}\/d\\eta)\/(N_{\\rm part}\/2)$, which are roughly 2, 4 and 8.4 in central collisions of Pb+Pb at $\\sqrt{s_{NN}}=17.3$ GeV at SPS, of Au+Au collisions at $\\sqrt{s_{NN}}=200$ GeV at RHIC, and of Pb+Pb at $\\sqrt{s_{NN}}=2.76$ TeV at LHC, respectively \\cite{Back:2004je,Aamodt:2010cz}, we obtain the corresponding values of 14.6, 18.7 and 27.0 for the parameter $C$ in Eq. (\\ref{entroden}).\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=8.5 cm]{profiles.eps}}\n\\caption{Temperature profiles along the radial direction at initial thermalization time as functions of radial distance in central Pb+Pb collisions at $\\sqrt{s_{NN}}=17.3$ GeV at SPS and\n$\\sqrt{s_{NN}}=2.76$ TeV at LHC, and in central Au+Au collisions at $\\sqrt{s_{NN}}=200$ GeV at RHIC from viscous hydrodynamics.}\\label{temperaturea}\n\\end{figure}\n\nIn Fig. \\ref{temperaturea}, we show the temperature profile along radial direction at initial thermalization time in heavy-ion collisions at SPS, RHIC, and LHC from the viscous hydrodynamics. Defining the firecylinder as the region where the initial temperature is above $T_c=170~{\\rm MeV}$, its transverse radius in the case of viscous hydrodynamics has values of 6.5, 6.6 and 7.1 fm in central collisions at SPS, RHIC and LHC, respectively. The time evolution of the average temperature of the firecylinder determined from the schematic hydrodynamic model is shown in Fig. \\ref{temperatureb}. The initial average temperatures at SPS and RHIC are 218 and 269 MeV, respectively, and are consistent with those extracted from the experimental data on dileptons at SPS \\cite{Collaboration:2010xu} and on direct photons at RHIC \\cite{:2008fqa}. 
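As an aside, the double integral in Eq.~(\\ref{multiplicity}) is straightforward to evaluate with standard special-function libraries once the product $I_0 K_1$ is rewritten in terms of exponentially scaled Bessel functions, which avoids overflow at large arguments. The sketch below does this for a single species; the freeze-out radius, proper time, surface velocity, degeneracy factor $D_i$, and the neglect of all heavier species are illustrative assumptions, and overall normalization factors absorbed into Eq.~(\\ref{multiplicity}) are not tracked carefully here.
\\begin{verbatim}
import numpy as np
from scipy.special import i0e, k1e

T_f = 0.160                        # chemical freeze-out temperature (GeV)
m = 0.138                          # particle mass (GeV); pions only, for illustration
D_i = 2.0                          # assumed product of branching ratio and charged multiplicity
tau_f, R_f, vR = 8.0, 10.0, 0.6    # assumed freeze-out time (fm/c), radius (fm), surface velocity
hbarc = 0.1973                     # GeV fm

def integrand(pT, r):
    mT = np.hypot(m, pT)
    rho = np.arctanh(vR * r / R_f)                  # transverse rapidity, linear velocity profile
    a, b = pT*np.sinh(rho)/T_f, mT*np.cosh(rho)/T_f
    # I0(a) K1(b) = i0e(a) k1e(b) exp(a - b), and a - b <= 0, so no overflow occurs
    return pT**2 * r * i0e(a) * k1e(b) * np.exp(a - b)

pT = np.linspace(1e-3, 3.0, 300)                    # GeV
r = np.linspace(1e-3, R_f, 200)                     # fm
P, Rg = np.meshgrid(pT, r, indexing="ij")
integral = integrand(P, Rg).sum() * (pT[1]-pT[0]) * (r[1]-r[0])
dN_deta = tau_f/np.pi * D_i * integral / hbarc**3   # convert GeV^3 fm^3 to a pure number
print(dN_deta)
\\end{verbatim}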
The predicted initial average temperature in heavy-ion collisions at LHC is 311 MeV.\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=8.5 cm]{evolution.eps}}\n\\caption{Average temperatures of firecylinder as functions of time in central Pb+Pb collisions at $\\sqrt{s_{NN}}=17.3$ GeV at SPS and $\\sqrt{s_{NN}}=2.76$ TeV at LHC, and in central Au+Au collisions at $\\sqrt{s_{NN}}=200$ GeV at RHIC from the viscous hydrodynamics.}\n\\label{temperatureb}\n\\end{figure}\n\nFor non-central heavy-ion collisions where the initial geometry of the transverse area is an ellipse, the schematic viscous hydrodynamic model described here needs to be extended. For simplicity, the present model is used by taking the circular transverse area to be the same as that of the ellipse as in Ref.~\\cite{Song:2010ix} based on a parameterized firecylinder model.\n\n\\section{thermal properties of charmonia}\\label{properties}\n\nTo describe the properties of charmonia in QGP, we need the potential between heavy quark and its antiquark at finite temperature. Although some information on this can be obtained from the lattice gauge theory \\cite{Kaczmarek:2003dp,Wong:2004zr} or from the static QCD \\cite{Brambilla:2008cx,Brambilla:2010vq}, we use in the present study the extended Cornell model that includes the Debye screening effect on color charges \\cite{Karsch:1987pv}. The Cornell model \\cite{Eichten:1979ms} was devised to imitate the asymptotic freedom and confinement of the QCD interaction with a Coulomb-like potential for short distance and a linear potential for long distance. In the QGP, the linear potential becomes weaker due to the Debye screening between color charges, leading to the screened Cornell potential \\cite{Karsch:1987pv}\n\\begin{eqnarray}\nV(r,T)=\\frac{\\sigma}{\\mu(T)}\\bigg[1-e^{-\\mu(T) r}\\bigg]-\\frac{\\alpha}{r}e^{-\\mu(T) r}\n\\label{Cornell}\n\\end{eqnarray}\nwith $\\sigma=0.192~{\\rm GeV^2}$ and $\\alpha=0.471$.\nThe screening mass $\\mu(T)$ depends on temperature and is given in thermal pQCD by\n\\begin{eqnarray}\n\\mu(T)=\\sqrt{\\frac{N_c}{3}+\\frac{N_f}{6}}~gT,\n\\label{screening}\n\\end{eqnarray}\nwhere $N_c$ is the number of colors, $N_f$ is the number of light quark flavors, and $g$ is the QCD coupling constant. In the limit of $\\mu \\rightarrow 0$, we recover the original Cornell potential.\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=9 cm]{bindingE.eps}}\n\\caption{Binding energy of $J\/\\psi$ in the QGP as a function of temperature for the QCD coupling constant\n$g=1.87$.}\n\\label{bindingE}\n\\end{figure}\n\nThe wavefunctions and binding energies of charmonia in the QGP are obtained by solving the Schr\\\"odinger equation with the screened Cornell potential. With the binding energy $\\varepsilon_0$ defined as \\cite{Karsch:1987pv}\n\\begin{eqnarray}\n\\varepsilon_0=2m_c+\\frac{\\sigma}{\\mu}-E,\n\\end{eqnarray}\nwhere the charm quark mass is taken to be $m_c=1.32~{\\rm GeV}$ and $E$ is the eigenvalue of the Schr\\\"odinger equation, we show in Fig.~\\ref{bindingE} the binding energy of $J\/\\psi$ as a function of temperature for the case of $g=1.87$.\nIt is seen that the $J\/\\psi$ becomes unbound or dissociated in the QGP for temperatures above $\\sim 300$ MeV. As indicated by Eq. (\\ref{screening}), the $J\/\\psi$ dissociation temperature decreases as the QCD coupling constant $g$ increases. This is shown in Fig.~\\ref{disso} not only for the $J\/\\psi$ but also for its excited states $\\chi_c$ and $\\psi^\\prime$. 
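The temperature dependence shown in Fig.~\\ref{bindingE} can be reproduced schematically by putting the potential of Eq.~(\\ref{Cornell}) on a radial grid and diagonalizing the resulting Hamiltonian. The sketch below does this for the 1S state in natural units ($\\hbar=c=1$); the grid, box size, and finite-difference discretization are our own choices, and the result is meant only to illustrate the procedure, not to reproduce the authors' solver.
\\begin{verbatim}
import numpy as np

hbarc = 0.1973                                        # GeV fm
sigma, alpha, m_c, g = 0.192, 0.471, 1.32, 1.87       # GeV^2, -, GeV, - (values from the text)
Nc, Nf = 3.0, 3.0

def binding_energy(T):
    """1S binding energy (GeV) of the screened Cornell potential at temperature T (GeV)."""
    mu = np.sqrt(Nc/3.0 + Nf/6.0) * g * T             # screening mass
    r = np.linspace(1e-3, 8.0, 1600) / hbarc          # radial grid in GeV^-1 (box of 8 fm)
    h = r[1] - r[0]
    V = sigma/mu * (1.0 - np.exp(-mu*r)) - alpha/r * np.exp(-mu*r)
    mred = m_c / 2.0                                  # reduced mass of the c-cbar pair
    # radial equation for u(r) = r psi(r):  -u''/(2 mred) + V u = E u,  u(0) = u(rmax) = 0
    main = 1.0/(mred*h*h) + V
    off = -1.0/(2.0*mred*h*h) * np.ones(len(r) - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    E0 = np.linalg.eigvalsh(H)[0]                     # lowest eigenvalue, measured from 2 m_c
    return sigma/mu - E0                              # eps_0 = 2m_c + sigma/mu - (2m_c + E0)

for T in (0.20, 0.25, 0.30):
    print(T, binding_energy(T))
\\end{verbatim}
The output can be compared directly with Fig.~\\ref{bindingE}.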
In obtaining the dissociation temperatures for $\\chi_c$ and $\\psi^\\prime$, we have assumed that they are always above the critical temperature $T_c=170~{\\rm MeV}$ even for large $g$. We note that the screening mass $\\mu$ is nonzero in QCD vacuum but has a value of 180 MeV~\\cite{Karsch:1987pv}. In this case, the binding energy of $J\/\\psi$ in the vacuum is 600$\\sim$700 MeV.\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=9 cm]{dissociation.eps}}\n\\caption{Charmonium dissociation temperatures in the QGP as functions of the QCD coupling constant $g$.}\n\\label{disso}\n\\end{figure}\n\nAlthough the charmonium can be formed in the QGP at high temperature, it can still be dissociated by scattering with thermal partons. In the leading order (LO) pQCD, the charmonium breaks up by absorbing a thermal gluon, while in the next-to-leading order (NLO) the dissociation is induced either by a quark or a gluon, and their invariant matrix elements are given, respectively, by \\cite{Park:2007zza}\n\\begin{eqnarray}\n\\overline{|\\mathcal{M}|}_{\\rm LO}^2=\\frac{2}{3N_c}g^2m_C^2m_\\Phi\n(2k_{10}^2+m_G^2) {\\Big|\\frac{\\partial \\psi({\\bf\np})}{\\partial {\\bf p}}\\Big|}^2,\n\\label{LO}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\overline{|\\mathcal{M}|}_{\\rm qNLO}^2=\\frac{4}{3} g^4 m_C^2 m_\\Phi\n{\\Big|\\frac{\\partial \\psi({\\bf p})}{\\partial {\\bf p}}\\Big|}^2\n\\bigg\\{-\\frac{1}{2}+\\frac{k_{10}^2+k_{20}^2}{2 k_1 \\cdot k_2}\\bigg\\},\\nonumber\\\\\n\\label{qNLO}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\overline{|\\mathcal{M}|}_{\\rm gNLO}^2=\\frac{4}{3} g^4 m_C^2 m_\\Phi\n{\\Big|\\frac{\\partial \\psi({\\bf p})}{\\partial {\\bf p}}\\Big|}^2\n\\Bigg\\{-4+\\frac{k_1 \\cdot k_2}{k_{10}k_{20}}\\nonumber \\\\\n+\\frac{2k_{10}}{k_{20}}+\\frac{2k_{20}}{k_{10}}\n-\\frac{k_{20}^2}{k_{10}^2}-\\frac{k_{10}^2}{k_{20}^2} +\\frac{2}{k_1\n\\cdot k_2}~~~~~\\nonumber\\\\\n\\times\\bigg[\n\\frac{(k_{10}^2+k_{20}^2)^2}{k_{10}k_{20}} -2 k_{10}^2-2\nk_{20}^2+k_{10}k_{20}\\bigg] \\Bigg\\}.\n\\label{gNLO}\n\\end{eqnarray}\nIn the above, $k_1$ and $k_2$ are, respectively, the momenta of incoming and outgoing thermal partons; $\\psi({\\bf p})$ is the wavefunction of charmonium with ${\\bf p}=({\\bf k}_1-{\\bf k}_2)\/2$; $N_c$ is the number of colors; $m_G$ is the mass of thermal gluon and can be extracted from the lattice QCD \\cite{Levai:1997yx}; $m_\\Phi$ is the mass of charmonium in the QGP; and $m_C\\equiv m_c+\\sigma\/2\\mu$ is the mass of the constituent charm quark. With the screening mass $\\mu=0.18~{\\rm GeV}$ in the vacuum~\\cite{Karsch:1987pv}, the latter has a value of $m_C=1.85~{\\rm GeV}$ in the vacuum and is similar to the mass of $D$ meson. The dissociation cross sections of charmonia are then obtained by integrating Eq. (\\ref{LO})-(\\ref{gNLO}) over the phase space.\n\nThe same pQCD formula can be used for charmonium dissociation by partons inside hadrons in the HG. It was found, however, that the charmonium is not heavy enough for pQCD to be applicable \\cite{Song:2005yd}. In the present study, we thus take the cross section for charmonium dissociation by a hadron to be proportional to its squared radius as in Ref. \\cite{Song:2010ix} or given by that from a phenomenological hadronic Lagrangian \\cite{Lin:1999ad,Lin:2000ke}. 
We note that the effect of charmonium dissociation in the HG is negligible compared to that in the QGP due to the much smaller thermal decay width \\cite{Grandchamp:2002wp,Song:2010ix}.\n\nIn terms of its dissociation cross section $\\sigma_i^{\\rm diss}$, the thermal decay width of a charmonium is given by\n\\begin{eqnarray}\n\\Gamma(T)&=&\\sum_i \\int\\frac{d^3k}{(2\\pi)^3}v_{\\rm rel}(k)n_i(k,T) \\sigma_i^{\\rm diss}(k,T),\n\\label{width}\n\\end{eqnarray}\nwhere $i$ denotes the quarks and gluons in the QGP, and the baryons and mesons in the HG; $n_i$ is the number density of particle $i$ in grand canonical ensemble; and $v_{\\rm rel}$ is the relative velocity between charmonium and the particle. For the thermal width in the mixed phase, it is taken to be a linear combination of those in the QGP and the HG as following:\n\\begin{equation}\n\\Gamma(T_c)=f~\\Gamma^{\\rm QGP}(T_c)+(1-f)\\Gamma^{\\rm HG}(T_c),\n\\label{mixed}\n\\end{equation}\nwhere $f$ is the fraction of QGP in the mixed phase.\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=8.5 cm]{widths.eps}}\n\\caption{Thermal decay widths of charmonia in the QGP as a function of temperature for the QCD coupling constant $g=1.87$.}\n\\label{widths}\n\\end{figure}\n\nThe thermal decay widths of charmonia also depend both on the QCD coupling constant and the temperature of QGP. In Fig. \\ref{widths}, they are shown as functions of temperature for $g=1.87$. It is seen that the thermal decay width of $J\/\\psi$ diverges at the dissociation temperature $T=300~{\\rm MeV}$, while those of $\\chi_c$ and of $\\psi^\\prime$ become divergent at the critical temperature $T_c=170~{\\rm MeV}$. An infinitely large thermal decay width implies that the particles instantly reach their maximally allowed equilibrium value $N_i^{\\rm eq}$. Therefore, the $J\/\\psi$ abundance is not expected to reach this value at $T_c$, in contrast to that of the $\\chi_c$ and $\\psi^\\prime$. We note that the value $g=1.87$ is slightly larger than that used in our previous studies based on a schematic firecylinder model as a result of the viscous effect that is included in the present study.\n\n\\section{Results}\\label{suppression}\n\nUsing the above described two-component model based on the schematic viscous hydrodynamics and taking into account the in-medium effects on charmonia, we can calculate the nuclear modification factor $R_{AA}$ of $J\/\\psi$ in heavy ion collisions according to\n\\begin{eqnarray}\nR_{AA}&=&(1-f_{\\chi_c}-f_{\\psi^\\prime(2S)}-f_b)R_{\\rm pri}+f_b R_b+R_{\\rm reg},\\nonumber\\\\\n\\label{Raa-highpt}\n\\end{eqnarray}\nwhere $R_{\\rm pri}$, $R_b$, and $R_{\\rm reg}$ are the nuclear modification factors for $J\/\\psi$'s that are produced from primordial hard nucleon-nucleon scattering, the decay of bottom hadrons, and the regeneration in the QGP, respectively.\nIn writing the above expression, we have used the fact that all primordial $\\chi_c$ and $\\psi^\\prime$ are dissociated\nabove the critical temperature $T_C$. For $R_{\\rm pri}$, it is calculated according to \\cite{Song:2010ix}:\n\\begin{eqnarray}\nR_{\\rm pri}(\\vec{b})=\\int d^2 {\\bf s}~ S_{\\rm cnm}({\\bf b},{\\bf s}){\\rm exp}\\bigg\\{-\\int_{\\tau_0}^{\\tau_f} \\Gamma_{J\/\\psi}d\\tau\\bigg\\},\\nonumber\\\\\n\\end{eqnarray}\nwhere $\\tau_f$ is the freeze-out proper time. For $R_b$, it is taken to be one as a result of the expected conservation of total bottom and antibottom numbers. 
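To fix the bookkeeping, the sketch below combines Eq.~(\\ref{Raa-highpt}) with the exponential suppression factor entering $R_{\\rm pri}$ for a single point in the transverse plane. The constant decay width, the time interval, and the value used for $R_{\\rm reg}$ are arbitrary toy inputs; the cold-nuclear-matter factor $S_{\\rm cnm}$ and the average over the collision geometry are omitted.
\\begin{verbatim}
import numpy as np

def survival_factor(Gamma, tau):
    """exp(- int Gamma dtau) for a width Gamma(tau) in c/fm on a proper-time grid in fm/c."""
    return np.exp(-np.sum(0.5*(Gamma[1:] + Gamma[:-1]) * np.diff(tau)))  # trapezoidal rule

def R_AA(R_pri, R_reg, R_b=1.0, f_chi=0.264, f_psi=0.056, f_b=0.11):
    """Combination of primordial, bottom feed-down and regenerated contributions,
    with the LHC feed-down fractions of Table 1 as default values."""
    return (1.0 - f_chi - f_psi - f_b)*R_pri + f_b*R_b + R_reg

tau = np.linspace(1.05, 8.0, 200)                        # from tau0 to an assumed freeze-out time
R_pri = survival_factor(0.1*np.ones_like(tau), tau)      # toy constant width of 0.1 c/fm
print(R_pri, R_AA(R_pri=R_pri, R_reg=0.2))
\\end{verbatim}
The regeneration factor $R_{\\rm reg}$ used above is specified next.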
As to $R_{\\rm reg}$, it is calculated from the ratio of the number of $J\/\\psi$'s obtained from solving Eq.(\\ref{rate}) to the number of $J\/\\psi$'s from p+p collisions at same energy multiplied by the number of binary collisions in A+A collisions.\n\n\\subsection{Nuclear modification factor of $J\/\\psi$ at SPS and RHIC}\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=7.5 cm]{SPS-RHIC.eps}}\n\\caption{Nuclear modification factor $R_{AA}$ of $J\/\\psi$ (solid line) as a function of the participant number $N_{\\rm part}$ in Pb+Pb collisions at $\\sqrt{s_{NN}}=17.3$ GeV at SPS (upper panel) and in Au+Au collisions at $\\sqrt{s_{NN}}=200$ GeV at RHIC (lower panel). Dashed and dotted lines represent, respectively, the contributions to $J\/\\psi$ production from primordial hard nucleon-nucleon scattering and regeneration in the QGP. Experimental data are from Refs. \\cite{Alessandro:2004ap,Adare:2006ns}.}\n\\label{Raa}\n\\end{figure}\n\nIn Fig. \\ref{Raa}, we show the nuclear modification factor $R_{AA}$ of $J\/\\psi$ as a function of the participant number in Pb+Pb collisions at $\\sqrt{s_{NN}}=17.3$ GeV at SPS and in Au+Au collisions at $\\sqrt{s_{NN}}=200$ GeV at RHIC. These results are obtained with the QCD coupling constant $g=1.87$, which gives a good description of the experimental data as shown by solid lines in the upper and lower panels. It is seen that the $R_{AA}$ of $J\/\\psi$ becomes smaller as the number of participants in the collision increases. Also shown in Fig. \\ref{Raa} are results from the primordial (dashed lines) and the regenerated $J\/\\psi$ in the QGP (dotted lines), and they clearly indicate that the contribution from the primordial $J\/\\psi$ decreases and that from the regenerated ones increases as the collision energy increases.\n\n\\subsection{Nuclear modification factor of $J\/\\psi$ at LHC}\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=7.5 cm]{LHC.eps}}\n\\caption{Nuclear modification factor $R_{AA}$ of $J\/\\psi$ as a function of the participant number $N_{\\rm part}$ without (upper panel) and with (lower panel) the shadowing effect in Pb+Pb collisions at $\\sqrt{s_{NN}}=2.76$ TeV at LHC. Upper and lower solid lines are the $R_{AA}$ of $J\/\\psi$ for the nuclear absorption cross sections of $\\sigma_{\\rm abs}=$0 and 2.8 mb, respectively. Dashed, dotted, dot-dashed lines denote, respectively, the contributions to $J\/\\psi$ production from primordial hard nucleon-nucleon scattering, regeneration in the QGP, and decay of bottom hadrons.}\n\\label{Raalhc}\n\\end{figure}\n\nIn Fig. \\ref{Raalhc}, we show the $R_{AA}$ of $J\/\\psi$ in Pb+Pb collisions at $\\sqrt{s_{NN}}=2.76$ GeV at LHC with (lower panel) and without the shadowing effect (upper panel). It is seen that the shadowing effect suppresses the production of charm pairs and consequently the regeneration of $J\/\\psi$. In obtaining these results, we have included the contribution to $J\/\\psi$ production from the decay of bottom hadrons, which becomes non-negligible at LHC~\\cite{Collaboration:2011sp,Aaij:2011jh}, by assuming that the $R_{AA}$ of $J\/\\psi$ from the decay of bottom hadrons is independent of the centrality as indicated by the measured data from the CMS Collaboration \\cite{cms} and shown by the dash-dotted line in Fig. \\ref{Raalhc} as a function of the participant number. 
This contribution is comparable to that from the regenerated $J\/\\psi$ (dotted lines) in peripheral collisions and more important than the primordial ones (dashed lines) in more central collisions. The upper and lower solid lines are the final $R_{AA}$ of $J\/\\psi$ obtained with the nuclear absorption cross section of 0 and 2.8 mb, respectively. It is seen that the difference between the results obtained with and without the nuclear absorption is mainly in collisions of small number of participants as the primordial $J\/\\psi$s are mostly dissolved in central and semi-central collisions.\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=8.5 cm]{pt-spectrum.eps}}\n\\caption{Transverse momentum $p_T$ spectrums of $J\/\\psi$ from $p+{\\bar p}$ annihilation at $\\sqrt{s_{NN}}=1.96$ TeV by the CDF Collaboration at Fermi Lab (filled squares and solid line) \\cite{Acosta:2004yw} and of regenerated $J\/\\psi$ (dashed line) in central Pb+Pb collisions at $\\sqrt{s_{NN}}=2.76$ TeV at LHC.}\n\\label{spectrum}\n\\end{figure}\n\nTo compare the results from our model with the experimental data from LHC~\\cite{:2010px,cms}, which have a transverse momentum cut $p_T>6.5$ GeV for the measured $J\/\\psi$, we note that the fraction of produced $J\/\\psi$'s with transverse momentum larger than 6.5 GeV from the decay of bottom hadrons is about 21\\% in $p+{\\bar p}$ annihilation at 1.96 TeV from the CDF Collaboration at the Fermi Lab \\cite{Acosta:2004yw}. Parameterizing the latter by $~[1+(p_T\/4.1{\\rm GeV})^2]^{-3.8}$ as shown by the solid line in Fig.~\\ref{spectrum}, we obtain that the fraction of $J\/\\psi$'s with transverse momentum larger than 6.5 GeV is 3\\%. This is significantly larger than that from the regeneration contribution in Pb+Pb collisions, which is only 0.17\\%, as shown by the dashed line that is obtained from the two-component model but is arbitrarily normalized. It was first pointed out in Ref. \\cite{Zhao:2011cv} that limiting the $J\/\\psi$ transverse momentum to 6.5 GeV suppresses the contribution from the regenerated $J\/\\psi$. For $J\/\\psi$'s of high transverse momenta, their nuclear modification factor $R_{AA}$ can thus be calculated by multiplying the last term in Eq.(\\ref{Raa-highpt}) by the percentage of regenerated $J\/\\psi$'s with transverse momentum larger than 6.5 GeV divided by the percentage of primordial $J\/\\psi$'s with the same range of transverse momenta, which is 0.12 in central Pb+Pb collisions at $\\sqrt{s_{NN}}=2.76$ TeV.\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=7.5 cm]{LHCc.eps}}\n\\caption{Nuclear modification factor $R_{AA}$ of $J\/\\psi$ with transverse momentum larger than 6.5 GeV versus the number of participants without (upper panel) and with (lower panel) the shadowing effect in Pb+Pb collisions at $\\sqrt{s_{NN}}=2.76$ TeV at LHC. Dashed, dotted and dot-dashed lines represent, respectively, the contributions to $J\/\\psi$ production from primordial hard nucleon-nucleon scattering, regeneration from QGP, and decay of bottom hadrons, and the solid line is the sum of them, without nuclear absorption. 
Upper and lower solid lines are the $R_{AA}$ obtained with the nuclear absorption cross section of 0 and 28 mb, respectively.}\n\\label{LHCa}\n\\end{figure}\n\nSince it takes time for an initially produced $c\\bar{c}$ pair to form a charmonium, which depends on the charmonium radius and the relative velocity between charm and anticharm quark \\cite{Blaizot:1988ec,Karsch:1987zw},\nthermal dissociation of charmonia is thus delayed until charmonia are formed. Since the $J\/\\psi$ formation time increases with its transverse momentum as a result of time dilation, this effect becomes more important for $J\/\\psi$'s of high transverse momenta that are measured in experiments at LHC. In this study, we treat the formation time as a free parameter to fit the experimental data. Using the formation time of 0.5 fm\/$c$, which corresponds to 1.4 fm\/$c$ in the firecylinder frame based on the average of the $J\/\\psi$ transverse momenta that are above 6.5 GeV, our results for the the nuclear modification factor $R_{AA}$ of $J\/\\psi$ with transverse momentum larger than 6.5 GeV as a function of the participant number are shown in Fig. \\ref{LHCa} without (upper panel) and with (lower panel) the shadowing effect as well as without (upper solid curve) and with (lower solid curve) the nuclear absorption effect. It is seen that the results obtained without the shadowing and the nuclear absorption effect describe reasonably the recent experimental results from the CMS Collaboration at LHC \\cite{cms} shown by solid squares. Also, it is interesting to see that the shoulder structure around $N_{\\rm part}=100$ in the measured $R_{AA}$ at LHC is roughly reproduced by our model. As suggested in our previous study on the $J\/\\psi$ $R_{AA}$ in Au+Au collisions at RHIC, the sudden drop in its value at certain value of $N_{\\rm part}$ reflects the maximum temperature of the formed QGP that is above the dissociation temperature of $J\/\\psi$, because the survival probability of $J\/\\psi$ is discontinuous at its dissociation temperature. Moreover, the fact that the shoulder seen at LHC occurs at a smaller number of participants than the value $N_{\\rm part}=190$ at RHIC is consistent with the expectation that the maximum temperature of the QGP formed at LHC that is above the dissociation temperature of $J\/\\psi$ happens in more peripheral collisions than at RHIC. We note that the CMS Collaboration has also measured the fraction of $J\/\\psi$ from the decay of bottom hadrons in Pb+Pb collisions at $\\sqrt{s_{NN}}=2.76$ TeV and found that its nuclear modification factor is about 0.37, which corresponds to $R_b$ in Eq. (\\ref{Raa-highpt}), and is almost independent of the centrality \\cite{cms}.\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=10.0 cm]{LHCd.eps}}\n\\caption{Ratio $R_{\\rm cp}$ of the $R_{AA}$ of $J\/\\psi$\nwith transverse momentum larger than 6.5 GeV in a given centrality to that\nin the peripheral collision versus the centrality of Pb+Pb collisions\nat $\\sqrt{s_{NN}}=2.76$ TeV at LHC. Experimental data shown by solid squares are from\nRef. \\cite{:2010px}.}\n\\label{LHCb}\n\\end{figure}\n\nOur results can further be compared with the experimental data from the ATLAS collaboration at LHC \\cite{:2010px} on the centrality dependence of the ratio $R_{\\rm cp}$ of the $R_{AA}$ of $J\/\\psi$ in a collision of certain centrality to that in the peripheral collision. 
For this purpose, we determine the centrality of a collision using the Glauber model as follows \\cite{Miller:2007ri}:\n\\begin{eqnarray}\n{\\rm Centrality}(b)=\\frac{\\sigma_{\\rm inel}^{AB}(b)}{\\sigma_{\\rm total~inel}^{AB}}~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n=\\frac{\\int^b_0 2\\pi b^\\prime db^\\prime \\bigg\\{1-\\bigg[1-T_{AB}(b^\\prime)\\sigma_{\\rm inel}^{NN}\\bigg]^{AB}\\bigg\\}}{\\int^\\infty_0 2\\pi b^\\prime db^\\prime \\bigg\\{1-\\bigg[1-T_{AB}(b^\\prime)\\sigma_{\\rm inel}^{NN}\\bigg]^{AB}\\bigg\\}},\n\\end{eqnarray}\nwhere the numerator is the inelastic cross section of nuclei A and B with the impact parameters between 0 and b, and the denominator is the total inelastic cross section of the two nuclei; and $\\sigma^{NN}_{\\rm inel}$ is the inelastic cross section of a p+p collision at the same collision energy. In Fig. \\ref{LHCb}, we show the calculated centrality dependence of $R_{cp}$ in Pb+Pb collisions at $\\sqrt{s_{NN}}=2.76$ TeV at LHC with the uncertainty of the reference point, i.e., the $R_{AA}$ of $J\/\\psi$ in the peripheral collision, shown as dashed lines. It is seen that results from our model calculations can reproduce the measured $R_{\\rm cp}$ of $J\/\\psi$'s with high transverse momenta, and the shadowing and the nuclear absorption effect do not make significant difference in the $R_{\\rm cp}$ of $J\/\\psi$.\n\n\\section{summary}\\label{summary}\n\nModeling the evolution of the hot dense matter produced in relativistic heavy-ion collisions by a schematic viscous hydrodynamics, we have extended the two-component model, that was previously used to describe $J\/\\psi$ production in heavy-ion collisions at RHIC, to those at SPS and LHC. As in our previous studies, we have included the effect due to absorption by the cold nuclear matter on the primordially produced charmonia from initial nucleon-nucleon hard scattering, the dissociation of survived charmonia in the produced hot dense matter, and the regeneration of chamronia from charm and anticharm quarks in the quark-gluon plasma. For heavy-ion collisions at LHC, we have further included the shadowing effect in the initial cold nuclei. We have also taken into account the medium effects on the properties of the charmonia and their dissociation cross sections by using the screened Cornell potential model and the NLO pQCD. With the same quasiparticle model for the equation of state of the QGP and the resonance gas model for that of the HG as used before, we have obtained a lower initial temperature than in our previous study to reach the same final entropy density as a result of the finite viscosity. Consequently, a slightly larger QCD coupling constant was needed to reproduce the measured centrality dependence of the nuclear modification factor at RHIC. The calculated nuclear modification factor for heavy-ion collisions at the SPS was found to agree with the measured value as well. For both the SPS and RHIC, the contribution from the primordial charmonia was found to dominate, although the contribution from the regenerated ones increases from the SPS to RHIC. For heavy-ion collisions at LHC, the regenerated charmonia becomes most important in semi-central to central collisions as a result of the larger number of charm and anticharm quark pairs produced in higher energy collisions. 
Since the available experimental data from the LHC are for $J\/\\psi$'s of transverse momentum $p_T > 6.5$ GeV, we have further considered the contribution of $J\/\\psi$ production from the decay of bottom hadrons, as its effect increases with increasing $J\/\\psi$ transverse momentum, and the effect due to the formation time of the $J\/\\psi$. A reasonable agreement with the preliminary experimental data has been obtained if the shadowing and the nuclear absorption effects are absent. However, a definitive conclusion can only be made after more refined experimental data becomes available. Furthermore, we have found a similar trend in the centrality dependence of the $R_{AA}$ of $J\/\\psi$ at both RHIC and LHC: it decreases monotonically in peripheral collisions and then drops at a certain centrality as a result of the onset of an initial QGP temperature higher than the $J\/\\psi$ dissociation temperature. Moreover, this takes place in less central collisions at LHC than at RHIC, indicating that the initial temperature at the same centrality is higher at LHC than at RHIC.\n\n\\section*{Acknowledgements}\nThis work was supported in part by the U.S. National Science\nFoundation under Grant Nos. PHY-0758115 and PHY-1068572, the US Department of Energy\nunder Contract No. DE-FG02-10ER41682, and the Welch Foundation under\nGrant No. A-1358.\n\n
\\section{Introduction}\n\nGas-grain chemical models, which are useful tools for studying the chemistry in the interstellar medium, often include the rate-equation approach to calculate the evolution of species in the gas phase and on the grain surface \\citep[e.g.,][]{1973ApJ...185..505H,1992ApJS...82..167H}.\nRegarding the chemistry on dust grains and their ice mantles, the rate-equation approach can be used in the basic two-phase treatment, in which no distinction is made between the surface of the ice and the layers underneath it, in the three-phase model, in which chemistry takes place only on the surface, or even in a multi-layer approach \\citep[e.g.,][]{1992ApJS...82..167H,1993MNRAS.263..589H,2012A&A...538A..42T}.\nEven the two-phase approach can describe the chemistry reasonably accurately \\citep{2008ApJ...682..283G}, and rate-equation methods are efficient for large chemical networks where thousands of reactions are taken into account.\nSome limitations exist when the average number of species per dust grain is below unity, in which case stochastic methods are more accurate \\citep{1997OLEB...27...23T,1998ApJ...495..309C,2003Ap&SS.285..725H,2003A&A...400..585L,2004A&A...423..241S}.\nIn contrast to microscopic kinetic Monte Carlo models and other detailed stochastic treatments \\citep{2005A&A...434..599C,2005MNRAS.361..565C,2007A&A...469..973C,2009A&A...508..275C,2010A&A...522A..74C,2012ApJ...759..147C,2012ApJ...751...58I,2014ApJ...787..135C}, rate-equation models do not take into account each individual process that occurs on and beneath the ice surface.
\nFor example, when a molecule is adsorbed on the grain surface, the desorption energy should depend on the substrate and other grain surface parameters, which are functions of the location on the grain surface, itself continuously in evolution as a function of time.\nFor rate-equation treatments, however, a single binding energy per adsorbate is commonly used depending upon physical conditions and type of source.\nSince water ice is often the main component of dust grain mantles in cold dense interstellar clouds \\citep{1982A&A...114..245T,1998ApJ...498L.159W}, the adsorption energies used in models of these sources are usually the ones of a given adsorbate on a water substrate. A list of desorption energies for a selection of physisorbed adsorbates on water and other substrates can be found in Table 3 of\n\\cite{2007ApJ...668..294C}.\nIn this paper we will be concerned with the binding of H$_{2}$ on a water substrate and on itself, and will be using 440~K and 23~K on H$_2$ for the two values, respectively \\citep{2007ApJ...668..294C}.\n\nDetermination of the amount of H$_{2}$ on a dust grain in a cold cloud is a difficult task for several reasons. First, it is difficult if not impossible to include the adsorption and desorption of hydrogen molecules in a complete treatment of the surface chemistry via the kinetic Monte Carlo approach given the speed with which these events can happen. The situation gets worse if complex simulations of star formation, such as three-dimensional hydrodynamical simulations \\citep{2012ApJ...758...86F,2013ApJ...775...44H}, or even warm-up models \\citep[e.g.][]{2004MNRAS.354.1141V,2006A&A...457..927G}, are undertaken, because the kinetic Monte Carlo models are very time-consuming computationally. Secondly, the large difference between the binding energy of H$_{2}$ to a water ice substrate and to itself means that a rate-equation model with only one binding energy for H$_{2}$, the one with water, can lead to\n an overestimate of the H$_2$ granular abundance, especially at low temperature and high density ($\\approx$10~K and >10$^4$~cm$^{-3}$), and on the contrary, considering a single binding energy of 23~K prevents adsorption of H$_2$ onto grain surfaces, which is not real except at high temperatures. \nThus, a simple and efficient numerical approach to deal with H$_2$ coverage as a function of temperature, density, and time, and applicable to rate-equation chemical models, is desirable.\nThe goal of this paper is to present one such approach and to use it in treatments of cold and dense regions. \nThis new approach differs from earlier approaches of \\cite{2010PhRvE..81f1109W}, \\cite{2011A&A...529A.151C}, and \\cite{2011ApJ...735...15G}.\n\nThe remainder of the paper is structured as follows. \nWe present our treatment in terms of a rate equation in Section~\\ref{sec:enc_des_mec}. In Section~\\ref{sec:microMCmodel}, we then consider a simple steady-state model in which we only include a fixed amount of H$_{2}$ and calculate the surface abundance of H$_{2}$ as a function of density for a cloud at 10 K. We compare the results of this simple model with those of a detailed kinetic Monte Carlo approach. The two approaches lead to very similar results for the H$_{2}$ surface abundance. In Section~\\ref{sec:REmodel}, we introduce a large gas-grain network and code with encounter desorption, based on the Nautilus model \\citep{2009A&A...493L..49H}, and use it to obtain the H$_{2}$ surface abundance as a function of density. 
The good agreement with the simple treatments suggests that we can use a large gas-grain rate-equation treatment with encounter desorption to determine the overall chemistry that occurs as a function of H$_2$ surface abundance. The chemistry is discussed in Section~\\ref{sec:discussions}, and a conclusion follows.\n\n\n\\section{The \"encounter desorption\" mechanism}\n\\label{sec:enc_des_mec}\n\nWater is the main component of the ice mantle, therefore the desorption energy of a species on a water substrate is usually used.\nHowever, at very high density, H$_2$ can become quite abundant on the grain surface, since it is the most abundant species in the gas phase. In an extreme case, we would need to use the binding energy of adsorbates to H$_{2}$ and not to water \\citep{2013MNRAS.429.3578M}. \nTo take this problem into account, \\cite{2011ApJ...735...15G} calculated effective binding energies and diffusion barriers according to the fractional coverage of the surface with H$_2$.\nThis method produces a maximum H$_2$-ice fraction of around $10$~\\% under cold molecular cloud conditions.\nUnfortunately, desorption energies and diffusion barriers of every species become time dependent using this technique.\nThen the differential equation system become stiffer than usual and as a consequence more difficult to solve, which could be a handicap for complex hydrodynamical simulations \\citep{2012ApJ...758...86F,2013ApJ...775...44H}, where computing time is a critical limitation.\n\nOur approach, which we label ``encounter'' desorption, is a different one. Figure~\\ref{fig:H2_dif_des} shows an interstellar grain consisting of a silicate or carbonaceous core, and an ice mantle assumed mainly to be of water ice, with water molecules in the top layer illustrated in dark blue, and hydrogen molecules in white. 
Individual hydrogen molecules diffuse for the most part over water molecules until they reach another hydrogen molecule beneath them, at which time the binding energy of the diffusing species is sharply reduced from 440~K to 23~K, raising the likelihood of desorption.\nThe desorption of an H$_2$ molecule due to the lower desorption energy, when the molecule ends up on an H$_2$ substrate, is modeled by considering the encounter of two H$_2$ molecules on the same surface site.\nThe grain surface ``reaction'' $\\rm H_2(grain) + H_2(grain) \\longrightarrow H_2(grain) + H_2(gas)$ is added to the reaction network with a specific reaction rate $R_{H_2H_2}$ to take this process into account.\n\nThe rate is given by\n\\begin{equation}\n\\label{eq:enc_des}\nR_{H_2H_2}= \\frac{1}{2} k_{H_2H_2} n_s(H_2)n_s(H_2)\\kappa(H_2),\n\\end{equation}\nin units of cm$^{-3}$~s$^{-1}$, where $k_{H_2H_2}$ is the rate coefficient (cm$^{3}$~s$^{-1}$) at which two hydrogen molecules diffuse into the same lattice site \\citep{1992ApJS...82..167H,1998ApJ...495..309C}, $n_{s}(H_{2})$ (cm$^{-3}$) is the concentration of hydrogen molecules on grain surfaces, and $\\kappa(H_{2})$ is the probability of desorption rather than diffusion.\nThis probability is given by the equation\n\\begin{equation}\n\\label{eq:kappa_AdsDes}\n\\kappa(H_2)=\\frac{\\displaystyle\\sum_{X}k_{Xdes}(H_2)}{R_{diff}(H_2)+\\displaystyle\\sum_{X}k_{Xdes}(H_2)},\n\\end{equation}\nwhere the sum over X runs over the thermal desorption rate coefficient and assorted non-thermal desorption rate coefficients (s$^{-1}$) due to photons and cosmic rays.\nIn the general case, we take into account thermal desorption, cosmic ray induced desorption, and photodesorption by direct interstellar UV photons and by secondary photons generated by cosmic rays \\citep[see][]{1992ApJS...82..167H,1993MNRAS.261...83H,1985A&A...144..147L,2007ApJ...662L..23O,2009A&A...504..891O,2009ApJ...693.1209O,2009A&A...496..281O,2008ApJ...681.1385H,2010A&A...515A..66H}, while $R_{diff}$ is the diffusion rate (s$^{-1}$) of one H$_2$ molecule on an H$_2$ substrate \\citep{1992ApJS...82..167H,1998ApJ...495..309C}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\linewidth]{drawing}\n\\caption{An interstellar grain covered by a mantle of ice. \nThe surface layer is composed of water and molecular hydrogen, and H$_2$ molecules diffuse on the surface.}\n\\label{fig:H2_dif_des}\n\\end{figure}\n\n\\section{A comparison between different methods for a simple system}\n\\label{sec:microMCmodel}\nBefore we apply our encounter desorption mechanism to a realistic gas-grain simulation, the validity of the mechanism has to be tested. Since the microscopic Monte Carlo approach is the most rigorous simulation method, ideally we should compare the results of the rate-equation approach including encounter desorption with analogous microscopic Monte Carlo results obtained with a full reaction network.\nHowever, since the gas phase H$_2$ abundance is too large to be treated by the Monte Carlo method, we choose a system that is as simple as possible. \nIn this section, we report the results of a comparative study of such a simple system, in which we consider only H$_{2}$ and a mantle of effectively water ice. \nWe then compute the steady-state fraction of molecular hydrogen on the dust grains at 10 K as a function of total H$_{2}$ density. \nWe take a typical grain with radius $0.1 \\mu$m and $10^6$ binding sites, and a dust-to-gas number density ratio of 10$^{-12}$. 
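(For reference, these numbers are mutually consistent: a grain of radius $0.1~\\mu$m has a surface area of $4\\pi (10^{-5}~\\mathrm{cm})^{2} \\approx 1.3\\times 10^{-9}$~cm$^{2}$, so $10^6$ binding sites corresponds to a surface site density of $\\sim 8\\times 10^{14}$~cm$^{-2}$, close to the commonly adopted value of $\\sim 10^{15}$~cm$^{-2}$.) 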
\nH$_2$ from the gas can accrete onto a grain surface and then \ndiffuse or desorb from the surface.\nThermal desorption is treated in the standard manner, while the rate coefficient for encounter desorption is treated as in equation (\\ref{eq:enc_des}), but without non-thermal desorption mechanisms. \n\nThe simple rate equation approach is based on setting the time derivative of the concentration of H$_{2}$ on grains to zero:\n\\begin{equation}\n\\frac{d~n_{s}(H_2)}{dt} = k_{ads}(H_2)n_g(H_2) - R_{H_2H_2} - k_{\\theta des}(H_2)n_s(H_2) = 0 \n\\end{equation}\nand solving for the H$_{2}$ grain concentration. In the equation, $n_g(H_{2})$ is the gas-phase H$_{2}$ abundance, $k_{ads}$ is the adsorption rate coefficient for H$_{2}$, $k_{\\theta des}$ is its thermal desorption rate coefficient \\citep{1992ApJS...82..167H}, and the rate for encounter desorption is to be found in equation (\\ref{eq:enc_des}). \n\nThe microscopic Monte Carlo simulation method has been explained in detail in \\cite{2005A&A...434..599C}, so will only be discussed briefly here.\nA grain surface with $N$ binding sites is represented as an $L\\times L$ square lattice, where $L$ is the number of sites on grain surface in one dimension.\nWe keep track of the position and movement of H$_{2}$ species on the lattice.\nThe movements, which include hopping, desorption, and adsorption, are modeled \nas Poisson processes, so that\nthe time interval between two successive movement operations, $\\Delta t$, is given by\n\\begin{equation}\n\\Delta t = \\frac{\\ln(x)}{k},\n\\end{equation}\nwhere $x$ is a random number uniformly distributed within 0 and 1, and $k$ (in s$^{-1}$) is the hopping rate coefficient $k_{hop}$, the thermal desorption rate coefficient $k_{\\theta des}$, or the adsorption rate coefficient $k_{ads}$, depending on the specific movement operation. \nMoreover, hopping will compete with desorption for an H$_{2}$ species that resides in a binding site. \nWe combine hopping and desorption as a joint Poisson process and then use a competition mechanism to decide whether the species will hop or desorb \\citep{2005A&A...434..599C}. 
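\n\nFor concreteness, the rate-equation branch of this comparison can be solved in a few lines. The short Python sketch below finds the root of the steady-state balance written above, using the encounter desorption rate of equation (\\ref{eq:enc_des}) and the probability of equation (\\ref{eq:kappa_AdsDes}); the characteristic frequency, the barrier-to-binding-energy ratio, and the adsorption and hopping prescriptions are illustrative assumptions in the spirit of \\cite{1992ApJS...82..167H}, not parameters quoted from this section.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\nNU      = 1.0e12    # assumed characteristic frequency [1/s]\nN_SITES = 1.0e6     # binding sites per grain (as in the text)\nD2G     = 1.0e-12   # dust-to-gas number ratio (as in the text)\nSIGMA   = np.pi*(1.0e-5)**2   # cross section of a 0.1 micron grain [cm2]\nE_WAT, E_H2, ALPHA, T = 440.0, 23.0, 0.5, 10.0\n\ndef k_des(E):                 # thermal desorption rate [1/s]\n    return NU*np.exp(-E/T)\n\ndef r_diff(E):                # sweeping rate over the grain [1/s]\n    return NU*np.exp(-ALPHA*E/T)/N_SITES\n\ndef n_surf(n_tot):\n    n_d   = D2G*n_tot         # grain number density [1/cm3]\n    v_th  = np.sqrt(8*1.38e-16*T/(np.pi*2*1.66e-24))   # mean speed of H2 [cm/s]\n    k_ads = SIGMA*v_th*n_d    # adsorption rate of H2 [1/s]\n    k_hh  = 2*r_diff(E_WAT)/n_d     # encounter coefficient [cm3/s], 440 K hopping\n    kappa = k_des(E_H2)/(r_diff(E_H2) + k_des(E_H2))    # 23 K values\n    bal = lambda ns: k_ads*(n_tot - ns) - 0.5*k_hh*kappa*ns**2 - k_des(E_WAT)*ns\n    return brentq(bal, 0.0, n_tot)\n\nfor n in (1e4, 1e8, 1e12):\n    print(n, n_surf(n)/n)     # surface H2 fraction vs total H2 density\n\\end{verbatim}\nWith these assumed numbers the surface fraction stays far below the value obtained with a single 440~K binding energy at high density, which is the qualitative behaviour shown in Figure~\\ref{fig:REvsMC}; the precise values depend on the adopted prefactors.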
\n\nFigure~\\ref{fig:REvsMC} shows the steady-state molecular hydrogen abundance on a grain surface, calculated as a function of H$_{2}$ total density (gas and grain surface) for three models, two of which contain no encounter desorption using a desorption energy for H$_{2}$ of either 440 K, the H$_{2}$-water value, or 23 K, the H$_{2}$-H$_{2}$ value.\nFor these models, only the rate-equation result is shown.\nThe third model contains the encounter desorption rate process as well as thermal desorption using the 440 K desorption energy, which is a very slow process at 10 K\\footnote{In this third model, 440~K is used in the rate coefficient $k_{H_2H_2}$ in equation~\\ref{eq:enc_des}, while 23~K is used in the different terms of the probability $\\kappa(H_2)$ shown in equation~\\ref{eq:kappa_AdsDes}.}.\nFor this case, we also plot the result of the kinetic Monte Carlo approach, which should reproduce the H$_{2}$ granular abundance of the encounter desorption rate-equation model if the latter is accurate.\nNote that the kinetic Monte Carlo model assumes a constant gas phase H$_2$ abundance, which is not the case using the rate-equation approach.\nThe grain surface abundance of H$_2$ is, however, very small compared with the gas phase H$_2$ using the Monte Carlo model, so this assumption does not change our results presented in the figure.\n\nBoth models without encounter desorption show a linear dependence of the H$_{2}$ surface abundance on total proton density $n_{\\rm H}$ for at least a portion of the H$_{2}$ densities considered.\nWith the higher desorption energy, H$_{2}$ is slowly desorbed, so that as the H$_{2}$ density approaches 10$^{11}$ cm$^{-3}$, virtually all molecular hydrogen is located on grains, reaching a fractional abundance of 0.5 with respect to the total proton density. \nWith the lower desorption energy, the average number of H$_{2}$ molecules per grain is less than unity even at the highest density utilized (abundance $\\approx 10^{-14} - 10^{-13}$). \nWith encounter desorption, the results lie in-between, with the H$_{2}$ granular fractional abundance at a standard dense cloud gas density of 10$^{4}$ cm$^{-3}$ approximately 10$^{-9}$, which corresponds to about 40 molecules per grain, and, at the highest density studied, $4\\times 10^{6}$ molecules per grain, which corresponds to $\\sim4$ monolayers. 
\nThe surface fractional abundance calculated with the rate equation model including encounter desorption is slightly larger than the value obtained with the microscopic Monte Carlo model at densities larger than $10^{12}$~cm$^{-3}$, because in the rate-equation model, the H$_2$ grain surface concentration is approximately linearly dependent on the density of the medium, whereas the Monte Carlo model involves one monolayer of H$_2$ as a limit.\nHowever, even at the highest density in our simulation, $10^{14}$~cm$^{-3}$, the encounter desorption model result for the grain H$_{2}$ abundance is only about a factor of 4 larger than the microscopic Monte Carlo model value.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\linewidth]{H2_steady-state}\n\\caption{H$_2$ fractional abundance on the dust surface with respect to the total proton abundance plotted against the total (gas and grain surface) hydrogen density, as computed by different methods for a simple system at steady state (see text in Section~\\ref{sec:microMCmodel}).}\n\\label{fig:REvsMC}\n\\end{figure}\n\n\\section{Results with encounter desorption and a large gas-grain reaction network}\n\\label{sec:REmodel}\n\nGiven the degree of agreement between the kinetic Monte Carlo method and the encounter desorption rate-equation method for a simple system, we have chosen to extend the encounter desorption approach to a full gas-grain model, using the Nautilus code \\citep{2009A&A...493L..49H}.\nThe two-phase rate-equation approach is used, in which no distinction is made between the inner and surface layers of the mantle.\nDetails on the processes included in the code are presented by \\cite{2010A&A...522A..42S} and \\cite{2012PhDT........49H}.\nThe potential energy barrier against diffusion, $E_{\\rm b}$, is linked to the desorption energy $E_{\\rm D}$ by the equation $E_{\\rm b}=\\alpha E_{\\rm D}$.\nWe set $\\alpha$ equal to 0.5 as in \\cite{2006A&A...457..927G}, although other estimates exist, typically ranging from 0.3 to 0.77 \\citep{1976RvMP...48..513W,1987ASSL..134..397T,1992ApJS...82..167H,2000MNRAS.319..837R}.\nWe used the chemical network of \\cite{2013ApJ...775...44H}, which includes the latest recommendations from the KIDA experts on gas-phase processes until October 2011.\nAn electronic version of this network is available at \\url{http:\/\/kida.obs.u-bordeaux1.fr\/models}.\nPhotodesorption has been included following \\cite{2007ApJ...662L..23O,2009A&A...504..891O,2009ApJ...693.1209O,2009A&A...496..281O} and \\cite{2008ApJ...681.1385H,2010A&A...515A..66H} and a limiting factor is added to restrict the mechanism to two monolayers.\nTwo sources of incident UV radiation are considered : direct interstellar UV photons, and secondary photons generated by cosmic rays.\nWe used the elemental abundances following \\cite{2011A&A...530A..61H}\\footnote{Values used in this study come from \\cite{1982ApJS...48..321G}, \\cite{2008ApJ...680..371W}, and \\cite{2009ApJ...700.1299J}.} with an oxygen elemental abundance relative to hydrogen of $3.3\\times10^{-4}$.\nThe species are assumed to be initially in an atomic form as in diffuse clouds except for hydrogen, which is initially in H$_2$ form.\nElements with an ionization potential below 13.6~eV -- C, S, Si, Fe, Na, Mg, Cl, and P -- are initially singly ionized.\nFrom the initial state, the chemistry evolves under cold and dense conditions.\nThe gas and grain temperature are equal to 10~K, the cosmic-ray ionization rate is $1.3\\times10^{-17}$~s$^{-1}$, and the visual 
extinction is set to 30.\nThe density is once again varied in the range $\\sim 10^{4}$~cm$^{-3}$ to $\\sim 10^{14}$~cm$^{-3}$.\nWe have run three different models, summarized in Table~\\ref{tab:models}, which are analogous to those used for the simple system.\nIn models~440-noED and 23-noED, the desorption energy of H$_2$ is fixed to 440~K and 23~K, respectively, and the encounter desorption mechanism is disabled.\nIn model~440-ED, the desorption energy of H$_2$ is fixed to 440~K and the encounter desorption mechanism is enabled with a desorption energy equal to 23~K.\n\n\\begin{table}[h]\n\\centering\n\\caption{Model designations for the full gas-grain simulation}\n\\label{tab:models}\n\\begin{tabular}{ccc}\nModel & H$_2$ Desorption Energy & Encounter Desorption \\\\\n\\hline \\hline\n440-noED & 440~K & disabled \\\\\n23-noED & 23~K & disabled \\\\\n440-ED & 440~K & enabled\n\\end{tabular}\n\\end{table}\n\nFigure~\\ref{fig:JH2_H2} shows the abundance of H$_2$ at steady state in the gas phase and on a grain surface for all three models, as a function of total gaseous plus surface hydrogen density. \nSteady state for H$_2$ is reached before 10~yr since its high abundance in the gas phase causes a high adsorption rate, and because we start with all hydrogen in its molecular form. The steady-state results in Figure~\\ref{fig:JH2_H2} are quite similar to those in Figure~\\ref{fig:REvsMC}. Thus, the addition of a ``complete'' gas-grain reaction network does not significantly change the abundance of surface H$_2$ as a function of density. We note specifically the results for a standard cold dense cloud with the inclusion of encounter desorption (model 440-ED): the H$_{2}$ fractional surface abundance lies between $10^{-11}$ and $10^{-10}$, which corresponds, respectively, to $\\sim 10$ and $\\sim 100$ molecules on the surface of a dust grain.\n\nThe use of encounter desorption, as seen in Figures~\\ref{fig:REvsMC} and \\ref{fig:JH2_H2}, clearly reduces the amount of surface H$_2$ at all densities chosen.\nThese lowered abundances, however, are still significantly higher than what can be obtained if we assume that H$_2$ cannot stick to grains at all, and that all of the molecular hydrogen on grains comes directly from its formation from two H atoms that have accreted onto the surface.\nThus the implementation of encounter desorption does not lead to the same situation as the assumption of no sticking of H$_2$, at least at the densities studied.\n\nThe amount of surface hydrogen is likely to affect the chemistry and abundance of other species, both gaseous and solid-state, especially at densities significantly higher than those pertaining to cold dense clouds. Some of the effect derives from radical-H$_{2}$ surface reactions that can occur even at low temperatures on the surface via tunneling \\citep{1993MNRAS.261...83H}. 
In the following section, we discuss the impact of H$_2$ grain coverage on the abundances of other species for sources at 10 K and a range of densities.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\linewidth]{O3-3E-4_JH2_H2}\n\\caption{H$_2$ fractional abundance in the gas phase (gray) and on the dust grain surface (black) relative to $n_{\\rm_H}$ as a function of total H$_{2}$ density for the three models 440-noED (solid line), 23-noED (dotted line), and 440-ED (dashed line).\nThe dotted and dashed gray lines, which blend into one another, are horizontal and lie atop the figure.}\n\\label{fig:JH2_H2}\n\\end{figure}\n\n\\section{Discussion}\n\\label{sec:discussions}\n\nWe computed the time-dependent chemical evolution under the same range of physical conditions as used previously and with the models listed in Table~\\ref{tab:models}. Although the higher density models are not relevant to dense cores, they can be relevant to the dense midplane of protoplanetary disks and to centers of prestellar isothermally collapsing cores. Moreover, hydrodynamic calculations can lead to temporary high densities and low temperatures. \n\nWe start with a comparison of the gas-phase abundances measured for the cold cores TMC-1CP and L134N and the gas-phase results of the three models using a comparison parameter $D$ between modeling results and observational constraints, given by the equation\n\\begin{equation}\nD(t)=\\frac{1}{N}\\sum_j\\left|\\log\\left(X_j^{mod}(t)\\right)-\\log\\left(X_j^{obs}\\right)\\right|.\n\\end{equation}\nHere, $X_j^{obs}$ is the observed abundance of species $j$, $X_j^{mod}(t)$ is the computed abundance of species $j$ at time $t$, and $N$ is the number of observed species in the cloud.\nThe smaller the value of $D$, the closer the agreement.\nWe used the observed abundances listed in \\cite{2013ChRv..113.8710A}\\footnote{\\samepage\nValues used in this study come from\n\\cite{1985ApJ...290..609M}, \\cite{1987ApJ...315..646M},\n\\cite{1989ApJ...345L..63M}, \\cite{1991A&A...247..487S},\n\\cite{1992ApJ...386L..51K}, \\cite{1992ApJ...396L..49K},\n\\cite{1992IAUS..150..171O}, \\cite{1993A&A...268..212G},\n\\cite{1994ApJ...422..621M}, \\cite{1994ApJ...427L..51O},\n\\cite{1997ApJ...486..862P}, \\cite{1997ApJ...480L..63L}, \\cite{1998A&A...335L...1G},\n\\cite{1998FaDi..109..205O}, \\cite{1998A&A...329.1156T},\n\\cite{1999ApJ...518..740B}, \\cite{2000ApJ...539L.101S},\n\\cite{2000ApJ...542..870D}, \\cite{2000ApJS..126..427T}, \\cite{2003A&A...402L..77P},\n\\cite{2006ApJ...643L..37R}, \\cite{2006ApJ...647..412S},\n\\cite{2007A&A...462..221A}, \\cite{2007ApJ...664L..43B},\n\\cite{2008A&A...478L..19A}, \\cite{2009ApJ...690L..27M},\n\\cite{2009ApJ...691.1494G}, and\n\\cite{2011A&A...531A.103C}.\n}. 
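\n\nThe evaluation of $D$ from a set of computed abundances is straightforward; a minimal sketch in Python, with purely illustrative species and abundances rather than values taken from the references above, is:\n\\begin{verbatim}\nfrom math import log10\n\ndef D(model_abund, obs_abund):\n    # mean absolute difference of log10 abundances over the observed species\n    devs = [abs(log10(model_abund[sp]) - log10(x_obs))\n            for sp, x_obs in obs_abund.items() if sp in model_abund]\n    return sum(devs)/len(devs)\n\nobs   = {'CO': 8e-5, 'CS': 4e-9, 'HC3N': 2e-9}   # hypothetical values\nmodel = {'CO': 6e-5, 'CS': 1e-9, 'HC3N': 5e-10}  # hypothetical values\nprint(D(model, obs))   # about 0.44\n\\end{verbatim}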
\nThere is little difference in the results for $D(t)$ using the three models at densities of $2 \\times 10^{4}$ and $2 \\times 10^{5}$~cm$^{-3}$, as shown in Figure~\\ref{fig:tmc1_l134n}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\linewidth]{obs}\n\\caption{Parameter $D$ as a function of time for TMC-1 (black) and L134N (gray), using a total hydrogen density of $2\\times10^4$ and $2\\times10^5$~cm$^{-3}$ respectively, for the three models: 440-noED (solid line), 23-noED (dotted line), and 440-ED (dashed line).}\n\\label{fig:tmc1_l134n}\n\\end{figure}\n\nFor individual species, however, the three models can yield different results, even in the gas-phase.\nFor an example, let us consider the major species water, CO, and methane, and the atomic carbon.\nPanels A and B of Figure~\\ref{fig:ggH2O_CO_CH4} show the abundances of water, carbon monoxide, and methane both on the grain surface and in the gas phase, as a function of total H$_2$ density, for the three models at $10^6$~yrs, a time relevant for protoplanetary disks and older cold cores. \nGrain surface abundances of these three species are sensitive to the model used, but in the gas phase, only water is strongly affected by the choice of model. \nHowever, while the grain surface abundances of these species vary by a maximum factor of three, the gas phase water abundance is decreased by three orders of magnitude at $\\rm 10^9~cm^{-3}$ going from model 440-ED to 440-noED, which corresponds to an increase in sH$_2$, where the ``s'' stands for ``grain surface''.\nThe depletion in gaseous H$_{2}$ leads to a depletion of precursors to gaseous water and to an increase of sH$_2$, which consumes sOH so that the production of gas phase water through reactive desorption with sH is lessened.\nThe abundance of solid atomic carbon, seen in Panel C, also depends strongly on the sH$_{2}$ abundance.\nIn model 440-noED, where sH$_{2}$ is highest, the abundance of sC is lowest due to its destruction via reaction with sH$_{2}$.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1.0\\linewidth]{O3-3E-4_JH2O_JCO_JCH4_landscape}\n\\caption{H$_2$O (blue), CO (gray), and CH$_4$ (black) abundances at 10$^{6}$ yr on a grain surface (panel A) and in the gas phase (panel B) plotted against total H$_{2}$ density for the three models 440-noED (solid line), 23-noED (dotted line), and 440-ED (dashed line).\nPanel C contains the atomic carbon abundance on a grain surface at the same time and the same models.}\n\\label{fig:ggH2O_CO_CH4}\n\\end{figure*}\n\nDepending on the density and the time considered, a general behavior can be seen for the majority of grain surface species, based upon the surface H$_{2}$ abundance (see Figure~\\ref{fig:JH2_H2}).\nAt the lowest densities, model 440-noED and 440-ED present the same results, while the results of model 23-noED are different. \nAt the highest densities, model 440-noED shows different results from the two others, which are quite similar.\nThus the encounter desorption model starts out similarly to the model with a desorption energy for H$_2$ of 440~K and ends up, with increasing density, similar to the model with a lower desorption energy of 23~K.\nThis relation can be understood by the following argument.\nAt the lowest densities, H$_2$ does not adsorb very much, so encounter desorption is not very efficient since sH$_2$ lies mainly on top of the water substrate. 
\nAt the highest densities, H$_2$ builds up hundreds or even thousands of monolayers if a fixed desorption energy of 440~K is used, while encounter desorption becomes quite efficient under these conditions.\nWe can then discriminate among three different regimes.\nThe first one occurs when almost no H$_2$ at all is present on the grain surface (model 23-noED at the lowest densities), the second one when some H$_2$ is present at an ``intermediate level'' (models 440-ED and 440-noED at the lowest densities, and models 440-ED and 23-noED at the highest densities), and a third one when H$_2$ is very abundant on the grain surface and depletion of H$_2$ from the gas phase is large (model 440-noED at the highest densities).\n\nFigure~\\ref{fig:JCO2} shows these different regimes and the transition from one regime to another for sCO$_2$.\nAt $2\\times 10^5$~yr and $10^4$~cm$^{-3}$, models 440-noED and 440-ED give close results, while model 23-noED gives $\\sim 4$ times more sCO$_2$.\nBetween $10^5$ and $10^{11}$~cm$^{-3}$, a transitional range, each model gives different results.\nAt the highest densities, models 440-ED and 23-noED give the same result, while the third model gives slightly different results.\nHowever, these transitions are not only density dependent, but also somewhat time dependent.\nAt $10^7$~yrs and $10^4$~cm$^{-3}$, all models give similar results, while outside this density, models 440-ED and 23-noED give similar results and model 440-noED gives different ones.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\linewidth]{O3-3E-4_JCO2}\n\\caption{CO$_2$ abundance on the grain surface at $2\\times 10^5$~yrs (black) and $10^7$~yrs (gray) for the three models 440-noED (solid line), 23-noED (dotted line), and 440-ED (dashed line).}\n\\label{fig:JCO2}\n\\end{figure}\n\n\\subsection{Sticking probability}\n\nWe assume a sticking probability $S$ equal to unity, which means that each collision between a gas phase species and a grain results in an adsorption.\nIn reality, this probability depends on numerous parameters such as the gas and grain temperatures \\citep[e.g.][]{2011EPJWC..1803002F} as well as the composition and the structure of the grain surface \\citep[e.g.][]{1970JChPh..53...79H,1991ApJ...379..647B,1998A&A...330..773M}.\nRecently, \\cite{2014MNRAS.443.1301A} experimentally studied the sticking coefficient of H$_2$ on an olivine substrate, and estimated lower limits ranging from 0.25 to 0.82 for temperatures between 7~K and 14~K.\nOur initial assumption may have some impact on our results, so we tested the sensitivity of the rate equation model to this coefficient using a value equal to 0.5. 
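\n\nFor orientation, note that in the rate-equation treatment the sticking probability enters the adsorption rate linearly through the usual geometric prescription (a standard expression recalled here, not a formula quoted from this paper),\n\\[\nk_{ads}(i)= S\\,\\sigma_{d}\\,\\langle v(i)\\rangle\\, n_{d}, \\qquad \\langle v(i)\\rangle=\\sqrt{\\frac{8k_{B}T_{gas}}{\\pi m_{i}}},\n\\]\nwhere $\\sigma_{d}$ is the geometric cross section of a grain and $n_{d}$ the grain number density, so that halving $S$ simply halves every adsorption rate.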
\n\nWith $S=0.5$, the abundance of sH$_2$ is decreased while the abundance of gas phase H$_2$ is increased, both by at most a factor of $\\sim 10$.\nIn the 440-noED model, all hydrogen is located on the grain surface as the density approaches $10^{12}$~cm$^{-3}$, roughly ten times higher than with $S=1$.\nThis result obviously comes from the lower adsorption rate of gas phase H$_2$ onto the grain surface.\nThe sensitivity of other molecules to the value of $S$ also depends on whether they are formed in the gas or on grains.\nMolecules such as sH$_2$O and sCH$_4$ are efficiently produced on the grain surface, so a decrease in the adsorption rate normally implies a decrease of the abundance of the reactants that will produce these molecules.\nHowever, water is also formed in the gas phase, so its sensitivity to $S$ is lower than for methane, which is essentially formed on the grain surface; the abundance of water is decreased by a factor of two at most, while the factor can be as high as ten for methane.\nThe abundance of sCO is modified by less than a factor of two. \nThis small effect stems from two opposing processes: the adsorption rate of CO is lower when $S$ is reduced to 0.5, but the surface reaction rates involving sCO are also lower. \n\nWhile the abundance of a given molecule can be different when the sticking probability is reduced from 1.0 to 0.5, the relative results of the three models 440-ED, 440-noED, and 23-noED exhibit the same pattern whatever the value of the coefficient is.\nFor example, the abundance of sC is still much lower in the case of the 440-noED model than in the other two models, while the gas phase abundances of CO, CH$_4$, and H$_2$O are still not dependent on the model, except for water at densities higher than $10^7$~cm$^{-3}$ for the 440-noED model, as shown in panel B of Figure~\\ref{fig:ggH2O_CO_CH4}.\nAs a consequence, we conclude that the value of the sticking coefficient does not significantly impact the relative efficiency of encounter desorption.\n\n\\subsection{Initial abundances}\n\nWe typically start with all hydrogen in its molecular form.\nTo test the sensitivity of encounter desorption to this assumption, we also performed some simulations with all hydrogen initially in its pure atomic form.\nThe results for the three models 440-ED, 440-noED, and 23-noED are presented in Figure~\\ref{fig:ggH2O_CO_CH4_HinH}, at the same time (10$^{6}$ yr) and for the same species as in Figure~\\ref{fig:ggH2O_CO_CH4} to allow for an easy comparison.\nThe abundance profiles of surface atomic carbon and gas phase water, carbon monoxide, and methane are similar to those of our previous simulation.\nDue to the high density, adsorption of atomic hydrogen is efficient, and H$_2$ is formed quickly on the grain surface.\nGrain surface H$_2$ needs about 1~yr or less, depending on the density, to reach steady state, and gas phase H$_2$ needs about $10^6$ and $10^1$~yrs to reach steady state at total proton densities of $2\\times 10^4$ and $2\\times 10^8$~cm$^{-3}$, respectively.\nThe case of the main ices is, however, slightly different.\nThey are formed faster, since hydrogenation by sH is more efficient.\nBesides, surface atomic carbon is primarily converted into methane rather than carbon monoxide.\nFor these ices, the abundance of sH$_2$ seems less critical than in our previous simulations, and as a consequence the differences between the results of models 440-ED, 440-noED, and 23-noED are reduced.\nIn conclusion, our results are still sensitive to encounter desorption at 10$^{6}$ yr using 
atomic hydrogen as an initial condition, depending on the considered species.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1.0\\linewidth]{O3-3E-4_JH2O_JCO_JCH4_landscape_HinH}\n\\caption{Same as Figure~\\ref{fig:ggH2O_CO_CH4}. Hydrogen is initially in atomic rather than molecular form.}\n\\label{fig:ggH2O_CO_CH4_HinH}\n\\end{figure*}\n\n\\subsection{Motion through quantum tunneling}\n\nWe typically assume thermal diffusion.\nWe also studied the sensitivity of our three models 440-ED, 440-noED, and 23-noED to the motion of H$_2$ through quantum tunneling.\nThe abundances of H$_2$ in the gas phase and on the grain surface are not changed for the two models 440-noED and 23-noED using this new assumption, at all times and densities.\nFor the third model, abundance of s-H$_2$ is however reduced by about three orders of magnitude compared to the same model without tunneling, at all times and densities.\nSince motion through quantum tunneling is faster than thermal diffusion at 10~K, encounter desorption happens more frequently and reduces the surface abundance of H$_2$.\nAs a consequence for the molecules studied in this paper, the results of 440-ED model are closer to those of the 23-noED model.\nDepending on the density and the time however, results of these two models can still be quite different.\n\n\\section{Conclusion}\n\\label{sec:conclu}\n\nWe have developed a new approach to prevent a huge accumulation of H$_2$ on interstellar grain surfaces at low temperatures and high densities, which should not occur because the desorption energy of H$_2$ on an H$_2$ substrate is much lower than on a water substrate.\nThis method, which is to be used in gas-grain rate-equation simulations, is based on the facile desorption of molecular hydrogen when it encounters a molecule of an H$_2$ substrate.\nWe have named this process ``encounter desorption''.\n\nIn order to test our approach, we first used a very simple system including the encounter desorption process to calculate the surface abundance of H$_{2}$ molecules as a function of density at a temperature of 10 K. We then compared our result with an analogous but more exact result obtained using a microscopic Monte Carlo stochastic method. A comparison between the steady-state results of the rate-equation model with encounter desorption and the Monte-Carlo approach gives very good agreement for gas densities from $10^4$ to $10^{12}$~cm$^{-3}$.\nAbove $10^{12}$~cm$^{-3}$, the rate-equation model slightly overestimates the H$_2$ grain surface abundance, by a factor of up to 4.\nWe then used a complete gas-grain network with the Nautilus model to which encounter desorption was added and repeated the steady-state rate-equation calculation of the surface abundance of H$_{2}$ vs density, obtaining very similar results to both the simple rate-equation and Monte Carlo treatments. We thus conclude that the approach is a reasonable one, although the mathematics do not distinguish between the surface layer of an ice mantle and the inner layers. \n\nWe also studied the impact of different H$_2$ surface abundances on the abundance of other species, using two models with a fixed desorption energy of H$_2$ (23~K and 440~K) and one model with the encounter desorption mechanism. The sensitivity of the results of these models is relatively complex, and depends on the considered chemical species, the density of the medium, and the time. 
 Nevertheless, the calculated abundances can often be divided into three regions depending upon whether the H$_{2}$ surface coverage is low, intermediate, or high. \nReducing the sticking coefficient for adsorption from unity to 0.5 does not change these results.\n\nWe tested the sensitivity of our models to a different initial condition -- hydrogen initially in atomic form instead of molecular -- and to the motion of H$_2$ via quantum tunneling instead of only thermal diffusion.\nThese assumptions may change the abundances of some species depending on the density and the time, but our simulations are generally still sensitive to the encounter desorption mechanism, and the results mentioned in the previous paragraph often still hold.\n\nOur results highlight the need to incorporate into rate-equation models a way to model H$_2$ coverage more correctly at low temperature and high density.\nThis need becomes even more pressing when the physical conditions present huge variations, such as during the collapse of a prestellar core to form a protostar surrounded by a disk.\nIn this scenario, matter transits through variations of both density and temperature, which lead to potentially large variations of H$_2$ coverage, some quite unphysical, such as the limit in which all H$_{2}$ lies on interstellar grain mantles.\nThe encounter desorption method keeps the amount of H$_{2}$ at physically reasonable values by using both the desorption energy of H$_{2}$ on a water ice mantle and that on a mantle with some H$_{2}$ on its surface. The approach computes the rate of encounter desorption ``on the fly'', i.e., as a function of time and H$_2$ coverage, and therefore is well designed for a scenario in which the physical conditions are rapidly changing. \nIn addition, it is relatively easy to implement, and is not significantly CPU time consuming.\nAlthough H$_2$ is the most abundant molecule in the interstellar medium, a possible extension of this work would be to consider encounter desorption for other weakly bound atoms and molecules.\n\n\\begin{acknowledgements}\nU. Hincelin thanks G. Hassel for help in adding photodesorption processes to the chemical code, and K. Acharyya and K. Furuya for useful discussions.\nWe thank the anonymous referee for suggestions that helped us to improve our paper.\nE. Herbst acknowledges the support of the National Science Foundation (US) for his astrochemistry program, and support from the NASA Exobiology and Evolutionary Biology program through a subcontract from Rensselaer Polytechnic Institute.\nSome kinetic data we used have been downloaded from the online database KIDA (KInetic Database for Astrochemistry, \\url{http:\/\/kida.obs.u-bordeaux1.fr}, \\cite{2012ApJS..199...21W}).\n\\end{acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nAlthough from the very beginning of the Special Theory of Relativity (STR) researchers tried to use various mathematical tools to describe relativistic phenomena, the mainstream of theoretical physics went in the direction of tensor calculus. It is a formalism that requires extensive mathematical knowledge and proficiency. Operations on vectors, even when they are more than 3-dimensional, can still be visualized, as opposed to the manipulation of tensors, which is why many authors still try to describe the Theory of Relativity using more friendly tools. 
For some time now, many articles on the STR have appeared whose authors use quaternion algebra, geometric algebra or paravectors, which shows that tensor calculus has not been accepted by all as the best language for this branch of physics. \n\nJ.L. Synge \\cite{Synge} (1972) tried to use complex quaternions to present Maxwell's equations. At the same time, many articles on relativistic physics were written by David Hestenes \\cite{Hestenes} using Grassmann algebra. Following him, but using paravectors, was \\href{http:\/\/www1.uwindsor.ca\/physics\/dr-william-baylis}{William Baylis}, who showed that paravectors and multivectors belong to different representations of the same algebra \\cite{Baylis}. The pedagogical experience of Professor William Baylis \\cite{Baylis_1} shows that the STR taught in the paravector formalism is absorbed by students much faster and more easily than when it is taught traditionally. This means that this formalism is more intuitive, and therefore better suited to describing relativistic phenomena than tensor calculus.\n\nThis work is a continuation of my article ``\\href{http:\/\/arxiv.org\/abs\/1601.02965}{Algebra of paravectors}'', where paravectors are shown in such a way that everyone can imagine them as vectors. With the operation of summation, paravectors form a unitary space over the field of complex numbers. Although there is an important difference compared to the commonly applied definition of the scalar product (the product of a paravector with itself is not a positive real number but a complex one!), the analogy is so clear that I decided to keep the name commonly used in linear algebra. From the point of view of algebra, paravectors together with summation and multiplication form a ring with identity, due to which they have some characteristics of numbers.\n\nThe current paper presents simple methods for transforming expressions containing the differential operator under a linear transformation described by a paravector. Identities containing the operator of spatio-temporal differentiation will be proved, through which, in turn, the invariance of the wave equation under orthogonal transformations will be shown. Next, I will show how these equations are transformed under an Euclidean rotation. I will not interpret the results from the physics point of view, nor will I analyse the physical sense of the domain or of complex space. I treat the subject as pure mathematics to be used in subsequent works undertaking more practical problems. At the moment I can only say that the complex space-time, although similar to the Euclidean one, has a different structure, but is not Minkowski space either. Therefore, the results will come gradually.\n\nBefore reading this work, it is recommended to read the article ``\\href{http:\/\/arxiv.org\/abs\/1601.02965}{Algebra of paravectors}''~\\cite{Radomanski}.\n\n\\section{The spatio-temporal differential operator.} \n\nSince we do not yet know what restrictions should be put on the differentiation so that it does not conflict with physics, we adopt the most general assumption: the domain of the paravector function is a complex space-time. This means that both time and space are complex. That some assumptions are necessary is evidenced by the fact that the differentiation of time should obey different rules than the differentiation of space, if only because time does not run backwards. 
Determining when, why and what assumptions we need to impose requires a thorough examination, but the subject is so vast that it will be discussed gradually in subsequent publications.\n\nIn the conclusion of the article ``\\href{http:\/\/arxiv.org\/abs\/1601.02965}{Algebra of paravectors}'' it was mentioned that some paravectors are additive, and others are not, in spite of having the same construction. The additive ones are traditionally called 4-vectors and are denoted with a capital doubled letter or featured as a column matrix in parentheses, for example:\n\n\\begin{equation}\n\\mathbb{X} :=\n \\begin{pmatrix}\n \\Delta t \\\\ \n \\Delta \\mathbf{x}%\n \\end{pmatrix}%\n\\end{equation}\n\nNon-additive paravectors, which we can only multiply, are denoted by a capital letter or presented as a column array in square brackets, for example:\n\n\\begin{equation}\n \\Gamma :=\n \\begin{bmatrix}\n \\alpha \\\\ \n \\boldsymbol{\\beta }\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n a+id \\\\ \n \\textbf{b}+i\\textbf{c}\n \\end{bmatrix}\n\\end{equation} \n \n\\begin{definition}\nWe call the following paravector the \\textbf{spatio-temporal differential operator} (or \\textbf{4-divergence}):\n\\[\n\\partial :=\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\]\n\nThe operator reverse to the 4-divergence is called the \\textbf{4-gradient}:\n\\[ \\partial^{-} =\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n-\\nabla\n\\end{bmatrix}\n\\]\n\nLet $A(X)$ be an analytic paravector function defined on the set $C^{1+3}$. The spatio-temporal differential operator acts on the function $A(X)$ as follows:\n\n\\begin{equation}\n\\partial A(X) =\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\varphi (X) \\\\ \n\\pmb{\\Phi }(X)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\\frac{\\partial \\varphi}{\\partial t}+ \\nabla \\pmb{\\Phi }\\\\ \n\\frac{\\partial \\pmb{\\Phi }}{\\partial t}+\\nabla \\varphi +i\\nabla \\times \\pmb{\\Phi}\n\\end{bmatrix}\n\\end{equation}\n\\end{definition}\n\n\\begin{example}\n\nIn this notation, the equations of electricity and magnetism in a vacuum have the form:\n \\begin{equation} \n \\frac{1}{\\epsilon_{0}}\n \\begin{pmatrix}\n \\rho \\\\ \n -\\mathbf{j}\/c\n \\end{pmatrix}%\n =%\n \\begin{bmatrix}\n \\frac{\\partial }{c\\partial t} \\\\ \n \\nabla%\n \\end{bmatrix}%\n \\begin{pmatrix}\n 0 \\\\ \n \\mathbf{E}+ic\\mathbf{B}\n \\end{pmatrix}\n \\qquad \\text{and}\\qquad \n \\begin{pmatrix}\n 0 \\\\ \n \\mathbf{E}+ic\\mathbf{B}\n \\end{pmatrix}%\n =%\n \\begin{bmatrix}\n \\frac{\\partial }{c\\partial t} \\\\ \n -\\nabla%\n \\end{bmatrix}%\n \\begin{pmatrix}\n \\varphi \\\\ \n -c\\mathbf{A}%\n \\end{pmatrix}%\n \\end{equation}\n \nPerforming the operation of equation (3) on the left-hand equation in (4), we obtain Maxwell's equations in a vacuum, while performing it on the right-hand equation we obtain the condition that the field around charges satisfies a system of wave equations: \n \n\\begin{equation}\n \\left(\\frac{\\partial^{2}}{c^{2}\\partial t^{2}} - \\nabla^{2} \\right)\n \\begin{pmatrix}\n \\varphi \\\\ \n -c\\mathbf{A}%\n \\end{pmatrix}=\n \\begin{bmatrix}\n \\frac{\\partial }{c\\partial t} \\\\ \n \\nabla%\n \\end{bmatrix}%\n \\begin{bmatrix}\n \\frac{\\partial }{c\\partial t} \\\\ \n -\\nabla%\n \\end{bmatrix}%\n \\begin{pmatrix}\n \\varphi \\\\ \n -c\\mathbf{A}%\n \\end{pmatrix}%\n =\\frac{1}{\\epsilon_{0}}\n \\begin{pmatrix}\n \\rho \\\\ \n -\\mathbf{j}\/c\n \\end{pmatrix}%\n\\end{equation} \n\\end{example}\n\nFrom the articles of Professor William Baylis and from 
the example above, one can see that the paravectors calculus is firmly rooted in physics and therefore it should be looked at it more closely from the mathematical point of view, so that this mathematics could be later applied in practice.\n\n\\section{Properties of the operator \\texorpdfstring{$\\partial$}{partial}}\n\n\\begin{theorem}\nIf paravector function $A(X)$ is analytic and additive, then\n\\begin{equation}\n\\partial [A_{1}(X)+A_{2}(X)]=\\partial A_{1}(X)+\\partial A_{2}(X)\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\n\nIt is due to the fact that the derivative, gradient, divergence and curl keep the additivity of function.\n\\end{proof}\n\n\\begin{theorem}\nLet the scalar function $\\rho(x)$ and the paravector function $A(X)$ be analytic and defined on the set $C^{1+3}$, then\n\\begin{equation} \\label{con1}\n\\partial [\\rho(X)A(X)] = [\\partial \\rho(X)] A(X) + \\rho(X) [\\partial A(X)]\n\\end{equation}\n\\end{theorem}\n\\begin{myproof}\n\\[\\partial [\\rho(X)A(X)]=\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\rho(X) \\varphi(X) \\\\ \n\\rho(X) \\pmb{\\Phi}(X)\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\frac{\\partial (\\rho \\varphi)}{\\partial t}+ \\nabla(\\rho \\pmb{\\Phi }) \\\\ \n\\frac{\\partial(\\rho \\pmb{\\Phi })}{\\partial t}+\\nabla(\\rho \\varphi)+i\\nabla \\times(\\rho \\pmb{\\Phi })\n\\end{bmatrix}=\n\\]\n\\[\n=\\begin{bmatrix}\n\\frac{\\partial \\rho}{\\partial t}\\varphi+\\rho \\frac{\\partial \\varphi}{\\partial t}\n+ \\pmb{\\Phi }\\nabla\\rho +\\rho\\nabla\\pmb{\\Phi } \\\\ \n\\frac{\\partial\\rho}{\\partial t} \\pmb{\\Phi }\n+\\rho \\frac{\\partial\\pmb{\\Phi }}{\\partial t}\n+ \\varphi\\nabla\\rho +\\rho \\nabla\\varphi\n+i\\rho (\\nabla \\times\\pmb{\\Phi }) + i(\\nabla\\rho)\\times\\pmb{\\Phi}\n\\end{bmatrix}=\n\\begin{bmatrix}\n\\frac{\\partial \\rho}{\\partial t}\\varphi\n+ \\pmb{\\Phi }\\nabla\\rho \\\\ \n\\frac{\\partial\\rho}{\\partial t} \\pmb{\\Phi }\n+ \\varphi\\nabla\\rho\n+ i(\\nabla\\rho)\\times\\pmb{\\Phi}\n\\end{bmatrix}+\n\\begin{bmatrix}\n\\rho \\frac{\\partial \\varphi}{\\partial t}\n+\\rho\\nabla\\pmb{\\Phi } \\\\ \n\\rho \\frac{\\partial\\pmb{\\Phi }}{\\partial t}\n+\\rho \\nabla\\varphi\n+i\\rho (\\nabla \\times\\pmb{\\Phi })\n\\end{bmatrix}\n=\\]\n\\[=\n\\begin{bmatrix}\n\\frac{\\partial \\rho }{\\partial t} \\\\ \n\\nabla\\rho\n\\end{bmatrix}\n\\begin{bmatrix}\n\\varphi \\\\ \n\\pmb{\\Phi}\n\\end{bmatrix} +\n\\rho \\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\varphi \\\\ \n\\pmb{\\Phi}\n\\end{bmatrix}\n\\]\n\\end{myproof}\n\nThe operator $\\partial^{-}$ holds similar property.\n\n\\textbf{Note:} Despite some similarities between the operator $\\partial$ and the derivative of function of one variable, the properties of differential operator are not as extensive, for example:\n\n\\begin{itemize}\n\\item The formula \\eqref{con1} is not true for the product of two paravector functions.\n\\item If on the left-hand side of the equation \\eqref{con1} we reorder the scalar and paravector function, we can not do it on the right side.\n\\end{itemize}\n\nBelow, four identities will be shown by means of which one can show how to change the equation containing the spatio-temporal differential operator by the paravector transformation. The proofs are not complicated but are somewhat tedious, so we bring forth in detail only the first and third identities. 
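\n\nBefore turning to the proofs, note that the component formula (3), on which all of these identities rest, is easy to verify symbolically. A minimal sketch is given below; it assumes, purely as an illustration and not as part of the proofs, the standard representation of a paravector $\\begin{bmatrix} \\alpha \\\\ \\pmb{\\beta} \\end{bmatrix}$ by the complex $2\\times 2$ matrix $\\alpha\\sigma_{0}+\\pmb{\\beta}\\pmb{\\sigma}$, where $\\sigma_{0}$ is the identity and $\\sigma_{x},\\sigma_{y},\\sigma_{z}$ are the Pauli matrices.\n\\begin{verbatim}\nimport sympy as sp\n\nt, x, y, z = sp.symbols('t x y z', real=True)\nI2, Z = sp.eye(2), sp.zeros(2, 2)\nsig = [sp.Matrix([[0, 1], [1, 0]]),\n       sp.Matrix([[0, -sp.I], [sp.I, 0]]),\n       sp.Matrix([[1, 0], [0, -1]])]      # Pauli matrices\n\nphi = sp.Function('phi')(t, x, y, z)      # arbitrary paravector function A = [phi, Phi]\nPhi = [sp.Function('Phi_' + c)(t, x, y, z) for c in 'xyz']\nA = phi*I2 + sum((f*s for f, s in zip(Phi, sig)), Z)\n\n# left side: the operator [d/dt, nabla] acting on A in matrix form\ndA = A.diff(t) + sum((s*A.diff(v) for s, v in zip(sig, (x, y, z))), Z)\n\n# right side: the component formula of equation (3)\ndiv  = sum(f.diff(v) for f, v in zip(Phi, (x, y, z)))\ngrad = [phi.diff(v) for v in (x, y, z)]\ncurl = [Phi[2].diff(y) - Phi[1].diff(z),\n        Phi[0].diff(z) - Phi[2].diff(x),\n        Phi[1].diff(x) - Phi[0].diff(y)]\nscal = phi.diff(t) + div\nvec  = [Phi[i].diff(t) + grad[i] + sp.I*curl[i] for i in range(3)]\nrhs  = scal*I2 + sum((f*s for f, s in zip(vec, sig)), Z)\n\nprint((dA - rhs).applyfunc(sp.simplify))  # prints the zero matrix\n\\end{verbatim}\nThe same representation reproduces the paravector products $\\Gamma X$ and $X\\Gamma$ used below, so the identities in the theorems that follow can be checked in the same way.\n\n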
I will take the reader a shortcut while proving the second one and will leave fourth one for self-proving.\n\n\\begin{theorem}\\label{th:1}\n\nSuppose that $A(X)$ is a paravector analytic function defined on the set $C^{1+3}$ and let the non-singular paravector $\\Gamma$ determine the automorphism in the set $C^{1+3}$ so that $X^{\\prime} = \\Gamma X$, then the following identities are true:\n\n\\begin{enumerate}\n\\item \n$\\partial A(X) =\\partial ^{\\prime }\\Gamma A( \\Gamma^{-1}X^{\\prime })$\\label{eq:th1.1}\n\\item\n$\\partial ^{-}A\\left( X\\right) =\\Gamma ^{-}\\partial ^{\\prime -}A\\left(\\Gamma ^{-1}X^{\\prime }\\right)$\\label{eq:th1.2}\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{myproof}\n\nLet's expand the equation $X^{\\prime }=\\Gamma X:$\n\n\\begin{flushleft}\n\\qquad $t^{\\prime }=\\alpha t+x\\beta _{x}+y\\beta _{y}+z\\beta _{z}$\n\n\n\\qquad $x^{\\prime }=t\\beta _{x}+\\alpha x-iy\\beta _{z}+iz\\beta _{y}$\n\n\n\\qquad $y^{\\prime }=t\\beta _{y}+ix\\beta _{z}+\\alpha y-iz\\beta _{x}$\n\n\n\\qquad $z^{\\prime }=t\\beta _{z}-ix\\beta_{y}+iy\\beta _{x}+\\alpha z$\n\\end{flushleft}\n \n1. We transform the differential expression $\\partial A(X)$\n\n\\begin{equation} \\label{eq:1.1a}\n\\partial A(X)=\\partial A(\\Gamma ^{-1}\\Gamma X)=\\partial A(\\Gamma\n^{-1}X^{\\prime })=\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\varphi (\\Gamma ^{-1}X^{\\prime }) \\\\ \n\\pmb{\\Phi }(\\Gamma ^{-1}X^{\\prime })\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t}+ \\nabla \\pmb{\\Phi }^{\\prime} \\\\ \n\\frac{\\partial \\pmb{\\Phi }^{\\prime }}{\\partial t}+\\nabla \\varphi ^{\\prime}+i\\nabla \\times \\pmb{\\Phi }^{\\prime }\n\\end{bmatrix}\n\\end{equation}\n\nwhere the prime at the symbol of a function means that the argument is the phase $\\Gamma^{-1}X^{\\prime }$.\n\nUsing the formula for the derivative of the composite function we get:\n\n\\begin{equation}\\label{eq:1.1b}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t}=\\frac{\\partial \\varphi\n^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial t}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+\\frac{\\partial \\varphi^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial t}=\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\varphi^{\\prime }}{\\partial z^{\\prime }}\\beta _{z}=\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\pmb{\\beta }\\nabla ^{\\prime }\\varphi^{\\prime }\n\\end{equation}\n\n$\\begin{array}{ccccc}\n\\nabla \\Phi^{\\prime } & =\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial x} & +\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x} & +\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial x} & +\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial x}+ \\\\ \n &+\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial y} & +\\frac{\\partial 
\\Phi _{y}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y} & +\\frac{\\partial \\Phi_{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial y} & +\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial y}+ \\\\ \n &+\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial z} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial z} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial z}= \\\\\n \\\\\n &=\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\beta _{x} & +\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}\\alpha & +i\\beta _{z}\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial y^{\\prime }} & -i\\beta _{y}\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}+ \\\\ \n & +\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\beta _{y} & -i\\beta _{z}\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial x^{\\prime }} & +\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}\\alpha & +i\\beta_{x}\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}+ \\\\ \n & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\beta _{z} & +i\\beta _{y}\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }} & -i\\beta _{x}\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y^{\\prime }} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\alpha=\n\\end{array}\n$\n\\begin{equation} \\label{eq:1.1c}\n=\\pmb{\\beta }\\frac{\\partial \\pmb{\\Phi }^{\\prime }}{\\partial t^{\\prime}}+\\alpha \\nabla ^{\\prime } \\pmb{\\Phi }^{\\prime }-i\\pmb{\\beta }\\left(\\nabla ^{\\prime }\\times \\pmb{\\Phi }^{\\prime }\\right) =\\pmb{\\beta }\\frac{\\partial \\pmb{\\Phi }^{\\prime }}{\\partial t^{\\prime }}+\\alpha \\nabla^{\\prime } \\pmb{\\Phi }^{\\prime }+\\nabla ^{\\prime }i\\left( \\pmb{\\beta }\\times \\pmb{\\Phi }^{\\prime }\\right)\n\\end{equation}\n\\begin{flushleft}\n$\\nabla \\varphi ^{\\prime }=\n\\begin{bmatrix}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial x}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x}+\\frac{\\partial \\varphi\n^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial x}+\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial x} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial y}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y}+\\frac{\\partial \\varphi\n^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial y}+\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial y} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial z}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z}+\\frac{\\partial \\varphi\n^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial z}+\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial z^{\\prime 
}}\\frac{\\partial\nz^{\\prime }}{\\partial z}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\beta _{x}+\\frac{%\n\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}\\alpha +i\\beta _{z}\\frac{%\n\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}-i\\beta _{y}\\frac{\\partial\n\\varphi ^{\\prime }}{\\partial z^{\\prime }} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\beta _{y}-i\\beta\n_{z}\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}+\\frac{\\partial\n\\varphi ^{\\prime }}{\\partial y^{\\prime }}\\alpha +i\\beta _{x}\\frac{\\partial\n\\varphi ^{\\prime }}{\\partial z^{\\prime }} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\beta _{z}+i\\beta\n_{y}\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}-i\\beta _{x}%\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial\nz^{\\prime }}\\alpha%\n\\end{bmatrix}%\n=$\n\\end{flushleft}\n\\begin{equation}\\label{eq:1.1d}\n=\\pmb{\\beta }\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}+\\alpha \\nabla ^{\\prime }\\varphi ^{\\prime }+i\\left( \\nabla ^{\\prime }\\varphi^{\\prime }\\right) \\times \\pmb{\\beta }=\\pmb{\\beta }\\frac{\\partial\n\\varphi ^{\\prime }}{\\partial t^{\\prime }}+\\alpha \\nabla ^{\\prime }\\varphi\n^{\\prime }+i\\nabla ^{\\prime }\\times \\left( \\pmb{\\beta }\\varphi ^{\\prime\n}\\right)\n\\end{equation}\n\\[\n\\frac{\\partial \\pmb{\\Phi} ^{\\prime }}{\\partial t}+i\\nabla \\times \\pmb{\\Phi} %\n^{\\prime }=\\frac{\\partial \\pmb{\\Phi} ^{\\prime }}{\\partial t}+i%\n\\begin{bmatrix}\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial z} \\\\ \n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial x} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y}%\n\\end{bmatrix}%\n=\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad\\qquad \\qquad \\qquad\n\\]\n\n$=\n\\begin{bmatrix}\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+%\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial t} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+%\n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial t} \\\\ \n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+%\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial t}%\n\\end{bmatrix}+$\n\n$+i\\begin{bmatrix}\n\\frac{\\partial 
\\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial y}+\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y}+\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial y}+%\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nt^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z}-%\n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial\ny^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nz^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial z} \\\\ \n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial z}+\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z}+\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial z}+%\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nt^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x}-%\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial\ny^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nz^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial x} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial x}+\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x}+\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial x}+%\n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nt^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y}-%\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial\ny^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nz^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial y}%\n\\end{bmatrix}%\n=$\n\n\n\\noindent $=\\begin{bmatrix}\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{%\n\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial\n\\Phi _{x}^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial z^{\\prime }}\\beta _{z} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{%\n\\partial \\Phi _{y}^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial\n\\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial z^{\\prime }}\\beta _{z} \\\\ \n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{%\n\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial\n\\Phi _{z}^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\Phi\n_{z}^{\\prime 
}}{\\partial z^{\\prime }}\\beta _{z}%\n\\end{bmatrix}\n+i\n\\begin{bmatrix}\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\beta _{y}-i\\beta\n_{z}\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }}+\\frac{\\partial\n\\Phi _{z}^{\\prime }}{\\partial y^{\\prime }}\\alpha +i\\beta _{x}\\frac{\\partial\n\\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}-\\frac{\\partial \\Phi _{y}^{\\prime }%\n}{\\partial t^{\\prime }}\\beta _{z}-i\\beta _{y}\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial x^{\\prime }}+i\\beta _{x}\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial y^{\\prime }}-\\frac{\\partial \\Phi _{y}^{\\prime }}{%\n\\partial z^{\\prime }}\\alpha \\\\ \n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\beta _{z}+i\\beta\n_{y}\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}-i\\beta _{x}%\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial y^{\\prime }}+\\frac{\\partial\n\\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\alpha -\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial t^{\\prime }}\\beta _{x}-\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial x^{\\prime }}\\alpha -i\\beta _{z}\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial y^{\\prime }}+i\\beta _{y}\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial z^{\\prime }} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\beta _{x}+\\frac{%\n\\partial \\Phi _{y}^{\\prime }}{\\partial x^{\\prime }}\\alpha +i\\beta _{z}\\frac{%\n\\partial \\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}-i\\beta _{y}\\frac{%\n\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}-\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial t^{\\prime }}\\beta _{y}+i\\beta _{z}\\frac{\\partial\n\\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}-\\frac{\\partial \\Phi _{x}^{\\prime }%\n}{\\partial y^{\\prime }}\\alpha -i\\beta _{x}\\frac{\\partial \\Phi _{x}^{\\prime }%\n}{\\partial z^{\\prime }}%\n\\end{bmatrix}%\n=$\n\n\\begin{equation}\\label{eq:1.1e}\n=\\alpha \\frac{\\partial \\pmb{\\Phi} ^{\\prime }}{\\partial t^{\\prime }}+i\\alpha\n\\nabla ^{\\prime }\\times \\pmb{\\Phi }^{\\prime }-i\\frac{\\partial \\pmb{\n\\Phi}^{\\prime }}{\\partial t^{\\prime }}\\times \\pmb{\\beta }+(\\nabla\n^{\\prime }\\pmb{\\beta \\Phi }^{\\prime })+\\nabla ^{\\prime }\\times (\\pmb{\n\\Phi}^{\\prime }\\times \\pmb{\\beta })\n\\end{equation}\n\nSubstituting partial results \\eqref{eq:1.1b} - \\eqref{eq:1.1e} into the equation \\eqref{eq:1.1a} we receive\n\n\\[\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\varphi (X) \\\\ \n\\pmb{\\Phi }(X)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\\frac{\\partial \\left( \\alpha \\varphi ^{\\prime }+\\pmb{\\beta \\pmb{\\Phi} }^{\\prime\n}\\right) }{\\partial t^{\\prime }}+\\nabla ^{\\prime }\\left( \\pmb{\\beta }%\n\\varphi ^{\\prime }+\\alpha \\pmb{\\Phi }^{\\prime }+i\\pmb{\\beta }\\times \n\\pmb{\\Phi }^{\\prime }\\right) \\\\ \n\\frac{\\partial }{\\partial t^{\\prime }}\\left( \\pmb{\\beta }\\varphi ^{\\prime\n}+\\alpha \\pmb{\\Phi }^{\\prime }+i\\pmb{\\beta }\\times \\pmb{\\Phi }%\n^{\\prime }\\right) +\\nabla ^{\\prime }\\left( \\alpha \\varphi ^{\\prime }+\\pmb{\n\\beta \\Phi }^{\\prime }\\right) +i\\nabla ^{\\prime }\\times \\left( \\pmb{\\beta \n}\\varphi ^{\\prime }+\\alpha \\pmb{\\Phi }^{\\prime }+i\\pmb{\\beta }\\times \n\\pmb{\\Phi }^{\\prime }\\right)%\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t^{\\prime}} \\\\ \n\\nabla^{\\prime}%\n\\end{bmatrix}%\n(\\begin{bmatrix}\n\\alpha \\\\ 
\n\\pmb{\\beta }%\n\\end{bmatrix}%\n\\begin{bmatrix}\n\\varphi ^{\\prime } \\\\ \n\\pmb{\\Phi }^{\\prime }%\n\\end{bmatrix}),\n\\]\n\nwhich completes the proof of 1st identity.\n\n2. To prove the truth of the identity $\\partial ^{-}A\\left( X\\right) =\\Gamma ^{-}\\partial ^{\\prime -}A\\left(\\Gamma ^{-1}X^{\\prime }\\right)$, we must use the formulas \\eqref {eq:1.1b} - \\eqref {eq:1.1d}, and instead of \\eqref{eq:1.1e} we must prove that:\n\n\\[\n\\frac{\\partial \\Phi ^{\\prime }}{\\partial t}-i\\nabla \\times \\pmb{\\Phi }^{\\prime }=\\alpha \\frac{\\partial \\pmb{\\Phi} ^{\\prime }}{\\partial t^{\\prime }}-i\\alpha \\nabla ^{\\prime }\\times \\pmb{\\Phi }^{\\prime }-i \\pmb{\\beta}\n\\times \\frac{\\partial \\pmb{\\Phi }^{\\prime }}{\\partial t^{\\prime }}\n+\\pmb{\\beta} (\\nabla ^{\\prime }\\pmb{\\Phi }^{\\prime })+\\pmb{\\beta}\n\\times (\\nabla ^{\\prime }\\times \\pmb{\\Phi }^{\\prime })\n\\]\n\\end{myproof}\n\n\\begin{theorem}\\label{th:2}\n\nSuppose that $A(X)$ is a paravector analytic function defined on the set $C^{1+3}$ and let the non-singular paravector $\\Gamma$ determine the automorphism in the set $C^{1+3}$ so that $X^{\\prime} =X \\Gamma $, then the following identities are true:\n\n\\begin{enumerate}\n\\item \n$\\partial A\\left( X\\right) =\\Gamma \\partial ^{\\prime }A\\left( X^{\\prime}\\Gamma ^{-1}\\right)$\\label{eq:th2.1}\n\\item\n$\\partial ^{-}A\\left( X\\right) =\\partial ^{\\prime -}\\Gamma ^{-}A\\left(X^{\\prime }\\Gamma ^{-1}\\right)$\\label{eq:th2.2}\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{myproof}\n\nLet's expand the equation $X^{\\prime }= X\\Gamma:$\n\n\\begin{flushleft}\n\\qquad $t^{\\prime }=\\alpha t+x\\beta _{x}+y\\beta _{y}+z\\beta _{z}$\n\n\n\\qquad $x^{\\prime }=t\\beta _{x}+\\alpha x+iy\\beta _{z}-iz\\beta _{y}$\n\n\n\\qquad $y^{\\prime }=t\\beta _{y}-ix\\beta _{z}+\\alpha y+iz\\beta _{x}$\n\n\n\\qquad $z^{\\prime }=t\\beta _{z}+ix\\beta y-iy\\beta _{x}+\\alpha z$\n\\end{flushleft}\n\n1. 
We transform the differential expression $\\partial A(X)$\n\n\\begin{equation} \\label{eq:2.1a}\n\\partial A(X)=\\partial A(X\\Gamma \\Gamma^{-1} )=\\partial A(X^{\\prime }\\Gamma^{-1})=\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\varphi (X^{\\prime }\\Gamma^{-1}) \\\\ \n\\pmb{\\Phi }(X^{\\prime }\\Gamma^{-1})\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t}+ \\nabla \\pmb{\\Phi }^{\\prime} \\\\ \n\\frac{\\partial \\pmb{\\Phi }^{\\prime }}{\\partial t}+\\nabla \\varphi ^{\\prime}+i\\nabla \\times \\pmb{\\Phi }^{\\prime }\n\\end{bmatrix}\n\\end{equation}\n\nwhere the prime at the symbol of a function means that the argument is the phase $X^{\\prime }\\Gamma^{-1}$\n\nUsing the formula for the derivative of the composite function we get:\n\n\\begin{equation}\\label{eq:2.1b}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t}=\\frac{\\partial \\varphi\n^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial t}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+\\frac{\\partial \\varphi^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial t}=\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\varphi^{\\prime }}{\\partial z^{\\prime }}\\beta _{z}=\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\pmb{\\beta }\\nabla ^{\\prime }\\varphi^{\\prime }\n\\end{equation}\n\n$\\begin{array}{ccccc}\n\\nabla \\Phi^{\\prime } & =\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial x} & +\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x} & +\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial x} & +\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial x}+ \\\\ \n &+\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial y} & +\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y} & +\\frac{\\partial \\Phi_{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial y} & +\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial y}+ \\\\ \n &+\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial z} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial z} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial z}= \\\\\n \\\\\n &=\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\beta _{x} & +\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}\\alpha & -i\\beta _{z}\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial y^{\\prime }} & +i\\beta _{y}\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}+ \\\\ \n & +\\frac{\\partial \\Phi _{y}^{\\prime 
}}{\\partial t^{\\prime }}\\beta _{y} & +i\\beta _{z}\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial x^{\\prime }} & +\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}\\alpha & -i\\beta_{x}\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}+ \\\\ \n & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\beta _{z} & -i\\beta _{y}\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }} & +i\\beta _{x}\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y^{\\prime }} & +\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\alpha=\n\\end{array}\n$\n\\begin{equation} \\label{eq:2.1c}\n=\\pmb{\\beta }\\frac{\\partial \\pmb{\\Phi }^{\\prime }}{\\partial t^{\\prime}}+\\alpha \\nabla ^{\\prime } \\pmb{\\Phi }^{\\prime }+i\\pmb{\\beta }\\left(\\nabla ^{\\prime }\\times \\pmb{\\Phi }^{\\prime }\\right)\n\\end{equation}\n\\begin{flushleft}\n$\\nabla \\varphi ^{\\prime }=\n\\begin{bmatrix}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial x}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x}+\\frac{\\partial \\varphi\n^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial x}+\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial x} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial y}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y}+\\frac{\\partial \\varphi\n^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial y}+\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial y} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial z}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z}+\\frac{\\partial \\varphi\n^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial z}+\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial z}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\beta _{x}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}\\alpha -i\\beta _{z}\\frac{\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}+i\\beta _{y}\\frac{\\partial\\varphi ^{\\prime }}{\\partial z^{\\prime }} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\beta _{y}+i\\beta_{z}\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}+\\frac{\\partial\\varphi ^{\\prime }}{\\partial y^{\\prime }}\\alpha -i\\beta _{x}\\frac{\\partial\\varphi ^{\\prime }}{\\partial z^{\\prime }} \\\\ \n\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}\\beta _{z}-i\\beta_{y}\\frac{\\partial \\varphi ^{\\prime }}{\\partial x^{\\prime }}+i\\beta _{x}\\frac{\\partial \\varphi ^{\\prime }}{\\partial y^{\\prime }}+\\frac{\\partial \\varphi ^{\\prime }}{\\partial z^{\\prime }}\\alpha\n\\end{bmatrix}\n=$\n\\end{flushleft}\n\\begin{equation}\\label{eq:2.1d}\n=\\pmb{\\beta }\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}+\\alpha \\nabla ^{\\prime }\\varphi ^{\\prime }+i\\pmb{\\beta } \\times \\left( \\nabla ^{\\prime }\\varphi^{\\prime }\\right)\n\\end{equation}\n\\[\n\\frac{\\partial \\pmb{\\Phi} ^{\\prime }}{\\partial t}+i\\nabla 
\\times \\pmb{\\Phi} %\n^{\\prime }=\\frac{\\partial \\pmb{\\Phi} ^{\\prime }}{\\partial t}+i%\n\\begin{bmatrix}\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial z} \\\\ \n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial x} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y}%\n\\end{bmatrix}%\n=\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad\\qquad \\qquad \\qquad\n\\]\n\n$=\n\\begin{bmatrix}\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+%\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial t} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+%\n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial t} \\\\ \n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial t}+\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial t}+%\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial t}%\n\\end{bmatrix}+$\n\n$+i\\begin{bmatrix}\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial y}+\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y}+\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial y}+%\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nt^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z}-%\n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial\ny^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nz^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial z} \\\\ \n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial z}+\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial z}+\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial z}+%\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial z}-\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nt^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x}-%\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial 
y^{\\prime }}\\frac{\\partial\ny^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial\nz^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial x} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\frac{\\partial\nt^{\\prime }}{\\partial x}+\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial\nx^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial x}+\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial y^{\\prime }}{\\partial x}+%\n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}\\frac{\\partial\nz^{\\prime }}{\\partial x}-\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nt^{\\prime }}\\frac{\\partial t^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial x^{\\prime }}\\frac{\\partial x^{\\prime }}{\\partial y}-%\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial y^{\\prime }}\\frac{\\partial\ny^{\\prime }}{\\partial y}-\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial\nz^{\\prime }}\\frac{\\partial z^{\\prime }}{\\partial y}%\n\\end{bmatrix}%\n=$\n\n\n\\noindent $=\\begin{bmatrix}\n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{%\n\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial\n\\Phi _{x}^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\Phi\n_{x}^{\\prime }}{\\partial z^{\\prime }}\\beta _{z} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{%\n\\partial \\Phi _{y}^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial\n\\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial z^{\\prime }}\\beta _{z} \\\\ \n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\alpha +\\frac{%\n\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }}\\beta _{x}+\\frac{\\partial\n\\Phi _{z}^{\\prime }}{\\partial y^{\\prime }}\\beta _{y}+\\frac{\\partial \\Phi\n_{z}^{\\prime }}{\\partial z^{\\prime }}\\beta _{z}%\n\\end{bmatrix}\n+i\n\\begin{bmatrix}\n\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial t^{\\prime }}\\beta _{y}+i\\beta_{z}\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial x^{\\prime }}+\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial y^{\\prime }}\\alpha -i\\beta _{x}\\frac{\\partial \\Phi _{z}^{\\prime }}{\\partial z^{\\prime }}-\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\beta _{z}+i\\beta _{y}\\frac{\\partial \\Phi\n_{y}^{\\prime }}{\\partial x^{\\prime }}-i\\beta _{x}\\frac{\\partial \\Phi_{y}^{\\prime }}{\\partial y^{\\prime }}-\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial z^{\\prime }}\\alpha \\\\ \n\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial t^{\\prime }}\\beta _{z}-i\\beta_{y}\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}+i\\beta _{x} \\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial y^{\\prime }}+\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}\\alpha -\\frac{\\partial \\Phi_{z}^{\\prime }}{\\partial t^{\\prime }}\\beta _{x}-\\frac{\\partial \\Phi_{z}^{\\prime }}{\\partial x^{\\prime }}\\alpha +i\\beta _{z}\\frac{\\partial \\Phi_{z}^{\\prime }}{\\partial y^{\\prime }}-i\\beta _{y}\\frac{\\partial \\Phi_{z}^{\\prime }}{\\partial z^{\\prime }} \\\\ \n\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial t^{\\prime }}\\beta _{x}+\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial x^{\\prime }}\\alpha -i\\beta _{z}\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial y^{\\prime }}+i\\beta _{y}\\frac{\\partial \\Phi _{y}^{\\prime }}{\\partial 
z^{\\prime }}-\\frac{\\partial \\Phi_{x}^{\\prime }}{\\partial t^{\\prime }}\\beta _{y}-i\\beta _{z}\\frac{\\partial\\Phi _{x}^{\\prime }}{\\partial x^{\\prime }}-\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial y^{\\prime }}\\alpha +i\\beta _{x}\\frac{\\partial \\Phi _{x}^{\\prime }}{\\partial z^{\\prime }}%\n\\end{bmatrix}%\n=$\n\n\\begin{equation}\\label{eq:2.1e}\n=\\alpha \\frac{\\partial \\pmb{\\Phi} ^{\\prime }}{\\partial t^{\\prime }}+i\\alpha \\nabla ^{\\prime }\\times \\pmb{\\Phi }^{\\prime } +i\\pmb{\\beta } \\times \\frac{\\partial \\pmb{\\Phi}^{\\prime }}{\\partial t^{\\prime }}+\\pmb{\\beta}(\\nabla^{\\prime }\\pmb{\\Phi }^{\\prime })-\\pmb{\\beta } \\times (\\nabla ^{\\prime }\\times \\pmb{\\Phi}^{\\prime })\n\\end{equation}\n\nSubstituting all the above partial results \\eqref{eq:2.1b} - \\eqref{eq:2.1e} into the equation \\eqref{eq:2.1a} we obtain\n\n\\[\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\varphi (X) \\\\ \n\\pmb{\\Phi }(X)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\\alpha \\left( \\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}+\\nabla^{\\prime} \\pmb{\\Phi }^{\\prime }\\right) + \\pmb{\\beta } \\left(\\nabla ^{\\prime }\n\\varphi ^{\\prime }+\\frac{\\partial \\Phi^{\\prime }}{\\partial t^{\\prime }} +i\\nabla^{\\prime} \\times \n\\pmb{\\Phi }^{\\prime }\\right) \\\\ \n\\alpha \\left(\\nabla ^{\\prime }\n\\varphi ^{\\prime }+\\frac{\\partial \\Phi^{\\prime }}{\\partial t^{\\prime }} +i\\nabla^{\\prime} \\times \n\\pmb{\\Phi }^{\\prime }\\right)\n\n+\\pmb{\\beta} (\\frac{\\partial \\varphi ^{\\prime }}{\\partial t^{\\prime }}+\\nabla^{\\prime} \\pmb{\\Phi }^{\\prime })\n\n+i\\pmb{\\beta} \\times \\left(\\frac{\\partial \\Phi^{\\prime }}{\\partial t^{\\prime }} + \\nabla ^{\\prime }\n\\varphi ^{\\prime } +i\\nabla^{\\prime} \\times \n\\pmb{\\Phi }^{\\prime }\\right)\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\alpha \\\\ \n\\pmb{\\beta }%\n\\end{bmatrix}%\n\\begin{bmatrix}\n\\frac{\\partial }{\\partial t^{\\prime}} \\\\ \n\\nabla^{\\prime}%\n\\end{bmatrix}%\n\\begin{bmatrix}\n\\varphi ^{\\prime } \\\\ \n\\pmb{\\Phi }^{\\prime }%\n\\end{bmatrix},\n\\]\n\nwhich completes the proof of the 1st identity of Theorem \\ref{th:2}.\n\n2. 
Proof of the identity $\\partial ^{-}A\\left( X\\right) =\\partial ^{\\prime -}\\Gamma ^{-}A\\left(X^{\\prime }\\Gamma ^{-1}\\right)$ is left to the reader.\n\n\\end{myproof}\n\n\\newpage\n\\begin{theorem} \\label{th:1.3}\n\nLet $A(X)$ be an analytic paravector function defined on the set $C^{1+3}$, then for each paravector $\\Gamma$ it is true that:\n\n\\begin{enumerate}\n\\item $\\partial \\left[ A\\left(X\\right) \\Gamma \\right] = \\left[ \\partial A\\left( X\\right) \\right] \\Gamma$\n\\item $\\partial ^{-}\\left[A\\left( X\\right) \\Gamma \\right] = \\left[ \\partial ^{-}A\\left( X\\right) \\right] \\Gamma$\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}1.\n\n$\\partial \\left[ A\\left(X\\right) \\Gamma \\right] =\n\\begin{bmatrix}\n\\frac{\\partial}{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n(\\begin{bmatrix}\n\\varphi (X) \\\\ \n\\pmb{\\Phi}(X)\n\\end{bmatrix}\n\\begin{bmatrix}\n\\alpha \\\\ \n\\pmb{\\beta}\n\\end{bmatrix})=\n\\begin{bmatrix}\n\\frac{\\partial}{\\partial t} \\\\ \n\\nabla\n\\end{bmatrix}\n\\begin{bmatrix}\n\\alpha \\varphi (X) + \\pmb{\\Phi}(X)\\pmb{\\beta}\\\\ \n\\alpha\\pmb{\\Phi}(X)+\\pmb{\\beta}\\varphi (X) +i\\pmb{\\Phi}(X)\\times\\pmb{\\beta}\n\\end{bmatrix}=$\n\n\\noindent $=\\begin{bmatrix}\n\\alpha \\frac{\\partial}{\\partial t}\\varphi(X)\n+\\pmb{\\beta}\\frac{\\partial}{\\partial t}\\pmb{\\Phi}(X)\n+\\alpha\\nabla\\pmb{\\Phi}(X)\n+\\pmb{\\beta}\\nabla\\varphi(X)\n+i\\nabla[\\pmb{\\Phi}(X)\\times\\pmb{\\beta}] \\\\\n\\alpha\\frac{\\partial}{\\partial t}\\pmb{\\Phi}(X)\n+\\pmb{\\beta}\\frac{\\partial}{\\partial t}\\varphi (X)\n+\\alpha\\nabla\\varphi(X)\n+i[\\frac{\\partial}{\\partial t}\\pmb{\\Phi}(X)\\times\\pmb{\\beta}\n+\\alpha\\nabla\\times\\pmb{\\Phi}(X)+\n\\nabla\\varphi(X)\\times\\pmb\\beta]\n+\\nabla[\\pmb{\\Phi}(X)\\pmb{\\beta}]\n-\\nabla\\times[\\pmb{\\Phi}(X)\\times\\pmb{\\beta}]\n\\end{bmatrix},\n$\n\nhence, using the properties of the nabla operator, we obtain\n\n$=\\begin{bmatrix}\n[\\frac{\\partial}{\\partial t}\\varphi(X)\n+\\nabla\\pmb{\\Phi}(X)]\\alpha\n+[\\frac{\\partial}{\\partial t}\\pmb{\\Phi}(X)\n+\\nabla\\varphi(X)\n+i\\nabla\\times\\pmb{\\Phi}(X)]\\pmb{\\beta} \\\\\n[\\frac{\\partial}{\\partial t}\\pmb{\\Phi}(X)\n+\\nabla\\varphi(X)\n+i\\nabla\\times\\pmb{\\Phi}(X)]\\alpha\n+[\\frac{\\partial}{\\partial t}\\varphi(X)\n+\\nabla\\pmb{\\Phi}(X)]\\pmb{\\beta}\n+i[\\frac{\\partial}{\\partial t}\\pmb{\\Phi}(X)\n+\\nabla\\varphi(X)\n+i\\nabla\\times\\pmb{\\Phi}(X)]\\times\\pmb{\\beta}\n\\end{bmatrix}=\n$\n\n$=\\left[ \\partial A\\left(X\\right) \\right] \\Gamma$\n\n2. The proof is similar to that of identity 1.\n\\end{proof}\n\nFormulas for the transformation of the field under the rotation of the reference system $X^{\\prime} = \\Gamma X \\Gamma^{-1}$ follow from the above results, where the rotation means a more general transformation than the Euclidean rotation \\cite{Radomanski}.\n\n\\begin{example}Rotation of the observer in the field \\label{ex2}\n\nLet $\\Lambda$ be an orthogonal paravector (i.e. det $\\lambda=1$) and let the fields $A(X)$ and $B(X)$ satisfy the relationship $\\partial A(X)=B(X)$, where $X \\in C^{1+3}$. 
The observer rotates:\n\\[\\partial A(\\Lambda ^{-}X^{\\prime}\\Lambda) = B(\\Lambda ^{-}X^{\\prime}\\Lambda)\\qquad, \\qquad\\qquad \\text{where}\\qquad X^{\\prime} = \\Lambda X\\Lambda ^{-}\\]\n\nIn the turned frame the above equation has the form (by Theorems \\ref{th:1} and \\ref{th:2})\n\\[\\Lambda^{-}\\partial^{\\prime} \\Lambda A(\\Lambda ^{-}X^{\\prime}\\Lambda) = B(\\Lambda ^{-}X^{\\prime}\\Lambda)\\]\n\nMultiplying this equation on the left-side by $\\Lambda$ and right-side by $\\Lambda^{-}$, on the basis of the Theorem \\ref{th:1.3}, we obtain an equation of the field after rotation.\n\\[\\partial^{\\prime} [\\Lambda A(\\Lambda ^{-}X^{\\prime}\\Lambda)\\Lambda^{-}] = \\Lambda[B(\\Lambda ^{-}X^{\\prime}\\Lambda)]\\Lambda^{-},\\]\n\nSimilarly for the reversed operator (4-gradient). The conclusion is obvious:\n\\textbf{If the observer turns to one side, the field around it will turn by the same amount in the opposite direction}.\n\\end{example}\n\n\\section{Invariance of wave equation under orthogonal transformation.}\n\nUsing the theorems \\ref{th:1} - \\ref{th:1.3}, we can easily demonstrate the invariance of the wave equation $\\square A(X)= B(X)$ under the transformation represented by the orthogonal paravector. We can do this in four ways:\n\n\\begin{enumerate}\n\\item $\\square A(X)=\\partial ^{-}\\partial A(X)=\\partial ^{\\prime -}\\Lambda\n^{-}\\Lambda \\partial ^{\\prime }A(X^{\\prime }\\Lambda ^{-})=\\square ^{\\prime\n}A(X^{\\prime }\\Lambda ^{-})=B(X^{\\prime }\\Lambda ^{-}), \\qquad$ or\n\\begin{equation}\\label{eq:3.1}\n\\square A(X)=B(X)\\qquad \\iff \\qquad \\square ^{\\prime\n}A(X^{\\prime }\\Lambda ^{-})=B(X^{\\prime }\\Lambda ^{-})\n\\end{equation}\n\\item $\\square A(X)=\\partial \\partial ^{-}A(X)=\\Lambda \\partial ^{\\prime}\\partial ^{\\prime -}[\\Lambda ^{-}A(X^{\\prime }\\Lambda ^{-})]=\\Lambda\n\\square^{\\prime }[\\Lambda ^{-}A(X^{\\prime }\\Lambda ^{-})]=B(X^{\\prime\n}\\Lambda ^{-}), \\qquad $ hence\n\\begin{equation}\\label{eq:3.2}\n\\square A(X)=B(X)\\qquad \\iff \\qquad \\square^{\\prime }[\\Lambda\n^{-}A(X^{\\prime }\\Lambda ^{-})]=\\Lambda ^{-}B(X^{\\prime }\\Lambda ^{-})\n\\end{equation}\n\\item $\\square A(X)=\\partial ^{-}\\partial A(X)=\\Lambda ^{-}\\partial ^{\\prime -}\\partial ^{\\prime } \\left[ \\Lambda A\\left( \\Lambda ^{-}X^{\\prime }\\right) \\right]=\\Lambda\n^{-}\\square^{\\prime }[\\Lambda A(\\Lambda ^{-}X^{\\prime })]=B(\\Lambda\n^{-}X^{\\prime }),\\qquad $ hence\n\\begin{equation}\\label{eq:3.3}\n\\square A(X)=B(X)\\qquad \\iff \\qquad \\square^{\\prime\n}[\\Lambda A(\\Lambda ^{-}X^{\\prime })]=\\Lambda B(\\Lambda ^{-}X^{\\prime })\n\\end{equation}\n\\item $\\square A(X)=\\partial \\partial ^{-}A(X)=\\partial ^{\\prime }\\Lambda\\Lambda ^{-}\\partial ^{\\prime -}A(\\Lambda ^{-}X^{\\prime })=\\square ^{\\prime\n}A(\\Lambda ^{-}X^{\\prime })=B(\\Lambda ^{-}X^{\\prime }),\\qquad \\qquad $ or\n\\begin{equation}\\label{eq:3.4}\n\\qquad \\square A(X)=B(X)\\qquad \\iff \\qquad \\square ^{\\prime\n}A(\\Lambda ^{-}X^{\\prime })=B(\\Lambda ^{-}X^{\\prime })\n\\end{equation}\n\\end{enumerate}\n\n\nFrom the above relationships it can be seen that further discussion can be carried out in different directions. 
In points 1 and 4 both the equation and the value of the function are invariant.\n\[\nA^{\prime }=A\qquad \text{and} \qquad B^{\prime }=B\n\]\n\nIn points 2 and 3 the form of the wave equation is invariant, while the values of the field functions undergo changes:\n\begin{itemize}\n\item contravariant\n\[\nA^{\prime}=\Lambda^{-} A \qquad \text{and} \qquad B^{\prime }=\Lambda^{-} B\n\]\n\item or covariant\n\[\nA^{\prime }= \Lambda A \qquad \text{and} \qquad B^{\prime }=\Lambda B\n\]\n\end{itemize}\n\nHere we encounter an interesting problem that goes beyond the scope of this article and enters the field of physics, so its analysis will be presented in another paper.\n\nAs for the rotation of the observer in a field satisfying the wave equation, we get the same result as in example \ref{ex2}.\n\[\square A(X)=B(X)\qquad \iff \qquad \square ^{\prime\n} \Lambda [A(\Lambda ^{-}X^{\prime }\Lambda)]\Lambda ^{-}=\n\Lambda [B(\Lambda ^{-}X^{\prime }\Lambda)]\Lambda ^{-} \]\n\n\section*{Summary}\n\nThe wave equation is one of the most important relationships in physics. It underlies the theory of the electromagnetic field and relativistic quantum mechanics, and is applicable in all fields of physics. The considerations presented above and the simplicity of the calculations show that the paravector calculus fits into this equation naturally and so, it is also natural for relativistic physics. I will be trying to convince the reader in further publications that this is so and that using paravectors we can take a different look at seemingly well-known phenomena. \n\n\bibliographystyle{plain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nChaotic systems are characterised by their high sensitivity to even infinitesimal changes in their initial conditions. As a result these systems, by their intrinsic nature, defy attempts at control or synchronization. Nevertheless, many techniques have been proposed by a large group of researchers to control and synchronize chaotic systems. Control of chaos refers to a process wherein a judiciously chosen perturbation is applied to a chaotic system, in order to realize a desirable behaviour \cite{gonzalez04}. Since the seminal contribution by Ott, Grebogi and Yorke in 1990 \cite{ogy90} the concept of control of chaos has been modified and developed by many researchers \cite{rajasekar93} and applied to a large number of physical systems \cite{boc00}. Synchronization of chaos, on the other hand, can be described as a process wherein two or more chaotic systems (either equivalent or non-equivalent) adjust a given property of their motion to a common behaviour, due to coupling or forcing. This may range from complete agreement of trajectories to locking of phases \cite{ml96,ml03,boc02}.\n\nIn this paper we describe the general principles of control of chaos using a state feedback mechanism and synchronization of chaotic systems using observer based adaptive techniques. Further, using these, we report the control of chaos in a single Memristive Murali-Lakshmanan-Chua (MLC) oscillator and the synchronization of chaos in a system of two coupled Memristive MLC oscillators. The paper is organized as follows. In Sec. 2 we give a brief introduction to the Memristive MLC circuit, its circuit realization, its circuit equations and their normalized forms, and the description of the circuit as a non-smooth system. In Sec. 3 the various algorithms for the control of chaos are outlined. In Sec. 
4 the control of chaos in the Memristive MLC circuit using the state feedback control technique is dealt with. Similarly in Secs. 5 and 6 the concept of synchronization of chaos and its realization are explained, while in Sec. 7 the observer based adaptive synchronization of chaos in a system of two coupled Memristive MLC oscillators is described. Finally in Sec. 8 the results and further discussions are given. \n\section{Memristive Murali-Lakshmanan-Chua Circuit} \nThe memristive MLC circuit was introduced by the present authors \cite{icha13} by replacing Chua's diode in the classical Murali-Lakshmanan-Chua circuit with an active flux controlled memristor as its non-linear element. The analog model of the memristor used in this work was designed by \cite{icha11}. The schematic of the memristive MLC circuit is shown in Fig. \ref{fig:mmlc_cir}, while the actual analog realization based on the prototype model for the memristor is shown in Fig. \ref{fig:prototype}.\n\n\begin{figure}[!t]\n\centering{\n\includegraphics[width=.8\columnwidth]{.\/Fig1}}\n\caption{The memristive MLC circuit}\n\label{fig:mmlc_cir}\n\end{figure}\n\begin{figure*}\n\t\centering{\n\includegraphics[width=1.6\columnwidth]{.\/Fig2}}\t\t\caption[Multisim Prototype of the Memristive MLC Circuit] {A Multisim Prototype Model of a memristive MLC circuit. The memristor part is shown by the dashed outline. The parameter values of the circuit are fixed as $ L = 21 mH$, $R = 900 \Omega$, $C_1 = 10.5nF$. The frequency of the external sinusoidal forcing is fixed as $\nu_{ext} = 8.288 kHz$ and the amplitude is fixed as $F = 770 mV_{pp} $ (peak-to-peak voltage). }\n\t\t\label{fig:prototype}\n\end{figure*}\n\nApplying Kirchhoff's laws, the circuit equations can be written as a set of autonomous ordinary differential equations (ODEs) for the flux $\phi(t)$, voltage $v(t)$, current $i(t)$ and the time $p$ in the extended coordinate system as\n\begin{eqnarray}\n\frac{d\phi}{dt} & = & v, \nonumber \\\nC\frac{dv}{dt} & = & i - W(\phi)v, \nonumber \\ \nL \frac{di}{dt} & = & -v -Ri +F\sin( \Omega p),\nonumber \\\n\frac{dp}{dt} & = & 1.\n\t\label{eqn:mlc_cir}\n\end{eqnarray}\nHere $W(\phi)$ is the memductance of the memristor and is as defined in \cite{itoh08},\n\begin{equation}\nW(\phi) = \frac{dq(\phi)}{d\phi} = \left\{\n\t\t\t\t\t\begin{array}{ll}\n\t\t\t\t\tG_{a_1}, ~~~ | \phi | > 1 \\\n\t\t\t\t\tG_{a_2}, ~~~ | \phi | \leq 1,\t\n\t\t\t\t\t\end{array}\n\t\t\t\t\right.\n\t\label{eqn:W}\n\end{equation}\nwhere $G_{a_1}$ and $G_{a_2}$ are the slopes of the outer and inner segments of the characteristic curve of the memristor respectively. We can rewrite Eqs. 
(\\ref{eqn:mlc_cir}) in the normalized form as\n\\begin{eqnarray}\n\\dot{x}_1 & = & x_2, \\nonumber \\\\\n\\dot{x}_2 & = & x_3-W(x_1)x_2, \\nonumber \n\\end{eqnarray}\n\\begin{eqnarray}\n\\dot{x}_3 & = & -\\beta(x_2+x_3) + f \\sin(\\omega x_4),\\nonumber \\\\\n\\dot{x}_4 & = & 1.\n\\label{eqn:mlc_nor}\n\\end{eqnarray}\nHere dot stands for differentiation with respect to the normalized time $\\tau$ (see below) and $W(x_1)$ is the normalized value of the memductance of the memristor, given as\n\\begin{equation}\nW(x_1) = \\frac{dq(x_1)}{dx_1} = \\left\\{\n\t\t\\begin{array}{ll}\n\t\ta_1, ~~~ | x_1 | > 1 \\\\\n\t\ta_2, ~~~ | x_1 | \\leq 1\n\t\t\\end{array}\n\t\\right.\n\t\\label{eqn:W_nor}\n\\end{equation}\nwhere $ a_1 = G_{a_1}\/G $ and $a_2 = G_{a_2}\/G$ are the normalized values of $G_{a_1} $ and $G_{a_2} $ mentioned earlier and are negative. The rescaling parameters used for the normalization are\n\\begin{eqnarray}\nx_1 = \\frac{G\\phi}{C},x_2 = v, x_3 = \\frac{i}{G}, x_4 = \\frac{Gp}{C}, G = \\frac{1}{R}, \\\\ \\nonumber \n\\beta = \\frac{C}{LG^2}, \\omega = \\frac{\\Omega C}{G} = \\frac{2\\pi \\nu C}{G},\\tau = \\frac{Gt}{C},f = F\\beta.\n\t\\label{eqn:rescale} \n\\end{eqnarray}\nIn our earlier work on this memristive MLC circuit, see \\cite{icha13}, we reported that the addition of the memristor as the nonlinear element converts the system into a piecewise-smooth continuous flow having two discontinuous boundaries, admitting \\emph{grazing bifurcations}, a type of discontinuity induced bifurcation (DIB). These grazing bifurcations were identified as the cause for the occurrence of hyperchaos, hyperchaotic beats and transient hyperchaos in this memristive MLC system. Further we have reported \\emph{discontinuity induced Hopf and Neimark-Sacker bifurcations} in the same circuit, refer \\cite{icha17}. Thus the memristive MLC circuit shows rich dynamics by virtue of it being a non-smooth system. Hence we give a brief description of the memristive MLC circuit in the frame work of non-smooth bifurcation theory.\n\n\\subsection{Memristive MLC Circuit as a Non-smooth System}\n\nThe memristive MLC circuit is a piecewise-smooth continuous system by virtue of the discontinuous nature of its nonlinearity, namely the memristor. An active flux controlled memristor is known to switch state with respect to time from a more conductive ON state to a less conductive OFF state and vice versa at some fixed values of flux across it, see \\cite{icha17}. In the normalized coordinates this switching is found to occur at $x_1 = +1$ and at $x_1 = -1$. These switching states of the memristor give rise to two discontinuity boundaries or switching manifolds, $\\Sigma_{1,2}$ and $\\Sigma_{2,3}$ which are symmetric about the origin and are defined by the zero sets of the smooth functions $H_i(\\mathbf{x},\\mu) = C^T\\mathbf{x}$, where $C^T = [1,0,0,0]$ and $\\mathbf{x}= [x_1,x_2,x_3,x_4]$, for $i=1,2$. Hence $H_1(\\mathbf{x}, \\mu) = (x_1-x_1^\\ast)$, $x_1^\\ast = -1$ and $H_2(\\mathbf{x}, \\mu) = (x_1-x_1^\\ast)$, $x_1^\\ast = +1$, respectively. Consequently the phase space $\\mathcal{D}$ can be divided into three subspaces $S_1$, $S_2$ and $S_3$ due to the presence of the two switching manifolds. 
The memristive MLC circuit can now be rewritten as a set of smooth ODEs \n\\begin{footnotesize}\n\\begin{equation}\n\\dot{x}(t) = \n\t\\left\\lbrace \n\t\t\t\\begin{array}{l}\n\t\tF_{1,3}(\\mathbf{x},\\mu), \\, H_1(\\mathbf{x}, \\mu)< 0 \\,\\& \\, H_2(\\mathbf{x},\\mu)> 0, \\,\\mathbf{x} \\in S_{1,3}\\\\ \n\t\t\\\\\n\t\tF_2(\\mathbf{x},\\mu), \\quad H_1(\\mathbf{x}, \\mu) >0 \\, \\& \\, H_2(\\mathbf{x}, \\mu) < 0,\\mathbf{x} \\in S_2 \n\t\t\t\\end{array}\n\t\t\\right.\n\t\\label{eqn:smooth_odes}\n\\end{equation}\n\\end{footnotesize}\nwhere $\\mu$ denotes the parameter dependence of the vector fields and the scalar functions. The vector fields $F_i$'s are\n\\begin{equation}\n F_i(\\mathbf{x},\\mu) = \\left (\t\\begin{array}{c}\n\t\t\t\t\tx_2\t\t\t\\\\\n\t\t\t\t-a_i x_2+x_3\t\\\\\n\t\t\t\t-\\beta x_2 -\\beta x_3 +f sin(\\omega x_4)\t\\\\\t\n\t\t\t\t1 \n\t\t\t\t\\end{array}\n\t\t\\right ), \\mathrm{i\\;=\\;1,2,3}\n\\label{eqn:vect_field}\n\\end{equation}\nwhere we have $a_1 = a_3$. \n\nThe discontinuity boundaries $\\Sigma_{1,2}$ and $\\Sigma_{2,3}$ are not uniformly discontinuous. This means that the degree of smoothness of the system in some domain $\\mathcal{D}$ of the boundary is not the same for all points $x \\in \\Sigma_{ij}\\cap \\mathcal{D}$. This causes the memristive MLC circuit to behave as a non-smooth system having a degree of smoothness of either {\\textit{one}} or {\\textit{two}}. In such a case it will behave either as a \\emph{Filippov system} or as a \\emph{piecewise-smooth continuous flow} respectively, refer Appendix A in \\cite{icha17}.\n\n\\subsection{Equilibrium Points and their Stability}\nIn the absence of the driving force, that is if $f=0$, the memristive MLC circuit can be considered as a three-dimensional autonomous system with vector fields given by\n\\begin{equation}\n F_i(\\mathbf{x},\\mu) = \\left (\t\n \t\t\t\t\t\\begin{array}{c}\n\t\t\t\t\t\t\tx_2\t\t\t \t\\\\\n\t\t\t\t\t\t-a_i x_2+x_3 \t\t\\\\\n\t\t\t\t\t\t-\\beta x_2 -\\beta x_3\t\\\\\t\n\t\t\t\t\t\\end{array}\n\t\t\\right ), \\mathrm{i\\;=\\;1,2,3}.\n\\label{eqn:vect_field3d}\n\\end{equation}\nThis three dimensional autonomous system has a trivial equilibrium point $E_0$, two {\\textit{admissible equilibrium}} points $E_{\\pm}$ and two {\\textit{boundary equilibrium}} points $E_{B\\pm}$.\\\\The trivial equilibrium point is given as\n\\begin{equation}\nE_0 = \\{(x_1,x_2,x_3)|x_1=x_2=x_3=0\\}\n\\end{equation}\nThe two admissible equilibria $E_{\\pm}$ are\n\\begin{equation}\nE_{\\pm} = \\{(x_1,x_2,x_3)|x_2=x_3=0,x_1^*= \\textrm{constant and not equal to } \\pm 1 \\}\n\\end{equation}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{\\columnwidth}{!}\n\t\t{\\includegraphics{.\/Fig3}}\t\t\n\t\\caption[Equilibrium Points]{Figure showing the equilibrium points $E_{\\pm}$ in the subspaces $S_1$ and $S_3$ for the parameter value above $\\beta_c = 0.8250$. The initial conditions are $x_1 = 0.0$, $x_2 = 0.01$, $x_3 = 0.01$ for the fixed point $E_{+}$ in the subspace $S_3$ and \n \t$x_1 = 0.0$, $x_2 = -0.01$, $x_3 = -0.01$ for the fixed point $E_{-}$ in the subspace $S_1$.}\n \t\t\\label{fig:mmlc_limitcycle}\n\\end{figure} \nThe two boundary equilibrium points are \n\\begin{equation}\nE_{B\\pm} = \\{(x_1,x_2,x_3)|x_2=x_3=0,\\hat{x}_1= \\pm 1 \\}\n\\end{equation}\nThe multiplicity of equilibrium points arises because of the non-smooth nature of the nonlinear function, namely $W(x_1)$ given in Eq. (\\ref{eqn:W_nor}). 
To find the stability of these equilibrium states, we construct the Jacobian matrices $N_i, \\, i = 1,2,3$ and evaluate their eigenvalues at these points,\n\\begin{equation}\nN_i = \\left ( \\begin{array}{ccc}\n\t\t\t\t0 & 1 \t& 0 \t\t \\\\\n\t\t\t\t0 &-a_i & 1 \t\t \\\\\n\t\t\t\t0 &-\\beta & -\\beta \t\\\\\t\t\t\t \n\t\t\t\t\\end{array}\n\t\t\\right), \\text{i\\;=\\;1,2,3}.\n\t\\label{eqn:Jac}\n\\end{equation}\nThe characteristic equation associated with the system $N_i$ in these equilibrium states is\n\\begin{equation}\n\\lambda^3 + p_2\\lambda^2 + p_1 \\lambda = 0,\n\t\\label{eqn:chac}\n\\end{equation}\nwhere {\\it{$\\lambda$}}'s are the eigenvalues that characterize the equilibrium states and $\\it{p_i}$'s are the coefficients, given as $p_1 = \\beta(1+a_i)$ and $p_2 = (\\beta + a_i)$. The eigenvalues are \n\\begin{equation}\n\\lambda_1 = 0, \\,\\lambda_{2,3} = \\frac{-(\\beta+a_i)}{2} \\pm \\frac{\\sqrt{(\\beta - a_i)^2-4 \\beta}}{2}.\n\t\\label{eqn:eigen}\n\\end{equation} \nwhere $ i = 1,2,3$. Depending on the eigenvalues, the nature of the equilibrium states differ. \n\\begin{enumerate}\n\n\\item\nWhen $(\\beta - a_i)^2 = 4 \\beta$, the equilibrium state will be a stable\/unstable star depending on whether $(\\beta+a_i)$ is positive or not.\n\n\\item\nWhen $(\\beta - a_i)^2 > 4 \\beta$, the equilibrium state will be a saddle.\n\n\\item\nWhen $(\\beta - a_i)^2 < 4 \\beta$, the equilibrium state will be a stable\/unstable focus.\n\\end{enumerate}\nFor the third case, the circuit admits self oscillations with natural frequency varying in the range \\\\$\\sqrt{\\left [(\\beta - a_1)^2-4 \\beta \\right ]} \/ 2 < \\omega_o < \\sqrt{\\left [(\\beta - a_2)^2-4 \\beta \\right ]}\/2$. \\\\It is at this range of frequency that the memristor switching also occurs. \n\nAs the vector fields $F_1(\\mathbf{x},\\mu)$ and $F_3(\\mathbf{x},\\mu)$ are symmetric about the origin, that is $F_1(\\mathbf{x},\\mu) = F_3(-\\mathbf{x},\\mu)$, the admissible equilibria $E_{\\pm}$ are also placed symmetric about the origin in the subspaces $S_1$ and $S_3$. These are shown in Fig. \\ref{fig:mmlc_limitcycle} for a certain choice of parametric values.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{\\columnwidth}{!}\n\t\t{\\includegraphics{.\/Fig4}}\n\t\t\\caption[Dynamics of the Memristive MLC Circuit before application of state feedback control] {The chaotic dynamics of the memristive MLC oscillator arising due to sliding bifurcations occurring in the circuit, with a(i) the time plot of the $x_1$ variable and a(ii) phase portrait in the $(x_1-x_2)$ plane. The step size is assumed as $h = \\frac{1}{1000}(2\\pi\/\\omega)$, with $\\omega = 0.65$ and $f = 0.20$.}\n\t\\label{fig:control1}\n\\end{figure}\n\n\\subsection{Sliding Bifurcations and Chaos}\nLet us assume the bifurcation points at the two switching manifolds to be \n\\begin{equation}\nE_{B\\pm} = \\{(x_1,x_2,x_3)|x_2 \\neq 0, x_3=0,\\hat{x}_1= \\pm 1. \\}\n\t\\label{eqn:slid_points}\n\\end{equation}\nThen we find from Eqs. (\\ref{eqn:vect_field3d}) that $F_2(x,\\mu) \\neq F_1(x,\\mu)$ at $x \\in \\Sigma_{1,2}$ and $F_2(x,\\mu) \\neq F_3(x,\\mu)$ at $x \\in \\Sigma_{2,3}$. Under such conditions the system is said to have a degree of smoothness of order \\textit{one}, that is $r=1.$ Hence the memristive MLC circuit can be considered to behave as a \\textit{Filippov} system or a \\textit{Filippov} flow capable of exhibiting \\textit{sliding bifurcations}. 
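\n\nAs an aside, the piecewise-smooth flow defined by Eqs. (\ref{eqn:smooth_odes}) and (\ref{eqn:vect_field}) is straightforward to explore numerically. The following minimal sketch (in Python, using NumPy and SciPy; it is an illustrative addition and not part of the original analysis) integrates the normalized equations, Eqs. (\ref{eqn:mlc_nor}), by simply evaluating the memductance $W(x_1)$ of Eq. (\ref{eqn:W_nor}) at every step, with the parameter values taken as representative placeholders (the values actually used in this section are quoted in the next paragraph).\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\na1, a2, beta = -0.55, -1.02, 0.95   # slopes of W(x1) and beta (placeholders)\nf, omega = 0.20, 0.65               # forcing amplitude and frequency\n\ndef W(x1):\n    # piecewise-constant normalized memductance\n    return a1 if abs(x1) > 1.0 else a2\n\ndef mlc(t, x):\n    # normalized memristive MLC flow; x = (x1, x2, x3), taking x4 = t\n    x1, x2, x3 = x\n    return [x2, x3 - W(x1) * x2, -beta * (x2 + x3) + f * np.sin(omega * t)]\n\nsol = solve_ivp(mlc, (0.0, 1000.0), [0.0, 0.1, 0.1], max_step=0.01)\n# sol.y[0] and sol.y[1] give the (x1, x2) projection of the phase portraits\n\end{verbatim}\nA proper treatment of the sliding motion on the switching manifolds requires event detection at $x_1 = \pm 1$; the sketch above is only a starting point for reproducing phase portraits such as those in Fig. \ref{fig:control1}.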
\n\nSliding bifurcations are Discontinuity Induced Bifurcations (DIBs)\narising due to the interactions of the limit cycles of a Filippov system with the boundary of a sliding region. Four types of sliding bifurcations have been identified by Feigin \cite{feigin94} and were subsequently analysed by di Bernardo, Kowalczyk and others \cite{dib_ijbc01,kowal01,dib02,dib_ijbc03} for a general $n$-dimensional system. These four sliding bifurcations are \textit{crossing-sliding} bifurcations, \textit{grazing-sliding} bifurcations, \textit{switching-sliding} bifurcations and \textit{adding-sliding} bifurcations.\n\nThe memristive MLC circuit is found to admit three types of sliding bifurcations, namely \textit{crossing-sliding}, \textit{grazing-sliding} and \textit{switching-sliding} bifurcations \cite{icha16}. Let the parameters be chosen as $a_{1,3} = -0.55$, $a_2 = -1.02$, $\beta= 0.95$, $f = 0.20$ and $\omega = 0.65$. For this choice of parameters, the memristive MLC circuit undergoes repeated sliding bifurcations at the discontinuity boundaries $\Sigma_{1,2}$ and $\Sigma_{2,3}$, giving rise to a chaotic state as shown in Fig. \ref{fig:control1}. Here a(i) shows the time plot of the $x_1$ variable and a(ii) shows the phase portrait in the $(x_1-x_2)$ plane. In the subsequent section we will show that this chaotic behaviour exhibited by the memristive MLC circuit can be controlled using the \textit{state feedback control} technique.\n\n\section{Control of Chaos}\nControl of chaos refers to the purposeful manipulation of the chaotic behaviour of a nonlinear system towards some desired or preferred dynamical state. As chaotic behaviour is often considered undesirable or harmful, a need was felt for suppressing chaos or at least reducing it as much as possible. For example, control of chaos is necessary for avoiding fatal voltage collapses in power grids, eliminating cardiac arrhythmias, and guiding cellular neural networks to reach certain desirable pattern formations. The earliest attempts at controlling chaos were focussed on eliminating the response of a chaotic system, which resulted in the destruction of the dynamics of the system itself. However, it was Ott, Grebogi and Yorke \cite{ogy90} who showed that it would be beneficial to force the chaotic system to one of the infinitely many unstable periodic orbits (UPOs) which are embedded in the chaotic attractor of the system, without totally destroying the dynamics of the system. Following this, many workers have developed newer techniques to control chaos and have applied them successfully to a variety of systems to realize different desired behaviours. Generally all the known methods of chaos control can be grouped into two categories:\nfeedback control methods and non-feedback control algorithms.\n\n\subsection{Feedback Controlling Algorithms}\nFeedback control algorithms essentially make use of the intrinsic properties of chaotic systems to stabilize orbits which already exist in the system. The Adaptive Control Algorithm (ACA) developed by \cite{huberman90} and applied by \cite{sinha90,rajasekar93}, the Ott-Grebogi-Yorke (OGY) Algorithm developed by \cite{ogy90} and applied by \cite{ding93,tel91,lai_tel93,ml96,singer91}, and the Control Engineering Approach developed by \cite{chen_dong92,chenG93} are all examples of these algorithms. \n
\n\n\\subsection{Non-feedback Methods}\nThe non-feedback methods refer to the use of some small perturbing external force, or noise, or a constant bias potential, or a weak modulating signal to some system parameter. The parametric control of chaos was demonstrated by, \\cite{lima90,liu94,rajasekar93,wisen86a,wisen86b,braiman91,ml96}. The control of chaos by applying a constant weak biasing voltage was demonstrated by \\cite{ml96} in the case of MLC oscillator and Duffing oscillator and by addition of noise was demonstrated in a BVP oscillator by \\cite{rajasekar93}. The other control algorithms are Entrainment or Open Loop Control method developed and applied by \\cite{jack90a,jack90b,jack91a,jack91b,jack91c}, the Oscillation Absorber Method developed by \\cite{kapit92,kapit93}. \n\n\\subsection{Control of Chaos using State Feedback}\nAs the feedback and nonfeed back methods of chaos control have many drawbacks, \na continuous time feedback control using small perturbations was proposed numerically by \\cite{pyragas92}. This control scheme was provided a rigorous basis by Chen and Dong and was demonstrated successfully in time continuous systems like Duffing Oscillator \\cite{chen_dong93}, Chua's Circuit \\cite{chenG93,hwang97} and so on. However the drawbacks of these methods are\n\\begin{enumerate}\n\\item\nthey can be applied only when the dynamical equations for the system are known \\textit{a priori}\n\\item\nthe internal state variables are assumed to be available to construct control forces\n\\item\nthe controller structure, in some cases, is extremely complicated\n\\item\nlimited information may be available and the only measurable quantity of the system is its output\n\\item\nFurther, for nonsmooth systems, these conventional techniques, in particular addition of a second weak periodic excitation or the addition of a constant bias do not seem to enforce control of chaos\n\\end{enumerate}\nUnder such conditions a parallel state reconstruction by means of either a Kalman filter or Luenberger type observer must be used to implement control laws. For this purpose, the state space representation of the system and their transformations to either controller canonical form or observer canonaical form are derived, refer Appendix A. \n\nThe state space representation refers to the modelling of dynamical systems in terms of state vectors and matrices so that the analyses of such systems are made conveniently in the time domain, using the basic knowledge of matrix algebra \\cite{kailath80,Ioannou96}. This representation is a well researched area in the field of control engineering \\cite{ctchen70,vidya75,kailath80}. The main advantage of this approach is that it presents a uniform platform for representing time varying as well as time invariant systems, linear as well as piece-wise nonlinear systems. Further the vector fields for all the sub-spaces of the system take on a uniform form. Some of the methods of control that fall in this category are adaptive control \\cite{yassen03}, observer based control \\cite{liao98b}, sliding mode control \\cite{yau04}, impulsive control \\cite{sun03} and backstepping control \\cite{yassen07}, linear switched state feedback method \\cite{zhangj09}, twin-T notch filter method \\cite{Iu11}, and backstepping method \\cite{song11}.\n\nIn this section we outline the feedback method for control of chaos in a general dynamical system using state space models. 
Let us consider the observer canonical representation of a single input single output (SISO) nonlinear chaotic system in state space, refer Eq. (\ref{eqn:AppenA_ss2}) in Appendix A, \n\begin{eqnarray}\n\dot{x}_o & = & \tilde{A}x_o + B^Tu, \nonumber \\\ny & = & C_o^Tx_o +D^Tu,\n\t\label{eqn:ss_openloop}\n\end{eqnarray}\nwhere $\tilde{A} \in \mathcal{R}^{n\times n}$, $B \in \mathcal{R}^{n \times r}$, $C \in \mathcal{R}^{n \times l}$ and $D \in \mathcal{R}^{l \times r}$ are matrices, $u$ is an $r$-dimensional vector denoting the control input and $y$ is an $l$-dimensional vector representing the output of the system. This system is often called the \textit{open-loop system} in control theory. \n\nBeing in the observer canonical form, the system matrix $\tilde{A}$ is given as \n\begin{equation}\n\tilde{A} = \left( \begin{array}{ccccc}\n-\tilde{a}_1 & 1 & 0 & \cdots & 0 \\\n-\tilde{a}_2 & 0 & 1 & \cdots & 0 \\\n\vdots & \vdots & \vdots & \cdots & \vdots \\\n-\tilde{a}_{n-1} & 0 & 0 & \cdots & 1 \\\n-\tilde{a}_n & 0 & 0 & \cdots & 0\n\end{array} \right), \n\label{eqn:ss_Ao}\n\end{equation}\nwhere the $\tilde{a}_i$'s are the coefficients of the characteristic polynomial $\{ |sI - \tilde{A}| \}$. \n\nIf we want the states of the system to approach zero starting from any arbitrary state, then we have to design a control input which would regulate the states of the system to the desired equilibrium conditions. To achieve this we assume a \textit{state feedback control law}\n\begin{equation}\nu = -\tilde{K} x_o,\n\t\label{eqn:feedback_cont}\n\end{equation}\nwhere $\tilde{K}$ is called the \textit{control gain vector} and can be designed using the pole placement technique familiar in control theory. \n\nSubstituting this control law, Eq. (\ref{eqn:feedback_cont}), in the state space representation of the open-loop system, Eq. (\ref{eqn:ss_openloop}), the system now becomes a \textit{closed-loop system} represented as\n\begin{eqnarray}\n\dot{x}_o & = & (\tilde{A}-B^T\tilde{K})x_o, \nonumber \\\ny & = & C_o^Tx_o +D^Tu, \n\t\label{eqn:ss_closedloop}\n\end{eqnarray}\nwhere $B^T$ is the transpose of the vector $B$ and the closed-loop system matrix is given as\n\begin{equation}\n(\tilde{A}-B^T\tilde{K}) = \left( \begin{array}{lcccc}\n-(\tilde{a}_1-\tilde{k}_n) & 1 & 0 & \cdots & 0 \\\n-(\tilde{a}_2-\tilde{k}_{n-1}) & 0 & 1 & \cdots & 0 \\\n\vdots & \vdots & \vdots & \cdots & \vdots \\\n-(\tilde{a}_{n-1}-\tilde{k}_2) & 0 & 0 & \cdots & 1 \\\n-(\tilde{a}_n-\tilde{k}_1) & 0 & 0 & \cdots & 0\n\end{array} \right). \n\label{eqn:ss_A-BK}\n\end{equation}\nIf the values of $\tilde{K}$ are so chosen that the eigenvalues of the matrix $(\tilde{A}-B^T\tilde{K})$ lie in the open left half of the complex plane, then the system can be controlled to a desired stable equilibrium state. The problem of chaos control thus reduces to determining a state feedback control gain vector $\tilde{K}$ such that the control law, Eq. (\ref{eqn:feedback_cont}), places the poles of the closed loop system, Eq. 
(\ref{eqn:ss_closedloop}), in the desired locations. An illustration of this concept is shown in the block diagram in Fig. \ref{fig:control_BD}.\n\begin{figure*}\n\t\centering\n\t\resizebox{\textwidth}{!}\n\t\t{\includegraphics{.\/Fig5}}\t\t\n\t\caption[Block Diagram of State Feedback Control] {Block diagram illustrating the concept of state feedback control.}\n\t\label{fig:control_BD}\n\end{figure*}\n\nA necessary and sufficient condition for successful pole placement is that the nonlinear system, that is, the pair of matrices $(\tilde{A},B)$, be controllable. \n\nLet the characteristic polynomial $ \{sI -(\tilde{A}-B^T\tilde{K})\}$ of the closed-loop system, Eq. (\ref{eqn:ss_closedloop}), be given as \n\begin{equation}\ns^n+(\tilde{a}_1-\tilde{k}_n)s^{n-1}+(\tilde{a}_2-\tilde{k}_{n-1})s^{n-2}+ \cdots +(\tilde{a}_n-\tilde{k}_1)=0.\n\t\t\label{eqn:charac_poly1}\n\end{equation}\nLet the characteristic equation of the desired control state of the system be\n\begin{eqnarray}\n(s-s_1)(s-s_2)(s-s_3)\cdots (s-s_n) & = &0, \nonumber \\\ns^n+\alpha_1s^{n-1}+\alpha_2s^{n-2}+\cdots \alpha_{n-1}s+\alpha_n& = &0,\n\t\label{eqn:charac_poly2}\n\end{eqnarray}\nwhere $s_i$, $i=1,2,\cdots, n$ are the desired poles to which the system should be guided and $\alpha_i$, $i=1,2,\cdots, n$ are the coefficients of the desired characteristic equation.\nBy comparing Eqs. (\ref{eqn:charac_poly1}) and (\ref{eqn:charac_poly2}) we get\nthe elements of the transformed control gain vector $\tilde{K}$ as\n\begin{equation}\n\tilde{k}_n = \alpha_1-\tilde{a}_1,\;\;\nonumber\n\tilde{k}_{n-1} = \alpha_2-\tilde{a}_2,\;\; \nonumber\n\tilde{k}_{n-2} = \alpha_3-\tilde{a}_3,\;\; \cdots \nonumber \n\tilde{k}_1 = \alpha_n-\tilde{a}_n.\n\t\label{eqn:K_values}\n\end{equation}\n\section{Control of Chaos in Memristive MLC Circuit}\n\nIn the earlier sections we have seen that the memristive MLC circuit is a piecewise-smooth dynamical system having two discontinuity boundaries causing the state space of the system to be split up into three sub-spaces. Consequently the memristive MLC circuit is represented by a set of smooth ODEs, refer Eqs. (\ref{eqn:smooth_odes}). Further we have seen that for the boundary equilibrium points given by Eqs. (\ref{eqn:slid_points}), the memristive MLC circuit becomes a \textit{Filippov} system. \n\nLinearising the vector fields about the equilibrium points defined by Eqs. (\ref{eqn:slid_points}), the observer canonical form of the state space representation of the memristive MLC oscillator as a SISO system, refer Eq. 
(\\ref{eqn:AppenA_ss2}) in Appendix A, can be given as\n\\begin{eqnarray}\n\\dot{x}_o(t) & = & \n\t\t\\begin{cases}\n\t\t\\tilde{A}_2 x_o +B^Tu & \t\\text{if $x \\in S_2$ }, \\\\\n\t\t\\tilde{A}_{1,3} x_o+B^Tu & \t\t\\text{if $x \\in S_{1,3}$ }, \\\\\n\t\t\\end{cases} \\nonumber \\\\\ny \t\t& = & C^Tx + D^Tu,\n\t\\label{eqn:mmlc_ss1}\n\\end{eqnarray}\nwhere the system matrices $\\tilde{A}_i$'s are calculated for the above chosen parameters as \n\\begin{equation}\n \\tilde{A}_2(x)\\;\\; = \\left (\t\\begin{array}{ccccccc}\n\t\t\t\t\\enspace 0.0700 &\t&\t&1.0\t& \t&\t&0.0\t\\\\\n\t\t\t\t\\enspace 0.0190\t& \t&\t&0.0 \t&\t&\t&1.0\t\\\\\n\t\t\t\t\\enspace 0.0000\t&\t&\t&0.0 \t&\t&\t&0.0 \t\\\\\t\n\t\t\t\t\\end{array}\n\t\t\\right ),\n\\label{eqn:mmlc_A2}\n\\end{equation}\nwhile \n\\begin{equation}\n \\tilde{A}_{1,3}(x) = \\left (\t\\begin{array}{ccccccc}\n\t\t\t\t-0.4000\t&\t&\t&1.0\t&\t&\t&0.0\t\t\t\\\\\n\t\t\t\t-0.4275\t&\t&\t&0.0 \t&\t&\t&1.0\t\t\t\\\\\n\t\t\t\t-0.0000\t&\t&\t&0.0 \t&\t&\t&0.0 \t\t\t\\\\\n\t\t\t\t\\end{array}\n\t\t\\right ).\n\\label{eqn:mmlc_A13}\n\\end{equation}\nFurther the vectors $B^T$, $C^T$ and $D^T$ are chosen as\n\\begin{equation}\n B^T = \\left (\t\\begin{array}{ccc}\n\t\t\t\t0\t&1\t\t&0\t\t\t\\\\\n\t\t\t\t\t\\end{array}\n\t\t\\right ),\n\\label{eqn:mmlc_B}\n\\end{equation}\n\\begin{equation}\n C^T = \\left (\t\\begin{array}{ccc}\n\t\t\t\t1\t&0\t\t&0\t\t\t\\\\\n\t\t\t\t\t\\end{array}\n\t\t\\right ),\n\\label{eqn:mmlc_C}\n\\end{equation}\n\\begin{equation}\n D^T = \\left (\t\\begin{array}{ccc}\n\t\t\t\t0\t&0\t\t&0\t\t\t\\\\\n\t\t\t\t\t\\end{array}\n\t\t\\right ).\n\\label{eqn:mmlc_D}\n\\end{equation}\nWe assume here that no disturbance is present in the system, that is, the vector $D^T$ is a null vector $D=0$.\n\n\\begin{figure*}[!t]\n\t\\centering\n\t\\resizebox{\\textwidth}{!}\n\t\t{\\includegraphics{.\/Fig6}}\n\t\\caption[Dynamics of the memristive MLC Circuit after application of state feedback control] {The periodic oscillations of the memristive MLC oscillator after the application of the state feedback control shown by a(i) \\& a(ii) the time plots and b(i) \\& b(ii) phase portraits in the $(x_1-x_2)$ plane. A change in the initial conditions form $(x_1 = -0.1,x_2 =-0.1,x_3 = -0.1)$ to $(x_1 = -0.2,x_2 =-0.2,x_3 = -0.2)$ results in the symmetric interchange of the time plots and attractors about the origin. The step size is assumed as $h = \\frac{1}{1000}(2\\pi\/\\omega)$, with $\\omega = 0.65$ and $f = 0.20$.} \n\t\\label{fig:control2}\n\\end{figure*}\nThe peculiarity of this observer canonical representation, Eqs. (\\ref{eqn:mmlc_ss1}), is that the transformations required become identical for all the three sub-regions of the phase space. 
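\n\nAs an illustration of how this switched linear model can be handled numerically, a minimal Python sketch that assembles the matrices of Eqs. (\\ref{eqn:mmlc_A2})--(\\ref{eqn:mmlc_D}) and evaluates the open-loop vector field is given below. The switching rule used here (a unit threshold on the third state variable) is only an assumed placeholder for the actual discontinuity boundaries of the circuit, and all variable names are our own illustrative choices.\n\\begin{verbatim}\nimport numpy as np\n\n# Linearised subsystem matrices in observer canonical form\n# (numerical values taken from the text).\nA2  = np.array([[0.0700, 1.0, 0.0],\n                [0.0190, 0.0, 1.0],\n                [0.0000, 0.0, 0.0]])\nA13 = np.array([[-0.4000, 1.0, 0.0],\n                [-0.4275, 0.0, 1.0],\n                [-0.0000, 0.0, 0.0]])\nB = np.array([0.0, 1.0, 0.0])   # input vector\nC = np.array([1.0, 0.0, 0.0])   # output vector\n\ndef in_region_S2(x, threshold=1.0):\n    # Assumed placeholder for the true switching boundaries of the circuit.\n    return abs(x[2]) <= threshold\n\ndef rhs(x, u):\n    # Right-hand side of the switched open-loop system dx/dt = A_i x + B u.\n    A = A2 if in_region_S2(x) else A13\n    return A @ x + B * u\n\n# Open-loop eigenvalues of the two linearised subsystems.\nprint(np.linalg.eigvals(A2))\nprint(np.linalg.eigvals(A13))\n\\end{verbatim}\nNote that the same input and output vectors $B$ and $C$ apply in every sub-region, so that only the system matrix switches between $\\tilde{A}_2$ and $\\tilde{A}_{1,3}$. 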
This is particularly helpful in studying nonsmooth bifurcations of piecewise-smooth systems \\cite{sontag98}.\n\nThe controllability matrix for the sub-spaces $S_{1,3}$, for the above mentioned parameters, is given as\n\\begin{equation}\nP_{c_{1,3}} = \\left( \\begin{array}{ccc}\n0.0 & 1.00 & 0.5500 \\\\\n1.0 & 0.55 & -0.6475 \\\\\n0.0 & -0.95 & 0.3800\n\\end{array} \\right).\n\\label{eqn:mmlc_CO2}\n\\end{equation}\nSimilarly, the controllability matrix for the sub-space $S_2$ is\n\\begin{equation}\nP_{c_2} = \\left( \\begin{array}{ccc}\n0.0 & 1.00 & 1.0200 \\\\\n1.0 & 1.02 & 0.0904 \\\\\n0.0 & -0.95 & -0.0665\n\\end{array} \\right).\n\\label{eqn:mmlc_CO13}\n\\end{equation}\nAs the controllability matrices in all the three sub-spaces have a full rank of $3$, we find that the matrices $(\\tilde{A}_i,B)$ form controllable pairs. Hence the linearised parts of the memristive MLC circuit are controllable.\nTo achieve state feedback control, we assume a \\textit{switched state feedback control law} \\cite{zhangj09},\n\\begin{eqnarray}\nu & = &\n\\begin{cases}\n-\\tilde{K}_2 x_o & \\text{if $x \\in S_2$ }, \\\\\n-\\tilde{K}_{1,3} x_o & \\text{if $x \\in S_{1,3}$ },\n\\end{cases}\n\\label{eqn:feedback_cont1}\n\\end{eqnarray}\nwhere the $\\tilde{K}_i$'s are the \\textit{control gain vectors} in the three sub-regions of the phase space and are found using the procedure outlined in the previous section as\n\\begin{equation}\n\\tilde{K}_2 = \\left( \\begin{array}{ccc}\n-0.2050 & 0.8290 & -1.2300\n\\end{array} \\right),\n\\label{eqn:mmlc_k2}\n\\end{equation}\nand\n\\begin{equation}\n\\tilde{K}_{1,3} = \\left( \\begin{array}{ccc}\n0.5040 & 1.4825 & 2.0000\n\\end{array} \\right).\n\\label{eqn:mmlc_k13}\n\\end{equation}\nThe \\textit{closed-loop system} for the memristive MLC circuit upon application of these gains is\n\\begin{eqnarray}\n\\dot{x}_o(t) & = &\n\\begin{cases}\n(\\tilde{A}_2-B^T\\tilde{K}_2)x_o & \\text{if $x \\in S_2$ }, \\\\\n(\\tilde{A}_{1,3}-B^T\\tilde{K}_{1,3}) x_o & \\text{if $x \\in S_{1,3}$ }, \\\\\n\\end{cases} \\nonumber \\\\\ny & = & C_o^Tx + D^Tu.\n\\label{eqn:mmlc_ss1k}\n\\end{eqnarray}\nAs the eigen values of the matrices $(\\tilde{A}_i-B^T\\tilde{K}_i)$, $i = 1,2,3$, are placed at the desired stable locations by this choice of the gain vectors, the dynamics of the controlled closed-loop system settles down to a non-chaotic state. The chaotic attractor of the system before the application of the state feedback control and the controlled periodic state after the control has been applied are shown in Fig. \\ref{fig:control2}.\n\nThe time series of the system, which is chaotic before the application of control, becomes periodic after the control is applied. This regulation of the chaotic time series to a periodic behaviour for the initial conditions $(x_1 = -0.1,x_2 =-0.1,x_3 = -0.1)$ is shown in Fig. \\ref{fig:control2}a(i), while the periodic attractor in the $(x_1-x_2)$ phase plane in the asymptotic limit is shown in Fig. \\ref{fig:control2}b(i). 
However, if the initial conditions are changed to $(x_1 = -0.2,x_2 =-0.2,x_3 = -0.2)$, we observe an inversion of the time series for the variable $x_1$ and of the periodic attractor in $(x_1-x_2)$ phase space, as shown in the corresponding Figs. \\ref{fig:control2}a(ii) and \\ref{fig:control2}b(ii).\n\nIt is pertinent to state here that Figs. \\ref{fig:mmlc_limitcycle} and \\ref{fig:control2} suggest that the memristive MLC system may possess multistability. This is because we see that in these two cases a mere change in the initial conditions forces the system to exhibit different dynamics. If the system were to possess multistability, then we strongly believe that by tweaking the control gain vectors $\\tilde{K}_2$ and $\\tilde{K}_{1,3}$, it can be directed to take on any of the desired multistable states.\n\n\\section{Synchronization of Chaos}\n\nThe feasibility of synchronization of chaotic systems and the conditions to be satisfied for the same were first demonstrated by Pecora and Carroll \\cite{caroll90}, who introduced the concept of \\textit{drive-response} systems. Here a chaotic system is considered as the \\textit{drive} system, and a part or subsystem of this drive system is considered as the \\textit{response}. Under the right conditions (namely, the conditional Lyapunov exponents (CLEs) of the error dynamics being negative), the signals of the response part will converge to those of the drive system as time elapses. Ever since this groundbreaking work, many researchers have proposed synchronization of chaos in different systems based on theoretical analysis and even experimental realizations. For example, this methodology has been successfully applied to synchronize chaos in Lorenz systems \\cite{caroll90, caroll93b, vaidya92}, R\\\"{o}ssler systems \\cite{caroll90}, hysteretic circuits \\cite{caroll91b}, Chua's circuits \\cite{kocarev92a}, driven Chua's circuits \\cite{kocarev92b}, Chua's and MLC circuits \\cite{murali95, murali97}, ADVP oscillators \\cite{kmurali93, kmurali94}, phase locked loops (PLL) \\cite{endo91, sousa91}, etc.\n\nFurther, the possibility of applying this approach to secure communication has been demonstrated. The idea of \\textit{chaotic masking and modulation} and \\textit{chaotic switching} for secure communication of information signals based on the Pecora and Carroll method of synchronization of chaos was demonstrated numerically by Cuomo and Oppenheim \\cite{cuomo93a, cuomo93b, cuomo93c, cuomo94} and experimentally by Kocarev \\cite{kocarev92b} using Chua's circuit as the chaos generator. Further, the applicability of chaotic synchronization to digital secure transmission was demonstrated by \\cite{cuomo93a} and experimentally by \\cite{parlitz92, kmurali94}. The possibility of synchronization of hyperchaotic systems and its applicability for communication purposes was proposed by \\cite{peng96}.\nAll these works make secure communication more practicable and improve its degree of security.\n\nMany alternative schemes of synchronization based on modifications of the drive-response concept have been proposed, such as the unidirectional coupling scheme \\cite{kmurali93,kmurali94}, function projective synchronization \\cite{main99}, hybrid function projective synchronization \\cite{chee03,xu01,xu02,grasmil09,grasmil07}, arbitrary hybrid function projective synchronization \\cite{huxu07b,luzhang08,huxu07a}, etc. 
The synchronization of two canonical Chua's circuits using resistive unidirectional coupling has been studied by \\cite{dvs05}, and the synchronization of two unidirectionally coupled SC-CNN based canonical Chua's circuits has been realised experimentally by \\cite{swathi14}. The synchronization and propagation of a low frequency signal in a network of unidirectionally coupled Chua's circuits driven by a bi-harmonic external excitation has been studied by \\cite{jothi13}. However, all these methods have drawbacks, such as the following:\n\n\\begin{enumerate}\n\\item\nthey do not give a systematic procedure for determining the response system and the drive signal. This means that most of the schemes are dependent on the drive system and cannot be generalized to an arbitrary drive system.\n\\item\nthe dynamics of the drive system should be free of any disturbances.\n\\item\nthe conditional Lyapunov exponents (CLEs) should be negative. This condition restricts the signal to be transmitted to a small perturbation of the state variables. As this requirement is not fulfilled by nonsmooth systems, such as the two coupled memristive MLC system considered here, synchronisation must necessarily be effected by other techniques.\n\\end{enumerate}\n\nThe concept of adaptive synchronization was applied by \\cite{wu96, bernado96, liao98} and observer based approaches by \\cite{morgul96, morgul97} to overcome these difficulties of the drive-response concept.\n\n\\section{Observer Based Adaptive Synchronization of\\\\ Chaos}\n\nLet us consider the state space representation of a single input single output (SISO) nonlinear system \\cite{Ioannou96}, defined as in Eq. (\\ref{eqn:AppenA_ss1}) in Appendix A,\n\\begin{eqnarray}\n\\dot{x} & = & \\tilde{A}x + B^Tu, \\nonumber \\\\\ny & = & C^Tx +D^Tu,\n\\label{eqn:ss_drive}\n\\end{eqnarray}\nwhere $\\tilde{A} \\in \\mathcal{R}^{n\\times n}$, $B \\in \\mathcal{R}^{n \\times r}$, $C \\in \\mathcal{R}^{n \\times l}$ and $D \\in \\mathcal{R}^{l \\times r}$ are matrices, $u$ is an $r$-dimensional vector denoting the control input and $y$ is an $l$-dimensional vector representing the output of the system. The control input can be given as\n\\begin{equation}\nu = d +\\theta^T f(x,y),\n\\label{eqn:control}\n\\end{equation}\nwhere $d \\in \\mathcal{R}$ is a bounded disturbance, $\\theta \\in \\mathcal{R}^p$ is the constant parameter vector and $f(x,y)$ is a $p$-dimensional vector differential function.\n\nWhen the state variables of this system are unavailable for measurement, then according to control theory the states of the system may be estimated by designing a parametric model of the original system. This parametric model is called an \\textit{observer} and is considered as the response system. The concept of \\textit{observer design} is a well-established branch of control engineering and is widely used in the state feedback control of dynamical systems \\cite{ctchen70,vidya75,kailath80}. In this method, once the drive system and its related observer are chosen, then under certain conditions, local or global synchronization between the drive and observer system is guaranteed \\cite{morgul96}.\n\nLet us assume that the output $y(t)$ is the only variable that can be measured for the system of Eq. (\\ref{eqn:ss_drive}). Then an observer based on the available signal can be derived to estimate the state variables. 
This observer is known in the literature as the \\textit{Luenberger Observer} \\cite{Ioannou96} and is given as\n\\begin{eqnarray}\n\\dot{\\hat{x}} & = & \\tilde{A}\\hat{x} + L^T(y-\\hat{y})+ B^T\\hat{u}, \\nonumber \\\\\t\t\t\t\t\n\\hat{y}\t\t & = & C^T \\hat{x} +D^T\\hat{u},\n\t\\label{eqn:ss_response}\n\\end{eqnarray}\nwhere $\\hat{x}$ denotes the dynamic estimate of the state variable $x$, $L \\in \\mathcal{R}^n$ is a $n$-dimensional vector called as the \\textit{observer gain vector}. It is essential that Eq. (\\ref{eqn:ss_response}) is in observer canonical form, refer Eq. (\\ref{eqn:AppenA_ss_Ao}) in Appendix A.\n\nThe control law can be derived as \n\\begin{equation}\n\\hat{u} = \\hat{d}+\\hat{\\theta}^Tf(x,y),\n\t\\label{eqn:adap_control}\n\\end{equation}\nwhere $\\hat{d}$ and $\\hat{\\theta}$ are the estimates of the disturbances and the parameters of the system and are updated according to the adaptive algorithm \\cite{liao2k} as\n\\begin{eqnarray}\n\\dot{\\hat{d}} & = & (y-\\hat{y}), \\nonumber \\\\\n\\dot{\\hat{\\theta}} & = & f(x,y)(y-\\hat{y}).\n\t\\label{eqn:adap_algorithm}\n\\end{eqnarray}\nThe Luenberger observer Eqs. (\\ref{eqn:ss_response}) has a feedback term that depends on the output observation error $\\tilde{y}=y-\\hat{y}$. Then the state observation error $\\tilde{x}=x-\\hat{x}$ satisfies the equation\n\\begin{eqnarray}\n\\dot{\\tilde{x}} & = & (\\tilde{A}-L^TC)\\tilde{x} + B^T \\left[(d-\\hat{d})+(\\theta^T - \\hat{\\theta}^T) f(x,y) \\right], \\nonumber \\\\\n\\tilde{x}(0) & = & x_0-\\hat{x}_0,\n \t\\label{eqn:error_dyn}\n\\end{eqnarray}\nwhere we assume $X = (\\tilde{A}-L^TC)$ as the augmented system matrix.\nThe implementation of this observer based adaptive synchronization of nonlinear systems is illustrated in Fig. \\ref{fig:sync_BD}.\\\\\n\\begin{figure*}[!t]\n\t\\centering\n\t\\resizebox{1.9\\columnwidth}{!}\n\t\t{\\includegraphics{.\/Fig7}}\n\t\\caption[Block Diagram Representation of Observer Based Adaptive synchronization] {Block diagrammatic representation of the observer based adaptive synchronization of nonlinear systems.} \n\t\\label{fig:sync_BD}\n\\end{figure*}\n\n\\subsection{Conditions for stability:}\nAccording to control theory, the system represented by the Eq. (\\ref{eqn:ss_response}) is stable in the sense of Lyapunov \\cite{Ioannou96}, refer section A.1 of Appendix A, if any of the following conditions are satisfied:\n\\begin{enumerate}\n\\item\nAll eigen values of the augmented matrix $X = (\\tilde{A}-L^TC)$, have negative real parts.\n\\item\nFor every positive definite matrix $Q$, (that is $Q = Q^T >0 $), the following Lyapunov matrix equation\n\\begin{equation}\nX^TP+PX = -Q,\n\t\\label{eqn:ss_lyp_eqn1}\n\\end{equation}\nhas a unique solution $P$ that is also positive definite.\n\\item\nFor any given matrix $C$, with the pair $(C,X)$ being observable, the equation\n\\begin{equation}\nX^TP+PX = -C^TC,\n\t\\label{eqn:ss_lyp_eqn2}\n\\end{equation}\nhas a unique solution $P$, that is also positive definite.\n\\end{enumerate}\n\\noindent If $(C^T,\\tilde{A})$ is an observable pair, then we can choose the values of the gain vector $L$ such that the matrix $(\\tilde{A}-L^TC)$ is stable. In fact, the eigen values of the matrix $(\\tilde{A}-L^TC)$, and therefore the rate of convergence of $\\tilde{x}(t)$ to zero can be arbitrarily chosen by designing the vector $L$ appropriately \\cite{kailath80}.\n\nThe observer based response system given by Eq. (\\ref{eqn:ss_response}) and associated with the control law given by Eq. 
(\\ref{eqn:control}) and the adaptive algorithm given by Eq. (\\ref{eqn:adap_algorithm}) will now globally and asymptotically synchronize with the drive system given by Eq. (\\ref{eqn:ss_drive}), that is\n\\begin{equation*}\n\\parallel \\tilde{x}(t)\\parallel = \\parallel x(t)-\\hat{x}(t)\\parallel \\rightarrow 0 \\,\\,\\,\\textrm{as}\\,\\,\\, t \\,\\, \\rightarrow \\infty,\n\\end{equation*}\nfor all initial conditions.\\\\\n\\noindent Thus we find that the adaptive synchronization scheme is based on the following:\n\\begin{enumerate}\n\\item\nthe observability of the linear part of the system, that is, of the pair $(C^T,\\tilde{A})$,\n\\item\nthe design of a suitable observer based on an adaptive law, and\n\\item\nthe formulation of a suitable control law.\n\\end{enumerate}\n\n\\section{Observer Based Adaptive Synchronization of Chaos in Coupled Memristive MLC Oscillators}\nIn this section we report the synchronization of chaos via an observer based design, with an appropriate control law and adaptive algorithm, in a system of two coupled memristive MLC circuits. As in the case of control of chaos, we assume that under an appropriate choice of the boundary equilibrium points, the memristive MLC circuit becomes a \\textit{Filippov} system. Further, we assume the same parameter values as were fixed for effecting control in a single memristive MLC circuit, namely $a_{1,3} = -0.55$, $a_2 = -1.02$ and $\\beta= 0.95$, $f = 0.20$ and $\\omega = 0.65$. Also we assume the observer canonical form of the state space representation of the memristive MLC circuit as given in Eq. (\\ref{eqn:mmlc_ss1}), namely\n\\begin{eqnarray}\n\\dot{x}_o(t) & = &\n\\begin{cases}\n\\tilde{A}_2 x_o +B^Tu & \\text{if $x \\in S_2$ }, \\\\\n\\tilde{A}_{1,3} x_o+B^Tu & \\text{if $x \\in S_{1,3}$ }, \\\\\n\\end{cases} \\nonumber \\\\\ny & = & C^Tx + D^Tu,\n\\label{eqn:smmlc_ss1}\n\\end{eqnarray}\nwhere the system matrices $\\tilde{A}_i$ and the vectors $B^T$, $C^T$ and $D^T$ are the same as given in Eqs. (\\ref{eqn:mmlc_A2})--(\\ref{eqn:mmlc_D}). Then the observability matrix for the sub-spaces $S_{1,3}$ is\n\\begin{equation}\nP_{o_{1,3}} = \\left( \\begin{array}{ccc}\n1.00 & 0.00 & 0.00 \\\\\n0.00 & 1.00 & 0.00 \\\\\n0.00 & 0.55 & 1.00\n\\end{array} \\right).\n\\label{eqn:smmlc_PO2}\n\\end{equation}\nSimilarly, the observability matrix for the sub-space $S_2$ is\n\\begin{equation}\nP_{o_2} = \\left( \\begin{array}{ccc}\n1.00 & 0.00 & 0.00 \\\\\n0.00 & 1.00 & 0.00 \\\\\n0.00 & 1.02 & 1.00\n\\end{array} \\right).\n\\label{eqn:smmlc_PO13}\n\\end{equation}\nAs these observability matrices in all the three sub-spaces have a full rank of $3$, we find that the matrices $(C^T,\\tilde{A}_i)$ form observable pairs. Hence the linearised parts of the memristive MLC circuit are observable. 
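\n\nIn practice, both the rank test for observability and the subsequent choice of an observer gain can be carried out with standard numerical routines. The following minimal Python sketch does this for a hypothetical observable pair $(C^T,\\tilde{A})$ of the same dimension; the matrix entries and the desired pole locations are illustrative assumptions and are not the circuit matrices quoted above.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.signal import place_poles\n\n# Hypothetical observable pair (A, c); not the circuit matrices above.\nA = np.array([[-0.5, 1.0, 0.0],\n              [-0.3, 0.0, 1.0],\n              [-0.1, 0.0, 0.0]])\nc = np.array([[1.0, 0.0, 0.0]])\n\n# Observability matrix [c; cA; cA^2] must have full rank.\nP_o = np.vstack([c, c @ A, c @ A @ A])\nprint('observable:', np.linalg.matrix_rank(P_o) == 3)\n\n# Observer gain by pole placement on the dual pair (A^T, c^T):\n# the eigenvalues of A - L c equal those of A^T - c^T L^T.\ndesired = np.array([-1.0, -2.0 + 1.0j, -2.0 - 1.0j])\nL = place_poles(A.T, c.T, desired).gain_matrix.T\nprint(np.sort_complex(np.linalg.eigvals(A - L @ c)))\n\\end{verbatim}\nWith observability established, the observer gain can thus be chosen so as to place the eigen values of the augmented matrix at desired locations, which is what is done next for the memristive MLC circuit. 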
Under this condition the Luenberger observer for the memristive MLC circuit can be derived as\n\\begin{eqnarray}\n\\dot{\\hat{x}}(t) & = &\n\\begin{cases}\n\\tilde{A}_2 \\hat{x} +L^T_2(y-\\hat{y}) +B^T\\hat{u} & \\text{if $\\hat{x} \\in S_2$ }, \\\\\n\\tilde{A}_{1,3} \\hat{x}+ L^T_{1,3}(y-\\hat{y}) +B^T\\hat{u} & \\text{if $\\hat{x} \\in S_{1,3}$ }, \\\\\n\\end{cases} \\nonumber \\\\\n\\hat{y} & = & C^T \\hat{x} + D^T\\hat{u},\n\\label{eqn:mmlc_ss2}\n\\end{eqnarray}\nwhere the control law is\n\\begin{equation}\n\\hat{u} = \\hat{\\theta}^Tf(x,y),\n\\label{eqn:sadap_control}\n\\end{equation}\nwith the differential function $f(x,y)$ given componentwise as\n\\begin{eqnarray}\nf_1(x,y) & = & y, \\nonumber \\\\\nf_2(x,y) & = & |y+1|-|y-1|,\n\\label{eqn:sadap_fxy}\n\\end{eqnarray}\nand where $\\hat{\\theta}$ denotes the estimates of the parameters of the system, which are updated according to the adaptive algorithm\n\\begin{eqnarray}\n\\dot{\\hat{\\theta}}_1 & = & -(y-\\hat{y})f_1(x,y), \\nonumber \\\\\n\\dot{\\hat{\\theta}}_2 & = & -(y-\\hat{y})f_2(x,y).\n\\label{eqn:sadap_algorithm}\n\\end{eqnarray}\nThe dynamics of the state error $\\tilde{x} = x-\\hat{x}$ is represented by\n\\begin{eqnarray}\n\\dot{\\tilde{x}} & = &\n\\begin{cases}\n(\\tilde{A}_2-L^T_2C)\\tilde{x} + B^T(\\theta^T - \\hat{\\theta}^T)f(x,y),\\\\\n(\\tilde{A}_{1,3}-L^T_{1,3}C)\\tilde{x} + B^T(\\theta^T - \\hat{\\theta}^T)f(x,y).\n\\end{cases}\n\\label{eqn:smmlc_err}\n\\end{eqnarray}\nThe augmented matrices for the system can be defined as\n\\begin{equation}\nX_i = (\\tilde{A}_i\\,-\\,L^T_iC) \\;\\;\\; \\textrm{for} \\;\\;\\;i=1,2,3.\n\\label{eqn:mmlc_AM}\n\\end{equation}\nFor the choice of parameters of the system mentioned above, the observer gain vectors $L_i$ for each of the sub-spaces $S_i$ are chosen with the aim of making the augmented matrices $X_i$ exponentially stable.\\\\\nFor the sub-spaces $S_{1,3}$, the gain vector is chosen as\n\\begin{equation}\nL_{1,3} = \\left( \\begin{array}{ccc}\n0.8000 & 3.1000 & -3.2870\n\\end{array} \\right)^T.\n\\label{eqn:mmlc_L13}\n\\end{equation}\nDue to this choice of the observer gain vectors $L_i$, the augmented matrix $X_{1,3}$ in these sub-spaces $S_{1,3}$ will have \\textit{poles} at\n$\\{0.0000,\\; -0.6000 \\pm i 1.86748\\}$.\nSimilarly, for the sub-space $S_2$, the gain vector is chosen as\n\\begin{equation}\nL_2 = \\left( \\begin{array}{ccc}\n0.0000 & 15.2212 & 13.6508\n\\end{array} \\right)^T.\n\\label{eqn:mmlc_L2}\n\\end{equation}\nThis will cause the augmented matrix $X_2$ to have \\textit{poles} at $\\{-1.5788,\\; 0.824398 \\pm i 4.13832 \\}$.\\\\\nThe Lyapunov equation for stability, Eq. (\\ref{eqn:ss_lyp_eqn1}), may be written separately for the three sub-spaces as\n\\begin{equation}\nX_i^TP_i+P_iX_i = -Q \\;\\;\\; \\textrm{for} \\;\\;\\;i=1,2,3,\n\\label{eqn:ss_lyp_eqn3}\n\\end{equation}\nwhere we take the matrix $Q$ to be the $3$-dimensional unit matrix. 
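\n\nSuch Lyapunov equations are readily solved with standard linear-algebra routines. A minimal Python sketch is given below for a hypothetical stable matrix $X$ of the same dimension; it is not one of the augmented matrices $X_i$ of the circuit, and it merely illustrates how the positive definiteness of the solution can be checked.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import solve_continuous_lyapunov\n\n# Hypothetical stable matrix X (not one of the circuit matrices X_i).\nX = np.array([[-1.0,  1.0,  0.0],\n              [ 0.0, -2.0,  1.0],\n              [ 0.0,  0.0, -0.5]])\nQ = np.eye(3)\n\n# solve_continuous_lyapunov(A, B) returns P with A P + P A^T = B, so\n# X^T P + P X = -Q is obtained by passing A = X^T and B = -Q.\nP = solve_continuous_lyapunov(X.T, -Q)\n\n# For a stable X the solution is symmetric and positive definite.\nprint(np.allclose(P, P.T))\nprint(np.all(np.linalg.eigvalsh(P) > 0))\n\\end{verbatim}\nFor a stable matrix the solution $P$ obtained in this way is symmetric and positive definite, which is precisely the property examined below for the three sub-spaces. 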
\nThe solution of the above Lyapunov equation for the sub-spaces $S_{1,3}$ is the positive definite matrix $P_{1,3}$ given as\n\\begin{equation}\nP_{1,3} = \\left( \\begin{array}{ccc}\n1.7778 & -0.4220 & -0.7308 \\\\\n-0.4220 & 2.7528 & -0.1558 \\\\\n-0.7308 & -0.1558 & 0.6020\n\\end{array} \\right).\n\\label{eqn:smmlc_P13}\n\\end{equation}\nThe matrix $P_2$ for the sub-space $S_2$ is given as\n\\begin{equation}\nP_2 = \\left( \\begin{array}{ccc}\n-0.0012 & -0.0615 & -0.0647 \\\\\n-0.0615 & -3.0924 & -3.2555 \\\\\n-0.0647 & -3.2555 & -3.4268\n\\end{array} \\right) \\times 10^6.\n\\label{eqn:smmlc_P2}\n\\end{equation}\nWe find that the matrix $P_2$ for the sub-space $S_2$ is not positive definite, so that the Lyapunov stability condition, Eq. (\\ref{eqn:ss_lyp_eqn3}), is not satisfied there. Therefore the trajectories in this sub-space are, as per Lyapunov theory, unstable.\nHence the augmented matrix $X_2$ in region $S_2$ is also unstable. However, the combined effect of the dynamics in the outer two sub-spaces $S_{1,3}$, represented by the augmented matrix $X_{1,3}$ and the positive definite matrix $P_{1,3}$, forces the system as a whole to become asymptotically stable and to exhibit bounded behaviour asymptotically. Further, as the conditions for Lyapunov asymptotic stability, Eq. (\\ref{eqn:ss_lyp_eqn3}), are satisfied by the system as a whole, we find that under the action of the control law, Eq. (\\ref{eqn:sadap_control}), and the adaptive algorithm, Eq. (\\ref{eqn:sadap_algorithm}), the estimated values of the unknown parameters of the observer system, $\\hat{a}_i$, finally converge to the true values of the parameters $a_i$ as time progresses. This is shown in Fig. \\ref{fig:sync_parameters}, where we find in Fig. \\ref{fig:sync_parameters}(a) that the value of the parameter $\\hat{a}_2$ converges to its true value of $-1.02$, while in Fig. \\ref{fig:sync_parameters}(b) the value of the parameter $\\hat{a}_1$ converges to its true value of $-0.55$.\n\nMathematically, the error between the drive and the response converges to zero for all initial values as time progresses, that is\n\\begin{equation*}\n\\parallel \\tilde{x}(t)\\parallel = \\parallel x(t)-\\hat{x}(t)\\parallel \\rightarrow 0 \\,\\,\\,\\textrm{as}\\,\\,\\, t \\,\\, \\rightarrow \\infty.\n\\end{equation*}\nThe convergence of the error dynamics $\\tilde{x}$ to zero is shown in Fig. \\ref{fig:sync_error}. Here the convergence of the errors $\\tilde{x}_1$, $\\tilde{x}_2$ and $\\tilde{x}_3$ is shown in plots (a), (b) and (c) of Fig. \\ref{fig:sync_error}, respectively.\n\\begin{figure}\n\\centering\n\\resizebox{\\columnwidth}{!}\n{\\includegraphics{.\/Fig8}}\n\\caption[Adaptive Observer Based Estimation of the Two Coupled Memristive MLC Circuit Parameters] {The estimation of (a) the parameter $a_2$ and (b) the parameter $a_1$ of the response system of the two coupled memristive MLC circuit in the synchronized state using the adaptive observer scheme. It is to be noted that the asymptotic values of $a_2 = -1.02$ and $a_1 = -0.55$ are exactly equal to those of the drive system, which were known a priori.}\n\\label{fig:sync_parameters}\n\\end{figure}\nThese convergences of the parameters to their true values and of the error dynamics to zero cause the observer system dynamics to converge to the original system dynamics as time elapses. 
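\n\nTo make the structure of the adaptive scheme concrete, the following Python sketch integrates, with a simple Euler rule, a hypothetical third-order drive in observer canonical form together with its Luenberger observer and the general adaptive law of Eq. (\\ref{eqn:adap_algorithm}). The system matrix, the observer gain, the single unknown parameter and the choice $f(x,y)=y$ are all illustrative assumptions and not the memristive MLC model itself, and no attempt is made here to verify the conditions (such as persistency of excitation) required for guaranteed parameter convergence.\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical drive in observer canonical form with one unknown parameter.\nA = np.array([[-0.5, 1.0, 0.0],\n              [-1.8, 0.0, 1.0],\n              [-0.5, 0.0, 0.0]])\nB = np.array([0.0, 1.0, 0.0])\nC = np.array([1.0, 0.0, 0.0])\nL = np.array([2.0, 1.8, 0.1])   # observer gain; A - L C is then stable\ntheta_true = 0.8                # parameter known only to the drive\n\ndt, steps = 1e-3, 50000\nx  = np.array([0.1, 0.0, -0.1]) # drive state\nxh = np.zeros(3)                # observer (response) state\nth = 0.0                        # parameter estimate\n\nfor _ in range(steps):\n    y, yh = C @ x, C @ xh\n    f = y                       # differential function f(x, y) = y\n    x  = x  + dt * (A @ x + B * (theta_true * f))\n    xh = xh + dt * (A @ xh + L * (y - yh) + B * (th * f))\n    th = th + dt * (y - yh) * f # adaptive update of the estimate\n\nprint('output error:', abs(C @ x - C @ xh), 'estimate:', th)\n\\end{verbatim}\nThe switched observer of Eq. (\\ref{eqn:mmlc_ss2}), with its region-dependent system matrices and gains, follows the same structure, and it is this structure that underlies the convergence of the parameters and of the error dynamics described above. 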
This means that the response system dynamics evolves as time proceeds to that of the drive system dynamics. Hence if the drive system is in a chaotic state, then the response system should also exhibit identical chaotic state. This is shown in Fig. \\ref{fig:sync_chaos}.\n\nHad the drive system been in a periodic state, then one would expect the response system also to take on asymptotically the periodic state by virtue of the adaptive synchronization. As both the drive and the response systems exhibit identical behaviour, they are said to be in \\textit{complete synchronization} (CS) with each other. This is shown by the diagonal lines for the variables in the $(x_1-x'_1)$, $(x_2-x'_2)$ and $(x_3-x'_3)$ phase planes in plots (a), (b) and (c) respectively in Fig. \\ref{fig:sync_variable}. \n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{\\columnwidth}{!}\n\t\t{\\includegraphics{.\/Fig9}}\n\t\\caption[Error Dynamics of the Two Coupled Memristive MLC Circuit] {The convergences of the errors in the variables, $e_1 = x_1-x_1'$, $e_2 = x_2-x_2'$ and $e_3 = x_3-x_3'$ of the two coupled Memristive MLC Circuit in synchronized state under adaptive observer scheme are shown in (a), (b) and (c) respectively.}\n\t\\label{fig:sync_error}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{\\columnwidth}{!}\n\t\t{\\includegraphics{.\/Fig10}}\n\t\\caption[Phase portraits of the drive and response Memristive MLC Circuits] {The phase portraits (a) in the $(x_1-x_2)$ plane and (b) in the $(\\hat{x}_1-\\hat{x}_2)$ plane showing identical chaos respectively.}\n\t\\label{fig:sync_chaos}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{\\columnwidth}{!}\n\t\t{\\includegraphics{.\/Fig11}}\n\t\\caption[Complete synchronization of the Two Coupled Memristive MLC Circuit] {The complete synchronization of the two coupled Memristive MLC circuit under the adaptive observer scheme, shown in (a) the $(x_1-x_1')$ plane, (b) the $(x_2-x_2')$ plane and (c) the $(x_3-x_3')$ plane.}\n\t\\label{fig:sync_variable}\n\\end{figure}\nFor effecting this, it is essential that the gain vectors $L_i$ for all the sub-spaces $S_i$'s are properly chosen. Due to the differences in the gain vectors in the three sub-regions of the phase space, this observer based adaptive synchronization is also referred to in literature as \\textit{switched state feedback} method of adaptive synchronization \\cite{zhangj09}.\n\n\\section{Conclusion}\nIn this work, we have studied the control of chaos in an individual memristive MLC circuit as well as the synchronisation behaviour in a system of two coupled memristive MLC circuits using \\textit{state feedback control} and \\textit{observer based adaptive control} techniques respectively. To realize these objectives, we have considered the memristive MLC circuit as a \\emph{Filippov} system, a non-smooth system having the order of discontinuity \\emph{one} and have derived the discontinuity mapping corrections such as (ZDM and PDM). Further we have derived the canonical state space representations for memristive MLC circuit. Also the stability theory of Lyapunov and pole-placement methods, concepts which are very much familiar in control theory, were applied. \n\nWe wish to state here that we have derived analytical conditions for effecting control and adaptive synchronization using state feedback and implemented the results using numerical simulations. 
The fact that the results of simulations agree with the predictions of the analytical conditions point to the validity of our derivations.\n\nFrom a different point of view, it has been shown by many researchers, that in general any two coupled systems, be they smooth or discontinuous, can be directed towards amplitude death or oscillation death, irrespective of their being in periodic, chaotic, hyper-chaotic or time-delay systems, by the application of proper feedback coupling, for example see \\cite{resmi2011}. The same can be applied to the two coupled system under study, by calculating proper observer gain vectors and choosing proper initial conditions and parametric values. However we have not proceeded along these lines because it falls beyond the realm of this present work. We hope to pursue this possibility in future studies.\n\nThe phenomenon of control of chaos may be further studied to understand and effectively prevent the incidence of nonlinear catastrophic phenomena such as {\\textit{blackouts}} in transmission lines and power grids, cardiac arrythmias, etc. The synchronisation of chaos which we have demonstrated using observer based adaptive scheme in memristive MLC circuits can be used to effect digital modulation schemes for secure communication. For example, the modulation characteristics of the memristor can be used to implement Amplitude Shift Keying ASK, a key technique in Digital Signal Processing and transmission of Digitized Information. Also the switching characteristics of the memristor can be utilised to implement digital protocols for secure transmission of data.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nSocial dilemmas are situations in which individuals are torn between what is best for them and what is best for the society. If selfishness prevails, the pursuit of short-term individual benefits may quickly result in loss of mutually rewarding cooperative behavior and ultimately in the tragedy of the commons \\cite{hardin_g_s68}. Evolutionary game theory \\cite{maynard_82, weibull_95, hofbauer_98, mestertong_01, nowak_06} is the most commonly adopted theoretical framework for the study of social dilemmas, and none has received as much attention as the prisoner's dilemma game \\cite{fudenberg_e86, nowak_n93, santos_prl05, imhof_pnas05, santos_pnas06, tanimoto_pre07, fu_epjb07, gomez-gardenes_prl07, poncela_njp07, fu_pre08b, poncela_epl09, fu_pre09, fu_jtb10, antonioni_pone11, tanimoto_pre12, press_pnas12, hilbe_pnas13, szolnoki_pre14}. Each instance of the game is contested by two players who have to decide simultaneously whether they want to cooperate or defect. The dilemma is given by the fact that although mutual cooperation yields the highest collective payoff, a defector will do better if the opponent decides to cooperate.\n\nSince widespread cooperation in nature is one of the most important challenges to Darwin's theory of evolution and natural selection, ample research has been devoted to the identification of mechanisms that may lead to a cooperative resolution of social dilemmas. Classic examples reviewed in \\cite{nowak_s06} include kin selection \\cite{hamilton_wd_jtb64a}, direct and indirect reciprocity \\cite{trivers_qrb71, axelrod_s81}, network reciprocity \\cite{nowak_n92b}, as well as group selection \\cite{wilson_ds_an77}. 
Recently, however, interdisciplinary research linking together knowledge from biology and sociology as well as mathematics and physics has revealed many refinements to these mechanisms and also new ways by means of which the successful evolution of cooperation amongst selfish and unrelated individuals can be understood \\cite{szabo_pr07, roca_plr09, schuster_jbp08, perc_bs10, santos_jtb12, perc_jrsi13, rand_tcs13}.\n\nOne of the more recent and very promising developments in evolutionary game theory is the introduction of so-called multigames \\cite{hashimoto_jtb06, hashimoto_jtb14} or mixed games \\cite{wardil_csf13} (for earlier conceptually related work see \\cite{cressman_igtr00}), where different players in the population adopt different payoff matrices. Indeed, it is often the case that a particular dilemma is perceived differently by different players, and this is properly taken into account by considering a multigame environment. A simple example to illustrate the point entails two drivers meeting in a narrow street and needing to avoid collision. However, while the first driver drives a cheap old car, the second driver drives a brand new expensive car. Obviously, the second driver will be more keen on averting a collision. Several other examples could be given to illustrate that, when we face a conflict, we are likely to perceive differently what we might loose in case the other player chooses to defect. The key question then is, how the presence of different payoff matrices, motivated by the different perception of a dilemma situation, will influence the cooperation level in the whole population?\n\nMultigames were thus far studied in well-mixed systems, but since stable solutions in structured populations can differ significantly -- a prominent example of this fact being the successful evolution of cooperation in the prisoner's dilemma game through network reciprocity \\cite{nowak_n92b} -- it is of interest to study multigames also within this more realistic setup. Indeed, interactions among players are frequently not random and best described by a well-mixed model, but rather they are limited to a set of other players in the population and as such are best described by a network \\cite{doebeli_el05, szabo_pr07, roca_plr09, perc_bs10, perc_jrsi13}. With this as motivation, we here study evolutionary multigames on the square lattice and scale-free networks, where the core game is the weak prisoner's dilemma while at the same time some fraction of players adopts either a positive or a negative value of the sucker's payoff. Effectively, we thus have some players using the weak prisoner's dilemma payoff matrix, some using the traditional prisoner's dilemma payoff matrix, and also some using the snowdrift game payoff matrix. Within this multigame environment, we will show that the higher the heterogeneity of the population in terms of the adopted payoff matrices, the more the evolution of cooperation is promoted. Furthermore, we will elaborate on the responsible microscopic mechanisms, and we will also test the robustness of our observations. Taken together, we will provide firm evidence in support of heterogeneity-enhanced network reciprocity and show how different perceptions of social dilemmas contribute to their resolution. 
First, however, we proceed with presenting the details of the mathematical model.\n\n\\section{Evolutionary multigames}\nWe study evolutionary multigames on the square lattice and the Barab\\'asi-Albert scale-free network \\cite{barabasi_s99}, each with an average degree $k=4$ and size $N$.\nThese graphs, being homogeneous and strongly heterogeneous, represent two extremes of possible interaction topology.\nEach player is initially designated either as cooperator ($C$) or defector ($D$) with equal probability. Moreover, each instance of the game involves a pairwise interaction where mutual cooperation yields the reward $R$, mutual defection leads to punishment $P$, and the mixed choice gives the cooperator the sucker's payoff $S$ and the defector the temptation $T$. The core game is the weak prisoner's dilemma, such that $T>1$, $R=1$ and $P=S=0$. A fraction $\\rho$ of the population, however, uses different $S$ values to take into account the different perception of the same social dilemma. In particular, one half of the randomly chosen $\\rho N$ players uses $S=+\\Delta$, while the other half uses $S=-\\Delta$, where $0 < \\Delta < 1$. We adopt the equal division of positive and negative $S$ values to ensure that the average over all payoff matrices returns the core weak prisoner's dilemma, which is convenient for comparisons with the baseline case. Primarily, we consider multigames where, once assigned, players do not change their payoff matrices, but we also verify the robustness of our results by considering multigames with time-varying matrices.\n\nWe simulate the evolutionary process in accordance with the standard Monte Carlo simulation procedure comprising the following elementary steps. First, according to the random sequential update protocol, a randomly selected player $x$ acquires its payoff $\\Pi_x$ by playing the game with all its neighbors. Next, player $x$ randomly chooses one neighbor $y$, who then also acquires its payoff $\\Pi_y$ in the same way as previously player $x$. Importantly, at each instance of the game the applied payoff matrix is that of the randomly chosen player who collects the payoffs, which may result in an asymmetric payoff allocation depending on who is central. This fact, however, is key to the main assumption that different players perceive the same situation differently. Once both players acquire their payoffs, then player $x$ adopts the strategy $s_y$ from player $y$ with a probability determined by the Fermi function\n\\begin{equation}\n\\label{eq1}\nW(s_y \\to s_x)=\\frac{1}{1+\\exp[(\\Pi_x-\\Pi_y)\/K]},\n\\end{equation}\nwhere $K=0.1$ quantifies the uncertainty related to the strategy adoption process \\cite{blume_l_geb93, szabo_pr07}. In agreement with previous works, the selected value ensures that strategies of better-performing players are readily adopted by their neighbors, although adopting the strategy of a player that performs worse is also possible \\cite{perc_pre08b, szolnoki_njp08}. This accounts for imperfect information, errors in the evaluation of the opponent, and similar unpredictable factors.\n\nEach full Monte Carlo step (MCS) consists of $N$ elementary steps described above, which are repeated consecutively, thus giving a chance to every player to change its strategy once on average. 
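\n\nAs an illustration of the above procedure, a minimal Python sketch of one elementary Monte Carlo step on the square lattice is given below. The lattice size, the random (rather than exactly half-half) assignment of $+\\Delta$ and $-\\Delta$ among the selected $\\rho N$ players, and all variable names are our own illustrative choices; the payoff of each player is accumulated with that player's own payoff matrix, and strategy imitation follows the Fermi rule of Eq. (\\ref{eq1}).\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng()\nL, K, T, DELTA, RHO = 100, 0.1, 1.1, 0.2, 1.0\n\n# Strategies (1 = C, 0 = D) and individual sucker's payoffs on an L x L lattice.\nstrat = rng.integers(0, 2, size=(L, L))\nS = np.zeros((L, L))\nmask = rng.random((L, L)) < RHO\nS[mask] = rng.choice([DELTA, -DELTA], size=mask.sum())\n\ndef neighbours(i, j):\n    return [((i + 1) % L, j), ((i - 1) % L, j),\n            (i, (j + 1) % L), (i, (j - 1) % L)]\n\ndef payoff(i, j):\n    # Accumulate the payoff of player (i, j) using its own payoff matrix.\n    pi = 0.0\n    for a, b in neighbours(i, j):\n        if strat[i, j] == 1:\n            pi += 1.0 if strat[a, b] == 1 else S[i, j]\n        else:\n            pi += T if strat[a, b] == 1 else 0.0\n    return pi\n\ndef elementary_step():\n    i, j = rng.integers(0, L, size=2)\n    a, b = neighbours(i, j)[rng.integers(0, 4)]\n    w = 1.0 / (1.0 + np.exp((payoff(i, j) - payoff(a, b)) / K))\n    if rng.random() < w:\n        strat[i, j] = strat[a, b]\n\nfor _ in range(L * L):   # one full Monte Carlo step\n    elementary_step()\n\\end{verbatim}\nIterating such elementary steps and discarding a sufficiently long transient yields the stationary fraction of cooperators $f_C$ reported below. 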
All simulation results are obtained on networks\ntypically with $N=10^4$ players, but larger system size is necessary on\nthe proximity to phase transition points, and the fraction of cooperators $f_C$ is determined in the stationary state after a sufficiently long relaxation lasting up to $2 \\cdot 10^5$ MCS. To further improve accuracy, the final results are averaged over $200$\nindependent realizations, including the generation of the scale-free networks, at each set of parameter values.\n\n\\section{Results}\nBefore turning to the main results obtained in structured populations, we first briefly summarize the evolutionary outcomes in well-mixed populations. Although the subpopulation adopting the $T>1$, $R=1$, $P=0$ and $S=+\\Delta$ parametrization fulfills $T>R>S>P$, and thus in principle plays the snowdrift game where the equilibrium is a mixed $C+D$ phase, cooperators in the studied multigame actually never survive. Since there are also players who adopt either the weak ($T>R>P=S$) or the traditional ($T>R>P>S$) prisoner's dilemma payoff matrix, the asymmetry in the interactions renders cooperation evolutionary unstable. In fact, in well-mixed populations the baseline case given by the average over all payoff matrices is recovered, which in our setup is the weak prisoner's dilemma, where for all $T>1$ cooperators are unable to survive. More precisely, cooperators using $S=-\\Delta$ die out first, followed by those using $S=0$ and $S=+\\Delta$, and this ranking is preserved even if the subpopulation using $S=0$ is initially significantly larger than the other two subpopulations (at small $\\rho$ values). Although in finite well-mixed populations the rank of this extinction pattern could be very tight, it does not change the final fate of the population to arrive at complete defection.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=fig1a.eps,width=8cm}}\n\\centerline{\\epsfig{file=fig1b.eps,width=8cm}}\n\\caption{(Color online) Evolution of cooperation (top panel) and the average payoff of the population (bottom panel) in the multigame environment on the square lattice. Depicted results were obtained for $\\rho=1$ and different values of $\\Delta$, as indicated in the legend. Here $\\rho=1$ means that all players use either $S=+\\Delta$ or $S=-\\Delta$ (none use $S=0$). Larger values of $\\Delta$ allow cooperators to survive at larger values of $T$. Importantly, this improvement in $f_C$ is also accompanied by a suitable increase in the average payoff of the population, as shown in the bottom panel.}\n\\label{Delta}\n\\end{figure}\n\nIn structured populations, as expected from previous experience\n\\cite{doebeli_el05, szabo_pr07, roca_plr09, perc_bs10, perc_jrsi13}, we can observe different solutions, where cooperators can coexist with defectors over a wide range of parameter values. But more importantly, the multigame environment, depending on $\\rho$ and $\\Delta$, can elevate the stationary cooperation level significantly beyond that warranted by network reciprocity alone. We first demonstrate this in Fig.~\\ref{Delta}(a), where we plot the fraction of cooperators $f_C$ as a function of the temptation value $T$, as obtained for $\\rho=1$ and by using different values of $\\Delta$. It can be observed that the larger the value of $\\Delta$ the larger the value of $T$ at which cooperators are still able to survive. Indeed, for $\\Delta=0.8$ cooperation prevails across the whole interval of $T$. 
Since some players use a negative value of $S$, it is nevertheless of interest to test whether the elevated level of cooperation actually translates to a larger average payoff of the population. It is namely known that certain mechanisms aimed at promoting cooperative behavior, like for example punishment \\cite{sigmund_tee07}, elevate the level of cooperation but at the same time fail to raise the average payoff accordingly due to the entailed negative payoff elements. As illustrated in Fig.~\\ref{Delta}(b), however, this is not the case at present since larger values of $f_C$ readily translate to larger average payoffs of the population.\n\n\\begin{figure}[b]\n\\centerline{\\epsfig{file=fig2.eps,width=8cm}}\n\\caption{(Color online) Evolution of cooperation in the multigame environment on the square lattice, as obtained in dependence on $\\rho$ and $\\Delta$. The color map encodes the stationary fraction of cooperators $f_C$. It can be observed that the dependence of $f_C$ on both $\\rho$ and $\\Delta$ is monotonic, and that it is thus beneficial for the population to be in the most heterogeneous state possible. Depicted results were obtained for $T=1.1$, but qualitatively equal evolutionary outcomes can be observed also for other values of $T$.}\n\\label{rho}\n\\end{figure}\n\nIn the light of these results, we focus solely on the fraction of cooperators and show in Fig.~\\ref{rho} how $f_C$ varies in dependence on $\\rho$ and $\\Delta$ at a given temptation value $T$. Presented results indicate that what we have observed in Fig.~\\ref{Delta}(a), namely the larger the value of $\\Delta$ the better, actually holds irrespective of the value of $\\rho$. More to the point, larger $\\rho$ values support cooperation more strongly, which corroborates the argument that the more heterogeneous the multigame environment the better. Results presented in Fig.~\\ref{rho} also suggest that it is better to have many players using higher $S$ values, regardless of the fact that the price is an equal number of players in the population using equally high but negative $S$ values. These observations hold irrespective of the temptation $T$, and they fit well with the established notion that heterogeneity, regardless of its origin, promotes cooperation by enhancing network reciprocity \\cite{perc_pre08, santos_n08, szolnoki_epjb08, lei_c_pa10, santos_jtb12, sun_l_ijmpc13, tanimoto_13, vukov_njp12, zhu_p_pone14, maciejewski_pcbi14, yuan_wj_pone14}.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=fig3a.eps,width=8.5cm}}\n\\centerline{\\epsfig{file=fig3b.eps,width=8.5cm}}\n\\caption{(Color online) Top panel depicts the level of cooperation in the two subpopulations on the square lattice, where $f_{C_+}$ denotes the fraction of cooperators among those who use $S=+\\Delta$, while $f_{C_-}$ denotes the fraction of cooperators in the group where $S=-\\Delta$ is used. For reference, we also plot the cooperation level in the corresponding homogeneous population, where every player uses $S=0$. Expectedly, the level of cooperation is largest in the subpopulation where players use $S=+\\Delta$. Much more surprisingly, however, the level of cooperation in the subpopulation where players use $S=-\\Delta$ still significantly exceeds the baseline outcome of the homogeneous weak prisoner's dilemma game. 
Bottom panel depicts the difference between the level of cooperation in the homogeneous and the heterogeneous multigame environment $\\Delta f_C$, along with the difference in the strategy invasion flow $\\Delta \\gamma$ between the two ``+'' and ``-'' subpopulations (see main text for details). These results were obtained with $\\rho=1$ and $\\Delta=0.2$, but remain qualitatively identical also for other parameter values.}\n\\label{asymmetry}\n\\end{figure}\n\n\nTo support these arguments and to pinpoint the microscopic mechanism that is responsible for the promotion of cooperation in the multigame environment, we first monitor the fraction of cooperators within subgroups of players that use different payoff matrices. For clarity, we use $\\rho=1$, where only two subpopulations exist (players use either $S=+\\Delta$ or $S=-\\Delta$, but nobody uses $S=0$), and where the positive effect on the evolution of cooperation is the strongest (see Fig.~\\ref{rho}). Accordingly, one group is formed by players who use $S=+\\Delta$, and the other is formed by players who use $S=-\\Delta$. We denote the fraction of cooperators in these two subpopulations by $f_{C_+}$ and $f_{C_-}$, respectively. As Fig.~\\ref{asymmetry}(a) shows, even if only a moderate $\\Delta$ value is applied, the cooperation level among players who use a positive $S$ value is significantly higher than among those who use a negative $S$ value. Unexpectedly, even among those players who effectively play a traditional prisoner's dilemma ($T>R>P>S$), the level of cooperation is still much higher than the level of cooperation that is supported solely by network reciprocity (without multigame heterogeneity) in the weak prisoner's dilemma ($T>R>P=S$). This fact further supports the conclusion that the introduction of heterogeneity through the multigame environment involves the emergence of strong cooperative leaders, which further aid and invigorate traditional network reciprocity. Unlike defectors, cooperators benefit from a positive feedback effect, which originates in the subpopulation that uses positive $S$ values and then spreads towards the subpopulation that uses negative $S$ values, ultimately giving rise to an overall higher social welfare (see Fig.~\\ref{Delta}(b)).\n\nThis explanation can be verified directly by monitoring the information exchange between the two subpopulations. More precisely, we measure the frequency of strategy imitations between players belonging to the two different subpopulations. The difference $\\Delta \\gamma$ is positive when players belonging to the ``-'' subpopulation adopt the strategy from players belonging to the ``+'' subpopulation more frequently than vice versa. Results presented in Fig.~\\ref{asymmetry}(b) demonstrate clearly that the level of cooperation is increased only if there is significant asymmetry in the strategy imitation flow in favor of the ``+'' subpopulation. Such symmetry breaking, which is due to the multigame environment, supports a level of cooperation in the homogeneous weak prisoner's dilemma that notably exceeds the level of cooperation that is supported solely by traditional network reciprocity.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=fig4.eps,width=8cm}}\n\\caption{(Color online) Evolution of cooperation in the multigame environment on the scale-free network with degree-normalized payoffs. 
Depicted results were obtained when only 2\\% of the hubs (high-degree players) used $S=+\\Delta$, while the rest of the population used a moderately negative $S$ value (see main text for details). As on the square lattice (see top panel of Fig.~\\ref{Delta}), larger values of $\\Delta$ (see legend) allow cooperators to survive at larger values of $T$.}\n\\label{sf}\n\\end{figure}\n\nWe proceed by testing the robustness of our observations and expanding this study to heterogeneous interaction networks. First, we consider the Barab{\\'a}si-Albert scale-free network \\cite{barabasi_s99}, where influential players are a priori present due to the heterogeneity of the topology. Previous research, however, has shown that the positive impact of degree heterogeneity vanishes if payoffs are normalized with the degree of players, as to account for the elevated costs of participating in many games \\cite{santos_jeb06, masuda_prsb07, tomassini_ijmpc07, szolnoki_pa08}. We therefore apply degree-normalized payoffs to do away with cooperation promotion that would be due solely to the heterogeneity of the topology. Furthermore, by striving to keep the average over all payoff matrices equal to the weak prisoner's dilemma, it is important to note that the heterogeneous interaction topology allows us to introduce only a few strongly connected players into the $S=+\\Delta$ subpopulation, while the rest can use only a moderately negative $S$ value. Specifically, we assigned $S_1=+\\Delta$ to only 2\\% of the hubs, while the rest used $S_2=-0.0204 \\cdot S_1$ to fulfill $0.02 \\cdot S_1+0.98 \\cdot S_2=0$ (average over all $S$ in the population equal to zero to yield, on average, the weak prisoner's dilemma payoff ranking). As results depicted in Fig.~\\ref{sf} show, even with this relatively minor modification that introduces the multigame environment, the promotion of cooperation is significant if only $\\Delta$ is sufficiently large (see legend). Evidently, $\\Delta=0$ returns the modest cooperation level that has been reported before on scale-free networks with degree-normalized payoffs, but for $\\Delta=0.8$ the coexistence of cooperators and defectors is possible almost across the whole interval of $T$. It is also important to note that the positive effect could be easily amplified further simply by introducing more players into the $S=+\\Delta$ subpopulation and letting the remainder use an accordingly even less negative values of $S$. These results indicate that the topology of the interaction network has only secondary importance, because the heterogeneity that is introduced by payoff differences already provides the necessary support for the successful evolution of cooperation. Consequently, in the realm of the introduced multigame environment, we have observed qualitatively identical cooperation-supporting effects when using the random regular graph or the configurational model of Bender and Canfield \\cite{bender, bollobas, molloy} for generating the interaction network.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=fig5.eps,width=8cm}}\n\\caption{(Color online) Evolution of cooperation in the time-varying multigame environment on the square lattice. Depicted results were obtained when players could choose between $S=+\\Delta$ and $S=-\\Delta$ with equal probability at each instance of the game (see legend for the applied $\\Delta$ values). 
As on the square lattice with time invariable subpopulations (see top panel of Fig.~\\ref{Delta}), larger values of $\\Delta$ allow cooperators to survive at larger values of $T$, although in this case the positive impact on the evolution of cooperation is less strong.}\n\\label{temporary}\n\\end{figure}\n\nLastly, we present results obtained within a time-varying multigame environment to further corroborate the robustness of our main arguments. Several examples could be provided as to why players' perception might change over time. The key point is that players may still perceive the same dilemma situation differently, and hence they may use different payoff matrices. Our primary goal here is to present the results obtained with a minimal model, although extensions towards more sophisticated and realistic models are of course possible. Accordingly, unlike considered thus far, players do not have a permanently assigned $S$ value, but rather, they can choose between $S=+\\Delta$ and $S=-\\Delta$ with equal probability at each instance of the game. Naturally, this again returns the $S=0$ weak prisoner's dilemma on average over time, and as shown in \\cite{wardil_csf13}, in well-mixed populations returns the complete defection stationary state. In structured populations, however, for $\\Delta>0$, we can again observe promotion of cooperation beyond the level that is warranted solely by network reciprocity. For simplicity, results presented in Fig.~\\ref{temporary} were obtained by using the square lattice as the underlying interaction network, but in agreement with the results presented in Fig.~\\ref{sf}, qualitatively identical evolutionary outcomes are obtained also on heterogeneous interaction networks. Comparing to the results presented in Fig.~\\ref{Delta}(a), where the time invariable multigame environment was applied, we conclude that in the time-varying multigame environment the promotion of cooperation is less strong. This, however, is understandable, since the cooperation-supporting influential players emerge only for a short period of time, but on average the overall positive effect in the stationary state is still clearly there. To conclude, it is worth pointing out that time-dependent perceptions of social dilemmas open the path towards coevolutionary models, as studied previously in the realm of evolutionary games \\cite{zimmermann_pre04, szolnoki_njp09, ohtsuki_prl07, perc_bs10, cardillo_njp10, moyano_jtb09}, and they also invite the consideration of the importance of time scales \\cite{roca_prl06} in evolutionary multigames.\n\n\\section{Discussion}\nWe have studied multigames in structured populations under the assumption that the same social dilemma is often perceived differently by competing players, and that thus they may use different payoff matrices when interacting with their opponents. This essentially introduces heterogeneity to the evolutionary game and aids network reciprocity in sustaining cooperative behavior even under adverse conditions. As the core game and the baseline for comparisons, we have considered the weak prisoner's dilemma, while the multigame environment has been introduced by assigning to a fraction of the population either a positive or a negative value of the sucker's payoff. We have shown that, regardless of the structure of the interaction network, and also irrespective of whether the multigame environment is time invariant or not, the evolution of cooperation is promoted the more the larger the heterogeneity in the population. 
As the responsible microscopic mechanism behind the enhanced level of cooperation, we have identified an asymmetric strategy imitation flow from the subpopulation adopting positive sucker's payoffs to the population adopting negative sucker's payoffs. Since the subpopulation where players use positive sucker's payoffs expectedly features a higher level of cooperation, the asymmetric strategy imitation flow thus acts in favor of cooperative behavior also in the other subpopulations, and ultimately it raises the overall level of social welfare in the population.\n\nThe obtained results in structured populations are in contrast to the results obtained in well-mixed populations, where simply the baseline weak prisoner's dilemma is recovered regardless of multigame parametrization. Although it is expected that structured populations support evolutionary outcomes that are different from the mean-field case \\cite{szabo_pr07, roca_plr09, perc_bs10, perc_jrsi13}, the importance of this fact for multigames is of particular relevance since interactions among players are frequently not best described by a well-mixed model, but rather they are limited to a set of other players in the population and as such are best described by a network. Put differently, although sometimes analytically solvable, the well-mixed models can at best support proof-of-principle studies, but otherwise have limited applicability for realistic systems.\n\nTaken together, the presented results add to the existing evidence in favor of heterogeneity-enhanced network reciprocity, and they further establish heterogeneity among players as a strong fundamental feature that can elevate the cooperation level in structured populations past the boundaries that are imposed by traditional network reciprocity. The rather surprising role of different perceptions of the same conflict thus reveals itself as a powerful mechanism for resolving social dilemmas, although it is rooted in the same fundamental principles as other mechanisms for cooperation promotion that rely on heterogeneity. We hope this paper will motivate further research on multigames in structured populations, which appears to be an underexplored subject with many relevant implications.\n\n\\begin{acknowledgments}\nThis research was supported by the Hungarian National Research Fund (Grant K-101490), TAMOP-4.2.2.A-11\/1\/KONV-2012-0051, the Slovenian Research Agency (Grants J1-4055 and P5-0027), and the Fundamental Research Funds for Central Universities (Grant DUT13LK38).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}