diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcdmq" "b/data_all_eng_slimpj/shuffled/split2/finalzzcdmq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcdmq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nRecently nanostructure technology has made it possible to create\nquasi one-dimensional electronic structures, the so-called quantum\nwires \\cite{field_1d,scott-thomas_1d,kastner_coulomb,goni_gas1d,%\ncalleja_gas1d,tarucha_wire_1d}.\nExperimentally situations have been reached where the width of such\na wire is of the order of the Fermi wavelength of the conduction electrons,\nwhich makes it a good realization of a one-dimensional electron gas\n\\cite{calleja_gas1d}.\n\nIn such a system one expects the electron-electron interactions to play an\nimportant role. In particular, at variance with what happens in other\none-dimensional conductors, the long-range Coulomb interaction in a quantum\nwire is not screened since the wire contains only one channel of\nelectrons. One can therefore expect very different physical properties\nthan those of a Luttinger liquid with short-range interactions.\nIt has been proposed that due to these long-range interactions, the electrons\nin a quantum wire will form a Wigner crystal \\cite{schulz_wigner_1d}.\nThe formation of a Wigner crystal can be described as a modulation of the\ncharge density $\\rho(x) \\sim \\rho_0 \\cos(Qx+2\\sqrt{2}\\Phi)$ where\n$\\rho_0$\nis the uniform amplitude of the charge density, $Q= 4 k_F$ its wave\nvector\nand $\\Phi$ describes the location and the motion of the Wigner crystal.\nThe existence of such a Wigner crystal should have observable consequences on\nthe transport properties of the system. Indeed, in the presence of impurities\nthe Wigner crystal will be pinned: the phase $\\Phi(x)$ adjusts to the impurity\npotential on a scale given by $L_0$ called the pinning length.\nThis process of pinning is analogous to what happens in charge density\nwaves in the presence of impurities \\cite{lee_rice_cdw,fukuyama_pinning}.\nSince, in the presence of long-range Coulomb interactions,\nthe most divergent fluctuation is now a $4 k_F$ density\nfluctuation \\cite{schulz_wigner_1d}, the transport properties are\ndominated by $4 k_F$ scattering on impurities, and not the\nusual $2 k_F$ scattering, as was assumed previously\n\\cite{ogata_wires_2kf,fukuyama_wires_2kf}. Due to the $4 k_F$\nscattering, one can also expect different\ntransport properties than those of a Luttinger liquid with short-range\ninteractions\n\\cite{apel_impurity_1d,suzumura_scha,giamarchi_loc_lettre,giamarchi_loc}\nwhere $2 k_F$ fluctuations are the dominant one.\n\nNon linear $I-V$ curves have been observed experimentally\nwhich could be interpreted as the result of pinning\n\\cite{kastner_coulomb}. Up to now only short wires have been made,\nfor which only few impurities are in the wire and dominate the transport.\nEven in that case what has mainly been focussed on theoretically\nis a system with short-range interactions\n\\cite{kane_qwires_tunnel_lettre,kane_qwires_tunnel,glazman_single_impurity}.\nFor long wires, it is important to consider the case of a\nuniform disorder, e.g. a thermodynamic number of impurities, as well as\nthe long-range Coulomb forces.\n\nIn this\npaper we study the effects of disorder on the transport\nproperties of a quantum wire.\nAlthough the problem is very close to that\nof a charge density wave pinned by impurities, there are important\ndifferences that are worth investigating. 
Due to the long-range\n nature of the forces one can expect modifications of the pinning\nlength and frequency dependence of the conductivity. In addition quantum\nfluctuations have to be taken into account, and for the case of\nshort-range forces are known to drastically affect the transport properties\ncompared to a classical situation \\cite{suzumura_scha,giamarchi_loc}.\n\nThe plan of the paper is as follows. The model is derived in\nsection~\\ref{model}. Effects of the pinning and the pinning length are\nstudied in section~\\ref{static}. The frequency dependence of the\nconductivity is computed in section~\\ref{conductivity} and the\ntemperature dependence of the conductivity (and conductance)\nis discussed in section~\\ref{temperature}.\nDiscussion of\nthe comparison with experiments and conclusions can be found in\nsection~\\ref{conclusion}. Some technical details about the treatment\nof quantum fluctuations can be found in the appendices.\n\n\\section{Model} \\label{model}\n\nWe consider a gas of electrons confined in a channel of\nlength $L$, with a width\n$d\\ll L$ and a thickness $e\\ll d\\ll L$.\nWe will assume that both $d$ and $e$ are small enough for\nthe system to be regarded as one-dimensional, meaning that\nonly one band is filled in the energy-spectrum of the electrons.\nSuch a situation will be realized when $d$ and $e$ become comparable to\nthe Fermi wavelength.\nIn the following we will therefore keep only the degrees of freedom\nalong the wire.\nSince we are interested only in low energy excitations\nwe can linearize the spectrum around the Fermi points\nand take for the free part of the Hamiltonian:\n\\begin{equation} \\label{free}\nH_0 = v_F \\sum _k (k-k_F) a^{\\dag }_{+,k}a_{+,k} + (-k-k_F)\na^{\\dag }_{-,k}a_{-,k}\n\\end{equation}\nwhere $v_F$ is the Fermi velocity and\n$a^{\\dag }_{+,k}$ ($a^{\\dag }_{-,k}$) is the creation\noperator of an electron on the right(left)-going branch\nwith wave-vector $k$. In addition we assume that the electrons\ninteract through the Coulomb interaction\n\\begin{equation}\\label{interact}\nH_c = \\frac1{2}\\int _0 ^L \\int _0 ^L dx dx' V(x-x') \\rho (x) \\rho (x')\n = \\frac1{2L} \\sum _k V_k \\rho _k \\rho _{-k}\n\\end{equation}\nIn a strictly one-dimensional theory, a $\\frac1{r}$ Coulomb potential\nhas no Fourier transform because of the divergence for\n$r\\to 0$. 
In the real system such a divergence does not exist\nowing to the finite width d of the wire.\nWe will use for $V(r)$ the\nfollowing approximate form \\cite{gold_1dplasmon} which cuts the\nsingularity at $r \\approx d$, and gives the correct asymptotic behavior\nat large $r$\n\\begin{equation}\nV(r) = \\frac{e^2}{\\sqrt{r^2+d^2}}\n\\end{equation}\nthe Fourier transform of which is\n\\begin{equation}\nV(q)=\\int_{-L\/2}^{L\/2} dr V(r)e^{iqr} \\approx 2e^2 K_0(qd)\n\\end{equation}\nwhere $K_0$ is a Bessel function, and one has assumed the wire to be\nlong enough $L\\to \\infty$.\nIn the following we shall frequently use the asymptotic expression\n\\begin{equation}\nK_0(qd) \\approx -\\ln (qd) \\qquad \\text{when} \\qquad qd \\ll 1\n\\end{equation}\n\nThe model (\\ref{free}) plus (\\ref{interact}) has been studied by\nSchulz \\cite{schulz_wigner_1d} who showed that\nthe system is dominated by $4k_F$ charge\ndensity wave fluctuations, which decay as\n\\begin{equation}\n\\langle \\rho_{4k_f}(x)\\rho_{4k_f}(0)\\rangle \\sim e^{-\\ln^{1\/2}(x)}\n\\end{equation}\nThe presence of such a $4k_F$ charge fluctuation can be\nviewed as the formation of a Wigner crystal.\nIn order to describe the pinning of such a Wigner crystal\nwe add to the hamiltonian (\\ref{free}) and (\\ref{interact})\nthe contribution due to impurities.\nWe assume that impurities are located in the wire at random sites\n$X_j$, and that each impurity acts on the electrons with a\npotential $V_{imp}$.\nWe will assume in the following that the potential\ndue to the impurities is short-ranged, and will replace it by a delta\nfunction.\n\\begin{equation}\nV_{imp}(x-X_j) = V_0\\delta (x-X_j)\n\\end{equation}\nThe part of the Hamiltonian stemming from a particular configuration of\nthe impurities is then\n\\begin{equation} \\label{imp}\nH_{imp} = \\sum_j \\int_0^L V_0\\delta (x-X_j) \\rho (x) = \\sum_j V_0\\rho (X_j)\n\\end{equation}\n\nIn order to treat the problem we use the representation of fermion\noperators in term of boson operators\n\\cite{solyom_revue_1d,emery_revue_1d}.\nOne introduces the phase field\n\\begin{equation}\n\\Phi (x) = - {i\\pi \\over L} \\sum_{k\\ne 0} {1 \\over k}e^{-ikx}(\\rho _{+,k}+\n\\rho _{-,k})\n\\end{equation}\nwhere $\\rho _{+,k}$($\\rho _{-,k}$) are the charge density operators for\nright(left)-moving electrons,\nand $\\Pi$, the momentum density conjugate to $\\Phi$.\nThe boson form for (\\ref{free}) plus (\\ref{interact})\nis \\cite{schulz_wigner_1d}\n\\begin{equation} \\label{starting}\nH_0+H_c = {u \\over 2\\pi }\\int_0^L dx \\lbrack K(\\pi \\Pi)^2\n + {1 \\over K} (\\partial _x \\Phi)^2 \\rbrack\n + {1 \\over \\pi^2} \\int _0^L \\int _0^L dxdx' V(x-x')\n (\\partial _x \\Phi(x)) (\\partial _{x}\\Phi(x'))\n\\end{equation}\n$K$ is a number containing the backscattering effects due to the\nFourier components of the interaction close to $2 k_F$ and $u$ is the\nrenormalized Fermi velocity due to the same interactions\n\\cite{solyom_revue_1d,emery_revue_1d,schulz_wigner_1d}.\nWe have taken $\\hbar =1$ in (\\ref{starting}).\nThe long-range nature of the Coulomb interaction manifests itself in\nthe last term of (\\ref{starting}). As we shall precise in the following,\nboth $K$ and the Coulomb potential $V$ control the strength of quantum\neffects.\n\nSince for (\\ref{starting}) the most divergent fluctuation\ncorresponds to a $4 k_F$ charge modulation \\cite{schulz_wigner_1d},\nwe will consider only\nthe coupling of the impurities with this mode and ignore the $2 k_F$\npart of the charge density. 
The range of validity of such an\napproximation will be discussed in the following.\nUsing the boson representation of the density\n\\cite{solyom_revue_1d,emery_revue_1d} and impurity Hamiltonian\n(\\ref{imp}), the total Hamiltonian becomes\n\\begin{eqnarray}\nH & = & {u \\over 2\\pi }\\int _0^L dx \\lbrack K(\\pi \\Pi)^2\n + {1 \\over K} (\\partial _x \\Phi )^2 \\rbrack\n + \\sum _j V_0\\rho _0 \\cos(4 k_F X_j + 2\\sqrt{2}\\Phi (X_j))\n \\nonumber \\\\\n & & + \\frac1{\\pi ^2} \\int _0^L \\int _0^L dxdx'\nV(x-x') (\\partial _x\\Phi(x)) (\\partial _{x}\\Phi(x')) \\label{total}\n\\end{eqnarray}\nwhere $\\rho_0$ is the average density of electrons.\nThe hamiltonian (\\ref{total}) has similarities with the phase\nHamiltonian of a pinned charge density wave\n\\cite{fukuyama_pinning}.\nSimilarly to the CDW case one can expect the\nphase to distort to take advantage of the impurity potential, leading to\nthe pinning of the Wigner crystal.\nAs for standard CDW, one has to distinguish\nbetween strong and weak pinning on the impurities\n\\cite{fukuyama_pinning}. In the first case\nthe phase adjusts itself on each impurity site. This corresponds to a\nstrong impurity potential or dilute impurities. In the weak pinning\ncase, the impurity potential is too weak or the impurities too close\nfor the phase to be able to adjust on each impurity site, due to the\ncost in elastic energy. Although the problem has similarities with the\nCDW problem, there are two important a priori physical differences that\nhave to be taken into account:\ncompared to the CDW case, one has\nto take into account the long-range Coulomb interaction. One can\nexpect such an interaction to make the Wigner crystal more rigid than a CDW\nand therefore more difficult to pin. In addition, for the Wigner\ncrystal, one cannot neglect the quantum term\n($\\Pi^2$) as is usually done for the CDW problem owing to the large\neffective mass of the CDW. In the absence of long-range interactions such\na term is known to give important quantum corrections\n\\cite{suzumura_scha,giamarchi_loc} on both the pinning length and the\nconductivity.\n\nIn the following sections we will examine both cases of strong and weak\npinning.\n\n\\section{Calculation of the pinning length}\n\\label{static}\nLet us first compute the pinning\nlength $L_0$ over which the phase $\\Phi (x)$ in the ground state varies\nin order to take advantage of the impurity potential.\nIf the impurities are dilute enough, or the impurity potential strong\nenough,\nthe phase $\\Phi (x)$ adjusts on each impurity site such that\n$\\cos(4 k_F X_j + 2\\sqrt{2}\\Phi (X_j))=-1$. This is the so-called strong\npinning regime \\cite{fukuyama_pinning} where\nthe pinning length is the distance between impurities\n$L_0 = n_i^{-1}$. If the impurities are dense enough, or their potential\nweak enough then the cost of elastic and Coulomb energy in distorting the\nphase has to be balanced with the gain in potential energy. One is in\nthe weak pinning regime where the pinning length can be much larger than\nthe distance between impurities.\nIn this regime, we calculate $L_0$ using\nFukuyama and Lee's method developed for the CDW\n\\cite{fukuyama_pinning,lee_coulomb_cdw}.\nThis method neglects the quantum fluctuations of the phase, and the\neffect of such fluctuations will be discussed at the end of this\nsection.\nOne assumes that the phase $\\Phi$ varies on a scale $L_0$. 
One can\ntherefore divide the wire in segments of size $L_0$ where the phase is\nroughly constant and takes the optimal value to gain the maximum pinning\nenergy. $L_0$ is determined by\noptimizing the total gain in energy, equal to the gain in potential\nenergy minus the cost in elastic and Coulomb energy. If one assumes\nthat the phase varies of a quantity of order $2\\pi$ over a length\n$L_0$, the cost of elastic energy per unit length is\n\\begin{equation}\\label{eps-el}\n{\\cal E}_{el} = {u \\over 2\\pi K} {1 \\over \\alpha L_0^2}\n\\end{equation}\nwhere $\\alpha$ is a number of order unity depending on the precise\nvariation of the phase. Since the impurity potential varies randomly\nin segments of length $L_0$, the gain per unit length due to pinning is\n\\cite{fukuyama_pinning}\n\\begin{equation} \\label{eps-imp}\n{\\cal E}_{\\rm imp}(L_0) = - V_0\\rho _0 ({n_i \\over L_0})^{1 \\over 2}\n\\end{equation}\nIn our case we also have to consider the cost in Coulomb energy.\n\\begin{equation}\n{\\cal E}_{coul} = {1 \\over L} {1 \\over \\pi ^2} \\int _0^L dx \\int _0^L dx'\nV(x-x') \\langle \\partial _x \\Phi \\rangle_{av} \\langle \\partial _{x'} \\Phi\n\\rangle_{av}\n\\end{equation}\nwhere the subscript {\\it av} indicates that the quantity is averaged over all\nimpurity configurations. Since one assumes\nthat the phase varies of a quantity of order $2\\pi$ over a length\n$L_0$, the phases for electrons distant of more than\n$L_0$ are uncorrelated, so that the interactions between such pairs of\nelectrons do not contribute to the energy.\nThe calculation can thus be reduced to the evaluation of the energy for\na segment of length $L_0$\n\\begin{equation} \\label{eps-coul}\n{\\cal E}_{coul} \\approx {1 \\over \\pi ^2}{1 \\over L_0}\nL_0 \\int _{-L_0}^{L_0} du V(u) {\\langle \\Phi ^2(x) \\rangle _{av} \\over L_0^2}\n= {2e^2 \\over \\pi ^2\\alpha L_0^2} \\ln {L_0 \\over d}\n\\end{equation}\nwhere $\\alpha$ is the constant introduced in (\\ref{eps-el}).\n\nThe minimization of the total energy provides a self-consistent expression\nfor $L_0$:\n\\begin{equation} \\label{implicit}\nL_0 = ({8e^2 \\over \\alpha \\pi ^2 V_0\\rho _0n_i^{1 \\over 2}})^{2 \\over 3}\n\\ln ^{2 \\over 3} ({CL_0 \\over d})\n\\end{equation}\nwhere $C$ is a constant of order one\n\\begin{equation} \\label{constant}\nC = e^{({\\pi u \\over 4Ke^2} - {1 \\over 2})}\n\\end{equation}\nTaking typical values $u=3\\times 10^7cm.s^{-1}$ so that\n$\\hbar u=3.15\\times 10^{-20}$ e.s.u. and $K=0.5$,\none gets\n$C \\approx 0.75$. 
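As an aside, the implicit relation (\ref{implicit}) is easily solved numerically by fixed-point iteration. The following minimal sketch (in Python) only illustrates the structure of that computation: the prefactor $A$, standing for $(8e^2/\alpha\pi^2 V_0\rho_0 n_i^{1/2})^{2/3}$, is an assumed illustrative number, and lengths are measured in units of the width $d$.
\begin{verbatim}
import numpy as np

# Fixed-point iteration for the implicit relation
#   L0 = A * ln^{2/3}(C * L0 / d),
# with lengths in units of d.  The prefactor A, which stands for
# (8 e^2 / (alpha pi^2 V0 rho0 sqrt(n_i)))^{2/3}, is an assumed value here.

A = 50.0   # assumed prefactor (in units of d), weak-pinning regime
C = 0.75   # constant of order one, as estimated above

L0 = A     # initial guess, L0 >> d
for _ in range(100):
    L0_new = A * np.log(C * L0) ** (2.0 / 3.0)
    if abs(L0_new - L0) < 1e-10 * L0:
        break
    L0 = L0_new

print("L0 / d =", L0)
\end{verbatim}
Because the logarithm varies slowly, the iteration converges after a few steps.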
For these typical values of the parameters\nthe contribution of the elastic (short-range) part of the hamiltonian to\nthat result is negligible compared to that of the Coulomb term.\nIn the following, since we expect $\\frac{L_0}{d} \\gg 1$ we approximate\n$\\ln \\frac{CL_0}{d} \\sim \\ln \\frac{L_0}{d}$.\n\nNeglecting $\\log(\\log)$ corrections, one can solve (\\ref{implicit})\nto get\n\\begin{equation} \\label{length}\nL_0 = \\Biggl(\\frac{8e^2}{\\alpha \\pi ^2 V_0\\rho_0\n n_i^{\\frac1{2}}}\\Biggr)^{\\frac2{3}}\n\\ln ^{\\frac2{3}} \\Biggl( \\frac 1{d} \\Bigl({8e^2 \\over \\alpha\n\\pi ^2V_0\\rho _0n_i^{1\/2}}\\Bigr)^{\\frac2{3}}\\Biggr)\n\\end{equation}\nCompared to the pinning length of a CDW\n\\cite{fukuyama_pinning},\n$L_0 \\approx (\\frac {v_F}{\\alpha \\pi V_0\\rho_0n_i^{1\/2}})^{2\/3}$, the\npinning length (\\ref{length}), is enhanced by a logarithmic factor.\nThis is due to the Coulomb interaction which enhances the rigidity of\nthe system and makes it more difficult to pin than a classical\nCDW.\n\nThe expression (\\ref{length}) has been derived for the weak pinning case\nwhere $L_0 \\gg n_i^{-1}$. The crossover to the strong pinning regime\noccurs when the\nphase can adjust itself on each impurity\nsite and $L_0 = n_i^{-1}$.\nOne can introduce a dimensionless quantity $\\epsilon_0$\ncharacterizing the two regimes\n\\begin{equation} \\label{epsilono}\n\\epsilon_0 = \\frac{\\alpha \\pi^2 V_0\\rho _0}{8 n_i e^2}\n\\ln ^{-1} \\Biggl( \\frac 1{d} \\Bigl({8e^2 \\over \\alpha\n\\pi ^2V_0\\rho _0n_i^{1\/2}}\\Bigr)^{\\frac2{3}}\\Biggr)\n\\end{equation}\nThe weak pinning corresponds to $\\epsilon_0 \\ll 1$, and the strong\nstarts at $\\epsilon_0 \\simeq 1$. Compared to a CDW where\n$\\epsilon_0 = \\frac{V_0\\rho_0}{n_iv_f}$, the domain of weak pinning is\nlarger due to the Coulomb interaction. This is again a consequence of\nthe enhanced rigidity of the system that makes it more difficult to pin.\nTo study the conductivity it is also convenient to introduce\n\\begin{equation} \\label{epsilon}\n\\epsilon = \\frac{V_0 \\rho _0}{n_ie^2}\n\\end{equation}\nIndeed we have evaluated, using typical values $d=10^{-8}$m and\n$L_0 \\simeq 10^{-6}$m, (estimated for typical wires in\nsection~\\ref{conclusion}),\nthat $\\epsilon_0 \\simeq 2 \\epsilon$ so that $\\epsilon$ can also be used\nas criterion to distinguish the two regimes of pinning.\n\nExpressions (\\ref{length}) and (\\ref{epsilono}) do not take into account\nthe effects of quantum fluctuations. In the absence of Coulomb\ninteractions, the quantum fluctuations drastically increase the\npinning length compared to the classical case\n\\cite{suzumura_scha,giamarchi_loc} giving a pinning length (for a $4\nk_F$ dominant scattering)\n\\begin{equation}\nL_0 \\sim (1\/V_0)^{2\/(3-4K)}\n\\end{equation}\nTo compute the effect of the quantum fluctuations in the presence of the\nCoulomb interaction we use the self-consistent harmonic approximation\n\\cite{suzumura_scha} for the cosine term in (\\ref{starting})\n\\begin{equation} \\label{scha}\n\\cos (Q x+2\\sqrt{2}(\\Phi_{cl}+\\hat{\\Phi}))=e^{-4\\langle \\hat{\\Phi}^2(x)\n\\rangle}\n\\cos(Qx+2\\sqrt{2}\\Phi_{cl}) (1-4(\\hat{\\Phi}^2(x)-\\langle \\hat{\\Phi}^2(x)\n\\rangle))\n\\end{equation}\nwhere $\\Phi =\\Phi_{cl}+\\hat{\\Phi}$ and $\\hat{\\Phi}$ represents\nthe quantum fluctuations around the classical solution $\\Phi_{cl}$.\nThe average $\\langle\\hat{\\Phi} \\rangle$ has to be done self\nconsistently. 
Such a calculation is performed in appendix~\\ref{quant}\nand one obtains for the pinning length\n\\begin{equation}\nL_0 =({8e^2 \\over \\alpha \\pi ^2 V_0\\rho _0\\gamma n_i^{1 \\over 2}})^{2 \\over 3}\n\\ln ^{2 \\over 3} ({CL_0 \\over d})\n\\end{equation}\nwhere $\\gamma = e^{-4\\langle\\hat{\\Phi}^2\\rangle} \\approx e^{-\\frac{8\\tilde{K}}\n{\\sqrt{3}}\\ln^{1\/2}V_0}$ and\n$\\tilde{K} = \\frac{\\sqrt{\\pi uK}}{2\\sqrt{2}e}$ instead of\n(\\ref{length}). The quantum fluctuations can thus\nbe taken into account by replacing $V_0$ by the effective impurity\npotential $V_0 \\gamma$. There is an increase of the pinning length\ndue to the quantum fluctuations which can be considerable since\n$\\gamma \\ll 1$. Opposite to what happens for the case of short\nrange interactions, there is no correction in the exponent for the\npinning length. This can be traced back to the fact that the correlation\nfunctions decay much more slowly ($e^{-\\ln^{1\/2}(r)}$ instead of a\npower law), therefore the system is much more ordered and\nthe fluctuations around the ground state are much less important. As a\nconsequence even if one is dealing with a system of electrons, and not a\nclassical CDW, the\nCoulomb interactions push the system to the classical limit\nwhere quantum fluctuations can be neglected except for the redefinition\nof the impurity potential $V_0 \\to V_0 \\gamma$. Note that this effect\ncan be very important quantitatively, since $L_0$ is very large for\ndilute impurities. Such a fluctuation effect also contributes to make\nthe system more likely to be in the weak pinning regime.\n\nWe will in the following make the assumption that all quantum\nfluctuation effects have been absorbed in the proper redefinition of the\npinning length. Such an approximation will be valid as long as one is\ndealing with properties at low enough frequencies. At high frequencies\nthe effect of quantum fluctuations will again be important and will be\nexamined in section~\\ref{LargeFreq}.\n\n\\section{Calculation of the conductivity}\n\\label{conductivity}\n\nIn order to study the transport properties, one makes an expansion\naround the static solution $\\Phi_0(x)$ studied in section (\\ref{static})\nthat minimizes the total energy \\cite{fukuyama_pinning}, assuming\nthat the deviations $\\Psi(x,t)$ are small\n\\begin{equation}\n\\Psi (x,t) = \\Phi (x,t) -\\Phi _0(x)\n\\end{equation}\nOne can expand the Hamiltonian in $\\Psi(x,t)$ to quadratic order\n\\begin{eqnarray} \\label{expansion}\n{\\cal H} _{\\Psi } & = & {u \\over 2\\pi } \\int _0^L dx K(\\pi \\Pi )^2 + {1\n\\over K}(\\partial _x\\Psi )^2\n+ {1 \\over \\pi ^2} \\int _0^L \\int _0^L dx dx' V(x-x') \\partial _x\\Psi\n\\partial _{x'}\\Psi \\nonumber \\\\\n & & - 4\\sum _j V_0\\rho _0 \\cos(4k_F X_j +2\\sqrt{2}\\Phi _0(X_j))\n(\\Psi (X_j))^2\n\\end{eqnarray}\nThis expansion is valid in the classical case. We assume that for the\nquantum problem all quantum corrections are absorbed in the proper\nredefinition of the pinning length $L_0$, as explained in\nsection~\\ref{static} and appendix~\\ref{quant}.\nSuch corrections do not affect\nthe frequency dependence of the conductivity. 
From\nKubo formula and the representation of the current in terms\nof the field $\\Psi$, the conductivity takes the form\n\\begin{equation}\n\\sigma (\\omega ) = 2i\\omega ({e \\over \\pi} )^2 {\\cal D}\n(0,0;i\\omega _n) \\rfloor _{i\\omega _n \\rightarrow \\omega + i0^+}\n\\end{equation}\nwhere ${\\cal D}(q,q';i\\omega _n)$ is the Green's function of the field\n$\\Phi$\n\\begin{equation}\n{\\cal D}(q,q';i\\omega _n) = \\int _0^{\\beta} d\\tau e^{i\\omega\n_n\\tau} \\langle T_{\\tau }\\Psi _q(\\tau )\\Psi _{-q'}(0)\\rangle\n\\end{equation}\nwith $\\Psi _q(\\tau )=e^{H\\tau }\\Psi _qe^{-H\\tau }$, $\\beta = T^{-1} (k_B=1)$\nand $\\omega _n=2\\pi nT$, where\n$T$ is the temperature, and $T_ {\\tau }$ is the time-ordering operator.\nOur problem is then reduced to the evaluation of this Green function. From\n(\\ref{expansion}) one gets the Dyson equation\n\\begin{equation} \\label{dysondep}\n{\\cal D}(q,q';i\\omega _n) = {\\cal D}_0(q,i\\omega _n) \\lbrack \\delta _{q,q'}\n+ 8V_0\\rho _0 \\sum _{q''} S(q''-q){\\cal D}(q'',q';i\\omega _n) \\rbrack\n\\end{equation}\nwhere\n\\begin{equation}\nS(q)=\\frac1{L} \\sum _j e^{iqX_j} \\cos(QX_j+2\\sqrt{2}\\Phi _0(X_j))\n\\end{equation}\nAfter averaging over all impurity configurations (\\ref{dysondep})\nbecomes\n\\begin{equation}\n\\langle {\\cal D}(q,q';i\\omega _n)\\rangle_{av} =\n\\delta _{q,q'}{\\cal D}(q,i\\omega _n)\n= {1 \\over {\\cal D}_0(q,i\\omega _n)^{-1}-\\Sigma (q,i\\omega _n)}\n\\end{equation}\nwhere the self-energy term $\\Sigma $ contains all connected contributions to\n${\\cal D}$, and ${\\cal D}_0$ is the free Green Function\n\\begin{equation}\n{\\cal D}_0(q,i\\omega _n)=\n{\\pi uK \\over \\omega _n^2 + q^2u^2(1+{2KV(q) \\over \\pi u})}\n\\end{equation}\nIn a similar fashion than for CDW we will compute the self-energy, using\na self-consistent Born approximation \\cite{fukuyama_pinning}, for the\ntwo limiting cases of strong and weak pinning.\n\n\\subsection{Weak pinning case $(\\epsilon \\ll 1)$}\n\\label{weak}\n\nIn that case, as for standard CDW \\cite{fukuyama_pinning},\nthe self-energy can be expanded to second-order in perturbation,\n$\\Sigma \\approx \\Sigma _1 + \\Sigma _2$.\nIndeed we easily verify that in the weak pinning case\n$\\Sigma _1 \\sim\\Sigma _2 \\sim n_i^2(V_0\\rho_0\/n_i)^{4\/3}$,\nwhereas for $n \\ge 1$, $\\Sigma _{2n+1}=0$ and\n$\\Sigma _{2n} \\sim n_i^2(\\frac{V_0\\rho_0}{n_i})^{\\frac{2+2n}{3}}$. Since\n$\\frac{V_0\\rho_0}{n_i} \\sim \\epsilon e^2 \\ll 1$, self-energy terms of higher\norder than $\\Sigma_2$ are negligible. 
$\\Sigma_1$ is\neasily computed as\n\\begin{equation} \\label{sigma1}\n\\Sigma _1 = 8V_0\\rho_0\\langle S(0)\\rangle_{av} =\n-8V_0\\rho _0 ({n_i \\over L_0})^{1 \\over 2}\n\\end{equation}\nsince again one can divide the wire into $L\/L_0$ segments of length $L_0$,\nand use, as for equation (\\ref{eps-imp}), the random-walk argument of\nreference \\onlinecite{fukuyama_pinning} which gives\n\\begin{equation} \\label{cos}\n\\frac1{L}\\langle\\sum_j \\cos(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av}\n\\approx \\sqrt{\\frac{n_i}{L_0}}\n\\end{equation}\n\n$\\Sigma _2$ is given by\n\\begin{equation}\n\\Sigma _2 = (8V_0\\rho _0)^2 \\sum _{q''} {\\cal D} _0(q'',i\\omega\n_n)\\langle S(q''-q)S(q-q'') \\rangle_{av}\n\\end{equation}\nIf one assumes that there is\nno interference between scattering on different impurities (single site\napproximation), then\nthe exponentials in $\\langle S(q''-q)S(q-q'')\\rangle_{av}$\ncancel and we find\n\\begin{eqnarray} \\label{sigma2}\n\\Sigma_2 & = & ({8V_0\\rho _0 \\over L})^2 \\sum _{q''} {\\cal D}\n_0(q'',i\\omega _n)\n\\langle \\sum _j \\cos^2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av} \\\\\n& =& 64{n_i \\over 2}(V_0\\rho _0)^2 {1 \\over L} \\sum _{q''} {\\cal D}\n_0(q'',i\\omega _n) \\nonumber\n\\end{eqnarray}\nThe approximation\n$\\frac1{L}\\langle \\sum _j \\cos^2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av}\n\\approx \\frac1{2}n_i$ is valid in the weak pinning case only. A more general\nresult is\n\\begin{eqnarray}\n\\frac1{L}\\langle \\sum _j \\cos^2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av}\n& = & \\frac1{L}\\langle \\sum_j \\frac1{2}(1+\\cos 2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\n\\rangle_{av} \\nonumber \\\\\n& \\approx & \\frac1{2}(n_i + \\sqrt{\\frac{n_i}{L_0}}) \\label{SquCos}\n\\end{eqnarray}\nbut in the weak pinning\ncase it can be simplified using $n_iL_0 \\gg 1$.\n\n$\\Sigma _2$ given by (\\ref{sigma2}) diverges as\n$\\frac1{|\\omega_n|}\\ln \\frac1{|\\omega_n|}$ when $|\\omega_n| \\to 0$, so\none has to compute $\\Sigma$ self-consistently, and\nreplace ${\\cal D}_0$ by ${\\cal D}$ in the calculation of $\\Sigma_2$.\n(\\ref{sigma2}) is replaced by\n\\begin{equation} \\label{prime}\n\\Sigma _2' = 32n_i(V_0\\rho _0)^2 {1 \\over L} \\sum _{q''}{\\cal\nD}(q'',i\\omega _n)\n= 32n_i(V_0\\rho _0)^2 {1 \\over L} \\sum _{q''}\\lbrack {\\cal\nD}_0^{-1}(q'',i\\omega _n) - \\Sigma (q'',i\\omega _n) \\rbrack ^{-1}\n\\end{equation}\ngiving the self-consistent equation for $\\Sigma$\n\\begin{equation} \\label{exacte}\n\\Sigma = \\Sigma_1 + \\Sigma _2'\n = -8V_0\\rho_0 \\sqrt{\\frac{n_i}{L_0}}\n + 16 n_i(V_0\\rho_0)^2(\\pi uK)\\frac 2{\\pi}\n \\int _0^{\\infty} \\frac{dq}{-\\omega^2-\\pi uK\\Sigma +\n q^2u^2(1+\\alpha_c K_0(qd))}\n\\end{equation}\nwhere we have done the analytic continuation $i\\omega_n \\to \\omega\n+i0^+$ and we have noted $\\alpha_c = \\frac{4Ke^2}{\\pi u}$.\nIt is convenient to rescale (\\ref{exacte}) by the pinning frequency\n$\\omega^*$ defined by\n\\begin{equation} \\label{PinFreq}\n\\omega^{*3}\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n =16n_iu(V_0\\rho_0\\pi K)^2 \\frac 1{\\sqrt{\\alpha_c}}\n={8\\pi ^{5\/2}(uK)^{3\/2} \\over e} n_i(V_0\\rho_0)^2\n\\end{equation}\nwhere $\\tilde{u}=u\\sqrt{\\alpha_c}$.\nIn terms of $L_0$, (\\ref{PinFreq}) can be rewritten as\n\\begin{equation}\n\\omega^* \\ln^{1\/6}\\frac{\\tilde{u}}{\\omega^*d}=\n 4\\alpha^{-2\/3}\\tilde{u}L_0^{-1}\\ln^{2\/3}\\Bigl({L_0 \\over d}\\Bigr)\n\\end{equation}\nNeglecting $\\log(\\log)$ factors, and in the limit $L_O \\gg d$ allowing\nto discard the constants in the 
logarithm\n($\\ln\\frac{\\tilde{u}}{\\omega^*d} \\approx \\ln \\frac {L_0}{d}$)\nwe obtain\n\\begin{equation} \\label{pinningob}\n\\omega^*\\approx 4\\alpha^{-2\/3}\\tilde{u}L_0^{-1}\\ln^{1\/2}\\Bigl({L_0 \\over d}\n\\Bigr)\n\\end{equation}\nLeaving aside a factor $4\\alpha^{-2\/3}$, $\\omega^*$ given by\n(\\ref{pinningob}) is\nthe characteristic frequency of a segment of the wire of length $L_0$.\nIndeed if we modelize the wire as a collection of independent oscillators of\ntypical length $L_0$ and use the dispersion law $\\omega \\sim\nq\\ln^{1\/2}q$ of\nthe Wigner Crystal \\cite{schulz_wigner_1d}, those oscillators have\nthe frequency $\\omega_0=\\tilde{u}L_0^{-1}\\ln^{1\/2}\\Bigl({L_0 \\over d}\\Bigr)$.\nNumerically we find $4\\alpha^{-2\/3} \\approx 1$ so that actually\n$\\omega^* \\approx \\omega_0$.\nIntroducing the rescaled quantities\n$y=\\frac{\\omega}{\\omega^* }$ and $G={\\pi uK\\Sigma \\over \\omega^{*2}}$,\nwe rewrite (\\ref{exacte}) as\n\\begin{equation} \\label{rescaled}\nG = G_1 + G'_2\n\\end{equation}\nwith $G_1 = -\\alpha^{\\frac 1{3}}$\nand\n\\begin{equation} \\label{integrale}\nG'_2=\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\\sqrt{\\alpha_c}\\frac 2{\\pi}\n\\int_0^{\\infty} \\frac{dt}{-y^2-G+t^2(1+\\alpha_cK_0(\\frac{\\omega^*d}{u}t))}\n\\end{equation}\nThe rescaled conductivity is:\n\\begin{equation} \\label{conduct}\n\\omega^* \\Re e \\sigma (y) = - \\frac{2uKe^2}{\\pi} y\\Im m ({1 \\over -y^2-G})\n\\end{equation}\n\nThe full solution of (\\ref{rescaled}) has to be obtained numerically,\nbut it is possible to obtain analytically the asymptotic expressions at\nsmall and large frequencies. To evaluate the integral $G'_2$, one notices\nthat there is a frequency $\\omega_{cr}$ above which the Coulomb term\nwill be negligible compared to the kinetic (short-range) term.\n$\\omega_{cr}$ defines a crossover length $\\xi_{cr} \\sim u\/\\omega_{cr}$\nwhich is roughly given by\n\\begin{equation}\n\\xi_{cr} \\sim d e^{1\/\\alpha_c} = d e^{\\frac{\\pi u}{4 K e^2}}\n\\end{equation}\nUsing a numerical estimate for $\\alpha_c$ and the values of the Bessel\nfunction one gets $\\alpha_c K_0(x) \\sim 1$ for $x\\sim 1.5$, giving a\ncrossover frequency $\\omega_{cr}\n\\sim 1.5\\frac {\\tilde{u}}{d} \\sim 10^{14} Hz$. Such a frequency\nis two order of magnitude larger than the pinning frequency\n$\\omega^*$.\nFor frequencies above\n$\\omega_{cr}$ the system is dominated by short-range interactions:\nin that case the dominant fluctuations are always the $2k_F$ charge\nfluctuations and not the $4k_F$ ones, and therefore the model\n(\\ref{total}) is not applicable. One has to take into account\nthe pinning on a $2k_F$ fluctuation as done in reference\n\\onlinecite{suzumura_scha,giamarchi_loc}.\nNote that it\nmakes sense to use a one-dimensional model to describe the behavior\nabove $\\omega_{cr}$ only if $\\xi_{cr} \\gg d$. This can occur for example\nif the short-range interactions are strong enough so that $K$ is very\nsmall. With the numerical values of $u$ that seem relevant for\nexperimental quantum wires and assuming that $K$ is not too small\n$K\\sim 0.5$, one gets $\\xi_{cr} \\sim d$. 
Therefore one can assert that in\nall the range of frequencies for which the problem can be considered\nas one-dimensional, Coulomb interactions will dominate.\nConsequently the result of the integration (\\ref{integrale}) is,\nwhen $\\omega^*\\sqrt{-y^2-G} \\ll \\omega_{cr}$\n\\begin{equation} \\label{small}\nG'_2 =\\frac 1{\\sqrt{-y^2-G}}\\ln^{-1\/2}\\frac{\\tilde{u}}{d\\omega^*\\sqrt{-y^2-G}}\n\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n\\end{equation}\nLet us focus on small frequencies $\\omega \\ll \\omega^*$.\nWe will show that in that limit\n$\\omega^*\\sqrt{-y^2-G}\\sim \\omega^* \\ll \\omega_{cr}$, so that we use\n(\\ref{small}) and replace in (\\ref{rescaled})\n\\begin{equation}\nG = G_1 + {1 \\over \\sqrt{-y^2-G}} \\ln ^{-1\/2}\n{\\tilde{u} \\over \\omega^* d\\sqrt{-y^2-G}}\n\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n\\qquad \\text{when} \\qquad y \\ll 1\n\\end{equation}\n$G$ tends to a limit $G_0$ verifying\n\\begin{equation} \\label{Gzero}\nG_0 = G_1 + (-G_0)^{-1\/2}\\ln ^{-1\/2}\\Bigl(\\frac{\\tilde{u}}{d\\omega^*}\n(-G_0)^{-1\/2}\\Bigr)\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n\\end{equation}\nEquation (\\ref{Gzero}) has different classes of solutions depending on the\nvalue of $\\alpha$ (zero, one or two roots),\nbut the only physically relevant situation\nis the case of a single solution\n(for a discussion, see reference \\onlinecite{fukuyama_pinning}). The\ncorresponding value of\n$G_0$ is\n\\begin{equation} \\label{value}\nG_0 \\approx -2^{-2\/3}\n\\end{equation}\nExpanding (\\ref{Gzero}) in terms of $y$ and of $G-G_0$ around\nthat solution, we find\n\\begin{equation} \\label{Gexpansion}\nG-G_0 = \\pm i\\frac 2{\\sqrt{3}}(-G_0)^{1\/2}y\n\\qquad \\text{for} \\qquad y\\ll 1\n\\end{equation}\nWe assumed in deriving (\\ref{value}) and (\\ref{Gexpansion}) that\n$\\ln^{-1}(\\tilde{u}\/d\\omega^* )$ is small compared to $1$. This can be\nverified numerically for the parameters we are taking. Using\n$\\omega^* \\sim 1\\times 10^{12}Hz$ (as estimated in section\n\\ref{conclusion}) and taking $d \\sim 10^{-8}m$ one obtains\n$\\ln^{-1}(\\frac {\\tilde{u}}{d\\omega^*}) \\sim 0.2$.\nReplacing in(\\ref{conduct}), we find for the conductivity\n\\begin{equation} \\label{result}\n\\omega^* \\Re e \\sigma (y) = {uKe^2 \\over \\pi } \\frac 8{\\sqrt{3}}y^2\n\\qquad (y \\to 0)\n\\end{equation}\n\nOne can now check that the hypothesis\n$\\omega^*\\sqrt{-y^2-G}\\sim \\omega^* \\ll \\omega_{cr}$\nis indeed verified.\n$\\sqrt{-y^2-G}$ is well defined for $y \\ll 1$\nsince $-G_0$ is positive, and $\\sqrt{-y^2-G} \\sim 1$ since\n$\\sqrt{-G_0}$ is of the order of $1$.\n\nWe have plotted in figure~\\ref{conducti}, the full frequency behavior\nof the conductivity, together with the analytic estimate at small\nfrequencies.\nThe small $\\omega$ behavior as well as the general shape of the\nconductivity is very similar to the one of a classical charge density\nwave: the small $\\omega$ conductivity is behaving as $\\omega^2$, there\nis a maximum at the pinning frequency $\\omega^*$ followed by a decrease\nin $1\/\\omega^4$. As shown in appendix~\\ref{quant} the quantum\nfluctuations do not change the frequency dependence for frequencies\nlower than\n$\\omega^*$. 
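For completeness, the limiting forms quoted here can be assembled into a rough sketch of the curve. This is only an illustration of the asymptotics, not the full numerical solution of (\ref{rescaled}) shown in figure~\ref{conducti}; the ratio $\tilde{u}/(d\omega^*)$ and the overall prefactor are assumed values.
\begin{verbatim}
import numpy as np

# Asymptotic behavior of the rescaled conductivity omega* Re sigma(y),
# with y = omega/omega*.  The overall prefactor 2uKe^2/pi is set to 1 and
# Lam stands for u~/(d omega*) (assumed value); valid only below omega_cr.

Lam = 1.0e2
y = np.logspace(-2, 1.5, 400)

# y << 1 :  (uKe^2/pi)(8/sqrt(3)) y^2  ->  (4/sqrt(3)) y^2 in these units
sigma_low = (4.0 / np.sqrt(3.0)) * y**2

# y >> 1 :  ~ 2 y^-4 ln^{-1/2}(Lam/y) ln^{1/2}(Lam)  (see the next section)
sigma_high = 2.0 * y**(-4.0) * np.log(Lam / y)**(-0.5) * np.log(Lam)**0.5

sigma = np.where(y < 1.0, sigma_low, sigma_high)
\end{verbatim}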
The large frequency behavior will be analyzed in details in\nsection~\\ref{LargeFreq}.\n\nThe low frequency conductivity obtained in our approximation is to be\ncontrasted with the previous result of Efros and\nShklovskii \\cite{efros_coulomb_gap} who find\nthat the low frequency conductivity of a one-dimensional electron gas in\nthe presence of Coulomb interactions should behave as $\\omega$.\nThis result is derived in a very different physical limit where the\nlocalization length is much smaller than the interparticle distance,\nwhereas the implicit assumption to derive the model (\\ref{total}) is that the\nlocalization length is much larger than the interparticle distance\n$k_F^{-1}$. In the limit that was considered in\n\\onlinecite{efros_coulomb_gap} the phase\n$\\phi$ would consist of a series of kinks of width $l$ the localization\nlength and located at random positions (with an average spacing\n$k_F^{-1} \\gg l$).\nThe low-energy excitations that are taken into account in\n\\onlinecite{shklovskii_conductivity_coulomb}, would\ncorrespond to soliton-like\nexcitations for the phase\n$\\phi$, where the phase jumps by $2\\pi$ between two distant kinks.\nIn the physical limit we are considering $k_F^{-1} \\ll L_0$, the phase\n$\\phi$ has no kink-like structure but rather smooth distortions between\nrandom values at a scale of order $L_0$. To get the dynamics, the\napproximation we are using only retains the small ``phonon'' like\ndisplacements of the phase $\\phi$ relative to the equilibrium position\nand no ``soliton'' like excitations are taken into account.\nIn the absence of Coulomb interactions the phonon-like excitations\nalone, when treated exactly in the classical limit $K\\to 0$\nare known \\cite{vinokur_cdw_exact} to give the\ncorrect frequency dependence of the conductivity\n$\\omega^2\\ln^2(1\/\\omega)$ (the self-consistent Born approximation only\ngets the $\\omega^2$ and misses the log correction).\nWhen Coulomb\ninteractions are included and one is in the limit where the localization\nlength is much larger than the interparticle distance, it is not clear\nwhether soliton-like interactions similar to those considered by Efros\nand Shklovskii have to be taken into\naccount. From the solution of a uniform sine-Gordon equation, one\ncould naively say that solitons are only important when the quantum\neffects are large $K \\sim 1$. In the classical limit $K \\to 0$, the\nphonon modes have a much lower energy than the soliton excitations, and\nthe physical behavior of the system should be dominated by such modes.\nWe would therefore argue that the conductivity is given correctly by\nour result (up to possible log corrections) and to behave\nin $\\omega^2$, and not\n$\\omega$, at least if the system is classical enough ($K$ small) thanks\nto the {\\bf short-range} part of the interaction. 
If our assumption is\ncorrect the crossover towards the Efros and Shklovskii result when the\ndisorder becomes stronger would be very interesting to study.\n\n\\subsection{Strong pinning case $\\epsilon > 1$}\n\nLet us now look at the other limit case of strong pinning.\nIn that case one cannot expand the self-energy $\\Sigma$,\nall the\nsingle-site contributions \\cite{fukuyama_pinning} have to be summed.\nThe result of that summation is\n\\begin{equation}\n\\Sigma = (-8V_0\\rho_0n_i) {1 \\over 1+8V_0\\rho_0A}\n\\end{equation}\nwhere $A$ is defined by\n\\begin{equation}\\label{self-consist}\nA=\\frac1{L} \\sum_{q} {\\cal D}(q,i\\omega_n)\n\\end{equation}\nHere we rescale the conductivity by the characteristic frequency\n\\begin{equation}\n\\omega_0= n_i \\tilde{u}\\ln ^{1\/2}({1 \\over d n_i})\n\\end{equation}\ncorresponding to a pinning length $L_0 \\sim n_i^{-1}$. It is thus the analog\nof $\\omega^*$, to a factor $4\\alpha^{-2\/3} \\approx 1$.\nWe use as rescaled parameters\n$\\overline{y}={\\omega \\over \\omega_0}$\nand $\\overline{G}={\\pi uK\\Sigma \\over \\omega_0^2}$, in which terms\nthe expression of the conductivity is similar to\n(\\ref{conduct}), where we replace $y$,$G$ and $\\omega^*$ respectively by\n$\\overline{y}$, $\\overline{G}$, and $\\omega_0$.\nThe resolution is quite similar to what was done for the CDW\n\\cite{fukuyama_pinning}, so that we give only the main results.\n\nThe exact equation on the rescaled self-energy $\\overline{G}$ is\n\\begin{equation} \\label{self}\n\\overline{G}=-\\lbrack \\frac1{2\\pi^2}\\ln\\frac{\\tilde{u}}{\\omega_0d}\n\\frac1{\\epsilon}+ \\sqrt{\\alpha_c}\\ln^{1\/2}\\frac{\\tilde{u}}{\\omega_0d}\n\\frac1{\\pi} \\int_0^{\\infty}\n\\frac{dt}{-(-y^2-\\overline{G})+t^2(1+\\alpha_cK_0(\\frac{\\omega_0d}{u}t))}\n\\rbrack ^{-1}\n\\end{equation}\nwhere $\\epsilon$, strength of the pinning, was defined in (\\ref{epsilon}).\nThe numerical resolution of this equation gives the conductivity plotted\non Fig.~\\ref{conductiStrong}, for different values of $\\epsilon$.\nThere is a gap below a frequency $\\omega_{\\text{lim}} < \\omega_0$\nclose to the pinning frequency and tending to it as $\\epsilon$ gets\nbigger. In the extremely strong pinning limit $\\epsilon \\gg 1$ one\ncan obtain analytically the conductivity. The equation for the self\nenergy (\\ref{self}), after replacement of the integral by its analytical\napproximate which can be taken from (\\ref{small}) since\n$\\omega_0 \\ll \\omega_{cr}$, is:\n\\begin{equation}\n \\overline{G}= -2\\Bigl(\\ln^{-1\/2}\\frac{\\tilde{u}}{\\omega_0d}\\Bigr)\n \\sqrt{-y^2-\\overline{G}}\n \\ln^{1\/2}\\frac{\\tilde{u}}{\\omega_0d\\sqrt{-y^2-\\overline{G}}}\n\\end{equation}\nwhere the integral is given by (\\ref{weak})\nsince $\\omega_0 \\ll \\omega_{cr}$.\nIn this limit $\\omega_{lim}=\\omega_0$\nand the conductivity is given near the threshold by\n\\begin{equation}\n\\omega_0 {\\cal R}e\\sigma(\\overline{y}) \\approx \\frac{4\\sqrt{2}uKe^2}{\\pi}\n\\sqrt{\\overline{y}-1}\n\\end{equation}\nThis gap below $\\omega_{\\text{lim}}$ is not physical and is an\nartifact of considering only the mean distance $n_i^{-1}$\nbetween impurities.\nIn the real system there is a finite probability of finding neighboring\nimpurities farther apart than $n_i^{-1}$. 
Such configurations\nwill give contributions at frequencies smaller than\n$\\omega_{\\text{lim}}$.\nAn estimation of those contributions can be done in a similar way than\nfor a CDW \\cite{gorkov_cdw_strong,fukuyama_pinning}.\nThe probability of finding two neighboring impurities at a distance $l$\nis $n_ie^{-n_i l}$. In the strong pinning case where we model our pinned\nCDW by a collection of independent oscillators with frequencies\n${u\\pi \\over l}\\ln^{1\/2}(l\/d)$,\nthe conductivity for $\\omega < \\omega_{\\text{lim}}$ will then be\nproportional to the sum of the contributions over all possible $l$\n\\begin{eqnarray}\n{\\cal R}e\\sigma(\\omega) & \\sim & \\int_0^{\\infty} dl \\; n_i e^{-n_i l}\n\\delta(\\omega -{u\\pi \\over l}ln^{1\/2}{l \\over \\pi d})\\nonumber \\\\\n& \\sim & \\omega ^{-2}\\ln^{1\/2}\\frac1{\\omega d}\ne^{-\\pi n_i \\frac u{\\omega}\\ln^{1\/2}\\frac u{\\omega d}}\n\\end{eqnarray}\nCompared to a CDW, the conductivity in the pseudo gap is\nlowered in the presence of Coulomb interactions. This can again be\nrelated to the fact that the long-range forces make the Wigner crystal\nmore rigid.\n\n\\subsection{Large frequency conductivity}\n\\label{LargeFreq}\n\nWe focus now on large frequencies $\\omega \\gg \\omega^*$,($\\omega_0$),\nwhere we expect the physics to be determined over segments of typical size\n$l_{\\omega} \\sim \\frac{\\tilde{u}}{\\omega} \\ll L_0$,($n_i^{-1}$),\nso that intuitively the behavior of the conductivity should be\nindependent of whether we are in the strong or weak pinning regime.\nAnd indeed at high $\\omega$ the conductivity can always be computed\nusing the approximation $\\Sigma \\approx \\Sigma _1 + \\Sigma _2$,\nwhatever the pinning is, since the self-energy terms $\\Sigma_n$'s are\nof order $(\\frac1{\\omega})^{n-1}$.\nBut we recall that we made drastic assumptions on the phase $\\Phi$,\ndepending on the pinning regime. To be consistent, they should\ngive similar results at high frequencies.\nLet's start first from a weak pinning regime: at $\\omega \\ll \\omega^*$ we\nsupposed the physics to be determined on domains of length $L_0$ on which\nthe phase $\\Phi$ is roughly constant. If we now increase $\\omega$ above\n$\\omega^*$ we simply replace $L_0$ by $l_{\\omega}$ in the evaluation\nof $\\Sigma_1$ and $\\Sigma_2$. 
More precisely (\\ref{cos}) and (\\ref{SquCos})\nare replaced by:\n\\begin{eqnarray}\n\\frac1{L}\\langle \\sum_j \\cos(QX_j +2\\sqrt{2}\\Phi_0(X_j)) \\rangle_{av} &=&\n\\sqrt{\\frac{n_i}{l_{\\omega}}} \\\\\n\\frac1{L}\\langle \\sum_j \\cos^2(QX_j+2\\sqrt{2}\\Phi_0(X_j))\\rangle_{av}\n&=&\\frac1{2}n_i(1+\\frac 1{\\sqrt{n_il_{\\omega}}})\n\\end{eqnarray}\nThis is valid of course as long as $l_{\\omega} \\gg n_i^{-1}$, above which\nthose averages saturate at values:\n\\begin{eqnarray} \\label{satur}\n\\frac1{L}\\langle \\sum_j\\cos(QX_j+2\\sqrt{2}\\Phi_0(X_j)) \\rangle_{av} &=& n_i \\\\\n\\label{satur2}\n\\frac1{L}\\langle \\sum_j\\cos^2(QX_j+2\\sqrt{2}\\Phi_0(X_j))\\rangle_{av} &=& n_i\n\\end{eqnarray}\nStarting from the strong pinning case and keeping the picture of the phase\nbeing adjusted on each impurity site we find expressions identical to\n(\\ref{satur}) and(\\ref{satur2}), regardless of the frequency.\n\nIn the end, using results of section \\ref{weak} we compute the conductivity\n to be\n\\begin{equation} \\label{highcon}\n\\omega^* \\Re e \\sigma (y) = \\frac{c_{\\Phi}4uKe^2}{\\pi}\ny^{-4}\\ln^{-1\/2}\\frac{\\tilde{u}}{d\\omega^*y}\\ln^{1\/2}\\frac{\\tilde{u}}\n{d\\omega^*}\n\\end{equation}\nwhen $\\omega^*,\\omega_0 \\ll \\omega \\ll \\omega_{cr}$, and where $c_{\\Phi}$\nis a numerical coefficient between $\\frac1{2}$ and $1$. More\nprecisely\n\\begin{eqnarray} \\label{cphi}\nc_{\\Phi} &=& \\frac1{2}(1+\\frac1{\\sqrt{n_il_{\\omega}}}) \\qquad\n\\text{for} \\qquad l_{\\omega} \\ge n_i^{-1} \\\\\n c_\\Phi &=& 1 \\qquad \\text{for} \\qquad l_{\\omega} \\le n_i^{-1}\n\\nonumber\n\\end{eqnarray}\nwhich sums up both weak and strong pinning results.\n\nThe result (\\ref{highcon}) does not take into account the effect of\nquantum fluctuations. Such effects are expected to become important for\nfrequencies larger than the pinning frequency.\nFor short-range interactions, using renormalization group techniques\n\\cite{giamarchi_loc,giamarchi_umklapp_1d}, one can show that if it is\npossible to\nneglect the renormalization of the interactions by disorder (for example\nfor very weak disorder) the conductivity becomes\n(for a $4 k_F$ pinning)\n$\\sigma(\\omega) \\sim \\omega^{4K-4}$ instead of $\\omega^{-4}$ due to\nquantum effects, and would be $\\sigma(\\omega) \\sim \\omega^{K-3}$\nfor $2 k_F$ scattering.\nAlthough one can derive these results\nand get the conductivity at high frequency for long-range interactions,\nusing the memory function formalism \\cite{gotze_fonction_memoire}\nin a way similar to\n\\onlinecite{giamarchi_umklapp_1d,giamarchi_attract_1d},\nwe will show here how to use the SCHA to get the high frequency\nconductivity.\nA naive way to take the frequency into account in the SCHA\nis to divide the system into segments of length $\\frac u{\\omega}$, and\nlook at the system on scale of such a segment. Using this method it is\npossible to rederive the results for the short-range interactions\nand tackle the case of long-range interactions in which we\nare interested. Such a calculation is performed\nin appendix~\\ref{HighFreq}. 
Instead of (\\ref{highcon}), one gets\n\\begin{equation} \\label{highconq}\n\\omega^* \\Re e \\sigma (y) = \\frac{c_{\\Phi} 4uKe^2}{\\pi}\ny^{-4}\\ln^{-1\/2}\\frac{\\tilde{u}}{d\\omega^*y}\n\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\ne^{-8\\sqrt{2}\n\\tilde{K} \\ln^{\\frac 1{2}}\\frac{\\tilde{u}}{(\\omega^* y d)}}\n\\end{equation}\n From (\\ref{highconq}) one sees that,\nas far as exponents are concerned,\nthe conductivity still decays as $1\/\\omega^4$. This would correspond to\na nearly classical ($K \\sim 0$) system with short-range interactions.\nNote that in this limit the $4 k_F$ scattering is indeed dominant over\nthe $2 k_F$ one since the latter would only give a conductivity in\n$1\/\\omega^3$ for $K\\to 0$ (in the above power laws the frequency is\nnormalized by the bandwidth so that $\\omega \\ll 1$).\n\n\\section{Temperature dependence of the conductivity}\n\\label{temperature}\n\nOne can use arguments similar to the one introduced in\nsection~\\ref{conductivity} to obtain the temperature dependence of the\nconductivity. Instead of having a cutoff length imposed by the frequency\n$\\omega \\sim u\/l_\\omega$, or more precisely as in (\\ref{pinningob})\nwhen Coulomb interactions dominate, one can introduce a thermal length\n$\\xi_T$ such that $T \\sim u\/\\xi_T$, which will act as a similar cutoff.\nInstead of rederiving all the expressions as a function of the\ntemperature, it is simpler to use the following relation for the\nconductivity\n\\cite{gotze_fonction_memoire,giamarchi_umklapp_1d,giamarchi_attract_1d}\n\\begin{eqnarray} \\label{functional}\n\\sigma(\\omega,T=0) \\sim M(\\omega,T=0)\/\\omega^2 \\\\\n\\rho(\\omega=0,T) \\sim M(\\omega=0,T) \\nonumber\n\\end{eqnarray}\nwhere $M$ is the so-called memory function. $M$ has the same functional\nform depending on the lowest cutoff in the problem. Therefore\n$M(\\omega,T=0)$ and $M(\\omega=0,T)$ have identical form provided one\nreplaces $\\omega$ by $T$. From (\\ref{functional}), one sees that it is\npossible to obtain the temperature dependence of the resistivity by\nmultiplying\nthe frequency form obtained in section~\\ref{conductivity}, and then\nsubstituting $\\omega$ by $T$. Such a procedure will be valid as long as\none can have a perturbation expansion in the scattering potential, so\nthat (\\ref{functional}) is valid\n\\cite{giamarchi_umklapp_1d,giamarchi_attract_1d}. This will be the case\nas long as the thermal length $\\xi_T$ is smaller than the pinning length\n$L_0$. Let us examine the various regimes\n\n\\subsection{$\\xi_T \\ll \\xi_{cr}$}\n\nAs discussed in\nsection~\\ref{weak}, for quantum wires with unscreened long-range Coulomb\ninteractions, in such a regime a one-dimensional model is probably not\napplicable. However, it can have\napplication either if the long-range interactions are screened or if the\nshort-range interactions are strong enough ($K$ small) so that\n$\\xi_{cr} \\gg d$.\nIn that case, as discussed in section~\\ref{LargeFreq}, the short-range\ninteractions dominate. One is back to the situation of $2 k_F$\nscattering in a Luttinger liquid for which the temperature dependence of\nthe conductivity was computed in reference\n\\cite{giamarchi_loc_lettre,giamarchi_loc}. 
Let us briefly recall the\nresults (for a complete discussion see\n\\onlinecite{giamarchi_loc_lettre,giamarchi_loc}): for repulsive\ninteractions the conductivity is roughly given by\n\\begin{equation} \\label{rough}\n\\sigma(T) \\sim T^{\\frac52 - K(T) - \\frac32 K_\\sigma(T)}\n\\end{equation}\nwhere $K(T)$ and $K_\\sigma(T)$ are the renormalized Luttinger liquid\nparameters for charge and spin at the length scale $\\xi_T$. The $K$ are\nrenormalized by the disorder and decrease when the temperature is\nlowered. Such a decrease of the exponents is a signature of the tendency\nof the system to localize \\cite{giamarchi_loc}. As a\nresult the conductivity has no simple power law form since the exponents\nthemselves depend on the temperature. If the disorder is weak enough so\nthat one can neglect the renormalization of the exponents, one gets the\napproximate expression for the conductivity\n\\cite{apel_impurity_1d,giamarchi_loc} (see\nalso appendix~\\ref{HighFreq} for a rederivation of this result using\nSCHA)\n\\begin{equation} \\label{appcond}\n\\sigma(T) \\sim T^{1 - K}\n\\end{equation}\nsince (in the absence of renormalization by disorder) $K_\\sigma = 1$\ndue to spin symmetry. The expression (\\ref{appcond}) coincides with the\none obtained subsequently for the conductance of a single impurity\n\\cite{kane_qwires_tunnel_lettre,kane_qwires_tunnel}. For one single\nimpurity there\nis no renormalization of the exponents \\cite{kane_qwires_tunnel} and the\nconductance is given by\n\\begin{equation} \\label{kane}\nG_0 \\sim T^{1-K}\n\\end{equation}\nat all temperatures. If one assumes that there are $N_i$ impurities in\na wire of length $L$ and that the impurities act as {\\bf independent}\nscatterers, then the conductivity would be, if $G$ is the\nconductance of the wire\n\\begin{equation} \\label{gsig}\n\\sigma(T) = L G = \\frac{L}{N_i} G_0 = \\frac1{n_i} G_0\n\\end{equation}\nand one recovers (\\ref{appcond}) (the impurity density $n_i$ is\nincluded in the disorder in (\\ref{appcond})). When many impurities are\npresent\nthe assumption that their contributions can be added independently is of\ncourse incorrect. The collective effects of many impurities leads to the\nrenormalization of the Luttinger liquid parameters (and in particular\nto localization) and to the formula (\\ref{rough}) for the conductivity\ninstead of (\\ref{appcond}).\n\n\\subsection{$\\xi_{cr} \\ll \\xi_T \\ll L_0$}\n\nIn this regime Coulomb interactions dominate and the $4 k_F$ scattering\nis the dominant process. Using (\\ref{highconq}) one gets\n\\begin{equation} \\label{intert}\n\\rho(T) \\sim \\frac1{T^2}\n\\ln^{-1\/2}\\frac{\\tilde{u}}{d T} e^{-8\\sqrt{2}\n\\tilde{K} \\ln^{\\frac 1{2}}\\frac{\\tilde{u}}{(T d)}}\n\\end{equation}\nProvided the wire is long enough (\\ref{intert}) gives also the\ntemperature dependence of the conductance of the wire.\nIn this regime the $2 k_F$ scattering would give $\\rho_{2 k_F}(T)\n\\sim 1\/T$ and is subdominant. Due to the long-range interactions the\nrenormalization of the exponents of the conductivity that took place for\nshort-range interactions \\cite{giamarchi_loc} does not take place. Such\na change of exponent with temperature is replaced by sub-leading\ncorrections. This is due to the fact that the correlation functions\ndecay much more slowly than a power-law.\n\n\\subsection{$L_0 \\ll \\xi_T$}\n\nThis is the asymptotic regime for which the system is pinned and no\nexpansion like (\\ref{functional}) is available. 
In this regime the\ntemperature dependence is much less clear. In analogy with the\ncollective pinning of vortex lattices\n\\cite{feigelman_collective,nattermann_pinning}, one could\nexpect a glassy-type nonlinear $I-V$ characteristic of the form\n\\begin{equation} \\label{nonlin}\nI \\sim e^{-\\beta (1\/E)^\\mu}\n\\end{equation}\nSuch an $I-V$ characteristic would correspond to diverging barrier\nbetween metastable states as the voltage goes to zero. (\\ref{nonlin})\nimplies that the linear conductivity vanishes at a finite\ntemperature. Since this could be viewed as a phase transition (with the\nlinear conductivity as an order parameter), it is forbidden in a\nstrictly one dimensional system. In fact, in\na purely one dimensional system (in principle for $d<2$\n\\cite{nattermann_pinning}), the barriers\nshould remain finite. In that case one gets a finite linear\nconductivity, going to zero when $T\\to 0$. A possible form being\n\\begin{equation}\n\\sigma(T) \\sim e^{-E_B\/T}\n\\end{equation}\nwhere $E_B \\sim 1\/L_0$ is a typical energy scale for the barriers.\nHowever, no\ndefinite theoretical method exists to decide the issue, and an\nexperimental determination of the low temperature conductivity would\nprove extremely interesting.\n\n\\section{Discussion and conclusions}\n\\label{conclusion}\n\nWe have looked in this paper at the conductivity of a one-dimensional\nelectron gas in the presence of both disorder and long-range Coulomb\ninteractions. Due to long-range interactions, the electron gas forms a\nWigner crystal which will be pinned by impurities. As a result,\nconversely to what happens in a Luttinger liquid, the dominant\nscattering corresponds to $4 k_F$ scattering on the impurities and not\n$2 k_F$ scattering. Such a pinned Wigner crystal\nis close to classical charge density waves but important a\npriori differences lie in the presence of long-range interactions and\nnon-negligible quantum fluctuations.\n\nWe have computed the\npinning length above which the (quasi) long-range crystalline order is\ndestroyed by the disorder. Compared to the standard CDW case the pinning\nlength is increased both by the Coulomb interactions that makes the\nsystem more rigid and therefore more difficult to pin, and by the\nquantum fluctuations that make the pinning less effective. These effects\nmake the system more likely to be in the weak pinning regime.\nWe have also computed\nthe frequency dependence of the conductivity of such a\nsystem. At low frequencies, the conductivity varies as\n$\\omega^2$ if the pinning on impurities is weak. This is to be\ncontrasted to the result of\nEfros and Shklovskii \\cite{shklovskii_conductivity_coulomb}\n$\\sigma\\sim \\omega$. We believe that this difference is due to the fact that\ntheir result was derived in a different physical limit, namely when the\npinning length is much shorter than the interparticle distance. However\nsince the method we use is approximate, it could also be the consequence\nof having neglected soliton-like excitations of the phase field.\nAlthough we do expect such excitations to play little role at least when\nthe short-range repulsion is strong enough ($K$ small). More\ntheoretical, and especially more experimental investigations would prove\nextremely interesting to settle this important issue.\nFor the case of\nstrong pinning there is a pseudo-gap in the optical conductivity up to\nthe pinning frequency. 
In the pseudo gap\nthe conductivity behaves as\n$\\frac1{\\omega^2}\\ln^{1\/2}\\frac1{\\omega}e^{-1\/\\omega\\ln^{1\/2}\\frac1{\\omega}}$.\nAbove the pinning\nfrequency, for both regimes,\nthe conductivity decreases as $1\/\\omega^4\\ln^{-1\/2}(\\omega_{cr}\/\\omega)\ne^{-\\text{Cste}\\ln^{1\/2}(\\omega_{cr}\/\\omega)}$\nup to the crossover frequency $\\omega_{cr}$ above which\nthe long-range Coulomb interactions become unimportant.\nFor the parameter we took here, $\\omega_{cr}$ is also the limit of\napplicability of a one-dimensional system since $\\xi_{cr} \\sim d$, the\nwidth of the wire. However if the short-range interactions are strong\nenough ($K$ small), so that $\\xi_{cr} \\gg d$, then above $\\omega_{cr}$\na one-dimensional description will still be valid.\nOne is back to the situation of $2 k_F$ scattering in a Luttinger\nliquid which was studied in detail in \\onlinecite{giamarchi_loc}.\nThe conductivity then behaves as\n$(1\/\\omega)^{\\mu(\\omega)}$, where $\\mu(\\omega)$ is a non universal\nexponent depending on the short-range part of the interactions, and due\nto the renormalization of Luttinger liquid parameters by disorder, also\ndependent on the frequency \\cite{giamarchi_loc}. If one can neglect such\na renormalization of the exponents (e.g. for very weak disorder) then\n$\\mu = 3 - K$.\n\nThe temperature dependence can be obtained by similar methods.\nOne can\ndefine a thermal length $\\xi_T\\sim u\/T$.\nWhen $\\xi_T < L_0$, the\nfrequency and temperature dependence of the conductivity are simply\nrelated by $\\rho(\\omega=0,T) \\sim T^2 \\sigma(\\omega\\to T, T=0)$, giving\n$\\rho(T) \\sim 1\/T^2\\ln^{-1\/2}(1\/T) e^{-\\text{Cste}\\ln^{1\/2}(1\/T)}$.\nAbove the pinning length, frequency and temperature\ncan no longer be treated as equivalent cutoffs, and the conductivity is\nmuch more difficult to compute.\nOn can expect an exponentially vanishing linear conductivity, provided\nthat the barriers between metastable states remain finite. If it is not\nthe case, one should get a non-linear characteristic of the form $I \\sim\n\\text{exp}[-\\beta (1\/E)^\\alpha]$, where $\\beta =1\/T$, and $\\alpha$ is an\nexponent. Again an experimental determination of $\\sigma(T)$\nwould prove extremely useful. Note that\nalthough we considered here the conductivity, most of the results can be\napplied to the conductance of a finite wire, provided that the size of\nthe wire $L$ is larger than the thermal length $L_T$.\n\nWe know that under the application of strong enough electric fields,\na classical CDW can be depinned \\cite{littlewood_sliding_cdw,lee_depinning}.\nSimilarly we expect for a\nWigner crystal the existence of a threshold electric field $E_{th}$\nabove which a finite static conductivity appears.\nWe can make a crude estimation of this threshold field,\nmade on the simple assertion that the electrical energy at threshold must\nbe of the order of the pinning energy\n$\\omega^* \\sim \\frac{\\tilde{u}}{L_0}\\ln^{1\/2}\\frac{L_0}{d}$.\nThis energy\ncan be written as $eU$, where $U$ is the electrical potential corresponding\nto a segment of the wire of length $L_0$, that is to say $U=E_{th}L_0$.\nThus the threshold field is estimated as\n\\begin{equation}\n\\label{threshold}\nE_{th} \\sim \\frac{\\tilde{u}}{eL_0^2}\\ln^{1\/2}\\frac{L_0}{d}\n\\end{equation}\n From this we can extract an estimation of the pinning frequency $\\omega^*$.\nIndeed experimental values of such threshold fields can be found in the\nliterature\\cite{kastner_coulomb}. 
The latter reference gives a value of\nthreshold field, for a wire of length of about $10\\mu m$, of\n$E_{th}=5\\times 10^2.V.m^{-1}$. Thus (\\ref{threshold}) gives\n$L_0 \\sim 1.4 \\mu m$, which seems quite reasonable compared to the\nlength of the wire, and gives for the pinning frequency the estimation\n$\\omega^* \\sim 1\\times 10^{12}Hz$ (since the wires\nof reference \\onlinecite{kastner_coulomb} contain typically two or three\nimpurities, one is probably here in a strong pinning regime).\nData reported here are just meant as typical values. The system\nstudied in \\onlinecite{kastner_coulomb} is at the limit of\napplicability of our study at low temperatures,\nsince the wire is so short that it contains only few impurities. However,\nregardless of the pinning mechanism and number of impurities,\nour theory should give correctly the\ntemperature dependence of the conductivity (or conductance) at\ntemperatures such that $\\xi_T < n_i^{-1}$, since in this regime the\nimpurities act as independent scatterers. To make a study of the low\ntemperature\/low frequencies properties longer wires would be needed.\n\nThe above estimates, although very crude, show that\ntypical frequencies or\ntemperatures for such systems are in the range of experimentally realizable\nvalues, which gives hope for more experimental evidence for the\nexistence of such pinned Wigner crystals. In particular, measurements\nof the temperature dependence of the conductivity\/conductance would\nprove decisive. At low temperature they would provide evidence for a\npinned Wigner crystal, and at higher temperature test for the\nscattering on impurities in the presence of long-range interactions\n($\\sim 1\/T^2$ behavior). A possible crossover between a Luttinger liquid\n(dominated by short-range interactions) and the\nWigner Crystal (dominated by long-range interactions) could also in\nprinciple be seen on the temperature dependence of the conductivity.\nFrequency dependent conductivity measurement are probably much more\ndifficult to carry out, but would be also of high importance. At low\nfrequency they could serve as tests both on the nature of the pinning\nmechanism and on the effects on long-range Coulomb interactions on the\nfrequency dependence of the conductivity. For these purposes, quantum\nwires would constitute a much cleaner system than the standard CDW\ncompounds.\n\n\\acknowledgments\n\nWe would like to thank H.J. Schulz for many stimulating discussions. One\nof us (T.G.) would like to thank H. Fukuyama, M. Ogata,\nB. Spivak and V. Vinokur for interesting discussions,\nand the Aspen center for Physics where part of this work was completed.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn a myriad of robotic systems, trajectory generation plays a very important role since trajectories govern the robot actions at both joint and task space levels. One popular trajectory generation approach for robots is imitation learning \\citep{Ijspeert,Calinon2007}, where the trajectory of interest is learned from human demonstrations. Typically, the learned trajectories can be successfully reproduced or generalized by the robot under conditions that are similar to those in which the demonstrations took place. However, \nin practice, robots may also encounter unseen situations, such as obstacles or human intervention, which can be considered as new task constraints, requiring the robot to adapt its trajectory in order to perform adequately. 
\n\nIn the context of imitation learning, several algorithms such as dynamic movement primitives (DMP) \citep{Ijspeert} and probabilistic movement primitives (ProMP) \citep{Paraschos} have been developed to generate desired trajectories in various scenarios. However, due to an explicit description of the trajectory dynamics, DMP introduces many open parameters in addition to basis functions and their weighting coefficients. The same problem arises in ProMP, which fits trajectories using basis functions that are manually defined. Moreover, DMP and ProMP were formulated towards the learning of time-driven trajectories (i.e., trajectories explicitly dependent on time), where learning with high-dimensional inputs is not addressed.\n\nIn order to alleviate the modeling of trajectories via specific functions and, at the same time, facilitate the learning of trajectories driven by high-dimensional inputs, the Gaussian mixture model (GMM) \citep{Calinon2007} has been employed to model the joint distribution of input variables and demonstrated motions. Usually, GMM is complemented with Gaussian mixture regression (GMR) \citep{Cohn} to retrieve a desired trajectory. Despite the improvements with respect to other techniques, adapting learned skills with GMM\/GMR is not straightforward. Indeed, it is difficult to re-optimize a GMM to fulfill new requirements (e.g., via-points), since this usually requires re-estimating the model parameters (i.e., mixture coefficients, means and covariance matrices), which actually lie in a high-dimensional space.\n\nAn alternative solution to refine trajectories for satisfying new task constraints is reinforcement learning (RL). For instance, a variant of policy improvement with path integrals \citep{Buchli} was employed to optimize the movement pattern of DMP \citep{Stulp}. Also, natural actor-critic \citep{Peters2005} was used to optimize the centers of GMM components \citep{Guenter}. However, the time-consuming search for the optimal policy might make the application of RL approaches to on-line refinements (such as those required after perturbations) impractical. In contrast to the RL treatment, ProMP formulates the modulation of trajectories as a Gaussian conditioning problem, and therefore derives an analytical solution to adapt trajectories towards new via-points or targets.\nIt is worth pointing out that DMP can adapt trajectories towards different goals; however, via-point constraints are not addressed therein.\n\nBesides the generation of adaptive trajectories, another desired property in imitation learning is extrapolation.\nOften, human demonstrations are provided for a limited set of task instances, but the robot is expected to apply the learned movement patterns in a wider range of circumstances.\nIn this context, DMP is capable of generating trajectories starting from arbitrary locations and converging to a goal. This is achieved through a formulation based on a spring-damper system whose equilibrium corresponds to the target of the robot motion. In contrast, ProMP and GMM model the distribution of demonstrated trajectories in absolute frames rather than relative frames, which limits their extrapolation capabilities. As an extension of GMM, a task-parameterized formulation is studied in \cite{Calinon2016}, which in essence models local (or relative) trajectories and corresponding local patterns, therefore endowing GMM with better extrapolation performance.
\n\n\nWhile the aforementioned algorithms have achieved reliable performances, we aim for a solution that addresses the most crucial limitations of those approaches. In particular, we propose an algorithm that:\n\\begin{enumerate}[label=(\\roman*)]\n\\item preserves the probabilistic properties exhibited in multiple demonstrations, \n\\item deals with adaptation and superposition of trajectories,\n\\item can be generalized for extrapolations, \n\\item learns human demonstrations associated with high-dimensional inputs while alleviating the need to explicitly define basis functions. \n\\end{enumerate}\t\n\nThe main contribution of this paper is the development of a novel \\emph{kernelized movement primitive} (KMP), which allows us to address the above listed problems using a single framework. Specifically, KMP provides a non-parametric solution for imitation learning and hence alleviates the explicit representation of trajectories using basis functions, rendering fewer open parameters and easy implementation. More importantly, in light of the kernel treatment, KMP has the ability to model demonstrations associated with high-dimensional inputs, which is usually viewed as a non-trivial problem due to the curse of dimensionality. \n\nIn addition, this paper extends KMP from a task-parameterized perspective and formulates \\emph{local}-KMP, improving the extrapolation capabilities to different task situations described by a set of local coordinate frames. \nFinally, as a special case, we considers the application of KMP to the learning of time-driven trajectories, which inherits all the advantages of KMP while being suitable for time-scale modulation. For the sake of clear comparison, we list most relevant features of the state-of-the-art methods as well as our approach in Table~\\ref{table:comp:table}. Note that we consider the modulation of robot trajectories to pass through desired via-points and end-points as the adaptation capability.\n\n\n\\begin{table}[bt]\n\t\\caption {Comparison Among the State-of-the-Art and KMP}\n\t\\centering\n\t\\scalebox{0.9}{\t\n\t\t\\begin{tabular}{lcccc}\n\t\t\t\\toprule %\n\t\t\t&DMP & ProMP & GMM & Our Approach\\\\ \\toprule %\n\t\t\t$Probabilistic$ \n\t\t\t& -\n\t\t\t& \\checkmark \n\t\t\t& \\checkmark \n\t\t\t& \\checkmark \n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$Via\\!\\!-\\!\\!point$\n\t\t\t& -\n\t\t\t& \\checkmark \n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$End\\!\\!-\\!\\!point$\n\t\t\t& \\checkmark\n\t\t\t& \\checkmark \n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$Extrapolation$ \n\t\t\t& \\checkmark\n\t\t\t& -\n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$High$-$dim \\; Inputs$\n\t\t\t& - \n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}}\n\t\t\\label{table:comp:table}\n\t\\end{table}\n\t\n \nThe structure of this paper is arranged as follows. \nWe formulate imitation learning from an information-theory perspective and\npropose KMP in Section~\\ref{sec:kmp}. Subsequently, we extend KMP to deal with trajectory modulation and superposition in Section \\ref{subsec:kmp:modulation} and Section~\\ref{subsec:super:position}, respectively. \nMoreover, we introduce the concept of learning local trajectories into KMP in Section \\ref{subsec:local_frame}, augmenting its extrapolation capabilities in task space. In Section~\\ref{sec:time_kmp}, we discuss a special application of KMP to time-driven trajectories. 
We test the performance of KMP on trajectory modulation, superposition and extrapolation in Section \\ref{sec:evaluations}, where\nseveral scenarios are considered, ranging from learning handwritten letters to real robotic experiments.\nAfter that, we review related work in Section~\\ref{sec:relative:work}. An insightful discussion is provided in Section \\ref{sec:discuss}, where we elaborate on the potential of our approach and the similarities between KMP and ProMP, as well as open challenges. Finally, we close with conclusions in Section~\\ref{sec:conclusion}. \n\n\n\n\n\\section{Kernelized Representation of Movement Trajectory}\n\\label{sec:kmp}\nLearning from multiple demonstrations allows for encoding trajectory distributions and extracting important or consistent features of the task. In this section, we first illustrate a probabilistic modeling of human demonstrations (Section~\\ref{subsec:ref:traj}), and, subsequently, we exploit the resulting trajectory distribution to derive KMP (Section~\\ref{subsec:kmp}). \n\n\\subsection{Learning from Human Demonstrations}\n\\label{subsec:ref:traj}\nFormally, let us denote the set of demonstrated training data by $\\{\\{ \\vec{s}_{n,h},{\\vec{\\xi}}_{n,h}\\}_{n=1}^{N}\\}_{h=1}^{H}$ where $\\vec{s}_{n,h} \\in \\mathbb{R}^{\\mathcal{I}}$ is the input and ${\\vec{\\xi}}_{n,h} \\in \\mathbb{R}^{\\mathcal{O}}$ denotes the output. Here, the super-indexes $\\mathcal{I}$, $\\mathcal{O}$, $H$ and $N$ respectively represent the dimensionality of the input and output space, the number of demonstrations, and the trajectory length. Note that a probabilistic encoding of the demonstrations allows the input $\\vec{s}$ and output ${\\vec{\\xi}}$ to represent different types of variables. For instance, by considering $\\vec{s}$ as the position of the robot and ${\\vec{\\xi}}$ as its velocity, the representation becomes an autonomous system. Alternatively, if $\\vec{s}$ and ${\\vec{\\xi}}$ respectively represent time and position, the resulting encoding corresponds to a time-driven trajectory. \n\nIn order to capture the probabilistic distribution of demonstrations, a number of algorithms can be employed, such as GMM \\citep{Calinon2007}, hidden Markov models \\citep{Leonel13}, and even a single Gaussian distribution \\citep{Englert,Osa}, which differ in the type of information that is extracted from the demonstrations. As an example, let us exploit GMM as the model used to encode the training data. More specifically, GMM is employed to estimate the joint probability distribution $\\mathcal{P}(\\vec{s},\\vec{\\xi})$ from demonstrations, i.e., \n\\begin{equation}\n\\left[\\begin{matrix}\n\\vec{s}\\\\\\vec{\\xi}\n\\end{matrix}\\right] \\sim \\sum_{c=1}^{C} \\pi_c \\mathcal{N}(\\vec{\\mu}_c,\\vec{\\Sigma}_c),\n\\label{equ:gmm}\n\\end{equation}\nwhere $\\pi_c$, $\\vec{\\mu}_c$ and $\\vec{\\Sigma}_c$ respectively represent the prior probability, mean and covariance of the $c$-th Gaussian component, while $C$ denotes the number of Gaussian components.\n\nFurthermore, a probabilistic \\emph{reference trajectory} $\\{\\hat{\\vec{\\xi}}_{n}\\}_{n=1}^{N}$ can be retrieved via GMR \\citep{Cohn,Calinon2016}, where each point ${\\hat{\\vec{\\xi}}}_{n}$ associated with $\\vec{s}_n$ is described by a conditional probability distribution with mean $\\hat{\\vec{\\mu}}_n$ and covariance $\\hat{\\vec{\\Sigma}}_n$, i.e, ${\\hat{\\vec{\\xi}}}_{n}|\\vec{s}_n\\sim \\mathcal{N}(\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n})$ (see Appendix~\\ref{app:gmr} for details). 
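As an illustration of this retrieval step, the sketch below implements the standard GMR conditioning in Python/NumPy. It is only meant as a minimal example: it assumes that the GMM parameters (priors $\pi_c$, means $\vec{\mu}_c$ and covariances $\vec{\Sigma}_c$) have already been estimated, e.g., by expectation-maximization, and all variable names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def gmr(s, priors, means, covs, dim_in):
    """Condition a joint GMM on the input s, returning the
    conditional mean and covariance of the output (standard GMR)."""
    C = len(priors)
    mu_c, sig_c, h = [], [], np.zeros(C)
    for c in range(C):
        mu_s, mu_x = means[c][:dim_in], means[c][dim_in:]
        S_ss = covs[c][:dim_in, :dim_in]
        S_xs = covs[c][dim_in:, :dim_in]
        S_xx = covs[c][dim_in:, dim_in:]
        gain = S_xs @ np.linalg.inv(S_ss)
        mu_c.append(mu_x + gain @ (s - mu_s))          # conditional mean
        sig_c.append(S_xx - gain @ S_xs.T)             # conditional covariance
        h[c] = priors[c] * multivariate_normal.pdf(s, mu_s, S_ss)
    h /= h.sum()                                       # responsibilities
    mu = sum(h[c] * mu_c[c] for c in range(C))
    sig = sum(h[c] * (sig_c[c] + np.outer(mu_c[c], mu_c[c]))
              for c in range(C)) - np.outer(mu, mu)    # moment matching
    return mu, sig
\end{verbatim}
Evaluating such a function at every input $\vec{s}_n$ yields reference means and covariances of the form used in the remainder of the paper (the exact expressions adopted here are those of Appendix~\ref{app:gmr}).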
\nThis probabilistic reference trajectory encapsulates the variability in the demonstrations as well as the correlations among outputs. \nWe take advantage of the probabilistic reference trajectory to derive KMP.\n\n\n\n\\subsection{Kernelized Movement Primitive (KMP)}\n\\label{subsec:kmp}\n\nWe start the derivation of KMP by considering a \\emph{parametric trajectory} \n\\begin{equation}\n\\vec{\\xi}(\\vec{s})\n= \\vec{\\Theta}(\\vec{s})^{\\mathsf{T}} \\vec{w}\n\\label{equ:linear:form}\n\\end{equation}\nwith the matrix $\\vec{\\Theta}(\\vec{s})\\in \\mathbb{R}^{B\\mathcal{O} \\times \\mathcal{O}}$ defined as follows\n\\begin{equation}\n\\vec{\\Theta}(\\vec{s})=\\left[\\begin{matrix} \n\\vec{\\varphi}(\\vec{s}) & \\vec{0} & \\cdots &\\vec{0} \\\\\n\\vec{0} & \\vec{\\varphi}(\\vec{s}) & \\cdots &\\vec{0} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\vec{0} & \\vec{0} & \\cdots & \\vec{\\varphi}(\\vec{s})\\\\\n\\end{matrix}\\right] ,\n\\label{equ:basis:function}\n\\end{equation}\nand the weight vector $\\vec{w} \\in \\mathbb{R}^{B\\mathcal{O}}$, where $\\vec{\\varphi}(\\vec{s})\\in \\mathbb{R}^{B}$ denotes $B$-dimensional basis functions\\footnote{The treatment of fitting trajectories by using basis functions has also been studied in DMP \\citep{Ijspeert} and ProMP \\citep{Paraschos}.}.\nFurthermore, we assume that the weight vector $\\vec{w}$ is normally distributed, i.e., ${\\vec{w}\\sim \\mathcal{N}(\\vec{\\mu}_w,\\vec{\\Sigma}_w)}$, where the mean $\\vec{\\mu}_w$ and the covariance $\\vec{\\Sigma}_w$ are \\emph{unknown}. Therefore, the parametric trajectory satisfies \n\\begin{equation}\n\\vec{\\xi}(\\vec{s}) \\sim \\mathcal{N} \\left(\\vec{\\Theta}(\\vec{s})^{\\mathsf{T}} \\vec{\\mu}_w, \\vec{\\Theta}(\\vec{s})^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}) \\right).\n\\label{equ:distribution:para:traj}\n\\end{equation} \nNote that our goal is to imitate the probabilistic reference trajectory $\\{\\hat{\\vec{\\xi}}_{n}\\}_{n=1}^{N}$, thus we aim to match the parametric trajectory distribution formulated by (\\ref{equ:distribution:para:traj}) with the reference trajectory distribution. In order to address this problem, we propose to minimize the \\emph{Kullback-Leibler} divergence (KL-divergence) \\citep{Kullback,Rasmussen} between both trajectory distributions (Section~\\ref{subsubsec:kl}). Subsequently, we derive optimal solutions for both $\\vec{\\mu}_w$ and $\\vec{\\Sigma}_w$, and formulate KMP by using the kernel trick in Sections~\\ref{subsubsec:optimal:mean:kmp} and \\ref{subsubsec:optimal:var:kmp}, respectively.\n\n\n\\subsubsection{Imitation Learning Based on Information Theory:}\n\\label{subsubsec:kl}\n\nSince the well-known KL-divergence can be used to measure the distance between two probability distributions, we here exploit it to optimize the parametric trajectory distribution so that it matches the reference trajectory distribution.\nFrom the perspective of information transmission, the minimization of KL-divergence guarantees minimal information-loss in the process of imitation learning. \n\nFormally, we consider the minimization of the objective function\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}(\\vec{\\mu}_w,\\!\\vec{\\Sigma}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N}\\!\\! 
D_{KL} \\biggl ( \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)\n|| \\mathcal{P}_{\\mathbf{r}} (\\vec{\\xi}|\\vec{s}_n) \\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini:temp}\n\\end{equation}\nwhere \n$\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)$ represents the probability distribution of the parametric trajectory (\\ref{equ:distribution:para:traj}) given the input $\\vec{s}_n$, i.e.,\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)\\!= \\!\\mathcal{N} \\left(\\vec{\\xi}|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\! \\vec{\\mu}_w, \\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n) \\right),\n\\label{equ:def:prob:para}\n\\end{equation}\n$\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n)$ corresponds to the probability distribution of the reference trajectory associated with $\\vec{s}_n$ (as described in Section~\\ref{subsec:ref:traj}), namely\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n)=\\mathcal{N} (\\vec{\\xi}|\\hat{\\vec{\\mu}}_n,\\hat{\\vec{\\Sigma}}_n).\n\\label{equ:def:prob:ref}\n\\end{equation}\n$D_{KL}(\\cdot||\\cdot)$ denotes the KL-divergence between the probability distributions $\\mathcal{P}_{\\mathbf{p}}$ and $\\mathcal{P}_{\\mathbf{r}}$, which is defined by \n\\begin{equation}\n\\begin{aligned}\nD_{KL}(\\mathcal{P}_{\\mathbf{p}}&(\\vec{\\xi}|\\vec{s}_n)||\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n))\\\\\n&=\\int \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n) \\log \\frac{\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)}{\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n)} d\\vec{\\xi}.\n\\end{aligned}\n\\label{equ:kl:def}\n\\end{equation}\nBy using the properties of KL-divergence between two Gaussian distributions, we rewrite (\\ref{equ:kl:cost:ini:temp}) as \t\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}(\\vec{\\mu}_w,\\! \\vec{\\Sigma}_w)\\! \\! =\\sum_{n=1}^{N}\\! \\frac{1}{2} \\biggl ( \n\\log|\\hat{\\vec{\\Sigma}}_n|\n\\!-\\!\\log|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n-\\mathcal{O} \n+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_n^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n) )\\\\\n+(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\! \\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})^{\\mathsf{T}} \\! \\hat{\\vec{\\Sigma}}_n^{-1}\\! (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\! \\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})\n\\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini}\n\\end{equation}\nwhere $| \\, \\cdot \\, |$ and $\\mathrm{Tr}(\\cdot)$ denote the determinant and trace of a matrix, respectively. \n\nAfter removing the coefficient `$\\frac{1}{2}$', the constant terms $\\log|\\hat{\\vec{\\Sigma}}_n|$ and $\\mathcal{O}$, this objective function (\\ref{equ:kl:cost:ini}) can be further decomposed into a \\emph{mean minimization subproblem} and a \\emph{covariance minimization subproblem}. The former is defined by minimizing\n\\begin{equation}\n{J}_{ini}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w \\!- \\!\\hat{\\vec{\\mu}}_{n})^{\\mathsf{T}} \\hat{\\vec{\\Sigma}}_n^{-1} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w \\!- \\! 
\\hat{\\vec{\\mu}}_{n})\n\\label{equ:kl:mean:cost}\n\\end{equation}\nand the latter is written as the minimization of\n\\begin{equation}\n\\begin{aligned}\n{J}_{ini}(\\vec{\\Sigma}_w)=\\sum_{n=1}^{N} \\Big(&\n-\\log|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_n^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)\n\\end{aligned}.\n\\label{equ:kl:var:cost}\n\\end{equation}\nIn the following two sections,\nwe separately solve the mean and covariance subproblems, resulting in the KMP formulation.\n\n\n\n\\subsubsection{Mean Prediction of KMP:}\n\\label{subsubsec:optimal:mean:kmp}\n\nIn contrast to kernel ridge regression (KRR) \\citep{Saunders, Murphy}, we introduce a penalty term $||\\vec{\\mu}_w||^2$ into the mean minimization subproblem (\\ref{equ:kl:mean:cost}) so as to circumvent the over-fitting problem. Thus, the new mean minimization subproblem can be re-rewritten as\n\\begin{equation}\n\\begin{aligned}\n\t{J}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\!\\sum_{n=1}^{N}\\! (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\!\\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})^{\\mathsf{T}} \\hat{\\vec{\\Sigma}}_n^{-1} (\\vec{\\Theta}&(\\vec{s}_n)^{\\mathsf{T}} \\!\\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})\\\\\n\t&+\\lambda \\vec{\\mu}_w^{\\mathsf{T}}\\vec{\\mu}_w,\n\\end{aligned}\n\t\\label{equ:kl:mean:cost:penalty}\n\\end{equation}\nwhere $\\lambda>0$.\n\nThe cost function (\\ref{equ:kl:mean:cost:penalty}) resembles a weighted least squares formulation, except for the penalty term $\\lambda \\vec{\\mu}_w^{\\mathsf{T}}\\vec{\\mu}_w$. Also, it is similar to the common quadratic cost function minimized in KRR, where $\\hat{\\vec{\\Sigma}}_n^{-1}=\\vec{I}_{\\mathcal{O}}$. However, the variability of the demonstrations encapsulated in $\\hat{\\vec{\\Sigma}}_n$ is introduced in (\\ref{equ:kl:mean:cost:penalty}) as an importance measure associated to each trajectory datapoint, which can be understood as relaxing or reinforcing the optimization for a particular datapoint. In other words, this covariance-weighted cost function permits large deviations from the reference trajectory points with high covariances, while demanding to be close when the associated covariance is low. 
\n\n\n\nBy taking advantage of the dual transformation of KRR, \nthe optimal solution ${\\vec{\\mu}}_w^{*}$ of (\\ref{equ:kl:mean:cost:penalty}) can be derived as (see \\cite{Murphy,Kober2011} for details)\n\\begin{equation}\n{\\vec{\\mu}}_w^{*}=\\vec{\\Phi} ( \\vec{\\Phi}^{\\mathsf{T}} \\vec{\\Phi} +\\lambda \\vec{\\Sigma} )^{-1} {\\vec{\\mu}},\n\\label{equ:kmp:muw}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{aligned}\n\\vec{\\Phi}&=[\n\\vec{\\Theta}(\\vec{s}_1) \\ \\vec{\\Theta}(\\vec{s}_2) \\ \\cdots \\ \\vec{\\Theta}(\\vec{s}_N)\n],\\\\\n\\vec{\\Sigma}&=\\mathrm{blockdiag}(\\hat{\\vec{\\Sigma}}_1, \\ \\hat{\\vec{\\Sigma}}_2, \\ \\ldots, \\ \\hat{\\vec{\\Sigma}}_N), \\quad \\\\\n{\\vec{\\mu}}&=[\n\\hat{\\vec{\\mu}}_1^{\\mathsf{T}} \\ \\hat{\\vec{\\mu}}_2^{\\mathsf{T}} \\ \\cdots \\ \\hat{\\vec{\\mu}}_N^{\\mathsf{T}}\n]^{\\mathsf{T}}.\n\\end{aligned}\n\\label{equ:notations:define}\n\\end{equation}\nSubsequently, \nfor a query $\\vec{s}^{*}$ (i.e., new input), its corresponding output (expected value) is computed as\n\\begin{equation}\n\\mathbb{E}(\\vec{\\xi}(\\vec{s}^{*})) \n\\!\\!=\\!\\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}}{\\vec{\\mu}}_w^{*}\\!\\!=\\!\\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}}\\vec{\\Phi} ( \\vec{\\Phi}^{\\mathsf{T}} \\vec{\\Phi} \\!+\\!\\lambda \\vec{\\Sigma} )^{-1} {\\vec{\\mu}}.\n\t\\label{equ:kmp:mean:temp}\n\\end{equation}\nIn order to facilitate the application of (\\ref{equ:kmp:mean:temp}) (particularly for high-dimensional $\\vec{s}$), we propose to kernelize (\\ref{equ:kmp:mean:temp}) so as to avoid the explicit definition of basis functions. Let us define the inner product for $\\vec{\\varphi}(\\vec{s}_i)$ and $\\vec{\\varphi}(\\vec{s}_j)$ as\n\\begin{equation}\n\\vec{\\varphi}(\\vec{s}_i)^{\\mathsf{T}} \\vec{\\varphi}(\\vec{s}_j)=k(\\vec{s}_i,\\vec{s}_j),\n\\label{equ:single:basis:product}\n\\end{equation} \nwhere $k(\\cdot,\\cdot)$ is a kernel function. 
Then, based on (\\ref{equ:basis:function}) and (\\ref{equ:single:basis:product}), we have \n\\begin{equation}\n\\vec{\\Theta}({\\vec{s}_i})^{\\mathsf{T}}\\vec{\\Theta}({\\vec{s}_j})\n=\\left[\\begin{matrix} \nk(\\vec{s}_i, \\vec{s}_j) & \\vec{0} & \\cdots &\\vec{0} \\\\\n\\vec{0} & k(\\vec{s}_i, \\vec{s}_j) & \\cdots &\\vec{0} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\vec{0} & \\vec{0} & \\cdots & k(\\vec{s}_i, \\vec{s}_j)\\\\\n\\end{matrix}\\right],\n\\label{equ:basis:product}\n\\end{equation}\nwhich can be further rewritten as \na kernel matrix\n\\begin{equation}\n\t\\vec{k}(\\vec{s}_i, \\ \\vec{s}_j)= \\vec{\\Theta}({\\vec{s}_i})^{\\mathsf{T}}\\vec{\\Theta}({\\vec{s}_j})=\n\t k(\\vec{s}_i, \\vec{s}_j)\\vec{I}_{\\mathcal{O}},\n\t\\label{equ:kernel:matrix}\n\\end{equation} \nwhere\n$\\vec{I}_{\\mathcal{O}}$ is the ${\\mathcal{O}}$-dimensional identity matrix.\nAlso, let us denote the matrix $\\vec{{K}}$ as\n\\begin{equation}\n\\vec{K}\n=\\left[\\begin{matrix} \n\\vec{k}(\\vec{s}_1, \\vec{s}_1) & \\vec{k}(\\vec{s}_1, \\vec{s}_2) & \\cdots &\\vec{k}(\\vec{s}_1, \\vec{s}_N) \\\\\n\\vec{k}(\\vec{s}_2, \\vec{s}_1) & \\vec{k}(\\vec{s}_2, \\vec{s}_2) & \\cdots &\\vec{k}(\\vec{s}_2, \\vec{s}_N) \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\vec{k}(\\vec{s}_N, \\vec{s}_1) & \\vec{k}(\\vec{s}_N, \\vec{s}_2) & \\cdots &\\vec{k}(\\vec{s}_N, \\vec{s}_N) \\\\\n\\end{matrix}\\right],\n\\label{equ:K:matrix}\n\\end{equation}\nand write the matrix $\\vec{k}^{*}$ as\n\\begin{equation}\n\\vec{k}^{*}=[\\vec{k}(\\vec{s}^{*}, \\vec{s}_{1}) \\; \\vec{k}(\\vec{s}^{*}, \\vec{s}_{2}) \\; \\cdots \\; \\vec{k}(\\vec{s}^{*}, \\vec{s}_{N})],\n\\label{equ:k:star}\n\\end{equation}\nthen the prediction in (\\ref{equ:kmp:mean:temp}) becomes\n\\begin{equation}\n\\mathbb{E}(\\vec{\\xi}(\\vec{s}^{*}))\n=\\ \\vec{{k}}^{*} (\\vec{{K}}+\\lambda \\vec{\\Sigma})^{-1} {\\vec{\\mu}}.\n\t\\label{equ:kmp:mean}\n\\end{equation}\n\nNote that a similar result was derived in the context of reinforcement learning \\citep{Kober2011} (called cost regularized kernel regression, CrKR). \nIn contrast to the mean prediction of KMP, CrKR models target components separately without considering their correlations, i.e., a diagonal weighted matrix ${\\vec{R}_n=r_n \\vec{I}_{\\mathcal{O}}}$ \nis used instead of the full covariance matrix $\\hat{\\vec{\\Sigma}}_n^{-1}$ from (\\ref{equ:kl:mean:cost:penalty}). Furthermore, for the case in which $\\hat{\\vec{\\Sigma}}_n=\\vec{I}_{\\mathcal{O}}$, the prediction in (\\ref{equ:kmp:mean}) is identical to the mean of the Gaussian process regression (GPR) \\citep{Rasmussen}.\n\n\nIt is worth pointing out that the initial mean minimization subproblem (\\ref{equ:kl:mean:cost}) is essentially equivalent to the problem of maximizing the posterior\n$\\prod_{n=1}^{N} \\mathcal{P}(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w|\\hat{\\vec{\\mu}}_n,\\hat{\\vec{\\Sigma}}_n)$, please refer to Appendix~\\ref{app:mean:dual} for the proof. Thus, the optimal solution ${\\vec{\\mu}}_w^{*}$ can be viewed as the best estimation given the observed reference trajectory distribution.\n\n\n\n\\subsubsection{Covariance Prediction of KMP:}\n\\label{subsubsec:optimal:var:kmp}\n\nSimilar to the treatment in (\\ref{equ:kl:mean:cost:penalty}), we propose to add a penalty term into the covariance minimization subproblem (\\ref{equ:kl:var:cost}) in order to bound the covariance $\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)$. 
On the basis of the properties of the \\emph{Rayleigh quotient},\nthe penalty term could be defined by the largest eigenvalue of $\\vec{\\Sigma}_w$. For the sake of easy derivation, we impose a relaxed penalty term $\\mathrm{Tr}(\\vec{\\Sigma}_w)$ which is larger than the largest eigenvalue of $\\vec{\\Sigma}_w$ since $\\vec{\\Sigma}_w$ is positive definite. \nTherefore, the new covariance minimization subproblem becomes\n\\begin{equation}\n\\begin{aligned}\n{J}(\\vec{\\Sigma}_w)&=\\sum_{n=1}^{N} \\Big(-\\log |\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_n^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)+\\lambda \\mathrm{Tr}(\\vec{\\Sigma}_w)\n\\end{aligned}.\n\\label{equ:kl:var:cost:penalty}\n\\end{equation}\n\nBy computing the derivative of (\\ref{equ:kl:var:cost:penalty}) with respect to $\\vec{\\Sigma}_w$\nand setting it to 0, we have\\footnote{The following results on matrix derivatives \\citep{Petersen} are used: $\\frac{\\partial |\\vec{A}\\vec{X}\\vec{B}|}\n\t{\\partial \\vec{X}}=|\\vec{A}\\vec{X}\\vec{B}|(\\vec{X}^{\\mathsf{T}})^{-1}$ and ${\\frac{\\partial}{\\partial \\vec{X}} \\mathrm{Tr}(\\vec{A}\\vec{X}\\vec{B})=\\vec{A}^{\\mathsf{T}} \\vec{B}^{\\mathsf{T}}}$.} \n\\begin{equation}\n\\begin{aligned}\n\\sum_{n=1}^{N} \\Big(\n-\\vec{\\Sigma}_w^{-1}+ \\vec{\\Theta}(\\vec{s}_n) \\hat{\\vec{\\Sigma}}_n^{-1} \\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\n\\Big) + \\lambda \\vec{I}=0\n\\end{aligned}.\n\\label{equ:kl:var:cost:derivative}\n\\end{equation}\nFurthermore, we can rewrite (\\ref{equ:kl:var:cost:derivative}) in a compact form by using $\\vec{\\Phi}$ and $\\vec{\\Sigma}$ from (\\ref{equ:notations:define}) and derive the optimal solution $\\vec{\\Sigma}_w^{*}$ as follows\n\\begin{equation}\n\\vec{\\Sigma}_w^{*}=N(\\vec{\\Phi}\\vec{\\Sigma}^{-1}\\vec{\\Phi}^{\\mathsf{T}}+\\lambda \\vec{I})^{-1}.\n\\end{equation}\nThis solution resembles the covariance of weighted least square estimation, except for the factor `$N$' and the regularized term $\\lambda \\vec{I}$. \n\nAccording to the $\\emph{Woodbury identity}$\\footnote{$(\\vec{A}+\\vec{C}\\vec{B}\\vec{C}^{\\mathsf{T}})^{-1}=\\vec{A}^{-1}\\!-\\!\\vec{A}^{-1}\\vec{C}(\\vec{B}^{-1}+\\vec{C}^{\\mathsf{T}}\\vec{A}^{-1}\\vec{C})^{-1}\\vec{C}^{\\mathsf{T}}\\vec{A}^{-1}$.},\nwe can determine the covariance of $\\vec{\\xi}(\\vec{s}^{*})$ for a query $\\vec{s}^{*}$ as\n\\begin{equation}\n\\begin{aligned}\n\\mathbb{D}(\\vec{\\xi}(\\vec{s}^{*}))&= \\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}}\\vec{\\Sigma}_w^{*} \\vec{\\Theta}(\\vec{s}^{*})\\\\\n&=N \\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}} (\\vec{\\Phi}\\vec{\\Sigma}^{-1}\\vec{\\Phi}^{\\mathsf{T}}+\\lambda \\vec{I})^{-1} \\vec{\\Theta}(\\vec{s}^{*})\\\\\n&=\\frac{N}{\\lambda} \\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}} \n\\left(\\vec{I} \\!- \\! 
\\vec{\\Phi}(\\vec{\\Phi}^{\\mathsf{T}}\\vec{\\Phi}\\!+\\!\\lambda\\vec{\\Sigma}))^{-1}\\vec{\\Phi}^{\\mathsf{T}} \\right)\n\\vec{\\Theta}(\\vec{s}^{*}).\n\\end{aligned}\n\\label{equ:kmp:var:temp}\n\\end{equation}\nRecall that we have defined the kernel matrix in (\\ref{equ:kernel:matrix})-(\\ref{equ:K:matrix}), and hence the covariance of $\\vec{\\xi}(\\vec{s}^{*})$ becomes\n\\begin{equation}\n\\mathbb{D}(\\vec{\\xi}(\\vec{s}^{*}))=\\frac{N}{\\lambda} \\left(\\vec{k}(\\vec{s}^{*}, \\vec{s}^{*}) -\\vec{k}^{*}(\\vec{K}+\\lambda \\vec{\\Sigma})^{-1} \\vec{k}^{*\\mathsf{T}}\\right).\n\\label{equ:kmp:var}\n\\end{equation}\nIn addition to the factor `$\\frac{N}{\\lambda}$', the covariance formula in (\\ref{equ:kmp:var}) differs from the covariances defined in GPR and CrKR in two essential aspects. \nFirst, the variability $\\vec{\\Sigma}$ extracted from demonstrations (as defined in (\\ref{equ:notations:define})) is used in the term $(\\vec{K}+\\lambda \\vec{\\Sigma})^{-1}$, while the identity matrix and the diagonal weighted matrix are used in GPR and CrKR, respectively. Second, in contrast to the diagonal covariances predicted by GPR and CrKR, KMP predicts a full matrix covariance which allows for predicting the correlations between output components. \nFor the purpose of convenient descriptions in the following discussion, we refer to\n${\\vec{D}=\\{\\vec{s}_n,\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n}\\}_{n=1}^{N}}$ as the \\emph{reference database}. The prediction of both the mean and covariance using KMP is summarized in Algorithm~\\ref{algorithm:kmp}.\n\n\n\n\\begin{algorithm}[t]\n\t\\caption{\\emph{Kernelized Movement Primitive}}\n\t\\begin{algorithmic}[1]\n\\State{\\textbf{{Initialization}}}\n\\Statex{- Define the kernel $k(\\cdot,\\cdot)$ and set the factor $\\lambda$.} \n\\State{\\textbf{{Learning from demonstrations}}} (see Section~\\ref{subsec:ref:traj})\n\\Statex{- Collect demonstrations $\\{ \\{ \\vec{s}_{n,h},\\vec{\\xi}_{n,h}\\}_{n=1}^{N} \\}_{h=1}^{H}$}.\n\\Statex{- Extract the reference database $\\{\\vec{s}_n,\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n}\\}_{n=1}^{N}$.}\n\\State{\\textbf{{Prediction using KMP}}} (see Section~\\ref{subsec:kmp})\n\\Statex{- {\\emph{Input}}: query $\\vec{s}^{*}$.}\n\\Statex{- Calculate $\\vec{\\Sigma}$, $\\vec{\\mu}$, $\\vec{K}$ and $\\vec{k}^{*}$ using (\\ref{equ:notations:define}), (\\ref{equ:K:matrix}) and (\\ref{equ:k:star}).}\n\\Statex{- {\\emph{Output}}: $\\mathbb{E}(\\vec{\\xi}(\\vec{s}^{*}))\n=\\ \\vec{{k}}^{*} (\\vec{{K}}+\\lambda \\vec{\\Sigma})^{-1} {\\vec{\\mu}}$ \\quad\\,and\n\\Statex{$\\mathbb{D}(\\vec{\\xi}(\\vec{s}^{*}))=\\frac{N}{\\lambda} \\left(\\vec{k}(\\vec{s}^{*}, \\vec{s}^{*}) -\\vec{k}^{*}(\\vec{K}+\\lambda \\vec{\\Sigma})^{-1} \\vec{k}^{*\\mathsf{T}}\\right)$}. } \n\t\\end{algorithmic}\n\t\\label{algorithm:kmp}\n\\end{algorithm}\n\n\\section{Extensions of Kernelized Movement Primitive} \n\\label{sec:tp-kmp}\nAs previously explained, human demonstrations can be used to retrieve a distribution of trajectories that the robot exploits to carry out a specific task. However, in dynamic and unstructured environments the robot also needs to adapt its motions when required. \nFor example, if an obstacle suddenly occupies an area that intersects the robot motion path, the robot is required to modulate its movement trajectory so that collisions are avoided. A similar modulation is necessary (e.g., in pick-and-place and reaching tasks) when the target varies its location during the task execution. 
The trajectory modulation problem will be addressed in Section~\\ref{subsec:kmp:modulation} by exploiting the proposed KMP formulation. \n\n\nBesides the modulation of a single trajectory, another challenging problem arises when the robot is given a set of candidate trajectories to follow, which represent feasible solutions for the task. Each of them may be assigned with a different priority (extracted, for example, from the task constraint). These candidate trajectories can be exploited to compute a mixed trajectory so as to balance all the feasible solutions according to their priorities. We cope with the superposition problem in Section~\\ref{subsec:super:position} by using KMP.\n\n\nFinally, human demonstrations are often provided in a relatively convenient task space. However, the robot might be expected to apply the learned skill to a broader domain. In order to address this problem, we extend KMP \nby using local coordinate systems and affine transformations as in \\cite{Calinon2016}, which allows KMP to exhibit better extrapolation capabilities (Section \\ref{subsec:local_frame}).\n\n\\subsection{Trajectory Modulation Using KMP}\n\\label{subsec:kmp:modulation}\n\nWe here consider trajectory modulation in terms of adapting trajectories to pass through new via-points\/end-points.\nFormally, let us define $M$ new desired points as $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\xi}}_m\\}_{m=1}^{M}$ associated with conditional probability distributions $\\bar{\\vec{\\xi}}_m | \\bar{\\vec{s}}_{m} \\sim \\mathcal{N} ( \\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m} )$.\nThese conditional distributions can be designed based on new task requirements. For instance, if there are new via-points that the robot needs to pass through with high precision, small covariances $\\bar{\\vec{\\Sigma}}_{m}$ are assigned. On the contrary, for via-points that allow for large tracking errors, high covariances can be set.\n\nIn order to consider both new desired points and the reference trajectory distribution simultaneously, we reformulate the original objective function defined in (\\ref{equ:kl:cost:ini:temp}) as\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}^{U}(\\vec{\\mu}_w,&\\vec{\\Sigma}_w)\\!\\!=\\!\\!\n\\sum_{n=1}^{N} \\!D_{KL}\\! \\biggl (\\! \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)\n|| \\mathcal{P}_{\\mathbf{r}} (\\vec{\\xi}|\\vec{s}_n) \\!\\biggr)\\\\ &+\\sum_{m=1}^{M}D_{KL} \\biggl ( \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\bar{\\vec{s}}_m) || \\mathcal{P}_{\\mathbf{d}}(\\vec{\\xi}|\\bar{\\vec{s}}_m) \\biggr)\n\\end{aligned}\n\\label{equ:kl:cost:ini:modulate}\n\\end{equation}\nwith\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\bar{\\vec{s}}_m)\\!\\!=\\!\\! \\mathcal{N}\\!\\! \\left(\\vec{\\xi}|\\vec{\\Theta}(\\bar{\\vec{s}}_m)^{\\mathsf{T}}\\! \\vec{\\mu}_w, \\!\\vec{\\Theta}(\\bar{\\vec{s}}_m)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\bar{\\vec{s}}_m) \\right)\n\\label{equ:def:prob:para:des}\n\\end{equation}\nand\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{d}}(\\vec{\\xi}|\\bar{\\vec{s}}_m)=\\mathcal{N} (\\vec{\\xi}|\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m).\n\\label{equ:def:prob:ref:des}\n\\end{equation}\nLet ${\\bar{\\vec{D}}=\\{\\bar{\\vec{s}}_{m}, \\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m}\\}_{m=1}^{M}}$ denote the \\emph{desired database}. 
We can concatenate the reference database $\\vec{D}$ with the desired database $\\bar{\\vec{D}}$ and generate an \\emph{extended reference database} $\\{\\vec{s}_{i}^{U},\\vec{\\mu}_{i}^{U},\\vec{\\Sigma}_{i}^{U}\\}_{i=1}^{N+M}$, \nwhich is defined as follows\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n\\vec{s}_{i}^{U}&\\!=\\!\\vec{s}_{i}, \\quad\\,\\, \\vec{\\mu}_{i}^{U}\\!=\\!\\hat{\\vec{\\mu}}_{i}, \\;\\;\\;\\,\\, \\vec{\\Sigma}_{i}^{U}\\!=\\!\\hat{\\vec{\\Sigma}}_{i}, \\;\\;\\,\\,\\,\\,\\, \\mathrm{if} \\;\\;\\; 1 \\leq i \\leq N\\\\\n\\vec{s}_{i}^{U}&\\!=\\!\\bar{\\vec{s}}_{i-N}, \\vec{\\mu}_{i}^{U}\\!=\\!\\bar{\\vec{\\mu}}_{i-N}, \\!\\vec{\\Sigma}_{i}^{U}\\!=\\!\\bar{\\vec{\\Sigma}}_{i-N}, \\mathrm{if} \\;N \\!< i\\!\\leq\\! N\\!\\!+\\!\\!M\\\\\n\\end{aligned}\\right. \\!,\n\\label{equ:combine:ref:desired}\n\\end{equation} \nThen, the objective function (\\ref{equ:kl:cost:ini:modulate}) can be written as follows\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}^{U}(\\vec{\\mu}_w,\\!\\vec{\\Sigma}_w)\\!\\!=\\!\\!\\!\\sum_{i=1}^{M+N}\\!\\!\\! D_{KL} \\biggl (\\! \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_{i}^{U}) || \\mathcal{P}_{\\mathbf{u}}(\\vec{\\xi}|\\vec{s}_{i}^{U}) \\!\\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini:modulate:update:ref}\n\\end{equation}\nwith\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_{i}^{U})\\!\\!=\\!\\! \\mathcal{N}\\!\\! \\left(\\vec{\\xi}|\\vec{\\Theta}(\\vec{s}_{i}^{U})^{\\mathsf{T}}\\! \\vec{\\mu}_w, \\!\\vec{\\Theta}(\\vec{s}_{i}^{U})^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_{i}^{U}) \\right)\n\\label{equ:def:prob:para:extend}\n\\end{equation}\nand\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{u}}(\\vec{\\xi}|\\vec{s}_{i}^{U})=\\mathcal{N} (\\vec{\\xi}|\\vec{\\mu}_i^{U},\\vec{\\Sigma}_i^{U}).\n\\label{equ:def:prob:ref:extend}\n\\end{equation}\nNote that (\\ref{equ:kl:cost:ini:modulate:update:ref}) has the same form as (\\ref{equ:kl:cost:ini:temp}). Hence, for the problem of enforcing trajectories to pass through desired via-points\/end-points, we can first concatenate the original reference database with the desired database through (\\ref{equ:combine:ref:desired}) and, subsequently, with the extended reference database, we follow Algorithm~\\ref{algorithm:kmp} to predict the mean and covariance for new queries $\\vec{s}^{*}$.\n\nIt is worth pointing out that there might exist conflicts between the desired database and the original reference database. \nIn order to illustrate this issue clearly, let us consider an extreme case: if there exist \na new input $\\bar{\\vec{s}}_m=\\vec{s}_n$, but $\\bar{\\vec{\\mu}}_m$ is distant from $\\hat{\\vec{\\mu}}_n$ while $\\bar{\\vec{\\Sigma}}_m$ and $\\hat{\\vec{\\Sigma}}_n$ are nearly the same, then the optimal solution of (\\ref{equ:kl:cost:ini:modulate:update:ref}) corresponding to the query $\\vec{s}_n$ can only be a trade-off between $\\bar{\\vec{\\mu}}_m$ and $\\hat{\\vec{\\mu}}_n$.\nIn the context of trajectory modulation using via-points\/end-points, it is natural to consider new desired points with the highest preference. Thus, we propose to update the reference database from the perspective of reducing the above mentioned conflicts while maintaining most of datapoints in the reference database. 
\nThe update procedure is carried out as follows.\nFor each datapoint $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}$ in the desired database, \nwe first compare its input $\\bar{\\vec{s}}_{m}$ with the inputs $\\{\\vec{s}_{n}\\}_{n=1}^{N}$ of the reference database so as to find the nearest datapoint $\\{\\vec{s}_{r},\\hat{\\vec{\\mu}}_r,\\hat{\\vec{\\Sigma}}_r\\}$ that\nsatisfies ${d(\\bar{\\vec{s}}_m,\\vec{s}_r) \\leq d(\\bar{\\vec{s}}_m,\\vec{s}_n), \\forall n \\in \\{1,2,\\ldots,N\\}}$, where $d(\\cdot)$ could be an arbitrary distance measure such as 2-norm. \nIf the nearest distance $d(\\bar{\\vec{s}}_m,\\vec{s}_r)$ is smaller than a predefined threshold $\\zeta>0$, we replace $\\{\\vec{s}_{r},\\hat{\\vec{\\mu}}_r,\\hat{\\vec{\\Sigma}}_r\\}$ with $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}$; Otherwise, we insert $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}$ into the reference database. More specifically, given a new desired point $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\xi}}_m\\}$ described by ${\\bar{\\vec{\\xi}}_m | \\bar{\\vec{s}}_{m} \\sim \\mathcal{N} ( \\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m} )}$, we update the reference database \naccording to\n\\begin{equation}\n\t\\left\\{\n\t\\begin{aligned}\n\t\t&\\!\\!\\vec{{D}} \\!\\leftarrow\\! \\{\\! \\vec{D}\/\\{\\vec{s}_{r},\\hat{\\vec{\\mu}}_{r},\\hat{\\vec{\\Sigma}}_{r}\\}\\! \\} \\!\\cup\\! \\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m}\\!\\}, \n\\;\\mathrm{if} \\, d(\\bar{\\vec{s}}_{m},\\vec{s}_{r}) \\!<\\! \\zeta,\\\\ \n\t\t&\\!\\!\\vec{{D}} \\!\\leftarrow \\vec{D} \\cup \\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m}\\}, \n\t\t\\hspace{0.85in} \\mathrm{otherwise},\n\t\\end{aligned}\n\t\\right.\n\t\\label{equ:kmp:update}\n\\end{equation}\nwhere $r=\\arg\\!\\min_{n} d(\\bar{\\vec{s}}_{m},\\vec{s}_{n}), n\\in\\{1,2,\\ldots,N\\}$ and the symbols `$\/$' and `$\\cup$' represent exclusion and union operations, respectively.\n\n\n\\subsection{Trajectory Superposition Using KMP}\n\\label{subsec:super:position}\nIn addition to the modulation operations on a single trajectory, we extend KMP to mix multiple trajectories that represent different feasible solutions for a task, with different priorities. Formally, given a set of $L$ reference trajectory distributions, associated with inputs and corresponding priorities $\\gamma_{n,l}$, denoted as $\\{ \\{\\vec{s}_{n},\\hat{\\vec{\\xi}}_{n,l},\\gamma_{n,l}\\}_{n=1}^{N}\\}_{l=1}^{L}$, where ${\\hat{\\vec{\\xi}}_{n,l}|\\vec{s}_n \\sim \\mathcal{N}(\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l})}$, and $\\gamma_{n,l} \\in (0,1)$ is a priority assigned to the point $\\{\\vec{s}_{n},\\hat{\\vec{\\xi}}_{n,l}\\}$ satisfying $\\sum_{l=1}^{L}\\gamma_{n,l}=1$.\n\nSince each priority indicates the importance of one datapoint in a reference trajectory, we use them to weigh the information-loss as follows\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}^{S}(\\vec{\\mu}_w,\\!\\vec{\\Sigma}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N}\\!\\sum_{l=1}^{L}\\!\\!\\gamma_{n,l} D_{KL} \\biggl (\\!\\! 
\\mathcal{P}_{\\mathbf{p}}(&\\vec{\\xi}|\\vec{s}_n)\n|| \\mathcal{P}^{l}_{\\mathbf{s}}(\\vec{\\xi}|\\vec{s}_n) \\!\\!\\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini:mix}\n\\end{equation}\nwhere $\\mathcal{P}^{l}_{\\mathbf{s}}$ is defined as\n\\begin{equation}\n\\mathcal{P}^{l}_{\\mathbf{s}}(\\vec{\\xi}|\\vec{s}_n)=\\mathcal{N} (\\vec{\\xi}|\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l}),\n\\label{equ:def:prob:subref}\n\\end{equation}\nrepresenting the distribution of the $l$-th reference trajectory given the input $\\vec{s}_n$.\n\nSimilar to the decomposition in (\\ref{equ:kl:cost:ini})--(\\ref{equ:kl:var:cost}), the objective function (\\ref{equ:kl:cost:ini:mix}) can be decomposed into a \\emph{weighted mean minimization subproblem} and a \\emph{weighted covariance minimization subproblem}. The former is written as\n\\begin{equation}\n\\begin{aligned}\n{J}_{ini}^{S}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N}\\! \\sum_{l=1}^{L} \\gamma_{n,l}(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} &\\vec{\\mu}_w -\\hat{\\vec{\\mu}}_{n,l})^{\\mathsf{T}} \\hat{\\vec{\\Sigma}}_{n,l}^{-1} \\\\\n&(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w \\!-\\! \\hat{\\vec{\\mu}}_{n,l})\n\\end{aligned}\n\\label{equ:kl:mean:cost:mix}\n\\end{equation}\nand the latter is\n\\begin{equation}\n\\begin{aligned}\n{J}_{ini}^{S}(\\vec{\\Sigma}_w)\\!=\\!\\sum_{n=1}^{N} &\\sum_{l=1}^{L}\n\\gamma_{n,l}\\Big(\\!-\\!\\log|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_{n,l}^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)\n\\end{aligned}.\n\\label{equ:kl:var:cost:mix}\n\\end{equation}\n \nIt can be proved that the weighted mean subproblem can be solved by minimizing (see Appendix~\\ref{app:compose:mean})\n\\begin{equation}\n\\tilde{J}_{ini}^{S}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\!\\vec{\\mu}_w \\!-\\! \\vec{\\mu}_{n}^{S})^{\\mathsf{T}} {\\vec{\\Sigma}_n^{S}}^{-1} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\!\\vec{\\mu}_w \\!-\\! \\vec{\\mu}_{n}^{S})\n\\label{equ:kl:mean:cost:mix:prod}\n\\end{equation}\nand the weighted covariance subproblem is equivalent to the problem of minimizing (see Appendix~\\ref{app:compose:var})\n\\begin{equation}\n\\begin{aligned}\n\\tilde{J}_{ini}^{S}(\\vec{\\Sigma}_w)\\!=\\!\\sum_{n=1}^{N} \\Big(&\n\\!\\!-\\log |\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}({\\vec{\\Sigma}_n^{S}}^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)\n\\end{aligned},\n\\label{equ:kl:var:cost:mix:prod}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\vec{\\Sigma}_n^{S}}^{-1}=\\sum_{l=1}^{L} \\left( \\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l} \\right)^{-1} \\quad \\mathrm{and}\n\\label{equ:prod:var}\n\\end{equation}\n\\begin{equation}\n\\vec{\\mu}_{n}^{S}={\\vec{\\Sigma}_n^{S}} \\sum_{l=1}^{L} \\left( \\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l} \\right)^{-1} \\hat{\\vec{\\mu}}_{n,l}.\n\\label{equ:prod:mean}\n\\end{equation} \nObserve that (\\ref{equ:kl:mean:cost:mix:prod}) and (\\ref{equ:kl:var:cost:mix:prod}) have the same form as the subproblems defined in (\\ref{equ:kl:mean:cost}) and (\\ref{equ:kl:var:cost}), respectively. 
Note that the definitions in (\\ref{equ:prod:var}) and (\\ref{equ:prod:mean}) essentially correspond to the product of $L$ Gaussian distributions $\\mathcal{N}(\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l})$ with ${l=1,2,\\ldots,L}$, given by\n\\begin{equation}\n\\mathcal{N}(\\vec{\\mu}_{n}^{S},{\\vec{\\Sigma}_n^{S}}) \\propto \\prod_{l=1}^{L} \\mathcal{N}(\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l}).\n\\label{equ:product:mix}\n\\end{equation}\nThus, for the problem of trajectory superposition, we first determine a \\emph{mixed reference database}\n$\\{\\vec{s}_n,\\vec{\\mu}_n^{S},\\vec{\\Sigma}_n^{S}\\}_{n=1}^{N}$ through (\\ref{equ:product:mix}), \nthen we employ Algorithm~\\ref{algorithm:kmp} to predict the corresponding mixed trajectory points for arbitrary queries. \nNote that the weighted mean minimization subproblem (\\ref{equ:kl:mean:cost:mix}) can be interpreted as the maximization of the weighted posterior \n$\\prod_{n=1}^{N} \\prod_{l=1}^{L} \\mathcal{P} \\left( \\vec{\\Theta}(\\vec{s}_{n})^{\\mathsf{T}} \\vec{\\mu}_w|\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l} \\right)^{\\gamma_{n,l}}.$\nIn comparison with the trajectory mixture in ProMP \\citep{Paraschos}, \nwe here consider an optimization problem with an unknown $\\vec{\\mu}_w$ rather than the direct product of a set of known probabilities.\n\n\n\\subsection{Local Movement Learning Using KMP}\n\\label{subsec:local_frame}\n\n\n\n\\begin{algorithm}[t]\n\t\\caption{\\emph{Local Kernelized Movement Primitives with Via-points\/End-points}}\n\t\\begin{algorithmic}[1]\n\t\\State{\\textbf{{Initialization}}}\n\t\t\\Statex{- Define $k(\\cdot,\\cdot)$ and set $\\lambda$.} \n\t\t\\Statex{- Determine $P$ local frames $\\{\\vec{A}^{(p)},\\vec{b}^{(p)}\\}_{p=1}^{P}$.}\n\t\\State{\\textbf{{Learning from local demonstrations}}}\n\t\t\\Statex{- Collect demonstrations $\\{ \\{ \\vec{s}_{n,h},\\vec{\\xi}_{n,h}\\}_{n=1}^{N} \\}_{h=1}^{H}$ in $\\{O\\}$.}\n\t\t\\Statex{- Project demonstrations into local frames via (\\ref{equ:linear:transform}).}\n\t\t\\Statex{- Extract local reference databases $\\!\\{\\vec{s}_n^{(p)}\\!,\\!\\hat{\\vec{\\mu}}_n^{(p)}\\!,\\!\\hat{\\vec{\\Sigma}}_n^{(p)}\\}_{n=1}^{N}\\!$.}\n\t\\State{\\textbf{{Update local reference databases}}}\n\t\t\\Statex{- Project via-points\/end-points into local frames via (\\ref{equ:linear:transform}).} \n\t\t\\Statex{- Update local reference databases via (\\ref{equ:kmp:update}).}\n\t\t\\Statex{- Update $\\vec{K}^{(p)},\\vec{\\mu}^{(p)},\\vec{\\Sigma}^{(p)}$ in each frame $\\{p\\}$.}\n\t\\State{\\textbf{{Prediction using local-KMPs}}} \n\t\t\\Statex{- {\\emph{Input}}: query $\\vec{s}^{*}$.}\n\t\t\\Statex{- Update $P$ local frames based on new task requirements.} \n\t\t\\Statex{- Project $\\vec{s}^{*}$ into local frames via (\\ref{equ:linear:transform}), yielding $\\{\\!\\vec{s}\\!^{*\\!(p)}\\!\\}\\!_{p=1}^{P}$.}\n\t\t\\Statex{- Predict the local trajectory point associated with $\\vec{s}^{*(p)}$ in each frame $\\{p\\}$ using KMP.} \n\t\t\\Statex{- {\\emph{Output}}: Compute $\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})$ in the frame $\\{O\\}$ using (\\ref{equ:local:product}).}\n\t\\end{algorithmic}\n\t\\label{algorithm:local-kmp}\n\\end{algorithm}\n\nSo far we have considered trajectories that are represented with respect to the same global frame (coordinate system).\nIn order to enhance the extrapolation capability of KMP in task space, human demonstrations can be encoded in local frames\\footnote{Also referred to as task parameters in 
\\cite{Calinon2016}.} so as to extract local movement patterns, which can then be applied to a wider range of task instances. \nUsually, the definition of local frames depends on the task at hand. \nFor example, in a transportation task where the robot moves an object from a starting position (that may vary) to different target locations, two local frames can be defined respectively at the starting and ending positions. \n\n\n \n \nFormally, let us define $P$ local frames as $\\{\\vec{A}^{(p)},\\vec{b}^{(p)}\\}_{p=1}^{P}$,\nwhere $\\vec{A}^{(p)}$ and $\\vec{b}^{(p)}$ respectively represent the rotation matrix and the translation vector of frame $\\{p\\}$ with respect to the base frame $\\{O\\}$. Demonstrations are projected into each frame $\\{p\\}$, resulting in new trajectory points $\\{ \\{\\vec{s}_{n,h}^{(p)},\\vec{\\xi}_{n,h}^{(p)}\\}_{n=1}^{N} \\}_{h=1}^{H}$ for each local frame, where \n\\begin{equation}\n\t\\left[ \\begin{matrix}\n\t\t\\vec{s}_{n,h}^{(p)} \\\\ \\vec{\\xi}_{n,h}^{(p)} \n\t\\end{matrix} \\right]=\n\t\\left[\\begin{matrix}\n\t\\vec{A}_{s}^{(p)} &\\vec{0}\\\\\n\t\\vec{0} & \\vec{A}_{\\xi}^{(p)}\t\n\t\\end{matrix}\\right]^{-1} \n\t\\left( \n\t\\left[ \\begin{matrix}\n\t\t\\vec{s}_{n,h} \\\\ \\vec{\\xi}_{n,h}\n\t\\end{matrix} \\right]-\n\t\\left[\\begin{matrix}\n\t\\vec{b}_{s}^{(p)} \\\\\n\t\\vec{b}_{\\xi}^{(p)}\t\n\t\\end{matrix}\\right] \n\t\\right) ,\n\t\\label{equ:linear:transform}\n\\end{equation}\nwith $\\vec{A}_{s}^{(p)}\\!=\\vec{A}_{\\xi}^{(p)}\\!=\\vec{A}^{(p)}$ and $\\vec{b}_{s}^{(p)}\\!=\\vec{b}_{\\xi}^{(p)}\\!=\\vec{b}^{(p)}$\\footnote{Note that, if the input $\\vec{s}$ becomes time, then $\\vec{A}_{s}^{(p)}=\\!1$ and $\\vec{b}_{s}^{(p)}=\\!0$.}.\nSubsequently, by following the procedure in Section~\\ref{subsec:ref:traj}, for each local frame $\\{p\\}$ we can generate \na \\emph{local reference database} ${\\vec{D}^{(p)}=\\{\\vec{s}_n^{(p)},\\hat{\\vec{\\mu}}^{(p)}_{n},\\hat{\\vec{\\Sigma}}^{(p)}_{n}\\}_{n=1}^{N}}$.\n\n\nWe refer to the learning of KMPs in local frames as \\emph{local}-KMPs. For sake of simplicity, we only discuss the trajectory modulations with via-points\/end-points. The operation of trajectory superposition can be treated in the similar manner.\nGiven a set of desired points in the robot base frame $\\{O\\}$ described by the desired database $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}_{m=1}^{M}$, we project the desired database into local frames using (\\ref{equ:linear:transform}), leading to the set of transformed \\emph{local desired databases} $\\bar{\\vec{D}}^{(p)}=\\{\\bar{\\vec{s}}_{m}^{(p)},\\bar{\\vec{\\mu}}_m^{(p)},\\bar{\\vec{\\Sigma}}_m^{(p)}\\}_{m=1}^{M}$ with $p=\\{1,2,\\dots,P\\}$. Then, we carry out the update procedure described by (\\ref{equ:kmp:update}) in each frame $\\{p\\}$ and obtain a new local reference database $\\vec{D}^{(p)}$. \n\n\nFor a new input $\\vec{s}^{*}$ in the base frame $\\{O\\}$, we first project it into local frames using the input transformation in (\\ref{equ:linear:transform}), yielding local inputs $\\{\\vec{s}^{*(p)}\\}_{p=1}^{P}$. Note that, during the prediction phase, local frames might be updated depending on new task requirements and the corresponding task parameters $\\vec{A}^{(p)}$ and $\\vec{b}^{(p)}$ might vary accordingly. 
Later, in each frame $\\{p\\}$ we can predict a local trajectory point $\\widetilde{\\vec{\\xi}}^{(p)}\\!\\!( \\vec{s}^{*(p)})\\sim \\mathcal{N}( {\\vec{\\mu}}^{*(p)} , {\\vec{\\Sigma}}^{*(p)} )$ with updated mean ${\\vec{\\mu}}^{*(p)}$ and covariance ${\\vec{\\Sigma}}^{*(p)}$ by using (\\ref{equ:kmp:mean}) and (\\ref{equ:kmp:var}). \nFurthermore, new local trajectory points from all local frames can be simultaneously transformed into the robot base frame using an inverse formulation of (\\ref{equ:linear:transform}). Thus, for the query $\\vec{s}^{*}$ in the base frame $\\{O\\}$, its corresponding trajectory point $\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})$ in $\\{O\\}$ can be determined by maximizing the product of linearly transformed Gaussian distributions\n\\begin{equation}\n\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})\\!=\\!\\argmax_{\\vec{\\xi}}\\!\\prod_{p=1}^{P}\\! \\mathcal{N}\\! \\biggl(\\! \\vec{\\xi} | \\underbrace{\\vec{A}_{\\xi}^{(p)} \\vec{{\\mu}}^{*(p)} \\!\\!+\\! \\vec{b}_{\\xi}^{(p)}}_{\\widetilde{\\vec{\\mu}}_p}, \\underbrace{\\vec{A}_{\\xi}^{(p)} {\\vec{\\Sigma}}^{*(p)} \\!\\!{\\vec{A}_{\\xi}^{(p)}}^{\\mathsf{T}}}_{\\widetilde{\\vec{\\Sigma}}_p} \\biggr)\\!, \n\\label{equ:local:to:global}\n\\end{equation}\nwhose optimal solution is\n\\begin{equation}\n\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})=\\biggl(\\sum_{p=1}^{P} \\widetilde{\\vec{\\Sigma}}_p^{-1} \\biggr)^{-1} \\sum_{p=1}^{P} \\widetilde{\\vec{\\Sigma}}_p^{-1} \\widetilde{\\vec{\\mu}}_p.\n\\label{equ:local:product}\n\\end{equation}\nThe described procedure is summarized in Algorithm \\ref{algorithm:local-kmp}. \nNote that the solution (\\ref{equ:local:product}) actually corresponds to the expectation part of the product of Gaussian distributions in (\\ref{equ:local:to:global}).\n\n\n\n\\begin{figure*}[bt] \\centering\n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{demos_G.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{demos_G_gmm.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{demos_G_gmr.png}} \n\t\\caption{Demonstrations of handwritten letter `G' and the estimation of the reference trajectory through GMM\/GMR. (\\emph{a}) shows the trajectories of `G', where `$\\ast$' and `+' denote the starting and ending points of the demonstrations, respectively. \n\t\t(\\emph{b}) depicts the estimated GMM with the ellipses representing Gaussian components. (\\emph{c}) displays the retrieval of the reference trajectory distribution, where the red solid curve and shaded area, respectively, correspond to the mean and standard deviation of the reference trajectory.} \n\t\\label{fig:g:demos} \n\\end{figure*}\n\n\n\\section{Time-driven Kernelized Movement Primitives}\n\\label{sec:time_kmp}\n\nIn many robotic tasks, such as biped locomotion \\citep{Nakanishi} and striking movements \\citep{Huang2016}, time plays a critical role when generating movement trajectories for a robot. We here consider a special case of KMP by taking time $t$ as the input $\\vec{s}$, which is aimed at learning time-driven trajectories. 
\n\n\n\\subsection{A Special Treatment of Time-Driven KMP}\n\\label{subsec:time:tmp}\nSimilarly to ProMP, we formulate a parametric trajectory comprising positions and velocities as\n\\begin{equation}\n\\left[ \\begin{matrix}\n\\vec{\\xi}(t) \\\\ \\dot{\\vec{\\xi}}(t) \n\\end{matrix} \\right] = \\vec{\\Theta}(t)^{\\mathsf{T}} \\vec{w},\n\\label{equ:linear:form:time}\n\\end{equation}\nwhere the matrix $\\vec{\\Theta}(t)\\in \\mathbb{R}^{B\\mathcal{O} \\times 2\\mathcal{O}}$ is\n\\begin{equation}\n\\vec{\\Theta}(t)\\!\\!=\\!\\!\\left[\\begin{matrix} \n\\vec{\\varphi}(t) \\!& \\vec{0} \\!& \\cdots \\!&\\vec{0} \\!& \\dot{\\vec{\\varphi}}(t) \\!& \\vec{0} \\!& \\cdots \\!&\\vec{0} \\\\\n\\vec{0} \\!& \\vec{\\varphi}(t) \\!& \\cdots \\!&\\vec{0} \\!&\\vec{0} \\!& \\dot{\\vec{\\varphi}}(t) \\!& \\cdots &\\vec{0}\\\\\n\\vdots \\!& \\vdots \\!& \\ddots \\!& \\vdots \\!&\\vdots \\!& \\vdots \\!& \\ddots \\!& \\vdots\\\\\n\\vec{0} \\!& \\vec{0} \\!& \\cdots \\!& \\vec{\\varphi}(t) \\!&\\vec{0} \\!& \\vec{0} \\!& \\cdots \\!& \\dot{\\vec{\\varphi}}(t)\\\\\n\\end{matrix}\\right] \\!.\n\\label{equ:basis:function:time}\n\\end{equation}\nNote that we have included the first-order derivative of the parametric trajectory $\\vec{\\xi}(t)$ in (\\ref{equ:linear:form:time}), which allows us to encode the observed dynamics of the motion. Consequently, we include the derivative of basis functions as shown in (\\ref{equ:basis:function:time})\n\n\nIn order to encapsulate the variability in demonstrations, we here model the joint probability $\\mathcal{P}(t,\\vec{\\xi},\\dot{\\vec{\\xi}})$ using GMM, similarly to Section~\\ref{subsec:ref:traj}. The probabilistic reference trajectory associated with time input $t_n$ can then be extracted by GMR as the conditional probability \n${\\mathcal{P}(\\hat{\\vec{\\xi}}_n,\\hat{\\dot{\\vec{\\xi}}}_n|t_n)\n\\sim \\mathcal{N}(\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n})}$. \nFinally, we can derive the time-driven KMP by following the derivations presented in Section~\\ref{subsec:kmp}. \n\nIt is noted that, when we calculate the kernel matrix as previously defined in (\\ref{equ:single:basis:product})--(\\ref{equ:kernel:matrix}), we here encounter four types of products $\\vec{\\varphi}(t_i)^{\\mathsf{T}} \\vec{\\varphi}(t_j)$, $\\vec{\\varphi}(t_i)^{\\mathsf{T}} \\dot{\\vec{\\varphi}}(t_j)$, $\\dot{\\vec{\\varphi}}(t_i)^{\\mathsf{T}} \\vec{\\varphi}(t_j)$ and $\\dot{\\vec{\\varphi}}(t_i)^{\\mathsf{T}} \\dot{\\vec{\\varphi}}(t_j)$. \nHence, we propose to approximate $\\vec{\\dot{\\varphi}}(t)$ as\n$\\vec{\\dot{\\varphi}}(t) \\approx \\frac{\\vec{\\varphi}(t+\\delta)-\\vec{\\varphi}(t)}{\\delta}$ by using the finite difference method, where $\\delta>0$ is an extremely small constant. So, based on the definition $\\vec{\\varphi}(t_i)^{\\mathsf{T}} \\vec{\\varphi}(t_j)=k(t_i,t_j)$, we can determine the kernel matrix as\n\\begin{equation}\n\\vec{k}(t_i,t_j)\\!=\\! \\vec{\\Theta}({t_i})^{\\mathsf{T}}\\vec{\\Theta}({t_j})\\!\\!=\\!\\!\n\\left[ \\begin{matrix} k_{tt}(i,j)\\vec{I}_{\\mathcal{O}} \\!&\\! k_{td}(i,j)\\vec{I}_{\\mathcal{O}}\\\\\nk_{dt}(i,j)\\vec{I}_{\\mathcal{O}} \\!&\\! 
k_{dd}(i,j)\\vec{I}_{\\mathcal{O}} \\\\\n\\end{matrix} \\right],\n\\label{equ:kernel:matrix:time}\n\\end{equation} \nwhere\n\\begin{equation}\n\\begin{aligned}\nk_{tt}(i,j)\\!\\!&=\\!\\!k(t_i,t_j),\\\\\nk_{td}(i,j)\\!\\!&=\\!\\!\\frac{k(t_i,t_j\\!+\\!\\delta)\\!\\!-\\!\\!k(t_i,t_j)}{\\delta},\\\\\nk_{dt}(i,j)\\!\\!&=\\!\\!\\frac{k(t_i\\!+\\!\\delta,t_j)\\!\\!-\\!\\!k(t_i,t_j)}{\\delta}, \\\\\nk_{dd}(i,j)\\!\\!&=\\!\\!\\frac{k(t_i\\!+\\!\\delta, \\!t_j\\!+\\!\\delta)\\!\\! -\\!\\!k(t_i\\!+\\!\\delta, \\!t_j)\\!\\! -\\!\\!k(t_i,\\!t_j\\!+\\!\\delta)\\!\\! +\\!\\!k(t_i,\\!t_j)}{{\\delta}^{2}}. \n\\end{aligned}\n\\end{equation}\nNote that, alternatively, we could model the output variable $\\vec{\\xi}(t)$ and its derivative $\\dot{\\vec{\\xi}}(t)$ in (\\ref{equ:linear:form:time}) using \n${\\vec{\\Theta}(t)=blockdiag(\n\\vec{\\varphi}(t),\\vec{\\varphi}(t),\\cdots,\\vec{\\varphi}(t))}$,\nin which case the derivative of basis functions is not used. However, this treatment requires a higher dimensional $\\vec{\\Theta}(t)$ (i.e., $2B\\mathcal{O}\\times 2\\mathcal{O}$) and thus leads to a higher dimensional $\\vec{w}\\in\\mathbb{R}^{2B\\mathcal{O}}$. In contrast, if both basis functions and their derivatives (as defined in (\\ref{equ:basis:function:time})) are employed, we can obtain a compact representation which essentially corresponds to a lower dimensional $\\vec{w}\\in\\mathbb{R}^{B\\mathcal{O}}$.\n\nAlthough the derivation presented in this section applies to the time-driven case, it cannot be easily generalized to the case of high-dimensional $\\vec{s}$. While $\\vec{\\dot{\\varphi}}(t)$ can be straightforwardly approximated using the finite difference method, for a high-dimensional input $\\vec{s}$ it is a non-trivial problem to estimate $\\vec{\\dot{\\varphi}}(s)=\\frac{\\partial \\vec{\\varphi}(s)}{\\partial s}\\frac{\\partial s}{\\partial t}$\nunless we have an additional model which can reflect the dynamics between time $t$ and the input $\\vec{s}$. Due to the difficulty of estimating $\\vec{\\dot{\\varphi}}(s)$, an alternative way to encode $[\\vec{\\xi}^{\\mathsf{T}}(\\vec{s}) \\ \\dot{\\vec{\\xi}}^{\\mathsf{T}}(\\vec{s})]^{\\mathsf{T}}$ with a high-dimensional input $\\vec{s}$ is to use (\\ref{equ:linear:form}) with an extended matrix $\\vec{\\Theta}(\\vec{s}) \\in \\mathbb{R}^{2B\\mathcal{O} \\times 2\\mathcal{O}}$, i.e., ${\\vec{\\Theta}(\\vec{s})=blockdiag(\n\t\\vec{\\varphi}(\\vec{s}),\\vec{\\varphi}(\\vec{s}),\\cdots,\\vec{\\varphi}(\\vec{s}))}$. \n\n\n\n\\subsection{Time-scale Modulation of Time-driven KMP}\n\\label{subsec:time:modulation}\nIn the context of time-driven trajectories, new tasks may require speeding up or slowing down the robot movement, and hence trajectory modulation in time-scale is required. Let us denote the movement duration of demonstrations and the time length of the corresponding reference trajectory as $t_N$. To generate adapted trajectories with new durations $t_D$, we define a monotonic function ${\\tau: [0,t_D] \\mapsto [0,t_N]}$, which is a transformation of time.\nThis straightforward solution implies that for any query $t^{*}\\in[0,t_D]$, we use $\\tau(t^{*})$ as the input for the prediction through KMP, and thus trajectories can be modulated to be faster or slower (see also \\cite{Ijspeert, Paraschos} for modulations in time-scale, where the time modulation is referred to as the phase transformation). \n\n\n\n\n\n\n\\section{Evaluations of the Approach}\n\\label{sec:evaluations}\nIn this section, several examples are used to evaluate KMP. 
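As a minimal illustration of the finite-difference construction of the kernel blocks in (\\ref{equ:kernel:matrix:time}), the following Python sketch assumes the Gaussian kernel $k(t_i,t_j)=\\exp(-\\ell(t_i-t_j)^{2})$ that is used in the evaluations below; the values of $\\ell$ and $\\delta$ here are illustrative placeholders:
\\begin{verbatim}
import numpy as np

def kernel(ti, tj, ell=2.0):
    # Gaussian kernel k(t_i, t_j) = exp(-ell * (t_i - t_j)^2)
    return np.exp(-ell * (ti - tj) ** 2)

def kernel_blocks(ti, tj, delta=1e-3, ell=2.0):
    # finite-difference approximation of the four scalar blocks
    k_tt = kernel(ti, tj, ell)
    k_td = (kernel(ti, tj + delta, ell) - k_tt) / delta
    k_dt = (kernel(ti + delta, tj, ell) - k_tt) / delta
    k_dd = (kernel(ti + delta, tj + delta, ell)
            - kernel(ti + delta, tj, ell)
            - kernel(ti, tj + delta, ell) + k_tt) / delta ** 2
    return k_tt, k_td, k_dt, k_dd

print(kernel_blocks(0.1, 0.4))
\\end{verbatim}
Each scalar block is then expanded by the $\\mathcal{O}\\times\\mathcal{O}$ identity matrix to assemble the kernel matrix in (\\ref{equ:kernel:matrix:time}).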
We first consider the adaptation of trajectories with via-points\/end-points as well as the mixture of multiple trajectories (Section~\\ref{subsec:traj:modulate}), where comparisons with ProMP are shown. Then, we evaluate the extrapolation capabilities of local-KMPs (Section~\\ref{subsec:extra:evaluate}). Subsequently, we validate the approach in two different scenarios using real robots. First, we study a novel application of robot motion adaptation by adding via-points according to sensed forces at the end-effector of the robot (Section~\\ref{subsec:force:ada}). Second, we focus on a human-robot collaboration scenario, namely, the 3rd-hand task, where a 6-dimensional input is considered in the learning and adaptation problems (Section~\\ref{subsec:3rd:hand}). \n\n\n\n\n\\begin{figure*} \\centering \n\t\\subfigure[Trajectory modulations with one start-point and one via-point.]{\n\t\t\\includegraphics[width=0.80\\textwidth,bb=0.0cm 9.2cm 29cm 24cm,clip]{modulation_viaOne.png}\n\t\t\\put (-412.75,161) {KMP}\n\t\t\\put (-416,55) {ProMP}}\n\t\\subfigure[Trajectory modulation with one via-point and one end-point.]{ \n\t\t\\includegraphics[width=0.80\\textwidth,bb=0.0cm 9.2cm 29cm 24cm,clip]{modulation_viaTwo.png}\n\t\t\\put (-412.75,161) {KMP}\n\t\t\\put (-416,55) {ProMP}}\n\t\\subfigure[Superposition of two probabilistic reference trajectories.]{ \n\t\t\\includegraphics[width=0.80\\textwidth,bb=0.0cm 9.2cm 29cm 24cm,clip]{modulation_mix.png}\n\t\t\\put (-412.75,161) {KMP}\n\t\t\\put (-416,55) {ProMP}}\n\t\\caption{Different cases of trajectory modulation using KMP and ProMP. \\emph{(a)--(b)} show trajectories (red and green curves) that are adapted to go through different desired points (depicted by circles). The gray dashed curves represent the mean of probabilistic reference trajectories for KMP and ProMP, while \n\tthe shaded areas depict the standard deviation.\n\t\\emph{(c)} shows the superposition of various reference trajectories, where the dashed red and green curves correspond to the adapted trajectories in \\emph{(a)} and \\emph{(b)}, respectively. \n\tThe resulting trajectory is displayed in solid pink curve.} \n\t\\label{fig:viapoint:compare} \n\\end{figure*} \n\n\\begin{figure*} \\centering \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.31\\textwidth]{extra_demos.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.31\\textwidth]{extra_projectDemos_f1.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.31\\textwidth]{extra_projectDemos_f2.png}}\n\t\\caption{Demonstrations of the transportation task as well as GMM modeling of local trajectories. (\\emph{a}) shows the demonstrated trajectories (plotted by purple curves), where gray curves correspond to the projection of demonstrated trajectories into the $x$--$y$ plane. `$\\ast$' and `+' denote the starting and ending points of trajectories, respectively. (\\emph{b})-(\\emph{c}) depict GMM modeling of local trajectories, where local trajectories are obtained by projecting demonstrations into two local frames, respectively. } \n\t\\label{fig:project:gmm} \n\\end{figure*}\n\n\n\n\n\\subsection{Trajectory Modulation\/Superposition}\n\\label{subsec:traj:modulate}\n\nWe first evaluate our approach using five trajectories of the handwritten letter `G'\\footnote{These trajectories are obtained from \\cite{Calinon2017}.}, \nas shown in Figure~\\ref{fig:g:demos}(\\emph{a}).\nThese demonstrations are encoded by GMM with input $t$ and output $\\vec{\\xi}(t)$ being the 2-D position $[x(t)\\, y(t)]^{\\mathsf{T}}$. 
Subsequently, a probabilistic reference trajectory is retrieved through GMR, as depicted in Figure~\\ref{fig:g:demos}(\\emph{b})--(\\emph{c}), where the position values from the reference trajectory are shown. This probabilistic reference trajectory along with the input is used to initialize KMP as described in Section~\\ref{subsec:time:tmp}, which uses a Gaussian kernel ${k(t_i,t_j)=\\exp(-\\ell (t_i-t_j)^{2})}$ with hyperparameter $\\ell>0$. The relevant parameters for KMP are set as $\\ell=2$ and $\\lambda=1$. \n\nFor comparison purposes, ProMP is evaluated as well, where 21 empirically chosen Gaussian basis functions are used. For each demonstration, we employ the regularized least squares method to estimate the weights $\\vec{w}\\in \\mathbb{R}^{42}$ of the corresponding basis functions. Subsequently, the probability distribution $\\mathcal{P}(\\vec{\\mu}_w,\\vec{\\Sigma}_w)$ that is computed through maximum likelihood estimation \\citep{Paraschos2015} is used to initialize ProMP. Since the number of demonstrations is significantly lower than the dimension of $\\vec{w}$, a diagonal regularization term is added to $\\vec{\\Sigma}_w$ in order to avoid singular estimates. \n \nFigure~\\ref{fig:viapoint:compare} displays different trajectory modulation cases using KMP and ProMP. We test not only cases in which new requirements arise in the form of via-points and start-points\/end-points, but also the scenario of mixing different reference trajectories\\footnote{We here only consider position requirements, but velocity constraints can also be directly incorporated in desired points.}. It can be observed from Figure~\\ref{fig:viapoint:compare}(\\emph{a})--(\\emph{b}) that both KMP and ProMP successfully generate trajectories that fulfill the new requirements. For the case of trajectory superposition in Figure~\\ref{fig:viapoint:compare}(\\emph{c}), we consider the adapted trajectories in Figure~\\ref{fig:viapoint:compare}(\\emph{a}) and (\\emph{b}) as candidate reference trajectories and assign them the priorities ${\\gamma_{t,1}=\\exp(-t)}$ and $\\gamma_{t,2}=1-\\exp(-t)$, respectively. Note that $\\gamma_{t,1}$ and $\\gamma_{t,2}$ correspond to monotonically decreasing and increasing functions, respectively. As depicted in Figure~\\ref{fig:viapoint:compare}(\\emph{c}), the mixed trajectory (solid pink curve) indeed switches from the first to the second reference trajectory. \n\nAlthough KMP and ProMP perform similarly, the key difference between them lies in the treatment of basis functions. In contrast to ProMP, which requires explicit basis functions, KMP is a non-parametric method that does not depend on them. This difference proves crucial for tasks where the robot actions are driven by a high-dimensional input. We will show this effect in the 3rd-hand task, which is associated with a 6-D input, where the implementation of ProMP becomes difficult since a large number of basis functions need to be defined.\n\n\n\n\n\n\\subsection{Extrapolation with Local-KMPs}\n\\label{subsec:extra:evaluate}\n\nWe evaluate the extrapolation capabilities of local-KMPs in an application with a new set of desired points (start-, via- and end-points) lying far away from the area covered by the original demonstrations, in contrast to the experiment reported in Section~\\ref{subsec:traj:modulate}. Note that ProMP does not consider any task-parameterization, and therefore its extrapolation capability is limited (see \\cite{Havoutis} for a discussion). 
Thus, we only evaluate our approach here.\n\nWe study a collaborative object transportation task, where the robot assists a human to carry an object from a starting point to an ending location. Five demonstrations\nin the robot base frame are used for the training of local-KMPs (see Figure~\\ref{fig:project:gmm}(\\emph{a})). We consider time $t$ as the input, and the 3-D Cartesian position $[x(t)\\, y(t)\\, z(t)]^{\\mathsf{T}}$ of the robot end-effector as the output $\\vec{\\xi}(t)$.\nFor the implementation of local-KMPs, we define two frames located at the initial and the final locations of the transportation trajectories (as depicted in Figure~\\ref{fig:project:gmm}(\\emph{b})--(\\emph{c})), \nsimilarly to \\cite{Leonel15}, \nwhich are then used to extract the local motion patterns. \n\nWe consider two extrapolation tests, where the starting and ending locations are different from the demonstrated ones. \nIn the first test, we study the transportation from ${\\vec{p}_s\\!=\\![-0.2 \\; 0.2\\;0.2]^{\\mathsf{T}}}$ to ${\\vec{p}_e\\!=\\![-0.15\\; 0.8\\; 0.1]^{\\mathsf{T}}}$. In the second test, we evaluate the extrapolation with ${\\vec{p}_s\\!=\\![0.2\\; -\\!0.3 \\; 0.1]^{\\mathsf{T}}}$ and ${\\vec{p}_e\\!=\\![0.25\\; 0.5\\; 0.05]^{\\mathsf{T}}}$. Note that all locations are described with respect to the robot base frame. In addition to the desired starting and ending locations in the transportation task, we also introduce additional position constraints which require the robot passing through two via-points (plotted by circles in Figure~\\ref{fig:extra:compare}).\nThe extrapolation of local-KMPs for these new situations is achieved according to Algorithm \\ref{algorithm:local-kmp}, where the Gaussian kernel is used. For each test, the local frames are set as ${\\vec{A}^{(1)}=\\vec{A}^{(2)}=\\vec{I}_3}$, $\\vec{b}^{(1)}=\\vec{p}_s$ and $\\vec{b}^{(2)}=\\vec{p}_e$. The related KMP parameters are $\\ell=0.5$ and $\\lambda=10$.\nFigure~\\ref{fig:extra:compare} shows that local-KMPs successfully extrapolate to new frame locations and lead the robot to go through various new desired points while maintaining the shape of the demonstrated trajectories. \n\nNote that the environment might drastically change from demonstrations to final execution, so the capability of modulating the demonstrated trajectories to go through new points is important in many applications. In this sense, local-KMPs prove superior to other local-frame approaches such as those exploited in \\cite{Leonel15, Calinon2016}, which do not consider trajectory modulation.\n\n\n \n\n\n\n\n\\begin{figure} \\centering \t\n\t\\includegraphics[width=0.72\\columnwidth]{extra_viaPoints.png}\n\t\\caption{Extrapolation evaluations of local-KMPs for new starting and ending locations in the transportation task. The purple curve represents the mean of the original probabilistic reference trajectory for KMP, while the red and yellow trajectories show the extrapolation cases. Circles represent desired points describing additional task requirements. Squares denote desired starting and ending locations of the transportation task. Gray curves depict the projection of trajectories into the $x$--$y$ plane.} \n\t\\label{fig:extra:compare} \n\\end{figure}\n\n\n\n\n\\subsection{Force-based Trajectory Adaptation}\n\\label{subsec:force:ada}\nThrough kinesthetic teaching, humans are able to provide the robot with initial feasible trajectories. However, this procedure does not account for unpredicted situations. 
For instance, when the robot is moving towards a target, undesired circumstances such as obstacles occupying the robot workspace might appear, which requires the robot to avoid possible collisions. \nSince humans react reliably to dynamic environments, we here propose to use human supervision to adapt the robot trajectory when the environment changes. In particular, we use a force sensor installed at the end-effector of the robot in order to measure corrective forces exerted by the human. \n\nWe treat the force-based adaptation problem under the KMP framework by defining new via-points as a function of the sensed forces.\nWhenever the robot is about to collide with an obstacle, the user interacts physically with the end-effector and applies a corrective force. This force is used to determine a desired via-point which the robot needs to pass through in order to avoid the obstacle.\nBy updating the reference database with this via-point through (\\ref{equ:kmp:update}), KMP can generate an adapted trajectory that fulfills the via-point constraint.\n\n\nFor the human interaction at time $t$, given the robot Cartesian position $\\vec{p}_t$ and the sensed force $\\vec{F}_t$, the first desired datapoint is defined as:\n${\\bar{t}_1=t+\\delta_t}$ and ${\\bar{\\vec{p}}_{1}=\\vec{p}_{t}+\\vec{K}_f \\vec{F}_t}$, where $\\delta_t>0$ controls the regulation time and $\\vec{K}_f>0$ determines the adaptation proportion for the robot trajectory. In order to avoid undesired trajectory modulations caused by force sensor noise, we introduce a force threshold $F_{thre}$ and add the new force-based via-point to the reference trajectory only when $||\\vec{F}_t||>F_{thre}$.\nSince the adapted trajectory might be far away from the previously planned trajectory due to the new via-point, we consider adding $\\vec{p}_t$ as a second desired point so as to ensure a smooth trajectory for the robot. Accordingly, for each interaction, we define the second desired point as $\\bar{t}_2=t$ and $\\bar{\\vec{p}}_{2}=\\vec{p}_t$.\n\n\n\n\n\\begin{figure} \\centering\n\t\\includegraphics[width=0.92\\columnwidth]{forceHumanDemo.jpg}\n\t\\caption{Kinesthetic teaching of the reaching task on the KUKA robot, where demonstrations comprising time and end-effector Cartesian position are collected. The green arrow shows the motion direction of the robot.} \n\t\\label{fig:force:humanDemo} \n\\end{figure} \n\n\\begin{figure} \\centering \n\t\\includegraphics[width=0.90\\columnwidth]{force_gmm.png}\n\t\\caption{GMM modeling of demonstrations for the force-based adaptation task, where the green curves represent demonstrated trajectories and ellipses depict Gaussian components.} \n\t\\label{fig:force:demos} \n\\end{figure} \n\n\n\\begin{figure*} \\centering\n\t\\includegraphics[width=2.03\\columnwidth]{forceAdaRobot.jpg}\n\t\\caption{Snapshots of the force-based trajectory adaptations, where the force exerted by the human is used to determine the via-points for the robot, which ensures collision avoidance. (a) and (f) correspond to the initial and final states of the robot, where circles depict the initial and final positions, respectively. Figures (b)--(e) show human interactions, with the green arrows depicting the directions of the corrective force. 
}\n\t\\label{fig:force:robot} \n\\end{figure*} \n\n\n\\begin{figure*} \\centering \n\t\\includegraphics[width=1.9\\columnwidth,bb=4.0cm 0cm 45cm 21.5cm,clip]{force_adaTraj.png}\n\t\\caption{\\emph{Top row}: the desired trajectory (generated by KMP) and the real robot trajectory, where `$\\ast$' represents the force-based desired points and `+' corresponds to the initial and final locations for the robot. For comparison, we also provide the desired trajectory predicted by KMP without obstacles (i.e., without human intervention). The shaded areas show the regulation durations for various human interventions. \\emph{Bottom row}: the force measured at the end-effector of the KUKA robot.} \n\t\\label{fig:force:adaTraj} \n\\end{figure*}\n\n\n\n\n\nIn order to evaluate the adaptation capability of KMP, we consider a reaching task where unpredicted obstacles will intersect the robot movement path. First, we collect six demonstrations (as depicted in Figure~\\ref{fig:force:humanDemo}) comprising time input $t$ and output $\\vec{\\xi}(t)$ being the 3-D Cartesian position ${[x(t)\\, y(t)\\, z(t)]^{\\mathsf{T}}}$. Note that obstacles are not placed in the training phase. The collected data is fitted using GMM (plotted in Figure~\\ref{fig:force:demos}) so as to retrieve a reference database, which is subsequently used to initialize KMP. \nThen, during the evaluation phase,\ntwo obstacles whose locations intersect the robot path are placed on the table, as shown in Figure~\\ref{fig:force:robot}.\nIn addition to the via-points that will be added through physical interaction,\nwe add the initial and target locations for the robot as desired points beforehand, where the initial location corresponds to the robot position before starting moving.\nThe relevant parameters are $\\vec{K}_f\\!\\!=\\!\\!0.006\\vec{I}_{3}$, $\\delta_t=1s$, $F_{thre}=10N$, $\\ell=0.15$ and ${\\lambda=0.3}$. \n\n\nThe trajectory that is generated by KMP according to various desired points\nas well as the real robot trajectory are depicted in Figure~\\ref{fig:force:adaTraj}.\nWe can observe that for each obstacle the robot trajectory is adapted twice. In the first two adaptations (around $8s$ and $11s$), the corrective force is dominant along the $z$ direction, while in the last two adaptations (around $17s$ and $20s$), the force has a larger component along the $x$ and $y$ directions. For all cases, KMP successfully adapts the end-effector trajectory according to the measured forces. \n\nNote that, even without human interaction, the proposed scheme can also help the robot replan its trajectory when it touches the obstacles, where the collision force takes the role of the human correction and guides the robot to move away from the obstacles. \nThus, with KMP the robot is capable of autonomously adapting its trajectory through low-impact collisions, whose tolerated force can be regulated using $F_{thre}$.\nSupplementary material includes a video of experiments using the human corrective force and the obstacle collision force.\n\n\n\n\n\n\n\n\n\n\\begin{figure*} \\centering\n\t\\includegraphics[width=2.03\\columnwidth]{3rdHandDemoRobot.jpg}\n\t\\caption{The 3rd-hand task in the soldering environment with the Barrett WAM robot. \\emph{(a)} shows the initial states of the user hands and the robot end-effector (the 3rd hand in this experiment). \\textcircled{1}--\\textcircled{4} separately correspond to the circuit board (held by the robot), magnifying glass, soldering iron and solder. 
\\emph{(b)} corresponds to the handover of the circuit board. \\emph{(c)} shows the robot grasping of the magnifying glass. \n\t\t\\emph{(d)} depicts the final scenario of the soldering task using both of the user hands and the robot end-effector. Red, blue and green arrows depict the movement directions of the user left hand, right hand and the robot end-effector, respectively. }\n\t\\label{fig:3rHand:demos:robot} \n\\end{figure*} \n\n\n\n\n\n\n\n\\subsection{3rd-hand Task}\n\\label{subsec:3rd:hand}\n\n\n\nSo far the reported experiments have shown the performances of KMP by learning various time-driven trajectories. We now consider a different task which requires a 6-D input, in particular a robot-assisted soldering scenario.\nAs shown in Figure~\\ref{fig:3rHand:demos:robot}, the task proceeds as follows: \\emph{(1)} the robot needs to hand over a circuit board to the user at the \\emph{handover location} $\\vec{p}^{h}$ (Figure~\\ref{fig:3rHand:demos:robot}\\emph{(b)}), where the left hand of the user is used. \\emph{(2)} the user moves his left hand to place the circuit board at the \\emph{soldering location} $\\vec{p}^{s}$ and simultaneously moves his right hand towards the soldering iron and then grasps it. Meanwhile, the robot is required to move towards the magnifying glass and grasp it at the \\emph{magnifying glass location} $\\vec{p}^{g}$ (Figure~\\ref{fig:3rHand:demos:robot}\\emph{(c)}). \\emph{(3)} the user moves his right hand to the soldering location so as to repair the circuit board. Meanwhile, the robot, holding the magnifying glass, moves towards the soldering place in order to allow the user to take a better look at the small components of the board (Figure~\\ref{fig:3rHand:demos:robot}\\emph{(d)}).\n\n\n\n\n\n\n\n\n\nLet us denote $\\vec{p}^{\\mathcal{H}_l}$, $\\vec{p}^{\\mathcal{H}_r}$ and $\\vec{p}^{\\mathcal{R}}$ as positions of the user left hand, right hand and robot end-effector (i.e., the ``third hand''), respectively. \nSince the robot is required to react properly according to the user hand positions, we formulate the 3rd-hand task as the prediction of the robot end-effector position according to the user hand positions. In other words, in the prediction problem we consider ${\\vec{s}=\\{\\vec{p}^{\\mathcal{H}_l}, \\vec{p}^{\\mathcal{H}_r}\\}}$ as the input (6-D) and $\\vec{\\xi}(\\vec{s})=\\vec{p}^{\\mathcal{R}}$ as the output (3-D) . \n\nFollowing the procedure illustrated in Figure~\\ref{fig:3rHand:demos:robot}, we collect five demonstrations comprising $\\{\\vec{p}^{\\mathcal{H}_l}\\!,\\vec{p}^{\\mathcal{H}_r}\\!,\\vec{p}^{\\mathcal{R}}\\!\\}$ for training KMP, as shown in Figure~\\ref{fig:3rdHand:task:demo}. \nNote that the teacher only gets involved in the training phase.\nWe fit the collected data using GMM, and subsequently extract a probabilistic reference trajectory using GMR,\nwhere the input for the probabilistic reference trajectory is sampled from the marginal probability distribution $\\mathcal{P}(\\vec{s})$, since in this scenario the exact input is unknown (unlike time $t$ in previous experiments). The Gaussian kernel \nis also employed in KMP, whose hyperparameters are set to $\\ell=0.5$ and $\\lambda=2$. 
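To make the prediction for this 6-D input concrete, the following Python sketch assumes that the KMP mean prediction in (\\ref{equ:kmp:mean}) takes the kernel ridge form $\\vec{k}^{*}(\\vec{K}+\\lambda\\vec{\\Sigma})^{-1}\\vec{\\mu}$, where $\\vec{K}$ is the kernel matrix over the reference inputs, $\\vec{\\Sigma}$ is the block-diagonal matrix of reference covariances and $\\vec{\\mu}$ stacks the reference means; the reference database below is randomly generated for illustration only:
\\begin{verbatim}
import numpy as np

def gaussian_kernel(s1, s2, ell=0.5):
    # scalar Gaussian kernel on the 6-D input s = [p_left; p_right]
    return np.exp(-ell * np.sum((s1 - s2) ** 2))

def kmp_mean(s_star, S_ref, Mu_ref, Sigma_ref, lam=2.0, ell=0.5, dim_out=3):
    # assumed kernel ridge form of the KMP mean prediction:
    # mu(s*) = k* (K + lam * Sigma)^{-1} mu
    N = S_ref.shape[0]
    I = np.eye(dim_out)
    K = np.block([[gaussian_kernel(S_ref[i], S_ref[j], ell) * I
                   for j in range(N)] for i in range(N)])
    k_star = np.hstack([gaussian_kernel(s_star, S_ref[j], ell) * I
                        for j in range(N)])
    Sigma = np.block([[Sigma_ref[i] if i == j else np.zeros((dim_out, dim_out))
                       for j in range(N)] for i in range(N)])
    mu = Mu_ref.reshape(-1)
    return k_star @ np.linalg.solve(K + lam * Sigma, mu)

# illustrative reference database of N points (6-D inputs, 3-D outputs)
rng = np.random.default_rng(0)
N = 20
S_ref = rng.normal(size=(N, 6))
Mu_ref = rng.normal(size=(N, 3))
Sigma_ref = np.stack([np.eye(3) for _ in range(N)])
print(kmp_mean(rng.normal(size=6), S_ref, Mu_ref, Sigma_ref))
\\end{verbatim}
In the experiments, the probabilistic reference trajectory extracted through GMM\/GMR from the collected demonstrations plays the role of this database.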
\n\n\n\\begin{figure} \\centering\n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{3rdHand_demos_circles.png}}\t\t\t\n\t\\caption{\n\t\tDemonstrations for the 3rd-hand task,\n\t\twhere the red and blue curves respectively correspond to the user left and right hands, while the green curves represent the demonstrated trajectories for the robot. The `$\\ast$' and `+' mark the starting and ending points of various trajectories, respectively.} \n\t\\label{fig:3rdHand:task:demo} \n\\end{figure}\n\n\nTwo experiments are carried out to evaluate KMP in this scenario. \nFirst, we employ the learned reference database without adaptation so as to verify the reproduction ability of KMP, as shown in Figure~\\ref{fig:3rdHand:eva} (\\emph{top row}). \nThe user left- and right-hand trajectories as well as the real robot trajectory, depicted in Figure~\\ref{fig:3rdHand:eva} (\\emph{top row}), are plotted in Figure~\\ref{fig:3rdHand:task:eva} (dotted curves), where the desired trajectory for the robot end-effector is generated by KMP. We can observe that KMP maintains the shape of the demonstrated trajectories for the robot while accomplishing the soldering task. Second, we evaluate the adaptation capability of KMP by adjusting the handover location \n$\\vec{p}^{h}$, the magnifying glass location $\\vec{p}^{g}$ as well as the soldering location $\\vec{p}^{s}$, as illustrated in Figure~\\ref{fig:3rdHand:eva} (\\emph{bottom row}). \nNote that these new locations are unseen in the demonstrations; thus, we consider them as new via-point\/end-point constraints within the KMP framework. \n\nTaking the handover as an example, we can define a via-point (associated with input) as\n$\\{\\bar{\\vec{p}}^{\\mathcal{H}_l}_{1},\\bar{\\vec{p}}^{\\mathcal{H}_r}_{1},\\bar{\\vec{p}}^{\\mathcal{R}}_{1}\\}$, where\n$\\bar{\\vec{p}}^{\\mathcal{H}_l}_{1}=\\vec{p}^{h}$, $\\bar{\\vec{p}}^{\\mathcal{H}_r}_{1}=\\vec{p}^{\\mathcal{H}_r}_{ini}$ and $\\bar{\\vec{p}}^{\\mathcal{R}}_{1}=\\vec{p}^{h}$, which implies that the robot should reach the new handover location $\\vec{p}^{h}$ when the user left hand arrives at $\\vec{p}^{h}$ and the user right hand stays at its initial position $\\vec{p}^{\\mathcal{H}_r}_{ini}$.\nSimilarly, we can define additional via- and end-points to ensure that the robot grasps the magnifying glass at a new location $\\vec{p}^{g}$ and assists the user at a new location $\\vec{p}^{s}$. Thus, two via-points and one end-point are used to update the original reference database according to (\\ref{equ:kmp:update}) so as to address the three adaptation situations.\nFigure~\\ref{fig:3rdHand:task:eva} shows the adaptation of the robot trajectory (green solid curve) in accordance with the user hand trajectories (red and blue solid curves).\nIt can be seen that the robot trajectory is indeed modulated towards the new handover, magnifying glass and soldering locations, showing the capability of KMP to adapt trajectories associated with high-dimensional inputs. \n\n\nIt is worth pointing out that the entire soldering task is accomplished by a single KMP without any trajectory segmentation for different subtasks, thus allowing for a straightforward learning of several sequential subtasks. Moreover, KMP makes the adaptation of learned skills associated with high-dimensional inputs feasible. 
Also, KMP is driven by the user hand positions, which allows for slower\/faster hand movements since the prediction of KMP does not depend on time, hence alleviating the typical problem of time-alignment in human-robot collaborations. For details on the 3rd-hand experiments, please refer to the video in the supplementary material.\n\n\n\n\n\\begin{figure*} \\centering\n\t\\includegraphics[width=2.03\\columnwidth]{3rdHandEvaRobot.jpg}\t\n\t\\caption{Snapshots of reproduction and adaptation using KMP. \\emph{Top row} shows the reproduction case using the learned reference database without adaptation. \\emph{Bottom row} displays the adaptation case using the new reference database which is updated using three new desired points: new handover, magnifying glass and soldering locations depicted as dashed circles (notice the difference with respect to the top row).} \n\t\\label{fig:3rdHand:eva} \n\\end{figure*} \n\n\n\\begin{figure} \\centering\n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{3rdHand_evaluate.png}}\t\t\t\n\t\\caption{\n\t\tThe reproduction (dotted curves) and adaptation (solid curves) capabilities of KMP in the 3rd-hand task, where the user left-hand and right-hand trajectories (red and blue curves) are used to retrieve the robot end-effector trajectory (green curves).} \n\t\\label{fig:3rdHand:task:eva} \n\\end{figure}\n\n\n\n\\section{Related Work}\n\\label{sec:relative:work}\n\nIn light of its reliable temporal and spatial generalization, DMP \\citep{Ijspeert} has achieved remarkable success in a vast range of applications.\nIn addition, many variants of DMP have been developed for specific circumstances, such as stylistic DMP \\citep{Matsubara}, task-parameterized DMP \\citep{Pervez} and combined DMP \\citep{Pastor}. However, due to the spring-damper dynamics, DMP converges to the target position with zero velocity, which prevents its application to cases with velocity requirements (e.g., the striking\/batting movement). Moreover, DMP does not provide a straightforward way to incorporate desired via-points. \n\nBy exploiting the properties of Gaussian distributions, ProMP \\citep{Paraschos} allows for trajectory adaptations with via-points and end-points simultaneously. The similarities between DMP and ProMP lie in the fact that both methods need the explicit definition of basis functions and are aimed at learning time-driven trajectories. As a consequence, when we encounter trajectories with high-dimensional inputs (e.g., human hand position and posture in human-robot collaboration scenarios), the selection of basis functions in DMP and ProMP becomes difficult and thus undesirable. \n\nIn contrast to DMP and ProMP, GMM\/GMR based learning algorithms \\citep{Muhlig,Calinon2007} have proven effective in encoding demonstrations with high-dimensional inputs. However, the large number of variables arising in GMM makes the re-optimization of GMM expensive, which therefore prevents its use in unstructured environments where robot adaptation capabilities are imperative. \n\nKMP provides several advantages compared to the aforementioned works. Unlike GMM\/GMR, KMP is capable of adapting trajectories towards various via-points\/end-points without the optimization of high-dimensional hyperparameters. Unlike DMP and ProMP, KMP alleviates the need for explicit basis functions due to its kernel treatment, and thus can be easily implemented for problems with high-dimensional inputs and outputs. 
\n\nIt is noted that the training of DMP only needs a single demonstration, while ProMP, GMM and KMP require a set of trajectories. In contrast to the learning of a single demonstration, the exploitation of multiple demonstrations makes the extraction of probabilistic properties of human skills possible. In this context, demonstrations have been exploited using the covariance-weighted strategy, as in trajectory-GMM \\citep{Calinon2016}, linear quadratic regulators (LQR) \\citep{Leonel15},\nmovement similarity criterion \\citep{Muhlig} and demonstration-guided trajectory optimization \\citep{Osa}. Note that the mean minimization subproblem as formulated in (\\ref{equ:kl:mean:cost}) also uses the covariance to weigh the cost, sharing the same spirit of the aforementioned results. \n\n\n \nSimilarly to our approach, information theory has also been exploited in different robot learning techniques. As an effective way to measure the distance between two probabilistic distributions, KL-divergence was exploited in policy search \\citep{Peters, Kahn}, trajectory optimization \\citep{Levine} and imitation learning \\citep{Englert}.\nIn \\cite{Englert} KL-divergence was used to measure the difference between the distributions of demonstrations and the predicted robot trajectories (obtained from a control policy and a Gaussian process forward model), and subsequently the probabilistic inference for learning control \\citep{Deisenroth} was employed to iteratively minimize the KL-divergence so as to find the optimal policy parameters. It is noted that this KL-divergence formulation makes the derivations of analytical solution intractable. In this article, we formulate the trajectory matching problem as (\\ref{equ:kl:cost:ini:temp}), which allows us to separate the mean and covariance subproblems and derive closed-form solutions for them separately.\n\n\n\n\n\n\\section{Discussion} \n\\label{sec:discuss}\nWhile both KMP and ProMP \\citep{Paraschos} learn the probabilistic properties of demonstrations, we here discuss their similarities and possible shortcomings in detail. \nFor the KMP, \nimitation learning is formulated as an optimization problem (Section \\ref{subsubsec:kl}), where the optimal distribution $\\mathcal{N}({\\vec{\\mu}}_w^{*},{\\vec{\\Sigma}}_w^{*})$ of $\\vec{w}$ is derived by minimizing the information-loss between the parametric trajectory and the demonstrations. Specifically, the mean minimization subproblem (\\ref{equ:kl:mean:cost}) can be viewed as the problem of maximizing the posterior $\\prod_{n=1}^{N} \\mathcal{P}(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w|\\hat{\\vec{\\mu}}_n,\\hat{\\vec{\\Sigma}}_n)$. \nIn contrast, ProMP formulates the problem of imitation learning as an estimation of the probability distribution of movement pattern $\\vec{w}$ (i.e., $\\vec{w} \\sim \\mathcal{N}(\\vec{\\mu}_w,\\vec{\\Sigma}_w)$), which is essentially equivalent to the maximization of the likelihood $\\prod_{h=1}^{H} \\prod_{n=1}^{N} \\mathcal{P}({\\vec{\\xi}}_{n,h}|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\vec{\\mu}_w,\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n))$. \nTo solve this maximization problem, the regularized least-squares is first used for each demonstration so as to estimate its corresponding movement pattern vector \\citep{Paraschos2015}, where basis functions are used to fit these demonstrations. 
Subsequently, using the movement patterns extracted from demonstrations, the distribution $\\mathcal{P}(\\vec{w})$ is determined by maximum likelihood estimation.\n\nAn immediate issue in ProMP is the estimation of $\\mathcal{P}(\\vec{w})$.\nIf the dimension of $\\vec{w}$ (i.e., $B\\mathcal{O}$) is too high compared to the number of demonstrations $H$, a singular covariance $\\vec{\\Sigma}_w$ might appear. For this reason, learning movements with ProMP typically requires a high number of demonstrations. In contrast, KMP needs a probabilistic reference trajectory, which is derived from the joint probability distribution of $\\{\\vec{s},\\vec{\\xi}\\}$ that is typically characterized by a lower dimensionality (i.e., $\\mathcal{I}+\\mathcal{O}$). \nAnother problem in ProMP arises with demonstrations having a high-dimensional input $\\vec{s}$, where the number of basis functions often increases exponentially, which is the typical curse of dimensionality (see also the discussion on the disadvantages of fixed basis functions in \\cite{Bishop}). In contrast, KMP is combined with a kernel function, alleviating the need for basis functions, while inheriting the potential and expressiveness of kernel-based methods.\n\n\nThere are several possible extensions for KMP. First, similarly to most regression algorithms, the computational complexity of KMP increases with the size of the training data (i.e., the reference database in our case). One possible solution could be the use of partial training data so as to build a sparse model \\citep{Bishop}.\nSecond, even though we have shown the capability of KMP in trajectory adaptation, the choice of desired points is rather empirical. For more complicated situations where we have little or no prior information, the search for optimal desired points could be useful. To address this problem, RL algorithms could be employed to find appropriate new via-points that fulfill the relevant task requirements, which can be encapsulated by cost functions. Third, since KMP predicts the mean and covariance of the trajectory simultaneously, it may be exploited in approaches that combine optimal control and probabilistic learning methods \\citep{Medina}. For example, the mean and covariance can be respectively used as the desired trajectory and the weighting matrix for tracking errors in LQR \\citep{Calinon2016}.\nFinally, besides the frequently used Gaussian kernel, the exploitation of various kernels \\citep{Hofmann} could be promising in future research.\n\n\\section{Conclusions} \n\\label{sec:conclusion}\nWe have proposed a novel formulation of robot movement primitives that incorporates a kernel-based treatment into the process of minimizing the information-loss in imitation learning. Our approach KMP is capable of preserving the probabilistic properties of human demonstrations, adapting trajectories to different unseen situations described by new temporal or spatial requirements, and mixing different trajectories. The proposed method was extended to deal with local frames, which provides the robot with reliable extrapolation capabilities. Since KMP is essentially a kernel-based non-parametric approach, it overcomes several limitations of state-of-the-art methods, being able to model complex and high dimensional trajectories. 
Through extensive evaluations in simulations and real robotic systems, we showed that KMP performs well in a wide range of applications such as time-driven movements and human-robot collaboration scenarios.\n\n\\section*{Acknowledgments}\nWe thank Fares J. Abu-Dakka, Luka Peternel and Martijn J. A. Zeestraten for their help on real robot experiments. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this work, we consider a static and undirected network of $K$ agents connected over some graph where each agent $k$ owns a private cost function $J_k: \\real^{M} \\rightarrow \\real$. Through only local interactions (i.e., with agents only communicating with their immediate neighbors), each agent is interested in finding a solution to the following problem: \n\\begin{align}\nw^\\star \\in \\argmin_{w\\in \\mathbb{R}^M} \\quad\n \\frac{1}{K}\\sum_{k=1}^K J_k(w) + R(w) \\label{decentralized1} \n\\end{align}\n where $R:\\real^{M} \\rightarrow \\real \\cup \\{+ \\infty \\}$ is a convex function (not necessarily differentiable). We adopt the following assumption throughout this work.\n\\begin{assumption} \\label{assump:cost}\n{\\rm ({\\bf Cost function}): We assume that a solution exists to problem \\eqref{decentralized1} and each cost function $ J_k(w)$ is first-order differentiable and $\\nu$-strongly-convex:\n\\eq{\n(w^o-w^\\bullet)\\tran \\big(\\grad J_k(w^o)-\\grad J_k(w^\\bullet)\\big) &\\geq \\nu \\|w^o-w^\\bullet\\|^2 \\label{stron-convexity} \n} \nwith $\\delta$-Lipschitz continuous gradients:\n\\eq{\n\\|\\grad J_k(w^o)-\\grad J_k(w^\\bullet)\\| &\\leq \\delta \\|w^o-w^\\bullet\\| \\label{lipschitz}\n}\n\\noindent for any $w^o$ and $w^\\bullet$. Constants $\\nu$ and $\\delta$ are strictly positive and satisfy $\\nu\\leq \\delta$. \nWe also assume $R(w)$ to be a proper\\footnote{The function $f(.)$ is proper if $-\\infty < f(x)$ for all $x$ and $f(x) < +\\infty$ for at least one $x$.} closed convex function. \\qd\n}\n\\end{assumption}\n\\noindent {\\bf Notation}: We let $A \\geq B$ ($A > B$) if $A-B$ is positive semi-definite (positive definite). The $N \\times N$ identity matrix is denoted by $I_N$. We let $\\one_{N}$ be a vector of size $N$ with all entries equal to one. The Kronecker product is denoted by $\\otimes$. We let ${\\rm col}\\{x_n\\}_{n=1}^N$ denote a column vector (matrix) that stacks the vectors (matrices) $x_n$ of appropriate dimensions on top of each other. The subdifferential $\\partial f(x)$ of a function $f:\\real^{M} \\rightarrow \\real$ at some $x \\in \\real^{M}$ is the set of all subgradients $\n\\partial f(x) = \\{g \\ | \\ g\\tran(y-x)\\leq f(y)-f(x), \\forall \\ y \\in \\real^{M}\\} $.\nThe proximal operator with parameter $\\mu>0$ of a function $f:\\real^{M} \\rightarrow \\real$ is\n\\eq{\n{\\rm \\bf prox}_{\\mu f}(x) = \\argmin_z \\ f(z)+{1 \\over 2 \\mu} \\|z-x\\|^2 \\label{def_proximal}\n}\n\\section{Unified Decentralized Algorithm (UDA)} \\label{sec:ATC:smooth}\nIn this section, we present the {\\em unified decentralized algorithm} (UDA) that covers various state-of-the-art algorithms as special cases. To this end, we will first focus on the smooth case ($R(w)=0$), which will then be extended to handle the non-smooth component $R(w)$ in the following section.\n\\subsection{General Primal-Dual Framework}\n For algorithm derivation and motivation purposes, we will rewrite problem \\eqref{decentralized1} in an equivalent manner. 
To do that, we let $w_k \\in \\real^M$ denote a local copy of $w$ available at agent $k$ and introduce the network quantities: \n \\eq{\n \\sw \\define {\\rm col}\\{w_1,\\cdots,w_K\\} \\in \\real^{KM}, \\quad \\cJ(\\sw) &\\define \\frac{1}{K} \\sum_{k=1}^K J_k(w_k)\n } \n Further, we introduce two general symmetric matrices $\\cB \\in \\real^{MK \\times MK}$ and $\\cC \\in \\real^{MK \\times MK}$ that satisfy the following conditions: \n \\begin{subnumcases}{\\label{consensus-condition-both}} \n\t \\cB \\sw=0 \\iff w_1=\\cdots=w_K \\label{consensus-condition-B} \\\\\n\\cC=0 \\quad {\\rm or} \\quad \t\\cC \\sw =0 \\iff \\cB \\sw=0 \\label{consensus-condition-C}\t \n\t\\end{subnumcases} \n For algorithm derivation, the matrices $\\{\\cB,\\cC\\}$ can be any general consensus matrices \\cite{loizou2016new}. Later, we will see how to choose these matrices to recover different decentralized implementations -- see Section \\ref{sec:specific_ins}. With these quantities, it is easy to see that problem \\eqref{decentralized1} with $R(w)=0$ is equivalent to the following problem:\n\\begin{align}\n \\underset{\\ssw\\in \\mathbb{R}^{KM}}{\\text{minimize }}& \\quad\n \\cJ(\\sw)+\\frac{1}{2 \\mu}\\| \\sw\\|_{\\cC}^2 , \\quad {\\rm s.t.} \\ \\cB \\sw=0\\label{decentralized2} \n\\end{align}\nwhere $\\mu >0$ and the matrix $\\cC \\in \\real^{MK \\times MK}$ is a positive semi-definite consensus penalty matrix satisfying \\eqref{consensus-condition-C}. To solve problem \\eqref{decentralized2}, we consider the saddle-point formulation:\n\\eq{\n \\min_{\\ssw} \\max_{\\ssy} \\quad \\cL(\\sw,\\sy) \\define \\cJ(\\sw) + \\frac{1}{ \\mu} \\sy\\tran \\cB\\sw + \\frac{1}{2 \\mu}\\| \\sw\\|_{\\cC}^2\n\\label{saddle_point}\n}\nwhere $\\sy \\in \\real^{MK}$ is the dual variable. To solve \\eqref{saddle_point}, we propose the following algorithm: let $\\sy_{-1}=0$ and $\\sw_{-1}$ take any arbitrary value. Repeat for $i=0,1,\\cdots$\n\n\\begin{subnumcases}{\\label{alg_ATC_framework}}\n\\ssz_i = (I-\\cC) \\sw_{i-1}-\\mu \\grad \\cJ(\\sw_{i-1}) - \\cB \\sy_{i-1} \\label{z_ATC_DIG} &\\textbf{(primal-descent)} \\\\\n\\sy_i = \\sy_{i-1}+ \\cB \\ssz_i \\label{dual_ATC_DIG} &\\textbf{(dual-ascent)} \\\\\n\\sw_i = \\bar{\\cA} \\ssz_{i} \\label{primal_ATC_DIG} &\\textbf{(Combine)} \n \\end{subnumcases}\n where $\\bar{\\cA}=\\bar{A} \\otimes I_M$ and $\\bar{A}$ is a symmetric and doubly-stochastic combination matrix. In the above UDA algorithm, step \\eqref{z_ATC_DIG} is a gradient descent followed by a gradient ascent step in \\eqref{dual_ATC_DIG}, both applied to the saddle-point problem \\eqref{saddle_point} with step-size $\\mu$. The last step \\eqref{primal_ATC_DIG} is a combination step that enforces further agreement. Next we show that by proper choices of $\\bar{\\cA}$, $\\cB$, and $\\cC$ we can recover many state of the art algorithms. To do that, we need to introduce the combination matrix associated with the network.\n\\subsection{Network Combination Matrix} \\label{sec:combina:matrix}\n Thus, we introduce the combination matrices\n \\eq{\n A=[a_{sk}] \\in \\real^{K \\times K}, \\quad \\cA= A \\otimes I_M \\label{combination-cal-A}\n} \n where the entry $a_{sk}=0$ if there is no edge connecting agents $k$ and $s$. The matrix $A$ is assumed to be symmetric and doubly stochastic matrix (different from $\\bar{A}$). We further assume the matrix to be primitive, i.e., there exists an integer $j>0$ such that all entries of $A^j$ are positive. 
\n Under these conditions it holds that $(I_{MK}-\\cA) \\sw=0$ if and only if $w_k=w_s$ for all $k,s$ --- see \\cite{shi2015extra,yuan2019exactdiffI}. \n\\subsection{Specific Instances} \\label{sec:specific_ins}\nWe start by rewriting recursion \\eqref{alg_ATC_framework} in an equivalent manner by eliminating the dual variable $\\sy_i$. Thus, from \\eqref{z_ATC_DIG} it holds that\n \\eq{\n\\ssz_i-\\ssz_{i-1} &= (I-\\cC) (\\sw_{i-1}-\\sw_{i-2})- \\cB (\\sy_{i-1}-\\sy_{i-2}) -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\nonumber \\\\\n &\\overset{\\eqref{dual_ATC_DIG}}{=} (I-\\cC) (\\sw_{i-1}-\\sw_{i-2})- \\cB^2 \\ssz_{i-1} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\nonumber\n}\nRearranging the previous equation we get:\n \\eq{\n\\ssz_i &= (I-\\cB^2) \\ssz_{i-1} + (I-\\cC) (\\sw_{i-1}-\\sw_{i-2}) -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n\\label{eq:sub_atc}\n}\n Utilizing this property, we will now choose specific matrices $\\{\\bar{\\cA},\\cB,\\cC\\}$ and show that we can recover many state of the art algorithms (see Table \\ref{table}): \n\\subsubsection{\\bf Exact diffusion \\cite{yuan2019exactdiffI}}\n If we choose $\\bar{\\cA}=0.5 (I+\\cA)$, $\\cC=0$ and $\\cB^2=0.5 (I- \\cA)$ in \\eqref{eq:sub_atc}, we get: \n \\eq{\n\\ssz_i &= \n\\bar{\\cA}\n \\ssz_{i-1} + \\sw_{i-1}-\\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n}\n Multiplying the previous equation by $\\bar{\\cA}$ and noting from \\eqref{primal_ATC_DIG} that $\\sw_i= \\bar{\\cA} \\ssz_{i}$, we get:\n \\eq{\n\\sw_i=\\bar{\\cA} \\bigg( 2 \\sw_{i-1}\n - \\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\bigg) \\label{exact-diffusion}\n}\n The above recursion is the exact diffusion recursion first proposed in \\cite{yuan2019exactdiffI}. We also note that if we choose $\\cC=0$, $\\cB^2=c (I- \\cA)$ ($c \\in \\real$), and $\\bar{\\cA}=I-\\cB^2$ then we recover the smooth case of the NIDS algorithm from \\cite{li2017nids}. As highlighted in \\cite{li2017nids}, NIDS is identical to exact diffusion for the smooth case when $c=0.5$.\n\\subsubsection{\\bf Aug-DGM \\cite{xu2015augmented}}\n Let $\\cC=0$, $\\bar{\\cA}=\\cA^2$, and $\\cB=I-\\cA$. Substituting into \\eqref{eq:sub_atc}:\n \\eq{\n\\ssz_i &= (2\\cA-\\cA^2) \\ssz_{i-1} + \\sw_{i-1}-\\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n}\nBy multiplying the previous equation by $\\bar{\\cA}=\\cA^2$ and noting from \\eqref{primal_ATC_DIG} that $\\sw_i= \\cA^2 \\ssz_{i}$, we get the recursion:\n\\eq{\n\\sw_i=\\cA \\bigg( 2 \\sw_{i-1}\n - \\cA \\sw_{i-2} -\\mu \\cA \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\bigg) \\label{atc_DGM_eliminate} \n}\nThe above recursion is equivalent to the Aug-DGM \\cite{xu2015augmented} (also known as ATC-DIGing \\cite{nedic2017geometrically}) algorithm:\n\\begin{subequations} \\label{atc_DGM}\n\\eq{ \n\\sw_i&=\\cA(\\sw_{i-1}-\\mu \\ssx_{i-1}) \\label{atc-dgm1} \\\\\n\\ssx_{i}&=\\cA \\big(\\ssx_{i-1}+ \\grad \\cJ(\\sw_i)- \\grad \\cJ(\\sw_{i-1}) \\big) \\label{atc-dgm2}\n}\n\\end{subequations}\nBy eliminating the gradient tracking variable $\\ssx_{i}$, we can rewrite the previous recursion as \\eqref{atc_DGM_eliminate} -- see Appendix \\ref{supp_equiva_representation}.\n\\subsubsection{\\bf ATC tracking method \\cite{di2016next,scutari2019distributed}}\nLet $\\cC=I-\\cA$ and $\\cB=I-\\cA$. 
Substituting into \\eqref{eq:sub_atc}:\n \\eq{\n\\ssz_i &=(2\\cA -\\cA^2) \\ssz_{i-1} \n+ \\cA \\sw_{i-1} - \\cA \\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n}\nBy multiplying the previous equation by $\\bar{\\cA}=\\cA$ and noting from \\eqref{primal_ATC_DIG} that $\\sw_i= \\cA \\ssz_{i}$, we get the recursion:\n\\eq{\n\\sw_i=\\cA \\bigg( 2 \\sw_{i-1}\n - \\cA \\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\bigg) \\label{next_eliminate}\n}\nThe above recursion is equivalent to the following variant of the ATC tracking method \\cite{di2016next,scutari2019distributed}:\n\\begin{subequations} \\label{next}\n\\eq{\n\\sw_i&=\\cA(\\sw_{i-1}-\\mu \\ssx_{i-1}) \\label{next1} \\\\\n\\ssx_{i}&=\\cA \\ssx_{i-1}+ \\grad \\cJ(\\sw_i)- \\grad \\cJ(\\sw_{i-1}) \\label{next2}\n}\n\\end{subequations}\nBy eliminating the gradient tracking variable $\\ssx_i$, we can show that the previous recursion is exactly \\eqref{next_eliminate} -- see Appendix \\ref{supp_equiva_representation}. \n\\subsubsection{\\bf NON-ATC Algorithms ($\\bar{\\cA}=I$)}\nWe note that DIGing \\cite{qu2017harnessing,nedic2017achieving}, EXTRA \\cite{shi2015extra}, and the decentralized linearized alternating direction method of multipliers (DLM) \\cite{ling2015dlm} can also be represented by \\eqref{alg_ATC_framework} with $\\bar{\\cA}=I$ and proper choices of $\\cB^2$ and $\\cC$ -- see Table \\ref{table}. Since $\\bar{\\cA}=I$, these algorithms are not of the ATC form. Please see Appendix \\ref{supp_non_atc} for the details and analysis of non-ATC case.\n\\begin{table}[t] \n\\caption{Listing of some state-of-the-art first-order algorithms that can recovered by specific choices of $\\bar{\\cA}$, $\\cB$, and $\\cC$ in \\eqref{alg_ATC_framework}. The matrix $\\cA$ is a typical symmetric and doubly stochastic network combination matrix introduced in \\eqref{combination-cal-A}. The matrix $\\cL$ is chosen such that the $k$-th block of $\\cL \\sw_i$ is equal to $\\sum_{s \\in \\cN_k} w_{k,i}-w_{s,i}$ and $c>0$ is a step-size parameter. }\n\\centering\n\\large \n\\begin{tabular}{|c|c|c|c|}\n\\thickhline\n\\rowcolor[HTML]{C0C0C0} \n{\\bf ATC algorithms} & $\\bar{\\cA}$ & $\\cB^2$ & $\\cC$ \\\\ \\thickhline\n\\cellcolor[HTML]{EFEFEF} Aug-DGM\/ATC-DIGing \\cite{xu2015augmented,nedic2017geometrically} & $\\cA^2$ & $(I-\\cA)^2$ & $0$ \\\\ \\hline\n \\cellcolor[HTML]{EFEFEF} ATC tracking \\cite{di2016next,scutari2019distributed} & $\\cA$ & $(I-\\cA)^2$ & $I-\\cA$ \\\\ \\hline\n\\cellcolor[HTML]{EFEFEF} Exact diffusion \\cite{yuan2019exactdiffI} & $0.5(I+\\cA)$ & $0.5(I-\\cA)$ & 0 \\\\ \\hline\n \\cellcolor[HTML]{EFEFEF} NIDS \\cite{li2017nids} & $I-c(I-\\cA)$ & $c(I-\\cA)$ & 0 \\\\ \\thickhline\n \\rowcolor[HTML]{C0C0C0} \n{\\bf NON-ATC algorithms} & $\\bar{\\cA}$ & $\\cB^2$ & $\\cC$ \\\\ \\thickhline\n \\cellcolor[HTML]{EFEFEF} DIGing \\cite{qu2017harnessing,nedic2017achieving} & $I$ & $(I-\\cA)^2$ & $I-\\cA^2$ \\\\ \\hline \n\\cellcolor[HTML]{EFEFEF} EXTRA \\cite{shi2015extra} & $I$ & $0.5(I-\\cA)$ & $0.5(I-\\cA)$ \\\\ \\hline \n\\cellcolor[HTML]{EFEFEF} DLM \\cite{ling2015dlm} & $I$ & $c \\mu \\cL$ & $c \\mu \\cL$ \\\\ \\thickhline\n\\end{tabular}\n \\label{table}\n\\end{table}\n\\begin{remark} [\\sc Communication cost] \\label{remak:sharing-variable}{\\rm\nNote that exact diffusion \\eqref{exact-diffusion} requires one round of communication or combination per iteration. This means that each agent sends an $M$ vector to its neighbor per iteration. 
On the other hand, the gradient tracking method \\eqref{next} requires two rounds of combination\/communication per iteration for the vectors $\\sw_{i-1}-\\mu \\ssx_{i-1}$ and $\\ssx_{i-1}$, which means each agent sends a $2M$ vector to its neighbor. Similarly, the Aug-DGM (ATC-DIGing) method \\eqref{atc_DGM} also requires two rounds of combination per iteration for the vectors $\\sw_{i-1}-\\mu \\ssx_{i-1}$ and $\\ssx_{i-1}+ \\grad \\cJ(\\sw_i)- \\grad \\cJ(\\sw_{i-1})$; moreover, it requires communicating these two variables sequentially (at different communication steps). \\qd\n}\n\\end{remark}\n\n\t\\section{Proximal Unified Decentralized Algorithm (PUDA)}\n In this section, we extend UDA \\eqref{alg_ATC_framework} to handle the non-differentiable component $R(w)$ to get a proximal unified decentralized algorithm (PUDA). Let us introduce the network quantity\n\\eq{\n \\cR(\\sw) &\\define {1 \\over K} \\sum_{k=1}^K R(w_k) \n} \nWith this definition, we propose the following recursion: let $\\sy_{-1}=0$ and $\\sw_{-1}$ take any arbitrary value. Repeat for $i=0,1,\\ldots$\n\\begin{subnumcases}{ \\label{alg_prox_ATC_framework}} \n\\ssz_i = (I-\\cC) \\sw_{i-1}-\\mu \\grad \\cJ(\\sw_{i-1}) - \\cB \\sy_{i-1} \\label{z_prox_ATC_DIG} \\\\\n\\sy_i = \\sy_{i-1}+ \\cB \\ssz_i \\label{dual_prox_ATC_DIG} \\\\\n\\sw_i = {\\rm \\bf prox}_{\\mu \\cR}\\big(\\bar{\\cA} \\ssz_{i} \\big) \\label{primal_prox_ATC_DIG} \n \\end{subnumcases}\n We refer the reader to Appendix \\ref{supp_equiva_represent_prox} for specific instances of PUDA \\eqref{alg_prox_ATC_framework} and how to implement them in a decentralized manner. In the following, we will show that $\\sw_i$ in the above recursion converges to $\\one_K \\otimes w^\\star$ where $w^\\star$ is the desired solution of \\eqref{decentralized1}. We first prove the existence and optimality of the fixed points of recursion \\eqref{alg_prox_ATC_framework}.\n\\begin{lemma}[\\sc Optimality Point] \\label{lemma:existence_fixed_optimality}{\\rm Under Assumption \\ref{assump:cost} and condition \\eqref{consensus-condition-both}, a fixed point $(\\sw^\\star, \\sy^\\star, \\ssz^\\star)$ exists for recursions \\eqref{z_prox_ATC_DIG}--\\eqref{primal_prox_ATC_DIG}, i.e., it holds that\n\t\\begin{subnumcases}{}\n\t\\hspace{.5mm} \\ssz^\\star =\\sw^\\star-\\mu \\grad \\cJ(\\sw^\\star)- \\cB \\sy^\\star \\label{p-d_ed-star} \\\\\n\t\\hspace{2.8mm} 0 = \\cB \\ssz^\\star \\label{d-a_ed-star} \\\\\n\t\\sw^\\star = {\\rm \\bf prox}_{\\mu \\cR}(\\bar{\\cA} \\ssz^\\star) \\label{prox_step_ed-star}\n\t\\end{subnumcases}\nMoreover, $\\sw^\\star$ and $\\ssz^\\star$ are unique with $\\sw^\\star=\\one_K \\otimes w^\\star$ where $w^\\star$ is the solution of problem \\eqref{decentralized1}.\n\t}\n\\end{lemma}\n\\begin{proof} See Appendix \\ref{supp_lemma_fixed}. \n\\end{proof} \n \\section{Linear Convergence}\nNote that there exists a particular fixed point $(\\sw^\\star, \\sy_b^\\star, \\sz^\\star)$ where $\\sy_b^\\star$ is a unique vector that belongs to the range space of $\\cB$ -- see \\cite[Remark 2]{alghunaim2019linearly}. In the following we will show that the iterates $(\\sw_i, \\sy_i, \\sz_i)$ converge linearly to this particular fixed point $(\\sw^\\star, \\sy_b^\\star, \\sz^\\star)$. To this end, we introduce the error quantities:\n\\begin{align}\n\t\\tsw_i\\define \\sw_i-\\sw^\\star, \\quad \\tsy_i \\define \\sy_i - \\sy^\\star_b, \\quad \\tsz_i \\define \\ssz_i-\\ssz^\\star\n\\end{align}\nNote that from condition \\eqref{consensus-condition-both} we have $\\cC\\sw^\\star=0$. 
Therefore, from \\eqref{z_prox_ATC_DIG}--\\eqref{primal_prox_ATC_DIG} and \\eqref{p-d_ed-star}--\\eqref{prox_step_ed-star} we can reach the following error recursions:\n\\begin{subnumcases}{}\n\\tsz_i=(I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big) - \\cB \\tsy_{i-1} \\label{error_primal_ed} \\\\\n\\tsy_i = \\tsy_{i-1}+ \\cB \\tsz_i \\label{error_dual_ed} \\\\\n\\tsw_i = {\\rm \\bf prox}_{\\mu \\cR}\\big(\\bar{\\cA} \\ssz_i\\big)-{\\rm \\bf prox}_{\\mu \\cR}(\\bar{\\cA} \\ssz^\\star) \\label{error_prox_ed}\n\\end{subnumcases}\nFor our convergence result, we need the following technical conditions.\n\\begin{assumption}[\\sc Consensus matrices] \\label{assump_combination}\n{\\rm It is assumed that both condition \\eqref{consensus-condition-both} and the following condition hold:\n\\eq{\n\\bar{\\cA}^2 \\leq I-\\cB^2 \\ {\\rm and} \\ 0 \\leq \\cC < 2I \\label{eq:asump_penalty} }\n\\qd\n}\n\\end{assumption} \n\\begin{remark}[\\sc Convergence conditions]{\\rm\n\\label{remark:conv_conditions}\nNote that the above conditions are satisfied for exact diffusion \\cite{yuan2019exactdiffII} and NIDS \\cite{li2017nids}. \nFor the ATC tracking methods \\eqref{atc_DGM} and \\eqref{next}, the conditions translate to the requirement that the eigenvalues of $A$ are between $[0,1]$, rather than the typical $(-1,1]$.\n Although this condition is not necessary, it can be easily satisfied by redefining $A \\leftarrow 0.5 (I+A)$. We also impose it to unify the analysis of these methods through a short proof. Note that most works that analyze decentralized methods under more relaxed conditions on the network topology impose restrictive step-size conditions that depend on the network and on the order of $O(\\nu^{\\theta_1}\/ \\delta^{\\theta_2})$ where $0 < \\theta_1 \\leq 1$ and $\\theta_2>1$ -- see \\cite{nedic2017geometrically,qu2017harnessing,pu2018push,\njakovetic2019unification}. On the other hand, we require step sizes of order $O(1 \/ \\delta)$. Moreover, we will show that any algorithm that fits into our setup with $\\cC=0$ can use a step-size as large as the centralized proximal gradient descent -- see discussion after Theorem \\ref{theorem_lin_convergence}. \\qd\n}\n\\end{remark}\nNote that $\\cB^2$ and $\\cC$ are symmetric; thus, their singular values are equal to their eigenvalues. 
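As an explicit check of Remark~\\ref{remark:conv_conditions}, note that for the exact diffusion choice $\\bar{\\cA}=0.5(I+\\cA)$, $\\cB^2=0.5(I-\\cA)$ and $\\cC=0$ (the Prox-ED instance used later in Section~\\ref{sec-simulation}) one has $I-\\cB^2=\\bar{\\cA}$, so condition \\eqref{eq:asump_penalty} reduces to $\\bar{\\cA}^2\\leq\\bar{\\cA}$, which holds whenever the eigenvalues of $A$ lie in $[-1,1]$ and hence for any symmetric doubly-stochastic combination matrix.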
Moreover, since the square of a symmetric matrix is positive semi-definite, Assumption \\ref{assump_combination} implies $0 < \\underline{\\sigma}(\\cB^2) \\leq 1$ and $\\sigma_{\\max}(\\cC)<2$.\n\n\\begin{theorem}[\\sc Linear Convergence]\\label{theorem_lin_convergence}\n{\\rm\tUnder Assumptions \\ref{assump:cost}--\\ref{assump_combination}, if $\\sy_0=0$ and the step-size satisfies \\eq{\n\\mu < {2-\\sigma_{\\max}(\\cC) \\over \\delta},\n}\n it holds that\n\\eq{\n\t\\|\\tsw_i\\|^2+ \\|\\tsy_i\\|^2 \n\t&\\leq \\gamma \\big(\\|\\tsw_{i-1}\\|^2+ \\|\\tsy_{i-1}\\|^2 \\big)\n}\nwhere $\\gamma= \\max \\big\\{ 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta ) ,1 - \\underline{\\sigma}(\\cB^2) \\big\\}<1$.}\n\\end{theorem}\n\\begin{proof} Squaring both sides of \\eqref{error_primal_ed} and \\eqref{error_dual_ed} we get\n\\eq{\n\t\\|\\tsz_i\\|^2&= \\|(I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big)\\|^2 + \\| \\cB \\tsy_{i-1}\\|^2 \\nonumber \\\\\n\t& \\ -2 \\tsy_{i-1}\\tran \\cB \\left((I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big)\\right) \n\t\\label{er_sq_primal_ed}\n}\nand\n\\eq{\n\t\\|\\tsy_i\\|^2 =\\|\\tsy_{i-1}+ \\cB \\tsz_i \\|^2 &= \\|\\tsy_{i-1}\\|^2+ \\| \\cB \\tsz_i \\|^2 + 2 \\tsy_{i-1} \\tran \\cB \\tsz_i \\nonumber \\\\\n\t&\\overset{\\eqref{error_primal_ed}}{=} \\|\\tsy_{i-1}\\|^2+ \\| \\tsz_i \\|^2_{\\cB^2} - 2 \\|\\cB \\tsy_{i-1}\\|^2 \\nonumber \\\\ \n\t& \\quad +2 \\tsy_{i-1}\\tran \\cB \\left((I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big)\\right) \\label{er_sq_dual_ed}\n}\nAdding equation \\eqref{er_sq_dual_ed} to \\eqref{er_sq_primal_ed} and rearranging, we get \n\\eq{\n\\|\\tsz_i\\|^2_{\\cQ} \\hspace{-0.6mm}+\\hspace{-0.6mm} \\|\\tsy_i\\|^2 \\hspace{-0.6mm}=\\hspace{-0.6mm} \\|(I-\\cC)\\tsw_{i-1} \\hspace{-0.6mm}- \\hspace{-0.6mm} \\mu \\big(\\grad \\cJ(\\sw_{i-1})\\hspace{-0.6mm}-\\hspace{-0.6mm}\\grad \\cJ(\\sw^\\star) \\big)\\|^2 \\hspace{-0.6mm}+\\hspace{-0.6mm} \\|\\tsy_{i-1}\\|^2 \\hspace{-0.6mm}-\\hspace{-0.6mm} \\|\\cB \\tsy_{i-1}\\|^2 \\label{err_sum_ed}\n}\nwhere $\\cQ = I - \\cB^2$ is positive semi-definite from \\eqref{eq:asump_penalty}. Since $\\sy_0 = 0$ and $\\sy_i = \\sy_{i-1} + \\cB \\ssz_i$, we know $\\sy_i\\in \\mbox{range}(\\cB)$ for any $i$. Thus, both $\\sy_i$ and $\\sy_b^\\star$ lie in the range space of $\\cB$, and it holds that $\n\\|\\cB \\tsy_{i-1}\\|^2 \\geq \n\\underline{\\sigma}(\\cB^2) \\|\\tsy_{i-1}\\|^2 $. 
Therefore, we can bound \\eqref{err_sum_ed} by\n\\eq{\n\t\\|\\tsz_i\\|^2_{\\cQ}+ \\|\\tsy_i\\|^2 \n\t& \\le\\|\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\\|^2 \\hspace{-0.5mm}+\\hspace{-0.5mm} (1- \\underline{\\sigma}(\\cB^2))\\|\\tsy_{i-1}\\|^2 \\label{err_sum1_ed}\n}\nAlso, since $\\cJ(\\sw)+{1 \\over 2 \\mu}\\|\\sw\\|^2_{\\cC}$ is $\\delta_{\\mu}=\\delta+{1 \\over \\mu} \\sigma_{\\max}(\\cC)$-smooth, it holds that \\cite[Theorem 2.1.5]{nesterov2013introductory}:\n\\eq{\n\\| \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\\|^2 \\leq \\delta_{\\mu} \\tsw_{i-1}\\tran \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\n}\nUsing this bound, it can be easily verified that:\n\\eq{\n&\\|\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\\|^2 \\nnb\n&\\leq \\|\\tsw_{i-1}\\|^2 - \\mu (2-\\mu \\delta_{\\mu} ) \\tsw_{i-1}\\tran \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big) \\nnb\n&\\leq \\big(1- \\mu \\nu (2- \\mu\\delta_{\\mu} )\\big) \\|\\tsw_{i-1}\\|^2=\\big(1- \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta )\\big) \\|\\tsw_{i-1}\\|^2\n} \nwhere in the last step we used the fact that $2-\\mu\\delta_\\mu > 0$, which follows from the condition $\\mu<(2-\\sigma_{\\max}(\\cC))\/\\delta$, and the fact that $\\cJ(\\sw)+{1 \\over 2 \\mu}\\|\\sw\\|^2_{\\cC}$ is $\\nu$-strongly convex. Thus, we can substitute the previous inequality in \\eqref{err_sum1_ed} and get\n\\eq{\n\t\\|\\tsz_i\\|^2_{\\cQ} \\hspace{-0.5mm}+\\hspace{-0.5mm} \\|\\tsy_i\\|^2 \n\t& \\le \\big(\\hspace{-0.5mm} 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta ) \\hspace{-0.5mm}\\big)\\hspace{-0.5mm}\\|\\tsw_{i-1}\\|^2 \\hspace{-0.5mm}+ (1- \\underline{\\sigma}(\\cB^2))\\|\\tsy_{i-1}\\|^2 \\label{err_sum1_ed-2_ed}\n} \nFrom \\eqref{error_prox_ed} and the nonexpansive property of the proximal operator, we have\n\\eq{\n\t\\|\\tsw_i\\|^2 &= \\|{\\rm \\bf prox}_{\\mu \\cR}\\big(\\bar{\\cA} \\ssz_i \\big)-{\\rm \\bf prox}_{\\mu \\cR}(\\bar{\\cA} \\ssz^\\star) \\|^2 \\leq \\|\\bar{\\cA} \\tsz_i\\|^2 \\leq \\| \\tsz_i\\|^2_{\\cQ} \\label{prox_bound_last}\n}\nwhere the last step holds because of condition \\eqref{eq:asump_penalty} so that $\\|\\bar{\\cA} \\tsz_i\\|^2=\\| \\tsz_i\\|^2_{\\bar{\\cA}^2} \\leq \\| \\tsz_i\\|^2_{\\cQ}$. Substituting \\eqref{prox_bound_last} into \\eqref{err_sum1_ed-2_ed} we reach our result. Finally we note that:\n\\eq{\n\\big(\\hspace{-0.5mm} 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta ) \\hspace{-0.5mm}\\big) < 1 \\iff \\mu < {2-\\sigma_{\\max}(\\cC) \\over \\delta}\n}\n\\end{proof}\n\\noindent An interesting choice of $\\bar{\\cA}$, $\\cB$, and $\\cC$ is the class with $\\cC=0$. For $\\cC=0$, which is the case for exact diffusion \\eqref{exact-diffusion} and Aug-DGM (ATC-DIGing) \\eqref{atc_DGM}, the step size bound in Theorem \\ref{theorem_lin_convergence} becomes $\\mu<{2 \\over \\delta}$, which is independent of the network and as large as the centralized proximal gradient descent. Moreover, for $\\cC=0$ the convergence rate becomes $\\gamma= \\max \\{ 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\mu \\delta ) ,1 - \\underline{\\sigma}(\\cB^2)\\}<1$, which separates the network effect from the cost function. 
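For instance, with the step size $\\mu=1/\\delta$ the cost term becomes $1-\\mu\\nu(2-\\mu\\delta)=1-\\nu/\\delta$, so that $\\gamma=\\max\\{1-\\nu/\\delta,\\ 1-\\underline{\\sigma}(\\cB^2)\\}$: the linear rate is limited by the worse of a term governed only by the condition number $\\delta/\\nu$ of the smooth costs and a term governed only by the network through $\\cB^2$.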
If we further choose $\\bar{\\cA}=\\cA^j$ and and $\\cB^2=I-\\cA^j$ for integer $j\\geq 1$, then we have $1 - \\underline{\\sigma}(\\cB^2)=\\lambda_2(\\cA^j) \\rightarrow 0$ as $j \\rightarrow \\infty$ where $\\lambda_2(\\cA^j)$ is the second largest eigenvalue of $\\cA^j$ . Thus, the convergence rate $\\gamma= 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\mu \\delta )$ can match the rate of centralized algorithms for large $j$. A similar conclusion appears for NIDS \\cite{li2017nids} but for the smooth case, which is subsumed in our framework. \n\\begin{remark}[\\sc Network Effect]{\\rm\n\\label{remark:numberofag}\n The convergence rate depends on the network graph through the terms $\\underline{\\sigma}(\\cB^2)$ and $\\sigma_{\\max}(\\cC)$. Given a certain graph, it will depend on the number of agents {\\em indirectly} as we now explain. If we choose $\\cB^2=I-\\cA$ and $\\cC=0$ where $\\cA$ is constructed as in Section \\ref{sec:combina:matrix} and satisfy Assumption \\ref{assump_combination}. Then, we have that $1 - \\underline{\\sigma}(\\cB^2)=\\lambda_2(\\cA)$ where $\\lambda_2(\\cA)$ denotes the second largest eigenvalue of $\\cA$. For a cyclic network it holds that $\\lambda_2(\\cA)=1-\\cO(1\/K^2)$. For a grid network we have $\\lambda_2(\\cA)=1-\\cO(1\/K)$. For a fully connected network, we can choose $\\cA= {1 \\over K} \\one \\one\\tran$ so that $\\lambda_2(\\cA)=0$. In this case, we can also choose $\\bar{\\cA}= {1 \\over K} \\one \\one\\tran$ and the primal updates in \\eqref{alg_prox_ATC_framework} becomes so that each agent updates its vector via a proximal gradient descent update on the objective function given in problem \\eqref{decentralized1}. \\qd\n}\n\\end{remark}\n\\section{Simulations on real data} \\label{sec-simulation} \nIn this section we test the performance of three different instances of the proposed method \\eqref{alg_prox_ATC_framework} against some state-of-the-art algorithms. We consider the following sparse logistic regression problem:\n\\eq{\n\t\\min_{w\\in \\real^M} \\frac{1}{K}\\sum_{k=1}^K J_k(w) + \\rho \\|w\\|_1\t\\quad \\mbox{where}\\quad J_k(w) = \\frac{1}{L}\\sum_{\\ell=1}^{L}\\ln(1+\\exp(-y_{k,\\ell} x_{k,\\ell}\\tran w)) + \\frac{\\lambda}{2}\\|w\\|^2 \\nonumber\n}\nwhere $\\{x_{k,\\ell}, y_{k,\\ell}\\}_{\\ell=1}^L$ are local data kept by agent $k$ and $L$ is the size of the local dataset. We consider three real datasets: Covtype.binary, MNIST, and CIFAR10. The last two datasets have been transformed into binary classification problems by considering data with two labels, digits two and four (`2' and `4') classes for MNIST, and cat and dog classes for CIFAR-10. In Covtype.binary we use 50,000 samples as training data and each data has dimension 54. In MNIST we use 10,000 samples as training data and each data has dimension 784. In CIFAR-10 we use 10,000 training data and each data has dimension 3072. All features have been preprocessed and normalized to the unit vector with sklearn's normalizer\\footnote{\\url{https:\/\/scikit-learn.org}}.\n\\begin{figure*}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.35]{network.jpg}\n\t\\caption{The network topology used in the simulation.}\n\t\\label{fig-network}\n\\end{figure*} \n\n\nFor the network, we generated a randomly connected network with $K=20$ agents, which is shown in Fig. \\ref{fig-network}. The associated combination matrix $A$ is generated according to the Metropolis rule \\cite{sayed2014nowbook}. For all simulations, we assign data evenly to each agent. 
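For concreteness, the listing below is a minimal NumPy sketch of the Prox-ED instance of \\eqref{alg_prox_ATC_framework} used here, i.e., $\\bar{\\cA}=0.5(I+\\cA)$, $\\cB^2=0.5(I-\\cA)$ and $\\cC=0$ (see the caption of Fig.~\\ref{fig-lr} and Appendix \\ref{supp_equiva_represent_prox}). It is an illustrative re-implementation rather than the code used to produce the figures: the variable layout and function names are ours, local gradients $\\nabla J_k$ are used directly (which amounts to rescaling $\\mu$ by $K$ relative to the network notation), and the dual variable is propagated through $\\cB\\sy_i$ so that only $\\cB^2$, and never the matrix square root $\\cB$, is needed. Note that $\\bar{\\cA}\\ssz_i$ and $\\cB^2\\ssz_i$ are both obtained from the single combination $\\cA\\ssz_i$, consistent with one round of communication per iteration.
\\begin{verbatim}
import numpy as np

def soft_threshold(x, tau):
    # proximal operator of tau*||.||_1, applied entry-wise
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def local_grad(w, X, y, lam):
    # gradient of (1/L) sum_l ln(1 + exp(-y_l x_l' w)) + (lam/2)||w||^2
    s = -y / (1.0 + np.exp(y * (X @ w)))
    return X.T @ s / X.shape[0] + lam * w

def prox_ed(A, data, mu, rho, lam, n_iter):
    # A: K x K symmetric doubly-stochastic matrix; data[k] = (X_k, y_k) of agent k
    K, M = A.shape[0], data[0][0].shape[1]
    W = np.zeros((K, M))        # row k holds w_{k,i}
    Yhat = np.zeros((K, M))     # row k holds the k-th block of B y_i (starts at zero)
    for _ in range(n_iter):
        G = np.vstack([local_grad(W[k], *data[k], lam) for k in range(K)])
        Z = W - mu * G - Yhat               # z-update with C = 0
        AZ = A @ Z                          # the single combination step per iteration
        Yhat = Yhat + 0.5 * (Z - AZ)        # y-update, written via B^2 = (I - A)/2
        W = soft_threshold(0.5 * (Z + AZ), mu * rho)  # prox step with Abar = (I + A)/2
    return W
\\end{verbatim}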
We set $\\lambda=10^{-4}$ and $\\rho=2\\times10^{-3}$ for Covtype, $\\lambda=10^{-2}$ and $\\rho=5\\times10^{-4}$ for CIFAR-10, and $\\lambda=10^{-4}$ and $\\rho=2\\times10^{-3}$ for MNIST. The simulation results are shown in Figure \\ref{fig-lr}. The decentralized implementations of Prox-ED, Prox-ATC I, and prox-ATC II are given in Appendix \\ref{supp_equiva_represent_prox}. For each algorithm, we tune the step-sizes manually to achieve the best possible convergence rate. We notice that the performance of each algorithm differs in each data set and Prox-ED performs the best in our simulation setup. The $x$-axis in these plots is in terms of rounds of communication per iteration. Note that Prox-ATC I and Prox-ATC II require two rounds of communication per iteration compared to only one round for all other algorithms -- see Remark \\ref{remak:sharing-variable}.\n\\begin{figure*}[t!]\n\t\\centering\n\t\\includegraphics[scale=0.35]{covtype_plot.pdf}\n\t\\includegraphics[scale=0.35]{cifar10_plot.pdf}\n\t\\includegraphics[scale=0.35]{mnist_plot.pdf}\n\t\\caption{ \\footnotesize Simulation results. The $y$-axis indicates the relative squared error $\\sum_{k=1}^{K}\\|w_{k,i} - w^\\star\\|^2\/\\|w^\\star\\|^2$. Prox-ED refers to \\eqref{alg_prox_ATC_framework} with $\\bar{\\cA}=0.5 (I+\\cA)$, $\\cB^2=0.5 (I- \\cA)$, and $\\cC=0$. Prox-ATC I refers to \\eqref{alg_prox_ATC_framework} with $\\bar{\\cA}=\\cA^2$, $\\cB=I-\\cA$, and $\\cC=0$. Prox-ATC II refers to \\eqref{alg_prox_ATC_framework} with $\\bar{\\cA}=\\cA$, $\\cB=I-\\cA$, and $\\cC=I-\\cA$. DL-ADMM \\cite{chang2015multi}, PG-EXTRA \\cite{shi2015proximal}, NIDS \\cite{li2017nids}.\n}\n\t\\label{fig-lr}\n\\end{figure*} \n\n\\section{Separate non-smooth terms: sublinear rate} \\label{sec:sublinearbound}\n In this section, we will show that if each agent owns a different local non-smooth term, then {\\em exact} global linear convergence cannot be attained in the worst case (for all problem\ninstances) although it can still be possible for some special cases. Consider the more general problem with agent specific regularizers:\n\\eq{\n\\label{eq:separarate-regularizer}\n\\min_{w\\in \\real^M}\\ \\frac{1}{K}\\sum_{k=1}^{K}J_k(w)+R_k(w),\n}\nwhere $J_k(w)$ is a strongly convex smooth function and $R_k(w)$ is non-smooth convex with closed form proximal mappings (each $J_k(w)$ and $R_k(w)$ are further assumed to be closed and proper functions). Although many algorithms (centralized and decentralized) exist that solve \\eqref{eq:separarate-regularizer}, none have been shown to achieve linear convergence in the presence of general non-smooth proximal terms $R_k(w)$. In the following, by tailoring the results from \\cite{woodworth2016tight}, we show that this is not possible when having access to the proximal mapping of each individual non-smooth term $R_k(w)$ separately.\n\\subsection{Sublinear Lower Bound} \\label{sec-sublinear-bound}\nLet $\\mathcal{H}$ be a deterministic algorithm that queries\n\\[\n\\{J_k(\\cdot), R_k(\\cdot), \\nabla J_k(\\cdot), {\\rm \\bf prox}_{\\mu_{i,k} R_k}(\\cdot)\n\\,|\\,\n\\mu_{i,k}>0,\\,\nk=1,\\dots,K\n\\}\n\\]\nonce for each iteration $i=0,1,\\dots$.\nTo clarify, the scalar parameter $\\mu_{i,k}>0$ can differ for $i=0,1,\\dots$ and $k=1,\\dots,K$ or they can be constants (e.g.\\ $\\mu_{i,k}=\\mu >0$).\nNote that $\\mathcal{H}$ has the option to combine the queried values in any possible combination (e.g., it can only use certain information from certain communications). 
Thus, $\\mathcal{H}$ includes decentralized algorithms in which communication is restricted to edges on a graph.\n\n\nConsider the specific instance of \\eqref{eq:separarate-regularizer}\n\\eq{\n\\min_{w\\in \\real^M} \\ F_\\nu(w)=\\frac{\\nu}{2}\\|w\\|^2+\\frac{1}{K}\\sum_{k=1}^{K} R_k(w)\n\\label{cost_F_nu}}\nwhere $\\nu>0$ and $J_k(w)= \\frac{\\nu}{2K}\\|w\\|^2$. Assume $R_k(w)<\\infty$ if and only if $\\|w\\|\\le B$ and $|R_k(w_1)-R_k(w_2)|\\le G\\|w_1-w_2\\|$ for all $w_1,w_2$ (where $B$ and $G$ are some positive constants) such that \n$\\|w_1\\|\\le B$ and $\\|w_2\\|\\le B$. To prove that linear convergence is not possible, we will reduce our setup to $\\min_{w\\in \\real^M}\\, F_0(w)$, which has a known lower bound \\cite{woodworth2016tight}.\nLet $\\mathcal{H}_o$ be a deterministic algorithm that queries\n\\[\n\\{ R_k(\\cdot), {\\rm \\bf prox}_{\\mu_{i,k} R_k(\\cdot)}(\\cdot)\n\\,|\\,\n\\mu_{i,k}>0,\\,\nk=1,\\dots,K\n\\}\n\\]\nonce for each iteration $i=0,1,\\dots$ and communicates through a fully connected network.\nThe following result is a special case of the more general result \\cite[Theorem~1]{woodworth2016tight}.\n\\begin{theorem} \n\\label{thm:woodworth_lower_bnd}\nLet $00$ as otherwise it can be used to efficiently solve $\\min_{w}\\, F_0(w)$ and contradict Theorem~\\ref{thm:woodworth_lower_bnd}.\n\n\\begin{theorem}\n\\label{thm:main_lower_bound}\nLet $0<\\nu$, $00$ and all $i \\geq i_o$.} has been established in \\cite{latafat2017new} when the functions $\\{J_k(\\cdot),R_k(\\cdot)\\}$ are piecewise linear quadratic.\nThis result does not contradict our result as the linear rate and the number of iterations needed to observe the linear rate are \\emph{dependent} on the problem dimension.\nOur linear convergence result of Theorem \\ref{theorem_lin_convergence} is dimension independent as it holds for any dimension $M$.\n} \\qd\n\\end{remark}\n\\subsection{Numerical Counterexample}\nIn this section, we numerically show that linear convergence to the exact solution $w^\\star$ is not possible in general. We consider an instance of \\eqref{eq:separarate-regularizer} with $K = 2$, $M$ is a very large even number, and quadratic smooth terms $J_k(w)=\\eta\/2 \\|w\\|^2$ for some $\\eta >0$. We let the non-smooth terms be\n\\begin{subequations}\\label{Rk}\n\t\\eq{\n\t\tR_1(w)&=|\\sqrt{2}w(1)-1| \\hspace{-0.5mm}+\\hspace{-0.5mm} |w(2)-w(3)| \\hspace{-0.5mm}+\\hspace{-0.5mm} |w(4)-w(5)| \\hspace{-0.5mm}+\\hspace{-0.5mm} \\cdots \\hspace{-0.5mm}+\\hspace{-0.5mm}|w(M\\hspace{-0.5mm}-\\hspace{-0.5mm}2)\\hspace{-0.5mm}-\\hspace{-0.5mm}w(M\\hspace{-0.5mm}-\\hspace{-0.5mm}1)| \\\\\n\t\tR_2(w)&=|w(1)-w(2)|+|w(3)-w(4)|+\\cdots\n\t\t+|w(M-1)-w(M)| \n\t}\n\\end{subequations}\n Both ${\\bf prox}_{R_1}$ and ${\\bf prox}_{R_2}$ have closed forms --- see Appendix \\ref{app_counter_example_proximal} for details.\n The above construction is related to the one in \\cite{arjevani2015communication}, which was used to derive lower bounds for a different class of algorithms as explained in the introduction.\n\nIn the numerical experiment, we test the performance of two well known decentralized proximal methods, PG-EXTRA \\cite{shi2015proximal} and DL-ADMM \\cite{chang2015multi,aybat2018distributed}. Note that the structure of updates \\eqref{alg_prox_ATC_framework} are designed to handle a common non-smooth term case only, which is why we do not test it in this numerical counterexample. We set $M=2000$ and $\\eta = 1$. The step-sizes for both PG-EXTRA and DL-ADMM are set to $0.005$. 
The combination matrix is set as $A = \\frac{1}{2}\\mathds{1}_2 \\mathds{1}_2\\tran$. The numerical results in the left plot of Fig. \\ref{fig-counter-example} shows that both PG-EXTRA and DL-ADMM converge sublinearly to the solution. In particular, we see that the error curves after around $10^3$ iterations has sublinear convergence. The right plot in Fig. \\ref{fig-counter-example} shows the squared error where both $x$-axis and $y$-axis are in logarithmic scales. In this scale, a straight line indicates a sublinear rate, which is clearly visible after around $10^3$ iterations. \nNo global linear convergence is observed in the simulation for sufficiently large dimension $M$ and algorithms independent of $M$, which is consistent with our discussion in Remark \\ref{remark:lowerbound_dimension}. \n\\begin{figure*}[h!]\n\\centering\n\t\\includegraphics[scale=0.55]{counter_example_semilogy.pdf}\n\t\\includegraphics[scale=0.55]{counter_example_loglog.pdf}\n\t\\caption{ \\footnotesize Numerical counterexample simulations. Both $y$-axis and $x$-axis are in logarithmic scales in the right plot. PG-EXTRA \\cite{shi2015proximal} and DL-ADMM \\cite{chang2015multi,aybat2018distributed} converge sublinearly to the solution\n of the proposed numerical counterexample.}\n\t\\label{fig-counter-example}\n\\end{figure*} \n\\section{Concluding Remarks}\nIn this work, we proposed a proximal primal-dual algorithmic framework, which subsumes many existing algorithms in the smooth case, and established its linear convergence under strongly-convex objectives. Our analysis provides wider step-size conditions than many existing works, which provides insightful indications on the performance of each algorithm. That said, these step-size bound comes at the expense of stronger assumption on the combination matrices -- see Remark \\ref{remark:conv_conditions}. It is therefore of interest to study the interrelation between the step-sizes and combination matrices for linear convergence. Regarding the discussion below Theorem \\ref{theorem_lin_convergence}, a useful future direction is to study how to optimally choose $\\bar{\\cA}$, $\\cB$, and $\\cC$ as a function of $\\cA$ to get the best possible convergence rate while balancing the communication cost per iteration. \n\n\n\n \n \n\\medskip\n{\\small\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Supergiant Fast X--ray Transients before {\\it Swift}}\n\nThe Galactic plane monitoring performed with the INTEGRAL satellite led to\nthe discovery of several new sources (Bird et al., 2007). \nSome of them displayed \nsporadic, recurrent, bright and short flares, with a typical duration of a few hours and reaching\na peak luminosity of 10$^{36}$--10$^{37}$~erg~s$^{-1}$\n(Sguera et al, 2005, 2006; Negueruela et al. 2006).\nRefining the INTEGRAL positions at arcsec level with \nX--ray follow-up observations, allowed the \nassociation with OB supergiant companions\n(e.g. 
Halpern et al.\\ 2004; Pellizza et al.\\ 2006; Masetti et al.\\ 2006;\nNegueruela et al.\\ 2006b; Nespoli et al.\\ 2008).\n\nOther important properties are the spectral similarity with \naccreting pulsars (hard power law spectra with a high energy cut-off around 15--30~keV) \nand the large dynamic range, from a peak luminosity\nof 10$^{36}$--10$^{37}$~erg~s$^{-1}$, down to a quiescent emission of 10$^{32}$~erg~s$^{-1}$.\nThe two main characterizing properties (the transient X--ray emission and\nthe association with supergiant companions) \nindicate that these transients form a new class of High Mass X--ray Binaries, \nlater called\nSupergiant Fast X--ray Transients (SFXTs; e.g. Negueruela et al. 2006).\n\nThe similarities of the SFXTs with the properties of accreting pulsars suggest \nthat the majority of these transients are indeed HMXBs hosting a neutron star,\nalthough only in three SFXTs X--ray pulsations have been discovered: \nIGR~J11215--5952 ($P_{\\rm spin}$$\\sim$186.8 \\,s, Swank et al.\\ 2007); \nAX~J1841.0--0536 ($P_{\\rm spin}$$\\sim$4.7\\,s, Bamba et al.\\ 2001)\nand IGR~J18483--0311 ($P_{\\rm spin}$$\\sim$21 \\,s, Sguera et al.\\ 2007).\n\nThe confirmed SFXTs are eight \n(IGR~J08408--4503, IGR~J11215--5952, IGR~J16479--4514, XTE~J1739--302, IGR~J17544--2619,\nSAX~J1818.6--1703, AX~J1841.0-0536 and IGR~J18483--0311),\nwith $\\sim$15 more candidates which\nshowed short transient flaring activity,\nbut with no confirmed association with an OB supergiant companion.\n\nThe main mechanisms proposed to explain the short and bright flaring activity\nfrom SFXTs deal with the properties of the accretion from the \nsupergiant wind (see Sidoli 2008 for a review), either \nrelated with the wind structure (in't Zand 2005; Walter \\& Zurita Heras, 2007;\nNegueruela et al. 2008; Sidoli et al. 2007) or to gated mechanisms which allow accretion onto\nthe neutron star surface only when the centrifugal or the magnetic barriers are open, depending\non the values of the neutron star spin and surface magnetic field (e.g. Bozzo et al. 2008 and references therein).\n\nThe properties of the SFXTs outbursts, although sporadic and short, \nhave been studied more in depth than the quiescent state.\nThe observations performed outside the bright outbursts have been indeed only a few and short (a few ks long), \nand caught these sources either in a low level flaring activity (IGR~J17544--2619, Gonzalez-Riestra et al. 2004) \nor in quiescence (with a very soft spectrum, likely thermal, with an \nX--ray luminosity of $\\sim$10$^{32}$~erg~s$^{-1}$). \nNote that this latter quiescent state has been observed\n{\\em only} in a couple of SFXTs, IGR~J17544--2619 (in't Zand 2005) and IGR J08408--4503\n(Leyder et al.\\ 2007).\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics*[angle=270,scale=0.5]{inte_igr16479_lc.ps}\n\\includegraphics*[angle=270,scale=0.5]{inte_igr17544_lc.ps}\\\\\n\\includegraphics*[angle=270,scale=0.5]{inte_xte1739_lc.ps}\n\\includegraphics*[angle=270,scale=0.5]{inte_igr18410_lc.ps}\\\\\n\\end{center}\n\\caption{\\scriptsize Light curves of the 4 SFXTs monitored with \\emph{Swift}\/XRT (0.2--10 keV), \nfrom 2007 October to 2008 September 10. \nThe upward pointing arrow in the IGR~J17544--2619 light curve \nmarks an outburst which triggered the BAT Monitor \non MJD 54412 (2007-11-08) but could not be observed with XRT because the source was \nSun-constrained. The downward-pointing arrows are 3-$\\sigma$ upper limits. 
\nThe gap in the observations between about December 2007 and January 2008\nis because the sources were Sun-costrained.\n}\n\\label{lsfig:4lc}\n\\end{figure}\n\n\n\n\\section{{\\it Swift} monitoring of Supergiant Fast X--ray Transients}\n\nBefore the {\\it Swift} campaign (which is still in progress since October 2008) \nno long-term monitoring of SFXTs have ever been performed to study \nthe status where these transients spend most of their life. \nNevertheless, it has been assumed by several authors, without observational evidence, that \nSFXTs spend most of the time in quiescence, when they are not in bright outburst.\n\nThe first observations with {\\it Swift} of a member of this new class of sources \nhave been performed during the 2007 February outburst of IGR~J11215--5952 (Romano et al. 2007).\nThis outburst could be completely monitored thanks to its predictability, because IGR~J11215--5952 was\nthe first SFXT where periodically recurrent outbursts were discovered (Sidoli et al. 2006).\nThese observations are one of the most complete set of observations of a SFXT in outburst,\nand clearly demonstrate, for the first time, that the short (a few hours long) \nflares observed with INTEGRAL (or RXTE, in a few sources), are actually \npart of a much longer outburst event lasting a few days, \nimplying that the accretion phase lasts longer than what was previously thought \n(Romano et al. 2007; Sidoli et al. 2007).\n\nThe success of this campaign led us to propose with {\\it Swift} the first wide-band, long-term \nand deep monitoring campaign of a sample of four SFXTs,\nwith the main aim of\n(1)-studying the long-term properties of these transients, (2)-performing \na truly simultaneous spectroscopy \n(0.3--150 keV) during outbursts, (3)-studying the outburst recurrence and their durations (see \nalso Romano et al. 2008b, these proceedings).\nThe 4 targets are: XTE~J1739--302, IGR~J17544--2619, IGR~J16479--4514\nand AX~J1841.0--0536\/IGR~J18410--0535.\nThe {\\it Swift} campaign consists of 2--3 observations\/week\/source (each observation lasts 1--2~ks; \nsee Romano et al. 2008b, these proceedings, for the campaign strategy).\nFig.~\\ref{lsfig:4lc} shows the four {\\it Swift}\/XRT light curves (0.2--10~keV) \naccumulated in the period October 2007--September 2008.\n\nHere we report on the entire {\\it Swift} monitoring campaign, updated to 2008 September 10.\nIn particular, we focus on the out-of-outburst behaviour \n(Sidoli et al. 2008a, hereafter Paper~I) \nand on the bright flares observed \nfrom two SFXTs of the sample, XTE~J1739--302 and IGR~J17544--2619 \n(Sidoli et al. 2008b, hereafter Paper~III; Sidoli et al. in preparation). \nAnother outburst caught during this campaign\nfrom IGR~J16479--4514 was published by Romano et al. (2008a, Paper~II).\n\nPreliminary results from the last outbursts from XTE~J1739--302 (triggered on 2008 August 13, Romano et al. 2008c) \nand from IGR~J17544--2619 (triggered on 2008 September 4, Romano et al. 2008d) \nare also discussed here for the first time. \nA complete analysis will be addressed in Sidoli et al. 
(in preparation).\n\n\n\n\\subsection{SFXTs: the long-term X-ray emission outside the bright outbursts}\n\nThe SFXTs light curves of Fig.~\\ref{lsfig:4lc} show a clear \nevidence for highly variable source fluxes even outside the bright outbursts (which were caught\nin three of the four sources we are monitoring).\nThe light curve variability is on timescales of days,\nweeks and months, with a dynamic range (outside bright outbursts)\nof more than one order of magnitude in all four SFXTs.\nThese sources spend most of the time in a \nfrequent low-level flaring activity\nwith an average 2--10 keV luminosity of about 10$^{33}$--10$^{34}$~erg~s$^{-1}$ (see Paper~I).\n\nThe average spectra of this out-of-outburst emission are hard (although not as hard as during the\nbright flares) and can be fitted with an absorbed power law\nwith a photon index in the range 1--2. The absorbing column density is typically higher than\nthe Galactic value, which can be derived from the optical extinction toward the optical counterparts.\n\nThe out-of-outburst emission in IGR~J16479--4514 and in AX~J1841.0--0536 appears to be modulated\nwith a periodicity in the range 22--25~days, although a full timing analysis\nwill be addressed at the end of the campaign.\nThe spectral properties together with the high dynamic range in the flux \nvariability when the sources are {\\em not} in outburst, demonstrate \nthat SFXTs still accrete matter even outside their bright outbursts,\nand that the quiescent state (characterized by a very soft spectrum and by a low level of emission\nat about 10$^{32}$~erg~s$^{-1}$) is not the typical long-term state in SFXTs. \n\n\n \n\\subsection{SFXTs: bright flares from IGR~J17544--2619 and XTE~J1739--302}\n\nTypically, the SFXTs long-term light curves show a number of bright outbursts, reaching peak luminosities \nof a few 10$^{36}$~erg~s$^{-1}$, assuming the distances determined by Rahoui et al. 
(2008).\nThe only source which did not undergo bright flares is AX~J1841.0--0536\/IGR~J18410--0535,\nwhich showed a flux variability of more than two orders of magnitude.\n\nDuring the {\\it Swift} campaign, three and two outbursts were caught respectively from \nIGR~J17544--2619 (the first of them triggered BAT, but could not be observed with {\\it Swift}\/XRT \nbecause of Sun-constraints) and from XTE~J1739--302, at the following\ndates: on 2007 November 8, 2008 March 31 and 2008 September 4 from IGR~J17544--2619, and\non 2008 April 8 and 2008 August 13 from XTE~J1739--302.\nThus, bright flares in this two prototypical SFXTs occur on a timescale of $\\sim$4--5 months\n(the three outbursts from IGR~J17544--2619 were spaced by $\\sim$144 and 157 days, respectively, while\nthe two outbursts from XTE~J1739--302 were spaced by 127~days).\n\nThe bright flare from IGR~J17544--2619 (on 2008 March 31; Paper~III) \ncould be observed simultaneously with XRT (0.2--10~keV) and BAT (15--150~keV).\nA fit with a power law with a high energy cut-off ($e^{(E_{\\rm cut}-E)\/E_{\\rm fold}}$) \nresulted in the following parameters:\n$N_{\\rm H}$=(1.1$\\pm{0.2}$)$\\times 10^{22}$~cm$^{-2}$, $\\Gamma$=0.75$\\pm{0.11}$, \ncut-off energy $E_{\\rm cut}$=18$\\pm{2}$~keV\nand e-folding energy $E_{\\rm fold}$=4$\\pm{2}$~keV, reaching a \nluminosity of 5$\\times$10$^{37}$~erg~s$^{-1}$ (0.5--100~keV at 3.6~kpc).\nNote that the out-of-outburst emission observed with XRT below 10 keV\nis softer and more absorbed than the emission during this flare.\n\nThe other flare observed from IGR~J17544--2619 on 2008 September 4 was even brighter (Romano et al. 2008d), \nand was preceeded by intense activity for a few days as observed with INTEGRAL \nduring the Galactic bulge monitoring programme (Kuulkers et al. 2008; Romano et al. 2008d).\nThe XRT light curve exceeded 20~s$^{-1}$. \nThis peak emission\ncould be fitted with an absorbed power law with a photon index of 1.3$\\pm{0.2}$ and \nan absorbing column density of 1.8$^{+0.4}_{-0.3}$$\\times10^{22}$~cm$^{-2}$.\nThe average flux in the 2--10~keV range was 8$\\times$10$^{-10}$~erg~cm$^{-2}$~s$^{-1}$.\nThe fainter X--ray emission during the flare (2$\\times$10$^{-10}$~erg~cm$^{-2}$~s$^{-1}$) \ndisplayed a similar \nabsorbing column density of 1.4$^{+0.7}_{-0.5}$$\\times10^{22}$~cm$^{-2}$ and a photon index \n$\\Gamma$=0.8 $^{+0.4}_{-0.3}$.\nA more detailed analysis of the properties of this outburst will be performed in a\nforthcoming paper (Sidoli et al. in preparation).\n\nThe first outburst from XTE~J1739--302 was caught on 2008 April 8 (Paper~III) and was composed \nby two bright flares separated by about 6000~s.\nThe X--ray emission was significantly more absorbed than in IGR~J17544--2619:\nthe broad band (XRT+BAT) spectrum could be well described by an\nabsorbed high energy cut-off \npower law with the following parameters: $N_{\\rm H}$=1.3$\\times$$10^{23}$~cm$^{-2}$, \n$\\Gamma$=1.4$^{+0.5} _{-1.0}$,\ncut-off energy $E_{\\rm cut}$=6$ ^{+7} _{-6}$~keV\nand e-folding energy $E_{\\rm fold}$=16 $ ^{+12} _{-8}$~keV.\nThe derived X--ray luminosity is 3$\\times$10$^{37}$~erg~s$^{-1}$ (0.5--100~keV).\n\nA new outburst was caught from XTE~J1739--302 on 2008 August 13 (Romano et al. 
2008c).\nA preliminary spectral analysis of the average broad band spectrum of this bright flare resulted\nin the following parameters, adopting an absorbed power law with a high energy cut-off:\nabsorbing column density $N_{\\rm H}$=(4.0$\\pm{0.3}$)$\\times$$10^{23}$~cm$^{-2}$, \n$\\Gamma$=0.7$\\pm{0.1}$,\n$E_{\\rm cut}$=4.6$\\pm{0.3}$~keV\nand $E_{\\rm fold}$=9 $ ^{+2} _{-1}$~keV. \nThe X--ray luminosities during the flare were 2$\\times$10$^{36}$~erg~s$^{-1}$ (0.5--10~keV) and\n5$\\times$10$^{36}$~erg~s$^{-1}$ (0.5--100~keV). \nFig.~\\ref{lsfig:contxte} shows the comparison of the spectroscopy in the\nsoft energy range (XRT data) of the out-of-outburst emission with the results\nfrom the two flares from XTE~J1739--302.\nA time resolved spectral \nanalysis during the flare will be reported in a forthcoming paper (Sidoli et al., in preparation).\n\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics*[angle=270,scale=0.45]{fig2.ps}\n\\end{center}\n\\caption{\\scriptsize Comparison of the spectral paramenters (absorbed single power law model)\nderived for XTE~J1739--302 during the two bright flares discussed here, \nand the total spectrum of the out-of-outburst emission reported in Paper~I.\n68\\%, 90\\% and 99\\% confidence level contours are shown.\n}\n\\label{lsfig:contxte}\n\\end{figure}\n\nA comparison of the SFXTs light curves (the four SFXTs constantly monitored with {\\it Swift},\ntogether with other two sources, IGR~J11215--5952 and IGR~J08408--4503) \nduring their outbursts are reported in\nFig.~\\ref{lsfig:duration}.\nThis plot clearly demonstrates that the outbursts from \nall these transients last much longer than simply a few hours as previously thought.\nFig.~\\ref{lsfig:duration} shows about 8 days of monitoring for each target, and \nit is clear that the first SFXT, where a day-long outburst event has been observed\n(IGR~J11215--5952, Romano et al. 2007), is not a peculiar case among SFXTs, but a similar behaviour\nhas been observed in the other SFXTs monitored by {\\it Swift} during the last year \n(except AX~J1841.0--0536, where no outburst have yet been observed).\n\n\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics*[angle=0,scale=0.7]{fig3.ps}\n\\end{center}\n\\caption{\\scriptsize Light curves of the outbursts of SFXTs followed by {\\it Swift}\/XRT\nreferred to their\nrespective triggers. We show the 2005 outburst of IGR~J16479$-$4514 (Paper~I), \nwhich is more complete than the one observed in 2008 (Paper~II).\nThe IGR~J11215$-$5952 light curve has an arbitrary start time, since\nthe source\ndid not trigger the BAT (the observations were obtained as a ToO; Romano et al. 2007).\nThe third and the last panels report the two flares from XTE~J1739--302 observed\non 2008 April 8 and on 2008 August 13, respectively.\nThe forth panel shows the outburst from IGR~J17544--2916 occurred on 2008 March 31 (Paper~III).\nThe fifth panel reports on a multiple flaring activity reported from another SFXT,\nnot part of this campaign, IGR~J08408--4503, and occurred on 2008 July 5 (Romano et al., 2008e).\nNote that where no data are plotted, no data were collected. 
Vertical dashed lines \nmark time intervals equal to 1 day.\n}\n\\label{lsfig:duration}\n\\end{figure}\n\n\n\n\n\\section{Conclusions}\n\nThe results of the monitoring campaign we have been performing in the last year \nwith {\\it Swift} of a sample of 4 SFXTs can be summarized as follows:\n\n\\begin{itemize}\n\n\\item the long-term behaviour of the SFXTs outside their outbursts is a low-level accretion phase at a\nluminosity of 10$^{33}$--10$^{34}$~erg~s$^{-1}$, with a dynamic range of 1 up to, sometimes, 2 orders of magnitude in flux;\n\n\\item the broad band X--ray emission during the bright flares can be described well with models \ncommonly adopted for the emission from the accreting X--ray pulsars;\n\n\\item the SFXTs spectra during flares show high energy cut-offs \ncompatible with a neutron star magnetic field of about 10$^{12}$~G, although no cyclotron lines have been detected yet;\n\n\\item the duration of the outbursts from different SFXTs observed with {\\it Swift} are longer than a few hours.\n\n\\end{itemize}\n\n\n\\acknowledgments\nWe thank the {\\it Swift} team duty scientists and science planners P.J.\\ Brown, M.\\ Chester,\nE.A.\\ Hoversten, S.\\ Hunsberger, C.\\ Pagani, J.\\ Racusin, and M.C.\\ Stroh\nfor their dedication and willingness to accomodate our sudden requests\nin response to outbursts during this long monitoring effort.\nWe also thank the remainder of the {\\it Swift} XRT and BAT teams,\nJ.A.\\ Nousek and S.\\ Barthelmy in particular, for their invaluable help and support with\nthe planning and execution of the observing strategy.\nThis work was supported in Italy by contracts ASI I\/023\/05\/0, I\/088\/06\/0, and I\/008\/07\/0, \nat PSU by NASA contract NAS5-00136.\nH.A.K. was supported by the {\\it Swift } project.\nP.R.\\ thanks INAF-IASF Milano and L.S.\\ INAF-IASF Palermo,\nfor their kind hospitality.\nItalian researchers acknowledge the support of Nature (455, 835-836) and thank\nthe Editors for increasing the international awareness of the current\ncritical situation of the Italian Research.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe hierarchy of quark and charged lepton masses and the small quark mixing angles\nhas been one of the most puzzling aspects left unresolved by the Standard Model.\nThe recent discovery of neutrino masses and mixings has provided further clues\nin the search for the new physics Beyond the Standard Model which must be\nresponsible for the pattern of fermion masses and mixing angles.\nOne promising approach to understanding the fermion spectrum is\nthe idea of family symmetry, and in particular the idea of a\n$U(1)$ family symmetry as originally proposed by Froggatt and Nielsen \\cite{Froggatt:1978nt}.\nSuch an approach was given considerable impetus by the observation\nthat in many string constructions additional $U(1)$ symmetries\nare ubiquitous, and furthermore such a gauged broken $U(1)$\ncould provide a phenomenologically viable candidate\nfamily symmetry by virtue of the Green-Schwartz anomaly cancellation\nmechanism \\cite{Green:1984sg} which provides a string solution to the no-go theorem\nthat anomaly freedom requires such symmetries to be family independent \\cite{Weinberg:anomalies}.\nAs a result of this a considerable literature has developed in recent\nyears based on string-inspired $U(1)$ family symmetries\n\\cite{Chankowski:2005qp,Babuetal}.\n\nMany non-abelian family symmetries have also been considered, \nfor example based on $SU(3)$ family symmetry 
\\cite{King:2001uz},\nand also\ntextures and analyses of fermion masses have been done not using any family\nsymmetry. At the present time some very successful approaches exist, and\nothers that may with modification also be effective. Family symmetries can\nbe abelian or non-abelian, they can require symmetric Yukawa matrices or\nnot, they can be imposed with or without an associated grand unified theory,\nand so on. Criteria that could be used to choose among possible approaches\ninclude not only describing the quark masses and mixings, and the charged\nlepton masses, but also neutrino masses and mixings, supersymmetry soft\nbreaking effects (since particularly the trilinear couplings are affected by\nthe Yukawa couplings), how many parameters are used to describe the data,\nwhether some results such as the Cabibbo angle are generic or fitted, and\nmore. One of our main goals here is to look at the various possibilities\nsystematically and see if some seem to be favoured by how well they do on a\nset of criteria such as the above listed ones. Presumably family\nsymmetries originate in string theories, and are different for different\nstring constructions that lead to a description of nature, so identifying a\nunique family symmetry (or a subset of possible ones) could point strongly\ntoward a class of string theories and away from other classes. At the\npresent time this approach is not very powerful, though it gives some\ninteresting insights, but better analyses and additional data may improve it.\n\n\nIn this paper we shall consider $U(1)$ family symmetries and\nunification as a viable framework for quark and lepton masses and\nmixing angles in the light of neutrino mass and mixing data\n\\cite{King:2003jb}, using\nsequential right-hand neutrino dominance \\cite{King:1998jw} as a guide\nto constructing hierarchical neutrino mass models with bi-large\nmixing. As has been pointed earlier \\cite{Ross:2000fn}, models which\nsatisfy the Gatto-Sartori-Tonin relations (GST\n\\cite{Gatto:1968ss}){\\footnote{$V_{us}=|\\sqrt{\\frac{m_d}{m_s}}-e^{i\\Phi_1}\\sqrt{\\frac{m_u}{m_c}}|$}}\nrequire the presence of both positive and negative Abelian charges.\nAs we will discuss, the sequential dominance conditions require also\nthe presence of both positive and negative Abelian charges, and hence\nat least two flavon fields of equal and opposite charges. These models\nhowever result in complicated $U(1)$ charges, on the other hand\nNon-GST models have a simpler charge structure and may be possible to\nrealize in a more general context. In this work we also consider non\nGST cases.\n\nWe shall consider $U(1)$ family symmetry combined with unified gauge groups\nbased on $SU(5)$ and $SO(10)$, assuming a Georgi-Jarlskog relation,\nand also consider non-unified models without such a relation.\nWe will present new classes of solutions\nto the anomaly cancellation conditions and perform phenomenological fits,\nand we will compare the different classes of $U(1)$\nto each other and to non-Abelian family symmetry models based on\n$SU(3)$ \\cite{King:2001uz}, \nby performing specific phenomenological fits to the\nundetermined coefficients of the operators.\nFinally we will consider the implications of such\nan approach on flavour-changing processes in the framework\nof supersymmetry, leaving a detailed analysis for a future reference.\n\nThe layout of the paper is as follows. 
In Section \\ref{sec:anomconst} we consider the\ngeneral conditions for Green-Schwartz anomaly cancellation, and move on to describe\nthe classes of solutions, by whether they are consistent with $SU(5)$, $SO(10)$, Pati-Salam\nunification of representations, generalized non-unified relations, or not at all consistent\nwith unification. Having found these solutions, we move on in section \\ref{sec:new-paramaterisation}\nto re-parametrize in terms of differences in $U(1)_F$ charges. In section \\ref{sec:su5q} we consider\nthe constraints on the Yukawa textures from requiring acceptable quark mixings and quark and lepton\nmasses. Then in section \\ref{sec:neuts}, the constraints from getting acceptable neutrino masses\nand mixings from single right-handed neutrino dominance (SRHND) models, which are a class of see-saw\nmodels. In section \\ref{sec:su5-solut-satisfy-GST} we construct solutions which are consistent with\n$SU(5)$ unification, the Gatto-Satori-Tonin (GST) relation \\cite{Gatto:1968ss}, and correct fermion masses and mixings. \nIn section \\ref{sec:su5-solutions-not-GST} we construct solutions which are consistent with $SU(5)$ unification,\ncorrect fermion masses and mixing angles but which are not consistent with the GST relation. In section \\ref{sec:non-su5-cases}\nwe construct solutions which are not consistent with $SU(5)$ unification. In section \\ref{sec:fitsmasses}, we take\nsome of the solutions constructed in section \\ref{sec:su5-solut-satisfy-GST} and section \\ref{sec:su5-solutions-not-GST}\nand fit the arbitrary $O(1)$ parameters to try to closely predict the observed fermion masses and mixing angles.\nThen in section \\ref{sec:susyconst} we briefly consider whether flavour changing processes will be dangerously high\nin these models, presenting two specific scenarios: a non minimal sugra possibility and a string-inspired mSUGRA-like scenario which is expected to be (or be close to) the best-case scenario for flavour-changing and for which we check explicitly $\\mu\\rightarrow e\\gamma$ Finally, we conclude in\nsection \\ref{sec:conclusions}.\n\n\n\n\\section{Anomaly Constraints on $U(1)$ Family symmetries\\label{sec:anomconst}}\n\n\n\\subsection{Green-Schwartz anomaly cancellation}\n\\label{sec:green-schw-anom}\n\nConsider an arbitrary $U(1)$ symmetry which extends the Standard \nModel gauge group. If\nwe were to insist that it does not contribute to mixed anomalies with \nthe Standard Model,\nwe would find that the generators of $U(1)$ would be a linear \ncombination of Weak hypercharge\nand $B-L$ \\cite{Weinberg:anomalies}. This clearly is not useful for \nfamily symmetries, so we need to use a more sophisticated\nway of removing the anomalies, Green-Schwartz anomaly cancellation \\cite{Green:1984sg}. 
In \nthis case, we can cancel the mixed\n$U(1) - SU(3) - SU(3)$, $U(1) - SU(2) - SU(2)$ and \n$U(1) - U(1)_Y - U(1)_Y$ anomalies, $A_3$, $A_2$, and $A_1$\nif they appear in the ratio:\n\\begin{equation}\n \\label{eq:aratio}\n A_3 : A_2 : A_1: A_{U(1)}:A_G = k_3 : k_2 : k_1: 3 k_{U(1)}:24,\n\\end{equation}\nwhere we have included the relations to the anomalies of the anomalous flavour groups $A_{U(1)}$ and the gravitational anomaly; $k_i$ are the Kac-Moody levels of the gauge groups, defined by the GUT-scale relation:\n\\begin{equation}\n \\label{eq:g2ratio}\n g_3^2 k_3 = g_2^2 k_2 = g_1^2 k_1\n\\end{equation}\nIf we work with a GUT that has the canonical GUT normalization, we \nfind:\n\\begin{equation}\n \\label{eq:arelation}\n A_3 = A_2 = \\frac{3}{5} A_1\n\\end{equation}\nBut we still require that the $U(1) - U(1) - U(1)_Y$ \nanomaly, $A_1^\\prime$ vanishes.\nNow, the anomalies are given by:\n\\begin{equation}\n \\label{eq:4}\n A_i = \\frac{1}{2}\\mathrm{Tr}\\left[ \\left\\{ T^{(i)}_a , \nT^{(i)}_c\\right\\} T^\\prime_c \\right].\n\\end{equation}\nWe then use the fact that $\\left\\{T_a, T_b\\right\\} = \\delta_{ab} \n\\mathbf{1}$ for $SU(N)$ and\n$\\left\\{ Y, Y\\right\\} = 2Y^2$ for $U(1)_Y$ to obtain:\n\\begin{eqnarray}\n \\label{eq:A3}\n A_3 &=& \\frac{1}{2} \\left[ \\sum_{i = 1}^3 ( 2 q_i + u_i + d_i ) \n\\right] \\\\\n \\label{eq:A2}\n A_2 &=& \\frac{1}{2} \\left[ \\sum_{i = 1}^3 ( 3 q_i + l_i) + h_u + h_d \n\\right] \\\\\n \\frac{3}{5} A_1 &=& \\frac{1}{2}\n \\left[\n \\sum_{i = 1}^3 ( \\frac{q_i}{5} + \\frac{8 u_i}{5} + \\frac{2}{5} d_i \n+ \\frac{3 l_i}{5}\n + \\frac{6 e_i}{5} ) + \\frac{3}{5}( h_u + h_d )\n \\right]\\\\\n \\label{eq:A1p}\n A_1^\\prime &=& \\sum_{i=1}^3 ( -q_i^2 + 2 u_i^2 - d_i^2 + l_i^2 - \ne_i^2 ) + ( h_d^2 - h_u^2 ) = 0\n\\end{eqnarray}\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|cccccccc|}\n \\hline\n Field & $Q_i$ & $\\overline{U}_i$ & $\\overline{D}_i$ & $L_i$ & \n$\\overline{E}_i$ & $\\overline{N}_i$ & $H_u$ & $H_d$ \\\\\n \\hline\n Charge & $q_i$ & $u_i$ & $d_i$ & $l_i$ & $e_i$ & $n_i$ & $h_u$ & \n$h_d$ \\\\\n \\hline\n \\end{tabular}\n \\caption{Fields and family charges}\n \\label{tab:charges}\n\\end{table}\nSince in the mixed anomalies of the $U(1)$ group with the SM gauge group that cancel via \nthe Green-Schwartz mechanism wherever a charge\nappears, it appears in a sum, we parameterize the sums as follows \\cite{Jain:1994hd}:\n\\begin{eqnarray}\n \\label{eq:sumqi}\n \\sum_{i=1}^3 q_i \\!&=&\\! x + u,\\quad \\sum_{i=1}^3 u_i \\ =\\ x + 2u, \\\\\n \\label{eq:sumdi}\n \\sum_{i=1}^3 d_i \\!&=&\\! y + v,\\quad \\sum_{i=1}^3 l_i \\ =\\ y, \\\\\n \\label{eq:sumei}\n \\sum_{i=1}^3 e_i \\!&=&\\! x, \\\\\n \\label{eq:hd}\n h_u \\!&=&\\! -z,\\quad h_d \\ =\\ z + ( u + v ).\n\\end{eqnarray}\nSubstituting \\eq{eq:sumqi}-\\eq{eq:hd} into \\eq{eq:A3}-\\eq{eq:A1p}\nwe find that they satisfy\nEq.~(\\ref{eq:arelation}):\n\\begin{equation}\n \\label{eq:anomalies}\n A_3 = A_2 = \\frac{3}{5} A_1 = \\frac{1}{2} \\left[ 3x + 4u + y + \nv\\right],\n\\end{equation}\nwhich shows that the parameterization is consistent. However we need to find those solutions which also satisfy $A'_{1}=0$.\nWe will see how we can achieve this for different cases. Since the proposal of the GS anomaly mechanism it has been known that the easiest solution, \n$u=v=0$, leads to a $SU(5)$ or Pati-Salam group realization of mass matrices. Another possible solution is to have $u = -v \\ne 0$. 
Both these forms\n admit a SUSY $\\mu$ term in the tree level\n superpotential at the gravitational scale. However given the form of \\eq{eq:sumqi}-\\eq{eq:hd} one can try to use the flavour symmetry in order \nto forbid this term, allowing it just in the K\\\"ahler potential and thus invoking the Giudice-Masiero \\cite{Giudice:1988yz} mechanism in order to generate the\n $\\mu$ of the desired phenomenological order. Therefore apart from the cases $u+v=0$ we examine plausible cases for $u \\ne -v \\ne 0$. Of course \nin the cases $u=v=0, u=-v\\ne 0$ one can use another symmetry to forbid the $\\mu$ term in the superpotential, however it is appealing if the flavour\n symmetry forbids the $\\mu$ term at high scales.\n\n\\subsection{Anomaly free $A_1^\\prime$ with $u = v = 0$ solutions\\label{sec:yukawa-textures-uv-zero}}\nIn this case the parameterization simplifies and in fact we can\ndecompose the $U(1)$ charges in flavour independent and flavour dependent parts\n\\begin{equation}\n\\label{eq:FIaFDch}\nf_i = \\frac{1}{3}f + f_i^\\prime.\n\\end{equation}\nThe first term is flavour independent because it just depends on the total sum of the individual charges and the $f_i^\\prime$ are flavour dependent charges. We can always find $x$ and $y$ which satisfy\n\\begin{equation}\n \\label{eq:sumfip}\n\\sum_{i=1}^3 f_i^\\prime = 0.\n\\end{equation}\nIn this way $A'_{1}$ can be expressed in flavour independent plus flavour dependent terms\n\\begin{eqnarray}\nA'_{1}=A'_{1FI}+ A'_{1FD}.\n\\end{eqnarray}\nFollowing this, with the unfortunate notation that we have a new $u$, \ncompletely unrelated to the $u$ that we have already set to zero, we then have:\n\\begin{eqnarray}\n \\label{eq:A1pFIFD}\n A_1^\\prime &=& A'_{1FI}+ A'_{1FD}\\nonumber\\\\\n &=& \\frac{1}{3} \\left[ - q^2 + 2 u^2 - d^2 + l^2 - e^2 \n\\right] \n + \\sum_{i = 1}^3 ( -q_i^{\\prime\\; 2} + 2 u_i^{\\prime\\; 2} - \nd_i^{\\prime\\;2} + l_i^{\\prime\\;2} - e_i^{\\prime\\;2} )\n\\end{eqnarray}\nNow it is clear that the terms in the\nsquare bracket in \\eq{eq:A1pFIFD} are family \nindependent. It turns out that the square bracket term is\nautomatically\nzero in this case, since from Eqs.\\ref{eq:sumqi}-\\ref{eq:sumei},\nwe have: $q = u = e = x$ and $l = d = y$. Then \nwe have to make the family dependent part (the second term in \\eq{eq:A1pFIFD}) vanish.\n\n\\subsubsection{$SU(5)$ and $SO(10)$ type cases}\nOne way to make the family dependent part vanish, $A'_{1FD}=0$ , \nis to set $l_i = d_i$ and $q_i = u_i = e_i$ \n\\footnote{The reason that the charges are unprimed\nhere is that if it is true for the primed charges, it is also true for \nthe unprimed charges}. 
This condition would be automatic in \n$SU(5)$, but in general such a condition on the charges\ndoes not necessarily imply a field theory $SU(5)$ GUT to actually be\npresent, although it may be.\n\nSince the generic Yukawa structure is of the form:\n\\begin{equation}\n \\label{eq:upsymcaspar}\n Y^f \\approx \n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|f_1 + q_1+h_f| } & \n \\epsilon^{|f_2 + q_1+h_f|} &\n \\epsilon^{|f_3 + q_1+h_f| } \\\\\n \\epsilon^{|f_3 + q_2+h_f| } &\n \\epsilon^{|f_2 + q_2+h_f| } &\n \\epsilon^{|f_3 + q_2+h_f| } \\\\\n \\epsilon^{|f_1 + q_3+h_f|} &\n \\epsilon^{|f_2 + q_3+h_f| } &\n \\epsilon^{|f_3 + q_3+h_f| }\n \\end{array}\n \\right].\n\\end{equation}\nit is clear that the $SU(5)$ relations $d_i = l_i$, $q_i = u_i \n= e_i$ lead to Yukawa textures of the form:\n\\begin{eqnarray}\n \\label{eq:Yusu5}\n Y^u &\\approx&\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2 e_1 -2e_3|} &\n \\epsilon^{|e_1 + e_2-2e_3|} &\n \\epsilon^{|e_1 - e_3|} \\\\\n \\epsilon^{|e_1 + e_2 - 2e_3|} &\n \\epsilon^{|2 e_2- 2 e_3|} &\n \\epsilon^{|e_2 - e_3|} \\\\\n \\epsilon^{|e_1 - e_3 |} &\n \\epsilon^{|e_2 - e_3 |} &\n \\epsilon^{| 0 |}\n \\end{array}\n \\right], \\\\\n\\label{eq:Ydsu5}\n Y^d &\\approx&\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|l_1 + e_1+h_d| } &\n \\epsilon^{|l_2 + e_1+h_d| } &\n \\epsilon^{|l_3 + e_1+h_d| } \\\\\n \\epsilon^{|l_1 + e_2+h_d| } &\n \\epsilon^{|l_2 + e_2+h_d| } &\n \\epsilon^{|l_3 + e_2+h_d| } \\\\\n \\epsilon^{|l_1 + e_3+h_d| } &\n \\epsilon^{|l_2 + e_3+h_d| } &\n \\epsilon^{|l_3 + e_3+h_d| }\n \\end{array}\n \\right], \\\\\n Y^e &\\approx& Y^{d\\ T}.\n\\end{eqnarray}\nNote that the up matrix is approximately symmetric,\ndue to the assumed $SU(5)$ relation of charges.\nThe reason why the textures above are approximate is that \neach entry in each matrix contains an undetermined order unity\nflavour dependent coefficient, generically denoted as $a^f_{ij}=O(1)$.\nWe shall continue to suppress such coefficients\nin order to make the discussion less cumbersome, \nbut will return to this question when \nwe discuss the numerical fits later in the paper.\nWe have also assumed that the up and down Yukawa matrices are\ndescribed by a single expansion parameter $\\epsilon$.\nThe possibility of having two different expansion parameters,\none for the up sector and one for the down sector, \nwill also be discussed later in the paper.\nIn order to\nhave an acceptable top quark mass, we have required that $h_u+2e_3 = 0$, in which \ncase the smallness of the bottom quark mass can be due\nto $h_d+e_3+l_3 \\ne 0$, and we are free to have a small $\\tan\\beta$, because we \ndon't need large $\\tan\\beta$ to explain the ratio $\\frac{m_t}{m_b}$ on its own.\n\nAlso note that, as expected from the $SU(5)$ relation of charges, the\ndown and electron textures are the approximate transposes of each\nother, $Y^d \\approx (Y^e)^T$. Such a relation implies bad mass\nrelations for between the down type quarks and charged leptons, but\nmay be remedied by using Clebsch factors such as a Georgi-Jarlskog\nfactor of 3 in the (2,2) position of the charged lepton Yukawa\nmatrix. \n\n\nIf we were to look at the case $x = y$, then we would have a solution\nsuggestive of unified $SO(10)$ GUT symmetry, for which $l_i = q_i =\nu_i = d_i = e_i$. The same comments above also apply here, namely that \nsuch a condition on the charges, though consistent with \nan $SO(10)$ GUT does not necessarily imply a field theory realization of it. 
\nThe matrices \\eq{eq:Yusu5}-\\eq{eq:Ydsu5} would all become equal to \nthe same symmetric texture in Eq.\\ref{eq:Yusu5}, in the $SO(10)$ \ncase that $x=y$.\n\n\\subsubsection{Pati-Salam type cases}\nIn this case, applying the Pati-Salam constraints on the charges,\n\\begin{eqnarray}\n\\label{eq:pscharg}\nq_i=l_i\\equiv q^L_i,\\quad u_i=d_i=e_i=n_i\\equiv q^R_i,\n\\end{eqnarray}\nso we can immediately see that also for this choice of charges \nboth the the\nflavour independent and \ndependent parts in \\eq{eq:A1pFIFD} vanishes. We have also\nincluded the right-handed neutrino charges, which do not enter into\nthe anomaly cancellation conditions,\\ \\eq{eq:A3}-\\eq{eq:A1p}, but with\na Pati-Salam group should obey the relation of \\eq{eq:pscharg}. Thus\nin this case all the mass matrices have the form\n\\begin{eqnarray}\n Y^{f} &=&\n \\left(\n \\begin{array}{ccc}\n \\epsilon^{|l_1 + e_1+h_{f}| } &\n \\epsilon^{|l_1 + e_2+h_{f}| } &\n \\epsilon^{|l_1 + e_3+h_{f}|} \\\\\n \\epsilon^{|l_2 + e_1+h_{f}|} &\n \\epsilon^{|l_2 + e_2+h_{f}|} &\n \\epsilon^{|l_2 + e_3+h_{f}| } \\\\\n \\epsilon^{|l_3 + e_1+h_{f}| } &\n \\epsilon^{|l_3 + e_2+h_{f}| } &\n \\epsilon^{|l_3 + e_3+h_{f}| }\n \\end{array}\n\\right)\n\\label{PStexture}\n\\end{eqnarray}\nfor $h_{f}=h_u,\\ h_d$. In this case we always need to satisfy $x=y$, in contrast with the generic case of $SU(5)$ where it is not necessary $x=y$. So we can put one of the charges in terms of the other two and the parameters $x=y$\n\\begin{eqnarray}\ne_1=x-(e_2+e_3),\\quad l_1=x-(l_2+l_3),\\quad \\Rightarrow\ne_1+e_2+e_3=l_1+l_2+l_3.\n\\label{PScondn}\n\\end{eqnarray}\nWe have already noted that the Pati-Salam constraints \non the charges imply that the anomaly $A_1'$\nautomatically vanishes. It is also a remarkable fact that \nthe constraints in Eq.\\ref{PScondn} do not in practice lead to \nany physical constraints on the form of the Yukawa texture\nin Eq.\\ref{PStexture}. In practice, assuming only that $u+v=0$,\none can start with any set of charges $l_i$, $e_i$ which \nlead to any desired Yukawa texture, where the charges do not\nsatisfy the anomaly free constraint in Eq.\\ref{PScondn}.\nThen from any set of non-anomaly-free charges one can construct\na set of anomaly-free charges which do satisfy Eq.\\ref{PScondn},\nbut do not change the form of the Yukawa matrix in Eq.\\ref{PStexture},\nby simply making an equal and opposite flavour-independent shift \non the charges as follows \\cite{King:2000ge}:\n$e_i\\rightarrow e_i +\\Delta$, $l_i\\rightarrow l_i -\\Delta$.\nIn this paper we shall not consider the Pati-Salam approach in detail.\n\n\\subsection{Solutions with anomaly free $A_1^\\prime$ with $u + v = 0 \\\n \\ (u,v \\ne 0)$ \\label{sec:u-=-v}}\nIn this case, we can repeat the analysis of the previous subsection, \nbut with the general constraints. 
Note, however, that since $u+v = 0$, $h_u = -z$ and $h_d = +z$.\n\nThen we are left with the result that \n\\begin{eqnarray}\n  \\label{eq:12}\n  A_1^\\prime = \\frac{1}{3} \\left[\n    6 u^2 + 6 x u + 2 y u\n  \\right] -\n  \\sum_{i=1}^3 \\left( q_i^{\\prime\\;2} - 2 u_i^{\\prime\\;2} + \nd_i^{\\prime\\;2} - l_i^{\\prime\\;2} + e_i^{\\prime\\;2} \\right).\n\\end{eqnarray}\nNote that the family independent part will vanish if \n\\begin{equation}\n\\label{eq:13}\nu = -v = -\\left( x + \\frac{y}{3} \\right).\n\\end{equation}\n\nHaving done this, we may substitute Eq.~(\\ref{eq:13}) into \nEqs.~(\\ref{eq:sumqi})-(\\ref{eq:hd}). Then we find that:\n\\begin{eqnarray}\n  \\label{eq:14}\n  \\sum_{i=1}^3 q_i &=& -\\frac{y}{3},\\quad \\quad\n  \\sum_{i=1}^3 u_i \\ =\\  - ( x + \\frac{2y}{3} ), \\nonumber\\\\\n  \\sum_{i=1}^3 d_i &=& x + \\frac{4y}{3},\\quad\n  \\sum_{i=1}^3 l_i \\ \\ =\\ y,\\\\\n  \\sum_{i=1}^3 e_i &=& x.\n\\end{eqnarray}\n\n\\subsubsection{Yukawa textures for a sample solution \\label{sec:yukawa-textures-uv-nonzero}}\nAt this point, we note that there will be a large number of solutions. \nHowever, one class of solutions that is easily satisfied is:\n\\begin{equation}\n  \\label{eq:19}\n  q_i = -\\frac{l_i}{3} \\;,\\; u_i = - ( \\frac{2 l_i}{3} + e_i ) \\; , \\; \nd_i = \\frac{4 l_i}{3} + e_i.\n\\end{equation}\nThe same equation will hold for the primed charges:\n\\begin{equation}\n  \\label{eq:20}\n  q_i^\\prime = -\\frac{l_i^\\prime}{3} \\;,\\; u_i^\\prime = - ( \\frac{2 \nl_i^\\prime}{3} + e_i^\\prime ) \\; , \\; \n  d_i^\\prime = \\frac{4 l_i^\\prime}{3} + e_i^\\prime.\n\\end{equation}\n\nWe can now put Eq.~(\\ref{eq:20}) into the anomaly, Eq.~(\\ref{eq:12}). \nIn this case we find that:\n\\begin{eqnarray}\n  \\nonumber\n  A_1^\\prime &=& \\frac{1}{3} \\left[ x^2 ( 6 - 6 ) + \\frac{2}{3} y^2( 1 \n- 1 ) + xy ( 4- 2 - 2) \\right] \\\\\n  && - \\sum_{i=1}^3 \\left( l_i^{\\prime\\;2} \\frac{1}{9} ( -1 + 8 - 16 + \n9 ) + e_i^{\\prime\\;2}( 2 - 1 -1 ) \\right)\n  = 0.\n \\label{eq:21}\\end{eqnarray}\nSo we see that for this particular relation of leptonic and quark \ncharges, we are automatically anomaly-free.\n\nAgain, we see that, just as for the $u = v = 0$ case, we can specify \neverything in terms of the leptonic charges $l_i$ and $e_i$.\nHowever, in this case we will get three different textures. 
\nSpecifically, we will get:\n\\begin{eqnarray}\n  \\label{eq:yu-umvn0}\n  Y^u &\\approx& \n  \\left[\n    \\begin{array}{ccc}\n      \\epsilon^{|l_1 + e_1 + h_u|} & \n      \\epsilon^{|\\frac{1}{3}(l_2 + 2l_1)+e_1+ h_u|} &\n      \\epsilon^{|\\frac{1}{3}(l_3 + 2l_1)+e_1+ h_u|} \\\\\n      \\epsilon^{|\\frac{1}{3}(l_1+ 2l_2)+e_2 + h_u|} &\n      \\epsilon^{|l_2+e_2 + h_u|} &\n      \\epsilon^{|\\frac{1}{3}(l_3+2l_2)+e_2 + h_u|} \\\\\n      \\epsilon^{|\\frac{1}{3}(l_1+2l_2)+e_3 + h_u|} &\n      \\epsilon^{|\\frac{1}{3}(l_2+2l_3)+e_3 + h_u|} &\n      \\epsilon^{|l_3 + e_3 + h_u|}\n    \\end{array} \n  \\right] \\\\\n  \\label{eq:yd-umvn0}\n  Y^d &\\approx& \n  \\left[\n    \\begin{array}{ccc}\n      \\epsilon^{|l_1+e_1 - h_u|} &\n      \\epsilon^{|\\frac{1}{3}(-l_1+4l_2)+e_2 - h_u|} &\n      \\epsilon^{|\\frac{1}{3}(-l_1+4l_3)+e_3 - h_u|} \\\\\n      \\epsilon^{|\\frac{1}{3}(-l_2+4l_1)+e_1 - h_u|} &\n      \\epsilon^{|l_2+e_2 - h_u|} &\n      \\epsilon^{|\\frac{1}{3}(-l_2+4l_3)+e_2 - h_u|} \\\\\n      \\epsilon^{|\\frac{1}{3}(-l_1+4l_3)+e_3 - h_u|} &\n      \\epsilon^{|\\frac{1}{3}(-l_2+4l_3)+e_3 - h_u|} &\n      \\epsilon^{|l_3+e_3 - h_u|}\n    \\end{array} \n  \\right] \\\\\n\\label{eq:ye-umvn0}\n  Y^e &\\approx& \n  \\left[\n    \\begin{array}{ccc}\n      \\epsilon^{|l_1+e_1 - h_u|} & \n      \\epsilon^{|l_1+e_2 - h_u|} &\n      \\epsilon^{|l_1+e_3 - h_u|} \\\\\n      \\epsilon^{|l_2+e_1 - h_u|} &\n      \\epsilon^{|l_2+e_2 - h_u|} &\n      \\epsilon^{|l_2+e_3 - h_u|} \\\\\n      \\epsilon^{|l_3+e_1 - h_u|} &\n      \\epsilon^{|l_3+e_2 - h_u|} &\n      \\epsilon^{|l_3+e_3 - h_u|}\n    \\end{array} \n  \\right]\n\\end{eqnarray}\nWe note that this is a rather predictive scheme: the diagonal elements\nof the down and electron Yukawa matrices are required to be of the\nsame order, as constrained by the anomalies. Also, we require (at the very least)\n$l_3 + e_3 +h_u= 0$ to get a correct top quark mass.\n\n\n\\subsection{Anomaly free $A_1^\\prime$ with $u + v \\neq 0$ solutions\\label{sec:yukawa-textures-upv-notzero}}\n\nIn this case we cannot decompose the expression of $A_1^\\prime$ into flavour independent and flavour dependent parts, but we can use for example the relation $\\left(\\sum f_i\\right)^2=\\sum f_i^2+2(f_1(f_2+f_3)+f_2f_3)$ such that we have\n\\begin{eqnarray}\nA_1^\\prime=-2(4u^2+u(v+3x+z)+v(z-y))-\\!2\\!\\!\\!\\!\\!\\!\\!\\sum_{f=u,d,l,e,q}\\!\\!\\!\\!\\!\\! g_f (f_1(f_2+f_3)+f_2f_3),\n\\end{eqnarray}\nwhere $g_f=1,-2,1,-1,1$ respectively for $f=q,u,d,l,e$. However it is difficult to proceed from here to find an ansatz which cancels the $A_1^\\prime$ anomaly. Instead we can generalize the kind of relations which in the limit of $u=v=0$ would give the $SU(5)$ cases or the Pati-Salam cases.\n\\subsubsection{An extended $SU(5)$ case} \n\\label{sec:genrzsu5like}\nHere a non-GUT case is considered, obtained by generalizing the $SU(5)$ relation between the charges. In the $SU(5)$ case,\nwe had $q_i = u_i = e_i$ and $d_i = l_i$. Suppose instead that we have the linear relations\n\\begin{eqnarray}\n\\label{eq:chargrelgensu5like}\nq_i=u_i+\\alpha=e_i+\\gamma,\\quad d_i=l_i+\\beta.\n\\end{eqnarray}\nFrom the parameterization of Eqs.~(\\ref{eq:A3}-\\ref{eq:A1p}), we see that \nin the limit $u=v=0$ we recover the $SU(5)$ case. 
For agreement with the cancellation of anomalies one should then have\n\\begin{eqnarray}\nq_i=u_i-\\frac{u}{3}=e_i+\\frac{u}{3},\\quad d_i=l_i+\\frac{v}{3}.\n\\label{eq:1}\n\\end{eqnarray}\nIn the expression of the $A_1^\\prime$ anomaly, as given in \\eq{eq:A1p}, the sums of squared charges cancel and we can write it just in terms of sums of charges, which we have parameterized in terms of $u,v,x,y$,\n\\begin{eqnarray}\nA_1^\\prime=-10 \\frac{u^2}{3}-\\frac{2}{3}v^2+2u(x+v)+2y\\frac{v}{3}-2z(u+v)=0.\n\\end{eqnarray}\nThus we need to satisfy this equation in order to have anomaly free solutions. Requiring the condition of $O(1)$ top coupling we have\n\\begin{eqnarray}\n\\label{eq:charggensu5like}\nh_u&=&-z=-2e_3-u,\\nonumber\\\\\nh_d&=&2u+v+2e_3,\\nonumber\\\\\n{\\mathcal{C}}(Y^u_{ij})&=&|e_i+e_j-2e_3|,\\nonumber\\\\\n{\\mathcal{C}}(Y^d_{ij})&=&|e_i+l_j+2e_3+\\frac{7u}{3}+\\frac{4v}{3}|,\\nonumber\\\\\n{\\mathcal{C}}(Y^e_{ij})&=&|l_i+e_j+2e_3+2u+v|,\n\\end{eqnarray}\nwhere ${\\mathcal{C}}(Y^f_{ij})$ denotes the power of $\\epsilon$ of the $(i,j)$ element of the corresponding Yukawa matrix $Y^f$. Note that although we did not begin with an {\\it a priori} condition of having $Y^u$ symmetric, the requirement of the $O(1)$ top coupling cancels the parameter $u$ in all the entries of $Y^u$ and so we end up with a symmetric matrix. \n\\subsubsection{An extended Pati-Salam case}\n\\label{sec:pati-salam-like-case}\nFollowing the extended $SU(5)$ case, we look for solutions which in the\n$u=v=0$ limit reproduce the Pati-Salam case, so we should have the\nrelations\n\\begin{eqnarray}\n\\label{eq:PSgenrel}\nq_i=l_i+\\alpha,\\quad u_i=d_i+\\beta.\n\\end{eqnarray}\nAlso $e_i$ and $n_i$ need to be related to $u_i$ by a constant, as in \\eq{eq:PSgenrel}. In this case, in order to satisfy the G-S anomaly conditions, we need\n\\begin{eqnarray}\nq_i=l_i+\\frac{u+(x-y)}{3},\\quad u_i=e_i+\\frac{2u}{3},\\quad d_i=e_i+\\frac{v+(y-x)}{3}.\n\\label{eq:8}\n\\end{eqnarray}\nThus the expression for the $A_1^\\prime$ anomaly is\n\\begin{eqnarray}\nA_1^\\prime&=&-\\frac{2}{9}\\left[8u^2+4v^2+u(9v+11x-2y)+2(x-y)^2 -v(2x+y)\\right]\\nonumber\\\\\n&&-2z(u+v),\n\\end{eqnarray}\nand finally requiring the condition of $O(1)$ top Yukawa coupling we have\n\\begin{eqnarray}\nh_u&=&-z=-(l_3+e_3+u+\\frac{x-y}{3}),\\nonumber\\\\\nh_d&=&l_3+e_3+2u+v+\\frac{x-y}{3},\\nonumber\\\\\n{\\mathcal{C}}(Y^u_{ij})&=&|l_i-l_3+e_j-e_3|,\\nonumber\\\\\n{\\mathcal{C}}(Y^d_{ij})&=&|l_i+e_j+l_3+e_3+\\frac{4v+7u+(x-y)}{3}+\\frac{4v}{3}|,\\nonumber\\\\\n{\\mathcal{C}}(Y^e_{ij})&=&|l_i+e_j+2e_3+2u+v|.\n\\end{eqnarray}\n\n\n\\section{A useful phenomenological parameterization}\n\\label{sec:new-paramaterisation}\nSo far we have discussed the anomaly cancellation conditions in $U(1)$ family\nsymmetry models, and some of the possible solutions to these\nconditions, including some new solutions not previously\ndiscussed in the literature. 
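As a concrete illustration of how such textures follow from a given charge assignment, the short Python sketch below simply evaluates the leading powers of Eq.~(\\ref{eq:upsymcaspar}); the charge values, the value of $\\epsilon$ and the Higgs charges used here are purely illustrative assumptions (not taken from any fit in this paper), and the order unity coefficients $a^f_{ij}$ are suppressed.\n\\begin{verbatim}\nimport numpy as np\n\neps = 0.22   # illustrative expansion parameter\n\ndef texture(f, q, hf):\n    # leading powers eps^{|f_j + q_i + h_f|} of the generic texture,\n    # with the order-unity coefficients a^f_{ij} set to one\n    f = np.asarray(f, float); q = np.asarray(q, float)\n    return eps ** np.abs(f[None, :] + q[:, None] + hf)\n\n# purely illustrative charge assignment (not a fit from this paper)\nq_ch = [3.0, 2.0, 0.0]\nu_ch = [3.0, 2.0, 0.0]\nd_ch = [1.0, 0.0, 0.0]\nh_u, h_d = 0.0, 1.0   # chosen so that q_3 + u_3 + h_u = 0 (O(1) top coupling)\n\nprint(np.round(texture(u_ch, q_ch, h_u), 5))   # Y^u powers of eps\nprint(np.round(texture(d_ch, q_ch, h_d), 5))   # Y^d powers of eps\n\\end{verbatim}\n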
It turns out however that the \nanomaly free charges themselves do not provide the most convenient\nparameters for discussing the phenomenological constraints on the\nYukawa matrices arising from the quark and lepton spectrum.\nIt is more convenient to introduce a \nnew parameterization for the Yukawa matrices as follows:\n\\begin{equation}\n \\label{eq:6}\n Y^f \\approx \\left(\n \\begin{array}{ccc}\n \\epsilon^{|s'_f + r'_f + k_f|} & \\epsilon^{|s'_f + r_f + k_f|} & \\epsilon^{|s'_f + k_f|} \\\\\n \\epsilon^{|s_f + r'_f + k_f|} & \\epsilon^{|s_f + r_f + k_f|} & \\epsilon^{|s_f + k_f|} \\\\\n \\epsilon^{| r'_f + k_f|} & \\epsilon^{| r_f + k_f|} & \\epsilon^{| k_f|}\n \\end{array}\n \\right)\n\\end{equation}\nwhere $f=u,d,e,\\nu$, and we have introduced the\nparameters $r_f, r'_f, s_f, s'_f, k_f$ which are defined \nin terms of the charges in Table 1 as:\n\\begin{eqnarray}\n \\nonumber\n r_f = f_2 - f_3 & r'_f = f_1 - f_3 & k_u = q_3 + u_3 + h_u \\\\\n \\nonumber\n s_{u,d} = q_2 - q_3 & s'_{u,d} = q_1 - q_3 & k_d = q_3 + d_3 + h_d \\\\\n \\nonumber\n s_{e,\\nu} = l_2 - l_3 & s'_{e,\\nu} = l_1 - l_3 & k_e = l_3 + e_3 + h_d \\\\\n \\label{eq:gyukpar}\n & & k_\\nu = l_3 + n_3 + h_u \n\\end{eqnarray}\nIn order to get an acceptable top quark mass, we require that $k_u = 0$. \nNote that the parametrization above is \ncompletely general, there is no information loss from the form of\nEq.~(\\ref{eq:upsymcaspar}), and thus far we have not imposed any\nconstraints on the charges arising from either anomaly cancellation\nor from GUTs. We now consider the simplifications \nwhich arise in the new parametrization \nwhen the charges are constrained by considerations of anomaly cancellation and \nGUTs, as discussed in the previous section.\n\n\\subsubsection*{Simplification in $SU(5)$ type case}\n\nConsider the case where the family charges are consistent with the representations in an $SU(5)$ GUT, $d_i = l_i$, and $q_i = u_i = e_i$:\n\\begin{eqnarray}\n \\nonumber\n k_e = k_d\\ & s_{u,d} = r_{u,e} & s'_{u,d} = r'_{u,e} \\\\\n \\label{eq:11}\n s_{e,\\nu} = r_d & s'_{e,\\nu} = r'_d \n\\end{eqnarray}\nIn this case, all of the parameters can be expressed purely\nin terms of the lepton charges:\n\\begin{eqnarray}\n \\nonumber\n s_{u,d}=r_{u,e} = e_2 - e_3 & s'_{u,d} = r'_{u,e} = e_1 - e_3 \\\\\n s_{e, \\nu} = r_d = l_2 - l_3 & s'_{e,\\nu} = r'_{d} = l_1 - l_3\n\\label{eq:24} \n\\end{eqnarray}\nNote that this leads directly to the fact that $Y^e \\approx (Y^d)^T$. The equality is broken by the arbitrary $O(1)$ coefficients.\nAs discussed, the $SU(5)$ charge conditions are sufficient to\nguarantee anomaly cancellation for the case $u=v=0$.\n\n\\subsubsection*{Simplification in the extended $SU(5)$ case}\n\nIn the case $u+v\\neq 0$, anomalies can again be cancelled by assuming\nthe charge conditions in Eq.~(\\ref{eq:chargrelgensu5like}).\nIf we take Eq.~(\\ref{eq:chargrelgensu5like}), we can again simplify Eq.~(\\ref{eq:gyukpar}). 
In this case we find:\n\\begin{eqnarray}\n \\nonumber\n s_{u,d} = r_{u,e} & s'_{u,d} = r'_{u,e} \\\\\n \\label{eq:16}\n s_{e,\\nu} = r_d & s'_{e,\\nu} = r'_d\n\\end{eqnarray}\n\nIn this case we have that the texture of $Y^e$ can be attained from $Y^d$ by replacing $k_d$ with $k_e$ and then\ntransposing.\n\n\\subsubsection*{Simplification in the Pati-Salam case}\n\nIn the case of having charge relations consistent with a Pati-Salam theory,\n$q_i = l_i$ and $u_i = d_i = e_i = n_i$, we can simplify:\n\\begin{eqnarray}\n \\nonumber\n k_e = k_d & s_{u,d} = s_{e,\\nu} & s'_{u,d} = s'_{e,\\nu} \\\\\n \\label{eq:15}\n k_u = k_\\nu & r_u = r_d = r_e = r_\\nu & r'_u = r'_d = r'_e = r'_\\nu \n\\end{eqnarray}\n\n\n\n\\section{Quark masses and mixings in $SU(5)$ \\label{sec:su5q}}\nIn this section we shall provide some constraints on the\nphenomenological parameters introduced in the last section,\narising from the quark masses and mixings,\nassuming the simplification in the $SU(5)$ type case mentioned above.\nIn $SU(5)$ Eqs.~(\\ref{eq:6}),(\\ref{eq:11}) imply the quark Yukawa\nmatrices are explicitly of the form:\n\\begin{eqnarray}\n\\label{eq:su5matparam}\nY^u\\approx \n\\left(\n\\begin{array}{ccc}\n\\varepsilon^{|2s'|}&\\varepsilon^{|s'+s|}&\\varepsilon^{|s'|}\\\\\n\\varepsilon^{|s'+s|}&\\varepsilon^{|2s|}&\\varepsilon^{|s|}\\\\\n\\varepsilon^{|s'|}&\\varepsilon^{|s|}&1\n\\end{array}\n\\right),\\ \\ \\ \\ \nY^d\\approx \n\\left(\n\\begin{array}{ccc}\n\\varepsilon^{|s'+r'_{d}+k_d|}&\\varepsilon^{|s'+r_{d}+k_d|}&\\varepsilon^{|s'+k_d|}\\\\\n\\varepsilon^{|s+r'_{d}+k_d|}&\\varepsilon^{|s+r_{d}+k_d|}&\\varepsilon^{|s+k_d|}\\\\\n\\varepsilon^{|r'_{d}+k_d|}&\\varepsilon^{|r_{d}+k_d|}&\\varepsilon^{|k_d|}\n\\end{array}\n\\right).\n\\end{eqnarray}\nwhere we have written $s=s_{u,d}=r_{u,e}$, $s' = s'_{u,d}=r'_{u,e}$.\n\\footnote{Note that the extended $SU(5)$ anomaly free solutions examined\nin section \\ref{sec:genrzsu5like} leave the parameters \n$s,s',r_d,r'_d,k_d$ invariant, as is clear by comparing\nEqs.\\ref{eq:11} and \\ref{eq:16}.\nHence the results in this section for the quark\nsector apply not only to the $SU(5)$ type case\nbut also the extended $SU(5)$ anomaly free cases.}\nNote that we are assuming a single expansion parameter $\\varepsilon$, \nand are suppressing $O(1)$ coefficients. Clebsch factors are also not\nconsidered, and only leading order operators are discussed.\n\nIn order to determine the possible solutions for $s,\\ s',\\ r_d, \\\nr'_d$ and $k_d$ which successfully reproduce quark \nmasses and mixings one can numerically diagonalize Yukawa matrices and\nobtain the CKM matrix. However, in order to understand the behaivour\nof this structure it is quite useful to use the technique of\ndiagonalization by blocks in the $(2,3)$, $(1,3)$ and $(1,2)$ sectors\n\\footnote{This only works if there is an appropriate hierarchy among the elements}. The results are presented in the next subsections.\n\n\\subsection{Quark Masses}\n\n\\noindent Barring accidental cancellations the down quark Yukawa\nmatrix $Y^d$ may be diagonalized, leading to the following \neigenvalues:\n\\begin{eqnarray}\n\\label{eq:yukeigen}\ny_1\\!\\!\\!\\!&\\approx&\\!\\!\\! 
a_{11}\\varepsilon^{|s'+r'+k|}-\n\\frac{(a_{31}\\varepsilon^{|r'+k|}+a_{23}a_{21}\\varepsilon^{|s+k|+|s+r'+k|-|k|}e^{2i(\\beta^L_2-\\beta^L_1)})}{c^{R}_{23}(\\varepsilon^{|k|}+a^{2}_{32}\\varepsilon^{2|r+k|-|k|}e^{-2i(\\beta^R_2-\\beta^R_1)} )}\\times\\nonumber\\\\ \n&&\\times (a_{13}\\varepsilon^{|s'+k|}\\!+a_{23}a_{12}\\varepsilon^{|r+k|+|s'+r+k|-|k|}e^{-2i(\\beta^R_2-\\beta^R_1)})+\\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\! \\frac{-(a_{12}\\varepsilon^{|s'+r+k|}\\!-\\!a_{32}a_{13}\\varepsilon^{|r+k|+|s'+k|-|k|})(a_{21}\\varepsilon^{|s+r'+k|}\\!-\\!a_{23}a_{31}\\varepsilon^{|s+k|+|r'+k|-|k|} )}{(a_{22}\\varepsilon^{|s+r+k|}-a_{23}a_{32}\\varepsilon^{|s+k|+|r+k|-|k|})e^{-i(\\beta^L_3-\\beta^R_3)}},\\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\ny_2\\!\\!\\!\\!&\\approx&\\!\\!\\!c^R_{23}\\left(a_{22} \\varepsilon^{|s+r+k|} -a_{23}a_{32}\\varepsilon^{|r+k|+|s+k|-|k|}\\right)e^{2i(\\beta^L_2-\\beta^R_2)},\\nonumber\\\\\ny_3\\!\\!\\!\\!&\\approx&\\!\\!\\!c^{R}_{23}\\left(\\varepsilon^{|k|} +a^{2}_{32}\\varepsilon^{2|r+k|-|k|}e^{2i(\\beta^R_1-\\beta^R_2)} \\right)e^{i(\\beta^L_1-\\beta^R_1)},\n\\end{eqnarray}\nwhere we have suppressed the index $d$ in order to make clearer the\nnotation and re-scaled all the (complex) coefficients by $1\/a_{33}$,\nso that instead of having $a_{33}$ we have 1. \nNote that the down quark masses are given by:\n$m^d_i=y^d_i v_d\/\\sqrt{2}$.\nAnalogous results also apply to the up quark sector,\nwith the replacements\n$r\\rightarrow s$, $r'\\rightarrow s'$, $k\\rightarrow 0$. \nThe phases $\\beta^L_i$\ncorrespond to the diagonalization matrices of the Yukawa matrices,\nwhose notation is given in Appendix (\\ref{ap:diagmat}).\n \nIt is important to remark that in the case of positive charges all the\nelements of the first row of the Yukawa matrix contribute at the same\norder, $s'+r'+k$, to their correspondent lightest eigenvalue, so in\nthese cases it is not possible to have the Gatto-Sartori-Tonin (GST)\nrelation. However in the cases of having $s$ and $s'$ (analogous for\n$r$ and $r'$) with different sign, as in the example of\n\\eq{eq:textibross}, we can have a cancellation in powers of $\\varepsilon$ to\nthe contribution to $y_1$ coming from the diagonalization in the\n$(1,2)$ sector, which is the third term in the expression for $y_1$ in\n\\eq{eq:yukeigen}. On the other hand we can have an enhancement in the\npower of $\\varepsilon$ of the contributions from the $(1,1)$ entry and the\nrotation in the $(1,3)$ sectors, which correspond to the first and\nsecond term of $y_1$, respectively, in \\eq{eq:yukeigen}. This together\nwith the condition ${\\mathcal{C}}(Y_{21})={\\mathcal{C}}(Y_{12})$ are\nthe requirements to achieve the GST relation. We will present examples\nsatisfying and not satisfying the GST relation.\n\n\nWe remark here the constraints from the bottom mass are\n\\begin{eqnarray}\n\\label{eq:tanb}\nm_b \\tan\\beta=\\varepsilon^{|k_d|} m_t,\\qquad k_d=q_3+d_3+h_d\n\\end{eqnarray}\nsince $m_t=O(\\langle H_u\\rangle)$ and $\\tan\\beta=\\langle H_u\\rangle\/\\langle H_d\\rangle$. 
Thus in terms of charges we have $h_u=-(q_3+u_3)$ and $h_d=q_3+u_3$, for $u=v=0$, $k=2q_3+d_3+u_3$.\n\n\n\\subsection{Quark Mixings}\n\nWe can also obtain the mixing angles in this approximation and compare\nto the required experimental values (see Appendix \\ref{ap:compinf}).\nThe mixing angles in the down sector, again dropping flavour indices,\nare as follows:\n\\begin{eqnarray}\nt^L_{23}&=&e^{i(\\beta^L_2-\\beta^L_1)}a_{23}\\varepsilon^{|s+k|-|k|}+a_{23}a_{22}\\varepsilon^{|s+r+k|+|s+k|-2|k|}e^{i\\xi_L}\\nonumber\\\\\nt^R_{23}&=&e^{i(\\beta^R_2-\\beta^R_1)}a_{32}\\varepsilon^{|r+k|-|k|}+a_{23}a_{22}\\varepsilon^{|s+r+k|+|s+k|-2|k|}e^{i\\xi_R}\\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\nt^L_{13}&=&\\frac{a_{13}\\varepsilon^{|s'+k|}+a_{32}a_{12}\\varepsilon^{|r+k|+|s'+r+k|-|k|}e^{-i2(\\beta^R_2-\\beta^R_1)}}{\\left(\\varepsilon^{|k|}+a^2_{32}\\varepsilon^{2|r+k|-|k|}e^{2i(\\beta^R_1-\\beta^R_2)}\n\\right) e^{i\\beta^L_1}}\\nonumber\\\\\nt^R_{13}&=&\\frac{a_{31}\\varepsilon^{|r'+k|}+a_{23}a_{21}\\varepsilon^{|s+k|+|s+r'+k|-|k|}e^{2i(\\beta^L_2-\\beta^L_1)}}{\\left(\\varepsilon^{|k|}+a^2_{32}\\varepsilon^{2|r+k|-|k|}e^{2i(\\beta^R_1-\\beta^R_2)}\n\\right) e^{-i\\beta^R_1}}\\sqrt{1+|a^2_{32}|\\varepsilon^{2|r+k|-2|k|}}\\nonumber\\\\\nt^L_{12}&=&\\frac{\\left(a_{12}\\varepsilon^{|s'+r+k|}-a_{32}a_{13}\\varepsilon^{|r+k|+|s'+k|-|k|}\\right)e^{-i(\\beta^R_3+\\beta^L_2)}}{\\left(\na_{22}\\varepsilon^{|s+r+k|}-a_{23}a_{32}\\varepsilon^{|s+k|+|r+k|-|k|} \\right)}\\nonumber\\\\\nt^R_{12}&=&\\frac{\\left(a_{21}\\varepsilon^{|s+r'+k|}-a_{23}a_{31}\\varepsilon^{|s+k|+|r'+k|-|k|}\\right)e^{i(\\beta^L_3+\\beta^R_2)}}{\\left(\na_{22}\\varepsilon^{|s+r+k|}-a_{23}a_{32}\\varepsilon^{|s+k|+|r+k|-|k|} \\right)}\\nonumber\\\\\n\\xi_L\\!\\!\\!&\\!=\\!&\\!\\!\\!-(\\beta^L_2-\\beta^L_1)-2(\\beta^R_2-\\beta^R_1),\\\n\\xi_R=-(\\beta^R_2-\\beta^R_1)-2(\\beta^L_2-\\beta^L_1)\\label{eq:mixsgeral}.\n\\end{eqnarray}\nAnalogous results also apply to the up quark sector, with the\nreplacements $r_d\\rightarrow s$, $r_d'\\rightarrow s'$, $k_d\\rightarrow 0$.\nNote that in the case of positive $s,s',r,r'$ and $k$, the angles\n$t^L_{12}$ and $t^L_{23}$ of the left sector do not depend on\n$r_d,r'_d$, \nso they are equal, to a first approximation, for the up and down\nsectors. Having the tangent of the angles expressed in terms of the\nYukawa elements we can see directly their contributions to the CKM\nelements ($V_{\\mathrm{CKM}}=L^uL^{d\\dagger}$ in the notation of\nAppendix (\\ref{ap:diagmat}))\n\\begin{eqnarray}\n\\frac{|V_{ub}|}{|V_{cb}|}&=&\\frac{|s^{u}_{12}s^Q_{23}-s^Q_{13}e^{i(\\Phi_1-\\Phi_2)}|}{|s^Q_{23}|}\\approx 0.09\\sim (\\lambda^2,\\lambda) \\nonumber\\\\\n\\frac{|V_{td}|}{|V_{ts}|}&=&\\frac{|s^{d}_{12}s^Q_{23}-s^Q_{13}e^{i(\\Phi_2)}|}{|s^Q_{23}|}\\sim \\lambda\\nonumber\\\\\n|V_{us}|&=&|s^d_{12}-s^u_{12}e^{i\\Phi_1}|=\\lambda \\approx 0.224\\nonumber\\\\\n{\\rm{Im}}\\{J\\}&=&s^Q_{23}(s^Q_{23}s^d_{12}s^u_{12}\\sin(\\Phi_1)-s^Q_{13}(s^d_{12}\\sin(\\Phi_2))- s^u_{12}\\sin(\\Phi_2-\\Phi_1)),\n\\label{eq:Vsasyu1}\n\\end{eqnarray}\nwith $s^Q_{ij}=|s^d_{ij}-e^{i\\Phi_{X_{ij}}}s^u_{ij}|$. The phases $\\Phi_1$, $\\Phi_2$ and $\\Phi_{X_{ij}}$ depend on the contributions that the mixing angles receive from the different elements of the Yukawa matrix, and have different expressions in terms of the phases of the Yukawa matrix in different cases. 
For example when the elements $(1,2)$ and $(1,3)$ are of the same order and the right handed mixing angle in the $(2,3)$ sector is large, the\n$\\Phi_2$ phase will be\n\\begin{eqnarray}\n\\label{eq:phi2}\n\\Phi_2={\\rm{Arg}}\\left[\\frac{Y^d_{12}+Y^d_{13}t^R_{23}}{Y^d_{33}+Y^d_{23}t^R_{23}} \\right].\n\\end{eqnarray}\nAs we can see from the expressions in \\eq{eq:Vsasyu1} involving $\\Phi_1$, this can be associated with the $U$ sector. When all the diagonalization angles in this sector are small, this phase takes the form\n\\begin{eqnarray}\n\\label{eq:phi1}\n\\Phi_1=\\phi^u_{12}-\\phi^u_{22}\n\\end{eqnarray}\nwhere $\\phi^u_{12}$ and $\\phi^u_{22}$ are the phases of the $Y^u_{12}$ and $Y^u_{22}$ elements.\nFinally the phases $\\Phi_{X_{ij}}$, which appear in $s^Q_{ij}$, can be associated either with the $U$ or with the $D$ sector.\n\\begin{table}[ht] \\centering%\n\\begin{center}\n\\begin{tabular}{|l l |c||l l| c|}\n\\hline\n\\!$U(1)$ relations\\!\\!& \\!Constraint &\\!Reason\\!\\!& $U(1)$ relations\\!\\!& \\!Constraint &\\!Reason\\!\\\\\n\\hline\n$\\varepsilon^{|s+k_d|-|k_d|}$ & $\\!\\sim \\lambda^2$ & $s^Q_{23}$ & $\\varepsilon^{|3q_3+d_3|}$&$\\!\\sim (1,\\lambda^3)$&$m_b$\\\\\n$\\varepsilon^{|s'+k_d|-|k_d|}$ & $\\!\\gtrsim \\lambda^3$ & $s^Q_{13}$ & $\\varepsilon^{|s+r_d+k_d|-|k_d|}$ & $\\!\\sim (\\lambda^2,\\ \\lambda^3) $ & $\\frac{m_s}{m_b}$\\\\\n$\\varepsilon^{|s'+r_d+k_d|-|s+r_d+k_d|}\\!$ & $\\!\\sim\\lambda$ & $s^{Q}_{12}$ & $\\varepsilon^{|s'+r'_d+k_d|-|k_d|}$ & $\\!\\sim (\\lambda^4,\\ \\lambda^5)$ & $\\frac{m_d}{m_s}$\\\\\n$$ & $$ & $$ & $\\varepsilon^{|2s+k_d|-|k_d|}$ & $\\!\\sim \\lambda^4 $ & $\\frac{m_c}{m_t}$\\\\\n$$ & $$ & $$ & $\\varepsilon^{|2s'+k_d|-|k_d|}$ & $\\! \\geq \\lambda^6$ & $\\frac{m_u}{m_c}$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{{\\small Constraints on the parameters $s,\\ s',\\ r_d, \\ r'_d$\nand $k_d$ from quark mixing angles and mass ratios. For the mixing\nangles we need to satisfy the conditions for the up or the down sector,\nwhere the analogous conditions for the up sector are obtained by\nmaking the replacements \n$r_d\\rightarrow s$, $r_d'\\rightarrow s'$, $k_d\\rightarrow 0$. They\ndo not need to be satisfied for both, as long as the sector in\nwhich they are not satisfied does not give a bigger contribution\nthan the indicated power.} }\n\\label{table:phen1}\n\\end{table}\n\nWith the requirements of Table (\\ref{table:phen1}) and the values of quark masses in Appendix (\\ref{ap:compinf}), we can identify the viable solutions in the quark sector. \nOne solution which has been widely explored is the up-down symmetric case for which we have $x=y$ and thus $f_i=q_i=u_i=e_i=d_i=l_i$. In this case $h_u=-2e_3=-h_d$ so $k_u=0$ and $k_d=k_l=4e_3$, but we need two expansion parameters $\\varepsilon_u$ and $\\varepsilon_d$ to reproduce appropriate mass ratios and mixings; thus we have\n\\begin{eqnarray}\nY^f=\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|2s'+k_f|}_f&\\varepsilon^{|s+s'+k_f|}_f&\\varepsilon^{|s'+k_f|}_f\\\\\n\\varepsilon^{|s+s'+k_f|}_f&\\varepsilon^{|2s+k_f|}_f&\\varepsilon^{|s+k_f|}_f\\\\\n\\varepsilon^{|s'+k_f|}_f&\\varepsilon^{|s+k_f|}_f&\\varepsilon^{|k_f|}\n\\end{array}\n\\right].\n\\end{eqnarray} \nWe can think of fixing $s+s'$, and then checking for which choice of $s$ we have appropriate phenomenological solutions. 
For example if we take $s+s'=\\pm 3$ \nand $e_3=0$ ($k_f=0$, $\\forall f$) we have\n\\begin{eqnarray}\n\\label{eq:textibross}\nY^f=\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|6-2f_2|}_f&\\varepsilon^{|3|}_f&\\varepsilon^{|3-f_2|}_f\\\\\n\\varepsilon^{|3|}_f&\\varepsilon^{|2f_2|}_f&\\varepsilon^{|f_2|}_f\\\\\n\\varepsilon^{|3-f_2|}_f&\\varepsilon^{|f_2|}_f&1\n\\end{array}\n\\right]\n\\end{eqnarray} \nThe viable phenomenological fits for the quark sector are $f_2=-1$ and $f_1=4$, or $f_2=1$ and $f_1=-4$\n\\cite{Ibanez:1994ig}. In these cases we then have $x=y=\\pm 3$\nrespectively. \n\n\n\\section{Neutrino masses and mixings in SRHND\\label{sec:neuts}}\nIn this section we apply the requirements of obtaining acceptable\nneutrino masses and mixings by using a class of seesaw models where \n$l_2 =l_3$. These are a subset of a class of seesaw models called single\nright-handed neutrino dominance (SRHND) or sequential dominance\n\\cite{King:1998jw}. \nThis additional constraint $l_2 =l_3$ will henceforth be \napplied in obtaining phenomenological solutions in the\nlepton sector. \n\nApart from the obvious benefit of considering the neutrino sector, it\nwill turn out that the neutrino sector will constrain the absolute \nvalues of the charges\nunder the $U(1)$ family symmetry (not just the charge differences), due to\nthe Majorana nature of neutrinos.\nThis is due to the relations between the\ncharges imposed by the relevant GUT constraints, or the extended GUT\nconstraints, eq.~(\\ref{eq:1}) for the extended $SU(5)$ solution of\nsection \\ref{sec:genrzsu5like} and eq.~(\\ref{eq:8}) for \nthe extended Pati-Salam solution of section \\ref{sec:pati-salam-like-case}.\nFor example the additional constraint $l_2 =l_3$ implies immediately\n\\begin{equation}\nr_d = s_{e, \\nu} = l_2 - l_3=0,\n\\label{l2l3}\n\\end{equation}\nin the $SU(5)$ type cases from Eq.\\ref{eq:24}.\n\nHere we would like to study the cases for which large mixing angles in\nthe atmospheric and solar sectors can be explained\nnaturally in terms of the parameters of the $U(1)$ class of symmetries\nthat we have constructed in the previous sections, within the framework\nof the type I see-saw mechanism together with the scenario of\nsingle right-handed neutrino dominance (SRHND). We refer the reader\nto \\cite{King:1998jw} for a review of this scenario. Here we\nmake a brief summary of the results and apply them to the present\ncases. In the type I see-saw the mass matrix of the low energy\nneutrinos is given by $m_{LL}\\approx v^2_u Y^{\\nu} M^{-1}_R Y^{\\nu\nT}$, where $Y^{\\nu}$ is the Dirac matrix for neutrinos and $M_R$ is\nthe Majorana matrix for right-handed neutrinos. 
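As a rough numerical illustration of this see-saw formula, the sketch below evaluates $m_{LL}$ for an assumed Dirac texture and an assumed (diagonal) heavy Majorana mass matrix; all input values are illustrative order-of-magnitude choices and are not derived from any of the charge assignments discussed in this paper.\n\\begin{verbatim}\nimport numpy as np\n\nvu  = 174.0     # GeV; illustrative value of the up-type Higgs vev\neps = 0.22\n\n# assumed Dirac texture with a dominant third column (illustration only)\nYnu = eps**2 * np.array([[eps**3, eps, eps],\n                         [eps**2, 1.0, 1.0],\n                         [eps**2, 1.0, 1.0]])\nMR  = np.diag([1e15, 1e13, 1e12])   # GeV; taken diagonal for simplicity\n\nmLL = vu**2 * Ynu @ np.linalg.inv(MR) @ Ynu.T    # type I see-saw formula\nmnu = np.sort(np.abs(np.linalg.eigvalsh(mLL)))   # light masses in GeV\nprint(mnu * 1e9, \"eV\")   # largest eigenvalue set by the M_3 (third column) term\n\\end{verbatim}\n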
If we have three right\nhanded neutrinos, $M_1$, $M_2$ and $M_3$, then for the right handed\nneutrino mass, in terms of $U(1)$ charges we have:\n\\begin{eqnarray}\n\\label{eq:Ynu}\nY^\\nu &=& \\left[\n \\begin{array}{ccc}\n \\epsilon^{|l_1+n_1+h_u|} & \\epsilon^{|l_1+n_2+h_u|} & \\epsilon^{|l_1+n_3+h_u|} \\\\\n \\epsilon^{|l_2+n_1+h_u|} & \\epsilon^{|l_2+n_2+h_u|} & \\epsilon^{|l_2+n_3+h_u|} \\\\\n \\epsilon^{|l_3+n_1+h_u|} & \\epsilon^{|l_3+n_2+h_u|} & \\epsilon^{|l_3+n_3+h_u|} \n \\end{array}\n\\right]\\\\\n\\label{eq:MR}\nM_{RR}&=& \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|2n_1+\\sigma|}&\\varepsilon^{|n_1+n_2+\\sigma|}&\\varepsilon^{|n_1+n_3+\\sigma|}\\\\\n\\varepsilon^{|n_1+n_2+\\sigma|}&\\varepsilon^{|2n_2+\\sigma|}&\\varepsilon^{|n_2+n_3+\\sigma|}\\\\\n\\varepsilon^{|n_1+n_3+\\sigma|}&\\varepsilon^{|n_2+n_3+\\sigma|}&\\varepsilon^{|2n_3+\\sigma|}\\\\\n\\end{array}\n\\right]<\\Sigma>\n\\end{eqnarray}\nwhere the charges $n_i$ are the $U(1)$ charges of the right handed neutrinos, $\\nu_{Ri}$ and $\\sigma$ is the $U(1)$ charge of the field $\\Sigma$ giving Majorana masses to the right handed neutrinos. These charges are not constrained by the anomaly cancellation conditions \\eq{eq:sumqi}-\\eq{eq:hd} of Section (\\ref{sec:anomconst}), at least in the $SU(5)$ case, which gives some freedom in order to find appropriate solutions giving two large mixing angles and one small mixing angle for neutrinos. We expect $\\Sigma$ to be of order the scale at which the $U(1)$ symmetry is broken, for example at $M_P=M_{\\rm{Planck}}$, or some other fundamental scale, such as the Grand Unification scale, $M_G$, for the solutions with an underlying GUT theory.\n\nHere we restrict ourselves to the cases in which \\eq{eq:MR} can be considered as diagonal, $M_R\\approx \\rm{diag}\\{M_1,M_2,M_3 \\}$, for which we need in the $(2,3)$ block\n\\begin{eqnarray}\n\\label{eq:srhnd1}\n|n_3+n_2+\\sigma|&>& min\\{ |2n_3+\\sigma|,|2n_2+\\sigma|\\},\\nonumber\\\\\n2|n_3+n_2+\\sigma|&\\geq& |2n_3+\\sigma| +|2n_2+\\sigma|.\n\\end{eqnarray}\nThe conditions in the $(1,2)$ block are analogous to the $(2,3)$ and also we need\n\\begin{eqnarray}\n\\label{eq:srhnd2}\n|n_1+n_3+\\sigma|>max\\{|2n_2+\\sigma|, |2n_3+\\sigma|\\}.\n\\end{eqnarray}\nNow, there are two cases that we can consider here, which correspond to selecting which of the neutrinos will dominate, $M_1$ or $M_3$. For the later case the SRHND conditions are\n\\begin{eqnarray}\n\\frac{|Y^\\nu_{i3}Y^\\nu_{j3}|}{|M_3|} \\gg \\frac{|Y^\\nu_{i2}Y^\\nu_{j2}|}{|M_2|}\\gg\n\\frac{|Y^{\\nu 2}_{31}, Y^{\\nu 2}_{21}, Y^{\\nu}_{21}, Y^{\\nu}_{31} |}{|M_1|}; \\quad i,j=1,2,3. 
\n\\end{eqnarray}\nFor the case in which $M_1$ dominates we just have to interchange the indices $1$ and $3$ in the neutrino Yukawa terms.\n\nFor the case in which $M_3$ dominates, to a first approximation, we have the following expressions for the neutrino mixings \\cite{King:1998jw},\n\\begin{eqnarray}\nt^\\nu_{23}&=&\\frac{Y_{23}^{\\nu}}{Y^{\\nu}_{33}},\\label{eq:tan23}\\\\\nt^\\nu_{13} &=&\\frac{Y^\\nu_{13}} {\\sqrt{Y^{\\nu 2}_{33}+Y^{\\nu 2}_{23}}}+\n\\frac{M_3}{M_2}\\frac{Y^\\nu_{12}(s_{23}Y^{\\nu}_{22}+c_{23}Y^\\nu_{32})}{\\sqrt{Y^{\\nu 2}_{33}+Y^{\\nu 2}_{23}} },\\label{eq:tan13}\\\\\nt^\\nu_{12}&=&\\frac{Y^{\\nu}_{12}(Y^{\\nu 2}_{33}+Y^{\\nu 2}_{23})-Y^{\\nu}_{13}(Y^{\\nu}_{33}Y^{\\nu}_{32}-Y^{\\nu}_{22}Y^{\\nu}_{23} ) }\n{(Y^{\\nu}_{33}Y^{\\nu}_{33}-Y^{\\nu}_{32}Y^{\\nu}_{23}) \\sqrt{Y^{\\nu 2}_{33} +Y^{\\nu 2}_{23} +Y^{\\nu 2}_{13} } }\\approx\n\\frac{Y^\\nu_{12}}{c_{23}Y^\\nu_{22}-s_{23}Y^\\nu_{32}}.\\label{eq:tan12}\n\\end{eqnarray}\nIn terms of the Abelian charges the Yukawa elements are\n\\begin{eqnarray}\nY^\\nu_{ij}=\\varepsilon^{|l_i+n_j+h_u|}\\equiv \\varepsilon^{| l'_i+n_j|}, \n\\quad l'_i\\equiv l_i+h_u=l_i-2e_3,\n\\end{eqnarray}\nwhere we have defined primed lepton doublet charges\nwhich absorb the Higgs charge, as shown.\nWe can work here in terms of the primed charges; once they are fixed\nwe can determine the original (unprimed) Abelian charges. The approximation in \\eq{eq:tan12}\ncorresponds to the case in which we have enough suppression of the\nsecond term in the expression for $t^\\nu_{12}$. In \\eq{eq:tan13} the\nsecond term can sometimes be neglected, depending on the ratio\n$M_3\\/M_2$. The two heaviest low energy neutrino masses are given by\n\\begin{eqnarray}\nm_{\\nu_3}=\\frac{a^{\\nu 2}_3 \\varepsilon^{2|l'_2+n_3|}v^2 }{M_3},\\quad m_{\\nu_2}=\\frac{ a^{\\nu 2}_2 \\varepsilon^{2|l'_2+n_2|} v^2 }{M_2},\n\\end{eqnarray}\nwhere we have written $a^{\\nu 2}_3 \\varepsilon^{2|l'_2+n_3|}= Y^{\\nu 2}_{33}+ Y^{\\nu 2}_{23}$ and $a^{\\nu 2}_2 \\varepsilon^{2|l'_2+n_2|}=\\left( c_{23} Y^{\\nu}_{22} -s_{23}Y^{\\nu}_{32} \\right)^2$. 
Thus the ratio of the solar to atmospheric neutrino masses can be written as\n\\begin{eqnarray}\n\\label{ratmn}\n\\frac{m_{\\nu_2}}{m_{\\nu_3}}\\approx \\frac{M_3}{M_2} \\frac{c^2_{23}}{c^2_{12}}\\frac{\\left( Y^{\\nu}_{22} -Y^{\\nu}_{32} t^\\nu_{23}\\right)^2}{Y^{\\nu 2}_{33} + Y^{\\nu 2}_{23}}\\sim \\varepsilon^{p_2-p_3},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\label{eq:p3}\np_{k}=2|l'_2+n_k|-|2n_k+\\sigma|,\\ {\\mathrm{for}}\\ k=2,\\ 3.\n\\end{eqnarray}\n\nNote that $p_k$ is then defined such that \n\\begin{equation}\n  \\label{eq:30}\n  m_{\\nu_k} \\approx \\frac{v^2}{\\left<\\Sigma\\right>} \\epsilon^{p_k}.\n\\end{equation}\n\n\n\\section{$SU(5)$ solutions satisfying the GST relation}\n\\label{sec:su5-solut-satisfy-GST}\nIn this section we shall continue to focus on the \ncase of $SU(5)$, where the quark Yukawa matrices take\nthe form of Eq.\\ref{eq:su5matparam}, and where, motivated by large\natmospheric neutrino mixing,\nwe shall assume $r_d=0$ from Eq.~(\\ref{l2l3}).\nThe purpose of this section is to show how the GST\nrelation can emerge from $SU(5)$, by imposing additional\nconstraints on the parameters.\n\\footnote{Note that results in Section \\ref{sec:su5-solut-satisfy-GST} and in \nSection \\ref{sec:su5-solutions-not-GST} apply to both $SU(5)$ type and extended $SU(5)$ \nmodels, as discussed above.}\n\n\n\n\\subsection{The quark sector}\nWe have already seen that the GST relation\ncan be achieved in the u sector, mainly by allowing the\nparameters $s$ and $s'$ to have different signs. \nIn the down sector, to satisfy GST we additionally require:\n\\begin{eqnarray}\n|k_d+r'_d+s|&=&|k_d+s'|\\nonumber\\\\\n|k_d+r'_d+s'|-|k_d|&>&|k_d+r'_d+s|+|k_d+s'|-|k_d+s|\\nonumber\\\\\n|r'_d+k_d|&>&|k_d|.\n\\end{eqnarray}\nThe first of these equations ensures the equality of the order of the\nelements $(1,2)$ and $(2,1)$ of the $Y^d$ matrix. The second equation\nensures that the element $(1,1)$ is suppressed enough with respect to\nthe contribution from the diagonalization of the $(1,2)$ block. This\nlast condition is usually satisfied whenever\n$|k_d+r'_d+s'|>|k_d+r'_d+s|$ is satisfied. Finally the third condition\nensures a small right-handed mixing for d-quarks and a small\nleft-handed mixing for charged leptons. Now in order to satisfy the\nrelations\n\\begin{eqnarray}\ns^u_{12}=\\sqrt{\\frac{m_u}{m_c}}\\approx \\lambda^2,\\quad s^d_{12}=\\sqrt{\\frac{m_d}{m_s}}\\approx \\lambda,\n\\end{eqnarray}\nwe need a structure of matrices, in terms of just one expansion parameter $\\varepsilon=O(\\lambda)$, such as\n\\begin{eqnarray}\nY^u= \\left[\n\\begin{array}{ccc}\n... &\\varepsilon^6&...\\\\\n\\varepsilon^6& \\varepsilon^4 & \\varepsilon^2\\\\\n...&\\varepsilon^2 &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n... &\\varepsilon^5& \\varepsilon^5\\\\\n\\varepsilon^5& \\varepsilon^4 & \\varepsilon^4\\\\\n...&\\varepsilon^2 &\\varepsilon^2\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor which we have\n\\begin{eqnarray}\ns^u_{12}\\approx \\varepsilon^2 \\quad s^d_{12}\\approx \\varepsilon,\\quad s^d_{23}\\approx \\varepsilon^2,\\quad s^d_{13}\\approx \\varepsilon^3,\\nonumber\\\\\n\\frac{m_c}{m_t}\\approx \\varepsilon^4,\\quad \\frac{m_s}{m_b}\\approx \\varepsilon^2, \\quad \\frac{m_b}{m_t}\\approx \\varepsilon^2, ~~~~~\n\\end{eqnarray}\nin agreement with observed values for quark masses and mixings for $\\varepsilon=\\lambda$.\n\nNow we can proceed as in the example of \\eq{eq:textibross} where \n$s'+s$ is fixed to be $\\pm 3$. 
In this case we see that we can have plausible solutions in the up sector by allowing half integer solutions\n\\begin{eqnarray}\n\\label{eq:solsspgst}\n|s'+s|=13\/2,\\ 6,\\ 11\/2.\n\\end{eqnarray}\nWe will refer to these solutions as Solution 1, 2 and 3 respectively.\nNote that only the charge differences are constrained here, the actual charges are not.\n\n\\noindent {\\bf Solution 1}, $|s+s'|=13\/2$,\n\\begin{eqnarray}\n\\label{eq:tex1gst}\nY^u= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{35\/2}&\\varepsilon^{13\/2}&\\varepsilon^{35\/4}\\\\\n\\varepsilon^{13\/2}& \\varepsilon^{9\/2} & \\varepsilon^{9\/4}\\\\\n\\varepsilon^{35\/4}&\\varepsilon^{9\/4} &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{69\/4}&\\varepsilon^{25\/4}& \\varepsilon^{25\/4}\\\\\n\\varepsilon^{25\/4}& \\varepsilon^{19\/4} & \\varepsilon^{19\/4}\\\\\n\\varepsilon^{17\/2}&\\varepsilon^{5\/2} &\\varepsilon^{5\/2}\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor\n\\begin{eqnarray}\n&& r'_d=l_1-l_3=11,\\ s=-\\frac{9}{4},\\ s'=\\frac{35}{4},\\ k_d=-\\frac{5}{2},\n\\quad {\\rm or}\\nonumber\n\\\\ && \nr'_d=l_1-l_3=-11, \\ s=\\frac{9}{4},\\ s'=-\\frac{35}{4},\\ k_d=\\frac{5}{2}.\n\\label{eq:sol1gst}\n\\end{eqnarray}\n{\\bf Solution 2}, $|s'+s|=6$,\n\\begin{eqnarray}\n\\label{eq:tex2gst}\nY^u= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{16}&\\varepsilon^6&\\varepsilon^8\\\\\n\\varepsilon^6& \\varepsilon^4 & \\varepsilon^2\\\\\n\\varepsilon^8&\\varepsilon^2 &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{31\/2}&\\varepsilon^{11\/2}& \\varepsilon^{11\/2}\\\\\n\\varepsilon^{11\/2}& \\varepsilon^{9\/2} & \\varepsilon^{9\/2}\\\\\n\\varepsilon^{15\/2}&\\varepsilon^{5\/2} &\\varepsilon^{5\/2}\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor\n\\begin{eqnarray}\n&&r'_d=l_1-l_3=10, \\ s=-2,\\ s'=8,\\ k_d=-\\frac{5}{2},\\quad {\\rm or}\\nonumber\\\\\n&&r'_d=l_1-l_3=-10, \\ s=2,\\ s'=-8,\\ k_d=\\frac{5}{2}.\\label{eq:sol2gst}\n\\end{eqnarray}\n{\\bf Solution 3}, $|s+s'|=11\/2$,\n\\begin{eqnarray}\n\\label{tex3gst}\nY^u= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{29\/2}&\\varepsilon^{11\/2}&\\varepsilon^{29\/4}\\\\\n\\varepsilon^{11\/2}& \\varepsilon^{7\/2} & \\varepsilon^{7\/4}\\\\\n\\varepsilon^{29\/4}&\\varepsilon^{7\/4} &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{31\/2}&\\varepsilon^{21\/4}& \\varepsilon^{21\/4}\\\\\n\\varepsilon^{21\/4}& \\varepsilon^{15\/4} & \\varepsilon^{15\/4}\\\\\n\\varepsilon^{33\/4}&\\varepsilon^{2} &\\varepsilon^{2}\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor\n\\begin{eqnarray}\n&&r'_d=l_1-l_3=41\/4, \\ s=-\\frac{7}{4},\\ s'=\\frac{29}{4},\\ k_d=-2,\n\\quad{\\rm or}\\nonumber\n\\\\ &&r'_d=l_1-l_3=-41\/4, \\ s=\\frac{7}{4},\\ s'=-\\frac{29}{4},\\ k_d=2. \n\\label{eq:sol3gst}\n\\end{eqnarray}\nAll the previous solutions \\eqs{eq:sol1gst}-\\eqs{eq:sol3gst} lead to\nsmall $\\tan\\beta$ ($O(1)$), due to the choice of $k_d$. 
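As a rough numerical check of Solution 2, the sketch below builds the textures of \\eq{eq:tex2gst} with randomly generated order unity coefficients, diagonalizes them, and prints the resulting mass ratios and mixings; since the $O(1)$ coefficients are random and phases and Clebsch factors are ignored, only the orders of magnitude of the output are meaningful.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\neps = 0.22\n\ndef O1(shape):\n    # random order-unity coefficients with random signs (illustration only)\n    return rng.uniform(0.5, 1.5, shape) * rng.choice([-1.0, 1.0], shape)\n\n# leading powers of eps for Solution 2, |s + s'| = 6\nPu = np.array([[16.0, 6.0, 8.0], [6.0, 4.0, 2.0], [8.0, 2.0, 0.0]])\nPd = np.array([[15.5, 5.5, 5.5], [5.5, 4.5, 4.5], [7.5, 2.5, 2.5]])\nYu = O1((3, 3)) * eps**Pu\nYd = O1((3, 3)) * eps**Pd\n\ndef diagonalize(Y):\n    U, s, Vh = np.linalg.svd(Y)   # Y = U diag(s) Vh, s in descending order\n    return s[::-1], U[:, ::-1]    # ascending eigenvalues and left rotation\n\nmu_, Lu = diagonalize(Yu)\nmd_, Ld = diagonalize(Yd)\nV = Lu.T @ Ld                     # CKM-like mixing from the left mismatch\n\nprint(\"m_c over m_t:\", mu_[1] / mu_[2], \" m_s over m_b:\", md_[1] / md_[2])\nprint(\"m_d over m_s:\", md_[0] / md_[1])\nprint(\"|V_us|:\", abs(V[0, 1]), \" |V_cb|:\", abs(V[1, 2]), \" |V_ub|:\", abs(V[0, 2]))\n\\end{verbatim}\n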
To find\nsolutions such that $\\tan\\beta$ is $O(10)$ is more difficult, due to\nthe requirements in the up sector, but we have found the following\nsolution\n\\begin{eqnarray}\n&&r'_d=l_1-l_3=\\frac{19}{2}, \\ s=-2,\\ s'=\\frac{15}{2},\\ k_d=-\\frac{3}{2},\n \\quad {\\rm or}\\nonumber\n\\\\ \n&&\nr'_d=l_1-l_3=\\frac{-19}{2}, \\ s=2, \\ s'=-\\frac{15}{2},\\ k_d=\\frac{3}{2}\n\\label{eq:sol4gst}.\n\\end{eqnarray}\n\\label{sec:neutgst}\n\n\\subsection{The neutrino sector}\n\\label{sec:neutrino-sector-gst}\n\nNow we construct solutions for the lepton sector constrained by the\nrequirements from the quark sector in the previous subsection, \nwhere we assumed $r_d=l_2-l_3=0$, and determined\nthe charge differences $r'_d = l_1-l_2$ that agree with the GST\nrelation. Indeed it is convenient to label the solutions\nin the previous subsection by the value of $r'_d = l_1-l_2$.\nHere we find the charges $n_i,l_i,$ and $\\sigma$ which satisfy\nthe conditions arising from the neutrino sector,\nEqs.~(\\ref{eq:tan23}-\\ref{eq:tan12}). {\\footnote{The\ncondition $l_2 =l_3$ is a requirement of the class of see-saw models\nthat we are looking for, single right-handed neutrino dominance\n(SRHND). Note that here we can also have $l'_2=-l'_3$ which then\nforces $n_3=0$ for $l'_2 \\ne 0$, in which case the solutions will be\neven more restricted.}}. In order to satisfy \\eq{eq:tan23}, the most\nnatural solution to achieve $t^\\nu_{12}$ large is to have\n\\begin{eqnarray}\n|l'_1+n_2|=|l'_2+n_2|.\n\\end{eqnarray}\nThe simplest solution is to assume that $n_2 = 0$. \nSince $l'_1$ and $l'_2$ are related through $r'_d = l_1 - l_3 = l'_1 - l'_2$ the solutions to this equation are:\n\\begin{eqnarray}\nr'_d&=&0\\\\\nl'_1&=&\\frac{r'_d}{2}=-l'_2.\\label{eq:soll1l2}\n\\end{eqnarray}\n\nSince none of the solutions found in the previous subsection had\n$r'_d = 0$, we have to work with the second solution in\nEq.~(\\ref{eq:soll1l2}). However, we do not need to solve\nEq.~(\\ref{eq:tan12}) exactly, so we are going to perturb away from it,\nby keeping $n_2\\neq 0$, but we expect it to be small in comparison\nwith $l'_1 = -l'_2$. Then we write:\n\\begin{eqnarray}\n\\label{eq:needp12}\np_{12}=|l'_1+n_2|-|l'_2+n_2|\n\\end{eqnarray}\nSo $t^\\nu_{12}$ is $O(\\varepsilon^{p_{12}})$. The solution \\eq{eq:soll1l2} implies that $l'_1$ and $l'_2$ should have opposite sign, so we choose the case $l'_1>0$\n(the other case is similar). Since $r'_d$ is large for all three GST solutions, and $n_2$ should be small in order to satisfy Eq.~(\\ref{eq:needp12}),\nwe can see that $|l'_2 + n_2| = -(l'_2 + n_2)$, and $|l'_1 + n_2| =\nl'_1 + n_2$ for all the solutions from the previous subsection.\nPutting these relations into Eq.~(\\ref{eq:needp12}) we get:\n\\begin{eqnarray}\n\\label{eq:condl1l2}\nn_2=\\frac{p_{12}}{2}.\n\\end{eqnarray}\nSo when we choose $p_{12}$, $n_2$ is determined. Now for the $t^\\nu_{13}$ mixing, which should be at most $O(\\lambda)$, from \\eq{eq:tan13} we need\n\\begin{eqnarray}\n\\label{eq:n3pos}\n|l'_1+n_3|>|l'_2+n_3|\\Rightarrow n_3>0,\n\\end{eqnarray}\nhence let us define $p_{13}$ by:\n\\begin{eqnarray}\n\\label{eq:needp13}\np_{13}=|l'_1+n_3|-|l'_2+n_3|,\n\\end{eqnarray}\n %\nWe assume that the first term in Eq.~(\\ref{eq:tan13}) dominates. Then $t^\\nu_{13}\\approx \\varepsilon^{p_{13}}\/\\sqrt{2}$. 
\n\\footnote{We have checked that this is indeed true for the solutions that we find for $n_2,n_3$ later in this section.}\nBy applying the same logic that led to Eq.~(\\ref{eq:condl1l2}), we achieve:\n\\begin{eqnarray}\n\\label{eq:n3zeta}\nn_3=\\frac{p_{13}}{2}\n\\end{eqnarray}\nSo fixing $p_{13} \\geq 1$ we fix $n_3$. Now we need to impose the conditions under which we can have an appropriate value of \\eq{ratmn}. \nFirst note that in order to achieve $m_{\\nu_3}=O(10^{-2}){\\rm eV}$:\n\\begin{eqnarray}\n{\\rm{for}} <\\Sigma>=M_P,\\quad \\frac{v^2}{<\\Sigma>}\\approx 6 \\times 10^{-6} \\ {\\rm eV}\\quad {\\rm we}\\ {\\rm need}\\ \\varepsilon^{p_3}\\sim 10^4 \\nonumber\\\\\n{\\rm{for}} <\\Sigma>=M_G,\\quad \\frac{v^2}{<\\Sigma>}\\approx 6 \\times 10^{-3} \\ {\\rm eV}\\quad {\\rm we}\\ {\\rm need}\\ \\varepsilon^{p_3}\\sim 10,\n\\end{eqnarray}\nwhere $p_3$ has been defined in \\eq{eq:p3}. In terms of powers of $\\lambda$, we have $\\lambda^{-4}-\\lambda^{-7}=O(10^5)-O(10^4)$ for $<\\Sigma>=M_P$ and $\\lambda^{-1}, \\lambda^{-2}=O(10)$\nfor $<\\Sigma>=M_G$. This corresponds to the following requirements:\n\\begin{eqnarray}\n\\label{eq:consonsig}\n{\\rm{for}} <\\Sigma>=M_P,\\quad p_3=(-4,-7)\\\\\n{\\rm{for}} <\\Sigma>=M_G,\\quad p_3=(-1,-2).\n\\end{eqnarray}\nWe can conclude that for zero $n_2$, from Eq.~(\\ref{eq:srhnd1}), since $n_3 > 0$, so must $\\sigma$ be positive. Then we can write the power $p_2-p_3$ ($m_{\\nu_2}\/m_{\\nu_3}\\sim\\varepsilon^{p_2-p_3}$) as follows:\n\\begin{equation}\n \\label{eq:2}\n p_2-p_3 = -2(l'_2 + n_2)- (2n_2 + \\sigma) +(2n_3 + \\sigma) \\mp 2(l'_2 + n_3).\n\\end{equation}\nThe uncertainty in the final sign comes from whether $|l'_2| > |n_3|$. If this is the case then we get:\n\\begin{eqnarray}\n\\label{eq:difn3n2}\np_3-p_2=4(n_2-n_3).\n\\end{eqnarray}\nOtherwise we end up with\n\\begin{equation}\np_2-p_3= -4(l'_2+n_2)\n\\end{equation}\nThe second form is of no use to us, since we know that $-l_2'$ is big for the models we are considering, and since $n_2$ is small\nwe can not get an acceptable mass ratio for $m_{\\nu_2}$ to $m_{\\nu_3}$. For the first form, Eq.~(\\ref{eq:difn3n2}), we need $n_2 \\ne 0$,\nbecause substituting \\eq{eq:n3zeta} into \\eq{eq:difn3n2} we have $p_2-p_3=2p_{13}-4n_2$ and we need $p_{13}\\geq 1$ so for $n_2=0$ we have $p_2-p_3\\geq 2$. \n\nWith the above requirements then we can see that the parameters $n_3$ and $n_2$ do not depend on $r'_d$. The only parameter which \ndepends on this is $\\sigma$, through Eq.~(\\ref{eq:2}), using the fact that $l'_2 = -r'_d\/2$. This also fixes the scale at \nwhich the $U(1)$ should be broken. 
So, independently of $r'_d$, we have the following solutions\n\\begin{eqnarray}\np_{12}=\\frac{1}{4},\\ p_{13}=1, \\ p_2-p_3=\\frac{3}{2} & \\Rightarrow & \\ n_2=\\frac{1}{8},\\ n_3=\\frac{1}{2};\\nonumber \\\\ \np_{12}=\\frac{1}{2},\\ p_{13}=1, \\ p_2-p_3=1 & \\Rightarrow & \\ n_2=\\frac{1}{4},\\ n_3=\\frac{1}{2}.\\label{eq:solsn3n2}\n\\end{eqnarray}\nWe can write the approximate expressions of mixings and masses in terms of the above results and the coefficients $a^\\nu_{ij}$ of $O(1)$,\n\\begin{eqnarray}\n\\label{eq:mixmasresn}\nt^\\nu_{23}&=&\\frac{a^{\\nu}_{23}}{a^{\\nu}_{33}},\\quad\nt^\\nu_{13}\\ \\ = \\ \\ \\frac {a^\\nu_{13}\\varepsilon^{|2n_3|}}{\\sqrt{a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23} }},\\quad\nt^\\nu_{12}\\ \\ = \\ \\ \\frac{a^\\nu_{12}\\varepsilon^{|2n_2|}}{(c_{23}a^\\nu_{22}-s_{23}a^\\nu_{32})},\\nonumber\\\\\n\\frac{m_{\\nu_2}}{m_{\\nu_3}}&=& \\frac{c^{\\nu\\ 2}_{23}}{c^{\\nu\\ 2}_{12}}\\frac{(a^{\\nu}_{22}-a^{\\nu}_{32}t_{23})^2}{(a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23})}\\varepsilon^{|4(n_3-n_2)|},\\quad\nm_{\\nu_3}\\ \\ = \\ \\ \\frac{v^2}{\\langle \\Sigma \\rangle}(a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23})\\varepsilon^{|p_3|}.\n\\end{eqnarray}\nAs we have seen above, the charges $\\sigma$ are constrained by the differences $r'_d$, the requirements of \nEq.~(\\ref{eq:2}) and the solutions to \\eq{eq:solsn3n2}, which have the same value for $n_3$, so for these two sets of solutions we have the same value for $\\sigma$. We write down these solutions for $<\\Sigma>=M_P$ in Table (\\ref{tbl:solGSTMP}) and for $<\\Sigma>=M_G$ in Table (\\ref{tbl:solGSTMG}).\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|c|cc|ccc|}\n \\hline\nSol.& $r'_d$ & $n_2$ & $n_3$ & $p_3$ & $\\sigma$ & $M_3$ [GeV] \\\\\n \\hline\n {\\bf 1}&11&$\\frac{1}{8}$&$\\frac{1}{2}$ & (-4,-7)&(14,16)&$O(10^{10}),O(10^8)$ \\\\\n {\\bf 1}&11&$\\frac{1}{4}$&$\\frac{1}{2}$& (-4,-7)&(14,16)&$O(10^{10}),O(10^8)$ \\\\\n {\\bf 2}&10&$\\frac{1}{8}$&$\\frac{1}{2}$&(-4,-7)&(13,14)&$O(10^{11}),O(10^9)$ \\\\\n {\\bf 2}&10&$\\frac{1}{4}$&$\\frac{1}{2}$&(-4,-7)&(13,14)&$O(10^{11}),O(10^9)$ \\\\\n {\\bf 3}&$\\frac{41}{4}$&$\\frac{1}{8}$&$\\frac{1}{2}$&(-4,-7)&$(\\frac{53}{4},\\frac{57}{4})$&$O(10^{10})$,$O(10^8)$ \\\\\n {\\bf 3}&$\\frac{41}{4}$&$\\frac{1}{4}$&$\\frac{1}{2}$&(-4,-7)&$(\\frac{53}{4},\\frac{57}{4})$&$O(10^{10})$,$O(10^8)$ \\\\\n \\hline\n \\end{tabular}\n \\caption{$\\Sigma$ at $M_P$ for the solutions satisfying the GST relation.}\n \\label{tbl:solGSTMP}\n\\end{table}\n %\n %\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|c|cc|ccc|}\n \\hline\n Sol.&$r'_d$ & $n_2$ & $n_3$ & $p_3$ & $\\sigma$ & $M_3$ [GeV] \\\\\n \\hline\n {\\bf 1}&11& $\\frac{1}{8}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^8)$ \\\\\n {\\bf 1}&11& $\\frac{1}{4}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^8)$ \\\\\n {\\bf 2}&10& $\\frac{1}{8}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^9), O(10^{10})$ \\\\\n {\\bf 2}&10& $\\frac{1}{4}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^9), O(10^{10})$ \\\\\n {\\bf 3}&$\\frac{41}{4}$& $\\frac{1}{8}$&$\\frac{1}{2}$&(-1,-2)&$(\\frac{37}{4},\\frac{41}{4})$&$O(10^9), O(10^8)$ \\\\\n {\\bf 3}&$\\frac{41}{4}$& $\\frac{1}{4}$&$\\frac{1}{2}$&(-1,-2)&$(\\frac{37}{4},\\frac{41}{4})$&$O(10^9), O(10^8)$ \\\\\n \\hline\n \\end{tabular}\n \\caption{$\\Sigma$ at $M_G$ for the solutions satisfying the GST relation.}\n \\label{tbl:solGSTMG}\n\\end{table}\n %\n\n The solutions presented here satisfy the conditions of the single neutrino right-handed dominance, Eq.~(\\ref{eq:srhnd1}), \nwhich relate second and third 
families. For the first and second family we need similar conditions, which are safely satisfied whenever\n$2n_1>2n_2>-\\sigma$ for $(2n_i+\\sigma)$ positive. Thus $n_1$ is not completely determined but we can choose it to be a negative number between $-\\sigma\/2$ and $0$.\n\nNow that we have determined the conditions that the charges $l'_i$ and $n_i$ need to satisfy in order to produce \nSRHND solutions we can determine the $e_i$ and $l_i$ charges, which are in agreement with the cancellation of\n anomalies, Eqs.(\\ref{eq:A3}-\\ref{eq:A1p}), and that determines the matrices $Y^e$, $Y^u$ and $Y^d$.\nIn Section \\ref{sec:su5q} we presented the conditions that the fermion mass matrices $Y^u$, $Y^d$, \n$Y^e$ and $Y^{\\nu}$ need to satisfy in order to give acceptable predictions for mass ratios and mixings but \nwithout specifying the charges. \nThe charges are then determined from $r'_d$ and $k_d$. We start by rewriting $k_d$ using the $SU(5)$ charge relations,\nand the fact that $l'_i \\equiv l_i + h_u$:\n\\begin{equation}\n \\label{eq:25}\n k_d = q_3 + d_3 + h_d = e_3 + l_3 - h_u = e_3 + (l'_3 - h_u ) - h_u \n\\end{equation}\nThen we use the fact that $k_u = 0 = 2e_3 + h_u$, and we can solve for $e_3$ in terms of $k_d$ and $r'_d$ ( using Eq.~(\\ref{eq:soll1l2}):\n\\begin{equation}\n \\label{eq:26}\n e_3 = \\frac{2 k_d + r'_d}{10}\n\\end{equation}\nOnce we have $e_3$, and $l'_3$, we can get $l_3$ since $h_u = - 2e_3$. From there, we can calculate the other charges from\n$s,s',r,r'$ using Eq.~(\\ref{eq:11}) and Eq.~(\\ref{eq:24}).\n\nThe charges calculated in this way are laid out in Table \\ref{tbl:sol-leptch-GST}.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|c|c|cc|ccccc|c|}\n \\hline\n Sol.& $r'_d$ & $k_d$ & $n_2$ & $n_3$ &$e_1$ & $e_2$ & $e_3$ & $l_1$ & $l_3$ & Fit \\\\\n \\hline \n {\\bf 1}&11& $\\frac{-5}{2}$ & $\\frac{1}{8}$&$\\frac{1}{2}$&$\\frac{187}{20}$ & $\\frac{-33}{20}$ & $\\frac{3}{5}$ & $\\frac{67}{10}$ & $\\frac{-43}{10}$ & - \\\\\n {\\bf 1}&11& $\\frac{-5}{2}$ & $\\frac{1}{4}$&$\\frac{1}{2}$&$\\frac{187}{20}$ & $\\frac{-33}{20}$ & $\\frac{3}{5}$ & $\\frac{67}{10}$ & $\\frac{-43}{10}$ & - \\\\\n {\\bf 2}&10& $\\frac{-5}{2}$ & $\\frac{1}{8}$&$\\frac{1}{2}$& $8$ & $-2$ & $0$ & $\\frac{15}{2}$ & $\\frac{-5}{2}$ & - \\\\ \n {\\bf 2}&10& $\\frac{-5}{2}$ & $\\frac{1}{4}$&$\\frac{1}{2}$&$8$ & $-2$ & $0$ & $\\frac{15}{2}$ & $\\frac{-5}{2}$ & 1 \\\\ \n {\\bf 3}&$\\frac{41}{4}$ & $-2$ &$\\frac{1}{8}$&$\\frac{1}{2}$ & $\\frac{63}{8}$ & $\\frac{9}{8}$ & $\\frac{25}{8}$ & $\\frac{51}{8}$ & $\\frac{-31}{8}$ & - \\\\\n {\\bf 3}&$\\frac{41}{4}$ & $-2$ &$\\frac{1}{4}$&$\\frac{1}{2}$ & $\\frac{63}{8}$ & $\\frac{9}{8}$ & $\\frac{25}{8}$ & $\\frac{51}{8}$ & $\\frac{-31}{8}$ & - \\\\\n\\hline\n \\end{tabular}\n \\caption{Charged lepton charges for the $SU(5)$ type solutions with $u=v=0$ satisfying the GST relation. The fits are discussed in section \\ref{sec:fitsmasses}}\n \\label{tbl:sol-leptch-GST}\n\\end{table}\n\n\n\\subsection{Solutions for the extended $SU(5)$ case with $u+v \\ne 0$}\n\\label{sec:solutions-su5-like}\n\n\nFor this class of solutions, it is clear from Eq.~(\\ref{eq:chargrelgensu5like}) and Eq.~(\\ref{eq:gyukpar}) that the quark sector results will\nbe unchanged. This happens since $s,s',r,r'$ are blind to whether the family charges are related by the $SU(5)$ relation, or the extended $SU(5)$ relation.\n$k_u$ must always be zero,\nand the parameterization happens to leave $k_d$ unchanged. 
Since $k_e$ does change, as discussed in section~\\ref{sec:genrzsu5like},\nwe need to find $k_e$ in order to know the structure of the electron Yukawa matrix.\n\n\nIt is helpful to rewrite $k_e$ and $k_d$, from the form in Eq.~(\\ref{eq:8}) by using Eqs.~(\\ref{eq:hd},~\\ref{eq:1}) and $k_u = 0$:\n\\begin{eqnarray}\nk_d&=&l_3+3e_3+u+\\frac{4}{3}m,\\nonumber\\\\\n\\label{eq:kd_ke_general}\nk_e&=&l_3+3e_3+u+m,\n\\end{eqnarray}\nwhere we have written $u+v=m$; as we will discuss in Section \\ref{sbsec:susycppr}, $m$ can be determined such that the effects of the breaking of $U(1)$ in the $\\mu$ term are of order $\\leq m_{3\\/2}$. But on the other hand we need to keep the observed relation at low energies $m_b=O(m_{\\tau})$, so either $m$ has to remain small or be negative to achieve $|k_d|=O(|k_e|)$.\nIn the present case the $Y^d$ matrix has exactly the same form as in \\eqs{eq:charggensu5like} and $Y^e$ has the form\n\\begin{eqnarray}\nY^{e}=\n\\left(\n\\begin{array}{ccc}\n\\varepsilon^{|s'+r'_{d}+k_e|}&\\varepsilon^{|s+r'_{d}+k_e|}&\\varepsilon^{|r'_d+k_e|}\\\\\n\\varepsilon^{|s'+r_{d}+k_e|}&\\varepsilon^{|s+r_{d}+k_e|}&\\varepsilon^{|r_d+k_e|}\\\\\n\\varepsilon^{|s'+k_e|}&\\varepsilon^{|s+k_e|}&\\varepsilon^{|k_e|}\n\\end{array}\n\\right).\n\\label{eq:chleptmatuvn0}\n\\end{eqnarray}\n\nWith $l_2=l_3$,\nwhich determines the solutions of the charges $e_i$ and $l_i$\ncompatible with the condition $r_d=l_2-l_3=0$, the discussion \nfollows exactly as in Section \\ref{sec:neutrino-sector-gst}, because there we referred only to the parameters \n$k_d$, $r$, $r'$, $s$ and $s'$, without specifying their relations with the charges cancelling the anomalies.\n \n\n\nIn this case, the analysis that leads to Eq.~(\\ref{eq:26}) must be repeated, but accounting for the\nfact that instead of the $SU(5)$ relation between the charges we must use the extended\n$SU(5)$ relation between the charges. In this case, we find that:\n\\begin{equation}\n  \\label{eq:27}\n  k_d = 3e_3 + l_3 + u+\\frac{4}{3}(u + v) - 2 h_u = 5 e_3 + l'_3 + \\frac{10}{3} u + \\frac{4}{3} v\n\\end{equation}\nwhere we have used that $l'_i=l_i-2e_3-u$. $l'_i$ is defined in such a way that $l_i+n_j+h_u=l'_i+n_j$. Using again the fact that $l'_3 = l'_2 = -\\frac{r'_d }{2}$, we find that:\n\\begin{equation}\n  \\label{eq:28}\n  e_3 = \\frac{1}{10}(2 k_d + r'_d - \\frac{20}{3} u - \\frac{8}{3} v )\n\\end{equation}\n\nUsing these results, and the values of $s,s',r_f, r'_f$, we can find the charges \nin Table~\\ref{tbl:sol-leptch-GSTuvne0}.\n\n\n\\begin{table}[ht]\n  \\centering\n  \\begin{tabular}{|l|c|c|cc|c|ccccccc|c|}\n    \\hline\n    Sol. 
& $r'_d$ & $k_d$ & $u$ & $v$ & $k_e$ &$e_1$ & $e_2$ & $e_3$ & $l_1$ & $l_3$ & $n_2$ & $n_3$ & Fit \\\\\n \\hline\n {\\bf 1} & 11 & $\\frac{-5}{2}$ & $-\\frac{13}{2}$ & $7$ & $\\frac{8}{3}$ &$\\frac{709}{60}$ & $\\frac{49}{60}$ \n & $\\frac{46}{15}$ & $\\frac{77}{15}$ & $\\frac{-88}{15}$ &\n $\\frac{1}{8}$ & $\\frac{1}{2}$ & - \\\\\n {\\bf 1} & 11 & $\\frac{-5}{2}$ & $\\frac{-13}{2}$ & $7$ & $\\frac{8}{3}$ &$\\frac{709}{60}$ & $\\frac{49}{60}$ \n & $\\frac{46}{15}$ & $\\frac{77}{15}$ & $\\frac{-88}{15}$ & \n $\\frac{1}{4}$ & $\\frac{1}{2}$ & - \\\\\n {\\bf 2} &10 & $\\frac{-5}{2}$ & $-13$ & $\\frac{27}{2}$ & $\\frac{8}{3}$ & $\\frac{407}{30}$ \n & $\\frac{107}{30}$ & $\\frac{167}{30}$ & $\\frac{47}{15}$ & $\\frac{-103}{15}$& \n $\\frac{1}{8}$& $\\frac{1}{2}$ &2 \\\\ \n {\\bf 2} &10 & $\\frac{-5}{2}$ & $-13$ & $\\frac{27}{2}$ & $\\frac{8}{3}$ & $\\frac{407}{30}$ \n & $\\frac{107}{30}$ & $\\frac{167}{30}$ & $\\frac{47}{15}$ & $\\frac{-103}{15}$& \n $\\frac{1}{4}$ & $\\frac{1}{2}$ &- \\\\ \n {\\bf 3} &$\\frac{41}{4}$ & $-2$ & $\\frac{-10}{3}$ & $\\frac{23}{6}$ &$\\frac{7}{3}$ & $\\frac{363}{40}$ \n & $\\frac{3}{40}$ & $\\frac{73}{40}$ & $\\frac{653}{120}$ & $\\frac{-577}{120}$& \n $\\frac{1}{8}$&$\\frac{1}{2}$ &3 \\\\\n {\\bf 3} &$\\frac{41}{4}$ & $-2$ & $\\frac{-10}{3}$ & $\\frac{23}{6}$ &$\\frac{7}{3}$ & $\\frac{363}{40}$ \n & $\\frac{3}{40}$ & $\\frac{73}{40}$ & $\\frac{653}{120}$ & $\\frac{-577}{120}$& \n $\\frac{1}{4}$&$\\frac{1}{2}$& - \\\\\n \\hline\n \\end{tabular}\n \\caption{Charged lepton charges for the extended $SU(5)$\n solutions with $m=u+v=1\/2$ satisfying the GST relation. The fits are discussed in section \\ref{sec:fitsmasses}}\n \\label{tbl:sol-leptch-GSTuvne0}\n\\end{table}\n\n\n\n\n\\section{$SU(5)$ solutions not satisfying the GST relation}\n\\label{sec:su5-solutions-not-GST}\n\\subsection{The quark sector}\n\\label{sec:quark-sector-not-GST}\nAs we can see the GST relation puts a constraint on the opposite signs\nof $s$ and $s'$ and on the difference of $r'_d=l_1-l_3$. If we do not\nimpose these requirements, allowing all the numbers $s,\\ s^\\prime,\\ r,\n\\ r^\\prime$ and $k_d$ to have the same sign, positive or negative, we\ncan factorize the $k_d$ factor out of the $Y^d$ matrix and so can\nwrite the down matrix in the form\n\\begin{eqnarray}\nY^d=\n\\varepsilon^{|k_d|}\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|s'+l_1-l_3|}&\\varepsilon^{|s'|}&\\varepsilon^{|s'|}\\\\\n\\varepsilon^{|s+l_1-l_3|}&\\varepsilon^{|s|}&\\varepsilon^{|s|}\\\\\n\\varepsilon^{|l_1-l_3|}&1&1\n\\end{array}\n\\right].\n\\end{eqnarray}\nIn this case we do not have the restriction $|s+l_1-l_3|=|s'|$ so the parameter $l_1-l_2$ is not fixed\n by these conditions. In these cases $k_d$ is not constrained so it can acquire a value in the range \n$\\sim (0,3)$ for different values of $\\tan\\beta$. 
In these cases of all positive or all negative charges, \nthe choices which reproduce quark masses and mixings are\n\\begin{eqnarray}\n\\label{eq:solsspnongst}\n|s|=2,\\ |s'|=3 \\quad {\\rm{or}} \\quad |s|=2,\\ |s'|=4.\n\\end{eqnarray}\nFor $|s|=2, \\ |s'|=3$ we have\n\\begin{eqnarray}\nY^d=\\varepsilon^{|k_d|}\n\\label{eq:ydinl1l3A}\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|3+l_1-l_3|}& \\varepsilon^{|3|}&\\varepsilon^{|3|}\\\\\n\\varepsilon^{|2+l_1-l_3|}& \\varepsilon^{|2|}& \\varepsilon^{|2|}\\\\ \n\\varepsilon^{|l_1-l_3|}&1&1\n\\end{array}\n\\right].\n\\end{eqnarray}\nFor $|s|=2, \\ |s'|=4$ we have\n\\begin{eqnarray}\n\\label{eq:ydinl1l3B}\nY^d=\\varepsilon^{|k_d|}\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|4+l_1-l_3|}& \\varepsilon^{|4|}& \\varepsilon^{|4|}\\\\\n\\varepsilon^{|2+l_1-l_3|}& \\varepsilon^{|2|}& \\varepsilon^{|2|}\\\\ \n\\varepsilon^{|l_1-l_3|}&1&1\n\\end{array}\n\\right].\n\\end{eqnarray}\nFrom \\eq{eq:ydinl1l3A} and \\eq{eq:ydinl1l3B} we can check if certain differences of leptonic charges can yield a suitable quark phenomenology. From \\eq{eq:yukeigen} we can see that in the cases of having all the charges $l_i$ and $e_i$ either positive or negative, all the terms contributing to the first eigenvalue of $Y^d$, $y_1$, will have the same power, as we mentioned earlier. So the difference $r'_d$ here is constrained to reproduce an appropriate ratio\n$m_d\\/m_s$. Let us take here for definiteness the case of positive charges (the negative charge case is completely analogous). Thus for $s=2$, $s'=3$, we have\n\\begin{eqnarray}\n\\frac{m_d}{m_s}\\sim \\frac{\\varepsilon^{3+r'_d}}{\\varepsilon^2}\\sim (\\lambda^2, \\lambda^{3\\/2})\n\\end{eqnarray}\nso in this case we have $r'_d=1,\\ 3\\/2$. \nFor the case $s=2$, $s'=4$, we have\n\\begin{eqnarray}\n\\frac{m_d}{m_s}\\sim \\frac{\\varepsilon^{4+r'_d}}{\\varepsilon^2}\\sim (\\lambda^2, \\lambda^{3\\/2})\n\\end{eqnarray}\nwe do not want $r'_d=0$ as it would give a somewhat large contribution from the $(3,1)$ element of the $Y^d$ matrix to the eigenvalues. So for this case $r'_d\\approx 1\\/2$. \nIn this case we have the following matrices for \\eq{eq:ydinl1l3A} \n\\begin{eqnarray}\n\\label{eq:sol1-2nongst}\nY^d\\!=\\!\n \\left[\\begin{array}{ccc}\n\\epsilon^{4}&\\epsilon^{3}&\\epsilon^{3}\\\\\n\\epsilon^{3}&\\epsilon^{2}&\\epsilon^{2}\\\\\n\\epsilon^{1}&1&1\n\\end{array}\n\\right]\\varepsilon^{k_d},\\\nY^d\\!=\\!\n \\left[\\begin{array}{ccc}\n\\epsilon^{9\\/2}&\\epsilon^{4}&\\epsilon^{4}\\\\\n\\epsilon^{7\\/2}&\\epsilon^{2}&\\epsilon^{2}\\\\\n\\epsilon^{3\\/2}&1&1\n\\end{array}\n\\right]\\varepsilon^{k_d},\n\\end{eqnarray}\nrespectively for $r'_d=1,\\ 3\\/2$.\nFor \\eq{eq:ydinl1l3B} we have \n\\begin{eqnarray}\n\\label{eq:sol3nongst}\nY^d\\!=\\!\n \\left[\\begin{array}{ccc}\n\\epsilon^{9\\/2}&\\epsilon^{4}&\\epsilon^{4}\\\\\n\\epsilon^{5\\/2}&\\epsilon^{2}&\\epsilon^{2}\\\\\n\\epsilon^{1\\/2}&1&1\n\\end{array}\n\\right]\\epsilon^{k_d},\n\\end{eqnarray}\nfor $r'_d=1\\/2.$ These solutions work for $k_d\\in (0,3)$, depending on the value of $\\tan\\beta$; these matrices yield acceptable phenomenology in both the charged lepton sector and the d quark sector.\n\n\\subsection{The neutrino sector}\n\\label{sec:neutrino-sector-non-GST}\n\nAs we have seen in Section \\ref{sec:quark-sector-not-GST}, in these cases $r'_d$ is constrained to be \n$r'_d\\in(1,3\\/2)$ for $(s,s')=(2,3)$ and $r'_d\\approx 1\\/2$ for $(s,s')=(2,4)$, but let us leave it \nunspecified for the moment. 
We consider here the case of all the parameters related to $l_i$ and $e_i$ \npositive. In this case we require that all the neutrino charges, $n_i$ to be negative but $\\sigma$ positive.\nWe proceed as in Section (\\ref{sec:neutgst}), in order to identify the charges $l'_i$, $n_i$ and $\\sigma$.\nIn principle we need $\\varepsilon^{|l'_1+n_2|}=\\varepsilon^{|l'_2+n_2|}$ but now we require $l'_1,l'_2\\geq 0$ so now the appropriate solution to this would be\n\\begin{eqnarray}\nl'_1=r'_d,\\ l'_2=0,\\quad n_2=\\frac{-r'_d}{2}.\n\\end{eqnarray}\nHowever in this case, as in the case of section (\\ref{sec:neutgst}), we will only be able to produce $m_{\\nu_2}\/m_{\\nu_3}\\sim \\varepsilon^2$. So we work with a solution of the form \\eq{eq:needp12}. For this case we then have \n\\begin{eqnarray}\n\\label{eq:soln2Ngst}\nl'_1=r'_d,\\ l'_2=0,\\ n_2=\\frac{p_{12}-r'_d}{2}.\n\\end{eqnarray}\nNote that in this case the charges $l_i$ are positive because $l_2=k_d-3e_3$ and\nhere $e_3=k_d$. For $t_{13}$ we also make use of the parameterization of \\eq{eq:needp13}. Assuming that $|r'_d | > |n_3|$,\n\\begin{eqnarray}\n\\label{eq:soln3Ngst}\nn_3=\\frac{p_{13}-r'_d}{2}.\n\\end{eqnarray}\nIn order to achieve an appropriate ratio for $m_{\\nu_2}\/m_{\\nu_3}$ we need now the conditions \n$2n_3+\\sigma>0$, $2n_2+\\sigma>0$, $l'_2+n_2<0$, $l'_2+n_3<0$, for one of the last two inequalities the equality can be satisfied, but not for both. For this case, we have\nalso $p_2-p_3=4(n_3-n_2)$ and using \\eq{eq:soln2Ngst} and \\eq{eq:soln3Ngst} we have\n$p_2-p_3=2(p_{13}-p_{12})$. We can also choose the parameters $p_{12}$, $p_{13}$ and $p_2-p_3$ as in\n\\eq{eq:solsn3n2} but now $n_3$ and $n_2$ are given by \\eq{eq:soln2Ngst} and \\eq{eq:soln3Ngst}.\nThus we have\n\\begin{eqnarray}\n\\nonumber\n&& p_{12}=\\frac{1}{4},\\ p_{13}=1, \\ p_2-p_3=\\frac{3}{2}\\\\\n&&\n\\rightarrow \\ n_2=\\frac{1}{8}-\\frac{r'_d}{2}<0 ,\\ n_3=\\frac{1}{2}-\\frac{r'_d}{2}<0 \\Rightarrow r'_d\\geq 1, \\label{eq:n3n2rdNgstA}\\\\ \n\\nonumber\n&&\np_{12}=\\frac{1}{2},\\ p_{13}=1, \\ p_2-p_3=1\\\\\n&&\n\\rightarrow \\ n_2=\\frac{1}{4}-\\frac{r'_d}{2}<0,\\ n_3=\\frac{1}{2}-\\frac{r'_d}{2}<0 \\Rightarrow r'_d\\geq 1. \\label{eq:n3n2rdNgstB}\n\\end{eqnarray}\nIn Section (\\ref{sec:su5q}) we determined the approximate values for $r'_d$. For $(s,s')=(2,3)$ we can have $r'_d=1,3\/2$ while for $(s,s')=(2,4)$ we have \n$r'_d\\approx 1\/2$, which however is not in agreement with the conditions of \\eq{eq:n3n2rdNgstA} and \\eq{eq:n3n2rdNgstB}. 
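
The bookkeeping above is easy to reproduce numerically. The short Python sketch below (ours, not part of the original analysis) evaluates $n_2$ and $n_3$ from \\eq{eq:soln2Ngst} and \\eq{eq:soln3Ngst} for the two choices of $(p_{12},p_{13})$ quoted above and for the values of $r'_d$ allowed by the quark sector, and checks the sign conditions that lead to $r'_d\\geq 1$:

\\begin{verbatim}
from fractions import Fraction as F

def n2_n3(p12, p13, rpd):
    # eqs. (soln2Ngst), (soln3Ngst): n2 = (p12 - r'_d)/2, n3 = (p13 - r'_d)/2
    return (p12 - rpd) / 2, (p13 - rpd) / 2

for p12, p13 in [(F(1, 4), F(1)), (F(1, 2), F(1))]:
    for rpd in [F(1, 2), F(1), F(3, 2)]:   # r'_d values allowed by the quark sector
        n2, n3 = n2_n3(p12, p13, rpd)
        ok = n2 <= 0 and n3 <= 0           # negative (or vanishing) neutrino charges
        print(f"p12={p12} p13={p13} r'd={rpd}: n2={n2} n3={n3} allowed={ok}")
\\end{verbatim}

For $(p_{12},p_{13})=(1\/4,1)$ and $r'_d=1,\\ 3\/2$ this reproduces the pairs $(n_2,n_3)=(-3\/8,0)$ and $(-5\/8,-1\/4)$ used in the tables below, while $r'_d=1\/2$ fails the sign conditions, as stated above.
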
The approximate expressions of mixings and masses in terms of the above results and the coefficients $a^\\nu_{ij}$ of $O(1)$ are as in \\eq{eq:mixmasresn}, except for $t^\\nu_{13}$ and $t^\\nu_{12}$ which now read\n\\begin{eqnarray}\n\\label{eq:mixangngst}\nt^\\nu_{13}\\ \\ = \\ \\ \\frac {a^\\nu_{13}\\varepsilon^{|r'_d+n_3|-|n_3|}}{\\sqrt{a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23} }},\\quad\nt^\\nu_{12}\\ \\ = \\ \\ \\frac{a^\\nu_{12}\\varepsilon^{(|r'_d+n_2|-|n_2|)}}{(c_{23}a^\\nu_{22}-s_{23}a^\\nu_{32})}.\\nonumber\\\\\n\\end{eqnarray}\nWe have listed the possible solutions of Table (\\ref{tbl:solNGSTMP}) for \\eq{eq:n3n2rdNgstA} at $\\langle \\Sigma\\rangle=M_P$ and in Table (\\ref{tbl:solNGSTMG}) for $\\langle \\Sigma\\rangle=M_G$.\n\\begin{table}[htbp]\n \\centering\n \\begin{tabular}{|cc|cccc|}\n \\hline\n$r'_d$ & $n_2$ &$n_3$& $p_3$ & $\\sigma$ & $M_3 [GeV]$ \\\\\n \\hline\n$1$ & $\\frac{-3}{8}$ & $0$ & $(-\\frac{9}{2},-\\frac{5}{2})$ & $(\\frac{9}{2},5)$ & $O(10^{15})$ \\\\\n$\\frac{3}{2}$ & $\\frac{-5}{8}$ & $\\frac{-1}{4}$ & $(-\\frac{17}{4},-\\frac{19}{4})$ & $(5,\\frac{11}{2})$ & $O(10^{15})$\\\\\n$1$ & $\\frac{-1}{4}$&1& (-6,-7)&(6,7)&$O(10^{15})$, $O(10^{14})$\\\\\n$\\frac{3}{2}$ & $\\frac{-1}{2}$&$\\frac{-1}{4}$& (-6,-7)&($\\frac{27}{4}$,$\\frac{31}{4}$) &$O(10^{14})$, $O(10^{15})$\\\\\n\\hline\n\\end{tabular}\n \\caption{$\\Sigma$ at $M_P$ for the solutions not satisfying the GST relation.}\n \\label{tbl:solNGSTMP}\n\\end{table}\n %\n\\begin{table}[htbp]\n \\centering\n \\begin{tabular}{|cc|cccc|}\n \\hline\n$r'_d$ & $n_2$&$n_3$ & $p_3$ & $\\sigma$ & $M_3$ [GeV] \\\\\n \\hline\n$1$ & $\\frac{-3}{8}$&$0$ & $(0,-\\frac{1}{2})$ & $(0,\\frac{1}{2})$ & $O(10^{15})$ \\\\\n$\\frac{3}{2}$ & $\\frac{-5}{8}$&$\\frac{-1}{4}$& $(-\\frac{1}{4},-\\frac{3}{4})$ &$(1,\\frac{3}{2})$ &$O(10^{15})$,\\\\\n$1$ & $\\frac{-1}{4}$&$0$&1& (-1,-2)&$O(10^{18})$\\\\\n$\\frac{3}{2}$ & $\\frac{-1}{2}$&$\\frac{-1}{4}$& (-1,-2)&($\\frac{7}{4}$,$\\frac{11}{4}$)&$O(10^{18})$,$O(10^{17})$\\\\\n \\hline\n \\end{tabular}\n \\caption{$\\Sigma$ at $M_G$ for the solutions not satisfying the GST relation.}\n \\label{tbl:solNGSTMG}\n\\end{table}\n %\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|cccccc|c|}\n \\hline\n$r'_d$ & $k_d$ &$e_1$ & $e_2$ & $e_3$ & $l_1$ & $l_3$ & Fit\\\\\n\\hline\n1&2& $\\frac{17}{5}$ & $\\frac{12}{5}$ & $\\frac{2}{5}$ & $\\frac{9}{5}$ & $\\frac{4}{5}$ & 4 \\\\\n$\\frac{3}{2}$& $2$ & $\\frac{17}{5}$ & $\\frac{12}{5}$ & $\\frac{2}{5}$ & $\\frac{7}{10}$ & $\\frac{4}{5}$ & 5\\\\\n1& $3$ & $\\frac{18}{5}$ & $\\frac{13}{5}$ & $\\frac{3}{5}$ & $\\frac{7}{10}$ & $\\frac{-3}{10}$ & -\\\\\n$\\frac{3}{2}$& $3$ & $\\frac{18}{5}$ & $\\frac{13}{5}$ & $\\frac{3}{5}$ & $\\frac{6}{5}$ & $\\frac{-3}{10}$ & -\\\\\n \\hline\n \\end{tabular}\n \\caption{Charged lepton $U(1)_X$ charges for the solutions $u=v=0$ not satisfying the GST \nrelation. The fits are discussed in section \\ref{sec:fitsmasses}}\n \\label{tbl:sol-lepchar-NGST}\n\\end{table}\n\n\n\\subsection{Solutions for the extended $SU(5)$ case with $u+v \\ne 0$}\n\\label{sec:solutions-su5-like-non-GST}\n\nWe do not present any charges for this class of solutions, but here is how the charges would be calculated.\nIn this case, the analysis is carried out in the same way as in section \\ref{sec:solutions-su5-like}. The only subtlety\nis that the relation linking $l'_2$ to $r'_d$ is different. 
Instead, we have, from Eq.~(\\ref{eq:soln2Ngst}) that $l'_2 = l'_3 = 0$.\nPutting this result into Eq.~(\\ref{eq:27}), we achieve:\n\\begin{equation}\n \\label{eq:29}\n e_3 = \\frac{1}{10}(2 k_d + \\frac{4}{3} u - \\frac{8}{3} v ).\n\\end{equation}\n\nFrom $e_3$ and $l'_3$, the other charges may be calculated using the known values of $s,s',r_d,r'_d,u \\, \\mathrm{and}\\, v$,\nby using the extended $SU(5)$ charge relations, Eq.~(\\ref{eq:1}) and the simplified parametrization, Eq.~(\\ref{eq:16}).\n\n\n\\section{The non-$SU(5)$ Cases}\n\\label{sec:non-su5-cases}\n\n\\subsection{Solutions for $u=v=0$, in the Pati-Salam case\\label{sec:patisalamq}}\nWith $l_2=l_3$, in this case we have $s=q_2-q_3=l_2-l_3=0$ then the charges of the matrices are\n\\begin{eqnarray}\nC(Y^{u, \\nu})&=&\n \\left[\n\\begin{array}{ccc}\n l_1-l_3+e_1-e_3 & l_1-l_3+e_2-e_3 & l_1-l_3 \\\\\n e_1-e_3 & e_2-e_3 & 0\\\\\n e_1-e_3 & e_2-e_3 & 0\n\\end{array}\n\\right]\\nonumber\\\\\nC(Y^{d, l})&=& \n \\left[\n \\begin{array}{ccc}\n l_1 + l_3 + e_1 + e_3 & l_1 + l_3 + e_2 + e_3 & l_1 + l_3 + 2e_3 \\\\\n 2l_3 + e_1 + e_3 & 2l_3 + e_2 + e_3 & 2l_3 + 2e_3 \\\\\n 2l_3 + e_1 + e_3 & 2l_3 + e_2 + e_3 & 2l_3 + 2e_3\n \\end{array}\n\\right]\n\\end{eqnarray}\nIn this case the $U(1)_X$ symmetry does not give an appropriate description of fermion masses and mixings, however it can be combined with non-renormalizable operators of the Pati-Salam group, \\cite{King:2000ge}, in order to give a good description of the fermion phenomenology.\n\\subsection{Solutions for $u+v=0$}\nOne trivial example of non- $SU(5)$ cases was given in section (\\ref{sec:yukawa-textures-uv-nonzero}) for the solution $u+v=0$. \nWe proceed as in the section (~\\ref{sec:su5q})- in order to analyze the appropriate phenomenology. We are interested \nin the cases $l_2=l_3$, this together with the condition of $O(1)$ top Yukawa coupling give us the following matrices of charges, \nwhich are derived with the appropriate substitutions in \\eq{eq:yu-umvn0}-\\eq{eq:ye-umvn0},\n\\begin{eqnarray}\nC(Y^d)&=&\n \\left[\n\\begin{array}{ccc}\n l_1 + e_1 &\n \\frac{4(l_3-l_1)}{3} + e_1 - e_3 &\n e_2 - e_3 \\\\\n \\frac{l_3-l_1}{3} + e_2 - e_3 &\n e_2 - e_3 &\n e_2 - e_3 \\\\\n \\frac{l_3-l_1}{3} &\n 0 &\n 0\n\\end{array}\n\\right],\\nonumber\\\\\nC(Y^u)&=&\n \\left[\n\\begin{array}{ccc}\n l_1 + e_1 &\n \\frac{2(l_3-l_1)}{3} + e_3 - e_1 &\n \\frac{2(l_3-l_1)}{3} + e_3 - e_1 \\\\\n \\frac{l_3 - l_1}{3} + e_3 - e_3 &\n e_3 - e_2 &\n e_3 - e_2 \\\\\n \\frac{l_3 - l_1}{3} &\n 0 & 0\n\\end{array}\n\\right],\\nonumber\\\\\nC(Y^e)&=&\n \\left[\n\\begin{array}{ccc}\n l_1+e_1 & l_1 + e_2 & l_1 + e_3 \\\\\n e_2-e_3 & e_2 - e_3 & 0 \\\\\n e_2-e_3 & e_2 - e_3 & 0\n\\end{array}\n\\right].\\nonumber\\\\\n\\end{eqnarray}\nDue to the form of the charges in the up and down quark matrices, first at all we would need two expansion parameters: $\\epsilon^u$ and $\\epsilon^d$. But with this structure alone it is not possible to account simultaneously for appropriate mass ratios of the second to third family of quarks and for an appropriate $V_{cb}$ mixing. So in this case just with a $U(1)$ it is not possible to explain fermion masses and mixings in the context of the single neutrino right-handed dominance, SNRHD. 
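
The charge matrices quoted in this section follow from straightforward bookkeeping. As a cross-check, the short Python sketch below (ours; the input charges are purely illustrative and do not correspond to a solution of the text) reconstructs $C(Y^{u,\\nu})$ and $C(Y^{d,l})$ of the Pati-Salam case of Section \\ref{sec:patisalamq} as sums of left- and right-handed charges, using $l_2=l_3$:

\\begin{verbatim}
import numpy as np

def pati_salam_charges(l, e):
    # l = (l_1, l_2, l_3) with l_2 = l_3, e = (e_1, e_2, e_3)
    l = np.array(l, dtype=float)
    e = np.array(e, dtype=float)
    C_u = (l - l[2])[:, None] + (e - e[2])[None, :]   # C(Y^{u,nu})_ij
    C_d = (l + l[2])[:, None] + (e + e[2])[None, :]   # C(Y^{d,l})_ij
    return C_u, C_d

# purely illustrative charges
C_u, C_d = pati_salam_charges(l=[2.0, 0.5, 0.5], e=[3.0, 2.0, 1.0])
print(C_u)
print(C_d)
\\end{verbatim}
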
\n \n\n\n\\section{Numerical fits of masses and mixings}\n\\label{sec:fitsmasses}\n\\subsection{Fitted examples}\nIn this section we present numerical fits to some of the examples detailed in Sections \n(\\ref{sec:su5-solut-satisfy-GST},\\ref{sec:su5-solutions-not-GST}) and we compare the results \nwith a fit of a generic $SU(3)$-like case \\cite{King:2001uz}. \nThe simplest way to construct the lepton Yukawa matrices from the charges is to first calculate $h_{u,d}$. We extract $h_d$ from $k_e$, $l_3$ and $e_3$ from $k_e = l_3 + e_3 + h_d$. In general, we can use Eq.~(\\ref{eq:kd_ke_general}) to obtain:\n\\begin{equation}\n \\label{eq:22}\n h_u + h_d = m = 3(k_d - k_e)\n\\end{equation}\nThis is then enough to construct the lepton Yukawas from the appropriate line of the table (\\ref{tbl:sol-leptch-GST} or \\ref{tbl:sol-leptch-GSTuvne0}) of the lepton and Yukawa family charges. Below we specify the examples that we have chosen to fit.\n\\subsubsection*{Fit 1: $SU(5)$ type solution ($u=v=0$): example satisfying the GST relation}\n\\label{sec:fit-1}\n\nThis takes GST solution 2, (Eq.~(\\ref{eq:tex2gst})) in the $SU(5)$ type case, with $u=v=0$. The charges $l_i$, $e_i$, and $n_{2,3}$ are read off from the\nfourth line of Table \\ref{tbl:sol-leptch-GST}. In principle, the value of $\\sigma$ would be read off from either Table \\ref{tbl:solGSTMG} ( for neutrino\nmasses generated at the GUT scale ) or Table \\ref{tbl:solGSTMP} ( for neutrino masses geneated at the Planck scale). However, these tables allow for\na range of $\\sigma$; for this fit, we take $\\sigma = 21\/2$ for GUT scale neturino mass generation, and $\\sigma = 29\/2$ for Planck scale neutrino mass\ngeneration.\n\nThen, up to $\\sigma$ and $n_1: -\\sigma\/2 \\le n_1 \\le 0$, the Yukawa and Majorana matrices are:\n\n\\begin{eqnarray}\n \\label{eq:5}\n Y^u \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{16} & a^u_{12} \\epsilon^{6} & a^u_{13} \\epsilon^{8} \\\\\n a^u_{21} \\epsilon^{6} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{8} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n\\quad \\quad \\quad\n Y^d \\ = \\ \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{31\/2} & a^d_{12} \\epsilon^{11\/2} & a^d_{13} \\epsilon^{11\/2} \\\\\n a^d_{21} \\epsilon^{11\/2} & a^d_{22} \\epsilon^{9\/2} & a^d_{23} \\epsilon^{9\/2} \\\\\n a^d_{31} \\epsilon^{15\/2} & a^d_{32} \\epsilon^{5\/2} & a^d_{33} \\epsilon^{5\/2}\n \\end{array}\n \\right]\n \\nonumber \\\\\n\\label{eq:7}\n Y^e \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{31\/2} & a^e_{12} \\epsilon^{11\/2} & a^e_{13} \\epsilon^{15\/2} \\\\\n a^e_{21} \\epsilon^{11\/2} & a^e_{22} \\epsilon^{9\/2} & a^e_{23} \\epsilon^{5\/2} \\\\\n a^e_{31} \\epsilon^{11\/2} & a^e_{32} \\epsilon^{9\/2} & a^e_{33} \\epsilon^{5\/2}\n \\end{array}\n \\right]\\!,\n \\\n Y^\\nu = \n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 + 5\/2|} & a^\\nu_{12} \\epsilon^{11\/4} & a^\\nu_{13} \\epsilon^{12\/4} \\\\\n a^\\nu_{21} \\epsilon^{|n_1 - 15\/2|} & a^\\nu_{22} \\epsilon^{29\/4} & a^\\nu_{23} \\epsilon^{28\/4} \\\\\n a^\\nu_{31} \\epsilon^{|n_1 - 15\/2|} & a^\\nu_{32} \\epsilon^{29\/4} & a^\\nu_{33} \\epsilon^{28\/4}\n \\end{array}\n \\right]\n \\nonumber \\\\\n \\label{eq:9}\n M_{RR}\\! &=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1+\\sigma|} & \\epsilon^{|1\/4 + n_1 + \\sigma|} & \\epsilon^{|1\/2+ n_1 + \\sigma|} \\\\\n . & a^N_{22} \\epsilon^{|1\/2 + \\sigma| } & \\epsilon^{|3\/4 + \\sigma|} \\\\\n . 
& . & \\epsilon^{|1 + \\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>\n\\end{eqnarray}\n\n\n\\subsubsection*{Fit 2: Extended $SU(5)$ solution ($u+v \\ne 0$) satisfying the GST relation}\n\\label{sec:fit-2}\n\nThis takes GST solution 2, (Eq. (\\ref{eq:tex2gst})), in the extended $SU(5)$ case with $u+v\\ne0$. The charges $l_i$, $e_i$ and $n_{2,3}$ are read\noff from the third line of Table \\ref{tbl:sol-leptch-GSTuvne0}. The values of $\\sigma$ taken are $\\sigma = 19\/2$, $\\sigma = 29\/2$ for GUT scale\nand Planck scale neutrino mass generation respectively. Again, $n_1$ is taken to lie in the region $-\\sigma\/2 \\le n_1 \\le 0$.\n{\\footnote{The difference between Fit 1 and Fit 2 is that the charges (Tables (\\ref{tbl:sol-leptch-GST}) and \n(\\ref{tbl:sol-leptch-GSTuvne0}) respectively) are determined in a different way and hence the value of the \neffective parameter expansion $\\varepsilon$ is different}. \n\\begin{eqnarray}\n \\label{eq:10}\n Y^u \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{16} & a^u_{12} \\epsilon^{6} & a^u_{13} \\epsilon^{8} \\\\\n a^u_{21} \\epsilon^{6} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{8} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d \\ = \\ \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{31\/2} & a^d_{12} \\epsilon^{11\/2} & a^d_{13} \\epsilon^{11\/2} \\\\\n a^d_{21} \\epsilon^{11\/2} & a^d_{22} \\epsilon^{9\/2} & a^d_{23} \\epsilon^{9\/2} \\\\\n a^d_{31} \\epsilon^{15\/2} & a^d_{32} \\epsilon^{5\/2} & a^d_{33} \\epsilon^{5\/2}\n \\end{array}\n \\right]\n \\nonumber \\\\\n\\label{eq:3}\n Y^e &=& \n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{46\/3} & a^e_{12} \\epsilon^{16\/3} & a^e_{13} \\epsilon^{22\/3} \\\\\n a^e_{21} \\epsilon^{16\/3} & a^e_{22} \\epsilon^{14\/3} & a^e_{23} \\epsilon^{8\/3} \\\\\n a^e_{31} \\epsilon^{16\/3} & a^e_{32} \\epsilon^{14\/3} & a^e_{33} \\epsilon^{8\/3}\n \\end{array}\n \\right]\\!,\n \\\n Y^\\nu =\n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 +5|} & a^\\nu_{12} \\epsilon^{\\frac{41}{8}} & a^\\nu_{13} \\epsilon^{\\frac{11}{2}} \\\\\n a^\\nu_{21} \\epsilon^{|n_1 - 5|} & a^\\nu_{22} \\epsilon^{\\frac{39}{8}} & a^\\nu_{23} \\epsilon^{\\frac{9}{2}} \\\\\n a^\\nu_{31} \\epsilon^{|n_1 -5|} & a^\\nu_{32} \\epsilon^{\\frac{39}{8}} & a^\\nu_{33} \\epsilon^{\\frac{9}{2}}\n \\end{array}\n \\right]\n \\nonumber\\\\\n M_{RR} \\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1+\\sigma|} & \\epsilon^{|1\/8 + n_1 + \\sigma|} & \\epsilon^{|1\/2 + n_1+\\sigma|} \\\\\n . & a^N_{22} \\epsilon^{|1\/4+\\sigma|} & \\epsilon^{|5\/8+\\sigma|} \\\\\n . & . & \\epsilon^{|1+\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>\n\\end{eqnarray}\n\n\n\\subsubsection*{Fit 3: Extended $SU(5)$ solution ($u+v\\neq 0$), satisfying the GST relation}\n\\label{sec:fit-3}\n\nThis takes GST solution 3, ( Eq. (\\ref{eq:sol3gst})), in the extended $SU(5)$ case with $u+v\\ne0$. The charges $l_i$, $e_i$ and $n_{2,3}$ are read off from\nthe fifth line of table \\ref{tbl:sol-leptch-GSTuvne0}. The values of $\\sigma$ taken are $\\sigma = 39\/4$, $\\sigma = 55\/4$ for GUT and Planck scale neutrino\nmass generation respectively. $n_1$ lies in the region $-\\sigma\/2 \\le n_1 \\le 0$.\n\\begin{eqnarray}\n\\label{eq:18}\nY^u \\! &=&\\! 
\n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{38\/4} & a^u_{12} \\epsilon^{22\/4} & a^u_{13} \\epsilon^{29\/4}\\\\\n a^u_{21} \\epsilon^{22\/4} & a^u_{22} \\epsilon^{14\/4} & a^u_{23} \\epsilon^{7\/4}\\\\\n a^u_{31} \\epsilon^{29\/4} & a^u_{32} \\epsilon^{7\/4} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d = \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{62\/4} & a^d_{12} \\epsilon^{21\/4} & a^d_{13} \\epsilon^{21\/4}\\\\\n a^d_{21} \\epsilon^{21\/4} & a^d_{22} \\epsilon^{15\/4} & a^d_{23} \\epsilon^{15\/4}\\\\\n a^d_{31} \\epsilon^{33\/4} & a^d_{32} \\epsilon^{8\/4} & a^d_{33} \\epsilon^{8\/4}\n \\end{array}\n \\right]\n \\nonumber\\\\\n Y^e &=& \n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{46\/3} & a^e_{12} \\epsilon^{19\/3} & a^e_{13} \\epsilon^{97\/12}\\\\\n a^e_{21} \\epsilon^{61\/12} & a^e_{22} \\epsilon^{47\/12} & a^e_{23} \\epsilon^{13\/6}\\\\\n a^e_{31} \\epsilon^{61\/12} & a^e_{32} \\epsilon^{47\/12} & a^e_{33} \\epsilon^{13\/6}\n \\end{array}\n \\right]\\!,\n \\\n Y^\\nu =\n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1+\\frac{41}{8}|} & a^\\nu_{12} \\epsilon^{\\frac{21}{4}} & a^\\nu_{13} \\epsilon^{\\frac{45}{8}}\\\\\n a^\\nu_{21} \\epsilon^{|n_1-\\frac{41}{8}} & a^\\nu_{22} \\epsilon^{5} & a^\\nu_{23} \\epsilon^{\\frac{37}{8}}\\\\\n a^\\nu_{31} \\epsilon^{|n_1-\\frac{41}{8}} & a^\\nu_{32} \\epsilon^{5} & a^\\nu_{33} \\epsilon^{\\frac{37}{8}}\n \\end{array}\n \\right]\n \\nonumber \\\\\n \\label{eq:17}\n M_{RR} \\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1+\\sigma|} & \\epsilon^{|1\/8 + n_1 + \\sigma|} & \\epsilon^{|1\/2 + n_1+\\sigma|} \\\\\n . & a^N_{22}\\epsilon^{|1\/4+\\sigma|} & \\epsilon^{|5\/8+\\sigma|} \\\\\n . & . & \\epsilon^{|1+\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>\n\\end{eqnarray}\n\\subsubsection*{Fit 4: $SU(5)$ ($u=v=0$) solution not satisfying the GST relation}\nHere we present a solution non satisfying the GST relation of the form of \\eq{eq:ydinl1l3A} for $l_1-l_3=1$, which corresponds to the set of charges of the first line of Table (\\ref{tbl:sol-lepchar-NGST}). We also fix here the expansion parameter $\\varepsilon=0.19$, using the FI term. The high energy Yukawa and Majorana matrices are:\n\\begin{eqnarray}\nY^u \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{6} & a^u_{12} \\epsilon^{5} & a^u_{13} \\epsilon^{3} \\\\\n a^u_{21} \\epsilon^{5} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{3} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d \\! = \\! \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{4} & a^d_{12} \\epsilon^{3} & a^d_{13} \\epsilon^{3} \\\\\n a^d_{21} \\epsilon^{3} & a^d_{22} \\epsilon^{2} & a^d_{23} \\epsilon^{2} \\\\\n a^d_{31} \\epsilon & a^d_{32} & a^d_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|}\\nonumber\\\\\n Y^e\\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{4} & a^e_{12} \\epsilon^{3} & a^e_{13} \\epsilon \\\\\n a^e_{21} \\epsilon^{3} & a^e_{22} \\epsilon^{2} & a^e_{23} \\\\\n a^e_{31} \\epsilon^{3} & a^e_{32}\\epsilon^{2} & a^e_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|},\n\\quad \n Y^\\nu \\!=\\! 
\n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 + 1|} & a^\\nu_{12} \\epsilon^{5\/8} & a^\\nu_{13} \\epsilon \\\\\n a^\\nu_{21} \\epsilon^{|n_1-3\/8|} & a^\\nu_{22} \\epsilon^{3\/8} & a^\\nu_{23} \\\\\n a^\\nu_{31} \\epsilon^{|n_1|} & a^\\nu_{32} \\epsilon^{3\/8} & a^\\nu_{33} \n \\end{array}\n \\right],\n \\nonumber \\\\\n M_{RR}\\! &=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1 + \\sigma|} & \\epsilon^{|-3\/8 + n_1 + \\sigma|} & \\epsilon^{|n_1+\\sigma|} \\\\\n . & a^N_{22}\\epsilon^{|-3\/4 + \\sigma|} & \\epsilon^{|-3\/8 + \\sigma|} \\\\\n . & . &\\epsilon^{|\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>.\n\\label{eq:31}\n\\end{eqnarray}\n\\subsubsection*{Fit 5: $SU(5)$ ($u=v=0$) solution not satisfying the GST relation}\nHere we present another solution non satisfying the GST relation of the form of \\eq{eq:ydinl1l3A} for $l_1-l_3=3\/2$, which corresponds to the set of charges of the second line of Table (\\ref{tbl:sol-lepchar-NGST}). We also fix here the expansion parameter $\\varepsilon=0.185$, using the FI term. The high energy Yukawa and Majorana matrices are:\n\\begin{eqnarray}\nY^u \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{6} & a^u_{12} \\epsilon^{5} & a^u_{13} \\epsilon^{3} \\\\\n a^u_{21} \\epsilon^{5} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{3} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d \\! = \\! \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{9\/2} & a^d_{12} \\epsilon^{3} & a^d_{13} \\epsilon^{3} \\\\\n a^d_{21} \\epsilon^{7\/2} & a^d_{22} \\epsilon^{2} & a^d_{23} \\epsilon^{2} \\\\\n a^d_{31} \\epsilon^{3\/2} & a^d_{32} & a^d_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|}\\nonumber\\\\\n Y^e\\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{9\/2} & a^e_{12} \\epsilon^{7\/2} & a^e_{13} \\epsilon^{3\/2} \\\\\n a^e_{21} \\epsilon^{3} & a^e_{22} \\epsilon^{2} & a^e_{23} \\\\\n a^e_{31} \\epsilon^{3} & a^e_{32}\\epsilon^{2} & a^e_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|},\n\\quad \n Y^\\nu \\!=\\! \n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 + 1|} & a^\\nu_{12} \\epsilon^{5\/8} & a^\\nu_{13} \\epsilon \\\\\n a^\\nu_{21} \\epsilon^{|n_1-3\/8|} & a^\\nu_{22} \\epsilon^{3\/8} & a^\\nu_{23} \\\\\n a^\\nu_{31} \\epsilon^{|n_1|} & a^\\nu_{32} \\epsilon^{3\/8} & a^\\nu_{33} \n \\end{array}\n \\right],\n \\nonumber \\\\\n M_{RR}\\! &=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1 + \\sigma|} & \\epsilon^{|-5\/8 + n_1+\\sigma|} & \\epsilon^{|-1\/4+n_1+\\sigma|} \\\\\n . & a^N_{22}\\epsilon^{|-5\/4+\\sigma|} & \\epsilon^{|-7\/8+\\sigma|} \\\\\n . & . 
&\\epsilon^{|-1\/2+\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>.\n\\label{eq:32}\n\\end{eqnarray}\n\\subsection{Details of the fitting method}\n\\label{sec:deta-fitt-meth}\nOne of the purposes of these fits is to compare which solution fits the data best while constraining the arbitrary coefficients to remain at $O(1)$.\nWe therefore choose a minimization routine to find these $O(1)$ coefficients and compare the numerical values for the different solutions.\n In the quark sector we use eight experimental inputs in order to determine the parameters (coefficients or phases):\n\\begin{eqnarray}\n\\label{eq:fitparquarks}\nV_{ub}\/V_{cb}, \\quad V_{td}\/V_{ts}, \\quad V_{us},\\quad {\\rm{Im}}\\{J\\},\\quad m_u\/m_c,\\quad m_c\/m_t,\\quad\nm_d\/m_s,\\quad m_s\/m_b.\n\\end{eqnarray}\nWe explain in the Appendix (\\ref{ap:compinf}) how this fit is performed; the important point is that we can only fit eight parameters and the rest need to be fixed. The minimization algorithm has been optimized to fit the solutions satisfying the GST relation because the number of parameters is close to eight. We also fit examples of the non GST solutions, but since there are more free parameters in these cases (mainly phases) it is impractical to make a fit that requires fixing so many free parameters. So we present particular examples in these cases which do not necessarily correspond to the best $\\chi^2$.\n\nIn the lepton sector we perform two fits, one for the coefficients of the charged lepton mass matrix and the other for the coefficients of the neutrino mass matrix. We do not perform a combined fit for the coefficients of $Y^\\nu$ and $Y^e$ because the uncertainties in these sectors are quite different. While the uncertainties in the masses of the charged leptons are very small, the uncertainties in lepton mixings and quantities related to neutrino masses are still large, such that we cannot determine the parameters involved to a very good accuracy.\n\nThe quantities used for the fit of the coefficients of the charged lepton mass matrix are\n\\begin{eqnarray}\n\\frac{m_e}{m_\\mu},\\quad \\frac{m_\\mu}{m_\\tau},\n\\end{eqnarray}\nsuch that we can just determine two parameters, $a^e_{12}$ and $a^e_{22}$, but for the cases presented here this is enough.\nIn order to do the fit for the coefficients of the neutrino mass matrix we use the observables\n\\begin{eqnarray}\n\\label{eq:fitparneuts}\nt^l_{23},\\quad t^l_{13},\\quad t^l_{12},\\quad \\frac{|m_{\\rm{sol}}|}{|m_{\\rm{atm}}|},\\quad m_{\\nu_3}\n\\end{eqnarray}\nwhere we relate $t^l_{23}$ to the atmospheric mixing, $t^l_{12}$ to the solar mixing and $t^l_{13}$ to the reactor mixing. In this case we are only able to fit five parameters. For this reason, and because the uncertainties in the above observables are significantly bigger than the uncertainties in the quark sector, the fits of the coefficients of the neutrino mass matrix have large errors and they may leave room for other solutions once the experimental uncertainties improve. Since we only have an upper bound for the reactor angle, $t^l_{13}$, we fit the solutions in the neighborhood of this upper bound.\n\\subsection{Results of the fits}\n\\subsubsection{Fit 1: $SU(5)$ ($u=v=0$) example satisfying the GST relation}\n\nThis is a $SU(5)$ type solution, and hence $u=v=0$, which satisfies the GST relation. The textures are as laid out in Eq.~(\\ref{eq:9}). 
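
To make the procedure concrete, the following Python sketch illustrates the kind of minimization described above, applied to the quark sector of this solution (textures of Eq.~(\\ref{eq:9}), expansion parameter $\\varepsilon=0.183$ as quoted below). It is emphatically not the code behind the quoted results: the observables are obtained here by numerically diagonalizing the Yukawa matrices rather than from the analytic expressions, phases are treated only schematically (a single phase $\\Phi_2$ on $Y^d_{12}$), ${\\rm Im}\\{J\\}$ is omitted, and the target central values and errors are illustrative placeholders to be replaced by the actual inputs of \\eq{eq:fitparquarks}:

\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

eps = 0.183   # expansion parameter of this solution

def yukawas(p):
    # O(1) coefficients left free in this toy version; fixed ones set to 1
    au12, au23, ad22, ad12, ad13, ad23, ad32, cphi2 = p
    phase = np.exp(1j*np.arccos(np.clip(cphi2, -1.0, 1.0)))
    Yu = np.array([[eps**16,     au12*eps**6, eps**8],
                   [au12*eps**6, eps**4,      au23*eps**2],
                   [eps**8,      au23*eps**2, 1.0]], dtype=complex)
    Yd = np.array([[eps**15.5,     ad12*phase*eps**5.5, ad13*eps**5.5],
                   [ad12*eps**5.5, ad22*eps**4.5,       ad23*eps**4.5],
                   [eps**7.5,      ad32*eps**2.5,       eps**2.5]], dtype=complex)
    return Yu, Yd

def observables(p):
    Yu, Yd = yukawas(p)
    Lu, su, _ = np.linalg.svd(Yu)       # Y = L diag(s) R^dagger, s descending
    Ld, sd, _ = np.linalg.svd(Yd)
    o = [2, 1, 0]                       # reorder: lightest family first
    V = Lu[:, o].conj().T @ Ld[:, o]    # CKM-like mixing matrix
    su, sd = su[o], sd[o]
    return {"Vus": abs(V[0, 1]), "Vub/Vcb": abs(V[0, 2]/V[1, 2]),
            "Vtd/Vts": abs(V[2, 0]/V[2, 1]),
            "mu/mc": su[0]/su[1], "mc/mt": su[1]/su[2],
            "md/ms": sd[0]/sd[1], "ms/mb": sd[1]/sd[2]}

# placeholder (central value, error) pairs -- replace by the real inputs
targets = {"Vus": (0.22, 0.01), "Vub/Vcb": (0.09, 0.02), "Vtd/Vts": (0.20, 0.04),
           "mu/mc": (0.003, 0.002), "mc/mt": (0.004, 0.002),
           "md/ms": (0.05, 0.02), "ms/mb": (0.02, 0.01)}

def chi2(p):
    pred = observables(p)
    return sum(((pred[k] - c)/s)**2 for k, (c, s) in targets.items())

p0 = np.array([1., 1., 1., 1., 1., 1., 1., 0.45])   # start from O(1) values
res = minimize(chi2, p0, method="Nelder-Mead")
print(res.fun, res.x)
\\end{verbatim}

A derivative-free minimizer is used here simply because the $\\chi^2$ is cheap to evaluate; the actual analysis fixes the remaining coefficients and phases as described in the following subsections.
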
\n\n\\subsubsection*{Quark sector}\nWe can use the expressions \\eq{eq:yukeigen} and \\eq{eq:mixsgeral} adapted to the solution of \\eq{eq:sol2gst} in order to fit the Yukawa coefficients, \nalong with the appropriate phases entering into the expressions of mixings. The expansion parameter $\\varepsilon$ is determined with the Fayet-Iliopoulos term \nand the appropriate charges cancelling the anomalies, for this case its value is $\\varepsilon=0.183$. The parameters that we \nfit are the real parameters\n\\begin{eqnarray}\n\\label{eq:fitpargst}\na^u_{12},\\quad a^u_{23},\\quad a^d_{22},\\quad a^d_{12},\\quad a^d_{13},\\quad a^d_{23},\\quad a^d_{32},\\quad \\cos(\\Phi_2),\n\\end{eqnarray}\nwhich enter in the expressions of mixings and masses, \\eq{eq:yukeigen}-\\eq{eq:Vsasyu1}. Note that in these expressions the coefficients $a^f_{ij}$ can be complex but for the fit we choose them real and write down explicitly the phases. We are free to choose the parameters to fit. However we need to check which are the most relevant parameters to test the symmetry. Thus we follow this as a guideline to choose the parameters to fit and leave other parameters fixed. Due to the form of \\eq{eq:sol2gst} the mixing angles in the $(2,3)$ sector of both matrices contribute at the same order in the $V_{\\rm{CKM}}$ matrix mixing, $s^Q_{23}=|a^d_{23}-a^u_{23}e^{i\\Phi_{X_{23}}}|\\varepsilon^2$, so we have decided to put a phase here. In the $s^u_{12}$ diagonalization angle and the second eigenvalue of $Y^u$ the combination $a^u_{22}e^{i\\Phi_3}-a^{u\\ 2}_{23}$ appears, so we have chosen as well to include a phase difference there. The fixed parameters are then\n\\begin{eqnarray}\n\\label{eq:fixpargst}\na^u_{22},\\quad \\Phi_1, \\quad \\Phi_3,\\quad \\Phi_{X_{23}},\n\\end{eqnarray}\nwhere $\\Phi_1$ has the form of \\eq{eq:phi1} and the phases $\\Phi_3$ and $\\Phi_{X_{23}}$ can be written as {\\footnote{In terms of the $\\beta_i$ phases appearing in the diagonalization matrices, \\eq{eq:pardimatL}, we have $\\Phi_1=-\\beta^{u \\ L}_3$, $\\Phi_2=-\\beta^{d\\ L}_3$ and $\\Phi_{X_{23}}=(\\beta^{d\\ L}_2-\\beta^{d\\ L}_1)-(\\beta^{u\\ L}_2-\\beta^{u\\ L}_1)$.}}\n\\begin{eqnarray}\n\\label{eq:phi3}\n\\Phi_3=\\phi^u_{22}-2\\phi^u_{23},\\quad \\Phi_{X_{23}}=(\\phi^d_{33}-\\phi^d_{23})-(\\phi^u_{33}-\\phi^u_{23}) .\n\\end{eqnarray}\nThe results of the fit in the quark sector appear in the second column of Table (\\ref{tabl:sol2gst}). \n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l||l|l|}\n\\hline\n\\multicolumn{4}{|c|}{{Quark Fitted Parameters}}\\\\ \\hline\n\\multicolumn{2}{|c|}{{GST sol. 2}}&\n\\multicolumn{1}{|c|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{1}{|c|}{{GST sol. 
3, $u,v\\neq 0$}}\n\\\\ \\hline\nParameter & BFP Value & BFP Value & BFP Value \\\\\n$a^{u}_{12}$& $2.74\\pm 0.61$ & $1.04\\pm 0.19$ & $2.74\\pm 0.71$\\\\\n$a^u_{23}$& $1.68\\pm 0.17 $ & $1.34\\pm 0.13$ & $1.41\\pm 0.18$ \\\\\n$a^d_{22}$& $1.08\\pm 0.18$ & $1.05\\pm 0.11$ & $0.70\\pm 0.23$ \\\\\n$a^d_{12}$& $0.93\\pm 0.15$ & $0.55\\pm 0.20$ & $0.74\\pm 0.13$ \\\\\n$a^d_{13}$& $0.29\\pm 0.21$ & $0.30\\pm 0.14$ & $0.74\\pm 0.17$\\\\\n$a^d_{23}$& $0.79\\pm 0.10$ & $0.70\\pm 0.13$& $0.66\\pm 0.35$\\\\\n$a^d_{32}$& $0.48\\pm 0.17$ & $1.28\\pm 0.32$ & $1.28\\pm 0.58$\\\\\n$\\cos(\\Phi_2)$& $0.454\\pm 0.041$ & $0.456\\pm 0.041$ & $0.547\\pm 0.424$\\\\ \\hline\n\\multicolumn{4}{|c|}{{Quark Fixed Parameters}} \\\\ \\hline\n$\\varepsilon$ & $0.183$ & $0.217$ & $0.154$ \\\\\n$a^{u}_{22}$& $1$& $1$ & $1.4$ \\\\\n$\\cos(\\Phi_3)$ & $0.8$ & $0.83$ & $0.8$\\\\\n$\\cos(\\Phi_{X_{23}})$& $1$ & $1$ & $1$\\\\\n$\\Phi_1$ & $\\pi\/2$ & $\\pi\/2$ & $\\pi\/2$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} & \\\\ \\hline\n$\\chi^2$ & $1.47$ & $2.41$ & $4.32$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Quark fitted parameters for the examples of Section \\ref{sec:su5-solut-satisfy-GST}). The second column corresponds to \nthe Solution 2 in the $SU(5)$ ($u=v=0$) case, the third column to the Solution 2 in the $u \\ne -v \\neq 0$ case. \nThe fourth column presents the fit to the Solution 3 in the $u\\neq -v \\neq 0$ case.}}\n\\label{tabl:sol2gst}\n\\end{table}\nGiven these results we can think that the structure of Yukawa matrices has the following form\n\\begin{eqnarray}\nY^u= \\left[\n\\begin{array}{ccc}\n* &y_{12}e^{i\\Phi_1}&y_{13}\\\\\ny_{12}e^{i\\Phi_1}&y_{22}e^{i\\Phi_3}&y_{23}\\\\\ny_{13} &y_{23}&1 \n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n*& y_{12}e^{i\\Phi_2} & y_{13}e^{i\\Phi_2}\\\\\n\\left[y_{21}e^{i\\Phi^R_2}\\right] & y_{22} & y_{23}\\\\\n*&y_{32}&1\n\\end{array}\n\\right],\n\\end{eqnarray}\nwhere $y_{ij}$ denote real elements and we have associated the phases $\\Phi_i$ to particular elements of the matrices. Note that we need three phases to determine the amount of CP violation experimentally required because in all the fits we found $\\Phi_{X_{23}}=0$. If this phase was not zero then it could have been associated to the $Y^d_{23}$ element. The entries marked with $*$ cannot be determined because they are not restricted by masses and mixings, due to the structure of the Yukawa matrices. The value of $y_{21}e^{i\\Phi^R_2}$ is determined indirectly because we need to satisfy the GST relation so $t^R_{12}=t^L_{12}$ for both up and quark sectors.\n\\subsubsection*{Lepton sector}\nWe have fixed the coefficients of $Y^d$ in the quark sector and now we can use the results for the charged lepton matrix $Y^e$. The masses of the charged lepton are obtained through the $SU(5)$ relations, ensuring the correct value of charged lepton masses, once the masses of the d-quarks are in agreement with experimental information. Thus in this case we perform a fit just for coefficients of the neutrino mass matrix, $Y^\\nu$, using the ratio of neutrino mass differences (solar to atmospheric), the mass of the heaviest neutrino and the lepton mixings, which have a contributions from both the charged leptons and the neutrinos. Here the relevant parameter that we need from the quark sector is $a^d_{32}$ because the tangent of the angle diagonalizing $Y^e$ on the left is related to this parameter: $t^e_{23}=a^e_{23}\\propto a^d_{32}$. 
Since this is an $O(1)$ mixing we have to take it into account for the results of the $U_{MNS}$ mixings, thus we have\n\\begin{eqnarray}\n\\label{eq:t23lept}\nt^l_{23}=\\frac{|c^e_{23}s^\\nu_{23}e^{-i\\phi_{X_{23}}}-s^e_{23}c^{\\nu}_{23} |}{|s^\\nu_{23}s^e_{23}+c^{\\nu}_{23}c^e_{23}e^{i\\phi_{X_{23}}}| },\n\\end{eqnarray}\nwhere we use the expression \\eq{eq:tan23} to determine $s^\\nu_{23}$ and $c^{\\nu}_{23}$, and the approximation $t^e_{23}=a^d_{32}$; $\\phi_{X_{23}}$ is a phase relating $e$ and $\\nu$ mixings in the $(2,3)$ sector \\cite{NuMngsPhases}. We denote the $U_{MNS}$ angles by the superscript $l$ and by $e$ and $\\nu$ the charged lepton and neutrino mixings respectively. The mixings $t^l_{13}$ and $t^\\nu_{12}$ are essentially given by the neutrino mixings, \\eqs{eq:mixmasresn}, so we fit these mixings according to \\eq{eq:tan13} and \\eq{eq:tan12} respectively. We note from Table (\\ref{tabl:sol2gstneut}) that in the lepton sector we need two phases, $\\phi_{X_{23}}$ and $\\phi^{\\nu}$. The phase $\\phi_{X_{23}}$ can be associated to the charged lepton sector and we can put it in the $Y^e_{23}$ entry. The second phase, $\\phi^{\\nu}$ can be assigned to $Y^{\\nu}_{22}$.\nWe fit the mass ratio and the heaviest neutrino state using their expressions appearing in \\eqs{eq:mixmasresn}. The results for this fit appear in the second column of Table (\\ref{tabl:sol2gstneut}).\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{7}{|c|}{{Neutrino Fitted Parameters}}\\\\ \\hline\n\\multicolumn{3}{|l|}{{$\\!\\!\\!$Parameter$\\!\\!\\!$ ~~ GST sol. 2}}&\n\\multicolumn{2}{|c|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{2}{|c|}{{GST sol. 3, $u,v\\neq 0$}}\n\\\\ \\hline\n & $M_P$ & $M_G$ & $M_P$ & $M_G$ & $M_P$ & $M_G$ \\\\\n & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!\\!$ BFP value \\\\\n$a^\\nu_{23}$& $\\!\\!\\!\\!\\!0.75\\pm 0.79\\!\\!\\!\\!$ & $\\!\\!\\!\\!0.67\\pm 0.61\\!\\!\\!$ & $\\!\\!0.21\\pm 0.25\\!\\!\\!$ & $\\!\\!\\!\\!0.85\\pm 0.27\\!\\!\\!\\!$ & $\\!\\!\\!0.30\\pm 0.18\\!\\!\\!$ & $\\!\\!\\!\\!0.40\\pm 0.15\\!\\!\\!\\!$ \\\\\n$a^\\nu_{13}$& $\\!\\!\\!\\!\\!1.41\\pm 1.32\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!1.36\\pm 1.10\\!\\!\\!$ & $\\!\\!0.97\\pm 0.47\\!\\!\\!$ & $\\!\\!\\!\\!1.25\\pm 0.63\\!\\!\\!\\!$ & $\\!\\!\\!1.02\\pm 0.50\\!\\!\\!$ & $\\!\\!\\!\\!1.45\\pm 0.70\\!\\!\\!\\!$ \\\\\n$a^\\nu_{12}$& $\\!\\!\\!\\!\\!2.23\\pm 0.92\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!2.10\\pm 0.81\\!\\!\\!$ & $\\!\\!1.25\\pm 0.29\\!\\!\\!$ & $\\!\\!\\!\\!2.08\\pm 0.69\\!\\!\\!\\!$ & $\\!\\!\\!1.35\\pm 0.34\\!\\!\\!$ & $\\!\\!\\!\\!1.97\\pm 0.45\\!\\!\\!\\!$ \\\\\n$a^\\nu_{22}$& $\\!\\!\\!\\!\\!1.84\\pm 1.37\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!1.96\\pm 1.92\\!\\!\\!$ & $\\!\\!1.23\\pm 1.41\\!\\!\\!$ & $\\!\\!\\!\\!1.98\\pm 0.79\\!\\!\\!\\!$ & $\\!\\!\\!1.48\\pm 1.31\\!\\!\\!$ & $\\!\\!\\!\\!2.26\\pm 1.6\\!\\!\\!\\!$ \\\\\n$a^\\nu_{32}$& $\\!\\!\\!\\!1.47\\pm 1.93\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!0.98\\pm 0.91\\!\\!\\!$ & $\\!\\!0.65\\pm 0.70\\!\\!\\!$ & $\\!\\!\\!\\!1.53 \\pm 0.75\\!\\!\\!\\!$ & $\\!\\!\\!0.53 \\pm 0.78\\!\\!\\!$ & $\\!\\!\\!\\!0.56 \\pm 0.98\\!\\!\\!\\!$ \\\\\n\\hline\n\\multicolumn{7}{|c|}{{Neutrino Fixed Parameters}}\\\\ \\hline\n$\\varepsilon$ & $0.183$ & & $0.217$ & & $0.154$ & \\\\\n$a^e_{23}$ & $a^d_{32}=0.48$ & & $-1.6$ & & $1.2$ &\\\\\n$a^\\nu_{33}$ & $1$ & & $1$ & & $0.7$ & $1$\\\\\n$\\sigma$ & $29\/2$ & $21\/2$ & $29\/2$ & $19\/2$ & $55\/4$ & $39\/4$ \\\\ 
\n$\\!\\!\\!c(\\phi_{\\!X_{23}})\\!\\!\\!$ & $0.29$ & & $0.29$ & & $1$ & $0.5$\\\\\n$\\!\\!\\!c(\\!\\phi^{\\nu}\\!)\\!\\!$ & $-1$ & & $-0.5$ & & $0.86$ & $1$\\\\ \\hline\n$\\!\\!\\!\\!(n_2,n_3)\\!\\!\\!$ & \\multicolumn{2}{|c|}{{$(1\/4,1\/2)$}}&\n\\multicolumn{4}{|c|}{{$(1\/8,1\/2)$}}\\\\\n\\hline\n\\multicolumn{7}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $0.44$ & $0.12$ & $1.67$ & $0.49$ & $2.16$& $0.72$\\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Neutrino fitted parameters for the examples of Section \\ref{sec:su5-solut-satisfy-GST}. The second column corresponds to the \nSolution 2 in the $SU(5)$ ($u=v=0$) case, the third column to the Solution 2 in the $u + v \\neq 0$ case. The fourth column \npresents the fit to the Solution 3 in the $u + v \\ne 0$ case. Here $c(y)$ is the cosine of the respective parameter.}}\n\\label{tabl:sol2gstneut}\n\\end{table}\n\n\\subsubsection{Fit 2 and Fit 3: Extended $SU(5)$ solutions with $u + v \\neq 0$ satisfying the GST relation}\nThese are both extended $SU(5)$ solutions, with $u+v \\ne 0$, satisfying the GST relation.\nFit 2 corresponds to the textures laid out in Eq.~(\\ref{eq:3}), and Fit 3 corresponds to the textures laid out\nin Eq.~(\\ref{eq:17}).\n\n\\subsubsection*{Quark sector}\n\nThis section is completely analogous to the previous one, the only difference is in the value of $\\varepsilon$. We present here two examples. The first example corresponds to the first solution of \\eq{eq:sol2gst}, which we called Solution 2, and corresponds to $\\varepsilon=0.217$ according to the charges of the third row of \\eq{tbl:sol-leptch-GSTuvne0}. The second example corresponds to the first solution of \\eq{eq:sol3gst}, which has been called Solution 3 and corresponds to $\\varepsilon=0.154$, according to the charges of the fourth row of \\eq{tbl:sol-leptch-GSTuvne0}. The fitted and fixed parameters are also those of the previous example, \\eq{eq:fitpargst} and \\eq{eq:fixpargst} respectively. The results for the quark fitting are presented in the third and fourth column of Table (\\ref{tabl:sol2gst}), respectively, so we can compare directly with the previous case.\n\\subsubsection*{Lepton sector}\nThis case is different from the Section \\ref{sec:fit-1} because now we do not have the $SU(5)$ relations. Instead the parameter \n$k_e$ is different from $k_d$, as explained in Section (\\ref{sec:su5q}), and hence $Y^e\\neq (Y^d)^T$. In this case we perform two fits, one for the coefficients of the charged lepton mass matrix, $Y^e$ and another for the coefficients of the neutrino mass matrix, $Y^\\nu$.\n\nFor the Solution 2, taking into account the value of the charges, the second row of Table (\\ref{tbl:sol-leptch-GSTuvne0}), and that $m=u+v=1\/2$ we have $k_e=-8\/3$. We note in this case that since we need $m_b\\approx m_{\\tau}$, which are given by \n\\begin{eqnarray}\n\\label{eq:mbmtau}\nm_b&=&m_t\\varepsilon^{|k_d|},\\quad k_d=l_3+3e_3+u+4(u+v)\/3\\nonumber\\\\\nm_\\tau&=&m_t\\varepsilon^{|k_e|},\\quad k_e=l_3+3e_3+u+(u+v),\n\\end{eqnarray}\nwe expect the sum $(u+v)$ to remain small.\n\nNow the coefficients $a^e_{23}$ and $a^d_{32}$ are not related but we can fix $a^e_{23}$ in the neutrino sector such that it is in agreement with the results from neutrino oscillation. We have performed a fit using the experimental information of the parameters of \\eq{eq:fitparneuts}. 
Here we have also used the expression \\eq{eq:t23lept} in order to fit the atmospheric angle, the expressions \\eq{eq:tan13} and \\eq{eq:tan12} to fit $t^l_{13}$ and $t^l_{12}$ (reactor and solar angle respectively) and the mass ratio and the heaviest neutrino state using their expressions appearing in \\eqs{eq:mixmasresn}. The results for this fit appear in the third column of Table (\\ref{tabl:sol2gstneut}). \n\nOnce the parameter $a^e_{23}$ has been fixed we fit the parameters of the charged lepton mass matrix, of the form \\eq{eq:chleptmatuvn0} and the other parameters as in the first solution of \\eq{eq:sol2gst}. In this case the relevant parameters are $a^e_{12}$ and $a^e_{22}$. However if we just fit the expressions\n\\begin{eqnarray}\n\\frac{m_e}{m_{\\mu}}&=&\\frac{|a^e_{12}|^2}{|(a^e_{22}-a^e_{23}a^e_{32})|^2}\\varepsilon^{4\/3}=s^{e\\ 2}_{12},\\nonumber \\\\\n\\frac{m_\\mu}{m_\\tau}&=&(a^e_{22}-a^e_{23}a^e_{32})\\varepsilon^2,\n\\end{eqnarray}\nthe coefficients $a^e_{12}$ and $a^e_{22}$ are not quite $O(1)$ so we have to make use of a coefficient, $c$ such that $(a^e_{22}-a^e_{23}a^e_{32})\\rightarrow \\ (a^e_{22}-a^e_{23}a^e_{32})\/c $, e.g. $c=3$, in order to have acceptable values for charged lepton masses. This fit is presented in the second column of Table (\\ref{tabl:sol2gscluvn0}). In this case the extra-coefficient needed for the fit is not really justified in the context of just a single $U(1)$ symmetry.\n \nFor the Solution 3, we have $m=1\/2$, $k_e=-13\/6$, according to the charges of the third row of Table (\\ref{tbl:sol-leptch-GSTuvne0}). The fit of the coefficients of the neutrino mass matrix are completely analogous for Solution 2 and they appear in the third column of Table (\\ref{tabl:sol2gstneut}). The relevant parameters for the charged lepton sector are\n\\begin{eqnarray}\n\\frac{m_e}{m_{\\mu}}&=&\\frac{|a^e_{12}|^2}{|(a^e_{22}-a^e_{23}a^e_{32})|^2}\\varepsilon^{29\/6}=s^{e\\ 2}_{12},\\nonumber \\\\\n\\frac{m_\\mu}{m_\\tau}&=&(a^e_{22}-a^e_{23}a^e_{32})\\varepsilon^{7\/4}.\n\\end{eqnarray}\nFor this case {\\it there is no need} to invoke another coefficient as for the Solution 2. $O(1)$ coefficients in this case can account for the masses and mixings in the leptonic sector. Once the coefficient $a^e_{23}$ is fitted in the charged lepton sector then we need to use this parameter as a fixed parameter in the fit for the neutrino sector but in this case the fit is not as good as for the previous solution. The results are presented in the third column of Table (\\ref{tabl:sol2gscluvn0}).\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l|l|}\n\\hline\n\\multicolumn{3}{|c|}{{Charged lepton Fitted Parameters}}\\\\ \\hline\n\\multicolumn{2}{|c|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{1}{|c|}{{GST sol. 3, $u,v\\neq 0$}}\n\\\\ \\hline\nParameter & BFP Value & BFP Value \\\\\n$a^e_{12}$& $0.56\\pm 0.006$ & $2.88 \\pm 0.032$ \\\\\n$a^e_{22}$& $0.92\\pm 0.013$ & $1.87 \\pm 0.013$\\\\\n\\hline\n\\multicolumn{3}{|c|}{{Charged lepton Fixed Parameters}}\\\\ \\hline\n$\\varepsilon$ & $0.217$ & $0.154$\\\\\n$a^e_{23}$ & $-1.6$ & $1.2$ \\\\ \n$a^e_{32}$ & $1.8$ & $1.2$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $0.05$ & $1.2\\times 10^{-5}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Charged lepton fitted parameters for the examples of Section \\ref{sec:su5-solut-satisfy-GST}\nThe second column corresponds to the Solution 2 in the $u + v \\neq 0$ case. 
The third column presents the fit to the Solution 3 in the $u + v \\ne 0$ case.}}\n\\label{tabl:sol2gscluvn0}\n\\end{table}\n\\subsubsection{Fit 4: $SU(5)$ type ($u=v=0$) solution not satisfying the GST relation}\n\nThis is a $SU(5)$ type solution, hence $u=v=0$, which doesn't satisfy the GST relation. The charges are as laid out\nin Eq.~(\\ref{eq:31}). \n\n\\subsubsection*{Quark sector}\nHere we also use the expressions \\eq{eq:yukeigen} and \\eq{eq:mixsgeral} adapted to the solution \\eq{eq:ydinl1l3A} for $r'_d=l_1-l_3=1$ and check the fit with an exact numerical solution, which agrees with the fit to \\eq{eq:yukeigen} and \\eq{eq:mixsgeral} within a $5\\%$ error.\nSince we can only fit eight parameters, in this case it is not possible to select out ``the best fit'', according to the criteria that we have used for the previous fits, so we present the following solution for the coefficients of the up and down Yukawa matrices:\n\\begin{eqnarray}\n\\label{eq:ngstsol1}\na^u&=&\n \\left[\n\\begin{array}{ccc}\n0.42 & 0.58 e^{-i\\pi\/2} & 0.51\\\\\n0.58 e^{-i\\pi\/2} & 0.9 e^{-i\\pi} &0.43e^{-i\\pi\/2} \\\\\n0.51 &0.43e^{-i\\pi\/2} & 1 \\\\\n\\end{array}\n\\right],\\nonumber\\\\\na^d&=&\n \\left[\n\\begin{array}{ccc}\ne^{-i0.5} & 0.8 & 0.29 e^{i 0.48}\\\\\n1.63 e^{-i1.49} & 0.86 e^{-i1.2} &0.55e^{-i0.7} \\\\\ne^{-i0.79} &0.4e^{-i0.5} & e^{-i3.05} \\\\\n\\end{array}\n\\right].\n\\end{eqnarray}\nFor this fit we have $\\chi^2=2.31$.\n\\subsubsection*{Lepton sector}\nIn the lepton sector, once we have done the fit to the quark masses, the $SU(5)$ relations produce acceptable values for the charged lepton masses; what we need to take care of are the mixings in the neutrino sector. According to the expressions for the mixings in the $(1,2)$ and $(1,3)$ neutrino sector, \\eq{eq:mixangngst}, now $t^\\nu_{13}=a^\\nu_{13}\\varepsilon\/\\sqrt{a^{\\nu \\ 2}_{33}+a^{\\nu\\ 2}_{23}}$ and \n $t^\\nu_{12}=a^\\nu_{12}\\varepsilon^{1\/4}\/(c^{\\nu}_{23}a^{\\nu}_{22}-s^{\\nu}_{23}a^{\\nu}_{32})$, for $(n_2,n_3)=(-3\/8,0)$. 
On the other hand the mixings in the charged lepton sector go as $t^e_{12}=|a^d_{21}+3a^d_{23}a^d_{31}\/a^d_{33}|\\varepsilon\/3|a^d_{22}+3a^d_{32}a^d_{23}|$ and $t^e_{13}=a^d_{31}\\varepsilon\/|a^d_{33}+|a^{d}_{32}|^2|$, so here these contributions are important to the $U_{MNS}$ $s^l_{12}$ and $s^l_{13}$ mixings, identified respectively to the solar and reactor mixings, for example for $s^l_{13}$ we have\n\\begin{eqnarray}\ns^l_{13}&=&|c^{e}_{12}c^{e}_{13}s^\\nu_{13}-c^\\nu_{13}(e^{i(\\beta^{e}_1-\\beta^\\nu_1)}c^\\nu_{23}(c^{e}_{12}c^{e}_{23}s^{e}_{13}+e^{i\\beta^{e}_3}s^{e}_{12}s^{e}_{23})\\nonumber \\\\\n& &-e^{i(\\beta^{e}_2-\\beta^{\\nu}_2)}s^{\\nu}_{23}(e^{i\\beta^{e}_3}s^{e}_{12}c^{e}_{13}-c^{e}_{12}s^{e}_{13})s^{e}_{23})|.\n\\end{eqnarray}\nThe mixing $s^l_{23}$ is driven by the neutrino mixing $s^\\nu_{23}$\n\\begin{eqnarray}\ns^l_{23}c^l_{13}\\approx |e^{i(\\beta^{e}_2-\\beta^\\nu_2)}s^\\nu_{23}c^{e}_{12}c^{e}_{23}-e^{i(\\beta^{e}_1-\\beta^\\nu_1)}s^{e}_{23}c^\\nu_{13}c^\\nu_{23}|.\n\\end{eqnarray}\nDespite all the contributions to the mixings $s^l_{13}$ and $s^l_{12}$ we can reproduce the observed masses and mixings in the neutrino sector with $O(1)$ coefficients and with out any phase in this sector, we just use the phases of the right handed quark matrix, which are given by\n\\begin{eqnarray}\n\\beta^{e}_1&=&{\\rm{ArcTan}}\\left[\\frac{\\sin(\\phi^d_{33})}{\\cos(\\phi^d_{33})+|a^d_{32}|^2}\\right]-\\phi^d_{31}\\nonumber\\\\\n\\beta^{e}_2&=&(\\phi^d_{32}-\\phi^d_{33})+\\beta^{dR}_1\\nonumber\\\\\n\\beta^{e}_3&=&(\\phi^d_{22}-\\phi^d_{21})-\\beta^{dR}_2,\n\\end{eqnarray}\nand are specified in \\eq{eq:ngstsol1}.\nThe results of this fit are given in the second row of in Table (\\ref{tabl:sol2nongstneut}).\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l|l|}\n\\hline\n\\multicolumn{3}{|c|}{{Neutrino Fitted Parameters}}\\\\ \\hline\n\\multicolumn{2}{|c|}{{Non GST sol. 1}}&\n\\multicolumn{1}{|c|}{{Non GST sol. 2}}\\\\\n\\hline\nParameter & BFP Value & BFP Value\\\\\n$a^\\nu_{23}$& $1.6\\pm 0.8$ & $2\\pm 0.9$\\\\\n$a^\\nu_{13}$& $1.4\\pm 0.7$ & $0.9\\pm 0.3$ \\\\\n$a^\\nu_{12}$& $1\\pm 0.6$ & $1.6\\pm 0.3$ \\\\\n$a^\\nu_{22}$& $0.67\\pm 0.27$ & $0.5 \\pm 0.4$ \\\\\n\\hline\n\\multicolumn{3}{|c|}{{Neutrino Fixed Parameters}}\\\\ \\hline\n$\\varepsilon$ & $0.19$ & $0.185$\\\\\n$a^e_{23}$ & $-3a^d_{32}=-1.2$ & $-3a^d_{32}=-1.25$\\\\\n$a^\\nu_{33}$ & $1$ & $1$ \\\\\n$a^N_{22}$ & $2$ & $2$\\\\\n$\\sigma$ & $(4.5,0.5)$ & $(5,1)$ \\\\ \n$(n_2,n_3)$ &$(-3\/8,0)$ & $(-5\/8,-1\/4)$\\\\\n\\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $(5.09,4.77)$ & $(4.78,3.79)$\\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Neutrino fitted parameters for two of the non GST examples of Section \\ref{sec:su5-solutions-not-GST}\nThe second and third columns correspond respectively to solution 1 and 2 in the non GST $SU(5)$ ($u=v=0$) cases, \nfor the first one we have used $r'_d=1$ and for the second $r'_d=3\/2$. While we have fitted in the first case \n$t^\\nu_{13}$ to saturate its current upper limit, we have allowed for the second case to be smaller than it. 
\nThe first entry for $\\sigma$ corresponds to the fit using $M_P$ and the second entry using $M_G$; analogously for $\\chi^2$.}}\n\\label{tabl:sol2nongstneut}\n\\end{table}\n\\subsubsection{Fit 5: $SU(5)$ type ($u=v=0$) solution not satisfying the GST relation}\n\nThis is a $SU(5)$ type solution, and hence $u=v=0$ which doesn't satisfy the GST relation.\nThe textures are as laid out in Eq.~(\\ref{eq:32}).\n\n\n\\subsubsection*{Quark sector}\nHere we present the following solution for the case $r_d=l_1-l_3=\\frac{3}{2}$, in this case the coefficients of the up and down Yukawa matrices:\n\\begin{eqnarray}\n\\label{eq:ngstsol1_5}\na^u&=&\n \\left[\n\\begin{array}{ccc}\n0.5& 0.6 e^{-i\\pi\/2} & 0.5\\\\\n0.6 e^{-i\\pi\/2} & e^{-i\\pi} &0.43e^{-i\\pi\/2} \\\\\n0.5 &0.43e^{-i\\pi\/2} & 1 \\\\\n\\end{array}\n\\right],\\nonumber\\\\\na^d&=&\n \\left[\n\\begin{array}{ccc}\n1& 0.72 & 0.29 e^{i 0.49}\\\\\n1.82 e^{-i2.28} & 0.76 e^{-i1.12} &0.55e^{-i0.71} \\\\\ne^{-i1.57} &0.4e^{-i0.41} & e^{-i2.951} \\\\\n\\end{array}\n\\right].\n\\end{eqnarray}\nFor this fit we have $\\chi^2=2.10$.\n\\subsubsection*{Lepton sector}\nThe analysis of this fit is completely analogous to the Fit 4, the results of the fitting procedure is presented in the second column of Table \\ref{tabl:sol2nongstneut}.\n\\subsubsection{Top and bottom masses and $\\mathbf{\\tan\\beta}$}\nFor these cases $\\tan\\beta$ and $a^d_{33}$ are a prediction, once the coefficient $a^u_{33}$ is fixed through the value of $m_t$, $m_t=Y^u_{33}v\/\\sqrt{2}$. The values of $a^u_{33}$, $a^d_{33}$ and $\\tan\\beta$ for the cases presented in this section are given in Table (\\ref{tabl:tanbetares}). We can see that for a natural value of $a^u_{33}=1$ we have acceptable values for $\\tan\\beta$ (which should be $>2$) and $a^d_{33}$ in any of the cases presented.\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l|l|l|}\n\\hline\n\\multicolumn{4}{|c|}{{$\\tan\\beta$, $a^u_{33}$ and $a^d_{33}$}}\\\\ \\hline\n\\multicolumn{1}{|r|}{Parameter} &\n\\multicolumn{1}{|l|}{{GST sol. 2, $u\\!=v\\! =\\! 0$}}&\n\\multicolumn{1}{|l|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{1}{|l|}{{GST sol. 3, $u,v\\neq 0$}}\n\\\\ \\hline\n$a^u_{33}$ & $(1,1.34)$ & $(1.1.3)$ & $(1,1.3)$\\\\\n$a^d_{33}$ & $(5.33^{-1.13}_{+2.81},2.40^{-0.12}_{+0.13})$ & $(3.49^{-0.73}_{+1.84},1.62^{-0.09}_{+0.08})$ & $(3.23^{-0.68}_{+1.70}, 1.62^{-0.09}_{+0.10})$ \\\\\n$\\tan\\beta$ & $(3.00^{-0.66}_{+4.82}, 1.00^{-0.06}_{+0.06})$ & $(3.00^{-0.66}_{+1.61}, 1.07^{-0.07}_{+0.07})$ & $(3.00^{-0.45}_{+1.61}, 1.07^{-0.07}_{+0.07})$ \\\\\n\\hline\n($\\epsilon$, $|k_d|$) & ($0.183$, $5\/2$) & ($0.217$, $5\/2$) & ($0.154$, $2$)\\\\ \\hline\n\\multicolumn{1}{|r|}{Parameter} &\n\\multicolumn{1}{|l|}{{$\\!\\!$Non GST sol. 1, $u\\!=v\\! =\\!0\\!\\!\\!$}}&\n\\multicolumn{2}{|l|}{$\\!\\!$Non GST sol. 2, $u\\! = v\\! =\\!0\\! 
$}\\\\ \\hline\n$a^u_{33}$ & $(1,1.2)$ & $(1,1.2)$ & \\\\\n$a^d_{33}$ & $(2.12^{-0.44}_{+0.98},1.1^{-0.03}_{+0.08})$ & $(2.11^{-0.45}_{+0.87},1.3^{-0.06}_{+0.09})$ & \\\\\n$\\tan\\beta$ & $(3^{-0.66}_{+1.61},1.3^{-0.1}_{+0.1})$ & $(3^{-0.78}_{+1.32},1.2^{-0.2}_{+0.2})$ & \\\\\n\\hline\n($\\epsilon$, $|k_d|$) & ($0.19$, $2$) & ($0.185$, $2$) & \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Value of $a^d_{33}$ and $\\tan\\beta$ for the different models presented, once $a^u_{33}$ is fixed using $m_t$.}}\n\\label{tabl:tanbetares}\n\\end{table}\n\\subsection{Comparison to the $SU(3)$ case}\nIn this section we present the comparison to a generic $SU(3)$ case\n\\cite{King:2001uz}. \nWhat we fit are the $O(1)$ coefficients of Yukawa matrices of the form\n\\begin{eqnarray}\nY^f= \\left[\n\\begin{array}{ccc}\n\\varepsilon^8_f &\\varepsilon^3_f& \\varepsilon^3_f\\\\\n\\varepsilon^3_f &\\varepsilon^2_f& \\varepsilon^2_f\\\\\n\\varepsilon^3_f &\\varepsilon^2_f& 1\n\\end{array}\n\\right],\n\\end{eqnarray}\nwhere we allow two different expansion parameters $\\varepsilon_u$ and $\\varepsilon_d$ and complex phases to reproduce the CP violating phase. It is enough to consider one different phase in each of the $Y^u$ and $Y^d$ matrices. Here we put the phases on $Y^d_{13}$ and $Y^u_{12}$ \\cite{Ross:2004qn}, but we have the freedom to use other choices. We have used here as well the method of minimization that we have used for the $U(1)$ cases. The results of these fits are consistent with previous determinations of these parameters \\cite{Roberts:2001zy, Ross:2004qn}, taking into account the different value used here for the parameter $m_c\/m_s=15.5\\pm 3.7$ and the different methods used for the determination of coefficients{\\footnote{In \\cite{Roberts:2001zy, Ross:2004qn} $m_c\/m_s=9.5\\pm 1.7$.}}. 
Because of the minimization procedure, the fits presented here are those with the lowest possible $\\chi^2$.\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{|r|l||l|}\n\\hline\n\\multicolumn{3}{|c|}{{Quark Fitted Parameters, $SU(3)$-like case}}\\\\ \\hline\nParameter & BFP Value $\\pm \\sigma$ & BFP Value $\\pm \\sigma$ \\\\\n$a'^{u}_{22}$& $1.11\\pm 0.55$ & $1.11\\pm 0.07$ \\\\\n$a^d_{12}$& $0.66\\pm 0.32$ & $2.45\\pm 0.20 $ \\\\\n$a^d_{13}$& $0.10\\pm 0.12$ & $0.91\\pm 0.15$ \\\\\n$a^d_{22}$& $0.74\\pm 0.10 $ & $1.77\\pm 0.09$ \\\\\n$a^d_{23}$& $0.45\\pm 0.29 $ & $1.18\\pm 0.12$ \\\\\n$\\epsilon^u$& $0.05\\pm 0.007$ & $0.05\\pm 0.007$ \\\\\n$\\epsilon^d$& $0.25\\pm 0.03$ & $0.16 \\pm 0.02$ \\\\\n$\\cos(\\Phi_2)$& $0.516\\pm 0.1$ & $0.450\\pm 0.045$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{Quark Fixed Parameters, $SU(3)$-like case}}\\\\ \\hline\n$\\Phi_1^*$ & $-1.25 \\approx -0.8\\pi\/2$ & $1.120 \\approx 0.7\\pi\/2$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $0.972$ & $0.974$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\small{Fitted and fixed parameters for the $SU(3)$-like case.}}\n\\label{tabl:paru1su3}\n\\end{table}\nWe have not included a fit in the neutrino sector for the $SU(3)$ case because in the $SU(3)$-like cases the neutrino sector requires more assumptions than in the analogous $U(1)$ cases.\n\nAnother important difference between the $SU(3)$ and the $U(1)$ cases presented here is that in the first one there are \ntwo expansion parameters $\\varepsilon^u$ and $\\varepsilon^d$ which have been fitted, while in the $U(1)$ cases there is only one expansion \nparameter, which can be fixed by relating the $U(1)$ symmetry to the cancellation of anomalies and the Fayet-Iliopoulos term. \nThis has allowed more of the $O(1)$ coefficients to be fitted.\n\nBy comparing Tables (\\ref{tabl:sol2gst}) and (\\ref{tabl:paru1su3}) we can see that according to the minimization \nprocedure and the criteria of $O(1)$ coefficients, the second case of the $SU(3)$ solution fits the data better. \nHowever the $U(1)$ solutions also fit well and, taking into account the fact that for the neutrino sector we \nhave just added the SRHND conditions, the fits in both of the $U(1)$ cases presented are good. We can therefore consider \nthat $U(1)$ symmetries are still an appealing description of the fermion masses and mixings observed. Note that although \nthe Solution 3 in the $u+v\\neq 0$ case does not fit the data as well as the Solution 2 (in either case, $u=v=0$ or not) in \nthe quark sector, it does reproduce masses and mixings in the charged lepton sector. We have for this case $Y^e\\neq (Y^d)^T$ \nbut we still have $m_b\\approx m_{\\tau}$, without introducing ad hoc $O(1)$ coefficients in order to reproduce the appropriate mixings.\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|l|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{ Comparison}}\\\\ \\hline\n & $U(1)$ (GST)& $U(1)$ (Non-GST)& $SU(3)$-like \\\\\n\\# of expansion pars. &1& 1&2\\\\\n\\# of free pars. (quark sector) &12& $>$18 &10\\\\\nGST relation &yes & no & yes\\\\\nprediction for $\\tan\\beta$ & small& small &no\\\\\nlepton sector &o.k.&o.k.&o.k.\\\\\nsimple flavour charges &no &yes & yes\\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\small{Some criteria of comparison. 
Here the number of free parameters corresponds to the number of coefficients, phases and expansion parameters that need to be adjusted or determined in the fits.}}\n\\label{tabl:compar}\n\\end{table}\n\nGiven the results of these fits we need further criteria in order to compare models based on anomalous $U(1)$ symmetries with non-Abelian models, such as $SU(3)$. These further criteria may be found in the predictions that the models presented here can give in the\nsupersymmetric sector.\n\n\\section{Flavour issues in SUSY flavour symmetry models}\n\\label{sec:susyconst}\n\nSince the flavour symmetry is expected to be broken at a high energy scale,\nnon-supersymmetric models will have a hierarchy problem, since the cutoff\nof the theory must at least be of the order of the flavour symmetry breaking scale.\nSupersymmetric models with soft\nbreaking parameters around the TeV scale do not have this problem. For\nthis reason flavour symmetries are almost exclusively considered in the context\nof one of the minimal supersymmetric models, or one of the popular SUSY unified\ntheories. The soft Lagrangian parameters are strongly constrained by the\nsupersymmetric flavour problem and the supersymmetric CP problem. \n\nThe supersymmetric flavour problem requires the soft scalar mass-squared matrices\nto be diagonal to a good approximation at high energy scales, since the off-diagonal\nelements contribute to one-loop flavour violating decays such as the highly \nconstrained $\\mu\\rightarrow e \\gamma$ in the lepton sector and $b\\rightarrow s\\gamma$ in\nthe quark sector. It also requires that the trilinear couplings are well aligned with\nthe corresponding Yukawa matrix, since off-diagonal elements of the trilinears in\nthe mass eigenstate basis also contribute to highly constrained decays. \nThe supersymmetric CP problem is related to the phases of the parameters in\nthe soft Lagrangian. The general requirement is that these phases need to be\nsmall for the majority of soft breaking parameters.\n\nThe reason that these problems are relevant in the context of family symmetries\nis that in general, the existence of the family symmetry and the fields that break\nit can give dangerous contributions to the soft Lagrangian parameters. It would\nbe remiss to look at these models but not check whether CP violation or flavour\nviolation is likely to rule them out. The starting point for investigating these\nproblems is to consider the hidden sector part of the theory, which determines the\nsize and phases of the vevs of the fields which break the $U(1)$ symmetry, $\\theta$\nand $\\overline\\theta$.\n\n\\subsection{The flavon sector}\n\\label{sec:flavons}\n\nWe start by considering the values of the expansion parameters $\\epsilon$ and $\\overline{\\epsilon}$.\nThey are defined by:\n\\begin{equation}\n \\label{eq:epsilon}\n \\epsilon \\equiv \\frac{\\left<\\theta\\right>}{M}, \\;\\; \\; \\overline{\\epsilon} = \\frac{\\left<\\overline{\\theta}\\right>}{M},\n\\end{equation}\nwhere $\\theta$ and $\\overline{\\theta}$ are scalars which break the $U(1)_F$ symmetry, and have charges of $1,-1$ respectively\nunder the symmetry. We wish to arrange that $\\epsilon = \\overline{\\epsilon}$, which entails arranging that the\npotential is minimized by $<\\theta> = <\\overline{\\theta}>$. This would be simple if the $U(1)$ were non-anomalous, and thus\nmissing a Fayet-Iliopoulos term. 
We set the $\\theta$ sector of the superpotential to be\n\\begin{equation}\n \\label{eq:33}\n W_\\theta = S(\\theta \\overline{\\theta} - M_\\theta^2).\n\\end{equation}\nWe then introduce a new field, $X$, which has charge $q_X$ under $U(1)$. $q_X$ will be unspecified, but some number such that\nwhen $\\left<X\\right> \\ne 0$, it doesn't contribute to the fermion mass operators (or, at the very least, it doesn't contribute at\nleading order). Then, if we give $\\theta$ and $\\overline{\\theta}$ the same soft mass\n\\footnote{This requirement may seem somewhat strong, but we also wish to minimize flavour violation coming from the D-term\nassociated with $U(1)$, which is proportional to $m_\\theta^2 - m_{\\overline\\theta}^2$, and will provide a non-universal\ncontribution to the scalar masses. This contribution will lead to off-diagonal elements in the SCKM basis which can easily\nbe dangerously large with regard to flavour violation.\n}\n, and require that $X$ doesn't get\na soft mass, we end up with a hidden sector potential:\n\\begin{equation}\n \\label{eq:34}\n V = | \\theta \\overline{\\theta} - M_\\theta^2 |^2 + \\frac{g^2}{2} \\left( |\\theta|^2 - |\\overline{\\theta}|^2 + q_X |X|^2 + \\xi^2\\right)^2 \n + m^2(|\\theta|^2 + |\\overline{\\theta}|^2).\n\\end{equation}\nIf we minimize this potential with respect to $\\theta, \\overline{\\theta}$ and $X$, we end up with the following constraints:\n\\begin{eqnarray}\n \\label{eq:35}\n \\frac{\\partial V}{\\partial \\theta} &= 0 =& 2 \\overline\\theta ( \\theta \\overline\\theta - M_\\theta^2) + {g^2}\\theta\n ( |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2 ) + 2 m^2 \\theta\\\\\n \\label{eq:36}\n \\frac{\\partial V}{\\partial \\overline\\theta} &= 0 = &\n 2 \\theta ( \\theta \\overline\\theta - M_\\theta^2 ) + {g^2} \\overline \\theta ( |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2)\n + 2m^2 \\overline\\theta \\\\\n \\label{eq:37}\n \\frac{\\partial V}{\\partial X} & = 0 = & \\frac{g^2}{2} 2 X ( |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2 ) \n\\end{eqnarray}\n\nSince $X$ doesn't have a mass term, it would be massless unless $\\left<X\\right> \\ne 0$. Therefore, of the two solutions of Eq.~(\\ref{eq:37}),\nwe have to take $X \\ne 0$. From this, we see that:\n\\begin{equation}\n \\label{eq:38}\n |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2 = 0.\n\\end{equation}\nSubstituting Eq.~(\\ref{eq:38}) into Eq.~(\\ref{eq:35}) and Eq.~(\\ref{eq:36}), and multiplying by $\\theta$ and $\\overline\\theta$ respectively,\nwe find:\n\\begin{eqnarray}\n \\label{eq:39}\n 0 &=& \\theta\\overline\\theta(\\theta\\overline\\theta - M^2_\\theta) + m^2 \\theta^2 \\\\\n 0 &=& \\theta\\overline\\theta(\\theta\\overline\\theta - M^2_\\theta) + m^2 {\\overline\\theta}^2\n\\end{eqnarray}\nFrom this, we can deduce that either $\\theta = \\overline\\theta = 0$ or $|\\theta| = |\\overline\\theta| = M_\\theta$, up to corrections of order $m^2\/M_\\theta^2$. The potential is minimized\nby the second solution provided $m^2 \\ll M_\\theta^2$. As we expect $M_\\theta$ to be a GUT scale mass, and $m$ to be a TeV scale soft mass term, we\nfind that, as desired, we will have:\n\\begin{eqnarray}\n \\label{eq:40}\n <\\theta> = <\\overline\\theta> \\Rightarrow \\epsilon = \\overline\\epsilon\n\\end{eqnarray}\n\nThis allows us to consider Yukawa textures without having to keep track of whether the overall charge for each\nterm is positive or negative. 
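\nAs a simple cross-check of this minimization (not part of the original analysis; the couplings and mass values below are purely illustrative, and the fields are treated as real), the following sketch minimizes the potential of Eq.~(\\ref{eq:34}) numerically. It confirms that $\\left<\\theta\\right>=\\left<\\overline\\theta\\right>\\simeq M_\\theta$ up to corrections of order $m^2\/M_\\theta^2$, with $\\left<X\\right>$ adjusting so that the $U(1)$ D-term vanishes (which requires $q_X|X|^2$ to be able to offset $\\xi^2$; here $q_X=-1$).\n\\begin{verbatim}\n# Numerical check of the flavon potential, Eq. (eq:34).\n# Fields treated as real; all input values are illustrative only.\nimport numpy as np\nfrom scipy.optimize import minimize\n\nM_theta = 1.0     # superpotential mass, in units of the heavy scale\nm       = 1e-3    # common soft mass, << M_theta\ng       = 0.5     # U(1) gauge coupling\nq_X     = -1.0    # charge of X, chosen so the D-term can cancel\nxi2     = 0.3     # Fayet-Iliopoulos term xi^2\n\ndef V(fields):\n    th, thb, X = fields\n    D = th**2 - thb**2 + q_X * X**2 + xi2       # U(1) D-term\n    return ((th * thb - M_theta**2)**2          # F-term of S\n            + 0.5 * g**2 * D**2                 # D-term energy\n            + m**2 * (th**2 + thb**2))          # common soft masses\n\nres = minimize(V, x0=[0.9, 1.1, 0.5], method='Nelder-Mead', tol=1e-14)\nth, thb, X = res.x\nprint(th, thb)   # both ~ M_theta, equal up to O(m^2\/M_theta^2)\nprint(X**2)      # ~ xi^2, so that the D-term vanishes\n\\end{verbatim}\n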
\n\n\n\\subsubsection{Getting $\\epsilon$ from the Fayet-Iliopoulos term}\n\\label{sec:epsilon-from-FI}\n\nThe GST requirement leads to needing flavon fields with opposite charges under $U(1)_F$. Were this not the case, we would have an\nelegant way of generating $<\\theta>$. Consider a simple case where $\\theta$ doesn't have a superpotential mass term, but does\nhave a soft mass:\n\\begin{equation}\n \\label{eq:41}\n V = \\frac{g^2}{2} ( -|\\theta|^2 + \\xi^2 )^2 + m^2_\\theta |\\theta|^2\n\\end{equation}\nThen, without the need for an explicit mass term in the superpotential, we would find that minimizing the potential with respect\nto $\\theta$ would lead to:\n\\begin{equation}\n \\label{eq:42}\n <\\theta> = \\xi \\sqrt{1 - \\frac{m_\\theta^2}{g^2\\xi^2}} \\approx \\xi\n\\end{equation}\nwhere the final approximation is due to the fact that we expect $g^2\\xi^2$ to be much larger than $m^2_\\theta$. \nWe have thus managed to set $<\\theta>$ from $\\xi$, which can be predicted from string theory; this allows one to\npredict the flavon vev, rather than having to put it in by hand. \n\nThis provides a motivation for trying to set up the case where $<\\theta>$ and $<\\overline\\theta>$ could both be\nset by the FI term. However, it doesn't seem possible to make this work without adding in either an extra symmetry\nor extra matter. Even then, trying to arrange things so that $<\\theta> = <\\overline\\theta> = z \\xi$, with $z$ some\nreal number, is difficult. \n\n\\subsection{Yukawa Operators}\n\nSince the net $U(1)$ charge can be either positive or negative and we have $\\epsilon = \\overline\\epsilon$, the effective superpotential has the following form:\n\\begin{eqnarray} \n \\nonumber\n W = \\sum_{f=u,d;\\;ij} & Q^i f^{c\\;j} H_f & a^f_{ij} \\epsilon^{|q_i + f_j + h_f|} \\\\\n \\label{eq:effectsup}\n + \\sum_{f=e,n;\\;ij} & L^i f^{c\\;j} H_f & a^f_{ij} \\epsilon^{|l_i + f_j + h_f|}. \n\\end{eqnarray}\nWe cannot say anything in particular about the K\\\"ahler potential. We can assume that the phases responsible for CP violation only appear in the flavour sector.\nThen observable CP violating phases will be put into the Yukawa couplings indirectly from the effective superpotential of Eq.~(\\ref{eq:effectsup}). In general \nwe can consider an effective K\\\"ahler potential of the form:\n\\begin{eqnarray}\nK=K_o(t_\\alpha)-\\ln(S+\\bar{S}+\\delta_{GS})+ \\sum_i f_i (t_\\alpha) \\theta_i \\bar{\\theta_i}+...+\\sum_{ij} K^{\\Phi}_{ij}\\Phi^i\\bar{\\Phi}^j\n\\end{eqnarray}\nwhere $K_o$ is the K\\\"ahler potential of the moduli fields, $t_\\alpha=T_\\alpha+\\bar{T}_\\alpha$, $S$ is the dilaton, and \n$f_i(t_\\alpha)$ are possible functions of these moduli fields, e.g. $f(t)=\\Pi^p_{\\alpha=1}t_{\\alpha}^{n(\\alpha)_{ij^*}}$. But we cannot specify \nthe form of the K\\\"ahler metric. \nIt may be that the K\\\"ahler metric is canonical, in which case $K^{\\Phi}_{ij^*}=\\delta_{ij^*}$. Such a form has a good chance of leading to\nacceptable phenomenology, since the scalar mass matrices will be proportional to the identity at the appropriate high energy scale. When\nrotating the scalar mass matrices to the super-CKM (SCKM) basis at the high energy scale, the transformation will leave the mass matrices invariant.\nFlavour violation tends to be proportional to off-diagonal elements in the scalar mass matrices in the SCKM basis, so any flavour violation will\nbe due to RG effects, and will therefore be suppressed. 
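\nAs a simple illustration of this last point (this is not part of the original analysis; the mixing matrix and soft mass values below are purely illustrative), the following sketch rotates both a universal and a diagonal but non-universal soft mass-squared matrix to the SCKM basis. The universal case stays proportional to the identity, while the non-universal case develops the off-diagonal entries that drive flavour violation.\n\\begin{verbatim}\n# SCKM rotation of universal vs non-universal diagonal soft masses.\n# The mixing matrix V_L and the mass values are illustrative only.\nimport numpy as np\n\ndef rotation(i, j, s, n=3):\n    # real rotation in the (i, j) plane with sine s\n    R = np.eye(n)\n    c = np.sqrt(1.0 - s**2)\n    R[i, i] = R[j, j] = c\n    R[i, j], R[j, i] = s, -s\n    return R\n\n# toy left-handed diagonalization matrix with a large (2,3) angle\nV_L = rotation(1, 2, 0.7) @ rotation(0, 1, 0.05)\n\nM2_universal = (500.0**2) * np.eye(3)                   # canonical Kahler metric\nM2_nonuniv   = np.diag([520.0**2, 530.0**2, 500.0**2])  # diagonal, non-universal\n\nfor M2 in (M2_universal, M2_nonuniv):\n    M2_sckm = V_L.T @ M2 @ V_L                   # rotation to the SCKM basis\n    delta = M2_sckm \/ np.mean(np.diag(M2_sckm))  # analogue of the delta_ij\n    print(np.round(np.abs(delta), 4))\n\\end{verbatim}\nIn the universal case the off-diagonal entries of the rotated matrix vanish identically, which is the statement made above; any non-universality survives the rotation as non-zero $\\delta_{ij}$.\n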
On the other hand, the K\\\"ahler metric could have off-diagonal structure, in which case\nthe risk of flavour violating effects would be high. The case where the K\\\"ahler metric is diagonal but non-universal is also potentially very interesting, since flavour changing effects are then induced in general by the SCKM rotation.\n\n\\subsection{The SUSY CP problem}\n\\label{sbsec:susycppr}\n\n\\subsubsection{The $\\mu$ problem}\n\nIn order to avoid the $\\mu$ problem, a symmetry or other mechanism to protect $\\mu$ from unwanted contributions needs to be introduced.\nThe $\\mu$ parameter can have contributions from the superpotential (expected to be at the Planck scale) and from the K\\\"ahler potential,\nvia the Giudice-Masiero mechanism \\cite{Giudice:1988yz} or other mechanisms \\cite{Casas:1992mk,Kim:1994eu}, $\\mu = \\mu_W + \\mu_K$. The charges of the fields $H_u$ and $H_d$ under the flavour symmetry \ncan be chosen in such a way that $\\mu_W(M_P)$ is forbidden in the superpotential. Then another field, $S$, can be introduced, so that the term \n$\\lambda S H_uH_d$ is allowed in the K\\\"ahler potential, which generates an effective $\\mu = O(m_{3\/2})$. \nNote that in the cases that we have found for $u+v\\neq 0$ there is no $\\mu_W$ at $M_P$. In general, for a theory containing two flavon fields with opposite charges, once the flavour symmetry \nis broken below the Planck scale, the contributions to the $\\mu$ term are:\n\\begin{eqnarray}\n\\label{eq:mubreaku3s}\n\\epsilon^{|u+v|} H_u H_d \\mu_W + \\epsilon^{|u+v|} H_u H_d \\mu_K\n\\end{eqnarray}\nThus, even if the $\\mu$ term is missing from the superpotential at the renormalizable level, it will be generated by non-renormalizable\noperators once the family symmetry is broken. However, it will appear suppressed by a factor of $\\epsilon^{|u+v|}$. To get a sufficient\nsuppression, either $|u+v|$ must be large or $\\epsilon$ must be small.\nObviously, since the same factor $\\epsilon^{|u+v|}$ appears suppressing both the superpotential and K\\\"ahler potential $\\mu$ contributions,\nthere is no extra constraint from considering the second term in Eq.~(\\ref{eq:mubreaku3s}).\n\nHowever, $|u+v|$ is related to the anomaly cancellation conditions considered in Section \\ref{sec:anomconst}, which may force it to be small. There are then two possibilities\nfor dealing with a small $|u+v|$. The first is to have a small expansion parameter $\\epsilon$; however, if $\\epsilon$ becomes too small, it makes\npredicting the fermion mass hierarchy very difficult. The second is to accept a contribution to $\\mu$ that is larger than order $O(m_{3\/2})$;\nhowever, phenomenologically the total $\\mu$ should not be much bigger than $O(m_{3\/2})$. It is, however, possible to apply a new discrete\nsymmetry to disallow the superpotential $\\mu$ term, so that flavon corrections are never allowed to regenerate it. \n\n\\subsubsection{Electric dipole moment constraints}\nThe electric dipole moments (EDMs) constrain the form of the trilinear couplings, $(Y^A_{f})_{ij}$. The trilinear couplings are\ndefined through $(Y^A_{f})_{ij}H_{f}Q_i f^c_j$. Here we need to ensure that the phases appearing \nin the trilinear terms do not give a large contribution to the CP violating phases. In the context of flavour symmetries it is usually postulated that the only phases \nappearing in the theory are in the Yukawa couplings, and that any other phase enters as a consequence of a dependence on the Yukawa couplings. 
\nThen, to check whether the model gives contributions below the bounds, one needs to compare the diagonal elements of the Yukawa couplings \nwith the diagonal elements of the trilinear couplings, in the SCKM basis. The trilinear terms in general can be written as\n %\n \\begin{eqnarray}\n \\label{eq:trilinears}\n \\mathbf{(Y^A_{f})}_{ij}=Y^{f}_{ij}F^a\\partial_a\\left(\\tilde{K}+\\ln(K^f_f K^i_i K^j_j) \\right)\n+F^a\\partial_a {Y}^{f}_{ij}.\n \\end{eqnarray}\n %\nWe can always write the first term in a ``factorisable'' form \\cite{Kobayashi:2000br}, such that if the Yukawa couplings, \n\\eq{eq:effectsup}, are the only source of CP violation then the first term does not give any contribution at leading order.\nFor the second term, which involves the derivative with respect to the flavon fields, if the flavon field is the only field with $F^\\theta\\neq 0$ then the \ndiagonal trilinear couplings in the SCKM basis are real at leading order in the flavon fields \\cite{Ross:2002mr}. \nThus there is not an $O(1)$ contribution to the CP phases from this sector. \n\nOne can check this simply by writing the last term of \\eq{eq:trilinears} in the SCKM basis: \n\\begin{eqnarray}\n \\nonumber\n (F^a\\partial_a({\\hat Y}^f))^{\\rm{SCKM}}_{ij} &=& F^a(V^\\dagger_L)_{ik}(\\partial_a V_L)_{kj}(Y_{\\rm{Diag}})_{jj}+\\\\\n \\label{eq:45}\n &&F^a(\\partial_aY_{\\rm{Diag}})_{ij}+F^a(Y_{\\rm{Diag}})_{ii}\n (\\partial_a V_R)_{ir}(V^\\dagger_R)_{rj},\n\\end{eqnarray}\nwhere $V_L$ and $V_R$ diagonalize the Yukawa matrix: $Y_{\\rm{Diag}}=V^\\dagger_L Y V_R$. The leading term of Eq.~(\\ref{eq:45})\nis the second one, and it is at most of order $\\theta$. \nIf another field has a non-zero F-term, $F^X\\neq 0$, then all the quantities appearing in \\eq{eq:trilinears} can be written as an expansion \nin $X$ and $\\theta\/M=\\varepsilon$:\n\\begin{equation}\n (Y_{\\rm{Diag}})_{ii}=(a_{ii}+b_{ii}X)\\varepsilon^{p_{ii}}.\\label{eq:46}\n\\end{equation}\nWe are assuming that only the matter sector in \n\\eq{eq:effectsup} has phases leading to CP violation, so the term $b_{ii}X\\varepsilon^{p_{ii}}$ is real and hence so is\n\\begin{equation}\n F^X(\\partial_X Y_{\\rm{Diag}})_{ii}=F^X b_{ii}\\varepsilon^{p_{ii}}.\\label{eq:47}\n\\end{equation}\n\\subsection{SUSY flavour problem}\nIn addition to the F-term contribution to the soft masses, we have to add the D-term contributions:\n %\n \\begin{eqnarray}\n(M^2)_{ij}=(M^2)_{F\\ ij}+(M^2)_{D\\ ij}.\n \\end{eqnarray}\n %\nIf the K\\\"ahler metric is diagonal in the basis where the symmetry is broken, both contributions are diagonal and proportional to the K\\\"ahler metric. \nFor example, consider universal SUGRA: $(M^2)_{F\\ ij}=K_{ij}m^2_o$. However, even if we assume that the first term is indeed proportional to \nthe K\\\"ahler metric, the D-term will not in general be proportional to the K\\\"ahler metric:\n\n\\begin{equation}\n\\label{eq:48}\n(M^2)_{D\\ ij}= \\sum_N g_N X_{N\\ \\theta_a} K_{ij^*}(\\theta_a)m^2_{D},\\;\\;\\; m^2_{D}=O(m^2_{3\/2}).\n\\end{equation}\nThe main problem for FC processes in these kinds of theories is the contribution to the trilinear couplings from the anomalous D-term \ncontribution to the soft masses \\cite{Chung:2003fi}. 
For this last issue there is no real solution so far, but one can ameliorate the problem by making all the\nscalars heavier, which is simply a mass suppression.\n\n\nIn order to study all the possible consequences of models with the superpotential structure of \\eq{eq:effectsup}, \nwe can parameterize the K\\\"ahler metric according to the different contributions it may have, \nassuming a broken underlying symmetry with at least two flavon fields with opposite charges. \nOnce this is done we can then study the consequences of each case. As mentioned earlier, this analysis is beyond the scope of this paper, so\nwe just mention how extreme and dangerous situations may arise and we leave the detailed analysis for future work \\cite{inpreparation}. Some authors have studied possible consequences of flavour models for FC effects, but very specific assumptions need to be made due to the many unknown supersymmetric parameters \\cite{Babuetal, Ciuchini:2003rg,Masina:2003wt}.\n\nThe strictest bound on flavour changing processes comes from the decay $\\mu\\rightarrow\\ e \\ \\gamma$ \\cite{Hisano:1995cp}-\\cite{Masina:2002mv}, and given the fact that we \nhave a large mixing angle in the left-handed sector of the charged lepton matrices it is crucial to determine under which conditions we can \nproduce a suppressed effect. The constraints given by the process $B\\rightarrow\\ \\Phi \\ K_S$ may also select out some of the possibilities presented.\n\\subsubsection{Non-minimal sugra and diagonal K\\\"ahler metric}\nConsider, for example, the case in which the K\\\"ahler metric is diagonal at the scale at which the flavour symmetry is broken. For this case, we also\nwant the soft scalar mass matrices diagonal but not proportional to the unit matrix, due to possibly different D-term contributions. Since the general case is difficult to handle, we consider the case where $M^2_{\\tilde f \\ 1}-M^2_{\\tilde f \\ 2}$ is small and $M^2_{\\tilde f \\ 1}- M^2_{\\tilde f \\ 3}>0$.\nIn order to estimate the flavour changing processes we need to take into account the effects from the renormalization group equations (RGEs) and then, at the electroweak scale, make the transformation to the basis where the fermions are diagonal. Here we consider the case of leptons, since we are interested in determining\n$\\delta^{l}_{ij}$ and in particular $\\delta^{l}_{12}$, which is the most constrained parameter due to $B(\\mu\\rightarrow e\\ \\gamma)$.\n\nWe make an estimation of the contributions from the renormalization $\\beta$ functions in this case, such that at the scale where the dominant right-handed neutrino is decoupled we can write the soft masses as\n\\begin{eqnarray}\n\\label{eq:massren}\nM^2_{\\tilde L\\ ij}(M_{Y})\\approx M^2_{\\tilde L \\ ij}(M_X)-\\frac{1}{16\\pi^2} \\ln\\left(\\frac{M_X}{M_Y} \\right)(\\beta^{(1)}_{M^2_{\\tilde L\\ ij}})\n\\end{eqnarray}\nfor $M_X=M_{\\rm{G}}$ or $M_{\\rm{P}}$, the GUT or Planck scales respectively, and for \n$M_Y=M_{RR\\ 3}$ in this case, considering just one-loop corrections. The $\\beta$ functions of $M^2_{\\tilde L\\ ij}$, from $M_X$ to $M_{RR\\ 3}$, receive the contributions from the MSSM particles plus the contribution from the right-handed neutrinos.\nAt $M_3$ we then run from that scale to the electroweak symmetry breaking scale with the appropriate $\\beta$ function and matter content. 
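\nAs a rough numerical illustration of Eq.~(\\ref{eq:massren}) (this is not one of the fits of this paper; the neutrino Yukawa entries and soft masses below are placeholder values), the following sketch evaluates the leading-log off-diagonal soft mass elements generated by the dominant right-handed neutrino contribution, of the form of Eq.~(\\ref{eq:betasMR3}) below, dropping the trilinear factor $(1+b^2)$.\n\\begin{verbatim}\n# Leading-log estimate of the RG-induced off-diagonal slepton soft masses,\n# Eq. (eq:massren), keeping only the dominant right-handed neutrino term.\n# All input numbers are illustrative placeholders.\nimport numpy as np\n\nM_X, M_3 = 2.0e16, 1.0e8          # high scale and dominant RH neutrino mass (GeV)\nm_L3, m_nu3, m_Hu = 500.0, 500.0, 510.0   # soft masses (GeV)\n\nY_nu_3 = np.array([0.001, 0.05, 0.05])    # third-row neutrino Yukawas Y^nu_{3i}\n\nloop = np.log(M_X \/ M_3) \/ (16.0 * np.pi**2)\npref = 2.0 * (m_L3**2 + m_nu3**2 + m_Hu**2)\n\n# beta^(1) ~ 2 Y^nu*_{3i} Y^nu_{3j} (m_L3^2 + m_nu3^2 + m_Hu^2)\ndM2 = -loop * pref * np.outer(Y_nu_3, Y_nu_3)\n\nprint(dM2)   # GeV^2; the off-diagonal entries feed the delta^l_ij below\n\\end{verbatim}\n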
In the case of the SRHND scenario and the form of the Yukawa matrices that we have considered in Section (\\ref{sec:fitsmasses}), we can make the following approximations for the $\\beta$ functions{\\footnote{For the MSSM see for example \\cite{Martin:1993zk}; when including right-handed neutrinos, see for example \\cite{Hisano:1995cp}.}}:\n\\begin{eqnarray}\n\\label{eq:betasMSSM}\n\\left(\\beta^{(1)}_{ M^2_{\\tilde L\\ ii} }\\right)^{MSSM}\\!\\!\\!\\!\\!\\!\\!&\\!\\approx\\!&\\!\\!\n2\\left[(m^2_{M^2_{\\tilde L\\ ij}} +m^2_{\\tilde H_d})\\left(|Y_{2i}|^2+|Y_{3i}|^2\\right) + m^2_{\\tilde e_2}(1+a^2)\\left(|Y_{2i}|^2+r^2_{\\tilde e_{23}}|Y_{3i}|^2\\right)\\right]\\nonumber\\\\\n&& -6g^2_2|m_2|^2-\\frac{6}{5}g^2_1|m_1|^2-\\frac{3}{5}g^2_1 S\\nonumber\\\\\n\\left(\\beta^{(1)}_{ M^2_{\\tilde L\\ ij} }\\right)^{MSSM}\\!\\!\\!\\!\\!\\!\\!&\\!\\approx\\!&\\!\\! (2m^2_{\\tilde H_d}+m^2_{\\tilde L\\ i} +m^2_{\\tilde L\\ j})\n\\left( Y^{e *}_{2i}Y^{e*}_{2j} + Y^{e *}_{3i}Y^{e*}_{3j}\\right)+\\nonumber\\\\\n&&+ 2m^2_{\\tilde e_2}(1+a^2)\\left(Y^{e *}_{2i}Y^{e*}_{2j} + r^2_{\\tilde e_{23}} Y^{e *}_{3i}Y^{e*}_{3j} \\right)\n\\end{eqnarray}\nwhere we have assumed that the trilinear terms can be written as \n$A^f_{ij}=aY^f_{ij}M^2_{\\tilde e}$, and $M^2_{\\tilde e}$ is not necessarily diagonal. The parameter $S$, defined as $S=m^2_{\\tilde H_u}-m^2_{\\tilde H_d}+\\rm{Tr}\\left[M^2_{\\tilde Q} -M^2_{\\tilde L}-2 M^2_{\\tilde u}+ M^2_{\\tilde d}+ M^2_{\\tilde e} \\right]$, does not generate big contributions as long as the masses involved remain somewhat degenerate. The $\\beta$ functions generated by the dominant right-handed neutrino can be approximated by\n\\begin{eqnarray}\n\\label{eq:betasMR3}\n\\left(\\beta^{(1)}_{ M^2_{\\tilde L\\ ij} }\\right)^{\\nu_{M_3}}\\!\\!\\!\\!\\!&\\!\\approx\\!&\\!\\!2Y^{\\nu *}_{3i}Y^{\\nu}_{3j}\\left[m^2_{\\tilde L 3} + m^2_{\\tilde \\nu 3}(1+b^2)+m^2_{\\tilde H_u} \\right]\n\\end{eqnarray}\nFrom $M_X=M_3$ to $M_Y=M_{\\rm{S}}$ (the supersymmetry breaking scale), we consider $\\left(\\beta ^{(1)}_{ M^2_{\\tilde L\\ ij }}\\right)^{MSSM}$. For this estimation we ignore the effect from $M_{\\rm{S}}$ down to the electroweak scale. At this scale we then transform the renormalized $M^2_{\\tilde L}$ to the basis where the charged leptons are diagonal. Since there is a large mixing angle $(s^{e_L}_{23})$ in the left sector of $Y^e$ we are interested here only in estimating $(M^2_{\\tilde L})_{LL}$. We can use the parameterization of Appendix A in order to make this transformation, i.e.\n\\begin{eqnarray}\n \\label{eq:49}\n Y^f_{\\rm{diag}}=V^{f\\dagger}_{L} Y^f V^f_{R},\\quad\n (M^2_{\\tilde L})'_{LL}=V^{f\\dagger}_{L} M^2_{\\tilde L} V^f_{L},\n\\end{eqnarray}\nfor $V^f_{L,R}$ as parameterized in \\eq{eq:pardimatL}, with the $\\beta$ phases as follows:\n\\begin{eqnarray}\n\\{\\beta^{e_L}_1,\\beta^{e_L}_2,\\beta^{e_L}_3\\}=\\{\\phi^{e}_{X_{23}},0,0\\},\\quad \\phi^{e}_{X_{23}}=\\beta^{e_L}_1-\\beta^{e_L}_2.\n\\end{eqnarray}\nUsing these approximations, we obtain the following results:\n\\begin{eqnarray}\n(M^2_{\\tilde L})^{\\prime}_{12}&=&\ns^{e_L}_{12}(c^{e_L}_{23} {m^2_{\\tilde L\\ 22 }} - {m^2_{\\tilde L\\ 11 }}) + \\nonumber\\\\\n&+&(c^{e_L}_{12})^2e^{-i\\beta_{3L}}\\left(c^{e_L}_{23} e^{-i\\beta_{2L}} {m^2_{\\tilde L\\ 12 }}-2t_{12}c^{e_L}_{23}s^{e_L}_{23} e^{i\\beta_{3L}} \\rm{Re}\\{ {m^2_{\\tilde L\\ 23}} e^{-i\\chi}\\}\\right.\\nonumber\\\\\n&&-\\left. 
s^{e_L}_{23}e^{-i\\beta_{1L}} {m^2_{\\tilde L\\ 13 }} \\right),\\nonumber\\\\\n(m^2_{\\tilde L})^{\\prime}_{13}&=&\nc^{e_L}_{23} s^{e_L}_{23}s^{e_L}_{12}e^{i\\beta_{3L}}( {m^2_{\\tilde L\\ 22 }} - {m^2_{\\tilde L\\ 33 }} ) +\\nonumber\\\\\n&+& c^{e_L}_{12}c^{e_L}_{23} \\left(\\left( \ne^{-i\\chi} c^{e_L}_{23}t_{12} {m^2_{\\tilde L\\ 23 }}\n-e^{i\\chi} t_{12}t_{23}s^{e_L}_{23} \\beta^*_{m^2_{\\tilde L\\ 23 }}\n\\right)\\right.+\\nonumber\\\\\n&+&\\left.t_{23}e^{i\\chi} {m^2_{\\tilde L\\ 12 }}+ {m^2_{\\tilde L\\ 13 }} \\right)\n\\nonumber,\\\\\n(m^2_{\\tilde L})^{\\prime}_{23}&=&\nc^{e_L}_{23}s^{e_L}_{23}e^{i\\beta_{3L}}\\left({m^2_{\\tilde L\\ 22 }}- {m^2_{\\tilde L\\ 33 }} \\right)\n \\nonumber\\\\\n&&+e^{i\\beta_{3L}}c^{e_L}_{12}\\left((c^{e_L}_{23})^2e^{-i\\chi} {m^2_{\\tilde L\\ 23 }} - (s^{e_L}_{23})^2e^{i\\chi} \\beta^*_{m^2_{\\tilde L\\ 23 }} \\right) ,\\nonumber\\\\\n(m^2_{\\tilde L})^{\\prime}_{11}&=&\n(s^{e_L}_{12})^2\\left((c^{e_L}_{23})^2 m^2_{\\tilde L\\ 22 }+ (s^{e_L}_{23})^2 m^2_{\\tilde L\\ 33 } \\right) \n\\nonumber\\\\\n(m^2_{\\tilde L})^{\\prime}_{22}&=&\n(c^{e_L}_{12})^2\\left((c^{e_L}_{23})^2 m^2_{\\tilde L\\ 22 }+(s^{e_L}_{23})^2 m^2_{\\tilde L\\ 33 }\\right)-(c^{e_L}_{12})^2c^{e_L}_{23}s^{e_L}_{23} 2\\rm{Re}\\{ {m^2_{\\tilde L\\ 23}} e^{-i\\chi}\\}\\nonumber\\\\ \n(m^2_{\\tilde L})^{\\prime}_{33}&=&\n(c^{e_L}_{13})^2\\left((s^{e_L}_{23})^2 m^2_{\\tilde L\\ 22 }+ (c^{e_L}_{23})^2 m^2_{\\tilde L\\ 33 } \\right)+\nc^{e_L}_{23}s^{e_L}_{23}\\left( 2\\rm{Re}\\{ {m^2_{\\tilde L\\ 23}} e^{-i\\chi}\\} \\right)\\nonumber\\\\\n \\end{eqnarray}\nHere the soft masses $m^2_{\\tilde L\\ {ij}}$ are the soft masses at $M_{\\rm{S}}$, renormalized from $M_X=M_{\\rm{G}}, M_{\\rm{P}}$ down to $M_3$ with the appropriate contributions from the dominant right-handed neutrino, \\eq{eq:massren} and \\eq{eq:betasMSSM}-\\eq{eq:betasMR3}, and then from $M_3$ to $M_S$ with the appropriate $\\beta^{MSSM}$ functions. Thus we began with a diagonal matrix $M^2_{\\tilde L}$ at $M_X$; the RGE effects up to the scale where $M_3$ is decoupled then generate a non-diagonal matrix, which receives further RGE contributions from $M_3$ to $M_S$. At the electroweak scale we transform to the basis where the charged leptons are diagonal.\nThe mixing angles in this sector can be approximated as\n\\begin{eqnarray}\ns^{e_L}_{12}=|(a^e_{12}-t_{32}a^e_{13})|\/|(a^e_{22}-a^e_{32}a^e_{23})|\\epsilon^{p^e_{12}},\\quad\ns^{e_L}_{13}=a^e_{13}\/a^e_{33}\\epsilon^{p^e_{13}},\\quad \ns^{e_L}_{23}=a^e_{23}\/a^e_{33}\n\\end{eqnarray}\nThe powers $p^e_{ij}$ for the solutions presented correspond to \n$p^e_{12}=2\/3,14\/3$ and $p^e_{13}=29\/12,71\/12$ for Fits 2 and 3 respectively.\nSo in this case we see that we need a big suppression of the element $(m^2_{\\tilde l \\ L})^{\\prime}_{12}$ in order to be in agreement with the \nobserved bound on $\\mu\\rightarrow\\ e \\gamma$. In the present example the suppression is related to a bound on $( m^2_{\\tilde L1}\\! -\\!m^2_{\\tilde L2})$ and a relatively large set of soft masses. The results of these estimations are presented in Table \\ref{tbl:nonsugex}.\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Estimation of $\\delta_{ij}$ for Fit 3 of Section \\ref{sec:fitsmasses}.}\\\\\n\\hline\n Parameter & Ex. I & Ex. II & Ex. 
III\\\\\n\\hline\n$m_{\\tilde L 1}[\\rm{GeV}]$ & 520 & 520 &520\\\\\n$m_{\\tilde L 2}[\\rm{GeV}]$ & 530 & 530 & 570\\\\\n$m_{\\tilde L 3}[\\rm{GeV}]$ & 500 & 500 & 230\\\\\n$m_{\\tilde e 1}[\\rm{GeV}]$ & 520 & 520 & 520\\\\\n$m_{\\tilde e 2}[\\rm{GeV}]$ & 530 & 530 & 550\\\\\n$m_{\\tilde e 3}[\\rm{GeV}]$ & 500 & 300& 300\\\\\n$M_1[\\rm{GeV}]$ & 500 & 500 & 500\\\\\n$M_2[\\rm{GeV}]$ & 2$M_1$ & 700 &700\\\\\n$M_{H_d}[\\rm{GeV}]$ & 510 & 510 &510\\\\\n$M_{H_u}[\\rm{GeV}]$ & 510 & 510&510\\\\\n$M_{\\rm{S}}[\\rm{GeV}]$ & 1000 & 1000&1000\\\\\n\\hline\n$\\overline{m}_{\\tilde l}[\\rm{GeV}]$ &514 &486&456\\\\\n\\hline\n$x=m^2_{\\tilde \\gamma}\/m^2_{\\tilde l}$&\\multicolumn{3}{|c|}{$0.3$}\\\\ \n$|(\\delta^l_{LL})^E_{12}|$ &$4.3\\times 10^{-3}$ & \n$5.6\\times 10^{-3}$ &$1.4\\times 10^{-3}$\\\\\n$|(\\delta^l_{LL})^B_{12}|$ &\\multicolumn{2}{|c|}{$O(10^{-1})$}& $O(10^{-2})$\\\\ \n$|(\\delta^l_{LL})^E_{13}|$ &$1.7\\times 10^{-3}$ & $1.8\\times 10^{-3}$&$1.9\\times 10^{-2}$\\\\\n$|(\\delta^l_{LL})^E_{23}|$ &$5.7\\times 10^{-2}$ & $6.4\\times 10^{-2}$&$6.3\\times 10^{-1}$\\\\\n$|(\\delta^l_{LL})^B_{23}|$ &\\multicolumn{2}{|c|}{$O(10^{-1})$}& $O(10^{-1})$\\\\ \n\\hline\n \\end{tabular}\n \\caption{Estimation of the $|\\delta^l_{ij}|^E$ for Fit 3 in the non-minimal sugra example, and comparison with the observed bounds $|\\delta^l_{ij}|^B$ \\cite{Hisano:1995cp}-\\cite{Masina:2002mv}.}\n \\label{tbl:nonsugex}\n\\end{table}\nAs we can see from the results of Table (\\ref{tbl:nonsugex}), the estimation of $|(\\delta^l_{LL})^E_{ij}|$ is less dependent on the relation among the original soft mass terms $m^2_{\\tilde L i}$ than on the value taken for the average s-lepton mass, which indeed needs to be large.\nWe note that this is just an estimation of the conditions that $B(\\mu\\rightarrow e\\ \\gamma)$ imposes on the soft masses, without fully checking whether or not appropriate masses for all the MSSM parameters can be obtained. In the following we consider a numerical investigation in the minimal sugra case.\n\\subsubsection{Numerical Investigation of $B(\\mu\\rightarrow e\\ \\gamma)$ in minimal sugra}\n\\label{sec:numer-invest-fits}\nThe presence of right-handed neutrino fields leads to RG lepton flavour violation. Since the masses of the right-handed neutrinos are so small for the GST solutions, Fits 1-3, we attempted a numerical analysis for all of the fits of Section (\\ref{sec:fitsmasses}) using the same modified version of SOFTSUSY \\cite{Allanach:2001kg} as used in \\cite{King:2003kf}. \n\nIn order to get a good handle on this, we have embedded the flavour model fits into a string-inspired mSUGRA-type scenario, with no D-term contribution to the scalar masses. This scenario was chosen because it is expected to be the embedding with the lowest flavour violation. In the scenario,\n$A_0, m^2_0, M_{1\/2}$ are all related to the gravitino mass $m_{3\/2}$.\n\n As $n_1$ was only constrained to be between $-\\sigma\/2$ and $0$, we allow it to vary within this\nrange. 
We define the model at the GUT scale as:\n\\begin{equation}\n \\label{eq:23}\n m^2_0 = \\frac{1}{4} m_{3\/2}^2 \n \\;\\;,\\;\\;\n A^0 = \\sqrt{\\frac{3}{4}} m_{3\/2}\n \\;\\;,\\;\\;\n M_{1\/2} = \\sqrt{\\frac{3}{4}} m_{3\/2}.\n\\end{equation}\nThis setup of the soft parameters corresponds to benchmark point A in \\cite{King:2003kf}.\nThe results are as follows. For Fit 1 the code being used cannot generate any low energy data, so we do not find any safe $B(\\mu\\rightarrow e \\gamma)$ region using the conditions presented above.\nFit 2 has $\\mathrm{BR}({\\mu\\rightarrow e\\gamma}) \\leq 10^{-30}$, which is far below any experimental sensitivity; thus this fit is plausible within the context of the minimal sugra conditions that have been specified. \nThe smallness of the branching ratio for Fit 2 comes about because, with no RG running, this rate would be exactly zero in mSUGRA. The RG flavour violation will come from terms proportional to ${Y^\\nu}^\\dag Y^\\nu$, whose elements are tiny (the largest is $O(10^{-14})$).\n\nFit 3 generates a tachyonic s-electron for the full $(m_{3\/2}, n_1)$ range. This is not to say that this fit will always have a tachyonic s-electron in other, less trivial embeddings. \nFits 4 and 5 produce regions below and above the experimental limits on $B(\\mu\\rightarrow e\\gamma)$; the graphs for these fits appear in Figures (\\ref{fig:br_meg_fit_4a})-(\\ref{fig:br_meg_fit_5b}).\n\\begin{figure}[htbp]\n \\centering\n \\input{fit4a.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 4, with $\\left<\\Sigma\\right> = O(M_G)$. The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above.}\n \\label{fig:br_meg_fit_4a}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\input{fit4b.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 4, with $\\left<\\Sigma\\right> = O(M_{Pl})$. The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above.}\n \\label{fig:br_meg_fit_4b}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\input{fit5a.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 5, with $\\left<\\Sigma\\right> = O(M_G)$. The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above.}\n \\label{fig:br_meg_fit_5a}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\input{fit5b.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 5, with $\\left<\\Sigma\\right> = O(M_{Pl})$. The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above.}\n \\label{fig:br_meg_fit_5b}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nIn summary, we began our analysis\nby reviewing the Green-Schwarz (GS) conditions for anomaly\ncancellation for theories based on a \n$U(1)$ family symmetry. We then used these conditions\nto fix the charges of all the quark, lepton\nand Higgs fields and studied possibilities where the Higgs mass \n$\\mu$ term is either present or absent in the original\nsuperpotential. The solutions which we constructed do not necessarily require\nan underlying Grand Unified Theory (GUT) but may be\nconsistent with unification because of the GS conditions. 
\nRegardless of the presence of an explicit unified gauge group,\nthe explicit solutions can produce matrices of a form\nidentical to those that would be expected in \nan $SU(5)$ or Pati-Salam unified theory, for example.\n\nThe flavour structure of the resulting Yukawa matrices is \ncontrolled by the charges of the quarks and leptons under the \n$U(1)$ family symmetry gauge group.\nWe have determined the charges which are consistent with\nanomaly cancellation, and studied cases\nwhich can reproduce quark Yukawa matrices satisfying\nthe Gatto-Sartori-Tonin (GST) relation, as well as other cases\nwhich do not satisfy the GST relation. \nWe find the GST relation to be an\nappealing description of the value of the element $V_{us}$,\nand the GST relation provides a useful criterion \nfor classifying flavour models. \nIn our view, having the Cabibbo angle emerge automatically\nfrom a flavour model should have a similar status to gauge\ncoupling unification in a high scale model. \nHaving classified the solutions in terms of the GST condition,\nwe then further classify\nthe solutions according to which of them can produce the\nobserved mixings in the lepton sector, and which are consistent\nwith a sub-class of solutions based on the SRHND or sequential dominance\nscenario, with the further condition that the charges of the lepton\ndoublets for the second and third family are equal, $l_2=l_3$. \nWe find that the GST solutions combined with SRHND result in \nhighly fractional charges. \nOn the other hand, non-GST solutions with SRHND result in simpler\ncharges, and we have therefore studied both sorts of examples.\n\nWe have presented three numerical examples of solutions satisfying the\nGST relation and two examples of non-GST solutions in order to compare\nhow well these solutions fit the experimental information while\nmaintaining $O(1)$ coefficients. For the GST solutions, one of these\nexamples corresponds to a model that can be thought of as coming from an\nunderlying $SU(5)$ and for which a $\\mu$ term is allowed in the\nsuperpotential. It is well known that in this case, given the relation\n$Y^e=Y^{d \\ T}$, there should be a different Clebsch-Gordan coefficient\nin the charged lepton $(2,3)$ sector and in the $(2,3)$\nd-quark sector in order to produce appropriate mixings in the\ncontext of the $U(1)$ flavour symmetry and the GUT theory. Two other\nGST examples are presented for which the $\\mu$ term is not allowed and\nwhich are not consistent with an underlying $SU(5)$ or other GUT theory. In\nthese cases $Y^e\\neq (Y^d)^T$, but it is possible to maintain the\nrelation $m_\\tau\\approx m_b$, and in one of them the $O(1)$\ncoefficients of the underlying $U(1)$ theory alone can account for the\nappropriate mixings in the charged lepton and d-quark sector.\nThe non-GST cases also give a good description of masses and mixings,\nalthough in this case we need to rely on further coefficients,\npossibly Clebsch-Gordan coefficients from an underlying GUT, in order\nto achieve a good phenomenological description.\n \nFor the above examples we have provided detailed numerical fits \nof the $O(1)$ coefficients required to reproduce the observed\nmasses and mixings in both quark and lepton sectors.\nThe purpose of performing such fits\nis to compare how well the different models can fit the data, \nand to try to determine quantitatively the best possible model\ncorresponding to the best possible fit. 
\nAlthough in the cases just mentioned the solutions which fit the\ndata best are the solutions consistent with an underlying $SU(5)$ theory, the\nother two fits are quite plausible and represent interesting\npossibilities which cannot be excluded. \nSince all the models constructed\nare in good agreement with the observed fermion masses and mixings, we clearly \nneed further criteria in order to discriminate between the different\nclasses of $U(1)$ family symmetry models. \n\nOne may ask the more general question of whether family symmetries based on\nabelian or non-abelian gauge groups are generically preferred.\nIn order to address this question, we have extended the fit to include\na generic symmetric form of quark and lepton mass matrices that can be\nunderstood in the context of a theory based on $SU(3)$ family\nsymmetry. We have found that overall the generic $SU(3)$ family\nsymmetry produces Yukawa matrices which tend to fit the data better, \nalthough the effect is not decisive, and \none cannot draw a strong conclusion based solely on fits to\nfermion masses and mixings (or the way they can be reproduced). \nWe have therefore enumerated \nsome other possible criteria that are important in order to\nfurther discriminate among different flavour theories. \nIncluding the effects from the supersymmetric sector provides \nan additional way to discriminate among different\ntheories, based on their different predictions for \nsoft masses and the resulting flavour changing processes and CP violation. \nWe have presented two frameworks in which\nthese processes can be studied in the context of flavour theories. The\nfirst is a non-minimal sugra scenario where family symmetries\nmay render the K\\\"ahler metric diagonal at the flavour symmetry breaking\nscale, with off-diagonal elements arising only due to RG contributions and\nthe non-degeneracy of soft masses. The second framework is a minimal sugra\nscenario for which a numerical exploration of $\\mu\\rightarrow e\\ \\gamma$\nwas performed. The results of this analysis\nshow marked differences between the different models presented. Of the\nGST cases only one survives the test of $B(\\mu\\rightarrow e\\ \\gamma)$,\nwhile for all of the non-GST cases presented there exist regions\ncompatible with the $B(\\mu\\rightarrow e\\ \\gamma)$ experimental limit.\n\nIn conclusion, \nat the present time phenomenological analyses provide some guidance\nabout which family symmetry approaches may be valid, but do not yet allow\none to draw any firm conclusion. More specific assumptions or data in\nthe supersymmetric sector are needed in order to further discriminate \nbetween classes of models based on different family symmetry, \nunification or GST criteria.\n\n\n\n\\section*{Acknowledgments}\nL. V-S. would like to thank the School of Physics and Astronomy at the University of Southampton for its hospitality during a visit last year. S.K. would like to thank the MCTP for its hospitality during August 2004 when this work was under development. The work of G. K. and L. V-S. is supported by the U.S. Department of Energy.\n\n\n\\newpage\n