diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzemjn" "b/data_all_eng_slimpj/shuffled/split2/finalzzemjn" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzemjn" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec: intro}\nIn the physical sciences, simple mathematical models are often used as a means of developing intuition and capturing phenomenology for complex systems. Whilst more complex, faithful models might require computational methods or advanced analytical techniques to study them, simple models are commonly amenable to succinct pen-and-paper analysis, and can sometimes yield dynamical predictions that agree with those of more intricate representations, at least qualitatively. This minimalistic approach is readily exemplified in the study of microswimmers, where minimal models of complex, shape deforming swimmers on the microscale are used in a range of settings, from back-of-the-envelope calculation and undergraduate teaching through to state-of-the-art, application-driven research contributions \\cite{Zottl2012,Thery2020,Gutierrez-Ramos2018,Qi2022,Spagnolie2015,Elgeti2016,Malgaretti2017,Mathijssen2016,Lauga2020,Omori2022,Lauga2009}.\n\nA common, if not ubiquitous, hydrodynamic representation of a microswimmer is the \\emph{force dipole} model, wherein the flow field generated by a swimmer is taken to correspond simply to that of a force dipole. This approximate representation is valid in the far field of a microswimmer, with force-free swimming conditions being appropriate in the inertia-free limit of low-Reynolds-number swimming that applies to many microswimmers, including the well-studied spermatozoa and the breaststroke-swimming algae \\textit{Chlamydomonas reinhardtii}. Invariably, the force dipole is assumed to be aligned along a swimmer-fixed axis and taken to be of constant signed strength. These parameters can be estimated from experimental measurements and hydrodynamic simulations \\cite{Klindt2015,Ishimoto2020}, and typically involve averaging out rapid temporal variations that can be present in biological swimmers. This leads to a minimal model of microswimming: instead of studying a complex, shape-deforming swimmer and the associated time-varying flow field, we can instead consider the motion of a constant-strength force dipole in the same environment. With this approximation, one can often derive surrogate equations of motion for the swimmer \\cite{Kim2005,Lauga2009}, which can then be analysed with ease, at least when compared to models that capture the intricate, time-dependent details of the microswimmer and the flow that it generates.\n\nThough many of the assumptions associated with this modelling approach are well understood, such as the limitations of the far-field approximation when studying near-field interactions, the impact of assuming a constant dipole strength is less clear. More generally, the impact of adopting constant, \\textit{a priori}-averaged parameters in simple models of temporally evolving microswimmers has not been thoroughly investigated, to the best of our knowledge. However, it is clear that employing rapidly varying parameters can have significant consequences for the predictions of simple models. For example, the work of \\citet{Omori2022} recently explored a simple model of shape-changing swimmers, explicitly including rapid variation in the parameters that describe the swimmer shape and its speed of self propulsion. 
The subsequent analysis of \citet{Walker2022a} highlighted, amongst other observations, that the fast variation in the parameters was key to the behavioural predictions of the model, which were found to align with the experimental observations of \citet{Omori2022}. Hence, the study of the effects of employing time-dependent parameters in even simple models, in comparison to adopting constant, averaged parameters, is warranted.

Thus, the primary aim of this study will be to explore minimal models of microswimming, incorporating fast variation in model parameters. To do this, motivated by a number of recent works in a similar vein \cite{Walker2022,Walker2022a,Gaffney2022}, we will employ a multiple scales asymptotic analysis \cite{Bender1999} to systematically derive effective governing equations from non-autonomous models. In particular, we will incorporate the effects of rapid variation by exploiting the separation of timescales often associated with microswimming, yielding leading-order autonomous dynamical systems. In what follows, we will focus on two example scenarios: the interaction of a dipole swimmer with a boundary, and the angular dynamics of two hydrodynamically interacting dipoles. Through our analysis, which will be simple, if not elementary, we will compare the dynamics of multi-timescale models with the predictions of the simplest, constant-parameter models, seeking to ascertain both if qualitative differences arise and if they can be systematically corrected for by informed parameter choices.

\section{A dipole near a no-slip boundary}\label{sec: no slip}
\subsection{Model equations}
Consider a swimmer moving in a half space that is bounded by an infinite plane, with the swimmer moving in a plane perpendicular to the boundary. We parameterise the orientation of the swimmer by the angle $\theta$ between a swimmer-fixed director $\d$ and the boundary normal, and parameterise its position by the distance $h$ from its centre to the boundary, as illustrated in \cref{fig: no slip: setup}. With all quantities dimensionless, a minimal model for the swimmer dynamics is presented in part by \citet{Lauga2020} as
\begin{subequations}\label{eq: no slip: original system}
\begin{align}
    \diff{h}{t} &= \frac{3p}{16h^2}\left(1 + 3\cos{2\theta}\right) + u \cos{\theta}\,,\\
    \diff{\theta}{t} &= \frac{3p\sin{2\theta}}{64h^3}\left[4 + B(3 + \cos{2\theta})\right]\,,\label{eq: no slip: original system: angular}
\end{align}
\end{subequations}
where $u$ is the speed of self-propulsion and we have shifted \citeauthor{Lauga2020}'s definition of $\theta$ by $\pi/2$. In this minimal model, the flow generated by the swimmer in the absence of the boundary is assumed to be purely that of a force dipole with vector strength $\vec{p}=p\d$, aligned along the body-fixed director that defines $\theta$, and the swimmer shape is captured only through the Bretherton parameter $B$ \cite{Bretherton1962}. This modelling approach can be justified by considering a far-field limit of a swimmer, though here we focus on analysing the model of \cref{eq: no slip: original system} rather than on its origin and motivation. In particular, we focus on the angular dynamics contained within \cref{eq: no slip: original system: angular}.

The standard approach to modelling this system would be to assume that $p$, $u$, and $B$ are constant in time, as is the case in the textbook of \citet{Lauga2020}.
This can be interpreted as averaging away any time dependence of the three parameters, which one would generically expect to be present for a multitude of shape-changing microswimmers, for instance. Here, we do not perform this \\emph{\\textit{a priori}} averaging of the parameters, and will instead suppose that $p$, $u$, and $B$ are indeed functions of time. In particular, we suppose that $p=p(\\omega t)$, $u=u(\\omega t)$, and $B=B(\\omega t)$ are periodic functions of $\\omega t$, where $\\omega\\gg1$ is a large dimensionless frequency of oscillation and we assume that $p$, $u$, and $B$ share a period, in line with the rapid shape changes undergone by many microswimmers. For later convenience, we make the additional assumption that the average of $p$ over a period is non-zero, and will impose the minimal restriction that $B\\in(-1,1)$, which holds for all but the most elongated of objects \\cite{Bretherton1962}. Hence, we study the non-autonomous system\n\\begin{subequations}\\label{eq: no slip: full system}\n\\begin{align}\n \\diff{h}{t} &= \\frac{3p(\\omega t)}{16h^2}\\left(1 + 3\\cos{2\\theta}\\right) + u(\\omega t) \\cos{\\theta}\\,,\\\\\n \\diff{\\theta}{t} &= \\frac{3p(\\omega t)\\sin{2\\theta}}{64h^3}\\left[4 + B(\\omega t)(3 + \\cos{2\\theta})\\right]\\,,\n\\end{align}\n\\end{subequations}\nwith $\\omega\\gg1$ and all other quantities being $\\bigO{1}$ as $\\omega\\to\\infty$. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figs\/no_slip_setup\/no_slip_setup.png}\n \\caption{Geometry and parameterisation of a minimal model of a swimmer above a no-slip boundary. The swimmer is parameterised by the angle $\\theta$ between the vector dipole strength $\\vec{p}$ and the separation $h$ of its centroid from the boundary. The velocity due to self-propulsion is assumed to be along the direction of $\\vec{p}$, shared with swimmer-fixed director $\\d$.}\n \\label{fig: no slip: setup}\n\\end{figure}\n\n\\subsection{Multi-scale analysis}\nWe will attempt to exploit the large frequency $\\omega \\gg 1$ in order to make progress in analysing the non-autonomous system of \\cref{eq: no slip: full system}, employing the method of multiple scales \\cite{Bender1999}. Following this approach, we introduce the fast timescale $T \\coloneqq \\omega t$, so that $p = p(T)$ etc., and formally treat $t$ and $T$ as independent. The proper time derivative $\\mathrm{d}\/\\mathrm{d}t$ accordingly transforms as\n\\begin{equation}\\label{eq: no slip: time transform}\n \\diff{}{t} \\mapsto \\pdiff{}{t} + \\omega \\pdiff{}{T}\\,,\n\\end{equation}\ntransforming our non-autonomous system of ordinary differential equations (ODEs) into a system of partial differential equations (PDEs). We now seek asymptotic expansions of $h$ and $\\theta$ in inverse powers of $\\omega$, which we write as\n\\begin{equation}\n h \\sim h_0(t,T) + \\frac{1}{\\omega}h_1(t,T) + \\cdots\\,, \\quad \\theta \\sim \\theta_0(t,T) + \\frac{1}{\\omega}\\theta_1(t,T) + \\cdots\\,.\n\\end{equation}\nTransforming \\cref{eq: no slip: full system} via \\cref{eq: no slip: time transform} and inserting these asymptotic expansions gives the $\\bigO{\\omega}$ balance simply as\n\\begin{equation}\n \\pdiff{h_0}{T} = 0\\,, \\quad \\pdiff{\\theta_0}{T} = 0 \\quad \\implies \\quad h_0 = h_0(t)\\,, \\quad \\theta_0 = \\theta_0(t)\\,,\n\\end{equation}\nso that the leading order solutions are independent of the fast timescale $T$. 
This should be expected, as the forcing of the system is strictly $\\bigO{1}$, so that the dominant contribution to the evolution occurs on the long timescale $t$.\n\nAt the next asymptotic order we pick up the $\\bigO{1}$ forcing, and we have\n\\begin{subequations}\\label{eq: no slip: order unity system}\n\\begin{align}\n \\diff{h_0}{t} + \\pdiff{h_1}{T} &= \\frac{3p(T)}{16h_0^2}(1 + 3\\cos{2\\theta_0}) + u(T)\\cos{\\theta_0}\\,,\\\\\n \\diff{\\theta_0}{t} + \\pdiff{\\theta_1}{T} &= \\frac{3p(T)\\sin{2\\theta_0}}{64h_0^3}\\left[4 + B(T)(3 + \\cos{2\\theta_0})\\right]\\,,\n\\end{align}\n\\end{subequations}\nwriting $t$-derivatives of $h_0$ and $\\theta_0$ as proper due to their established independence from $T$. The appropriate solvability conditions for this first-order system are obtained by averaging the equations over a period in $T$ and imposing periodicity in $T$, equivalent to the Fredholm Alternative Theorem for this system \\cite{Bender1999}. To do so, we assume, without loss of generality, that the period of the fast oscillations is $2\\pi$, defining the averaging operator $\\avg{\\cdot}$ via\n\\begin{equation}\\label{eq: no slip: averaging operator}\n \\avg{a} = \\frac{1}{2\\pi}\\int_0^{2\\pi}a(T)\\mathop{}\\!\\mathrm{d}{T}\\,.\n\\end{equation}\nComputing the average of \\cref{eq: no slip: order unity system}, we arrive at\n\\begin{subequations}\\label{eq: no slip: systematic model}\n\\begin{align}\n \\diff{h_0}{t} &= \\frac{3\\avg{p}}{16h_0^2}(1 + 3\\cos{2\\theta_0}) + \\avg{u}\\cos{\\theta_0}\\,,\\\\\n \\diff{\\theta_0}{t} &= \\frac{3\\avg{p}\\sin{2\\theta_0}}{64h_0^3}\\left[4 + \\frac{\\avg{pB}}{\\avg{p}}(3 + \\cos{2\\theta_0})\\right]\\,,\n\\end{align}\n\\end{subequations}\nwith the imposed periodicity eliminating the fast-time derivatives. Comparing these leading order differential equations with those of \\cref{eq: no slip: full system}, we see that we have essentially replaced the parameters $p$, $u$, and $B$ with the effective parameters $\\avg{p}$, $\\avg{u}$, and $\\avg{pB}\/\\avg{p}$, the precise forms of which have arisen through our brief, systematic analysis. Whilst the modifications to $p$ and $u$ are as might be naively expected, the effective shape constant $\\avg{pB}\/\\avg{p}$ is perhaps less obvious at first glance, with one perhaps expecting the average parameter $\\avg{B}$. Indeed, with these authors having previously been guilty of employing such parameters in back-of-the-envelope calculations, we refer to this model as the \\emph{\\textit{a priori}-averaged model}, given explicitly by \n\\begin{subequations}\\label{eq: no slip: a priori model}\n\\begin{align}\n \\diff{h^a}{t} &= \\frac{3\\avg{p}}{16h^2}\\left(1 + 3\\cos{2\\theta}\\right) + \\avg{u} \\cos{\\theta}\\,,\\\\\n \\diff{\\theta^a}{t} &= \\frac{3\\avg{p}\\sin{2\\theta}}{64h^3}\\left[4 + \\avg{B}(3 + \\cos{2\\theta})\\right]\\,,\n\\end{align}\n\\end{subequations}\nusing a superscript of $a$ to denote the solutions of the \\textit{a priori}-averaged system. 
This model makes use of the averaged parameters $\\avg{p}$, $\\avg{u}$, and $\\avg{B}$ in place of the rapidly oscillating quantities.\n\nThough an elementary observation, it is worth highlighting that, without any additional assumptions on $p(T)$ and $B(T)$, it is in general not the case that $\\avg{pB}\/\\avg{p} = \\avg{B}$, so that we should expect to observe differences between the systematically determined, leading-order dynamics of \\cref{eq: no slip: systematic model} and those of the \\textit{a priori}-averaged model of \\cref{eq: no slip: a priori model}. In what follows, through a brief consideration of the angular evolution equations, we will highlight how these differences can be more than simply quantitative.\n\nFocussing on the angular dynamics, we specifically consider the abstracted scalar autonomous ODE\n\\begin{equation}\\label{eq: no slip: scalar ode}\n \\diff{x}{t} = f(x;\\alpha,\\beta)\\,,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq: no slip: f}\n f(x;\\alpha,\\beta) \\coloneqq \\frac{3\\alpha\\sin{2x}}{64h^3}\\left[4 + \\beta(3 + \\cos{2x})\\right]\\,,\n\\end{equation}\nso that $(\\alpha,\\beta) = (\\avg{p},\\avg{B})$ corresponds to the \\textit{a priori}-averaged model, whilst $(\\alpha,\\beta) = (\\avg{p},\\avg{pB}\/\\avg{p})$ gives the leading order, systematically averaged dynamics. Note that, for the purposes of a stability analysis of the angular dynamics, we can treat the swimmer separation from the boundary as a positive parameter, abusing notation and generically writing $h$ in the denominator of \\cref{eq: no slip: f}, without materially modifying a steady state analysis of the angular dynamics.\n\n\\subsection{Exploring the autonomous dynamics}\nThe fixed points of \\cref{eq: no slip: scalar ode} are readily seen to be $x = n\\pi$, $x=\\pi\/2 + n\\pi$, and solutions of $4 + \\beta(3 + \\cos{2x})=0$ (if they exist), for $n\\in\\mathbb{Z}$. Notably, if $\\beta\\in(-1,1)$, as is the case in the \\textit{a priori}-averaged model, then the only steady states are at integer multiples of $\\pi\/2$. Focussing on these steady states, their linear stability is given by\n\\begin{equation}\\label{eq: no slip: simple steady states}\n x = \\left\\{\\begin{array}{lr}\n n\\pi & \\text{ is stable } \\iff\\alpha(1+\\beta) < 0\\,,\\\\\n \\pi\/2 + n\\pi & \\text{ is stable } \\iff\\alpha(2+\\beta) > 0\\,.\\\\\n \\end{array}\\right.\n\\end{equation}\nHence, for $\\beta\\in(-1,1)$, the stability of the steady states is determined solely by the sign of $\\alpha$, with $\\alpha>0$ giving rise to unstable states at $x=n\\pi$ and stable states at $x=\\pi\/2+n\\pi$. 
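Though elementary, these stability criteria are straightforward to verify numerically. The short Python sketch below is our own illustrative code (assuming only NumPy; the function names are ours and not drawn from any cited work), estimating the sign of $\mathrm{d}f/\mathrm{d}x$ at the simple steady states by central differencing:
\begin{verbatim}
# Numerical check of the stability conditions at x = 0 and x = pi/2.
# Illustrative sketch only; h > 0 is held fixed as a parameter.
import numpy as np

def f(x, alpha, beta, h=1.0):
    # Right-hand side of the scalar angular ODE.
    return (3.0 * alpha * np.sin(2.0 * x) / (64.0 * h**3)
            * (4.0 + beta * (3.0 + np.cos(2.0 * x))))

def stable(x_star, alpha, beta, eps=1.0e-6):
    # Linear stability: df/dx < 0 at the fixed point x_star.
    dfdx = (f(x_star + eps, alpha, beta)
            - f(x_star - eps, alpha, beta)) / (2.0 * eps)
    return dfdx < 0.0

for alpha, beta in [(1.0, 0.5), (-1.0, 0.5), (1.0, -1.5)]:
    print(alpha, beta, stable(0.0, alpha, beta),
          stable(np.pi / 2.0, alpha, beta))
\end{verbatim}
The first two parameter pairs recover the pattern just stated, with the sign of $\alpha$ alone setting the attracting configuration for $\beta\in(-1,1)$; the third anticipates the bistability discussed below for $\beta\in(-2,-1)$.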
Identifying $\alpha$ with the signed dipole strength, this is precisely in line with the classical analysis of pusher and puller swimmers via the \textit{a priori}-averaged model, as summarised by \citet{Lauga2009}, with pushers and pullers corresponding to $\alpha > 0$ and $\alpha < 0$, respectively.

\begin{figure}
    \centering
    \begin{overpic}[permil,width=0.7\textwidth]{figs/no_slip_dynamics/no_slip_dynamics.png}
    \put(100,140){$\beta<-2$}
    \put(425,140){$-2<\beta<-1$}
    \put(810,140){$\beta>-1$}
    \put(0,300){(a)}
    \put(350,300){(b)}
    \put(700,300){(c)}
    \put(115,300){$\theta=0$}
    \put(-30,140){$\frac{\pi}{2}$}
    \put(136,-27){$\pi$}
    \put(293,140){$\frac{3\pi}{2}$}
    \end{overpic}
    \caption{Steady states and stability of the angular dynamics for the autonomous system of \cref{eq: no slip: scalar ode}, shown as dynamics on a circle, for $\alpha>0$ fixed and for various values of $\beta$. Swimming parallel to the boundary corresponds to states with $\theta=\pi/2$, $3\pi/2$, whilst $\theta=0$, $\pi$ corresponds to swimming aligned with the normal to the boundary. (a) With $\beta < -2$, the system evolves to a steady state with $x=n\pi$, $n\in\mathbb{Z}$. (b) For $-2<\beta<-1$, $x=n\pi/2$ are stable for all $n\in\mathbb{Z}$, with unstable configurations present between these attractors. (c) For $\beta > -1$, the system evolves to a steady state with $x=\pi/2 + n\pi$, for $n\in\mathbb{Z}$. Stable states are shown as solid points, whilst unstable points are shown hollow. Stabilities for $\alpha<0$ are obtained by reversing the illustrated dynamics.}
    \label{fig: no slip: dynamics}
\end{figure}

However, if $\beta<-1$, the profile of stability can change significantly. Bifurcations at $\beta=-1$ and $\beta=-2$ see the creation and destruction of additional steady states (the solutions of $4 + \beta(3 + \cos{2x})=0$), accompanied by changes in stability of the steady states at $x=n\pi$ and $x=\pi/2 + n\pi$, respectively. When they exist, the additional states have the opposite stability to the other steady states, so that they are stable for $\alpha<0$. For $\beta<-2$, the equation defining the additional steady states admits no real solutions, so that these steady states cease to exist and the angular equilibria are the same as for $\beta>-1$, though with opposite linear stabilities. Each of these dynamical regimes is illustrated in \cref{fig: no slip: dynamics} for $\alpha>0$, highlighting a strong dependence of the dynamics on $\beta$. The linear stability of each state is flipped upon taking $\alpha<0$.


\subsection{Comparing the emergent dynamics}
The elementary analysis of the previous section highlights how qualitative changes in the globally attractive behaviour of the model depend strongly on the parameter $\beta$. However, the predictions of the \textit{a priori}-averaged model are simple: if the swimmer is a pusher, with $\alpha=\avg{p}>0$, then the states $\theta^a=n\pi$ are unstable, and the swimmer instead evolves to a state where $\theta^a = \pi/2 + n\pi$ and swims parallel to the boundary. If the swimmer is a puller, with $\alpha=\avg{p}<0$, then the swimmer instead evolves to $\theta^a=n\pi$, thereafter moving perpendicular to the boundary.

However, the predictions of the systematically averaged model are more complex.
Whilst switching the sign of $\\avg{p}$ still switches the stability of each steady state, the value of $\\beta = \\avg{pB}\/\\avg{p}$, which need not be smaller than unity in magnitude, can materially alter both the steady states in existence and the stability of the states that correspond to parallel and perpendicular swimming. For instance, if $\\alpha=\\avg{p}>0$ and $\\beta\\in(-2,-1)$, both the $\\theta_0=n\\pi$ and the $\\theta_0=\\pi\/2+n\\pi$ states are linearly stable, accompanied by four steady states in the range $\\theta_0\\in(0,2\\pi)$ that are unstable; for $\\alpha=\\avg{p}<0$, the stability of each state is swapped. Hence, swimmers with $\\avg{p}<0$ and $\\avg{pB}\/\\avg{p}\\in(-2,-1)$ will evolve to a steady state that is not a multiple of $\\pi\/2$; in other words, they will neither align parallel nor perpendicular to the boundary, a behaviour that is never predicted by the \\textit{a priori}-averaged model in any admissible parameter regime.\n\nAs an explicit illustration of how the two models can qualitatively differ, we take $p(T) = 4A\\sin{T} + 1$ and $B(T) = \\sin{(T)}\/2$, so that $\\avg{p} = 1$, $\\avg{B}=0$, and $\\avg{pB}\/\\avg{p} = A$. The \\textit{a priori}-averaged model predicts the dynamics shown in \\cref{fig: no slip: dynamics}c for all values of $A$, whilst the systematically averaged dynamics follow \\cref{fig: no slip: dynamics}b for $A\\in(-2,-1)$ and \\cref{fig: no slip: dynamics}a for $A<-2$. Fixing $A=-3\/2$, the temporal evolution of both of these models is shown in \\cref{fig: no slip: example}, along with a numerical solution to angular dynamics of the full system of \\cref{eq: no slip: full system}. Here, we have fixed $h>0$ as a parameter, recalling that the swimmer separation serves only to modify the rate at which the angular dynamics approach a steady state. In agreement with our analysis, the \\textit{a priori}-averaged model incorrectly predicts the qualitative evolution of the model swimmer, whilst the leading-order, systematically averaged model is in agreement with the full numerical solution.\n\nDespite these general qualitative differences, it should be noted that there are parameter regimes in which the dynamics are qualitatively indistinct between the models. For instance, suppose that we are in a regime where $\\beta=\\avg{pB}\/\\avg{p}\\not\\in(-2,-1)$, so that the only steady states are those given in \\cref{eq: no slip: simple steady states}. Then, if $p(T)$ is of fixed sign for all $T$, so that the swimmer is unambiguously a pusher or a puller, it is simple to note that $\\avg{p} + \\avg{pB} =\\int_0^{2\\pi}p(T)[1+B(T)]\\mathop{}\\!\\mathrm{d}{T}\/2\\pi$ has the same sign as $\\avg{p}$, recalling that $B\\in(-1,1)$. Similarly, $2\\avg{p} + \\avg{pB}$ has the same sign as $\\avg{p}$, so that the long-time behaviour is completely determined by the sign of $\\avg{p}$, following \\cref{eq: no slip: simple steady states}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figs\/no_slip_example\/no_slip_example.png}\n \\caption{Angular evolution of a swimmer at fixed separation from a no-slip boundary. The prediction of the \\textit{a priori}-averaged model can be seen to not align with the prediction of the systematically averaged model of \\cref{eq: no slip: systematic model} or the dynamics of the full model of \\cref{eq: no slip: original system}, with qualitatively distinct steady configurations from the same initial condition. 
Rapid oscillations in the solution of the full dynamics are not visible at the resolution of this plot. Here, we have fixed $h=1$, $\omega = 100$, and taken $p(T) = -6\sin{(T)} + 1$ and $B(T) = \sin{(T)}/2$.}
    \label{fig: no slip: example}
\end{figure}

\section{Interacting dipole rollers}\label{sec: rollers}
\subsection{Model equations}\label{sec: rollers: model eqs}
Consider a pair of particles in the plane that are pinned in place in the laboratory frame, so that they are free to rotate in the plane but unable to translate. The particles, which we term \emph{rollers}, are assumed to interact through dipolar flow fields, with vector dipole strengths $\vec{p}_1$ and $\vec{p}_2$, respectively, which are both assumed to lie in the plane containing the particles. Taking $\{\e_x,\e_y\}$ to be an orthogonal basis for the laboratory frame that spans this plane, we define the orientation of the $i$\textsuperscript{th} particle, $i\in\{1,2\}$, via the angle $\theta_i$ between a body-fixed axis and $\e_x$, so that the direction of the roller can be captured as $\d_i = \cos{\theta_i}\e_x + \sin{\theta_i}\e_y$. Analogously to the previous section, we assume that the vector dipole strength of each particle is aligned with $\d_i$, so that we can write $\vec{p}_i = p_i\d_i$, where $p_i$ is the scalar dipolar strength and may take any sign. We further assume that $p_1=p_2$, so that the dipoles are of equal strength, though we remark that retaining generality is straightforward but notationally cumbersome.
\begin{figure}
    \centering
    \includegraphics[width=0.4\textwidth]{figs/roller_setup/roller_setup.png}
    \caption{Geometry of fixed-position dipole rollers. Each roller is pinned in place at its centre, but is free to rotate, driven by the dipolar flow generated by the other. Their orientation is captured by their directors $\d_1$ and $\d_2$, parameterised by $\theta_1$ and $\theta_2$, respectively. The axis of each roller's dipolar flow field is assumed to be directed along their orientation vector.}
    \label{fig: roller setup}
\end{figure}
Without loss of generality, we assume that the displacement of particle $1$ from particle $2$ is $r\e_x$, where $r>0$ is the distance between them. In this setting, illustrated in \cref{fig: roller setup} and following \citet[p. 295]{Lauga2020}, the effects of each particle's dipolar flow on the orientation of the other particle are captured by the following dimensionless coupled system of ODEs:
\begin{subequations}\label{eq: rollers: full system}
\begin{align}
    \diff{\theta_1}{t} &= f(\theta_1,\theta_2;p,B)\,,\\
    \diff{\theta_2}{t} &= f(\theta_2,\theta_1;p,B)\,,
\end{align}
\end{subequations}
where
\begin{equation}\label{eq: rollers: def f}
    f(x,y;\alpha,\beta) \coloneqq -\frac{3\alpha}{r^3}\left(\sin{y}\cos{y} + \beta\left[\sin{x}\cos{x}\left(1 - 5\cos^2{y}\right) + \cos{y}\sin{(2x-y)}\right]\right)\,,
\end{equation}
distinct from the notation of the previous section. Here, $B$ is once again the shape-capturing parameter of \citet{Bretherton1962}, and we assume that both of the particles are associated with the same shape parameter. This assumption can be relaxed at the expense of notational convenience.

As with the previous model, the standard approach would be to assume that $p$ and $B$ are constant, so that the above system is autonomous.
Here, we take $p=p(T)$ and $B=B(T)$, interpreting the constant-parameter model as the \\textit{a priori}-averaged model, as above. Hence, we study the non-autonomous system\n\\begin{subequations}\\label{eq: rollers: non-autonomous system}\n\\begin{align}\n \\diff{\\theta_1}{t} &= f(\\theta_1,\\theta_2;p(\\omega t),B(\\omega t)) \\,,\\\\\n \\diff{\\theta_2}{t} &= f(\\theta_2,\\theta_1;p(\\omega t), B(\\omega t))\\,,\n\\end{align}\n\\end{subequations}\nwith $\\omega\\gg1$ and all other quantities being $\\bigO{1}$ as $\\omega\\to\\infty$. It should be noted that, whilst we study this system as a simple extension of an established model, these equations can be rigorously derived by considering the dynamics of shape-changing spheroids, subject to the far-field approximation and the assumption that their instantaneous deformation is both force- and torque-free. This derivation is somewhat elementary and follows the approach given in \\citet{Gaffney2022b}, from which this model can also be extended to accommodate more general deformations through the addition of uncomplicated terms. Here, seeking simplicity and clarity, we pursue the model given in \\cref{eq: rollers: non-autonomous system}.\n\n\\subsection{Multi-scale analysis}\nThe multi-scale analysis of this problem proceeds entirely analogously to that of the previous section, but is reproduced here in detail in the interest of clarity. As in \\cref{sec: no slip}, we formally introduce the fast timescale $T=\\omega t$, transforming the proper time derivative as\n\\begin{equation}\\label{eq: rollers: time transform}\n \\diff{}{t} \\mapsto \\pdiff{}{t} + \\omega \\pdiff{}{T}\\,.\n\\end{equation}\nWe seek an asymptotic expansion of $\\theta_1$ and $\\theta_2$ in inverse powers of $\\omega$, which we write as\n\\begin{equation}\\label{eq: rollers: expansions}\n \\theta_1 \\sim \\theta_{1,0}(t,T) + \\frac{1}{\\omega}\\theta_{1,1}(t,T) + \\cdots\\,, \\quad \\theta_2 \\sim \\theta_{2,0}(t,T) + \\frac{1}{\\omega}\\theta_{2,1}(t,T) + \\cdots\\,.\n\\end{equation}\nTransforming \\cref{eq: rollers: non-autonomous system} via \\cref{eq: rollers: time transform} and inserting the expansions of \\cref{eq: rollers: expansions}, at $\\bigO{\\omega}$ we have\n\\begin{equation}\n \\pdiff{\\theta_{1,0}}{T} = 0\\,, \\quad \\pdiff{\\theta_{2,0}}{T} = 0 \\quad \\implies \\quad \\theta_{1,0} = \\theta_{1,0}(t)\\,, \\quad \\theta_{2,0} = \\theta_{2,0}(t)\\,,\n\\end{equation}\nso that the leading order solutions for $\\theta_1$ and $\\theta_2$ are independent of the fast timescale and, thus, are functions only of $t$. As in \\cref{sec: no slip}, this arises due to the forcing of the ODEs being $\\bigO{1}$, so that the forcing doesn't contribute at $\\bigO{\\omega}$. At $\\bigO{1}$, we have\n\\begin{subequations}\\label{eq: rollers: first-order system}\n\\begin{align}\n \\diff{\\theta_{1,0}}{t} + \\pdiff{\\theta_{1,1}}{T} &= f(\\theta_{1,0},\\theta_{2,0};p(T), B(T)) \\,,\\label{eq: rollers: first-order system: theta1}\\\\\n \\diff{\\theta_{2,0}}{t} + \\pdiff{\\theta_{2,1}}{T} &= f(\\theta_{2,0},\\theta_{1,0};p(T), B(T))\\,,\\label{eq: rollers: first-order system: theta2}\n\\end{align}\n\\end{subequations}\nusing the fact that $\\theta_{1,0}$ and $\\theta_{2,0}$ are independent of $T$ to write their time derivatives as total derivatives. As in \\cref{sec: no slip}, imposing periodicity and averaging over a period in $T$ closes the system of PDEs at this order. 
Without loss of generality, we assume that the period of oscillations of $p$ and $B$ is $2\\pi$, and we recall the averaging operator $\\avg{\\cdot}$ from \\cref{eq: no slip: averaging operator} as\n\\begin{equation}\\label{eq: rollers: avg}\n \\avg{a} = \\frac{1}{2\\pi}\\int_0^{2\\pi} a(T)\\mathop{}\\!\\mathrm{d}{T}\\,.\n\\end{equation}\nTo compute the average of \\cref{eq: rollers: first-order system} in $T$, it is instructive to consider the dependence of $f$ on its arguments explicitly. To that end, we explicitly compute the average of \\cref{eq: rollers: first-order system: theta1} as\n\\begin{equation}\\label{eq: rollers: explicit average}\n \\diff{\\theta_{1,0}}{t} = -\\frac{3}{r^3}\\left(\\avg{p}\\sin{\\theta_{2,0}}\\cos{\\theta_{2,0}} + \\avg{pB}\\left[\\sin{\\theta_{1,0}}\\cos{\\theta_{1,0}}\\left(1 - 5\\cos^2{\\theta_{2,0}}\\right) + \\cos{\\theta_{2,0}}\\sin{(2\\theta_{1,0}-\\theta_{2,0})}\\right] \\right)\n\\end{equation}\nwith the average of \\cref{eq: rollers: first-order system: theta2} following similarly. Comparing the right-hand side of \\cref{eq: rollers: explicit average} with the definition of $f$ in \\cref{eq: rollers: def f}, we identify the systematically averaged governing equations with those of the original system:\n\\begin{subequations}\\label{eq: rollers: effective eqs}\n\\begin{align}\n \\diff{\\theta_{1,0}}{t} & = f(\\theta_{1,0}, \\theta_{2,0}; \\avg{p}, \\avg{pB} \/ \\avg{p})\\,,\\\\\n \\diff{\\theta_{2,0}}{t} & = f(\\theta_{2,0},\\theta_{1,0}; \\avg{p}, \\avg{pB} \/ \\avg{p})\\,.\n\\end{align}\n\\end{subequations}\nHence, in order to understand the leading order behaviour of the non-autonomous system of \\cref{eq: rollers: non-autonomous system}, it is sufficient to explore the autonomous system\n\\begin{subequations}\\label{eq: rollers: abstract autonomous system}\n\\begin{align}\n \\diff{x}{t} &= f(x,y;\\alpha,\\beta)\\,,\\\\\n \\diff{y}{t} &= f(y,x;\\alpha,\\beta)\\,,\n\\end{align}\n\\end{subequations}\nidentifying $x$ and $y$ with the leading order solutions for $\\theta_1$ and $\\theta_2$, respectively. \n\nAs noted in \\cref{sec: rollers: model eqs}, the commonplace model presented by \\citet{Lauga2020} makes use of constant parameters in place of $p$ and $B$ in \\cref{eq: rollers: full system}, which we interpret as the \\textit{a priori} averages of $p(T)$ and $B(T)$ for flow-generating particles. In symbols, when interpreted in this way, the model of \\citet{Lauga2020} is equivalent to taking $(\\alpha,\\beta) = (\\avg{p},\\avg{B})$ in \\cref{eq: rollers: abstract autonomous system}. In line with having taken these model-agnostic averages of the parameters, we refer to this model as the \\emph{\\textit{a priori}-averaged model}, which we state explicitly as\n\\begin{subequations}\\label{eq: rollers: naive system}\n\\begin{align}\n \\diff{\\theta_1^a}{t} &= f(\\theta_1^a,\\theta_2^a;\\avg{p},\\avg{B})\\,,\\\\\n \\diff{\\theta_2^a}{t} &= f(\\theta_2^a,\\theta_1^a;\\avg{p},\\avg{B})\\,,\n\\end{align}\n\\end{subequations}\nwith the superscript of $a$ distinguishing the solution from that of the full system of \\cref{eq: rollers: full system}. In contrast, we have seen that the true leading order behaviour actually corresponds to taking $(\\alpha,\\beta) = (\\avg{p}, \\avg{pB}\/\\avg{p})$, with $\\avg{pB}\\neq\\avg{p}\\avg{B}$ in general. 
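To evidence this inequality numerically, the two candidate effective shape parameters can be compared directly; the short Python sketch below is our own illustrative code (assuming only NumPy), and uses the oscillating parameters $p(T)=2A\sin{T}+1$ and $B(T)=\sin{T}+D$ that we adopt later in this section:
\begin{verbatim}
# Compare <p><B> with <pB>/<p> for rapidly oscillating parameters.
# Illustrative sketch only.
import numpy as np

T = np.linspace(0.0, 2.0 * np.pi, 200001)
A, D = -1.25, 0.5
p = 2.0 * A * np.sin(T) + 1.0   # dipole strength p(T)
B = np.sin(T) + D               # Bretherton parameter B(T)

# Period average on a uniform grid (duplicate endpoint dropped).
avg = lambda a: a[:-1].mean()

print("<p><B>   =", avg(p) * avg(B))      # a priori beta:   D     =  0.5
print("<pB>/<p> =", avg(p * B) / avg(p))  # systematic beta: A + D = -0.75
\end{verbatim}
The two effective parameters not only differ in magnitude, but can even differ in sign, as here.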
As we have seen throughout our analysis, this difference in parameters results directly from the employed processes of averaging: one is performed independently of the dynamical system, whilst the other systematically determines the appropriate averaged parameters for this particular dynamical system. In the next two subsections, we seek to determine if these differences in employed parameters between the systematically averaged equations and the \textit{a priori}-averaged model can result in differences in behaviour, which we establish through an elementary exploration of the autonomous dynamical system of \cref{eq: rollers: abstract autonomous system}.

\subsection{Exploring the autonomous dynamics}
First, we identify and classify the fixed points of \cref{eq: rollers: abstract autonomous system} in terms of $\alpha$ and $\beta$, before returning to consider the particular parameter combinations of the previous section. The dynamical system of \cref{eq: rollers: abstract autonomous system} can be explored via standard methods with relative ease, so we refrain from providing a full and detailed account of the analysis. It is worth noting that, due to the periodicity of the trigonometric functions in $f$, the forcing of the dynamics is periodic with period $\pi$ in both $x$ and $y$, so that we only need to characterise the dynamics up to multiples of $\pi$. This periodicity, along with a visual summary of the analysis that follows, is illustrated in \cref{fig: rollers: portraits,fig: rollers: fixed points}.
\begin{figure}
    \centering
    \begin{overpic}[width=0.8\textwidth]{figs/roller_configurations/roller_configurations.png}
    \put(0,50){(a)}
    \put(56,50){(b)}
    \put(0,25){(c)}
    \put(56,25){(d)}
    \end{overpic}
    \caption{Fixed points of the two-roller system for $\alpha>0$. Up to symmetry and $\pi$-periodicity, the identified fixed points correspond to four distinct configurations whose stability depends on $\alpha$ and $\beta$ nontrivially. Here, we report the stability for $\alpha>0$, corresponding to a swimmer with $\avg{p}>0$ in \cref{eq: rollers: effective eqs}, which we might refer to as a pusher-type particle. The dynamics of a puller-type particle, with $\alpha<0$, correspond to reversing the direction of motion from $\alpha>0$, swapping the stability of non-saddle fixed points. (a) Parallel alignment along the direction of separation is stable if $\beta<-1$ and unstable for $\beta>-1$. (b) Orthogonal alignment, with one particle pointing directly towards the other, is stable only if $\beta>0$. (c) Parallel alignment that is perpendicular to the relative displacement is only stable if $\beta<-1/2$. (d) Mutually parallel alignment that is neither parallel nor perpendicular to the relative displacement is unstable when it exists, which is for $\beta < -1/2$ or $\beta>1/3$.}
    \label{fig: rollers: fixed points}
\end{figure}

It is simple to identify fixed points along the manifolds $x=y$ and $x=-y$, seeking solutions of
\begin{equation}
    \frac{3\alpha}{2r^3}\sin{2x}\left(1 + \beta\left[2-5\cos^2{x}\right]\right) = 0 \quad \text{ and } \quad \frac{3\alpha}{2r^3}\sin{2x}\left(1 + \beta\cos^2{x}\right) = 0\,,
\end{equation}
respectively. For non-zero $\alpha$, these both admit solutions $x=n\pi/2$, $n\in\mathbb{Z}$, whilst the former admits the additional solutions of $1+\beta\left[2-5\cos^2{x}\right]=0$ whenever $\beta \leq -1/2$ or $\beta \geq 1/3$.
There are additional steady states on the $x=-y$ manifold that satisfy $1+\\beta\\cos^2{x}=0$, which exist whenever $\\beta\\leq-1$. Further, we note the existence of additional fixed points on the manifold $x + y = \\pi\/2$, which again correspond to $x=n\\pi\/2$ for $n\\in\\mathbb{Z}$. Hence, the fixed points of the system are given by $(x,y)\\in\\{(0,0),(\\pi\/2,0),(0,\\pi\/2),(\\pi\/2,\\pi\/2)\\}$, up to periodicity, in addition to solutions of $1+\\beta\\left[2-5\\cos^2{x}\\right]=0$ on the $x=y$ manifold and solutions of $1+\\beta\\cos^2{x}=0$ on the $x=-y$ manifold. The fixed points and their readily computed linear stabilities are summarised in \\cref{tab: rollers: stab} in \\cref{app: rollers: stab}, with the steady configurations interpreted in terms of the particles in \\cref{fig: rollers: fixed points}. \n\nWe illustrate the overall dynamics in various parameter regimes through phase portraits in \\cref{fig: rollers: portraits}, capturing the full range of qualitatively distinct behaviours that emerge from \\cref{eq: rollers: abstract autonomous system}. Noting that $\\alpha$ plays only a simple role in the dynamics, with changing the sign of $\\alpha$ simply reversing the direction of evolution, we fix $\\alpha>0$ in \\cref{fig: rollers: portraits}, focussing instead on the impact of varying $\\beta$. From these portraits, it is clear that changing the value of $\\beta$ can have a drastic effect on the dynamical system. For instance, $\\beta$ crossing the thresholds of $-1\/2$ and $1\/3$ modifies the character of the phase plane through the emergence or destruction of saddle points and nodes, accompanied by qualitative changes in phase-plane trajectories. Though there are multiple further bifurcations, a notable switch in stability occurs when crossing $\\beta=0$, with $\\beta=0$ corresponding to an integrable system with truly closed orbits \\footnote{Excluding heteroclinic trajectories.} that bifurcate into stable and unstable spirals either side of the bifurcation point.\n\nThese local bifurcations, though effecting changes in linear stability, also give rise to changes in the globally attracting dynamics of the system. This drastic alteration to the overall behaviour is illustrated via the sample trajectories highlighted in blue in \\cref{fig: rollers: portraits}, which either approach closed, heteroclinic connections in \\cref{fig: rollers: portraits}d or the fixed point at the centre of stable spirals in \\cref{fig: rollers: portraits}f and \\cref{fig: rollers: portraits}g, for instance.\n\n\\begin{figure}\n \\centering\n \\begin{overpic}[permil,width=\\textwidth]{figs\/all_phase_portraits\/all_phase_portraits.png}\n \\put(10,557){(a)}\n \\put(273,557){(b)}\n \\put(520,557){(c)}\n \\put(760,557){(d)}\n \\put(10,282){(e)}\n \\put(273,282){(f)}\n \\put(520,282){(g)}\n \\put(760,282){(h)}\n \\end{overpic}\n \\caption{Diverse phase portraits of the autonomous system of \\cref{eq: rollers: abstract autonomous system}. Varying $\\beta$, we illustrate the qualitatively distinct portraits of the autonomous dynamics, with stable and unstable fixed points shown as solid and empty circles, respectively. Moving from (a) to (b), we identify the transition of $(0,0)$ from a stable node to a saddle point via the coalescence of two saddles on the $x=-y$ manifold; following this transition, the parallel alignment of \\cref{fig: rollers: fixed points}c is the globally attracting state. 
Moving from (b) to (c) and from (h) to (g) transitions stable/unstable nodes at $(0,\pi/2)$ and $(\pi/2,0)$ to spirals of the same stability. Significant qualitative changes occur through bifurcations of the saddles at $(0,0)$ and $(\pi/2,\pi/2)$, each into a node and two saddles, visible between panels (c) and (d) and panels (f) and (g). The blue trajectories in each panel, which have a common initial condition, highlight the qualitative change in behaviour that can result from changing $\beta$, potentially altering the dynamics from the almost-heteroclinic orbits of panel (d) to the distinct stable attractors of panels (c) and (f), for instance. Here, we have fixed $\alpha>0$, noting that $\alpha<0$ corresponds purely to a reversal of the dynamics.}
    \label{fig: rollers: portraits}
\end{figure}

\subsection{Comparing the emergent dynamics}
From the above exploration of the autonomous system of \cref{eq: rollers: abstract autonomous system}, it is clear that changes to the parameter $\beta$ can have significant qualitative impacts on the emergent dynamics. In this section, we showcase how adopting $(\alpha,\beta) = (\avg{p},\avg{B})$ in the \textit{a priori}-averaged model can give rise to predictions that are wholly different to those of the systematically motivated parameters $(\alpha,\beta)=(\avg{p},\avg{pB}/\avg{p})$.

Before we exemplify such cases, it is worth highlighting that certain classes of $p(T)$ and $B(T)$ trivially result in $\avg{pB}/\avg{p}=\avg{B}$, so that the dynamics associated with the \textit{a priori} and systematically averaged models are identical to leading order. This equality trivially holds if at least one of $p(T)$ or $B(T)$ is constant, with $\avg{pB}=\avg{p}\avg{B}$ in this case. Hence, if the rollers can be associated with a constant dipole strength, or if their Bretherton parameters do not change in time, then we can naively average any remaining fast-time dependencies before inserting them into the model and achieve the correct leading order behaviour. Further, if we are concerned only with the eventual configuration of the rollers, and not the details of any transient dynamics, we can identify additional $p(T)$ and $B(T)$ that can be \textit{a priori}-averaged without consequence. For instance, if $p(T)>0$ and $B(T)>0$ for all $T$, then we have $\avg{pB}/\avg{p}>0$ and $\avg{B}>0$, so that $\beta > 0$ in both cases and the configuration shown in \cref{fig: rollers: fixed points}b is globally attracting, up to symmetry, as can be deduced from \cref{fig: rollers: portraits}f-h. These particular constraints are compatible, for example, with requiring that the particle be spheroidal, always prolate, and consistently generating a dipolar flow field that can be associated with a hydrodynamic pusher.

For more general $p(T)$ and $B(T)$, however, it is clear that we cannot guarantee that the dynamics predicted by the \textit{a priori}-averaged model of \cref{eq: rollers: naive system} will be at all reminiscent of the leading order systematically averaged dynamics of \cref{eq: rollers: effective eqs}. Seeking a minimal example in order to highlight this general observation, we take $p(T)=2A\sin{T} + 1$ and $B(T) = \sin{T} + D$. Clearly, $\avg{p}=1$ and $\avg{B}=D$, so that the choice of $D$ uniquely determines the panel of \cref{fig: rollers: portraits} that corresponds to the dynamics of the \textit{a priori}-averaged model.
However, computing $\\avg{pB}\/\\avg{p} = A + D$ highlights that we can choose $A$ so that the systematically averaged dynamics occupy any given panel of \\cref{fig: rollers: portraits}. \n\nTo illustrate this concretely, we take $A = -5\/4$ and $D = 1\/2$ and numerically solve \\cref{eq: rollers: naive system,eq: rollers: effective eqs,eq: rollers: full system} from the same initial condition, taking $\\omega=100$, with the solutions shown in \\cref{fig: rollers: comparison}. The \\textit{a priori}-averaged system shown in \\cref{fig: rollers: comparison}a corresponds to $\\beta=1\/2$, so follows the dynamics of \\cref{fig: rollers: portraits}g, with the numerical solution approaching the $(0,\\pi\/2)$ steady state, in which the rollers are perpendicular to one another. In contrast, the systematically averaged dynamics of \\cref{fig: rollers: comparison}b correspond to $\\beta = -3\/4$ and the portrait of \\cref{fig: rollers: portraits}c, evolving to the parallel steady state $(\\pi\/2,\\pi\/2)$. The numerical solution to the full problem of \\cref{eq: rollers: full system} is also shown in \\cref{fig: rollers: comparison}b, highlighting good agreement with the asymptotic solution and evidencing the potential for disparity between the predictions of the \\textit{a priori}-averaged model and the dynamics of temporally evolving bodies.\n\n\\begin{figure}\n \\centering\n \\begin{overpic}[width=0.8\\textwidth]{figs\/roller_comparison\/roller_comparison.png}\n \\put(3,36){(a)}\n \\put(53,36){(b)}\n \\end{overpic}\n \\caption{Comparing the results of \\textit{a priori} and systematic averaging. With fixed initial conditions of $(\\pi\/8,0)$, numerical solutions of (a) the \\textit{a priori}-averaged system of \\cref{eq: rollers: naive system}, (b) the systematically averaged dynamics of \\cref{eq: rollers: effective eqs}, and the full system of \\cref{eq: rollers: non-autonomous system} are shown as solid curves. (a) With $p(T) = -5\\sin{T}\/2 + 1$ and $B(T) = \\sin{T} + 1\/2$, the \\textit{a priori}-averaged dynamics approach the $(0,\\pi\/2)$ steady state, following the dynamics illustrated in \\cref{fig: rollers: portraits}g, as $\\beta=\\avg{B}=1\/2$. (b) The leading order, systematically derived dynamics evolve to a distinct steady state, with $\\theta_{1,0}=\\theta_{2,0} = \\pi\/2$, following the dynamics shown in \\cref{fig: rollers: portraits}c, as $\\beta = \\avg{pB}\/\\avg{p} = -3\/4$. The corresponding steady state configurations of the rollers are illustrated in the lower right corner of each panel, highlighting the qualitatively distinct behaviours predicted by the models. Here, we have taken $\\omega=100$.}\n \\label{fig: rollers: comparison}\n\\end{figure}\n\n\n\n\\section{Discussion and conclusions}\nThe use of \\textit{a priori}-averaged parameters in minimal modelling approaches can be appealing, seemingly commensurate with seeking the simplest possible model of a given setting. However, through the simple examples presented in this study, we have seen how such model-agnostic averaging can result in behavioural predictions that differ qualitatively from those of models that incorporate fast-timescale parameter oscillations, which are common to many microswimmers. Hence, the key conclusion of our simple analysis is that such a modelling approach can be unreliable even at the level of back-of-the-envelope calculations, with the intuition gained from such explorations potentially rendered invalid. 
This observation is expected to hold somewhat generically across such simple models, with our approach extending to a range of employed minimal swimmer representations.

Though one might interpret the conclusion of this study as an argument against the use of minimal models of microswimming, in fact, our asymptotic analysis of these models revealed that they \emph{can} capture the leading order dynamics, but only with appropriate parameterisation. In particular, our analysis highlights that it is the use of \textit{a priori}-averaged parameters, rather than the use of constant parameters more generally, that gives rise to inaccurate predictions. Hence, this study supports the use of minimal models in developing intuition and understanding of microswimmer systems, though only when used with systematically derived, effective parameters.

Further, we have seen how an asymptotic analysis can show that, in certain parameter regimes, one can reliably employ \textit{a priori}-averaged parameters without qualitatively affecting the predictions of the model. However, the explorations of \cref{sec: no slip,sec: rollers} revealed that such robust parameter regimes are far from being universal; on the contrary, we have seen that they depend strongly on the model in question. Hence, in general, a bespoke analysis is required for any given model in order to determine these regimes of serendipitous agreement.

The analysis presented in this study is simple, even elementary, relying on the commonplace method of multiple scales and the basic observation that the average of a product need not be the product of individual averages. Despite this simplicity, we have identified potential missteps in the use of the simplest models of microswimming, of which these authors have previously been guilty. Further, we have demonstrated how a straightforward, multi-timescale analysis can inform reliable, systematic parameterisation of minimal models, such that they recover their marked utility in the generation of intuition, basic understanding, and back-of-the-envelope predictions.

\section*{Acknowledgments}
B.J.W. is supported by the Royal Commission for the Exhibition of 1851. K.I. acknowledges JSPS-KAKENHI for Young Researchers (Grant No. 18K13456), JSPS-KAKENHI for Transformative Research Areas (Grant No. 21H05309) and JST, PRESTO, Japan (Grant No. JPMJPR1921).


\section{Introduction \label{sec:intro}}

The study of $\delta$ Scuti stars is expected to provide an important diagnostic tool for the structure and evolution of intermediate-mass stars during the main-sequence and thick-shell hydrogen-burning stages of evolution. Having masses between 1.5 and 2.5 $M_{\odot}$, these pulsating variables develop convective cores during their central hydrogen-burning phases that make them suitable for investigating the hydrodynamic processes occurring in the core.

Although most of the general properties of these stars are well understood, the application of seismological methods to derive their properties has proved to be a difficult task. The problems concerning the modelling of $\delta$ Scuti stars have been discussed in detail in recent reviews (e.g.\ Christensen-Dalsgaard \cite{christensen}; Kjeldsen \cite{kjeldsen}).
The most acute seem\nto be the mode selection mechanism and the influence of rotation.\nConcerning the former, although there is fairly good agreement\nbetween the observed frequency range and that derived from\ninstability calculations (e.g.\\ Michel et al.\\ \\cite{michel1}), far\nmore modes than observed are predicted to be unstable. \nWhile it is expected that the forthcoming\nasteroseismology space mission COROT (Baglin et al.\\ \\cite{baglin})\nwill be able to disentangle the whole spectra of $\\delta$ Scuti\nstars, for the moment the\nmechanism that could cause an amplitude limitation in some\nmodes is still unknown. As a consequence, mode identification\nbased on the comparison between theoretical and observed\nfrequencies is difficult to attain in general. On the other hand,\nalthough several observational techniques for mode identification\nhave been developed in recent years (Watson \\cite{watson};\nGarrido et al.\\ \\cite{garrido}; Aerts \\cite{aerts}; Kenelly \\&\nWalker \\cite{kenelly}; Viskum et al.\\ \\cite{viskum}), in only a few\ncases have these techniques been successfully applied.\n\n\nIt is well known that most $\\delta$ Scuti stars\nare rapid rotators. This has important effects on the\nmode frequencies, both directly as a consequence of the changes that must be\nintroduced in the oscillation equations and indirectly through the changes in the\nequilibrium models.\nTherefore, successful analysis of rotating $\\delta$ Scuti stars\nrequires that rotation should be taken into account consistently at\nall levels in the analysis.\n\nDespite these problems, in recent years several \nattempts have been made to interpret the observed spectra\nof $\\delta$ Scuti stars (Goupil et al.\\ \\cite{goupil}; P\\'erez\nHern\\'andez et al.\\ \\cite{perez1}; Guzik \\& Bradley \\cite{guzik};\nPamyatnykh et al.\\ \\cite{pamyatnykh}; Bradley \\& Guzik\n\\cite{bradley}; Hern\\'andez et al.\\ \\cite{hernandez}; Su\\'arez et\nal.\\ \\cite{suarez1}). Here we address the problem of mode\nidentification for stars in an open cluster. These stars are\nexpected to have a common age and chemical composition.\nFurthermore, the distance to the cluster is usually known with\nhigh accuracy. The constraints imposed by the cluster parameters\nhave proved to be very useful when modelling an ensemble of\n$\\delta$ Scuti stars. The best examples of such studies are found\nfor the variables in the Praesepe cluster (Michel et al.\\\n\\cite{michel1}; Hern\\'andez et al.\\ \\cite{hernandez}). In a similar\nway, we consider here several $\\delta$ Scuti stars of the Pleiades\ncluster and search for a best fit solution in the sense of a set\nof stellar parameters that allows the simultaneous modelling of\nall the stars considered, and that satisfies all the observables,\nincluding the frequencies.\n\nThe paper is organized as follows. In Sect.~\\ref{sec:stars}, we\ndescribe the main characteristics of the Pleiades cluster and the\nsix $\\delta$ Scuti stars under study. The modelling, the range of\nparameters and the calculations of the eigenfrequencies are\ndiscussed in Sect.~\\ref{sec:models}. The details of the comparison\nbetween observed and theoretical frequencies are discussed in\nSect.~\\ref{sec:comp}. The results and their\ndiscussion are given in Sect.~\\ref{sec:results}. Finally, we\npresent our conclusions in Sect.~\\ref{sec:conclusions}.\n\n\n\\section{The observational material \\label{sec:stars}}\n\nThe Pleiades (M45) is a young ($\\sim 75$--100\\, Myr), Population I\ncluster. 
Because of its proximity, the observational parameters of the\nPleiades have been intensively studied. In particular, the\nmetallicity of the cluster is estimated to be between $-0.14 \\leq$\n[Fe\/H] $\\leq +0.13$ (e.g. $-$0.034 $\\pm$ 0.024 Boesgaard \\& Friel\n\\cite{boesgaard}, 0.0260 $\\pm$ 0.103 Cayrel de Strobel\n\\cite{cayrel}, $-$0.11 $\\pm$ 0.03 Grenon \\cite{grenon}),\n\nUntil recently, there was a dispute regarding the distance of the\nPleiades cluster from the Earth. While the determination of the\ncluster distance from direct parallax measurements of\nHIPPARCOS gives 116.3$\\pm^{3.3}_{3.2}$~pc\n($m_{V}-M_{V}= 5.33\\,\\pm\\, 0.06$ mag, Mermilliod et al.\\\n\\cite{mermilliod}), the previously accepted distance, which is\nbased on comparing the cluster's main sequence with that of nearby\nstars, was $\\approx$130 pc corresponding to a distance modulus of\nabout 5.60 mag (e.g.\\ 5.65 $\\pm$ 0.38 mag [Eggen \\cite{eggen}],\n5.60 $\\pm$ 0.05 mag [Pinsonneault et al.\\ \\cite{pinsonneault}],\n5.61 $\\pm$ 0.03 mag [Stello \\& Nissen \\cite{stello}]).\nAs will be discussed in Sect.~\\ref{sec:results}, the distance derived\nin the research presented here agrees with the latter value.\n\n\\begin{table*}[ht]\n \\caption{Observational properties of the target stars}\n \\begin{tabular}{lccccccc}\n \\hline\n Star & HD & ST & $m_{V}$&$B-V$ & $E(B-V)$ & $V \\sin\\, i$ & $\\beta$ \\\\\n & & & & & & $(\\mathrm{km\\, s}^{-1})$& \\\\\n \\hline\n - & 23628 & A7V & $7.66\\pm0.03 $& $+0.211\\pm0.005$ &$0.063 \\pm0.008$ & $ 150 \\pm15$& 2.884 \\\\\n V650 Tau& 23643 & A3V & $7.77\\pm0.02 $& $+0.157\\pm0.006$& $0.027\\pm0.008$ & $175\\pm18$ & 2.862 \\\\\n - & 23194 & A5V & $8.06\\pm0.02 $& $+0.202\\pm0.010$& $0.076\\pm0.008$ & $20 \\pm2$& 2.881 \\\\\n V624 Tau& 23156 & A7V & $8.23\\pm0.01$ & $+0.250\\pm0.005$& $0.046\\pm0.008$ & $ 20 \\pm2$& 2.823 \\\\\n V647 Tau& 23607 & A7V & $8.26\\pm0.01$ & $+0.255 \\pm0.001$&$0.057 \\pm0.008$ & $ 20 \\pm2$& 2.841 \\\\\n V534 Tau& 23567 & A9V & $8.28\\pm0.02$ & $+0.360\\pm0.001$ &$ 0.084\\pm0.008$ & $ 90\\pm 9$& 2.788 \\\\\n\n \\hline\n \\end{tabular}\n \\label{tab:stars}\n\\end{table*}\n\nTo date, six $\\delta$ Scuti stars are known in\nthe Pleiades. In a survey of variability in the\ncluster, Breger (\\cite{breger}) found $\\delta$ Scuti type oscillations\nin four stars. The remaining two were recently reported to be\n$\\delta$ Scuti stars by Koen (\\cite{koen1}) and Li et al.\\ (\\cite{li}).\nSome observational properties of these stars are given in\nTable~\\ref{tab:stars}. The projected rotational\nvelocities, $v\\sin i$, were obtained from Morse et al.\\\n(\\cite{morse}) and Uesugi \\& Fukuda (\\cite{uesugi}). A 10\\%\nuncertainty in these quantities has been assumed. The spectral\ntypes (ST) and $\\beta$ parameters were obtained from the SIMBAD database in\nStrasbourg (France). The errors in the photometric parameters $(B-V)$\nand $m_{V}$ are estimated from the dispersion between different\nmeasurements of these quantities. The errors in reddening are\ntaken from Breger (\\cite{breger1}). We shall use these\nuncertainties in Section 3.2.\n\nIn recent years, the $\\delta$ Scuti stars of the Pleiades cluster have been\nobserved in several campaigns of the STEPHI multi-site network\n(Michel et al.\\ \\cite{michel2}). 
The information on the oscillation frequencies of the stars used here was
obtained from those campaigns and is published elsewhere (Belmonte \& Michel
\cite{belmonte1}; Michel et al.\ \cite{michel4}; Liu et al.\ \cite{liu}; Li
et al.\ \cite{li}; Li et al.\ \cite{li1}; Fox-Machado et al.\ \cite{fox}).
The frequency peaks detected with a confidence level above 99\% are
summarized in Table~\ref{tab:freq}. We note that the frequency resolution in
a typical STEPHI campaign (three weeks) is $\Delta \nu \sim 0.5\, \mu$Hz.

\begin{table}[!t]
\begin{center}
\caption{Observed frequencies used in this work. The data were obtained from
different STEPHI campaigns, as detailed in the text.\label{tab:freq}}
\begin{tabular}{lc|lc}
\hline \hline
star & $\nu$ & star& $\nu $ \\
& $(\mu {\rm Hz})$& & $(\mu {\rm Hz})$ \\
\hline
HD 23628& 191.8 & V624 Tau &242.9 \\
 & 201.7 &&409.0 \\
 & 376.6 &&413.5 \\\cline{1-2}
V650 Tau&197.2&&416.4 \\
 &292.7&& 451.7 \\
 &333.1&& 489.4 \\
 &377.8 &&529.1 \\\hline
HD 23194&533.6&V647 Tau&236.6 \\
&574.9&&304.7 \\\cline{1-2}
V534 Tau &234.2 &&315.6 \\
 &252.9&&374.4 \\
 &307.6&&405.8 \\
 &377.9 &&444.1 \\
 &379.0 &&\\
 &448.1 && \\
 &525.0&& \\\hline
\end{tabular}
\end{center}
\end{table}

Figure~\ref{fig:dscutis} illustrates a colour--magnitude diagram of the
Pleiades cluster in the region where the target stars are located. The
filled circles correspond to the $\delta$ Scuti stars. The apparent
magnitudes $V$, colours $(B-V)$, and membership information used are taken
from the WEBDA database (Mermilliod \cite{mermilliod1}). The Pleiades shows
differential reddening, with a significant excess in the southwest and an
average value of $E(B-V) \approx 0.04$ (e.g.\ van Leeuwen \cite{vanleeuwen};
Breger \cite{breger1}; Hansen-Ruiz \& van Leeuwen \cite{hansen};
Pinsonneault et al.\ \cite{pinsonneault}). In particular, we adopted the
reddening for individual stars as given by Breger (\cite{breger1}) if
present; otherwise the average of $E(B-V) = 0.04$ was used. Stars known to
be spectroscopic binaries were rejected.

Given the young age of the Pleiades cluster, we expect all the $\delta$
Scuti stars to be at an early evolutionary stage on the main sequence. From
the figure it follows that while V647 Tau and V624 Tau have similar masses,
V650 Tau, HD 23194 and HD 23628 are slightly more massive. V534 Tau, located
near the red edge of the instability strip, is the coolest star in the
ensemble.

\begin{figure}[!t]
\resizebox{\hsize}{!} {\includegraphics{3791fig1.eps}}
\caption{HR diagram of the Pleiades cluster. Only the region around the
instability strip is shown. The target stars are represented with filled
circles. The blue and red edges of the instability strip are indicated by
continuous lines and were taken from Rodr\'{\i}guez et al.\
(\cite{rodriguez}). An isochrone of 70 Myr is also shown.}
\label{fig:dscutis}
\end{figure}

\section{The modelling \label{sec:models}}

In this section we explain how we have computed the stellar models and the
corresponding eigenfrequencies, and how we have constrained the stellar
parameters used in the forward comparison with the observed frequencies.

\subsection{The stellar models}

We have considered stellar models with input physics appropriate to the
mass range covered by $\delta$ Scuti stars.
In particular, the stellar models were computed using the CESAM evolutionary
code (Morel \cite{morel}). The nuclear reaction rates are from Caughlan \&
Fowler (\cite{caughlan}), the equation of state is from Eggleton et al.\
(\cite{eggleton}), the opacities are from the OPAL project (Iglesias \&
Rogers \cite{iglesias}), complemented at low temperatures by Kurucz data
(\cite{kurucz}), and the atmosphere is computed using Hopf's $T(\tau)$ law
(Mihalas 1978). Convection is described according to the classical
mixing-length theory (MLT hereafter). In particular, we have considered an
MLT parameter of $\alpha_{\rm MLT} = l/H_{p}=1.52$, where $H_{p}$ is the
pressure scale-height. We have adopted a fixed value for $\alpha_{\rm MLT}$
because this parameter is not expected to have any significant influence on
the global structure of the evolutionary models considered for our target
stars.

We have considered different sets of initial chemical compositions in order
to cover the observed [Fe/H] range of the Pleiades (see
Section~\ref{sec:stars}) and different initial He abundances. The values
used are given in Table~\ref{tab:metal}. For each initial chemical
composition we have computed models with and without core overshooting, with
$\alpha_{{\rm ov}}=0.20$ in the former case (Schaller et al.\
\cite{schaller}).

\begin{table}[!t]
\begin{center}
\caption{Metallicity of the models. [Fe/H] = $\log (Z/X)_{\ast}- \log (Z/X)_{\odot}$,
with $(Z/X)_{\odot}=0.0245$ from Grevesse \& Noels (\cite{grevesse}).
\label{tab:metal}}
\begin{tabular}[h]{ccccc}
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
$X_{0}$&$Y_{0}$&$Z_{0}$&[Fe/H]&$\alpha_{\rm ov}$\\
\noalign{\smallskip} \hline \noalign{\smallskip}
0.735&0.250&0.015&$-$0.0794\,\,\,\,&0.0--0.2\\
0.685&0.300&0.015&$-$0.0488\,\,\,\,&0.0--0.2\\
0.700&0.280&0.020&0.0668&0.0--0.2\\
0.725&0.250&0.025&0.1484&0.0--0.2\\
0.675&0.300&0.025&0.1794&0.0--0.2\\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}
\end{center}
\end{table}

Although the final comparison with observations is done considering rotating
models, our procedure requires the computation of non-rotating models and
the corresponding isochrones. Hence, sequences of non-rotating models were
calculated for masses between 0.8 $M_{\odot}$ and 5.0 $M_{\odot}$, from the
ZAMS to the sub-giant branch, and the corresponding isochrones were computed
for each set of parameters in Table~\ref{tab:metal}. In the following
analysis we shall consider three ages, $A$, of 70 Myr, 100 Myr and 130 Myr.
Finally, for comparison with the observations, three distance moduli, $d$,
of 5.50, 5.60 and 5.70 mag were considered, except for models with
[Fe/H]=$-$0.0488, in which case an additional value of $d=5.39$ mag was
used.

Figure~\ref{fig:dscutis} shows an example of such isochrones, computed with
the following parameters: [Fe/H]=0.0668, $\alpha_{\rm ov} = 0.2$, $A = 70$
Myr and $d= 5.70$ mag. The isochrones were calibrated from [$T_{\rm eff},
\log (L/L_{\odot})$] to $(B-V, M_{V})$ by using the Schmidt--Kaler
(\cite{schmidt}) calibration for the magnitudes and the relationship between
$T_{\rm eff}$ and $B-V$ of Sekiguchi \& Fukugita (\cite{sekiguchi}) for the
colours. As can be seen in Fig.~\ref{fig:dscutis}, in the case illustrated
here the isochrone matches the observed colour--magnitude diagram.
For some combinations of high metallicity and low distance modulus the
isochrone fit is not satisfactory, but we have not rejected those
combinations, in order to allow for an independent determination of the
cluster parameters from the oscillation frequencies. Hence we have
considered all combinations of the cluster parameters given in
Table~\ref{tab:metal} and the three ages given above.

\subsection{Photometric effect of rotation on the HR diagram}

It is known that fast rotation affects the position of a star in the
colour--magnitude diagram (e.g.\ Tassoul \cite{tassoul}). In particular, a
rotating ZAMS model has a smaller luminosity and mean $T_{\mathrm{eff}}$
than a non-rotating model with the same mass and chemical composition.
Also, the magnitude and the colour of a rotating star depend on the aspect
angle, $i$, between the line of sight and the rotation axis. In particular,
a rotating star seen pole-on appears brighter and hotter than the same star
seen equator-on.

We now search for the range of masses and angular rotation rates suitable
for each star. To do this we shall use the results of P\'erez Hern\'andez et
al.\ (\cite{perez2}), in which the corrections to the photometric magnitudes
of non-rotating models needed to obtain those of rotating models were
computed for main-sequence stars of spectral types A0 to F5. In this
calculation, we take into account the observed $v \sin i$ of each star. As
an example, Fig.~\ref{fig:cor} illustrates the corrections due to rotation
in the particular case of the $\delta$ Scuti star V650 Tau, with $v \sin
i=175$ km s$^{-1}$. The actual position of the star is shown with a thick
dot, upon which there is a cross that gives the errors in the estimation of
$(M_{V})_{0}$ and $(B-V)_{0}$, calculated according to the following
expressions:

\begin{figure}
\resizebox{\hsize}{!} {\includegraphics[]{3791fig2.eps}}
\caption{Colour--magnitude diagram illustrating the photometric corrections
due to rotation in the case of V650 Tau ($v\sin i = 175\,$km s$^{-1}$). The
thick continuous line is the same isochrone represented in
Fig.~\ref{fig:dscutis} ([Fe/H] = 0.0668, $\alpha_{\rm ov} = 0.2$, $A = 70$
Myr, $d=5.7$ mag). Some models are shown with filled circles upon the
isochrone. The masses of the first and last models are indicated. The small
tracks represent the photometric corrections due to rotation for each of the
models indicated on the isochrone when $i$ runs from $i=90^{\circ}$ to
$i=i_{\rm min}$. The corrections at fixed $i$ are represented by dotted
lines.}
\label{fig:cor}
\end{figure}

\begin{equation}
\sigma[(B-V)_{0}] = \sqrt{\sigma^{2}(B-V) +\sigma^{2}[E(B-V)]},
\end{equation}

\begin{equation}
\sigma[(M_{V})_{0}] = \sqrt{\sigma^{2}(d)+\sigma^{2}(m_{V}) +
(3.2\,\sigma[E(B-V)])^{2}}
\, ,
\end{equation}

\noindent
where $\sigma(d)=0.06$ is the error in the distance modulus. The errors in
the magnitude $m_{V}$, colour index $(B-V)$ and reddening $E(B-V)$ are given
in Table~\ref{tab:stars}.

The ranges of masses and rotational velocities of each star depend on the
stellar parameters considered and hence need to be computed for each
combination in Table~\ref{tab:metal}. In particular, in the example
illustrated in Fig.~\ref{fig:cor}, the same cluster parameters as in
Fig.~\ref{fig:dscutis} were considered ([Fe/H]=0.0668, $\alpha_{\rm ov} =
0.2$, $A = 70$ Myr and $d= 5.70$ mag).
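As an aside, the two error expressions above are trivially mechanised; the
following minimal sketch (Python; the function name is ours, and the example
values are the Table~\ref{tab:stars} uncertainties for V650 Tau) is one way
to evaluate the error box:

\begin{verbatim}
import numpy as np

def error_box(sig_BV, sig_EBV, sig_mV, sig_d=0.06):
    # Half-widths of the error box in (B-V)_0 and (M_V)_0,
    # following the two expressions above.
    sig_colour = np.hypot(sig_BV, sig_EBV)
    sig_mag = np.sqrt(sig_d**2 + sig_mV**2 + (3.2 * sig_EBV)**2)
    return sig_colour, sig_mag

# Example with the V650 Tau uncertainties from Table 1:
print(error_box(sig_BV=0.006, sig_EBV=0.008, sig_mV=0.02))
\end{verbatim}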
The isochrone is represented by a thick continuous line and some of its
models are shown by dots. For each model on the isochrone there is a small
track corresponding to the photometric corrections due to rotation, computed
by using the projected velocity $v \sin i$ of the star and changing the
angle of inclination from $i=90^{\circ}$ to the minimum angle $i=i_{\rm
min}$ corresponding to $\sim 0.90\, \Omega_{\rm br}$, where $\Omega_{\rm
br}$ is the break-up angular rotational velocity. This angular velocity is
given by the balance between the gravitational attraction and the
centrifugal force at the equator. The dotted lines give the corrections at
fixed angles: $i=90^{\circ}$ (the closest line to the isochrone),
$i=50^{\circ}$ (intermediate line) and $i=39^{\circ}$ (the reddest line).
It is evident that the actual position of the star, including the error box,
is associated with a domain of non-rotating models. In this case, for the
star considered, we obtained a mass range of $[1.86,1.91]\,M_{\odot}$ and a
range of inclination angles of $[39^{\circ},50^{\circ}]$. Nonetheless, in
order to take into account possible errors coming from the isochrone
calibration, we shall consider a wider range of aspect angles, $i=[i_{\rm
min}, 90^{\circ}]$. The same procedure is carried out for the other stars.
One also needs to consider all the metallicities, ages, overshooting
parameters and distance moduli for the whole set of stars.

Having obtained ranges of $M$ and values of $i_{\rm min}$, we proceed to
find the corresponding range of angular rotational velocities $\Omega$, for
which an estimate of the equatorial radius, $R_{\rm eq}$, of the rotating
model is needed. Under the assumption of uniform rotation and approximating
the surface of the star by a Roche surface (see e.g.\ P\'erez Hern\'andez et
al.\ \cite{perez2}) one has:

\begin{equation}
R_{\rm eq} \simeq R_0 \, \frac{3}{\omega} \cos \left\{ \frac{1}{3}
\left[ \pi + \arccos \omega \right] \right\}
\; ,
\end{equation}

\noindent where $\omega \simeq \Omega/\Omega_{\rm br}$, $\Omega^2_{\rm br}
\simeq 8GM/(27R^3_0)$ and $R_0$ is the polar radius. As noted in P\'erez
Hern\'andez et al.~(\cite{perez2}), the polar radius can be approximated by
the radius of a spherical non-rotating model with the same mass and
evolutionary state (since our stars are close to the ZAMS, a non-rotating
model of the same mass and age is suitable for this error-box estimation).

Since, on the other hand, the angular rotation rate is related to the
equatorial radius by

\begin{equation}
\Omega \simeq \frac{(v\sin i)_{\rm obs}}{R_{\rm eq}\sin i}
\; ,
\end{equation}

\noindent it is possible to carry out a simple iterative process to obtain a
range of possible rotation rates, $[\Omega_{\rm min},\Omega_{\rm max}]$,
from the previously obtained ranges of inclination angle, $[i_{\rm
min},90^{\circ}]$, and mass. Since, for given data parameters, the mass of
the star is obtained with an uncertainty of only $\sim 0.02$ $M_{\odot}$,
this iterative process was carried out for just one mass.

As an example, Figure~\ref{fig:om} shows the ranges of $\Omega$ obtained for
the six stars as a function of the mass of the models, considering the same
cluster parameters as in Figure~\ref{fig:cor}. Lines of different types give
the estimates for the different stars, as indicated in the figure.
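The iteration itself is straightforward. A minimal sketch (Python, in cgs
units; the function names are ours, and the loop is a plain fixed-point
iteration between the two relations above) might read:

\begin{verbatim}
import numpy as np

G = 6.674e-8                      # gravitational constant (cgs)

def omega_br(M, R0):
    # Break-up angular velocity, Omega_br^2 ~ 8GM/(27 R0^3)
    return np.sqrt(8.0 * G * M / (27.0 * R0**3))

def r_eq(R0, w):
    # Roche-surface equatorial radius, with w = Omega/Omega_br
    return R0 * (3.0 / w) * np.cos((np.pi + np.arccos(w)) / 3.0)

def solve_omega(vsini, i, M, R0, tol=1e-8, itmax=50):
    # Fixed-point iteration between Omega and R_eq
    Req = R0
    for _ in range(itmax):
        Omega = vsini / (Req * np.sin(i))
        Req_new = r_eq(R0, min(Omega / omega_br(M, R0), 1.0))
        if abs(Req_new - Req) < tol * R0:
            break
        Req = Req_new
    return Omega
\end{verbatim}

Evaluating this at $i=i_{\rm min}$ and $i=90^{\circ}$ for the adopted mass
yields the interval $[\Omega_{\rm min},\Omega_{\rm max}]$ quoted above.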
A similar procedure needs to be carried out for all sets of parameters in
Table~\ref{tab:metal}.

\begin{figure}[!t]
\resizebox{\hsize}{!} {\includegraphics[width=22cm]{3791fig3.eps}}
\caption{Ranges of rotation rate against mass estimated for the six stars
from the photometric corrections due to rotation, applied to the same
isochrone depicted in Figure~\ref{fig:cor}. The vertical error bars give the
estimated range of mass for each star. The crosses give the values of the
break-up angular rotational velocity, $\Omega_{\rm br}$. The ranges of
$\Omega$ and $M$ are associated with the stars as follows: dot-dashed line
for V650 Tau $(v\sin i=175$ km s$^{-1})$; long-dashed line for HD 23628
$(v\sin i=150\,$km s$^{-1})$; triple-dot-dashed line for HD 23194 $(v\sin
i=20$ km s$^{-1})$; dashed line for V647 Tau $(v\sin i=20$ km s$^{-1})$;
continuous line for V624 Tau $(v\sin i=20\,$km s$^{-1})$; and dotted line
for V534 Tau $(v\sin i=90\,$km s$^{-1})$. }
\label{fig:om}
\end{figure}

With the information obtained above we compute evolutionary sequences of
rotating models with the same input physics as the non-rotating models, but
with initial angular rotational velocities chosen so as to match as closely
as possible the extreme values of the interval $[\Omega_{\rm
min},\Omega_{\rm max}]$ at the final age. Solid-body rotation and
conservation of global angular momentum during the evolution were assumed in
the calculation. In order to have a better overview of the rotation rates of
the stars, we have also computed models with an intermediate value of
$\Omega_{\rm mid} \approx 0.5\, \Omega_{\rm br}$. A total of 1620 sequences
of rotating models were finally obtained for the whole ensemble of stars.

\subsection{Theoretical oscillation frequencies}

The eigenfrequencies of the rotating models previously described have been
calculated using the oscillation code FILOU (Tran Minh et al.\ \cite{tran};
Su\'arez \cite{suarez2}). We have considered frequency perturbations up to
second order in the rotation rate. Frequencies of the modes are labelled in
the usual way: $n, l, m$ for the radial order, degree and azimuthal order.
The radial orders of a given mode are assigned according to the Scuflaire
criterion (\cite{scuflaire}), with $n>0$ for the $p$ modes, $n<0$ for the
$g$ modes, $n=1$ for the fundamental mode of degrees 0 and 1, and $n=0$ for
the fundamental mode with $l=2$. Eigenfrequencies were computed up to $l=2$;
for geometrical reasons, higher-degree modes are expected to have
considerably smaller amplitudes. The theoretical frequencies cover the
frequency range of the observed pulsation peaks (see Table~\ref{tab:freq}).
Coupling between modes is not considered in the present work.

The estimated interval of $\Omega_{\rm rot}$ for all the stars may be as
large as $\Delta \Omega_{\rm rot} \sim 1.6 \times 10^{-4}$ rad/s, which in
terms of cyclic rotational frequency corresponds to $\Delta \nu_{\rm rot}
\sim 25\,\mu$Hz (see for instance Fig.~\ref{fig:om}). After some tests, we
found that a satisfactory comparison between the observed and theoretical
frequencies cannot be achieved by using only the frequencies computed for
models with three values of $\Omega_{\rm rot}$ within this interval.
To overcome this difficulty, once we had the mode frequencies for each
series of models (three values of $\Omega_{\rm rot}$ for fixed $M$, [Fe/H],
$\alpha_{\rm MLT}$, $\alpha_{\rm ov}$, $d$ and $A$), we interpolated the
results in $\Omega_{\rm rot}$ at 21 equally spaced points covering the
interval [$\Omega_{\rm min}, \Omega_{\rm max}$]. A quadratic spline
interpolation was applied. In order to evaluate the goodness of the
interpolation we also computed the eigenfrequencies for several randomly
selected models. We found good agreement between the interpolated and
computed frequencies up to $\nu \sim 600\, \mu$Hz ($n \sim 6$), the average
absolute separation being $|\nu_{\rm int} -\nu_{\rm cal}| \sim 0.3\,\mu$Hz.
The disagreement found beyond this value can be explained by the fact that
the second-order effects of rotation are enhanced at higher frequencies. In
any case, as can be seen in Table~\ref{tab:freq}, the highest frequency we
are dealing with is $\simeq 574\,\mu$Hz.

\section{Method of comparison \label{sec:comp}}

Once we have the mode frequencies for all sets of parameters, we compare the
observed frequencies ($\nu_{\rm obs}$) with the theoretical ones ($\nu_{\rm
cal}$) at each interpolated $\Omega_{\rm rot}$.

Let us consider a rotating model with given parameters for just one star.
Then, over all the possible combinations between the observed and computed
frequencies, we compute the quantity $\chi^{2}$ given by

\begin{equation}
\chi^{2}= \frac{1}{N}\sum^{N}_{j=1}(\nu_{{\rm obs},j} -
\nu_{{\rm cal},j})^{2}
\, ,
\end{equation}

\noindent
where $N$ is the number of observed frequencies. In this computation it is
understood that each theoretical frequency is assigned to at most one
observed frequency.

As a first step, in order to fully exploit the collective behaviour of stars
within an open cluster, we consider the solutions of the six stars with
given cluster parameters [Fe/H], $\alpha_{\rm ov}$, $d$ and $A$. The models
corresponding to each star differ in mass and rotation rate. As an example,
Fig.~\ref{fig:ang} shows $\chi^{2}$ against angular rotational velocity,
$\Omega$, for the six models corresponding to the same parameters as in
Fig.~\ref{fig:cor} but without overshooting ([Fe/H]=0.0668, $\alpha_{\rm
ov}=0.0$, $d$=5.70 mag and $A$=70 Myr). Each panel corresponds to the star
indicated in the figure. For clarity, only solutions with $\chi^{2}< 20 \,
\mu\rm{Hz}^{2}$ are shown. Similar features are found for other
metallicities, distance moduli and ages, but the lowest $\chi^{2}$ values
may be rather high.

\begin{figure}[!ht]
\resizebox{\hsize}{!} {\includegraphics[width=22cm]{3791fig4.eps}}
\caption{$\chi^{2}$ against $\Omega$ for six models with common parameters
[Fe/H], $\alpha_{\rm ov}$, $d$ and $A$.}
\label{fig:ang}
\end{figure}

Since we expect the same [Fe/H], $d$ and $A$ for all the stars in the
cluster, we shall assume that the best fits occur simultaneously in the six
stars, despite differences in the rotation rates and masses of the models.
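In practice, the per-star minimisation of eq.~(5) is an assignment problem.
A minimal sketch (Python; the Hungarian algorithm is our substitute for the
brute-force enumeration over combinations described above, and returns the
same optimal one-to-one matching provided there are at least as many
theoretical as observed frequencies) is:

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_chi2(nu_obs, nu_cal):
    # Minimum of eq. (5) over all one-to-one assignments of the
    # N observed frequencies to the computed mode frequencies.
    nu_obs = np.asarray(nu_obs)
    nu_cal = np.asarray(nu_cal)
    cost = (nu_obs[:, None] - nu_cal[None, :])**2
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum() / len(nu_obs), cols
\end{verbatim}

This is evaluated at each of the 21 interpolated values of $\Omega_{\rm
rot}$ for every model; the returned indices record the corresponding mode
identification.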
Hence we proceed to calculate, for given common parameters, the root mean
square of the lowest values $\chi_{{\rm min},i} \equiv (\chi^{2}_{{\rm
min},i})^{1/2}$ found for each star, by means of the following expression:

\begin{equation}
\epsilon= \sqrt{\frac{1}{6}\sum^{6}_{i=1}(\chi_{{\rm min},i})^{2}}.
\end{equation}

\noindent Figure~\ref{fig:sig} shows $\epsilon$ against [Fe/H] for models
without overshooting. For the sake of clarity, only the points with
$\epsilon \leq 3.0\,\mu$Hz are shown. It can be seen that the solution with
the smallest $\epsilon$ corresponds to [Fe/H]=0.0668. A plot with similar
characteristics is obtained for models with overshooting.

\begin{figure}
\resizebox{\hsize}{!} {\includegraphics[width=22cm]{3791fig5.eps}}
\caption{Root mean square, $\epsilon$, of the minimum differences between
observed and theoretical frequencies for models with the same values of
[Fe/H], $\alpha_{\rm ov}$, $d$ and $A$, plotted against [Fe/H]. For clarity,
only models without overshooting are shown. The symbols indicate the values
of $d$ and $A$ as follows: asterisks ($d=5.39$ mag), squares ($d=5.50$ mag),
diamonds ($d=5.60$ mag), circles ($d=5.70$ mag); small symbols (70 Myr),
medium symbols (100 Myr) and large symbols (130 Myr). } \label{fig:sig}
\end{figure}

Since only those identifications obtained from eq.~(5) with low $\chi^{2}$
values are of interest, we require that for each star the solution satisfies
$\chi \leq 3.5\,\mu$Hz. In Fig.~\ref{fig:sig} we reject those solutions for
which at least one star has $\chi> 3.5\,\mu$Hz for all the parameters and
all combinations of observed and theoretical frequencies. With this
restriction we found that only the solutions associated with the models
represented by the four lowest points in Fig.~\ref{fig:sig} should be
considered. Similar solutions were found for models with overshooting. All
these models are listed in Table~\ref{tab:dom} and will be analysed in
detail later.

\begin{table}[]
\begin{center}
\caption{Ensembles of models, with and without overshooting, which have
potentially valid solutions after applying a threshold of $\chi \leq
3.5\,\mu$Hz (see text for details).}
\begin{tabular}{cccccc}
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multispan6 \hfill [Fe/H] = 0.0668 \hfill \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multispan3 \hfill $\alpha_{\rm ov} = 0.00$ \hfill &\multispan3 \hfill
$\alpha_{\rm ov} = 0.20$ \hfill \\\noalign{\smallskip}
\hline
\noalign{\smallskip}
& $d$ & $A$ & & $d$ & $A$ \\
& (mag)&(Myr)& & (mag)&(Myr) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
C1& 5.70 & \,\,70&D1& 5.70 & \,\,70 \\
C2& 5.70 & 130& D2&5.60 & 100\\
C3& 5.60 & \,\,70 &D3&5.60 &130 \\
C4& 5.60 & 100&D4&5.70 &100 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\normalsize
\end{center}
\label{tab:dom}
\end{table}

We shall use a geometrical argument to further reduce the number of
solutions available for each star. To this end, we take into account that
the visibility of each mode depends on the angle of inclination. Following
Pesnell~(\cite{pesnell}), we illustrate in Fig.~\ref{fig:vis} the spatial
response function $S_{lm}$ against $i$, for mode degrees $l=0,1,2$. For
simplicity, limb darkening has been neglected.
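The qualitative behaviour of $S_{lm}$ can be reproduced with a simple proxy.
The sketch below (Python; entirely our own construction, not Pesnell's exact
formulation) assumes the response is proportional to the normalised
associated Legendre function $P_l^{|m|}(\cos i)$ of the mode, which is a
common approximation when limb darkening is neglected:

\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import lpmv

def S(l, m, i_deg):
    # Proxy for the spatial response: normalised associated
    # Legendre function P_l^|m|(cos i) of the mode's harmonic.
    m = abs(m)
    x = np.cos(np.radians(i_deg))
    norm = np.sqrt((2*l + 1) / (4*np.pi)
                   * factorial(l - m) / factorial(l + m))
    return abs(norm * lpmv(m, l, x))
\end{verbatim}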
It can be seen that for $i \approx 90^{\circ}$ only modes with even $l+m$
can be detected, while for $i \approx 0^{\circ}$ only modes with $m = 0$
are visible.

\begin{figure}
\resizebox{\hsize}{!} {\includegraphics[width=22cm]{3791fig6.eps}}
\caption{Variation of the visibility amplitude with the inclination angle
for radial $l=0$ (continuous line) and non-radial $l=1$ (dotted lines) and
$l=2$ (dashed lines) modes.}\label{fig:vis}
\end{figure}

We introduce a hypothesis based on the visibility of the modes: we reject
any solution with one or more modes with $S_{lm} < 0.1$. After applying this
constraint we were able to limit the analysis to four of the sets in
Table~\ref{tab:dom}: two with $\alpha_{\rm ov}=0.0$, C1 $(d=5.70$ mag, $A=70$
Myr$)$ and C3 $(d=5.60$ mag, $A=70$ Myr$)$, and two with $\alpha_{\rm
ov}=0.2$, D1 $(d=5.70$ mag, $A=70$ Myr$)$ and D2 $(d=5.60$ mag, $A=100$
Myr$)$. We note that while for the stars HD 23628 and HD 23194 only a few
identifications ($<$ 8) remain, the number of identifications for V624 Tau,
V534 Tau, V647 Tau and V650 Tau remains larger than $100$ in most cases.

\section{Results and discussion \label{sec:results}}

In order to discuss the results we shall introduce a parameter $\Delta$,
associated with each star in each identification and given by

\begin{equation}
\Delta = \max \left( |\nu_{\rm obs}- \nu_{\rm cal}| \right).
\end{equation}

Table~\ref{tab:delta} lists the number of solutions for the six stars in
each ensemble. Different maximum values of $\Delta$ have been considered.
Certain features can be derived from these general results:

\begin{table*}[]
\begin{center}
\caption{Number of possible solutions for models in each ensemble
for different values of $\Delta$ (in $\mu$Hz). \label{tab:delta}}
\begin{tabular}{lcccccccccc}
\noalign{\smallskip}
\hline
\noalign{\smallskip}
&\multispan{10} \hfill [Fe/H] = 0.0668 \hfill \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
&\multispan5 \hfill $\alpha_{\rm ov} = 0.00$ \hfill &\multispan5 \hfill $\alpha_{\rm ov} = 0.20$ \hfill \\\noalign{\smallskip}
\hline
\noalign{\smallskip}
&$\Delta\,(\mu{\rm Hz})\, \leq$ &1.0 & 2.0&3.0 &3.5 & $\Delta\,(\mu{\rm Hz})\, \leq$ &1.0 &2.0 & 3.0 & 3.5 \\\noalign{\smallskip}
\hline
\noalign{\smallskip}
&$\frac{M}{M_{\odot}}$& & & &&$\frac{M}{M_{\odot}}$& & & & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
&C1 & & & & &D1& & & & \\
V624 Tau&1.70 &0 &0 &12 &18&1.69 &0 &0 &0 &12 \\
V534 Tau&1.67 &0 &0 &0 &16&1.67 & 0& 0& 0&0 \\
V647 Tau&1.70 &0 &0 &0 &2 &1.70 &0 & 0&0 &2 \\
V650 Tau&1.86 &0 &0 &7 &13&1.86 &0 &2 &4 &4 \\
HD 23194&1.80 &0 &1 &2 & 3&1.81 &0 &0 &3 &3 \\
HD 23628&1.84 &0 &1 &2 & 4&1.84 &0 &0 &2 &3 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
&C3 & & & & &D2 & & & & \\
V624 Tau&1.68 &0 &0 &6 &12&1.68 &0 &0 &0 &0\\
V534 Tau&1.66 &0 &0 &0 &24&1.66 & 0& 0& 0&0 \\
V647 Tau&1.69 &0 &0 &0 &2 &1.69 &0 & 0&0 &3 \\
V650 Tau&1.86 &0 &0 &0 &0&1.86 &0 &0 &0 &3\\
HD 23194&1.79 &1 &1 &2 &2&1.79 &1 &1 &1 &2 \\
HD 23628&1.82 &0 &0 &3 &4&1.84 &0 &2 &3 &6 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\normalsize
\end{center}
\end{table*}

\begin{itemize}
\item The resulting masses for each star are similar for all the solutions.
\n\n\\item The number of solutions for a given star at fixed $\\Delta$\nassociated with models with overshooting is smaller than without it.\n\n\\item There are no solutions with $\\Delta \\leq 3.0\\,\\mu$Hz for the stars V534 Tau\nand V647 Tau, while the star HD 23194 shows solutions even with\n$\\Delta \\leq 1.0\\mu$Hz.\n\n\\item In order to find a set of models that has at least one solution simultaneously\nfor all the stars a value of $\\Delta \\simeq 3.5\\,\\mu$Hz is needed.\nThis solution is found for C1 that corresponds to the cluster\nparameters [Fe\/H]=0.0668, $\\alpha_{\\rm ov}=0.0$, $d=5.70$ and\n$A=70$, and it coincides with the minimum in Figure~\\ref{fig:sig}.\n\n\\item\nThe remaining ensembles in Table~\\ref{tab:delta} have at least\none solution simultaneously when\nthe value of $\\Delta$ is increased slightly. In particular, for\nC3 a value of $\\Delta \\simeq 4.5\\,\\mu$Hz is needed to obtain a\nsolution while for D1 and D2 a value of $\\Delta \\simeq 3.8\\,\\mu$Hz\nis required. \n\n\\end{itemize}\n\nWe have found that the\nidentifications and the range of aspect angles derived for each star are \nsimilar for all the solutions. \nThis agreement may be due to the fact that the estimated range of mass and radius,\nand hence of mean densities, is similar for all the solutions. \nAt the ages considered the overshooting has a negligible\neffect on the models.\n\nTable~\\ref{tab:results} summarizes the parameters estimated for\nthe cluster as well as for each star associated with the possible\nidentifications. The radial orders of these\nmodes are consistent with growth rate predictions\n(Su\\'arez et al.\\ \\cite{suarez}). \nWhile these identifications cannot be considered as\ndefinitive but rather as ``best fit solutions'', some\ninformation on the stars could be derived.\nOn the one hand, those stars with smaller masses\n(V624 Tau, V534 Tau and V647 Tau) have more than six frequencies and could have \nsimultaneously excited radial and non-radial oscillations although\nat most two radial modes. \nFollowing Hern\\'andez et al.\\ (\\cite{hernandez}), \nat most two radial modes were found to be excited for the $\\delta$\nScuti stars in the Praesepe cluster.\nOn the other hand, the most massive stars, V650 Tau, HD\n23628 and HD 23194, with a mean mass of 1.83 $M_{\\odot}$\npresent on average few observational frequencies ($N \\le 4$) that\nmight be associated only with non-radial modes. With the\npresent observational data it remains unclear, however, whether this\nis a general trend for $\\delta$ Scuti stars. 
\n\n\\begin{figure}\n\\resizebox{\\hsize}{!} {\\includegraphics[width=22cm]{3791fig7.eps}}\n\\caption{Comparison of observed (solid lines) and theoretical (dot lines)\nfrequencies of the best fit solutions in C1-models (Table~\\ref{tab:delta}).}\n\\label{fig:comp}\n\\end{figure}\n\n\n\\begin{center}\n\\begin{table*}[]\n\\caption{Summary of parameters derived for the Pleiades cluster as well as for\n$\\delta$ Scuti stars with the possible identification of the frequency modes.\n\\label{tab:results}}\n\\begin{tabular}{lcclcc}\n\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\n \\multispan{6}\\hfill [Fe\/H] = 0.0668, $\\alpha_{\\rm ov} = $[0.00-0.20], $m_{V}-M_{V}=$[5.60-5.70], $A=$[70-100]${\\times}10^{6}$ years \\hfill \\\\\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\n Star&$\\nu $ &Identification &Star&$\\nu $ &identification \\\\\n &($\\mu$Hz)& ($n,l,m)$& &($\\mu$Hz)& ($n,l,m)$ \\\\\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\nV624 Tau&242.9& (1,0,0) &V650 Tau& 197.2&(1,1,1) \\\\\n$M = [1.68$-1.72]\\,$M_{\\odot}$ &409.0& (3,1,1) &$M =$ [1.84-1.88]\\,$M_{\\odot}$ & 292.7&(0,2,$-$2),(2,2,2) \\\\\n $\\nu_{\\rm rot}= [3$-5]\\,$\\mu$Hz &413.5& (3,1,0) & $\\nu_{\\rm rot}= [$25-28]\\, $\\mu$Hz & 333.1&(3,1,1),(3,2,2) \\\\\n $i=[37^{\\circ}$-$67^{\\circ}]$&416.4&(3,1,$-$1) & $i=[60^{\\circ}$-$70^{\\circ}]$ & 377.8&(2,2,$-$2),(3,1,0) \\\\\n& 451.7& (3,2,$-$2),(4,0,0) & &&(3,1,$-$1),(3,2,1)\\\\\n &489.4 & (4,1,0),(4,1,1) &&&\\\\\n &529.1 & (4,2,$-$1),(4,2,$-$2)&&&\\\\\n & &(5,0,0) &&&\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\nV534 Tau&234.2& (1,1,1)&HD 23628 &191.8 &(0,2,2) \\\\\n$M = [1.65$-1.69]\\,$M_{\\odot}$ &252.9& (1,1,0)&$M = [1.82$-1.86]\\,$M_{\\odot}$ &201.7 & (1,1,1) \\\\\n$\\nu_{\\rm rot}= [14$-16]\\,$\\mu$Hz &307.6&(2,0,0),(2,1,1)& $\\nu_{\\rm rot}= [24$-26]\\,$\\mu$Hz &376.6&(2,2,$-$2)\\\\\n$i=[59^{\\circ}$-$79^{\\circ}]$ &377.9& (2,2$-$1),(3,0,0) &$i=[53^{\\circ}$-$59^{\\circ}]$&&\\\\\n &379.0 & (2,2$-$1),(3,0,0) & &&\\\\\n & & (3,1,1) &&\\\\\n &448.1& (3,2,$-$1),(4,0,0) &&&\\\\\n &525.0& (4,2,$-$1) &&&\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\nV647 Tau &236.6&(1,1,1) &HD 23194&533.6&(5,1,1),(5,1,$-$1) \\\\\n $M = [1.68$-1.72]\\,$M_{\\odot}$ &304.7&(1,2,1) &$M = [1.78$-1.82]\\,$M_{\\odot}$&574.9&(5,2,0),(5,2,1) \\\\\n $\\nu_{\\rm rot}= 10-11\\,\\mu$Hz&315.6&(2,0,0),(2,1,1)& $\\nu_{\\rm rot}= [14$-23]\\,$\\mu$Hz & & \\\\\n $i=17^{\\circ}$-18$^{\\circ}$ &374.4&(2,2,$-$1)&$i=7^{\\circ}$-$12^{\\circ}$ &&\\\\\n &405.8&(3,1,0) &&&\\\\\n &444.1&(3,2,0) &&&\\\\\n\\noalign{\\smallskip}\n\\hline\n\\hline\n\\noalign{\\smallskip}\n\\end{tabular}\n\\end{table*}\n\\end{center}\n\n\n\n\nA comparison of observed and computed frequencies is shown in\nFigure~\\ref{fig:comp}. In each panel the solid lines represent the \nobservational frequencies with their corresponding amplitudes.\nThe ``best fit'' theoretical frequencies\nare represented by dotted lines. The stars are arranged from top to bottom\naccording to their magnitude. \n\nFrom this figure it follows that\nthere is an asymmetric triplet in the observational spectrum of\nV624 Tau. 
We have found that for all the solutions the identification is always an
$(l=1,\,n=3)$ mode split by rotation. In turn, this implies that an
independent estimate of the mean rotation rate can be obtained from the
rotational splitting, given up to second order by

\begin{equation}
\frac{\nu_{-m} - \nu_{m}}{2 m} \sim \frac{\Omega_{\rm rot}}{2 \pi}
(1-C_{nl}),\;\;\;\; m=1,2,\ldots,l\,.
\end{equation}

\noindent
For the V624 Tau triplet this gives $(\nu_{-1} - \nu_{1})/2 = (416.4 -
409.0)/2 = 3.7\,\mu$Hz which, given the small value of $C_{nl}$ for these
$p$ modes, is consistent with the range $\nu_{\rm rot}=[3$--$5]\,\mu$Hz
quoted in Table~\ref{tab:results}. Indeed, we have found very close
agreement between the cyclic rotational frequency computed from the above
expression and that derived from the mode identification, the differences
being only $\sim 0.03$--$0.04\,\mu$Hz. We have also found fairly good
agreement between the observations and the models in the differences
$\nu_{-m}-\nu_{0}$ and $\nu_{+m}-\nu_{0}$.

\medskip
If the estimated inclination angles for the stars are correct, all of them
could be rapid rotators ($\nu_{\rm rot}>10\,\mu$Hz) except for V624 Tau,
with $\nu_{\rm rot}\leq 5\,\mu$Hz. HD 23194 and V647 Tau, whose projected
rotational velocities are low, $v\,\sin\,i = 20$ km s$^{-1}$, would have
equatorial velocities as high as 90 and 70 km s$^{-1}$, respectively.

\medskip
In the present work the distance is found to be $m_{V}-M_{V} =
5.60$--$5.70$ mag $(132$--$138$ pc), in very good agreement with recent
independent determinations based on different techniques and data (Pan et
al.\ \cite{pan}; Munari et al.\ \cite{munari}; Soderblom et al.\
\cite{soderblom}; Percival et al.\ \cite{percival}). However, although the
isochrones computed with sub-solar metallicity ($Z=0.015$ and $Y=0.30$)
match the observational data well with a distance modulus of $d=5.39$ mag,
in agreement with the HIPPARCOS value within its 1-$\sigma$ error, the
resulting solutions (shown by asterisks in Fig.~\ref{fig:sig}) do not lead
to good fits. Moreover, the plot of $\epsilon$ against $d$, which we do not
reproduce here, shows better fits at larger distances. For solar
metallicity, unrealistically high helium abundances are required in order to
reproduce the HIPPARCOS main sequence of the Pleiades (e.g.\ $Y=0.34$,
Belikov et al.\ \cite{belikov}; $Y=0.37$, Pinsonneault et al.\
\cite{pinsonneault}).

\section{Conclusions \label{sec:conclusions}}

In this study we have performed a seismological analysis of six $\delta$
Scuti stars belonging to the Pleiades cluster in order to identify their
frequency modes. To the best of our knowledge this group of variables
constitutes the most statistically significant sample of $\delta$ Scuti
stars analysed in an open cluster to date.

Rotational effects were considered at different stages of the analysis:
first, when determining the positions of the stars in the HR diagram;
second, when computing stellar models that approximately reproduce the
evolutionary stage of the stars; and finally, when computing theoretical
oscillation frequencies in order to construct seismic models for the target
stars. The comparison between observed and computed frequencies was carried
out by a least-squares fit. There is a large number of possible solutions,
partly because neither the equatorial velocity nor the inclination angle is
known a priori. In order to limit the number of possible solutions we used
the relationship between the amplitude visibility, $S_{lm}$, and the aspect
angle, $i$. As a result we found few solutions for each star, suggesting the
existence of only $p$ modes of low degree and low radial order in all the
stars.
For the less massive stars, solutions with at most two radial modes were
also possible. These solutions have associated ranges of stellar parameters
for each star. Most of the stars could be rapid rotators according to the
estimated inclination angles, $i$.

The best fits between observed and theoretical frequencies are achieved for
global cluster parameters of [Fe/H] = 0.0668 $(Z_{0}=0.02,\, X_{0}=0.70)$,
$A=70 - 100\,$Myr and $d=5.60 - 5.70$ mag. The derived distance modulus for
the Pleiades cluster agrees with that of the main-sequence fitting method,
in spite of the fact that the isochrones with sub-solar metallicity closely
match the Pleiades HR diagram at the HIPPARCOS distance.

\begin{acknowledgements}
This work has been partially funded by grants AYA2001-1571, ESP2001-4529-PE
and ESP2004-03855-C03-03 of the Spanish national research plan. L.F.M.
acknowledges partial financial support from the Spanish AECI. J.C.S.
acknowledges financial support from the Spanish Plan of Astronomy and
Astrophysics under project AYA2003-04651, from the ``Consejer\'{\i}a de
Innovaci\'on, Ciencia y Empresa'' of the ``Junta de Andaluc\'{\i}a'' local
government, and from the Spanish Plan Nacional del Espacio under project
ESP2004-03855-C03-01.
\end{acknowledgements}

\section{Introduction}
The problem of predicting network structure is both of great practical
importance and a case study in understanding the usefulness of deep learning
in network settings. An accurate model can be used to suggest friend
recommendations, product recommendations, and even predict individual user
actions. A system which solves this problem is generally referred to in the
literature as a recommender system, and such systems are quite common at
large Internet companies such as Amazon \cite{Linden:2003:ARI:642462.642471},
Netflix \cite{Zhou:2008:LPC:1424237.1424269}, and Google.

The main approaches typically taken fall into two categories:
\textit{content-based} and \textit{collaborative filtering} approaches. The
first makes use of text, meta-data, and other features in order to identify
potentially related items, while the latter leans more towards making use of
aggregated behavior and a large number of training samples (i.e., users and
businesses). Collaborative filtering approaches have proven useful in
recommender systems in industry, and are typically the preferred method due
to how expensive it is (in both computational resources and engineering
effort) to extract useful features from large amounts of meta-data. However,
with advances in deep learning (extracting features from videos and text
that are useful for many tasks), it seems feasible that revisiting
content-based approaches with additional network-level data will prove
fruitful.

In this paper, we explore a novel method combining deep learning feature
extraction (a \textit{content-based} approach) with network prediction
models (a quasi-\textit{collaborative filtering} approach). We focus on a
real-world, practical network: the Yelp Review Network.
The network consists of 4.7M reviews (edges), 156K businesses, and 200K
pictures, covering over 12 metropolitan areas in the United States.

Specifically, we seek to model the problem of predicting a user's star
rating of a previously unrated business by using features about the
business, features about the user, as well as existing interactions between
the user and other businesses.

From a general viewpoint, we hypothesize that the final star rating given by
a user is a mixture of all of the above interactions. In particular, we
expect that the rating at time $t$ between user $i$ and business $j$ can be
modeled as:
$$
r_t = f(i_t, j_t, \delta_{i,j,t}) + \eta_t\,, \qquad
\eta_t \sim \mathcal{N}(0,\epsilon_{i,j,t})
$$

Here, $i_t$ is the overall user bias at time $t$. For example, some users
simply tend to give higher or lower ratings based on previous experience --
one could argue this is inherent to the user directly. We also have $j_t$,
the overall business bias at time $t$. For example, some businesses are
objectively better across the board, by having better food, websites, or
locations. Finally, the term $\delta_{i,j,t}$ is an interaction term
reflecting the interaction between this user and this business at time $t$.
One might imagine, for example, that a user who really enjoys Mexican food
will tend to give those restaurants a higher rating.

In the end, these three terms should be combined in some way (with
normalization, etc.) to arrive at a final rating. As such, we essentially
have three component models which can be combined to give better predictive
power:

\begin{itemize}
\item a user model, trained only on user properties
\item a business model, trained only on business properties
\item an interaction model, trained on a mixture of both sets of properties
with additional features known only to the network (such as previous
business interactions)
\end{itemize}

\section{Related Work}
In general, there are three areas of interest in the literature. We have (1)
work which focuses on and provides techniques for predicting results based
on network structure, (2) work which has applied some ML techniques to
features extracted from networks (and sometimes to the network elements
themselves), and (3) work which throws away much of the network structure
and focuses exclusively on using the content data to make predictions. All
of these are supervised learning methods with varying degrees of complexity.
We provide a brief overview of them, followed by a section discussing the
mathematical underpinnings of the models.

\subsection{Graph-Based Approaches}

Liben-Nowell and Kleinberg \cite{TheLinkPredictionProblemForSocialNetworks}
formalize the \textit{link prediction problem} and develop a proximity-based
approach to predict the formation of links in a large co-authorship network.
The model focuses on the network topology alone, ignoring any additional
meta-data associated with each node, since its basic hypothesis is that the
known network connections offer sufficient insight to accurately predict
network growth over time. They formally tackle the following problem: given
a social graph $G = (V,E)$, where each edge represents an interaction
between $u$ and $v$ at a particular timestamp $t$, can we use a subset of
the graph across time (i.e., with edges only in the interval $[t,t']$) to
predict a future subset of the graph $G'$? The methods presented ignore the
creation of new nodes, focusing only on edge prediction.

Multiple predictors $p$ are presented, each relying only on network
structure.
For example, two intuitive predictors (among the many others studied) for
the creation of an edge between $x$ and $y$ are:

\begin{enumerate}
\item graph distance -- the (negated) length of the shortest path between
$x$ and $y$
\item preferential attachment -- $|\Gamma(x)| \cdot |\Gamma(y)|$, where
$\Gamma: V \to 2^V$ maps each node to its set of neighbors
\end{enumerate}

Each of the above predictors $p$ outputs a ranked list of most likely edges.
The paper evaluates effectiveness by calculating the percentage of
top-ranked edges which are correctly predicted to exist in the test data.
The baselines are a random predictor built on the training graph and the
graph distance predictor. The predictors are evaluated over five different
co-authorship networks.

The predictors can be classified into essentially three categories:

\begin{itemize}
\item Predictors based on local network structure
\item Predictors based on global network structure
\item Meta-predictors based on a mixture of the above two
\end{itemize}

All predictors performed above the random baseline, on average. The
hitting-time predictors performed below the graph distance baseline, and the
remaining predictors exceeded that baseline by a much narrower positive gap.
Most predictors performed on par with a simple common-neighbors predictor.

\subsection{Introducing ML}

Further work by Leskovec et al.\
\cite{Leskovec:2010:PPN:1772690.1772756} introduces the nuance of both
``positive'' and ``negative'' relationships to the link prediction problem,
addressing limitations of previous work. Concretely, it seeks to predict the
sign of each edge in a graph based on the local structure of the surrounding
edges. Such predictions can be helpful in determining future interactions
between users, as well as in determining the polarization of groups and
communities.

Leskovec et al.\ introduce the ``edge sign prediction problem'' and study it
in three social networks where explicit trust/distrust is recorded as part
of the graph structure, work which is later expanded by Chiang et al.\
\cite{Chiang:2011:ELC:2063576.2063742}. The explicit sign of an edge is
given by a vote for or a vote against, for example, in the Wikipedia
election network. They find that their prediction performance degrades only
slightly across these three networks, even when the model is trained on one
network and evaluated against another.

They also introduce the social-psychological theories of balance and status
and demonstrate that these seem to agree, in some predictions, with the
models explored.

Furthermore, they introduce the novel idea of using a machine learning
approach built on top of the network features to improve the performance of
the model. Rather than rely directly on any one network feature, they
instead extract these features from the network and use them in a machine
learning model, achieving strong performance.
The features selected are, roughly speaking:

\begin{itemize}
\item Degree features for the pair $(u,v)$ -- there are seven such features:
(1) the number of incoming positive edges to $v$, (2) the number of incoming
negative edges to $v$, (3) the number of outgoing positive edges from $u$,
(4) the number of outgoing negative edges from $u$, (5) the total number of
common neighbors between $u$ and $v$, (6) the out-degree of $u$, and (7) the
in-degree of $v$.
\item Triad features -- we consider the 16 distinct triads produced by
$u,v,w$ and count how many of each type occur.
\end{itemize}

The above features are fed into a logistic regression model and are used to
predict the sign of unknown edges with good accuracy.

Overall, while previous work on network prediction problems has made use of
machine learning, most of it still relies on relatively simple models and
has not yet made the jump to deeper architectures.

\subsection{Content-Based Deep Learning}
Hasan et al.\ \cite{Hasan06linkprediction} introduce the very important idea
of using node-level features to assist in link prediction. The paper also
significantly expands the set of possible ML models, demonstrating that for
their data, SVMs work best for edge prediction. They formulate link
prediction as a supervised machine learning problem. Formally, we take two
snapshots of a network at different times $t$ and $t'$, where $t' > t$. The
training set is generated by choosing pairs of nodes $(u,v)$ which are not
connected by an edge in $G_t$, and labeling them as positive if they are
connected in $G_{t'}$ and negative if they are not connected in $G_{t'}$.
The task then becomes a classification problem: predict whether the pair
$(u,v)$ is positive or negative.

In particular, they make use of the following features:

\begin{itemize}
\item Proximity features -- computed from the similarity between nodes.
\item Aggregated features -- how ``prolific'' a scientist is, and other
features that belong to each node.
\item Network topology features -- (1) the shortest distance between pairs
of nodes, (2) the number of common neighbors, (3) Jaccard's coefficient,
etc.
\end{itemize}

The authors rigorously describe the sets of features they found most
predictive, and take into account node-level information extractable from
the network as well as some amount of ``meta''-level information (for
example, how similar two nodes are to each other). The results demonstrate
great success (with accuracies up to 90\% compared to a baseline of around
50\%). Overall, the authors present a novel approach of using machine
learning to assist in the link prediction problem by rephrasing it as a
supervised learning task.

\section{Methodology and Data}
In this section, we describe the architecture of our feature extraction
networks and lay the groundwork for our predictive models. We define our
loss function and present additional details used for training, such as the
learning rate and other hyper-parameters.

We convert the original data from JSON format to CSV. The data set contains
156,639 businesses (with 101 distinct attributes), 196,278 photos
(associated with businesses), 1,028,802 tips (these are between users and
businesses), 135,148 check-ins (again, associated with each business), and
1,183,362 users.

\subsection{Dataset}
Our dataset is the set released for the Yelp Data Set Challenge Round 10
\cite{YelpDataSet} in 2017.
The entirety of the dataset consists of the following entities:\n\\begin{itemize}\n\\item \\textbf{Businesses}: Consists of exactly 156,639 businesses. It contains data about businesses on Yelp including geographical location, attributes, and categories.\n\\item \\textbf{Reviews}: 4,736,897 reviews. It contains full review text (for NLP processing) as well as the user id that wrote the review and the business id the review is written for. It also contains the number of stars given, as well as the number of useful, funny, and cool up-votes (finally, it also contains the date).\n\\item \\textbf{Users}: 1,183,362 Yelp users. It includes the user's friend mapping and all the meta-data associated with the user. Just this single dataset consists contains 538,440,966 edges.\n\\item \\textbf{Tips}: 1,028,802 tips. Tips are associated with each business and are written by users. Tips are similar to reviews, but without rating and usually much shorter.\n\\item \\textbf{Photos:} 196,278 Photos, each associated with businesses. The photos are also associated with captions.\n\\item \\textbf{Check-ins:} 135,148 check-ins on a business (this a business only attribute).\n\\end{itemize}\n\nAs we can see from above, the dataset is relatively rich, with many possible graph structures to study on top of it. In general, given that we are trying to predict review ratings, we focus on the following bipartite graph with users and businesses:\n\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}[scale=0.2]\n\\tikzstyle{every node}+=[inner sep=0pt]\n\\draw [black] (21.9,-11.9) circle (3);\n\\draw (21.9,-11.9) node {$1$};\n\\draw [black] (21.9,-20.2) circle (3);\n\\draw (21.9,-20.2) node {$2$};\n\\draw [black] (21.9,-28.1) circle (3);\n\\draw (21.9,-28.1) node {$3$};\n\\draw [black] (21.9,-36.9) circle (3);\n\\draw (21.9,-36.9) node {$4$};\n\\draw [black] (44.1,-8.1) circle (3);\n\\draw (44.1,-8.1) node {$1$};\n\\draw [black] (44.1,-8.1) circle (2.4);\n\\draw [black] (44.1,-18.7) circle (3);\n\\draw (44.1,-18.7) node {$2$};\n\\draw [black] (44.1,-18.7) circle (2.4);\n\\draw [black] (44.1,-27.4) circle (3);\n\\draw (44.1,-27.4) node {$3$};\n\\draw [black] (44.1,-27.4) circle (2.4);\n\\draw [black] (44.1,-35.9) circle (3);\n\\draw (44.1,-35.9) node {$4$};\n\\draw [black] (44.1,-35.9) circle (2.4);\n\\draw [black] (44.1,-43.8) circle (3);\n\\draw (44.1,-43.8) node {$5$};\n\\draw [black] (44.1,-43.8) circle (2.4);\n\\draw [black] (24.86,-11.39) -- (41.14,-8.61);\n\\fill [black] (41.14,-8.61) -- (40.27,-8.25) -- (40.44,-9.23);\n\\draw (34.64,-10.84) node [below] {$reviews$};\n\\draw [black] (24.36,-13.62) -- (41.64,-25.68);\n\\fill [black] (41.64,-25.68) -- (41.27,-24.81) -- (40.7,-25.63);\n\\draw [black] (23.94,-14.1) -- (42.06,-33.7);\n\\fill [black] (42.06,-33.7) -- (41.89,-32.77) -- (41.15,-33.45);\n\\draw [black] (24.89,-20) -- (41.11,-18.9);\n\\fill [black] (41.11,-18.9) -- (40.27,-18.46) -- (40.34,-19.46);\n\\draw [black] (24.75,-21.13) -- (41.25,-26.47);\n\\fill [black] (41.25,-26.47) -- (40.64,-25.75) -- (40.33,-26.7);\n\\draw [black] (24.66,-35.72) -- (41.34,-28.58);\n\\fill [black] (41.34,-28.58) -- (40.41,-28.44) -- (40.8,-29.35);\n\\draw [black] (24.35,-29.83) -- (41.65,-42.07);\n\\fill [black] (41.65,-42.07) -- (41.29,-41.2) -- (40.71,-42.01);\n\\draw [black] (24.73,-29.09) -- (41.27,-34.91);\n\\fill [black] (41.27,-34.91) -- (40.68,-34.17) -- (40.35,-35.11);\n\\draw [black] (23.73,-34.52) -- (42.27,-10.48);\n\\fill [black] (42.27,-10.48) -- (41.38,-10.8) -- (42.18,-11.41);\n\\draw [black] 
(24.13,-26.09) -- (41.87,-10.11);\n\\fill [black] (41.87,-10.11) -- (40.94,-10.27) -- (41.61,-11.01);\n\\end{tikzpicture}\n\\caption{Simplified graph model of user reviews of businesses. The graph is bipartite, with users and businesses connected by directed \"review\" edges.}\n\\label{fig:graph_structure}\n\\end{figure}\n\nand further propose making use of the friend-friend explicit graph (it is possible that it might be meaningful to see if we can find any relationship between friend reviews and user reviews) and the tip edges (without ratings, but possible meaningful information about a business). With this additional information, the structure of the graph itself becomes increasingly complex, as shown in Diagram \\ref{fig:graph_complex_structure}.\n\n\\begin{figure}[h!]\n\\centering\n\\begin{tikzpicture}[scale=0.2]\n\\tikzstyle{every node}+=[inner sep=0pt]\n\\draw [black] (28.4,-27.4) circle (3);\n\\draw (28.4,-27.4) node {$1$};\n\\draw [black] (30.9,-16.9) circle (3);\n\\draw (30.9,-16.9) node {$2$};\n\\draw [black] (20.3,-19.9) circle (3);\n\\draw (20.3,-19.9) node {$3$};\n\\draw [black] (21.9,-36.9) circle (3);\n\\draw (21.9,-36.9) node {$4$};\n\\draw [black] (44.1,-8.1) circle (3);\n\\draw (44.1,-8.1) node {$1$};\n\\draw [black] (44.1,-8.1) circle (2.4);\n\\draw [black] (44.1,-18.7) circle (3);\n\\draw (44.1,-18.7) node {$2$};\n\\draw [black] (44.1,-18.7) circle (2.4);\n\\draw [black] (44.1,-27.4) circle (3);\n\\draw (44.1,-27.4) node {$3$};\n\\draw [black] (44.1,-27.4) circle (2.4);\n\\draw [black] (44.1,-35.9) circle (3);\n\\draw (44.1,-35.9) node {$4$};\n\\draw [black] (44.1,-35.9) circle (2.4);\n\\draw [black] (44.1,-43.8) circle (3);\n\\draw (44.1,-43.8) node {$5$};\n\\draw [black] (44.1,-43.8) circle (2.4);\n\\draw [black] (33.4,-15.24) -- (41.6,-9.76);\n\\fill [black] (41.6,-9.76) -- (40.66,-9.79) -- (41.22,-10.62);\n\\draw (41.47,-13) node [below] {$5\\mbox{ }review$};\n\\draw [black] (41.115,-27.585) arc (-94.24841:-162.7529:11.037);\n\\fill [black] (41.11,-27.58) -- (40.35,-27.03) -- (40.28,-28.02);\n\\draw (33.5,-25.71) node [below] {$tip$};\n\\draw [black] (30.21,-19.82) -- (29.09,-24.48);\n\\fill [black] (29.09,-24.48) -- (29.77,-23.82) -- (28.79,-23.59);\n\\draw (28.89,-21.73) node [left] {$friend$};\n\\draw [black] (29.09,-24.48) -- (30.21,-19.82);\n\\fill [black] (30.21,-19.82) -- (29.53,-20.48) -- (30.51,-20.71);\n\\draw [black] (22.5,-21.94) -- (26.2,-25.36);\n\\fill [black] (26.2,-25.36) -- (25.95,-24.45) -- (25.27,-25.19);\n\\draw (21.45,-24.14) node [below] {$friend$};\n\\draw [black] (26.2,-25.36) -- (22.5,-21.94);\n\\fill [black] (22.5,-21.94) -- (22.75,-22.85) -- (23.43,-22.11);\n\\draw [black] (26.71,-29.88) -- (23.59,-34.42);\n\\fill [black] (23.59,-34.42) -- (24.46,-34.05) -- (23.63,-33.48);\n\\draw (24.55,-30.8) node [left] {$friend$};\n\\draw [black] (23.59,-34.42) -- (26.71,-29.88);\n\\fill [black] (26.71,-29.88) -- (25.84,-30.25) -- (26.67,-30.82);\n\\draw [black] (21.62,-33.91) -- (20.58,-22.89);\n\\fill [black] (20.58,-22.89) -- (20.16,-23.73) -- (21.15,-23.64);\n\\draw (21.73,-28.32) node [right] {$friend$};\n\\draw [black] (20.58,-22.89) -- (21.62,-33.91);\n\\fill [black] (21.62,-33.91) -- (22.04,-33.07) -- (21.05,-33.16);\n\\draw [black] (18.908,-36.812) arc (-98.51807:-309.93742:12.583);\n\\fill [black] (18.91,-36.81) -- (18.19,-36.2) -- (18.04,-37.19);\n\\draw (8.57,-18.2) node [left] {$friend$};\n\\draw [black] (18.908,-36.811) arc (-98.5328:-309.92269:12.582);\n\\fill [black] (28.85,-14.72) -- (28.56,-13.82) -- (27.92,-14.59);\n\\draw [black] 
(31.02,-25.95) -- (41.48,-20.15);\n\\fill [black] (41.48,-20.15) -- (40.53,-20.1) -- (41.02,-20.98);\n\\draw (37.8,-23.55) node [below] {$tip$};\n\\draw [black] (20.708,-16.933) arc (166.48658:66.25769:15.116);\n\\fill [black] (41.49,-6.63) -- (40.96,-5.85) -- (40.56,-6.76);\n\\draw (27.15,-6.41) node [above] {$tip$};\n\\draw [black] (24.9,-36.77) -- (41.1,-36.03);\n\\fill [black] (41.1,-36.03) -- (40.28,-35.57) -- (40.33,-36.57);\n\\draw [black] (24.76,-37.79) -- (41.24,-42.91);\n\\fill [black] (41.24,-42.91) -- (40.62,-42.19) -- (40.32,-43.15);\n\\draw (34.33,-39.78) node [above] {$tip$};\n\\draw [black] (24.66,-35.72) -- (41.34,-28.58);\n\\fill [black] (41.34,-28.58) -- (40.41,-28.44) -- (40.8,-29.35);\n\\draw (36.85,-32.7) node [below] {$4\\mbox{ }review$};\n\\end{tikzpicture}\n\\caption{Proposed complex graph model based on users, reviews, businesses, user-user interactions, and tips}\n\\label{fig:graph_complex_structure}\n\\end{figure}\n\n\\subsection{Predictive Models}\nThe rich meta-data about the network makes it quite interesting to analyze, and opens up many avenues for possible improvements in terms of link prediction. We have multiple networks available for exploration, including the \\textit{user-user} network (based on friendships, comments, etc.) and the \\textit{user-business} network, based on reviews given by users to specific businesses.\n\nFurthermore, we also have the raw text of the Yelp reviews as well as geographical information about the businesses and photos for some businesses, which opens the possibility of using modern visual image recognition and natural language processing techniques to further extract node-level meta-data to incorporate into our model.\n\nConcretely, we focus our work on predicting the rating that a user will assign a particular business. This problem has immediate and obvious utility: it would help users discover new businesses to visit (if the predicted rating is high) and also help businesses determine positive and negative trends. The dataset can be broken into three sets so we can train, evaluate, and test our models. One set will have edges, reviews, and information for businesses for a certain time $[t_0, t_1)$, the second set will have the edges created from $[t_1, t_2)$ and will be used to cross-validate our models and tune hyper-parameters, and the third set will be a held-out set containing edges from $[t_2, t_3)$ and will be used for testing only.\n\n\\subsection{Network-Only Predictor}\nWe first present a predictive model which focuses ``only'' on the structure of the graph, and uses this information to predict the ratings. For this purpose, we focus on the smaller user\/business graph as shown in Figure \\ref{fig:graph_structure}. We therefore have an undirected, weighted graph. In later sections, we explore alternative representations as well as additional data which can be input to our learning models.\n\n\n\\subsubsection{Data Preprocessing}\nGiven this representation $G$, we define three sets: training, validation, and test. We split the graph naturally -- edges and nodes are added to the graph as time progresses. However, we take special care to only use the nodes which remained and were available in the graph for the extent of our study. We can see the distribution of the reviews (edges) in our graph over time in Figure \\ref{fig:reviews_over_time}. Given the skewed nature of the graph, we subset it to include only the latest reviews. 
Let us consider $G, G_{train}, G_{val}$ and $G_{test}$ where $G = G_{train} \\cup G_{val} \\cup G_{test}$. We first perform the following to obtain $G$:\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{distribution_of_reviews_over_time}\n\\caption{Number of reviews in the original dataset over time. We can see readily that the number of reviews increases drastically in the later years.}\n\\label{fig:reviews_over_time}\n\\end{figure}\n\n\\begin{itemize}\n\\item Remove all reviews before ``2016-08-24''. This is primarily to (1) remove bias from early users and reviewers and instead focus on later reviews (see Figure \\ref{fig:reviews_subset_over_time} for the distribution over time, which is far more uniform) and (2) reduce the size of our graphs to a manageable data set. We then have a graph with 428,795 users, 107,138 businesses, and 1,000,277 edges. We therefore have an extremely sparse graph, as only about $0.0103\\%$ of all possible edges even exist.\n\\end{itemize}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{distribution_of_reviews_subset_over_time}\n\\caption{Number of reviews by date for $G$}\n\\label{fig:reviews_subset_over_time}\n\\end{figure}\n\nWe now proceed to split the graph into $G_{train}, G_{test}$ and $G_{val}$. We split time-wise using three split-points, $t_0, t_1,$ and $t_2$. Then we have $G_{train}$ as the subset of $G$ from $[t_0, t_1)$ and $G_{val}$ as the subset in $[t_1, t_2)$, with $G_{test}$ containing the subset to the latest date $[t_2, \\infty)$. Furthermore, we restrict the nodes in $G_{val}$ and $G_{test}$ to be the same set as those in $G_{train}$ to avoid running into issues with unseen nodes in the network.\n\nAfter the above, we end up with the following networks:\n\\begin{itemize}\n\\item $G_{train} = (V_{train}, E_{train})$ with $|V_{train}| = 375,149$, of which $283,085$ are users and $92,064$ are businesses. We also have $|E_{train}| = 599,133$, which makes for an incredibly sparse graph (only about $0.0014\\%$ of possible edges exist, even taking into account the bipartite structure of the graph).\n\\item $G_{val} = (V_{val}, E_{val})$ with $|V_{val}| = 75,466$ and $|E_{val}| = 88,079$.\n\\item $G_{test} = (V_{test}, E_{test})$ with $|V_{test}| = 67,125$ and $|E_{test}| = 73,730$.\n\\end{itemize}\n\nNote that we have split the data essentially into an $80\\%, 10\\%, 10\\%$ split. For more details on the graph structures, see Appendix \\ref{sec:graph_distributions}. Given the sparsity of the graph, we focus on predicting the star rating given that an edge is created between user $u$ and business $b$. As such, our dataset does not contain any negative examples, and we leave the problem of predicting whether an edge exists at all open for future investigation. Furthermore, given the extreme size of our data, we process and train our models using Google Compute Engine with 8 CPUs and 30GB of memory.\n\n
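As an illustration, the time-based split described above can be sketched in a few lines of Python (identifiers, dates, and split points below are purely illustrative):\n\\begin{verbatim}\nfrom datetime import date\n\n# Each review is a (user_id, business_id, stars, date) tuple.\nreviews = [('u1', 'b1', 5, date(2016, 9, 1)),\n           ('u2', 'b1', 3, date(2017, 2, 14)),\n           ('u1', 'b2', 4, date(2017, 8, 30))]\n\nCUTOFF = date(2016, 8, 24)                    # drop early reviews\nT1, T2 = date(2017, 1, 1), date(2017, 6, 1)   # split points t_1, t_2\n\nrecent = [r for r in reviews if r[3] >= CUTOFF]\ntrain = [r for r in recent if r[3] < T1]\nval = [r for r in recent if T1 <= r[3] < T2]\ntest = [r for r in recent if r[3] >= T2]\n\n# Keep only validation\/test edges whose endpoints appear in training.\nseen = {x for u, b, _, _ in train for x in (u, b)}\nval = [r for r in val if r[0] in seen and r[1] in seen]\ntest = [r for r in test if r[0] in seen and r[1] in seen]\n\\end{verbatim}\n\n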
\\subsubsection{Graph Features}\nNow that we have partitioned our data into training, validation, and testing sets, we move forward with calculating rating prediction features. We focus on computing multiple properties from our generated graph, namely the following (a small sketch of how such features can be computed is given after the list):\n\n\\begin{itemize}\n\\item \\textbf{Number of Common Raters:} For each pair $(u,b)$ of user and business, we calculate the number of common raters. A common rater is an extension of a neighbor, in the sense that this is another user who shares a rated business with $u$ and has also rated $b$.\n\\item \\textbf{Number of Common Businesses:} For each pair $(u,b)$ of user and business, we calculate the number of common businesses. A common business is the analogous extension on the business side: a business that shares a rater with $b$ and has also been rated by $u$.\n\\item \\textbf{Average Rating of Common Raters:} For each pair $(u,b)$ of user and business, we calculate the average rating given by the common raters defined above.\n\\item \\textbf{Average Rating of Common Businesses:} For each pair $(u,b)$ of user and business, we calculate the average rating of the common businesses defined above.\n\\item \\textbf{Preferential Attachment}: We take the product of the average star rating of businesses rated by $u$ and the average star rating of raters of business $b$. We expect this value to indicate the relative popularity.\n\\item \\textbf{PageRank:} We treat the graph as an unweighted undirected graph, calculate the PageRank value for all nodes, and assign the sum for the pair as a feature.\n\\item \\textbf{Eigenvector Centrality}: We calculate the global centrality of a node (compared to its neighbors) and use the sum for the pair as a feature.\n\\item \\textbf{Adamic-Adar measure}: We look at common neighbors (as defined previously) and sum the inverse of the sum of their degrees (considering the graph to be weighted). Intuitively, this creates a measure of similarity where nodes with similarly-degreed neighbors are more similar.\n\\end{itemize}\n\n\nOnce calculated for our training, validation, and test data sets, all of the features are normalized to zero mean and unit variance, as is standard practice in machine learning problems.\n\n
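To make the first of these concrete, the following minimal Python sketch (using plain dictionaries and an illustrative four-rating dataset; the helper names are our own) computes common raters and their average rating:\n\\begin{verbatim}\nfrom collections import defaultdict\n\n# ratings[(u, b)] = stars\nratings = {('u1', 'b1'): 5, ('u2', 'b1'): 3,\n           ('u1', 'b2'): 4, ('u2', 'b2'): 2}\nby_user, by_biz = defaultdict(set), defaultdict(set)\nfor (u, b) in ratings:\n    by_user[u].add(b)\n    by_biz[b].add(u)\n\ndef common_raters(u, b):\n    # Users sharing a rated business with u who have also rated b.\n    others = {v for b2 in by_user[u] for v in by_biz[b2]} - {u}\n    return others & by_biz[b]\n\ndef avg_rating_of_common_raters(u, b):\n    vals = [ratings[(v, b)] for v in common_raters(u, b)]\n    return sum(vals) \/ len(vals) if vals else 0.0\n\nprint(common_raters('u1', 'b1'), avg_rating_of_common_raters('u1', 'b1'))\n\\end{verbatim}\n\n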
\\subsubsection{Models for Prediction}\nWe now present and describe the machine learning models trained on the extracted features. Let $X$ be our training matrix of shape $(n,d)$, where $n$ is the number of training examples and $d$ is the number of features extracted (in our case, $d = 9$ and $n = 599,133$). The most straightforward approach is simply to take our extracted features and directly train our models to predict the ratings, on a scale from $0$ to $5$. We now present the models we attempted.\n\n\\begin{enumerate}\n\\item \\textbf{Linear Regression}: We fit a standard linear regressor to our input feature set. That is to say, our model takes the form $r_i = \\sum_{d=1}^{D} w_d x_{id}$, where $x_i$ is a single feature vector in our training set and $r_i$ is the corresponding rating. We train the model directly on the generated data from above and on the raw ratings for each edge. Linear regression is a simple model which attempts to minimize the mean squared error, and can be thought of as a data generating process where we assume the ratings $r$ are generated by $r = W^TX + b + \\epsilon$, where $\\epsilon \\sim N(0,\\sigma)$ is some noise introduced into the system. The model is then able to recover the best-fitting hyperplane $W$ such that the error is minimized. In terms of loss functions, we can consider this as minimizing the loss function $L$:\n\\begin{align*}\nL(W,b; X,y) &= \\sum_{i=1}^{|X|} (\\hat{y}_i - y_i)^2 \\\\\n&=\\sum_{i=1}^{|X|} (w^Tx_i + b - y_i)^2 \\\\\n&=\\sum_{i=1}^{|X|} \\left(\\sum_{d = 1}^{|x_i|} w_dx_{id} + b - y_i\\right)^2\n\\end{align*}\nwhere $x_i$ is the $i$-th row in our feature matrix $X$. We minimize over the parameters $W$ and $b$.\n\\item \\textbf{Ridge Regression}: This is an improvement over linear regression. A possible issue with plain linear regression is the possibility of over-fitting to the training set: it is possible to generate an extremely ``peaky'' set of weights such that the training error is reduced significantly yet the test error increases. The issue here is that we lack any term enforcing generalization in our loss function. The most typical method to enforce this generalization is to add a regularizer on the weights $W$. The loss function then becomes:\n\\begin{align*}\nL(W, b; X,y) &= \\sum_{i=1}^{|X|} (\\hat{y}_i - y_i)^2 + \\alpha\\|W\\|_2^2\\\\\n&=\\sum_{i=1}^{|X|} (w^Tx_i + b - y_i)^2 + \\alpha \\sum_{i,j} W_{ij}^2\n\\end{align*}\n\nThe above encourages the model to minimize the squared loss for the training data while keeping the entries of $W$ small, preventing values in $W$ from becoming too large. In our case, we find $\\alpha = 0.0001$ to be the optimal hyper-parameter (tuned on the validation set).\n\\item \\textbf{Bayesian Regression}: Bayesian regression is essentially equivalent to ridge regression, but it is self-regularizing -- this means we do not need to choose an optimal parameter $\\alpha.$ The idea behind Bayesian regression is to find the parameters $W, b$ in our model $y = WX$ which maximize the posterior probability. By Bayes' rule, we have:\n\n\\begin{align*}\nP(W,b \\mid X, y) &= \\frac{P(y \\mid X, W,b)P(W,b)}{P(y \\mid X)} \\\\\n&\\propto P(y \\mid X, W,b)P(W,b) \\\\\n\\end{align*}\nIf we consider the case where the prior $P(W,b)$ is $N(\\mu, \\Sigma)$, then we arrive at ridge regression. We use this Bayesian model to also directly predict our ratings $r$. We optimize the above using the ADAM gradient descent optimizer, where we use $\\alpha_1 = \\alpha_2 = \\lambda_1 = \\lambda_2 = 0.000001$. These parameters are not tuned using the validation set due to lack of computational resources.\n\n\\item \\textbf{Deep Neural Networks}: Recent research has had great success using ``deep learning'' to extract more detail from the data and to learn the target values more directly. We make use of this approach by constructing a relatively shallow network consisting of a fully connected layer with 200 neurons, followed by a second fully-connected layer with 40 neurons, followed by a fully connected layer of 8 neurons, and a final fully connected layer of 2 neurons.\n\nGiven the recent effectiveness of this model in a large range of tasks, we expect that it will likewise be useful for rating prediction.\n\nThis gives us a total of $200\\times(9 + 1) + 40\\times200 + 8\\times40 + 2\\times8$ parameters, with a ReLU nonlinearity\n$$\nrelu(x) = \\max(0,x)\n$$\nbetween layers. We use a final softmax at the end to generate the distribution of ratings.\n\nWe use Adam to perform gradient descent (with parameters $\\beta_1 = 0.9$, $\\beta_2 = 0.999$, and $\\epsilon = 1\\times10^{-8}$) on the loss function, with a regularization factor of $\\alpha = 0.0001$ and batch size of $200$. 
We maintain a constant learning rate of $0.001$ and randomly shuffle the input data. The parameters are selected based on past experience with neural network training and are not optimized using cross-validation or the validation set. \n\n\\item \\textbf{Random Forest}: We also make use of a random forest estimator. The random forest is a meta-estimator that fits a number of decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. We generate the sub-samples by sampling from the original data set with replacement. We use a total of 100 estimators, where we look at all features in the data set when considering the best split. \n\\end{enumerate}\n\n\\subsection{Method Evaluation}\n\nFor all of the above approaches, we evaluate our effectiveness on the validation set and use this to tune our hyper-parameters (i.e., network size, learning rate, etc.). In the end, we evaluate the results on the test set (previously unseen and untouched by our models) by making predictions for the ratings of its edges. \n\nWe evaluate our models across three metrics, each of which is implemented directly as sketched after this list:\n\n\\begin{itemize}\n\\item The root mean squared error, which evaluates how closely our predictions match the desired ratings:\n$$\nRMSE = \\sqrt{\\frac{1}{|X|}\\sum_{i = 1}^{|X|} (\\hat{y}_i - y_i)^2}\n$$\n\\item The relative error. This is a metric that evaluates, on average, how wrong our star rating is compared to the true star rating. We take this metric to be more indicative of improvements in our algorithms and our data extraction. Formally, we define the relative error as:\n$$\nRELERROR = 100\\cdot\\frac{1}{|X|}\\sum_{i=1}^{|X|} \\frac{|\\hat{y}_i - y_i|}{\\max\\{\\hat{y}_i, y_i\\}}\n$$\n\\item The last metric we use for evaluating our regression models is the $R^2$ score. This gives us a way to evaluate our models against other models in the literature, as is standard across regression problems. The best possible score is 1.0. The score ranges over $(-\\infty, 1.0]$, where the worse a model is, the more negative the value. Note that a model which simply predicts the constant expected value of the final output (disregarding any input features),\n$$\nE[Y] = \\frac{1}{|X|}\\sum_{i=1}^{|X|} y_{i},\n$$\nwill have a score of $0.0$. The formula for computing this score is:\n$$\nR^2 = 1 - \\frac{\\sum_{i=1}^{|X|}(y_i - \\hat{y}_i)^2}{\\sum_{i=1}^{|X|} \\left(y_i - \\frac{1}{|X|}\\sum_{j=1}^{|X|}y_j\\right)^2}\n$$\n\n\\end{itemize}\n\n
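These three metrics are straightforward to implement; a minimal NumPy sketch (with the relative-error denominator as written above) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef rmse(y_hat, y):\n    return float(np.sqrt(np.mean((y_hat - y) ** 2)))\n\ndef rel_error(y_hat, y):\n    # Per-example absolute error, normalized and in percent.\n    return float(100.0 * np.mean(np.abs(y_hat - y) \/ np.maximum(y_hat, y)))\n\ndef r2_score(y_hat, y):\n    ss_res = np.sum((y - y_hat) ** 2)\n    ss_tot = np.sum((y - np.mean(y)) ** 2)\n    return float(1.0 - ss_res \/ ss_tot)\n\ny = np.array([5.0, 3.0, 4.0, 1.0])\ny_hat = np.array([4.5, 3.5, 4.0, 2.0])\nprint(rmse(y_hat, y), rel_error(y_hat, y), r2_score(y_hat, y))\n\\end{verbatim}\n\n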
\\subsection{Extracting Item Features}\n\nGiven our results (see the Results section) from the above models, we continue forward with our deep neural network. We begin by augmenting the data available for the business nodes. \n\n\\begin{itemize}\n\\item We make use of the pre-trained SqueezeNet network included in the PyTorch model zoo for visual image processing. We first down-sample the images to the expected $256\\times256\\times3$ input (we do this simply by cropping and averaging pixel values over regions mapping into the $256\\times256\\times3$ space).\n\\item We can then feed these smaller images directly into the pre-trained SqueezeNet (see Figure \\ref{fig:squeezenet_architecture} for the architecture), which has been modified to remove the final soft-max layer (instead, we produce a vector in $\\mathbb{R}^{1000}$).\n\\item For a business $b$, we take these image embeddings $p_i^b \\in \\mathbb{R}^{1000}$ and compute their mean. We use this embedding as a representation of the business.\n\n\\item Furthermore, we make use of pre-trained word embeddings to convert the business description into a small 256-dimensional vector.\n\\item We concatenate the above vectors into a $(1000 + 256 + 9)$-dimensional vector, which we take as input into a modified neural net (sketched below).\n\\end{itemize}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{squeezenet.jpg}\n\\caption{Original SqueezeNet architecture. We modify it to remove the final soft-max layer and instead output a $1000$-dimensional embedding for our images.}\n\\label{fig:squeezenet_architecture}\n\\end{figure}\n\nThe core of the model consists of a neural net which takes as input the 1265-dimensional vector for each $(u,b)$ pair and runs it through a single layer with 200 hidden units (so the number of parameters is $200\\times1265$). We then feed this into the successful network described in the previous section and evaluate it in the same way as described before.\n\n\\subsubsection{Training and Data}\n\nDue to the large size of the above networks, we subset the data significantly, to only approximately 15k reviews. We select the businesses with the most photos as the candidates to subset by, and make sure we take the reviews which include these businesses. With the reduced data size, we are able to successfully train our specified model end-to-end and achieve a marginal improvement over our previous models.\n\n
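A compact PyTorch sketch of this fused architecture is given below; the class and variable names are our own, and for concreteness we use a 5-way output over star levels rather than the 2-neuron head described earlier:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass BusinessFeatureNet(nn.Module):\n    # Fuses image (1000-d), text (256-d) and graph (9-d) features.\n    def __init__(self):\n        super().__init__()\n        self.fuse = nn.Linear(1000 + 256 + 9, 200)  # ~200 x 1265 params\n        self.head = nn.Sequential(\n            nn.ReLU(), nn.Linear(200, 40), nn.ReLU(),\n            nn.Linear(40, 8), nn.ReLU(), nn.Linear(8, 5))\n\n    def forward(self, img_emb, txt_emb, graph_feats):\n        x = torch.cat([img_emb, txt_emb, graph_feats], dim=-1)\n        return self.head(self.fuse(x))  # logits over star ratings\n\nnet = BusinessFeatureNet()\nout = net(torch.randn(4, 1000), torch.randn(4, 256), torch.randn(4, 9))\nprint(out.shape)  # torch.Size([4, 5])\n\\end{verbatim}\n\n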
\\section{Results and Discussion}\nIn this section, we present the results on our test set and provide some discussion as to the extent to which our models successfully predicted Yelp review ratings.\n\n{\\renewcommand{\\arraystretch}{2}%\n\\begin{table*}[]\n\\centering\n\\caption{Supervised Training Results on Training Set}\n\\label{table:training_set_results}\n\\begin{tabular}{|l|lll}\n\\hline\n\\textit{\\textbf{Model}} & \\multicolumn{1}{l|}{\\textbf{RMSE}} & \\multicolumn{1}{l|}{\\textbf{RELERROR}} & \\multicolumn{1}{l|}{\\textbf{$R^2$}} \\\\ \\hline\n\\textit{Baseline} & 1.5014 & 25.7312 & 0.0000 \\\\ \\cline{1-1}\n\\textit{Linear Regression} & 1.2941 & 20.3977 & 0.2571 \\\\ \\cline{1-1}\n\\textit{Ridge Regression} & 1.2941 & 20.3977 & 0.2571 \\\\ \\cline{1-1}\n\\textit{Bayesian Regression} & 1.2941 & 20.3976 & 0.2571 \\\\ \\cline{1-1}\n\\textit{Neural Network} & 1.2651 & 18.7831 & 0.2900 \\\\ \\cline{1-1}\n\\textit{Random Forest} & \\textbf{0.7497} & \\textbf{10.0184} & \\textbf{0.7507} \\\\ \\cline{1-1}\n\\textit{\\textbf{Business Features}} & 1.2494 & 16.2853 & 0.3212 \\\\ \\cline{1-1}\n\\end{tabular}\n\\end{table*}\n}\n\n\n{\\renewcommand{\\arraystretch}{2}%\n\\begin{table*}[t]\n\\centering\n\\caption{Supervised Training Results on Validation Set}\n\\label{table:validation_set_results}\n\\begin{tabular}{|l|lll}\n\\hline\n\\textit{\\textbf{Model}} & \\multicolumn{1}{l|}{\\textbf{RMSE}} & \\multicolumn{1}{l|}{\\textbf{RELERROR}} & \\multicolumn{1}{l|}{\\textbf{$R^2$}} \\\\ \\hline\n\\textit{Baseline} & 1.4278 & 24.1192 & 0.0000 \\\\ \\cline{1-1}\n\\textit{Linear Regression} & 1.1838 & 18.2853 & 0.3279 \\\\ \\cline{1-1}\n\\textit{Ridge Regression} & 1.1838 & 18.2854 & 0.3125 \\\\ \\cline{1-1}\n\\textit{Bayesian Regression} & 1.1838 & 18.2858 & 0.3125 \\\\ \\cline{1-1}\n\\textit{Neural Network} & \\textbf{1.1644} & \\textbf{16.0870} & \\textbf{0.3348} \\\\ \\cline{1-1}\n\\textit{Random Forest} & 1.1880 & 18.6599 & 0.3076 \\\\ \\cline{1-1}\n\\textit{\\textbf{Business Features}} & 1.1495 & 14.8854 & 0.3599 \\\\ \\cline{1-1}\n\\end{tabular}\n\\end{table*}\n}\n{\\renewcommand{\\arraystretch}{2}%\n\\begin{table*}[t]\n\\centering\n\\caption{Supervised Training Results on Test Set}\n\\label{table:test_set_results}\n\\begin{tabular}{|l|lll}\n\\hline\n\\textit{\\textbf{Model}} & \\multicolumn{1}{l|}{\\textbf{RMSE}} & \\multicolumn{1}{l|}{\\textbf{RELERROR}} & \\multicolumn{1}{l|}{\\textbf{$R^2$}} \\\\ \\hline\n\\textit{Baseline} & 1.4635 & 24.7466 & -0.0009 \\\\ \\cline{1-1}\n\\textit{Linear Regression} & 1.1993 & 18.5670 & 0.3279 \\\\ \\cline{1-1}\n\\textit{Ridge Regression} & 1.1993 & 18.5670 & 0.3279 \\\\ \\cline{1-1}\n\\textit{Bayesian Regression} & 1.1993 & 18.5675 & 0.3279 \\\\ \\cline{1-1}\n\\textit{Neural Network} & \\textbf{1.1838} & \\textbf{16.3219} & \\textbf{0.3451} \\\\ \\cline{1-1}\n\\textit{Random Forest} & 1.1928 & 18.7138 & 0.3351 \\\\ \\cline{1-1}\n\\textit{\\textbf{Business Features}} & 1.1694 & 15.4557 & 0.3511 \\\\ \\cline{1-1}\n\\end{tabular}\n\\end{table*}\n}\n\nWe now present the final results from our models, each evaluated on the test set. We compare the different methods used, and discuss their differences and possible improvements. The main results are presented in Table \\ref{table:test_set_results}, with the validation set results in Table \\ref{table:validation_set_results} and the training set results in Table \\ref{table:training_set_results}. \n\nWe have implemented a complete end-to-end pipeline which (1) ingests the raw Yelp JSON data, (2) constructs training, validation, and testing graphs over the data according to our pre-determined timescales, (3) extracts training, validation, and testing sets from the generated graphs and computes a variety of network properties to be used by our machine learning models, (4) trains a variety of machine learning models on the extracted data and tunes their hyper-parameters when possible, and (5) evaluates the models on the known results from the test set. We implemented all of this ourselves, writing optimized code for feature extraction and building our networks with the use of SNAP, Python, scikit-learn, and PyTorch -- the code base is publicly available on GitHub \\footnote{https:\/\/github.com\/kandluis\/cs224w-project}.\n\nThe major challenge faced when experimenting was the sheer size of the dataset -- even after sub-setting the data to a more manageable size in the millions and using the extremely powerful Google Compute Engine to add additional memory and processing power, more complex models such as the random forest and the convolutional neural networks could take on the order of days to fully train. Even extracting the word embeddings and pre-trained image feature vectors with SqueezeNet and ResNet alone took a significant amount of time -- so much so that it proved infeasible to do for a large portion of the dataset.\n\nAs such, as described in our Methods section, we were able to sample only approximately one year's worth of data from the Yelp review network. 
Nonetheless, we were able to train the network predictors on over 500k training samples (reviews), covering over 280k users and over 92k businesses (more than 375k nodes in total), and to validate and test our network models on over 88K and 73K examples, respectively.\n\nFinally, to evaluate the performance of all of our models, we make use of the RMSE, the relative error, and the $R^2$ score defined in our Methods section. Our results can be seen in Table \\ref{table:training_set_results}, Table \\ref{table:validation_set_results}, and Table \\ref{table:test_set_results}.\n\n\\section{Discussion}\nWe begin the discussion by analyzing some network properties.\n\n\\subsection{Summary Statistics}\n\\subsubsection{Users}\nWe present an overview of the user meta-data. In Figure \\ref{fig:user_characteristics}, we can see that multiple characteristics of the users follow a power-law distribution -- and not just the node degrees. This is to say that the distribution can be modeled as $P(k) \\propto k^{-\\gamma}$; a sketch of how the exponent $\\gamma$ can be estimated is given at the end of this subsection. The power-law distribution is immediately evident in:\n\n\\begin{itemize}\n\\item Number of reviews -- we have a few users that write many reviews and many users that write few reviews.\n\\item Number of friends -- this means the friendship network follows a true power-law distribution.\n\\item useful\/funny\/cool\/fans -- this appears to demonstrate that social ranking\/status also follows a power-law distribution in the Yelp social network. This trend is further demonstrated by Figure \\ref{fig:user_compliment_distribution}.\n\\end{itemize} \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{distribution_user_characteristics}\n\\caption{Frequency of countable user characteristics -- the majority exhibit a power-law distribution}\n\\label{fig:user_characteristics}\n\\end{figure}\n\nFurthermore, we can look at the average rating given by users across the network. The results are shown in a log plot in Figure \\ref{fig:user_rating_distribution}.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{average_user_rating_distribution}\n\\caption{Distribution of Average User Rating}\n\\label{fig:user_rating_distribution}\n\\end{figure}\n\nWe notice that the ratings tend to be inflated, with 3-5 stars being quite frequent while 1-2 stars are very infrequent. Presumably this is because people do not frequent poor restaurants. The other aspect that is immediately apparent is the spikes at whole-numbered ratings -- these are likely due to users who have rated only once, of which we have many.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{compliement_type_distribution}\n\\caption{Distribution of Received User Compliments}\n\\label{fig:user_compliment_distribution}\n\\end{figure}\n\n
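To quantify these observations, the power-law exponent $\\gamma$ can be estimated from the observed counts via the standard continuous-approximation maximum-likelihood estimator of Clauset, Shalizi, and Newman; a minimal sketch (with illustrative counts) is:\n\\begin{verbatim}\nimport math\n\ndef powerlaw_gamma_mle(counts, k_min=1):\n    # MLE of gamma in P(k) ~ k^(-gamma), using only the tail k >= k_min;\n    # k_min - 0.5 is the usual continuous correction for discrete data.\n    tail = [k for k in counts if k >= k_min]\n    s = sum(math.log(k \/ (k_min - 0.5)) for k in tail)\n    return 1.0 + len(tail) \/ s\n\n# Illustrative review counts per user (heavy tail).\ncounts = [1, 1, 1, 2, 2, 3, 5, 8, 13, 40, 120]\nprint(powerlaw_gamma_mle(counts, k_min=1))\n\\end{verbatim}\n\n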
\\subsubsection{Businesses}\nWe present an overview of the business meta-data. In Figure \\ref{fig:business_review_distribution}, we can see that the power-law distribution also holds on the business side. Furthermore, we can see that businesses tend to be rated quite highly on average, with most average ratings lying in the range from 3 to 5 (see Figure \\ref{fig:business_star_distribution}).\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{business_review_count_distribution}\n\\caption{Business Review Distribution}\n\\label{fig:business_review_distribution}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{business_star_distribution}\n\\caption{Business Average Star Distribution}\n\\label{fig:business_star_distribution}\n\\end{figure}\n\n\\subsection{Review Prediction}\nWe continue our discussion by now focusing on the results of our model predictions.\n\nWe can see that extracting network-only data and using machine learning models to fit the ratings performs relatively well, even with simple, un-regularized linear regression. It appears that the features we selected, for example the nearest neighbors and the average ratings, are quite effective both at capturing network properties and at capturing the user ratings. We did not need to extend the network to include further metadata information, and the results were nonetheless quite good, especially when compared to our non-trivial baseline.\n\nFurthermore, we note that our feature extraction proved extremely effective at generalizing across models. We see that, in particular, the deep neural network and the random forest models both performed extremely well. It is interesting to note that the random forest model appears to have over-fit the data by a significant margin: it appeared promising on the training set, but this did not pan out when we took the model to unseen data. However, we note that the neural net performed the best -- this appears to lend credence to the idea that the function learned is inherently non-linear, at least in the feature space we selected. This is somewhat counter to what we originally hypothesized, since all of the original features are approximately on the same scale as the ratings and would, intuitively, appear to predict the ratings rather directly. This idea is supported by the t-SNE embedding in Figure \\ref{fig:tsne_embedding}, where we embed our test set in a lower-dimensional space (given the features extracted) and color each point based on the rating given (from $1,2,3,4,5$). \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{TSE.png}\n\\caption{t-SNE embedding of $G_{test}$}\n\\label{fig:tsne_embedding}\n\\end{figure}\n\n\nThis intuitively gives us a good foundation for why the features we chose appear to be so well correlated. Furthermore, we note that the embedding shows a clearly non-linear structure, which appears to corroborate the results where our neural network performed the best. This seems to imply that a neural network would be the best approach to disambiguate between the possible ratings a user might assign to a business. We found the issue of predicting the lack of edges to be somewhat more nuanced and subtle, though initial experiments with this approach proved promising.\n\n
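The embedding itself requires only a few lines with scikit-learn; the following sketch stands in for our actual pipeline, using random features in place of the real $(n, 9)$ test-set feature matrix:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.manifold import TSNE\nimport matplotlib.pyplot as plt\n\nX = np.random.rand(500, 9)                 # stand-in feature matrix\nstars = np.random.randint(1, 6, size=500)  # stand-in ratings\n\nemb = TSNE(n_components=2, random_state=0).fit_transform(X)\nplt.scatter(emb[:, 0], emb[:, 1], c=stars, cmap='viridis', s=5)\nplt.colorbar(label='star rating')\nplt.savefig('tsne_test_set.png')\n\\end{verbatim}\n\n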
Another promising aspect involves using the photographic and textual descriptions of businesses as input features to our predictive models. Despite making use of pre-trained word embeddings and pre-trained image models, the computational cost of these models proved extreme for our dataset. Subsetting to a smaller set showed initial results which appeared positive; however, the smaller dataset makes it difficult to determine whether the additional information was accurately fed into the models and used effectively. Nonetheless, it does appear that the networks performed well overall, and at least learned to make use of the features from the images.\n\n\\section{Conclusion}\nIn this paper, we have presented a novel approach towards review rating prediction in a subset of the Yelp review network. We have investigated the effectiveness of network-only machine learning models, determining 9 key structural features of the network that proved effective in predicting the weight of unseen edges after using supervised algorithms to generate the models. We demonstrated that a deep neural net, even with the limited feature set, was the most effective and most general approach. Furthermore, we performed early experiments in making use of non-network features to improve the predictions of the neural network. We did this by creating a pipeline, building on previous work, where for each $(u,b)$ pair the business descriptions were converted into their respective word embeddings using the popular word2vec network, followed by an RNN which output a fixed-size 256-dimensional feature vector for each business. Furthermore, we selected key images from the business and the photo dataset provided by Yelp and ran them through the pre-trained SqueezeNet network with the final classification layer removed to generate multiple 1000-dimensional feature vectors per image. These feature vectors were then averaged and fed as additional input into a final fully-connected neural network. These preliminary results showed a marginal improvement in the accuracy of the results. This shows not only that our original models are able to understand higher-order relationships within the Yelp review network, but also that they are able to understand features (and build on them) specific to each node.\n\n\n\\section{Future Work}\nThe project could be continued in several directions. In particular, we could continue to follow the example set by \\cite{PintrestProject} and consider some of the temporal features of the graph structure. They proposed using a sliding-window approach to achieve improved accuracy in link prediction, which could easily be modified to support review prediction in the Yelp network.\n\nFurthermore, our preliminary work incorporating deep convolutional neural nets and recurrent neural nets to extract feature embeddings for the businesses has demonstrated a marginal ability to improve the predictive power of our models. Further work could be done in this area by, rather than extracting static embeddings, incorporating the visual and textual networks into an end-to-end model which could tweak the learned weights for visual and textual processing in order to better understand how these features relate to the ratings given to businesses by users. Furthermore, we would like to see further work on whether user features can similarly be used to improve performance -- for example, finding embeddings of users based on their features and using these embeddings as inputs to our model.\n\nLastly, there is additional work to be done to incorporate even more graph features into the predictive model. 
Given the effectiveness of the network structure itself at predicting the values of unseen ratings, we would like to explore further network features and models and see how this additional information can improve our models. This can include incorporating the information about tips -- we would expect someone who has given a tip to be more likely to rate the business positively (or negatively).\n\nIn any case, we believe that there is yet much work to be done in this field and many potentially interesting developments in the area of combining non-network features with network features.\n\n\n\\section{Acknowledgments}\nThe author would like to thank Jure Leskovec and the rest of the CS224W staff for their continued guidance and advice during the course of the project, in particular for emphasizing a focus on network properties.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nModeling the regression relationship between a response $Y$ and a multivariate Euclidean predictor vector $\\tbf{X}$ corresponds to specifying the form of the conditional means $h(\\tbf{x})=\\mbb{E}{(Y|\\tbf{X} = \\tbf{x})}$, and higher dimensionality of $\\tbf{X}$ can be problematic when one is interested in going beyond the standard multiple linear model. This provides strong motivation to consider regression models that achieve dimension reduction, and single index models are one of the most popular approaches to this end, under the assumption that the influence of the predictors can be collapsed to an index, i.e., a projection on one direction, complemented by a nonparametric link function. This reduces the predictors to a univariate index while still capturing relevant features of the high-dimensional data and is thus not subject to the curse of dimensionality. This model generalizes linear regression (where the link function is the identity) and is a special case of nonparametric regression \\citep{heck:86, rice:86, rupp:03}, which in its general form is subject to the curse of dimensionality. For a real-valued response $Y$ and a $p$-dimensional predictor $\\tbf{X}$, the semiparametric single index regression model is \n\\begin{equation}\\begin{gathered} \\label{model:real}\\mbb{E}{(Y|\\tbf{X}= \\tbf{x})} = m(\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}). \\end{gathered}\\end{equation} \nIn model~\\eqref{model:real}, the dependence between $Y$ and $\\tbf{X}$ is characterized by the conditional mean, which depends on $\\tbf{x}$ only through the single index $\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}$; this reduces the dimensionality of the predictor from $p$ to essentially 1. \n\nThe function $m(\\cdot)$ is nonparametric and thus includes location and level changes, and therefore the vector $\\tbf{X}_i$ cannot include a constant that would serve as intercept. For identifiability reasons, ${\\boldsymbol{\\bar{\\theta}_0}}$ is often assumed to be a unit vector with non-negative first coordinate. A second approach that has been used is to require one component to equal one. This presupposes that the component that is set to equal 1 indeed has a non-zero coefficient \\citep{lin:07,cui:11, ichi:93}.\n
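To fix ideas, data from model~\\eqref{model:real} can be simulated in a few lines; here the direction, link function, and noise level are arbitrary illustrative choices:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, p = 500, 4\ntheta0 = np.array([2.0, 1.0, -1.0, 0.5])\ntheta0 \/= np.linalg.norm(theta0)     # unit vector, first coordinate > 0\n\nX = rng.normal(size=(n, p))\nm = lambda t: np.sin(t) + 0.5 * t    # illustrative link function\nY = m(X @ theta0) + rng.normal(0, 0.2, n)\n\\end{verbatim}\n\n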
Model (\\ref{model:real}) is only meaningful if the Euclidean predictor vector $\\tbf{X}_i$ is of dimension $2$ or larger; if $\\tbf{X}_i$ is one-dimensional, the corresponding special case of the model is the one-dimensional nonparametric regression $\\mbb{E}{(Y|X =x)} = m(x)$, which does not feature any parametric component.\n\nDue to its flexibility, the interpretability of the (linear) coefficients and the nonparametric link function $m(\\cdot)$, as well as due to its wide applicability in many scientific fields, the classical single index regression model with Euclidean responses has attracted attention from the scientific community for a long time. The coefficient ${\\boldsymbol{\\bar{\\theta}_0}}$ that defines the single index $\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}$, along with the shape of the nonparametric component $m(\\cdot)$, characterizes the relationship between the response and the predictor. The parametric component ${\\boldsymbol{\\bar{\\theta}_0}}$ is of primary interest in this model. The problem of recovering the true direction ${\\boldsymbol{\\bar{\\theta}_0}}$ can be viewed as a subclass of sufficient dimension reduction (SDR) techniques, where identifying the central subspace of $\\tbf{X}$ that explains most of the variation in $Y$ has been a prime target \\citep{chen:98, li:89,cook:94, li:07}. \n\nIn addition to sufficient dimension reduction techniques, a multitude of related approaches to estimate ${\\boldsymbol{\\bar{\\theta}_0}}$ in \\eqref{model:real} have been studied. These include projection pursuit regression (PPR) \\citep{frie:81, hall:89}, the average derivative approach \\citep{hard:89a, stok:86, fan:95, powe:86}, sliced inverse regression (SIR) \\citep{li:91, cook:91}, conditional minimum average variance estimation (MAVE) \\citep{xia:09} and various other methods \\citep{wang:10, xia:99, xia:06, xia:07, lian:10,yu:02}. Other approaches have focused on nonparametric estimation of the link function to recover the index parameter in \\eqref{model:real} \\citep{hard:93, huh:02, hris:01}, along with partially linear versions \\citep{carr:97,cui:11,ichi:93} and various noise models \\citep{chan:10a,wang:10}.\n\n\nVarious extensions of single index regression have been considered more recently \\citep{zhao:20,kere:20}, including models with multiple indices or high-dimensional predictors \\citep{zhu:09b,zhou:08, kuch:17, kuch:20} and longitudinal and functional data as predictors \\citep{jian:11, chen:11, ferr:11, novo:19}. However, none of these extensions considers the case where responses are not in a Euclidean vector space, even though this case is increasingly important for data applications. An exception is \\cite{ying:20}, who considered extending sufficient dimension reduction approaches to the case of random objects.\nThis lack of available methodology for single index models with random object responses motivates our approach.\nNon-Euclidean complex data structures arising in areas such as the biological or social sciences are becoming increasingly common, as technological advances have made it possible to record and efficiently store time courses of images \\citep{peyr:09, gonz:18}, shapes \\citep{smal:12} or networks \\citep{tsoc:04}, in addition to sensor data and other complex data. 
For example, one might be interested in functional connectivity, quantified by correlation matrices obtained from neuroimaging studies, when studying the effect of predictors on brain connectivity, an application that we explore further below.\n\n\nOther examples of general metric space objects include probability distributions \\citep{deli:17}, such as age-at-death distributions as observed in demography, or network objects, such as internet traffic networks. Such ``object-oriented data'' \\citep{marr:alon:14} or ``random objects'' \\citep{mull:16} can be viewed as random variables taking values in a separable metric space that is devoid of a vector space structure and where only pairwise distances between the observed data are available. Existing methodology for single index models, as briefly reviewed above, assumes that one has Euclidean responses, and these methods rely in a fundamental way on the vector space structure of the responses. When there is no linear structure, new methodology is needed, and this paper contributes to this development. \n\nThe natural notion of a mean for random elements of a metric space is the Fr\\'echet mean \\citep{frec:48}, which is a direct generalization of the standard mean, and is defined as the element of the metric space for which the expected squared distance to all other elements, the so-called Fr\\'echet function, is minimized. Depending on the space and metric, Fr\\'echet means may or may not exist as unique minimizers of the Fr\\'echet function. Fr\\'echet regression is an extension of Fr\\'echet means to the notion of conditional Fr\\'echet means, and has been recently studied in several papers \\citep{pete:19, dube:19, pete:19b,chen:18}, including both global and local versions. \n\nIn this paper, we introduce a novel method, single index Fr\\'echet regression (IFR), for the case where the response variable is a random object lying in a general metric space and the predictor is a $p$-dimensional Euclidean vector. Our goal is to develop a simple and straightforward extension of the conventional estimation paradigm for single index models to this challenging case. Since there is no notion of direction or sign in a general metric space, we interpret the index parameter in the proposed IFR model as the direction in the predictor space along which the variability of the response is maximized. It turns out to be useful to view the estimated direction as an M-estimator of an appropriate objective function, and to use empirical process theory to show consistency of the proposed estimate. We also develop a bootstrap method to obtain inference in finite sample situations. \n\nThe paper is organized as follows: The basic setup is defined in Section~\\ref{sec:model:methods}, and theory on the asymptotic behavior of the index parameter is provided in Section~\\ref{sec:theory}. The index vector is assumed to lie on a hypersphere, with non-negative first element to facilitate identifiability, so it is natural to quantify the performance of the proposed estimators by the geodesic distances between the estimated and true directions. Empirical studies with different types of random object responses are conducted in Section~\\ref{sec:simul} to validate the estimation method. In Section~\\ref{sec:data:ADNI} we apply the methods to infer and analyze the effect of age, sex, total ADAS score, and the stage of propagation of Alzheimer's Disease (AD) on the brain connectivity networks of patients with dementia. 
The networks are derived from fMRI signals of certain brain regions \\citep{thom:11}, and for our analysis we represent them as correlation matrices. We conclude with a brief discussion in Section~\\ref{sec:concl}. \n\n\\section{Model and estimation methods}\n\\label{sec:model:methods}\nMore formally, in all of the following $(\\Omega,d,P)$ is a totally bounded metric space with metric $d$ and probability measure $P.$ The random objects $Y$ take values in $\\Omega$, coupled with a $p$-dimensional real-valued predictor $\\tbf{X}.$ The conditional Fr\\'echet mean of $Y$ given $\\tbf{X}$ is a generalization of $\\mbb{E}{(Y|\\tbf{X} = \\tbf{x})}$ to metric spaces, defined as the minimizer of $\\mbb{E}{(d^2(Y,\\omega)|\\tbf{X} = \\tbf{x})},$ $\\omega \\in \\Omega.$ The latter is the corresponding generalized measure of dispersion around the conditional Fr\\'echet mean and can be viewed as a conditional Fr\\'echet function. \n\nAdopting the framework of Fr\\'echet regression for random objects with Euclidean predictors, we define the Index Fr\\'echet Regression (IFR) model as\n\\begin{equation}\\begin{gathered} \\label{model:sim}\n\\mop{\\tbf{x} \\t {\\boldsymbol{\\bar{\\theta}_0}}} := \\mbb{E}_\\oplus{(Y|\\tbf{X} = \\tbf{x})} := \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\mbb{E}{(d^2(Y,\\omega)|\\tbf{X} = \\tbf{x})},\n\\end{gathered}\\end{equation}\nwhere ${\\boldsymbol{\\bar{\\theta}_0}}$ is the true direction parameter of interest. The conditional Fr\\'echet mean is assumed to be a function of ${\\boldsymbol{\\bar{\\theta}_0}}$ in such a way that the distribution of $Y$ only depends on $\\tbf{X}$ through the index $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}},$ that is, $$Y \\perp \\mbb{E}{(Y|\\tbf{X})}|(\\tbf{X} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}).$$\nModel \\eqref{model:real} emerges as a special case of model \\eqref{model:sim} for a Euclidean response, as the conditional Fr\\'echet mean coincides with the conditional expectation $\\mbb{E}{(Y|\\tbf{X})}$ when the squared Euclidean distance is chosen in the case $\\Omega = \\mbb{R}.$\n\n\nThe identifiability condition is formulated following the state-of-the-art literature \\citep{carr:97, lin:07, cui:11, zhu:06}. We assume the parameter space to be $\\Theta$ rather than the entire $\\mbb{R}^p$ in order to ensure that $\\boldsymbol{\\theta}$ in the representation \\eqref{sim:obj} can be uniquely defined, where $\\Theta := \\{\\boldsymbol{\\bar{\\theta}} = (\\theta_1, \\dots, \\theta_p)^{\\top} : \\ltwoNorm{\\boldsymbol{\\bar{\\theta}}} = 1,\\ \\theta_1 \\geq 0, \\ \\boldsymbol{\\bar{\\theta}} \\in \\mbb{R}^p\\}.$ \nWe first choose an identifiable parametrization which transforms the boundary of the unit ball in $\\mbb{R}^p$ to the interior of the unit ball in $\\mbb{R}^{(p-1)}.$ By eliminating $\\theta_1,$ the parameter space $\\Theta$ can be rearranged to the form $\\{( (1- \\sum_{r=2}^p \\theta_r^2)^{1\/2}, \\theta_2,\\dots, \\theta_p)^{\\top} : \\sum_{r=2}^p \\theta_r^2 \\leq 1\\}.$ This re-parametrization is the key to analyzing the asymptotic properties of the estimates for $\\boldsymbol{\\theta}$ and to facilitating an efficient computation algorithm. \n\nThe true parameter is then partitioned into $\\boldsymbol{\\bar{\\theta}} = (\\theta_1, \\boldsymbol{\\theta})^{\\top},$ where $\\boldsymbol{\\theta} = (\\theta_2,\\dots, \\theta_p)^{\\top}.$ We need to estimate the $(p-1)$-dimensional vector $\\boldsymbol{\\theta}$ in the single-index model, and then use the fact that $\\theta_1 = (1- \\sum_{r=2}^p \\theta_r^2)^{1\/2}$ to obtain $\\hat{\\theta}_1.$\n
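A short numerical sketch of this re-parametrization (function names are ours) maps between the full unit vector and its reduced form:\n\\begin{verbatim}\nimport numpy as np\n\ndef to_full(theta):\n    # (theta_2, ..., theta_p) in the unit ball of R^(p-1)\n    # -> unit vector (theta_1, ..., theta_p) with theta_1 >= 0.\n    theta = np.asarray(theta, dtype=float)\n    theta1 = np.sqrt(max(0.0, 1.0 - np.sum(theta ** 2)))\n    return np.concatenate(([theta1], theta))\n\ndef to_reduced(theta_bar):\n    # Drop the redundant first coordinate.\n    return np.asarray(theta_bar, dtype=float)[1:]\n\ntheta_bar = to_full([0.6, -0.3])\nprint(theta_bar, np.linalg.norm(theta_bar))  # norm is 1\nprint(to_reduced(theta_bar))\n\\end{verbatim}\n\n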
\\begin{prop}[Identifiability of model \\eqref{model:sim}]\n\\label{lem:iden:sim:obj}\nSuppose $h_\\oplus({\\tbf{x}}) = \\mbb{E}_\\oplus{(Y|\\tbf{X} = \\tbf{x})}.$ The support $S$ of $h_\\oplus({\\cdot})$ is a convex bounded set with at least one interior point and $h_\\oplus({\\cdot})$ is a non-constant continuous function on $S.$ If\n$$h_\\oplus(\\tbf{x}) = g_{1\\oplus}(\\boldsymbol{\\alpha}^{\\top} \\tbf{x}) = g_{2\\oplus}(\\boldsymbol{\\beta} ^{\\top} \\tbf{x}), \\text{ for all } \\tbf{x} \\in S, $$ for some continuous link function objects $g_{1\\oplus}$ and $g_{2\\oplus}$, and some $\\boldsymbol{\\alpha}, \\boldsymbol{\\beta}$ with positive first element such that $\\ltwoNorm{\\boldsymbol{\\alpha}} = \\ltwoNorm{\\boldsymbol{\\beta}} =1$, then $\\boldsymbol{\\alpha} = \\boldsymbol{\\beta}$ and $g_{1\\oplus} \\equiv g_{2\\oplus}$ on $\\{\\boldsymbol{\\alpha} ^{\\top} \\tbf{x}| \\tbf{x} \\in S\\}.$\n\\end{prop}\n\nThe above result can be proven using a similar argument as given in Theorem $1$ of \\cite{lin:07}. \n\nStudying the special case of a Euclidean response $Y$ in detail, one may observe that the variation in $Y$ results from the variation in $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}$ and also from the variation in the error $\\varepsilon$ \\citep{ichi:93}. On the contour line $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}} = c$, the variability in $Y$ only results from the variability in $\\varepsilon$. Along a contour line $\\tbf{X} \\t \\boldsymbol{\\bar{\\theta}}= c$ with $\\boldsymbol{\\bar{\\theta}}\\neq {\\boldsymbol{\\bar{\\theta}_0}}$, the value of $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}$ changes. Therefore the variability in $Y$ on the contour line $\\tbf{X} \\t \\boldsymbol{\\bar{\\theta}}= c,\\ \\boldsymbol{\\bar{\\theta}}\\neq {\\boldsymbol{\\bar{\\theta}_0}},$ comes from both the variation in $\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}$ and in $\\varepsilon$. Since ${\\rm Var}\\left(Y|\\tbf{X} \\t \\boldsymbol{\\bar{\\theta}}= c\\right)$ measures the variability in $Y$ on the contour line $\\tbf{X} \\t \\boldsymbol{\\bar{\\theta}}= c$, a sensible alternative interpretation of ${\\boldsymbol{\\bar{\\theta}_0}}$ is as the minimizer of the objective function $H(\\boldsymbol{\\bar{\\theta}})$, where\n$H(\\boldsymbol{\\bar{\\theta}}) : = \\mbb{E}{\\left( {\\rm Var}\\left(Y|\\tbf{X} \\t \\boldsymbol{\\bar{\\theta}}\\right)\\right)} \\text{ and } {\\boldsymbol{\\bar{\\theta}_0}} = \\underset{\\boldsymbol{\\bar{\\theta}}:\\ \\boldsymbol{\\bar{\\theta}} ^{\\top} \\boldsymbol{\\bar{\\theta}} =1}{\\argmin}\\ H(\\boldsymbol{\\bar{\\theta}}).$\nIt is indeed important to impose the constraint $\\boldsymbol{\\bar{\\theta}} ^{\\top} \\boldsymbol{\\bar{\\theta}} = 1,$ with the first element of the index $\\theta_{01}>0$, to ensure the identifiability of the objective function. 
Under such constraint we note that $H({\\boldsymbol{\\bar{\\theta}_0}}) \\leq H(\\boldsymbol{\\bar{\\theta}}).$\n\n\n\nThe method for recovering the true direction of the single index from model~\\eqref{model:sim} can be generalized in a similar way. The conditional variance of $Y$ given $\\tbf{X} = \\tbf{x}$ for a real-valued response can be directly generalized to the conditional Fr\\'echet variance $d^2(Y,\\mop{\\tbf{x} ^{\\top} \\boldsymbol{\\bar{\\theta}}})$ for any given unit orientation vector $\\boldsymbol{\\bar{\\theta}}.$ Thus, for a general object response $Y\\in (\\Omega,d),$ ${\\boldsymbol{\\bar{\\theta}_0}}$ can alternatively be expressed as\n\\begin{equation}\\aligned\n\\label{sim:obj}\n{\\boldsymbol{\\bar{\\theta}_0}} = &\\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}\\ H(\\boldsymbol{\\bar{\\theta}}), \\ \\text{where } H(\\boldsymbol{\\bar{\\theta}})= \\mbb{E}{\\left(d^2(Y,\\mop{\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}})\\right)},\\\\\n\\mop{t} &= \\underset{\\omega\\in \\Omega}{\\argmin} \\ M(\\omega, t), \\ \\text{with } M(\\omega,t):= \\mbb{E}{\\left(d^2(Y,\\omega) |\\tbf{X} \\t {\\boldsymbol{\\bar{\\theta}_0}}= t\\right)}.\n\\endaligned\\end{equation}\n\nTo recover ${\\boldsymbol{\\bar{\\theta}_0}}$ from the representation~\\eqref{sim:obj}, one needs to also estimate the conditional Fr\\'echet mean, as in the IFR model \\eqref{model:sim}. We employ the local Fr\\'echet regression estimate \\citep{pete:19} for this. The conditional Fr\\'echet mean in \\eqref{sim:obj} can be approximated by a locally weighted Fr\\'echet mean, with weight function $S(\\cdot,\\cdot,\\cdot)$ that depends on a chosen kernel function $K(\\cdot)$ and a bandwidth parameter $b.$ For any given unit direction index $\\boldsymbol{\\bar{\\theta}},$ this intermediate localized weighted Fr\\'echet mean is defined as\n\\begin{equation}\\aligned\n\\label{intermed:local:fr}\n\\tmop{t} &= \\underset{\\omega\\in \\Omega}{\\argmin} \\ \\tilde{L}_b(\\omega, t), \\ \\text{with } \\tilde{L}_b(\\omega,t):= \\mbb{E}{\\left(S(\\tbf{X}^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b)d^2(Y,\\omega)\\right)},\n\\endaligned\\end{equation}\nwhere\n\\begin{equation}\\aligned \n\\label{local:Fr:weight} \n&S(\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b) = \\frac{1}{\\sigma_0^2}K_b(\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}- t ) [\\mu_2 - \\mu_1(\\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}- t)],\\\\\n&\\mu_k = \\mbb{E}{(K_b( \\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}} - t ) \\ ( \\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}- t )^k)},\\ k = 0,1,2,\\quad \n\\sigma_0^2 = \\mu_2 \\mu_0- \\mu_1^2, \n\\endaligned\\end{equation}\nwhere $M(\\cdot,t) = \\tilde{L}_b(\\cdot,t) + O(b)$ for all $t$ \\citep{pete:19}. 
Suppose we observe a random sample of paired observations $(\\tbf{X}_i,Y_i),\\ i=1,\\dots,n$, where $\\tbf{X}_i$ is a $p$-dimensional Euclidean predictor and $Y_i \\in (\\Omega,d)$ is an object response situated in a general metric space $(\\Omega,d).$ Using the form of the intermediate target in \\eqref{intermed:local:fr} and replacing the auxiliary parameters by their corresponding empirical estimates, the local Fr\\'echet regression estimator at a given value $t$ of the single index is defined as\n\n\\begin{equation}\\aligned\n\\label{est:local:fr}\n\\hmop{t} &= \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\hat{L}_n(\\omega, t), \\ \\text{with } \\hat{L}_n(\\omega,t):= \\frac{1}{n}\\sum_{i=1}^n \\wh{S}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b)d^2(Y_i,\\omega),\n\\endaligned\\end{equation}\nwhere\n\\begin{equation}\\aligned \n\\label{est:local:Fr:weight} \n&\\wh{S}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}},\\ t,b) = \\frac{1}{\\hat{\\sigma}_{0}^2}K_b( \\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}} - t) [\\hat{\\mu}_{2} - \\hat{\\mu}_{1}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\bar{\\theta}} - t)],\\\\\n&\\hat{\\mu}_{k} = \\frac{1}{n} \\sum_{j=1}^n K_b( \\tbf{X}_j ^{\\top} \\boldsymbol{\\bar{\\theta}} - t ) \\ (\\tbf{X}_j ^{\\top} \\boldsymbol{\\bar{\\theta}} - t)^k, \\ k = 0,1,2,\\quad\n\\hat{\\sigma}_{0}^2 = \\hat{\\mu}_{2} \\hat{\\mu}_{0}- \\hat{\\mu}_{1}^2.\n\\endaligned\\end{equation}\nAssuming that the support of $T:= \\tbf{X}^{\\top} \\boldsymbol{\\bar{\\theta}}$ is compact for any given unit direction $\\boldsymbol{\\bar{\\theta}}$, and denoting it by $\\mathcal{T} = [0,1],$ we partition the interval $\\mathcal{T}$ into $M$ equal-width non-overlapping bins $\\{B_1,B_2,\\dots,B_M\\},$ such that data belonging to different bins are independent and identically distributed.\nWe define the mean observations $\\tilde{\\tbfX}_l$ and $\\tilde{Y}_l$ for the data points belonging to the $l$-th bin, where the latter are defined as the appropriate Fr\\'echet barycenters,\n\\begin{equation}\\aligned\n\\label{binned_dat}\n\\tilde{\\tbfX}_l &= \\sum_{i=1}^{n} W_{il} \\tbf{X}_i, \\ \\tilde{Y}_l = \\underset{\\omega \\in \\Omega} {\\text{argmin}}\\ \\sum_{i=1}^{n} W_{il} d^2(Y_i,\\omega), \\text{ where } W_{il} = \\frac{\\indicator{\\tbf{X}_i^{\\top}\\boldsymbol{\\bar{\\theta}} \\in B_l} } {\\sum_{j=1}^{n} \\indicator{\\tbf{X}_j^{\\top}\\boldsymbol{\\bar{\\theta}} \\in B_l}}.\n\\endaligned\\end{equation}\n\nHere the number of bins $M$ depends on the sample size $n$, and the appropriate choice of $M = M(n)$ will be discussed later.\n
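For Euclidean responses, the weighted Fr\\'echet mean in \\eqref{est:local:fr} reduces to a weighted average with the weights $\\wh{S}$ above, recovering local linear regression; a minimal sketch is:\n\\begin{verbatim}\nimport numpy as np\n\ndef gauss(u):\n    return np.exp(-0.5 * u ** 2) \/ np.sqrt(2.0 * np.pi)\n\ndef local_frechet_fit(T, Y, t, b):\n    # Empirical weights S-hat from the display above; for Euclidean Y\n    # the weighted Frechet mean is simply the weighted average.\n    Kb = gauss((T - t) \/ b) \/ b\n    mu0, mu1, mu2 = (np.mean(Kb * (T - t) ** k) for k in (0, 1, 2))\n    sigma0_sq = mu2 * mu0 - mu1 ** 2\n    w = Kb * (mu2 - mu1 * (T - t)) \/ sigma0_sq\n    return np.sum(w * Y) \/ np.sum(w)\n\nrng = np.random.default_rng(1)\nT = rng.uniform(0, 1, 400)\nY = np.sin(2 * np.pi * T) + rng.normal(0, 0.1, 400)\nprint(local_frechet_fit(T, Y, 0.25, 0.05))  # close to 1\n\\end{verbatim}\n\n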
The proposed estimator for the true direction ${\\boldsymbol{\\bar{\\theta}_0}}$ in \\eqref{sim:obj} is then \n\\begin{equation}\\aligned \n\\label{est:sim:obj}\n\\boldsymbol{\\wh{\\bar{\\theta}}} &= \\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}\\ V_n(\\boldsymbol{\\bar{\\theta}}), \\text{ where }\nV_n(\\boldsymbol{\\bar{\\theta}}) = \\frac{1}{M} \\sum_{l=1}^{M} d^2(\\tilde{Y}_l, \\hmop{\\tilde{\\tbfX}_l^{\\top}\\boldsymbol{\\bar{\\theta}}}).\n\\endaligned\\end{equation}\nHere $\\hmop{\\tilde{\\tbfX}_l^{\\top}\\boldsymbol{\\bar{\\theta}}}, \\ l =1,\\dots,M,\\ $ is the local Fr\\'echet regression estimator, constructed based on the sample $(\\tbf{X}_i,Y_i), \\ i=1,\\dots,n$, and evaluated at each sample point of the binned sample $(\\tilde{\\tbfX}_l,\\tilde{Y}_l), \\ l = 1,\\dots, M,$ as described in \\eqref{est:local:fr} and \\eqref{est:local:Fr:weight}.\nWe also define an intermediate quantity that corresponds to the empirical version of $H(\\cdot)$ in \\eqref{sim:obj},\n\\begin{equation}\\aligned \n\\label{intermed:sim:obj}\n\\boldsymbol{\\tilde{\\bar{\\theta}}} &= \\underset{\\boldsymbol{\\bar{\\theta}} \\in\\ \\Theta}{\\argmin}\\ \\tilde{V}_n(\\boldsymbol{\\bar{\\theta}}), \\text{ where }\n\\tilde{V}_n(\\boldsymbol{\\bar{\\theta}}) = \\frac{1}{M} \\sum_{l=1}^{M} d^2(\\tilde{Y}_l, \\mop{\\tilde{\\tbfX}_l^{\\top}\\boldsymbol{\\bar{\\theta}}}).\n\\endaligned\\end{equation}\nThis auxiliary quantity is used to prove the asymptotic results for $\\boldsymbol{\\wh{\\bar{\\theta}}}$ in the next section.\n\nThe bandwidth $b = b(n)$ is a tuning parameter involved in the estimation, and the rate of convergence of $\\hmop{\\cdot}$ to $\\mop{\\cdot}$ is contingent on $b.$ It is important to note here that another possible estimator for $\\mop{\\cdot}$ is given by the global Fr\\'echet regression estimator introduced by \\cite{pete:19}. This is developed by generalizing multiple linear regression to the case of a metric-valued response, by viewing the regression function as a sequence of weighted Fr\\'echet means, with weights derived from those of the corresponding standard linear regression. Using this alternative estimate for the unknown link function in the IFR model~\\eqref{model:sim} avoids the tuning parameter $b$ that is needed for local Fr\\'echet regression.\n\n
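To illustrate the overall estimation idea in the Euclidean special case, the following sketch minimizes the criterion $V_n$ over directions for $p=2$, parametrizing the half-circle with $\\theta_1 \\geq 0$ by an angle and, for simplicity, ignoring the binning step:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize_scalar\n\ndef gauss(u):\n    return np.exp(-0.5 * u ** 2) \/ np.sqrt(2.0 * np.pi)\n\ndef local_fit(T, Y, t, b):\n    Kb = gauss((T - t) \/ b) \/ b\n    mu0, mu1, mu2 = (np.mean(Kb * (T - t) ** k) for k in (0, 1, 2))\n    w = Kb * (mu2 - mu1 * (T - t)) \/ (mu2 * mu0 - mu1 ** 2)\n    return np.sum(w * Y) \/ np.sum(w)\n\nrng = np.random.default_rng(2)\nn, b = 300, 0.2\nX = rng.normal(size=(n, 2))\ntheta0 = np.array([np.cos(0.7), np.sin(0.7)])\nY = np.sin(X @ theta0) + rng.normal(0, 0.1, n)\n\ndef V(phi):\n    T = X @ np.array([np.cos(phi), np.sin(phi)])\n    fits = np.array([local_fit(T, Y, t, b) for t in T])\n    return np.mean((Y - fits) ** 2)\n\nres = minimize_scalar(V, bounds=(-np.pi \/ 2, np.pi \/ 2), method='bounded')\nprint(np.array([np.cos(res.x), np.sin(res.x)]), theta0)\n\\end{verbatim}\n\n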
In addition, with regard to the quantities in \\eqref{sim:obj}, \\eqref{est:local:fr}, and \\eqref{est:sim:obj} we require the following assumptions.\n\\begin{enumerate}[label = (A\\arabic*), series = fregStrg, start = 1]\n\\item \\label{ass:fr:exist} The conditional and weighted Fr\\'echet means in \\eqref{sim:obj}, \\eqref{intermed:local:fr}, \\eqref{est:local:fr} and \\eqref{binned_dat} are well defined, i.e., they exist and are unique.\n\\item \\label{ass:reg:cont} The link function $\\mop$ is Lipschitz continuous, that is, there exists a real constant $K > 0$ such that, for all $\\boldsymbol{\\bar{\\theta}}_1, \\boldsymbol{\\bar{\\theta}}_2 \\in \\bar{\\Theta},$\n\\[\nd(\\mop{\\tbf{x} ^{\\top}\\boldsymbol{\\bar{\\theta}}_1}, \\mop{\\tbf{x} ^{\\top}\\boldsymbol{\\bar{\\theta}}_2}) \\le\nK \\ltwoNorm{\\tbf{x} ^{\\top} (\\boldsymbol{\\bar{\\theta}}_1-\\boldsymbol{\\bar{\\theta}}_2)}.\n\\]\n\\item \\label{ass:tuning:M} For any $\\varepsilon>0$ and $\\beta_1,\\beta_2>1,$ define\n\\begin{equation}\\aligned\n\\label{rate:an}\na_n = \\max\\{ b^{2\/(\\beta_1 -1)}, (nb^2)^{-1\/(2(\\beta_2 -1)+\\varepsilon)}, (nb^2(-\\log b)^{-1})^{-1\/(2(\\beta_2 -1))}\\}.\n\\endaligned\\end{equation}\nThe number $M$ of non-overlapping bins defined in Section~\\ref{sec:model:methods} is a function of the sample size $n$, that is, $M = M(n)$, such that $Ma_n \\rightarrow 0.$\n\\end{enumerate}\nWe note that for $\\beta_1 = \\beta_2 =2,$ $a_n$ reduces to\n\\[\na_n = \\max\\{ b^{2}, (nb^2)^{-1\/(2 +\\varepsilon)}, (nb^2(-\\log b)^{-1})^{-1\/2}\\}.\n\\]\n\n\nThe above assumptions are commonly imposed when one studies M-estimators. Whether Fr\\'echet means are well defined depends on the nature of the space, as well as the metric considered. For example, in the case of Euclidean responses Fr\\'echet means coincide with the usual means for random vectors with finite second moments. For finite-dimensional Riemannian manifolds additional regularity conditions are required \\citep{afsa:11,penn:18}. For Hadamard spaces, unique Fr\\'echet means are known to exist \\citep{bhat:03,bhat:05,patr:15,kloe:10}.\nAssumption~\\ref{ass:reg:cont} limits how fast the object $\\mop{\\cdot}$ can change, introducing a concept of smoothness in the link function for the IFR model \\eqref{model:sim}.\nAssumption~\\ref{ass:fr:exist} is satisfied for the space $(\\Omega,d_W)$ of univariate probability distributions with the 2-Wasserstein metric and also for the space $(\\Omega,d_F)$ of covariance matrices\nwith the Frobenius metric $d_F$ \\citep{dube:19,pete:19}.\nAssumption~\\ref{ass:tuning:M} serves to connect the uniform rate of convergence $a_n$ for the local Fr\\'echet regression estimator as given in \\eqref{rate:an} with the number of bins $M$. A basic additional assumption is that the predictors needed for the nonparametric Fr\\'echet regression are randomly distributed over the domain where the function is to be estimated, and that on average they become denser as more data are collected. \nThis requires that there is at least one continuous predictor, since if all the predictors are binary the predictor locations cannot become denser with a larger sample size.
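To get a feel for Assumption~\\ref{ass:tuning:M} numerically, the sketch below evaluates the three competing terms of $a_n$ in \\eqref{rate:an} for $\\beta_1 = \\beta_2 = 2$, using a bandwidth of the order $b \\sim n^{-1\/6}$ (cf.\\ the discussion following Proposition~\\ref{lem:H} below) together with the choice $M = n^{1\/4}$; the specific constants are illustrative assumptions only.\n\\begin{verbatim}\nimport numpy as np\n\ndef a_n(n, b, beta1=2.0, beta2=2.0, eps=0.01):\n    # the three competing terms of the rate a_n in (rate:an)\n    t1 = b ** (2.0 \/ (beta1 - 1.0))\n    t2 = (n * b ** 2) ** (-1.0 \/ (2.0 * (beta2 - 1.0) + eps))\n    t3 = (n * b ** 2 \/ (-np.log(b))) ** (-1.0 \/ (2.0 * (beta2 - 1.0)))\n    return max(t1, t2, t3)\n\nfor n in [10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6]:\n    b = n ** (-1.0 \/ 6.0)  # bandwidth order for beta1 = beta2 = 2\n    M = n ** 0.25          # gamma = 1\/4 < 1\/3\n    print(n, a_n(n, b), M * a_n(n, b))  # M * a_n slowly decreases toward 0\n\\end{verbatim}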
For any given direction $\\boldsymbol{\\bar{\\theta}},$ the univariate index variable $T := \\tbf{X} ^{\\top} \\boldsymbol{\\bar{\\theta}}$ is assumed to have a density $f_T(\\cdot)$ with compact support $\\mathcal{T}$, and the multivariate random variable $\\tbf{X}$ is assumed to be bounded.\n\nAdditional assumptions \\ref{ass:minUnif}-\\ref{ass:curvatureUnif} and \\ref{ass:ker}-\\ref{ass:jointdtn} have been used previously in \\cite{pete:19} and are stated in the Appendix. They concern metric entropy and curvature conditions for M-estimators and are commonly used in their asymptotic analysis utilizing empirical process theory \\citep{vand:00}. They are specifically required to establish consistency and a uniform rate of convergence for the local Fr\\'echet regression estimator in \\eqref{est:local:fr} \\citep{chen:20}. Assumptions \\ref{ass:ker}-\\ref{ass:jointdtn} are commonly used in the local regression literature \\citep{silv:78,fan:96}.\n\\begin{prop}\n\\label{lem:H}\nUnder assumptions~\\ref{ass:fr:exist}-\\ref{ass:reg:cont}, $H(\\cdot)$ in \\eqref{sim:obj} is a continuous function of $\\boldsymbol{\\bar{\\theta}}$ and for any $\\boldsymbol{\\bar{\\theta}} \\in \\bar{\\Theta},$ $H({\\boldsymbol{\\bar{\\theta}_0}}) \\leq H(\\boldsymbol{\\bar{\\theta}}).$ \n\\end{prop}\n\nMost types of random objects, such as those in the Wasserstein space (the space of probability distributions equipped with the 2-Wasserstein distance) or the space of symmetric, positive semidefinite matrices endowed with the Frobenius or power metric, satisfy assumptions \\ref{ass:minUnif}-\\ref{ass:curvatureUnif} (see Appendix) with $\\beta_1= \\beta_2 =2$. If one chooses the bandwidth sequence $b$ for the local Fr\\'echet regression such that, for a given $\\varepsilon>0,$ $b\\sim n^{-(\\beta_1 -1)\/(2\\beta_1 + 4\\beta_2 - 6 +2\\varepsilon)},$ then $a_n$ is of the order $n^{-\\frac{1}{(\\beta_1 +2\\beta_2 -3 +\\varepsilon)}}$ \\citep{chen:20}. For $\\beta_1 = \\beta_2 =2,$ this becomes \n$a_n \\sim n^{-\\frac{1}{3+\\varepsilon}},$ leading to a uniform convergence rate that is arbitrarily close to $O_P(n^{-1\/3}).$ Any choice $M=M(n)=n^{\\gamma}$ with \n$0 <\\gamma<\\frac{1}{3}$ will then satisfy Assumption~\\ref{ass:tuning:M}. \n\nWe will make use of the following known result to deal with the link function part when investigating the asymptotic convergence rates of the proposed IFR estimator.\n\\begin{lem}[\\cite{chen:20} Theorem $1$]\n\\label{lem:unif:local:fr:rate}\nUnder assumptions~\\ref{ass:minUnif}-\\ref{ass:curvatureUnif} and~\\ref{ass:ker}-\\ref{ass:jointdtn}, if $b\\to 0$ such that $nb^2(-\\log b)^{-1} \\to \\infty$ as $n\\to \\infty,$ then for any $\\varepsilon>0$ and $\\beta_1,\\beta_2 >1$ as per assumption~\\ref{ass:curvatureUnif},\n\\begin{equation}\\begin{gathered}\n\\underset{t \\in \\mathcal{T}}{\\sup} \\ d(\\hmop{t},\\mop{t}) = O_P(a_n),\n\\end{gathered}\\end{equation}\nwhere $a_n$ is as given in equation \\eqref{rate:an} in Assumption~\\ref{ass:tuning:M}.\n\\end{lem}\n\nThe following result demonstrates the consistency of the proposed estimator for the true index direction.
All proofs can be found in the Supplementary Material.\n\\begin{thm} \\label{thm:probConv}\nUnder assumptions~\\ref{ass:fr:exist}-\\ref{ass:reg:cont}, ~\\ref{ass:minUnif}-\\ref{ass:curvatureUnif}, and ~\\ref{ass:ker}-\\ref{ass:jointdtn},\n$$\\boldsymbol{\\wh{\\bar{\\theta}}} -{\\boldsymbol{\\bar{\\theta}_0}} \\overset{P}{\\longrightarrow} 0 \\text{ on } \\bar{\\Theta}.$$\n\\end{thm}\nAny $\\boldsymbol{\\bar{\\theta}} \\in \\bar{\\Theta}$ is decomposed as $(\\theta_1,\\boldsymbol{\\theta})^{\\top},$ where $\\theta_1>0$; keeping identifiability in mind, $\\theta_1$ can be expressed solely as a function of $\\boldsymbol{\\theta}$ for purposes of modeling the single index.\nWe therefore rewrite the criterion functions and the corresponding minimizers in terms of the sub-vector $\\boldsymbol{\\theta}$ only, \n\\begin{equation}\\aligned\n\\boldsymbol{\\theta_0} = \\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin} \\, H(\\boldsymbol{\\theta}),\\quad \n\\tilde{\\boldsymbol{\\theta}} = \\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin} \\, \\tilde{V}_n(\\boldsymbol{\\theta}),\\quad\t\n\\hat{\\boldsymbol{\\theta}} = \\underset{\\boldsymbol{\\theta} : \\boldsymbol{\\theta} \\in \\Theta}{\\argmin} \\, V_n(\\boldsymbol{\\theta}).\n\\endaligned\\end{equation}\nWe note that $\\boldsymbol{\\theta_0},$ $\\tilde{\\boldsymbol{\\theta}},$ and $\\hat{\\boldsymbol{\\theta}}$ are the unconstrained minimizers of the criterion functions $H(\\cdot),$ $\\tilde{V}_n(\\cdot),$ and $V_n(\\cdot)$ respectively, which \nare continuous functions of $\\boldsymbol{\\theta},$ the latter two almost surely. \n\nCombining the consistency result for the direction vector from Theorem~\\ref{thm:probConv} with the uniform convergence of the local Fr\\'echet regression estimator in Lemma~\\ref{lem:unif:local:fr:rate}, the asymptotic consistency of the estimated single index Fr\\'echet regression (IFR) model follows.\n\\begin{cor}\n\\label{cor:probConv:ifr}\nUnder the conditions required for Theorem~\\ref{thm:probConv}, for any $\\tbf{x} \\in \\mbb{R}^p,$\n$$d(\\hmop{\\tbf{x} ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}, \\mop{\\tbf{x} ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}}) = o_P(1).$$\n\\end{cor}\nThe above corollary justifies the use of local Fr\\'echet regression in the context of IFR.\n\n\\section{Simulation studies}\n\\label{sec:simul}\nThere are two tuning parameters involved in the implementation of the single index Fr\\'echet regression (IFR) model in~\\eqref{sim:obj}, namely the bandwidth $b = b(n)$ involved in the local Fr\\'echet regression as per \\eqref{model:sim}, and the number of bins $M = M(n)$ at which the local Fr\\'echet regression is fitted, as per Assumption~\\ref{ass:tuning:M}.\n\nThe pair $(b,M)$ can be chosen by leave-one-out cross validation, where the objective function to be minimized is the mean discrepancy between the local Fr\\'echet regression estimates and the observed distributions for the binned data; specifically, \n\\[(b,M)_{opt} = \\argmin_{(b,M)} \\frac{1}{Mn} \\sum_{l=1}^M \\sum_{i=1}^n d^2(\\tilde{Y}_l, \\loomop{\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}}), \\]\nwhere $\\loomop{\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}}$ is the local Fr\\'echet regression estimate at $\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}$ obtained with bandwidth $b$ based on the sample excluding the $i$-th pair $(\\tbf{X}_i,Y_i)$, i.e., \n\\[\n\\loomop{\\tilde{\\tbfX}_l ^{\\top}\\boldsymbol{\\bar{\\theta}}} =
\\underset{\\omega \\in \\Omega}{\\argmin} \\frac{1}{(n-1)} \\sum_{j\\neq i} \\wh{S}(\\tbf{X}_j ^{\\top} \\boldsymbol{\\bar{\\theta}},\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\bar{\\theta}} ,b)d^2(Y_j,\\omega).\n\\]\nIn practice, we replace leave-one-out cross validation by $5$-fold cross validation when $n > 30$.\n\nThe performance of the estimation is measured through simulations under various settings. The random objects we consider include samples of univariate distributions equipped with the Wasserstein$-2$ metric, samples of adjacency matrices from networks with the Frobenius metric, and samples of multivariate data with the usual Euclidean metric. It is important to recall that the true direction ${\\boldsymbol{\\bar{\\theta}_0}}$ is chosen to lie on the unit sphere in $\\mbb{R}^p$ with $\\theta_{01}>0.$ In each case, the optimal direction is estimated as the minimizer of $V_n(\\boldsymbol{\\theta})$ in \\eqref{est:sim:obj} over $\\{\\boldsymbol{\\theta} \\in \\mbb{R}^{p-1} : \\boldsymbol{\\theta} ^{\\top} \\boldsymbol{\\theta} \\leq 1\\}$. \n\nTo evaluate the accuracy of the estimate, we repeat the data generating mechanism $500$ times in each simulation setting, and for each such replication, obtain the optimal direction as $\\boldsymbol{\\wh{\\bar{\\theta}}}^{(i)}, \\ i = 1,\\dots, 500.$ The intrinsic Fr\\'echet mean of these $500$ estimates on the unit sphere is computed as $\\widehat{\\bar{\\boldsymbol{\\theta}}}.$ Since each $\\boldsymbol{\\wh{\\bar{\\theta}}}^{(i)}$ lies on the manifold (the unit sphere in $\\mbb{R}^p$), the bias and deviance of the estimator are estimated as\n\\begin{align}\n\\text{bias}(\\boldsymbol{\\wh{\\bar{\\theta}}}) &= \\arccos \\langle \\widehat{\\bar{\\boldsymbol{\\theta}}},{\\boldsymbol{\\bar{\\theta}_0}} \\rangle, \\nonumber \\\\\n\\text{dev}(\\boldsymbol{\\wh{\\bar{\\theta}}}) &= {\\rm Var}\\left( \\arccos \\langle \\boldsymbol{\\wh{\\bar{\\theta}}}^{(i)}, \\widehat{\\bar{\\boldsymbol{\\theta}}} \\rangle\\right). \n\\label{simul:bias:var}\n\\end{align}\n\n\\noindent Essentially, we estimate the $(p-1)$-dimensional parameter $\\boldsymbol{\\theta_0} = (\\theta_{02},\\dots,\\theta_{0p})$ freely and then estimate $\\theta_{01}$ from the relation $\\theta_{01} = \\sqrt{1- \\ltwoNorm{\\boldsymbol{\\theta_0}}^2}.$ \n\n\\subsection{Distributional responses}\n\nThe space of distributions with the Wasserstein$-2$ metric provides an ideal setting for illustrating the efficacy of the proposed methods through simulation experiments. We consider distributions on a bounded domain $\\mathcal{T}$ as the response, $Y(\\cdot)$, represented by the respective quantile functions $Q(Y)(\\cdot)$, and a $p$-dimensional Euclidean predictor $\\tbf{X}$.\nThe random response is generated conditional on $\\tbf{X}$ by adding noise to the true regression quantile\n\\begin{align}\nQ(m_\\oplus(\\tbf{x}))(\\cdot) &= \\mbb{E}{\\left(Q(Y)(\\cdot)|\\tbf{X} = \\tbf{x}\\right)}.\n\\label{simul:dens1}\n\\end{align}\n\n\\noindent As emphasized, the conditional distribution of $Y$ depends on $\\tbf{X}$ only through the true index parameter $\\boldsymbol{\\theta_0}.$\nTwo different simulation scenarios are examined as we generate the distribution objects from location-scale shift families (see Table~\\ref{table:sim}).
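In practice, the minimization of $V_n$ over unit directions can be carried out by searching over a large number of candidate directions, as in the implementation described in Section~\\ref{sec:concl}. A minimal sketch for distributional responses, reusing the helpers of the earlier sketch (all names illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef w2_sq(q1, q2, levels):\n    # squared Wasserstein-2 distance between quantile functions on a level\n    # grid, via the trapezoid rule\n    d2 = (q1 - q2) ** 2\n    return np.sum(0.5 * (d2[1:] + d2[:-1]) * np.diff(levels))\n\ndef estimate_direction(X, Q, levels, b, M, n_dirs=1000, seed=0):\n    # crude approximation of (est:sim:obj): random search on the unit sphere\n    rng = np.random.default_rng(seed)\n    best_theta, best_val = None, np.inf\n    for _ in range(n_dirs):\n        theta = rng.standard_normal(X.shape[1])\n        theta \/= np.linalg.norm(theta)\n        if theta[0] < 0:\n            theta = -theta  # identifiability: first coordinate positive\n        Xb, Qb = binned_data(X, Q, theta, M)  # from the earlier sketch\n        u = X @ theta\n        Vn = np.mean([w2_sq(Qb[l],\n                            local_frechet_quantile(u, Q, Xb[l] @ theta, b),\n                            levels)\n                      for l in range(M)])\n        if Vn < best_val:\n            best_theta, best_val = theta, Vn\n    return best_theta\n\\end{verbatim}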
In the first setting, the response is generated, on average, as a normal distribution with parameters that depend on $\\tbf{X}.$ For $\\tbf{X} =\\tbf{x}$, the distribution parameter $\\mu \\sim N(\\zeta(\\tbfx \\t \\true), \\nu_1)$ is sampled independently, and for a fixed parameter $\\sigma=0.1$ the corresponding distribution is given by $Q(Y)(\\cdot) = \\mu + \\sigma \\Phi^{-1}(\\cdot)$. Here, the relevant sub-parameter is chosen as $\\nu_1 = 0.1$ and three different link functions are considered, namely $\\zeta(y) = y,$ $\\zeta(y) = y^2,$ and $\\zeta(y) = \\exp(y),$ where $\\Phi(\\cdot)$ is the standard normal distribution function. The second setting is slightly more complicated. The distributional parameter $\\mu|\\tbf{X} = \\tbf{x}$ is sampled as before and $\\sigma = 0.1$ is assumed to be a fixed parameter. The resulting distribution is then ``transported'' in Wasserstein space via a random transport map $T$ that is uniformly sampled from the collection of maps $T_k(a) = a - \\sin (ka)\/|k|$ for $k \\in \\{\\pm1, \\pm 2, \\pm 3\\}$. The distributions thus generated are no longer Gaussian due to the transportation. Nevertheless, one can show that the Fr\\'echet mean is exactly $ \\mu + \\sigma \\Phi^{-1}(\\cdot)$ as before.\n\n\\begin{table}[!htb]\n\t\\centering\n\t\\begin{tabular}{|l|l|}\n\t\t\\hline\n\t\tSetting I &\n\t\tSetting II \\\\ \\hline\n\t\t\\begin{tabular}[c]{@{}l@{}}$Q(Y)(\\cdot) = \\mu + \\sigma \\Phi^{-1}(\\cdot) $, \\\\ where \\\\ $\\mu \\sim N(\\zeta(\\tbfx \\t \\true), \\nu_1), $\\ $\\sigma = 0.1.$\n\t\t\\end{tabular} &\n\t\t\\begin{tabular}[c]{@{}l@{}}$Q(Y)(\\cdot) = T \\#(\\mu +\\sigma \\Phi^{-1}(\\cdot)) $, \\\\ where\\\\ $\\mu \\sim N(\\zeta(\\tbfx \\t \\true), \\nu_1), $\\ $\\sigma = 0.1,$ \\\\ $T_k(a) = a - \\sin(ka)\/|k|, k \\in \\{\\pm1,\\pm 2, \\pm 3\\}.$\\end{tabular} \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Table showing the two different simulation scenarios.}\n\t\\label{table:sim}\n\\end{table}\n\nFor both settings, we generate a random sample of size $n$ of density objects and multivariate Euclidean predictors from the true models, incorporating measurement error as described in the two situations above. We sample the predictors $\\tbf{X}_i$ to be of dimension $p=4,$ with the components of the vectors distributed independently as $Beta(1,1).$ Three different link functions are used to simulate the densities from the ``true'' model, namely, the identity link, the square link, and the exponential link. The bias and deviance of the estimated direction vectors for varying sample sizes are displayed in Tables~\\ref{tab:dens:set1} and~\\ref{tab:dens:set2}.
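For completeness, a minimal sketch of the data-generating mechanism of the two settings follows; here $\\nu_1$ is used as a standard deviation, an assumption of the sketch since the scale convention is not spelled out above.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\nrng = np.random.default_rng(1)\n\ndef generate_setting(n, theta0, zeta, setting=1, nu1=0.1, sigma=0.1,\n                     n_grid=101):\n    # quantile-function responses for Settings I and II\n    p = len(theta0)\n    X = rng.beta(1.0, 1.0, size=(n, p))       # Beta(1,1) predictors\n    levels = np.linspace(0.01, 0.99, n_grid)  # quantile levels\n    mu = rng.normal(zeta(X @ theta0), nu1)    # mu | X = x\n    Q = mu[:, None] + sigma * norm.ppf(levels)[None, :]\n    if setting == 2:                          # push-forward by a random T_k\n        k = rng.choice([-3, -2, -1, 1, 2, 3], size=n)\n        Q = Q - np.sin(k[:, None] * Q) \/ np.abs(k)[:, None]\n    return X, Q\n\ntheta0 = np.array([1.0, 1.0, 1.0, 1.0]) \/ 2.0  # unit vector, theta_01 > 0\nX, Q = generate_setting(500, theta0, zeta=lambda s: s, setting=2)\n\\end{verbatim}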
We note that the bias encountered due to the local Fr\\'echet estimation is generally low and that the variance of the estimates diminishes with a higher sample size.\n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\multicolumn{7}{|c|}{Setting I} \\\\ \\hline\n\t\t\t& \\multicolumn{2}{c|}{link1 ($x \\mapsto x$)} & \\multicolumn{2}{c|}{link2 ($x \\mapsto x^2$)} & \\multicolumn{2}{c|}{link3 ($x \\mapsto e^x$)} \\\\ \\hline\n\t\t\t& bias & dev & bias & dev & bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.033 & 0.027 & 0.031 & 0.029 & 0.039 & 0.041 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.011 & 0.013 & 0.017 & 0.012 & 0.020 & 0.013 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\boldsymbol{\\wh{\\bar{\\theta}}}$ (measured in radians) based on $500$ replications when the predictor dimension is $p=4.$ We took the four components of the vectors to be distributed independently as $Beta(1,1).$ The tuning parameters $(b,M)$ are chosen by the $5$-fold cross validation method.}\n\t\\label{tab:dens:set1}\n\\end{table}\nThe performance of the fits is evaluated by computing the Mean Square Error (MSE) between the observed and the fitted distributions. Denoting the simulated true and estimated distribution objects at $(\\tilde{\\tbfX}_l,\\tilde{Y}_l)$ by $\\mop{\\tilde{\\tbfX}_l ^{\\top} {\\boldsymbol{\\bar{\\theta}_0}}}$ and $\\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}$ respectively, for $l=1,\\dots,M,$ the utility of the estimation was measured quantitatively by \n\\begin{align}\n\\label{simu:dens:mse:ifr}\nMSE = \\frac{1}{M} \\sum_{l=1}^M d^2_W(\\tilde{Y}_l, \\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}),\n\\end{align}\nwhere $d_W(\\cdot,\\cdot)$ is the Wasserstein-2 distance between two distributions.\nWe also compared the estimation performance of the proposed single index Fr\\'echet regression (IFR) method to a baseline global Fr\\'echet regression (GFR) method, which can handle multivariate predictors, being a generalization of global least squares regression \\citep{pete:19}. Denoting the GFR estimate of the distribution at $(\\tilde{\\tbfX}_l,\\tilde{Y}_l)$ by $\\hat{g}_\\oplus(\\tilde{\\tbfX}_l)$ for each $l=1,\\dots,M,$ we express the MSE of the fits as\n\\begin{align}\n\\label{simu:dens:mse:gfr}\nMSE = \\frac{1}{M} \\sum_{l=1}^M d^2_W(\\tilde{Y}_l, \\hat{g}_\\oplus(\\tilde{\\tbfX}_l)),\n\\end{align}\nwhere $d_W(\\cdot,\\cdot)$ is as before. Figure~\\ref{fig:dens:mspe} shows the boxplots for the different link functions used to generate the distribution data in the two simulation settings I and II for a sample size of $n=1000.$ We observe that the IFR method outperforms the baseline GFR method in all the cases. Perhaps the closest comparison between the two methods is when an identity link function is used in the data generation mechanism. This is indeed expected since, in this case, the true model essentially reduces to a linear model.\nWe also display the fitted and the true distributions represented as densities (Figure~\\ref{fig:dens:fits}).
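For quantile-function representations, the Wasserstein-2 distance appearing in \\eqref{simu:dens:mse:ifr} and \\eqref{simu:dens:mse:gfr} is a one-dimensional integral, so the MSE can be computed directly; a minimal sketch (repeating the small helper from the earlier sketch so the block is self-contained):\n\\begin{verbatim}\nimport numpy as np\n\ndef w2_sq(q1, q2, levels):\n    # squared Wasserstein-2 distance between two univariate distributions\n    # represented by quantile functions q1, q2 on a grid of levels\n    d2 = (q1 - q2) ** 2\n    return np.sum(0.5 * (d2[1:] + d2[:-1]) * np.diff(levels))\n\ndef mse(Q_obs, Q_fit, levels):\n    # mean squared W2 distance over the M binned observations\n    return float(np.mean([w2_sq(a, b, levels)\n                          for a, b in zip(Q_obs, Q_fit)]))\n\\end{verbatim}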
The estimates from the IFR method match the observed densities quite well, which further validates the proposed estimation procedure.\n\\begin{figure}[!htb]\n\t\\centering\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{plots\/boxplot_dens_n100}\n\t\t\\caption{Simulation Setting I.}\n\t\t\\label{fig:dens_set1:mspe}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{plots\/boxplot_dens_set2_n100}\n\t\t\\caption{Simulation Setting II.}\n\t\t\\label{fig:dens_set2:mspe}\n\t\\end{subfigure}\n\t\\caption{Boxplots of the MSPE of the fits using the single index Fr\\'echet regression (IFR) model and the global Fr\\'echet regression (GFR) model for a sample size $n=1000.$ The left and the right panels correspond to the simulation settings I and II, respectively. The left, middle, and right columns in each of the panels correspond to the three different link functions used in the data generation mechanism, namely, the identity, square, and exponential link functions, respectively, while the link functions in all cases are estimated from the data.}\n\t\\label{fig:dens:mspe}\n\\end{figure}\n\n\n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\multicolumn{7}{|c|}{Setting II} \\\\ \\hline\n\t\t\t& \\multicolumn{2}{c|}{link1 ($x \\mapsto x$)} & \\multicolumn{2}{c|}{link2 ($x \\mapsto x^2$)} & \\multicolumn{2}{c|}{link3 ($x \\mapsto e^x$)} \\\\ \\hline\n\t\t\t& bias & dev & bias & dev & bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.029 & 0.027 & 0.022 & 0.037 & 0.028 & 0.044 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.010 & 0.012 & 0.011 & 0.014 & 0.017 & 0.021 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\boldsymbol{\\wh{\\bar{\\theta}}}$ (measured in radians) based on $500$ replications when the predictor dimension is $p=4.$ We took the four components of the vectors to be distributed independently as $Beta(1,1).$ The tuning parameters $(b,M)$ are chosen by the $5$-fold cross validation method.}\n\t\\label{tab:dens:set2}\n\\end{table}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\t\\includegraphics[width=\\textwidth, height= .4\\textheight]{plots\/fits_dens_set2_n100}\n\t\\caption{Figure showing the density estimates of the distribution objects generated in simulation setting II. The blue and red curves are the observed and estimated densities, respectively. The left, middle, and right panels correspond to the three different link functions used in the data generation mechanism, namely, the identity, square, and exponential link functions, respectively.}\n\t\\label{fig:dens:fits}\n\\end{figure}\n\n\n\\subsection{Adjacency matrix responses}\nHere the response objects are assumed to reside in the space of adjacency matrices of weighted graphs, equipped with the Frobenius metric.
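Because the Frobenius metric is simply the Euclidean metric applied to the matrix entries, the weighted Fr\\'echet means required by \\eqref{est:local:fr} and \\eqref{binned_dat} have a closed form in this space: the minimizer of $\\sum_i w_i d_F^2(Y_i,\\omega)$ is the entrywise weighted average. A minimal sketch:\n\\begin{verbatim}\nimport numpy as np\n\ndef frobenius_frechet_mean(Y, w):\n    # weighted Frechet mean of matrices under the Frobenius metric;\n    # Y: (n, V, V) array, w: weights (possibly signed local weights);\n    # the first-order condition of sum_i w_i ||Y_i - omega||_F^2\n    # gives the entrywise weighted average\n    w = np.asarray(w, dtype=float)\n    return np.tensordot(w, Y, axes=1) \/ w.sum()\n\\end{verbatim}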
The $(q,r)$-th entry of the adjacency matrix $Y$ is given by\n\\begin{align}\n\\label{simu:adj:mat}\nY_{qr} = m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0}) + \\epsilon_{qr},\n\\end{align}\nwhere $\\epsilon_{qr}$ are independently sampled errors and the link function $m(\\cdot)$ is such that $Y_{qr} \\in (0,1)$ for all $q, r.$ To ensure this, an appropriate link function in this case is the expit link, that is, $m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0}) = 1\/(1 + \\exp(- \\tbf{x}^{\\top} \\boldsymbol{\\theta_0})).$ For a given index $\\tbf{x}^{\\top} \\boldsymbol{\\theta_0},$ $\\epsilon_{qr}$ was sampled from a uniform distribution on $[\\max\\{0,-m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0})\\}, \\min\\{1,1-m(\\tbf{x}^{\\top} \\boldsymbol{\\theta_0})\\}]$. \n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{link ($x \\mapsto 1\/(1+\\exp(-x))$)} \\\\ \\hline\n\t\t\t& bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.044 & 0.052 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.021 & 0.019 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\boldsymbol{\\wh{\\bar{\\theta}}}$ (measured in radians) based on $500$ replications for weighted adjacency matrix responses. The tuning parameters for the model fitting are chosen by the $5$-fold cross validation method.}\n\t\\label{tab:adjacency:set1}\n\\end{table}\nWe generated samples of networks with $10$ nodes, as one might encounter in brain networks, with the weighted adjacency matrices computed as per \\eqref{simu:adj:mat}. The predictors are sampled from a $4$-dimensional multivariate normal distribution, with each of the components truncated to lie in $[-5,5].$ While the mean vector for the multivariate normal distribution in the data generation scheme is assumed to be the zero vector, we assume the associated covariance matrix to be non-identity with ${\\rm cor}(X_1,X_2) = {\\rm cor}(X_1,X_3) = {\\rm cor}(X_2,X_3) = 0.3,$ and ${\\rm cor}(X_1,X_4) = {\\rm cor}(X_2,X_4) = -0.4.$ The variances for each of the four components are assumed equal to $0.25.$ We note here that the non-zero correlation among the components of the predictor vector does not influence the performance of the nonparametric regression fit negatively. Table~\\ref{tab:adjacency:set1} presents the bias and variance of the estimator computed based on $500$ replications of the data generating process. \n\\subsection{Euclidean responses}\nHere the object response of interest is assumed to lie in the Euclidean space.
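For Euclidean responses, the Fr\\'echet machinery reduces to familiar quantities: with $\\Omega = \\mbb{R}^k$ and $d$ the Euclidean metric, the weighted Fr\\'echet means appearing in \\eqref{est:local:fr} and \\eqref{binned_dat} have the closed form\n\\[\n\\underset{\\omega \\in \\mbb{R}^k}{\\argmin} \\ \\sum_{i=1}^{n} w_i \\ltwoNorm{Y_i - \\omega}^2 = \\frac{\\sum_{i=1}^{n} w_i Y_i}{\\sum_{i=1}^{n} w_i},\n\\]\nprovided $\\sum_{i=1}^{n} w_i \\neq 0$, which follows from the first-order condition $\\sum_{i=1}^{n} w_i (Y_i - \\omega) = \\mathbf{0}$; local Fr\\'echet regression therefore coincides with componentwise local linear regression in this setting.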
For generating the predictor vectors we consider a $5$-dimensional vector following a truncated multivariate normal distribution, with each of the components truncated to lie in $[-10,10].$ The components are assumed to be correlated such that $X_1$ correlates with $X_2$ and $X_3$ with $r = 0.5$, and $X_2$ and $X_3$ correlate with $r = 0.25.$ The variances for each of the five components are $0.1.$ \nThe consistency of the estimates is illustrated in Table~\\ref{tab:euclidean:set1} based on $500$ replications of the simulation scenario.\n\\begin{table}[!htb]\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{link1 ($x \\mapsto x$)} & \\multicolumn{2}{c|}{link2 ($x \\mapsto x^2$)} & \\multicolumn{2}{c|}{link3 ($x \\mapsto e^x$)} \\\\ \\hline\n\t\t\t& bias & dev & bias & dev & bias & dev \\\\ \\hline\n\t\t\t$n = 100$ & 0.013 & 0.061 & 0.025 & 0.048 & 0.037 & 0.029 \\\\ \\hline\n\t\t\t$n = 1000$ & 0.006 & 0.021 & 0.014 & 0.019 & 0.013 & 0.009 \\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Table showing bias and variance of $\\boldsymbol{\\wh{\\bar{\\theta}}}$ (measured in radians) based on $500$ replications for a Euclidean vector response. The predictors $X_1,\\dots, X_5$ are generated from a truncated multivariate normal distribution.}\n\t\\label{tab:euclidean:set1}\n\\end{table}\n\\section{Data analysis}\n\\label{sec:data:ADNI}\nModern functional Magnetic Resonance Imaging (fMRI) methodology has made it possible to study structural elements of the brain and to identify brain regions or cortical hubs that exhibit similar behavior, especially when subjects are in the resting state \\citep{alle:14, ferr:13}.\nIn resting state fMRI, a time series of Blood Oxygen Level Dependent (BOLD) signals is observed for the seed voxels in selected functional hubs. For each hub, a seed voxel is identified as the voxel whose signal has the highest correlation with the signals of nearby voxels. Alzheimer's disease has been found to be associated with anomalies in the functional integration of brain regions and with target regions or hubs of high connectivity in the brain \\citep{damo:12,zhan:10c}. \n\n\nData used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (\\url{adni.loni.usc.edu}).\nBOLD signals for $V= 11$ brain seed voxels were extracted for each subject. These Regions of Interest are aMPFC (Anterior medial prefrontal cortex), PCC (Posterior cingulate cortex), dMFPC (Dorsal medial prefrontal cortex), TPJ (Temporal parietal junction), LTC (Lateral temporal cortex), TempP (Temporal pole), vMFPC (Ventral medial prefrontal cortex), pIPL (Posterior inferior parietal lobule), Rsp (Retrosplenial cortex), PHC (Parahippocampal cortex), and HF$^+$ (Hippocampal formation) \\citep{andr:10}. The pre-processing of the BOLD signals was implemented by adopting the standard procedures of slice-timing correction, head motion correction, and normalization, among other standard steps. The signals for each subject were recorded over the interval $[0, 270]$ (in seconds), with $K=136$ measurements available at $2$-second intervals. From these, the temporal correlations were computed to construct the connectivity correlation matrix, also referred to as the Pearson correlation matrix in the area of fMRI studies.
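The construction of the connectivity matrix, formalized in \\eqref{data:ADNI:pearson:corr} below, amounts to computing all pairwise Pearson correlations of the voxel signals; a minimal sketch:\n\\begin{verbatim}\nimport numpy as np\n\ndef connectivity_matrix(S):\n    # S: (K, V) matrix of BOLD signals, K time points for V seed voxels;\n    # returns the V x V Pearson correlation (connectivity) matrix\n    Sc = S - S.mean(axis=0, keepdims=True)\n    denom = np.sqrt((Sc ** 2).sum(axis=0))\n    return (Sc.T @ Sc) \/ np.outer(denom, denom)\n\\end{verbatim}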
\n\nThe data set in our analysis consists of $n=830$ subjects at the four stages of the disease: $372$ CN (cognitively normal), $113$ EMCI (early mild cognitive impairment), $200$ LMCI (late mild cognitive impairment), and $145$ AD subjects. The inter-hub connectivity Pearson correlation matrix for the $i$-th subject is denoted as $Y_{i}$, and has the $(q,r)$-th element \n\\begin{align}\n(Y_{i})_{qr} = \\frac{\\sum_{p=1}^{K}(s_{ipq} - \\bar{s}_{iq}) (s_{ipr} - \\bar{s}_{ir})}{\\left[\\left(\\sum_{p=1}^{K}(s_{ipq} - \\bar{s}_{iq})^2\\right) \\left(\\sum_{p=1}^{K}(s_{ipr} - \\bar{s}_{ir})^2 \\right) \\right]^{1\/2}},\n\\label{data:ADNI:pearson:corr}\n\\end{align}\nwhere $s_{ipq}$ is the $(p,q)^{\\text{th}}$ element of the signal matrix for the $i^{\\text{th}}$ subject and $\\bar{s}_{iq} := \\frac{1}{K}\\sum_{p=1}^{K} s_{ipq}$ is the mean signal strength for the $q^{\\text{th}}$ voxel. For Alzheimer's disease trials, ADAS-Cog-13 is a widely used measure of cognitive performance. It measures impairments across several cognitive domains that are considered to be affected early and characteristically in Alzheimer's disease \\citep{rock:07, kuep:18}. It is important to note that higher scores are associated with more serious cognitive deficiency. \n\nTo demonstrate the validity of the model, we consider the out-of-sample prediction performance of the proposed IFR. For this, we first randomly split the dataset into a training set with sample size $n_{\\text{train}}$ and a test set with the remaining $n_{\\text{test}}$ subjects. The IFR method was implemented as follows: for any given unit direction $\\boldsymbol{\\bar{\\theta}} \\in \\bar{\\Theta},$ we partition the domain of the projections into $M$ equal-width non-overlapping bins and compute the mean observations $\\tilde{\\tbfX}_l$ and $\\tilde{Y}_l$ for the data points belonging to the $l$-th bin, where the latter are defined as the appropriate Fr\\'echet barycenters. Observe that $M$ depends on the sample size. The ``true'' index is estimated as $\\boldsymbol{\\wh{\\bar{\\theta}}}$ as per~\\eqref{est:sim:obj}. We then take the fitted objects obtained from the training set, and predict the responses in the test set using the covariates present in the test set. As a measure of the efficacy of the fitted model, we compute the root mean squared prediction error (RMPE) as\n\\begin{align}\n\\text{RMPE} = \\left[\\frac{1}{M_{n_{\\text{test}}}}\\sum_{l=1}^{M_{n_{\\text{test}}}} d_F^2\\left(\\tilde{Y}_l^{\\text{test}}, \\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}} \\right) \\right]^{1\/2},\n\\end{align}\nwhere $\\tilde{Y}_l^{\\text{test}}$ and $\\hmop{\\tilde{\\tbfX}_l ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}}$ denote, respectively, the $l^{\\text{th}}$ observed and predicted responses in the test set, evaluated at the binned average $\\tilde{\\tbfX}_l.$ We repeat this process $500$ times, and compute the RMPE for each split separately (see Table~\\ref{tab:ADNI:rmpe}).\nThe tuning parameters $(b,M)$ are chosen by a $5$-fold cross validation method for each replication of the process.\n\\begin{table}[h!]\n\t\\centering\n\t\\begin{tabular}{ ccc } \n\t\t\\hline\n\t\t$n_{\\text{train}}$ & $n_{\\text{test}}$ & RMPE for the IFR method\\\\\n\t\t\\hline\n\t\t$500$ & $330$ & $0.206$ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Average root mean prediction error (RMPE) over the repeated random splits, as obtained from the local fits of the single index Fr\\'echet regression (IFR) model.
Here, $n_{\\text{train}}$ and $n_{\\text{test}}$ denote the sample sizes for the split training and testing datasets, respectively.}\n\t\\label{tab:ADNI:rmpe}\n\\end{table}\nWe observe that the out-of-sample prediction error is quite low after fitting the IFR model. In fact, it is comparable to the in-sample prediction error $(0.251)$, calculated as the average distance between the observed training sample and the predicted objects based on the covariates in the training set, which supports the validity of the proposed IFR model. \n\n\nAnother interest of our study was to understand the effect of various relevant predictors, such as age, gender, total score, and stage of the disease, on the inter-hub functional connectivity. In particular, the hypothesis to test is given by\n$H_0: \\mop{\\cdot} = \\mop{\\theta_1X_1} \\text{ vs. } H_1: \\mop{\\cdot} = \\mop{\\sum_{j=1}^p \\theta_jX_j},$ or \nequivalently \n$$H_0: \\boldsymbol{\\theta} = \\mathbf{0}_{(p-1) \\times 1} \\ \\text{ vs. } \\ H_1: \\text{ not all } \\theta_j \\text{ are }0, \\ j =2,\\dots,p, $$ \nwhere $\\boldsymbol{\\bar{\\theta}} = (\\theta_1,\\boldsymbol{\\theta})^{\\top}$ and $\\boldsymbol{\\theta} = (\\theta_2,\\dots,\\theta_p).$\nHere we consider $p =10$ predictors, namely, $X_{1}=$ stage of the disease, $X_2 =$ age, $X_3 =$ sex, $X_4 =$ total ADAS score, and the pairwise interaction terms between them, given by\n$X_5 = X_1X_2$, $X_6 = X_1X_3$, $X_7 = X_1X_4$, $X_8 = X_2X_3$, $X_9 = X_2X_4$, and $X_{10} = X_3X_4$. \n\nWe employ a bootstrap procedure to test $H_0$.\nBased on a standard bootstrap approach, we resample $(\\tbf{X}_i^\\ast, Y_i^\\ast)$ a large number of times, $B$. For each of these $B$ bootstrap samples, the estimated direction $\\hat{\\boldsymbol{\\theta}}^{\\ast}$ (a $(p-1)$-dimensional vector) is computed. We estimate the full $p$-dimensional vector $\\boldsymbol{\\wh{\\bar{\\theta}}}^\\ast$ for each of the bootstrap samples, where $\\boldsymbol{\\wh{\\bar{\\theta}}}^\\ast = (\\widehat{\\theta}_1^\\ast, \\hat{\\boldsymbol{\\theta}}^\\ast),$ with $\\widehat{\\theta}_1^\\ast = \\sqrt{1 - \\ltwoNorm{\\hat{\\boldsymbol{\\theta}}^\\ast}^2}.$ Denote the $p$-dimensional unit vector estimated from the original sample by $\\boldsymbol{\\wh{\\bar{\\theta}}}.$ Both $\\boldsymbol{\\wh{\\bar{\\theta}}}$ and the vectors $\\boldsymbol{\\wh{\\bar{\\theta}}}^\\ast$ for all $B$ bootstrap samples lie on the unit sphere in $\\mbb{R}^p.$ We can compute the geodesic distance between two points $\\boldsymbol{\\bar{\\gamma}_1} $ and $\\boldsymbol{\\bar{\\gamma}_2} $ on the unit sphere by $d_g(\\boldsymbol{\\bar{\\gamma}_1} , \\boldsymbol{\\bar{\\gamma}_2} ) = \\arccos \\langle \\boldsymbol{\\bar{\\gamma}_1} , \\boldsymbol{\\bar{\\gamma}_2} \\rangle.$ We then conduct a bootstrap test as follows (a minimal code sketch of this procedure is given at the end of this section). \n\\begin{itemize}\n\t\\item[1.] Compute $d_g(\\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)}, {\\boldsymbol{\\bar{\\theta}_0}}),$ where $\\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)}$ is the $p$-dimensional unit vector estimated based on the $b$-th bootstrap sample, $b = 1,\\dots, B,$ and ${\\boldsymbol{\\bar{\\theta}_0}} = (1,0,\\dots,0)^{\\top}.$\n\t\\item[2.] The Achieved Significance Level (ASL) of the bootstrap test is given by\n\t$$ASL = \\frac{1}{B}\\sum_{b=1}^B \\indicator{ d_g(\\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)},{\\boldsymbol{\\bar{\\theta}_0}}) > d_g(\\boldsymbol{\\wh{\\bar{\\theta}}}, \\boldsymbol{\\wh{\\bar{\\theta}}}^{\\ast (b)})}.$$\n\t\\item[3.]
If $ASL <\\alpha$, reject the null hypothesis at level $\\alpha.$\n\\end{itemize}\nWe carried out the procedure for testing the significance of the other predictors when ``stage of the disease'' is assumed to be included in the model. The $ASL$ for the bootstrap test came out to be $0.012,$ thus giving evidence for rejecting $H_0$ at level $\\alpha = 0.05.$ That is, not all of the $(p-1)$ remaining predictors are insignificant. \n\\begin{table}[H]\n\t\\begin{center}\n\t\t\\begin{tabular}{|l|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{Step 1} & \\multicolumn{2}{c|}{Step 2} & \\multicolumn{2}{c|}{Step 3} \\\\ \\hline\n\t\t\t& Coeff. & ASL & Coeff. & ASL & Coeff. & ASL \\\\ \\hline\n\t\t\tAge & -0.364 & 0.005 & -0.394 & - & -0.401 & - \\\\ \\hline\n\t\t\tGender & 0.371 & 0.122 & 0.558 & 0.161 & 0.279 & 0.113 \\\\ \\hline\n\t\t\tTotal Score & 0.198 & 0.094 & 0.207 & 0.010 & 0.173 & - \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\caption{Table showing the coefficients and ASL values for the step-wise addition of predictors to quantify their relative significance.}\n\t\t\\label{tab:data:step_reg}\n\t\\end{center}\n\\end{table}\nWe further performed a sequential addition of predictors to quantify their relative significance. For this, we specify an ``alpha-to-enter'' significance level at $\\alpha = 0.05$ and fit each candidate model with one added predictor. To this end, we consider $X_1$ to be in the model and fit model~\\eqref{sim:obj} to estimate the index direction. In particular, we include each of $X_2,$ $X_3,$ and $X_4$ in the model along with $X_1$ and test whether the corresponding effect is significant, i.e., we test $\\theta_j=0, \\ j=2,3,4,$ separately. The first predictor to be included in the step-wise model is the predictor that has the smallest $ASL$. We stop if no predictor has a test $ASL$ less than $\\alpha.$ Table~\\ref{tab:data:step_reg} illustrates the step-wise addition of predictors. The first predictor to be added at level $0.05,$ when $X_1$ (``stage of the disease'') is already in the model, is $X_2$ (age). Interestingly, the negative value of $\\hat{\\theta}_2$ $(-0.364)$ signifies a possible negative effect of the predictor on the response. This is quite expected since Alzheimer's is a disease that is known to progress with age.\nIn step 2, $X_4$ (total score) is added as a significant predictor at the $0.05$ level. Here the estimate $\\hat{\\theta}_4 = 0.207$ can be interpreted as a possible association of a higher value of the total score with greater cognitive impairment. The effect of $X_3$ (gender) is deemed not significant.\n\nFinally, with $X_1,$ $X_2,$ and $X_4$ in the model, we test for significance of the pairwise interaction terms. The hypothesis of interest is $H_0 : \\theta_5 =\\theta_6 = \\dots = \\theta_{10} =0.$ The $ASL$ for the test comes out to be $0.106,$ giving evidence that no significant pairwise interaction needs to be included in the model. \nThus we estimate the relevant ``true'' model in this case as $E_\\oplus(Y|\\tbfX \\t \\para) = \\mop{X_1\\theta_1 + X_2\\theta_2 + X_4\\theta_4}.$\nThe estimated average Fr\\'echet error $\\frac{1}{n} \\sum_{i=1}^n d^2(Y_i,\\hmop{X_{1i}\\hat{\\theta}_1 + X_{2i}\\hat{\\theta}_2 + X_{4i}\\hat{\\theta}_4})$ is \nquite small $(0.239).$ \n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=.9\\textwidth]{.\/plots\/obs_vs_fits_ADNI_corrplots}\n\n\n\t\\caption{The observed and fitted functional connectivity matrices for an increasing value of the single index are plotted.
The estimated direction $\\boldsymbol{\\wh{\\bar{\\theta}}}$ is computed for the final model, and the estimated indices $\\tbf{X}_i ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}}$ are calculated. The panels in the top row, from left to right, depict the observed functional connectivity correlation matrices for the subjects whose estimated index values are closest to the $25\\%, 50\\%,$ and $75\\%$ quantiles of the estimated index, respectively. The bottom row shows the fitted functional connectivity correlation matrices for the same subjects, with the link functions estimated using nonparametric Fr\\'echet regression at the given quantiles (from left to right). Positive (negative) values are drawn in red (blue) and larger circles correspond to larger absolute values. The figure illustrates the dependence of functional connectivity on the overall index effect.}\n\t\\label{fig:ADNI:obs_vs_fits}\n\\end{figure}\n\n\nTo demonstrate the validity of the IFR method, we compute the estimated index for the final model as $X_{1}\\hat{\\theta}_1 + X_{2}\\hat{\\theta}_2 + X_{4}\\hat{\\theta}_4$ for each subject and calculate the $25\\%, 50\\%,$ and $75\\%$ quantiles of the index. These come out to be $q_1 = 15.048,$ $q_2 = 16.430,$ and $q_3 = 18.250,$ respectively. We find the subjects who have their estimated index values closest to $q_1,$ $q_2,$ and $q_3$, respectively. Table~\\ref{tab:ADNI:fits} shows the details on the three selected subjects. We compare the observed and fitted functional connectivity correlation matrices for these three subjects, where the object link function is fitted by the local Fr\\'echet regression method at the estimated index values corresponding to each subject. This gives an intuitive idea of how the estimated link function at the estimated direction vector, given by $\\hmop{\\tbf{x} ^{\\top}\\boldsymbol{\\wh{\\bar{\\theta}}}},$ changes with an increasing value of the index $\\tbf{x} ^{\\top} \\boldsymbol{\\wh{\\bar{\\theta}}},$ and thus brings out the effectiveness of the IFR model. In Figure~\\ref{fig:ADNI:obs_vs_fits} we display the observed (top row) and fitted (bottom row) correlation matrices for an increasing value of the single index at $q_1,$ $q_2,$ and $q_3$, respectively, in the columns from left to right. We observe that the fits match the general pattern of the observed matrices quite well. Indeed, the Frobenius distances between the observed and the estimated matrices at $q_1,$ $q_2,$ and $q_3$ are calculated as $1.68,$ $1.10,$ and $0.79,$ respectively. A seeming tendency toward more negative correlation values in the connectivity matrices with increasing index values is an interesting finding. The overall effect of stage of the disease, age, and total score tends to influence the connectivity pattern in the brains of Alzheimer's patients. A higher index value would imply more cognitive deficiency in this case, which matches widely held beliefs.
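As promised above, a minimal sketch of the bootstrap significance test follows; the routine \\texttt{estimate\\_direction} is assumed to be supplied by the user (e.g., along the lines of the earlier sketch), and all names are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef geodesic(a, b):\n    # geodesic distance between unit vectors on the sphere\n    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))\n\ndef bootstrap_asl(X, Y, estimate_direction, B=1000, seed=0):\n    # ASL for H0: theta_bar_0 = (1, 0, ..., 0), following steps 1-3 above\n    rng = np.random.default_rng(seed)\n    n, p = X.shape\n    theta0 = np.eye(p)[0]\n    theta_hat = estimate_direction(X, Y)  # estimate on the original sample\n    count = 0\n    for _ in range(B):\n        idx = rng.integers(0, n, size=n)  # resample pairs (X_i, Y_i)\n        theta_b = estimate_direction(X[idx], Y[idx])\n        count += geodesic(theta_b, theta0) > geodesic(theta_hat, theta_b)\n    return count \/ B\n\\end{verbatim}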
\n\\begin{table}[]\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|c|c|}\n\t\t\\hline\n\t\t\\begin{tabular}[c]{@{}c@{}}Subject\\\\ number\\end{tabular} &\n\t\t\\begin{tabular}[c]{@{}c@{}}Estd.\\\\ index value\\end{tabular} &\n\t\t\\begin{tabular}[c]{@{}c@{}}Stage of the\\\\ disease\\end{tabular} &\n\t\tAge &\n\t\tGender &\n\t\tTotal score \\\\ \\hline\n\t\t726 & 15.045 & 2 & 66.10 y & M & 20.33 \\\\ \\hline\n\t\t695 & 16.430 & 2 & 78.12 y & M & 14 \\\\ \\hline\n\t\t556 & 18.252 & 1 & 72.55 y & M & 51.67 \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Table showing the details on the subjects whose estimated index values are closest to the $25\\%$, $50\\%$, and $75\\%$ quantiles of the estimated index, $q_1\\ (15.048),$ $q_2\\ (16.430),$ and $q_3\\ (18.250),$ respectively. Subject $726$ has the estimated index value closest to $q_1$, and so on.}\n\t\\label{tab:ADNI:fits}\n\\end{table}\n\\section{Discussion}\n\\label{sec:concl}\nThe proposed single index Fr\\'echet regression (IFR) model provides a new tool for the regression analysis of random object data, which are increasingly encountered in modern data analysis. Instead of a two-step procedure to estimate the link function and the index parameter separately, we discuss a direct M-estimation approach. The proposed method is a combination of multivariate M-estimation (for the index vector) and Fr\\'echet regression (for the link function) that extends the regression regime to object data responses. The index parameter is recovered as the unit direction minimizing the ``residual sum of squares'' after fitting the Fr\\'echet model. In fact, any other convex loss function can be used for this purpose. For efficient computation, we use the \\emph{Julia} language and parallel programming to estimate the minimizing direction $\\hat{\\boldsymbol{\\theta}}$ by searching over $1000$ directions such that the Fr\\'echet variance is minimized. On a $25$-core computing system, for a sample size $n=1000,$ determining the optimal direction takes about $2$ hours when a $5$-fold cross validation method is employed to select the tuning parameters $(b,M).$\n\nIn this work, we provide asymptotic results for the estimation of the index parameter, which could be extended for inference. In particular, single index models being a generalization of linear regression, the interpretability of the index parameter is important for testing the effect of a subset of predictors in modulating the response. The same is true in our IFR model, despite the responses being situated in a general metric space, as illustrated by the fMRI brain imaging example.\n\n\\section*{Appendix}\n\\subsection*{A.1.
Technical assumptions regarding local Fr\\'echet regression}\n\\label{appen:assump}\nRecall that, for any given direction $\\boldsymbol{\\theta}$ and index value $t$ such that $\\tbfx \\t \\para = t,$ the conditional Fr\\'echet mean is given by \n\\begin{equation}\\aligned\n\\label{loc:fr:target}\n\\mop{t} = \\underset{\\omega \\in \\Omega}{\\argmin} \\ M(\\omega,t); \\quad M(\\omega,t) := \\mbb{E}{(d^2(Y,\\omega)|\\tbf{X}^{\\top} \\boldsymbol{\\theta} = t)}.\n\\endaligned\\end{equation}\nThe local Fr\\'echet regression estimate is given by \n\\begin{equation}\\aligned\n\\label{loc:fr::inter:target}\n\\hmop{t} = \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\hat{L}_n(\\omega,t);\\quad \\hat{L}_n(\\omega,t) :=\\frac{1}{n}\\sum_{i=1}^n\\wh{S}(\\tbf{X}_i ^{\\top} \\boldsymbol{\\theta}, t, b)d^2(Y_i,\\omega).\n\\endaligned\\end{equation}\nLet us define the intermediate localized weighted Fr\\'echet mean as \n\\begin{equation}\\aligned\n\\label{loc:fr:estd}\n\\tmop{t} = \\underset{\\omega \\in \\Omega}{\\argmin} \\ \\tilde{L}_b(\\omega,t); \\quad \\tilde{L}_b(\\omega,t):=\\mbb{E}{(S(\\tbfX \\t \\para, t, b)d^2(Y,\\omega))}.\n\\endaligned\\end{equation}\nThe marginal density $f_T$ of $T := \\tbf{X} ^{\\top} \\boldsymbol{\\theta},$ for any $\\boldsymbol{\\theta} \\in \\mbb{R}^{p-1} \\text{ with } \\boldsymbol{\\theta} ^{\\top} \\boldsymbol{\\theta} \\leq 1,$ is bounded away from zero on its support $\\mathcal{T},$ i.e., $\\inf_{t\\in\\mathcal{T}}\\ f_T(t) >0.$ We require the following assumptions, as described in \\cite{chen:20}.\n\\begin{enumerate}[label=(U\\arabic*)]\n\\item \\label{ass:minUnif} \nFor all $t \\in \\mathcal{T},$ \nthe minimizers $\\mop{t}$, $\\hmop{t}$, and $\\tmop{t}$ exist and are unique, the latter two almost surely. In addition, for any $\\varepsilon>0$, \n\\begin{equation}\\begin{gathered}\n\\inf_{t \\in \\mathcal{T}}\n\\inf_{d(\\mop{t},\\omega)>\\varepsilon} [M(\\omega,t)-M(\\mop{t},t)]>0,\\\\\n\\liminf_{b \\to 0} \\inf_{t \\in \\mathcal{T}} \n\\inf_{d(\\omega, \\tmop{t}) > \\varepsilon} [\\tilde{L}_b(\\omega,t) - \\tilde{L}_b(\\tmop{t},t)]>0,\n\\end{gathered}\\end{equation}\nand there exists $c = c(\\varepsilon)>0$ such that\n\\begin{equation}\\begin{gathered}\nP\\left(\\inf_{t \\in \\mathcal{T}} \\inf_{d(\\hmop{t},\\omega) >\\varepsilon} [\\hat{L}_n(\\omega,t) - \\hat{L}_n(\\hmop{t},t)]\\geq c \\right) \\to 1.\n\\end{gathered}\\end{equation}\n\\item \\label{ass:entropyUnif} \nLet $\\mathcal{B}_r(\\mop{t}) \\subset \\Omega$ be a ball of radius $r$ centered at $\\mop{t}$ and $\\mathcal{N}(\\varepsilon,\\mathcal{B}_r(\\mop{t}),d)$ be its covering number using balls of radius $\\varepsilon.$ Then\n\\begin{equation}\\begin{gathered}\n\\underset{r\\to 0+}{\\lim} \\int_0^1 \\underset{t \\in \\mathcal{T}}{\\sup} \\sqrt{1 + \\log \\mathcal{N}(r\\varepsilon,\\mathcal{B}_r(\\mop{t}),d)} d \\varepsilon = O(1).\n\\end{gathered}\\end{equation}\n\\item \\label{ass:curvatureUnif} There exist $r_1,r_2>0,$ $c_1,c_2>0,$ and $\\beta_1,\\beta_2>1$ such that\n\\begin{equation}\\begin{gathered}\n\\inf_{t \\in \\mathcal{T}} \\ \\inf_{d(\\mop{t},\\omega)<r_1} [M(\\omega,t)-M(\\mop{t},t) - c_1 d(\\mop{t},\\omega)^{\\beta_1}] \\geq 0,\\\\\n\\liminf_{b \\to 0} \\inf_{t \\in \\mathcal{T}} \\ \\inf_{d(\\omega,\\tmop{t})<r_2} [\\tilde{L}_b(\\omega,t) - \\tilde{L}_b(\\tmop{t},t) - c_2 d(\\omega,\\tmop{t})^{\\beta_2}] \\geq 0.\n\\end{gathered}\\end{equation}\n\\item \\label{ass:ker} The kernel $K$ is a probability density function, symmetric around zero, and differentiable, with $\\sup_{x>0}|K'(x)| < \\infty.$ Additionally, $\\int_\\mbb{R} x^2 |K'(x)|\\sqrt{|x\\log|x||} dx <\\infty.$\n\\item \\label{ass:jointdtn} \nThe marginal density $f_T$ of $T = \\tbfX \\t \\para$ for any given unit direction $\\boldsymbol{\\theta}$ and the conditional densities $f_{T|Y}(\\cdot,y)$ of $T = \\tbfX \\t \\para$ given $Y= y$ exist and are twice continuously differentiable on the interior of $\\mathcal{T}$, the latter for all $y \\in\\Omega$.
The marginal density $f_T$ is bounded away from zero on its support $\\mathcal{T},$ i.e., $\\inf_{t\\in\\mathcal{T}} f_T(t) >0.$\nThe second-order derivative $f_T''$ is uniformly bounded, $\\sup_{t \\in \\mathcal{T}}|f_T''(t)|<\\infty$. \nThe second-order partial derivatives $(\\partial^2 f_{T|Y}\/\\partial t^2)(t,y)$ are uniformly bounded, $\\sup_{t,y} |(\\partial^2 f_{T|Y}\/\\partial t^2)(t,y)| < \\infty$. \nAdditionally, for any open set $U\\subset \\Omega$, $P(Y\\in U | T = t)$ is continuous as a function of $t$. \n\\end{enumerate}\n\\bibliographystyle{apalike}
\\section{Introduction}\n\n\n\nThe study of heavy quarkonia, i.e.\\ mesons built from a heavy (with $m_Q\\gg\\Lambda_\\mathrm{QCD}$) quark-antiquark pair, is a very interesting task from both theoretical and experimental points of view. The theoretical framework that is usually used for the description of these particles is Nonrelativistic Quantum Chromodynamics (NRQCD) \\cite{Bodwin:1994jh}, which allows one to describe with rather good accuracy the processes of charmonia (e.g. ${J\/\\psi}$ or $\\chi_{cJ}$) or bottomonia (e.g. $\\Upsilon$, $\\chi_{bJ}$) production at hadronic colliders, as well as various decays of these particles \\cite{Aad:2011sp,Chatrchyan:2011kc,TheATLAScollaboration:2013bja,Butenschoen:2012qr,Butenschoen:2012px, Likhoded:2016zmk, Aaij:2016bqq}. It should be mentioned, however, that NRQCD predictions depend on a number of parameters (so-called NRQCD matrix elements), whose numerical values are determined phenomenologically from the analysis of available experimental data \\cite{Likhoded:2014kfa,Abe:1997yz,Abulencia:2007bra,Chatrchyan:2012ub,LHCb:2012ac,Aaij:2013dja}.\n\nIn addition to the processes mentioned above, there are also reactions that can be considered in an almost model-independent way. Among such processes one can name, for example, lepton pair production in $\\chi_{cJ}\\to{J\/\\psi}\\ell\\ell$ and $\\chi_{bJ}\\to\\Upsilon\\ell\\ell$ decays. In \\cite{Faessler:1999de} it was shown that the branching fractions of these decays and the distributions over the invariant mass of the $(\\ell\\ell)$ pair can be calculated under very general assumptions on the basis of the experimentally known branching fractions of the corresponding radiative decays $\\chi_{cJ}\\to{J\/\\psi}\\gamma$ and $\\chi_{bJ}\\to\\Upsilon\\gamma$ (see also \\cite{Eichten:1979ms,Brambilla:2004wf, Barnes:2005pb,Cao:2016xqo}).
Currently only $\\chi_{c1,2}\\to{J\/\\psi} ee$ process was studied experimentally \\cite{Ablikim:2017kia}. It could be interesting also to consider muon pair production in $\\chi_{cJ}\\to{J\/\\psi}\\mu\\mu$ and $\\chi_{bJ}\\to\\Upsilon\\mu\\mu$ decays. In the current paper we consider theoretically these reactions.\n\n\n\n\nIn the recent experimental paper \\cite{Ablikim:2017kia} BESIII Collaboration analysed electron-positron pair production in $\\chi_{c1,2}$-meson decays\n\\begin{align}\n \\label{eq:decay}\n \\chi_{cJ} &\\to{J\/\\psi} e^+ e^-.\n\\end{align}\n In this article branching fractions of the named reactions and distributions over invariant mass of the lepton pair were presented. It is interesting to note, that these results can be obtained from the branching fractions of the radiative decays $\\chi_{cJ}\\to{J\/\\psi}\\gamma$ using a very simple relation. In paper \\cite{Faessler:1999de} for example, it was shown, that $q^2=m_{ee}^2$ distribution of the (\\ref{eq:decay}) decay is equal to\n \\begin{align}\n \\label{eq:II}\n \\frac{d\\Br{J}{{\\ell\\ell}}}{dq^2} &= \n \\frac{\\alpha}{3\\pi q^2}\\frac{\\lambda(M_\\chi,M_\\psi,\\sqrt{q^2})}{\\lambda(M_\\chi,M_\\psi,0)} \n \\left(1+\\frac{2m_\\ell^2}{q^2}\\right)\\sqrt{1-\\frac{4m_\\ell^2}{q^2}}\n \\Br{J}{\\gamma},\n \\end{align}\nwhere $\\Br{J}{{\\ell\\ell}}$ and $\\Br{J}{\\gamma}$ are the branching fractions of $\\chi_{cJ}\\to{J\/\\psi}{\\ell\\ell}$ and $\\chi_{cJ}\\to{J\/\\psi}\\gamma$ decays respectively and\n\\begin{align}\n \\lambda(M,m_1,m_2) &= \\sqrt{1-\\left(\\frac{m_1+m_2}{M}\\right)^2} \\sqrt{1-\\left(\\frac{m_1-m_2}{M}\\right)^2}\n\\end{align}\nis the velocity of the final particle in $M\\to m_1m_2$ decay. It should be noted that this result is almost model independent since it is based on the gauge invariance of $\\chi_c\\to{J\/\\psi}\\gamma^*$ vertex. The only assumption that was made is that one can neglect the form factors' dependence on photon virtuality. This assumption seems pretty reasonable since according to energy conservation $q^2<(M_\\chi-M_\\psi)^2\\sim 0.025\\,\\mathrm{GeV}^2$, that is much smaller than the typical hard scale $\\sim M_\\psi^2\\sim 10\\,\\mathrm{GeV}^2$. It is clear from presented in Fig.~\\ref{fig:hQEE} figures, that obtained by BESIII Collaboration experimental data are in good agreement with theoretical predictions (\\ref{eq:II}). The same is true also for the integrated values of the branching fractions: theoretical predictions in comparison with BESIII results are\n\\begin{align}\n \\fBr{0}{ee} &= 8.1\\times 10^{-3},\\qquad \\left(\\fBr{0}{ee}\\right)_\\mathrm{exp} = (9.5\\pm1.9\\pm0.7)\\times 10^{-3},\\\\\n \\fBr{1}{ee} &= 8.6\\times 10^{-3},\\qquad \\left(\\fBr{1}{ee}\\right)_\\mathrm{exp} = (10.1\\pm0.3\\pm0.5)\\times 10^{-3},\\\\\n \\fBr{2}{ee} &= 8.7\\times 10^{-3},\\qquad \\left(\\fBr{2}{ee}\\right)_\\mathrm{exp} = (11.3\\pm0.4\\pm0.5)\\times 10^{-3}.\n\\end{align}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{hQ_EE.pdf}\n \\caption{$Q$ distribution in comparison with BESSIII data \\cite{Ablikim:2017kia, Zhang:PC}}\n \\label{fig:hQEE}\n\\end{figure}\n\n\n\nIt could be interesting to use described above formalism for muon pair production in $\\chi_{cJ}\\to{J\/\\psi}\\mu\\mu$ decays. It is clear that in this case we can also use (\\ref{eq:decay}) relation to describe $q^2$ distributions, the corresponding results are shown in figure \\ref{fig:hQ2MM}(a). 
It could be interesting to apply the formalism described above to muon pair production in $\chi_{cJ}\to{J/\psi}\mu\mu$ decays. It is clear that in this case relation (\ref{eq:II}) can again be used to describe the $q^2$ distributions; the corresponding results are shown in Fig.~\ref{fig:hQ2MM}(a). One can easily see that the form of the distributions depends strongly on the spin of the initial particle and differs significantly from the curves presented in Fig.~\ref{fig:hQEE}, yet all these distributions are given by the same universal relation (\ref{eq:II}); only the difference in the participating particles' masses matters. For the integrated branching fractions we obtain
\begin{align}
\fBr{0}{\mu\mu} &= 2.2\times 10^{-4},
\quad \fBr{1}{\mu\mu}= 5.1 \times 10^{-4},
\quad\fBr{2}{\mu\mu}= 6.4\times 10^{-4}.
\end{align}
Using the same approach we have also calculated the branching fractions of the $\chi_{bJ}\to\Upsilon\mu\mu$ decays:
\begin{align}
\fBrb{0}{\mu\mu} &= 4.7\times 10^{-4},
\quad \fBrb{1}{\mu\mu}= 5.7 \times 10^{-4},
\quad\fBrb{2}{\mu\mu}= 6.2\times 10^{-4}.
\end{align}
The distributions over the leptons' invariant mass are shown in Fig.~\ref{fig:hQ2MM}(b).

\begin{figure}
  \centering
  \includegraphics[width=\textwidth]{Q2_MM}
  \caption{Normalized $Q^2$ distribution for $\chi_{cJ}\to J/\psi\mu\mu$ (left panel) and $\chi_{bJ}\to\Upsilon\mu\mu$ (right panel) decays}
  \label{fig:hQ2MM}
\end{figure}

If one wants to obtain other types of distributions, more detailed information on the physics of the underlying process is required. In our work we use the following expressions for the $P$-wave quarkonia decay vertices \cite{Baranov:2011ib}:
\begin{align}
  \A{0} &= g_0 M_V \left( g_{\mu\nu} - \frac{p_\mu q_\nu}{(pq)}\right)\epsilon_V^\nu \epsilon^{(\gamma)}_\mu, \\
  \A{1} &= g_1 e_{\mu\nu\alpha\beta} q^\nu\epsilon_V^\alpha\epsilon_\chi^\beta \epsilon^{(\gamma)}_\mu,\\
  \A{2} &= \frac{g_2}{M_V} p^\mu \epsilon_V^\alpha \epsilon_{\chi}^{\alpha\beta}\left[q_\mu\epsilon^{(\gamma)}_\beta-q_\beta\epsilon^{(\gamma)}_\mu\right],
\end{align}
where $\epsilon^{(\gamma)}$, $\epsilon_V$, and $\epsilon_\chi$ are the polarizations of the final photon, the vector quarkonium, and the initial $\chi_Q$ meson, respectively, while the dimensionless coupling constants $g_{0,1,2}$ can be determined from the experimental values of the corresponding radiative decay widths. Using these expressions it is easy to obtain the distributions over the squared invariant mass of the $V\mu$ pair presented in Fig.~\ref{fig:hM2PsiM}.

Experimentally, the vector quarkonium is detected via its leptonic decay $V\to\mu^+\mu^-$. In order to obtain information on the polarization picture of the cascade
\begin{align}
  \label{eq:dec2}
  \chi_{QJ} &\to V\mu^+\mu^- \to (\mu^+\mu^-)\mu^+\mu^-,
\end{align}
it could also be interesting to study the distributions over the squared invariant mass of the same-sign final leptons. These distributions are shown in Fig.~\ref{fig:hm2MM}.

\begin{figure}
  \centering
  \includegraphics[width=\textwidth]{PsiK.pdf}
  \caption{Normalized $m^2_{V\mu}$ distribution for $\chi_c$- and $\chi_b$-meson decays (left and right panels, respectively)}
  \label{fig:hM2PsiM}
\end{figure}

\begin{figure}
  \centering
  \includegraphics[width=\textwidth]{K1KK1.pdf}
  \caption{Normalized $m_{\mu^+\mu^+}^2$ distribution for $\chi_c$- and $\chi_b$-meson decays (left and right panels, respectively)}
  \label{fig:hm2MM}
\end{figure}

The author would like to thank I.~Belyaev for useful discussions and J.~Zhang for providing the BESIII experimental data.
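As a final numerical note, the bottomonium ratios quoted above follow from the same sketch after substituting the $\Upsilon(1S)$ and $\chi_{bJ}(1P)$ masses; the values below are PDG-based, and \texttt{ratio} and \texttt{M\_MU} refer to the definitions in the earlier sketch.
\begin{verbatim}
M_UPS = 9.4603                              # Upsilon(1S) mass, GeV
M_CHIB = {0: 9.8594, 1: 9.8928, 2: 9.9122}  # chi_bJ(1P) masses, GeV
for J, m in M_CHIB.items():
    # compare with the chi_bJ -> Upsilon mu mu ratios quoted in the text
    print(f"J={J}: mumu = {ratio(m, M_UPS, M_MU):.2e}")
\end{verbatim}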