diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzatjf" "b/data_all_eng_slimpj/shuffled/split2/finalzzatjf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzatjf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\\input{sec\/intro_P1.tex}\n\\end{fmtext}\n\n\\maketitle\n\\clearpage\n\n\\input{sec\/intro_P2.tex}\n\n\n\\section{ Methods}\\label{sec:methods}\n\\subsection{Experimental Methods}\\label{subsec:experimentalmethods}\n\\input{sec\/method_exp.tex}\n\n\\subsection{Simulation Methods}\\label{subsec:simulationmethods}\n\\input{sec\/method_sim.tex}\n\n\n\\section{ Results}\\label{sec:results}\n\\subsection{Experimental Results}\\label{subsec:experimentalresults}\n\\input{sec\/result_exp.tex}\n\n\n\n\\subsection{Simulation Results }\\label{subsec:simulationresults}\n\\input{sec\/result_sim.tex}\n\n\\section{Discussion}\\label{sec:discussion}\n\\input{sec\/discussion.tex}\n\n\n\\section{Conclusion}\\label{sec:conclusion}\n\\input{sec\/conclusion.tex}\n\n\n\\section{Acknowledgement}\\label{sec:acknowledgement}\n\\input{sec\/acknowledgement.tex}\n\n\\section{Data availability}\\label{sec:Data_availability}\n\\input{sec\/data_availability.tex}\n\n\n\\input{sec\/nomenclature.tex}\n\n\\clearpage\\newpage\n\\bibliographystyle{apalike}\n{\n\\renewcommand{\\clearpage}{} \n\\onecolumn\n\\setlength{\\columnsep}{0.5cm}\n\\begin{multicols*}{3}\n\n\n\n\n\\subsection{VP quality and location in human gait}\\label{subsec:VPingait}\nIn the first part, we discuss the validity of a virtual \\emph{point} estimated from the GRF measurements of the human running.\nWe only consider step~0 of the \\SI{90}{\\percent} dataset, since the \\SI{100}{\\percent} dataset is biased by the additional effects of the impact forces and has low $R^2$ values \\cite{blickhan2015positioning}. In the second part, we discuss how the VP position is correlated to the gait type.\n\nTo determine the quality of the virtual point estimation, we used the coefficient of determination $R^2$. In our experiments, the $R^2$ values for level running are high, where $R^2\\!\\approx\\,$\\SI{92}{\\percent} (see V0 in \\creff{fig:VP_mean}{b}).\nThe values of the $R^2$ get significantly lower for the visible drop condition, where $R^2\\!\\approx\\,$\\SI{83}{\\percent} (see V10 in \\creff{fig:VP_mean}{b}).\nOn the other hand, the $R^2$ of the camouflaged drop conditions are even lower than for the visible drop conditions, where $R^2\\!\\approx\\,$\\SI{69}{\\percent} (see C10 in \\creff{fig:VP_mean}{b}). 
An $R^2$ value of $\\approx\\,$\\SI{70}{\\percent} is regarded as \"reasonably well\" in the literature \\cite[p.475]{herr2008angular}.\nBased on the high $R^2$ values, we conclude that the measured GRFs intersect near a \\emph{point} for the visible and camouflaged terrain conditions.\nWe can also confirm that this point is below the CoM (\\VPb), as the mean value of the estimated points is \\SI{\\minus 32.2}{\\centi\\meter} and is significantly below the CoM.\n\nWe find a difference in the estimated VP position between the human walking and our recorded data of human running.\nThe literature reports a VP above the CoM (\\VPa) for human walking gait \\cite{gruben2012force,maus2010upright,muller2017force,vielemeyer2019ground}, some of which report a \\VPa in human running as well \\cite{Maus_1982,blickhan2015positioning}.\nIn contrast, our experiments show a \\VPb for human level running at \\SI{5}{\\meter\\per\\second} and running over a visible or camouflaged step-down perturbation.\nOur experimental setup and methodology are identical to \\cite{vielemeyer2019ground}, which reports results from human walking. Thus, we can directly compare the $R^2$ values for both walking and running.\nThe $R^2$ value of the level running is 6~percentage points lower than the $R^2$ reported in \\cite{vielemeyer2019ground} for level walking.\nThe $R^2$ value for V10 running is 15~percentage points lower than V10 walking, whereas the $R^2$ for C10 running is up to 25~percentage points lower compared to C10 walking.\nIn sum, we report that the spread of the $R^2$ is generally higher in human running at \\SI{5}{\\meter\\per\\second}, compared to human walking.\n\n\n\\subsection{Experiments vs. model}\\label{subsec:modelVSexp}\nIn this section, we discuss how well the TSLIP simulation model predicts the CoM dynamics, trunk angle trajectories, GRFs and energetics of human running.\nA direct comparison between the human experiments and simulations is possible for the level running. The V0 condition of the human experiments corresponds to step~-1 of the simulations (also to the base gait).\nOverall, we observe a good match between experiments and simulations for the level running (see \\crefs{fig:ExpVsModel_States}{}{~to~}{fig:ExpVsModel_Epot}).\nOn the other hand, a direct comparison for the gaits with perturbed step is not feasible due to the reasons given in \\cref{subsec:Limitations} in detail. Here, we present perturbed gait data to show the extent of the similarities and differences between the V10 and C10 conditions of the experiments and step~0~and~1 of the simulations.\n\n\nConcerning the CoM dynamics, the predicted CoM height correlates closely with the actual CoM height in level running, both of which fluctuate between \\SI{1.05 \\minus 1.00}{\\meter} with \\SI{5}{\\centi\\meter} vertical displacement (\\crefsub{fig:ExpVsModel_States}{$a_{1}$}{$a_{2}$}).\nThe vertical displacement of the CoM is larger for the perturbed step, where the CoM height alternates between \\SI{1.0 \\minus 0.9}{\\meter} in the experiments (\\creff{fig:ExpVsModel_States}{$a_{3}$}) and \\SI{1.05 \\minus 0.85}{\\meter} in the simulations (\\creff{fig:ExpVsModel_States}{$a_{4}$}).\nThe differences can be attributed to the visibility of the drop. 
Human runners visually perceive changes in ground level and lower their CoM by\nabout 25\\% of the possible drop height for the camouflaged contact \\cite{ernst2019humans}.\nThe mean forward velocity at leg touch-down is \\SI{5.2}{\\meter\\per\\second} in the experiments (\\creff{fig:ExpVsModel_States}{$b_{1}$}).\nIn the simulations, the leg angle controller adjusts the forward speed at apex to a desired value. We set the desired speed to \\SI{5}{\\meter\\per\\second} (\\creff{fig:ExpVsModel_States}{$b_{2}$}), which is the mean forward velocity of the step estimated from the experiments.\nFor level running, both the experiments and simulations show a \\SI{0.2}{\\meter\\per\\second} decrease in forward velocity between the leg touch-down and mid-stance phases (\\crefsub{fig:ExpVsModel_States}{$b_{1}$}{$b_{2}$}).\nAs for perturbed running, the human experiments show a drop in forward speed of \\SI{4.5}{\\percent} for V10, and \\SI{0.1}{\\percent} for the C10 condition (see \\creff{fig:ExpVsModel_States}{$b_{3}$}{}).\nThat is, there is no significant change in forward velocity during the stance phase for the C10 condition.\nThe simulation shows a drop in forward speed of \\SI{9.5}{\\percent} for step~0, and \\SI{11.1}{\\percent} in step~1 (see \\creff{fig:ExpVsModel_States}{$b_{4}$}{}).\n\n\n\nThe trunk angle is the least well predicted state, since the S-shape of the simulated trunk angle is not recognizable in the human running data (see \\crefsub{fig:ExpVsModel_States}{$c_{1}$}{$c_{2}$}).\nOne of the reasons may be the simplification of the model. The flight phase of a TSLIP model is simplified as a ballistic motion, which leads to a constant angular velocity of the trunk. The human body, on the other hand, is composed of multiple segments, and intra-segment interactions lead to more complex trunk motion during the flight phase.\nIn addition, there is a large variance in the trunk angle trajectories between different subjects and trials, in particular for the C10 condition.\nConsequently, the mean trunk angle profiles do not provide much information about the trunk motion pattern, especially for the perturbed step in the C10 condition.\nTherefore, we cannot clarify to what extent the VP position is utilized for regulating the trunk motion in humans.\nHowever, a trend of the trunk moving forward is visible in both simulation and experiments.\nThe mean trunk angular excursion at step~0 of the experiments is \\SI{1.8}{\\degree} for V0, \\SI{5.5}{\\degree} for V10, and \\SI{1.9}{\\degree} for the C10 condition (\\crefsub{fig:ExpVsModel_States}{$c_{1}$}{$c_{3}$}).\nThe S-shaped pattern of the trunk motion becomes more discernible in the experiments with a visible perturbed step (\\creff{fig:ExpVsModel_States}{$c_{3}$}).\nIn the simulations, the trunk angular excursion is set to \\SI{4.5}{\\degree} for level running based on \\cite{Heitcamp_2012,Schache_1999,thorstensson1984trunk}. The magnitude of the trunk rotation at the perturbation step is higher in simulations, and amounts to \\SI{7.8}{\\degree} at step~0 and \\SI{8.6}{\\degree} at step~1 (\\crefsub{fig:ExpVsModel_States}{$c_{2}$}{$c_{4}$}).\n\n\\begin{figure}[t!]\n{\\captionof{figure}{The CoM height (a), horizontal CoM velocity (b) and trunk angle (c) for step~0 of the experiments V0, V10 and C10 are shown in the left column\\protect\\footnotemark, and the steps -1, 0 and 1 of the simulation in the right column. The TSLIP model is able to predict the CoM height and forward speed. 
Its prediction capability is reduced for the trunk motion, as the flight phase involves ballistic motion and the trunk angular velocity is constrained to be constant.\n}\\label{fig:ExpVsModel_States}}\n{\\begin{annotatedFigure}\n\t{\\includegraphics[width=1\\linewidth]{fig\/Figure_000_VPb_StepDown_Mix_05.pdf}}\n\t\\sublabel{$a_{1}$)}{0.16,0.945}{color_gray}{0.8} \\sublabel{$a_{2}$)}{0.63,0.945}{color_gray}{0.8}\n\t\\sublabel{$a_{3}$)}{0.16,0.805}{color_gray}{0.8} \\sublabel{$a_{4}$)}{0.63,0.805}{color_gray}{0.8}\n\t\\sublabel{$b_{1}$)}{0.16,0.635}{color_gray}{0.8} \\sublabel{$b_{2}$)}{0.63,0.635}{color_gray}{0.8}\n\t\\sublabel{$b_{3}$)}{0.16,0.495}{color_gray}{0.8} \\sublabel{$b_{4}$)}{0.63,0.495}{color_gray}{0.8}\n\t\\sublabel{$c_{1}$)}{0.16,0.32}{color_gray}{0.8} \\sublabel{$c_{2}$)}{0.63,0.32}{color_gray}{0.8}\n\t\\sublabel{$c_{3}$)}{0.16,0.185}{color_gray}{0.8} \\sublabel{$c_{4}$)}{0.63,0.185}{color_gray}{0.8}\n\\end{annotatedFigure}}\n\\end{figure}\n\\footnotetext{The mean is shown with a line and the standard error is indicated with the shaded region. The standard error equals the standard deviation divided by the square root of the number of subjects.}\n\n\n\n\nThere is good agreement between the simulation-predicted and the recorded GRFs for level running.\nThe peak horizontal and vertical GRFs amount to \\SI{0.5}{\\bw} and \\SI{3}{\\bw} respectively, in both experiments and simulations (see \\crefsss{fig:grf_step2}{,}{\\,}{fig:GRF}{a,\\,}{d,\\,}{and\\,}{fig:GRFexpsim}{}).\nAs for the step-down perturbation, the simulation model is able to predict the peak vertical GRF, but the prediction becomes less accurate for the peak horizontal GRF.\nThe peak vertical GRF of the \\SI{\\minus 10}{\\centi\\meter} step-down perturbation case is \\SI{3.5}{\\bw} for the V10 condition and \\SI{4}{\\bw} for the C10 condition, whereas it is \\SI{4}{\\bw} for the simulation.\nIn the C10 condition, the vertical GRF peaks at foot impact, so the peak is shifted earlier in time.\nIn the step-down condition, the numerical simulation yields over-simplified horizontal GRF profiles, whereas the human experiments show an impact peak.\nThe experiments have a peak horizontal GRF magnitude of \\SI{0.5}{\\bw}, which remains the same for all perturbation conditions. In contrast, the peak horizontal GRF increases up to \\SI{1}{\\bw} in simulations.\n\n\nIn level running the GRF impulses of the experiments and the simulation are a good match (see \\creft{tab:VP_statistical}, \\crefs{fig:GRFx}{b}{ and }{fig:GRFy}{b}). The normalized horizontal impulses for both braking and propulsion intervals are the same at $0.1$, while the normalized net vertical impulse in experiments is \\SI{15}{\\percent} higher than in simulation.\nFor the step-down conditions, the simulation predicts higher normalized net vertical impulse values of $1.46$ at step~0 and $1.36$ at step~1, as opposed to $1.31$ for the V10 condition and $1.18$ for the C10 condition in the experiments.\nThe change in the horizontal impulses during the step-down differs significantly between the simulation and experiments.\nThe V10 condition shows no significant change in the horizontal impulses, while in the C10 condition they decrease to $0.04$ for braking and $0.06$ for propulsion.\nIn contrast, the simulations show an increase in the horizontal impulses (\\creff{fig:GRFx}{b}). 
In particular, for a step-down perturbation of \\SI{\\minus 10}{\\centi\\meter}, the normalized braking impulse increases to $0.15$ at step~0 and $0.18$ at step~1, whereas for propulsion it increases to $0.15$ and $0.12$.\n\n\nThe different behavior of the horizontal impulses at the step-down in the experiments and simulations may be due to different leg angles at touch-down.\nWe expect that a steeper leg angle of attack at touch-down would decrease the horizontal and increase the vertical braking impulse.\nHowever, in the simulations of level running we observe an angle of attack of \\SI{66}{\\degree}, which is \\SI{9}{\\degree} steeper than the value reported for V0 in the same experiments \\cite{muller2012leg}.\nNevertheless, no corresponding changes in the braking impulses could be observed.\nOn the other hand, in the perturbed condition the angle of attack of \\SI{66}{\\degree} is nearly the same in the simulation and in C10, but here the braking impulses differ.\nTherefore, we conclude that additional factors must contribute to the different impulses between simulation and experiments, and further investigation is needed.\nThe simulation could potentially be improved by implementing a swing-leg retraction as observed in humans \\cite{Seyfarth_2003,Blum_2010,muller2012leg}.\n\n\n\nIn terms of the CoM energies, there is a good match between the kinetic energies of the experiments and simulation for the unperturbed step (V0 and step~{$\\minus 1$} in \\crefsub{fig:ExpVsModel_Ekin}{a}{b}).\nThe simulated energies of the perturbed step are closer to the experiments with visible perturbations (V10 and steps~0~and~1 in \\crefsub{fig:ExpVsModel_Ekin}{c}{d}).\nThe human experiments show a drop in kinetic energy of \\SI{9}{\\percent} for V10 and \\SI{3}{\\percent} for C10.\nThe simulation shows a drop in kinetic energy of about \\SI{25}{\\percent} for step~0 and step~1.\nThe C10 condition shows a higher mean kinetic energy compared to visible perturbations, and there is no obvious decrease of energy in the stance phase (\\creff{fig:ExpVsModel_Ekin}{c}).\n\n\n\n\\begin{figure}[!t]\n{\\captionof{figure}{Kinetic energy of the CoM for the human running experiments\\protect\\footnotemark[\\value{footnote}] (left) and simulated model (right). The TSLIP model is able to predict the kinetic energies for the unperturbed and visible perturbed step well. The simulation yields larger energy fluctuations during the stance phase compared to experiments. Experiments with camouflaged perturbation (C10) yield higher mean kinetic energy compared to the ones with visible perturbations (V10).\n}\\label{fig:ExpVsModel_Ekin}}\n{\\begin{annotatedFigure}\n\t{\\includegraphics[width=1\\linewidth]{fig\/Figure_000_VPb_StepDown_Mix_04_01.pdf}}\n\t\\sublabel{a)}{0.15,0.87}{color_gray}{1}\n\t\\sublabel{b)}{0.62,0.87}{color_gray}{1}\n\t\\sublabel{c)}{0.15,0.48}{color_gray}{1}\n\t\\sublabel{d)}{0.62,0.48}{color_gray}{1}\n\\end{annotatedFigure}}\n\\end{figure}\n\n\n\\begin{figure}[t!]\n{\\captionof{figure}{Potential energy of the CoM for the human running experiments\\protect\\footnotemark[\\value{footnote}] (left) and simulated model (right). 
Overall, the TSLIP model predicts the CoM height and its related potential energy well.\n}\\label{fig:ExpVsModel_Epot}}\n{\\begin{annotatedFigure}\n\t{\\includegraphics[width=1\\linewidth]{fig\/Figure_000_VPb_StepDown_Mix_04_02.pdf}}\n\t\\sublabel{a)}{0.15,0.87}{color_gray}{1}\n\t\\sublabel{b)}{0.62,0.87}{color_gray}{1}\n\t\\sublabel{c)}{0.15,0.48}{color_gray}{1}\n\t\\sublabel{d)}{0.62,0.48}{color_gray}{1}\n\\end{annotatedFigure}}\n\\end{figure}\n\n\n The potential energy estimate of the simulations lies at the upper boundary of the experimental range for the unperturbed step (V0 and step~-1 in \\crefsub{fig:ExpVsModel_Epot}{a}{b}).\nThe experiments with visible and camouflaged perturbations, as well as the TSLIP model, result in similar potential energy curves (\\crefsub{fig:ExpVsModel_Epot}{c}{d}).\n\n\n\\subsection{Limitations of this study}\\label{subsec:Limitations}\nThe human experiments and the numerical simulations differ in several points, and conclusions from a direct comparison must be evaluated carefully. In this section, we discuss the details of our choice of human experimental and numerical simulation conditions.\n\nFirst of all, there is a difference in terrain structure. After passing step~0, the human subjects face a different type of terrain than the TSLIP simulation model. The experimental setup is constructed as a pothole: a step-down followed by a step-up. However, an identical step-up in the numerical simulation would require an additional set of controllers to adjust the TSLIP model's leg angle and push-off energy. Hence, for the sake of simplicity, the TSLIP model continues running on the lower level and without a step-up. After the step-down perturbation, the simulated TSLIP requires several steps to recover. An equivalent human experiment would require a large number of force plates, which were not available here.\n\nIn the V10 condition, the subjects have visual feedback and hence prior knowledge of the upcoming perturbation. This additional information might affect the chosen control strategy.\nIn particular, since there is a step-up in the human experiments, subjects might account for this upcoming challenge prior to the actual perturbation.\n\n\nIn the C10 condition, some subjects might prioritize safety in the case of a sudden and unexpected drop, and employ additional reactive strategies \\cite{muller2015preparing}.\nIn contrast, the simulations with a VP controller cannot react to changes during the step-down and only consider the changes of the previous step when planning for the next.\n\n\nFurthermore, in the human experiments we cannot set a step-down deeper than \\SI{10}{\\centi\\meter} due to safety reasons, especially in the camouflaged setting.\nInstead, we can evaluate these situations in numerical simulations and test whether a hypothesized control mechanism can cope with higher perturbations.\nHowever, one has to keep in mind that the TSLIP model that we utilize in our analysis is simplified. Its single-body assumption considers neither intra-segment interactions, nor leg dynamics from impacts and leg swing. 
Finally, the locomotion controller applied here does not mimic any specific human neural locomotion control or sensory feedback strategy.\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Energy regulation}\\label{subsubsec:simEnergy}\nIn order to assess the response of the VP controller, we plot the VP position with respect to a hip-centered, non-rotating coordinate frame that is aligned with the global vertical axis, as can be seen in \\crefsub{fig:SimulationResults}{$c_{1}$}{$c_{4}$}.\nFor a \\VPbl target, a left shift in VP position indicates an increase in the negative hip work.\n\nThe step-down perturbation at step~0 increases the total energy of the system by the amount of potential energy introduced by the perturbation, which depends on the step-down height.\nThe position of the VP with respect to the hip shifts downward by \\SI{0.5 \\minus 1.9}{\\centi\\meter} depending on the drop height (see circle markers\\,{\\protect \\markerCircle[color_darkgray]} in \\crefsub{fig:SimulationResults}{$c_{1}$}{$c_{4}$}).\nConsequently, the net hip work remains positive and its magnitude increases by $0.7 \\text{ to } 1.7$ fold\\footnote{For quantities A and B, the fold change is given as $\\mathrm{(B \\minus A) \/ A}$.} (see solid lines\\,{\\protect\\markerLine[color_darkgray]} in \\crefs{fig:Energy_dt}{c}{~and~}{fig:Energy_dt_Dz}{c}).\nThe leg deflection increases by $0.95 \\text{ to } 3$ fold, which enters the leg spring energy quadratically as $E_{SP} \\myeq \\frac{1}{2} \\, k\\, \\Delta l_{L}^{2}$ (see solid lines\\,{\\protect\\markerLine[color_darkgray]} in \\crefs{fig:Energy_dt}{a}{~and~}{fig:Energy_dt_Dz}{a}).\nThe leg damper dissipates $1.5 \\text{ to } 6$ fold more energy compared to its equilibrium condition (see solid lines\\,{\\protect\\markerLine[color_darkgray]} in \\crefs{fig:Energy_dt}{b}{~and~}{fig:Energy_dt_Dz}{b}).\n\n\nThe reactive response of the VP starts at step~1, where the target VP is shifted to the left by \\SI{1.2 \\minus 2.8}{\\centi\\meter} and down by \\SI{0.6 \\minus 2.9}{\\centi\\meter} depending on the drop height (see cross markers\\,{\\protect \\raisebox{-0.6 pt}{\\markerCross[color_darkgray][1.3]}} in \\creff{fig:SimulationResults}{c}).\nThe left shift in VP causes a $1.4 \\text{ to } 3.8$ fold increase in the negative hip work, and the \\emph{net} hip work becomes negative (see dashed lines\\,{\\protect\\markerDashedLine[color_darkgray]} in \\crefs{fig:Energy_dt}{c}{~and~}{fig:Energy_dt_Dz}{c}). In other words, the hip actuator starts to remove energy from the system.\nAs a result, the trunk leans more forward during the stance phase (see yellow colored GRF vectors {\\protect\\markerRectangle[color_yellow]} in \\creff{fig:SimulationResults}{b}).\nThe leg deflects $0.7 \\text{ to } 2.3$ fold more than its equilibrium value, and the leg damper removes between $1 \\text{ and } 4.1$ fold more energy. However, the increases in leg deflection and damper energy in step~1 are lower in magnitude than the increases in step~0.\nIn step~1, we see the \\VPbl's capability to remove the energy introduced by the step-down perturbation.\n\nIn the steps following step~1, the target VP position continues to be adjusted according to the changes in the trunk angle at the apices, as expressed in \\crefe{eqn:thetaVP} and shown with {\\protect \\markerCross[color_gray][1]} markers in \\creff{fig:SimulationResults}{c}. A minimal numerical sketch of the fold-change and spring-energy bookkeeping used above is given below. 
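The sketch only restates the fold-change convention of the footnote and the relation $E_{SP} \\myeq \\frac{1}{2} \\, k\\, \\Delta l_{L}^{2}$. The stiffness value is taken from \\creft{tab:ModelPrm}{}, whereas the helper functions and the deflection values are purely illustrative and are not taken from the simulations.
\\begin{verbatim}
# Minimal sketch (Python): fold-change and leg-spring-energy bookkeeping.
# k = 18 kN/m as in the TSLIP parameter table; the deflections are hypothetical.

def fold_change(a, b):
    # fold change between a reference value a and a perturbed value b
    return (b - a) / a

def spring_energy(k, dl):
    # leg spring energy E_SP = 1/2 * k * dl^2, with k in N/m and dl in m
    return 0.5 * k * dl**2

k = 18e3                   # leg stiffness (N/m)
dl_eq, dl_0 = 0.10, 0.20   # hypothetical leg deflections (m): equilibrium, step 0
print(fold_change(dl_eq, dl_0))                                      # 1.0-fold larger deflection
print(fold_change(spring_energy(k, dl_eq), spring_energy(k, dl_0)))  # 3.0-fold more spring energy
\\end{verbatim}
As the example shows, a given fold increase in deflection translates into a larger fold increase in the stored spring energy, because the deflection enters quadratically.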
The VP position gradually returns to its initial value, and the gait ultimately converges to its initial equilibrium, see coinciding markers {\\protect \\raisebox{-0.3 pt}{\\markerDiamond[color_red][0]},\\,{\\protect \\markerSquare[color_blue][0]} in \\creff{fig:SimulationResults}{c}}. During this transition, the energy interplay between the hip and leg successfully removes the energy added to the system, as shown in \\crefsub{fig:Energy_dt}{$b$}{$c$} and in \\crefsub{fig:Energy_dt_Dz}{b}{c} for larger step-down perturbation magnitudes.\n\n\\subsubsection{GRF analysis}\\label{subsubsec:simEnergy}\n\\begin{figure}[tb!]\n{\\captionof{figure}{Numerical simulation results: The ground reaction forces (a,c) and the corresponding net impulses (b,d) for -\\SI{10}{\\centi\\meter} step-down perturbation. The GRFs are normalized to body weights (\\si{\\bw}), whereas the impulses are normalized to their $\\mathsmaller{\\mathrm{BW} \\mathsmaller{\\mathop{\\sqrt{\\sfrac{l}{g}}}}}$ values. The effect of the \\VPbl control can be seen in the horizontal GRF and impulse. \\VPbl alters the net horizontal impulse, and causes either net horizontal acceleration or deceleration after the step-down perturbation. Consequently, the excess energy introduced by the perturbation is removed from the system. The vertical GRF and impulse increase with the perturbation and decrease gradually to its equilibrium value approximately within 15 steps. Extended plots for the step-down height of $\\Delta z \\protect\\myeq$[\\SI{ \\protect\\minus 20, \\protect\\minus 30, \\protect\\minus 40}{\\centi\\meter}] can be found in \\cref{sec:app:simGRF}.\n}\\label{fig:GRF}}\n{\\begin{annotatedFigure}\n\t{\\includegraphics[width=1\\linewidth]{fig\/Figure_000_VPb_StepDown_Mix_03.pdf}}\n\t\\sublabel{a)}{0.15,0.92}{color_gray}{1}\n\t\\sublabel{c)}{0.15,0.53}{color_gray}{1}\n\t \\sublabel{b)}{0.62,0.92}{color_gray}{1}\n\t \\sublabel{d)}{0.62,0.53}{color_gray}{1}\n\\end{annotatedFigure}}\n\\end{figure}\n\nThe energy increment due to the step-down perturbation and the energy regulation of the \\VPbl control scheme can also be seen in the GRF and impulse profiles.\n\nThe peak vertical GRF magnitude of the equilibrium state is \\SI{3}{\\bw}. It increases to \\SI{4.2 \\minus 6.1}{\\bw} at step~0 with the step-down (\\crefs{fig:GRF}{c}{~and~}{fig:GRFy}{a}). The peak magnitude decreases gradually to its initial value in the following steps, indicating that the VP is able to bring the system back to its equilibrium.\nIn a similar manner, the normalized vertical impulse increases from $1$ to $1.4 \\minus 2.2$ at step~0 ({\\protect \\markerCircle[color_darkgray]}) and decreases to $1$ in approximately 15 steps.\n\nThe peak horizontal GRF magnitude of the equilibrium state amounts to \\SI{0.6}{\\bw}. It increases to \\SI{0.9 \\minus 1.4}{\\bw} at step~0 (\\crefs{fig:GRF}{a}{~and~}{fig:GRFx}{a}). The sine shape of the horizontal GRF and its peak magnitude depend on the change in VP position. Therefore, the horizontal GRF impulse provides more information.\nThe net horizontal GRF impulse is zero at the equilibrium state (see {\\protect \\markerSquare[color_blue][0]} in \\crefs{fig:GRF}{b}{~and~}{fig:GRFx}{b}). It becomes positive at the step-down perturbation ({\\protect \\markerCircle[color_darkgray]}), leading to a net horizontal acceleration of the CoM. 
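As an aside, the impulse bookkeeping used in this analysis can be made concrete with a minimal sketch that integrates a GRF time series over one stance phase and applies the $\\mathrm{BW}\\sqrt{\\sfrac{l}{g}}$ normalization of \\creff{fig:GRF}{}. The body parameters follow \\creft{tab:ModelPrm}{}, while the function name and the half-sine force profiles are hypothetical and only illustrate the sign conventions.
\\begin{verbatim}
# Minimal sketch (Python): braking, propulsive and net vertical impulses from a
# GRF time series over one stance phase. m and l follow the TSLIP parameter table;
# the half-sine force profiles below are hypothetical.
import numpy as np

def normalized_impulses(t, fx, fy, m=80.0, l=1.0, g=9.81):
    bw = m * g                       # body weight (N)
    scale = bw * np.sqrt(l / g)      # impulse normalization BW*sqrt(l/g)
    braking = np.trapz(np.minimum(fx, 0.0), t) / scale
    propulsion = np.trapz(np.maximum(fx, 0.0), t) / scale
    vertical = np.trapz(fy, t) / scale
    return braking, propulsion, vertical

t = np.linspace(0.0, 0.2, 400)                           # stance duration (s), hypothetical
fy = 3.0 * 80.0 * 9.81 * np.sin(np.pi * t / 0.2)         # vertical GRF with ~3 BW peak
fx = -0.5 * 80.0 * 9.81 * np.sin(2 * np.pi * t / 0.2)    # braking first, then propulsion
print(normalized_impulses(t, fx, fy))
\\end{verbatim}
A positive net horizontal impulse then corresponds to a net forward acceleration of the CoM over the step, and a negative one to a net deceleration, which is the sign convention used in the discussion above.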
In step~1, the \\VPbl is adjusted with respect to the change in the state and causes the impulse to decelerate the body ({\\protect \\raisebox{-0.6 pt}{\\markerCross[color_darkgray][1.5]}}).\nIn the following steps, the VP adjustment yields successive net accelerations and decelerations ({\\protect \\markerCross[color_gray][1]}) until the system returns to its equilibrium state ({\\protect \\raisebox{-0.3 pt}{\\markerDiamond[color_red][0]}}).\n\n\n\n\\subsection{Simulation: TSLIP model parameters}\\label{sec:app:simTSLIPparameters}\n\nThe TSLIP model parameters are presented in \\creft{tab:ModelPrm}{} (see \\cite{drama2019human} for the parameters for the human model and \\cite{drama2019bird} for the avian model).\n\\begin{table}[h!]\n\\centering\n\\captionsetup{justification=centering}\n\\caption{Model parameters for TSLIP model}\n\\label{tab:ModelPrm}\n\\begin{adjustbox}{width=1\\linewidth}\n\\begin{tabular}{@{} l| c c c c c @{}}\n\\multicolumn{1}{l}{Name} & \\multicolumn{1}{c}{Symbol} & \\multicolumn{1}{c}{Units} & \\multicolumn{1}{c}{Literature} & \\multicolumn{1}{c}{Chosen} & \\multicolumn{1}{c}{Reference} \\\\\n\\hline\n\\hspace{1mm} mass & $\\mathit{m}$ & \\si{\\kilogram} & 60-80 & 80 & {\\cite{Sharbafi_2013}} \\\\\n\\hspace{1mm} moment of inertia & $\\mathit{J}$ & \\si{\\kilogram\\meter\\squared} & 5 & 5 & {\\cite{ Sharbafi_2013, deLeva_1996}} \\\\\n\\hspace{1mm} leg stiffness & $\\mathit{k}$ & \\si{\\kilo\\newton\\per\\meter} &16-26 & 18 & {\\cite{Sharbafi_2013, McMahon_1990}} \\\\\n\\hspace{1mm} leg length & $\\mathit{l}$ & \\si{\\meter} & 1 & 1 & {\\cite{Sharbafi_2013}} \\\\\n\\hspace{1mm} leg angle at TD & $ \\mathit{\\theta_{L}^{TD}}$ & (\\si{\\degree}) & 78-71 & $\\mathit{f_{H}(\\dot{x})}$ & {\\cite{Sharbafi_2013, McMahon_1990}} \\\\\n\\hspace{1mm} dist. Hip-CoM & $\\mathit{r_{HC}}$ & \\si{\\meter} & 0.1 &0.1 & {\\cite{Sharbafi_2013,Wojtusch_2015}}\n\\end{tabular}\n\\end{adjustbox}\n\\vspace{-4mm}\n\\end{table}\n\n\n\\subsection{Simulation: Flowchart for leg angle and VP angle control}\\label{sec:app:simTSLIPparameters}\n\nThe linear controller for the leg angle $\\theta_{L}$ and VP angle $\\theta_{VP}$ is presented in \\creff{fig:Flowchart}{}. The leg angle control coefficients ($k_{\\dot{x}} \\, k_{\\dot{x}_{0}}$) in \\crefe{eqn:thetaL} are decreased from ($0.25, \\,0.5 k_{\\dot{x}}$) to ($0.2, \\, 0.3 k_{\\dot{x}}$), as the step-down height is increased from \\SI{\\minus 10}{\\centi\\meter} to \\SI{\\minus 40}{\\centi\\meter}. The reduction of the coefficients slows down the adjustment of the forward speed, and enables us to prioritize the postural correction in the presence of larger perturbations.\n\n\\begin{figure}[tbh!]\n{\\captionof{figure}{ The linear feedback control scheme for the leg angle in \\crefe{eqn:thetaL} and the VP angle in \\crefe{eqn:thetaVP} are presented. 
Both controllers update step-to-step at the apex event where the CoM height reaches to its maximum.\n}\\label{fig:Flowchart}}\n{\\includegraphics[width=1\\linewidth]{fig\/Flowchart.pdf}}\n\\end{figure}\n\n\\vfill\\break\n\n\\subsection{Simulation: Energy regulation at the leg and hip}\\label{sec:app:simLegHipEnergies}\n\nHere, we present the energy levels of the leg spring, leg damper and the hip actuator for the entire set of step-down perturbations ($\\Delta z \\myeq$[\\SI{ \\minus 10, \\minus 20, \\minus 30, \\minus 40}{\\centi\\meter}]).\n\n\n\\begin{figure}[tbh!]\n{\\captionof{figure}{ The energy curves for the leg spring ($a_{0} \\protect \\minus a_{4}$), leg damper ($b_{0} \\protect \\minus b_{4}$) and hip actuator ($c_{0} \\protect \\minus c_{4}$){\\protect\\footnotemark}. The sub-index \"0\" indicates the trajectory belongs to the equilibrium state. With the increase of the system's energy at step-down ({\\protect\\markerLine[color_darkgray]}), the leg deflects more, the leg damper dissipates more energy and the hip actuator injects more energy than its equilibrium condition. During the reaction step ({\\protect\\markerDashedLine[color_darkgray]}), the hip actuator reacts to energy change and starts to remove energy from the system. In the following steps ({\\protect\\markerLine[color_gray]}) the hip regulates the energy until the system reaches to the initial equilibrium state ({\\protect\\markerLine[color_blue]}).\n}\\label{fig:Energy_dt_Dz}}\n{\\begin{annotatedFigure}\n\t{\\includegraphics[width=1\\linewidth]{fig\/Figure_003_Energy_dt.pdf}}\n\t\\sublabel{$a_{0}$)}{0.15,0.945}{color_gray}{0.7} \\sublabel{$b_{0}$)}{0.47,0.945}{color_gray}{0.7} \\sublabel{$c_{0}$)}{0.79,0.945}{color_gray}{0.7}\n\t\\sublabel{$a_{1}$)}{0.15,0.765}{color_gray}{0.7} \\sublabel{$b_{1}$)}{0.47,0.765}{color_gray}{0.7} \t\\sublabel{$c_{1}$)}{0.79,0.765}{color_gray}{0.7}\n\t\\sublabel{$a_{2}$)}{0.15,0.585}{color_gray}{0.7} \\sublabel{$b_{2}$)}{0.47,0.585}{color_gray}{0.7}\t\\sublabel{$c_{2}$)}{0.79,0.585}{color_gray}{0.7}\n\t\\sublabel{$a_{3}$)}{0.15,0.405}{color_gray}{0.7} \\sublabel{$b_{3}$)}{0.47,0.405}{color_gray}{0.7}\t\t\\sublabel{$c_{3}$)}{0.79,0.405}{color_gray}{0.7}\n\t\\sublabel{$a_{4}$)}{0.15,0.225}{color_gray}{0.7} \\sublabel{$b_{4}$)}{0.47,0.225}{color_gray}{0.7}\t\\sublabel{$c_{4}$)}{0.79,0.225}{color_gray}{0.7}\n\\end{annotatedFigure}}\n\\end{figure}\n\\footnotetext{In subplot $c_{4}$, the maximum value of steps 7-11 is indicated with a text and arrow due to the scaling issues.}\n\n\\clearpage\\newpage\n\n\\subsection{Simulation: Ground reaction forces and impulses}\\label{sec:app:simGRF}\n\nWe provide the vertical and horizontal ground reaction forces for the entire set of step-down perturbations ($\\Delta z \\myeq$[\\SI{ \\minus 10, \\minus 20, \\minus 30, \\minus 40}{\\centi\\meter}]).\n\n\n\\begin{figure}[hb!]\n{\\begin{annotatedFigure}\n{\\includegraphics[width=1\\linewidth]{fig\/Figure_001_VPb_StepDown_GRFx.pdf}}\n\\sublabel{$a_{0}$)}{0.16,0.935}{color_gray}{0.7} \\sublabel{$b_{0}$)}{0.64,0.935}{color_gray}{0.7}\n\\sublabel{$a_{1}$)}{0.16,0.76}{color_gray}{0.7} \\sublabel{$b_{1}$)}{0.64,0.76}{color_gray}{0.7}\n\\sublabel{$a_{2}$)}{0.16,0.585}{color_gray}{0.7} \\sublabel{$b_{2}$)}{0.64,0.585}{color_gray}{0.7}\n\\sublabel{$a_{3}$)}{0.16,0.41}{color_gray}{0.7} \\sublabel{$b_{3}$)}{0.64,0.41}{color_gray}{0.7}\n\\sublabel{$a_{4}$)}{0.16,0.23}{color_gray}{0.7} \\sublabel{$b_{4}$)}{0.64,0.23}{color_gray}{0.7}\n\\end{annotatedFigure}}\n\\captionof{figure}{The horizontal ground reaction forces 
over normalized step time are shown ($a_{0} \\protect \\minus a_{4}$). The peak horizontal GRF increases with the step-down perturbation. The area under this curve is the horizontal impulse, which corresponds to the acceleration and deceleration of the main body ($b_{0} \\protect \\minus b_{4}$). The step-down perturbation at step~0 increases the energy of the system. The increase in energy influences the net horizontal impulse, as the impulse attains a positive value ({\\protect\\markerCircle[color_darkgray]}) and causes the body to accelerate forward. In response, the VP position changes to create net negative impulse in the following step (i.e., step~1, {\\protect \\raisebox{-0.6 pt}{\\markerCross[color_darkgray][1.5]}}) and decelerates the body. The VP position is adjusted until all the excess energy is removed from the system\\,({\\protect \\markerCross[color_gray][1]}) and the gait reaches an equilibrium state\\,({\\protect \\raisebox{-0.3 pt}{\\markerDiamond[color_red][0]}}).\n}\\label{fig:GRFx}\n\\end{figure}\n\n\n\\begin{figure}[t!]\n{\\begin{annotatedFigure}\n{\\includegraphics[width=1\\linewidth]{fig\/Figure_001_VPb_StepDown_GRFy.pdf}}\n\\sublabel{$a_{0}$)}{0.16,0.935}{color_gray}{0.7} \\sublabel{$b_{0}$)}{0.64,0.935}{color_gray}{0.7}\n\\sublabel{$a_{1}$)}{0.16,0.76}{color_gray}{0.7} \\sublabel{$b_{1}$)}{0.64,0.76}{color_gray}{0.7}\n\\sublabel{$a_{2}$)}{0.16,0.585}{color_gray}{0.7} \\sublabel{$b_{2}$)}{0.64,0.585}{color_gray}{0.7}\n\\sublabel{$a_{3}$)}{0.16,0.41}{color_gray}{0.7} \\sublabel{$b_{3}$)}{0.64,0.41}{color_gray}{0.7}\n\\sublabel{$a_{4}$)}{0.16,0.23}{color_gray}{0.7} \\sublabel{$b_{4}$)}{0.64,0.23}{color_gray}{0.7}\n\\end{annotatedFigure}}\n\\captionof{figure}{The vertical ground reaction forces over normalized step time are shown ($a_{0} \\protect \\minus a_{4}$). The peak vertical GRF increases with the step-down perturbation. During the following steps, the impulse decreases to its initial value through the regulation of the VP position. The increase in the peak GRF after step-down is proportional to the step-down height. Between steps 4-5, the peak vertical GRF increases 1.4 fold for a \\SI{\\protect\\minus 10}{cm} drop and 2 fold for a \\SI{\\protect\\minus 40}{cm} drop. Accordingly, the vertical impulse increases with the step-down perturbation and returns to its initial value ($b_{0} \\protect \\minus b_{4}$). As the step-down height increases from -10 to \\SI{\\protect\\minus 40}{cm}, the vertical impulse increases 1.53 fold from its initial value for step~0\\,({\\protect \\markerCircle[color_darkgray]}) and 1.48 fold for step~1\\,({\\protect \\raisebox{-0.6 pt}{\\markerCross[color_darkgray][1.5]}}).\n}\\label{fig:GRFy}\n\\end{figure}\n\n\\clearpage \\newpage\n\n\\subsection{STD of the Experiments}\\label{sec:app:expSTD}\nIn \\cref{sec:discussion}, we provided the standard error (SE) of the measurements from the human running experiments (see the patched areas in \\cref{fig:ExpVsModel_States,fig:ExpVsModel_Ekin,fig:ExpVsModel_Epot}).\nThe standard error is calculated by dividing the standard deviation by the square root of the number of subjects. The SE indicates how accurate the mean estimate of the measurements is.\n\nOn the other hand, the standard deviation (STD) shows how spread out the individual measurements are. The STD is an important measure, especially for the trunk angle measurements, where the trajectories of the individual subjects vary significantly. A minimal sketch of how these SE and STD bands are obtained from the per-subject trajectories is given below. 
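The sketch only restates this bookkeeping; the array shape and the use of the sample standard deviation are assumptions for illustration and are not taken from the original processing pipeline.
\\begin{verbatim}
# Minimal sketch (Python): mean, STD and SE bands from per-subject trajectories.
# traj is assumed to be an (n_subjects x n_samples) array of time-normalized curves.
import numpy as np

def mean_std_se(traj):
    mean = traj.mean(axis=0)
    std = traj.std(axis=0, ddof=1)       # spread of the individual subjects
    se = std / np.sqrt(traj.shape[0])    # standard error of the mean
    return mean, std, se

traj = np.random.default_rng(0).normal(size=(10, 101))   # hypothetical data
mean, std, se = mean_std_se(traj)
\\end{verbatim}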
Therefore, we provide the STD values here for the CoM state in \\cref{fig:CoMstateSTD} and the CoM energy in \\cref{fig:CoMenergySTD}.\n\n\n\n\\begin{figure}[tbh!]\n \\begin{floatrow}\n \\ffigbox[\\Xhsize]\n{\\begin{annotatedFigure}\n{\\includegraphics[width=0.97\\linewidth]{fig\/Figure_000_VPb_StepDown_Mix_09_01_STD.pdf}}\n\\sublabel{$a_{0}$)}{0.15,0.94}{color_gray}{0.7}\n\\sublabel{$a_{1}$)}{0.64,0.94}{color_gray}{0.7}\n\\sublabel{$b_{0}$)}{0.15,0.645}{color_gray}{0.7}\n\\sublabel{$b_{1}$)}{0.64,0.645}{color_gray}{0.7}\n\\sublabel{$c_{0}$)}{0.15,0.35}{color_gray}{0.7}\n\\sublabel{$c_{1}$)}{0.64,0.35}{color_gray}{0.7}\n\\end{annotatedFigure}}\n{\\caption{This figure is an extension of \\cref{fig:ExpVsModel_States}, with the difference that the shaded patches show the standard deviation instead of the standard error.\n}\\label{fig:CoMstateSTD}}\n \\end{floatrow}\n \\vspace{\\floatsep}%\n \\begin{floatrow}\n \\ffigbox[\\Xhsize]\n{\\begin{annotatedFigure}\n{\\includegraphics[width=0.97\\linewidth]{fig\/Figure_000_VPb_StepDown_Mix_09_02_STD.pdf}}\n\\sublabel{$a_{0}$)}{0.155,0.87}{color_gray}{0.7}\n\\sublabel{$b_{0}$)}{0.63,0.87}{color_gray}{0.7}\n\\sublabel{$a_{1}$)}{0.155,0.49}{color_gray}{0.7}\n\\sublabel{$b_{1}$)}{0.63,0.49}{color_gray}{0.7}\n\\end{annotatedFigure}}\n{\\caption{This figure is an extension of \\cref{fig:ExpVsModel_Ekin,fig:ExpVsModel_Epot}, with the difference that the shaded patches show the standard deviation instead of the standard error.\n}\\label{fig:CoMenergySTD}}\n\\end{floatrow}\n \\vspace{-2\\floatsep}%\n\\end{figure}\n\n\n\\subsection{GRFs: Simulation vs. Experiment}\\label{sec:app:simexpGRF}\n\nWe present the vertical (a) and horizontal (b) GRFs belonging to step~0 of the human running experiments (V0, V10, C10) and steps~-1, 0 and 1 of the simulations with a \\SI{\\minus 10}{\\centi\\meter} step-down height, plotted on top of each other in \\creff{fig:GRFexpsim}{}.\n\n\\begin{figure}[tbh!]\n{\\begin{annotatedFigure}\n{\\includegraphics[width=1\\linewidth]{fig\/Figure_000_VPb_StepDown_Mix_07.pdf}}\n\\sublabel{$a_{0}$)}{0.13,0.91}{color_gray}{0.7}\n\\sublabel{$b_{0}$)}{0.6,0.91}{color_gray}{0.7}\n\\sublabel{$a_{1}$)}{0.13,0.5}{color_gray}{0.7}\n\\sublabel{$b_{1}$)}{0.6,0.5}{color_gray}{0.7}\n\\end{annotatedFigure}}\n\\captionof{figure}{The vertical (a) and horizontal (b) ground reaction forces are plotted over normalized step time. The means of the experimental results are shown. The TSLIP model simulation is able to capture the characteristics of the GRF in level running ($a_{0}$,$b_{0}$). For the step-down perturbation, the model predicts higher values for the peak vertical ($a_{1}$) and horizontal ($b_{1}$) GRF, compared to the mean values of the experiments.\n}\\label{fig:GRFexpsim}\n\\end{figure}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Graphene-based materials, such as twisted bilayer graphene (TBG), and transition metal dichalcogenides constitute two families of great interest.\nIn particular, layered graphene structures have been shown to host electronic correlations, superconductivity and nontrivial topological phases when arranged at a small rotation angle, the so-called magic angle, which is close to $1\\degree$ for TBG \\cite{suarez_morell_flat_2010-1,cao_correlated_2018-1,cao_unconventional_2018-2,song_all_2019,serlin_intrinsic_2020}.\n \nThe quest for novel materials which could enlighten these new phenomena has even extended to one-dimensional systems, where many-body effects have been widely described. Flat bands can be found in some one-dimensional materials\n \\cite{huda_designer_2020,kennes_one-dimensional_2020,arroyo-gascon_one-dimensional_2020,zhao_interlayer_2022}; for instance, carbon nanotubes (CNTs) can show moir\u00e9 patterns in their double-walled and multi-walled form \\cite{bonnet_charge_2016-1,zhao_observation_2020-1,koshino_incommensurate_2015-1}, and single-walled tubes also display moir\u00e9s when collapsed \\cite{arroyo-gascon_one-dimensional_2020}. For the latter, flat bands with an even smaller energy span than TBG and sharp densities of states ensue along with these one-dimensional patterns. As in TBG, these features depend on\n the rotational angle, which is related to a corresponding chiral angle in collapsed CNTs.\n\nThe electronic structure of carbon nanotubes is varied, \ncomprising \nmetallic and semiconducting tubes.\nIn metallic \nCNTs, which host Dirac fermions as in graphene, the Dirac point can be located at the center of the Brillouin zone (the $\\Gamma$ point) or at 2\/3 of the $\\Gamma$-X line \\cite{dresselhaus_group_2008-1}. \n We denote them as $\\Gamma$-metals and 2\/3-metals, respectively \\cite{arroyo-gascon_one-dimensional_2020}. Chiral metallic CNTs can belong to these two \ngroups, although a small gap develops due to curvature effects. \nIn a previous article, we found flat bands and highly localized states \nin the AA regions (zones of AA stacking) of the moir\u00e9 patterns formed in collapsed 2\/3-metal nanotubes with small chiral angles \\cite{arroyo-gascon_one-dimensional_2020}. In that work we chose to explore 2\/3-metallic CNTs because of their similarity to graphene: they present two bands with linear dispersion crossing at 2\/3 of the positive part of their Brillouin zone, being the one-dimensional (1D) analogues of the Dirac cones in graphene. \nHowever,\nthe behavior of collapsed chiral $\\Gamma$-metals and semiconducting tubes is yet unknown;\nfurther analysis is needed to \nelucidate whether \n magic angle \nphysics also pertains \n to the rest of families of chiral tubes. \n\nIn order to obtain CNTs that are stable upon collapse \nwith sizable moir\u00e9s at small chiral angles, tubes with diameters above \\SI{40}{\\angstrom}, \noften involving a high number of atoms per unit cell, are needed \\cite{chopra_fully_1995,benedict_microscopic_1998,he_precise_2014,zhang_closed-edged_2012,tang_collapse_2005,gao_energetics_1998,liu_molecular_2004,elliott_collapse_2004}. 
The search for $\\Gamma$-metal and semiconducting tubes which fulfill these conditions leads to nanotubes showing several \nAA regions per unit cell, which is intrinsically related to the symmetry operations and the number of localized states in the structure.\n\nIn this work, we show that moir\u00e9 physics occurs for all collapsed chiral carbon nanotubes close to the magic angle, regardless of their metallic or semiconducting behavior. \nWe assess several criteria usually \nemployed \nto describe magic angle physics: the appearance of flat bands and the reduction of the Fermi velocity, sharp peaks in the density of states, and real-space electronic localization via local density of states \\cite{arroyo-gascon_one-dimensional_2020,cao_correlated_2018-1,andrei_marvels_2021,trambly_de_laissardiere_localization_2010}. We study the interplay of these criteria and discuss the most suitable one for our system. Analyzing these benchmarks, we perform an exhaustive description of the electronic structure of a range of collapsed chiral nanotubes belonging to each family: semiconducting, 2\/3-metals and $\\Gamma$-metals. \n\n\nWe find a magic angle very close to that of TBG for the three families of CNTs, namely, $1.12\\degree$ for 2\/3-metals, and $1.11\\degree$ for semiconducting and $\\Gamma$-metals. \nMoreover, a homogeneous behavior extends to all tubes when their \nmoir\\' e angle is small enough, regardless of their specific family or symmetries. Our findings imply that the experimental observation of one-dimensional moir\u00e9 physics in CNTs might be easier than previously expected. \n\n\n\\section{Geometry and symmetry} \\label{sec:geometry}\n\nLet us briefly recall the standard notation in nanotube physics. CNTs are identified by its circumference vector $\\bm{C_h}$ on an unrolled graphene sheet. \nThe coordinates of $\\bm{C_h}$ in the graphene basis, $(n,m)$, are customarily used to label CNTs. Considering a one-orbital model and leaving out curvature effects, if $n-m$ is a multiple of 3, the tube is metallic \\cite{saito_electronic_1992-2,hamada_new_1992-1,saito_electronic_1992-3}.\nAnother important magnitude is the chiral angle of the CNT, $\\theta_{NT}$, which is spanned between the $(n,0)$ direction and $\\bm{C_h}$. \nUpon collapsing a chiral carbon nanotube, a moir\u00e9 pattern can emerge, which is directly related to the CNT chiral angle $\\theta_{NT}$. \nThus, a moir\u00e9 angle $\\theta_M$ can be defined as the relative rotation angle between the two flattened parts of the CNT, just as in TBG. An analysis of the geometry of the unrolled unit cell yields $\\theta_M=2\\theta_{NT}$ \\cite{arroyo-gascon_one-dimensional_2020}, so that in the following tubes will be labeled by the angle $\\theta_M$. \n\nAddressing a wider range of nanotube types implies that new tube symmetries may be involved. Since the geometry of each tube is related to its symmetry operations, the existence and arrangement of moir\u00e9 patterns once the structure is collapsed can be predicted. Line groups describe the symmetry operations of one-dimensional systems, such as CNTs. Specifically, chiral cylindrical nanotubes belong to the fifth line group\nand comprise screw-axis and isogonal point group $D_p$ symmetries, where $p$ is the greatest common divisor of the nanotube indices $n$ and $m$ \\cite{damnjanovic_line_2010}. 
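All of the quantities just introduced follow directly from the indices $(n,m)$. A minimal sketch is given below, assuming the graphene lattice constant $a \\approx \\SI{2.46}{\\angstrom}$; the helper name and the printed examples are purely illustrative.
\\begin{verbatim}
# Minimal sketch (Python): chirality-derived quantities for a (n, m) nanotube.
# a = 2.46 Angstrom (graphene lattice constant) is an assumed input.
import math

def cnt_geometry(n, m, a=2.46):
    theta_nt = math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))  # chiral angle (deg)
    theta_m = 2.0 * theta_nt                    # moire angle of the collapsed tube (deg)
    diameter = a * math.sqrt(n * n + n * m + m * m) / math.pi           # cylinder diameter (A)
    p = math.gcd(n, m)                          # isogonal point group D_p
    metallic = (n - m) % 3 == 0                 # one-orbital criterion, no curvature effects
    return theta_nt, theta_m, diameter, p, metallic

print(cnt_geometry(36, 2))     # about (2.68, 5.36, 29.0, 2, False), cf. the (36,2) tube
print(cnt_geometry(267, 3))    # Gamma-metal candidate close to the magic angle
\\end{verbatim}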
Dihedral groups include rotations and often reflections; in particular, a rotational $C_{2 q}$ symmetry operation, where $q$ is an integer, is convenient in order to obtain two equivalent graphene regions on the unrolled unit cell, which will constitute the two coupled ``layers'' of the collapsed nanotube and yield a full AA moir\u00e9 area. In fact, this is the simplest symmetry that yields full AA moir\u00e9 patterns once the tube is collapsed. This rotational symmetry can be achieved by choosing nanotubes with indices $(2m,2)$. We have followed this method for 2\/3-metallic and semiconducting nanotubes; despite having the same greatest common divisor $p$, 2\/3-metals and semiconducting tubes might not have the same symmetry operations and may exhibit different moir\u00e9 patterns. In fact, as shown in Figure~\\ref{fig1}(a), a $C_2$ symmetry can render not only one centered AA moir\u00e9 region per unit cell, as in 2\/3-metals, but three.\n \n\\begin{figure}[h!]\n\\includegraphics[trim={1.5cm 0 0 0},clip,width=1.05\\columnwidth]{fig1.pdf} \n\\caption{Top view of the unit cells of the (a) collapsed semiconducting (48,2) and (b) $\\Gamma$-metal (78,3) nanotubes. The number and placement of the AA regions differ and depend on the symmetry operations of the original cylindrical tube.}\n\\label{fig1}\n\\end{figure}\n\nAs for $\\Gamma$-metallic CNTs, they usually have more atoms per unit cell than semiconducting chiral CNTs with similar chiral angles. Suitable nanotubes should be at least \\SI{40}{\\angstrom} wide so that they are stable once collapsed. The smallest tubes that fulfill the aforementioned diameter condition and have a moir\u00e9 angle close to the magic angle in TBG have indices $(3m,3)$, thus showing $C_3$ symmetry. This results in a zigzag arrangement of the AA moir\u00e9 regions with respect to the tube axis (see Figure \\ref{fig1}(b)), in contrast to the linear disposition achieved for 2\/3-metallic and semiconducting tubes. The minimum number of AA regions per unit cell for these tubes is now six (taking into account half AA moir\u00e9 spots). Therefore, we show that $C_{2q}$-symmetric tubes are not the only ones suitable for moir\u00e9 engineering, and that sizable AA regions can appear in all families of CNTs.\n\nThe appearance of AA-stacked moir\u00e9 patterns in the center of collapsed chiral tubes, such as in TBG, motivates the search for fragile topological phases in CNTs, which have been predicted and found in TBG \\cite{song_all_2019,serlin_intrinsic_2020} and in other systems displaying flat bands \\cite{skurativska_flat_2021}. For instance, edge states in cylindrical CNTs can be classified according to a topological invariant \\cite{okuyama_topological_2019}. However, a group theory analysis of the band representations following \\cite{bradlyn_topological_2017} does not render fragile topology \\cite{milosevic_elementary_2020}. Regarding collapsed CNTs, the molecular dynamics process can break the rotational symmetries of the tubes, so that symmetry-protected fragile topological states, such as $C_2T$ in TBG \\cite{song_all_2019}, are not feasible in our case. 
Nevertheless, model Hamiltonians used to describe nontrivial topology in TBG could be applied to the central part of our tubes, where the AA regions appear, since $C_2$ symmetry is locally present there.\n\n\n\\section{Methods}\n\nCollapsed nanotubes are simulated by means of molecular dynamics calculations, resorting to the Large-scale Atomic\/Molecular Massively Parallel Simulation (LAMMPS) package \\cite{thompson_lammps_2022}. This allows us to have a reliable description of the structure. In fact, at low angles we reproduce the corrugations that are known to appear in TBG \\cite{wijk_relaxation_2015-3} in the flattened, bilayer-like portion of the nanotubes. An Adaptive Intermolecular Reactive Empirical Bond Order (AIREBO) \\cite{stuart_reactive_2000} potential models the interactions between carbon atoms. Periodic conditions are applied with supercells wide enough to avoid interaction between nanotube replicas. A detailed description of the methods used in this section can be found in the Supporting Information of \\cite{arroyo-gascon_one-dimensional_2020}. \nFigure \\ref{fig1} shows two examples of converged final geometries. \n\nThe band structure of the tubes is then obtained by means of a tight-binding model derived from that presented by Nam and Koshino \\cite{nam_lattice_2017}:\n\\begin{equation}\nH= -\\sum_{i,j} t({\\mathbf R}_i - {\\mathbf R}_j) \\ket{{\\mathbf R}_i} \\bra{ {\\mathbf R}_j } + {\\rm H. c.},\n\\end{equation}\nwhere ${\\mathbf R}_i$ is the position of the atom $i$, $ \\ket{{\\mathbf R}_i}$ is the wavefunction at $i$, and $t({\\mathbf R})$ is the hopping between atoms $i$ and $j$:\n\\begin{equation}\n-t({\\mathbf R}) = V_{pp\\pi} (R) \\left[ 1-\\left( \\frac{ {\\mathbf R} \\cdot \\mathbf{e}_y } {R} \\right)^2 \\right] + V_{pp\\sigma} (R) \\left( \\frac{ {\\mathbf R} \\cdot \\mathbf{e}_y } {R} \\right)^2\n\\end{equation}\nThe explicit expression of the hopping parameters $V_{pp\\pi}(R)$ and $V_{pp\\sigma} (R)$ is detailed in the Supporting Information of \\cite{arroyo-gascon_one-dimensional_2020}.\nAfter the molecular dynamics calculation, the tubes end up with a flattened central region where moir\u00e9s appear (see Figure~1), as well as two narrow lobular regions at the extremes of the flattened part. Both $V_{pp\\pi}(R)$ and $V_{pp\\sigma}(R)$ contributions are taken into account for the flat part. Only the $V_{pp\\pi}(R)$ part of the Hamiltonian is applied to the regions of the lobes, since we do not need to assess the effect of corrugations\ntherein.\n\nThe tight-binding Hamiltonian allows us to model nanotubes which would be computationally out of reach within a first-principles approach. In addition, it is validated against DFT calculations of the electronic bands of a relatively small semiconducting tube, namely the (36,2), employing the SIESTA code within the GGA-PBE approximation \\cite{soler_siesta_2002-1,troullier_efficient_1991,perdew_generalized_1996}. We are thus able to compare the first-principles and tight-binding electronic structures of the collapsed tube, \nwhere the atomic coordinates are taken from the relaxed nanotube atomic configuration previously obtained by molecular dynamics simulations. Figure \\ref{fig2} \nillustrates the agreement between tight-binding (black) and DFT (red) calculations; it is especially important to reflect properly the behavior of the central bands nearest to the Fermi energy, since they will be the most affected upon collapse. 
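To make the hopping $-t(\\mathbf{R})$ defined above concrete, a minimal sketch is given below. The exponential distance dependence and the numerical values of $V_{pp\\pi}^{0}$, $V_{pp\\sigma}^{0}$ and the decay length are the commonly used Nam--Koshino-like choices and are assumptions here; the exact $V_{pp\\pi}(R)$ and $V_{pp\\sigma}(R)$ employed in this work are those detailed in the Supporting Information of \\cite{arroyo-gascon_one-dimensional_2020}.
\\begin{verbatim}
# Minimal sketch (Python) of the hopping t(R) between two carbon sites.
# The exponential decay and all parameter values below are assumptions;
# e_y is the direction appearing in the hopping expression of the main text.
import numpy as np

A0 = 1.42        # intralayer C-C distance (Angstrom), assumed
D0 = 3.35        # interlayer distance (Angstrom), assumed
DELTA = 0.453    # decay length (Angstrom), assumed
VPI0, VSIGMA0 = -2.7, 0.48    # eV, assumed

def hopping(R, ey=np.array([0.0, 1.0, 0.0])):
    d = np.linalg.norm(R)
    cos2 = (np.dot(R, ey) / d) ** 2
    vpi = VPI0 * np.exp(-(d - A0) / DELTA)
    vsigma = VSIGMA0 * np.exp(-(d - D0) / DELTA)
    return -(vpi * (1.0 - cos2) + vsigma * cos2)   # t(R)

print(hopping(np.array([A0, 0.0, 0.0])))   # in-plane nearest neighbour: ~ 2.7 eV
print(hopping(np.array([0.0, D0, 0.0])))   # vertical interlayer pair: ~ -0.48 eV
\\end{verbatim}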
Note that the bands of the (36,2) tube, which has 2744 atoms per unit cell, only show a slight flattening since the moir\u00e9 angle is $\\theta_M=5.36\\degree$, which has to be compared with the magic angle in TBG, $\\sim 1.1\\degree$. \n\n\\begin{figure}[h!]\n\\includegraphics[trim={9cm 0 0 0},clip,width=0.7\\textwidth]{fig2.pdf}\n\\caption{Band structure of the (36,2) CNT with $\\theta_M=5.36\\degree$. (a) Cylindrical geometry, obtained with the tight-binding model. (b) Collapsed structure, obtained with both tight-binding (black) and DFT (red) calculations.\n}\n\\label{fig2}\n\\end{figure}\n\n\n\\section{Benchmarks of magic angle physics in semiconducting and $\\Gamma$-metal CNTs}\n\\subsection{Semiconducting nanotubes} \\label{sec:semiconducting}\n\\begin{figure*}[ht!]\n\\includegraphics[trim={0cm 0 0cm 0},clip,width=\\textwidth]{figs34.pdf}\n\\caption{(a) Band structure of four semiconducting tubes; the innermost 12 bands are highlighted in blue and presented in zoom-ins in the bottom panel. Notice the different energy scales. (b) Density of states of the largest three semiconducting tubes in panel (a). (c) Comparison between the collapsed (black) and cylindrical (red) DOS for the (178,2) tube, which displays the maximum peak height among the depicted tubes.}\n\\label{fig3}\n\\end{figure*}\n\nIn order to extend the flat band picture from 2\/3-metal CNTs to all kinds of tubes, the first family we analyze are semiconducting CNTs. As stated in Section \\ref{sec:geometry}, we choose these tubes to present a $C_2$ symmetry, which produces three AA regions per unit cell, located \nalong the nanotube axis (see Figure \\ref{fig1}(a)). Recall that 2\/3-metallic CNTs have been shown to host a set of eight flat bands near the neutrality point along with an inner subset of 4 notably flat bands,\nand a single AA region per unit cell \\cite{arroyo-gascon_one-dimensional_2020}.\nSince the number of AA regions per unit cell has now increased, a higher number of bands are expected to be affected by the collapse. \nBesides, semiconducting tubes fulfilling our requirement also show a higher number of atoms per unit cell \ncompared to 2\/3-metal CNTs with a similar moir\\'e angle. \n\\begin{figure}[h!]\n\\includegraphics[width=0.35\\textwidth]{fig5.pdf}\n\\caption{Band structure and top view of the (94,2) collapsed semiconducting nanotube. The LDOS of the marked bands is highlighted in yellow.}\n\\label{fig5}\n\\end{figure}\n\nFigure \\ref{fig3}(a) clearly depicts a progressive flattening and isolation of the central bands of several tubes as the moir\u00e9 angle decreases, ranging from $1.49\\degree$ to $0.98\\degree$. This central set now encompasses 24 bands, instead of the 8 bands found in 2\/3-metals. \nRemarkably, and as in 2\/3-metals, a subset of several extremely flat bands is found in the semiconducting tubes. \nFor 2\/3-metals, the innermost 4-band set is most flattened; for semiconducting tubes, the number of flat bands in this set increases to 12 and are distinguishable starting from the (162,2) tube in Figure~\\ref{fig3}(a). Hence, the number of flattened bands in semiconductors is triple than in 2\/3-metals; this is ultimately related to the number of AA regions per unit cell, which is also triple. \n\n\n\nSemiconducting tubes can be divided in two groups depending on whether the relation between their indices $n-m=3l \\pm 1$, where $l$ is an integer; for instance, the (132,2) and (162,2) tubes belong to the $+1$ group whereas the (172,2) and (202,2) tubes belong to the $-1$ subfamily. 
They have been shown to display different optical and electronic properties \\cite{kataura_optical_1999,chico_curvature-induced_2009,charlier_electronic_2007}, so it is reasonable to ask whether such differences may arise with respect to moir\\'e physics. However, observing Fig.~\\ref{fig3}(a) we find that the $\\pm 1$ classification does not have a significant effect on the flattening of the central bands of the tubes. \n\\begin{figure*}[ht!]\n\\includegraphics[width=\\textwidth]{figs68.pdf}\n\\caption{(a) Band structure of four $\\Gamma$-metal nanotubes. Mind the different energy scales. (b) DOS of the largest three nanotubes depicted in panel (a).}\n\\label{fig6}\n\\end{figure*}\n\n\nWe previously found \\cite{arroyo-gascon_one-dimensional_2020} that in 2\/3-metals, as in TBG, flat bands \ngive rise to \nsharp signals of electronic localization in the density of states (DOS). This is a consequence of the \nproportionality of the DOS with the inverse of the norm of the electronic velocity integrated over the energy surface. In our 1D case, such a relation\nis inversely proportional to the electronic speed, so that the DOS per atom reads,\n\\begin{equation}\n\\label{doseq}\n\\rho(\\varepsilon)= \\frac{1}{\\rho_{A} D_{cnt }}\\sum_{n}\\frac{1}{\\hbar \\, |v_{n}(k)|_{k=k_{n}(\\varepsilon)}} \\,,\n\\end{equation}\n\\noindent\nwhere $n$ labels the electronic bands, $v_{n}(k)$ is the corresponding electron velocity, $\\rho_{A}$ is the carbon areal density of graphene, and $D_{cnt}$ the diameter of the CNT. Expression (\\ref{doseq}), since $\\hbar \\, |v_{n}(k)|$ is the derivative of the band dispersion relation, neatly illustrates that flattening of the bands gives rise to high narrow peaks in the DOS. \nThe presence of higher and narrower peaks is a signature of higher localization, and thus, of a larger probability of strongly correlated electronic behavior. Comparing the DOS peak heights thus serves as a criterion for the potential for strong electronic correlations among diverse nanotubes. \nAdditionally, Eq.\\ (\\ref{doseq}) also implies that for a given resolution the DOS is proportional to the inverse of the electronic velocity averaged over the resolution function. But notice that if, alternatively, we choose as localization criterion the vanishing of electronic velocities in absolute terms, the heights we must compare are those of the DOS multiplied by the corresponding nanotube diameter. \n\nFigure~\\ref{fig3}(b) shows the DOS for the largest semiconducting tubes in Figure~\\ref{fig3}, computed from the dispersion relations with a 50 $\\mu$eV Lorentzian resolution. Due to the flattening, these tubes become metallic and sharp peaks appear near the neutrality point, whereas a small gap is observed for smaller tubes. As the moir\u00e9 angle decreases, the peaks pack together around the Fermi level. Moreover, the innermost 12-band set is also separated from the rest of the spectrum. The maximum DOS peak height is reached for the (178,2) tube, with an angle $\\theta_M=1.11\\degree$. Multiplying by the corresponding diameters does not change the qualitative picture but enhances the preponderance of the (178,2) tube.\n\nThe effect of collapse on the DOS is \nillustrated in Figure~\\ref{fig3}(c), which spans the 16 innermost bands of the (178,2) tube, both cylindrical and collapsed. The former spaced bands in the cylindrical structure gather around the Fermi level upon collapse and give rise to pronounced peaks that evidence the high localization of these states. 
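The relation between band flatness and DOS peaks in expression (\\ref{doseq}) can also be illustrated numerically. The sketch below broadens a band structure with a Lorentzian resolution of 50~$\\mu$eV, as quoted above; the prefactor $1\/(\\rho_{A} D_{cnt})$ and $\\hbar$ are omitted and the two-band dispersion is hypothetical, so only relative peak heights are meaningful.
\\begin{verbatim}
# Minimal sketch (Python): Lorentzian-broadened DOS from a 1D band structure.
# eps[n, k] are band energies (eV) on a uniform k grid; gamma = 50e-6 eV.
# The two-band dispersion below is hypothetical: one dispersive, one nearly flat band.
import numpy as np

def dos(energies, eps, gamma=50e-6):
    diff = energies[:, None] - eps.ravel()[None, :]
    lor = (gamma / np.pi) / (diff**2 + gamma**2)
    return lor.sum(axis=1) / eps.shape[1]     # per k-point; overall prefactor omitted

k = np.linspace(-np.pi, np.pi, 2001)
eps = np.vstack([0.05 * np.cos(k), 1.0e-4 * np.cos(k)])
energies = np.linspace(-0.06, 0.06, 1201)
rho = dos(energies, eps)    # the nearly flat band yields a far sharper and higher peak
\\end{verbatim}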
As the LDOS for the smaller (94,2) CNT displayed in Figure~\\ref{fig5} confirms, the states are also localized in the AA regions. These three AA-stacked regions show an equal enhancement of the DOS, consistent with the threefold number of flat bands present in these tubes with respect to those found in 2\/3-metals. Notice also that electron-hole symmetry breaks when the tube is flattened, due to the interlayer hopping in our model. \n\nEven though the DOS criterion highlights the (178,2) tube, analyzing the energy span of the tubes depicted in Figure~\\ref{fig3} we find that the energy spans for the (178,2) tube are 0.2 meV and 0.52 meV for the lowest conduction and highest valence band respectively (conduction band and valence band from now on), larger than those of the (202,2) tube (0.04 meV and 0.15 meV respectively). Therefore, for semiconducting nanotubes, the DOS and the band span criteria point to different nanotubes. \n\n\n\n\\subsection{$\\Gamma$-metal nanotubes}\n\n\n$\\Gamma$-metal tubes are similar to 2\/3-metals, in that they both display a Dirac-like dispersion that is renormalized upon collapse. However, the smallest suitable $\\Gamma$-metals show six AA regions per unit cell instead of one (see Figure~\\ref{fig1}(b)). Although the number of AA regions per unit cell is six times that of the 2\/3-metals, this does not necessarily imply six times as many localized states: comparing Figures \\ref{fig1}(a) and (b), it follows that the symmetry of $\\Gamma$-tubes is different from that of 2\/3-metallic and semiconducting tubes, in a way that precludes the tube structure from being approximated by consecutive copies of single-AA-region cells, as is the case with semiconducting CNTs.\n\nAs depicted in Figure~\\ref{fig6}(a), the Dirac crossing is slightly displaced away from the $\\Gamma$ point for the collapsed tubes. This figure is equivalent to Figure~\\ref{fig3}(a) but for $\\Gamma$-metals, with moir\u00e9 angles between $1.49\\degree$ and $0.95\\degree$. The bands constituting the Dirac cone are only distinguishable in the leftmost panel. Overall, a set of 36 bands is progressively isolated and flattened, with an inner set of 24 very flat bands. This subset thus contains six times as many bands as the equivalent set in 2\/3-metals, in spite of the above-mentioned difference in symmetry. And indeed, the states are again equally localized at the six AA regions, as Figure~\\ref{fig7} illustrates. Therefore, one can establish, for all kinds of nanotubes, a direct proportionality between the number of AA regions and the number of flat bands around the neutrality point that appear at small chiral angles.\n\n\\begin{figure}[h!]\n\\includegraphics[width=0.5\\textwidth]{fig7.pdf}\n\\caption{Band structure and top view of the (132,3) $\\Gamma$-metal nanotube. The LDOS of the marked bands is highlighted in yellow.}\n\\label{fig7}\n\\end{figure}\n\n\nFigure~\\ref{fig6}(b) shows the corresponding DOS of the three largest tubes. The most prominent peak appears in the (267,3) case, with a moir\u00e9 angle of $\\theta_M=1.11\\degree$. The cluster of narrowest peaks shows this time a bimodal structure around the Fermi level, i.e., the bands nearest to the Fermi level are not particularly flat, something that can also be gauged in the band plots (zooms in Figure~\\ref{fig6}(a)). \n\n\nAs in the semiconductors, the smallest energy span among $\\Gamma$-metals does not correspond to the case with the most prominent DOS peak. 
In particular, the (312,3) tube (moir\u00e9 angle $\\theta_M=0.95\\degree$) has \n0.16 meV and 0.06 meV energy spans for conduction and the valence bands respectively, to be compared with 0.27 meV and 0.16 meV in the case of the (267,3) tube.\n \n\\subsection{Global discussion and analysis for all CNT types} \nFigure~\\ref{fig9} gives a general picture of the band flattening in collapsed chiral carbon nanotubes in terms of the energy spans of their bands around the Fermi level. \nThe widths of all band sets decrease gradually (with a similar trend) as the moir\u00e9 angle diminishes independently of the family each tube belongs to. \nThis picture neatly illustrates that a small moir\u00e9 angle is the only requirement needed to obtain flat bands in all types of collapsed chiral nanotubes. \n\\begin{figure}[h!]\n\\includegraphics[width=0.4\\textwidth]{fig9.pdf}\n\\caption{Energy spans for the sets of flat bands in all kinds of CNTs. $2\/3$ and $\\Gamma$-metals are labeled $2\/3$ and $\\Gamma$, respectively, whereas the two subfamilies of semiconductors are labeled $\\pm 1$. The energy widths of the pair of innermost bands, denoted by conduction and valence bands, is also shown.}\n\\label{fig9}\n\\end{figure}\n\\begin{figure}[h!]\n\\includegraphics[width=0.35\\textwidth]{fig10.pdf}\n\\caption{From top to bottom: DOS around the Fermi energy of the 2\/3-metal nanotube (176, 2) ($\\theta_M=1.12\\degree$), the semiconductor nanotube (178, 2) ($\\theta_M=1.11\\degree$) and the $\\Gamma$-metal nanotube (267,3) ($\\theta_M=1.11\\degree$).}\n\\label{fig10}\n\\end{figure}\n\n\n\\begin{table*}\n\\begin{tabular}{ccrcccr}\n\\hline\n $(n,m)$ & Family & $\\theta_{NT}$ & $d_{cyl}$(\\AA) & $T_{col}$(\\AA) & $N_M$ & $N_A$\\\\ \n \\hline\n(36,2) & SC & 5.36 & 29.004 & 77.565 & 3 & 2744\\\\\n(132,3) & $\\Gamma$ & 2.23 & 104.556 & 186.426 & 6 & 23772\\\\\n(94,2) & SC & 2.09 & 74.401 & 198.987 & 3 & 18056\\\\\n(132,2) & SC & 1.49 & 104.152 & 278.562 & 3 & 35384\\\\\n(198,3) & $\\Gamma$ & 1.49 & 156.230 & 278.560 & 6 & 53076\\\\\n(162,2) & SC & 1.22 & 127.642 & 341.390 & 3 & 53144\\\\\n(243,3) & $\\Gamma$ & 1.22 & 191.464 & 341.414 & 6 & 79716\\\\\n(176,2) & 2\/3 & 1.12 & 138.605 & 123.568 & 1 & 20888\\\\\n(178,2) & SC & 1.11 & 140.170 & 374.898 & 3 & 64088\\\\\n(267,3) & $\\Gamma$ & 1.11 & 210.256 & 374.870 & 6 & 96132\\\\\n(202,2) & SC & 0.98 & 158.963 & 425.116 & 3 & 82424\\\\\n(312,3) & $\\Gamma$ & 0.95 & 245.492 & 437.748 & 6 & 131052\\\\\n \\hline\n\\end{tabular}\n\\caption{Data of the nanotubes that appear in all figures: family, moir\u00e9 angle ($\\theta_{NT}$), diameter of the cylindrical tube ($d_{cyl}$), length of the collapsed unit cell ($T_{col}$), number of moir\u00e9s ($N_M$) and atoms ($N_A$) per unit cell.}\n\\label{table1}\n\\end{table*}\nWith respect to the degree of the localization, Figure~\\ref{fig10} displays the best DOS results for the three main types of nanotubes (as mentioned earlier, there is no observed distinction between the two kinds of semiconductors). At the magic angle, the most prominent peak of the 2\/3-metal is substantially higher than those of the other cases. The 2\/3-metallic tube also has the most centered and narrower distribution of peaks around the Fermi level. Such narrowing is to be expected, since the group of flattest bands is only four here. We already know that the number of bands that undergo this flattening with diminishing chiral angle correlates with the number of AA-regions in the corresponding moir\u00e9 pattern of a primitive cell. 
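\n\nFor completeness, LDOS maps such as those of Figures \\ref{fig5} and \\ref{fig7} can be obtained in the same spirit from the tight-binding eigenvalues and eigenvectors: the weight plotted on each atom is the probability of the selected flat-band states on that site. A minimal sketch of one way to compute such a map (schematic, not the code used for the figures) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef site_weights(evals, evecs, window):\n    # evals[k, n]: band energies; evecs[k, i, n]: amplitude of band n\n    # on atomic site i at wave vector k (tight-binding eigenvectors).\n    # Returns, for each site, the average weight of the states whose\n    # energy falls inside the chosen flat-band window.\n    emin, emax = window\n    mask = (evals >= emin) & (evals <= emax)   # states in the window\n    w = (np.abs(evecs)**2 * mask[:, None, :]).sum(axis=(0, 2))\n    return w \/ mask.sum()\n\\end{verbatim}\nPlotting these weights on the atomic positions of the collapsed unit cell yields maps of the kind highlighted in yellow in Figures \\ref{fig5} and \\ref{fig7}.\n\n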
LDOS calculations show that the AA-regions are indeed involved in the formation of flat bands. From the 2\/3-metallic case \\cite{arroyo-gascon_one-dimensional_2020} we learnt that, in particular, one AA-region per primitive cell induces the isolation of a group of eight central bands, with an inner subset of four remarkably flat bands. Our new findings confirm that this flattening happens irrespective of the kind of tube, so that the number of isolated bands remains eight, and that of very flat bands four, per AA-region in a primitive cell. Notice that there is no possibility of degeneracy in the number of isolated bands, since we are considering primitive cells. Therefore, the AA-regions cannot be completely identical within a unit cell, in spite of their striking similarity at first glance. From this perspective, the increasing spread in the location of the narrow DOS peaks with an increasing number of AA-regions, as shown in Figure~\\ref{fig10}, seems fairly natural. Likewise, the bimodal structure of this distribution in the case of the $\\Gamma$-metals matches well with the zigzag structure of their moir\u00e9 patterns. In fact, contrary to the 2\/3-metals and semiconductor tubes, in the case of $\\Gamma$-metals no band crosses the Fermi level even at the lowest moir\u00e9 angle explored ($\\theta_M=0.95\\degree$, (312,3) tube). These facts make the energy span criterion unreliable for finding magic angles, since now the flattest bands are not necessarily those closest to the Fermi level.\n\nNotice also that there are pairs of semiconductors and $\\Gamma$-metals sharing the same $\\theta_M$ (see Fig.\\ \\ref{fig9}). In particular, this is the case for the (178,2) and (267,3) tubes with $\\theta_M=1.11\\degree$. Consistently, the DOS criterion classifies both cases as ``magic\", highlighting again the driving role of the moir\u00e9 pattern in flat band engineering. \n\nMultiplying the DOS by the diameter of the corresponding nanotube does not substantially change the picture shown in Figure~\\ref{fig10}: it enhances the 2\/3-metallic CNT with respect to the two other classes, while balancing the performance between the semiconductor and the $\\Gamma$-metallic CNTs. At any rate, there are no large differences; ultimately, only experiments can tell whether some CNTs are better suited than others to explore one-dimensional strongly correlated behavior. Our work shows that moir\u00e9 physics does not require chiral collapsed CNTs of a specific geometry other than being close to the universal magic angle $\\sim 1.1\\degree$.\n \nFinally, for the reader's convenience, we have collected in Table \\ref{table1} the most relevant parameters characterizing all the nanotubes addressed in this work.\n\n\n\n\n\n\\section{Conclusions}\nWe have analyzed the potential of all families of collapsed carbon nanotubes for flat-band engineering. The rotational symmetries of the uncollapsed tubes determine the structure of the moir\u00e9 patterns. Thus, in their patterns, 2\/3-metals show one AA-region per primitive cell, semiconductors three in a linear arrangement, and $\\Gamma$-metals six in a zigzag disposition. Remarkably, the three families of collapsed CNTs display flat bands, a high density of states, and localization in the AA regions of the moir\u00e9 when $\\theta_M$ decreases. The number of highly flattened bands shows a perfect fourfold proportionality with the number of AA-regions per primitive cell, but otherwise there are no significant differences. 
In particular, the effects are maximized at the same magic moir\u00e9 angle, namely $\\sim 1.1 \\degree$.\n\nOur results show that moir\u00e9 physics is universal, i.e., it will appear in all kinds of chiral collapsed nanotubes, provided that their chiral angle is small enough to host AA-regions with localized states. We expect our conclusions to stir the experimental search for these one-dimensional, highly correlated systems. \n\n\n\\section{Acknowledgements}\nWe thank Gloria Platero for generously sharing her computational resources and Sergio Bravo for helpful discussions. We also thank the Centro de Supercomputaci\\'on de Galicia, CESGA, (www.cesga.es, Santiago de Compostela, Spain) for providing access to their supercomputing facilities. This work was supported by grant PID2019-106820RB-C21 funded by MCIN\/AEI\/10.13039\/501100011033\/ and by \"ERDF A way of making Europe\", by grant TED2021-129457B-I0 funded by MCIN\/AEI\/10.13039\/501100011033\/ and by the \"European Union NextGenerationEU\/PRTR\" and by grant PRE2019-088874 funded by MCIN\/AEI\/10.13039\/501100011033 and by \"ESF Investing in your future\". ESM acknowledges financial support from FONDECYT Regular 1221301, and LC gratefully acknowledges the support from Comunidad de Madrid (Spain) under the Multiannual Agreement with Universidad Complutense de Madrid, Program of Excellence of University Professors, in the context of the V Plan Regional de Investigaci\u00f3n Cient\u00edfica e Innovaci\u00f3n Tecnol\u00f3gica (PRICIT). \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Conclusions and Future Work}\\label{sec:conclusion}\nIn this paper, we presented our experiences of applying deep learning models as well as representation learning approaches for talent search systems at LinkedIn. We provided an overview of LinkedIn Recruiter search architecture, described our methodology for learning representations of sparse entities and deep models in the talent search domain, and evaluated multiple approaches in both offline and online settings. We also discussed challenges and lessons learned in applying these approaches in a large-scale latency-sensitive search system such as ours. \nOur design choices for learning semantic representations of entities at scale, and the deployment considerations in terms of weighing the potential benefits vs. the engineering cost associated with implementing end-to-end deep learning models should be of broad interest to academicians and practitioners working on large-scale search and recommendation systems.\n\\subsection{Online Experiments}\\label{sec:onlineexp}\nBased on the offline results, we have currently put off the online deployment of the end-to-end deep learning models in LinkedIn Recruiter due to the following reasons:\n\\begin{itemize}\n\\item There is a significant engineering cost to implementing end-to-end deep learning solutions in search systems since the number of items (candidates) that need to be scored can be quite large. 
Further, the relatively large amount of computation needed to evaluate deep neural networks could cause the search latency to be prohibitively large, especially when there are many candidates to be scored, thereby not meeting the real-time requirements of search systems such as LinkedIn Recruiter.\n\\item The offline evaluation for the end-to-end deep models (\\S\\ref{subsubsec:exp_deep}) showed an improvement of 1.72\\% in Prec@25 for the 3-layer case, which, although impressive per our experience, does not currently justify the engineering costs discussed above.\n\\end{itemize}\nInstead, we performed online A\/B tests incorporating the unsupervised network embeddings (\\S\\ref{subsubsec:unsupervised_emb}) as a feature in the gradient boosted decision tree model. As in the case of the offline experiments, in the online setting, we first concatenate both the first and the second order embeddings for the search query, and for each potential candidate to be shown, then take the cosine similarity between the two concatenated embeddings, and use that as a single additional feature (to the baseline) in the gradient boosted decision tree (for both offline training and online model evaluation). Although the offline gain as evaluated in \\S\\ref{subsubsec:exp_embedding} is smaller compared to that of an end-to-end deep learning model, the engineering cost and computational complexity is much less demanding, since the embeddings can be precomputed offline, and the online dot product computation is relatively inexpensive compared to a neural network evaluation. An additional benefit of testing the embedding features with a tree model instead of an end-to-end deep model is that we can measure the impact of the new feature(s) in an apple-to-apple comparison, i.e., under similar latency conditions as the baseline model which is a tree model as well, with the embedding based feature as the only difference. Finally, we decided against deploying the supervised embeddings (\\S\\ref{subsubsec:supervised_emb}) due to relatively weaker offline experimental results (\\S\\ref{subsubsec:exp_embedding}).\n\nThe A\/B test as explained above was performed on LinkedIn Recruiter users during the last quarter of 2017, with the control group being served results from the baseline model (gradient boosted decision tree model trained using XGBoost \\cite{xgboost}) and the treatment group, consisting of a random subset of tens of thousands of users, being served results from the model that includes the unsupervised network embedding based feature. We present the results of the experiment in Table~\\ref{tab:onlineExperiment}. Although the p-values are high for the experiments (as a result of relatively small sample sizes), we note that an increase of 3\\% in the overall precision is an impressive lift, considering that precision is a hard metric to move, based on our domain experience.\n\n\\vspace{-0.1in}\n\\begin{table}[!htp]\n\\small\n\t\\caption{Online A\/B testing results. Comparing XGBoost model with vs. without network embedding based semantic similarity feature. 
}\n\t\\vspace{-0.17in}\n\t\\begin{tabular}{c|c|c|c}\n\t\t\\hline\\hline\n\t\t\\textbf{ } & \\textbf{Prec@5} & \\textbf{Prec@25} & \\textbf{Overall precision} \\\\\\hline\\hline\n\t\tImprovement & 2\\% & 1.8\\% & 3\\% \\\\ \\hline\n\t\tp-value & 0.2 & 0.25 & 0.11 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab:onlineExperiment}\n\t\\vspace{-0.22in}\n\\end{table}\n\n\\subsubsection{Deep Models} \\label{subsubsec:exp_deep}\nWe first evaluated the effect of utilizing the end-to-end deep model, proposed in \\S\\ref{subsec:deepmodel}, with up to three layers on the dataset described above. The baseline model is a gradient boosted decision tree (\\S\\ref{subsec:currentmodels}, trained using the XGBoost \\cite{xgboost} package) which consists of $30$ trees, each having a maximum depth of $4$ and trained in a point-wise manner, and we compare it to the neural network approach. The model family is a $k$ layer multi-layer perceptron (MLP) with $100$ units in each layer and rectified linear units (ReLU) for activations. We did not regularize the network because the size of the network was small enough, but rather used early stopping to achieve a similar effect. Also, as explained in \\S\\ref{subsec:deepmodel}, we chose hinge loss to train the pairwise training (the final metrics produced from logistic or hinge loss did not differ significantly).\n\n\n\\begin{table}[!htp]\n\\small\n\\caption{Precision lift of end-to-end MLP models trained with point-wise and pair-wise losses as well as varying number of layers over the baseline gradient boosting tree model.}\n\\vspace{-0.17in}\n\\begin{tabular}{c|c|c|c|c}\n\t\\hline\\hline\n\t\\textbf{Model} & \\textbf{Optimization} & \\textbf{Prec@1} & \\textbf{Prec@5} & \\textbf{Prec@25} \\\\\\hline\\hline\n\t\tXGBoost & - &0\\% & 0\\% & 0\\% \\\\\\hline\n\t\t1-layer & Pointwise & -2.93\\% & -4.39\\% & -1.72\\% \\\\\\hline\n\t\t3-layer & Pointwise & -0.31\\% & -1.67\\% & -0.19\\% \\\\\\hline\n\t\t1-layer & Pairwise & -0.16\\% & -1.36\\% & 0\\% \\\\\\hline\n\t\t3-layer & Pairwise & +5.32\\% & +2.82\\% & +1.72\\% \\\\\\hline\n\t\\end{tabular}\n\t\\label{tab:deepOffline}\n\t\\vspace{-0.12in}\n\\end{table}\n\nThe results are shown in Table~\\ref{tab:deepOffline}. Interestingly, while single layer neural network trained with pointwise loss has poor ranking performance, additional layers of nonlinearity bring the neural network performance almost on par with XGBoost (further layers and units per layer did not improve the results, and are omitted here for brevity). On the other hand, neural network models trained using pairwise loss outperformed those trained with pointwise loss and XGBoost baseline as more layers are introduced (similar to the pointwise case, we did not see additional gains using more layers or units for pairwise loss). A possible explanation is the following:\n\\begin{enumerate}\n\\item Pairwise loss approach explicitly compares positive examples to negative examples within a search session rather than simply separate positive examples from negative examples in a broad sense, and,\n\\item It automatically deals with imbalanced classes since it could be mathematically shown that pairwise ranking loss is closely related to AUC \\cite{gao2015consistency}, which is immune to class imbalance.\n\\end{enumerate}\n\n\\subsubsection{Shallow Models} \\label{subsubsec:exp_embedding}\nIn this family of models, we use representation learning methods to construct dense vectors to represent certain categorical features. 
Although deep networks are used to train the embeddings, once trained, they are used as features in the baseline gradient boosted decision tree model, which is shallow.\n\nOur first set of experiments utilizes unsupervised network embeddings, as proposed in \\S\\ref{subsec:representation} to learn representations for categorical variables like skills and titles from member profiles. The title\/skill representations are learned for both the query and the document (member) and a measure of similarity between the two is used as a new feature in the ranking model (baseline GBDT with additional feature). As shown in Table~\\ref{tab:shallowOfflineUnsupervised}, converting the categorical interaction feature to a dense similarity measure results in large gains in the precision metric. The employed embedding was a concatenated version of order $1$ and order $2$ representations. For each member or query, the aggregation strategy used was mean pooling (although max pooling resulted in similar results), i.e., if a member or query has multiple skills, we do a mean pool of all the individual skill vectors to represent them on the vector space. Denote the mean pooled member vector by $m$, and the mean pooled query vector by $q$. We experimented with three similarity measures \\cut{($s$) }between the two vector representations: \n\\begin{enumerate}\n\\item \\textbf{Dot Product:} $ m \\bullet q = \\sum_i m_i \\cdot q_i$,\n\\item \\textbf{Cosine Similarity:} $\\frac{m \\bullet q}{||m||_2 ~ ||q||_2} = \\frac{\\sum_i m_i \\cdot q_i}{\\sqrt{\\sum_i m_i^2}\\sqrt{\\sum_i q_i^2}}$, and \n\\item \\textbf{Hadamard Product:} $m \\circ q = \\langle m_1 \\cdot q_1, ~ \\dots ~, m_d \\cdot q_d \\rangle$ (Also known as element-wise product).\n\\end{enumerate}\nWe note that both dot product and cosine similarity measures result in a single new feature added to the ranking model, whereas Hadamard product measure contributes to as many features as the dimensionality of the embedding vector representation. From Table~\\ref{tab:shallowOfflineUnsupervised}, we can observe that using dot product outperformed using Hadamard product based set of features.\n\nIn our second set of experiments, we retain the same strategy of introducing feature(s) based on the similarity between the member\/query embeddings into the ranking model. The only difference is that we now utilize a supervised algorithm (DSSM) to train the embeddings, which uses the same dataset as the offline experiments$~^3$\\footnote{~$^3$ We first train the embeddings using DSSM on the training set explained in the beginning of \\S\\ref{subsec:offline_exp}. Then, we introduce the similarity measure based feature(s), and train the final ranking model based on GBDT on the same training dataset.}. As shown in Table \\ref{tab:shallowOfflineSupervised}, we observed comparatively modest lift values in the Prec@k metric. In all experiments, we fixed the size of the embedding to $50$. We used tanh as the activation function and experimented with dot product and cosine similarity for the similarity computation between the two arms of DSSM. We used a minimum of 1 layer and a maximum of 3 layers in our experiments with DSSM models. The first hidden layer is used for word hashing, and the next two hidden layers are used to reduce the dimensionality of query \/ document vector representation. In our experiments, we did not observe better performance by using more than 3 layers. We conducted extensive offline experiments and tried over $75$ models. 
We only report the best configuration (network architecture) for each model.\n\n\n\\begin{table}[!htp]\n\\small\n\t\\caption{Offline experiments with unsupervised embeddings.}\n\t\\vspace{-0.19in}\n\t\\begin{tabular}{c|c|c|c|c}\n\t\t\\hline\\hline\n\t\t\\textbf{Model} & \\textbf{Similarity} & \\textbf{Prec@1} & \\textbf{Prec@5} & \\textbf{Prec@25} \\\\\\hline\\hline\n\t\tXGBoost & - & 0\\% & 0\\% & 0\\% \\\\\\hline\n\t\tSkill, Title & Dot & 2.71\\% & 1.72\\% & 1.06\\% \\\\\\hline\n\t\tSkill, Title & Hadamard & 0\\% & 0.73\\% & 0.36\\% \\\\\\hline\n\t\tTitle & Dot & 2.31\\% & 1.99\\% & 0.53\\% \\\\\\hline\n\t\tSkill & Dot & 2.05\\% & -0.18\\% & -0.35\\% \\\\\\hline\n\t\\end{tabular}\n\t\\label{tab:shallowOfflineUnsupervised}\n\t\\vspace{-0.20in}\n\\end{table}\n\n\\begin{table}[!htp]\n\\small\n\t\\caption{Offline experiments using supervised embeddings. The network architecture is represented in square brackets. Only the best performing architecture type is shown for each dimension of evaluation (Similarity measure, Text vs. Facet)}\n\t\\vspace{-0.19in}\n\t\\begin{tabular}{c|c|c|c|c}\n\t\t\\hline\\hline\n\t\t\\textbf{Model} & \\textbf{Similarity} & \\textbf{Prec@1} & \\textbf{Prec@5} & \\textbf{Prec@25} \\\\\\hline\\hline\n\t\tXGBoost & - & 0\\% & 0\\% & 0\\% \\\\\\hline\n\t\tText [200, 100] & Dot & 3.62\\% & -0.13\\% & 0.15\\% \\\\\\hline\n\t\tText [200, 100] & Cosine & 0.44\\% & 0.55\\% & -0.10\\% \\\\\\hline\n\t\tText [500, 500, 128] & Dot & 0.55\\% & -0.01\\% & 0.38\\% \\\\\\hline\n\t\t Title [500] & Dot & 2.42\\% & -0.13\\% & -0.02\\% \\\\\\hline\n\t\\end{tabular}\n\t\\label{tab:shallowOfflineSupervised}\n\t\\vspace{-0.19in}\n\\end{table}\n\n\\section{Experiments}\\label{sec:exp}\nWe next present the results from our offline experiments for the proposed models, and then discuss the trade-offs and design decisions to pick a model for online experimentation. We finally present the results of our online A\/B test of the chosen model on \\emph{LinkedIn Recruiter} product, which is based on unsupervised embeddings.\n\n\\subsection{Offline Experiments} \\label{subsec:offline_exp}\nTo evaluate the proposed methodologies, we utilized LinkedIn Recruiter usage data collected over a two month period within 2017. This dataset consists of the impressions (recommended candidates) with tracked features from the candidates and recruiters, as well as the labels for each impression (positive\/1 for impressions which resulted in the recruiter sending an inMail and the inMail being accepted, negative\/0 otherwise). Furthermore, we filter the impression set for both training and testing sets to come from a random bucket, i.e., a subset of the traffic where the top $100$ returned search results are randomly shuffled and shown to the recruiters. The random bucket helps reduce positional bias \\cite{joachims_2005}. We split training data and test data by time, which forms a roughly $70\\%-30\\%$ split. The dataset covers tens of thousands of recruiters, and millions of candidates recommended.\n\nTo evaluate the performance of the ranking models, we use offline replay, which re-ranks the recommended candidates based on the new model being tested, and evaluates the new ranked list. As explained previously, the main metric we report is precision at $k$ (Prec@$k$) due to its stability and the suitability with the way LinkedIn Recruiter product presents candidates. Prec@$k$ lift represents the \\% gain in the inMail Accept precision for top $k$ impressions. 
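\n\nFor concreteness, a minimal sketch of this session-averaged metric as used in offline replay (hypothetical data layout; not our production evaluation code) is:\n\\begin{verbatim}\ndef precision_at_k(sessions, k):\n    # sessions: one list of labels per search session, ordered by the\n    # ranking under evaluation; label 1 means the inMail was sent and\n    # accepted. Assumes each session has at least k impressions.\n    per_session = [sum(ranked[:k]) \/ float(k) for ranked in sessions]\n    return sum(per_session) \/ len(per_session)\n\n# Example: precision_at_k([[1, 0, 0], [0, 1, 1]], 2) returns 0.5\n\\end{verbatim}\n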
\nPrec@$k$ is computed as the fraction of positive responses (inMail accepts, within three days of the inMail being sent by the recruiter) in a given search session, averaged over all the sessions in the dataset. For training of the deep models, we utilized TensorFlow \\cite{Abadi16}, an open-source software package for creating and training custom neural network models.\n\n\\input{exp_subsubsec_1}\n\n\\input{exp_subsubsec_2}\n\n\\input{exp_subsec_2}\n\n\\section{Introduction}\\label{sec:intro}\n\nLinkedIn Talent Solutions business contributes to around 65\\% of LinkedIn's annual revenue$^1$\\let\\thefootnote\\relax\\footnote{$^1$ ~ https:\/\/press.linkedin.com\/about-linkedin}, and provides tools for job providers to reach out to potential candidates and for job seekers to find suitable career opportunities. LinkedIn's job ecosystem has been designed as a platform to connect job providers and job seekers, and to serve as a marketplace for efficient matching between potential candidates and job openings. A key mechanism to help achieve these goals is the \\emph{LinkedIn Recruiter} product, which enables recruiters to search for relevant candidates and obtain candidate recommendations for their job postings.\n\n\\footnote{$^*$ ~ Authors contributed equally to this work.} \\footnote{$^+$ ~ Current affiliation: Stanford University. Work done during an internship at LinkedIn.} \\footnote{\\\\ ~ \\\\ ~ \\large \\textbf{This paper has been accepted for publication at ACM CIKM 2018.}}A crucial challenge in talent search and recommendation systems is that the underlying query could be quite complex, combining several structured fields (such as canonical title(s), canonical skill(s), company name) and unstructured fields (such as free-text keywords). Depending on the application, the query could either consist of an explicitly entered query text and selected facets (talent search), or be implicit in the form of a job opening, or ideal candidate(s) for a job (talent recommendations). Our goal is to determine a ranked list of most relevant candidates among hundreds of millions of structured candidate profiles.\n\nThe structured fields add sparsity to the feature space when used as a part of a machine learning ranking model. This setup lends itself well to a dense representation learning experiment as it not only reduces sparsity but also increases sharing of information in feature space. In this work, we present the experiences of applying representation learning techniques for talent search ranking at LinkedIn. Our key contributions include:\n\\vspace{-\\topsep}\n\\begin{itemize}\n\\item Using embeddings as features in a learning to rank application. 
This typically consists of:\n\\begin{itemize}\n\\item Embedding models for ranking, and evaluating the advantage of a layered (fully-connected) architecture,\n\\item Considerations while using point-wise learning and pair-wise losses in the cost function to train models.\n\\end{itemize}\n\\item Methods for learning semantic representations of sparse entities (such as recruiter id, candidate id, and skill id) using the structure of the LinkedIn Economic Graph~\\cite{Wei12}:\n\\begin{itemize}\n\\item Unsupervised representation learning that uses Economic Graph network data across LinkedIn ecosystem\n\\item Supervised representation learning that utilizes application specific data from talent search domain.\n\\end{itemize}\n\\item Extensive offline and online evaluation of above approaches in the context of LinkedIn talent search, and a discussion of challenges and lessons learned in practice.\n\\end{itemize}\n\\vspace{-\\topsep}\nRest of the paper is organized as follows. We present an overview of talent search at LinkedIn, the constraints, challenges and the optimization problem in \\S\\ref{sec:problem}. We then describe the methodology for the application of representation learning models in \\S\\ref{sec:methodology}, followed by offline and online experimentation results in \\S\\ref{sec:exp}. Finally, we discuss related work in \\S\\ref{sec:related}, and conclude the paper in \\S\\ref{sec:conclusion}.\n\n\nAlthough much of the discussion is in the context of search at LinkedIn, it generalizes well to any multi-faceted search engine where there are high dimensional facets, i.e. movie, food \/ restaurant, product search are a few examples that would help the reader connect to the scale of the problem.\n\\subsection{Lessons Learned in Practice} \\label{sec:lessons}\nWe next present the challenges encountered and the lessons learned as part of our offline and online empirical investigations. As stated in \\S\\ref{sec:onlineexp}, we had to weigh the potential benefits vs. the engineering cost associated with implementing end-to-end deep learning models as part of LinkedIn Recruiter search system. Considering the potential latency increase of introducing deep learning models into ranking, we decided against deploying end-to-end deep learning models in our system. Our experience suggests that hybrid approaches that combine offline computed embeddings (including potentially deep learning based embeddings trained offline) with simpler online model choices could be adopted in other large-scale latency-sensitive search and recommender systems. Such hybrid approaches have the following key benefits: (1) the engineering cost and complexity associated with computing embeddings offline is much lower than that of an online deep learning based system, especially since the existing online infrastructure can be reused with minimal modifications; (2) the latency associated with computing dot product of two embedding vectors is much lower than that of evaluating a deep neural network with several layers.\n\n\n\\subsection{Embedding Models for Ranking} \\label{subsec:deepmodel} \n\nAs mentioned before, we would like to have a flexible ranking model that allows for easy adaptation to novel features and training schemes. 
Neural networks, especially in the light of recent advances that have made them the state of the art for many statistical learning tasks including learning to rank \\cite{burges_2005,tyliu_2009}, are the ideal choice owing to their modular structure and their ability to be trained end-to-end using straightforward gradient based optimization. Hence we would like to use neural network rankers as part of our ranking models for Talent Search at LinkedIn. Specifically, we propose to utilize multilayer perceptron (MLP) with custom unit activations for the ranking task. Our model supports a mix of model regularization methods including L2 norm penalty and dropout \\cite{srivastava_2014}.\n\nFor the training objective of the neural network, we consider two prevalent methodologies used in learning to rank:\n\\vspace{-2mm}\n\\subsubsection{Pointwise Learning} Also called \\emph{ranking by binary classification}, this method involves training a binary classifier utilizing each example in the training set with their labels, and then grouping the examples from the same search session together and ranking them based on their scores. For this purpose, we apply logistic regression on top of the neural network as follows. We include a classification layer which sums the output activations from the neural network, passes the sum through the logistic function, and then trains against the labels using the cross entropy cost function:\n\\begin{equation}\n\\sigma_i = \\frac{1}{1+\\exp\\left(-\\dotprod{\\mathbf{w}}{\\psi(\\mathbf{x}_i)}\\right)}, \\hspace{5mm} i \\in \\{1, \\cdots, n\\}\n\\end{equation}\n\\begin{equation}\n\\mathcal{L} = -\\sum_{i=1}^n y_i \\log(\\sigma_i) + (1-y_i) \\log(1 - \\sigma_i)\n\\end{equation}\nIn above equations, $\\psi(\\cdot)$ refers to the neural network function, and $\\sigma_i$ is the value of the logistic function applied to the score for the $i^\\text{th}$ training example.\n\\vspace{-2mm}\n\\subsubsection{Pairwise Learning} Although pointwise learning is simple to implement and works reasonably well, the main goal for talent search ranking is to provide a ranking of candidates which is guided by the information inherent in available session-based data. Since it is desirable to compare candidates within the same session depending on how they differ with respect to the mutual interest between the recruiter and the candidate (inMail accept), we form pairs of examples with positive and negative labels respectively from the same session and train the network to maximize the difference of scores between the paired positive and negative examples:\n\\begin{align}\nd_{{i^+},{i^-}} &= \\dotprod{\\mathbf{w}}{\\psi(\\mathbf{x}_{i^+})-\\psi(\\mathbf{x}_{i^-})}, \\\\\n\\label{eq:pairwise_loss}\n\\mathcal{L} &= \\sum_{\\footnotesize \\begin{array}{c} ({i^+}, {i^-}) : s_{{i^+}} = s_{{i^-}}, \\\\ y_{{i^+}}=1, y_{{i^-}}=0\\end{array}} f(d_{{i^+},{i^-}}).\n\\end{align}\nThe score difference between a positive and a negative example is denoted by $d_{{i^+},{i^-}}$, with ${i^+}$ and ${i^-}$ indicating the indices for a positive and a negative example, respectively. 
The function $f(\\cdot)$ determines the loss, and \\eqref{eq:pairwise_loss} becomes equivalent to the objective of RankNet \\cite{burges_2005} when $f$ is the logistic loss:\n\\begin{align*}\nf(d_{{i^+},{i^-}}) = \\log\\left(1 + \\exp(-d_{{i^+},{i^-}})\\right),\n\\end{align*}\nwhereas \\eqref{eq:pairwise_loss} becomes equivalent to ranking SVM objective \\cite{joachims_2002} when $f$ is the hinge loss:\n\\begin{align*}\nf(d_{{i^+},{i^-}}) = \\max(0, 1-d_{{i^+},{i^-}}).\n\\end{align*}\nWe implemented both pointwise and pairwise learning objectives. For the latter, we chose hinge loss over logistic loss due to faster training times, and our observation that the precision values did not differ significantly (we present the evaluation results for point-wise and hinge loss based pairwise learning in \\S\\ref{sec:exp}).\n\n\\subsection{Learning Semantic Representations of Sparse Entities in Talent Search} \\label{subsec:representation}\nNext, we would like to focus on the problem of sparse entity representation, which allows for the translation of the various entities (skills, titles, etc.) into a low-dimensional vector form. Such a translation makes it possible for various types of models to directly utilize the entities as part of the feature vector, e.g., \\S\\ref{subsec:deepmodel}. To achieve this task of generating vector representations, we re-formulate the talent search problem as follows: given a query \\(q\\) by a recruiter \\(r_i\\), rank a list of LinkedIn members \\(m_1,m_2,...,m_d\\) in the order of decreasing relevance. In other words, we want to learn a function that assigns a score for each potential candidate, corresponding to the query issued by the recruiter. Such a function can learn a representation for query and member pair, and perform final scoring afterwards. We consider two broad approaches for learning these representations.\n\\begin{itemize}\n\\item The \\textbf{\\emph{unsupervised approach}} learns a shared representation space for the entities, thereby constructing a query representation and a member representation. We do not use talent search specific interactions to supervise the learning of representations.\n\\item The \\textbf{\\emph{supervised approach}} utilizes the interactions between recruiters and candidates in historical search results while learning both representation space as well as the final scoring.\n\\end{itemize}\n\nThe architecture for learning these representations and the associated models is guided by the need to scale for deployment to production systems that serve over 500M members. For this reason, we split the network scoring of query-member pair into three semantic pieces, namely query network, member network, and cross network, such that each piece is run on one of the production systems as given in Figure \\ref{fig:model}.\n\n\\begin{figure}\n\t\\includegraphics[width=2.7in]{model-with-siamese}\n\t\\vspace{-0.1in}\n\t\\caption{The two arm architecture with a shallow query arm and a deep member arm}\n\t\\label{fig:model}\n\t\\vspace{-0.1in}\n\\end{figure}\n\n\\subsubsection{Unsupervised Embeddings} \\label{subsubsec:unsupervised_emb}\n~~~~ Most features used in \\\\LinkedIn talent search and recommendation models are categorical in nature, representing entities such as skill, title, school, company, and other attributes of a member's profile. In fact, to achieve personalization, even the member herself could be represented as a categorical feature via her LinkedIn member Id. 
Such categorical features often suffer from sparsity issues because of the large search space, and learning a dense representation to represent these entities has the potential to improve model performance. While commonly used algorithms such as \\textit{word2vec}~\\cite{Mikolov13} work well on text data when there is information encoded in the sequence of entities, they cannot be directly applied to our use case. Instead, we make use of LinkedIn Economic Graph~\\cite{Wei12} to learn the dense entity representations.\n\nLinkedIn Economic Graph is a digital representation of the global economy based on data generated from over $500$ million members, tens of thousands of standardized skills, millions of employers and open jobs, as well as tens of thousands of educational institutions, along with the relationships between these entities. It is a compact representation of all the data on LinkedIn. To obtain a representation for the entities using the Economic Graph, we could use a variety of graph embedding algorithms (see \\S\\ref{sec:related}). For the purposes of this work, we adopted \\emph{Large-Scale Information Network Embeddings} approach~\\cite{Tang15}, by changing how we construct the graph. In~\\cite{Tang15}, the authors construct the graph of a social network by defining the members of the network as vertices, and use some form of interaction (clicks, connections, or social actions) between members to compute the weight of the edge between any two members. In our case, this would create a large sparse graph resulting in intractable training and a noisy model. Instead, we define a weighted graph, $G = (V, E, w_{..})$ over the entities whose representations need to be learned (e.g., skill, title, company), and use the number of members sharing the same entity on their profile to induce an edge weight ($w_{..}$) between the vertices. Thus we reduce the size of the problem by a few orders of magnitude by constructing a smaller and denser graph.\n\nAn illustrative sub-network of the graph used to construct company embeddings is presented in Figure \\ref{fig:CompanyNetworkEmbeddings}. Each vertex in the graph represents a company, and the edge weight (denoted by the edge thickness) represents the number of LinkedIn members that have worked at both companies (similar graphs can be constructed for other entity types such as skills and schools). In the example, our aim would be to embed each company (i.e., each vertex in the graph) into a fixed dimensional latent space. We propose to learn \\emph{first order} and \\emph{second order} embeddings from this graph. \nOur approach, presented below, is similar to the one proposed in~\\cite{Tang15}.\n\n\\begin{figure}\n\t\\includegraphics[width=2in]{CompanyNetworkEmbeddings.png}\n\t\\vspace{-0.1in}\n\t\\caption{Each vertex represents a company; the edge weight denoted by color, dashed or regular edge represents \\#members that have worked at both companies.} \n\t\\label{fig:CompanyNetworkEmbeddings}\n\t\\vspace{-0.1in}\n\\end{figure}\n\n\\textbf{First order embeddings}\nCorresponding to each undirected edge between vertices $v_i$ and $v_j$, we define the joint probability between vertices $v_i$ and $v_j$ as:\n\\begin{equation}\np_1(v_i,v_j) = \\frac{1}{Z} \\cdot \\frac{1}{ 1+ exp(- \\dotprod{u_i}{u_j})} ~ ,\n\\end{equation}\nwhere $u_i \\in \\mathbb{R}^d$ is the d-dimensional vector representation of vertex $v_i$ and $Z = \\sum_{(v_i, v_j) \\in E} \\frac{1}{ 1+ exp(- \\dotprod{u_i}{u_j})}$ is the normalization factor. 
The empirical probability, $\\hat{p}_1(\\cdot,\\cdot)$ over the space $V \\times V$ can be calculated using:\n\\begin{equation}\n\\label{eq:empirical_p1}\n\\hat{p}_1(v_i, v_j) = \\frac{w_{ij}}{W} ~ ,\n\\end{equation}\nwhere $w_{ij}$ is the edge weight in the company graph, and $W = \\displaystyle\\sum_{ (v_i, v_j) \\in E } w_{ij}$. We minimize the following objective function in order to preserve first-order proximity:\n\\begin{equation}\nO_1 = d( \\hat{p}_1(\\cdot,\\cdot), p_1(\\cdot,\\cdot) ) ~ ,\n\\end{equation}\nwhere $d(\\cdot,\\cdot)$ is a measure of dissimilarity between two probability distributions. We chose to minimize KL-divergence of $\\hat{p}_1$ with respect to $p_1$:\n\\begin{equation}\nO_1 = - \\sum_{ (v_i, v_j) \\in E } \\hat{p}_1(v_i, v_j) \\log \\bigg( \\frac{p_1(v_i, v_j)}{\\hat{p}_1(v_i, v_j)} \\bigg) ~ .\n\\end{equation}\n\n\\textbf{Second order embeddings}\nSecond order embeddings are generated based on the observation that vertices with shared neighbors are similar. In this case, each vertex plays two roles: the vertex itself, and a specific context of other vertices. Let $u_i$ and ${u_i}^{\\prime}$ be two vectors, where $u_i$ is the representation of $v_i$ when it is treated as a vertex, while ${u_i}^{\\prime}$ is the representation of $v_i$ when it is used as a specific context. For each directed edge $(i,j)$, we define the probability of context $v_j$ to be generated by vertex $v_i$ as follows:\n\\begin{equation}\np_2(v_j | v_i) = \\frac{exp(\\dotprod{u'_j}{u_i})}{\\displaystyle\\sum_{k=1}^{|V|} exp(\\dotprod{u'_k}{u_i})} ~ .\n\\end{equation}\nThe corresponding empirical probability can be obtained as:\n\\begin{equation}\n\\label{eq:empirical_p2}\n\\hat{p}_2(v_j | v_i) = \\frac{w_{ij}}{W_i} ~ ,\n\\end{equation}\nwhere $W_i = \\displaystyle\\sum_{ v_j: (v_i, v_j) \\in E } w_{ij}$.\nIn order to preserve the second order proximity, we aim to make conditional probability distribution of contexts, $p_2(\\cdot|v_i)$, to be close to empirical probability distribution $\\hat{p}_2(\\cdot|v_i)$, by minimizing the following objective function:\n\\begin{equation}\nO_2 = \\sum_{v_i \\in V} \\lambda_i \\cdot d (\\hat{p}_2(\\cdot | v_i), p_2(\\cdot | v_i)) ~ ,\n\\end{equation}\nwhere $d(\\cdot, \\cdot)$ is a measure of dissimilarity between two probability distributions, and $\\lambda_i$ represents the importance of vertex $v_i$ (e.g., computed using PageRank algorithm). In this work, for simplicity, we set $\\lambda_i$ to be the degree of vertex $v_i$. Using KL-divergence as before, the objective function for the second order embeddings can be rewritten as:\n\\begin{equation}\nO_2 = \\sum_{v_i \\in V} \\lambda_i \\cdot \n\\sum_{ v_j: (v_i, v_j) \\in E } \\hat{p}_2(v_j | v_i) \\log \\bigg( \\frac{p_2(v_j|v_i)}{\\hat{p}_2(v_j | v_i)} \\bigg) ~ .\n\\end{equation}\n\nUsing Figure \\ref{fig:CompanyNetworkEmbeddings}, we can now explain how the feature is constructed for each member. After optimizing for $O_1$ and $O_2$ individually using gradient descent, we now have two vectors for each vertex of the graph (i.e. in this case the company). A company can now be represented as a single vector by concatenating the first and second order embeddings. This represents each company on a single vector space. Each query and member can be represented by a bag of companies, i.e. a query can contain multiple companies referenced in the search terms and a member could have worked at multiple companies which is manifested on the profile. 
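\n\nA minimal sketch of the resulting feature construction (hypothetical embedding dictionary; mean pooling followed by cosine similarity, the variant evaluated in \\S\\ref{sec:exp}) could read:\n\\begin{verbatim}\nimport numpy as np\n\ndef pooled_vector(entity_ids, embedding):\n    # embedding maps an entity id to its concatenated first- and\n    # second-order vectors; a query or a member is a bag of such ids.\n    # Assumes at least one id has a learned vector.\n    vecs = [embedding[e] for e in entity_ids if e in embedding]\n    return np.mean(vecs, axis=0)\n\ndef similarity_feature(query_ids, member_ids, embedding):\n    q = pooled_vector(query_ids, embedding)\n    m = pooled_vector(member_ids, embedding)\n    # cosine similarity, used as one additional feature in the ranker\n    return float(np.dot(q, m) \/ (np.linalg.norm(q) * np.linalg.norm(m)))\n\\end{verbatim}\n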
Thus, with a simple pooling operation (max-pooling or mean-pooling) over the bag of companies, we can represent each query and member as a point on the vector space. A similarity function between the two vector representations can be used as a feature in ranking.\n\n\\subsubsection{Supervised Embeddings} \\label{subsubsec:supervised_emb}\nIn this section, we explain how to train the entity embeddings in a supervised manner. We first collect the training data from candidates recommended to the recruiters (with the inMail accept events as the positive labels) within the LinkedIn Recruiter product, and then learn the feature representations for the entities guided by the labeled data. For this purpose, we adopted and extended \\emph{Deep Semantic Structured Models} (DSSM) based learning architecture~\\cite{Huang13}. In this scheme, document and query text are modeled through separate neural layers and crossed before final scoring, optimizing for the search engagement as a positive label. Regarding features, the DSSM model uses the query and document text and converts them to character trigrams, then utilizes these as inputs to the model. An example character trigram of the word \\emph{java} is given as \\{\\#ja, jav, ava, va\\#\\}. This transformation is also called \\emph{word-hashing} and instead of learning a vector representation (i.e. embedding) for the entire word, this technique provides representations for each character trigram. In this section we extend this scheme and add categorical representations of each type of entity as inputs to the DSSM model.\n\nWe illustrate our usage of word-hashing through an example. Suppose that a query has the title id $t_i$ selected as a facet, and contains the search box keyword, \\emph{java}. We process the text to generate the following trigrams: \\{\\#ja, jav, ava, va\\#\\}. Next, we add the static standardized ids corresponding to the selected entities ($t_i$, in this example) as inputs to the model. We add entities from the facets to the existing model, since text alone is not powerful enough to encode the semantics. After word hashing, a multi-layer non-linear projection (consisting of multiple fully connected layers) is performed to map the query and the documents to a common semantic representation. Finally, the similarity of the document to the query is calculated using a vector similarity measure (e.g., cosine similarity) between their vectors in the new learned semantic space. We use stochastic gradient descent with back propagation to infer the network coefficients. \n\nOutput of the model is a set of representations (i.e. dictionary) for each entity type (e.g. title, skill) and network architecture that we re-use during the inference. Each query and member can be represented by a bag of entities, i.e. a query can contain multiple titles and skills referenced in the search terms and a member could have multiple titles and skills which are manifested on the profile. The lookup tables learned during training and network coefficients are used to construct query and document embeddings. The two arms of DSSM corresponds to the supervised embeddings of the query and the document respectively. We then use the similarity measured by the distance of these two vectors (e.g. cosine) as a feature in the learning to rank model.\n\nWe used DSSM models over other deep learning models and nonlinear models for the following reasons. 
First, DSSM enables projection of recruiter queries (query) and member profiles (document) into a common low-dimensional space, where relevance of the query and the document can be computed as the distance between them. This is important for talent search models, as the main goal is to find the match between recruiter queries and member profiles. Secondly, DSSM uses word hashing, which enables handling large vocabularies, and results in a scalable semantic model.\n\\subsection{Online System Architecture} \n\\label{subsec:arch}\nFigure~\\ref{fig:arch} presents the online architecture of the proposed talent search ranking system, which also includes the embedding step (\\S\\ref{subsec:representation}). We designed our architecture such that the member embeddings are computed offline, but the query embeddings are computed at run time. We made these choices for the following reasons: (1) since a large number of members may match a query, computing the embeddings for these members at run time would be computationally expensive, and, (2) the queries are typically not known ahead of time, and hence the embeddings need to be generated online. Consequently, we chose to include member embeddings as part of the forward index containing member features, which is generated periodically by an offline workflow (not shown in the figure). We incorporated the component for generating query embeddings as part of the online system. Our online recommendation system consists of two services:\n\\begin{enumerate}\n\\item \\textbf{Retrieval Service:} This service receives a user query, generates the candidate set of members that match the criteria specified in the query, and computes an initial scoring of the retrieved candidates using a simple, first-pass model. These candidates, along with their features, are retrieved from a distributed index and returned to the scoring\/ranking service. The features associated with each member can be grouped into two categories:\n\\begin{itemize}\n\\item \\emph{Explicit Features:} These features correspond to fields that are present in a member profile, e.g., current and past work positions, education, skills, etc.\n\\item \\emph{Derived Features:} These features could either be derived from a member's profile (e.g., implied skills), or generated by an external algorithm (e.g., embedding for a member (\\S\\ref{subsec:representation})).\n\\end{itemize}\nThe retrieval service is built on top of LinkedIn's Galene search platform~\\cite{galene_engine}, which handles the selection of candidates matching the query, and the initial scoring\/pruning of these candidates, in a distributed fashion.\n\n\\item \\textbf{Scoring\/Ranking Service:} This component is responsible for the second-pass ranking of candidates corresponding to each query, and returning the ranked list of candidates to the front-end system for displaying in the product. Given a query, this service fetches the matching candidates, along with their features, from the retrieval service, and in parallel, computes the vector embedding for the query. Then, it performs the second-pass scoring of the candidates (which includes generation of similarity features based on query and member embeddings (\\S\\ref{subsec:representation})) and returns the top ranked results. 
The second-pass scoring can be performed either by a deep learning based model (\\S\\ref{subsec:deepmodel}), or any other machine learned model (e.g., a GBDT model, as discussed in \\S\\ref{subsec:currentmodels}), periodically trained and updated as part of an offline workflow (not shown in the figure).\n\n\\end{enumerate}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.0in]{block-diagram.pdf}\n\\vspace{-0.1in}\n\\caption{Online System Architecture for Search Ranking}\n\\label{fig:arch}\n\\end{figure}\n\n\\section{Methodology} \\label{sec:methodology}\nIn this section, we present our methodology which focuses on two main aspects:\n\\begin{itemize}\n\\item Learning of deep models to estimate likelihood of the two-way interest (inMail accept) between the candidate and the recruiter,\n\\item Learning of supervised and unsupervised embeddings of the entities in the talent search domain.\n\\end{itemize}\nIn Table~\\ref{tab:notations}, we present the notation used in the rest of the section. Note that the term $example$ refers to a candidate that is presented to the recruiter.\n\\begin{table}[!ht]\n\\small\n\\caption{Notations}\n\\vspace{-0.15in}\n\\begin{tabular}{c|c}\n\\hline\\hline\n\\textbf{Notation} & \\textbf{Represents} \\\\ \\hline\\hline\n$n$ & size of a training set \\\\ \\hline\n$\\mathbf{x}_i$ &feature vector for $i^\\text{th}$ example \\\\ \\hline\n$y_i$ & binary response for $i^\\text{th}$ example \\\\ & (inMail accept or not) \\\\ \\hline\n$s_i$ & the search session to which \\\\ & $i^\\text{th}$ example belongs \\\\ \\hline\ntuple $(\\mathbf{x}_i, y_i, s_i)$ & $i^\\text{th}$ example in the training set \\\\ \\hline\n$\\mathbf{w}$ & weight vector \\\\ \\hline\n$\\dotprod{\\cdot}{\\cdot}$ & dot product \\\\ \\hline\n$\\psi(\\cdot)$ & neural network function \\\\ \\hline\n$u_j \\in \\mathbb{R}^d$ & $d$ dimensional \\\\ & vector representation of entity j\n\\\\ \\hline \\hline\n\\end{tabular}\n\\label{tab:notations}\n\\vspace{-0.12in}\n\\end{table}\n\n\n\\input{methodology_subsec_1}\n\n\n\\input{methodology_subsec_2}\n\n\n\\input{methodology_subsec_3}\n\n\n\n\\section{Background and Problem Setting}\\label{sec:problem}\nWe next provide a brief overview of the {\\em LinkedIn Recruiter} product and existing ranking models, and formally present the talent search ranking problem.\n\n\\subsection{Background}\nLinkedIn is the world's largest professional network with over 500 million members world-wide. Each member of LinkedIn has a profile page that serves as a professional record of achievements and qualifications, as shown in Figure~\\ref{fig:profile}. A typical member profile contains around 5 to 40 structured and unstructured fields including title, company, experience, skills, education, and summary, amongst others.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.3in]{hakanprofile.png}\n\\vspace{-0.05in}\ns\\caption{An example LinkedIn profile page}\n\\label{fig:profile}\n\\vspace{-0.1in}\n\\end{figure}\n\nIn the context of talent search, LinkedIn members can be divided into two categories: candidates (job seekers) and recruiters (job providers). Candidates look for suitable job opportunities, while recruiters seek candidates to fill job openings. In this work, we address the modeling challenges in the \\emph{LinkedIn Recruiter} product, which helps recruiters find and connect with the right candidates.\n \nConsider an example of a recruiter looking for a software engineer with machine learning background. 
Once the recruiter types keywords \\emph{software engineer} and \\emph{machine learning} as a free text query, the recruiter search engine first standardizes them into the title, \\emph{software engineer} and the skill, \\emph{machine learning}. Then, it matches these standardized entities with standardized member profiles, and the most relevant candidate results are presented as in Figure~\\ref{fig:serp}. In order to find a desired candidate, the recruiter can further refine their search criteria using facets such as title, location, and industry. For each result, the recruiter can perform the following actions (shown in the increasing order of recruiter's interest for the candidate):\n\\begin{enumerate}\n\\item View a candidate profile,\n\\item Bookmark a profile for detailed evaluation later,\n\\item Save a profile to their current hiring project (as a potential fit), and,\n\\item Send an inMail (message) to the candidate.\n\\end{enumerate}\nUnlike traditional search and recommendation systems which solely focus on estimating how relevant an item is for a given query, the talent search domain requires mutual interest between the recruiter and the candidate in the context of the job opportunity. In other words, we simultaneously require that a candidate shown must be relevant to the recruiter's query, and that the candidate contacted by the recruiter must also show interest in the job opportunity. Therefore, we define a new action event, \\emph{inMail Accept}, which occurs when a candidate replies to an inMail from a recruiter with a positive response. Indeed, the key business metric in the Recruiter product is based on inMail Accepts and hence we use the fraction of top $k$ ranked candidates that received and accepted an inMail (viewed as \\emph{precision@k}$~^2$\\footnote{$^2$ ~ While metrics like \\emph{Normalized Discounted Cumulative Gain} \\cite{jarvelin_2002} are more commonly utilized for ranking applications, we have found precision@k to be more suitable as a business metric. Precision@k also aligns well with the way the results are presented in LinkedIn Recruiter application, where each page lists up to 25 candidates by default so that precision@25 is a measure of positive outcome in the first page.}) as the main evaluation measure for our experiments.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3.3in]{serp.png}\n\\vspace{-0.05in}\n\\caption{A recruiter seeks a software engineer with background in machine learning}\n\\label{fig:serp}\n\\vspace{-0.1in}\n\\end{figure}\n\n\\subsection{Current Models} \\label{subsec:currentmodels}\nThe current talent search ranking system functions as follows~\\cite{Thuc16,thuc15pes}. In the first step, the system retrieves a candidate set of a few thousand members from over 500 million LinkedIn members, utilizing hard filters specified in the search query. In particular, a query request is created based on the standardized fields extracted from the free form text in the query, as well as the selected facets (such as skill, title, and industry). This query request is then issued to the distributed search service tier, which is built on top of LinkedIn's \\emph{Galene} search platform~\\cite{galene_engine}. A list of candidates is generated based on the matching features (such as title or skill match). In the second step, the search ranker scores the resulting candidates using a ranking model, and returns the top ranked candidate results. 
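\n\nAs an aside, the \\emph{precision@k} measure introduced above is straightforward to evaluate offline once a ranked list of candidates and the inMail accept labels are available. The following minimal Python sketch is purely illustrative (the function and variable names are ours, not part of the production system): it returns the fraction of the top $k$ ranked candidates that received and accepted an inMail.\n\\begin{verbatim}\ndef precision_at_k(ranked_ids, accepted_ids, k=25):\n    # ranked_ids: candidate ids ordered by the ranking model\n    # accepted_ids: set of ids whose inMail got a positive reply\n    top_k = ranked_ids[:k]\n    if not top_k:\n        return 0.0\n    return sum(1 for c in top_k if c in accepted_ids) \/ len(top_k)\n\n# toy example: 3 of the top 5 candidates accepted an inMail\nprint(precision_at_k(['a', 'b', 'c', 'd', 'e'], {'a', 'c', 'e'}, k=5))\n\\end{verbatim}\nThe default $k=25$ mirrors the page size mentioned in the footnote above.\n\n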
In this paper, we focus on the ranking model used in the second step.\n\nTree ensembles, viewed as non-linear models capable of discovering ``interaction features'', have been studied extensively both in academic and industrial settings. For LinkedIn Talent Search, we have studied the effectiveness of ensemble trees and found that GBDT (Gradient Boosted Decision Trees) models \\cite{friedman_2001,xgboost} outperform the best logistic regression models on different data sets as measured by area under ROC curve (AUC) and precision@k metrics in offline experiments. More importantly, online A\/B tests have shown significant improvement across all key metrics based on inMail Accepts (Table~\\ref{tab:linearvsxgboost}). Having described the current state of the talent search ranking models, we next formally present the problem setting and the associated challenges.\n\n\\begin{table}[!ht]\n\\small\n\\caption{Results of an online A\/B test we performed over a period of three weeks in 2017, demonstrating the precision improvement for the gradient boosted decision tree model compared to the baseline logistic regression model for LinkedIn Talent Search. We compute precision as the fraction of the top ranked candidate results that received and accepted an inMail, within three days of the inMail being sent by the recruiter (\\emph{Prec@k} is over the top $k$ candidate results), and show the relative lift in precision. We note that these improvements are impressive based on our experience in the domain.}\n\\vspace{-0.15in}\n\\begin{tabular}{c|c|c|c}\n\\hline\\hline\n\\textbf{ } & \\textbf{Prec@5} & \\textbf{Prec@25} & \\textbf{Overall precision} \\\\\\hline\\hline\nLift & +7.5\\% & +7.4\\% & +5.1\\% \\\\\\hline\np-value & 2.1e-4 & 4.8e-4 & 0.01 \\\\\\hline\n\\end{tabular}\n\\label{tab:linearvsxgboost}\n\\vspace{-0.1in}\n\\end{table}\n\n\\subsection{Problem Setting and Challenges}\n\\begin{definition}\nGiven a search query consisting of search criteria such as title, skills, and location, provided by the recruiter or the hiring manager, the goal of \\emph{Talent Search Ranking} is to:\n\\begin{enumerate}\n\\item Determine a set of candidates strictly satisfying the specified search criteria (hard constraints), and,\n\\item Rank the candidates according to their utility for the recruiter, where the utility is the likelihood that the candidate would be a good fit for the position, and would be willing to accept the request (inMail) from the recruiter.\n\\end{enumerate}\n\\end{definition}\nAs discussed in \\S\\ref{subsec:currentmodels}, the existing ranking system powering \\\\LinkedIn Recruiter product utilizes a GBDT model due to its advantages over linear models. While GBDT provides quite a strong performance, it poses the following challenges:\n\\begin{enumerate}\n\\item It is quite non-trivial to augment a tree ensemble model with other trainable components such as embeddings for discrete features. Such practices typically require joint training of the model with the component\/feature, while the tree ensemble model assumes that the features themselves need not be trained.\n\\item Tree models do not work well with sparse id features such as skill ids, company ids, and member ids that we may want to utilize for talent search ranking. 
Since a sparse feature is non-zero for a relatively small number of examples, it has a small likelihood to be chosen by the tree generation at each boosting step, especially since the learned trees are shallow in general.\n\\item Tree models lack flexibility of model engineering. It might be desirable to use novel loss functions, or augment the current objective function with other terms. Such modifications are not easily achievable with GBDT models, but are relatively straight-forward for deep learning models based on differentiable programming. A neural network model with a final (generalized) linear layer also makes it easier to adopt approaches such as transfer learning and online learning.\n\\end{enumerate}\nIn order to overcome these challenges, we explore the usage of neural network based models, which provide sufficient flexibility in the design and model specification.\n\nAnother significant challenge pertains to the sheer number of available entities that a recruiter can include as part of their search, and how to utilize them for candidate selection as well as ranking. For example, the recruiter can choose from tens of thousands of LinkedIn's standardized skills. Since different entities could be related to each other (to a varying degree), using syntactic features (e.g., fraction of query skills possessed by a candidate) has its limitations. Instead, it is more desirable to utilize semantic representations of entities, for example, in the form of low dimensional embeddings. Such representations allow for numerous sparse entities to be better incorporated as part of a machine learning model. Therefore, in this work, we also investigate the application of representation learning for entities in the talent search domain.\n\n\\section{Related Work} \\label{sec:related}\nUse of neural networks on top of curated features for ranking is an established idea which dates back at least to \\cite{caruana1996using}, wherein simple 2-layer neural networks are used for ranking the risk of mortality in a medical application. In \\cite{burges_2005}, the authors use neural networks together with the logistic pairwise ranking loss, and demonstrate that neural networks outperform linear models that use the same loss function. More recently, the authors of \\cite{cheng2016wide} introduced a model that jointly trains a neural network and a linear classifier, where the neural network takes dense features, and the linear layer incorporates cross-product features and sparse features.\n\nResearch in deep learning algorithms for search ranking has gained momentum especially since the work on Deep Structured Semantic Models (DSSM) \\cite{Huang13}. DSSM involves learning the semantic similarity between a pair of text strings, where a sparse representation called tri-letter grams is used. The C-DSSM model \\cite{Shen14} extends DSSM by introducing convolution and max-pooling after word hashing layer to capture local contextual features of words. The Deep Crossing model \\cite{Shan16} focuses on sponsored search (ranking ads corresponding to a query), where there is more contextual information about the ads. Finally, two other popular deep ranking models that have been used for search ranking are the ARC-I \\cite{Hu14} (a combination of C-DSSM and Deep Crossing) and Deep Relevance Matching Model \\cite{Guo16} (which introduces similarity histogram and query gating concepts).\n\nThere is extensive work on generating unsupervised embeddings. 
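\n\nAs a side note on the DSSM-style models just discussed, the word hashing (tri-letter gram) representation can be illustrated with a short Python sketch. The sketch below is only meant to convey the idea and is not taken from any of the cited implementations: each word is padded with boundary markers and decomposed into letter trigrams, and a query is then represented as a sparse bag of such trigrams.\n\\begin{verbatim}\ndef tri_letter_grams(word):\n    # letter trigrams with '#' boundary markers,\n    # e.g. 'good' -> ['#go', 'goo', 'ood', 'od#']\n    marked = '#' + word.lower() + '#'\n    return [marked[i:i + 3] for i in range(len(marked) - 2)]\n\ndef hashed_bag(text):\n    # sparse bag of tri-letter grams for a whitespace-tokenized query\n    counts = {}\n    for w in text.split():\n        for g in tri_letter_grams(w):\n            counts[g] = counts.get(g, 0) + 1\n    return counts\n\nprint(hashed_bag('machine learning'))\n\\end{verbatim}\nBecause the number of distinct letter trigrams is much smaller than the vocabulary size, this representation keeps the input layer of such models at a manageable dimension.\n\n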
The notion of word embeddings (\\textit{word2vec}) was proposed in \\cite{Mikolov13}, inspiring several subsequent \\textit{*2vec} algorithms. Several techniques have been proposed for graph embeddings, including classical approaches such as multidimensional scaling (MDS) \\cite{Cox00}, IsoMap \\cite{Tenenbaum00}, LLE \\cite{Roweis00}, and Laplacian Eigenmap \\cite{Belkin02}, and recent approaches such as graph factorization \\cite{Ahmed13} and DeepWalk \\cite{Perozzi14}. To generate embedding representation for the entities using the LinkedIn Economic Graph, we adopt \\emph{Large-Scale Information Network Embeddings} approach~\\cite{Tang15}, by changing how we construct the graph. While the graph used in~\\cite{Tang15} considers the members of the social network as vertices, we instead define a weighted graph over the entities (e.g., skill, title, company), and use the number of members sharing the same entity on their profile to induce an edge weight between the vertices. Thus, we were able to reduce the size of the problem by a few orders of magnitude, thereby allowing us to scale the learning to all entities in the Economic Graph.\nFinally, a recent study presents a unified view of different network embedding methods like LINE and node2vec as essentially performing implicit matrix factorizations, and proposes NetMF, a general framework to explicitly factorize the closed-form matrices that network embeddings methods including LINE and word2vec aim to approximate \\cite{NetMF}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec_introduction}\n\nThere are literally hundreds of methods, techniques, and heuristics that can be applied to the determination of hierarchical and nonhierarchical clusters in finite metric (thus symmetric) spaces -- see, e.g., \\cite{RuiWunsch05}. Methods to identify clusters in a network of asymmetric dissimilarities, however, are rarer. A number of approaches reduce the problem to symmetric clustering by introducing symmetrizations that are justified by a variety of heuristics; e.g., \\cite{ZhouEtal05}. An idea that is more honest to the asymmetry in the dissimilarity matrix is the adaptation of spectral clustering \\cite{Chung97, spectral-clustering, NgEtal02} to asymmetric graphs by using a random walk perspective to define the clustering algorithm\\cite{PentneyMeila05} or through the minimization of a weighted cut \\cite{MeilaPentney07}. This relative rarity is expected because the intuition of clusters as groups of nodes that are closer to each other than to the rest is difficult to generalize when nodes are close in one direction but far apart in the other. \n\nTo overcome this generic difficulty we postulate two particular intuitive statements in the form of the axioms of value and transformation that have to be satisfied by allowable hierarchical clustering methods. The Axiom of Value states that for a network with two nodes the nodes are clustered together at the maximum of the two dissimilarities between them. The Axiom of Transformation states that if we consider a network and reduce all pairwise dissimilarities, the level at which two nodes become part of the same cluster is not larger than the level at which they were clustered together in the original network. 
In this paper we study the space of methods that satisfy the axioms of value and transformation.\n\nAlthough the theoretical foundations of clustering are not as well developed as its practice \\cite{vonlux-david, sober,science_art}, the foundations of clustering in metric spaces have been developed over the past decade \\cite{ben-david-ackermann,ben-david-reza, CarlssonMemoli10, kleinberg}. Of particular relevance to our work is the case of hierarchical clustering where, instead of a single partition, we look for a family of partitions indexed by a resolution parameter; see e.g., \\cite{lance67general}, \\cite[Ch. 4]{clusteringref}, and \\cite{ZhaoKarypis05}. In this context, it has been shown in \\cite{clust-um} that single linkage \\cite[Ch. 4]{clusteringref} is the unique hierarchical clustering method that satisfies symmetric versions of the axioms considered here and a third axiom stating that no clusters can be formed at resolutions smaller than the smallest distance in the given data. One may think of the work presented here as a generalization of \\cite{clust-um} to the case of asymmetric (non-metric) data.\n\nOur first contribution is the derivation of two hierarchical clustering methods that abide to the axioms of value and transformation. In reciprocal clustering closeness is propagated through links that are close in both directions, whereas in nonreciprocal clustering closeness is allowed to propagate through loops (Section \\ref{sec_reicprocal_and_nonreciprocal}). We further prove that any clustering method that satisfies the value and transformation axioms lies between reciprocal and nonreciprocal clustering in a well defined sense (Section \\ref{sec_extremal_ultrametrics}). \n\n\n\n\\section{Preliminaries}\\label{sec_preliminaries}\n\nConsider a finite set of points $X$ jointly specified with a dissimilarity function $A_X$ to define the network $N=(X,A_X)$. The set X represent the nodes in the network. The dissimilarity $A_X(x,x')$ between nodes $x\\in X$ and $x'\\in X$ is assumed to be non negative for all pairs $(x,x')$ and null if and only if $x=x'$. However, dissimilarity values $A_X(x,x')$ need not satisfy the triangle inequality and, more consequential for the problem considered here, may be asymmetric in that it is possible to have $A_X(x,x')\\neq A_X(x',x)$. We further define $\\ccalN$ as the set of all possible networks $N$.\n\nA clustering of the set $X$ is a partition $P_X$ defined as a collection of sets $P_X=\\{B_1,\\ldots, B_J\\}$ that are nonintersecting, $B_i\\cap B_j =\\emptyset$ for $i\\neq j$, and are required to cover $X$, $\\cup_{i=1}^{J} B_i = X$. In this paper we focus on hierarchical clustering methods whose outcomes are not single partitions $P_X$ but nested collections $D_X$ of partitions $D_X(\\delta)$ indexed by a resolution parameter $\\delta\\geq0$. For a given $D_X$, whenever at resolution $\\delta$ nodes $x$ and $x'$ are in the same cluster of $D_X(\\delta)$, we say that they are equivalent at resolution $\\delta$ and write $x\\sim_{D_X(\\delta)} x'$. 
The nested collection $D_X$ is termed a \\emph{dendrogram} and is required to satisfy the following two properties plus an additional technical property (see \\cite{clust-um}):\n\n\\myparagraph{(D1) Boundary conditions} For $\\delta=0$ the partition $D_X(0)$ clusters each $x\\in X$ into a separate singleton and for some $\\delta_0$ sufficiently large $D_X(\\delta_0)$ clusters all elements of $X$ into a single set,\n\\begin{align}\\label{eqn_dendrogram_boundary_conditions}\n D_X(0) = \\Big\\{ \\{x\\}, \\, x\\in X\\Big\\}, \\,\\,\\,\n D_X(\\delta_0) = \\Big\\{ X \\Big\\}\\ \\forsome\\ \\delta_0. \\nonumber\n\\end{align}\\vspace{-0.15in}\n\\myparagraph{(D2) Resolution} As $\\delta$ increases clusters can be combined but not separated. I.e., for any $\\delta_1 < \\delta_2$ if we have $x\\sim_{D_X(\\delta_1)} x'$ for some pair of points we must have $x\\sim_{D_X(\\delta_2)} x'$.\n\n\\medskip\\noindent As the resolution $\\delta$ increases, partitions $D_X(\\delta)$ become coarser implying that dendrograms are equivalent to trees; see figs. \\ref{fig_axioms_value_influence} and \\ref{fig_axiom_of_transformation}\n\nDenoting by $\\ccalD$ the space of all dendrograms we define a hierarchical clustering method as a function $\\ccalH:\\ccalN \\to \\ccalD$ from the space of networks $\\ccalN$ to the space of dendrograms $\\ccalD$. For the network $N_X=(X,A_X)$ we denote by $D_X=\\ccalH(X,A_X)$ the output of clustering method $\\ccalH$. When dissimilarities $A_X$ conform to the definition of a finite metric space, it is possible to show that there exists a hierarchical clustering method satisfying axioms similar to the ones proposed in this paper \\cite{clust-um}. Furthermore, this method is unique and corresponds to single linkage. For resolution $\\delta$, single linkage makes $x$ and $x'$ part of the same cluster if they can be linked through a path of cost not exceeding $\\delta$. Formally, equivalence classes at resolution $\\delta$ in the single linkage dendrogram $SL_X$ are defined as\n\\begin{equation}\\label{eqn_single_linkage}\n x\\sim_{SL_X(\\delta)} x' \\iff \n \\min_{C(x,x')} \\,\\, \\max_{i | x_i\\in C(x,x')}A_X(x_i,x_{i+1})\\leq\\delta.\n\\end{equation}\nIn \\eqref{eqn_single_linkage}, $C(x,x')$ denotes a \\emph{chain} between $x$ and $x'$, i.e., an ordered sequence of nodes connecting $x$ and $x'$. We interpret $\\max_{i|x_i\\in C(x,x')}A_X(x_i,x_{i+1})$ as the maximum dissimilarity cost we need to pay when traversing the chain $C(x,x')$. The right hand side of \\eqref{eqn_single_linkage} is this maximum cost for the best selection of the chain $C(x,x')$. Recall that in \\eqref{eqn_single_linkage} we are assuming metric data, which in particular implies $A_X(x_i,x_{i+1})=A_X(x_{i+1},x_i)$.\n\n\\vspace{-0.05in}\n\\subsection{Dendrograms as ultrametrics} \\label{sec_dendrograms_and_ultrametrics}\n\\vspace{-0.03in}\n\nDendrograms are convenient graphical representations but otherwise cumbersome to handle. A mathematically more convenient representation is to identify dendrograms with finite \\emph{ultrametric} spaces. An ultrametric defined on the space $X$ is a function $u_X: X \\times X \\to \\reals$ that satisfies the strong triangle inequality \n\\begin{equation}\\label{eqn_strong_triangle_inequality}\n u_X(x,x') \\leq \\max \\Big(u_X(x,x''), u_X(x'',x') \\Big),\n\\end{equation}\non top of the reflexivity $u_X(x,x')=u_X(x',x)$, non negativity and identity properties $u_X(x,x')=0 \\Leftrightarrow x=x'$. 
Hence, an ultrametric is a metric that satisfies \\eqref{eqn_strong_triangle_inequality}, a stronger version of the triangle inequality.\n\nAs shown in \\cite{clust-um}, it is possible to establish a bijective mapping between dendrograms and ultrametrics.\n\n\\begin{figure}\n \\centering\n \\centerline{\\input{two_node_network_dendrogram.tex}}\n\\caption{Axiom of value. Nodes in a two-node network cluster at the minimum resolution at which both can influence each other.}\n\\label{fig_axioms_value_influence}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\centerline{\\input{axiom_of_transformation.tex} }\n\\caption{Axiom of transformation. If network $N_X$ can be mapped to network $N_Y$ using a dissimilarity reducing map $\\phi$, nodes clustered together in $D_X(\\delta)$ at arbitrary resolution $\\delta$ must also be clustered in $D_Y(\\delta)$. For example, $x_1$ and $x_2$ are clustered together at resolution $\\delta'$, therefore $y_1$ and $y_2$ must also be clustered at this resolution.}\n\\label{fig_axiom_of_transformation}\n\\end{figure}\n\n\n\\vspace{-0.1in}\n\\begin{theorem}[\\cite{clust-um} ]\\label{theo_dendrograms_as_ultrametrics}\nFor a given dendrogram $D_X$ define $u_X(x,x')$ as the smallest resolution at which $x$ and $x'$ are clustered together\n\\begin{equation}\\label{eqn_theo_dendrograms_as_ultrametrics_10}\n u_X(x,x') := \\min \\Big\\{ \\delta\\geq 0, x\\sim_{D_X(\\delta)} x' \\Big\\}.\n\\end{equation}\nThe function $u_X$ is an ultrametric in the space $X$. Conversely, for a given ultrametric $u_X$ define the relation $\\sim_{U_X(\\delta)}$ as\n\\begin{equation}\\label{eqn_theo_dendrograms_as_ultrametrics_20}\n x \\sim_{U_X(\\delta)} x' \\iff u_X(x,x')\\leq \\delta.\n\\end{equation}\nThe relation $\\sim_{U_X(\\delta)}$ is an equivalence relation and the collection of partitions of equivalence classes induced by $\\sim_{U_X(\\delta)}$, i.e. $U_X(\\delta) :=\\big\\{X \\mod \\sim_{U_X(\\delta)}\\big\\}$, is a dendrogram.\n\\end{theorem}\n\n\\noindent Given the equivalence between dendrograms and ultrametrics established by \nTheorem \\ref{theo_dendrograms_as_ultrametrics} \nwe can think of hierarchical clustering methods $\\ccalH$ as inducing ultrametrics in the set of nodes $X$ based on dissimilarity functions $A_X$. The distance $u_X(x,x')$ induced by $\\ccalH$ is the minimum resolution at which $x$ and $x'$ are co-clustered by $\\ccalH$.\n\n\\section{Value and transformation} \\label{sec_axioms}\n\nTo study hierarchical clustering algorithms in the context of asymmetric networks, we start from two intuitive notions that we translate into the axioms of value and transformation. The Axiom of Value is obtained from considering a two-node network with dissimilarities $\\alpha$ and $\\beta$; see Fig. \\ref{fig_axioms_value_influence}. In this case, it makes sense for nodes $p$ and $q$ to be in separate clusters at resolutions $\\delta<\\max(\\alpha,\\beta)$. For these resolutions we have either no influence between the nodes, if $\\delta<\\min(\\alpha,\\beta)$, or unilateral influence from one node over the other, when $\\min(\\alpha,\\beta) \\leq \\delta<\\max(\\alpha,\\beta)$. In either case both nodes are different in nature. E.g., if we think of the network as a Markov chain, nodes $p$ and $q$ form separate classes. We thus require nodes $p$ and $q$ to cluster at resolution $\\delta=\\max(\\alpha,\\beta)$. This is somewhat arbitrary, as any number larger than $\\max(\\alpha, \\beta)$ would work. 
As a value claim, however, it means that the clustering resolution parameter $\\delta$ is expressed in the same units as the elements of the dissimilarity matrix. A formal statement in terms of ultrametric distances follows.\n\n\\myparagraph{(A1) Axiom of Value} Consider a two-node network $N=(X,A_X)$ with $X=\\{p,q\\}$, $A_X(p,q)=\\alpha$, and $A_X(q,p)=\\beta$. The ultrametric $(X,u_X)=\\ccalH(X,A_X)$ produced by $\\ccalH$ satisfies\n\\begin{equation}\\label{eqn_two_node_network_ultrametric}\n u_{X}(p,q) = \\max(\\alpha,\\beta).\n\\end{equation}\n\n\\vspace{-0.05in}\n\\medskip\\noindent The second restriction on the space of allowable methods $\\ccalH$ formalizes the expected behavior upon a modification of the dissimilarity matrix; see Fig. \\ref{fig_axiom_of_transformation}. Consider networks $N_X=(X,A_X)$ and $N_Y=(Y,A_Y)$ and denote by $D_X=\\ccalH(X,A_X)$ and $D_Y=\\ccalH(Y,A_Y)$ the corresponding dendrogram outputs. If we map all the nodes of the network $N_X=(X,A_X)$ into nodes of the network $N_Y=(Y,A_Y)$ in such a way that no pairwise dissimilarity is increased we expect the network to become more clustered. In terms of the respective clustering dendrograms we expect that nodes co-clustered at resolution $\\delta$ in $D_X$ are mapped to nodes that are also co-clustered at this resolution in $D_Y$. The Axiom of Transformation is a formal statement of this requirement as we introduce next.\n\n\\myparagraph{(A2) Axiom of Transformation} Consider two networks $N_X=(X,A_X)$ and $N_Y=(Y,A_Y)$ and a dissimilarity reducing map $\\phi:X\\to Y$, i.e. a map $\\phi$ such that for all $x,x' \\in X$ it holds $A_X(x,x')\\geq A_Y(\\phi(x),\\phi(x'))$. Then, the output ultrametrics $(X,u_X)=\\ccalH(X,A_X)$ and $(Y,u_Y)=\\ccalH(Y,A_Y)$ satisfy \n\\begin{equation}\\label{eqn_dissimilarity_reducing_ultrametric}\n u_X(x,x') \\geq u_Y(\\phi(x),\\phi(x')).\n\\end{equation} \n\n\\vspace{-0.05 in}\n\\medskip\\noindent A hierarchical clustering method $\\ccalH$ is admissible if it satisfies Axioms (A1) and (A2). Axiom (A1) states that units of the clustering resolution parameter $\\delta$ are the same units of the elements of the dissimilarity matrix. Axiom (A2) states that if we reduce dissimilarities, clusters may be combined but cannot be separated.\n\n\n\n\\section{Reciprocal and nonreciprocal clustering}\\label{sec_reicprocal_and_nonreciprocal}\n\nAn admissible clustering method satisfying axioms (A1)-(A2) can be constructed by considering the symmetric dissimilarity $\\bbarA_X(x,x') = \\max \\big(A_X(x,x'), A_X(x',x) \\big)$, for all $x,x' \\in X$. This effectively reduces the problem to clustering of symmetric data, a scenario in which single linkage \\eqref{eqn_single_linkage} is known to satisfy axioms similar to (A1)-(A2), \\cite{clust-um}. Drawing upon this connection we define the \\emph{reciprocal} clustering method $\\ccalH^{R}$ with ultrametric outputs $(X,u^R_X)=\\ccalH^R(X,A_X)$ as the one for which the ultrametric $u^R_X(x,x')$ between nodes $x$ and $x'$ is given by\n\\begin{align}\\label{eqn_reciprocal_clustering} \n u^{R}_X(x,x')\n &= \\min_{C(x,x')} \\, \\max_{i | x_i\\in C(x,x')}\n \\bbarA_X(x_i,x_{i+1}).\n\\end{align} \nAn illustration of the definition in \\eqref{eqn_reciprocal_clustering} is shown in Fig. \\ref{fig_reciprocal_path}. We search for chains $C(x,x')$ linking nodes $x$ and $x'$. For a given chain we walk from $x$ to $x'$ and determine the maximum dissimilarity, in either the forward or backward direction, across all the links in the chain. 
The reciprocal ultrametric $u^{R}_X(x,x')$ between nodes $x$ and $x'$ is the minimum of this value across all possible chains. Recalling the equivalence of dendrograms and ultrametrics in Theorem \\ref{theo_dendrograms_as_ultrametrics} we know that the dendrogram produced by reciprocal clustering clusters $x$ and $x'$ together for resolutions $\\delta\\geq u^{R}_X(x,x')$. Combining this latter observation with \\eqref{eqn_reciprocal_clustering} and denoting by $R_X$ the reciprocal dendrogram we write the reciprocal equivalence classes as\n\\begin{equation}\\label{eqn_reciprocal_clustering_dendrogram} \n x\\sim_{R_X(\\delta)} x' \\iff \n \\min_{C(x,x')} \\, \\max_{i | x_i\\in C(x,x')}\\bbarA_X(x_i,x_{i+1})\\leq\\delta.\n\\end{equation}\nComparing \\eqref{eqn_reciprocal_clustering_dendrogram} with the definition in \\eqref{eqn_single_linkage}, we see that reciprocal clustering is equivalent to single linkage for the network $N=(X,\\bbarA_X)$.\n\nFor the method $\\ccalH^R$ specified in \\eqref{eqn_reciprocal_clustering} to be a proper hierarchical clustering method we need to show that $u^{R}$ is an ultrametric. For $\\ccalH^R$ to be admissible it needs to satisfy axioms (A1)-(A2). Both of these properties are true as stated in the following proposition. \n\n\n\\begin{figure}\n \\centering\n \\centerline{\\input{reciprocal_path.tex} }\n\\caption{Reciprocal clustering. Nodes $x$ and $x'$ are clustered together at resolution $\\delta$ if they can be joined with a (reciprocal) chain whose maximum dissimilarity is smaller than $\\delta$ in both directions [cf. \\eqref{eqn_reciprocal_clustering}]. Of all methods that satisfy the axioms of value and transformation, the reciprocal ultrametric is the largest between any pair of nodes.}\\label{fig_reciprocal_path}\n\\vspace{-0.05in}\n\\end{figure}\n\n\\vspace{-0.1in}\n\\begin{proposition}\\label{theo_reciprocal_axioms}\nThe reciprocal clustering method $\\ccalH^R$ is valid and admissible. I.e., $u^R$ as defined by \\eqref{eqn_reciprocal_clustering} is a valid ultrametric and the method satisfies axioms (A1)-(A2).\n\\end{proposition}\n\n\\begin{myproof} See \\cite{Anonymous}. \\end{myproof}\n\n\\begin{figure}\n \\centering\n \\centerline{\\input{nonreciprocal_path.tex}}\n\\caption{Nonreciprocal clustering. Nodes $x$ and $x'$ are co-clustered at resolution $\\delta$ if they can be joined in both directions with possibly different (nonreciprocal) chains of maximum dissimilarity not greater than $\\delta$ [cf. \\eqref{eqn_nonreciprocal_clustering}]. The nonreciprocal ultrametric is the smallest among all that abide to the value and transformation axioms.}\n\\label{fig_nonreciprocal_path}\n\\vspace{-0.05in}\n\\end{figure}\n\nIn reciprocal clustering, nodes $x$ and $x'$ are joined together if we can go back and forth from $x$ to $x'$ at a maximum cost $\\delta$ through the same chain. In \\emph{nonreciprocal} clustering we relax the restriction that the chain achieving the minimum cost must be the same in both directions and cluster nodes $x$ and $x'$ together if there are, possibly different, chains linking $x$ to $x'$ and $x'$ to $x$. 
To state this definition in terms of ultrametrics, consider a given network $N_X=(X,A_X)$ and define the unidirectional minimum chain cost\n\\begin{align}\\label{eqn_nonreciprocal_chains} \n \\tdu^{NR}_X(x, x') = \\min_{C(x,x')} \\,\\,\n \\max_{i | x_i\\in C(x,x')} A_X(x_i,x_{i+1}).\n\\end{align} \nWe define the nonreciprocal clustering method $\\ccalH^{NR}$ with ultrametric outputs $(X,u^{NR}_X)=\\ccalH^{NR}(X,A_X)$ as the one for which the ultrametric $u^{NR}_X(x,x')$ between nodes $x$ and $x'$ is given by the maximum of the unidirectional minimum chain costs $\\tdu^{NR}_X(x, x')$ and $\\tdu^{NR}_X(x', x)$ in each direction,\n\\begin{equation}\\label{eqn_nonreciprocal_clustering} \n u^{NR}_X(x,x') = \\max \\Big( \\tdu^{NR}_X(x,x'),\\ \\tdu^{NR}_X(x',x )\\Big).\n\\end{equation} \nAn illustration of the definition in \\eqref{eqn_nonreciprocal_clustering} is shown in Fig. \\ref{fig_nonreciprocal_path}. We consider forward chains $C(x,x')$ going from $x$ to $x'$ and backward chains $C(x',x)$ going from $x'$ to $x$. For each of these chains we determine the maximum dissimilarity across all the links in the chain. We then search independently for the best forward chain $C(x,x')$ and the best backward chain $C(x',x)$ that minimize the respective maximum dissimilarities across all possible chains. The nonreciprocal ultrametric $u^{NR}_X(x,x')$ between nodes $x$ and $x'$ is the maximum of these two minimum values.\n\nAs is the case with reciprocal clustering we can verify that $u^{NR}$ is a properly defined ultrametric. We also show that $\\ccalH^{NR}$ is admissible in the following proposition.\n\n\n\\vspace{-0.1in}\n\\begin{proposition}\\label{theo_nonreciprocal_axioms}\nThe nonreciprocal clustering method $\\ccalH^{NR}$ is valid and admissible. That is, $u^{NR}$ as defined by \\eqref{eqn_nonreciprocal_clustering} is a valid ultrametric and the method satisfies axioms (A1)-(A2).\n\\end{proposition}\n\n\\begin{myproof} See \\cite{Anonymous}. \\end{myproof}\n\n\n \n \n\\vspace{-0.1in}\n\\begin{remark} \\label{rem_recip_nonrecip} \\normalfont Reciprocal and nonreciprocal clustering are different in general.\nHowever, for symmetric networks, they are equivalent and coincide with single linkage as defined by \\eqref{eqn_single_linkage}. To see this, note that in the symmetric case $\\tdu^{NR}_X(x, x')= \\tdu^{NR}_X(x', x)$. Therefore, from \\eqref{eqn_nonreciprocal_clustering}, $u^{NR}_X(x, x')= \\tdu^{NR}_X(x, x')$. Comparing \\eqref{eqn_nonreciprocal_chains} and \\eqref{eqn_reciprocal_clustering} we get the equivalence of nonreciprocal and reciprocal clustering by noting that dissimilarities $A_X=\\bbarA_X$ for the symmetric case. By further comparison with \\eqref{eqn_single_linkage} the equivalence with single linkage follows. \\end{remark}\n\n\n\\section{Extremal ultrametrics}\\label{sec_extremal_ultrametrics}\n\nGiven that we have constructed two admissible methods satisfying axioms (A1)-(A2), the question arises of whether these two constructions are the only possible ones and if not whether they are special in some sense, if any. One can find constructions other than reciprocal and nonreciprocal clustering that satisfy axioms (A1)-(A2). However, we prove in this section that reciprocal and nonreciprocal clustering are a peculiar pair in that all possible admissible clustering methods are contained between them in a well defined sense. To explain this sense properly, observe that since reciprocal chains (see Fig. 
\\ref{fig_reciprocal_path}) are particular cases of nonreciprocal chains (see Fig. \\ref{fig_nonreciprocal_path}) we must have that for all pairs of nodes $x,x'$\n\\begin{equation}\\label{eqn_nonreciprocal_smaller_than_reciprocal} \n u^{NR}_X(x,x') \\leq u^{R}_X(x,x').\n\\end{equation} \nI.e., nonreciprocal clustering distances do not exceed reciprocal clustering distances. An important characterization is that any method $\\ccalH$ satisfying axioms (A1)-(A2) yields ultrametrics that lie between $u^{NR}_X(x,x')$ and $u^{R}_X(x,x')$ as we formally state next.\n\n\\vspace{-0.1in}\n\\begin{theorem}\\label{theo_extremal_ultrametrics}\nConsider an admissible clustering method $\\ccalH$, that is a clustering method satisfying axioms (A1)-(A2). For arbitrary given network $N=(X,A_X)$ denote by $(X,u_X)=\\ccalH(X,A_X)$ the outcome of $\\ccalH$ applied to $N$. Then, for all pairs of nodes $x,x'$\n\\begin{equation}\\label{eqn_theo_extremal_ultrametrics} \n u^{NR}_X(x,x') \\leq u_X(x,x') \\leq u^{R}_X(x,x'),\n\\end{equation} \nwhere $u^{NR}_X(x,x')$ and $u^{R}_X(x,x')$ denote the nonreciprocal and reciprocal ultrametrics as defined by \\eqref{eqn_nonreciprocal_clustering} and \\eqref{eqn_reciprocal_clustering}, respectively.\n\\end{theorem}\n\n\\vspace{-0.05in}\n\n\\begin{myproof} See \\cite{Anonymous}. \\end{myproof}\n\n\nAccording to Theorem \\ref{theo_extremal_ultrametrics}, nonreciprocal clustering applied to the network $N=(X,A_X)$ yields a uniformly minimal ultrametric that satisfies axioms (A1)-(A2). Reciprocal clustering yields a uniformly maximal ultrametric. Any other clustering method abiding to (A1)-(A2) yields an ultrametric such that the distances $u_X(x,x')$ between any two pairs of nodes lie between the distances $u^{NR}_X(x,x')$ and $u^{R}_X(x,x')$ assigned by reciprocal and nonreciprocal clustering. In terms of dendrograms, \\eqref{eqn_theo_extremal_ultrametrics} implies that among all possible clustering methods, the smallest possible resolution at which nodes are clustered together is the one corresponding to nonreciprocal clustering. The highest possible resolution is the one that corresponds to reciprocal clustering. \n\n\n\\begin{remark} \\normalfont\nFrom Remark \\ref{rem_recip_nonrecip}, the upper and lower bounds in \\eqref{eqn_theo_extremal_ultrametrics} coincide with single linkage for symmetric networks. Thus, \\eqref{eqn_theo_extremal_ultrametrics} becomes an equality in such context. Since metric spaces are particular cases of symmetric networks, Theorem \\ref{theo_extremal_ultrametrics} recovers the uniqueness result in \\cite{clust-um} and extends it to symmetric -- but not necessarily metric -- data. Further, the result in \\cite{clust-um} is based on three axioms, two of which are the symmetric particular cases of the axioms of value and transformation. It then follows that one of the three axioms in \\cite{clust-um} is redundant. See \\cite{Anonymous} for details.\n\n\n\n\n\\end{remark}\n\n\n\n\n\\section{Circles of trust}\\label{sec_numerical_experiments}\n\nWe apply the theory developed to the formation of trust clusters -- circles of trust -- in social networks \\cite{CrossParker04}. Recalling the equivalence between dendrograms and ultrametrics, it follows that we can think of trust propagation in a network as inducing a trust ultrametric $T_X$. The induced trust distance bound $T_X(x,x')\\leq\\delta$ signifies that, at resolution $\\delta$, individuals $x$ and $x'$ are part of a circle of trust. 
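\n\nBoth extremal ultrametrics are also easy to compute for small networks, which makes the bounds in Theorem \\ref{theo_extremal_ultrametrics} usable in practice. The following Python sketch is a minimal illustration under the definitions above (not an optimized implementation): a Floyd--Warshall type recursion yields the directed minimax chain costs of \\eqref{eqn_nonreciprocal_chains}, from which the nonreciprocal ultrametric \\eqref{eqn_nonreciprocal_clustering} follows; applying the same recursion to the symmetrized dissimilarities $\\max(A_X(x,x'),A_X(x',x))$ gives the reciprocal ultrametric \\eqref{eqn_reciprocal_clustering}.\n\\begin{verbatim}\ndef minimax_costs(A):\n    # A[i][j]: dissimilarity from i to j (zero diagonal).  Returns the\n    # smallest, over chains from i to j, of the largest dissimilarity\n    # encountered along the chain.\n    n = len(A)\n    D = [row[:] for row in A]\n    for k in range(n):\n        for i in range(n):\n            for j in range(n):\n                D[i][j] = min(D[i][j], max(D[i][k], D[k][j]))\n    return D\n\ndef nonreciprocal(A):\n    D = minimax_costs(A)\n    n = len(A)\n    return [[max(D[i][j], D[j][i]) for j in range(n)] for i in range(n)]\n\ndef reciprocal(A):\n    n = len(A)\n    sym = [[max(A[i][j], A[j][i]) for j in range(n)] for i in range(n)]\n    return minimax_costs(sym)   # single linkage on the symmetrization\n\nA = [[0, 1, 4], [3, 0, 1], [1, 5, 0]]\nprint(nonreciprocal(A))   # entrywise not larger than reciprocal(A)\nprint(reciprocal(A))\n\\end{verbatim}\nOn any input the two outputs satisfy the entrywise relation of Theorem \\ref{theo_extremal_ultrametrics}, and for a symmetric input they coincide, in agreement with Remark \\ref{rem_recip_nonrecip}.\n\n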
Since axioms (A1)-(A2) are reasonable requirements in the context of trust networks Theorem \\ref{theo_extremal_ultrametrics} implies that the trust ultrametric must satisfy\n\\begin{equation}\\label{eqn_theo_trus_distance_bounds} \n u^{NR}_X(x,x') \\leq T_X(x,x') \\leq u^{R}_X(x,x'),\n\\end{equation} \nwhich is just a reinterpretation of \\eqref{eqn_theo_extremal_ultrametrics}. While \\eqref{eqn_theo_trus_distance_bounds} does not give a value for trust ultrametrics, reciprocal and nonreciprocal clustering provide lower and upper bounds in the formation of circles of trust.\n\n\\begin{figure}\n\n\\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=1\\linewidth]\n {figure_2abis.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=1\\linewidth]\n {figure_2bbis.pdf}}\n\\end{minipage}\n\n\\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=1\\linewidth]\n {figure_3abis.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=1\\linewidth]\n {figure_3bbis.pdf}}\n\\end{minipage}\n\\caption{Nonreciprocal (left) and reciprocal (right) clustering for an online social network \\cite{OpsahlPanzarasa09}. Dissimilarities are inversely proportional to the number of messages sent between any two users. Dendrogram closeups shown in second row.}\n\\label{fig_simulations_2}\n\\vspace{-0.1in}\n\n\\end{figure}\nAs a numerical application consider an online social network of a community of students at the University of California at Irvine, \\cite{OpsahlPanzarasa09}. \nIn Fig. \\ref{fig_simulations_2}-top, we depict both clustering algorithms for a subset of the users of the social network. The dissimilarity between nodes has been normalized as a function of the messages sent between any two users where lower distances represent more intense exchange. Note that although the ultrametrics are lower for the nonreciprocal case -- as they should \\eqref{eqn_nonreciprocal_smaller_than_reciprocal} --, the overall structure is similar. The similarity between both dendrograms could be interpreted as an indicator of symmetry in the communication. Indeed, in a completely symmetric case both dendrograms would coincide. However, there is another source of similarity between the two proposed algorithms which can be interpreted as consistent asymmetry. For example, someone who rarely replies to a message regardless of the sender or someone who sends messages but gets few replies regardless of the receiver. The similarity between both dendrograms hints that answering all messages of some people but none of others is not ubiquitous.\n\nFig. \\ref{fig_simulations_2}-bottom presents a closeup of the dendrograms in Fig. \\ref{fig_simulations_2}-top and the major cluster at resolution $\\delta=0.3$ is highlighted in red. We see that this cluster has a different hierarchical genesis in both algorithms. I.e., the two clustering methods alter the clustering order between nodes, which in terms of ultrametrics corresponds to an inversion of the nearest neighbors ordering. Nonetheless, at $\\delta=0.3$ the red cluster contains the same nodes in both clustering methods. This implies that, for the given resolution, this cluster constitutes a circle of trust for any choice of admissible trust metric $T_X$.\n\n\n\\section{Conclusion}\n\nAn axiomatic construction of hierarchical clustering in asymmetric networks was presented. 
Based on two axioms proposed, the axioms of value and transformation, two particular clustering methods were developed: reciprocal and nonreciprocal clustering. Furthermore, these methods were shown to be well-defined extremes of all possible clustering methods satisfying the proposed axioms. Finally, the theoretical developments were applied to real data in order to study the formation of circles of trust in social networks. \n\\label{sec_conclusion}\n\n\n\n\n\n\n \n\n\n\n\n\n\n\\vfill\\pagebreak\n\n\n\n\\bibliographystyle{IEEEbib}\n\n\n\n\n\n\n\n\n\\subsubsection{#1}\\vspace{-3\\baselineskip}\\color{black}\\medskip{\\noindent \\bf \\thesubsubsection. #1.}}\n\n\\newcommand{\\myparagraph}[1]{\\needspace{1\\baselineskip}\\medskip\\noindent {\\it #1.}}\n\n\\newcommand{\\myindentedparagraph}[1]{\\needspace{1\\baselineskip}\\medskip \\hangindent=11pt \\hangafter=0 \\noindent{\\it #1.}}\n\n\\newcommand{\\myparagraphtc}[1]{\\needspace{1\\baselineskip}\\medskip\\noindent {\\it #1.}\\addcontentsline{toc}{subsubsection}{\\qquad\\qquad\\quad#1}}\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\n\n\t\n\tGiven a graph $G$, the subgraph obtained from $G$ by deleting a vertex $v$ and\n\tall its incident edges is called a \\emph{vertex-deleted} \\emph{subgraph}. The\n\tmultiset of vertex-deleted subgraphs, given up to isomorphism, is called the\n\t\\emph{deck} of $G$. We say that $G$ is \\emph{reconstructible} if it is\n\tuniquely determined (up to isomorphism) by its deck. The well-known Graph\n\tReconstruction Conjecture of Kelly \\cite{kelly1957congruence} and Ulam\n\t\\cite{ulam1960collection} states that all finite graphs on at least three\n\tvertices are reconstructible. A problem which is closely related to this conjecture is\n\tthe reconstruction of graph invariant polynomials. We mean by a \\emph{graph\n\tinvariant} a function $\\mathcal{I}$ from the set of all graphs into any\n\tcommutative ring such that $\\mathcal{I}(G)=\\mathcal{I}(H)$ if $G$ and $H$ are\n\ttwo isomophic graphs. We say that a graph invariant is \\emph{reconstructible}\n\tif it is uniquely determined by the deck. For example, Tutte\n\t\\cite{tutte1979all} proved that the characteristic polynomial and the\n\tchromatic polynomial are reconstructible. A natural question is to ask if a\n\tgraph invariant polynomial can be reconstructed from the polynomial deck, that\n\tis, from the multiset of the polynomials of the vertex-deleted subgraphs? For\n\tthe characteristic polynomial the problem is still open. It was posed by\n\tCvetkovic at the XVIII International Scientific Colloquium in Ilmenau in 1973.\n\tHagos \\cite{hagos2000characteristic} proved that the characteristic polynomial\n\tof a graph is reconstructible from its polynomial deck together\n\twith the polynomial deck of its complement. The \\emph{idiosyncratic\n\tpolynomial} of a graph $G$ with adjacency matrix $A$ is the characteristic polynomial of the matrix\n\tobtained by replacing each non-diagonal zero in $A$\n\twith an indeterminate $x$, that is, the characteristic polynomial of the matrix $ A + x(J-A-I)$. \n\tJohnson and Newman \\cite{johnson1980note} consider a slightly different polynomial which can be viewed as the idiosyncratic polynomial of the complement of $G$. 
It follows from their main theorem that two graphs have the same idiosyncratic polynomial\n\tif only if they are cospectral, and their complements are also cospectral.\n\tThen by Hagos' theorem, the idiosyncratic polynomial of a graph\n\t$G$ is recontructible from its idiosyncratic polynomial deck.\n\t\n\tThe reconstruction conjecture was also considered for tournaments and more\n\tgenerally for digraphs. In this area,\n Stockmeyer \\cite{stockmeyer1977falsity} construct for every positive integer $n$ two\n\tnon isomorphic tournaments $B_{n}$ and $C_{n}$ on the same vertex set\n\t$\\left\\{ 0,\\ldots,2^{n}\\right\\} $. For this he consider the tournament\n\t$A_{n}$ defined on $\\left\\{ 1,\\ldots,2^{n}\\right\\} $ by $(i,j)$ is an arc of\n\t$A_{n}$ if only if $odd(j-i)\\equiv1\\pmod{4} $, where $odd(x)$ is the largest odd\n\tdivisor of $x$. The tournaments\n\t$B_{n}$ and $C_{n}$ are obtained from $A_{n}$ by adding the vertex $0$. In the\n\ttournament $B_{n}$, the vertex $0$ dominates $2,4\\ldots,2^{n}$ and is\n\tdominated by $1,3\\ldots,2^{n}-1$. In the tournament $C_{n}$, the vertex $0$\n\tdominates $1,3\\ldots,2^{n}-1$ and is dominated by $2,4\\ldots,2^{n}$. It is\n\tproved in \\cite{stockmeyer1977falsity} that for $1\\leq k\\leq2^{n}$, the\n\ttournaments $B_{n}-$ $k$ and $C_{n}-$ $(2^{n}+1-k)$ are isomorphic. Then the\n\tpair $B_{n}$ and $C_{n}$ form a counterexample for the reconstruction\n\tconjecture. As mentioned by Pouzet \\cite{pouzet1979note}, Dumont checked\n\tthat for $n\\leq6$ the difference (in absolute value) between the determinants\n\tof $B_{n}$ and $C_{n}$ is $1$. This fact is perhaps true for arbitrary $n$ but\n\twe are not able to prove it. However, we have the following result.\n\t\n\t\\begin{proposition}\n\t\t\\label{parity}For $n\\geq3$, the determinants of $B_{n}$ and $C_{n}$ do not\n\t\thave the same parity.\n\t\\end{proposition}\n\t\n\tFra\\\"{\\i}ss\\'{e} \\cite{fraisse1970abritement} considered a strengthening of the\n\treconstruction conjecture for the class of relations which contains graphs\n\tand digraphs. For digraphs, Fra\\\"{\\i}ss\\'{e}'s problem can be stated as\n\tfollow. Let $G$ and $H$ be two digraphs with the same vertex set $V$ and assume\n\tthat for every proper subset $W$ of $V$, the subdigraphs $G\\left[ W\\right] $\n\tand $H\\left[ W\\right] $, induced by $W$ are isomorphic. Is it true that $G$\n\tand $H$ are isomorphic? Lopez \\cite{lopez1978indeformabilite} proved that the\n\tanswer is positive when $\\left\\vert V\\right\\vert \\geq7$. It follows that if\n\t$G\\left[ W\\right] $ and $H\\left[ W\\right] $ are isomorphic for every\n\tsubset $W$ of size at most $6$, then $G$ and $H$ are isomorphic.\n\tMotivated by Lopez's theorem, \n\twe can ask the following question.\n\t \n\t\\begin{question}\\label{invariant}\n\t\tLet $\\mathcal{I}$ be a digraph invariant polynomial and let $G$ be a digraph. Is the polynomial $\\mathcal{I}(G)$ reconstructible from the \n\t\tcollection $\\{\\mathcal{I}(H): H\\,\\in \\mathcal{H}\\}$, where\n\t\t$\\mathcal{H}$ is the set of proper induced subdigraphs of $G$?\n\t\\end{question}\n\t \n\t In this paper, we will address this question for idiosyncratic polynomial extended to digraphs and defined as follow.\n\t Let $G$ be a digraph with adjacency matrix $A$. The \\emph{generalized adjacency matrix} of $G$ is $A(y,z)=A + y(J-A-I)+zA^{T}$. The idiosyncratic polynomial of $G$ as \n\t the characteristic polynomial of $A(y,z)$. 
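\n\n\tFor concreteness, this definition is easy to evaluate symbolically. The following short Python\/SymPy sketch is purely illustrative (the function name is ours): it builds the generalized adjacency matrix $A(y,z)=A + y(J-A-I)+zA^{T}$ from a given $0$--$1$ adjacency matrix and returns its characteristic polynomial in the variable $X$.\n\\begin{verbatim}\nimport sympy as sp\n\ndef idiosyncratic_polynomial(adj):\n    # adj: 0\/1 adjacency matrix (list of rows) with zero diagonal\n    A = sp.Matrix(adj)\n    n = A.shape[0]\n    y, z, X = sp.symbols('y z X')\n    J, I = sp.ones(n, n), sp.eye(n)\n    M = A + y * (J - A - I) + z * A.T\n    return sp.expand(M.charpoly(X).as_expr())\n\n# directed 3-cycle 1 -> 2 -> 3 -> 1\nprint(idiosyncratic_polynomial([[0, 1, 0], [0, 0, 1], [1, 0, 0]]))\n\\end{verbatim}\n\tTwo digraphs can then be compared by checking whether the two resulting polynomials in $X$, $y$ and $z$ are identical.\n\n\t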
The presence of\n\t $zA^{T}$ comes from the fact that the adjacency matrix of a digraph is not necessarily symmetric.\n\t It is not difficult to see that if two digraphs have the same idiosyncratic polynomial then they have the same characteristic polynomial, moreover their complement and their converse are also the same characteristic polynomial.\n\t \n\tWe prove that Question \\ref{invariant} is not true for arbitrary digraphs.\n\tOur counterexamples are borrowed from \\cite{boussairi2004c3} where they have been used\n\tin another context. All of these counterexamples contain one of two particular digraphs\n\tcalled \\emph{flag}. Following \\cite{boussairi2004c3} a \\emph{flag} is a digraph with\n\t vertex set $\\{u,v,w\\}$ and whose arcs set is either $\\left\\{ \\left( u,v\\right)\n\t,\\left( u,w\\right) ,\\left( w,u\\right) \\right\\} $ or $\\left\\{ \\left(\n\tv,u\\right) ,\\left( u,w\\right) ,\\left( w,u\\right) \\right\\} $. A\n\t\\emph{flag-free} \\emph{digraph} is a digraph in which there is no flag as\n\tinduced subdigraph. \n\t\n\tOur main result is stated as follow.\n\t\n\t\\begin{theorem}\n\t\\label{maintheorem} Let $G$ and $H$ be two flag-free digraphs with the same\n\tvertex set $V$ of size at least $5$. If for every $3$-subset $W$ of $V$, the\n\tinduced subdigraphs $G\\left[ W\\right] $ and $H\\left[ W\\right] $ have the\n\tsame idiosyncratic polynomial, then $G$ and $H$ have the same idiosyncratic polynomial.\n\t\\end{theorem}\n\t\n\tAs an application, we obtain the following corollary about tournaments.\n\t\n\t\\begin{corollary}\n\t\\label{maincas1}Two tournaments with the same $3$-cycles have the same\n\t idiosyncratic polynomial.\n\t\\end{corollary}\n\t\n\tPosets form an important class of digraphs for which the reconstruction\n\tproblem is still open. Ille and Rampon \\cite{ille1998reconstruction} proved\n\tthat a poset is reconstructible by its deck together with its comparability graph.\n\t\n\tFollowing Habib\n\t\\cite{habib1984comparability}, a parameter of a poset is said to be\n\t\\emph{comparability invariant} if all posets with a given comparability graph\n\thave the same value of that parameter. The dimension and the number of\n\ttransitive extension of a poset are two examples of comparability invariants.\n\t\n\tThe next corollary is another consequence of Theorem \\ref{maintheorem}.\n\t\\begin{corollary}\n\t\\label{maincas2} All the transitive orientations of a comparability graph have the same \n\tidiosyncratic polynomial.\n\t\\end{corollary}\n\t\n\t\\section{Preliminaries}\n\t\n\tA \\emph{graph} $G$ consists of a finite set $V$ of \\emph{vertices} together\n\twith a set $E$ of unordered pairs of distinct vertices of $V$ called\n\t\\emph{edges}. Let $G=(V,E)$ be a graph. With respect to an ordering\n\t$v_{1},\\ldots,v_{n}$\\ of $V$, the \\emph{adjacency matrix} of $G$ is the\n\t$n\\times n$ zero diagonal matrix $A=[a_{ij}]$\\ in which $a_{ij}=1$ if $\\left(\n\tv_{i},v_{j}\\right) \\in E$ and $0$ otherwise. The \\emph{complement }of a graph\n\t$G=(V,E)$ is the graph $\\overline{G}$ with the same vertices as $G$ and such\n\tthat, for any $u,v\\in V$, $\\left\\{ u,v\\right\\} $ is an edge of $\\overline\n\t{G}$ if and only if $\\left\\{ u,v\\right\\} \\notin E$.\n\t\n\tA \\emph{directed graph} or \\emph{digraph} $G$ is a pair $(V,E)$ where $V$ is a\n\tnonempty set $V$ of \\emph{vertices} \\ and $E$ is a set of ordered pairs of\n\tdistinct vertices called \\emph{arcs}. 
Let $W$ be a subset of $V$ the\n\t\\emph{subdigraph} of $G$ \\emph{induced} by $W$ is the digraph $G\\left[\n\tW\\right] $ whose vertex set is $W$ and whose arc set consists of all arcs of\n\t$G$ which have end-vertices in $W$. \n\tA digraph $G=(V,E)$ is \\emph{symmetric} if, whenever\n\t$(u,v) \\in E$ then $(v,u)$ $\\in E$. There is a natural one\n\tto one correspondence between graphs and symmetric digraphs.\n\t\n\tLet $G=(V,E)$ be a digraph. With respect to an ordering $v_{1},\\ldots,v_{n}%\n\t$\\ of $V$, the \\emph{adjacency matrix} of $G$ is the $n\\times n$ zero diagonal\n\tmatrix $A=[a_{ij}]$ \\ in which $a_{ij}=1$ if $\\left( v_{i},v_{j}\\right) \\in\n\tE$ and $0$ otherwise. The \\emph{converse }of $G$, denoted by $G^{\\ast}$, is\n\tthe digraph obtained from $G$ by reversing the direction of all its arcs. The\n\tadjacency matrix of $G^{\\ast}$ is the transpose $A^{T}$ of the matrix $A$, in\n\tparticular $P_{G}\\left( X\\right) =P_{G^{\\ast}}\\left( X\\right) $. The\n\t\\emph{complement }of $G$ is the digraph $\\overline{G}$ with vertex set $V$ and\n\tsuch that, for any $u,v\\in V$, $\\left( u,v\\right) $ is an arc of\n\t$\\overline{G}$ if and only if $\\left( u,v\\right) \\notin E$. The adjacency\n\tmatrix of $\\overline{G}$ is $J-A-I$.\n\t\n\t An\n\t\\emph{oriented} \\emph{graph} is a digraph $G=\\left( V,E\\right) $ such that\n\tfor $x,y\\in V$ , if $\\left( x,y\\right) \\in$ $E$, then $\\left( y,x\\right)\n\t\\notin$ $E$. Let $G$ be a graph. An \\emph{orientation} of $G$ is an assignment of a direction to \n\teach edge of $G$ so that we obtain an oriented graph.\n\t A \\emph{tournament} is an orientation of the complete graph. An oriented graph\n\t is a \\emph{poset} if, whenever $\\left( x,y\\right)$ and \n\t $\\left( y,z\\right)$ are arcs then $\\left( x,z\\right)$ is also an arc.\n\t A \\emph{transitive orientation} of a graph is one where the resulting oriented graph is\n\t a poset. \\emph{Comparability graphs} are the class of graphs that have a transitive orientation.\n\t \n\t\n\t\n\t\n\t\\section{Determinant of Stockmeyer's tournaments}\\label{Stockmeyer}\n\t\n\tIn this section, we prove Proposition \\ref{parity}. For this, we will use the following lemma, which \n\tis a particular case of \\cite[Equality~(A)]{pouzet1979note}.\n \\begin{lemma}\n \tFor a pair $\\left( G,H\\right) $ of digraphs, satisfying the hypothesis\n \t of the reconstruction Conjecture, we have\n \t\\begin{equation}\\label{a}\n \t\\det(G)-\\det(H)=\\left( -1\\right) ^{n+1}\\left[ C(G)-C(H)\\right] %\n \t\\end{equation}\n \twhere $C(G)$ and $C(H)$ are respectively the numbers of Hamiltonian cycles of\n \t$G$ and $H$.\n \\end{lemma}\n\tRemark that in this Lemma, Equality (\\ref{a}) is slightly different \n\tfrom Equality (A) of \\cite{pouzet1979note}. The reason is that, we do not use the same definition of cycle. In our paper, we mean by a (directed)\n\t\\emph{cycle} of a digraph $G$ every subdigraph with vertex set $\\left\\{\n\tx_{1},\\ldots,x_{t}\\right\\} $ and arcs set $\\left\\{ x_{1}x_{2},\\ldots\n\t,x_{t-1}x_{t},x_{t}x_{1}\\right\\} $. This cycle is said to be\n\t\\emph{Hamiltonian} if it goes through each vertex of $G$ exactly once. A path\n\t of a digraph $G$ is a subdigraph with vertex set $\\left\\{ x_{1},\\ldots\n\t,x_{t}\\right\\} $ and arc set $\\left\\{ x_{1}x_{2},\\ldots,x_{t-1}%\n\tx_{t}\\right\\} $. Such path is denote by $x_{1}x_{2}\\ldots x_{t}$. The notion of Hamiltonian path is defined similarly.\n\t\n\tLet $T$ be a tournament and let $v$ be a vertex of $T$. 
We denote by $N^{+}(v)$\n\t(resp.$N^{-}(v)$) the \\emph{out-neighborhood} (resp. the\\emph{\n\t\tin-neighborhood}) that is the set of vertices dominated by $v$ (resp. that\n\tdominate $v$).\n\t\n\t\\begin{remark}\n\t\t\\label{path}There is the natural one-to-one correspondence between Hamiltonian\n\t\tcycles of $T$ and Hamiltonian paths of $T-v$ from a vertex $x\\in N^{+}(v)$\n\t\tto a vertex of $N^{-}(v)$.\n\t\\end{remark}\n\t\n\n\t\\begin{pop1}\n\t\t Let $\\mathcal{O}=\\left\\{ 1,3,\\ldots\n\t\t,2^{n}-1\\right\\} $ and $\\mathcal{E}=\\left\\{ 2,4,\\ldots,2^{n}\\right\\} $. The\n\t\tset $\\mathcal{P}$ of Hamiltonian paths of $A_{n}$ is partitioned into four subsets:\n\t\t\n\t\t\\begin{description}\n\t\t\t\\item[i)] $\\mathcal{P}_{o,o}$ the set of Hamiltonian paths joining two vertices in\n\t\t\t$\\mathcal{O}$.\n\t\t\t\n\t\t\t\\item[ii)] $\\mathcal{P}_{e,e}$ the set of Hamiltonian paths joining two vertices in\n\t\t\t$\\mathcal{E}$.\n\t\t\t\n\t\t\t\\item[iii)] $\\mathcal{P}_{o,e}$ the set of Hamiltonian paths joining a vertex in\n\t\t\t$\\mathcal{O}$ to a vertex in $\\mathcal{E}$.\n\t\t\t\n\t\t\t\\item[iv)] $\\mathcal{P}_{e,o}$ the set of Hamiltonian paths joining a vertex in\n\t\t\t$\\mathcal{E}$ to a vertex in $\\mathcal{O}$.\n\t\t\t\n\t\t\t\n\t\t\n\t\t\t We will prove that $\\left\\vert \\mathcal{P}_{o,o}\\right\\vert =\\left\\vert\\mathcal{P}_{e,e}\\right\\vert $. \n\t\t\tLet $x_{1}x_{2}\\ldots x_{2^{n}}%\n\t\t\t\\in\\mathcal{P}_{o,o}$. For $i=1,\\ldots,2^{n}$, we set $\\widetilde{x}\n\t\t\t_{i}:=2^{n}-x_{2^{n}-i+1}+1$. \n\t\t\n\t\t\n\t\tIt is easy to see that\n\t\t\\end{description}\t\n\t\t\\[\n\t\todd(\\widetilde{x}_{i+1}-\\widetilde{x}_{i})=odd(x_{(2^{n}-i)+1}-x_{2^{n}-i})=1\n\t\t\\]\n\t\t Moreover, $\\widetilde{x}_{1}$, $\\widetilde{x}_{2^{n}}\\in$\n\t\t $\\mathcal{E}$, then $\\widetilde{x}_{1}\\widetilde{x}_{2}\\ldots\\widetilde\n\t\t {x}_{2^{n}}\\in\\mathcal{P}_{e,e}$. It follows that there is a one-to-one\n\t\tcorrespondence between $\\mathcal{P}_{o,o}$ and $\\mathcal{P}_{e,e}$. To\n\t\tconclude it suffices to apply Equality (\\ref{a}) and Redei's theorem \\cite{redei1934kombinatorischer} asserting that the number of Hamiltonian paths in a\n\t\ttournament is always odd. \n\t\\end{pop1}\n\t\n\n \n\t\t\\section{Counterexample for Question \\ref{invariant}}\n\t\nConsider the digraph $G$ with vertex set $\\left\\{ 1,\\ldots,n\\right\\} $ and whose arcs are\n\t $ (1,2),\\left( n-1,n\\right) $, $\\left( i,i+1\\right)$ and $\\left(\n\t1,i+1\\right)$ for $i=1,\\ldots,n-2 $. Let $G^{\\prime}$ be the digraph\n\tobtained from $G$ by reversing the arc $\\left( n-1,n\\right) $. These two\n\t digraphs are drawn in Figure $1$.\n\t\n\t We will prove the following proposition.\n\t \n\t \\begin{proposition}\\label{counterq2}\n\t \tLet $G$ and $G^{\\prime}$ be the digraphs defined above. Then we have\n\t \\begin{description}\n\t \t\\item[i)] $G$ and $G^{\\prime}$ do not have the same idiosyncratic polynomial;\n\t \t\\item[ii)] $G\\left[ W\\right] $ and $G^{\\prime}\\left[ W\\right] $\n\t \thave the same idiosyncratic polynomial for every proper subset $W$ of\n\t \t$\\left\\{ 1,\\ldots,n\\right\\}$.\n\t \\end{description}\t\n\t \\end{proposition}\n Our proof is based on the Coates determinant formula \\cite{coates1959flow}. This can be used to evaluate the determinant of\n the adjacency matrix of a digraph. Let $H$\n be a digraph on $n$ vertices (possibly with loops). A linear subdigraph $L$ of\n $H$ is a vertex disjoint union of some cycles in $H$ (we consider a loop as a\n cycle of length $1$). 
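\n\n\tBefore completing the argument, a quick numerical check is possible. The following Python sketch is illustrative only; it encodes one possible reading of the construction above, with vertices $1,\\ldots,n$ mapped to indices $0,\\ldots,n-1$, and compares the determinants of the complements of $G$ and $G^{\\prime}$ for a small $n$, which is the key step in the proof of Proposition \\ref{counterq2}.\n\\begin{verbatim}\nimport numpy as np\n\ndef counterexample_pair(n):\n    A = np.zeros((n, n), dtype=int)\n    for i in range(n - 1):\n        A[i, i + 1] = 1        # path arcs, including (n-1, n)\n    for i in range(1, n - 1):\n        A[0, i] = 1            # arcs (1, i+1) for i = 1, ..., n-2\n    Ap = A.copy()\n    Ap[n - 2, n - 1] = 0       # reverse the arc (n-1, n)\n    Ap[n - 1, n - 2] = 1\n    return A, Ap\n\nn = 5\nA, Ap = counterexample_pair(n)\nJ, I = np.ones((n, n), dtype=int), np.eye(n, dtype=int)\nprint(round(np.linalg.det(J - A - I)),\n      round(np.linalg.det(J - Ap - I)))   # the two values differ\n\\end{verbatim}\n\tWith this numerical evidence in hand, we return to the Coates formula.\n\n\t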
\begin{pop}
To prove that the digraphs $G$ and $G^{\prime}$ do not have the same idiosyncratic polynomial, it suffices to check that their complements $\overline{G}$ and $\overline{G^{\prime}}$ do not have the same determinant. Let $A$ and $A^{\prime}$ be the adjacency matrices of $G$ and $G^{\prime}$ respectively. Then the adjacency matrices of $\overline{G}$ and $\overline{G^{\prime}}$ are respectively $\overline{A}=J-A-I$ and $\overline{A^{\prime}}=J-A^{\prime}-I$.

Let
\[
\widetilde{A}:=\left(
\begin{tabular}
[c]{c|c}%
$A+I$ & $\mathbbm{1}$\\\hline
$\mathbbm{1}^{t}$ & $1$%
\end{tabular}
\right)\mbox{, }\widetilde{A'}:=\left(
\begin{tabular}
[c]{c|c}%
$A^{\prime}+I$ & $\mathbbm{1}$\\\hline
$\mathbbm{1}^{t}$ & $1$%
\end{tabular}
\right)
\]
where $\mathbbm{1}$ is the all-ones column vector of dimension $n$. It is easy to see (for instance by subtracting the last row of each matrix from its first $n$ rows) that
\[
\det(\overline{A})=\left( -1\right) ^{n}\det(\widetilde{A})\mbox{, }\det(\overline{A^{\prime}})=\left( -1\right)^{n}\det(\widetilde{A'}).
\]

Remark that $\widetilde{A}$ and $\widetilde{A'}$ can be viewed as the adjacency matrices of the digraphs $\widetilde{G}$ and $\widetilde{G^{\prime}}$ defined on the set $\left\{ 1,\ldots,n+1\right\}$ as follows. The arcs of $\widetilde{G}$ are $(1,2)$, $\left( n-1,n\right)$, $\left( i,i+1\right)$, $\left( 1,i+1\right)$ for $i=1,\ldots,n-2$ and $(i,i)$, $(i,n+1)$, $(n+1,i)$ for $i=1,\ldots,n$. The digraph $\widetilde{G^{\prime}}$ is obtained from $\widetilde{G}$ by reversing the arc $\left( n-1,n\right)$.

We will evaluate $\det(\widetilde{A})-\det(\widetilde{A'})$ by using formula (\ref{b}). For this, we partition $\mathcal{L}(\widetilde{G})$ into four subsets:

\begin{itemize}
\item $\mathcal{L}_{1}(\widetilde{G})$ the set of linear subdigraphs containing the arcs $(1,2)$ and $\left( n-1,n\right)$.

\item $\mathcal{L}_{2}(\widetilde{G})$ the set of linear subdigraphs containing the arc $(1,2)$ but not $\left( n-1,n\right)$.

\item $\mathcal{L}_{3}(\widetilde{G})$ the set of linear subdigraphs containing the arc $\left( n-1,n\right)$ but not $(1,2)$.

\item $\mathcal{L}_{4}(\widetilde{G})$ the set of linear subdigraphs containing neither the arc $\left( n-1,n\right)$ nor $(1,2)$.
\end{itemize}

We define a similar partition of $\mathcal{L}(\widetilde{G^{\prime}})$ by replacing the arc $\left( n-1,n\right)$ with $\left( n,n-1\right)$. Clearly we have $\mathcal{L}_{2}(\widetilde{G})=\mathcal{L}_{2}(\widetilde{G^{\prime}})$, $\mathcal{L}_{4}(\widetilde{G})=\mathcal{L}_{4}(\widetilde{G^{\prime}})$ and $\mathcal{L}_{3}(\widetilde{G^{\prime}})=\left\{ L^{\ast}:L\in\mathcal{L}_{3}(\widetilde{G})\right\}$. Moreover, $\mathcal{L}_{1}(\widetilde{G^{\prime}})$ is empty and $\mathcal{L}_{1}(\widetilde{G})$ contains only the Hamiltonian cycle whose arcs are $(i,i+1)$ for $i=1,\ldots,n$ and $(n+1,1)$.

Using formula (\ref{b}), we get $\det(\widetilde{A})-\det(\widetilde{A^{\prime}})=\left( -1\right) ^{n+1}$. Hence $\det(\overline{A})-\det(\overline{A^{\prime}})=-1$. It follows that $G$ and $G^{\prime}$ do not have the same idiosyncratic polynomial.
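As a quick check of this computation, take $n=4$: the arcs of $G$ are $(1,2)$, $(2,3)$, $(3,4)$ and $(1,3)$, and $G^{\prime}$ is obtained by replacing $(3,4)$ with $(4,3)$. A direct calculation gives
\[
\overline{A}=\left(
\begin{array}
[c]{cccc}%
0 & 0 & 0 & 1\\
1 & 0 & 0 & 1\\
1 & 1 & 0 & 0\\
1 & 1 & 1 & 0
\end{array}
\right)\mbox{, }\overline{A^{\prime}}=\left(
\begin{array}
[c]{cccc}%
0 & 0 & 0 & 1\\
1 & 0 & 0 & 1\\
1 & 1 & 0 & 1\\
1 & 1 & 0 & 0
\end{array}
\right),
\]
so $\det(\overline{A})=-1$ while $\det(\overline{A^{\prime}})=0$ (the third column of $\overline{A^{\prime}}$ is zero, since in $G^{\prime}$ every other vertex dominates the vertex $3$).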
Moreover, $\\mathcal{L}_{1}(\\widetilde\n{G^{\\prime}})$ is empty and $\\mathcal{L}%\n\t_{1}(\\widetilde{G})$ contains only the Hamiltonian cycle whose arcs are\n\t$(i,i+1)$ for $i=1,\\ldots,n$ and $(n+1,1)$.\n\t\n\tUsing formula (\\ref{b}), we get $\\det(\\widetilde{A})-\\det(\\widetilde{A^{\\prime}%\n\t})=\\left( -1\\right) ^{n+1}$. Hence $\\det(\\overline{A})-\\det(\\overline\n\t{A^{\\prime}})=-1$. It follows that $G$ and $G^{\\prime}$ do not have the same\n\t idiosyncratic polynomial.\n\t\n\tWe will prove now that $G\\left[ W\\right] $ and $G^{\\prime}\\left[ W\\right] $\n\thave the same idiosyncratic polynomial for every proper subset $W$ of\n\t$\\left\\{ 1,\\ldots,n\\right\\} $. This is true when $\\left\\{\n\t1,2,n,n-1\\right\\} $ is not entirely contained in $W$, because $G\\left[\n\tW\\right] =G^{\\prime}\\left[ W\\right] $ or $G^{\\ast}\\left[ W\\right]\n\t=G^{\\prime}\\left[ W\\right] $. So we can assume that $\\left\\{ 1,2,n,n-1\\right\\}\n\t\\subseteq W$. Let $k\\in\\left\\{ 3,\\ldots,n-2\\right\\}\\setminus W$. The set\n\t$W$ is partitioned into two nonempty subsets $W_{1}:=W\\cap\\left\\{\n\t1,\\ldots,k-1\\right\\} $ and $W_{2}:=W\\cap\\left\\{ k+1,\\ldots,n\\right\\} $.\n\tClearly, there is no arc between $W_{1}$ and $W_{2}$ in $G\\left[ W\\right] $\n\tand $G^{\\prime}\\left[ W\\right] $. Moreover $G\\left[ W_{1}\\right]\n\t=G^{\\prime}\\left[ W_{1}\\right] $ and $G^{\\ast}\\left[ W_{2}\\right]\n\t=G^{\\prime}\\left[ W_{2}\\right] $. Then the generalized adjacency matrices of\n\t$G\\left[ W\\right] $ and $G^{\\prime}\\left[ W\\right] $ have the form%\n\t\n\t\\[\n\tA=\\left(\n\t\\begin{array}\n\t[c]{cc}%\n\tA_{11} & \\alpha\\beta^{t}\\\\\n\t\\beta\\alpha^{t} & A_{22}%\n\t\\end{array}\n\t\\right) \\text{ \\ \\ \\ and \\ \\ \\ }B=\\left(\n\t\\begin{array}\n\t[c]{cc}%\n\tA_{11} & \\alpha\\beta^{t}\\\\\n\t\\beta\\alpha^{t} & A_{22}^{t}%\n\t\\end{array}\n\t\\right) \\text{,}%\n\t\\]\n\t\n\t\n\twhere $\\alpha=\\left(\n\t\\begin{array}\n\t[c]{c}%\n\t1\\\\\n\t\\vdots\\\\\n\t1\n\t\\end{array}\n\t\\right) $ and $\\beta=\\left(\n\t\\begin{array}\n\t[c]{c}%\n\ty\\\\\n\t\\vdots\\\\\n\ty\n\t\\end{array}\n\t\\right) $.\n\n\n\n \n\tWe conclude by Proposition \\ref{HLowey} below.\n \\end{pop}\n \n \n \\begin{proposition}\n \t\\label{HLowey}Suppose $k$, $n$ are positive integers such that $k