\\section{Introduction}\n\nThe past decade has seen major developments in the field of machine learning, and societal applications in health-care, autonomous vehicles, and language processing are becoming commonplace \\cite{jordan2015machine}. The impact of machine learning on basic research has been just as significant, and the use of advanced algorithmic tools in data analysis has resulted in new insights into many areas of science. In physics, there has been particular interest in applying the tools of machine learning to study complex dynamical systems which evolve in time. These systems exhibit extreme sensitivity to small variations of the governing parameters, and the use of conventional numerical methods to understand and potentially control these dynamics is challenging. \n\nNonlinear pulse propagation in optical fibre waveguides is known to exhibit highly complex evolution, and machine learning methods have been applied in a variety of ways to both optimize and analyze the spectral or temporal intensity profile at the fibre output. For example, from a feedback and control perspective, evolutionary algorithms (which are typically slow to converge) have been used in experiments optimizing particular characteristics of supercontinuum sources \\cite{wetzel2018customizing, michaeli2018genetic}, as well as in the experimental control of mode-locked fibre lasers \\cite{andral2015fiber,pu2019intelligent,dudleymeng2020,kokhanovskiy2019machine2}. Machine learning using neural networks has also been applied to experimentally classify different regimes of nonlinear propagation in modulation instability experiments \\cite{narhi2018machine} or to determine the duration of short pulses from a fibre laser \\cite{kokhanovskiy2019machine}. 
Applications to the control of mode-locking \\cite{baumeister2018,kokhanovskiy2019machine} and pulse shaping \\cite{finot2018nonlinear} have also been demonstrated numerically. Yet, all these applications have been restricted either to (slow) genetic algorithms or to feed-forward neural network architectures limited to determining the correspondence between a given input and some single output parameter. \n\nMore generally, experiments in optical fibres are of very wide interest in nonlinear science since they provide a convenient means of studying nonlinear dynamics common to many nonlinear Schr\\\"{o}dinger equation (NLSE) systems including hydrodynamics, plasmas, and Bose-Einstein condensates. However, because propagation in an NLSE system depends sensitively on both the input pulse and fibre characteristics, the design and analysis of experiments require extensive numerical simulations based on the numerical integration of the NLSE or its extensions. This is computationally demanding and creates a severe bottleneck in using numerical techniques to design or optimize experiments in real time. \n\nIn this paper, we present a solution to this problem using machine learning to predict complex nonlinear propagation in optical fibres with a recurrent neural network, bypassing the need for direct numerical solution of a governing propagation model. The general context of our work is the recent development of machine learning approaches exploiting \\emph{knowledge-based} and \\emph{model-free} methods to forecast and thus control complex evolving dynamics. Knowledge-based (or physics-informed) methods rely on some a priori knowledge of the mathematical model governing the physical system, and they perform especially well in capturing nonlinear dynamics \\cite{brunton2016discovering,raissi2018deep, raissi2019physics}. 
In contrast, model-free forecasting is a purely \\emph{data-driven} approach where a neural network learns the system's dynamical behavior from a set of training data, without any prior knowledge of the physics of the system or any underlying governing equation(s). Model-free methods have been particularly successful in forecasting spatio-temporal dynamics of physical systems exhibiting high-dimensional chaos, instabilities and turbulence \\cite{vlachas2018data, vlachas2019forecasting, pandey2020reservoir}, as well as reproducing the propagation dynamics of certain analytical solutions of the NLSE \\cite{jiang2019model}. \n\nOur objective here is to significantly extend the use of model-free methods in nonlinear physics by showing how a long short-term memory (LSTM) recurrent neural network can fully reproduce the complex nonlinear dynamics of ultrashort pulse evolution in optical fibre governed by an NLSE system. We study two particular cases of practical importance: high power pulse compression associated with the generation of Peregrine-soliton structures, and broadband optical supercontinuum generation. In the first case, we show how the network accurately predicts the temporal and spectral evolution of higher-order solitons and the appearance of the Peregrine soliton from a transform-limited intensity profile, and we also show how the predicted results agree with reported experimental measurements \\cite{tikan2017universality}. We then expand our analysis to even more complex dynamics and show how the network can also predict the full development of an octave-spanning supercontinuum with fine details in the spectral and temporal domains. These results represent a significant extension of model-free methods applied to nonlinear optics, with potentially important impact for high-field physics, nonlinear spectroscopy, and precision frequency comb metrology. 
Moreover, we anticipate that our results will stimulate similar studies in all areas of physics where NLSE-like dynamics play a governing role. \n\n\n\n\n\\section{Model-free modeling of nonlinear propagation dynamics}\n\nThe propagation of light in an optical fibre can be represented as a sequence of electric field complex amplitude distributions (spectral or temporal) at different points along the propagation path in the fibre. The amplitude at any specific propagation distance is naturally determined by the evolution which precedes it, and modelling this evolution is conventionally carried out by numerically integrating a governing NLSE model over a large number of elementary steps \\cite{agrawal}. Unfortunately, this conventional approach can be extremely time-consuming.\n\nHere, we show that such a direct numerical approach can in fact be replaced with model-free forecasting using a recurrent neural network (RNN). RNNs are a particular class of neural network that possess internal memory, allowing them to account for long-term dependencies and thus to robustly identify patterns in sequential data \\cite{Lipton2015RNN}. The fact that RNNs intrinsically allow modelling of dynamic behavior makes them particularly adapted to the processing and prediction of time series, with applications in speech recognition, predictive texting, handwriting recognition, natural language processing, and stock market analysis. They are equally a natural choice for predicting the evolution of nonlinear propagation dynamics as a high power optical field propagates in an optical fibre. \n\nThe particular form of RNN we use is the long short-term memory (LSTM) cell architecture \\cite{hochreiter1997long}. Although other approaches such as reservoir computing or the gated recurrent unit would also be possible, our choice of LSTM is based on its simplicity of implementation and demonstrated success in various applications \\cite{Carleoreview2019,reviewRNN2019}. 
We train the network to be able to separately and independently forecast the evolution of temporal and spectral intensity during nonlinear pulse propagation in optical fibre, based only on the initial condition of a transform-limited pulse. Of course physically, the temporal and spectral field characteristics are tightly coupled, and it is therefore remarkable that the network is able to learn the temporal and spectral evolution dynamics independently using only intensity data. In order to teach the network the pulse propagation dynamics, initial training is performed using ensembles of temporal and spectral intensity evolution maps, generated numerically using simulations of the NLSE (or its generalized version, the GNLSE) for a range of input pulse characteristics. In order to reduce the computational load during training, the simulation profiles are downsampled along both the propagation direction and the temporal and spectral dimensions (see Methods). \n\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{schmatic13.pdf}\n\\caption{(a) Schematic of the recurrent neural network architecture used showing: the input layer, the long short-term memory (LSTM) recurrent layer, two hidden (dense) layers, and the output layer. (b) The neural network uses the spectral (or temporal) intensity profiles $\\mathbf{X}_{z-1}$ from the ten previous intensity profiles $h_{z-10\\Delta z}..h_{z-\\Delta z}$ in the evolution to yield the subsequent spectrum $h_{z}$. Each intensity profile $h$ consists of $B$ intensity bins denoted as $x^{k}$ where $k$ indicates the bin number. (c) The LSTM cell receives the cell input, hidden and cell states from the previous step as an input, and the output of the cell is the new hidden state, which is also passed on to the next prediction step along with the new cell state. $x_i$ is the cell input where $i = z, z - \\Delta z$, $h_i$ denotes the hidden state and $c_i$ the cell state. 
The yellow rectangles denote layer operations and the orange circles denote pointwise operations. See Methods for more details on the number of nodes used per layer, the activation functions, and the definition of the different cell elements.} \n\\label{fig:schmatic}\n\\end{figure}\n\nA general schematic of the RNN is shown in Fig. \\ref{fig:schmatic}(a) and an illustration of the training stage is shown in Fig. \\ref{fig:schmatic}(b). Ten consecutive temporal or spectral intensity profiles $h_{z-10\\Delta z}..h_{z-\\Delta z}$ (i.e. the evolution from distance $z-10\\Delta z$ to $z-\\Delta z$) are fed to the RNN. Here $\\Delta z$ corresponds to the sampling distance along the propagation direction (see Methods). The choice to feed the network with ten consecutive intensity profiles at propagation interval $\\Delta z$ was found to be a good heuristic compromise between speed and performance (see Methods). These intensity profiles are then passed to the LSTM layer consisting of cells (Fig. \\ref{fig:schmatic}(c)) governed by a specific algorithm (see Methods). Essentially, the LSTM layer uses three different types of information to predict the (spectral or temporal) intensity profile $h_{z}$ at distance $z$: (i) the intensity profile $h_{z-\\Delta z}$ at distance $z-\\Delta z$, which is the input of the LSTM layer; (ii) the hidden state of the layer, corresponding to the predicted intensity profile $h_{z-2\\Delta z}$ at distance $z-2\\Delta z$; and (iii) the cell state, which contains the long-term dependency information from the intensity profiles $h_{z-10\\Delta z}..h_{z-3\\Delta z}$ corresponding to the evolution from distance $z-10\\Delta z$ to $z-3\\Delta z$.\n\nThe output of the LSTM layer is subsequently fed to a fully connected feed-forward neural network with two hidden (dense) layers whose function is to further improve the predicted intensity at distance $z$. 
The prediction made by the RNN (output layer) is compared with the intensity profile from the NLSE (or its generalized version, the GNLSE) simulations. The error is backpropagated to the weights and biases of the network nodes (both dense and LSTM layers), which are subsequently adjusted to minimize the prediction error. The RNN cycle is then initiated again with an updated input consisting of the consecutive temporal or spectral intensity profiles $h_{z-9\\Delta z}..h_{z}$ until the full evolution is predicted. Note that the RNN loop is initiated with a ``cold start'' where the input sequence contains only the spectral or temporal intensity profile of pulses injected into the fibre (replicated ten times). In the prediction phase, the RNN model is tested using a separate set of temporal and spectral evolution data that was not used in the training phase.\n\n\\section{Results}\n\\label{sec:results}\n\\subsection{Higher-order soliton compression}\nWe begin by training the RNN to model the propagation of picosecond pulses in the anomalous dispersion regime of a highly nonlinear fibre. This propagation regime is of particular significance as it is associated with extreme self-focusing dynamics and practical ``higher-order soliton'' pulse compression schemes \\cite{agrawal}. Moreover, the dynamics of this nonlinear temporal compression have been recently shown to be associated with the emergence of the celebrated Peregrine soliton that appears in the semiclassical limit of the NLSE \\cite{tikan2017universality}. \n\nThe training data was generated by performing 3,000 NLSE numerical simulations of propagation in 13~m of fibre using initial conditions of transform-limited hyperbolic-secant input pulses. The fibre parameters were kept constant between simulations and corresponded to experiments performed around 1550~nm \\cite{tikan2017universality}. 
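The recursive prediction loop described above (a ``cold start'' with the input profile replicated ten times, followed by a sliding ten-profile window) can be sketched as follows. This is a minimal illustration only: `model` is a placeholder standing in for the trained RNN, not the network itself.

```python
import numpy as np

def predict_evolution(x0, model, n_steps, window=10):
    """Autoregressive roll-out of the prediction loop.

    x0      : (bins,) input intensity profile (cold-start seed)
    model   : callable mapping a (window, bins) sequence to the next
              (bins,) profile -- placeholder for the trained RNN
    n_steps : number of propagation steps (of size dz) to predict
    """
    seq = np.tile(x0, (window, 1))        # cold start: x0 replicated
    evolution = [x0]
    for _ in range(n_steps):
        nxt = model(seq)                  # predict profile at z + dz
        evolution.append(nxt)
        seq = np.vstack([seq[1:], nxt])   # slide the window one step
    return np.array(evolution)            # (n_steps + 1, bins) map
```

The returned array plays the role of one predicted evolution map, with the first row being the injected profile itself.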
In contrast, we varied the pulse duration $\\Delta\\,\\tau$ (FWHM) and peak power $P_0$ uniformly over the ranges 0.77--1.43~ps and 18.6--34.2~W, respectively. This yields a variation in soliton number from $N=3.5$--$8.9$, where $N^2 = \\gamma P_0 T_0^2\/|\\beta_2|$ with $\\gamma$ and $\\beta_2$ the fibre nonlinear and group velocity dispersion parameters, respectively, and $T_0 = \\Delta\\,\\tau\/1.763$. See Methods for further details. \n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{PS_NLSE_v4.pdf}\n \n\\caption{Temporal intensity evolution of a 1.1~ps (full width at half maximum) pulse with 26.3~W peak power corresponding to an $N=6$ soliton injected into the anomalous dispersion regime of a 13~m long highly nonlinear fibre. The panels show the results of the NLSE numerical simulation (left), the RNN prediction (middle), and the relative difference (right). The RNN predictions use only the injected pulse intensity profile as input.}\n\\label{fig:pstime}\n\\end{figure}\n\nWe first illustrate the results obtained when training the network to model the temporal intensity evolution. Figure~\\ref{fig:pstime} compares the evolution of the temporal intensity simulated using the NLSE (left panel) with that predicted by the RNN (central panel). The particular results shown correspond to an input soliton number $N=6$. One can see the overall excellent visual agreement between the propagation dynamics predicted by the RNN and those simulated from the NLSE. Also notice that the distance of maximum compression and the associated temporal intensity profile are particularly well predicted by the RNN. The right panel shows the relative difference between the NLSE and RNN evolution maps, with a root mean square (RMS) error computed over the full evolution $R=0.04$ (see Methods). 
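The soliton number above is straightforward to evaluate; a short check using the fibre parameters quoted in the Methods ($\\gamma = 18.4 \\times 10^{-3}$~W$^{-1}$m$^{-1}$, $\\beta_2 = -5.23 \\times 10^{-27}$~s$^2$m$^{-1}$) recovers the quoted values:

```python
import numpy as np

def soliton_number(fwhm, p0, gamma=18.4e-3, beta2=-5.23e-27):
    """N = sqrt(gamma * P0 * T0^2 / |beta2|), with T0 = FWHM / 1.763
    for a hyperbolic-secant pulse.  Defaults are the highly nonlinear
    fibre parameters listed in the Methods (SI units)."""
    t0 = fwhm / 1.763
    return np.sqrt(gamma * p0 * t0**2 / abs(beta2))

# the 1.1 ps, 26.3 W pulse of the figure corresponds to N = 6
n = soliton_number(1.1e-12, 26.3)
```

The extreme input conditions (0.77~ps at 18.6~W and 1.43~ps at 34.2~W) give $N \approx 3.5$ and $N \approx 8.9$, consistent with the range quoted above.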
Comparisons between NLSE and RNN evolution for 100 different input conditions spanning the full range of parameter variation showed similar results, with an RMS error computed over the 100 evolution maps $R=0.097$ (see Methods).\n\nA more detailed comparison between the NLSE simulations, RNN prediction, and experimental measurements at selected distances is plotted in Fig.~\\ref{fig:psexpt}. Note that in this case, third-order dispersion was also included in the training simulations (see Methods). The figure shows the intensity profiles predicted by the RNN (solid blue line), the profiles from the NLSE simulations (dashed red) as well as the experimental measurements (black dots) previously reported in \\cite{tikan2017universality}. One can see remarkable agreement at all distances between the three sets of results, and we stress particularly that the RNN reproduces both the compressed central portion and the side lobes of the Peregrine soliton associated with maximal compression around 10~m. \n\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{PS_expts_v2.pdf}\n \n\\caption{Higher-order soliton ($N=6$) temporal intensity at selected distances predicted by the neural network (solid blue line), simulated with the NLSE (dashed red line), and experimentally measured (black dots). Experimental data from Ref. \\cite{tikan2017universality}.}\n\\label{fig:psexpt}\n\\end{figure}\n\nWe also tested the ability of the RNN to predict the propagation dynamics in the spectral domain from the corresponding input spectrum. Here we use the same ensemble of NLSE numerical simulations as for the temporal evolution, but this time we train the network by feeding the spectral intensity evolution. Results for input conditions identical to those of Figs.~\\ref{fig:pstime} and \\ref{fig:psexpt} are shown in Fig.~\\ref{fig:psspec}. For convenient visualization, the evolution is plotted on a logarithmic scale. 
The spectral evolution consists of an initial stage of spectral broadening dominated by self-phase modulation and corresponding to the compression observed in the time domain. After the point of maximum expansion, we see a breathing phase of narrowing and re-expansion typical of higher-order soliton propagation. One can see excellent agreement between the dynamics predicted from the network and those simulated with the NLSE, with a relative discrepancy within a few dB over the entire evolution (RMS error computed over the full spectral evolution $R=0.106$). \n\nThe excellent correspondence is confirmed in Fig.~\\ref{fig:psexpt_spec}, which plots a detailed comparison between the RNN predicted (blue), simulated (dashed red), and experimentally measured spectra (black dots) at selected distances around the point of maximal temporal compression considered previously, which is naturally also the point of maximum spectral broadening. In particular, one can see the excellent agreement between the NLSE and RNN results over a 25~dB dynamic range. We also performed a series of tests for 100 different input pulse spectra spanning the full range of parameter variation and found similar network performance in terms of predicted evolution, with an RMS error $R=0.161$ (computed over the 100 evolution maps tested).\n\n\n\n\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{PS_NLSE_spec_v4.pdf}\n \n\\caption{Spectral intensity evolution of a 1.1~ps (full width at half maximum) pulse with 26.3~W peak power corresponding to an $N=6$ soliton injected into the anomalous dispersion regime of a 13~m long highly nonlinear fibre. The panels show the results of the numerical simulation (left), the RNN prediction (middle), and the relative difference (right). 
The RNN predictions use only the injected pulse spectrum as input.}\n\\label{fig:psspec}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{PS_expts_spec_v2.pdf}\n \n\\caption{Higher-order soliton ($N=6$) spectral intensity at selected distances predicted by the neural network (solid blue line), simulated with the NLSE (dashed red line), and experimentally measured (black dots).}\n\\label{fig:psexpt_spec}\n\\end{figure}\n\n\n\\subsection{Supercontinuum generation}\nWe next extended our study to even more complex propagation dynamics and the generation of a broadband supercontinuum. Here, we focus our attention on SC generated by injecting femtosecond pulses into the anomalous dispersion regime of a highly nonlinear fibre. This regime is of particular significance as it has been shown to be associated with high spectral coherence and the generation of stable frequency combs, as well as yielding the broadest SC spectra \\cite{dudley2006supercontinuum}.\n\nIn order to test whether a recurrent neural network could learn SC generation dynamics and model their evolution, we generated an ensemble of SC propagation dynamics using the generalized NLSE (GNLSE) that includes the frequency dependence of dispersion and nonlinearity, and the delayed Raman response \\cite{dudley2006supercontinuum}. Specifically, we simulated the propagation of 100~fs transform-limited pulses at 810~nm injected into the anomalous dispersion regime of a 20~cm long photonic crystal fibre with zero dispersion at 750~nm, similar to that used in \\cite{narhi2018machine}. See Methods for detailed parameter values. The ensemble includes simulations for a transform-limited input pulse with peak power uniformly distributed in the 500~W to 2~kW range, yielding SC spectra with different characteristics, from isolated dispersive wave generation to fully developed octave-spanning SC with very fine spectral features. 
We emphasize that although the input pulse duration was kept constant for all the simulations, predicted results for other durations show agreement with the GNLSE similar to that of the specific cases discussed below. \n\nWe begin by training the network from the temporal intensity evolution. Similarly to the higher-order soliton compression case, the simulation profiles are downsampled along both the propagation direction and the temporal and spectral dimensions (see Methods). After training, the RNN model is tested for an input peak power not used in the training stage and the predicted evolution is compared with that directly simulated with the GNLSE for the same input power. \n\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{SC_time_v3.pdf}\n\\caption{Temporal evolution of supercontinuum. The left panel shows the\nnumerical simulation (GNLSE) of a supercontinuum evolution in a 20~cm photonic crystal fibre for a 100~fs pulse with a peak power of 630~W (a) and 1.96~kW (b). See Methods for a full description of the fibre parameters. The middle panel shows the predicted (RNN) temporal intensity evolution for the same initial temporal intensity profile as in the GNLSE simulations. The right panel shows the comparison between the predicted (solid blue line) and simulated (dashed red line) profiles at selected distances indicated by white dashed lines.}\n\\label{fig:sc_time}\n\\end{figure}\n\nResults are shown in Fig.~\\ref{fig:sc_time}(a) and (b) for input peak powers of 630~W and 1.96~kW, corresponding to input soliton numbers of $N=4.6$ and $N=8.1$, respectively. These values were chosen as they lead to SC with very distinct characteristics. The left panel shows the temporal intensity evolution from the GNLSE simulation and the central panel shows the evolution predicted by the RNN. 
The SC generation process arises from soliton dynamics including higher-order soliton compression, soliton fission, and dispersive wave emission on the short-wavelength side \\cite{dudley2006supercontinuum}. For longer propagation distances, solitons emerging from the fission experience the Raman self-frequency shift, expanding the SC spectrum towards the long-wavelength side \\cite{dudley2006supercontinuum}. Significantly, in both scenarios, one can see the excellent visual agreement between the GNLSE simulations and the RNN model. The point of soliton fission and dispersive wave emission, as well as the parabolic trajectories of the red-shifting solitons, are perfectly reproduced by the network. Quantitatively, the relative difference remains within a few dB over the entire evolution down to the -30~dB bandwidth. The RMS error calculated over the full intensity evolution is $R=0.097$ and $R=0.049$ for Figs.~\\ref{fig:sc_time}(a) and (b), respectively. The remarkable ability of the RNN to predict very complex nonlinear dynamics is further highlighted in the right panel, where we plot a detailed comparison between the predicted and simulated SC temporal intensity at selected distances along the propagation. One can see how the amplitude and delay of the dispersive waves and Raman-shifted solitons are predicted with excellent accuracy at all stages of the propagation. Additional predictions run for 50 different values of pulse peak power (not used in the training phase) also showed very good agreement with the GNLSE simulations (RMS error $R=0.176$ computed over 50 different evolution maps tested).\n\n\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{SC_v3.pdf}\n\\caption{Spectral evolution of supercontinuum. The left panel shows the\nnumerical simulation (GNLSE) of a supercontinuum evolution in a 20~cm photonic crystal fibre for a 100~fs pulse with a peak power of 630~W (a) and 1.96~kW (b). See Methods for a full description of the fibre parameters. 
The middle panel shows the predicted (RNN) spectral intensity evolution for the same initial spectral intensity profile as in the GNLSE simulations. The right panel shows the comparison between the predicted (solid blue line) and simulated (dashed red line) profiles at selected distances indicated by white dashed lines.}\n\\label{fig:sc}\n\\end{figure}\n\nWe then tested the ability of the RNN model to predict the SC spectral intensity evolution from the input pulse spectrum. The results for input peak powers of 630~W and 1.96~kW are shown in Fig.~\\ref{fig:sc}(a) and (b), respectively. For convenient visualization, the evolution is plotted on a logarithmic scale. In the case of lower peak power, one can see that the SC spectrum at the fibre output essentially consists of an isolated dispersive wave and solitons with a limited amount of red-shift. For larger input peak power, we see multiple dispersive wave emission and well-separated Raman-shifted solitons, resulting in an octave-spanning SC. Again, we can see very good visual agreement between the simulated and predicted evolution maps: all spectral features, including the dispersive waves, the Raman-shifted solitons, and the interference between them that leads to fine spectral structure, are perfectly reproduced by the RNN. Additional predictions run for 50 different input pulse peak powers (not used in the training phase) also showed very good agreement with the GNLSE simulations (RMS error $R=0.12$).\n\n\n\n\n\\section{Conclusion}\nWe have shown that machine learning techniques can bring new insight into the study and prediction of nonlinear optical systems. Specifically, we have demonstrated that a recurrent neural network with long short-term memory can learn the complex dynamics associated with the nonlinear propagation of short pulses in optical fibres, including higher-order soliton compression and supercontinuum generation, using solely the pulse intensity profile as the input condition. 
The network is also able to reproduce the dynamics in both the temporal and spectral domains, and for the particular case of higher-order soliton compression we have been able to confirm that the predicted evolution maps are also in excellent agreement with experiments. Our results are particularly significant as applications of machine learning to ultrafast dynamics have previously been restricted to slow genetic algorithms or feed-forward neural networks designed to establish the transfer function between specific input-output parameters \\cite{farfan2018femtosecond,finot2018nonlinear,andral2015fiber,kokhanovskiy2019machine,kokhanovskiy2019machine2}. \n\nFrom an application point of view, we expect that neural networks will very soon become an important and standard tool for analysing complex ultrafast dynamics, for optimizing the generation of broadband spectra and frequency combs, as well as for designing ultrafast optics experiments. Future steps may expand the parameter space of the RNN operation by including additional training variables such as the nonlinear fibre parameters. The evolution prediction may be extended to the complex field (amplitude and phase), and one could also envisage using reverse engineering to optimize the pump pulse characteristics for the generation of on-demand temporal and spectral intensity profiles at the fibre (or waveguide) output. From a more fundamental perspective, we believe that the use of recurrent neural networks will impact the future design and analysis of nonlinear physics experiments as they represent a natural candidate for exploring and analyzing complex operation regimes with long-term dependencies. 
\n\n\\section*{Acknowledgements}\nLS acknowledges the Faculty of Engineering and Natural Sciences graduate school of Tampere University.\nJD acknowledges the French Agence Nationale de la Recherche (ANR-15-IDEX-0003, ANR-17-EURE-0002).\nGG acknowledges the Academy of Finland (298463, 318082, Flagship PREIN 320165).\nThe authors also thank D. Brunner for useful discussions.\n\n\\section*{Methods}\n\n\\subsection*{Numerical simulations}\nThe numerical simulations in this work are based on the NLSE and its generalized (1+1D) extension, which describe the propagation of the slowly varying optical field envelope.\n\n\\textbf{Higher-order soliton compression.}\nWe model the propagation of short pulses in the anomalous dispersion regime of a 13~m nonlinear optical fibre. The pulses have a hyperbolic-secant intensity profile centered at 1550~nm with pulse duration and peak power varying from 0.77 to 1.43~ps and from 18.41 to 34.19~W, respectively. The nonlinear coefficient of the fibre is $\\gamma = 18.4 \\times 10^{-3}$~W$^{-1}$m$^{-1}$, and the group-velocity dispersion coefficient at 1550~nm is $\\beta_2 = -5.23 \\times 10^{-27}$~s$^2$m$^{-1}$. When comparing with the experiments, third-order dispersion ($\\beta_3 = 4.27 \\times 10^{-41}$~s$^3$m$^{-1}$) was also included in the training in addition to a small input pulse asymmetry caused by the experimental implementation \\cite{tikan2017universality}. The simulations use 1024 spectral\/temporal grid points with a temporal window size of 10~ps and a step size of 0.13 mm (10,000 steps). For completeness, shot noise is added via one-photon-per-mode with random phase in the frequency domain, although noise effects were found to play no significant physical role in the regime of coherent propagation studied here.\n\n\\textbf{Supercontinuum generation.}\nWe model the propagation of a sech-type pulse centered at 810~nm with a pulse duration of 100~fs. The peak power of the input pulse is randomly varied in the range 0.5--2~kW. 
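For illustration, the conventional split-step Fourier integration that such simulations rely on can be sketched as follows. This is a minimal sketch of the basic NLSE only (no higher-order dispersion, self-steepening or Raman terms of the GNLSE), in normalized units rather than the physical fibre parameters listed above:

```python
import numpy as np

def split_step_nlse(a0, dt, dz, n_steps, beta2, gamma):
    """Basic split-step Fourier integration of the NLSE
    i dA/dz = (beta2/2) d^2A/dT^2 - gamma |A|^2 A."""
    omega = 2.0 * np.pi * np.fft.fftfreq(a0.size, d=dt)
    lin = np.exp(0.5j * beta2 * omega**2 * dz)          # dispersion operator
    a = a0.astype(complex)
    for _ in range(n_steps):
        a = np.fft.ifft(lin * np.fft.fft(a))            # linear (dispersion) step
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz)  # nonlinear phase step
    return a

# sanity check: a fundamental (N = 1) soliton propagates essentially unchanged
t = np.linspace(-20, 20, 1024, endpoint=False)
a_out = split_step_nlse(1/np.cosh(t), t[1] - t[0], 1e-2, 100, beta2=-1.0, gamma=1.0)
```

Because both sub-steps are pure phase multiplications, the scheme conserves pulse energy exactly, which makes a convenient consistency check.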
The pulses are injected into the anomalous dispersion regime of a 20~cm nonlinear optical fibre, and the simulations include higher-order dispersion terms, self-steepening and the Raman effect. The nonlinear coefficient of the fibre is $\\gamma = 0.1$~W$^{-1}$m$^{-1}$, and the Taylor-series expansion coefficients of the dispersion at 810~nm are \n$\\beta_2 = -9.59 \\times 10^{-27}$~s$^2$m$^{-1}$,\n$\\beta_3 = 7.84 \\times 10^{-41}$~s$^3$m$^{-1}$,\n$\\beta_4 = -6.84 \\times 10^{-56}$~s$^4$m$^{-1}$,\n$\\beta_5 = -4.78 \\times 10^{-70}$~s$^5$m$^{-1}$,\n$\\beta_6 = 2.71 \\times 10^{-84}$~s$^6$m$^{-1}$ and\n$\\beta_7 = -5.00 \\times 10^{-99}$~s$^7$m$^{-1}$.\nThe simulations use 2048 spectral\/temporal grid points with a temporal window size of 5~ps and a step size of 0.02 mm (10,000 steps). Shot noise is added via one-photon-per-mode with random phase in the frequency domain, but in the coherent propagation regime studied here noise effects were found to play no significant physical role.\n\n\\subsection*{Recurrent neural networks}\n\\textbf{LSTM network operation}\nThe operation of an LSTM cell can be described at time step $t$ with input $\\mathbf{x}_t \\in \\mathbb{R}^{d_o}$ by a set of equations given by \\cite{hochreiter1997long}\n\\begin{equation}\n \\begin{split}\n \\mathbf{f}_t &= \\sigma (\\mathbf{W}_f[\\mathbf{h}_{t-1}, \\mathbf{x}_t] + \\mathbf{b}_f)\\\\\n \\mathbf{\\tilde{c}}_t &= \\text{tanh} (\\mathbf{W}_c[\\mathbf{h}_{t-1}, \\mathbf{x}_t] + \\mathbf{b}_c)\\\\\n \\mathbf{o}_t &= \\sigma (\\mathbf{W}_o[\\mathbf{h}_{t-1}, \\mathbf{x}_t] + \\mathbf{b}_o)\n \\end{split}\n\\qquad\n \\begin{split}\n \\mathbf{i}_t &= \\sigma (\\mathbf{W}_i[\\mathbf{h}_{t-1}, \\mathbf{x}_t] + \\mathbf{b}_i)\\\\\n \\mathbf{c}_t &= \\mathbf{f}_t \\odot \\mathbf{c}_{t-1} + \\mathbf{i}_t \\odot \\mathbf{\\tilde{c}}_t\\\\\n \\mathbf{h}_t &= \\mathbf{o}_t \\odot \\text{tanh}(\\mathbf{c}_t),\n \\end{split}\n\\end{equation}\nwhere $\\mathbf{f}_t$, $\\mathbf{i}_t$ and $\\mathbf{o}_t \\in \\mathbb{R}^{d_h}$ are the 
forget, input and output gate vectors, respectively, with $d_h$ denoting the dimensionality of the hidden state (i.e. the number of hidden units).\nVectors $\\mathbf{c}_t$ and $\\mathbf{h}_t \\in \\mathbb{R}^{d_h}$ are the updated cell and hidden states, respectively; $\\mathbf{W}_f$, $\\mathbf{W}_i$, $\\mathbf{W}_c$ and $\\mathbf{W}_o \\in \\mathbb{R}^{d_h \\times (d_h+d_o)}$ represent the cell weights, and $\\mathbf{b}_f$, $\\mathbf{b}_i$, $\\mathbf{b}_c$ and $\\mathbf{b}_o \\in \\mathbb{R}^{d_h}$ are the biases. The symbol $\\odot$ denotes pointwise multiplication. The weights and biases of the network are iteratively trained via backpropagation \\cite{werbos1990backpropagation}.\n\n\\textbf{Feed-forward network operation}\nThe operation of the fully-connected layers is similar to that in Ref.~\\cite{narhi2018machine}. The codes were written in Python using Keras \\cite{chollet2015keras} with the TensorFlow backend \\cite{abadi2016tensorflow}.\n\n\\textbf{Comparison between RNN prediction and (G)NLSE simulations}\nA quantitative comparison between the network-predicted evolution map and that simulated with the (G)NLSE can be performed using the average (normalized) root mean squared (RMS) error as a metric:\n\\begin{equation}\n \\label{eq:error}\n R = \\sqrt{\\frac{\\sum_{i,d} (x_{m,i,d} - \\hat{x}_{m,i,d})^2}{\\sum_{i,d} (x_{m,i,d})^2}},\n\\end{equation}\nwhere $\\mathbf{x}_m$ and $\\hat{\\mathbf{x}}_m$ denote the (G)NLSE and RNN-predicted intensity profiles for realization $m$. The indices $i$ and $d$ indicate summation over the intensity profiles and propagation steps, respectively. When evaluating the performance of the prediction over an ensemble of $M$ evolution maps, the RMS error is calculated over $M$ distinct realizations.\n\n\\textbf{Higher-order soliton compression.}\nAn ensemble of 3,000 numerical simulations was generated, of which 2,900 realizations are used for training the RNN and 100 unseen realizations are used for testing. 
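As an illustration of the LSTM cell update equations above, the following is a minimal NumPy sketch of a single step; the variable names mirror the gate vectors, and the random weights are placeholders for illustration only, not the trained network of the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell update following the gate equations above.

    W: dict of weight matrices of shape (d_h, d_h + d_o), keys 'f','i','c','o'
    b: dict of bias vectors of shape (d_h,)
    """
    z = np.concatenate([h_prev, x_t])        # concatenation [h_{t-1}, x_t]
    f = sigmoid(W['f'] @ z + b['f'])         # forget gate
    i = sigmoid(W['i'] @ z + b['i'])         # input gate
    c_tilde = np.tanh(W['c'] @ z + b['c'])   # candidate cell state
    o = sigmoid(W['o'] @ z + b['o'])         # output gate
    c = f * c_prev + i * c_tilde             # updated cell state
    h = o * np.tanh(c)                       # updated hidden state
    return h, c

# toy dimensions: d_o = 3 inputs, d_h = 4 hidden units
rng = np.random.default_rng(0)
d_o, d_h = 3, 4
W = {k: rng.standard_normal((d_h, d_h + d_o)) for k in 'fico'}
b = {k: np.zeros(d_h) for k in 'fico'}
h, c = lstm_step(rng.standard_normal(d_o), np.zeros(d_h), np.zeros(d_h), W, b)
```

Since the output gate and the hyperbolic tangent are both bounded, every component of the hidden state satisfies $|h| < 1$ regardless of the weights.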
The simulated intensity evolution maps are uniformly downsampled at a constant propagation step of $\\Delta z = 0.13~$m, yielding 101 intensity profiles along propagation for each simulated evolution map. At each of the 101 steps, the intensity profile is convolved and downsampled with a 10~fs full width at half maximum super-Gaussian temporal filter corresponding to 145 equally spaced bins in the $\\left[-0.7,+0.7\\right]$~ps time interval. The spectral intensity profiles are convolved and downsampled with a 2~nm full width at half maximum super-Gaussian spectral filter resulting in 126 equally spaced intensity bins spanning from 1425 to 1675~nm. The temporal and spectral intensity profiles are normalized by the peak intensity over all realizations. We emphasize that, because from an experimental viewpoint intensity profiles (spectral or temporal) are more straightforward to measure than the full field, we choose to use only transform-limited intensity profiles during the RNN training while the phase information is completely omitted. For both the temporal and spectral evolution, the network is trained with intensity profiles on a linear scale. When comparing with the experiments, to account for the slight input pulse asymmetry, the NLSE-simulated intensity profiles of every map used in the training phase of the RNN were convolved and downsampled with a 10~fs full width at half maximum super-Gaussian temporal filter corresponding to 151 equally spaced bins in the $\\left[-0.62,+0.85\\right]$~ps time interval. The spectral intensity profiles were convolved and downsampled similarly to the case of ideal higher-order soliton propagation but spanning from 1450 to 1700~nm.\n\nThe LSTM and two hidden layers consist of 161 nodes each with ReLU activations $f(x) = \\text{max}(0,x)$, and the output layer consists of 151 and 126 nodes for temporal and spectral predictions, respectively, with sigmoid activation $f(x) = 1\/(1+\\text{exp}(-x))$. 
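The convolve-and-downsample step described above can be sketched in NumPy as follows; the super-Gaussian order and the interpolation used for the binning are assumptions for illustration (the text specifies only the filter FWHM and the bin counts):

```python
import numpy as np

def super_gaussian(t, fwhm, order=4):
    """Super-Gaussian window equal to 1/2 at t = +/- fwhm/2 (order assumed)."""
    return np.exp(-np.log(2.0) * (2.0 * t / fwhm) ** (2 * order))

def filter_and_bin(t, intensity, fwhm, t_min, t_max, n_bins):
    """Convolve an intensity profile with a super-Gaussian filter and
    resample it on a coarse grid of n_bins equally spaced bins."""
    kernel = super_gaussian(t - t[t.size // 2], fwhm)
    kernel /= kernel.sum()                       # unit-area smoothing kernel
    smoothed = np.convolve(intensity, kernel, mode='same')
    t_bins = np.linspace(t_min, t_max, n_bins)   # equally spaced coarse grid
    return t_bins, np.interp(t_bins, t, smoothed)

# toy example: 2048-point grid over a 5 ps window, as in the simulations;
# 10 fs FWHM filter and 145 bins on [-0.7, +0.7] ps, as for the soliton case
t = np.linspace(-2.5e-12, 2.5e-12, 2048)
profile = np.exp(-(t / 50e-15) ** 2)   # dummy 50 fs Gaussian pulse
t_bins, binned = filter_and_bin(t, profile, 10e-15, -0.7e-12, 0.7e-12, 145)
```

The same routine applies to the spectral profiles with the time grid replaced by a wavelength grid and a 2~nm FWHM filter.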
The network is trained for 60 and 120 epochs with the RMSprop optimizer \\cite{tieleman2012lecture} and an adaptive learning rate for the temporal and spectral intensity predictions, respectively. \n\nThe input of the RNN consists of ten consecutive temporal or spectral intensity profiles $h_{z-10\\Delta z},\\ldots,h_{z-\\Delta z}$ at distances $z-10\\Delta z$ to $z-\\Delta z$ along the fibre. \n\nA smaller number of intensity profiles was also found to give satisfactory results, but at the expense of the relative prediction error, which increases from 0.097 to 0.174 (temporal intensity evolution of the higher-order soliton) when reducing the number of consecutive intensity profiles from ten to five. As the number of consecutive intensity profiles used in the training is increased, the training time also increases, and therefore the training process is always a compromise between the prediction accuracy and the time required to train the network.\n\n\n\\textbf{Supercontinuum generation.}\nAn ensemble of 1,300 numerical simulations was generated, of which 1,250 realizations were used for training the RNN and 50 realizations for testing. The simulated intensity evolution maps are uniformly downsampled at a constant propagation step of $\\Delta z = 0.2~$mm, yielding 200 intensity profiles along propagation for each simulated evolution. In order to reduce the computational load, when training the RNN to predict temporal intensity maps, the profiles at each of the 200 steps are convolved and downsampled with a 10~fs full width at half maximum super-Gaussian temporal filter corresponding to 276 equally spaced bins spanning the $\\left[-0.18,+1.16\\right]$~ps time interval. Note that the asymmetry in the modeled time interval is implemented to account for the soliton self-frequency shift effect. 
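The ten-profile input scheme described above can be made concrete with a short sketch (not the authors' code) that builds input/target pairs from a single simulated evolution map, assuming the target is the profile one step $\Delta z$ ahead of the input window:

```python
import numpy as np

def make_training_pairs(evolution_map, window=10):
    """Split one evolution map of shape (n_steps, n_bins) into sliding-window
    pairs: ten consecutive profiles as input, the next profile as target."""
    X, y = [], []
    for z in range(window, evolution_map.shape[0]):
        X.append(evolution_map[z - window:z])   # profiles at z-10Δz ... z-Δz
        y.append(evolution_map[z])              # profile at z
    return np.array(X), np.array(y)

# toy map: 101 propagation steps x 145 intensity bins, as for the soliton case
rng = np.random.default_rng(1)
maps = rng.random((101, 145))
X, y = make_training_pairs(maps)
# X.shape == (91, 10, 145), y.shape == (91, 145)
```

At prediction time the same window can be rolled forward recursively, feeding each predicted profile back into the input window.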
When training the RNN from spectral intensity profiles, each spectrum is convolved and downsampled with a 2~nm full width at half maximum super-Gaussian spectral filter such that the wavelength grid consists of 251 spectral intensity bins spanning from 550 to 1050~nm. The profiles are normalized by the peak intensity over all realizations.\n\n\nFor the temporal intensity evolution, the LSTM and two hidden layers consist of 300 nodes each with ReLU activations, and the output layer consists of 276 nodes with sigmoid activation. The network is trained for 120 epochs with the RMSprop optimizer and an adaptive learning rate. For the spectral intensity evolution, the LSTM and two hidden layers consist of 250 nodes each with ReLU activations, and the output layer consists of 251 nodes with sigmoid activation. The network is trained for 100 epochs. \n\n\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{introducao}\n\nBefore discussing the subject of this paper, we present two topics related to this work.\nWe do not use the second topic directly; it serves only to illustrate our main result (Theorem \\ref{teorema principal}).\n\nThe first topic is invariant metrics on homogeneous spaces. Let $M$ be a connected differentiable manifold endowed with a completely nonholonomic distribution $\\mathcal D$. 
\nAn absolutely continuous path $\\gamma$ in $M$ is horizontal if $\\gamma^\\prime(t) \\in \\mathcal D$ for almost every $t$.\nChow-Rashevskii theorem states that every pair of points in $M$ can be connected by a horizontal curve (see \\cite{Chow}, \\cite{Montgomery}, \\cite{Rashevskii}).\nWe endow each subspace $\\mathcal D_x$ of $\\mathcal D$ with a norm $F(x,\\cdot)$ such that $x \\mapsto F(x,\\cdot)$ is continuous, that is, given a horizontal continuous vector field $Y$ on $M$, then $x \\mapsto F(x,Y(x))$ is a continuous map (see \\cite{Berestovskii1}).\nNow we proceed as in the definition of sub-Riemannian geometry: \nThe $C^0$-Carnot-Carath\\'eodory-Finsler metric on $M$ is given by\n\\begin{equation}\n\\label{equacao carnot caratheodory}\nd_c(x,y)=\\inf_{\\gamma \\in \\mathcal H_{x,y}}\\int_I F(\\gamma(t),\\gamma^\\prime(t))dt,\n\\end{equation}\nwhere $\\mathcal H_{x,y}$ is the set of horizontal curves connecting $x$ and $y$ (see \\cite{Berestovskii1}, \\cite{Berestovskii2}).\n\nThe following theorem due to Berestovskii is important for this work and states that if an intrinsic metric is homogeneous, then it has the tendency to gain extra regularity.\n\n\\begin{theorem}\n\\label{Berestovskii theorem}[Berestovskii, \\cite{Berestovskii2}, Theorem 3]\nIf $(M,d)$ is locally compact, locally contractible homogeneous space, endowed with an invariant intrinsic metric $d$, then $(M,d)$ is isometric to the quotient space $G\/H$ of a Lie group $G$ over a compact subgroup $H$ of it, endowed with a $G$-invariant $C^0$-Carnot-Carath\\'eodory-Finsler metric.\n\\end{theorem}\n\nThe second topic is geometric flows in Riemannian geometry. \nThere are several of them such as the mean curvature flow (for hypersurfaces in Riemannian manifolds), the Ricci-Hamilton flow, etc. 
\nThese flows, in some specific situations, converges to a more ``homogeneous'' Riemannian metric (eventually after some normalization of the volume).\nFor instance, the normalized Ricci-Hamilton flow converges to a Riemannian metric of constant sectional curvature for $3$-dimensional closed Riemannian manifolds with strictly positive Ricci curvature (see \\cite{Hamilton1}).\nIt is an example of a dynamical system of metrics converging to a ``well behaved'' metric.\n\nNow we give an outline of this work.\n\nLet $\\varphi: G \\times M \\rightarrow M$ be a left action of a Lie group on a differentiable manifold endowed with a metric $d$ (distance function) compatible with the topology of $M$.\nAs usual, we denote $gx:=\\varphi(g,x)$. \nLet $X$ be a compact subset of $M$.\nThe isotropy subgroup of $X$ is a closed subgroup $H_X$ of $G$ given by\n\\[\nH_X=\\{g\\in G; gX=X\\}.\n\\]\nThen there exist a unique differentiable structure on $G\/H_X$, compatible with the quotient topology, such that the natural action $\\phi: G \\times G\/H_X \\rightarrow G\/H_X$, given by $\\phi(g,hH_X)=ghH_X$, is smooth.\nIn \\cite{Benetti} and \\cite{BenettiFukuoka}, the authors define a metric $d_X$ on $G\/H_X$, which is called induced Haudorff metric, by \n\\[\nd_X(gH_X,hH_X)=d_H(gX,hX),\n\\]\nwhere $d_H$ is the Hausdorff distance in $(M,d)$ and $gH_X$ denote the left coset of $g$ in $G\/H_X$.\nSuppose that $\\varphi$ is transitive and that there exist $p\\in M$ such that $H_p=H_X$.\nThen the map $\\eta: G\/H_X \\rightarrow M$ given by $gH_X \\mapsto g.p$ is a diffeomorphism such that $\\varphi(g,\\eta(hH_X))= \\eta(\\phi(g,hH_X))$, that is, $\\varphi$ can be identified with $\\phi$.\nWe use the induced Hausdorff metric in order to create a discrete dynamical system of metrics on $M$.\nDefine $d^1=\\hat d_X$, where the hat denotes the intrinsic metric induced by $d_X$.\nWe can iterate this process in $\\varphi:G\\times (M,d^1)\\rightarrow (M,d^1)$ in order to obtain $d^2$, where 
$(G\/H_X,d^1)$ is identified with $(M,d^1)$ via $\\eta$.\nThrough this iteration, we can define a sequence of metrics $d,d^1,d^2, \\ldots$ on $M$, which we call {\\it sequence of induced Hausdorff metrics}.\n\nIn this work we study the sequence of induced Hausdorff metrics in the following case:\n$M$ is the Lie group $G$ itself endowed with a metric $d$ that is bounded above by a right invariant intrinsic metric $\\mathbf d$, $\\varphi: G \\times (G,d) \\rightarrow (G,d)$ is the product of $G$ and\n$X$ is a finite subset of $G$ containing the identity element $e$ (It is implicit here that $H_X=\\{e\\}$).\nWe prove that $d^i$ converges pointwise to a metric $d^\\infty$.\nMoreover if the semigroup $S_X$ generated by $X$ is dense in $G$ and $d$ is complete, then $d^\\infty$ is the distance function of a right invariant $C^0$-Carnot-Carath\\'eodory-Finsler metric.\nWe give a necessary and sufficient condition in order to $d^\\infty$ be $C^0$-Finsler. \nIn this case, an explicit formula for the Finsler metric is obtained.\n\nOf course the comparison of our work with the Riemannian geometric flows can't be taken so literally. \nIt is given here only to illustrate the dynamics of the sequence of induced Hausdorff metrics, which in some cases converges to a more ``well behaved'' metric. \nOne important restriction is that it is defined only on left coset manifolds. On the other hand the metric $d$ is much more general than Riemannian metrics and the induced Hausdorff distance is the basic tool in order to overcome the lack of differentiability.\n\nThis work is organized as follows. \nIn Section \\ref{preliminares} we fix notations and we present definitions and results that are necessary for this work. 
\nIn Section \\ref{sequencia de metricas} we study some properties of the sequence of induced Hausdorff metrics.\nIn particular, we prove that it is increasing, and if $d$ is bounded above by a right invariant intrinsic metric, then it converges pointwise to a metric.\nThis limit metric is intrinsic if $d$ is complete.\nIn Section \\ref{existencia sx denso}, we prove that every connected Lie group $G$ admits a finite subset $X \\ni e$ such that $H_X=\\{e\\}$ and $\\bar S_X = G$.\nIn Section \\ref{Finlser fomula}, we study the case $\\bar S_X=G$, where $d$ is complete and bounded above by a intrinsic right invariant metric. \nWe prove that the sequence of induced Hausdorff metrics converges to a right invariant $C^0$-Carnot-Carath\\'eodory-Finsler metric. \nThe case where $d^\\infty$ is $C^0$-Finsler is also studied here, as explained before.\nIn Section \\ref{secao exemplos}, we give some additional examples in order to illustrate better this work.\nFinally Section \\ref{secao final} is devoted to final remarks.\n\nThe authors would like to thank Professor Luiz A. B. San Martin for some valuable suggestions.\n\n\\section{Preliminaries}\n\\label{preliminares}\n\nIn this section we present some definitions and results that are used in this work. They can be found in \\cite{Burago}, \\cite{BenettiFukuoka}, \\cite{Helgason}, \\cite{Jurdjevic}, \\cite{KobayashiNomizu1}, \\cite{KobayashiNomizu2} and \\cite{Warner}.\nFor the sake of clearness, we usually don't give the definitions and results in their most general case.\n\nLet $G$ be a group and $(M,d)$ be a metric space. \nConsider a left action $\\varphi:G\\times M \\rightarrow M$ of $G$ on $M$.\nThen $ex=x$ for every $x\\in M$ and $(gh)x=g(hx)$ for every $(g,h,x)\\in G\\times G\\times M$. \nEvery $\\varphi_g:=\\varphi(g,\\cdot)$ is a bijection. \nWe say that a left action $\\varphi:G \\times M \\rightarrow M$ is an action by isometries if every $\\varphi_g$ is an isometry. 
\nAnalogously we say that $\\varphi$ is an action by homeomorphism if $\\varphi_g$ is a homeomorphism for every $g\\in G$. \n\nLet $(M,d)$ be a metric space. \nWe denote the open ball with center $p$ and radius $r$ in $(M,d)$ by $B_d(p,r)$. \nThe closed ball is denoted by $B_d[p,r]$. \nThe topology induced by $d$ is denoted by $\\tau_d$.\nThe closure of a subset $A$ in $(M,d)$ is denoted by $\\bar A$. When more than one metric or topology are involved, for instance we have metrics $d$ and $\\rho$ and a topology $\\tau$ on $M$, we use terms like $d$-neighborhood, $\\tau$-open subset, $\\rho$-compact, etc.\n\nLet $A,B$ be compact subsets of $(M,d)$. The Hausdorff distance between $A,B$ is given by\n\\[\nd_H(A,B)=\\max\\left\\{\\sup_{x\\in A} \\inf_{y \\in B} d(x,y), \\sup_{y\\in B} \\inf_{x \\in A} d(x,y) \\right\\}.\n\\]\nIt is well known that $d_H$ is a metric on the family of compact subsets of $M$.\n\nLet $\\varphi:G \\times M \\rightarrow M$ be a left action by homeomorphisms of a group $G$ on a metric space $(M,d)$.\nIf $X\\subset M$ is a subset, then the isotropy subgroup of $G$ with respect to $X$ is defined by $H_X=\\{g\\in G;gX=X\\}$.\nSuppose that $X$ is a compact subset in $M$.\nIn Proposition 2.1 of \\cite{BenettiFukuoka} (and in \\cite{Benetti}), we define the induced Hausdorff metric on the left coset space $G\/H_X$ as $d_X(gH_X,hH_X)=d_H(gX,hX)$, where $d_H$ is the Hausdorff distance in $M$.\n\nA partition $\\mathcal{P}$ of an interval $[a,b]$ is a subset $\\{ t_0,\\ldots,t_{n_{\\mathcal P}}\\} \\subset [a,b]$ such that $a=t_0 < t_1 < \\ldots < t_{n_{\\mathcal P}} = b$. 
\nThe norm of $\\mathcal P$ is defined as $\\vert \\mathcal P\\vert=\\max_{i=1,\\ldots,n_{\\mathcal P}} \\vert t_i - t_{i-1}\\vert$.\nThe length of a path $\\gamma:[a,b] \\rightarrow M$ on a metric space $(M,d)$ is given by\n\\[\n\\ell_d(\\gamma)=\\sup_{\\mathcal{P}}\\sum_{i=1}^{n_{\\mathcal{P}}}d(\\gamma(t_i),\\gamma(t_{i-1})).\n\\]\nIt is well known that for every $\\varepsilon >0$, there exist a $\\delta >0$ such that\n\\[\n\\ell_d(\\gamma)\\leq \\sum_{i=1}^{n_{\\mathcal P}} d(\\gamma(t_i),\\gamma(t_{i-1}))+\\varepsilon\n\\]\nfor every partition $\\mathcal P$ such that $\\vert \\mathcal{P}\\vert < \\delta$ (see \\cite{Burago}).\n\nGiven a metric space $(M,d)$ we can define the extended metric (the distance can be $\\infty$)\n\\[\n\\hat d(x,y)=\\inf_{\\gamma \\in \\mathcal{C}_{x,y}}\\ell_d(\\gamma),\n\\]\non $M$, where $\\mathcal{C}_{x,y}$ is the family of paths on $(M,d)$ that connects $x$ and $y$. \nWe denote $\\mathcal C^d_{x,y}$ instead of $\\mathcal C_{x,y}$ if there exist more than one metric defined on $M$.\n$\\hat d$ is the intrinsic (extended) metric induced by $d$, we always have that $d\\leq \\hat d$. \nWe say that the metric $d$ is intrinsic if $\\hat d=d$ and we have that a metric $\\hat d$ is always intrinsic.\n\n\\begin{proposition}\n\\label{comprimentos iguais}\nLet $(M,d)$ be a metric space and $\\hat d$ be the intrinsic (extended) metric induced by $d$. \nThen $\\ell_{\\hat d}(\\gamma)=\\ell_d (\\gamma)$ for every rectifiable curve $\\gamma$ in $(M,d)$.\n\\end{proposition}\n\nIf $(M,d)$ is a metric space, $\\varepsilon>0$ and $x,y \\in M$, then an $\\varepsilon$-midpoint of $x$ and $y$ is a point $z\\in M$ that satisfies $\\vert 2d(x,z)-d(x,y)\\vert \\leq \\varepsilon$ and $\\vert 2d(y,z) - d(x,y) \\vert \\leq \\varepsilon$. \n\nWe have the following results about intrinsic metrics. 
Their proofs can be found in \\cite{Burago}.\n\n\\begin{proposition}\n\\label{existenciamidpoint}\nIf $d$ is an intrinsic metric on $M$ and $\\varepsilon> 0$, then every $x,y \\in M$ admit an $\\varepsilon$-midpoint. \n\\end{proposition}\n\n\\begin{proposition}\n\\label{midpoint}\nLet $(M,d)$ be a complete metric space. \nIf every $x,y\\in M$ admit an $\\varepsilon$-midpoint for every $\\varepsilon >0$, then $d$ is intrinsic.\n\\end{proposition}\n\n$C^0$-Carnot-Carath\\'eodory-Finsler metrics are intrinsic because they come from a length structure (see \\cite{Burago}).\nA particular case of $C^0$-Carnot-Carath\\'eodory-Finsler metrics are the $C^0$-Finsler metrics.\nLet $M$ be a differentiable manifold and $TM=\\{(x,v);x\\in M, v\\in T_xM\\}$ be its tangent bundle. \nA $C^0$-Finsler metric on $M$ is a continuous function $F:TM \\rightarrow \\mathbb{R}$ such that $F(x,\\cdot)$ is a norm on $T_xM$ for every $x\\in M$. \n$F$ induces a metric $d_F$ on $M$ given by\n\\[\nd_F(x,y)=\\inf_{\\gamma \\in \\mathcal{S}_{x,y}}\\ell_F(\\gamma),\n\\]\nwhere\n\\[\n\\ell_F(\\gamma)=\\int_a^b F(\\gamma(t),\\gamma^\\prime(t))dt\n\\]\nis the length of $\\gamma$ in $(M,F)$ and $\\mathcal{S}_{x,y}$ is the family of paths on $M$ which are piecewise smooth and connect $x$ and $y$. 
\nHere the family $\\mathcal S_{x,y}$ can be replaced by the family of absolutely continuous paths connecting $x$ and $y$ (see \\cite{Berestovskii1}).\nIf $d:M\\times M \\rightarrow \\mathbb{R}$ is a metric on $M$, then we say that $d$ is $C^0$-Finsler if there exist a $C^0$-Finsler metric $F$ on $M$ such that $d=d_F$.\nA differentiable manifold endowed with a $C^0$-Finsler metric is a $C^0$-Finsler manifold.\n\n\\begin{remark}\n\\label{outrofinsler}\nThere are another usual (in fact more usual) definition of Finsler manifold, where $F$ satisfies other conditions (see, for instance, \\cite{BaoChernShen} and \\cite{Deng}): $F$ is smooth on $TM-TM_0$, where $TM_0=\\{(p,0)\\in TM;p \\in M\\}$ is the zero section, and $F(p,\\cdot)$ is a Minkowski norm on $T_pM$ for every $p\\in M$. \nIn order to make this difference clear we put the prefix $C^0$ before Finsler.\n\n$C^0$-Finsler metrics are studied, for instance, in \\cite{Berestovskii1}, \\cite{Berestovskii2}, \\cite{Burago} and \\cite{Gribanova}.\n\\end{remark}\n\n\\begin{remark}\n\\label{relacoes Carnot Caratheodory}\nLet $M$ be a connected differentiable manifold and $\\mathcal D$ be a completely nonholonomic distribution on $M$.\nLet $F$ as in the definition of the $C^0$-Carnot-Carath\\'eodory-Finsler metric in the introduction.\nObserve that there exist smooths sections $\\mathbf g_1, \\mathbf g_2$ of inner products in $\\mathcal D$ such that $\\mathbf g_1 \\leq F \\leq \\mathbf g_2$.\nIn fact, this is clear locally, and in order to see that this holds globally, just use partition of unity.\nThis implies that balls in $C^0$-Carnot-Carath\\'eodory-Finsler metrics are contained and contains a ball in some Carnot-Carath\\'eodory metric of the same distribution. 
\nBall-box theorem gives a qualitative behavior of balls in sub-Riemannian manifolds (see \\cite{Montgomery}) and this behavior depends only on the distribution $\\mathcal D$.\nTherefore balls in $C^0$-Carnot-Carath\\'eodory-Finsler metrics of the same distribution have the same qualitative behavior.\n\nAnother consequence of the ball-box theorem is that a Carnot-Carath\\'eodory metric correspondent to a non-trivial completely nonholonomic distribution can't the bounded above by the distance function of a Riemannian metric.\nTherefore a $C^0$-Carnot-Caratheodory-Finsler metric correspondent to a non-trivial completely nonholonomic distribution can't be bounded above by a $C^0$-Finsler metric.\n\\end{remark}\n\nThe following theorem is the version of the classical Hopf-Rinow theorem for intrinsic metrics (see \\cite{Burago}).\n\n\\begin{theorem}[Hopf-Rinow-Cohn-Vossen theorem]\n\\label{hopf rinow intrinsic}\nLet $(M,d)$ be a locally compact metric space endowed with an intrinsic metric.\nThen the following assertions are equivalent:\n\\begin{itemize}\n\\item $(M,d)$ is complete;\n\\item $B_d[p,r]$ is compact for every $p \\in M$ and $r > 0$;\n\\item Every geodesic (local minimizer parameterized by arclength) $\\gamma:[0,a) \\rightarrow M$ can be extended to a continuous path $\\bar \\gamma:[0,a] \\rightarrow M$;\n\\item There exist a $p \\in M$ such that every shortest path parameterized by arclength $\\gamma:[0,a) \\rightarrow M$ satisfying $\\gamma(0)=p$ admits a continuous extension $\\bar \\gamma:[0,a] \\rightarrow M$.\n\\end{itemize}\n\\end{theorem}\n\n\\begin{lemma}\n\\label{fechado limitado compacto}\nAny closed and bounded subset of a Lie group endowed with a right invariant intrinsic metric is compact. 
\nIn particular, a right invariant intrinsic metric on a Lie group is complete.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nA right invariant intrinsic metric $\\tilde d$ is a $C^0$-Carnot-Carath\\'eodory Finsler metric that comes from a continuous right invariant section of norms $F$ in a right invariant completely nonholonomic distribution $\\mathcal D$ (see Theorem \\ref{Berestovskii theorem}). \nObserve that $\\tilde d \\geq d_{\\mathbf g}$ for some right invariant Riemannian manifold $\\mathbf g$.\nIn fact, choose a right invariant smooth section of inner products $\\bar{\\mathbf g}$ on $\\mathcal D$ such that $\\bar{\\mathbf g} \\leq F$ and then extend $\\bar{\\mathbf g}$ to the whole tangent spaces resulting in a right invariant Riemannian metric $\\mathbf g$.\nThe completeness of $(M,\\mathbf g)$ and the classical Hopf-Rinow theorem for Riemannian manifolds implies that any closed bounded subset of $(M,\\mathbf g)$ is compact.\nTherefore any closed bounded subset of $(M,\\tilde d)$ is compact because $\\tilde d \\geq d_{\\mathbf g}$.\n\nThe last statement is due to Theorem \\ref{hopf rinow intrinsic}.$\\blacksquare$\n\n\\\n\nThe rest of this section is about the orbits of a family of vector fields and it is used in Section \\ref{existencia sx denso} in order to prove the existence of $X$ such that $S_X$ is dense in $G$ (see \\cite{Jurdjevic}).\n\n\\begin{definition}\n\\label{orbita}\nLet $\\mathcal{F}$ be a family of complete vector fields on a differentiable manifold $M$ and $x\\in M$. \nFor $X \\in \\mathcal{F}$, denote by $t\\mapsto (\\exp tX)(x)$ be the integral curve of $X$ such that $(\\exp 0X) (x)=x$.\nThe orbit $G(x)$ of $\\mathcal{F}$ through $x$ is the set of points given by $(\\exp t_mX_m)(\\exp t_{m-1}X_{m-1}) \\ldots (\\exp t_1X_1)(x)$, with $m\\in \\mathbb N$, $t_i \\in \\mathbb{R}$ and $X_i \\in \\mathcal{F}$, $i=1,\\ldots m$. 
\n\\end{definition}\n\nIn the conditions of Definition \\ref{orbita}, denote by $Lie(\\mathcal{F})$ the Lie algebra generated by $\\mathcal{F}$ and let $Lie_x(\\mathcal{F})$ be the restriction of $Lie(\\mathcal{F})$ to the tangent space $T_xM$.\n\n\\begin{theorem}[Hermann-Nagano Theorem]\n\\label{nagano}\nLet $M$ be an analytic manifold and $\\mathcal{F}$ be a family of analytic vector fields on $M$. Then\n\\begin{itemize}\n\\item each orbit of $\\mathcal F$ is an analytic submanifold of $M$;\n\\item if $N$ is an orbit of $\\mathcal{F}$, then the tangent space of $N$ in $x$ is given by $\\text{Lie}_x(\\mathcal F)$.\n\\end{itemize}\n\n\\end{theorem}\n\n\\begin{corollary}\n\\label{nagano para grupos de Lie}\nLet $G$ be a connected Lie group and $V$ be a vector subspace of $\\mathfrak{g}$ such that the Lie algebra generated by $V$ is $\\mathfrak{g}$.\nLet $\\{v_1,\\ldots,v_k\\}$ be a basis of $V$ and $\\mathcal{F}$ be the set of left (or right) invariant vector fields with respect to $\\{v_1,\\ldots,v_k\\}$.\nThen, for every $x \\in G$, there exist $m\\in \\mathbb{N}$, $t_{i_1},\\ldots, t_{i_m} \\in \\mathbb R$ and $v_{i_1},\\ldots, v_{i_m} \\in X$ such that $x=\\exp(t_{i_m}v_{i_m})\\ldots \\exp(t_{i_1}v_{i_1})$.\n\\end{corollary}\n\n{\\it Proof}\n\n\\\n\nIt is enough to observe that $G$ is an analytic manifold, that the right (and left) invariant vector fields with respect to $v_1,\\ldots, v_k$ are analytic (see \\cite{Helgason}) and that $\\text{Lie}_e(\\mathcal{F})=\\mathfrak g$.\nThen the orbit of $e$ is an analytic submanifold of $G$ that contains a neighborhood of $e$ (see Theorem \\ref{nagano}) and this neighborhood generates $G$ due to the connectedness of $G$.$\\blacksquare$\n\n\n\\section{Convegence of the sequence of induced Hausdorff metrics on Lie groups}\n\\label{sequencia de metricas}\n\nLet $d$ be a metric on a Lie group $G$ such that $\\tau_d=\\tau_G$, $\\varphi: G \\times (G,d) \\rightarrow (G,d)$ be the product of $G$ and $X= \\{ x_1, \\ldots, x_k 
\\} \\ni e$ be a finite subset of $G$ such that $H_X=\\{e\\}$.\nIn this section, we study properties of the sequence of induced Hausdorff metrics $d,d^1,d^2,\\ldots,d^i, \\ldots$ on $G$.\n\nFor every $j=1, \\ldots, k$, we define the metric $d_j(p,q)=d(px_j,qx_j)$ on $G$.\n$\\tau_{d_j}=\\tau_d$ because right translations are homeomorphisms on $G$.\nDefine also the metric $d_M(p,q):=\\max\\limits_{j=1,\\ldots, k}d_j(p,q):=\\max\\limits_{j=1,\\ldots, k}d(px_j,qx_j)$ on $G$.\n\nWe first prove that $\\tau_{d_X}=\\tau_d$ (Proposition \\ref{dedXmesmatopologia}).\n\n\\begin{lemma}\n\\label{comparadXdMd}\n$d\\leq d_M$ and $d_X \\leq d_M$.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nThe first inequality is obvious. The second inequality follows because\n\\[\nd_X(p,q)\n=\\max \\left\\{ \\max_i \\min_j d (px_i,qx_j), \\max_j \\min_i d (px_i,qx_j) \\right\\}\n\\]\n\\[\n\\leq \\max \\left\\{ \\max_i d (px_i,qx_i), \\max_j d (px_j,qx_j) \\right\\}\n=\\max_i d (px_i,qx_i)\n=d_M(p,q).\\blacksquare\n\\]\n\n\\begin{lemma}\n\\label{localmenteigual}\nFor every $g\\in G$, there exists a $G$-neighborhood $V$ of $g$ such that $d_X\\vert_{V \\times V}=d_M\\vert_{V \\times V}$.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nFor $i,j=1,\\ldots , k$, $i\\neq j$, define $\\rho_{ij}:G\\times G\\rightarrow \\mathbb R$ as $\\rho_{ij}(p,q)=d(px_i,qx_j)-\\max_k d(px_k,qx_k)$.\nThen for every $g\\in G$, there exists a $G$-neighborhood $V$ of $g$ such that \n\\[\n\\rho_{ij}(p,q)>0 \\text{ for every } p,q \\in V \\text{ and every }i\\neq j,\n\\]\nbecause $\\rho_{ij}(g,g)>0$.\nThis implies that\n\\[\nd_X(p,q)\n=\\max \\left\\{ \\max_i \\min_j d (px_i,qx_j), \\max_j \\min_i d (px_i,qx_j) \\right\\}\n\\]\n\\begin{equation}\n\\label{vizinhanca pequena}\n=\\max \\left\\{ \\max_i d (px_i,qx_i), \\max_j d (px_j,qx_j) \\right\\}\n=\\max_i d(px_i,qx_i)=d_M(p,q)\n\\end{equation}\nfor every $p,q \\in V$.$\\blacksquare$\n\n\\begin{remark}\nThe formula $d_X(p,q)=\\max_i d(px_i,qx_i)$ for every $p,q$ in a sufficiently 
small neighborhood of $g$ will be used several times in this work.\n\\end{remark}\n\n\\begin{lemma}\n\\label{dMdmesmatopologia} $\\tau_d = \\tau_{d_M}(=\\tau_G)$ on $G$.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nThe inequality $d\\leq d_M$ implies that $\\tau_d \\subset \\tau_{d_M}$.\n\nIn order to see that $\\tau_{d_M} \\subset \\tau_d$, observe that an open ball $B_{d_M}(p,r)$ can be written as\n\\[\nB_{d_M}(p,r)=\\bigcap_{j=1}^k B_{d_j}(p,r),\n\\]\nwhich implies that it is an open subset in $\\tau_d$. Therefore $\\tau_{d_M}\\subset \\tau_d$.$\\blacksquare$\n\n\\begin{lemma}\n\\label{BolasdXcompactas}\n$B_{d_X}(p,r)$ is contained in a $G$-compact subset of $G$ for a sufficiently small $r > 0$.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nJust observe that\n\\[\nB:=\\bigcup_{i=1}^k B_d(px_i,r)\\supset B_{d_X}(p,r).\n\\]\nIn fact, if $y \\not \\in B$, then\n\\[\n\\begin{array}{ccl}\nd_X(p,y)\n&\n=\n&\n\\max\\left\\{ \\max\\limits_i \\min\\limits_j d (px_i,yx_j), \\max\\limits_j \\min\\limits_i d (px_i,yx_j) \\right\\} \\\\\n&\n\\geq \n&\n\\max\\left\\{ \\max\\limits_i \\min\\limits_j d (px_i,yx_j), \\min\\limits_i d (px_i,y.e) \\right\\} \\\\\n&\n\\geq\n& \n\\max\\left\\{ \\max\\limits_i \\min\\limits_j d (px_i,yx_j), r \\right\\}\n\\geq r,\n\\end{array}\n\\]\nwhere the second inequality holds because $y \\not \\in B$. 
\nThen the result follows because $B$ is contained in a $G$-compact subset of $G$ for a sufficiently small $r$.\n\n\\begin{proposition}\n\\label{dedXmesmatopologia}\n$\\tau_d = \\tau_{d_X}$ on $G$.\n\\end{proposition}\n\n{\\it Proof}\n\n\\\n\nIt is enough to prove that $\\tau_{d_X} = \\tau_{d_M}$ due to Lemma \\ref{dMdmesmatopologia}.\n\nWe know that $d_X \\leq d_M$ (Lemma \\ref{comparadXdMd}), what implies that $\\tau_{d_X} \\subset \\tau_{d_M}$.\n\nIn order to prove that $\\tau_{d_M} \\subset \\tau_{d_X}$, we consider an open ball $B_{d_M}(p,r)$ and we prove that there exist an $\\varepsilon > 0$ such that $B_{d_X}(p,\\varepsilon)\\subset B_{d_M}(p,r)$.\nWithout loss of generality, we can consider $r > 0$ such that $B_{d_X}(p,r)$ has compact closure (see Lemma \\ref{BolasdXcompactas}).\nConsider a $G$-neighborhood $V$ of $p$ according to Lemma \\ref{localmenteigual}.\nWe can eventually consider a (further) smaller $r$ in such a way that $B_{d_M}(p,r) \\subset V$ (see Lemma \\ref{dMdmesmatopologia}). Then $B_{d_X}(p,r)\\cap V=B_{d_M}(p,r)$ due to Lemma \\ref{localmenteigual}.\n\nIf $B_{d_X}(p,r)-V$ is the empty subset, then we have that $B_{d_X}(p,r) = B_{d_M}(p,r)$ and we are done. \nOtherwise we consider\n\\[\n2\\varepsilon=\\inf\\limits_{x \\in B_{d_X}(p,r)-V}d_X(p,x).\n\\]\nObserve $d_X$ is $d \\times d$-continuous because $\\tau_{d_X}\\subset \\tau_{d}$ and $\\varepsilon$ is strictly positive because $p$ is not contained in the $G$-compact subset $\\overline{B_{d_X}(p,r)-V} $. \nTherefore\n\\[\nB_{d_X}\\left(p,\\varepsilon\\right)\n=B_{d_X}\\left(p,\\varepsilon \\right) \\cap V\n\\subset B_{d_M}(p,r)\n\\]\nwhat settles the proposition.$\\blacksquare$\n\n\\begin{definition}\n\\label{semigrupo}\nLet $X=\\{x_1,\\ldots, x_k\\}\\ni e$ be a finite subset of a Lie group $G$. 
The semigroup generated by $X$ is defined as \n\\[\nS_X=\\{x_{i_1}\\ldots x_{i_m};m\\in \\mathbb N, i_j \\in \\{1,\\ldots, k\\}, j\\in \\{1,\\ldots, m\\}\\}.\n\\]\n\\end{definition}\n\nIn what follows, if $X=\\{x_1,\\ldots, x_k\\}$, then $X^{-1}:=\\{x_1^{-1}, \\ldots, x_k^{-1}\\}$.\n\n\\begin{proposition}\n\\label{x e menos x}\nLet $X$ be a finite subset of a Lie group $G$. \nThen $S_X$ is dense in $G$ iff $S_{X^{-1}}$ is dense in $G$.\n\\end{proposition}\n\n{\\it Proof}\n\n\\\n\nIt is enough to observe that if $i:G \\rightarrow G$ is the inversion map, then $i(S_X)=S_{X^{-1}}$.$\\blacksquare$\n\n\\\n\n\\begin{lemma}\n\\label{d1 e maior}\nLet $G$ be a Lie group and\n$d:G\\times G\\rightarrow \\mathbb R$ be a metric on $G$ such that $\\tau_d=\\tau_G$. \nLet $\\varphi: G \\times (G,d)\\rightarrow (G,d)$ be the product of $G$ and\n$X=\\{x_1,\\ldots, x_k\\}\\ni e$ be a finite subset of $G$ such that $H_X=\\{e\\}$.\nThen $d\\leq d^1$ and the sequence of induced Hausdorff metrics is increasing.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nLet $x,y \\in G$ and $\\gamma:[a,b]\\rightarrow G$ be a $d_X$-path (which is also a $G$-path due to Proposition \\ref{dedXmesmatopologia}) connecting $x$ and $y$ (if there is no path connecting $x$ and $y$, then $d(x,y)\\leq d^1(x,y) = \\infty$).\nThen\n\\[\n\\ell_{d_X}(\\gamma)=\\sup_{\\mathcal P}\\sum_{i=1}^{n_{\\mathcal P}}d_X(\\gamma(t_i),\\gamma(t_{i-1})).\n\\]\nCover $\\gamma([a,b])$ with open subsets $V$ of Lemma \\ref{localmenteigual} such that (\\ref{vizinhanca pequena}) holds. \nLet $\\varepsilon$ be the $d$-Lebesgue number of this covering. Let $\\delta >0$ be such that if $\\vert t_1 - t_2\\vert < \\delta$, then $d(\\gamma(t_2),\\gamma(t_1))<\\varepsilon$.\nConsider only partitions with norm less than $\\delta$. 
Then\n\\begin{equation}\n\\label{calculo comprimento}\n\\ell_{d_X}(\\gamma)\n=\n\\sup_{\\mathcal P}\\sum_{i=1}^{n_{\\mathcal P}}\\max_j d(\\gamma(t_i)x_j,\\gamma(t_{i-1})x_j) \n\\geq\n\\sup_{\\mathcal P}\\sum_{i=1}^{n_{\\mathcal P}} d(\\gamma(t_i),\\gamma(t_{i-1})) = \\ell_d(\\gamma)\n\\end{equation}\ndue to (\\ref{vizinhanca pequena}).\nTherefore\n\\[\nd^1(x,y)\n=\\inf_{\\gamma \\in \\mathcal C_{x,y}^{d_X}} \\ell_{d_X}(\\gamma)\n\\geq \\inf_{\\gamma \\in \\mathcal C_{x,y}^d} \\ell_{d}(\\gamma)\n=\\hat d(x,y)\\geq d(x,y).\\blacksquare\n\\]\n\n\n\n\\begin{theorem}\n\\label{local maximo 1}\nLet $G$ be a Lie group and\n$d:G\\times G\\rightarrow \\mathbb R$ be a metric on $G$ such that $\\tau_d=\\tau_G$.\nLet $\\varphi: G \\times (G,d)\\rightarrow (G,d)$ be the product of $G$ and\n$X=\\{x_1,\\ldots, x_k\\}\\ni e$ be a finite subset of $G$ such that $H_X=\\{e\\}$.\nSuppose that $d$ is bounded above by a right invariant intrinsic metric $\\mathbf d$ (which implies that $G$ is connected). \nThen every $d^i$ is bounded above by $\\mathbf d$ and the sequence of induced Hausdorff metrics converges pointwise to a metric $d^\\infty$.\n\\end{theorem}\n\n{\\it Proof}\n\n\\\n\nWe prove that if $d\\leq \\mathbf d$, then $d^1 \\leq \\mathbf d$.\nThis is enough to prove that $d^i\\leq \\mathbf d$ for every $i$: just iterate the process, replacing $d$ by $d^{i-1}$.\n\nLet $\\gamma$ be a path defined on a closed interval. Then\n\\begin{eqnarray}\n\\ell_{d_X}(\\gamma)\n&\n=\n&\n\\sup_{\\mathcal P}\\sum_{i=1}^{n_{\\mathcal P}}\\max_j d(\\gamma(t_i)x_j,\\gamma(t_{i-1})x_j) \\nonumber \\\\\n&\n\\leq\n& \n\\sup\\limits_{\\mathcal P}\\sum\\limits_{i=1}^{n_{\\mathcal P}}\\max\\limits_j \\mathbf d(\\gamma(t_i)x_j,\\gamma(t_{i-1})x_j) \\nonumber\n\\\\\n&\n=\n&\n\\sup\\limits_{\\mathcal P}\\sum\\limits_{i=1}^{n_{\\mathcal P}} \\mathbf d (\\gamma(t_i),\\gamma(t_{i-1}))= \\ell_{\\mathbf d}(\\gamma) \\label{majoracao}\n\\end{eqnarray}\nbecause $\\mathbf d$ is right invariant. 
\nFrom (\\ref{majoracao}) and the fact that $\\mathbf d$ is intrinsic, we have that \n\\[\nd^1(x,y) \n= \n\\inf_{\\gamma \\in \\mathcal{C}_{x,y}^{d_X}} \\ell_{d_X}(\\gamma) \n\\leq \n\\inf_{\\gamma \\in \\mathcal{C}_{x,y}^{\\mathbf d}} \\ell_{\\mathbf d}(\\gamma)\n=\n\\mathbf d(x,y)\n\\]\nfor every $x,y \\in G$ (observe that $\\mathcal C_{x,y}^{d_X}= \\mathcal C_{x,y}^{\\mathbf d}$ due to Proposition \\ref{dedXmesmatopologia}).\n\nObserve that $d^i$ is an increasing sequence of metrics bounded above by $\\mathbf d$.\nThen $d^i$ converges pointwise to a function $d^\\infty$ and it is easy to prove that $d^\\infty$ is a metric.$\\blacksquare$\n\n\\\n\nObserve that if $d$ is a metric on a group $G$, then $\\bar d: G \\times G \\rightarrow \\mathbb R$, defined as $\\bar d(x,y)=\\sup_{\\sigma \\in G}d(x\\sigma,y\\sigma)$, is a right invariant extended metric. \nIts intrinsic extended metric $\\hat{\\bar d}(x,y)=\\inf_{\\gamma \\in {\\mathcal C}^{\\bar{d}}_{x,y}} \\ell_{\\bar d}(\\gamma)$ is also right invariant.\n\n\\begin{corollary}\n\\label{d barra chapeu limitante} \nLet $G$ be a Lie group and\n$d:G\\times G\\rightarrow \\mathbb R$ be a metric on $G$ such that $\\tau_d=\\tau_G$.\nLet $\\varphi: G \\times (G,d)\\rightarrow (G,d)$ be the product of $G$ and\n$X=\\{x_1,\\ldots, x_k\\}\\ni e$ be a finite subset of $G$ such that $H_X=\\{e\\}$. 
\nSuppose that $\\hat{\\bar{d}}$ is a metric on $G$.\nThen the sequence of induced Hausdorff metrics converges to a metric $d^\\infty$ on $G$.\n\\end{corollary}\n\n{\\it Proof}\n\n\\\n\nIt is enough to see that $\\hat{\\bar{d}}$ is a right invariant intrinsic metric such that $d\\leq \\hat{\\bar{d}}$.$\\blacksquare$ \n\n\\begin{lemma}\n\\label{translacao a direita decresce}\nLet $d^i$ be a sequence of induced Hausdorff metrics converging to $d^\\infty$.\nThen $d^\\infty(x,y) \\geq d^\\infty(x \\sigma,y \\sigma)$ for every $\\sigma \\in \\bar S_X$ and every $x,y \\in G$.\nIn particular $\\ell_{d^\\infty}(\\gamma) \\geq \\ell_{d^\\infty}(\\gamma \\sigma)$ for every $\\sigma \\in \\bar S_X$ and every path $\\gamma$. \nMoreover, if $\\bar S_X$ is a subgroup, then $d^\\infty$ is invariant by the right action of $\\bar S_X$. \nIn particular, if $\\bar S_X=G$, then $d^\\infty$ is right invariant.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nIn order to prove that $d^\\infty(x,y) \\geq d^\\infty(x\\sigma,y \\sigma)$ for every $(x,y,\\sigma) \\in G \\times G \\times \\bar S_X$, it is enough to prove that $d^\\infty(x,y) \\geq d^\\infty(x x_j,y x_j)$ for every $(x,y,x_j) \\in G \\times G \\times X$ because every $\\sigma \\in \\bar S_X$ can be arbitrarily approximated by products of elements of $X$.\n\nLet $d$ be a metric on $G$.\nThen \n\n\\begin{eqnarray} \n\\ell_{d^1}(\\gamma) &\n= & \\sup\\limits_{\\mathcal P}\\sum\\limits_{i=1}^{n_{\\mathcal P}} d^1(\\gamma(t_i),\\gamma(t_{i-1})) \n= \n\\sup\\limits_{\\mathcal P}\\sum\\limits_{i=1}^{n_{\\mathcal P}} d_X(\\gamma(t_i),\\gamma(t_{i-1})) \\nonumber \\\\\n&\n=\n&\n\\sup\\limits_{\\mathcal P}\\sum\\limits_{i=1}^{n_{\\mathcal P}} \\max\\limits_j d(\\gamma(t_i)x_j,\\gamma(t_{i-1})x_j) \\nonumber \\\\\n& \n\\geq\n&\n\\sup\\limits_{\\mathcal P}\\sum\\limits_{i=1}^{n_{\\mathcal P}} d(\\gamma(t_i)x_j,\\gamma(t_{i-1})x_j)=\\ell_{d}(\\gamma x_j) \\label{multiplica xj diminui}\n\\end{eqnarray}\nfor every $x_j \\in X$.\nThe second equality 
holds due to Proposition \\ref{comprimentos iguais}.\n\nFix $(x,y,x_j) \\in G \\times G \\times X$. Observe that $d^i(x,y)\\geq d^{i-1}(x x_j, y x_j)$ for $i \\geq 2$ due to (\\ref{multiplica xj diminui}), that $\\lim_{i \\rightarrow \\infty}d^i(x,y)=d^\\infty(x,y)$ and that $\\lim_{i\\rightarrow \\infty} d^{i-1}(x x_j, y x_j)=d^\\infty (xx_j, yx_j)$.\nThen $d^\\infty(x,y)\\geq d^\\infty(xx_j, yx_j)$, which implies that $d^\\infty(x,y)\\geq d^\\infty(x\\sigma, y\\sigma)$ for every $\\sigma \\in \\bar S_X$.\n\nIn order to prove that if $\\bar S_X$ is a subgroup, then $d^\\infty$ is invariant by the right action of $\\bar S_X$, it is enough to observe that $d^\\infty(x,y)\\geq d^\\infty(x\\sigma, y\\sigma)\\geq d^\\infty(x\\sigma \\sigma^{-1}, y\\sigma\\sigma^{-1})=d^\\infty(x,y)$.$\\blacksquare$\n\n\\\n\nNow we prove that if $d$ is complete and bounded above by a right invariant intrinsic metric, then $d^\\infty$ is intrinsic (Theorem \\ref{completo eh intrinseco}). \n\n\\begin{lemma}\n\\label{osdoiscompletos}\nLet $M$ be a set and suppose that $d$ and $\\rho$ are metrics on $M$ such that $d \\leq \\rho$ and $\\tau_d=\\tau_\\rho$. \nIf $d$ is complete, then $\\rho$ is also complete. \n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nJust notice that every Cauchy sequence in $(M,\\rho)$ is a Cauchy sequence in $(M,d)$. \nThen the sequence converges in $(M,d)$ because $d$ is complete, and it converges in $(M,\\rho)$ as well because $\\tau_d=\\tau_\\rho$.$\\blacksquare$\n\n\\begin{lemma}\n\\label{limiteintrinseco}\nLet $M$ be a locally compact topological space and suppose that $d^i$ is a sequence of intrinsic metrics on $M$ that converges pointwise to a metric $d^\\infty$.\nSuppose that there exist a complete intrinsic metric $d_l$ and a metric $d_h$ on $M$ such that $\\tau_{d_l}=\\tau_{d_h}=\\tau_M$ and $d_l \\leq d^i \\leq d_h$ for every $i$.\nThen $d^\\infty$ is a complete intrinsic metric.\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nFirst of all, $d^\\infty$ is complete due to Lemma \\ref{osdoiscompletos} and because $d^\\infty \\geq d_l$. 
Moreover it is straightforward that $\\tau_{d^\\infty}=\\tau_M$.\n\nLet $\\varepsilon >0$ and $x,y \\in M$. \nWe will prove that $x$ and $y$ admit an $\\varepsilon$-midpoint $z$ with respect to the metric $d^\\infty$ (see Proposition \\ref{midpoint}).\n\nLet $z_i$ be an $\\varepsilon \/3$-midpoint of $x$ and $y$ with respect to the metric $d^i$ (see Proposition \\ref{existenciamidpoint}). We claim that the sequence $z_i$ is contained in a compact subset.\nIn fact \n\\[\n\\left\\vert 2d^i(x,z_i)-d^i(x,y) \\right\\vert \\leq \\frac{\\varepsilon}{3}\n\\]\nimplies that\n\\[\n2d^i(x,z_i) \n\\leq d^i(x,y) + \\frac{\\varepsilon}{3}\n\\]\nand\n\\[\n2d_l(x,z_i) \n\\leq 2d^i(x,z_i) \n\\leq d^i(x,y) + \\frac{\\varepsilon}{3} \n\\leq d_h(x,y) + \\frac{\\varepsilon}{3}.\n\\]\nThen the sequence $z_i$ is bounded in $(M,d_l)$ and it is contained in a compact subset due to Theorem \\ref{hopf rinow intrinsic}.\n\nLet $z$ be an accumulation point of $z_i$.\nWe claim that $z$ is an $\\varepsilon$-midpoint with respect to $d^\\infty$.\nFirst of all, taking a subsequence $z_i$ converging to $z$, we claim that $d^i(x,z_i)$ converges to $d^\\infty (x,z)$ as $i$ goes to infinity.\nIn fact, for every $\\varepsilon >0$, there exists $N_1\\in \\mathbb N$ such that\n\\[\n\\vert d^\\infty(x,z) - d^i(x,z_i) \\vert \n\\leq \\vert d^\\infty(x,z) - d^i(x,z) + d^i(x,z)- d^i(x,z_i) \\vert\n\\]\n\\[\n\\leq \\vert d^\\infty(x,z) - d^i(x,z)\\vert + d_h(z,z_i) < \\varepsilon \/3\n\\]\nfor every $i\\geq N_1$ (here we use that $\\vert d^i(x,z)- d^i(x,z_i) \\vert \\leq d^i(z,z_i) \\leq d_h(z,z_i)$), and $d^i(x,z_i)$ converges to $d^\\infty(x,z)$. Then for every $\\varepsilon >0$, there exists an $N_2 \\in \\mathbb{N}$ such that\n\\[\n\\vert 2d^\\infty(x,z) - d^\\infty(x,y)\\vert\n\\]\n\\[\n\\leq \\vert 2d^\\infty(x,z)-2d^i(x,z_i) \\vert+ \\vert 2d^i(x,z_i)-d^i(x,y) \\vert + \\vert d^i(x,y)-d^\\infty(x,y) \\vert \n< \\varepsilon\n\\]\nfor every $i \\geq N_2$. The inequality \n\\[\n\\vert 2d^\\infty(y,z) - d^\\infty(x,y)\\vert < \\varepsilon\n\\]\nis analogous. 
Then $z$ is an $\\varepsilon$-midpoint of $x$ and $y$ in $(M,d^\\infty)$ and $d^\\infty$ is intrinsic due to Proposition \\ref{midpoint}.$\\blacksquare$\n\n\\begin{theorem}\n\\label{completo eh intrinseco}\nLet $G$ be a Lie group and\n$d:G\\times G\\rightarrow \\mathbb R$ be a complete metric on $G$ such that $\\tau_d=\\tau_G$.\nLet $\\varphi: G \\times (G,d)\\rightarrow (G,d)$ be the product of $G$ and\n$X=\\{x_1,\\ldots, x_k\\}\\ni e$ be a finite subset of $G$ such that $H_X=\\{e\\}$. \nSuppose that there exists an intrinsic right invariant metric $\\mathbf d$ on $G$ such that $d \\leq \\mathbf d$.\nThen the sequence of induced Hausdorff metrics converges to a complete and intrinsic metric $d^\\infty$. \n\\end{theorem}\n\n{\\it Proof}\n\n\\\n\nObserve that $\\tau_{\\mathbf d} = \\tau_d = \\tau_G$ because $\\mathbf d$ is $C^0$-Carnot-Carath\\'eodory-Finsler and $d\\leq d^i \\leq \\mathbf d$.\nNotice that the sequence $(d^i)_{i \\in \\mathbb N}$ converges to a metric $d^\\infty$ due to Theorem \\ref{local maximo 1} and that the hypotheses of Lemma \\ref{limiteintrinseco} are satisfied, with $d_h = \\mathbf d$ and $d_l=d^1$.\nTherefore $d^\\infty$ is complete and intrinsic.$\\blacksquare$\n\n\\section{Existence of dense semigroups $S_X$}\n\\label{existencia sx denso}\n\nLet $G$ be a connected Lie group. 
\nIn this section we prove the existence of a finite subset $X=\\{x_1,\\ldots, x_k\\}\\ni e$ of $G$ such that $H_X=\\{e\\}$ and $\\bar S_X = G$.\n\nWe begin with an example.\n\n\\begin{example}\n\\label{denso na reta}\nLet $G=\\mathbb R$ be the additive group of real numbers and consider $X=\\{-1,0,\\sqrt 2\\}$.\nWe claim that $S_X$ is dense.\nLet $q \\in \\mathbb{R}$ and $\\varepsilon >0$.\nWe will find a $q_\\varepsilon \\in S_X$ such that $\\vert q - q_\\varepsilon \\vert < \\varepsilon$.\n\nIt is easy to see that there is a sequence $p_1,p_2,\\ldots$ such that $p_{i+1}-p_i = -1$ or $p_{i+1}-p_i=\\sqrt{2}$ and an infinite number of the $p_i$'s are in $[0,1]$.\nTherefore there exist $p_j$ and $p_k$ in $S_X$, $j0$. \nThen there exists an element $p \\in S_X$ such that $p 0$ such that $\\hat{\\bar d}(\\exp(tv),e)$ is finite for every $v \\in \\mathfrak g$ and $t \\in (-\\delta,\\delta)$, then it is straightforward that $\\hat{\\bar{d}}(x,y)$ is finite for every $x,y \\in G$ due to the right invariance of $\\hat{\\bar d}$ and the connectedness of $G$. \nHence, if $\\hat{\\bar{d}}(x,y)= \\infty$ for some $x,y \\in G$, then there exist $v \\in \\mathfrak g$ and $t\\in \\mathbb R$ such that $\\hat{\\bar d}(\\exp(tv),e) = \\infty$.\nIt implies that for every $C>0$, there exists a partition $\\mathcal P$ of $[0,t]$ such that\n\\[\n\\sum_{i=1}^{n_{\\mathcal P}}\\bar d(\\exp(t_iv),\\exp(t_{i-1}v)) \n= \n\\sum_{i=1}^{n_{\\mathcal P}}\\bar d(\\exp((t_i-t_{i-1})v),e) > Ct\n\\] \ndue to the right invariance of $\\bar d$. Then the average of\n\\[\n\\frac{\\bar d(\\exp((t_i-t_{i-1})v),e)}{t_i - t_{i-1}}\n\\]\nis greater than $C$. Therefore there exists $t > 0$ such that\n\\[\n\\frac{\\bar d(\\exp(tv),e)}{t}>C.\n\\]\nThus \n\\[\n\\sup_{\\sigma \\in G}\\sup_{t \\neq 0} \\frac{d(\\exp(tv) \\sigma, \\sigma)}{\\vert t\\vert} = \\sup_{t \\neq 0}\\frac{\\bar d(\\exp(tv),e)}{\\vert t\\vert}\n =\\infty. 
\\blacksquare\n\\]\n\n\\begin{lemma}\n\\label{supelimsup}\nLet $G$ be a Lie group, $d$ be a metric on $G$ satisfying $\\tau_d = \\tau_G$ and $v \\in \\mathfrak g$.\nThen\n\\begin{equation}\n\\label{igualdade limsup sup 2}\n\\sup_{\\sigma \\in G}\\limsup_{t \\rightarrow 0} \\frac{d(\\exp(tv) \\sigma, \\sigma)}{\\vert t\\vert}\n =\\sup_{\\sigma \\in G}\\sup_{t \\neq 0} \\frac{d(\\exp(tv) \\sigma, \\sigma)}{\\vert t\\vert}.\n\\end{equation}\n\n\\end{lemma}\n\n{\\it Proof}\n\n\\\n\nThe inequality $\\leq$ follows directly from the definition of $\\limsup$. \nLet us prove the inequality $\\geq$.\nIf the left hand side of (\\ref{igualdade limsup sup 2}) is infinity, then there is nothing to prove.\nSuppose that it is equal to $L \\in \\mathbb R$.\nIt implies that \n\\[\n\\limsup_{t \\rightarrow 0} \\frac{d(\\exp(tv)\\sigma, \\sigma)}{\\vert t\\vert}\\leq L\n\\]\nfor every $\\sigma \\in G$.\nThen for every $\\varepsilon >0$ and $\\sigma \\in G$, there exists a $\\delta >0$ such that \n\\[\nd(\\exp(tv)\\sigma, \\sigma)\n\\leq (L + \\varepsilon)\\vert t \\vert\n\\]\nwhenever $t \\in (-\\delta, \\delta)$. \n\nNow fix $\\sigma \\in G$ and $t > 0$.\nDefine $\\gamma:[0,t] \\rightarrow G$ as $\\gamma(s) = \\exp(sv)\\sigma$.\nWe can find a partition $\\mathcal P=\\{0=t_0 < t_1 < \\ldots < t_k = t\\}$ such that \n\\begin{equation}\n\\label{limita inclinacao}\nd(\\exp(t_{i}v)\\sigma,\\exp(t_{i-1}v)\\sigma) \\leq (L + \\varepsilon)(t_i - t_{i-1})\n\\end{equation}\nfor every $i$.\nIn fact, for every $s\\in [0,t]$, we choose $\\delta_s >0$ such that \n\\[\nd(\\exp(\\tau v)\\sigma,\\exp(sv)\\sigma) \\leq (L + \\varepsilon)\\vert \\tau - s \\vert\n\\]\nfor every $\\tau \\in I_s:=[0,t] \\cap (s -\\delta_s, s + \\delta_s)$. \nIf $s \\in (0,t)$, we can choose $I_s = (s -\\delta_s, s + \\delta_s)$.\nFrom $\\{I_s\\}_{s \\in [0,t]}$, we can choose a finite subcover $\\{I_{\\tilde k}\\}$. 
\nFrom this finite subcover, we drop an $I_{\\tilde k_1}$, one at a time, whenever there exists an $I_{\\tilde k_2}$ such that $I_{\\tilde k_1} \\subset I_{\\tilde k_2}$.\nWe end up with a family in which no element is contained in another one. \nDenote this family by $\\{[0,0 + \\delta_0),(t_2 - \\delta_{t_2},t_2 + \\delta_{t_2}),(t_4 - \\delta_{t_4}, t_4 + \\delta_{t_4}), \\ldots,(t-\\delta_t,t]\\}$, with $t_{i-2} < t_i$ for every $i$ (the case $t_{i-2} = t_i$ does not occur).\nIt is not difficult to see that $0 < t_2-\\delta_{t_2} < t_4 - \\delta_{t_4} < \\ldots < t - \\delta_t$ and $0 + \\delta_0 < t_2 + \\delta_{t_2} < t_4 + \\delta_{t_4} < \\ldots < t$. \nNow we choose the ``odd'' points. \nThe point $t_1 \\in (0,t_2)$ is chosen in $[0,0 + \\delta_0) \\cap (t_2 - \\delta_{t_2},t_2 + \\delta_{t_2})$. \nThe point $t_3 \\in (t_2,t_4)$ is chosen in $ (t_2 - \\delta_{t_2},t_2 + \\delta_{t_2}) \\cap (t_4 - \\delta_{t_4}, t_4 + \\delta_{t_4}) $ and so on.\nThen $\\mathcal P=\\{0=t_0 < t_1 < \\ldots < t_k = t\\}$ is such that (\\ref{limita inclinacao}) is satisfied.\n\nTherefore \n\\[\n\\begin{array}{ccl}\nd(\\exp(tv)\\sigma,\\sigma) & \\leq & \\sum\\limits_{i=1}^{n_\\mathcal P} d(\\exp(t_{i}v)\\sigma,\\exp(t_{i-1}v)\\sigma) \\leq (L+\\varepsilon)\\vert t\\vert\n\\end{array}\n\\]\nfor every $\\sigma$ and $t>0$, and we can conclude that\n\\[\n\\sup_{\\sigma \\in G}\\sup_{t > 0} \\frac{d(\\exp(tv) \\sigma, \\sigma)}{\\vert t\\vert} \\leq L.\n\\]\n\nFor $t<0$, we have\n\\begin{eqnarray}\n\\sup_{\\sigma \\in G}\\sup_{t < 0} \\frac{d(\\exp(tv) \\sigma, \\sigma)}{\\vert t\\vert} & = & \\sup_{\\sigma \\in G}\\sup_{t > 0} \\frac{d(\\exp(t(-v)) \\sigma, \\sigma)}{\\vert t\\vert} \\nonumber \\\\ \n& \\leq & \\sup_{\\sigma \\in G}\\limsup_{t \\rightarrow 0} \\frac{d(\\exp(t(-v)) \\sigma, \\sigma)}{\\vert t\\vert} \\nonumber \\\\\n& = & \\sup_{\\sigma \\in G}\\limsup_{t \\rightarrow 0} \\frac{d(\\exp(tv) \\sigma, \\sigma)}{\\vert t\\vert} = L,\n\\end{eqnarray}\nwhich settles the 
lemma.$\\blacksquare$\n\n\\\n\nIn what follows we identify $G \\times \\mathfrak g$ with $TG$ through the correspondence $(g,v) \\mapsto (g,dR_g(v))$, where $R_g$ denotes the right translation by $g$ on $G$.\n\n\\begin{theorem}\n\\label{teorema principal}\nLet $G$ be a connected Lie group endowed with a complete metric $d$ such that $\\tau_d=\\tau_G$, $\\varphi: G \\times G \\rightarrow G$ be the product of $G$ and $X=\\{x_1,\\ldots, x_k\\} \\ni e$ be a finite subset of $G$ such that $H_X=\\{e\\}$ and $\\bar S_X=G$.\n\\begin{enumerate}\n\\item If $d$ is bounded above by a right invariant intrinsic metric $\\mathbf d$, then the sequence of induced Hausdorff metrics converges to a right invariant $C^0$-Carnot-Carath\\'eodory-Finsler metric $d^\\infty$.\nIn particular, if $\\hat{\\bar{d}}$ is a metric, then $d^\\infty$ is a right invariant $C^0$-Carnot-Carath\\'eodory-Finsler metric.\n\\item If $d$ is bounded above by a right invariant $C^0$-Finsler metric $\\mathbf d$, then the sequence of induced Hausdorff metrics converges to a right invariant $C^0$-Finsler metric $d^\\infty$.\n\\item Suppose that the sequence of induced Hausdorff metrics converges pointwise to a metric $d^\\infty$.\nThen $d^\\infty$ is $C^0$-Finsler iff \n\\begin{equation}\n\\label{f tilde}\n\\tilde F(g,v):=\\sup_{\\sigma\\in G}\\limsup_{t \\rightarrow 0}\\frac{d(\\exp(tv)g\\sigma,g\\sigma)}{\\vert t \\vert}\n\\end{equation}\nis finite for every $(g,v) \\in TG$, and in this case $\\tilde F$ is the Finsler metric on $G$.\n\\end{enumerate}\n\\end{theorem}\n\n{\\it Proof}\n\n\\\n\n(1) We know that the sequence of induced Hausdorff metrics converges to an intrinsic metric $d^\\infty$ due to Theorem \\ref{completo eh intrinseco}. 
\nIt is right invariant due to Lemma \\ref{translacao a direita decresce}.\nTherefore $d^\\infty$ is $C^0$-Carnot-Carath\\'eodory-Finsler due to Theorem \\ref{Berestovskii theorem}.\n\nIf $\\hat{\\bar{d}}$ is finite, then $\\mathbf d=\\hat{\\bar{d}}$ is a right invariant intrinsic metric bounding $d$ from above, and the result follows.\n\n\\\n\n(2) This is a particular case of item (1) and it is a direct consequence of the second paragraph of Remark \\ref{relacoes Carnot Caratheodory}.\n\n\\\n\n(3) First of all we prove that if $\\tilde F$ is finite, then $d^\\infty$ is $C^0$-Finsler.\n\nIf $\\tilde F$ is finite, then $\\hat{\\bar d}$ is also finite due to Lemmas \\ref{d barra chapeu e fracao} and \\ref{supelimsup}.\nThus $d^\\infty$ is a right invariant $C^0$-Carnot-Carath\\'eodory-Finsler metric due to item (1).\n\nIt is a direct consequence of the ball-box theorem that if $d^\\infty$ is a Carnot-Carath\\'eodory metric on a differentiable manifold $M$ with respect to a proper smooth distribution $\\mathcal D$, then for every $p \\in M$ and $v \\not\\in \\mathcal D_p$ we have that\n\\[\n\\lim_{t \\rightarrow 0}\\frac{d^\\infty (\\gamma(t), p)}{\\vert t \\vert}=\\infty,\n\\]\nwhere $\\gamma:(-\\varepsilon,\\varepsilon) \\rightarrow M$ is a smooth path such that $\\gamma(0)=p$ and $\\gamma^\\prime(0)=v$.\nThe same conclusion holds for $C^0$-Carnot-Carath\\'eodory-Finsler metrics due to Remark \\ref{relacoes Carnot Caratheodory}.\nThen $d^\\infty$ is $C^0$-Finsler whenever $\\tilde F$ is finite.\n\nNow we prove that if $d^\\infty$ is $C^0$-Finsler, then $\\tilde F$ is finite.\n\nIf $F:TG \\rightarrow \\mathbb R$ is the $C^0$-Finsler metric corresponding to $d^\\infty$, then\n\\[\nF(g,v)\n=\\lim_{t\\rightarrow 0} \\frac{d^\\infty(\\exp(tv)g,g)}{\\vert t\\vert} \n= \\sup_{\\sigma \\in G}\\lim_{t\\rightarrow 0} \\frac{d^\\infty(\\exp(tv)g \\sigma,g \\sigma)}{\\vert t\\vert} \\geq \\tilde F(g,v),\n\\]\nwhere the first equality is due to Theorem 3.7 of \\cite{BenettiFukuoka} and the second 
equality is due to the right invariance of $d^\\infty$.\nTherefore $\\tilde F$ is finite.\nObserve that this part of the proof also shows that $\\tilde F \\leq F$ if $d^\\infty$ is $C^0$-Finsler.\n\nFinally we show that if $d^\\infty$ is $C^0$-Finsler, then $F \\leq \\tilde F$.\n\nFix $t>0$ and $v \\in \\mathfrak g$. Define $\\gamma:[0,t] \\rightarrow G$ by $\\gamma(s)=\\exp(sv)$ and notice that if $\\xi \\in G$, then\n\n\\begin{eqnarray}\nd^1(\\exp(tv)\\xi, \\xi) \n& \n\\leq \n&\n\\ell_{d_X} (\\gamma \\xi)\n= \n\\sup\\limits_{\\mathcal P} \\sum\\limits_{i=1}^{n_{\\mathcal P}} d_X(\\exp(t_i v)\\xi,\\exp(t_{i-1}v)\\xi) \\nonumber \\\\\n& \n= \n& \\sup\\limits_{\\mathcal P} \\sum\\limits_{i=1}^{n_{\\mathcal P}} \\max\\limits_j d(\\exp(t_i v)\\xi x_j,\\exp(t_{i-1}v)\\xi x_j) \\nonumber \\\\\n&\n= \n& \\sup\\limits_{\\mathcal P} \\sum\\limits_{i=1}^{n_{\\mathcal P}} \\max\\limits_j d(\\exp((t_i-t_{i-1}) v)h_j,h_j) \\nonumber \\\\\n&\n\\leq\n& \\sup\\limits_{\\mathcal P} \\sum\\limits_{i=1}^{n_{\\mathcal P}} \\sup\\limits_{\\sigma \\in G} d(\\exp((t_i-t_{i-1}) v) \\sigma, \\sigma) \\nonumber\n\\end{eqnarray}\n\nwhere $h_j = \\exp(t_{i-1}v)\\xi x_j$. Then\n\\[\n\\begin{array}{ccl}\nd^1(\\exp(tv)\\xi, \\xi) \n& \n\\leq \n&\n\\sup\\limits_{\\mathcal P} \\sum\\limits_{i=1}^{n_{\\mathcal P}} \\tilde F(v)(t_i-t_{i-1}) = \\tilde F(v)t,\n\\end{array}\n\\]\nwhere the inequality is due to Lemma \\ref{supelimsup}. \nIf we iterate this process, replacing $d^1$ by $d^2, d^3, \\ldots$, we get \n\\begin{equation}\n\\label{compara com is}\nd^i(\\exp(tv)\\xi,\\xi) \\leq \\tilde F(v) t\n\\end{equation} \nfor every $i \\in \\mathbb N$, $t > 0$ and $\\xi \\in G$. 
\nBut $\\tilde F(v)=\\tilde F(-v)$ because\n\\begin{eqnarray}\n\\tilde F(v)\n&\n=\n&\n\\sup_{\\sigma\\in G}\\limsup_{t \\rightarrow 0}\\frac{d(\\exp(tv)\\sigma,\\sigma)}{\\vert t \\vert} \\nonumber \\\\\n&\n=\n&\n\\sup_{\\sigma\\in G}\\limsup_{t \\rightarrow 0}\\frac{d(\\exp(tv)\\exp(-tv)\\sigma,\\exp(-tv)\\sigma)}{\\vert t \\vert} \n= \\tilde F(-v),\n\\end{eqnarray}\nand\n\\begin{equation}\n\\label{d infinito e f til}\nd^\\infty(\\exp(tv)\\sigma,\\sigma) \\leq \\tilde F(v)\\vert t \\vert\n\\end{equation}\nholds for every $t \\in \\mathbb R$ and $\\sigma \\in G$ due to (\\ref{compara com is}). Therefore\n\\[\nF(v)\n=\n\\lim_{t \\rightarrow 0}\\frac{d^\\infty(\\exp(tv),e)}{\\vert t \\vert}\\leq \\tilde F(v),\n\\]\nwhich settles the theorem.$\\blacksquare$\n\n\\section{Further examples}\n\\label{secao exemplos}\n\n\\begin{example}\nConsider the additive group $G=\\mathbb R$ with the finite metric $d(x,y)=\\vert\\arctan (x) - \\arctan (y) \\vert$.\nLet $X=\\{ -1,0,\\sqrt 2\\}$.\nThen $\\bar S_X=\\mathbb R$ (see Example \\ref{denso na reta}).\nThe maximum of the derivative of $\\arctan$ is equal to one and it is not difficult to see that $\\hat{\\bar d}$ is a metric on $G$.\nThen $d^\\infty$ is an invariant $C^0$-Finsler metric, with $F(\\cdot)$ equal to the Euclidean norm (see Theorem \\ref{teorema principal}).\nTherefore $d^\\infty$ is the Euclidean metric.\n\nNow we prove that every $d^i$ of the sequence of induced Hausdorff metrics is a finite metric.\nFirst of all, notice that\n\\[\n\\frac{d}{dt}\\arctan t = \\frac{1}{1+t^2}.\n\\]\nIt means that if $00$.\nFor $1