diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzptcg" "b/data_all_eng_slimpj/shuffled/split2/finalzzptcg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzptcg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nLow-mass starless cores are the earliest observed phase of isolated\nlow-mass star formation. They are identified via submm\ndust continuum and dense gas molecular lines; they typically contain\na few solar masses, have sizes of approximately $0.1$ pc,\nand may form one or a few low-mass \n(M $\\sim 1$ M$_{\\odot}$) stars.\nSeveral hundred starless cores have been\nobserved in the nearest star-forming molecular clouds and isolated Bok\nglobules. \nRecent large scale\nsurveys of nearby molecular clouds have established a remarkable connection\nbetween the Core Mass Function and the Initial Mass Function of stars\n(e.g., Motte et al. 1998, Johnstone et al. 2000), indicating the importance\nof constraining the evolution of starless cores in order\nto understand the initial conditions of disk and protostar formation.\nTheoretically, the basic core formation and evolution process is still \ndebated between a turbulence-dominated (Mac Low \\& Klessen 2004) \nand an ambipolar diffusion-dominated model (Shu et al. 1987). \nObservationally, a fundamental challenge is to determine\nthe evolutionary state of a starless core. Given a set of observations,\ncan we determine how close a starless core is to forming a protostar?\n\nOne step toward understanding the evolutionary state of a starless\ncore is to determine its physical structure ($n(r)$ cm$^{-3}$ and\n$T(r)$ K). \nRadiative transfer modeling of submillimeter dust continuum emission\nhas successfully fit the density structure with hydrostatic\nconfigurations (Bonnor-Ebert Spheres, BESs), while the calculated \ntemperature structure decreases toward\nthe center of the core due to attenuation of the ISRF (Evans et al. 2001). 
\nSince BESs may be parameterized in terms of their central density,\n$n_c$ cm$^{-3}$, it is natural to think that this may be the\nmain evolutionary variable for starless cores. However, detailed\nmolecular line observations have revealed that the chemical\nstructure ($X(r)$) can strongly vary among cores with similar central\ndensities. Figure 1 shows the example of L1498 and L1521E, two\nstarless cores in Taurus which have comparable $n_c$ (Tafalla et al. 2006), \nbut very different chemical structures. L1521E is centrally peaked in C$_3$S\nwith weak NH$_3$ emission while L1498 is centrally peaked in NH$_3$\nand heavily depleted in C$_3$S. These observations can be explained\nif the cores are evolving at different rates. \\textbf{In order to\ndetermine the evolutionary state of a starless core, it is\nnecessary to map the chemical structure of the core}.\n\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=9 cm]{Shirley_fig1.ps}\n\\caption{Observations of L1521E (top) and L1498 (bottom) in 850 $\\mu$m, \nC$_3$S,\nand NH$_3$ showing chemical differentiation. Data are from the ARO-GBT survey\nand Shirley et al. (2005).} \\label{fig}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Chemical Processes in Starless Cores}\n\\label{section1}\n\nThe rate at which molecules are created and destroyed differs for each\nspecies; therefore, the relative abundance of molecular species may be\nused as a chemical clock. A prediction that appears to be ubiquitous among\nstarless core chemodynamical models is that there exists\na class of molecules, named ``early-time'' molecules, which\nform rapidly in the cold, moderate density environments typical of nascent\nstarless cores (e.g., CO, C$_2$S, C$_3$S, SO). The early-time molecule\nabundance typically peaks within a few $10^5$ years and then decreases\ndue to various destruction mechanisms (see below). 
Another class of molecules,\nnamed ``late-time'' molecules, build up in abundance more slowly and\nremain in the gas phase longer at low temperatures and high densities\n(e.g., N$_2$H$^+$, NH$_3$, H$_2$D$^+$).\nIt has been proposed that observations of the abundance ratio\nof species such as [C$_2$S]\/[NH$_3$] may date the chemical maturity of\na core (Suzuki et al. 1992). Figure 1 shows an example of two cores\nwith very different chemical states in early-time and late-time\nmolecules despite having comparable central densities. \n\n\n\nThe environments of starless cores are cold ($T < 15$ K) \nand dense ($n_c > 10^4$ cm$^{-3}$), thus many gas phase species\nadsorb onto dust grains. The best example is the second most abundant\nmolecular species, CO, which freezes out of the gas at a rate of\n$(dn_{\\rm{CO}}\/dt) \\propto n_g T^{1\/2} S n_{\\rm{CO}}$,\nwhere $n_g$ is the dust grain density and\n$S$ is the sticking coefficient (Rawlings et al. 1992). \nSince the timescale for freeze-out depends\non the density and temperature of the core, the amount of CO\ndepletion encodes the history of the physical structure of the\ncore. For instance, a core that evolves slowly (quasi-statically)\nwill have more CO depletion compared to a core that evolves quickly \nto the same $n_c$. Complicating factors to the simple adsorption\nmodel include competing desorption processes due to direct cosmic ray heating,\ncosmic ray photodesorption, and H$_2$ formation on grains, all of which \nmay be important in starless cores (Roberts et al. 2007). CO is a destruction agent of many\ngas phase ions (e.g. N$_2$H$^+$ and H$_2$D$^+$); therefore, the abundance histories\nof these species are directly related to the amount of CO freeze-out\n(see Figure 2). The resulting chemical networks in heavily depleted environments\nare very different (see Vastel et al. 2006). Mapping of starless cores has revealed\na plethora of depleted species toward the dust column density peaks \n(Tafalla et al. 2006). 
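To make the freeze-out rate above concrete, the following minimal sketch integrates the implied exponential depletion of gas-phase CO. The rate normalisation (a depletion e-folding time of roughly $10^9/n_{\rm H_2}$ yr at $T \sim 10$ K) and the initial CO abundance of $10^{-4}$ are illustrative assumptions commonly quoted in the literature, not values taken from this paper:

```python
import math

def freezeout_timescale_yr(n_h2, sticking=1.0):
    """Approximate e-folding time for CO depletion, in years.

    Assumes the commonly quoted scaling t_dep ~ 1e9 / n(H2) yr at T ~ 10 K;
    the grain density n_g tracks n(H2) for a fixed dust-to-gas ratio, so
    this follows from dn_CO/dt ~ -n_g T^(1/2) S n_CO with fixed T.
    """
    return 1.0e9 / (n_h2 * sticking)

def co_abundance(t_yr, n_h2, x0=1.0e-4):
    """Gas-phase CO abundance (relative to H2) after time t_yr,
    ignoring the competing desorption processes."""
    return x0 * math.exp(-t_yr / freezeout_timescale_yr(n_h2))

# A core that lingers at n(H2) = 1e5 cm^-3 for 1e5 yr is far more
# depleted than one that reached the same density only 1e4 yr ago.
slow = co_abundance(1.0e5, n_h2=1.0e5)
fast = co_abundance(1.0e4, n_h2=1.0e5)
```

This illustrates the point made above: at the same central density, a slowly (quasi-statically) evolving core shows much stronger CO depletion than a rapidly evolving one.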
\n\nDeuterium fractionation is an important chemical diagnostic in low-mass\ncores. At low temperatures, many chemical reactions involving HD favor\nthe formation of deuterated molecules due to the lower zero-point vibrational\nenergy of deuterated species compared to the hydrogenated species. The \nclassic deuteration reaction operating in the\nenvironments of starless cores is H$_3$$^+$ + HD $\\rightarrow$\nH$_2$D$^+$ + H$_2$ + $230$ K, where the backreaction is inefficient\nat low temperatures (Vastel et al. 2006). As the density increases, the temperature\ndecreases, and species such as CO freeze out, the deuteration of\nhydrogenated species may increase by up to four orders of magnitude\nover the cosmic [D]\/[H] $\\sim 10^{-5}$ (Roberts et al. 2004). Figure 2b illustrates \nthat the observed degree of deuteration of N$_2$H$^+$\nincreases with the amount of CO depletion in the core (Crapsi et al. 2005). \nSimilarly, since a deuterated species, such as NH$_2$D, \nmay be viewed as an extreme late-time molecule, a comparison between deuterated \nand early-time molecules from the ARO-GBT survey indicates an increasing\nratio of late-time to early-time molecules with increasing central density in\nthe core (Figure 2a). \n\nThe chemical structure of the core is also important for determining\nwhich species are good probes of the kinematical structure of\nthe core. Several species (CS, H$_2$CO, HCO$^+$, HCN)\nhave been identified as infall tracers, molecules that are moderately \noptically thick and display asymmetric, self-absorbed line profiles. \nRecent surveys have attempted to search for evidence of large \nscale infall (e.g. Sohn et al. 2007). 
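Returning briefly to the deuteration reaction above: the strength of low-temperature fractionation follows from detailed balance alone. The sketch below is an illustrative upper limit, since it ignores the destruction of H$_2$D$^+$ by CO and electrons:

```python
import math

def backreaction_suppression(T_kelvin, delta_e_kelvin=230.0):
    """Boltzmann suppression of the endothermic backreaction
    H2D+ + H2 -> H3+ + HD; the forward reaction is exothermic
    by ~230 K, as quoted in the text."""
    return math.exp(-delta_e_kelvin / T_kelvin)

# At 10 K the backreaction is suppressed by exp(-23) ~ 1e-10, so the
# equilibrium [H2D+]/[H3+] ratio can be enhanced enormously over
# [HD]/[H2]; at 100 K the enhancement is only of order ten.
cold_enhancement = 1.0 / backreaction_suppression(10.0)
warm_enhancement = 1.0 / backreaction_suppression(100.0)
```

This is why deuterated species behave as extreme late-time molecules: only cold, dense, CO-depleted gas sustains large enhancements.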
\nFurthermore, the linewidths of \nheavy species that lack hyperfine structure, such as C$_3$S (68 amu), \nare dominated by non-thermal motions and can trace turbulent motions\nor large-scale kinematical motions along different lines-of-sight in the core.\n\nA more thorough review of the chemical processes in starless cores\nmay be found in Di Francesco et al. (2005, and references therein). \nThe comparison of early-time vs. late-time\nspecies, the amount of depletion, the amount of deuteration, and \nthe identification of large scale kinematical motions should be used together to \nelucidate the evolutionary state of a starless core. Other indicators,\nsuch as the ortho-to-para evolution of symmetric molecules with non-zero spins\nand variations in\nthe structure of the ionization fraction, should also be explored.\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=9 cm]{Shirley_fig2.ps}\n\\caption{Left: The ratio of the ``late-time'' species NH$_2$D to the ``early-time'' \nspecies C$_2$S vs. $n_c$ toward cores where both lines were detected in the \nARO-GBT survey. Right: The deuterium fraction of N$_2$H$^+$ vs. the CO depletion factor\nreported by Crapsi et al. (2005).} \\label{fig}\n\\end{center}\n\\end{figure}\n\n\n\\section{Developing an Evolutionary Sequence}\n\\label{section2} \n\nUltimately, to determine the evolutionary state of a starless core, \nwe should model the molecular line radiative transfer\nof each transition along multiple lines-of-sight\nand compare to a grid of chemodynamical models (e.g. Lee et al. 2004).\nThis process is computationally intensive and the current generation\nof chemodynamical models has not fully explored the parameter space \nof possible conditions in nearby starless cores. \nAn alternative first step is to develop a Boolean evolutionary comparison. 
\nA flag of $1$ (more evolved) or $0$ (less evolved) is given to a particular \nobserved property of the core if the chemical criterion is met, and the sum\nof flags represents the observed chemical maturity of the core.\nThis strategy was implemented by Crapsi et al. (2005) for a sample of\n$12$ starless cores and is being extended to the sample of $25$ cores\nwith a larger set of chemical criteria in the ARO-GBT survey (Shirley et al.\n2008, in prep.). \n\nWhile there is still much work to be done to develop a\ndetailed understanding of the chemical processes within low-mass starless cores, \nit is now possible and necessary to synthesize the chemical evolutionary indicators.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nVisual saliency patterns are the result of a number of factors, including the task being performed, user preferences and domain knowledge. However, existing approaches to predict saliency patterns \\cite{walther2002attentional,hadizadeh2014saliency,wang2015saliency,achanta2009saliency,sharma2012discriminative,yubing2011spatiotemporal} ignore these factors, and instead learn a model specific to a single task while disregarding factors such as user preferences. \\par\n\nBased on empirical results, the human visual system is driven by both bottom-up and top-down factors \\cite{connor2004visual}. The first category (bottom-up) is entirely driven by the visual scene, where humans deploy their attention towards salient, informative regions such as bright colours, unique textures or sudden movements. The bottom-up factors are typically exhibited during the free viewing mechanism. In contrast, the top-down attention component, where the observer is performing a task specific search, is modulated by the task at hand \\cite{modellin-search} and the subject's prior knowledge \\cite{deep-fix}. 
For example, when the observer is searching for people in the scene, they can selectively attend to the scene regions which are most likely to contain the targets \\cite{einha2008task, zelinsky2008theory}. Furthermore, a subject's prior knowledge, such as the scene layout, scene categories and statistical regularities, will influence the search mechanisms and the fixations \\cite{castelhano2007initial, chaumon2008unconscious,neider2006scene}, rendering task specific visual saliency prediction a highly subjective, situational and challenging task \\cite{modellin-search, action-in-the-eye,deep-fix}, which motivates the need for a working memory \\cite{deep-fix, fernando2017tree,Fernando_2017_ICCV}. Even within groups of subjects completing the same task, due to the differences in a subject's behavioural goals, expectations and domain knowledge, unique saliency patterns are generated. Ideally, this user related context information should be captured via a working memory. \\par\n\nFig. \\ref{fig:fig_1} shows the variability of the saliency maps when observers are performing action recognition and context recognition on the same image. In Fig. \\ref{fig:fig_1} (a) the observer is asked to recognise the action performed by the human in the scene; while in Fig. \\ref{fig:fig_1} (b) the saliency map is generated when the observer is searching for cars\/ trees in the scene. It is evident that there exists variability in the resultant saliency patterns, yet accurate modelling of human fixations in the application areas specified above requires task specific models. For example, semantic video search, content aware image resizing, video surveillance, and video\/ scene classification may require searching for pedestrians or different objects, or recognising human actions, depending on the task and video context. 
\\par\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{.45\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/saliency_predictions_action_rec.pdf} %\n \\caption{Human action recognition task}\n \\end{subfigure}\n \\begin{subfigure}{.45\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/saliency_predictions_context_rec.pdf} %\n \\caption{Searching for cars\/ trees}\n \\end{subfigure}\n \\caption{Variability of the saliency maps when observers are performing different tasks}\n\t\t\t\\vspace{-3mm}\n \\label{fig:fig_1}\n\\end{figure}\n\nIn recent years, motivated by the success of deep learning techniques \\cite{gammulle2017two,fernando2017soft+}, there have been several attempts to model visual saliency of the human free viewing mechanism with the aid of deep convolutional networks \\cite{deep-fix,deep-ml, vig2014large, kummerer2014deep}. Yet, the usual approach when modelling visual saliency for task specific viewing is to hand-craft the visual features. For instance, in \\cite{modellin-search} the authors utilise the features from person detectors \\cite{dalal2006human} when estimating the search for humans in the scene; while in \\cite{action-in-the-eye} the authors use the features from HoG descriptors \\cite{dalal2005histograms} when searching for the objects in the scene. Therefore, these approaches are application dependent and fail to capture the top-down attentional mechanism of humans which is driven by factors such as a subject's prior knowledge and expectation. \\par \n\nIn this work we propose a deep learning architecture for task specific visual saliency estimation. We draw our inspiration from the recent success of Generative Adversarial Networks (GAN) \\cite{goodfellow2014generative, denton2015deep, radford2015unsupervised, salimans2016improved, zhao2016energy} for pixel to pixel translation tasks. 
We exploit the capability of the conditional GAN framework \\cite{pix2pix} for automatic learning of task specific features in contrast to hand-crafted features that are tailored for specific applications \\cite{modellin-search, action-in-the-eye}. This results in a unified, simpler architecture enabling direct application to a variety of tasks. The conditional nature of the proposed architecture enables us to learn one network for all the tasks of interest, instead of learning separate networks for each of the tasks. Apart from the advantage of a simpler learning process, this enables the capability of learning semantic correspondences among different tasks and propagating these contextual relationships from one task to another. \\par\nFig. \\ref{fig:networks} (a) shows the conditional GAN architecture where the discriminator $D$ learns to classify between real and synthesised pairs of saliency maps $y$, given the observed image $x$ and task specific class label $c$. The generator $G$, which also observes the image $x$ and the task specific class label $c$, tries to fool the discriminator. We compare this model to the proposed model given in Fig. \\ref{fig:networks} (b). The differences arise in the utilisation of memory $M$, where we capture subject specific behavioural patterns. 
This incorporates a subject's prior knowledge, behavioural goals and expectations.\n\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{.49\\linewidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{fig\/CGANS.pdf} %\n \\caption{Conditional GAN}\n \\end{subfigure}\n \\begin{subfigure}{.49\\linewidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{fig\/proposed.pdf} %\n \\caption{MC-GAN (proposed model)}\n \\end{subfigure}\n \\caption{A comparison of conditional GAN architecture with the proposed model}\n\t\t\t\\vspace{-3mm}\n \\label{fig:networks}\n\\end{figure}\n\n\\vspace{-2mm}\n\\section{Related Work}\nLiterature related to this work can be broadly categorised into ``Visual Saliency Prediction'' and ``Generative Adversarial Networks'', and these two areas are addressed in Sections 2.1 and 2.2 respectively.\n\n\\subsection{Visual Saliency Prediction}\nSince the first attempts to model human saliency through feature integration \\cite{treisman1980feature}, the area of saliency prediction has been widely explored. Building upon this bottom-up approach, Koch and Ullman \\cite{koch1987shifts} and Itti et al. \\cite{itti1998model} proposed approaches based on extracting image features such as colour, intensity and orientation. These methods generate acceptable, centre-biased saliency predictions for free viewing but are highly inaccurate in complex real world scenes \\cite{deep-fix}. Recent studies such as \\cite{valenti2009image,liu2013saliency, lang2012depth, erdem2013visual,porikli2006covariance} have looked into the development of more complex features for saliency estimation. \\par\nIn contrast, motivated by information theory, authors in \\cite{bruce2006saliency,modellin-search, action-in-the-eye} have taken the top-down approach where task dependent features come into play. They incorporate local information from regions of interest for the task at hand, such as features from person detectors and HoG descriptors. 
These models \\cite{bruce2006saliency,modellin-search, action-in-the-eye} are completely task specific, rendering adaptation from one task to another nearly impossible. Furthermore, they neglect the fact that different subjects may exhibit different behavioural patterns when achieving the same goal, which generates unique strategies or sub goals that we term user preferences. \\par\n\nIn order to exploit the representative power of deep architectures, more recent studies have been driven towards the utilisation of convolutional neural networks. In contrast to the above approaches, which use hand crafted features, deep learning based approaches offer automatic feature learning. In \\cite{kummerer2014deep} the authors propose the usage of feature representations from a pre-trained model that has been trained for object classification. This work was followed by \\cite{deep-fix,deep-ml} where authors train end-to-end saliency prediction models from scratch, and their experimental evaluations suggest that deep models trained for saliency prediction itself can outperform off-the-shelf CNN models. \\par\nLiu et al. \\cite{liu2015predicting} proposed a multiresolution-CNN for predicting saliency, which has been trained on multiple scales of the observed image. The motivation behind this approach is to capture low and high level features. Yet this design has an inherent deficiency due to the use of isolated image patches, which fail to capture the global context, composed of the context of the observed image, the task at hand (i.e. free viewing, recognising actions, searching for objects) and user preferences. Even though the context of the observed image is well represented in deep single scale architectures such as \\cite{deep-fix,deep-ml}, they ignore the rest of the global context, the task description and user behavioural patterns, which are often crucial for saliency estimation. 
\\par\nThe proposed work bridges the research gap between deep architectures that capture bottom-up saliency features and top-down methods \\cite{bruce2006saliency,modellin-search, action-in-the-eye} that are purely driven by hand crafted features. We investigate the plausibility of the complete automatic learning of global context, which has been ill-represented in the literature thus far, through a memory augmented conditional generative adversarial model. \n\n\\subsection{Generative Adversarial Networks}\nGenerative adversarial networks (GAN), which belong to the family of generative models, have achieved promising results for pixel-to-pixel synthesis \\cite{arici2016associative}. Several works have looked into numerous architectural augmentations when synthesising natural images. For instance, in \\cite{gregor2015draw} the authors utilise a recurrent network approach whereas in \\cite{dosovitskiy2015learning} a de-convolution network approach is used to generate higher quality images. Most recently, authors in \\cite{pan2017salgan} have utilised the GAN architecture for visual saliency prediction and proposed a novel loss function which is proven to be effective both for initialising the generator and for stabilising adversarial training. Yet their work fails to delineate the ways of achieving task specific saliency estimation and of incorporating task specific dependencies and subject behavioural patterns for saliency estimation. \\par\nThe proposed work draws inspiration from conditional GANs \\cite{li2016precomputed,mathieu2015deep,mirza2014conditional,pathak2016context,reed2016generative,yoo2016pixel,zhou2016learning,wang2016generative}. 
This architecture is extensively applied for image based prediction problems such as image prediction from normal maps \\cite{wang2016generative}, future frame prediction in videos \\cite{mathieu2015deep}, image style transfer \\cite{li2016precomputed}, image manipulation guided by user preferences \\cite{zhou2016learning}, etc. In \\cite{pix2pix} the authors proposed a novel U-Net \\cite{ronneberger2015u} based architecture for conditional GANs. Their evaluations suggested that this network is capable of capturing local semantics with applications to a wide range of problems. We investigate the possibility of merging the discriminative learning power of conditional GANs together with a local memory to fully capture the global context, contributing a novel application area and structural argumentation for conditional GANs. \n\n\\vspace{-2mm}\n\\section{Visual Saliency Model}\n\n\\subsection{Objectives}\n\nGenerative adversarial networks learn a mapping from a random noise vector $z$ to an output image $y, G: z \\rightarrow y$ \\cite{goodfellow2014generative}; where as conditional GANs learn a mapping from an observed image $x$ and random noise vector $z$, to output $y$, given auxiliary information $c$, where $c$ can be class labels or data from other modalities. $G: \\{x,z | c \\} \\rightarrow y$. When we incorporate the notion of time into the system, then the observed image at time instance $t$ will be $x_t$, the respective noise vector $z_t$ and the relevant class label will be $c_t$. 
Then the objective function of a conditional GAN can be written as, \n\\begin{equation}\n\\begin{split}\nL_{cGAN}(G,D) =E_{x_t, y_t \\sim p_{data}(x_t, y_t)}[log(D (x_t, y_t | c_t))] + \\\\\nE_{x_t \\sim p_{data}(x_t), z_t \\sim p_z(z_t)}[log(1-D(x_t, G(x_t,z_t | c_t)))].\n\\end{split}\n\\end{equation}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=.90\\linewidth]{fig\/memory_fig.pdf} %\n \\caption{Memory architecture: The model receives an input $o_t$ at time instance $t$. A function $f_r^{LSTM}$ is used to embed this input and retrieve the content of the memory $M_{t-1}$. When reading, we use a $softmax$ function to weight the association between each memory slot and the input $o_t$, deriving a weighted retrieval $h_{t}$. The final output $m_t$ is derived using both $\\grave{o}$ and $h_{t}$. Finally we update the memory using the memory write function $f_w^{LSTM}$. This generates the memory representation $M_{t}$ at time instance $t+1$, shown to the right.}\n\t\t\t\\vspace{-3mm}\n \\label{fig:memory_architecture}\n \\end{figure*}\n \nLet $ M \\in \\mathbb{R}^{k \\times l} $, shown in Fig. 
\\ref{fig:memory_architecture}, be the working memory with $k$ memory slots, where $l$ is the embedding dimension of the generator output,\n\\begin{equation}\no_t= G(x_t,z_t| c_t).\n\\end{equation}\nIf the representation of memory at time instance $t-1$ is given by $M_{t-1}$ and $f_r^{LSTM}$ is a read function, then we can generate a key vector $a_t$, representing the similarity between the current memory content and the current generator embedding, via attending over the memory slots such that,\n\\begin{equation}\n\\grave{o}_t=f_r^{LSTM}(o_t),\n\\end{equation}\n\\begin{equation}\na_t= softmax(\\grave{o}_t^T M_{t-1}),\n\\end{equation}\nand\n\\begin{equation}\nh_t= a_t^TM_{t-1}.\n\\end{equation}\nThen we retrieve the current memory state by, \n\\begin{equation}\nm_t= f^{MLP}(\\grave{o}_t,h_t), \n\\end{equation}\nwhere $f^{MLP}$ is a neural network composed of multi-layer perceptrons (MLPs) trained jointly with the other networks. \nThen we generate the vector for the memory update, $\\grave{m}_t$, by passing $m_t$ through a write function $f_w^{LSTM}$,\n\\begin{equation}\n\\grave{m}_t= f_w^{LSTM}(m_t),\n\\end{equation}\nand finally we completely update the memory using,\n\\begin{equation}\nM_{t}=M_{t-1}(1- (a_t \\otimes e_k )^T) + (\\grave{m}_t \\otimes e_l)(a_t \\otimes e_k )^T,\n\\end{equation}\nwhere $1$ is a matrix of ones, $e_l \\in \\mathbb{R}^l $ and $ e_k \\in \\mathbb{R}^k$ are vectors of ones, and $ \\otimes$ denotes the outer product, which duplicates its left vector $l$ or $k$ times to form a matrix.\nNow the objective of the proposed memory augmented cGAN can be written as, \n\\begin{equation}\n\\begin{split}\nL^*_{cGAN}(G,D) =E_{x_t, y_t \\sim p_{data}(x_t, y_t)}[log(D (x_t, y_t | c_t))] + \\\\\nE_{x_t \\sim p_{data}(x_t), z_t \\sim p_z(z_t)}[ log(1-D(x_t, o_t \\otimes tanh(m_t)))].\n\\end{split}\n\\end{equation}\nWe would like to emphasise that we are learning a single network for all the tasks at hand, rendering a simpler but informative framework, which can be 
directly applied to a variety of tasks without any fine tuning.\n\\subsection{Network Architecture}\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.4\\linewidth]{fig\/u_new_architecture.pdf} %\n \\caption{U-Net architecture}\n \\label{fig:U_Net_architecture}\n\t\t\t\\vspace{-3mm}\n \\end{figure}\n\nFor the generator we adapt the U-Net architecture of \\cite{pix2pix} (see Fig. \\ref{fig:U_Net_architecture}). Let $Ck$ denote a Convolution-BatchNorm-ReLU layer group with $k$ filters. $CDk$ denotes a Convolution-BatchNorm-Dropout-ReLU layer with a dropout rate of 50\\%. Then the generator architecture can be written as,\nEncoder: C64-C128-C256-C512-C512-C512-C512-C512 followed by a U-Net decoder: CD512-CD1024-CD1024-C1024-C1024-C512-C256-C128 where there are skip connections between each $i^{th}$ layer in the encoder and the ${n-i}^{th}$ layer of the decoder, and there are $n$ total layers in the generator (see \\cite{pix2pix} for details). \nThe discriminator architecture is: C64-C128-C256-C512-C512-C512. \\par\nAll convolutions are 4 x 4 spatial filters applied with stride 2. Convolutions in the encoder and in the discriminator down sample by a factor of 2, whereas in the decoder they up sample by a factor of 2. A description of our memory architecture is as follows. For functions $f_r^{LSTM}$ and $f_w^{LSTM}$ we utilise two, one-layer LSTM networks \\cite{hochreiter1997long} with 100 hidden units and for $f^{MLP}$ we use a neural network with a single hidden layer and 1024 units with ReLU activations. This memory module is fully differentiable, and we learn it jointly with other networks. 
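The memory read and update equations above can be sketched in a few lines of NumPy. This is an illustrative reading rather than the trained model: the learned LSTM read/write functions and the MLP are replaced by caller-supplied callables, and the slot attention is taken to be a dot product between the embedded generator output and each memory slot:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read_write(M, o_embed, f_mlp, f_write):
    """One read/update step over a k x l slot memory.

    M: (k, l) memory matrix; o_embed: (l,) embedded generator output.
    f_mlp and f_write stand in for the learned MLP and LSTM write
    function; here they are plain callables supplied by the caller.
    """
    a = softmax(M @ o_embed)   # (k,) attention weights over slots
    h = a @ M                  # (l,) weighted retrieval from memory
    m = f_mlp(o_embed, h)      # current memory state
    m_w = f_write(m)           # write vector
    # Erase each slot in proportion to its attention weight,
    # then write the new content into the attended slots.
    M_new = M * (1.0 - a[:, None]) + np.outer(a, m_w)
    return M_new, m
```

Note that a slot attended with weight near one is almost fully overwritten, while unattended slots are left untouched, which is how subject specific patterns can persist across time steps.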
We trained the proposed model with the Adam \\cite{kingma2014adam} optimiser, with a batch size of 32 and an initial learning rate of 1e-5, for 10 epochs.\n\n\\vspace{-2mm}\n\\section{Experimental Evaluations}\n\n\\subsection{Datasets}\nWe evaluate our proposed approach on 2 publicly available datasets, VOCA-2012 \\cite{action-in-the-eye} and MIT person detection (MIT-PD) \\cite{modellin-search}. \\par\nThe VOCA-2012 dataset consists of 1,085,381 human eye fixations from 12 volunteers (5 male and 7 female) divided into 2 groups based on the given task. It contains 8 subjects performing action recognition in the given image whereas the rest of the subjects are performing context dependent visual search. The subjects in this group are searching for furniture, paintings\/ wallpapers, bodies of water, buildings, cars\/ trucks, mountain\/ hills, road\/ trees in the given scene. The MIT-PD dataset consists of 12,768 fixations from 14 subjects (between 18-40 years), where the subjects search for people in 912 urban scenes. MIT-PD contains only a single task, and we use this dataset to demonstrate the effectiveness of the memory network.\n\n\\subsection{Evaluation metrics}\nLet $N$ denote the number of examples in the testing set, $y$ the ground truth saliency map, and $\\hat{y}$ the predicted saliency map. Following this notation we define the following metrics:\n\\begin{itemize}\n\\item \\textbf{Area Under Curve (AUC): }\nAUC is a widely used metric for evaluating saliency models. We use the formulation of this metric defined in \\cite{borji2012exploiting}. \n\\item \\textbf{Normalised Scan path Saliency (NSS): }\nNSS \\cite{peters2005components} is calculated by taking the mean scores assigned by the unit normalised saliency map $\\hat{y}^{norm}$ at human eye fixations. 
\n\\begin{equation}\nNSS=\\frac{1}{N}\\sum_{i=1}^{N}\\hat{y}^{norm}_{i},\n\\end{equation}\nwhere the sum runs over the $N$ fixated pixel locations.\n\\item \\textbf{Linear Correlation Coefficient (CC): }\nIn order to measure the linear relationship between the ground truth and predicted map we utilise the linear correlation coefficient, \n\\begin{equation}\nCC=\\frac{cov(y,\\hat{y})}{\\sigma_{y}\\sigma_{\\hat{y}}},\n\\end{equation}\nwhere $cov(y,\\hat{y})$ is the covariance between distributions $y$ and $\\hat{y}$, $\\sigma_{y}$ is the standard deviation of distribution $y$ and $\\sigma_{\\hat{y}}$ is the standard deviation of distribution $\\hat{y}$. As the name implies, $CC=1$ denotes a perfect linear relationship between distributions $y$ and $\\hat{y}$ whereas a value of $0$ implies that there is no linear relationship. \n\\item \\textbf{KL divergence (KL): }\nTo measure the non-symmetric difference between two distributions we utilise the KL divergence measure given by,\n\\begin{equation}\nKL=\\sum_{i=1}^{P}\\hat{y}_{i}log(\\frac{\\hat{y}_i}{y_i}).\n\\end{equation}\nAs ground truth and predicted saliency maps can be seen as 2D distributions, we can use KL divergence to measure the difference between them.\n\\item \\textbf{Similarity metric (SM): }\nSM computes the sum of the minimum values at each pixel location between the $\\hat{y}^{norm}$ and $y^{norm}$ distributions, \n\\begin{equation}\nSM=\\sum_{i=1}^{P}min(\\hat{y}^{norm}_{i},y^{norm}_{i}), \n\\end{equation}\nwhere\n$\\sum_{i=1}^{P}\\hat{y}^{norm}_{i}=1 $\nand\n$\\sum_{i=1}^{P}y^{norm}_{i}=1 $\nare the normalised probability distributions and $P$ denotes all the pixel locations in the 2D maps.\n\\end{itemize}\n\n\\subsection{Results}\nQuantitative evaluations on the VOCA-2012 dataset are presented in Tab. \\ref{tab:tab_1}. In the proposed model, in order to retain user dependent factors such as user preference in memory, we feed the examples in order such that examples from each specific user go through in sequence. 
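For reference, the distribution-based metrics defined above (NSS, CC, KL and SM) can be written directly in NumPy. Passing the fixations to NSS as a boolean mask is an implementation choice of this sketch, not something specified in the text:

```python
import numpy as np

def nss(sal, fixations):
    """Normalised Scanpath Saliency: mean of the zero-mean,
    unit-variance normalised map at fixated pixels
    (fixations is a boolean mask of the same shape)."""
    s = (sal - sal.mean()) / sal.std()
    return s[fixations].mean()

def cc(y, y_hat):
    """Linear correlation coefficient between two saliency maps."""
    return np.corrcoef(y.ravel(), y_hat.ravel())[0, 1]

def kl_div(y, y_hat, eps=1e-12):
    """KL divergence KL(y_hat || y) between maps normalised to sum to 1."""
    p = y_hat / y_hat.sum()
    q = y / y.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))

def sm(y, y_hat):
    """Similarity metric: sum of pixel-wise minima of the normalised maps."""
    return np.minimum(y / y.sum(), y_hat / y_hat.sum()).sum()
```

Note the expected limiting behaviour: for identical maps CC and SM reach their maximum of 1 while KL reaches its minimum of 0.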
We compare our model with 8 state-of-the-art methods. The row `human' stands for the human saliency predictor, which computes the saliency map derived from the fixations made by half of the human subjects performing the same task. This predictor is evaluated with respect to the rest of the subjects, as opposed to the entire group \\cite{action-in-the-eye}. \\par\n\nThe evaluations suggest that the bottom-up model of Itti et al. \\cite{itti2000saliency} generates poor results as it does not incorporate task specific information. Even with high level object detectors, the models of Judd et al. \\cite{judd2009learning} and the HOG detector \\cite{action-in-the-eye} fail to render accurate predictions. \\par\nThe deep learning models PDP \\cite{jetley2016end} and ML-net \\cite{deep-ml} are able to outperform the techniques stated above but they lack the ability to learn task dependent information. We note the accuracy gain of the cGAN model over PDP, ML-net and SalGAN, where the model incorporates a conditional variable to discriminate between the `action recognition' and `context recognition' tasks instead of learning two separate networks or fine-tuning on them individually. Our proposed approach builds upon this by incorporating an augmented memory architecture with conditional learning. We learn different user patterns and retain the dependencies among different tasks, and outperform all baselines considered (MC-GAN (proposed), Tab. \\ref{tab:tab_1}). \\par \nAs a further study, in Tab. \\ref{tab:tab_1}, row M-GAN (separate), we show the evaluations for training two separate memory augmented GAN networks for the tasks in the VOCA-2012 test set without using the conditional learning process. The results emphasise the importance of learning a single network for all the tasks, leveraging semantic relationships between different tasks. 
The accuracy of the networks learned for separate tasks is lower than that of the combined MC-GAN and cGAN approaches (rows MC-GAN (proposed) and cGAN, Tab. \\ref{tab:tab_1}), highlighting the importance of learning the different tasks together, which allows the model to discriminate between the tasks and learn the complementary information, rather than keeping the model completely blind to the existence of another task category.\\par\n\\begin{table}[t]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n & \\multicolumn{4}{|c|}{\\textbf{Task}} \\\\\n \\cline{2-5}\n & \\multicolumn{2}{|c|}{\\textbf{Action Rec}} & \\multicolumn{2}{|c|}{\\textbf{Context Rec}}\\\\\n \\cline{2-5}\n \\textbf{Saliency Models} & \\textbf{AUC} & \\textbf{KL} & \\textbf{AUC} & \\textbf{KL} \\\\\n \\hline\n \\hline\n HOG detector \\cite{action-in-the-eye} & 0.736 & 8.54 & 0.646 & 8.10 \\\\\n \\hline\n Judd et al. \\cite{judd2009learning} & 0.715 & 11.00 & 0.636 & 9.66 \\\\\n \\hline\n Itti et al. \\cite{itti2000saliency} & 0.533 & 16.53 & 0.512 & 15.04 \\\\\n \\hline\n central bias \\cite{action-in-the-eye} & 0.780 & 9.59 & 0.685 & 8.82\\\\\n \\hline\n PDP \\cite{jetley2016end} & 0.875 & 8.23 & 0.690 & 7.98 \\\\\n \\hline\n ML-net \\cite{deep-ml}& 0.847 & 8.51 & 0.684 & 8.02 \\\\\n \\hline\n SalGAN \\cite{pan2017salgan}& 0.848 & 8.47 & 0.679 & 8.00 \\\\\n \\hline\n cGAN \\cite{pix2pix} &0.852 &8.24 &0.701 &7.95 \\\\\n \\hline\n M-GAN (separate) & 0.848 & 8.54 & 0.704 & 8.00\\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{0.901} & \\textbf{8.07 } &\\textbf{0.734} & \\textbf{7.65 } \\\\\n \\hline\n \\hline\n Human \\cite{action-in-the-eye} & 0.922 & 6.14 & 0.813 & 5.90\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Experimental evaluation on VOCA-2012 test set. 
We augment the current state-of-the-art GAN method (SalGAN \\cite{pan2017salgan}) by adding a conditional variable (cGAN \\cite{pix2pix}) to mimic the joint learning process instead of learning two separate networks. To capture user- and task-specific behavioural patterns we add a memory module to cGAN, yielding MC-GAN (proposed), which outperforms all the baselines. We also compare training $2$ separate memory-augmented GAN networks, M-GAN (separate), without the conditional learning process.}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_1}\n\\end{table}\n\n\n\n\\begin{table}[t]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n & \\multicolumn{6}{|c|}{\\textbf{Task}} \\\\\n \\cline{2-7}\n & \\multicolumn{3}{|c|}{\\textbf{Action Rec}} & \\multicolumn{3}{|c|}{\\textbf{Context Rec}}\\\\\n \\cline{2-7}\n \\textbf{Saliency Models} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} \\\\\n \\hline\n \\hline\n ML-net \\cite{deep-ml}& 2.05 & 0.71 & 0.51 & 2.03 & 0.64 & 0.42 \\\\\n \\hline\n SalGAN \\cite{pan2017salgan}& 2.10 & 0.73 & 0.51 & 2.10 & 0.68 & 0.44 \\\\\n \\hline\n cGAN \\cite{pix2pix} & 2.23 & 0.76 & 0.55 & 2.14 & 0.71 & 0.57 \\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{2.23} & \\textbf{0.79 } &\\textbf{0.60} & \\textbf{2.20 } &\\textbf{0.77} & \\textbf{0.69 } \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison between ML-net, SalGAN, cGAN and MC-GAN (proposed) on VOCA-2012}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_3}\n\\end{table}\n\nTo provide qualitative insight, some predicted maps along with ground truth and baseline ML-net \\cite{deep-ml} predictions are given in Fig. \\ref{fig:fig_action_context_rec}. In the first column we show the input image, and columns ``Action rec GT'' and ``Context rec GT'' depict the ground truth saliency maps for the respective tasks. 
In columns ``Our action rec'' and ``Our context rec'' we show the respective predictions from our model, and finally the column ``ML-net'' contains the prediction from the ML-net \\cite{deep-ml} baseline. Observing columns ``Action rec GT'' and ``Context rec GT'', one can clearly see how the tasks differ based on their different saliency patterns. Yet, the proposed model is able to capture these different semantics within a single network trained jointly on all the tasks. As shown in Fig. \\ref{fig:fig_action_context_rec}, it has efficiently identified image saliency from low-level features as well as task-dependent saliency factors from high-level cues such as trees, furniture and roads. Furthermore, the single learning process and the incorporation of a memory architecture allow the model to retain the semantic relationships among the different tasks and the ways in which users adapt to them. \n\n\\begin{figure*}[!htb]\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n 
\\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_mlnet.pdf} %\n \n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_mlnet.pdf} %\n \n \\end{subfigure}\n\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n 
\\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_mlnet.pdf} %\n \n \\end{subfigure}\n\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_mlnet.pdf} %\n 
\n \\end{subfigure}\n\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_mlnet.pdf} %\n \n \\end{subfigure}\n\n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157.jpg} %\n \\caption{Image}\n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_gt_action.pdf} %\n \\caption{Action rec GT}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_pred_action.pdf} %\n \\caption{Our action rec}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n 
\\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_gt_context.pdf} %\n \\caption{Context rec GT}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_pred_context.pdf} %\n \\caption{Our context rec}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_mlnet.pdf} %\n \\caption{ML-net}\n \\end{subfigure}\n\n \n \\caption{Qualitative results for the VOCA-2012 dataset and comparisons to the state-of-the-art.}\n\t\t\t\\vspace{-3mm}\n \\label{fig:fig_action_context_rec}\n\\end{figure*}\n\nTab. \\ref{tab:tab_2} compares the performance of the proposed model with 5 baselines on the MIT-PD test set. The first baseline, ``Scene Context'' \\cite{modellin-search}, utilises colour and orientation features, whereas ``Combined'' \\cite{modellin-search} incorporates both scene context features and higher-level features from a person detector \\cite{dalal2006human}. Even with such explicit modelling of the task, this baseline fails to generate accurate predictions, suggesting the subjective nature of task-specific viewing. 
With the aid of its associative memory, the proposed model successfully captures these underlying factors.\n\n\\begin{table}[!h]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|}\n \\hline\n & \\multicolumn{2}{|c|}{\\textbf{Scene Type}} \\\\\n \\cline{2-3}\n & \\textbf{Target Present} & \\textbf{Target Absent}\\\\\n \\cline{2-3}\n \\textbf{Saliency Models} & \\textbf{AUC} & \\textbf{AUC} \\\\\n \\hline\n \\hline\n Scene Context \\cite{modellin-search} & 0.844 & 0.845 \\\\\n \\hline\n Combined \\cite{modellin-search} & 0.896 & 0.877 \\\\\n \\hline\n ML-net \\cite{deep-ml}& 0.901 & 0.881\\\\\n \\hline\n SalGAN \\cite{pan2017salgan}&0.910 & 0.887\\\\\n \\hline\n cGAN \\cite{pix2pix} & 0.923 & 0.899 \\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{0.942} & \\textbf{0.903} \\\\\n \\hline\n \\hline\n Human \\cite{modellin-search} & 0.955 & 0.930 \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Experimental evaluation on MIT-PD test set}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_2}\n\\end{table}\n\nIn Tab. \\ref{tab:tab_3} and Tab. \\ref{tab:tab_4} we present evaluations using the NSS, CC and SM metrics. To evaluate the ML-net, SalGAN and cGAN models we utilise the implementations released by their authors. When comparing the ML-net \\cite{deep-ml}, cGAN \\cite{pix2pix} and SalGAN \\cite{pan2017salgan} models with the proposed MC-GAN model, a considerable performance gain is observed across all the metrics considered, indicating greater agreement between predicted and ground truth saliency maps. 
We were unable to compare other baselines using these metrics due to the unavailability of public implementations.\n\n\\begin{table}[t]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n & \\multicolumn{6}{|c|}{\\textbf{Scene Type}} \\\\\n \\cline{2-7}\n & \\multicolumn{3}{|c|}{\\textbf{Target Present}} & \\multicolumn{3}{|c|}{\\textbf{Target Absent}}\\\\\n \\cline{2-7}\n \\textbf{Saliency Models} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} \\\\\n \\hline\n \\hline\n ML-net \\cite{deep-ml} & 1.41 & 0.55 &0.41 &1.22 & 0.43 & 0.38 \\\\\n \\hline\n SalGAN \\cite{pan2017salgan}& 1.41 & 0.53 &0.44 &1.20 & 0.42 & 0.35 \\\\ \n \\hline\n cGAN \\cite{pix2pix} & 1.67 & 0.51 &0.59 & 2.02 & 0.41 & 0.52 \\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{2.17} & \\textbf{0.76 } &\\textbf{0.71} & \\textbf{2.34 } &\\textbf{0.75} & \\textbf{0.78 } \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison between ML-net, SalGAN, cGAN and MC-GAN (proposed) on MIT-PD test set}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_4}\n\\end{table}\n\n\nThe qualitative results obtained from the proposed model along with the ML-net \\cite{deep-ml} network on a few examples from the MIT-PD dataset are shown in Fig. \\ref{fig:fig_context_rec}. We would like to emphasise the use of a subject's prior knowledge in the task of searching for people in an urban scene. The subjects selectively attend to areas that are more likely to contain humans, such as high-rise buildings (see rows 2, 6) and pedestrian walkways (see rows 1, 3-6), and our model has captured this behaviour effectively. Lacking the capacity to model such user preferences, the baseline ML-net model generates centre-biased saliency maps without effectively understanding the subject's strategy. 
\n\\begin{figure}[!htb]\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_31.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_31.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_31.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_31.pdf} %\n \n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_14.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_14.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_14.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_14.pdf} %\n \n \\end{subfigure}\n \n \n \n\\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_18.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_18.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n 
\\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_18.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_18.pdf} %\n \n \\end{subfigure}\n \n\t\\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_4.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_4.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_4.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_4.pdf} %\n \n \\end{subfigure} \n \n \n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_33.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_33.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_33.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_33.pdf} %\n \n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n 
\\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_5.jpg} %\n \\caption{Image}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_5.pdf} %\n \\caption{GT}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_5.pdf} %\n \\caption{Our}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_5.pdf} %\n \\caption{ML-net}\n \\end{subfigure}\n\n \n \\caption{Qualitative results for MIT-PD dataset and comparisons to the state-of-the-art.}\n \\label{fig:fig_context_rec}\n\\end{figure}\n\n\n\\subsection{Task-Specific Generator Activations}\nIn Fig. \\ref{fig:generator_activations} we visualise the activations from the 2nd (conv-l-2) and 2nd-last (conv-l-8) convolution layers of the generator. The task-specific learning of the proposed conditional GAN architecture is clearly evident in the activations. For instance, when the task at hand is to recognise actions, the generator activations are highly concentrated around the foreground of the image (see (b), (g)), while for context recognition the model has learned that the areas of interest lie in the background of the image (see (c), (h)). These task-specific salient features are combined and compressed hierarchically, and in later layers (i.e., conv-l-8) the network has learned the most relevant areas to focus on when generating the output saliency map. 
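The text does not state how these activation heat maps were rendered; a common, framework-agnostic recipe is to average a convolutional layer's activation tensor over its channels, rescale to $[0,1]$, and upsample to the input size. The sketch below (NumPy only; the function name, the `(C, H, W)` layout, and the divisibility of the output size by the feature-map size are our own assumptions) illustrates that recipe:

```python
import numpy as np

def activation_heatmap(act, out_h, out_w):
    # act: activation tensor of one layer, shape (C, H, W)
    heat = act.mean(axis=0)                        # average over channels
    rng = heat.max() - heat.min()
    heat = (heat - heat.min()) / (rng + 1e-12)     # rescale to [0, 1]
    rep_h, rep_w = out_h // heat.shape[0], out_w // heat.shape[1]
    # nearest-neighbour upsampling to the input image size
    return np.kron(heat, np.ones((rep_h, rep_w)))
```

The resulting map can then be drawn with a blue-to-yellow colour map, as in the activation figures.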
\n\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth, height=.82\\linewidth]{fig\/generator_activations\/017_2010_006129.jpg} %\n \\caption{Input}\n \\end{subfigure}\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006129_action_rec.pdf} %\n \\caption{l-2 AR} \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006129_context_rec.pdf} %\n \\caption{l-2 CR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006129_action_rec.pdf} %\n \\caption{l-8 AR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006129_context_rec.pdf} %\n \\caption{l-8 CR}\n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth, height=.82\\linewidth]{fig\/generator_activations\/017_2010_006217.jpg} %\n \\caption{Input}\n \\end{subfigure}\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006217_action_rec.pdf} %\n \\caption{l-2 AR} \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth, height=.82\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006217_context_rec.pdf} %\n \\caption{l-2 CR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth, height=.82\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006217_action_rec.pdf} %\n 
\\caption{l-8 AR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth, height=.82\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006217_context_rec.pdf} %\n \\caption{l-8 CR}\n \\end{subfigure}\n \\caption{Visualisation of the generator activations from the 2nd (l-2) and 2nd-last (l-8) convolution layers for the action recognition (AR) and context recognition (CR) tasks. The importance varies from blue to yellow, where blue represents areas of least importance and yellow represents areas of most importance.}\n\t\t\t\\vspace{-3mm}\n\\label{fig:generator_activations}\n \\end{figure}\n\n\\vspace{-2mm}\n\\section{Conclusion}\n\nThis work introduces a novel human saliency estimation architecture which combines task- and user-specific information in a generative adversarial pipeline. We show the importance of fully capturing context information, which incorporates the task, subject behavioural goals and image context. The resultant framework offers several advantages over task-specific handcrafted features, enabling direct transferability among different tasks. Qualitative and quantitative experimental evaluations on two public datasets demonstrate superior performance with respect to the current state-of-the-art. \n\n\\vspace{-2mm}\n\\footnotesize{\n\\subsubsection*{Acknowledgement}\n\\vspace{-2mm}\nThis research was supported by an Australian Research Council Linkage grant (LP140100221). 
The authors also thank QUT High Performance Computing (HPC) for providing the computational resources for this research.\n}\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLemma IIe of \\cite{wiener32} states that\n``{\\em If $f(x)$ is a function with an absolutely convergent Fourier series, which nowhere vanishes for real arguments, $1\/f(x)$ has an absolutely convergent Fourier series.}\"\nThis statement is now known as the classical Wiener's lemma.\n\nWe say that a Banach space ${\\mathcal A}$ with norm $\\|\\cdot\\|_{\\mathcal A}$\nis a {\\em Banach algebra} if \nit has an operation of multiplication possessing the usual algebraic properties, and\n\\begin{equation}\\label{banachalgebra.def}\n\\|AB\\|_{\\mathcal A}\\le K \\|A\\|_{\\mathcal A}\\|B\\|_{\\mathcal A}\\ \\ {\\rm for\\ all} \\ A, B\\in {\\mathcal A},\n\\end{equation}\nwhere $K$ is a positive constant. \nGiven two Banach algebras ${\\mathcal A}$ and ${\\mathcal B}$ such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal B}$,\nwe say that\n ${\\mathcal A}$ is {\\it inverse-closed} in ${\\mathcal B}$ if $A\\in {\\mathcal A}$ and\n$A^{-1} \\in {\\mathcal B}$ implies $A^{-1}\\in {\\mathcal A}$. 
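Wiener's lemma is easy to illustrate numerically (a sketch of our own, not part of the argument): for the nowhere-vanishing function $f(x)=2+\cos x$, the Fourier coefficients of $1/f$ are known in closed form to be $(-1)^n(2-\sqrt{3})^{|n|}/\sqrt{3}$, so $\sum_n |\widehat{(1/f)}(n)| = 1$ exactly, and an FFT approximation recovers this value:

```python
import numpy as np

# f(x) = 2 + cos(x) is nowhere zero, so by Wiener's lemma 1/f should
# again have an absolutely convergent Fourier series.
M = 512                                  # number of grid points (arbitrary)
x = 2 * np.pi * np.arange(M) / M
g = 1.0 / (2.0 + np.cos(x))              # samples of 1/f

ghat = np.fft.fft(g) / M                 # approximate Fourier coefficients of 1/f
l1_norm = np.abs(ghat).sum()             # approximates the Wiener norm of 1/f
# the coefficients decay geometrically, so the l^1 norm is finite (here, exactly 1)
```

The geometric decay rate $2-\sqrt{3}\approx 0.27$ makes the aliasing error of the FFT approximation negligible at this grid size.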
Inverse-closedness is also known as spectral invariance, Wiener pair, local subalgebra, etc.\n \\cite{douglasbook, gelfandbook, Naimarkbook, takesaki}.\nLet ${\\mathcal C}$ be the algebra of all periodic continuous functions under multiplication, and let\n${\\mathcal W}$ be its Banach subalgebra of all periodic functions with absolutely convergent Fourier series,\n\\begin{equation}\\label{Wieneralgebra.def}\n{\\mathcal W}=\\Big\\{f(x)=\\sum_{n\\in \\ZZ} \\hat f(n) e^{inx},\\ \\\n \\|f\\|_{\\mathcal W}:=\\sum_{n\\in \\ZZ} |\\hat f(n)|<\\infty\\Big\\}.\\end{equation}\nThen the classical Wiener's lemma can be reformulated as the statement that ${\\mathcal W}$ is an inverse-closed subalgebra of ${\\mathcal C}$.\n Due to this interpretation, we also refer to the inverse-closed property of a Banach subalgebra ${\\mathcal A}$ as Wiener's lemma for that subalgebra.\n Wiener's lemma for Banach algebras of infinite matrices and integral operators with certain off-diagonal decay\n can be informally interpreted as the preservation of localization under inversion.\nSuch localization preservation is of great importance in applied harmonic analysis, numerical analysis, optimization\nand many mathematical and engineering fields \\cite{akramgrochenigsiam, chengsun19, christensen05, grochenigr10, ksw13, sunsiam06}.\n The reader may refer to the survey papers \\cite{grochenig10, Krishtal11, shinsun13}, the recent papers \\cite{fangshinsun20, samei19, shinsun19} and references therein for historical remarks and recent advances.\n\n\n\n\nGiven an element $A$ in a Banach algebra ${\\mathcal A}$ with identity $I$, we define its {\\em spectral set} $\\sigma_{\\mathcal{A}}(A)$\nand {\\em spectral radius} $\\rho_{\\mathcal{A}}(A)$ by\n$$\n\\sigma_{\\mathcal{A}}(A):=\\big\\{\\lambda \\in \\mathbb{C} : \\lambda I -A \\text{ is not invertible in }\n\\mathcal{A} \\big\\}\n$$\nand\n$$\n\\rho_\\mathcal{A}(A) := \\max \\big\\{ |\\lambda| :\\lambda \\in \\sigma_{\\mathcal A}(A)\\big\\}\n$$\nrespectively.\n Let 
${\\mathcal A}$ and\n${\\mathcal B}$ be Banach algebras with common identity $I$ such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal B}$.\nThen an equivalent condition for the inverse-closedness of\n${\\mathcal A}$ in ${\\mathcal B}$ is that the spectral sets of any\n$A\\in {\\mathcal A}$ in the Banach algebras ${\\mathcal A}$ and ${\\mathcal B}$ are the same, i.e.,\n$$\n\\sigma_{\\mathcal A}(A)=\\sigma_{\\mathcal B}(A).\n$$\nBy the above equivalence, a necessary condition for the inverse-closedness of\n${\\mathcal A}$ in ${\\mathcal B}$ is that\nthe spectral radii of any\n$A\\in {\\mathcal A}$ in the Banach algebras ${\\mathcal A}$ and ${\\mathcal B}$ are the same, i.e.,\n\\begin{equation} \\label{spectralradius}\n\\rho_\\mathcal{A}(A) =\\rho_\\mathcal{B}(A).\n\\end{equation}\nThis necessary condition was shown by Hulanicki \\cite{hulanicki} to be sufficient if we further assume that\n$\\mathcal{A}$ and $\\mathcal{B}$ are $*$-algebras with common identity and involution, and that\n $\\mathcal{B}$ is symmetric.\n Here we say that\n a Banach algebra $\\mathcal B$ is a $*$-algebra if\nthere is a continuous linear {\\em involution $*$} on $\\mathcal {B}$\nwith the properties that\n\\begin{equation*}\n(AB)^* = B^* A^*\\ \\text{ and }\\ A^{**} = A\\ \\text{ for all } A, B \\in\n{\\mathcal B},\n\\end{equation*}\nand that a $*$-algebra ${\\mathcal B}$ is {\\em symmetric} if\n$$\\sigma_{\\mathcal {B}} (A^* A) \\subset [0,\\infty )\\ \\ {\\rm for \\ all} \\ A\\in {\\mathcal B}.$$\nThe spectral radius approach \\eqref{spectralradius}, known as Hulanicki's spectral method,\n has been used to establish the inverse-closedness of symmetric $*$-algebras \\cite{branden, gkI, grochenigklotz10, sunca11, suntams07, suncasp05};\n however, this approach does not provide a norm estimate for the inversion, which is crucial for many mathematical and engineering applications.\n\n\n\nTo obtain a norm estimate for the inversion, we recall the concept of norm-controlled inversion 
of a Banach subalgebra ${\\mathcal A}$ of a symmetric $*$-algebra ${\\mathcal B}$, which was initiated by Nikolski \\cite{nikolski99} and coined by Gr\\\"ochenig and Klotz \\cite{gkI}. Here\nwe say that\n a Banach subalgebra ${\\mathcal A}$ of ${\\mathcal B}$ admits {\\em\nnorm-controlled inversion} in ${\\mathcal B}$ if there exists a continuous function $h$ from\n$[0, \\infty)\\times [0, \\infty)$ to $[0, \\infty)$\n such that\n\\begin{equation}\\label{normcontrol}\n\\|A^{-1}\\|_{\\mathcal A}\\le h\\big(\\|A\\|_{\\mathcal A}, \\|A^{-1}\\|_{\\mathcal B}\\big)\n\\end{equation}\nfor all $A\\in {\\mathcal A}$ that are invertible in ${\\mathcal B}$\n\\cite{gkII, gkI, samei19, shinsun19}.\n\n\nNorm-controlled inversion is a strong version of Wiener's lemma.\nThe classical Banach algebra ${\\mathcal W}$ in \\eqref{Wieneralgebra.def} is inverse-closed in\nthe algebra\n ${\\mathcal C}$ of all periodic continuous functions\n \\cite{wiener32}; however, it does not admit norm-controlled inversion in\n ${\\mathcal C}$ \\cite{belinskiijfaa97, nikolski99}.\nTo establish Wiener's lemma, several methods are available, including\n Wiener's localization \\cite{wiener32}, Gelfand's technique\n\\cite{gelfandbook}, Brandenburg's trick \\cite{branden}, Hulanicki's spectral method \\cite{hulanicki}, Jaffard's boot-strap argument \\cite{jaffard90},\nthe derivation technique \\cite{grochenigklotz10},\nand Sj\\\"ostrand's commutator estimates \\cite{shinsun19, sjostrand94}.\n In this paper, we will use Brandenburg's trick\n to establish norm-controlled inversion of\n a differential $*$-subalgebra ${\\mathcal A}$ of a symmetric $*$-algebra ${\\mathcal B}$.\n\n\n\n\n\n\n\n\n\nThis introductory article is organized as follows. 
In Section \\ref{differentialalgebra.section}, we recall the concept of\ndifferential subalgebras and present some differential subalgebras of infinite matrices with polynomial off-diagonal decay.\nIn Section \\ref{generalizedDS.section}, we introduce the concept of generalized differential subalgebras and\npresent some generalized differential subalgebras of integral operators whose kernels are H\\\"older continuous and have polynomial off-diagonal decay.\nIn Section \\ref{normcontrolledinversion.section}, we use Brandenburg's trick to establish norm-controlled inversion\n of a differential $*$-subalgebra of a symmetric $*$-algebra, and we conclude the section with two remarks, one on\n norm-controlled inversion with a polynomially bounded norm control function and\n one on norm-controlled inversion in nonsymmetric Banach algebras.\n\n\n\n\n\n\n\n\n\n\\section{Differential Subalgebras}\\label{differentialalgebra.section}\n\n Let ${\\mathcal A}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal B}$.\nWe say that\n${\\mathcal A}$ is a {\\em differential subalgebra of order $\\theta\\in (0, 1]$} in ${\\mathcal B}$ \nif there exists a positive constant $D_0:=D_0({\\mathcal A}, {\\mathcal B}, \\theta)$ such that\n\\begin{equation}\\label{differentialnorm.def}\n\\|AB\\|_{\\mathcal A}\\le D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Big (\\Big(\\frac{\\|A\\|_{\\mathcal B}}{\\|A\\|_{\\mathcal A}}\\Big)^\\theta +\n\\Big(\\frac{\\|B\\|_{\\mathcal B}}{\\|B\\|_{\\mathcal A}}\\Big)^\\theta\n\\Big)\n\\quad {\\rm for \\ all} \\ A, B \\in {\\mathcal A}.\n\\end{equation}\nThe concept of differential subalgebras of order $\\theta$ was introduced in \\cite{blackadarcuntz91, kissin94, rieffel10} for $\\theta=1$ and\n\\cite{christ88, gkI, shinsun19} for $\\theta\\in (0, 1)$.\n We also refer the reader\nto \\cite{barnes87, fangshinsun13, gkII, gkI, grochenigklotz10, jaffard90, rssun12, samei19, sunca11, sunacha08, 
suntams07, suncasp05} for various differential subalgebras\nof infinite matrices, convolution operators, and integral operators with certain off-diagonal decay.\n\n\n\n For $\\theta=1$, the requirement\n\\eqref{differentialnorm.def} can be reformulated as\n\\begin{equation}\n\\|AB\\|_{\\mathcal A}\\le D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal B}+ D_0 \\|A\\|_{\\mathcal B} \\|B\\|_{\\mathcal A}\n\\quad {\\rm for \\ all} \\ A, B \\in {\\mathcal A}. \\end{equation}\nSo the norm $\\|\\cdot\\|_{\\mathcal A}$ satisfying \\eqref{differentialnorm.def}\nis also referred to as a Leibniz norm on ${\\mathcal A}$.\n\n\n\n Let $C[a, b]$ be the space of all continuous functions on the interval $[a, b]$ with its norm defined by\n$$\\|f\\|_{C[a, b]}=\\sup_{t\\in [a, b]} |f(t)|, \\ \\ f\\in C[a, b],$$\n and $C^k[a, b], k\\ge 1$, be the space of all functions that are continuously differentiable up to order $k$ on the interval $[a, b]$,\nwith its norm defined by\n$$\\|h\\|_{C^k[a, b]}= \\sum_{j=0}^k \\|h^{(j)}\\|_{C[a, b]} \\ {\\rm for} \\ h\\in C^k[a, b].$$\nClearly, $C[a, b]$ and $C^k[a, b]$ are Banach algebras under function multiplication.\nMoreover \n \\begin{eqnarray}\\label{Cab.eq}\n\\|h_1h_2\\|_{C^1[a,b]} & = & \\|(h_1h_2)'\\|_{C[a, b]}+ \\|h_1h_2\\|_{C[a, b]}\\nonumber\\\\\n& \\le &\n\\|h_1'\\|_{C[a, b]} \\|h_2\\|_{C[a, b]}+ \\|h_1\\|_{C[a, b]} \\|h_2'\\|_{C[a, b]}\n + \\|h_1\\|_{C[a, b]}\\|h_2\\|_{C[a, b]}\\nonumber\\\\\n& \\le & \\|h_1\\|_{C^1[a,b]} \\|h_2\\|_{C[a,b]}+\\|h_1\\|_{C[a,b]}\\|h_2\\|_{C^1[a,b]} \\ {\\rm for \\ all} \\ h_1, h_2\\in C^1[a, b],\n\\end{eqnarray}\nwhere the first inequality follows from the Leibniz rule. 
Therefore we have\n\n\\begin{thm}\\label{C1ab.thm}\n $C^1[a, b]$ is a differential subalgebra of order one in $C[a, b]$.\n \\end{thm}\n Due to the above illustrative example of differential subalgebras of order one,\nthe norm $\\|\\cdot\\|_{\\mathcal A}$ satisfying \\eqref{differentialnorm.def}\nis also used to describe smoothness in abstract Banach algebras \\cite{blackadarcuntz91}.\n\n\n\nLet\n${\\mathcal W}^1$ be the Banach algebra of all periodic functions such that\nboth $f$ and its derivative $f'$ belong to the Wiener algebra ${\\mathcal W}$, and define the norm on ${\\mathcal W}^1$ by\n\\begin{equation}\\label{differentialwiener.def}\n\\|f\\|_{{\\mathcal W}^1} = \\|f\\|_{\\mathcal W}+\\|f'\\|_{\\mathcal W}\n = \\sum_{n\\in \\ZZ} (|n|+1) |\\hat f(n)|\n \\end{equation}\n for $f(x)=\\sum_{n\\in \\ZZ} \\hat f(n) e^{inx}\\in {\\mathcal W}^1$.\nFollowing the argument used in the proof of Theorem \\ref{C1ab.thm},\nwe have\n\\begin{thm} \\label{wienerdiff.theorem}\n ${\\mathcal W}^1$ is a differential subalgebra of order one in ${\\mathcal W}$.\n \\end{thm}\n\nRecall from the classical Wiener's lemma that ${\\mathcal W}$ is an inverse-closed subalgebra of\n${\\mathcal C}$, the algebra of all periodic continuous functions under multiplication.\nThis leads to the following natural question:\n\n\\begin{ques}\\label{question1}\n Is ${\\mathcal W}^1$ a differential subalgebra\nof ${\\mathcal C}$?\n\\end{ques}\n\nLet $\\ell^p, 1\\le p\\le \\infty$, be\n the space of all $p$-summable sequences on $\\ZZ$ with norm denoted by $\\|\\cdot\\|_p$.\n To answer the above question,\nwe consider Banach algebras ${\\mathcal C}$, ${\\mathcal W}$ and ${\\mathcal W}^1$ in the ``frequency domain\".\nLet ${\\mathcal B}(\\ell^p)$ be the algebra of all bounded linear operators on $\\ell^p, 1\\le p\\le \\infty$,\n and let \n\\begin{equation}\n\\label{tildew.def}\n\\tilde {\\mathcal W}=\\Big\\{A:=(a(i-j))_{i,j\\in \\ZZ},\\ \\| A\\|_{\\tilde W}=\\sum_{k\\in \\ZZ} 
|a(k)|<\\infty\\Big\\}\\end{equation}\nand\n\\begin{equation}\n\\label{tildew1.def}\n{\\tilde {\\mathcal W}}^1=\\Big\\{A:=(a(i-j))_{i,j\\in \\ZZ}, \\ \\| A\\|_{{\\tilde W}^1}=\\sum_{k\\in \\ZZ} (|k|+1) |a(k)|<\\infty\\Big\\}\\end{equation}\nbe Banach algebras of Laurent matrices\nwith symbols in ${\\mathcal W}$ and ${\\mathcal W}^1$ respectively. Then the classical Wiener's lemma can be reformulated\nas saying that $\\tilde {\\mathcal W}$ is an inverse-closed subalgebra of ${\\mathcal B}(\\ell^2)$,\nand an equivalent statement of Theorem \\ref{wienerdiff.theorem}\nis that ${\\tilde {\\mathcal W}}^1$ is a differential subalgebra of order one in $\\tilde {\\mathcal W}$.\nDue to the above equivalence, Question\n\\ref{question1} in the ``frequency domain\" becomes whether ${\\tilde {\\mathcal W}}^1$ is a differential subalgebra of order $\\theta\\in (0, 1]$ in ${\\mathcal B}(\\ell^2)$.\nIn \\cite{suncasp05}, the first example of a differential subalgebra of infinite matrices of order $\\theta\\in (0, 1)$\nwas given.\n\n\n\\begin{thm}\\label{W1.thm}\n${\\mathcal W}^1$ is a differential subalgebra of ${\\mathcal C}$ with order $2\/3$.\n\\end{thm}\n\n\n\n\nTo consider differential subalgebras of infinite matrices in the noncommutative setting, we introduce three noncommutative Banach algebras of\ninfinite matrices with certain off-diagonal decay.\nGiven $1\\le p\\le \\infty$ and $\\alpha\\ge 0$,\n we define\nthe Gr\\\"ochenig-Schur family of infinite matrices\nby\n \\begin{equation}\\label{GS.def}\n{\\mathcal A}_{p,\\alpha}=\\Big\\{ A=(a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal A}_{p,\\alpha}}<\\infty\\Big\\}\n\\end{equation}\n\\cite{gltams06, jaffard90, moteesun, schur11, suntams07, suncasp05},\nthe\nBaskakov-Gohberg-Sj\\\"ostrand family of infinite matrices by\n\\begin{equation}\\label{BGS.def}\n{\\mathcal C}_{p,\\alpha}=\\Big\\{ A= (a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal C}_{p,\\alpha}}<\\infty\\Big\\}\n\\end{equation}\n\\cite{baskakov90, gkwieot89, gltams06, 
sjostrand94,suntams07}, and\nthe Beurling family of infinite matrices\n\\begin{equation}\\label{Beurling.def}\n{\\mathcal B}_{p,\\alpha}=\\Big\\{ A= (a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal B}_{p,\\alpha}}<\\infty\\Big\\}\n\\end{equation}\n\\cite{beurling49, shinsun19, sunca11},\n where $u_\\alpha(i, j)=(1+|i-j|)^\\alpha, \\alpha\\ge 0$, are polynomial weights on $\\Z^2$,\n \\begin{equation}\\label{GSnorm.def}\n \\|A\\|_{{\\mathcal A}_{p,\\alpha}}\n = \\max \\Big\\{ \\sup_{i \\in \\Z} \\big\\|\\big(a(i,j) u_\\alpha(i, j)\\big)_{j\\in \\Z}\\big\\|_p, \\ \\ \\sup _{j \\in \\Z}\n \\big\\|\\big(a(i,j) u_\\alpha(i, j)\\big)_{i\\in \\Z}\\big\\|_p\n \\Big\\},\n\\end{equation}\n\\begin{equation}\\label{BKSnorm.def}\n\\|A\\|_{{\\mathcal C}_{p,\\alpha}} = \\Big\\| \\Big(\\sup_{i-j=k} |a(i,j)| u_\\alpha(i, j)\\Big)_{k\\in \\Z} \\Big\\|_p,\n\\end{equation}\nand\n\\begin{equation}\\label{Beurlingnorm.def}\n\\|A\\|_{{\\mathcal B}_{p,\\alpha}} = \\Big\\| \\Big(\\sup_{|i-j|\\ge |k| } |a(i,j)| u_\\alpha(i, j)\\Big)_{k\\in \\Z} \\Big\\|_p.\n\\end{equation}\nClearly, we have\n\\begin{equation}\\label{properinclusion}\n{\\mathcal B}_{p,\\alpha} \\subset {\\mathcal C}_{p,\\alpha} \\subset\n{\\mathcal A}_{p,\\alpha} \\ \\ {\\rm for \\ all}\\ 1\\le p\\le \\infty \\ {\\rm and} \\ \\alpha\\ge 0.\n\\end{equation}\nThe above inclusion is proper for $1\\le p<\\infty$, while\nthe above three families \nof infinite matrices coincide for $p=\\infty$,\n\\begin{equation}\\label{properinclusioninfinite}\n{\\mathcal B}_{\\infty,\\alpha}={\\mathcal C}_{\\infty,\\alpha}=\n{\\mathcal A}_{\\infty,\\alpha} \\ \\ {\\rm for \\ all} \\ \\alpha\\ge 0,\n\\end{equation}\nwhich is also known as the Jaffard family of infinite matrices \\cite{jaffard90},\n \\begin{equation}\\label{Jaffard.def}\n{\\mathcal J}_{\\alpha}=\\Big\\{ A= (a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal J}_{\\alpha}}=\\sup_{i,j\\in \\Z}|a(i,j)| u_\\alpha(i, j)<\\infty\\Big\\}.\n\\end{equation}\n\nObserve that\n$\\|A\\|_{{\\mathcal 
A}_{p,\\alpha}}=\\|A\\|_{{\\mathcal C}_{p,\\alpha}}$\nfor a Laurent matrix $A=(a(i-j))_{i,j\\in \\Z}$. Then the\nBanach algebras $\\tilde {\\mathcal W}$ and ${\\tilde {\\mathcal W}}^1$\nin \\eqref{tildew.def} and \\eqref{tildew1.def}\nare commutative subalgebras of the Gr\\\"ochenig-Schur algebras ${\\mathcal A}_{1, \\alpha}$\nand the Baskakov-Gohberg-Sj\\\"ostrand algebras ${\\mathcal C}_{1, \\alpha}$ for $\\alpha=0, 1$ respectively,\n\\begin{equation}\\label{wa.re}\n\\tilde {\\mathcal W}= {\\mathcal A}_{1, 0}\\cap {\\mathcal L}={\\mathcal C}_{1, 0}\\cap {\\mathcal L}\n\\end{equation}\nand\n\\begin{equation}\\label{wa1.re}\n{\\tilde {\\mathcal W}}^1= {\\mathcal A}_{1, 1}\\cap {\\mathcal L}={\\mathcal C}_{1, 1}\\cap {\\mathcal L},\n\\end{equation}\nwhere ${\\mathcal L}$ is the set of all Laurent matrices $A=(a(i-j))_{i,j\\in \\Z}$.\nThe sets ${\\mathcal A}_{p, \\alpha}, {\\mathcal C}_{p,\\alpha}, {\\mathcal B}_{p,\\alpha}$\nwith $p=1$ and $\\alpha=0$ are noncommutative Banach algebras under matrix multiplication.\nThe Baskakov-Gohberg-Sj\\\"ostrand algebra ${\\mathcal C}_{1,0}$ and the Beurling algebra ${\\mathcal B}_{1, 0}$ are inverse-closed subalgebras of ${\\mathcal B}(\\ell^2)$ \\cite{baskakov90, bochnerphillips42, gkwieot89, sjostrand94, sunca11}; however,\nthe Schur algebra ${\\mathcal A}_{1,0}$ is not inverse-closed in ${\\mathcal B}(\\ell^2)$ \\cite{tessera10}.\nWe remark that the inverse-closedness of the Baskakov-Gohberg-Sj\\\"ostrand algebra ${\\mathcal C}_{1,0}$\nin ${\\mathcal B}(\\ell^2)$ can be understood as a noncommutative extension of the classical Wiener's lemma for the\ncommutative subalgebra $\\tilde {\\mathcal W}$ of Laurent matrices in ${\\mathcal B}(\\ell^2)$.\n\n\n\n\n\nFor $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$, one may verify that\nthe Gr\\\"ochenig-Schur family ${\\mathcal A}_{p,\\alpha}$,\nthe Baskakov-Gohberg-Sj\\\"ostrand family ${\\mathcal C}_{p,\\alpha}$ and the Beurling family ${\\mathcal B}_{p, \\alpha}$\nof infinite matrices 
form Banach algebras under matrix multiplication\nand they are inverse-closed subalgebras of ${\\mathcal B}(\\ell^2)$ \\cite{gltams06, jaffard90, sunca11, suntams07, suncasp05}.\nIn \\cite{sunca11, suntams07, suncasp05}, their differentiability in ${\\mathcal B}(\\ell^2)$ is established.\n\n\\begin{thm} \\label{sundiff.thm}\nLet $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$. Then\n ${\\mathcal A}_{p,\\alpha}$,\n ${\\mathcal C}_{p,\\alpha}$ and ${\\mathcal B}_{p, \\alpha}$ are differential subalgebras of order\n $\\theta_0= (\\alpha+1\/p-1)\/(\\alpha+1\/p-1\/2)\\in (0, 1)$ in ${\\mathcal B}(\\ell^2)$.\n\\end{thm}\n\n\\begin{proof}\nWe sketch the argument establishing the differential subalgebra property for the Gr\\\"ochenig-Schur algebras ${\\mathcal A}_{p, \\alpha}$ with $1<p\\le \\infty$ and $\\alpha>1-1\/p$; the arguments for ${\\mathcal C}_{p,\\alpha}$ and ${\\mathcal B}_{p,\\alpha}$ are similar.\nTake $A=(a(i,j))_{i,j\\in \\Z}$ and $B=(b(i,j))_{i,j\\in \\Z}$ in ${\\mathcal A}_{p, \\alpha}$ and write $C:=AB$.\nSince $u_\\alpha(i, j)\\le 2^\\alpha \\big(u_\\alpha(i, k)+u_\\alpha(k, j)\\big)$ for all $i, j, k\\in \\Z$, we obtain\n\\begin{equation}\\label{sundiff.thm.pf.eq1}\n\\|C\\|_{{\\mathcal A}_{p, \\alpha}}\\le 2^\\alpha \\big(\\|A\\|_{{\\mathcal A}_{p, \\alpha}} \\|B\\|_{{\\mathcal A}_{1, 0}}+\\|A\\|_{{\\mathcal A}_{1, 0}} \\|B\\|_{{\\mathcal A}_{p, \\alpha}}\\big).\n\\end{equation}\nSet\n\\begin{equation}\\label{tau0.def}\n\\tau_0=\\Big\\lfloor \\Big(\\frac{\\|A\\|_{{\\mathcal A}_{p, \\alpha}}}{\\|A\\|_{{\\mathcal B}(\\ell^2)}}\\Big)^{1\/(\\alpha+1\/p-1\/2)}\\Big\\rfloor.\n\\end{equation}\nApplying the Cauchy-Schwarz and H\\\"older inequalities, we have\n \\begin{eqnarray}\\label{para1.eq}\n\\sum_{j\\in \\Z} |a(i,j)| & = & \\Big(\\sum_{ |j-i|\\le \\tau_0}+\\sum_{|j-i|>\\tau_0}\\Big) |a(i,j)|\\nonumber\\\\\n & \\le & \\Big(\\sum_{ |j-i|\\le \\tau_0} |a(i,j)|^2\\Big)^{1\/2} \\Big(\\sum_{|j-i|\\le \\tau_0} 1\\Big)^{1\/2}\\nonumber\\\\\n & & + \\Big(\\sum_{|j-i|\\ge \\tau_0+1} |a(i,j)|^p (u_\\alpha(i, j))^p\\Big)^{1\/p} \\Big(\\sum_{|j-i|\\ge \\tau_0+1} (u_\\alpha(i, j))^{-p'} \\Big)^{1\/p'}\\nonumber\\\\\n & \\le & \\|A\\|_{{\\mathcal B}(\\ell^2)} (2\\tau_0+1)^{1\/2}+ 2^{1\/p'} (\\alpha p'-1)^{-1\/p'} \\|A\\|_{{\\mathcal A}_{p, \\alpha}}\n (\\tau_0+1)^{-\\alpha+1\/p'}\\nonumber\\\\\n & \\le & D \\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0},\n \\end{eqnarray}\n where $p'=p\/(p-1)$ is the conjugate exponent of $p$, $D$ is an absolute constant depending on $p, \\alpha$ only, and\n the last inequality follows from \\eqref{tau0.def} and the following estimate\n $$ \\|A\\|_{{\\mathcal B}(\\ell^2)}\\le \\|A\\|_{{\\mathcal A}_{1, 0}}\\le \\Big(\\sum_{k\\in \\Z} (|k|+1)^{-\\alpha p'}\\Big)^{1\/p'}\n \\|A\\|_{{\\mathcal A}_{p, \\alpha}}\\le \\Big(\\frac{\\alpha p'+1}{\\alpha p'-1}\\Big)^{1\/p'}\\|A\\|_{{\\mathcal A}_{p, \\alpha}}.$$\n Similarly we can prove that\n \\begin{equation}\n \\label{para2.eq}\n \\sup_{j\\in \\Z} \\sum_{i\\in \\Z} |a(i,j)| \\le D \\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}.\\end{equation}\n Combining \\eqref{para1.eq} and \\eqref{para2.eq} leads to\n \\begin{equation}\n \\label{para3.eq}\n \\|A\\|_{{\\mathcal A}_{1, 0}} \\le D \\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}.\\end{equation}\nReplacing the matrix $A$ in \\eqref{para3.eq} by the matrix $B$ gives\n \\begin{equation}\n \\label{para4.eq}\n \\|B\\|_{{\\mathcal A}_{1, 0}} \\le D \\|B\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|B\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}.\\end{equation}\nTherefore it follows from \\eqref{sundiff.thm.pf.eq1}, \\eqref{para3.eq} and \\eqref{para4.eq} that\n\\begin{equation}\n\\|C\\|_{{\\mathcal A}_{p, \\alpha}}\\le 2^\\alpha D\n \\|A\\|_{{\\mathcal A}_{p, \\alpha}}\\|B\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|B\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}\n+2^\\alpha D\n \\|B\\|_{{\\mathcal A}_{p, \\alpha}}\\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0},\n\\end{equation}\nwhich proves the differential subalgebra property for the Banach algebras ${\\mathcal A}_{p, \\alpha}$ with $1<p\\le \\infty$ and $\\alpha>1-1\/p$.\n\\end{proof}\n\nThe argument used in the proof of Theorem \\ref{sundiff.thm} involves a triplet of Banach algebras\n${\\mathcal A}_{p, \\alpha}$, ${\\mathcal A}_{1, 0}$ and ${\\mathcal B}(\\ell^2)$ satisfying \\eqref{sundiff.thm.pf.eq1} and\n\\eqref{para3.eq}.\nIn the following theorem, we extend the above observation to\n general Banach algebra triplets $({\\mathcal A}, {\\mathcal M}, {\\mathcal B})$.\n\n\\begin{thm}\\label{triple1.thm}\n Let ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal M}$\nand ${\\mathcal M}$ is a Banach subalgebra of ${\\mathcal B}$.\nIf there exist positive exponents $\\theta_0, \\theta_1\\in (0, 1]$ and absolute constants $D_0, D_1$ such 
that\n\\begin{equation}\\label{triple1.thm.eq1}\n\\|AB\\|_{\\mathcal A}\\le D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Big (\\Big(\\frac{\\|A\\|_{\\mathcal M}}{\\|A\\|_{\\mathcal A}}\\Big)^{\\theta_0} +\n\\Big(\\frac{\\|B\\|_{\\mathcal M}}{\\|B\\|_{\\mathcal A}}\\Big)^{\\theta_0}\n\\Big)\n\\quad {\\rm for \\ all} \\ A, B \\in {\\mathcal A},\n\\end{equation}\n and\n\\begin{equation}\\label{triple1.thm.eq2}\n\\|A\\|_{\\mathcal M}\\le D_1 \\|A\\|_{\\mathcal A}^{1-\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_1} \\ \\ {\\rm for \\ all} \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nthen\n${\\mathcal A}$ is a differential subalgebra of order $\\theta_0\\theta_1$ in ${\\mathcal B}$.\n\\end{thm}\n\n\\begin{proof} For any $A, B\\in {\\mathcal A}$, we obtain from \\eqref{triple1.thm.eq1} and\n\\eqref{triple1.thm.eq2} that\n\\begin{eqnarray*}\n\\|AB\\|_{\\mathcal A} & \\le & D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Bigg (\\Big(\\frac{D_1 \\|A\\|_{\\mathcal A}^{1-\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_1}}{\\|A\\|_{\\mathcal A}}\\Big)^{\\theta_0} +\n\\Big(\\frac{D_1 \\|B\\|_{\\mathcal A}^{1-\\theta_1} \\|B\\|_{\\mathcal B}^{\\theta_1}}{\\|B\\|_{\\mathcal A}}\\Big)^{\\theta_0}\n\\Bigg)\\\\\n & \\le & D_0 D_1^{\\theta_0}\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Big (\\Big(\\frac{\\|A\\|_{\\mathcal B}}{\\|A\\|_{\\mathcal A}}\\Big)^{\\theta_0\\theta_1} +\n\\Big(\\frac{\\|B\\|_{\\mathcal B}}{\\|B\\|_{\\mathcal A}}\\Big)^{\\theta_0\\theta_1}\n\\Big),\n\\end{eqnarray*}\nwhich completes the proof.\n\\end{proof}\n\n\nFollowing the argument used in \\eqref{Cab.eq}, we can show that $C^2[a, b]$ is a differential subalgebra of $C^1[a, b]$.\nFor any distinct $x, y\\in [a, b]$ and $f\\in C^2[a, b]$, observe that\n$$\n|f'(x)|= \\frac{|f(y)-f(x)-f''(\\xi) (y-x)^2\/2 |}{|y-x|} \\le 2\\|f\\|_{C[a, b]} |y-x|^{-1}+ \\frac{1}{2} \\|f^{\\prime\\prime}\\|_{C[a, b]}\n|y-x|\n$$\nfor some $\\xi\\in [a, b]$, which implies that\n\\begin{equation}\n\\|f'\\|_{C[a, b]}\\le \\max\\big (4 
\\|f\\|_{C[a, b]}^{1\/2} \\|f^{\\prime\\prime} \\|_{C[a, b]}^{1\/2}, 8 (b-a)^{-1} \\|f\\|_{C[a, b]}\\big).\n\\end{equation}\nTherefore there exists a positive constant $D$ such that\n\\begin{equation}\n\\|f\\|_{C^1[a,b]}\\le D \\|f\\|_{C^2[a, b]}^{1\/2} \\|f\\|_{C[a, b]}^{1\/2} \\ \\ {\\rm for \\ all} \\ \\ f\\in C^2[a, b].\n\\end{equation}\nAs an application of Theorem \\ref{triple1.thm}, we conclude that\n$C^2[a, b]$ is a differential subalgebra of order $1\/2$ in $C[a, b]$.\n\n\\smallskip\n\nWe finish the section with the proof of Theorem \\ref{W1.thm}.\n\n\\begin{proof}[Proof of Theorem \\ref{W1.thm}]\nThe conclusion follows from \\eqref{wa1.re} and Theorem \\ref{sundiff.thm} with $p=1$ and $\\alpha=1$.\n\\end{proof}\n\n\n\\section{Generalized differential subalgebras}\\label{generalizedDS.section}\n\n\nBy \\eqref{differentialnorm.def}, a differential subalgebra ${\\mathcal A}$ satisfies the Brandenburg requirement:\n\\begin{equation}\\label{bt.req}\n\\|A^2\\|_{\\mathcal A}\\le 2D_0 \\|A\\|_{\\mathcal A}^{2-\\theta} \\|A\\|_{\\mathcal B}^{\\theta}, \\ A\\in {\\mathcal A}.\n\\end{equation}\nTo consider the norm-controlled\ninversion of a Banach subalgebra ${\\mathcal A}$ of ${\\mathcal B}$,\nthe above requirement \\eqref{bt.req} could be relaxed to the existence of an integer $m\\ge 2$ such that\nthe $m$-th powers of elements in ${\\mathcal A}$ satisfy\n\\begin{equation}\\label{weakpower}\n\\|A^m\\|_{\\mathcal A} \\le D \\|A\\|_{\\mathcal A}^{m-\\theta} \\|A\\|_{\\mathcal B}^{\\theta}, \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nwhere $\\theta\\in (0, m-1]$ and $D=D({\\mathcal A}, {\\mathcal B}, m, \\theta)$ is an absolute positive constant; see Theorem \\ref{main-thm1} in the next section.\nFor $h\\in C^1[a, b]$ and $m\\ge 2$, we have\n \\begin{equation*}\n\\|h^m \\|_{C^1[a,b]} = m \\|h^{m-1} h'\\|_{C[a, b]}+ \\|h^m\\|_{C[a, b]}\n\\le m \\|h\\|_{C^1[a, b]} \\|h\\|_{C[a, b]}^{m-1},\n\\end{equation*}\nand hence the differential subalgebra $C^1[a, b]$ of $C[a, 
b]$ satisfies\n\\eqref{weakpower} with $\\theta=m-1$.\nIn this section, we introduce some sufficient conditions\nso that\n\\eqref{weakpower} holds for some integer $m\\ge 2$.\n\n\n\n\n\\begin{thm}\\label{triplenew.thm} Let ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal M}$\nand ${\\mathcal M}$ is a Banach subalgebra of ${\\mathcal B}$.\nIf there exist an integer $k\\ge 2$, positive exponents $\\theta_0, \\theta_1$, and absolute constants $E_0, E_1$\nsuch that\n\\begin{equation}\\label{triplenew.eq1}\n\\|A_1A_2\\cdots A_k\\|_{\\mathcal A} \\le E_0\n\\Big(\\prod_{i=1}^k\\|A_i\\|_{{\\mathcal A}} \\Big) \\sum_{j=1}^k \\Big(\\frac{\\|A_j\\|_{\\mathcal M}}{\\|A_j\\|_{\\mathcal A}}\\Big)^{\\theta_0}, \\ \\ A_1, \\ldots, A_k \\in {\\mathcal A}\n\\end{equation}\nand\n\\begin{equation}\\label{triplenew.eq2}\n\\|A^2\\|_{\\mathcal M}\\le E_1 \\|A\\|_{\\mathcal A}^{2-\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_1},\\ A\\in {\\mathcal A},\n\\end{equation}\nthen \\eqref{weakpower} holds for $m=2k$ and $\\theta=\\theta_0\\theta_1$.\n\\end{thm}\n\n\n\\begin{proof} By \\eqref{banachalgebra.def},\n\\eqref{triplenew.eq1} and \\eqref{triplenew.eq2}, we have\n\\begin{equation}\n\\|A^{2k}\\|_{\\mathcal A} \\le k E_0\n\\|A^2\\|_{{\\mathcal A}}^{k-\\theta_0} \\|A^2\\|_{\\mathcal M}^{\\theta_0}\n\\le k E_0 E_1^{\\theta_0} K^{k-\\theta_0} \\|A\\|_{{\\mathcal A}}^{2k-\\theta_0\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_0\\theta_1}, \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nwhich completes the proof.\n\\end{proof}\n\n\nFor a Banach algebra triplet $({\\mathcal A}, {\\mathcal M}, {\\mathcal B})$ in Theorem \\ref{triple1.thm}, we obtain from\n\\eqref{triple1.thm.eq1} and \\eqref{triple1.thm.eq2} that\n\\begin{eqnarray}\\label{triple1.eq3}\n\\|A_1A_2\\cdots A_k\\|_{\\mathcal A} & \\le & D_0\n\\|A_1\\|_{{\\mathcal A}} \\|A_2\\cdots A_k\\|_{{\\mathcal A}}\n\\Bigg (\\Big(\\frac{\\|A_1\\|_{\\mathcal 
M}}{\\|A_1\\|_{\\mathcal A}}\\Big)^{\\theta_0} +\n\\Big(\\frac{\\|A_2\\cdots A_k\\|_{\\mathcal M}}{\\|A_2\\cdots A_k\\|_{\\mathcal A}}\\Big)^{\\theta_0}\n\\Bigg)\\nonumber\\\\\n& \\le & \\tilde D_0\n\\Big(\\prod_{i=1}^k\\|A_i\\|_{{\\mathcal A}} \\Big)\n\\sum_{j=1}^k \\Big(\\frac{\\|A_j\\|_{\\mathcal M}}{\\|A_j\\|_{\\mathcal A}}\\Big)^{\\theta_0}, \\ \\ A_1, \\ldots, A_k\\in {\\mathcal A},\n\\end{eqnarray}\nand\n\\begin{equation}\\label{triple1.eq4}\n\\|A^2\\|_{\\mathcal M}\\le \\tilde K \\|A\\|_{\\mathcal M}^2\\le\nD_1^2 \\tilde K \\| A\\|_{\\mathcal A}^{2-2\\theta_1}\\|A\\|_{\\mathcal B}^{2\\theta_1}, \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nwhere $\\tilde D_0$ is an absolute constant and $\\tilde K$ is the constant $K$ in\n\\eqref{banachalgebra.def} for the Banach algebra ${\\mathcal M}$.\nTherefore the assumptions \\eqref{triplenew.eq1} and \\eqref{triplenew.eq2}\nin Theorem \\ref{triplenew.thm} are satisfied\nfor the Banach algebra triplet $({\\mathcal A}, {\\mathcal M}, {\\mathcal B})$ in Theorem \\ref{triple1.thm}.\n\nFor a differential subalgebra ${\\mathcal A}$ of order $\\theta_0$ in ${\\mathcal B}$,\nwe observe that the requirements\n\\eqref{triplenew.eq1} and \\eqref{triplenew.eq2} with ${\\mathcal M}={\\mathcal B}$, $k=2$ and $\\theta_1=2$ are met,\n and hence\n \\eqref{weakpower} holds for $m=4$ and $\\theta=2\\theta_0$.\nRecall that ${\\mathcal B}$ is a trivial differential subalgebra of ${\\mathcal B}$.\nIn the following corollary, we can extend the above conclusion to arbitrary differential subalgebras ${\\mathcal M}$ of ${\\mathcal B}$.\n\n\\begin{cor}\nLet ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a differential subalgebra of order $\\theta_0$ in ${\\mathcal M}$\nand ${\\mathcal M}$ is a differential subalgebra of order $\\theta_1$ in ${\\mathcal B}$.\nThen \\eqref{weakpower} holds for $m=4$ and $\\theta=\\theta_0\\theta_1$.\n\\end{cor}\n\n\n\n\nFollowing the argument used in the proof 
of Theorem \\ref{triplenew.thm}, we can show that\n\\eqref{weakpower} holds for $m=4$ if the requirement \\eqref{triplenew.eq1} with $k=3$ is replaced by the following strong version\n\\begin{equation}\\label{triplenew.eq3}\n\\|ABC\\|_{\\mathcal A} \\le E_0\n\\|A\\|_{{\\mathcal A}} \\|C\\|_{{\\mathcal A}} \\|B\\|_{{\\mathcal A}}^{1-\\theta_0}\n\\|B\\|_{\\mathcal M}^{\\theta_0}, \\ \\ A, B, C \\in {\\mathcal A}.\n\\end{equation}\n\n\n\n\\begin{thm}\\label{triplenew.thm2}\nLet ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal M}$\nand ${\\mathcal M}$ is a Banach subalgebra of ${\\mathcal B}$.\nIf there exist positive exponents $\\theta_0, \\theta_1\\in (0, 1]$ and absolute constants $E_0, E_1$\nsuch that \\eqref{triplenew.eq2} and \\eqref{triplenew.eq3} hold,\nthen \\eqref{weakpower} holds for $m=4$ and $\\theta=\\theta_0\\theta_1$.\n\\end{thm}\n\nLet $L^p:=L^p(\\R), 1\\le p\\le \\infty$, be the space of all $p$-integrable functions on $\\R$ with standard norm $\\|\\cdot\\|_p$,\nand ${\\mathcal B}(L^p)$ be the algebra of bounded linear operators on\n$L^p$ with the norm $\\|\\cdot \\|_{{\\mathcal B}(L^p)}$.\nFor $1\\le p\\le \\infty, \\alpha\\ge 0$ and $\\gamma\\in [0, 1)$, we define the norm\nof a kernel $K$ on $\\R\\times \\R$ by\n\\begin{equation}\\label{Ex3-def-norm}\n\\|K\\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}=\n\\left\\{\n\\begin{array}{ll}\n \\max\\Big(\\sup_{x\\in \\R} \\big\\|K(x,\\cdot)u_\\alpha(x,\\cdot)\\big\\|_p,\\\n\\sup_{y\\in \\R} \\big\\|K(\\cdot,y)u_\\alpha(\\cdot,y)\\big\\|_p\\Big) & {\\rm if }\\ \\gamma =0\n\\\\\n\\|K\\|_{{\\mathcal W}_{p,\\alpha}^0}+\\sup_{0<\\delta\\le 1} \\delta^{-\\gamma}\n\\|\\omega_\\delta(K)\\|_{{\\mathcal W}_{p,\\alpha}^0} & {\\rm if } \\ 0 < \\gamma <1,\n\\end{array}\n\\right.\n\\end{equation}\nwhere the modulus of continuity\nof the kernel $K$ is defined 
by\n\\begin{equation}\\label{Ex3-def-mod}\n\\omega_\\delta(K)(x,y):=\\sup_{|x^\\prime|\\le \\delta, |y^\\prime|\\le \\delta}\n|K(x+x^\\prime, y+y^\\prime)-K(x,y)|, \\ x, y\\in \\R,\n\\end{equation}\nand $u_\\alpha(x, y)= (1+|x-y|)^\\alpha, x, y\\in \\R$, are polynomial weights on $\\R\\times \\R$.\nConsider the set ${\\mathcal W}_{p,\\alpha}^\\gamma$ of integral operators\n\\begin{equation}\\label{Ex3-def-int-oper}\nTf(x)=\\int_{{\\R}} K_T(x,y) f(y) dy, \\quad f \\in L^p,\n\\end{equation}\nwhose integral kernels $K_T$ satisfy $\\|K_T\\|_{{\\mathcal W}^\\gamma_{p, \\alpha}}<\\infty$, and define\n$$\n\\|T\\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}:=\n\\|K_T \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}, \\ T\\in {\\mathcal W}_{p,\\alpha}^\\gamma.\n$$\nIntegral operators in ${\\mathcal W}_{p, \\alpha}^\\gamma$ have kernels that are H\\\"older continuous of order $\\gamma$\n and have polynomial off-diagonal decay of order $\\alpha$.\nFor $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$, one may verify that\n ${\\mathcal W}_{p, \\alpha}^\\gamma, 0\\le \\gamma<1$, are Banach subalgebras of\n ${\\mathcal B}(L^2)$ under operator composition.\n The Banach algebras ${\\mathcal W}_{p, \\alpha}^\\gamma, 0<\\gamma<1$, of integral operators\n may not form differential subalgebras of ${\\mathcal B}(L^2)$; however, the triplet\n$({\\mathcal W}_{p, \\alpha}^\\gamma, {\\mathcal W}_{p, \\alpha}^0, {\\mathcal B}(L^2))$ is shown in\n \\cite{sunacha08} to satisfy the estimates\n\\begin{equation}\\label{Ex3-norm-comp}\n\\|T_0\\|_{{\\mathcal B}(L^2)} \\le D \\|T_0\\|_{{\\mathcal W}_{p,\\alpha}^0} \\le\nD\\|T_0\\|_{{\\mathcal W}_{p,\\alpha}^\\gamma},\n\\end{equation}\n\\begin{equation}\\label{Ex3-norm-2-theta}\n\\|T_0^2 \\|_{{\\mathcal W}_{p,\\alpha}^0} \\le D\n\\|T_0 \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}^{1+\\theta} \\|T_0 \\|_{{\\mathcal B}(L^2)}^{1-\\theta}\n\\end{equation}\nand\n\\begin{equation}\\label{Ex3-norm-3-product}\n\\|T_1 T_2 T_3 \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma} \\le D\n\\|T_1 
\\|_{{\\mathcal W}_{p,\\alpha}^\\gamma} \\| T_2 \\|_{{\\mathcal W}_{p,\\alpha}^0}\n\\| T_3 \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}\n\\end{equation}\nholds for all $T_i\\in {\\mathcal W}_{p, \\alpha}^\\gamma, 0\\le i\\le 3$, where $D$ is an absolute constant and\n$$\\theta= \\frac{\\alpha+\\gamma+1\/p}{(1+\\gamma)(\\alpha+1\/p)}.$$\nThen the requirements \\eqref{triplenew.eq2} and \\eqref{triplenew.eq3} in Theorem \\ref{triplenew.thm2}\nare met for the triplet $({\\mathcal W}_{p, \\alpha}^\\gamma, {\\mathcal W}_{p, \\alpha}^0, {\\mathcal B}(L^2))$,\nand hence\nthe Banach algebra pair $({\\mathcal W}_{p, \\alpha}^\\gamma, {\\mathcal B}(L^2))$ satisfies the Brandenburg condition \\eqref{weakpower} with $m=4$\n\\cite{fangshinsun13, sunacha08}.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Brandenburg trick and norm-controlled inversion}\\label{normcontrolledinversion.section}\n\n\nLet\n$\\mathcal{A}$ and $\\mathcal{B}$ be $*$-algebras with common identity and involution, and let\n $\\mathcal{B}$ be symmetric. 
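For orientation, we sketch how Brandenburg's trick recovers the spectral radii identity \eqref{spectralradius} from \eqref{bt.req}; this is a standard computation summarized in our own words, not quoted verbatim from the cited references.

```latex
% Iterate \eqref{bt.req} along the dyadic powers A^{2^n}:
%   \|A^{2^n}\|_{\mathcal A} \le 2D_0\,\|A^{2^{n-1}}\|_{\mathcal A}^{2-\theta}\,\|A^{2^{n-1}}\|_{\mathcal B}^{\theta}.
% Taking 2^n-th roots and letting n\to\infty in the spectral radius formula yields
\rho_{\mathcal A}(A) \le \rho_{\mathcal A}(A)^{(2-\theta)/2}\, \rho_{\mathcal B}(A)^{\theta/2},
% hence \rho_{\mathcal A}(A) \le \rho_{\mathcal B}(A); since the reverse inequality
% \rho_{\mathcal B}(A) \le \rho_{\mathcal A}(A) always holds for a Banach subalgebra,
% we obtain \rho_{\mathcal A}(A) = \rho_{\mathcal B}(A), which is \eqref{spectralradius}.
```

Thus \eqref{bt.req} already gives inverse-closedness via Hulanicki's spectral method when ${\mathcal B}$ is symmetric; what the spectral method alone does not provide is the quantitative norm bound pursued in this section.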
In this section, we show that\n ${\\mathcal A}$ has norm-controlled inversion in ${\\mathcal B}$ if it meets\n the Brandenburg requirement \\eqref{weakpower}.\n\n\n\n\n\n\\begin{thm}\\label{main-thm1}\nLet ${\\mathcal B}$ be a symmetric $*$-algebra with its norm\n$\\|\\cdot\\|_{\\mathcal B}$\nbeing normalized in the sense that \\eqref{banachalgebra.def} holds with $K=1$,\n \\begin{equation}\\label{banachalgebra.defnew2}\n\\|\\tilde A\\tilde B\\|_{\\mathcal B}\\le \\|\\tilde A\\|_{\\mathcal B}\\|\\tilde B\\|_{\\mathcal B},\\ \\tilde A, \\tilde B\\in {\\mathcal B},\n\\end{equation}\nand\n$\\mathcal{A}$ be a $*$-algebra with its\nnorm\n$\\|\\cdot\\|_{\\mathcal A}$\nbeing normalized too,\n\\begin{equation}\\label{banachalgebra.defnew1}\n\\|AB\\|_{\\mathcal A}\\le \\|A\\|_{\\mathcal A}\\|B\\|_{\\mathcal A}, \\ A, B\\in {\\mathcal A}.\n\\end{equation}\nIf ${\\mathcal A}$ is a $*$-subalgebra of ${\\mathcal B}$ with common identity $I$ and involution $*$, and\nthe pair $({\\mathcal A}, {\\mathcal B})$ satisfies\nthe Brandenburg requirement \\eqref{weakpower}, then\n${\\mathcal A}$ has norm-controlled inversion in ${\\mathcal B}$. 
Moreover,\nfor any $A\\in{\\mathcal A}$ that is invertible in ${\\mathcal B}$ we have\n\\begin{eqnarray}\\label{norm-control1}\n\\|A^{-1}\\|_{\\mathcal A}\n& \\le &\n\\|A^* A\\|_{\\mathcal B}^{-1} \\|A^*\\|_{\\mathcal A}\\nonumber\\\\\n& & \\times\n \\big(2t_0+(1- 2^{\\log_m (1-\\theta\/m)})^{-1} (\\ln a)^{-1}\\big) a\n \\exp\\Big(\\frac{\\ln m-\\ln (m-\\theta)} {\\ln (m-\\theta)} t_0 \\ln a\\Big)\n \\ \\ {\\rm if} \\ \\theta<m-1,\n\\end{eqnarray}\nwith a similar explicit bound in the limiting case $\\theta=m-1$, where\n$$a= \\|I\\|_{\\mathcal A}+ \\|A^* A\\|_{\\mathcal B}^{-1} \\|A^* A\\|_{\\mathcal A}> 1,\n$$\n$$b= \\frac{ \\|I\\|_{\\mathcal A}+ \\|A^* A\\|_{\\mathcal B}^{-1}\n \\|A^* A\\|_{\\mathcal A}} {1- (\\kappa(A^*A))^{-1} }\\ge a >1,\n$$\nin which $\\kappa(A^*A):=\\|A^* A\\|_{\\mathcal B} \\|(A^* A)^{-1}\\|_{\\mathcal B}$ denotes the condition number of $A^*A$ in ${\\mathcal B}$,\nand\n\\begin{equation}\\label{t0.def0}\nt_0=\\Big( \\frac{ (m-1)(m-\\theta) \\log_m (m-\\theta) \\ln (Db)}{(m-1-\\theta) \\ln a}\\Big)^{\\ln m\/(\\ln m-\\ln (m-\\theta))} \\ {\\rm for}\\ 0<\\theta<m-1.\n\\end{equation}\n\\end{thm}\n\n\n\\begin{rem} {\\rm\nIn \\cite{gkII, gkI}, Gr\\\"ochenig and Klotz proved that the Baskakov-Gohberg-Sj\\\"ostrand algebra ${\\mathcal C}_{1, \\alpha}$ with $\\alpha>0$\nand the Jaffard algebra ${\\mathcal J}_\\alpha, \\alpha>1$ have norm-controlled inversion in ${\\mathcal B}(\\ell^2)$ with\n the norm control function\n $h$ bounded by a polynomial. In \\cite{shinsun19}, we \nproved that the Beurling algebras ${\\mathcal B}_{p, \\alpha}$\n with $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$ admit norm-controlled inversion in ${\\mathcal B}(\\ell^2)$ with\n the norm control function\n bounded by a polynomial. Following the commutator technique used in \\cite{shinsun19, sjostrand94}, we can establish a similar result for\n the Baskakov-Gohberg-Sj\\\"ostrand algebras ${\\mathcal C}_{p, \\alpha}$ with $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$.\n\n \\begin{thm}\\label{polynomial.thm}\n Let $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$. 
Then\n the Baskakov-Gohberg-Sj\\\"ostrand algebra ${\\mathcal C}_{p, \\alpha}$\n and the Beurling algebra ${\\mathcal B}_{p, \\alpha}$\n admit norm-controlled inversion in ${\\mathcal B}(\\ell^2)$ with\n the norm control function\n bounded by a polynomial.\n \\end{thm}\n\n It is still unknown whether the Gr\\\"ochenig-Schur algebras ${\\mathcal A}_{p, \\alpha}, 1\\le p<\\infty, \\alpha>1-1\/p$,\n admit norm-controlled inversion in ${\\mathcal B}(\\ell^q), 1\\le q< \\infty$, with\n the norm control function bounded by a polynomial.\nIn \\cite{gkII}, Gr\\\"ochenig and Klotz introduced a differential operator ${\\mathcal D}$ on a Banach algebra and\nused it to define a differential $*$-algebra ${\\mathcal A}$ of a symmetric $*$-algebra ${\\mathcal B}$ that\n admits norm-controlled inversion with\n the norm control function\n bounded by a polynomial. However, the differential algebras in \\cite{gkII} do not include\n the Gr\\\"ochenig-Schur algebras ${\\mathcal A}_{p, \\alpha}$,\n the Baskakov-Gohberg-Sj\\\"ostrand algebras ${\\mathcal C}_{p, \\alpha}$\n and the Beurling algebras ${\\mathcal B}_{p, \\alpha}$ with $1\\le p<\\infty$ and $\\alpha>1-1\/p$.\n It would be an interesting problem to extend the conclusions in Theorem \\ref{polynomial.thm}\n to other Banach algebras for which\n the norm control function in the norm-controlled inversion has polynomial growth.\n}\n\\end{rem}\n\n\n\n\n\\begin{rem} {\\rm A crucial step in the\nproof of Theorem \\ref{main-thm1} is to introduce\n $B:=I- \\|A^* A\\|_{\\mathcal B}^{-1} A^* A\\in {\\mathcal A}$, whose spectrum is contained in an interval\n on the nonnegative real axis.\n The above reduction depends on the requirements that\n ${\\mathcal B}$ is symmetric and both ${\\mathcal A}$ and ${\\mathcal B}$ are $*$-algebras with common identity and involution.\nIn many mathematical and\nengineering applications, the widely used algebras ${\\mathcal B}$ of infinite matrices and integral operators\nare the operator 
algebras ${\\mathcal B}(\\ell^p)$ and ${\\mathcal B}(L^p), 1\\le p\\le \\infty$, which are symmetric only when $p=2$.\nIn \\cite{akramjfa09, fangshinsun13, shinsun19, shincjfa09, sunacha08, tesserajfa10}, inverse-closedness of localized matrices and integral operators\nin ${\\mathcal B}(\\ell^p)$ and ${\\mathcal B}(L^p), 1\\le p\\le \\infty$, are discussed, and in \\cite{fangshinsun20},\nBeurling algebras ${\\mathcal B}_{p,\\alpha}$ with $1\\le p<\\infty$ and $\\alpha>d(1-1\/p)$ are shown to admit polynomial norm-controlled\ninversion in nonsymmetric algebras ${\\mathcal B}(\\ell^p), 1\\le p<\\infty$.\nIt is still widely open to discuss Wiener's lemma and\n norm-controlled inversion when ${\\mathcal B}$ and ${\\mathcal A}$ are not $*$-algebras and\n ${\\mathcal B}$ is not a symmetric algebra.\n}\n\\end{rem}\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn recent years, many applications related to stochastic differential equations (SDEs) with discontinuous drift coefficient\nhave emerged. These types of equations typically arise in mathematical finance and insurance \\cite{Asmussen.1997,Ichiba.2013,Ichiba.2011,Karatzas.2009},\nengineering applications~\\cite{LaBolle.2000,Olama2009}, economy~\\cite{Shardin.2016,Leobacher2015} or stochastic control problems \\cite{Shardin.2016,Balakrish1980,kushner,touzi}.\n\nThe existence and uniqueness of solutions of SDEs in the standard case, i.e.\\ the case of sufficiently smooth coefficients, is well understood~\\cite{ks}. However, the standard theory on SDEs does not apply anymore in case of a discontinuous drift coefficient, e.g.\\ a piecewise constant drift coefficient, and a special theory is \nneeded to address the question of existence and uniqueness of solutions of such SDEs~\\cite{Kleptsyna.1985,Veretennikov.1981,Zvonkin.1974}. 
\nThe same is true for the numerical analysis: The convergence behavior of approximation schemes needs to be\nreconsidered and ``research on numerical methods for SDEs with irregular coefficients is highly active.''\\ (\\cite[p.\\ 2]{LSeuler}). In the case of a sufficiently smooth drift and a constant diffusion coefficient, the exact strong rate of convergence is 1 for the Euler scheme, see~\\cite{DGR2003,Kloeden.1999}.\nAt the time, when the main part of the research presented here was undertaken, no comparable result was known in the case of a discontinuous, e.g.\\ piecewise constant, drift coefficient.\nAfter many discussions and investigations, also inspired by a previous version of this manuscript, refined results are now about to be established, see Section \\ref{subsec:equation}.\n\nIn this work, we will focus on numerical approximations of SDEs in the presence of a piecewise constant \ndrift and a constant diffusion coefficient. We will provide theoretical considerations on the long time behavior of approximated SDE solutions based on results from the theory of ergodic Markov chains. Moreover, we will provide further insight into the numerical behavior of approximation schemes, in particular the Euler scheme, by analyzing the numerical convergence rates based on a reference solution.\nThe numerical speed of convergence heavily depends on\nthe initial value and properties of the drift coefficient, e.g. drift direction or\njump height. Our tests reveal that for a special class of drift coefficients the numerical convergence rates are higher and independent of\ninitial conditions due to the ergodicity of the Euler scheme and the underlying SDE. We also use the Euler scheme to verify the long time behavior of a rank-based stock market model~\\cite{Banner.2005},\na prominent model in finance to describe the evolution of the capital distribution within the market. 
\n\nThe remainder of this manuscript is organized as follows: \nIn Section \ref{sec:Stability}, we will introduce some theoretical and numerical basics\nand establish the ergodicity of the Euler approximations\nin the case of an appropriate, piecewise constant drift coefficient.\nIn Section \ref{sec:SimStudies}, we will discuss numerical convergence properties and further\nfindings of several numerical tests. \nWe will conclude this work in Section \ref{sec:finance} with the application from mathematical finance mentioned above, \nwhere SDEs of discontinuous type naturally arise. \n\n\n\section{Problem Description} \label{sec:Stability}\n\n\nIn this section, we will introduce our basic setting, i.e.\ the type of SDE we are interested in, and some basic terms for the numerical tests. Besides the Euler scheme and its long time properties in our setting, we will also briefly discuss the applicability and performance of some other numerical schemes. \n\n\subsection{The Equation} \label{subsec:equation}\n\n\nIn this manuscript, we will consider time-homogeneous SDEs with piecewise constant drift coefficient and additive noise:\n\begin{align} \label{eq:SDE}\ndX_t = \sum\limits_{j=1}^{s} \alpha_j \cdot \mathbbmss{1}_{B_j}(X_t)dt + \sigma dW_t, \quad t \geq 0, \qquad X_0= \xi.\n\end{align}\nHere, we have $s \in \mathbb{N}$, $\alpha_j, \sigma, \xi \in \mathbb{R}$ and disjoint (possibly unbounded) intervals $B_j \subset \mathbb{R}$ for all $1 \leq j \leq s$, and $\left( W_t \right)_{t \geq 0}$ is a one-dimensional Brownian motion.\n\nThe existence and uniqueness of solutions to this type of SDE is guaranteed by results of \cite{Veretennikov.1981,Zvonkin.1974} and \cite{Kleptsyna.1985}. In \cite{Veretennikov.1981}, conditions on the drift and diffusion coefficient are derived under which the corresponding SDE has a unique strong solution. 
As emphasized therein, those conditions are in particular fulfilled for a bounded drift coefficient and a constant diffusion coefficient. Thus, the existence and uniqueness of a strong solution for SDEs of type \eqref{eq:SDE} is ensured.\n\n\medskip\n\nFor the numerical analysis of SDEs with discontinuous drift and\/or diffusion coefficient, the situation is more involved.\nIn this manuscript, we will focus on the strong convergence rate of the Euler scheme, which, for a general SDE\n$$ dX_t = f(X_t) dt + g(X_t) dW_t, \qquad t \in [0,T], \qquad X_0= \xi,$$ where $f$ and $g$ are such that a unique strong solution exists, is given by\n\begin{align} \label{euler_alg}\n\tx_{k+1}^{\text{expE}} = x_k^{\text{expE}} + f(x_k^{\text{expE}}) \Delta + g(x_k^{\text{expE}}) (W_{(k+1)\Delta}-W_{k \Delta}), \quad k=0,\ldots,n-1, \qquad x_0^{\text{expE}}= \xi.\n\end{align}\nThe underlying time discretization of the time interval $[0,T]$ is the equidistant grid $0=t_0 < t_1 < \ldots < t_n = T$ with $t_k = k \Delta$, $k=0, \ldots, n$, and step size $\Delta = T\/n$. For SDEs of type \eqref{eq:SDE}, recent results establish that the Euler scheme converges strongly in $L^2$ with order at least $3\/4-\varepsilon$ for every $\varepsilon>0$.\n\nFor a better comparison, note that in the standard setting of an SDE with additive noise, where the drift coefficient is sufficiently smooth, the Euler scheme has an exact strong convergence order of $1$, see e.g. \cite{DGR2003} and \cite[p.\ 350f]{Kloeden.1999}.\n\nSo to summarize: The Euler scheme for our non-standard setting of SDE \eqref{eq:SDE} has at least $L^2$-convergence order $3\/4-\varepsilon$. However, observing this convergence order numerically will be a different story (see Section \ref{sec:SimStudies}).\n\n\subsection{Simulation studies and empirical convergence rates}\nAs already mentioned, we are interested in empirically measuring the strong convergence rate of the Euler scheme. 
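As a concrete illustration of the recursion \eqref{euler_alg}, a minimal sketch of an implementation for a scalar SDE with additive noise could look as follows; the helper name, the inward pointing drift $a(x)=-\mathrm{sign}(x)$ and all parameter values are purely illustrative choices, not taken from the tables below:

```python
import numpy as np

def euler_path(a, sigma, xi, T, n, rng):
    """Explicit Euler scheme x_{k+1} = x_k + a(x_k)*dt + sigma*dW_k on [0, T]."""
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
    x = np.empty(n + 1)
    x[0] = xi
    for k in range(n):
        x[k + 1] = x[k] + a(x[k]) * dt + sigma * dW[k]
    return x

# piecewise constant, inward pointing drift a(x) = -sign(x), discontinuous at 0
rng = np.random.default_rng(0)
path = euler_path(lambda x: -np.sign(x), 1.0, 1.0, 1.0, 2**10, rng)
```

Note that the scheme simply evaluates the discontinuous drift at the left endpoint of each step; no smoothing of the discontinuity is involved.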
The standard procedure for this is as follows: The root mean squared error (RMSE) at time $T$ for the Euler scheme \eqref{euler_alg} applied to SDE \eqref{eq:SDE} with step size $\Delta =T\/n$ is given by\n\begin{align}\n\te(n) \mathrel{\mathop:}= \left( \mathbb{E}\left \vert X_T - x_n^{\text{expE}} \right\vert^2 \right)^{1\/2}.\n\end{align}\nSince an explicit form of $X_T$ is unknown in general, we replace $X_T$ in our simulation studies by\na numerical reference solution $X_T^{\tt num}$, which is computed by the Euler scheme for an extremely small step size $\Delta=T\/N$ with a very large number of $N+1$ grid points such that this approximation can be considered close enough to the true solution.\nMoreover, the expectation $ \mathbb{E} \vert X_T^{ \tt num} - x_n^{\text{expE}}\vert^2$ is not known explicitly either, so we will approximate this expectation by the empirical RMSE\n\begin{align}\ne_{\tt emp}(n) = \sqrt{\frac{1}{M} \sum\limits_{i=1}^{M}\left\vert \left( X_T^{\tt num} - x_n^{\text{expE}}\right)^{(i)}\right\vert^2},\n\end{align}\nwith a large number $M$ of Monte Carlo repetitions, i.e.\ $( X_T^{\tt num} - x_n^{\text{expE}})^{(i)}$, $i=1, \ldots, M$, are iid copies of $X_T^{\tt num} - x_n^{\text{expE}}$.\nHere, $X_T^{\tt num}$ and $x_n^{\text{expE}}$ have the same random input.\nNote that $N$ has to be chosen sufficiently large to generate the numerical reference solution and to avoid oscillations in $e_{\tt emp}(n)$, which might occur if $N$ and $n$ are close. The number of repetitions $M$ should also be large enough to have a good approximation of the expectation, i.e.\ a small Monte Carlo error.\n\n\subsection{Other schemes}\n\n\nA natural idea is of course to consider schemes other than the explicit Euler scheme and to compare them in our simulation studies.\n\n\subsubsection{The implicit Euler scheme}\n\nImplicit schemes have good stability properties; thus, they are a natural choice to consider. 
For an SDE with additive noise, where the drift coefficient is sufficiently smooth, the implicit Euler scheme has strong convergence order $1$ (see e.g. \cite{Alfonsi.2013} and \cite{Neuenkirch.2014}).\n\nHowever, for SDEs of type \eqref{eq:SDE}, even the implicit Euler scheme is not well defined. To see this, consider the SDE\n \begin{align*} dX_t= \left( \alpha_1 \cdot \mathbbmss{1}_{(-\infty,0)} (X_t) + \alpha_2 \cdot \mathbbmss{1}_{[0,\infty)} (X_t) \right) dt + \sigma dW_t, \quad t \geq 0, \qquad X_0=\xi, \end{align*} \nwith $\alpha_1 >0 > \alpha_2$. The implicit Euler scheme \n$$ x_{k+1}^{\text{impE}} = x_{k}^{\text{impE}} + \left(\alpha_1 \cdot \mathbbmss{1}_{(-\infty,0)}(x_{k+1}^{\text{impE}}) + \alpha_2 \cdot \mathbbmss{1}_{[0,\infty)}(x_{k+1}^{\text{impE}})\right)\Delta + \sigma ( W_{(k+1)\Delta} - W_{k \Delta}), \ k=0,\ldots, n-1,$$\nrequires solving, for fixed but arbitrary $z \in \mathbb{R}$, the equation\n$$ y - \left( \alpha_1 \cdot \mathbbmss{1}_{(-\infty,0)} (y) + \alpha_2 \cdot \mathbbmss{1}_{[0,\infty)} (y) \right) \Delta= z,$$\nwith respect to $y \in \mathbb{R}$. This equation does not possess a solution if $ z \in (-\alpha_1 \Delta, -\alpha_2 \Delta)$, and hence an implicit Euler scheme is not well defined in this setting.\n\n\subsubsection{The Heun scheme}\n\nThe Heun scheme is another scheme with strong order one for SDEs with additive noise under appropriate smoothness conditions on the drift coefficient. 
Adapted from \\cite[p.\\ 373]{Kloeden.1999} for SDEs of type \\eqref{eq:SDE}, it is defined by\n\\begin{align*} \n\tx_{k+1}^{\\text{Heun}} &= x_{k}^{\\text{Heun}} + \\frac{1}{2} \\left(\\sum\\limits_{j=1}^{s} \\alpha_j \\cdot \\mathbbmss{1}_{B_j}(x_{k}^{\\text{Heun}}) + \\sum\\limits_{j=1}^{s} \\alpha_j \\cdot \\mathbbmss{1}_{B_j}(\\Gamma_{k})\\right) \\Delta + \\sigma (W_{(k+1)\\Delta}-W_{k \\Delta}), \\\\\n\t\\Gamma_{k} &= x_{k}^{\\text{Heun}} + \\sum\\limits_{j=1}^{s} \\alpha_j \\cdot \\mathbbmss{1}_{B_j}(x_{k}^{\\text{Heun}}) \\Delta + \\sigma ( W_{(k+1)\\Delta} - W_{k \\Delta}), \\qquad k=0,\\ldots,n-1. \\notag\n\\end{align*}\n\nFor a closer look at the behaviour of this scheme at a discontinuity assume that the drift coefficient is given by $a(x)=\\pm\\textrm{sign}(x)$. An increment of the Heun scheme with $x_{k}^{\\text{Heun}}=x$ is then given by\n\\begin{align*} \n\tx_{k+1}^{\\text{Heun}} - x = \\frac{1}{2} \\big( a(x) + a(x + a(x)\\Delta + \\sigma ( W_{(k+1)\\Delta} - W_{k \\Delta})) \\big) \\Delta \t\n\t+ \\sigma ( W_{(k+1)\\Delta} - W_{k \\Delta}) .\n\\end{align*}\nSo if no drift change occurs in the Euler step $x + a(x)\\Delta + \\sigma ( W_{(k+1)\\Delta} - W_{k \\Delta})$, a Heun step and an Euler step coincide. However, if a drift change occurs in the Euler step, the Heun step\nreads as\n$$ x_{k+1}^{\\text{Heun}} = x + \\sigma ( W_{(k+1)\\Delta} - W_{k \\Delta}), $$\ni.e., it approximates the drift coefficient by zero and its dynamics are purely diffusion-based in this case. \n\n\\subsubsection{A Wagner-Platen type scheme}\n\nA strong order $1.5$-scheme for SDEs with smooth drift coeffi\\-cient and additive noise is given by a Wagner-Platen type scheme (see e.g. 
\\cite[p.\\ 383]{Kloeden.1999}), which \nreads in our setting as\n\\begin{align*}\n\tx_{k+1}^{\\text{Pla}} = x_{k}^{\\text{Pla}} & + a_{k} \\Delta + \\sigma ( W_{(k+1)\\Delta} - W_{k \\Delta}) \\\\ &\n\t+ \\frac{1}{4} \\left(a_{k}^{+} - 2a_k + a_{k}^{-}\\right) \\Delta \n\t+ \\frac{1}{2\\sqrt{\\Delta}} \\left(a_{k}^{+} - a_{k}^{-}\\right) \\int_{k \\Delta}^{(k+1)\\Delta} (W_u-W_{k \\Delta}) du,\n\\end{align*}\nwith\n$$ \n\\Gamma_{k}^{\\pm} =x_{k}^{\\text{Pla}} + a_k \\Delta \\pm \\sigma \\sqrt{\\Delta}, \\quad \n\ta_k =a(x_{k}^{\\text{Pla}}), \\quad\n\ta_k^{\\pm} = a(\\Gamma_{k}^{\\pm}), \\quad \\text{with} \\ a(x) = \\sum\\limits_{j=1}^{s} \\alpha_j \\cdot \\mathbbmss{1}_{B_j}(x).$$\n\t\nNow, we look again at the case of a drift coefficient given by $a(x)=\\pm\\textrm{sign}(x)$ and stepsize $\\Delta < \\sigma^2$. \nFor a Wagner-Platen step with $x_{k}^{\\text{Pla}}=x$, it depends now on whether\n$$ x + a(x) \\Delta + \\sigma \\sqrt{\\Delta}, \\quad x, \\quad x + a(x) \\Delta - \\sigma \\sqrt{\\Delta} $$\nhave the same sign or not. \nIf this condition is fulfilled, i.e., if $x$ is sufficiently far away from the discontinuity, then, a Wagner-Platen step and an Euler step coincide.\nIf the latter condition is not satisfied, then we have the dynamics\n\\begin{align*}\n\tx_{k+1}^{\\text{Pla}} = x &+ \\frac{1}{2} a(x) \\Delta + \\sigma ( W_{(k+1)\\Delta} - W_{k \\Delta}) \\notag \\\\\n\t&+ \\frac{1}{2\\sqrt{\\Delta}} \\left(a\\big(x+a(x) \\Delta + \\sigma \\sqrt{\\Delta}\\big) - a\\big(x+a(x) \\Delta - \\sigma \\sqrt{\\Delta}\\big) \\right) \\int_{k \\Delta}^{(k+1)\\Delta} (W_u-W_{k \\Delta}) du.\n\\end{align*}\nSo also here, the diffusive dynamic dominates the scheme when taking values close to the discontinuity. \n\n\n\n\\subsection{Ergodicity and stability of the Euler scheme} \\label{sec:ergod}\n\n\nWe will now address the long time properties of the Euler scheme based on results from the theory of ergodic Markov chains. 
For simplicity, we consider here a special case of SDE \eqref{eq:SDE}, namely\n\begin{align*} dX_t= \left( \alpha_1 \cdot \mathbbmss{1}_{(-\infty,0)} (X_t) + \alpha_2 \cdot \mathbbmss{1}_{[0,\infty)} (X_t) \right) dt + dW_t, \quad t \geq 0, \qquad X_0=\xi, \end{align*}\nand assume that $$\alpha_1 >0 > \alpha_2,$$ i.e.\ a drift coefficient which points towards zero.\nClearly, we have\n\begin{align} \lim_{s \rightarrow 0} \frac{\mathbb{E}(X_{t+s}|X_t=x) - x}{s} = \alpha_1 \cdot \mathbbmss{1}_{(-\infty,0)} (x) + \alpha_2 \cdot \mathbbmss{1}_{[0,\infty)} (x), \qquad t \geq 0, \,\, x \neq 0, \label{ce_cont} \end{align}\ni.e. on average, the solution is moving inwards. Moreover, following e.g.\ chapter 6 in \cite{gs}, this SDE admits a unique invariant distribution with Lebesgue density\n$$ \varphi_{\infty}(x)= c \cdot e^{ 2 \alpha_2 x} \cdot \mathbbmss{1}_{[0,\infty)} (x) + c \cdot e^{ 2 \alpha_1 x} \cdot \mathbbmss{1}_{(-\infty,0)} (x), \qquad x \in \mathbb{R},$$\nwhere the normalizing constant $c>0$ is such that $\int_{-\infty}^{\infty} \varphi_{\infty}(x) dx =1$.\nIn particular, we have\nthat \begin{align}\label{conv_cont}\n \lim_{t \rightarrow \infty} {\mathbb P}(X_t \leq y) = \int_{-\infty}^y \varphi_{\infty}(z) dz, \qquad y \in \mathbb{R}, \end{align}\nand the law of large numbers\n\begin{align} \lim_{L \rightarrow \infty} \frac{1}{L} \int_0^L h(X_t) dt = \int_{-\infty}^{\infty} h(x) \varphi_{\infty}(x) dx \qquad \textrm{a.s.,} \label{ln}\t\end{align}\nholds, if $h: \mathbb{R} \rightarrow \mathbb{R}$ is measurable and satisfies $ \int_{-\infty}^{\infty} |h(x)| \varphi_{\infty}(x) dx < \infty$.\n\n\n\smallskip\n\nIt will turn out that the explicit Euler scheme\n\begin{align}\label{e_disc} x_{k+1}^{\text{expE}, \xi}= x_k^{\text{expE}, \xi} + a(x_k^{\text{expE}, \xi}) \Delta + W_{(k+1)\Delta}-W_{k \Delta}, \quad k=0,1,\ldots, \qquad x_0^{\text{expE}, 
\\xi}= \\xi,\\end{align}\nwith\n$$ a(x) = \\alpha_1 \\cdot \\mathbbmss{1}_{(-\\infty,0)} (x) + \\alpha_2 \\cdot \\mathbbmss{1}_{[0,\\infty)} (x), \\qquad x \\in \\mathbb{R},$$ will recover these properties. (Here we also indicate the dependence on the initial value $\\xi$ in our notation.) The Euler scheme \\eqref{e_disc}\ncorresponds to a time homogenous Markov chain with transition kernel\n$$ p_{\\Delta}(x,A) = \\int_A \\frac{1}{\\sqrt{2 \\pi \\Delta}} \\exp\\left( -\\frac{1}{2\\Delta}\\big(y-(x+a(x)\\Delta \\big)^2\\right) dy, \\qquad x \\in \\mathbb{R}, \\quad A \\in \\mathcal{B}(\\mathbb{R}),$$\nand satisfies the discrete counterpart to \\eqref{ce_cont}, i.e.\\\n\\begin{align} \\mathbb{E}( x_{k+1}^{\\text{expE}, \\xi}| x_{k}^{\\text{expE}, \\xi}=x) = x + a(x)\\Delta, \\qquad k=0,1,\\ldots, \\qquad x \\in \\mathbb{R}. \\label{ce_disc} \\end{align}\n\nNow, we will prove the existence of a unique stationary distribution for the Euler scheme. In particular, due to the discontinuity at zero, the following Proposition \\ref{ergod_prop} is not covered by the standard references as e.g.\\ \\cite{msh2002} and \\cite{gr1996} for Euler-type discretizations of ergodic SDEs.\nNote that the long time properties of \\eqref{e_disc} have also been heuristically studied in \\cite{Simonsen2013}.\n\nHowever, we can easily verify that $V(x)=e^{ \\tau |x|}$, $x \\in \\mathbb{R}$, is an appropriate Lyapunov function for the above Markov chain, if $\\tau>0$ is sufficiently small.\nThis is a direct consequence of the well known form of the moment generating function for the folded normal distribution, i.e.\\ \n\\begin{align}\\label{folded} {\\mathbb E} e^{\\tau | \\mu + \\nu W_1|}=e^{\\frac{\\nu^2 \\tau^2}{2}+\\mu \\tau}\\left[1-\\Phi\\left(-\\mu\/\\nu-\\nu \\tau \\right) \\right]+\ne^{\\frac{\\nu^2 \\tau^2}{2}-\\mu \\tau}\\left[1-\\Phi\\left(\\mu\/\\nu-\\nu \\tau \\right) \\right], \\qquad \\tau \\in \\mathbb{R}, \\end{align}\nwhere $\\Phi$ is the distribution function of the 
standard normal distribution and $\\mu \\in \\mathbb{R}$, $\\nu >0$.\nUsing \\eqref{folded} with $\\mu=x+a(x)\\Delta$ and $\\nu^2=\\Delta$, we obtain\n\\begin{align*} {\\mathbb{E}}\\big( V (x_{k+1}^{\\text{expE}, \\xi}) | x_k^{\\text{expE}, \\xi} = x \\big) & \\leq e^{ \\Delta \\tau\\left(\\frac{\\tau}{2} + |\\alpha_2| \\right)} + e^{\\Delta \\tau \\left( \\frac{\\tau}{2} - |\\alpha_2| \\right) } e^{ \\tau x}, \\,\\,\\,\\, \\qquad x \\geq 0, \\\\\n{\\mathbb{E}}\\big( V (x_{k+1}^{\\text{expE}, \\xi}) | x_k^{\\text{expE}, \\xi} = x \\big) & \\leq e^{ \\Delta \\tau\\left(\\frac{\\tau}{2} + |\\alpha_1| \\right)} + e^{\\Delta \\tau \\left( \\frac{\\tau}{2} - |\\alpha_1|\\right) } e^{-\\tau x}, \\qquad x < 0.\n\\end{align*}\nSo, we have\n\\begin{align*} {\\mathbb{E}}\\big( V (x_{k+1}^{\\text{expE}, \\xi}) | x_k^{\\text{expE}, \\xi} = x \\big) & \\leq e^{ \\Delta \\tau\\left(\\frac{\\tau}{2} + \\max \\{ |\\alpha_1|, |\\alpha_2| \\}\\right)} + e^{\\Delta \\tau \\left( \\frac{\\tau}{2} - \\min\\{ |\\alpha_1|, |\\alpha_2| \\} \\right) } e^{ \\tau |x|}, \\qquad x \\in \\mathbb{R},\n\\end{align*}\nand choosing $\\tau < 2\\min\\{ |\\alpha_1|, |\\alpha_2| \\}$ gives the desired property \n$$ {\\mathbb{E}}\\big( V (x_{k+1}^{\\text{expE}, \\xi}) | x_k^{\\text{expE}, \\xi} = x \\big) \\leq C + \\gamma V(x), \\qquad x \\in \\mathbb{R},\n$$\nwith $C>0$, $\\gamma \\in (0,1)$. Since the transition kernel is Gaussian, an application of the quantitative Harris Theorem (see e.g. chapter 15 in \\cite{Tweedie} or Theorem 3.15 (and the following example) in \\cite{eberle}) yields the following geometric ergodicity result:\n\n\n\\begin{proposition}\\label{ergod_prop}\nLet $\\alpha_1>0>\\alpha_2$ and $\\Delta>0$ be fixed. Then, the Euler scheme \\eqref{e_disc} admits a unique stationary distribution $\\mu_{\\Delta}$, which is independent of the initial value $\\xi$. 
Moreover, \nthere exist $\beta_{\Delta} \in (0,1)$ and constants $\mathcal{M}_{\Delta}(\xi), \xi \in \mathbb{R},$ such that\n$$ \sup_{A \in \mathcal{B}(\mathbb{R})} \left| \mathbb{P}( x_{k}^{\textrm{expE}, \xi} \in A) - \mu_{\Delta}(A) \right| \leq \mathcal{M}_{\Delta}(\xi) \cdot \beta_{\Delta}^k, \qquad k \geq 1.$$\n\end{proposition}\n\nChoosing $A=(-\infty,y]$, we obtain in particular the counterpart to \eqref{conv_cont}, i.e.\\\n\begin{align} \label{conv_disc}\n \lim_{k \rightarrow \infty} {\mathbb{P}}(x_k^{\text{expE}, \xi} \leq y) = \mu_{\Delta}((-\infty,y]), \qquad y \in \mathbb{R}. \n \end{align}\n \n \nNote that the limit distribution is independent of the initial value, as for the underlying SDE. \n\n\smallskip\n\nFinally, an ergodic theorem such as Corollary 2.5 in \cite{eberle} also yields the discrete counterpart to the law of large numbers \eqref{ln}: We have\n\begin{align} \label{dln}\n\lim_{L \rightarrow \infty} \frac{1}{L}\sum_{k=1}^L h(x_k^{\text{expE}, \xi}) = \int_{-\infty}^{\infty} h(x) \mu_{\Delta}(dx)\t\qquad \textrm{a.s.,}\n\end{align}\n for all measurable $h: \mathbb{R} \rightarrow \mathbb{R}$ such that $\int_{-\infty}^{\infty} |h(x)| \mu_{\Delta}(dx) < \infty.$\n\n\n\bigskip\n\n\n\section{Simulation Studies} \label{sec:SimStudies}\n\n\nThis section is concerned with the numerical investigation of SDEs of type \eqref{eq:SDE}. \nFor the remainder, we will choose $T=1$, $M=10^5$, $N=2^{14}$ and $n=2^{\tilde{n}}$ with $\tilde{n} \in \{4,\ldots,10\}$ (unless otherwise mentioned).\nWe then calculate the corresponding Euler approximation and the empirical RMSE $e_{\tt emp}(n)$. 
For simplicity, we omit the upper index indicating that the approximation is based on the Euler scheme.\nThe empirical convergence rate is given by the negative slope of the regression line, which we obtain when plotting $\tilde{n}=\log_2(n)$ versus $\log_2\left(e_{\tt emp}(n)\right)$.\nHere, we will focus on two types of drift coefficients: inward and outward pointing drift coefficients. \n\begin{definition}\nWe will call a drift coefficient $a: \mathbb{R} \rightarrow \mathbb{R}$ {\it inward pointing}, if there exists $x^* \in \mathbb{R}$ such that\n$$ a(x)>0, \quad x < x^*, \qquad a(x)<0, \quad x>x^*,$$\nand {\it outward pointing}, if there exists $x^* \in \mathbb{R}$ such that\n$$ a(x)<0, \quad x < x^*, \qquad a(x)>0, \quad x>x^*.$$\n\end{definition}\n\nOur numerical investigations are based on several additional key characteristics:\nWe consider the average number of \textit{drift changes}. As the Euler scheme for SDE \eqref{eq:SDE} is exact up to the first drift change, another quantity of interest is the \textit{number of paths with at least one drift change}.\nTo get further insight into whether some paths are really far away from the true solution, we measure the \textit{largest error} that occurs within the considered time interval (not necessarily at the final time).\nBesides the error sizes themselves, it is interesting to see what proportion of errors at final time $T$ is large, medium, or small and how this \textit{distribution of error sizes} depends on the step size.\nFurthermore, we analyze the \textit{evolution of the error over time} for a fixed step size.\nTo underline the influence of the \textit{drift direction} towards or away from the discontinuity, we generate plots of several solution sample paths. 
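The rate-estimation procedure just described (empirical RMSE against a fine reference solution driven by the same Brownian path, followed by a regression in the $\log_2$-$\log_2$ plot) can be sketched as follows. The function names, the reduced values of $M$ and $N$, and the inward pointing sign drift are illustrative assumptions, not the settings used for the tables below:

```python
import numpy as np

def sign_drift(x):          # inward pointing drift a(x) = -sign(x)
    return -np.sign(x)

def empirical_rate(a, sigma, xi, T=1.0, N=2**14, exps=(4, 5, 6, 7), M=200, seed=0):
    """Estimate the strong convergence rate of the Euler scheme:
    regress log2(e_emp(n)) on log2(n); the negative slope is the rate."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(T / N), size=(M, N))    # fine Brownian increments
    # reference solution on the fine grid (one Euler path per Monte Carlo sample)
    x_ref = np.full(M, float(xi))
    for k in range(N):
        x_ref = x_ref + a(x_ref) * (T / N) + sigma * dW[:, k]
    errs = []
    for e in exps:
        n = 2**e
        x = np.full(M, float(xi))
        dWc = dW.reshape(M, n, N // n).sum(axis=2)       # coarse increments, same path
        for k in range(n):
            x = x + a(x) * (T / n) + sigma * dWc[:, k]
        errs.append(np.sqrt(np.mean((x_ref - x) ** 2)))  # empirical RMSE e_emp(n)
    slope = np.polyfit(list(exps), np.log2(errs), 1)[0]
    return -slope

rate = empirical_rate(sign_drift, 1.0, 1.0)
```

Coupling the coarse scheme to the fine reference through summed increments ensures that both approximations see the same random input, as required above.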
We will see that the observed empirical\footnote{We use the expressions ``numerical'' and ``empirical'' rate (respectively order) of convergence synonymously.} rates of convergence heavily depend on whether the drift coefficient is \textit{inward or outward pointing}. While for the latter there is a dependence on the \textit{initial value} of the SDE, the rates in the case of an inward pointing drift coefficient seem to be independent of the initial value, in accordance with Proposition \ref{ergod_prop}.\nIn addition, we analyze how the \textit{jump height} (difference in drift values) influences the empirical convergence rate.\n\n\medskip\n\nAs representatives of the class of SDEs \eqref{eq:SDE}, we consider here the SDEs given in Table \ref{tab:overviewSDEs}.\n\n\begin{table}[htbp]\n\t\begin{minipage}{\textwidth}\n\t\t\centering\n\t\t\caption{Selection of analyzed SDEs}\n\t\t\t\t\label{tab:overviewSDEs}\n\t\t\begin{adjustbox}{max width=\textwidth}\n\t\t\t\begin{tabular}{ll}\n\t\t\t\t\textbf{Drift coefficient} & \textbf{Corresponding SDE} \\\n\t\t\t\t\hline sign & $dX_t = \sign(X_t)dt + dW_t$ \\ \n\t\t\t\t\hline minusSign & $dX_t = -\sign(X_t)dt + dW_t$ \\\n\t\t\t\t\hline 10sign & $dX_t = 10 \cdot \sign(X_t)dt + dW_t$ \\ \n\t\t\t\t\hline minus10sign & $dX_t = -10 \cdot \sign(X_t)dt + dW_t$ \\ \n\t\t\t\t\hline elementary\_minus34 & $dX_t = \left(-3\cdot \mathbbmss{1}_{(-\infty,1.4)}(X_t) + 4\cdot \mathbbmss{1}_{[1.4,\infty)}(X_t)\right)dt + dW_t$ \\\n\t\t\t\t\hline elementary4minus3 & $dX_t = \left(4\cdot \mathbbmss{1}_{(-\infty,1.4)}(X_t) - 3\cdot \mathbbmss{1}_{[1.4,\infty)}(X_t)\right)dt + dW_t$ \\\n\t\t\t\t\hline elementary\_minus0.6\_1 & $dX_t = \left(-0.6\cdot \mathbbmss{1}_{(-\infty,1.4)}(X_t) + \mathbbmss{1}_{[1.4,\infty)}(X_t)\right)dt + dW_t$ \\\n\t\t\t\t\hline elementary1minus0.6 & $dX_t = \left(\mathbbmss{1}_{(-\infty,1.4)}(X_t) - 0.6\cdot 
\\mathbbmss{1}_{[1.4,\\infty)}(X_t)\\right)dt + dW_t$ \\\\\n\t\t\t\t\\hline \n\t\t\t\\end{tabular}\n\t\t\n\t\t\\end{adjustbox}\n\t\n\t\n\t\n\t\n\t\\end{minipage}\n\t\\label{tab:OverviewAnalysedSDEs}\n\\end{table}\n\nIn the remainder of this chapter, we will present and discuss some key results of the simulation studies.\n\n\n\\subsection{Key results} \\label{subsec:keyresults}\n\n\nThe empirical convergence rates obtained by the Euler scheme are given in Table \\ref{tab:EulerMSQRates-step4-finePart} (outward pointing drift coefficients highlighted in light gray, the discontinuity in gray): \n\\begin{table}[htbp]\n\t\\begin{minipage}{\\textwidth}\n\t\\setcounter{mpfootnote}{\\value{footnote}}\n\t\\renewcommand{\\thempfootnote}{\\arabic{mpfootnote}}\n\t\\centering\n\t\\caption{Numerical Euler convergence rates}\n\t\\label{tab:EulerMSQRates-step4-finePart}%\n\t\\begin{tabular}{rcccccc}\n\t\t\\toprule\n\t\t\\textbf{Initial values} & \\textbf{-1} & \\cellcolor{lightgray} \\textbf{0} & \\textbf{1} & \\textbf{2.5} & \\textbf{3} & \\textbf{5} \\\\\n\t\t\\midrule\n\t\t\\rowcolor{verylightgray} sign & 0.69 & \\cellcolor{lightgray} 0.59 & 0.68 & 0.83 & 1.01 & \\textbf{--} \\\\\n\t\t\\rowcolor{verylightgray} 10sign & \\textbf{--}\\footnotemark[1] & \\cellcolor{lightgray} 0.25 & \\textbf{--} & \\textbf{--} & \\textbf{--} & \\textbf{--} \\\\\n\t\tminusSign & 0.81 & \\cellcolor{lightgray} 0.80 & 0.81 & 0.82 & 0.82 & 0.89 \\\\\n\t\tminus10sign & 0.91 & \\cellcolor{lightgray} 0.91 & 0.91 & 0.91 & 0.91 & 0.91 \\\\\n\t\t\\midrule\n\t\t\\textbf{Initial values} & \\textbf{0} & \\textbf{1} & \\textbf{1.2} & \\textbf{1.25} & \\cellcolor{lightgray} \\textbf{1.4} & \\textbf{2} \\\\\n\t\t\\midrule\n\t\t\\rowcolor{verylightgray} elementary\\_minus34 & 1.17 & 0.37 & 0.38 & 0.40 & \\cellcolor{lightgray} 0.39 & 0.31 \\\\\n\t\t\\rowcolor{verylightgray} elementary\\_minus0.6\\_1 & 0.75 & 0.69 & 0.69 & 0.69 & \\cellcolor{lightgray} 0.71 & 0.70 \\\\\n\t\telementary4minus3 & 0.87 & 0.87 & 0.87 & 
0.87 & \cellcolor{lightgray} 0.87 & 0.87 \\\n\t\telementary1minus0.6 & 0.81 & 0.80 & 0.80 & 0.80 & \cellcolor{lightgray} 0.80 & 0.80 \\\n\t\t\bottomrule\n\t\end{tabular}%\n\t\footnotetext[1]{errors close to machine accuracy; no empirical convergence rate calculated (see also equation \eqref{exit_prob_2})}\n\t\setcounter{footnote}{\value{mpfootnote}}\n\t\end{minipage}\n\end{table}%\n\nOur results show that \n\begin{itemize}\n \item in general, we lose the convergence order one which the Euler scheme has under standard assumptions for SDEs with additive noise, \n \item and that a crucial factor is whether the drift coefficient is inward or outward pointing: for inward pointing coefficients the guaranteed convergence order $3\/4$ is recovered, which is not always the case for outward pointing coefficients.\n\end{itemize}\n\medskip\n\nFurthermore, our numerical tests show that neither the Heun scheme nor the Platen scheme yields a different picture. In particular, convergence rates do not improve significantly, and the schemes do not yield a better resolution of the discontinuity (see Tables \ref{tab:HeunMSQRates-step4-finePart} and \ref{tab:PlatenMSQRates-step4-finePart}).\n\n\begin{table}[htbp]\n\t\centering\n\t\caption{Numerical Heun convergence rates, step size $2^{-4}$ onwards}\n\t\begin{tabular}{rrrrrrr}\n\t\t\toprule\n\t\t\textbf{Initial values} & \textbf{0} & \textbf{1} & \textbf{1.2} & \textbf{1.25} & \textbf{1.4} & \textbf{2} \\\n\t\t\midrule\n\t\telementary\_minus34 & 1.15 & 0.42 & 0.38 & 0.38 & 0.41 & 0.40 \\\n\t\telementary4minus3 & 0.77 & 0.77 & 0.77 & 0.77 & 0.77 & 0.77 \\\n\t\t\bottomrule\n\t\end{tabular}%\n\t\label{tab:HeunMSQRates-step4-finePart}%\n\end{table}%\n\begin{table}[htbp]\n\t\centering\n\t\caption{Numerical Platen convergence rates, step size $2^{-4}$ onwards}\n\t\begin{tabular}{rcccccccc}\n\t\t\toprule\n\t\t\textbf{Initial values} & \textbf{0} & \textbf{1} & \textbf{1.2} & 
\\textbf{1.25} & \\textbf{1.4} & \\textbf{2} \\\\\n\t\t\\midrule\n\t\telementary\\_minus34 & 1.22 & 0.40 & 0.40 & 0.40 & 0.42 & 0.43 \\\\\n\t\telementary4minus3 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}%\n\t\\label{tab:PlatenMSQRates-step4-finePart}%\n\\end{table}%\n\n\n\\subsection{Drift direction and initial value} \\label{subsec:InwardOutward}\n\n\nFor an outward pointing drift coefficient, the numerical convergence order even seems to depend on the initial value and the spectrum of orders obtained for different initial values is very broad with values between $0.25$ and $1.17$ (see Table \\ref{tab:EulerMSQRates-step4-finePart}). \n\nOn the other hand, for an inward pointing drift coefficient, the convergence order seems to be independent of the initial value and the spectrum of orders numerically obtained for different initial values and inward pointing drift coefficients is tight with values between $0.80$ and $0.91$ (see Table \\ref{tab:EulerMSQRates-step4-finePart}).\nThe stability of the estimates is due to the ergodicity of the SDE and the Euler scheme in this case, see Subsection \\ref{sec:ergod}.\nThe geometric convergence speed in Proposition \\ref{ergod_prop} explains \nwhy the numerical tests for inward pointing drift coefficients yield such stable estimates, independently of the initial value: $X_T^{\\tt num}$ and $x_n$ are, for a sufficiently large number of grid points $n+1$, close to their unique\nstationary distributions, which stabilizes the Monte-Carlo estimates. Also, as pointed out already above, the guaranteed convergence order $3\/4$ is recovered here.\n\nFor the above equations, the structure of the drift coefficient is directly related to the number of drift changes. An inward pointing drift coefficient results in many drift changes, while in the case of an outward pointing drift coefficient, only few drift changes occur. 
We can further observe that:\n\begin{itemize}\n \item[(i)] when starting away from the discontinuity, numerical rates for outward pointing drift coefficients are better than for inward ones;\n \item[(ii)] when starting close to the discontinuity, outward pointing drift coefficients imply worse numerical convergence rates than inward ones.\n \end{itemize}\nSo, in the latter case we obtain a positive correlation between the number of drift changes and the numerical convergence rate, which implies that frequent drift changes are not necessarily bad for the quality of the approximation -- quite the contrary seems to apply, which is surprising\nat first glance.\n\n\medskip\n\nHence, the type of monotonicity of the drift coefficient is of great importance.\nIntuitively, an inward pointing drift coefficient should lead to many drift changes, which suggests that individual drift changes are not of great importance.\nAn outward pointing drift coefficient, on the other hand, pushes the solution away from the discontinuity, implying a low number of drift changes.\n\n\subsection{Jump height}\n\n\nThe intensity of the effects related to inward and outward pointing drift coefficients depends on the jump height, i.e.\ the distance between the assigned drift values. In the case of elementary\_minus34, this distance amounts to 7, whereas it is 1.6 in the case of elementary\_minus0.6\_1. The empirical convergence rates in Table \ref{tab:EulerMSQRates-step4-finePart} show: The higher the jump height, the more pronounced are the effects described in Subsection \ref{subsec:InwardOutward}. For example, there is a difference of $0.8$ in the empirical convergence rates for elementary\_minus34 for initial values $0$ and $1$, whereas this diffe\-rence is only $0.06$ for elementary\_minus0.6\_1.\nThis phenomenon is related to a scaling property. By enlarging the drift value, the influence of the diffusive part of the SDE is weakened:\nConsider e.g. 
the SDE\n$$ dX_t = \\alpha \\sign(X_t)dt + dW_t \\label{eq:SDE X_t}$$\nwith $\\alpha \\geq 1$. Using the new variable $Y_t= \\frac{1}{\\alpha} X_t$ we have the dynamics\n$$ dY_t= \\sign(Y_t)dt + \\frac{1}{\\alpha} dW_t, $$\nwith a reduced diffusion coefficient.\n\n\n\n \n\\subsection{Case study of an inward versus outward pointing drift coefficient}\n\n\n\nIn this Subsection, we will analyze the pattern described in Subsection \\ref{subsec:InwardOutward} in more detail, using the drift coefficients elementary4minus3 and elementary\\_minus34 as examples.\n\n\\subsubsection{Drift changes}\nFigure \\ref{fig:avNumDriftChanges_ele4minus3ANDele_minus34_start1Komma4} shows the average number of drift changes for both coefficients. The behavior is in line with the intuitive understanding described above.\nHere $\\tilde{n}$ is the exponent of the dyadic step size $\\Delta=2^{-\\tilde{n}}$. Note that for step sizes $2^{-4}$ to $2^{-8}$ and elementary\\_minus34 the number of drift changes stays below $2$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.6\\linewidth]{avNumDriftChanges_ele4minus3ANDele_minus34_start1Komma4.pdf}\n\t\\caption{Average number of drift changes for $\\xi=1.4$: elementary4minus3 vs. elementary\\_minus34}\n\t\\label{fig:avNumDriftChanges_ele4minus3ANDele_minus34_start1Komma4}\n\\end{figure}\n\n\\subsubsection{Comparison of solution sample paths}\n\nFigure \\ref{fig:samplePaths_ele4minus3VSele_minus34_start1Komma4} shows 100 sample paths of the numerical reference solution ($\\Delta=2^{-14}$).
The black line represents the discontinuity in the drift coefficient.\n\n\\begin{figure}[h!]\n\t\\subfigure[\\ elementary4minus3, $\\xi=1.4$]{\\includegraphics[width=0.49\\textwidth]{samplePaths_elementary4minus3_start1Komma4_paths1-100_n11.png}}\\hfill\n\t\\subfigure[\\ elementary\\_minus34, $\\xi=1.4$]{\\includegraphics[width=0.49\\textwidth]{samplePaths_elementary_minus34_start1Komma4_paths1-100_n11.png}}\n\n\t\\caption{Comparison of solution paths: elementary4minus3 vs. elementary\\_minus34}\n\t\n\t\\label{fig:samplePaths_ele4minus3VSele_minus34_start1Komma4}\n\\end{figure}\n\n\\smallskip\n\nIn the situation of Figure \\ref{fig:samplePaths_ele4minus3VSele_minus34_start1Komma4}(b), where the solution drifts away from the discontinuity, it is of tremendous importance whether a drift change is captured by the approximation or not: the solution does not stay close to the discontinuity and thus, there are not many chances for a drift correction to take place, see Figure \\ref{fig:importance_captureDriftChange_ele_minus34_1_sample9999}.\nFor the SDE\n\\begin{align} dX_t= \\left( \\alpha_1 \\cdot \\mathbbmss{1}_{(-\\infty,0)} (X_t) + \\alpha_2 \\cdot \\mathbbmss{1}_{[0,\\infty)} (X_t) \\right) dt + dW_t, \\quad t \\geq 0, \\qquad X_0=\\xi, \\label{test_illustrate} \\end{align}\nwith $\\alpha_1<0<\\alpha_2$ and $\\xi >0$ the conditional probability $p(\\xi,\\theta,\\Delta)$ that the exact solution changes its drift over $[0,\\Delta]$ given that the approximation $x_1$ at $t=\\Delta$ has value $\\theta \\geq 0$ (and thus has not changed its drift) satisfies\n\\begin{align} p(\\xi,\\theta,\\Delta):=\\mathbb{P}\\Big{(} \\inf_{t \\in [0,\\Delta]}X_t<0 \\Big{|} X_0=\\xi, x_1 = \\theta \\Big{)}= \\exp \\left( - 2 \\frac{\\xi \\theta}{\\Delta} \\right), \\end{align}\nsee e.g. \\cite{Gobet}, page 169. 
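This closed-form expression can be checked by simulation: over a single step, the continuous (Euler-type) interpolation conditioned on its two endpoint values is a Brownian bridge, since the frozen drift drops out after conditioning. The following Python sketch (our own illustration, not from the paper's experiments; the parameter values are arbitrary) estimates the crossing probability by sampling discretized bridges from $\xi$ to $\theta$; the discretized bridge slightly undercounts crossings, so the match is only approximate.

```python
import numpy as np

rng = np.random.default_rng(1)

def crossing_prob_mc(xi, theta, delta, n_steps=1024, n_paths=5000):
    """Monte Carlo estimate of P(inf_{[0,delta]} X_t < 0 | X_0 = xi, X_delta = theta)
    via sampled Brownian bridges (the frozen drift drops out after conditioning)."""
    t = np.linspace(0.0, delta, n_steps + 1)
    dW = np.sqrt(delta / n_steps) * rng.standard_normal((n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    # bridge from xi to theta: B_t = xi + (theta - xi) t/delta + W_t - (t/delta) W_delta
    B = xi + (theta - xi) * t / delta + W - (t / delta) * W[:, -1:]
    return float(np.mean(B.min(axis=1) < 0.0))

xi, theta, delta = 0.1, 0.1, 0.25
exact = float(np.exp(-2.0 * xi * theta / delta))  # the formula above
estimate = crossing_prob_mc(xi, theta, delta)
```

For $\xi=\theta=0.1$ and $\Delta=0.25$ the exact value is $e^{-0.08}\approx 0.923$, and the bridge estimate lands slightly below it due to the discretization of the bridge.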
So the (conditional) probability of missing drift changes is not negligible and even close to one for small $\\xi$ or $\\theta$.\n\n\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.6\\linewidth]{importance_captureDriftChange_ele_minus34_1_sample9999.pdf}\n\t\\caption{Importance of capturing the drift changes for elementary\\_minus34, $\\xi=1$}\n\t\\label{fig:importance_captureDriftChange_ele_minus34_1_sample9999}\n\\end{figure}\n\n\n\\subsubsection{Largest error}\n\nThe latter observation is also reflected in the largest distance for $10^4$ sample paths between the approximation based on step size $2^{-10}$ and the numerical reference solution, see Table \\ref{tab:errorSizes_MSQ_Euler}.\nThe largest distances amount to 1.271 for elementary4minus3 and 4.508 for elementary\\_minus34. \n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{Largest and smallest Euler errors}\n\t\\begin{tabular}{rrrrrrrrrr}\n\t\t\\toprule\n\t\t\\textbf{Initial values} & \\textbf{} & \\textbf{0} & \\textbf{1} & \\textbf{1.2} & \\textbf{1.25} & \\textbf{1.4} & \\textbf{2} \\\\\n\t\t\\midrule\n\t\t\\textbf{elementary\\_minus34} & max & 0.045 & 1.331 & 2.614 & 3.087 & 4.508 & 0.335 \\\\\n\t\t& min & 0.0002 & 0.179 & 0.331 & 0.383 & 0.696 & 0.059 \\\\\n\t\t\\textbf{elementary4minus3} & max & 1.223 & 0.934 & 0.968 & 0.981 & 1.013 & 1.271 \\\\\n\t\t& min & 0.005 & 0.005 & 0.005 & 0.005 & 0.005 & 0.005 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}%\n\t\\label{tab:errorSizes_MSQ_Euler}%\n\\end{table}%\n\n\n\n\\subsubsection{Evolution of the error over time}\n\nTo gain even more insight, we compare the empirical RMSE for increasing time $t$ \nof elementary4minus3 and elementary\\_minus34 when starting in the discontinuity $\\xi=1.4$ for step sizes $2^{-4}$, $2^{-8}$ and $2^{-10}$ by plotting the base-2 logarithm of the RMSE against the time (see Figure \\ref{fig:errorOverTime_4_8_10_ele4minus3ANDele_minus34_start1Komma4}).\nWe have added in these figures the following additional 
information: If the number is not zero, the most frequent times of drift changes corresponding to the chosen step size are indicated. The number of plotted drift change times is based on the average number of drift changes over the simulated sample paths.\n\nFurthermore, if in the corresponding cases drift changes occur, we add the very first drift change (over all simulated paths) of the numerical reference solution and of the Euler schemes. These are obtained by finding the time at which the first drift change occurs for each of the $10^4$ saved paths and then taking the minimum over all those times. The time is registered as the discretization point at which a drift change was detected. The very first drift change of the reference solution is marked at a height of zero for better distinguishability. \nRMSE over time and drift change times are calculated on the basis of $10^{4}$ simulation paths.\n\n\n\\begin{figure}[htbp]\n\t\\subfigure[\\ elementary4minus3, $\\tilde{n}=4$]{\\includegraphics[width=0.45\\textwidth]{rootMSQErrorOverTime_10power5_step4_elementary4minus3_1Komma4.pdf}}\n\t\\subfigure[\\ elementary\\_minus34, $\\tilde{n}=4$]{\\includegraphics[width=0.45\\textwidth]{rootMSQErrorOverTime_10power5_step4_elementary_minus34_1Komma4.pdf}}\n\t\\subfigure[\\ elementary4minus3, $\\tilde{n}=8$]{\\includegraphics[width=0.45\\textwidth]{rootMSQErrorOverTime_10power5_step8_elementary4minus3_1Komma4.pdf}}\n\t\\subfigure[\\ elementary\\_minus34, $\\tilde{n}=8$]{\\includegraphics[width=0.45\\textwidth]{rootMSQErrorOverTime_10power5_step8_elementary_minus34_1Komma4.pdf}}\n\t\\subfigure[\\ elementary4minus3, $\\tilde{n}=10$]{\\includegraphics[width=0.45\\textwidth]{rootMSQErrorOverTime_10power5_step10_elementary4minus3_1Komma4.pdf}}\n\t\\subfigure[\\ elementary\\_minus34, $\\tilde{n}=10$]{\\includegraphics[width=0.45\\textwidth]{rootMSQErrorOverTime_10power5_step10_elementary_minus34_1Komma4.pdf}}\n\t\n\t\\caption{Comparison of the error evolution over time for
$\\xi=1.4$ for different step sizes: elementary4minus3 vs. elementary\\_minus34}\n\t\n\t\\label{fig:errorOverTime_4_8_10_ele4minus3ANDele_minus34_start1Komma4}\n\\end{figure}\t\nWe can extract from Figure \\ref{fig:errorOverTime_4_8_10_ele4minus3ANDele_minus34_start1Komma4} at least two features:\n\\begin{itemize}\n \\item[(i)] The error stays constant or even decreases over time for elementary4minus3 -- in contrast to a strong error accumulation over time for elementary\\_minus34. (Note that the ordinate has a base-2 log scale.)\n \\item[(ii)] In the inward pointing drift coefficient case, the error is smaller by several orders of magnitude than for an outward pointing drift coefficient. \n\\end{itemize}\n\n\nThis illustrates again the stabilizing effect of an inward pointing drift coefficient and the importance of capturing the first drift changes correctly in the case of an outward pointing drift coefficient.\n\n\\subsubsection{Distribution of error sizes}\n\nBesides the empirical RMSE itself, the empirical distribution of the errors at $t=T$ is of interest. The error at the final time $T$ is quantified by $\\left\\vert x_N - x_n\\right\\vert$ for step size $\\Delta=T\/n=2^{-\\tilde{n}}$. The histograms in Figure \\ref{fig:hist_10power5_ele4minus3_ele_minus34_n4810_1Komma4} are based on $M=10^4$ simulations for different step sizes and highlight again the different magnitudes of the empirical RMSE (abscissa with a base-$2$ logarithm scale).\nAnother feature, which we can extract from the histograms, is a non-negligible part of the simulated paths with an error of machine-accuracy size for elementary\\_minus34. We will discuss this feature in more detail in the next Subsection.
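The mass at machine accuracy can be reproduced directly: the Euler scheme is exact up to the time of the first drift change, so for an outward pointing coefficient and an initial value above the discontinuity, most coarse and fine Euler paths driven by the same Brownian increments coincide exactly. The following Python sketch (our own illustrative setup: discontinuity at $1.4$, initial value $2$, coarse step $2^{-4}$ against a reference with step $2^{-12}$) couples the two discretizations on a common Brownian path.

```python
import numpy as np

rng = np.random.default_rng(2)

def euler_terminal(x0, drift, dW, dt):
    """Terminal value of the Euler scheme; dW has shape (n_paths, n_steps)."""
    x = np.full(dW.shape[0], x0)
    for k in range(dW.shape[1]):
        x = x + drift(x) * dt + dW[:, k]
    return x

T, n_fine, n_coarse, n_paths, x0 = 1.0, 2**12, 2**4, 2000, 2.0
dW_fine = np.sqrt(T / n_fine) * rng.standard_normal((n_paths, n_fine))
# the coarse scheme uses the same Brownian path: aggregate the fine increments
dW_coarse = dW_fine.reshape(n_paths, n_coarse, -1).sum(axis=2)

outward = lambda x: np.where(x < 1.4, -3.0, 4.0)  # as for elementary_minus34
inward = lambda x: np.where(x < 1.4, 4.0, -3.0)   # as for elementary4minus3

err_out = np.abs(euler_terminal(x0, outward, dW_fine, T / n_fine)
                 - euler_terminal(x0, outward, dW_coarse, T / n_coarse))
err_in = np.abs(euler_terminal(x0, inward, dW_fine, T / n_fine)
                - euler_terminal(x0, inward, dW_coarse, T / n_coarse))

frac_exact_out = float(np.mean(err_out < 1e-8))  # paths where both schemes agree
frac_exact_in = float(np.mean(err_in < 1e-8))
```

For the outward pointing coefficient almost no path ever reaches the discontinuity, so the two discretizations agree up to rounding on almost all paths; for the inward pointing coefficient essentially every path does, and the errors are genuinely positive.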
\n\n\\begin{figure}[htbp]\n\t\\subfigure[\\ elementary4minus3]{\\includegraphics[width=0.49\\textwidth]{hist_10power5_elementary4minus3_n4810_1Komma4.pdf}}\\hfill\n\t\\subfigure[\\ elementary\\_minus34]{\\includegraphics[width=0.49\\textwidth]{hist_10power5_elementary_minus34_n4810_1Komma4.pdf}}\n\t\n\t\\caption{Distribution of the error at time $T$ for different step sizes for elementary4minus3 and elementary\\_minus34 with $\\xi=1.4$}\n\t\n\t\\label{fig:hist_10power5_ele4minus3_ele_minus34_n4810_1Komma4}\n\\end{figure}\n\n\\subsection{Rare events and goodness of the regression fit}\n\nIn the case of an outward pointing drift coefficient, the empirical RMSE and the linear regression estimates become unreliable, or at least questionable.\n\n\nFor initial values close to the discontinuity the observed empirical convergence orders are in some cases far away from the guaranteed $3\/4$, although the linear regression typically produces stable results, see Figure \\ref{fig:PlatenEulerRates_elementary_minus34_part1}(b). A possible explanation for this is again given by the first drift changes. When starting close to the discontinuity, the first drift changes seem to be very sensitive to the step size, which results in rather different trajectories of the Euler scheme.\n \nFurthermore, if the initial value is far away from the discontinuity, only very few drift changes occur in the underlying SDE (if at all). Hence, if the step size of the Euler scheme is sufficiently small, these changes are captured and the error drops drastically. Figure \\ref{fig:PlatenEulerRates_elementary_minus34_part1} illustrates this by comparing the regressions for an initial value $\\xi=0$, which is away from the discontinuity at $1.4$, and an initial value $\\xi=1$, which is closer to the discontinuity.
(In Figure \\ref{fig:PlatenEulerRates_elementary_minus34_part1}(a), the regression also has to deal with two different regimes.)\nNote that the Euler scheme for \\eqref{eq:SDE} is always exact up to the time of the first drift change.\n\\begin{figure}[htbp]\n\t\\subfigure[\\ $\\xi=0$]{\\includegraphics[width=0.49\\textwidth]{RMSQerrorEnd_Euler_10power5_elementary_minus34_0.pdf}}\\hfill\n\t\\subfigure[\\ $\\xi=1$]{\\includegraphics[width=0.49\\textwidth]{RMSQerrorEnd_Euler_10power5_elementary_minus34_1.pdf}}\n\n\t\n\t\\caption{Euler rates of convergence for elementary\\_minus34 for $\\xi \\in \\{0,1\\}$}\n\t\n\t\\label{fig:PlatenEulerRates_elementary_minus34_part1}\n\\end{figure}\n\nMoreover, for an outward pointing drift coefficient, the Euler scheme and the exact solution coincide with high probability, which explains e.g. the errors close to machine accuracy for the drift coefficient $\\textrm{sign}$ and the initial value $\\xi=5$.\nNote that in this setting, the number of paths with at least one drift change is even zero over all saved $10^4$ solution paths.\n\nTo explain this phenomenon, consider again the SDE\n\\begin{align} dX_t= \\left( \\alpha_1 \\cdot \\mathbbmss{1}_{(-\\infty,0)} (X_t) + \\alpha_2 \\cdot \\mathbbmss{1}_{[0,\\infty)} (X_t) \\right) dt + dW_t, \\quad t \\geq 0, \\qquad X_0=\\xi, \\label{test_illustrate_2} \\end{align}\nwith $\\alpha_1<0<\\alpha_2$. An application of formula (5.13) in Chapter 3.5.C of \\cite{ks} gives\n\\begin{align} \\label{exit_prob_2}\n\\mathbb{P} \\left( \\inf_{t \\geq 0} |X_t| >0 \\right) &= 1-e^{2 \\alpha_1 \\xi^{-} -2 \\alpha_2 \\xi^{+}}, \\qquad \\xi \\neq 0.\n\\end{align}\nNote that an initial value $\\xi \\neq 0$ is not a restriction, as we analyze the case of an initial value far away from the discontinuity.
So, for the drift values $-\\alpha_1=\\alpha_2=1$ and the initial value $\\xi=5$, the Euler scheme is exact with a probability of at least $1-e^{-10} \\approx 0.99995460 \\ldots $ \n\nTo summarize: Standard Monte Carlo simulations for testing convergence rates seem to be unreliable in the case of outward pointing drift coefficients. \nNo stable asymptotic regime seems to be reached by our estimators. Smaller step sizes or a larger Monte-Carlo sample might be a remedy for this problem, similar to \\cite{arnulf}, where moment explosions of the Euler scheme for SDEs with superlinear coefficients are observed in a numerically asymptotic setting. But this is beyond the scope of the present manuscript.\n\n\n\\section{The Euler scheme for the Atlas model}\\label{sec:finance}\n\n\nIn this section, we will use the Euler scheme to simulate the so-called Atlas model, which is a particular first-order market model \\cite{Banner.2005}. In such models, the asset dynamics depend\non the size (measured in terms of market capitalization) of the corresponding firm, which results in an SDE model with discontinuous coefficients.\n\n\\medskip\n\n\\subsection{First-order market models} \\label{subsec:FO-model}\nA \\textit{first-order model} \\cite{Banner.2005} is defined as follows:\nLet $\\gamma, g_1,...,g_d \\in \\mathbb{R}$ and $\\sigma_1,...,\\sigma_d \\in (0, \\infty)$ such that\n\\begin{align*}\n\tg_1<0, \\quad g_1+g_2<0, \\ldots, \\quad g_1+\\cdots+g_{d-1}<0, \\quad g_1+\\cdots+g_d=0.\n\\end{align*}\nConsider now stocks for which the market capitalizations are given by $X_1,\\ldots,X_d$, where the index $ i \\in \\{1,2, \\ldots, d \\}$ indicates the name of the firm, and which follow the dynamics\n\\begin{align} \\label{eq:FOmodel}\n\td\\log X_i(t) = \\gamma_i(t) dt + \\sigma_i(t) dW_i(t), \\quad t \\in [0,\\infty),\\qquad i=1,\\ldots,d.\n\\end{align}\nHere, $W_1,...,W_d$ are independent Brownian motions and the {growth rates} $\\gamma_i:[0,\\infty) \\rightarrow \\mathbb{R}$ and
{volatilities} $\\sigma_i:[0,\\infty) \\rightarrow (0, \\infty)$ are given by\n\\begin{align} \\label{eq:FOmodelParameters}\n\t\\gamma_i(t) & = \\gamma + \\sum_{k=1}^{d} g_k \\mathbbmss{1}_{\\{r_i(t)=k\\}}, \\qquad \\quad \\sigma_i(t) = \\sum_{k=1}^{d} \\sigma_k \\mathbbmss{1}_{\\{r_i(t)=k\\}}.\n\\end{align}\nThe ranks $r_i(t)$ for the stock $X_i(t)$ at time $t$ arise from the reverse order-statistics:\n\\begin{align} \\label{eq:ranking}\n\t\\max_{1\\leq i\\leq d} X_i(t) \\mathrel={\\mathop:} X_{(1)}(t) \\geq X_{(2)}(t) \\geq \\cdots \\geq X_{(d-1)}(t) \\geq X_{(d)}(t) \\mathrel{\\mathop:}= \\min_{1 \\leq i \\leq d} X_i(t).\n\\end{align}\nTies in the ranking are resolved by giving the firm with a lower index $i$ the better ranking. So, in such a model the $k$-th largest firm is assigned a growth rate of $\\gamma + g_k$ and a volatility of $\\sigma_k$ over the whole time horizon.\n\n\\smallskip\n\nAccording to \\cite{Banner.2005}, the simplest among the first-order models is the so-called \\textit{Atlas model}, which was introduced in \\cite[Ex.\\ 5.3.3]{Fernholz.2002}.\nWithin the setting of \\eqref{eq:FOmodel} and \\eqref{eq:FOmodelParameters}, choosing\n\\begin{align} \\label{eq:AtlasParameters}\n\t\\gamma = g > 0, \\quad g_k = -g, \\,\\, k=1, \\ldots ,d-1, \\quad g_d = (d-1)g \\quad \\text{and} \\quad \\sigma_i(t) = \\sigma > 0, \\,\\, i=1, \\ldots, d,\n\\end{align}\nleads to the Atlas model. 
Here, only the smallest stock in the market -- called the {Atlas stock} -- has a nonzero, and in fact positive, growth rate (for its log-dynamics).\n\n\nBy setting $Y_i(t) \\mathrel{\\mathop:}= \\log X_i(t)$, $i=1,...,d$, and plugging the Atlas parameters \\eqref{eq:AtlasParameters} into our first-order model \\eqref{eq:FOmodel} -- \\eqref{eq:FOmodelParameters}, we obtain the Atlas model in compact form as\n\\begin{align} \\label{eq:AtlasModel}\n\td Y_i(t) &= (d \\cdot g) \\mathbbmss{1}_{\\{r_i(t)=d\\}} dt + \\sigma dW_i(t), \\quad i=1,...,d.\n\\end{align}\nAs stated in \\cite[Prop.\\ 2.3]{Banner.2005}, the solution of \\eqref{eq:AtlasModel} satisfies the ergodic relation\n\t\\begin{align} \\label{eq:longTermAtlas}\n\t\t \\lim\\limits_{T \\rightarrow \\infty} \\frac{1}{T} \\int_{0}^{T} \\mathbbmss{1}_{\\{r_i(t)=k\\}} dt = \\frac{1}{d} \\quad \\textrm{a.s.}, \\qquad i,k=1, \\ldots, d, \n\t\\end{align}\ni.e., asymptotically each stock in the market spends approximately the same amount of time at each rank.
Similar ergodic relations also hold for general first-order market models.\n\n\n\\subsection{Numerical results}\n\n\nFor simulations of the Atlas model and of general first-order models one has to rely on discretization schemes such as the Euler method.\nIn this subsection, we test whether the Euler scheme is able to recover the long time behavior \\eqref{eq:longTermAtlas}, i.e.,\nwhether the discrete occupation rates \n$$ \\frac{1}{T} \\sum_{\\ell=1}^{T\/\\Delta} \\mathbbmss{1}_{\\{\\widehat{r}_i(\\ell \\Delta)=k\\}}, \\qquad i,k=1, \\ldots, d, $$\nwhere $\\widehat{r}_i$ is the discretized counterpart of \\eqref{eq:ranking} based on the Euler scheme and $T\/\\Delta \\in \\mathbb{N}$, converge to the analytical value.\n\n\nHere, we consider a three-dimensional model with initial log-capitalizations\n${Y(0)=[3.4, 4.1, 5.7]}$ and $\\widetilde{Y}(0)=[1.2, 3.5, 10.8]$, $\\gamma=0.1$ as market drift and $\\sigma=0.09$ as market volatility\\footnote{The market parameters are inspired by parameters from A.\\ Banner's (INTECH Investment Technologies LLC, Princeton) presentation on ``Equity Market Stability'' given at the WCMF6 conference, Santa Barbara, 2014.}.\nTable \\ref{tab:LTrankings} presents the discrete occupation rates (averaged over $M=10^3$ repetitions) for $\\Delta=2^{-14}$ and diffe\\-rent values of $T$, as well as the sum of the squared deviations from the analytical asymptotic occupation rate.
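A reduced version of this experiment fits in a few lines of Python. The sketch below uses our own toy parameters, chosen so that the ranks mix quickly (they are not the market parameters used for Table \ref{tab:LTrankings}), and for brevity it only tracks the occupation rates of the bottom rank.

```python
import numpy as np

def bottom_rank_occupation(Y0, g, sigma, T, dt, rng):
    """Euler scheme for the Atlas log-dynamics dY_i = d*g*1{r_i = d} dt + sigma dW_i,
    returning the fraction of time each firm spends at the bottom rank."""
    d = len(Y0)
    n_steps = int(round(T / dt))
    Y = np.array(Y0, dtype=float)
    time_at_bottom = np.zeros(d)
    for _ in range(n_steps):
        # only the current Atlas stock gets a drift; np.argmin resolves ties
        # in favor of the lower index, matching the tie rule of the model
        drift = np.zeros(d)
        drift[np.argmin(Y)] = d * g
        Y = Y + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(d)
        time_at_bottom[np.argmin(Y)] += dt
    return time_at_bottom / T

rng = np.random.default_rng(3)
rates = np.mean([bottom_rank_occupation([0.0, 0.02, 0.04], g=0.5, sigma=0.5,
                                        T=100.0, dt=2.0**-7, rng=rng)
                 for _ in range(10)], axis=0)
```

By \eqref{eq:longTermAtlas}, each of the three rates should approach $1/d=1/3$ as the time horizon grows, and already for this moderate horizon they cluster around that value.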
\nAs hoped, the discrete occupation rates converge to the analytical asymptotic occupation rate of $1\/d=1\/3$ with increasing time horizon.\n\nFurthermore, the results suggest that less varying initial capitalizations imply that the numerical values are closer to the analytical result already for shorter time horizons, which coincides with the intuitive understanding.\nWe also simulated the above scenarios with $\\Delta=2^{-10}$ instead of $\\Delta=2^{-14}$: all occupation rates were equal with an accuracy of four digits and one third of the $90$ occupation rates differed in the fifth digit. This suggests that -- as soon as the step size is small enough -- a further refinement of the step size is no longer beneficial and the crucial simulation parameter is $T$, the endpoint of the considered time horizon.\n\n \n\\bigskip\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{Discrete occupation rates for the discretized Atlas model}\n\t\\begin{tabular}{|c|c|l|r|r|r|c|c|c|}\n\t\t\\toprule\n\t\t& \\textbf{T} & & \\multicolumn{1}{l|}{\\textbf{Firm 1}} & \\multicolumn{1}{l|}{\\textbf{Firm 2}} & \\multicolumn{1}{l|}{\\textbf{Firm 3}} & \\multicolumn{3}{c|}{\\textbf{Quadratic deviations}} \\\\\n\t\t\\midrule\n\t\t\\multirow{14}[10]{*}{$Y(0)$} & \\multirow{3}[2]{*}{\\textbf{100}} & \\textbf{Rank 1} & 0.2911 & 0.2895 & 0.4194 & \\multirow{3}[2]{*}{0.0030} & \\multirow{3}[2]{*}{0.0031} & \\multirow{3}[2]{*}{0.0111} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3425 & 0.3662 & 0.2913 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3664 & 0.3443 & 0.2892 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{250}} & \\textbf{Rank 1} & 0.3156 & 0.3161 & 0.3683 & \\multirow{3}[1]{*}{0.0005} & \\multirow{3}[1]{*}{0.0005} & \\multirow{3}[1]{*}{0.0018} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3375 & 0.3463 & 0.3162 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3469 & 0.3376 & 0.3155 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{500}} & \\textbf{Rank 1} & 0.3238 & 0.3246 & 0.3517
& \\multirow{3}[1]{*}{0.0001} & \\multirow{3}[1]{*}{0.0001} & \\multirow{3}[1]{*}{0.0005} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3357 & 0.3400 & 0.3243 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3406 & 0.3354 & 0.3240 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{750}} & \\textbf{Rank 1} & 0.3273 & 0.3273 & 0.3454 & \\multirow{3}[1]{*}{0.0001} & \\multirow{3}[1]{*}{0.0001} & \\multirow{3}[1]{*}{0.0002} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3347 & 0.3379 & 0.3274 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3380 & 0.3348 & 0.3272 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{1000}} & \\textbf{Rank 1} & 0.3288 & 0.3287 & 0.3425 & \\multirow{3}[1]{*}{0.0000} & \\multirow{3}[1]{*}{0.0000} & \\multirow{3}[1]{*}{0.0001} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3344 & 0.3368 & 0.3288 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3368 & 0.3345 & 0.3287 & & & \\\\\n\t\t\\midrule\n\t\t\\multicolumn{1}{|r}{} & \\multicolumn{1}{r}{} & \\multicolumn{1}{r}{} & \\multicolumn{1}{r}{} & \\multicolumn{1}{r}{} & \\multicolumn{1}{r}{} & \\multicolumn{1}{r}{} & \\multicolumn{1}{r}{} & \\\\\n\t\t\\midrule\n\t\t& \\textbf{T} & & \\multicolumn{1}{l|}{\\textbf{Firm 1}} & \\multicolumn{1}{l|}{\\textbf{Firm 2}} & \\multicolumn{1}{l|}{\\textbf{Firm 3}} & \\multicolumn{3}{c|}{\\textbf{Quadratic deviations}} \\\\\n\t\t\\midrule\n\t\t\\multirow{19}[10]{*}{$\\widetilde{Y}(0)$} & \\multirow{3}[2]{*}{\\textbf{100}} & \\textbf{Rank 1} & 0.1464 & 0.1447 & 0.7089 & \\multirow{3}[2]{*}{0.0554} & \\multirow{3}[2]{*}{0.0562} & \\multirow{3}[2]{*}{0.2116} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3883 & 0.4654 & 0.1463 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.4653 & 0.3899 & 0.1448 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{250}} & \\textbf{Rank 1} & 0.2581 & 0.2579 & 0.4840 & \\multirow{3}[1]{*}{0.0090} & \\multirow{3}[1]{*}{0.0090} & \\multirow{3}[1]{*}{0.0341} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3555 & 0.3863 & 0.2582 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3864 & 
0.3558 & 0.2577 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{500}} & \\textbf{Rank 1} & 0.2949 & 0.2956 & 0.4096 & \\multirow{3}[1]{*}{0.0023} & \\multirow{3}[1]{*}{0.0023} & \\multirow{3}[1]{*}{0.0087} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3448 & 0.3599 & 0.2953 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3603 & 0.3446 & 0.2951 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{750}} & \\textbf{Rank 1} & 0.3082 & 0.3079 & 0.3840 & \\multirow{3}[1]{*}{0.0010} & \\multirow{3}[1]{*}{0.0010} & \\multirow{3}[1]{*}{0.0038} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3407 & 0.3513 & 0.3081 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3511 & 0.3409 & 0.3080 & & & \\\\\n\t\t\\cmidrule{2-9}\n\t\t& \\multirow{3}[1]{*}{\\textbf{1000}} & \\textbf{Rank 1} & 0.3143 & 0.3142 & 0.3715 & \\multirow{3}[1]{*}{0.0006} & \\multirow{3}[1]{*}{0.0006} & \\multirow{3}[1]{*}{0.0022} \\\\\n\t\t& & \\textbf{Rank 2} & 0.3391 & 0.3467 & 0.3143 & & & \\\\\n\t\t& & \\textbf{Rank 3} & 0.3467 & 0.3391 & 0.3143 & & & \\\\\n\t\t\\bottomrule\n\t\\end{tabular}%\n\t\\label{tab:LTrankings}%\n\\end{table}%\n\n\\bigskip\n\n\\section{Conclusion and Outlook}\nWe have seen that the numerical approximation of solutions of SDEs with discontinuous drift coefficients is a challenging task, where several particularities arise. We were able to identify two main classes of discontinuous drift coefficients: outward and inward pointing drift coefficients. For the latter class, we analyzed stability properties.\nIt turned out that the main difficulty in measuring the empirical convergence rates is how to appropriately capture drift changes. For inward pointing coefficients, we obtained stable estimates, which are in accordance with the theoretical results. 
For outward pointing cases, the estimates seem to be unreliable: no stabilizing asymptotic regime seems to be reached by the estimates.\n We also tested two higher-order numerical schemes that are frequently used in settings where the coefficients are sufficiently smooth. However, neither scheme led to an improved behavior.\n\n\\bigskip\n\n\n\n\n\\section*{Acknowledgment}\nThis work was supported by the DFG grant No.\\ GO 1920\/4-1. \nPart of this work was carried out while A. Neuenkirch was visiting \nthe Facultad de Matem\\'aticas de la Universidad de Sevilla; A. Neuenkirch wishes to thank the\nDpto. Ecuaciones Diferenciales y An\\'alisis Num\\'erico \nfor its hospitality and support.\n\nThe publication of this article was funded by the Ministry of Science, Research and the Arts Baden-W\\\"urttemberg and the University of Mannheim.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n \nIn \\cite{Hardy:1920}, Hardy showed his famous inequality \n\\begin{equation}\\label{EQ:Hardy0}\n\\int_b^\\infty \\left(\\frac{\\int_b^x f(t) dt}{x}\\right)^p dx \\leq \\left(\\frac{p}{p-1}\\right)^p \\int_b^\\infty f(x)^p dx,\n\\end{equation} \nwhere $p>1$, $b>0$, and $f\\geq 0$ is a nonnegative function. The original discrete version of this inequality goes back to \\cite{Hardy1919}, making this year the 100${}^{th}$ anniversary of this topic; see \\cite{RS-book} for a historical discussion. Consequently, we do not try to provide a comprehensive historical account here but refer to \\cite{RS-book} for an extensive historical overview of the subject.\n\nIn particular, since then a lot of work has been done on Hardy inequalities in different forms and in different settings.
It is clearly impossible to give a complete overview of the literature, so let us only refer to\nbooks and surveys by Opic and Kufner \\cite{OK-book}, Davies \\cite{Davies}, \nKufner, Persson and Samko \\cite{Hardy-weighted-book,KP-Samko},\nEdmunds and Evans \\cite{DE-book}, Mazya \\cite{Mazya-Sobolev-spaces,Mazya-book}, Ghoussoub and Moradifam \\cite{GM-book}, Balinsky, Evans and Lewis \\cite{BEL-book}, and references therein. Hardy inequalities in 1D with weights have also been studied in \\cite{AM,M} in a spirit similar to our approach. Another list of conditions was given in \\cite{DV2} -- both, however, in more limited settings -- and, more recently, in \\cite{GKP10}. More precisely, \nin our Theorem \\ref{THM:Hardy1} in the special case $\\mathbb X=\\mathbb R$, the conditions $D_i$ correspond to conditions $A_i(s)$, $s=1,2,3,4$, in \\cite[Theorem 2]{GKP10}. \n\nIn this paper we show that the inequality \\eqref{EQ:Hardy0} actually holds in a much more general setting, also with rather general pairs of weights. However, the weights have to satisfy certain compatibility conditions for such inequalities to hold true, and these conditions are {\\em necessary and sufficient}.\n \nWe note that the previously known cases of our results are essentially the Euclidean ones. The examples we give on homogeneous groups and hyperbolic spaces are new, but the main result itself is of course more general, completely characterising the weights for the integral Hardy inequality on metric measure spaces.
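For orientation, the one-dimensional inequality \eqref{EQ:Hardy0} is easy to check numerically. The following Python sketch (our own illustration; the test function $f(x)=e^{-x}$, the exponent $p=2$, the endpoint $b=1$ and the truncation point are arbitrary choices) evaluates both sides by trapezoidal quadrature.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def hardy_sides(f, p=2.0, b=1.0, upper=50.0, n=200_000):
    """Evaluate both sides of Hardy's inequality
    int_b^inf ((int_b^x f(t) dt) / x)^p dx <= (p/(p-1))^p int_b^inf f(x)^p dx
    on the truncated interval [b, upper]."""
    x = np.linspace(b, upper, n)
    fx = f(x)
    # running integral F(x) = int_b^x f(t) dt via cumulative trapezoids
    F = np.concatenate(([0.0], np.cumsum((fx[1:] + fx[:-1]) / 2.0 * np.diff(x))))
    lhs = trapz((F / x) ** p, x)
    rhs = (p / (p - 1.0)) ** p * trapz(fx ** p, x)
    return lhs, rhs

lhs, rhs = hardy_sides(lambda x: np.exp(-x))
```

For this choice the right-hand side equals $(p/(p-1))^p \int_1^\infty e^{-2x}\,dx = 2e^{-2} \approx 0.27$ up to truncation, and the left-hand side comes out strictly smaller, as the inequality requires.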
\nThe importance of such results lies, in particular, in the fact that they lead to {\\em a variety of hypoelliptic Hardy-Sobolev and other inequalities, once we apply them with the weights associated to Riesz kernels for hypoelliptic operators} (see \\cite{RY-hypoelliptic}).\n \n \n More specifically, we consider metric spaces $(\\mathbb X,d)$ with a Borel measure $dx$ allowing for the following {\\em polar decomposition} at $a\\in{\\mathbb X}$: we assume that there is a locally integrable function $\\lambda \\in L^1_{loc}$ such that for all $f\\in L^1(\\mathbb X)$ we have\n \\begin{equation}\\label{EQ:polar}\n \\int_{\\mathbb X}f(x)dx= \\int_0^{\\infty}\\int_{\\Sigma_r} f(r,\\omega) \\lambda(r,\\omega) d\\omega_r dr,\n \\end{equation}\n for the sets $\\Sigma_r=\\{x\\in\\mathbb X: d(x,a)=r\\}$\n with a measure on it denoted by $d\\omega=d\\omega_r$.\n The condition \\eqref{EQ:polar} is rather general since we allow the function $\\lambda$ to depend on the whole variable $x=(r,\\omega)$. \n\nThe reason to assume \\eqref{EQ:polar} is that, since $\\mathbb X$ does not have to have a differentiable structure, the function $\\lambda(r,\\omega)$ cannot in general be obtained as the Jacobian of the polar change of coordinates. However, if such a differentiable structure exists on $\\mathbb X$, the condition \\eqref{EQ:polar} can be obtained as the standard polar decomposition formula. \nIn particular, let us give several examples of $\\mathbb X$ for which the condition \\eqref{EQ:polar} is satisfied with different expressions for $ \\lambda (r,\\omega)$:\n\n\\begin{itemize}\n\\item[(I)] Euclidean space ${\\mathbb R}^n$: $ \\lambda (r,\\omega)= {r}^{n-1}.$\n\\item[(II)] Homogeneous groups: $ \\lambda (r,\\omega)= {r}^{Q-1}$, where $Q$ is the homogeneous dimension of the group.
Such groups have been consistently developed by Folland and Stein \\cite{FS-Hardy}, see also an up-to-date exposition in \\cite{FR}.\n\\item[(III)] Hyperbolic spaces $\\mathbb H^n$: $\\lambda(r,\\omega)=(\\sinh {r})^{n-1}$.\n\\item[(IV)] Cartan-Hadamard manifolds: Let $K_M$ be the sectional curvature on $(M, g).$ A Riemannian manifold $(M, g)$ is called {\\em a Cartan-Hadamard manifold} if it is complete, simply connected and has non-positive sectional curvature, i.e., the sectional curvature $K_M\\le 0$ along each plane section at each point of $M$. Let us fix a point $a\\in M$ and denote by \n$\\rho(x)=d(x,a)$ the geodesic distance from $x$ to $a$ on $M$. The exponential map ${\\rm exp}_a :T_a M \\to M$ is a diffeomorphism, see e.g. Helgason \\cite{DV3}. Let $J(\\rho,\\omega)$ be the density function on $M$, see e.g. \\cite{DV1}. Then we have the following polar decomposition: \n$$\n\\int_M f(x) dx=\\int_0^{\\infty}\\int_{\\mathbb S^{n-1}}f({\\rm exp}_{a}(\\rho \\omega))J(\\rho,\\omega) \\rho^{n-1}d\\rho d\\omega,\n$$\nso that we have \\eqref{EQ:polar} with $\\lambda(\\rho,\\omega)= J(\\rho,\\omega) \\rho^{n-1}.$ \n \\item[(V)] Complete manifolds: Let $M$ be a complete manifold. Let $p\\in M$ and let $C(p)$ denote the cut locus of $p$. Let $D_{p}:=M\\backslash C(p)$ and $S(p;r):=\\{x\\in M_{p}:|x|=r\\}$, where $|\\cdot|$ is the Riemannian length and $M_{p}$ stands for the tangent space at $p$. Then for any $p\\in M$ and any integrable function $f$ on $M$ we have (e.g. see \\cite[Formula III.3.5, P.123]{Cha06}) the polar decomposition\n \\begin{equation}\\label{pol_decom_manif}\n \\int_{M}f dV=\\int_{0}^{+\\infty}dr\\int_{r^{-1}S(p;r)\\cap D_{p}}f(\\exp r\\xi)\\sqrt{g}(r;\\xi)d\\mu_{p}(\\xi)\n \\end{equation}\n for some function $\\sqrt{g}$ on $D_{p}$, where $r^{-1}S(p,r)\\cap D_{p}$ is the subset of $S_{p}$ obtained by dividing each of the elements of $S(p,r)\\cap D_{p}$ by $r$, and $S_{p}:=S(p;1)$.
Here $d\\mu_{p}(\\xi)$ is the Riemannian measure on $S_{p}$ induced by the Euclidean Lebesgue measure on $M_{p}$. We refer to \\cite{Cha06}, \\cite[Chapter 4]{Li12} and \\cite[Chapter 1, Paragraph 12]{CLN06} for more details on this decomposition.\n\\end{itemize} \n\n\n\n Throughout this paper, by $ A\\approx B$ we will always mean that the expressions $A$ and $B$ are equivalent.\n \n\n\\section{Main results}\n\nWe denote by $B(a,r)$ the ball in $\\mathbb X$ with centre $a$ and radius $r$, i.e.\n$$B(a,r):= \\{x\\in\\mathbb X : d(x,a)<r\\}.$$\n\n\\begin{thm}\\label{THM:Hardy1}\nLet $1<p\\leq q<\\infty$ and let $s>0$. Let $\\mathbb X $ be a metric measure space with a polar decomposition \\eqref{EQ:polar} at $a$. \nLet $u,v> 0$ be measurable functions positive a.e. in $\\mathbb X$ such that $u\\in L^1(\\mathbb X\\backslash \\{a\\})$ and $v^{1-p'}\\in L^1_{loc}(\\mathbb X)$. Denote\n\\begin{align}\nU(x):= { \\int_{\\mathbb X\\backslash{B(a,|x|_a )}} u(y) dy} \\nonumber\n\\end{align} \nand \n\\begin{align} \nV(x):= \\int_{B(a,|x|_a )}v^{1-p'}(y)dy\\nonumber. \n\\end{align}\nThen the inequality\n\\begin{equation}\\label{EQ:Hardy1}\n\\bigg(\\int_\\mathbb X\\bigg(\\int_{B(a,\\vert x \\vert_a)}\\vert f(y) \\vert dy\\bigg)^q u(x)dx\\bigg)^\\frac{1}{q}\\le C\\bigg\\{\\int_{\\mathbb X} {\\vert f(x) \\vert}^pv(x)dx\\bigg\\}^{\\frac1p}\n\\end{equation}\nholds for all measurable functions $f:\\mathbb X\\to{\\mathbb C}$ if and only if any of the following equivalent conditions holds:\n\n\\begin{enumerate}\n\\item $\\mathcal D_{1} :=\\sup_{x\\not=a} \\bigg\\{U^\\frac{1}{q}(x) V^\\frac{1}{p'}(x)\\bigg\\}<\\infty.$\n\\end{enumerate}\n\n\\begin{enumerate}\\setcounter{enumi}{1}\n\\item $\\mathcal D_{2}:=\\sup_{x\\not=a} \\bigg\\{\\int_{\\mathbb X\\backslash{B(a,|x|_a )}}u(y)V^{q(\\frac{1}{p'}-s)}(y)dy\\bigg\\}^\\frac{1}{q}V^s(x)<\\infty.$\n\\item $\\mathcal D_{3}:=\\sup_{x\\not=a}\\bigg\\{\\int_{B(a,|x|_a)}u(y)V^{q(\\frac{1}{p'}+s)}(y)dy\\bigg\\}^{\\frac{1}{q}}V^{-s}(x)<\\infty $, provided that $u,v^{1-p'}\\in L^1(\\mathbb
X)$.\n\\end{enumerate}\n\n\\begin{enumerate}\\setcounter{enumi}{3}\n\\item $\\mathcal D_{4}:=\\sup_{x\\not=a}\\bigg\\{\\int_{B(a,\\vert x \\vert_a)}v^{1-p'}(y) U^{p'(\\frac{1}{q}-s)}(y)dy\\bigg\\}^\\frac{1}{p'}U^s(x)<\\infty.$ \n\n\\item $\\mathcal D_{5}:=\\sup_{x\\not=a}\\bigg\\{\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a )}}v^{1-p'}(y)U^{p'(\\frac{1}{q}+s)}(y)dy\\bigg\\}^\\frac{1}{p'}U^{-s}(x)<\\infty$, provided that $u,v^{1-p'}\\in L^1(\\mathbb X)$.\n\\end{enumerate}\n\nMoreover, the constant $C$ for which \\eqref{EQ:Hardy1} holds and the quantities $\\mathcal D_{1}$--$\\mathcal D_{5}$ are related by \n\\begin{equation}\\label{EQ:constants}\n\\mathcal D_{1} \\leq C\\leq \\mathcal D_1(p')^{\\frac{1}{p'}} p^\\frac{1}{q},\n\\end{equation} \nand \n$$\\mathcal D_1 \\le \\left(\\max(1,{p'}{s})\\right)^\\frac{1}{q}\\mathcal D_2, \\;\\mathcal D_2 \\le (\\max(1,\\frac{1}{p's}))^\\frac{1}{q} \\mathcal D_1,$$ \n$$(\\frac{sp'}{1+p's})^\\frac{1}{q} \\mathcal D_3 \\le \\mathcal D_1\\le(1+sp')^\\frac{1}{q}\\mathcal D_3,$$\n$$\\mathcal D_1 \\le (\\max(1,qs))^\\frac{1}{p'} \\mathcal D_4,\\; \\mathcal D_4 \\le (\\max(1,\\frac{1}{qs}))^\\frac{1}{p'}\n\\mathcal D_1,$$ \n$$(\\frac{sq}{1+qs})^\\frac{1}{p'}\\mathcal D_5 \\le \\mathcal D_1 \\le (1+sq)^\\frac{1}{p'} \\mathcal D_5.$$\n\\end{thm}\n\nIn particular, Theorem \\ref{THM:Hardy1} extends \\eqref{EQ:Hardy0} to the setting of metric measure spaces $\\mathbb X$ with the polar decomposition \\eqref{EQ:polar}: for $p=q$ and real-valued nonnegative measurable $f\\geq 0$, inequality \\eqref{EQ:Hardy1} becomes \n$$\n\\int_\\mathbb X\\bigg(\\int_{B(a,\\vert x \\vert_a)} f(y) dy\\bigg)^p u(x)dx \\le C \\int_{\\mathbb X} {f(x)}^p v(x)dx,\n$$\nas an analogue of \\eqref{EQ:Hardy0}. 
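\n\nThe bounds \\eqref{EQ:constants} can be checked against the classical case: taking $\\mathbb X=[b,\\infty)$ with $b>0$, $a=b$, $u(x)=\\frac{1}{x^p}$, $v(x)=1$ and $p=q$, we have\n$$\nU(x)=\\int_x^{\\infty}\\frac{dy}{y^p}=\\frac{x^{1-p}}{p-1} \\quad\\textrm{ and }\\quad V(x)=\\int_b^{x}dy=x-b\\le x,\n$$\nso that, since $\\frac{1-p}{p}+\\frac{1}{p'}=0$,\n$$\n\\mathcal D_1=\\sup_{x>b}U^{\\frac{1}{p}}(x)V^{\\frac{1}{p'}}(x)\\le\\sup_{x>b}\\frac{{x}^{\\frac{1-p}{p}}{x}^{\\frac{1}{p'}}}{(p-1)^{\\frac{1}{p}}}=\\frac{1}{(p-1)^{\\frac{1}{p}}}.\n$$\nThe upper bound in \\eqref{EQ:constants} then gives\n$$\nC\\le \\mathcal D_1 (p')^{\\frac{1}{p'}}p^{\\frac{1}{p}}\\le\\bigg(\\frac{p}{p-1}\\bigg)^{\\frac{1}{p}}(p')^{\\frac{1}{p'}}=p',\n$$\nrecovering the well-known sharp constant $p'=\\frac{p}{p-1}$ of the classical one-dimensional Hardy inequality.\n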
Indeed, in this case we can take $u(x)=\\frac{1}{x^p}$, $v(x)=1$, $\\mathbb X=[b,\\infty)$, $a=b$, so that Theorem \\ref{THM:Hardy1} implies \\eqref{EQ:Hardy0}.\n\nFor the results in the case of $\\mathbb X=\\mathbb R$ we can refer to \\cite{Hardy-weighted-book, PS01}, and also to \\cite{PSW07} for inequalities for $q<p$. The following is an analogue of Theorem \\ref{THM:Hardy1} for the conjugate Hardy inequality.\n\n\\begin{thm}\\label{THM:Hardy2}\nLet $1<p\\le q<\\infty$ and $s>0$. Let $\\mathbb X $ be a metric measure space with a polar decomposition as in \\eqref{EQ:polar}. Let $u,v >0$ be measurable functions positive a.e. such that\n$u\\in L^1_{loc}(\\mathbb X)$ and $v^{1-p'}\\in L^1(\\mathbb X\\backslash \\{a\\})$. Let\n\\begin{align}\nU(x)= {\\int_{{B(a,\\vert x \\vert_a )}} u(y) dy} \\nonumber\n\\end{align} \nand \\begin{align} \nV(x)= \\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a )}v^{1-p'}(y)dy\\nonumber. \n\\end{align}\nThen the inequality\n\\begin{align}\\label{EQ:Hardycon}\n\\bigg(\\int_\\mathbb X\\bigg(\\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}\\vert f(y) \\vert dy\\bigg)^q u(x)dx\\bigg)^\\frac{1}{q}\\le C\\bigg\\{\\int_{\\mathbb X} {\\vert f(x) \\vert}^p v(x)dx\\bigg\\}^\\frac{1}{p}\n\\end{align}\nholds for all measurable functions $f$ if and only if any of the following equivalent conditions holds:\n\\begin{enumerate}\n\\item $\\mathcal D_{1}^{*} :=\\sup_{x\\not=a} \\bigg\\{U^\\frac{1}{q}(x) V^\\frac{1}{p'}(x)\\bigg\\}<\\infty$.\n\\item $\\mathcal D_{2}^{*}:=\\sup_{x\\not=a} \\bigg\\{\\int_{{B(a,\\vert x \\vert_a )}}u(y)V^{q(\\frac{1}{p'}-s)}(y)dy\\bigg\\}^\\frac{1}{q}V^s(x)<\\infty.$\n\n\\item $\\mathcal D_{3}^{*}:=\\sup_{x\\not=a}\\bigg\\{\\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}u(y)V^{q(\\frac{1}{p'}+s)}(y)dy\\bigg\\}^{\\frac{1}{q}}V^{-s}(x)<\\infty,$\nprovided that $u,v^{1-p'}\\in L^1(\\mathbb X).$\n\\item $\\mathcal D_{4}^{*}:=\\sup_{x\\not=a}\\bigg\\{\\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}v^{1-p'}(y) U^{p'(\\frac{1}{q}-s)}(y)dy\\bigg\\}^\\frac{1}{p'}U^s(x)<\\infty.$\n\\item $\\mathcal D_{5}^{*}:=\\sup_{x\\not=a}\\bigg\\{\\int_{{B(a,\\vert x \\vert_a 
)}}v^{1-p'}(y)U^{p'(\\frac{1}{q}+s)}(y)dy\\bigg\\}^\\frac{1}{p'}U^{-s}(x)<\\infty,$\nprovided that $u,v^{1-p'}\\in L^1(\\mathbb X).$\n\\end{enumerate}\n\\end{thm}\n\n\n \n\\section{Applications and examples}\n\nIn this section we will give examples of the application of Theorem \\ref{THM:Hardy1} in the settings of homogeneous groups, hyperbolic spaces, and Cartan-Hadamard manifolds.\n\n\\subsection{Homogeneous groups}\n\nLet $\\mathbb G$ be a homogeneous group of homogeneous dimension $Q$, equipped with a quasi-norm $|\\cdot|$.\nFor the general description of the setup of homogeneous groups we refer to \\cite{FS-Hardy} or \\cite{FR}. Particular examples of homogeneous groups are the Euclidean space ${\\mathbb R}^n$ (in which case $Q=n$), the Heisenberg group, as well as general stratified groups (homogeneous Carnot groups) and graded groups.\n\nIn relation to the notation of this paper, let us take $a=0$ to be the identity of the group $\\mathbb G$. We can also simplify the notation by denoting $\\vert x \\vert_a$ by $\\vert x \\vert$, which is consistent with the notation for the quasi-norm $|\\cdot|.$\n\nIf we take power weights \n$$u(x)= {\\vert x \\vert}^\\alpha \\textrm{ and } v(x)= {\\vert x \\vert }^\\beta,$$\nthen the inequality \\eqref{EQ:Hardy1} holds for $1< p \\le q <\\infty$ if and only if \n$$\n\\mathcal D_1=\\sup_{r>0} \\bigg(\\displaystyle \\sigma\\int_{r}^\\infty {\\rho}^\\alpha {\\rho}^{Q-1}d\\rho \\bigg)^\\frac{1}{q} \\bigg(\\displaystyle \\sigma \\int_{0}^r{\\rho}^{\\beta(1-p')} {\\rho}^{Q-1}d\\rho\\bigg)^\\frac{1}{p'}<\\infty,\n$$\nwhere $\\sigma$ is the area of the unit sphere in $\\mathbb G$ with respect to the quasi-norm $|\\cdot|.$ For this supremum to be well-defined we need to have $\\alpha+Q<0$\n and $\\beta(1-p')+Q>0.$\nConsequently, we have\n\\begin{multline*}\n\\mathcal D_1=\\sigma^{(\\frac1q+\\frac{1}{p'})}\\sup_{r>0}\n\\bigg(\\displaystyle\\int_{r}^\\infty{\\rho}^{\\alpha+Q-1}d\\rho\\bigg)^\\frac{1}{q}\\bigg(\\displaystyle\\int_0^{r} 
{\\rho}^{\\beta(1-p')+Q-1}d\\rho \\bigg)^\\frac{1}{p'} \\\\\n= \\sigma^{(\\frac1q+\\frac{1}{p'})}\n\\sup_{r>0} \\frac {{r}^ \\frac{\\alpha+Q}{q}}{{|\\alpha+Q|}^\\frac{1}{q}}\n\\frac{ {r}^\\frac{\\beta(1-p')+Q} {p'}}{{(\\beta(1-p')+Q)}^\\frac{1}{p'}},\n\\end{multline*} \nwhich is finite if and only if the power of $r$ is zero.\nSummarising, we obtain\n\n\\begin{cor}\\label{COR:hom}\nLet $\\mathbb G$ be a homogeneous group of homogeneous dimension $Q$, equipped with a quasi-norm $|\\cdot|$. Let $1<p\\le q<\\infty$ and $\\alpha,\\beta\\in\\mathbb R$. Then the inequality\n\\begin{equation}\\label{EQ:Hardy1hg}\n\\bigg(\\int_{\\mathbb G}\\bigg(\\int_{B(0,\\vert x \\vert)}\\vert f(y) \\vert dy\\bigg)^q {\\vert x \\vert}^\\alpha dx\\bigg)^\\frac{1}{q}\\le C\\bigg(\\int_{\\mathbb G} {\\vert f(x) \\vert}^p {\\vert x \\vert}^\\beta dx\\bigg)^\\frac{1}{p}\n\\end{equation}\nholds for all measurable functions $f:\\mathbb G\\to\\mathbb C$ if and only if $\\alpha+Q<0$, $\\beta(1-p')+Q>0$ and \n$\\frac{\\alpha+Q}{q}+\\frac{\\beta(1-p')+Q} {p'}=0.$\nMoreover, the constant $C$ for \\eqref{EQ:Hardy1hg} satisfies\n\\begin{equation}\\label{EQ:constants-hg}\n\\frac{\\sigma^{\\frac1q+\\frac{1}{p'}}}{|\\alpha+Q|^\\frac{1}{q}(\\beta(1-p')+Q)^\\frac{1}{p'}}\n\\leq C\\leq (p')^{\\frac{1}{p'}} p^\\frac{1}{q}\\frac{\\sigma^{\\frac1q+\\frac{1}{p'}}}{|\\alpha+Q|^\\frac{1}{q}(\\beta(1-p')+Q)^\\frac{1}{p'}},\n\\end{equation} \nwhere $\\sigma$ is the area of the unit sphere in $\\mathbb G$ with respect to the quasi-norm $|\\cdot|.$\n\\end{cor} \n\n \\subsection{Hyperbolic spaces} \n \n Let $\\mathbb H^n$ be the hyperbolic space of dimension $n$ and let $a\\in \\mathbb H^n$.\n Let us take the weights \n $$u(x)= (\\sinh {\\vert x \\vert_a})^\\alpha \\textrm{ and } v(x)= (\\sinh {\\vert x \\vert_a})^\\beta.$$ \nThen, passing to polar coordinates, $\\mathcal D_1$ is equivalent to\n \n $$\\mathcal D_1\\simeq \\sup_{\\vert x \\vert_a>0}\\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty(\\sinh{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q}\\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} (\\sinh{\\rho})^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}.$$\n For the integrability of the first and the second terms we need, respectively,\n$\\alpha+n-1<0$ and $ \\beta(1-p')+n>0.$\n\nLet us now analyse conditions for this supremum to be finite. 
\nFor $\\vert x \\vert_a \\gg1 $, it can be written as\n\n\\begin{multline*}\n\\sup_{\\vert x \\vert_a \\gg1} \\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty (\\exp{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q} \\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} (\\exp{\\rho})^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}\\\\\n\\simeq \\sup_{\\vert x \\vert_a \\gg1} (\\exp {\\vert x \\vert_a})^{\\bigg(\\frac{\\alpha+n-1}{q}+\\frac{\\beta(1-p')+n-1}{p'} \\bigg)},\n\\end{multline*} \nwhich is finite if and only if $\\frac{\\alpha+n-1}{q}+\\frac{\\beta(1-p')+n-1}{p'}\\le0.$\nFor $\\vert x \\vert_a\\ll 1 $, it can be written as\n\n$\\sup_{\\vert x \\vert_a\\ll 1} \\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty(\\sinh{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q} \\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} {\\rho}^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}$\n\n$\\simeq \\sup_{\\vert x \\vert_a\\ll 1} \\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^ R (\\sinh{\\rho})^{\\alpha+n-1}d\\rho +\\displaystyle\\int_{R}^\\infty(\\sinh{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q} \\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} {\\rho}^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}.$\n\nFor some small $R$ we have $\\sinh{\\rho}\\approx \\rho$ for $\\vert x \\vert_a\\le \\rho<R$. Hence, if $\\alpha+n>0$, the first factor stays bounded as $\\vert x \\vert_a\\to 0$ and no further condition is needed, while if $\\alpha+n<0$ the first factor behaves like $\\vert x \\vert_a^{\\frac{\\alpha+n}{q}}$, so that the supremum over small $\\vert x \\vert_a$ is finite if and only if $0\\le \\frac{\\alpha+n}{q}+\\frac{\\beta(1-p')+n}{p'}$.\nSummarising, we obtain\n\n\\begin{cor}\\label{COR:hyp}\nLet $\\mathbb H^n$ be the hyperbolic space of dimension $n$ and let $a\\in \\mathbb H^n$. Let $1<p\\le q<\\infty$ and $\\alpha+n-1<0$. Then the inequality \\eqref{EQ:Hardy1} with the weights $u(x)= (\\sinh {\\vert x \\vert_a})^\\alpha$ and $v(x)= (\\sinh {\\vert x \\vert_a})^\\beta$ holds:\n\\begin{itemize}\n\\item[(A)] for $\\alpha+n>0$, if and only if $\\beta(1-p')+n>0$ and $\\frac{\\alpha+n}{q}+\\frac{\\beta(1-p')+n}{p'}\\le \\frac{1}{q}+\\frac{1}{p'}$;\n\\item[(B)] for $\\alpha+n <0 $, if and only if $\\beta(1-p')+n>0$ and $0\\leq \\frac{\\alpha+n}{q}+\\frac{\\beta(1-p')+n}{p'}\\le \\frac{1}{q}+\\frac{1}{p'}$.\n\\end{itemize} \n\\end{cor} \n\n\\subsection{Cartan-Hadamard manifolds}\n\nLet $(M,g)$ be a Cartan-Hadamard manifold and assume that the sectional curvature $K_M$ is constant. In this case it is known that $J(t,\\omega)$ is a function of $t$ only. More precisely, if $K_M=-b$ for $b\\ge0$, then \n$J(t,\\omega)= 1$ if $b=0$, and $J(t,\\omega)=(\\frac{\\sinh \\sqrt{b}t}{\\sqrt{b}t})^{n-1}$ for $b>0$,\nsee e.g. \\cite{BC01-book} or \\cite[p. 
166-167]{DV1}.\n \n When $b=0$, let us take $u(x)= \\vert x \\vert_a^\\alpha$ and $v(x)= \\vert x \\vert_a^\\beta$. Then the inequality \\eqref{EQ:Hardy1} holds for $1<p\\le q<\\infty$ if and only if\n\n$\\sup_{\\vert x \\vert_a>0} \\bigg(\\displaystyle\\int_{M \\backslash B(a,\\vert x \\vert_a)}\\vert y \\vert_a^\\alpha dy \\bigg)^\\frac{1}{q} \\bigg(\\displaystyle \\int_{B(a,\\vert x \\vert_a)}\\vert y \\vert_a^{\\beta(1-p')} dy\\bigg)^\\frac{1}{p'}<\\infty$.\\\\\n\nAfter changing to the polar coordinates, this is equivalent to \n\n$\\sup_{\\vert x \\vert_a>0}\\bigg(\\int_{\\vert x \\vert_a }^\\infty {\\rho}^{\\alpha+n-1} d\\rho \\bigg)^\\frac{1}{q}\\bigg(\\int_0^{\\vert x \\vert_a} {\\rho}^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}$, \\\\\nwhich is finite if and only if the conditions of Corollary \\ref{COR:hom} hold with $Q=n$ (which is natural since the curvature is zero).\n\nWhen $b>0$, let us take $u(x)=(\\sinh \\sqrt{b}{\\vert x \\vert_a})^\\alpha$ and $v(x)=(\\sinh \\sqrt{b}{\\vert x \\vert_a})^\\beta$. Then the inequality \\eqref{EQ:Hardy1} holds for $1<p\\le q<\\infty$ if and only if\n\n$\\sup_{\\vert x \\vert_a>0} \\bigg(\\displaystyle\\int_{M \\backslash B(a,\\vert x \\vert_a)} (\\sinh \\sqrt{b}{\\vert y \\vert_a})^{\\alpha} dy \\bigg)^\\frac{1}{q} \\bigg(\\displaystyle \\int_{B(a,\\vert x \\vert_a)}(\\sinh \\sqrt{b}{\\vert y \\vert_a})^{\\beta(1-p')}dy \\bigg)^\\frac{1}{p'}<\\infty.$\\\\\n\nAfter changing to the polar coordinates, this supremum is equivalent to \n\n$\\sup_{\\vert x \\vert_a>0}(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty (\\sinh \\sqrt{b}{t})^\\alpha (\\frac{\\sinh \\sqrt{b} t}{\\sqrt{b} t})^{n-1} t^{n-1}dt )^\\frac{1}{q}$\\\\\n$\\quad\\times(\\displaystyle \\int_{0}^ {\\vert x \\vert_a} (\\sinh \\sqrt{b}{t})^{\\beta(1-p')}(\\frac{\\sinh \\sqrt{b} t}{\\sqrt{b} t})^{n-1} t^{n-1} dt )^\\frac{1}{p'}$\\\\\n\n\n$\\simeq\\sup_{\\vert x \\vert_a>0}\\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty (\\sinh \\sqrt{b}{t})^{\\alpha+n-1}dt \\bigg)^\\frac{1}{q}\\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} (\\sinh \\sqrt{b} 
t)^{\\beta(1-p')+n-1}dt \\bigg)^\\frac{1}{p'}, $\\\\\nwhich has the same conditions for finiteness as the case of the hyperbolic space in Corollary \\ref{COR:hyp} (which is also natural since it is the negative constant curvature case).\n\n\n\n\n\n\n \n\n \n \\section{Equivalence of weight conditions}\n \n In this section we prove that the quantities $\\mathcal D_{1}$--$\\mathcal D_{5}$\ninvolving the weights in Theorem \\ref{THM:Hardy1} are equivalent. However, it will be convenient to formulate it in the following slightly more general form:\n \n \\begin{thm}\\label{THM:equivalence}\n Let $\\alpha , \\beta, s >0$ and let\n $f \\in L^{1}(\\mathbb X\\backslash \\{a\\})$, $g \\in L^{1} _{loc} (\\mathbb X)$, be such that $ f,g >0 $ are positive a.e in $\\mathbb X$. Let us denote\n \\begin{align}\n F(x):= \\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}f(y)dy\\nonumber,\n \\end{align}\n and\n \\begin{align}\n G(x):=\\int_{B(a,\\vert x \\vert_a)} g(y)dy\\nonumber.\n \\end{align}\n Then the following quantities are equivalent:\n \\begin{enumerate}\n \\item$\\mathcal A_1:= \\sup_{x\\not=a}A_1(x;\\alpha,\\beta):= \\sup_{x\\not=a}F^\\alpha(x)G^\\beta(x).$\n \\item$\\mathcal A_2:=\\sup_{x\\not=a} A_2(x;\\alpha,\\beta,s):=\\sup_{x\\not=a}{\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy\\bigg)^\\alpha} G^s(x),$ provided that $G^{(\\beta-s)\/\\alpha}(y)$ makes sense.\n \\item$\\mathcal A_3:=\\sup_{x\\not=a} A_3(x;\\alpha,\\beta,s):= \\sup_{x\\not=a}\\bigg(\\int_{B(a,\\vert x \\vert_a)}g(y)F^{(\\alpha-s)\/\\beta}(y)dy\\bigg)^\\beta F^s(x),$ provided that $F^{(\\alpha-s)\/\\beta}(y)$ makes sense. 
\\item$\\mathcal A_4:=\\sup_{x\\not=a} A_4(x;\\alpha,\\beta,s):= \\sup_{x\\not=a}\\bigg(\\int_{B(a,\\vert x \\vert_a)}f(y)G^{(\\beta+s)\/\\alpha}(y)dy\\bigg)^\\alpha G^{-s}(x),$ provided that $f,g\\in L^1(\\mathbb X)$ and that $G^{-s}(x)$ makes sense.\n \\item$\\mathcal A_5:=\\sup_{x\\not=a} A_5(x;\\alpha,\\beta,s):=\\sup_{x\\not=a}\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}g(y) F^{(\\alpha+s)\/\\beta}(y) dy\\bigg)^\\beta F^{-s}(x),$\n provided $f,g\\in L^1(\\mathbb X)$ and that $F^{-s}(x)$ makes sense.\n \\end{enumerate}\nMoreover, we have the following relations between the above quantities:\n\\begin{itemize}\n\\item $\\mathcal {A}_1\\le (\\max(1,\\frac{s}{\\beta}))^\\alpha \\mathcal {A}_2 $ and $\\mathcal {A}_2\\le (\\max(1,\\frac{\\beta}{s}))^\\alpha \\mathcal {A}_1$;\n\\item $\\mathcal {A}_1\\le(\\max(1,\\frac{s}{\\alpha}))^\\beta \\mathcal {A}_3 $ and $\\mathcal {A}_3\\le (\\max(1,\\frac{\\alpha}{s}))^\\beta \\mathcal {A}_1$;\n\\item $ (\\frac{s}{\\beta+s})^\\alpha \\mathcal A_4 \\le \\mathcal A_1\\le (1+\\frac{s}{\\beta})^\\alpha \\mathcal A_4 $ and $ (\\frac{s}{\\alpha+s})^\\beta \\mathcal A_5 \\le \\mathcal A_1\\le (1+\\frac{s}{\\alpha})^\\beta \\mathcal A_5 $.\n\\end{itemize}\n\\end{thm}\n\n \\begin{proof}[Proof of Theorem \\ref{THM:equivalence}]\n $\\boxed{{\\mathcal A_1\\approx \\mathcal A_2}}$\n \n We will first consider the case $s\\le\\beta$. Then for $\\vert y \\vert_a\\geq \\vert x \\vert_a$ we have \n $ G^{(\\beta-s)\/\\alpha}(y)\\ge G^{(\\beta-s)\/\\alpha}(x)$. 
Consequently, we can estimate\n \\begin{align*}\n A_2(x;\\alpha,\\beta,s) & ={\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy\\bigg)^\\alpha} G^s(x)\\nonumber\n \\\\\n & \\ge{\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)dy\\bigg)^\\alpha G^\\beta(x)}\\nonumber\n\\\\ &\n =F^{\\alpha}(x)G^{\\beta}(x),\\nonumber\n \\end{align*}\n which implies $\\mathcal A_2 \\ge \\mathcal A_1$.\n For $s>\\beta$, let us first introduce some notation, using the polar decomposition \\eqref{EQ:polar}. First, we denote\n \\begin{align*}\n W(x) & :={\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy}\\nonumber\\\\\n & =\\int_{\\vert x \\vert_a}^{\\infty}\\int_{\\Sigma_r}f(r,\\omega) \\lambda(r,\\omega)\\widetilde{G_1}(r)^{(\\beta-s)\/\\alpha} d\\omega_r dr\\nonumber\\\\\n & = \\int_{\\vert x \\vert_a}^{\\infty}{\\widetilde{W}}(r)dr\\nonumber\\\\\n & =:\\widetilde{W_1}(\\vert x \\vert_a),\\nonumber\n \\end{align*}\n where\n$$\n \\widetilde{G_1}(r) := \\int_0^r\\int_{\\Sigma_s} g(s,\\sigma)\\lambda(s,\\sigma)d\\sigma_s ds =\\int_0^r\\widetilde{G}(s)ds,\n $$\n with $\\widetilde{G}(s):=\\int_{\\Sigma_s} g(s,\\sigma)\\lambda(s,\\sigma)d\\sigma_s$,\n and\n$$\n \\widetilde{W}(r):=\\int_{\\Sigma_r}f(r,\\omega) \\lambda(r,\\omega)\\widetilde{G_1}(r)^{(\\beta-s)\/\\alpha}d\\omega_r.\n$$\nMoreover, we denote \n$$\n \\widetilde{F_1}(r) : = \\int_r^{\\infty} \\int_{\\Sigma_s} \\lambda(s,\\sigma)f(s,\\sigma)d\\sigma_s ds=\\int_r^{\\infty}\\widetilde{F}(s)ds.\n$$\n\nUsing the function $W$ defined above, we can estimate\n \\begin{align*}\n & F^{\\alpha}(x)G^{\\beta}(x) \\\\\n & = G^{\\beta}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)G^{(s-\\beta)\/\\alpha}(y)W^{(s-\\beta)\/s}(y)W^{(\\beta-s)\/s}(y)dy\\bigg)^\\alpha\\nonumber\n \\\\\n &=G^{\\beta}(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\int_{\\Sigma_r} \\lambda(r,\\omega) 
f(r,\\omega)\\widetilde{G_1}^{(\\beta-s)\/\\alpha}(r)\\widetilde{G_1}^{(s-\\beta)\/\\alpha}(r)\\widetilde{W_1}^{(s-\\beta)\/s}(r)\\widetilde{W_1}^{(\\beta-s)\/s}(r)d\\omega_r dr\\bigg)^\\alpha\\nonumber\n \\\\ & \n \\le\\bigg(\\sup_{r>\\vert x \\vert_a}\\widetilde{G_1}^{(s-\\beta)}(r)\\widetilde{W_1}^{(s-\\beta)\\alpha\/s}(r)\\bigg)G^{\\beta}(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{W}(r) {(\\int_r^{\\infty}\\widetilde{W}(s)ds)}^{(\\beta-s)\/s}dr\\bigg)^\\alpha\\nonumber\n \\\\\n&= \\bigg(\\sup_{\\vert y \\vert _a>\\vert x \\vert_a}{G}^{s}(y){W}^{\\alpha}(y)\\bigg)^{(s-\\beta)\/s}\\bigg(\\frac{s}{\\beta}\\bigg)^{\\alpha}\nG^{\\beta}(x) W^{(\\beta\\alpha)\/s}(x)\\nonumber\n\\\\ & \n \\le {\\bigg(\\frac{s}{\\beta}\\bigg)}^{\\alpha}\\bigg(\\sup_{\\vert y \\vert_a >\\vert x \\vert_a}{G}^{s}(y){W}^{\\alpha}(y)\\bigg)^{(1-{\\beta\/s})}\n\\bigg(\\sup_{{\\vert x \\vert_a}>0} G^{s}(x) W^{\\alpha}(x)\\bigg)^{\\beta\/s}\\nonumber\n\\\\ & \\le {\\bigg(\\frac{s}{\\beta}\\bigg)}^{\\alpha}\\sup_{{\\vert x \\vert_a} >0} A_2(x;\\alpha,\\beta,s).\\nonumber\n \\end{align*} \n Therefore, we obtain\n $$\n \\mathcal A_1 \\le {\\left(\\frac{s}{\\beta}\\right)}^{\\alpha} \\mathcal A_2.\n$$\nHence, we have for every $s>0$ the inequality\n$$\n\\mathcal A_1 \\le {\\left(\\max(1,\\frac{s}{\\beta})\\right)}^{\\alpha} \\mathcal A_2.\n$$\n Conversely, we have for $s<\\beta$,\n \\begin{align*}\n & G^s(x)W^\\alpha(x) \\\\\n & =G^s(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)F^{(\\beta-s)\/\\beta}(y)F^{(s-\\beta)\/\\beta}(y)dy\\bigg)^\\alpha\\nonumber\\\\\n& =G^s(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}{\\int_{\\Sigma_r} \\lambda(r,\\omega)f(r,\\omega)}\\bigg(\\int_0^r\\int_{\\Sigma_s} \\lambda(s,\\sigma) g(s,\\sigma)ds d\\sigma_s\\bigg)^{(\\beta-s)\/\\alpha}\\nonumber\\\\\n&\\quad\\times\\widetilde{F_1}^{(\\beta-s)\/\\beta}(r)\\widetilde{F_1}^{(s-\\beta)\/\\beta}(r)drd\\omega_r \\bigg)^\\alpha\\nonumber \\\\\n & = 
G^s(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}{\\int_{\\Sigma_r} \\lambda(r,\\omega)f(r,\\omega)}\\widetilde{G_1}^{(\\beta-s)\/\\alpha}(r)\\widetilde{F_1}^{(\\beta-s)\/\\beta}(r)\\widetilde{F_1}^{(s-\\beta)\/\\beta}(r)d\\omega_r dr\\bigg)^\\alpha.\\nonumber\n\\end{align*}\n Consequently, we can estimate\n \\begin{align*}\n & G^s(x)W^\\alpha(x)\n \\\\ &\n \\le\\bigg(\\sup_{r>\\vert x \\vert_a}\\widetilde{G_1}^{(\\beta-s)\/\\alpha}(r)\\widetilde{F_1}^{(\\beta-s)\/\\beta}(r)\\bigg)^\\alpha G^{s}(x) \\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{F}(r)\\left(\\int_r^{\\infty}\\widetilde{F}(s)ds\\right)^{(s-\\beta)\/\\beta} dr\\bigg)^\\alpha\\nonumber \n \\\\ &=\\bigg(\\sup_{r>\\vert x \\vert_a}\\widetilde{G_1}^{\\beta}(r)\\widetilde{F_1}^{\\alpha}(r)\\bigg)^{(\\beta-s)\/\\beta} G^{s}(x) {\\bigg(\\frac{\\beta}{s}\\bigg)}^\\alpha \\widetilde{F_1}^{(\\alpha s)\/\\beta}(r)\\nonumber \n \\\\ &\\le\\bigg(\\sup_{\\vert y \\vert_a>\\vert x \\vert_a} G^{\\beta}(y) F^{\\alpha}(y)\\bigg)^{(\\beta-s)\/\\beta}{\\bigg(\\frac{\\beta}{s}\\bigg)}^\\alpha \\bigg(\\sup_{\\vert x \\vert_a>0}G^{\\beta}(x) F^{\\alpha}(x)\\bigg)^{s\/\\beta}\\nonumber\n\\\\ &\\le\\bigg(\\frac{\\beta}{s}\\bigg)^\\alpha \\sup_{\\vert x \\vert_a>0} A_1 (x;\\alpha,\\beta),\\nonumber\n \\end{align*}\n which gives \n $\n \\mathcal A_2 \\leq {\\left(\\frac{\\beta}{s}\\right)}^\\alpha \\mathcal A_1.\n $\n \n On the other hand, for $s\\geq\\beta$, when $\\vert y \\vert_a > \\vert x \\vert_a$ we have\n $\n G^{(\\beta-s)\/\\alpha}(y) \\le G^{(\\beta-s)\/\\alpha}(x).\n $\n \n Therefore, we can estimate\n \\begin{align*}\n A_{2}(x;\\alpha,\\beta,s) &= G^{s}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy\\bigg)^\\alpha\\nonumber\n \\\\ & \n \\le G^{s}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)dy\\bigg)^\\alpha G^{(\\beta-s)}(x)\\nonumber\n\\\\\n&=F^\\alpha(x)G^\\beta(x)\\nonumber\n\\end{align*}\n i.e. 
$\\mathcal A_2 \\le \\mathcal A_1.$ \nTherefore, we have for $s>0$, the overall estimate\n$$\n\\mathcal A_2 \\le {\\left(\\max(1,\\frac{\\beta}{s})\\right)}^{\\alpha} \\mathcal A_1.\n$$\nHence we have also shown that\n$\n \\mathcal A_1 \\approx \\mathcal A_2.\n$\n\n \n Next we observe that the proof of $\\boxed{\\mathcal A_1 \\approx \\mathcal A_3}$ follows along the same lines as that of $\\mathcal A_1 \\approx \\mathcal A_2$, where we just need to interchange the roles of $F$ and $G$.\n \n \\smallskip\n $\\boxed{\\mathcal A_1 \\approx \\mathcal A_4}$\n \n \\smallskip\n Let us denote\n \\begin{align*}\nW_{0}(x) &:= \\int_{B(a,\\vert x \\vert_a)} f(y) G^{(\\beta+s)\/\\alpha}(y)dy\\nonumber\n \\\\ &= \\int_{0}^{\\vert x \\vert_a}\\int_{\\Sigma_r} \\lambda(r,\\omega) f(r,\\omega){G}^{(\\beta+s)\/\\alpha}(r,\\omega)d\\omega_r dr\\nonumber\\\\\n\\\\ \n &=: \\int_{0}^{\\vert x \\vert_a}\\widetilde{W_0}(r)dr,\\nonumber\n\\end{align*}\n\\\\\n\\\\\nso that we can write\n $$\n A_4(x;\\alpha,\\beta,s) = G^{-s}(x)W_0^\\alpha(x).\n $$ \n We rewrite\n $A_1$ as\n \\begin{align*}\n A_1(x;\\alpha,\\beta)&=G^{\\beta}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta+s)\/\\alpha}(y)G^{-{(\\beta+s)\/\\alpha}}(y)dy\\bigg)^\\alpha\\nonumber\n \\\\\n &= G^\\beta(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\int_{\\Sigma_r} \\lambda(r,\\omega) f(r,\\omega){G}^{(\\beta+s)\/\\alpha}(r,\\omega)G^{-(\\beta+s)\/\\alpha}(r,\\omega) d\\omega_r dr\\bigg)^\\alpha\\nonumber\n \\\\\n &=G^\\beta(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{G_1}^{-(\\beta+s)\/\\alpha}(r)\\frac{d}{dr}\\bigg(\\int_0^r\\widetilde{W_0}(s)ds\\bigg)dr\\bigg)^\\alpha.\\nonumber\n \\end{align*}\n We can estimate this by \n \\begin{align*}\n & \\le G^{\\beta}(x)\\bigg(\\widetilde{G_1}^{-(\\beta+s)\/\\alpha}(\\infty){W_0}(\\infty) + \\frac{(\\beta+s)}{\\alpha}\\int_{\\vert x 
\\vert_a}^{\\infty}\\widetilde{G}(r)(\\widetilde{G_1}(r))^{\\frac{-(\\beta+s)}{\\alpha}-1}W_{0}(r)dr\\bigg)^\\alpha\\nonumber\n \\\\\n& \\le G^{\\beta}(x)\\bigg(\\sup_{\\vert y \\vert_a >\\vert x \\vert_a} G^{-s}(y) W_{0}^{\\alpha}(y)\\bigg)\\times\n\\\\ & \n\\;\\; \\times \\bigg(\\widetilde{G_{1}}^{-\\beta\/\\alpha}{(\\infty)} + \\frac{(\\beta+s)}{\\alpha}\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{G_1}^{-(\\beta\/\\alpha)-1}(r) \\frac{d}{dr}\\bigg(\\int_0^r\\widetilde{G}(s)ds\\bigg)dr\\bigg)^\\alpha\\nonumber\n\\\\ &\n \\le G^{\\beta}(x)\\sup_{\\vert y \\vert_a > 0} {A_{4}}(y;\\alpha,\\beta,s)\\bigg(\\widetilde{G_{1}}^{-\\beta\/\\alpha}{(\\infty)} + \\frac{(\\beta+s)}{\\beta} \\bigg(\\widetilde{G_1}^{-(\\beta\/\\alpha)}(\\vert x \\vert_a) -{\\widetilde{G_1}}^{-\\beta\/\\alpha}(\\infty)\\bigg)\\bigg)^\\alpha\\nonumber\n\\\\\n &= \\sup_{\\vert y \\vert_a > 0} {A_{4}}(y;\\alpha,\\beta,s) \\bigg[ \\frac{(\\beta+s)}{\\beta} + \\bigg(1-\\frac{(\\beta+s)}{\\beta}\\bigg)\\bigg(\\frac{G(x)}{G(\\infty)}\\bigg)^{\\beta\/\\alpha}\\bigg ]^\\alpha\\nonumber\n \\\\ &\n \\le \\bigg(1+\\frac{s}{\\beta}\\bigg)^\\alpha \\sup_{\\vert y \\vert_a>0} A_4(y;\\alpha,\\beta,s),\n \\end{align*}\n where the expressions like $G(\\infty)$ make sense since $g\\in L^1(\\mathbb X)$.\n Therefore, we obtain \n $$\n \\mathcal A_1\\le (1+s\/\\beta)^\\alpha \\mathcal A_4.\n $$\n To prove the opposite inequality, we assume that\n $$\n \\sup_ {\\vert x \\vert_a>0}{ A_1}(x;\\alpha, \\beta)<\\infty.\n $$\n Then we have \n \\begin{align}\n A_{4}(x;\\alpha,\\beta,s)&= G^{-s}(x)\\bigg(\\int_{B(a,\\vert x \\vert_a)} G^{(\\beta+s)\/\\alpha}(y)f(y)dy\\bigg)^\\alpha\\nonumber\n \\end{align}\n\\begin{align}\n&=G^{-s}(x)\\bigg(\\int_0^{\\vert x \\vert_a}{\\widetilde{ G_1}}^{(\\beta+s)\/\\alpha}(r) \\frac{d}{dr}\\bigg(-\\int_r^{\\infty}{\\widetilde{F}(s)}ds\\bigg)dr\\bigg)^\\alpha\\nonumber\n\\end{align}\n\\begin{align}\n&=G^{-s}(x)\\bigg({\\widetilde{ G_1}}^{(\\beta+s)\/\\alpha}(r){\\widetilde{F_1}}(r) 
\\bigg\\vert_{\\vert x \\vert_a}^0+\\frac{\\beta+s}{\\alpha}\\int_0^{\\vert x \\vert_a}{\\widetilde{F_1}}(r){\\widetilde{ G_1}}^{(\\beta+s)\/\\alpha-1}(r) \\frac{d}{dr}\\bigg(\\int_0^r{\\widetilde{G}(s)}ds\\bigg)dr\\bigg)^\\alpha\\nonumber\n\\end{align}\n\\begin{align}\n\\le G^{-s}(x)\\bigg(\\sup_{0<r<\\vert x \\vert_a}{\\widetilde{F_1}}(r){\\widetilde{ G_1}}^{\\beta\/\\alpha}(r)\\bigg)^\\alpha\\bigg(\\frac{\\beta+s}{\\alpha}\\int_0^{\\vert x \\vert_a}{\\widetilde{ G_1}}^{(s\/\\alpha)-1}(r)\\widetilde{G}(r)dr\\bigg)^\\alpha\\nonumber\n\\end{align}\n\\begin{align}\n\\le {\\left(\\frac{\\beta+s}{\\alpha}\\right)}^\\alpha\\sup_{\\vert y \\vert_a>0}G^\\beta(y) F^\\alpha(y)\\, G^{-s}(x)\\bigg(\\frac{\\alpha}{s} G^{s\/\\alpha}(x)\\bigg)^\\alpha\\nonumber\n\\end{align}\n\\begin{align}\n&= {\\left(\\frac{\\beta+s}{s}\\right)}^\\alpha \\sup_{\\vert x \\vert_a>0} A_1(x;\\alpha,\\beta),\\nonumber\n\\end{align}\nwhere we have used that $f\\in L^1(\\mathbb X)$. Hence we have proved that \n$A_1\\approx A_4.$\n\nThe proof of $\\boxed{A_1\\approx A_5}$ follows\n the same lines as that of the case $A_1\\approx A_4$ if we interchange the roles of $F$ and $G$.\n\\end{proof}\n\n\n\n\n\\section{Equivalent conditions for the Hardy inequality}\n\\label{SEC:proofH}\n\nIn this section we prove Theorem \\ref{THM:Hardy1} and also give some comments concerning Theorem \\ref{THM:Hardy2}.\nWithout loss of generality we can assume that $f\\geq 0$.\nThen we observe that if in Theorem \\ref{THM:equivalence} we take \n$$f(x)=u(x), \\; g(x)=v^{1-p'}(x), \\;\\alpha=\\frac{1}{q}, \\; \\beta=\\frac{1}{p'},$$ \nthen it follows that we have the equivalence of the quantities \n$$\n \\mathcal D_{1} \\approx \\mathcal D_{2} \\approx \\mathcal D_{3} \\approx \\mathcal D_{4} \\approx \\mathcal D_{5}.\n$$ \n So, we first assume that any one of these equivalent conditions holds true. 
In particular, $\\mathcal D_1<\\infty$, and using polar coordinates \\eqref{EQ:polar}, we have for every $R>0$ that\n \\begin{equation}\\label{EQ:D1-cond}\n \\bigg\\{\\int_R^{\\infty}\\int_{\\Sigma_r}\n \\lambda(r,\\omega)u(r,\\omega)d\\omega_r dr\\bigg\\}^\\frac{1}{q}\\bigg\\{\\int_0^{R}\\int_{\\Sigma_r}\\lambda(r,\\omega)v^{1-p'}(r,\\omega)d\\omega_r dr\\bigg\\}^\\frac{1}{p'}\\le \\mathcal D_1.\n \\end{equation}\nWe denote\n$$\n h(t):=\\bigg( \\int_0^t \\int_{\\Sigma_s} \\lambda(s,\\sigma) v^{1-p'}(s,\\sigma)ds d\\sigma_s\\bigg)^\\frac{1}{pp'},\n$$\n \\begin{align}\n \\widetilde{U}_1(t):= \\int_{\\Sigma_t} \\lambda(t,\\omega) u(t,\\omega)d\\omega_t\\nonumber,\n \\end{align} \n \\begin{align}\n F_1(s):= \\int_{\\Sigma_s} \\lambda(s,\\sigma) [f(s,\\sigma) v^{\\frac{1}{p}}(s,\\sigma)h(s)]^p d\\sigma_s\\nonumber,\n \\end{align} \n and \n \\begin{align}\n H_1(t):=\\int_0^t \\int_{\\Sigma_s} \\lambda(s,\\sigma)[v^\\frac{1}{p}(s,\\sigma)h(s)]^{-p'}d\\sigma_s ds.\\nonumber\n \\end{align}\nThen using polar coordinates \\eqref{EQ:polar}, H\\\"older's inequality, and Minkowski's inequality, the $q$-th power of the left-hand side of \\eqref{EQ:Hardy1} can be estimated as \n \\begin{align}\\label{EQ:e0}\n & \\int_\\mathbb X u(x) \\bigg(\\int_{B(a,\\vert x \\vert_a)}f(y)dy\\bigg)^qdx \\nonumber\n \\\\ &\\leq\\int_0^{\\infty}\\int_{\\Sigma_r}\\lambda(r,\\omega)u(r,\\omega)\n \\bigg( \\int_0^r \\int_{\\Sigma_s} \\lambda(s,\\sigma)[f(s, \\sigma)v^{\\frac{1}{p}}(s,\\sigma)h(s)]^pdsd\\sigma_s \\bigg)^\\frac{q}{p}\\nonumber\n \\\\\n &\\quad\\times \\bigg(\\int_0^r \\int_{\\Sigma_s} \\lambda(s,\\sigma) [v^{\\frac{1}{p}}(s,\\sigma) h(s)]^{-p'}ds d\\sigma_s \\bigg)^\\frac{q}{p'}d\\omega_r dr\\nonumber\n \\\\\n &=\\int_0^\\infty \\widetilde{U}_1(r)\\bigg(\\int_0^r F_1(s)ds\\bigg)^\\frac{q}{p}H_1^\\frac{q}{p'}(r)dr\\nonumber\n \\\\\n & \\le \\bigg(\\int_0^\\infty F_1(s) \\bigg(\\int_s^\\infty \\widetilde{U}_1(r) H_1^\\frac{q}{p'}(r)dr\\bigg)^\\frac{p}{q}ds\\bigg)^\\frac{q}{p}.\n \\end{align} \n Denoting \n$$\n 
V_1(s):= \\int_{\\Sigma_s} \\lambda(s,\\sigma) v^{1-p'}(s,\\sigma)d\\sigma_s,\n$$\n and using inequality \\eqref{EQ:D1-cond} we can estimate\n \\begin{align}\\label{EQ:e1}\n & H_1(t) =\\int_0^t \\int_{\\Sigma_r} \\lambda(r,\\sigma)[v^\\frac{1}{p}(r,\\sigma)h(r)]^{-p'}d\\sigma_r dr\\nonumber\n \\\\\n &=\\int_0^t \\int_{\\Sigma_r} \\lambda(r,\\sigma) v^{1-p'}(r,\\sigma)\\bigg( \\int_0^r \\int_{\\Sigma_{\\rho}} \\lambda(\\rho,\\omega) v^{1-p'}(\\rho,\\omega)d\\rho d\\omega_{\\rho}\\bigg)^{-\\frac{1}{p}}dr d\\sigma_r\\nonumber\n \\\\\n &=\\int_0^t V_1(r)\\bigg(\\int_0^r V_1(\\rho) d\\rho \\bigg)^{-\\frac{1}{p}} dr\\nonumber\n\\\\\n&=p'\\bigg(\\int_0^t V_1(r)dr \\bigg)^\\frac{1}{p'}\\nonumber\n\\\\\n&=p'\\bigg(\\int_0^t \\int_{\\Sigma_r} \\lambda(r,\\sigma) v^{1-p'}(r,\\sigma)dr d\\sigma_r \\bigg)^\\frac{1}{p'}\\bigg(\\int_t^\\infty \\int_{\\Sigma_r} \\lambda(r,\\sigma) u(r,\\sigma)dr d\\sigma_r \\bigg)^\\frac{1}{q}\\nonumber\n\\\\ \n&\\quad\\times \\bigg(\\int_t^\\infty \\int_{\\Sigma_r} \\lambda(r,\\sigma) u(r,\\sigma)dr d\\sigma_r \\bigg)^{-\\frac{1}{q}} \\nonumber\n\\\\\n& \\le p' \\mathcal D_1 \\bigg(\\int_t^\\infty\\widetilde{U}_1(s)ds \\bigg)^{-\\frac{1}{q}}.\n \\end{align}\n At the same time we can also estimate\n \\begin{align}\\label{EQ:e2}\n \\int_s^\\infty \\widetilde{U}_1(t) \\bigg(\\int_t^\\infty \\widetilde{U}_1(\\tau)d\\tau\\bigg)^{-\\frac{1}{p'}} dt\n &=-p \\int_s^\\infty {\\frac{d}{dt}} \\bigg(\\int_t^\\infty \\widetilde{U}_1(\\tau)d\\tau\\bigg)^ \\frac{1}{p} dt\\nonumber \n \\\\\n &=p \\left(\\int_s^\\infty \\widetilde{U}_1(t)dt \\right)^\\frac{1}{p}\\nonumber\n \\\\\n &=p \\left(\\int_s^ \\infty \\int_{\\Sigma_t} \\lambda(t,\\omega) u(t,\\omega)dt d\\omega_t\\right)^\\frac{1}{p}\\nonumber\n \\\\\n &= p \\left\\{\\left(\\int_s^ \\infty \\int_{\\Sigma_t}\\lambda(t,\\omega) u(t,\\omega)dtd\\omega_t\\right)^\\frac{1}{q} \\right.\\nonumber\n \\\\\n &\\quad\\times \\left.\\left(\\int_0^s \\int_{\\Sigma_t} 
\\lambda(t,\\omega)v^{1-p'}(t,\\omega)dtd\\omega_t\\right)^\\frac{1}{p'} \\right\\} ^\\frac{q}{p}\\nonumber\n \\\\\n &\\quad\\times \\left(\\int_0^s\\int_{\\Sigma_t} \\lambda(t,\\omega)v^{1-p'}(t,\\omega)dtd\\omega_t\\right)^{-\\frac{q}{p'p}}\\nonumber\n \\\\\n &\\le p \\mathcal D_1 ^\\frac{q}{p} h^{-q}(s), \n \\end{align}\n in view of \\eqref{EQ:D1-cond}.\n Therefore using \\eqref{EQ:e1} and \\eqref{EQ:e2} in \\eqref{EQ:e0}, we have\n \\begin{align}\n \\int_\\mathbb X u(x) \\bigg(\\int_{B(a,\\vert x \\vert_a)} f(y) dy \\bigg)^qdx\\leq \\mathcal D_1^q {p'}^\\frac{q}{p'}p \\bigg(\\int_\\mathbb X v(x) {f(x) }^p dx \\bigg)^\\frac{q}{p}.\\nonumber\n \\end{align}\n Hence, it follows that \\eqref{EQ:Hardy1} holds with\n $C\\leq \\mathcal D_1(p')^{\\frac{1}{p'}} p^\\frac{1}{q}$ proving one of the relations in \\eqref{EQ:constants}.\n \n Conversely, let us assume that inequality \\eqref{EQ:Hardy1} holds, and consider the function\n $$f(x)=v^{1-p'}(x) \\chi_{(0,t)} (\\vert x \\vert_a)$$\n for some $ t>0$, where $\\chi_{(0,t)}$ is the characteristic function of the interval $(0,t)$.\n With this function, the right hand side of \\eqref{EQ:Hardy1} takes the form \n \\begin{align}\n &\\bigg(\\int_{\\mathbb X} v(x){ \\vert f(x) \\vert }^pdx\\bigg)^\\frac{1}{p}\n = \\bigg(\\int_{\\vert x \\vert_a \\le t} v^{1-p'}(x)dx\\bigg)^\\frac{1}{p}.\\nonumber\n\\end{align}\nAt the same time, the left hand side of \\eqref{EQ:Hardy1} takes the form\n\\begin{align}\n\\bigg(\\int_\\mathbb X\\bigg(\\int_{B(a,\\vert x \\vert_a)}\\vert f(y) \\vert dy\\bigg)^q u(x)dx\\bigg)^\\frac{1}{q}\n&\\geq \\bigg(\\int_{\\vert x \\vert_a \\geq t} \\bigg(\\int_{B(a,\\vert x \\vert_a)}\\vert f(y) \\vert dy\\bigg)^q u(x)dx\\bigg)^\\frac{1}{q}\\nonumber\n\\\\\n&=\\bigg(\\int_{\\vert x \\vert_a \\geq t} \\bigg(\\int_{\\vert y \\vert_a \\leq t} v^{1-p'}(y) dy\\bigg)^q u(x)dx\\bigg)^\\frac{1}{q}\\nonumber\n\\\\\n&=\\bigg(\\int_{\\vert x \\vert_a \\geq t} u(x)dx\\bigg)^\\frac{1}{q} \\bigg(\\int_{\\vert y \\vert_a \\leq t} v^{1-p'}(y) 
dy\\bigg).\\nonumber\n\\end{align}\nAltogether the inequality \\eqref{EQ:Hardy1} takes the form\n\\begin{align}\n \\bigg(\\int_{\\vert x \\vert_a \\ge t} u(x)dx\\bigg)^\\frac{1}{q}\\bigg(\\int_{\\vert y \\vert_a \\le t} v^{1-p'}(y)dy \\bigg)\\le C \\bigg(\\int_{\\vert x \\vert_a \\le t} v^{1-p'}(x)dx\\bigg)^\\frac{1}{p},\\nonumber\n \\end{align}\n which gives \n $\\mathcal D_1\\le C$.\n Hence, we have the equivalence and the second relation in \\eqref{EQ:constants}.\n\n\\smallskip\n As for Theorem \\ref{THM:Hardy2}, \nif we take $f(x)= v^{1-p'}(x)$, $g(x)=u(x)$, $\\alpha= \\frac{1}{p'}$ and $\\beta= \\frac{1}{q}$ in Theorem \\ref{THM:equivalence}, we find that \n$$\n\\mathcal D_{1}^{*} \\approx \\mathcal D_{2}^{*} \\approx \\mathcal D_{3}^{*} \\approx \\mathcal D_{4}^{*} \\approx \\mathcal D_{5}^{*}.$$\n\nConsequently, we can show Theorem \\ref{THM:Hardy2} by an argument similar to that in Section \\ref{SEC:proofH} where Theorem \\ref{THM:Hardy1} was proved. \nWe also note that in the case of homogeneous groups, we can actually also derive it from \nTheorem \\ref{THM:Hardy1} by the involutive change of variables $x\\mapsto x^{-1}.$\n\n\\medskip \\noindent\n{\\bf Data accessibility.} No new data was collected or generated during the course of research.\n\n\\medskip \\noindent\n{\\bf Competing interests.} We have no competing interests.\n\n\\medskip \\noindent\n{\\bf Authors' contributions.} The authors contributed equally to this paper.\n\n\\medskip \\noindent\n{\\bf Acknowledgements.} The authors would like to thank Nurgissa Yessirkegenov for discussions and for checking our calculations. We would also like to thank all 5 referees of this paper for useful comments.\n\n\\medskip \\noindent\n{\\bf Funding statement.}\nThe first author was supported in part by the FWO Odysseus Project, EPSRC\ngrant EP\/R003025\/1 and by the Leverhulme Grant RPG-2017-151. 
\n\n\\medskip \\noindent\n{\\bf Ethics statement.} The work did not involve any collection of human data.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}