diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzptcg" "b/data_all_eng_slimpj/shuffled/split2/finalzzptcg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzptcg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nLow-mass starless cores are the earliest observed phase of isolated\nlow-mass star formation. They are identified via submm\ndust continuum and dense gas molecular lines, they typically contain\na few solar masses, they have sizes of approximately $0.1$ pc,\nand they may form one or a few low-mass \n(M $\\sim 1$ M$_{\\odot}$) stars.\nSeveral hundred starless cores have been\nobserved in the nearest star-forming molecular clouds and isolated Bok\nglobules. \nRecent large scale\nsurveys of nearby molecular clouds have established a remarkable connection\nbetween the Core Mass Function and the Initial Mass Function of stars\n(e.g., Motte et al. 1998, Johnstone et al. 2000), indicating the importance\nof constraining the evolution of starless cores in order\nto understand the initial conditions of disk and protostar formation.\nTheoretically, the basic core formation and evolution process is still \ndebated between a turbulent-dominated (Mac Low \\& Klessen 2004) \nor ambi-polar diffusion-dominated model (Shu et al. 1987). \nObservationally, a fundamental challenge is to determine\nthe evolutionary state of a starless core. Given a set of observations,\ncan we determine how close a starless core is to forming a protostar?\n\nOne step toward understanding the evolutionary state of a starless\ncore is to determine its physical structure ($n(r)$ cm$^{-3}$ and\n$T(r)$ K). \nRadiative transfer modeling of submillimeter dust continuum emission\nhave successfully fit the density structure with hydrostatic\nconfigurations (Bonnor-Ebert Spheres, BESs), while the calculated \ntemperature structure decreases toward\nthe center of the core due to attenuation of the ISRF (Evans et al. 2001). \nSince BESs may be parameterized in terms of their central density,\n$n_c$ cm$^{-3}$, it is natural to think that this may be the\nmain evolutionary variable for starless cores. However, detailed\nmolecular line observations have revealed that the chemical\nstructure ($X(r)$) can strongly vary among cores with similar central\ndensities. Figure 1 shows the example of L1498 and L1521E, two\nstarless cores in Taurus which have comparable $n_c$ (Tafalla et al. 2006), \nbut very different chemical structures. L1521E is centrally peaked in C$_3$S\nwith weak NH$_3$ emission while L1498 is centrally peaked in NH$_3$\nand heavily depleted in C$_3$S. These observations can be explained\nif the cores are evolving at different rates. \\textbf{In order to\ndetermine the evolutionary state of a starless core, it is\nnecessary to map the chemical structure of the core}.\n\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=9 cm]{Shirley_fig1.ps}\n\\caption{Observations of L1521E (top) and L1498 (bottom) in 850 $\\mu$m, \nC$_3$S,\nand NH$_3$ showing chemical differentiation. Data are from the ARO-GBT survey\nand Shirley et al. (2005)} \\label{fig}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Chemical Processes in Starless Cores}\n\\label{section1}\n\nThe rate at which molecules are created and destroyed differ for each\nspecies; therefore, the relative abundance of molecular species may be\nused as a chemical clock. 
A prediction that appears to be ubiquitous among\nstarless core chemodynamical models is that there exists\na class of molecules, named ``early-time'' molecules, which\nform rapidly in the cold, moderate density environments typical of nascent\nstarless cores (e.g., CO, C$_2$S, C$_3$S, SO). The early-time molecule\nabundance typically peaks within a few $10^5$ years and then decreases\ndue to various destruction mechanisms (see below). Another class of molecules,\nnamed ``late-time'' molecules, build up in abundance more slowly and\nremain in the gas phase longer at low temperatures and high densities\n(e.g., N$_2$H$^+$, NH$_3$, H$_2$D$^+$).\nIt has been proposed that observations of the abundance ratio\nof species such as [C$_2$S]\/[NH$_3$] may date the chemical maturity of\na core (Suzuki et al. 1992). Figure 1 shows an example of two cores\nwith very different chemical states in early-time and late-time\nmolecules despite having comparable central densities. \n\n\n\nThe environments of starless cores are cold ($T < 15$ K) \nand dense ($n_c > 10^4$ cm$^{-3}$); thus many gas phase species\nadsorb onto dust grains. The best example is the second most abundant\nmolecular species, CO, which freezes out of the gas at a rate of\n$(dn_{\\rm{CO}}\/dt) \\propto n_g T^{1\/2} S n_{\\rm{CO}}$,\nwhere $n_g$ is the dust grain density and\n$S$ is the sticking coefficient (Rawlings et al. 1992). \nSince the timescale for freezeout depends\non the density and temperature of the core, the amount of CO\ndepletion encodes the history of the physical structure of the\ncore. For instance, a core that evolves slowly (quasi-statically)\nwill have more CO depletion compared to a core that evolves quickly \nto the same $n_c$. Complicating factors to the simple adsorption\nmodel include competing desorption processes due to direct cosmic ray heating,\ncosmic ray photodesorption, and H$_2$ formation on grains, all of which \nmay be important in starless cores (Roberts et al. 2007). CO is a destruction agent of many\ngas phase ions (e.g., N$_2$H$^+$ and H$_2$D$^+$); therefore the abundance history\nof these species is directly related to the amount of CO freezeout\n(see Figure 2). The resulting chemical networks in heavily depleted environments\nare very different (see Vastel et al. 2006). Mapping of starless cores has revealed\na plethora of depleted species toward the dust column density peaks \n(Tafalla et al. 2006). \n\nDeuterium fractionation is an important chemical diagnostic in low-mass\ncores. At low temperatures, many chemical reactions involving HD favor\nthe formation of deuterated molecules due to the lower zero-point vibrational\nenergy of deuterated species compared to their hydrogenated counterparts. The \nclassic deuteration reaction operating in the\nenvironments of starless cores is H$_3$$^+$ + HD $\\rightarrow$\nH$_2$D$^+$ + H$_2$ + $230$ K, where the backreaction is inefficient\nat low temperatures (Vastel et al. 2006). As the density increases, the temperature\ndecreases, and species such as CO freeze out, the deuteration of\nhydrogenated species may increase by up to four orders of magnitude\nover the cosmic [D]\/[H] $\\sim 10^{-5}$ (Roberts et al. 2004). Figure 2b illustrates that \nthe observed degree of deuteration of N$_2$H$^+$\nincreases with the amount of CO depletion in the core (Crapsi et al. 2005). 
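\nTo make the freeze-out clock concrete, the rate equation above implies an e-folding depletion time $t_{dep} \\propto (n_g T^{1\/2} S)^{-1}$, i.e., inversely proportional to the density. The short Python sketch below evaluates this scaling using the commonly quoted normalisation $t_{dep} \\sim 10^9\/n_{\\rm H_2}$ yr at $T = 10$ K and $S = 1$; this normalisation is an illustrative assumption, not a value derived from the observations discussed here.\n\\begin{verbatim}\ndef t_dep_yr(n_h2, T=10.0, S=1.0):\n    # e-folding CO depletion time in years; assumes the rule-of-thumb\n    # normalisation t_dep ~ 1e9\/n_H2 yr at T = 10 K and S = 1\n    return 1e9 \/ n_h2 * (10.0 \/ T) ** 0.5 \/ S\n\nfor n in (1e4, 1e5, 1e6):  # central densities typical of starless cores\n    print('n_H2 = %.0e cm^-3 : t_dep ~ %.1e yr' % (n, t_dep_yr(n)))\n\\end{verbatim}\nAt $n_c \\sim 10^5$ cm$^{-3}$ this gives $t_{dep} \\sim 10^4$ yr, shorter than the few $10^5$ yr abundance-peak timescales quoted above, which is why heavy depletion is expected toward the densest regions of slowly evolving cores.\n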
\nSimilarly, since a deuterated species, such as NH$_2$D, \nmay be viewed as an extreme late-time molecule, a comparison between deuterated \nand early-time molecules from the ARO-GBT survey indicates an increasing ratio of\nlate-time to early-time molecules with increasing central density in\nthe core (Figure 2a). \n\nThe chemical structure of the core is also important for determining\nwhich species are good probes of the kinematical structure of\nthe core. Several species (CS, H$_2$CO, HCO$^+$, HCN)\nhave been identified as infall tracers, molecules that are moderately \noptically thick and display asymmetric, self-absorbed line profiles. \nRecent surveys have attempted to search for evidence of large \nscale infall (e.g., Sohn et al. 2007). \nFurthermore, the linewidths of \nheavy species that lack hyperfine structure, such as C$_3$S (68 amu), \nare dominated by non-thermal motions and can trace turbulent motions\nor large-scale kinematical motions along different lines-of-sight in the core.\n\nA more thorough review of the chemical processes in starless cores\nmay be found in Di Francesco et al. (2005, and references therein). \nThe comparison of early-time vs. late-time\nspecies, the amount of depletion, the amount of deuteration, and \nthe identification of large scale kinematical motions should be used together to \nelucidate the evolutionary state of a starless core. Other indicators,\nsuch as the ortho-to-para evolution of symmetric molecules with non-zero spins\nand variations in\nthe structure of the ionization fraction, should also be explored.\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=9 cm]{Shirley_fig2.ps}\n\\caption{Left: The ratio of the ``late-time'' species NH$_2$D to the ``early-time'' \nspecies C$_2$S vs. $n_c$ toward cores where both lines were detected in the \nARO-GBT survey. Right: The deuterium fraction of N$_2$H$^+$ vs. the CO depletion factor\nreported by Crapsi et al. (2005).} \\label{fig}\n\\end{center}\n\\end{figure}\n\n\n\\section{Developing an Evolutionary Sequence}\n\\label{section2} \n\nUltimately, to determine the evolutionary state of a starless core, \nwe should model the molecular line radiative transfer\nof each transition along multiple lines-of-sight\nand compare to a grid of chemodynamical models (e.g., Lee et al. 2004).\nThis process is computationally intensive, and the current generation\nof chemodynamical models has not fully explored the parameter space \nof possible conditions in nearby starless cores. \nAn alternative first step is to develop a Boolean evolutionary comparison: \na flag of $1$ (more evolved) or $0$ (less evolved) is given to a particular \nobserved property of the core if the chemical criterion is met, and the sum\nof flags represents the observed chemical maturity of the core.\nThis strategy was implemented by Crapsi et al. (2005) for a sample of\n$12$ starless cores and is being extended to the sample of $25$ cores\nwith a larger set of chemical criteria in the ARO-GBT survey (Shirley et al.\n2008, in prep.). \n\nWhile there is still much work to be done to develop a\ndetailed understanding of the chemical processes within low-mass starless cores, \nit is now possible and necessary to synthesize the chemical evolutionary indicators.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nVisual saliency patterns are the result of a number of factors, including the task being performed, user preferences and domain knowledge. 
However, existing approaches to predict saliency patterns \\cite{walther2002attentional,hadizadeh2014saliency,wang2015saliency,achanta2009saliency,sharma2012discriminative,yubing2011spatiotemporal} ignore these factors and instead learn a model specific to a single task, disregarding user preferences. \\par\n\nBased on empirical results, the human visual system is driven by both bottom-up and top-down factors \\cite{connor2004visual}. The first category (bottom-up) is entirely driven by the visual scene: humans deploy their attention towards salient, informative regions such as bright colours, unique textures or sudden movements. Bottom-up factors are typically exhibited during free viewing. In contrast, the top-down attention component, where the observer performs a task specific search, is modulated by the task at hand \\cite{modellin-search} and the subject's prior knowledge \\cite{deep-fix}. For example, when the observer is searching for people in the scene, they can selectively attend to the scene regions which are most likely to contain the targets \\cite{ einha2008task, zelinsky2008theory}. Furthermore, a subject's prior knowledge, such as the scene layout, scene categories and statistical regularities, will influence the search mechanisms and the fixations \\cite{castelhano2007initial, chaumon2008unconscious,neider2006scene}, rendering task specific visual saliency prediction a highly subjective, situational and challenging task \\cite{modellin-search, action-in-the-eye,deep-fix}, which motivates the need for a working memory \\cite{deep-fix, fernando2017tree,Fernando_2017_ICCV}. Even within groups of subjects completing the same task, unique saliency patterns are generated due to differences in the subjects' behavioural goals, expectations and domain knowledge. Ideally, this user related context information should be captured via a working memory. \\par\n\nFig. \\ref{fig:fig_1} shows the variability of the saliency maps when observers are performing action recognition and context recognition on the same image. In Fig. \\ref{fig:fig_1} (a) the observer is asked to recognise the action performed by the human in the scene, while in Fig. \\ref{fig:fig_1} (b) the saliency map is generated when the observer is searching for cars\/ trees in the scene. It is evident that there exists variability in the resultant saliency patterns, yet accurate modelling of human fixations in the application areas specified above requires task specific models. For example, semantic video search, content aware image resizing, video surveillance, and video\/ scene classification may require searching for pedestrians or other objects, or recognising human actions, depending on the task and video context. 
\\par\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{.45\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/saliency_predictions_action_rec.pdf} %\n \\caption{Human action recognition task}\n \\end{subfigure}\n \\begin{subfigure}{.45\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/saliency_predictions_context_rec.pdf} %\n \\caption{Searching for cars\/ trees}\n \\end{subfigure}\n \\caption{Variability of the saliency maps when observers are performing different tasks}\n\t\t\t\\vspace{-3mm}\n \\label{fig:fig_1}\n\\end{figure}\n\nIn recent years, motivated by the success of deep learning techniques \\cite{gammulle2017two,fernando2017soft+}, there have been several attempts to model the visual saliency of the human free viewing mechanism with the aid of deep convolutional networks \\cite{deep-fix,deep-ml, vig2014large, kummerer2014deep}. Yet, the usual approach when modelling visual saliency for task specific viewing is to hand-craft the visual features. For instance, in \\cite{modellin-search} the authors utilise features from person detectors \\cite{dalal2006human} when estimating the search for humans in the scene, while in \\cite{action-in-the-eye} the authors use features from HoG descriptors \\cite{dalal2005histograms} when searching for objects in the scene. These approaches are therefore application dependent and fail to capture the top-down attentional mechanism of humans, which is driven by factors such as a subject's prior knowledge and expectations. \\par \n\nIn this work we propose a deep learning architecture for task specific visual saliency estimation. We draw our inspiration from the recent success of Generative Adversarial Networks (GAN) \\cite{goodfellow2014generative, denton2015deep, radford2015unsupervised, salimans2016improved, zhao2016energy} for pixel-to-pixel translation tasks. We exploit the capability of the conditional GAN framework \\cite{pix2pix} for automatic learning of task specific features, in contrast to hand-crafted features that are tailored for specific applications \\cite{modellin-search, action-in-the-eye}. This results in a unified, simpler architecture enabling direct application to a variety of tasks. The conditional nature of the proposed architecture enables us to learn one network for all the tasks of interest, instead of learning separate networks for each of the tasks. Apart from the advantage of a simpler learning process, this enables learning semantic correspondences among different tasks and propagating these contextual relationships from one task to another. \\par\nFig. \\ref{fig:networks} (a) shows the conditional GAN architecture, where the discriminator $D$ learns to classify between real and synthesised pairs of saliency maps $y$, given the observed image $x$ and task specific class label $c$. The generator $G$, which also observes the image $x$ and the task specific class label $c$, tries to fool the discriminator. We compare this model to the proposed model given in Fig. \\ref{fig:networks} (b). The differences arise in the utilisation of memory $M$, where we capture subject specific behavioural patterns. 
This incorporates a subject's prior knowledge, behavioural goals and expectations.\n\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{.49\\linewidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{fig\/CGANS.pdf} %\n \\caption{Conditional GAN}\n \\end{subfigure}\n \\begin{subfigure}{.49\\linewidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{fig\/proposed.pdf} %\n \\caption{MC-GAN (proposed model)}\n \\end{subfigure}\n \\caption{A comparison of the conditional GAN architecture with the proposed model}\n\t\t\t\\vspace{-3mm}\n \\label{fig:networks}\n\\end{figure}\n\n\\vspace{-2mm}\n\\section{Related Work}\nLiterature related to this work can be broadly categorised into ``Visual Saliency Prediction'' and ``Generative Adversarial Networks'', and these two areas are addressed in Sections 2.1 and 2.2 respectively.\n\n\\subsection{Visual Saliency Prediction}\nSince the first attempts to model human saliency through feature integration \\cite{treisman1980feature}, the area of saliency prediction has been widely explored. Building upon this bottom-up approach, Koch and Ullman \\cite{koch1987shifts} and Itti et al. \\cite{itti1998model} proposed approaches based on extracting image features such as colour, intensity and orientation. These methods generate acceptable, centre-biased saliency predictions for free viewing, but are highly inaccurate in complex real world scenes \\cite{deep-fix}. Recent studies such as \\cite{valenti2009image,liu2013saliency, lang2012depth, erdem2013visual,porikli2006covariance} have looked into the development of more complex features for saliency estimation. \\par\nIn contrast, motivated by information theory, the authors in \\cite{bruce2006saliency,modellin-search, action-in-the-eye} have taken the top-down approach, where task dependent features come into play. They incorporate local information from regions of interest for the task at hand, such as features from person detectors and HoG descriptors. These models \\cite{bruce2006saliency,modellin-search, action-in-the-eye} are completely task specific, rendering adaptation from one task to another nearly impossible. Furthermore, they neglect the fact that different subjects may exhibit different behavioural patterns when achieving the same goal, generating unique strategies or sub-goals that we term user preferences. \\par\n\nIn order to exploit the representative power of deep architectures, more recent studies have been driven towards the utilisation of convolutional neural networks. In contrast to the above approaches, which use hand crafted features, deep learning based approaches offer automatic feature learning. In \\cite{kummerer2014deep} the authors propose the usage of feature representations from a pre-trained model that has been trained for object classification. This work was followed by \\cite{deep-fix,deep-ml}, where the authors train end-to-end saliency prediction models from scratch; their experimental evaluations suggest that deep models trained specifically for saliency prediction can outperform off-the-shelf CNN models. \\par\nLiu et al. \\cite{liu2015predicting} proposed a multiresolution CNN for predicting saliency, which has been trained on multiple scales of the observed image. The motivation behind this approach is to capture both low and high level features. 
Yet this design has an inherent deficiency due to the use of isolated image patches, which fail to capture the global context, composed of the context of the observed image, the task at hand (i.e., free viewing, recognising actions, searching for objects) and user preferences. Even though the context of the observed image is well represented in deep single scale architectures such as \\cite{deep-fix,deep-ml}, they ignore the rest of the global context, the task description and user behavioural patterns, which are often crucial for saliency estimation. \\par\nThe proposed work bridges the research gap between deep architectures that capture bottom-up saliency features, and top-down methods \\cite{bruce2006saliency,modellin-search, action-in-the-eye} that are purely driven by hand crafted features. We investigate the plausibility of completely automatic learning of the global context, which has been poorly represented in the literature thus far, through a memory augmented conditional generative adversarial model. \n\n\\subsection{Generative Adversarial Networks}\nGenerative adversarial networks (GAN), which belong to the family of generative models, have achieved promising results for pixel-to-pixel synthesis \\cite{arici2016associative}. Several works have looked into numerous architectural augmentations when synthesising natural images. For instance, in \\cite{gregor2015draw} the authors utilise a recurrent network approach, whereas in \\cite{dosovitskiy2015learning} a de-convolution network approach is used to generate higher quality images. Most recently, the authors in \\cite{pan2017salgan} have utilised the GAN architecture for visual saliency prediction and proposed a novel loss function which is proven to be effective both for initialising the generator and for stabilising adversarial training. Yet their work does not address how to achieve task specific saliency estimation, or how to incorporate task specific dependencies and subject behavioural patterns into saliency estimation. \\par\nThe proposed work draws inspiration from conditional GANs \\cite{li2016precomputed,mathieu2015deep,mirza2014conditional,pathak2016context,reed2016generative,yoo2016pixel,zhou2016learning,wang2016generative}. This architecture has been extensively applied to image based prediction problems such as image prediction from normal maps \\cite{wang2016generative}, future frame prediction in videos \\cite{mathieu2015deep}, image style transfer \\cite{li2016precomputed}, image manipulation guided by user preferences \\cite{zhou2016learning}, etc. In \\cite{pix2pix} the authors proposed a novel U-Net \\cite{ronneberger2015u} based architecture for conditional GANs. Their evaluations suggested that this network is capable of capturing local semantics, with applications to a wide range of problems. We investigate the possibility of merging the discriminative learning power of conditional GANs together with a local memory to fully capture the global context, contributing a novel application area and a structural augmentation for conditional GANs. \n\n\\vspace{-2mm}\n\\section{Visual Saliency Model}\n\n\\subsection{Objectives}\n\nGenerative adversarial networks learn a mapping from a random noise vector $z$ to an output image $y$, $G: z \\rightarrow y$ \\cite{goodfellow2014generative}; whereas conditional GANs learn a mapping from an observed image $x$ and random noise vector $z$ to an output $y$, given auxiliary information $c$, where $c$ can be class labels or data from other modalities: $G: \\{x,z | c \\} \\rightarrow y$. 
When we incorporate the notion of time into the system, the observed image at time instance $t$ is $x_t$, the respective noise vector is $z_t$, and the relevant class label is $c_t$. Then the objective function of a conditional GAN can be written as, \n\\begin{equation}\n\\begin{split}\nL_{cGAN}(G,D) =E_{x_t, y_t \\sim p_{data}(x_t, y_t)}[log(D (x_t, y_t | c_t))] + \\\\\nE_{x_t \\sim p_{data(x_t), z_t \\sim p_z(z_t)}}[log(1-D(x_t, G(x_t,z_t | c_t)))].\n\\end{split}\n\\end{equation}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=.90\\linewidth]{fig\/memory_fig.pdf} %\n \\caption{Memory architecture: The model receives an input $o_t$ at time instance $t$. A function $f_r^{LSTM}$ is used to embed this input and retrieve the content of the memory $M_{t-1}$. When reading, we use a $softmax$ function to weight the association between each memory slot and the input $o_t$, deriving a weighted retrieval $h_{t}$. The final output $m_t$ is derived using both $\\grave{o}_t$ and $h_{t}$. Finally we update the memory using the memory write function $f_w^{LSTM}$. This generates the memory representation $M_{t}$ at time instance $t+1$, shown to the right.}\n\t\t\t\\vspace{-3mm}\n \\label{fig:memory_architecture}\n \\end{figure*}\n \nLet $ M \\in \\mathbb{R}^{k*l} $, shown in Fig. \\ref{fig:memory_architecture}, be the working memory with $k$ memory slots, where $l$ is the embedding dimension of the generator output,\n\\begin{equation}\no_t= G(x_t,z_t| c_t).\n\\end{equation}\nIf the representation of the memory at time instance $t-1$ is given by $M_{t-1}$ and $f_r^{LSTM}$ is a read function, then we can generate a key vector $a_t$, representing the similarity between the current memory content and the current generator embedding, via attending over the memory slots such that,\n\\begin{equation}\n\\grave{o}_t=f_r^{LSTM}(o_t),\n\\end{equation}\n\\begin{equation}\na_t= softmax(\\grave{o}_t^TM_{t-1}),\n\\end{equation}\nand\n\\begin{equation}\nh_t= a_t^TM_{t-1}.\n\\end{equation}\nThen we retrieve the current memory state by, \n\\begin{equation}\nm_t= f^{MLP}(\\grave{o}_t,h_t), \n\\end{equation}\nwhere $f^{MLP}$ is a neural network composed of multi-layer perceptrons (MLP) trained jointly with the other networks. 
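\nFor concreteness, the memory read path in Eqs. (3)--(6) can be sketched in a few lines. The snippet below is a minimal PyTorch-style illustration under our notation; the framework, module sizes and class structure are illustrative assumptions rather than the exact training code.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nk, l = 32, 100  # memory slots and embedding size (illustrative)\n\nclass MemoryRead(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.f_r = nn.LSTMCell(l, l)       # read function f_r^LSTM\n        self.f_mlp = nn.Sequential(        # f^MLP combining o' and h\n            nn.Linear(2 * l, 1024), nn.ReLU(), nn.Linear(1024, l))\n\n    def forward(self, o_t, M, state):\n        o_bar, cell = self.f_r(o_t, state)           # Eq. (3)\n        a_t = F.softmax(o_bar @ M.t(), dim=-1)       # Eq. (4): score k slots\n        h_t = a_t @ M                                # Eq. (5): retrieval\n        m_t = self.f_mlp(torch.cat([o_bar, h_t], dim=-1))  # Eq. (6)\n        return m_t, a_t, (o_bar, cell)\n\\end{verbatim}\nHere $M$ is stored as a $k \\times l$ tensor, and the attention weights $a_t$ score the generator embedding against each of the $k$ slots; the same weights are reused in the write phase described next.\n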
\nThen we generate the vector for the memory update, $\\grave{m}_t$, by passing $m_t$ through a write function $ f_w^{LSTM}$,\n\\begin{equation}\n\\grave{m}_t= f_w^{LSTM}(m_t),\n\\end{equation}\nand finally we completely update the memory using,\n\\begin{equation}\nM_{t}=M_{t-1}(1- (a_t \\otimes e_k )^T) + (\\grave{m}_t \\otimes e_l)(a_t \\otimes e_k )^T,\n\\end{equation}\nwhere $1$ is a matrix of ones, $e_l \\in \\mathbb{R}^l $ and $ e_k \\in \\mathbb{R}^k$ are vectors of ones, and $ \\otimes$ denotes the outer product, which duplicates its left vector $l$ or $k$ times to form a matrix.\nNow the objective of the proposed memory augmented cGAN can be written as, \n\\begin{equation}\n\\begin{split}\nL^*_{cGAN}(G,D) =E_{x_t, y_t \\sim p_{data}(x_t, y_t)}[log(D (x_t, y_t | c_t))] + \\\\\nE_{x_t \\sim p_{data(x_t), z_t \\sim p_z(z_t)}}[ log(1-D(x_t, o_t \\otimes tanh(m_t)))].\n\\end{split}\n\\end{equation}\nWe would like to emphasise that we are learning a single network for all the tasks at hand, rendering a simpler but informative framework which can be directly applied to a variety of tasks without any fine tuning.\n\\subsection{Network Architecture}\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.4\\linewidth]{fig\/u_new_architecture.pdf} %\n \\caption{U-Net architecture}\n \\label{fig:U_Net_architecture}\n\t\t\t\\vspace{-3mm}\n \\end{figure}\n\nFor the generator we adapt the U-Net architecture of \\cite{pix2pix} (see Fig. \\ref{fig:U_Net_architecture}). Let $Ck$ denote a Convolution-BatchNorm-ReLU layer group with $k$ filters, and let $CDk$ denote a Convolution-BatchNorm-Dropout-ReLU layer group with a dropout rate of 50\\%. Then the generator architecture can be written as an\nencoder: C64-C128-C256-C512-C512-C512-C512-C512, followed by a U-Net decoder: CD512-CD1024-CD1024-C1024-C1024-C512-C256-C128, where there are skip connections between each $i^{th}$ layer in the encoder and the ${n-i}^{th}$ layer of the decoder, and there are $n$ total layers in the generator (see \\cite{pix2pix} for details). \nThe discriminator architecture is: C64-C128-C256-C512-C512-C512. \\par\nAll convolutions are $4 \\times 4$ spatial filters applied with stride 2. Convolutions in the encoder and in the discriminator downsample by a factor of 2, whereas in the decoder they upsample by a factor of 2. A description of our memory architecture is as follows. For the functions $f_r^{LSTM}$ and $f_w^{LSTM}$ we utilise two one-layer LSTM networks \\cite{hochreiter1997long} with 100 hidden units each, and for $f^{MLP}$ we use a neural network with a single hidden layer of 1024 units with ReLU activations. This memory module is fully differentiable, and we learn it jointly with the other networks. We trained the proposed model with the Adam \\cite{kingma2014adam} optimiser, with a batch size of 32 and an initial learning rate of 1e-5, for 10 epochs.\n\n\\vspace{-2mm}\n\\section{Experimental Evaluations}\n\n\\subsection{Datasets}\nWe evaluate our proposed approach on two publicly available datasets, VOCA-2012 \\cite{action-in-the-eye} and MIT person detection (MIT-PD) \\cite{modellin-search}. \\par\nThe VOCA-2012 dataset consists of 1,085,381 human eye fixations from 12 volunteers (5 male and 7 female) divided into 2 groups based on the given task: 8 subjects perform action recognition in the given image, whereas the remaining subjects perform context dependent visual search. 
The subjects in this group are searching for furniture, paintings\/ wallpapers, bodies of water, buildings, cars\/ trucks, mountains\/ hills, or roads\/ trees in the given scene. The MIT-PD dataset consists of 12,768 fixations from 14 subjects (between 18 and 40 years of age), where the subjects search for people in 912 urban scenes. MIT-PD contains only a single task, and we use this dataset to demonstrate the effectiveness of the memory network.\n\n\\subsection{Evaluation metrics}\nLet $N$ denote the number of examples in the testing set, $y$ the ground truth saliency map, and $\\hat{y}$ the predicted saliency map. Following this notation we define the following metrics:\n\\begin{itemize}\n\\item \\textbf{Area Under Curve (AUC): }\nAUC is a widely used metric for evaluating saliency models. We use the formulation of this metric defined in \\cite{borji2012exploiting}. \n\\item \\textbf{Normalised Scanpath Saliency (NSS): }\nNSS \\cite{peters2005components} is calculated by taking the mean score assigned by the unit normalised saliency map $\\hat{y}^{norm}$ at the human eye fixation locations, \n\\begin{equation}\nNSS=\\frac{1}{N}\\sum_{i=1}^{N}\\hat{y}^{norm}_{i},\n\\end{equation}\nwhere $i$ indexes the $N$ fixation locations.\n\\item \\textbf{Linear Correlation Coefficient (CC): }\nIn order to measure the linear relationship between the ground truth and the predicted map we utilise the linear correlation coefficient, \n\\begin{equation}\nCC=\\frac{cov(y,\\hat{y})}{\\sigma_{y}\\sigma_{\\hat{y}}},\n\\end{equation}\nwhere $cov(y,\\hat{y})$ is the covariance between the distributions $y$ and $\\hat{y}$, and $\\sigma_{y}$ and $\\sigma_{\\hat{y}}$ are the standard deviations of $y$ and $\\hat{y}$ respectively. As the name implies, $CC=1$ denotes a perfect linear relationship between the distributions $y$ and $\\hat{y}$, whereas a value of $0$ implies that there is no linear relationship. \n\\item \\textbf{KL divergence (KL): }\nTo measure the non-symmetric difference between two distributions we utilise the KL divergence measure given by,\n\\begin{equation}\nKL=\\sum_{i=1}^{N}\\hat{y}_{i}log(\\frac{\\hat{y}_i}{y_i}).\n\\end{equation}\nAs the ground truth and predicted saliency maps can be seen as 2D distributions, we can use the KL divergence to measure the difference between them.\n\\item \\textbf{Similarity metric (SM): }\nSM computes the sum of the minimum values at each pixel location between the $\\hat{y}^{norm}$ and $y^{norm}$ distributions, \n\\begin{equation}\nSM=\\sum_{i=1}^{P}min(\\hat{y}^{norm}_{i},y^{norm}_{i}), \n\\end{equation}\nwhere\n$\\sum_{i=1}^{P}\\hat{y}^{norm}_{i}=1 $\nand\n$\\sum_{i=1}^{P}y^{norm}_{i}=1 $\nare the normalised probability distributions and $P$ denotes the number of pixel locations in the 2D maps.\n\\end{itemize}\n\n\\subsection{Results}\nQuantitative evaluations on the VOCA-2012 dataset are presented in Tab. \\ref{tab:tab_1}. In the proposed model, in order to retain user dependent factors such as user preference in memory, we feed the examples in order, such that examples from each specific user are processed in sequence. We compare our model with 8 state-of-the-art methods. The row `human' stands for the human saliency predictor, which computes the saliency map derived from the fixations made by half of the human subjects performing the same task. This predictor is evaluated with respect to the rest of the subjects, as opposed to the entire group \\cite{action-in-the-eye}. \\par\n\nThe evaluations suggest that the bottom-up model of Itti et al. 
\\cite{itti2000saliency} generates poor results, as it does not incorporate task specific information. Even with high level object detectors, the models of Judd et al. \\cite{judd2009learning} and the HOG detector \\cite{action-in-the-eye} fail to render accurate predictions. \\par\nThe deep learning models PDP \\cite{jetley2016end} and ML-net \\cite{deep-ml} are able to outperform the techniques stated above, but they lack the ability to learn task dependent information. We note the accuracy gain of the cGAN model over PDP, ML-net and SalGAN, where the model incorporates a conditional variable to discriminate between the `action recognition' and `context recognition' tasks instead of learning two separate networks or fine-tuning on them individually. Our proposed approach builds upon this by incorporating an augmented memory architecture with conditional learning. We learn different user patterns and retain the dependencies among different tasks, and outperform all baselines considered (MC-GAN (proposed), Tab. \\ref{tab:tab_1}). \\par \nAs a further study, in Tab. \\ref{tab:tab_1}, row M-GAN (separate), we show the evaluations for training two separate memory augmented GAN networks for the tasks in the VOCA-2012 test set without using the conditional learning process. The results emphasise the importance of learning a single network for all the tasks, leveraging semantic relationships between different tasks. The accuracy of the networks learned for separate tasks is lower than that of the combined MC-GAN and cGAN approaches (rows MC-GAN (proposed) and cGAN, Tab. \\ref{tab:tab_1}), highlighting the importance of learning the different tasks together and allowing the model to discriminate between the tasks and learn the complementary information, rather than keeping the model completely blind regarding the existence of another task category.\\par\n\\begin{table}[t]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n & \\multicolumn{4}{|c|}{\\textbf{Task}} \\\\\n \\cline{2-5}\n & \\multicolumn{2}{|c|}{\\textbf{Action Rec}} & \\multicolumn{2}{|c|}{\\textbf{Context Rec}}\\\\\n \\cline{2-5}\n \\textbf{Saliency Models} & \\textbf{AUC} & \\textbf{KL} & \\textbf{AUC} & \\textbf{KL} \\\\\n \\hline\n \\hline\n HOG detector \\cite{action-in-the-eye} & 0.736 & 8.54 & 0.646 & 8.10 \\\\\n \\hline\n Judd et al. \\cite{judd2009learning} & 0.715 & 11.00 & 0.636 & 9.66 \\\\\n \\hline\n Itti et al. \\cite{itti2000saliency} & 0.533 & 16.53 & 0.512 & 15.04 \\\\\n \\hline\n central bias \\cite{action-in-the-eye} & 0.780 & 9.59 & 0.685 & 8.82\\\\\n \\hline\n PDP \\cite{jetley2016end} & 0.875 & 8.23 & 0.690 & 7.98 \\\\\n \\hline\n ML-net \\cite{deep-ml}& 0.847 & 8.51 & 0.684 & 8.02 \\\\\n \\hline\n SalGAN \\cite{pan2017salgan}& 0.848 & 8.47 & 0.679 & 8.00 \\\\\n \\hline\n cGAN \\cite{pix2pix} &0.852 &8.24 &0.701 &7.95 \\\\\n \\hline\n M-GAN (separate) & 0.848 & 8.54 & 0.704 & 8.00\\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{0.901} & \\textbf{8.07 } &\\textbf{0.734} & \\textbf{7.65 } \\\\\n \\hline\n \\hline\n Human \\cite{action-in-the-eye} & 0.922 & 6.14 & 0.813 & 5.90\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Experimental evaluation on the VOCA-2012 test set. We augment the current state-of-the-art GAN method (SalGAN \\cite{pan2017salgan}) by adding a conditional variable (cGAN \\cite{pix2pix}) to mimic the joint learning process instead of learning two separate networks or fine-tuning on them individually. 
To capture user and task specific behavioural patterns we add a memory module to cGAN, giving MC-GAN (proposed), which outperforms all the baselines. We also compare against training two separate memory augmented GAN networks, M-GAN (separate), without the conditional learning process.}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_1}\n\\end{table}\n\n\n\n\\begin{table}[t]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n & \\multicolumn{6}{|c|}{\\textbf{Task}} \\\\\n \\cline{2-7}\n & \\multicolumn{3}{|c|}{\\textbf{Action Rec}} & \\multicolumn{3}{|c|}{\\textbf{Context Rec}}\\\\\n \\cline{2-7}\n \\textbf{Saliency Models} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} \\\\\n \\hline\n \\hline\n ML-net \\cite{deep-ml}& 2.05 & 0.71 & 0.51 & 2.03 & 0.64 & 0.42 \\\\\n \\hline\n SalGAN \\cite{pan2017salgan}& 2.10 & 0.73 & 0.51 & 2.10 & 0.68 & 0.44 \\\\\n \\hline\n cGAN \\cite{pix2pix} & 2.23 & 0.76 & 0.55 & 2.14 & 0.71 & 0.57 \\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{2.23} & \\textbf{0.79 } &\\textbf{0.60} & \\textbf{2.20 } &\\textbf{0.77} & \\textbf{0.69 } \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison between ML-net, SalGAN, cGAN and MC-GAN (proposed) on VOCA-2012}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_3}\n\\end{table}\n\nTo provide qualitative insight, some predicted maps along with the ground truth and baseline ML-net \\cite{deep-ml} predictions are given in Fig. \\ref{fig:fig_action_context_rec}. In the first column we show the input image; columns ``Action rec GT'' and ``Context rec GT'' depict the ground truth saliency maps for the respective tasks. In columns ``Our action rec'' and ``Our context rec'' we show the respective predictions from our model, and finally the column ``ML-net'' contains the prediction from the ML-net \\cite{deep-ml} baseline. Observing columns ``Action rec GT'' and ``Context rec GT'', one can clearly see how the tasks differ based on the different saliency patterns. Yet, the proposed model is able to capture these different semantics within a single network which is trained together for all the tasks. As shown in Fig. \\ref{fig:fig_action_context_rec}, it has efficiently identified the image saliency from low level features as well as task dependent saliency factors from high level cues such as trees, furniture and roads. Furthermore, the single learning process and the incorporation of a memory architecture make it possible to retain the semantic relationships among different tasks, and how users adapt to those tasks. 
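\n\nFor reproducibility, the evaluation metrics defined above admit a compact implementation. The following NumPy sketch is our own illustrative code rather than the exact evaluation scripts used for the tables; it assumes the maps are given as 2D arrays and that fixations are supplied as a boolean mask.\n\\begin{verbatim}\nimport numpy as np\n\ndef nss(pred, fixations):\n    # mean of the unit normalised map at the fixation locations\n    p = (pred - pred.mean()) \/ (pred.std() + 1e-8)\n    return p[fixations].mean()\n\ndef cc(gt, pred):\n    # Pearson correlation: cov(y, y_hat) \/ (sigma_y * sigma_y_hat)\n    return np.corrcoef(gt.ravel(), pred.ravel())[0, 1]\n\ndef kl(gt, pred, eps=1e-8):\n    y = gt \/ (gt.sum() + eps)          # normalise to distributions\n    y_hat = pred \/ (pred.sum() + eps)\n    return np.sum(y_hat * np.log((y_hat + eps) \/ (y + eps)))\n\ndef sm(gt, pred, eps=1e-8):\n    y = gt \/ (gt.sum() + eps)\n    y_hat = pred \/ (pred.sum() + eps)\n    return np.minimum(y, y_hat).sum()  # sum of pixel-wise minima\n\\end{verbatim}\n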
\n\n\\begin{figure*}[!htb]\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006090_mlnet.pdf} %\n \n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006094_mlnet.pdf} %\n \n \\end{subfigure}\n\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_pred_context.pdf} %\n \n 
\\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006125_mlnet.pdf} %\n \n \\end{subfigure}\n\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006150_mlnet.pdf} %\n \n \\end{subfigure}\n\n \n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_gt_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_pred_action.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_gt_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_pred_context.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006153_mlnet.pdf} %\n \n \\end{subfigure}\n\n \n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157.jpg} %\n \\caption{Image}\n \\end{subfigure}\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_gt_action.pdf} %\n \\caption{Action rec GT}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_pred_action.pdf} %\n \\caption{Our action rec}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n 
\\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_gt_context.pdf} %\n \\caption{Context rec GT}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_pred_context.pdf} %\n \\caption{Our context rec}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.160\\textwidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/supplymentry_material\/voc_saliency_maps\/006_2010_006157_mlnet.pdf} %\n \\caption{ML-net}\n \\end{subfigure}\n\n \n \\caption{Qualitative results for the VOCA-2012 dataset and comparisons to the state-of-the-art.}\n\t\t\t\\vspace{-3mm}\n \\label{fig:fig_action_context_rec}\n\\end{figure*}\n\nTab. \\ref{tab:tab_2} compares the performance of the proposed model with 5 baselines on the MIT-PD test set. The first baseline, ``Scene Context'' \\cite{modellin-search}, utilises colour and orientation features, whereas ``Combined'' \\cite{modellin-search} incorporates both scene context features and higher level features from a person detector \\cite{dalal2006human}. Even with such explicit modelling of the task, this baseline fails to generate accurate predictions, suggesting the subjective nature of task specific viewing. With the aid of the associative memory of the proposed model we successfully capture those underlying factors.\n\n\\begin{table}[!h]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|}\n \\hline\n & \\multicolumn{2}{|c|}{\\textbf{Scene Type}} \\\\\n \\cline{2-3}\n & \\textbf{Target Present} & \\textbf{Target Absent}\\\\\n \\cline{2-3}\n \\textbf{Saliency Models} & \\textbf{AUC} & \\textbf{AUC} \\\\\n \\hline\n \\hline\n Scene Context \\cite{modellin-search} & 0.844 & 0.845 \\\\\n \\hline\n Combined \\cite{modellin-search} & 0.896 & 0.877 \\\\\n \\hline\n ML-net \\cite{deep-ml}& 0.901 & 0.881\\\\\n \\hline\n SalGAN \\cite{pan2017salgan}&0.910 & 0.887\\\\\n \\hline\n cGAN \\cite{pix2pix} & 0.923 & 0.899 \\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{0.942} & \\textbf{0.903} \\\\\n \\hline\n \\hline\n Human \\cite{modellin-search} & 0.955 & 0.930 \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Experimental evaluation on the MIT-PD test set}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_2}\n\\end{table}\n\nIn Tab. \\ref{tab:tab_3} and Tab. \\ref{tab:tab_4} we present the evaluations of the NSS, CC and SM metrics. In order to evaluate the ML-net, SalGAN and cGAN models we utilise the implementations released by the respective authors. When comparing the results between the ML-net \\cite{deep-ml}, cGAN \\cite{pix2pix} and SalGAN \\cite{pan2017salgan} models and the proposed MC-GAN model, a considerable gain in performance is observed across all the metrics considered, which indicates a greater agreement between the predicted and ground truth saliency maps. 
We were unable to compare other baselines using these metrics due to the unavailability of public implementations.\n\n\\begin{table}[t]\n \\centering\n \\begin{adjustbox}{width=.98\\linewidth,center}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n & \\multicolumn{6}{|c|}{\\textbf{Task}} \\\\\n \\cline{2-7}\n & \\multicolumn{3}{|c|}{\\textbf{Target Present}} & \\multicolumn{3}{|c|}{\\textbf{Target Absent}}\\\\\n \\cline{2-7}\n \\textbf{Saliency Models} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} & \\textbf{NSS} & \\textbf{CC} & \\textbf{SM} \\\\\n \\hline\n \\hline\n ML-net \\cite{deep-ml} & 1.41 & 0.55 &0.41 &1.22 & 0.43 & 0.38 \\\\\n \\hline\n SalGAN \\cite{pan2017salgan}& 1.41 & 0.53 &0.44 &1.20 & 0.42 & 0.35 \\\\ \n \\hline\n cGAN \\cite{pix2pix} & 1.67 & 0.51 &0.59 & 2.02 & 0.41 & 0.52 \\\\\n \\hline\n \\textbf{MC-GAN (proposed)} &\\textbf{2.17} & \\textbf{0.76 } &\\textbf{0.71} & \\textbf{2.34 } &\\textbf{0.75} & \\textbf{0.78 } \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\caption{Comparison between ML-net, SalGAN, cGAN and MC-GAN (proposed) on the MIT-PD test set}\n\t\t\t\\vspace{-3mm}\n \\label{tab:tab_4}\n\\end{table}\n\n\nThe qualitative results obtained from the proposed model along with the ML-net \\cite{deep-ml} network on a few examples from the MIT-PD dataset are shown in Fig. \\ref{fig:fig_context_rec}. We would like to emphasise the usage of a subject's prior knowledge in the task of searching for people in an urban scene. The subjects selectively attend to areas that are more likely to contain humans, such as high rise buildings (see rows 2, 6) and pedestrian walkways (see rows 1, 3-6), a behaviour that our model has effectively captured. Lacking the capacity to model such user preferences, the baseline ML-net model generates centre-biased saliency maps without effectively capturing the subject's strategy. 
\n\\begin{figure}[!htb]\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_31.jpg} %\n \n \\end{subfigure}\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_31.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_31.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_31.pdf} %\n \n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_14.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_14.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_14.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_14.pdf} %\n \n \\end{subfigure}\n \n \n \n\\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_18.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_18.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_18.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_18.pdf} %\n \n \\end{subfigure}\n \n\t\\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_4.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_4.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_4.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_4.pdf} %\n \n \\end{subfigure} \n \n \n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_33.jpg} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_33.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n 
\\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_33.pdf} %\n \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_33.pdf} %\n \n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/Image\/saliency_predictions_5.jpg} %\n \\caption{Image}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/GT\/saliency_predictions_5.pdf} %\n \\caption{GT}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/our\/saliency_predictions_5.pdf} %\n \\caption{Our}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.24\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/visualisations_2\/ml-net\/saliency_predictions_5.pdf} %\n \\caption{ML-net}\n \\end{subfigure}\n\n \n \\caption{Qualitative results for the MIT-PD dataset and comparisons to the state-of-the-art.}\n \\label{fig:fig_context_rec}\n\\end{figure}\n\n\n\\subsection{Task Specific Generator Activations}\nIn Fig. \\ref{fig:generator_activations} we visualise the activations from the 2nd (conv-l-2) and 2nd last (conv-l-8) convolution layers of the generator. The task specific learning of the proposed conditional GAN architecture is clearly evident in the activations. For instance, when the task at hand is to recognise actions, the generator activations are highly concentrated around the foreground of the image (see (b), (g)), while for context recognition the model has learned that the areas of interest are in the background of the image (see (c), (h)). These task specific salient features are combined and compressed hierarchically, and in later layers (i.e., conv-l-8) the network has learned the most relevant areas to focus on when generating the output saliency map. 
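\n\nActivation maps such as those in Fig. \\ref{fig:generator_activations} are straightforward to extract with a forward hook. The snippet below is a generic, self-contained PyTorch-style sketch; the two-layer encoder is a hypothetical stand-in for our generator and is not the actual implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\n# hypothetical stand-in for the generator encoder (illustration only)\nencoder = nn.Sequential(\n    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),\n    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU())\n\nactivations = {}\ndef save_to(name):\n    def hook(module, inputs, output):\n        # store the channel-mean as a 2D importance map\n        activations[name] = output.detach().mean(dim=1)\n    return hook\n\nencoder[2].register_forward_hook(save_to('conv-l-2'))\n\nx = torch.randn(1, 3, 256, 256)  # dummy input image\n_ = encoder(x)\nprint(activations['conv-l-2'].shape)  # map to overlay on the input\n\\end{verbatim}\nUpsampling the stored map to the input resolution and overlaying it with a blue-to-yellow colour map reproduces visualisations of the style shown in Fig. \\ref{fig:generator_activations}.\n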
\n\n\n\\begin{figure}[htb]\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth,height=.82\\linewidth]{fig\/generator_activations\/017_2010_006129.jpg} %\n \\caption{Input}\n \\end{subfigure}\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006129_action_rec.pdf} %\n \\caption{l-2 AR} \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006129_context_rec.pdf} %\n \\caption{l-2 CR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006129_action_rec.pdf} %\n \\caption{l-8 AR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006129_context_rec.pdf} %\n \\caption{l-8 CR}\n \\end{subfigure}\n \n \n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth,height=.82\\linewidth]{fig\/generator_activations\/017_2010_006217.jpg} %\n \\caption{Input}\n \\end{subfigure}\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006217_action_rec.pdf} %\n \\caption{l-2 AR} \n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth,height=.82\\linewidth]{fig\/generator_activations\/layer_4\/017_2010_006217_context_rec.pdf} %\n \\caption{l-2 CR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth,height=.82\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006217_action_rec.pdf} %\n \\caption{l-8 AR}\n \\end{subfigure}\n \\centering\n \\begin{subfigure}{.190\\linewidth}\n \\centering\n \\includegraphics[width=.95\\linewidth,height=.82\\linewidth]{fig\/generator_activations\/layer_12\/017_2010_006217_context_rec.pdf} %\n \\caption{l-8 CR}\n \\end{subfigure}\n \\caption{Visualisation of the generator activations from the 2nd (l-2) and 2nd last (l-8) convolution layers for the action recognition (AR) and context recognition (CR) tasks. The importance varies from blue to yellow, where blue represents areas of least importance and yellow represents areas of most importance.}\n\t\t\t\\vspace{-3mm}\n\\label{fig:generator_activations}\n \\end{figure}\n\n\\vspace{-2mm}\n\\section{Conclusion}\n\nThis work introduces a novel human saliency estimation architecture which combines task and user specific information in a generative adversarial pipeline. We show the importance of fully capturing the context information, which incorporates the task information, subject behavioural goals and image context. The resultant framework offers several advantages compared to task specific handcrafted features, enabling direct transferability among different tasks. Qualitative and quantitative experimental evaluations on two public datasets demonstrate superior performance with respect to the current state-of-the-art. \n\n\\vspace{-2mm}\n\\footnotesize{\n\\subsubsection*{Acknowledgement}\n\\vspace{-2mm}\nThis research was supported by an Australian Research Council's Linkage grant (LP140100221). 
The authors also thank QUT High Performance Computing (HPC) for providing the computational resources for this research.\n}\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn \\cite[Lemma IIe]{wiener32}, Wiener proved that\n``{\\em If $f(x)$ is a function with an absolutely convergent Fourier series, which nowhere vanishes for real arguments, $1\/f(x)$ has an absolutely convergent Fourier series.}\"\nThe above statement is now known as the classical Wiener's lemma.\n\nWe say that a Banach space ${\\mathcal A}$ with norm $\\|\\cdot\\|_{\\mathcal A}$\nis a {\\em Banach algebra} if \nit has an operation of multiplication possessing the usual algebraic properties, and\n\\begin{equation}\\label{banachalgebra.def}\n\\|AB\\|_{\\mathcal A}\\le K \\|A\\|_{\\mathcal A}\\|B\\|_{\\mathcal A}\\ \\ {\\rm for\\ all} \\ A, B\\in {\\mathcal A},\n\\end{equation}\nwhere $K$ is a positive constant. \nGiven two Banach algebras ${\\mathcal A}$ and ${\\mathcal B}$ such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal B}$,\nwe say that\n ${\\mathcal A}$ is {\\it inverse-closed} in ${\\mathcal B}$ if $A\\in {\\mathcal A}$ and\n$A^{-1} \\in {\\mathcal B}$ implies $A^{-1}\\in {\\mathcal A}$. Inverse-closedness is also known as spectral invariance, Wiener pair, local subalgebra, etc.\\ \n \\cite{douglasbook, gelfandbook, Naimarkbook, takesaki}.\nLet ${\\mathcal C}$ be the algebra of all periodic continuous functions under multiplication, and\n${\\mathcal W}$ be its Banach subalgebra of all periodic functions with absolutely convergent Fourier series,\n\\begin{equation}\\label{Wieneralgebra.def}\n{\\mathcal W}=\\Big\\{f(x)=\\sum_{n\\in \\ZZ} \\hat f(n) e^{inx},\\ \\\n \\|f\\|_{\\mathcal W}:=\\sum_{n\\in \\ZZ} |\\hat f(n)|<\\infty\\Big\\}.\\end{equation}\nThen the classical Wiener's lemma can be reformulated as the statement that ${\\mathcal W}$ is an inverse-closed subalgebra of ${\\mathcal C}$.\n Due to the above interpretation, we also refer to the inverse-closed property of a Banach subalgebra ${\\mathcal A}$ as Wiener's lemma for that subalgebra.\n Wiener's lemma for Banach algebras of infinite matrices and integral operators with certain off-diagonal decay\n can be informally interpreted as localization preservation under inversion.\nSuch preservation of localization is of great importance in applied harmonic analysis, numerical analysis, optimization\nand many mathematical and engineering fields \\cite{akramgrochenigsiam, chengsun19, christensen05, grochenigr10, ksw13, sunsiam06}.\n The reader may refer to the survey papers \\cite{grochenig10, Krishtal11, shinsun13}, the recent papers \\cite{fangshinsun20, samei19, shinsun19} and references therein for historical remarks and recent advances.\n\n\n\n\nGiven an element $A$ in a Banach algebra ${\\mathcal A}$ with the identity $I$, we define its {\\em spectral set} $\\sigma_{\\mathcal{A}}(A)$\nand {\\em spectral radius} $\\rho_{\\mathcal{A}}(A)$ by\n$$\n\\sigma_{\\mathcal{A}}(A):=\\big\\{\\lambda \\in \\mathbb{C} : \\lambda I -A \\text{ is not invertible in }\n\\mathcal{A} \\big\\}\n$$\nand\n$$\n\\rho_\\mathcal{A}(A) := \\max \\big\\{ |\\lambda| :\\lambda \\in \\sigma_{\\mathcal A}(A)\\big\\}\n$$\nrespectively.\n Let ${\\mathcal A}$ and\n${\\mathcal B}$ be Banach algebras with common identity $I$ and ${\\mathcal A}$ be a Banach subalgebra of ${\\mathcal B}$.\nThen an equivalent condition for the inverse-closedness of\n${\\mathcal A}$ in ${\\mathcal B}$ is that the spectral sets of any\n$A\\in {\\mathcal A}$ in 
the Banach algebras ${\\mathcal A}$ and ${\\mathcal B}$ coincide, i.e.,\n$$\n\\sigma_{\\mathcal A}(A)=\\sigma_{\\mathcal B}(A).\n$$\nBy the above equivalence, a necessary condition for the inverse-closedness of\n${\\mathcal A}$ in ${\\mathcal B}$ is that\nthe spectral radii of any\n$A\\in {\\mathcal A}$ in the Banach algebras ${\\mathcal A}$ and ${\\mathcal B}$ coincide, i.e.,\n\\begin{equation} \\label{spectralradius}\n\\rho_\\mathcal{A}(A) =\\rho_\\mathcal{B}(A).\n\\end{equation}\nThe above necessary condition is shown by Hulanicki \\cite{hulanicki} to be sufficient if we further assume that\n$\\mathcal{A}$ and $\\mathcal{B}$ are $*$-algebras with common identity and involution, and that\n $\\mathcal{B}$ is symmetric.\n Here we say that\n a Banach algebra $\\mathcal B$ is a $*$-algebra if\nthere is a continuous linear {\\em involution $*$} on $\\mathcal {B}$\nwith the properties that\n\\begin{equation*}\n(AB)^* = B^* A^*\\ \\text{ and }\\ A^{**} = A\\ \\text{ for all } A, B \\in\n{\\mathcal B},\n\\end{equation*}\nand that a $*$-algebra ${\\mathcal B}$ is {\\em symmetric} if\n$$\\sigma_{\\mathcal {B}} (A^* A) \\subset [0,\\infty )\\ \\ {\\rm for \\ all} \\ A\\in {\\mathcal B}.$$\nThe spectral radii approach \\eqref{spectralradius}, known as Hulanicki's spectral method,\n has been used to establish the inverse-closedness of symmetric $*$-algebras \\cite{branden, gkI, grochenigklotz10, sunca11, suntams07, suncasp05};\n however, the above approach does not provide a norm estimate for the inversion, which is crucial for many mathematical and engineering applications.\n\n\n\nTo obtain norm estimates for the inversion, we recall the concept of norm-controlled inversion of a Banach subalgebra ${\\mathcal A}$ of a symmetric $*$-algebra ${\\mathcal B}$, which was initiated by Nikolski \\cite{nikolski99} and coined by Gr\\\"ochenig and Klotz \\cite{gkI}. Here\nwe say that\n a Banach subalgebra ${\\mathcal A}$ of ${\\mathcal B}$ admits {\\em\nnorm-controlled inversion} in ${\\mathcal B}$ if there exists a continuous function $h$ from\n$[0, \\infty)\\times [0, \\infty)$ to $[0, \\infty)$\n such that\n\\begin{equation}\\label{normcontrol}\n\\|A^{-1}\\|_{\\mathcal A}\\le h\\big(\\|A\\|_{\\mathcal A}, \\|A^{-1}\\|_{\\mathcal B}\\big)\n\\end{equation}\nfor all $A\\in {\\mathcal A}$ that are invertible in ${\\mathcal B}$\n\\cite{gkII, gkI, samei19, shinsun19}.\n\n\nNorm-controlled inversion is a strong version of Wiener's lemma.\nThe classical Banach algebra ${\\mathcal W}$ in \\eqref{Wieneralgebra.def} is inverse-closed in\nthe algebra\n ${\\mathcal C}$ of all periodic continuous functions\n \\cite{wiener32}; however, it does not admit norm-controlled inversion in\n ${\\mathcal C}$ \\cite{belinskiijfaa97, nikolski99}.\nTo establish Wiener's lemma, there are several methods, including\n Wiener's localization \\cite{wiener32}, Gelfand's technique\n\\cite{gelfandbook}, Brandenburg's trick \\cite{branden}, Hulanicki's spectral method \\cite{hulanicki}, Jaffard's boot-strap argument \\cite{jaffard90},\nthe derivation technique \\cite{grochenigklotz10},\nand Sj\\\"ostrand's commutator estimates \\cite{shinsun19, sjostrand94}.\n In this paper, we will use Brandenburg's trick\n to establish norm-controlled inversion of\n a differential $*$-subalgebra ${\\mathcal A}$ of a symmetric $*$-algebra ${\\mathcal B}$.\n\n\n\n\n\n\n\n\n\nThis introductory article is organized as follows. 
In Section \\ref{differentialalgebra.section}, we recall the concept of\ndifferential subalgebras and present some differential subalgebras of infinite matrices with polynomial off-diagonal decay.\nIn Section \\ref{generalizedDS.section}, we introduce the concept of generalized differential subalgebras and\npresent some generalized differential subalgebras of integral operators with kernels that are H\\\"older continuous and have polynomial off-diagonal decay.\nIn Section \\ref{normcontrolledinversion.section}, we use Brandenburg's trick to establish norm-controlled inversion\n of a differential $*$-subalgebra of a symmetric $*$-algebra, and we conclude the section with two remarks, on norm-controlled inversion with the norm-control function bounded by a polynomial, and on norm-controlled inversion in nonsymmetric Banach algebras.\n\n\n\n\n\n\n\n\n\n\\section{Differential Subalgebras}\\label{differentialalgebra.section}\n\n Let ${\\mathcal A}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal B}$.\nWe say that\n${\\mathcal A}$ is a {\\em differential subalgebra of order $\\theta\\in (0, 1]$} in ${\\mathcal B}$ \nif there exists a positive constant $D_0:=D_0({\\mathcal A}, {\\mathcal B}, \\theta)$ such that\n\\begin{equation}\\label{differentialnorm.def}\n\\|AB\\|_{\\mathcal A}\\le D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Big (\\Big(\\frac{\\|A\\|_{\\mathcal B}}{\\|A\\|_{\\mathcal A}}\\Big)^\\theta +\n\\Big(\\frac{\\|B\\|_{\\mathcal B}}{\\|B\\|_{\\mathcal A}}\\Big)^\\theta\n\\Big)\n\\quad {\\rm for \\ all} \\ A, B \\in {\\mathcal A}.\n\\end{equation}\nThe concept of differential subalgebras of order $\\theta$ was introduced in \\cite{blackadarcuntz91, kissin94, rieffel10} for $\\theta=1$ and\n\\cite{christ88, gkI, shinsun19} for $\\theta\\in (0, 1)$.\n We also refer the reader\nto \\cite{barnes87, fangshinsun13, gkII, gkI, grochenigklotz10, jaffard90, rssun12, samei19, sunca11, sunacha08, suntams07, suncasp05} for various differential subalgebras\nof infinite matrices, convolution operators, and integral operators with certain off-diagonal decay.\n\n\n\n For $\\theta=1$, the requirement\n\\eqref{differentialnorm.def} can be reformulated as\n\\begin{equation}\n\\|AB\\|_{\\mathcal A}\\le D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal B}+ D_0 \\|A\\|_{\\mathcal B} \\|B\\|_{\\mathcal A}\n\\quad {\\rm for \\ all} \\ A, B \\in {\\mathcal A}. 
\\end{equation}\nSo the norm $\\|\\cdot\\|_{\\mathcal A}$ satisfying \\eqref{differentialnorm.def}\nis also referred to as a Leibniz norm on ${\\mathcal A}$.\n\n\n\n Let $C[a, b]$ be the space of all continuous functions on the interval $[a, b]$ with its norm defined by\n$$\\|f\\|_{C[a, b]}=\\sup_{t\\in [a, b]} |f(t)|, \\ \\ f\\in C[a, b],$$\n and $C^k[a, b], k\\ge 1$, be the space of all functions on the interval $[a, b]$ with continuous derivatives up to order $k$,\nwith its norm defined by\n$$\\|h\\|_{C^k[a, b]}= \\sum_{j=0}^k \\|h^{(j)}\\|_{C[a, b]} \\ {\\rm for} \\ h\\in C^k[a, b].$$\nClearly, $C[a, b]$ and $C^k[a, b]$ are Banach algebras under function multiplication.\nMoreover \n \\begin{eqnarray}\\label{Cab.eq}\n\\|h_1h_2\\|_{C^1[a,b]} & = & \\|(h_1h_2)'\\|_{C[a, b]}+ \\|h_1h_2\\|_{C[a, b]}\\nonumber\\\\\n& \\le &\n\\|h_1'\\|_{C[a, b]} \\|h_2\\|_{C[a, b]}+ \\|h_1\\|_{C[a, b]} \\|h_2'\\|_{C[a, b]}\n + \\|h_1\\|_{C[a, b]}\\|h_2\\|_{C[a, b]}\\nonumber\\\\\n& \\le & \\|h_1\\|_{C^1[a,b]} \\|h_2\\|_{C[a,b]}+\\|h_1\\|_{C[a,b]}\\|h_2\\|_{C^1[a,b]} \\ {\\rm for \\ all} \\ h_1, h_2\\in C^1[a, b],\n\\end{eqnarray}\nwhere the first inequality follows from the Leibniz rule. Therefore we have\n\n\\begin{thm}\\label{C1ab.thm}\n $C^1[a, b]$ is a differential subalgebra of order one in $C[a, b]$.\n \\end{thm}\n Due to the above illustrative example of differential subalgebras of order one,\nthe norm $\\|\\cdot\\|_{\\mathcal A}$ satisfying \\eqref{differentialnorm.def}\nis also used to describe smoothness in abstract Banach algebras \\cite{blackadarcuntz91}.\n\n\n\nLet\n${\\mathcal W}^1$ be the Banach algebra of all periodic functions $f$ such that\nboth $f$ and its derivative $f'$ belong to the Wiener algebra ${\\mathcal W}$, and define the norm on ${\\mathcal W}^1$ by\n\\begin{equation}\\label{differentialwiener.def}\n\\|f\\|_{{\\mathcal W}^1} = \\|f\\|_{\\mathcal W}+\\|f'\\|_{\\mathcal W}\n = \\sum_{n\\in \\ZZ} (|n|+1) |\\hat f(n)|\n \\end{equation}\n for $f(x)=\\sum_{n\\in \\ZZ} \\hat f(n) e^{inx}\\in {\\mathcal W}^1$.\nFollowing the argument used in the proof of Theorem \\ref{C1ab.thm},\nwe have\n\\begin{thm} \\label{wienerdiff.theorem}\n ${\\mathcal W}^1$ is a differential subalgebra of order one in ${\\mathcal W}$.\n \\end{thm}\n\nRecall from the classical Wiener's lemma that ${\\mathcal W}$ is an inverse-closed subalgebra of\n${\\mathcal C}$, the algebra of all periodic continuous functions under multiplication.\nThis leads to the following natural question:\n\n\\begin{ques}\\label{question1}\n Is ${\\mathcal W}^1$ a differential subalgebra\nof ${\\mathcal C}$?\n\\end{ques}\n\nLet $\\ell^p, 1\\le p\\le \\infty$, be\n the space of all $p$-summable sequences on $\\ZZ$ with norm denoted by $\\|\\cdot\\|_p$.\n To answer the above question,\nwe consider Banach algebras ${\\mathcal C}$, ${\\mathcal W}$ and ${\\mathcal W}^1$ in the ``frequency domain\".\nLet ${\\mathcal B}(\\ell^p)$ be the algebra of all bounded linear operators on $\\ell^p, 1\\le p\\le \\infty$,\n and let \n\\begin{equation}\n\\label{tildew.def}\n\\tilde {\\mathcal W}=\\Big\\{A:=(a(i-j))_{i,j\\in \\ZZ},\\ \\| A\\|_{\\tilde W}=\\sum_{k\\in \\ZZ} |a(k)|<\\infty\\Big\\}\\end{equation}\nand\n\\begin{equation}\n\\label{tildew1.def}\n{\\tilde {\\mathcal W}}^1=\\Big\\{A:=(a(i-j))_{i,j\\in \\ZZ}, \\ \\| A\\|_{{\\tilde W}^1}=\\sum_{k\\in \\ZZ} (|k|+1) |a(k)|<\\infty\\Big\\}\\end{equation}\nbe Banach algebras of Laurent matrices\nwith symbols in ${\\mathcal W}$ and ${\\mathcal W}^1$ respectively. 
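\nFor completeness, we record the short computation behind Theorem \\ref{wienerdiff.theorem}; it is a sketch that mirrors \\eqref{Cab.eq} and uses only the Leibniz rule together with the submultiplicativity of $\\|\\cdot\\|_{\\mathcal W}$: for $f, g\\in {\\mathcal W}^1$,\n\\begin{eqnarray*}\n\\|fg\\|_{{\\mathcal W}^1} & = & \\|fg\\|_{\\mathcal W}+ \\|f'g+fg'\\|_{\\mathcal W}\\\\\n& \\le & \\|f\\|_{\\mathcal W} \\|g\\|_{\\mathcal W}+ \\|f'\\|_{\\mathcal W} \\|g\\|_{\\mathcal W}+ \\|f\\|_{\\mathcal W} \\|g'\\|_{\\mathcal W}\\\\\n& \\le & \\|f\\|_{{\\mathcal W}^1} \\|g\\|_{\\mathcal W}+\\|f\\|_{\\mathcal W}\\|g\\|_{{\\mathcal W}^1},\n\\end{eqnarray*}\nwhich is \\eqref{differentialnorm.def} with $\\theta=1$ and $D_0=1$.\n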
The classical Wiener's lemma can then be reformulated\nas the statement that $\\tilde {\\mathcal W}$ is an inverse-closed subalgebra of ${\\mathcal B}(\\ell^2)$,\nand an equivalent statement of Theorem \\ref{wienerdiff.theorem}\nis that ${\\tilde {\\mathcal W}}^1$ is a differential subalgebra of order one in $\\tilde {\\mathcal W}$.\nDue to the above equivalence, Question\n\\ref{question1} in the ``frequency domain\" becomes whether ${\\tilde {\\mathcal W}}^1$ is a differential subalgebra of order $\\theta\\in (0, 1]$ in ${\\mathcal B}(\\ell^2)$.\nIn \\cite{suncasp05}, the first example of a differential subalgebra of infinite matrices of order $\\theta\\in (0, 1)$\nwas discovered.\n\n\n\\begin{thm}\\label{W1.thm}\n${\\mathcal W}^1$ is a differential subalgebra of ${\\mathcal C}$ with order $2\/3$.\n\\end{thm}\n\n\n\n\nTo consider differential subalgebras of infinite matrices in the noncommutative setting, we introduce three noncommutative Banach algebras of\ninfinite matrices with certain off-diagonal decay.\nGiven $1\\le p\\le \\infty$ and $\\alpha\\ge 0$,\n we define\nthe Gr\\\"ochenig-Schur family of infinite matrices\nby\n \\begin{equation}\\label{GS.def}\n{\\mathcal A}_{p,\\alpha}=\\Big\\{ A=(a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal A}_{p,\\alpha}}<\\infty\\Big\\}\n\\end{equation}\n\\cite{gltams06, jaffard90, moteesun, schur11, suntams07, suncasp05},\nthe\nBaskakov-Gohberg-Sj\\\"ostrand family of infinite matrices by\n\\begin{equation}\\label{BGS.def}\n{\\mathcal C}_{p,\\alpha}=\\Big\\{ A= (a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal C}_{p,\\alpha}}<\\infty\\Big\\}\n\\end{equation}\n\\cite{baskakov90, gkwieot89, gltams06, sjostrand94,suntams07}, and\nthe Beurling family of infinite matrices\n\\begin{equation}\\label{Beurling.def}\n{\\mathcal B}_{p,\\alpha}=\\Big\\{ A= (a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal B}_{p,\\alpha}}<\\infty\\Big\\}\n\\end{equation}\n\\cite{beurling49, shinsun19, sunca11},\n where $u_\\alpha(i, j)=(1+|i-j|)^\\alpha, \\alpha\\ge 0$, are polynomial weights on $\\Z^2$,\n \\begin{equation}\\label{GSnorm.def}\n \\|A\\|_{{\\mathcal A}_{p,\\alpha}}\n = \\max \\Big\\{ \\sup_{i \\in \\Z} \\big\\|\\big(a(i,j) u_\\alpha(i, j)\\big)_{j\\in \\Z}\\big\\|_p, \\ \\ \\sup _{j \\in \\Z}\n \\big\\|\\big(a(i,j) u_\\alpha(i, j)\\big)_{i\\in \\Z}\\big\\|_p\n \\Big\\},\n\\end{equation}\n\\begin{equation}\\label{BKSnorm.def}\n\\|A\\|_{{\\mathcal C}_{p,\\alpha}} = \\Big\\| \\Big(\\sup_{i-j=k} |a(i,j)| u_\\alpha(i, j)\\Big)_{k\\in \\Z} \\Big\\|_p,\n\\end{equation}\nand\n\\begin{equation}\\label{Beurlingnorm.def}\n\\|A\\|_{{\\mathcal B}_{p,\\alpha}} = \\Big\\| \\Big(\\sup_{|i-j|\\ge |k| } |a(i,j)| u_\\alpha(i, j)\\Big)_{k\\in \\Z} \\Big\\|_p.\n\\end{equation}\nClearly, we have\n\\begin{equation}\\label{properinclusion}\n{\\mathcal B}_{p,\\alpha} \\subset {\\mathcal C}_{p,\\alpha} \\subset\n{\\mathcal A}_{p,\\alpha} \\ \\ {\\rm for \\ all}\\ 1\\le p\\le \\infty \\ {\\rm and} \\ \\alpha\\ge 0.\n\\end{equation}\nThe above inclusion is proper for $1\\le p<\\infty$, while\nthe above three families \nof infinite matrices coincide for $p=\\infty$,\n\\begin{equation}\\label{properinclusioninfinite}\n{\\mathcal B}_{\\infty,\\alpha}={\\mathcal C}_{\\infty,\\alpha}=\n{\\mathcal A}_{\\infty,\\alpha} \\ \\ {\\rm for \\ all} \\ \\alpha\\ge 0,\n\\end{equation}\nwhich is also known as the Jaffard family of infinite matrices \\cite{jaffard90},\n \\begin{equation}\\label{Jaffard.def}\n{\\mathcal J}_{\\alpha}=\\Big\\{ A= (a(i,j))_{i,j \\in \\Z}, \\ \\|A\\|_{{\\mathcal J}_{\\alpha}}=\\sup_{i,j\\in \\Z}|a(i,j)| 
u_\\alpha(i, j)<\\infty\\Big\\}.\n\\end{equation}\n\nObserve that\n$\\|A\\|_{{\\mathcal A}_{p,\\alpha}}=\\|A\\|_{{\\mathcal C}_{p,\\alpha}}$\nfor a Laurent matrix $A=(a(i-j))_{i,j\\in \\Z}$. Then the\nBanach algebras $\\tilde {\\mathcal W}$ and ${\\tilde {\\mathcal W}}^1$\nin \\eqref{tildew.def} and \\eqref{tildew1.def}\nare the commutative subalgebras of the Gr\\\"ochenig-Schur algebra ${\\mathcal A}_{1, \\alpha}$\nand the Baskakov-Gohberg-Sj\\\"ostrand algebra ${\\mathcal C}_{1, \\alpha}$ for $\\alpha=0$ and $\\alpha=1$ respectively,\n\\begin{equation}\\label{wa.re}\n\\tilde {\\mathcal W}= {\\mathcal A}_{1, 0}\\cap {\\mathcal L}={\\mathcal C}_{1, 0}\\cap {\\mathcal L}\n\\end{equation}\nand\n\\begin{equation}\\label{wa1.re}\n{\\tilde {\\mathcal W}}^1= {\\mathcal A}_{1, 1}\\cap {\\mathcal L}={\\mathcal C}_{1, 1}\\cap {\\mathcal L},\n\\end{equation}\nwhere ${\\mathcal L}$ is the set of all Laurent matrices $A=(a(i-j))_{i,j\\in \\Z}$.\nThe sets ${\\mathcal A}_{p, \\alpha}, {\\mathcal C}_{p,\\alpha}, {\\mathcal B}_{p,\\alpha}$\nwith $p=1$ and $\\alpha=0$ are noncommutative Banach algebras under matrix multiplication; the Baskakov-Gohberg-Sj\\\"ostrand algebra ${\\mathcal C}_{1,0}$ and the Beurling algebra ${\\mathcal B}_{1, 0}$ are inverse-closed subalgebras of ${\\mathcal B}(\\ell^2)$ \\cite{baskakov90, bochnerphillips42, gkwieot89, sjostrand94, sunca11}; however,\nthe Schur algebra ${\\mathcal A}_{1,0}$ is not inverse-closed in ${\\mathcal B}(\\ell^2)$ \\cite{tessera10}.\nWe remark that the inverse-closedness of the Baskakov-Gohberg-Sj\\\"ostrand algebra ${\\mathcal C}_{1,0}$\nin ${\\mathcal B}(\\ell^2)$ can be understood as a noncommutative extension of the classical Wiener's lemma for the\ncommutative subalgebra $\\tilde {\\mathcal W}$ of Laurent matrices in ${\\mathcal B}(\\ell^2)$.\n\n\n\n\n\nFor $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$, one may verify that\nthe Gr\\\"ochenig-Schur family ${\\mathcal A}_{p,\\alpha}$,\nthe Baskakov-Gohberg-Sj\\\"ostrand family ${\\mathcal C}_{p,\\alpha}$ and the Beurling family ${\\mathcal B}_{p, \\alpha}$\nof infinite matrices form Banach algebras under matrix multiplication\nand they are inverse-closed subalgebras of ${\\mathcal B}(\\ell^2)$ \\cite{gltams06, jaffard90, sunca11, suntams07, suncasp05}.\nIn \\cite{sunca11, suntams07, suncasp05}, their differentiability in ${\\mathcal B}(\\ell^2)$ is established.\n\n\\begin{thm} \\label{sundiff.thm}\nLet $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$. Then\n ${\\mathcal A}_{p,\\alpha}$,\n ${\\mathcal C}_{p,\\alpha}$ and ${\\mathcal B}_{p, \\alpha}$ are differential subalgebras of order\n $\\theta_0= (\\alpha+1\/p-1)\/(\\alpha+1\/p-1\/2)\\in (0, 1)$ in ${\\mathcal B}(\\ell^2)$.\n\\end{thm}\n\n\\begin{proof} The following argument establishes the differential subalgebra property for the Gr\\\"ochenig-Schur algebra ${\\mathcal A}_{p, \\alpha}, 1
\\tau_0}\\Big) |a(i,j)|\\nonumber\\\\\n & \\le & \\Big(\\sum_{ |j-i|\\le \\tau_0} |a(i,j)|^2\\Big)^{1\/2} \\Big(\\sum_{|j-i|\\le \\tau_0} 1\\Big)^{1\/2}\\nonumber\\\\\n & & + \\Big(\\sum_{|j-i|\\ge \\tau_0+1} |a(i,j)|^p (u_\\alpha(i, j))^p\\Big)^{1\/p} \\Big(\\sum_{|j-i|\\ge \\tau_0+1} (u_\\alpha(i, j))^{-p'} \\Big)^{1\/p'}\\nonumber\\\\\n & \\le & \\|A\\|_{{\\mathcal B}(\\ell^2)} (2\\tau_0+1)^{1\/2}+ 2^{1\/p'} (\\alpha p'-1)^{-1\/p'} \\|A\\|_{{\\mathcal A}_{p, \\alpha}}\n (\\tau_0+1)^{-\\alpha+1\/p'}\\nonumber\\\\\n & \\le & D\n\n \\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0},\n \n\n \\end{eqnarray}\n where $D$ is an absolute constant depending on $p, \\alpha$ only, and\n the last inequality follows from \\eqref{tau0.def} and the following estimate\n $$ \\|A\\|_{{\\mathcal B}(\\ell^2)}\\le \\|A\\|_{{\\mathcal A}_{1, 0}}\\le \\Big(\\sum_{k\\in \\Z} (|k|+1)^{-\\alpha p'}\\Big)^{1\/p'}\n \\|A\\|_{{\\mathcal A}_{p, \\alpha}}\\le \\Big(\\frac{\\alpha p'+1}{\\alpha p'-1}\\Big)^{1\/p'}\\|A\\|_{{\\mathcal A}_{p, \\alpha}}.$$\n \n Similarly we can prove that\n \\begin{equation}\n \\label{para2.eq}\n \\sup_{j\\in \\Z} \\sum_{i\\in \\Z} |a(i,j)| \\le D\n\n \\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}.\\end{equation}\n Combining \\eqref{para1.eq} and \\eqref{para2.eq} leads to\n \\begin{equation}\n \\label{para3.eq}\n \\|A\\|_{{\\mathcal A}_{1, 0}} \\le D\n\n \\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}.\\end{equation}\nReplacing the matrix $A$ in \\eqref{para3.eq} by the matrix $B$ gives\n \\begin{equation}\n \\label{para4.eq}\n \\|B\\|_{{\\mathcal A}_{1, 0}} \\le D\n\n \\|B\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|B\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}.\\end{equation}\nTherefore it follows from \\eqref{sundiff.thm.pf.eq1}, \\eqref{para3.eq} and \\eqref{para4.eq} that\n\\begin{equation}\n\\|C\\|_{{\\mathcal A}_{p, \\alpha}}\\le 2^\\alpha D\n \\|A\\|_{{\\mathcal A}_{p, \\alpha}}\\|B\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|B\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0}\n+2^\\alpha D\n \\|B\\|_{{\\mathcal A}_{p, \\alpha}}\\|A\\|_{{\\mathcal A}_{p, \\alpha}}^{1-\\theta_0} \\|A\\|_{{\\mathcal B}(\\ell^2)}^{\\theta_0},\n\\end{equation}\nwhich proves the differential subalgebra property for Banach algebras ${\\mathcal A}_{p, \\alpha}$ with $1
\\le p\\le \\infty$ and $\\alpha>1-1\/p$.\n\\end{proof}\n\nThe argument used in the proof of Theorem \\ref{sundiff.thm} involves a triplet of Banach algebras\n${\\mathcal A}_{p, \\alpha}$, ${\\mathcal A}_{1, 0}$ and ${\\mathcal B}(\\ell^2)$ satisfying \\eqref{sundiff.thm.pf.eq1} and\n\\eqref{para3.eq}.\nIn the following theorem, we extend the above observation to\n general Banach algebra triplets $({\\mathcal A}, {\\mathcal M}, {\\mathcal B})$.\n\n\\begin{thm}\\label{triple1.thm}\n Let ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal M}$\nand ${\\mathcal M}$ is a Banach subalgebra of ${\\mathcal B}$.\nIf there exist positive exponents $\\theta_0, \\theta_1\\in (0, 1]$ and absolute constants $D_0, D_1$ such that\n\\begin{equation}\\label{triple1.thm.eq1}\n\\|AB\\|_{\\mathcal A}\\le D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Big (\\Big(\\frac{\\|A\\|_{\\mathcal M}}{\\|A\\|_{\\mathcal A}}\\Big)^{\\theta_0} +\n\\Big(\\frac{\\|B\\|_{\\mathcal M}}{\\|B\\|_{\\mathcal A}}\\Big)^{\\theta_0}\n\\Big)\n\\quad {\\rm for \\ all} \\ A, B \\in {\\mathcal A},\n\\end{equation}\n and\n\\begin{equation}\\label{triple1.thm.eq2}\n\\|A\\|_{\\mathcal M}\\le D_1 \\|A\\|_{\\mathcal A}^{1-\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_1} \\ \\ {\\rm for \\ all} \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nthen\n${\\mathcal A}$ is a differential subalgebra of order $\\theta_0\\theta_1$ in ${\\mathcal B}$.\n\\end{thm}\n\n\\begin{proof} For any $A, B\\in {\\mathcal A}$, we obtain from \\eqref{triple1.thm.eq1} and\n\\eqref{triple1.thm.eq2} that\n\\begin{eqnarray*}\n\\|AB\\|_{\\mathcal A} & \\le & D_0\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Bigg (\\Big(\\frac{D_1 \\|A\\|_{\\mathcal A}^{1-\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_1}}{\\|A\\|_{\\mathcal A}}\\Big)^{\\theta_0} +\n\\Big(\\frac{D_1 \\|B\\|_{\\mathcal A}^{1-\\theta_1} \\|B\\|_{\\mathcal B}^{\\theta_1}}{\\|B\\|_{\\mathcal A}}\\Big)^{\\theta_0}\n\\Bigg)\\\\\n & \\le & D_0 D_1^{\\theta_0}\\|A\\|_{\\mathcal A} \\|B\\|_{\\mathcal A} \\Big (\\Big(\\frac{\\|A\\|_{\\mathcal B}}{\\|A\\|_{\\mathcal A}}\\Big)^{\\theta_0\\theta_1} +\n\\Big(\\frac{\\|B\\|_{\\mathcal B}}{\\|B\\|_{\\mathcal A}}\\Big)^{\\theta_0\\theta_1}\n\\Big),\n\\end{eqnarray*}\nwhich completes the proof.\n\\end{proof}\n\n\nFollowing the argument used in \\eqref{Cab.eq}, we can show that $C^2[a, b]$ is a differential subalgebra of $C^1[a, b]$.\nFor any distinct $x, y\\in [a, b]$ and $f\\in C^2[a, b]$, observe that\n$$\n|f'(x)|= \\frac{|f(y)-f(x)-f''(\\xi) (y-x)^2\/2 |}{|y-x|} \\le 2\\|f\\|_{C[a, b]} |y-x|^{-1}+ \\frac{1}{2} \\|f^{\\prime\\prime}\\|_{C[a, b]}\n|y-x|\n$$\nfor some $\\xi\\in [a, b]$, which implies that\n\\begin{equation}\n\\|f'\\|_{C[a, b]}\\le \\max\\big (4 \\|f\\|_{C[a, b]}^{1\/2} \\|f^{\\prime\\prime} \\|_{C[a, b]}^{1\/2}, 8 (b-a)^{-1} \\|f\\|_{C[a, b]}\\big).\n\\end{equation}\nTherefore there exists a positive constant $D$ such that\n\\begin{equation}\n\\|f\\|_{C^1[a,b]}\\le D \\|f\\|_{C^2[a, b]}^{1\/2} \\|f\\|_{C[a, b]}^{1\/2} \\ \\ {\\rm for \\ all} \\ \\ f\\in C^2[a, b].\n\\end{equation}\nAs an application of Theorem \\ref{triple1.thm}, we conclude that\n$C^2[a, b]$ is a differential subalgebra of order $1\/2$ in $C[a, b]$.\n\n\\smallskip\n\nWe finish the section with the proof of Theorem \\ref{W1.thm}.\n\n\\begin{proof}[Proof of Theorem \\ref{W1.thm}]\nThe conclusion follows from \\eqref{wa1.re} and Theorem \\ref{sundiff.thm} with $p=1$ and $\\alpha=1$.\n\\end{proof}\n\n\n\\section{Generalized differential 
subalgebras}\\label{generalizedDS.section}\n\n\nBy \\eqref{differentialnorm.def}, a differential subalgebra ${\\mathcal A}$ satisfies Brandenburg's requirement:\n\\begin{equation}\\label{bt.req}\n\\|A^2\\|_{\\mathcal A}\\le 2D_0 \\|A\\|_{\\mathcal A}^{2-\\theta} \\|A\\|_{\\mathcal B}^{\\theta}, \\ A\\in {\\mathcal A}.\n\\end{equation}\nTo consider the norm-controlled\ninversion of a Banach subalgebra ${\\mathcal A}$ of ${\\mathcal B}$,\nthe above requirement \\eqref{bt.req} could be relaxed to the existence of an integer $m\\ge 2$ such that\nthe $m$-th powers of elements in ${\\mathcal A}$ satisfy\n\\begin{equation}\\label{weakpower}\n\\|A^m\\|_{\\mathcal A} \\le D \\|A\\|_{\\mathcal A}^{m-\\theta} \\|A\\|_{\\mathcal B}^{\\theta}, \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nwhere $\\theta\\in (0, m-1]$ and $D=D({\\mathcal A}, {\\mathcal B}, m, \\theta)$ is an absolute positive constant; see Theorem \\ref{main-thm1} in the next section.\nFor $h\\in C^1[a, b]$ and $m\\ge 2$, we have\n \\begin{equation*}\n\\|h^m \\|_{C^1[a,b]} = m \\|h^{m-1} h'\\|_{C[a, b]}+ \\|h^m\\|_{C[a, b]}\n\\le m \\|h\\|_{C^1[a, b]} \\|h\\|_{C[a, b]}^{m-1},\n\\end{equation*}\nand hence the differential subalgebra $C^1[a, b]$ of $C[a, b]$ satisfies\n\\eqref{weakpower} with $\\theta=m-1$.\nIn this section, we introduce some sufficient conditions\nso that\n\\eqref{weakpower} holds for some integer $m\\ge 2$.\n\n\n\n\n\\begin{thm}\\label{triplenew.thm} Let ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal M}$\nand ${\\mathcal M}$ is a Banach subalgebra of ${\\mathcal B}$.\nIf there exist an integer $k\\ge 2$, positive exponents $\\theta_0, \\theta_1$, and absolute constants $E_0, E_1$\nsuch that\n\\begin{equation}\\label{triplenew.eq1}\n\\|A_1A_2\\cdots A_k\\|_{\\mathcal A} \\le E_0\n\\Big(\\prod_{i=1}^k\\|A_i\\|_{{\\mathcal A}} \\Big) \\sum_{j=1}^k \\Big(\\frac{\\|A_j\\|_{\\mathcal M}}{\\|A_j\\|_{\\mathcal A}}\\Big)^{\\theta_0}, \\ \\ A_1, \\ldots, A_k \\in {\\mathcal A}\n\\end{equation}\nand\n\\begin{equation}\\label{triplenew.eq2}\n\\|A^2\\|_{\\mathcal M}\\le E_1 \\|A\\|_{\\mathcal A}^{2-\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_1},\\ A\\in {\\mathcal A},\n\\end{equation}\nthen \\eqref{weakpower} holds for $m=2k$ and $\\theta=\\theta_0\\theta_1$.\n\\end{thm}\n\n\n\\begin{proof} By \\eqref{banachalgebra.def},\n\\eqref{triplenew.eq1} and \\eqref{triplenew.eq2}, we have\n\\begin{equation}\n\\|A^{2k}\\|_{\\mathcal A} \\le k E_0\n\\|A^2\\|_{{\\mathcal A}}^{k-\\theta_0} \\|A^2\\|_{\\mathcal M}^{\\theta_0}\n\\le k E_0 E_1^{\\theta_0} K^{k-\\theta_0} \\|A\\|_{{\\mathcal A}}^{2k-\\theta_0\\theta_1} \\|A\\|_{\\mathcal B}^{\\theta_0\\theta_1}, \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nwhich completes the proof.\n\\end{proof}\n\n\nFor a Banach algebra triplet $({\\mathcal A}, {\\mathcal M}, {\\mathcal B})$ in Theorem \\ref{triple1.thm}, we obtain from\n\\eqref{triple1.thm.eq1} and \\eqref{triple1.thm.eq2} that\n\\begin{eqnarray}\\label{triple1.eq3}\n\\|A_1A_2\\cdots A_k\\|_{\\mathcal A} & \\le & D_0\n\\|A_1\\|_{{\\mathcal A}} \\|A_2\\cdots A_k\\|_{{\\mathcal A}}\n\\Bigg (\\Big(\\frac{\\|A_1\\|_{\\mathcal M}}{\\|A_1\\|_{\\mathcal A}}\\Big)^{\\theta_0} +\n\\Big(\\frac{\\|A_2\\cdots A_k\\|_{\\mathcal M}}{\\|A_2\\cdots A_k\\|_{\\mathcal A}}\\Big)^{\\theta_0}\n\\Bigg)\\nonumber\\\\\n& \\le & \\tilde D_0\n\\Big(\\prod_{i=1}^k\\|A_i\\|_{{\\mathcal A}} \\Big)\n\\sum_{j=1}^k \\Big(\\frac{\\|A_j\\|_{\\mathcal M}}{\\|A_j\\|_{\\mathcal 
A}}\\Big)^{\\theta_0}, \\ \\ A_1, \\ldots, A_k\\in {\\mathcal A},\n\\end{eqnarray}\nand\n\\begin{equation}\\label{triple1.eq4}\n\\|A^2\\|_{\\mathcal M}\\le \\tilde K \\|A\\|_{\\mathcal M}^2\\le\nD_1^2 \\tilde K \\| A\\|_{\\mathcal A}^{2-2\\theta_1}\\|A\\|_{\\mathcal B}^{2\\theta_1}, \\ \\ A\\in {\\mathcal A},\n\\end{equation}\nwhere $\\tilde D_0$ is an absolute constant and $\\tilde K$ is the constant $K$ in\n\\eqref{banachalgebra.def} for the Banach algebra ${\\mathcal M}$.\nTherefore the assumptions \\eqref{triplenew.eq1} and \\eqref{triplenew.eq2}\nin Theorem \\ref{triplenew.thm} are satisfied\nfor the Banach algebra triplet $({\\mathcal A}, {\\mathcal M}, {\\mathcal B})$ in Theorem \\ref{triple1.thm}.\n\nFor a differential subalgebra ${\\mathcal A}$ of order $\\theta_0$ in ${\\mathcal B}$,\nwe observe that the requirements\n\\eqref{triplenew.eq1} and \\eqref{triplenew.eq2} with ${\\mathcal M}={\\mathcal B}$, $k=2$ and $\\theta_1=2$ are met,\n and hence\n \\eqref{weakpower} holds for $m=4$ and $\\theta=2\\theta_0$.\nRecall that ${\\mathcal B}$ is a trivial differential subalgebra of ${\\mathcal B}$.\nIn the following corollary, we can extend the above conclusion to arbitrary differential subalgebras ${\\mathcal M}$ of ${\\mathcal B}$.\n\n\\begin{cor}\nLet ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a differential subalgebra of order $\\theta_0$ in ${\\mathcal M}$\nand ${\\mathcal M}$ is a differential subalgebra of order $\\theta_1$ in ${\\mathcal B}$.\nThen \\eqref{weakpower} holds for $m=4$ and $\\theta=\\theta_0\\theta_1$.\n\\end{cor}\n\n\n\n\nFollowing the argument used in the proof of Theorem \\ref{triplenew.thm}, we can show that\n\\eqref{weakpower} holds for $m=4$ if the requirement \\eqref{triplenew.eq1} with $k=3$ is replaced by the following strong version\n\\begin{equation}\\label{triplenew.eq3}\n\\|ABC\\|_{\\mathcal A} \\le E_0\n\\|A\\|_{{\\mathcal A}} \\|C\\|_{{\\mathcal A}} \\|B\\|_{{\\mathcal A}}^{1-\\theta_0}\n\\|B\\|_{\\mathcal M}^{\\theta_0}, \\ \\ A, B, C \\in {\\mathcal A}.\n\\end{equation}\n\n\n\n\\begin{thm}\\label{triplenew.thm2}\nLet ${\\mathcal A}, {\\mathcal M}$ and\n${\\mathcal B}$ be Banach algebras such that ${\\mathcal A}$ is a Banach subalgebra of ${\\mathcal M}$\nand ${\\mathcal M}$ is a Banach subalgebra of ${\\mathcal B}$.\nIf there exist positive exponents $\\theta_0, \\theta_1\\in (0, 1]$ and absolute constants $E_0, E_1$\nsuch that \\eqref{triplenew.eq2} and \\eqref{triplenew.eq3} hold,\nthen \\eqref{weakpower} holds for $m=4$ and $\\theta=\\theta_0\\theta_1$.\n\\end{thm}\n\nLet $L^p:=L^p(\\R), 1\\le p\\le \\infty$, be the space of all $p$-integrable functions on $\\R$ with standard norm $\\|\\cdot\\|_p$,\nand ${\\mathcal B}(L^p)$ be the algebra of bounded linear operators on\n$L^p$ with the norm $\\|\\cdot \\|_{{\\mathcal B}(L^p)}$.\nFor $1\\le p\\le \\infty, \\alpha\\ge 0$ and $\\gamma\\in [0, 1)$, we define the norm\nof a kernel $K$ on $\\R\\times \\R$ by\n\\begin{equation}\\label{Ex3-def-norm}\n\\|K\\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}=\n\\left\\{\n\\begin{array}{ll}\n \\max\\Big(\\sup_{x\\in \\R} \\big\\|K(x,\\cdot)u_\\alpha(x,\\cdot)\\big\\|_p,\\\n\\sup_{y\\in \\R} \\big\\|K(\\cdot,y)u_\\alpha(\\cdot,y)\\big\\|_p\\Big) & {\\rm if }\\ \\gamma =0\n\\\\\n\\|K\\|_{{\\mathcal W}_{p,\\alpha}^0}+\\sup_{0<\\delta\\le 1} \\delta^{-\\gamma}\n\\|\\omega_\\delta(K)\\|_{{\\mathcal W}_{p,\\alpha}^0} & {\\rm if } \\ 0 < \\gamma <1,\n\\end{array}\n\\right.\n\\end{equation}\nwhere the modulus of 
continuity\nof the kernel $K$ is defined by\n\\begin{equation}\\label{Ex3-def-mod}\n\\omega_\\delta(K)(x,y):=\\sup_{|x^\\prime|\\le \\delta, |y^\\prime|\\le \\delta}\n|K(x+x^\\prime, y+y^\\prime)-K(x,y)|, \\ x, y\\in \\RR,\n\\end{equation}\nand $u_\\alpha(x, y)= (1+|x-y|)^\\alpha, x, y\\in \\R$ are polynomial weights on $\\R\\times \\R$.\nConsider the set ${\\mathcal W}_{p,\\alpha}^\\gamma$ of integral operators\n\\begin{equation}\\label{Ex3-def-int-oper}\nTf(x)=\\int_{{\\R}} K_T(x,y) f(y) dy, \\quad f \\in L^p,\n\\end{equation}\nwhose integral kernels $K_T$ satisfy $\\|K_T\\|_{{\\mathcal W}^\\gamma_{p, \\alpha}}<\\infty$, and define\n$$\n\\|T\\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}:=\n\\|K_T \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}, \\ T\\in {\\mathcal W}_{p,\\alpha}^\\gamma.\n$$\nIntegral operators in ${\\mathcal W}_{p, \\alpha}^\\gamma$ have kernels that are H\\\"older continuous of order $\\gamma$\n and have off-diagonal polynomial decay of order $\\alpha$.\nFor $1\\le p\\le \\infty$ and $\\alpha>1-1\/p$, one may verify that\n ${\\mathcal W}_{p, \\alpha}^\\gamma, 0\\le \\gamma<1$, are Banach subalgebras of\n ${\\mathcal B}(L^2)$ under operator composition.\n The Banach algebras ${\\mathcal W}_{p, \\alpha}^\\gamma, 0<\\gamma<1$, of integral operators\n may not form differential subalgebras of ${\\mathcal B}(L^2)$; however, the triplet\n$({\\mathcal W}_{p, \\alpha}^\\gamma, {\\mathcal W}_{p, \\alpha}^0, {\\mathcal B}(L^2))$ is proved in\n \\cite{sunacha08} to satisfy the following:\n\\begin{equation}\\label{Ex3-norm-comp}\n\\|T_0\\|_{{\\mathcal B}(L^2)} \\le D \\|T_0\\|_{{\\mathcal W}_{p,\\alpha}^0} \\le\nD\\|T_0\\|_{{\\mathcal W}_{p,\\alpha}^\\gamma},\n\\end{equation}\n\\begin{equation}\\label{Ex3-norm-2-theta}\n\\|T_0^2 \\|_{{\\mathcal W}_{p,\\alpha}^0} \\le D\n\\|T_0 \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}^{1+\\theta} \\|T_0 \\|_{{\\mathcal B}(L^2)}^{1-\\theta}\n\\end{equation}\nand\n\\begin{equation}\\label{Ex3-norm-3-product}\n\\|T_1 T_2 T_3 \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma} \\le D\n\\|T_1 \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma} \\| T_2 \\|_{{\\mathcal W}_{p,\\alpha}^0}\n\\| T_3 \\|_{{\\mathcal W}_{p,\\alpha}^\\gamma}\n\\end{equation}\nholds for all $T_i\\in {\\mathcal W}_{p, \\alpha}^\\gamma, 0\\le i\\le 3$, where $D$ is an absolute constant and\n$$\\theta= \\frac{\\alpha+\\gamma+1\/p}{(1+\\gamma)(\\alpha+1\/p)}.$$\nThen the requirements \\eqref{triplenew.eq2} and \\eqref{triplenew.eq3} in Theorem \\ref{triplenew.thm2}\nare met for the triplet $({\\mathcal W}_{p, \\alpha}^\\gamma, {\\mathcal W}_{p, \\alpha}^0, {\\mathcal B}(L^2))$,\nand hence\nthe Banach space pair $({\\mathcal W}_{p, \\alpha}^\\gamma, {\\mathcal B}(L^2))$ satisfies Brandenburg's condition \\eqref{weakpower} with $m=4$\n\\cite{fangshinsun13, sunacha08}.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Brandenburg trick and norm-controlled inversion}\\label{normcontrolledinversion.section}\n\n\nLet\n$\\mathcal{A}$ and $\\mathcal{B}$ be $*$-algebras with common identity and involution, and let\n $\\mathcal{B}$ be symmetric. 
In this section, we show that\n ${\\mathcal A}$ has norm-controlled inversion in ${\\mathcal B}$ if it meets\n the Brandenburg requirement \\eqref{weakpower}.\n\n\n\n\n\n\\begin{thm}\\label{main-thm1}\nLet ${\\mathcal B}$ be a symmetric $*$-algebra with its norm\n$\\|\\cdot\\|_{\\mathcal B}$\nbeing normalized in the sense that \\eqref{banachalgebra.def} holds with $K=1$,\n \\begin{equation}\\label{banachalgebra.defnew2}\n\\|\\tilde A\\tilde B\\|_{\\mathcal B}\\le \\|\\tilde A\\|_{\\mathcal B}\\|\\tilde B\\|_{\\mathcal B},\\ \\tilde A, \\tilde B\\in {\\mathcal B},\n\\end{equation}\nand\n$\\mathcal{A}$ be a $*$-algebra with its\nnorm\n$\\|\\cdot\\|_{\\mathcal A}$\nbeing normalized too,\n\\begin{equation}\\label{banachalgebra.defnew1}\n\\|AB\\|_{\\mathcal A}\\le \\|A\\|_{\\mathcal A}\\|B\\|_{\\mathcal A}, \\ A, B\\in {\\mathcal A}.\n\\end{equation}\nIf ${\\mathcal A}$ is a $*$-subalgebra of ${\\mathcal B}$ with common identity $I$ and involution $*$, and\nthe pair $({\\mathcal A}, {\\mathcal B})$ satisfies\nthe Brandenburg requirement \\eqref{weakpower}, then\n${\\mathcal A}$ has norm-controlled inversion in ${\\mathcal B}$. Moreover,\nfor any $A\\in{\\mathcal A}$ being invertible in ${\\mathcal B}$ we have\n\\begin{eqnarray}\\label{norm-control1}\n\\|A^{-1}\\|_{\\mathcal A}\n& \\le &\n\\|A^* A\\|_{\\mathcal B}^{-1} \\|A^*\\|_{\\mathcal A}\\nonumber\\\\\n& & \\times\n\\left\\{\\begin{array}{ll}\n \\big(2t_0+(1- 2^{\\log_m (1-\\theta\/m)})^{-1} (\\ln a)^{-1}\\big) a\n \\exp\\Big(\\frac{\\ln m-\\ln (m-\\theta)} {\\ln (m-\\theta)} t_0 \\ln a\\Big)\n & {\\rm if} \\ \\theta 0$. Let $\\mathbb X $ be a metric measure space with a polar decomposition as in \\eqref{EQ:polar}. Let $u,v >0$ be measurable functions positive a.e. such that\n$u\\in L^1_{loc}(\\mathbb X)$ and $v^{1-p'}\\in L^1(\\mathbb X\\backslash \\{a\\})$. Let\n\\begin{align}\nU(x)= {\\int_{{B(a,\\vert x \\vert_a )}} u(y) dy} \\nonumber\n\\end{align} \nand \\begin{align} \nV(x)= \\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a )}v^{1-p'}(y)dy\\nonumber. 
\n\\end{align}\nThen the inequality\n\\begin{align}\\label{EQ:Hardycon}\n\\bigg(\\int_\\mathbb X\\bigg(\\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}\\vert f(y) \\vert dy\\bigg)^q u(x)dx\\bigg)^\\frac{1}{q}\\le C\\bigg\\{\\int_{\\mathbb X} {\\vert f(x) \\vert}^p v(x)dx\\bigg\\}^\\frac{1}{p}\n\\end{align}\nholds for all measurable functions $f$ if and only if any of the following equivalent conditions holds:\n\\begin{enumerate}\n\\item $\\mathcal D_{1}^{*} :=\\sup_{x\\not=a} \\bigg\\{U^\\frac{1}{q}(x) V^\\frac{1}{p'}(x)\\bigg\\}<\\infty$.\n\\item $\\mathcal D_{2}^{*}:=\\sup_{x\\not=a} \\bigg\\{\\int_{{B(a,\\vert x \\vert_a )}}u(y)V^{q(\\frac{1}{p'}-s)}(y)dy\\bigg\\}^\\frac{1}{q}V^s(x)<\\infty.$\n\n\\item $\\mathcal D_{3}^{*}:=\\sup_{x\\not=a}\\bigg\\{\\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}u(y)V^{q(\\frac{1}{p'}+s)}(y)dy\\bigg\\}^{\\frac{1}{q}}V^{-s}(x)<\\infty,$\nprovided that $u,v^{1-p'}\\in L^1(\\mathbb X).$\n\\item $\\mathcal D_{4}^{*}:=\\sup_{x\\not=a}\\bigg\\{\\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}v^{1-p'}(y) U^{p'(\\frac{1}{q}-s)}(y)dy\\bigg\\}^\\frac{1}{p'}U^s(x)<\\infty.$\n\\item $\\mathcal D_{5}^{*}:=\\sup_{x\\not=a}\\bigg\\{\\int_{{B(a,\\vert x \\vert_a )}}v^{1-p'}(t)U^{p'(\\frac{1}{q}+s)}(t)dt\\bigg\\}^\\frac{1}{p'}U^{-s}(x)<\\infty,$\nprovided that $u,v^{1-p'}\\in L^1(\\mathbb X).$\n\\end{enumerate}\n\\end{thm}\n\n\n \n\\section{Applications and examples}\n\nIn this section we will give examples of the application of Theorem \\ref{THM:Hardy1} in the settings of homogeneous groups, hyperbolic spaces, and Cartan-Hadamard manifolds.\n\n\\subsection{Homogeneous groups}\n\nLet $\\mathbb G$ be a homogeneous group of homogeneous dimension $Q$, equipped with a quasi-norm $|\\cdot|$.\nFor the general description of the setup of homogeneous groups we refer to \\cite{FS-Hardy} or \\cite{FR}. Particular examples of homogeneous groups are the Euclidean space ${\\mathbb R}^n$ (in which case $Q=n$), the Heisenberg group, as well as general stratified groups (homogeneous Carnot groups) and graded groups.\n\nIn relation to the notation of this paper, let us take $a=0$ to be the identity of the group $\\mathbb G$. 
We can also simplify the notation by denoting $\\vert x \\vert_a$ by $\\vert x \\vert$, which is consistent with the notation for the quasi-norm $|\\cdot|.$\n\nIf we take power weights \n$$u(x)= {\\vert x \\vert}^\\alpha \\textrm{ and } v(x)= {\\vert x \\vert }^\\beta,$$\nthen the inequality \\eqref{EQ:Hardy1} holds for $1< p \\le q <\\infty$ if and only if \n$$\n\\mathcal D_1=\\sup_{r>0} \\bigg(\\displaystyle \\sigma\\int_{r}^\\infty {\\rho}^\\alpha {\\rho}^{Q-1}d\\rho \\bigg)^\\frac{1}{q} \\bigg(\\displaystyle \\sigma \\int_{0}^r{\\rho}^{\\beta(1-p')} {\\rho}^{Q-1}d\\rho\\bigg)^\\frac{1}{p'}<\\infty,\n$$\nwhere $\\sigma$ is the area of the unit sphere in $\\mathbb G$ with respect to the quasi-norm $|\\cdot|.$ For this supremum to be well-defined we need to have $\\alpha+Q<0$\n and $\\beta(1-p')+Q>0.$\nConsequently, we have\n\\begin{multline*}\n\\mathcal D_1=\\sigma^{(\\frac1q+\\frac{1}{p'})}\\sup_{r>0}\n\\bigg(\\displaystyle\\int_{r}^\\infty{\\rho}^{\\alpha+Q-1}d\\rho\\bigg)^\\frac{1}{q}\\bigg(\\displaystyle\\int_0^{r} {\\rho}^{\\beta(1-p')+Q-1}d\\rho \\bigg)^\\frac{1}{p'} \\\\\n= \\sigma^{(\\frac1q+\\frac{1}{p'})}\n\\sup_{r>0} \\frac {{r}^ \\frac{\\alpha+Q}{q}}{{|\\alpha+Q|}^\\frac{1}{q}}\n\\frac{ {r}^\\frac{\\beta(1-p')+Q} {p'}}{{(\\beta(1-p')+Q)}^\\frac{1}{p'}},\n\\end{multline*} \nwhich is finite if and only if the power of $r$ is zero.\nSummarising, we obtain\n\n\\begin{cor}\\label{COR:hom}\nLet $\\mathbb G$ be a homogeneous group of homogeneous dimension $Q$, equipped with a quasi-norm $|\\cdot|$. Let $1 0$ and \n$\\frac{\\alpha+Q}{q}+\\frac{\\beta(1-p')+Q} {p'}=0.$\nMoreover, the constant $C$ for \\eqref{EQ:Hardy1hg} satisfies\n\\begin{equation}\\label{EQ:constants-hg}\n\\frac{\\sigma^{\\frac1q+\\frac{1}{p'}}}{|\\alpha+Q|^\\frac{1}{q}(\\beta(1-p')+Q)^\\frac{1}{p'}}\n\\leq C\\leq (p')^{\\frac{1}{p'}} p^\\frac{1}{q}\\frac{\\sigma^{\\frac1q+\\frac{1}{p'}}}{|\\alpha+Q|^\\frac{1}{q}(\\beta(1-p')+Q)^\\frac{1}{p'}},\n\\end{equation} \nwhere $\\sigma$ is the area of the unit sphere in $\\mathbb G$ with respect to the quasi-norm $|\\cdot|.$\n\\end{cor} \n\n \\subsection{Hyperbolic spaces} \n \n Let $\\mathbb H^n$ be the hyperbolic space of dimension $n$ and let $a\\in \\mathbb H^n$.\n Let us take the weights \n $$u(x)= (\\sinh {\\vert x \\vert_a})^\\alpha \\textrm{ and } v(x)= (\\sinh {\\vert x \\vert_a})^\\beta.$$ \nThen, passing to polar coordinates, $\\mathcal D_1$ is equivalent to\n \n $$\\mathcal D_1\\simeq \\sup_{\\vert x \\vert_a>0}\\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty(\\sinh{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q}\\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} (\\sinh{\\rho})^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}.$$\n For the integrability of the first and the second terms we need, respectively,\n$\\alpha+n-1<0$ and $ \\beta(1-p')+n>0.$\n\nLet us now analyse conditions for this supremum to be finite. 
\nFor $\\vert x \\vert_a \\gg1 $, it can be written as\n\n\\begin{multline*}\n\\sup_{\\vert x \\vert_a \\gg1} \\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty (\\exp{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q} \\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} (\\exp{\\rho})^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}\\\\\n\\simeq \\sup_{\\vert x \\vert_a \\gg1} (\\exp {\\vert x \\vert_a})^{\\bigg(\\frac{\\alpha+n-1}{q}+\\frac{\\beta(1-p')+n-1}{p'} \\bigg)},\n\\end{multline*} \nwhich is finite if and only if $\\frac{\\alpha+n-1}{q}+\\frac{\\beta(1-p')+n-1}{p'}\\le0.$\nFor $\\vert x \\vert_a\\ll 1 $, it can be written as\n\n$\\sup_{\\vert x \\vert_a\\ll 1} \\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty(\\sinh{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q} \\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} {\\rho}^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}$\n\n$\\simeq \\sup_{\\vert x \\vert_a\\ll 1} \\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^ R (\\sinh{\\rho})^{\\alpha+n-1}d\\rho +\\displaystyle\\int_{R}^\\infty(\\sinh{\\rho})^{\\alpha+n-1}d\\rho\\bigg)^\\frac{1}{q} \\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} {\\rho}^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}.$\n\nFor some small $R$ we have ${\\sinh{\\rho} }_{\\vert x \\vert_a\\le \\rho 0} \\bigg(\\displaystyle\\int_{M \\backslash B(a,\\vert x \\vert_a)}\\vert y \\vert_a^\\alpha dy \\bigg)^\\frac{1}{q} \\bigg(\\displaystyle \\int_{B(a,\\vert x \\vert_a)}\\vert y \\vert_a^{\\beta(1-p')} dy\\bigg)^\\frac{1}{p'}<\\infty$.\\\\\n\nAfter changing to polar coordinates, this is equivalent to \n\n$\\sup_{\\vert x \\vert_a>0}\\bigg(\\int_{\\vert x \\vert_a }^\\infty {\\rho}^{\\alpha+n-1} d\\rho \\bigg)^\\frac{1}{q}\\bigg(\\int_0^{\\vert x \\vert_a} {\\rho}^{\\beta(1-p')+n-1}d\\rho \\bigg)^\\frac{1}{p'}$, \\\\\nwhich is finite if and only if the conditions of Corollary \\ref{COR:hom} hold with $Q=n$ (which is natural since the curvature is zero).\n\nWhen $b>0$, let us take $u(x)=(\\sinh \\sqrt{b}{\\vert x \\vert_a})^\\alpha$ and $v(x)=(\\sinh \\sqrt{b}{\\vert x \\vert_a})^\\beta$. Then the inequality \\eqref{EQ:Hardy1} holds for $1 0} \\bigg(\\displaystyle\\int_{\\mathbb M \\backslash B(a,\\vert x \\vert_a)} (\\sinh \\sqrt{b}{\\vert y \\vert_a})^{\\alpha} dy \\bigg)^\\frac{1}{q} \\bigg(\\displaystyle \\int_{B(a,\\vert x \\vert_a)}(\\sinh \\sqrt{b}{\\vert y \\vert_a})^{\\beta(1-p')}dy \\bigg)^\\frac{1}{p'}<\\infty.$\\\\\n\nAfter changing to polar coordinates, this supremum is equivalent to \n\n$\\sup_{\\vert x \\vert_a>0}(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty (\\sinh \\sqrt{b}{t})^\\alpha (\\frac{\\sinh \\sqrt{b} t}{\\sqrt{b} t})^{n-1} t^{n-1}dt )^\\frac{1}{q}$\\\\\n$\\quad\\times(\\displaystyle \\int_{0}^ {\\vert x \\vert_a} (\\sinh \\sqrt{b}{t})^{\\beta(1-p')}(\\frac{\\sinh \\sqrt{b} t}{\\sqrt{b} t})^{n-1} t^{n-1} dt )^\\frac{1}{p'}$\\\\\n\n\n$\\simeq\\sup_{\\vert x \\vert_a>0}\\bigg(\\displaystyle\\int_{\\vert x \\vert_a}^\\infty (\\sinh \\sqrt{b}{t})^{\\alpha+n-1}dt \\bigg)^\\frac{1}{q}\\bigg(\\displaystyle\\int_0^{\\vert x \\vert_a} (\\sinh \\sqrt{b} t)^{\\beta(1-p')+n-1}dt \\bigg)^\\frac{1}{p'}, $\\\\\nwhich has the same conditions for finiteness as the case of the hyperbolic space in Corollary \\ref{COR:hyp} (which is also natural since it is the negative constant curvature case).\n\n\n\n\n\n\n \n\n \n \\section{Equivalence of weight conditions}\n \n In this section we prove that the quantities $\\mathcal D_{1}$--$\\mathcal D_{5}$\ninvolving the weights in Theorem \\ref{THM:Hardy1} are equivalent. 
However, it will be convenient to formulate it in the following slightly more general form:\n \n \\begin{thm}\\label{THM:equivalence}\n Let $\\alpha , \\beta, s >0$ and let\n $f \\in L^{1}(\\mathbb X\\backslash \\{a\\})$ and $g \\in L^{1}_{loc} (\\mathbb X)$ be such that $f, g>0$ a.e.\\ in $\\mathbb X$. Let us denote\n \\begin{align}\n F(x):= \\int_{\\mathbb X\\backslash B(a,\\vert x \\vert_a)}f(y)dy\\nonumber,\n \\end{align}\n and\n \\begin{align}\n G(x):=\\int_{B(a,\\vert x \\vert_a)} g(y)dy\\nonumber.\n \\end{align}\n Then the following quantities are equivalent:\n \\begin{enumerate}\n \\item$\\mathcal A_1:= \\sup_{x\\not=a}A_1(x;\\alpha,\\beta):= \\sup_{x\\not=a}F^\\alpha(x)G^\\beta(x).$\n \\item$\\mathcal A_2:=\\sup_{x\\not=a} A_2(x;\\alpha,\\beta,s):=\\sup_{x\\not=a}{\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy\\bigg)^\\alpha} G^s(x),$ provided that $G^{(\\beta-s)\/\\alpha}(y)$ makes sense.\n \\item$\\mathcal A_3:=\\sup_{x\\not=a} A_3(x;\\alpha,\\beta,s):= \\sup_{x\\not=a}\\bigg(\\int_{B(a,\\vert x \\vert_a)}g(y)F^{(\\alpha-s)\/\\beta}(y)dy\\bigg)^\\beta F^s(x),$ provided that $F^{(\\alpha-s)\/\\beta}(y)$ makes sense.\n \\item$\\mathcal A_4:=\\sup_{x\\not=a} A_4(x;\\alpha,\\beta,s):= \\sup_{x\\not=a}\\bigg(\\int_{B(a,\\vert x \\vert_a)}f(y)G^{(\\beta+s)\/\\alpha}(y)dy\\bigg)^\\alpha G^{-s}(x),$ provided that $f,g\\in L^1(\\mathbb X)$ and that $G^{-s}(x)$ makes sense.\n \\item$\\mathcal A_5:=\\sup_{x\\not=a} A_5(x;\\alpha,\\beta,s):=\\sup_{x\\not=a}\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}g(y) F^{(\\alpha+s)\/\\beta}(y) dy\\bigg)^\\beta F^{-s}(x),$\n provided that $f,g\\in L^1(\\mathbb X)$ and that $F^{-s}(x)$ makes sense.\n \\end{enumerate}\nMoreover, we have the following relations between the above quantities:\n\\begin{itemize}\n\\item $\\mathcal {A}_1\\le (\\max(1,\\frac{s}{\\beta}))^\\alpha \\mathcal {A}_2 $ and $\\mathcal {A}_2\\le (\\max(1,\\frac{\\beta}{s}))^\\alpha \\mathcal {A}_1$;\n\\item $\\mathcal {A}_1\\le(\\max(1,\\frac{s}{\\alpha}))^\\beta \\mathcal {A}_3 $ and $\\mathcal {A}_3\\le (\\max(1,\\frac{\\alpha}{s}))^\\beta \\mathcal {A}_1$;\n\\item $ (\\frac{s}{\\beta+s})^\\alpha \\mathcal A_4 \\le \\mathcal A_1\\le (1+\\frac{s}{\\beta})^\\alpha \\mathcal A_4 $ and $ (\\frac{s}{\\alpha+s})^\\beta \\mathcal A_5 \\le \\mathcal A_1\\le (1+\\frac{s}{\\alpha})^\\beta \\mathcal A_5 $.\n\\end{itemize}\n\\end{thm}\n\n \\begin{proof}[Proof of Theorem \\ref{THM:equivalence}]\n $\\boxed{{\\mathcal A_1\\approx \\mathcal A_2}}$\n \n We will first consider the case $s\\le\\beta$. Then for $\\vert y \\vert_a\\geq \\vert x \\vert_a$ we have \n $ G^{(\\beta-s)\/\\alpha}(y)\\ge G^{(\\beta-s)\/\\alpha}(x)$. Consequently, we can estimate\n \\begin{align*}\n A_2(x;\\alpha,\\beta,s) & ={\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy\\bigg)^\\alpha} G^s(x)\\nonumber\n \\\\\n & \\ge{\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)dy\\bigg)^\\alpha G^\\beta(x)}\\nonumber\n\\\\ &\n =F^{\\alpha}(x)G^{\\beta}(x),\\nonumber\n \\end{align*}\n which implies $\\mathcal A_2 \\ge \\mathcal A_1$.\n For $s>\\beta$, let us first introduce some notation, using the polar decomposition \\eqref{EQ:polar}. 
First, we denote\n \\begin{align*}\n W(x) & :={\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy}\\nonumber\\\\\n & =\\int_{\\vert x \\vert_a}^{\\infty}\\int_{\\Sigma_r}f(r,\\omega) \\lambda(r,\\omega)\\widetilde{G_1}(r)^{(\\beta-s)\/\\alpha} d\\omega_r dr\\nonumber\\\\\n & = \\int_{\\vert x \\vert_a}^{\\infty}{\\widetilde{W}}(r)dr\\nonumber\\\\\n & =:\\widetilde{W_1}(\\vert x \\vert_a),\\nonumber\n \\end{align*}\n where\n$$\n \\widetilde{G_1}(r) := \\int_0^r\\int_{\\Sigma_s} g(s,\\sigma)\\lambda(s,\\sigma)d\\sigma_s ds =\\int_0^r\\widetilde{G}(s)ds,\n $$\n with $\\widetilde{G}(s):=\\int_{\\Sigma_s} g(s,\\sigma)\\lambda(s,\\sigma)d\\sigma_s$,\n and\n$$\n \\widetilde{W}(r):=\\int_{\\Sigma_r}f(r,\\omega) \\lambda(r,\\omega)\\widetilde{G_1}(r)^{(\\beta-s)\/\\alpha}d\\omega_r.\n$$\nMoreover, we denote \n$$\n \\widetilde{F_1}(r) : = \\int_r^{\\infty} \\int_{\\Sigma_s} \\lambda(s,\\sigma)f(s,\\sigma)d\\sigma_s ds=\\int_r^{\\infty}\\widetilde{F}(s)ds.\n$$\n\nUsing the function $W$ defined above, we can estimate\n \\begin{align*}\n & F^{\\alpha}(x)G^{\\beta}(x) \\\\\n & = G^{\\beta}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)G^{(s-\\beta)\/\\alpha}(y)W^{(s-\\beta)\/s}(y)W^{(\\beta-s)\/s}(y)dy\\bigg)^\\alpha\\nonumber\n \\\\\n &=G^{\\beta}(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\int_{\\Sigma_r} \\lambda(r,\\omega) f(r,\\omega)\\widetilde{G_1}^{(\\beta-s)\/\\alpha}(r)\\widetilde{G_1}^{(s-\\beta)\/\\alpha}(r)\\widetilde{W_1}^{(s-\\beta)\/s}(r)\\widetilde{W_1}^{(\\beta-s)\/s}(r)d\\omega_r dr\\bigg)^\\alpha\\nonumber\n \\\\ & \n \\le\\bigg(\\sup_{r>\\vert x \\vert_a}\\widetilde{G_1}^{(s-\\beta)}(r)\\widetilde{W_1}^{(s-\\beta)\\alpha\/s}(r)\\bigg)G^{\\beta}(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{W}(r) {(\\int_r^{\\infty}\\widetilde{W}(s)ds)}^{(\\beta-s)\/s}dr\\bigg)^\\alpha\\nonumber\n \\\\\n&= \\bigg(\\sup_{\\vert y \\vert _a>\\vert x \\vert_a}{G}^{s}(y){W}^{\\alpha}(y)\\bigg)^{(s-\\beta)\/s}\\bigg(\\frac{s}{\\beta}\\bigg)^{\\alpha}\nG^{\\beta}(x) W^{(\\beta\\alpha)\/s}(x)\\nonumber\n\\\\ & \n \\le {\\bigg(\\frac{s}{\\beta}\\bigg)}^{\\alpha}\\bigg(\\sup_{\\vert y \\vert_a >\\vert x \\vert_a}{G}^{s}(y){W}^{\\alpha}(y)\\bigg)^{(1-{\\beta\/s})}\n\\bigg(\\sup_{{\\vert x \\vert_a}>0} G^{s}(x) W^{\\alpha}(x)\\bigg)^{\\beta\/s}\\nonumber\n\\\\ & \\le {\\bigg(\\frac{s}{\\beta}\\bigg)}^{\\alpha}\\sup_{{\\vert x \\vert_a} >0} A_2(x;\\alpha,\\beta,s).\\nonumber\n \\end{align*} \n Therefore, we obtain\n $$\n \\mathcal A_1 \\le {\\left(\\frac{s}{\\beta}\\right)}^{\\alpha} \\mathcal A_2.\n$$\nHence, we have for every $s>0$ the inequality\n$$\n\\mathcal A_1 \\le {\\left(\\max(1,\\frac{s}{\\beta})\\right)}^{\\alpha} \\mathcal A_2.\n$$\n Conversely, we have for $s<\\beta$,\n \\begin{align*}\n & G^s(x)W^\\alpha(x) \\\\\n & =G^s(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)F^{(\\beta-s)\/\\beta}(y)F^{(s-\\beta)\/\\beta}(y)dy\\bigg)^\\alpha\\nonumber\\\\\n& =G^s(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}{\\int_{\\Sigma_r} \\lambda(r,\\omega)f(r,\\omega)}\\bigg(\\int_0^r\\int_{\\Sigma_s} \\lambda(s,\\sigma) g(s,\\sigma)ds d\\sigma_s\\bigg)^{(\\beta-s)\/\\alpha}\\nonumber\\\\\n&\\quad\\times\\widetilde{F_1}^{(\\beta-s)\/\\beta}(r)\\widetilde{F_1}^{(s-\\beta)\/\\beta}(r)drd\\omega_r \\bigg)^\\alpha\\nonumber \\\\\n & = G^s(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}{\\int_{\\Sigma_r} 
\\lambda(r,\\omega)f(r,\\omega)}\\widetilde{G_1}^{(\\beta-s)\/\\alpha}(r)\\widetilde{F_1}^{(\\beta-s)\/\\beta}(r)\\widetilde{F_1}^{(s-\\beta)\/\\beta}(r)d\\omega_r dr\\bigg)^\\alpha.\\nonumber\n\\end{align*}\n Consequently, we can estimate\n \\begin{align*}\n & G^s(x)W^\\alpha(x)\n \\\\ &\n \\le\\bigg(\\sup_{r>\\vert x \\vert_a}\\widetilde{G_1}^{(\\beta-s)\/\\alpha}(r)\\widetilde{F_1}^{(\\beta-s)\/\\beta}(r)\\bigg)^\\alpha G^{s}(x) \\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{F}(r)\\left(\\int_r^{\\infty}\\widetilde{F}(s)ds\\right)^{(s-\\beta)\/\\beta} dr\\bigg)^\\alpha\\nonumber \n \\\\ &=\\bigg(\\sup_{r>\\vert x \\vert_a}\\widetilde{G_1}^{\\beta}(r)\\widetilde{F_1}^{\\alpha}(r)\\bigg)^{(\\beta-s)\/\\beta} G^{s}(x) {\\bigg(\\frac{\\beta}{s}\\bigg)}^\\alpha \\widetilde{F_1}^{(\\alpha s)\/\\beta}(\\vert x \\vert_a)\\nonumber \n \\\\ &\\le\\bigg(\\sup_{\\vert y \\vert_a>\\vert x \\vert_a} G^{\\beta}(y) F^{\\alpha}(y)\\bigg)^{(\\beta-s)\/\\beta}{\\bigg(\\frac{\\beta}{s}\\bigg)}^\\alpha \\bigg(\\sup_{\\vert x \\vert_a>0}G^{\\beta}(x) F^{\\alpha}(x)\\bigg)^{s\/\\beta}\\nonumber\n\\\\ &\\le\\bigg(\\frac{\\beta}{s}\\bigg)^\\alpha \\sup_{\\vert x \\vert_a>0} A_1 (x;\\alpha,\\beta),\\nonumber\n \\end{align*}\n which gives \n $\n \\mathcal A_2 \\leq {\\left(\\frac{\\beta}{s}\\right)}^\\alpha \\mathcal A_1.\n $\n \n On the other hand, for $s\\geq\\beta$, when $\\vert y \\vert_a > \\vert x \\vert_a$ we have\n $\n G^{(\\beta-s)\/\\alpha}(y) \\le G^{(\\beta-s)\/\\alpha}(x).\n $\n \n Therefore, we can estimate\n \\begin{align*}\n A_{2}(x;\\alpha,\\beta,s) &= G^{s}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta-s)\/\\alpha}(y)dy\\bigg)^\\alpha\\nonumber\n \\\\ & \n \\le G^{s}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)dy\\bigg)^\\alpha G^{(\\beta-s)}(x)\\nonumber\n\\\\\n&=F^\\alpha(x)G^\\beta(x)\\nonumber\n\\end{align*}\n i.e. 
$\\mathcal A_2 \\le \\mathcal A_1.$ \nTherefore, we have for $s>0$, the overall estimate\n$$\n\\mathcal A_2 \\le {\\left(\\max(1,\\frac{\\beta}{s})\\right)}^{\\alpha} \\mathcal A_1.\n$$\nHence we have also shown that\n$\n \\mathcal A_1 \\approx \\mathcal A_2.\n$\n\n \n Next we observe that the proof of $\\boxed{\\mathcal A_1 \\approx \\mathcal A_3}$ follows along the same lines as that of $\\mathcal A_1 \\approx \\mathcal A_2$, where we just need to interchange the roles of $F$ and $G$.\n \n \\smallskip\n $\\boxed{\\mathcal A_1 \\approx \\mathcal A_4}$\n \n \\smallskip\n Let us denote\n \\begin{align*}\nW_{0}(x) &:= \\int_{B(a,\\vert x \\vert_a)} f(y) G^{(\\beta+s)\/\\alpha}(y)dy\\nonumber\n \\\\ &= \\int_{0}^{\\vert x \\vert_a}\\int_{\\Sigma_r} \\lambda(r,\\omega) f(r,\\omega){G}^{(\\beta+s)\/\\alpha}(r,\\omega)d\\omega_r dr\\nonumber\\\\\n\\\\ \n &=: \\int_{0}^{\\vert x \\vert_a}\\widetilde{W_0}(r)dr,\\nonumber\n\\end{align*}\n\\\\\n\\\\\nso that we can write\n $$\n A_4(x;\\alpha,\\beta,s) = G^{-s}(x)W_0^\\alpha(x).\n $$ \n We rewrite\n $A_1$ as\n \\begin{align*}\n A_1(x;\\alpha,\\beta)&=G^{\\beta}(x)\\bigg(\\int_{\\mathbb X\\backslash{B(a,\\vert x \\vert_a)}}f(y)G^{(\\beta+s)\/\\alpha}(y)G^{-{(\\beta+s)\/\\alpha}}(y)dy\\bigg)^\\alpha\\nonumber\n \\\\\n &= G^\\beta(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\int_{\\Sigma_r} \\lambda(r,\\omega) f(r,\\omega){G}^{(\\beta+s)\/\\alpha}(r,\\omega)G^{-(\\beta+s)\/\\alpha}(r,\\omega) d\\omega_r dr\\bigg)^\\alpha\\nonumber\n \\\\\n &=G^\\beta(x)\\bigg(\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{G_1}^{-(\\beta+s)\/\\alpha}(r)\\frac{d}{dr}\\bigg(\\int_0^r\\widetilde{W_0}(s)ds\\bigg)dr\\bigg)^\\alpha.\\nonumber\n \\end{align*}\n We can estimate this by \n \\begin{align*}\n & \\le G^{\\beta}(x)\\bigg(\\widetilde{G_1}^{-(\\beta+s)\/\\alpha}(\\infty){W_0}(\\infty) + \\frac{(\\beta+s)}{\\alpha}\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{G}(r)(\\widetilde{G_1}(r))^{\\frac{-(\\beta+s)}{\\alpha}-1}W_{0}(r)dr\\bigg)^\\alpha\\nonumber\n \\\\\n& \\le G^{\\beta}(x)\\bigg(\\sup_{\\vert y \\vert_a >\\vert x \\vert_a} G^{-s}(y) W_{0}^{\\alpha}(y)\\bigg)\\times\n\\\\ & \n\\;\\; \\times \\bigg(\\widetilde{G_{1}}^{-\\beta\/\\alpha}{(\\infty)} + \\frac{(\\beta+s)}{\\alpha}\\int_{\\vert x \\vert_a}^{\\infty}\\widetilde{G_1}^{-(\\beta\/\\alpha)-1}(r) \\frac{d}{dr}\\bigg(\\int_0^r\\widetilde{G}(s)ds\\bigg)dr\\bigg)^\\alpha\\nonumber\n\\\\ &\n \\le G^{\\beta}(x)\\sup_{\\vert y \\vert_a > 0} {A_{4}}(y;\\alpha,\\beta,s)\\bigg(\\widetilde{G_{1}}^{-\\beta\/\\alpha}{(\\infty)} + \\frac{(\\beta+s)}{\\beta} \\bigg(\\widetilde{G_1}^{-(\\beta\/\\alpha)}(\\vert x \\vert_a) -{\\widetilde{G_1}}^{-\\beta\/\\alpha}(\\infty)\\bigg)\\bigg)^\\alpha\\nonumber\n\\\\\n &= \\sup_{\\vert y \\vert_a > 0} {A_{4}}(y;\\alpha,\\beta,s) \\bigg[ \\frac{(\\beta+s)}{\\beta} + \\bigg(1-\\frac{(\\beta+s)}{\\beta}\\bigg)\\bigg(\\frac{G(x)}{G(\\infty)}\\bigg)^{\\beta\/\\alpha}\\bigg ]^\\alpha\\nonumber\n \\\\ &\n \\le \\bigg(1+\\frac{s}{\\beta}\\bigg)^\\alpha \\sup_{\\vert y \\vert_a>0} A_4(y;\\alpha,\\beta,s),\n \\end{align*}\n where the expressions like $G(\\infty)$ make sense since $g\\in L^1(\\mathbb X)$.\n Therefore, we obtain \n $$\n \\mathcal A_1\\le (1+s\/\\beta)^\\alpha \\mathcal A_4.\n $$\n To prove the opposite inequality, we assume that\n $$\n \\sup_ {\\vert x \\vert_a>0}{ A_1}(x;\\alpha, \\beta)<\\infty.\n $$\n Then we have \n \\begin{align}\n A_{4}(x;\\alpha,\\beta,s)&= G^{-s}(x)\\bigg(\\int_{B(a,\\vert x \\vert_a)} G^{(\\beta+s)\/\\alpha}(y)f(y)dy\\bigg)^\\alpha\\nonumber\n 
\\end{align}\n\\begin{align}\n&=G^{-s}(x)\\bigg(\\int_0^{\\vert x \\vert_a}{\\widetilde{ G_1}}^{(\\beta+s)\/\\alpha}(r) \\frac{d}{dr}\\bigg(-\\int_r^{\\infty}{\\widetilde{F}(s)}ds\\bigg)dr\\bigg)^\\alpha\\nonumber\n\\end{align}\n\\begin{align}\n&=G^{-s}(x)\\bigg({\\widetilde{ G_1}}^{(\\beta+s)\/\\alpha}(r){\\widetilde{F_1}}(r) \\bigg\\vert_{\\vert x \\vert_a}^0+\\frac{\\beta+s}{\\alpha}\\int_0^{\\vert x \\vert_a}{\\widetilde{F_1}}(r){\\widetilde{ G_1}}^{(\\beta+s)\/\\alpha-1}(r) \\frac{d}{dr}\\bigg(\\int_0^r{\\widetilde{G}(s)}ds\\bigg)dr\\bigg)^\\alpha\\nonumber\n\\end{align}\n\\begin{align}\n\\le G^{-s}(x)\\bigg (\\sup_{0