diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhsrq" "b/data_all_eng_slimpj/shuffled/split2/finalzzhsrq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhsrq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nReinforcement learning deals with taking sequences of actions in previously unknown environments, to maximize cumulative rewards. Deep reinforcement learning combines the power of deep neural networks for function approximation with reinforcement learning algorithms. Such algorithms have recently been successful in solving several challenging problems \\cite{mnih2015human, silver2017mastering, schulman2017proximal}. However, these algorithms are often not sample-efficient, typically requiring an unreasonable amount of interactions in an environment \\cite{mnih2015human, schulman2017proximal, duan2016benchmarking}. In real world applications, it is often the case that collecting data is expensive, motivating the need for algorithms that can learn from a minimal number of interactions with the environment.\n\nEffectively using a model of the environment for learning could result in a significant increase of sample-efficiency \\cite{chua2018deep, lowrey2018plan}. This is because models introduce an inductive bias that the environment behaves in a forward generative causal direction. Models can be directly used for planning actions that maximize the sum of expected rewards in a finite horizon. Previous works have successfully used fast and accurate simulators of the environment for planning, to solve challenging tasks such as control of a Humanoid \\cite{lowrey2018plan, tassa2012synthesis}. However, most real-world problems do not come with a fast and accurate simulator and engineering one from scratch is a laborious task. Even if a simulator exists: 1)~maintaining the simulator to accommodate for changes in the real-world is a lot of manual work, and 2)~matching the true state of the environment and the simulator is not trivial. This motivates the need for \\emph{learning} dynamics models of the environment. Dynamics models of the environment can be efficiently learned using supervised learning techniques \\cite{nagabandi2018neural} and also efficiently adapted to changes in the environment \\cite{clavera2018learning}.\n\nPlanning with learned dynamics models is challenging because the planner can exploit the inaccuracies of the model to arrive at actions that are imagined to produce highly over-optimistic rewards (see Figure~\\ref{f:traj_opt}). In this paper, we propose to regularize model-based planning with an energy-based model trained on the same transitions as the dynamics model. The planning procedure is augmented with an additive cost of minimizing the energy of the imagined transitions. This penalizes the planner from producing trajectories with transitions that are outside the training data distribution (that is, transitions with high energy estimates). We demonstrate that the proposed method is effective at regularizing model-based planning from exploiting inaccuracies of the model. Previous works have proposed to use ensembles of models to deal with this problem \\cite{chua2018deep}. We show that the proposed method can further improve the performance of planning on top of pre-trained ensemble of models. 
Furthermore, we show that the proposed method enables sample-efficient learning to achieve competitive performance in five popular continuous control tasks.\n\n\n\\section{Model-Based Planning}\n\nIn this section, we formalize the problem setting as a Markov Decision Process (MDP). At every discrete time-step $t$, the environment is in state $s_t$, the agent takes an action $a_t$ to receive a scalar reward $r_t = r(s_t, a_t)$ and the environment transitions to the next state $s_{t+1}$, following the dynamics $s_{t+1} = f(s_t, a_t)$. The goal of the agent is to choose actions $a_t$ so as to maximize the sum of expected future rewards (called return), $G = \\mathbb{E} \\left[ \\sum_{t=0}^\\infty r(s_t, a_t) \\right]$.\n\nIn this paper, we focus on finite-horizon planning with learned dynamics models $\\hat{f}$. Also, we assume that the reward function $r(s_t, a_t)$ is known and that the state $s_t$ is fully observed. Forward dynamics models of the environment can be learned using supervised learning techniques to predict the next state $s_{t+1}$ from the current state $s_t$ and action $a_t$:\n\\[ s_{t+1} = \\hat{f}(s_t, a_t) \\,. \\]\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth, trim=180 120 180 120, clip]{training_loop.pdf}\n\\end{center}\n\\caption{Overview of the training loop. We initially perform random exploration for one or more episodes to train the dynamics model and then interact with the environment by planning using the learned dynamics model. At the end of each episode, we re-train the dynamics model on the past experience.}\n\\label{f:training-loop}\n\\end{figure}\n\nAt time-step $t$, the agent can plan a sequence of actions $\\{a_t, \\ldots, a_{t+H}\\}$ by unrolling the learned dynamics model to maximize the sum of rewards:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\mathbb{E} \\left[ \\sum_{\\tau=t}^{t+H} r(s_\\tau, a_\\tau) \\right] \n\\,,\n\\label{eq:obj-true}\n\\end{align}\nsuch that $s_{\\tau+1} = f(s_\\tau, a_\\tau)$. Since we do not have access to the true dynamics $f$, we plan using the following objective as a proxy to the true objective:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} r(s_\\tau, a_\\tau) \n\\,,\n\\label{eq:obj-proxy}\n\\end{align}\nsuch that $s_{\\tau+1} = \\hat{f}(s_\\tau, a_\\tau)$. We use model-predictive control (MPC) \\cite{garcia1989model, nagabandi2018neural} to adapt our plans to new states, that is, we apply just the first action from the optimized sequence and re-plan at the next step. This reduces the effect of error accumulation due to multi-step predictions using the model.\n\nWe initially train the dynamics model using data collected by executing a random policy for one or more episodes. More episodes of initial exploration reduce the effect of the initial model weights. The initial data can also be collected by an existing policy, for example from human demonstrations or human-engineered controllers. After training the dynamics model, we interact with the environment using model-based planning (Equation~\\ref{eq:obj-proxy}). The $(s_t, a_t, s_{t+1})$ transition observed at each interaction is stored in a replay buffer along with the initial data and the model is re-trained on the replay buffer at the end of each episode. We iterate this alternating process of training the dynamics model and model-based planning. 
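\n\nAs a concrete illustration, this alternating loop can be sketched in Python-style pseudocode. This is a minimal sketch of our own, where \\texttt{collect\\_random\\_episodes}, \\texttt{train\\_model} and \\texttt{cem\\_plan} are hypothetical helper functions rather than code released with this paper:\n\\begin{verbatim}\nbuffer = collect_random_episodes(env, n_init_episodes)  # initial exploration\nmodel = train_model(buffer)           # supervised learning of f_hat\nfor episode in range(n_episodes):\n    s, done = env.reset(), False\n    while not done:\n        # optimize the proxy objective over length-H action sequences\n        actions = cem_plan(model, reward_fn, s, horizon=H)\n        s_next, done = env.step(actions[0])  # MPC: apply only the first action\n        buffer.append((s, actions[0], s_next))\n        s = s_next\n    model = train_model(buffer)       # re-train on the replay buffer\n\\end{verbatim}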
An overview of the training loop is illustrated in Figure~\\ref{f:training-loop}.\n\n\\subsection{Regularizing Model-Based Planning}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=\\textwidth, trim=20 80 20 100, clip]{comp_graph.pdf}\n\\end{center}\n\\caption{Computational graph of regularizing model-based planning with energy-based models. At each timestep $t-1$, the environment is in a state $s_{t-1}$ and we initially consider an action trajectory $\\{a_t, a_{t+1}, \\ldots, a_{t+H}\\}$ that we could apply for the next $H$ timesteps. The dynamics model is used to predict the future state transitions $\\{\\tilde{s}_t, \\tilde{s}_{t+1}, \\ldots, \\tilde{s}_{t+H}\\}$ at each timestep for the considered action trajectory. The reward at each time-step is computed directly from the state-action pairs, resulting in a prediction of the finite-horizon cumulative reward. The planning objective is augmented with an additive regularization term computed from the energy of each $(\\tilde{s}_\\tau, a_\\tau, \\tilde{s}_{\\tau+1})$ transition. The action trajectory $\\{a_t, a_{t+1}, \\ldots, a_{t+H}\\}$ can be optimized to maximize this regularized planning objective.}\n\\label{f:comp-graph}\n\\end{figure}\n\nDirectly optimizing the objective in Equation~\\ref{eq:obj-proxy} is challenging because $\\hat{f}$ is only an approximation of the true dynamics $f$. Deep neural networks are commonly used as function approximators to learn $\\hat{f}$ and they are prone to erroneous predictions for samples outside the data distribution.\nAn effective optimizer can be easily deceived by these erroneous predictions, converging to action trajectories that are imagined to produce very high rewards but fail to do so in reality. This problem can be alleviated by discouraging the optimizer from considering trajectories that are outside the training distribution of the learned dynamics model $\\hat{f}$. This can be achieved by augmenting the planning objective with an additive term to maximize the probability of imagined transitions:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} r(s_\\tau, a_\\tau) + \n\\alpha \\log p(s_t, a_t, s_{t+1}, \\ldots, s_{t+H}, a_{t+H}, s_{t+H+1})\n\\,,\n\\nonumber\n\\end{align}\nwhere the scalar $\\alpha$ modulates the weight between both costs. We approximate the log-probability of the whole trajectory as a sum of the log-probabilities of each transition in the trajectory:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} \\left[ r(s_\\tau, a_\\tau) +\n\\alpha \\log p(s_\\tau, a_\\tau, s_{\\tau+1}) \\right]\n\\,.\n\\label{eq:obj-main}\n\\end{align}\n\nAssume that we want to learn the probability density function using a parameterized model $p(x_\\tau; \\theta)$, where $x_\\tau = [s_\\tau, a_\\tau, s_{\\tau+1}]$. The energy function $E(x_\\tau; \\theta)$ is the unnormalized log-density, that is, the probability density function $p(x_\\tau)$ is defined as:\n\\begin{align}\np(x_\\tau; \\theta) = \\frac{1}{Z(\\theta)} \\exp{(-E(x_\\tau;\\theta))}\n\\,,\n\\nonumber\n\\end{align}\nwhere $Z(\\theta) = \\int{\\exp{(-E(x'_\\tau;\\theta))}} dx'_\\tau$ is the partition function which normalizes the probability density. 
Computing the partition function is generally intractable in practice and is not important for regularization in Equation~\\ref{eq:obj-main} since it does not depend on $x_\\tau$.\nWe can instead learn and use the energy function for regularizing model-based planning:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} \\left[ r(s_\\tau, a_\\tau) -\n\\alpha E(s_\\tau, a_\\tau, s_{\\tau+1}) \\right]\n\\,.\n\\label{eq:obj-energy}\n\\end{align}\n\n\n\\section{Energy-Based Models}\n\\label{sec:energy}\n\nIn principle, any energy-based model \\cite{lecun2006tutorial} can be used to estimate $E(s_\\tau, a_\\tau, s_{\\tau+1})$ in Equation~\\ref{eq:obj-energy}. In this paper, we use the recently introduced Deep Energy Estimator Networks (DEEN) \\cite{saremi2018deep, saremi2019neural} for energy estimation. In this section, we introduce deep energy estimator networks and further contrast them against direct score function estimation using a denoising autoencoder. We show that deep energy estimator networks offer a principled and scalable way to estimate both the energy and score functions, making them a good choice for regularization in both gradient-free and gradient-based planning.\n\nConsider a random variable $Y$ that is a noisy observation of another unknown random variable $X$ with density function $p(x)$. \n\\citet{robbins1956empirical} derived the least squares estimators of variable $X$ from an observed value $y$ of variable $Y$ for Poisson, geometric and binomial noise distributions and \\citet{miyasawa1961empirical} later extended the work to derive the estimator for univariate Gaussian noise. \\citet{raphan2011least} generalized these results into a unified framework and derived least squares estimators for more distributions including the multivariate Gaussian distribution. The empirical Bayes least squares estimator, or the optimal denoising function $g(y)$, for zero-mean multivariate Gaussian noise is given by:\n\\begin{align}\ng(y) = y + \\sigma^2 \\nabla_y \\log p(y)\n\\,,\n\\label{eq:opt-denoising}\n\\end{align}\nwhere $y \\sim x + N(0, \\sigma^2I_d)$. Assume that we have access to samples $x_i \\in X$. Then, we can corrupt the samples using zero-mean Gaussian noise to obtain samples $y_i \\in Y$. We can train a feedforward neural network $\\hat{g}$ to denoise each sample $y_i$ to predict $x_i$. Such a function $\\hat{g}$ can be implemented with a denoising autoencoder (DAE),\nand based on Equation~\\ref{eq:opt-denoising} we can use it to approximate the score function $\\nabla_y \\log p(y)$ of the corrupted distribution as follows:\n\\begin{align}\n\\nabla_y \\log p(y) \\propto \\hat{g}(y) - y\n\\,.\n\\label{eq:dae}\n\\end{align}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=\\textwidth, trim=10 10 10 10, clip]{deen_vs_dae.pdf}\n\\end{center}\n\\caption{Comparison of score function estimation using DEEN vs DAE. We consider a simple mixture of 1d Gaussian distributions and generate 1000 samples from it. The ground truth probability density function $p(x)$, $\\log\\ p(x)$ and score function $\\frac{\\partial\\log\\ p(x)}{\\partial x}$ are shown in the first three panels. We corrupt the training data using additive Gaussian noise with a scale of 0.5. We train a DEEN and a DAE on this training data and the energy and score function estimates of the corrupted distribution $p(y)$ are shown in the last three panels. DEEN provides good, smooth estimates of the energy and score function. 
DAE also provides a reasonable estimate of the score function.}\n\\label{f:deen-dae}\n\\end{figure}\n\n\\citet{boney2019regularizing} proposed to use this approximation to directly estimate the gradient of $\\log p(s_\\tau, a_\\tau, s_{\\tau+1})$ in Equation~\\ref{eq:obj-main}. This can be used in gradient-based planning by using the penalty term $\\|\\hat{g}(x) - x\\|^2$ instead of $\\log p(s_\\tau, a_\\tau, s_{\\tau+1})$ in Equation~\\ref{eq:obj-main} and stopping the gradient propagation through the denoising autoencoder $g$.\n\nGradient-free planning with the regularized objective in Equation~\\ref{eq:obj-energy} requires explicit energy estimates. \\citet{boney2019regularizing} used DAE regularization for gradient-free planning, which is not accurate, as the denoising error does not correspond to an energy estimate. Gradient-free planning offers some attractive properties: 1)~It is very easy to parallelize, 2)~There is no need to backpropagate gradients, 3)~Gradient-based planning involves backpropagating through very deep networks (where the same network is repeatedly applied for a finite horizon), where the problem of exploding and vanishing gradients may arise, and 4)~The learned dynamics model can become chaotic, leading to high variance in the backpropagated gradients \\cite{parmas2018pipps}. In this paper, we propose to use differentiable energy-based models to obtain explicit energy estimates, from which the score function can be computed (if needed).\n\n\\citet{saremi2018deep} proposed to explicitly parameterize the energy function $E(y; \\theta)$ with a neural network\nand to compute the derivative of the energy by backpropagating through the network. Such a network can be trained by minimizing the following objective based on the relation in Equation~\\ref{eq:opt-denoising}:\n\\begin{align}\n\\argmin_\\theta \\sum_{x_i \\in X, y_i \\in Y} \\left\\| x_i - y_i + \\sigma^2 \\frac{\\partial E(y=y_i; \\theta)}{\\partial y} \\right\\|^2\n\\label{eq:deen}\n\\end{align}\n\nThe energy function network $E(y; \\theta)$ is called a deep energy estimator network (DEEN).\nNote that minimizing this objective involves double backpropagation at each step. Optimizing the objective in Equation~\\ref{eq:deen} ensures that the energy gradient $\\partial E(y; \\theta) \/ \\partial y$ satisfies the relation in Equation~\\ref{eq:opt-denoising}. This leads the energy network $E$ to explicitly learn the energy function of the corrupted distribution such that the negative of its gradient also corresponds to the score function of the corrupted distribution. In this paper, we propose to use the energy network $E(y; \\theta)$ to learn the energy function $E(s_\\tau, a_\\tau, s_{\\tau+1})$ and use it for regularization in the planning objective (Equation~\\ref{eq:obj-energy}). A computational graph of regularizing model-based planning with energy estimation using a DEEN network is illustrated in Figure~\\ref{f:comp-graph}.\n\nIt is to be noted that both the denoising autoencoder and the DEEN methods approximate the score function of the corrupted distribution $p(y)$ instead of the true data distribution $p(x)$. This can potentially behave better in practice since $p(y)$ can be seen as a Parzen window estimate of $p(x)$ with variance $\\sigma^2$ as the smoothing parameter \\cite{saremi2019neural, vincent2011connection}.\n\nIn Section~\\ref{sec:exp}, we show that energy estimation with DEEN is more effective than direct score function estimation using DAE for regularizing model-based planning. 
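\nTo make the DEEN training step concrete, a minimal PyTorch-style sketch of the objective in Equation~\\ref{eq:deen} is given below. This is our own illustration: the architecture of \\texttt{energy\\_net} is left unspecified and all names are assumptions.\n\\begin{verbatim}\nimport torch\n\ndef deen_loss(energy_net, x, sigma):\n    # x holds a batch of clean transitions [s_t, a_t, s_{t+1}]\n    y = (x + sigma * torch.randn_like(x)).requires_grad_(True)\n    E = energy_net(y).sum()\n    # dE\/dy; create_graph=True enables the double backpropagation\n    # mentioned above when this loss is minimized over theta\n    dE_dy = torch.autograd.grad(E, y, create_graph=True)[0]\n    # DEEN objective: || x - y + sigma^2 * dE\/dy ||^2\n    return ((x - y + sigma ** 2 * dE_dy) ** 2).sum(dim=1).mean()\n\\end{verbatim}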
Deep energy estimator networks have been shown to be robust for score function estimation since the score function is computed from explicit energy estimates \\cite{saremi2018deep}. We compare score function estimation using denoising autoencoders and deep energy estimator networks on a toy example in Figure~\\ref{f:deen-dae}. Previous works \\cite{saremi2019neural, alain2014regularized} have also observed that directly estimating the score function is not robust in practice.\n\n\n\\section{Experiments}\n\\label{sec:exp}\n\nWe compare the proposed energy-based regularization method to probabilistic ensembles with trajectory sampling (PETS) \\cite{chua2018deep} and DAE regularization \\cite{boney2019regularizing}. PETS is a state-of-the-art model-based algorithm which involves learning an ensemble of probabilistic dynamics models.\nWe perform the comparison in five popular continuous control benchmarks from \\cite{brockman2016openai}: Cartpole, Reacher, Pusher, Half-cheetah and Ant. We use the cross-entropy method (CEM) \\cite{botev2013cross} as the optimizer in all our experiments since it is computationally significantly faster than the Adam optimizer used in \\cite{boney2019regularizing} and was also able to achieve competitive or even better results in these benchmarks. In Section~\\ref{sec:exp-pre}, we test the energy-based regularization method on top of the pre-trained PETS models \nto show that energy-based regularization further improves planning. In Section~\\ref{sec:exp-scratch}, we\nshow that energy-based regularization enables sample-efficient learning to solve all tasks from just a handful of trials.\n\n\\subsection{Experiments on Pre-Trained Dynamics Models}\n\\label{sec:exp-pre}\n\nWe test the proposed regularization on top of the state-of-the-art model-based RL algorithm: PETS. We trained an ensemble of probabilistic dynamics models using the code provided by the authors of \\cite{chua2018deep}. The results are shown in Table~\\ref{t:pretrained-results}. We trained PETS on the Half-cheetah benchmark for 300 episodes and performed closed-loop planning by augmenting the planning objective with an additive term consisting of the energy estimates (Equation~\\ref{eq:obj-energy}). Both DAE and DEEN regularization are able to improve upon PETS, with DEEN regularization performing the best. Similar to \\cite{boney2019regularizing}, we did not observe any improvements on tasks with low-dimensional action spaces: Cartpole, Reacher and Pusher.\n\n\n\\begin{table}[h]\n\\caption{Comparison of planning using pre-trained PETS models with different optimizers}\n\\label{t:pretrained-results}\n\\centering\n\\begin{tabular}{lllll} \n\\toprule\nOptimizer & CEM & CEM + DAE & Adam + DAE & CEM + DEEN \\\\\n\\midrule\nReturn & $10955 \\pm 2865$ & $12967 \\pm 3216$ & $12796 \\pm 2716$ & $\\mathbf{13052 \\pm 2814}$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Experiments on Learning from Scratch}\n\\label{sec:exp-scratch}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{corl_results.pdf}\n\\end{center}\n\\caption{Results of our experiments on learning from scratch on five continuous control benchmarks: Cartpole, Reacher, Pusher, Half-cheetah and Ant. We compare to PETS \\cite{chua2018deep} and DAE regularization \\cite{boney2019regularizing} using the return (cumulative reward) obtained in each episode. \nPETS is a state-of-the-art model-based RL algorithm and DAE regularization has been shown to be effective in sample-efficient learning in these tasks. 
We also compare against planning with a Gaussian process (GP) based dynamics model in the Cartpole task. We show the mean and standard deviation of each setting averaged across 5 seeds.}\n\\label{f:scratch-results}\n\\end{figure}\n\nIn this section, we test the effectiveness of regularization using energy-based models for learning from scratch. We perform one or more episodes of random exploration, train a dynamics model and the regularizer on the transitions and further interact with the environment using MPC by planning at each time-step using the regularized objective in Equation~\\ref{eq:obj-energy}. We maintain a replay buffer of all past transitions and at the end of each episode the dynamics model and regularizer are re-trained on the whole replay buffer. In these experiments, we use a single feedforward network as the dynamics model (as opposed to an ensemble of models in \\cite{chua2018deep}) to test the efficacy of the proposed method in a simple setting. The results are shown in Figure~\\ref{f:scratch-results}. In low-dimensional environments such as Cartpole, Reacher and Pusher, CEM optimization with DEEN regularization is comparable to or better than the other methods. In Half-cheetah, CEM optimization with DEEN regularization clearly performs the best, obtaining good asymptotic performance in just 20,000 timesteps (corresponding to 16.6 minutes of experience). In Ant, CEM optimization with DEEN regularization also performs the best, learning to walk reasonably in just 4,000 timesteps (corresponding to 3.3 minutes of experience)\\footnote{Videos of the training progress are available at \\href{https:\/\/sites.google.com\/view\/regularizing-mbrl}{https:\/\/sites.google.com\/view\/regularizing-mbrl}.}. It can also be observed that CEM optimization with DEEN regularization performs competitively with or better than Adam optimization with DAE regularization, which requires much more computation.\nIn the Half-cheetah benchmark, state-of-the-art model-free methods \\cite{haarnoja2018soft, fujimoto2018addressing} and PETS obtain better asymptotic performance during later stages of learning. \nWe postulate that this is due to the lack of a proper exploration mechanism in the proposed approach. However, DEEN regularization enables excellent performance during the early stages of training and also shows consistent improvements after each episode. This is very important for practical applications in the real world, where the agent is expected to perform sensible actions from the very beginning. The proposed method enables efficient exploitation of the learned dynamics model and combining it with an explicit exploration mechanism would facilitate controlled exploration and exploitation. We leave combining the proposed method with an effective exploration strategy to future research. For example, energy estimates of the transitions could be used as bonuses for curiosity-driven exploration \\cite{pathak2017curiosity} to visit novel states and try novel actions.\n\nWe visually demonstrate the effectiveness of DEEN regularization in Figure~\\ref{f:traj_opt}. To compare with \\cite{boney2019regularizing}, we visualize trajectory optimization on the Half-cheetah task using dynamics models obtained after 5 episodes of training. We perform actions in the environment for 50 timesteps using model-based planning to arrive at a stable state and then perform trajectory optimization using a randomly initialized population of action trajectories. 
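\nFor reference, one iteration of CEM on the regularized objective in Equation~\\ref{eq:obj-energy} can be sketched as follows. This is a simplified illustration of our own; the population size and elite fraction shown are placeholders rather than the values used in our experiments.\n\\begin{verbatim}\nimport torch\n\ndef cem_iteration(mean, std, model, energy_net, reward_fn, s0,\n                  alpha, n_pop=500, n_elite=50):\n    # population of action sequences, shape (n_pop, H, action_dim)\n    A = mean + std * torch.randn(n_pop, *mean.shape)\n    score = torch.zeros(n_pop)\n    s = s0.unsqueeze(0).expand(n_pop, -1)\n    for t in range(A.shape[1]):\n        s_next = model(s, A[:, t])  # imagined transition\n        x = torch.cat([s, A[:, t], s_next], dim=1)\n        # reward minus alpha times the energy of the transition\n        score = score + reward_fn(s, A[:, t]) - alpha * energy_net(x).squeeze(-1)\n        s = s_next\n    elite = A[score.topk(n_elite).indices]\n    return elite.mean(dim=0), elite.std(dim=0)  # refit the sampling distribution\n\\end{verbatim}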
Without any regularization, the planning procedure leads to trajectories that are predicted to produce high rewards but fail to do so in reality. It can be observed that while DAE regularization is also effective, DEEN regularization is clearly better, being able to successfully prevent the reality and imagination from diverging and also leading to trajectories with a better outcome.\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=.8\\textwidth]{traj_opt.pdf}\n\\end{center}\n\\caption{Visualization of trajectory optimization after 5 episodes of training in the Half-cheetah environment. We perform actions in the environment for 50 timesteps using MPC and then initialize the CEM optimization with a random population of trajectories and optimize the planning objective in three different settings: 1) without any regularization, 2) with DAE regularization \\cite{boney2019regularizing}, and 3) with DEEN regularization. Here, the red lines denote the rewards predicted by the model (imagination) and the black lines denote the true rewards obtained when applying the sequence of optimized actions (reality). It is noticeable that planning without any regularization exploits inaccuracies of the dynamics model, but DAE and DEEN regularization are able to prevent this.\n}\n\\label{f:traj_opt}\n\\end{figure}\n\n\\subsection{Model Architecture and Hyperparameters}\n\nThe important hyperparameters used in our experiments are reported in Table~\\ref{t:hyperparameters}.\nFollowing \\cite{chua2018deep, boney2019regularizing}, we used a Bayesian neural network (BNN) with mean and variance predictions as our dynamics model. Although the predicted variance is not used for planning, it was found to have a regularizing effect during training \\cite{chua2018deep}. We used a vanilla feedforward network to model the energy function.\nThe energy estimation network is trained by corrupting the transitions in the replay buffer using additive Gaussian noise and minimizing the objective in Equation~\\ref{eq:deen}. We found the noise scale $\\sigma$, cost multiplier $\\alpha$ and number of training epochs to be the most sensitive hyperparameters. \nTo prevent overfitting to the replay buffer, we explored simple strategies like increasing the number of initial episodes with random exploration and decaying the number of training epochs after each episode. In Half-cheetah, we perform random exploration for the first 3 episodes and decay the number of training epochs to 8 after 10 episodes. 
In Ant, we decay the number of training epochs of the dynamics model and the DEEN network by factors of 0.6 and 0.7, respectively.\n\n\\begin{table}[h]\n\\caption{Important hyperparameters used in our experiments}\n\\label{t:hyperparameters}\n\\centering\n\\begin{tabular}{llccccc}\n\\toprule\n& Hyperparameter & Cartpole & Reacher & Pusher & Half-cheetah & Ant \\\\\n\\midrule\n\\multirow{4}{*}{Model} & Hidden layers & 3 & 3 & 3 & 4 & 4 \\\\\n& Hidden size & 200 & 200 & 200 & 200 & 400 \\\\\n& Epochs & 500 & 500 & 100 & 300 & 600 \\\\\n& Batch Size & 32 & 32 & 32 & 128 & 400 \\\\\n\\midrule\n\\multirow{6}{*}{DEEN} & Hidden layers & 3 & 3 & 3 & 5 & 3 \\\\\n& Hidden size & 200 & 200 & 200 & 500 & 300 \\\\\n& Epochs & 500 & 500 & 100 & 100 & 800 \\\\\n& Batch Size & 32 & 32 & 32 & 32 & 64 \\\\\n& Noise scale $\\sigma$ & 0.1 & 0.1 & 0.1 & 0.37 & 0.9 \\\\\n& Cost multiplier $\\alpha$ & 0.001 & 0.001 & 0.01 & 0.05 & 0.035 \\\\\n\\midrule\nCEM & Iterations & 5 & 5 & 5 & 5 & 7 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\section{Related Work}\n\nThis paper is inspired by the recent works on planning with learned dynamics models. PILCO \\cite{deisenroth2011pilco} is a well-known method for sample-efficient learning in low-dimensional control problems. However, it has been difficult to scale such methods to high-dimensional problems. \\citet{nagabandi2018neural} used neural networks as dynamics models for planning to demonstrate sample-efficient learning on challenging continuous control problems and further fine-tuned the learned policy using model-free methods for good asymptotic performance. \\citet{mishra2017prediction} introduced deep generative models based on convolutional autoregressive models and variational autoencoders \\cite{kingma2013auto} and used them for gradient-based trajectory optimization and policy optimization. \\citet{chua2018deep} used ensembling of probabilistic dynamics models and trajectory sampling to deal with inaccuracies of learned dynamics models and demonstrated sample-efficient learning to achieve high asymptotic performance on challenging continuous control problems. While the use of ensembling in \\cite{chua2018deep} allows for a better dynamics model, energy-based regularization also bounds the transitions of the agent to be similar to its previous experience, which might be important for safety-critical applications. While a KL divergence penalty between action distributions has been used in previous works \\cite{levine2013guided, kumar2016optimal}, energy-based regularization also bounds the familiarity of states. \\citet{boney2019regularizing} proposed to use denoising autoencoders to prevent trajectory optimization from exploiting modelling inaccuracies and demonstrated sample-efficient learning from very few episodes, using gradient-based trajectory optimization. \\citet{hafner2018learning} introduced a novel latent dynamics model for planning in high-dimensional observation spaces, such as images. Recent works on model-based RL have also explored Dyna-style \\cite{sutton1990integrated} architectures where learned dynamics models are used to generate data to train model-free methods \\cite{kurutach2018modelensemble, clavera2018model, ha2018recurrent}.\n\nEnergy-based models can be directly used for planning by sampling future trajectories from the generative model. 
\\citet{du2019implicit} showed that energy-based models can be used to sample diverse predictions of future trajectories and this was later extended to model-based planning in \\cite{du2019model}. DEEN \\cite{saremi2018deep} could also be directly used for planning by sampling future trajectories using the novel walk-jump sampling algorithm introduced in \\cite{saremi2019neural}. However, sampling from such models at each timestep for planning is expensive and we instead use a separate forward dynamics model for directly predicting the future trajectories but only use the energy-based model for regularization. This can be seen as an ensemble of two different kinds of models.\n\n\n\\section{Conclusion}\n\nPlanning with learned dynamics models is challenging because planning can exploit the inaccuracies in the model to produce over-optimistic trajectories. In this paper, we propose to regularize planning using energy estimates of state transitions in the environment. We use a recently proposed energy estimation method called DEEN for this purpose. We demonstrated that an energy estimation network can be trained on the past experience of pre-trained dynamics models to further improve planning. We also demonstrated that the energy regularization enables sample-efficient learning on challenging tasks such as Half-cheetah and Ant, in just a few minutes of interaction.\n\nOne of the limitations of the proposed and related model-based planning algorithms is the additional hyperparameter tuning required for learning dynamics models. AutoML algorithms can potentially be used to automate the training of effective dynamics models by splitting the replay buffer into a training set and a validation set and optimizing the prediction performance on the validation set. This could enable automatic architecture and hyperparameter tuning of dynamics models using more computational resources, without any additional data or human supervision. This would be an interesting line of future work.\n\n\n\n\\clearpage\n\n\\acknowledgments{We would like to thank Saeed Saremi for valuable discussions about his work on deep energy estimator networks and neural empirical Bayes.}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nIn diverse families of strongly correlated electron systems, including cuprates, iron-pnictides, and heavy fermion compounds, superconductivity is often found near a quantum critical point (QCP) where a magnetic phase vanishes in the limit of zero temperature, pointing to a magnetic glue as the source of electron pairing \\cite{Mathur,Shibauchi2014,Keimer}. In these materials, microscopic coexistence of superconducting and magnetically ordered phases both involving the same charge carriers is a striking example of unusual emergent electronic phases. Moreover, superconductivity is frequently strongest near the QCP, suggesting that the proliferation of critical magnetic excitations emanating from the QCP plays an important role in Cooper pairing. Despite tremendous research, however, the entangled relationship between superconductivity and magnetism has remained largely elusive. \n\n\\begin{figure}[b]\n\\begin{center}\n\\includegraphics[width=1.0\\linewidth]{Fig1.eps}\n\\caption{\n(a) Schematic figure of the interaction between $d$-wave superconductivity (SC) and static antiferromagnetic (AFM) order via the interface. (b) Interaction between two competing orders under pressure near a quantum critical point (QCP), where AFM order disappears. 
\n(c) High resolution cross-sectional TEM image for the CeCoIn$_5$($5$)\/CeRhIn$_5$($5$) superlattice. (d) The TEM image of the boxed area in (c).\n}\n \\end{center}\n \\end{figure}\n\nRecently, the realization that interactions between superconducting electrons and bosonic excitations through an atomic interface can have a profound influence on Cooper-pair formation has raised the exciting possibility of a new route to controlling superconductivity. For instance, when a monolayer of FeSe is grown on a SrTiO$_3$ substrate, the interaction between FeSe electrons and SrTiO$_3$ phonons via the interface enhances the pairing interaction, giving rise to the highest transition temperature $T_c$ among iron-based superconductors \\cite{Huang,DHLee,JJLee,Rademaker}. This discovery raises the possibility of a magnetic analogue in which the pairing interaction is influenced by magnetic fluctuations through an interface between an unconventional superconductor and a magnetic metal. This concept is illustrated schematically in Figs.\\,1(a) and 1(b). Besides allowing a new approach to revealing the entangled relationship between magnetism and unconventional superconductivity, this concept has the advantage that magnetic excitations are tunable as a magnetic transition is driven toward zero temperature, unlike phonon excitations in SrTiO$_3$. The state-of-the-art molecular beam epitaxy (MBE) technique enables realization of this idea through fabrication of artificial Kondo superlattices with alternating, atomically thick layers of Ce-based heavy fermion superconductors and magnets \\cite{Shishido2010,Mizukami,Shimozawa2016}. These artificially engineered materials are particularly suitable systems to elucidate the mutual interaction through the interface, providing a new platform to study the interplay of competing orders. \n \nThe layered heavy fermion compounds Ce$M$In$_5$ ($M=$\\,Co, Rh) are ideal model systems in which the interplay between magnetism and superconductivity can be explored, because of their high purity and small energy scales \\cite{Thompson,Kenzelman,Knebel2010}. They have similar Fermi surface structures and similar pressure-temperature ($p$-$T$) phase diagrams. At ambient pressure, CeCoIn$_5$ is a superconductor ($T_c$=2.3\\,K) with $d_{x^2-y^2}$-wave symmetry \\cite{Izawa,Allan,Zhou}. The normal state possesses non-Fermi-liquid properties in zero field, including $T$-linear resistivity, indicative of a nearby underlying QCP \\cite{Sidorov,Nakajima}. In contrast, CeRhIn$_5$ orders antiferromagnetically at atmospheric pressure ($T_{\\rm N}$=3.8\\,K) \\cite{Bao}. Its magnetic transition is suppressed by applying pressure and the ground state becomes a purely superconducting state at $p>p^\\ast\\approx$1.7\\,GPa, indicating the presence of a pressure-induced QCP \\cite{Park2006,Knebel2008,Park2008,Shishido2005}. As disorder may seriously influence physical properties especially near a QCP, there is a great benefit in examining quantum critical systems which are stoichiometric, and hence, relatively disorder free; both compounds are among the small number of such systems. Both host a wide range of fascinating superconducting properties including an upper critical field $H_{c2}$ that is limited by extremely strong Pauli pair-breaking \\cite{Izawa,Knebel2008}. 
\n\nTo realize the hybrid heterostructures shown in Figs.\\,1(a) and 1(b), we fabricate superlattice films with alternating block layers (BLs) of $n$ unit-cell-thick (UCT) CeCoIn$_5$ and $m$-UCT CeRhIn$_5$, CeCoIn$_5$($n$)\/CeRhIn$_5$($m$).\nWe demonstrate that the pairing interaction in a $d$-wave superconductor is tuned by injecting magnetic fluctuations through the atomic interface. Moreover, we show that the pairing strength is maximized near the critical pressure where AFM order vanishes. \n\n\n\nThe hybrid superlattices CeCoIn$_5$($n$)\/CeRhIn$_5$($m$) with $c$ axis oriented structure are grown on a MgF$_2$ substrate by the MBE technique \\cite{Shishido2010,Mizukami,Shimozawa2016}. \nFigure\\,1(c) displays a high-resolution cross-sectional transmission electron microscope (TEM) image of a CeCoIn$_5$($5$)\/CeRhIn$_5$($5$) superlattice. The TEM image displayed in Fig.\\,1(d) (the boxed area in Fig.\\,1(c)) demonstrates that the Rh and Co atoms are clearly distinguished by bright and dark spots, respectively. No discernible atomic inter-diffusion between the neighboring Co and Rh layers is seen, which is also confirmed by lateral satellite peaks in an X-ray diffraction pattern. The epitaxial growth of each layer with atomic flatness is confirmed by reflection high energy electron diffraction (Fig.\\,S1 in \\cite{SM}). These results indicate the successful fabrication of epitaxial superlattices with sharp interfaces. \nHigh-pressure resistivity measurements have been performed under hydrostatic pressure up to 2.4\\,GPa using a piston cylinder cell with oil as the pressure-transmitting medium.\n\n\n\n\n \\begin{figure}[t]\n \t\\begin{center}\n \t\n \t\t\\includegraphics[width=1.0\\linewidth]{Fig2.eps}\n \t\n \t\t\\caption{\n\t\n\t\t(a), (b) $p$-$T$ phase diagrams of thin films and single crystals of (a) CeCoIn$_5$ and (b) CeRhIn$_5$. \n\t\n\t\t(c) Temperature dependence of the resistivity of the CeCoIn$_5$ thin film at ambient pressure and at $p=2.1$\\,GPa. (d) and (e) show the temperature dependence of the resistivity (solid lines, left axes) and its temperature derivative $d\\rho(T)\/dT$ (dotted lines, right axes) for the CeRhIn$_5$ thin film and the CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice at ambient pressure and at $p=2.1$\\,GPa, respectively. The peak of $d\\rho(T)\/dT$ corresponds to the AFM transition. \n \t\t}\n \t\t\\label{fig:Fig2.eps}\n \t\\end{center}\n \\end{figure}\n\nFigures\\,2(a) and 2(b) depict the resistively determined $p$-$T$ phase diagrams of separate, MBE-grown epitaxial thin films of CeCoIn$_5$ and CeRhIn$_5$, whose resistivities ($\\rho$) are shown in Figs.\\,2(c) and 2(d), respectively. The $p$-$T$ phase diagrams of both films are essentially those of single crystals. \n$T_c$ (=2.0\\,K) in the CeCoIn$_5$ thin film, however, is slightly reduced from the bulk value, possibly due to strain induced by a slight lattice mismatch with the substrate, while $T_{\\rm N}$ (=3.7\\,K) of the CeRhIn$_5$ thin film is almost the same as that in a single crystal. With pressure, $T_c$ of the CeCoIn$_5$ thin film increases and shows a broad peak near $p\\sim$1.7\\,GPa. The CeRhIn$_5$ thin film undergoes a superconducting transition with no signature of an AFM transition at $p\\approx$2.1\\,GPa. Similar to CeRhIn$_5$ single crystals \\cite{Park2006,Sidorov,Park2008}, superconductivity in the thin films develops at $p\\agt$1\\,GPa where it coexists with magnetic order, and there is only a purely superconducting state at $p\\agt$2.1\\,GPa (Fig.\\,S2 in \\cite{SM}), a slightly higher pressure than in single crystals. 
\n\n\n\\begin{figure}[t]\n \t\\begin{center}\n \t\n \t\t\\includegraphics[width=1.0\\linewidth]{Fig3.eps}\n \t\n \t\t\\caption{\n\t\n\t\n\t\t(a) $p$-$T$ phase diagram of CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice. \n\t\tOut-of-plane upper critical field $H_{c2\\perp}$ normalized by $T_c$, $H_{c2\\perp}\/T_c$, measures the coupling strength of the superconductivity. \n\t\t(b) Temperature dependence of in-plane and out-of-plane upper critical fields at ambient pressure and at $p=1.8$ and 2.1\\,GPa. (c) Anisotropy of upper critical field, $H_{c2\\parallel}\/H_{c2\\perp}$, near $T_c$ of superlattices at ambient pressure and at 2.1\\,GPa, along with the data of CeCoIn$_5$ thin film. (d) Angular dependence of upper critical field of superlattice at $p=1.8$ and 2.1\\,GPa. The inset is an expanded view of the low angle region.\n \t\t}\n \t\t\\label{fig:Fig1.eps}\n \t\\end{center}\n \\end{figure}\n \n \nFigure\\,2(e) compares the $T$-dependence of $\\rho(T)$ and its temperature derivative $d\\rho(T)\/dT$ for a CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice at ambient pressure and at $p=2.1$\\,GPa. At ambient pressure, a distinct peak in $d\\rho(T)\/dT$ associated with an AFM transition can be seen at 3\\,K in addition to a superconducting transition at $\\sim1.4$\\,K \\cite{Knebel2008}. While $T_c$ and $T_{\\rm N}$ of the hybrid superlattice are lower than that of the CeCoIn$_5$ and CeRhIn$_5$ thin films, respectively, they are still larger than that of respective CeCoIn$_5$\/YbCoIn$_5$ and CeRhIn$_5$\/YbRhIn$_5$ superlattices (Fig.\\,S2 in \\cite{SM}), indicating the importance of mutual interaction between the CeCoIn$_5$ and CeRhIn$_5$ BLs. On the other hand, at $p=2.1$\\,GPa, there is no signature for magnetic order, while the superconductivity remains with slightly higher $T_c$ than at ambient pressure. In Fig.\\,3(a), we plot the $p$-dependence of $T_c$ and $T_{\\rm N}$ determined by the peak in $d\\rho(T)\/dT$. At $p\\sim2$\\,GPa, $T_c$ is a maximum, forming a dome-shaped $p$-dependence. With pressure, $T_{\\rm N}$ is suppressed gradually at low $p$, followed by a rapid suppression at $p\\agt1$\\,GPa (Fig.\\,S3 in \\cite{SM}). At $p\\agt 1.6$\\,GPa, evidence for magnetic order is hidden beneath the superconducting dome. A simple extrapolation of $T_{\\rm N}(p)$ gives a critical pressure $p_c\\sim2$\\,GPa at which the magnetic transition reaches zero temperature and $T_c$ shows a maximum. \n\nWe demonstrate that two-dimensional (2D) superconductivity is realized in CeCoIn$_5$ BLs in the whole pressure regime. Figures\\,3(b) and 3(c) depict the $T$-dependence of the upper critical field determined by the mid point of the resistive transition in a magnetic field $H$ applied parallel ($H_{c2\\parallel}$) and perpendicular ($H_{c2\\perp}$) to the $ab$ plane and the $T$-dependence of the anisotropy of upper critical fields, $H_{c2\\parallel}\/H_{c2\\perp}$, respectively. The anisotropy diverges on approaching $T_c$, in sharp contrast to the CeCoIn$_5$ thin film whose anisotropy shows little $T$-dependence up to $T_c$. This diverging anisotropy in the superlattice is a characteristic feature of 2D superconductivity, in which $H_{c2\\parallel}$ increases as $\\sqrt{T_c-T}$ due to the Pauli paramagnetic limiting, but $H_{c2\\perp}$ increases as $T_c-T$ due to orbital limiting near $T_c$. 
This result, along with the fact that the thickness of the CeCoIn$_5$ BL is comparable to the perpendicular superconducting coherence length $\\xi_{\\perp}\\sim3$--4\\,nm, indicates that each 5-UCT CeCoIn$_5$ BL effectively acts as a 2D superconductor \\cite{Mizukami}. The 2D superconductivity is reinforced by the angular variation of $H_{c2}(\\theta)$. Figure\\,3(d) and its inset show $H_{c2}(\\theta)$ below and above $p^*$. For both pressures, at $T\\ll T_c$,\n$H_{c2}(\\theta)$ in the regime $|\\theta|\\alt30^{\\circ}$ is enhanced with decreasing $|\\theta|$ and exhibits a sharp cusp at $\\theta=0$. This cusp behavior is typical for a Josephson-coupled layered superconductor \\cite{Tinkham}.\n\nWe note that in stark contrast to the CeRhIn$_5$ single crystal and our thin film, each CeRhIn$_5$ BL in the CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice is not fully superconducting even when the AFM order is suppressed under pressure, which leads to the realization of 2D superconductivity in a wide range of pressure. In fact, as shown in Fig.\\,3(d), the overall angle dependence of $H_{c2}(\\theta)$ including the cusp structure near $\\theta=0$ is observed at $p=1.8$\\,GPa, where bulk superconductivity is not observed in the CeRhIn$_5$ thin film (Fig.\\,2(b) and Fig.\\,S2 in \\cite{SM}). An essentially very similar angle dependence of $H_{c2}(\\theta)$ is observed at $p=2.1$\\,GPa above $p_c$. These results imply that 2D superconductivity occurs in the CeCoIn$_5$ BLs even above $p_c$. Moreover, in the CeRhIn$_5$(5)\/YbRhIn$_5$(5) \nsuperlattice, zero resistivity is not attained under pressure (Fig.\\,S4 in \\cite{SM}). \nWith the reduction of BL thickness, the superconductivity of CeRhIn$_5$ is strongly suppressed, in stark contrast to CeCoIn$_5$. This may be related to the incommensurate magnetic structure of CeRhIn$_5$ with ordering vector $\\bm{q}=(0.5,0.5, 0.297)$ \\cite{Bao}, in which the long-wave-length AFM fluctuations perpendicular to the layers are suppressed in CeRhIn$_5$ BLs with atomic layer thickness. In CeCoIn$_5$, on the other hand, AFM fluctuations with a different $\\bm{q}=(0.45, 0.45, 0.5)$ are dominant \\cite{Raymond}. This commensurability along the $c$ axis would be more compatible with the superlattice structure, and as a result, the superconductivity is robust against the reduction of BL thickness \\cite{Yamanaka}. \nWe here comment on the low temperature anisotropy of $H_{c2}$ of the CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice (Fig.\\,3(b)). At $p=2.1$\\,GPa, $H_{c2\\perp}$ exceeds $H_{c2\\parallel}$ at low temperatures. Such a reversed anisotropy of $H_{c2}$ has been reported in the CeRhIn$_5$ single crystal above the pressure where the AFM order disappears \\cite{Thompson,Park2008}. However, a similar reversed anisotropy ($H_{c2\\perp}>H_{c2\\parallel}$) is preserved at $p=1.8$\\,GPa, where $H_{c2\\parallel}$ exceeds $H_{c2\\perp}$ in the CeRhIn$_5$ single crystal and thin film. This indicates that the anisotropy reversal of $H_{c2}$ occurs under pressure in 5-UCT CeCoIn$_5$ BLs. 
Based on these results, we conclude that the 2D superconducting CeCoIn$_5$ BLs in CeCoIn$_5$(5)\/CeRhIn$_5$(5) are coupled by the Josephson effect in the whole pressure regime.\n\n\n\n\\begin{figure}[t]\n \t\\begin{center}\n \t\n \t\t\\includegraphics[width=1.0\\linewidth]{Fig4.eps}\n \t\n \t\t\\caption{\n\t\n\t\t(a) Out-of-plane upper critical field $H_{c2\\perp}$ normalized by the orbital-limited upper critical field at $T=0$\\,K, $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}(0)$, for the CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice, plotted as a function of the normalized temperature $T\/T_c$. Two extreme cases, i.e., the result for bulk CeCoIn$_5$, dominated by the Pauli paramagnetic effect, and the WHH curve with no Pauli effect, are also shown. (b) Pressure dependence of $H_{c2}^{\\rm orb}(0)$ of CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices with $n=4$ and 5 for ${\\bm H}$$\\parallel$$c$. For comparison, $H_{c2}^{\\rm orb}(0)$ of CeRhIn$_5$ single crystals for ${\\bm H}$$\\parallel$$a$ and that of a CeCoIn$_5$ single crystal for ${\\bm H}$$\\parallel$$c$ are shown. \nSolid and dashed arrows represent $p_c$ for the CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices and the CeRhIn$_5$ single crystal, respectively. \n \t\t}\n \t\t\\label{fig:Fig4.eps}\n \t\\end{center}\n \\end{figure}\n\n\n\nApplication of pressure leads to a drastic change in the nature of superconductivity in the hybrid superlattices. Figure\\,4(a) depicts the $T$-dependence of $H_{c2\\perp}$, normalized by the orbital-limited upper critical field at $T=0$\\,K, $H_{c2\\perp}^{\\rm orb}(0)$, which is obtained from the Werthamer-Helfand-Hohenberg (WHH) formula, $H_{c2\\perp}^{\\rm orb}(0)=-0.69T_c(dH_{c2\\perp}\/dT)_{T_c}$. We also include two extreme cases: $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}(0)$ for bulk CeCoIn$_5$ \\cite{Tayama}, in which $H_{c2}$ is dominated by Pauli paramagnetism, and the WHH curve with no Pauli effect. Pressure dramatically enhances $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}$. What is remarkable is that near the critical pressure $p_c\\sim 2$\\,GPa at which evidence for magnetic order disappears, $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}$ nearly coincides with the WHH curve, indicating that $H_{c2\\perp}$ is limited solely by orbital pair-breaking. \n\n\nThe fact that $H_{c2\\perp}$ approaches the orbital limit provides important insight into the superconductivity of the hybrid superlattice. In CeCoIn$_5$\/YbCoIn$_5$, where YbCoIn$_5$ is a conventional metal, the Pauli pair-breaking effect is weakened in the superlattice compared with the bulk due to local inversion symmetry breaking at the interfaces, which splits the Fermi surfaces with spin texture and thus effectively suppresses the Zeeman effect \\cite{Goh,Maruyama2012}. This leads to the Rashba-induced anisotropic suppression of the Zeeman effect \\cite{Shimozawa2016}, which may be partly responsible for the observed reversed anisotropy $H_{c2\\parallel}\/H_{c2\\perp}<1$ at low temperatures (Fig.\\,3(d)). However, this effect is less important in CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices compared with CeCoIn$_5$\/YbCoIn$_5$, which is evidenced by the fact that $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}(0)$ does not strongly depend on $n$ (Fig.\\,S5 in \\cite{SM}). Moreover, such an effect is not expected to have significant pressure dependence. Therefore, there must be a different mechanism that significantly enhances the Pauli-limiting field $H_{c2\\perp}^{\\rm Pauli}=\\sqrt{2}\\Delta\/g\\mu_B$, where $g$ is the $g$-factor of electrons and $\\mu_B$ is the Bohr magneton. 
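\nAs a rough numerical illustration of these two limiting fields, consider the following back-of-the-envelope sketch (our own, with assumed input values rather than data from this work):\n\\begin{verbatim}\nk_B, mu_B = 1.381e-23, 9.274e-24         # J\/K, J\/T\nT_c, slope = 2.0, -11.0                  # K and T\/K (assumed values)\nH_orb = -0.69 * T_c * slope              # WHH orbital limit: ~15 T\nDelta = 2.5 * k_B * T_c                  # assumed gap magnitude\nH_Pauli = 2 ** 0.5 * Delta \/ (2.0 * mu_B)  # Pauli limit for g = 2: ~5 T\n# If Delta grows at fixed T_c, H_Pauli rises and H_c2 can become\n# orbital-limited, as observed near p_c.\n\\end{verbatim}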
An enhancement of $H_{c2\\perp}^{\\rm Pauli}$ is not due to a dramatic suppression of $g$ by pressure, because it is highly unlikely that the Ce crystalline electric field state, which determines the $g$-factor, strongly depends on pressure. Therefore, the enhancement of $H_{c2\\perp}^{\\rm Pauli}$ is attributed to a strong increase in the superconducting gap $\\Delta$. This is supported by the observed enhancement of $H_{c2\\perp}\/T_c$ upon approaching $p_c$ shown in Fig.\\,3(a). Because $H_{c2\\perp}\\approx H_{c2\\perp}^{\\rm Pauli} \\ll H_{c2\\perp}^{\\rm orb}(0)$ in the low $p$ regime and $H_{c2\\perp}\\approx H_{c2\\perp}^{\\rm orb}(0) \\ll H_{c2\\perp}^{\\rm Pauli}$ near $p\\sim p_c$, the enhancement of $H_{c2\\perp}\/T_c$ directly indicates an enhancement of $H_{c2\\perp}^{\\rm Pauli}\/T_c$ and hence $\\Delta\/k_BT_c$. This behavior contrasts with observations on CeCoIn$_5$ single crystals, in which $H_{c2}\/T_c$ decreases with pressure. The enhancement of $\\Delta\/k_BT_c$ is a consequence of an enhanced pairing interaction. In a spin-fluctuation-mediated scenario, the pairing interaction is mainly provided by high energy spin fluctuations whose energy scale is well above $\\Delta$, while low energy fluctuations cause pair-breaking. Since the high energy fluctuations enhance $T_c$ while low energy ones reduce $T_c$, an enhanced pairing interaction can give rise to an increase of $\\Delta\/k_BT_c$ without a correspondingly large enhancement of $T_c$, which is consistent with the observed behavior. Thus, the present results demonstrate that the pairing interaction in the CeCoIn$_5$ BLs is strikingly enhanced as a result of the quantum critical magnetic fluctuations that develop in the CeRhIn$_5$ BLs and are injected into the CeCoIn$_5$ BLs through the interface. \n\n\nIt is well established that quantum fluctuations strongly influence normal and superconducting properties in many classes of unconventional superconductors. One of the most striking examples is a diverging effective quasiparticle mass $m^*$ upon approaching the QCP, as reported in cuprate, pnictide and heavy-fermion systems \\cite{Shibauchi2014,Shishido2005,Ramshaw}. Such a mass enhancement gives rise to a corresponding enhancement of $H_{c2}^{\\rm orb}$, which is proportional to $(m^*\\Delta)^2$. Here we stress that there is a fundamental difference in the present hybrid superlattices. Figure\\,4(b) depicts the $p$-dependence of $H_{c2\\perp}^{\\rm orb}$ of the CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices with $n=4$ and 5, along with the results for CeCoIn$_5$ and CeRhIn$_5$ single crystals \\cite{Park2008,Knebel2010}. In contrast to the CeRhIn$_5$ single crystal, which shows a sharp peak at the critical pressure, $H_{c2\\perp}^{\\rm orb}$ of the superlattices depends weakly on pressure with no significant anomaly at $p_c$. Compared to the monotonic decrease observed in single crystal CeCoIn$_5$, this weak dependence is consistent with an enlarged gap $\\Delta$, but the results suggest the absence of\nmass enhancement in the CeCoIn$_5$ BL. Such a behavior is in contrast to usual expectations for quantum criticality, details of which deserve further studies. \n\n\nIn summary, we have designed and fabricated the hybrid superlattice CeCoIn$_5$\/CeRhIn$_5$, formed by alternating atomically thick layers of the $d$-wave heavy fermion superconductor CeCoIn$_5$ and the AFM metal CeRhIn$_5$. 
\nThe present results demonstrate the importance of the interface, through which unconventional superconducting and nonsuperconducting magnetic layers can interact with each other. \nIn particular, the strength of the pairing interaction can be tuned by magnetic fluctuations, or paramagnons, injected through the interface, highlighting that the pairing interaction can be maximized by the critical fluctuations emanating from the magnetic QCP without an accompanying mass enhancement. The fabrication of a wide variety of hybrid superlattices paves a new way to study the entangled relationship between unconventional superconductivity and magnetism, offering a route to exploring the emergence of novel superconducting systems and the roles of their interfaces. \n\n\n\nWe thank E.-A. Kim, H. Kontani, A. H. Nevidomskyy, R. Peters, and Y. Yanase for fruitful discussions. This work was supported by Grants-in-Aid for Scientific Research (KAKENHI) (Nos. 25220710, 15H02014, 15H02106, and 15H05457) and on Innovative Areas `Topological Material Science' (No. JP15H05852) and `3D Active-Site Science' (No. 26105004) from the Japan Society for the Promotion of Science (JSPS). Work at Los Alamos National Laboratory was performed under the auspices of the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering.\n \n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\\label{intro}\nNeural networks frequently suffer from the problem of \\textit{over-parameterization}, such that the model can be compressed by a large factor to drastically reduce memory footprint, computation as well as energy consumption while maintaining similar performance. \nThis is especially pronounced for computer vision~\\citep{simonyan2014very} and speech recognition~\\citep{pratap2020massively} models, and for large text understanding models such as BERT~\\citep{devlin2018bert}. \nThe improvements obtained from intelligently reducing the number of model parameters have several benefits, such as reduction in datacenter power consumption, faster inference and reduced memory footprint on edge devices such as mobile phones {which also enables decentralized techniques, e.g., federated learning~\\citep{kairouz2019advances}.} \n\nThere are several techniques to reduce model size while maintaining similar generalization performance, such as model quantization~\\citep{polino2018model}, NAS (Neural Architecture Search)~\\citep{elsken2019neural} and model distillation through teacher-student networks~\\citep{gou2021knowledge}. \nFor the scope of this paper, we consider pruning as a technique to remove trainable weights in the network and save on computation costs for the FBNet family of models. \nThe motivations for this are two-fold. \nFirstly, state-of-the-art models such as FBNet~\\citep{wu2019fbnet} already adopt the best practices in the area of efficient hardware-aware design of convolutional neural network based models, and are widely used across different vision tasks. \nThis makes them suitable baselines to understand whether pruning can offer any performance gain over their already optimized behavior. \n{While there has been limited work on pruning for efficient convolutional network models, existing studies investigate older architectures such as EfficientNet and MobileNet~\\citep{aflalo2020knapsack} or integrate pruning into expensive techniques such as joint prune-and-architecture search~\\citep{wang2020apq}. 
}\n\nFor each of the constituent models of the FBNetV3 family (FBNetV3A, FBNetV3B, ..., FBNetV3G), we reduce the number of parameters using two pruning-based approaches: \n(1) \\textit{Global magnitude-based pruning}: \nstarting with the pre-trained model, we prune all weights whose magnitude is below a threshold chosen in order to achieve a target number of FLOPs for the pruned model; \n(2) \\textit{Uniform magnitude-based pruning}: \nstarting with the pre-trained model, we prune weights in each layer whose magnitude is below a layer-specific threshold in order to yield a pruned model achieving a target number of FLOPs with the same sparsity in each layer. \nAfter either pruning method is applied, \nwe fine-tune the pruned model until convergence is reached.\nWithin the scope of our study in this paper, we are mostly interested in the following research questions:\n\\begin{itemize}[leftmargin=*]\n \\item \\requ1: Pruning to improve the computation vs.~performance tradeoff. Can a model obtained by pruning a larger FBNetV3 model \\textbf{M1} (optimized using NAS) achieve higher generalization performance than a smaller FBNetV3 model \\textbf{M2} when the pruned model has the same number of FLOPs as \\textbf{M2}? \n \\item \\textbf{RQ2}: Pruning as an efficient paradigm. When a larger FBNetV3 model \\textbf{M1} is available and computational resources are limited, is pruning a faster and less computationally expensive approach to obtaining a model with higher accuracy at a desired computation level (FLOPs) than running a full-fledged architecture search?\n\\end{itemize}\n\\textit{Pruning to improve the computation vs.~performance tradeoff (\\requ1).}\nThere have been recent research advances in the area of building hardware-aware efficient models~\\citep{deng2020model}. \nThese can provide good generalization performance while adhering to constraints on memory, inference latency and battery power, which are often dictated by the hardware environment where inference happens. \nExperiments described in existing work on efficient vision models such as ChamNet~\\citep{dai2019chamnet}, MobileNet~\\citep{howard2017mobilenets}, EfficientNet~\\citep{tan2019efficientnet} and FBNetV2~\\citep{wan2020fbnetv2} have shown that it is possible to achieve even higher performance on standard image recognition tasks such as ImageNet~\\citep{deng2009imagenet} at a certain level of FLOPs. \nHowever, the efficient design of these models does not solve the over-parameterization problem completely, and none of these approaches study how model pruning can be performed to obtain even better trade-offs between computation and model accuracy. \nTo our knowledge, this paper is the first to investigate how to improve on the state-of-the-art in this problem space. \n\n\\textit{Pruning as an efficient paradigm (\\textbf{RQ2}).}\nIn addition to achieving state-of-the-art performance at reduced FLOPs, we are also interested in understanding how such pruned models can be obtained \\textit{inexpensively} by a machine learning practitioner who has access to existing optimized models but only limited computing resources. 
\nFor example, the FBNetV3 models are freely available through Facebook's Mobile Model Zoo\\footnote{FBNetV3 models available at \\url{https:\/\/github.com\/facebookresearch\/mobile_cv\/model_zoo\/models\/model_info\/fbnet_v2\/model_info_fbnet_v3.json}}, while EfficientNet models can be obtained from GitHub\\footnote{EfficientNet models available at \\url{https:\/\/github.com\/mingxingtan\/efficientnet}}. \nWhile the techniques needed to obtain computation- and latency-friendly models have been democratized through open-sourcing of the source code as well as the models themselves, fully applying these techniques necessitates costly operations such as finding an optimal network topology through meta-learning approaches~\\citep{you2020greedynas} and search algorithms such as Genetic Algorithms (GAs)~\\citep{goldberg1991comparative}.\n\nGiven the high degree of intractability of this problem, expensive computational resources are often needed, easily exceeding the budget available to a university research laboratory or an angel-stage startup~\\citep{zoph2016neural}. \nWhen a starting model is already available, for example through open-sourcing, the best option would be to perform a cheap modification of the model to fit a certain target FLOPs\/latency requirement. \nIn this paper, we compare the NAS approach used to train FBNetV3 models against our pruning techniques on a computational complexity metric (GPU-hours) to answer \\textbf{RQ2}.\n\n\\textit{Benchmark results.}\nIn addition to experimental outcomes answering \\requ1 and \\textbf{RQ2}, we also benchmark pruned FBNetV3 models using available open-sourced quantized sparse kernels and conduct ablation studies to obtain additional insights into pruning performance. \nThese results augment our main observations and demonstrate that, with existing hardware support, it is possible to deploy pruned cutting-edge computer vision models with practical latency reductions and to improve further on the performance vs. FLOPs trade-off.\n\nWe conduct our experiments on ImageNet, an object-recognition task with a large training dataset of 1.2 million images. \nWe show that computationally less intensive techniques such as uniform and global magnitude-based pruning of larger FBNetV3 models can yield higher test accuracies than smaller models with the same number of FLOPs. \nGiven a target computation budget for an efficient model, we show that it is more practically advantageous (both in terms of performance and running time) to simply prune the larger model than to run a neural architecture search to find the target model from scratch. \n\n{The technique we employ for pruning (unstructured sparsity) is well established; our novelty lies in studying whether efficient image recognition models such as FBNetV3 can be optimized further to improve on the FLOPs-accuracy curve. The contributions are two-fold: (1) FBNets are themselves state-of-the-art efficient vision models, and we achieve a better accuracy-FLOPs tradeoff over these models; and (2) from the standpoint of computational overhead, we significantly reduce the number of GPU-hours required to obtain such models: pruning a publicly available NAS-optimized model incurs $\\approx$4x fewer GPU-hours to achieve a target FLOPs level than training a full-fledged NAS, which yields a model with lower accuracy at the same FLOPs level.}\n\n\\textit{Paper organization.}\nThe remainder of this paper is organized as follows. 
\nIn Section~\\ref{related-work}, we describe related work in the area of efficient vision model design and also provide an introduction to different pruning techniques. \nIn Section~\\ref{experimental-setup}, we discuss our experimental setup, including a description of the baseline models and the \\textit{global} and \\textit{uniform} pruning approaches we have employed. \nSection~\\ref{results} describes our main findings, and we conclude the paper in Section~\\ref{conclusions}.\n\n\\section{Related Work}~\\label{related-work}\nWe discuss related literature in the areas of \\textit{computationally efficient vision models} and \\textit{model pruning}.\nWithin the scope of our work, we mainly focus on the inference efficiency of models, in contrast to training efficiency.\n\\par\n\\textit{Computationally efficient vision models:} Neural networks for computer vision are generally characterized by convolutional layers and fully-connected layers, along with blocks such as residual or skip connections. \nThis makes such networks resource-intensive in terms of FLOPs, which affects the memory and power consumed, and also leads to increased latency. \nIt is of paramount importance to design more efficient networks which can provide higher performance at the same FLOPs or latency level, or even to optimize them appropriately to provide the same performance at reduced FLOPs\/latency. This can be achieved either through the design of new simplified layers, for example in deep residual learning~\\citep{he2016deep}, or through explicit model compression, as in weight quantization~\\citep{polino2018model}.\nExtremely deep networks for image recognition often suffer not only from high complexity and inference latency, but also from the issue of \\textit{vanishing gradients}~\\citep{pascanu2013difficulty}. This was addressed through deep residual networks, which effectively simplified network design through skip-connections. \nMobileNets~\\citep{howard2017mobilenets} are one of the earlier approaches to building small low-latency networks, using depthwise separable convolutions with two parameters, the \\textit{width} and \\textit{resolution} multipliers. The authors demonstrate the effectiveness of MobileNets across different vision tasks, such as face embeddings and object detection. MobileNetV2~\\citep{sandler2018mobilenetv2} extends MobileNets by utilizing inverted residual filter structures and linear bottlenecks, obtaining improvements on state-of-the-art models both in terms of accuracy and computational complexity. ShuffleNets~\\citep{zhang2018shufflenet} propose dedicated residual units in which 1\\ensuremath{\\times}1\\xspace convolutions are replaced with pointwise group convolutions and channel shuffling, reducing FLOPs. \n\\par\nMore recently, the focus on building efficient neural network models has shifted to techniques that treat the design of efficient networks as a search problem, falling under the umbrella of Neural Architecture Search (NAS).\nEfficientNets~\\citep{tan2019efficientnet} propose a novel scaling method which adjusts the network's length, width, and resolution to optimize performance subject to target memory and FLOPs constraints. They also define a novel baseline that is optimized by a multi-objective neural architecture search. 
The FBNet collection of models (FBNet~\\citep{wu2019fbnet}, FBNetV2~\\citep{wan2020fbnetv2} and FBNetV3~\\citep{dai2021fbnetv3}) employs neural architecture search to obtain highly-optimized models that improve on the state-of-the-art for different visual understanding tasks. \nFBNet frames the architecture search as a differentiable meta-learning problem with gradient-based techniques, namely \\textit{DNAS} (Differentiable Neural Architecture Search)~\\citep{wu2019fbnet}, and avoids selecting the optimized model from a discrete set. \nThe subsequent entry in this collection, FBNetV2, expands the search space over conventional DNAS, and employs a masking scheme to maintain the same level of computational complexity while searching over this expanded space. \nFBNetV3 further improves on the state-of-the-art by employing Neural Architecture Recipe Search (NARS), searching over the space of not only architectures but also the corresponding recipes (which are generally hyper-parameters). In this paper, we consider FBNetV3 models as our baselines, as they are state-of-the-art. \nWe are interested in understanding whether they are over-parameterized, and in evaluating how much model pruning can improve performance at a given FLOPs level over the state-of-the-art in this family of models.\n\\par\n\\textit{Model Pruning:} Modern neural networks, particularly those processing complex sensory inputs (such as speech, vision and language) for perception applications, are often over-parameterized. \nIt is therefore to be expected that such networks can be compressed significantly while maintaining the same level of performance, at a decreased level of computation (fewer weights and reduced FLOPs), memory footprint and power consumption. Foundational efforts in this space include the \\textit{Optimal Brain Surgeon}~\\citep{hassibi1993second} and \\textit{Optimal Brain Damage}~\\citep{lecun1990optimal}. \nRecently, the idea of network pruning has been formalized through the lottery ticket hypothesis~\\citep{frankle2018lottery}, which claims that randomly initialized feed-forward networks contain winning sub-networks that, when trained in isolation, perform just as well as the trained original network on an unseen test dataset. \nModel pruning is generally of two types: unstructured and structured. \nUnstructured pruning, as the name suggests, does not adhere to any structure and prunes individual weights based on a chosen criterion (such as magnitude). This has the advantage of providing higher performance, but is difficult to accelerate in hardware, as it needs dedicated support for efficient sparse matrix multiplications. \nMeanwhile, structured pruning is the practice of removing entire groups of weights (e.g., blocks within the weight matrix, or channels in convolutional neural networks). \nThis is easier to implement without dedicated hardware support, but typically yields lower generalization performance than unstructured pruning~\\citep{yao2019balanced}. \nThere have also been several studies in the literature, for example investigating whether rewinding (training from scratch with a fixed mask) can perform just as well as fine-tuning on top of the original unpruned weights~\\citep{renda2020comparing}. 
{~\\cite{blalock2020state} provide an overview survey of recent advances and open problems in neural network pruning.}\n\\par\nIn the research area of designing efficient networks for computer vision, there has not been much focus on understanding how pruning can be applied to the current generation of models.\nMost of the literature on pruning is based on older networks such as VGGNet, ResNet~\\citep{he2016deep}, and MobileNet~\\citep{sandler2018mobilenetv2}.\nOur work improves upon these existing studies by examining how pruning can improve the FLOPs-accuracy tradeoff over existing state-of-the-art networks.\n\n\\section{Pruning Techniques and Setup}\n\\label{experimental-setup}\nIn this section, we describe the main components of our techniques and experimental setup, including \\textit{Baseline Models}, \\textit{Pruning Techniques}, \\textit{Latency Measurement} and \\textit{Metrics}. We mainly use standard splits of the ImageNet dataset; further details are in Section~\\ref{dataset} of the appendix.\n\n\\subsection{Baseline Models}\\label{baseline-models}\n\\cite{dai2020fbnetv3} address the previous limitations of NAS-based architecture search, in which one can only search over architectures given a fixed training recipe (set of hyperparameters) and thus cannot optimize over both. \nAs described in Section~\\ref{related-work}, the most recent state-of-the-art models are based on NARS (Neural Architecture-Recipe Search), and we select these as baseline models. Table~\\ref{tab:baseline-models} lists the accuracy of the FBNetV3 models~\\citep{dai2021fbnetv3} on the ImageNet classification task, along with the number of model parameters and the computational complexity in terms of FLOPs. \n\\par\nEach baseline model consists of multiple IRF (Inverted Residual Filter) blocks, which contain convolutional layers of different kernel sizes. \nFor our experiments, we are mostly interested in 1\\ensuremath{\\times}1\\xspace convolutions as potentially prunable, since within each FBNetV3 model the 1\\ensuremath{\\times}1\\xspace convolution layers constitute >80\\% of total model FLOPs for all models in the family, and the open-sourced sparsity kernel support we use for latency benchmarking is available only for fully connected layers. \nA 1\\ensuremath{\\times}1\\xspace convolution can be transformed into an equivalent fully connected layer with a few tensor reshape operations, without any significant loss of performance or latency.\n\nFor each initial and target FBNetV3 model $X$ and $Y$, where $X$ is larger than $Y$, we prune $X$ to a \\emph{sparsity level} $S$ so that the FLOP count matches that of $Y$. Pruning a linear layer to sparsity $S$ removes a fraction $S$ of its multiplications and hence saves $S \\cdot F$ FLOPs, where $F$ is the corresponding dense FLOPs (the pruned layer consumes $(1-S) \\cdot F$ FLOPs). \nThus, if $F_{1\\ensuremath{\\times}1\\xspace}(X)$ is the number of FLOPs consumed by the 1\\ensuremath{\\times}1\\xspace convolution layers and $F(X)$ is the total number of FLOPs consumed by model $X$, setting $F(Y) = F(X) - S \\cdot F_{1\\ensuremath{\\times}1\\xspace}(X)$ and solving for $S$ gives:\n\\begin{equation}\\label{flops-eq}\n S = {(F(X) - F(Y))}\/{F_{1\\ensuremath{\\times}1\\xspace}(X)}\n\\end{equation}\nHence, sparsity measures the fraction of 1\\ensuremath{\\times}1\\xspace convolution weights removed, and higher sparsity indicates a smaller model. \nFor the uniform pruning scenario, Table~\\ref{sparsity-table} shows the amount of sparsity required to prune each larger FBNetV3 model to a smaller one based on Eq.~(\\ref{flops-eq}). 
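\nTo make the arithmetic of Eq.~(\\ref{flops-eq}) concrete, the following minimal Python sketch derives a uniform sparsity target from the FLOPs budgets. The seed and target MFLOPs are taken from Table~\\ref{flops_table} (FBNetV3C as seed, FBNetV3A as target); the 1\\ensuremath{\\times}1\\xspace convolution share of FLOPs is an assumed illustrative value, consistent with the >80\\% figure quoted above.\n\\begin{verbatim}
def uniform_sparsity(flops_seed, flops_target, flops_1x1_seed):
    """Fraction of 1x1-conv weights to remove so that the pruned
    seed model matches the target model's FLOPs."""
    return (flops_seed - flops_target) / flops_1x1_seed

F_C, F_A = 557.0, 356.6   # total MFLOPs of FBNetV3C and FBNetV3A
F_1x1_C = 0.88 * F_C      # assumed: ~88% of FLOPs in 1x1 convolutions

print(round(uniform_sparsity(F_C, F_A, F_1x1_C), 3))
# ~0.409, close to the 40.7% reported for this pair in Table 1
\end{verbatim}\n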
For global pruning, Eq.~(\\ref{flops-eq}) does not hold, and we instead compute the target sparsities empirically from the layer shapes, with details provided in Section~\\ref{global_flops}.\nWe prune each larger FBNetV3 model to a discrete FLOPs target defined by the set of smaller models in the family, rather than to a continuous range of FLOPs values, as this makes it easier to compare models directly at a given computation budget. \nIf we can demonstrate that, at the same computation level, the pruned larger FBNetV3 model achieves higher performance than the smaller model with the same FLOPs, this suffices to show that we improve on the FLOPs-accuracy curve over the state-of-the-art.\n\n\\subsection{Pruning Techniques}\\label{pruning-techniques}\nIn this paper, we start from a pre-trained FBNetV3 model with a higher number of FLOPs rather than training an image classification model from scratch with sparsity, which would be time-consuming and computationally intensive. There are several approaches in the literature, such as prune-and-fine-tune~\\citep{han2015learning} and iterative pruning with sparsity scheduling~\\citep{frankle2018lottery}. \nWe utilize the former in our experiments: although studies have shown that iterative and incremental pruning approaches lead to better generalization performance, they typically require training for a large number of epochs, need tuning and selection of optimal sparsity schedules, and are computationally resource-intensive. {For our prune-and-fine-tune experiments, we use 8-GPU boxes, with each box having Nvidia V100 (Volta) 32G GPUs.}\nAs described in Section~\\ref{intro}, we perform both global and uniform magnitude-based pruning experiments. For the latency benchmarking, we also perform magnitude-based uniform pruning with a sparse block size of $1\\times4$, as explained in Section~\\ref{latency}.\n\nWe conducted hyper-parameter tuning for the learning rate, with LR {values in the set} \\{4e-5, 8e-5, 1.6e-4\\}, as fine-tuning generally admits smaller learning rates than training from scratch. We found that using the same learning rate for all models, along with the same hyper-parameter settings used for training the seed model, is sufficient to obtain pruned networks which are superior to the baseline FBNetV3 models. Hence minimal hyper-parameter tuning was required for our experiments, and we kept settings such as weight decay and momentum the same as those used for training the baseline FBNetV3 models. During fine-tuning after pruning, we use a smoothed validation loss to stop the process early once a convergence tolerance (0.01\\%) is reached between two consecutive epochs. Generally, we observe fine-tuning to converge in around $\\sim$250 epochs.\n\n\\subsection{Latency Measurements and Metrics} \\label{latency}\nWe are interested not only in the sparsity level of our pruned models and their image recognition performance, but also in metrics which potentially improve due to model sparsity, such as the number of parameters, the FLOP count and the model latency. \nFor reporting model performance under pruning, we use standard image recognition metrics, namely the Top-1 and Top-5 {test} accuracies.\nWe measure overall model sparsity, which differs from the layer sparsity since we only prune 1\\ensuremath{\\times}1\\xspace convolution layers, as explained in Section~\\ref{baseline-models}. 
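\nAs a concrete illustration of the two pruning modes described in Section~\\ref{pruning-techniques}, restricted to the 1\\ensuremath{\\times}1\\xspace convolutions, the following is a minimal PyTorch sketch. It shows the masking logic only, using the stock \\texttt{torch.nn.utils.prune} utilities, and is not the exact implementation used in our experiments.\n\\begin{verbatim}
import torch.nn as nn
import torch.nn.utils.prune as prune

def one_by_one_convs(model):
    """The 1x1 convolution modules, the only layers we prune."""
    return [m for m in model.modules()
            if isinstance(m, nn.Conv2d) and m.kernel_size == (1, 1)]

def uniform_magnitude_prune(model, sparsity):
    # Uniform: every 1x1 conv layer is pruned to the same sparsity.
    for conv in one_by_one_convs(model):
        prune.l1_unstructured(conv, name="weight", amount=sparsity)

def global_magnitude_prune(model, sparsity):
    # Global: weights of all 1x1 conv layers are ranked together,
    # so the resulting per-layer sparsity can vary.
    params = [(conv, "weight") for conv in one_by_one_convs(model)]
    prune.global_unstructured(
        params, pruning_method=prune.L1Unstructured, amount=sparsity)
\end{verbatim}\nFine-tuning then proceeds as usual: the masks are applied automatically during forward passes, and \\texttt{prune.remove} can be called afterwards to make the pruned weights permanent.\n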
\nWe report the model FLOPs, because this metric captures the computational footprint of the model and its power consumption. \n\nLast, we record the total latency (in ms) under pruning. The sparse kernels used in our experiments are open-sourced and released under the PyTorch sparse quantization library\\footnote{https:\/\/github.com\/pytorch\/pytorch\/blob\/master\/torch\/ao\/nn\/sparse\/quantized\/linear.py}. Prior to using these kernels, we perform uniform layer-wise block-based pruning with a block size of $1\\times4$. Magnitude-based pruning is implemented at the block level, and the model is quantized to 8-bit integers (int8) before latency benchmarking{, which is performed on Intel CPUs designed using the Skylake micro-architecture.}\nWhile we would expect sparsity to translate into tangible inference speedups, this is highly dependent on the sparse kernel support provided by the hardware. \nCurrent hardware is not well-suited to unstructured, randomly sparse matrix multiplications and tends to do better with structured sparsity~\\citep{anwar2017structured}. We have therefore utilized block sparsity within the weight matrix for the latency experiments.\nHowever, this often comes at the cost of decreased model performance. \nThe design of highly performant sparse models under structured sparsity with reasonable inference speedups remains an important research topic outside the scope of this paper.\n
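For reference, the $1\\times4$ block mask used in this benchmarking pipeline can be sketched as follows. This is a simplified sketch: it assumes the input dimension is divisible by 4, omits the int8 quantization step, and the function name is illustrative. A 1\\ensuremath{\\times}1\\xspace convolution weight of shape (out, in, 1, 1) is first viewed as an (out, in) matrix, matching the fully-connected form expected by the sparse kernels.\n\\begin{verbatim}
import torch

def block_1x4_mask(weight, sparsity):
    """0/1 mask over 1x4 blocks of an (out, in) weight matrix."""
    out_f, in_f = weight.shape
    blocks = weight.reshape(out_f, in_f // 4, 4)
    scores = blocks.abs().sum(dim=-1)    # L1 norm of each block
    k = int(sparsity * scores.numel())   # number of blocks to drop
    if k == 0:
        return torch.ones_like(weight)
    thresh = scores.flatten().kthvalue(k).values
    keep = (scores > thresh).unsqueeze(-1)  # drop blocks at/below it
    return keep.expand_as(blocks).reshape(out_f, in_f).to(weight.dtype)
\end{verbatim}\n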
\n\n\\section{Results}~\\label{results}\n\\subsection{Pruned FBNetV3 model performance}\\label{pruning_performance}\nTo answer \\textbf{RQ1}, we consider the family of FBNetV3 models as baselines and seed models for further pruning. For each pair of models $X$, $Y$ in the family, we calculate the amount of sparsity required to prune the larger model $X$ to a model that consumes the same number of FLOPs as the target smaller model $Y$, via Equation~\\ref{flops-eq}.\nThere are 21 potential seed and target model pairs; for tractability, we conduct pruning experiments only up to a depth of 2. For example, given FBNetV3E as the seed, we only prune it to the FLOPs targets corresponding to FBNetV3D and FBNetV3C.\nTable~\\ref{flops_table} presents the accuracy and number of parameters of the pruned models at each target FLOPs level. The improvement in performance is apparent even at lower FLOPs targets, where we might expect baseline models such as FBNetV3A not to be over-parameterized. \nFor example, pruning FBNetV3C to a target of 356.6 MFLOPs yields a network which is 1.43\\% better than FBNetV3A. Figure~\\ref{flops-curve} plots the Top-1 ImageNet test accuracy vs. FLOPs for the best pruned models from Table~\\ref{flops_table}. This clearly shows that pruning FBNetV3 models with minimal fine-tuning can significantly improve on the state-of-the-art FLOPs vs. Top-1 accuracy trade-off. \nThis analysis is performed for both the uniform layer-wise and the global magnitude-based prune-and-fine-tune settings. Global pruning ranks the weights of the entire network, in contrast to uniform layer-wise pruning, which ranks each layer's weights separately to determine the sparsity mask. One would expect global pruning to perform better than uniform pruning at the same target sparsity level or number of non-sparse parameters. However, in our experiments we determine the pruning threshold based on FLOPs targets and find that global pruning requires higher sparsity levels, which results in uniform pruning outperforming global pruning in Top-1 ImageNet accuracy in most cases.\n\\begin{table}[]\n\\centering\n\\caption{Sparsity level (in percentage) and performance of pruned FBNetV3 networks on the ImageNet dataset for different target MFLOPs. The best accuracy obtained at each target FLOPs level is highlighted in bold.}\n\\label{sparsity-table}\n\\label{flops_table}\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Seed\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ MFLOPs\\end{tabular}}} &\n \\multirow{2}{*}{\\begin{tabular}[c]{@{}l@{}}Baseline\\\\ Accuracy\\end{tabular}} &\n \\multicolumn{3}{c|}{Uniform pruning} &\n \\multicolumn{3}{c|}{Global pruning} \\\\ \\cline{5-10} \n\\multicolumn{1}{|c|}{} &\n \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} &\n &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\begin{tabular}[c]{@{}l@{}}Top-1 \\\\ Acc.\\end{tabular} &\n \\multicolumn{1}{c|}{Gain(\\%)} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Top-1 \\\\ Acc.\\end{tabular}} &\n Gain(\\%) \\\\ \\hline\nB & A & 356.6 & 79.6 & 26.59 & 80.308 & 0.887 & 39.5 & 80.232 & 0.793 \\\\ \\hline\nC & A & 356.6 & 79.6 & 40.7 & \\textbf{80.738} & 1.43 & 57.9 & \\textbf{80.476} & 1.1 \\\\ \\hline\nC & B & 461.6 & 80.2 & 19.4 & 80.996 & 0.992 & 28.9 & 80.998 & 0.985 \\\\ \\hline\nD & B & 461.6 & 80.2 & 31.47 & \\textbf{81.116} & 1.142 & 43.7 & \\textbf{81.08} & 1.097 \\\\ \\hline\nD & C & 557.0 & 80.8 & 15.04 & 81.278 & 0.591 & 21.5 & \\textbf{81.208} & 1.256 \\\\ \\hline\nE & C & 557.0 & 80.8 & 31.0 & \\textbf{81.282} & 0.596 & 43.6 & 81.184 & 0.475 \\\\ \\hline\nE & D & 644.4 & 81.0 & 17.8 & 81.118 & 0.145 & 25.8 & 81.388 & 0.479 \\\\ \\hline\nF & D & 644.4 & 81.0 & 38.2 & \\textbf{82.00} & 1.234 & 67.8 & \\textbf{81.484} & 0.597 \\\\ \\hline\nF & E & 762.0 & 81.3 & 29.8 & \\textbf{82.19} & 1.094 & 54.7 & \\textbf{81.97} & 0.824 \\\\ \\hline\nG & E & 762.0 & 81.3 & 71.67 & 81.166 & -0.16 & 85.5 & 79.934 & -1.68 \\\\ \\hline\nG & F & 1181.6 & 82.0 & 49.69 & \\textbf{82.528} & 0.643 & 63.8 & \\textbf{82.454} & 0.553 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[h] \n\\centering\n\\includegraphics[scale=0.40]{new_pruned_paper.pdf}\n\\caption{FLOPs vs. performance (ImageNet Top-1 acc.) for different pruned FBNetV3 networks. For comparison, the existing FBNetV3 networks are also shown.}\n\\label{flops-curve}\n\\end{figure}\n\n\\subsection{Pruning Complexity}\nIn addition to demonstrating the improvement over the state-of-the-art obtained by pruning FBNetV3 models, \nit is also important to quantify the reduction in computational complexity from pruning a larger FBNetV3 model compared to training an FBNetV3 model directly through NAS (Neural Architecture Search). \n\\textbf{RQ2} (pruning as an efficient paradigm) asks if the pruning and subsequent fine-tuning approach of Section~\\ref{pruning_performance} is faster than a full-fledged neural architecture search. 
\nDuring pruning and subsequent fine-tuning, we train the pruned networks until the validation loss converges to within a pre-specified tolerance, as described in Section~\\ref{pruning-techniques}.\nThe time needed is generally less than that for training the original FBNetV3 models, which run for 400 epochs. \nThe number of GPU-hours is computed as (number of training GPU nodes) $\\times$ (number of GPUs per node) $\\times$ (training time to convergence) for each network.\nIn Table~\\ref{gpu-hours}, for each of the best-performing uniformly-pruned models in Section~\\ref{pruning_performance}, we report the number of GPU-hours consumed by the prune-and-fine-tune strategy, along with the GPU-hours consumed when obtaining an FBNetV3 model through architecture search using the method described in~\\cite{dai2020fbnetv3}. \nThe results are quite conclusive: we not only obtain pruned models superior in performance to the original neural-search-optimized models, but also, as described in Section~\\ref{intro}, the computational cost is significantly lower when starting from a pre-trained model with higher FLOPs. \nGiven the performance improvements obtained with lower computational resources, this approach is beneficial in experimental settings where researchers have access to open-sourced pre-trained models but limited GPU resources, for example in a small startup or an academic environment. \nWe observe that the degree of speedup decreases as the network size grows (e.g., FBNetV3A vs. FBNetV3C) due to the higher training time to convergence.\nNevertheless, we still obtain a speedup of 3-5 times compared to a full NAS (Neural Architecture Search). \n\n\\begin{table}[]\n\\centering\n\\caption{Computation speedup in terms of GPU-hours when comparing NAS (Neural Architecture Search) with our pruning and fine-tuning approach. {The selected seed networks are drawn from those in Table~\\ref{flops_table} with the best performance at the target FLOPs.}}\n\\label{gpu-hours}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\begin{tabular}[c]{@{}c@{}}Target FLOPs \\\\ (FBNetV3 Model)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours \\\\ in NAS\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours\\\\ in pruning \\\\ and fine-tuning\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Computational cost\\\\ speedup\\end{tabular}} \\\\ \\hline\n356.6 (FBNetV3A) & 10.7k & 2.240k & 4.77 \\\\ \\hline\n557.0 (FBNetV3C) & 10.7k & 2.496k & 4.28 \\\\ \\hline\n762.0 (FBNetV3E) & 10.7k & 3.456k & 3.09 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\subsection{Latency Experiments}\nWe also measure the latency-performance tradeoff for the pruned FBNetV3G models. FBNetV3G is the largest model in the family and is therefore expected to have the best generalization performance under high sparsity levels. \nAs described in Section~\\ref{latency}, we prune the network using block sparsity (with a block size of $1\\times4$) to sparsity levels in the set \\{40\\%, 50\\%, 60\\%\\}. \nWe do not use lower sparsity levels, as we observe that for the selected kernels we need at least 40\\% sparsity to yield any significant latency benefits. \nHere we prune all 1\\ensuremath{\\times}1\\xspace convolution layers uniformly and subsequently convert them to fully-connected layers for compatibility with the quantized sparse kernels. \nIn Figure~\\ref{latency-curve}, we present the Top-1 ImageNet accuracy vs. 
latency curve after pruning the FBNetV3G network to different sparsity levels. \nAs expected, the pruned FBNetV3G models trade accuracy for latency: a sparsity level of 60\\% translates to around a 7\\% absolute accuracy reduction for a latency reduction of 18 ms (16\\% relative). While the 1\\ensuremath{\\times}1\\xspace convolution layers account for >80\\% of FLOPs, they constitute only 25\\% of the overall network latency. \nThis is consistent with previous literature~\\citep{dudziak2020brp}, which shows that computational complexity (e.g., FLOPs) and latency are not well correlated, and that the latter is more dependent on layer shapes. \nThis result underscores the need to develop more latency-friendly pruning techniques which can potentially improve on the state-of-the-art in this domain.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{latency}\n \\caption{Latency vs. Top-1 accuracy on ImageNet}\n \\label{latency-curve}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{sparsity_pattern.png}\n \\caption{Layer-wise sparsity pattern for FBNetV3E}\n \\label{sparsity_pattern}\n \\end{subfigure} \\\\\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{flops_curve.png}\n \\caption{Layer-wise FLOPs distribution for FBNetV3E}\n \\label{flops_pattern}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{boxplots}\n \\caption{Performance distribution by layer type}\n \\label{sensitivity_pattern}\n \\end{subfigure}\n \\caption{Latency benchmarking on FBNetV3G for different sparsity levels \\{40\\%, 50\\%, 60\\%\\} and layer-wise sparsity\/FLOPs\/accuracy sensitivity for a pruned FBNetV3E network.}\n \\label{fig:three graphs}\n\\end{figure}\n\\subsection{Insights into pruning experiments}\nOur pruning experiments demonstrate that we can improve on the state-of-the-art FBNetV3 models in generalization performance at a given FLOPs level. In this subsection, we obtain insight into\n(1) the sparsity pattern under global magnitude-based pruning and \n(2) the sensitivity of each layer when pruned in isolation under uniform layer-wise magnitude pruning (sparsity level of 95\\%). For (1), in Figure~\\ref{sparsity_pattern} we plot the amount of sparsity obtained per 1\\ensuremath{\\times}1\\xspace convolution layer. The model considered is an FBNetV3E network pruned to a sparsity level of 43.6\\% (the same FLOPs level as FBNetV3C) and subsequently fine-tuned. We note that the sparsity level in the lower layers is lower, which is potentially required for maintaining performance. Higher sparsity can be admitted in the upper layers of the network, where the learnt representations are more redundant. SE (Squeeze and Excitation) 1\\ensuremath{\\times}1\\xspace convolution layers generally tend to get pruned more than other layers, with the sparsity exceeding 99\\% for two such SE layers in stage $xif5\\_0$. This indicates that the role of SE layers in FBNetV3 networks deserves revisiting, and entire layers could even be removed in future work to yield additional latency and FLOPs benefits. \n
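The per-layer sparsity pattern plotted in Figure~\\ref{sparsity_pattern} can be extracted directly from a pruned checkpoint; the following is a minimal sketch (the metric is simply the fraction of exactly-zero weights per 1\\ensuremath{\\times}1\\xspace convolution).\n\\begin{verbatim}
import torch.nn as nn

def layer_sparsities(model):
    """Fraction of exactly-zero weights in each 1x1 convolution."""
    stats = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d) and m.kernel_size == (1, 1):
            w = m.weight.detach()
            stats[name] = (w == 0).float().mean().item()
    return stats
\end{verbatim}\nPlotting these fractions against layer depth reproduces the trend discussed above.\n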
\nFor analysis (2), we prune each 1\\ensuremath{\\times}1\\xspace convolution layer in isolation at a sparsity target of 95\\% and record the Top-1 test accuracy obtained on the ImageNet dataset. For each type of layer (PW: expansion, PWL: bottleneck, SE: Squeeze-Excitation), we plot the distribution of accuracies in Figure~\\ref{sensitivity_pattern}. We observe that the PW and PWL layers are most sensitive to high sparsity, while SE layers retain performance adequately. We could also avoid pruning the most sensitive layers (appearing as outliers in the figure) to maintain generalization performance. This observation corroborates the findings from analysis (1) and motivates us to revisit the role of squeeze-excitation layers in future work. \n\n\\section{Conclusions}~\\label{conclusions}\nIn this paper, we have investigated the problem of improving on the current state-of-the-art FLOPs vs. performance trade-off for FBNets, which are pre-optimized by NAS (Neural Architecture Search). We have employed network pruning techniques, and our results demonstrate that we can further improve performance over FBNetV3 at a given FLOPs target through both global and uniform magnitude-based pruning. This holds not only for relatively over-parameterized networks such as FBNetV3G, but also for smaller networks such as FBNetV3A which have lower computational complexity. On average, the GPU-hours incurred during pruning are about $\\sim\\!\\!4\\times$ less than those consumed by a full-scale NAS. We have also performed latency measurements on the FBNetV3G model and conducted analyses to understand the sparsity patterns and the sensitivity of different FBNetV3 layers to pruning. For future work, we plan to investigate squeeze-excitation layers in more detail, and to explore structured pruning approaches such as channel and layer pruning to further improve the latency-performance tradeoff for this family of models. \n
\n{While there has been limited work on pruning for efficient convolution network models they investigate older architectures such as EfficientNet and MobileNet~\\citep{aflalo2020knapsack} or integrate pruning into expensive techniques such as joint prune-and-architecture search~\\citep{wang2020apq}. }\n\nFor each of the constituent models of the FBNetV3 family (FBNetV3A, FBNetV3B,..., FBNetV3G) we reduce the number of parameters using two pruning based approaches: \n(1) \\textit{Global magnitude-based pruning}: \nStarting with the pre-trained model, we prune all weights whose magnitude is below a threshold chosen in order to achieve a target number of FLOPs for the pruned model; \n(2) \\textit{Uniform magnitude-based pruning}: \nStarting with the pre-trained model, we prune weights in each layer whose magnitude is below a level-specific threshold in order to yield a pruned model achieving a target number of FLOPs with the same sparsity in each layer. \nAfter either pruning method is applied, \nwe fine-tune the pruned model for a certain number of epochs until convergence is reached.\nWithin the scope of our study in this paper, we are mostly interested in the following research questions:\n\\begin{itemize}[leftmargin=*]\n \\item \\requ1: Pruning to improve computation vs.~performance tradeoff. Can a model obtained by pruning a larger FBNetV3 model \\textbf{M1} (optimized using NAS) achieve higher generalization performance than a smaller FBNetV3 model \\textbf{M2} when the pruned model has the same number of FLOPs as \\textbf{M2}? \n \\item \\textbf{RQ2}: Pruning as an efficient paradigm. When a larger FBNetV3 model \\textbf{M1} is available and computational resources are limited, is pruning a faster and less computationally expensive approach to obtain a model with higher accuracy at a desired computation level (FLOPs) than running a full-fledged architecture search?\n\\end{itemize}\n\\textit{Pruning to improve computation vs.~performance tradeoff (\\requ1).}\nThere have been recent research advances in the area of building hardware-aware efficient models~\\citep{deng2020model}. \nThese can provide good generalization performance while adhering to constraints on memory, inference latency and battery power, which are often dictated by the hardware environment where inference happens. \nExperiments described in existing work on efficient vision models such as ChamNet~\\citep{dai2019chamnet}, MobileNet~\\citep{howard2017mobilenets}, EfficientNet~\\citep{tan2019efficientnet} and FBNetV2~\\citep{wan2020fbnetv2} have shown that it is possible to achieve even higher performances on standard image recognition tasks such as ImageNet~\\citep{deng2009imagenet} at a certain level of FLOPs. \nHowever the efficient design of these models does not solve the over-parameterization problem completely, and none of these approaches study how model pruning can be performed to obtain even better trade-offs between computation and model accuracy. \nThis paper is the first of its kind to understand how we can improve on the state-of-the-art in this problem space. \n\n\\textit{Pruning as an efficient paradigm (\\textbf{RQ2}).}\nIn addition to achieving state-of-the-art performance with reduced FLOPs, we are also interested in understanding how such pruned models can be obtained \\textit{inexpensively} with limited resources that are generally available to a machine learning practitioner who has access to existing optimized models but limited computing resources. 
\nFor example, the FBNetV3 models are freely available through Facebook's Mobile Model Zoo\\footnote{FBNetV3 models available here \\url{http:\/\/https:\/\/github.com\/facebookresearch\/mobile_cv\/model_zoo\/models\/model_info\/fbnet_v2\/model_info_fbnet_v3.json}}, while EfficientNet models can be obtained at GitHub\\footnote{EfficientNet models available here \\url{https:\/\/github.com\/mingxingtan\/efficientnet}}. \nWhile the techniques needed to obtain computation- and latency-friendly models have been democratized through open-sourcing the source code as well as the models themselves, fully applying these techniques necessitates costly operations such as finding an optimal network topology through meta-learning approaches~\\citep{you2020greedynas} and search algorithms such as Genetic Algorithms (GAs)~\\citep{goldberg1991comparative}.\n\nGiven the high-degree of intractability of this problem, expensive computational resources are often needed in this case, easily exceeding the budget available to a university research laboratory or an angel-stage startup~\\citep{zoph2016neural}. \nWhen a starting model is already available, for example through open-sourcing, the best option would be to perform a cheap modification of the model to fit a certain target FLOPs\/latency requirement. \nIn this paper we have compared the NAS approaches for training FBNetV3 models with our pruning techniques on a computational complexity metric (GPU-hours) to effectively answer \\textbf{RQ2}.\n\n\\textit{Benchmark results.}\nIn addition to experimental outcomes for answering \\requ1 and \\textbf{RQ2}, we also benchmark pruned FBNetV3 models using available open-sourced quantized sparse kernels and conduct ablation studies to obtain additional insights into pruning performance. \nThese results augment our main observations and demonstrate that with existing hardware support, it is possible to deploy pruned cutting-edge computer vision models with practical latency reductions and improve further beyond the performance vs. FLOPs trade-off.\n\nWe conduct our experiments on ImageNet, which is an object-recognition task on a large training dataset of 1.2 million images. \nWe show that computationally less intensive techniques such as uniform and global magnitude-based pruning of larger FBNetV3 models can yield higher test accuracies than small models while having the same number of FLOPs. \nGiven a target computation budget for an efficient model, we show that it is more practically advantageous (both in terms of performance and running time) to simply prune the larger model than run a neural architecture search to find the target model from scratch. \n\n{The technique we have employed for pruning (unstructured sparsity) is already tried and tested, however our novelty lies in studying whether efficient image recognition models such as FBNetV3 can be optimized further to improve on the FLOPs-accuracy curve, and the contributions are two fold : (1) FBNets are themselves state-of-the-art in efficient vision models and we achieve better accuracy-FLOPs tradeoff over these models and (2) from the standpoint of computational overhead, we significantly reduce the amount of GPU hours required to obtain such models. Pruning a publicly available NAS optimized model incurs $\\approx$4x less GPU hours to achieve a target FLOPs level, compared to training a full-fledged NAS to obtain a model which has less accuracy at the same FLOPs level.}\n\n\\textit{Paper organization.}\nThe remainder of this paper is organized as follows. 
\nIn Section~\\ref{related-work}, we describe related work in the area of efficient vision model design and also provide an introduction to different pruning techniques. \nIn Section~\\ref{experimental-setup}, we discuss our experimental setup, including a description of the baseline models and the \\textit{global} and \\textit{uniform} pruning approaches we have employed. \nSection~\\ref{results} describes our main findings and we conclude the paper in Section~\\ref{conclusions}.\n\n\\section{Related Work}~\\label{related-work}\nWe discuss related literature in the areas of \\textit{computationally efficient vision models} and \\textit{model pruning}.\nWithin the scope of our work, we mainly focus on inference efficiency of models in contrast to training efficiency.\n\\par\n\\textit{Computationally efficient vision models:} Neural networks for computer vision are generally characterized by convolutional layers and fully-connected layers, along with blocks such as residual or skip connections. \nThis makes such networks resource intensive in terms of FLOPs, which affects the memory storage and power consumed, and also leads to increased latency. \nIt is of paramount importance to design more efficient networks which can provide higher performance for the same FLOPs or latency level, or even to optimize them appropriately to provide the same performance at reduced FLOPs\/latency. This can be performed either through the design of new simplified layers, for example in deep residual learning~\\citep{he2016deep} or though explicit model compression as in weight quantization~\\citep{polino2018model}.\nExtremely deep networks for image recognition often suffer from not only high complexity and inference latency, but also from the issue of \\textit{vanishing gradients}~\\citep{pascanu2013difficulty}. This was addressed through deep residual networks which effectively simplified network design through skip-connections. \nMobileNets~\\citep{howard2017mobilenets} are one of the earlier approaches to building small low-latency networks by using depthwise separable convolutions with two parameters, \\textit{width} and \\textit{resolution} multipliers. They demonstrate the effectiveness of MobileNets across different vision tasks, such as face embeddings and object detection. MobileNetV2~\\citep{sandler2018mobilenetv2} extends MobileNets by utilizing inverted residual filter structures and linear bottlenecks, obtaining improvements on state-of-the-art models both in terms of accuracy and computational complexity. ShuffleNets~\\citep{zhang2018shufflenet} propose dedicated residual units where 1\\ensuremath{\\times}1\\xspace convolutions are replaced with pointwise group convolutions and channel shuffling reducing FLOPs computations. \n\\par\nMore recently, the focus on building efficient neural network models has shifted to techniques that treat the design of efficient networks as a search problem, falling under the umbrella of Neural Architecture Search (NAS).\nEfficientNets~\\citep{tan2019efficientnet} propose a novel scaling method which adjusts the network's length, width, and resolution to optimize performance subject to target memory and FLOPs constraints. They also define a novel baseline that is optimized by a multi-objective neural architecture search. 
The FBNet collections of models---FBNet~\\citep{wu2019fbnet}, FBNetV2~\\citep{wan2020fbnetv2} and FBNetV3~\\citep{dai2021fbnetv3}---employ neural architecture search to obtain highly-optimized models that improve on the state-of-the-art for different visual understanding tasks. \nFBNet frames the architecture search as a differentiable meta-learning problem with gradient based techniques, namely \\textit{DNAS}---Differentiable Neural Architecture Search---by \\cite{wu2019fbnet}, and avoids selecting the optimized model over a discrete set. \nThe subsequent entry in this collection, FBNetV2, expands the search space over conventional DNAS, and employs a masking scheme to maintain the same level of computational complexity while searching over this expanded space. \nFBNetV3 further improves on the state-of-the-art by employing Neural Architecture Recipe Search (NARS) and searching over the space of not only architectures, but also corresponding recipes (which are generally hyper-parameters). In this paper, we consider FBNetV3 models as our baselines as they are state-of-the-art. \nWe are interested in understanding if they are overparameterized and evaluate how much model pruning can improve performance at a certain FLOPs level over the state-of-the-art in this family of models.\n\\par\n\\textit{Model Pruning:} Modern neural networks, particularly those processing complex sensory inputs (such as speech, vision and language) for perception applications, are often over-parameterized. \nIt is only to be expected that we should be able to compress such networks significantly to maintain the same level of performance at decreased level of computation (fewer weights and reduced FLOPs), memory footprint and power consumption. Foundational efforts in this space include the \\textit{Optimal Brain Surgeon}~\\citep{hassibi1993second} and \\textit{Optimal Brain Damage}~\\citep{lecun1990optimal}. \nRecently the idea of network pruning has been formalized through the lottery ticket hypothesis~\\citep{frankle2018lottery}, which claims that randomly initialized, feed-forward networks have winning sub-networks that perform just as well as the original network on an unseen test dataset. \nModel pruning is generally of two types: unstructured and structured pruning. \nUnstructured pruning, as the name suggests, doesn't adhere to any structure and prunes neurons based on chosen criteria (such as magnitude). This has the advantage of providing higher performance, but is difficult to implement in hardware, as it needs dedicated support for efficient sparse matrix multiplications. \nMeanwhile, structured pruning is the practice of removing entire groups of neurons (e.g., blocks within the weight matrix, or channels in convolutional neural networks). \nThis is easy to implement without dedicated hardware support, but has the issue of lower generalization performance than unstructured pruning~\\citep{yao2019balanced}. \nIn the literature, there have also been several studies, for example investigating whether rewinding (training from scratch with a fixed mask) can perform just as well as the fine-tuning on top of the original unpruned network~\\citep{renda2020comparing}. 
{~\\cite{blalock2020state} provide an overview survey of recent advances and open problems in neural network pruning.}\n\\par\nIn the research area of designing efficient networks for computer vision, there has not been much focus on understanding how pruning can be applied to the current generation of models.\nMost literature on pruning is based on older networks such as VGGNet, ResNet~\\citep{he2016deep}, and MobileNet~\\citep{sandler2018mobilenetv2}.\nOur work improves upon these existing studies by understanding how pruning can improve the FLOPs-accuracy tradeoff over existing state-of-the-art networks.\n\n\\section{Pruning Techniques and Setup}\n\\label{experimental-setup}\nIn this section, we describe the main components of our techniques and experimental setup, including \\textit{Baseline Models}, \\textit{Pruning Techniques}, \\textit{Latency Measurement} and \\textit{Metrics}. We have mainly used standard splits of the ImageNet dataset, further details are in Section~\\ref{dataset} of the appendix.\n\n\\subsection{Baseline Models}\\label{baseline-models}\n\\cite{dai2020fbnetv3} address the previous limitations of NAS-based architecture search where these approaches can only search over architectures given a training recipe (set of hyperparameters), and thus cannot optimize over both. \nAs described in Section~\\ref{related-work}, the most recent state-of-the-art models are based on NARS (Neural Architecture-Recipe Search), which we select as baseline models. Table~\\ref{tab:baseline-models} lists the accuracy of FBNetV3 models~\\citep{dai2021fbnetv3} on the ImageNet classification task, along with the number of model parameters and computation complexity in terms of FLOPs. \n\\par\nEach baseline model consists of multiple IRF (Inverted Residual Filter) blocks, which contain convolutional layers of different kernel sizes. \nFor our experiments, we are mostly interested in 1\\ensuremath{\\times}1\\xspace convolutions as potentially prunable, since within each FBNetV3 model, the 1\\ensuremath{\\times}1\\xspace convolution layers constitute >80\\% of total model FLOPs for all models in the family, and the open-sourced sparsity kernel support we use for latency benchmarking is available only for fully connected layers. \nA 1\\ensuremath{\\times}1\\xspace convolution can be transformed into an equivalent fully connected layer with a few tensor reshape operations without any significant loss of performance or latency.\n\nFor each initial and target FBNetV3 model $X$ and $Y$, where $X$ is larger than $Y$, we prune $X$ to a \\emph{sparsity level} of $S$ so that the FLOP count is the same as for $Y$. The number of FLOPs consumed by a linear layer of sparsity $S$ is proportional to the number of sparse matrix multiplications performed and is given by $S * F$, where $F$ is the corresponding dense FLOPs. \nThus if $F_{1\\ensuremath{\\times}1\\xspace}(X)$ is the number of FLOPs consumed by the 1\\ensuremath{\\times}1\\xspace convolution layers and $F(x)$ is the total number of FLOPs consumed by model $X$, we have:\n\\begin{equation}\\label{flops-eq}\n S = {(F(X) - F(Y))}\/{F_{1\\ensuremath{\\times}1\\xspace}(X)}\n\\end{equation}\nHence, sparsity measures the fraction of 1\\ensuremath{\\times}1\\xspace convolution weights removed, and so \nhigher sparsity indicates a smaller model. \nFor the uniform pruning scnario, Table~\\ref{sparsity-table} shows the amount of sparsity required to prune each larger FBNetV3 model to a smaller one based on Eq.~(\\ref{flops-eq}). 
For global pruning, (\\ref{flops-eq}) does not hold, and we compute the target sparsities empirically from the layer shapes instead with details provided in Section~\\ref{global_flops}.\nWe prune each larger FBNetV3 model to a discrete FLOPs target based on a defined set of smaller models in the family, and not to a continuous range of FLOPs values, as it makes it easier to compare models directly based on a given computation budget. \nIf we can demonstrate that for the same computation level, the pruned larger FBNetV3 model has higher performance than a smaller model with the same FLOPs, it is sufficient to demonstrate that we can improve on the FLOPs-accuracy curve over the state-of-the-art.\n\n\\subsection{Pruning Techniques}\\label{pruning-techniques}\nIn this paper, we utilize a pre-trained FBNetV3 model with higher number of FLOPs without training an image classification model from scratch with sparsity, which would be time consuming and computationally intensive. There are several approaches in the literature such as prune-and-fine-tune~\\citep{han2015learning} and iterative pruning with sparsity scheduling~\\citep{frankle2018lottery}. \nWe have utilized the former for our experiments, as although studies have shown that iterative and incremental pruning approaches lead to better generalization performance, they typically require training for high number of epochs, need tuning and selection of optimal sparsity schedules and are computationally resource intensive. We have therefore not considered them in our experiments. {For our prune and fine-tune experiments, we have used 8-GPU boxes, with each box having Nvidia V100 (Volta) 32G GPUs.}\nAs described in Section~\\ref{intro}, we perform both global and magnitude-based pruning experiments. For the latency benchmarking, we also perform magnitude-based uniform pruning with a sparse block size of $1\\times4$ as explained in Section~\\ref{latency}.\n\nWe have conducted a hyper-parameter tuning for the learning rate parameter, with LR {values in the set} \\{4e-5, 8e-5, 1.6e-4\\}, as fine-tuning generally admits smaller learning rates than training from scratch. We have found that using the same learning rate for all models, along with the same hyper-parameter settings used for training the seed model is sufficient to obtain pruned networks which are superior to the baseline FBNetV3 models. Hence minimal hyper-parameter tuning was required for our experiments and we have used values of settings such as weight decay and momentum to be the same as those used for training the baseline FBNetV3 models. During fine-tuning after pruning, we have used a smoothed validation loss to stop the process early after a convergence tolerance (0.01\\%) is reached between two consecutive epochs. Generally, we have observed fine-tuning to converge around $\\sim$250 epochs\n\n\\subsection{latency measurements and Metrics} \\label{latency}\nWe are interested not only in the sparsity level of our pruned models and the image recognition performance, but also in metrics which potentially improve due to model sparsity, such as number of parameters, the FLOP count and the model latency. \nFor reporting model performance under pruning, we use standard image recognition metrics such as Top-1 and Top-5 {test} accuracies.\nWe measure overall model sparsity, which is different to the layer sparsity since we \nonly prune 1\\ensuremath{\\times}1\\xspace convolution layers, as explained in Section~\\ref{baseline-models}. 
\nWe report the model FLOPs, because this metric captures the computational footprint of the model and its power consumption. \n\nLast, we record the total latency (in ms) under pruning. The sparse kernels used in our experiments are open-sourced as part of the PyTorch sparse quantization library\\footnote{https:\/\/github.com\/pytorch\/pytorch\/blob\/master\/torch\/ao\/nn\/sparse\/quantized\/linear.py}. Prior to using these kernels, we perform uniform layer-wise block-based pruning with block sizes of $1\\times4$. Magnitude-based pruning is implemented at block level, and the model is quantized to 8-bit integers (int8) before latency benchmarking, which is performed on Intel CPUs designed using the Skylake micro-architecture.\nWhile we would expect sparsity to translate to tangible inference speedups, this is highly dependent on the sparse kernel support provided by hardware. \nCurrent hardware is not well-suited for unstructured, randomly sparse matrix multiplications and tends to do better with structured sparsity in models~\\citep{anwar2017structured}. We have therefore utilized block sparsity within the weight matrix for our latency experiments.\nHowever, this often comes at the cost of decreased model performance. \nThe design of highly performant sparse models under structured sparsity with reasonable inference speedups remains an important research topic outside the scope of this paper.\n\n\n\\section{Results}~\\label{results}\n\\subsection{Pruned FBNetV3 model performance}\\label{pruning_performance}\nTo answer \\textbf{RQ1}, we consider the family of FBNetV3 models as baselines and seed models for further pruning. For each pair of models $X$, $Y$ in the family, we calculate the amount of sparsity required to prune the larger model $X$ to a model that consumes the same number of FLOPs as the target smaller model $Y$, via Equation~\\ref{flops-eq}.\nThere are 21 potential seed and target model pairs; however, for tractability we conduct pruning experiments only for a depth of 2. For example, given FBNetV3E as the seed, we only prune it to FLOPs targets corresponding to FBNetV3D and FBNetV3C.\nTable~\\ref{flops_table} presents the accuracy and number of parameters of the pruned models at each target FLOPs level. The improvement in performance is apparent even at lower FLOPs targets, where we might expect baseline models such as FBNetV3A to not be over-parameterized. \nFor example, pruning FBNetV3C to a target of 356.6 MFLOPs obtains a network which is 1.43\\% better than FBNetV3A. Figure~\\ref{flops-curve} plots the Top-1 ImageNet testing accuracy vs. FLOPs for the best pruned models as seen from Table~\\ref{flops_table}. This clearly shows that pruning FBNetV3 models with minimal fine-tuning can significantly improve on the state-of-the-art FLOPs vs. Top-1 accuracy trade-off. \nThis analysis is performed for both uniform layer-wise and global magnitude-based prune-and-fine-tune settings. Global pruning ranks the weights of the entire network, in contrast to uniform layer-wise pruning, which ranks each layer's weights to determine the sparsity mask. It would be expected that global pruning performs better than uniform pruning for the same target sparsity level or number of non-sparse parameters.
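Schematically, the two ranking schemes differ as sketched below (a PyTorch-style illustration with random weights and assumed layer shapes; not our training code):\n\\begin{verbatim}\nimport torch\n\ndef uniform_masks(weights, sparsity):\n    # uniform layer-wise: rank weights within each layer separately\n    masks = {}\n    for name, w in weights.items():\n        k = int(sparsity * w.numel())\n        t = w.abs().flatten().kthvalue(k).values if k > 0 else -1.0\n        masks[name] = w.abs() > t\n    return masks\n\ndef global_masks(weights, sparsity):\n    # global: rank all prunable weights of the network jointly\n    flat = torch.cat([w.abs().flatten() for w in weights.values()])\n    k = int(sparsity * flat.numel())\n    t = flat.kthvalue(k).values if k > 0 else -1.0\n    return {name: w.abs() > t for name, w in weights.items()}\n\nlayers = {'pw': torch.randn(64, 32), 'pwl': torch.randn(32, 64)}\nu, g = uniform_masks(layers, 0.4), global_masks(layers, 0.4)\n\\end{verbatim}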
However, in our experiments we determine the pruning threshold based on FLOPs targets, and find global pruning to require higher sparsity levels, which results in uniform pruning outperforming global pruning in Top-1 ImageNet accuracy in most cases.\n\\begin{table}[]\n\\centering\n\\caption{Sparsity level (in percentage) and performance of pruned FBNetV3 networks on the ImageNet dataset for different target MFLOPs. The best accuracy obtained at each target FLOPs level is highlighted in bold.}\n\\label{sparsity-table}\n\\label{flops_table}\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Seed\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ MFLOPs\\end{tabular}}} &\n \\multirow{2}{*}{\\begin{tabular}[c]{@{}l@{}}Baseline\\\\ Accuracy\\end{tabular}} &\n \\multicolumn{3}{c|}{Uniform pruning} &\n \\multicolumn{3}{c|}{Global pruning} \\\\ \\cline{5-10} \n\\multicolumn{1}{|c|}{} &\n \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} &\n &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\begin{tabular}[c]{@{}l@{}}Top-1 \\\\ Acc.\\end{tabular} &\n \\multicolumn{1}{c|}{Gain(\\%)} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Top-1 \\\\ Acc.\\end{tabular}} &\n Gain(\\%) \\\\ \\hline\nB & A & 356.6 & 79.6 & 26.59 & 80.308 & 0.887 & 39.5 & 80.232 & 0.793 \\\\ \\hline\n C & A & 356.6 & 79.6 & 40.7 & \\textbf{80.738} & 1.43 & 57.9 & \\textbf{80.476} & 1.1 \\\\ \\hline\n C & B & 461.6 & 80.2 & 19.4 & 80.996 & 0.992 & 28.9 & 80.998 & 0.985 \\\\ \\hline\n D & B & 461.6 & 80.2 & 31.47 & \\textbf{81.116} & 1.142 & 43.7 & \\textbf{81.08} & 1.097 \\\\ \\hline\n D & C & 557.0 & 80.8 & 15.04 & 81.278 & 0.591 & 21.5 & \\textbf{81.208} & 1.256 \\\\ \\hline\n E & C & 557.0 & 80.8 & 31.0 & \\textbf{81.282} & 0.596 & 43.6 & 81.184 & 0.475 \\\\ \\hline\n E & D & 644.4 & 81.0 & 17.8 & 81.118 & 0.145 & 25.8 & 81.388 & 0.479 \\\\ \\hline\n F & D & 644.4 & 81.0 & 38.2 & \\textbf{82.00} & 1.234 & 67.8 & \\textbf{81.484} & 0.597 \\\\ \\hline\n F & E & 762.0 & 81.3 & 29.8 & \\textbf{82.19} & 1.094 & 54.7 & \\textbf{81.97} & 0.824 \\\\ \\hline\nG & E & 762.0 & 81.3 & 71.67 & 81.166 & -0.16 & 85.5 & 79.934 & -1.68 \\\\ \\hline\n G & F & 1181.6 & 82.0 & 49.69 & \\textbf{82.528} & 0.643 & 63.8 & \\textbf{82.454} & 0.553 \\\\ \\hline\n\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[h] \n\\centering\n\\includegraphics[scale=0.40]{new_pruned_paper.pdf}\n\\caption{FLOPs vs. performance (ImageNet Top-1 acc.) for different pruned FBNetV3 networks. For comparison, the existing FBNetV3 networks are also shown.}\n\\label{flops-curve}\n\\end{figure}\n\n\\subsection{Pruning Complexity}\nIn addition to demonstrating the improvement over the state-of-the-art obtained by pruning FBNetV3 models, \nit is also important to quantify the reduction in computational complexity obtained in pruning a larger FBNetV3 model compared to training an FBNetV3 model directly through NAS (Neural Architecture Search). \n\\textbf{RQ2} (pruning for efficient model search) asks if the pruning and subsequent fine-tuning approach in Section~\\ref{pruning_performance} is faster than a full-fledged neural architecture search.
\nDuring pruning and subsequent fine-tuning, we train the pruned networks until the validation loss converges to within a pre-specified tolerance, as described in Section~\\ref{pruning-techniques}.\nThe time needed is generally less than when training the original FBNetV3 models, which run for 400 epochs. \nThe number of GPU-hours is computed as (number of training GPU nodes) * (number of GPUs per node) * (training time to convergence) for each network.\nIn Table~\\ref{gpu-hours}, for each of the best-performing uniformly-pruned models in Section~\\ref{pruning_performance} we report the number of GPU-hours consumed by the prune-and-fine-tune strategy, along with the GPU-hours consumed when obtaining an FBNetV3 model through architecture search using the method described in~\\cite{dai2020fbnetv3}. \nThe results are quite conclusive---we not only obtain pruned models superior in performance to the original neural-search-optimized models, but, as described in Section~\\ref{intro}, the computational cost is also significantly lower when starting from a pre-trained model with higher FLOPs. \nGiven the performance improvements obtained with lower computational resources, this approach is beneficial for experimental settings where researchers have access to open-sourced pre-trained models but limited GPU resources, for example in a small startup or an academic environment. \nWe observe that the degree of speedup decreases as the network size grows (e.g., FBNetV3A vs. FBNetV3C) due to the higher training time to convergence.\nNevertheless, we still obtain a speedup of 3-5 times compared to a full NAS (Neural Architecture Search). \n\n\\begin{table}[]\n\\centering\n\\caption{Computation speedup in terms of GPU-hours when comparing NAS (Neural Architecture Search) with pruning and fine-tuning approaches. The selected seed networks are drawn from those in Table~\\ref{flops_table} with the best performance at target FLOPs.}\n\\label{gpu-hours}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\begin{tabular}[c]{@{}c@{}}Target FLOPs \\\\ (FBNetV3 Model)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours \\\\ in NAS\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours\\\\ in pruning \\\\ and fine-tuning\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Computational cost\\\\ speedup\\end{tabular}} \\\\ \\hline\n356.6 (FBNetV3A) & 10.7k & 2.240k & 4.77 \\\\ \\hline\n557.0 (FBNetV3C) & 10.7k & 2.496k & 4.28 \\\\ \\hline\n762.0 (FBNetV3E) & 10.7k & 3.456k & 3.09 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\subsection{Latency Experiments}\nWe also measure the latency-performance tradeoff for the pruned FBNetV3G models. FBNetV3G is the largest model in the family and so is expected to have the best generalization performance under high sparsity levels. \nAs described in Section~\\ref{latency}, we prune the network using block sparsity (where the block size is $1\\times4$) to sparsity levels in the set \\{40\\%, 50\\%, 60\\%\\}. \nWe have not utilized lower sparsity levels, as we have observed that for the selected kernels we need at least 40\\% sparsity to yield any significant latency benefits. \nWe have pruned all 1\\ensuremath{\\times}1\\xspace convolution layers uniformly here and subsequently converted them to fully-connected layers for compatibility with the quantized sparse kernels. 
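These two preparation steps can be sketched as follows; this is an illustrative toy example with assumed tensor shapes (the production kernels are those of the PyTorch library cited in Section~\\ref{latency}):\n\\begin{verbatim}\nimport torch\n\n# (i) uniform 1x4 block magnitude pruning of a 2-D weight matrix\ndef block_prune_1x4(w, sparsity):\n    out_f, in_f = w.shape             # in_f assumed divisible by 4\n    blocks = w.reshape(out_f, in_f \/\/ 4, 4)\n    scores = blocks.abs().sum(dim=-1)      # one score per 1x4 block\n    k = int(sparsity * scores.numel())\n    if k > 0:\n        thresh = scores.flatten().kthvalue(k).values\n        blocks = blocks * (scores > thresh).unsqueeze(-1)\n    return blocks.reshape(out_f, in_f)\n\n# (ii) a 1x1 convolution acts as a linear layer on the channel axis\nconv_w = torch.randn(32, 16, 1, 1)         # (C_out, C_in, 1, 1)\nlinear_w = block_prune_1x4(conv_w.squeeze(-1).squeeze(-1), 0.5)\nx = torch.randn(8, 16, 7, 7)\ny = torch.einsum('oc,nchw->nohw', linear_w, x)   # pruned 1x1 conv\n\\end{verbatim}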
In Figure~\\ref{latency-curve}, we present the Top-1 ImageNet accuracy vs. latency curve after pruning the FBNetV3G network for different sparsity levels. \nThe pruned FBNetV3G models show a marked performance reduction along with lower latency, as expected, with a sparsity level of 60\\% translating to around 7\\% absolute accuracy reduction for a latency reduction of 18 ms (16\\% relative). While the 1\\ensuremath{\\times}1\\xspace convolution layers account for >80\\% of FLOPs, they only constitute 25\\% of overall network latency. \nThis is consistent with previous literature~\\citep{dudziak2020brp}, which shows that computational complexity (e.g., FLOPs) and latency are not well-correlated, and indeed the latter is more dependent on layer shapes. \nThis result underscores the need to develop more latency-friendly pruning techniques which can potentially improve on the state-of-the-art in this domain.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{latency}\n \\caption{Latency vs. Top-1 accuracy on ImageNet}\n \\label{latency-curve}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{sparsity_pattern.png}\n \\caption{Layer-wise sparsity pattern for FBNetV3E}\n \\label{sparsity_pattern}\n \\end{subfigure} \\\\\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{flops_curve.png}\n \\caption{Layer-wise FLOPs distribution for FBNetV3E}\n \\label{flops_pattern}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{boxplots}\n \\caption{Performance distribution for layer type}\n \\label{sensitivity_pattern}\n \\end{subfigure}\n \\caption{Latency benchmarking on FBNetV3G for different sparsity levels \\{40\\%, 50\\%, 60\\%\\} and layer-wise sparsity\/FLOPs\/accuracy sensitivity for a pruned FBNetV3E network.}\n \\label{fig:three graphs}\n\\end{figure}\n\\subsection{Insights into pruning experiments}\nOur pruning experiments demonstrate that we can improve on the state-of-the-art FBNetV3 models in generalization performance for a given FLOPs level. In this subsection, we gain insight into\n(1) the sparsity pattern under global magnitude-based pruning and \n(2) the sensitivity of each layer when pruned in isolation under uniform layer-wise magnitude pruning (sparsity level of 95\\%). For (1), in Figure~\\ref{sparsity_pattern} we plot the amount of sparsity obtained per 1\\ensuremath{\\times}1\\xspace convolution layer. The model being considered is an FBNetV3E network pruned to a sparsity level of 43.6\\%, to the same FLOPs level as FBNetV3C, and subsequently fine-tuned. We note that the sparsity level in lower layers is lower, which is potentially required for maintaining performance. Higher sparsity can be admitted in upper layers of the network, where it has learned more redundant representations. SE (Squeeze and Excitation) 1\\ensuremath{\\times}1\\xspace convolution layers generally tend to get pruned more compared to other layers, with the sparsity being >99\\% for two such SE layers in stage $xif5\\_0$. This indicates that we can also consider revisiting the role of SE layers in FBNetV3 networks, and even removing entire layers in future work to yield additional latency and FLOPs benefits. \n\nFor analysis (2), we prune each 1\\ensuremath{\\times}1\\xspace convolution layer in isolation at a sparsity target of 95\\% and record the Top-1 test accuracy obtained on the ImageNet dataset.
For each type of layer (PW: expansion, PWL: bottleneck, SE: Squeeze-Excitation), we plot the distribution of accuracies in Figure~\\ref{sensitivity_pattern}. We observe that the PW and PWL layers are most sensitive to high sparsity, while SE layers are able to retain performance adequately. We could also avoid pruning the most sensitive layers (appearing as outliers in the figure) to maintain generalization performance. This observation corroborates findings from analysis (1), and motivates us to revisit the role of squeeze-excitation layers in future work. \n\n\\section{Conclusions}~\\label{conclusions}\nIn this paper, we have investigated the problem of improving on the current state-of-the-art FLOPs vs. performance trade-off for FBNets which have been pre-optimized by NAS (Neural Architecture Search). We have employed network pruning techniques, and our results demonstrate that we can further improve on performance over FBNetV3 at a given FLOPs target through global as well as uniform magnitude-based pruning. This happens not only for relatively over-parameterized networks such as FBNetV3G, but also for smaller networks such as FBNetV3A which have lower computational complexity. On average, the GPU-hours incurred during pruning are about $\\sim\\!\\!4\\times$ less than those consumed by a full-scale NAS. We have also performed latency measurements on the FBNetV3G model and conducted an analysis to understand the sparsity patterns and sensitivity of different FBNetV3 layers to pruning. For future work, we plan to investigate squeeze-excitation layers in more detail, and explore structured pruning approaches such as channel and layer pruning to further improve on the latency-performance tradeoff for this family of models. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{}\n\\vspace{-1cm}\n\n\\footnotetext{\\textit{$^{a}$~Institut Charles Sadron, CNRS UPR22 - Universit\\'{e} de Strasbourg, Strasbourg, France; Fax: 33 (0)3 88 41 40 99; Tel: 33 (0)3 88 41 40 43; E-mail: Wiebke.Drenckhan@ics-cnrs.unistra.fr}}\n\\footnotetext{\\textit{$^{b}$~Sorbonne Universit\\'{e}s, UPMC Univ Paris 06, CNRS-UMR 7588, Institut des NanoSciences de Paris, 4 place Jussieu, 75005 Paris, France; Email: hohler@insp.upmc.fr} }\n\\footnotetext{\\textit{$^{c}$~Universit\\'{e} Gustave Eiffel, 5 Bd Descartes, Champs-sur-Marne, F-77454 Marne-la-Vall\\'{e}e cedex 2, France. }}\n\\footnotetext{\\textit{$^{d}$~TU Dortmund University, Department of Physics, 44221 Dortmund, Germany }}\n\n\\footnotetext{Electronic supplementary information (ESI) available. See DOI: 10.1039\/D1SM01109J}\n\n\\section{Introduction}\n\\label{sec:Intro}\n\nThe mechanical response of interfaces separating immiscible fluids enters into many fundamental and applied problems of topical interest. Within the current drive to describe complex liquid interfaces \\cite{Edwards1991,Rehage_RheoAct_2002,Sagis_RevModPhys_2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014}, two scientific communities meet, accustomed to treating either \\textit{drops or bubbles} with \\textit{fluid-like} interfaces, or \\textit{capsules} and \\textit{balloons} whose membranes have \\textit{solid-like} mechanical properties. \n\nThe interfacial tension of complex \\textit{fluid interfaces} of drops or bubbles commonly depends on the adsorption of surfactant molecules and on their interactions (top of Fig. \\ref{fig:interfaces}). 
Their interfacial stress is isotropic and, in the static limit, insensitive to shear deformations. Such fluid systems can present an elastic stress response to dilation in addition to the surface tension. This is commonly called \\textit{Gibbs Elasticity} if surfactant exchange between interface and bulk can be neglected. \n\nThe stress in the \\textit{solid-like membranes} bounding capsules or balloons (bottom of Fig. \\ref{fig:interfaces}) strongly depends on both shear deformation and compression away from a stress-free \"reference state\". These membranes are often thin enough for the elastic bending energy to be negligible compared to the energies associated with dilation and shear. These \"skins\" behave like 2D solids, with an elastic response characterised for small deformations by an \\textit{interfacial dilational modulus} and an \\textit{interfacial shear modulus}. \n\\par Like the physics of simple drops\/bubbles \\cite{Miller1998}, the physics of capsules\/balloons \\cite{Pozrikidis2003,Mueller_Strehlow_2004,Neubauer_ACIS_2014,Fery_Pol_2007,Sagis2015,Sagis2015a} is now quite well understood. \nHowever, \"intermediate\" systems are of increasing interest, which we shall name \"droploons\" or \"bubbloons\". Their interfacial properties combine those encountered respectively in drops\/bubbles and capsules\/balloons: interfacial tension and solid-like membrane stresses coexist. Here, the reference state is defined by the absence of a solid-like stress contribution so that only capillary stress is present. \nA multitude of bubbloon- and droploon-like systems have been investigated in the past, involving interfacially active particles, proteins, cross-linked surfactant monolayers, polymer multi-layers, polymer-surfactant mixtures, etc. \\cite{Edwards1991,Rehage_RheoAct_2002,Sagis_RevModPhys_2011,Erni2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014,Pepicelli_SocRheo_2019}. In most of these systems, liquid- and solid-like elastic contributions are intricately entangled, \ncalling for physical models and experimental approaches that help to distinguish and study these contributions. In the following, we provide a very short state of the art of relevant approaches before introducing the one taken for this article. \nInterfacial stresses may in general be of a dynamic or static nature, and they may present a plastic response depending on deformation history. Here we shall concentrate on the quasi-static response of interfaces. For more details, the reader is referred to recent books and review articles \\cite{Edwards1991,Rehage_RheoAct_2002,Miller2009,Sagis_RevModPhys_2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014,Pepicelli_SocRheo_2019}. For simplicity, we will also only talk about \\textit{droploons} and \\textit{liquid\/liquid} interfaces, but all derived concepts apply equally to \\textit{bubbloons} and \\textit{gas\/liquid} interfaces. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=7cm,keepaspectratio]{FIGURES\/LiquidSolidInterface.png}\n \\caption{Contrast between the elastic response of fluid and solid interfaces. \\textit{Top}: Dilation reduces the concentration of surfactant molecules adsorbed at a fluid interface, which creates an elastic contribution to surface stress. If the exchange of the surfactants with the bulk is inhibited, this response is static and called \"Gibbs elasticity\". \\textit{Bottom}: An elastic stress also appears when a solid-like skin covering the interface is stretched. 
\\label{fig:interfaces}}\n\\end{figure}\n\nThe development of dedicated interfacial shear rheometers has enabled reliable measurements of the \\textit{interfacial shear modulus} \\cite{Miller2009,Kragel2010}. However, the characterisation of the \\textit{dilational modulus} remains challenging due to the experimental difficulty of applying an accurately controlled homogeneous dilation to an interface, and of assessing the accuracy of the modulus measurement if the deformation is only approximately a homogeneous dilation. \\par\nRecently, Vermant and coworkers \\cite{Verwijlen_ACIS_2014} constructed a special Langmuir trough in which the surface dilation is achieved by the action of twelve fingers arranged circularly. They used this set-up to successfully investigate the static and dynamic dilational response of complex interfaces. However, in order to access the surface stresses, this technique uses a Wilhelmy balance, which introduces potential errors in the measurement due to the influence of the contact line configuration on the Wilhelmy plate. Moreover, the large surfaces required for these measurements are prone to attracting impurities, encourage evaporation, and make it challenging to work with liquid\/liquid systems. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=8.5cm]{FIGURES\/Sceme_Variables_3.png}\n\\caption{Of interest here is the inflation and deflation of spherical drops around a reference state of radius $R_0$. These drops are either isolated or attached to a capillary with circular cross-section of radius $R_n$.}\n\\label{fig:Schemes}\n\\end{figure}\n\n\\par\nSince the volume change of a sphere leads to a perfect dilation of its surface, measuring the pressure-radius relation of a small, spherical droploon should be the preferred method to determine the dilational modulus. This has been implemented for capsules using osmotic pressure variations \\cite{Gao_EPJ_2001} or acoustic pressure fields \\cite{Dollet2019}\\footnote{Note that many other techniques have been developed which squeeze initially spherical droploons between two plates, use AFM, spinning drops or investigate the deformation of droploons in controlled flow fields. However, the associated deformations are all a combination of interfacial shear and dilation, making the quantitative analysis extremely complex.}. However, these approaches introduce physico-chemical or technical complexity. It is much more convenient to study the pressure\/shape relation of drops held by a capillary with circular cross-section, a technique called \"capillary tensiometry\" or \"pressure tensiometry\" when shape or pressure analysis is used, respectively. In the past, it has been used extensively for droploons deformed under gravity, a variant called \"capillary elastometry\" \\cite{Knoche2013,Hegemann2018}. However, the drop shapes are in this case non-spherical, with complex interfacial deformations combining shear and dilational components, so that numerical fitting procedures with numerous parameters are required for the data analysis, introducing many uncertainties. 
Various improvements have been made to these approaches in the past, including improved shape fitting algorithms or combined shape\/pressure analysis \\cite{Danov_CollIntSci_2015,Nage_l2017}, yet without removing the complexity arising from the non-trivial object shape.\n\nWith the aim of identifying and validating a quantitative technique for measuring the interfacial dilational modulus, we propose here to use the simplest possible geometry: an initially spherical droploon attached to a capillary in the absence of gravity. Combining simulations and analytical modelling, we investigate how pressure-deformation relations depend on the interplay between surface tension and solid-like interfacial elasticity. \nPressure tensiometry of hemi-spherical drops has been exploited in the past \\cite{Russev_2008,Kotula_JRheo_2015}, but in all previous work homogeneous isotropic interfacial dilation was assumed. This is an uncontrolled approximation, since such an idealised deformation is incompatible with the boundary condition imposed by the attachment to the capillary. The exact bubble shape, which depends on surface tension, the elastic properties of the skin and the gas pressure, cannot be calculated analytically. We therefore perform pioneering simulations for this configuration using the Surface Evolver software - a finite element tool graciously developed and provided by Ken Brakke - in which the combined effects of interfacial tension and specific local mechanical constitutive laws can be implemented. Surface Evolver has already been successfully applied to advance our understanding of systems composed of simple drops \\cite{Weaire1999} or of capsules\/membranes without surface tension \\cite{Bouzidi_CompStruct_2004,Quilliet2016}. Surprisingly, its power has not yet been exploited to perform predictive simulations of droploon-type systems where surface tension and solid-like elasticity are combined. Since direct numerical schemes can be used for the axisymmetric droploon problem (Section \\ref{sec:ModelKierfeld}), this configuration provides an ideal benchmark test for the Surface Evolver simulations. The latter will be necessary to predict the response of more complex objects, such as droploon assemblies, where direct numerical schemes will fail.\n\nWe treat here a simple model interface, as sketched in the top row of Fig. \\ref{fig:Schemes}. We assume it to be composed of a liquid\/liquid interface of interfacial tension $\\gamma_0$, on which a permanently cross-linked, polymeric gel of thickness $h_0$ is grown. The liquid phase containing the gel is assumed to be a good solvent for the gel, such that the interfacial tension between the gel and the solvent is negligibly small. We furthermore assume that this gel layer is thick enough to be considered a bulk material with bulk shear modulus $G$ and that its mechanical response can be described by a Neo-Hookean model (Section \\ref{sec:Theory}). For this purpose, we make the simplifying assumption that the gel can be considered as incompressible, in the sense that its bulk modulus is much larger than its shear modulus. 
Last but not least, we make the assumption that the gel is dilute enough such that neither its presence nor its deformation modifies the liquid\/liquid interfacial tension, which thus remains equal to that of the pure solvent, $\\gamma_0$.\n\\par\n\n \n After a general introduction to the main theoretical concepts (Section \\ref{sec:TheoryFundamentals}), we provide exact analytical expressions for the pressure-deformation relations of spherical droploons (Section \\ref{sec:TheorySpheres}) and, for the first time, well-matching analytical approximations for droploons on capillaries (Section \\ref{sec:TheoryNeedle}). We then show how Surface Evolver can be used to provide reliable simulations of the equilibrium shapes and pressure-deformation relations of this simple physical scenario (Section \\ref{sec:SE}), and we show excellent agreement with direct numerical predictions (Section \\ref{sec:ModelKierfeld} \\cite{Knoche2013,Hegemann2018}). In Section \\ref{sec:ResultsDropsNeedles}, we combine theory and simulation to show that the main influence of the capillary results from the change in geometry and not the induced deformation anisotropy. The influence of the capillary on the pressure-volume relationship of a droploon represents a challenging and unsolved theoretical problem because of the interplay of the curved droploon equilibrium shape with the presence of a rigid inclusion, which induces anisotropic elastic deformations of the droploon. We show that this stress anisotropy is strongly localised around the capillary and provide for the first time analytical relations to estimate the parameter ranges over which the anisotropy at the capillary\n has negligible impact on the pressure-deformation relation, i.e., over which the provided analytical pressure-deformation relations may be used reliably to analyse experiments. We regularly compare with analytical predictions obtained for perfectly fluid interfaces with Gibbs elasticity as a reference. \n\n We note that in most experimental systems the interfacial stress may not only depend on deformation, but also on the exchange of surfactant molecules between the bulk and the surface, or on temperature changes. Sufficiently thick skins may also present a bending stiffness. In addition to an elastic, reversible mechanical response, viscous and plastic behavior is commonly observed. None of these effects will be considered in the present paper, which focuses on the simplest case of linear and nonlinear 2D elastic skin behavior, a case which is already challenging.\n\n\\section{Theory}\n\\label{sec:Theory}\n\n\\subsection{Theoretical framework}\n\n\\label{sec:TheoryFundamentals}\n\nSince the recent literature has seen many debates about the physically correct description of the deformation of complex interfaces, we consider it necessary to start here with a fairly general introduction to clarify our point of view before introducing the specific concepts used later in the article.\n\n\nInterfaces are characterised by the amount of \\textit{interfacial free energy per surface area}, which we will denote $f$. If the \\textit{interfacial stress} is independent of area changes, the work needed to increase the area by $dA$ is $\\gamma dA = f dA $; $f$ and $\\gamma$ are in this case equivalent quantities. However, this is no longer true if the stress and energy density are modified by interfacial area changes. This can be due to interacting surfactant molecules in a fluid-like interface (top of Fig. 
\\ref{fig:interfaces}), or due to a solid, elastic (polymer) skin adsorbed to the interface (bottom of Fig. \\ref{fig:interfaces}), or due to a mixture of both.\n\nIn this general case, the interfacial stress is no longer necessarily isotropic and its description requires a second-rank tensor $\\sigma_{ij}$, where $i,j=1,2$ specify components in a 2D Cartesian coordinate system locally tangent to the interface. Assuming that the stresses due to the liquid interfacial tension $\\gamma\\delta_{ij}$ and those due to the adsorbed elastic skin $\\tau_{ij}$ are simply additive, one may write \\cite{Jaensson_COCIS_2018}\n\\begin{equation}\n \\sigma_{ij} = \\gamma\\delta_{ij} + \\tau_{ij},\n \\label{eq:combine}\n\\end{equation}\nwhere $\\delta_{ij}$ is the Kronecker symbol with $\\delta_{ij}=1$ if $i=j$ and $\\delta_{ij}=0$ otherwise. $\\tau_{ij}$ may contain both isotropic and anisotropic contributions, in contrast to $\\gamma\\delta_{ij}$ which is purely isotropic. The additive decomposition in Eq. (\\ref{eq:combine}) should not be taken for granted: if surfactants are cross-linked or co-adsorbed with a polymeric skin, the different contributions to the interfacial stress may be hard to tell apart, not only experimentally but also conceptually. In the present paper, we will not consider this issue further. \n\n\n Any measure of interfacial strain is based on the coordinates of a given interfacial point: $X_i$ in the reference state and $x_i$ after the\ndeformation ($i=1,2,3$). From these, one may derive the displacement field $U_i(X_i)=x_i - X_i$, where $U_1$ and $U_2$ are the tangential displacements\nand $U_3$ the displacement normal to the interface.\nFor an interface with the two principal radii of curvature in the reference shape $R_{01}$ and $R_{02}$,\ndisplacements give rise to an \\textit{infinitesimal} strain tensor \\cite{LandauLifshitz}\n\\begin{equation}\n \\epsilon_{ij}=\\frac{1}{2}\\left(\\frac{\\partial U_i}{\\partial X_j}\n +\\frac{\\partial U_j}{\\partial X_i} +\n \\frac{\\partial U_3}{\\partial X_i} \\frac{\\partial U_3}{\\partial X_j}\n \\right)\n +\\delta_{ij}\\frac{U_3}{2}\\left(\\frac{1}{R_{01}}+\\frac{1}{R_{02}}\\right)\n \\label{eq:epsilon}\n\\end{equation}\ndescribing the interfacial 2D strains ($i,j=1,2$). For a spherical surface, the two principal curvature radii are equal ($R_{01}=R_{02}=R_0$) and $\\frac{1}{2}(\\frac{1}{R_{01}}+\\frac{1}{R_{02}})=\\frac{1}{R_0}$.\nThis tensor contains information about the deformation which is\ninvariant to rotation and translation \\cite{LandauLifshitz}.\n Following Kirchhoff's hypothesis \\cite{Ventsel2001}, we apply\n classical thin shell approximations, and\n neglect all strains in the plane normal to the interface,\n $\\epsilon_{i3}=\\epsilon_{3i}=0$ ($i=1,2,3$).\n Both in the Surface Evolver simulations and in the shape equation\n calculus we will employ alternative \\textit{finite strain}\n measures, which are introduced below. Their relation to the infinitesimal strain tensor is provided in Appendix \\ref{AppendixA}.\n\nFor fluid-like interfaces, stress and strain are isotropic, and in this case scalar quantities of the stress $\\sigma$ and the strain $\\epsilon$ are useful. They are defined as \n\\begin{eqnarray}\n \\sigma= \\frac{1}{2}(\\sigma_{11}+\\sigma_{22}) \\\\ \n \\epsilon =\\epsilon_{11}+\\epsilon_{22}.\n \\label{eq:isotropicStressStrain}\n\\end{eqnarray}\n$\\epsilon$ is equal to the relative variation of surface area $dA\/A$. 
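As a simple illustration of Eqs.~(\\ref{eq:epsilon}) and (\\ref{eq:isotropicStressStrain}), we add a short worked example: for a sphere of radius $R_0$ inflated to radius $R$, the displacement is purely radial, $U_1=U_2=0$ and $U_3=R-R_0$, so that\n\\begin{equation*}\n\\epsilon_{ij} = \\delta_{ij}\\,\\frac{R-R_0}{R_0}, \\qquad \\epsilon = \\epsilon_{11}+\\epsilon_{22} = \\frac{2(R-R_0)}{R_0} \\approx \\frac{dA}{A},\n\\end{equation*}\nconsistent with $A \\propto R^2$ in the small-deformation limit.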
\\par\nA rigorous description of \\textit{finite strains} can be derived either by considering nonlinear corrections to the kinematics based on the infinitesimal strain tensor \\cite{LandauLifshitz,Audoly_2010} or using the deformation gradient tensor \\cite{Beatty1987, Mal1991}\n\\begin{equation}\n F_{ij} = \\frac{\\partial x_i}{\\partial X_j},\n \\label{eq:defgrad}\n \\end{equation}\n and finally the left Cauchy-Green strain tensor\n \\begin{equation}\n B_{ij}= F_{ik} F_{jk},\n \\label{eq:defB}\n\\end{equation}\nor the right Cauchy-Green tensor\n\n\\begin{equation}\n C_{ij}= F_{ki} F_{kj},\n \\label{eq:defC}\n\\end{equation}\n which extract from $F_{ik}$ information about the strain which is independent of rotation and translation. Please note that in this paper we consider right Cauchy-Green tensors in 2 and 3 dimensions. To avoid confusion, we denote them respectively as $\\mathbf{C}$ and $\\mathcal{C}$.\n In this paper, Surface Evolver computes the strain of the surface numerically using the right Cauchy-Green tensor, whose explicit expression in the finite element method is derived in Appendix \\ref{AppendixA}. For theoretical expressions, however, we will use the left Cauchy-Green tensor, to conform to the commonly used stress-strain expression derived using the Cayley-Hamilton theorem \\cite{Macosko}. As stressed by Beatty \\cite{Beatty1987}, both tensors have identical principal values (Tr($B_{ij}$)=Tr($C_{ij}$), Tr($B_{ij}^2$)=Tr($C_{ij}^2$), det($B_{ij}$)=det($C_{ij}$)), and are hence equivalent regarding the computation of strain energy.\n \n In Eqs. (\\ref{eq:defgrad}) and (\\ref{eq:defB}), we use Einstein's summation convention: indices occurring twice are summed over. \\par\n In some models, the Hencky strain is found to be convenient. In the case of an extension that transforms a length $L$ measured in the reference state into a length $L'$, the infinitesimal strain definition in this scalar case would yield $(L'-L)\/L$, while the Hencky strain is defined as $\\ln(L'\/L)$. Extensions of the Hencky strain to the tensorial case have been discussed in the literature \\cite{Verwijlen_ACIS_2014}.\n\nTo build constitutive laws, the strain must be connected to energy density and stress.\nShuttleworth demonstrated the following general relation between surface stress $\\sigma_{ij}$ and surface energy density, assuming constant temperature \n\\cite{Shuttleworth_ProcPhysSocA_1950}\n\n\\begin{equation}\n \\sigma_{ij} = f\\delta_{ij} + \\frac{\\partial f}{\\partial \\epsilon_{ij}},\n \\label{eq:shuttleworth}\n\\end{equation}\nwhere $i,j=1,2$. $f$ combines potential energy contributions due to the excess energy of solvent molecules at the interface, of adsorbed molecules, or the elastic potential energy of the skin. \n\nIn the case of fluid interfaces without skins where the stress is isotropic, a scalar model is sufficient. By taking half of the trace of Eq. 
(\\ref{eq:shuttleworth}) and using Eqs. (\\ref{eq:isotropicStressStrain}), we obtain the average surface stress, which is equal to the surface tension \n\\begin{equation}\n \\sigma(\\epsilon) = \\gamma(\\epsilon) = f + \\frac{\\partial f}{\\partial {\\epsilon}}.\n \\label{eq:isotropicshuttleworth}\n\\end{equation}\n\nFor the more general case, we can consider a first-order expansion of $\\sigma (\\epsilon)$ around the reference state, yielding\n\\begin{equation}\n \\sigma(\\epsilon) =\\sigma(0) +K \\epsilon,\n \\label{eq:K}\n\\end{equation}\n\nwhere we have introduced the elastic dilational modulus\n\\begin{equation}\n K= \\left. \\frac{\\partial f}{\\partial\\epsilon}\\right|_{\\epsilon=0}.\n \\label{eq:liquid1}\n\\end{equation}\n \n\nIn the spirit of the Hencky strain, the following alternative definition of a dilational modulus, commonly called \"Gibbs modulus\", is often used \\cite{Mysels1961,Kitchener1962}\n\\begin{equation}\n K_G= \\frac{\\partial f}{\\partial\\ln A}.\n \\label{eq:Gibbs}\n\\end{equation}\n For infinitesimal strains, $d\\ln A =dA\/A = \\epsilon $ and both definitions (Eqs. (\\ref{eq:liquid1}) and (\\ref{eq:Gibbs})) coincide so that $K = K_G$. For finite strains, there is a distinction between $dA\/A$, where the area $A$ evolves along the deformation, and $dA\/A_0=d\\epsilon$, where $A_0$ is the area in the reference state. However, since the Gibbs modulus and the dilational modulus can vary independently as a function of strain, there is no contradiction between the two definitions. Using the Gibbs modulus and assuming its independence of strain amounts to choosing a particular type of constitutive law which appears to describe some experimental systems well \\cite{Salonen_2016,Verwijlen_ACIS_2014}. \\\\\n \n Let us now turn to interfaces with an adsorbed solid skin. Eq. (\\ref{eq:combine}) illustrates our simple hypothesis that the total surface stress is the sum of an interfacial tension and the elastic stress from the skin. To model this latter contribution, we focus on the case where plastic or viscous response is negligible, so that the stress can be derived from a mechanical potential energy. Such materials are called hyperelastic. We focus further on incompressible materials and recall that in this case, the most general constitutive law relating the three-dimensional elastic stress to deformation can be cast in the form \\cite{Beatty1987, Mal1991}\n \\begin{equation}\n \\sigma^{3D}_{ij} = -p \\delta_{ij} + \\beta_1 \\mathcal{B}_{ij} - \\beta_{-1} \\mathcal{B}_{ij}^{-1},\n \\label{eq:constitutive}\n \\end{equation}\nwhere $i,j=1,2,3$ and where $p$ is the 3D pressure. The so-called response functions $\\beta_1$ and $\\beta_{-1}$ depend on the properties of the material and must be expressed as functions of the invariants of the strain tensor to ensure frame invariance. In the simplest case, they are constants, leading to what is commonly called the \"Mooney-Rivlin\" model. It has proven successful in describing many polymers \\cite{Macosko,Mueller_Strehlow_2004}. Within this class of models, the case $\\beta_{-1} = 0$ is of particular interest. 
It leads to the so-called Neo-Hookean model where $\\beta_1$ is equal to the shear modulus $G$ \\cite{Beatty1987} so that\n\\begin{equation}\n \\sigma^{3D}_{ij} = -p \\delta_{ij} + G\\, \\mathcal{B}_{ij}.\n\\end{equation}\nThis Neo-Hookean model has been derived from a simplified microscopic description of polymer dynamics using statistical mechanics \\cite{Larson1998,Mueller_Strehlow_2004}, and it successfully describes the stress response under finite strains. Since, for moderate deformations, the Neo-Hookean model remains very close to the Mooney-Rivlin model, it is the model of choice for our simulations.\nIn the limit of small deformations, the Neo-Hookean model reduces to the well-known Hookean model of linear elastic response.\n The 3D mechanical elastic energy density of a Neo-Hookean solid can be expressed as \n \\begin{equation}\n W = \\frac{G}{2} (I_\\mathcal{B} - 3),\n \\end{equation}\n where $I_\\mathcal{B}$ is the first invariant (i.e. the trace) of the left Cauchy-Green tensor defined in Eq. (\\ref{eq:defB}). This will be useful for the simulations presented in Section \n \\ref{sec:Modelling}.\\par\n \n\n\n\\subsection{Perfectly spherical droploons}\n\\label{sec:TheorySpheres}\n\nAs given in Eq. (\\ref{eq:combine}) and sketched in Figs. \\ref{fig:interfaces} and \\ref{fig:Schemes}, we assume that the total interfacial stress can be modeled as the sum of surface tension and an elastic contribution. In the case of fluid-like interfaces, this elastic contribution is given by a Gibbs elasticity. In the case of a solid-like interface, the extra elastic stresses arise from a (Neo-)Hookean skin. \n\nIf the interface is fluid, i.e. only Gibbs elasticity is present, one can integrate Eq. (\\ref{eq:Gibbs}) assuming a constant Gibbs dilational modulus $K_G$. In the limit of negligible gravity (i.e. low density mismatch between the phases or $\\Delta\\rho gR_0^2\/\\gamma_0\\ll 1$), the reference shape of the drop is spherical and the principal radii of curvature can be assumed to be equal ($R_{01}=R_{02}\\equiv R_0$). This gives for a spherical droploon of radius $R$ \n\\begin{equation}\n \\sigma(A) = \\gamma(A) = \\gamma_0 + K_{G} \\ln{\\left( \\frac{A}{A_0}\\right)} = \\gamma_0 + 2K_{G} \\ln{\\left( \\frac{R}{R_0} \\right)}.\n \\label{eq:GibbsGamma}\n\\end{equation}\n\nFrom this, the pressure drop $\\Delta P$ across the interface is obtained via the Young-Laplace law\n \\begin{equation}\n \\Delta P = \\frac{2\\gamma}{R}.\n \\label{eq:Laplace}\n \\end{equation}\n\nIn the reference state $R=R_0$ and $\\gamma =\\gamma_0$, so that $\\Delta P_0 = 2 \\gamma_0\/R_0$.\n\nTo prepare our analysis of solid-like and fluid-like contributions, we introduce the following normalised quantities.\nWe define an \"elastocapillary number\"\n\\begin{equation}\n \\alpha = \\frac{K}{\\gamma_0},\n \\label{eq:alpha}\n\\end{equation}\nwhich compares the dilational elastic modulus $K$ to the interfacial tension $\\gamma_0$ of the reference state. $K$ is either due to Gibbs elasticity (denoted $K_G$ in this case) or to a solid-like elasticity, as given later.\n\nFor spheres, the stretch $\\lambda$ is given by\n\\begin{equation}\n\\lambda = \\frac{R}{R_0}.\n \\label{eq:areaStretch}\n\\end{equation}\n\nMoreover, we introduce the normalised interfacial stress\n\n\\begin{equation}\n\\hat{\\sigma} = \\frac{\\sigma}{\\gamma_0}. 
\\label{eq:NormalisedStress}\n\\end{equation}\nIn the case where only Gibbs elasticity is present, the total interfacial stress is therefore given by\n\\begin{equation}\n\\hat{\\sigma} = \\hat{\\gamma}= 1 + 2\\alpha \\ln{\\lambda}.\n\\label{eq:GibbsND}\n\\end{equation}\n In the small-deformation limit this reduces to \n \\begin{equation}\n \\hat{\\sigma} = \\hat{\\gamma} = 1+ 2\\alpha (\\lambda-1).\n \\label{eq:GibbsSmallDef}\n \\end{equation}\nWhatever the origin of the tension and elastic response may be, the normalised pressure is obtained using\n \n\\begin{equation}\n \\Delta \\hat{P} = \\frac{\\Delta P}{\\Delta P_0} = \\frac{\\hat{\\sigma}}{\\lambda}.\n \\label{eq:NormalisedPressure}\n\\end{equation}\n \nLet us now consider solid-like interfaces. For the case of a spherical balloon with initial skin thickness $h_0\\ll R_0$, starting from Eq. (\\ref{eq:constitutive}), Beatty \\cite{Beatty1987} derived an expression valid for any hyperelastic material \n \\begin{equation}\n \\Delta P(\\lambda)=\\frac{2\\sigma}{R}=\\frac{2 G h_0}{\\lambda R_0}\\left[ 1-\\frac{1}{\\lambda^6}\\right]\\left(\\beta_1-\\lambda^2\\beta_{-1}\\right).\n \\end{equation}\nIn the Neo-Hookean case this yields the following expression for the stress in the skin \n \\begin{equation}\n \\sigma_{Balloon}= G h_0\\left[ 1-\\lambda^{-6}\\right].\n \\end{equation}\n\nIn several more recent models of non-linear mechanical behavior, nonlinear variations of the response functions with the strain invariants are considered, as reviewed in \\cite{Horgan2015,Puglisi2015}. However, for the remainder of this paper we restrict ourselves to the use of the Neo-Hookean model.\n\n \n\n\nWe characterise the elastic skin, assumed to be isotropic and incompressible, by its 3D shear modulus $G$. To link it to the 2D dilational modulus, we note that the skin is in a state of plane stress, and that in this case\n\\begin{equation}\n \\epsilon=\\epsilon_{11}+\\epsilon_{22}=\\frac{\\sigma_{11}+\\sigma_{22}}{2 E} = \\frac{\\sigma}{h_0 E} \n\\end{equation}\nwhere $E$ is Young's modulus. Here, the biaxial stress in the solid induced by stretching is expressed as a skin tension divided by the skin thickness. In view of Eq. (\\ref{eq:K}), this means that $K=E h_0$ in the present case. For incompressible materials $E=3G$, so that for isotropic, small deformations \n\\begin{equation}\n K= 3G h_0. \n \\label{eq:SolidModulus}\n\\end{equation}\n\nIn the case of an elastic skin attached to an interface with tension $\\gamma_0$, we therefore obtain for the elastocapillary number \n \\begin{equation}\n \\alpha = \\frac{3Gh_0}{\\gamma_0}.\n \\label{eq:alphaNH}\n \\end{equation}\nThe total interfacial stress of a spherical Neo-Hookean droploon is therefore given by \n \\begin{equation}\n \\hat{\\sigma} = 1+\\frac{G h_0}{\\gamma_0}(1-\\lambda^{-6})=1+\\frac{\\alpha}{3}(1-\\lambda^{-6}).\n \\label{eq:NeoHookeTension}\n \\end{equation}\nIn the small-deformation limit one obtains the prediction of the linear elastic Hooke model\n \\begin{equation}\n \\hat{\\sigma} = 1+ 2\\alpha(\\lambda-1),\n \\label{eq:HookeTension}\n \\end{equation}\n which is identical to Eq. (\\ref{eq:GibbsSmallDef}). 
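This identity can be verified by a first-order expansion in $u=\\lambda-1$, which we spell out for completeness:\n\\begin{equation*}\n\\lambda^{-6} = (1+u)^{-6} \\simeq 1-6u \\quad\\Rightarrow\\quad 1+\\frac{\\alpha}{3}\\left(1-\\lambda^{-6}\\right) \\simeq 1+2\\alpha(\\lambda-1).\n\\end{equation*}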
This result shows that in the limit of isotropic and small deformations, both Gibbs elasticity and Neo-Hookean elasticity lead to a linear elastic response captured by Hooke's law in two dimensions with a dilational modulus $K = K_G = 3Gh_0$.\\par\n \n \\begin{table*}\n\\renewcommand{\\arraystretch}{2}\n \\centering\n \\begin{tabular}{|c|c|c|c|}\n \\hline\nSphere model & Normalised surface stress $\\hat{\\sigma}$ & Critical stretch $\\lambda_{A,c}$ & Stretch at maximum pressure $\\lambda_{A,m}$ \\\\ \\hline\nGibbs (liquid) & $1 + \\alpha \\ln{\\lambda_A}$ & $ \\exp{\\left(-\\frac{1}{\\alpha}\\right)}$ & $ \\exp{\\left(2-\\frac{1}{\\alpha}\\right)}=e^2\\lambda_{A,c}$\\\\ \\hline\nNeo-Hooke (solid) & $1 + \\frac{\\alpha}{3} (1-\\lambda_A^{-3})$ & $ \\left( \\frac{\\alpha}{\\alpha+3} \\right) ^{1\/3} $ & $\\left( \\frac{7\\alpha}{\\alpha+3} \\right)^{\\frac{1}{3}} =7^{\\frac{1}{3}}\\lambda_{A,c}$ \\\\ \\hline\nHooke & $1 + 2\\alpha(\\lambda_A^{1\/2}-1) $ & $ \\left( 1-\\frac{1}{2\\alpha} \\right)^2$ (for $\\alpha>0.5$) & no maximum \\\\\n\\hline\n \\end{tabular}\n \\caption{Summary of the normalised surface stress $\\hat{\\sigma}=\\sigma\/\\gamma_0$; the critical stretch $\\lambda_{A,c}$ at which the pressure changes sign; and the stretch at maximum pressure $\\lambda_{A,m}$ for the Gibbs, Neo-Hooke and Hooke models.}\n \\label{tab:models}\n\\end{table*}\n \nEqs. (\\ref{eq:HookeTension}) and (\\ref{eq:NormalisedPressure}) show that for $\\alpha > 1\/2$ the normalised pressure increases upon an extensional stretch, so that the total surface stress acts as a restoring force, while for $\\alpha < 1\/2$ the pressure decreases upon extension, which favors further deformation. Analogous tendencies are predicted for compression. The condition $\\alpha = 1\/2$ has therefore received particular attention and is often called the \"Gibbs criterion\", since the physical response of a system may change fundamentally around this value. This is known, for example, for the case of bubble dissolution and foam coarsening \\cite{Stocco2009,Salonen_2016}.\n\nIn the case of spheres, it is natural to express interfacial stresses and curvatures via the radial stretch $\\lambda$. However, for more general surfaces, the relationship between the two depends on the geometry of the surface. In this case it is more appropriate to express the dilational stresses via the area stretch $\\lambda_A = A\/A_0$. \nFor spheres, the relationship between area and radial stretch is simply \n\\begin{equation}\n\\lambda = \\frac{R}{R_0} = \\left( \\frac{A}{A_0} \\right)^{1\/2} = \\lambda_A^{1\/2}.\n \\label{eq:radiusStretch}\n\\end{equation}\nIn Table \\ref{tab:models} we summarise the interfacial stresses for the Gibbs, Neo-Hookean and Hookean models expressed via their area stretches, together with some critical stretches which are discussed in Section \\ref{sec:ResultsSpheres}. In the following, we will use these relations.\n\n\n\n\\subsection{Droploons on capillaries}\n\\label{sec:TheoryNeedle}\nLet us now consider droploons attached to capillaries with circular cross-section of radius $R_n$ (Fig. \\ref{fig:Schemes}). In this case one geometrically removes a cap of radius $R_n$ from the droploon and fixes the perimeter of the resulting circular hole to the end of the capillary. For fluid interfaces with Gibbs elasticity, the interfacial stresses are isotropic and constant everywhere in the interface, even if the droploon is inflated or deflated. 
Hence, the droploon shapes remain spherical sectors and, as we show below, all pressure-deformation relations can be calculated analytically, giving useful insight into the impact of the geometry change. In the case of interfaces with a solid skin, this is much less straightforward. Fixing the interface points on the capillary boundary induces shear deformation in the vicinity of the capillary upon inflation or deflation, and hence deviations from the shape of a perfect sphere. The presence of the capillary in the case of a solid-like skin therefore combines a geometrical impact (as for the Gibbs elasticity) with one of a non-isotropic deformation. Both contributions are coupled, and their relative importance depends on the elastocapillary number $\\alpha$, on the deformation $A\/A_0$ and on the capillary-to-drop size ratio $R_n\/R_0$.\\\\\nLet us assume in the following that shear stresses remain negligible and that we can estimate the droploon shape by spherical sectors derived from perfect spheres of radius $R$ from which a cap of radius $R_n$ is removed, as depicted in Fig. \\ref{fig:Schemes}. The interfacial area $A$ is then given by \n\\begin{equation}\n \\begin{split}\n A(R) & = 2\\pi R^2\\left(1 \\mp \\sqrt{1-\\left(\\frac{R_n}{R}\\right)^2}\\right),\n \\end{split}\n \\label{eq:interfacialarea01}\n\\end{equation}\nwhere the two signs correspond to droploons larger than a hemisphere (\"+\") or smaller than a hemisphere (\"-\"). The latter geometry introduces a major difference between drops with and without capillaries: the radius of the drop \\textit{increases} upon further deflation from the hemisphere. This dramatically changes the pressure-deformation relation, which is why we will exclude this case in the remaining discussion.\\\\\nEq. (\\ref{eq:interfacialarea01}) can be used to relate the area stretch $\\lambda_A$ and the radial stretch $\\lambda$ via\n\n\\begin{equation}\n \\begin{split}\n \\lambda = & \\;\\lambda_A^{1\/2}\\frac{1+\\sqrt{1-\\left(\\frac{R_n}{R_0}\\right)^2}}{\\sqrt{2\\left[1+\\sqrt{1-\\left(\\frac{R_n}{R_0}\\right)^2}\\right] -\\left(\\frac{R_n}{R_0}\\right)^2\\frac{1}{\\lambda_A}}} \\\\\n = & \\;\\lambda_A^{1\/2} \\; \\mathpzc{f}\\left( \\frac{R_n}{R_0},\\lambda_A\\right).\n \\end{split}\n \\label{eq:GeometricalCorrection}\n\\end{equation}\n\nThat is, when comparing with the full-sphere expression of Eq. (\\ref{eq:areaStretch}), the presence of the capillary introduces a correction factor $\\mathpzc{f}\\left( \\frac{R_n}{R_0},\\lambda_A\\right)$ to the relationship between the radial and the area stretch. \n\nFor a given area stretch $\\lambda_A$ - which is experimentally and computationally more easily accessible than the radial stretch $\\lambda$ - we can then rewrite the pressure-deformation relation as\n\n\\begin{equation}\n \\Delta \\hat{P}=\\frac{\\hat{\\sigma}(\\lambda_A)}{\\lambda}=\\frac{\\hat{\\sigma}(\\lambda_A)}{\\lambda_A^{1\/2}} \\mathpzc{f}^{-1} = \\Delta \\hat{P}_S \\mathpzc{f}^{-1},\n \\label{eq:PressureDefNeedle}\n\\end{equation}\nwhere $\\Delta \\hat{P}_S$ is the pressure of the sphere with the same area stretch and the interfacial stress $\\hat{\\sigma}$ is given in Table \\ref{tab:models} for the different models. Hence, in the approximation of negligible shear contributions, the capillary may be considered to impose a simple geometrical correction on the pressure-deformation relation which depends only on the relative capillary size $\\frac{R_n}{R_0}$ and the area stretch $\\lambda_A$. In the case of fluid-like interfaces (Gibbs elasticity), Eq. 
(\\ref{eq:PressureDefNeedle}) is exact, while in the case of solid-like interfaces (Neo-Hooke \\& Hooke), this is an approximation. We shall see in Section \\ref{sec:ResultsDropsNeedles} that it nevertheless remains an excellent approximation over a wide range of parameters.\n\nHere we have chosen to express the pressure-deformation relations in terms of the area stretch $\\lambda_A$, since it simplifies comparison with simulations and experiments. One may also choose to express them in terms of the radial stretch $\\lambda$. In this case it is the expression of the interfacial stress $\\hat{\\sigma}$ which needs to be modified, leading to more complex expressions. We provide these relations for the interested reader in Annex \\ref{annex:PressDefNeedle}. \n\n\n\n\n\\section{Numerical modelling}\n\\label{sec:Modelling}\n\\subsection{Surface Evolver simulations}\n\\label{sec:SE}\n\\label{Subsec:SEPrinciple}\n\nSurface Evolver~\\cite{Brakke1992} is a widely used software package that determines the equilibrium structure of systems containing several fluid phases separated by interfaces. It uses the principle that in equilibrium, the interfacial energy must be minimal under the constraints imposed by boundary conditions. Examples of this are foams where the volume of each bubble is fixed \\cite{Buffel2014,Weaire2017,Hohler2017,Ginot2019}. Surface Evolver can also be used to model elastic membranes \\cite{Bouzidi_CompStruct_2004,Quilliet2016}.\n\n\nIn Surface Evolver simulations, interfaces are represented as meshes of triangular facets whose energy is evaluated. Most previous studies on bubble or drop shapes focus on systems where this energy is proportional to the interfacial area, the proportionality factor being the surface tension $\\gamma$. In addition to this contribution, Surface Evolver simulations can also take into account an elastic energy induced by the deformation of each facet, simulating an elastic skin. Several constitutive laws are implemented in the Evolver software and can be used: Hooke's law, describing linear elastic response, as well as the non-linear Saint-Venant or Neo-Hookean laws~\\cite{Bouzidi_CompStruct_2004}. In the work reported here, we use the Neo-Hookean law introduced in Section \\ref{sec:TheoryFundamentals}. We implement, for the first time to our knowledge, an interface with both surface tension and Neo-Hookean interfacial elasticity. As a first implementation, we thoroughly compare Surface Evolver results to the numerical solution of the shape equations (Section \\ref{sec:ModelKierfeld}), and ensure that it provides physically sound results in the investigated range of parameters.\n\nIn contrast to fluid interfaces, where the interfacial area uniquely determines the energy, the energy of elastic skins depends on their deformation with respect to a reference state. The reference state of an interface element is given by a shape with zero interfacial elastic stress. This state is encoded in the reference positions of the facet vertices. The implementation of elastic stress in the framework of the Surface Evolver requires an expression of the facet deformation energy for arbitrarily large strains, given as a function of the vertex positions. A detailed presentation of this feature and the implementation of elastic energy in the Surface Evolver has not been published so far to our knowledge. We therefore provide this information in Appendix \\ref{AppendixA} to clarify for the interested reader how exactly the software operates. 
Here we shall concentrate on a very general description of the approach.

Our Surface Evolver calculations simulate an experiment where a bubble or drop is inflated at the tip of a cylindrical hollow capillary inserted into a liquid, as illustrated in Fig. \ref{fig:DroploonSimulations}. In the first step, we need to obtain a physically correct reference shape for a drop without interfacial elasticity. For this purpose, an initially very coarse mesh is attached to a cylindrical boundary representing the capillary. The interfacial area is then minimised for the given drop target volume, assuming that the interfacial energy is due only to a uniform and constant surface tension\footnote{This could represent a physical system where the elastic skin forms progressively at an initially ``naked'' interface.}. Successive refinements and energy minimisations of the mesh are then performed to simulate the drop shape and the pressure in the reference bubble accurately. When the relative variation of the total interfacial energy $|E^{n+1}-E^{n}|/E^n$ remains smaller than $10^{-8}$ over 100 iteration steps, we consider that convergence has been achieved.\\
In the second step of the simulation, an elastic skin is added to the drop surface of the obtained reference state, so that initially there is no elastic stress. Numerically, this consists of saving the current positions $\{\vec{X}_i\}$ of the vertices as their reference positions and setting a non-zero elastic modulus for the interfacial energy computation in all further minimisation iterations. How reference and current positions are used to compute the deformation is detailed in Appendix \ref{AppendixA}.\\
The third step consists in inflating or deflating this droploon to a new volume, where mechanical equilibrium is again established via progressive mesh relaxation. Frequent merging of facets significantly smaller than average and refinement of facets larger than average hastens convergence while avoiding trapping the system in local energy minima. These operations are all performed by Surface Evolver's built-in routines as part of a standard energy minimisation procedure. When the mesh management and energy minimisation have converged ($|E^{n+1}-E^{n}|/E^n<10^{-8}$), the elastic stress in the skin, the pressure in the bubble and the bubble shape are recorded.
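To make the driver logic of this three-step protocol concrete, the short Python sketch below mimics the convergence criterion we use. It is not Surface Evolver code (the actual simulations use Evolver's own scripting language), and \texttt{minimise\_step} is a hypothetical stand-in for one energy-minimisation iteration.
\begin{verbatim}
def converged(minimise_step, tol=1e-8, window=100):
    """Iterate until the relative energy change |E_{n+1}-E_n|/E_n stays
    below tol for `window` consecutive steps, mirroring the criterion
    used in our simulations."""
    E_prev = minimise_step()
    quiet = 0
    while quiet < window:
        E = minimise_step()
        quiet = quiet + 1 if abs(E - E_prev) / abs(E_prev) < tol else 0
        E_prev = E
    return E_prev

# Pseudo-driver for the three steps described above:
# 1) relax a surface-tension-only drop at the target volume -> reference shape
# 2) freeze the vertex positions as reference state, switch on the modulus
# 3) change the volume, relax again (with mesh refinement/merging), then
#    record pressure, elastic stress and shape
\end{verbatim}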
\subsection{Numerical integration of the shape equations}
\label{sec:ModelKierfeld}

We solve for the shape and stress/strain profile of an axisymmetric capsule by numerically integrating the \emph{shape equations} \cite{Hegemann2018,Knoche2013}. Because we impose axial symmetry, the droploon can be parametrised as a single arc with arc length $s$ and arc angle $\Psi$. The transformation from the arc length parametrisation to cylindrical coordinates $\{r, \phi, z\}$ gives the first two shape equations
\begin{equation}
 \label{eqn:shape_eqn_rz} \frac{\mathrm{d}r}{\mathrm{d}s} = \cos\Psi ~~~\text{and}~~~
 \frac{\mathrm{d}z}{\mathrm{d}s} = \sin\Psi \,.
\end{equation}
The remaining shape equations, needed to close this set of ordinary differential equations, take into account the constitutive material law and reflect the force balance at every point along the arc $s$. They are derived by searching for the stationary solutions of the appropriate energy functional.

\begin{figure}[h]
 \centering
 \includegraphics[width=7cm,keepaspectratio]{FIGURES/drop_visualization.pdf}
 \caption{A pendant droploon parametrised in arc length $s$ and arc angle $\Psi$.
 \label{fig:parametrization}}
\end{figure}

In the experimentally relevant setting we control either the droploon volume or the mechanical pressure at the capillary inlet. Thus, the appropriate energy functional is the enthalpy
\begin{equation}
 H = \int \mathrm{d}A_0 \, W_{2D} + \int \mathrm{d}A \, \gamma_0 - \int \mathrm{d}V \, \Delta P \,,
 \label{eqn:enthalpy_functional}
\end{equation}
with a contribution from the surface energy $W_{2D}$, measured with respect to the undeformed area $A_0$, from the surface tension $\gamma_0$ and from the volumetric work against a pressure difference $\Delta P$. We find the stationary states of the enthalpy $H$ of Eq. \eqref{eqn:enthalpy_functional} via the first variation, $\delta H = 0$ (see \cite{Knoche2013, Hegemann2018} for details), leading to the shape equations
\begin{align}
 \label{eqn:shape_eqn_psi} \frac{\mathrm{d} \Psi}{\mathrm{d}s} &= \kappa_s = \frac{1}{\sigma_s} \left(\Delta P -
 \kappa_\phi \sigma_\phi \right)
 \,, \\
 \label{eqn:shape_eqn_taus} \frac{\mathrm{d} \sigma_s}{\mathrm{d}s} &= \frac{\cos\Psi}{r} \left( \sigma_\phi - \sigma_s \right) \,,
\end{align}
where ($\kappa_s,\kappa_{\phi}$) and ($\sigma_{s},\sigma_{\phi}$) are the meridional and circumferential curvatures and surface stresses, respectively. The curvatures are given by $\kappa_\phi = {\sin \Psi}/{r}$ and $\kappa_s = {\mathrm{d}\Psi}/{\mathrm{d}s}$. Note that the shape equations \eqref{eqn:shape_eqn_rz}, \eqref{eqn:shape_eqn_psi} and \eqref{eqn:shape_eqn_taus} still require a constitutive material law for closure. At this point, no detailed knowledge of the 2D surface energy functional $W_{2D}$ is required, as we define
\begin{equation}
 \sigma_{s, \phi} = \frac{1}{\lambda_{\phi, s}}\left(\frac{\partial W_{2D}}{\partial \lambda_{s, \phi}}
 + \frac{\partial (\gamma_0 \lambda_s \lambda_\phi)}{\partial \lambda_{s, \phi}}\right) \,,
 \label{eqn:stress_energy_functional}
\end{equation}
where $\lambda_s$ and $\lambda_\phi$ are the meridional and circumferential stretch ratios of the droploon. The shape equations \eqref{eqn:shape_eqn_rz}, \eqref{eqn:shape_eqn_psi} and \eqref{eqn:shape_eqn_taus} are written in terms of the arc length $s$ of the deformed shape. For the numerical solution we reparametrise in terms of the \emph{undeformed} arc length coordinate $s_0$ of the original undeformed shape by using the relation $\mathrm{d}s / \mathrm{d}s_0 = \lambda_s$, which is necessary in order to gain access to the meridional stretches $\lambda_s$. The circumferential stretch $\lambda_\phi = r/r_0$ is given by the ratio of the deformed and undeformed radial coordinates.

The surface energy $W_{2D}$ accounts for the material-specific model and can incorporate various effects, such as film thinning.
To express the constitutive equation in terms of our parametrisation, we write the right 2D Cauchy-Green tensor, discussed in Section \ref{sec:Modelling} and in Appendix \ref{AppendixA}, as
\begin{equation}
 \mathbf{C} = \mathrm{diag}(\lambda_s^2, \lambda_\phi^2)\,.
\end{equation}
For a two-dimensional Neo-Hookean elastic material, the surface energy is given by Eq. \eqref{eq:2D energy density final} from Appendix \ref{sec:energy},
\begin{equation}
 W_{2D} =\frac{G h_0}{2} \left( \mathrm{Tr}\,\mathbf{C} + \mathcal{C}_{33} + \frac{G}{\Lambda} \mathcal{C}^2_{33}\right),
\end{equation}
with 3D Lamé parameters $G$ and $\Lambda$. Here, $\mathbf{C}$ is the 2D Cauchy-Green tensor describing deformations within the surface, while $\mathcal{C}_{33}$ is the component of the 3D Cauchy-Green tensor describing normal (thickness) deformations of the elastic skin. Requiring the absence of normal stresses, $\mathcal{C}_{33}$ becomes a function of $G/\Lambda$ and $\mathrm{det}\,\mathbf{C}$, as derived in Appendix \ref{sec:energy}.

From this surface energy, we extract the constitutive law needed to close the shape equations using Eq.\ \eqref{eqn:stress_energy_functional},
\begin{equation}
 \sigma_{s, \phi} = G h_0 \left( \frac{\lambda_{s, \phi}}{\lambda_{\phi, s}} - \frac{\mathcal{C}_{33}}{\lambda_s \lambda_\phi} \right) + \gamma_0 \,.
\end{equation}
In the following, we focus on the incompressible limit $G / \Lambda \ll 1$, where $\mathcal{C}_{33} \approx 1 / \mathrm{det}\,\mathbf{C} = 1 / \lambda_s^2 \lambda_\phi^2$.

For a given undeformed shape (described by a function $r_0(s_0)$), the shape equations, along with the constitutive equations, are numerically integrated from the apex ($s=0$) to the attachment point at the capillary ($s=L$) using a Runge-Kutta scheme, paired with a shooting algorithm to satisfy the boundary conditions
\begin{equation}
 r(s = 0) = z(s = 0) = \Psi(s = 0) = 0~~~\text{and}~~~
 r(s = L) = R_n \,.
\end{equation}
In the shooting procedure, we prescribe an apex stress $\sigma_s(s = 0)$ and iteratively search for a pressure drop $\Delta P$ satisfying the attachment boundary condition at the capillary. Moreover, we restrict the prescribed apex stresses to the physically relevant ones for our context, requiring $\sigma_s(s = 0) > 0$ (no compressive stresses) and not exceeding the maximal possible apex stress allowed by the constitutive equations, $\sigma_{s, \phi}(s = 0)^{\text{max}} = G h_0 + \gamma_0$.
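To illustrate the structure of such a solver, the sketch below implements the fluid-interface limit ($\sigma_s = \sigma_\phi = \gamma_0$, where the stress equation drops out) with a Runge-Kutta integration (scipy's \texttt{solve\_ivp}) and a shooting iteration, here on the drop volume rather than on the apex stress. This is our own simplified demonstration, not the authors' solver; the elastic closure adds a root-solve for $\lambda_s$ at each step but no new structure. All parameter values are illustrative, and the apex singularity is regularised by starting the integration slightly off $s=0$, a common trick we assume here.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

GAMMA0 = 1.0   # surface tension (stress scale), assumed value
R_N    = 0.5   # capillary radius, assumed value
V_TGT  = 4.0   # prescribed drop volume, assumed value

def rhs(s, y, dP):
    """dr/ds = cos(psi), dz/ds = sin(psi),
    dpsi/ds = dP/gamma0 - sin(psi)/r, dV/ds = pi r^2 sin(psi)."""
    r, z, psi, V = y
    kphi = np.sin(psi) / r if r > 1e-12 else dP / (2 * GAMMA0)
    return [np.cos(psi), np.sin(psi), dP / GAMMA0 - kphi,
            np.pi * r**2 * np.sin(psi)]

def hit_capillary(s, y, dP):
    return y[0] - R_N          # attachment condition r(L) = R_n
hit_capillary.terminal = True
hit_capillary.direction = -1   # trigger when r decreases through R_n

def volume_residual(dP):
    eps = 1e-6                 # start slightly off the apex
    y0 = [eps, 0.0, dP / (2 * GAMMA0) * eps, 0.0]
    sol = solve_ivp(rhs, [eps, 1e3], y0, args=(dP,),
                    events=hit_capillary, rtol=1e-10, atol=1e-12)
    return sol.y[3, -1] - V_TGT

dP = brentq(volume_residual, 0.2, 3.5)  # bracket chosen for this demo
print("Delta P =", dP, "-> sphere radius 2*gamma0/dP =", 2 * GAMMA0 / dP)
\end{verbatim}
In this limit the shooting must recover a spherical cap obeying the Young-Laplace law, which provides a convenient correctness check of the integration and event handling before the elastic terms are switched on.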
\section{Results}
\label{sec:Results}

In Section \ref{sec:ResultsSpheres} we compare the theoretical predictions of the different elastic laws in Eqs. \eqref{eq:GibbsND}, \eqref{eq:alphaNH} and \eqref{eq:NeoHookeTension} with the results obtained from Surface Evolver simulations. In Section \ref{sec:ResultsDropsNeedles}, we compare the numerical simulations to the analytical predictions in which the needle is treated as a geometrical perturbation truncating an isotropic droploon (Section \ref{sec:TheoryNeedle}). These two results are then compared to the direct numerical predictions (Section \ref{sec:ModelKierfeld}), which account for both the geometrical perturbation and the anisotropy imposed by the needle. Finally, we quantify the perturbation of the pressure induced by the needle and show that it can be explained in large part by the geometrical perturbation. In the last step, we use the direct numerical predictions to quantify the importance of anisotropic stretches and provide experimentalists with guidelines to predict the parameter ranges over which the influence of the capillary (shape change and/or stress anisotropy) can be neglected.

\subsection{Spherical droploons}
\label{sec:ResultsSpheres}

\begin{figure*}[h]
\centering
\includegraphics[width=13cm,keepaspectratio]{FIGURES/Quad_ElasticSphere.png}
\caption{Normalised pressure as a function of the area stretch $\lambda_A$ for spherical droploons whose skin elasticity is described by Gibbs', Neo-Hooke's or Hooke's law. Four characteristic elastocapillary numbers ($\alpha = 0.1$, $0.5$, $1$, $10$) are investigated. The Surface Evolver data are obtained assuming Neo-Hookean elasticity.}
\label{fig:PressureDeformationSPhere}
\end{figure*}

\begin{figure}[h!]
\centering
\includegraphics[width=9.5cm]{FIGURES/lambda_Ac_lambda_Am_pressure_vs_epsilon_LambdaArea.png}
\caption{Variation of characteristic features (critical area stretch $\lambda_{A,c}$, stretch $\lambda_{A,m}$ at maximum pressure and maximum pressure $\Delta \hat{P}(\lambda_{A,m})$) with the elastocapillary number $\alpha$, predicted for droploons with skins presenting Gibbs, Neo-Hookean and Hookean elasticity. Surface Evolver simulations are performed for the Neo-Hooke case.}
\label{fig:SummFeaturesSphere}
\end{figure}

We run Neo-Hookean Surface Evolver simulations (Section \ref{sec:SE}) for spheres with four different elastocapillary numbers ($\alpha = 0.1$, $0.5$, $1$, $10$), imposing inflation and deflation while recording the normalised pressure difference $\Delta \hat{P}$. The results are shown in Fig. \ref{fig:PressureDeformationSPhere} as a function of the area stretch $\lambda_A$, along with the theoretical predictions for the Gibbs, Hooke and Neo-Hooke models provided in Section \ref{sec:TheorySpheres}.

The simulations show excellent agreement with the Neo-Hookean theory over the full range of investigated deformations. As expected, and as discussed in Section \ref{sec:TheorySpheres}, all three models coincide in the small-deformation limit $\lambda_A \approx 1$. However, already for deformations of a few percent the three models show very pronounced differences, indicating the importance of choosing the physically most realistic model for the interpretation of pressure-deformation relations.

For non-zero $\alpha$, in the case of Gibbs and Neo-Hookean elasticity, the initially monotonically decreasing Young-Laplace-like behaviour is replaced by a pressure-deformation relation with a well-pronounced pressure maximum $\Delta\hat{P}(\lambda_{A,m})$ at a characteristic stretch $\lambda_{A,m}$. Upon deflation ($\lambda_A<1$), this leads to the appearance of a critical stretch $\lambda_{A,c}$ at which the pressure difference is zero and beyond which it becomes negative. This point corresponds to elastic instabilities of compressed interfaces, which manifest themselves in buckling phenomena \cite{LandauLifshitz,Zoldesi1998,Sacanna2011}. A proper handling of this range requires taking into account the bending energies of the interfaces.
Since this is neither of interest here nor implemented in our simulations, we stay away from the buckling range in our analysis.

The variations of $\lambda_{A,c}$, $\lambda_{A,m}$ and of the pressure difference $\Delta\hat{P}$ at $\lambda_{A,m}$ with the elastocapillary number $\alpha$ for the different models are shown in Fig. \ref{fig:SummFeaturesSphere}. The corresponding analytical expressions are given in Table \ref{tab:models}. They highlight clear differences between the Gibbs, Hookean and Neo-Hookean models. In comparison to Gibbs elasticity, the Neo-Hookean critical and maximal stretches vary only mildly with $\alpha$. The Surface Evolver results again agree very well with theory. The critical stretch for Hooke's model appears when the elastocapillary number crosses the Gibbs criterion $\alpha=0.5$. The Gibbs critical stretch tends exponentially towards 0, as $\lambda_{A,c}=\mathrm{exp}(-1/\alpha)$. In the limit of large $\alpha$, the critical stretches all converge towards $\lambda_{A,c}=1$, that is, a shell so rigid that it buckles as soon as it is compressed. Hooke elasticity does not predict a local pressure maximum at any elastocapillary number, but it does predict an interesting deformation-independent pressure for $\alpha=0.5$, i.e. at the ``Gibbs criterion''. Gibbs and Neo-Hooke elasticity, on the other hand, have a maximal-pressure stretch which increases with $\alpha$. In particular, at the Gibbs criterion $\alpha=0.5$, the maximal pressure is reached at zero deformation ($\lambda=1$). Lower elastocapillary numbers move $\lambda_{A,m}$ into the compression regime ($\lambda_{A,m}<1$), while $\alpha>0.5$ shifts $\lambda_{A,m}$ into the dilation regime ($\lambda_{A,m}>1$). The most remarkable features of the elastocapillary transition (onset of a significant critical stretch, variation of the maximal-pressure stretch) occur for elastocapillary numbers between $0.1$ and $10$. For this reason, we present results for $\alpha=0.1$, $1$ and $10$ in this article, so as to span two decades of elastocapillary numbers. Because of its history as the Gibbs criterion and its role as pivot point between capillarity and elasticity, $\alpha=0.5$ will also be represented.
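To give a feel for these numbers, evaluating the Gibbs expression $\lambda_{A,c}=\mathrm{exp}(-1/\alpha)$ at the elastocapillary numbers used throughout this work gives (our own worked example)
\begin{equation*}
\lambda_{A,c}\big|_{\alpha=0.1} = e^{-10} \approx 4.5\times 10^{-5}, \qquad
\lambda_{A,c}\big|_{\alpha=0.5} = e^{-2} \approx 0.14, \qquad
\lambda_{A,c}\big|_{\alpha=10} = e^{-0.1} \approx 0.90,
\end{equation*}
illustrating how the buckling point moves towards $\lambda_{A,c}=1$ as the skin stiffens.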
\subsection{Droploons on capillaries}
\label{sec:ResultsDropsNeedles}

\begin{figure}[h!]
\centering
\includegraphics[width=8.5cm,keepaspectratio]{FIGURES/DropSimulation_Alpha05.png}
\caption{Examples of Neo-Hookean droploons at different area stretches and capillary ratios $R_n/R_0$, obtained for $\alpha = 0.5$ using Surface Evolver.}
\label{fig:DroploonSimulations}
\end{figure}

In a second step, we run Surface Evolver simulations of pendant droploons attached to a capillary with a circular cross-section of radius $R_n$ (Fig. \ref{fig:Schemes}). The droploons are inflated and deflated while their interfacial area and inner pressure are recorded (Section \ref{sec:SE}). Three ratios between the capillary radius $R_n$ and the radius $R_0$ of the droploon in the reference configuration are used: $R_n/R_0 = 0.1$, $0.5$ and $0.9$. Representative examples of the obtained droploon shapes are shown in Fig. \ref{fig:DroploonSimulations} for three characteristic area stretches ($\lambda_A = 0.8$, $1$, $2$) for the case $\alpha = 0.5$.

In Fig. \ref{fig:PressureDeformationSPhereNeedle} we show the obtained pressure-deformation relations for the elastocapillary numbers $\alpha = 0.1$, $0.5$, $1$, $10$. Along with the Surface Evolver results (crosses) we plot the results obtained by direct numerical prediction (empty circles) using the Neo-Hookean shape equations for the same set of parameters (Section \ref{sec:ModelKierfeld}). The excellent agreement between both for all elastocapillary numbers, capillary radii and deformations demonstrates the reliability of Surface Evolver simulations for such systems.

The solid lines shown in Fig. \ref{fig:PressureDeformationSPhereNeedle} correspond to the analytical approximation given in Eq. (\ref{eq:PressureDefNeedle}), which models droplets as spherical sectors covered with a Neo-Hookean skin. The agreement is excellent over the whole deformation range, for all capillary sizes and elastocapillary numbers. This means that in this parameter range the deviation from the predictions for spherical droploons without any capillary (gray line in Fig. \ref{fig:PressureDeformationSPhereNeedle}) is essentially a result of the change of geometry induced by the capillary, rather than of the shear deformation in its vicinity. Deviations from the simple model set in only for large capillary sizes ($R_n/R_0=0.9$) and large elastocapillary numbers ($\alpha = 10$).

To investigate why the spherical sector approximation fits the results so well, Fig.\ \ref{fig:anisotropy_area_stretch} plots different measures of the anisotropy of the stretch distributions on the droploon surface, obtained from the Neo-Hookean shape equations for the same parameter ranges as in Fig.\ \ref{fig:PressureDeformationSPhereNeedle}. In the case of a fully isotropic deformation, corresponding to a spherical sector shape, the deviation of the mean stretch ratio along the contour $\left\langle \frac{\lambda_s}{\lambda_{\phi}} \right\rangle-1$ (Fig.\ \ref{fig:anisotropy_area_stretch}a,b) and the standard deviations of the meridional and circumferential stretches $\mathrm{std}_s(\lambda_{s,\phi})$ (Fig.\ \ref{fig:anisotropy_area_stretch}c,d) are both zero. Since we neglect gravitational effects, it is clear that the unstressed shape of the capsule at $\lambda_A = 1$ \emph{must} be a spherical sector. The stretched shape will in general be anisotropically stressed, because of the boundary condition imposed by the attachment at the capillary. We can find, however, another particular stretch at which the \emph{stressed} shape is a spherical sector. This is reached at the critical stretch $\lambda_{A,c}$ (see also Section \ref{sec:ResultsSpheres}) at which $\Delta \hat {P}=0$. The force balance for every point on the capsule requires that the pressure force cancels the tension force. For $\Delta \hat {P} = 0$, we therefore have $\sigma_s = \sigma_\phi = 0$ all over the surface, i.e. the surface is stress-free everywhere at this critical stretch. Since $\sigma_s = \sigma_\phi = 0$ implies isotropic stretching, the shape at this point is again correctly described by the spherical sector equation (\ref{eq:PressureDefNeedle}). If the stretch is further decreased to $\lambda_A<\lambda_{A,c}$, both $\sigma_s$ and $\sigma_\phi$ become compressive ($\sigma_s<0$, $\sigma_\phi<0$) and buckling or wrinkling instabilities of the droploon interface will occur \cite{LandauLifshitz,Knoche2013}.
\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=15cm,keepaspectratio]{FIGURES\/NeoHooke_Needles.png}\n\\caption{Normalised pressure as a function of area stretch $\\lambda_A$ of Neo-Hookean droploons on capillaries for three ratios of capillary and initial droploon radius ($R_n\/R_0 = 0.1, 0.5, 0.9$), and four characteristic elastocapillary numbers ($\\alpha = 0.1, 0.5,1,10$). Surface Evolver simulations are compared with direct numerical predictions (Section \\ref{sec:Modelling}) and with the analytical expression of Eq.\\ (\\ref{eq:NeoHookeTension}) using a simple geometrical correction to the perfect sphere theory.}\n\\label{fig:PressureDeformationSPhereNeedle}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{FIGURES\/comparison_anisotropy.pdf}\n \\caption{ Characterization of the stretch anisotropy and the\n stretch inhomogeneity. (a,b) The \n mean ratio of meridional and circumferential stretches\n $\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle-1$\n along the contour characterizes stretch\n anisotropy and is shown \n for (a) $\\alpha \\leq 0.5$ and (b) $\\alpha > 0.5 $.\n The standard deviations of (c) meridional stretches\n $\\lambda_s$ and (d) circumferential stretches $\\lambda_{\\phi}$\n along the contour characterize the inhomogeneity of\n stretches. We show the critical stretches $\\lambda_{A,c}$ as red diamonds in (a-d).\n }\n \\label{fig:anisotropy_area_stretch}\n\\end{figure*}\n\nFor stretch values other than $\\lambda_A = 1$ or $\\lambda_{A,c}$, the droploon\nshape is non-spherical, because of the anisotropy\n($\\lambda_s \\neq \\lambda_\\phi$) introduced by the boundary condition at the\ncapillary. This can clearly be seen in Figs.\\\n\\ref{fig:anisotropy_area_stretch}a,b. For inflated shapes $\\lambda_A > 1$, we find\n$\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle - 1 >0$\nindicating that stretching is biased towards meridional deformations resulting\nin slightly prolate shapes, whereas for deflated shapes $\\lambda_A < 1$,\n$\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle-1<0$ and\ncircumferential deformations are preferred, resulting in slightly oblate\nshapes.\n The mean anisotropy increases upon inflation before\ndecreasing again at much higher stretches (see the insets in\nFig.\\ \\ref{fig:anisotropy_area_stretch}a,b for a wider deformation range), when the influence of the capillary\nbecomes again negligible.\n\nFurthermore, the standard deviation of the stretches along the contour\n$\\mathrm{std}_s(\\lambda_s )$ and $\\mathrm{std}_s(\\lambda_\\phi)$ shown in\nFigs.\\ \\ref{fig:anisotropy_area_stretch}c,d characterizes the inhomogeneity of\nthe stretches along a contour. A standard deviation of\n$\\mathrm{std}_s(\\lambda_s ) = \\mathrm{std}_s(\\lambda_s ) = 0$ corresponds to a\nspherical sector. The meridional and circumferential stretches of an inflated\ndroploon are isotropic at the apex with\n$\\lambda_s(s = 0) = \\lambda_\\phi(s = 0)\\propto \\lambda_A^{1\/2}$. At the\ncapillary, the attachment condition mandates $\\lambda_\\phi^\\mathrm{cap} = 1$\nwhile $\\lambda_s^\\mathrm{cap}$ increases with $\\lambda_A$, which introduces\nanisotropy and inhomogeneity into the problem with meridional stresses\naccumulating at the capillary. 
The spherical approximation will hold well for shapes where the stretches are approximately \textit{homogeneous} over a large arc length, corresponding to a small standard deviation of the stretches, and \textit{isotropic}, corresponding to a mean stretch ratio along the contour $\left\langle \frac{\lambda_s}{\lambda_{\phi}} \right\rangle$ close to unity. This is fulfilled at the two spherical configurations $\lambda_A = 1$ and $\lambda_{A,c}$. The spherical configuration at $\lambda_{A,c}$ appears to be highly sensitive: small changes in $\lambda_A$ lead to large deviations in the anisotropy (and inhomogeneity). It is interesting to note that at small deformations around $\lambda_A= 1$, the anisotropy evolution depends only on the ratio $R_n / R_0$ and not on $\alpha$.

We argue that the evolution of the anisotropy and inhomogeneity can be grasped by considering that the capillary acts similarly to a rigid inclusion in a stretched elastic membrane, as both enforce the absence of circumferential stretching ($\lambda_\phi = 1$) at their boundary. A rigid inclusion in a stretched elastic membrane is known to concentrate the meridional stresses, creating anisotropy and inhomogeneity similar to the stress concentration around a crack tip. For flat membranes, the rigid inclusion is a classic problem that was studied for Neo-Hookean membranes by Wong and Shield \cite{Wong1969}. For the droploon we have a curved geometry, which gives rise to an even more pronounced increase of the anisotropy around the capillary.

We see clear evidence of the increased anisotropy around the capillary in the numerical solutions of the full anisotropic shape equations from Section \ref{sec:ModelKierfeld}, as shown in Fig.\ \ref{fig:decayDetails}. In Fig.\ \ref{fig:decayDetails}a,b,c, we show the stretch ratios $\lambda_s$ and $\lambda_\phi$ and the redistribution of arc length along the contour of inflated droploons. These results show the rise of the meridional stretch close to the capillary. Fig.\ \ref{fig:decayDetails}d reveals that the resulting stretch anisotropy $\lambda_s/\lambda_\phi-1$ is localized at the capillary and that it decays exponentially over a characteristic arc length $s_0^*$ away from the capillary. Here, $s_0$ is the arc length of the undeformed reference shape (the spherical droplet), which is related to the arc length $s$ of the deformed shape by the meridional stretch ratio, $\mathrm{d}s / \mathrm{d}s_0 = \lambda_s$ (see Section \ref{sec:ModelKierfeld}). We use the logarithmic derivative of $\lambda_s/\lambda_\phi-1$ to numerically determine the size $s_0^*$ of the zone of increased anisotropy around the capillary.

We propose that the relative meridional extent of the anisotropy zone along the \emph{deformed} droploon contour provides a non-dimensional number $Q$ which is suitable to characterize the importance of elastic anisotropy effects in the regime $\alpha>1$, where elastic energies dominate. We thus define $Q \equiv {s^*}/{L}$, where $s^*$ is the meridional length of the anisotropy region measured in terms of the \emph{deformed} arc length, while $L$ is the total arc length of the deformed droploon contour. For $\alpha<1$, elastic energies are small compared to the droplet surface tension, such that elastic anisotropy also becomes less important.
In order to evaluate the anisotropy parameter $Q$, we use the general relation $\mathrm{d}s / \mathrm{d}s_0 = \lambda_s$ between deformed and undeformed arc length at the capillary, together with $L \sim \pi R_0 \lambda_A^{1/2}$ for the total arc length $L$ in the limit $R_n \ll R_0$, to obtain
\begin{equation}
 Q \equiv \frac{s^*}{L} \sim \frac{s_0^* \lambda_s^\mathrm{cap}}{L}
 \sim \frac{s_0^* \lambda_s^\mathrm{cap}}{\pi R_0} \lambda_A^{-1/2},
 \label{eqn:Q}
\end{equation}
where $\lambda_s^\mathrm{cap}$ is the meridional stretch at the capillary. To make further progress, we derive relations for the size $s_0^*$ of the anisotropy zone and the stretch ratio $\lambda_s^\mathrm{cap}$ at the capillary from the numerical results shown in Fig.\ \ref{fig:Q}.

Because the maximal stretch anisotropy is found at the capillary, where $\lambda_{\phi}=1$, the meridional stretch at the capillary actually equals the maximal stretch anisotropy, $\max{\left(\frac{\lambda_s} {\lambda_\phi}\right)} = \lambda_s^\mathrm{cap}$. While in the case of flat membranes the maximal anisotropy $\lambda_s^\mathrm{cap} \propto \lambda_s(s=\infty)$ is proportional to the radial stretch at infinity \cite{Wong1969}, our numerical results for curved droploons indicate that $\lambda_s^\mathrm{cap}$ first increases upon inflation ($\lambda_A>1$) but saturates for highly inflated droploons with area stretches $\lambda_A$ exceeding a fairly well-defined value $\lambda_A^\dag$, as shown in Fig.\ \ref{fig:Q}c for the case $\alpha = 10$. Further numerical analysis of the saturation value, as performed in Fig.\ \ref{fig:Q}b, allows us to quantify it as
\begin{equation}
 \max{\left(\frac{\lambda_s} {\lambda_\phi}\right)}
 \approx \lambda_s^\mathrm{cap} \equiv
 {\rm const} \left( \frac{R_n}{R_0}\right)^{-1/3}
 \label{eqn:maximal_anisotropy}
\end{equation}
with ${\rm const} \approx 1.47$ in the regime $\alpha >1$. This saturation value is solely determined by the geometrical parameter $R_n / R_0$ of the undeformed droploon, which demonstrates that the saturation is induced by the droplet curvature. We also find $\lambda_A^\dag \sim (\lambda_s^\mathrm{cap})^{3/2}$ for the area stretch at which the saturation of the maximal anisotropy sets in. The maximal anisotropy given in Eq.\ (\ref{eqn:maximal_anisotropy}) diverges in the limit $R_n / R_0 \to 0$, which seems counter-intuitive at first because the spherical approximation works best in exactly this limit. This issue will be resolved below.

\begin{figure*}
 \centering
 \includegraphics[width=15cm]{FIGURES/decay_details.pdf}
 \caption{Stretch anisotropy of droploon shapes with $\alpha = 10$ for three values of $R_n/R_0$, for each of three area stretches $\lambda_A \gg \lambda_A^\dag$, $\lambda_A > \lambda_A^\dag$ and $\lambda_A < \lambda_A^\dag$ (see also Fig.\ \ref{fig:Q} for a definition of the characteristic area stretch $\lambda_A^\dag$). (a,b) Stretch ratios $\lambda_s$ and $\lambda_\phi$ as a function of the undeformed arc length $s_0/L_0$ along the contour. While $\lambda_\phi$ approaches the undeformed value of 1 at the capillary ($s_0/L_0=1$), $\lambda_s$ rises at the capillary. (c) shows that the deformed arc length $s$ considerably deviates from the undeformed arc length $s_0$ along the contour.
(d) The resulting stretch anisotropy $\lambda_s / \lambda_\phi - 1$ is localized at the capillary. The size of the anisotropy zone around the capillary can be characterized by an exponential decay arc length $s_0^*$, which is calculated from the logarithmic derivative of $\lambda_s / \lambda_\phi - 1$ at the capillary for the solid lines and shown as colored dots in all plots (a-d). We also show the maximal stretch at the capillary from Eq.\ (\ref{eqn:maximal_anisotropy}) as red diamonds in (a) and (d).}
 \label{fig:decayDetails}
\end{figure*}

Let us now quantify the size $s_0^*$ of the anisotropy zone around the capillary. From Fig.\ \ref{fig:Q}a, we find the conservative bound
\begin{equation}
 s_0^* \leq \frac{R_n}{2}.
 \label{eqn:decaylength}
\end{equation}
This relation reveals that the size of the stretch anisotropy zone is set by the geometry parameter $R_n/R_0$ of the reference state rather than by the elastocapillary number $\alpha$.

Using Eq.\ (\ref{eqn:decaylength}) for $s_0^*$ and the saturation value given in Eq.\ (\ref{eqn:maximal_anisotropy}) for $\lambda_s^\mathrm{cap}$ in Eq.\ (\ref{eqn:Q}), we obtain
\begin{equation}
 Q = \frac{\rm const}{2\pi} \left(\frac{R_n}{R_0}\right)^{2/3}
 \frac{1}{\lambda_A^{1/2}}
 \label{eqn:QLargeLambda}
\end{equation}
for the anisotropy parameter $Q$ of highly inflated droploons with $\lambda_A>\lambda_A^\dag$. This parameter remains small for $R_n \ll R_0$, indicating that we can neglect anisotropy effects in this limit.

At smaller deformations $1 < \lambda_A < \lambda_A^\dag$, where the saturation of the capillary anisotropy has not yet set in, we numerically find that the maximal stretch anisotropy scales with $\log(\lambda_A)$ (see Fig.\ \ref{fig:Q}c), giving
\begin{equation}
 Q = \frac{R_n}{R_0} \frac{\lambda_s^\mathrm{cap} - 1}
 {3 \pi \log(\lambda_s^\mathrm{cap})}
 \frac{\log(\lambda_A)}{\lambda_A^{1/2}},
 \label{eqn:anisotropyQuantifier}
\end{equation}
where we again use the saturation value $\lambda_s^\mathrm{cap}$ from Eq.\ (\ref{eqn:maximal_anisotropy}).

We obtain a full contour plot of the anisotropy parameter $Q$ in Fig.\ \ref{fig:Q}d by joining the results in the two regimes ($\lambda_A > \lambda_A^\dag$ and $\lambda_A < \lambda_A^\dag$) with a smooth interpolating function. This plot confirms that $Q$ is small ($Q \ll 1$) for the shapes where the spherical approximation works best. In particular, we find that we can neglect anisotropy effects ($Q\ll 1$) in the limit $R_n / R_0 \to 0$, resolving the counter-intuitive behaviour of the maximal anisotropy. We emphasize that Eq.\ \eqref{eqn:anisotropyQuantifier} only depends on $R_n / R_0$ and $\lambda_A$ and \emph{not} on $\alpha$, as long as $\alpha > 1$. This indicates that stretch anisotropy is mainly governed by geometry rather than by elastic energy contributions.
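A compact Python sketch of this estimate reads as follows. It is our own, uses the constant ${\rm const} \approx 1.47$ from Eq.\ (\ref{eqn:maximal_anisotropy}), and crudely matches the two regimes by taking the smaller of the two branches rather than the smooth interpolation used for Fig.\ \ref{fig:Q}d.
\begin{verbatim}
import numpy as np

CONST = 1.47  # prefactor of Eq. (maximal_anisotropy), regime alpha > 1

def lambda_s_cap(rn_r0):
    """Saturated meridional stretch at the capillary."""
    return CONST * rn_r0**(-1.0 / 3.0)

def Q(lam_A, rn_r0):
    """Anisotropy parameter Q(lambda_A, R_n/R_0) for inflation, alpha > 1."""
    lsc = lambda_s_cap(rn_r0)
    q_small = (rn_r0 * (lsc - 1.0) / (3.0 * np.pi * np.log(lsc))
               * np.log(lam_A) / np.sqrt(lam_A))   # 1 < lam_A < lam_A^dag
    q_large = CONST / (2.0 * np.pi) * rn_r0**(2.0 / 3.0) / np.sqrt(lam_A)
    return min(q_small, q_large)  # crude matching of the two branches

print(Q(1.5, 0.1))  # small Q: anisotropy negligible for thin capillaries
print(Q(3.0, 0.9))
\end{verbatim}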
As already pointed out above, elastic contributions, and thus also elastic anisotropy effects, become increasingly irrelevant for $\alpha < 1$, where surface tension dominates and the shape resembles a spherical liquid droplet. The regions $\lambda_A > \lambda_A^\dag$ and $\lambda_A < \lambda_A^\dag$ differ markedly in their functional dependence on $\lambda_A$. This results in a maximum of the parameter $Q$ for area stretches $\lambda_A\sim \lambda_A^\dag\propto (R_n / R_0)^{- 1/2}$ at a fixed value of $R_n/R_0$, which in turn indicates that stretch anisotropy is most relevant for these intermediate area stretches.

\begin{figure*}
 \centering
 \includegraphics[width=15cm]{FIGURES/QPlot.pdf}
 \caption{Analysis of the anisotropy zone and the anisotropy parameter $Q$ from numerical solutions of the anisotropic shape equations. (a) The size of the anisotropy zone $s_0^*$ is roughly constant, giving rise to the bound (\ref{eqn:decaylength}). (b) The saturation value is mainly determined by the parameter $R_n/R_0$, see Eq.\ (\ref{eqn:maximal_anisotropy}). (c) As a function of the area stretch $\lambda_A$, the maximal anisotropy saturates at large deformations beyond a value $\lambda_A^\dag$ (results for $\alpha =10$ shown as colored diamonds). (d) Contour plot of the non-dimensional anisotropy parameter $Q$ according to Eq.\ (\ref{eqn:Q}). Stretch anisotropy effects are negligible for $Q\ll 1$.}
 \label{fig:Q}
\end{figure*}

The possibility of approximating the droploon shape by a spherical sector over a wide range of parameters is an important piece of information for experimentalists, since it means that the analytical expression of Eq. (\ref{eq:PressureDefNeedle}) can be used to reliably quantify the elastocapillary properties of droploon interfaces over a reasonably wide range of elastocapillary numbers. We also remind the reader that, as is evident from these expressions, within our geometrical approximation the critical area stretch at which the pressure changes sign is independent of the size of the capillary. The combined numerical analysis provides another important piece of information: for reasonably small capillary sizes ($R_n/R_0<0.5$), the pressure-deformation relation is actually well described by the simple sphere equations without capillary (Section \ref{sec:TheorySpheres}), making the quantitative interpretation of experimental data fairly straightforward. In order to quantify the deviation from the simple sphere theory, we plot in Fig.\ \ref{fig:ErrorDroploonNeedle} the heatmap of the normalised deviation of the numerically predicted pressure $\Delta\hat{P}$ with capillary (using Surface Evolver) from the sphere-theory prediction $\Delta \hat{P}_S$ at the same area stretch, i.e. we plot
\begin{equation}
 \left| \frac{\Delta\hat{P}_S-\Delta\hat{P}}{\Delta\hat{P}}\right| = \left| 1 - \frac{\Delta\hat{P}_S}{\Delta\hat{P}}\right|.
 \label{eq:ErrorHeatMap}
\end{equation}
Under the spherical sector hypothesis of Eq. (\ref{eq:PressureDefNeedle}), this expression becomes simply
\begin{equation}
 \left| 1 - \frac{\Delta\hat{P}_S}{\Delta\hat{P}} \right| = \left| 1 - \mathpzc{f} \right|,
 \label{eq:ErrorHeatMapSpherical}
\end{equation}
which is plotted as lines of equal relative error.
These isolines are identical in all four graphs of Fig.\ \ref{fig:ErrorDroploonNeedle} since they are independent of $\alpha$ (see Eq.\ \eqref{eq:GeometricalCorrection}).

Deviations of the heatmaps in Fig.\ \ref{fig:ErrorDroploonNeedle} from the geometrical prediction have two origins: imperfect relaxation in the simulations and the influence of the shear contributions of the solid skin, which are neglected in the geometrical approximation. The former is at the origin of most of the deviations for $\alpha < 10$, while the latter starts to be clearly visible for $\alpha = 10$. Nevertheless, this latter difference remains small ($<0.5\%$), confirming again that shear contributions play a minor role in most of the investigated parameter range, in accordance with the non-dimensional $Q$-parameter plotted in Fig.\ \ref{fig:Q}d. Our geometrically corrected pressure-deformation relation of Eq. (\ref{eq:PressureDefNeedle}), although not accounting for stretch anisotropy, is therefore a very good approximation for pendant drops with Neo-Hookean elastic interfaces within the parameter range investigated here.

Let us now turn to the analysis of the heatmaps themselves. They indicate that in the small-deformation limit ($\lambda_A \approx 1$), the error made in using the sphere approximation remains smaller than $1\%$ for all radius ratios and elastocapillary numbers. For larger deformations in the inflation regime ($\lambda_A>1$), the approximation error is still smaller than $1\%$ for small capillary radii ($R_n/R_0<0.2$). Similar behaviour is observed in the deflation regime. However, the prediction systematically fails when approaching the critical stretch $\lambda_{A,c}$. This is because wrinkling instabilities in the skin may become relevant in this regime. This phenomenon can be captured neither within the sphere approximation nor by our Surface Evolver simulations, where the skin bending energy -- crucial for wrinkling -- is not taken into account. Skin bending can be implemented in Surface Evolver, but this is beyond the scope of this paper. In the heatmaps we have therefore colored these zones in gray.

At small $\alpha$ and large $R_n/R_0$, an additional zone of large approximation error ($>10\%$) appears for pressures $\Delta\hat{P}\approx 1$. This deviation arises from the increasing difference between the sphere and truncated-sphere geometries: as the truncated sphere shrinks, it reaches the shape of a half-sphere of radius $R_n$. Any further decrease in drop volume causes an actual increase in the curvature radius, which is not captured by the sphere theory, hence the failure of the analytical prediction beyond this point in parameter space.

Despite these considerations for large capillary radii, the heatmaps of Fig.\ \ref{fig:ErrorDroploonNeedle} provide very good news for the experimentalist aiming to quantify the elastic properties of droploon surfaces: when working with reasonable capillary sizes ($R_n/R_0<0.5$), reasonably small deformations ($<0.1$) and reasonable elastocapillary numbers ($\alpha < 10$), experimental data can be confidently fitted by the simple sphere theory (without capillary), since experimental errors are likely to outweigh the small error introduced by the sphere assumption.
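This error budget is easy to check numerically. Reusing the \texttt{geometric\_correction} function from the earlier sketch (our own helper, not part of any library), the geometrical error of Eq. (\ref{eq:ErrorHeatMapSpherical}) evaluates as follows.
\begin{verbatim}
# Relative error |1 - f| of the sphere approximation
# (Eq. ErrorHeatMapSpherical) for a mid-sized capillary:
for lam_A in (0.9, 1.0, 1.1):
    f = geometric_correction(lam_A, rn_r0=0.5)
    print(f"lambda_A = {lam_A}:  |1 - f| = {abs(1.0 - f):.4f}")
# For R_n/R_0 = 0.5 and ~10% deformations the error stays below 1%,
# in line with the isolines of the heatmaps.
\end{verbatim}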
\n\n\n\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{FIGURES\/DeltaQuad.png}\n \\caption{Relative error of pressure difference between Surface Evovler and neo-Hookean perfect sphere, at the same area stretch $\\lambda_A$ for four elastocapillary numbers ($\\alpha=0.1$,$0.5$,$1$,$10$). The grey boxes delimit the stretch values below critical stretch value $\\lambda_{A,c}$. Full lines are lines of equal relative error between the neo-Hookean perfect sphere and the neo-Hookean truncated sphere, given by Equ. \\eqref{eq:ErrorHeatMapSpherical}. }\n \\label{fig:ErrorDroploonNeedle}\n\\end{figure*}\n\n\n\n\\clearpage\n\n\\newpage\n\n\\section{Conclusion and outlook}\n\nTreating the seemingly simple problem of a drop covered by an elastic skin attached to a circular capillary in the absence of gravity, we have been able to show that Surface Evolver simulations are a powerful tool to study systems in which surface tension and nonlinear (Neo-Hookean) elasticity co-exist within the same interface. We have chosen on purpose such a simple geometry, in order to avail of independent theoretical and numerical predictions relying on cylindrical symmetry (Section \\ref{sec:Theory} and Section \\ref{sec:ModelKierfeld}) which can be compared to the Surface Evolver solutions. In all cases, they showed excellent agreement. Surface Evolver will therefore be useful to tackle more complex geometries, such as droploons on complex capillary shapes, interacting droploons or complete emulsions composed of droploons, where theory or alternative numerical predictions requiring symmetry will not be available. In contrast to other finite element tools, the energy minimisation approach of Surface Evolver, widely used in the communities studying foams and emulsions, provides access to a wide range of problems in which interfaces of complex geometry play a key role. In the Appendix we provide a detailed description of the implementation of nonlinear elasticity in Surface Evolver simulations to facilitate future developments, and we also provide our Surface Evolver code for download in the Supplementary Materials. Taking into account bending stiffness in the simulations would be an interesting perspective for future work.\n\nFor simplicity, we have been talking about drops\/droploons all along. However, all presented concepts are equally valid for bubbles\/bubbloons and hence foams. Our analysis shows how complex the interplay of capillary and elastic forces at an interface is, even for the relatively simple geometry of an initially spherical droploon inflated on a circular capillary. Due to the intricate coupling of changes in interfacial curvature and area, accurate theoretical models and simulations are required to extract interfacial properties quantitatively from measured pressure-deformation relations.\n\nThe problem of the pressure deformation of a\ndroploon covered by an elastic skin and attached to a capillary in the absence of gravity is a seemingly simple problem. From the point of view of elasticity theory it is challenging, however, because the elastic skin represents a closed curved shell and the capillary\na rigid circular inclusion within this shell.\nHoles or rigid inclusions in elastic membranes are known to produce stress anisotropies and stress concentration upon stretching. 
Here, the droploon skin is stretched by inflation, contains a rigid inclusion and features the additional complication of a background curvature, because the initial relaxed shape is spherical (neglecting gravity). We obtained theoretical predictions for the influence of the capillary-induced stress anisotropy on the pressure-deformation relation from Surface Evolver simulations and from a careful numerical analysis of the stresses and strains in the shape-equation approach. A full analytical solution remains an open problem for future research.

In the parameter range investigated by our simulations, we have been able to show that for elastocapillary numbers $\alpha < 10$ the influence of the capillary on the pressure-deformation relation is essentially of geometrical nature, i.e. the capillary modifies in the first place the relationship between the area stretch (related to the interfacial stress) and the interface curvature. In this case, the droploon shapes can be represented approximately by spherical sectors, and the pressure-deformation relation is given by Eq. (\ref{eq:PressureDefNeedle}). For interfaces with Gibbs elasticity this expression is exact, while for (Neo-)Hookean interfaces it remains an (excellent) approximation. Deviations from this simple geometrical approximation start to become significant only for the largest capillary size ($R_n/R_0=0.9$) and elastocapillary number ($\alpha = 10$) simulated by us, suggesting that the anisotropic contribution to the interfacial stress and deformation near the capillary starts to play a role.

To show that this anisotropy is indeed strongly localised at the capillary, we calculate, as a function of the position on the interface, the deviation of the ratio of meridional and circumferential stretches from one. This quantity decays nearly exponentially with the distance from the capillary, over a characteristic length $s^*$. The extent of this anisotropically strained zone can be compared to the total droploon size by defining the non-dimensional ratio $Q=s^*/L$, where $L$ is the total arc length of the droploon. For droploon inflation and for $\alpha > 1$, we find
\begin{equation}
 Q = \frac{R_n}{R_0} \frac{\lambda_s^\mathrm{cap} - 1}{3 \pi \log(\lambda_s^\mathrm{cap})} \frac{\log(\lambda_A)}{\lambda_A^{1/2}},
 \label{eqn:anisotropyQuantifier2}
\end{equation}
with
\begin{equation}
 \lambda_s^\mathrm{cap} \equiv
 {\rm const} \left( \frac{R_n}{R_0}\right)^{- 1/3}
 \label{eqn:maximal_anisotropy2}
\end{equation}
being the ``saturation'' meridional stretch reached at the capillary for large deformations, where ${\rm const} \approx 1.47$. For large deformations, we therefore obtain
\begin{equation}
 Q = \frac{\rm const}{2\pi} \left(\frac{R_n}{R_0}\right)^{2/3} \frac{1}{\lambda_A^{1/2}}.
 \label{eqn:QLargeLambda2}
\end{equation}
These relations and their analysis, provided in Section \ref{sec:ResultsDropsNeedles} and Fig. \ref{fig:Q}, demonstrate that the extent of the anisotropic zone (and hence its influence on the pressure-deformation relation) is mainly controlled by the reference geometry of the droploon ($R_n/R_0$) and by the stretch $\lambda_A$. We therefore show for the first time that the extent of this zone is essentially governed by geometrical features, while the influence of the elastocapillary number $\alpha$ remains negligible.
This is very good news for experimentalists, who can rely on the spherical droploon equations given in Table \ref{tab:models}, combined with the geometrical correction of Eq. (\ref{eq:GeometricalCorrection}), to fit their data for a wide range of $\alpha$, as long as $R_n/R_0$ and $\lambda_A$ remain reasonable. The heatmaps and relations provided in Section \ref{sec:ResultsDropsNeedles} will help to estimate the appropriate parameter ranges.

More importantly for the analysis of experimental data, we have also shown that when working with sufficiently small capillaries ($R_n/R_0<0.5$) and at small deformations ($\sim 5\%$ in area), the simple analytical pressure-deformation relations of spheres \textit{without} capillaries (Table \ref{tab:models}) provide excellent approximations to the pressure-deformation relations of droploons on capillaries. The much simpler analytical relations of Table \ref{tab:models} can therefore be used to extract quantitative interfacial properties from fits to experimental data. Experimentalists are referred to Fig. \ref{fig:ErrorDroploonNeedle} to estimate the error they make using this approximation.

In Section \ref{sec:TheorySpheres} we showed that for small deformations, the Gibbs, Neo-Hookean and Hookean models for liquid- and solid-like interfaces all predict the same kind of pressure-deformation relation. In view of the analysis presented above, this may explain why a lot of experimental data for solid-like interfaces seems to have been successfully fitted in the past by the Gibbs model. Indeed, our analysis shows that at small deformations, pendant drop experiments with nearly spherical droploons do not allow one to discriminate between liquid-like and solid-like interfaces. Alternative experiments, such as interfacial shear rheology measurements or Capillary Meniscus Dynamometry \cite{Danov_CollIntSci_2015}, are required to obtain this information.

We have chosen here a minimal model of a droploon interface in which the elastic extra stress of a Neo-Hookean solid material is simply added to a constant interfacial tension. Real interfaces are not as simple \cite{Edwards1991,Rehage_RheoAct_2002,Sagis_RevModPhys_2011,Erni2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014,Pepicelli_SocRheo_2019}. Surface tension and elasticity tend to be coupled in a complex manner \cite{Verwijlen_ACIS_2014}, and the description of the response of the elastic membrane is likely to require taking into account anisotropic, viscous and plastic responses, as well as non-linearities which are more complex than those of the Neo-Hookean model. Nevertheless, our simple approach already gives important insight into some fundamental properties of the pressure-deformation relations of pendant droploons.

Considering that pendant drop experiments, even in the simplest configuration without gravity, overlay a geometric non-linearity with non-linearities in the material response of a solid-like interfacial material, it remains questionable whether this is the appropriate experimental choice to discriminate between candidate models of solid-like interfaces. Differences between models are likely to show up only at larger deformations, which makes the interpretation extremely difficult. However, due to their simplicity, pendant drop experiments remain an excellent choice for a phenomenological characterisation of dilational visco-elastic properties at small deformation.
Last but not least, all our investigations have been performed without gravity, while pendant drops (and bubbles) are prone to gravity-driven deformations rendering them non-spherical. We recall that for a nearly spherical drop the Bond number $Bo = \Delta\rho g R_0^2/\gamma_0$ indicates the ratio of the hydrostatic pressure difference between the top and the bottom of the bubble, $2\Delta\rho g R_0$, to the Laplace pressure due to surface tension, $2 \gamma_0/R_0$. The impact of gravity on the bubble shape is negligible if $Bo \ll 1$. If density-matched systems cannot be used, very small bubbles may therefore be a solution \cite{Kotula_JRheo_2015} to reduce the impact of gravity. This also has the advantage of increasing the interface curvature, and hence the pressure, and therefore the experimental sensitivity.

If gravity-driven deformation cannot be completely avoided, the following two aspects need to be taken into account. The first influence of gravity is on the shape of the droploon in the reference state. Gravity may create a concave neck close to the capillary, which creates additional stress localisation. Using numerical investigations of the droplet shape bifurcation diagram (yellow line of bifurcations in Figs.\ 4 and 5 of Ref.\ \cite{Kratz2020}), we could show in previous work that only for
\begin{equation}
\frac{R_n}{R_0} < 2.6\, \mathrm{Bo}^{1.64}
\end{equation}
does the drop remain fully convex, so that neck formation can be neglected.

The second aspect concerns deformation with elastic skins, where the increasing droploon size upon inflation or the decreasing effective surface stresses upon deflation may make the system increasingly sensitive to gravity. In this case one may want to introduce an elastic Bond number which contains the deformation-dependent elastic contribution to the surface stress, based on the Hookean expression (\ref{eq:HookeTension}),
\begin{equation}
 Bo_{el} = \frac{\Delta \rho g}{\gamma_0(1+2\alpha(\lambda-1))}\lambda^2 R_0^2.
\end{equation}
For sufficiently small elastic Bond numbers, gravity can then be neglected. Since gravity can be implemented easily in Surface Evolver, future investigations may explore its influence more quantitatively.
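As a quick sanity check of these orders of magnitude, the following snippet estimates $Bo$ for a water drop in air; the material values are our own illustrative assumptions, not taken from the measurements discussed here.
\begin{verbatim}
# Illustrative Bond-number estimates (assumed values, not from the paper)
drho  = 1000.0   # density contrast in kg/m^3 (water/air, assumed)
g     = 9.81     # gravitational acceleration in m/s^2
gamma = 0.072    # surface tension in N/m (clean water/air, assumed)

for R0_mm in (0.1, 0.5, 1.0):
    R0 = R0_mm * 1e-3
    Bo = drho * g * R0**2 / gamma
    print(f"R0 = {R0_mm} mm  ->  Bo = {Bo:.3f}")
# Sub-millimetric drops give Bo << 1, so gravity is negligible there,
# consistent with the suggestion to use very small bubbles or drops.
\end{verbatim}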
Section \\ref{sec:energy} provides details about the calculation of the elastic energy density, based on the neo Hooke constitutive model. \n\\subsection{Strain represented using convected coordinates \\label{sec:convected}}\n\\begin{figure}\n\\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{FIGURES\/Figure9.JPG}\n \\caption{A triangular finite element of an interface is represented in the reference configuration and in the current, deformed configuration. The figure illustrates the notations used in the text: $\\vec{\\bf x}$ for vectors pointing to vertices and $\\vec{\\bf s}$ for finite element edge vectors. Capital letters are used for the reference configuration and small letters for the current configuration. For the sake of simplicity, only one of the three vectors pointing to vertices is shown in each configuration. The contravariant components of both $\\vec{\\bf X}$ and $\\vec{\\bf x}$ are indicated on the same set of Cartesian axes.}\n \\label{fig:convected}\n\\end{figure}\nThe shape of the triangular facets used in the Surface Evolver as finite elements is fully defined if two edge vectors are given. Upon deformation of the investigated bubble, the facet is generally displaced and the edge vectors are changed, spanning a facet of modified shape. In the spirit of a linear discretization, an affine displacement field is assumed within each facet. One could describe the facet deformation using a coordinate system whose origin is attached to a given vertex of the facet, and express how the Cartesian coordinates of each point on the facet evolve. Alternatively, one may interpret the edge vectors as basis vectors which evolve upon a deformation and which are therefore in general non orthogonal. In this latter approach, the coordinates of each point of the interface are fixed and the deformation is represented in terms of a change of the basis vectors. This \"convected coordinate\" method goes back to pioneering work by Hencky \\cite{Hencky1925}.\nIn the Surface Evolver this method is convenient because the relevant edge vectors can easily be derived from the three facet vertex positions in the current configuration, denoted $\\vec{x}_1,\\vec{x}_2,\\vec{x}_3$ and in the reference configuration $\\vec{X}_1,\\vec{X}_2,\\vec{X}_3$,\n\\begin{equation}\n\\label{eq:defS}\n\\begin{split}\n \\vec{S}_1&=\\vec{X}_3-\\vec{X}_1 , \\quad \\vec{s}_1=\\vec{x}_3-\\vec{x}_1,\\\\\n \\vec{S}_2&=\\vec{X}_2-\\vec{X}_1 , \\quad \\vec{s}_2=\\vec{x}_2-\\vec{x}_1.\n\\end{split}\n\\end{equation}\nThe edge vectors are represented using a cartesian orthonormal basis $(\\vec{e}_x,\\vec{e}_y)$ such that $\\vec{S}_i=S_{ix}\\vec{e}_x+S_{iy}\\vec{e}_y$ and $\\vec{s}_i=s_{ix}\\vec{e}_x+s_{iy}\\vec{e}_y$. \\par \nAs mentioned, convected coordinates remain constant upon a deformation; this introduces simplicity. But this choice also introduces complexity since the expression of the scalar product is no longer given by contraction $\\vec{a}\\cdot\\vec{b}=a_ib_i$, additional terms appear since the basis vectors are generally not orthogonal. To avoid such complexity, one represents vectors and tensors that one wishes to associate in products using two different bases: a \"covariant\" and contravariant one. Covariant basis vectors follow the deformation of the edge facets. They are denoted $\\vec{G}_1,\\vec{G}_2$ in the reference state and $\\vec{g}_1,\\vec{g}_2$ in the current state. 
Covariant quantities are identified by lower indices,
\begin{equation}
\begin{split}
 \vec{G}_1&= \vec{S}_1,\quad \vec{g}_1= \vec{s}_1,\\
 \vec{G}_2&=\vec{S}_2,\quad \vec{g}_2= \vec{s}_2.
\end{split}
\end{equation}
Contravariant basis vectors $(\vec{G}^1,\vec{G}^2)$ or $(\vec{g}^1,\vec{g}^2)$ are identified by upper indices, and they are defined through the following orthogonality relations:
\begin{equation}
 \vec{G}^i\cdot\vec{G}_j=\delta^i_j, \quad \vec{g}^i\cdot\vec{g}_j=\delta^i_j,
 \label{eq:orthogonality}
\end{equation}
where $\delta^i_j =1$ if $i=j$ and $\delta^i_j =0$ otherwise. The Cartesian coordinate system is a special case within this general framework where covariant and contravariant bases coincide. Using co- and contravariant bases simplifies the expressions of the scalar products of vectors and tensors in the case of non-orthogonal basis vectors.

An arbitrary vector $d\vec{X}$ representing a small line element on the surface reads in terms of the covariant basis
\begin{equation}
 d\vec{X} = d\Theta^j \vec{G}_j,
\end{equation}
where the $d\Theta^j$ are the convected contravariant coordinates. We use the Einstein summation convention and sum over repeated indices.

Descriptions of strain in large deformation continuum mechanics are commonly based on the deformation gradient tensor $\mathbf{F}$, represented by a matrix that transforms a line element $d\vec{X}$ in the reference state into $d\vec{x}$ in the current state,
\begin{equation}
 d\vec{x} = \mathbf{F} d\vec{X}.
\end{equation}
In terms of convected coordinates, $\mathbf{F}$ may be written
\begin{equation}
\mathbf{F}=\vec{g}_j\otimes \vec{G}^j.
\label{Equation:RightCauchyGreen}
\end{equation}
The symbol $\otimes$ indicates an operation assembling two vectors into a tensor, called the tensor product.
Indeed, in view of Eq.\ \ref{eq:orthogonality} we have
\begin{equation}
 \mathbf{F} d \vec{X} = (\vec{g}_i\otimes \vec{G}^i)\,d\Theta^j \vec{G}_j=d\Theta^i \vec{g}_i = d \vec{x}.
\end{equation}
The deformation gradient tensor contains information about rotations that is irrelevant for interfacial energy. The interfacial energy in the Surface Evolver is computed using the 2D right Cauchy-Green strain tensor $\mathbf{C}$, which, contrary to $\mathbf{F}$, is invariant under rotations \cite{Mal1991}:
\begin{equation}
\mathbf{C}=\mathbf{F}^T \mathbf{F}=(\vec{G}^i\otimes \vec{g}_i)(\vec{g}_j\otimes \vec{G}^j)=g_{ij}\,\vec{G}^i\otimes \vec{G}^j,
\label{Equation:RightCauchyGreenAppendix}
\end{equation}
where $g_{ij}$ is the metric tensor in the current configuration, defined as follows:
\begin{equation}
 g_{ij} =\vec{g}_i \cdot \vec{g}_j.
 \label{eq:metric}
\end{equation}

To determine the elastic energy of a facet in a simulation, $\mathbf{C}$ needs to be determined numerically.
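Before turning to the general expressions, it may help to illustrate the orthogonality relations (\ref{eq:orthogonality}) on a concrete example (this simple example is ours and is not part of the Surface Evolver implementation): if the covariant basis has Cartesian components $\vec{G}_1=(1,0)$ and $\vec{G}_2=(1,1)$, the corresponding contravariant basis is $\vec{G}^1=(1,-1)$ and $\vec{G}^2=(0,1)$, since
\begin{equation*}
\vec{G}^1\cdot\vec{G}_1=1, \quad \vec{G}^1\cdot\vec{G}_2=1-1=0, \quad \vec{G}^2\cdot\vec{G}_1=0, \quad \vec{G}^2\cdot\vec{G}_2=1.
\end{equation*}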
The components of the contravariant basis vectors in the reference state $\vec{G}^i$ are deduced from the covariant ones using the orthogonality properties (\ref{eq:orthogonality}):
\begin{equation}
\begin{aligned}
\vec{G}^1\cdot\vec{G}_1=1=S^{1x}S_{1x}+S^{1y}S_{1y} &\rightarrow &S^{1x}=\frac{1-S^{1y}S_{1y}}{S_{1x}}\\
\vec{G}^2\cdot\vec{G}_1 = 0 =S^{2x}S_{1x}+S^{2y}S_{1y} & \rightarrow & S^{2x}=-S^{2y}\frac{S_{1y}}{S_{1x}}\\
\vec{G}^2\cdot\vec{G}_2 = 1 =S^{2x}S_{2x}+S^{2y}S_{2y} & \rightarrow & S^{2x}=\frac{1-S^{2y}S_{2y}}{S_{2x}}\\
\vec{G}^1\cdot\vec{G}_2 = 0 =S^{1x}S_{2x}+S^{1y}S_{2y} & \rightarrow & S^{1x}=-S^{1y}\frac{S_{2y}}{S_{2x}}.
\end{aligned}
\label{Equation:OrthogonalMaterialBasis}
\end{equation}

Solving the system (\ref{Equation:OrthogonalMaterialBasis}) yields the components of the vectors $\vec{G}^i$:
\begin{equation}\begin{aligned}
\vec{G}^1=\vec{S}^1=\left(\frac{S_{2y}}{S_{1x}S_{2y}-S_{1y}S_{2x}}, -\frac{S_{2x}}{S_{1x}S_{2y}-S_{1y}S_{2x}}\right) \\
\vec{G}^2=\vec{S}^2=\left(-\frac{S_{1y}}{S_{1x}S_{2y}-S_{1y}S_{2x}}, \frac{S_{1x}}{S_{1x}S_{2y}-S_{1y}S_{2x}}\right).
\end{aligned}
\label{Equation:ContravariantMaterialCoordinates}
\end{equation}

To express the Cauchy-Green strain tensor directly as a function of the edge vectors, it is convenient to introduce Gram matrices. The Gram matrix of two arbitrary vectors $\vec{v}_1$ and $\vec{v}_2$ is a $2\times 2$ matrix whose element $ij$ is by definition given by the scalar product $\vec{v}_i\cdot \vec{v}_j$. The covariant metric tensor defined in Eq.\ (\ref{eq:metric}) is thus the Gram matrix of the edge vectors in the current configuration. Following the notation used in the Surface Evolver manual, we will call this quantity $\mathbf{s}$:
 \begin{equation}
 \mathbf{s} = \begin{pmatrix}
 \vec{s}_1\cdot\vec{s}_1 & \vec{s}_1\cdot\vec{s}_2 \\
 \vec{s}_2\cdot\vec{s}_1 & \vec{s}_2\cdot\vec{s}_2
 \end{pmatrix}=(g_{ij}).
 \end{equation}

The Gram matrix of the edge vectors in the reference state is denoted $\mathbf{S}$:
\begin{equation}
 \mathbf{S} = \begin{pmatrix}
 \vec{S}_1\cdot\vec{S}_1 & \vec{S}_1\cdot\vec{S}_2 \\
 \vec{S}_2\cdot\vec{S}_1 & \vec{S}_2\cdot\vec{S}_2
 \end{pmatrix}.
 \end{equation}

We note that the denominators in Eqs.\ (\ref{Equation:ContravariantMaterialCoordinates}) are the square root of the determinant of $\mathbf{S}$:
\begin{equation}
\mathrm{det}\,
 \mathbf{S} = \left(\vec{S}_1\cdot\vec{S}_1\right)\left(\vec{S}_2\cdot\vec{S}_2\right) - \left(\vec{S}_1\cdot\vec{S}_2\right)^2=\left(S_{1x}S_{2y}-S_{1y}S_{2x} \right)^2.
\label{Equation:CovariantGramDeterminant}
\end{equation}

Since the components of the tensor $\vec{G}^i\otimes \vec{G}^j$ are the scalar products of $\vec{G}^i$ and $\vec{G}^j$ \cite{Kelly}, we can now write Eq.\ (\ref{Equation:RightCauchyGreenAppendix}) in terms of the Cartesian components of $\mathbf{S}$, using Eqs.\ (\ref{Equation:ContravariantMaterialCoordinates}) and (\ref{Equation:CovariantGramDeterminant}),
\begin{equation}\begin{aligned}
\vec{G}^1\cdot\vec{G}^1 &= \frac{S_{2x}S_{2x}+S_{2y}S_{2y}}{\mathrm{det}(\mathbf{S})} =& \frac{\vec{S}_2\cdot\vec{S}_2}{\mathrm{det}(\mathbf{S})} \\
\vec{G}^2\cdot\vec{G}^2 &= \frac{S_{1x}S_{1x}+S_{1y}S_{1y}}{\mathrm{det}(\mathbf{S})} =&\frac{\vec{S}_1\cdot\vec{S}_1}{\mathrm{det}(\mathbf{S})} \\
\vec{G}^1\cdot\vec{G}^2 &= -\frac{S_{1x}S_{2x}+S_{1y}S_{2y}}{\mathrm{det}(\mathbf{S})} =&-\frac{\vec{S}_1\cdot\vec{S}_2}{\mathrm{det}(\mathbf{S})}.
\end{aligned}\end{equation}

This result shows that the matrix with entries $\vec{G}^{i}\cdot\vec{G}^{j}$ is the inverse of the Gram matrix $\mathbf{S}$,
\begin{equation}
\left(\vec{G}^{i}\cdot\vec{G}^{j}\right) = \frac{1}{\mathrm{det}(\mathbf{S})}
\begin{pmatrix}
\vec{S}_2\cdot\vec{S}_2 & -\vec{S}_1\cdot\vec{S}_2 \\
-\vec{S}_1\cdot\vec{S}_2 & \vec{S}_1\cdot\vec{S}_1
\end{pmatrix}
=\mathbf{S}^{-1}.
\end{equation}

We can finally express the 2D right Cauchy-Green tensor (Eq.\ \ref{Equation:RightCauchyGreen}), needed in Section \ref{sec:energy} to calculate the elastic energy, in terms of the Gram matrices $\mathbf{s}$ and $\mathbf{S}$:
\begin{equation}
\mathbf{C}=\mathbf{F}^T\mathbf{F} = \mathbf{s}\,\mathbf{S}^{-1}.
\label{Equation:RightCauchyGreenExplained}
\end{equation}

We note that Eq.\ (\ref{Equation:RightCauchyGreenExplained}) can also be used to compute the Green-Lagrange strain tensor $\mathbf{E}=\frac{1}{2}\left(\mathbf{F}^T\mathbf{F}-\mathbf{I}\right)$ from the vertex coordinates. $\mathbf{E}$ converges to the infinitesimal strain tensor $\mathbf{\varepsilon}$ in the limit of small deformations. Eq.\ (\ref{Equation:RightCauchyGreenExplained}) is thus the key result for evaluating strain in Surface Evolver calculations. We note that Eq.\ \eqref{Equation:RightCauchyGreenExplained} also gives the correct strain for displacements of vertices normal to the surface.

\subsection{Elastic energy \label{sec:energy}}
In this section we explain how the elastic contribution to the interfacial energy is determined in our simulations. According to the compressible 3D neo-Hookean model implemented in the Surface Evolver \cite{Bouzidi_CompStruct_2004}, and commonly used in the literature \cite{Pence2015}, the elastic energy per unit volume is
\begin{equation}
W_{3D} = \frac{G}{2} (\mathrm{Tr}\, \mathcal{C}-3)-G \ln J +\frac{\Lambda}{2} (\ln J)^2,
\label{eq:3D energy density 1}
\end{equation}
where $G$ and $\Lambda$ are the Lamé parameters.
$J^2=\mathrm{det}(\mathcal{C})$ is an invariant of $\mathcal{C}$, a scalar quantity independent of the reference frame; $J$ is given by the ratio of the volumes of a material element in the current deformed and initial states.
In the limit of small deformations, the energy density Eq.\ \ref{eq:3D energy density 1} reduces, as expected, to the one deduced from Hooke's law for linear elastic isotropic materials \cite{LandauLifshitz}, using the infinitesimal strain tensor $\mathbf{\epsilon}$ defined by Eq.\ \ref{eq:epsilon},
\begin{equation}
W_{3D} = \frac{\Lambda}{2} \left(\mathrm{Tr}\,\mathbf{\epsilon}\right)^2+G\, \mathrm{Tr}(\mathbf{\epsilon}^2).
\label{eq:linear_energy_density}
\end{equation}

\par
The elastic skins considered in our work are so thin that their bending stiffness is negligible. Their resistance to shear deformations in which the two opposite faces are displaced relative to each other is very strong; we neglect this mode of deformation and assume a state of {\it plane stress}, consistently with the Kirchhoff hypotheses of thin shell theory \cite{axelrad1987}. Using Cartesian coordinates with an $x_3$ axis perpendicular to an element of the skin, this is expressed as $\mathcal{C}_{31}=\mathcal{C}_{32}=\mathcal{C}_{13}=\mathcal{C}_{23}=0$.
In the same spirit, we consider the case where the stress normal to the skin has a negligible effect on its shape, so that we can assume $\sigma_{33}=0$ without loss of generality.
For plane stress, the changes of volume and the changes of skin thickness are directly related. To analyse this feature, we recall a general relation between the energy density and the Cauchy stress of hyperelastic materials \cite{Mal1991},
\begin{equation}
 J \mathbf{F}^{-1} \mathbf{\sigma}\mathbf{F}^{-T} =2 \frac{\partial W_{3D}}{\partial\mathcal{C}}.
\end{equation}
The plane stress condition can thus be expressed as
\begin{equation}
 \frac{\partial W_{3D}}{\partial\mathcal{C}_{33}}=0.
\end{equation}
Using Eq.\ \ref{eq:3D energy density 1} this yields
\begin{equation}
 \Lambda \ln J = G (1 - \mathcal{C}_{33}).
 \label{eq:JC33}
\end{equation}
Physically speaking, this equation, previously derived for a similar constitutive equation \cite{Pascon2019}, relates the squared ratio of the current and initial skin thicknesses, given by $\mathcal{C}_{33}$, to the ratio of the current and initial skin volumes, expressed by $J$.
With the aim of deriving a 2D energy density, we write Eq.\ \eqref{eq:JC33} as a function of the components of $\mathcal{C}$, taking into account that many of them are zero in the case of plane stress, as pointed out above:
\begin{equation}
 \mathcal{C}_{33}(\mathcal{C}_{11}\mathcal{C}_{22}-\mathcal{C}_{12}^2)=\exp\left[\frac{2 G}{\Lambda}(1-\mathcal{C}_{33})\right].
 \label{eq:JC332}
\end{equation}
To represent the skin as a 2D material whose deformation is fully specified by $\mathcal{C}_{11},\mathcal{C}_{22}$ and $\mathcal{C}_{12}$, we need to express $\mathcal{C}_{33}$ in terms of these other variables. This can be done by solving Eq.\ \eqref{eq:JC332} either numerically \cite{Pascon2019}, or analytically, using Lambert's $W$ function \cite{corless1996}:
\begin{equation}
\mathcal{C}_{33} = \frac{\Lambda}{2 G }\, W\left[\frac{2 G \, \exp(2 G /\Lambda)}{\Lambda\, \mathrm{det}(\mathbf{C}) }\right]
= \frac{\Lambda}{2 G }\, W\left[\frac{2 G \, \exp(2 G /\Lambda)}{\Lambda(\mathcal{C}_{11}\,\mathcal{C}_{22}-\mathcal{C}_{12}^2)}\right].
\end{equation}
The latter option has been implemented by R.\ Bouzidi in the Surface Evolver software. Inserting the expression of $\mathcal{C}_{33}$ in Eq.\ \eqref{eq:JC33} and the resulting expression for $\ln J$ into the 3D energy density Eq.\ \eqref{eq:3D energy density 1}, we obtain the following 2D energy density for a neo-Hookean skin, where $h_0$ is the skin thickness in the reference state,
\begin{equation}
W_{2D} =G h_0\left( \frac{1}{2} (\mathrm{Tr}\, \mathcal{C}-3)- \frac{G}{\Lambda}(1-\mathcal{C}_{33}) +\frac{G}{2\Lambda} (1-\mathcal{C}_{33})^2 \right).
\label{eq:2D energy density}
\end{equation}
$G h_0$ may be interpreted as a 2D shear modulus.
Neglecting constant terms, which are irrelevant for a potential energy, and expressing the result in terms of the 2D right Cauchy-Green tensor using
$\mathrm{Tr}\, \mathcal{C} = \mathrm{Tr}\, \mathbf{C} + \mathcal{C}_{33}$, we obtain
\begin{equation}
W_{2D} =\frac{G h_0}{2}\left( \mathrm{Tr}\, \mathbf{C} +\mathcal{C}_{33} + \frac{G}{\Lambda} \mathcal{C}_{33}^2 \right).
\label{eq:2D energy density final}
\end{equation}

The skin materials considered in the present paper are much easier to shear than to compress, such that $G \ll \Lambda$.
In this case, the last term in Eq.\ \eqref{eq:2D energy density final} can be neglected.
 \par Besides the neo-Hookean model discussed so far, the Surface Evolver software provides an alternative energy density expression called the ``linear elastic model'', which yields behavior consistent with Eq.\ \eqref{eq:linear_energy_density} in the limit of small deformations. However, one should be aware that for large deformations this numerical model, based on the right Cauchy-Green tensor, is not consistent with Eq.\ (\ref{eq:linear_energy_density}).

\section{Pressure-deformation relations of droploons on capillaries expressed via radial stretch}
\label{annex:PressDefNeedle}

In the main body of the article we expressed all relations in terms of the area stretch $\lambda_A$. The same approach can be taken for the radial stretch $\lambda$, leading, however, to expressions which are less intuitive and less directly accessible by experiments and simulations. For completeness, we shall provide the resulting equations here.

We can rewrite the interfacial area $A$ for a droploon on a capillary larger than a hemisphere as
\begin{equation}
 \begin{split}
 A & = 2\pi R^2\left(1 + \sqrt{1-\left(\frac{R_n}{R}\right)^2}\right) \\
 & = 2\pi R^2 \mathpzc{f}(R_n/R).
 \end{split}
 \label{eq:interfacialarea}
\end{equation}
The function $\mathpzc{f}(R_n/R)$ defined by Eq.\ (\ref{eq:interfacialarea}) helps to express the result in a more concise way.

The term $\ln(A/A_0)$ in the Gibbs relation (\ref{eq:GibbsGamma}) can then be rewritten using Eq.\ (\ref{eq:interfacialarea}) to give the normalised surface stress of the droploon on the capillary
 \begin{equation}
 \hat{\sigma} = 1+ 2\alpha \ln \lambda + \alpha \ln \xi.
 \label{eq:GibbsNeedle}
 \end{equation}
 The last term, depending on the geometric factor
 \begin{equation}
 \xi = \frac{\mathpzc{f}(R_n/R)}{\mathpzc{f}(R_n/R_0)},
 \label{eq:GeomFactor}
 \end{equation}
 expresses the impact of a capillary on the elastic stress at the surface of a sphere, assuming a spherical sector shape.

 In the first two terms one recognises the result previously obtained for the perfect sphere (Eq.\ (\ref{eq:GibbsGamma})). One can therefore rewrite
 \begin{equation}
 \hat{\sigma} = \hat{\sigma}_{sphere} + \alpha \ln \xi.
 \label{eq:GibbsNeedleSphere}
 \end{equation}

Compared to a sphere with the same radius, the presence of the capillary introduces a corrective term in the surface stress which depends on $\alpha$, $R$, $R_n$ and $R_0$.

For neo-Hookean droploons, the droploon shapes on capillaries are no longer perfect spherical sectors, making analytical descriptions much harder, which is why numerical simulations are required. Nevertheless, we shall make here the seemingly crude approximation that the shapes can be approximated as spherical sectors.
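As a simple consistency check (this side computation is ours), consider the limit of a narrow capillary, $R_n \ll R_0$. Expanding $\mathpzc{f}(x) = 1+\sqrt{1-x^2} \approx 2 - x^2/2$ in Eq.\ (\ref{eq:GeomFactor}) gives
\begin{equation*}
\ln \xi \approx \frac{R_n^2}{4}\left(\frac{1}{R_0^2}-\frac{1}{R^2}\right) \rightarrow 0 \quad \text{for} \quad R_n \rightarrow 0,
\end{equation*}
so that the corrective term $\alpha \ln \xi$ in Eq.\ (\ref{eq:GibbsNeedleSphere}) vanishes and the result for the perfect sphere is recovered, as it should be.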
Using exactly the same approach as for the Gibbs interface but with the neo-Hookean relation (see Table \ref{tab:models}), one finds for a neo-Hookean droploon on a capillary
 \begin{equation}
 \hat{\sigma} = 1 + \frac{\alpha}{3} \left( 1 - \lambda^{-6} \xi^{-3} \right).
 \label{eq:NeoHookeNeedleAnnexe1}
 \end{equation}

 After some algebra, this can be rewritten as the expression for the perfect sphere with a corrective term taking account of the capillary
 \begin{equation}
 \hat{\sigma} = \hat{\sigma}_{sphere} + \frac{\alpha}{3} \left(1 - \xi^{-3}\right)\lambda^{-6}.
 \label{eq:NeoHookeNeedleAnnexe2}
 \end{equation}

 In the limit of small deformations, our results for both Gibbs and neo-Hookean elasticity yield the same relation
 \begin{equation}
 \hat{\sigma} = \hat{\sigma}_{sphere} + \alpha \left(\xi - 1 \right)\lambda,
 \label{eq:HookeNeedleAnnexe3}
 \end{equation}
consistent with what one would obtain for a perfectly spherical sector droploon with a Hookean skin on a capillary. In all cases, the corrective term is zero in the reference state where $R=R_0$. Once the interfacial stresses are known, the pressure-deformation relation can be calculated using the Young-Laplace law given in Eq.\ (\ref{eq:NormalisedPressure}).

Table \ref{tab:modelsNeedles} summarises the normalised expressions derived from this simple geometrical approximation model, together with expressions for the critical stretch.

\begin{table*}[t]
 \centering
 \begin{tabular}{|c|c|c|}
 \hline
 Model on capillary & Normalised surface stress $\hat{\sigma}$ & Critical stretch $\lambda_c$ \\ \hline
Gibbs & $\hat{\sigma}_{sphere} + \alpha \ln \xi$ & $\frac{R_0\left(1+\sqrt{1-(\frac{R_n}{R_0})^2}\right)e^{-\frac{1}{\alpha}}}{\sqrt{2R_0^2(1+\sqrt{1-(\frac{R_n}{R_0})^2})e^{-\frac{1}{\alpha}}-R_n^2}}$ \\ \hline
Neo-Hooke & $\hat{\sigma}_{sphere} + \frac{\alpha}{3} \left(1-\xi^{-3}\right)\lambda^{-6}$ & $\frac{R_0(1+\sqrt{1-(\frac{R_n}{R_0})^2})\left(1-\frac{1}{2\alpha} \right)^2}
{ \sqrt{2R_0^2\left(1+\sqrt{1-(\frac{R_n}{R_0})^2}\right)\left(1-\frac{1}{2\alpha} \right)^2-R_n^2}}$ \\ \hline
Hooke & $\hat{\sigma}_{sphere} + \alpha \left(\xi - 1 \right)\lambda$ & $\frac{R_0(1+\sqrt{1-(\frac{R_n}{R_0})^2})\left(\frac{\alpha}{\alpha+3}\right)^{\frac{1}{3}} }
{ \sqrt{2R_0^2\left(1+\sqrt{1-(\frac{R_n}{R_0})^2}\right)\left(\frac{\alpha}{\alpha+3}\right)^{\frac{1}{3}}-R_n^2}}$\\
\hline
 \end{tabular}
 \caption{Summary of the normalised expressions for the surface stress of drops on capillaries using the approximation that the drop can be described by a spherical sector. While for Gibbs droploons these are correct, they are only approximations for Hookean and neo-Hookean droploons. The expressions for $\hat{\sigma}_{sphere}$ are given in Table \ref{tab:models}. The geometric factor $\xi$ is given in Eq.\ (\ref{eq:GeomFactor}).}
 \label{tab:modelsNeedles}
\end{table*}

\clearpage

\balance

\section{Introduction}

\subsection{Gopakumar-Vafa invariants}
A smooth complex projective 4-fold $X$ is {\em holomorphic symplectic} if
it is equipped with a non-degenerate holomorphic 2-form $\sigma\in H^0(X,\Omega^2_X)$.
The ordinary Gromov-Witten invariants of $X$ always vanish for non-zero curve classes.
Instead, a reduced Gromov-Witten theory is
defined by Kiem-Li's cosection localization \cite{KiL}.

Given cohomology classes $\gamma_i \in H^{\ast}(X,\mathbb{Z})$,
the (reduced) Gromov-Witten invariants of $X$ in
a non-zero curve class
$\beta \in H_2(X,\mathbb{Z})$
are defined by
\begin{align}\label{intro GWinv}
\mathrm{GW}_{g, \beta}(\gamma_1, \ldots, \gamma_l)
=\int_{[\overline{M}_{g, l}(X, \beta)]^{\rm{vir}}}
\prod_{i=1}^l \mathrm{ev}_i^{\ast}(\gamma_i),
\end{align}
where
\begin{equation*}[\overline{M}_{g, l}(X, \beta)]^{\mathrm{vir}}\in A_{2-g+l}(\overline{M}_{g, l}(X, \beta)) \end{equation*}
is the (reduced) virtual class and $\mathrm{ev}_i \colon \overline{M}_{g,l}(X, \beta)\to X$
is the evaluation map at the $i$-th marking.
We refer to \cite{O1, O3, OSY} for some references on computations for \eqref{intro GWinv}.
Gromov-Witten invariants are in general rational numbers because the moduli space $\overline{M}_{g,l}(X, \beta)$ of stable maps is a Deligne-Mumford stack.
It is an interesting question to find out integer-valued invariants
which underlie them.

In \cite{COT1}, we studied this question and defined \textit{genus 0 Gopakumar-Vafa invariants}
\begin{equation}\label{intro gv invs1}n_{0,\beta}(\gamma_1, \ldots, \gamma_l)\in \mathbb{Q} \end{equation}
for any non-zero curve class $\beta$ and \textit{genus 1 and 2 Gopakumar-Vafa invariants}
\begin{equation}\label{intro gv invs2}n_{1,\beta}(\gamma)\in \mathbb{Q}, \,\,\forall \,\, \gamma\in H^4(X,\mathbb{Z}); \quad n_{2,\beta} \in \mathbb{Q} \end{equation}
for any primitive curve class $\beta$ (i.e.~it is not a multiple $k\beta'$, $k\geqslant 2$, of a curve class $\beta'\in H_2(X,{\mathbb{Z}})$) from Gromov-Witten invariants \eqref{intro GWinv} (see \S \ref{sect on gv} for details).
This may be compared with the previous works of Gopakumar and Vafa \cite{GV} on Calabi-Yau 3-folds, Klemm and Pandharipande \cite{KP} on Calabi-Yau 4-folds, and Pandharipande and Zinger \cite{PZ} on Calabi-Yau 5-folds.

In loc.~cit., we conjectured the integrality of \eqref{intro gv invs1}, \eqref{intro gv invs2} and provided substantial evidence for it.
The aim of this paper is to give a sheaf theoretic interpretation of these Gopakumar-Vafa invariants using moduli spaces of stable pairs, in analogy with the discussion of \cite{CMT2, CT1} on ordinary Calabi-Yau 4-folds.

\subsection{GV/Pairs correspondence}
Let $F$ be a one dimensional coherent sheaf on $X$ and $s\in H^0(F)$ be a section.
For an ample divisor $\omega$ on $X$, we denote the slope function by $\mu(F)=\chi(F)/(\omega \cdot [F])$.
The pair $(F,s)$ is called $Z_t$-\textit{stable} $($$t\in\mathbb{R}$$)$ if
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item for any subsheaf $0\neq F' \subseteq F$, we have
$\mu(F')<t$,
\item for any subsheaf $F' \subsetneq F$ such that $s$ factors through $F'$,
we have $\mu(F/F')>t$.
\end{enumerate}
For a non-zero curve class $\beta \in H_2(X, \mathbb{Z})$ and $n\in \mathbb{Z}$,
we denote by
\begin{align*}
P_n^t(X, \beta)
\end{align*}
the moduli space of
$Z_t$-stable pairs
$(F, s)$ with $([F], \chi(F))=(\beta, n)$. It has a wall-chamber structure and for a \textit{general} $t \in \mathbb{R}$ (i.e.~outside a finite subset of rational numbers in $\mathbb{R}$),
it is a projective scheme.

When $t<\frac{n}{\omega \cdot \beta}$, $P_n^t(X, \beta)$ is empty.
The first nontrivial chamber appears when
$t=\frac{n}{\omega \cdot \beta}+0^+$, which we call the \textit{Joyce-Song (JS) chamber} (here $0^+$ denotes a sufficiently small positive number
with respect to the fixed $\omega,\beta,n$). When $t\gg 1$, it recovers the moduli space of \textit{Pandharipande-Thomas (PT) stable pairs} \cite{PT} (Proposition \ref{prop:chambers}).

For a general $t\in \mathbb{R}$, by Theorem \ref{existence of proj moduli space}, we can define its $\mathop{\rm DT}\nolimits_4$ virtual class following \cite{BJ, OT} (see also \cite{CL1}).
However, by a cosection argument this virtual class vanishes, see \cite{KiP, Sav}.
Using Kiem-Park's cosection localization \cite{KiP}, we have a (reduced) virtual class
\begin{align*}[P^t_n(X,\beta)]^{\mathrm{vir}}\in A_{n+1}(P^t_n(X,\beta),\mathbb{Q}), \end{align*}
depending on the choice of orientation \cite{CGJ, CL2}. More precisely, for each connected component of $P^t_n(X,\beta)$, there
are two choices of orientation which affect the virtual class by a sign (component-wise).
To define its counting invariants, let
\begin{align*}\tau: H^{m}(X,\mathbb{Z})\to H^{m-2}(P_n^t(X,\beta),\mathbb{Z}), \end{align*}
\begin{align*}\tau(\gamma):=\pi_{P\ast}\left(\pi_X^{\ast}\gamma \cup\mathop{\rm ch}\nolimits_{3}(\mathbb{F})\right),
\end{align*}
where $\mathbb{I}=(\mathcal{O}\to \mathbb{F})$ is the universal $Z_t$-stable pair and $\pi_P, \pi_X$ are projections from $P_n^t(X,\beta)\times X$ onto its factors.
For $\gamma_i \in H^{m_i}(X, \mathbb{Z})$, the $Z_t$-\textit{stable pair invariants} are defined by
\begin{align*}
P_{n,\beta}^t(\gamma_1,\ldots,\gamma_l):=\int_{[P_n^t(X,\beta)]^{\rm{vir}}}\prod_{i=1}^l\tau(\gamma_i)\in\mathbb{Q}.
\end{align*}
When $n=-1$, we also write
$$P_{-1,\beta}^t:=\int_{[P_{-1}^t(X,\beta)]^{\rm{vir}}}1. $$
Here is the main conjecture of this paper, which gives a sheaf theoretic interpretation of
all genus Gopakumar-Vafa invariants using $Z_t$-stable pair invariants.
\begin{conj}\emph{(Conjecture \ref{conj on DT4/GV})}\label{intro conj on DT4/GV}
Fix $n\in\mathbb{Z}$, $\beta\in H_2(X,\mathbb{Z})$ and let $t>\frac{n}{\omega\cdot \beta}$ be generic.
For a certain choice of orientation, we have
\begin{enumerate}
\item If $n\geqslant 2$, then
\begin{align*}
P_{n,\beta}^t(\gamma_1,\ldots,\gamma_l)=0.
\end{align*}
\item If $n=1$, then
\begin{align*}
P_{1,\beta}^t(\gamma_1,\ldots,\gamma_l)=n_{0, \beta}(\gamma_1, \ldots, \gamma_l). \end{align*}
\item If $n=0$ and $\beta$ is primitive, then
\begin{align*}
P_{0,\beta}^t(\gamma)=n_{1, \beta}(\gamma).
\end{align*}
\item If $n=-1$ and $\beta$ is primitive, then
\begin{align*}
P_{-1,\beta}^t=n_{2,\beta}.
\end{align*}
\end{enumerate}
\end{conj}
We verify this conjecture by a computation in an ideal geometry where curves deform in families of expected dimensions and
have expected generic properties (see \S \ref{sect on heur}). Besides this, we study several examples and prove our conjecture in those cases.

\subsection{Verification of conjectures I: $K3\times K3$}
Let $X=S\times T$ be the product of two $K3$ surfaces.
When the curve class $\beta \in H_2(S \times T, {\mathbb{Z}})$
is of non-trivial degree over both $S$ and $T$,
one can construct two linearly independent cosections for moduli spaces of stable maps,
which imply that the (reduced) Gromov-Witten invariants of $X$ in this class vanish. Therefore we always restrict to curve classes of the form
\begin{equation*}\beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z}). \end{equation*}

\begin{thm}\emph{(Theorem \ref{thm on g=0 conj on prod}, \ref{thm on g=1 conj on prod}, \ref{thm on P_-1}, Remark \ref{rmk on pri g=0})}
Let $X=S\times T$ be as above. Then Conjecture \ref{intro conj on DT4/GV}
holds for any primitive curve class $\beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z})$.
\end{thm}
In fact, by the global Torelli theorem (see e.g.~\cite{Ver, Huy}), primitive curve classes on $K3$ surfaces can be deformed to irreducible curve classes.
By deformation invariance, we only need to deal with an irreducible curve class $\beta$, in which case we have an isomorphism (Proposition \ref{prop on smoothness}):
\begin{equation*}P^t_{n}(X,\beta)\cong P^t_{n}(S,\beta)\times T, \end{equation*}
and a forgetful map
\begin{equation}\label{intro fort map}P^t_{n}(S,\beta)\to M_n(S,\beta), \end{equation}
where $M_n(S,\beta)$ is the coarse moduli space of one dimensional stable sheaves $F$ on $S$ with $[F]=\beta$, $\chi(F)=n$.
Both $P^t_{n}(S,\beta)$ and $M_n(S,\beta)$ are smooth schemes. We can then determine the $\mathop{\rm DT}\nolimits_4$ virtual class of $P^t_{n}(X,\beta)$ (Theorem \ref{thm on vir clas}) and its pushforward (under the forgetful map) by the Thom-Porteus formula (Proposition \ref{deg loci}). This enables us to reduce the computation
of $Z_t$-stable pair invariants to certain tautological integrals on $M_n(S,\beta)$.
By Markman's framework of monodromy operators \cite{Markman}, we relate such integrals to certain tautological integrals
on Hilbert schemes of points on $S$ (see \S \ref{sect on trans} for details),
which we explicitly determine using \cite{COT1} (see the proofs of Theorem \ref{thm on g=1 conj on prod}, \ref{thm on P_-1} for details).

\subsection{Verification of conjectures II: $T^*\mathbb{P}^2$}
Let $H \in H^2(T^{\ast} {\mathbb{P}}^2)$ be the pullback of the hyperplane class and let us identify $H_2(T^{\ast} {\mathbb{P}}^2, {\mathbb{Z}}) \equiv {\mathbb{Z}}$ by taking the degree against $H$.

By explicitly describing the moduli spaces and virtual classes, we obtain:
\begin{prop}\emph{(Proposition \ref{prop on tp2})}\label{intro prop on tp2}
For a certain choice of orientation, we have
$$P_{1,1}(H^2,H^2)=1, \quad P_{1,2}(H^2,H^2)=-1, \quad P_{1,3}(H^2,H^2)=0, $$
$$P_{0,1}(H^2)=P_{0,2}(H^2)=0, \quad P_{0,3}(H^2)=1, \quad P_{-1,1}=P_{-1,2}=P_{-1,3}=0.$$
Moreover, $P^t_{n}(X,d)$ is independent of the choice of $t>n/d$ in the listed cases above.

In particular, for $X=T^*\mathbb{P}^2$, we have
\begin{itemize}
\item Conjecture \ref{intro conj on DT4/GV} (2) holds when $d\leqslant 3$.
\item Conjecture \ref{intro conj on DT4/GV} (3), (4) hold.
\end{itemize}
\end{prop}

\subsection{Verification of conjectures III: exceptional curves on $\mathop{\rm Hilb}\nolimits^2(K3)$}
Let $S$ be a $K3$ surface and $\mathop{\rm Hilb}\nolimits^2(S)$ be the Hilbert scheme of two points on $S$.
Consider the Hilbert-Chow map
$$\pi: \mathop{\rm Hilb}\nolimits^2(S)\to \mathop{\rm Sym}\nolimits^2(S) $$
to the symmetric product of $S$. Let $D$ be the exceptional divisor fitting into the Cartesian diagram:
\begin{align*} \xymatrix{
D \ar[d]_{\pi} \ar[r]^{i \quad \,\,\, } & \mathop{\rm Hilb}\nolimits^2(S) \ar[d]^{\pi} \\
S \ar[r]^{\Delta \quad \,\,\, } & \mathop{\rm Sym}\nolimits^2(S), } \quad \quad
\end{align*}
where $\Delta$ is the diagonal embedding and $\pi: D\to S$ is a $\mathbb{P}^1$-bundle.
The following provides a verification of our (genus 0) conjecture for imprimitive curve classes.
\begin{thm}\emph{(Theorem \ref{thm on hilbS})}\label{intro thm on hilbS}
In the JS chamber, Conjecture \ref{intro conj on DT4/GV} (1),~(2) hold for multiple fiber classes $\beta=r[\mathbb{P}^1]$ $($$r\geqslant 1$$)$ of $\pi$ as above.
\end{thm}
In fact, by the Jordan-H\"older filtration and a dimension counting, the JS pair invariants of $P_{n}^{\mathrm{JS}}(X,r[\mathbb{P}^1])$ are zero unless $n=r$, in which case we have
$$P_{n}^{\mathrm{JS}}(X,n[\mathbb{P}^1]) \cong \mathop{\rm Hilb}\nolimits^n(S). $$
The proof then makes use of the Chern class operators of tautological bundles introduced by Lehn \cite{Lehn}.

\subsection{Multiple fiber classes of elliptic fibrations}
Let $p: S\rightarrow\mathbb{P}^{1}$ be an elliptic $K3$ surface and consider the elliptic fibration:
$$\bar{p}:=p\times \textrm{id}_T: X:=S\times T\to \mathbb{P}^{1}\times T=:Y, $$
where $T$ is a $K3$ surface.
Denote by $f$ a generic fiber of $\bar{p}$ and by ${\mathsf{p}}\in H_0(T)$ the point class.
The following gives a closed formula for the $Z_t$-stable pair invariants of multiple fiber classes.
\begin{thm}\emph{(Theorem \ref{thm2 on g=1 of multiple fiber})}\label{intro thm2 on g=1 of multiple fiber}
Let $t>0$. Then for a certain choice of orientation, we have
\begin{align*}
\sum_{r\geqslant 0}P^t_{0,r[f]}(\gamma)\,q^r=24\,\left(\int_{S \times {\mathsf{p}}} \gamma\right)\cdot \sum_{m\geqslant 1}\sum_{n | m}n^2q^m. \end{align*}
\end{thm}
As for the proof, we note that there is an isomorphism
$$\bar{p}^*: \mathop{\rm Hilb}\nolimits^r(Y) \cong P^t_0(X,r[f]), \quad I_Z\mapsto \bar{p}^*I_Z, $$
under which the (reduced) virtual classes
$$(-1)^{n+1}[\mathop{\rm Hilb}\nolimits^r(Y)]^{\mathrm{vir}}=[P^t_0(X,r[f])]^{\mathrm{vir}}\in A_1(\mathop{\rm Hilb}\nolimits^r(Y)) $$
can be identified for a certain choice of orientation on the right hand side.
We are then left to evaluate an integral on $[\mathop{\rm Hilb}\nolimits^r(Y)]^{\mathrm{vir}}$, which can be done via the degeneration method and a Behrend function argument \cite{B, OS}.
We refer to Theorem \ref{thm1 on g=1 of multiple fiber} for a similar result for the trivial elliptic fibration $E\times E\times T\to E\times T$
and to the proof therein for details.

The formula in Theorem \ref{intro thm2 on g=1 of multiple fiber} seems to support our speculation of a GV/Pairs correspondence in genus 1 for imprimitive curve classes (see \S \ref{sect on impri} for details).
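For concreteness (this expansion is our own illustration of the theorem), the first few coefficients of the series above are
\begin{equation*}
\sum_{m\geqslant 1}\sum_{n | m}n^2q^m = q + 5q^2 + 10q^3 + 21q^4 + 26q^5 + \cdots,
\end{equation*}
so that, for instance, $P^t_{0,2[f]}(\gamma)=120\int_{S \times {\mathsf{p}}} \gamma$, where the coefficient $5=1^2+2^2$ collects the divisor contributions appearing in the speculated multiple cover rule \eqref{general pt/gv on hk4}.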
\subsection{A conjectural virtual pushforward formula}
Finally we remark that for a general holomorphic symplectic 4-fold $X$ and an irreducible curve class $\beta\in H_2(X,\mathbb{Z})$,
we have a forgetful map as in \eqref{intro fort map}:
\begin{equation*}P^t_{n}(X,\beta)\to M_n(X,\beta), \end{equation*}
where $M_n(X,\beta)$ is the coarse moduli space of one dimensional stable sheaves $F$ on $X$ with $[F]=\beta$, $\chi(F)=n$.
In Appendix \S \ref{sect on app}, we conjecture a virtual pushforward formula for this map (which we verify for the product of $K3$ surfaces, see Proposition \ref{prop on prod of k3 app}). Together with Conjecture \ref{intro conj on DT4/GV} (4), this formula implies a conjectural relation between genus 2 Gopakumar-Vafa invariants and certain descendent invariants on
$M_1(X,\beta)$ (Proposition \ref{prop on appe}), which appears as \cite[Conj.~2.2 (iii)]{COT1}.

\subsection{Notation and convention}
In this paper, all varieties and schemes are defined over $\mathbb{C}$.
For a morphism $\pi \colon X \to Y$ of schemes,
and for $\mathcal{F}, \mathcal{G} \in \mathrm{D^{b}(Coh(\textit{X\,}))}$, we denote by
$\mathbf{R} \mathcal{H} om_{\pi}(\mathcal{F}, \mathcal{G})$
the functor $\mathbf{R} \pi_{\ast} \mathbf{R} \mathcal{H} om_X(\mathcal{F}, \mathcal{G})$.

A class $\beta\in H_2(X,\mathbb{Z})$ is called \textit{effective} if there exists a non-empty curve $C \subset X$ with class $[C] = \beta$. An effective class $\beta$ is called \textit{irreducible} if it is not the sum of two effective classes, and it is called \textit{primitive} if it cannot be written as $m\beta'$ with $m\geqslant 2$ and $\beta'$ effective.

A holomorphic symplectic variety is a smooth projective variety
with a non-degenerate holomorphic two-form $\sigma\in H^0(X,\Omega^2_X)$.
A holomorphic symplectic variety is irreducible \textit{hyperk\"ahler}
if it is simply connected and $H^0(X, \Omega_X^2)$ is generated by a symplectic form.
A $K3$ surface is an (irreducible) hyperk\"ahler variety of dimension $2$.

\subsection*{Acknowledgement}
We thank Luca Battistella, Chen Jiang, Young-Hoon Kiem, Sergej Monavari, Rahul Pandharipande and Hyeonjun Park for helpful discussions.

Y. C. is partially supported by RIKEN Interdisciplinary Theoretical and Mathematical Sciences
Program (iTHEMS), World Premier International Research Center Initiative (WPI), MEXT, Japan,
JSPS KAKENHI Grant Number JP19K23397 and Royal Society Newton International Fellowships Alumni 2020 and 2021.
G.O. is partially supported by Deutsche Forschungsgemeinschaft (DFG) - OB 512/1-1.
Y. T. is partially supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and
Grant-in Aid for Scientific Research grant (No. 19H01779) from MEXT, Japan.

\section{Definitions and conjectures}

\subsection{Gopakumar-Vafa invariants}\label{sect on gv}

Let $X$ be a holomorphic symplectic 4-fold and $\overline{M}_{g, l}(X, \beta)$
be the moduli stack of genus $g$, $l$-pointed stable maps
to $X$ with non-zero curve class $\beta$.
Its virtual class \cite{BF, LT} vanishes due to a trivial factor in the obstruction sheaf.
By Kiem-Li's theory of cosection localization \cite{KiL}, one can define a (reduced) virtual class\,\footnote{The virtual class mentioned in this paper
is always assumed to be the reduced one.}
$$[\overline{M}_{g, l}(X, \beta)]^{\mathrm{vir}}\in A_{2-g+l}(\overline{M}_{g, l}(X, \beta)). $$
For integral classes
\begin{align}\label{gamma}
\gamma_i \in H^{m_i}(X, \mathbb{Z}), \
1\leqslant i\leqslant l,
\end{align}
the (primary) Gromov-Witten invariant is defined by
\begin{align}\label{GWinv}
\mathrm{GW}_{g, \beta}(\gamma_1, \ldots, \gamma_l)
=\int_{[\overline{M}_{g, l}(X, \beta)]^{\rm{vir}}}
\prod_{i=1}^l \mathrm{ev}_i^{\ast}(\gamma_i)\in \mathbb{Q},
\end{align}
where $\mathrm{ev}_i \colon \overline{M}_{g,l}(X, \beta)\to X$
is the $i$-th evaluation map.

When $g=0$, the
virtual dimension of $\overline{M}_{0, l}(X, \beta)$
is $l+2$, and (\ref{GWinv})
is zero unless
\begin{align}\label{sum1}
\sum_{i=1}^{l}(m_i-2)=4.
\end{align}
Similar to the case of Calabi-Yau 4-folds and 5-folds \cite{KP, PZ}, we make the following definition:
\begin{defi}\emph{(\cite[Def.~1.5]{COT1})}\label{def of g=0 GV inv}
For any $\gamma_1, \ldots, \gamma_l \in H^{\ast}(X,{\mathbb{Z}})$,
we define the genus $0$ Gopakumar-Vafa invariant $n_{0, \beta}(\gamma_1, \ldots, \gamma_l) \in {\mathbb{Q}}$ recursively by the multiple cover formula:
$$\mathrm{GW}_{0, \beta}(\gamma_1, \ldots, \gamma_l)=\sum_{\begin{subarray}{c}k\geqslant 1, k|\beta \end{subarray}}k^{l-3}\, n_{0, \beta/k}(\gamma_1, \ldots, \gamma_l). $$
\end{defi}
When $g=1$, the virtual dimension of
$\overline{M}_{1, l}(X, \beta)$ is $l+1$, and (\ref{GWinv})
is zero unless
\begin{align}\label{sum2}
\sum_{i=1}^{l}(m_i-2)=2.
\end{align}
In this paper, we concentrate on the case when $l=1$ and $m_1=4$.
Because curves in imprimitive curve classes are very difficult to control,
we hereby restrict to the case of a primitive curve class.
\begin{defi}\emph{(\cite[Def.~1.6]{COT1})}\label{def of g=1 GV inv}
Assume that $\beta \in H_2(X,{\mathbb{Z}})$ is primitive. For any $\gamma\in H^4(X, \mathbb{Z})$, we define the genus 1 Gopakumar-Vafa invariant $n_{1, \beta}(\gamma)\in \mathbb{Q}$ by
$$\mathrm{GW}_{1, \beta}(\gamma)=n_{1,\beta}(\gamma) - \frac{1}{24} \mathrm{GW}_{0,\beta}(\gamma,c_2(X)), $$
where $c_2(X)$ is the second Chern class of $T_X$.
\end{defi}
When $g=2$, the virtual dimension of
$\overline{M}_{2, 0}(X, \beta)$ is zero, so we can consider (\ref{GWinv}) without insertions:
\begin{align*}
\mathrm{GW}_{2, \beta}:=\int_{[\overline{M}_{2, 0}(X, \beta)]^{\rm{vir}}}1\in \mathbb{Q}.
\end{align*}
\begin{defi}\emph{(\cite[Def.~1.7]{COT1})}\label{def of g=2 GV inv}
Assume that $\beta \in H_2(X,{\mathbb{Z}})$ is primitive. We define the genus $2$ Gopakumar-Vafa invariant $n_{2,\beta}\in \mathbb{Q}$ by
\[\mathrm{GW}_{2, \beta}=n_{2,\beta}
- \frac{1}{24} n_{1,\beta}(c_2(X))
+ \frac{1}{2 \cdot 24^2} \mathrm{GW}_{0, \beta}(c_2(X),c_2(X))
+ \frac{1}{24} N_{\mathrm{nodal},\beta}.
\\]\nHere $n_{1,\\beta}(-)$ is given in Definition \\ref{def of g=1 GV inv} and $N_{\\mathrm{nodal},\\beta}\\in \\mathbb{Q}$ is the virtual count of rational nodal curves \\cite{NO} \nas defined by \n\\begin{equation} \\label{Nnodal}\nN_{\\mathrm{nodal},\\beta}:=\n\\frac{1}{2}\\left[\n\\int_{[\\overline{M}_{0,2}(X,\\beta)]^{\\mathrm{vir}}} (\\mathop{\\rm ev}\\nolimits_1 \\times \\mathop{\\rm ev}\\nolimits_2)^{\\ast}(\\Delta_X) - \\int_{[ \\overline{M}_{0,1}(X,\\beta) ]^{\\mathrm{vir}}} \\frac{\\mathop{\\rm ev}\\nolimits_1^{\\ast}(c(X))}{1-\\psi_1}\n\\right], \n\\end{equation}\nwhere \n\\begin{itemize}\n\\item $\\Delta_X \\in H^8(X \\times X)$ is the class of the diagonal, and\n\\item $c(X) = 1 + c_2(X) + c_4(X)$ is the total Chern class of $T_X$.\n\\end{itemize}\n \\end{defi}\n\n\n\n\n\\subsection{$Z_t$-stable pair invariants}\n\nLet $\\omega$ be an ample divisor on $X$ and $t\\in\\mathbb{R}$, we recall the following notion of $Z_t$-stable pairs.\n\\begin{defi}\\label{def Zt sta}\\emph{(\\cite[Lem~1.7]{CT1})}\nLet $F$ be a one dimensional coherent sheaf and $s: \\mathcal{O}_X\\to F$ be a section. For an ample divisor $\\omega$, we denote the slope function\nby $\\mu(F)=\\chi(F)\/(\\omega \\cdot [F])$.\n\nWe say $(F,s)$ is a $Z_t$-(semi)stable pair $($$t\\in\\mathbb{R}$$)$ if \n\\begin{enumerate}\n\\renewcommand{\\labelenumi}{(\\roman{enumi})}\n\\item for any subsheaf $0\\neq F' \\subseteq F$, we have \n$\\mu(F')<(\\leqslant)t$,\n\\item for any\nsubsheaf $ F' \\subsetneq F$ \nsuch that $s$ factors through $F'$, \nwe have \n$\\mu(F\/F')>(\\geqslant)t$. \n\\end{enumerate}\n\\end{defi}\nThere are two distinguished stability conditions appearing as \nspecial cases of $Z_t$-stability. \n\\begin{defi}\\label{defi:PTJSpair}\\emph{(\\cite{PT}, \\cite[Def.~1.10]{CT1})} \n\n(i) A pair $(F,s)$ is a PT stable pair if\n$F$ is a pure one dimensional sheaf and $s$ is surjective in dimension one. \n\n(ii) A pair $(F,s)$ is a JS stable pair if $s$ is a non-zero morphism, $F$ is $\\mu$-semistable and \nfor any subsheaf $0\\neq F' \\subsetneq F$ such that $s$ factors through \n$F'$ we have $\\mu(F')<\\mu(F)$. \n\\end{defi}\n\\begin{prop}\\label{prop:chambers}\\emph{(\\cite[Prop.~1.11]{CT1})} \nFor a pair $(F,s)$ with $[F]=\\beta$ and $\\chi(F)=n$, its\n\n(i) $Z_t$-stability with $t\\to \\infty$ is exactly PT stability, \n\n(ii) $Z_t$-stability with $t=\\frac{n}{\\omega\\cdot \\beta}+0^+$ is exactly JS stability. \n\\end{prop}\nFor $\\beta \\in H_2(X, \\mathbb{Z})$ and $n\\in \\mathbb{Z}$, we denote by\n$$P^t_n(X, \\beta)\\subseteq \\mathcal{P}^t_n(X, \\beta) $$\nthe moduli stack of $Z_t$-stable (semistable) pairs $(F,s)$ with $[F]=\\beta$ and $\\chi(F)=n$.\n\nBy Proposition \\ref{prop:chambers}, there are two disinguished moduli spaces, \nPT moduli spaces and JS moduli spaces, \nby specializing $t\\to \\infty$ and $t=\\frac{n}{\\omega\\cdot \\beta}+0^+$ respectively:\n\\begin{align*}\nP_n(X, \\beta) \\cneq P_n^{t\\to \\infty}(X, \\beta), \\quad\nP_n^{\\mathrm{JS}}(X, \\beta) \\cneq \nP_n^{t=\\frac{n}{\\omega\\cdot \\beta}+0^+}(X, \\beta). 
\end{align*}
By a GIT construction,
$P^t_n(X, \beta)$ is a quasi-projective scheme, and $\mathcal{P}^t_n(X, \beta)$ admits a good moduli space
\begin{align*}
\mathcal{P}^t_n(X, \beta) \to \overline{P}_n^t(X, \beta),
\end{align*}
where $\overline{P}_n^t(X, \beta)$ is a projective
scheme which parametrizes $Z_t$-polystable objects.
The following result shows that moduli stacks of $Z_t$-stable pairs are indeed open substacks of the moduli stack of objects in the derived category of coherent sheaves.
\begin{thm}\label{existence of proj moduli space}\emph{(\cite[Thm.~0.1]{CT1})}
$P^t_n(X, \beta)$ admits an open immersion
$$P^t_n(X, \beta)\to \mathcal{M}_0, \quad (F,s)\mapsto (\mathcal{O}_X\stackrel{s}{\to} F) $$
to the moduli stack $\mathcal{M}_0 $ of $E\in D^b\mathop{\rm Coh}\nolimits (X)$ with $\mathop{\rm Ext}\nolimits^{<0}(E,E)=0$ and $\det(E)\cong \mathcal{O}_X$.
\end{thm}
Therefore for a general choice of $t$ (i.e.~outside a finite subset of rational numbers in $\mathbb{R}$), $P^t_n(X, \beta)$ is a projective scheme which can
be given a $(-2)$-shifted symplectic derived scheme structure \cite{PTVV} and has a virtual class \cite{BJ, OT} (see also \cite{CL1}).

Parallel to GW theory, the virtual class of $P_n^t(X,\beta)$ vanishes \cite{KiP, Sav}.
One can define
a reduced virtual class due to Kiem-Park \cite[Def.~8.7, Lem.~9.4]{KiP}:
\begin{align}\label{red vir class}[P_n^t(X,\beta)]^{\mathrm{vir}}\in A_{n+1}(P_n^t(X,\beta),\mathbb{Q}), \end{align}
depending on the choice of orientation \cite{CGJ, CL2}.
To define its counting invariants, let
\begin{align}\label{equ on pri ins}\tau: H^{m}(X,\mathbb{Z})\to H^{m-2}(P_n^t(X,\beta),\mathbb{Z}), \end{align}
\begin{align*}\tau(\gamma):=\pi_{P\ast}\left(\pi_X^{\ast}\gamma \cup\mathop{\rm ch}\nolimits_{3}(\mathbb{F})\right),
\end{align*}
where $\mathbb{I}=(\mathcal{O}\to \mathbb{F})$ is the universal $Z_t$-stable pair and $\pi_P, \pi_X$ are projections from $P_n^t(X,\beta)\times X$ onto its factors.
\begin{defi}\label{def DT4 inv}
Let $t\in \mathbb{R}$ be generic and $\gamma_i \in H^{m_i}(X, \mathbb{Z})$ $(1\leqslant i\leqslant l)$. The $Z_t$-stable pair invariants are
\begin{align*}
P_{n,\beta}^t(\gamma_1,\ldots,\gamma_l):=\int_{[P_n^t(X,\beta)]^{\rm{vir}}}\prod_{i=1}^l\tau(\gamma_i)\in\mathbb{Q}.
\end{align*}
When $n=-1$, we write
$$P_{-1,\beta}^t:=\int_{[P_{-1}^t(X,\beta)]^{\rm{vir}}}1. $$
In PT and JS stabilities, we also write
$$P_{n,\beta}(\gamma_1,\ldots,\gamma_l):=P_{n,\beta}^{t\to \infty}(\gamma_1,\ldots,\gamma_l), \,\,\,
P^{\mathrm{JS}}_{n,\beta}(\gamma_1,\ldots,\gamma_l):=P_{n,\beta}^{t=\frac{n}{\omega\cdot \beta}+0^+
}(\gamma_1,\ldots,\gamma_l). $$
\end{defi}
\begin{rmk}
By Definition \ref{def Zt sta} and a dimension counting, $Z_t$-stable pair invariants are non-zero only if
both of the following conditions hold:
\begin{align*}
t>\frac{n}{\omega\cdot \beta}, \quad \sum_{i=1}^{l}(m_i-2)=2n+2.
\end{align*}
\end{rmk}

In \cite{CMT2, CT1}, similar invariants are used to give sheaf theoretic interpretations of Gopakumar-Vafa type invariants for ordinary Calabi-Yau 4-folds \cite{KP}.
Below, we give a parallel proposal for holomorphic symplectic 4-folds using Definition \ref{def DT4 inv}.

\subsection{Conjecture}
We state the main conjecture of this paper.
\begin{conj}\label{conj on DT4/GV}
Let $X$ be a holomorphic symplectic 4-fold with an ample divisor $\omega$.
Fix $n\in\mathbb{Z}$ and $\beta\in H_2(X,\mathbb{Z})$ and let $t>\frac{n}{\omega\cdot \beta}$ be generic.
For a certain choice of orientation, we have
\begin{enumerate}
\item If $n\geqslant 2$, then
\begin{align*}
P_{n,\beta}^t(\gamma_1,\ldots,\gamma_l)=0.
\end{align*}
\item If $n=1$, then
\begin{align*}
P_{1,\beta}^t(\gamma_1,\ldots,\gamma_l)=n_{0, \beta}(\gamma_1, \ldots, \gamma_l) \in \mathbb{Z}. \end{align*}
\item If $n=0$ and $\beta$ is primitive, then
\begin{align*}
P_{0,\beta}^t(\gamma)=n_{1, \beta}(\gamma) \in \mathbb{Z}.
\end{align*}
\item If $n=-1$ and $\beta$ is primitive, then
\begin{align*}
P_{-1,\beta}^t=n_{2,\beta} \in \mathbb{Z}.
\end{align*}
\end{enumerate}
\end{conj}
\begin{rmk}
By the global Torelli theorem \cite{Ver, Huy}, primitive curve classes on irreducible hyperk\"ahler varieties can be deformed to irreducible curve classes. Therefore $Z_t$-stable pair invariants
are independent of the choice of $t>\frac{n}{\omega\cdot \beta}$ in such cases by \cite[Prop.~1.12]{CT1}.
\end{rmk}
\begin{rmk}
Our conjecture implies that there is no nontrivial wall-crossing for $Z_t$-stable pair invariants when $t>\frac{n}{\omega\cdot \beta}$, contrary to the
ordinary $\mathrm{CY_4}$ case \cite{CT1, CT3, CT4}.
\end{rmk}
\begin{rmk}
Similarly to \cite[Conj.~0.3]{CK2}, we may use counting invariants on Hilbert schemes $I_n(X,\beta)$ of curves to give
a sheaf theoretic interpretation of Gopakumar-Vafa invariants, in which case zero dimensional subschemes \cite{CK1} (conjecturally) will not contribute, i.e.\ ``$\mathop{\rm DT}\nolimits=\mathop{\rm PT}\nolimits$''.
It is curious whether one can do a $K$-theoretic refinement as in \cite{CKM1}.
\end{rmk}

\subsection{Heuristic argument}\label{sect on heur}
In this section, we verify Conjecture \ref{conj on DT4/GV} using a heuristic argument in an ideal geometry (ref.~\cite[\S 1.4, \S 1.5]{COT1}).
To be specific, as the virtual dimension of $\overline{M}_{g,0}(X,\beta)$ is $2-g$, we assume that:
\begin{quote}
Any genus $g$ curve moves in a smooth compact $(2-g)$-dimensional family.
\end{quote}
In particular, there are no curves of genus $g \geqslant 3$.
Unfortunately, complicated phenomena still arise even in the ideal case: for example, one can have two (resp.~one) dimensional
families of reducible rational (resp.~elliptic) curves, and any member of a rational curve family is expected to intersect nontrivially with
some member in the same family (see \cite[\S 1.4]{COT1} for details).
However, things will be simplified if we make the following additional assumptions:
\begin{itemize}
\item $X$ is irreducible hyperk\"ahler,
\item the effective curve class $\beta \in H_2(X,{\mathbb{Z}})$ is primitive.
\end{itemize}
By the global Torelli theorem for (irreducible) hyperk\"ahler varieties \cite{Ver, Huy},
the pair $(X,\beta)$ is deformation equivalent (through a deformation which keeps $\beta$ of Hodge type)
to a pair $(X', \beta')$, where $\beta' \in H_2(X',{\mathbb{Z}})$ is irreducible, so we may without loss of generality assume:
\begin{itemize}
\item the effective curve class $\beta \in H_2(X,{\mathbb{Z}})$ is irreducible.
\end{itemize}
Under these assumptions, our ideal geometry of curves simplifies to the following form:
\begin{enumerate}
\item
The rational curves in $X$ of class $\beta$
move in a proper 2-dimensional smooth family of embedded irreducible rational curves. Except for a finite number of rational nodal curves, the rational curves are smooth, with normal bundle ${\mathcal O}_{{\mathbb{P}}^1} \oplus {\mathcal O}_{{\mathbb{P}}^1} \oplus \mathcal{O}_{\mathbb{P}^{1}}(-2)$.
\item
The arithmetic genus $1$ curves in $X$ of class $\beta$ move in a proper 1-dimensional smooth family of embedded irreducible genus 1 curves. Except for a finite number of rational nodal curves, the genus one curves are smooth elliptic curves with normal bundle $L\oplus L^{-1}\oplus \mathcal{O}$, where $L$ is a generic degree zero line bundle.
\item
All genus two curves are smooth and rigid.
\item
There are no curves of genus $g\geqslant 3$.
\end{enumerate}
We need to compute $Z_t$-stable pair invariants in this ideal setting.
The key heuristic we use is that only $Z_t$-stable pairs with \textit{connected support} will `contribute' to our invariants.

The observation is that for a $Z_t$-stable pair $I=(\mathcal{O}_X\to F)$ which is supported on a disconnected curve $C=C_1\sqcup C_2$, we may write
$$I= I_1 \oplus I_2, \quad I_1=(\mathcal{O}_X\to F_1), \,\, I_2=(\mathcal{O}_X\to F_2), $$
where $I_i$ is supported on $C_i$ ($i=1,2$).
Then the obstruction space
satisfies
$$\mathop{\rm Ext}\nolimits^2(I,I)_0=\mathop{\rm Ext}\nolimits^2(I_1,I_1)_0\oplus \mathop{\rm Ext}\nolimits^2(I_2,I_2)_0. $$
Therefore the surjective isotropic cosections (see~\cite[Lem.~9.4]{KiP}) of the two obstruction spaces on the RHS give rise to a two dimensional (mutually orthogonal) space of isotropic cosections of the LHS. Heuristically speaking, such $Z_t$-stable pairs will not `contribute' to the reduced virtual class, as the reduced obstruction space still has a surjective isotropic cosection.

By Definition \ref{def DT4 inv} and the above discussion, the $Z_t$-stable pair invariants
\begin{align*}
P_{n,\beta}^t(\gamma_1,\ldots,\gamma_l)=\int_{[P_n^t(X,\beta)]^{\rm{vir}}}\prod_{i=1}^l\tau(\gamma_i)
\end{align*}
count $Z_t$-stable pairs whose support is connected and incident to cycles dual to $\gamma_1,\ldots, \gamma_l$.
Say such an incident $Z_t$-stable pair is supported on a $(2-g)$-dimensional family
$$p:\mathcal{C}^g_\beta\to S^g_\beta $$
of genus $g$ curves ($g=0,1,2$), where $\mathcal{C}^g_\beta$ is the total space of this family.
Each cycle $\gamma_i$ will cut down the real dimension of $S^g_\beta$ by $\deg(\gamma_i)-2$. As we have
$$\sum_{i=1}^{l}(\deg(\gamma_i)-2)=2n+2, $$
the insertions in total cut down the real dimension of $S^g_\beta$ by $2n+2$.
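In other words (this summary is our own rephrasing of the dimension count), since $\dim_{\mathbb{R}} S^g_\beta = 2(2-g)$ in the ideal geometry, incident pairs can only exist when
\begin{equation*}
2n+2 \leqslant 2(2-g), \quad \text{i.e.} \quad g \leqslant 1-n,
\end{equation*}
which already singles out $g=0$ for $n=1$, $g\leqslant 1$ for $n=0$ and $g\leqslant 2$ for $n=-1$, matching the case analysis below.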
${}$ \\
\textbf{The case $n\geqslant 1$}. When $n\geqslant 2$, the dimension cut down by the insertions is bigger than the largest possible dimension of $S^g_\beta$, so there cannot be such incident stable pairs and
$$P_{n\geqslant 2,\beta}^t(\gamma_1,\ldots,\gamma_l)=0. $$
This confirms Conjecture \ref{conj on DT4/GV} (1).

When $n=1$, the insertions cut down the real dimension of $S^g_\beta$ by $4$, so any incident $Z_t$-stable pair $I=(\mathcal{O}_X\to F)$ can only be supported on the genus 0 family.
As in \cite[\S4.1]{CT1}, by Harder-Narasimhan and Jordan-H\"older filtrations, we know
$$F\cong \mathcal{O}_C, $$
for some rational curve $C$ in $S^0_{\beta}$. Therefore incident $Z_t$-stable pairs (with $\chi=1$) are in one-to-one correspondence with
intersection points of $\mathcal{C}^0_{\beta}$ with the cycles dual to $\gamma_1,\ldots,\gamma_l$ and
\begin{align*}
P_{1,\beta}^t(\gamma_1,\ldots,\gamma_l)=\int_{S^0_\beta}\prod_{i=1}^lp_*(f^*\gamma_i), \end{align*}
where $f: \mathcal{C}^0_{\beta}\to X$ is the evaluation map.
Therefore Conjecture \ref{conj on DT4/GV} (2) is confirmed in this ideal case, as both sides are (virtually) enumerating rational curves of class
$\beta$ incident to cycles dual to $\gamma_1,\ldots,\gamma_l$.

${}$ \\
\textbf{The case $n=0$}. Since $Z_t$-stable pairs $I=(\mathcal{O}_X\to F)$ supported on genus $0$ curves satisfy $\chi(F)>0$ and
a 4-cycle $\gamma\in H^4(X)$ misses genus 2 curves in general position, when $[F]=\beta$ is irreducible, the pair
must be scheme theoretically supported on an elliptic curve $C$ and
$$I=(\mathcal{O}_X\twoheadrightarrow \mathcal{O}_C\stackrel{s}{\to} L), $$
where $L$ is a line bundle on $C$ with $\chi(C,L)=0$. By $Z_t$-stability, $s$ is non-trivial, so $s$ must
be an isomorphism by the stability of line bundles. Therefore incident $Z_t$-stable pairs (with $\chi=0$) are in one-to-one correspondence with
intersection points of the 4-cycle $\gamma$ with the genus 1 curve family $\mathcal{C}^1_\beta$ of class $\beta$ and
\begin{align*}
P_{0,\beta}^t(\gamma)=\int_{\mathcal{C}^1_\beta}f^*\gamma, \end{align*}
where $f: \mathcal{C}^1_{\beta}\to X$ is the evaluation map.
Therefore Conjecture \ref{conj on DT4/GV} (3) is confirmed in this ideal setting, as both sides are (virtually) enumerating elliptic curves of class
$\beta$ incident to $\gamma$.

${}$ \\
\textbf{The case $n=-1$}. Any $Z_t$-stable pair $I=(\mathcal{O}_X\to F)$ with irreducible curve class $[F]=\beta$
is scheme theoretically supported on a smooth rigid genus 2 curve $C$:
$$I=(\mathcal{O}_X\twoheadrightarrow \mathcal{O}_C\stackrel{s}{\to} L), $$
where $L$ is a line bundle on $C$ with $\chi(C,L)=-1$. As above, by $Z_t$-stability, $s$ must
be an isomorphism. Hence $P_{-1}^t(X,\beta)$ is identified with
the set of all rigid genus 2 curves in class $\beta$ in the ideal geometry, whose count gives exactly the genus 2 Gopakumar-Vafa invariant $n_{2,\beta}$.
Therefore Conjecture \ref{conj on DT4/GV} (4) is confirmed in the ideal setting.

\subsection{Speculations for general curve classes}\label{sect on impri}
For a smooth projective Calabi-Yau 4-fold $X$ and $\gamma\in H^4(X,\mathbb{Z})$,
we have genus $0$, $1$ Gopakumar-Vafa type invariants $n_{0,\beta}(\gamma), n_{1,\beta}\in\mathbb{Q}$ defined
by Klemm and Pandharipande
from Gromov-Witten theory \cite{KP} and stable pair invariants $P_{n,\beta}(\gamma)\in \mathbb{Z}$ \cite{CMT2}.
They are related by the following conjectural formula \cite[\S 1.7]{CMT2}:
\begin{align}\label{equ on pt/gv on cy4}
\sum_{n,\beta}\frac{P_{n,\beta}(\gamma)}{n!}y^n q^{\beta}
=\prod_{\beta>0}\Big(\exp(yq^{\beta})^{n_{0,\beta}(\gamma)}\cdot
M(q^{\beta})^{n_{1,\beta}}\Big),
\end{align}
where $M(q)=\prod_{k\geqslant 1}(1-q^{k})^{-k}$ is the MacMahon function.

By taking the logarithmic derivative with respect to $y$, we obtain
\begin{align*}
y\frac{d}{dy}\log\left(\sum_{n,\beta}\frac{P_{n,\beta}(\gamma)}{n!}y^n q^{\beta}\right)=y\frac{d}{dy}\sum_{\beta>0}\Big(n_{0,\beta}(\gamma)\,yq^\beta+n_{1,\beta}\log M(q^{\beta})\Big) =\sum_{\beta>0}n_{0,\beta}(\gamma)\,yq^\beta.
\end{align*}
If we view this as an equality for the corresponding reduced invariants on holomorphic symplectic 4-folds,
it surprisingly recovers Conjecture \ref{conj on DT4/GV} (1),~(2) (i.e.\ the genus zero part).

We do similar manipulations for genus one invariants. Note that the $y^0q^\beta$ part of \eqref{equ on pt/gv on cy4} is
$$\sum_{\beta}P_{0,\beta}q^\beta=\prod_{\beta>0}M(q^{\beta})^{n_{1,\beta}}. $$
This equality is written down by a computation in the ``$\mathrm{CY_4}$ ideal geometry'' (ref.~\cite[\S 2.5]{CMT2}), where rational curves contribute zero and each super-rigid elliptic curve
(on an ideal $\mathrm{CY_4}$)
in class $\beta$ contributes by $M(q^\beta)$ (ref.~\cite[Thm.~5.10]{CMT2}).
Taking the logarithmic derivative with respect to $q$ gives
\begin{align*}q\frac{d}{dq}\log\left(M(q) \right)=\sum_{d\geqslant 1}q^d\sum_{i\geqslant1,i|d}i^2. \end{align*}
We then wonder whether, in the setting of holomorphic symplectic 4-folds, each ideal elliptic curve family in class $\beta$ contributes to $P_{0,d\beta}(\gamma)$ by
$$\sum_{i\geqslant1,i|d}i^2. $$
Summing over all elliptic curve families, this would imply
\begin{align}\label{general pt/gv on hk4}P_{0,\beta}(\gamma)=\sum_{d\geqslant1,d|\beta}n_{1,\beta/d}(\gamma) \sum_{i\geqslant1,i|d}i^2. \end{align}
It is quite curious whether the above formula gives the correct PT/GV correspondence. For multiple fiber classes
of elliptic fibrations, our computations suggest the formula is correct (see Theorem \ref{thm1 on g=1 of multiple fiber}, \ref{thm2 on g=1 of multiple fiber}, Remark \ref{rmk on impr}). As for $P_{-1,\beta}$ and genus 2 Gopakumar-Vafa invariants, we have not found an analogous
formula for general curve classes.

\section{Product of $K3$ surfaces}
In this section, we consider the product of two $K3$ surfaces:
$$X=S\times T, \quad \mathrm{with} \,\, \beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z}). $$
As observed in \cite[\S 5]{COT1}, this contains all interesting curve classes on $X$, because if $\beta \in H_2(X, {\mathbb{Z}})$
is of non-trivial degree over both $S$ and $T$, one can construct two linearly independent cosections,
which imply that the reduced Gromov-Witten invariants of $X$ in this class vanish.

\subsection{Gopakumar-Vafa invariants}
Recall the Gopakumar-Vafa invariants specified in Definitions \ref{def of g=0 GV inv}, \ref{def of g=1 GV inv}, \ref{def of g=2 GV inv}.
They are computed in \cite[Prop.~5.1]{COT1} as follows:
write $\gamma,\gamma'\in H^{4}(X)$ as
\begin{align*}
 \gamma&=A_1\cdot 1\otimes {\mathsf{p}}+D_1\otimes D_2+A_2\cdot {\mathsf{p}}\otimes 1, \\
\gamma'&=A'_1\cdot 1\otimes {\mathsf{p}}+D'_1\otimes D'_2+A'_2\cdot {\mathsf{p}}\otimes 1,
\end{align*}
based on the K\"unneth decomposition:
$$H^{4}(X)\cong (H^0(S)\otimes H^4(T))\oplus (H^2(S)\otimes H^2(T))\oplus (H^4(S)\otimes H^0(T)). $$
Fix also a curve class
$$\alpha=\theta_1\otimes {\mathsf{p}}+{\mathsf{p}}\otimes \theta_2\in H^6(X) \cong (H^2(S)\otimes H^4(T))\oplus (H^4(S)\otimes H^2(T)).$$
\begin{prop}\emph{(\cite[Prop.~5.1]{COT1})}\label{prop on gw on prod}
For $\beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z})$, we have
\begin{align*}
n_{0,\beta}(\gamma, \gamma') &=(D_1\cdot\beta)\cdot (D_1'\cdot\beta)\cdot\int_T(D_2\cdot D_2')\cdot N_{0}\left(\frac{\beta^2}{2}\right), \\
n_{0,\beta}(\alpha)&=(\theta_1\cdot \beta)\,N_{0}\left(\frac{\beta^2}{2}\right).
\end{align*}
If $\beta$ is primitive, we have
\begin{align*}
n_{1, \beta}(\gamma)= 24 A_2\, N_1\left(\frac{\beta^2}{2}\right), \quad n_{2,\beta}= N_2\left( \frac{\beta^2}{2} \right),
\end{align*}
where
\begin{align}\sum_{l\in\mathbb{Z}}N_{0}(l)\, q^l&=\frac{1}{q} \prod_{n\geqslant 1}\frac{1}{(1-q^n)^{24}}, \label{equ on N0} \\
\sum_{l \in {\mathbb{Z}}} N_{1}(l)\,q^l &=\left(\frac{1}{q} \prod_{n\geqslant 1}\frac{1}{(1-q^n)^{24}}\right)\left(q \frac{d}{dq}G_2(q)\right), \label{equ on N1} \\
\sum_{l\in\mathbb{Z}}N_{2}(l)\, q^l&=\left(\frac{1}{q} \prod_{n\geqslant 1}\frac{1}{(1-q^n)^{24}}\right) \left( 24 q \frac{d}{dq} G_2 - 24 G_2 - 1 \right), \label{equ on N2}\end{align}
with the Eisenstein series
$$G_2(q) = -\frac{1}{24} + \sum_{n \geqslant 1} \Big(\sum_{d|n} d\Big)\, q^n. $$
\end{prop}



\subsection{Moduli spaces of $Z_t$-stable pairs}
For a point $t\in T$, let $i_t \colon S\to S\times \{t\}\hookrightarrow X$ be the inclusion.
Consider the pushforward map
\begin{align}\label{equ psf map}i_*:P^t_{n}(S,\beta)\times T\to P^t_{n}(X,\beta), \end{align}
\begin{align*}(\mathcal{O}_S\stackrel{s}{\to} F,\,t)\mapsto (\mathcal{O}_X\twoheadrightarrow i_{t*}\mathcal{O}_{S}\stackrel{i_{t*}s}{\to} i_{t*}F), \end{align*}
where $P^t_n(S,\beta)$ is the moduli space of $Z_t$-stable pairs $(F,s)$ on $S$ with $[F]=\beta$ and $\chi(F)=n$.

We restrict to the following setting.
\begin{set}\label{setting}
We consider the case when the following conditions are satisfied:
\begin{enumerate}
\item The map \eqref{equ psf map} is an isomorphism and $P^t_{n}(S,\beta)$ is smooth of dimension $\beta^2+n+1$.
\item There is a well-defined forgetful map
$$f: P^t_{n}(S,\beta)\to M_n(S,\beta), \quad (\mathcal{O}_S\to F)\mapsto F, $$
to the coarse moduli scheme $M_n(S,\beta)$ of one-dimensional stable sheaves $F$ on $S$ with $[F]=\beta$ and $\chi(F)=n$.
\end{enumerate}
\end{set}
\begin{prop}\label{prop on smoothness}
Setting \ref{setting} is satisfied when $\beta$ is irreducible.
\end{prop}
\begin{proof}
When $\beta$ is irreducible, $P^t_{n}(X,\beta)$ is independent of the choice of $t>\frac{n}{\omega\cdot \beta}$ \cite[Prop.~1.12]{CT1},
so we can set $t\to \infty$ and work with PT stability.
The isomorphism follows from a similar argument to \cite[Prop.~3.11]{CMT2}.
The key point is that for any such $Z_t$-stable pair $(F,s)$, the sheaf $F$ is stable and therefore scheme-theoretically supported on $S\times \{t\}$ for some
$t\in T$ (\cite[Lem.~2.2]{CMT1}).
The smoothness of $P^t_{n}(S,\beta)$ follows from \cite{KY}, \cite[Prop.~C.2]{PT2}.
\end{proof}

\subsection{Virtual classes}
We determine the virtual class of $P^t_{n}(X,\beta)$ in Setting \ref{setting}. First recall:
\begin{defi}$($\cite[Ex.~16.52,~pp.~410]{Sw}, \cite[Lem.~5]{EG}$)$
Let $E$ be an $\mathrm{SO}(2n,\mathbb{C})$-bundle with a non-degenerate symmetric bilinear form $Q$ on a connected scheme $M$.
Denote by $E_+$ its positive real form\,\footnote{This means a real half-dimensional subbundle on which $Q$ is real and positive definite. By the
homotopy equivalence $\mathrm{SO}(m,\mathbb{C})\sim \mathrm{SO}(m,\mathbb{R})$, it exists and is unique up to isomorphism.}.
The half Euler class of $(E,Q)$ is
$$e^{\frac{1}{2}}(E,Q):=\pm\,e(E_+)\in H^{2n}(M,\mathbb{Z}), $$
where the sign depends on the choice of orientation of $E_+$.
\end{defi}

\begin{defi}$($\cite{EG},~\cite[Def.~8.7]{KiP}$)$
Let $E$ be an $\mathrm{SO}(2n,\mathbb{C})$-bundle with a non-degenerate symmetric bilinear form $Q$ on a connected scheme $M$.
An isotropic cosection of $(E,Q)$ is a map
$$\phi: E\to \mathcal{O}_M, $$
such that the composition
$$\phi\circ \phi^{\vee}: \mathcal{O}_M\to E^{\vee}\stackrel{Q}{\cong} E \to \mathcal{O}_M$$
is zero. If $\phi$ is furthermore surjective, we define the (reduced) half Euler class
$$e_{\mathrm{red}}^{\frac{1}{2}}(E,Q):=e^{\frac{1}{2}}\left((\phi^{\vee}\mathcal{O}_M)^{\perp}/(\phi^{\vee}\mathcal{O}_M),\bar{Q}\right)\in H^{2n-2}(M,\mathbb{Z}) $$
as the half Euler class of the isotropic reduction.
Here $\bar{Q}$ denotes the induced non-degenerate symmetric bilinear form on $(\phi^{\vee}\mathcal{O}_M)^{\perp}/(\phi^{\vee}\mathcal{O}_M)$.
\end{defi}
Reduced half Euler classes are independent of the choice of surjective isotropic cosection:
\begin{lem}$($\cite[Lem.~5.5]{COT1}$)$\label{lem on indep of cosec}
Let $E$ be an $\mathrm{SO}(2n,\mathbb{C})$-bundle with a non-degenerate symmetric bilinear form $Q$ on a connected scheme $M$ and let
$$\phi: E\to \mathcal{O}_M $$
be a surjective isotropic cosection.
Then we can write the positive real form $E_+$ of $E$ as
$$E_+=\mathcal{E}_+\oplus \underline{\mathbb{R}}^2$$
such that
$$e_{\mathrm{red}}^{\frac{1}{2}}(E,Q)=\pm\,e(\mathcal{E}_+). $$
Moreover, it is independent of the choice of surjective cosection.

In particular, when $E=\mathcal{O}^{\oplus2} \oplus V$ such that $Q=\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} \oplus Q|_{V}$, we have
$$e_{\mathrm{red}}^{\frac{1}{2}}(E,Q)=\pm\,e^{\frac{1}{2}}(V,Q|_{V}).$$
\end{lem}
Recall that an $\mathrm{Sp}(2r,\mathbb{C})$-bundle (or symplectic vector bundle) is a complex vector bundle of rank $2r$
with a non-degenerate anti-symmetric bilinear form.
One class of quadratic vector bundles is given by the tensor product of two symplectic vector bundles $V_1, V_2$.
Their half Euler classes
can be computed using the Chern classes of $V_1,V_2$.
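To get a feeling for the shape of such formulas, here is a splitting-principle sketch; this is only an informal consistency check, not a substitute for the proof in \cite[Lem.~5.6]{COT1}. Suppose $V_1$ has Chern roots $\pm x_1,\ldots,\pm x_r$ and $V_2$ has Chern roots $\pm t$, so that $e(V_2)=-t^2$. The roots of $V_1\otimes V_2$ come in pairs $\pm(x_i+t)$, $\pm(x_i-t)$, and a half Euler class is the product of one root from each pair:
$$e^{\frac{1}{2}}(V_1\otimes V_2)=\pm\prod_{i=1}^r(x_i^2-t^2)=\pm\sum_{k=0}^{r}e_k(x_1^2,\ldots,x_r^2)\cdot e(V_2)^{r-k}. $$
Since $c(V_1)=\prod_{i=1}^r(1-x_i^2)$ gives $c_{2k}(V_1)=(-1)^ke_k(x_1^2,\ldots,x_r^2)$, the terms with $k=r,r-1$ combine to $\pm\big(e(V_1)-c_{2r-2}(V_1)\cdot e(V_2)\big)$, while all remaining terms involve $e(V_2)^2$; in the situation of interest below, $V_2$ is pulled back from the $K3$ surface $T$, so that $e(V_2)^2=0$.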
For our purpose, we restrict to the following case.
\begin{lem}$($\cite[Lem.~5.6]{COT1}$)$\label{lem on compu of half euler class}
Let $(V_1,\omega_1)$, $(V_2,\omega_2)$ be an $\mathrm{Sp}(2r,\mathbb{C})$-bundle $($resp.~an $\mathrm{Sp}(2,\mathbb{C})$-bundle$)$ on a connected scheme $M$.
Then
$$(V_1\otimes V_2,\omega_1\otimes \omega_2)$$
defines an $\mathrm{SO}(4r,\mathbb{C})$-bundle whose half Euler class satisfies
$$e^{\frac{1}{2}}(V_1\otimes V_2,\omega_1\otimes \omega_2)=\pm\,\big(e(V_1)-c_{2r-2}(V_1)\cdot e(V_2)\big). $$
\end{lem}
We now determine the (reduced) virtual class of $P^t_{n}(X,\beta)$.
\begin{thm}\label{thm on vir clas}
In Setting \ref{setting}, for a certain choice of orientation, we have
\begin{equation}\label{vir class StimesT}
[P^t_{n}(X,\beta)]^{\mathrm{vir}}=
\left([P^t_{n}(S,\beta)]\cap f^*e(T_{M_n(S,\beta)})\right)\times[T]-e(T)\left([P^t_{n}(S,\beta)]\cap f^*c_{\beta^2}(T_{M_n(S,\beta)})\right),
\end{equation}
where $f: P^t_{n}(S,\beta)\to M_{n}(S,\beta)$ is the map as in Setting \ref{setting}.
\end{thm}
\begin{proof}
The proof is similar to \cite[Prop.~4.7]{CMT2}.
Under the isomorphism \eqref{equ psf map}:
\begin{align*}P^t_{n}(S,\beta)\times T\cong P^t_{n}(X,\beta), \end{align*}
the universal stable pair $\mathbb{I}_X=(\mathcal{O}\to \mathbb{F}_X)$ of $P^t_{n}(X,\beta)$ satisfies
\begin{align}\label{equ of univ sheaf on prod}\mathbb{F}_X=\mathbb{F}_S\boxtimes \mathcal{O}_{\Delta_T}, \end{align}
where $\mathbb{I}_S=(\mathcal{O}\to \mathbb{F}_S)$ is the universal stable pair of $P^t_{n}(S,\beta)$ and $\Delta_T$ denotes the diagonal.

As in \cite[Eqn.~(29)]{CMT2}, we have a distinguished triangle
\begin{align}\label{equ on dist tri1}\mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{F}_X)
\to \mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0[1]\to \mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathbb{F}_X,\mathcal{O})[2], \end{align}
where $\pi_{P_X}: P^t_{n}(X,\beta)\times X\to P^t_{n}(X,\beta)$ is the projection.

From the stable pair $\mathbb{I}_X=(\mathcal{O}\to \mathbb{F}_X)$ and Eqn.~\eqref{equ of univ sheaf on prod}, we get a distinguished triangle
\begin{align}\label{equ on dist tri2}\mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathbb{F}_S\boxtimes \mathcal{O}_{\Delta_T},\mathbb{F}_S\boxtimes \mathcal{O}_{\Delta_T})
\to \mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathcal{O},\mathbb{F}_S\boxtimes \mathcal{O}_{\Delta_T})\to \mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{F}_X). \end{align}
By adjunction, we get an isomorphism
\begin{align}\label{equ on iso of rhom}\mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathbb{F}_S\boxtimes \mathcal{O}_{\Delta_T},\mathbb{F}_S\boxtimes \mathcal{O}_{\Delta_T})
\cong \bigoplus_{i\geqslant 0}\mathbf{R}\mathcal{H}om_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes \wedge^iT_T[-i],
\end{align}
where $\pi_{P_S}\colon P^t_{n}(S,\beta)\times S\to P^t_{n}(S,\beta)$ is the projection.
Combining \eqref{equ on dist tri2} and \eqref{equ on iso of rhom}, we obtain
\begin{align}\label{equ on iso on rhomIf}
\mathbf{R}\mathcal{H}om_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{F}_X)\cong \mathbf{R}\mathcal{H}om_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus
\mathbf{R}\mathcal{H}om_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes (T_T\oplus \mathcal{O}_T[-1]).
\end{align}
Combining \eqref{equ on dist tri1} and \eqref{equ on iso on rhomIf}, we obtain
$$\mathcal{E}xt^1_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0\cong \mathcal{E}xt^0_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus T_T, $$
and an exact sequence
\begin{align}\label{equ on exa seq}
0\to \mathcal{E}xt^1_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus \mathcal{E}xt^1_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes T_T\oplus
\mathcal{E}xt^0_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes\mathcal{O}_T \to
\mathcal{E}xt^2_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0 \to \cdots. \end{align}
We claim that the second arrow above is an isomorphism, which can be verified by a dimension count.
In fact, for $I=(\mathcal{O}_S\to F)\in P^t_{n}(S,\beta)$, the cohomology of the distinguished triangle
$$\mathop{\dR\mathrm{Hom}}\nolimits_S(F,F)\to \mathop{\dR\mathrm{Hom}}\nolimits_S(\mathcal{O}_S,F)\to \mathop{\dR\mathrm{Hom}}\nolimits_S(I,F)$$
implies that $\mathop{\rm Ext}\nolimits^i_S(I,F)=0$ for $i\geqslant 2$.
In Setting \ref{setting}, we know $\mathop{\rm ext}\nolimits^0_S(I,F)=\beta^2+n+1$, therefore
$$\mathop{\rm ext}\nolimits^1_S(I,F)=1. $$
As $F$ is stable, we have
$$\mathop{\rm ext}\nolimits^2_S(F,F)=\mathop{\rm ext}\nolimits^0_S(F,F)=1, \quad \mathop{\rm ext}\nolimits^1_S(F,F)=\beta^2+2. $$
So the rank of the second term of \eqref{equ on exa seq} is $2\beta^2+6$. One can easily check that the rank of the third term in \eqref{equ on exa seq}
is also $2\beta^2+6$, by the Riemann-Roch formula and the first condition of Setting \ref{setting}.
To sum up, we get an isomorphism:
$$\mathcal{E}xt^1_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus \mathcal{E}xt^1_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes T_T\oplus
\mathcal{E}xt^0_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes\mathcal{O}_T \cong
\mathcal{E}xt^2_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0. $$
As in \cite[Prop.~4.7]{CMT2}, one can show that this decomposition of $\mathcal{E}xt^2_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0$ is compatible with the Serre duality pairing. The theorem then follows from Lemmata \ref{lem on indep of cosec} and \ref{lem on compu of half euler class}.
\end{proof}


\subsection{Thom-Porteus formula}
As our insertion \eqref{equ on pri ins} depends only on the fundamental cycle of the universal sheaf, it is useful to know
the pushforward of the virtual class \eqref{vir class StimesT} under the forgetful map.
In this section, let $\beta \in H_2(S,{\mathbb{Z}})$ be an irreducible curve class; then $P^t_{n}(X,\beta)$ is independent of the choice of $t>\frac{n}{\omega\cdot \beta}$ \cite[Prop.~1.12]{CT1},
so we can set $t\to \infty$ and work with PT stability.
Consider the forgetful map
\[ f : P_n(S,\beta) \to M_n(S,\beta), \quad (\mathcal{O}_S\to F)\mapsto F.
\\]\nRecall that $P_n(S,\\beta)$ is smooth of dimension $\\beta^2 + n + 1$ and $M_n(S,\\beta)$ is smooth of dimension $\\beta^2 + 2$.\nThe image of $f$ in $M_n(S,\\beta)$ is the locus\n\\begin{equation} \\big\\{ F \\in M_n(S,\\beta)\\,|\\,h^0(F) \\geqslant 1 \\big\\}, \\label{x} \\end{equation}\nwhere surjectivity follows since $\\beta$ is irreducible and $F$ is pure, so any non-zero section $s \\in H^0(S,F)$ must have zero-dimensional cokernel.\nThe expected dimension of sections is $\\chi(F) = n$,\nso the image is everything if $n=1$, a divisor if $n=0$ and a codimension $2$ cycle if $n=-1$.\n\nLet $\\mathbb{F}_S$ be a (twisted) universal sheaf on $M_n(S,\\beta) \\times S$.\nIf $n=1$ (or more generally, there exists a $K$-theory class pairing with $1$ with a sheaf parametrized by $M_n(S,\\beta)$)\nthe twisted sheaf can be taken to be an actual sheaf.\nFor us here the difference will not matter, since we are only interested in the Chern character of the universal sheaf,\nwhich can also be easily defined in the twisted case. We refer to \\cite{Markman} for a discussion.\n\nLet $\\pi_{M}: M_n(S,\\beta) \\times S\\to M_n(S,\\beta)$ be the projection. \nWe resolve the complex\n$\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)$ by a $2$-term complex of vector bundles: \n$$\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)\\cong (E_0 \\xrightarrow{\\sigma} E_1). $$\nThen \\eqref{x} is the \\textit{degeneracy locus}\n$$D_1(\\sigma) = \\big\\{ x \\in M_n(S,\\beta)\\,|\\, \\dim_{\\mathbb{C}} \\ker(\\sigma(x)) \\geqslant 1 \\big\\}. $$\nBy the \\textit{Thom-Porteus formula} \\cite[\\S14.4]{Ful}\n(see \\cite[Prop.~1]{GT} for a modern treatment and observe that $P_n(S,\\beta)$ is precisely what is called $\\tilde{D}_1(\\sigma)$ there), we get the following:\n\\begin{prop}\\label{deg loci}\n\\begin{equation*}f_{\\ast} [P_n(S,\\beta)] =c_{1-n}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S))\\cap [M_n(S,\\beta)]. \\end{equation*}\n\\end{prop}\n\nWe can calculate the right hand side above by Grothendieck-Riemann-Roch formula\n\\[ \\mathop{\\rm ch}\\nolimits( - \\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S) ) = - \\pi_{M\\ast}( \\mathop{\\rm ch}\\nolimits(\\mathbb{F}_S)\\cdot\\pi_S^*\\mathop{\\rm td}\\nolimits(S )). \\]\nWe obtain the following:\n\\begin{equation} \\label{Pn expressions}\n\\begin{aligned}\nf_{\\ast} [P_1(S,\\beta) ] & = 1, \\\\\nf_{\\ast} [P_0(S,\\beta) ] & = -\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_3(\\mathbb{F}_S))-2\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\pi_S^*{\\mathsf{p}}), \\\\\nf_{\\ast} [P_{-1}(S,\\beta) ] & = \\frac{1}{2}\\left(c_{1}(-\\mathbf{R}\n\\pi_{M\\ast}(\\mathbb{F}_S))\\right)^2+\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_4(\\mathbb{F}_S))+2\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_2(\\mathbb{F}_S)\\pi_S^*{\\mathsf{p}}), \n\\end{aligned}\n\\end{equation}\nwhere we used Poincar\\'e duality on the right to identify homology and cohomology and \n${\\mathsf{p}}\\in H^4(S)$ denotes the point class. \nA small calculation shows that the right hand side is indeed \nindependent of the choice of universal family $\\mathbb{F}$ (i.e. the formulae stay invariant under replacing $\\mathbb{F}$ by $\\mathbb{F} \\otimes \\pi_{M}^*{\\mathcal L}$\nfor ${\\mathcal L}\\in \\mathop{\\rm Pic}\\nolimits(M_n(S,\\beta))$).\nThis will be useful later on.\n\n\\subsection{Genus 0 in irreducible classes}\nIn this section, we prove Conjecture \\ref{conj on DT4\/GV} (1), (2) for irreducible curve classes. 
We first recall a result of Fujiki \cite{Fuji} and its generalization in \cite[Cor.~23.17]{GHJ}.
\begin{thm}\label{fujiki result}$($\cite{Fuji}, \cite[Cor.~23.17]{GHJ}$)$
Let $M$ be a hyperk\"ahler variety of dimension $2n$. Assume $\alpha\in H^{4j}(M,\mathbb{C})$ is of type $(2j, 2j)$ on all small deformations of $M$. Then there exists a constant $C(\alpha)\in\mathbb{C}$ depending only on $\alpha$ $($called the Fujiki constant of $\alpha$$)$ such that
$$\int_{M}\alpha\cdot\beta^{2n-2j}=C({\alpha})\cdot q_{M}(\beta)^{n-j}, \quad \forall\,\, \beta\in H^2(M, \mathbb{C}), $$
where $q_M: H^2(M, \mathbb{C}) \to \mathbb{C}$ denotes the Beauville-Bogomolov-Fujiki form.
\end{thm}
\begin{thm}\label{thm on g=0 conj on prod}
Let $X=S\times T$ and let $\beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z})$ be an irreducible curve class. Then Conjecture \ref{conj on DT4/GV} (1), (2)
hold.
\end{thm}
\begin{proof}
By Proposition \ref{prop on smoothness}, we have a forgetful map
$$\bar{f}=(f,\textrm{id}_T): P_{n}(X,\beta)=P_{n}(S,\beta)\times T\to M_{n}(S,\beta)\times T. $$
As our insertion \eqref{equ on pri ins} only involves the fundamental cycle of the universal one-dimensional sheaf, it is the pullback under $\bar{f}$
of a cohomology class from $M_{n}(S,\beta)\times T$.

When $n>1$, we have
$$\dim_{\mathbb{C}} P_{n}(S,\beta)=\beta^2+n+1>\beta^2+2=\dim_{\mathbb{C}}M_{n}(S,\beta). $$
By Theorem \ref{thm on vir clas} and Proposition \ref{deg loci}, it is easy to see that
$$P_{n,\beta}(\gamma_1,\ldots,\gamma_l)=0, \quad n>1. $$
When $n=1$, we take insertions $\gamma,\gamma'\in H^4(X)$ as an example (the other cases follow from easier versions of the same argument).
Based on the K\"unneth decomposition:
$$H^{4}(X)\cong (H^0(S)\otimes H^4(T))\oplus (H^2(S)\otimes H^2(T))\oplus (H^4(S)\otimes H^0(T)), $$
we write
\begin{align*}
\gamma &=A_1\cdot 1\otimes {\mathsf{p}}+D_1\otimes D_2+A_2\cdot {\mathsf{p}}\otimes 1, \\
\gamma' &=A'_1\cdot 1\otimes {\mathsf{p}}+D'_1\otimes D'_2+A'_2\cdot {\mathsf{p}}\otimes 1.
\end{align*}
By Eqn.~\eqref{equ of univ sheaf on prod}, the insertion becomes (see also \cite[Proof~of~Thm.~5.8]{COT1}):
\begin{align}\label{equ on pri ins on prod}\tau(\gamma)=(D_1\cdot\beta)\otimes D_2+A_2f^*\pi_{M*}(\pi_S^*{\mathsf{p}}\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S))\otimes 1, \end{align}
where $\pi_S$, $\pi_{M}$ are the projections from $S\times M_n(S,\beta)$ to its factors. Hence
$$\tau(\gamma)\cdot\tau(\gamma')=(D_1\cdot\beta)\cdot (D_1'\cdot\beta)\otimes (D_2\cdot D_2')+A_2A_2'f^*\left(\pi_{M*}\left(\pi_S^*{\mathsf{p}}\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\right)\right)^2\otimes 1+\mathrm{others}, $$
where ``others'' lie in $H^2(P_{1}(S,\beta))\otimes H^2(T)$.
By Theorem \ref{thm on vir clas}, we get
\begin{align*}P_{1,\beta}(\gamma,\gamma')&=(D_1\cdot\beta)\, (D_1'\cdot\beta)\,\int_T(D_2\cdot D_2')\int_{P_{1}(S,\beta)}f^*e(T_{M_1(S,\beta)}) \\
& \quad -e(T)A_2A_2' \int_{P_{1}(S,\beta)}f^*\left(c_{\beta^2}(T_{M_1(S,\beta)})\cdot \pi_{M*}\left(\pi_S^*{\mathsf{p}}\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\right)^2\right)
\\
&=(D_1\cdot\beta)\, (D_1'\cdot\beta)\,\int_T(D_2\cdot D_2')\int_{M_{1}(S,\beta)}e(T_{M_1(S,\beta)}) \\
& \quad -e(T)A_2A_2' \int_{M_{1}(S,\beta)}c_{\beta^2}(T_{M_1(S,\beta)})\cdot \pi_{M*}\left(\pi_S^*{\mathsf{p}}\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\right)^2 \\
&=(D_1\cdot\beta)\, (D_1'\cdot\beta)\,\int_T(D_2\cdot D_2')\,e(M_1(S,\beta)),
\end{align*}
where the second equality follows from Proposition \ref{deg loci}
and the last equality is proved using the Fujiki formula (Theorem \ref{fujiki result})
and the evaluation
\[ q_M\left( \pi_{M*}\left(\pi_S^*{\mathsf{p}}\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\right) \right) = 0 \]
(which follows, for example, from \cite[Proof~of~Thm.~5.8]{COT1}).
Conjecture \ref{conj on DT4/GV} (2) then reduces to \cite[Thm.~5.8]{COT1}.
\end{proof}


\subsection{Transport of integrals to the Hilbert schemes}\label{sect on trans}
To compute the stable pair theory for $n \leqslant 0$, we will need to
handle more complicated descendent integrals on $M_n(S,\beta)$.
As in \cite[\S 4.4]{COT1}, which deals with the $n = 1$ case,
we use the general framework of monodromy operators of Markman \cite{Markman} (see also \cite{OUniversality})
to reduce to the Hilbert schemes.

Consider the Mukai lattice, which is the lattice $\Lambda = H^{\ast}(S,{\mathbb{Z}})$ endowed with the Mukai pairing
\[ \langle x , y \rangle := - \int_S x^{\vee} y, \]
where, if we decompose an element $x \in \Lambda$ according to degree as $(r,D,n)$, we write $x^{\vee} = (r,-D,n)$.
Given a sheaf or complex $E$ on $S$, the Mukai vector of $E$ is defined by
\[ v(E) = \sqrt{\mathop{\rm td}\nolimits_S} \cdot \mathop{\rm ch}\nolimits(E) \in \Lambda.
\\]\nLet $M(v)$ be a proper smooth moduli space of stable sheaves on $S$ with Mukai vector $v \\in \\Lambda$ (where stability is with respect to some fixed polarization).\nWe assume that there exists a universal family ${\\mathbb{F}}$ on $M(v) \\times S$.\nIf it does not exists, everything below can be made to work by working with the Chern character $\\mathop{\\rm ch}\\nolimits({\\mathbb{F}})$\nof a quasi-universal family, see \\cite{Markman} or \\cite{OUniversality}.\nLet $\\pi_M, \\pi_S$ be the projections to $M(v)$ and $S$.\nOne has the Mukai morphism \n$$\\theta_{{\\mathbb{F}}} : \\Lambda \\to H^2(M(v)), $$\n\\[ \\theta_{{\\mathbb{F}}}(x) = \\left[ \\pi_{M \\ast}( \\mathop{\\rm ch}\\nolimits({\\mathbb{F}}) \\cdot \\sqrt{\\mathop{\\rm td}\\nolimits_S} \\cdot x^{\\vee} ) \\right]_{\\deg = 2}, \\]\nwhere $[ - ]_{\\deg = k}$ stands for extracting the degree $k$ component\nand (as we will also do below) we have suppressed the pullback maps from the projection to $S$.\nDefine the universal class\n\\[ u_v = \\exp\\left( \\frac{ \\theta_{{\\mathbb{F}}}(v) }{\\langle v,v \\rangle} \\right) \\mathop{\\rm ch}\\nolimits({\\mathbb{F}}) \\sqrt{\\mathop{\\rm td}\\nolimits_S}, \\]\nwhich is independent of the choice of universal family ${\\mathbb{F}}$.\nFor $x \\in \\Lambda$, consider the normalized descendents:\n\\[ B(x) := \\pi_{M\\ast}( u_v \\cdot x^{\\vee} ), \\]\nand let $B_k(x) = [ B(x) ]_{\\deg=2k}$ its degree $2k$ component.\n\n\n\\begin{example}\nFor $v=(1,0,1-d)$, the moduli space becomes the punctual Hilbert scheme: $M(v) = S^{[d]}$.\nThen we have\n\\[ u_v = \\exp\\left( \\frac{-\\delta}{2d-2} \\right) \\mathop{\\rm ch}\\nolimits( {\\mathcal I}_{{\\mathcal Z}} ) \\sqrt{\\mathop{\\rm td}\\nolimits_S}, \\]\nwhere we let $\\delta = \\pi_{\\ast} \\mathop{\\rm ch}\\nolimits_3( {\\mathcal O}_{{\\mathcal Z}} )$ (so that $-2 \\delta$ is the class of the locus of non-reduced subschemes).\n\nWe define the standard descendents on the Hilbert scheme by\n\\[ {\\mathfrak{G}}_d(\\alpha) = \\pi_{\\ast}( \\pi_S^{\\ast}(\\alpha) \\mathop{\\rm ch}\\nolimits_d({\\mathcal O}_{{\\mathcal Z}}) ) \\in H^{\\ast}(S^{[d]}). \\]\nOne obtains that\n\\begin{align*}\nB_1({\\mathsf{p}}) & = - \\frac{\\delta}{2d-2}, \\\\\nB_2({\\mathsf{p}}) & = \\frac{1}{2} \\frac{\\delta^2}{(2d-2)^2} - {\\mathfrak{G}}_2({\\mathsf{p}}). \n\\end{align*}\nFor a divisor $D \\in H^2(S)$, one finds\n\\begin{align*}\nB_1(D) & = {\\mathfrak{G}}_2(D), \\\\\nB_2(D) & = {\\mathfrak{G}}_3(D) - \\frac{\\delta}{2d-2} {\\mathfrak{G}}_2(D).\n\\end{align*}\n\\end{example}\nUsing the descendents $B_k(x)$, one allows to move between any two moduli spaces of stable sheaves on $S$\njust by specifying a Mukai lattice isomorphism $g : \\Lambda \\to \\Lambda$.\nWe give the details in the case of our interest, \nsee \\cite{Markman, OUniversality} for the general case.\n\nAs before let $\\beta \\in \\mathop{\\rm Pic}\\nolimits(S)$ be an irreducible effective class of square $\\beta \\cdot \\beta = 2d-2$,\nand let $n \\in {\\mathbb{Z}}$. We want to connect the moduli spaces\n\\[ M_n(S,\\beta) \\,\\, \\rightsquigarrow \\,\\, S^{[d]}. 
\\]\nLet $\\beta = e + (d-1) f$ where $e, f \\in H^2(S,{\\mathbb{Z}})$ span a hyperbolic lattice: ${\\mathbb{Z}} e \\oplus {\\mathbb{Z}} f \\cong \\binom{0\\ 1}{1\\ 0}$.\nWe do not require $e,f$ to be effective here.\nDefine the isomorphism $g : \\Lambda \\to \\Lambda$ by\n\\begin{align*}\n1 \\mapsto (0,-e, n ), \\quad {\\mathsf{p}} \\mapsto (0,f,0), \\quad e \\mapsto (1, -nf, 0), \\quad f \\mapsto (0,0,-1), \\quad\ng|_{ \\{ 1,{\\mathsf{p}}, e, f \\}^{\\perp}} = \\textrm{id}.\n\\end{align*}\nOne sees that $g$ is an isometry of the Mukai lattice and that\n\\[ g_{\\ast} ( 0, \\beta, n) = (1,0, 1-d). \\]\nThen one has:\n\\begin{thm}$($Markman \\cite{Markman}, reformulation as in \\cite[Thm.~4]{OUniversality}$)$ \\label{thm:Markman} For any $k_i \\geqslant 0$ and $\\alpha_i \\in H^{\\ast}(S)$ and any polynomial $P$,\n\\[\n\\int_{ M_n(S,\\beta) } P( B_{k_i}(\\alpha_i) , c_j( T_{M_n(S,\\beta)} ) ) \n=\n\\int_{ S^{[d]} } P( B_{k_i}(g \\alpha_i) , c_j( T_{S^{[d]}} ) ). \n\\]\n\\end{thm}\n\n\n\n\n\\subsection{Genus 1 in irreducible classes}\nRecall the genus $1$ Gopakumar-Vafa invariants (Proposition \\ref{prop on gw on prod}).\nOn the stable pair side, we have the following:\n\\begin{thm} \\label{thm on g=1 conj on prod}\nLet $\\beta\\in H_2(S,\\mathbb{Z})\\subseteq H_2(X,\\mathbb{Z})$ be an irreducible curve class. Then for certain choice of orientation, we have \n\\begin{align}\\label{equ on P0 pro}P_{0,\\beta}(\\gamma)=e(T)\\, N_{1}\\left(\\frac{\\beta^2}{2}\\right) \\int_{S\\times{\\mathsf{p}}}\\gamma. \\end{align}\nIn particular, Conjecture \\ref{conj on DT4\/GV} (3) holds in this case. \n\\end{thm}\n\\begin{proof\nThe strategy is as follows: First we write our stable pair invariants as integrals on the moduli spaces $M_0(S,\\beta)$,\nthen express the integrand in terms of the classes $B_k(x)$ and then use\nMarkman's Theorem~\\ref{thm:Markman} to reduce to an integral over the Hilbert scheme, which is known by the results of \\cite{COT1}.\n\nBy Eqn.~\\eqref{equ on pri ins on prod} and Theorem \\ref{thm on vir clas} (choose the inverse orientation there), we have \n$$P_{0,\\beta}(\\gamma)=e(T)\\, \\int_{S\\times{\\mathsf{p}}}\\gamma\\cdot\\int_{P_{0}(S,\\beta)}f^*\\left(c_{\\beta^2}(T_{M_0(S,\\beta)})\\cdot \\pi_{M*}\\left(\\pi_S^*({\\mathsf{p}})\\cdot \\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\right)\\right).$$\nUsing Proposition \\ref{deg loci}, we find\n\\[ P_{0,\\beta}(\\gamma)=e(T)\\,\n\\int_{S\\times{\\mathsf{p}}}\\gamma\\cdot\\int_{M_{0}(S,\\beta)}c_{\\beta^2}(T_{M_0(S,\\beta)})\\cdot c_{1}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S))\\cdot \n\\pi_{M*}\\left(\\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\cdot \\pi_S^*({\\mathsf{p}})\\right). \\]\nA calculation shows that we have\n\\[ B_1({\\mathsf{p}}) = \\pi_{\\ast}( \\mathop{\\rm ch}\\nolimits_1({\\mathbb{F}}_S)\\,\\pi_S^{\\ast}({\\mathsf{p}}) ). 
\\]\nMoreover, the expressions \\eqref{Pn expressions} are\ninvariant under replacing $\\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S)$ by $\\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S) \\exp( \\ell )$ for any line bundle $\\ell \\in H^2(M_n(S,\\beta))$.\nHence we can use $\\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S') := \\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S) \\exp( \\theta_{{\\mathbb{F}}_S}(v) \/ \\langle v,v \\rangle )$ which shows that\n\\begin{align*}\nc_{1}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S))\n& = -\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_3(\\mathbb{F}_S'))-2\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S')\\,\\pi_S^*{\\mathsf{p}}) \\\\\n& = - B_1\\left( \\sqrt{\\mathop{\\rm td}\\nolimits_S}^{-1} \\right) - 2 B_1( {\\mathsf{p}} ) \\\\\n& = - B_1( 1 + {\\mathsf{p}} ).\n\\end{align*}\nWe obtain that:\n\\begin{align*}\n& \\int_{M_{0}(S,\\beta)}c_{2d-2}(T_{M_0(S,\\beta)})\\cdot c_{1}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S))\\cdot \n\\pi_{M*}\\left(\\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\cdot \\pi_S^*{\\mathsf{p}}\\right) \\\\\n= & - \\int_{M_{0}(S,\\beta)}c_{2d-2}(T_{M_0(S,\\beta)}) B_1(1 + {\\mathsf{p}}) B_1({\\mathsf{p}}) \\\\\n= & - \\int_{S^{[d]}}c_{2d-2}(T_{S^{[d]}}) B_1(-e+f) B_1(f) \\\\\n= & - \\int_{S^{[d]}}c_{2d-2}(T_{S^{[d]}}) {\\mathfrak{G}}_2(-e+f) {\\mathfrak{G}}_2(f) \\\\\n= & - ((-e+f) \\cdot f) C( c_{2d-2}(T_{S^{[d]}}) ) \\\\\n= & N_1(d-1),\n\\end{align*}\nwhere we used the $k=1$ case of \\cite[Thm.~4.2]{COT1} in the last step. \n\\end{proof}\n\n\n\n\\subsection{Genus 2 in irreducible classes}\nLet $\\beta_d \\in H_2(S,{\\mathbb{Z}}) \\subseteq H_2(X,{\\mathbb{Z}})$ be an irreducible curve class of square $\\beta_d^2 = 2d-2$.\nBelow, we use similar method to compute stable pair invariants $P_{-1,\\beta_d}$ on $X$ for all $d$.\n\\begin{thm}\\label{thm on P_-1} For certain choice of orientation, we have \n\\begin{align*} \\sum_{d \\in\\mathbb{Z}} P_{-1,\\beta_d}\\, q^d &= \n\\left(\\prod_{n \\geqslant 1} (1-q^n)^{-24}\\right) \\left(24q \\frac{d}{dq} G_2(q) - 24G_2(q) - 1 \\right) \\\\\n&= 72 q^2 + 1920 q^3 + 28440 q^4 + 305280 q^5 + 2639760 q^6 + \n 19450368 q^7 + \\cdots .\n\\end{align*}\nIn particular, Conjecture \\ref{conj on DT4\/GV} (4) holds in this case. 
\n\\end{thm}\n\n\n\\begin{proof}\nAs in the genus $1$ case, by Theorem \\ref{thm on vir clas} and Proposition \\ref{deg loci} we have:\n\\[\nP_{-1,\\beta}=-e(T)\\int_{M_{-1}(S,\\beta)}c_{2d-2}(T_{M_{-1}(S,\\beta)})\\cdot c_{2}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)).\n\\]\nWith the same discussion as before one gets:\n\\[\nc_{2}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)) = \\frac{1}{2} B_1(1 + {\\mathsf{p}})^2 + B_2(1 + {\\mathsf{p}}).\n\\]\nHence applying Markman's Theorem~\\ref{thm:Markman}, we conclude\n\\begin{align*}\n& \\int_{M_{-1}(S,\\beta)}c_{2d-2}(T_{M_{-1}(S,\\beta)})\\cdot c_{2}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)) \\\\\n= & \\int_{M_{-1}(S,\\beta)}c_{2d-2}(T_{M_{-1}(S,\\beta)})\\cdot \\left( \\frac{1}{2} B_1(1 + {\\mathsf{p}})^2 + B_2(1 + {\\mathsf{p}}) \\right) \\\\\n= & \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) \\left( \\frac{1}{2} B_1(-e+f - {\\mathsf{p}})^2 + B_2(-e+f-{\\mathsf{p}} ) \\right) \\\\\n= & \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) \n\\frac{1}{2} \\left[ {\\mathfrak{G}}_2(-e+f) + \\frac{\\delta}{2d-2} \\right]^2 \\\\\n& \\ + \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) \\left( {\\mathfrak{G}}_3(-e+f) - \\frac{\\delta}{2d-2} {\\mathfrak{G}}_2(-e+f) - \\frac{1}{2} \\frac{\\delta^2}{(2d-2)^2} + {\\mathfrak{G}}_2({\\mathsf{p}}) \\right) \\\\\n= & \\frac{1}{2} \\left( (-e+f)^2 + \\frac{ \\delta \\cdot \\delta}{(2d-2)^2} \\right) N_1(d-1)\n- \\frac{1}{2} \\frac{ \\delta \\cdot \\delta }{(2d-2)^2} N_1(d-1) + \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}) \\\\\n= & - N_1(d-1) + \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}).\n\\end{align*}\nThus we conclude that\n\\begin{align*}\\label{equ on P-1 pro}\nP_{-1,\\beta}=e(T) \\left( N_1(d-1) - \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}) \\right).\n\\end{align*}\nThe desired formula now follows by the evaluation given in \\cite[Prop.~4.6]{COT1}:\n\\[ \\sum_{d \\geqslant 0} q^d \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}) = \\prod_{n = 1} (1-q^n)^{-24} \\left( G_2(q) + \\frac{1}{24} \\right). \\]\nFinally, comparing with Proposition \\ref{prop on gw on prod}, we are done. \n\\end{proof}\n\\begin{rmk}\\label{rmk on pri g=0}\nBy the global Torelli theorem, primitive curve classes on $K3$ surfaces can be deformed to irreducible curve classes. Combining Theorem \\ref{thm on g=0 conj on prod}, Theorem \\ref{thm on g=1 conj on prod}, Theorem \\ref{thm on P_-1}, we know \nConjecture \\ref{conj on DT4\/GV} also holds for primitive curve classes $\\beta\\in H_2(S)\\subseteq H_2(X)$. \\end{rmk}\n\n\n\\subsection{Genus 1:~multiple fiber classes of elliptic fibrations}\nLet $X=E\\times E\\times T$ be the product two copies of an elliptic curve $E$ and a $K3$ surface $T$. It gives the trivial elliptic fibration \n\\begin{align}\\label{equ on trivial ell fib}\\pi: X\\to Y:=E\\times T. \\end{align} \nFor multiple fiber classes of $\\pi$ \\eqref{equ on trivial ell fib}, we have the following closed evaluation:\n\\begin{thm}\\label{thm1 on g=1 of multiple fiber}\nLet $t > 0$ and $\\gamma \\in H^4(X)$. For certain choice of orientation, we have \n\\begin{align}\\label{equ on P0 multiple fiber}\n\\sum_{r\\geqslant 0}P^t_{0,r[E]}(\\gamma)\\,q^r=24\\,\\left(\\int_{E \\times E \\times {\\mathsf{p}}} \\gamma\\right)\\cdot\\sum_{m\\geqslant 1} \\sum_{n | m}n^2q^m. 
\end{align}
\end{thm}
\begin{proof}
By \cite[Prop.~5.3]{CT1}, we know that $P^t_0(X,n[E])$ is independent of the choice of $t>0$, so we may set $t\to \infty$ and work with PT stability.
As in \cite[Lem.~3.5]{CMT2}, there is an isomorphism
$$\pi^*: \mathop{\rm Hilb}\nolimits^n(Y) \cong P_0(X,n[E]), \quad I_Z\mapsto \pi^*I_Z. $$
For $I=\pi^*I_Z\in P_0(X,n[E])$, by the projection formula and
$$\pi_*\mathcal{O}_X\cong \mathcal{O}_{Y}\oplus K_{Y}[-1], $$
we obtain
$$\mathop{\dR\mathrm{Hom}}\nolimits_X(I,I)\cong \mathop{\dR\mathrm{Hom}}\nolimits_{Y}(I_Z,I_Z)\oplus \mathop{\dR\mathrm{Hom}}\nolimits_{Y}(I_Z,I_Z\otimes K_{Y})[-1]. $$
By taking the traceless part, we get
\begin{align}\label{equ1 on ell fib}\mathop{\rm Ext}\nolimits^2_X(I,I)_0&\cong \mathop{\rm Ext}\nolimits^2_{Y}(I_Z,I_Z)_0\oplus \mathop{\rm Ext}\nolimits^1_{Y}(I_Z,I_Z)_0 \\ \nonumber
&\cong \mathop{\rm Ext}\nolimits^2_{Y}(I_Z,I_Z)_0\oplus \mathop{\rm Ext}\nolimits^2_{Y}(I_Z,I_Z)_0^{\vee},
\end{align}
where we use Serre duality in the second isomorphism.

Next we compare cosections on these obstruction spaces. By \cite[Lem.~9.4]{KiP}, we have a surjective isotropic cosection
\begin{align*}\phi_X: \mathop{\rm Ext}\nolimits^2_X(I,I)_0 \stackrel{\mathrm{At}(I)}{\longrightarrow} \mathop{\rm Ext}\nolimits^3_X(I,I\otimes T^*X)\stackrel{\mathrm{tr}}{\longrightarrow}
H^3(X,T^*X)\stackrel{H\sigma_X }{\longrightarrow} H^4(X,\wedge^4T^*X) \stackrel{\int}{\longrightarrow}\mathbb{C}, \end{align*}
where $\mathrm{At}(I)\in \mathop{\rm Ext}\nolimits^1_X(I,I\otimes T^*X)$ denotes the Atiyah class of $I$, $H\in H^1(X,T^*X)$ is an ample divisor class
and $\sigma_X\in H^0(X,\wedge^2T^*X)$ is a holomorphic symplectic form on $X$.

By the compatibility of Atiyah classes with the map $\pi: X\to Y$ (ref.~\cite[Prop.~3.14]{BFl}), we have a commutative diagram
\begin{align}\label{com diag on atiyah class}\xymatrix{
\mathop{\rm Ext}\nolimits^2_X(I,I)_0 \ar[r]^{\mathrm{At}(I) \,\,\, \quad } & \mathop{\rm Ext}\nolimits^3_X(I,I\otimes T^*X) \ar[r]^{ \quad \mathrm{tr} } & H^3(X,T^*X) \ar[r]^{\mathrm{pr}\quad \,\,\,} &
H^{1,1}(E\times E)\otimes H^{0,2}(T) \\
\mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0 \ar[u]_{i} \ar[r]^{\mathrm{At}(I_Z) \,\,\, \quad } & \mathop{\rm Ext}\nolimits^3_Y(I_Z,I_Z\otimes T^*Y) \ar[u]_{ } \ar[r]^{\quad \,\, \mathrm{tr} } & H^3(Y,T^*Y) \ar[r]^{\cong\quad \quad } & H^{1,1}(E)\otimes H^{0,2}(T) \ar[u]_{\pi^*}, } \end{align}
where $i$ is the embedding in \eqref{equ1 on ell fib}, $\mathrm{tr}$ denotes the trace map and
$\mathrm{pr}$ is the projection with respect to the K\"unneth decomposition.
We define a cosection
\begin{align*}\phi_Y: \mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0 \stackrel{\mathrm{At}(I_Z)}{\longrightarrow} \mathop{\rm Ext}\nolimits^3_Y(I_Z,I_Z\otimes T^*Y)\stackrel{\mathrm{tr}}{\longrightarrow}
H^3(Y,T^*Y)\cong H^{1,1}(E)\otimes H^{0,2}(T)\stackrel{\epsilon}{\longrightarrow} \mathbb{C}, \end{align*}
$$\mathrm{where} \quad \epsilon(\alpha)=\int_{X} H \sigma_X\cdot\pi^*\alpha, \quad \alpha\in H^{1,1}(E)\otimes H^{0,2}(T).
$$
It is easy to see that $\phi_Y$ is a positive multiple of the standard cosection of $\mathop{\rm Hilb}\nolimits^n(Y)$ (see~e.g.~\cite[Eqn.~(6)]{O2});
hence the associated reduced virtual class stays the same.

By diagram \eqref{com diag on atiyah class}, we have a commutative diagram:
\begin{align*}\xymatrix{
\mathop{\rm Ext}\nolimits^2_X(I,I)_0 \ar[r]^{\quad \quad \phi_X} & \mathbb{C} \\
\mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0 \ar[u]_{i} \ar[ur]_{\quad \phi_Y}. & } \end{align*}
We claim that $\mathop{\rm Ker}\nolimits(\phi_Y)$ is a maximal isotropic subspace of
$\mathop{\rm Ker}\nolimits(\phi_X)/\mathrm{Im}(\phi^{\vee}_X)$. In fact, by taking duals, we have a commutative diagram
\begin{align*}\xymatrix{
\mathbb{C} \ar[d]^{=} \ar[r]^{ \phi^{\vee}_X \quad \quad \,\, } & \mathop{\rm Ext}\nolimits^2_X(I,I)_0^{\vee} \ar[d]^{i^{\vee}} \ar[r]^{Q_{\mathrm{Serre}} \quad\quad\quad\quad\quad\quad\quad\quad}_{\cong \quad\quad\quad\quad\quad\quad\quad\quad } & \mathop{\rm Ext}\nolimits^2_X(I,I)_0\cong \mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0\oplus
\mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0^{\vee} \ar[dl]^{\pi_2} \\
\mathbb{C}\ar[r]^{ \phi^{\vee}_Y \quad \quad \quad } & \mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0^{\vee}. & } \end{align*}
Since $\phi_Y$ is surjective, $\phi^{\vee}_Y$ is injective; therefore
$$\mathrm{Im}(\phi^{\vee}_X)\cap\mathop{\rm Ker}\nolimits(\phi_Y)\subseteq \mathrm{Im}(\phi^{\vee}_X)\cap \mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0=0, $$
and $\mathop{\rm Ker}\nolimits(\phi_Y)$ defines a subspace of $\mathop{\rm Ker}\nolimits(\phi_X)/\mathrm{Im}(\phi^{\vee}_X)$.
This is a maximal isotropic subspace, as $i: \mathop{\rm Ext}\nolimits^2_Y(I_Z,I_Z)_0\to\mathop{\rm Ext}\nolimits^2_X(I,I)_0$ is.

The above construction works in families, and therefore we have
$$[P_0(X,n[E])]^{\mathrm{vir}}=[\mathop{\rm Hilb}\nolimits^n(Y)]^{\mathrm{vir}}\in A_1(P_0(X,n[E])), $$
for a certain choice of orientation on the LHS. Consider the commutative diagram:
\begin{align*} \xymatrix{
X \ar[d]_{\pi} & X\times P_0(X,n[E]) \ar[d]_{\bar{\pi}=(\pi,(\pi^*)^{-1})} \ar[l]_{\pi_X \quad\quad} \ar[r]^{\quad \pi_P } & P_0(X,n[E]) \ar[d]_{(\pi^*)^{-1}}^{\cong} \\
Y & Y\times \mathop{\rm Hilb}\nolimits^n(Y) \ar[l]_{\pi_{Y} \quad\quad } \ar[r]^{\,\, \pi_M } & \mathop{\rm Hilb}\nolimits^n(Y), } \quad \quad
\end{align*}
and denote by $\mathcal{Z}\hookrightarrow Y\times \mathop{\rm Hilb}\nolimits^n(Y)$ the universal $0$-dimensional subscheme.
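Before computing, we record the base-change identity used in the last step below: since $\bar{\pi}$ is the product of $\pi: X\to Y$ with the isomorphism $(\pi^*)^{-1}$, the push-pull formula gives
$$\bar{\pi}_*\pi_X^*\gamma=\pi_{Y}^*\pi_*\gamma\in H^{2}(Y\times \mathop{\rm Hilb}\nolimits^n(Y)) $$
for $\gamma\in H^4(X)$ (a routine check on K\"unneth components).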
We then compute
\begin{align*}
P_{0,n[E]}(\gamma)&=\int_{[P_0(X,n[E])]^{\mathrm{vir}}}\pi_{P*}(\pi_X^*\gamma\cdot \bar{\pi}^*\mathop{\rm ch}\nolimits_3(\mathcal{O}_{\mathcal{Z}})) \\
&=\int_{[\mathop{\rm Hilb}\nolimits^n(Y)]^{\mathrm{vir}}}\pi_{M*}\bar{\pi}_*(\pi_X^*\gamma\cdot \bar{\pi}^*\mathop{\rm ch}\nolimits_3(\mathcal{O}_{\mathcal{Z}})) \\
&=\int_{[\mathop{\rm Hilb}\nolimits^n(Y)]^{\mathrm{vir}}} \pi_{M*}(\mathop{\rm ch}\nolimits_3(\mathcal{O}_{\mathcal{Z}})\cdot\bar{\pi}_*(\pi_X^*\gamma) ) \\
&=\int_{[\mathop{\rm Hilb}\nolimits^n(Y)]^{\mathrm{vir}}} \pi_{M*}(\mathop{\rm ch}\nolimits_3(\mathcal{O}_{\mathcal{Z}})\cdot\pi_{Y}^*\pi_*\gamma ).
\end{align*}
The statement now follows from Proposition~\ref{prop:K3xE calculation} below.
\end{proof}

\begin{prop}
\label{prop:K3xE calculation}
Let $\omega \in H^2(E,{\mathbb{Z}})$ be the class of a point and $D \in H^2(T,{\mathbb{Q}})$ any class. Then for any $n \geqslant 1$ we have:
\begin{align*}
\int_{[ \mathop{\rm Hilb}\nolimits^n(T \times E) ]^{\text{vir}} } \pi_{M \ast}\big( \mathop{\rm ch}\nolimits_3({\mathcal O}_{{\mathcal Z}}) \pi_Y^{\ast}( \omega \otimes 1 ) \big) & = (-1)^{n+1} e(T) \sum_{d|n} d^2, \\
\int_{[ \mathop{\rm Hilb}\nolimits^n(T \times E) ]^{\text{vir}} } \pi_{M \ast}\big( \mathop{\rm ch}\nolimits_3({\mathcal O}_{{\mathcal Z}}) \pi_Y^{\ast}( 1 \otimes D ) \big) & = 0.
\end{align*}
\end{prop}
\begin{proof}
Write $\mathop{\rm Hilb}\nolimits = \mathop{\rm Hilb}\nolimits^n(T \times E)$ and consider the diagram
\[
\begin{tikzcd}
T \times E & \mathop{\rm Hilb}\nolimits \times T \times E \ar[swap]{l}{\pi_{T \times E}} \ar{r}{\pi_M} \ar{d}{\tilde{p}} & \mathop{\rm Hilb}\nolimits \ar{d}{p} \\
& \frac{ \mathop{\rm Hilb}\nolimits \times T \times E }{E} \ar{r}{\pi_{M/E}} & \mathop{\rm Hilb}\nolimits/E,
\end{tikzcd}
\]
where the quotient by $E$ is taken in the stacky sense.
The universal subscheme ${\mathcal Z} \subset \mathop{\rm Hilb}\nolimits \times T \times E$ has a natural $E$-linearization and hence arises from the pullback of
a subscheme ${\mathcal Z}/E \subset (\mathop{\rm Hilb}\nolimits \times T \times E)/E$. Moreover, as in \cite{O2},
there exists a natural ($0$-dimensional) virtual class $[ \mathop{\rm Hilb}\nolimits/E ]^{\text{vir}}$ such that
\[ [ \mathop{\rm Hilb}\nolimits ]^{\text{vir}} = p^{\ast} [\mathop{\rm Hilb}\nolimits/E]^{\text{vir}}. \]
Since the virtual class of $\mathop{\rm Hilb}\nolimits/E$ arises from a symmetric obstruction theory (on an \'etale cover of $\mathop{\rm Hilb}\nolimits/E$), its degree
can be computed as a Behrend-weighted Euler characteristic \cite{B}:
\[ \int_{ [\mathop{\rm Hilb}\nolimits/E]^{\text{vir}} } 1 = e\left( \mathop{\rm Hilb}\nolimits/E, \nu \right).
\\]\nWe argue now as follows: Applying the pushpull formula and using $p \\circ \\pi_M = \\pi_{M\/E} \\circ \\tilde{p}$ we have\n\\begin{align*}\nN_n & := \\int_{[ \\mathop{\\rm Hilb}\\nolimits ]^{\\text{vir}} } \\pi_{M \\ast}\\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 ) \\big) \\\\\n& = \\int_{[ \\mathop{\\rm Hilb}\\nolimits\/E ]^{\\text{vir}} } \\pi_{M\/E \\ast} \\tilde{p}_{\\ast} \\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 ) \\big) \\\\\n& = \\int_{[ \\mathop{\\rm Hilb}\\nolimits\/E ]^{\\text{vir}} } \\pi_{M\/E \\ast} \\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}\/E}) \\tilde{p}_{\\ast}( \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 )) \\big).\n\\end{align*}\nThen by checking on fibers we have $\\tilde{p}_{\\ast}( \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 )) = 1$ \nas well as $\\pi_{M\/E \\ast} \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}\/E}) = n$. This implies that\n\\[ N_n = n \\int_{ [\\mathop{\\rm Hilb}\\nolimits\/E]^{\\text{vir}} } 1 = n\\cdot e\\left( \\mathop{\\rm Hilb}\\nolimits\/E, \\nu \\right) = 24 (-1)^{n-1} \\sum_{d|n} d^2, \\]\nwhere for the last equality we have used \\cite[Cor.~1]{OS}.\n\nFor the second integral we argue identically, but observe that we have\n\\[ \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_{T \\times E}^{\\ast}( 1 \\otimes D ) = \\tilde{p}^{\\ast}( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}\/E}) \\pi_T^{\\ast}(D)), \\]\nso when pushing forward by $\\tilde{p}$ the integral vanishes.\n\\end{proof}\nSimilarly, we can consider a nontrivial elliptic fibration: \n$$\\bar{p}=(p,\\textrm{id}_T): X=S\\times T\\to \\mathbb{P}^{1}\\times T, $$\nwhere $p: S\\rightarrow\\mathbb{P}^{1}$ is an elliptic $K3$ surface with a section $i$. Let $f$ be a generic fiber of $\\bar{p}$.\n\\begin{thm}\\label{thm2 on g=1 of multiple fiber}\nLet $t>0$ and $\\gamma \\in H^4(X)$. Then for certain choice of orientation, we have \n\\begin{align}\\label{equ on P0 multiple fiber}\n\\sum_{r\\geqslant 0}P^t_{0,r[f]}(\\gamma)\\,q^r=24\\,\\left(\\int_{S \\times {\\mathsf{p}}} \\gamma\\right)\\cdot \\sum_{m\\geqslant 1}\\sum_{n | m}n^2q^m. \\end{align}\n\\end{thm}\n\\begin{proof}\nThe first proof is parallel to the proof of Theorem \\ref{thm1 on g=1 of multiple fiber}. 
For the second part, we need to evaluate
\begin{equation} \int_{[ \mathop{\rm Hilb}\nolimits^n(T \times {\mathbb{P}}^1) ]^{\text{vir}} } \pi_{M \ast}\big( \mathop{\rm ch}\nolimits_3({\mathcal O}_{{\mathcal Z}})\,\pi_Y^{\ast}( \omega \otimes 1 ) \big), \label{aaas} \end{equation}
where $\omega \in H^2({\mathbb{P}}^1)$ is the class of a point.
We consider the degeneration of $T \times {\mathbb{P}}^1$ given by the product of $T$ with the degeneration of ${\mathbb{P}}^1$ into a chain of three ${\mathbb{P}}^1$'s.
By specializing the insertion $\omega$ to the middle factor, we
are reduced to an integral over the relative Hilbert scheme $\mathop{\rm Hilb}\nolimits^n(T \times {\mathbb{P}}^1 / T_0 \cup T_{\infty} )$ with the same integrand.
But this integral is also the outcome of applying the degeneration formula to the integrals considered in Proposition~\ref{prop:K3xE calculation}
(under the degeneration of $E$ to a nodal ${\mathbb{P}}^1$).
Hence \eqref{aaas} is given by $(-1)^{n+1} e(T) \sum_{d|n} d^2$ as well.
For the analogue of the second integral in Proposition~\ref{prop:K3xE calculation}, the localization formula applied to the scaling action of ${\mathbb{C}}^{\ast}$ on ${\mathbb{P}}^1$ shows that it vanishes.
\end{proof}
\begin{rmk}\label{rmk on impr}
On the product of two $K3$ surfaces,
genus $1$ Gopakumar-Vafa invariants in imprimitive classes are defined in \cite[Def.~A.1]{COT1}. In particular, for the multiple fiber classes
$\beta=r[f]$ above, using \cite[Eqn.~(5.7)]{COT1} we know that $n_{1,r[f]}(\gamma)=0$ if $r>1$.
\end{rmk}




\section{Hilbert schemes of two points on $K3$}

\subsection{Rational curves on exceptional locus}
Let $S$ be a $K3$ surface.
Consider the Hilbert-Chow map
$$\pi: \mathop{\rm Hilb}\nolimits^2(S)\to \mathop{\rm Sym}\nolimits^2(S) $$
to the symmetric product of $S$. Let $D$ be the exceptional divisor, fitting into the Cartesian diagram:
\begin{align*} \xymatrix{
D \ar[d]_{\pi} \ar[r]^{i \quad \,\,\, } & \mathop{\rm Hilb}\nolimits^2(S) \ar[d]^{\pi} \\
S \ar[r]^{\Delta \quad \,\,\, } & \mathop{\rm Sym}\nolimits^2(S), } \quad \quad
\end{align*}
where $\Delta$ is the diagonal embedding. Note that $\pi: D\to S$ is a $\mathbb{P}^1$-bundle and any fiber of it has normal bundle
$\mathcal{O}_{\mathbb{P}^1}(-2)\oplus\mathcal{O}_{\mathbb{P}^1}^{\oplus 2}$.
\begin{thm}\label{thm on hilbS}
When $t=\frac{n}{\omega\cdot\beta}+0^+$ $($i.e.~in the JS chamber$)$,
Conjecture \ref{conj on DT4/GV} (1),~(2) hold for multiple fiber classes $\beta=r[\mathbb{P}^1]$ $(r\geqslant 1)$ of $\pi$ as above.
\end{thm}
\begin{proof}
By the existence of Jordan-H\"older filtrations, the JS moduli space $P_n^{\mathrm{JS}}(X,d[\mathbb{P}^1])$ is nonempty only if
$$d\,|\,n, \,\,\, n>0, $$
so we may assume $n=m\cdot d$ for some $m\in \mathbb{Z}_{\geqslant 1}$. Consider the map
\begin{align}f: P_{md}^{\mathrm{JS}}(X,d[\mathbb{P}^1])\to \mathop{\rm Sym}\nolimits^d(S), \quad (F,s)\mapsto \pi_*[F].
\end{align}
As the insertion \eqref{equ on pri ins} only involves the fundamental cycle of the universal one-dimensional sheaf $\mathbb{F}$, we have
$$[\mathbb{F}]=\bar{f}^*[\mathcal{Z}], $$
where $\mathcal{Z}\hookrightarrow \mathop{\rm Sym}\nolimits^d(S)\times S$ is the incidence subvariety and $\bar{f}=(f,\textrm{id}_S)$.
Therefore
$$\int_{[P_{md}^{\mathrm{JS}}(X,d[\mathbb{P}^1])]^{\rm{vir}}}\prod_{i=1}^l\tau(\gamma_i)
=\int_{[P_{md}^{\mathrm{JS}}(X,d[\mathbb{P}^1])]^{\rm{vir}}}f^*\Phi, $$
for some $\Phi\in H^{2(md+1)}(\mathop{\rm Sym}\nolimits^d(S))$. When $m>1$, we have $md+1>2d$, therefore $\Phi=0$ and
$$P_{md,d[\mathbb{P}^1]}^{\mathrm{JS}}(\gamma_1,\ldots,\gamma_l)=0, \quad \mathrm{if}\,\,m>1. $$
For $m=1$, we have an isomorphism
\begin{align*}
 \mathop{\rm Hilb}\nolimits^d(S) &\stackrel{\pi^*}{\cong} P_{d}^{\mathrm{JS}}(D,d[\mathbb{P}^1]) \cong P_{d}^{\mathrm{JS}}(X,d[\mathbb{P}^1]), \\
I_Z &\mapsto \pi^*I_Z\mapsto (\mathcal{O}_X\to i_*\pi^*\mathcal{O}_Z).
\end{align*}
For $I_X=(\mathcal{O}_X\to i_*\pi^*\mathcal{O}_Z)$, we write $I_D=(\mathcal{O}_D\to\pi^*\mathcal{O}_Z)$. As in \cite[Prop.~4.3]{CMT2}, \cite[Prop.~4.2]{CKM2},
we have a canonical isomorphism
$$\mathop{\rm Ext}\nolimits^0_D(I_D,\pi^*\mathcal{O}_Z)\cong \mathop{\rm Ext}\nolimits^1_X(I_X,I_X)_0, $$
and an inclusion of a maximal isotropic subspace
\begin{align}\label{equ1 on excep curve}\mathop{\rm Ext}\nolimits^1_D(I_D,\pi^*\mathcal{O}_Z)\hookrightarrow \mathop{\rm Ext}\nolimits^2_X(I_X,I_X)_0. \end{align}
From the distinguished triangle
$$I_D\to \mathcal{O}_D\to \pi^*\mathcal{O}_Z, $$
we obtain a distinguished triangle
$$\mathop{\dR\mathrm{Hom}}\nolimits_D(\pi^*\mathcal{O}_Z,\pi^*\mathcal{O}_Z)\to \mathop{\dR\mathrm{Hom}}\nolimits_D(\mathcal{O}_D,\pi^*\mathcal{O}_Z)\to\mathop{\dR\mathrm{Hom}}\nolimits_D(I_D,\pi^*\mathcal{O}_Z). $$
By the projection formula, we have
$$\mathop{\dR\mathrm{Hom}}\nolimits_D(\pi^*\mathcal{O}_Z,\pi^*\mathcal{O}_Z)\cong \mathop{\dR\mathrm{Hom}}\nolimits_S(\mathcal{O}_Z,\mathcal{O}_Z), \,\,\, \mathop{\dR\mathrm{Hom}}\nolimits_D(\mathcal{O}_D,\pi^*\mathcal{O}_Z)\cong \mathop{\dR\mathrm{Hom}}\nolimits_S(\mathcal{O}_S,\mathcal{O}_Z). $$
Therefore we get an exact sequence
\begin{align}\label{equ2 on excep curve}0=H^1(S,\mathcal{O}_Z)\cong H^1(D,\pi^*\mathcal{O}_Z)\to \mathop{\rm Ext}\nolimits^1_D(I_D,\pi^*\mathcal{O}_Z) \to \mathop{\rm Ext}\nolimits^2_S(\mathcal{O}_Z,\mathcal{O}_Z)\to 0. \end{align}
By Serre duality, we have
\begin{align}\label{equ3 on excep curve} \mathop{\rm Ext}\nolimits^2_S(\mathcal{O}_Z,\mathcal{O}_Z)\cong \mathop{\rm Ext}\nolimits^0_S(\mathcal{O}_Z,\mathcal{O}_Z)^{\vee}\cong H^0(S,\mathcal{O}_Z)^{\vee}. \end{align}
Combining Eqns.~\eqref{equ1 on excep curve}, \eqref{equ2 on excep curve}, \eqref{equ3 on excep curve}, we obtain a maximal isotropic subspace
$$H^0(S,\mathcal{O}_Z)^{\vee}\hookrightarrow\mathop{\rm Ext}\nolimits^2_X(I_X,I_X)_0. $$
Working in families, we see that the dual of the tautological bundle $\mathcal{O}_S^{[d]}$ on $\mathop{\rm Hilb}\nolimits^d(S)$ is a maximal isotropic subbundle
of the obstruction bundle of $P_{d}^{\mathrm{JS}}(X,d[\mathbb{P}^1])$.
By Lemma \ref{lem on indep of cosec}, we obtain
$$[P_{d}^{\mathrm{JS}}(X,d[\mathbb{P}^1])]^{\mathrm{vir}}=[\mathop{\rm Hilb}\nolimits^d(S)]\cap c_{d-1}\left(\mathcal{O}_S^{[d]}\right), $$
for a certain choice of orientation. As for the insertions, consider the following diagram
\begin{align*} \xymatrix{
S & D \ar[l]_{\pi} \ar[r]^{i} & X \\
S\times \mathop{\rm Hilb}\nolimits^d(S) \ar[d]^{\pi_M} \ar[u]_{\pi_S} & D\times \mathop{\rm Hilb}\nolimits^d(S) \ar[u]_{\pi_D} \ar[d]^{\pi_M} \ar[l]_{\bar{\pi}=(\pi,\textrm{id})} \ar[r]^{\bar{i}=(i,\textrm{id})} & X\times \mathop{\rm Hilb}\nolimits^d(S) \ar[d]^{\pi_M} \ar[u]_{\pi_X} \\
\mathop{\rm Hilb}\nolimits^d(S) &\mathop{\rm Hilb}\nolimits^d(S) & \mathop{\rm Hilb}\nolimits^d(S). }
\end{align*}
Let $\mathcal{Z}\hookrightarrow\mathop{\rm Hilb}\nolimits^d(S)\times S$ denote the universal zero-dimensional subscheme; then
\begin{align*}
\tau(\gamma)&=\pi_{M*}\left(\pi_X^*\gamma\cdot\mathop{\rm ch}\nolimits_3(\bar{i}_*\bar{\pi}^*\mathcal{O}_{\mathcal{Z}})\right) \\
&=\pi_{M*}\left(\pi_X^*\gamma\cdot\bar{i}_*\bar{\pi}^*[\mathcal{Z}]\right) \\
&=\pi_{M*}\bar{i}_*\left(\bar{i}^*\pi_X^*\gamma\cdot\bar{\pi}^*[\mathcal{Z}]\right) \\
&=\pi_{M*}\bar{\pi}_{*}\left(\pi_D^*i^*\gamma\cdot\bar{\pi}^*[\mathcal{Z}]\right) \\
&=\pi_{M*}\left(\bar{\pi}_{*}\pi_D^*i^*\gamma\cdot [\mathcal{Z}]\right)\\
&=\pi_{M*}\left(\pi_S^*\pi_*i^*\gamma\cdot [\mathcal{Z}]\right)\in H^2(\mathop{\rm Hilb}\nolimits^d(S)),
\end{align*}
which depends only on $[\mathcal{Z}]$ and hence is a pullback from $\mathop{\rm Sym}\nolimits^d(S)$ by the Hilbert-Chow map
$$\mathrm{HC} \colon \mathop{\rm Hilb}\nolimits^d(S)\to \mathop{\rm Sym}\nolimits^d(S). $$
To sum up, we have
\begin{align}\label{equ on exc cur} P_{d,d[\mathbb{P}^1]}^{\mathrm{JS}}(\gamma_1,\ldots,\gamma_l)=\int_{\mathop{\rm Hilb}\nolimits^d(S)}c_{d-1}\left(\mathcal{O}_S^{[d]}\right)\cdot
\prod_{i=1}^l\pi_{M*}\left(\pi_S^*\pi_*i^*\gamma_i\cdot [\mathcal{Z}]\right). \end{align}
When $d=1$, this reduces to \cite[Lem.~3.7]{COT1}. When $d>1$, we claim that the above integral is zero.
In fact, by \cite[Thm.~4.6]{Lehn},
we have the formula
\begin{align*}
	\sum_{m\geqslant 0} c\left(\mathcal{O}_S^{[m]}\right)z^m =
	\mathrm{exp}\left(\sum_{m\geqslant 1} \frac{(-1)^{m-1}}{m} q_m(1) z^m \right) \cdot 1,
	\end{align*}
where the $q_m(1)$ are linear maps (called Nakajima operators)
\begin{align*}
	q_m(1) \in \mathop{\rm End}\nolimits(\mathbb{H}), \quad
	\mathbb{H}=\bigoplus_{m\geqslant 0} H^{\ast}(\mathrm{Hilb}^m(S), \mathbb{Q}),
	\end{align*}
of bidegree $(m, 2m-2)$.
By looking at the bidegree $(d, 2d-2)$ part,
we obtain
\begin{align*}
	c_{d-1}\left(\mathcal{O}_S^{[d]}\right)=q_d(1)(1), \quad \mathrm{where} \,\, 1\in H^0(\mathop{\rm Hilb}\nolimits^0(S)).
	\end{align*}
By the definition of $q_d(1)$ in~\cite[Def.~2.3]{Lehn}, we have $q_d(1)(1)=p_{1\ast}[\mathcal{Q}]$,
where $\mathcal{Q}$ is the cycle on
$\mathop{\rm Hilb}\nolimits^d(S) \times S$ supported on pairs $(\xi, x)$ with $\mathrm{Supp}(\xi)=\{x\}$.
Therefore
$c_{d-1}\left(\mathcal{O}_S^{[d]}\right)$ is supported on $\mathrm{HC}^{-1}(\Delta)$, where
$$\Delta=\big\{(x,\cdots,x)\in\mathop{\rm Sym}\nolimits^d(S) \big\}\subseteq \mathop{\rm Sym}\nolimits^d(S)$$ is the small diagonal.
Our insertion is a pullback from $\mathop{\rm Sym}\nolimits^d(S)$ and imposes a $(d+1)$-dimensional constraint on $\mathop{\rm Sym}\nolimits^d(S)$. If $d>1$, then
$d+1>2=\dim_{\mathbb{C}}\Delta$, and therefore the integral \eqref{equ on exc cur} is zero.
\end{proof}

\subsection{Small degree curve classes on $X=T^*\mathbb{P}^2$}
When the $K3$ surface $S$ has a
$(-2)$-curve $C \subset S$,
the Hilbert scheme $\mathop{\rm Hilb}\nolimits^2(S)$ contains
$\mathrm{Sym}^2(C) \subset \mathop{\rm Hilb}\nolimits^2(S)$
as a Lagrangian subvariety.
For curve classes coming from $\mathrm{Sym}^2(C) \cong
\mathbb{P}^2$, our invariants can be studied on the local model $X=T^*\mathbb{P}^2$.

We have an identification of curve classes:
\[ H_2(X,{\mathbb{Z}}) = H_2({\mathbb{P}}^2, {\mathbb{Z}}) = {\mathbb{Z}} [ \ell ], \]
where $\ell \subset {\mathbb{P}}^2$ is a line.
Let $H \in H^2(T^{\ast} {\mathbb{P}}^2)$ be the pullback of the hyperplane class
and identify $H_2(T^{\ast} {\mathbb{P}}^2, {\mathbb{Z}}) \equiv {\mathbb{Z}}$ by the degree against $H$.
The Gopakumar-Vafa invariants are given as follows:
\begin{prop}\emph{(\cite[Cor.~6.2]{COT1})}\label{cor on inte on local p2}
\begin{align*}
n_{0,d}(H^2,H^2)&=
\left\{\begin{array}{rcl} 1 &\mathrm{if} \,\, d=1, \\
 -1 &\mathrm{if} \,\, d=2, \\
 0 & \,\, \mathrm{otherwise}.
\end{array} \right. \\
n_{1,1}(H^2)&=0, \quad n_{2,1}=0.
\end{align*}
\end{prop}
On the stable pair side, we compute the invariants for small degree curve classes.
\begin{prop}\label{prop on tp2}
For a certain choice of orientation, we have
$$P_{1,1}(H^2,H^2)=1, \quad P_{1,2}(H^2,H^2)=-1, \quad P_{1,3}(H^2,H^2)=0, $$
$$P_{0,1}(H^2)=P_{0,2}(H^2)=0, \quad P_{0,3}(H^2)=1,\quad P_{-1,1}=P_{-1,2}=P_{-1,3}=0. $$
Moreover, $P^t_{n}(X,d)$ is independent of the choice of $t>n/d$ in the listed cases above.

In particular, for $X=T^*\mathbb{P}^2$, we have
\begin{itemize}
\item Conjecture \ref{conj on DT4/GV} (2) holds when $d\leqslant 3$.
\item Conjecture \ref{conj on DT4/GV} (3), (4) hold.
\end{itemize}
\end{prop}
\begin{proof}
As noted in \cite[Proof~of~Lem.~6.3]{COT1}, we have a diagram
\begin{align*} \xymatrix{
X=T^{\ast}\mathbb{P}^2 \ar@{^{(}->}[r]^{i\quad } & \mathcal{O}_{\mathbb{P}^2}(-1)^{\oplus 3} \ar[d]^{\pi} \\
 & T, }
\end{align*}
where $i$ is a closed embedding and $\pi$ contracts $\mathbb{P}^2$ to a point in
an affine scheme $T$.
It is easy to see that any one-dimensional closed subscheme $C\subset X$ with $[C]=d$ ($d=1,2$) satisfies $\chi(\mathcal{O}_C)\geqslant 1$.
Therefore, by \cite[Prop.~1.12]{CT1}, we know that for $n=-1,0,1$ and $d\leqslant 3$ the moduli space
$P_n^t(X,d)$ is independent of the choice of $t>n/d$. So we may take $t\to \infty$ and work with PT stability.
Using a similar analysis to \cite[Prop.~3.9]{CKM2}, we see that all stable pairs $(\mathcal{O}_X\stackrel{s}{\to} F)$ in the above cases are scheme-theoretically supported on
the zero section $\mathbb{P}^2\subset X$ and that $F$ is stable.
Then $P_{-1}(X,d)=\emptyset$ for $d\leqslant 3$ (the image of a non-zero section would be a subsheaf $\mathcal{O}_Z\subseteq F$ with $\chi(\mathcal{O}_Z)\geqslant 0$ as $d\leqslant 3$, contradicting the stability of $F$ since $\chi(F)=-1$), and the corresponding invariants vanish.
When $n=1$ and $d\leqslant 3$, the isomorphism
$$P_1(X,d)\cong M_1(X,d), \quad (\mathcal{O}_X\to F)\mapsto F, $$
to the moduli space of one-dimensional stable sheaves $F$ with $[F]=d[\ell]$ and $\chi(F)=1$ reduces the computation
to the corresponding one on $M_1(X,d)$, carried out in \cite[Prop.~6.5]{COT1}.

When $d=1,2$, we have $P_0(X,d)=\emptyset$, so the invariants are zero.
For $d=3$, the support map
$$P_0(X,3)\cong P_0(\mathbb{P}^2,3)\stackrel{\cong}{\to} |\mathcal{O}_{\mathbb{P}^2}(3)|\cong\mathbb{P}^9, \quad F\mapsto \mathrm{supp}(F), $$
is an isomorphism. The universal one-dimensional sheaf satisfies $\mathbb{F}=\mathcal{O}_{\mathcal{C}}$ for the universal $(1,3)$-divisor
$\mathcal{C}\hookrightarrow \mathbb{P}^9\times \mathbb{P}^2$.
Let $\pi_M \colon
P_0(X,3)\times \mathbb{P}^2\to P_0(X,3)$ be the projection. Bott's formula implies that
\begin{align*}
 \mathbf{R}\mathcal{H}om_{\pi_M}(\mathcal{O},\mathcal{O}(-\mathcal{C})\boxtimes T^*\mathbb{P}^2) &\cong \mathcal{O}_{\mathbb{P}^9}(-1)[-2]^{\oplus 8}, \\
\mathbf{R}\mathcal{H}om_{\pi_M}(\mathcal{O},\mathcal{O}(\mathcal{C})\boxtimes T^*\mathbb{P}^2) &\cong \mathcal{O}_{\mathbb{P}^9}(1)^{\oplus 8}, \\
\mathbf{R}\mathcal{H}om_{\pi_M}(\mathcal{O},\mathcal{O}\boxtimes T^*\mathbb{P}^2)&\cong \mathcal{O}_{\mathbb{P}^9}[-1].
\end{align*}
Therefore, we have
\begin{align*}
&\quad \, \mathbf{R}\mathcal{H}om_{\pi_M}(\mathcal{O}_{\mathcal{C}},\mathcal{O}_{\mathcal{C}}\boxtimes T^*\mathbb{P}^2)[1]\\
&\cong \mathbf{R}\mathcal{H}om_{\pi_M}(\mathcal{O}(-\mathcal{C})\to \mathcal{O},(\mathcal{O}(-\mathcal{C})\to \mathcal{O}) \boxtimes T^*\mathbb{P}^2)[1] \\
&\cong \mathcal{O}_{\mathbb{P}^9}(-1)^{\oplus 8}\oplus \mathcal{O}_{\mathbb{P}^9}(1)^{\oplus 8} \oplus \mathcal{O}_{\mathbb{P}^9} \oplus \mathcal{O}_{\mathbb{P}^9}.
\end{align*}
By Grothendieck-Verdier duality, it is easy to see that
$$\mathcal{O}_{\mathbb{P}^9}(-1)^{\oplus 8}\oplus \mathcal{O}_{\mathbb{P}^9}$$
is a maximal isotropic subbundle of $\mathbf{R}\mathcal{H}om_{\pi_M}(\mathcal{O}_{\mathcal{C}},\mathcal{O}_{\mathcal{C}}\boxtimes T^*\mathbb{P}^2)[1]$.
The reduced virtual class satisfies
$$[P_0(X,3)]^{\mathrm{vir}}=\pm\, e\left(\mathcal{O}_{\mathbb{P}^9}(-1)^{\oplus 8}\right)\cap [\mathbb{P}^9] \in H_2(\mathbb{P}^9). $$
Let $h\in H^2(\mathbb{P}^9)$ denote the hyperplane class. It is straightforward to check that
$$\tau_0(H^2)=[h]. $$
By integrating against the virtual class, we obtain the desired result.
\end{proof}