\section{Introduction}
\label{section:litReview}

Insurance is based on the idea that society asks for protection against unforeseeable events which may cause serious financial damage. Insurance companies offer financial protection against these events. The general idea is to build a community where everybody contributes a certain amount and those who are exposed to the damage receive financial reimbursement \cite{wuthrich2019non}.
\smallskip

When (non-life) insurers set premium prices they usually start by finding the so-called pure premium, which is the expected value of the total claims that will occur in one time unit. However, when pricing insurance policies, insurers must also take into account the risk associated with the policy and additional costs (e.g.\ operational cost, capital cost, etc.). Therefore, a so-called security loading is added to cover the risk and additional costs. The security loading is often calculated using some premium calculation principle, and the insurance premium is obtained once the security loading has been determined and added to the pure premium. The main concerns are usually whether the loading is an appropriate measure of the risk and which premium principle to choose. The higher the loading, the higher the premium and, consequently, the lower the underwriting risk. However, if the premium price is too high, then the exposure will be too low due to competition, and the operational cost of the insurer will engulf the premium income, resulting in financial instability.
Therefore, insurers usually require sophisticated premium calculations in order to secure stability.
\smallskip

Collective risk models are fundamental in actuarial science to model the aggregate claim amount of a line of business in an insurance company. The collective risk model has two random components, the number of claims and the severity of claims, and is usually modelled with a compound process \cite[Chapter~3]{kaas2008modern}. The classical Lundberg risk process has been studied extensively and there exist many variations, for example including reinsurance or investments \cite{hipp2000optimal}. It assumes that premia come in a continuous stream while claims happen at discrete times according to a Poisson process.
\smallskip

Another common assumption is that the risk can be divided into groups of homogeneous risks such that the pure premia and security loadings can be estimated separately for each risk group. The pure premia of these individual groups are usually modelled with generalized linear models (GLMs). GLMs have been applied extensively in actuarial work and a good overview is provided in \cite{ohlsson2010glm, yao2013generalized}. Traditional risk theory has usually assumed independence between risks due to its convenience, but this assumption is generally not very realistic. Claims in an insurer's risk portfolio are correlated, as they are subject to the same event causes \cite{campana2005distortion}. Completely homogeneous risk groups are extremely rare and dependence among risks has become a flourishing topic in the actuarial literature \cite{Copulas_Barning}. Dependence has mostly been measured through linear correlation coefficients \cite{britt2005linear}. The popularity of the linear correlation coefficient is mainly due to the ease with which dependencies can be parameterized, in terms of correlation matrices.
Most random variables, however, are not jointly elliptically distributed and it can be very misleading to use linear correlation coefficients \cite{straumann2001correlation}. This motivated the use of concordance measures. Two random variables are concordant when large values of one tend to go with large values of the other \cite{nelsen2007introductionCopulas}. The Lundberg risk model is a Lévy jump process \cite{ContTankov2004Jumps}, which means that the dependency of two claim processes is best explained through their Lévy measure \cite{tankov2016levy}. This study will not go into details about Lévy processes, but both \cite{ContTankov2004Jumps} and \cite{papapantoleon2008introduction} provide a very good introduction, and \cite{Copulas_Barning, BAUERLE2011398, sato1999levy, van2012parameter, avanzi_cassar_wong_2011} are examples of applications of Lévy copulas to risk processes. For example, van Velesen \cite{van2012parameter} showed how Lévy copulas can be used in operational modelling and discussed how dependence is implied by the Lévy copula. In this work we consider bivariate claim processes, but the presented theory can be straightforwardly extended to multiple claim processes.
\smallskip

Ruin probability is a classical measure of risk and has been extensively studied \cite{kaas2008modern, hipp2000optimal, kasumo2018minimizing, trufin2011properties}. Although there is no absolute meaning to the probability of ruin, it still measures the stability of insurance companies. A high ruin probability indicates instability, and risk mitigation techniques should be used, like reinsurance or raising premia \cite{kaas2008modern}. Most non-life insurance products have a term of one year and therefore it can be argued that the one-year ruin probability should be used. The one-year ruin probability is the probability that the capital of an insurance company will hit zero within one year.
However, the appropriateness of risk measures defined over fixed time horizons can be questioned, since ruin in a given time span can be minimized by increasing the probability of ruin in the aftermath of that period. Lundberg concluded that the actual assumptions behind the classical collective risk model are in fact less restrictive when time-invariant quantities like the infinite-time ruin probability are considered \cite{trufin2011properties}. Therefore, we focus on the infinite-time ruin probability in this paper.
\smallskip

In this work, the optimal loadings based on two strategies are derived and compared. One strategy maximizes the profit and the other minimizes the ruin probability. We show that the two loading strategies give different results. Furthermore, we show how the optimal loading with respect to the ruin probability can be found and compare it to the one obtained when the expected profit is maximized. We consider dependencies and illustrate how Lévy copulas can be used to model claim process dependencies and how dependencies can affect the riskiness of the insurance portfolio. We take this idea further and consider dependency between the acquisition of insurance for different risks by policyholders. This is a realistic assumption, as policyholders usually buy multiple insurance products from the same insurance company. We also take into account the fact that the market risk process and the company's risk process are not the same, and how the company's risk process depends on its exposure to the market. This is, to our knowledge, the first analysis of the interplay between the ruin probability, the dependency structure of claims, and the dependency structure of the acquisition of insurance.
We demonstrate that even if there is a strong dependency between insurance products within the market, small insurance companies have less dependency and therefore less risk than bigger insurance companies, provided the dependency between the acquisition of insurance for different risks is not too strong.
\medskip

The paper is organized as follows: Section \ref{section:min_ruin} contains some background material about ruin probabilities in the Lundberg process and aggregation of compound Poisson processes. Section \ref{section:demand} deals with the single-risk case. We characterize the optimal loading and compare it with the loading maximizing the expected profit. Section \ref{section:companyRisk} handles the multiple-risk case. We show how the dependency structure existing in the market (i.e.\ the general population) translates into the risk exposure of the company through its market shares on different risks and the likelihood that clients acquire insurance for more than one risk. Section \ref{seq:numerical} contains a numerical illustration. A numerical scheme to compute the ruin probabilities is given in the appendix.


\section{Preliminaries}
\label{section:min_ruin}


\subsection{Claim and Surplus Processes}

The Lundberg risk model describes the evolution of the capital of an insurance company and assumes that exposure is constant in time, losses follow a compound Poisson process, and premia arrive at a fixed continuous rate:

$$ X_t = u + ct-\sum_{i=0}^{N_t} Y_i = u + ct - S_t, \qquad Y_0 \coloneqq 0 ,$$

where $u$ is the initial surplus, $c$ is the risk premium rate, $N_t$ is a time-homogeneous Poisson process with intensity parameter $\lambda$, and $Y_i$ are i.i.d.\ random variables representing the severity of claim $i$, $i = 0,\dots, N_t$. Here it is assumed that the $Y_i$ are positive. In the following sections, $Y$ denotes an arbitrary random variable with the same distribution as any $Y_i$.
The severity distribution is denoted by $F(x)$ and the severity survival function by $\overline{F}(x)$. $S_t$ is a compound Poisson process and thus $X_t$ is a stochastic process (sometimes called the surplus process) representing the insurer's wealth at time $t$. $X_t$ increases because of earned premia and decreases when claims occur. When the capital of an insurance company hits zero, the insurance company is said to be ruined. Formally, the ruin probability is defined as follows.

\begin{definition}[Probability of Ruin]
\label{SimpleRuin}
Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\geq0}, \mathbbm{P})$ be a filtered probability space and $X = (X_t)_{t \in [0, \infty[}$ a surplus process which is adapted and Markov with respect to the filtration. The state space is $(\mathbbm{R}, \mathcal{B}(\mathbbm{R}))$. If $X$ is time homogeneous, the infinite-time ruin probability is the function $V: \mathbbm{R} \mapsto [0,1]$ such that

$$V(x) = \Prob \big( \exists s\in[0,+\infty[ : X_s \leq 0 \given[\big] X_0 = x \big), \quad x \in \mathbbm{R} .$$

\end{definition}

Sometimes it is useful to use the survival (non-ruin) probability, defined as $\overline{V}(x) = 1-V(x)$. The ruin probability can be calculated using the following integro-differential equation \cite{grandell2012aspects}.

\begin{proposition}
\label{prop:Ruin_lundberg_inf}
Assume that $X_t$ is defined as above and the premium rate satisfies $c > \lambda \E[Y]$. If $V \in C^1(]0, \infty[)$, then the probability of ruin with infinite time horizon satisfies the following equation:
\begin{align}
\label{eq:Ruin_lundberg_inf}
 0 = c \deriv{x}V(x) + \lambda \bigg( \int_{0}^{x} V(x-y)dF(y) - V(x) + 1-F(x) \bigg), \quad x>0,
\end{align}
with the following boundary conditions:
\[
\begin{cases}
 V(x) = 1 & x\leq 0,\\
 \lim_{x \to 0^+}V(x) = \frac{\lambda}{c}\E[Y] . \\
\end{cases}
\]
Furthermore, the probability of non-ruin satisfies the following equation:
\begin{equation}
\label{eq:Ruin_lundberg_inf_survival}
 \overline{V}(x)-\overline{V}(\epsilon) = \frac{\lambda}{c}\int_\epsilon^x\overline{V}(x-y)\overline{F}(y)dy
\end{equation}
for $0 < \epsilon \leq x < + \infty$, with the following boundary conditions:
\[
\begin{cases}
 \overline{V}(x) = 0 & x\leq 0,\\
 \lim_{x \to 0^+} \overline{V}(x) = 1- \frac{\lambda}{c}\E[Y] . \\
\end{cases}
\]
\end{proposition}
\medskip

A numerical scheme solving equation \eqref{eq:Ruin_lundberg_inf} can be found in Appendix \ref{appendix:NA_ez_Poi_Exp_inf}.


\subsection{Accounting for Claim Dependencies}

Consider the surplus process $\bm{X} = (X_t^{(1)},\dots,X_t^{(n)})$ where
\begin{equation}
\label{multiclaims}
 \begin{split}
 X_t^{(1)} &= u^{(1)}+ c^{(1)}t - \sum_{i = 0}^{N_t^{(1)}}Y_i^{(1)} \\
 \vdots &\\
 X_t^{(n)} &= u^{(n)}+ c^{(n)}t - \sum_{i = 0}^{N_t^{(n)}}Y_i^{(n)} \\
 \end{split}
\end{equation}

If these processes are independent, it is relatively easy to combine them into a single process using the aggregation property of compound Poisson processes as described in W\"{u}thrich \cite{wuthrich2019non}. The aggregation property allows the combination of multiple surplus processes into a single risk process as follows:
\begin{equation*}
 X_t = \sum_{j= 1}^n u^{(j)} +\sum_{j = 1}^n c^{(j)}t -\sum_{i = 0}^{N_t}Y_i,
\end{equation*}
where $N_t$ is a Poisson process with intensity $\lambda = \lambda_{1} + \dots + \lambda_{n}$ and $Y_i$ are i.i.d.\ random variables, which follow the severity distribution $F(x) = \sum_{j = 1}^n \frac{\lambda_j}{\lambda} F_j(x)$.
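The aggregation property lends itself to a quick simulation check. The sketch below is our own illustration, not part of the paper: it estimates a finite-horizon ruin probability for the aggregated process by Monte Carlo, sampling each claim from the mixture severity $F = \sum_j (\lambda_j/\lambda) F_j$. All function names and parameter values are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def ruin_prob_mc(u, c, lambdas, samplers, horizon=50.0, n_paths=3000):
    """Monte Carlo estimate of P(ruin before `horizon`) for
    X_t = u + c*t - (aggregated compound Poisson claims).

    The aggregated process jumps with intensity sum(lambdas); each claim is
    drawn from severity sampler j with probability lambdas[j] / sum(lambdas),
    i.e. from the mixture F = sum_j (lambda_j / lambda) F_j.
    """
    lam = float(sum(lambdas))
    probs = np.array(lambdas) / lam
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)      # next inter-arrival time
            if t > horizon:
                break                            # path survived the horizon
            j = rng.choice(len(lambdas), p=probs)
            claims += samplers[j]()
            if u + c * t - claims <= 0:          # surplus hit zero at a claim
                ruined += 1
                break
    return ruined / n_paths
```

For a single risk with exponential severities of mean $\mu$ and a large horizon, such an estimate can be checked against the classical closed form $V(u) = \frac{\lambda \mu}{c} e^{-(1/\mu - \lambda/c)u}$ for the infinite-time ruin probability.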
This aggregation property allows us to use the integro-differential equation \eqref{eq:Ruin_lundberg_inf} to calculate the ruin probability of multiple surplus processes.
\smallskip

If the risks are not independent, then we can use the fact that compound Poisson processes are characterized by their Lévy measure to decompose the claim process into independent processes to which the aggregation property can be applied.
In particular, for $n = 2$ risks, we obtain the decomposition:
\begin{equation*}
 \begin{split}
 X_t = X_t^{(1)} + X_t^{(2)} = u + ct - S_t^{1\perp} - S_t^{2\perp} -S_t^\parallel,
 \end{split}
\end{equation*}
where $S^{1\perp}$ and $S^{2\perp}$ are compound Poisson processes accounting for events concerning only risk 1 and only risk 2, respectively, and $S^{\parallel}$ is a compound Poisson process accounting for events concerning both risks simultaneously. Furthermore, $S^{1\perp}$, $S^{2\perp}$ and $S^{\parallel}$ are mutually independent. \smallskip

In this section, we briefly explain how this can be achieved. Further details can be found in \cite{tankov2016levy}.
We will use the following definitions:

\begin{definition}
The tail integral of a Lévy measure $\nu$ on $[0, \infty]^2$ is the function $U:[0, \infty]^2 \mapsto [0, \infty]$ given by
\begin{equation}
\label{eq:tail_int}
 \begin{split}
 & U(x_1,x_2) = 0 \quad \textrm{if} \quad x_1 = \infty \quad \textrm{or} \quad x_2 = \infty ,\\
 & U(x_1,x_2) = \nu\big([x_1, \infty[ \times [x_2, \infty[ \big) \quad \textrm{for} \quad (x_1,x_2) \in ]0, \infty[^2 , \\
 & U(0,0) = \infty.
 \end{split}
 \end{equation}
\end{definition}

\begin{definition}[Lévy Copula for Processes with Positive Jumps]
A two-dimensional Lévy copula for Lévy processes with positive jumps, or for short, a positive Lévy copula, is a 2-increasing grounded function $\mathcal{C}: [0,\infty]^2 \to [0, \infty]$ with uniform margins, that is, $\mathcal{C}(x,\infty) = \mathcal{C}(\infty,x) = x$.
\end{definition}

Similarly to Sklar's theorem for ordinary copulas \cite{nelsen2007introductionCopulas}, it has been shown that the dependency structure of $(X_t^{(1)},X_t^{(2)})$ can be characterized by a Lévy copula $\mathcal{C}$ such that $U(x_1,x_2) = \mathcal{C}(U_1(x_1), U_2(x_2))$, where $U_1$ and $U_2$ are the marginal tail integrals of $X_t^{(1)}$ and $X_t^{(2)}$. If $U_1$ and $U_2$ are absolutely continuous, this Lévy copula is unique; otherwise it is unique on $Range(U_1)\times Range(U_2)$, the product of the ranges of the one-dimensional tail integrals \cite[Theorem~5.4]{ContTankov2004Jumps}.
\smallskip

Consider a two-dimensional claim process:
\begin{equation}
 \label{eq:2DimClaimProcess}
 S_t = (S_t^{(1)}, S_t^{(2)}) = \sum_{i = 0}^{N_t} (Y_i^{(1)},Y_i^{(2)}),
\end{equation}
where $N_t$ is a Poisson process with intensity $\lambda$ and $Y_i = (Y_i^{(1)},Y_i^{(2)})$, $i \in \mathbbm{N}$, are independent random variables with common joint distribution $F_Y$.
The components of $S$, $S^{(1)}$ and $S^{(2)}$, are one-dimensional compound Poisson processes with intensities $\lambda_1$ and $\lambda_2$ and severity distributions $F_{Y^{(1)}}$ and $F_{Y^{(2)}}$, respectively.
We wish to obtain a decomposition:
\begin{equation}
\label{eq:bracketSum}
 (S_t^{(1)}, S_t^{(2)}) = \sum_{i = 0}^{N_t^{1\perp}} (Y_i^{(1\perp)}, 0) + \sum_{i = 0}^{N_t^{2\perp}} (0, Y_i^{(2\perp)}) + \sum_{i = 0}^{N_t^{\parallel}}(Y_i^{(1\parallel)}, Y_i^{(2\parallel)}),
\end{equation}
where $\sum_{i = 0}^{N_t^{1\perp}} Y_i^{(1\perp)}$, $\sum_{i = 0}^{N_t^{2\perp}} Y_i^{(2\perp)}$ and $\sum_{i = 0}^{N_t^{\parallel}}(Y_i^{(1\parallel)}, Y_i^{(2\parallel)})$ are independent compound Poisson processes with intensities $\lambda_1^\perp$, $\lambda_2^\perp$, $\lambda^\parallel$ and severity distributions $F_{Y^{1\perp}}$, $F_{Y^{2\perp}}$, $F_{Y^{\parallel}}$, respectively. In the above setting, we consider
\begin{equation}
\label{eq:AllDistZero}
 F_Y(0,0) = F_{Y^{\parallel}}(0,0) = 0, \quad F_{Y^{(1)}}(0) = F_{Y^{1\perp}}(0) = F_{Y^{(2)}}(0) = F_{Y^{2\perp}}(0) = 0 .
\end{equation}

A compound Poisson process, $S$, is a Lévy process with Lévy measure $\nu(dx) = \lambda dF(x)$, with tail integral
\begin{equation*}
 U(x_1,x_2) =
 \begin{cases}
 \lambda \Prob \big(Y^{(1)} \geq x_1, Y^{(2)} \geq x_2 \big) & \textrm{if } x_1 >0 \textrm{ or } x_2 >0 \\
 +\infty & \textrm{if } x_1 = x_2 = 0 .
 \end{cases}
\end{equation*}
The components $S^{(1)}$ and $S^{(2)}$ are independent if and only if $U(x_1, x_2) = 0$ for every $(x_1, x_2) \in ]0, + \infty[^2$, i.e., if and only if $\lim_{x_1 \to 0^{+}, x_2 \to 0^{+}} U(x_1, x_2) = 0$.
\smallskip

The Lévy measures of the processes $S^{(i)}$, $i = 1,2$, have tail integrals
\begin{equation*}
 \begin{split}
 U_1(x_1) &= \lambda_1 \Prob \big( Y^{(1)} \geq x_1 \big) = U(x_1,0) \\
 U_2(x_2) &= \lambda_2 \Prob \big( Y^{(2)} \geq x_2 \big) = U(0,x_2) .
 \end{split}
\end{equation*}
Taking equation \eqref{eq:AllDistZero} into account, one obtains
\begin{equation*}
 \lambda_i = \lim_{x_i \to 0^{+}} U_i(x_i), \quad i = 1,2 ,
\end{equation*}
\begin{equation*}
 \lambda =\lim_{x_1, x_2 \to 0^{+}} \big(U_1(x_1) + U_2(x_2) - U(x_1,x_2) \big)
 = \lambda_1 + \lambda_2 - \lambda^\parallel ,
\end{equation*}
\begin{equation*}
 \lambda^\parallel =\lim_{x_1, x_2 \to 0^{+}} U(x_1,x_2) ,
\end{equation*}
\begin{equation*}
 \lambda_i^\perp = \lambda_i - \lambda^\parallel, \quad i = 1,2 .
\end{equation*}
The severity distributions $F_{Y^{1\perp}}$, $F_{Y^{2\perp}}$, and $F_{Y^{\parallel}}$ can be recovered from the tail integrals:
\begin{equation*}
 \Prob \big(Y^{1\perp} \geq x_1 \big) = \frac{1}{\lambda_1^\perp} \lim_{x_2 \to 0^{+}} \Big( U_1(x_1) - U(x_1,x_2) \Big)
\end{equation*}
\begin{equation*}
 \Prob \big(Y^{2\perp} \geq x_2 \big) = \frac{1}{\lambda_2^\perp} \lim_{x_1 \to 0^{+}} \Big( U_2(x_2) - U(x_1,x_2) \Big)
\end{equation*}
\begin{equation*}
 \Prob \big(Y^{1\parallel} \geq x_1, Y^{2\parallel} \geq x_2 \big) = \frac{1}{\lambda^\parallel} U(x_1,x_2) .
\end{equation*}

If the dependency between $S^{(1)}$ and $S^{(2)}$ is characterized by a Lévy copula, $\mathcal{C}$, i.e.
$U(x_1, x_2) = \mathcal{C}(U_1(x_1), U_2(x_2))$ for $(x_1, x_2) \in [0, +\infty[^2$, then the relations above can be written using the Lévy copula and the one-dimensional tail integrals:
\begin{equation*}
\lambda^\parallel = \lim_{u_1 \to \lambda_1^{-}, u_2 \to \lambda_2^{-}} \mathcal{C}(u_1, u_2)
\end{equation*}
\begin{equation*}
 \Prob \big(Y^{1\perp} \geq x_1 \big) = \frac{1}{\lambda_1^\perp} \lim_{u_2 \to \lambda_2^{-}} \Big( U_1(x_1) - \mathcal{C}(U_1(x_1),u_2) \Big)
\end{equation*}
\begin{equation*}
 \Prob \big(Y^{2\perp} \geq x_2 \big) = \frac{1}{\lambda_2^\perp} \lim_{u_1 \to \lambda_1^{-}} \Big( U_2(x_2) - \mathcal{C}(u_1, U_2(x_2)) \Big)
\end{equation*}
\begin{equation*}
 \Prob \big(Y^{1\parallel} \geq x_1, Y^{2\parallel} \geq x_2 \big) = \frac{1}{\lambda^\parallel} \mathcal{C}(U_1(x_1), U_2(x_2)) .
\end{equation*}

Using the above methodology, the surplus process can be represented as
\begin{equation*}
 X_t = u + ct - \sum_{i = 0}^{N_t^{1\perp}} Y_i^{1\perp} - \sum_{i = 0}^{N_t^{2\perp}} Y_i^{2\perp} - \sum_{i = 0}^{N_t^{\parallel}} ( Y_i^{1\parallel} + Y_i ^{2 \parallel} ) =
 u +ct - \sum_{i = 1}^{N_t^{*}} Y_i^{*} ,
\end{equation*}
where $u = u_1 + u_2$, $c = c_1 + c_2$, $N^{*}$ is a Poisson process with intensity $\lambda = \lambda_1^\perp + \lambda_2^\perp + \lambda^\parallel$ and the $Y_i^{*}$ are i.i.d.\ random variables with distribution:
\begin{equation*}
 F^{*} = \frac{\lambda_1^\perp}{\lambda} F_{Y^{1\perp}} + \frac{\lambda_2^\perp}{\lambda} F_{Y^{2\perp}} + \frac{\lambda^\parallel}{\lambda} F_{Y^{1\parallel} + Y^{2\parallel}} ,
\end{equation*}
where
\begin{equation*}
 F_{Y^{1\parallel} + Y^{2\parallel}}(x) = \int_{x_1 + x_2 \leq x} dF_{Y^\parallel}(x_1,x_2).
\end{equation*}


\section{The Optimal Loading for a Single Risk}
\label{section:demand}

An insurer can control the volume of its business through the premium loading $\theta$. A reasonable assumption is that the higher the loading, the smaller the number of contracts in its portfolio, which means that the claim intensity (or business volume) will decrease. Therefore, both the claim intensity $\E^\theta[N_1]$ and the premium rate $c(\theta)$ will depend on $\theta$. It is reasonable to assume that $\E^\infty[N_1] = 0$, as abnormal premium rates will not attract customers \cite{hipp2004insurancecontrol}. To capture these concepts let $\E^\theta[N_1] = \lambda p(\theta)$. Here $\lambda$ is the average number of claims per unit of time for the whole market, and $p(\theta)$ is the probability that a potential claim is filed as an actual claim to the particular insurer under consideration. In other words, $p(\theta)$ reflects the demand or the market share sensitivity to the loading parameter $\theta$; it can be interpreted as the probability that a customer buys the insurance product. For example, we may assume that the demand for insurance contracts is described by a logit GLM as in Hardin and Tabari \cite{hardin2017renewal}. Thus, $p(\theta)$ is given by:
\begin{equation}
\label{eq:logit}
 p(\theta) = \frac{1}{1+e^{\beta_0 + \beta_1 \theta}},
\end{equation}
where $\beta_0$ and $\beta_1$ are determined from the GLM and $\theta$ is the loading parameter.
$\beta_1$ will be a positive number, so $p\to 0$ when $\theta \to \infty$ and $p\to 1$ when $\theta \to -\infty$. Assuming that the company has some fixed costs, independent of the risk exposure, denoted by $r > 0$, the expression for the net premium income becomes:
\begin{equation*}
 c(\theta) = (1 + \theta)\E^\theta[N_1]\E[Y] -r.
\end{equation*}

The following proposition characterizes the behaviour of the solution of equation \eqref{eq:Ruin_lundberg_inf} with respect to the loading $\theta$.

\begin{proposition}
\label{prop:alpha_proof}
If $V(x,\theta)$ satisfies equation \eqref{eq:Ruin_lundberg_inf}, then $V(x,\theta)$ is strictly increasing with respect to the parameter $\alpha= \frac{ \E^\theta [N_1]}{c(\theta)}$.
\end{proposition}

\begin{proof}
It is possible to integrate equation \eqref{eq:Ruin_lundberg_inf} on the interval $]0, x]$ to obtain:
\begin{equation}
\label{eq:omega_prove_1}
 \begin{split}
 V(x, \theta) = \frac{ \E^\theta [N_1]}{c(\theta)} \Bigg( \E[Y] + \int_0^x \Big( V(z, \theta) - \int_0^z V(z-y, \theta)dF(y) + F(z) -1 \Big)dz \Bigg) .
 \end{split}
\end{equation}

To prove the proposition, we will study equations of the general form:
\begin{equation}
\label{eq:alpha_prove_2}
 \begin{split}
 u(x) = \alpha \Bigg( g(x) + \int_0^x \Big( u(z) - \int_0^z u(z-y)dF(y) \Big)dz \Bigg) .
 \end{split}
\end{equation}

We introduce the operator $\Psi$, acting on measurable locally bounded functions $h:[0, +\infty[ \mapsto \mathbbm{R}$, as:
\begin{equation}
\label{eq:IntegralOperator}
(\Psi h)(x) = \int_0^x \Big( h(z) - \int_0^z h(z-y)dF(y) \Big)dz, \quad x \geq 0 .
\end{equation}
Notice that the transformation $h \mapsto \Psi h$ is linear and, for every $h$, $\Psi h : [0, + \infty[ \mapsto \mathbbm{R}$ is continuous, hence measurable and locally bounded.
Thus, powers of the operator $\Psi$ are defined in the usual way:
\begin{equation*}
\Psi^0h = h, \qquad \Psi^nh = \Psi (\Psi^{n-1}h), \quad n \in \mathbb N.
\end{equation*}

Let $\norm{h}_{[0,x]} = \sup_{z \in [0,x]} |h(z)|$. Then:
\begin{equation*}
 \begin{split}
 | (\Psi h)(x) | &\leq \int_0^x \Big( |h(z)| +\int_0^z |h(z-y)|dF(y) \Big)dz \leq 2x \norm{h}_{[0,x]}.
 \end{split}
\end{equation*}
If the inequality
\begin{equation}
\label{Eq bound Psi n}
 \norm{(\Psi^n h)}_{[0,x]} \leq \frac{2^n x^n}{n!} \norm{h}_{[0,x]}
\end{equation}
holds for some $n \in \mathbbm{N}$, then
\begin{equation*}
 \begin{split}
 | (\Psi^{n+1} h)(x) | &\leq \int_0^x \Big( | (\Psi^{n}h)(z)| +\int_0^z |(\Psi^{n}h)(z-y)|dF(y) \Big)dz \\
 &\leq \int_0^x 2 \frac{2^n z^n}{n!} \norm{h}_{[0,x]} dz = \frac{2^{n+1} x^{n+1}}{(n+1)!} \norm{h}_{[0,x]}.
 \end{split}
\end{equation*}
Thus, by induction, \eqref{Eq bound Psi n} holds for every $n \in \mathbb N$.
Therefore, for every fixed $x \in [0, \infty[$, there is some $n \in \mathbb N$ such that $\Psi^n$ is a contraction in the space of measurable and bounded functions $h:[0,x] \mapsto \mathbbm{R}$. It follows from the contraction principle that equation \eqref{eq:omega_prove_1} has one unique solution. Further, $\lim_{n \to \infty} (\alpha^n \Psi^n) h = 0$, uniformly in $[0,x]$, for any given $h$ and any fixed $x \in [0, +\infty[$.

Let $u_{\alpha, g}$ be the solution of equation \eqref{eq:alpha_prove_2} for given $g$ and $\alpha$.
Then,
\begin{equation*}
 \begin{split}
 u_{\alpha,g} &= \alpha (g + \Psi u_{\alpha,g}) = \alpha g + \alpha \Psi(\alpha(g + \Psi u_{\alpha,g}))
 = \alpha g + \alpha^2 \Psi g + \alpha^2 \Psi^2 u_{\alpha,g} \\
 &= \alpha g + \alpha^2 \Psi g + \dots +\alpha^{n+1} \Psi^n g +\alpha^{n+1} \Psi^{n +1} u_{\alpha,g} .
 \end{split}
\end{equation*}
Since $\lim_{n \to \infty} \alpha^n \Psi^n u_{\alpha,g}(x) = 0$, this shows that $u_{\alpha,g}$ admits the series representation:
\begin{equation*}
 \begin{split}
 u_{\alpha,g} &= \sum_{n=0}^\infty\alpha^{n+1} \Psi^n g ,
 \end{split}
\end{equation*}
which converges uniformly with respect to $\alpha$ on compact intervals.
Thus, we can differentiate term by term and obtain
\begin{equation*}
 \begin{split}
 \frac{d}{d\alpha} u_{\alpha,g}(x) &= \sum_{n=0}^\infty (n+1)\alpha^n(\Psi^n g)(x)\\
 &= \sum_{n = 0}^\infty \alpha^n (\Psi^n g)(x) + \sum_{n = 1}^\infty n \alpha^n (\Psi^n g)(x) \\
 &= \frac{1}{\alpha}u_{\alpha,g} + \sum_{n=1}^\infty \alpha^n (\Psi^n g)(x) + \sum_{n=2}^\infty (n-1) \alpha^n (\Psi^n g)(x)\\
 &= \frac{1}{\alpha}u_{\alpha,g} + \sum_{n=0}^\infty \alpha^{n+1} (\Psi^{n+1} g)(x) + \sum_{n=1}^\infty n \alpha^{n+1} (\Psi^{n+1} g)(x) \\
 &= \frac{1}{\alpha} u_{\alpha,g} + (\Psi u_{\alpha,g})(x) + \sum_{n=1}^\infty \alpha^{n+1} (\Psi^{n+1} g)(x) + \sum_{n=2}^\infty (n-1) \alpha^{n+1} (\Psi^{n+1} g)(x) \\
 &= \frac{1}{\alpha} u_{\alpha,g} + (\Psi u_{\alpha,g})(x) + \sum_{n=0}^\infty \alpha^{n+2} (\Psi^{n+2} g)(x) + \sum_{n=1}^\infty n \alpha^{n+2} (\Psi^{n+2} g)(x) \\
 &= \frac{1}{\alpha} u_{\alpha,g} + (\Psi u_{\alpha,g})(x) + ( \alpha \Psi^2 u_{\alpha,g})(x) + \dots + ( \alpha^{k-1} \Psi^k u_{\alpha,g})(x) + \sum_{n = 1}^\infty n \alpha^{n+k} (\Psi^{n+k} g)(x) \\
 &= \sum_{n=0}^\infty \alpha^{n-1} (\Psi^n u_{\alpha,g})(x) = \frac{1}{\alpha^2} u_{\alpha,u_{\alpha,g}}.
 \end{split}
\end{equation*}

For any locally absolutely continuous function $h:[0,x] \mapsto \mathbbm{R}$:
\begin{equation*}
 \begin{split}
 (\Psi h)(x) &= \int_0^x \Big( h(z) - \int_0^z h(z-y)dF(y) \Big)dz \\
 &= \int_0^x \Big(h(z) - [h(z-y)F(y)]_{y = 0}^{y = z} - \int_0^z h'(z-y)F(y)dy \Big)dz \\
 &= \int_0^x (h(z) - h(0)F(z))dz - \int_0^x \int_y^x h'(z-y)F(y)dzdy \\
 &= \int_0^x (h(z) - h(0)F(z))dz - \int_0^x \big(h(x-y) - h(0) \big)F(y)dy \\
 &= \int_0^x h(z)dz - \int_0^x h(x-y)F(y)dy = \int_0^x h(z) \big(1-F(x-z)\big)dz .\\
 \end{split}
\end{equation*}
Thus, $h>0$ implies $(\Psi h)>0$, which implies $(\Psi^n h)>0$ for all $n \in \mathbbm{N}$, and therefore $u_{\alpha,h}>0$ for any $\alpha >0$. This argument shows that $\frac{d}{d\alpha} V =\frac{1}{\alpha^2} u_{\alpha,V} >0$, as $V >0$. Therefore $V$ is strictly increasing with $\alpha$.
\end{proof}

According to Proposition \ref{prop:alpha_proof}, in order to find $\theta$ minimizing the probability of ruin, it is sufficient to find $\theta$ minimizing $\frac{\mathbb E^\theta[N_1]}{c(\theta)}$.
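To make this concrete, the following sketch (our own illustration, with made-up values for $\beta_0$, $\beta_1$, $\lambda$, $\E[Y]$ and $r$) minimizes $\alpha(\theta) = \E^\theta[N_1]/c(\theta)$ over a grid under the logit demand model and contrasts the result with the profit-maximizing loading.

```python
import math

# Illustrative parameters (not from the paper): logit demand coefficients,
# market claim intensity, mean severity, and fixed costs.
beta0, beta1 = -2.0, 8.0
lam, ey, r = 100.0, 1.0, 5.0

def p(theta):
    """Logit demand: probability that a customer buys at loading theta."""
    return 1.0 / (1.0 + math.exp(beta0 + beta1 * theta))

def c(theta):
    """Net premium income rate c(theta) = (1 + theta) E^theta[N_1] E[Y] - r."""
    return (1.0 + theta) * lam * p(theta) * ey - r

def alpha(theta):
    """alpha = E^theta[N_1] / c(theta); the ruin probability increases with alpha.
    If c(theta) <= 0 the premium does not even cover fixed costs: ruin is certain."""
    ct = c(theta)
    return lam * p(theta) / ct if ct > 0 else math.inf

def neg_profit(theta):
    return -(theta * lam * p(theta) * ey - r)

def argmin_grid(f, lo, hi, n=20001):
    """Crude one-dimensional grid search; adequate for this illustration."""
    ts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(ts, key=f)

theta_ruin = argmin_grid(alpha, 0.01, 1.0)
theta_profit = argmin_grid(neg_profit, 0.01, 1.0)
```

For these illustrative parameters the grid search reproduces the closed-form ruin-minimizing loading and yields a profit-maximizing loading of $0.25$, strictly below the ruin-minimizing one, showing that the two criteria disagree.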
For example, using the logit demand model \eqref{eq:logit}, the optimal loading is found by direct differentiation of $\alpha$ and is given by:
\begin{equation}
\label{eq:alpha_direct}
 \begin{split}
\theta_{ruin} = \frac{1}{\beta_1} \Big( \ln \big( \frac{\lambda \E[Y]}{r \beta_1} \big) - \beta_0 \Big).
 \end{split}
\end{equation}
However, the loading that maximizes the expected profit is:
\begin{equation*}
 \begin{split}
 \theta_{profit} = \argmax_\theta
 \E^\theta[X_1 \given[] X_0 = x] = \argmax_\theta \{ \theta\E^\theta[N_1]\E[Y] -r\},
 \end{split}
\end{equation*}
which is, in the case of the logit demand \eqref{eq:logit}, the unique solution of:
\begin{equation}
\label{eq:profit_equation}
 \begin{split}
 1 + e^{\beta_0 + \beta_1 \theta} - \beta_1 \theta e^{\beta_0 + \beta_1 \theta} = 0.
 \end{split}
\end{equation}
Thus, in general, $\theta_{ruin}$ does not coincide with $\theta_{profit}$.


\section{The Multiple Risk Case}
\label{section:companyRisk}

In this section, we explore how dependencies between risks available in an insurance market translate into risk exposure for a company through its market shares on the different risks. It turns out that this mechanism is non-trivial when the risks are dependent. For the sake of simplicity, we assume that the company offers insurance for two risks in a market constituted by identical individuals, all of them exposed to both risks. Using the notation in equations \eqref{eq:2DimClaimProcess} and \eqref{eq:bracketSum} to denote the market claim process, $S_t = (S_t^{(1)}, S_t^{(2)})$ is the vector of the total (accumulated) amount of claims of each risk that occurred in the market, up to time $t$.
The marginal distributions of $S^{(1)}$ and $S^{(2)}$ are characterized by the claim intensities $\lambda_1$ and $\lambda_2$ and the severity distributions $F_{Y^{(1)}}$, $F_{Y^{(2)}}$, and their dependency structure is characterized by a parameter $\lambda^\parallel \in [0, \min(\lambda_1, \lambda_2)]$ and a joint distribution $F_{(Y^{1\parallel},Y^{2\parallel})}$, as explained in Section \ref{section:min_ruin}.


\subsection{Risk Exposure as a Function of Market Shares}
\label{section:sub4.1}

To extend the demand model outlined in Section \ref{section:demand} to a market with multiple risks, where the acquisition of insurance for different risks may not be independent, we propose the following interpretation for the function $p$.
\smallskip

Let $(\theta_1, \theta_2)$ be the loadings charged by the company for each risk. We assume that every individual in the market (a potential client) is provided with a vector of bid prices $(b_1, b_2)$. The client acquires the insurance for risk $i$ if $b_i \geq \theta_i$ (for convenience, we consider prices net of the pure premium). The distribution of the price vectors in the market is modelled by a random vector $B = (B_1, B_2)$. Thus, $p_i(\theta) = p_i(\theta_i) = \Prob \big( B_i \geq \theta_i \big)$ is the company's market share for the insurance of risk $i$ at equilibrium, given the loadings $\theta = ( \theta_1, \theta_2)$. Let $p^{(1,0)}(\theta)$ be the proportion of individuals in the market holding a policy for risk 1 and no policy for risk 2. Similarly, $p^{(0,1)}(\theta)$ and $p^{(1,1)}(\theta)$ denote the proportions of individuals holding a policy only for risk 2 and for both risks, respectively.
If the acquisition of policies for different risks is independent, then:\n\\begin{equation}\n\\label{eq:AcquisitionProbabilities}\n p^{(1,1)}(\\theta) = p_1(\\theta_1)p_2(\\theta_2), \\quad p^{(1,0)}(\\theta) = p_1(\\theta_1)(1-p_2(\\theta_2)), \\quad p^{(0,1)}(\\theta) = p_2(\\theta_2)(1-p_1(\\theta_1)).\n\\end{equation}\n\nDependency between the acquisition of different risks can be introduced by considering dependent bid prices $B = (B_1, B_2)$. In particular, if the joint distribution of $B$ is characterized by an ordinary copula $C: [0,1]^2 \\mapsto [0,1]$, then, according to Sklar's theorem, $F_B(\\theta_1, \\theta_2) = C(F_{B_1}(\\theta_1), F_{B_2}(\\theta_2))$ \\cite{nelsen2007introductionCopulas}. This gives:\n\\begin{equation}\n\\label{eq:EQ8}\n \\begin{split}\n p^{(1,0)} &= F_{B_2}(\\theta_2^{-}) - C(F_{B_1}(\\theta_1^{-}), F_{B_2}(\\theta_2^{-})), \\\\\n p^{(0,1)} &= F_{B_1}(\\theta_1^{-}) - C(F_{B_1}(\\theta_1^{-}), F_{B_2}(\\theta_2^{-})), \\\\\n p^{(1,1)} &= 1 - F_{B_1}(\\theta_1^{-}) - F_{B_2}(\\theta_2^{-}) + C(F_{B_1}(\\theta_1^{-}), F_{B_2}(\\theta_2^{-})). \\\\\n \\end{split}\n\\end{equation}\n\nUnder this model, the company's surplus process is:\n\\begin{equation}\n\\label{eq:agg_company_claims}\n \\Tilde{X}_t = u^{(1)} + u^{(2)} + \\big( c^{(1)}(\\theta_1) + c^{(2)}(\\theta_2) \\big)t - \\sum_{i = 0}^{\\Tilde{N}_t^{1\\perp}} \\Tilde{Y}_i^{1\\perp} - \\sum_{i = 0}^{\\Tilde{N}_t^{2\\perp}} \\Tilde{Y}_i^{2\\perp} - \\sum_{i = 0}^{\\Tilde{N}_t^{\\parallel}} \\Big(Y_i^{1\\parallel} + Y_i^{2\\parallel} \\Big),\n\\end{equation}\nwhere $\\Tilde{N}_t^{1\\perp}$, $\\Tilde{N}_t^{2\\perp}$, and $\\Tilde{N}_t^{\\parallel}$ count the number of claims received by the company concerning only risk 1, only risk 2, and both risks, respectively. 
Their intensities are, respectively,\n\\begin{equation*}\n \\begin{split}\n \\Tilde{\\lambda}_1^\\perp &= p^{(1,0)}(\\theta) \\big( \\lambda_1^\\perp + \\lambda^\\parallel \\big) + p^{(1,1)}(\\theta) \\lambda_1^\\perp = p_1(\\theta_1) \\lambda_1^\\perp + p^{(1,0)}(\\theta) \\lambda^\\parallel, \\\\\n \\Tilde{\\lambda}_2^\\perp &= p_2(\\theta_2) \\lambda_2^\\perp + p^{(0,1)}(\\theta) \\lambda^\\parallel, \\\\\n \\Tilde{\\lambda}^\\parallel &= p^{(1,1)}(\\theta) \\lambda^\\parallel.\n \\end{split}\n\\end{equation*}\n\nThe distribution of the single-risk claim amounts $\\Tilde{Y}^{1\\perp}$ (resp., $\\Tilde{Y}^{2\\perp}$) is a mixture of the distributions of $Y^{1\\perp}$ and $Y^{1\\parallel}$ (resp., of $Y^{2\\perp}$ and $Y^{2\\parallel}$):\n\\begin{equation*}\n \\begin{split}\n &F_{\\Tilde{Y}^{1\\perp}} = \\frac{p_1\\lambda_1^\\perp }{p_1 \\lambda_1^\\perp + p^{(1,0)} \\lambda^\\parallel } F_{Y^{1\\perp}} + \\frac{p^{(1,0)} \\lambda^\\parallel }{p_1 \\lambda_1^\\perp + p^{(1,0)} \\lambda^\\parallel } F_{Y^{1\\parallel}} , \\\\\n &F_{\\Tilde{Y}^{2\\perp}} = \\frac{p_2 \\lambda_2^\\perp }{p_2 \\lambda_2^\\perp + p^{(0,1)} \\lambda^\\parallel } F_{Y^{2\\perp}} + \\frac{p^{(0,1)} \\lambda^\\parallel }{p_2 \\lambda_2^\\perp + p^{(0,1)} \\lambda^\\parallel } F_{Y^{2\\parallel}} .\n \\end{split}\n\\end{equation*}\nThis is because some customers insure risk 1 but not risk 2, and vice versa. 
Therefore, the aggregate process for the insurer is\n\\begin{equation}\n\\label{eq:indp_company_claim}\n \\Tilde{X}_t = u^{(1)} + u^{(2)} + \\big( c^{(1)}(\\theta_1) + c^{(2)}(\\theta_2) \\big)t - \\sum_{i = 0}^{\\Tilde{N}_t} \\Tilde{Y}_i,\n\\end{equation}\nwhere $\\Tilde{N}_t$ is a Poisson process with intensity\n\\begin{equation}\n\\label{eq:lambda_tilde}\n \\Tilde{\\lambda} = p_1 \\lambda_1^\\perp + p_2 \\lambda_2^\\perp + \\big( p^{(1,0)} + p^{(0,1)}+ p^{(1,1)} \\big)\\lambda^\\parallel = p_1 \\lambda_1 + p_2 \\lambda_2 - p^{(1,1)}\\lambda^\\parallel ,\n\\end{equation}\nand $\\Tilde{Y}_i$, $i \\in \\mathbb N$, are i.i.d.\\ random variables with distribution\n\\begin{equation}\n\\label{eq:DepDist}\n\\begin{split}\n F_{\\Tilde{Y}} &= \\frac{p_1 \\lambda_1^{\\perp} }{\\Tilde{\\lambda}} F_{Y^{1 \\perp}} + \\frac{p_2 \\lambda_2^{\\perp} }{\\Tilde{\\lambda}} F_{Y^{2 \\perp}} \n + \\frac{p^{(1,0)} \\lambda^{\\parallel} }{\\Tilde{\\lambda}} F_{Y^{1 \\parallel}}\n + \\frac{p^{(0,1)} \\lambda^{\\parallel} }{\\Tilde{\\lambda}} F_{Y^{2 \\parallel}}\n + \\frac{p^{(1,1)} \\lambda^{\\parallel} }{\\Tilde{\\lambda}} F_{Y^{1 \\parallel}+ Y^{2 \\parallel}} \\\\\n &=\\frac{1}{p_1 \\lambda_1 + p_2 \\lambda_2 - p^{(1,1)} \\lambda^\\parallel} \\bigg( p_1 \\lambda_1 F_{Y^{1}} + p_2 \\lambda_2 F_{Y^{2}} + p^{(1,1)}\\lambda^\\parallel \\Big( F_{Y^{1\\parallel} + Y^{2\\parallel}} - F_{Y^{1\\parallel}} - F_{Y^{2\\parallel}} \\Big) \\bigg). \n \\end{split}\n\\end{equation}\n\nThus, if the risks in the market are independent (i.e. if $\\lambda^\\parallel = 0$), then the risk in the company's portfolio is just a sum of the risks $S^{(1)}$ and $S^{(2)}$, weighted by the respective market shares, $p_1$ and $p_2$, irrespective of any dependency between sales of policies for different risks. However, if the risks in the market are dependent ($\\lambda^\\parallel \\neq 0$), then the company's risk is not, in general, a weighted sum of $S^{(1)}$ and $S^{(2)}$. 
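The identity in \\eqref{eq:lambda_tilde} and the fact that the mixture weights in \\eqref{eq:DepDist} sum to one are easy to verify numerically. The following sketch (arbitrary illustrative parameter values, not those of the numerical section; acquisition probabilities taken from \\eqref{eq:AcquisitionProbabilities}) does so:

```python
# Illustrative parameter values (hypothetical, for checking the identities only).
lam1_perp, lam2_perp, lam_par = 500.0, 650.0, 150.0    # claim intensities
lam1, lam2 = lam1_perp + lam_par, lam2_perp + lam_par  # marginal intensities
p1, p2 = 0.30, 0.25                                    # market shares

# Independent acquisition, as in eq. (AcquisitionProbabilities).
p11 = p1 * p2
p10 = p1 * (1.0 - p2)
p01 = p2 * (1.0 - p1)

# The two expressions for the company's claim intensity, eq. (lambda_tilde).
lam_tilde_a = p1 * lam1_perp + p2 * lam2_perp + (p10 + p01 + p11) * lam_par
lam_tilde_b = p1 * lam1 + p2 * lam2 - p11 * lam_par
assert abs(lam_tilde_a - lam_tilde_b) < 1e-9

# The mixture weights of F_Y_tilde in eq. (DepDist) must sum to one.
weights = [p1 * lam1_perp, p2 * lam2_perp, p10 * lam_par, p01 * lam_par, p11 * lam_par]
assert abs(sum(weights) / lam_tilde_a - 1.0) < 1e-12
```

Note that dependency in acquisition would only change the values of $p^{(1,1)}$, $p^{(1,0)}$ and $p^{(0,1)}$; both identities hold as long as $p^{(1,0)} + p^{(1,1)} = p_1$ and $p^{(0,1)} + p^{(1,1)} = p_2$.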
Further, this effect persists even in the case where sales of different policies are independent (i.e., $p^{(1,1)} = p_1 p_2$). On the other hand, equalities \\eqref{eq:lambda_tilde} and \\eqref{eq:DepDist} show that in the (unlikely) situation where clients always buy insurance for only one risk, the risk exposure of the insurer is accurately computed using only the marginal distributions of each risk (i.e. assuming that the risks are independent). This is due to the static nature of our model. For example, it does not take into account the possibility of external factors changing the frequency of claim events in both risks simultaneously.\n\n\n\\subsection{The Impact of Dependencies on Ruin Probability}\n\nFrom the discussion above and Proposition \\ref{prop:alpha_proof}, it follows that the ruin probability of a company with market shares $(p_1, p_2, p^{(1,1)})$ solves the equation \n\\begin{align}\n\\label{eq:EQ3}\n\\frac{dV(x)}{dx} = &\n\\frac{\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\Big( V(x) - \\int_0^x V(x-y)dF_{\\Tilde{Y}}(y) + F_{\\Tilde{Y}}(x) -1 \\Big),\n\\\\ \\label{eq:EQ4}\n V(0^{+}) =&\n \\frac{\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\E[\\Tilde{Y}] ,\n\\end{align}\nwith $\\Tilde{\\lambda}$ and $F_{\\Tilde{Y}}$ given by equations \\eqref{eq:lambda_tilde} and \\eqref{eq:DepDist}.\n\\smallskip\n\nSince estimating the dependency structure may pose substantial difficulties, we may wish to have an a priori bound for the error introduced by neglecting dependencies, that is, by substituting the probability $V_{ind}(x)$ for $V(x)$, where $V_{ind}(x)$ solves the equation:\n\\begin{equation}\n \\label{eq:EQ5}\n \\frac{dV(x)}{dx} = \\frac{\\hat{\\lambda}}{c^{(1)} + c^{(2)}} \\Big( V(x) - \\int_0^x V(x-y)dF_{\\hat{Y}}(y) + F_{\\hat{Y}}(x) -1 \\Big) ,\n\\end{equation}\nwhere $\\hat{\\lambda} = \\lambda_1p_1 + \\lambda_2p_2$ and $F_{\\hat{Y}}(x) = \\frac{\\lambda_1 p_1 F_{Y^{(1)}} + \\lambda_2 p_2 F_{Y^{(2)}}}{\\hat{\\lambda}}$. 
Notice that $\\hat{\\lambda} \\E[\\hat{Y}] = \\Tilde{\\lambda} \\E[\\Tilde{Y}]$ and therefore the boundary condition for \\eqref{eq:EQ5} is again \\eqref{eq:EQ4}.\n\\smallskip\n\nThe discussion in Subsection \\ref{section:sub4.1} shows that the difference $V(x) - V_{ind}(x)$ is expected to be small when $p^{(1,1)}$ is small compared to $p_1 + p_2$. The following proposition gives a precise meaning to this statement.\n\n\\begin{proposition}\n\\label{prop:P1}\nWith the notation above:\n\\begin{equation*}\n |V(x) - V_{ind}(x)| \\leq p^{(1,1)} \\lambda^\\parallel \\frac{e^{\\frac{2\\Tilde{\\lambda}x}{c^{(1)} + c^{(2)}}} - 1}{\\Tilde{\\lambda}}\n\\end{equation*}\nfor every amount of initial reserve $x \\geq 0$.\n\\end{proposition}\n\n\\begin{proof}\nFrom equalities \\eqref{eq:EQ3}, \\eqref{eq:EQ4} and \\eqref{eq:EQ5}, straightforward computations yield:\n\\begin{equation}\n\\label{eq:EQ6}\n \\begin{split}\n V(x) -V_{ind}(x) &= \\frac{p^{(1,1)} \\lambda^\\parallel}{c^{(1)} + c^{(2)}} \\Bigg( \\int_0^x \\Big( V_{ind}(z) - \\int_0^z V_{ind}(z-y)dF_{Y^{1\\parallel} + Y^{2\\parallel}}(y) + F_{Y^{1\\parallel} + Y^{2\\parallel}}(z) -1 \\Big) dz \\, - \\\\\n & \\quad \\int_0^x \\Big( V_{ind}(z) - \\int_0^z V_{ind}(z-y)dF_{Y^{1\\parallel}}(y) + F_{Y^{1\\parallel} }(z) -1 \\Big) dz \\, - \\\\\n & \\quad \\int_0^x \\Big( V_{ind}(z) - \\int_0^z V_{ind}(z-y)dF_{Y^{2\\parallel}}(y) + F_{Y^{2\\parallel}}(z) -1 \\Big) dz \\Bigg) \\\\\n & \\quad + \\frac{\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\int_0^x \\Big( (V - V_{ind})(z) - \\int_0^z (V - V_{ind})(z-y)dF_{\\Tilde{Y}}(y) \\Big) dz .\n \\end{split}\n\\end{equation}\n\nIt can be checked that for every distribution function $G:[0, +\\infty[ \\mapsto [0,1]$,\n\\begin{equation*}\n -x \\leq \\int_0^x \\Big( V_{ind}(z) -\\int_0^z V_{ind}(z-y)dG(y) + G(z) -1 \\Big) dz \\leq 0 .\n\\end{equation*}\n\nTherefore, \\eqref{eq:EQ6} implies:\n\\begin{equation*}\n \\max_{y \\in [0,x]} |V(y) - V_{ind}(y)| \\leq \\frac{2 p^{(1,1)} \\lambda^\\parallel}{c^{(1)} + c^{(2)}} x + \\frac{2 \\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\int_0^x \\max_{y \\in [0,z]} |V(y) - V_{ind}(y)| dz .\n\\end{equation*}\nThus, the result follows by Gr\u00f6nwall's inequality \\cite{dragomir2003some}: setting $u(x) = \\max_{y \\in [0,x]} |V(y) - V_{ind}(y)|$, the bound $u(x) \\leq ax + b\\int_0^x u(z)dz$ with $a = \\frac{2 p^{(1,1)} \\lambda^\\parallel}{c^{(1)} + c^{(2)}}$ and $b = \\frac{2\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}}$ yields $u(x) \\leq \\frac{a}{b} \\big( e^{bx} - 1 \\big)$, which is the stated inequality.\n\\end{proof}\n\n\n\\subsection{The Impact of Dependencies on Small Companies}\n\nNow, we proceed with the argument above to explore how dependencies affect companies of different sizes. We measure the size of the company by its expected total value of claims, $\\Tilde{\\lambda} \\E[\\Tilde{Y}]$, and, to make comparisons meaningful, we consider that the total revenue is proportional to the company's size, i.e.\n\\begin{equation*}\n c^{(1)} + c^{(2)} = (1 + \\theta) \\Tilde{\\lambda} \\E [\\Tilde{Y}], \\quad \\textrm{with } \\theta>0 \\textrm{ constant.}\n\\end{equation*}\nSimilarly, we consider the initial reserve to be proportional to size, i.e.:\n\\begin{equation*}\n x = x_0 \\Tilde{\\lambda} \\E [\\Tilde{Y}], \\quad \\textrm{with } x_0>0 \\textrm{ constant.}\n \\end{equation*}\nNotice that, due to equations \\eqref{eq:EQ3}, \\eqref{eq:EQ4} and \\eqref{eq:EQ5}, the effect of dependencies must be bounded in the sense that\n\\begin{equation*}\n |V(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}]) - V_{ind}(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}])| \\leq K_1 x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}] \\leq K_2(p_1 + p_2) ,\n\\end{equation*}\nfor some constants $K_1, K_2 < +\\infty$. 
However, we can use Proposition \\ref{prop:P1} to obtain a better estimate:\n\\begin{equation}\n \\label{eq:EQ7}\n |V(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}]) - V_{ind}(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}])| \\leq p^{(1,1)} \\lambda^\\parallel \\frac{e^{\\frac{2\\Tilde{\\lambda}x_0}{1 + \\theta}} - 1}{\\Tilde{\\lambda}}.\n\\end{equation}\n\nNotice that the right-hand side of \\eqref{eq:EQ7} has the same asymptotic behaviour as \n\\begin{equation*}\n p^{(1,1)} \\lambda^\\parallel \\frac{2 x_0}{1 + \\theta}, \\quad \\textrm{when } p_1 + p_2 \\to 0.\n\\end{equation*}\nFurther, if the sales of policies for different risks to the same individual are independent, then $p^{(1,1)} = p_1 p_2$ goes to zero faster than $\\Tilde{\\lambda} \\E[\\Tilde{Y}] = p_1 \\lambda_1 \\E[Y^{(1)}] + p_2 \\lambda_2 \\E[Y^{(2)}]$, when $p_1 + p_2 \\to 0$. Thus, a small company selling policies for different risks independently is relatively immune to the effects of dependencies between the risks, contrary to a large company (it is obvious that a monopolistic company is fully exposed to the dependencies between risks). \nThis immunity to risk dependencies may persist even when sales of policies for different risks are not independent, provided the dependency in sales is sufficiently mild. For example, $\\lim_{p_1 + p_2 \\to 0} \\frac{p^{(1,1)}}{p_1 + p_2} = 0$ if the dependency between sales is modelled by a Clayton or a Frank copula in \\eqref{eq:EQ8}. However, small companies are not especially protected from risk dependencies if the dependency between sales is modelled by a Pareto or a Gumbel copula.\n\n\\subsection{Optimal Loadings and Market Shares}\n\nSince the right-hand sides of equalities \\eqref{eq:EQ3} and \\eqref{eq:EQ4} depend on the loadings through both $\\frac{\\E[\\Tilde{N}_1]}{c^{(1)} + c^{(2)}}$ and $F_{\\Tilde{Y}}$, Proposition \\ref{prop:alpha_proof} cannot be generalized to models with multiple risks. 
However, it is possible to provide optimality conditions for the loadings $\\theta = (\\theta_1, \\theta_2)$ minimizing the ruin probability.\n\\smallskip\n\nTo do this, we extend the notation introduced in the proof of Proposition \\ref{prop:alpha_proof}. For any distribution function $G:[0, +\\infty[ \\mapsto [0,1]$, we consider the compounding operator of type \\eqref{eq:IntegralOperator}\n\\begin{equation*}\n (\\Psi_G h)(x) = \\int_0^x \\Big( h(z) - \\int_0^z h(z-y) dG(y)\\Big)dz, \\quad x \\geq 0.\n\\end{equation*}\nThus, the 2-risk version of equation \\eqref{eq:omega_prove_1} can be written as \n\\begin{equation}\n\\label{eq:EQ10}\n V_\\theta(x) = \\frac{\\Tilde{\\lambda}_\\theta}{c(\\theta)} \\bigg( \\int_x^\\infty 1- F_\\theta(z)dz + \\Big(\\Psi_{F_\\theta} V_\\theta \\Big)(x) \\bigg) ,\n\\end{equation}\nwhere\n\\begin{equation*}\n F_\\theta = \\frac{\\lambda_1^\\perp}{\\Tilde{\\lambda}_\\theta} p_1(\\theta) F_{Y^{1\\perp}} + \\frac{\\lambda_2^\\perp}{\\Tilde{\\lambda}_\\theta} p_2(\\theta) F_{Y^{2\\perp}} + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,0)}(\\theta) F_{Y^{1\\parallel}} + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(0,1)}(\\theta) F_{Y^{2\\parallel}} + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,1)}(\\theta) F_{Y^{1\\parallel} + Y^{2\\parallel}} .\n\\end{equation*}\nSince $\\frac{\\lambda_1^\\perp}{\\Tilde{\\lambda}_\\theta} p_1(\\theta) + \\frac{\\lambda_2^\\perp}{\\Tilde{\\lambda}_\\theta}p_2(\\theta) + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,0)}(\\theta) + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(0,1)}(\\theta) + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,1)}(\\theta) = 1$, \\eqref{eq:EQ10} becomes\n\\begin{equation*}\n \\begin{split}\n V_{\\theta}(x) &= \\frac{\\lambda_1^\\perp p_1(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty 1- F_{Y^{1\\perp}}(z)dz + (\\Psi_{F_{Y^{1\\perp}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + 
\\frac{\\lambda_2^\\perp p_2(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty 1- F_{Y^{2\\perp}}(z)dz + (\\Psi_{F_{Y^{2\\perp}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + \\frac{\\lambda^\\parallel p^{(1,0)}(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty 1- F_{Y^{1\\parallel}}(z)dz + (\\Psi_{F_{Y^{1\\parallel}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + \\frac{\\lambda^\\parallel p^{(0,1)}(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty 1- F_{Y^{2\\parallel}}(z)dz + (\\Psi_{F_{Y^{2\\parallel}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + \\frac{\\lambda^\\parallel p^{(1,1)}(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty 1- F_{Y^{1\\parallel} +Y^{2\\parallel}}(z)dz + (\\Psi_{F_{Y^{1\\parallel} + Y^{2\\parallel}}}V_\\theta)(x) \\bigg) .\n \\end{split}\n\\end{equation*}\n\nWe write this in abbreviated form:\n\\begin{equation*}\n V_{\\theta}(x) = <\\alpha(\\theta), \\Gamma(x)> + (<\\alpha(\\theta), \\Psi>V_\\theta)(x) ,\n\\end{equation*}\nwhere $\\alpha(\\theta)$ is the vector\n\\begin{equation*}\n \\alpha(\\theta) = \\frac{1}{c(\\theta)} \\Big(\\lambda_1^\\perp p_1(\\theta), \\lambda_2^\\perp p_2(\\theta), \\lambda^\\parallel p^{(1,0)}(\\theta), \\lambda^\\parallel p^{(0,1)}(\\theta), \\lambda^\\parallel p^{(1,1)}(\\theta) \\Big),\n\\end{equation*}\n$\\Gamma(x)$ is the vector function\n\\begin{equation*}\n \\Gamma(x) = \\Big(\\int_x^\\infty 1- F_{Y^{1\\perp}}(z)dz, \\int_x^\\infty 1- F_{Y^{2\\perp}}(z)dz, \\int_x^\\infty 1- F_{Y^{1\\parallel}}(z)dz, \\int_x^\\infty 1- F_{Y^{2\\parallel}}(z)dz, \\int_x^\\infty 1- F_{Y^{1\\parallel} + Y^{2\\parallel}}(z)dz \\Big),\n\\end{equation*}\n$\\Psi$ is the vector of operators\n\\begin{equation*}\n \\Psi = \\Big(\\Psi_{F_{Y^{1\\perp}}}, \\Psi_{F_{Y^{2\\perp}}}, \\Psi_{F_{Y^{1\\parallel}}}, \\Psi_{F_{Y^{2\\parallel}}}, \\Psi_{F_{Y^{1\\parallel} + Y^{2\\parallel}}} \\Big) ,\n\\end{equation*}\nand $<\\cdot, \\cdot>$ is the usual inner product in $\\mathbbm{R}^5$. 
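As a sanity check of this operator formulation, the single-risk version of \\eqref{eq:EQ10} can be solved by iterating $V \\mapsto \\frac{\\lambda}{c}\\big(\\Gamma + \\Psi_F V\\big)$ on a grid. For $\\mathrm{Exp}(\\mu)$ claims the ruin probability is known in closed form, $V(x) = \\frac{\\lambda}{c\\mu} e^{-(\\mu - \\lambda/c)x}$, which the iteration should recover. The sketch below (illustrative parameters, not from the paper; trapezoidal quadrature) uses the identity $(\\Psi_G h)(x) = \\int_0^x h(x-y)\\big(1-G(y)\\big)dy$, obtained from the definition of $\\Psi_G$ by integration by parts:

```python
import math

# Illustrative (hypothetical) parameters: Exp(1) claims, 50% total loading.
lam, mu, c = 1.0, 1.0, 1.5
N, h = 201, 0.005                             # grid on [0, 1]
xs = [i * h for i in range(N)]
Fbar = [math.exp(-mu * x) for x in xs]        # survival function 1 - F(y)
Gamma = [math.exp(-mu * x) / mu for x in xs]  # Gamma(x) = int_x^inf (1-F(z)) dz

def psi(V):
    """(Psi_F V)(x) = int_0^x V(x-y) (1-F(y)) dy, trapezoidal rule."""
    out = [0.0] * N
    for i in range(1, N):
        s = 0.5 * (V[i] * Fbar[0] + V[0] * Fbar[i])
        for j in range(1, i):
            s += V[i - j] * Fbar[j]
        out[i] = h * s
    return out

V = [0.0] * N
for _ in range(100):                          # partial sums of the Neumann series
    P = psi(V)
    V = [(lam / c) * (Gamma[i] + P[i]) for i in range(N)]

rho, kappa = lam / (c * mu), mu - lam / c     # closed-form ruin probability
err = max(abs(V[i] - rho * math.exp(-kappa * xs[i])) for i in range(N))
assert err < 1e-3                             # agrees up to quadrature error
```

The iteration is a contraction here because $\\frac{\\lambda}{c}\\int_0^\\infty (1-F(y))dy = \\frac{\\lambda \\E[Y]}{c} < 1$; the multi-risk series with five operators generalizes this construction.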
\n\\smallskip\n\nUsing the argument in the proof of Proposition \\ref{prop:alpha_proof}, we see that $V_\\theta$ admits the series representation\n\\begin{equation*}\n V_\\theta(x) = \\sum_{n=0}^\\infty \\Big(<\\alpha(\\theta), \\Psi>^n <\\alpha(\\theta), \\Gamma> \\Big)(x) .\n\\end{equation*}\nSimilarly, any vector $\\gamma \\in \\mathbbm{R}^5$ and any bounded measurable function $g:[0, +\\infty[ \\mapsto \\mathbbm{R}$ define a unique function\n\\begin{equation*}\n u_{\\gamma, g} (x) = \\sum_{n = 0}^\\infty \\Big(<\\gamma, \\Psi>^n g\\Big)(x) .\n\\end{equation*}\nThis function is analytic with respect to $\\gamma$, with partial derivatives\n\\begin{equation*}\n \\frac{\\partial{u_{\\gamma, g}}}{\\partial{\\gamma_i}} = \\sum_{n = 0}^\\infty <\\gamma, \\Psi>^n\\big( \\Psi_i u_{\\gamma,g}\\big) = u_{\\gamma, \\Psi_i u_{\\gamma, g}}, \\quad i = 1, \\dots, 5 .\n\\end{equation*}\n\nTaking into account the chain rule for derivatives, this proves the following proposition.\n\n\\begin{proposition}\n\\label{prop:P2}\nIf $\\theta \\mapsto \\alpha(\\theta)$ is differentiable, then $\\theta \\mapsto V_\\theta(x)$ is differentiable for every $x \\geq 0$ and\n\n$$\\frac{\\partial{}}{\\partial{\\theta_i}} V_\\theta(x) = \\sum_{j = 1}^5 u_{\\alpha(\\theta),(\\Gamma_j + \\Psi_j V_\\theta)} \\frac{\\partial{\\alpha_j(\\theta)}}{\\partial{\\theta_i}}, \\quad i = 1,2.$$\n\\end{proposition}\n\nBy Proposition \\ref{prop:P2}, the optimal loadings satisfy the equation \n\\begin{equation}\n \\label{eq:EQ11}\n \\sum_{j = 1}^5 u_{\\alpha(\\theta),(\\Gamma_j + \\Psi_j V_\\theta)} \\frac{\\partial{\\alpha_j(\\theta)}}{\\partial{\\theta_i}} = 0, \\quad i = 1,2 .\n\\end{equation}\n\nContrary to the single-risk case, explicit solutions of this equation seem out of reach, even in simple cases. 
\nHowever, \\eqref{eq:EQ11} can be numerically solved by Newton's algorithm, the second-order partial derivatives being\n\\begin{equation*}\n \\frac{\\partial^2}{\\partial{\\theta_i}\\partial{\\theta_j}} V_\\theta(x) = \\sum_{k = 1}^5 u_{\\alpha(\\theta),(\\Gamma_k + \\Psi_k V_\\theta)} \\frac{\\partial^2{\\alpha_k(\\theta)}}{\\partial{\\theta_i}\\partial{\\theta_j}} + \\sum_{k = 1}^5\\sum_{l = 1}^5 u_{\\alpha(\\theta),\\Psi_k} u_{\\alpha(\\theta),(\\Gamma_l + \\Psi_l V_\\theta)} \\frac{\\partial{\\alpha_k(\\theta)}}{\\partial{\\theta_i}} \\frac{\\partial{\\alpha_l(\\theta)}}{\\partial{\\theta_j}} .\n\\end{equation*}\n\nNotice that the expected profit is\n\\begin{equation*}\n c^{(1)}(\\theta) + c^{(2)}(\\theta) - \\Tilde{\\lambda} \\E[\\Tilde{Y}] = \\theta_1 p_1(\\theta_1)\\lambda_1\\E[Y^{(1)}] + \\theta_2 p_2(\\theta_2)\\lambda_2\\E[Y^{(2)}].\n\\end{equation*}\nThus, it depends only on the marginal distributions of the claim processes $S^{(1)}$, $S^{(2)}$, being independent of the dependency structure. It follows that the loadings maximizing the joint profit coincide with the loadings maximizing the profit on each risk, separately. That is, a pricing strategy focused solely on expected profit completely fails to take into account both dependencies between risks and dependencies between sales of policies.\n\n\n\n\n\\section{Numerical Results}\n\\label{seq:numerical}\n\nThroughout this section, $Y_j^{(i)}$ are assumed to be i.i.d.\\ gamma-distributed random variables with shape parameter $a^{(i)}$ and scale parameter $k^{(i)}$, so that the mean is $\\E[Y^{(i)}] = a^{(i)}k^{(i)}$, for $i = 1,2$. In the following numerical analysis let $a^{(1)} = a^{(2)} = 2$, $k^{(1)} = k^{(2)} = 500$, $\\lambda^{(1)} = \\lambda^{(2)} = 800$, $\\beta_0^{(1)} = \\beta_0^{(2)} = -0.6$, $\\beta_1^{(1)} = 4$ and $\\beta_1^{(2)} = 4.5$. That is, the difference stems from surplus process 2 being more sensitive to the loading via the parameter $\\beta_1^{(2)}$. 
The operational cost $r^{(i)}$ is taken to be $20\\%$ of the pure premium at an exposure of $40\\%$, that is, $r^{(i)} = 0.4 \\cdot 0.2\\, k^{(i)} a^{(i)} \\lambda^{(i)}$. The operational cost is therefore $8\\%$ of the expected total amount of claims in the market. The Clayton L\u00e9vy copula is considered for positive dependence and the parameter is set to $\\omega = 1$. Finally, let $\\theta_{ruin}^*$ and $\\theta_{profit}^*$ denote the optimal loadings when the ruin probability and the expected profit criteria are used, respectively. The programming language R was used for every calculation. \\\\\n\n\n\\subsection{Single Surplus Process}\n\nThe surplus processes are first considered separately. The ruin probability and the expected profit are plotted as functions of $\\theta$ for the two processes in Figures \\ref{fig:logit_1} and \\ref{fig:logit_2}. $\\theta_{ruin}^*$ was found by minimizing $\\alpha$.\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[scale = 0.4]{.\/logit_1.png}\n \\caption[Optimal loading parameter of surplus process 1.]{Surplus process 1. The blue lines show the ruin probability as a function of $\\theta$ for a given surplus $x$. The black line shows the expected profit per time unit as a function of $\\theta$. The blue dots show the minimum ruin probability for each surplus. $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ denote the optimal security loading parameter for the expected profit and for the probability of ruin, respectively. }\n \\label{fig:logit_1}\n\\end{figure}\n\n\nFrom Figure \\ref{fig:logit_1} it can be seen that the optimal security loading parameter for the ruin probability is $\\theta_{ruin}^* = 0.435$, while the $\\theta$ that maximizes the expected profit is lower, $\\theta_{profit}^* = 0.359$. Moreover, in this example, the maximum expected profit is 22.843 units and is attained at $\\theta_{profit}^*$. The expected profit taken at the point $\\theta_{ruin}^*$ is lower, close to 20.000 units. 
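These single-risk optima can be reproduced directly from \\eqref{eq:alpha_direct} and \\eqref{eq:profit_equation}. The sketch below uses $\\lambda = 800$, $\\E[Y] = ak = 1000$ and $r = 0.4 \\cdot 0.2 \\cdot \\lambda \\E[Y] = 64\\,000$; the pair $(\\beta_0, \\beta_1) = (-0.6, 4)$ is the combination consistent with the reported optimum $\\theta_{ruin}^* = 0.435$ (the weighted-average computation in the next subsection also uses $\\beta_0 = -0.6$), and $\\beta_1 = 4.5$ then corresponds to the second process:

```python
import math

lam, EY = 800.0, 2 * 500.0        # claim intensity and mean claim size a*k
r = 0.4 * 0.2 * EY * lam          # operational cost, 8% of lam*E[Y]

def theta_ruin(beta0, beta1):
    # closed form from eq. (alpha_direct)
    return (math.log(lam * EY / (r * beta1)) - beta0) / beta1

def theta_profit(beta0, beta1, lo=0.0, hi=2.0):
    # unique root of 1 + e^u - beta1*theta*e^u = 0, u = beta0 + beta1*theta,
    # from eq. (profit_equation); f is strictly decreasing, so bisection works
    def f(t):
        u = beta0 + beta1 * t
        return 1.0 + math.exp(u) * (1.0 - beta1 * t)
    for _ in range(100):          # bisection keeps f(lo) > 0 > f(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(round(theta_ruin(-0.6, 4.0), 3), round(theta_profit(-0.6, 4.0), 3))  # 0.435 0.359
print(round(theta_ruin(-0.6, 4.5), 3), round(theta_profit(-0.6, 4.5), 3))  # 0.36 0.319
```

The computed values match the optima reported for the two processes (the second process's $\\theta_{ruin}^* \\approx 0.360$ is within rounding of the reported $0.358$).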
\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[scale = 0.4]{.\/logit_2.png}\n \\caption[Optimal loading parameter of surplus process 2.]{Surplus process 2. The blue lines show the ruin probability as a function of $\\theta$ for a given surplus $x$. The black line shows the expected profit per time unit as a function of $\\theta$. The blue dots show the minimum ruin probability for each surplus. $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ denote the optimal security loading parameter for the expected profit and for the probability of ruin, respectively.}\n \\label{fig:logit_2}\n\\end{figure}\n\n\nFrom Figure \\ref{fig:logit_2} it can be seen that the optimal security loading parameter for the ruin probability is $\\theta_{ruin}^* = 0.358$, while the $\\theta$ that maximizes the expected profit is again lower, $\\theta_{profit}^* = 0.319$. \\\\\n\nObviously, for both processes, the ruin probability decreases with increasing surplus. Moreover, it can be seen that surplus process $X_2$ has a higher probability of ruin than surplus process $X_1$ for the same amount of surplus. The sensitivity of the demand curve strongly affects the ruin probability and $\\theta_{ruin}^*$. The more sensitive the demand curve, the closer $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ are. The more sensitive curve also has a higher probability of ruin for a given surplus, which indicates that more competitive insurance products are riskier. These effects can be seen by comparing Figures \\ref{fig:logit_1} and \\ref{fig:logit_2}. Conversely, if the demand curve is not sensitive to the price, then the gap between $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ can become quite large. 
Additionally, it can be seen from the curve at surplus $= 100$ that the ruin probabilities for $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ are similar; as the surplus grows, the values start to differ, and once the surplus is large enough the two values $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ result in similar ruin probabilities again. This means that an insurance firm with a sufficiently large surplus can choose $\\theta$ quite freely without materially increasing its probability of ruin. However, holding too large a reserve can be detrimental for insurance companies, as it can be seen as negative leverage. The bowl shape of the blue curves in Figures \\ref{fig:logit_1} and \\ref{fig:logit_2} stems from the interplay between the fixed cost and the demand curve. \\\\\n\n$\\theta_{ruin}^*$ should give the minimum ruin probability at all surplus values. This can be tested by graphing multiple ruin probability curves and comparing them with the one obtained with $\\theta_{ruin}^*$. Figure \\ref{fig:test_many_thetas} confirms that $\\theta_{ruin}^*$ indeed gives the minimum ruin probability.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[scale = 0.4]{.\/test_many_thetas.png}\n \\caption[Optimal value function of surplus process $X_1$.]{The figure compares the optimal value function of surplus process $X_1$ (blue line) to the value functions obtained with other loadings (grey lines). The blue line is achieved by setting $\\theta = \\theta_{ruin}^*$. All the other value functions lie above the optimal one, as expected. }\n \\label{fig:test_many_thetas}\n\\end{figure}\n\n \n\n\\subsection{Two Aggregated Surplus Processes with Common Loading}\n\nNext, the two surplus processes $X_1$ and $X_2$ are aggregated, both in the case of independent and of dependent claims. 
The acquisition is independent in this subsection.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[scale = 0.38]{.\/dep_vs_indp.png}\n \\caption[Surplus processes 1 and 2 aggregated. Showing the ruin probability and optimal $\\theta$ both in the case of independence and dependence. ]{Ruin probability when $X_1$ and $X_2$ are aggregated as a function of the security loading parameter, $\\theta$, both when they are independent and when they are dependent via the Clayton L\u00e9vy copula with $\\omega = 0.5$. The blue curves show the ruin probability when the two processes are independent for different values of the surplus and the red curves show the same for the dependent case. The curves have similar shapes, but the ruin probability is higher in the case of dependence, for the same surplus. The values in the legend show the minimum ruin probability for a given surplus (surplus $\\rightarrow$ probability).}\n \\label{fig:logit_dep_vs_indp}\n\n\\end{figure}\n\nFigure \\ref{fig:logit_dep_vs_indp} shows the ruin probability of the aggregated surplus process as a function of the security loading parameter, $\\theta$, both when the processes are independent and when they are dependent via the Clayton L\u00e9vy copula. The red curves represent dependence while the blue curves represent independence. \\\\\n\nFirstly, it can be seen that the expected profit is the same for dependence and independence and, from the figure, $\\theta^*_{profit} \\approx 0.34$. This is as expected, since the expected profit depends only on the marginal distributions of the claim processes. \\\\\n\nSecondly, the dependent case has a higher probability of ruin than the independent case for the same amount of surplus. However, the ruin probability is almost the same for small surplus values, as can be seen from the figure. Interestingly, the optimal loading for dependence and independence seems to be the same; numerically, $\\theta^*_{ruin,dep} = 0.4 = \\theta^*_{ruin,indp}$. 
The surplus value does not change the optimal loading $\\theta^*$, as expected. The reason why the ruin probability difference between the dependent and independent cases is relatively small lies in the probabilities $p^{(1,0)}(\\theta)$ and $p^{(0,1)}(\\theta)$: the fact that the insurance company does not always receive both claims $Y^{1\\parallel}$ and $Y^{2\\parallel}$ when a common jump occurs reduces the risk.\n\\\\\n\n\nFinally, the difference between the two ruin probability curves (red and blue) for a given surplus seems to be increasing with increasing surplus, meaning that the ruin probability in the independent case decreases more rapidly with increasing surplus than for the dependent case. Therefore, it is clear that the positively dependent case is riskier. \\\\\n\n\nNote that $\\theta_{ruin}^* \\approx 0.4$, which is very close to the weighted average of the optimal loading parameters of the isolated surplus processes, where the weight is the exposure ratio of each surplus process, that is,\n\n\\begin{equation*}\n \\theta_{weighted} = \\frac{0.435 \\frac{1}{1 + \\exp(-0.6 + 4 \\cdot 0.4)} + 0.358\\frac{1}{1 + \\exp(-0.6 + 4.5 \\cdot 0.4)}}{\\frac{1}{1 + \\exp(-0.6 + 4 \\cdot 0.4)} + \\frac{1}{1 + \\exp(-0.6 + 4.5 \\cdot 0.4)}} \\approx 0.4,\n\\end{equation*}\n\nwhich strongly suggests that the optimal value, $\\theta_{ruin}^*$, is simply the weighted average. \\\\\n\n\\subsubsection{Two Aggregated Surplus Processes with Separate Loadings}\n\nIt is more realistic to consider $\\theta$ as a vector, so that the loading parameter can differ for each surplus process separately, spreading the total premium over the policies in an optimal way. The two surplus processes, $X_1$ and $X_2$, are aggregated as before and the constants are the same, but let $\\theta = (\\theta^{(1)}, \\theta^{(2)})$. 
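Before moving to separate loadings, the weighted-average observation above is easy to check numerically (a quick sketch; the weights are the logit exposures evaluated at the common loading $\\theta = 0.4$ with $\\beta_0 = -0.6$, exactly as in the formula above):

```python
import math

def exposure(beta0, beta1, theta):
    # logit demand evaluated at the common loading theta
    return 1.0 / (1.0 + math.exp(beta0 + beta1 * theta))

w1 = exposure(-0.6, 4.0, 0.4)  # weight of the process with individual optimum 0.435
w2 = exposure(-0.6, 4.5, 0.4)  # weight of the process with individual optimum 0.358
theta_weighted = (0.435 * w1 + 0.358 * w2) / (w1 + w2)
print(round(theta_weighted, 3))  # 0.399, in line with the common optimum 0.4
```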
\\\\\n\n\\begin{figure}[ht]\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \n \\includegraphics[width=.9\\linewidth]{.\/indp_multi_profit.png} \n\\end{subfigure}\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \n \\includegraphics[width=.9\\linewidth]{.\/indp_multi_ruin.png} \n\\end{subfigure}\n \\caption[Two security loading parameters of two aggregated independent surplus processes. ]{Expected profit (left) and ruin probability (right) when $X_1$ and $X_2$ are aggregated, as functions of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The processes are assumed to be independent and the surplus is fixed at $x = 5000$. The pair in parentheses in the right figure shows the optimal values of $\\theta^{(1)}$ and $\\theta^{(2)}$ with the ruin probability as a criterion. The arrow indicates which values $\\theta^{(1)}$ and $\\theta^{(2)}$ are mapped into, thus showing the minimum ruin probability. The pair in parentheses in the left figure shows the same for the expected profit. The shape of the contour plot is due to the fact that the $\\theta$ grid considered is sparser for values that give high ruin probability.}\n \\label{fig:both_two_indp_15000}\n\\end{figure}\n\nFigure \\ref{fig:both_two_indp_15000} shows the expected profit (left) and the ruin probability (right), when $X_1$ and $X_2$ are assumed to be independent and aggregated, as functions of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The surplus is fixed at $x = 5000$ and the optimal values are shown. It should be noted that many surplus values were tested and they all gave the same values for $\\theta_{ruin}^{*(1)}$, $\\theta_{ruin}^{*(2)}$, $\\theta_{profit}^{*(1)}$, and $\\theta_{profit}^{*(2)}$ as shown; only the ruin probability level changed. Note that the optimal loading parameters for the expected profit are the same as those for the individual surplus processes. 
However, the optimal loading parameters for the ruin probability change when compared to the individual ones (compare with Figures \\ref{fig:logit_1} and \\ref{fig:logit_2}). When compared to the optimal loading parameters for the individual surplus processes, $\\theta^{(1)}$ decreases from $0.435$ to $0.42$ and $\\theta^{(2)}$ increases from 0.358 to 0.38. Therefore, the optimal security loading decision is to decrease the loading parameter of the less sensitive surplus process while increasing the loading parameter of the more sensitive surplus process. Additionally, when compared to Figure \\ref{fig:logit_dep_vs_indp}, the minimum ruin probability for one shared loading is $0.57$ while the ruin probability for two loadings is $0.56$, showing only a marginal difference. When the same is done for other surplus values, a similar difference is found. The expected profit is marginally higher. \\\\\n\nLastly, consider the case when the surplus processes are assumed to be dependent via the Clayton L\u00e9vy copula and the loadings can differ for each surplus process separately. Figure \\ref{fig:both_two_dep_15000} shows the ruin probability when $X_1$ and $X_2$ are aggregated, as a function of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The shape of the contour plots is due to the fact that the $\\theta$ grid considered is sparser for values that give high ruin probability. The surplus is fixed at $x = 5000$. 
\\\\\n\n\\begin{figure}[ht]\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \n \\includegraphics[width=.96\\linewidth]{.\/dep_multi_profit.png} \n\\end{subfigure}\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \n \\includegraphics[width=.96\\linewidth]{.\/dep_multi_ruin.png} \n\\end{subfigure}\n \\caption[Two security loading parameters of two aggregated dependent surplus processes.]{Expected profit (left) and the ruin probability (right) when $X_1$ and $X_2$ are aggregated, as a function of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The processes are assumed to be dependent via a Clayton L\u00e9vy copula and the surplus is fixed at $x = 5000$. The parentheses in the right figure show the optimal values of $\\theta^{(1)}$ and $\\theta^{(2)}$ with the ruin probability as a criterion. The arrow shows which values $\\theta^{(1)}$ and $\\theta^{(2)}$ are mapped into, thus showing the minimum ruin probability. The parentheses in the left figure show the same for the expected profit. The shape of the contour plot is due to the fact that the $\\theta$ grid considered is sparser for values that give high ruin probability.}\n \\label{fig:both_two_dep_15000}\n\\end{figure}\n\n\n\nIt can be seen that the optimal loadings $\\theta^{(1)}$ and $\\theta^{(2)}$ are the same as the ones in the case of independence, while the minimum ruin probability is higher (compared to the case in Figure \\ref{fig:both_two_indp_15000}). Both the values and the optimal loadings of the expected profit are the same as in the independent case. \nAgain, the optimal security loading decision is to decrease the loading parameter of the less sensitive surplus process while increasing the loading parameter of the more sensitive surplus process. The difference between the ruin probability in Figure \\ref{fig:both_two_dep_15000} and Figure \\ref{fig:both_two_indp_15000} is only $0.03$, but in this case the surplus is low compared to the expected profit. 
If the surplus were increased to $\\approx 20{,}000$, the difference would become greater. The difference would then decrease again if the surplus were increased to $\\approx 40{,}000$. \\\\\n\nAdditionally, when compared to Figure \\ref{fig:logit_dep_vs_indp}, the minimum ruin probability for one common loading is $0.59$, which is the same as the ruin probability for separate loading selections; therefore, the difference is only marginal. \\\\\n\n\\subsection{Dependent Claims and Dependent Acquisition}\n\nWe now turn to the case of dependent claims and dependent acquisition. Note that the case of independent claims and dependent acquisition is the same as the total independence case. We consider the cases where the acquisition is modelled with a Gumbel and with a Clayton dependency structure. To compare these two structures we use Kendall's tau. The following equations relate the copula parameters, $\\omega_{clayton}$ and $\\omega_{gumbel}$, to Kendall's tau, $\\tau$:\n\n\\begin{equation*}\n \\omega_{clayton} = \\frac{2 \\tau}{1-\\tau}, \\quad \\omega_{gumbel} = \\frac{1}{1-\\tau}.\n\\end{equation*}\n\nWe know that the expected profit is the same as before for all values of $\\tau$. Therefore, we analyze the ruin probability. \\\\\n\n\\begin{figure}[ht]\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \n \\includegraphics[width=.9\\linewidth]{.\/Clayton.png} \n\\end{subfigure}\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \n \\includegraphics[width=.9\\linewidth]{.\/Gumbel.png} \n\\end{subfigure}\n \\caption[Acquisition dependency.]{The ruin probability when the acquisition is modelled with a Clayton (left) and a Gumbel (right) dependency structure. The surplus is constant. In both cases, the ruin probability is higher for higher Kendall's tau. The Gumbel case is riskier. 
The surplus is fixed at 5000 units.}\n \\label{fig:acquisition}\n\\end{figure}\n\nIn Figure \\ref{fig:acquisition} we can see the ruin probability for different dependency values when the surplus is fixed at 5000 units. We can see that the ruin probability is higher for more dependent acquisition, as expected. Also, we can see that the Gumbel acquisition model gives higher ruin probabilities than the Clayton model for the same Kendall's tau value. It can also be seen that when Kendall's tau is $0.05$ (close to $0$) the ruin probability is close to the case of independent acquisition, as expected. The optimal loading parameter is the same for all dependency levels. \n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\n\n\n\nUntil recently, the nuclear magic numbers have been considered to be robust \nand durable with respect to the shell structure of atomic nuclei.\nThe nuclear shell model initiated by Mayer and Jensen \\cite{MayJen}\nsuccessfully accounts for the shell structure of nuclei on and near \nthe $\\beta$-stability line. \nHowever, recent experimental studies on neutron-rich nuclei \nfar from the $\\beta$-stability line have revealed \nthe disappearance of magic numbers \n(e.g., N $=$ 8 \\cite{be12a,be12b,be12c,be12d}, 20 \\cite{mg32}) \nand the appearance of new magic numbers (e.g., N $=$ 16 \\cite{n16magic}).\nThe new N $=$ 16 magic number comes from the reduction of the N $=$ 20\nshell gap. \nOne theoretical explanation \\cite{otsuka1, utsuno1} for this phenomenon \nis that the attractive \nmonopole part of the tensor force acting between the $\\pi$d$_{5\/2}$ and \n$\\nu$d$_{3\/2}$ orbitals \nreduces the N $=$ 20 gap in nuclei with high N\/Z ratios. 
\n\nAnother anomalous phenomenon discovered in neutron-rich nuclei is \nthe occurrence of a {\\it strongly deformed} ground state on and near \nthe N $=$ 20 magic nuclei in the Z $\\sim$ 12 region, \nthe so-called 'island of inversion' \\cite{inversion}. \nHere, the intruder two-particle, two-hole (2p--2h) configuration occupying \nthe {\\it fp} shell and the normal 0p--0h configuration in the {\\it sd} shell \nare inverted in energy, and the 2p--2h configuration dominates in the \nground state. \nThis is a new challenge to the nuclear shell model. \nThe most convincing evidence for the large quadrupole collectivity of the nuclei \nin the 'island of inversion' was the measured low E(2$_{1}^{+}$) energy \\cite{mg32a} \nand the high B(E2) strength \\cite{mg32} as well as the enhancement of the binding \nenergy \\cite{mg32mass}. \nThe experimentally deduced large B(E2; 0$_{g.s.}^{+}\\rightarrow$2$_{1}^{+}$) value \nindicates large deformation ($\\beta_{2}=$ 0.512(44)) in $^{32}$Mg \\cite{mg32}.\n$^{34}$Mg (N $=$ 22) shows an even larger deformation ($\\beta_{2}=$ 0.58(6)) \n\\cite{mg34}.\nMonte Carlo shell model calculations \\cite{mcsm} that include the effects of \nthe narrowed shell gap at N $=$ 20 and enhanced 2p--2h excitations reproduce \nthe experimental values quite well \\cite{mg34}. \n\nOn the other hand, such a multiparticle--multihole (mp--mh) excitation appears \nin the excited states of nuclei near the $\\beta$-stability line.\nThese states can be studied by the heavy-ion induced fusion-evaporation reaction\nof stable isotopes. 
\nIn fact, an mp--mh excitation from the {\\it sd} to {\\it fp} shell produces \nsuperdeformation (SD) in N $=$ Z light mass nuclei, \nsuch as $^{36}$Ar \\cite{ar36}, $^{40}$Ca \\cite{ca40}, and $^{44}$Ti \\cite{ti44}.\nThese SD nuclei exhibit a large deformation of $\\beta_{2}\\sim$ 0.5, which is about \nthe same magnitude as the ground state deformation in the 'island of inversion'.\nThe presence of SD in the three abovementioned nuclei can also be understood \nin terms of the SD shell gaps of N $=$ Z $=$ 18, 20, and 22, respectively \\cite{ca40}. \nIn this region, the spherical and SD magic numbers occur at similar \nparticle numbers, which results in shape coexistence. \nHowever, the existence of an SD shell structure in neutron-rich nuclei \nhas not been experimentally confirmed yet.\n\nIn order to access the most neutron-rich SD states currently reachable with asymmetric \nSD magic numbers, especially the nucleus with N $=$ 22 corresponding to $^{34}$Mg, \nwe employed a proton emission channel (2p2n) in the fusion-evaporation reaction using \nthe most neutron-rich beam and target combination of stable isotopes obtained so far. \nConsequently, we successfully populated the high-spin states of the SD doubly magic\nZ $=$ 18 and N $=$ 22 nucleus, $^{40}$Ar. \nIn this Letter, we report experimental results on the SD states in $^{40}$Ar \nassociated with the mp--mh excitation between the {\\it sd} and {\\it fp} shells. \n\n\nHigh-spin states in $^{40}$Ar have previously been studied by proton-$\\gamma$ \ncoincidence measurements using the $^{37}$Cl($\\alpha$,~p$\\gamma$) reaction \n\\cite{old40ar}.\nHigh-spin levels below 6.8~MeV were identified up to (8$^{+}$), and spin-parity \nassignments up to the 6$^{+}$ state were obtained from the particle-$\\gamma$ \nangular correlations. 
\nThe parity of the 5$^{-}$ state at 4.494~MeV was determined by the linear \npolarization of the 5$^{-}\\rightarrow$4$^{+}$ transition at 1602~keV.\nThe lifetimes of low-lying levels were measured by the Doppler-shift attenuation \nmethod. \nThe high E2 strengths of the 6$_{2}^{+}\\rightarrow$4$_{2}^{+}$ \nand 4$_{2}^{+}\\rightarrow$2$_{2}^{+}$ transitions are respectively deduced to be \n67$_{-19}^{+38}$ and 46$_{-10}^{+15}$ in Weisskopf units, \nwhich indicates the large collectivity of the band. \nHowever, the (8$^{+}$) assignment was based solely on the similarity of the\nlevel structure to that in $^{42}$Ca, but the $\\gamma-\\gamma$ coincidence\nrelations of the in-band transitions were not examined and the presence of the \nband structure was not unambiguously identified by experiment. \nTherefore, it is essential to find the higher-spin members of the rotational band \nand to confirm the coincidence relations between the in-band $\\gamma$ transitions.\n\n\nHigh-spin states in $^{40}$Ar were populated via the \n$^{26}$Mg($^{18}$O, 2p2n)$^{40}$Ar reaction with a 70-MeV $^{18}$O beam\nprovided by the tandem accelerator facility at the Japan Atomic Energy Agency.\nTwo stacked self-supporting foils of $^{26}$Mg enriched isotopes with \nthicknesses of 0.47 and 0.43 mg\/cm$^{2}$ were used. \nThe mean beam energy of the $^{18}$O beam used to irradiate the $^{26}$Mg foils \nwas 69.0~MeV.\nGamma rays were measured by the GEMINI-II array \\cite{gemini} consisting of \n16 HPGe detectors with BGO Compton suppression shields, in coincidence with \ncharged particles detected by the Si-Ball \\cite{siball}, a 4 $\\pi$ array \nconsisting of 11 $\\Delta$E Si detectors that were 170~$\\mu$m thick. 
\nThe most forward placed Si detector was segmented into five sections and \nthe other detectors were segmented into two sections each, giving a total of \n25 channels that were used to enhance the selectivity of multi charged-particle \nevents.\nWith a trigger condition of more than two Compton-suppressed Ge detectors\nfiring in coincidence with charged particles, a total number of \n6.6$\\times$10$^{8}$ events were collected.\n\nBased on the number of hits in the charged particle detectors, events were \nsorted into three types of E$_{\\gamma}-$E$_{\\gamma}$ coincidence matrices \nfor each evaporation channel.\nA symmetrized matrix was created and the RADWARE program ESCL8R \\cite{radware}\nwas used to examine the coincidence relations of $\\gamma$ rays. \nBy gating on the previously reported $\\gamma$ rays, high-spin states\nin $^{40}$Ar were investigated.\n\n\n\nBy gating on the known 1461, 1432, and 571~keV peaks of the \n2$^{+}\\rightarrow $0$^{+}$, \n4$^{+}\\rightarrow $2$^{+}$, and \n6$^{+}\\rightarrow $4$^{+}$ transitions respectively, \nseveral new levels were identified above the 5$^{-}$ states at 4.49~MeV by \nconnecting with high-energy $\\gamma$ transitions of $\\geq$2.5~MeV.\nThe previously assigned deformed band members of 2$_{2}^{+}$, 4$_{2}^{+}$, \nand 6$_{2}^{+}$ states were confirmed at 2.522, 3.515, and 4.960~MeV, \nrespectively.\nIn addition, two $\\gamma$-ray cascade transitions of 2269 and 2699~keV \nwere identified in coincidence with the 993, 1445, and 1841~keV transitions, \nwhich form a rotational band up to the (12$^{+}$) state at 11.769~MeV. \nLinking $\\gamma$ transitions were also observed between the excited \n2$_{2}^{+}$, 4$_{2}^{+}$, and 6$_{2}^{+}$ states and the low-lying \n2$_{1}^{+}$ and 4$_{1}^{+}$ levels, which establishes the excitation energies \nand the spin-parity assignment of the band. 
\n\n\nSpins of the observed levels are assigned on the basis of the \nDCO (Directional Correlations from Oriented states) ratios of $\\gamma$ rays \nby analyzing an asymmetric angular correlation matrix. \nThe multipolarities of the in-band transitions of the band and \nthe linking transitions of 4$_{2}^{+}\\rightarrow $2$_{1}^{+}$ and \n6$_{2}^{+}\\rightarrow $4$_{1}^{+}$ are consistent with a stretched quadrupole \ncharacter. \nAssuming E2 multipolarity for the stretched quadrupole transition, the parity \nof the band was assigned to be positive.\nThe multipolarity of the 2699~keV transition could not be determined due to \nthe lack of statistics, but it was in coincidence with other $\\gamma$ \ntransitions in the band and assigned as E2. \n\nTo determine the deformation of the band, the transition quadrupole moment Q$_{t}$ \nwas deduced. \nLifetimes were estimated by the \\cite{DSAM} technique, which is based on the \nresidual Doppler shift of the $\\gamma$-ray energies emitted from the deceleration \nof recoiling nuclei in a thin target. \nThe average recoil velocity $<\\beta>$ is expressed as a function of the \ninitial recoil velocity to obtain $F(\\tau) \\equiv <\\beta>\/\\beta_{0}$.\nIn Fig.~3, the fractional Doppler shift $F(\\tau)$ is plotted against the \n$\\gamma$-ray energy. \nThe experimental $F(\\tau)$ values are compared with the calculated values\nbased on known stopping powers \\cite{srim2003}. 
\nIn this calculation, the side feeding into each state is assumed to consist\nof a cascade of two transitions having the same lifetime as the in-band\ntransitions.\nThe intensities of the side-feeding transitions were modeled to reproduce\nthe observed intensity profile.\nThe data are best fitted with a transition quadrupole moment \n$Q_{t} = 1.45_{-0.31}^{+0.49} e$b, which corresponds to a quadrupole \ndeformation of $\\beta_{2}=0.53_{-0.10}^{+0.15}$.\nThis result is consistent with a superdeformed shape of the band.\n\n\n\nIn order to compare the high-spin behavior of the rotational band in $^{40}$Ar\nwith the SD bands in $^{36}$Ar and $^{40}$Ca, \nthe so-called 'backbending' plot of the SD bands is displayed in Fig.~\\ref{fig4}.\nThe gradients of the plots correspond to the kinematic moments of inertia. \nBecause $^{40}$Ar has a similar gradient to $^{36}$Ar and $^{40}$Ca, \nthe deformation size of the $^{40}$Ar rotational band is expected to be as \nlarge as the deformation of the SD bands in $^{36}$Ar and $^{40}$Ca. \nUnlike $^{36}$Ar, no backbending was observed in $^{40}$Ar.\nIts behavior is rather similar to that of $^{40}$Ca.\nMany theoretical models, including the shell model \\cite{ar36,caurier,poves,sun} \nand various mean-field models \\cite{inakura,bender,HFB}, have been used to analyze \n$^{36}$Ar.\nAll the calculations reveal that the strong backbending in \n$^{36}$Ar is due to the simultaneous alignment of protons and neutrons \nin the f$_{7\/2}$ orbital.\n\n\nThe pronounced difference in the high-spin behaviors of $^{36}$Ar and \n$^{40}$Ar implies that the addition of four neutrons to $^{36}$Ar gives rise to \na dramatic effect on its structure. 
\nIn order to understand this structural change theoretically, cranked \nHartree--Fock--Bogoliubov (HFB) calculations with the P+QQ force \\cite{HFB} \nwere conducted.\nThe evolution of the nuclear shape was treated in a fully self-consistent manner \nand the model space of the full {\\it sd-fp} shell plus the g$_{9\/2}$ orbital \nwas employed.\nThe calculation shows that $\\beta_2 = 0.57$ at $J = 0 \\hbar$ and that the deformation \ngradually decreases to 0.45 at $J = 12\\hbar$. \nTriaxiality is found to be almost zero ($\\gamma\\simeq 0$) throughout this \nangular momentum range. \nThis result agrees with the experimental $Q_t$ value within the error bars.\n\n\n\nThe occupation number of each orbital was also calculated (Table \\ref{occupation}).\nThe ground-state configuration is expressed as \n({\\it sd})$^{-2}$({\\it fp})$^{2}$ relative to the ground-state configuration of \n$^{40}$Ca, where the Fermi levels for protons and neutrons lie at d$_{3\/2}$\nand f$_{7\/2}$, respectively.\nThe self-consistently calculated second $0^+$ state has \nthe ({\\it sd})$^{-6}$({\\it fp})$^{6}$ configuration.\nHere, the {\\it fp} shell is occupied by two protons and four neutrons,\nwhile the {\\it sd} shell has four proton holes and two neutron holes.\nConsidering the rise of the neutron Fermi level relative to that in $^{36}$Ar,\nthis excited configuration is essentially equivalent to the 4p--4h superdeformed \nconfiguration in $^{36}$Ar.\n\nCranking is then performed to study high-spin states. \nIn the proton sector, the behaviors of single-particle orbitals are similar to \nthose of $^{36}$Ar \\cite{HFB}. 
\nFor example, the occupation numbers in the $\\pi$p$_{3\/2}$ orbital monotonically\ndecrease up to $J=16 \\hbar$ while the $\\pi$f$_{7\/2}$ orbital starts to increase \nat $J=$ 8 $\\hbar$ due to the disappearance of the pairing gap energy.\n\nIn the neutron sector, clear differences are observed from the $^{36}$Ar case.\nThe occupation number in the $\\nu$f$_{7\/2}$ orbital is almost constant ($\\sim$3) \nagainst the total angular momentum; it is about 1.5 times larger than that for \n$^{36}$Ar. \nThe $\\nu$d$_{5\/2}$ orbitals are almost fully occupied ($\\simeq 5.5$) \nfrom the low- to the high-spin regions.\nIn the case of $^{36}$Ar, the structural change involving the sharp backbending\nis caused by a particle feeding from the p$_{3\/2}$ orbital to the f$_{7\/2}$ \norbital for both protons and neutrons.\nIn the neutron sector of $^{40}$Ar, this feeding happens from the p$_{3\/2}$\nto the many other single-particle orbitals.\nThis is because the rise of the neutron Fermi level enhances the occupation \nnumbers of the single-particle orbitals, particularly at the bottom end of \nthe {\\it fp} shell. \nFor example, the f$_{7\/2}$ is well occupied by $\\simeq 40\\%$. \nThis high occupation influences the response of the system to the Coriolis \nforce. \nIn general, low-$\\Omega$ states tend to come down energetically lower, so that \nsuch states are the first to be \"submerged\" when the Fermi level rises. \nAs a result, neutron states near the Fermi level in $^{40}$Ar possess a higher \n$\\Omega$ value and the rotational alignment is suppressed. \nIn $^{36}$Ar, many $\\Omega = 1\/2$ states are vacant in the {\\it fp} shell, \nso that it is possible to place particles in the $\\Omega = 1\/2$ states during \nthe feeding from the p$_{3\/2}$ orbital to the f$_{7\/2}$ orbital.\nHowever, in $^{40}$Ar, such $\\Omega = 1\/2$ states are filled due to the\nrise of the neutron Fermi level. 
\nIt is thus necessary to place neutrons in the $\\Omega$ = 3\/2 or 5\/2 levels \nin order to generate angular momentum. \nBut this way of feeding weakens the rotational alignment.\nThis ``Pauli blocking effect'' is one of the reasons why $^{40}$Ar does not \nbackbend (at least, not in the spin region so far observed). \nIt is also worth mentioning that because of the rise of the neutron Fermi level \nin $^{40}$Ar, angular momentum generation is spread among the extra f$_{7\/2}$ \nneutrons, in comparison with $^{36}$Ar. \nThis means that, unlike their neutron counterparts, the f$_{7\/2}$ protons do not \nneed to \"work hard\" to generate angular momentum. \nAs a result, simultaneous alignment of the f$_{7\/2}$ protons and neutrons \ndoes not occur in $^{40}$Ar. \nOur calculation confirms this picture. \n\n\nIn summary, a discrete-line superdeformed band has been identified in $^{40}$Ar. \nThe observed large transition quadrupole moment ($Q_{t}= 1.45_{-0.31}^{+0.49} e$b)\nsupports its SD character. \nThe properties of the SD band could be reasonably well explained by cranked HFB \ncalculations with the P+QQ force. \nThis finding of the SD band in $^{40}$Ar is similar to those observed in $^{36}$Ar \n\\cite{ar36} and $^{40}$Ca \\cite{ca40}, indicating the persistence of the SD shell \nstructure in the neutron-rich A $=$ 30 $\\sim$ 40 nuclei and possibly implying that \n$^{40}$Ar is a doubly SD magic nucleus with Z $=$ 18 and N $=$ 22.\nThe observed SD structure with a deformation of $\\beta_2 \\sim$ 0.5 caused by the \nmp--mh excitation across the {\\it sd--fp} shell gap might explain\nthe origin of the strongly deformed ground state in the 'island of inversion'.\n\n\nThe authors thank the staff at the JAEA tandem accelerator for providing \nthe $^{18}$O beam. 
\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn his seminal paper, published in 1870, about the solution of a nonlinear equation in a single unknown, \n\\begin{equation}\\label{eq1} \nf(z)=0,\n\\end{equation}\nSchr\\\"oder deals with the problem of characterizing general iterative algorithms to solve~\\eqref{eq1} with a prefixed order of convergence $\\omega\\ge 2$ (see the original paper \\cite{Sch} or the commented English translation \\cite{Ste}). The main core of Schr\\\"oder's work studies two families of iterative processes, the well-known families of first and second kind \\cite{Pet1}, \\cite{Pet2}. The $\\omega$-th member of these families is an iterative method that converges with order $\\omega$ to a solution of~\\eqref{eq1}. In this way, the second method of both families is Newton's method. The third method of the first family is Chebyshev's method. The third method of the second family is Halley's method. The rest of the methods in both families (with order of convergence $\\omega\\ge 4$) are not so well known. \n\nNote that Newton's, Chebyshev's and Halley's methods are also members of another famous family of iterative methods, known as the Chebyshev-Halley family of methods (introduced by Werner \\cite{Wer} and reported by many other authors such as \\cite{Amat} or \\cite{Dub}):\n \\begin{equation}\\label{eq2}\nz_{k+1}=z_k-\\left(1+\\frac{1}{2}\\frac{L_f(z_k)}{1-\\alpha L_f(z_k)}\\right)\\frac{f(z_k)}{f'(z_k)}, \\quad \\alpha\\in \\mathbb{R},\\quad k\\ge 0, \\quad z_0\\in\\mathbb{C},\n\\end{equation}\nwhere we have used the notation\n \\begin{equation}\\label{eq3}\n L_f(z)=\\frac{f(z)f''(z)}{f'(z)^2}.\n\\end{equation}\nIn fact, Chebyshev's method is obtained for $\\alpha=0$, Halley's method appears for $\\alpha=1\/2$ and Newton's method can be obtained as a limit case when $|\\alpha| \\to \\infty$. 
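For illustration, the whole one-parameter family fits in a few lines of code. The sketch below is not taken from the paper; the test function $z^2-2$, the starting point and the tolerances are arbitrary choices.

```python
def chebyshev_halley(f, df, d2f, z0, alpha=0.0, tol=1e-12, max_iter=50):
    """One-parameter Chebyshev-Halley iteration:
    alpha = 0 gives Chebyshev's method, alpha = 1/2 Halley's method,
    and large |alpha| approaches Newton's method."""
    z = z0
    for _ in range(max_iter):
        fz, dfz = f(z), df(z)
        L = fz * d2f(z) / dfz ** 2                    # the quantity L_f(z)
        z_next = z - (1 + 0.5 * L / (1 - alpha * L)) * fz / dfz
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# both alpha = 0 (Chebyshev) and alpha = 1/2 (Halley) reach sqrt(2) in a few steps
root = chebyshev_halley(lambda z: z * z - 2, lambda z: 2 * z, lambda z: 2.0, 1.0)
```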
Except for the limit case of Newton's method, all the methods in the family have third order of convergence.\n\nIn this general context of families of iterative methods for solving nonlinear equations, we would like to highlight a detail that appears in the aforementioned paper by Schr\\\"oder~\\cite{Sch}. Actually, in the third section of this article, Schr\\\"oder constructs an algorithm by applying Newton's method to the equation \n$$\n\\frac{f(z)}{f'(z)}=0.\n$$\nThe resulting iterative scheme can be written as\n$$\nz_{k+1}=z_k-\\frac{f(z_k)f'(z_k)}{f'(z_k)^2-f(z_k)f''(z_k)}, \\quad k\\ge 0, \\quad z_0\\in\\mathbb{C},\n$$\nwhich is known as \\emph{Schr\\\"oder's method} by many authors (see for instance \\cite{Proinov} or \\cite{Scavo}).\n\nFor our convenience, we denote by $S_f(z)$ the iteration function of Schr\\\"oder's method. Note that it can be written in terms of the function $L_f(z)$ introduced in~\\eqref{eq3} in the following way:\n \\begin{equation}\\label{eq4}\nz_{k+1}=S_f(z_k)=z_k-\\frac{1}{1-L_f(z_k)}\\frac{f(z_k)}{f'(z_k)}, \\quad k\\ge 0, \\quad z_0\\in\\mathbb{C}.\n\\end{equation}\nSchr\\\"oder himself compares the resulting algorithm~\\eqref{eq4} with Newton's method and says: \n\\begin{quote}\n``It is an equally worthy algorithm which to my knowledge has not been previously considered. Besides, being almost as simple, this latter algorithm has the advantage that it converges quadratically even for multiple roots''.\n\\end{quote}\n\nCuriously, Schr\\\"oder's method~\\eqref{eq4} belongs neither to Schr\\\"oder's families of first and second kind nor to the Chebyshev-Halley family~\\eqref{eq2}. 
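The quadratic convergence at multiple roots is easy to observe numerically. The sketch below compares one run of Newton's method with one run of Schröder's method on a made-up polynomial with a double root at $z=1$; the polynomial and the number of iterations are arbitrary illustrative choices.

```python
def newton_step(f, df, z):
    return z - f(z) / df(z)

def schroder_step(f, df, d2f, z):
    # Schröder's iteration: z - f f' / (f'^2 - f f'')
    fz, dfz = f(z), df(z)
    return z - fz * dfz / (dfz ** 2 - fz * d2f(z))

# made-up test polynomial with a double root at z = 1 and a simple root at z = -2
f = lambda z: (z - 1) ** 2 * (z + 2)        # = z^3 - 3z + 2
df = lambda z: 3 * z ** 2 - 3
d2f = lambda z: 6 * z

zn = zs = 2.0
for _ in range(4):
    zn = newton_step(f, df, zn)
    zs = schroder_step(f, df, d2f, zs)
```

With this example, four iterations leave Newton's iterate at a distance of about $0.08$ from the double root (linear convergence), while Schröder's iterate is already within $10^{-8}$ of it (quadratic convergence).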
It has very interesting numerical properties, such as the quadratic convergence even for multiple roots, but the fact of having a high computational cost (equivalent to that of the third-order methods in~\\eqref{eq2}) could be an important handicap for practical purposes.\n\nIn this paper we present a first approach to the dynamical behavior of Schr\\\"oder's method. We show that for polynomials with two different roots and different multiplicities, it is possible to characterize the basins of attraction and the corresponding Julia set. We can appreciate the influence of the multiplicities in such sets.\n\n\\section{Preliminaries}\n\nIn the 13th section of Schr\\\"oder's work~\\cite{Sch}, which has the title \\emph{The Principal Algorithms Applied to Very Simple Examples}, we can find the first dynamical study of a couple of rootfinding methods. Actually, Schr\\\"oder considers those that, in his opinion, are the two most useful methods: Newton's method, defined by the iterative scheme\n\\begin{equation}\\label{eq5}\nz_{k+1}=N_f(z_k)=z_k-\\frac{f(z_k)}{f'(z_k)}, \\quad k\\ge 0, \\quad z_0\\in\\mathbb{C},\n\\end{equation}\nand the method $z_{k+1}=S_f(z_k)$ given in~\\eqref{eq4}.\n\nIn the simplest case, namely equations with only one root, we can assume without loss of generality that $f(z)=z^n$. It is easy to see that\n$$\nN_f(z)=\\frac{n-1}{n}z,\\quad S_f(z)=0.\n$$\nSo Schr\\\"oder's method gives the correct root ($z=0$) of the equation in just one step, whereas Newton's method converges to this root with linear convergence:\n$$\nz_k=\\left(\\frac{n-1}{n}\\right)^k z_0.\n$$\nConsequently, for equations with a single root Schr\\\"oder concludes that the convergence region of these two methods is the entire complex plane.\n\nThe next simple case considered by Schr\\\"oder is the quadratic equation. Again, without loss of generality he assumes \n$f(z)=(z-1)(z+1)$. 
After a series of cumbersome calculations, he states that in this case and for both methods, the entire complex plane decomposes into two regions separated by the imaginary axis. A few years later, Cayley~\\cite{Cay} addresses the same problem, only for Newton's method. In a very elegant way, Cayley proves that for polynomials\n\\begin{equation}\\label{eq6}\nf(z)=(z-a)(z-b),\\quad a,b\\in \\mathbb{C}, \\quad a\\ne b,\n\\end{equation}\nNewton's iterates converge to the root $a$ if $|z_0-a|<|z_0-b|$ and to the root $b$ if $|z_0-b|<|z_0-a|$. The Julia set is the equidistant line between the two roots. The key to proving this result is to check that Newton's iteration function~\\eqref{eq5} applied to polynomials~\\eqref{eq6} is conjugate via the M\\\"obius map\n\\begin{equation}\\label{eq7}\nM(z)=\\frac{z-a}{z-b}\n\\end{equation}\nwith the function $R(z)=z^2$, that is, $R(z)=M\\circ N_f\\circ M^{-1}(z)$. The unit circle $S^1=\\{z\\in\\mathbb{C}; |z|=1\\}$ is invariant by $R$. Its anti-image by $M$ is the bisector between the roots $a$ and $b$.\n\nTwo functions $f,g: \\mathbb{C} \\to \\mathbb{C}$ are said to be topologically conjugate if there exists a homeomorphism $\\varphi$ such that \n$$\n\\varphi\\circ g=f\\circ \\varphi.\n$$\nTopological conjugation is a very useful tool in dynamical systems (see \\cite{Dev} for more details)\nbecause two conjugate functions share the same dynamical properties, from the topological viewpoint. For instance, the fixed points of one function are mapped into the fixed points of the other, the periodic points of one function are mapped into the periodic points of the other function, and so on. Speaking informally, we can say that the two functions are the same from a dynamical point of view. As we have just seen, in some cases one of the functions in a conjugation could be much simpler than the other. 
In the case of Cayley's problem, $R(z)=z^2$ is topologically conjugate to (and much simpler than)\n$$\nN_f(z)=\\frac{a b-z^2}{a+b-2 z}.\n$$\n\n\nIn the same way, we have that Schr\\\"oder's method~\\eqref{eq4} applied to polynomials~\\eqref{eq6},\n$$\nS_f(z)=\\frac{z^2 (a+b)-4 a b z+a b (a+b)}{a^2-2 z (a+b)+b^2+2 z^2},\n$$\nis conjugate to $-R(z)$ via the M\\\"obius map defined in~\\eqref{eq7}, that is, $M\\circ S_f\\circ M^{-1}(z)=-z^2$. Consequently, the dynamical behavior of Schr\\\"oder's method for quadratic polynomials mimics the behavior of Newton's method: the Julia set is the bisector between the two roots and the basins of attraction are the corresponding half-planes. 
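This conjugacy can be confirmed numerically. The sketch below checks the identity $M\circ S_f\circ M^{-1}(w)=-w^2$ for the normalized roots $a=1$, $b=-1$ at a few arbitrarily chosen sample points:

```python
a, b = 1.0, -1.0                         # the roots of the quadratic example

def S(z):
    # Schröder's iteration function for f(z) = (z - a)(z - b)
    num = (a + b) * z ** 2 - 4 * a * b * z + a * b * (a + b)
    den = 2 * z ** 2 - 2 * (a + b) * z + a ** 2 + b ** 2
    return num / den

M = lambda z: (z - a) / (z - b)          # the Möbius map
M_inv = lambda w: (b * w - a) / (w - 1)  # its inverse

for w in (0.3 + 0.4j, -0.2 + 0.9j, 1.5 - 0.5j):
    # M∘S∘M⁻¹(w) should equal -w² up to rounding error
    assert abs(M(S(M_inv(w))) + w ** 2) < 1e-12
```

The identity holds for arbitrary distinct roots $a\ne b$; only the normalized case $a=1$, $b=-1$ is tested here.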
Consequently, $C_{m,n}$ is the Julia set of the map $R_{m,n}(z)$.\n\n\n\\begin{Theorem}\\label{teo1}\nLet $T_{m,n}(z)$ be the rational map defined by~\\eqref{eq10} and let us denote by $J_{m,n}$ its Julia set. Then we have:\n\\begin{enumerate}\n\\item If $m=n$, then $J_{m,m}$ is the imaginary axis.\n\\item If $m>n\\ge 1$, then $J_{m,n}$ is the circle\n$$J_{m,n}=\\left\\{z\\in\\mathbb{C}; \\left|z+\\frac{m^2+n^2}{m^2-n^2}\\right|=\\frac{2mn}{m^2-n^2}\\right\\}.$$\n\\end{enumerate}\n\\end{Theorem}\n\n\\begin{proof\nThe proof follows immediately, just by taking into account that $J_{m,n}$ is the pre-image by $M(z)=(z-1)\/(z+1)$ of the circle $C_{m,n}$ and by distinguishing the two situations indicated in the statement of the theorem.\n\\end{proof}\n\n\\begin{Theorem}\\label{teo2}\nLet $S_f(z)$ be the rational map defined by applying Schr\\\"oder's method to polynomials~\\eqref{eq8} and let us denote by $J_{m,n,a,b}$ its Julia set. Then we have:\n\\begin{enumerate}\n\\item If $m=n$, $J_{m,m,a,b}$ is the equidistant line between the points $a$ and $b$.\n\\item If $m>n\\ge 1$, then $J_{m,n,a,b}$ is the circle\n$$J_{m,n,a,b}=\\left\\{z\\in\\mathbb{C}; \\left|z+\\frac{b m^2-a n^2}{m^2-n^2}\\right|=\\frac{mn |a-b|}{m^2-n^2}\\right\\}.$$\n\\end{enumerate}\n\\end{Theorem}\n\n\\begin{proof\nNow we deduce this result by calculating the pre-images of $J_{m,n}$ by the affine map $A(z)$ defined in~\\eqref{eq9} in the two situations indicated in the previous theorem.\n\\end{proof}\n\n\n\n\\section{Conclusions and further work}\n\n\nWe have studied the behavior of Schr\\\"oder's method for polynomials with two different complex roots and with different multiplicities~\\ref{eq8}. 
Actually, we have proved that the Julia set of the corresponding rational functions obtained in this case is a circle given in Theorem~\\ref{teo2}.\n\nIn addition, Theorem~\\ref{teo1} gives us a universal result that characterizes the behavior of Schr\\\"oder's method in a very simplified form, depending only of the values of the multiplicities $m$ and $n$. The influence of the roots $a$ and $b$ is revealed in Theorem~\\ref{teo2}, and is just an affine transformation of the situation given in Theorem~\\ref{teo1}.\n\n\nLet us consider the points $(x,y)\\in\\mathbb{R}^2$ given by the centers and radius of the circles defined in Theorem~\\ref{teo1}, that is\n$$\n(x,y)=\\left( \\frac{m^2+n^2}{m^2-n^2}, \\frac{2mn}{m^2-n^2}\\right).\n$$\nThese points belong to the hyperbola $x^2-y^2=1$ in the real plane $\\mathbb{R}^2$.\n\nIn addition, we appreciate that are polynomials for which Schr\\\"oder's method has the same dynamical behavior. Actually, if we introduce the new parameter\n$$\np=\\frac{m}{n},\n$$\nwe have that the circles $J_{m,n}$ defined in Theorem~\\ref{teo1} can be expressed as\n$$J_{p}=\\left\\{z\\in\\mathbb{C}; \\left|z+\\frac{p^2+1}{p^2-1}\\right|=\\frac{2p}{p^2-1}\\right\\}.$$\nTherefore Schr\\\"oder's method applied to polynomials with couples of multiplicities $(m,n)$ having the same quotient $p$ have the same Julia set.\n\nWe can schematize of the dynamics of Schr\\\"oder's method applied to polynomials $(z-1)^m(z+1)^n$, $m>n$ in the following way:\n\\begin{itemize}\n\\item When $p=m\/n\\to \\infty$, the Julia sets $J_p$ are circles that tends to collapse in the point $z=-1$.\n\\item When $p=m\/n\\to 1^+$, the Julia sets $J_p$ are circles with centers in the negative real line. 
Note that the centers\n$$\n-\\frac{p^2+1}{p^2-1}\\to -\\infty \\text{ when } p\\to 1^+\n$$\nand the radii\n$$\n\\frac{2p}{p^2-1}\\to \\infty \\text{ when } p\\to 1^+.\n$$\nSo when $p\\to 1^+$ the Julia sets are circles getting bigger and tending to ``explode'' into the limit case, given by the imaginary axis when $p=1$.\n\\end{itemize}\n\nIf we consider the presence of the roots $a$ and $b$, the dynamics of Schr\\\"oder's method applied to polynomials $(z-a)^m(z-b)^n$, $m>n$, can be summarized as a ``travel'' from a circle concentrated at the root with the smallest multiplicity, $b$, to circles with centers on the line connecting the roots $a$ and $b$ and radii tending to infinity, until the ``explosion'' into the limit case, given by the bisector of the two roots, when $p=1$.\n\nIn Figures~\\ref{fig1}--\\ref{fig3} we show some graphics of different Julia sets obtained when Schr\\\"oder's method is applied to polynomials $(z-a)^m(z-b)^n$, $m\\ge n\\ge 1$. We compare these dynamical planes with the ones obtained for Newton's method. For instance, in Figure~\\ref{fig1} we show the behavior when $p=m\/n$ is increasing. We appreciate how the Julia set for Schr\\\"oder's method (a circle) tends to collapse to the point $z=-1$, which in this case is the simple root. In the case of Newton's method, the Julia set is a kind of ``deformed parabola'': its ``axis of symmetry'' is the real line, it is open to the left, its ``vertex'' tends to the simple root $z=-1$, and its ``latus rectum'' tends to zero. We see how the basin of attraction of the multiple root $z=1$ invades more and more the basin of the simple root $z=-1$, as pointed out by Hern\\'andez-Paricio et al. \\cite{Gut}.\n\nIn Figure~\\ref{fig2} we see what happens when $p=m\/n\\approx 1$. The Julia sets for Schr\\\"oder's method are circles getting bigger as $p$ approaches the value 1 and exploding into a half-plane limited by the imaginary axis when $p=1$. 
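The center and radius formulas in Theorem~\\ref{teo1} make these limiting behaviors easy to check numerically. The following sketch (an illustration added here, not part of the original computations) verifies that pairs $(m,n)$ with the same quotient $p=m/n$ yield the same circle, and illustrates the collapse to $z=-1$ for large $p$ and the blow-up of center and radius as $p$ approaches 1:

```python
from fractions import Fraction

def circle(m, n):
    """Center (on the real axis) and radius of J_{m,n} from Theorem 1, m > n >= 1."""
    center = -Fraction(m * m + n * n, m * m - n * n)
    radius = Fraction(2 * m * n, m * m - n * n)
    return center, radius

# Pairs (m, n) with the same quotient p = m/n give the same Julia set.
assert circle(2, 1) == circle(4, 2) == circle(8, 4)

# As p -> infinity the circles collapse to the point z = -1 ...
c, r = circle(100, 1)
assert abs(c + 1) < 1e-3 and r < 0.03

# ... while as p -> 1+ the center tends to -infinity and the radius to
# infinity; note that center + radius = -(m - n)/(m + n) -> 0^-, so the
# growing circles approach the imaginary axis from the left.
c, r = circle(101, 100)
assert c < -100 and r > 100 and c + r == Fraction(-1, 201)
```

The identity center $+$ radius $=-(m-n)/(m+n)$ follows directly from the two formulas and is consistent with the ``explosion'' into the imaginary axis described above.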
In the case of Newton's method, the Julia set is again a ``deformed parabola'' with the real line as ``axis of symmetry'' and open to the left. However, as $p$ goes to 1, the ``vertex'' tends to $z=0$ and the ``latus rectum'' tends to infinity. As a limit case, when $p=1$ this ``deformed parabola'' becomes a straight line, actually the imaginary axis.\n\n\nFigure~\\ref{fig3} shows the circle corresponding to the Julia set of Schr\\\"oder's method applied to polynomials $(z-1)^m(z+1)^n$ with $p=m\/n=2$. We can also see the Julia set of Newton's method applied to such polynomials. In the case of Newton's method we observe that the behavior is not the same for values of $m$ and $n$ such that $p=m\/n=2$. The corresponding ``deformed parabola'' tends to be smoother when the values of $m$ and $n$ increase.\n\nFinally, in Figure~\\ref{fig4} we show the Julia set $J_{m,n,a,b}$ defined in Theorem~\\ref{teo2} in the case $m=2$, $n=1$, $a=1$, $b=i$ together with the corresponding Julia set for Newton's method. In these figures we appreciate the loss of symmetry with respect to the imaginary axis. This role is now played by the equidistant line between the roots $a$ and $b$.\n\nAs further work, we would like to explore the influence of the multiplicity on the Julia set of Newton's method applied to polynomials $(z-a)^m(z-b)^n$, $m\\ge n\\ge 1$, and its possible relationship with the study of Schr\\\"oder's method. 
In particular, we are interested in characterizing the main properties of the ``deformed parabolas'' that appear in the case of Newton's method.\n\n\n\n\n\\begin{figure}[H]\n\\centering \n\\twofractals{imagenes\/sch_m_2n_1.jpg}{Schr\\\"oder $m=2, \\, n=1.$}{imagenes\/newton_m_2n_1}{Newton $m=2, \\, n=1.$}\n\n\\twofractals{imagenes\/sch_m_5n_1}{Schr\\\"oder $m=5, \\, n=1.$}{imagenes\/newton_m_5n_1}{Newton $m=5, \\, n=1.$}\n\n\\twofractals{imagenes\/sch_m_8n_1}{Schr\\\"oder $m=8, \\, n=1.$}{imagenes\/newton_m_8n_1}{Newton $m=8, \\, n=1.$}\n\\caption{Basins of attraction of Schr\\\"oder's and Newton's methods applied to polynomials $(z-1)^n(z+1)^m$ for $n=1$, $m=2, 5, 8$.}\n\\label{fig1}\n\\end{figure} \n\n\\begin{figure}[H]\n\\centering \n\\twofractals{imagenes\/sch_n_6_m_6}{Schr\\\"oder $m=6, \\, n=6.$}{imagenes\/newton_n_6_m_6}{Newton $m=6, \\, n=6.$}\n\n\\twofractals{imagenes\/sch_m_7n_6}{Schr\\\"oder $m=7, \\, n=6.$}{imagenes\/newton_m_7n_6}{Newton $m=7, \\, n=6.$}\n\n\\twofractals{imagenes\/sch_m_8n_6}{Schr\\\"oder $m=8, \\, n=6.$}{imagenes\/newton_m_8n_6}{Newton $m=8, \\, n=6.$}\n\\caption{Basins of attraction of Schr\\\"oder's and Newton's methods applied to polynomials $(z-1)^n(z+1)^m$ for $n=6$, $m=6, 7, 8$.}\n\\label{fig2}\n\\end{figure} \n\n\n\\begin{figure}[H]\n\\centering \n\\imagen{imagenes\/sch_m_4n_2}{Schr\\\"oder $p=m\/n=2.$}{imagenes\/newton_m_4n_2}{Newton $m=4, \\, n=2.$}\n\n\\imagen{imagenes\/newton_m_6n_3}{Newton $m=6, \\, n=3.$}{imagenes\/newton_m_8n_4}{Newton $m=8, \\, n=4.$}\n\n\\caption{The first graphic shows the basin of attraction of Schr\\\"oder's method applied to polynomials $(z-1)^n(z+1)^m$ with $p=m\/n=2$. 
The other graphics show the basins of attraction of Newton's method applied to the same polynomials for different values of $m$ and $n$ with $p=m\/n=2$.}\n\\label{fig3}\n\\end{figure} \n\n\\begin{figure}[H]\n\\centering \n\\includegraphics[width=0.4\\textwidth]{imagenes\/sch-i_m2_n1.jpg}\\quad \n\\includegraphics[width=0.4\\textwidth]{imagenes\/newton1-i_m2_n1.jpg}\n\n\n\\caption{Basin of attraction of Schr\\\"oder's and Newton's methods applied to polynomials $(z-1)^2(z+i)$.}\n\\label{fig4}\n\\end{figure} \n\\newpage\n
\n\nWithin the framework of the leading-order weakly nonlinear analysis,\nhexagons typically become unstable to rolls further above threshold,\nwhere the amplitudes are larger and the resonant-triad interaction\nloses significance compared to interactions involving four modes\n\\cite{Pa60,SeSt62,Se65,Bu67,PaEl67,DaSe68}. This scenario of a\ntransition from hexagons to rolls has been confirmed in quite a number\nof experimental investigations\n\\cite{SoDo70,DuBe78,Ri78,WaAh81,BoBr91,PaPe92}, and quite commonly it\nhas been assumed that hexagon patterns in NB convection are confined\nto the regime close to onset.\n\nTwo convection experiments using SF$_6$ as the working fluid\n\\C{AsSt96,RoSt02} have shown, however, that even in the strongly\nnonlinear regime stable hexagon patterns can be observed. Under OB\nconditions \\cite{AsSt96} hexagons were found at relatively high\nRayleigh numbers, $\\epsilon \\equiv (R-R_c)\/R_c \\approx 3.5$. Due to the mid-plane symmetry of OB convection hexagons\nwith up-flow in the center coexist stably with down-flow hexagons in\nthis regime. In experiments using SF$_6$ under NB conditions\n\\cite{RoSt02} the hexagons that appear at threshold were replaced by\nrolls for somewhat larger heating and then reappeared in the strongly\nnon-linear regime near $\\epsilon ={\\mathcal O}(1)$. The\nrestabilization was attributed to the large compressibility of SF$_6$\nnear its critical point \\cite{RoSt02}. The hexagons that regain\nstability were termed {\\em reentrant hexagons}.\n\nRecent numerical computations \\cite{MaRi05} have demonstrated that \nhexagons can restabilize in NB convection even if the fluid is {\\em\nincompressible}. For instance, in water hexagons that are unstable at\n$\\epsilon=0.15$ can already restabilize at\n$\\epsilon=0.2$. 
The origin of this reentrance was traced back to the\nexistence of stable hexagons in OB convection at large Rayleigh\nnumbers, and the dependence of the NB effects on the Rayleigh number.\n\nAt low Prandtl numbers ($Pr\\simeq 1$) and further above onset OB\nconvection exhibits a new state: {\\em spiral defect chaos} (SDC). It\nwas first found experimentally \\C{MoBo93,MoBo96,LiAh96} and then\ninvestigated theoretically using simple models \\cite{XiGu93,CrTu95} as\nwell as simulations of the full fluid equations \\cite{DePe94a,ChPa03}.\nThis fascinating state of spatio-temporal chaos is characterized by\nrotating spirals with varying numbers of arms and of different size,\nwhich appear and disappear irregularly and interact with each other\nand with other defects. SDC arises from the roll state at a threshold \nthat can be as low as $\\epsilon=0.1$ in the limit of small $Pr$. It\nis driven by large-scale flows that are induced by roll curvature and\nhave a strength that is proportional to $Pr^{-1}$ \\cite{ChPa03}.\n\nSo far, strongly non-linear NB convection has been studied mostly for\nlarge Prandtl numbers \\cite{YoRi03b,MaPe04,MaRi05}, but little is\nknown for small ( $Pr\\simeq 1$) or very small Prandtl numbers ($Pr\\ll \n1$). In particular, whether reentrant hexagons exist at large\n$\\epsilon$ in the presence of the large-scale flows that arise at low\n$Pr$, and how NB effects impact spiral defect chaos are interesting\nquestions, which we address in this paper. \n\nHere we study NB convection in gases with small Prandtl numbers.\nSpecifically, we consider parameters corresponding to convection in\nCO$_2$ and SF$_6$ ($Pr\\simeq 0.8$) and in a H$_2$-Xe mixture\n($Pr=0.17$). We show that reentrant hexagons are possible in\nconvection in CO$_2$. For spiral defect chaos in SF$_6$ we find that\nNB effects promote small convection cells (`bubbles'). In H$_2$-Xe,\ncloser to threshold, roll-like structures dominate. 
In both cases the\nNB effects reduce the spiral character of the pattern. We quantify the\nimpact of the NB effects on spiral defect chaos using geometric\ndiagnostics that we have proposed recently \\cite{RiMa06}. \n\nThe paper is organized as follows. In Sec.~\\ref{sec:basicequations} we\nbriefly review the basic equations, emphasizing how our computations\nfocus on weakly non-Boussinesq effects, but strongly nonlinear\nconvection. In Sec.~\\ref{sec:co2stability} we present the results for\nthe linear stability of hexagons and rolls in CO$_2$ for a range of\nparameters accessible experimentally. To compare with experiments, we\nstudy the influence of different lateral boundary conditions on the\ntransition from hexagons to rolls in Sec.~\\ref{sec:sim-co2}. In\nSec.~\\ref{sec:sim-sf6} we discuss spiral defect chaos in SF$_6$ under NB\nconditions. The stability of hexagons and spiral defect chaos in\nfluids with very low Prandtl number ($Pr=0.17$) is studied in a\nH$_2$-Xe mixture in Sec.~\\ref{sec:h2xe}. Finally, conclusions\nare drawn in Sec.~\\ref{sec:conclusions}.\n\n\n\\section{Basic equations \\LB{sec:basicequations} }\n\nThe basic equations that we use for the description of NB\nconvection have been discussed in detail previously\n\\cite{YoRi03b,MaRi05}. We therefore give here only a brief summary. \nWe consider a horizontal fluid layer of thickness $d$, density $\\rho$,\nkinematic viscosity $\\nu$, heat conductivity $\\lambda$, thermal\ndiffusivity $\\kappa$, and specific heat $c_p$. The system is heated\nfrom below (at temperature $T_1$) and cooled from above (at\ntemperature $T_2 < T_1$). \n\nTo render the governing equations and boundary conditions\ndimensionless we choose the length $d$, the time $d^{2}\/\\kappa_0$, the\nvelocity $\\kappa_0\/d$, the pressure $\\rho_0\\nu_0 \\kappa_0\/d^{2}$, and\nthe temperature $T_s=\\nu_0 \\kappa_0\/\\alpha_0 g d^3$ as the respective\nscales. 
The subscripted quantities refer to the respective values at\nthe middle of the fluid layer in the conductive state. The\nnon-dimensionalization gives rise to two dimensionless quantities: the\nPrandtl number $Pr=\\nu_0\/\\kappa_0$, and the Rayleigh number\n$R=\\alpha_0 \\Delta T g d^3\/\\nu_0 \\kappa_0$. Furthermore, we write the\nequations in terms of the dimensionless momentum density $v_i=\\rho d u_i\/\\rho_0 \\kappa_0$ instead of the velocities $u_i$. The\ndimensionless form of the temperature $\\hat T =T\/T_s$, heat\nconductivity $\\hat \\lambda =\\lambda\/\\lambda_0$, density $\\hat \\rho\n=\\rho\/\\rho_0$, kinematic viscosity $\\hat \\nu =\\nu\/\\nu_0$, and specific\nheat $\\hat c_p =c_p\/c_{p0}$ will be used in the ensuing equations and\nthe hats dropped for clarity. In dimensionless form the equations for\nthe momentum, mass conservation and heat are then given, respectively,\nby\n\\begin{eqnarray}\n\\frac{1}{Pr}\\left(\\partial_tv_i+v_j\\partial_j\\left(\\frac{v_i}{\\rho}\\right)\\right)&=&-\\partial_i\np \\nonumber\\\\\n&&+\\delta_{i3}\\left(1+\\gamma_1(-2 z+\\frac{\\Theta}{R})\\right)\\Theta\\nonumber \\\\\n&&+\\partial_j\\left[\\nu\\rho\\left(\\partial_i(\\frac{v_j}{\\rho})+\\partial_j(\\frac{v_i}{\\rho})\\right)\\right],\n\\LB{e:v}\\\\\n\\partial_jv_j&=&0, \\LB{e:cont}\\\\\n\\partial_t\\Theta+\\frac{v_j}{\\rho}\\partial_j\\Theta\n& =&\\frac{1}{\\rho\nc_p}\\partial_j(\\lambda\\partial_j\\Theta)-\\gamma_3\\partial_z\\Theta\\nonumber \\\\\n&&-R\\frac{v_z}{\\rho}(1+\\gamma_3z).\\LB{e:T}\n\\end{eqnarray}\nwith the dimensionless boundary conditions \n\\begin{eqnarray}\n\\vec{v}(x,y,z,t)=\\Theta(x,y,z,t)=0 \\mbox{ at } z= \\pm \\frac{1}{2}.\\LB{e:bc}\n\\end{eqnarray}\nHere $\\Theta$ is the deviation of the temperature field from the basic\nconductive profile. 
Summation over repeated indices is assumed.\n\nWe consider the NB effects to be weak and retain in a\nTaylor expansion of all material properties only the leading-order\ntemperature dependence {\\it beyond} the OB approximation. For\nthe density this implies also a quadratic term with coefficient\n$\\gamma_1$. It contributes, however, only to the buoyancy term in\n(\\ref{e:v}); in all other expressions it would constitute only a\nquadratic correction to the leading-order NB effect. Thus,\nthe remaining temperature dependence of the fluid parameters $\\rho$,\n$\\nu$, $\\lambda$, and $c_p$ in (\\ref{e:v},\\ref{e:cont},\\ref{e:T}) is\ntaken to be linear\n\\begin{eqnarray}\n\\rho(\\Theta)&=&1-\\gamma_0(-z+\\frac{\\Theta}{R}),\\LB{e:rhoTh}\\\\\n\\nu(\\Theta)&=& 1+\\gamma_2(-z+\\frac{\\Theta}{R}),\\LB{e:nuTh}\\\\\n\\lambda(\\Theta)&=&1+\\gamma_3(-z+\\frac{\\Theta}{R}),\\LB{e:lambdaTh}\\\\\nc_p(\\Theta)&=&1+\\gamma_4(-z+\\frac{\\Theta}{R}).\\LB{e:cpTh}\n\\end{eqnarray}\n\nThe coefficients $\\gamma_i$ give the difference of the respective\nfluid properties across the layer. They depend therefore linearly on\nthe Rayleigh number, \n\\begin{eqnarray}\n\\gamma_i(\\Delta T)=\\gamma_i^{c}\\,\\left(\\frac{R}{R_c} \\right) \n=\\gamma_i^{c} \\, (1+\\epsilon) ,\n\\end{eqnarray}\nwhere $\\gamma_i^{c}$ is the value of $\\gamma_i$ at the onset of convection\nand $\\epsilon\\equiv (R-R_c(\\gamma_i^{c}))\/R_c(\\gamma_i^{c})$ is the\nreduced Rayleigh number. \n\nIn analogy to \\cite{Bu67}, we further omit NB terms that\ncontain cubic nonlinearities in the amplitudes $v_i$ or $\\Theta$, as\nthey arise from the expansion of the advection terms $v_j\n\\partial_j(v_i\/\\rho)$ and $(v_j\/\\rho)\\partial_j \\Theta$ when the\ntemperature-dependence of the density is taken into account. 
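Because the $\gamma_i$ depend linearly on the Rayleigh number, their values at any reduced Rayleigh number follow immediately from the onset values; in particular they are exactly doubled at $\epsilon=1$. A trivial numerical sketch (added for illustration; the sample value $\gamma_0^{c}=0.0685$ is taken from Table~\\ref{t:gamma-co2} below, CO$_2$ at $T_0=40^\circ$C):

```python
def gamma_at(gamma_c, eps):
    """NB coefficient at reduced Rayleigh number eps: gamma_i = gamma_i^c * (1 + eps)."""
    return gamma_c * (1.0 + eps)

gamma0_c = 0.0685  # gamma_0^c for CO2 at T_0 = 40 C (Table 1 of the text)
assert gamma_at(gamma0_c, 0.0) == gamma0_c        # onset value
assert gamma_at(gamma0_c, 1.0) == 2.0 * gamma0_c  # doubled at eps = 1
```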
Since we\nwill be considering Rayleigh numbers up to twice the critical value,\nwhich implies enhanced NB effects, these approximations\nmay lead to quantitative differences compared to the fully\nNB system, even though the temperature-dependence of the\nmaterial properties themselves is in most situations well described by\nthe linear (or quadratic in the case of the density) approximation. \n\nTo quantify the overall strength of the NB-effects we use Busse's\nNB parameter $Q$, which is given by\n\\begin{eqnarray}\nQ = \\sum_{i=0}^{4}\\gamma_i^{c} {\\cal P}_i,\\LB{e:busseq}\n\\end{eqnarray}\nwhere the quantities ${\\cal P}_i$ are linear functions of $Pr^{-1}$.\nThe NB parameter $Q$ quantifies the breaking of the up-down symmetry,\nwhich renders at most one of the two types of hexagons stable. Gases\nhave a positive value of $Q$ and exhibit hexagons with down-flow in\nthe center ($g$-hexagons), whereas liquids have negative $Q$ and show\nhexagons with up-flow ($l$-hexagons).\n\nWe focus in this paper on the stability properties of patterns in the\nstrongly nonlinear regime. They are determined by a Galerkin method\n(e.g. \\cite{BuCl79a}). We use a Fourier expansion on a hexagonal\nlattice in the lateral directions. The Fourier wave vectors ${\\bf q}$\nare constructed as linear combinations of the hexagon basis vectors\n${\\bf b}_1 =q(1,0)$ and ${\\bf b}_2 =q(1\/2, \\sqrt{3}\/2)$ with ${\\bf\nq} = m {\\bf b}_1 + n {\\bf b}_2$ where the integers $m$ and $n$ are in\nthe range $|m {\\bf b}_1+n{\\bf b}_2 | \\le n_q q$. The largest\nwavenumber is then $n_q q$ and the number of Fourier modes retained is\ngiven by $1+6\\sum_{j=1}^{n_q}j$. Typically we use $n_q =3 $. The top\nand bottom boundary conditions are satisfied by using appropriate\ncombinations of trigonometric and Chandrasekhar functions in $z$\n(\\cite{Ch61,Bu67}). In most of the computations we use $n_z=6$ modes\nfor each field in Eq.(\\ref{e:v},\\ref{e:cont},\\ref{e:T}). 
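The quoted mode count can be checked by direct enumeration of the lattice vectors (a small sketch added here, not the Galerkin code itself). For the hexagonal basis above, $|m{\bf b}_1+n{\bf b}_2|^2=(m^2+mn+n^2)q^2$:

```python
def hex_mode_count(n_q):
    """Count lattice vectors m*b1 + n*b2 with |m*b1 + n*b2| <= n_q*q,
    where b1 = q*(1, 0) and b2 = q*(1/2, sqrt(3)/2), so that
    |m*b1 + n*b2|^2 = (m^2 + m*n + n^2) * q^2."""
    return sum(1
               for m in range(-2 * n_q, 2 * n_q + 1)
               for n in range(-2 * n_q, 2 * n_q + 1)
               if m * m + m * n + n * n <= n_q * n_q)

# Matches the count 1 + 6*sum_{j=1}^{n_q} j stated in the text.
for n_q in (1, 2, 3):
    assert hex_mode_count(n_q) == 1 + 6 * sum(range(1, n_q + 1))

assert hex_mode_count(3) == 37  # the typical truncation n_q = 3
```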
The linear\nanalysis yields the critical Rayleigh number $R_c$ as well as the\ncritical wavenumber $q_c$. Both depend on the NB-coefficients\n$\\gamma_i^{c}$ which in turn depend on $R_c$. Thus, in principle one\nobtains an implicit equation for the $\\gamma_i^{c}$. The shift in the\ncritical Rayleigh number away from the classical value $R_c=1708$ due\nto the NB-effects is, however, quite small (less than 1 percent) and\ntherefore the resulting change in the $\\gamma_i^{c}$ is negligible. In\nthis paper we therefore choose the $\\gamma_i^{c}$ corresponding to\n$R_c=1708$. \n\n\nTo investigate the nonlinear hexagon solutions, we start with\nthe standard weakly nonlinear analysis to determine the\ncoefficients of the three coupled amplitude equations for the\nmodes making up the hexagon pattern. To obtain the fully\nnonlinear solutions requires the solution of a set of nonlinear\nalgebraic equations for the expansion coefficients with respect\nto the Galerkin modes. This is achieved with a Newton solver for\nwhich the weakly nonlinear solutions serve as convenient\nstarting solutions. In the Galerkin code amplitude instabilities\nare tested by linear perturbations of the expansion\ncoefficients. In addition, modulational instabilities are considered, which involve the introduction of Floquet\nmultipliers $\\exp (i {\\bf s}\\cdot (x,y))$ in the Fourier ansatz for the linear perturbations \nof the Galerkin solutions. \n\nWe also study the temporal evolution of the system. For that we employ\na Fourier spectral code on a rectangular grid $(i\\,dq_x,j\\,dq_y)$,\n$i,j=1...N$, with $dq_y\/dq_x=\\sqrt{3}\/2$ to accommodate perfect\nhexagonal patterns. The same vertical modes are used as in the\nGalerkin stability code \\cite{DePe94a}. To solve for the time\ndependence a fully implicit scheme is used for the linear terms,\nwhereas the nonlinear parts are treated explicitly (second-order\nAdams-Bashforth method). 
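The time-stepping scheme just described can be illustrated on a scalar model problem $du/dt=Lu+N(u)$ (a schematic sketch only; the actual code is a multidimensional pseudospectral solver). The linear term is treated implicitly and the nonlinear term explicitly with second-order Adams-Bashforth, with a plain explicit Euler step to start:

```python
import math

def imex_ab2(u0, lin, nonlin, dt, nsteps):
    """Sketch of the scheme described in the text for du/dt = lin*u + nonlin(u):
    implicit treatment of the linear term, explicit second-order
    Adams-Bashforth for the nonlinear term."""
    u = u0
    n_prev = nonlin(u)
    # First step: explicit Euler for the nonlinear part.
    u = (u + dt * n_prev) / (1.0 - dt * lin)
    for _ in range(nsteps - 1):
        n_curr = nonlin(u)
        u = (u + dt * (1.5 * n_curr - 0.5 * n_prev)) / (1.0 - dt * lin)
        n_prev = n_curr
    return u

# Test problem u' = -u - u^3, u(0) = 1, whose exact solution satisfies
# u(t)^2 = exp(-2t) / (2 - exp(-2t)).
dt, T = 1e-3, 1.0
u_num = imex_ab2(1.0, -1.0, lambda u: -u**3, dt, int(T / dt))
u_exact = math.sqrt(math.exp(-2 * T) / (2 - math.exp(-2 * T)))
assert abs(u_num - u_exact) < 1e-2
```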
The time step is typically taken to be\n$t_v\/500$, where $t_v$ is the vertical diffusion time. We have tested\nthat the stability regimes obtained from the Galerkin analysis are\nconsistent with the direct numerical simulations. Both codes employed\nin this paper have been kindly provided by W. Pesch\n\\cite{DePe94a,Pe96}. \n\n\\section{Reentrant hexagons in CO$_2$ \\LB{sec:co2stability}}\n\nIn this paper we investigate specific scenarios for convection in\ngases that should be experimentally realizable. We focus in this\nsection on CO$_2$ in a conventional range for the layer thickness,\npressure, and temperature. Table \\ref{t:gamma-co2} provides the NB\ncoefficients and the $Q$-value at the onset of convection for a\nrepresentative range of the mean temperature $T_0$ in a layer of\nthickness $d=0.08\\,cm$ \\footnote{These values were obtained with a\ncode kindly provided by G. Ahlers.}. \n\n\\begin{table}\n\\begin{tabular}{|c|cccccccc|}\\hline \n$T_0$ & $\\Delta T_c$ & $Pr$ & $\\gamma_0^{c}$ &\n$\\gamma_1^{c}$ & $\\gamma_2^{c} $ & $\\gamma_3^{c}$ &\n$\\gamma_4^{c}$ & $Q$ \\\\ \\hline \n20 & 9.43 & 0.87 & 0.0486 & -0.0669 & 0.0779 &0.0236 & -0.0251& 1.199 \\\\ \\hline \n40 & 15.52 & 0.84 & 0.0685 & -0.0883 & 0.1132 &0.0508 & -0.0184& 1.724 \\\\ \\hline \n60 & 23.80 & 0.82 & 0.0931 & -0.1148 & 0.1566 &0.0919 & -0.0074 & 2.430 \\\\ \\hline \n\\end{tabular} \n\\caption{Values for the critical temperature difference $\\Delta T_c$ (in $^\\circ C$), the \nPrandtl number $Pr$, NB coefficients\n$\\gamma_i^{c}$, and Busse's parameter $Q$ for CO$_2$ at the\nonset of convection for three values of the mean temperature (in\n$^\\circ C$). The values correspond to a layer thickness of \n$d=0.08\\,cm$ and a pressure of $P=300$\\,psi. 
\\LB{t:gamma-co2} } \n\\end{table} \n\n\n\\subsection{Amplitude instabilities}\n\nIn our analysis we first concentrate on spatially periodic solutions\nwith their wavenumber being fixed at the critical wavenumber and\ndiscuss their domains of existence and their stability with respect to\namplitude instabilities, which do not change the wavenumber. For\nthree different cells with thicknesses $d=0.07,\\, 0.08, \\mbox{ and }\n0.09$\\, cm, respectively, we consider the range $0 < \\epsilon < 1$ at\na pressure of $P=300$\\,psi. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{arti-co2-arti-3.eps}\n\\caption{Stability regions for hexagons and rolls in CO$_2$ with respect to amplitude \nperturbations for three fluid depths: $d=0.07\\, cm$ (dotted\nlines), $d=0.08\\, cm$ (full lines), $d=0.09\\, cm$ (dot-dashed line).\nPressure is kept at $P=300$\\,psi.\nThick lines: stability boundaries for hexagons. Thin lines: stability\nboundaries for rolls. For a given depth, rolls are stable above the thin \nline, and hexagons are unstable in the inner region of the thick line. \\LB{fig:co2-ampli3d}}\n\\end{figure}\n\\end{center}\n\nThe results of the stability analysis for hexagons and rolls are shown\nin Fig.\\ref{fig:co2-ampli3d}. The hexagons are linearly stable for\nvery small $\\epsilon$. For a given layer thickness and not too high\nmean temperature $T_0$ the hexagons become unstable as the control\nparameter is increased. The hexagon patterns then undergo a second\nsteady bifurcation as the control parameter is increased further and\nbecome stable again. Such restabilized hexagons have been termed {\\em\nreentrant hexagons} \\cite{AsSt96,RoSt02,MaRi05}. As the mean\ntemperature is increased or the layer thickness is decreased the\ncritical heating and with it the NB effects increase. 
This shifts the\npoint of reentrance to lower $\\epsilon$ and the lower stability limit\nto higher $\\epsilon$, decreasing the $\\epsilon$-range over which the\nhexagons are unstable, until the two limits merge at a temperature\n$T_m$. For $T_0>T_m$ the hexagons are amplitude-stable over the whole\nrange of $\\epsilon$ considered ($0 \\le \\epsilon \\le 1$).\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=270]{artico2-To40.ps} \n\\caption{ Stability regions for hexagons in CO$_2$ with respect\nto amplitude (dashed line) and side-band perturbations \n(solid line) for layer depth $d=0.08\\,\ncm$, pressure $P=300$\\,psi, mean temperature $T_0=40\\,^oC$, and Prandtl number $Pr=0.84$.\nThe NB coefficients $\\gamma_i^{c}$ are reported in Tab. \\ref{t:gamma-co2}.\nHexagons are stable with respect to amplitude perturbations outside the\ndashed-line region, and stable with respect to side-band\nperturbation inside the solid-line region.\\LB{fig:side-co2}}\n\\end{figure}\n\\end{center}\n\nWe have also computed the stability of rolls with respect to amplitude\nperturbations. The corresponding stability limits are indicated in\nFig.\\ref{fig:co2-ampli3d} by thin lines. Rolls are stable above these\nlines. As the NB effects become stronger the stabilization\nof rolls is shifted to larger $\\epsilon$. In contrast to the\nhexagons, the rolls do not undergo a second bifurcation within the\nparameter regime investigated and remain amplitude-stable beyond\n$\\epsilon =1.0$. For strong NB\neffects one has therefore a large range of parameters over which the\ncompeting rolls and hexagons are both linearly amplitude-stable. \n\nThe amplitude-stability limits of the hexagons and rolls depend on\ntheir wavenumber. This is illustrated for the hexagons in Fig.\n\\ref{fig:side-co2} for a mean temperature of $T_0=40\\,^o C$. 
The\ninstability region with respect to amplitude perturbations forms a\nbubble-like closed curve, inside of which the hexagons are unstable. \n\nIt is worth mentioning that the stability limits for hexagons in\nCO$_2$ are quite similar to those of NB convection in\nwater, except that in CO$_2$ the NB effects increase\nrather than decrease with increasing mean temperature \\cite{MaRi05}.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=270]{side-to27-co2.ps}\n\\caption{ Stability regions for hexagons in CO$_2$ with respect to \nside-band perturbations for layer thickness $d=0.052\\, cm$, pressure\n$P=300$\\,psi, mean temperature $T_0=27^o C$, and Prandtl number $Pr=0.86$. The NB coefficients\nare $\\gamma_0^{c}=0.2053$, $\\gamma_1^{c}= -0.2877$, $\\gamma_2^{c}=0.3267$, $\\gamma_3^{c}=0.1152$, and $\\gamma_4^{c}=-0.086$.\nThe hexagons are stable with respect to side-band perturbations in the region \ninside the solid lines and unstable outside. The dashed line corresponds to\nthe neutral curve. \\LB{fig:side-to27}} \n\\end{figure}\n\\end{center}\n\n\n\\subsection{Side-Band Instabilities}\n\nIn systems with sufficiently large aspect ratio side-band instabilities\ncan be the most relevant instabilities. Using the Galerkin method, we\nhave studied the stability of the hexagons with respect to long- and\nshort-wave side-band perturbations for $T_0=40^oC$. The results are\nshown in Fig.\\ref{fig:side-co2}. We find that over the whole range\n$0\\le \\epsilon \\le 1$ the only relevant side-band perturbations are\nlong-wave and steady, as is the case in the weakly nonlinear regime.\nThe same is true also in water with higher Prandtl number \\C{MaRi05}.\nIn this parameter regime the stability region consists of two\ndisconnected domains, reflecting the reentrant nature of the hexagons.\nThe stability domain near onset is very small and closes up as the\namplitude stability limit is reached. 
In the reentrant regime the\nstable domain opens up again in an analogous fashion when the\namplitude-stability limit is crossed. Note that the stability\nboundaries are leaning toward lower wavenumbers. Thus, stable\nreentrant hexagon patterns are expected to have wavenumbers below\n$q_c$. This is related to the fact that in the OB case hexagons can be\nstable for large $\\epsilon$, but they are side-band stable only for\nsmall wavenumbers \\cite{ClBu96}.\n\nAs the mean temperature is increased the bubble of the amplitude\ninstability shrinks and eventually disappears, as shown in\n\\F{fig:side-to27} for a cell with thickness $d=0.052\\, cm$ and mean\ntemperature $T_0 =27\\,^o C$. As before, the relevant side-band\ninstabilities are long-wave all along the stability limits and with\nincreasing $\\epsilon$ the wavenumber range over which the hexagons are\nstable shifts towards smaller wavenumbers. For these parameters the\nregion of side-band-stable hexagons reaches without interruption from\nthe strongly nonlinear regime all the way down to threshold. For yet\nstronger NB effects the range of stable wavenumbers widens. \n\n\\section{Comparison with experiments in CO$_2$ \\LB{sec:sim-co2}}\n\nBodenschatz {\\em et al.} \\cite{BoBr91} carried out a set of experiments on\nconvection in CO$_2$ in a cylindrical cell with aspect ratio\n$\\Gamma\\sim172$, thickness $d=0.052\\, cm$, and pressure $P=300$\\, psi.\nUnder these conditions NB effects are relevant. In the experiments a\nweakly hysteretic transition from hexagons to rolls was found near\n$\\epsilon=0.1$. Noting that this transition point was below the\namplitude instability of hexagons to rolls as predicted by weakly\nnonlinear theory, the authors interpreted their results in terms of\nthe heterogeneous nucleation of rolls by the sidewalls. 
They found\nthat for small $\\epsilon$ the concentric rolls induced by the sidewall\nheating remained confined to the immediate vicinity of the sidewalls;\nhowever, as $\\epsilon$ was increased the rolls invaded the hexagons\nand filled the whole cell, inducing a transition from hexagons to\nrolls. \n\nA comparison of the experimental findings with the stability results\nshown in Fig. \\ref{fig:side-to27} shows that indeed the transition\ncannot be due to an amplitude instability of the hexagons. In fact, in\nthis regime the NB effects are so strong that, in contrast to the\npredictions of the weakly nonlinear theory, the hexagons do not\nundergo an amplitude instability at all. To clarify the influence of\nthe sidewalls and to assess the significance of the side-band\ninstabilities for the transition from hexagons to rolls, we perform\ndirect simulations of the Navier-Stokes equations\n(\\ref{e:v},\\ref{e:cont},\\ref{e:T},\\ref{e:bc}) for two different sets\nof boundary conditions \\footnote{In the experimental setup the top\ntemperature is held constant at $T=12.84 ^oC$, and therefore the mean\ntemperature changes as $\\epsilon$ is increased. In our computations,\nhowever, we keep $T_0$ fixed. Since the transition occurs quite close\nto threshold this is a good approximation to the experimental\nprocedure.}: \n\ni) periodic boundary conditions,\n\nii) concentric rolls as boundary conditions. In our computations this\ntype of boundary condition is generated by a suitably patterned \nheating in the interior of the fluid. In the experiments concentric\nrolls near the side walls were generated by a side-wall heating\n\\cite{BoBr91}. \n\n\ni) According to Fig. \\ref{fig:side-to27}, for $\\epsilon=0.3$ hexagons\nwith wavenumbers $q>2.98$, which includes the critical\nwavenumber, are unstable with respect to side-band instabilities. 
To\ntest whether these side-band instabilities trigger a transition to\nrolls we perform numerical simulations with periodic boundary\nconditions and hexagons as initial conditions. Fig.\n\\ref{fig:perio-co2} presents some snapshots of the ensuing temporal\nevolution in a cell of size $L=8\\cdot 2\\,\\pi\/q_c=16.11$ using\n$128\\times 128$ Fourier modes. More precisely, to allow perfect\nhexagons in a rectangular container the container cannot be square. In\nour simulations we use $L_x=L$ and $L_y=\\sqrt{3}L\/2$. The sideband\ninstability of the initially almost perfect hexagon pattern (cf.\n\\F{fig:perio-co2}(a)) induces a shearing of the pattern (cf.\n\\F{fig:perio-co2}(b)). At the same time a few penta-hepta defects\narise and some hexagonal convection cells tend to connect with each\nother forming short roll-like structures. However, as time evolves the\nsystem becomes progressively more ordered again and eventually, after\nlosing a number of convection cells, a defect-free hexagon pattern with\na smaller wavenumber is established (cf. \\F{fig:perio-co2}(c) at \n$t\\simeq 60t_v$). \n\nThus, while roll-like features appear for this value of $\\epsilon$,\nwith periodic boundary conditions no transition to rolls occurs and\nthe system relaxes to a new ordered hexagon pattern. Only for yet\nlarger values of $\\epsilon$ do the roll-like structures that arise in the\nintermediate, disordered state take over and lead to a transition to\nrolls induced by the sideband instabilities.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\textwidth]{R2193-a-b-c-2.eps}\n\\caption{ Numerical simulation corresponding to a layer of \nCO$_2$ with thickness $d=0.052\\,cm$, pressure $P=316$\\,psi, \nmean temperature $T_0=27.3^o C$, Prandtl number $Pr=0.87$, and control parameter $\\epsilon=0.3$\n(cf. Fig.\\ref{fig:side-to27}). The size of the integration domain is\n$L=8\\cdot 2\\,\\pi\/q_c=16.11$ and the boundary conditions are\nperiodic. 
As initial condition a perfect hexagon lattice with \nwavenumber $q_c=3.12$ has been used. a) corresponds to $t=0$, \nb) to $t=18t_v$, and c) to $t=60t_v$.\\\\ \\LB{fig:perio-co2}} \n\\end{figure}\n\\end{center}\n\nii) clearly, the simulations for periodic boundary conditions do not match\nthe experimental results described above, where a transition from\nhexagons to rolls occurs already for $\\epsilon \\gtrsim 0.11$. To\naddress this disagreement we take into account the fact that in the\nexperiments the side walls promote the nucleation of rolls\n\\cite{BoBr91}.\n\nStrictly speaking, the code we are using does not allow non-periodic\nboundary conditions. To mimic the experimentally used cylindrical\ncell we employ a step ramp in $\\epsilon$ that reduces $\\epsilon$ to\nvalues well below $\\epsilon=0$ outside a circle of radius $r=0.45L$\nwith $L=16\\cdot 2\\pi\/q_c=32.22$ \\C{DePe94a}. To induce concentric\nrolls near the sidewalls we introduce for $r> 0.45L$ an additional\nheating in the interior of the fluid in the form of concentric rings.\nUsing hexagonal initial conditions in the bulk, this leads to an\ninitial state as shown in Fig. \\ref{fig:boun-co2}a.\n\nFig. \\ref{fig:boun-co2}a,b shows two snapshots at $t=t_v$ and\n$t=158t_v$ demonstrating how the rolls induced by the side walls\ninvade the carefully prepared hexagonal state in the bulk already for\n$\\epsilon=0.2$. This is well below the $\\epsilon$-value for which with\nperiodic boundary conditions the hexagons persisted even through the\nside-band instability. The final steady state consists of concentric\nrolls as observed in the experiments (cf. Fig. 5 in \\cite{BoBr91}).\nFor lower $\\epsilon$, however, the experimentally observed final state\nconsists of hexagons in the bulk of the system surrounded by\nwall-induced concentric rolls (cf. Fig. 4 in \\cite{BoBr91}). We find\nthis also in our numerical simulations, as shown in\n\\F{fig:boun-co2}(c-d). 
There the forcing of rolls is identical to that\nin \\F{fig:boun-co2}(a-b) but $\\epsilon=0.05$. Starting with random\ninitial conditions, the forcing gives rise to a ring contained in the\nsquare integration domain. At the beginning of the simulation\n(\\F{fig:boun-co2}(c)) the rolls created by the forcing invade the\ninterior of this small system. However, as time progresses the rolls\nretreat from the central region of the cell, and the final steady\nstate (for $t \\gtrsim 165\\,t_v$) consists of stable hexagons\nsurrounded by a couple of concentric rolls in addition to those \ninduced by the forcing. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth]{ergT.000005-ergT.000790-co2-to27-qc-eps0.2-forcing.rolls.eps}\\\\\n\\vspace{0.5cm}\n\\includegraphics[width=0.4\\textwidth]{ergT-eps0.05-for.rolls-init.random.eps}\n\\caption{ Numerical simulation in a cell of CO$_2$ with thickness\n$d=0.052\\,cm$, pressure $P=300$\\,psi, mean temperature $T_0=27^o\nC$, and Prandtl number $Pr=0.86$. The size of the integration domain\nis $L=16\\cdot 2\\,\\pi\/q_c=32.22$, and the boundary conditions are rings\nof concentric rolls generated with an external forcing. \\\\ For\n$\\epsilon=0.2$ (upper snapshots) hexagon initial conditions have\nbeen used, with $t=t_v$ (left snapshot) and $t=158\\,t_v$ (right). The\nlower snapshots correspond to a simulation with $\\epsilon=0.05$ and\nrandom initial conditions, at $t=40\\,t_v$ \n(left) and $t=158\\,t_v$ (right).\\LB{fig:boun-co2}} \n\\end{figure}\n\\end{center}\n\nThus, our simulations suggest that the experimentally observed\ntransition from hexagons to rolls is neither due to amplitude\ninstabilities nor to side-band instabilities. Rather, there is a large\nrange of parameters in which hexagons and rolls are both linearly\nstable and the final state is selected by one type of pattern invading\nthe other. 
The transition to rolls at these low values of $\\epsilon$\nis made possible by the boundaries, which provide a seed for the\nnucleation of rolls. We expect that by applying a forcing that is\nconfined to the region near the boundaries and that replaces the rolls\nby hexagons the transition to rolls could be shifted to substantially\nlarger values of $\\epsilon$. Such a forcing could be achieved by a\npatterned heating of the interior of the fluid \\cite{SeSc02} or\npossibly by a suitable geometric patterning of the bottom plate\n\\cite{Bopriv}. \n\n\n\\section{NB Spiral Defect Chaos in SF$_6$\\LB{sec:sim-sf6}}\n\nA fascinating state observed in convection at low Prandtl numbers is\nspiral defect chaos. It is characterized by self-sustained chaotic\ndynamics of rotating spirals, as well as dislocations and\ndisclinations, and arises in fluids with $Pr\\lesssim 1$\n\\C{MoBo93,XiGu93,DePe94a,CrTu95,LiAh96} in a parameter range where\nstraight rolls are linearly stable. Spiral defect chaos has so far\npredominantly been investigated under OB conditions, in which\nup-flows and down-flows are equivalent. \n\nAs mentioned before, NB effects break the up-down symmetry and\ndifferent flow structures may be predominantly associated with up-flow\nand down-flow, respectively. Moreover, in the absence of the OB\nsymmetry a resonant triad interaction is allowed. If it is strong\nenough it leads to the formation of hexagons. For weaker interaction\none may still expect an enhancement of cellular rather than roll-like\nstructures.\n\nTo investigate the impact of NB effects on spiral defect chaos we\nconsider convection in a layer of SF$_6$. This gas has been used\npreviously in experimental convection studies under usual laboratory\nconditions \\C{BoCa92}, and near the thermodynamical critical point\n\\C{AsSt94,AsSt96,RoSt02}. In Fig. 
\\ref{fig:sf6-ampli}(a) we present\nthe stability diagram for hexagons and rolls with respect to amplitude\nperturbations in a layer of SF$_6$ of thickness $d=0.0542$\\,cm,\npressure $P=140\\,$psi, and a range of temperatures that is\nexperimentally accessible. Hexagons are amplitude-stable to the right\nof the solid line and rolls above the dashed line. As in the case of\nCO$_2$ the NB effects increase with increasing mean temperature $T_0$\nand above a certain value of $T_0$ hexagons are linearly\namplitude-stable over the whole range of $\\epsilon$ investigated. Here\nwe focus on relatively strong NB effects. We therefore show in\nFig.\\ref{fig:sf6-ampli}b the stability limits with respect to\nside-band perturbations for a relatively large mean temperature,\n$T_0=80^\\circ C$. As in the case of CO$_2$, the wavenumber range over\nwhich the hexagons are stable is leaning towards smaller wavenumbers.\nOverall, amplitude and side-band stability limits are qualitatively\nsimilar to those of convection in CO$_2$ (cf. Fig.\n\\ref{fig:co2-ampli3d}).\n\n\\begin{center}\n\\begin{figure}\n\\begin{minipage}{0.35\\textwidth}\n\\includegraphics[width=\\textwidth,angle=0]{sf6.eps}\n\\end{minipage} \n\\hspace{0.4cm} \n\\begin{minipage}{0.35\\textwidth}\n\\includegraphics[width=\\textwidth,angle=270]{sf6-to80-side2.ps}\n\\end{minipage}\n\\caption{ \nStability regions of hexagons and rolls in a layer of SF$_6$ \nof thickness $d=0.0542\\,cm$, pressure $P=140$\\,psi, and Prandtl number $Pr=0.8$. \\\\\na) Stability regions with respect to amplitude perturbations. \nContinuous line: stability boundary for hexagons. Dashed line: stability boundary for\nrolls. 
Stability limits obtained for the critical wavenumber $q_c$.\\\\\nb) Stability regions with respect to side-band perturbations for the above\nlayer with a mean temperature $T_0=80\\,^oC$. The corresponding NB coefficients\nare $\\gamma_0^{c}=0.1714$, $\\gamma_1^{c}= -0.2118$, $\\gamma_2^{c}=0.2836$, $\\gamma_3^{c}=0.1905$, and\n$\\gamma_4^{c}=0.0624$ corresponding to $Q=4.2$. The dashed line corresponds to\nthe neutral curve. \n\\LB{fig:sf6-ampli}}\n\\end{figure}\n\\end{center}\n\nFig.\\ref{fig:sf6-nb-ob} shows two snapshots obtained by direct \nnumerical simulations of the Navier-Stokes equations corresponding to\nconvection in SF$_6$ for $T_0=80\\,^oC$ and $\\epsilon=1.4$ in a\nconvective cell of thickness $d=0.0542\\,cm$ and horizontal size\n$L=16\\cdot 2 \\,\\pi\/q_c=32.22$. Periodic boundary conditions are used\nwith $128\\times 128$ Fourier modes and 6 vertical modes. Both states\nare obtained after an integration time of $160\\,t_v$, starting from\nrandom initial conditions. While in\nFig.\\ref{fig:sf6-nb-ob}b all NB effects are retained, in\nFig.\\ref{fig:sf6-nb-ob}a the same values are used for $Pr$ and\n$\\epsilon$, but all NB parameters $\\gamma_i^c$ are set to 0, i.e. the\nsystem is treated as if it were Boussinesq. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth]{sf6-nb-ob-4-sqrt.eps}\n\\caption{ \nDirect numerical simulation of equations (\\ref{e:v}-\\ref{e:bc}) for SF$_6$ in a\ncell of thickness $d=0.0542\\,cm$, pressure $P=140$\\,psi, mean temperature \n$T_0=80^oC$, Prandtl number $Pr=0.8$, and control parameter $\\epsilon=1.4$. \nThe cell has size $L=16\\cdot 2 \\,\\pi\/q_c=32.22$ with periodic boundary conditions. \nStarting from random initial conditions both snapshots are taken at\n$t=160\\,t_v$. Left panels for OB conditions ($\\gamma_i=0$, $i=0..4$), \nright panels for NB conditions appropriate for $T_0=80^oC$ \n(cf. Fig\\ref{fig:sf6-ampli}b). 
Bottom panels give the corresponding contour lines used\nfor the pattern diagnostics.\n\\LB{fig:sf6-nb-ob}}\n\\end{figure}\n\\end{center}\n\nThe snapshots depicted in Fig.\\ref{fig:sf6-nb-ob} show, as expected,\nthat due to the NB effects down-flow convection cells, which are white\nin Fig.\\ref{fig:sf6-nb-ob}, outnumber cells with up-flow (black).\nMoreover, in this regime the NB effects enhance the overall cellular\nrather than roll-like character of SDC. This manifests itself in the\nappearance of numerous small down-flow convection cells (white\n`bubbles') and in the appearance of quite noticeable bulges on the NB\nconvection rolls. To quantify these and other differences we analyse\na long sequence of snapshots with a recently introduced geometric\napproach \\cite{RiMa06}. It is based on the contour lines corresponding\nto the intensity half-way between the minimal and maximal intensity of\nall snapshots in a given run. The contour lines corresponding to the\ntemperature field of snapshots Fig.\\ref{fig:sf6-nb-ob}(a,b) are shown in\nFig.\\ref{fig:sf6-nb-ob}(c,d). In the following we\npresent various statistics of these contour lines.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{NOB-OB-ratio0.8-2.eps} \\\\%{NOB-OB-2.ps}\n\\caption{ \nNumber of black and white closed contours as a function of time\nfor NB and OB conditions for the simulations corresponding to the\nsnapshots shown in \\F{fig:sf6-nb-ob}. \n\\LB{fig:num-bubbles}}\n\\end{figure}\n\\end{center}\n\nThe most striking difference between the OB and the NB case is the\nasymmetry that is induced by the NB effects between `black' and\n`white' components, i.e. between closed contours that enclose up- and\ndown-flow regions, respectively. To quantify this asymmetry we\nmeasure the number of white and black components. 
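In essence, these counts amount to counting the connected components of the super- and sub-level sets of the temperature field at the half-way threshold. A minimal sketch of such a count (a plain flood fill with 4-connectivity on a toy array; illustrative only, not the diagnostic code of \cite{RiMa06}, and without the identification of opposite edges required for periodic domains):

```python
# Count "white" (above-threshold) or "black" (below-threshold) connected
# components of a 2D field via flood fill with 4-connectivity.
def count_components(field, threshold, white=True):
    ny, nx = len(field), len(field[0])
    mask = [[(v > threshold) == white for v in row] for row in field]
    seen = [[False] * nx for _ in range(ny)]
    count = 0
    for i in range(ny):
        for j in range(nx):
            if mask[i][j] and not seen[i][j]:
                count += 1               # a new component starts here
                stack = [(i, j)]
                seen[i][j] = True
                while stack:             # flood-fill its pixels
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < ny and 0 <= xx < nx \
                                and mask[yy][xx] and not seen[yy][xx]:
                            seen[yy][xx] = True
                            stack.append((yy, xx))
    return count

# Toy "pattern": two above-threshold blobs on a connected background.
field = [[0, 0, 0, 0, 0],
         [0, 1, 0, 1, 0],
         [0, 1, 0, 1, 0],
         [0, 0, 0, 0, 0]]
print(count_components(field, 0.5, white=True))   # prints 2
print(count_components(field, 0.5, white=False))  # prints 1
```

For the periodic integration domains used in our simulations the neighbor lookup would additionally wrap around the domain edges.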
These two\ntopological measures correspond, respectively, to the Betti numbers of\norder 1 and 2 of the pattern defined by the white components\n\\cite{GaMi04}. Fig.\\ref{fig:num-bubbles} shows these two quantities as\na function of time in the OB and the NB case. As expected, in the OB\ncase the number of black and white components is essentially the same\nat all times, whereas in the NB case the white components\nsignificantly outnumber the black ones. The ratio of white to black\ncomponents is therefore a sensitive indicator for the significance of\nNB effects. Fig.\\ref{fig:num-bubbles} also illustrates how much the\nnumber of components fluctuates during these runs. Recently, the two\nBetti numbers have also been measured based on patterns obtained in\nexperiments on SDC in convection in CO$_2$. Scanning $\\epsilon$ in\nvery small steps the authors report steps in the Betti numbers\nindicative of transitions between different chaotic, disordered states\n\\cite{KrGaunpub}.\n\nFig.\\ref{fig:num-bubbles} shows that the total number of components\n(closed contours) is considerably larger in the NB case than in the OB\ncase. This is presented in more quantitative detail in\nFig.\\ref{fig:num-bubbles-eps}, which gives the mean value of the total\nnumber of components, i.e. the sum of black and white components, as a\nfunction of $\\epsilon$ for OB as well as NB conditions. In the NB case\nthe total number of components is up to four times larger than in the\nOB case. We attribute this difference to the resonant triad\ninteraction that is made possible by the breaking of the OB symmetry\nand which tends to enhance cellular rather than filamentary roll-like\nstructures.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{meanNP-NPratio0.8-2.eps}\n\\caption{ \nTotal number of closed contours as a function of the control parameter \n$\\epsilon$ for NB (circles) and OB (squares) conditions. 
\n\\LB{fig:num-bubbles-eps}}\n\\end{figure}\n\\end{center}\n \nTo characterize the components better and to distinguish cellular and\nroll-like structures we introduce the `compactness' ${\\mathcal C}$ of\ncomponents \\cite{RiMa06},\n\\begin{eqnarray}\n{\\mathcal C}=4\\pi \\frac{{\\mathcal A}}{{\\mathcal P}^2}. \\LB{e:compact}\n\\end{eqnarray} \nHere ${\\mathcal A}$ is the area inside a closed contour and ${\\mathcal P}$ its\nperimeter. With the normalization used in (\\ref{e:compact}) compact,\ncellular structures are characterized by ${\\mathcal C}\\lesssim 1$,\nwhereas filamentary, roll-like structures have ${\\mathcal C} \\ll 1$.\n\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_compact_SF.eps}\n\\caption{ \nMean number of closed contours per snapshot with a \ngiven compactness ${\\mathcal C}$ for convection in SF$_6$ at\n$\\epsilon=1.4$ under NB (circles) and under OB (squares) conditions \n(cf. \\F{fig:sf6-nb-ob}).\n\\LB{fig:hist-compact}}\n\\end{figure}\n\\end{center}\n\n\\F{fig:hist-compact} shows the mean number of closed contours per\nsnapshot for a given compactness $\\mathcal C$ for the NB and the OB\nsimulation at $\\epsilon=1.4$ over the duration $t=360\\,t_v$. As\nexpected, in the NB case the number of white components is much larger\nthan that of black components, whereas in the OB case both are about\nthe same. The total number of components is noticeably larger in the\nNB case, which also shows an increase in the number of white,\nfilamentary contours with small compactness. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_compact_smoothed_normalized.eps}\n\\caption{ \nDistribution function for the compactness ${\\mathcal C}$ for NB\n(circles) and OB (squares) conditions (cf. 
\\F{fig:sf6-nb-ob}).\n\\LB{fig:hist-compact-normalized}}\n\\end{figure}\n\\end{center}\n\nThe relative distribution among components with different compactness\nis more clearly visible in the {\\it relative} frequency of contours as\na function of the compactness, which is shown in\nFig.\\ref{fig:hist-compact-normalized}. More precisely, its data result\nfrom running averages over 10 adjacent points of the corresponding\ndata shown in Fig.\\ref{fig:hist-compact}, which are then normalized.\nThe normalized data show that the increase in the relative frequency\nof white filamentary components (${\\mathcal C}\\ll 1$) is essentially\nthe same in the NB case and in the OB case. However, only very few\nblack filamentary components arise in the NB case. \n\nA feature of Fig.\\ref{fig:hist-compact-normalized} that is surprising\nat first sight is the essentially equal height of the NB peak and the\nOB peak for components with ${\\mathcal C}\\sim 1$. Visually, the NB\nsnapshot exhibits many more small compact `bubbles' than the OB run.\nAn explanation for this observation can be obtained by correlating the\ncompactness of the closed contours with their length ${\\mathcal P}$.\nThe joint distribution function for these two quantities is shown in\nFig.\\ref{fig:corr-cont-comp} using logarithmic scales for ${\\mathcal\nP}$ and ${\\mathcal C}$. Focussing on the compact objects with\n${\\mathcal C}\\lesssim 1$ one recognizes in the OB case a second peak\nat somewhat larger contourlength. We associate this peak with the\nappearance of target-like structures, i.e. with a second contourline\nencircling a smaller compact, almost circular contourline. In the NB\ncase this second peak is barely visible. Instead, the shoulder of the\nmain peak is extended significantly towards smaller contourlength\n${\\mathcal P}$. 
It signifies the appearance of compact objects that\nare smaller than a typical wavelength, which we associate with the\nsmall `bubbles' that are easily recognized in the snapshot of the NB\ncase Fig. \\ref{fig:sf6-nb-ob}b. Thus, the comparable relative\nfrequency of small components shown in\nFig.\\ref{fig:hist-compact-normalized} has a different origin in the OB\nand the NB case. Whereas in the NB case it is mostly due to small\nbubbles, it seems to originate from target-like structures in the OB\ncase. \n\nIn the logarithmic scaling used in Fig.\\ref{fig:corr-cont-comp} a\nstraight ridge arises in the distribution function for large\ncontourlengths. It is characteristic of filamentary structures with a\ntypical width, which corresponds here to half a wavelength $\\lambda$\nof the convection rolls,\n\\begin{eqnarray}\n{\\mathcal C}\\sim 4\\pi\n\\frac{\\lambda \\, {\\mathcal P}}{4{\\mathcal P}^2} \\propto {\\mathcal\nP}^{-1}.\n\\end{eqnarray}\nIn the OB case one can discern deviations from this scaling. They are\nconfined to larger rather than smaller compactness values for a given\ncontourlength, indicating that the long rolls can be wider but not\nnarrower than a certain thickness. In the NB case these deviations are\nmuch smaller. \n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{corr-cont-comp.MaRi05a.f12a-4.eps}\n\\includegraphics[width=0.48\\textwidth]{corr-cont-comp.MaRi05a.f12b-2.eps}\n\\caption{Distribution function for contour length and \ncompactness for OB (a) and NB (b) conditions at\n$Pr=0.8$ and $\\epsilon=1.4$.\n\\LB{fig:corr-cont-comp}\n} \n\\end{figure}\n\nTo identify spiral components in the pattern directly we also measure\nthe winding number of the components \\cite{RiMa06}. It is defined via\nthe angle $\\theta$ by which the (spiral) arm of a pattern component is\nrotated from its tip to its end at the vertex at which it merges with\nthe rest of the component. 
In cases in which a component has no\nvertices we split it into two arms at the location of minimal\ncurvature \\cite{RiMa06}. The winding number is then defined as\n$|{\\mathcal W}|=\\theta\/2\\pi$. To assess the impact of the NB effects\non the spiral character of the pattern we measure the number of\nspirals in each snapshot and show the resulting histogram over the\nwhole run in Fig.\\ref{fig:num-spiral}. We use three different\nthresholds ${\\mathcal W}_{min}$ for the identification of spirals,\n${\\mathcal W}_{min}= 1$, ${\\mathcal W}_{min}= 1\/2$, and ${\\mathcal\nW}_{min}= 1\/4$. As Fig.\\ref{fig:num-spiral} shows, the number of\nsmall spirals with $|{\\mathcal W}|\\gtrsim 1\/4$ is quite similar in the\nOB and the NB case. However, larger spirals with $|{\\mathcal W}|\\ge\n1\/2$ or even $|{\\mathcal W}|\\ge 1$ are much rarer in the NB case;\nin fact, for the system size $L=16 \\cdot 2\\pi\/q_c=32.22$ that we have\nused in these simulations there was at most one spiral with \n$|{\\mathcal W}| \\ge 1$ at any given time. \n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{histo_num_spiral.OB_NB.1.eps}\n\\caption{Distribution function for the number of spirals in spiral\ndefect chaos under OB and NB conditions for three values of the\nthreshold, ${\\mathcal W}_{min}=1\/4$ (circles), ${\\mathcal W}_{min}=1\/2$\n(squares), and ${\\mathcal W}_{min}=1$ (diamonds). \n\\LB{fig:num-spiral}\n} \n\\end{figure}\n\nThe reduced spiral character of NB spiral defect chaos is also quite\napparent in Fig.\\ref{fig:corr-arc-winding}, which shows the\ncorrelation between the winding number and the arclength of the spiral\narm. More precisely, each dot marks the occurrence of one spiral arm\nin a snapshot. 
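A simple estimate indicates what correlation to expect for an ideal Archimedean spiral (a sketch in our notation, with the arm spacing set by the roll wavelength $\\lambda$): for $r=\\lambda\\,\\theta\/2\\pi$ the arclength of an arm grows as $s\\simeq \\int_0^{\\theta}r\\,d\\theta'=\\lambda\\theta^2\/4\\pi$, so that an arm of arclength $s$ can wind at most\n\\begin{eqnarray}\n|{\\mathcal W}|=\\frac{\\theta}{2\\pi}\\simeq \\sqrt{\\frac{s}{\\pi\\lambda}}.\n\\end{eqnarray}\nArms whose winding number lies well below this square-root bound are therefore only weakly curved. 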
In the OB case one can see quite clearly a maximal\nwinding number for any given arclength, which is consistent with an\nArchimedean shape of the spiral \\cite{RiMa06}\\footnote{Note that\ndetailed analyses of large spirals show deviations from the\nArchimedean shape due to the dislocations that accompany finite\nspirals \\cite{Pl97}.}. In the NB case, however, only components with\nvery small contourlength reach the Archimedean limit and most\ncomponents have winding numbers that are much smaller, i.e. the\ncomponents are quite straight.\n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{corr-arc-winding_SF6-OB-NB.eps}\n\\caption{Correlation between the arclength and the winding number of\nspiral arms under OB (black squares) and NB conditions (red circles).\n\\LB{fig:corr-arc-winding}} \n\\end{figure}\n\n\\section{Hexagons and Spiral Defect Chaos at Very Low Prandtl numbers. H$_2$-Xe\nMixtures \\LB{sec:h2xe}}\n\nAs mentioned above, the restabilization of NB hexagons at larger\nRayleigh number is related to the existence of stable OB hexagons at\nlarge Rayleigh numbers. The wavenumber range over which the OB\nhexagons are stable shrinks with decreasing Prandtl number and for $Pr\n< 1.2$ the OB hexagons are side-band unstable at {\\it all} wavenumbers\n\\cite{ClBu96}. However, as seen in the case of CO$_2$ and SF$_6$, NB\nhexagons can be side-band stable at large $\\epsilon$ even below\n$Pr=1.2$ due to the additional stabilizing effect of the resonant\ntriad interaction. It is therefore of interest to investigate whether\nthe NB effects can be sufficient to stabilize strongly nonlinear\nhexagons even for Prandtl numbers significantly below $Pr=1$. \n\nPrandtl numbers well below $Pr=1$ can be reached by using a mixture of\na heavy and a light gas. An experimentally investigated case is a\nmixture of H$_2$ and Xe \\cite{LiAh97}. With a mole fraction of\n$\\chi=0.4$, one can reach Prandtl numbers as small as $Pr=0.17$. 
The\nLewis number of such a mixture is close to one \\cite{LiAh96,Ah05}. \nTherefore such mixtures are expected to behave essentially like a pure\nfluid with the same Prandtl number \\cite{BoPe00}.\n\nWe investigate the stability of hexagon and roll convection in a\nH$_2$-Xe mixture with mole fraction $\\chi=0.4$ at a pressure of\n$300\\,$psi and a layer thickness of $d=0.1$cm. With respect to amplitude\ninstabilities the stability diagram is very similar to that of\nconvection in CO$_2$ and SF$_6$ with hexagons becoming reentrant at\n$\\epsilon$-values as low as $\\epsilon=0.14$ for $T_0=20\\,^oC$. \n\nFocusing on strong NB effects we perform a detailed stability analysis\nwith respect to sideband perturbations at a mean temperature of\n$T_0=80^oC$ using the same layer thickness of $d=0.1$cm. For these\nlow Prandtl numbers the numerical resolution has to be increased to\nobtain sufficiently well resolved flows. While for the stability\nanalyses of hexagons in CO$_2$ and SF$_6$ it is sufficient to use \n$n_z=6$ and $n_q=3$ in the Galerkin expansion, for the H$_2$-Xe\nmixture with $Pr\\simeq 0.17$ at least $n_q=5$ and $n_z=6$ are\nrequired. \\F{fig:side-h2xe} depicts the resulting stability diagram. It shows\nthat the region of side-band stable hexagons is not contiguous but\nconsists of the usual region immediately above threshold and an\nadditional, disconnected region at larger Rayleigh numbers. We could\nnot follow the stability limits to smaller values of $q$ than shown in\nFig.\\ref{fig:side-h2xe} due to numerical convergence problems.\nPresumably, these arise due to bifurcations involving additional,\nresonant wavevectors \\cite{Mo04a}, somewhat similar to the resonances\nstudied in Taylor vortex flow \\cite{RiPa86,PaRi90}. The fact that the\nregion of stability is disconnected is remarkable since the region of\namplitude-stability (not shown) is actually contiguous. 
This is in\ncontrast to the behavior found in CO$_2$ and SF$_6$ where the two\nside-band stable regions become connected when the bubble-like region\nof amplitude instability disappears (cf.\nFig.\\ref{fig:side-co2},\\ref{fig:side-to27}). The comparison of the\nstability limits with those of CO$_2$ and of SF$_6$ \n(Fig.\\ref{fig:sf6-ampli}b) shows further that the maximal wavenumber\n$q$ at which the hexagons are stable with respect to side-band\nperturbations decreases with decreasing Prandtl number and as a result\nthe over-all stability region shrinks as well. \n\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=270]{h2xe-to80.ps}\n\\caption{ \nStability limits with respect to sideband perturbations in a mixture\nof H$_2$-Xe with molar fraction $\\chi=0.4$, thickness $d=0.1\\,cm$, pressure $P=300$\\,psi, \nmean temperature $T_0=80\\,^oC$, and Prandtl number $Pr=0.17$. The NB coefficients are \n$\\gamma_0=0.5535$, $\\gamma_1=-0.6421$, $\\gamma_2=0.9224$, $\\gamma_3=0.3647$, \n$\\gamma_4=-0.0712$ resulting in $Q=13.8$. Hexagons are stable in the\nregions enclosed by the solid lines and unstable outside. The dashed line represents \nthe neutral curve. The results for H$_2$-Xe are obtained with $n_z=6$ and $n_q=5$.\n\\LB{fig:side-h2xe}}\n\\end{figure}\n\\end{center}\n\n\nThe stability analysis of the H$_2$-Xe mixture also reveals an\noscillatory instability of the hexagon patterns at $\\epsilon \\sim 1$.\nWithin the $\\epsilon$-range investigated, no such oscillatory\ninstability was found at the larger Prandtl numbers relevant for\nCO$_2$ and SF$_6$. Unfortunately, it turns out that before the onset\nof the oscillatory instability the hexagons already become unstable to\na side-band instability at half the hexagon wavelength, which will\nalways preempt the oscillatory instability. For the rolls we also find\nan oscillatory instability. 
It is presumably related to the well-known\noscillatory instability of Boussinesq rolls \\cite{CrWi89}. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.3\\textwidth,angle=0]{h2xe.snap.001401-sqrt.eps}\n\\caption{ \nTypical pattern for convection in H$_2$-Xe at $\\epsilon=0.3$.\n\\LB{fig:pattern-h2xe}. }\n\\end{figure}\n\\end{center}\n\nFor not too small $\\epsilon$ generic initial conditions will not lead\nto hexagonal patterns but rather to spiral defect chaos. A typical\nsnapshot of a pattern in this state is shown in\nFig.\\ref{fig:pattern-h2xe} for $\\epsilon=0.3$. Compared to the\npatterns obtained in NB convection in SF$_6$ (cf.\nFig.\\ref{fig:sf6-nb-ob}b) the patterns in H$_2$-Xe are less cellular\nand do not show a large number of small bubbles. To quantify these and\nother characteristics of the patterns we again apply the geometric\ndiagnostics introduced earlier \\cite{RiMa06}. \n\nIn Fig.\\ref{fig:compact-h2xe} we show the normalized distribution\nfunctions for the compactness of white and black components (cf.\nFig.\\ref{fig:hist-compact-normalized}). Since there are only very few\nblack components their distribution function exhibits large\nstatistical fluctuations. Of particular interest is the distribution\nfunction for the white components. It confirms the visual impression\nthat the number of compact components is significantly reduced\ncompared to the case of SF$_6$; in fact, while in SF$_6$ the maximum\nof the distribution function is close to ${\\mathcal C}=1$, in H$_2$-Xe\nthe absolute maximum is at ${\\mathcal C}\\sim 0.1$, which corresponds\nto filamentary structures.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_compact.h2xe.eps}\n\\caption{ \nDistribution function of the compactness of closed contours \nfor H$_2$-Xe ($Pr=0.17$, $\\epsilon=0.3$)\n\\LB{fig:compact-h2xe}. 
}\n\\end{figure}\n\\end{center}\n\nThe lack of small bubbles is demonstrated in more detail in the joint\ndistribution function for the contourlength and the compactness, which\nis shown in Fig.\\ref{fig:corr-cont-compt-h2xe}. Note that the view is\nrotated compared to Fig.\\ref{fig:corr-cont-comp}. The distribution\nfunction is lacking the broad shoulder seen in NB convection in SF$_6$\n(see Fig.\\ref{fig:corr-cont-comp}b). Instead, the decay of the\ndistribution function towards long filamentary contours is quite\nslow. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{corr-cont-comp.MaRi05a.f18-d.eps}\n\\caption{ \nJoint distribution function for the contour length and \nthe compactness of closed contours for H$_2$-Xe ($Pr=0.17$, $\\epsilon=0.3$).\n\\LB{fig:corr-cont-compt-h2xe}.}\n\\end{figure}\n\\end{center}\n\nTo assess the spiral character of NB spiral defect chaos at these low\nPrandtl numbers we show in Fig.\\ref{fig:histo-winding} the\ndistribution function for the absolute value $|{\\mathcal W}|$ of the\nwinding number for NB convection in H$_2$-Xe (at $\\epsilon=0.3$ and\n$Pr=0.17$) as well as for Boussinesq and non-Boussinesq convection in\nSF$_6$ (at $\\epsilon=1.4$ and $Pr=0.8$). As had been noted\npreviously in the Boussinesq case \\cite{EcHu97,RiMa06} the\ndistribution function is roughly consistent with exponential behavior.\nIn the NB case the exponential decays substantially faster than in the\nBoussinesq case and spirals with winding numbers above $|{\\mathcal\nW}|=1$ are rare. In the Boussinesq case we had found that the decay\nrate depends mostly on the Prandtl number, but only very little on\n$\\epsilon$ \\cite{RiMa06}. Unfortunately, we do not have enough\nnon-Boussinesq data to investigate such trends in the $\\epsilon$- and\n$Pr$-dependence. 
However, it is worth noting that in the two\nnon-Boussinesq cases shown in Fig.\\ref{fig:histo-winding} the decay\nrates are essentially the same despite their substantial difference in\nPrandtl number and both decays are much faster than that in the\nBoussinesq case. Thus, possibly the impact of NB effects dominates\nthe dependence on the Prandtl number.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_winkel.H2Xe_SF6.eps}\n\\caption{ \nDistribution function for the absolute value of the winding number for\nNB convection in H$_2$-Xe in comparison with convection in \nSF$_6$ in the NB and the OB case.\n\\LB{fig:histo-winding}. }\n\\end{figure}\n\\end{center}\n\nIn NB convection in H$_2$-Xe, the distribution function for the number\nof spirals is qualitatively similar to that in SF$_6$, which is shown\nin Fig.\\ref{fig:num-spiral} above. In particular, there are almost no\nspirals with a winding number $|{\\mathcal W}|>1$. Similarly, the\ncorrelation between the arclength and the winding number of spirals\nreveals that in NB convection in H$_2$-Xe there is no significant\ntrend towards Archimedean spirals. \n\n\\section{Conclusion}\n\\LB{sec:conclusions}\n\nIn this paper we have studied non-Boussinesq convection in gases\n(CO$_2$, SF$_6$, H$_2$-Xe) with Prandtl numbers ranging from $Pr\\sim\n1$ down to $Pr=0.17$ in experimentally relevant parameter regimes. We\nhave complemented a Galerkin stability analysis of hexagon patterns\nwith direct numerical simulations of the fluid equations to study\ntransitions between different hexagon patterns and to quantify the\nimpact of non-Boussinesq effects on spiral defect chaos.\n\nWe find that the reentrance of hexagons that we have identified\npreviously in non-Boussinesq convection in water \\cite{MaRi05} also\noccurs at low Prandtl numbers. As was the case at large Prandtl\nnumbers, compressibility is not necessary for reentrance. 
Since, in\naddition, the range of wavenumbers for which the reentrant hexagons\nare stable differs significantly from that of the reentrant hexagons\nobserved in experiments on convection in SF$_6$ near the thermodynamic\ncritical point \\cite{RoSt02}, the mechanisms underlying the two types\nof restabilization of the hexagons are most likely different.\nReflecting the fact that in gases the non-Boussinesq effects increase\nwith increasing temperature, the reentrance is shifted to lower values\nof the Rayleigh number when the mean temperature is increased, opposite to\nthe behavior in water \\cite{MaRi05}. As in convection in water, the\nreentrant hexagons are stable only for wavenumbers below the critical\nwavenumber. This trend becomes more pronounced with decreasing Prandtl\nnumber. In fact, for the gas mixture with $Pr=0.17$ the wavenumber at\nthe stability limit of the hexagons decreases so rapidly with\nincreasing Rayleigh number that the range in Rayleigh number over\nwhich the hexagons are stable becomes quite small. \n\nThe comparison of our stability results with experiments on the\ntransition between hexagons and rolls in CO$_2$ \\cite{BoBr91} shows\nthat this transition is not due to an amplitude or a side-band\ninstability. As a matter of fact, for the parameters of the\nexperimental system the hexagons do not undergo any linear amplitude\ninstability to rolls, contrary to the prediction of the weakly\nnonlinear theory \\cite{BoBr91}. We have performed detailed numerical\nsimulations with various lateral boundary conditions and confirm that\nthe transition is the result of the heterogeneous nucleation of rolls\nat the side walls of the container, which then invade the whole system\nif the Rayleigh number is sufficiently high. 
Our simulations suggest\nthat hexagons could be stabilized well beyond the experimentally\nobserved transition point if the influence of the lateral walls can be\nreduced by applying a spatially patterned forcing that drives hexagons\nat the wall. Such a forcing can be achieved by localized heating\n\\cite{SeSc02} or by geometric patterning of the bottom plate\n\\cite{Bopriv}. Of course, the wavenumber of the forced hexagons would\nhave to be adjusted to lie in the stable range.\n\nWe have also investigated the stability of hexagons in H$_2$-Xe\nmixtures with very small Prandtl number ($Pr=0.17$). There, stable\nreentrant hexagons are also possible, but they are restricted to a small\nrange in wavenumber ($q