diff --git "a/data_all_eng_slimpj/shuffled/split2/finalznmr" "b/data_all_eng_slimpj/shuffled/split2/finalznmr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalznmr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe Tijdeman-Zagier conjecture \\cites{elkies2007abc, waldschmidt2004open, crandall2006prime}, or more formally the generalized Fermat equation \\cite{bennett2015generalized} states that given\n\\begin{equation}\nA^X+B^Y=C^Z \\label{eqn:1}\n\\end{equation}\nwhere $A$, $B$, $C$, $X$, $Y$, $Z$ are positive integers, and exponents $X$, $Y$, $Z\\geq3$, then bases $A$, $B$, and $C$ have a common factor. Some researchers refer to this conjecture as Beal's conjecture \\cite{beal1997generalization}. There are considerable theoretical foundational advances in and around this topic \\cites{nitaj1995conjecture, darmon1995equations, vega2020complexity}. Many exhaustive searches within limited domains have produced indications this conjecture may be correct \\cites{beal1997generalization, norvig2010beal, durango}. More formal attempts to explicitly prove or counter-prove the conjecture abound in the literature but are often unable to entirely generalize due to difficulty in overcoming floating point arithmetic limits, circular logic, unsubstantiated claims, incomplete steps, or reliance on conjectures \\cites{di2013proof, norvig2010beal, dahmen2013perfect, de2016solutions}. Many partial proofs are published in which limited domains, conditional upper bounds of the exponents, specific configuration of bases or exponents, or additional, relaxed, or modified constraints are applied in which the conjecture holds \\cites{beukers1998, beukers2020generalized, siksek2012partial, poonen1998some, merel1997winding, bennett2006equation, anni2016modular, billerey2018some, kraus1998equation, poonen2007twists, siksek2014generalised, miyazaki2015upper}. \n\nSome researchers demonstrate or prove limited coprimality of the exponents \\cite{ghosh2011proof}, properties of perfect powers and relationships to Pillai's conjecture \\cite{waldschmidt2009perfect}, impossibility of solutions for specific bases \\cite{mihailescu2004primary}, influence of the parity of the exponents \\cite{joseph2018another}, characterizations of related Diophantine equations \\cite{nathanson2016diophantine}, relationship between the smallest base and the common factor \\cite{townsend2010search}, and countless other insights. \n\n\nTo formally establish a rigorous and complete proof, we need to consider two complimentary conditions: 1) when gcd$(A,B,C)=1$ there is no integer solution to \\BealsEq, and 2) if there is an integer solution, then gcd$(A,B,C)>1$. The approach we take is linked to the properties of slopes. An integer solution that satisfies the conjecture also marks a point $(A,B,C)$ that subtends a line through the origin on a 3 dimensional Cartesian graph. Being integers, this is a lattice point and thus the line has a rational slope in all 3 planes. Among other properties, it will be shown that if gcd$(A,B,C)=1$ with integer exponents, then one or more of the slopes of the subtended line is irrational and cannot pass through any non-trivial lattice points. 
Conversely it will be shown that if there exists a solution that satisfies the conjecture, the subtended line must pass through a non-trivial lattice point and must have rational slopes.\n\n\\section{Details of the Proof}\n\n\nTo establish the proof requires we first identify, substantiate, and then prove several preliminary properties:\n\\begin{itemize}\n\\item \\textbf{Slopes of the Terms}: determine slopes of lines subtended by the origin\n and the lattice point $(A,B,C)$ that satisfy the terms of the conjecture\n (\\cref{Thm:2.1_Irrational_Slope_No_Lattice} on\n pages \\pageref{Section:Slopes_Start} to \\pageref{Section:Slopes_End}).\n\\item \\textbf{Coprimality of the Bases}: determine implications of 3-way coprimality on\n pairwise coprimality and implications of pairwise coprimality on 3-way\n coprimality\n (\\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime}\n on pages \\pageref{Section:Comprimality_Start} to\n \\pageref{Section:Comprimality_End}).\n\\item \\textbf{Restrictions of the Exponents}: determine limits of the exponents related\n to coprimality of the bases and bounds of the conjecture\n (\\cref{Thm:2.5_X_cannot_be_mult_of_Z} on pages \\pageref{Section:Exponents_Start}\n to \\pageref{Section:Exponents_End}).\n\\item \\textbf{Reparameterization of the Terms}: determine equivalent functional forms\n of the terms and associated properties as related to coprimality of the terms\n (\\cref{Thm:2.6_Initial_Expansion_of_Differences,Thm:2.7_Indeterminate_Limit,Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta,Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime,Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate} on\n pages \\pageref{Section:Reparameterize_Start} to\n \\pageref{Section:Reparameterize_End}).\n\\item \\textbf{Impossibility of the Terms}: determine the relationship between\n term coprimality and slope irrationality, and between\n slope irrationality and solution impossibility\n (\\cref{Thm:2.14_Main_Proof_Coprime_No_Solutions} on\n pages \\pageref{Section:Impossibility_Start} to \\pageref{Section:Impossibility_End}).\n\\item \\textbf{Requirement for Possibility of the Terms}: determine characteristics of\n gcd$(A,B,C)$ required for there to exist a solution given the\n properties of slopes and coprimality\n (\\cref{Thm:2.15_Main_Proof_Solutions_Then_Not_Coprime} on pages\n \\pageref{Section:Possibility_Start} to \\pageref{Section:Possibility_End}).\n\\end{itemize}\n\nBefore articulating each of the underlying formal proofs, we establish two specific definitions to ensure consistency of interpretation.\n\\begin{enumerate}\n\\item \\textbf{Reduced Form}: We define the bases of $A^X$, $B^Y$, and $C^Z$ to be in reduced\n form, meaning that rather than let the bases be perfect powers, we\n define exponents $X$, $Y$, and $Z$ such that the corresponding bases\n are not perfect powers. For example, $8^5$ can be reduced to\n $2^{15}$ and thus the base and exponent would be 2 and 15,\n respectively, not 8 and 5, respectively. Hence throughout this\n document, we assume all bases are reduced accordingly given that the\n reduced and non-reduced forms of bases raised to their corresponding\n exponents are equal.\n\\item $\\bm{C^Z=f(A,B,X,Y)}$: Without loss of generality, when establishing the impossibility\n of integer solutions, unless stated otherwise, we assume to start\n with integer values for $A$, $B$, $X$, and $Y$ and then determine\n the impossibility of integers for $C$ or $Z$. 
Given the commutative\n property of the equation, we hereafter base the determination of\n integrality of $C$ and $Z$ as a function of definite integrality of\n $A$, $B$, $X$, and $Y$, as doing so for any other combination of\n variables is a trivial generalization.\n\\end{enumerate}\n\nThe following hereafter establishes the above objectives.\n\n\n\n\n\n\\Line\n\\subsection{Slopes of the Terms}\n\\label{Section:Slopes_Start}\nAlthough the exponents in \\BealsEq\\, suggest a volumetric relationship between cubes and hypercubes, and given that the exponents cannot all be the same as it would violate Fermat's last theorem \\cites{wiles1995modular, taylor1995ring, shanks2001solved}, the expectation of apparent geometric interpretability is low. However every set of values that satisfy the conjecture correspond to a point on a Cartesian grid and subtend a line segment with the origin, which in turn means properties of the slopes of these line segments are directly related to the conjecture. Properties of these slopes form a crucial basis for the subsequent main proofs.\n\n\n\\begin{figure}\n\\large{$C^Z\\, vs\\, B^Y\\, vs\\, A^X$}\\\\\n\\includegraphics[width=.5\\textwidth]{CZBYAX_Scatter.jpg}\n\\caption{Plot of $(A^X,B^Y,C^Z)$ given positive $A$, $B$, $C$, $X$, $Y$, and $Z$, where \\BealsEq, $A^X+B^Y\\leq10^{28}$, $X\\geq4$, $Y,Z\\geq3$.}\n\\label{Fig:CZBYAZScatter}\n\\end{figure}\n\nIn the Cartesian plot of $A^X\\times B^Y\\times C^Z$, each point $(A^X, B^Y, C^Z)$ corresponds to a specific integer solution that satisfies the conjecture found from exhaustive search within a limited domain. See \\cref{Fig:CZBYAZScatter}. There exists a unique line segment between each point and the origin. The line segment subtends a line segment in each of the three planes, and a set of corresponding angles in those planes made with the axes. See \\cref{Fig:3DScatter,Fig:ScatterPlotWithAngles}.\n\n\\begin{figure}\n\\includegraphics[width=.33\\textwidth]{3D_Scatter_Plot.jpg}\n\\caption{Line segment connecting the origin and point $(A^X, B^Y, C^Z)$ where \\BealsEq\\, satisfying the conjecture from \\cref{Fig:CZBYAZScatter}.}\n\\label{Fig:3DScatter}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=.66\\textwidth]{3D_Scatter_Plot_with_Angles.jpg}\n\\caption{Angles between the axes and the line segment subtended by the origin and point $(A^X, B^Y, C^Z)$ from \\cref{Fig:CZBYAZScatter}.}\n\\label{Fig:ScatterPlotWithAngles}\n\\end{figure}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.1_Irrational_Slope_No_Lattice}\nIf the slope $m$ of line $y=mx$ is irrational, then the line does not go through any non-trivial lattice points.\n\\end{theorem}\n\n\\begin{proof}\nSuppose there exists a line $y=mx$, with irrational slope $m$ that goes through lattice point $(X,Y)$. Then the slope can be calculated from the two known lattice points through which this line traverses, $(0,0)$ and $(X,Y)$. Hence the slope is $\\displaystyle{m=\\frac{Y-0}{X-0}}$. However, $\\displaystyle{m=\\frac{Y}{X}}$ is the ratio of integers, thus contradicting $m$ being irrational, hence a line with an irrational slope passing through the origin cannot pass through a non-trivial lattice point.\n\\end{proof}\n\nSince values that satisfy the conjecture are integer, by definition they correspond to a lattice point and thus the line segment between that lattice point and the origin will have a rational slope per \\cref{Thm:2.1_Irrational_Slope_No_Lattice}. 
We next establish the properties of term coprimality and thereafter the relationship coprimality has on the slope irrationality. Thereafter we establish several other preliminary proofs relating to reparameterizations and non-standard binomial expansions before returning back to the connection between term coprimality, slope irrationality, and the conjecture proof.\n\\label{Section:Slopes_End}\n\n\n\n\n\n\n\\Line\n\\subsection{Coprimality of the Bases}\n\\label{Section:Comprimality_Start}\nAccording to the conjecture, solutions only exist when gcd$(A,B,C)>1$. Hence testing for impossibility when gcd$(A,B,C)=1$ requires we establish the relationship between 3-way coprimality and the more stringent pairwise coprimality.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.2_Coprime}\n\\Conjecture, if gcd$(A,B)=1$, then gcd$(A^X,C^Z )=$ gcd$(B^Y,C^Z )=1$.\\\\\n\\{If $A$ and $B$ are coprime, then $C^Z$ is pairwise coprime to $A^X$ and $B^Y$.\\}\n\\end{theorem}\n\n\\begin{proof}\nSuppose $A$ and $B$ are coprime. Then $A^X$ and $B^Y$ are coprime, and we can define these terms based on their respective prime factors, namely\n\\begin{subequations}\n\\begin{gather}\nA^X = \\PList \\label{eqn:2a} \\\\\nB^Y = \\QList \\label{eqn:2b}\n\\end{gather}\n\\end{subequations}\nwhere $p_i$, $q_j$ are prime, and $p_i\\neq q_j$, for all $i,j$. Based on \\cref{eqn:1}, we can express $C^Z$ as\n\\begin{equation}\nC^Z=\\PList + \\QList \\label{eqn:3}\n\\end{equation}\n\nWe now take any prime factor of $A^X$ or $B^Y$ and divide both sides of \\cref{eqn:3} by that prime factor. Without loss of generalization, suppose we choose $p_i$. Thus\n\\begin{subequations}\n\\begin{align}\n\\frac{C^Z}{p_i} &= \\frac{\\PList + \\QList}{p_i} \\label{eqn:4a} \\\\\n\\frac{C^Z}{p_i} &= \\frac{\\PList}{p_i} + \\frac{\\QList}{p_i} \\label{eqn:4b}\n\\end{align}\n\\end{subequations}\nThe term $\\displaystyle\\frac{\\PList}{p_i}$ is an integer since by definition $p_i |\\PList$. However the term $\\displaystyle\\frac{\\QList}{p_i}$ cannot be simplified since $p_i \\nmid \\QList$ and thus $p_i \\nmid (\\PList+\\QList)$. Hence by extension $p_i \\nmid C^Z$ and $A^X$ must thus be coprime to $C^Z$. By applying the same logic with $q_j$, then $B^Y$ must also be coprime to $C^Z$. Therefore if $A$ and $B$ are coprime, then $C^Z$ must be pairwise coprime to both $A^X$ and $B^Y$.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.3_Coprime}\n\\Conjecture, if gcd$(A,C)=1$ or gcd$(B,C)=1$ then gcd$(A^X,C^Z)=$ gcd$(B^Y,C^Z)=1$.\\\\\n\\{If $A$ or $B$ is coprime to $C$, then $C^Z$ is pairwise coprime to $A^X$ and $B^Y$.\\}\n\\end{theorem}\n\n\\begin{proof}\nWithout loss of generalization, suppose $A$ and $C$ are coprime. Thus $A^X$ and $C^Z$ are coprime. We can define $C^Z$ based on its prime factors, namely\n\\begin{equation}\nC^Z=\\RList \\label{eqn:5}\n\\end{equation}\nwhere $r_k$ are primes. Based on \\cref{eqn:2a,eqn:2a,eqn:5}, we can define $B^Y$ based on the difference between $C^Z$ and $A^X$, namely\n\\begin{equation}\nB^Y=\\RList - \\PList \\label{eqn:6}\n\\end{equation}\nWe now take any prime factor of $C^Z$ and divide both sides of \\cref{eqn:6} by that prime factor. Without loss of generalization, suppose we choose $r_k$. 
Thus\n\\begin{subequations}\n\\begin{align}\n\\frac{B^Y}{r_k} &= \\frac{\\RList - \\PList}{r_k} \\label{eqn:7a} \\\\\n\\frac{B^Y}{r_k} &= \\frac{\\RList}{r_k} - \\frac{\\PList}{r_k} \\label{eqn:7b}\n\\end{align}\n\\end{subequations}\nThe term $\\displaystyle\\frac{\\RList}{r_k}$ is an integer since by definition $r_k |\\RList$. However the term $\\displaystyle\\frac{\\PList}{r_k}$ cannot be simplified since $r_k \\nmid \\PList$ and thus $r_k \\nmid (\\RList - \\PList)$. Hence by extension $r_k \\nmid B^Y$ and $C^Z$ must thus be coprime to $B^Y$. By applying the same logic with $p_i$, then $C^Z$ must also be coprime to $A^X$. Therefore if either $A$ or $B$ is coprime to $C$, then $C^Z$ must be pairwise coprime to both $A^X$ and $B^Y$.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.4_Coprime}\n\\Conjecture, if gcd$(A,B,C) = 1$ then gcd $(A^X,B^Y)=$ gcd $(A^X,C^Z)=$ gcd $(B^Y,C^Z)=1$\\\\\n\\{If $A$, $B$ and $C$ are 3-way coprime, then they are all pairwise coprime.\\}\n\\end{theorem}\n\n\\begin{proof} We consider two scenarios when gcd$(A,B,C)=1$, namely: gcd$(A,B)>1$ and gcd$(A,C)>1$ (the later of which generalizes to gcd$(B,C)>1$).\n\n\\bigskip\n\n\\textbf{Scenario 1 of 2:} Suppose gcd$(A,B,C)=1$ while gcd$(A,B)>1$. Therefore $A$ and $B$ have a common factor. Thus we can express $A^X$ and $B^Y$ relative to their common factor, namely\n\\begin{subequations}\n\\begin{gather}\nA^X = k\\cdot\\PList \\label{eqn:8a} \\\\\nB^Y = k\\cdot\\QList \\label{eqn:8b}\n\\end{gather}\n\\end{subequations}\nwhere integer $k$ is the common factor, and $p_i$, $q_j$ are prime, and $p_i\\neq q_j$, for all $i,j$. Based on \\cref{eqn:1}, we can express $C^Z$ as\n\\begin{subequations}\n\\begin{gather}\nC^Z=k\\cdot\\PList + k\\cdot\\QList \\label{eqn:9a} \\\\\nC^Z=k(\\PList + \\QList) \\label{eqn:9b}\n\\end{gather}\n\\end{subequations}\nPer \\cref{eqn:9b}, $k$ is a factor of $C^Z$, just as it is a factor of $A^X$ and $B^Y$, thus gcd$(A,B,C)\\neq1$, hence a contradiction. Thus when gcd$(A,B,C)=1$ we know gcd$(A,B)\\ngtr1$ and thus $k$ must be 1.\n\n\\bigskip\n\n\\textbf{Scenario 2 of 2:} Suppose gcd$(A,B,C)=1$ while gcd$(A,C)>1$. Therefore $A$ and $C$ have a common factor. Thus we can express $A^X$ and $C^Z$ relative to their common factor, namely\n\\begin{subequations}\n\\begin{gather}\nA^X = k\\cdot\\PList \\label{eqn:10a} \\\\\nC^Z = k\\cdot\\RList \\label{eqn:10b}\n\\end{gather}\n\\end{subequations}\nwhere integer $k$ is the common factor, and $p_i$, $r_k$ are prime, and $p_i\\neq r_k$, for all $i,k$. Based on \\cref{eqn:1}, we can express $B^Y$ as\n\\begin{subequations}\n\\begin{gather}\nB^Y=k\\cdot\\RList - k\\cdot\\PList \\label{eqn:11a} \\\\\nB^Y=k(\\RList - \\PList) \\label{eqn:11b}\n\\end{gather}\n\\end{subequations}\nPer \\cref{eqn:11b}, $k$ is a factor of $B^Y$, just as it is a factor of $A^X$ and $C^Z$, thus gcd$(A,B,C)\\neq1$, hence a contradiction. Thus when gcd$(A,B,C)=1$ we know gcd$(A,C)\\ngtr1$ and thus $k$ must be 1. By extension and generalization, when gcd$(A,B,C)=1$ we know gcd$(B,C)\\ngtr1$.\n\\end{proof}\n\nBased on \\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime} if any pair of terms $A$, $B$, and $C$ have no common factor, then all pairs of terms are coprime. Hence either all three terms share a common factor or they are all pairwise coprime. 
We thus formally conclude that if gcd$(A, B, C)=1$, then gcd$(A^X,B^Y)=$ gcd$(A^X,C^Z)=$ gcd$(B^Y,C^Z)=1$, and if gcd$(A,B)=1$ or gcd$(A,C)=1$ or gcd$(B,C)=1$, then gcd$(A,B,C)=1$.\n\\label{Section:Comprimality_End}\n\n\n\\bigskip\n\n\n\\Line\n\\subsection{Restrictions of the Exponents}\n\\label{Section:Exponents_Start}\nTrivial restrictions of the exponents are defined by the conjecture, namely integer and greater than 2. However, other restrictions apply such as, per Fermat's last theorem, the exponents cannot be equal while greater than 2. More subtle restrictions also apply which will be required for the main proofs.\n\n\\begin{theorem}\n\\label{Thm:2.5_X_cannot_be_mult_of_Z}\n\\Conjecture, exponents $X$ and $Y$ cannot simultaneously be integer multiples of exponent $Z$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $X$ and $Y$ are simultaneously each integer multiples of $Z$. Thus $X=jZ$ and $Y=kZ$ for positive integers $j$ and $k$. Therefore we can restate \\cref{eqn:1} as\n\\begin{equation}\n(A^j)^Z + (B^k)^Z= C^Z \\label{eqn:12}\n\\end{equation}\nPer \\cref{eqn:12}, we have 3 terms which are each raised to exponent $Z\\ge3$. According to Fermat's last theorem \\cites{wiles1995modular, taylor1995ring, shanks2001solved}, no integer solution exists when the terms share a common exponent greater than 2. Therefore $X$ and $Y$ cannot simultaneously be integer multiples of $Z$.\n\\end{proof}\n\n\nA trivially equivalent variation of \\cref{Thm:2.5_X_cannot_be_mult_of_Z} is that $Z$ cannot simultaneously be a unit fraction of $X$ and $Y$. Given \\cref{Thm:2.5_X_cannot_be_mult_of_Z}, there are only two possibilities:\n\\begin{enumerate}\n\\item Neither $X$ or $Y$ are multiples of $Z$.\n\\item Only one of either $X$ or $Y$ is a multiple of $Z$.\n\\end{enumerate}\nAs such, given that at least one of the two exponents $X$ and $Y$ cannot be a multiple of $Z$, of the terms $A^X$ and $B^Y$, we therefore can arbitrarily choose $A^X$ to be the term whose exponent is not an integer multiple of exponent $Z$. Hence the following definition is used hereafter:\n\n\n\\begin{definition}\n\\label{Dfn:2.1_X_cannot_be_mult_of_Z}\n$X$ is not an integer multiple of $Z$.\n\\end{definition}\n\nSince per \\cref{Thm:2.5_X_cannot_be_mult_of_Z} at most only one of $X$ or $Y$ can be a multiple of $Z$ and given one can arbitrarily swap $A^X$ and $B^Y$, the arbitrary fixing hereafter of $A^X$ to be the term for which its exponent is not a multiple of $Z$ does not interfere with any of the characteristics or implications of the solution. Hence we hereafter define $A^X$ and $B^Y$ such that \\cref{Dfn:2.1_X_cannot_be_mult_of_Z} is maintained.\n\n\n\\label{Section:Exponents_End}\n\n\\bigskip\n\n\n\n\\Line\n\\subsection{Reparameterization of the Terms}\n\\label{Section:Reparameterize_Start}\nIn exploring ways to leverage the binomial expansion and other equivalences, some researchers \\cites{beauchamp2018,beauchamp2019,edwards2005platonic} explored reparameterizing one or more of the terms of \\BealsEq\\, so as to compare different sets of expansions. We broaden this idea to establish various irrationality conditions as related to coprimality of the terms, establish properties of the non-unique characteristics of key terms in the expansions, and showcase an exhaustive view to be leveraged when validating the conjecture.\n\nThe binomial expansion applied to the difference of perfect powers with different exponents is critical to mathematical research in general and to several proofs specifically later in this document. 
One feature of the binomial expansion in our application is the circumstance under which the upper limit of the sum is indeterminate \\cites{beauchamp2018,beauchamp2019} to be introduced in the following two theorems.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.6_Initial_Expansion_of_Differences}\nIf $p,q\\neq 0$ and $v,w$ are real, then $\\displaystyle{p^v-q^w=(p+q)(p^{v-1}-q^{w-1}) - pq(p^{v-2} - q^{w-2})}$.\\\\\n\\{Expanding the difference of two powers\\}\n\\end{theorem}\n\n\\begin{proof}\nGiven non-zero $p$ and $q$, and real $v$ and $w$, suppose we can expand the difference $p^v-q^w$ as\n\\begin{equation}\np^v-q^w=(p+q)(p^{v-1}-q^{w-1}) - pq(p^{v-2} - q^{w-2}) \\label{eqn:13}\n\\end{equation}\nDistributing $(p+q)$ on the right side of \\cref{eqn:13} into $(p^{v-1}-q^{w-1})$ gives us $\\left[p^v-pq^{w-1} + p^{v-1}q-q^w\\right]$ and distributing $-pq$ into $(p^{v-2} - q^{w-2})$ gives us $\\left[-p^{v-1}q+pq^{w-1}\\right]$. Thus simplifying \\cref{eqn:13} gives us\n\\begin{subequations}\n\\begin{align}\np^v-q^w &= \\left[p^v-pq^{w-1} + p^{v-1}q-q^w\\right] +\\left[-p^{v-1}q+pq^{w-1}\\right] \\label{eqn:14a} \\\\\np^v-q^w &= p^v+\\left[pq^{w-1}-pq^{w-1}\\right] + \\left[p^{v-1}q -p^{v-1}q\\right] -q^w\\label{eqn:14b} \\\\\np^v-q^w &= p^v -q^w\\label{eqn:14c}\n\\end{align}\n\\end{subequations}\n\\end{proof}\nThus the difference of powers can indeed be expanded per the above functional form accordingly. We also observe \\cref{eqn:13} can be expressed in more compact notation, namely\n\\begin{equation}\np^v-q^w=\\sum \\limits_{i=0}^1 (p+q)^{1-i}(-pq)^i(p^{v-1-i}-q^{w-1-i}) \\label{eqn:15}\n\\end{equation}\nWe further observe in \\cref{eqn:13} of \\cref{Thm:2.6_Initial_Expansion_of_Differences} that this expansion of the difference of two powers yields two other terms which are themselves differences of powers, namely $(p^{v-1}-q^{w-1})$ and $(p^{v-2} - q^{w-2})$. Each of these differences could likewise be expanded with the same functional form of \\cref{Thm:2.6_Initial_Expansion_of_Differences}. Recursively expanding the resulting terms of differences of powers leads to a more general form of \\cref{eqn:15}.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.7_Indeterminate_Limit}\nIf $p,q\\neq 0$ and integer $n\\geq0$, then $\\displaystyle{p^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})}$.\\\\\n\\{General form of the expansion of the difference of two powers\\}\n\\end{theorem}\n\n\\begin{proof}\nSuppose $p,q\\neq 0$ and integer $n\\geq0$, and suppose\n\\begin{equation}\np^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})\n\\label{eqn:16}\n\\end{equation}\nConsider $n=0$. The right side of \\cref{eqn:16} reduces to $p^v-q^w$, thus \\cref{eqn:16} holds when $n=0$. Consider $n=1$. The right side of \\cref{eqn:16} becomes\n\\begin{subequations}\n\\begin{align}\n\\begin{split}\np^v-q^w &=\\binom{1}{0} (p+q)^{1-0}(-pq)^0(p^{v-1-0}-q^{w-1-0})\n +\\binom{1}{1} (p+q)^{1-1}(-pq)^1(p^{v-1-1}-q^{w-1-1})\n \\label{eqn:17a}\n\\end{split}\\\\\n\\begin{split}\np^v-q^w &= \\biggl[ (p+q)(p^{v-1}-q^{w-1}) \\biggr] + \\biggl[(-pq)(p^{v-2}-q^{w-2})\\biggr] \\\\\n &= p^v-q^w\n \\label{eqn:17b}\n\\end{split}\n\\end{align}\n\\end{subequations}\nThe right side of \\cref{eqn:17b} also reduces to $p^v-q^w$. 
Hence \\cref{eqn:16} holds for $n=0$ and $n=1$.\n\n\nIn generalizing, enumerating the terms of \\cref{eqn:16} gives us\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^n (p^{v-n} -q^{w-n }) \\\\\n&+\\binom{n}{1} (p+q)^{n-1}(-pq) (p^{v-n-1} -q^{w-n-1}) \\\\\n&+\\binom{n}{2} (p+q)^{n-2}(-pq)^2(p^{v-n-2} -q^{w-n-2}) \\\\\n&+ \\ \\cdots \\\\\n&+\\binom{n}{n-2}(p+q)^2(-pq)^{n-2}(p^{v-2n+2}-q^{w-2n+2}) \\\\\n&+\\binom{n}{n-1}(p+q) (-pq)^{n-1}(p^{v-2n+1}-q^{w-2n+1}) \\\\\n&+\\binom{n}{n} (-pq)^n (p^{v-2n} - q^{w-2n}) \\\\\n\\end{split}\n\\label{eqn:18}\n\\end{equation}\nExpanding each of the $n+1$ differences of powers $(p^{v-n-i}-q^{w-n-i})$ of \\cref{eqn:18} per \\cref{Thm:2.6_Initial_Expansion_of_Differences} gives us\n\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^n \\left[(p+q)(p^{v-n-1}-q^{w-n-1}) - pq(p^{v-n-2} - q^{w-n-2}) \\right] \\\\\n&+\\binom{n}{1} (p+q)^{n-1}(-pq) [(p+q)(p^{v-n-2}-q^{w-n-2}) - pq(p^{v-n-3} - q^{w-n-3}) ] \\\\\n&+\\binom{n}{2} (p+q)^{n-2}(-pq)^2[(p+q)(p^{v-n-3}-q^{w-n-3}) - pq(p^{v-n-4} - q^{w-n-4}) ] \\\\\n&+ \\ \\cdots \\\\\n&+\\binom{n}{n-2}(p+q)^2(-pq)^{n-2}[(p+q)(p^{v-2n+1}-q^{w-2n+1}) - pq(p^{v-2n} - q^{w-2n}) ] \\\\\n&+\\binom{n}{n-1}(p+q) (-pq)^{n-1}[(p+q)(p^{v-2n}-q^{w-2n}) - pq(p^{v-2n-1} - q^{w-2n-1}) ] \\\\\n&+\\binom{n}{n} (-pq)^n [(p+q)(p^{v-2n-1}-q^{w-2n-1}) - pq(p^{v-2n-2} - q^{w-2n-2}) ] \\\\\n\\end{split}\n\\label{eqn:19}\n\\end{equation}\nDistributing each of the $\\displaystyle{\\binom{n}{i}(p+q)^{n-i}}(-pq)^i$ terms of \\cref{eqn:19} into the corresponding bracketed terms then gives us\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^{n+1} (p^{v-n-1}-q^{w-n-1}) + \\binom{n}{0}(p+q)^n(-pq)(p^{v-n-2} - q^{w-n-2}) \\\\\n&+\\binom{n}{1} (p+q)^n(-pq)(p^{v-n-2}-q^{w-n-2}) + \\binom{n}{1}(p+q)^{n-1}(-pq)^2(p^{v-n-3} - q^{w-n-3}) \\\\\n&+\\binom{n}{2} (p+q)^{n-1}(-pq)^2(p^{v-n-3}-q^{w-n-3}) + \\binom{n}{2}(p+q)^{n-2} (-pq)^3(p^{v-n-4} - q^{w-n-4}) \\\\\n&+ \\ \\cdots \\\\\n&+\\binom{n}{\\!n-2\\!}(p+q)^3(-pq)^{n-2}(p^{v-2n+1}\\!-\\!q^{w-2n+1}) + \\binom{n}{\\!n-2\\!}(p+q)^2 (-pq)^{n-1}(p^{v-2n} \\!-\\! q^{w-2n}) \\\\\n&+\\binom{n}{\\!n-1\\!}(p+q)^2(-pq)^{n-1}(p^{v-2n}\\!-\\!q^{w-2n}) + \\binom{n}{\\!n-1\\!}(p+q) (-pq)^n(p^{v-2n-1} \\!-\\! q^{w-2n-1}) \\\\\n&+\\binom{n}{n} (p+q) (-pq)^n (p^{v-2n-1}-q^{w-2n-1}) + \\binom{n}{n} (-pq)^{n+1}(p^{v-2n-2} - q^{w-2n-2}) \\\\\n\\end{split}\n\\label{eqn:20}\n\\end{equation}\nwhich can be simplified to\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^{n+1} (p^{v-n-1}-q^{w-n-1}) \\\\\n&+\\left[\\binom{n}{1}+\\binom{n}{0}\\right](p+q)^n(-pq)(p^{v-n-2}-q^{w-n-2}) \\\\\n&+\\left[\\binom{n}{2}+\\binom{n}{1}\\right](p+q)^{n-1}(-pq)^2(p^{v-n-3}-q^{w-n-3})\\\\\n&+\\left[\\binom{n}{3}+\\binom{n}{2}\\right](p+q)^{n-2}(-pq)^3(p^{v-n-4}-q^{w-n-4})\\\\\n&+ \\ \\cdots \\\\\n&+\\left[\\binom{n}{n-2}+\\binom{n}{n-3}\\right](p+q)^3(-pq)^{n-2}(p^{v-2n+1}-q^{w-2n+1}) \\\\\n&+\\left[\\binom{n}{n-1}+\\binom{n}{n-2}\\right](p+q)^2(-pq)^{n-1}(p^{v-2n}-q^{w-2n}) \\\\\n&+\\left[\\binom{n}{n}+\\binom{n}{n-1}\\right] (p+q)(-pq)^n (p^{v-2n-1}-q^{w-2n-1})\\\\\n&+ \\binom{n}{n} (-pq){n+1}(p^{v-2n-2} - q^{w-2n-2}) \\\\\n\\end{split}\n\\label{eqn:21}\n\\end{equation}\nPascal's identity states that $\\displaystyle{\\binom{m+1}{k} = \\binom{m}{k}+\\binom{m}{k-1}}$ for integer $k\\geq1$ and integer $m\\geq0$ \\cite{macmillan2011proofs,fjelstad1991extending}. 
Given this identity, each sum of the pairs of binomial coefficients in the brackets of \\cref{eqn:21} simplifies to:\n\\begin{subequations}\n\\begin{gather}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^{n+1} (p^{v-n-1} -q^{w-n-1}) \\\\\n&+\\binom{n+1}{1} (p+q)^n (-pq) (p^{v-n-2} -q^{w-n-2}) \\\\\n&+\\binom{n+1}{2} (p+q)^{n-1}(-pq)^2(p^{v-n-3} -q^{w-n-3}) \\\\\n&+\\binom{n+1}{3} (p+q)^{n-2}(-pq)^3(p^{v-n-4} -q^{w-n-4}) \\\\\n&+ \\ \\cdots \\\\\n&+\\binom{n+1}{n-2} (p+q)^3(-pq)^{n-2}(p^{v-2n+1}-q^{w-2n+1}) \\\\\n&+\\binom{n+1}{n-1} (p+q)^2(-pq)^{n-1}(p^{v-2n} -q^{w-2n}) \\\\\n&+\\binom{n+1}{n} (p+q) (-pq)^n (p^{v-2n-1}-q^{w-2n-1}) \\\\\n&+ \\binom{n}{n} (-pq)^{n+1}(p^{v-2n-2}-q^{w-2n-2}) \\\\\n\\end{split} \\label{eqn:22a} \\\\\np^v-q^w =\\sum \\limits_{i=0}^{n+1} \\binom{n+1}{i} (p+q)^{n-i+1}(-pq)^i(p^{v-n-i-1}-q^{w-n-i-1})\n\\label{eqn:22b}\n\\end{gather}\n\\end{subequations}\nThe right side of \\cref{eqn:16} and \\cref{eqn:22b} both equal $\\displaystyle{p^v-q^w}$, thus the right side of these two equations are equal. Hence\n\\begin{equation}\n\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i}) = \\sum \\limits_{i=0}^{n+1} \\binom{n+1}{i} (p+q)^{n-i+1}(-pq)^i(p^{v-n-i-1}-q^{w-n-i-1})\n\\label{eqn:23}\n\\end{equation}\nTherefore by induction, since $\\displaystyle{p^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})}$ for $n=0$ and $n=1$ and per \\cref{eqn:22b,eqn:23} this relation holds when $n$ is replaced with $n+1$. Hence this relation holds for all integers $n\\geq0$.\n\\end{proof}\n\nWe observe an important property, per \\cref{eqn:16,eqn:22b,eqn:23}, that $n$ is indeterminate since $\\displaystyle{p^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})}$ holds for every non-negative integer value of $n$. Hence any non-negative integer value of $n$ can be selected and the resulting expansion still applies, leading to different expansions that sum to identical outcomes.\n\n\n\\bigskip\n\n\nOther preliminary properties required for the proof of the conjecture include the fact that each of the perfect powers $A^X$, $B^Y$ and $C^Z$ can be expressed as a linear combination of an additive and multiplicative form of the bases of the other two terms. This property will reveal a variety of equivalences across various domains. We need to first establish a few basic principles.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.8_Functional_Form}\n\\Conjecture, if there exists an integer solution, then there exists non-zero positive rational $\\alpha$ and $\\beta$ such that $A^X=[(C+B)\\alpha-CB\\beta]^X$.\n\\end{theorem}\n\n\\begin{proof}\nGiven \\cref{eqn:1}, by definition $A^X=C^Z-B^Y$. If there exist integer solutions that satisfy the conjecture then $A$ and $\\sqrt[X]{C^Z-B^Y}$ must be integers. Suppose there exists non-zero rational $\\alpha$ and $\\beta$ such that $A=(C+B)\\alpha-CB\\beta$, then these two expressions for $A$ are identical, namely\n\\begin{equation}\n\\sqrt[X]{C^Z-B^Y} = (C+B)\\alpha-CB\\beta \\label{eqn:24}\n\\end{equation}\nSolving for $\\alpha$ when given any rational $\\beta>0$ gives us $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$, which is positive since the numerator and denominator are each positive. 
Further, if $\\beta$ is rational, then $\\alpha$ must also be rational since $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$, $C$, and $B$ are each integer.\n\nSolving instead for $\\beta$ when given any arbitrary sufficiently large positive rational $\\alpha$ gives us $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$, which is positive given the numerator and denominator are both negative integers. Further, if $\\alpha$ is rational, then $\\beta$ must also be rational since $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$, $C$, and $B$ are each integer.\n\nHence there exist non-zero positive rational $\\alpha$ and $\\beta$ such that $A^X=C^Z-B^Y$ and $A^X=[(C+B)\\alpha-CB\\beta]^X$, when the terms of the conjecture are satisfied.\n\\end{proof}\n\n\n\\bigskip\n\n\nWithout loss of generalization, it can be trivially shown that \\cref{Thm:2.8_Functional_Form} establishes an alternate functional form in which $A^X=[(C+B)\\alpha-CB\\beta]^X$ also applies to $B^Y$ wherein there exists other non-zero positive rational $\\alpha$ and $\\beta$ such that $B^Y=[(C+A)\\alpha-CA\\beta]^Y$.\n\nSuppose we arbitrarily let $\\displaystyle{\\alpha=\\sqrt[X]{|C^{Z-X}-B^{Y-X}|}}$ then per \\cref{Thm:2.8_Functional_Form}, we can solve for $\\beta$ from $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ which gives us $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$. Likewise if we arbitrarily let $\\displaystyle{\\beta=\\sqrt[X]{|C^{Z-2X}-B^{Y-2X}|}}$ then per \\cref{Thm:2.8_Functional_Form}, we can solve for $\\alpha$ from $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ which gives us $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$. In either case, this yields set $\\{\\alpha,\\beta\\}$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ which shows these definitions of $\\alpha$ and $\\beta$ maintain $A^X$ as a perfect power of an integer based on $B$ and $C$, namely $A^X=[(C+B)\\alpha-CB\\beta]^X$ while satisfying $A^X=C^Z-B^Y$. Further, based on the indeterminacy of the upper bound in the binomial expansion of the difference of two perfect powers from \\cref{Thm:2.7_Indeterminate_Limit}, we can also find values of $\\alpha$ and $\\beta$ that are explicitly functions of $C$ and $B$.\n\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.9_Real_Alpha_Beta}\n\\Conjecture, values $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$, $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$, and real $M$ satisfy $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$.\n\\end{theorem}\n\n\\begin{proof} Given \\cref{eqn:1}, we know that integer $A^X=C^Z-B^Y$ can be expanded with the general binomial expansion from \\cref{Thm:2.7_Indeterminate_Limit} for the difference of perfect powers, namely\n\\begin{equation}\nA^X=C^Z-B^Y=\\sum \\limits_{i=0}^n \\binom{n}{1} (C+B)^{n-i}(-CB)^i(C^{Z-n-i}-B^{Y-n-i})\n\\label{eqn:25}\n\\end{equation}\nPer \\cref{Thm:2.7_Indeterminate_Limit}, since upper limit $n$ in \\cref{eqn:25} is indeterminate, we can replace $n$ with any value such as $X$ while entirely preserving complete integrity of the terms, hence\n\\begin{equation}\nA^X=C^Z-B^Y=\\underbrace{\\sum \\limits_{i=0}^X \\binom{X}{i} (C+B)^{X-i}(-CB)^i}_{Common\\,to\\,Equation\\,(\\ref{eqn:27b})\\,below}(C^{Z-X-i}-B^{Y-X-i})\n\\label{eqn:26}\n\\end{equation}\nFurthermore, from \\cref{Thm:2.8_Functional_Form} we know that $A=(C+B)\\alpha-CB\\beta$ for non-zero real $\\alpha$ and $\\beta$. 
Raising $A=(C+B)\\alpha-CB\\beta$ to $X$ and then expanding gives us\n\\begin{subequations}\n\\begin{align}\nA^X=\\bigl((C+B)\\alpha-CB\\beta\\bigr)^X&=\\sum \\limits_{i=0}^X \\binom{X}{i}\n \\bigl((C+B)\\alpha\\bigr)^{X-i}\n (-CB\\beta)^i\n\\label{eqn:27a} \\\\\nA^X=\\bigl((C+B)\\alpha-CB\\beta\\bigr)^X&=\\underbrace{ \\sum \\limits_{i=0}^X \\binom{X}{i}\n (C+B)^{X-i}(-CB)^i}_{Common\\,to\\,Equation\\,(\\ref{eqn:26})\\,above}\n \\alpha^{X-i}\\beta^i\n\\label{eqn:27b}\n\\end{align}\n\\end{subequations}\n\\cref{eqn:26,eqn:27b} have an identical number of terms, share the identical binomial coefficient, and share $(C+B)^{X-i}(-CB)^i$ for each value of $i$. See the expansion in \\cref{tab:TableSideBySideExpansion}.\n\n\\begin{table}[h!]\n\\begin{center}\n\\caption{Term-by-term comparison of the binomial expansions of \\cref{eqn:26,eqn:27b}.}\n\\label{tab:TableSideBySideExpansion}\n\\begin{tabular}{c|c c c}\n\\textbf{Term} & \\textbf{Terms Common to} & \\multicolumn{2}{c}{\\textbf{\\quad Terms Unique to Equations}} \\\\\n\\textbf{\\bm{$i$}} & \\textbf{\\cref{eqn:26,eqn:27b}} & \\textbf{(\\ref{eqn:26})} & \\textbf{(\\ref{eqn:27b})} \\\\\n\\hline\n0 & $\\displaystyle{\\binom{X}{0}(C+B)^X}$ & $C^{Z-X}-B^{Y-X}$ & $\\alpha^{X}$\\\\\n1 & $\\displaystyle{\\binom{X}{1}(C+B)^{X-1}(-CB)}$ & $C^{Z-X-1}-B^{Y-X-1}$ & $\\alpha^{X-1}\\beta^1$\\\\\n2 & $\\displaystyle{\\binom{X}{2}(C+B)^{X-2}(-CB)^2}$ & $C^{Z-X-2}-B^{Y-X-2}$ & $\\alpha^{X-2}\\beta^2$\\\\\n\\vdots & \\vdots & \\vdots & \\vdots\\\\\n$X-1$ & $\\displaystyle{\\binom{X}{X-1}(C+B)(-CB)^{X-1}}$ & $C^{Z-2X+1}-B^{Y-2X+1}$ & $\\alpha^1\\beta^{X-1}$\\\\\n$X$ & $\\displaystyle{\\binom{X}{X}(-CB)^X}$ & $C^{Z-2X}-B^{Y-2X}$ & $\\beta^X$\\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nPer \\cref{eqn:26,eqn:27b,Thm:2.8_Functional_Form}, $A^X$ equals both $C^Z-B^Y$ and $[(C+B)\\alpha-CB\\beta]^X$, thus the finite sums of \\cref{eqn:26,eqn:27b} are equal. Each of the terms in the corresponding expansions have common components and unique components. Thus there exists a term-wise map between the unique components of the two sets of expansions.\n\nWhen $i=0$, we observe in \\cref{tab:TableSideBySideExpansion} that $\\alpha^{X}$ maps to $C^{Z-X}-B^{Y-X}$. Hence if $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ or more generically $\\displaystyle{\\alpha=\\sqrt[X]{|C^{Z-X}-B^{Y-X}|}}$, then per \\cref{Thm:2.8_Functional_Form}, the corresponding $\\beta$ is $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$.\n\nWhen $i=X$, we observe in \\cref{tab:TableSideBySideExpansion} that $\\beta^X$ maps to $C^{Z-2X}-B^{Y-2X}$. Hence if $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ or more generically $\\displaystyle{\\beta=\\sqrt[X]{|C^{Z-2X}-B^{Y-2X}|}}$, then per \\cref{Thm:2.8_Functional_Form}, the corresponding $\\alpha$ is $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$.\n\nUsing the map between $\\alpha^{X}$ to $C^{Z-X}-B^{Y-X}$ based on $i=0$ and the map between $\\beta^X$ to $C^{Z-2X}-B^{Y-2X}$ based on $i=X$, we can align to the other terms in the expansion of \\cref{eqn:26,eqn:27b}. When $i=1$, we have the terms $C^{Z-X-1}-B^{Y-X-1}$ and $\\alpha^{X-1}\\beta$ (see \\cref{tab:TableSideBySideExpansion}). 
Substituting $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ we have\n\\begin{subequations}\n\\begin{gather}\nC^{Z-X-1}-B^{Y-X-1} = \\alpha^{X-1}\\beta \\label{eqn:28a} \\\\\nC^{Z-X-1}-B^{Y-X-1} = (C^{Z-X}-B^{Y-X})^\\frac{X-1}{X}(C^{Z-2X}-B^{Y-2X})^{\\frac{1}{X}} \\label{eqn:28b}\n\\end{gather}\n\\end{subequations}\n\\cref{eqn:28b} holds in only trivial conditions. Hence $\\alpha$ and $\\beta$ cannot simultaneously equal $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$. As such, there are only three possibilities regarding $\\alpha$ and $\\beta$ that ensure $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$:\n\\begin{enumerate}\n\\item $\\alpha$ is arbitrarily defined and thus $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$.\n\\item $\\beta$ is arbitrarily defined and thus $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$.\n\\item $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ where one or both are scaled.\n\\end{enumerate}\nThe first two cases, given $\\alpha$ or $\\beta$ and the other derived therefrom per \\cref{Thm:2.8_Functional_Form}, will satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$. In the third case, since $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ do not simultaneously satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$, then scaling $\\alpha$ and $\\beta$ by $M$ such that $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$ will ensure equality, where\n\\begin{subequations}\n\\begin{gather}\nC^Z-B^Y = [(C+B)M\\alpha-CBM\\beta]^X \\label{eqn:29a}\\\\\n\\sqrt[X]{C^Z-B^Y} = M[(C+B)\\alpha-CB\\beta] \\label{eqn:29b}\\\\\nM= \\frac{\\sqrt[X]{C^Z-B^Y}}{(C+B)\\alpha-CB\\beta} \\label{eqn:29c}\n\\end{gather}\n\\end{subequations}\nSince every set $\\{\\alpha,\\beta\\}$ is unique, then per \\cref{eqn:29c} there exists a unique $M$ that satisfies $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$ when $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ simultaneously.\n\nGiven that $C^Z-B^Y$ and $[(C+B)M\\alpha-CBM\\beta]^X$ are identical, their binomial expansions are structurally identical, and their sums are identical, then indeed $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ together ensure the equality of \\cref{eqn:26,eqn:27b} hold for $M$ as defined in \\cref{eqn:29c}.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\noindent\\textbf{Characteristics of $\\bm{M}$, $\\bm{\\alpha}$ and $\\bm{\\beta$} from \\cref{Thm:2.9_Real_Alpha_Beta}}\n\nThe important feature is not the scalar but instead the characteristics of $\\alpha$ and $\\beta$ as defined above despite the scalar. Based on \\BealsEq, we know $A=\\sqrt[X]{C^Z-B^Y}$. The structural similarity between $C^Z-B^Y$ in the formula for $A$ and the expressions $C^{Z-X}-B^{Y-X}$ and $C^{Z-2X}-B^{Y-2X}$ in the formulas for $\\alpha$ and $\\beta$, respectively, is critical. This structural similarity will be explored and exploited later in this document.\n\nWe note $\\alpha$ and $\\beta$ could be defined differently from above and still maintain the equality of $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$, without scalar $M$. 
However, if one defined $\\alpha$ or $\\beta$ differently than above, then either of these terms must be $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$ or $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$ in order to satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$. Since any arbitrary $\\alpha$ corresponds to a unique $\\beta$, there are an infinite number of sets $\\{\\alpha,\\beta\\}$ that satisfy this equation. In some conditions, there are no rational pairs among the infinite number of sets $\\{\\alpha,\\beta\\}$, such as if $\\sqrt[X]{C^Z-B^Y}$ is irrational. But per the above, there is only unique set $\\{\\alpha,\\beta\\}$ when $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ when scalar $M$ is applied accordingly. We note all possible sets of $\\{\\alpha,\\beta\\}$ map to $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ for scalar $M$ since all sets must satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.10_No_Solution_Alpha_Beta_Irrational}\n\\Conjecture, if there exists set $\\{A,B,C,X,Y,Z\\}$ that does not satisfy these conditions, then $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ will be irrational.\n\\end{theorem}\n\\begin{proof}\nFrom \\cref{Thm:2.9_Real_Alpha_Beta}, we know for any and every possible value of $\\alpha$, that $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$. Without loss of generality, suppose $B$, $C$, $X$, $Y$, and $Z$ are integer but there is no integer value of $A$ that satisfies the conjecture, then $\\displaystyle{A=\\sqrt[X]{C^Z-B^Y}}$ must be irrational. Hence $\\beta$ is irrational given the irrational term $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$ in the numerator of $\\beta$.\n\nWe also know from \\cref{Thm:2.9_Real_Alpha_Beta} given any and every possible value of $\\beta$ that $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$. Here too without loss of generality, if $B$, $C$, $X$, $Y$, and $Z$ are integer but there is no integer value of $A$ that satisfies the conjecture, then $\\displaystyle{A=\\sqrt[X]{C^Z-B^Y}}$ must be irrational. Hence $\\alpha$ is irrational given the irrational term $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$ in the numerator of $\\alpha$.\n\nHence when set $\\{A,B,C,X,Y,Z\\}$ does not satisfy the conjecture, then the corresponding $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ is irrational.\n\\end{proof}\n\n\nWe note that the exclusion of scalar $M$ from $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$ in \\cref{Thm:2.10_No_Solution_Alpha_Beta_Irrational} (letting $M=1$) does not change the outcome in that if $C^Z-B^Y$ is not a perfect power, then scaled or unscaled $\\alpha$ and $\\beta$ cannot change the irrationality of the root of $C^Z-B^Y$. Since $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$, $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$, and scalar $M$ always together satisfy $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$, then we can study the properties of $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ as related to key characteristics of the conjecture. 
To do so, we need to establish the implication of coprimality and irrationality as it relates to $\\alpha$ and $\\beta$.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.11_Coprime_Alpha_Beta_Irrational}\n\\Conjecture, if gcd$(A,B,C)=1$, then both $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is rational, $A,B,C,X,Y,$ and $Z$ are integers that satisfy the conjecture, and gcd$(A,B,C)=1$. Then we can express $\\alpha$ as\n\\begin{subequations}\n\\begin{gather}\n\\alpha^X=C^{Z-X}-B^{Y-X}\n\\label{eqn:30a} \\\\\n\\alpha^X=\\frac{C^Z}{C^X}-\\frac{B^Y}{B^X}\n\\label{eqn:30b} \\\\\n\\alpha^X=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\n\\label{eqn:30c}\n\\end{gather}\n\\end{subequations}\nSince $\\alpha$ is rational, then $\\alpha$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\\label{eqn:31}\n\\end{equation}\nwhere $p^X=B^XC^Z-C^XB^Y$ and $q^X=B^XC^X$. We note that the denominator of \\cref{eqn:31} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nRegardless of the parity of $B$ or $C$, per \\cref{eqn:31}, $p^X$ must be even as it is the difference of two odd numbers or the difference of two even numbers. Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, then $p^X$ must be defined as $2^{iX}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$ where $2^{iX}$ is the perfect power of the even component of $p^X$ and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power of the remaining $n_f$ prime factors of $p^X$ where $f_1$, $f_2$, ..., $f_{n_f}$ are the remaining prime factors of $p^X$. Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}(f_1f_2\\cdots f_{n_f})^{jX}}{BC} = \\frac{B^XC^Z-C^XB^Y}{BC}\\label{eqn:32}\n\\end{equation}\n$B$ and $C$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1b_2\\cdots b_{n_b} c_1c_2\\cdots c_{n_c}}\n\\label{eqn:33}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are prime factors of $B$ and $C$ respectively. Based on the right side of \\cref{eqn:32}, the entire denominator $BC$ is fully subsumed by the numerator, and thus every one of the prime factors in the denominator equals one of the prime factors in the numerator. Thus after dividing, one or more of the exponents in the numerator reduces thereby canceling the entire denominator accordingly. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:33} simplifies to\n\\begin{equation}\np^X=2^{iX-2}f_1^{jX-1}f_2^{jX-2}\\cdots f_{n_f}^{jX-1}\n\\label{eqn:34}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. Therefore the $X^{th}$ root is irrational which contradicts the assumption that $\\displaystyle{\\alpha=\\frac{p}{q}}$ is rational. We note that all factors in the denominator of \\cref{eqn:33} cannot each be perfect powers of $X$ since the bases $B$ and $C$ are defined to be reduced. 
More generally, beyond the illustration, after simplifying one or more terms, \\cref{eqn:34} will have an exponent that is not a multiple of $X$ and thus is irrational when taking the root accordingly.\n\nSuppose $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is rational, $A,B,C,X,Y,$ and $Z$ are integers, and gcd$(A,B,C)=1$. Then we can express $\\beta$ as\n\\begin{subequations}\n\\begin{gather}\n\\beta^X=C^{Z-2X}-B^{Y-2X}\n\\label{eqn:35a} \\\\\n\\beta^X=\\frac{C^Z}{C^{2X}}-\\frac{B^Y}{B^{2X}}\n\\label{eqn:35b} \\\\\n\\beta^X=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\n\\label{eqn:35c}\n\\end{gather}\n\\end{subequations}\nSince $\\beta$ is rational, then $\\beta$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\\label{eqn:36}\n\\end{equation}\nwhere $p^X=B^{2X}C^Z-C^{2X}B^Y$ and $q^X=B^{2X}C^{2X}$. We note that the denominator of \\cref{eqn:36} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nRegardless of the parity of $B$ or $C$, per \\cref{eqn:36}, $p^X$ must be even as it is the difference of two odd numbers or the difference of two even numbers. Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, then $p^X$ must be defined as $2^{iX}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$ where $2^{iX}$ is the perfect power of the even component of $p^X$ and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power of the remaining $n_f$ prime factors of $p^X$ where $f_1$, $f_2$, ..., $f_{n_f}$ are the remaining prime factors of $p^X$. Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}(f_1f_2\\cdots f_{n_f})^{jX}}{B^2C^2} = \\frac{B^{2X}C^Z-C^{2X}B^Y}{B^2C^2}\\label{eqn:37}\n\\end{equation}\n$B^2$ and $C^2$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1^2b_2^2\\cdots b_{n_b}^2 c_1^2c_2^2\\cdots c_{n_c}^2}\n\\label{eqn:38}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are prime factors of $B$ and $C$ respectively. Based on the right side of \\cref{eqn:37}, the entire denominator $B^2C^2$ is fully subsumed by the numerator, and thus every one of the prime factors in the denominator equals one of the prime factors in the numerator. Thus after dividing, one or more of the exponents in the numerator reduces thereby canceling the entire denominator accordingly. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:38} simplifies to\n\\begin{equation}\np^X=2^{iX-4}f_1^{jX-2}f_2^{jX-4}\\cdots f_{n_f}^{jX-2}\n\\label{eqn:39}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. Therefore the $X^{th}$ root is irrational which contradicts the assumption that $\\displaystyle{\\alpha=\\frac{p}{q}}$ is rational. We note that all factors in the denominator of \\cref{eqn:38} cannot each be perfect powers of $X$ since the bases $B$ and $C$ are defined to be reduced. 
More generally, beyond the illustration, after simplifying one or more terms, \\cref{eqn:39} will have an exponent that is not a multiple of $X$ and thus is irrational when taking the root accordingly.\n\n\nSince the definition of a rational number is the ratio of two integers in which the ratio is reduced, given that gcd$(B,C)=1$, then both $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational} establishes that if gcd$(A,B,C)=1$ then $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational. We now establish the reverse, such that if $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are rational then gcd$(A,B,C)>1$.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}\n\\Conjecture, if $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ or $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are rational, then gcd$(A,B,C)>1$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is rational and $A,B,C,X,Y,$ and $Z$ are integers that satisfy the conjecture. Then we can express $\\alpha$ as\n\\begin{subequations}\n\\begin{gather}\n\\alpha^X=C^{Z-X}-B^{Y-X}\n\\label{eqn:40a} \\\\\n\\alpha^X=\\frac{C^Z}{C^X}-\\frac{B^Y}{B^X}\n\\label{eqn:40b} \\\\\n\\alpha^X=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\n\\label{eqn:40c}\n\\end{gather}\n\\end{subequations}\nSince $\\alpha$ is rational, then $\\alpha$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\\label{eqn:41}\n\\end{equation}\nwhere $p^X=B^XC^Z-C^XB^Y$ and $q^X=B^XC^X$. We note that the denominator of \\cref{eqn:41} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nSuppose gcd$(A,B,C)=k$ where integer $k\\geq 2$. Thus $A=ak$, $B=bk$, and $C=ck$ for pairwise coprime integers $a$, $b$, and $c$. We can express \\cref{eqn:41} with the common term, namely\n\\begin{subequations}\n\\begin{gather}\n\\frac{p^X}{q^X}=\\frac{(kb)^X(kc)^Z-(kc)^X(kb)^Y}{(kb)^X(kc)^X} \\label{eqn:42a} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{X+Z}b^Xc^Z-k^{X+Y}c^Xb^Y}{k^{2X}b^Xc^X} \\label{eqn:42b} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{Z-X}b^Xc^Z-k^{Y-X}c^Xb^Y}{b^Xc^X} \\label{eqn:42c} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{min(Z-X,Y-X)}[k^{Z-min(Z-X,Y-X)}b^Xc^Z-k^{Y-min(Z-X,Y-X)}c^Xb^Y]}{b^Xc^X} \\label{eqn:42d}\n\\end{gather}\n\\end{subequations}\n\nRegardless of the parity of $b$, $c$, or $k$ per \\cref{eqn:42c,eqn:42d}, $p^X$ must be even as it is the difference of two odd numbers or the difference of two even numbers. 
Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, then $p^X$ must be defined as $2^{iX}k^{min(Z-X,Y-X)}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$ where $2^{iX}$ is the perfect power of the even component of $p^X$, $k^{min(Z-X,Y-X)}$ is the common factor based on gcd$(A,B,C)$, and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power of the remaining $n_f$ prime factors of $p^X$ where $f_1$, $f_2$, ..., $f_{n_f}$ are the remaining prime factors of $p^X$. Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{min(Z-X,Y-X)}(f_1f_2\\cdots f_n)^{jX}}{bc}\n=\\frac{k^{Z-X}b^Xc^Z-k^{Y-X}c^Xb^Y}{bc} \\label{eqn:43}\n\\end{equation}\nBoth $b$ and $c$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{min(Z-X,Y-X)}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1b_2\\cdots b_{n_b} c_1c_2\\cdots c_{n_c}}\n\\label{eqn:44}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are prime factors of $b$ and $c$ respectively. Based on the right side of \\cref{eqn:43}, the entire denominator $bc$ is fully subsumed by the numerator, and thus every one of the prime factors in the denominator equals one of the prime factors in the numerator. Thus after dividing, one or more of the exponents in the numerator reduces thereby canceling the entire denominator accordingly. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:44} simplifies to\n\\begin{equation}\np^X=2^{iX-4}k^{min(Z-X,Y-X)}f_1^{jX-2}f_2^{jX-4}\\cdots f_{n_f}^{jX-2}\n\\label{eqn:45}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. If $k=1$, then the $X^{th}$ root is irrational which contradicts the assumption that $\\displaystyle{\\alpha=\\frac{p}{q}}$ is rational. However if $k>1$ such that it is a composite of the factors that are not individually perfect powers of $X$, then the resulting expression is a perfect power of X. Hence when $k=1$, then per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is irrational. However if $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is rational, then $k\\neq1$ and thus gcd$(A,B,C)\\neq1$.\n\n\\bigskip\n\nSuppose $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is rational and $A,B,C,X,Y,$ and $Z$ are integers that satisfy the conjecture. Then we can express $\\beta$ as\n\\begin{subequations}\n\\begin{gather}\n\\beta^X=C^{Z-2X}-B^{Y-2X}\n\\label{eqn:46a} \\\\\n\\beta^X=\\frac{C^Z}{C^2X}-\\frac{B^Y}{B^2X}\n\\label{eqn:46b} \\\\\n\\beta^X=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\n\\label{eqn:46c}\n\\end{gather}\n\\end{subequations}\nSince $\\beta$ is rational, then $\\beta$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\\label{eqn:47}\n\\end{equation}\nwhere $p^X=B^{2X}C^Z-C^{2X}B^Y$ and $q^X=B^{2X}C^{2X}$. We note that the denominator of \\cref{eqn:47} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nSuppose gcd$(A,B,C)=k$ where integer $k\\geq 2$. 
Thus $A=ak$, $B=bk$, and $C=ck$ for pairwise coprime integers $a$, $b$, and $c$. We can express \\cref{eqn:47} with the common term, namely\n\\begin{subequations}\n\\begin{gather}\n\\frac{p^X}{q^X}=\\frac{(kb)^{2X}(kc)^Z-(kc)^{2X}(kb)^Y}{(kb)^{2X}(kc)^{2X}}\\label{eqn:48a} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{2X+Z}b^{2X}c^Z-k^{2X+Y}c^{2X}b^Y}{k^{4X}b^{2X}c^{2X}} \\label{eqn:48b} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{Z-2X}b^{2X}c^Z-k^{Y-2X}c^{2X}b^Y}{b^{2X}c^{2X}} \\label{eqn:48c} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{\\min(Z-2X,Y-2X)}[k^{Z-2X-\\min(Z-2X,Y-2X)}b^{2X}c^Z-k^{Y-2X-\\min(Z-2X,Y-2X)}c^{2X}b^Y]}{b^{2X}c^{2X}} \\label{eqn:48d}\n\\end{gather}\n\\end{subequations}\n\nRegardless of the parity of $b$, $c$, or $k$ per \\cref{eqn:48c,eqn:48d}, $p^X$ must be even, as it is the difference of two odd numbers or the difference of two even numbers. Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, $p^X$ can be expressed as $2^{iX}k^{\\min(Z-2X,Y-2X)}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$, where $2^{iX}$ is the perfect power of the even component of $p^X$, $k^{\\min(Z-2X,Y-2X)}$ is the common factor based on gcd$(A,B,C)$, and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power formed from the remaining $n_f$ prime factors $f_1$, $f_2$, ..., $f_{n_f}$ of $p^X$. Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{\\min(Z-2X,Y-2X)}(f_1f_2\\cdots f_{n_f})^{jX}}{bc}\n=\\frac{k^{Z-2X}b^{2X}c^Z-k^{Y-2X}c^{2X}b^Y}{bc} \\label{eqn:49}\n\\end{equation}\nBoth $b$ and $c$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{\\min(Z-2X,Y-2X)}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1b_2\\cdots b_{n_b} c_1c_2\\cdots c_{n_c}}\n\\label{eqn:50}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are the prime factors of $b$ and $c$ respectively. Based on the right side of \\cref{eqn:49}, the entire denominator $bc$ is fully subsumed by the numerator, and thus every prime factor in the denominator equals one of the prime factors in the numerator. Thus, after dividing, one or more of the exponents in the numerator is reduced, thereby canceling the entire denominator. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:50} simplifies to\n\\begin{equation}\np^X=2^{iX-4}k^{\\min(Z-2X,Y-2X)}f_1^{jX-2}f_2^{jX-4}\\cdots f_{n_f}^{jX-2}\n\\label{eqn:51}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. If $k=1$, then the $X^{th}$ root is irrational, which contradicts the assumption that $\\displaystyle{\\beta=\\frac{p}{q}}$ is rational. However, if $k>1$ and $k$ comprises the factors that are not individually perfect powers of $X$, then the resulting expression can be a perfect power of $X$. Hence when $k=1$, then per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is irrational. 
However, if $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is rational, then $k\\neq1$ and thus gcd$(A,B,C)\\neq1$.\n\nThus if either $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ or $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ (or both) is rational, then gcd$(A,B,C)>1$.\n\\end{proof}\n\n\n\\bigskip\n\n\nThe values of $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ have critical properties relating coprimality to integer solutions that satisfy the conjecture. We know from \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational} that if gcd$(A,B,C)=1$, then $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are both irrational. Even though all feasible values of $\\alpha$ and $\\beta$ map to this pair given \\cref{Thm:2.9_Real_Alpha_Beta}, we can still consider other feasible values of $\\alpha$ and $\\beta$ as related to coprimality.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate}\n\\Conjecture, if gcd$(A,B,C)=1$, then any value of $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ must be irrational or indeterminate.\n\\end{theorem}\n\n\\begin{proof}\nPer \\cref{Thm:2.8_Functional_Form}, $\\sqrt[X]{C^Z-B^Y} =(C+B)\\alpha-CB\\beta$. If we suppose $\\sqrt[X]{C^Z-B^Y}$ is irrational, then $(C+B)\\alpha-CB\\beta$ is irrational. With $C$ and $B$ integers, if $\\alpha$ and $\\beta$ were rational, then $(C+B)\\alpha-CB\\beta$ would be composed solely of rational terms and thus would be rational, which contradicts the assumption of irrationality. Hence $\\alpha$ or $\\beta$ must be irrational. Further, per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\alpha$ and $\\beta$ are irrational if $\\sqrt[X]{C^Z-B^Y}$ is irrational and gcd$(A,B,C)=1$. Thus here too $\\alpha$ or $\\beta$ must be irrational.\n\nIf we suppose instead $\\sqrt[X]{C^Z-B^Y}$ is rational, then $(C+B)\\alpha-CB\\beta$ is rational.\nGiven gcd$(A,B,C)=1$, per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational} we know both $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational.\n\nSuppose instead $\\alpha$ is defined as any real other than $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$. As such, per \\cref{Thm:2.8_Functional_Form}, $\\beta$ is derived by $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$. Hence substituting into \\cref{eqn:24} gives us\n\\begin{subequations}\n\\begin{align}\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha-CB\\beta \\label{eqn:52a} \\\\\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha-CB\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB} \\label{eqn:52b}\\\\\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha+\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha \\label{eqn:52c} \\\\\n\\sqrt[X]{C^Z-B^Y} &=\\sqrt[X]{C^Z-B^Y} \\label{eqn:52d}\n\\end{align}\n\\end{subequations}\nRegardless of the value selected for $\\alpha$, both $\\alpha$ and $\\beta$ fall out and thus both are indeterminate when gcd$(A,B,C)=1$.\n\nSuppose instead $\\beta$ is defined as any real other than $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$. As such, per \\cref{Thm:2.8_Functional_Form}, $\\alpha$ is derived by $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$. 
Hence substituting into \\cref{eqn:24} gives us\n\\begin{subequations}\n\\begin{align}\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha-CB\\beta \\label{eqn:53a} \\\\\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}-CB\\beta \\label{eqn:53b}\\\\\n\\sqrt[X]{C^Z-B^Y} &=\\sqrt[X]{C^Z-B^Y}+CB\\beta-CB\\beta \\label{eqn:53c} \\\\\n\\sqrt[X]{C^Z-B^Y} &=\\sqrt[X]{C^Z-B^Y} \\label{eqn:53d}\n\\end{align}\n\\end{subequations}\nRegardless of the value selected for $\\beta$, both $\\alpha$ and $\\beta$ fall out and thus both are indeterminate when gcd$(A,B,C)=1$.\n\nHence if gcd$(A,B,C)=1$, then both of the terms $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ must be irrational, while any other value of $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ must be irrational or indeterminate.\n\\end{proof}\n\\label{Section:Reparameterize_End}\n\n\n\n\\bigskip\n\n\n\\Line\n\\subsection{Impossibility of the Terms}\n\\label{Section:Impossibility_Start}\nHaving established that pairwise coprimality follows when gcd$(A,B,C)=1$ (\\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime}), that there is a unique reparameterization of the bases of \\BealsEq\\, whose rationality is tied to the coprimality of the terms (\\cref{Thm:2.9_Real_Alpha_Beta,Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}), and that a line through the origin with an irrational slope does not pass through any non-trivial lattice points (\\cref{Thm:2.1_Irrational_Slope_No_Lattice}), we now prove the conjecture under two mutually exclusive conditions:\n\\begin{enumerate}\n\\item geometric implications when gcd$(A,B,C)=1$\n\\item geometric implications when gcd$(A,B,C)>1$\n\\end{enumerate}\nThese steps lead to a critical contradiction which demonstrates the impossibility of the existence of counter-examples due to fundamental features of the conjecture.\n\nAs implied by Catalan's conjecture and proven by Mih\\u{a}ilescu \\cite{mihailescu2004primary}, no integer solutions exist when $A$, $B$, or $C$ equals 1, regardless of coprimality. Hence we consider the situation in which $A,B,C\\geq2$. Given the configuration of the conjecture, $A$, $B$, $C$, $X$, $Y$, and $Z$ are positive integers, and thus $A^X$, $B^Y$, and $C^Z$ are also integers. A set of values that satisfy the conjecture can be plotted on a Cartesian coordinate grid with axes $A^X$, $B^Y$, and $C^Z$. See \\cref{Fig:3DScatter}. 
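As a concrete illustration (outside the formal argument), consider the known non-coprime solution $3^3+6^3=3^5$, for which gcd$(3,6,3)=3$. It corresponds to the lattice point $(27,216,243)$, and the three ratios that will serve as slopes below are all rational:\n\\begin{equation*}\n\\frac{A^X+B^Y}{B^Y}=\\frac{243}{216}=\\frac{9}{8}, \\qquad \\frac{A^X+B^Y}{A^X}=\\frac{243}{27}=9, \\qquad \\frac{B^Y}{A^X}=\\frac{216}{27}=8.\n\\end{equation*}\n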
Based on \\cref{eqn:1}, the line passing through the origin and the point $(A^X,B^Y,A^X+B^Y)$ can be expressed via the angles it makes with the axes (see \\cref{Fig:ScatterPlotWithAngles}), namely\n\\begin{subequations}\n\\begin{gather}\n\\theta_{_{C^ZB^Y}} = \\tan^{-1}\\frac{A^X+B^Y}{B^Y} \\label{eqn:54a} \\\\\n\\theta_{_{C^ZA^X}} = \\tan^{-1}\\frac{A^X+B^Y}{A^X} \\label{eqn:54b} \\\\\n\\theta_{_{B^YA^X}} = \\tan^{-1}\\frac{B^Y}{A^X} \\label{eqn:54c}\n\\end{gather}\n\\end{subequations}\nwhere $\\theta_{_{C^ZB^Y}}$ is the angle subtended between the $B^Y$ axis and the line through the origin and the given point in the $C^Z \\times B^Y$ plane, $\\theta_{_{C^ZA^X}}$ is the angle subtended between the $A^X$ axis and the line through the origin and the given point in the $C^Z \\times A^X$ plane, and $\\theta_{_{B^YA^X}}$ is the angle subtended between the $A^X$ axis and the line through the origin and the given point in the $B^Y \\times A^X$ plane.\n\nThe line subtended in each plane by the origin and the given point $(A^X,B^Y,A^X+B^Y)$ has slopes that by definition are identical to the arguments of the corresponding inverse tangent functions in \\crefrange{eqn:54a}{eqn:54c}. In each case, the numerator and denominator are integers, and thus the corresponding ratios (and therefore slopes) are rational. Given that this line passes through the origin and has a rational slope in all three planes, we conclude that the infinite line passes through infinitely many lattice points, namely at integer multiples of the slopes.\n\n\\begin{figure}\n\\includegraphics[width=.55\\textwidth]{3D_Scatter_Map.jpg}\n\\caption{Map of a corresponding point between two different plots based on \\BealsEq\\, satisfying the associated constraints and bounds.}\n\\label{Fig:3DScatterMap}\n\\end{figure}\n\nBased on the conjecture, the given lattice point $(A^X,B^Y,A^X+B^Y)$ relative to axes $A^X$, $B^Y$, and $C^Z$ corresponds to a lattice point $(A,B,\\sqrt[Z]{A^X+B^Y})$ in a scatter plot based on axes $A$, $B$, and $C$. See \\cref{Fig:3DScatterMap}. The conjecture states there is no integer solution to \\BealsEq\\, that simultaneously satisfies all the conditions if gcd$(A,B,C)=1$. Thus, from a geometric perspective, this means that if gcd$(A,B,C)=1$, then the line in the scatter plot based on axes $A$, $B$, and $C$ in \\cref{Fig:3DScatterMap} could never pass through a non-trivial lattice point, since $A$, $B$, and $C$ could not all be integers simultaneously. Conversely, if the corresponding line in the scatter plot based on axes $A$, $B$, and $C$ in \\cref{Fig:3DScatterMap} does pass through a non-trivial lattice point, then based on the conjecture we know gcd$(A,B,C)>1$. Hence to validate the conjecture, we need to test the relationship between \\BealsEq, the lattice points in both graphs, and the slopes subtended by the origin and these lattice points.\n\n\\begin{figure}\n\\includegraphics[width=.66\\textwidth]{3D_Scatter_Plot_with_Angles_2.jpg}\n\\caption{Angles between the axes and the line segment subtended by the origin and a single point $A$, $B$, and $C$ from \\cref{Fig:CZBYAZScatter}.}\n\\label{Fig:ScatterPlotWithAngles2}\n\\end{figure}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.14_Main_Proof_Coprime_No_Solutions}\n\\Conjecture, if gcd$(A,B,C) = 1$, then there is no integer solution.\n\\end{theorem}\n\n\\begin{proof}\nThe line through the origin and the point $(A,B,\\sqrt[Z]{A^X+B^Y})$ can be expressed via the angles it makes with the axes (see \\cref{Fig:ScatterPlotWithAngles2}). 
These angles are\n\\begin{subequations}\n\\begin{gather}\n\\theta_{_{CB}} = \\tan^{-1}\\frac{\\sqrt[Z]{A^X+B^Y}}{B} \\label{eqn:55a} \\\\\n\\theta_{_{CA}} = \\tan^{-1}\\frac{\\sqrt[Z]{A^X+B^Y}}{A} \\label{eqn:55b} \\\\\n\\theta_{_{BA}} = \\tan^{-1}\\frac{B}{A} \\label{eqn:55c}\n\\end{gather}\n\\end{subequations}\nwhere $\\theta_{_{CB}}$ is the angle subtended between the $B$ axis and the line through the origin and the given point in the $C\\times B$ plane, $\\theta_{_{CA}}$ is the angle subtended between the $A$ axis and the line through the origin and the given point in the $C\\times A$ plane, and $\\theta_{_{BA}}$ is the angle subtended between the $A$ axis and the line through the origin and the given point in the $B\\times A$ plane.\n\nThe line that corresponds to $\\theta_{_{CB}}$ in \\cref{eqn:55a} has slope $\\displaystyle{m=\\frac{\\sqrt[Z]{A^X+B^Y}}{B}}$ in the $C\\times B$ plane, and the line that corresponds to $\\theta_{_{CA}}$ in \\cref{eqn:55b} has slope $\\displaystyle{m=\\frac{\\sqrt[Z]{A^X+B^Y}}{A}}$ in the $C\\times A$ plane. These two slopes are different from the slope of the line that corresponds to $\\theta_{_{BA}}$ in \\cref{eqn:55c}, which is $\\displaystyle{m=\\frac{B}{A}}$ in the $B\\times A$ plane, since this latter slope is merely the ratio of two integers, whereas the numerator of the first two is $\\sqrt[Z]{A^X+B^Y}$, which may not be rational.\n\nBuilding from \\cref{eqn:55a,eqn:55b}, let $\\MCB$ and $\\MCA$\nbe the slopes of the lines through the origin and the given point in the $C \\times B$ and $C \\times A$ planes, respectively. Thus we have\n\\begin{subequations}\n\\begin{align}\n\\MCB &= \\frac{\\sqrt[Z]{A^X+B^Y}}{B} &\n\\MCA &= \\frac{\\sqrt[Z]{A^X+B^Y}}{A}\n\\label{eqn:56a} \\\\\n\\MCB &= \\sqrt[Z]{\\frac{A^X+B^Y}{B^Z}} &\n\\MCA &= \\sqrt[Z]{\\frac{A^X+B^Y}{A^Z}}\n\\label{eqn:56b} \\\\\n\\MCB &= \\left(\\frac{A^X}{B^Z} +B^{Y-Z}\\right) ^{\\frac{1}{Z}} &\n\\MCA &= \\left(A^{X-Z} + \\frac{B^Y}{A^Z} \\right) ^{\\frac{1}{Z}}\n\\label{eqn:56c}\n\\end{align}\n\\end{subequations}\nPer \\cref{Thm:2.1_Irrational_Slope_No_Lattice}, if a line through the origin has an irrational slope, then that line does not pass through any non-trivial lattice points. Relative to the conjecture, a line in 3 dimensions that passes through the origin also passes through the point whose coordinates are the integer bases satisfying the terms of the conjecture. Since a solution satisfying the conjecture is integer, it must be a lattice point, and the corresponding lines must have rational slopes. Hence if there exist integer solutions, then slopes $\\MCB$ and $\\MCA$ are rational. If the slopes are irrational, then there is no integer solution. We must now consider two mutually exclusive scenarios in relation to slopes $\\MCB$ and $\\MCA$:\n\n\\bigskip\n\n\\textbf{Scenario 1 of 2: $\\bm{A^X \\neq B^Y}$.} Suppose $A^X\\neq B^Y$. Dividing both terms by $B^Z$ gives us $\\displaystyle{\\frac{A^X}{B^Z} \\neq B^{Y-Z}}$. Likewise, dividing both terms of $A^X \\neq B^Y$ by $A^Z$ gives us $\\displaystyle{A^{X-Z} \\neq \\frac{B^Y}{A^Z}}$. 
Using this relationship, $\\MCB$ and $\\MCA$ from \\cref{eqn:56c} become\n\\begin{subequations}\n\\begin{align}\n\\MCB &= \\left(\\frac{A^X}{B^Z}\\right)^{\\frac{1}{Z}} \\left(1+ \\frac{B^{Y-Z}}\n {\\left(\\frac{A^X}{B^Z}\\right)}\\right)^{\\frac{1}{Z}} &\n\\MCA &= (A^{X-Z})^{\\frac{1}{Z}} \\left(1+ \\frac{\\left(\\frac{B^Y}{A^Z}\\right)}\n {A^{X-Z}} \\right)^{\\frac{1}{Z}}\n\\label{eqn:57a} \\\\\n\\MCB &= \\frac{A^{\\frac{X}{Z}}}{B} \\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}} &\n\\MCA &= ({A^{\\frac{X}{Z}-1}}) \\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}\n\\label{eqn:57b}\n\\end{align}\n\\end{subequations}\n\nBoth $\\MCB$ and $\\MCA$ must be rational for there to exist an integer solution that satisfies the conjecture. Based on \\cref{eqn:57b}, there are two cases that can ensure both $\\MCB$ and $\\MCA$ are rational:\n\\begin{enumerate}\n\\item The term $\\displaystyle{\\left(1+\\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:57b} is rational, therefore both terms\n $\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ must be rational so\n their respective products are rational.\n\\item The term $\\displaystyle{\\left(1+\\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:57b} is irrational, therefore both terms\n $\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ are irrational.\n However, the irrationality of these two terms is canceled by the irrationality\n of the denominator of $\\displaystyle{\\left(1+\\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$\n so their respective products are rational.\n\\end{enumerate}\n\n\\bigskip\n\n\\noindent\\textbf{Case 1 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are rational}\n\nStarting with the first case, assume $\\displaystyle{\\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:57b} is rational and thus $\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ are rational. Since $A$ is reduced (not a perfect power), gcd$(A,B)=1$, and $A$ is raised to exponent $\\displaystyle{\\frac{X}{Z}}$ and $\\displaystyle{\\frac{X}{Z}-1}$, respectively, these terms are rational only if $X$ is an integer multiple of $Z$. However, per \\cref{Dfn:2.1_X_cannot_be_mult_of_Z} on page \\pageref{Dfn:2.1_X_cannot_be_mult_of_Z}, $X$ is not an integer multiple of $Z$, and therefore the requirements to ensure $\\MCB$ and $\\MCA$ are rational cannot be met.\n\n\\bigskip\n\\noindent\\textbf{Case 2 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are irrational}\n\nConsidering the second case, assume $\\displaystyle{\\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:57b} is irrational, and thus $\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ must also be irrational so that the irrationality cancels upon multiplication. For slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:57b} to be rational despite containing irrational terms, the products in \\cref{eqn:57a,eqn:57b} must be rational. We can re-express \\cref{eqn:56a} equivalently as\n\\begin{equation}\n\\MCB = \\frac{C}{\\sqrt[Y]{C^Z-A^X}} \\qquad\\qquad\\quad\n\\MCA = \\frac{C}{\\sqrt[X]{C^Z-B^Y}}\n\\label{eqn:58}\n\\end{equation}\nin which the denominators must be rational. 
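To make the re-expression in \\cref{eqn:58} explicit, note that by \\cref{eqn:1} we have $B^Y=C^Z-A^X$ and $A^X=C^Z-B^Y$, so that\n\\begin{equation*}\n\\MCB=\\frac{\\sqrt[Z]{A^X+B^Y}}{B}=\\frac{C}{B}=\\frac{C}{\\sqrt[Y]{B^Y}}=\\frac{C}{\\sqrt[Y]{C^Z-A^X}}, \\qquad \\MCA=\\frac{C}{A}=\\frac{C}{\\sqrt[X]{A^X}}=\\frac{C}{\\sqrt[X]{C^Z-B^Y}}.\n\\end{equation*}\n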
Per \\cref{Thm:2.8_Functional_Form}, the denominators in \\cref{eqn:58} can be reparameterized and thus \\cref{eqn:58} becomes\n\\begin{equation}\n \\MCB = \\frac{C}{(C+A)M_{_1}\\alphaCA -CAM_{_1}\\betaCA } \\qquad\\qquad\n \\MCA = \\frac{C}{(C+B)M_{_2}\\alphaCB -CBM_{_2}\\betaCB }\n \\label{eqn:59}\n\\end{equation}\nwhere $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ are positive real numbers and $M_{_1}$ and $M_{_2}$ are positive scalars. Per \\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$ and $\\betaCB$ can be defined as\n\\begin{subequations}\n\\begin{align}\n\\alphaCA &=\\sqrt[Y]{C^{Z-Y}-A^{X-Y}} &\n\\alphaCB &=\\sqrt[X]{C^{Z-X}-B^{Y-X}}\n\\label{eqn:60a} \\\\\n\\betaCA &=\\sqrt[Y]{C^{Z-2Y}-A^{X-2Y}} &\n\\betaCB &=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}\n\\label{eqn:60b}\n\\end{align}\n\\end{subequations}\nPer \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ as defined in \\cref{eqn:60a,eqn:60b} are irrational when gcd$(A,B,C)=1$. Thus $\\MCB$ and $\\MCA$ in \\cref{eqn:59} are irrational.\n\nTherefore, as consequences of both cases 1 and 2, slopes $\\MCB$ and $\\MCA$ must be irrational when $A^X \\neq B^Y$ and gcd$(A,B,C)=1$.\n\n\\bigskip\n\nAs a side note, an alternate way to define $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ is to select any real value for $\\alphaCA$ and $\\alphaCB$ and then derive $\\betaCA$ and $\\betaCB$, or to select any real value for $\\betaCA$ and $\\betaCB$ and then derive $\\alphaCA$ and $\\alphaCB$, per \\cref{Thm:2.9_Real_Alpha_Beta}. However, per \\cref{Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate}, when gcd$(A,B,C)=1$, the derived values of $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ are either irrational or indeterminate. Thus $\\MCB$ and $\\MCA$ in \\cref{eqn:59} are irrational or indeterminate. If the slopes $\\MCB$ and $\\MCA$ were rational, then $\\alpha$ and $\\beta$ would both be determinate, which is a contradiction. Hence here too the requirements to ensure $\\MCB$ and $\\MCA$ are rational cannot be met when gcd$(A,B,C)=1$.\n\n\n\n\\bigskip\n\n\\textbf{Scenario 2 of 2: $\\bm{A^X=B^Y}$.} Suppose $A^X=B^Y$. Given the bases are reduced, we conclude that $A=B$. Under the hypothesis of this theorem together with \\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime}, gcd$(A,B)=1$ and thus $A\\neq B$; hence this scenario is impossible.\n\n\\bigskip\n\nSince the scenarios are mutually exclusive and exhaustive given gcd$(A,B)=1$, and given that in each scenario the slopes $\\MCB$ and $\\MCA$ are irrational, the line through the origin and any putative integer solution must have irrational slopes. However, as already proven in \\cref{Thm:2.1_Irrational_Slope_No_Lattice}, lines through the origin with irrational slopes cannot pass through any non-trivial lattice points. \\Conjecture, if gcd$(A,B,C) = 1$, then slopes $\\MCB$ and $\\MCA$ are irrational and thus there is no integer solution.\n\\end{proof}\n\nWe know that each grid point $(A,B,C)$ subtends a line through the origin and that point, where that point is a putative integer solution satisfying the conjecture. We also know that each grid point corresponds to a set of slopes. Further, we know from \\cref{Thm:2.1_Irrational_Slope_No_Lattice} that a line through the origin with an irrational slope does not pass through any non-trivial lattice points. 
Since both $\\MCB$ and $\\MCA$ are irrational when gcd$(A,B,C)=1$, their corresponding lines fail to pass through any non-trivial lattice points, and thus there are no corresponding integer solutions for $A$, $B$, and $C$. Hence there is no integer solution satisfying the conjecture when gcd$(A,B,C)=1$.\n\\label{Section:Impossibility_End}\n\n\n\\bigskip\n\n\n\\Line\n\\subsection{Requirement for Possibility of the Terms}\n\\label{Section:Possibility_Start}\nWe have established that when gcd$(A,B,C)=1$ the slopes of a line through the origin and the lattice point $(A,B,C)$ are irrational, which translates to the non-existence of a non-trivial lattice point and hence to no integer solutions to the conjecture. We now consider the reverse: if there is an integer solution that satisfies the conjecture, then gcd$(A,B,C)$ must be greater than 1, which translates to the existence of a non-trivial lattice point through which the line passes.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.15_Main_Proof_Solutions_Then_Not_Coprime}\n\\Conjecture, if there are integer solutions, then gcd$(A,B,C)>1$.\n\\end{theorem}\n\n\\begin{proof}\nGiven the configuration of the conjecture, $A$, $B$, $C$, $X$, $Y$, and $Z$ are positive integers, and thus $A^X$, $B^Y$, and $C^Z$ are also integers. A set of values that satisfy the conjecture corresponds to a point on a scatter plot with axes $A$, $B$, and $C$. See \\cref{Fig:3DScatterMap}. Based on \\cref{eqn:1}, the line passing through the origin and the point $(A,B,\\sqrt[Z]{A^X+B^Y})$ can be expressed via its slopes in the three planes (see \\cref{eqn:55a,eqn:55b,eqn:56a} and \\cref{Fig:ScatterPlotWithAngles2}), namely\n\\begin{subequations}\n\\begin{align}\n\\MCB &= \\frac{\\sqrt[Z]{A^X+B^Y}}{B} =\\frac{C}{B} &\n\\MCA &= \\frac{\\sqrt[Z]{A^X+B^Y}}{A} =\\frac{C}{A} \\label{eqn:61a} \\\\\n\\MCB &= \\sqrt[Z]{\\frac{A^X+B^Y}{B^Z}} &\n\\MCA &= \\sqrt[Z]{\\frac{A^X+B^Y}{A^Z}} \\label{eqn:61b}\n\\end{align}\n\\end{subequations}\nwhere $\\MCB$ and $\\MCA$ are the slopes of the lines through the origin and the given point in the $C \\times B$ and $C \\times A$ planes, respectively. We can likewise define $\\MBA$ as the slope of the line through the origin and the given point in the $B\\times A$ plane based on \\cref{eqn:55c}, namely\n\\begin{equation}\n \\MBA = \\frac{B}{A} \\label{eqn:62}\n\\end{equation}\nWe know from \\cref{Thm:2.14_Main_Proof_Coprime_No_Solutions} that $\\MCB$ and $\\MCA$ cannot be rational if gcd$(A,B,C)=1$. Suppose gcd$(A,B,C)=k$ where $k\\geq 2$. Thus with integer $k$ common to $A$, $B$, and $C$, we can express \\cref{eqn:1} with the common term, namely\n\\begin{equation}\na^Xk^X + b^Yk^Y = c^Zk^Z \\label{eqn:63}\n\\end{equation}\nwhere $a$, $b$, and $c$ are positive coprime integer factors of $A$, $B$, and $C$ respectively, and where $A=ak$, $B=bk$, $C=ck$, and where $A^X=a^Xk^X$, $B^Y=b^Yk^Y$, and $C^Z=c^Zk^Z$. We can thus express the slopes from \\cref{eqn:61b,eqn:62} with the common term, namely\n\\begin{subequations}\n\\begin{gather}\n\\MCB = \\sqrt[Z]{\\frac{a^Xk^X+b^Yk^Y}{b^Zk^Z}} \\qquad\\qquad\n\\MCA = \\sqrt[Z]{\\frac{a^Xk^X+b^Yk^Y}{a^Zk^Z}} \\label{eqn:64a}\\\\\n\\MBA = \\frac{bk}{ak} = \\frac{b}{a} \\label{eqn:64b}\n\\end{gather}\n\\end{subequations}\nWe observe that the common term $k$ cancels from $\\MBA$ in \\cref{eqn:64b} and thus $\\MBA$ is rational regardless of the common term. 
We can simplify \\cref{eqn:64a} as\n\\begin{equation}\n\\MCB = \\left(\\frac{(ak)^X}{(bk)^Z} +(bk)^{Y-Z}\\right) ^{\\frac{1}{Z}} \\qquad \\MCA = \\left((ak)^{X-Z} + \\frac{(bk)^Y}{(ak)^Z} \\right) ^{\\frac{1}{Z}} \\label{eqn:65}\n\\end{equation}\n\nBefore applying Newton's generalized binomial expansion to slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:65}, we must first consider two mutually exclusive scenarios:\n\n\\bigskip\n\n\\textbf{Scenario 1 of 2: $\\bm{A^X \\neq B^Y}$.} Suppose $A^X\\neq B^Y$. As such\n$\\displaystyle{\\frac{A^X}{B^Z}\\neq B^{Y-Z}}$ and $\\displaystyle{A^{X-Z} \\neq \\frac{B^Y}{A^Z}}$, and thus $\\displaystyle{\\frac{(ak)^X}{(bk)^Z}\\neq (bk)^{Y-Z}}$ and $\\displaystyle{(ak)^{X-Z} \\neq \\frac{(bk)^Y}{(ak)^Z}}$. Therefore we can re-express $\\MCB$ and $\\MCA$ in \\cref{eqn:65} as\n\\begin{subequations}\n\\begin{align}\nm_{_{C, B}} &= \\left[\\frac{(ak)^X}{(bk)^Z}\\right]^{\\frac{1}{Z}}\n \\left[1+\\frac{(bk)^{Y-Z}} {\\left(\\frac{(ak)^X}{(bk)^Z}\\right)}\\right]\n ^{\\frac{1}{Z}} &\nm_{_{C, A}} &= \\left[(ak)^{X-Z}\\right]^{\\frac{1}{Z}} \\left[1+\n \\frac{ \\left( \\frac{(bk)^Y}{(ak)^Z} \\right)} {(ak)^{X-Z}} \\right] ^{\\frac{1}{Z}}\n\\label{eqn:66a} \\\\\nm_{_{C, B}} &= \\frac{(ak)^{\\frac{X}{Z}}}{bk} \\left(1 +\\frac{(bk)^Y}{(ak)^X}\\right)\n ^{\\frac{1}{Z}} &\nm_{_{C, A}} &= {(ak)^{\\frac{X}{Z}-1}} \\left(1 +\\frac{(bk)^Y}{(ak)^X}\\right) ^{\\frac{1}{Z}}\n\\label{eqn:66b}\n\\end{align}\n\\end{subequations}\n\n\\bigskip\n\nGiven there are integer solutions that satisfy the conjecture, $\\MCB$ and $\\MCA$ must be rational. Based on \\cref{eqn:66b}, there are two cases that can ensure both $\\MCB$ and $\\MCA$ are rational:\n\\begin{enumerate}\n\\item The term $\\displaystyle{\\left(1+\\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:66b} is rational, therefore both terms\n $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ must be\n rational so their respective products are rational.\n\\item The term $\\displaystyle{\\left(1+\\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:66b} is irrational, therefore both terms\n $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ are irrational.\n However, the irrationality of these two terms is canceled by the irrationality\n of the denominator of $\\displaystyle{\\left(1+\\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$\n so their respective products are rational.\n\\end{enumerate}\n\n\\bigskip\n\n\\noindent\\textbf{Case 1 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are rational}\n\nStarting with the first case, assume $\\displaystyle{\\left(1+ \\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:66b} is rational and thus $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ are rational. Applying the binomial expansion to \\cref{eqn:66b} gives us\n\n\\begin{equation}\n\\MCB = \\frac{(ak)^{\\frac{X}{Z}}}{bk} \\sum \\limits_{i=0}^{\\infty}\n \\binom{\\frac{1}{Z}}{i} \\frac{(bk)^{Yi}}{(ak)^{Xi}} \\qquad\n\\MCA = {(ak)^{\\frac{X}{Z}-1}} \\sum \\limits_{i=0}^{\\infty}\n \\binom{\\frac{1}{Z}}{i} \\frac{(bk)^{Yi}}{(ak)^{Xi}}\n\\label{eqn:67}\n\\end{equation}\nThe binomial coefficient $\\displaystyle{\\binom{\\frac{1}{Z}}{i}}$ and ratio $\\displaystyle{\\frac{(bk)^{Yi}}{(ak)^{Xi}}}$ in both formulas in \\cref{eqn:67} are rational for all $i$, as are their products, regardless of the value of $k$. 
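For completeness, the generalized binomial coefficient appearing in \\cref{eqn:67} is\n\\begin{equation*}\n\\binom{\\frac{1}{Z}}{i}=\\frac{\\frac{1}{Z}\\left(\\frac{1}{Z}-1\\right)\\cdots\\left(\\frac{1}{Z}-i+1\\right)}{i!},\n\\end{equation*}\nwhich is manifestly rational for all integers $Z\\geq3$ and $i\\geq0$; for example, $\\binom{1\/3}{2}=\\frac{(1\/3)(1\/3-1)}{2!}=-\\frac{1}{9}$. 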
Hence we consider the terms $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ in \\cref{eqn:67}. Since the integer $A=ak$ is reduced (not a perfect power), there are only three possibilities that can make these terms rational:\n\\begin{enumerate}\n\\item $a$ is a perfect power of $Z$, or\n\\item $X$ is an integer multiple of $Z$, or\n\\item $k > 1$ such that $ak$ is a perfect power of $Z$.\n\\end{enumerate}\n\nSuppose $a$ is a perfect power of $Z$. Since $A=ak$ cannot be a perfect power of $Z$ given that $A$ is reduced, $k$ must be greater than 1; otherwise $A=a$ would itself be a perfect power of $Z$. Hence $k>1$ and thus gcd$(A,B,C)>1$.\n\nSuppose instead $X$ is an integer multiple of $Z$, hence exponent $X=nZ$ for some positive integer $n$. However, per \\cref{Dfn:2.1_X_cannot_be_mult_of_Z} on page \\pageref{Dfn:2.1_X_cannot_be_mult_of_Z}, $X$ is not an integer multiple of $Z$. Thus if $k=1$, the term $\\displaystyle{(ak)^{\\frac{X}{Z}}=A^{\\frac{X}{Z}}=a^{\\frac{X}{Z}}}$ is irrational, and thus $\\MCB$ and $\\MCA$ are irrational, contradicting their given rationality. If instead $k$ is a multiple of $a$, e.g. $k=a^j$ for some positive integer $j$, then $\\displaystyle{(ak)^{\\frac{X}{Z}}=a^{\\frac{(1+j)X}{Z}}}$, for which $(1+j)X$ could be a multiple of $Z$. Thus $\\MCB$ and $\\MCA$ are rational only for some values $k>1$ and thus gcd$(A,B,C)>1$.\n\n\n\\bigskip\n\n\\noindent\\textbf{Case 2 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are irrational}\n\nConsidering the second case, assume $\\displaystyle{\\left(1+ \\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:66b} is irrational, and thus $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ must also be irrational so that the irrationality cancels upon multiplication. Since slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:66b} are then rational despite containing irrational terms, both slopes in \\cref{eqn:64a} must be rational.\n\nWe can re-express slopes $\\MCB$ and $\\MCA$ from \\cref{eqn:61a} as\n\\begin{equation}\n\\MCB = \\frac{C}{\\sqrt[Y]{C^Z-A^X}} \\qquad\\qquad\\quad\n\\MCA = \\frac{C}{\\sqrt[X]{C^Z-B^Y}}\n\\label{eqn:68}\n\\end{equation}\nin which the denominators must both be rational. Per \\cref{Thm:2.8_Functional_Form}, the denominators in \\cref{eqn:68} can be reparameterized and thus \\cref{eqn:68} becomes\n\\begin{equation}\n\\MCB = \\frac{C}{(C+A)M_{_1}\\alphaCA -CAM_{_1}\\betaCA } \\qquad\\qquad\n\\MCA = \\frac{C}{(C+B)M_{_2}\\alphaCB -CBM_{_2}\\betaCB }\n\\label{eqn:69}\n\\end{equation}\nwhere $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ are positive real numbers and $M_{_1}$ and $M_{_2}$ are positive scalars. Per \\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$ and $\\betaCB$ are defined as\n\\begin{subequations}\n\\begin{align}\n\\alphaCA &=\\sqrt[Y]{C^{Z-Y}-A^{X-Y}} &\n\\alphaCB &=\\sqrt[X]{C^{Z-X}-B^{Y-X}}\n\\label{eqn:70a} \\\\\n\\betaCA &=\\sqrt[Y]{C^{Z-2Y}-A^{X-2Y}} &\n\\betaCB &=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}\n\\label{eqn:70b}\n\\end{align}\n\\end{subequations}\nPer \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ as defined in \\cref{eqn:70a,eqn:70b} are irrational when gcd$(A,B,C)=k=1$. 
However, since $\\MCB$ and $\\MCA$ in \\cref{eqn:69} are rational, we need to consider the common factor in the bases.\n\nPer \\cref{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}, $\\displaystyle{\\sqrt[Y]{C^{Z-Y}-A^{X-Y}}}$ and $\\displaystyle{\\sqrt[Y]{C^{Z-2Y}-A^{X-2Y}}}$ are rational only if gcd$(A,B,C)>1$. By extension, $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are also rational only if gcd$(A,B,C)>1$.\n\nSince slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:69} are rational, their denominators are rational, and thus $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ must be rational. Per the definitions of $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ in \\cref{eqn:70a,eqn:70b} and given \\cref{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}, in which these terms can only be rational when gcd$(A,B,C)>1$, we conclude $k$ must be greater than 1 and thus gcd$(A,B,C)>1$.\n\n\\bigskip\n\nTherefore, as consequences of both cases 1 and 2, for slopes $\\MCB$ and $\\MCA$ to be rational when $A^X\\neq B^Y$ requires gcd$(A,B,C)>1$, and thus $A$, $B$, and $C$ must share a common factor greater than 1.\n\n\\bigskip\n\n\\textbf{Scenario 2 of 2: $\\bm{A^X=B^Y}$.} If $A^X=B^Y$, then with reduced bases $A=B$ and gcd$(A,B)=k>1$. Hence by definition, $k$ is a factor of $A^X+B^Y$ and thus of $C^Z$, and therefore $A$, $B$, and $C$ must share a common factor greater than 1.\n\n\n\\bigskip\n\n\\Conjecture, when there exist integer solutions, both scenarios show that the common factor $k$ must be greater than 1 to ensure slopes $\\MCB$ and $\\MCA$ are rational. We know that each grid point $(A,B,C)$ subtends a line through the origin and that point, where that point is a putative integer solution satisfying the conjecture. We also know that each grid point corresponds to a set of slopes. Further, we know from \\cref{Thm:2.1_Irrational_Slope_No_Lattice} that a line through the origin with an irrational slope does not pass through any non-trivial lattice points. Since both $\\MCB$ and $\\MCA$ are rational only for some common factor $k>1$, gcd$(A,B,C)=k>1$ is required. $\\MBA$ is always rational, but $\\MCB$ and $\\MCA$ can be rational only when gcd$(A,B,C)>1$; only then can the lines pass through non-trivial lattice points, and thus only then can there be integer solutions for $A$, $B$, and $C$. Hence there can be integer solutions satisfying the conjecture only when gcd$(A,B,C)>1$.\n\\end{proof}\n\\label{Section:Possibility_End}\n\n\n\\bigskip\n\n\n\\section{Conclusion}\nEvery set of values that satisfies the Tijdeman-Zagier conjecture corresponds to a lattice point on a multi-dimensional Cartesian grid. Together with the origin, this point defines a line in multi-dimensional space. This line requires a rational slope in order for it to pass through a non-trivial lattice point. Hence the core of the various proofs contained herein centers on the irrationality of the slope based on the coprimality of the terms. Several key steps were required to establish this relationship and then support the proof.\n\n\\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime} establish that within the relation \\BealsEq\\, if any pair of the terms $A$, $B$, and $C$ is coprime, then all 3 terms must be coprime, and if all 3 terms are coprime, then each pair of terms must likewise be coprime. 
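Although no finite search can substitute for these theorems, the coprimality pattern they formalize is easy to observe empirically. The following minimal sketch (Python; the search bounds are arbitrary) scans a small window for solutions of \\BealsEq\\, with $X,Y,Z\\geq3$ and prints gcd$(A,B,C)$; every solution it finds, such as $3^3+6^3=3^5$, shares a common factor:\n\\begin{verbatim}\nfrom math import gcd\n\n# Index all perfect powers c^z with c in [2, 50] and z in [3, 7].\npowers = {}\nfor c in range(2, 51):\n    for z in range(3, 8):\n        powers.setdefault(c**z, []).append((c, z))\n\n# Scan sums A^X + B^Y for matches and report gcd(A, B, C).\nfor a in range(2, 51):\n    for x in range(3, 8):\n        for b in range(a, 51):\n            for y in range(3, 8):\n                for (c, z) in powers.get(a**x + b**y, []):\n                    print(a, x, b, y, c, z, gcd(a, gcd(b, c)))\n\\end{verbatim}\n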
Likewise, \\cref{Thm:2.5_X_cannot_be_mult_of_Z} establishes a similarly restrictive relationship between the exponents, namely that exponents $X$ and $Y$ cannot be integer multiples or unit fractions of exponent $Z$ and that $Z$ cannot be an integer multiple of $X$ or $Y$.\n\n\\cref{Thm:2.6_Initial_Expansion_of_Differences,Thm:2.7_Indeterminate_Limit} establish that the difference of powers can be factored and expanded based on an arbitrary and indeterminate upper limit.\n\n\\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta} establish that $A^X$ can be parameterized as a linear combination of $C+B$ and $CB$, with two parameters. \\cref{Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate} establish that when these parameters are irrational there can be no integer solution satisfying the conjecture, and that if gcd$(A,B,C)=1$, then these parameters must be irrational. \\cref{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime} establishes that if\nthese two parameters are rational, then gcd$(A,B,C)>1$.\n\nThe relationships between coprimality of terms and irrationality of the parameters\n(\\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta,Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime,Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate}) are critical to the slopes that are core to the remaining theorems. It is shown that the slopes are functions of these parameters, and thus the irrationality properties of the parameters translate to irrationality conditions for the slopes.\n\n\\cref{Thm:2.1_Irrational_Slope_No_Lattice} establishes that a line with an irrational slope that passes through the origin will not pass through any non-trivial lattice points. This simple, subtle theorem is critical to the proof since the link between irrationality of slope and non-integer solutions is key to relating the outcomes to coprimality of terms. The logic of the proof is that integer solutions which satisfy the conjecture can be expressed only with a set of rational slopes, and thus tests of slope rationality are equivalent to tests of the integrality of the solution.\n\n\\cref{Thm:2.14_Main_Proof_Coprime_No_Solutions} establishes that when gcd$(A,B,C)=1$, the slopes are irrational. Thus if the slopes are irrational, then the line that would correspond to an integer solution does not pass through non-trivial lattice points, hence there is no integer solution. \\cref{Thm:2.15_Main_Proof_Solutions_Then_Not_Coprime} establishes the reverse, namely that the slopes of the corresponding lines can only be rational when gcd$(A,B,C)>1$, and that integer solutions satisfying the conjecture fall on the lines with rational slopes.\n\nAny counterexample to the Tijdeman-Zagier conjecture requires four conditions be simultaneously satisfied:\n\\begin{itemize}\n\\item $A$, $B$, $C$, $X$, $Y$, and $Z$ are positive integers.\n\\item $X,Y,Z\\geq3$\n\\item \\BealsEq\n\\item gcd$(A,B,C)=1$\n\\end{itemize}\nSince the set of values that satisfy the conjecture is directly a function of the rationality of slopes, we have demonstrated the explicit linkage between the coprimality aspect of the conjecture, the integer requirement of the framework, and the properties of slopes of lines through the origin. Via contradiction, these theorems prove the four conditions cannot be simultaneously met. 
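Schematically, the overall logic can be summarized as\n\\begin{equation*}\n\\gcd(A,B,C)=1 \\;\\Longrightarrow\\; \\MCB,\\ \\MCA \\text{ irrational} \\;\\Longrightarrow\\; \\text{no non-trivial lattice point} \\;\\Longrightarrow\\; \\text{no integer solution},\n\\end{equation*}\nwhile, conversely, an integer solution forces $\\MCB$ and $\\MCA$ to be rational and hence gcd$(A,B,C)>1$. 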
Given the exhaustiveness and mutual exclusivity of the theorems, the totality of the conjecture is thus proven.\n\n\n\\bigskip\n\n\n\n\n\\section*{Acknowledgment}\nThe authors acknowledge and thank emeritus Professor Harry Hauser for guidance and support in the shaping, wordsmithing, and expounding of the theorems, proofs, and underlying flow of the document, and for the tremendous array of useful suggestions throughout.\n\n\n\n\n\\section*{References}\n\n\\begin{biblist}\n\n\\bib{anni2016modular}{article}{\n title={Modular elliptic curves over real abelian fields and the\n generalized Fermat equation $x^{2l}+y^{2m}=z^p$},\n author={Anni, Samuele},\n author={Siksek, Samir},\n journal={Algebra \\& Number Theory},\n volume={10},\n number={6},\n pages={1147--1172},\n year={2016},\n publisher={Mathematical Sciences Publishers},\n doi={https:\/\/doi.org\/10.2140\/ant.2016.10.1147}}\n\n\\bib{beauchamp2018}{article}{\n title={A Proof for Beal's Conjecture},\n author={Beauchamp, Julian TP},\n journal={viXra},\n note={www.vixra.org\/abs\/1808.0567},\n date={2018-09-05} }\n\n\\bib{beauchamp2019}{article}{\n title={A Concise Proof for Beal's Conjecture},\n author={Beauchamp, Julian TP},\n journal={viXra},\n note={www.vixra.org\/abs\/1906.0199},\n date={2019-06-13} }\n\n\\bib{bennett2006equation}{article}{\n title = {The equation $x^{2n}+y^{2n}=z^5$},\n author = {Bennett, Michael A},\n journal = {Journal de Th{\\'e}orie des Nombres de Bordeaux},\n volume = {18},\n number = {2},\n pages = {315--321},\n year = {2006},\n doi={https:\/\/doi.org\/10.5802\/jtnb.546} }\n\n\\bib{bennett2015generalized}{article}{\n title={Generalized Fermat equations: a miscellany},\n author={Bennett, Michael A},\n author={Chen, Imin},\n author={Dahmen, Sander R},\n author={Yazdani, Soroosh},\n journal={International Journal of Number Theory},\n volume={11},\n number={01},\n pages={1--28},\n year={2015},\n publisher={World Scientific},\n doi={https:\/\/doi.org\/10.1142\/S179304211530001X} }\n\n\\bib{beukers1998}{article}{\n title={The Diophantine equation $Ax^p+By^q=Cz^r$},\n author={Beukers, Frits},\n journal={Duke Mathematical Journal},\n month={01},\n year={1998},\n volume={91},\n number={1},\n pages={61--88},\n publisher={Duke University Press},\n doi={https:\/\/doi.org\/10.1215\/S0012-7094-98-09105-0} }\n\n\\bib{beukers2020generalized}{article}{\n title={The generalized Fermat equation},\n author={Beukers, Frits},\n note={https:\/\/dspace.library.uu.nl\/handle\/1874\/26637},\n date={2006-01-20} }\n\n\\bib{billerey2018some}{article}{\n title={Some extensions of the modular method and Fermat equations of signature $(13, 13, n)$},\n author={Billerey, Nicolas},\n author={Chen, Imin},\n author={Dembele, Lassina},\n author={Dieulefait, Luis},\n author={Freitas, Nuno},\n journal={arXiv preprint arXiv:1802.04330},\n year={2018} }\n\n\\bib{crandall2006prime}{book}{\n title={Prime numbers: a computational perspective},\n author={Crandall, Richard},\n author={Pomerance, Carl B},\n volume={182},\n year={2006},\n publisher={Springer Science \\& Business Media} }\n\n\\bib{dahmen2013perfect}{article}{\n title={Perfect powers expressible as sums of two fifth or seventh powers},\n author={Dahmen, Sander R},\n author={Siksek, Samir},\n journal={arXiv preprint arXiv:1309.4030},\n year={2013} }\n\n\\bib{darmon1995equations}{article}{\n title={On the equations $z^m=F(x, y)$ and $Ax^p+By^q=Cz^r$},\n author={Darmon, Henri},\n author={Granville, Andrew},\n journal={Bulletin of the London Mathematical Society},\n volume={27},\n 
number={6},\n pages={513--543},\n year={1995},\n publisher={Wiley Online Library},\n doi={https:\/\/doi.org\/10.1112\/blms\/27.6.513} }\n\n\\bib{de2016solutions}{article}{\n title={Solutions to Beal's Conjecture, Fermat's last theorem and Riemann Hypothesis},\n author={{d}e Alwis, A.C. Wimal Lalith},\n journal={Advances in Pure Mathematics},\n volume={6},\n number={10},\n pages={638--646},\n year={2016},\n publisher={Scientific Research Publishing},\n doi={https:\/\/doi.org\/10.4236\/apm.2016.610053} }\n\n\\bib{di2013proof}{article}{\n title={Proof for the Beal conjecture and a new proof for Fermat's last theorem},\n author={Di Gregorio, Leandro Torres},\n journal={Pure and Applied Mathematics Journal},\n volume={2},\n number={5},\n pages={149--155},\n year={2013},\n doi={https:\/\/doi.org\/10.11648\/j.pamj.20130205.11} }\n\n\\bib{durango}{webpage}{\n title={The Search for a Counterexample to Beal's Conjecture},\n author={Durango, Bill},\n year={2002},\n url={http:\/\/www.durangobill.com\/BealsConjecture.html},\n note={Computer search results} }\n\n\\bib{edwards2005platonic}{book}{\n title={Platonic solids and solutions to $x^2+y^3=dZ^r$},\n author={Edwards, Edward Jonathan},\n year={2005},\n publisher={Utrecht University} }\n\n\\bib{elkies2007abc}{article}{\n title={The ABC's of number theory},\n author={Elkies, Noam},\n journal={The Harvard College Mathematics Review},\n year={2007},\n publisher={Harvard University},\n note={http:\/\/nrs.harvard.edu\/urn-3:HUL.InstRepos:2793857} }\n\n\\bib{fjelstad1991extending}{article}{\n title={Extending Pascal's triangle},\n author={Fjelstad, P},\n journal={Computers \\& Mathematics with Applications},\n volume={21},\n number={9},\n pages={1--4},\n year={1991},\n publisher={Elsevier},\n doi={https:\/\/doi.org\/10.1016\/0898-1221(91)90119-O} }\n\n\\bib{gaal2013sum}{article}{\n title={The sum of two S-units being a perfect power in global function fields},\n author={Ga{\\'a}l, Istv{\\'a}n},\n author={Pohst, Michael},\n journal={Mathematica Slovaca},\n volume={63},\n number={1},\n pages={69--76},\n year={2013},\n publisher={Springer},\n doi={https:\/\/doi.org\/10.2478\/s12175-012-0083-0} }\n\n\\bib{ghosh2011proof}{book}{\n title={The Proof of the Beal's Conjecture},\n author={Ghosh, Byomkes Chandra},\n publisher={2006 Hawaii International Conference on Statistics, Mathematics and Related Fields,\n Honolulu, Hawaii, USA},\n year={2011},\n note={https:\/\/ssrn.com\/abstract=1967491} }\n\n\\bib{joseph2018another}{article}{\n title={Another Proof of Beal's Conjecture},\n author={Joseph, James E},\n author={Nayar, Bhamini P},\n journal={Journal of Advances in Mathematics},\n volume={14},\n number={2},\n pages={7878--7879},\n year={2018},\n doi={https:\/\/doi.org\/10.24297\/jam.v14i2.7587} }\n\n\\bib{koshy2002elementary}{book}{\n title={Elementary number theory with applications},\n author={Koshy, Thomas},\n year={2002},\n pages={545},\n publisher={Academic Press},\n note={https:\/\/doi.org\/10.1017\/S0025557200173255} }\n\n\\bib{kraus1998equation}{article}{\n title={Sur l'{\\'e}quation $a^3+b^3=c^p$},\n author={Kraus, Alain},\n journal={Experimental Mathematics},\n volume={7},\n number={1},\n pages={1--13},\n year={1998},\n publisher={Taylor \\& Francis},\n doi={https:\/\/doi.org\/10.1080\/10586458.1998.10504355} }\n\n\\bib{macmillan2011proofs}{article}{\n title={Proofs of power sum and binomial coefficient congruences via Pascal's identity},\n author={MacMillan, Kieren},\n author={Sondow, Jonathan},\n journal={The American Mathematical Monthly},\n 
volume={118},\n number={6},\n pages={549--551},\n year={2011},\n publisher={Taylor \\& Francis},\n doi={https:\/\/doi.org\/10.4169\/amer.math.monthly.118.06.549} }\n\n\\bib{beal1997generalization}{article}{\n title={A generalization of Fermat's last theorem: the Beal conjecture and prize problem},\n author={Mauldin, R. Daniel},\n journal={Notices of the AMS},\n volume={44},\n number={11},\n year={1997},\n note={https:\/\/www.ams.org\/notices\/199711\/beal.pdf} }\n\n\\bib{merel1997winding}{article}{\n title={Winding quotients and some variants of Fermat's Last Theorem},\n author={Merel, Loic},\n author={Darmon, Henri},\n journal={Journal f{\\\"u}r die reine und angewandte Mathematik},\n volume={1997},\n number={490},\n pages={81--100},\n year={1997},\n publisher={De Gruyter},\n doi={https:\/\/doi.org\/10.1515\/crll.1997.490.81} }\n\n\\bib{metsankyla2004catalan}{article}{\n title={Catalan's conjecture: another old Diophantine problem solved},\n author={Mets{\\\"a}nkyl{\\\"a}, Tauno},\n journal={Bulletin of the American Mathematical Society},\n volume={41},\n number={1},\n pages={43--57},\n year={2004},\n doi={https:\/\/doi.org\/10.1090\/S0273-0979-03-00993-5} }\n\n\\bib{mihailescu2004primary}{article}{\n title={Primary cyclotomic units and a proof of Catalan's conjecture},\n author={Mih\\u{a}ilescu, Preda},\n journal={Journal f{\\\"u}r die reine und angewandte Mathematik},\n volume={572},\n pages={167--196},\n year={2004},\n doi={https:\/\/doi.org\/10.1515\/crll.2004.048} }\n\n\\bib{miyazaki2015upper}{article}{\n title={Upper bounds for solutions of an exponential Diophantine equation},\n author={Miyazaki, Takafumi},\n journal={Rocky Mountain Journal of Mathematics},\n volume={45},\n number={1},\n pages={303--344},\n year={2015},\n publisher={Rocky Mountain Mathematics Consortium},\n doi={https:\/\/doi.org\/10.1216\/RMJ-2015-45-1-303} }\n\n\\bib{nathanson2016diophantine}{article}{\n title={On a Diophantine equation of M. J. Karama},\n author={Nathanson, Melvyn B},\n journal={arXiv preprint arXiv:1612.03768},\n year={2016} }\n\n\\bib{nitaj1995conjecture}{article}{\n title={On a Conjecture of Erd{\\\"o}s on 3-Powerful Numbers},\n author={Nitaj, Abderrahmane},\n journal={Bulletin of the London Mathematical Society},\n volume={27},\n number={4},\n pages={317--318},\n year={1995},\n publisher={Wiley Online Library},\n doi={https:\/\/doi.org\/10.1112\/blms\/27.4.317} }\n\n\\bib{norvig2010beal}{article}{\n title={Beal's conjecture: A search for counterexamples},\n author={Norvig, Peter},\n journal={norvig.com, Retrieved 2014-03},\n volume={6},\n year={2010},\n note={http:\/\/norvig.com\/beal.html} }\n\n\\bib{poonen1998some}{article}{\n title={Some diophantine equations of the form $x^n+y^n=z^m$},\n author={Poonen, Bjorn},\n journal={Acta Arithmetica},\n volume={86},\n number={3},\n pages={193--205},\n year={1998},\n doi={https:\/\/doi.org\/10.4064\/aa-86-3-193-205} }\n\n\\bib{poonen2007twists}{article}{\n title={Twists of $X(7)$ and primitive solutions to $x^2+y^3=z^7$},\n author={Poonen, Bjorn},\n author={Schaefer, Edward F},\n author={Stoll, Michael},\n journal={Duke Mathematical Journal},\n volume={137},\n number={1},\n pages={103--158},\n year={2007},\n publisher={Duke University Press},\n doi={https:\/\/doi.org\/10.1215\/S0012-7094-07-13714-1} }\n\n\\bib{siksek2012partial}{article}{\n title={Partial descent on hyperelliptic curves and the generalized Fermat equation\n $x^3+y^4+z^5=0$},\n author={Siksek, Samir},\n author={Stoll, Michael},\n journal={Bulletin of the London Mathematical Society},\n 
volume={44},\n number={1},\n pages={151--166},\n year={2012},\n publisher={Wiley Online Library},\n doi={https:\/\/doi.org\/10.1112\/blms\/bdr086} }\n\n\\bib{siksek2014generalised}{article}{\n title={The generalised Fermat equation $x^2+y^3=z^{15}$},\n author={Siksek, Samir},\n author={Stoll, Michael},\n journal={Archiv der Mathematik},\n volume={102},\n number={5},\n pages={411--421},\n year={2014},\n publisher={Springer},\n doi={https:\/\/doi.org\/10.1007\/s0001} }\n\n\\bib{shanks2001solved}{book}{\n title={Solved and unsolved problems in number theory},\n author={Shanks, Daniel},\n volume={297},\n year={2001},\n publisher={American Mathematical Soc.} }\n\n\\bib{stillwell2012numbers}{book}{\n title={Numbers and geometry},\n author={Stillwell, John},\n year={2012},\n pages={133},\n publisher={Springer Science \\& Business Media} }\n\n\\bib{taylor1995ring}{article}{\n title={Ring-theoretic properties of certain Hecke algebras},\n author={Taylor, Richard},\n author={Wiles, Andrew},\n journal={Annals of Mathematics},\n pages={553--572},\n year={1995},\n publisher={JSTOR},\n doi={https:\/\/doi.org\/10.2307\/2118560},\n url={https:\/\/www.jstor.org\/stable\/2118560} }\n\n\\bib{townsend2010search}{article}{\n title={The Search for Maximal Values of min$(A,B,C)$\/gcd$(A,B,C)$ for $A^x+B^y=C^z$},\n author={Townsend, Arthur R},\n journal={arXiv preprint arXiv:1004.0430},\n year={2010} }\n\n\\bib{vega2020complexity}{article}{\n title={The Complexity of Mathematics},\n author={Vega, Frank},\n journal={Preprints},\n year={2020},\n publisher={MDPI AG},\n doi={https:\/\/doi.org\/10.20944\/preprints202002.0379.v4} }\n\n\\bib{waldschmidt2004open}{article}{\n title={Open diophantine problems},\n author={Waldschmidt, Michel},\n journal={Moscow Mathematical Journal},\n volume={4},\n number={1},\n pages={245--305},\n year={2004},\n publisher={Независимый Московский университет--МЦНМО},\n doi={https:\/\/doi.org\/10.17323\/1609-4514-2004-4-1-245-305},\n note={https:\/\/arxiv.org\/pdf\/math\/0312440.pdf} }\n\n\\bib{waldschmidt2009perfect}{article}{\n title={Perfect Powers: Pillai's works and their developments},\n author={Waldschmidt, Michel},\n journal={arXiv preprint arXiv:0908.4031},\n year={2009} }\n\n\\bib{wiles1995modular}{article}{\n title={Modular elliptic curves and Fermat's last theorem},\n author={Wiles, Andrew},\n journal={Annals of mathematics},\n volume={141},\n number={3},\n pages={443--551},\n year={1995},\n publisher={JSTOR},\n doi={https:\/\/doi.org\/10.2307\/2118559} }\n\n\\end{biblist}\n\n\\end{document}\n
\n\nIn the classical ($\\omega_c\\tau \\lesssim 1$) regime, the electron transport can be generally described by the Boltzmann equation~\\cite{Boltzmann}. However, it has been pointed out that the Boltzmann equation has to be revised to incorporate the non-Markovian effect (also called memory effect \\cite{Phys. Rev. Lett. 75 197, J. Stat. Phys. 87 5\/6, PhysRevLett.89.266804, PRB2003, PRB2008, PRB2005}) resulting from either repeatedly scattering on the same impurity, or repeatedly passing through a region without scattering (the latter one is also called Corridor effect~\\cite{Phys. Rev. Lett. 75 197, J. Stat. Phys. 87 5\/6, PhysRevLett.89.266804, PRB2003}). In addition to the memory effect, there is an equally important issue that needs to be addressed, i.e. how the magnetic field affects a single electron-impurity scattering event. This problem has a fundamental difficulty in defining scattering parameters as the incoming and outgoing asymptotic trajectories are bent by the magnetic field. \n\nIn this work, we introduce a general recipe based on an abstraction of the actual impurity scattering process to define scattering parameters for the single elastic impurity scattering. It yields the conventional scattering parameters in the absence of the magnetic field. More importantly, it can introduce an appropriate set of scattering parameters in the presence of magnetic field to calculate the differential cross section. Specifically, the real scattering process can be abstracted into a sudden switch between the initial asymptotic and final asymptotic trajectory. \nIn this classical picture, we can conveniently describe the skew scattering~\\cite{Physica.24.1958} and coordinate jump~\\cite{PhysRevB.2.4559}, which will eventually modify the Boltzmann equation.\nWe then apply this recipe to the two-dimensional\nLorentz model ~\\cite{PhysRevA.25.533} where free electrons are subject to in-plane electric field and out-of-plane magnetic field, and scattered by randomly distributed hard-disk impurities. \n\nWe show the following results. 1) The magnetoresistivity is a negative parabolic function of magnetic field. Our result, together with the one from the previous theory of corridor effect \\cite{PRB2003} yields a more accurate magnetoresistivity, closer to the numerical result \\cite{PhysRevLett.89.266804}. 2) The obtained Hall coefficient becomes magnetic field-dependent, deviating from the Drude theory. For experiments, this deviation needs to be taken into account when converting the measured Hall coefficients to real electron densities. 3) The longitudinal relaxation time obtained in our theory depends on magnetic field which deviates from the Drude theory. \n\nThis paper is organized in the following way. In Section II, we present\nthe general recipe to define scattering parameters for the impurity scattering, and use it to discuss the skew scattering and coordinate jump under magnetic field. The conventional Boltzmann equation is thus modified by these two mechanisms in the linear response regime~\\cite{LRT}. In Section III, we solve the modified Boltzmann equation for the two-dimensional Lorentz model and derive the\nanomalous Hall resistivity and negative magnetoresistivity. In Section IV we compare our result with relevant\nsimulations and experiments. Finally, we introduce a phenomenological method to include skew scattering into the Drude model. 
\n\n\\section{Classical theory of impurity scattering and electron transport under magnetic field}\n\nIn this section, we will formulate a classical theory of impurity scattering and electron transport in a two-dimensional plane under an external perpendicular magnetic field. Our theory only considers a single scattering event and ignores the well-studied non-Markovian and localization effects. One possible application of our theory is the electron transport in randomly distributed two-dimensional anti-dots under magnetic field. The anti-dots are geometrical holes punched into a two-dimensional electron gas (2DEG) on semiconductor GaAs \\cite{APL.70.2309, Mirlin2001, Weiss1995, PhysRevLett.77.147}. \n\nOur theory requires $a\\ll l$, where $a$ is the impurity radius and $l$ is the mean free path; this is the necessary condition to avoid repeated scattering at the same impurity. Summing up the above requirements, the pre-condition of our theory is $a\\ll l$. To describe a single scattering event, we replace the real trajectory by the two imaginary scatter-free trajectories that the electron would follow for $t<0$ and $t>0$, respectively. We call those imaginary trajectories the initial and final asymptote, respectively. \nWe define this method as the abstraction of the impurity scattering process, as it only keeps the essence of the scattering process, i.e. the transition from the initial asymptote to the final asymptote, and abstracts the detail of the transition into a sudden switch. \n\nThere is a degree of freedom in the above procedure. Note that even though we have restricted the scattering to occur at $t=0$, this point itself is not well defined. In other words, we have the freedom to define this artificial point. For a central scattering potential, we can fix this issue by requiring that at $t=0$ the electron reaches the point in the initial asymptote closest to the scatterer. We call this point the starting point (represented by the red dot in Fig.~\\ref{fig_event_line}). If the scattering potential respects the rotational symmetry, the starting points of different initial asymptotes form a straight line called the event line, which marks the occurrence of the scattering event, as illustrated in Fig.~\\ref{fig_event_line}. It turns out that the event line is orthogonal to the initial asymptotes and passes through the center of the scatterer. \n\nWith the help of the abstraction of the impurity scattering process, we define the scattering parameters as follows. We define the distance between the starting point and the scattering center to be the impact parameter, the momentum at $t=0_-$ and $t=0_+$ to be the incoming and outgoing momentum, respectively, and the angle between the incoming and outgoing momentum to be the scattering angle. Those scattering parameters reduce to the conventional ones in the absence of the magnetic field, as shown in Fig.~\\ref{fig_event_line}. We further define the point in the final asymptote at $t=0_+$ to be the ending point (represented by the blue dot in Fig.~\\ref{fig_event_line}). This definition of scattering parameters is clearly independent of the scattering details and works for any type of initial and final asymptotes.\n\nUsing the above concepts, the abstraction of the scattering process can be concisely stated as follows: the electron moves along the initial asymptote to the starting point, gets scattered to the ending point, and finally moves away from the scatterer along the final asymptote.\n\n\n\\begin{figure}[b]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.39}{\\includegraphics*{traditional_hard_ball_scattering.pdf}}\n\\caption{The illustration of the conventional hard ball scattering with no magnetic field. 
The red and blue empty dot are the starting point and ending point, respectively. The green line is the event line passing through starting point and impurity center. The initial asymptote and final asymptote are marked by dashed red and blue line with the incoming momentum $\\mathbf{k}$, the outgoing momentum $\\mathbf{k'}$, and the angle of scattering $\\theta$, the impact parameter $b$. The coordinate jump can be divided into two directions, which are transverse jump and longitudinal jump. }\n\\label{fig_traditional hard ball}\n\\end{figure}\n\n\n\\begin{figure}[b]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.4}{\\includegraphics*{pig_picture_new.pdf}}\n\\caption{The illustration of electron scattering on hard disk impurity with cyclotron orbit under magnetic field. The impurity radius is $a$. The cyclotron radius is $R$. \nThe red and blue solid lines are the real trajectory of the incoming and outgoing electron, respectively. The red and blue complete circle forms the initial asymptote and final asymptote. The red and blue empty dots are the starting point and ending point, respectively. The incoming momentum\n$\\mathbf{k}$ and the outgoing momentum $\\mathbf{k}^{\\prime}$ are along the tangential direction at the starting point and ending point. The angle of scattering $\\theta$ is the angle between $\\mathbf{k}$ and $\\mathbf{k'}$. }\n\\label{fig_pig_picture}\n\\end{figure}\n\n\n\\subsection{Application to hard disk potential}\n\nWe first apply the abstraction of the scattering process to hard disk potential in the absence of magnetic field. By applying to this fully known case, we aims at a necessity check of the correctness of our theory. \nConsider an electron incident on a hard disk potential with straight line trajectory (Fig.~\\ref{fig_traditional hard ball}). The real trajectory (solid lines) changes its direction after the electron hits the scatterer. However, the initial and final asymptote (dashed lines) can be elongated along the real trajectory and pass through the scatterer. The event line that marks the occurring of scattering event, passes through the center of scatterer and the starting point (red empty dot) on the initial asymptote. The incoming momentum $\\mathbf{k}$ and outgoing momentum $\\mathbf{k'}$ are defined as the starting (red empty dot) and ending point (blue empty dot) on the initial and final asymptote, respectively. \n\nIn contrast, in the presence of magnetic field, the trajectory is bent, and we use the abstraction of the scattering process discussed in the previous subsections to define scattering parameters, as shown in Fig.~\\ref{fig_pig_picture}. The incoming momentum $\\mathbf{k}$ and outgoing momentum $\\mathbf{k'}$ cannot be defined straightforwardly, due to the directions of the initial\/final asymptote changes over time. As shown in Fig. ~\\ref{fig_pig_picture}, the red and blue dashed lines are the asymptotic trajectory which completes the circular trajectory. \nThe incoming $\\mathbf{k}$ and outgoing $\\mathbf{k'}$ are defined along the tangential direction to the initial asymptote and final asymptote at the starting point and ending point, respectively (see Fig.~\\ref{fig_pig_picture}). The $\\mathbf{k}$ and $\\mathbf{k'}$ are rotated by the same angle in unit time. \n\nIn the Appendix \\ref{APP-E}, we demonstrate how the abstraction method can be applied to the soft potential under magnetic field. 
\n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.38}{\\includegraphics*{cross_section_tick_pi.pdf}}\n\\caption{The plots of the differential cross section of two processes: $\\mathbf{k} \\rightarrow \\mathbf{k'}$ and $\\mathbf{k'} \\rightarrow \\mathbf{k}$, respectively, in the unit of impurity radius $a$. The ratio $\\frac{a}{R}=0.16$. The $\\Omega_{\\mathbf{kk}^{\\prime}}$ does not overlap with $\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}$, which leads to skew scattering. }\n\\label{fig_cross section curve}\n\\end{figure}\n\n\n\\subsection{Skew scattering under magnetic field}\nIn this section, we discuss the skew scattering in the classical picture. As shown in previous literatures, the antisymmetric part of the probability of scattering $W_{ \\mathbf{k} \\mathbf{k}^\\prime}$ leads to the skew scattering~\\cite{Physica.24.1958}. $W_{ \\mathbf{k} \\mathbf{k}^\\prime}$ (probability of scattering of $\\mathbf{k} \\rightarrow \\mathbf{k'}$ process) is related to the differential cross section as $W_{ \\mathbf{k}\\mathbf{k}^\\prime}=n_i v_{ \\mathbf{k}} \\Omega_{ \\mathbf{k}\\mathbf{k}^\\prime}$, where $n_i$ is the impurity concentration and $v_{ \\mathbf{k}}$ is the electron velocity. For hard-disk potentials, the scattering is elastic, i.e. $|v_{ \\mathbf{k}}|=|v_{ \\mathbf{k}^\\prime}|$. Therefore, a nontrivial antisymmetric part of $W_{ \\mathbf{k} \\mathbf{k}^\\prime}$ only comes from that $\\Omega_{ \\mathbf{k}\\mathbf{k}^\\prime} \\neq \\Omega_{ \\mathbf{k}^\\prime\\mathbf{k}}$. \n\nUsing the scattering parameters shown in Fig.~\\ref{fig_pig_picture}, the differential cross section is easily calculated by $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}=\\left\\vert \\frac{db}{d\\theta}\\right\\vert $. \nHere we use the fact that $b$ is only a function of $\\theta$ and $k=|\\mathbf{k}|$ due to the rotational symmetry and the elastic nature of scattering. For two-dimensional Lorentz model, the relation between $b$ and $\\theta$ and $k$ (with $R=\\hbar k\/(eB)$) is given by (derived in Appendix \\ref{APP-A})\n\\begin{equation}\nb(\\theta,k)=-R+\\sqrt{a^{2}+R^{2}+2aR\\cos\\frac{\\theta}{2}},\n\\label{b}\n\\end{equation}\nTherefore, the differential cross section reads as\n\\begin{equation}\n\\label{eq_kkprime}\n\\Omega_{\\mathbf{kk}^{\\prime}}=%\n\\frac{a\\sin\\frac{\\theta}{2}}%\n{2\\sqrt{1+2\\frac{a}{R}\\cos\\frac{\\theta}{2}+\\left( \\frac{a}{R}\\right) ^{2}%\n}}.\n\\end{equation}\n\nOn the other hand, \nthe differential cross section of the inverse process $\\mathbf{k}^{\\prime}\\rightarrow \\mathbf{k}$ is labeled by $\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}$, and can be calculated as follows: $\\Omega_{\\mathbf{k}^\\prime\\mathbf{k}}=\\left\\vert \\frac{db}{d\\theta}\\right\\vert_{\\theta\\rightarrow 2\\pi-\\theta} $. Therefore, its expression reads as\n\\begin{equation}\n\\label{eq_kprimek}\n\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}=\\frac{a\\sin\\frac{\\theta}{2}}%\n{2\\sqrt{1-2\\frac{a}{R}\\cos\\frac{\\theta}{2}+\\left( \\frac{a}{R}\\right) ^{2}%\n}}.\n\\end{equation}\n\nWe plot $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}$ and $\\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}$ in Fig.~\\ref{fig_cross section curve}. It shows that $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}\\neq \\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}$, leading to the nontrivial skew scattering contribution to the electron transport in two-dimensional Lorentz model. In Eq. 
\\ref{tau-perp} in Section III B and Section IV C, we will find out that only when $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}\\neq \\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}$, there is $\\frac{1}{\\tau^{\\perp}} \\neq 0$ ($\\frac{1}{\\tau^{\\perp}}$ is the reciprocal of transverse relaxation time), which is the signature of skew scattering. We further comment that the nature of the above inequivalence is a finite magnetic field, i.e. only in the limit $\\mathbf{B}\\rightarrow 0$, $R\\rightarrow \\infty$ and hence $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}- \\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}\\rightarrow 0$. Therefore, a finite magnetic field is essential to the skew scattering mechanism, which breaks the time-reversal symmetry. \n\n\n\\subsection{Coordinate jump under magnetic field}\n\nIn this section, we discuss the coordinate jump~\\cite{PhysRevB.2.4559, PhysRevB.72.045346, PhysRevB.73.075318}, labeled by $\\delta\\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}$ (coordinate jump from $\\mathbf{k} \\rightarrow \\mathbf{k'}$). In our recipe of describing the impurity scattering, it can be conveniently defined as the difference between the starting point $\\mathbf{r}_s$ and the ending point $\\mathbf{r}_e$: $\\delta\\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}=\\mathbf{r}_e-\\mathbf{r}_s$. It can be further divided into longitudinal jump and transverse jump, which are parallel and orthogonal to the incoming momentum $\\mathbf{k}$, respectively (Fig.~\\ref{fig_traditional hard ball}). \n\nAs the incoming momentum is along $x$-axis, the longitudinal jump is $\\delta\\mathbf{x}_{\\mathbf{k}^\\prime \\mathbf{k}}$, and the transverse jump is $\\delta\\mathbf{y}_{\\mathbf{k}^\\prime \\mathbf{k}}$. Similar to the differential cross section, the coordinate jump is also a functions of $\\theta$ and $k$, and can be calculated as follows based on the two-dimensional Lorentz model (derived in Appendix \\ref{APP-B})\n\\begin{align} \\label{eq_longj}\n\\delta\\mathbf{x}_{\\mathbf{k}^\\prime \\mathbf{k}}&=R\\left[ \\sin\\theta\n-\\frac{\\sin\\theta+2\\frac{a}{R}\\sin\\left( \\frac{\\theta}{2}\\right) }%\n{\\sqrt{1+\\frac{2a}{R}\\cos(\\frac{\\theta}{2})+\\frac{a^{2}}{R^{2}}}}\\right]\\mathbf{\\hat{x}}\\,,\\\\\n\\label{eq_tranj}\\delta\\mathbf{y}_{\\mathbf{k}^\\prime \\mathbf{k}}&=2R\\sin^{2}\\left(\n\\frac{\\theta}{2}\\right) \\left[ 1-\\frac{1}{\\sqrt{1+\\frac{2a}{R}\\cos\n(\\frac{\\theta}{2})+\\frac{a^{2}}{R^{2}}}}\\right] \\mathbf{\\hat{y}}\\,.\n\\end{align}\n\nGenerally, the coordinate jump has two contributions to the electron transport. First, it may induce a net jump velocity $\\mathbf{v}_{cj}$ that modifies the electronic drift velocity:\n\\begin{equation}\n\\mathbf{v}_{cj}=\\sum_{\\mathbf{k}^\\prime} W_{ \\mathbf{k} \\mathbf{k}^\\prime} \\delta \\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}=\\int_{0}^{2\\pi}d\\theta n_{i}v\\Omega_{\\mathbf{k}\\mathbf{k}^{\\prime}}\\delta\\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}, \n\\label{vcj}\n\\end{equation}\nwith $v=\\hbar k\/m$. \nSecondly, it leads to an electrostatic potential difference $e\\mathbf{E}\\cdot \\delta \\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}$ and thus affects the electronic equilibrium distribution function. \n\nFinally, we comment that as $\\mathbf{B}\\rightarrow 0$ the transverse jump does not have a net jump velocity, as the system respects a mirror symmetry with the mirror passing through the scatterer, parallel to $\\mathbf{k}$, and normal to the material plane. 
On the other hand, the longitudinal jump is not restricted by any symmetry and hence the net jump velocity is nonzero. Both statements can be easily verified for the two-dimensional Lorentz model using Eq.~\\ref{eq_kkprime}, \\ref{eq_longj}, \\ref{eq_tranj} and \\ref{vcj}. \n\n\n\\subsection{The nature of the anisotropic scattering}\n\nIn the first glance, the assignment of the scattering events of $t=0$ at the event line instead of the circular boundary of scatter is counterintuitive and artificial. However, it has deeper physical ground underneath. \n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.19}{\\includegraphics*{b_not_zero2.pdf}}\n\\caption{The illustration of `$\\pi$ event' (when the scattering angle is $\\pi$) in the absence and presence of magnetic field. }\n\\label{fig_pi_not_0}\n\\end{figure}\n\n\nThe advantage of using the event line defined in our theory instead of the colliding boundary, is that the cross sectional area (which overlaps with the event line) is the projection of the boundary. The incoming scattering events are uniformly distributed on the event line with momentum perpendicular to the event line, but not uniform on the boundary. Therefore, the number of electrons being scattered is proportional to the cross-sectional area on the event line. This provides convenience to count the number of scattering events and scattering cross section. \n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.35}{\\includegraphics*{cross_section_theta.pdf}}\n\\caption{The plot of differential cross section of $\\mathbf{k} \\rightarrow \\mathbf{k}^{\\prime}$ process in the unit of impurity radius $a$. The vertical black line marks the $\\pi$ event. The red shaded area is the cross sectional area within scattering angle $[0,\\pi]$. The green shaded area is the cross sectional area within scattering angle $[\\pi,2\\pi]$. The $\\pi$-event unevenly divides the cross sectional area, with the red shaded area smaller than the green shaded area. }\n\\label{fig_cross_theta}\n\\end{figure}\n\nIn order to understand the nature of anisotropic scattering, we define `$\\pi$ event' as the scattering event with scattering angle $\\theta=\\pi$. When there is no magnetic field, the `$\\pi$ event' evenly divides the cross sectional area along the event line (Fig. \\ref{fig_pi_not_0}) and there is no skew scattering. When there is magnetic field present, the $\\pi$ event unevenly divides the cross-sectional area on the event line (Fig. \\ref{fig_pi_not_0}), resulting in the uneven division of the number of electrons being scattered up (with the scattering angle within $[0, \\pi]$) and scattered down (with the scattering angle within $[\\pi, 2\\pi]$). This is shown in Fig. \\ref{fig_cross_theta}, where the red shaded area (corresponding to the cross-sectional area being scattered up) is smaller than the green shaded area (corresponding to the cross-sectional area being scattered down). \n\nWe provide a second way to understand the anisotropic scattering in Appendix \\ref{APP-F}. 
\n\n\n\\subsection{Modified Boltzmann equation}\n\nThe Boltzmann equation can be generalized to include the skew scattering and coordinate jump, reading as ($e>0$)%\n\\begin{widetext}\n\\begin{equation}\n\\left( -e\\right) \\left( \\mathbf{E+v}\\times\\mathbf{B}\\right) \\cdot\n\\frac{\\partial f_{\\mathbf{k}}}{\\hbar\\partial\\mathbf{k}}=-n_{i}v\\int_{0}^{2\\pi\n}d\\theta \\left [ \\Omega_{\\mathbf{kk}^{\\prime}} f\\left( \\epsilon\n,\\mathbf{k}\\right) -\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}} f\\left( \\epsilon,\\mathbf{k}^{\\prime}\\right)\n+\\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}\\partial_{\\epsilon}f^{0}e\\mathbf{E}\\cdot\\delta\\mathbf{r}_{\\mathbf{k}^{\\prime\n}\\mathbf{k}}\\right ] ,\n\\label{Boltzmann}\n\\end{equation}\n\\end{widetext}\nwhere $f^0$ is the equilibrium distribution function. We emphasize that in the above equation, $|\\mathbf{k}^\\prime|=|\\mathbf{k}|$ because the scattering is elastic.\n\nTo solve up to the linear order of electric field, we assume that\n\\begin{equation}\nf\\left( \\epsilon,\\mathbf{k}\\right) =f^{0}\\left( \\epsilon\\right)\n+g^{\\rm r}\\left( \\epsilon,\\mathbf{k}\\right) +g^{\\rm cj}\\left( \\epsilon\n,\\mathbf{k}\\right) ,\n\\label{assume}\n\\end{equation}\nwhere $g^{\\rm cj}\\left( \\epsilon,\\mathbf{k}\\right)$ is the part of the non-equilibrium distribution function purely due to the coordinate jump (or called anomalous distribution function), and $g^{\\rm r}$ is the non-equilibrium distribution function in the absence of coordinate jump (or called normal distribution function).\nCombining Eq.~\\ref{Boltzmann} and Eq.~\\ref{assume}, keeping the terms of linear order in the electric and magnetic field, and ignoring the coupling between skew scattering and coordinate jump, the Boltzmann equation is decomposed into two equations:%\n\\begin{widetext}\n\\begin{equation}\n\\left( -e\\right) \\mathbf{E}\\cdot\\frac{\\partial f^{0}}{\\hbar\\partial\n\\mathbf{k}}+\\left( -e\\right) \\left( \\mathbf{v}\\times\\mathbf{B}\\right)\n\\cdot\\frac{\\partial g^{\\rm r}_{\\mathbf{k}}}{\\hbar\\partial\\mathbf{k}}=-\\int_{0}^{2\\pi\n}d\\theta n_{i}v \\left[ \\Omega_{\\mathbf{kk}^{\\prime}} g^{\\rm r}_{\\mathbf{k}%\n}-\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}} g^{\\rm r}_{\\mathbf{k}\\prime}\\right] ,\\label{SBE-n}%\n\\end{equation}%\n\\begin{equation}\n\\left( -e\\right) \\mathbf{E}\\cdot\\left( \\int_{0}^{2\\pi}d\\theta n_{i}%\nv\\Omega_{\\mathbf{k}^{\\prime} \\mathbf{k}}\\delta\\mathbf{r}_{\\mathbf{k}^{\\prime}\\mathbf{k}}\\right) \\partial_{\\epsilon}f^{0}-\\left( -e\\right) \\left(\n\\mathbf{v}\\times\\mathbf{B}\\right) \\cdot\\frac{\\partial g_{\\mathbf{k}}^{\\rm cj}%\n}{\\hbar\\partial\\mathbf{k}}=\\int_{0}^{2\\pi}d\\theta n_{i}v \\left[ \\Omega_{\\mathbf{kk}%\n^{\\prime}} g_{\\mathbf{k}}^{\\rm cj}-\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}} g_{\\mathbf{k}\\prime}^{\\rm cj}\\right].\n\\label{SBE-a}%\n\\end{equation}\n\\end{widetext}\n\nWith all the above ingredients\nthe electrical current density is given by%\n\\begin{equation}\n\\mathbf{j}=\\left( -e\\right) \\int \\frac{d\\mathbf{k}}{4\\pi^2}\\left[ g^{\\rm r}+g^{\\rm cj}\\right] \\left[\n\\mathbf{v}+\\mathbf{v}^{cj}\\right] .\n\\end{equation}\n\n\n\\section{Solutions of the Boltzmann equation}\n\n\\subsection{Zero magnetic field case}\n\nIn this case, only the longitudinal coordinate jump along the $\\mathbf{k}$-direction exists. 
\n$\\mathbf{v}^{cj}\\equiv\\int_{0}^{2\\pi}d\\theta n_{i}v\\Omega\n\\left( \\theta\\right) \\delta\\mathbf{r}_{\\mathbf{k}^{\\prime}\\mathbf{k}}=-\\mathbf{v}\\frac{3\\pi n_{i}a^{2}}{4}$ which is along the opposite direction to\n$\\mathbf{v}$. \n\nThe Boltzmann equation is solved as%\n\\begin{equation}\ng^{\\rm r}_{\\mathbf{k}}=\\left( -\\partial_{\\epsilon}f^{0}\\right) \\left( -e\\right)\n\\mathbf{E\\cdot v}\\tau^{0}\\left( \\epsilon\\right) ,\n\\end{equation}\n\\begin{equation}\ng_{\\mathbf{k}}^{\\rm cj}=\\left( \\partial_{\\epsilon}f^{0}\\right) \\left(\n-e\\right) \\mathbf{E\\cdot v}^{cj}\\tau^{0}\\left( \\epsilon\\right) ,\n\\end{equation}\nwhere $\\frac{1}{\\tau^{0}\\left( \\epsilon\\right) }=n_{i}v\\frac{8a}{3}%\n$. The electric current density is therefore\n$j_{x}\\equiv\\left( \\sigma^{0}+\\sigma^{cj1}+\\sigma^{cj2}+\\sigma^{cj1,cj2}\\right)\nE_{x}$ with%\n\\begin{align}\n\\sigma^{0}=\\left( -e\\right) \\sum_{k}\\frac{g^{\\rm r}_{\\mathbf{k}}}{E_{x}}v_{x}%\n=\\frac{ne^{2}\\tau^{0}\\left( \\epsilon_{F}\\right) }{m},\n\\end{align}\n\\begin{align}\n\\sigma^{\\rm cj1} & =\\left( -e\\right) \\sum_{k}\\frac{g_{\\mathbf{k}}^{\\rm cj}}{E_{x}%\n}v_{x}=\\frac{3n_{i}\\pi a^{2}}{4}\\frac{ne^{2}\\tau^{0}\\left(\n\\epsilon_{F}\\right) }{m},\\\\\n\\sigma^{\\rm cj2} & =\\left( -e\\right) \\sum_{k}\\frac{g^{\\rm r}_{\\mathbf{k}}}{E_{x}}%\nv_{x}^{\\rm cj}=-\\sigma^{\\rm cj1},\n\\label{cancel}\n\\end{align}\nand%\n\\begin{align}\n\\sigma^{\\rm cj1, \\rm cj2}=\\left( -e\\right) \\sum_{k}\\frac{g_{\\mathbf{k}}^{\\rm cj}}{E_x} v_{x}^{\\rm cj}%\n=-\\frac{ne^{2}\\tau^{0}\\left( \\epsilon_{F}\\right) }{m}\\left(\n\\frac{3n_{i}\\pi a^{2}}{4}\\right) ^{2},\n\\end{align}\nwhere the carrier density $n=\\frac{m \\epsilon_{F}}{\\pi\\hbar^{2}}$ with $\\epsilon_{F}$ the Fermi energy.\\ \n\nHere, $\\sigma^{0}$ is the conventional zero-field conductivity in the Drude theory. $\\sigma^{\\rm cj1}$ is the conductivity induced by the anomalous distribution from the coordinate jump. $\\sigma^{\\rm cj2}$ is the conductivity induced by the velocity correction from the coordinate jump. It cancels $\\sigma^{\\rm cj1}$. $\\sigma^{\\rm cj1,\\rm cj2}$ is the conductivity with both the distribution and velocity being corrected by the coordinate jump. \nTherefore, the total electrical conductivity is\n\\begin{equation}\n\\sigma=\\sigma^{0}+\\sigma^{\\rm cj1, \\rm cj2}=\\frac{ne^{2}\\tau^{0}\\left(\n\\epsilon_{F}\\right) }{m}\\left[ 1-\\left( \\frac{3}{4}n_{i}\\pi a^{2}\\right)\n^{2}\\right] .\n\\end{equation}\n\n\nThere is a correction to the electron density, because the electrons are only present in the\nfree area excluding the area occupied by impurities. The electron density\n$n=\\frac{N}{A-A_i}=\\frac{n_D}{1-\\frac{A_i}{A}}$,\nwhere $A$ and $A_{i}$ represent the total 2D area and\nthe area occupied by the hard disk impurities, respectively, and $\\frac{A_i}{A}=\\pi n_i a^2$, and\n$n_D=\\frac{N}{A}$ is the electron density without the correction to exclude the area that impurities take. \nThus, the Fermi momentum $k_F=\\sqrt{2\\pi n}=\\frac{{k_F}^D}{\\sqrt{1-\\frac{A_i}{A}}}$, where $k_{F}^{D}=\\sqrt{2\\pi n_{D}}$. 
\n\nTherefore, the measured electrical conductivity is also corrected by\n\\begin{equation}\n\\begin{aligned}\n\\sigma^{M}&=\\sigma\\frac{A-A_{i}}{A}\\\\\n &=\\frac{n_{D}e^{2}\\tau^{D} }{m}\\left[ 1-\\left( \\frac{3}{4}\\pi n_{i}a^{2}\\right)\n^{2}\\right] \\sqrt{1-\\pi n_{i}a^{2}},\n\\label{corrected cond}\n\\end{aligned}\n\\end{equation}\nwith the Drude transport relaxation rate $1\/\\tau^{D}=n_{i}v_{F}^{D}\\frac\n{8a}{3}$ a constant. \nThe conductivity in our theory $\\sigma^{M}$ is lower than the Drude conductivity $\\sigma^{D}=\\frac{n_{D}e^{2}\\tau^{D} }{m}$ by a factor of $\\left[ 1-\\left( \\frac{3}{4}\\pi n_{i}a^{2}\\right)^{2}\\right] \\sqrt{1-\\pi n_{i}a^{2}}$ as can be seen from Eq.~\\ref{corrected cond}, which decreases as a function of the dimensionless quantity $n_{i}a^{2}$. The deviation of the diffusion coefficient from the Drude model in a previous computer simulation of Lorentz model with overlapped hard sphere impurities \\cite{PhysRevA.25.533} is similar to that in our theory. \n\n\\subsection{Low magnetic field case: Hall coefficient and magnetoresistivity}\n\nIn this section, we evaluate the conductivity under a weak magnetic field. We first discuss the contribution from the skew scattering. According to previous discussions, we need to solve the distribution function using Eq. ~\\ref{SBE-n}. \n\n$g_{\\mathbf{k}}^{\\rm r}=\\left( -\\partial_{\\epsilon}f^{0}\\right) \\left( -e\\right)\n\\left[ \\mathbf{E\\cdot v}\\tau^{L}\\left( \\epsilon\\right) +\\left(\n\\mathbf{\\mathbf{\\hat{z}}\\times E}\\right) \\cdot\\mathbf{v}\\tau^{T}\\left(\n\\epsilon\\right) \\right]$ into Eq.~\\ref{SBE-n} and obtain%\n\\begin{align}\n\\tau^{L}\\left( \\epsilon\\right) & =\\frac{\\tau^{\\parallel}\\left(\n\\epsilon\\right) }{1+\\left[ \\omega_{c}\\tau^{\\parallel}\\left( \\epsilon\n\\right) +\\frac{\\tau^{\\parallel}\\left( \\epsilon\\right) }{\\tau^{\\perp}\\left(\n\\epsilon\\right) }\\right] ^{2}},\\nonumber\\\\\n\\tau^{T}\\left( \\epsilon\\right) & =\\left[ \\omega_{c}\\tau^{\\parallel\n}\\left( \\epsilon\\right) +\\frac{\\tau^{\\parallel}\\left( \\epsilon\\right)}{\\tau^{\\perp}\\left( \\epsilon\\right) }\\right] \\tau^{L}\\left(\\epsilon\\right),\n\\end{align}\nwhere we define%\n\\begin{widetext}\n\\begin{align}\n\\frac{1}{\\tau^{\\parallel}\\left( \\epsilon\\right) } & =\\int_{0}^{2\\pi\n}d\\theta n_{i}v [ \\Omega^{A}\\left( 1+\\cos\\left(\n\\theta \\right) \\right)+\\Omega^{S}\\left( 1-\\cos\\left(\\theta \\right) \\right) ] =\\frac{8}{3}n_{i}va\\left[ 1-\\frac{1}{5}\\left( \\frac{a}{R}\\right) ^{2}+O\\left( \\left( \\frac{a}{R}\\right) ^{4}\\right) \\right],\n\\label{tau-para}\n\\end{align}\n\\begin{align}\n\\frac{1}{\\tau^{\\perp}\\left( \\epsilon\\right) } & =\\int_{0}^{2\\pi}d\\theta\nn_{i}v [ \\Omega^{S}-\\Omega^{A} ] \\sin\\left( \\theta \\right) =-\\frac{\\pi}{4}n_{i}va\\frac{a}{R}\\left[ 1+O\\left( \\left( \\frac{a}{R}\\right) ^{2}\\right)\n\\right]. \n\\label{tau-perp}\n\\end{align}\n\\end{widetext}\n\nHere $\\Omega^{A}=\\frac{1}{2}\\left(\\Omega_{\\mathbf{kk}^{\\prime}}-\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}\\right) $, which is the antisymmetric part of the differential cross section, and $\\Omega^{S}=\\frac{1}{2}\\left(\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}+\\Omega_{\\mathbf{kk}^{\\prime}}\\right) $, which is the symmetric part of the differential cross section. $\\tau^{\\perp}$ is purely due to the skew scattering, i.e. $\\Omega_\\mathbf{kk'} \\neq \\Omega_\\mathbf{k'k}$. In our theory, only when $B \\neq 0$, $\\Omega_\\mathbf{kk'} \\neq \\Omega_\\mathbf{k'k}$. 
\n\nGenerally, we prove that $\\tau^{\\parallel}$ is purely contributed by $\\Omega^{S}$ by showing $\\int_{0}^{2\\pi}d\\theta \\Omega^{A} (1+\\cos ( \\theta ))=0$, and $\\tau^{\\perp}$ is purely contributed by $\\Omega^{A}$ by showing $\\int_{0}^{2\\pi}d\\theta \\Omega^{S} \\sin\\left( \\theta \\right)=0$ (see Appendix \\ref{APP-D}). \nAs a result, $\\tau^{\\parallel}$ is not enough to characterize the collision process as long as the scattering probability contains an antisymmetric part, in which case, $\\tau^{\\perp}$ naturally emerges. \n\nIn our example, $\\frac{1}{\\tau^{\\perp}}$ is always negative (as shown in Eq. \\ref{tau-perp} and Fig. \\ref{fig_tau_perp}). Besides, as Eq. \\ref{tau-perp} shows, $\\frac{1}{\\tau^{\\perp}} \\neq 0$, as long as $\\frac{a}{R}$ is finite. Moreover, the ratio of $\\tau^{\\parallel}$ to $\\tau^{\\perp}$ is proportional to $a\/R$. Since we are considering the weak magnetic field scenario with a large $R$ ($R>a$), $\\tau^{\\perp}$ will be bigger than $\\tau^{\\parallel}$. Taking the data from the second row of Table I as example where $\\beta=0.6$, $\\frac{a}{R}=\\frac{ 2\\beta c}{\\pi} =0.06$, the ratio of $\\tau^{\\parallel}$ to $\\tau^{\\perp}$ is then around $-0.017$. \n\n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.36}{\\includegraphics*{tau_perp.pdf}}\n\\caption{The plot of the reciprocal of transverse relaxation time $\\frac{1}{\\tau^{\\perp}}$ in the unit of $n_{i} a v$, where $a$ is the impurity radius, $n_{i}$ is the impurity density, and $v$ is the electron velocity. The $\\frac{1}{\\tau^{\\perp}}$ is always negative as long as $a0,\n\\end{equation}\nrespectively. The correction due to the effective area of free space excluding the area of all the impurities is of higher order and thus neglected.\n\nThe magnetoresistivity $\\frac{\\delta\\rho_{\\parallel}\\left(\nB\\right) }{\\rho_{\\parallel}\\left( 0\\right) }\\simeq-\\frac{64}{15}\\left(\nn_{i}a^{2}\\omega_{c}\\tau^{0}\\right) ^{2}$ is negative, and\nis composed of three contributions: 1) the\ncontribution from the Hall angle, more specifically, from the anomalous distribution function to the Hall transport $\\left( C_{a}^{\\perp\n}\\rightarrow\\tau^{T,cj}\\rightarrow\\tan\\theta_{H}\\right) $;\n2) the magnetic-field-induced correction to the longitudinal transport\nrelaxation time $\\left( \\left( \\tau^{\\parallel}-\\tau^{0}\\right)\n\\rightarrow\\sigma_{xx}\\right) $;\n3) the contribution of\nanomalous distribution function to the longitudinal transport $\\left(\nC_{a}^{\\perp}\\rightarrow\\tau^{L,cj}\\rightarrow\\sigma_{xx}\\right) $.\n\nThe leading order correction of the Hall angle $-\\frac{\\pi}{4}n_{i}a^{2}\\omega_{c}\\tau^{0}$\nstems from the magnetic-field-induced skew\nscattering. This result is comparable to that corrected by the classical memory\neffect \\cite{PRB2008} in the limit $n_{i}a^{2}\\ll\\omega_{c}\\tau_{D}%\n\\ll1$: $\\delta R_{H}^{cm}\/R_{H}^{B}=-\\frac{32}{9\\pi}n_{i}a^{2}$, where $R_{H}^{B}$ is the Hall coefficient in the conventional Boltzmann theory $R_{H}^{B}=-\\frac{1}{n_D e}=-\\frac{1}{ne(1-\\pi n_i a^2)}$, and $\\delta R_{H}^{cm}$ \nis the difference between the Hall coefficient corrected by classical memory effect and the conventional Hall coefficient. \n\nIn experiments, to obtain the real electron density $n$ from the measured Hall coefficient, the correction to $R_{H}$ has to be included. 
\nThe Hall coefficient is $R_{H}=-1\/n^{\\prime}e$, where\n$n^{\\prime}$ is the effective electron density $n^{\\prime}\\approx \\frac{n}{\\left(\n1-\\frac{c}{4}+\\frac{128 c^{2}}{45\\pi^{2}}\\right)}$ and $c=\\pi a^{2}n_{i}$. We use the \nvalue of $c=0.15$ here as an example (this value is also used in the discussion) and find that $n^{\\prime}\\approx \\frac{n}{0.97} $ which is equivalent to a $3\\%$ error. This error is larger when the impurity density increases. \n\nWe note that in a previous work \\cite{PhysRevLett. 112. 166601}, it is already recognized that there may be corrections to the Hall coefficient. However, their result is due to the magnetic-field-affected Bloch-electron drifting motion, and is proportional to the $\\frac{1}{(\\tau^0)^2}$ (or equivalently, $(n_i a^2)^2$). In comparison, our correction here has different origins (magnetic-field-affected electron-impurity scattering), as well as different scaling behavior (i.e. proportional to $n_i a^2$).\n\n\n\\section{Discussions}\n\n\\subsection{Magnetoresistivity in comparison with simulation at low magnetic field}\n\nIn this section, we compare our theoretical results with\nthe analytical and numerical results for the pure 2D Lorentz model previously\nobtained in literatures \\cite{PRB2003, PhysRevLett.89.266804}.\nAt low magnetic field $\\omega_{c}\\tau^{0}<1$, the theory in \\cite{PRB2003, PhysRevLett.89.266804} predicted a negative magnetoresistivity due to the influence of magnetic field on the Corridor effect (enhancing the backscattering from the first impurity to the second impurity and back to the first impurity) and on multiple scatterings. \n\n\\begin{table*}[t]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n$\\beta$ & $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{an}$ &\n$\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho_{0}}$ & $\\left( \\frac{\\delta\n\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right) ^{th}$ & $\\frac{\\delta\\rho\n_{\\parallel}^{\\prime}}{c\\rho_{0}}+\\left( \\frac{\\delta\\rho_{\\parallel}^{Cor}%\n}{c\\rho_{0}}\\right) ^{th}$ & $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}%\n}\\right) ^{an}+\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho_{0}}+\\left(\n\\frac{\\delta\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right) ^{th}$ & $\\left(\n\\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{si}$\\\\\n\\hline\n$\\beta\/c=3,\\beta=0.45$ & -0.0074 & -0.026 & -0.0605 & -0.0865 & -0.094 & -0.1\\\\\n\\hline\n$\\beta\/c=4,\\beta=0.6$ & -0.013 & -0.046 & -0.07 & -0.116 & -0.13 & -0.14\\\\\n\\hline\n$\\beta\/c=5,\\beta=0.75$ & -0.021 & -0.072 & -0.076 & -0.148 & -0.17 & -0.19\\\\\n\\hline\n$\\beta\/c=5.5,\\beta=0.825$ & -0.025 & -0.087 & -0.0796 & -0.167 & -0.19 & -0.24\\\\\n\\hline\n$\\beta\/c=6,\\beta=0.9$ & -0.03 & -0.103 & -0.086 & -0.189 & -0.22 & -0.28\\\\\\hline\n\\label{table_corridor}\n\\end{tabular}\\\\\n\\caption{The comparison between the summation of analytical results from \\cite{PRB2003} and our theory, and the numerical results from \\cite{PhysRevLett.89.266804}. The second column $\\left( \\frac{\\delta\\rho_{\\parallel}}%\n{c \\rho_{0}}\\right) ^{an}$ is the magnetoresistivity calculated by our formula. The third column $\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho\n_{0}}$ is the quadratic contribution due to the\ninfluences of magnetic field on returns after multiple scatterings in \\cite{PRB2003}. 
The fourth column $\\left( \\frac{\\delta\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right)\n^{th}$ is the analytical values of magnetoresistivity influenced by Corridor effect \nin \\cite{PRB2003}. The fifth column is the summation of all the analytical results from \\cite{PRB2003}. The sixth column includes our results, in addition to the previous analytical results in the fifth column. The seventh column $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right)\n^{si}$ is the simulation result of magnetoresistivity in \\cite{PhysRevLett.89.266804}. }\n\\end{table*}\n\nIn table I, $c=\\pi n_{i}a^{2}=0.15$, $\\beta=\\omega_{c}\\tau=\\frac{4}{3}\\omega\n_{c}\\tau^{0}$, where $\\tau=\\left( 2 v n_{i}a\\right) ^{-1}$ is the\nsingle-particle scattering time. The second column $\\left( \\frac{\\delta\\rho_{\\parallel}}%\n{c \\rho_{0}}\\right) ^{an}=\\frac{12}{5\\pi^{2}} c \\beta^{2}$ is the magnetoresistivity up to the order\n$O\\left( \\left( n_{i}a^{2}\\omega_{c}\\tau^{0}\\right) ^{2}\\right)\n$ calculated by our formula. The third column $\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho\n_{0}}=-\\frac{0.4}{\\pi}\\beta^{2}$ is the quadratic contribution due to\nthe influence of magnetic field on multiple scatterings in \\cite{PRB2003}. The fourth column $\\left( \\frac{\\delta\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right)\n^{th}$ is the analytical values of magnetoresistivity influenced by Corridor effect \nin \\cite{PRB2003}. The fifth column is the summation of all the analytical results from \\cite{PRB2003}. The sixth column includes our results in addition to the previous analytical results in the fifth column. The seventh column $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right)\n^{si}$ is the simulation result of magnetoresistivity in \\cite{PhysRevLett.89.266804}. \n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.32}{\\includegraphics*{compare_numerical_results_finer.pdf}}\n\\caption{The plots of the numerical magnetoresistivity from [5], the magnetoresistivity from correlation effect [6], and the inclusion of our magnetoresistivity into the analytical correlation effect [6], respectively. }\n\\label{fig_compare}\n\\end{figure}\n\nAs can be seen from the table I, the\ninclusion of $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{an}%\n$ (the magnetoresistivity calculated in our theory) yields a more accurate magnetoresistivity, closer to the numerical result, especially under relatively small magnetic field\n($\\beta=0.45$ and $\\beta=0.6$). This is also reflected in Fig. \\ref{fig_compare}. Under relatively larger magnetic\nfields, the deviation of the analytical values from the simulation values\nincreases. The reason is as follows. The validity of our theory demands $R>l\\gg0.6\/\\sqrt{n_{i}}a$ where $l$ is the mean free path \\cite{PRB2001}, such that the percolation transition cannot occur. \nThis requirement yields $\\beta<1\\ll0.83\\left( n_{i}a^{2}\\right) ^{-1\/2}$.\\ Using the value\n$\\pi n_{i}a^{2}=0.15$ in table I, we obtain the following restriction to $\\beta$: $\\beta \\ll 3.8$. Therefore, the value of $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{an}%\n$ at large $\\beta$ may not be accurate. Also,\nthe validity of the theory of corridor effect influenced magnetoresistivity \\cite{PRB2003} holds well under similar\nrestrictions. Thereby the difference between the analytical (fifth column) and simulation results (sixth column) increases with larger $\\beta$. 
\n\nWe choose the value of $c=\\pi n_{i}a^{2}=0.15$ in table I because we want to compare with the result from literatures \\cite{PRB2003, PhysRevLett.89.266804} where the largest value of $c$ is $0.15$, and besides, the larger the value of $c$, the more significant the negative magnetoresistivity effect in our theory. The reason can be found from the expression $\\left( \\delta\\rho_{\\parallel}\\right) ^{an}\/\\delta\\rho_{\\parallel}^{\\prime}=6n_{i}a^{2}$. \nIn literatures \\cite{PRB2003, PhysRevLett.89.266804}, the dominance of corridor effect decreases when the magnetic field increases. The suppression of the corridor effect makes our result prominent, therefore, we need the magnetic field as large as possible inside the weak field regime. \n\n\n\\subsection{Magnetoresistivity in comparison with experimental results}\n\nIn this subsection we discuss the possible relevance of our result to\nexperiments. Our theory is based on the Boltzmann framework neglecting the\nmemory effect with successive scattering events. In real 2D electron systems with strong scatterers, the correlation between successive collisions may be\nbroken by the disorders and the applied electric field. Thereby, we may try to fit some experiments by only our results\nregardless of the correlation effect. \n\nThe negative parabolic magnetoresistivity has been observed in a corrugated 2DEG in GaAs wells\n\\cite{PRB2004}. Although the authors explained their observation by the Corridor\neffect related magnetoresistivity, the fitting value for $a\/l=2n_{i}a^{2}$ is beyond\nits valid range (here $n_{i}a^{2}=2.6$, but the range of $n_{i}a^{2}$ is supposed to be $[0,1]$). Thus, the magnetoresistivity theory in terms of corridor effect in the 2D Lorentz model may\nnot provide a suitable description for the experiments with low magnetic field. If we fit the\nexperimental parabolic negative magnetoresistivity in low magnetic field to our formula, we\nget a reasonable value $n_{i}a^{2}=0.12$. Negative parabolic magnetoresistivity was also\nobserved in a 2DEG in a GaN heterostructure \\cite{PRB2005}, and explained by a\ntwo-component disorder model \\cite{Mirlin2001}. We can also fit the\nparabolic negative magnetoresistivity by choosing $n_{i}a^{2}=0.042$. However, both of our theory and the classical magnetoresistivity theory based on memory effects \\cite{PRB2005}\ncannot explain the observed large negative linear magnetoresistivity in a larger magnetic field\n\\cite{PRB2004, PRB2005}. This experimental regime is still beyond existing theories.\n\nWe comment that in some cases, the electron motion is quasi-two-dimensional, and the vertical motion is not negligible. One example is shown in Ref. \\cite{PhysRevLett.77.147}, in which an in-plane magnetic field is applied and the periodically distributed large-scale impurities are prepared. This is, however, beyond the scope of our theory. \n\n\n\\subsection{Phenomenological inclusion of skew scattering into the Drude model}\n\nIn this subsection we\ndemonstrate that the skew scattering, signified by $\\frac{1}{\\tau^{\\perp}}$, can be phenomenologically included into Drude framework using a tensor $\\tensor{\\frac{1}{\\tau}}$. \n\nIn traditional Drude theory, the scattering rate $\\frac{1}{\\tau}$ is\ntreated as a scalar. 
The equation of motion is%\n\\begin{equation}\nm\\mathbf{\\dot{v}}=-e(\\mathbf{E}+\\mathbf{v\\times \\mathbf{B}})-\\frac{m \\mathbf{v}}{\\tau}.\n\\end{equation}\n\nIn the presence of out of plane magnetic field, due to the rotational symmetry in the two dimensional plane, \n$\\frac{1}{\\tau}$ becomes an antisymmetric tensor \\cite{Physica.24.1958, PhysRevB.72.045346, Hua2015} with\nnonzero off-diagonal element: \n\\begin{equation}\n\\tensor{\\frac{1}{\\tau}}=\\left(\n\\begin{array}\n[c]{cc}%\n\\frac{1}{\\tau^{\\parallel}} & \\frac{1}{\\tau^{\\perp}}\\\\\n-\\frac{1}{\\tau^{\\perp}} & \\frac{1}{\\tau^{\\parallel}}%\n\\end{array}\n\\right) .\n\\end{equation}\n\nThe modified equation of motion is\n\\begin{equation}\nm\\mathbf{\\dot{v}}=-e\\left(\n\\begin{array}\n[c]{c}%\nE_{x}\\\\\nE_{y}%\n\\end{array}\n\\right) -e\\left(\n\\begin{array}\n[c]{c}%\nv_{y}B_{z}\\\\\n-v_{x}B_{z}%\n\\end{array}\n\\right) -m\\left(\n\\begin{array}\n[c]{cc}%\n\\frac{1}{\\tau^{\\parallel}} & \\frac{1}{\\tau^{\\perp}}\\\\\n-\\frac{1}{\\tau^{\\perp}} & \\frac{1}{\\tau^{\\parallel}}%\n\\end{array}\n\\right) \\left(\n\\begin{array}\n[c]{c}%\nv_{x}\\\\\nv_{y}%\n\\end{array}\n\\right),\n\\end{equation}\nwith the conductivity%\n\\begin{equation}\n\\sigma_{xx}=\\frac{\\frac{ne^{2}\\tau^{\\parallel}}{m}}{1+(\\frac{eB}{m}+\\frac\n{1}{\\tau^{\\perp}})^{2}\\tau^{\\parallel2}},\\text{ \\ }\\sigma_{xy}=-\\frac\n{ne^{2}(eB+\\frac{m}{\\tau^{\\perp}})\\frac{\\tau^{\\parallel2}}{m^{2}}}%\n{1+(\\frac{eB}{m}+\\frac{1}{\\tau^{\\perp}})^{2}\\tau^{\\parallel2}},%\n\\end{equation}\nwhich gives the same result as that in the Boltzmann theory Eq. \\ref{con-skew} when considering only the skew scattering part. In Drude model, $m \\mathbf{v}\/ \\tau$ is a resistive force. The physical meaning of the anisotropic resistive force is that the direction of the force is no longer the same with that of the velocity. This anisotropic force in the Drude theory, on the other hand, is equivalent to the anisotropic scattering in the Boltzmann theory. The difference between Boltzmann theory and the Drude phenomenological theory is that the Drude theory cannot give a specific expression of the longitudinal and transverse relaxation time. \n\nConverting the conductivity into resistivity, we get \n\\begin{equation}\n\\rho_{xx} =\\frac{m}{e^{2} \\tau^{\\parallel} n}, \n\\end{equation}\n\\begin{equation}\n\\rho_{xy} =-\\left( \\frac{B}{en}+\\frac{m}{e^{2} \\tau^{\\perp} n} \\right).\n\\end{equation}\nBased on our theory, from Eq. \\ref{tau-para}, we see that the magnetic field dependence of $\\tau^{\\parallel}$ contribute to the negative magnetoresistance, while $\\frac{1}{\\tau^{\\perp}}$ contribute to the anomalous Hall effect. \n\n\n\\section{Conclusion}\n\nIn summary, we have formulated a classical theory for the magnetotransport\nin the 2D Lorentz model. This theory takes into account the effects of the\nmagnetic field on the electron-impurity scattering\nusing the recipe of the abstraction of the real scattering process in the classical\nBoltzmann framework. We find a correction to the Hall\nresistivity in the conventional Boltzmann-Drude theory and a negative magnetoresistivity as a parabolic function of magnetic field. The origin of these results has been analyzed. We have also discussed the relevance between our theory and recent simulation and experimental works. Our theory dominates in a dilute impurity system where the correlation effect is negligible. \n\n\\vskip0.8cm\nWe acknowledge useful discussions with Liuyang Sun, Liang Dong, Nikolai A. 
Sinitsyn, Qi Chen, Liang Du, Fengcheng Wu and Huaiming Guo. Q.N. is supported by DOE (DE-FG03-02ER45958, \nDivision of Materials Science and Engineering) in the formulation of our theoy. J.F., C.X. and Y.G. are supported by NSF (EFMA- 1641101) and Welch Foundation (F-1255). \n\n\n\\vskip0.8cm\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Related Work} \\label{section:relatedwork}\n\n\\subsection{Color Mapping}\nContinuous color mapping (also heat mapping) refers to the association of a color to a scalar value over a domain and can be considered the most popular visualization technique for two-dimensional scalar data. \n\nThere are many heuristic rules for designing a good colormap, which have been applied mostly intuitively for hundreds of years~\\cite{Silva::2017::UseColorInVis, silva2011using, zhou2016survey}.\nThe most important ones are order, high discriminative power, uniformity, and smoothness~\\cite{Bujack:2018:TVCG}. \n\nWhile some colormaps have been designed to sever as default colormaps for many data sets and can perform reasonably well in terms of rule compliance~\\cite{moreland2009diverging}, many colormaps are purposely-designed according to application-specific requirements such as the shape of the data, the audience, the display, or the visualization goal~\\cite{sloan1979color, bergman1995rule, rheingans2000task, borland2011collaboration}. \nThe number of possible colormap configurations and the body of related work on this topic are huge~\\cite{ware1988color, bergman1995rule, rogowitz1996not, rheingans2000task, tominski2008task}. \n\nAn effort has been made to measure the quality of colormaps with respect to these rules quantitatively~\\cite{tajima1983uniform, robertson1986generation, levkowitz1992design, moreland2009diverging, Bujack:2018:TVCG} or experimentally~\\cite{ware1988color, rogowitz1999trajectories, kalvin2000building, ware2017uniformity}, in order to develop theories and algorithms that can help automate the generation, evaluation, and improvement of colormaps.\nAlthough such theories and algorithms are usually general enough to be application-independent, the design of colormaps in many practical applications can only be effective if one includes some application-specific semantics in the design, such as key values, critical ranges, non-linearity, probability distribution of certain values or certain spatial relationships among values, and so on.\nSupporting such application-specific design effort is the goal of this work.\n\n\\subsection{Colormap Test Data}\nSo far, there is no test suite for colormaps.\nHowever, the literature has provided various examples where some data sets were used for comparing color maps and demonstrating color mapping algorithms.\n\nSloan and Brown \\cite{sloan1979color} suggest treating a colormap as a path through a color space and stress that the optimal choice of a colormap depends on the task, human perception, and display. 
They showcase their findings with $x$-ray and satellite images.\nWainer and Francolini \\cite{wainer1980empirical} point out the importance of order in a visual variable using statistical information on maps.\nPizer \\cite{pizer1981intensity, pizer1982concepts} stresses the importance of uniformity in a colormap, of which the curve of just noticeable differences (JNDs) is constant and in a natural order.\nThe uniformity can be achieved by increasing monotonically in brightness or each of the RGB components, such that the order of their intensities does not change throughout the colormap.\nTajima \\cite{tajima1983uniform} uses colormaps with regular color differences in a perceptually uniform colorspace to achieve perceptually uniform color mapping of satellite images.\nLevkowitz and Herman \\cite{levkowitz1992design} suggest an algorithm for creating colormaps that produces maximal color differences while satisfying monotonicity in RGB, hue, saturation, and brightness. They test them with medical data from CT scans.\nBernard et al. \\cite{bernard2015survey} suggest definitions of colormap properties and build relations to mathematical criteria for their assessment and map them to different tasks independent from data properties in the context of bivariate colormaps. \nThey test the criteria on analytical data that has different shapes (e.g., different gradients and spherical surfaces).\n\nPizer states that the qualitative task is more important in color mapping applications, because quantitative tasks can be better performed using contours or by explicitly displaying the value when the user hovers over a data point with the mouse.\nHe uses medical images, including CT scans and digital subtraction fluorography, as example data sets.\nWare \\cite{ware1988color} also distinguishes qualitative and quantitative tasks.\nHe agrees that the qualitative characteristics are more important and explicitly mentions tables as a suitable means for the visualization of quantitative values.\nOn the one hand, he finds that monotonic change in luminance is important to see the overall form (qualitative) of his analytic test data consisting of linear gradients, ridges, convexity, concavity, saddles, discontinuities, and cusps.\nOn the other hand, his experiments show that when a colormap consists of only one completely monotonic path in a single perceptual channel, the quantitative task is error-prone if one tries to read the exact data values based on the visualization.\n\nRogowitz, et al.~\\cite{rogowitz1992task, rogowitz1994using, bergman1995rule, rogowitz1996not, rogowitz1998data, kalvin2000building, rogowitz2001blair} distinguish different tasks (isomorphic, segmentation, and highlighting), data types, (nominal, ordinal, interval, and ratio), and spatial frequency (low, high), recommending colormap properties for each combination.\nThey perform experiments on the visual perception of contrast in colormaps using Gaussian or Gabor targets of varying strength superimposed on linear gradients of common colormaps~\\cite{rogowitz1999trajectories, kalvin2000building}.\nRogowitz et al. 
use a huge variety of data through their extensive experiments, for example, text\\cite{rogowitz1992task}, MRI scans of the human head \\cite{rogowitz1998data, rogowitz1998data}, weather data showing clouds or ozone distribution \\cite{rogowitz1998data, rogowitz1998data}, vector field data from a simulation of the earth's magnetic field or jet flows \\cite{rogowitz1998data, bergman1995rule}, measurements from remote sensing \\cite{rogowitz1996not}, cartographic height data \\cite{rogowitz1998data}, analytic data covering a broad spectrum of frequencies such as planar wave patterns \\cite{rogowitz1996not}, linear gradients distorted by a Gaussian or Gabor target of increasing magnitude \\cite{rogowitz1999trajectories, kalvin2000building}, the luminance of a human face photograph \\cite{rogowitz2001blair}, and so on.\nTheir work demonstrates the diversity in the application field of color mapping and how important it is for a colormap to encode application-specific semantics.\nZhang and Montag \\cite{zhang2006perceptual} evaluate the quality of colormaps designed in a uniform color space with a user study using a CAT scan and scientific measurements such as remote sensing and topographic height data.\nGresh \\cite{gresh2010self} measures the JND between colors in a colormap, using cartographic height data.\nWare et al.~\\cite{ware2017uniformity} generate stimuli for experiments on colormap uniformity by superimposing vertical strips of Gabor filters of different spatial extent over popular colormaps with magnitudes ranging from nonexistence on the top to very strong contrast on the bottom.\nThe users' task is to pick the location where they could first perceive the distortion.\n\nLight and Bartlein \\cite{light2004end} warn of using the rainbow colormap, showing that it is highly confusing for color vision impaired users at the example of temperature data covering North and South America.\nBorland \\cite{borland2007rainbow} also criticizes the rainbow colormap for its lack of order.\nHe compares different colormaps based on analytic test data that features a spectrum of changing frequencies, different surface shapes, and gradients.\nKindlmann et al. \\cite{kindlmann2002face} suggest a method to evaluate users' perception of luminance using a photograph of a human face.\nSchulze-Wollgast et al. \\cite{schulze2005enhancing} focus on the task of comparing data using statistical information on maps.\nTominski et al. \\cite{tominski2008task} also stress that the characteristics of the data, tasks, goals, user, and output device need to be taken into account.\nThey introduce their task-color-cube, which gives recommendations for the different cases.\nThey use cartographic data to demonstrate their findings.\nWang \\cite{Wang:2008} chooses color for illustrative visualizations using medical data and measurements of transmission electron microscopy (TEM), analytic jumps, and mixing of rectangles.\nZeileis et al. \\cite{zeileis2009escaping} provide code to generate color palettes in the cylindrical coordinates of CIEUV and showcase results using geyser eruption data of Old Faithful and cartographic data. 
\nMoreland \\cite{moreland2009diverging} presents an algorithm that generates diverging colormaps that have a long path through CIELAB without sudden non-smooth bends.\nHis red-blue diverging colormap is the current default in ParaView \\cite{Ahrens:2005:ParaView}.\nHe tests different colormaps with data representing a spectrum of frequencies and gradients partly distorted by noise.\nHe also stresses the importance of testing on 3D surfaces where shading and color mapping compete, e.g., the density on the surface of objects in flow simulation data or on 3D renderings of cartographic height data. \nBorland \\cite{borland2011collaboration} collaborates with an application scientist working on urban airflow.\nThey suggest combining existing colormaps to design domain-specific ones, and in case of doubt stick with the black-body radiation map. %\nThey sacrifice traditional rules (e.g., order) to satisfy the needs (huge discriminative power) of the application.\nEisemann et al. \\cite{eisemann2011data} separate the adaption of the histogram of the data from the color mapping task, introducing an interactive pre-colormapping transformation for statistical information on maps.\nThompson et al. \\cite{thompson2013provably} suggest applying special colors outside the usual gradient of the colormap to dominantly-occurring values, which are ``prominent'' values occurring with high frequency.\nTheir test data includes the analytic Mandelbrot fractal and flow simulation results, which are partly provided as examples in ParaView. \nBrewer \\cite{brewer1994color,Brewer:2004:designing} provides an online tool to choose carefully designed discrete colormaps.\nThis is perhaps the most widely used tool for discrete colormaps.\nMittelst\\\"adt et al. \\cite{mittelstaedt2014methods,mittelstaedt2015colorcat} present a tool that helps to find a suitable colormap for different task combinations.\nThey showcase their findings with analytical data, like gradients and jumps, and real-world maps.\nSamsel et al. 
\\cite{Samsel:2015:CHI, samsel2017envir} provide intuitive colormaps designed by an artist to visualize ocean simulations and scientific measurements in the environmental sciences.\nFang et al.~\\cite{Fang:2017:TVCG} present an optimization tool for categorical colormaps, and use the tool to improve the colormap of the London underground map and that for seismological data visualization.\nNardini et al.~\\cite{nardini2019making} provides an online tool, the \\texttt{CCC-Tool}, for creating, editing, and analyzing continuous colormaps, demonstrating its uses with captured hurricane data, simulated ocean temperature data, and results of simulating ancient water formation.\n\n\\begin{table}[t]\n\\vspace{0mm}\n\\caption{The most popular test data for colormap testing in the visualization literature.\\label{t:related}}\n\\centering\n\\begin{tabular}{@{\\hspace{4mm}}r@{\\hspace{4mm}}l@{\\hspace{4mm}}}\n\\hline\nanalytic data & \n\\cite{ware1988color, rogowitz1996not, rogowitz1999trajectories, kalvin2000building, borland2007rainbow, Wang:2008, moreland2009diverging, thompson2013provably, mittelstaedt2014methods}\\\\\n& \\cite{mittelstaedt2015colorcat, bernard2015survey, ware2017uniformity}\\\\\nstatistics and maps& \\cite{sloan1979color, brewer1994color, Brewer:2004:designing, schulze2005enhancing, tominski2008task, zeileis2009escaping, eisemann2011data, mittelstaedt2014methods, Fang:2017:TVCG} \\\\\nmedical imaging & \\cite{sloan1979color, pizer1981intensity, pizer1982concepts, levkowitz1992design, rogowitz1998data, zhang2006perceptual, Wang:2008} \\\\\nscientific measurements & \\cite{rogowitz1996not, rogowitz1998data, light2004end, zhang2006perceptual, moreland2009diverging, gresh2010self, zeileis2009escaping, samsel2017envir, Fang:2017:TVCG, nardini2019making} \\\\\nscientific simulations & \\cite{bergman1995rule, rogowitz1998data, moreland2009diverging, thompson2013provably, borland2011collaboration, Samsel:2015:CHI, samsel2017envir, nardini2019making} \\\\\nphotographs & \\cite{sloan1979color, tajima1983uniform,rogowitz2001blair, kindlmann2002face} \\\\\n\\hline\n\\end{tabular}\n\\vspace{-4mm}\n\\end{table}\n\nAll in all, we found that the most popular way of evaluating the quality of colormaps in the literature is the use of specifically designed analytic data like gradients, ridges, different surface shapes, fractals, jumps, or different frequencies, because these synthetic data sets help to identify specific properties of the colormaps.\nThe second most common use is cartographic maps, which reflects the historical use of color mapping.\nFurthermore, it is also common to use data in typical applications of scientific visualization as test data, e.g., fluid simulations (wind, ocean, turbulence), scientific measurements (weather, clouds, chemical concentration, temperature, elevation data), and medical imaging (x-ray, CT scan, digital subtraction fluorography, transmission electron microscopy).\nA summary can be found in \\autoref{t:related}.\nWe have carefully designed our colormap test suite according to these findings, not only providing an extensive selection of expressive analytic data, but also containing real-world data from different scientific applications.\n\n\n\n\n\n\n\n\\section{Motivation} \\label{section:motivationAndDesign}\nIn several fields of computer science, the use of established test suites for evaluating techniques is standard or commonplace.\nThe motivation for this paper is to introduce such a test suite to scientific visualization. 
\nSo far, user testimonies and empirical studies have been the dominant means of evaluation in the literature.\nWith this work, we would like to initiate the development of an open resource that the community can use to conduct extensive and rigorous tests on various colormap designs.\nWe also anticipate that the community will contribute new tests to this resource continuously, including but not limited to tests for colormaps used in vector- or tensor-field visualization.\nSuch a test suite can also provide user-centered evaluation methods with stimuli and case studies, while stimulating new hypotheses to be investigated using perceptual and cognitive experiments. \n\nThe development of testing functions in this paper deals with the common features that pose challenges in scalar analysis, such as jumps, local extrema, ridge or valley lines, different distributions of scalar values, different gradients, different signal frequencies, different levels of noise, and so on.\nThis scope should be extended in future work progressively with more and more complex or specialized cases.\n\nThe main design goal of our test suite is to provide a set of intuitive functions, each of which deals with one particular challenge at a time.\nThey should be easy to interpret and to customize by experts as well as non-expert users.\nThis aspired simplicity in design can be exploited in future work to facilitate automatic production of test reports or automatic optimization of colormaps with respect to a selection of tests.\n\nAt present, this initial development should provides a set of test functions simulating a variety of planar scalar fields with different characteristic features.\nIt should enable the users to observe the effects when different continuous colormaps are applied to scalar fields that have the characteristic features similar to those featured in an application.\nIn many situations, the users may anticipate certain features in data sets that are yet to arrive, and would like to ensure that the color mapping can reveal such features effectively when the data arrives.\nFinding and defining a suitable testing function is usually easier than manually creating a data set.\nEspecially, unlike a synthetic data set, a test function is normally resolution-independent and is accompanied by some parameters for customization.\n\nIn addition, the test suite should provide users with data sets that come from real-world applications, possibly with some modification wherever appropriate. 
Such an application-focused collection can be compiled from the most popular data for colormap testing in the visualization literature.\nSince both the collection of test functions and that of real-world data sets are extensible, the field of visualization may soon see a powerful test suite for evaluating colormap design.\nThis is desirably in line with other computer science disciplines.\n\\section{Test Suite} \\label{section:testingFunctions}\nThe first design goal of our test functions is to allow intuitive interpretation of colormap properties by users.\nThis requires each test function to have an easily-understandable behavior, and to have a clear mathematical description that can be reproduced consistently across different implementation.\nThe second design goal is to build the collection of test functions on the existing analytic examples in the literature surveyed in \\autoref{section:relatedwork} to ensure that the existing experiments can be repeated and compared with new experiments.\nThe third design goal is to help users to find the test suite and to conduct tests easily.\nHence we integrate the test suite with the \\texttt{CCC-Tool}, allowing users to conduct tests immediately after creating or editing a colormap specification.\n\nAs mentioned in \\autoref{section:introduction}, our test suite has three parts: local tests, global tests, and a set of application data sets.\nThe local tests are mostly based on analytic examples in the literature and are defined with considerations from calculus to cover most local properties of scalar functions.\nThe global tests feature analytic properties of scalar functions that are not local, such as signal to noise ratio, global topological properties, different levels of variation, etc.\nFinally, the application-specific data sets reflect the well-documented fact that colormaps should also be evaluated using real-world data sets.\n\nThe mathematical notions in this section use a few common parameters. The user-defined parameters, $r, R$, set the test function range, with $r, R \\in \\PazoBB{R} \\land r \\neq R$. $R$ and $r$ determine the minimum $m$ and maximum $M$ of the test function with $m < M \\in \\PazoBB{R}$. With $b \\in \\PazoBB{N}$ the user can select an exponent that describes the polynomial order.\nFor functions with enumerated cases, the user can select a specific option $T$.\n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.99\\linewidth]{pic_stepsExample2.png}\n \t\\caption{ \\label{fig:stepExample}\n \t \\textbf{Left:} The table shows the structure of the neighborhood with four elements ($A=\\{a_0,a_1,a_2,a_3\\}$). The odd indexed columns (yellow) always include the same value, increasing from the first to the last column. The even indexed columns (orange) contain the whole set of test values in increasing order. 
\n \t \\textbf{Right:} Neighborhood variation test with $A=\\{0.0,0.25,0.75,1.0\\}$ for the colormap displayed below including a three-dimensional version encoding the values through height.\n \t }\n \n \\end{figure}\n \n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.8\\linewidth]{pic_gradientImage_short_NewColored.png}\n \t\\caption{ \\label{fig:gradientExample} Three gradient tests, with $r=0$, $R=1.0$.\n \t\\textbf{Top Row}: Color mapping visualizations of the \\texttt{Gradient Variation} function for the types $linear$, $convex$, and $concave$ with $T_x=T_y$ and $b=1$ for the first type and $b=2$ for the other types.\n \t\\textbf{Bottom Row}: 3D height-map visualizations of the three gradient tests.\n \t}\n \n \\end{figure}\n \n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.8\\linewidth]{pic_minMaxSaddleImage2.png}\n \t\\caption{ \\label{fig:minMaxSaddleExample} This figure shows a 2D color mapping representation (top) and a 3D height-map (bottom) of the 2d scalar fields created with the test function yielding a minimum with $o=1$ and $p=1$ (left), a maximum with $o=-1$ and $p=-1$ (middle), and a saddle with $o=-1$ and $p=1$ (right).}\n \n \\end{figure}\n \n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.8\\linewidth]{pic_ridgeImage_short_NewColored.png}\n \t\\caption{ \\label{fig:ridgeExample} Three ridge\/valley-line tests (columns), with $r=0$, $R=1.0$. The ridge\/valley-line is always centrally at $x=0$.\n \t\\textbf{Top Row}: Color mapping with $T_x=T_y=linear$, $T_x=T_y=concave$, and $T_x=T_y=convex$, and $b=2$ in the latter two cases.\n \t\\textbf{Bottom Row}: 3D height-map versions of the same tests.\n \t}\n \n \\end{figure}\n \n \n \n\\input{4_1_0_Local_Attributes.tex}\n\\input{4_2_0_Global_Attributes.tex}\n\\input{4_3_0_RealWorldData.tex}\n \n \n \n \n \n \n\n\n\\subsection{Local Tests}\nOur basic design principle behind the local tests is classical calculus.\nLocal means that these test functions help to check the appearance of local properties of a scalar function after mapping it to color with the selected color map.\nThe main idea is to use typical local approximations like low order Taylor series expansions to create the test functions.\nWe use step functions to show the effect of discontinuities, and provide functions with different gradients, various local extrema, saddles, ridges, and valley lines.\nThis corresponds to ideas in the literature, as shown in \\autoref{section:relatedwork}, e.g., works by Mittelst{\\\"a}dt~\\cite{mittelstaedt2014methods,mittelstaedt2015colorcat} or Ware~\\cite{ware1988color}.\nWe also use elements of Fourier calculus by providing functions to test the effect of different frequencies.\nThe final test looks at the colormap's potential to visually reveal small differences within the data range, which might be an important colormap design goal. 
%\n \n \n\n\n \\input{4_1_1_Testing_Functions_NeighbourhoodVariations.tex}\n \\input{4_1_2_Testing_Functions_GradientVariations.tex}\n \\input{4_1_3_Testing_Functions_MinMaxSaddleVariations.tex}\n \\input{4_1_4_Testing_Functions_Ridge_and_Valley_Lines.tex}\n \\input{4_1_5_Testing_Functions_FrequencyVariations.tex}\n \\input{4_1_6_Testing_Functions_TresholdVariations.tex}\n \n\\subsubsection{Step Functions} \\label{subsubsection:neighbourhoodVariations}\nSome popular test images in the literature use steps between adjacent pixels~\\cite{mittelstaedt2014methods, mittelstaedt2015colorcat, Wang:2008}.\nIn terms of calculus, this means to use a function with discontinuities.\nIdeally, the function should have different step heights starting from different levels.\nFor this purpose, we define a set $A ={a_0, ... , a_{n-1}}$ of increasing test values $a_ir$, we get an increasing function and with $R0 \\land p>0$, and a saddle with $o>0 \\land p<0 \\lor o<0 \\land p>0$.\nThe starting value of the structure is given by $m \\in \\PazoBB{R}$.\n\\autoref{fig:minMaxSaddleExample} shows an example of this test function with visualizations of minima, maxima, and saddle points.\n\n\n\n\n \n\\subsubsection{Local Topology: Ridge and Valley Lines} \\label{subsubsection:ridgeValleyLines}\n\nBesides local extrema and saddle points, ridge and valley lines are further relevant topological shape descriptors.\nAgain, this has been noted by Ware~\\cite{ware1988color} with respect to color mapping.\nAlso, the relevance of ridges and valley lines is well established in feature-based flow visualization~\\cite{Heine:2016}.\nTo test the suitability of colormaps for scalar fields that include such lines, we use a function $f_{RV}:[-1,1]\\times[0,1] \\rightarrow \\PazoBB{R}$.\nThe location of the ridge\/valley-line is always at $x=0$ as a vertical line.\nIts shape is determined by the function $g$ that we introduced in \\autoref{subsubsection:gradientVariations}, so it may be linear, convex, or concave according to the exponent $b \\in \\PazoBB{N}$ and the shape descriptor $T_y$.\nFor the slope in $x$-direction, we basically use the absolute value with exponent $b$, i.e., $|x|^b$ on the interval $[-1,1]$.\nThis creates a concave shape.\nFor convex shapes, we use the similar function $1-(1-|x|)^b$.\nBoth functions are adjusted to interpolate between $r$ at $-1$ and $1$ and $g(y)$ at $0$.\nThis is quite similar to the definition of $g$.\nWe introduce the type parameter $T_x$ and set it to \"convex\" or \"concave\" and arrive at the definition:\n\n\\begin{equation}\n\\label{equ:ridgeValley}\nf_{RV}(x,y)=\n \\begin{cases}\n (r-g(y)) |x|^b + g(y) & \\textbf{if } T_x = concave \\\\\n (r-g(y)) (1-(1-|x|)^b) + g(y) & \\textbf{if } T_x = convex\n \\end{cases}\n\\end{equation}\n\nAs in the gradient variation case, $b=1$ leads to the same linear function in both cases, which we also denote as \"linear\".\nFor $R>r$, we get a ridge-line. 
For $R 0)\\\\\n \n (f_m(y)-t) (1-(1-|x|)^b) + t & \\textbf{if } (T=steep \\land x \\leq 0) \\\\\n (f_M(y)-t) (1-(1-|x|)^b) + t & \\textbf{if } (T=steep \\land x > 0)\n \\end{cases}\n\\end{multline}\n\n\\noindent In \\autoref{fig:tresholdExample}, a plot shows examples of all three types.\n\n\n\n \n\n\\subsection{Global Tests}\n\nIn contrast to the local tests, the global tests look at more global properties of scalar functions and how well the colormap presents them.\nFirst, we look at global topological properties.\nWe use functions showing Perlin noise to create multiple local minima and maxima on different height levels and different spatial structures.\nDetails can be found in \\autoref{subsubsection:globalTopologicalStructures}.\nSecond, it is a challenge for any colormap to deal with a large span of the overall values while small, but relevant, local value variations are also present.\nAny nearly linear colormap will completely overlook these so-called little bit variations.\nAs test functions, we use linear functions with varying gradient and height as background with small grooves to include little bit variations.\nThe definition is given in \\autoref{subsubsection:littleBitVariations}.\nThird, real-world data, especially images created by measurements, contain noise of various types and intensity, i.e., signal-to-noise ratio.\nWe use functions from the local test suite and add uniform or Gaussian distributed noise of different signal-to-noise ratios.\nWe describe the details in \\autoref{subsubsection:signalNoiseVariations}.\nFinally, we add a collection of test functions from other computer science disciplines to allow for tests using these functions.\n\n\\input{4_2_1_Global_Topological_Structures.tex}\n\\input{4_2_2_Testing_Functions_LittleBitVariations.tex}\n\\input{4_2_3_Testing_Functions_SignalNoiseVariations.tex}\n\\input{4_2_4_Testing_Functions_Collection.tex}\n\\subsubsection{Global Topological Structures} \\label{subsubsection:globalTopologicalStructures}\n\nAs noted above, other authors indicated the relevance of critical points for testing colormaps before.\nIn contrast to the local topology in \\autoref{subsubsection:minMaxSaddle}, we use a larger number of critical points in the following test.\nFor the creation of global topological structures, we take the 2D version of the improved noise algorithm introduced by Perlin \\cite{Perlin_1985, Perlin_2002}, which is often used for the creation of procedural textures or for terrain generation in computer games.\nThe idea of this test function is to use some other test function from \\autoref{subsubsection:gradientVariations} to \\autoref{subsubsection:littleBitVariations} as a background and combine this field with noise according to Perlin's work.\nThese distorted gradients and shapes are in analogy with colormap testing functions specifically used to determine the discriminative power of subregions of colormaps~\\cite{rogowitz1999trajectories, kalvin2000building, ware2017evaluating, ware2017uniformity, moreland2009diverging}.\n \nTo create the critical points, we use the noise function $f_{Noise}(x,y) \\in [-n,n]$, $n>0 \\land n\\leq1$ and distinguish four options (see \\autoref{fig:globalTopologyExample}).\nFor the options $min-$,$max-$, and $range-scaled$ the selected test-function $f_{test}$ affect the result. 
Whereby the influence of the first two options depend on the closeness of the local value $f_{Noise}(x,y)$ to the $m$, or rather $M$.\nThis procedure creates noise that is focused on small\/high values.\nAt the $range-scaled$ option, the adjustment of the local value is limited by the test-function range from $m$ to $M$.\nFurthermore, an optional clipping method for these three options prevent values out of $[m,M]$.\nFourth, we offer $replacement$ as a final option, where users can set a custom noise-range $N=[n_m,n_M]$ with $f_{Noise}(x,y) \\in N$.\nWith this option, the entries of the test-function will be replaced by the noise value.\n\n\\begin{equation}\n\\label{equ:noiseOptions}\nf_{test}(x,y)=\n \\begin{cases}\n f_{test}(x,y)+f_{Noise}(x,y)*\\frac{f_{test}(x,y)-m}{M-m} & \\textbf{if } max-scaled \\\\\n f_{test}(x,y)+f_{Noise}(x,y)*\\frac{M-f_{test}(x,y)}{M-m} & \\textbf{if } min-scaled \\\\\n f_{test}(x,y)+f_{Noise}(x,y)*(M-m) & \\textbf{if } range-scaled \\\\\n f_{Noise}(x,y) & \\textbf{if } replacement\n \\end{cases}\n\\end{equation}\n\n\n\n\\subsubsection{Little Bit Variation} \\label{subsubsection:littleBitVariations}\n\nThe teaser \\autoref{fig:camelExample} demonstrates that standard colormaps may easily lead to overlooked small value variations.\nFor such cases, i.e., if small variations in the scalar field (within a small sub-range of the full data range) carry valuable information for interpretation, we define a test on the potential of a given colormap to visually resolve small perturbations.\nThis is similar to distorted gradients, which appear quite frequently in the literature~\\cite{rogowitz1999trajectories, kalvin2000building, ware2017evaluating, ware2017uniformity, moreland2009diverging}.\nThe \\texttt{Little Bit Variation} tes\n \n\\begin{equation}\n f_{LB}:[0,2n+1]\\times[0,1] \\rightarrow \\PazoBB{R}\n\\end{equation}\n\n\\noindent uses a background function and adds a function $f_{G}$ producing $n$ small grooves. \nThe background function in this test is a linear gradient along the y-direction, which is defined by a user-specified value range $[m,M]$.\nAlong the $x$-direction, this function is modified by a function $f_{G}$ creating $2n+1$ alternate stripes of unchanged background and grooves, so we use\n\n\\begin{equation}\n \\label{equ:littleBit}\n f_{LB}(x,y)=m+(M-m)y-f_{G}(x)\n\\end{equation}\n\n\\noindent The function $f_G$ produces sine-shaped grooves for odd $\\left\\lfloor x \\right\\rfloor$ and no changes for even $\\left\\lfloor x \\right\\rfloor$.\nAs $x$ runs from $0$ to $2n+1$, this creates exactly $n$ grooves.\n\n\\begin{equation}\n\\label{equ:littleBitGroove}\n f_{G}(x) =\n \\begin{cases}\n 0, & \\textbf{ if } \\lfloor x \\rfloor \\mod 2 =0\\\\\n - f_{A}(x) \\sin( \\pi (x - \\lfloor x \\rfloor)), & \\textbf{otherwise}\\\\\n \\end{cases}\n\\end{equation}\n\n\\noindent As can be seen, the sine wave's amplitude is changed by a function $f_{A}(x)$ and create a test of different small value changes (groove depths).\nThe function $f_{A}(x)$ determine for each groove the depth by linear interpolation between user-defined minimum $g_m$ and maximum $g_M$.\nIn \\autoref{fig:littleBitExample}, you can see an example for the \\texttt{Little Bit Variations} test. 
\n\n\\begin{equation}\n\\label{equ:littleBitAmplitude}\n f_{A}(x) = g_{m} + \\frac{\\lfloor x \\rfloor - 1}{2*n-2} (g_{M}-g_{m})\n\\end{equation}\n \n\n\n\n\\subsubsection{Signal-Noise Variation} \\label{subsubsection:signalNoiseVariations}\n\nIn the signal and data processing, noise plays an important role.\nIt also affects the results of scientific visualizations.\nLike the global topology test (see~\\autoref{subsubsection:globalTopologicalStructures}), our tool offers to add noise to each test function (\\autoref{subsubsection:gradientVariations} - \\autoref{subsubsection:littleBitVariations}).\nThe tool uses the standard random algorithm from JavaScript, which produces pseud- random numbers in the range $[0,1]$ with uniform distribution.\nFor the noise behavior, we offer the same noise behavior options from \\autoref{subsubsection:globalTopologicalStructures}.\nIndependent from the selected option, the fraction of noisy pixels can be set.\nThis fraction describes how many randomly selected field values are affected by noise.\nIf the noise proportion is set to 100\\%, the full test-function is affected by noise.\nFor more flexibility, we also offer a conversion from a uniform distribution to a normal or a beta distribution.\nThe conversion from uniform to the normal distribution is done with the Box-Muller transform~\\cite{BoxMull58}.\nWith the normal distribution, the noise will be more focused on weaker changes around null for the $min\/max-scaled$ and $range-scaled$ options.\nFor the $replacment$ option, the normal distribution causes a focus on values around the median of the defined range of noise values.\nThe approach from uniform to a beta-like distribution (with $alpha,beta=0.5$) is done with the equation $beta_{Random}=sin(r*\\frac{\\pi}{2})^2$, with $r$ being the result of the standard random generator.\nAdding noise using a beta distribution with the $min\/max-scaled$ or $range-scaled$ options will have a priority for values near the maximal change parameter $m$ and $-m$.\nFor the $replacement$ option, values near the minimum and maximum of the defined noise value range will be preferred.\nWe modified this conversion with a view to do this preference on only one side, thus for $m$ or $-m$ in the first case or for the maximum or minimum in the other case.\nThe modification is a mirror at the median random value to the left or right side of this median.\nThis allows us to create a beta-like distribution and also a left-oriented beta distribution and a right-oriented beta distribution.\n\\autoref{fig:noiseCollection} shows the different distribution options.\n\n \n \n \n \n \n \n \n\n\\subsubsection{Function Collection} \\label{subsubsection:testcollection}\n\nMany domains of computer science use test functions for the evaluation of algorithms.\nThere are several widespread well-known functions like \\texttt{Mandelbrot Set} or \\texttt{Marschner Lobb} and also functions like the \\texttt{Six-Hump Camel Function} from the teaser, which are better known in optimization than computer science \\cite{Mandelbrot::1980, testfunctions::MarschnerLobb::94, testfunctions::Jamil::2013}.\nSuch functions and their different attributes could also be an enrichment for evaluation in scientific visualization.\nTherefore, we included a collection of such functions from the literature in our testing environment.\nThese functions stand beside our development of test functions, and provide further challenges for colormaps.\nWith this collection, we want to provide over time more and more such 
functions of interest.\nIn order to allow users to test their colormaps without changes, we allow the user to scale the values of these functions to the range of the colormap or a user-defined range.\n\\autoref{fig:collectionExample} shows some examples of functions used for optimization.\nObviously, they also have relevant properties for the evaluation of color mapping.\nFor example, the \\texttt{Bukin Function} includes many small local minima along a valley-line. \\cite{testfunctions::Jamil::2013}\n\n\n\n\\subsection{Application-Specific Tests} \\label{subsection:realWorldData}\nIn the two previous sections, we described several analytic test functions concerning specific challenges encountered in color mapping.\nAdditionally, we also introduced a collection of already existing test functions from other computer science domains.\nNevertheless, we think that the involvement of real-world data is indispensable for the completion of this test suite.\nReal-world data originates from many different sources, is generated with various measurement techniques or simulation algorithms, and includes a myriad of attribute variations.\nMost importantly, such data could potentially present several of the challenges described in the two previous sections at the same time. \nThis kind of test cannot be easily replaced by our theory-based test functions completely. \nTherefore, we decided to include a set of application test data from different domains to cover a wide spectrum of realistic challenges.\n\nWithin one specific scientific domain, there is often a similarity between typical data sets; e.g., in medicine, data from the MRI (Magnetic Resonance Imaging) or the CT (Computer Tomography) is frequently used. \nSuch data sets have similar attributes, and similar requirements have to be fulfilled by colormaps.\nIf we cover different typical data sets of different scientific disciplines, in the future, we can hopefully offer enough different real-world test cases so that most users will find a case that has some similarities with his data.\nLike the test function collection from \\autoref{subsubsection:testcollection}, this collection of real-world data will be extended over time.\nAt the current version, the tool offers medical-, flow-, and photograph-specific real-world data.\n \n\n \n\n \n\\section{Test Evaluation} \\label{section:testevaluation}\n\nMostly there are good reasons to select specific colormaps or to design colormaps in a specific way.\nDepending on the actually envisaged purpose of the colormap, a user decides on the number of keys; the hue, saturation and value of each key; the gradients in the mapping between the data range and the colormap; and so on.\nFurthermore, \\emph{de facto} standards and cognitive motivation may also influence the user's choice. 
\nTherefore, meaningful automated evaluation of continuous colormaps without knowledge of their intended use is rarely feasible.\nTherefore, a general colormap score computed based on automatic tests and benchmarks might not be informative.\n\nInstead, we propose to derive information based on aforementioned test functions that can be analyzed and rated by users themselves.\nA user first chooses a test-function from \\autoref{section:testingFunctions}.\nFor each grid point of the generated test field, we calculate the value differences to the neighboring grid points.\nDepending on the location within the field, the number of neighbors varies between three and eight.\nWe normalize these value differences with the minimum and maximum value differences found and save them into a \\texttt{Value Difference Field}.\nWe repeat this process also for the colors.\nHere, we use some color difference norm (Lab, DIN99, DE94, or CIEDE2000) and save the normalized values into the \\texttt{Color Difference Field}.\n\nBy subtracting these two fields from another, we get a \\texttt{Subtraction Field}.\nThis field represents the local uniformity of the color mapping; when the local gradients found in the data are accordingly represented in the color mapped field, the difference between normalized data field and normalized color mapped field is zero for all pixels\/locations. \nIn the case of a non-linear color mapping, in contrast, the \\texttt{Subtraction Field} will particularly highlight areas with strong non-linear mapping, which the user might have designed intentionally in order to increase the number of discriminable colors for a part of the data range.\nThe user can study the \\texttt{Color Difference Field} as well as the \\texttt{Subtraction Field} to analyze the color mapping of the test function.\n\nEach of the three fields has three up to eight values for each pixel.\nFor the color mapping (\\autoref{fig:evaluation_Screenshot}), the user can select maximum, average, or median.\nNext to that, there are options to select a method for the calculation of the color difference.\nThe tool offers Euclidean distance for Lab and DIN99 space or the use of the DE94 or CIEDE2000 metrics in the Lab space. \n\nTo compare the visualizations of \\texttt{Color Difference Field} of different colormaps, we cannot use the normalization by minimum and maximum. 
The colors of such color mappings would relate to different color difference values and are not comparable.\nTherefore we implemented two alternative options using fixed values for the minimum and maximum of the normalization to create comparable results.\nThe \\texttt{Black-White} normalization use the greatest possible color difference between black and white as maximum and zero as minimum.\nThe \\texttt{Custom} normalization uses a user-entered maximum, which is a necessity if the black-white difference is to big by contrast with the occurring color differences of the \\texttt{Color Difference Fields}.\nIn \\autoref{fig:application_Threshold}, we used this third option to get a comparable visualization for a colormap with a discontinuous transition point.\n\n\\section{Application Case} \\label{section:application}\n\nIn this section, we show how the test suite could be utilized to evaluate the suitability of colormaps with respect to a given application problem.\nFor this example, we chose a data set from a simulation with a high-resolution global atmosphere model.\nThe data we use is one timestep of the temperature at a height of 2m simulated with the icosahedral ICON model at a global resolution of 5km~\\cite{Stevens:2020}.\nWe remapped the data from the unstructured model grid to a regular grid with $4000 \\times 2000$ grid points for easier use with different tools. \n\nOn the global scale, the 2m-temperature is typically characterized by a wide range of values between less than $-80^{\\circ}${C} and more than $50^{\\circ}${C}.\nFor the selected time step, the simulated 2m-temperature varies between about $-63^{\\circ}${C} and $52^{\\circ}${C}.\nRegionally, however, small temperature variations of the order of $0.1^{\\circ}${C} might be critical for the analysis as, e.g., in the neighborhood of the freezing point at $0^{\\circ}${C}.\n\nPanel \\textbf{1a} of \\autoref{fig:application_Threshold} shows a visualization of the data using a spherical projection with a focus on the South Pole.\nIn contrast to mountainous regions, where the horizontal $2m$-temperature gradient is generally high, the gradient in flat areas such as oceanic regions is much smaller.\nHere, the color differences are too small to depict local temperature variations, as for example, in regions with values close to $0^{\\circ}${C} as shown in the close-up in the lower right corner of the image.\n\nTo test a given colormap for its discriminative power in the data range around the freezing point, we applied a threshold test with the options $Flat-Surrounding$, $m=-63$, $M=53$, and $t=0$.\nFirst, we start with a local uniform cool-warm colormap (\\textbf{1a} of \\autoref{fig:application_Threshold}).\nThe related test function visualization \\textbf{1b} demonstrates that it is impossible to differentiate between negative and positive values if the values are close to $0^{\\circ}${C}.\nThe \\textbf{1c} \\texttt{Subtraction Field} method of the test evaluation part (\\autoref{section:testevaluation}) yields a nearly white image, which reflects that the colormap uniformly represents the gradients produced by the test function.\nTo highlight the freezing point in the mapping, we introduce a non-linearity in the colormap, at $0^{\\circ}${C}.\nWe use the twin key option of the CCC-Tool colormap specification (CMS)~\\cite{nardini2019making}, which separates the color key at $0^{\\circ}${C} into a left and right color key to create the discontinuous transition.\nTo improve the visual difference between both sides, we slightly 
lower the lightness value and increase the left color saturation to achieve light blue.\nWe kept white as the right-hand part of the color key.\nPanels \\textbf{2a} and \\textbf{2b} of \\autoref{fig:application_Threshold} illustrates that the introduced discontinuity in the colormap clearly separates the areas with negative and positive temperature values.\nIn comparison to \\textbf{1c}, the \\texttt{Subtraction Field} in \\textbf{2c} shows with a vertical red line the spatial position of the discontinuous transition at $0^{\\circ}${C}.\nThe according visualization of the temperature field of the modified colormap is shown in panel \\textbf{2a}.\n\nIf we visualize the global 2m-temperature field using a linear colormap and look at the tropics or the mid-latitudes, we find that regional variations are also not very well resolved.\nUsing the same colormap, \\autoref{fig:application_LittleBit}~\\textbf{1a} shows a different view onto our planet, as \\autoref{fig:application_Threshold}~\\textbf{1a}.\nThe resolving power of the linear colormap is equally distributed over the full data range.\nHowever, when we analyze the global temperature distribution, we find that more than half of the data range is used for the temperature variations far below $0^{\\circ}${C} mostly in Antarctica, although this information is less important for most users of such a data set.\nWith respect to vegetation and agriculture, we may want to put more focus on regions with temperatures mostly above $0^{\\circ}${C}.\n\nTherefore we extended the path of the colormap through the color space to get more distinguishable colors for the positive data range. \nWe used a \\texttt{Little Bit} test to control improvements during this process.\nPanel~\\textbf{1a} of \\autoref{fig:application_LittleBit} shows a visualization using the colormap with the discontinuous transition introduced above.\nThe corresponding \\texttt{Little Bit} test is shown in panel~\\textbf{1b}.\nFor the evaluation, we used the \\texttt{Color Difference Field} (\\autoref{section:testevaluation}).\nPanel~\\textbf{1c} shows how the small grooves in the linear gradient of the \\texttt{Little Bit} test function (that are hardly noticeable in \\textbf{1b}) become clearly visible in the color difference field.\nFrom left to right, the regularly spaced perturbations in the field increase in magnitude, which is represented by a stripe pattern in panel~\\textbf{1c} that increases in contrast from left to right.\nThe vertically constant color of the stripe pattern is a direct consequence of a linear colormap.\n\nHowever, as we wanted to increase the discriminative power in the upper part of the colormap, we inserted additional color keys.\nFirst, we moved the blue part of the colormap representing negative values slightly away from cyan.\nThe freed color space was utilized to represent the lower positive temperatures.\nA gradient from white to cyan $0^{\\circ}${C}-$10^{\\circ}${C} is followed by a gradient from cyan to green to represent the moderate temperature range of $0^{\\circ}${C}-$20^{\\circ}${C}.\nNext to this, a subsequent gradient from yellow through beige to light brown shows values between $20^{\\circ}${C} and $40^{\\circ}${C}. 
\nA further transition to dark red finally shows higher temperature range of up to $53^{\\circ}${C}.\n\nOur colormap semantics were designed to roughly differentiate between five temperature zones: very cold (blue to light blue), moderately cool (white to cyan), moderately warm (cyan to green), warm (green to yellow to beige) and hot (red).\nConcerning red-green colorblind viewers, we used a lower and not overlapping lightness range for the red gradient and the green gradient.\nThe respective color gradients were separately optimized for local uniformity.\nThe panels~\\textbf{2a} and \\textbf{2b} of \\autoref{fig:application_LittleBit} show the visualizations of the temperature data and the test function with the modified colormap.\nNote that we used the \\texttt{Little Bit} test function only for the upper part of the colormap that corresponds to temperature values between $m=5^{\\circ}${C} and $M=53^{\\circ}${C}.\nAs a result of our modifications of the colormap, it is now possible to see much more detail in the inhabited part of our planet and to distinguish between the different temperature zones.\nCompared to~\\textbf{1c}, the \\texttt{Color Difference Field}~\\textbf{2c} shows an increase in the color difference at the expense of the local uniformity of the positive data range. \n \n\n \n\n \n\\section{Conclusion} \\label{section:conclusion}\nIn this paper, we have introduced the approach of using test functions as a standard evaluation method, and we have presented a test suite for continuous colormaps.\nLike in other fields of computer science, one could use such test functions besides user-centered evaluation (e.g., user testimonies and empirical studies).\nIn compassion with user-centered evaluation, there is no need to recruit participants, design questionnaires or stimuli, organize payment, arrange experiment time and environment, and provide apparatus. Evaluating colormaps using the test suite can be conducted quickly and easily.\nThe designer can test many optional colormaps against many test functions and data sets, which is usually not feasible with user-centered evaluation. The same tests can be repeated with consistent control and comparability.\n\nFor the test suite, we first focused on the specific challenges of scalar fields.\nThe \\autoref{subsubsection:neighbourhoodVariations}-\\ref{subsubsection:littleBitVariations} describe the test functions we chose to address these challenges.\nTo help users with a less mathematical background, we tried to develop intuitive functions that are simple and easy to interpret.\nThe test suite currently includes step functions, different gradients, minima, maxima, saddle points, ridge and valley lines, global topology, thresholds, different frequencies, and a test for very small value changes.\nAlthough these test functions cannot cover all possible challenges, we have laid down a solid foundation that can be extended continually.\nWe have also included the option to add noise to extend the possibilities of the basic test functions. \n\nBesides our newly designed functions, we have presented in \\autoref{subsubsection:testcollection} a collection of functions used for evaluation in other computer science fields.\nWe think they will prove to be useful for the evaluation of colormaps as well.\nFurthermore, we have included an initial selection of real-world data sets from different application areas.\nAs described in \\autoref{subsection:realWorldData}, tests against real-world data are important in practice. 
\nEach real-world data set in our test suite presents an individual challenge of a combination of in scalar field analysis. \nHere, our intention is to provide a broad cover such that users are less dependent on external data.\n\nOur test suite has been integrated into the open-access CCC-Tool.\nIn \\autoref{section:testevaluation} we describe means to evaluate the results of the test functions visually and numerically that we have also implemented into our online-tool. \nAn example of using the test suite to evaluate and enhance a user-designed colormap concerning a specific application problem is finally presented and discussed in \\autoref{section:application}.\n\nFor a long-term perspective, we plan to continue the extension of our collection.\nOne option for real world data would be an open source database with a web interface and a link to our tool.\nIn order to adopt the test suite as a standard evaluation method, we would like to work on the method of automatic test reports, which can perform automatic analysis of a colormap with a set of tests chosen by the user.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nCounterfactual explanations have generated immense interest in several high-stakes applications, e.g., lending, credit decision, hiring, etc~\\cite{verma2020counterfactual, Karimi_arXiv_2020,wachter2017counterfactual}. Broadly speaking, the goal of counterfactual explanations is to guide an applicant on how they can change the outcome of a model by providing suggestions for improvement. Given a specific input value (e.g., a data point that is declined by a model), counterfactual explanations attempt to find another input value for which the model would provide a different outcome (essentially get accepted). Such an input value that changes the model outcome is often referred to as a \\emph{counterfactual}.\n\n\nSeveral existing works usually focus on finding counterfactuals that are as ``close'' to the original data point as possible with respect to various distance metrics, e.g., $L_1$ cost or $L_2$ cost. This cost is believed to represent the ``effort'' that an applicant might need to make to get accepted by the model. Thus, the ``closest'' counterfactuals essentially represent the counterfactuals attainable with minimum effort. \n\n\n\n\nHowever, the closest counterfactuals may not always be the most preferred one. For instance, if the model changes even slightly, e.g., due to retraining, the counterfactual may no longer remain valid. In Table~\\ref{robustness_demo}, we present a scenario where we retrain an XGBoost model~\\cite{chen2015xgboost} with same hyperparameters on the same dataset, leaving out just one data point. We demonstrate that a large fraction of the ``closest'' counterfactuals generated using the state-of-the-art techniques for tree-based models no longer remain valid. This motivates our primary question: \n\\begin{center}\n\\emph{How do we generate counterfactuals for tree-based ensembles that are not only close but also robust to changes in the model?}\n\\end{center}\n\n\\begin{table}\n\\caption{Validity of Counterfactuals Generated Using State-Of-The-Art Techniques (with $L_1$ cost minimization) for XGBoost Models on German Credit Dataset~\\cite{UCI}: Models were retrained after dropping only a single data point. 
A large fraction of the counterfactuals for the previous model no longer remain valid for the new models obtained after retraining.}\n\\label{robustness_demo}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccc}\n\\toprule\nMethod & FT & FOCUS & FACE & NN \\\\\n\\midrule\nValidity & $72.9\\%$ \n & $72.8\\%$ \n & $84.4\\%$ \n & $92.5\\%$\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\end{table}\n\n\n\n\nTowards addressing this question, in this work, we make the following contributions:\n\n\\begin{itemize}\n\\item \\textbf{Quantification of Counterfactual Stability:} We propose a novel metric that we call -- \\emph{Counterfactual Stability} -- that quantifies how robust a counterfactual is going to be to possible changes in the model. In order to arrive at this metric, we identify the desirable theoretical properties of counterfactuals in tree-based ensembles that can make them more stable, i.e., less likely to be invalidated due to possible model changes under retraining. Tree-based ensembles pose additional challenges in robust counterfactual generation because they do not conform to standard assumptions, e.g., they are not smooth and continuous, have a non-differentiable objective function, and can change a lot in the parameter space under retraining on similar data. Our proposed quantification is of the form $R_{\\Phi}(x,M)$ where $x \\in \\mathbb{R}^d$ is an input (not necessarily in the dataset or data manifold), $M(\\cdot):\\mathbb{R}^d \\to [0,1]$ is the original model, and $\\Phi$ denotes some hyperparameters for this metric. We find that while counterfactuals on the data manifold have been found to be more robust than simply ``closest'' or ``sparsest'' counterfactuals (see \\cite{pawelczyk2020counterfactual}), being on the data manifold may not be sufficient for robustness, thus calling for our metric.\n\n\\item \\textbf{Conservative Counterfactuals With Theoretical Robustness Guarantee:}\nWe introduce the concept of \\emph{Conservative Counterfactuals} which are essentially counterfactuals (points with desired outcome) lying in the dataset that also have high counterfactual stability $R_{\\Phi}(x,M)$. Given an input $x \\in \\mathbb{R}^d$, a conservative counterfactual is essentially its nearest neighbor in the dataset on the other side of the decision boundary that also passes the counterfactual stability test, i.e., $R_{\\Phi}(x,M) \\geq \\tau$ for some threshold $\\tau$. We provide a theoretical guarantee (see Theorem~\\ref{thm:guarantee}) that bounds the probability of invalidation of the conservative counterfactual under model changes.\n\n\n\\item \\textbf{An Algorithm for Robust Counterfactual Explanations (RobX):} We propose \\emph{RobX} that generates robust counterfactuals for tree-based ensembles leveraging our metric of counterfactual stability. Our proposed strategy is a post-processing one, i.e., it can be applied after generating counterfactuals using any of the existing methods for tree-based ensembles (that we also refer to as the base method), e.g., Feature Tweaking (FT)~\\cite{tolomei2017interpretable}, FOCUS~\\cite{lucic2019focus}, Nearest Neighbor (NN)~\\cite{albini2021counterfactual}, FACE~\\cite{poyiadzi2020face}, etc. 
Our strategy iteratively refines the counterfactual generated by the base method and moves it towards the conservative counterfactual, until a ``stable'' counterfactual is found (i.e., one that passes our counterfactual stability test $R_{\\Phi}(x,M) \\geq \\tau$).\n\n\\item \\textbf{Experimental Demonstration:} Our experiments on real-world datasets, namely, German Credit~\\cite{UCI}, and HELOC~\\cite{Fico_web_2018}, demonstrate that the counterfactuals generated using RobX significantly improves the robustness of counterfactuals over SOTA techniques (nearly 100\\% validity after actual model changes). Furthermore, our counterfactuals also lie in the dense regions of the data manifold, thereby being realistic in terms of Local Outlier Factor (see Definition~\\ref{defn:lof}), a metric popularly used to quantify likeness to the data manifold. \n\\end{itemize}\n\n\n\n\n\n\n\n\\begin{rem}[Drastic Model Changes]\nOne might question that why should one want counterfactuals to necessarily remain valid after changes to the model. Shouldn't they instead vary with the model to reflect the changes to the model? E.g., economic changes might cause drastic changes in lending models (possibly due to major data distribution shifts). In such scenarios, one might in fact prefer counterfactuals for the old and new models to be different. Indeed, we agree that counterfactuals are not required to remain valid for very drastic changes to the model (see Figure~\\ref{fig:robustness_not_needed}; also see an impossibility result in Theorem~\\ref{thm:impossibility1}). However, this work focuses on small changes to the model, e.g., retraining on some data drawn from the same distribution, or minor changes to the hyperparameters, keeping the underlying data mostly similar. Such small changes to the model are in fact quite common in several applications and occur frequently in practice~\\cite{upadhyay2021towards,Hancox-Li_fat_2020,black2021consistent,barocas2020hidden}.\n\\end{rem}\n\n\n\\textbf{Related Works:} Counterfactual explanations have received significant attention in recent years (see \\cite{verma2020counterfactual,Karimi_arXiv_2020,wachter2017counterfactual,multiobjective,konig2021causal,albini2021counterfactual,kanamori2020dace,poyiadzi2020face,lucic2019focus,pawelczyk2020counterfactual,ley2022global,spooner2021counterfactual,sharma2019certifai} as well as the references therein). In \\cite{pawelczyk2020counterfactual,kanamori2020dace,poyiadzi2020face}, the authors argue that counterfactuals that lie on the data manifold are likely to be more robust than the closest counterfactuals, but the focus is more on generating counterfactuals that specifically lie on the data manifold (which may not always be sufficient for robustness). Despite researchers arguing that robustness is an important desideratum of local explanation methods~\\cite{Hancox-Li_fat_2020}, the problem of generating robust counterfactuals has been less explored, with the notable exceptions of some recent works \\cite{upadhyay2021towards,rawal2020can,black2021consistent}. In \\cite{upadhyay2021towards,black2021consistent}, the authors propose algorithms that aim to find the \\emph{closest} counterfactuals that are also robust (with demonstration on linear models and neural networks). In \\cite{rawal2020can}, the focus is on analytical trade-offs between validity and cost. We also refer to \\cite{Mishra_arXiv_2021} for a survey on the robustness of both feature-based attributions and counterfactuals. 
\n\nIn this work, our focus is on generating robust counterfactuals for tree-based ensembles. Tree-based ensembles pose additional challenges in robust counterfactual generation because they do not conform to standard assumptions for linear models and neural networks, e.g., they have a non-smooth, non-differentiable objective function. Furthermore, our performance metrics include both distance ($L_1$ or $L_2$ cost), and likeness to the data manifold (LOF). \n\nWe note that \\cite{alvarez2018robustness} proposes an alternate perspective of robustness in explanations called $L$-stability which is built on similar individuals receiving similar explanations. Instead, our focus is on explanations remaining valid after some changes to the model.\n\n\\begin{figure}\n\\centering\n\\includegraphics[height=4.2cm]{robustness_not_needed}\n\\includegraphics[height=4.2cm]{robustness_needed}\n\\caption{Scenarios distinguishing drastic and small model changes: (Left) Drastic model changes due to major distribution shifts; One may not want robustness of counterfactuals here. (Right) Small model changes due to retraining on very similar data or minor hyperparameter changes that occur frequently in practice. Robustness of counterfactuals is highly desirable here.}\n\\label{fig:robustness_not_needed}\n\\end{figure}\n\n\\section{Problem Setup}\nLet $\\mathcal{X} \\subseteq \\mathbb{R}^d$ denote the input space and let $\\mathcal{S}=\\{x_i\\}_{i=1}^N \\in \\mathcal{X}$ be a dataset consisting of $N$ independent and identically distributed points generated from a density $q$ over $\\mathcal{X}$.\nWe also let $M (\\cdot):\\mathbb{R}^d \\to [0,1]$ denote the original machine learning model (a tree-based ensemble, e.g., an XGBoost model) that takes an input value and produces an output probability lying between $0$ and $1$. The final decision is denoted by: $$D(x)=\\begin{cases} 1 \\text{ if $M(x)>0.5$,}\\\\\n0 \\text{ otherwise.}\\end{cases}$$\n\nSimilarly, we denote a changed model by $M_{new}(\\cdot):\\mathbb{R}^d \\to [0,1]$, and the decision of the changed model by: $$D_{new}(x)=\\begin{cases} 1 \\text{ if $M_{new}(x)>0.5$,}\\\\\n0 \\text{ otherwise.}\\end{cases}$$\n\n\nIn this work, we are mainly interested in tree-based ensembles~\\cite{chen2015xgboost}. A tree-based ensemble model is defined as follows:\n$M(x)=\\sum_{t=1}^T m^{(t)}(x)$ where each $m^{(t)}(x)$ is an independent tree with $L$ leaves, having weights $\\{w_1,\\ldots,w_L\\} \\in \\mathbb{R}$. A tree $m^{(t)}(x)$ maps a data point $x \\in \\mathbb{R}^d$ to one of the leaf indices (based on the tree structure), and produces an output $w_l \\in \\{w_1,\\ldots,w_L\\}$. One may use a \\texttt{sigmoid} function~\\cite{chen2015xgboost} for the final output to lie in $[0,1]$.\n\n\n\\subsection{Background on Counterfactuals}\n\nHere, we provide a brief background on counterfactuals.\n\n\\begin{defn}[Closest Counterfactual $\\mathcal{C}_{p}(x,M)$]\nGiven $x\\in \\mathbb{R}^d$ such that $M(x)\\leq 0.5$, its closest counterfactual (in terms of $L_p$-norm) with respect to the model $M(\\cdot)$ is defined as a point $x'\\in \\mathbb{R}^d$ that minimizes the $l_p$ norm $||x-x'||_p$ such that $M(x')>0.5$. \n\\begin{equation}\n\\mathcal{C}_{p}(x,M)=\\arg \\min_{x'\\in \\mathbb{R}^d} ||x-x'||_p \n\\text{ such that } M(x')>0.5. \\nonumber\n\\end{equation} \n\\end{defn}\n\nFor tree-based ensembles, some existing approaches to find the closest counterfactuals include~\\cite{tolomei2017interpretable,lucic2019focus}. 
When $p=1$, these counterfactuals are also referred to as ``sparse'' counterfactuals in existing literature~\\cite{pawelczyk2020counterfactual} because they attempt to find counterfactuals that can be attained by changing as few features as possible (enforcing a sparsity constraint). \n\nClosest counterfactuals have often been criticized in existing literature~\\cite{poyiadzi2020face,pawelczyk2020counterfactual,kanamori2020dace} as being too far from the data manifold, and thus being too unrealistic, and anomalous. This has led to several approaches for generating ``data-support'' counterfactuals that are lie on the data manifold, e.g., \\cite{kanamori2020dace,albini2021counterfactual,poyiadzi2020face}. Here, we choose one such definition of data-support counterfactual which is essentially the nearest neighbor with respect to the dataset $\\mathcal{S}$, that also gets accepted by the model~\\cite{albini2021counterfactual}.\n\n\\begin{defn}[Closest Data-Support Counterfactual $\\mathcal{C}_{p,\\mathcal{S}}(x,M)$]\n\\label{defn:data-support-CF}\nGiven $x\\in \\mathbb{R}^d$ such that $M(x)\\leq 0.5$, its closest data-support counterfactual $\\mathcal{C}_{p,\\mathcal{S}}(x,M)$ with respect to the model $M(\\cdot)$ and dataset $\\mathcal{S}$ is defined as a point $x'\\in \\mathcal{S}$ that minimizes the $l_p$ norm $||x-x'||_p$ such that $M(x')>0.5$.\n\\begin{equation}\n\\mathcal{C}_{p,\\mathcal{S}}(x,M)=\\arg \\min_{x'\\in \\mathcal{S}} ||x-x'||_p\n\\text{ such that }{M(x')>0.5}. \\nonumber\n\\end{equation} \n\\end{defn}\n\n\\begin{rem}[Metrics to Quantify Likeness to Data Manifold] In practice, instead of finding counterfactuals that lie exactly on the dataset, one may use alternate metrics that quantify how alike or anomalous is a point with respect to the dataset. One popular metric to quantify anomality that is also used in existing literature~\\cite{pawelczyk2020counterfactual,kanamori2020dace} on counterfactual explanations is Local Outlier Factor (see Definition~\\ref{defn:lof}; also see \\cite{breunig2000lof}).\n\\end{rem}\n\n\\begin{defn}[Local Outlier Factor (LOF)]\nFor $x \\in \\mathcal{S}$, let $N_k(x)$ be its $k$-nearest neighbors (k-NN) in $\\mathcal{S}$. The $k$-reachability distance $rd_k$ of $x$ with respect to $x'$\nis defined by $rd_k(x, x')= \\max\\{\\Delta(x, x'), d_k(x')\\}$, where $d_k(x')$\nis the distance $\\Delta$ between $x'$ and its the $k$-th nearest instance\non $\\mathcal{S}$. The $k$-local reachability density of $x$ is defined by\n$lrd_k(x) = |N_k(x)| (\n\\sum_{x' \\in N_k(x)} rd_k(x, x'))^{-1}.$ Then, the\nk-LOF of $x$ on $\\mathcal{S}$ is defined as follows:\n$$q_k(x | \\mathcal{S}) = \\frac{1}{|N_k(x)|}\n\\sum_{x' \\in N_k(x)}\n\\frac{lrd_k(x')}{lrd_k(x)}\n.$$ Here, $\\Delta(x, x')$ is the distance between two $d$-dimensional feature vectors.\n\\label{defn:lof}\n\\end{defn}\n\nIn this work, we use an existing implementation of computing LOF from \\texttt{scikit}~\\cite{scikit-lof} that predicts $-1$ if the point is anomalous, and $+1$ for inliers. 
So, in this work, a high average LOF essentially suggests the points lie on the data manifold, and are more realistic, i.e., \\emph{higher is better}.\n\nNext, we introduce our goals.\n\\subsection{Goals}\n\nGiven a data point $x \\in \\mathcal{X}$ such that $M(x)\\leq 0.5$, our goal is to find a counterfactual $x'$ with $M(x')>0.5$ that meets our requirements:\n\\begin{itemize}[leftmargin=*, itemsep=0pt, topsep=0pt]\n\\item Close in terms of $L_p$ cost: The point $x'$ is close to $x$, i.e., $||x-x'||_p$ is as low as possible.\n\\item Robust: The point $x'$ remains valid after changes to the model, i.e., $M_{new}(x')>0.5$.\n\\item Realistic: The point $x'$ is as similar to the data manifold as possible, e.g., has a high LOF (higher is better).\n\\end{itemize}\n\n\n\n\n\n\n\\begin{rem}[Bookkeeping Past Counterfactuals] One possible solution for ensuring the robustness of counterfactuals under model changes could be to keep a record of past counterfactuals. Then, even if there are small changes to the model that can render those counterfactuals invalid, one might still want to accept them because they have been recommended in the past: Ouput $D(x) \\text{ if x is a past counterfactual}$ or $D_{new}(x)$ otherwise. However, this approach would require significant storage overhead. Furthermore, there would also be fairness concerns if two data points that are extremely close to each other are receiving the same decision, e.g., one is being accepted because it was a past counterfactual even though the new model rejects it, while the other point is being rejected. \n\\end{rem}\n\n\n\\section{Main Results}\n\\label{sec:main}\nIn this section, we first identify the desirable properties of counterfactuals in tree-based ensembles that make them more stable, i.e., less likely to be invalidated by small changes to the model. These properties then leads us to propose a novel metric -- that we call \\emph{Counterfactual Stability} -- that quantifies the robustness of a counterfactual with respect to possible changes to a model. This metric enables us to arrive at an algorithm for generating robust counterfactuals that can be applied over any base method.\n\n\\subsection{Desirable properties of counterfactuals in tree-based ensembles that make them more stable}\n\n\nIn this work, we are interested in finding counterfactuals that are robust to small changes to the model (recall Figure~\\ref{fig:robustness_not_needed}), e.g., retraining on some data from the same distribution, or minor changes to the hyperparameters. We note that if the model changes drastically, it might not make sense to expect that counterfactuals will remain valid, as demonstrated in the following impossibility result. \n\\begin{thm}[Impossibility Under Drastic Model Changes]\nGiven a tree-based ensemble model $M(\\cdot):\\mathbb{R}^d \\to [0,1]$, there always exists another tree-based ensemble model $M_{new}(\\cdot):\\mathbb{R}^d \\to [0,1]$ such that all counterfactuals to $M$ with respect to a dataset $\\mathcal{S}$ no longer remains valid.\n\\label{thm:impossibility1}\n\\end{thm}\n\n\n\n\nThus, we first need to make some \\emph{reasonable} assumptions on how the model changes during retraining, or rather, what kind of model changes are we most interested in.\n\n\nIn this work, we instead arrive at the following desirable properties of counterfactuals for tree-based ensembles that can make them more stable, i.e., less likely to be invalidated. 
Our first property is based on the fact that the output of a model $M(x) \\in [0,1]$ is expected to be higher if the model has more confidence in that prediction. \n\n\\begin{propty}\nFor any $x \\in \\mathbb{R}^d$, a higher value of $M(x)$ makes it less likely to be invalidated due to model changes.\n\\label{propty:high_confidence}\n\\end{propty}\n\n\n\n\n\nHowever, having high $M(x)$ may not be the only property to ensure robustness, particularly in tree-based ensembles. This is because \\emph{tree-based models do not have a smooth and continuous output function}. For instance, there may exist points $x \\in \\mathbb{R}^d$ with very high output value $M(x)$ but several points in its neighborhood have a low output value (not smooth). This issue is illustrated in Figure~\\ref{fig:property2}. There may be points with high $M(x)$ that are quite close to the decision boundary, and thus more vulnerable to being invalidated with model changes. \n\nAs a safeguard against such a possibility, we introduce our next desirable property.\n\n\\begin{propty}\nAn $x \\in \\mathbb{R}^d$ is less likely to be invalidated due to model changes if several points close to $x$ (denoted by $x'$) have a high value of $M(x')$.\n\\label{propty:high_confidence_mean}\n\\end{propty}\n\n\n\n\nWe also note that a counterfactual may be more likely to be invalidated if it lies in a highly variable region of the model output function $M(x)$. This is because the confidence of the model predictions in that region might be less reliable. This issue is illustrated in Figure~\\ref{fig:property3}. One resolution to capturing the variability of a model output is to examine its derivative. However, because \\emph{tree-based ensembles are not differentiable}, we instead examine the standard deviation of the model output around $x$ as a representative of its variability. \n\n\n\n\n\\begin{propty}\nAn $x \\in \\mathbb{R}^d$ is less likely to be invalidated due to model changes if the model output values around $x$ have low variability (standard deviation).\n\\label{propty:variability}\n\\end{propty}\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\centering\n{\\centering \\includegraphics[height=4.1cm]{property2}}\n\\caption{Counterfactual is close to the boundary. \\label{fig:property2}}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\centering\n{\\centering \\includegraphics[height=4.1cm]{property3}}\n\\caption{Counterfactual lies in a highly variable region.}\\label{fig:property3}\n\\end{subfigure}\n\\caption{Motivation for desirable properties.}\n\\end{figure}\n\n\n\\subsection{Proposed Quantification of Robustness to Possible Model Changes:\\\\ Counterfactual Stability}\n\nOur properties lead us to introduce a novel metric -- that we call counterfactual stability -- that attempts to quantify the robustness of a counterfactual $x \\in \\mathbb{R}^d$ to possible changes in the model (irrespective of whether $x$ is in the data manifold). 
\begin{defn}[Counterfactual Stability] The stability of a counterfactual $x\in \mathbb{R}^d$ is defined as follows: \begin{align}&R_{K,\sigma^2}(x,M)=\frac{1}{K}\sum_{x' \in N_x}M(x') - \sqrt{\frac{1}{K}\sum_{x' \in N_x}\left(M(x') - \frac{1}{K}\sum_{x' \in N_x}M(x')\right)^2}
\end{align}where $N_x$ is a set of $K$ points in $\mathbb{R}^d$ drawn from the distribution $\mathcal{N}(x,\sigma^2\mathrm{I}_{d})$, where $\mathrm{I}_{d}$ is the identity matrix.
\label{defn:stability}
\end{defn}

This metric of counterfactual stability is aligned with our desirable properties. Given a point $x\in \mathbb{R}^d$, it generates a set of $K$ points centered around $x$. The first term $\frac{1}{K}\sum_{x' \in N_x}M(x')$ is expected to be high if the model output value is high for $x$ (Property~\ref{propty:high_confidence}) as well as for several points close to $x$ (Property~\ref{propty:high_confidence_mean}). However, we note that the mean value of $M(x)$ around a point $x \in \mathbb{R}^d$ may not always capture the variability in that region. For instance, a combination of very high and very low values can also produce a reasonable mean value. Thus, we also incorporate a second term, i.e., the standard deviation $\sqrt{\frac{1}{K}\sum_{x' \in N_x}\left(M(x') - \frac{1}{K}\sum_{x' \in N_x}M(x')\right)^2}$, which captures the variability of the model output values in a region around $x$ (recall Property~\ref{propty:variability}).

We also note that the variability term (standard deviation) in Definition~\ref{defn:stability} is useful only together with the first term (mean). This is because even points on the other side of the decision boundary (i.e., $M(x')< 0.5$) can have high or low variance. We include the histograms of $M(x)$, $\frac{1}{K}\sum_{x' \in N_x}M(x')$, and $R_{K,\sigma^2}(x,M)$ in the Appendix for further insights.

Next, we discuss how our proposed metric can be used to test if a counterfactual is \emph{stable}.

\begin{defn}[Counterfactual Stability Test] \label{defn:stability_test} A counterfactual $x\in \mathbb{R}^d$ satisfies the counterfactual stability test if:
\begin{equation}R_{K,\sigma^2}(x,M)\geq \tau.
\end{equation}
\end{defn}

\begin{rem}[Discussion on Data Manifold]
\label{rem:data_manifold}Our definition of counterfactual stability holds for all points $x\in \mathbb{R}^d$ and is not necessarily restricted to points that lie on the data manifold, e.g., $x \in \mathcal{S}$. This is because there might be points or regions outside the data manifold that could also be robust to model changes. E.g., assume a loan applicant who is exceptionally good at literally everything. Such an applicant might not lie on the data manifold, but it is expected that most models would accept such a data point even after retraining. We note, however, that recent work~\cite{pawelczyk2020counterfactual} demonstrates that data-support counterfactuals are more robust than sparse counterfactuals, an aspect that we discuss further in Section~\ref{subsec:robustness_guarantee} and which also motivates our definition of conservative counterfactuals.
\end{rem}

\subsection{Concept of Conservative Counterfactuals}
\label{subsec:conservative_counterfactuals}
Here, we introduce the concept of \emph{Conservative Counterfactuals}, which allows us to use our counterfactual stability test to generate stable counterfactuals from the dataset.

\begin{defn}[Conservative Counterfactual $\mathcal{C}^{(\tau)}_{p,\mathcal{S}}(x,M)$] Given a data point $x \in \mathcal{S}$ such that $M(x)\leq 0.5$, a conservative counterfactual $\mathcal{C}^{(\tau)}_{p,\mathcal{S}}(x,M)$ is defined as a data point $x' \in \mathcal{S}$ with $M(x')>0.5$ and $R_{K,\sigma^2}(x',M)\geq\tau$ that minimizes the $L_p$ norm $||x-x'||_p$, i.e.,
\begin{align}
&\mathcal{C}^{(\tau)}_{p,\mathcal{S}}(x,M)=\arg \min_{x'\in \mathcal{S}} ||x-x'||_p \nonumber \\
&\text{such that } M(x')>0.5 \text{ and } R_{K,\sigma^2}(x',M)\geq \tau.
\end{align}
\end{defn}

\begin{rem}[Existence] A higher $\tau$ leads to better robustness. However, a conservative counterfactual may or may not exist depending on how high the threshold $\tau$ is. When $\tau$ is very low, the conservative counterfactuals become the closest data-support counterfactuals.
\end{rem}

\begin{figure*}
 \centering
 \includegraphics[height=4cm]{gaussian_1.png}
 \includegraphics[height=4cm]{gaussian_2.png}
 \includegraphics[height=4cm]{gaussian_3.png}
 \caption{Thought experiment to understand how a conservative counterfactual is more robust than typical closest counterfactuals or closest-data-support counterfactuals: The data $\mathcal{S}$ is drawn from the following distribution: $p(x|y=1)\sim \mathcal{N}(\mu,\Sigma)$ and $p(x|y=0)\sim \mathcal{N}(-\mu,\Sigma)$. The first model denotes the original model $M(x)$, while the next two models denote possible models obtained after retraining on the same data with nearly identical accuracy (performance) on the given dataset. Given a rejected applicant, we have $A$, $B$, and $C$ as three possible counterfactuals. Counterfactual $A$ is the closest counterfactual: it may not lie on the data manifold. Counterfactual $B$ is the closest-data-support counterfactual, i.e., the nearest neighbor on the other side of the decision boundary. The second figure demonstrates that data-support counterfactuals are more robust than closest (or sparse) counterfactuals. However, lying on the data manifold is not always enough for robustness (third figure), e.g., $B$ happens to be quite close to the boundary. Here, $C$ is the conservative counterfactual that not only lies on the data manifold but also lies well within the decision boundary.}
 \label{fig:gaussian}
\end{figure*}

\subsection{Theoretical Robustness Guarantee of Conservative Counterfactuals}
\label{subsec:robustness_guarantee}
Here, we derive theoretical guarantees on the robustness of conservative counterfactuals (see Theorem~\ref{thm:guarantee} for our main result). Before stating our result, we introduce two assumptions on the randomness of the new model $M_{new}$.

\begin{assm}[Goodness of Metric] For any data point $x \in \mathbb{R}^d$, let $M_{new}(x)$ be a random variable taking different values due to model changes.
We assume that the expected value $E[M_{new}(x)]>R_{K,\sigma^2}(x,M)$.
\label{assm:1}
\end{assm}

\begin{assm}[Goodness of Data Manifold]
The standard deviation of $M_{new}(x)$ is $V_x$, which depends on $x$. When $x \in \mathcal{S}$, we have $V_x \leq V$ for a small constant $V$.
\label{assm:2}
\end{assm}

The rationale for Assumption~\ref{assm:2} is built on evidence from recent work~\cite{pawelczyk2020counterfactual} that demonstrates that data-support counterfactuals are more robust than closest or sparsest counterfactuals. When a model is retrained on the same or similar data, the decisions of the model are less likely to change for points that lie on the data manifold as compared to points that may not lie on the data manifold (illustrated in Figure~\ref{fig:gaussian}).

\begin{rem}[One-Way Implication]
While we assume that the new model outputs for points in the dataset $\mathcal{S}$ have low standard deviation, we do not necessarily assume that points outside the dataset $\mathcal{S}$ would always have high standard deviation. This is because there can potentially be regions outside the data manifold that also have low $V_x$, and are also robust to model changes (recall Remark~\ref{rem:data_manifold}).
\end{rem}

One popular assumption in the existing literature for quantifying small model changes is that the model changes are bounded in the parameter space, i.e., $|\text{Parameters}(M)-\text{Parameters}(M_{new})|\leq \Delta$, where $\text{Parameters}(M)$ denotes the parameters of the model $M$, e.g., the weights of a neural network. However, this might not be a good assumption for tree-based ensembles. This is because tree-based ensembles can often change a lot in the parameter space while actually causing very little difference with respect to the actual decisions on the dataset $\mathcal{S}$ (see Figure~\ref{fig:gaussian}).

Closely connected to model change is the idea of Rashomon models~\cite{pawelczyk2020counterfactual,marx2020predictive}, which suggests that there can be models that are very different from each other but have nearly identical performance on the same data, e.g., $\sum_{x \in \mathcal{S}} |D(x)-D_{new}(x)|\leq \Delta$. Thus, Assumption~\ref{assm:2} might be better suited for tree-based ensembles than boundedness in the parameter space.

Now, we provide our main result: a robustness guarantee on conservative counterfactuals based on these assumptions.

\begin{thm}[Robustness Guarantee for Conservative Counterfactuals] Suppose Assumptions~\ref{assm:1} and \ref{assm:2} hold, and $\tau>0.5$. Then, for any conservative counterfactual $x' \in \mathcal{C}^{(\tau)}_{p,\mathcal{S}}(x,M)$, the following holds:
\begin{equation}
 \Pr(M_{new}(x')< 0.5) \leq \frac{V^2}{V^2 + (\tau-0.5)^2}.
\end{equation}
\label{thm:guarantee}
\end{thm}

The result essentially says that the probability of invalidation by the new model ($\Pr(M_{new}(x')< 0.5)$) is strictly upper-bounded for conservative counterfactuals. A smaller variability $V$ makes this bound smaller.

The conservative counterfactuals (henceforth denoted by CCF) already serve as good candidates for robust counterfactuals. They are also expected to be realistic with high LOF because they lie in the dataset $\mathcal{S}$. However, because the search is restricted to the dataset $\mathcal{S}$, they may not always be optimal in terms of the distance between the original data point and its counterfactual (not so close).
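Both the stability test and the conservative counterfactual search rely only on the metric of Definition~\ref{defn:stability}, which is straightforward to estimate by sampling. A minimal sketch (ours) follows, where \texttt{predict\_fn} is an assumed vectorized model interface returning outputs $M(\cdot)\in[0,1]$:

\begin{verbatim}
# R_{K,sigma^2}(x, M): mean minus standard deviation of model outputs
# over K Gaussian perturbations of x (Definition 1).
import numpy as np

def counterfactual_stability(x, predict_fn, K=1000, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    N_x = x + sigma * rng.standard_normal((K, x.shape[0]))  # N(x, s^2 I)
    out = predict_fn(N_x)           # model outputs M(x') in [0, 1]
    return out.mean() - out.std()   # np.std uses 1/K, as in the definition

def passes_stability_test(x, predict_fn, tau, **kw):
    return counterfactual_stability(x, predict_fn, **kw) >= tau
\end{verbatim}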
This leads us to propose a novel algorithm that leverages conservative counterfactuals (CCF) and the counterfactual stability test to find robust counterfactuals that meet all our requirements (close, robust, realistic).
\subsection{Proposed Algorithm to Generate Robust Counterfactuals in Practice: RobX}

In this section, we discuss our proposed algorithm -- that we call RobX -- which generates robust counterfactuals that meet our requirements (see Algorithm~\ref{alg:example}).

Our proposed algorithm RobX can be applied on top of any preferred base method of counterfactual generation, irrespective of whether the counterfactual lies in the dataset $\mathcal{S}$. RobX checks if the generated counterfactual satisfies the counterfactual \emph{stability test} (recall Definition~\ref{defn:stability_test}): if the test is not satisfied, the algorithm iteratively refines the obtained counterfactual, moving it towards a conservative counterfactual until a \emph{stable} counterfactual is found that satisfies the test.

One might wonder if moving a counterfactual towards the conservative counterfactual can cause it to pass through undesired regions of the model output where $M(x)<0.5$, thus making it more vulnerable to invalidation. We note that, while this concern is reasonable, the counterfactual \emph{stability test} at each step ensures that such points are not selected. We further address this concern as follows: (i) consider a diverse set of conservative counterfactuals (e.g., the first $c$ nearest neighbors that satisfy the stability test, where $c>1$); (ii) iteratively move towards each one of them until a \emph{stable} counterfactual is found for all $c$ cases; (iii) pick the best of these $c$ \emph{stable} counterfactuals, e.g., the one with the lowest $L_1$ or $L_2$ cost, as desired.
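A compact sketch (ours) of this refinement procedure, reusing \texttt{passes\_stability\_test} from the earlier sketch; the step size $\alpha$ and the iteration cap are illustrative choices:

\begin{verbatim}
# RobX refinement: move the candidate toward a conservative
# counterfactual until the stability test passes; keep the cheapest.
import numpy as np

def refine(x_cf, ccf, predict_fn, tau, alpha=0.1, max_steps=500, **kw):
    x_i = np.asarray(x_cf, dtype=float).copy()
    for _ in range(max_steps):
        if passes_stability_test(x_i, predict_fn, tau, **kw):
            return x_i
        x_i = alpha * ccf + (1 - alpha) * x_i   # step toward the CCF
    return np.asarray(ccf, dtype=float)  # CCF passes by construction

def robx(x, x_cf, ccfs, predict_fn, tau, p=1, **kw):
    cands = [refine(x_cf, c, predict_fn, tau, **kw) for c in ccfs]
    costs = [np.linalg.norm(x - c, ord=p) for c in cands]
    return cands[int(np.argmin(costs))]  # lowest L_p cost
\end{verbatim}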
We also observe that this approach of moving a counterfactual towards a conservative counterfactual improves its LOF, making it more realistic.

\begin{algorithm}[t]
 \caption{RobX: Generating Robust Counterfactual Explanations for Tree-Based Ensembles}
 \label{alg:example}
\begin{algorithmic}
 \STATE {\bfseries Input:} Model $M(\cdot)$, Dataset $\mathcal{S}$, Datapoint $x$ such that $M(x) \leq 0.5$, Algorithm parameters $(p, K, \sigma^2, \tau, \alpha, c)$
 \STATE{Step 1: Generate counterfactual $x'$ for $x$ using any existing technique for tree-based ensembles}
 \STATE{Step 2: Perform counterfactual stability test on $x'$: Check if $R_{K,\sigma^2}(x',M)\geq \tau$ where $N_{x'}$ is a set of $K$ points drawn from the distribution $\mathcal{N}(x',\sigma^2\mathrm{I}_{d})$.}
 \IF{counterfactual stability test is satisfied:}
 \STATE{Output $x'$ and exit}
 \ELSE
 \STATE{Generate $c$ conservative counterfactuals $\{x_1,\ldots,x_c\}$ which are the $c$ nearest neighbors of $x'$ in the dataset $\mathcal{S}$ that pass the stability test: $R_{K,\sigma^2}(x_i,M)\geq \tau$}
 \STATE{Initialize placeholders for $c$ counterfactuals $\{x'_1,\ldots,x'_c\}$ with each $x'_i=x'$}
 \FOR {$i=1 \text{ to } c$}
 \REPEAT
 \STATE{Update: $x'_i=\alpha x_i + (1-\alpha)x'_i$}
 \STATE{Perform counterfactual stability test on $x'_i$:\\ \hspace{1cm} $R_{K,\sigma^2}(x'_i,M)\geq \tau$}
 \UNTIL counterfactual stability test on $x'_i$ is satisfied
 \ENDFOR
 \ENDIF
 \STATE{Output $x^*=\arg \min_{x'_i \in \{x'_1,x'_2,\ldots,x'_c\}} ||x-x'_i||_p$ and exit}
\end{algorithmic}
\end{algorithm}

\section{Experiments}

Here, we present our experimental results on benchmark datasets, namely, German Credit~\cite{UCI} and HELOC~\cite{Fico_web_2018}.

For simplicity, we normalize the features to lie in $[0,1]$. We consider XGBoost models after selecting hyperparameters from a grid search (details in Appendix). For each of these datasets, we set aside 30\% of the dataset for testing, and use the remaining 70\% for training (in different configurations as discussed here).

We consider the following types of model change scenarios:
\begin{itemize}[leftmargin=*, topsep=0pt, itemsep=0pt]
 \item Minor changes: (i) Train a model on the training dataset and retrain new models after dropping very few data points ($1$ for German Credit, $10$ for HELOC), keeping hyperparameters constant. (ii) Train a model on the training dataset and retrain new models, changing one hyperparameter, e.g., \texttt{max\_depth} or \texttt{n\_estimators}. The results for this configuration are in the Appendix.
 \item Moderate changes: Train a model on half of the training dataset and retrain new models on the other half, keeping hyperparameters mostly constant, varying either \texttt{max\_depth} or \texttt{n\_estimators}. The results for this configuration are in Table~\ref{table:performance}.
\end{itemize}

\begin{table}[t]
\caption{Performance on the HELOC and German Credit datasets.}
\label{table:performance}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}
\toprule
\textbf{HELOC}& \multicolumn{3}{c|}{$L_1$ Based} & \multicolumn{3}{c}{$L_2$ Based}\\
\midrule
Method & Cost & Val.
& LOF\\\\\n\\midrule\nCCF & 1.89 & 100\\% & 0.81 & 0.65& 99.9\\%& 0.75\\\\\n\\midrule\nFT & 0.19 & 18.7\\%& 0.40 & 0.16 & 15.6\\%& 0.48\\\\\n+RobX & 1.55 & 100\\% & 0.92 & 0.55 & 99.9\\% & 0.84 \\\\\n\\midrule\nFOCUS & 0.21 & 29.5\\% & 0.36 & 0.17 & 33.0\\% & 0.63\\\\\n+RobX & 1.52& 100\\% & 0.91 & 0.61& 99.8\\% & 0.72\\\\\n\\midrule\nFACE & 2.86 & 89.4\\%& 0.68 & 1.19 & 97.3\\% & 0.50 \\\\\n+RobX & 2.30 & 100\\% & 0.78 & 0.95& 100\\% & 0.65\\\\\n\\midrule\nNN & 0.96 & 35.1\\% & 0.81 & 0.34 & 39.0\\%& 0.69 \\\\\n+RobX & 1.61 & 100\\% & 0.93 & 0.56 & 100\\% & 0.85\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}\n\\toprule\n\\textbf{German}& \\multicolumn{3}{c|}{$L_1$ Based} & \\multicolumn{3}{c}{$L_2$ Based}\\\\\n\\midrule\nMethod & Cost & Val. & LOF & Cost & Val. & LOF\\\\\n\\midrule\nCCF & 2.92 & 100\\% & 0.85 & 1.21 & 100\\% & 0.94\\\\\n\\midrule\nFT & 0.13 & 55.7\\% & 0.93 & 0.11 & 59.2\\% & 0.94 \\\\\n+RobX & 2.17 & 92.6\\% & 1.0 & 0.95 & 91.1\\% & 0.94 \\\\\n\\midrule\nFOCUS & 0.37 & 65.7\\% & 0.93 & 0.24 & 65.3\\% & 0.93\\\\\n+RobX & 2.18 & 96.5\\% & 1.0 & 1.05 & 100\\% & 1.0\\\\\n\\midrule\nFACE & 2.65 & 84.5\\% & 0.57 & 1.30 & 87.6\\% & 0.76 \\\\\n+RobX & 2.29 & 97.1\\% & 1.0 & 1.05 & 96.1\\% & 0.94\\\\\n\\midrule\nNN & 0.76 & 65.9\\% & 1.0 & 0.48 & 60.7\\% & 1.0\\\\\n+RobX & 2.21 & 97.7\\% & 1.0 & 0.97 & 91.7\\% & 0.93 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\nFor each case, we first generate counterfactuals for the original model using the following base methods:\n\\begin{itemize}[leftmargin=*, topsep=0pt, itemsep=0pt]\n \\item Feature Tweaking (FT)~\\cite{tolomei2017interpretable} is a popular counterfactual generation technique for tree-based ensembles that finds ``closest'' counterfactuals ($L_1$ or $L_2$ cost), not necessarily on the data manifold. The algorithm searches for all possible paths (tweaks) in each tree that can change the final outcome of the model.\n \\item FOCUS~\\cite{lucic2019focus} is another popular technique that approximates the tree-based models with \\texttt{sigmoid} functions, and finds closest counterfactuals (not necessarily on the data manifold) by solving an optimization.\n \\item FACE~\\cite{poyiadzi2020face} attempts to find counterfactuals that are not only close ($L_1$ or $L_2$ cost), but also (i) lie on the data manifold; and (ii) are connected to the original data point via a path on a connectivity graph on the dataset $\\mathcal{S}$. Such a graph is generated from the given dataset $\\mathcal{S}$ by connecting every two points that are reasonably close to each other, so that one can be ``attained'' from the other. \n \\item Nearest Neighbor (NN)~\\cite{albini2021counterfactual} attempts to find counterfactuals that are essentially the nearest neighbors ($L_1$ or $L_2$ cost) to the original data points with respect to the dataset $\\mathcal{S}$ that lie on the other side of the decision boundary (recall Definition~\\ref{defn:data-support-CF}).\n\\end{itemize}\n\nWe compare these base methods with: (i) Our proposed Conservative Counterfactuals (CCF) approach; and (ii) Our proposed RobX applied on top of these base methods.\n\n\n\n\n\n\n\n\\begin{rem}\nWe note that there are several techniques for generating counterfactual explanations (see \\cite{verma2020counterfactual} for a survey); however only some of them apply to tree-based models. 
Several techniques are also broadly similar to each other in spirit. We believe our choice of these four base methods to be a quite diverse representation of the existing approaches, namely, search-based closest counterfactual (FT), optimization-based closest counterfactual (FOCUS), graph-based data-support counterfactual (FACE), and closest-data-support counterfactual (NN). We note that an alternative perspective is a causal approach~\cite{konig2021causal}, which often requires knowledge of the causal structure and is outside the scope of this work.
\end{rem}

Our empirical performance metrics of interest are:
\begin{itemize}[leftmargin=*, topsep=0pt, itemsep=0pt]
 \item \textbf{Cost ($L_1$ or $L_2$):} Average distance ($L_1$ or $L_2$) between the original point and its counterfactual.
 \item \textbf{Validity (\%):} Percentage of counterfactuals that still remain counterfactuals under the new model $M_{new}$.
 \item \textbf{LOF}: See Definition~\ref{defn:lof}; implemented using \cite{scikit-lof} (+1 for inliers, -1 otherwise). A higher average is better.
\end{itemize}

\textbf{Hyperparameters:} For the choice of $K$ and $\sigma$, we refer to some guidelines in the adversarial machine learning literature. Our metric of stability is loosely inspired by certifiable robustness in the adversarial machine learning literature~\cite{cohen2019certified,raghunathan2018certified}, which uses the metric $\frac{1}{K}\sum_{x' \in N_x}I(M(x')>0.5)$. Here $I(.)$ is the indicator function. Our metric for counterfactual stability (in Definition~\ref{defn:stability}) has some key differences: (i) no indicator function; (ii) we leverage the standard deviation along with the mean. Because the feature values are normalized, a fixed choice of $K=1000$ and $\sigma=0.1$ is used for all our experiments.

The choice of the threshold $\tau$, however, is quite critical, and depends on the dataset. As we increase $\tau$ for conservative counterfactuals, the validity improves but the $L_1$/$L_2$ cost also increases, until the validity almost saturates. If we increase $\tau$ beyond that, no conservative counterfactuals are found. In practice, one can examine the histogram of $R_{K,\sigma^2}(x,M)$ for $x \in \mathcal{S}$, and choose an appropriate quantile for that dataset as $\tau$ so that a reasonable fraction of points in $\mathcal{S}$ qualify to be conservative counterfactuals. But, \emph{the same quantile may not suffice for $\tau$ across different datasets.} One could also perform the following steps: (i) choose a small validation set; (ii) keep increasing $\tau$ from $0.5$ for CCF and plot the validity and $L_1$/$L_2$ cost; (iii) select a $\tau$ beyond which validity does not improve much and the $L_1$/$L_2$ cost is acceptable.

Next, we include the experimental results for moderate changes to the model in Table~\ref{table:performance} for both the HELOC and German Credit datasets. Additional results are provided in the Appendix.

\begin{table}
\caption{Performance of FOCUS on the model with a higher threshold, i.e., $M(x)> \gamma$, on the HELOC and German Credit datasets.}
\label{table:threshold_focus}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}
\toprule
\textbf{HELOC}& \multicolumn{3}{c|}{$L_1$ Based} & \multicolumn{3}{c}{$L_2$ Based}\\
\midrule
Method & Cost & Val.
& LOF\\
\midrule
$\gamma{=}0.5$ & 0.21 & 29.5\% & 0.36 & 0.17 & 33.0\% & 0.63 \\
+RobX & 1.52 & 100\% & 0.91 & 0.55 & 99.9\% & 0.84\\
\midrule
$\gamma{=}0.7$ & 0.59 & 92.2\% & -0.01 & 0.34 & 98.8\% & 0.30\\
 +RobX & 1.38 & 99.9\% & 0.70 & 0.60 & 99.8\% & 0.70 \\
\midrule
$\gamma{=}0.75$ & 1.13 & 98.9\% & -0.32 & 0.44 & 99.9\% & 0.09\\
 +RobX & 1.44 & 100\% & 0.06 & 0.55 & 99.9\% & 0.51\\
\midrule
$\gamma{=}0.8$ & 2.11 & 100\% & -0.70 & 0.60 & 100\% & -0.20\\
+RobX & 2.14 & 100\% & -0.66 & 0.62 & 100\% & -0.08\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\begin{small}
\begin{sc}
\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}
\toprule
\textbf{German} & \multicolumn{3}{c|}{$L_1$ Based} & \multicolumn{3}{c}{$L_2$ Based}\\
\midrule
Method & Cost & Val. & LOF & Cost & Val. & LOF\\
\midrule
$\gamma{=}0.5$ & 0.37 & 65.7\% & 0.93 & 0.24 & 65.3\% & 0.93 \\
 +RobX & 2.18 & 96.5\% & 1.0 & 1.05 & 100\% & 1.0 \\
\midrule
$\gamma{=}0.8$ & 0.74 & 83.9\% & 0.87 & 0.41 & 92.1\% & 0.75\\
 +RobX & 2.20 & 98.0\% & 1.0 & 1.05 & 100\% & 1.0\\
\midrule
$\gamma{=}0.99$ & 2.57 & 97.7\% & -0.45 & 1.05 & 98.5\% & -0.33\\
 +RobX & 2.85 & 100\% & 0.03 & 1.19 & 100\% & -0.27\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

\textbf{Observations:} The average cost ($L_1$ or $L_2$) between the original data point and the counterfactual increases only slightly for base methods such as FT, FOCUS, and NN (which find counterfactuals by explicitly minimizing this cost); however, our counterfactuals are significantly more robust (in terms of validity) and realistic (in terms of LOF). Interestingly, for FACE (which finds counterfactuals on the data manifold that are connected via a path), our strategy is able to improve both robustness (validity) and cost ($L_1$ or $L_2$), with comparable LOF.

Another competing approach that we consider in this work (and that has not been considered before) is to find counterfactuals using base methods but setting a higher threshold value for the model, i.e., $M(x) > \gamma$ where $\gamma$ is greater than $0.5$. Interestingly, we observe that this simple modification can also sometimes generate significantly more robust counterfactuals; however, this approach has several disadvantages: (i) It generates counterfactuals that are quite unrealistic, and thus have very poor LOF. (ii) The algorithm takes significantly longer to find counterfactuals as the threshold $\gamma$ is increased, and sometimes even returns a \texttt{nan} value because no counterfactual is found, e.g., if $\gamma=0.9$ and the model output $M(x)$ rarely takes such a high value (because the output of tree-based ensembles takes values in a discrete set). Because of these disadvantages, we believe this technique might not be preferable as a standalone method; however, it can be used as an alternate base method over which our technique might be applied when cost ($L_1$ or $L_2$) is a higher priority than LOF (see Table~\ref{table:threshold_focus}).

\textbf{Discussion and Future Work:} This work addresses the problem of finding robust counterfactuals for tree-based ensembles. It provides a novel metric to compute the stability of a counterfactual that can be representative of its robustness to possible model changes, as well as a novel algorithm to find robust counterfactuals.
Though not exactly comparable, our cost and validity are in the same ballpark as those observed for these datasets in existing works~\cite{upadhyay2021towards,black2021consistent}, which focus on robust counterfactuals for linear models or neural networks (differentiable models). Our future work would include: (i) extending to causal approaches~\cite{konig2021causal}; and (ii) accounting for immutability or differences among features, e.g., some features being more variable than others.

\paragraph{Disclaimer}
This paper was prepared for informational purposes by
the Artificial Intelligence Research group of JPMorgan Chase \& Co. and its affiliates (``JP Morgan''),
and is not a product of the Research Department of JP Morgan.
JP Morgan makes no representation and warranty whatsoever and disclaims all liability,
for the completeness, accuracy or reliability of the information contained herein.
This document is not intended as investment research or investment advice, or a recommendation,
offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service,
or to be used in any way for evaluating the merits of participating in any transaction,
and shall not constitute a solicitation under any jurisdiction or to any person,
if such solicitation under such jurisdiction or to such person would be unlawful.

\section*{Acknowledgements} We thank Emanuele Albini and Dan Ley for useful discussions.

\small{

\section{Proof of Theorem 1}

The proof follows by demonstrating that another tree-based ensemble model exists that does not accept the generated set of counterfactuals. Let us denote this set as $\mathcal{CF}$.

There are various ways to construct such a model. A simple way could be to choose an identical tree structure but with the predictions flipped, i.e., $M_{new}(x)=1-M(x)$. This can be designed by altering the weights of the leaves.

For an $x \in \mathcal{CF}$ with $M(x)>0.5$, we would then have $M_{new}(x)=1-M(x)< 0.5$, so that no counterfactual in $\mathcal{CF}$ remains valid.

\section{Proof of Theorem 2}

The proof follows from Cantelli's inequality. The inequality states that, for $\lambda >0$,
$$ \Pr(Z-\mathbb{E}[Z]\leq -\lambda )\leq {\frac {V_Z^{2}}{V_Z ^{2}+\lambda ^{2}}},$$
where $Z$ is a real-valued random variable,
$\Pr$ is the probability measure,
$\mathbb{E}[Z]$ is the expected value of $Z$, and
$V_Z^{2}$ is the variance of $Z$.

Here, let $Z=M_{new}(x')$ be a random variable that takes different values for different models $M_{new}$. Let $$\lambda=\mathbb{E}[Z]-0.5 > R_{K,\sigma^2}(x',M)-0.5 \geq \tau-0.5 > 0,$$
where the first inequality uses Assumption 1 and the second uses $R_{K,\sigma^2}(x',M)\geq \tau$ for conservative counterfactuals.

Then, we have:
\begin{align}
 \Pr(Z\leq 0.5 )
 &= \Pr(Z -\mathbb{E}[Z] \leq 0.5-\mathbb{E}[Z] ) \nonumber \\
 & = \Pr(Z -\mathbb{E}[Z] \leq -\lambda ) \text{ where } \lambda=\mathbb{E}[Z]-0.5 \nonumber \\
 & \overset{(a)}{\leq} \frac{V_Z^2}{V_Z^2+\lambda^2} \overset{(b)}{\leq} \frac{V_Z^2}{V_Z^2+(\tau-0.5)^2} \overset{(c)}{\leq} \frac{V^2}{V^2+(\tau-0.5)^2}.
\end{align}
Here, (a) holds from Cantelli's inequality, (b) holds because $\lambda> \tau-0.5$ (from the conditions of the theorem), and (c) holds because the variance of $Z$ is bounded by $V^2$ from Assumption 2.
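As a quick numerical illustration of this bound (our addition, not part of the original proof):

\begin{verbatim}
# Theorem 2: Pr(M_new(x') < 0.5) <= V^2 / (V^2 + (tau - 0.5)^2).
def invalidation_bound(V, tau):
    assert tau > 0.5
    return V**2 / (V**2 + (tau - 0.5)**2)

print(invalidation_bound(0.1, 0.7))  # 0.01 / (0.01 + 0.04) = 0.2
\end{verbatim}

For instance, $V=0.1$ and $\tau=0.7$ already cap the invalidation probability at $20\%$, and the bound shrinks rapidly as $V$ decreases or $\tau$ increases.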
\section{Additional Details and Experiments}

Here, we include further details related to our experiments, as well as some additional results.

\subsection{Datasets}

\begin{itemize}[topsep=0pt, itemsep=0pt, leftmargin=*]
 \item HELOC~\cite{Fico_web_2018}: This dataset has $10K$ data points, each with $23$ finance-related features. We drop the features \texttt{MSinceMostRecentDelq}, \texttt{MSinceMostRecentInqexcl7days}, and \texttt{NetFractionInstallBurden}. We also drop the data points with missing values. Our pre-processed dataset has $n=8291$ data points with $d=20$ features each.
 \item German Credit~\cite{UCI}: This dataset has $1000$ data points, each with $20$ features. The features are a mix of numerical and categorical features. We only use the following $10$ features: \texttt{existingchecking}, \texttt{credithistory}, \texttt{creditamount}, \texttt{savings}, \texttt{employmentsince}, \texttt{otherdebtors}, \texttt{property}, \texttt{housing}, \texttt{existingcredits}, and \texttt{job}. Among these features, we convert the categorical features into appropriate numeric values. E.g., \texttt{existingchecking} originally has four categorical values: A11 if ... < 0 DM, A12 if 0 <= ... < 200 DM, A13 if ... >= 200 DM, and A14 if no checking account. We convert them into numerical values as follows: $0$ for A14, $1$ for A11, $2$ for A12, and $3$ for A13. Our pre-processed dataset has $n=1000$ data points with $d=10$ features each.
\end{itemize}

All features are normalized to lie in $[0,1]$.

\subsection{Experimental Results Under Minor Changes to the Model}

For each of the datasets, we perform a $30/70$ test-train split. We train an XGBoost model after tuning the hyperparameters using the \texttt{hyperopt} package.

The observed accuracies for the two datasets are: 74\% (HELOC) and 73\% (German Credit).

For RobX, we choose $K=1000$, $\sigma=0.1$, and $\tau$ is chosen based on the histogram of $R_{K,\sigma^2}(x,M)$ for each dataset. For HELOC, $\tau=0.65$, and for German Credit, $\tau=0.93$.

We first present our experimental results after minor changes to the model.\\

(i) Train a model ($M(x)$) on the training dataset and retrain new models ($M_{new}(x)$) after dropping a few data points ($1$ for German Credit, $10$ for HELOC), keeping hyperparameters fairly constant.
We experiment with $20$ different new models and report the average values in Tables~\\ref{table:heloc_minor} and \\ref{table:german_minor}.\n\n\n\n\\begin{table}[!htbp]\n\\caption{Performance on HELOC dataset minimizing for $L_1$ and $L_2$ cost.}\n\\label{table:heloc_minor}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_1$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 1.83 & 100\\% & 0.89\\\\\n\\midrule\nFT & 0.19 & 71.1\\%& 0.25 \\\\\nFT +RobX & 1.51 & 100\\% & 0.96\\\\\n\\midrule\nFOCUS & 0.22 & 77.8\\% & 0.26 \\\\\nFOCUS +RobX & 1.49 & 100\\% & 0.94 \\\\\n\\midrule\nFACE & 2.95 & 98.9\\%& 0.70 \\\\\nFACE +RobX & 2.26 & 100\\% & 0.91\\\\\n\\midrule\nNN & 1.01 & 85.3\\% & 0.75 \\\\\nNN +RobX & 1.56 & 100\\% & 0.96 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccr}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 0.62 & 100\\% & 0.83\\\\\n\\midrule\nFT & 0.16 & 68.3\\% & 0.40 \\\\\nFT +RobX & 0.54 & 100\\% & 0.93 \\\\\n\\midrule\nFOCUS & 0.16 & 57.1\\% & 0.57 \\\\\nFOCUS +RobX & 0.59 & 100\\% & 0.86\\\\\n\\midrule\nFACE & 1.20 & 99.6\\% & 0.58 \\\\\nFACE +RobX & 0.89 & 100\\% & 0.74 \\\\\n\\midrule\nNN & 0.35 & 84.0\\%& 0.75 \\\\\nNN +RobX & 0.55 & 100\\%& 0.95 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\\begin{table}[!htbp]\n\\caption{Performance on German Credit dataset minimizing for $L_1$ and $L_2$ cost.}\n\\label{table:german_minor}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_1$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 3.05 & 100\\% & 1.0\\\\\n\\midrule\nFT & 0.08 & 72.9\\%& 0.65 \\\\\nFT +RobX & 2.70 & 99.7\\% & 1.0\\\\\n\\midrule\nFOCUS & 0.12 & 72.8\\% & 0.71 \\\\\nFOCUS +RobX & 2.71 & 100\\% & 1.0\\\\\n\\midrule\nFACE & 2.67 & 92.5\\% & 0.94 \\\\\nFACE +RobX & 2.70 & 99.1\\% & 1.0\\\\\n\\midrule\nNN & 0.80 & 84.4\\%& 0.94 \\\\\nNN +RobX & 2.71 & 100\\% & 1.0\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccr}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 1.42 & 100\\%& 1.0\\\\\n\\midrule\nFT & 0.08 & 68.7\\%& 0.65 \\\\\nFT +RobX & 1.27 & 100\\% & 1.0 \\\\\n\\midrule\nFOCUS & 0.11 & 70.1\\% & 0.82 \\\\\nFOCUS +RobX & 1.32 & 100\\% & 1.0\\\\\n\\midrule\nFACE & 1.25 & 95.0\\% & 0.77 \\\\\nFACE +RobX & 1.28 & 100\\% & 1.0\\\\\n\\midrule\nNN & 0.49 & 79.1\\%& 0.88 \\\\\nNN +RobX & 1.30 & 100\\% & 1.0\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n(ii) Train a model $M(x)$ on the training dataset and retrain new models, changing one hyperparameter, e.g., \\texttt{max\\_depth} or \\texttt{n\\_estimators}. We experiment with 20 different new models and report the average values in Tables~\\ref{table:heloc_minor2} and \\ref{table:german_minor2}. 
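For reference, a condensed sketch (ours; model hyperparameters elided) of protocol (i), i.e., estimating validity over several models retrained after dropping a few points:

\begin{verbatim}
# Validity of fixed counterfactuals under retrained XGBoost models.
import numpy as np
from xgboost import XGBClassifier

def validity(cfs, X, y, n_models=20, n_drop=10, seed=0):
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(n_models):
        keep = rng.permutation(len(X))[:-n_drop]       # drop a few points
        m_new = XGBClassifier().fit(X[keep], y[keep])  # retrained model
        rates.append((m_new.predict_proba(cfs)[:, 1] > 0.5).mean())
    return float(np.mean(rates))  # average fraction still valid
\end{verbatim}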
\begin{table}[H]
\caption{Performance on HELOC dataset minimizing for $L_1$ and $L_2$ cost.}
\label{table:heloc_minor2}
\vskip 0.1in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_1$ Cost & Validity & LOF \\
\midrule
CCF & 1.83 & 98.9\% & 0.89\\
\midrule
FT & 0.19 & 49.9\% & 0.25 \\
FT +RobX & 1.51 & 99.4\% & 0.96\\
\midrule
FOCUS & 0.22 & 37.1\% & 0.26 \\
FOCUS +RobX & 1.49 & 99.2\% & 0.94 \\
\midrule
FACE & 2.67 & 89.7\% & 0.94 \\
FACE +RobX & 2.70 & 99.7\% & 1.0\\
\midrule
NN & 1.01 & 71.3\% & 0.75 \\
NN +RobX & 1.56 & 99.6\% & 0.96\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_2$ Cost & Validity & LOF \\
\midrule
CCF & 0.62 & 98.7\% & 0.82\\
\midrule
FT & 0.16 & 49.4\% & 0.40 \\
FT +RobX & 0.54 & 99.3\% & 0.92 \\
\midrule
FOCUS & 0.16 & 50.3\% & 0.57 \\
FOCUS +RobX & 0.59 & 99.9\% & 0.86\\
\midrule
FACE & 1.20 & 89.5\% & 0.59 \\
FACE +RobX & 0.89 & 99.9\% & 0.75\\
\midrule
NN & 0.35 & 71.0\% & 0.74 \\
NN +RobX & 0.55 & 99.6\% & 0.95\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\begin{table}[H]
\caption{Performance on German Credit dataset minimizing for $L_1$ and $L_2$ cost.}
\label{table:german_minor2}
\vskip 0.1in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_1$ Cost & Validity & LOF \\
\midrule
CCF & 3.05 & 99.9\% & 1.0\\
\midrule
FT & 0.08 & 56.4\% & 0.65 \\
FT +RobX & 2.70 & 99.9\% & 1.0 \\
\midrule
FOCUS & 0.12 & 53.7\% & 0.71 \\
FOCUS +RobX & 2.71 & 99.7\% & 1.0\\
\midrule
FACE & 2.62 & 88.8\% & 0.82 \\
FACE +RobX & 2.72 & 99.7\% & 1.0\\
\midrule
NN & 0.80 & 84.4\% & 0.94 \\
NN +RobX & 2.71 & 99.7\% & 1.0 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_2$ Cost & Validity & LOF \\
\midrule
CCF & 1.36 & 97.4\% & 1.0 \\
\midrule
FT & 0.08 & 53.4\% & 0.65 \\
FT +RobX & 1.17 & 98.6\% & 1.0 \\
\midrule
FOCUS & 0.11 & 53.2\% & 0.82 \\
FOCUS +RobX & 1.20 & 100\% & 1.0 \\
\midrule
FACE & 1.25 & 88.7\% & 0.77 \\
FACE +RobX & 1.18 & 98.4\% & 1.0\\
\midrule
NN & 0.49 & 79.0\% & 0.88 \\
NN +RobX & 1.18 & 99.0\% & 0.94\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

\subsection{Histograms}
Here, we include the following histograms for the HELOC dataset for further insights (see Figure~\ref{fig:ablation}):
\begin{enumerate}[itemsep=0pt, topsep=0pt, leftmargin=*]
\item Model outputs alone, i.e., $M(x)$.
\item Mean of the model outputs in a neighborhood, i.e., $\frac{1}{K}\sum_{x' \in N_x}M(x')$.
\item Our robustness metric, i.e., the mean of the model outputs in a neighborhood minus their standard deviation: $R_{K,\sigma^2}(x,M)=\frac{1}{K}\sum_{x' \in N_x}M(x')- \sqrt{\frac{1}{K}\sum_{x' \in N_x}\left(M(x') - \frac{1}{K}\sum_{x' \in N_x}M(x')\right)^2}$.
\end{enumerate}
\begin{figure}[!htbp]
\centering
\includegraphics[height=3cm]{heloc_complete_histogram.png}
\caption{Histograms to visualize the proposed robustness metric.
\label{fig:ablation}}
\end{figure}

\subsection{Experiments Under Major Changes to the Model}

These experimental results have already been included in the main paper in Section 4. Here we include some additional details. For each of the datasets, we first perform a $30/70$ test-train split, and set the test data aside.

On the training data, we again perform a $50/50$ split. We train the original model $M(x)$ on one of these splits, and the new model $M_{new}(x)$ on the other. For $M(x)$, we train an XGBoost model after tuning the hyperparameters using the \texttt{hyperopt} package. For $M_{new}(x)$, we keep the hyperparameters mostly constant, varying either \texttt{n\_estimators} or \texttt{max\_depth}.

The observed accuracies for the two datasets are: 73\% (HELOC) and 71\% (German Credit).

For RobX, we choose $K=1000$, $\sigma=0.1$, and $\tau$ is chosen based on the histogram of $R_{K,\sigma^2}(x,M)$ for each dataset. For HELOC, $\tau=0.65$, and for German Credit, $\tau=0.93$.
\subsection{Additional Experimental Results}

In our experiments so far, we normalize the features in the dataset to lie between $0$ and $1$ as is done in existing works (using \texttt{MinMaxScaler}). Here, we also include some additional experimental results using \texttt{StandardScaler} instead of \texttt{MinMaxScaler}. These are results for \textbf{moderate} changes to the model.

\begin{table}[!htbp]
\caption{Performance on HELOC dataset minimizing for $L_2$ cost.}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_2$ Cost & Validity & LOF \\
\midrule
CCF & 3.94 & 100.0\% & 0.96\\
\midrule
FT & 1.05 & 17.0\% & 0.57 \\
FT +RobX & 2.93 & 94.4\% & 0.95 \\
\midrule
FOCUS & 1.17 & 30.1\% & 0.69 \\
FOCUS +RobX & 2.94 & 97.4\% & 0.96\\
\midrule
FACE & 5.83 & 90.1\% & 0.74 \\
FACE +RobX & 4.67 & 100.0\% & 0.94 \\
\midrule
NN & 2.71 & 37.6\% & 0.87 \\
NN +RobX & 3.14 & 98.2\% & 0.98 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

\begin{table}
\caption{Performance on HELOC dataset with $L_2$ cost: FOCUS is applied on the model with a higher threshold, i.e., $M(x) > \gamma$.}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_2$ Cost & Validity & LOF \\
\midrule
FOCUS ($\gamma=$0.5) & 1.17 & 30.1\% & 0.69 \\
FOCUS ($\gamma=$0.5) +RobX & 2.94 & 97.4\% & 0.96 \\
\midrule
FOCUS ($\gamma=$0.6) & 1.67 & 68.4\% & 0.64\\
FOCUS ($\gamma=$0.6) +RobX & 3.65 & 100\% & 0.96\\
\midrule
FOCUS ($\gamma=$0.7) & 2.28 & 97.4\% & 0.59\\
FOCUS ($\gamma=$0.7) +RobX & 3.64 & 100\% & 0.97\\
\midrule
FOCUS ($\gamma=$0.8) & 3.68 & 100\% & 0.55\\
FOCUS ($\gamma=$0.8) +RobX & 3.69 & 100\% & 0.55\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

\begin{table}
\caption{Performance on German Credit dataset for $L_2$ cost.}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_2$ Cost & Validity & LOF \\
\midrule
CCF & 3.18 & 98.2\% & 0.69 \\
\midrule
FT & 0.56 & 60.0\% & 0.88 \\
FT +RobX & 2.35 & 97.0\% & 0.88 \\
\midrule
FOCUS & 0.77 & 67.2\% & 0.87 \\
FOCUS +RobX & 2.83 & 97.0\% & 0.75 \\
\midrule
FACE & 4.74 & 94.3\% & 0.81 \\
FACE +RobX & 3.38 & 96.9\% & 0.87\\
\midrule
NN & 1.99 & 76.7\% & 0.75 \\
NN +RobX & 2.50 & 96.9\% &
0.81 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

\begin{table}
\caption{Performance on German Credit dataset with $L_2$ cost: FOCUS is applied on the model with a higher threshold, i.e., $M(x) > \gamma$.}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Method & $L_2$ Cost & Validity & LOF \\
\midrule
FOCUS ($\gamma=$0.5) & 0.77 & 67.2\% & 0.87 \\
FOCUS ($\gamma=$0.5) +RobX & 2.83 & 97.0\% & 0.75 \\
\midrule
FOCUS ($\gamma=$0.6) & 0.84 & 77.6\% & 0.81\\
FOCUS ($\gamma=$0.6) +RobX & 2.83 & 97.0\% & 0.75 \\
\midrule
FOCUS ($\gamma=$0.7) & 0.89 & 82.0\% & 0.81\\
FOCUS ($\gamma=$0.7) +RobX & 2.83 & 97.0\% & 0.75\\
\midrule
FOCUS ($\gamma=$0.8) & 1.27 & 88.6\% & 0.69\\
FOCUS ($\gamma=$0.8) +RobX & 2.79 & 100\% & 0.75\\
\midrule
FOCUS ($\gamma=$0.9) & 1.62 & 87.7\% & 0.51\\
FOCUS ($\gamma=$0.9) +RobX & 2.69 & 100\% & 0.81\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}

\section{Introduction}
\noindent
To every prime $p$ we associate a set $E(p)$ of positive allowed exponents. Thus $E(p)$ is a subset of $\mathbb N$. We consider the set $S$ of integers consisting of 1 and all integers $n$ of the form $n=\prod_i p_i^{e_i}$ with $e_i\in E(p_i)$. Note that this set is {\it multiplicative}, i.e., if $m$ and $n$ are coprime integers then $mn$ is in $S$ iff both $m$ and $n$ are in $S$. It is easy to see that in this way we obtain all multiplicative sets of natural numbers. As an example, let us consider the case where $E(p)$ consists of the positive even integers if $p\equiv 3({\rm mod~}4)$ and $E(p)=\mathbb N$ for the other primes. The set $S_B$ obtained in this way can be described in another way. By the well-known result that every positive integer can be written as a sum of two squares iff every prime divisor $p$ of $n$ of the form $p\equiv 3({\rm mod~}4)$ occurs to an even exponent, we see that $S_B$ is the set of positive integers that can be written as a sum of two integer squares.\\
\indent In this note we are interested in the counting function associated to $S$, $S(x)$, which counts the number of $n\le x$ that are in $S$. By $\pi_S(x)$ we denote the number of primes $p\le x$ that are in $S$. We will only consider $S$ with the property that $\pi_S(x)$ can be well-approximated by $\delta \pi(x)$ with $\delta>0$ real and $\pi(x)$ the prime counting function (thus $\pi(x)=\sum_{p\le x}1$). Recall that the Prime Number Theorem states that asymptotically $\pi(x)\sim x/\log x$. Gauss as a teenager conjectured that the logarithmic integral, Li$(x)$, defined as $\int_2^x{dt/\log t}$, gives a much better approximation to $\pi(x)$. Indeed, it is now known that, for any $r>0$, we have $\pi(x)={\rm Li}(x)+O(x\log^{-r}x)$. On the other hand, the result that $\pi(x)={x/\log x}+O(x\log^{-r}x)$ is false for $r>2$. In this note two types of approximation of $\pi_S(x)$ by $\delta \pi(x)$ play an important role.
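As a computational aside (our illustration, not part of the original text), membership in the example set $S_B$ is easy to test via the sum-of-two-squares criterion just quoted; a minimal Python sketch using \texttt{sympy}:

\begin{verbatim}
# S_B: n is a sum of two squares iff every prime p = 3 (mod 4)
# divides n to an even exponent.
from sympy import factorint

def in_SB(n):
    return all(e % 2 == 0 for p, e in factorint(n).items() if p % 4 == 3)

print([n for n in range(1, 30) if in_SB(n)])
# [1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20, 25, 26, 29]
\end{verbatim}

Counting such $n$ up to $x$ gives an empirical handle on $S_B(x)$, against which the two approximations below can be compared. We now make these precise.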
We say $S$ satisfies Condition A if, asymptotically,
\begin{equation}
\pi_S(x)\sim \delta {x\over \log x}.
\end{equation}
We say that $S$ satisfies Condition B if there are some fixed positive numbers $\delta$ and $\rho$ such that asymptotically
\begin{equation}
\label{conditionB}
\pi_S(x)=\delta{\rm Li}(x)+O\Big({x\over \log^{2+\rho}x}\Big).
\end{equation}
The following result is a special case of a result of Wirsing \cite{Wirsing}, with a reformulation following Finch et al.~\cite[p. 2732]{FMS}. As usual $\Gamma$ will denote the gamma function. By $\chi_S$ we denote the characteristic function of $S$, that is, we put $\chi_S(n)=1$ if $n$ is in $S$ and zero otherwise.
\begin{Thm}
\label{een}
Let $S$ be a multiplicative set satisfying Condition A. Then
$$S(x)\sim C_0(S) x \log^{\delta-1}x,$$
where
$$C_0(S)={1\over \Gamma(\delta)}\lim_{P\rightarrow \infty}\prod_{p\le P}\Big(1-{1\over p}\Big)^{\delta}\sum_{j=0}^{\infty}{\chi_S(p^j)\over p^j}.$$
\end{Thm}
For the sets $S$ we consider, the primes in $S$ lie (essentially) in a finite union of arithmetic progressions, and the prime counts needed to verify Condition B are supplied by the Siegel-Walfisz theorem: for any $r>0$ this theorem states that
$$\pi(x;d,a):=\sum_{p\le x,~p\equiv a({\rm mod~}d)}1={{\rm Li}(x)\over \varphi(d)}+O\Big({x\over \log^{r}x}\Big).$$
In particular, the primes in $S_B$ are $2$ and those $p\equiv 1({\rm mod~}4)$, so that $\delta=1/2$ for $S_B$. Theorem \ref{een} thus gives that, asymptotically, $S_B(x)\sim C_0(S_B)x/\sqrt{\log x}$, a result derived in 1908 by Edmund Landau. Ramanujan, in his first letter to Hardy (1913), wrote in our notation that
\begin{equation}
\label{kleemie}
S_B(x)=C_0(S_B)\int_2^x{dt\over \sqrt{\log t}}+\theta(x),
\end{equation}
with $\theta(x)$ very small. In reply to Hardy's question what `very small' is in this context, Ramanujan wrote back $O(\sqrt{x/\log x})$. (For a more detailed account and further references see Moree and Cazaran \cite{MC}.) Note that by partial integration Ramanujan's claim, if true, implies the result of Landau. This leads us to the following definition.
\begin{Def}
Let $S$ be a multiplicative set such that $\pi_S(x)\sim \delta x/\log x$ for some $\delta>0$. If, for every $x$ sufficiently large,
$$|S(x)- C_0(S) x \log^{\delta-1}x|< \Big|S(x)- C_0(S) \int_2^x \log^{\delta-1}t\,dt\Big|,$$
we say that the Landau approximation is better than the Ramanujan approximation. If the reverse inequality holds for every $x$ sufficiently large, we say that the Ramanujan approximation is better than the Landau approximation.
\end{Def}
We denote the formal Dirichlet series $\sum_{n=1,~n\in S}^{\infty}n^{-s}$ associated to $S$ by $L_S(s)$. For Re$(s)>1$ it converges. If
\begin{equation}
\label{EK}
\gamma_S:=\lim_{s\rightarrow 1+0}\Big({L_{S}'(s)\over L_{S}(s)}+{\delta\over s-1}\Big)
\end{equation}
exists, we say that $S$ has {\it Euler-Kronecker constant} $\gamma_S$.
In case $S$ consists of all positive integers we have $L_S(s)=\zeta(s)$ and it is well known that
\begin{equation}
\label{gammo}
\lim_{s\rightarrow 1+0}\Big({\zeta'(s)\over \zeta(s)}+{1\over s-1}\Big)=\gamma.
\end{equation}
If the multiplicative set $S$ satisfies Condition B, then it can be shown that $\gamma_S$ exists. Indeed, we have the following result.
\begin{Thm}
\label{vier1} {\rm \cite{eerstev}.}
If the multiplicative set $S$ satisfies Condition B, then
$$S(x)=C_0(S)x\log^{\delta-1}x\Big(1+(1+o(1)){C_1(S)\over \log x}\Big),\qquad \text{as}\quad x\to\infty,$$
where $C_1(S)=(1-\delta)(1-\gamma_S)$.
\end{Thm}
\begin{Cor}
Suppose that $S$ is multiplicative and satisfies Condition B. If $\gamma_S<1/2$, then the Ramanujan approximation is asymptotically better than the Landau one.
If $\gamma_S>1/2$ it is the other way around.
\end{Cor}
The corollary follows on noting that by partial integration we have
\begin{equation}
\label{part1}
\int_2^x \log^{\delta-1}t\,dt=x\log^{\delta-1}x\Big(1+{1-\delta\over \log x}+O\Big({1\over \log^2 x}\Big)\Big).
\end{equation}
On comparing (\ref{part1}) with Theorem \ref{vier1} we see that Ramanujan's claim (\ref{kleemie}), if true, implies $\gamma_{S_B}=0$.\\
\indent A special, but common, case is where the primes in the set $S$ are, with finitely many exceptions, precisely those in a finite union of arithmetic progressions, that is, there exists a modulus $d$ and integers $a_1,\ldots,a_s$ such that for all sufficiently large primes $p$ we have $p\in S$ iff $p\equiv a_i({\rm mod~}d)$ for some $1\le i\le s$. (Indeed, all examples we consider in this paper belong to this special case.) Under this assumption it can be shown, see Serre \cite{Serre}, that $S(x)$ has an asymptotic expansion in the sense of Poincar\'e, that is, for every integer $m\ge 1$ we have
\begin{equation}
\label{starrie}
S(x)=C_0(S)x\log^{\delta-1}x\Big(1+{C_1(S)\over \log x}+{C_2(S)\over \log^2 x}+\ldots+
{C_m(S)\over \log^m x}+O\Big({1\over \log^{m+1}x}\Big)\Big),
\end{equation}
where the implicit error term may depend on both $m$ and $S$. In particular $S_B(x)$ has an expansion of the form (\ref{starrie}) (see, e.g., Hardy \cite[p. 63]{Hardy} for a proof).

\section{On the numerical evaluation of $\gamma_S$}
We discuss various ways of numerically approximating $\gamma_S$. A few of these approaches involve a generalization of the von Mangoldt function $\Lambda(n)$ (for more details see Section 2.2 of \cite{MC}).\\
\indent We define $\Lambda_S(n)$ implicitly by
\begin{equation}
\label{loggie}
-{L_S'(s)\over L_S(s)}=\sum_{n=1}^{\infty}{\Lambda_S(n)\over n^s}.
\end{equation}
As an example let us compute $\Lambda_S(n)$ in case $S=\mathbb N$. Since
$$L_{\mathbb N}(s)=\zeta(s)=\prod_p\Big(1-{1\over p^s}\Big)^{-1},$$
we obtain $\log \zeta(s)=-\sum_p \log(1-p^{-s})$ and hence
$$-{L_S'(s)\over L_S(s)}=-{\zeta'(s)\over \zeta(s)}=\sum_p {\log p\over p^s-1}.$$
We infer that $\Lambda_S(n)=\Lambda(n)$, the von Mangoldt function. Recall that
$$\Lambda(n)=
\begin{cases}
\log p & {\rm if~}n=p^e;\\
0 & {\rm otherwise}.
\end{cases}
$$
In case $S$ is a multiplicative semigroup generated by $q_1,q_2,\ldots$, we have
$$L_S(s)=\prod_i\Big(1-{1\over {q_i}^s}\Big)^{-1},$$
and we find
$$\Lambda_S(n)=
\begin{cases}
\log q_i & {\rm if~}n=q_i^e;\\
0 & {\rm otherwise}.
\end{cases}
$$
Note that $S_B$ is a multiplicative semigroup.
It is generated by $2$, the primes $p\equiv 1({\rm mod~}4)$ and the squares of the primes $p\equiv 3({\rm mod~}4)$.\\
\indent For a more general multiplicative set $\Lambda_S(n)$ can become more complicated in nature, as we will now argue. We claim that (\ref{loggie}) gives rise to the identity
\begin{equation}
\label{idi1}
\chi_S(n)\log n =\sum_{d|n}\chi_S({n\over d})\Lambda_S(d).
\end{equation}
In the case $S=\mathbb N$, e.g., we obtain $\log n=\sum_{d|n}\Lambda(d)$. In order to derive (\ref{idi1}) we use the observation that if $F(s)=\sum f(n)n^{-s}$, $G(s)=\sum g(n)n^{-s}$ and $F(s)G(s)=H(s)=\sum h(n)n^{-s}$ are formal Dirichlet series, then $h$ is the Dirichlet convolution of $f$ and $g$, that is $h(n)=(f*g)(n)=\sum_{d|n}f(d)g(n/d)$. By an argument similar to the one that led us to the von Mangoldt function, one sees that $\Lambda_S(n)=0$ in case $n$ is not a prime power. Thus we can rewrite (\ref{idi1}) as
\begin{equation}
\label{idi2}
\chi_S(n)\log n =\sum_{p^j|n}\chi_S({n\over p^j})\Lambda_S(p^j).
\end{equation}
By induction one then finds that $\Lambda_S(p^e)=c_S(p^e)\log p$, where $c_S(p)=\chi_S(p)$ and $c_S(p^e)$ is defined recursively for $e>1$ by
$$c_S(p^e)=e\chi_S(p^e)-\sum_{j=1}^{e-1}c_S(p^j)\chi_S(p^{e-j}).$$
A closed expression for $\Lambda_S(n)$ can also be given (\cite[Proposition 13]{MC}), namely
$$\Lambda_S(n)=e\log p \sum_{m=1}^e {(-1)^{m-1}\over m} \sum_{k_1+k_2+\ldots+k_m=e}\chi_S(p^{k_1})\chi_S(p^{k_2})
\cdots \chi_S(p^{k_m}),$$
if $n=p^e$ for some $e\ge 1$ and $\Lambda_S(n)=0$ otherwise, or alternatively $\Lambda_S(n)=We\log p$, where
$$W=\sum_{l_1+2l_2+\ldots+el_e=e}{(-1)^{l_1+\ldots+l_e-1}\over l_1+l_2+\ldots+l_e}
\Big({l_1+l_2+\ldots+l_e\over l_1!l_2!\cdots l_e!}\Big)
\chi_S(p)^{l_1}\chi_S(p^2)^{l_2}\cdots \chi_S(p^e)^{l_e},$$
if $n=p^e$ and $\Lambda_S(n)=0$ otherwise, where the $k_i$ run through the natural numbers and the $l_j$ through the non-negative integers.\\
\indent Now that we can compute $\Lambda_S(n)$ we are ready for some formulae expressing $\gamma_S$ in terms of this function.

\begin{Thm}
\label{vier}
Suppose that $S$ is a multiplicative set satisfying Condition B. Then
$$\sum_{n\le x}{\Lambda_S(n)\over n}=\delta \log x - \gamma_S+O\Big({1\over \log^{\rho}x}\Big).$$
Moreover, we have
$$\gamma_{S}=-\delta \gamma + \sum_{n=1}^{\infty}{\delta-\Lambda_S(n)\over n}.$$
If, furthermore, $S$ is a semigroup generated by $q_1,q_2,\ldots$, then one has
$$\gamma_S=\lim_{x\rightarrow \infty}\Big(\delta \log x -\sum_{q_i\le x}{\log q_i\over q_i-1}\Big).$$
\end{Thm}
The second formula given in Theorem \ref{vier} easily follows from the first on invoking the classical definition of $\gamma$:
$$\gamma=\lim_{x\rightarrow \infty}\Big(\sum_{n\le x}{1\over n}-\log x\Big).$$
Theorem \ref{vier} is quite suitable for getting an approximate value of $\gamma_{S}$. The formulae given there, however, do not allow one to compute $\gamma_{S}$ with a prescribed numerical precision. For that, another approach is needed, the idea of which is to relate the generating series $L_{S}(s)$ to $\zeta(s)$ and then take the logarithmic derivative.
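Before turning to that, the semigroup formula of Theorem~\ref{vier} invites a quick numerical experiment (ours, purely illustrative). For $S=S_B$ we have $\delta=1/2$ and the generators listed above, namely $2$, the primes $p\equiv 1({\rm mod~}4)$ and the squares of the primes $p\equiv 3({\rm mod~}4)$:

\begin{verbatim}
# Crude approximation of gamma_{S_B} via the semigroup formula:
# gamma_S ~ delta*log(x) - sum over generators q <= x of log(q)/(q-1).
from math import log
from sympy import primerange

def gamma_SB_approx(x):
    total = log(2) / (2 - 1)                 # generator q = 2
    for p in primerange(3, x + 1):
        if p % 4 == 1:                       # generator q = p
            total += log(p) / (p - 1)
        elif p * p <= x:                     # generator q = p^2
            total += log(p * p) / (p * p - 1)
    return 0.5 * log(x) - total
\end{verbatim}

In line with the caveat just made, the convergence is slow: the output approaches the true value of $\gamma_{S_B}$ (evaluated by Shanks, see Section~\ref{SEK}) only sluggishly as $x$ grows.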
The high-precision approach via $\zeta(s)$ is illustrated in Section \ref{SEK}, where we show how $\gamma_{S_D}$ (defined in that section) can be computed with high numerical precision.

\section{Non-divisibility of multiplicative arithmetic functions}
Given a multiplicative arithmetic function $f$ taking only integer values, it is an almost immediate observation that, with $q$ a prime, the set $S_{f;q}:=\{n:q\nmid f(n)\}$ is multiplicative.

\subsection{Non-divisibility of Ramanujan's $\tau$}
In his so-called `unpublished' manuscript on the partition and tau functions \cite{BO}, Ramanujan considers the counting function of $S_{\tau;q}$, where $q\in \{3,5,7,23,691\}$ and $\tau$ is the Ramanujan $\tau$-function. Ramanujan's $\tau$-function is defined via the coefficients of the power series in $q$:
$$\Delta:=q\prod_{m=1}^{\infty}(1-q^m)^{24}=\sum_{n=1}^{\infty}\tau(n)q^n.$$
After setting $q=e^{2\pi i z}$, the function $\Delta(z)$ is the unique normalized cusp form of weight 12 for the full modular group SL$_2(\mathbb Z)$. It turns out that $\tau$ is a multiplicative function and hence the set $S_{\tau;q}$ is multiplicative. Given any such $S_{\tau;q}$, Ramanujan denotes $\chi_{S_{\tau;q}}(n)$ by $t_n$. He then typically writes: ``It is easy to prove by quite elementary methods that $\sum_{k=1}^n t_k=o(n)$. It can be shown by transcendental methods that
\begin{equation}
\label{simpelonia}
\sum_{k=1}^n t_k\sim {Cn\over \log^{\delta_q} n};
\end{equation}
and
\begin{equation}
\label{kleemie2}
\sum_{k=1}^n t_k=C\int_2^n{dx\over \log ^{\delta_q} x}+O\Big({n\over \log^r n}\Big),
\end{equation}
where $r$ is any positive number''. Ramanujan claims that $\delta_3=\delta_7=\delta_{23}=1/2$, $\delta_5=1/4$ and $\delta_{691}=1/690$. Except for $q=5$ and $q=691$, Ramanujan also writes down an Euler product for $C$. These are correct, except for a minor omission he made in case $q=23$.
\begin{Thm} {\rm (\cite{M}).} For $q\in \{3,5,7,23,691\}$ we have $\gamma_{S_{\tau;q}}\ne 0$ and thus Ramanujan's claim {\rm (\ref{kleemie2})} is false for $r>2$.
\end{Thm}
The reader might wonder why this specific small set of values of $q$. The answer is that in these cases Ramanujan established easy congruences such as
$$\tau(n)\equiv \sum_{d|n}d^{11}({\rm mod~}691)$$
that allow one to easily describe the non-divisibility of $\tau(n)$ for these $q$. Serre, see \cite{SwD}, has shown that for every odd prime $q$ a formula of type (\ref{simpelonia}) exists, although no simple congruences as above exist. This result requires quite sophisticated tools, e.g., the theory of $l$-adic representations. The question that arises is whether $\gamma_{S_{\tau;q}}$ exists for every odd $q$ and, if yes, whether it can be computed with enough numerical precision to determine whether it is zero or not, and to tell whether the Landau or the Ramanujan approximation is better.

\subsection{Non-divisibility of Euler's totient function $\varphi$}
Spearman and Williams \cite{SW} determined the asymptotic behaviour of $S_{\varphi;q}(x)$. Here invariants from the cyclotomic field $\mathbb Q(\zeta_q)$ come into play. The mathematical connection with cyclotomic fields is not very direct in \cite{SW}. However, this connection can be made, and in this way the results of Spearman and Williams can be rederived in a rather straightforward way, see \cite{FLM, eerstev}.
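As a quick empirical aside (ours, not taken from the works cited), the counting function $S_{\varphi;q}(x)$ can be tabulated directly for small $x$ with \texttt{sympy}:

\begin{verbatim}
# Count n <= x with q not dividing phi(n), i.e., S_{phi;q}(x).
from sympy import totient

def S_phi_q_count(q, x):
    return sum(1 for n in range(1, x + 1) if totient(n) % q != 0)

for q in (3, 5, 7):
    print(q, S_phi_q_count(q, 10**4))
\end{verbatim}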
Recall that the Extended Riemann Hypothesis (ERH) says that the\nRiemann Hypothesis holds true for every Dirichlet L-series $L(s,\\chi)$.\n\\begin{Thm}{\\rm (\\cite{FLM}).} \\label{eflm}\nFor $q\\le 67$ we have $1\/2>\\gamma_{S_{\\varphi;q}}>0$. For $q>67$ we have \n$\\gamma_{S_{\\varphi;q}}>1\/2$.\nFurthermore we have $\\gamma_{S_{\\varphi;q}}=\\gamma+O(\\log^2q\/\\sqrt{q})$, \nunconditionally with an effective constant, $\\gamma_{S_{\\varphi;q}}=\\gamma+O(q^{\\epsilon-1})$, unconditionally\nwith an ineffective constant and $\\gamma_{S_{\\varphi;q}}=\\gamma+O((\\log q)(\\log\\log q)\/q)$ if ERH holds true.\n\\end{Thm}\nThe explicit inequalities in this result were \nfirst proved by the author \\cite{eerstev}, who established them assuming ERH. Note that the\nresult shows that Landau wins over Ramanujan for every prime $q\\ge 71$.\\\\\n\\indent Given a number field $K$, the Euler-Kronecker constant ${\\mathcal EK}_K$ of the number field $K$ is\ndefined as $${\\mathcal EK}_K=\\lim_{s \\downarrow 1}\\Big({\\zeta'_K(s)\\over \\zeta_K(s)}+{1\\over s-1}\\Big),$$\nwhere $\\zeta_K(s)$ denotes the Dedekind zeta-function of $K$. Given a prime $p\\ne q$, let $f_p$ the smallest\npositive integer such that $p^{f_p}\\equiv 1({\\rm mod~}q)$. Put\n$$S(q)=\\sum_{p\\ne q,f_p\\ge 2} \n{\\log p\\over p^{f_p}-1}.$$\nWe have \n\\begin{equation}\n\\label{EK01}\n\\gamma_{S_{\\varphi;q}}= \\gamma-{(3-q)\\log q\\over (q-1)^2(q+1)} -S(q) -\n{\\mathcal{EK}_{{\\mathbb Q}(\\zeta_q)}\\over q-1}.\n\\end{equation}\n(This is a consequence of Theorem \\ref{vier1} and Proposition 2 of Ford et al. \\cite{FLM}.)\\\\\n\\indent The Euler-Kronecker constants ${\\mathcal EK}_K$ and in particular \n$\\mathcal{EK}_{{\\mathbb Q}(\\zeta_q)}$ have been well-studied, see e.g. Ford et al.~\\cite{FLM}, Ihara \\cite{I} or \nKumar Murty \\cite{KM} for results and references.\n\n\n\\section{Some Euler-Kronecker constants related to binary quadratic forms}\n\\label{SEK}\nHardy \\cite[p. 9, p. 63]{Hardy} was under the misapprehension that for $S_B$ Landau's approximation is better. However, he based himself\non a computation of his student Geraldine Stanley \\cite{Stanley} that turned out to be incorrect.\nShanks proved that\n\\begin{equation}\n\\label{geraldine}\n\\gamma_{S_B}={\\gamma\\over 2}+{1\\over 2}{L'\\over L}(1,\\chi_{-4})-{\\log 2\\over 2}\n-\\sum_{p\\equiv 3({\\rm mod~}4)}{\\log p\\over p^2-1}.\n\\end{equation}\nVarious mathematicians independently discovered the result that\n$${L'\\over L}(1,\\chi_{-4})=\\log\\Big(M(1,\\sqrt{2})^2e^{\\gamma}\/2\\Big),$$\nwhere $M(1,\\sqrt{2})$ denotes the limiting value of Lagrange's AGM algorithm\n$a_{n+1}=(a_n+b_n)\/2$, $b_{n+1}=\\sqrt{a_n b_n}$ with starting values $a_0=1$ and $b_0=\\sqrt{2}$.\nGauss showed (in his diary) that\n$${1\\over M(1,\\sqrt{2})}={2\\over \\pi}\\int_0^1 {dx\\over \\sqrt{1-x^4}}.$$\nThe total arclength of the lemniscate $r^2=\\cos(2\\theta)$ is given by $2l$, where\n$L=\\pi\/M(1,\\sqrt{2})$ is the so-called lemniscate constant.\\\\\n\\indent Shanks used these formulae to show that \n$\\gamma_{S_B}=-0.1638973186345\\ldots \\ne 0$, thus establishing the\nfalsity of Ramanujan's claim (\\ref{kleemie}). \nSince $\\gamma_{S_B}<1\/2$, it follows by Corollary 1 that actually the Ramanujan approximation is better.\n\nA natural question is to determine the primitive binary quadratic forms $f(X,Y)=aX^2+bXY+cY^2$\nof negative discriminant for which the integers represented form a multiplicative set. \nThis does not seem to be known. 
However, in the more restrictive case where we require the multiplicative\nset to be also a semigroup the answer is known, see Earnest and Fitzgerald \\cite{earnest}.\n\\begin{Thm}\nThe value set of a positive definite integral binary quadratic form forms a semigroup if and only if it is in the \nprincipal class, i.e. represents 1, or has order 3 (under Gauss composition).\n\\end{Thm}\nIn the former case, the set of represented integers is just the set of norms from the order \n${\\mathfrak O}_D$, which is multiplicative. In the latter case, the smallest example are the forms of \ndiscriminant -23, for which the class group is cyclic of order 3: the primes $p$ are partitioned into those of the form $X^2 - XY + 6Y^2$ and those of the form $2X^2 \\pm XY + 3Y^2$.\n\nAlthough the integers represented by $f(X,Y)$ do not in general form a multiplicative set, the associated set\n$I_f$ of integers represented by $f$, always satisfies the same type of asymptotic, namely we have\n$$I_f(x)\\sim C_f{x\\over \\sqrt{\\log x}}.$$ \nThis result is due to Paul Bernays \\cite{Bernays}, of fame in logic, who did his PhD thesis with Landau. Since his work\nwas not published in a mathematical journal it got forgotten and later partially rediscovered by mathematicians\nsuch as James and Pall. For a brief description of the proof approach of Bernays see Brink et al. \\cite{Brink}. \n\nWe like to point out that in general the estimate\n$$I_f(x)=C_f{x\\over \\sqrt{\\log x}}\\Big(1+(1+o(1)){C'_f\\over \\log x}\\Big)$$ \ndoes not hold. For example, for $f(X,Y)=X^2+14Y^2$, see Shanks and Schmid \\cite{SS}.\n\nBernays did not compute $C_f$, this was only done much later and required the combined effort of various\nmathematicians. The author and Osburn \\cite{mos} combined these results to show that of all the two dimensional lattices\nof covolume 1, the hexagonal lattice has the fewest distances. Earlier Conway and Sloane \\cite{CS} had identified the\nlattices with fewest distances in dimensions 3 to 8, also relying on the work of many other mathematicians. \n\nIn the special case where $f=X^2+nY^2$, a remark in a paper of Shanks seemed to suggest that he thought $C_f$\nwould be maximal in case $n=2$. However, the maximum does not occur for $n=2$, see Brink et al. \\cite{Brink}.\n\nIn estimating $I_f(x)$, the first step is to count $B_D(x)$. \nGiven a discriminant $D\\le -3$ we let $B_D(x)$ count the number of integers $n\\le x$ that are coprime to\n$D$ and can be represented by some primitive quadratic integral form of discriminant $D$. The integers so\nrepresented are known, see e.g. James \\cite{James}, to form a multiplicative semigroup, $S_D$, generated by the \nprimes $p$ with $({D\\over p})=1$ and the squares \nof the primes $q$ with $({D\\over q})=-1$. James \\cite{James} showed that we have\n$$B_D(x)=C(S_D){x\\over \\sqrt{\\log x}}+O({x\\over \\log x}).$$\nAn easier proof, following closely the ideas employed by Rieger \\cite{Rieger}, was given by Williams \\cite{Williams}.\nThe set of primes in $S_D$ has density $\\delta=1\/2$. 
By the law of quadratic reciprocity the set of primes\n$p$ satisfying $({D\\over p})=1$ is, with finitely many exceptions, precisely a union of arithmetic progressions.\nIt thus follows that Condition B is satisfied and, moreover, that for every integer $m\\ge 1$, \nwe have an expansion of the form\n$$B_D(x)=C(S_D){x\\over \\sqrt{\\log x}}\\big(1+{b_1\\over \\log x}+{b_2\\over \\log^2 x}+\n\\cdots +O({1\\over \\log^m x})\\Big).$$\nBy Theorem \\ref{vier1} and Theorem \\ref{vier} we infer that $b_1=(1-\\gamma_{S_D})\/2$, with\n$$\\gamma_{S_D}=\\lim_{x\\rightarrow \\infty}\\Big({\\log x\\over 2}-\\sum_{p\\le x,~({D\\over p})=1}{\\log p\\over p-1}\\Big)\n-\\sum_{({D\\over p})=-1}{2\\log p\\over p^2-1}.$$\nAs remarked earlier, in order to compute $\\gamma_{S_D}$ with some numerical\nprecision the above formula is not suitable and another route has to be taken.\n\\begin{Prop} \n\\label{expressie} {\\rm (\\cite{James}.)}\nWe have, for Re$(s)>1$,\n$$L_{S_D}(s)^2=\\zeta(s)L(s,\\chi_D)\\prod_{({D\\over p})=-1}(1-p^{-2s})^{-1}\\prod_{p|D}(1-p^{-s}).$$\n\\end{Prop}\n\\noindent {\\it Proof}. On noting that\n$$L_{S_D}(s)=\\prod_{({D\\over p})=1}(1-p^{-s})^{-1}\\prod_{({D\\over p})=-1}(1-p^{-2s})^{-1},$$\nand\n$$L(s,\\chi_D)=\\prod_{({D\\over p})=1}(1-p^{-s})^{-1}\\prod_{({D\\over p})=-1}(1+p^{-s})^{-1},$$\nthe proof follows on comparing Euler factors on both sides. \\qed\n\\begin{Prop} \n\\label{2gamma}\nWe have\n$$2\\gamma_{S_D}=\\gamma+{L'\\over L}(1,\\chi_D)-\\sum_{({D\\over p})=-1}{2\\log p\\over p^2-1}+\n\\sum_{p|D}{\\log p\\over p-1}.$$\n\\end{Prop}\n\\noindent {\\it Proof}. Follows on logarithmically differentiating the expression for $L_{S_D}(s)^2$ given\nin Proposition \\ref{expressie}, invoking (\\ref{gammo}) and recalling that $L(1,\\chi_D)\\ne 0$. \\qed\\\\\n\nThe latter result together with $b_1=(1-\\gamma_{S_D})\/2$ leads to a formula first proved by Heupel \\cite{Heupel}\nin a different way. \n\nThe first sum appearing in Proposition \\ref{2gamma} can be evaluated with high numerical precision by using\nthe identity\n\\begin{equation}\n\\label{idie}\n\\sum_{({D\\over p})=-1}{2\\log p\\over p^2-1}=\\sum_{k=1}^{\\infty}\\Big({L'\\over L}(2^k,\\chi_{D})-{\\zeta'\\over \\zeta}(2^k)-\\sum_{p|D}{\\log p\\over p^{2^k}-1}\\Big).\n\\end{equation}\n\nThis identity in case $D=-3$ was established in \\cite[p. 436]{M2}. The proof given there is easily \ngeneralized. An alternative proof follows on combining Proposition \\ref{55} with Proposition \\ref{56}.\n\\begin{Prop}\n\\label{55}\nWe have\n$$\\sum_p {({D\\over p})\\log p\\over p-1}=-{L'\\over L}(1,\\chi_{D})+\n\\sum_{k=1}^{\\infty}\\Big(-{L'\\over L}(2^k,\\chi_{D})+{\\zeta'\\over \\zeta}(2^k)+\\sum_{p|D}{\\log p\\over p^{2^k}-1}\\Big).$$\n\\end{Prop}\n{\\it Proof}. This is Lemma 12 in Cilleruelo \\cite{C}. \\qed\n\\begin{Prop}\n\\label{56}\nWe have\n$$-\\sum_p {({D\\over p})\\log p\\over p-1}={L'\\over L}(1,\\chi_{D})+\\sum_{({D\\over p})=-1}{2\\log p\\over p^2-1}.$$\n\\end{Prop}\n{\\it Proof}. Put $G_d(s)=\\prod_p(1-p^{-s})^{(D\/p)}$.\nWe have $${1\\over G_d(s)}=L(s,\\chi_{D})\\prod_{({D\\over p})=-1}(1-p^{-2s}).$$ The result then follows\non logarithmic differentiation of both sides of\nthe identity and the fact that $L(1,\\chi_{D})\\ne 0$. \\qed\\\\\n\nThe terms in (\\ref{idie}) can be calculated with MAGMA with high precision and\nthe series involved converge very fast. 
Cilleruelo \\cite{C} claims\nthat \n$$\\sum_{k=1}^{\\infty} {L'\\over L}(2^k,\\chi_D)=\\sum_{k=1}^6 {L'\\over L}(2^k,\\chi_D)+{\\rm Error},~|{\\rm Error}|\\le 10^{-40}.$$\n\nWe will now rederive Shanks' result (\\ref{geraldine}). Since there is only one primitive quadratic form\nof discriminant -4, we see that $S_{-4}$ is precisely the set of odd integers that can be written as a sum\nof two squares. If $m$ is an odd integer that can be written as a sum of two squares, then so can\n$2^em$ with $e\\ge 0$ arbitrary.\nIt follows that $L_{S_B}(s)=(1-2^{-s})^{-1}L_{S_{-4}}(s)$ and hence $\\gamma_{S_B}=\\gamma_{S_{-4}}-\\log 2$. On invoking\nProposition \\ref{2gamma} one then finds the identity (\\ref{geraldine}).\n\n\\section{Integers composed only of primes in a prescribed arithmetic progession}\nConsider an arithmetic progression having infinitely many primes in it, that is consider\nthe progression $a,a+d,a+2d,\\ldots$ with $a$ and $d$ coprime.\nLet $S'_{d;a}$ be the multiplicative set of integers composed only\nof primes $p\\equiv a({\\rm mod~}d)$. Here we will only consider the simple case where\n$a=1$ and $d=q$ is a prime number. This problem is very closely related to that in Section 3.2.\nOne has \n$L_{S'_{\\varphi;q}}(s)=(1+q^{-s})\\prod_{p\\equiv 1({\\rm mod~}q)\\atop p\\ne q}(1-p^{-s})^{-1}$.\nSince $L_{S'_{q;1}}(s)=\\prod_{p\\equiv 1({\\rm mod~}q)}(1-p^{-s})^{-1}$, we then infer that\n$$L_{S'_{\\varphi;q}}(s)L_{S'_{q;1}}(s)=\\zeta(s)(1-q^{-2s})$$\nand hence\n\\begin{eqnarray}\n\\label{bloep}\n\\gamma_{S'_{q;1}} & = &\\gamma-\\gamma_{S'_{\\varphi;q}}+{2\\log q\\over q^2-1}\\\\\n& = & {\\log q\\over (q-1)^2} +S(q) +{\\mathcal{EK}_{{\\mathbb Q}(\\zeta_q)}\\over q-1},\\nonumber\n\\end{eqnarray}\nwhere the latter equality follows by identity (\\ref{EK01}).\nBy Theorem \\ref{eflm}, (\\ref{bloep}) and the Table in Ford et al. \\cite{FLM}, we then arrive after\nsome easy analysis at the following\nresult.\n\\begin{Thm}\n\\label{eflm2}\nFor $q\\le 7$ we have $\\gamma_{S'_{q;1}}>0.5247$. For $q>7$ we have \n$\\gamma_{S'_{q;1}}<0.2862$.\nFurthermore we have $\\gamma_{S'_{q;1}}=O(\\log^2q\/\\sqrt{q})$, \nunconditionally with an effective constant, $\\gamma_{S'_{q;1}}=O(q^{\\epsilon-1})$, unconditionally\nwith an ineffective constant and $\\gamma_{S'_{q;1}}=O((\\log q)(\\log\\log q)\/q)$ if ERH holds true.\n\\end{Thm}\n \n\n\n\\section{Multiplicative set races}\nGiven two multiplicative sets $S_1$ and $S_2$, one can wonder whether for every $x\\ge 0$ we have\n$S_1(x)\\ge S_2(x)$. We give an example showing that this question is not as far-fetched as one might\nthink at first sight. Schmutz Schaller \\cite[p. 201]{PSS}, motivated by\nconsiderations from hyperbolic geometry, conjectured that the hexagonal lattice is better\nthan the square lattice, by which he means that $S_B(x)\\ge S_H(x)$ for every $x$, where $S_H$ is the set of squared distances\noccurring in the hexagonal lattices, that is the integers represented by \nthe quadratic form $X^2+XY+Y^2$. It is well-known that\nthe numbers represented by this form are the integers generated by the primes $p\\equiv 1({\\rm mod~}3)$, 3 and\nthe numbers $p^2$ with $p\\equiv 2({\\rm mod~}3)$. Thus $S_H$ is a multplicative set. If $0v>0$ integers. The set $S_{NH}$ of non-hypotenuse numbers\nforms a multiplicative set that is generated by 2 and all the primes $p\\equiv 3({\\rm mod~}4)$. 
\nShow that $L_{NH}(s)=L_{S_B}(s)\/L(s,\\chi_{-4})$ and hence\n$$2\\gamma_{NH}=2\\gamma_{S_B}-2{L'\\over L}(1,\\chi_{-4})=\\gamma-\\log 2 + \\sum_{p>2}{({-1\\over p})\\log p\\over p-1}.$$\n{\\tt Remark}. Put $f(x)=X^2+1$. Cilleruelo \\cite{C} showed that, as $n$ tends to infinity,\n$$\\log {\\rm l.c.m.} (f(1),\\ldots,f(n))=n\\log n +Jn+o(n),$$\nwith\n$$J=\\gamma-1-{\\log 2\\over 2}-\\sum_{p>3}{({-1\\over p})\\log p\\over p-1}=-0.0662756342\\ldots$$\nWe have $J=2\\gamma-1-{3\\over 2}\\log 2-2\\gamma_{NH}$.\\\\\n\\indent Recently the error term $o(n)$ has been improved by \nRu\\'e et al. \\cite{madrid} to \n$$O_{\\epsilon}\\big({n\\over \\log^{4\/9-\\epsilon}n}\\big),$$ with\n$\\epsilon>0$.\\\\\n\\vfil\\eject\n\\noindent {\\tt Exercise 2}. Let $S'_D$ be the semigroup generated by the primes $p$ with $({D\\over p})=-1$. It is easy\nto see that $L_{S'_D}(s)^2=L_{S_D}(s)^2L(s,\\chi_D)^{-2}$ and hence, by Proposition \\ref{2gamma}, we obtain\n\\begin{eqnarray*}\n2\\gamma_{S'_D}&=& 2\\gamma_{S_D}-2{L'\\over L}(1,\\chi_D)\\\\\n&=& \\gamma -{L'\\over L}(1,\\chi_D)-\\sum_{({D\\over p})=1}{2\\log p\\over p^2-1}+\\sum_{p|D}{\\log p\\over p-1}\\\\\n&=&\\gamma+\\sum_p{({D\\over p})\\log p\\over p-1}+\\sum_{p|D}{\\log p\\over p-1}.\n\\end{eqnarray*}\n\n\\centerline{{\\bf Table :} Overview of Euler-Kronecker constants discussed in this paper}\n$~~$\\\\\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\\hline\nset & $\\gamma_{\\rm set} $ & winner & reference \\\\ \\hline \\hline\n$n=a^2+b^2$ & $-0.1638\\ldots$ & Ramanujan & \\cite{Sh} \\\\ \\hline\nnon-hypotenuse & $-0.4095\\ldots$ & Ramanujan & \\cite{Sh2} \\\\ \\hline\n$3\\nmid \\tau$ & $+0.5349\\ldots$ & Landau & \\cite{M} \\\\ \\hline\n$5\\nmid \\tau$ & $+0.3995\\ldots$ & Ramanujan & \\cite{M} \\\\ \\hline\n$7\\nmid \\tau$ & $+0.2316\\ldots$ & Ramanujan & \\cite{M} \\\\ \\hline\n$23\\nmid \\tau$ & $+0.2166\\ldots$ & Ramanujan & \\cite{M} \\\\ \\hline\n$691\\nmid \\tau$ & $+0.5717\\ldots$ & Landau & \\cite{M} \\\\ \\hline\n$q\\nmid \\varphi$, $q\\le 67$ & $<0.4977$ & Ramanujan & \\cite{FLM} \\\\ \\hline\n$q\\nmid \\varphi$, $q\\ge 71$ & $>0.5023$ & Landau & \\cite{FLM} \\\\ \\hline\n$S'_{q;1}$, $q\\le 7$ & $>0.5247$ & Landau & Theorem \\ref{eflm2} \\\\ \\hline\n$S'_{q;1}$, $q>7$ & $<0.2862$ & Ramanujan & Theorem \\ref{eflm2} \\\\ \\hline\n\\end{tabular}\n\\end{center}\n$~~$\\\\\n$~~$\\\\\n\\noindent {\\tt Acknowledgement}. I like to thank Andrew Earnest and John Voight for helpful \ninformation concerning \nqudaratic forms having a value set that is multiplicative, and Ana Zumalac\\'arregui for sending me \\cite{madrid}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}