diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzprmr" "b/data_all_eng_slimpj/shuffled/split2/finalzzprmr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzprmr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nServices running on the client-server model may crash or behave unintentionally from time to time due to software bugs or attacks by malicious users.\nTo prevent such problems and continuously provide services to clients, fault tolerance is an important consideration.\n\\emph{State machine replication} (SMR) \\cite{Schneider1990} is commonly used to improve fault tolerance by replicating a service over multiple replicas.\nIn SMR, the replicated service is called replicas, and the state of all replicas are kept consistent by executing a replication protocol.\nHence, using this method, an active operation can be continued as a whole even if a failure occurs in a part of the replicas.\nSeveral SMR protocols have been proposed in previous studies \\cite{Moniz2011,Cachin2001,Castro2002,Nakamura2014a,Kotla2007,Sousa2012,Bessani2014}.\n\nAn SMR that deploys replicas on a continental scale is called \\emph{geographic SMR} \\cite{Sousa2015,Liu2017,Eischer2018,Mao2008,Veronese2010,Coelho2018}.\nReplicas in geographic SMR are separated by a large distance to withstand a catastrophic disaster, such as an earthquake.\nIf some of the replicas fail, the service can be continued by the replicas in other \\emph{sites} (regions).\nWith the development of public cloud services after the 2000s, geographic SMR can be easily realized.\n\n\nAlthough geographic SMR could have been easily implemented, ways of obtaining the best performance using the optimal replica deployment remain unclear.\nPerformance of a replica deployment depends on several factors, including the location of the leader replica, distances between replicas, and distances between clients and replicas.\nFor example, if replicas are deployed in nearby regions, the time taken for request processing can be shortened, but the fault tolerance will be reduced.\nIn contrast, if the replicas are distributed farther apart from one another, the fault tolerance will increase, but the processing time for a normal request will be slower.\n\nIn this paper, we propose a performance-estimation method to determine the optimal replica deployment for building a service using geographic SMR.\nFirst, we define the task to find the optimal replica deployment among all possible candidates as \\emph{replica deployment decision problem}, which requires to output a ranking of all possible replica deployments sorted by their latencies.\nThe proposed method solves this problem by using an evaluation function that estimates a latency of each replica deployment based on the \\emph{round-trip time} (RTT), which is generally regarded as an important parameter in geographic SMR.\nAlthough it is unrealistic to actually build all possible replica deployments and measure their latencies, RTTs can be measured relatively easily.\nTherefore, this evaluation function is practical and can be used to select the optimal deployment for actual service construction.\n\n\nFinally, we conduct an experimental evaluation using Amazon Web Services with 15 regions to demonstrate the effectiveness and practicality of the proposed method.\nIn the experiment, we actually build thousands of geographic replications and measure their latencies; then we create the measured latency ranking and compare it against the rankings generated by the proposed 
method.\nThe results show that the proposed method with the RTT-based evaluation function can generate a consistent ranking with reasonable calculation time.\n\nIn particular, this paper makes the following contributions:\n\\begin{enumerate}\n\\item It presents a new method that generates a ranking to assist in deciding a replica deployment for geographic SMR.\n\\item It also presents an evaluation function that consistently estimates the latency of a replica deployment by using the round-trip times between sites, which are much easier to measure than the actual latency of the deployment.\n\\item It conducts exhaustive experiments with thousands of replications built on Amazon Web Services, and evaluates the proposed method and the evaluation function.\n\\end{enumerate}\n\n\\section{Background}\n\\label{sec:backgrund}\n\n\n\\subsection{State Machine Replication}\n\\label{sec:smr}\n\n\\emph{State machine replication} (SMR) \\cite{Schneider1990} is a replication method for the client-server model.\nIn SMR, the server is modeled by a state machine; thus, on receipt of a message, the server changes its state and sends messages to other processes if necessary.\nThe server's role is replicated over $n$ replicas that independently operate on distinct hosts and interact with clients via request and response messages.\n\nClient requests to be executed are submitted to all replicas, and the order in which different replicas receive these requests may differ due to variations in the communication delays.\nTherefore, the replicas execute a replication protocol to guarantee that they process requests in the same order to maintain consistency.\nAfter a replica processes a request, it replies to the client with the execution result.\n\n\nThere are two variations of SMR:\nSMR that can withstand crash failures (resp. Byzantine failures) is called CFT SMR (resp. BFT SMR).\nThe number $f$ of faulty replicas that a replication can tolerate is related to $n$ as follows \\cite{Lamport2002}:\n$n \\geq 2f +1$ for CFT SMR and $n \\geq 3f +1$ for BFT SMR.\nHereafter, we assume BFT SMR and $n = 4$ (i.e., $f=1$); however, the proposed method is applicable for any $n$ and $f$ of BFT SMR and CFT SMR.\n\n\\subsection{Related Work}\n\\label{sec:relatedwork}\n\n\n\n\n\n\n\nThe problem of determining the optimal replica deployment has been extensively studied in the field of data replication.\nCook et al. formulated as a cost the time required to read and write data under a simple read-write policy (to read a data object, a client refers to one replica; to write, it transfers the data to all servers holding a replica of it) and proved that this problem is NP-complete \\cite{Cook2002}.\nThey also proposed an approximation algorithm for the problem.\nAlthough the target replication problem is different, their formulation is very similar to the evaluation function proposed in this paper.\nThe survey by Sen et al. 
\\cite{Sen2015} provides a comprehensive overview of the previous studies on the data location optimization problem using mathematical models.\n\nIn the field of geographic SMR, there are a few methods that optimize a replica deployment \\cite{Liu2017,Eischer2018}.\nIn \\cite{Liu2017}, Liu and Vukoli\\'{c} proposed two methods for geographic SMR: Droppy, which dynamically relocates a set of replication leaders according to given replication settings and workload situations, and Dripple, which divides the replicated system state into multiple partitions so that Droppy can efficiently relocate the leaders.\nEischer and Distler proposed Archer \\cite{Eischer2018}, which relocates leaders based on their response times as measured by clients.\nA hash-chain-based technique was employed in the protocol to allow clients to detect illegal phases caused by Byzantine replicas, preventing such replicas from being wrongly assigned as leaders.\n\nIn this paper, we propose a method that can help identify the best replica deployment when building a geographic SMR.\nThe proposed method differs from these prior studies in several ways.\nFirst, the proposed method can be used with any replication protocol by defining an evaluation function to calculate the estimated latency of different replica deployments.\nIn contrast, although Droppy and Archer can dynamically relocate the leader replicas, they only support leader-based replication protocols.\nSecond, the proposed method can also identify the best replica deployment from all possible replica deployments; this complements these existing methods, which are limited to determining an assignment of replication roles to the replicas in a replication.\n\n\n\\section{Replica Deployment Decision Problem}\n\\label{sec:problem-definition}\n\nWe formally define the problem addressed herein as a \\emph{replica deployment decision problem}.\nIn the definition, we call a location wherein a replica (or a client) can be deployed a \\emph{site}\\footnote{For example, if the SMR is built on a public cloud service, each region is a site; if it is built in facilities on premises, each data center is a site.}.\nIn the problem, the following inputs are provided by a user.\n\\begin{itemize}\n \\item $n$: the number of replicas that the user wants to deploy\n \\item $SC$: a set of candidate sites wherein replicas can be deployed\n \\item $C$: a set of client locations\n\\end{itemize}\n\nThe goal of this problem is to output a ranking\\footnote{\nThe proposed method outputs not only the best replica deployment, but also the whole ranking of all possible deployments, because the best deployment may not be acceptable for some reason other than latency.\n}\nof replica deployments sorted by latency (of course, a replica deployment with smaller latency is ranked higher).\nThe user will then choose the final replica deployment for the SMR from this ranking.\nHere, latency is defined as the time taken by a client from sending a request to the replicas until receiving its response.\n\n\n\n\n\n\\section{Proposed Method}\n\\label{sec:proposed-method}\n\nIn this section, we propose a method to solve the replica deployment decision problem and to determine the optimal replica deployment from all the possible deployments for geographic SMR.\nUsing the proposed method, any replication configuration can be evaluated without actually building it.\n\n\n\n\n\\subsection{Overview}\n\\label{sec:proposed-method-overview}\n\nFigure \\ref{fig:approach} illustrates the overview of the proposed 
method, which consists of the following steps:\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[width=65mm]{fig\/conceptual_diagram_of_propose_method.pdf}\n \\caption{Overview of the proposed method}\n \\label{fig:approach}\n\\end{figure}\n\\begin{enumerate}\n\\item First, a set, $DC$, of all possible replica deployments is created based on $SC$ and $n$.\nEach replica deployment is expressed as a pair of locations for the leader and the other replicas\\footnote{Here, we assume rotating coordinator-based SMR protocols similar to those of \\cite{Lamport1998,Castro2002,Sousa2012,Kotla2007}.\nIf the proposed method is applied to leader-less SMR protocols similar to those of \\cite{Moniz2011,Cachin2001,Nakamura2014a}, then each replica deployment is simply expressed as a set of replica locations of size $n$.\n}.\n\\item Next, for each replica deployment $x \\in DC$, its latency is estimated using the evaluation function $f(x, C)$ based on the measured RTTs.\nThis function is further described in Section \\ref{sec:evaluation-function}.\n\\item The elements in $DC$ are sorted based on their calculated latencies; the sorted result is output as the ranking for the inputs.\n\\end{enumerate}\nThus, the replica deployment with the shortest latency is ranked as the best replica deployment.\n\n\\subsection{Evaluation Function $f(x, C)$} \n\\label{sec:evaluation-function}\n\nThe evaluation function $f(x, C)$ outputs an estimated latency for a replica deployment $x$ and client locations $C$ by tracing the message transmissions specific to the replication protocol being used.\nThe function plays an important role in the proposed method.\n\n\n\\subsubsection{Approach}\n\\label{subsubsec:evaluation-function-approach}\n\nIf the set of site candidates $SC$ is large, it is impractical to actually build SMRs with all possible replica deployments to evaluate their latencies.\nTherefore, the evaluation function estimates them based on the round-trip times (RTTs) between sites, which can be measured much more easily, and outputs the estimate as the latency for that deployment.\nIn other words, before using the proposed method, a user must measure the RTTs between candidate sites in advance.\nHere, the time required for message processing in a replica is disregarded because the communication delay between replicas is large compared with the processing time in a geographic SMR.\n\nAssuming that latency can only be estimated from the communication time, two factors must be considered: the types of message communications (i.e., \\emph{message transmission patterns}) that constitute the latency and the communication times between sites.\nThe message transmission pattern can be found by referring to the SMR protocol used in a replication.\nThen, for a given set $C$ of clients and replica locations $x$, the function simulates the transmission and receipt of messages based on the message transmission pattern of the replication protocol and the measured RTTs.\n\nHere, we model the message transmission pattern of Mod-SMaRt \\cite{Sousa2012} of BFT-SMaRt \\cite{Bessani2014} as an example; however, we believe the same approach can be applied to other SMR protocols.\nIn Mod-SMaRt, a special replica (called a \\emph{leader} replica) determines the order in which requests are executed and communicates this order to the other replicas.\nThe message transmission pattern involves five types of messages that are exchanged among the client and replicas to process the request, as shown in Fig. 
\\ref{fig:bft-smart-message-flow}.\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[width=65mm]{fig\/bft-smart-message-flow.pdf}\n \\caption{The message transmission pattern for the Mod-SMaRt protocol \\cite{Sousa2012} in BFT-SMaRt \\cite{Bessani2014}.\n Replica 1 is the leader replica and Req., P, W, A, and Res. indicate Request, Propose, Write, Accept, and Response messages, respectively.}\n \\label{fig:bft-smart-message-flow}\n\\end{figure}\nFirst, the client sends a request to each replica (Request).\nWhen the leader replica receives the request, it sends Propose messages to each replica to propose a candidate value for agreement (Propose).\nThen, Write and Accept messages are exchanged between all replicas to confirm the validity of the candidate values and determine the final agreed value (Write and Accept).\nFinally, the replicas execute the ordered request and return the result to the client (Response).\nHereafter, the RTT and the one-way message transmission delay between sites $a$ and $b$ are denoted as $\\mathrm{RTT}(a, b)$ and $\\mathrm{delay}(a, b) = \\mathrm{RTT}(a, b)\/2$, respectively.\n\n\n\n\\subsubsection{Latency Formulation}\n\\label{subsubsec:latency-calculating}\n\nThe evaluation function $f$ estimates the latency for each client location $c \\in C$, and outputs the average of these latencies as follows:\n\\begin{equation}\n f(x, C) = \\sum_{c \\in C} f_c(x, c) \/ |C|,\n\\end{equation}\nwhere $f_c$ is an evaluation function for a single client.\nHereafter, we explain how $f_c(x, c)$ calculates the latency of a replica deployment.\nThe message pattern of Mod-SMaRt comprises five parts as depicted in Fig.~\\ref{fig:bft-smart-message-flow}, and we denote the timings of these parts by $S_{req}$, $S_{pro}$, $S_{wrt}$, $S_{acc}$, and $S_{res}$, respectively. 
\nIf necessary, we denote the timing for a specific replica $r_i$ by adding a superscript such as $S_{pro}^i$.\n\nFirst, we calculate the timing $S_{req}$ at which the leader receives a request.\nIn the replication protocol, a request message is sent from a client to each replica, although only the leader replica processes the request in the fault-free case;\nthus, $S_{req}$ can be expressed as the average of the one-way delays from each client $c$ to the leader replica $l$:\n\\begin{equation}\n S_{req} = \\sum_{c \\in C} \\mathrm{delay}(c, l) \/ |C|.\n\\end{equation}\n\nThen, the leader sends the request to each replica as Propose messages; the timing $S_{pro}^i$ at which the replica $r_i$ receives the Propose message is expressed as follows:\n\\begin{equation}\n S_{pro}^i = S_{req} + \\mathrm{delay}(l, r_i).\n\\end{equation}\n\nWhen a replica receives the Propose message, it broadcasts a Write message to all replicas.\nEach replica accepts the Write message when it receives the same Write message from a majority $\\lceil (n+1)\/2 \\rceil$ of the replicas.\nThe timing $S_{wrt}^i$ at which replica $r_i$ accepts the Write messages can be calculated from the timings at which replica $r_{i}$ receives the Write messages sent from the replicas $r_j$:\n\\begin{equation}\n S_{wrt}^i = \\mathrm{find}(T^i_{wrt}, \\lceil (n+1)\/2 \\rceil),\n\\end{equation}\nwhere $t_{wrt}(r_i, r_j) = S_{pro}^j + \\mathrm{delay}(r_j, r_i)$, $T_{wrt}^i = \\{ t_{wrt}(r_i, r_j) \\mid 0 \\leq j < n \\}$, and $\\mathrm{find} (S, k)$ is a function that returns the $k$-th smallest element of set $S$.\n\nAn Accept message is sent in the same way as Write messages.\nTherefore, if we define $t_{acc}(r_i, r_j) = S_{wrt}^j + \\mathrm{delay}(r_j, r_i)$, $S_{acc}^i$ is\n\\begin{equation}\n S_{acc}^i = \\mathrm{find}(T^i_{acc}, \\lceil (n+1)\/2 \\rceil),\n\\end{equation}\nwhere $T_{acc}^i = \\{ t_{acc}(r_i, r_j) \\mid 0 \\leq j < n \\}$.\n\nFinally, when a replica receives a majority of Accept messages, it executes the request and sends the execution result to the client as a Response message.\nWhen a client receives the same response message from $f + 1$ distinct replicas, it accepts the result.\nTherefore,\n\\begin{equation}\n f_c(x, c) = S_{res} = \\mathrm{find}(T_{res}, f+1),\n\\end{equation}\nwhere $T_{res} = \\{ S_{acc}^i + \\mathrm{delay}(r_i, c) \\mid 0 \\leq i < n \\}$.
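\n\nFor concreteness, the following Python sketch implements $f_c$ directly from the formulas above (the data structures, e.g., a nested dictionary of one-way delays, are our own illustrative choices and not part of the original protocol):\n\\begin{verbatim}\ndef find(values, k):\n    # k-th smallest element of a collection (1-indexed)\n    return sorted(values)[k - 1]\n\ndef f_c(delay, clients, replicas, leader, c, f):\n    # delay[a][b]: one-way delay between sites a and b (RTT \/ 2);\n    # replicas includes the leader site\n    n = len(replicas)\n    quorum = (n + 2) \/\/ 2  # equals ceil((n + 1) \/ 2)\n    # Request: average delay from the clients to the leader\n    s_req = sum(delay[x][leader] for x in clients) \/ len(clients)\n    # Propose: the leader forwards the request to every replica\n    s_pro = {r: s_req + delay[leader][r] for r in replicas}\n    # Write: each replica waits for a majority of Write messages\n    s_wrt = {r: find([s_pro[j] + delay[j][r] for j in replicas],\n                     quorum) for r in replicas}\n    # Accept: the same exchange pattern as Write\n    s_acc = {r: find([s_wrt[j] + delay[j][r] for j in replicas],\n                     quorum) for r in replicas}\n    # Response: the client accepts the (f+1)-th matching reply\n    return find([s_acc[r] + delay[r][c] for r in replicas], f + 1)\n\\end{verbatim}\nAveraging this value over all $c \\in C$ yields $f(x, C)$.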
\"Low to Moderate\" performance.\n\n\\subsection{Validation of the Use of RTTs}\n\\label{sec:rtt-experiment}\nThe proposed method calculates lantecies based on the RTTs between sites.\nHere, we evaluate whether it is appropriate to use the RTTs for estimating lantecy and how long the generated ranking is valid.\n\n\\subsubsection{Method}\n\\label{subsubsec:rtt-measuring-method}\n\nAn instance is deployed in each of the regions, and the \\verb,ping, command is executed against the instances in the other 14 regions every two seconds.\nRTTs were measured during the following periods (all times are displayed in UTC in 24-h notation):\n\\begin{itemize}\n\\item Term A: March 7, 19:27 -- 22:13, 2018\n\\item Term B: January 11, 11:14 -- January 28, 3:41, 2019\n\\item Term C: April 15, 15:48 -- April 23, 11:15, 2019\n\\end{itemize}\n\n\n\\subsubsection{Results and Discussion}\n\\label{subsubsec:rtt-results}\n\nRTTs measured during Term C are shown as a boxplot in Fig. \\ref{fig:rtt-ireland-all} (only the results for the Ireland instance are shown due to space limitations).\nAlthough RTT varied from region to region, these variations were small.\nThe largest variation was observed between the Ireland and Singapore regions, and its mean and standard deviation were 180.3 and 24.1 ms, respectively.\n\n\\begin{figure}[tp]\n\t\\centering\n\t\\includegraphics[scale=0.80]{fig\/ireland-boxplot-all.pdf}\n\t\\caption{Distribution of RTT from Ireland to each region during term A.}\n\t\\label{fig:rtt-ireland-all}\n\\end{figure}\n\nNext, we compare the RTTs from Ireland to Singapore (where the largest variations were observed) during terms A, B, and C.\nThe average RTTs were 175.3 ms in term A, 179.8 ms in term B, and 180.3 ms in term C.\nOver the 13 months between term A and term C, RTT increased by 5 ms.\nAlthough this may seem like a small difference, if similar changes occurred between all regions, it is likely that the ranking generated by the proposed method would change considerably.\n\n\n\nTo investigate how these difference affects a replica deployment ranking, we generated two rankings from the RTTs measured during Terms A and C with the client location Multiple (see Section \\ref{subsubsec:latency-measuring-method} for its definition).\nFigure \\ref{fig:rtt_termA_vs_termC} shows the correlation between these rankings. 
\nWe can observe that the RTT changes clearly affected the ranking, especially for the 2000--5000 ranks.\nThe largest difference occurred for the replica deployment of Tokyo (leader), Canada, Oregon, and Singapore.\nThe deployment was in 3523rd place in the Term A ranking, while it was in 2688th place in the Term C ranking.\n\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[scale=0.31]{fig\/scatter_WPSDS_TermA-TermC_Fstrict.pdf}\n \\caption{Difference between the rankings of Terms A and C.}\n \\label{fig:rtt_termA_vs_termC}\n\\end{figure}\n\nThe results indicate that the RTT variations in the public cloud are sufficiently small in the short term; thus, estimating the latency of a replica deployment based on the RTTs between sites is valid.\nIn contrast, RTTs between regions changed over long periods (on the order of one year).\nTherefore, a replica deployment that is found to be optimal may no longer be optimal after a long time has passed, suggesting that replicas should be relocated periodically to maintain optimal performance.\n\n\\subsection{Ranking Accuracy}\n\\label{sec:latency-experiment}\nHere, we discuss the accuracy of a ranking generated by the proposed method by comparing the rankings with those derived from the experimentally measured latencies of all possible replica deployments.\n\n\\subsubsection{Method}\n\\label{subsubsec:latency-measuring-method}\n\n\nWe introduce a baseline evaluation function $f_{simple}(x, C)$ against which to compare the accuracy of the evaluation function of the proposed method.\nThis function roughly estimates the latency based on a simplified message pattern for Mod-SMaRt.\nFirst, it divides the pattern into three parts: Request, Byzantine Consensus, and Response, as in Fig.~\\ref{fig:bft-smart-message-flow}, and calculates their timings $S_{req}$, $S_{con}$, and $S_{res}$ as follows:\n$S_{req}$ is the average of the half RTTs from each client to the leader replica.\n$S_{con}$ is the sum of the half RTTs between all pairs of replicas.\n$S_{res}$ is the average of the half RTTs from each replica to each client.\nFinally, this function outputs the sum of these timings as the latency.\n\nIn this experiment, all possible replica deployments are built on AWS and the latency of each one is measured.\nWe do not consider deployments that place multiple replicas in the same region.\nSince $|SC| = 15$ and $n = 4$, the total number of possible replica deployments is $|DC| = |SC| \\times {}_{|SC|-1}C_{n-1} = 5,460$.\nIf replicas are deployed to the same combination of regions, the location of the leader replica may differ; hence, such deployments are considered independently. 
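\nFor reference, these candidates can be enumerated with a few lines of Python (an illustrative sketch; site names stand in for AWS regions):\n\\begin{verbatim}\nfrom itertools import combinations\n\ndef enumerate_deployments(sites, n):\n    # a deployment = a leader site plus n-1 follower sites;\n    # the same regions with a different leader count as a\n    # distinct deployment, as described above\n    deployments = []\n    for leader in sites:\n        others = [s for s in sites if s != leader]\n        for followers in combinations(others, n - 1):\n            deployments.append((leader, followers))\n    return deployments\n\n# with |SC| = 15 and n = 4: 15 * C(14, 3) = 5,460 deployments\n\\end{verbatim}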
\n\nAs with the replicas, it is assumed that the clients are also located in AWS regions.\nTo evaluate the effects of the number and locations of clients, clients are placed in geographically distant regions, namely Ireland, Sydney, and N.~Virginia.\nThe case wherein multiple clients are placed in multiple regions (we call this deployment ``Multiple'') is also evaluated: 10 clients are placed in Ireland, 3 clients are placed in Sydney, and 5 clients are placed in N.~Virginia.\n\nSMR is built using the open-source SMR library BFT-SMaRt \\cite{Bessani2014}\\footnote{\\url{https:\/\/github.com\/bft-smart\/library\/releases\/tag\/v1.1-beta}}.\nA replication is built to withstand Byzantine failures; the tolerable number of failures is $f=1$ and the number of replicas is $n=4$.\nThe defaults are used for all other BFT-SMaRt settings.\n\nAll latencies are measured using the sample programs LatencyClient and LatencyServer bundled with BFT-SMaRt.\nLatencyClient periodically sends requests to the service and measures the latency.\nLatencyServer is a dummy service that provides no functionality; it simply returns a response immediately after receiving a request from a client.\nThe payload sizes of the requests and responses are 1,024 bytes.\nLatencyClient sends 50 requests, one every 2 s.\nThe top 10\\% (i.e., the highest five values) and bottom 10\\% (i.e., the lowest five values) of the measured values are considered outliers and disregarded;\nthe average of the remaining values (40 values in total) is taken as the latency of the replica deployment.\nThe latency is estimated with the average RTTs measured during Term C in Section \\ref{subsubsec:rtt-measuring-method}.
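\n\nThe agreement between an estimated ranking and the measured one, reported in the next subsection, can be computed as follows (a sketch; reading the CC as the Pearson correlation between the two rank sequences is our interpretation, as the exact definition is not stated):\n\\begin{verbatim}\nimport numpy as np\n\ndef ranking_metrics(estimated_latency, measured_latency):\n    # rank the deployments under each metric (0 = fastest)\n    est_rank = np.argsort(np.argsort(estimated_latency))\n    mea_rank = np.argsort(np.argsort(measured_latency))\n    # RMSE of the points around the ideal ranking y = x\n    rmse = np.sqrt(np.mean((est_rank - mea_rank) ** 2.0))\n    cc = np.corrcoef(est_rank, mea_rank)[0, 1]\n    return rmse, cc\n\\end{verbatim}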
\n\n\\subsubsection{Results and Discussion}\n\\label{subsubsec:latency-results}\n\nFigure \\ref{fig:multiple-5460} shows the correlations between the rankings generated via the proposed method using the two evaluation functions and the ranking based on the measured latencies.\nDue to space limitations, only the results for Sydney are shown.\nTable \\ref{tab:scatter-result-overall} also shows the root mean square error (RMSE)\ncalculated based on the ideal ranking (i.e., $y = x$), which perfectly matches the ranking based on the measured latencies, and the correlation coefficient (CC) for each client location.\nThe results indicate that the RMSE was lower and the CC was higher (exceeding 0.91 in all cases) for $f$ than for $f_{simple}$ for all client locations.\nThis implies that $f$ yielded more accurate rankings by tracing the communications between the replicas in detail.\n\n\\begin{figure}[tp]\n\t\\centering\n\t\\includegraphics[scale=0.31]{fig\/ScatterPlot_WPSDS_single_sydney.pdf}\n\t\\caption{\n \tScatter plots of measured latency rank and estimated latency rank ($C$ = Sydney, $|DC| = 5460$).\n Each plotted point represents the latency of a replica deployment (red for $f$ and blue for $f_{simple}$).\n The horizontal axis represents the ranking derived from the latencies output by the proposed method with $f$ or $f_{simple}$, and the vertical axis represents the ranking derived from the measured latencies.\n }\n\t\\label{fig:multiple-5460}\n\\end{figure}\n\n\\begin{table}[tp]\n \\caption{RMSE and correlation coefficient (CC)}\n \\begin{center}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\textbf{} & \\multicolumn{2}{|c|}{\\textbf{RMSE}} &\\multicolumn{2}{|c|}{\\textbf{CC}} \\\\\n \\cline{2-5} \n \\textbf{Client location} & \n \\textbf{\\textit{$f_{simple}$}}& \\textbf{\\textit{$f$}}&\n \\textbf{\\textit{$f_{simple}$}}& \\textbf{\\textit{$f$}} \\\\ \\hline\n Ireland & 759.411 &\t620.686 & 0.884 & 0.922\\\\ \\hline\n N. Virginia & 722.516 &\t548.982 &\t0.895 & 0.939\\\\ \\hline\n Sydney & 985.598 & 638.275 & 0.804 & 0.918\\\\ \\hline\n Multiple & 697.473 & 629.228 & 0.902 & 0.920\\\\ \\hline\n \\end{tabular}\n \\label{tab:scatter-result-overall}\n \\end{center}\n\\end{table}\n\n\n\n\nThese experiments confirmed that the proposed method can generate consistent rankings for various client locations.\nFurther, they revealed that the rankings generated by $f$ are more accurate than those generated by $f_{simple}$ (particularly for the higher-ranked deployments).\nHence, reproducing the replication protocol's message pattern more faithfully yields a more accurate replica deployment ranking.\n\n\n\\subsection{Calculation Time to Generate a Ranking}\n\\label{sec:ranking-experiment}\n\nFinally, we evaluate the calculation time required to generate a ranking with the proposed method\\footnote{\nAll the rankings were calculated by a program implemented in Python 3.6 on the following PC: Intel Core i5 7400, Windows 10 Home 64-bit.\n}.\nThe ranking calculation times of $f_{simple}$ and $f$ were 1.88 s and 10.88 s, respectively, for $n=4$; that is, $f_{simple}$ is nearly six times faster than $f$. \nThis indicates that the additional fidelity with which $f$ reproduces the communication comes at the cost of additional calculation time.\n\n\nNext, we investigate the influence of the size of $SC$ on the calculation time with $f$.\nTable \\ref{tab:ranking-each-SC} shows the resulting calculation times for different $|SC|$ values and the corresponding calculation times per replica deployment $t\/|DC|$.\nThe results show that as the size of $SC$ increased, the calculation time required to generate the rankings increased considerably because the total number of replica deployments $|DC|$ grows rapidly with $|SC|$.\n\\begin{table}[tp]\n \\centering\n \\caption{Calculation time by size of site candidates $SC$ ($n = 4$)}\n \\label{tab:ranking-each-SC}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n \\textbf{$|SC|$} & \\textbf{Time $t$ [sec]} & \\textbf{$|DC|$} & \\textbf{$t\/|DC|$ [msec]}\\\\ \\hline\n 15 & 12.9 & 5,460 & 2.36\\\\ \\hline\n 20 & 39.5 & 19,380 & 2.04\\\\ \\hline\n 25 & 110.5 & 50,600 & 2.18\\\\ \\hline\n 30 & 227.3 & 109,620 & 2.07\\\\ \\hline\n \\end{tabular}\n\\end{table}\n\n\n\nFurthermore, we investigate the influence of the number of replicas $n$ on the calculation time with $f$.\nTable \\ref{tab:ranking-each-n} shows the calculation time $t$ for different $|DC|$ values and the corresponding calculation times per replica deployment, $t\/|DC|$, as $n$ is varied.\nThe results show that $t$ and $|DC|$ were maximized at different values of $n$ (10 and 7, respectively) because the calculation time per replica deployment $t\/|DC|$ increases as $n$ increases.\n\n\\begin{table}[tp]\n \\centering\n \\caption{Calculation time by the number of replicas $n$ ($|SC| = 15$)}\n \\label{tab:ranking-each-n}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n \\textbf{$n$} & \\textbf{Time $t$ [sec]} & 
\\textbf{$|DC|$} & \\textbf{$t\/|DC|$ [msec]}\\\\ \\hline\n 4 & 12.9 & 5,460 & 2.36\\\\ \\hline\n 7 & 429.1 & 45,045 & 9.53\\\\ \\hline\n 10 & 792.1 & 30,030 & 26.38\\\\ \\hline\n 13 & 77.7 & 1,365 & 56.90\\\\ \\hline\n \\end{tabular}\n\\end{table}\n\n\nThese measurement results reveal that the rankings for replica deployments can be calculated in several hundred seconds when the numbers of replicas and sites are relatively small.\nThis is considered a reasonable calculation time, since a deployed SMR is typically operated for more than one year. \nIn contrast, if large numbers of replicas and site candidates are used, the calculation time becomes very large.\nIn such a case, some changes need to be made so that the solution remains practical, e.g., calculating latencies in parallel or discarding replica deployments that seem likely to be slow.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn this paper, we addressed the difficulty of determining the optimal replica deployment for geographic state machine replication by proposing a novel method to generate a ranking of all possible replica deployments.\nWe introduced an evaluation function that estimates the latency of each replica deployment based on the RTTs between sites, which are easy to measure without actually building the deployments.\nHence, all possible replica deployments can be evaluated and ranked accordingly to determine the optimal replica deployment for geographic SMR.\nWe confirmed the validity of evaluating replica deployments in terms of their RTTs.\nAfter that, we measured the latencies of thousands of replica deployments built on Amazon Web Services, and ranked the deployments accordingly.\nThen, we compared this experimentally derived ranking with the rankings generated using the proposed method.\nThe results showed that the proposed method can create a ranking with sufficient accuracy in a reasonable time.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDeep X-ray surveys have recently revealed a population of moderately to heavily\nabsorbed active galactic nuclei (AGN) at faint fluxes. A few such objects \nare known to be at high redshift; for example, one source discovered by\n{\\it ROSAT}\\ is at z=2.35 (Almaini et al 1995) and two others discovered by \n{\\it ASCA}\\ have z=0.9 and z=0.672 (Ohta et al 1996; Boyle et al 1998). \nThe so-called Narrow-Line X-ray Emitting Galaxies (NLXGs) might, in fact,\nbe the low redshift counterparts of these obscured objects, since both\nclasses are characterised by hard X-ray spectra (Carballo et al 1995; \nAlmaini et al 1996). The discovery of \nsuch sources at faint X-ray fluxes is of vital importance in explaining the \norigin of the X-ray background, since the brightest AGN in the X-ray sky \n(mostly type 1 AGN) generally have much softer X-ray spectra than the X-ray \nbackground spectrum (Fabian \\& Barcons 1992).\n\nThe UK {\\it ROSAT} Medium Sensitivity Survey (Branduardi - Raymond et al\n1994; Carballo et al 1995) was carried out in order to identify a\ncomplete sample of moderately faint X-ray selected sources (flux over\nthe 0.5--2 keV band in excess of $1.7\\times 10^{-14}{\\rm erg}\\, {\\rm cm}^{-2}\\, {\\rm s}^{-1}$) over a\nsignificant area of the sky (2.2 deg$^2$) in a region of minimal\nGalactic absorption. 
In this survey the source with the highest hardness\nratio is RX J1011.2+5545, with $HR=0.67$ ($HR=(H-S)\/(H+S)$, where $S$ and \n$H$ are the counts in PSPC channels 11--39 and 40--200 respectively). It is\nalso one of the brightest sources in the survey, with a flux $S(0.5-2\\,\n{\\rm keV} )=6.6\\times 10^{-14}\\, {\\rm erg}\\, {\\rm cm}^{-2}\\, {\\rm s}^{-1}$. The hard X-ray spectrum, together\nwith the fact that the source has no optical counterpart visible on the POSS\nplates (which is atypical of X-ray sources at this flux level), \nsuggested a possibly highly obscured source and prompted us to start a\nprogram of follow-up optical and {\\it ASCA}\\ hard X-ray observations. A\nNED search also revealed that the source is a radio-emitter at various\nfrequencies, with a double lobe morphology. The combination of\nradio, optical and X-ray data has enabled us to classify this object as a\nradio-loud, moderately obscured, high-excitation AGN at a redshift\n$z=1.246$. It is the first X-ray selected obscured AGN at high redshift\nfound to be radio loud. In this paper we report on all \nof the recent observations and discuss the nature of this source.\n\n \n\\section {The Data}\n\n\\subsection{{\\it ROSAT} soft X-ray observations}\n\nThe discovery observation was carried out on May 11, 1992 with \nthe {\\it ROSAT}\\ PSPC-B, giving an exposure time of 18529s. The data were \nreduced and scanned for sources as described in Carballo et al (1995). \nAfter a number of sources in the PSPC image were identified with optical \ncounterparts, the astrometry of the X-ray field was corrected by applying \nshifts in RA and DEC. The final X-ray position for the source \nRX J1011.2+5545 is $10^h11^m12^s.4$ and $55^{\\circ}44'50''$\n(J2000) with a 90 per cent error circle of radius $\\sim 4''$.\nThe X-ray image showed no evidence for any extension in the source\n(the FWHM is $27''$, consistent with the PSF at an offset angle\nfrom the {\\it ROSAT}\\ field centre of $7.3'$).\nThe Galactic column density in this direction is $6.7\\times 10^{19}\\, {\\rm cm}^{-2}$.\n\n\n\n\n We used the FTOOLS\/XSELECT V3.6 package to extract the counts contained\n within a circle of radius $1.5'$ centered on the source and used \n a ``source-free'' region of radius $6.5'$ at a similar off-axis angle\n for the background subtraction. For the purpose of spectral fitting we \n grouped the PSPC pulse-height data so that every spectral bin\n contained at least 20 counts, leading to a 0.1--2.0 keV source\n spectrum with just 6 bins. \n\n A single power-law fit, assuming only Galactic line-of-sight absorption, \n gives a very flat photon spectral index\n $\\Gamma=0.93_{-0.23}^{+0.20}$ (we always quote 90 per cent errors for\n a single parameter). However, the quality of the fit is not very good\n ($\\chi^2\/{\\nu}=10.2\/4$), corresponding to a probability for the null \n hypothesis (PNH) of only 3.7 per cent. The inclusion in the spectral model\n of absorption \n intrinsic to the X-ray source produces a somewhat better fit \n ($\\chi^2\/{\\nu}=4.4\/3$) with a steeper underlying power law\n (although formally the improvement in the fit is not significant\n in terms of the F-test). Clearly data at higher energies are required\n in order to better constrain the continuum slope in this source.\n\n\\subsection{{\\it ASCA} hard X-ray observations}\n\nRX J1011.2+5545 was observed with {\\it ASCA}\\ on November 12-13,\n1995. The source was clearly seen in both the SIS0 and SIS1 cameras,\nwhich were operated in 1-CCD mode (Bright mode). 
Standard FTOOLS\/XSELECT V3.6\ntasks and techniques were used to clean the data (using default\nparameters), resulting in effective exposure times of 53054s (SIS0)\nand 52855s (SIS1). In this paper we ignore the GIS2 and GIS3\nobservations since the source is barely detected in these detectors. \nWe rely on the spectral calibration of the SIS0 data, in preference to that \nfor SIS1 when necessary.\n\nA spectrum was extracted from a $3'$ radius region centred on the\nsource. The background subtraction was found to be much more accurate\nwhen we chose a source-free region within the same image rather than\nusing the available archival background images. (For example, a\ndetector Fe fluorescent line in the 6-7 keV spectral region went away\nwhen we used the adopted method, but not when the archival background\nwas used). After background subtraction the resulting spectrum was\nagain binned in order to give a minimum of 20 counts per spectral\nchannel. The result was 16 bins for SIS0 and 15 for SIS1. No\nsignificant source variability was found in the data.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.xray.ps,width=0.5\\textwidth,angle=270}}\n \\caption{The measured {\\it ROSAT}\\ and {\\it ASCA}\\ X-ray spectra of RX J1011.2+5545\n together with the residuals to the best fitting (power-law plus\n intrinsic absorption) model. The filled squares, empty squares and triangles\n are the {\\it ROSAT}\\ PSPC, the {\\it ASCA}\\ SIS0 and the {\\it ASCA}\\ SIS1 data points\n respectively. The {\\it ROSAT}\\ PSPC model is shown with a dashed line, the\n {\\it ASCA}\\ SIS0 one with a solid line and the {\\it ASCA}\\ SIS1 one with a\n dotted line.}\n\\end{figure}\n\nThe simultaneous fitting of the SIS0 and SIS1 data with a single power-law \nmodel (but with different normalisations applying to the two detectors\nto allow for calibration uncertainties) gives an acceptable fit\n($\\chi^2\/{\\nu}=34.5\/28$ with PNH of 18.5 per cent) with a rather flat photon\nindex $\\Gamma=1.43_{-0.23}^{+0.24}$. The inclusion of absorption\nintrinsic to the X-ray source again produces a steeper underlying power law\nbut with only a modest improvement in the fit ($\\chi^2\/{\\nu}=31.8\/27$)\n(which again is not a significant improvement in terms of the F-test).\n\n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.cont.ps,width=0.5\\textwidth,angle=270}}\n \\caption{Confidence contours (68, 90 and 99 per cent confidence) for the intrinsic absorption and photon\n index from the combined {\\it ROSAT}\\ and {\\it ASCA}\\ data. }\n\\end{figure}\n\n\nWe then combined the {\\it ROSAT}\\ and {\\it ASCA}\\ data so as to better constrain the\nspectral parameters. Our approach has been to assume the same value of the\nmodel normalisation for the {\\it ROSAT}\\ and {\\it ASCA}\\ SIS0 data but allow\na different normalisation for {\\it ASCA}\\ SIS1 data. This procedure\nproduces a significantly better fit than taking the same normalisation for all\nthree datasets (at 99.9 per cent using F-test), whereas introducing different\nnormalisations for each instrument does not result in a\nsignificant improvement. A single power law fit is only marginally\nacceptable ($\\chi^2\/{\\nu}=53.9\/34$ with a PNH=1.6 per cent), with\n$\\Gamma=1.13\\pm 0.16$. However, the fit improves if absorption intrinsic to \nthe source is included ($\\chi^2\/{\\nu}=47.2\/33$ with PNH of 5.2 per cent). \nThe best fit (see Fig. 
1) corresponds to $\\Gamma=1.45^{+0.72}_{-0.28}$ and\n$N_H=(2.1^{+12.4}_{-1.6})\\times 10^{21}\\, {\\rm cm}^{-2}$ (at the redshift of\nthe source $z=1.246$). Fig. 2 shows the confidence\ncontours for the two free spectral parameters.\n\nThere is some evidence for significant residuals in all three\ninstruments at $\\sim 1$ ${\\rm keV}$ (Fig. 1). The inclusion of a\nGaussian-line component centered at this energy results in a further\nimprovement of the fit ($\\chi^2\/{\\nu}= 31.8\/29$). However, it is not\ncompletely obvious that such a feature is associated with the source;\nthe corresponding rest-frame energy is $2.2\\pm 0.1\\, {\\rm keV}$ with a\nrest-frame equivalent width $\\sim 165\\, {\\rm eV}$. One could identify\nthis as a SiXIV-SiXVI complex (Netzer \\& Turner 1997), but then the Fe\nK line should be seen in the spectrum and it is not (rest-frame\nequivalent width $< 418$ eV at 95 per cent confidence). Attempts to\naccount for these residuals in the X-ray spectrum in terms of ionised\nabsorbers did not yield a significant improvement in the fit.\n\nWe conclude that the absorbed, power-law fit is the most tenable model.\nThe flux of the source is $S(0.5-2\\, {\\rm keV})=6.6\\times 10^{-14}\\, {\\rm erg}\\, {\\rm cm}^{-2}\\, {\\rm s}^{-1}$ and\n$S(2-10\\, {\\rm keV})=1.9\\times 10^{-13}\\, {\\rm erg}\\, {\\rm cm}^{-2}\\, {\\rm s}^{-1}$, and the K-corrected rest\nframe luminosity (using the measured redshift of $z=1.246$) is\n$L(0.5-2\\, {\\rm keV})= 4.8\\times 10^{44} {\\rm erg}\\, {\\rm s}^{-1}$ and $L(2-10\\, {\\rm keV})=\n2.1\\times 10^{45} {\\rm erg}\\, {\\rm s}^{-1}$ ($H_0=50\\, {\\rm km}\\, {\\rm s}^{-1}\\, {\\rm Mpc}^{-1}$ and $q_0=0$).
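\nAs a rough consistency check on these luminosities (a back-of-the-envelope sketch; the $q_0=0$ luminosity distance and the simple power-law K-correction used here are our assumed conventions), for the hard band\n\\begin{displaymath}\nd_L = \\frac{c}{H_0}\\, z \\left( 1+\\frac{z}{2} \\right) \\simeq 12.1\\ {\\rm Gpc}, \\qquad\nK(z) = (1+z)^{\\Gamma-2} \\simeq 0.64,\n\\end{displaymath}\n\\begin{displaymath}\nL(2-10\\, {\\rm keV}) = 4\\pi d_L^2\\, S(2-10\\, {\\rm keV})\\, K(z) \\simeq 2.1\\times 10^{45}\\, {\\rm erg}\\, {\\rm s}^{-1},\n\\end{displaymath}\nin agreement with the value quoted above.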
\n\n\\subsection{Optical imaging}\n\nThe POSS plates show no counterpart within or near the position of the\n{\\it ROSAT}\\ source. In order to search for fainter candidate optical \ncounterparts, we imaged the field (as for all the other survey sources) \nwith the CCD\ncamera at the Cassegrain focus of the 2.2m telescope of the Centro\nAstron\\'omico Hispano Alem\\'an, Calar Alto on February 10,\n1994. A single exposure of 900s was taken with the Johnson R filter.\nThe photometric conditions were good and the seeing was\n$\\sim 1.4''$. Data reduction and the astrometric and photometric \ncalibrations were performed as described by Carballo \net al (1995).\n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.caha.ps,width=0.5\\textwidth,angle=0}}\n \\caption{$R$-band image of the field around RX J1011.2+5545. The\n image is 1 arcmin each side. The\n circle is the 90 per cent error circle for the {\\it ROSAT}\\ X-ray position}\n\\end{figure}\n\n\nThe R-band image (Fig. 3) reveals a single $R=21.02\\pm 0.08$ source\nwithin or near the error circle of the X-ray source, whose position is\n$10^h11^m12^s.3$ and $55^{\\circ}44'47''$ (J2000) and is the likely\ncounterpart, later confirmed by spectroscopy. The surface brightness\nprofile of the source does not show compelling evidence for any\nadditional extension to the profile of a bright star. \n\n\\subsection{Optical spectroscopy}\n\nOptical spectroscopy of the candidate counterpart was carried out at\nthe 4.2m William Herschel Telescope on the Observatorio del Roque de\nlos Muchachos (La Palma) with the ISIS double spectrograph, on \nFebruary 25, 1998. We used the 150 lines\/mm gratings and TEK CCD\ndetectors for both arms, covering a spectral range from 3400 to 8550\\AA .\nThe atmospheric conditions were very poor, with bad and variable sky\ntransparency, dust and seeing, with the latter starting at $3.5''$ but \nlater improving to between $2''$ and $2.5''$. Two sets of observations were \ncarried out, the first set corresponding to the period of worst seeing \nwith a slit width of $2.5''$ and the second set with a slit width of \n$1.5''$. Here we ignore the first set of observations, although \nqualitatively they reveal much the same as the second set. \n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.wht.ps,width=0.5\\textwidth,angle=0}}\n \\caption{Raw optical spectrum of RX J1011.2+5545.}\n\\end{figure}\n\nThe observations with the slit width set at $1.5''$ totalled 5 on-source\nexposures of 1800s each, all close to the parallactic angle and\nwith airmass less than 1.2. The data were reduced using standard IRAF\nroutines. The optimally extracted source spectra were registered to a common\nwavelength origin using the sky spectrum. The resulting summed\nspectrum was wavelength calibrated using polynomial fits to standard\narc maps, yielding rms residuals of 0.72\\AA\\ and 0.37\\AA\\ in the blue\nand in the red respectively. The spectral resolution was measured from\nunblended arc lines to be 9.6\\AA\\ and 8.8\\AA\\ in the blue and in the\nred respectively. Given the poor conditions, no attempt\nwas made to flux calibrate the spectra.\n\nFig. 4 shows the resulting spectra with markers on the most prominent\nemission lines. The redshift $z=1.246$ has been determined from the\nstrongest features, [NeV]$\\lambda$3426 and [OII]$\\lambda$3727, although\nthe other emission lines are entirely consistent with this\nredshift. The presence of the high ionisation [NeV]$\\lambda$3346 and\n$\\lambda$3426 lines clearly reveals an AGN. Table 1 lists the emission\nfeatures detected in the spectrum, with rest-frame equivalent widths and\nFWHM estimated via Gaussian fitting, with 90 per cent errors and\ncorrected for spectral dispersion.\n\n\\begin{table}\n\\centering\n\\begin{minipage}{70mm}\n\\caption{Detected emission lines in the optical spectrum}\n\\begin{tabular}{llcc}\n\n\\hline\n\nEmission & Redshift & $W_{\\lambda}^a$ & FWHM\\\\\nline & & (\\AA ) & (${\\rm km}\\, {\\rm s}^{-1}$)\\\\\n\n\\hline\nCIV$\\lambda$1550 & 1.2450 & 35$^b$ & $<2500^b$\\\\\nHeII$\\lambda$1640 & 1.2452 & 15$^b$ & $<800^b$\\\\\nCIII$]\\lambda$1909 & 1.2445 & 16 & $385^{+390}_{-380}$\\\\\n$[$NeIV$]\\lambda$2423& 1.2469 & 12 & $560^{+280}_{-270}$ \\\\\nMgII$\\lambda$2798 & 1.242$^b$ & 15$^b$ & $2000^{+2700}_{-1000}$ ($^b$)\\\\\n$[$NeV$]\\lambda$3346 & 1.2453 & 4 & $480^{+160}_{-130}$\\\\\n$[$NeV$]\\lambda$3426 & 1.2462 & 14 & $920^{+310}_{-240}$\\\\\n$[$OII$]\\lambda$3727 & 1.2462 & 42 & $625^{+60}_{-55}$\\\\ \\hline\n\\end{tabular}\n\n$^a$ Rest-frame equivalent width\\\\\n$^b$ Highly uncertain\\\\\n\\end{minipage}\n\\end{table}\n\nThe semi-forbidden CIII] line is clearly detected and narrow (see\nTable 1). Since this line is predicted to be broad in a type 1 AGN,\nthe implication is that the broad-line region in this AGN is\nobscured. The CIV and HeII lines also appear narrow, but lie in a low\nsignal to noise part of the spectrum. The MgII line is probably broad,\nbut with an equivalent width, normalised to the equivalent width of the\nnarrow lines, significantly smaller (10--20 times) than is typically\nfound in type 1 AGN (Francis et al 1991). 
Broad MgII has been found\nin IR hyperluminous galaxies (Hines \\& Wills 1993; Hines et al 1995)\nand high-redshift radiogalaxies (di Serego Alighieri, Cimati \\&\nFosbury 1994; Stockton, Kellogg \\& Ridgway 1995) and has been\ninterpreted as scattered emission from a hidden type 1 AGN.\n\n\n\\subsection{Radio data}\n\nWe searched in various archives for radio observations of our source.\nThere are a number of detections, the most relevant of which are the\nWesterbork Northern Sky Survey (Rengelink et al 1997) at 326 MHz, the\nTexas Survey (Douglas et al 1996) at 365 MHz, the FIRST survey (White\net al 1997) at 1.4 GHz and the Green Bank 6cm survey (Gregory et al\n1996) at 4.85 GHz. Both the Texas and the FIRST surveys resolve the\nsource into two components aligned approximately N-S. The N component is\nthe brighter in the FIRST data (0.090 Jy compared to the 0.071 Jy of\nthe S component). The optical position lies in between the two\ncomponents (see Fig. 5). The separation between the components \nis $\\sim 11''$ at 1.4 GHz and $\\sim 15''$ at 365 MHz.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.first.ps,width=0.5\\textwidth,angle=0}}\n \\caption{A radio map of RX J1011.2+5545 at 1.4GHz from the FIRST\n survey. The cross shows the position of the optical source and the\n thick circle is the error circle of the X-ray source.}\n\\end{figure}\n\n\nThe integrated radio fluxes together with the measurements at optical and \nX-ray frequencies are shown in Fig. 6 in the form of a spectral\nenergy distribution. The radio spectrum has a \n$S_{\\nu}\\propto \\nu^{-0.9}$ shape from 326 MHz to 4.85 GHz, which is\ntypical of lobe-dominated radio sources. Although\nfrom the spatial information at the various frequencies it is not\ncompletely clear that this is a lobe-dominated double source, both the\nspectral index and the position of the optical source strongly support\nthis hypothesis.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.sed.ps,width=0.5\\textwidth,angle=270}}\n \\caption{The spectral energy distribution of RX J1011.2+5545 from\n radio to X-rays (see text for details on the data points).}\n\\end{figure}\n\n\\section{Discussion}\n\nVarious facts support the conclusion that RX\nJ1011.2+5545 is an AGN, the most relevant being a high radio to optical\nflux ratio, a $2-10\\, {\\rm keV}$ luminosity exceeding $10^{45}\\, {\\rm erg}\\, {\\rm s}^{-1}$ and\nthe broad MgII emission. In addition, the strong [NeV]$\\lambda$3426 line \nimplies the presence of an underlying hard ionising continuum.\n\nWe initially suspected obscuration in this source because of its high \n{\\it ROSAT}\\ PSPC hardness ratio. Also, for a typical unobscured AGN, the average\noptical magnitude corresponding to its X-ray flux would be $R\\sim 19$ (see,\ne.g., Hasinger 1996) instead of the observed value of $R\\sim 21$.\nThe absence of a broad CIII] line confirms this obscuration hypothesis.\n\nA weak broad MgII line is detected, its equivalent width being 3 to 5\ntimes smaller than for a type 1 AGN (Francis et al 1991; Baker \\&\nHunstead 1995). This cannot be explained as a simple obscuration\neffect, since in that case both the broad lines and the nuclear\ncontinuum would be equally suppressed, leaving the equivalent widths\nunchanged. The weakness of MgII and the absence of broad CIV and HeII\nmay be the result of dilution by a source of blue continuum over and\nabove that emanating directly from the nucleus. 
The requirement would\nbe that at a rest wavelength of $\\sim 2800$\\AA\\ the nuclear continuum\nmay be only 20 to 50 per cent of the total. The nature of this extra\nblue component is unknown, but reflected nuclear radiation, nebular\ncontinuum and copious star formation are all possibilities. The\nnon-detection of a reflected Fe K line in X-rays and the strong [OII] line,\nrelative to the typical type 1 situation, favour the enhanced star\nformation scenario. The equivalent width of the broad CIII] component\nis expected to be roughly 2 to 5 times smaller than that of MgII in a\ntype 1 AGN, and therefore it would be very weak in this object.\nObscuration of the nuclear continuum could also lead to the narrow\n[NeV] lines having enhanced equivalent widths.\n\n\nThe power law in the X-ray spectrum of this object is similar to that\nfound for other luminous radio-loud quasars at high redshifts\n($\\Gamma\\sim 1.5$, Cappi et al 1997), distinctly flatter than for\nradio-quiet AGN. This has been associated with different emission\nmechanisms (synchrotron self-Compton with the radio-emitting electrons\nin radio-loud AGN versus nuclear emission in radio-quiet objects). It\nis then possible that in radio-loud active galaxies the line-of-sight\nto the X-ray emitting regions intercepts less obscuring material than does\nthe direct path to the nucleus. Larger absorbing columns ($N_H\\sim\n10^{22}\\, {\\rm cm}^{-2}$) than that observed in RX J1011.2+5545 are common\nonly among radio-loud quasars at very high redshifts ($z>3$, Cappi et\nal 1997, Fiore et al 1998). The possible contribution to the X-ray\nflux from a cluster of galaxies hosting this source (which might be\ndominant in radiogalaxies, Crawford \\& Fabian 1996) is small, since\nthe X-ray data do not show evidence for a spectral cutoff consistent\nwith thermal emission.\n\nThe amount of X-ray absorption predicts an optical extinction for the\nX-ray source of $A_V=1.1^{+6.7}_{-0.85}$, using standard dust-to-gas\nratios. For moderate extinction ($A_V\\sim 1-2$), the nuclear light\nseen in the optical can be direct radiation from the nucleus. However, if \nthe obscuration is much larger, then the MgII\nbroad line would be seen through reflection only. It is even possible\nthat the nucleus is very heavily obscured in the optical ($A_V\\gg 10$), in\nwhich case the direct X-ray continuum and nuclear Fe K emission might also be\nsuppressed, leaving a dominant X-ray component arising in the radio lobes \nwith only moderate associated photoelectric absorption. \nDisentangling these possibilities requires high spatial resolution\noptical and IR observations.\n\nIn any event, the discovery of this object demonstrates that\nhigh-redshift radio-loud obscured AGN are present at faint X-ray\nfluxes. Such objects may play a role, albeit probably a minor one, in \nproducing the X-ray background. Surveys to be carried out with AXAF and XMM \nwill undoubtedly find large numbers of obscured AGN and reveal their \ncontribution to the X-ray background.\n\n\\section*{Acknowledgments}\n\nXB and RC were visiting astronomers of the Centro-Astron\\'omico\nHispano-Alem\\'an, Calar Alto, operated by the Max-Planck-Institute for\nAstronomy, Heidelberg jointly with the Spanish `Comisi\\'on Nacional\nde Astronom\\'\\i a'. The William Herschel Telescope is operated on the\nisland of La Palma by the Isaac Newton Group in the Spanish\nObservatorio del Roque de los Muchachos of the Instituto de Astrof\\'\\i\nsica de Canarias. 
This research has made use of the NASA\/IPAC\nExtragalactic Database (NED), which is operated by the Jet Propulsion\nLaboratory, California Institute of Technology, under contract with the\nNational Aeronautics and Space Administration. XB, RC, MTC and JIGS\nacknowledge financial support by the DGES under project PB95-0122.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec1}\nRadiation therapy, chemotherapy, surgery, or their combination are commonly used to control cancer in the clinic.\nCompared to conventional 3D conformal therapy, modern treatment methods, such as intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), focus more on delivering the prescribed dosage to the planning target volume (PTV) while protecting organs-at-risk (OARs)~\\cite{Rifat}. Tumor proximity to these critical structures demands high accuracy in tumor delineation to avoid toxicities from radiation therapy~\\cite{lin2019deep}.\nTo obtain a treatment plan with a desirable dose distribution, a physicist is required to carefully go through multiple trial-and-error iterations that tune the treatment planning parameters and weightings to control the trade-offs between clinical objectives. Such a procedure is time-consuming and can suffer from large inter-\/intra-observer variability due to the different experience and skills of physicists~\\cite{van2020automatic}. \n\nKnowledge-based planning (KBP) provides a promising solution to overcome the above limitations by automatically generating dose distributions, patient-specific dose-volume histograms (DVHs) and dose constraints of PTVs and OARs. It can serve as a reference for the planning optimization process and for plan quality control, thereby streamlining the treatment planning process. Recently, advancements in the field of deep learning have inspired much research in radiation oncology~\\cite{ge2019knowledge}, as the dose distribution can be directly predicted by data-driven approaches. Specifically, Nguyen~\\textit{et al}.\\cite{nguyen2019feasibility} used U-Net to predict the dose distribution for prostate cancer. Fan~\\textit{et al}.\\cite{fan2019automatic} further utilized ResUNet to predict the dose distribution for head-and-neck cancer. Kandalan~\\textit{et al}.\\cite{kandalan2020dose} studied the generalizability of U-Net for prostate cancer dose prediction via transfer learning with minimal input data. However, existing methods typically employed U-Net and its variants~\\cite{nguyen20193d,willems2019feasibility,nguyen2019feasibility,barragan2019three}, and such off-the-shelf networks cannot guarantee applicability across various physicians, diseases, and clinical settings. \nRecently, the model ensemble approach, in which a collection of neural networks is trained and their predictions are combined at the test stage by weighted averaging or voting, has been shown to improve performance and robustness. The strategies for building ensembles typically include training the same network with various initializations~\\cite{lakshminarayanan2016simple}, different numbers of iterations~\\cite{huang2017snapshot}, and multiple subsets of the training data~\\cite{zhou2002ensembling}.\nAlthough diversity is believed to be essential for successful ensembles~\\cite{kuncheva2003measures}, this is overlooked by existing methods, which typically use a single network architecture coupled with different training strategies or a combination of a few off-the-shelf architectures. 
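\n\nTo make the notions of ensemble combination and diversity concrete, the following sketch (our own illustrative code, not taken from any of the cited methods) averages member predictions and measures diversity as the mean pairwise disagreement between members:\n\\begin{verbatim}\nimport numpy as np\n\ndef ensemble_predict(member_preds):\n    # member_preds: one predicted dose volume per member, each\n    # an array of shape (D, H, W); the ensemble output is the\n    # (unweighted) voxel-wise average of the members\n    return np.mean(np.stack(member_preds), axis=0)\n\ndef pairwise_diversity(member_preds):\n    # mean absolute disagreement over all pairs of members;\n    # larger values indicate a more diverse ensemble\n    m = len(member_preds)\n    diffs = [np.mean(np.abs(member_preds[i] - member_preds[j]))\n             for i in range(m) for j in range(i + 1, m)]\n    return float(np.mean(diffs))\n\\end{verbatim}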
\n\nThe lack of diversity issue can be tackled by methods based on neural architecture search (NAS), which can generate a large number of diverse architectures, driving a natural bias towards diversity of predictions and in turn affording the opportunity to integrate these networks for a better result. However, several important research gaps remain unexplored: \n1) Which is more important for creating an ensemble: the base learners' performance or their diversity?\n2) How to balance the trade-off between ensemble performance and computational complexity? \n3) How to encourage diversity in the searching process of NAS?\n\nIn this study, we propose a learning-based ensemble approach with NAS for dose prediction, named LENAS, which adopts the teacher-student paradigm by leveraging the combination of diverse outputs from multiple automatically designed neural networks as a teacher model zoo to guide the target student network training. \nThe core of our LENAS is twofold. First, instead of using off-the-shelf networks, we present a novel U-shape differentiable neural architecture search framework, named U-NAS, which automatically and efficiently searches for neural architectures from an enormous space of architecture configurations to ensure both high performance and diversity of the teacher models. \nSecond, to reduce the computational costs in the inference phase while ensuring high ensemble performance, we further present a knowledge distillation (KD) network with adversarial learning, named KDA-Net, which hierarchically transfers the distilled knowledge from the teacher networks to the student network. \nTo the best of our knowledge, the proposed LENAS is the first method to integrate NAS into KD in the medical imaging field, and the first method to investigate NAS and KD in ensemble learning. \nThe proposed method has been evaluated on two public datasets, \\textit{i}.\\textit{e}., the OpenKBP dataset of the 2020 AAPM Grand Challenge and the AIMIS dataset of the 2021 Tencent AIMIS Challenge (task 4). Our U-NAS ensembles achieved mean absolute errors (MAE) of 2.357 and 1.465 in dose score and DVH score on the OpenKBP dataset, respectively, superior to the champion of the AAPM challenge. Our single LENAS model achieved MAEs of 2.565 and 1.737 in dose score and DVH score on OpenKBP, respectively, superior to the state-of-the-art methods~\\cite{babier2020openkbp,long2015fully,milletari2016v,ronneberger2015u}. \nIn addition, our single U-NAS model achieved a mean squared error (MSE) of 15611.6398 in dose score on AIMIS, winning first place in the AIMIS challenge.\n\nOur contributions are mainly fourfold:\n\\begin{itemize}\n \\item We present a novel learning-based ensemble framework, named LENAS, including the U-NAS framework, which efficiently and automatically searches for optimal architectures, and a KDA-Net for the trade-off between computational cost and accuracy;\n \\item It is the first attempt to investigate NAS and KD in ensemble learning, especially in the field of medical image analysis;\n \\item We provide several in-depth analyses and empirical guidelines for base learner generation and selection in ensemble learning in consideration of both diversity and performance;\n \\item Extensive experiments on two public datasets demonstrated the effectiveness of each module and the superior performance of our method over the state-of-the-art methods. 
\n \n\\end{itemize}\n\\section{Related Work}\n\\subsection{Knowledge-Based Planning}\nKnowledge-based automatic treatment planning is realized by building an atlas-based repository or a mathematical model to predict the dosimetry (\\textit{i}.\\textit{e}., dose distribution, entire DVH curve, dose volume metrics, etc.), utilizing previously optimized plans~\\cite{momin2021knowledge}. For example, in atlas-based methods, manually designed geometric features are selected as metrics to define the similarity between previous plans and a new plan, and the parameters of the most similar previous plan are adopted as the initialization of the new plan optimization. The modeling methods use handcrafted features to regress and predict the DVH of a new plan to guide the optimization process~\\cite{zhu2011planning}. The features include the overlap volume histogram (OVH)~\\cite{wu2009patient}, beams eye view (BEV) projections, and the overlap of regions of interest (ROIs), etc., which are applicable to both kinds of methods.\n\nHowever, the traditional KBP methods only predict 2-dimensional or 1-dimensional dosimetry metrics, which lack the entire spatial distribution of the dosage.\nIn the past few years, many researchers have focused on deep learning-based KBP methods. Due to the powerful ability of convolutional neural networks (CNN) to extract statistical and contextual features, 3-dimensional voxel-wise dose distributions can be directly predicted with high accuracy. The inputs of deep learning-based models are usually images (\\textit{e}.\\textit{g}., CT images and structure masks), and the architectures of the models are mainly U-Net~\\cite{willems2019feasibility, kajikawa2019convolutional, bohara2020using}. The two main directions for improving the performance of CNN-based dose prediction are: 1) designing different architectures, including modified U-Net~\\cite{ma2019individualized, gotz2020deep}, U-Res-Net~\\cite{liu2019deep}, HD U-net~\\cite{barragan2019three, nguyen20193d}, GAN-based models~\\cite{murakami2020fully, nguyen2020incorporating}, etc.; and 2) adding clinical parameters to the inputs, such as the isocenter~\\cite{willems2019feasibility}, beam geometry information~\\cite{barragan2019three}, and isodose lines and gradient information~\\cite{tan2021incorporating}.\n\n\n\\subsection{Ensemble Learning}\nEnsemble learning has shown impressive power in various deep learning tasks (\\textit{e}.\\textit{g}., the ILSVRC challenge~\\cite{russakovsky2015imagenet}), and a large amount of literature has provided theoretical and empirical justifications for its success, including Bayesian model averaging~\\cite{domingos2000bayesian,monteith2011turning}, enriching representations~\\cite{domingos1997does}, and reducing stochastic optimization error~\\cite{dietterich2000ensemble,zhou2021ensemble}. \nThese arguments reached a consensus that the individual learners in an ensemble should be \\textit{accurate and diverse}~\\cite{brown2005managing,zhang2013exploiting}. \nTo encourage the diversity of the ensembles, the strategies for building ensembles typically include: 1) training the same network with various settings, such as bagging~\\cite{altman2017ensemble}, random initializations~\\cite{kornblith2019similarity}, and different hyper-parameters~\\cite{morcos2018insights} (\\textit{e}.\\textit{g}., iterations, learning rate, and objective function); and 2) training different networks with various architectures. 
One of the most famous techniques is dropout~\\cite{srivastava2014dropout}, in which some of the neurons are dropped in each iteration, so that the final model can be viewed as an ensemble composed of multiple different sub-models. \nIn addition, Lin~\\textit{et al}.\\cite{lin2019seg4reg} won first place in the AASCE\\footnote{https:\/\/aasce19.grand-challenge.org} challenge with an ensemble of ResNet~\\cite{he2016deep}, DenseNet~\\cite{huang2017densely}, and EfficientNet~\\cite{tan2019efficientnet}. \nAs for combining the predictions of the base models in an ensemble, the most prevailing method is majority voting~\\cite{opitz1996actively} for classification and segmentation (which can be viewed as pixel-wise classification), and simple averaging~\\cite{perrone1993networks} for regression tasks. \n\nDespite their success, most existing ensemble methods do not explicitly balance the two important factors, \\textit{i}.\\textit{e}., the performance of the individual learners and the diversity among them. To the best of our knowledge, we are the first to apply NAS to ensemble learning, and we further provide empirical guidelines for selecting the members of an ensemble.\n\n\\subsection{Neural Architecture Search}\nNeural architecture search (NAS) aims at searching for a desirable neural architecture from a large architecture collection.\nIt has received increasing interest in various medical image analysis tasks, such as image classification~\\cite{dondeti2020deep}, localization~\\cite{jiang2020elixirnet}, segmentation~\\cite{wang2021bix}, and reconstruction~\\cite{yan2020neural}. \nMuch of the focus has been on the design of the search space and the search strategy. \nFor example, Weng~\\textit{et al}.\\cite{weng2019unet} introduced NAS-Unet for 2D medical image segmentation, which consists of different primitive operation sets for the down-sampling and up-sampling cells.\nZhu~\\textit{et al}.\\cite{zhu2019v} proposed V-NAS for volumetric medical image segmentation, which designed a search space including 2D, 3D, and pseudo-3D (P3D) convolutions. \nAs for the search strategy, existing research can be categorized into three classes: evolutionary algorithms~\\cite{real2019regularized}, reinforcement learning~\\cite{zoph2018learning}, and gradient-based differentiable methods~\\cite{liu2019darts}. \n\nHowever, there are still two research gaps that will be addressed in this paper. First, we embed NAS into the pixel-wise regression task in medical imaging (\\textit{e}.\\textit{g}., dose prediction). Second, we investigate the effectiveness of NAS in ensemble learning, as most of the existing methods focus on searching for the single best model, ignoring the value of the enormous number of architecture candidates.\n\n\n\\section{Methods}\n\\label{sec_methods}\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/framework.pdf}\n \\caption{Overview of the proposed LENAS.\n $O_i$ in the hybrid module denotes the operation and $\\alpha_i$ denotes its weight. $Dis$ in KDA-Net denotes the discriminator.}\n \\label{fig_framwork}\n\\end{figure*}\n\nThe framework of the proposed LENAS is shown in Fig.~\\ref{fig_framwork}, which has two components: 1) a U-shape differentiable neural architecture search (U-NAS) pipeline for automatic architecture search, and 2) KDA-Net, which hierarchically transfers the to-be-distilled knowledge of the U-NAS ensembles to a single lightweight network via adversarial learning. 
In the following, we introduce each component in detail.\n\n\\subsection{U-NAS}\nAs shown in Fig.~\\ref{fig_framwork}, the proposed U-NAS follows the U-Net-like encoder-decoder structure~\\cite{ronneberger2015u} with four down cells (DCs) and four up cells (UCs). Each individual cell is learned in a large search space with about $4\\times 10^4$ architecture configurations. In the following, we first introduce the search space, and then describe the training strategy for the joint optimization of the architecture and its weights.\n\n\\noindent\\textbf{Search Space.}\nThe yellow and red blocks in Fig.~\\ref{fig_framwork} show the network topologies of the DC and UC, respectively, which include several fundamental computing units called hybrid modules (HMs). Each HM is a weighted sum of different operations, and there are four types of HM: normal ($N$), downward ($D$), upward ($U$), and connect ($C$), corresponding to different operation groups in the search space. As shown in Table~\\ref{tab_ops}, we include the following operations in the search space: convolution (conv), squeeze-and-excitation conv (se\\_conv), dilated conv (dil\\_conv), depthwise-separable conv (dep\\_conv), max pooling (max\\_pool), average pooling (avg\\_pool), trilinear interpolation (interpolate), and residual connection (identity). \n\n\\begin{table}[htbp]\n\\caption{Operation set used for searching cells.}\n\\resizebox{0.48\\textwidth}{!}{\n\\begin{tabular}{cccccc}\n\\toprule[1.5pt]\nNormOps & DownOps & UpOps & \\multicolumn{1}{c|}{ConnectOps} & pre & post \\\\ \\hline\nidentity & avg\\_pool & up\\_se\\_conv & \\multicolumn{1}{c|}{identity} & conv & conv \\\\\nconv & max\\_pool & up\\_dep\\_conv & \\multicolumn{1}{c|}{no connection} & & \\\\\nse\\_conv & down\\_se\\_conv & up\\_conv & \\multicolumn{1}{c|}{} & & \\\\\ndil\\_conv & down\\_dil\\_conv & up\\_dil\\_conv & \\multicolumn{1}{c|}{} & & \\\\\ndep\\_conv & down\\_dep\\_conv & interpolate & \\multicolumn{1}{c|}{} & & \\\\\n & down\\_conv & & \\multicolumn{1}{c|}{} & & \\\\ \n\\bottomrule[1.5pt]\n\\end{tabular}}\n\\label{tab_ops}\n\\end{table}\n\nThe prefix `down' means that the stride of the convolution operation is two, while the prefix `up' indicates a transposed convolution, which doubles the image resolution. \nFor the first three columns of Table~\\ref{tab_ops}, we use $3\\times 3\\times 3$ kernels for all convolution operations in the Conv-IN-ReLU order. In addition, a $3\\times 3\\times 3$ convolution (pre) and a $1 \\times 1 \\times 1$ convolution (post) are applied to adjust the number of channels.\n
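To make the hybrid module concrete, the following PyTorch-style sketch (ours, not part of the original implementation; the candidate set is reduced to three operations for brevity) shows how an HM mixes its candidate operations with softmax-normalized architecture parameters:

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridModule(nn.Module):
    """A minimal hybrid module (HM): a weighted sum of candidate
    operations, with weights given by softmax(alpha)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                            # identity
            nn.Conv3d(channels, channels, 3, padding=1),              # conv
            nn.Conv3d(channels, channels, 3, padding=2, dilation=2),  # dil_conv
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)  # \tilde{alpha}_i in the text
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
\end{verbatim}

After searching, only the operation with the largest weight is retained in each HM, as described in the training strategy below.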
\n\\begin{algorithm}[!h]\n\\caption{Training Strategy of U-NAS}\n \\begin{algorithmic}\n \\STATE Create the mixed operations $\\overline{O}$ parametrized by $\\alpha$.\n \\WHILE{not converged}\n \\STATE 1. Update weights $\\omega$ by gradient descending $\\nabla _\\omega \\mathcal{L}_{\\mathrm{dose}}(\\omega , \\alpha)$ on $\\mathcal{D}_{\\mathrm{train}}$.\n \\STATE 2. Update $\\alpha$ by gradient descending $\\nabla _{\\alpha} \\mathcal{L}_{\\mathrm{dose}}(\\omega , \\alpha)$ on $\\mathcal{D}_{\\mathrm{val}}$.\n \\ENDWHILE\n \\STATE \\fontsize{8.5pt}{\\baselineskip}\\selectfont Replace $\\overline{O}$ with $O=O_i$, $i=\\arg \\max_k \\exp (\\alpha _k)\/\\sum ^{N}_{j=1} \\exp(\\alpha _j)$.\n \\STATE Re-train the network with the best learned cell structures on $\\mathcal{D}_{\\mathrm{train}}$.\n \\end{algorithmic}\n \\label{algo_1}\n\\end{algorithm}\n\n\\noindent\\textbf{Training Strategy.}\nThe training strategy of U-NAS contains two stages: the searching process and the re-training process. In the searching process, U-NAS is learned in a differentiable way \\cite{liu2019darts}, which optimizes a super network consisting of HMs with mixed operations. As Fig.~\\ref{fig_framwork} shows, for each operation $O_i$ among the $N$ operations in $O$, the weight of the operation is determined by the parameter $\\alpha_i \\in \\alpha$, whose softmax transformation $\\tilde{\\alpha}_i=\\exp(\\alpha_i) \/ \\sum^{N}_{j=1}\\exp(\\alpha_j)$ represents how much $O_i$ contributes to the HM. The architecture parameters $\\alpha$ and the network weights $\\omega$ are then learned alternately through the mixed operations. We repeat the searching process several times with different initializations to converge to different local optima, resulting in different searched architectures.\n\nOnce the searching process finishes, each HM keeps only the most likely operation based on the parameter $\\alpha$; we then replace the DCs and UCs with the best learned structures and re-train the network on $\\mathcal{D}_{\\mathrm{train}}$. Algorithm~\\ref{algo_1} describes the details of the training strategy of U-NAS. In both the searching and re-training processes, the $\\mathcal{L}_1$ norm is used to measure the difference between the dose prediction $\\hat{y}$ and the target $y$:\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{dose}}=\\| y - \\hat{y} \\|_1\n \\label{eq_dose}\n\\end{equation}\n\n\\noindent\\textbf{Diversity Encouraging Loss.}\nThe diversity among the models obtained by U-NAS can potentially be achieved via different initializations. However, in many settings, the independent searching processes could converge to similar local optima, as the same searching goal is exploited. To optimize for diversity directly in the architecture searching process, we propose a diversity encouraging loss to encourage different predictions between the learned model and the best model\\footnote{The model with the best performance among the multiple optimized architectures}.\n\nSpecifically, in the searching process, the goal is to achieve high accuracy of the learned model while encouraging its prediction to differ from that of the best model. Therefore, in the training stage of U-NAS, the final loss $\\mathcal{L}_{\\mathrm{nas}}$ can be formulated by adding to the dose error loss $\\mathcal{L}_{\\mathrm{dose}}$ in Eq.~(\\ref{eq_dose}) the diversity encouraging loss $\\mathcal{L}_{\\mathrm{div}}$ as:\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{L}_{\\mathrm{nas}} &= \\mathcal{L}_{\\mathrm{dose}} + \\mathcal{L}_{\\mathrm{div}} \\\\\n &= \\|y - \\hat{y}\\|_1 + \\eta \\max (0, m - \\frac{\\left\\|\\hat{y}-\\hat{y*} \\right\\|_1}{\\left(\\|\\hat{y}\\|_1 + \\|\\hat{y*}\\|_1\\right)\/2} ),\n \\end{aligned}\n\\end{equation}\nwhere $\\|\\cdot\\|_1$ is the voxel-wise $l_1$ norm; $y$, $\\hat{y}$, and $\\hat{y*}$ denote the ground truth and the prediction results of the training model and the best model, respectively; $m$ is the margin (empirically set to 0.2) used to reduce the correlation between $\\hat{y}$ and $\\hat{y*}$ while avoiding outliers; and $\\eta$ is a weighting hyper-parameter to balance the two loss terms (empirically set to 1). 
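A minimal PyTorch-style sketch of this loss (ours, not the released implementation; the sum reduction of the $l_1$ norms and the treatment of the best model's prediction as a constant are assumptions) is:

\begin{verbatim}
import torch

def nas_loss(y_hat, y_best, y, eta=1.0, m=0.2):
    """L_nas = L_dose + L_div.
    y: ground-truth dose; y_hat: prediction of the model being
    searched; y_best: prediction of the current best model."""
    l_dose = (y - y_hat).abs().sum()
    rel_diff = (y_hat - y_best).abs().sum() / (
        (y_hat.abs().sum() + y_best.abs().sum()) / 2)
    l_div = eta * torch.clamp(m - rel_diff, min=0)  # hinge with margin m
    return l_dose + l_div
\end{verbatim}

The hinge saturates at zero once the relative difference exceeds the margin $m$, so the diversity term stops pushing predictions apart beyond that point.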
\n\\subsection{KDA-Net}\nThe proposed KDA-Net performs knowledge distillation from the U-NAS ensembles to a single target network with adversarial learning. As shown in Fig.~\\ref{fig_framwork}, we use a single U-Net network as the student and the average of multiple U-NAS predictions as the teacher ensemble. For all the $K=8$ blocks (four DC blocks and four UC blocks) of the network, we apply a similarity loss, based on the squared Euclidean distance, between the intermediate outputs of the teacher ensemble and the student:\\footnote{Instead of the ${L}_1$ loss in Eq.~(\\ref{eq_dose}), we adopt the ${L}_2$ loss for the deep supervision to speed up optimization.}\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{sim}}=\\sum_{k=1}^8\\left\\|\\frac{1}{M}\\sum_{i=1}^M \\left(I_k^{T_i} - I^S_k\\right)\\right\\|_2^2,\n \\label{eq_sim}\n\\end{equation}\nwhere $I^{T_i}_k$ and $I^S_k$ denote the intermediate outputs of the $k$-th block of the $i$-th teacher network $T$ and of the student network $S$, respectively, and $M$ denotes the number of teacher networks.\n\nWe further adopt adversarial learning in our knowledge distillation process to force the student to generate features more similar to those of the teachers. Specifically, for the $k$-th block, we learn a discriminator $D_k$ to distinguish the output of the teachers from that of the student, which in turn encourages the student to produce outputs more similar to the teachers'. The adversarial loss is defined as:\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{adv}}=\\sum_{k=1}^{8} \\mathbb{E}_{I_k\\sim P_T}\\log D_k\\left(I_k\\right)+ \\sum_{k=1}^{8} \\mathbb{E}_{I_k\\sim P_S}\\log \\left(1-D_k\\left(I_k\\right)\\right),\n \\label{eq_adv}\n\\end{equation}\nwhere $I_k\\sim P_T$ and $I_k\\sim P_S$ denote outputs from the $k$-th block of the teacher ensemble and of the student network, respectively. Based on the above definitions, we incorporate the dose loss in Eq.~(\\ref{eq_dose}), the similarity loss in Eq.~(\\ref{eq_sim}) and the adversarial loss in Eq.~(\\ref{eq_adv}) into the final loss function of KDA-Net:\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{KDA}}=\\mathcal{L}_{\\mathrm{dose}} + \\lambda_1 \\mathcal{L}_{\\mathrm{sim}} + \\lambda_2 \\mathcal{L}_{\\mathrm{adv}},\n \\label{eq_final}\n\\end{equation}\nwhere $\\lambda_1$ and $\\lambda_2$ are weighting hyper-parameters, empirically set to 0.05 and 0.01, respectively, in our experiments.
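A compact sketch of these two objectives (ours, not the released implementation; the feature shapes, the sample-wise averaging, and the discriminator interface are assumptions) is:

\begin{verbatim}
import torch

def kda_losses(teacher_feats, student_feats, d_teacher, d_student):
    """teacher_feats[k]: list of the k-th block outputs of the M
    teachers; student_feats[k]: student's k-th block output;
    d_teacher[k], d_student[k]: discriminator outputs D_k(.) in (0,1)
    on teacher and student features, respectively."""
    # L_sim: squared L2 distance between the averaged teacher
    # features and the student features, summed over the K=8 blocks.
    l_sim = sum(((torch.stack(t).mean(0) - s) ** 2).sum()
                for t, s in zip(teacher_feats, student_feats))
    # L_adv: Eq. (eq_adv), with expectations estimated by means.
    l_adv = sum(torch.log(dt).mean() + torch.log(1 - ds).mean()
                for dt, ds in zip(d_teacher, d_student))
    return l_sim, l_adv
\end{verbatim}

The final objective combines them with the dose loss as $\mathcal{L}_{\mathrm{dose}} + \lambda_1 \mathcal{L}_{\mathrm{sim}} + \lambda_2 \mathcal{L}_{\mathrm{adv}}$.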
\n\n\\section{Experiments}\n\\subsection{Datasets}\nIn this study, we evaluate the proposed method using two public datasets: the OpenKBP dataset and the AIMIS dataset.\n\n\\textbf{OpenKBP dataset.} The Open Knowledge-Based Planning (OpenKBP) dataset of the 2020 AAPM Grand Challenge~\\cite{babier2020openkbp} is a public dataset consisting of 340 CT scans for the dose prediction task. The OpenKBP dataset includes subjects treated for head-and-neck cancer with radiation therapy. The data is partitioned into training ($n=200$), validation ($n=40$), and test ($n=100$) sets. The ROIs used in this study include the body, seven OARs (\\textit{i}.\\textit{e}., brainstem, spinal cord, right parotid, left parotid, larynx, esophagus and mandible) and three planning target volumes (PTVs) with gross disease (PTV70), intermediate-risk target volumes (PTV63), and elective target volumes (PTV56).\n\n\\textbf{AIMIS dataset.} The AIMIS dataset consists of 500 CT scans from the 2021 Tencent AI Medical Innovation System (AIMIS) Challenge (task 4).\\footnote{https:\/\/contest.taop.qq.com\/channelDetail?id=108} Each scan is from a patient treated for lung cancer with stereotactic body radiation therapy (SBRT). The dataset is officially partitioned into 300 scans for training, 100 scans for validation and 100 scans for testing. The ROIs used in this study include the body, five OARs (\\textit{i}.\\textit{e}., left lung, right lung, total lung, spinal cord, and heart) as well as the inner target volume (ITV) and the planning target volume (PTV). \n\n\\subsection{Implementation and Evaluation Metrics}\nThe pre-processing for the two datasets follows~\\cite{liu2021cascade}. For normalization, the CT values are truncated to [-1024 HU, 1500 HU].\nThe following data augmentations are performed during training: horizontal and vertical flips, translation, and rotation around the $z$-axis.\nFor each case of the OpenKBP dataset, the OAR masks (7 channels) and the merged target (1 channel) are concatenated with the CT scan (1 channel) as a $9\\times 128\\times 128\\times 128$ tensor and fed into the dose prediction models.\nFor the AIMIS dataset, the input consists of the OAR masks (5 channels), the CT scan (1 channel), the target volumes (2 channels) and the body (1 channel).\n\nFor U-NAS, in the searching process, we first train the super network for $8\\times 10^4$ iterations using an Adam optimizer with an initial learning rate of $3\\times 10^{-4}$ and a weight decay of $1\\times 10^{-4}$. \nAfter that, the architecture parameters $\\alpha$ are determined from the super network on the validation set. \nWe repeat the searching process multiple times with different random seeds to obtain various architectures. Then we re-train the searched models on the training set for $8\\times 10^4$ iterations with a learning rate of $3\\times 10^{-4}$.\nFor KDA-Net, we train the student network for $6\\times 10^4$ iterations using an Adam optimizer with an initial learning rate of $1\\times 10^{-5}$ and a weight decay of $1\\times 10^{-4}$. \n\nWe use the official evaluation codes to validate the proposed method. Specifically, for the OpenKBP dataset, the evaluation metrics include: (1) the dose error, which is the mean absolute error (MAE) between the dose prediction and its corresponding ground-truth plan; and (2) the DVH error, which is the absolute error of the DVH curves between the prediction and the ground truth. According to~\\cite{babier2020openkbp}, \\{D99, D50, D1\\} for PTVs and \\{D0.01cc, Dmean\\} for OARs are selected to measure the similarity of the DVH curves in this task.\nFor the AIMIS dataset, the evaluation is performed by measuring the dose error with the mean squared error (MSE).\nIn addition, we use a paired t-test to calculate the statistical significance of the results.
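For illustration, the two OpenKBP metrics can be sketched as follows (ours, simplified; the official evaluation code additionally handles details such as the possible-dose mask, Dmean and D0.01cc):

\begin{verbatim}
import numpy as np

def dose_error(pred, gt, body_mask):
    """MAE between predicted and ground-truth dose inside the body."""
    m = body_mask.astype(bool)
    return np.abs(pred[m] - gt[m]).mean()

def dvh_point(dose, roi_mask, q):
    """D_q: dose received by at least q per cent of the ROI volume
    (e.g. q = 99, 50, 1 for the PTV metrics above)."""
    d = np.sort(dose[roi_mask.astype(bool)])[::-1]  # descending
    idx = max(int(np.ceil(q / 100.0 * d.size)) - 1, 0)
    return d[idx]
\end{verbatim}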
\n\n\\begin{table*}[]\n\\caption{Performance comparison of NAS models with manually designed networks on $\\mathcal{D}_{val}$. BS, SC, RP, LA, ES, LE, MD, P$_{70}$, P$_{63}$, and P$_{56}$ denote Brainstem, Spinal Cord, Right Parotid, Left Parotid, Larynx, Esophagus, Mandible, PTV70, PTV63, and PTV56, respectively. $\\dagger$ represents significantly different results ($p < 0.05$, paired t-tests).}\n\\resizebox{1\\linewidth}{!}{\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n\\toprule\n\\multicolumn{2}{c|}{\\multirow{3}{*}{}} & \\multicolumn{5}{c|}{Single Model} & \\multicolumn{2}{c}{Ensemble} \\\\ \\cline{3-9} \n\\multicolumn{2}{c|}{} & \\multicolumn{1}{c|}{Conv} & \\multicolumn{1}{c|}{Se\\_conv} & \\multicolumn{1}{c|}{Dil\\_conv} & \\multicolumn{1}{c|}{Dep\\_conv} & NAS & \\multicolumn{1}{c|}{Manual} & NAS \\\\ \n\\hline\n\\hline\n\\multirow{11}{*}{\\makecell{Dose\\\\ Error}} & Body & $2.634\\pm 0.760$ & $2.736\\pm 0.861\\dagger$ & $2.741\\pm 0.805\\dagger$ & $2.728\\pm 0.789\\dagger$ & \\bm{$2.581\\pm 0.784$} & $2.503\\pm 0.762\\dagger$ & \\bm{$2.400\\pm 0.743$} \\\\\n & BS & $1.606\\pm 1.076$ & $1.646\\pm 1.152$ & $1.743\\pm 1.305\\dagger$ & $1.677\\pm 1.278$ & \\bm{$1.486\\pm 0.971$} & $1.541\\pm 1.136$ & \\bm{$1.442\\pm 1.035$} \\\\\n & SC & $2.095\\pm 1.014$ & $2.131\\pm 1.181$ & $2.184\\pm 1.115$ & $2.068\\pm 0.986\\dagger$ & \\bm{$2.018\\pm 0.936$} & $1.939\\pm 0.909$ & \\bm{$1.891\\pm 0.878$} \\\\\n & RP & \\bm{$3.040\\pm 0.870$} & $3.277\\pm 1.020$ & $3.152\\pm 0.921$ & $3.472\\pm 1.117$ & $3.074\\pm 0.909$ & $2.942\\pm 0.868$ & \\bm{$2.820\\pm 0.889$} \\\\\n & LA & $3.154\\pm 0.839$ & $3.208\\pm 0.979$ & $3.075\\pm 0.803$ & $3.383\\pm 1.212$ & \\bm{$3.029\\pm 0.874$} & $2.917\\pm 0.740$ & \\bm{$2.710\\pm 0.753$} \\\\\n & ES & \\bm{$2.428\\pm 1.004$} & $2.773\\pm 0.830$ & $2.749\\pm 1.036$ & $2.531\\pm 1.077$ & $2.467\\pm 1.009$ & $2.410\\pm 0.892$ & \\bm{$2.182\\pm 0.705$} \\\\\n & LE & $3.034\\pm 1.385$ & $3.204\\pm 1.419$ & $3.451\\pm 1.561$ & $3.148\\pm 1.481$ & \\bm{$2.883\\pm 1.366$} & $2.973\\pm 1.334$ & \\bm{$2.593\\pm 0.727$} \\\\\n & MD & $3.988\\pm 1.341$ & $3.992\\pm 1.413$ & $4.086\\pm 1.146$ & $4.051\\pm 1.218$ & \\bm{$3.800\\pm 1.189$} & $3.745\\pm 1.152$ & \\bm{$3.520\\pm 1.133$} \\\\\n & P$_{70}$ & $2.062\\pm 1.004\\dagger$ & $2.090\\pm 1.167\\dagger$ & $2.198\\pm 1.463\\dagger$ & $2.045\\pm 0.895\\dagger$ & \\bm{$1.620\\pm 0.789$} & $1.896\\pm 1.108\\dagger$ & \\bm{$1.570\\pm 0.784$} \\\\\n & P$_{63}$ & $2.345\\pm 1.137$ & $2.534\\pm 1.250\\dagger$ & $2.686\\pm 1.566\\dagger$ & $2.521\\pm 1.090\\dagger$ & \\bm{$2.135\\pm 0.950$} & $2.318\\pm 1.213\\dagger$ & \\bm{$2.057\\pm 0.926$} \\\\\n & P$_{56}$ & $2.227\\pm 0.916$ & $2.296\\pm 0.885$ & $2.326\\pm 1.029\\dagger$ & $2.371\\pm 0.758\\dagger$ & \\bm{$2.116\\pm 0.768$} & $2.122\\pm 0.841\\dagger$ & \\bm{$1.926\\pm 0.703$} \\\\ \\hline\n\\bottomrule\n\\end{tabular}}\n\\label{tab_NAS}\n\\end{table*}\n\n\\begin{figure}[thbp]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.49\\textwidth]{figs\/multi_dvh.pdf}\n\t\t\\label{fig:multi_dvh}\n\t}\\\\\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/visual_kda.pdf}\n\t\t\\label{fig:visual_kda}\n\t}\n \\caption{(a) DVHs of the dose distributions of the ground-truth plan (solid curves) and of the predictions by a single U-Net with and without the proposed KDA method, illustrated by dashed and dotted curves, respectively. The PTV70, PTV63, and PTV56 are shown as red, orange, and yellow curves, respectively. (b) An example of the dose distributions of the clinical plan and the predicted plans of a single U-Net with and without the KDA method.}\n \\label{fig:kda}\n\\end{figure}\n\n\n\n\\subsection{Experimental Results}\n\\subsubsection{Performance of U-NAS}\nWe first compare the performance of our U-NAS model with four manually designed architectures on the OpenKBP validation set. 
\nFor each manually designed architecture, we employ the same convolution operation in all normal HMs, chosen among \\textit{conv}, \\textit{se\\_conv}, \\textit{dil\\_conv}, and \\textit{dep\\_conv}, \nand we apply the \\textit{max\\_pool}, \\textit{interpolate} and \\textit{no connection} operations for all the manually designed architectures. \nTable~\\ref{tab_NAS} shows the performance comparison of the different models. Our U-NAS model outperforms all the manually designed networks on the body, the seven OARs and the three PTVs. \nIn summary, the single U-NAS model achieves MAEs of 2.580 and 1.736 in dose score and DVH score, respectively, outperforming the best manually designed network by 0.111 and 0.128 in dose error and DVH error, respectively.\nInterestingly, in most cases the ensemble of four models outperforms the corresponding individual models (for both the manually designed and the NAS-learned models), and the ensemble of NAS models outperforms the ensemble of manually designed models. Please refer to Sec.~\\ref{sec.dis} for more discussion of ensemble learning.\n\n\\subsubsection{Performance of KDA-Net}\nWe compare the performance of a single U-Net with and without the proposed KDA module, including the dose distributions and DVHs, on the OpenKBP validation set.\nFig.~\\ref{fig:multi_dvh} shows an example of DVH curves from a patient of the validation set. The solid lines represent the DVH curves of the ground truth, while the dashed lines and dotted lines represent the DVHs extracted from the predicted doses of the U-Net with and without KDA (\\textit{i}.\\textit{e}., trained from scratch), respectively. For this example patient, the U-Net with KDA exhibits a better agreement in predicting the dose to the PTVs. The predictions for the OARs are more variable between the two methods. Fig.~\\ref{fig:visual_kda} shows the corresponding dose color contours for the same patient as in Fig.~\\ref{fig:multi_dvh}, which suggests that the single U-Net model with KDA is able to achieve better dosimetric congruence with the original plan on the PTV. 
\n\n\\begin{table}[!t]\n\\caption{Comparison of performance with the state-of-the-art methods on the OpenKBP test set.}\n\\centering\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{l|l|l|l|l|l}\n\\toprule\n\\multicolumn{2}{c|}{\\multirow{2}{*}{Methods}} & \\multicolumn{2}{c|}{Dose Score} & \\multicolumn{2}{c}{DVH Score} \\\\\n\\cline{3-6}\n\\multicolumn{2}{c|}{} & MAE & MSE & MAE & MSE \\\\\n\\hline\n\\multirow{5}{*}{Leaderboard\n} &Top \\#1 & 2.429 & 15.488 & 1.478 & 5.913 \\\\\n &Top \\#2 & 2.564 & 16.550 & 1.704 & 6.812 \\\\\n &Top \\#3 & 2.615 & 17.774 & 1.582 & 6.961 \\\\\n &Top \\#4 & 2.650 & 18.091 & 1.539 & 6.031 \\\\\n &Top \\#5 & 2.679 & 18.023 & 1.573 & 6.525 \\\\\n\\hline\n\\hline\n\\multirow{5}{*}{Single Model} &FCN~\\cite{long2015fully} & 2.681 & 18.144 & 2.452 & 12.310 \\\\\n &V-Net~\\cite{milletari2016v} & 3.129 & 23.336 & 2.325 & 11.417 \\\\\n &U-Net~\\cite{ronneberger2015u} & 2.619 & 17.221 & 2.313 & 11.343 \\\\\n &ResUNet~\\cite{yu2017volumetric} & 2.601 & 16.932 & 2.209 & 10.591 \\\\\n &U-NAS (ours) & 2.597 & 16.962 & 1.803 & 7.628 \\\\\n &LENAS (ours) & \\textbf{2.565} & \\textbf{16.614} & \\textbf{1.737} & \\textbf{7.272} \\\\\n\\hline\n\\hline\n\\multirow{3}{*}{Cascade} &U-Net~\\cite{ronneberger2015u} & 2.461 & 15.489 & 1.588 & 6.511 \\\\\n &ResUNet~\\cite{yu2017volumetric} & 2.448 & 16.023 & 1.499 & 5.855 \\\\\n &U-NAS (ours) & \\textbf{2.434} & \\textbf{15.376} & \\textbf{1.496} & \\textbf{5.564} \\\\\n\\hline\n\\hline\n\\multirow{2}{*}{Ensemble} &Off-the-shelf & 2.521 & 16.060 & 1.771 & 6.851 \\\\\n &U-NAS (ours) & \\textbf{2.357} & \\textbf{14.326} & \\textbf{1.465} & \\textbf{5.560} \\\\\n\\bottomrule\n\\end{tabular}}\n\\label{tab_sota}\n\\end{table}\n\n\\begin{table}[!t]\n \\centering\n \\caption{Comparison with the state-of-the-art methods on the test set of AIMIS.\n }\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{c|l|l|c|l|l}\n \\toprule\n \\multicolumn{3}{c|}{Primary Phase} & \\multicolumn{3}{c}{Final Phase} \\\\\n \\hline\n Rank & Team & Dose Score & Rank & Team & Dose Score\\\\\n \\hline\n \\textbf{\\#1} & \\textbf{qqll (ours)} & \\textbf{15611.6398} & \\textbf{\\#1} & \\textbf{qqll (ours)} & \\textbf{15571.6051} \\\\\n \\#2 & deepmedimg & 17223.3940 & \\#2 & gosnail & 15869.4256 \\\\\n \\#3 & gosnail & 18425.5708 & \\#3 & teamC & 16323.9720 \\\\\n \\#4 & adosepredictor & 18638.4767 & \\#4 & 27149 & 16486.1417 \\\\\n \\#5 & star & 19340.0643 & \\#5 & capsicummeat & 18137.9836 \\\\\n \\bottomrule\n \\end{tabular}}\n \\label{tab:aimis_test}\n\\end{table}\n\n\\begin{table}[!t]\n \\centering\n \\caption{Comparison of U-NAS with the off-the-shelf models on the validation set of AIMIS.}\n \n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\n \\toprule\n \\multirow{2}{*}{Methods} & \\multirow{2}{*}{All} & \\multirow{2}{*}{Body} & \\multirow{2}{*}{Heart} & \\multirow{2}{*}{L-Lung} & \\multirow{2}{*}{R-Lung} & Total & Spinal & \\multirow{2}{*}{ITV} & \\multirow{2}{*}{PTV} \\\\\n & & & & & & -Lung & -Cord & & \\\\\n \\hline\n U-Net & 9801 & 56608 & 45643 & \\textbf{71894} & 75099 & \\textbf{64108} & \\textbf{68377} & 525499 & \\textbf{842770} \\\\\n ResUNet & 9782 & 56668 & \\textbf{41858} & 77288 & 77399 & 66066 & 71790 & 593486 & 904382 \\\\\n U-NAS & \\textbf{9484} & \\textbf{54839} & 43746 & 82291 & \\textbf{71597} & 66922 & 71175 & \\textbf{510381} & 858750 \\\\\n \\bottomrule\n \\end{tabular}}\n \\label{tab:aimis_val}\n\\end{table}\n\n\n\\subsubsection{Comparison with the State-of-the-art Methods}\nIn Table~\\ref{tab_sota}, we compare 
the proposed LENAS model with several state-of-the-art methods on the OpenKBP test set. The competing methods include 3D FCN~\\cite{long2015fully}, V-Net~\\cite{milletari2016v}, 3D U-Net~\\cite{ronneberger2015u}, 3D ResUNet~\\cite{yu2017volumetric}, and the five top-ranking methods on the AAPM-2020 challenge leaderboard~\\cite{babier2020openkbp}. We thoroughly compare our LENAS model with the existing methods using single-model, cascade and ensemble strategies. The cascade strategy sequentially combines two networks and produces the results in a coarse-to-fine fashion.\n\\textbf{For single models}, our U-NAS achieves MAEs of 2.597 and 1.803 in dose score and DVH score, respectively, outperforming the best off-the-shelf method (\\textit{i}.\\textit{e}., ResUNet). Integrating the KDA mechanism (\\textit{i}.\\textit{e}., LENAS) further improves the performance to 2.565 and 1.737, respectively. \\textbf{For cascade models}, our cascade U-NAS model achieves MAEs of 2.434 and 1.496 in dose score and DVH score, respectively, outperforming the cascade ResUNet, which achieves 2.448 and 1.499. \n\\textbf{For five-model ensembles},\nour U-NAS ensemble achieves an MAE of 2.357 and an MSE of 14.326 in dose score, and an MAE of 1.465 and an MSE of 5.560 in DVH score, outperforming the ensembles of off-the-shelf models and the top-ranking solutions on the AAPM-2020 challenge leaderboard. \n\nWe further explore the generalization capability of our method on the AIMIS dataset. Specifically, we apply the best architecture learned from the OpenKBP dataset to the AIMIS challenge. The evaluation results are calculated by the organizers\\footnote{https:\/\/contest.taop.qq.com} of the challenge, as shown in Table~\\ref{tab:aimis_test}.\nOur U-NAS method achieves first place in the AIMIS challenge in both the primary and final phases\\footnote{In the primary phase, the test set consists of 50 scans; in the final phase, the test set consists of another 150 scans.}, outperforming the runner-up by 9.36\\% and 1.88\\%, respectively. In Table~\\ref{tab:aimis_val}, we further compare our U-NAS model with the two best-performing off-the-shelf models, \\textit{i}.\\textit{e}., U-Net and ResUNet, with respect to the different ROIs. A consistent trend can be observed: our U-NAS outperforms the off-the-shelf models and the other top-ranking solutions on both the validation set and the test set of AIMIS.\n\n\\begin{figure}[thb]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/20_nas.pdf}\n\t\t\\label{fig:mba_nas_20}\n\t}\\\\\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/20_nas_en.pdf}\n\t\t\\label{fig:mba_nas_20_en}\n\t}\n \\caption{The dose score (MAE) of (a) the 20 NAS models; (b) the ensembles.}\n \\label{fig:many_better_all}\n\\end{figure}\n\n\\section{Discussions}\n\\label{sec.dis}\nIn this section, we investigate the correlations between diversity and ensemble performance, and empirically provide insightful guidance for ensemble learning with NAS in the task of dose prediction. \n\n\\subsection{Ensemble Many is Better than All}\nMost works, especially in competitions~\\cite{tang2020gp,lin2019seg4reg,deng2014ensemble,fernandez2014we,ghimire2014extreme}, simply integrate all the obtained models to produce the final result. \nTo explore the correlation between the number of ensembled models and the corresponding performance, we follow~\\cite{zhou2002ensembling} and systematically conduct the searching process multiple times, selecting the top 20 models for the experiment. 
Then, we sequentially average the results one by one in order of individual performance. The results are shown in Fig.~\\ref{fig:many_better_all}. \nFig.~\\ref{fig:mba_nas_20} shows the dose scores of the 20 selected models, which range from 2.5806 (NAS\\_1) to 2.6979 (NAS\\_20) in MAE.\nFig.~\\ref{fig:mba_nas_20_en} shows the dose scores of the ensembles of the top $k\\in[1,20]$ models. It can be seen that the ensembles achieve the best performance with the top 14 models (2.3621 in MAE), instead of with all 20 models (2.3693 in MAE). \nIntuitively, models with unacceptable performance in the ensemble can hurt the final ensemble results.\nOur next step is to explore the selection criterion for the members of the ensemble.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{figs\/performance_diversity.pdf}\n \\caption{The dose score (MAE) of the ensembles with different numbers of models. The yellow bars indicate the models selected based on performance; the blue and green bars indicate the models selected based on diversity from the top 20 models and top 10 models, respectively.}\n \\label{fig:per_div}\n\\end{figure}\n\n\\subsection{Performance vs. Diversity}\nExtensive literature~\\cite{opitz1996actively} has shown that the ideal ensemble is one comprised of accurate networks that make errors on different parts of the input space. In other words, the performance of an ensemble depends not only on the accuracy of the base learners, but also on the diversity among them. However, existing works typically encourage the diversity of the ensembles only implicitly, with different training strategies (\\textit{e}.\\textit{g}., random initializations, different hyper-parameters and loss functions), and then casually select the models based on accuracy. So, \\textit{does diversity matter when selecting the members of the ensemble?} To answer this question, we conduct the following experiments.\n\nThe diversity of two models is measured by the relative mean absolute error (RMAE) between their predictions, averaged over the voxels of each sample:\n $d(y_a, y_b) = \\frac{1}{V}\\sum_{i=1}^{V}\\frac{|y_a^i - y_b^i|}{y_a^i + y_b^i}$,\nwhere $y_a$ and $y_b$ are the outputs of the two models and $V$ is the number of voxels. 
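A minimal NumPy sketch of this diversity measure, together with the top-$k$ ensemble sweep of the previous subsection (ours, not the official evaluation code), is:

\begin{verbatim}
import numpy as np

def rmae(ya, yb, eps=1e-8):
    """Pairwise diversity d(y_a, y_b): voxel-averaged relative
    absolute difference between two dose predictions."""
    return np.mean(np.abs(ya - yb) / (ya + yb + eps))

def topk_dose_scores(preds, gt):
    """Dose score (MAE) of the top-k ensembles, k = 1..len(preds);
    preds must be sorted by individual MAE beforehand."""
    return [np.abs(np.mean(preds[:k], axis=0) - gt).mean()
            for k in range(1, len(preds) + 1)]
\end{verbatim}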
Then, we select different pairs of models based on the individual models' performance and on the diversities between them. The yellow and blue bars of Fig.~\\ref{fig:per_div} show that, over all 20 models, the ensemble performance based on the individual models' performance is consistently better than that based on the diversity. \nThis reveals that \\textit{the performance of the individual models is an essential factor in ensembling.} \nIn addition, the results of the yellow and green bars show that within the top 10 models the observation is exactly the opposite: the ensemble performance based on the individual models' performance is lower than that based on the diversity. \nThis suggests that the diversity is indeed an important factor as well. Especially, \\textit{when the performance of the individual models is comparable, the diversity is more important than the accuracy.}\n\n\\begin{figure}[!t]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.22\\textwidth]{figs\/strategies_div.pdf}\n\t\t\\label{fig:strategies_div}\n\t}\n \\subfloat[]{\n\t\t\\includegraphics[width=0.22\\textwidth]{figs\/strategies_dose.pdf}\n\t\t\\label{fig:strategies_dose}\n\t}\n \\caption{(a) Diversity and (b) dose score (MAE) of the individual models in different ensemble strategies: NAS, bagging, random initializations and iterations, and different off-the-shelf architectures (denoted public).}\n \\label{fig:diff_strate}\n\\end{figure}\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.35\\textwidth]{figs\/strategies_dose_en.pdf}\n \\caption{Overall dose score (MAE) of the ensembles with different strategies.}\n \\label{fig:diff_strate_en}\n\\end{figure}\n\\subsection{Comparison of Different Ensemble Strategies}\nA unique feature of the proposed LENAS model is that it exploits a diverse collection of network structures, which drives a natural bias towards diversity of the predictions produced by the individual networks. \nTo assess the impact of LENAS on the ensemble, we compare the diversity and performance of our method with those of four ensemble strategies on the OpenKBP validation set in Fig.~\\ref{fig:diff_strate}, including bagging, random initializations, different training iterations, and different off-the-shelf architectures. \nFor each strategy, we obtain four models. \nSpecifically, for the NAS models, we select the top four models among the aforementioned 20 NAS models (\\textit{i}.\\textit{e}., NAS\\_1 to NAS\\_4). For bagging, we split the training set into four non-overlapping subsets and use different portions of the data (three subsets) to train four models. For random initialization, we repeat the training procedure four times with different initialization seeds. For the different training iterations, we train a single network for $8 \\times 10^4$ iterations and pick the last four checkpoints with a gap of 2000 training iterations in between. For the off-the-shelf architectures, we select the four most popular architectures in 3D medical image analysis, namely FCN, VNet, U-Net, and ResUNet.\nThe diversities of the ensemble strategies are illustrated in Fig.~\\ref{fig:strategies_div}. The diversity of the NAS models is 0.0326 in RMAE with a standard deviation of 0.0009, greater than that of the other three strategies (\\textit{i}.\\textit{e}., cross-validation, random initialization and iterations) and comparable to that of the off-the-shelf architectures, which achieve $0.0338\\pm0.0013$. \nThe results in Fig.~\\ref{fig:strategies_dose} show that the mean and standard deviation of the dose score of the NAS models is $2.587\\pm0.0082$, outperforming the other strategies by a large margin.\nNote that the diversity of the 4-fold cross-validation is close to that of the NAS models; however, the individual models' performance suffers from the limited representation of the subsets of the training data. A similar trend is observed for the off-the-shelf models: the individual models' performance restricts the performance of the final ensemble.\nThe performance of the ensembles with respect to the dose score in MAE is depicted in Fig.~\\ref{fig:diff_strate_en}. \nThe ensembles of NAS models clearly achieve the best performance with 2.392 in MAE, superior to the other ensemble strategies. 
The result reveals that \nproducing and ensembling different network architectures is superior to simply creating an ensemble containing duplicates of a single network architecture with different model parameters.\n\n\\subsection{Effectiveness of Diversity Encouraging Loss}\nWe investigate the effectiveness of the diversity encouraging loss in Fig.~\\ref{fig:div_loss}. Specifically, we searched 10 architectures with and without the diversity encouraging loss,\\footnote{We follow~\\cite{liu2019darts} and search for a single architecture of the DC and UC in each model to facilitate the optimization of the searching process.} and the learned down cells and up cells are shown in Fig.~\\ref{fig:div_loss}. We further calculate the variation of the operations in each module. \nSpecifically, we first rank the operations in each HM based on their frequency over all the architectures (\\textit{e}.\\textit{g}., the operation with the highest frequency is identified with ID 0), and then take the product, over all HMs, of the standard deviations of the operation IDs.\nThe quantitative results of the variation w\/ and w\/o the diversity encouraging loss are 328.3 and 31.1, respectively, indicating that with the diversity encouraging loss the U-NAS method generates architectures with a greater variation, and consequently encourages the diversity of the predictions.\n
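Our reading of this statistic can be sketched as follows (hypothetical code; in particular, the treatment of HM slots on which all architectures agree, which would zero the product, is an assumption):

\begin{verbatim}
from collections import Counter
import numpy as np

def architecture_variation(archs):
    """archs: list of dicts mapping each HM slot to its chosen
    operation. Operations are relabelled per slot by frequency rank
    (most frequent -> ID 0); the statistic is the product over slots
    of the standard deviations of the IDs."""
    variation = 1.0
    for slot in archs[0]:
        ops = [a[slot] for a in archs]
        rank = {op: r for r, (op, _) in
                enumerate(Counter(ops).most_common())}
        ids = np.array([rank[op] for op in ops], dtype=float)
        variation *= ids.std()
    return variation
\end{verbatim}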
\n\\begin{figure}[!t]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.49\\textwidth]{figs\/wo_loss.pdf}\n\t\t\\label{fig:wo_loss}\n\t}\\\\\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/w_loss.pdf}\n\t\t\\label{fig:w_loss}\n\t}\n \\caption{The down cells and up cells of the 10 learned architectures (a) w\/o and (b) w\/ the diversity encouraging loss, where \\textit{id}, \\textit{n}, \\textit{co}, \\textit{se}, \\textit{di}, \\textit{de}, \\textit{av}, \\textit{m}, \\textit{in}, and \\textit{pr} denote \\textit{identity}, \\textit{no connection}, \\textit{conv}, \\textit{se\\_conv}, \\textit{dil\\_conv}, \\textit{dep\\_conv}, \\textit{avg\\_pool}, \\textit{max\\_pool}, \\textit{interpolate}, and \\textit{pre} in Table~\\ref{tab_ops}, respectively. The yellow operations are not used by the 10 architectures.}\n \\label{fig:div_loss}\n\\end{figure}\n\n\\section{Conclusion and Future Work}\nIn this paper, we proposed a learning-based ensemble approach, named LENAS, for 3D radiotherapy dose prediction. \nThe two key components of LENAS are 1) a U-NAS framework which automatically searches for neural architectures\nfrom numerous architecture configurations to form a teacher network zoo, and 2) a KDA-Net which hierarchically transfers the distilled knowledge from the teacher networks to the student network to reduce the inference time while maintaining competitive accuracy. \nIn addition, we conducted comprehensive experiments to investigate the impact of diversity in ensemble learning, and derived several empirical guidelines for producing and ensembling multiple base learners in consideration of individual accuracy and diversity.\nExtensive experiments on two public datasets demonstrated the effectiveness and superior performance of our method over the state-of-the-art methods. \n\nWe would like to point out several limitations of our work. First, the NAS ensembles require multiple rounds of searching and re-training, which is very time-consuming in the training phase. Second, a few failure models may be generated by NAS; this situation is also common in other gradient-based NAS methods. Third, the diversity between the learners in an ensemble is hard to formulate appropriately; it could be task-specific and vary for different outputs (\\textit{e}.\\textit{g}., classification, segmentation, and regression).\nFuture studies will focus on: 1) a more specific model selection criterion for the best ensemble strategies; 2) computationally efficient training strategies for multiple ensemble learners; and 3) an optimization method leading from the dose prediction map to the final radiotherapy treatment plan.\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecurrence relations are powerful tools for\nevaluating multi--loop Feynman integrals \\cite{ch-tk}.\nThey relate Feynman integrals with various\ndegrees of their denominators. In many cases\nthey make it possible to express an\nintegral with given degrees of the denominators as a linear\ncombination of a few master integrals with some coefficients, which\nwe will call weight factors.\n\nAt the two--loop level the recurrence relations are relatively simple, and one\ncan easily find and implement the corresponding recursive algorithm for\nthe calculation of the weight factors.\n\nAt the three--loop level the recurrence relations are more complicated, and \nfinding an algorithm to calculate the weight factors is a serious problem.\n\nFor vacuum integrals with one nonzero mass and various numbers of \nmassless lines the corresponding\nalgorithm was described in \n\\cite{REC}-\\cite{av-p}.\nThere, a repeated application of various recurrence relations is performed\nuntil the integrals of a certain type are eliminated. \n\nIn recent works \\cite{REC},\\cite{AFMT}-\\cite{chet} this recursive algorithm\nwas successfully applied in two--loop and three--loop QED and QCD calculations.\n\nNevertheless, such recursive algorithms\nlead to very time- and memory-consuming calculations,\nbecause the size of the intermediate\nexpressions grows exponentially with respect to the degrees of\nthe denominators in the initial integral. \nIn fact, the calculations mentioned above were made at the\nlimits of computer capabilities.\n\nIn this work we suggest a new approach based on explicit formulas for \nthe solutions of the recurrence relations. As an application, the case\nof three--loop vacuum integrals with four equal-mass and two\nmassless lines is considered. The efficiency of this approach\nis demonstrated by the calculation of previously unknown coefficients in \nthe Taylor expansion of the QED photon vacuum polarization for small $q^2$.\n\n\\section{General case}\n\nLet us consider the three--loop vacuum integrals with six different masses:\n\n\\begin{eqnarray}\nB(\\underline{n},D)\\equiv\nB(n_1,n_2,n_3,n_4,n_5,n_6,D)=\n\\frac{m^{2\\sum_{i=1}^{6} n_i-3D}}\n{\\big[\\imath\\pi^{D\/2}\\Gamma(3-D\/2)\\big]^3}\n\\int\\int\\int \\frac{d^Dp\\,d^Dk\\,d^Dl} \n{D_1^{n_1}D_2^{n_2}D_3^{n_3}D_4^{n_4}D_5^{n_5}D_6^{n_6}}\n\\label{integral}\n\\end{eqnarray}\n\n\\noindent\nwhere\n\n\\centerline{\n\\begin{tabular}{lll}\n$D_1=k^2-\\mu_1 m^2$,&\n$D_2=l^2-\\mu_2 m^2$,&\n$D_3=(p+k)^2-\\mu_3 m^2$\\\\\n$D_4=(p+l)^2-\\mu_4 m^2$,&\n$D_5=(p+k+l)^2-\\mu_5 m^2$,&\n$D_6=p^2-\\mu_6 m^2$\\\\\n\\end{tabular}\n}\n\nLet us derive recurrence relations that result from integration by parts,\nby letting $(\\partial\/\\partial p_i)\\cdot p_j$ act on\nthe integrand, with $p_{i,j}\\in\\{p,k,l\\}$. 
For example, \nacting by $(\\partial\/\\partial k)\\cdot (p+k)$ we get\n\n\\begin{eqnarray}\n(D-2n_3-n_1-n_5)B(\\underline{n},D)&=&\n\\{ n_1 {\\bf 1}^+({\\bf 3}^- -{\\bf 6}^- + \\mu_3 -\\mu_6 +\\mu_1)\n+2n_3 {\\bf 3}^+ \\mu_3 \\nonumber\\\\\n&&+n_5 {\\bf 5}^+({\\bf 3}^- -{\\bf 2}^- + \\mu_3 -\\mu_2 +\\mu_5)\\}B(\\underline{n},D)\n\\label{rr}\n\\end{eqnarray}\n\n\\noindent\nwhere \n${\\bf 1}^\\pm B(n_1,\\ldots)\\equiv B(n_1\\pm1,\\ldots)$, etc.\n\nOther relations can be obtained from (\\ref{rr}) by proper \npermutations of the $n_i$, $\\mu_i$ and ${\\bf I}^\\pm$ objects. \n\nThe common way of using these relations is the step-by-step re-expression\nof the integral (\\ref{integral}) with given values of $n_i$ through a set of \nintegrals \nwith shifted values of $n_i$, with the final goal of reducing this set to\na few integrals whose $n_i$ are equal to $0$ or $1$, the so-called \"master\" \nintegrals. The result can be represented as\n\n\\begin{eqnarray}\nB(\\underline{n},D)=\\sum_k f^k(\\underline{n},D)N_k(D)\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere the index $k$ enumerates the master integrals $N_k(D)$ and the corresponding \ncoefficient functions $f^k(\\underline{n},D)$. \n\nThere are two problems with this approach. First, there is no general \nrecipe for the construction of such a recursive procedure: finding \nproper combinations of these relations and a proper sequence for their use \nis a matter of art even for the case of one mass \n\\cite{av-p}. Second, even in cases where such procedures have been \nconstructed, they lead to very time- and memory-consuming calculations\nbecause of the large reproduction rate at every recursion step.\nFor example, relation (\\ref{rr}) expresses one integral through 7 \nothers.\n\n\nInstead, let us construct the coefficient functions $f^k(\\underline{n},D)$ \ndirectly as solutions of the given recurrence relations. \n\nTo this end, let us diagonalize the recurrence relations with respect to \nthe $n_i\\,{\\bf I}^+$ operators. \nWe found that the recurrence relations can be represented in the following \nsimple form\n\n\\begin{eqnarray}\n\\{P(x_1,\\dots,x_6)\\cdot n_i{\\bf I}^+ - \n\\frac{D-4}{2}\\partial_i(P(x_1,\\dots,x_6)) \\}_{x_i={\\bf\\small I}^- \n+\\mu_i} B(\\underline{n},D)=0,\\quad i=1,\\dots, 6.\n\\label{rr2}\n\\end{eqnarray}\n\n\\noindent\nwhere \n\\begin{eqnarray}\nP(x_1,\\dots,x_6)&=&\n2(x_1x_2(x_1+x_2)+x_3x_4(x_3+x_4)+x_5x_6(x_5+x_6))\\nonumber\\\\\n&&+x_1x_3x_6+x_1x_4x_5+x_2x_3x_5+x_2x_4x_6\\nonumber\\\\\n&&-(x_1+x_2+x_3+x_4+x_5+x_6)(x_1x_2+x_3x_4+x_5x_6)\\nonumber\n\\end{eqnarray}\n\nThe differential equation corresponding to (\\ref{rr2}) has the solution\n$P(x_1+\\mu_1,\\dots,x_6+\\mu_6)^{D\/2-2}$. Let\nus consider the \"Laurent\" coefficients of this function:\n\n\\begin{eqnarray}\nf(n_i,D)=\n\\frac{1}{(2\\pi\\imath)^6}\n\\oint\\oint\\oint\\oint\\oint\\oint\n\\frac\n{dx_1dx_2dx_3dx_4dx_5dx_6}\n{x_1^{n_1}x_2^{n_2}x_3^{n_3}x_4^{n_4}x_5^{n_5}x_6^{n_6}}\n{P(x_1+\\mu_1,\\dots,x_6+\\mu_6)^{D\/2-2}}\n\\label{solution}\n\\end{eqnarray}\n\n\\noindent\nwhere the integral symbols denote six successive complex \nintegrations along contours \nthat will be described below. If one acts with (\\ref{rr2}) on \n(\\ref{solution}), one gets, up to surface terms, the same expression\nas when acting by $P\\partial_i -(D\/2-2)(\\partial_iP)$ on $P^{D\/2-2}$, that is, zero. \nThe surface terms can be removed if we choose\ncontours that are closed or end at infinity. For more rigour one can \nconsider \nanalytic continuations of the result in $D$ from large negative values.\nSo (\\ref{solution}) is a solution of the relations (\\ref{rr}),\nand different choices of the contours correspond to different \nindependent solutions. Note that if one chooses a contour as a small \ncircle around zero, one gets a true Laurent coefficient\nof the function $P^{D\/2-2}$, so this function can be called a generalized \ngenerating function for the solutions of the relations (\\ref{rr}).
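As an illustration (ours, not part of the original paper), the differential identity behind this construction can be verified symbolically for the explicit polynomial $P$ above, e.g. with SymPy:

\begin{verbatim}
import sympy as sp

xs = sp.symbols('x1:7')     # x1..x6
mus = sp.symbols('mu1:7')   # mu1..mu6
D = sp.Symbol('D')

def P(v):
    a, b, c, d, e, f = v
    return (2*(a*b*(a + b) + c*d*(c + d) + e*f*(e + f))
            + a*c*f + a*d*e + b*c*e + b*d*f
            - (a + b + c + d + e + f)*(a*b + c*d + e*f))

Pv = P([x + m for x, m in zip(xs, mus)])
f = Pv**(D/2 - 2)

for x in xs:
    # P * df/dx_i - (D/2 - 2) * (dP/dx_i) * f must vanish identically.
    assert sp.simplify(Pv*sp.diff(f, x) - (D/2 - 2)*sp.diff(Pv, x)*f) == 0
\end{verbatim}

This checks only the differential form; the discrete relations for the contour coefficients then follow after integration by parts, as described above.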
\n\nThen, in accordance with the rules of dimensional regularization, \nthe integrals (\\ref{integral}) are nonzero only if at least three\namong the $n_i$ are positive. So it is natural to construct \nthe solutions from those that are equal to zero if an index from a \ndefinite three--index set (the \"Taylor\" indexes) is not positive. \nOne can obtain such solutions \nby choosing the contours corresponding to these indexes\nas circles around zero. In this case these three integrations can be\nevaluated and lead to the coefficients of the ordinary Taylor expansion in the\ncorresponding variables.\n\nIn the general case the three remaining integrations lead to sums of \ngeneralized hypergeometric series, but in some cases of practical interest\n(see below)\nthey can be reduced to finite sums of products of Pochhammer symbols. \n\n\\section{Example}\n\nAs an example let us consider the case of integrals with four equal-mass\nand two massless lines, that is $\\mu_1=\\mu_2=0,\\mu_3=\\mu_4=\\mu_5=\\mu_6=1$. \nLet us calculate the coefficient functions \nwhich correspond to the choice of the master integrals from \\cite{REC}.\nThat is, we expand $B(\\underline{n})$ as\n\n\\begin{eqnarray}\nB(\\underline{n},D)=\nN(\\underline{n},D)B(0,0,1,1,1,1,D)+\nM(\\underline{n},D)B(1,1,0,0,1,1,D)+\nT(\\underline{n},D)B(0,0,0,1,1,1,D)\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwith the following normalization conditions\n\\begin{eqnarray}\nN(0,0,1,1,1,1,D)=1,\\quad N(1,1,0,0,1,1,D)=0,\\quad \nN(0,0,0,1,1,1,D)=0,\\label{condN}\\\\\nM(0,0,1,1,1,1,D)=0,\\quad M(1,1,0,0,1,1,D)=1,\\quad\nM(0,0,0,1,1,1,D)=0,\\label{condM}\\\\\nT(0,0,1,1,1,1,D)=0,\\quad T(1,1,0,0,1,1,D)=0,\\quad \nT(0,0,0,1,1,1,D)=1,\\label{condT} \\end{eqnarray}\n\nThe practical rule for choosing the integration contours is: a circle around\nzero for an index equal to one in the master integral, and a contour around the cut for an index equal to zero in \nthe master integral.\n\nTo get $N(\\underline{n})$ one should first make the Taylor expansion \nin $x_3,x_4,x_5,x_6$\n\n\\begin{eqnarray}\nB(n_i,D)\\propto\\oint\\oint\n\\frac{dx_1dx_2}{x_1^{n_1}x_2^{n_2}}\n\\big(\\frac{\\partial_3^{n_3-1}\\dots\\partial_6^{n_6-1}}\n{(n_3-1)!\\dots(n_6-1)!}\nP(x_1,x_2,x_3+1,\\dots,x_6+1)^{D\/2-2}\\big)\n\\vert_{x_3,\\dots,x_6=0}\\nonumber\n\\end{eqnarray}\n\nThe remaining integrals over $x_1,x_2$ are of the type\n\n\\begin{eqnarray}\n\\oint\\oint\n\\frac{dx_1dx_2}{x_1^{n_1}x_2^{n_2}}\n[x_1x_2(x_1+x_2-4)]^{D\/2-2}\n\\propto\n(-4)^{-n_1-n_2}\\frac{(D\/2-1)_{-n_1}(D\/2-1)_{-n_2}}{(3D\/2-3)_{-n_1-n_2}}\n\\equiv N(n_1,n_2,1,1,1,1,D)\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere we follow the normalization (\\ref{condN}). 
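For concreteness, this closed form can be evaluated symbolically; the sketch below (ours, added for illustration) implements the Pochhammer symbol via Gamma functions, which also covers the negative shifts appearing here, and checks the normalization (\ref{condN}):

\begin{verbatim}
import sympy as sp

D = sp.Symbol('D')

def poch(a, n):
    # Pochhammer symbol (a)_n = Gamma(a + n) / Gamma(a)
    return sp.gamma(a + n) / sp.gamma(a)

def N(n1, n2):
    """N(n1, n2, 1, 1, 1, 1, D) as quoted above."""
    return ((-4)**(-n1 - n2)
            * poch(D/2 - 1, -n1) * poch(D/2 - 1, -n2)
            / poch(3*D/2 - 3, -n1 - n2))

assert sp.simplify(N(0, 0)) == 1   # normalization condition (condN)
print(sp.simplify(N(1, 1)))        # a rational function of D
\end{verbatim}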
\n\nThe case of $M(\\underline{n},D)$ is analogous. The only difference is that, \ndue to the \nsymmetry of the task, we should take the sum of the solutions with the \nsignatures $(++\\pm\\pm++)$ and $(++++\\pm\\pm)$.\n\nThe case of $T(\\underline{n},D)$ is more complicated. The symmetry of the task \nsuggests that one should try the following form of the solution\n\n\\begin{eqnarray}\nT(n_1,n_2,n_3,n_4,n_5,n_6,D)&=&\n\\phantom{+}t(n_1,n_2,n_3,n_4,n_5,n_6,D)+t(n_1,n_2,n_4,n_3,n_6,n_5,D)\\nonumber\\\\\n&&+t(n_1,n_2,n_5,n_6,n_3,n_4,D)+t(n_1,n_2,n_6,n_5,n_4,n_3,D)\n\\nonumber\n\\end{eqnarray}\n\n\nwhere $t(\\underline{n},D)$ is non--zero only if $n_4,n_5,n_6>0$.\nLet us construct $t(\\underline{n},D)$ using (\\ref{solution}),\nkeeping in mind the possible mixing with the $N(\\underline{n},D)$ solution.\nAfter differentiating \nover the last three indexes the task reduces to the construction of \n$t(n_1,n_2,n_3,1,1,1,D)$. Let us consider the corresponding integral:\n\n\\begin{eqnarray}\n\\overline{t}(n_1,n_2,n_3,D)=\n\\frac{1}{(2\\pi\\imath)^3}\n\\oint\\oint\\oint\n\\frac\n{dx_1dx_2dx_3}\n{x_1^{n_1}x_2^{n_2}x_3^{n_3}}\n{(x_3^2-x_1x_2x_3+x_1x_2(x_1+x_2-4))^{D\/2-2}}\n\\label{tbar}\n\\end{eqnarray}\n\nFor $n_3<1$ one can calculate this integral immediately (the possible \n$N(\\underline{n},D)$ contribution vanishes). Taking into account the \nnormalization (\\ref{condT}) we get\n\n\\begin{eqnarray}\nt(n_1,n_2,n_3<1,1,1,1,D)&=&\\Df\n{\\overline{t}(n_1,n_2,n_3,D)}{\\overline{t}(0,0,0,D)}\\nonumber\\\\\n&=&\n\\Df\n{(2-D)_{(n_1+n_3)}\n(2-D)_{(n_2+n_3)}\n(\\Df{D-1}{2})_{(-n_1-n_3)}\n(\\Df{D-1}{2})_{(-n_2-n_3)}}\n{(-4)^{(n_1+n_2)}(-8)^{n_3}}\n\\nonumber\\\\\n&&\\sum_{k=0}^{[-n_3\/2]}\\Df{\n(\\Df{D-1}{2}-n_1-n_3)_{-k}\n(\\Df{D-1}{2}-n_2-n_3)_{-k}\n(n_3)_{(-n_3-2k)}\n}{\n(\\Df{3-D}{2})_{-k}\n(\\Df{1}{2})_{-k}\n(-n_3-2k)!\n}\n\\nonumber\n\\end{eqnarray}\n\nFor $n_3>1$, using integration by parts in $x_3$ in (\\ref{tbar}) (which \nreduces to the evaluation of the $(n_3-1)^{\\rm th}$ derivative of $P^{D\/2-2}$),\n$\\overline{t}(n_1,n_2,n_3,D)$ can be reduced to \na set of $\\overline{t}(n_1,n_2,1,D)$ with different $n_1,n_2$.\nLet us extract $t(n_1,n_2,1,1,1,1,D)$ from $\\overline{t}(n_1,n_2,1,D)$ \naccording to the conditions (\\ref{condT})\n\n\\begin{eqnarray}\nt(n_1,n_2,1,1,1,1,D)=\\Df{1}{\\overline{t}(0,0,0,D)}\n(\\overline{t}(n_1,n_2,1,D)\n-\\overline{t}(0,0,1,D)N(n_1,n_2,1,1,1,1))\n\\label{t2}\n\\end{eqnarray}\n\nOne can calculate $t(n_1,n_2,1,1,1,1,D)$\nby direct use of (\\ref{t2}), expanding it, for example, in a series in $D\/2-2$, but we found it more convenient to use the recurrence relations \nin $n_1,n_2$:\n\n\\begin{eqnarray}\nt(n_1,n_2,1,1,1,1,D)&=&\n-\\Df{(D-2)^2}{4(D-3)(2n_1-D+2)}\n(-\\Df{1}{2}t(n_1-1,n_2-1,0,1,1,1,D-2)\\nonumber\\\\\n&&+t(n_1-2,n_2-1,1,1,1,1,D-2))\\nonumber\\\\\n&&-\\Df{2(D-2)^2(11D-38)}{3(3D-10)(3D-8)(D-3)}\nN(n_1,n_2,1,1,1,1,D)\\label{rt1}\\\\\nt(n_1,n_2,1,1,1,1,D)&=&\n\\Df{(n_1-n_2-1)}{(2n_1-D+2)}\nt(n_1,n_2+1,0,1,1,1,D)\\nonumber\\\\\n&&+\\Df{(2n_2-D+4)}{(2n_1-D+2)}\nt(n_1-1,n_2+1,1,1,1,1,D)\n\\label{rt2}\n\\end{eqnarray}\n\nWith the help of (\\ref{rt1}) $n_1+n_2$ can be reduced to $-1,0,1$, \nand with the help of (\\ref{rt2}) $n_1-n_2$ can be reduced to $0,1$\n(note that $t(n_1,n_2,1,1,1,1,D)=t(n_2,n_1,1,1,1,1,D)$).\nHere, at every recursion step, one integral is re-expressed through\nanother one plus a rational function of $D$, so there is no \n\"exponential reproduction\". Moreover, the recursion acts separately\non the variables $n_1+n_2$ and $n_1-n_2$. 
So, although the relations \n(\\ref{rt1},\\ref{rt2}) can be solved to give explicit formulas,\nthis \"safe\" variant\nof the recursion is in this case the most effective way of calculation.\n\nThe relations (\\ref{rt1},\\ref{rt2}) are a simple example of recurrence\nrelations with $D$-shifts, which can be derived in the following way.\nNote that if \n$f^k(n_i,D)$ is a solution of (\\ref{rr2}), then\n$P({\\bf I}^-+\\mu_i)f^k(n_i,D-2)$ is also a solution.\nHence, if $f^k(n_i,D)$ is a complete set of solutions, then\n\n\\begin{eqnarray}\nf^k(n_i,D)=\\sum_n S^k_n(D)P({\\bf I}^-+\\mu_i)f^n(n_i,D-2)\n\\label{rrD}\n\\end{eqnarray}\n\n\\noindent\nwhere the coefficients of the mixing matrix $S$ depend only on $D$.\nFor the solutions (\\ref{solution}) the matrix $S$ is the unit matrix.\nOn the other hand, the desire to come to some specific set of master \nintegrals leads to a nontrivial mixing matrix, and for the example considered \nabove these coefficients are\n\n\\vspace{3mm}\n\\begin{tabular}{lll}\n$S^n_n=-\\Df{3}{64}\\Df{(3D - 8)(3D - 10)}{(D - 4)^2}$\n&$S^m_m=\\Df{3}{16}\\Df{(3D - 8)(3D - 10)}{(2D - 7)(2D - 9)}$&\n$S^t_t=-\\Df{(D - 2)^2}{4(D - 3)(D - 4)}$\\\\\n$S^t_n=\\Df{(11D - 38)(D - 2)^2}{32(D - 3)(D - 4)^2}$& \n\\multicolumn{2}{l}{$S^n_t=S^t_m=S^m_t=S^m_n =S^n_m=0$}\\\\\n\\end{tabular}\n\n\\vspace{3mm}\nTo check the efficiency of this approach we \nevaluated, to 3 loops, the first 5 moments in the \n$z\\equiv q^2\/4m^2\\to 0$ \nexpansion of the QED photon vacuum polarization \n \\[\\Pi(z) = \\sum_{n>0} C_n\\,z^n + {\\rm O}(\\alpha^4)\\,.\\]\n\nThe $C_n$ are expressed through approximately $10^5$ scalar \nintegrals, but there is no need to evaluate these integrals separately.\nInstead, we evaluated a few integrals of the (\\ref{solution}) type, but\nwith $P^{D\/2-2}$ multiplied by a long polynomial in the $x_i$.\n\nAfter OS mass \\cite{GBGS,BGS} and charge \\cite{REC} renormalization,\nwe obtained the finite $D\\rightarrow4$ limits\n(the coefficients $C_1, C_2, C_3$ can be found in \\cite{3l}):\n\n\\begin{eqnarray}\nC_4 & = & \\Bigl\\{ N^2\\left[ \\Df{256}{693} \\,\\zeta_2\n + \\Df{2522821}{9437184} \\,\\zeta_3\n - \\Df{129586264289}{143327232000}\\right]\\nonumber\\\\\n & &{} + N \\left[ \\Df{160}{231} \\left(1-\\Df{8}{5}\\ln2\\right)\\zeta_2\n + \\Df{1507351507033}{1651507200} \\,\\zeta_3\n - \\Df{269240669884818833}{245806202880000} \\right]\n \\Bigr\\}\\frac{\\alpha^3}{\\pi^3} \n\\nonumber\\\\\n & &{}+\\Df{51986}{127575}\\,N\\frac{\\alpha^2}{\\pi^2}\n +\\Df{32}{693} \\,N\\frac{\\alpha }{\\pi }\\,,\\nonumber\\\\\nC_5 & = & \\Bigl\\{ N^2\\left[ \\Df{1024}{3003} \n\\,\\zeta_2\n + \\Df{1239683}{3932160} \\,\\zeta_3\n - \\Df{512847330943}{556351488000}\\right]\\nonumber\\\\\n & &{} + N \\left[ \\Df{640}{1001} \\left(1-\\Df{8}{5}\\ln2\\right)\\zeta_2\n + \\Df{939939943788973}{190749081600} \\,\\zeta_3\n - \\Df{360248170450504167133}{60837035212800000} \\right]\n \\Bigr\\}\\frac{\\alpha^3}{\\pi^3} \n\\nonumber\\\\\n & &{}+\\Df{432385216}{1260653625}\\,N\\frac{\\alpha^2}{\\pi^2}\n +\\Df{512}{15015} \\,N\\frac{\\alpha }{\\pi }\\,,\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere we follow common practice \\cite{BKT}, by allowing for\n$N$ degenerate leptons. 
In pure QED, $N=1$; formally,\nthe powers of $N$ serve to count the number of electron loops.\n\nThe $N$ contribution of $C_4$ is in agreement with recent QCD \ncalculations \\cite{chet}; the $N^2$ part of $C_4$ and the \n$C_5$ are new.\n\nThe bare (non-renormalized) integrals were calculated for arbitrary $D$.\nThe calculations for $C_4$ were made on a PC with a \nPentium-75 processor by REDUCE with 24 Mbytes of memory, within \napproximately 10 CPU hours. \nThe most difficult diagrams for $C_5$ were calculated\non an HP735 workstation. \n\nThese results demonstrate reasonable progress in comparison with the common \nrecursive \napproach. For example, the common approach used in \\cite{3l} demands \nseveral CPU hours on a DEC-Alpha workstation to calculate the full $D$ \ndependence of the $C_2$ integrals, and further calculations became \npossible only after truncation in $(D\/2-2)$. In the present approach the \nfull $D$ calculation for $C_2$ demands about 5 minutes on a PC.\n\n\\section{Conclusions}\n\nThe new approach suggested in this work allows one to produce explicit \nformulas (\\ref{solution}) for the solutions of the recurrence relations for \n3--loop vacuum integrals.\nThese formulas can be used for direct calculations and demonstrate \nhigh efficiency.\nOn the other hand, they produce a new type of $D$-shifted recurrence \nrelations (\\ref{rrD}) for these integrals.\nFinally, we hope that the simple representation (\\ref{rr2}) of the \ntraditional recurrence relations which allows one to obtain all these\nresults is not intrinsic to the 3--loop vacuum case, and that a \ngeneralization to the multi--loop and\/or non-vacuum case is possible.\n\n\n\\section{Acknowledgment}\nI would like to thank D.Broadhurst for the possibility of using\nhis RECURSOR \\cite{REC}, which produced a lot of initial material for \ninvestigating the structure of the solutions, and V.Ilyin for drawing \nattention to the problem and for many fruitful discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction} \nIn statistical physics, \ncalculating the free energy of solvable lattice models\n at finite temperature is one of the \n important problems. \nFor this purpose, thermodynamic Bethe ansatz (TBA) \nequations have often been used \\cite{Ta99}. In general, \nthe TBA equations are an infinite number of coupled nonlinear \nintegral equations (NLIE) with an infinite number of unknown functions. \nIt is therefore desirable to reduce the TBA equations \nto a finite number of coupled NLIE with a finite number of \nunknown functions. \n\nDestri, de Vega \\cite{DD92} and Kl\\\"umper \\cite{K93,K92}\n proposed NLIE with two (or one \\footnote{ if an integral contour \n with a closed loop is adopted}) \nunknown functions for the $XXZ$ (or $XYZ$) model. \nGeneralizing their NLIE to models \n whose underlying algebras have arbitrary rank seems to be \na difficult problem, as considerable trial and error is needed \nto find the auxiliary functions from which the NLIE are derived. \nAs a result, NLIE of the abovementioned type exist\n only for models whose underlying algebras have \nat most rank 3 (for example, \\cite{KWZ97,FK99,D05}). \n\nSeveral years ago, \nTakahashi discovered \\cite{Ta01} another NLIE for the \n$XXZ$ model while simplifying the TBA equations. \nLater, the same NLIE was rederived \\cite{TSK01} from fusion relations \n($T$-system) \\cite{KR87} \namong quantum transfer matrices (QTM) \\cite{S85}. \nIn addition, it was also rederived \\cite{KW02} for the $XXX$ model \nfrom a fugacity expansion formula. 
\n\nIn view of this situation, we have derived NLIE of Takahashi type for \nthe $osp(1|2s)$ model \\cite{T02}, the $sl(r+1)$ model \\cite{T03},\nthe higher spin Heisenberg model \\cite{T04}, and \nthe $U_{q}(\\widehat{sl}(r+1))$ Perk-Schultz model \\cite{TT05}. \nIn these cases, \nthe number of unknown functions and NLIE coincides with the rank of the \nunderlying algebra. In this paper, we will further derive NLIE with a finite \nnumber of unknown functions \nfor the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model \\cite{PS81,Sc83}, \nwhich is a multicomponent generalization of the 6-vertex model and \none of the fundamental solvable lattice models in statistical mechanics. \nFor example, a special case of this model is related to the \nsupersymmetric $t-J$ model, which is important in \nstrongly correlated electron systems. \n\nIn section 2, we introduce the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model, \nand define the QTM for it. \nAs a summation over tableaux labeled by an $a \\times m$ Young (super) diagram, we \nintroduce an auxiliary function (\\ref{DVF}) \\cite{T97,T98}\n which includes the eigenvalue formula (\\ref{QTM-eigen}) of the QTM \nas a special case. \nWe also introduce a system of functional relations ($T$-system) which is satisfied by\n this auxiliary function.\n\nIn section 3, we derive two kinds of NLIE which contain only $r+s+1$ unknown functions. \nThe first ones (\\ref{nlie-general}), (\\ref{nlie-generalb}) \n reduce to the NLIE for the $U_{q}(\\widehat{sl}(r+1))$ Perk-Schultz model \nin \\cite{TT05} if $s=-1$. \nHowever, our new NLIE are not a straightforward generalization of the ones in \nour previous paper \\cite{TT05}. \nIn fact, for the case $r,s \\ge 0$, \na straightforward generalization of our previous NLIE \n becomes a system of an infinite number of coupled NLIE which contains an \ninfinite number of unknown functions \n(see (\\ref{nlie4})). \nTo overcome this difficulty, we will use the \nquantum (supersymmetric) Jacobi-Trudi and Giambelli formula \n(\\ref{jacobi-trudi}) and \na duality (\\ref{dual}) for the auxiliary function, \nfrom which a closed set of NLIE can be derived. \nWe will also propose another set of NLIE, (\\ref{nlie-xi=-1}) and (\\ref{nlie-xi=-1b}), \nin the latter part of \nsection 3, which have never been considered before \neven for the $U_{q}(\\widehat{sl}(2))$ case. \nIn deriving the NLIE, we assume that $q$ is generic. \nHowever, we expect that our results can also be analytically continued to \nthe case where $q$ is a root of unity. \n\nIn section 4, we calculate the high temperature expansion of the \nfree energy based on our NLIE.\n In particular, we derive the coefficients (\\ref{coe1})-(\\ref{coe5}) \nup to the order of 5 for arbitrary rank $r+s+1$. \nThe point is that once we fix the degree of the high temperature expansion, \nwe can write down a general formula for the coefficients. \nOn the other hand, if we specialize the parameters, we can \nderive the coefficients to much higher orders. For example, \nfor $(r,s)=(2,-1),(-1,2)$, $q=1$, $\\mu_{a}=0$, \n the coefficients of the high temperature expansion of the specific heat \n up to the order of 40 are presented in the appendix. \n It would be difficult to derive coefficients of such \nhigh order by other methods. \n \nSection 5 is devoted to concluding remarks. 
\n\\section{The Perk-Schultz model and the quantum transfer matrix method} \nIn this section, we will introduce the $U_{q}(\\widehat{sl}(r+1|s+1))$ \nPerk-Schultz model\n\\footnote{$U_{q}(\\widehat{sl}(r+1|s+1))$ is a quantum affine superalgebra, \nwhich characterizes the $R$-matrix of this model. \nSee for example, \\cite{Y99}. \nWe assume $\\eta \\in {\\mathbb R}$ ($q=e^{\\eta}$). \nA rational limit ($q \\to 1$) of the Perk-Schultz model is\n the Uimin-Sutherland model \\cite{U70,S75}.}\n \\cite{PS81,Sc83} and \nthe quantum transfer matrix (QTM) method \n \\cite{S85,SI87,K87,SAW90,K92,K93} \n for it. \nThe QTM method was applied to the Perk-Schultz model \n in ref. \\cite{KWZ97} \n(see also, ref. \\cite{JKS97,JKS98,FK99}). \n\nLet us introduce three sets $B=\\{1,2,\\dots,r+s+2\\}=B_{+}\\cup B_{-}$, \nwhere $B_{+} \\cap B_{-}=\\phi $, $|B_{+}|=r+1$ and $|B_{-}|=s+1$\n ($r,s \\in {\\mathbb Z}_{\\ge -1}$).\nWe define a grading parameter $p(a)$ ($a \\in B$) such that \n$p(a)=0$ for $a \\in B_{+}$ and \n$p(a)=1$ for $a \\in B_{-}$. \nThe $R$-matrix of the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model \\cite{PS81} \nis given as \n\\begin{eqnarray}\nR(v)=\n\\sum_{a_{1},a_{2},b_{1},b_{2}\\in B}\nR^{a_{1},b_{1}}_{a_{2},b_{2}}(v) \nE^{a_{1},a_{2}}\\otimes E^{b_{1},b_{2}},\n\\end{eqnarray}\nwhere $E^{a,b}$ is a $r+s+2$ by $r+s+2$ matrix \nwhose $(i,j)$ element is given as \n$(E^{a,b})_{i,j}=\\delta_{ai}\\delta_{bj}$; \n$R^{a_{1},b_{1}}_{a_{2},b_{2}}(v)$ is defined as \n\\begin{eqnarray}\n&& R^{a,a}_{a,a}(v)=[(-1)^{p(a)}v+1]_{q}, \\\\\n&& R^{a,b}_{a,b}(v)=(-1)^{p(a)p(b)}[v]_{q} \\quad (a \\ne b), \\\\\n&& R^{b,a}_{a,b}(v)=q^{\\mathrm{sign}(a-b)v}\n\\quad (a \\ne b), \\label{R-mat}\n\\end{eqnarray}\nwhere $v \\in \\mathbb{C}$ is the spectral parameter;\n$a,b \\in B$; \n $[v]_{q}=(q^{v}-q^{-v})\/(q-q^{-1})$; \n$q=e^{\\eta}$. \nNote that this $R$-matrix reduces to the one for the well known 6-vertex model \nif $(r,s)=(1,-1)$.\n\nLet $L$ be a positive integer (the number of lattice sites). \nThe row-to-row transfer matrix on $({\\mathbb C}^{r+s+2})^{\\otimes L}$ \nis defined as\n\\footnote{The lower index $i,j$ of $R_{ij}(v)$ is used as follows: \nfor example, $E^{a,b}_{k}$ \nis defined on $({\\mathbb C}^{r+s+2})^{\\otimes (L+1)}$: \n$E^{a,b}_{k}=I^{\\otimes k}\\otimes E^{a,b}\\otimes I^{\\otimes (L-k)}$, \nwhere $I$ is $r+s+2$ by $r+s+2$ identity matrix; \n$k=0,1,\\dots, L$. \nThen \n$R_{ij}(v)$ is defined as \n$\nR_{ij}(v)=\\sum_{a_{1},a_{2},b_{1},b_{2}} \nR^{a_{1},b_{1}}_{a_{2},b_{2}}(v) \nE^{a_{1},a_{2}}_{i} E^{b_{1},b_{2}}_{j}\n$. The trace ${\\mathrm tr}_{0}$ is \ntaken over the auxiliary space indexed by $0$.} \n\\begin{eqnarray}\nt(v)={\\mathrm tr}_{0}(R_{0L}(v)\n \\cdots R_{02}(v)R_{01}(v)).\n\\label{rtr}\n\\end{eqnarray}\nThe main part of the Hamiltonian is proportional to \nthe logarithmic derivative of the row-to-row transfer matrix (\\ref{rtr}): \n\\begin{eqnarray}\n&& \\hspace{-20pt} \nH_{body}=\\frac{J\\sinh \\eta}{\\eta}\\frac{d}{dv}\\log t(v) |_{v=0}\n= J\\sum_{j=1}^{L}\\biggl\\{\n \\cosh \\eta \\sum_{a \\in B} (-1)^{p(a)}E^{a,a}_{j}E^{a,a}_{j+1} +\n\\nonumber \\\\ && \n \\sum_{\n {\\scriptsize \\begin{array}{c}\n a, b \\in B \\\\\n a\\ne b \n \\end{array}}\n }\n \\left( {\\rm sign}(a-b) \\sinh \\eta \n E^{a,a}_{j}E^{b,b}_{j+1} +\n (-1)^{p(a)p(b)}E^{b,a}_{j}E^{a,b}_{j+1}\n \\right)\n\\biggl\\}, \\label{ham0}\n\\end{eqnarray}\nwhere we adopt the periodic boundary condition \n$E^{a,b}_{L+1}=E^{a,b}_{1}$. 
\nWithout breaking the integrability, we can also add the chemical \npotential term\n\\begin{eqnarray}\nH_{ch}=-\\sum_{j=1}^{L}\\sum_{a \\in B}\\mu_{a}E^{a,a}_{j} \\label{hamch}\n\\end{eqnarray}\n to $H_{body}$. Then the total Hamiltonian is $H=H_{body}+H_{ch}$. \n \nTo treat the model at finite temperature $T$, \nwe introduce the so-called quantum transfer matrix (QTM)\\cite{S85}: \n\\begin{eqnarray}\n&& \\hspace{-30pt} t_{\\mathrm{QTM}}(v)=\\sum_{\\{\\alpha_{k}\\},\\{\\beta_{k}\\}}\nt_{\\mathrm{QTM}}(v)\n^{\\{\\beta_{1},\\dots, \\beta_{N} \\}}\n_{\\{\\alpha_{1},\\dots,\\alpha_{N} \\}}\nE^{\\beta_{1}\\alpha_{1}}_{1}\nE^{\\beta_{2}\\alpha_{2}}_{2}\n\\cdots \nE^{\\beta_{N}\\alpha_{N}}_{N}, \\label{QTM} \\\\\n&& \\hspace{-46pt}\nt_{\\mathrm{QTM}}(v)^{\\{\\beta_{1},\\dots, \\beta_{N} \\}}\n_{\\{\\alpha_{1},\\dots,\\alpha_{N} \\}}=\n\\sum_{\\{\\nu_{k}\\}}e^{\\frac{\\mu_{\\nu_{1}}}{T}}\n\\prod_{k=1}^{\\frac{N}{2}}\n R^{\\beta_{2k},\\nu_{2k+1}}_{\\alpha_{2k},\\nu_{2k}}(u+iv)\n \\widetilde{R}^{\\beta_{2k-1},\\nu_{2k}}_{\\alpha_{2k-1},\\nu_{2k-1}}(u-iv),\n \\nonumber \n\\end{eqnarray}\nwhere $N \\in 2{\\mathbb Z}_{\\ge 1} $ is the Trotter number; \n$\\nu_{N+1}=\\nu_{1}$; $\\nu_{k},\\alpha_{k},\\beta_{k}\n \\in B$; $u=-\\frac{J \\sinh \\eta }{\\eta N T}$; \n$\\widetilde{R}^{a_{1},b_{1}}_{a_{2},b_{2}}(v)=\nR^{b_{1},a_{2}}_{b_{2},a_{1}}(v)$ is the \\symbol{\"60}$90^{\\circ}$ rotation' of $R(v)$. \n We can express \\cite{S85} the free energy per site \nin terms of only the largest eigenvalue $\\Lambda_{1}$ of \nthe QTM (\\ref{QTM}) at $v=0$:\n\\begin{eqnarray}\nf=\n-T\\lim_{N\\to \\infty}\\log \\Lambda_{1},\n\\label{free-en-qtm}\n\\end{eqnarray} \nwhere the Boltzmann constant is set to $1$. \n\nDue to the Yang-Baxter equation, the QTM (\\ref{QTM}) forms \ncommuting family for any $v$. \nThus it can be diagonalized by the \nBethe ansatz. \nThe eigenvalue formula\n\\footnote{To be precise, \nthis formula is a conjecture \nfor general parameters $r,s,q,\\mu_{a},N$. \nIn \\cite{KWZ97}, the \nalgebraic Bethe ansatz for a one particle state was \nexecuted for the QTM of the $U_{q}(\\hat{sl}(r+1|s+1))$ Perk-Schultz model. \nAs for the $U_{q}(\\hat{sl}(2))$ case, a proof of this formula by \n the algebraic Bethe ansatz is similar to the \nrow-to-row transfer matrix case (cf. \\cite{GKS04}). \nThis formula has a quite natural form (dressed vacuum form) \nfrom a point of view of the analytic Bethe ansatz \\cite{R83,KS95}. \nAn eigenvalue formula of the row to row transfer matrix (\\ref{rtr}) \nwas derived in \\cite{BVV82,Sc83}. It has essentially same form as\n (\\ref{QTM-eigen}) except for a part which is related to \n the vacuum eigenvalue. \nThere is also support by numerical calculations for small \n$r,s$.}\n of the QTM (\\ref{QTM}) will be (cf. 
\\cite{KWZ97,FK99}) \n\\begin{eqnarray}\nT^{(1)}_{1}(v)=\\sum_{a\\in B}z(a;v), \n \\label{QTM-eigen}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& z(a;v)=\\psi_{a}(v) \\xi_{a}\n \\nonumber \\\\ \n&& \\times \n\\frac{Q_{a-1}(v-\\frac{i\\sum_{j=1}^{a-1}(-1)^{p(j)}}{2}-i(-1)^{p(a)})\nQ_{a}(v-\\frac{i\\sum_{j=1}^{a}(-1)^{p(j)}}{2}+i (-1)^{p(a)})}\n{Q_{a-1}(v-\\frac{i\\sum_{j=1}^{a-1}(-1)^{p(j)}}{2})\nQ_{a}(v-\\frac{i\\sum_{j=1}^{a}(-1)^{p(j)}}{2})}, \\nonumber \\\\\n&& Q_{a}(v)=\\prod_{k=1}^{M_{a}}\\sin \\eta(v-v_{k}^{(a)}),\n \\\\\n&& \\psi_{a}(v)=e^{\\frac{\\mu_{a}}{T}}\n \\phi_{-}(v-i(-1)^{p(1)}\\delta_{a,1})\n\\phi_{+}(v+i(-1)^{p(r+s+2)}\\delta_{a,r+s+2}),\n \\nonumber \\\\\n&& \\hspace{20pt}\n\\phi_{\\pm}(v)=\\left(\n\\frac{\\sin \\eta (v\\pm iu)}{\\sinh \\eta }\\right)^{\\frac{N}{2}},\n\\nonumber \n\\end{eqnarray}\nwhere $M_{a}\\in {\\mathbb Z}_{\\ge 0}$; $Q_{0}(v)=Q_{r+s+2}(v)=1$. \n$\\xi_{a} \\in \\{-1,1\\}$ is a parameter which depends on the grading \nparameter $\\{p(b)\\}_{b \\in B}$. \n$\\{v^{(a)}_{k}\\}$ is a root of the Bethe ansatz equation \n(BAE)\n\\begin{eqnarray}\n&& \\hspace{-20pt} \n\\frac{\\psi_{a}(v^{(a)}_{k}+\\frac{i}{2}\\sum_{j=1}^{a}(-1)^{p(j)})}\n {\\psi_{a+1}(v^{(a)}_{k}+\\frac{i}{2}\\sum_{j=1}^{a}(-1)^{p(j)})} \\label{BAE} \\\\\n&& =\n-\\varepsilon_{a}\n\\frac{Q_{a-1}(v^{(a)}_{k}+\\frac{i(-1)^{p(a)}}{2})Q_{a}(v^{(a)}_{k}-i(-1)^{p(a+1)})\n Q_{a+1}(v^{(a)}_{k}+\\frac{i(-1)^{p(a+1)}}{2})}\n {Q_{a-1}(v^{(a)}_{k}-\\frac{i(-1)^{p(a)}}{2})Q_{a}(v^{(a)}_{k}+i(-1)^{p(a)})\n Q_{a+1}(v^{(a)}_{k}-\\frac{i(-1)^{p(a+1)}}{2})}\n \\nonumber \\\\\n&& \\hspace{40pt} \\mbox{for} \\quad k\\in \\{1,2, \\dots, M_{a}\\} \\quad \n\\mbox{and} \\quad a\\in \\{1,2,\\dots, r+s+1 \\}, \\nonumber\n\\end{eqnarray}\nwhere $\\varepsilon_{a}=\\frac{\\xi_{a+1}}{\\xi_{a}} \\in \\{-1,1 \\} $.\nFrom now on, we assume the relation $p(1)=p(r+s+2)$ on \nthe grading parameter. \nIn this case, the eigenvalue formula (\\ref{QTM-eigen}) \nof the QTM has good analyticity to derive the NLIE. \nWe expect that this assumption does not spoil generality \nas the free energy will be independent of the order of the \ngrading parameters. \n\nLet us define \nan auxiliary function \\cite{T97,T98} (see also \\cite{T98-2}): \n\\begin{eqnarray}\nT_{m}^{(a)}(v)=\\sum_{\\{d_{j,k}\\}} \\prod_{j=1}^{a}\\prod_{k=1}^{m}\nz(d_{j,k};v-\\frac{i}{2}(a-m-2j+2k)),\n\\label{DVF}\n\\end{eqnarray}\nwhere $m,a \\in \\mathbb{Z}_{\\ge 1}$, and the summation is taken over \n$d_{j,k}\\in B$ ($ 1 < 2 < \\cdots < r+s+2$) \nsuch that\n\\begin{eqnarray}\n&& d_{j,k} \\le d_{j+1,k} \\quad {\\rm and} \\quad d_{j,k} \\le d_{j,k+1} \\label{rule1} \\\\ \n&& d_{j,k} < d_{j,k+1} \\quad {\\rm if} \\quad \n d_{j,k} \\in B_{-} \\quad {\\rm or} \\quad d_{j,k+1} \\in B_{-} \n \\label{rule2} \\\\ \n&& d_{j,k} < d_{j+1,k} \\quad {\\rm if} \\quad d_{j,k} \\in B_{+} \n\\quad {\\rm or} \\quad d_{j+1,k} \\in B_{+}. \\label{rule3}\n\\end{eqnarray}\nThis function contains \n$T_{1}^{(1)}(v)$ (\\ref{QTM-eigen}) as a special case \n$(a,m)=(1,1)$. \n(\\ref{DVF}) can be interpreted as a \nsummation over a Young (super) tableaux labeled by \n$a \\times m$ Young (super) diagram. \nIt is related to a system of eigenvalue formulae of the \nQTM for fusion models \\cite{KRS81}. \nNote that the condition (\\ref{rule2}) is void if $s=-1$, then \n(\\ref{DVF}) reduces to the Bazhanov-Reshetikhin formula \\cite{BR90}. 
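\nThe admissibility rules (\\ref{rule1})-(\\ref{rule3}) can be implemented by \nbrute force, which gives a convenient cross-check of small cases of the \nfunctional relations discussed below. A minimal sketch (ours, not from the \npaper), with the one-box weight $z(d;v)$ replaced by a formal weight $x_{d}$ \n(i.e. the $v$-independent limit of (\\ref{DVF}) used later in this section):\n\\begin{verbatim}
from itertools import product

# Brute-force sketch (ours, not from the paper): enumerate the a x m
# arrays d_{j,k} admitted by (rule1)-(rule3) and sum their weights,
# with each box d contributing a formal weight x_d.  Bp is the set B_+.
def Q(a, m, x, Bp):
    n, total = len(x), 0
    for flat in product(range(1, n + 1), repeat=a*m):
        d = [[flat[j*m + k] for k in range(m)] for j in range(a)]
        ok = True
        for j in range(a):
            for k in range(m):
                if j + 1 < a:            # (rule1)/(rule3): down a column
                    u, w = d[j][k], d[j+1][k]
                    ok &= u < w or (u == w and u not in Bp)
                if k + 1 < m:            # (rule1)/(rule2): along a row
                    u, w = d[j][k], d[j][k+1]
                    ok &= u < w or (u == w and u in Bp)
        if ok:
            p = 1
            for row in d:
                for b in row:
                    p *= x[b - 1]
            total += p
    return total

# sl(2|1): B_+ = {1,3}, B_- = {2}; integer weights keep the check exact.
# The assertion is the (a,m)=(1,1) case of the Q-system given below.
x, Bp = [2, 3, 5], {1, 3}
assert Q(1, 1, x, Bp)**2 == \
       Q(1, 0, x, Bp)*Q(1, 2, x, Bp) + Q(0, 1, x, Bp)*Q(2, 1, x, Bp)
\end{verbatim}\n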
\n\nFor $a,m \\in {\\mathbb Z}_{\\ge 1}$, we \n will normalize (\\ref{DVF}) as \n $ \\widetilde{T}^{(a)}_{m}(v)=\n T^{(a)}_{m}(v)\/{\\mathcal N}^{(a)}_{m}(v)$, \n where \n\\begin{eqnarray}\n\\hspace{-30pt} && {\\mathcal N}^{(a)}_{m}(v)=\n \\frac{\\phi_{-}(v- \\frac{a+m}{2} \\xi i)\n\\phi_{+}(v+ \\frac{a+m}{2}\\xi i)}{\n \\phi_{-}(v-\\frac{a-m}{2}i)\\phi_{+}(v+\\frac{a-m}{2}i)}\n \\nonumber \\\\ \n\\hspace{-30pt} && \\hspace{20pt} \\times\n \\prod_{j=1}^{a}\\prod_{k=1}^{m}\n \\phi_{-}(v-\\frac{a-m-2j+2k}{2}i)\\phi_{+}(v-\\frac{a-m-2j+2k}{2}i).\n \\label{normal}\n\\end{eqnarray}\nHere we introduce a parameter $\\xi \\in \\{-1,1 \\}$. \n$T^{(a)}_{m}(v)$ has no pole on $v$ due to the BAE (\\ref{BAE}). \nIn contrast, $\\widetilde{T}^{(a)}_{m}(v)$ has \npoles at $v=\\pm (\\frac{m+a}{2}\\xi i +iu)+\\frac{n \\pi}{\\eta}$ \n($n \\in {\\mathbb Z}$) for \n$(a,m) \\in {\\mathbb Z}_{\\ge 1} \\times \\{1,2,\\dots,s+1 \\} \\cup \n \\{1,2,\\dots,r+1 \\}\\times {\\mathbb Z}_{\\ge 1}$. \n\nOne can show that \n$\\widetilde{T}^{(a)}_{m}(v)$ satisfies the \nso called $T$-system for $U_{q}(\\widehat{sl}(r+1|s+1))$ \\cite{T97,T98} \n(see also \\cite{JKS98} for a derivation of TBA equations from the \n$T$-system). \nFor $m,a \\in {\\mathbb Z}_{\\ge 1}$,\n\\begin{eqnarray}\n&& \\hspace{-10pt} \n\\widetilde{T}^{(a)}_{m}(v-\\frac{i}{2})\\widetilde{T}^{(a)}_{m}(v+\\frac{i}{2})=\n\\widetilde{T}^{(a)}_{m-1}(v)\\widetilde{T}^{(a)}_{m+1}(v)+\n\\widetilde{T}^{(a-1)}_{m}(v)\\widetilde{T}^{(a+1)}_{m}(v)\\label{T-sys} \\\\ \n&& \\hspace{-10pt} \\mbox{for} \\quad \na \\in \\{1,2,\\dots, r\\} \\quad \\mbox{or} \\quad m \\in \\{1,2,\\dots, s\\}\n \\quad \\mbox{or}\\quad (a,m)=(r+1,s+1), \\nonumber \\\\\n&& \\hspace{-10pt}\n\\widetilde{T}^{(r+1)}_{m}(v-\\frac{i}{2})\\widetilde{T}^{(r+1)}_{m}(v+\\frac{i}{2})=\n\\widetilde{T}^{(r+1)}_{m-1}(v)\\widetilde{T}^{(r+1)}_{m+1}(v) \n\\quad \\mbox{for} \\quad m \\in {\\mathbb Z}_{\\ge s+2}, \\label{T-sys-m} \\\\\n&& \\hspace{-10pt}\n\\widetilde{T}^{(a)}_{s+1}(v-\\frac{i}{2})\\widetilde{T}^{(a)}_{s+1}(v+\\frac{i}{2})=\n\\widetilde{T}^{(a-1)}_{s+1}(v)\\widetilde{T}^{(a+1)}_{s+1}(v) \n\\quad \\mbox{for} \\quad a \\in {\\mathbb Z}_{\\ge r+2}, \\label{T-sys-a}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& \\hspace{-35pt} \n \\widetilde{T}^{(a)}_{0}(v)=\\frac{\\phi_{-}(v-\\frac{a}{2}i)\\phi_{+}(v+\\frac{a}{2}i)}\n {\\phi_{-}(v-\\frac{a}{2}\\xi i)\\phi_{+}(v+\\frac{a}{2} \\xi i)}\n\\quad {\\rm for} \\quad a \\in {\\mathbb Z}_{\\ge 1},\\label{a0} \\\\\n&& \\hspace{-35pt}\n\\widetilde{T}^{(0)}_{m}(v)=\n \\frac{\\phi_{-}(v+\\frac{m}{2}i)\\phi_{+}(v-\\frac{m}{2}i)}\n {\\phi_{-}(v-\\frac{m}{2} \\xi i)\\phi_{+}(v+\\frac{m}{2} \\xi i)}\n\\quad {\\rm for} \\quad m \\in {\\mathbb Z}_{\\ge 1}. \\label{0m} \n\\end{eqnarray}\nThere is a duality relation for the auxiliary function.\n\\begin{eqnarray} \n&& \\hspace{-35pt}\n\\widetilde{T}^{(r+1)}_{a+s}(v)=\n\\zeta^{a-1} \n\\widetilde{T}^{(r+a)}_{s+1}(v) \\quad {\\rm for} \\quad a \\in Z_{\\ge 1} ,\n\\label{dual}\n\\end{eqnarray} \nwhere \n$\\zeta = \\frac{\\prod_{a \\in B_{+}} \\xi_{a}\n e^{\\frac{\\mu_{a}}{T}}}{\\prod_{b \\in B_{-}}\\xi_{b}e^{\\frac{\\mu_{b}}{T}}}$. \n(\\ref{a0}) (resp. (\\ref{0m})) becomes $1$ if $\\xi=1$ (resp. $\\xi=-1$). \nNote that there is no upper bound for the index $a$ of $\\widetilde{T}^{(a)}_{m}(v)$ \nfor $m \\in \\{1,2,\\dots, s+1 \\}$ if $s \\in {\\mathbb Z}_{\\ge 0}$.\nFor $s=-1$, this $T$-system reduces the one for $U_{q}(\\widehat{sl}(r+1))$\n \\cite{KNS94} (see also \\cite{KR87}). 
\nIn this case, (\\ref{dual}) reduces to \n$\\widetilde{T}^{(r+1)}_{a-1}(v)=\\zeta^{a-1}=\ne^{\\frac{(a-1)(\\mu_{1}+\\mu_{2}+\\cdots +\\mu_{r+1})}{T}}$ \n if $ \\xi =1 $ (see eq. (2.21) in \\cite{TT05}).\nFrom the relations (\\ref{T-sys-m}), (\\ref{T-sys-a}), (\\ref{dual}) and \n(\\ref{T-sys}) for $(a,m)=(r+1,s+1)$, one can derive the following relation \n for $a \\in {\\mathbb Z}_{\\ge 2}$: \n\\begin{eqnarray}\n&& \\hspace{-20pt} \\widetilde{T}^{(r+1)}_{s+a}(v) =\n\\zeta^{a-1}\n\\widetilde{T}^{(r+a)}_{s+1}(v) \\nonumber \\\\\n&& =\n \\frac{\n\\zeta^{a-1} \n\\prod_{j=1}^{a} \\widetilde{T}^{(r+1)}_{s+1}(v+\\frac{a-2j+1}{2}i) }\n {\\prod_{j=2}^{a} \\bigl( \n \\zeta\n\\widetilde{T}^{(r+1)}_{s}(v+\\frac{a-2j+2}{2}i)+\n \\widetilde{T}^{(r)}_{s+1}(v+\\frac{a-2j+2}{2}i) \\bigr)} .\n \\nonumber \\\\\n \\label{sol}\n\\end{eqnarray}\n$\\widetilde{T}^{(a)}_{m}(v)$ can also be written in terms of a determinant \n(the quantum (supersymmetric) Jacobi-Trudi and Giambelli formula \\cite{T97,T98} \n(for $s=-1$ case, \\cite{BR90}; \nfor $U_{q}(B_{r}^{(1)})$ case, \\cite{KOS95}))\n\\begin{eqnarray}\n\\widetilde{T}^{(a)}_{m}(v)&=&\n W^{(a)}_{m}(v)\\det _{1\\le j,k \\le m}\n\\left(\\widetilde{T}^{(a+j-k)}_{1}\n\\left(\nv-\\frac{j+k-m-1}{2}i\n\\right) \n\\right) \\label{jacobi-trudi} \\\\\n&=& Z^{(a)}_{m}(v) \\det _{1\\le j,k \\le a}\n\\left(\\widetilde{T}^{(1)}_{m+j-k}\n\\left(\nv-\\frac{a-j-k+1}{2}i\n\\right) \n\\right), \\label{jacobi-trudi2}\n\\end{eqnarray}\nwhere $\\widetilde{T}^{(a)}_{1}(v)=0$ for $a <0$ and \n$\\widetilde{T}^{(1)}_{m}(v)=0$ for $m <0$. \n$ W^{(a)}_{m}(v)$ and $ Z^{(a)}_{m}(v)$ are normalization functions: \n\\begin{eqnarray}\n&& W^{(a)}_{m}(v)=\\frac{1}{\\prod_{j=1}^{m-1}\\widetilde{T}^{(a)}_{0}(v+\\frac{m-2j}{2}i)}, \\\\\n&& Z^{(a)}_{m}(v)= \\frac{1}{\\prod_{j=1}^{a-1}\\widetilde{T}^{(0)}_{m}(v-\\frac{a-2j}{2}i)},\n\\end{eqnarray} \nwhere $\\prod_{j=1}^{0}(\\cdots )=1$. \nSubstituting (\\ref{jacobi-trudi}) into (\\ref{dual}), we obtain an equation\n\\begin{eqnarray}\n&& W^{(r+1)}_{a+s}(v) \\det _{1\\le j,k \\le a+s}\n\\left(\\widetilde{T}^{(r+1+j-k)}_{1}\n\\left(\nv-\\frac{j+k-a-s-1}{2}i\n\\right) \n\\right) \\nonumber \\\\\n&&=\n\\zeta^{a-1}\nW^{(r+a)}_{s+1}(v)\n\\det _{1\\le j,k \\le s+1}\n\\left(\\widetilde{T}^{(r+a+j-k)}_{1}\n\\left(\nv-\\frac{j+k-s-2}{2}i\n\\right) \n\\right) \n\\nonumber \\\\\n&& \\hspace{180pt} \\mbox{for} \\quad \na \\in {\\mathbb Z}_{\\ge 1}. 
\\label{det-eq}\n\\end{eqnarray}\nExpanding partially (\\ref{det-eq}) on both side, \nwe obtain\n\\begin{eqnarray}\n&& \\widetilde{T}^{(a+r+s)}_{1}(v)=\n\\frac{\n\\widetilde{A}_{1}(v)-\n\\zeta^{a-1}\n\\frac{W^{(r+a)}_{s+1}(v)}{W^{(r+1)}_{a+s}(v)}\n\\widetilde{A}_{2}(v)\n}\n{(-1)^{a+s}\\widetilde{A}_{3}(v)+(-1)^{s}\n\\zeta^{a-1} \n\\frac{W^{(r+a)}_{s+1}(v)}{W^{(r+1)}_{a+s}(v)}\n \\widetilde{A}_{4}(v)} \n \\nonumber \\\\ \n&& \\hspace{160pt} \\mbox{for} \\quad \na \\in {\\mathbb Z}_{\\ge 2}, \n\\label{a+r+s}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& \\widetilde{A}_{1}(v)=\\det _{1\\le j,k \\le a+s}\n\\left(\\widetilde{f}_{j,k}\n\\left(\nv-\\frac{j+k-a-s-1}{2}i\n\\right) \n\\right) \\\\\n&& \\quad \\widetilde{f}_{j,k}(v)=\\widetilde{T}^{(r+1+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (a+s,1), \n\\quad \\widetilde{f}_{a+s,1}(v)=0, \\nonumber \\\\\n&&\\widetilde{A}_{2}(v)=\n\\det _{1\\le j,k \\le s+1}\n\\left(\\widetilde{g}_{j,k}\n\\left(\nv-\\frac{j+k-s-2}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad \\widetilde{g}_{j,k}(v)=\\widetilde{T}^{(r+a+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (s+1,1), \n\\quad \\widetilde{g}_{s+1,1}(v)=0, \\nonumber \\\\ \n&& \\widetilde{A}_{3}(v)=\\det _{1\\le j,k \\le a+s-1}\n\\left(\\widetilde{T}^{(r+j-k)}_{1}\n\\left(\nv-\\frac{j+k-a-s}{2}i\n\\right) \n\\right), \\\\\n&&\\widetilde{A}_{4}(v)=\n\\det _{1\\le j,k \\le s}\n\\left(\\widetilde{T}^{(r+a+j-k-1)}_{1}\n\\left(\nv-\\frac{j+k-s-1}{2}i\n\\right) \n\\right) \n.\n\\end{eqnarray}\nIt turns out that $\\widetilde{T}^{(a+r+s)}_{1}(v)$ is written in \nterms of $\\{\\widetilde{T}^{(d)}_{1}(v)\\}$ where $ \\max (0,r-s+2-a) \\le d \\le a+r+s-1$. \nThen $ \\widetilde{T}^{(a)}_{1}(v) $ for $a \\in {\\mathbb Z}_{\\ge r+s+2}$ \ncan be expressed in \nterms of $\\{\\widetilde{T}^{(d)}_{1}(v)\\}$ where $ 0 \\le d \\le r+s+1$.\nSimilarly, we can derive the \nfollowing relation from (\\ref{dual}) and (\\ref{jacobi-trudi2}).\n\\begin{eqnarray}\n&& \\widetilde{T}^{(1)}_{a+r+s}(v)=\n\\frac{\n\\zeta^{a-1}\n\\frac{Z^{(r+a)}_{s+1}(v)}{Z^{(r+1)}_{a+s}(v)}\n\\widetilde{A}_{5}(v)-\n\\widetilde{A}_{6}(v)\n}\n{(-1)^{a+r}\n\\zeta^{a-1}\n\\frac{Z^{(r+a)}_{s+1}(v)}{Z^{(r+1)}_{a+s}(v)}\n\\widetilde{A}_{7}(v)+(-1)^{r} \n\\widetilde{A}_{8}(v)} \n\\nonumber \\\\\n&& \\hspace{140pt} \\mbox{for} \\quad \na \\in {\\mathbb Z}_{\\ge 2}, \n\\label{a+r+s-b}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& \\widetilde{A}_{5}(v)=\\det _{1\\le j,k \\le a+r}\n\\left(\\widetilde{h}_{j,k}\n\\left(\nv-\\frac{a+r+1-j-k}{2}i\n\\right) \n\\right) \\\\\n&& \\quad \\widetilde{h}_{j,k}(v)=\\widetilde{T}^{(1)}_{s+1+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (a+r,1), \n\\quad \\widetilde{h}_{a+r,1}(v)=0, \\nonumber \\\\\n&&\\widetilde{A}_{6}(v)=\n\\det _{1\\le j,k \\le r+1}\n\\left(\\widetilde{b}_{j,k}\n\\left(\nv-\\frac{r+2-j-k}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad \\widetilde{b}_{j,k}(v)=\\widetilde{T}^{(1)}_{a+s+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (r+1,1), \n\\quad \\widetilde{b}_{r+1,1}(v)=0, \\nonumber \\\\ \n&& \\widetilde{A}_{7}(v)=\\det _{1\\le j,k \\le a+r-1}\n\\left(\\widetilde{T}^{(1)}_{s+j-k}\n\\left(\nv-\\frac{a+r-j-k}{2}i\n\\right) \n\\right), \\\\\n&&\\widetilde{A}_{8}(v)=\n\\det _{1\\le j,k \\le r}\n\\left(\\widetilde{T}^{(1)}_{a+s-1+j-k}\n\\left(\nv-\\frac{r+1-j-k}{2}i\n\\right) \n\\right) \n.\n\\end{eqnarray}\n\nLet us consider the limit \n\\begin{eqnarray}\n&& Q^{(a)}_{m}:=\\lim_{v \\to i \\eta^{-1} \\infty} \\widetilde{T}^{(a)}_{m}(v) \n =\\sum_{\\{ d_{j,k}\\}}\n\\prod_{j=1}^{a}\\prod_{k=1}^{m} 
\\xi_{d_{j,k}}\n\\exp \\left(\\frac{\\mu_{d_{j,k}}}{T} \\right),\n \\label{limit}\n\\end{eqnarray} \nwhere the summation is taken over $\\{ d_{j,k}\\}$ ($d_{j,k} \\in B$)\n which obey the rules (\\ref{rule1})-(\\ref{rule3}).\nFor example, for the $U_{q}(\\widehat{sl}(2|1))$ case ($B_{+}=\\{1,3\\}$, $B_{-}=\\{2\\}$), we have \n\\begin{eqnarray}\nQ^{(1)}_{1}&=& \\xi_{1}e^{\\frac{\\mu_{1}}{T}}+\\xi_{2}e^{\\frac{\\mu_{2}}{T}}\n +\\xi_{3} e^{\\frac{\\mu_{3}}{T}},\n \\label{Q11-sl21} \\\\\nQ^{(a)}_{1}&=&\n \\xi_{1} \\xi_{2}^{a-1} e^{\\frac{\\mu_{1}+(a-1)\\mu_{2}}{T}}\n+\\xi_{1} \\xi_{2}^{a-2} \\xi_{3} e^{\\frac{\\mu_{1}+(a-2)\\mu_{2}+\\mu_{3}}{T}}\n+\\xi_{2}^{a}e^{\\frac{a \\mu_{2}}{T}}\n+\\xi_{2}^{a-1} \\xi_{3} e^{\\frac{(a-1)\\mu_{2}+\\mu_{3}}{T}} \\nonumber \\\\\n&=& \\xi_{2}^{a-2}e^{ \\frac{(a-2) \\mu_{2}}{T}}Q^{(2)}_{1} \n\\qquad \\mbox{for} \\quad a \\in {\\mathbb Z}_{\\ge 2}. \n \\label{Q-sl21}\n\\end{eqnarray}\nWe can also rewrite (\\ref{Q-sl21}) as \n\\begin{eqnarray}\nQ^{(a)}_{1}=\\frac{{Q^{(3)}_{1}}^{a-2}}{{Q^{(2)}_{1}}^{a-3}}\n=\\frac{{Q^{(2)}_{1}}^{a-1}}{(\\zeta +Q^{(1)}_{1})^{a-2}}.\n\\label{Qa1-sl21}\n\\end{eqnarray}\nThe quantity (\\ref{limit}) corresponds to \n the character of the $a$-th anti-(super)symmetric and \nthe $m$-th (super)symmetric tensor representation. \nWe will use $Q^{(a)}_{1}$ and $Q^{(1)}_{m}$ later. \n\n$Q^{(a)}_{m}$ also satisfies the so-called $Q$-system, \nwhich is the $T$-system (\\ref{T-sys})-(\\ref{dual})\n without the spectral parameter $v$: \nfor $ m,a \\in {\\mathbb Z}_{\\ge 1}$, we have \n\\begin{eqnarray}\n\\hspace{-20pt} && {Q^{(a)}_{m}}^{2}=Q^{(a)}_{m-1}Q^{(a)}_{m+1}+Q^{(a-1)}_{m}Q^{(a+1)}_{m}\n\\label{Q-sys} \\\\ \n&&\\hspace{10pt} \\mbox{for} \\quad \na \\in \\{1,2,\\dots, r\\} \\quad \\mbox{or} \\quad m \\in \\{1,2,\\dots, s\\}\n \\nonumber \\\\\n&& \\hspace{130pt} \\mbox{or}\\quad (a,m)=(r+1,s+1), \\nonumber \\\\\n&&{Q^{(r+1)}_{m}}^{2}=Q^{(r+1)}_{m-1}Q^{(r+1)}_{m+1}\n \\quad \\mbox{for} \\quad m \\in {\\mathbb Z}_{\\ge s+2},\\\\\n&&{Q^{(a)}_{s+1}}^{2} =Q^{(a-1)}_{s+1}Q^{(a+1)}_{s+1} \n\\quad \\mbox{for} \\quad a \\in {\\mathbb Z}_{\\ge r+2}, \n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& Q^{(a)}_{0}=Q^{(0)}_{m}=1\n\\quad {\\rm for} \\quad a,m \\in {\\mathbb Z}_{\\ge 1},\\nonumber \\\\\n&& Q^{(r+1)}_{a+s}=\n\\zeta^{a-1}\nQ^{(r+a)}_{s+1} \\quad {\\rm for} \\quad a \\in {\\mathbb Z}_{\\ge 1} .\n\\end{eqnarray}\nThe $Q$-system was introduced \\cite{K89,KR90} as functional relations among \ncharacters of finite dimensional representations of \nYangians (or quantum affine algebras) associated with simple Lie algebras. \nThe above system of equations is a superalgebra version of them. \n\nIn closing this section, \nlet us comment on the analyticity of the auxiliary function (\\ref{DVF}). \nAs mentioned before, the free energy (\\ref{free-en-qtm}) is given only by the \nlargest eigenvalue of the QTM (\\ref{QTM}). \nThus we are only interested in the root of the BAE (\\ref{BAE}) \nwhich gives the largest eigenvalue of the QTM. \nJudging from numerical calculations \\cite{JKS97,JKS98,T03,TT05}, \nsuch a root will exist in the sector \n$\\frac{N}{2}=M_{1}=\\cdots =M_{r+s+1}$ of the BAE, \nand it will form a \\symbol{\"60}one-string' on the complex plane. \nFor this root, the zeros of the auxiliary function (\\ref{DVF}) will \n exist near the lines ${\\rm Im}\\, v= \\pm \\frac{a+m}{2}$, \n at least for $\\{\\mu_{a}\\}=\\{0\\}$ and small $u$ \n(see figures in \\cite{JKS98,T03,TT05}). \nIn this sector, we have \n\\begin{eqnarray}\n&& \\xi_{b}=1 \\qquad {\\rm for} \\qquad b \\in B,\n\\nonumber \\\\ \n&& \\varepsilon_{b}=1 \\qquad {\\rm for} \\qquad b \\in B-\\{r+s+2 \\},\n\\label{para}\n\\\\\n&& \\zeta=\\exp(\\frac{\\sum_{a\\in B_{+}}\\mu_{a}-\\sum_{a\\in B_{-}}\\mu_{a}}{T}).\n\\nonumber \\end{eqnarray} \nFrom now on, we only consider the largest eigenvalue of the \nQTM, and assume these values (\\ref{para}) of the parameters. 
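\nAs a small consistency check (ours), the second equality in (\\ref{Qa1-sl21}) \ncan be verified symbolically, writing $x_{b}=\\xi_{b}e^{\\mu_{b}\/T}$ for the \none-box weights, so that $\\zeta=x_{1}x_{3}\/x_{2}$:\n\\begin{verbatim}
from sympy import symbols, simplify

# Sketch: check (Qa1-sl21) against (Q-sl21) for sl(2|1) with formal
# one-box weights x1, x2, x3 (x_b = xi_b * exp(mu_b/T)).
x1, x2, x3 = symbols('x1 x2 x3', positive=True)
a = 5                                    # any fixed a >= 2
Q11 = x1 + x2 + x3                       # (Q11-sl21)
Q21 = x1*x2 + x1*x3 + x2*x3 + x2**2      # height-2 column tableaux
zeta = x1*x3/x2
lhs = Q21**(a - 1) / (zeta + Q11)**(a - 2)
rhs = x2**(a - 2) * Q21                  # (Q-sl21) at general a
print(simplify(lhs - rhs))               # 0
\end{verbatim}\n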
\\section{The nonlinear integral equations}\nIn this section, we will derive the NLIE by using the formulae in the previous section. \nWe will treat two kinds of NLIE, paying attention to the value of the \nparameter $\\xi \\in \\{-1,1\\}$. \nAlthough the first step of the calculation (\\ref{mustput})-(\\ref{nlie2}) is similar to\n the $s=-1$ case \\cite{TSK01,T03,TT05}, we will present it for the reader's convenience. \n \nTaking note of \nthe limit (\\ref{limit}) and \n the fact that $\\widetilde{T}^{(a)}_{m}(v)$ has \npoles at $v=\\pm (\\frac{m+a}{2}\\xi i +iu)+ \\frac{n \\pi}{\\eta}$ \n($n \\in {\\mathbb Z}$) \nfor $(a,m) \\in \\{1,2,\\dots, r+1\\}\\times {\\mathbb Z}_{\\ge 1} \\cup \n{\\mathbb Z}_{\\ge 1} \\times \\{1,2,\\dots, s+1\\}$, \nwe can expand ${\\widetilde T}^{(a)}_{m}(v)$ as follows.\n\\begin{eqnarray}\n&& {\\widetilde T}^{(a)}_{m}(v)=Q^{(a)}_{m}\n \\label{mustput} \\\\ \n&& \\hspace{20pt} +\n\\sum_{n \\in {\\mathbb Z}} \n\\sum_{j=1}^{\\frac{N}{2}} \n \\left\\{ \n\\frac{A^{(a)}_{m,j}}{(v-\\frac{a+m}{2}\\xi i-iu-\\frac{\\pi n}{\\eta})^{j}}\n+\n\\frac{{\\bar A}^{(a)}_{m,j}}{(v+\\frac{a+m}{2}\\xi i+iu+\\frac{\\pi n}{\\eta})^{j}}\n\\right\\},\n\\nonumber \n\\end{eqnarray}\nwhere the coefficients $A^{(a)}_{m,j}, {\\bar A}^{(a)}_{m,j} \\in {\\mathbb C}$ \ncan be expressed as contour integrals:\n\\begin{eqnarray}\n&& A^{(a)}_{m,j}= \\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n \\widetilde{T}^{(a)}_{m}(v)(v-\\frac{a+m}{2}\\xi i-iu)^{j-1},\\nonumber \\\\\n&& \\overline{A}^{(a)}_{m,j}=\n \\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n \\widetilde{T}^{(a)}_{m}(v)(v+\\frac{a+m}{2}\\xi i+iu)^{j-1}.\n \\label{coeff}\n\\end{eqnarray}\nHere the contour ${\\tilde C}^{(a)}_{m}$ (resp. $\\overline{\\tilde C}^{(a)}_{m}$) \nis a counterclockwise closed loop \nwhich surrounds $v=\\frac{a+m}{2}\\xi i+iu$ (resp. $v=-\\frac{a+m}{2}\\xi i-iu$) \nand does not surround $v=-\\frac{a+m}{2}\\xi i-iu-\\frac{\\pi n}{\\eta},\n\\frac{a+m}{2}\\xi i+iu+\\frac{\\pi k}{\\eta}$ \n(resp. 
$v=\\frac{a+m}{2}\\xi i+iu+\\frac{\\pi n}{\\eta}, \n-\\frac{a+m}{2}\\xi i-iu-\\frac{\\pi k}{\\eta}$), where $n \\in {\\mathbb Z}, k \\in {\\mathbb Z}-\\{0\\} $.\nUsing the $T$-system (\\ref{T-sys})-(\\ref{T-sys-a}), \nwe can rewrite (\\ref{coeff}) as \n\\begin{eqnarray}\n&& A^{(a)}_{m,j}= \\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n \\bigg\\{\n \\frac{\\widetilde{T}^{(a)}_{m-1}(v-\\frac{\\xi i}{2})\n \\widetilde{T}^{(a)}_{m+1}(v-\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v-\\xi i)} \\nonumber \\\\\n&& \\hspace{80pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(v-\\frac{\\xi i}{2})\n \\widetilde{T}^{(a+1)}_{m}(v-\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v-\\xi i)}\n \\bigg\\}\n (v-\\frac{a+m}{2}\\xi i-iu)^{j-1},\\nonumber \\\\\n&& \\overline{A}^{(a)}_{m,j}=\n \\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n\\bigg\\{\n \\frac{\\widetilde{T}^{(a)}_{m-1}(v+\\frac{\\xi i}{2})\n \\widetilde{T}^{(a)}_{m+1}(v+\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v+\\xi i)} \n\\label{coeff2} \\\\\n&& \\hspace{80pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(v+\\frac{\\xi i}{2})\n \\widetilde{T}^{(a+1)}_{m}(v+\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v+\\xi i)}\n \\bigg\\}\n (v+\\frac{a+m}{2}\\xi i+iu)^{j-1},\n \\nonumber \n\\end{eqnarray}\nwhere we admit $\\widetilde{T}^{(b)}_{n}(v)=0$ if \n$(b,n) \\in {\\mathbb Z }_{\\ge r+2}\\times {\\mathbb Z}_{\\ge s+2}$\n (cf. \\cite{DM92,MR92}).\nSubstituting (\\ref{coeff2}) into (\\ref{mustput}) and taking the summation\n over $j$, we obtain\n\\begin{eqnarray}\n&& \\hspace{-30pt}\n\\widetilde{T}^{(a)}_{m}(v)=Q^{(a)}_{m} \\nonumber \\\\ \n&& +\n\\sum_{n \\in {\\mathbb Z}}\n\\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{1-\\left(\\frac{y}{v-\\frac{a+m}{2} \\xi i-iu -\\frac{\\pi n}{\\eta}}\\right)^{\\frac{N}{2}}}\n {v-y-\\frac{a+m}{2} \\xi i-iu -\\frac{\\pi n}{\\eta}} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a)}_{m+1}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)} \\nonumber \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a+1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)}\n \\bigg\\} \\nonumber \\\\\n&& +\n\\sum_{n \\in {\\mathbb Z}}\n\\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{1-\\left(\\frac{y}{v+\\frac{a+m}{2} \\xi i+iu +\\frac{\\pi n}{\\eta}}\\right)^{\\frac{N}{2}}}\n {v-y+\\frac{a+m}{2} \\xi i+iu +\\frac{\\pi n}{\\eta}} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y-\\frac{a+m-1}{2} \\xi i-iu) \n \\widetilde{T}^{(a)}_{m+1}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)} \n\\label{nlie1} \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu) \n \\widetilde{T}^{(a+1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)}\n \\bigg\\}.\n \\nonumber\n\\end{eqnarray}\nHere the contours are shifted as follows: \n the contour ${\\tilde C}^{(a)}_{m}$ (resp. $\\overline{\\tilde C}^{(a)}_{m}$) \nis a counterclockwise closed loop \nwhich surrounds $y=0 $ (resp. $y=0$)\nand does not surround $y=-(a+m)\\xi i-2iu-\\frac{\\pi n}{\\eta},\\frac{\\pi k}{\\eta}$ \n(resp. 
$y=(a+m)\\xi i+2iu+\\frac{\\pi n}{\\eta},\\frac{\\pi k}{\\eta}$), \nwhere $n \\in {\\mathbb Z}, k \\in {\\mathbb Z}-\\{0 \\}$.\nWe can neglect the terms $\\left(\\frac{y}{v \\pm \\frac{a+m}{2} \\xi i \\pm iu \\pm \n\\frac{\\pi n}{\\eta}}\\right)^{\\frac{N}{2}}$ in (\\ref{nlie1}) since the poles at $y=0$ in \n the two brackets $\\{\\cdots \\}$ \nare canceled by the zeros from these terms. \nBy using the following relation\n\\begin{eqnarray}\n\\lim_{m \\to \\infty}\n\\sum_{n=-m}^{m}\\frac{1}{v-\\frac{\\pi n}{\\eta}}\n=\\frac{\\eta}{\\tan \\eta v},\n\\end{eqnarray}\nwe can take the summation over $n \\in {\\mathbb Z}$.\n\\begin{eqnarray}\n&& \\hspace{-30pt}\n\\widetilde{T}^{(a)}_{m}(v)=Q^{(a)}_{m} \\nonumber \\\\ \n&& +\n\\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{\\eta }\n {\\tan \\eta (v-y-\\frac{a+m}{2} \\xi i-iu)} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a)}_{m+1}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)} \\nonumber \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a+1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)}\n \\bigg\\} \\nonumber \\\\\n&& +\n\\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{\\eta }\n {\\tan \\eta (v-y+\\frac{a+m}{2} \\xi i+iu)} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y-\\frac{a+m-1}{2} \\xi i-iu) \n \\widetilde{T}^{(a)}_{m+1}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)} \n\\label{nlie2} \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu) \n \\widetilde{T}^{(a+1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)}\n \\bigg\\},\n \\nonumber \\\\\n && {\\rm for} \\quad (a,m) \\in \n\\{1,2,\\dots,r+1\\} \\times {\\mathbb Z}_{\\ge 1} \\cup \n{\\mathbb Z}_{\\ge 1} \\times \\{1,2,\\dots,s+1\\}.\n \\nonumber \n\\end{eqnarray}\nIn the next subsection, we will consider specializations \nof this system of NLIE (\\ref{nlie2}). \n\\subsection{The nonlinear integral equations for $\\xi=1$}\nLet us consider the NLIE (\\ref{nlie2}) for $\\xi=1$ and $m=1$. 
\nTaking note of the fact that ${\\widetilde T}^{(a)}_{0}(v)=1$ (cf. (\\ref{a0})), \nwe can drop the first terms in the two brackets $\\{\\cdots \\}$ in (\\ref{nlie2}) \nsince they have no poles at $y=0$.\nThen the NLIE (\\ref{nlie2}) reduce to the following NLIE on \n${\\mathcal T}^{(a)}_{1}(v)=\\lim_{N \\to \\infty}\\widetilde{T}^{(a)}_{1}(v)$ \n after the Trotter limit $N \\to \\infty $ with $u=-\\frac{J \\sinh \\eta }{\\eta N T}$.\n\\begin{eqnarray}\n{\\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1} \n&+&\n\\oint_{C^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y+\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y+\\frac{i a}{2})}\n {\\tan \\eta (v-y-\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y+\\frac{i(a-1)}{2})}\n \\nonumber \\\\\n&+&\n\\oint_{\\overline{C}^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y-\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y-\\frac{i a}{2})}\n {\\tan \\eta (v-y+\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y-\\frac{i(a-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{120pt} \n {\\rm for} \\quad a \\in {\\mathbb Z}_{\\ge 1},\n \\label{nlie4}\n\\end{eqnarray}\nwhere \nthe contour $C^{(a)}_{1}$ (resp. $\\overline{C}^{(a)}_{1}$) \nis a counterclockwise closed loop around $y=0$ (resp. $y=0$) \nwhich satisfies the condition \n$y \\ne v-\\frac{a+1}{2}i+\\frac{\\pi n}{\\eta}$ \n(resp. $y \\ne v+\\frac{a+1}{2}i+\\frac{\\pi n}{\\eta}$) and \ndoes not surround \n$z^{(a)}_{1}-\\frac{a-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$-(a+1)i\n+\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$ \n(resp. \n$z^{(a)}_{1}+\\frac{a-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$(a+1)i +\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$); \n($n \\in \\mathbb{Z}$, $k \\in \\mathbb{Z}-\\{0\\}$). \nHere we put the zeros of $\\mathcal{T}^{(a)}_{1}(v)$ as $\\{ z^{(a)}_{1} \\} $: \n$\\mathcal{T}^{(a)}_{1}(z^{(a)}_{1})=0$. \n$\\mathcal{T}^{(0)}_{1}(v)$ is a known function:\n\\begin{eqnarray} \n\\mathcal{T}^{(0)}_{1}(v)=\n\\lim_{N \\to \\infty} \\widetilde{T}^{(0)}_{1}(v)\n=\\exp \\left(\\frac{2J (\\sinh \\eta)^{2} }\n{T(\\cosh \\eta -\\cos (2\\eta v))}\\right).\n\\end{eqnarray}\nNote that (\\ref{nlie4}) are an infinite number of coupled NLIE \nif $ s \\in {\\mathbb Z}_{\\ge 0} $. \nThis situation is quite different from the $U_{q}(\\widehat{sl}(r+1))$\n case \\cite{TT05,T03,TSK01}. \nHowever, these NLIE are not independent, so \nwe will take the first $r+s+1$ of them ((\\ref{nlie4}) for $a \\in \\{1,2,\\dots, r+s+1 \\}$). 
\nThe NLIE for $a=r+s+1$ contains $\\mathcal{T}^{(r+s+2)}_{1}(v)$, then we \nwill eliminate this by using the relation (\\ref{a+r+s}), \nwhere $W^{(a)}_{m}(v)=1$ for $\\xi=1$.\n\\begin{eqnarray}\n&& {\\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1} \n+\n\\oint_{C^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y+\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y+\\frac{i a}{2})}\n {\\tan \\eta (v-y-\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y+\\frac{i(a-1)}{2})}\n \\nonumber \\\\\n&& \\hspace{40pt} +\n\\oint_{\\overline{C}^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y-\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y-\\frac{i a}{2})}\n {\\tan \\eta (v-y+\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y-\\frac{i(a-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{70pt} \n {\\rm for} \\quad a \\in \\{1,2,\\dots r+s \\},\n \\label{nlie-general} \\\\\n && {\\mathcal T}^{(r+s+1)}_{1}(v)=Q^{(r+s+1)}_{1}\n \\nonumber \\\\ \n&&\\hspace{20pt} +\n\\oint_{C^{(r+s+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r+s)}_{1}(y+\\frac{i (r+s+1)}{2}) \n \\mathcal{F}(y+\\frac{i (r+s+1)}{2})}\n {\\tan \\eta (v-y-\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(r+s+1)}_{1}(y+\\frac{i(r+s)}{2})}\n \\nonumber \\\\\n&& \\hspace{20pt}+\n\\oint_{\\overline{C}^{(r+s+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r+s)}_{1}(y-\\frac{i (r+s+1)}{2}) \n \\mathcal{F}(y-\\frac{i (r+s+1)}{2})}\n {\\tan \\eta (v-y+\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(r+s+1)}_{1}(y-\\frac{i(r+s)}{2})} ,\n\\label{nlie-generalb}\n \\\\\n&& \\hspace{20pt} \n\\mathcal{F}(v)=\\lim_{N \\to \\infty }\\widetilde{T}^{(r+s+2)}_{1}(v)=\n\\frac{\nA_{1}(v)-\n\\zeta \nA_{2}(v)\n}\n{(-1)^{s}A_{3}(v)+(-1)^{s} \n\\zeta \nA_{4}(v)},\n\\label{det-hashi}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& A_{1}(v)=\\det _{1\\le j,k \\le s+2}\n\\left(f_{j,k}\n\\left(\nv-\\frac{j+k-s-3}{2}i\n\\right) \n\\right) \\\\\n&& \\quad f_{j,k}(v)=\\mathcal{T}^{(r+1+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (s+2,1), \n\\quad f_{s+2,1}(v)=0, \\nonumber \\\\\n&& A_{2}(v)=\n\\det _{1\\le j,k \\le s+1}\n\\left(g_{j,k}\n\\left(\nv-\\frac{j+k-s-2}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad g_{j,k}(v)=\\mathcal{T}^{(r+2+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (s+1,1), \n\\quad g_{s+1,1}(v)=0, \\nonumber \\\\ \n&& A_{3}(v)=\\det _{1\\le j,k \\le s+1}\n\\left(\\mathcal{T}^{(r+j-k)}_{1}\n\\left(\nv-\\frac{j+k-2-s}{2}i\n\\right) \n\\right), \\\\\n&& A_{4}(v)=\n\\det _{1\\le j,k \\le s}\n\\left(\\mathcal{T}^{(r+j-k+1)}_{1}\n\\left(\nv-\\frac{j+k-s-1}{2}i\n\\right) \n\\right) \n.\n\\end{eqnarray}\nIf $s=-1$, then $A_{1}(v)=A_{4}(v)=0$ and \n$A_{2}(v)=A_{3}(v)=1$, and consequently \n(\\ref{det-hashi}) reduces to \n${\\mathcal F}(v)=\\mathcal{T}^{(r+1)}_{1}(v)=Q^{(r+1)}_{1}=\n\\zeta =\ne^{\\frac{\\mu_{1}+\\cdots +\\mu_{r+1}}{T}}$, where \nthe determinants should be interpreted as \n$\\det_{1\\le j,k \\le 0} (\\cdots )=1$, $\\det_{1\\le j,k \\le -1} (\\cdots )=0$. 
Thus \n(\\ref{nlie-general}) and (\\ref{nlie-generalb}) \nreduce to the NLIE for $U_{q}(\\widehat{sl}(r+1))$ in \\cite{TT05}.\nIn particular for $s=0$ ($U_{q}(\\widehat{sl}(r+1|1))$ case, we can use \n(\\ref{sol}): \n\\begin{eqnarray}\n&& {\\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1} \n+\n\\oint_{C^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y+\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y+\\frac{i a}{2})}\n {\\tan \\eta (v-y-\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y+\\frac{i(a-1)}{2})}\n \\nonumber \\\\\n&& \\hspace{76pt} +\n\\oint_{\\overline{C}^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y-\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y-\\frac{i a}{2})}\n {\\tan \\eta (v-y+\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y-\\frac{i(a-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{100pt} \n {\\rm for} \\quad a \\in \\{1,2,\\dots r \\},\n \\label{nlie-s=0} \\\\\n&& {\\mathcal T}^{(r+1)}_{1}(v)=Q^{(r+1)}_{1} \n \\nonumber \\\\ \n&& \\hspace{10pt}+\n\\oint_{C^{(r+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r)}_{1}(y+\\frac{i (r+1)}{2}) \n \\mathcal{T}^{(r+1)}_{1}(y+\\frac{i(r+2)}{2})}\n {\\tan \\eta (v-y-\\frac{i(r+2)}{2})\n (\n \\zeta \n+\\mathcal{T}^{(r)}_{1}(y+\\frac{i(r+1)}{2}))}\n \\nonumber \\\\\n&& \\hspace{10pt}+\n\\oint_{\\overline{C}^{(r+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r)}_{1}(y-\\frac{i (r+1)}{2}) \n \\mathcal{T}^{(r+1)}_{1}(y-\\frac{i(r+2)}{2})}\n {\\tan \\eta (v-y+\\frac{i(r+2)}{2})\n (\n \\zeta \n+\\mathcal{T}^{(r)}_{1}(y-\\frac{i(r+1)}{2}))}.\n \\nonumber \\\\\n && \\label{nlie-s=0b}\n\\end{eqnarray}\nThe free energy per site is given by a solution of these \nNLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b})\n\\begin{eqnarray}\nf=J \\cosh \\eta -T \\log \\mathcal{T}^{(1)}_{1}(0).\n \\label{free-en}\n\\end{eqnarray}\nIn these NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}), \nthe number of unknown functions and equations is \n$r+s+1$, which contrasts with TBA equations \\cite{Sch87,Sch92,EK94,JKS98,Sa99}.\n\\subsection{The nonlinear integral equations for $\\xi=-1$}\nNext, \nlet us consider the NLIE (\\ref{nlie2}) for $\\xi=-1$ and $a=1$. 
\nTaking note of the fact that ${\\widetilde T}^{(0)}_{m}(v)=1$ (cf. (\\ref{0m})), \nwe can drop the second terms in the two brackets $\\{\\cdots \\}$ in (\\ref{nlie2}) \nsince they have no poles at $y=0$.\nThen the NLIE (\\ref{nlie2}) reduce to the following NLIE on \n${\\mathcal T}^{(1)}_{m}(v)=\\lim_{N \\to \\infty}\\widetilde{T}^{(1)}_{m}(v)$ \n after the Trotter limit $N \\to \\infty $ with $u=-\\frac{J \\sinh \\eta }{\\eta N T}$.\n\\begin{eqnarray}\n{\\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m} \n&+&\n\\oint_{C^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y-\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y-\\frac{i m}{2})}\n {\\tan \\eta (v-y+\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y-\\frac{i(m-1)}{2})}\n \\nonumber \\\\\n&+&\n\\oint_{\\overline{C}^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y+\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y+\\frac{i m}{2})}\n {\\tan \\eta (v-y-\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y+\\frac{i(m-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{70pt} \n {\\rm for} \\quad m \\in {\\mathbb Z}_{\\ge 1},\n \\label{infinitenlie-xi=-1}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray} \n\\mathcal{T}^{(1)}_{0}(v)=\n\\lim_{N \\to \\infty} \\widetilde{T}^{(1)}_{0}(v)\n=\\exp \\left(-\\frac{2J (\\sinh \\eta)^{2} }\n{T(\\cosh \\eta -\\cos (2\\eta v))}\\right),\n\\end{eqnarray}\nand the contour $C^{(1)}_{m}$ (resp. $\\overline{C}^{(1)}_{m}$) \nis a counterclockwise closed loop around $y=0$ (resp. $y=0$) \nwhich satisfies the condition \n$y \\ne v+\\frac{m+1}{2}i+\\frac{\\pi n}{\\eta}$ \n(resp. $y \\ne v-\\frac{m+1}{2}i+\\frac{\\pi n}{\\eta}$) and \ndoes not surround \n$z^{(1)}_{m}+\\frac{m-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$(1+m)i\n+\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$ \n(resp. \n$z^{(1)}_{m}-\\frac{m-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$-(1+m)i +\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$) \n ($n \\in \\mathbb{Z}$, $k \\in \\mathbb{Z}-\\{0\\}$). \nHere $\\{z^{(1)}_{m}\\}$ are zeros of ${\\mathcal T}^{(1)}_{m}(v)$: \n${\\mathcal T}^{(1)}_{m}(z^{(1)}_{m})=0$. \nThese are an infinite number of coupled NLIE. \nWe can reduce them as in the $\\xi=1$ case. 
\nBy using (\\ref{a+r+s-b}) in the limit $N \\to \\infty$,\n we can reduce (\\ref{infinitenlie-xi=-1}) \nas follows, \nwhere $Z^{(a)}_{m}(v)=1$ for $\\xi=-1$.\n\\begin{eqnarray}\n{\\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m} \n&+&\n\\oint_{C^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y-\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y-\\frac{i m}{2})}\n {\\tan \\eta (v-y+\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y-\\frac{i(m-1)}{2})}\n \\nonumber \\\\\n&+&\n\\oint_{\\overline{C}^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y+\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y+\\frac{i m}{2})}\n {\\tan \\eta (v-y-\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y+\\frac{i(m-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{70pt} \n {\\rm for} \\quad m \\in \\{1,2,\\dots r+s \\},\n \\label{nlie-xi=-1} \\\\\n{\\mathcal T}^{(1)}_{r+s+1}(v)=Q^{(1)}_{r+s+1} \n&+&\n\\oint_{C^{(1)}_{r+s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{r+s}(y-\\frac{i (r+s+1)}{2}) \n \\mathcal{G}(y-\\frac{i (r+s+1)}{2})}\n {\\tan \\eta (v-y+\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(1)}_{r+s+1}(y-\\frac{i(r+s)}{2})}\n \\nonumber \\\\\n&& \\hspace{-70pt}+\n\\oint_{\\overline{C}^{(1)}_{r+s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{r+s}(y+\\frac{i (r+s+1)}{2}) \n \\mathcal{G}(y+\\frac{i (r+s+1)}{2})}\n {\\tan \\eta (v-y-\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(1)}_{r+s+1}(y+\\frac{i(r+s)}{2})} ,\n \\label{nlie-xi=-1b}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\mathcal{G}(v)=\n\\lim_{N \\to \\infty}\n\\widetilde{T}^{(1)}_{r+s+2}(v)=\n\\frac{\n\\zeta \nA_{5}(v)-A_{6}(v)\n}\n{(-1)^{r}\n\\zeta \nA_{7}(v)+(-1)^{r} A_{8}(v)},\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& A_{5}(v)=\\det _{1\\le j,k \\le r+2}\n\\left( h_{j,k}\n\\left(\nv-\\frac{r+3-j-k}{2}i\n\\right) \n\\right) \\\\\n&& \\quad h_{j,k}(v)={\\mathcal T}^{(1)}_{s+1+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (2+r,1), \n\\quad h_{r+2,1}(v)=0, \\nonumber \\\\\n&& A_{6}(v)=\n\\det _{1\\le j,k \\le r+1}\n\\left( b_{j,k}\n\\left(\nv-\\frac{r+2-j-k}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad b_{j,k}(v)={\\mathcal T}^{(1)}_{s+2+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (r+1,1), \n\\quad b_{r+1,1}(v)=0, \\nonumber \\\\ \n&& A_{7}(v)=\\det _{1\\le j,k \\le r+1}\n\\left({\\mathcal T}^{(1)}_{s+j-k}\n\\left(\nv-\\frac{r+2-j-k}{2}i\n\\right) \n\\right), \\\\\n&&A_{8}(v)=\n\\det _{1\\le j,k \\le r}\n\\left({\\mathcal T}^{(1)}_{s+1+j-k}\n\\left(\nv-\\frac{r+1-j-k}{2}i\n\\right) \n\\right) \n,\n\\end{eqnarray}\nwhere ${\\mathcal T}^{(1)}_{m}(v)=0$ for $m<0 $.\n\nIn particular for $r=0$ ($U_{q}(\\widehat{sl}(1|s+1))$ case, we can use \n(\\ref{sol}): \n\\begin{eqnarray}\n&& {\\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m} +\n\\oint_{C^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y-\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y-\\frac{i m}{2})}\n {\\tan \\eta (v-y+\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y-\\frac{i(m-1)}{2})}\n \\nonumber \\\\\n&& \\hspace{76pt} +\n\\oint_{\\overline{C}^{(1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y+\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y+\\frac{i m}{2})}\n {\\tan \\eta (v-y-\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y+\\frac{i(m-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{100pt} \n {\\rm for} \\quad m \\in \\{1,2,\\dots s \\},\n \\label{nlie-r=0} \\\\\n&& {\\mathcal T}^{(1)}_{s+1}(v)=Q^{(1)}_{s+1} \n\\nonumber \\\\\n&& \\hspace{8pt} 
+\n\\oint_{C^{(1)}_{s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{s}(y-\\frac{i (s+1)}{2}) \n \\mathcal{T}^{(1)}_{s+1}(y-\\frac{i(s+2)}{2})}\n {\\tan \\eta (v-y+\\frac{i(s+2)}{2})\n (\n \\zeta^{-1}\n+\\mathcal{T}^{(1)}_{s}(y-\\frac{i(s+1)}{2}))}\n \\nonumber \\\\\n&& \\hspace{8pt}+\n\\oint_{\\overline{C}^{(1)}_{s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{s}(y+\\frac{i (s+1)}{2}) \n \\mathcal{T}^{(1)}_{s+1}(y+\\frac{i(s+2)}{2})}\n {\\tan \\eta (v-y-\\frac{i(s+2)}{2})\n (\n \\zeta^{-1}\n+\\mathcal{T}^{(1)}_{s}(y+\\frac{i(s+1)}{2}))}. \n\\nonumber \\\\ \n\\label{nlie-r=0b}\n\\end{eqnarray}\nThe free energy per site is given by a solution of these \nNLIE (\\ref{nlie-xi=-1})-(\\ref{nlie-r=0b}):\n\\begin{eqnarray}\nf=-J \\cosh \\eta -T \\log \\mathcal{T}^{(1)}_{1}(0).\n \\label{free-en2}\n\\end{eqnarray}\nIn some sense, these NLIE are \\symbol{\"60}dual' to the ones in the previous section. \nThe NLIE (\\ref{nlie-xi=-1})-(\\ref{nlie-r=0b}) have only $r+s+1$ unknown functions. \nThese NLIE have never been considered before even for the $U_{q}(\\widehat{sl}(2))$ case. \n\\section{High temperature expansions} \nIn this section, we will calculate the high temperature \nexpansion of the free energy from our new NLIE. \nFor large $T\/|J|$, we assume the following expansion:\n\\begin{eqnarray}\n&&\\mathcal{T}^{(a)}_{1}(v)=\n \\exp \\left(\\sum_{n=0}^{{\\mathrm deg}}b_{n}^{(a)}(v)(\\frac{J}{T})^{n} \n+O((\\frac{J}{T})^{{\\mathrm deg}+1}) \\right)\n \\nonumber \n\\\\\n&& =Q^{(a)}_{1}\\Biggl\\{ 1+b^{(a)}_{1}(v)\\frac{J}{T}+\n\\left(b^{(a)}_{2}(v)+\\frac{(b^{(a)}_{1}(v))^2}{2}\\right)(\\frac{J}{T})^2\n+ \\label{hte-ta} \\\\\n&& \\left(b^{(a)}_{3}(v)+b^{(a)}_{2}(v)b^{(a)}_{1}(v)+\n\\frac{(b^{(a)}_{1}(v))^3}{6}\\right)\n(\\frac{J}{T})^3 +\\cdots \\Biggr\\}+O((\\frac{J}{T})^{{\\mathrm deg}+1}),\n\\nonumber \n\\end{eqnarray}\nwhere $b_{0}^{(a)}(v)=\\log Q^{(a)}_{1}$. \nHere we do not expand $\\{Q^{(b)}_{1}\\}_{b \\ge 1}$ with respect to $\\frac{J}{T}$. \nThus the coefficients $\\{b^{(a)}_{n}(v) \\}$ \nthemselves depend on $\\frac{1}{T}$.\nIn this sense, our high temperature expansion formula \n is different from the ordinary one. \nSubstituting (\\ref{hte-ta}) into some of the NLIE \n(\\ref{nlie4})-(\\ref{nlie-s=0b}), \nwe can calculate the coefficients $\\{b^{(a)}_{n}(v) \\}$ up to the order of $n={\\mathrm deg}$. \nNote that we only need $\\{b^{(1)}_{n}(0) \\}$ to calculate the free energy (\\ref{free-en}). \nTaking note of this fact, \nwe first use\n\\footnote{As for numerical calculations of the free energy, \nwe expect that the reduced NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}) \nare easier to use than the non-reduced NLIE (\\ref{nlie4}).}\n a subset (NLIE for $a \\in \\{1,2,\\dots, {\\mathrm deg} \\}$) \nof the non-reduced NLIE (\\ref{nlie4}) \nrather than the reduced NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}). \nWe have observed that $b^{(1)}_{n}(0)$ can be expressed in terms of \n\\footnote{For the $s=-1$ case, \nthey are \n$Q^{(1)}_{1},Q^{(2)}_{1}, \\dots ,Q^{(d)}_{1}$: \n$d=\\min (n+1,r+1)$ since \n$Q^{(a)}_{1}=0$ if $a \\ge r+2$.}\n$Q^{(1)}_{1},Q^{(2)}_{1}, \\dots ,Q^{(n+1)}_{1}$. \nWe have calculated the coefficients by using Mathematica. \nAs examples, we shall enumerate the coefficients $\\{b^{(1)}_{n}(0) \\}$ up to the \norder of $5$, where we put $\\Delta=\\cosh \\eta $. 
\n\\begin{eqnarray}\n&& \\hspace{-20pt}\nb^{(1)}_{1}(0)= \\frac{2 \\Delta Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}, \n \\label{coe1} \\\\\n&& \\hspace{-20pt}\nb^{(1)}_{2}(0)=-\\frac{6 \\Delta^2 {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^4}+\\frac{\\left(2 \\Delta^2+1\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}+\\frac{\\left(4 \\Delta^2-1\\right) Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3},\n \\label{coe2} \\\\ \n&& \\hspace{-20pt}\nb^{(1)}_{3}(0)=\\frac{80 {Q^{(2)}_{1}}^3 \\Delta^3}{3\n {Q^{(1)}_{1}}^6}\n+\\frac{8 Q^{(3)}_{1} \\Delta^3}{{Q^{(1)}_{1}}^3}\n+\\frac{\\left(\\frac{4 \\Delta^3}{3}+2 \\Delta\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}\n\\nonumber \\\\\n&& \n\\hspace{-15pt}\n+\\frac{\\left(8 \\Delta-32 \\Delta^3\\right) Q^{(2)}_{1} Q^{(3)}_{1}}{{Q^{(1)}_{1}}^5}\n+\\frac{\\left(-12 \\Delta^3-6\n \\Delta\\right) {Q^{(2)}_{1}}^2\n+\\left(8 \\Delta^3-4 \\Delta\\right) Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4},\n \\label{coe3} \\\\\n&&\\hspace{-20pt}\n b^{(1)}_{4}(0)=-\\frac{140 \\Delta^4\n {Q^{(2)}_{1}}^4}{{Q^{(1)}_{1}}^8}\n+\\frac{\\left(240 \\Delta^4-60 \\Delta^2\\right) Q^{(3)}_{1}\n {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^7}\n\\nonumber \\\\\n&& \n+\\frac{\\left(\\frac{2 \\Delta^4}{3}+2 \\Delta^2+\\frac{1}{4}\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}\n+\\frac{\\left(\\frac{28 \\Delta^4}{3}+\\frac{14 \\Delta^2}{3}-\\frac{1}{4}\\right)\n Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-14 \\Delta^4-\\frac{56 \\Delta^2}{3}-\\frac{3}{2}\\right) \n {Q^{(2)}_{1}}^2+\\left(24 \\Delta^4-8\n \\Delta^2-1\\right) Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4}\n\\nonumber \\\\\n&& \n+\\frac{\\left(80 \\Delta^4+40 \\Delta^2\\right) {Q^{(2)}_{1}}^3+\\left(40 \\Delta^2-80 \\Delta^4\\right)\n Q^{(4)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^6}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-40 \\Delta^4+20 \\Delta^2-\\frac{5}{2}\\right) {Q^{(3)}_{1}}^2}{{Q^{(1)}_{1}}^6}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-96 \\Delta^4-8\n \\Delta^2+4\\right) Q^{(2)}_{1} Q^{(3)}_{1}\n +\\left(16 \\Delta^4-12 \\Delta^2+1\\right) Q^{(5)}_{1}}{{Q^{(1)}_{1}}^5},\n\\label{coe4} \n\\end{eqnarray}\n\\begin{eqnarray}\n&& \\hspace{-15pt} b^{(1)}_{5}(0)=\\frac{4032 \\Delta^5\n {Q^{(2)}_{1}}^5}{5 {Q^{(1)}_{1}}^{10}}\n +\\frac{\\left(448 \\Delta^3-1792 \\Delta^5\\right) Q^{(3)}_{1}\n {Q^{(2)}_{1}}^3}{{Q^{(1)}_{1}}^9}\n\\nonumber \\\\\n&& \n +\\frac{\\left(\\frac{4 \\Delta^5}{15}+\\frac{4 \\Delta^3}{3}+\\frac{\\Delta}{2}\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}\n+\\frac{\\left(8 \\Delta^5+10 \\Delta^3+\\frac{\\Delta}{2}\\right) Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3}\n\\nonumber \\\\\n&& \n +\\frac{\\left(-12\n \\Delta^5-30 \\Delta^3-8 \\Delta\\right) {Q^{(2)}_{1}}^2+\\left(40 \\Delta^5-6 \\Delta\\right)\n Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-560 \\Delta^5-280\n \\Delta^3\\right) {Q^{(2)}_{1}}^4+\\left(672 \\Delta^5-336 \\Delta^3\\right) \n Q^{(4)}_{1} {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^8}\n \\nonumber \\\\\n&& \n+\\frac{\\left(672 \\Delta^5-336 \\Delta^3+42 \\Delta\\right)\n {Q^{(3)}_{1}}^2 Q^{(2)}_{1}}{{Q^{(1)}_{1}}^8}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-160 \\Delta^5-100 \\Delta^3+11 \\Delta\\right) Q^{(2)}_{1} Q^{(3)}_{1}\n +\\left(64 \\Delta^5-40\n \\Delta^3\\right) Q^{(5)}_{1}}{{Q^{(1)}_{1}}^5}\n\\nonumber \\\\\n&& \n\\hspace{-10pt}\n+\\frac{\\left(960 \\Delta^5+120 \\Delta^3-60 \\Delta\\right) Q^{(3)}_{1} {Q^{(2)}_{1}}^2+\\left(-192\n \\Delta^5+144 \\Delta^3-12 \\Delta\\right) Q^{(5)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^7}\n\\nonumber \\\\\n&&\n+\\frac{\\left(-192 \\Delta^5+144 \\Delta^3-24 \\Delta\\right) Q^{(3)}_{1}\n Q^{(4)}_{1}}{{Q^{(1)}_{1}}^7}\n\\nonumber \\\\\n&& 
\n+\\frac{\\left(\\frac{400 \\Delta^5}{3}+\\frac{500 \\Delta^3}{3}+20 \\Delta\\right) {Q^{(2)}_{1}}^3+\\left(-320\n \\Delta^5+80 \\Delta^3+30 \\Delta\\right) Q^{(4)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^6}\n\\nonumber \\\\\n&& \n+\\frac{\\left(40 \\Delta^3-160 \\Delta^5\\right) {Q^{(3)}_{1}}^2+\\left(32 \\Delta^5-32 \\Delta^3+6\n \\Delta\\right) Q^{(6)}_{1}}{{Q^{(1)}_{1}}^6}.\n \\label{coe5} \n\\end{eqnarray}\nIn deriving these coefficients (\\ref{coe1})-(\\ref{coe5}), we \ndid not assume (\\ref{limit}). Of course, when one calculates the free energy of the model, \none must assume (\\ref{limit}) and (\\ref{para}). \nWe can also rewrite the coefficient $b^{(1)}_{n}(0)$ in terms of \n$Q^{(1)}_{1},Q^{(2)}_{1},\\dots,Q^{(d)}_{1}$ and $\\zeta$ \n\\footnote{\n$Q^{(r+1)}_{1}=\\zeta$ if $s=-1$.}\n($d=\\min (n+1,r+s+1)$), since $Q^{(a)}_{1}$ for $a \\in {\\mathbb Z}_{\\ge r+s+2}$ can \nbe written in terms of $Q^{(1)}_{1},Q^{(2)}_{1},\\dots,Q^{(r+s+1)}_{1}$ and $\\zeta$ \ndue to the relation (\\ref{a+r+s}) in the limit $v \\to i\\eta^{-1} \\infty $ \n(see also an example: (\\ref{Q11-sl21})-(\\ref{Qa1-sl21})). \nIf $b^{(1)}_{n}(0)$ is written in terms of \n$Q^{(1)}_{1},Q^{(2)}_{1},\\dots,Q^{(d)}_{1}$ and $\\zeta$ \n($d=\\min (n+1,r+s+1)$), it should be the coefficient \nof the high temperature expansion directly derived from \nthe reduced NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}). \nOf course, these two expressions of the coefficient $b^{(1)}_{n}(0)$ are \nequivalent under the relations (\\ref{limit}) and (\\ref{para}). \n \nFor fixed values of the parameters, we have calculated \nthe high temperature expansion to much higher order (see the appendix). \nWe have plotted the high temperature expansion \nof the specific heat (Figures \\ref{specific2}-\\ref{specific4}).\nHere we have adopted the Pad\\'{e} approximation method. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\textwidth]\n{specific2.eps}\n\\end{center}\n\\caption{Temperature dependence of the high temperature \nexpansion of the specific heat $C$ \nfor the rank 2 case ($r+s=1$, $J=1$, $q=1$, \n$\\mu_{a}=0$ ($a \\in B$)). We have plotted \nplain series (dotted lines) of $C$ from the appendix and their Pad\\'{e} approximations \nof order [$n$,$d$] (numerator: a degree $n$ polynomial of $1\/T$; \ndenominator: a degree $d$ polynomial of $1\/T$) \nby using Mathematica: \neach line denotes $C$ for \n$sl(3|0)$ with [20,20] (thin), $sl(2|1)$ with [17,17] (medium),\n$sl(1|2)$ with [17,17] (thick), and $sl(0|3)$ with [20,20] (dashed thick), respectively. \nWe have also plotted (thick dots) \na result of a numerical calculation from another NLIE by J\\\"uttner \nand Kl\\\"umper \\cite{JK97} for the $sl(2|1)$ case. \n$C$ for the $sl(3|0)$ case was also \nconsidered in \\cite{FK02,FK99}.}\n\\label{specific2}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\textwidth]\n{specific3.eps}\n\\end{center}\n\\caption{Temperature dependence of the high temperature \nexpansion of the specific heat $C$ \nfor the rank 3 case ($r+s=2$, $J=1$, $q=1$, \n$\\mu_{a}=0$ ($a \\in B$)). We have plotted \nplain series (dotted lines) of $C$ from the appendix and their Pad\\'{e} approximations \nof order [$n$,$d$] (numerator: a degree $n$ polynomial of $1\/T$; \ndenominator: a degree $d$ polynomial of $1\/T$): \neach line denotes $C$ for \n$sl(4|0)$ with [19,20] (thin), $sl(3|1)$ with [17,17] (medium),\n$sl(2|2)$ with [16,16] (thick), $sl(1|3)$ with [17,17] \n(dashed medium), and $sl(0|4)$ with [18,21] (dashed thick), respectively.
\n$C$ for the $sl(4|0)$ case was also \nconsidered in \\cite{FK02}.}\n\\label{specific3}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\textwidth]\n{specific4.eps}\n\\end{center}\n\\caption{Temperature dependence of the high temperature \nexpansion of the specific heat $C$ \nfor the rank 4 case ($r+s=3$, $J=1$, $q=1$, \n$\\mu_{a}=0$ ($a \\in B$)). We have plotted \nplain series (dotted lines) of $C$ from the appendix and their Pad\\'{e} approximations \nof order [$n$,$d$] (numerator: a degree $n$ polynomial of $1\/T$; \ndenominator: a degree $d$ polynomial of $1\/T$): \neach line denotes $C$ for \n$sl(5|0)$ with [17,21] (thin), $sl(4|1)$ with [16,18] (medium),\n$sl(3|2)$ with [17,17] (thick), \n$sl(2|3)$ with [16,17] (dashed thin), $sl(1|4)$ with [16,18] \n(dashed medium), and $sl(0|5)$ with [17,21] (dashed thick), respectively.}\n\\label{specific4}\n\\end{figure}\nThere is a duality among the specific heats with respect to the interchange of \n$r$ and $s$. \nIn particular, the $r=s$ case is self-dual, and then \nthe specific heat becomes an even function of $T$ (see (\\ref{hte-sl22})). \nIn Figure \\ref{specific2}, we have also plotted a result of \na numerical calculation from another NLIE \\cite{JK97}.\nWe find good agreement \nbetween our result and theirs except for the very low temperature region. \n \nWe can also calculate the high temperature expansion from the NLIE \nfor $\\xi=-1$ in subsection 3.2. \nSimilarly to the $\\xi=1$ case, we assume \n\\begin{eqnarray}\n&&\\mathcal{T}^{(1)}_{m}(v)=\n \\exp \\left(\\sum_{n=0}^{{\\mathrm deg}}\\widehat{b}_{m,n}(v)(\\frac{J}{T})^{n} \n+O((\\frac{J}{T})^{{\\mathrm deg}+1}) \\right) ,\n\\label{hte-tm}\n\\end{eqnarray}\nwhere $\\widehat{b}_{m,0}(v)=\\log Q^{(1)}_{m}$. \nHere we do not expand $\\{ Q^{(1)}_{k} \\}_{k \\ge 1}$ with respect to $\\frac{J}{T}$. \n(\\ref{hte-ta}) for $a=1$ should coincide with \n(\\ref{hte-tm}) for $m=1$ up to a factor from \nthe normalization function (\\ref{normal}). \nThus we have \n\\begin{eqnarray}\nb^{(1)}_{n}(0)=\\widehat{b}_{1,n}(0)+2\\Delta \\delta_{n,1} .\n \\label{ty1}\n\\end{eqnarray}\nDue to the symmetry between the NLIE for $\\xi=1$ and the one for $\\xi=-1$, \nthe following relation holds:\n\\begin{eqnarray}\n\\widehat{b}_{1,n}(0)=(-1)^{n}b^{(1)}_{n}(0)|_{Q^{(a)}_{1} \\to Q^{(1)}_{a} \n \\ {\\rm for} \\ a \\ge 1}.\n \\label{ty2}\n\\end{eqnarray}\nFor example, (\\ref{ty1}) and (\\ref{ty2}) for $n=1$ \nand (\\ref{coe1}) reproduce \nthe $Q$-system (\\ref{Q-sys}) for $(a,m)=(1,1)$: \nthey give $2\\Delta Q^{(2)}_{1}\/{Q^{(1)}_{1}}^{2}\n=-2\\Delta Q^{(1)}_{2}\/{Q^{(1)}_{1}}^{2}+2\\Delta$, \nnamely ${Q^{(1)}_{1}}^{2}=Q^{(2)}_{1}+Q^{(1)}_{2}$. \nFrom the relations \n(\\ref{ty1}) and (\\ref{ty2}) for $n=2$ and (\\ref{coe2}), we obtain \nthe following identities among characters:\n\\begin{eqnarray} \n&& \\hspace{-40pt} \n-3 {Q^{(2)}_{1}}^{2}+Q^{(2)}_{1}{Q^{(1)}_{1}}^{2}+2 Q^{(3)}_{1}Q^{(1)}_{1}\n=-3 {Q^{(1)}_{2}}^{2}+Q^{(1)}_{2}{Q^{(1)}_{1}}^{2}+2 Q^{(1)}_{3}Q^{(1)}_{1}, \\\\\n&& \\hspace{-40pt}\n Q^{(2)}_{1}Q^{(1)}_{1}-Q^{(3)}_{1}=Q^{(1)}_{2}Q^{(1)}_{1}-Q^{(1)}_{3},\n\\end{eqnarray}\nwhere we have used the fact that $Q^{(a)}_{m}$ does not depend on $\\Delta$. \nThese relations can be proved from the \nrelations (\\ref{jacobi-trudi}), (\\ref{jacobi-trudi2}) and (\\ref{limit}).\n\nSome comments on the literature on the high temperature expansion \nare in order. \nThe high temperature expansion of the free energy was \ncalculated from Takahashi's NLIE for \nthe $XXX$-model up to the order of 100 \\cite{ShT02} and for \nthe $XXZ$-model up to the order of 99 \\cite{TT05}. \nAs for the higher rank or higher spin cases, there are some results \n\\cite{T02,T03,T04,TT05} from NLIE.
\nIn particular, our result on the $sl(r+1)$ Uimin-Sutherland model \nin \\cite{T03} was applied to spin ladder models \n\\cite{BGOSTF03,YRFC04,YRZ04,BGO04,BGOF04,BGOT05}, and \ngood agreement \nwas seen between theoretical results and \nexperimental data. \nWe note that \nthe coefficients (\\ref{coe1})-(\\ref{coe3}) coincide with eqs. \n(4.14)-(4.16) in \\cite{TT05}. \nNote, however, that the coefficients in our paper are more general than the ones in \n\\cite{TT05}, since the value of $Q^{(a)}_{1}$ (\\ref{limit}) \nwas restricted to the $s=-1$ case in \\cite{TT05}. \nThere are also several works on high temperature expansions by different methods \n(see, for example, \\cite{DV95,RST02,BEU00,FK02,F03}).\n\\section{Concluding remarks}\nIn this paper, we have derived NLIE which contain only $r+s+1$ unknown functions \nfor the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model. \nThe key is a duality for the auxiliary function (\\ref{dual}) \nand the quantum (supersymmetric) Jacobi-Trudi and Giambelli \nformulae (\\ref{jacobi-trudi}) and (\\ref{jacobi-trudi2}). \nAlthough we assumed that $q$ is generic, \nwe expect that our NLIE (at least the reduced ones \n(\\ref{nlie-general})-(\\ref{nlie-s=0b}), \n(\\ref{nlie-xi=-1})-(\\ref{nlie-r=0b})) will also be \nvalid even for the case where $q$ is a root of unity, \nas we will not need to take into account the truncation of the \n$T$-system. \nThe high temperature expansion of the free energy \nin terms of characters was calculated from our NLIE. \n\nThere are NLIE with a finite number of unknown functions \nfor algebras of arbitrary rank in a different context \\cite{Z98,DDT00}. \nThese NLIE are different from the Takahashi type. \nWhether one can generalize (or modify) their NLIE for the finite \ntemperature case \nis still not clear. \nA deeper understanding of this subject is desirable. \n\nThere is another kind of formulation of transfer matrices \nwhich is based on the graded formulation of the \nquantum inverse scattering method. \nIn this formulation, the row-to-row transfer matrix \nis defined as a supertrace: \n$\\widehat{t}(v)={\\mathrm str}_{0}(\\widehat{R}_{0L}(v)\n \\cdots \\widehat{R}_{02}(v)\\widehat{R}_{01}(v))$, where \nthe $R$-matrix is defined as $\\widehat{R}^{a_{1},b_{1}}_{a_{2},b_{2}}(v)=\n(-1)^{p(a_{1})p(b_{1})}\nR^{a_{1},b_{1}}_{a_{2},b_{2}}(v)$ and the graded tensor product is adopted. \nAs far as the free energy (in the thermodynamic limit) \nis concerned, we think that there is no difference \nbetween this graded formulation and the one we have adopted. \n\\section*{Acknowledgments}\nThe author would like to thank A. Kl\\\"umper and K. Sakai for \ncomments on a figure of specific heats. \nHe also thanks Y. Nagatani for a remark \non Mathematica programming. \n\\noindent\n\\renewcommand{\\theequation}{A.1.\\arabic{equation}}\n\\begin{landscape}\n\\section*{Appendix: The high temperature expansion of the specific heat}\nWe will list the high temperature expansion of the \nspecific heat $C_{sl(r+1|s+1)}$ for the $U_{q}(\\widehat{sl}(r+1|s+1))$ \nPerk-Schultz model at $q=1$, \n$\\mu_{a}=0$ ($a \\in B$). \nHere we put $t=\\frac{J}{T}$. \nIn this case, $Q^{(a)}_{1}$ (cf. (\\ref{limit})) becomes \n\\begin{eqnarray}\nQ^{(a)}_{1}=\\sum_{j=0}^{a}\\binom{r+1}{j}\\binom{a+s-j}{a-j}, \\label{Q-q=1}\n\\end{eqnarray}\nwhich is the dimension of the $a$-th anti-(super)symmetric tensor representation \nof $sl(r+1|s+1)$.
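\nAs a quick numerical illustration (our own sketch, not part of the Mathematica computation used in this paper), the sum (\\ref{Q-q=1}) can be evaluated directly; if one uses the generalized binomial coefficient, the purely bosonic case $s=-1$ (i.e. $sl(r+1|0)$) is covered as well.\n\\begin{verbatim}\n# Sketch evaluating (Q-q=1): Q^{(a)}_1 at q=1 as a sum of binomials.\n# sympy's binomial is the generalized one, so s = -1 (sl(r+1|0)) works too.\nfrom sympy import binomial\n\ndef Q1(a, r, s):\n    # dimension of the a-th anti-(super)symmetric tensor rep of sl(r+1|s+1)\n    return sum(binomial(r + 1, j) * binomial(a + s - j, a - j)\n               for j in range(a + 1))\n\nprint([Q1(a, 1, 0) for a in range(1, 5)])   # sl(2|1): [3, 4, 4, 4]\nprint([Q1(a, 2, -1) for a in range(1, 5)])  # sl(3|0): [3, 3, 1, 0]\n\\end{verbatim}\nFor $sl(2|1)$ the dimensions saturate at $4$ for $a \\ge 2$, while for $sl(3|0)$ one recovers the ordinary binomial coefficients $\\binom{3}{a}$, as expected.\n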
\nIf one substitutes (\\ref{Q-q=1}), $\\Delta=1$, and the values of $(r,s)$\n into (\\ref{coe1})-(\\ref{coe5}), \none can recover (\\ref{hte-sl30})-(\\ref{hte-sl32}) up to the order of 5 \nthrough $C=-T\\frac{\\partial^{2} f}{\\partial T^{2}}$. \n A formula for $r